\begin{document} \maketitle \blfootnote{\textup{2000} \textit{Mathematics Subject Classification}: \textup{11F75 (primary), 11F85 (secondary)}} \begin{abstract} In this paper, we present a generalisation of a theorem of David and Rob Pollack. In \cite{PP09}, they give a very general argument for lifting ordinary eigenclasses (with respect to a suitable operator) in the group cohomology of certain arithmetic groups. With slightly tighter conditions, we prove the same result for non-ordinary classes. Pollack and Pollack apply their results to the case of $p$-ordinary classes in the group cohomology of congruence subgroups for $\mathrm{SL}_3$, constructing explicit overconvergent classes in this setting. As an application of our results, we give an extension of their results to the case of non-critical slope classes in the same setting. \end{abstract} \section*{Introduction} \subsection*{Background} Modular symbols are cohomological objects that are powerful computational and theoretical tools in the study of automorphic forms. Classical modular symbols are elements in the cohomology of a locally symmetric space with coefficients in some polynomial space, and in many cases, there are ways of viewing such elements in the group cohomology of certain arithmetic subgroups. For example, to a modular form of weight $k$ and level $\Gamma_0(N)$, one can attach an element of the group cohomology $\mathrm{H}^1(\Gamma_0(N),V_{k-2}(\mathbb{C}))$, where $V_{k-2}(\mathbb{C})$ is the space of homogeneous polynomials in two variables over $\mathbb{C}$ of degree $k-2$. These cohomology groups are equipped with an action of the Hecke operators, and the association of a modular symbol to an automorphic form respects this action. $\\ \\$ In \cite{Ste94}, Glenn Stevens developed the theory of \emph{overconvergent} modular symbols by replacing the space of polynomials with a much larger space, that of \emph{$p$-adic distributions}. 
There is a surjective Hecke-equivariant map from this space to the space of classical modular symbols (with $p$-adic coefficients). As a map from an infinite-dimensional space to a finite-dimensional space, this `specialisation map' must necessarily have infinite-dimensional kernel, but in the same preprint, Stevens proved his \emph{control theorem}, which says that upon restriction to the `small slope eigenspaces', this specialisation map in fact becomes an isomorphism. This control theorem -- an analogue of Coleman's small slope classicality theorem -- has had important ramifications in number theory, being used to construct $p$-adic $L$-functions (see \cite{PS11} and \cite{PS12}) and Stark-Heegner points on elliptic curves (see \cite{Dar01} and \cite{DP06}). $\\ \\$ Such control theorems have now been proved in a variety of other cases, including -- but certainly not limited to -- for compactly supported cohomology classes attached to Hilbert modular forms by Daniel Barrera Salazar in \cite{Bar15}, for compactly supported cohomology classes attached to Bianchi modular forms in \cite{Wil17}, and for ordinary cohomology classes attached to automorphic forms for $\mathrm{SL}_3$ by David and Robert Pollack in \cite{PP09}. In the latter, Pollack and Pollack gave a very general argument for explicitly lifting group cohomology eigenclasses (of a suitable operator) in the \emph{ordinary} case, that is, when the corresponding eigenvalue is a $p$-adic unit. This general lifting theorem has been used in a variety of other settings, including in the work of Xevi Guitart and Marc Masdeu in the explicit computation of Darmon points (see \cite{GM14}). $\\ \\$ Whilst control theorems do exist in wide generality -- for example, Eric Urban has proved a control theorem for quite general reductive groups in \cite{Urb11} -- these theorems are rarely constructive when we pass beyond the ordinary case. 
In this note, we generalise the (constructive) lifting theorem of Pollack and Pollack to non-ordinary classes. To do this, we use an idea of Matthew Greenberg in \cite{Gre07}, which the author found invaluable in developing the theory of overconvergent modular symbols over imaginary quadratic fields. $\\ \\$ In the remainder of the paper, we give an application of this theorem. In particular, we give an extension of the results of Pollack and Pollack over $\mathrm{SL}_3$ to explicitly construct overconvergent eigenclasses in the non-critical slope case. There are subtleties in this situation that do not need to be considered in the ordinary case; in particular, whilst Pollack and Pollack lift with respect to the operator $U_p$ induced by the element \[\pi \defeq \pitotal,\] we instead consider the two elements \[\pi_1 \defeq \pione, \hspace{12pt}\pi_2 \defeq \pitwo,\] with $\pi_1\pi_2 = \pi$. These induce commuting operators $U_{p,1}$ and $U_{p,2}$ on the cohomology with $U_{p,1}U_{p,2} = U_p$. We then lift twice; once with respect to the operator $U_{p,1}$ to a module of `partially' overconvergent coefficients, then with respect to the operator $U_{p,2}$ to the module of fully overconvergent coefficients used by Pollack and Pollack. In each case, we get a notion of `non-critical slope', and by combining these two notions we get a larger range of `non-criticality' than if we had just considered the operator $U_p$. This is similar in spirit to the results of \cite{Wil17}, Section 6, where control theorems are proved for $\mathrm{GL}_2$ over an imaginary quadratic field in which the prime $p$ splits as $\mathfrak{p}\bar{\mathfrak{p}}$. This is done by lifting first to a module of half-overconvergent coefficients with respect to $U_{\mathfrak{p}}$, then to a module of fully overconvergent coefficients with respect to $U_{\bar{\mathfrak{p}}}$. 
$\\ \\$ We give a very brief summary of the results in the case of $\mathrm{SL}_3$. First, we summarise the set-up: \begin{mnotnum}\label{fullnot} \begin{itemize} \item[(i)] Let $\lambda = (k_1,k_2,0)$ be a dominant algebraic weight of the torus $T\subset \mathrm{GL}_3/\mathbb{Q}$, and let $\Gamma \subset \Gamma_0(p)$ be a congruence subgroup of $\mathrm{SL}_3$. \item[(ii)] Let $L/\mathbb{Q}_p$ be a finite extension with ring of integers $\mathcal{O}_L$. \item[(iii)] Let $V_\lambda(\mathcal{O}_L)$ be the (finite-dimensional) space of classical coefficients over $\mathcal{O}_L$, to be defined in Section \ref{classicalcoeffs}. \item[(iv)] Let $V_\lambda^\star$ denote $V_\lambda$ with a twisted action, as defined in Definition \ref{twisted}. \item[(v)] Let $\mathbb{D}_\lambda(\mathcal{O}_L)$ be the (infinite-dimensional) space of overconvergent coefficients over $\mathcal{O}_L$, to be defined in Section \ref{ovcgtcoeffs}. \item[(vi)] Let $\rho_\lambda:\mathrm{H}^r(\Gamma,\mathbb{D}_\lambda(\mathcal{O}_L)) \rightarrow \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L))$ be the specialisation map on the cohomology at $\lambda$, where $L_\lambda(\mathcal{O}_L) \defeq \mathrm{Im}(\mathbb{D}_\lambda(\mathcal{O}_L)) \subset V_\lambda^\star(L)$ is the image of specialisation on the coefficients, to be defined in Section \ref{specialisation}. 
\end{itemize} \end{mnotnum} Then, in Theorem \ref{controltheorem}, we prove: \begin{mthmnum}Suppose $\alpha_1,\alpha_2 \in \mathcal{O}_L$ with $v_p(\alpha_1)<k_1-k_2+1$ and $v_p(\alpha_2)<k_2+1.$ Then the restriction \[\rho_\lambda: \mathrm{H}^r(\Gamma,\mathbb{D}_\lambda(\mathcal{O}_L))^{U_{p,i}=\alpha_i} \isorightarrow \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L))^{U_{p,i}=\alpha_i}\] of the specialisation map to the simultaneous $\alpha_i$-eigenspaces of the $U_{p,i}$ operators is an isomorphism. \end{mthmnum} \begin{mfigure}{\emph{Graphic showing range of lifting for fixed $k_1 = k$ and varying $k_2$ (with dotted line $v_p(\alpha_1)+v_p(\alpha_2) = k+2$)}}{1.2}{GraphSL3small.pdf} \end{mfigure} \subsection*{Summary of argument} We give a brief summary of the argument we use to prove the general lifting theorem. The major component in the proof is showing that the specialisation map is surjective, in the process constructing an explicit lift of any element of the target space. Suppose we start with spaces $D$ and $V$, with actions of a group $\Gamma$ and an operator $U$, and suppose that $U$ also acts naturally on the group cohomology of these spaces. Suppose moreover that we have a surjection $\mathrm{pr} : D\rightarrow V$ that is equivariant with respect to the action of $\Gamma$ and $U$, inducing a map $\rho$ on the cohomology. We also assume that we can find a filtration $D \supset F^0D \supset F^1D \supset \cdots$ such that if we define $A^ND \defeq D/F^ND$, then we have $A^0D = V$. We also suppose that, among other conditions, we have $D \cong \lim_{\leftarrow}A^ND$. $\\ \\$ We then start with a $U$-eigenclass $\phi_0 \in \mathrm{H}^1(\Gamma,A^0D)$ with eigenvalue $\alpha$. Further assume that $\alpha$ is an algebraic integer (and hence can be thought of as living in the ring of integers of a finite extension of $\mathbb{Q}_p$). 
\begin{itemize} \item[(i)] First suppose that $\phi_0$ is \emph{ordinary at $p$}, that is, suppose $\alpha$ is a $p$-adic unit. Then we take a cocycle $\varphi_0$ representing $\phi_0$, and lift it to \emph{any} cochain $\widetilde{\varphi_1}: \Gamma \rightarrow D$. As $\alpha$ is a unit, we can apply the operator $\alpha^{-1}U$ to this cochain. The magic is that $\varphi_1 \defeq \widetilde{\varphi_1}|\alpha^{-1}U \pmod{F^1D}$ is an $A^1D$-valued cocycle that is independent of choices, and thus defines a canonical lift of $\phi_0$ to a $U$-eigensymbol $\phi_1 \in \mathrm{H}^1(\Gamma,A^1D)$. Continuing in this vein, we get compatible classes $\phi_N \in \mathrm{H}^1(\Gamma,A^ND)$ for each $N$, and thus an eigenclass in the inverse limit $\Phi \in \mathrm{H}^1(\Gamma,D)$ that maps to $\phi_0$ under $\rho$. \item[(ii)] For more general $\alpha$, we need a subtler argument. We would like to be able to apply the operator $\alpha^{-1}U$, but since $\alpha$ need not be a unit, we must strengthen our assumptions. In particular, we need the following: \begin{itemize} \item[(a)] A stronger condition on the filtration; namely, if $\mu\in F^ND,$ then $\mu|U\in \alpha F^{N+1}D.$ \item[(b)] An additional piece of data; namely, a $\Gamma$- and $U$-stable submodule $D^\alpha$ of $D$ such that if $\mu\in D^\alpha$, we have $\mu|U \in \alpha D$. \end{itemize} The benefit of this is that we can make sense of the operator $\alpha^{-1}U$ on cochains that have values in $D^\alpha$. We can run morally the same argument as above in this case. Unfortunately, the details of the argument become considerably more technical. \end{itemize} It is natural to ask when such conditions are satisfied. 
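The iteration in (i) and (ii) is concrete enough to run. As an illustration only, the following Python sketch models the situation at the level of modules rather than cohomology classes: $D$ is replaced by $\mathbb{Z}^M$, the filtration by vanishing of initial coordinates, and $U$ by a hypothetical operator rigged by hand to satisfy conditions (a) and (b) for a non-unit eigenvalue $\alpha$. None of these objects come from the paper itself; the point is only that the recursion `lift, apply $\alpha^{-1}U$, reduce' produces an integral eigenlift of $\phi_0$.

```python
from fractions import Fraction
import random

# Toy model of the lifting iteration.  D = Z^M (row vectors), with
# filtration F^N D = {x : x_i = 0 for i <= N}, so A^N D is the first
# N+1 coordinates and V = A^0 D is the first coordinate.  The operator
# U below is entirely made up: every entry is divisible by alpha (the
# role of condition (a) and the submodule D^alpha), and rows j >= 1
# strictly raise the coordinate index (condition (b)).
random.seed(0)
p, M = 3, 8
alpha = Fraction(p)                       # non-unit eigenvalue: v_p(alpha) = 1

U = [[Fraction(0)] * M for _ in range(M)]
U[0][0] = alpha                           # forces eigenvalue alpha on A^0 D
for i in range(1, M):
    U[0][i] = alpha * random.randint(-5, 5)
for j in range(1, M):
    for i in range(j + 1, M):             # rows j >= 1 raise the filtration degree
        U[j][i] = alpha * random.randint(-5, 5)

def act(x):                               # the right action x -> x|U
    return [sum(x[j] * U[j][i] for j in range(M)) for i in range(M)]

# The iteration: lift phi_N from A^N D to D arbitrarily (pad with 0),
# apply alpha^{-1} U, and reduce mod F^{N+1} D.
phi = [Fraction(1)]                       # phi_0 in V = A^0 D
for N in range(M - 1):
    lift = phi + [Fraction(0)] * (M - len(phi))
    tau = [c / alpha for c in act(lift)]
    phi = tau[: N + 2]                    # reduce mod F^{N+1} D

# The limit Phi is an integral eigenvector of U lifting phi_0.
assert all(c.denominator == 1 for c in phi)
assert act(phi) == [alpha * c for c in phi]
assert phi[0] == 1
```

Note that the divisibility built into $U$ is exactly what makes the division by $\alpha$ harmless here; in the paper, $D$ is an infinite-dimensional module of distributions and the compatible classes $\phi_N$ assemble into an element of the inverse limit.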
Condition (b) is relatively weak, and it seems reasonable to expect that a submodule $D^\alpha$ satisfying this condition exists in wider generality; in particular, when $D$ is a module of $p$-adic distributions in a finite number of variables, $D^\alpha$ can be defined by imposing a simple condition on the low degree moments. Condition (a), however, is stronger, and leads to the notion of \emph{small slope}. To illustrate this, consider the following examples of cases where such filtrations exist: \begin{itemize} \item One can find suitable filtrations in the case of modular symbols attached to modular forms of weight $k+2$ over $\mathbb{Q}$ (see \cite{Gre07}). In this case, condition (a) is satisfied only if $v_p(\alpha)<k+1$, that is, if the modular form has \emph{small slope at $p$}. \item A similar result is given for modular forms over an imaginary quadratic field $K$ in \cite{Wil17}. In the case of weight $(k,k)$, and $p\mathcal{O}_K = \mathfrak{p}\bar{\mathfrak{p}}$ split, the natural filtrations for $U_{\mathfrak{p}}$ and $U_{\bar{\mathfrak{p}}}$ satisfy condition (a) (with respect to $\alpha_{\mathfrak{p}}$ and $\alpha_{\bar{\mathfrak{p}}}$) only if $v_p(\alpha_{\mathfrak{p}}),v_p(\alpha_{\bar{\mathfrak{p}}}) < k+1.$ A more detailed description of these results is given in Section \ref{gl2}. \end{itemize} \subsection*{Structure} In the first section, we describe the set-up of the theorem and the precise properties we require of our filtrations. In the second, we give a proof of our main theorem. In the third, we summarise the case of $\mathrm{GL}_2$ over an imaginary quadratic field. In the fourth, we set up the case of $\mathrm{SL}_3$ by giving the relevant definitions of the various coefficient spaces and specialisation maps, and finally, in the fifth section, we define the filtrations we require in this case before stating the results for $\mathrm{SL}_3$. 
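Before beginning, we illustrate the small slope bound of the first bullet above numerically. In the sketch below (Python, purely illustrative) a distribution of weight $k$ is modelled by finitely many moments $m_j = \mu(z^j)$; the conventions (kernel-of-specialisation condition $m_j = 0$ for $j \leq k$, filtration $v_p(m_j) \geq N - j$, and $U_p$ given by summing the actions of the matrices $\smallmatrd{1}{a}{0}{p}$ for $0 \leq a < p$) are modelled on \cite{Gre07}, but the code and all names in it are ours. It checks that $\mu|U_p$ gains one filtration step up to $\alpha$ for any $\alpha$ with $v_p(\alpha) \leq k$, that is, with $v_p(\alpha) < k+1$.

```python
import random
from math import comb

# Moments model of a weight-k distribution mu: m[j] = mu(z^j) for j < M.
# Kernel of specialisation: m[j] = 0 for j <= k.
# Filtration F^N: v_p(m[j]) >= N - j for j > k.
p, k, N, M = 3, 2, 4, 12
random.seed(1)

def vp(n):                     # p-adic valuation (vp(0) treated as +infinity)
    if n == 0:
        return float('inf')
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

# A random element of F^N inside the kernel of specialisation:
m = [0] * M
for j in range(k + 1, M):
    m[j] = p ** max(N - j, 0) * random.choice([1, 2, 4, 5, 7, 8])

def act(m, a):                 # mu -> mu|(1 a; 0 p): moments of mu((a+pz)^j)
    return [sum(comb(j, i) * a ** (j - i) * p ** i * m[i] for i in range(j + 1))
            for j in range(M)]

# U_p is the sum over the coset representatives a = 0, ..., p-1:
Up = [sum(col) for col in zip(*(act(m, a) for a in range(p)))]

# mu|U_p again kills the moments up to k, and gains one filtration step
# up to p^k, i.e. lies in alpha * F^{N+1} whenever v_p(alpha) <= k:
assert all(Up[j] == 0 for j in range(k + 1))
assert all(vp(Up[j]) >= k + (N + 1 - j) for j in range(k + 1, M))
```

The first moments ($j \leq k$) are exactly where the factor $p$ from summing over the $p$ cosets is needed, while for $j > k$ the gain comes from the $p^i$ appearing in $\mu((a+pz)^j)$ together with the vanishing of the moments below degree $k+1$; this is where the bound $v_p(\alpha) < k+1$ enters.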
\section{Setup} \begin{mnotnum}\label{setup}Suppose that we have: \begin{itemize} \item[(i)] A monoid $\Sigma$, \item[(ii)] A group $\Gamma \leq \Sigma$, \item[(iii)] A ring $R$ and a right $R[\Sigma]$-module $D$, \item[(iv)] An $R[\Sigma]$-stable filtration of $D$, given by $D \supset \mathcal{F}^0D \supset \mathcal{F}^1D \supset \cdots,$ such that if we define $\mathcal{A}^ND \defeq D/\mathcal{F}^ND$, then we have \[\lim\limits_{\longleftarrow}\mathcal{A}^ND = D,\] and where the $\mathcal{F}^ND$ have trivial intersection, and \item[(v)] For some fixed $\alpha \in R$, a right $\Sigma$-stable submodule $D^\alpha$ of $D$, with $V^\alpha \defeq \mathrm{Im}(D^\alpha \rightarrow \mathcal{A}^0D).$ \end{itemize} \end{mnotnum} Note that for each $\gamma \in \Sigma$ such that $\Gamma$ and $\gamma^{-1}\Gamma\gamma$ are commensurable, and any $\Gamma$-module $\mathbb{D}$, we have an operator $U_\gamma$ on the cohomology group $\mathrm{H}^r(\Gamma,\mathbb{D})$ defined in the usual way, that is, by the composition of the maps \[\mathrm{H}^r(\Gamma,\mathbb{D}) \labelrightarrow{\text{res}} \mathrm{H}^r(\Gamma\cap\gamma^{-1}\Gamma\gamma,\mathbb{D}) \labelrightarrow{\gamma} \mathrm{H}^r(\Gamma\cap\gamma\Gamma\gamma^{-1},\mathbb{D}) \labelrightarrow{\text{cores}} \mathrm{H}^r(\Gamma,\mathbb{D}).\] \begin{mthm}\label{liftingtheorem} Suppose that $\alpha$ is a non-zero element of $R$, that $D^\alpha$ and $V^\alpha$ and their corresponding cohomology groups have trivial $R$-torsion, and that for some $\pi \in \Sigma$, we have \begin{itemize} \item[(a)] If $\mu \in D^\alpha$, then $\mu|\pi \in \alpha D$, and \item[(b)] If $\mu \in \mathcal{F}^ND,$ then $\mu|\pi \in \alpha \mathcal{F}^{N+1}D.$ \end{itemize} Then the restriction of the natural map $\rho: \mathrm{H}^r(\Gamma,D^\alpha) 
\rightarrow \mathrm{H}^r(\Gamma,V^\alpha)$ to the $\alpha$-eigenspaces of the $U_\pi$ operator is an isomorphism. \end{mthm} \begin{mrem}This result is very similar to Theorem 3.1 of \cite{PP09}; their conditions are slightly weaker, but their conclusion requires $\alpha$ to be a unit. In their case, they do not require the condition on trivial $R$-torsion, and instead prove that there is a unique eigenlift $\Phi$ of an eigensymbol $\phi$ that has $\mathrm{Ann}_R(\Phi) = \mathrm{Ann}_R(\phi)$. For simplicity, we have imposed this condition to ensure these annihilators are trivial. In the cases we consider, these conditions are satisfied. \end{mrem} We have natural $\Sigma$-equivariant projection maps \[\mathrm{pr}^N: D \longrightarrow \mathcal{A}^ND\] that induce $\Sigma$-equivariant maps \[\rho^N: \comp{D} \longrightarrow \comp{\mathcal{A}^ND}\] (and hence $\rho\defeq \rho^0: \comp{D^\alpha} \rightarrow \comp{V^\alpha}$ by restriction), as well as maps $\mathrm{pr}^{M,N}:\mathcal{A}^MD \rightarrow \mathcal{A}^ND$ for $M \geq N$ that similarly induce maps $\rho^{M,N}.$ Thus we have an inverse system, and we have \begin{align*} \lim\limits_{\longleftarrow}\comp{\mathcal{A}^ND} = \comp{D}. \end{align*} First we pass to a filtration where the $\Sigma$-action is nicer. Define $\mathcal{F}^ND^\alpha = \mathcal{F}^ND \cap D^\alpha.$ This is a $\Sigma$-stable filtration of $D^\alpha$, since $D^\alpha$ is $\Sigma$-stable. 
It is immediate that if $\mu \in \mathcal{F}^ND^\alpha,$ then $\mu|\pi \in \alpha\mathcal{F}^{N+1}D^\alpha.$ Define $\mathcal{A}^ND^\alpha = D^\alpha/\mathcal{F}^ND^\alpha$, so that we have the following (where the vertical maps are injections): \begin{diagram} D&&\rTo^{\mathrm{pr}^M}&&\mathcal{A}^MD&&\rTo^{\mathrm{pr}^{M,N}}&&\mathcal{A}^ND \\ \uTo&&&&\uTo&&&&\uTo \\ D^\alpha &&\rTo^{\mathrm{pr}^M}&&\mathcal{A}^MD^\alpha &&\rTo^{\mathrm{pr}^{M,N}}&&\mathcal{A}^ND^\alpha. \end{diagram} Again, we see that \begin{align} \label{ila}\lim\limits_{\longleftarrow}\comp{\mathcal{A}^ND^\alpha} = \comp{D^\alpha}. \end{align} \begin{mnot}(The $U$ operator at the level of cochains). In \cite{PP09}, a description of the $U = U_\pi$ operator at the level of cochains is given. In particular, they take an explicit free resolution \[\cdots \labelrightarrow{\delta_3} F_2 \labelrightarrow{\delta_2} F_1 \labelrightarrow{\delta_1} F_0 \labelrightarrow{d_0} \mathbb{Z} \longrightarrow 0\] of $\mathbb{Z}$ by right $\mathbb{Z}[\Gamma]$-modules; then, for a right $\mathbb{Z}[\Gamma]$-module $\mathbb{D}$, they use this to explicitly write down the spaces $C^r(\Gamma,\mathbb{D}) \defeq \mathrm{Hom}_\Gamma(F_r,\mathbb{D})$ of cochains, $Z^r(\Gamma,\mathbb{D}) \defeq \mathrm{Ker}(d_{r}:C^r(\Gamma,\mathbb{D}) \rightarrow C^{r+1}(\Gamma,\mathbb{D}))$ of cocycles, and $B^r(\Gamma,\mathbb{D}) \defeq d_{r-1}(C^{r-1}(\Gamma,\mathbb{D}))$ of coboundaries, where $d_r$ is the obvious map induced by $\delta_r$. 
Then the group cohomology is defined as $\comp{\mathbb{D}} \defeq Z^r(\Gamma,\mathbb{D})/B^r(\Gamma,\mathbb{D})$.\\ \\ Now, $F_{*}^{\pi} \rightarrow \mathbb{Z} \rightarrow 0$ is a free resolution of $\mathbb{Z}$ by $\mathbb{Z}[\pi^{-1}\Gamma \pi]$-modules, and as $F_{*} \rightarrow \mathbb{Z}\rightarrow 0$ is also a free resolution by $\mathbb{Z}[\pi^{-1}\Gamma\pi]$-modules, there is a $\mathbb{Z}[\pi^{-1}\Gamma\pi]$-complex map $\tau_{*}$ from $F_{*}$ to $F_{*}^{\pi}$ lifting the identity map on $\mathbb{Z}$.\\ \\ Pick a set $\{\pi_i\}$ of coset representatives for $\Gamma\pi$ in $\Gamma\pi\Gamma$, noting that this is finite by commensurability. Then define $U:\mathrm{Hom}(F_r,D) \rightarrow \mathrm{Hom}(F_r,D)$ at the level of cochains by \[(\varphi|U)(f_r) \defeq \sum_{i}\varphi(\tau_r(f_r\cdot\pi_i^{-1}))\cdot\pi\pi_i,\hspace{12pt} \varphi \in \mathrm{Hom}(F_r,D), f_r\in F_r.\] Pollack and Pollack prove (in Lemma 3.2 of \cite{PP09}) that this induces a map of chain complexes and hence a map of cohomology groups. In fact, this map is nothing other than $U_\pi$ as defined above. \end{mnot} \begin{mdef*}($U$-eigensymbols of eigenvalue $\alpha$). Since $\mathcal{A}^ND^\alpha$ may have non-trivial $\alpha$-torsion, we should make the statement ``$U_\pi$-eigensymbol in $\comp{\mathcal{A}^ND^\alpha}$'' more precise. By condition (a) of Theorem \ref{liftingtheorem}, if $\mu \in D^\alpha$, then $\mu|\pi \in \alpha D$. We can thus consider $\pi$ as a map from $D^\alpha$ to $D$ in a natural way, and define another map $V_\pi$ from $D^\alpha$ to $D$ by setting \[x|V_\pi = y, \hspace{12pt} \text{ where }x|\pi = \alpha y.\] We see we have a formal equality of maps $\alpha V_\pi = \pi$ from $D^\alpha$ to $D$. 
Thus we get an operator \[V\defeq V_\pi:\comp{D^\alpha} \longrightarrow \comp{D}\] on the cohomology, so that we have an equality of operators $\alpha V = U$ as operators on $\comp{D^\alpha}$. There is also a canonical operator \[\varepsilon: \comp{D^\alpha} \longrightarrow \comp{D}\] induced by the inclusion $D^\alpha \rightarrow D$. We see that if $\phi \in \comp{D^\alpha}$ satisfies $\phi|U = \alpha\phi$, then $\varepsilon(\phi) = \phi|V$ as elements of $\comp{D}$. \begin{mrem} The reason we don't simply define $V = \alpha^{-1}U_\pi$ is that `dividing by $\alpha$' is not in general a well-defined notion on $D$. \end{mrem} It is easy to see that for each $N$, $V$ gives rise to an operator $V_N : \mathcal{A}^ND^\alpha \rightarrow \mathcal{A}^ND$. Denote the canonical map $\comp{\mathcal{A}^ND^\alpha} \rightarrow \comp{\mathcal{A}^ND}$ by $\varepsilon_N$. We say an element $\varphi_N \in \comp{\mathcal{A}^ND^\alpha}$ is a \emph{$U$-eigensymbol of eigenvalue $\alpha$} if $\varepsilon_N(\varphi_N) = \varphi_N|V_N$ as elements of $\comp{\mathcal{A}^ND}$. Henceforth, when we talk about $U$-eigensymbols, it shall be assumed that the eigenvalue is $\alpha$. \end{mdef*} \section{Proof of Theorem \ref{liftingtheorem}} \begin{proof}(Theorem \ref{liftingtheorem}). We first prove surjectivity. Take a $U$-eigensymbol $\phi_0$ of eigenvalue $\alpha$ in $\comp{V^\alpha} = \comp{\mathcal{A}^0D^\alpha}$. Suppose that for some $N$ there exists a lift $\phi_N \in \comp{\mathcal{A}^ND^\alpha}$ of $\phi_0$ to a $U$-eigensymbol. We prove that we can canonically lift $\phi_N$ to some $\phi_{N+1}$, and then we will be done by induction and equation (\ref{ila}), as we will have constructed an element in the inverse limit. We prove this in a series of claims.\\ \\ Take a cocycle $\varphi_N$ representing $\phi_N$, and lift it to a cochain $\varphi \in C^r(\Gamma,D^\alpha)$. 
We apply $V$ at the level of cochains, obtaining a cochain $\varphi|V:F_r\rightarrow D$. Define a cochain \[\tau_{N+1}:F_r \longrightarrow \mathcal{A}^{N+1}D\] by composing this with the reduction map. This is in fact a cocycle; as $\varphi_N$ is a cocycle, $d\varphi$ takes values in $\mathcal{F}^ND^\alpha$, and thus as we have $d(\varphi|V) = (d\varphi)|V$ taking values in $\mathcal{F}^{N+1}D$ (by properties of $V$), it follows that $d\tau_{N+1} = 0$. Thus $\tau_{N+1}$ represents some cohomology class $[\tau_{N+1}]_D \in \comp{\mathcal{A}^{N+1}D}.$ \begin{mcla}\label{tauind}The cohomology class $[\tau_{N+1}]_D$ is independent of choices. \end{mcla} \begin{proof} Suppose we take a different cochain $\widetilde{\varphi}$ lifting a (possibly different) cocycle $\widetilde{\varphi_N}$ representing $\phi_N$ to a cochain taking values in $D^\alpha$. Then $[\rho^N(\varphi - \widetilde{\varphi})]_{D^\alpha} = 0$, where $\rho^N$ is the natural reduction map, as $\varphi_N$ and $\widetilde{\varphi_N}$ both represent $\phi_N$. Thus $[\varphi - \widetilde{\varphi}]_{D^\alpha} \in \comp{D^\alpha}$ is represented by a cocycle $\psi$ taking values in $\mathcal{F}^ND^\alpha$. Therefore $[\varphi - \widetilde{\varphi}]_{D^\alpha}|V$ is represented by $\psi|V$, which, by examining the explicit action of $U$ on cochains, takes values in $\mathcal{F}^{N+1}D$. After reduction (mod $\mathcal{F}^{N+1}D$), we see that \[[\tau_{N+1}]_D - [\rho^{N+1}(\widetilde{\varphi}|V)]_D = [\rho^{N+1}(\psi|V)]_D = 0,\] which is the result. \end{proof} \begin{mcla}There exists a cocycle representing $[\tau_{N+1}]_D$ taking values in the smaller space $\mathcal{A}^{N+1}D^\alpha$. \end{mcla} \begin{proof}As $\phi_N$ is a $U$-eigensymbol, we know that, as cocycles, $\varphi_N$ and $\tau_{N} \defeq \rho^N(\varphi|V)$ determine the same cohomology class in $\comp{\mathcal{A}^ND}$. 
Thus there exists some coboundary $b_N \in B^r(\Gamma,\mathcal{A}^ND)$ such that $\varphi_N = \tau_N + b_N$. Then by definition $b_N = d(c_N)$ for some $c_N \in C^{r-1}(\Gamma,\mathcal{A}^ND)$. Lift $c_N$ arbitrarily to a cochain $c_{N+1} \in C^{r-1}(\Gamma,\mathcal{A}^{N+1}D)$, and define $b_{N+1} \defeq d(c_{N+1}) \in B^r(\Gamma,\mathcal{A}^{N+1}D)$. Then \[\rho^{N+1,N}(\tau_{N+1} + b_{N+1}) = \tau_N + b_N = \varphi_N \in Z^r(\Gamma,\mathcal{A}^ND^\alpha).\] Therefore it follows that $\varphi_{N+1} \defeq \tau_{N+1} + b_{N+1}$ takes values in the smaller space $\mathcal{A}^{N+1}D^\alpha$. As $\tau_{N+1}+b_{N+1} \in Z^r(\Gamma,\mathcal{A}^{N+1}D)$, it follows that $\varphi_{N+1} \in Z^r(\Gamma,\mathcal{A}^{N+1}D^\alpha)$. Thus $\varphi_{N+1}$ is the required cocycle to prove the claim. \end{proof} Define $\phi_{N+1} \defeq [\varphi_{N+1}]_{D^\alpha} \in \comp{\mathcal{A}^{N+1}D^\alpha}$ to be the $\mathcal{A}^{N+1}D^\alpha$-valued cohomology class determined by $\varphi_{N+1}$. \begin{mcla}The cohomology class $\phi_{N+1}$ is independent of all choices. \end{mcla} \begin{proof} Suppose we choose a different preimage $\widetilde{c_N}$ of $b_N$ under $d$, leading to a different $\widetilde{c_{N+1}}$ and $\widetilde{b_{N+1}}$, and thus a different $\widetilde{\varphi_{N+1}}$. 
Then \[\varphi_{N+1} - \widetilde{\varphi_{N+1}} = b_{N+1} - \widetilde{b_{N+1}} = d(c_{N+1} - \widetilde{c_{N+1}}).\] As $\varphi_{N+1} -\widetilde{\varphi_{N+1}}$ takes values in $\mathcal{A}^{N+1}D^\alpha$, so must $c_{N+1} - \widetilde{c_{N+1}}$; hence $b_{N+1} - \widetilde{b_{N+1}} \in B^r(\Gamma,\mathcal{A}^{N+1}D^\alpha)$, so that \[[\varphi_{N+1}]_{D^\alpha} = [\widetilde{\varphi_{N+1}}]_{D^\alpha} \in \comp{\mathcal{A}^{N+1}D^\alpha}.\] Thus they also determine the same cohomology class, namely $[\tau_{N+1}]_D$, in $\comp{\mathcal{A}^{N+1}D}$, and as the class $[\tau_{N+1}]_D$ is itself uniquely determined by Claim \ref{tauind}, we're done. \end{proof} \begin{mcla}$\phi_{N+1}$ is a $U$-eigensymbol with eigenvalue $\alpha$. \end{mcla} \begin{proof}It's clear that the representative $\varphi_{N+1}$ of $\phi_{N+1}$ is a lift of $\varphi_N$, by definition. Thus any lift $\varphi$ of $\varphi_{N+1}$ to a cochain taking values in $D^\alpha$ is also a lift of $\varphi_{N}$, and accordingly, it follows that \[\phi_{N+1}|V_{N+1} \defeq [\rho^{N+1}(\varphi|V)]_D = [\tau_{N+1}]_D.\] Also by definition, $\varphi_{N+1}$ and $\tau_{N+1}$ represent the same elements of $\comp{\mathcal{A}^{N+1}D}$, so that $\varepsilon_{N+1}(\phi_{N+1}) = [\tau_{N+1}]_D$. Combining the two equalities gives $\varepsilon_{N+1}(\phi_{N+1}) = \phi_{N+1}|V_{N+1}$, which is the required result. \end{proof} Thus we obtain surjectivity. Take some $U$-eigensymbol $\phi_0 \in \comp{V^\alpha} = \comp{\mathcal{A}^0D^\alpha}$, and for each $N \in \mathbb{N}$, lift it to a $U$-eigensymbol $\phi_N$ using the above method. Then we obtain an element of the inverse limit $\lim\limits_{\longleftarrow}\comp{\mathcal{A}^ND^\alpha},$ which we know is isomorphic in a natural way to $\comp{D^\alpha}$. This element is thus a $U$-eigensymbol that maps to $\phi_0$ under the specialisation map.\\ \\ It remains to prove injectivity. 
Suppose $\phi \in \ker(\rho)$; we aim to show that $\phi = 0$. Consider the exact sequence \[0 \longrightarrow \mathcal{F}^0D^\alpha \longrightarrow D^\alpha \longrightarrow V^\alpha \longrightarrow 0.\] This leads to a long exact sequence of cohomology \[\cdots \longrightarrow \comp{\mathcal{F}^0D^\alpha} \longrightarrow \comp{D^\alpha} \labelrightarrow{\rho} \comp{V^\alpha} \longrightarrow \cdots,\] and accordingly any element of $\ker(\rho)$ must lie in the image of $\comp{\mathcal{F}^0D^\alpha}$. This is the same as saying that $\phi$ can be represented by a cocycle $\varphi$ taking values in $\mathcal{F}^0D^\alpha$. We now conclude using: \begin{mcla}Let $\phi \in \comp{D^\alpha}$ be represented by a cocycle $\varphi$ taking values in $\mathcal{F}^0D^\alpha$. If $\phi$ is a $U$-eigensymbol, then $\phi = 0$. \end{mcla} \begin{proof} We consider $\varepsilon(\phi) = [\varphi]_D$, which is also a $U$-eigensymbol. It thus makes sense to apply the operator $V$ to $[\varphi]_D$, for which it is a fixed point. By condition (b) of Theorem \ref{liftingtheorem}, the $V$ operator takes $\mathcal{F}^ND$ to $\mathcal{F}^{N+1}D$; therefore, as $[\varphi]_D$ is represented by $\varphi|V^N$ for any $N$ (by the eigensymbol property), we see that for each $N$, the symbol $[\varphi]_D$ is represented by a cocycle taking values in $\mathcal{F}^ND$. But the intersection of the $\mathcal{F}^ND$ is trivial by assumption. Thus $\varepsilon(\phi) = [\varphi]_D$ is $0$.\\ \\ It remains to prove that the map $\varepsilon$ is injective on such classes. We now know that $\varphi$ is a coboundary in $C^r(\Gamma,D)$, so that there exists some $c \in C^{r-1}(\Gamma,D)$ with $\varphi = d(c)$. But as $\varphi$ takes values in $D^\alpha$, it follows that $c$ must also take values in $D^\alpha$. Thus $\varphi$ is also a coboundary in $C^r(\Gamma,D^\alpha)$, and $\phi = [\varphi]_{D^\alpha} = 0$, as required. 
\end{proof} This completes the proof of Theorem \ref{liftingtheorem}. \end{proof} \section{Application to $\mathrm{GL}_2\times\mathrm{GL}_2$}\label{gl2} As an example of where this theorem applies, we give a brief summary of the case of $\mathrm{GL}_2\times\mathrm{GL}_2$, which is conceptually easier to understand than the case of $\mathrm{SL}_3$. In particular, we present the results in a concrete setting in the style of \cite{Wil17}, where these results were first proved. Recall the set-up: \begin{mnot} Let $K$ be an imaginary quadratic field with ring of integers $\mathcal{O}_K$, and let $p$ be a rational prime that splits as $\mathfrak{p}\bar{\mathfrak{p}}$ in $K$. Let $\Gamma \subset \Gamma_0(p)\subset \mathrm{SL}_2(\mathcal{O}_K)$ be a congruence subgroup. Let $\Sigma$ denote the set of complex embeddings of $K$, and let $\lambda = (k,k) \in \mathbb{Z}[\Sigma]$ be a weight, where $k$ is non-negative. Let $L/\mathbb{Q}_p$ be a finite extension with ring of integers $\mathcal{O}_L$. \end{mnot} \subsection{Coefficient Modules} \begin{mdef}Let $V_k(\mathcal{O}_L)\defeq\mathrm{Sym}^k(\mathcal{O}_L^2)$ be the space of homogeneous polynomials in two variables of degree $k$ over $\mathcal{O}_L$. \end{mdef} We can identify $V_k(\mathcal{O}_L)\otimes V_k(\mathcal{O}_L)$ with a space of polynomial functions on $\mathcal{O}_K\otimes_{\mathbb{Z}}\mathbb{Z}_p$ in a natural way. \begin{mdef}Let $\mathbb{A}_k(\mathcal{O}_L)\defeq \mathcal{O}_L\langle z\rangle$ be the Tate algebra over $\mathcal{O}_L$, that is, the space of power series in one variable whose coefficients tend to zero as the degree tends to infinity. \end{mdef} \begin{mrem}For ease of notation, we will henceforth drop $\mathcal{O}_L$ from the notation. All tensor products are over $\mathcal{O}_L$. 
\end{mrem} Let $\Sigma_0(p) \subset M_2(\mathcal{O}_L)\cap \mathrm{GL}_2(L)$ be the set of matrices that are upper-triangular modulo $p$. In particular, we have $\Gamma\subset\Sigma_0(p)$. Then $\mathbb{A}_k$ has a natural left action of $\Sigma_0(p)$, depending on $k$ (justifying the notation), given by \[\smallmatrd{a}{b}{c}{d}\cdot f(z) = (a+cz)^kf\left(\frac{b+dz}{a+cz}\right).\] This action preserves the subspace $V_k$ and hence gives rise to component-wise actions of $\Sigma_0(p)^2$ on $V_k\otimes V_k$, $V_k\otimes\mathbb{A}_k$ and $\mathbb{A}_k\otimes \mathbb{A}_k$. Accordingly, we get right actions of $\Sigma_0(p)^2$ on their corresponding topological duals $V_k^*\otimes V_k^*$, $V_k^*\otimes\mathbb{D}_k$ and $\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k$ respectively, where $\mathbb{D}_k$ denotes the topological dual of $\mathbb{A}_k$. By dualising the inclusions, we get $\Sigma_0(p)^2$-equivariant surjections \[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k \labelrightarrow{\mathrm{pr}_2} V_k^*\otimes\mathbb{D}_k \labelrightarrow{\mathrm{pr}_1} V_k^*\otimes V_k^*,\] that induce maps \[\mathrm{H}^1(\Gamma,\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k)\labelrightarrow{\rho_2}\mathrm{H}^1(\Gamma,V_k^*\otimes\mathbb{D}_k)\labelrightarrow{\rho_1}\mathrm{H}^1(\Gamma,V_k^*\otimes V_k^*)\] on the cohomology. $\\ \\$ We define filtrations as follows: \begin{mdef}\begin{itemize} \item[(i)] Let $N$ be a non-negative integer and define $\mathcal{F}^N\mathbb{D}_k \defeq \{\mu\in\mathbb{D}_k:\mu(z^r) \in \pi_L^{N-r}\mathcal{O}_L \text{ for all } r \geq 0\}$, where $\pi_L$ is a uniformiser in $\mathcal{O}_L$. Then define \[\mathcal{F}^N[V_k^*\otimes\mathbb{D}_k] \defeq V_k^*\otimes \mathcal{F}^N\mathbb{D}_k.\] This is $\Sigma_0(p)$-stable by arguments in \cite{Gre07} and \cite{Wil17}. Now define \[F^N[V_k^*\otimes\mathbb{D}_k] \defeq \mathcal{F}^N[V_k^*\otimes\mathbb{D}_k] \cap \ker(\mathrm{pr}_1),\] which is also $\Sigma_0(p)$-stable as $\mathrm{pr}_1$ is $\Sigma_0(p)$-equivariant. 
\item[(ii)]Similarly, define $F^N[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k] \defeq (\mathcal{F}^N\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k)\cap\ker(\mathrm{pr}_2),$ which again is $\Sigma_0(p)$-stable. \end{itemize} \end{mdef} Let $\alpha \in \mathcal{O}_L$ and let $\pi_{\mathfrak{p}}\defeq[\smallmatrd{1}{0}{0}{1},\smallmatrd{1}{0}{0}{p}]$ and $\pi_{\bar{\mathfrak{p}}}\defeq[\smallmatrd{1}{0}{0}{p},\smallmatrd{1}{0}{0}{1}]\in \Sigma_0(p)^2.$ First, we have: \begin{mlem}Suppose $v_p(\alpha)<k+1$. Then we have \begin{itemize} \item[(i)] If $\mu \in F^N[V_k^*\otimes\mathbb{D}_k]$, then $\mu|\pi_{\mathfrak{p}} \in \alpha F^{N+1}[V_k^*\otimes\mathbb{D}_k].$ \item[(ii)] If $\mu \in F^N[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k]$, then $\mu|\pi_{\bar{\mathfrak{p}}} \in \alpha F^{N+1}[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k].$ \end{itemize} \end{mlem} We then define the analogue of the module $D^\alpha$ as follows: \begin{mdef} \begin{itemize} \item[(i)]Define $\mathbb{D}_k^\alpha \defeq \{\mu\in\mathbb{D}_k: \mu(z^r) \in \alpha p^{-r}\mathcal{O}_L\}$, and then define \[ [V_k^*\otimes\mathbb{D}_k]^\alpha \defeq V_k^*\otimes \mathbb{D}_k^\alpha.\] \item[(ii)]Similarly, define $[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k]^\alpha \defeq \mathbb{D}_k^\alpha\widehat{\otimes}\mathbb{D}_k$.
\end{itemize} \end{mdef} \begin{mlem}\begin{itemize} \item[(i)] If $\mu\in[V_k^*\otimes\mathbb{D}_k]^{\alpha}$, then $\mu|\pi_{\mathfrak{p}} \in \alpha V_k^*\otimes\mathbb{D}_k.$ \item[(ii)] If $\mu\in[\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k]^{\alpha}$, then $\mu|\pi_{\bar{\mathfrak{p}}}\in\alpha\mathbb{D}_k\widehat{\otimes}\mathbb{D}_k.$ \end{itemize} \end{mlem} Accordingly, we can lift using Theorem \ref{liftingtheorem}, first along $\rho_1$ using the operator $U_{\mathfrak{p}}$ induced by $\pi_{\mathfrak{p}}$, and secondly along $\rho_2$ using the operator $U_{\bar{\mathfrak{p}}}$ induced by $\pi_{\bar{\mathfrak{p}}}$. In particular, we have: \begin{mthm} Let $\alpha_{\mathfrak{p}},\alpha_{\bar{\mathfrak{p}}} \in \mathcal{O}_L$ with $v_p(\alpha_{\mathfrak{p}}),v_p(\alpha_{\bar{\mathfrak{p}}}) < k+1.$ Then the restriction of the map $\rho \defeq \rho_1\circ\rho_2$ to the simultaneous $\alpha_{\mathfrak{p}}$- and $\alpha_{\bar{\mathfrak{p}}}$-eigenspaces of the $U_{\mathfrak{p}}$ and $U_{\bar{\mathfrak{p}}}$ operators respectively is an isomorphism. \end{mthm} \begin{mrems} \begin{itemize} \item[(i)] In \cite{Wil17}, these results are used to construct $p$-adic $L$-functions for automorphic forms for $\mathrm{GL}_2$ over an imaginary quadratic field, in the spirit of \cite{PS11}. In particular, we associate to such an automorphic form a canonical element in the overconvergent cohomology, from which we can very naturally build a ray class distribution that interpolates the $L$-values of the automorphic form. It would be interesting to know if similar results exist in the case of $\mathrm{SL}_3$. \item[(ii)]In the interests of the transition to the case of $\mathrm{SL}_3$, we can rephrase the above definitions in a more abstract way. In particular, let $G\defeq \mathrm{Res}_{K/\mathbb{Q}}\mathrm{GL}_2$, with Borel subgroup $B$ and opposite Borel $B^{\mathrm{opp}}$.
Let $T$ be the torus of $G$, and note that we can view $\lambda$ as a dominant weight for $T$, and that $V_k\otimes V_k$ is the representation of $G$ of highest weight $\lambda$ with respect to $B^{\mathrm{opp}}$. Note that for an extension $L/\mathbb{Q}_p$, we have $G(L) \cong \mathrm{GL}_2(L)\times\mathrm{GL}_2(L)$. Then $\mathbb{A}_k\otimes\mathbb{A}_k$ is the ring of analytic functions on $B(L)$ that transform like $\lambda$ under multiplication by elements of $T(L)$, whilst $V_k\otimes \mathbb{A}_k$ is the ring of analytic functions on $B(L)$ that transform like $\lambda$ under multiplication by elements of $\mathrm{GL}_2(L)\times T_{\mathbb{Q}}(L)$, where $T_{\mathbb{Q}}(L)$ is the torus of diagonal matrices in the algebraic group $\mathrm{GL}_2/\mathbb{Q}$. In particular, the definitions in the following section are a natural analogue of the theory described concretely above. \end{itemize} \end{mrems} \section{Overconvergent modular symbols for $\mathrm{SL}_3$} We now apply the results above to give a generalisation of the lifting theorem for $\mathrm{SL}_3$ of Pollack and Pollack in \cite{PP09}. We first recall the setting, and also develop the notion of `partially overconvergent' modular symbols for $\mathrm{SL}_3$. \subsection{Notation}\label{notation} We recall the setting; where possible, we keep to the notation used by Pollack and Pollack in \cite{PP09} for clarity. For further details, the reader is directed to their paper. Let $G$ be the algebraic group $\mathrm{GL}_3/\mathbb{Q}$, and denote by $B$ (resp.\ $B^{\mathrm{opp}}$) its Borel subgroup of upper-triangular (resp.\ lower-triangular) matrices, with $T$ and $N$ (resp.\ $N^{\mathrm{opp}}$) the subgroups of $B$ (resp.\ $B^{\mathrm{opp}}$) consisting of the diagonal and unipotent matrices respectively. Note that $B = TN$.
Let $p$ be a prime, let $\Gamma_0(p)$ be the subgroup of $\mathrm{SL}_3(\mathbb{Z})$ of matrices that are upper-triangular modulo $p$, and let $\Gamma$ be a congruence subgroup of $\mathrm{SL}_3(\mathbb{Z})$ contained in $\Gamma_0(p)$. \subsection{Classical coefficient modules}\label{classicalcoeffs} Let $\lambda$ be a dominant algebraic character of the torus $T$, which can be seen as an element $\lambda = (k_1,k_2,k_3) \in \mathbb{Z}^3.$ Let $V_\lambda$ be the (unique) representation of $G$ with highest weight $\lambda$ with respect to $B^{\mathrm{opp}}$; for example, when $\lambda = (k,0,0)$, we see that $V_\lambda(A)$ is nothing but $\mathrm{Sym}^k(A^3)$, for a suitable coefficient module $A$. \begin{mrem} We will restrict to the case where $\lambda = (k_1,k_2,0)$, rescaling by the determinant, since this slightly simplifies the calculations. Indeed, any such weight can be written in the form $\lambda = (k_1+v,k_2+v,v)$, and then $V_\lambda \cong V_{\lambda'}\otimes\det^v$, where $\lambda' = (k_1,k_2,0)$. All of our main results then go through in the general case with only slight modification, and indeed, the range of `non-criticality' for the slope for $\lambda'$ is the same as that for $\lambda$ scaled by $v$ in each component. \end{mrem} \subsection{Overconvergent coefficient modules}\label{ovcgtcoeffs} We denote by $\mathbb{C}_p$ the completion of a fixed algebraic closure of $\mathbb{Q}_p$, and write $\mathcal{O}_{\mathbb{C}_p}$ for its ring of integers. We now define two different overconvergent coefficient modules, corresponding to two different parabolic subgroups of $G$.
\subsubsection{Overconvergent with respect to $T = \mathrm{GL}_1^3$} We first look at the case of the parabolic subgroup corresponding to $T = \mathrm{GL}_1\times\mathrm{GL}_1\times\mathrm{GL}_1$, that is, the Borel $B$ itself. This identically mirrors the work of Pollack and Pollack in \cite{PP09}. In particular, let $\mathcal{I}$ denote the subgroup of $G(\mathcal{O}_{\mathbb{C}_p})$ of matrices that are upper-triangular modulo the maximal ideal of $\mathcal{O}_{\mathbb{C}_p}$. $\\ \\$ We consider continuous functions $f: B(\mathcal{O}_{\mathbb{C}_p}) \rightarrow \mathcal{O}_{\mathbb{C}_p}$ satisfying the condition \begin{equation}\label{lambda} f(tb) = \lambda(t)f(b),\hspace{12pt}t \in T(\mathcal{O}_{\mathbb{C}_p}),\ b\in B(\mathcal{O}_{\mathbb{C}_p}). \end{equation} We note that any such function is determined by its restriction to $N(\mathcal{O}_{\mathbb{C}_p})$, and that we can identify $N(\mathcal{O}_{\mathbb{C}_p})$ with $\mathcal{O}_{\mathbb{C}_p}^3$ by identifying \[\begin{pmatrix}1 & x & y\\ 0 & 1 & z\\ 0 & 0 & 1 \end{pmatrix} \longleftrightarrow (x,y,z) \in \mathcal{O}_{\mathbb{C}_p}^3.\] We write $f(x,y,z)$ for the image of this matrix under $f$. $\\ \\$ Let $L/\mathbb{Q}_p$ be a finite extension with ring of integers $\mathcal{O}_L$. We say that such a function $f$ is \emph{$L$-rigid analytic} if, for $(x,y,z) \in N(\mathcal{O}_{\mathbb{C}_p})$, we can write $f$ in the form \[f(x,y,z) = \sum_{r,s,t\geq 0}c_{rst}x^ry^sz^t,\] where $c_{rst} \in L$ tends to $0$ as $r+s+t \rightarrow \infty$. Equivalently, this occurs if and only if $f(x,y,z) \in L\langle x,y,z \rangle$, the Tate algebra in three variables over $L$.
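The determination of $f$ by its restriction to $N$ can be made concrete: any $b \in B$ with invertible diagonal factors uniquely as $b = tn$ with $t$ diagonal and $n$ unipotent, and equation (\ref{lambda}) then forces the value $f(b) = \lambda(t)f(n)$. The following Python sketch (a toy model over $\mathbb{Q}$ using exact rationals rather than $\mathcal{O}_{\mathbb{C}_p}$; the helper names are our own) illustrates this unique extension:

```python
from fractions import Fraction as F

def tn_factor(b):
    # Factor an invertible upper-triangular 3x3 matrix as b = t*n,
    # with t = diag(b11, b22, b33) and n unipotent upper-triangular.
    t = (b[0][0], b[1][1], b[2][2])
    x, y, z = b[0][1] / t[0], b[0][2] / t[0], b[1][2] / t[1]
    return t, (x, y, z)

def extend(f_on_N, lam, b):
    # The unique extension of f_on_N satisfying f(t*n) = lambda(t)*f(n),
    # for lam = (k1, k2, k3); this is the transformation law (1).
    (t1, t2, t3), (x, y, z) = tn_factor(b)
    k1, k2, k3 = lam
    return t1**k1 * t2**k2 * t3**k3 * f_on_N(x, y, z)
```

For instance, with $f(x,y,z) = x^2y + z$ and $\lambda = (2,1,0)$, multiplying $b$ on the left by $\mathrm{diag}(7,1,1)$ multiplies the extended value by $7^2$, exactly as the transformation law demands.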
Writing $\mathcal{O}_L$ for the ring of integers of $L$, there is likewise an integral version with $c_{rst} \in\mathcal{O}_L.$ \begin{mrem} Henceforth, we will state all definitions and results in terms of coefficients in $\mathcal{O}_L$, since in the sequel we use this integrality in an essential way to define filtrations. We could easily instead state the definitions using $L$ in place of $\mathcal{O}_L$. \end{mrem} \begin{mdef}\begin{itemize}\item[(i)]Write $\mathbb{A}_\lambda(\mathcal{O}_L)$ for the space of $\mathcal{O}_L$-rigid analytic functions on $B(\mathcal{O}_{\mathbb{C}_p})$ that satisfy equation (\ref{lambda}). \item[(ii)] Let $\mathbb{D}_\lambda(\mathcal{O}_L)$ denote the topological dual \[\mathbb{D}_\lambda(\mathcal{O}_L)\defeq \mathrm{Hom}_{\mathrm{cts}}(\mathbb{A}_\lambda(\mathcal{O}_L),\mathcal{O}_L),\] the space of \emph{rigid analytic distributions on $B(\mathcal{O}_{\mathbb{C}_p})$ of weight $\lambda$}. \end{itemize} \end{mdef} In an abuse of notation, we write $x^ry^sz^t$ for the unique extension to $B(\mathcal{O}_{\mathbb{C}_p})$ of the function on $N(\mathcal{O}_{\mathbb{C}_p})$ that sends \[\begin{pmatrix}1&x&y\\0&1&z\\0&0&1\end{pmatrix} \longmapsto x^ry^sz^t,\] and note that any $\mu \in \mathbb{D}_\lambda(\mathcal{O}_L)$ is uniquely determined by its values at $x^ry^sz^t$ for $r,s,t\geq 0$. Pollack and Pollack denote this function $f_{rst}$. \subsubsection{Overconvergent with respect to $P \defeq \mathrm{GL}_1\times\mathrm{GL}_2$} We now define a different module of overconvergent coefficients.
This is, in a sense, a \emph{smaller} module of coefficients, and will play the role of `half-overconvergent' coefficients in the following. $\\ \\$ Let $P = \mathrm{GL}_1\times\mathrm{GL}_2 \subset \mathrm{GL}_3$. If $\lambda = (k_1,k_2,0)$ with $k_1\geq k_2$, we get an associated representation \[W_\lambda(A) \defeq \mathrm{det}^{k_1}\otimes\hspace{1pt}\mathrm{Sym}^{k_2}(A^2)\] of $P(A) = \mathrm{GL}_1(A)\times\mathrm{GL}_2(A)$, for suitable $A$. We can replace $B$ with the larger subgroup $B_1$ of matrices that are block upper-triangular with respect to this parabolic subgroup -- that is, matrices that are zero in the $(2,1)$ and $(3,1)$ entries -- and consider the space of functions $f: B_1(\mathcal{O}_{\mathbb{C}_p}) \longrightarrow W_\lambda(\mathcal{O}_{\mathbb{C}_p})$ satisfying the condition \[f(tg) = \lambda(t)f(g) \hspace{12pt}\forall t\in P(\mathcal{O}_{\mathbb{C}_p}),g\in B_1(\mathcal{O}_{\mathbb{C}_p}), \hspace{20pt}\text{where }\lambda(t)\in\mathrm{GL}(W_\lambda).\] Note that any such function is entirely determined by its restriction to $B(\mathcal{O}_{\mathbb{C}_p})$, and indeed by its values on the subgroup \[\left\{\begin{pmatrix}1 & x & y\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \in B_1(\mathcal{O}_{\mathbb{C}_p})\right\},\] by a similar argument to before. We say such a function is $\mathcal{O}_L$-rigid analytic if it is an element of $\mathcal{O}_L\langle x,y\rangle \otimes_{\mathcal{O}_L}W_\lambda(\mathcal{O}_L)$. \begin{mdef} Write $\mathbb{A}_\lambda^P(\mathcal{O}_L)$ for the space of $\mathcal{O}_L$-rigid analytic functions on $B_1(\mathcal{O}_{\mathbb{C}_p})$ that transform like $\lambda$ under elements of $P$.
\end{mdef} \begin{mprop} Let $f\in\mathbb{A}_\lambda^P(\mathcal{O}_L)$. For $g\in B$, let $P_g(X,Y)\defeq f(g) \in W_\lambda(\mathcal{O}_L),$ where we consider elements of $W_\lambda$ as homogeneous polynomials of degree $k_2$ in two variables over $\mathcal{O}_L$. Define a function \[f':B(\mathcal{O}_{\mathbb{C}_p}) \longrightarrow \mathcal{O}_{\mathbb{C}_p}\] by $f'(g) = P_g(0,1)$. Then $f' \in \mathbb{A}_\lambda(\mathcal{O}_L)$. Moreover, the association $f \mapsto f'$ gives an isomorphism \[\mathbb{A}_\lambda^P(\mathcal{O}_L) \isorightarrow \bigg\{f(x,y,z) = \sum_{r,s,t\geq 0}\alpha_{r,s,t}x^ry^sz^t\in\mathbb{A}_\lambda(\mathcal{O}_L): \alpha_{r,s,t} = 0 \text{ for }t > k_2\bigg\}. \] \end{mprop} \begin{proof} Firstly, note that $f'$ is rigid analytic in three variables. In particular, let \[g \defeq \begin{pmatrix}1 & x & y\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \hspace{12pt}\text{and}\hspace{12pt} P_g(X,Y) = \sum_{t=0}^{k_2}\sum_{r,s\geq 0}\alpha_{r,s,t}x^ry^sX^tY^{k_2-t},\] using rigidity of $f$. Then consider \[g'\defeq\begin{pmatrix}1 & x & y\\ 0 & 1 & z\\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & z\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix}1 & x & y\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.\] Recall that $\mathrm{GL}_2(L)$ acts on $W_\lambda(L)$ by $w|\smallmatrd{a}{b}{c}{d}(X,Y) = w(bY+dX,aY+cX),$ so that \begin{equation}\label{image} f'(x,y,z) = P_{g'}(0,1) = P_g(X+zY,Y)\bigg|_{X=0,Y=1} = P_g(z,1) = \sum_{t=0}^{k_2}\sum_{r,s\geq 0}\alpha_{r,s,t}x^ry^sz^t. \end{equation} The rigidity follows. Now we show that $f'$ transforms under $T$ as $\lambda$. Let $g\in B(\mathcal{O}_{\mathbb{C}_p})$ and $t = (t_1,t_2,t_3) \in T(\mathcal{O}_{\mathbb{C}_p})$.
Then compute \[P_{tg}(X,Y) = f(tg)(X,Y) = t_1^{k_1}f(g)(t_3X, t_2Y) = t_1^{k_1}P_g(t_3X,t_2Y).\] Accordingly, we have \begin{align*}f'(tg) = P_{tg}(0,1) = t_1^{k_1}P_g(0,t_2) = t_1^{k_1}t_2^{k_2}P_g(0,1)= \lambda(t)f'(g), \end{align*} as required. $\\ \\$ Finally, it remains to show that the map induces the stated isomorphism. From equation (\ref{image}), it is clear that $f' = 0$ if and only if $f = 0$, so that the association $f\mapsto f'$ is injective. It is also clear that the image is the right-hand side of the isomorphism. This completes the proof. \end{proof} \begin{mdef}\begin{itemize} \item[(i)]From now on, in an abuse of notation using this isomorphism, we write $\mathbb{A}_\lambda^P(\mathcal{O}_L)$ for this subspace of $\mathbb{A}_\lambda(\mathcal{O}_L)$. \item[(ii)] Let $\mathbb{D}^P_\lambda(\mathcal{O}_L)$ denote the topological dual \[\mathbb{D}^P_\lambda(\mathcal{O}_L)\defeq \mathrm{Hom}_{\mathrm{cts}}(\mathbb{A}^P_\lambda(\mathcal{O}_L),\mathcal{O}_L),\] the space of \emph{partially rigid analytic distributions on $B(\mathcal{O}_{\mathbb{C}_p})$ of weight $\lambda$ over $\mathcal{O}_L$}. \end{itemize} \end{mdef} Note that by dualising the inclusion $\mathbb{A}_\lambda^P(\mathcal{O}_L)\subset \mathbb{A}_\lambda(\mathcal{O}_L)$, we get a surjective map \[\mathrm{pr}_\lambda^2: \mathbb{D}_\lambda(\mathcal{O}_L) \longrightarrow \mathbb{D}_\lambda^P(\mathcal{O}_L),\] where the notation will become clear in the sequel. \begin{mremnum}\label{pi} Note that $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ is only `partially' overconvergent, in the sense that it is overconvergent in the variables $x$ and $y$ and classical in $z$.
In the next section, we will introduce operators \[\pi_1 \defeq \begin{pmatrix}1 & 0 & 0\\ 0 & p & 0\\ 0 & 0 & p \end{pmatrix}\hspace{12pt}\text{and}\hspace{12pt}\pi_2 \defeq \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & p \end{pmatrix},\] whose product is the element $\pi$ considered by Pollack and Pollack in \cite{PP09}. We will ultimately lift a classical modular symbol to one that takes values in $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ using $\pi_1$, and then lift this further to a symbol that takes values in the space $\mathbb{D}_\lambda(\mathcal{O}_L)$ of fully overconvergent coefficients using $\pi_2$. \end{mremnum} \subsection{The action of $\Sigma$ and specialisation} \subsubsection{The weight $\lambda$ action} Let $\mathscr{X}$ denote the image of the Iwahori group $\mathcal{I}$ in $N^{\mathrm{opp}}(\mathcal{O}_{\mathbb{C}_p})\backslash G(\mathcal{O}_{\mathbb{C}_p})$ under the natural projection, and note that we can identify $\mathscr{X}$ with $B(\mathcal{O}_{\mathbb{C}_p})$ in a natural way. Let \[I \defeq \mathcal{I}\cap \mathrm{SL}_3(\mathbb{Z}). \] (Note that $I=\Gamma_0(p)$ in this setting, though we retain the notation for ease of comparison with Pollack and Pollack.) We also define $\pi_1$ and $\pi_2$ as in Remark \ref{pi}, and let $\Sigma$ be the semigroup generated by $I$, $\pi_1$ and $\pi_2$.
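The effect of $\pi_1$ and $\pi_2$ on the unipotent coordinates $(x,y,z)$, via the conjugation action described in the next paragraph, can be verified in a few lines of Python (a toy check with exact rationals; the helper names `unipotent` and `conj_by_diag` are our own):

```python
from fractions import Fraction as F

def unipotent(x, y, z):
    # the matrix of N identified with the triple (x, y, z)
    return [[F(1), F(x), F(y)], [F(0), F(1), F(z)], [F(0), F(0), F(1)]]

def conj_by_diag(d, n):
    # pi^{-1} n pi for pi = diag(d): entrywise, n[i][j] -> n[i][j]*d[j]/d[i]
    return [[n[i][j] * F(d[j], d[i]) for j in range(3)] for i in range(3)]

p = 7
pi1, pi2 = (1, p, p), (1, 1, p)
```

Conjugation by $\pi_1$ sends $(x,y,z)$ to $(px,py,z)$ and conjugation by $\pi_2$ sends it to $(x,py,pz)$, matching the explicit formulas for the actions of $\pi_1$ and $\pi_2$ on $\mathbb{A}_\lambda(\mathcal{O}_L)$ recorded below.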
$\\ \\$ Note that $I$ acts on $N^{\mathrm{opp}}(\mathcal{O}_{\mathbb{C}_p})\backslash G(\mathcal{O}_{\mathbb{C}_p})$ by right multiplication, and as $\pi$ normalises $N^{\mathrm{opp}}$, we also have a right action of $\pi$ on this space by \[N^{\mathrm{opp}}(\mathcal{O}_{\mathbb{C}_p})g|\pi = N^{\mathrm{opp}}(\mathcal{O}_{\mathbb{C}_p})\pi^{-1}g\pi.\] Thus we have an action of $\Sigma$ on this space. This action preserves $\mathscr{X}$ and hence gives rise to a right action of $\Sigma$ on $B(\mathcal{O}_{\mathbb{C}_p})$. This in turn gives a left action of $\Sigma$ on $\mathbb{A}_\lambda(\mathcal{O}_L)$ by $\gamma\cdot f(b) = f(b|\gamma),$ and dually a right action of $\Sigma$ on $\mathbb{D}_\lambda(\mathcal{O}_L)$ by $\mu|\gamma(f) = \mu(\gamma\cdot f).$ In \cite{PP09}, Lemma 2.1, Pollack and Pollack give an explicit description of this action. We recap their results: \begin{mlem}\label{explicitaction}\begin{itemize} \item[(i)] Let $\lambda = (k_1,k_2,0)$. For $\gamma \in I$, the weight $\lambda$ action of $\gamma$ on $f\in\mathbb{A}_\lambda(\mathcal{O}_L)$ is given by \begin{align*}(\gamma\cdot f)(x,y,z) = &(a_{11}+a_{21}x + a_{31}y)^{k_1-k_2}(m_{33}-m_{13}y-m_{23}z +m_{13}xz)^{k_2} \times\\ &f\bigg(\frac{a_{12}+a_{22}x+a_{32}y}{a_{11}+a_{21}x + a_{31}y},\frac{a_{13}+a_{23}x+a_{33}y}{a_{11}+a_{21}x + a_{31}y}, \\ &\hspace{100pt}\frac{-m_{32} + m_{12}y + m_{22}z - m_{12}xz}{m_{33}-m_{13}y-m_{23}z+m_{13}xz}\bigg), \end{align*} where $\gamma = (a_{ij})$ and $m_{ij}$ is the $(i,j)$th minor of $\gamma$. \item[(ii)]We have \[\pi_1\cdot f(x,y,z) = f(px,py,z)\] and \[\pi_2\cdot f(x,y,z) = f(x,py,pz).\] \end{itemize} \end{mlem} \begin{proof} For part (i), see \cite{PP09}, Lemma 2.1.
For part (ii), this is easily checked by computing \[\pi_1^{-1}\begin{pmatrix}1&x&y\\0&1&z\\0&0&1\end{pmatrix}\pi_1 = \begin{pmatrix}1&px&py\\0&1&z\\0&0&1\end{pmatrix}.\] The case of $\pi_2$ is similar. \end{proof} \begin{mprop} The action of $\Sigma$ preserves the subspace $\mathbb{A}_\lambda^P(\mathcal{O}_L)$ of $\mathbb{A}_\lambda(\mathcal{O}_L)$. \end{mprop} \begin{proof} The space $\mathbb{A}_\lambda^P(\mathcal{O}_L)$ is the span of the functions $x^ry^sz^t$ with $t\leq k_2$ (under suitable restrictions on the coefficients). So it suffices to show that $\gamma \cdot x^ry^sz^t$ lies in this span, which is clear from Lemma \ref{explicitaction} above. \end{proof} \begin{mcor} The map $\mathbb{D}_\lambda(\mathcal{O}_L) \rightarrow \mathbb{D}_\lambda^P(\mathcal{O}_L)$ given by dualising the inclusion is equivariant with respect to the action of $\Sigma$. \end{mcor} \subsubsection{Specialisation to weight $\lambda$}\label{specialisation} We want to exhibit a map from overconvergent to classical coefficients, which we will call \emph{specialisation to weight $\lambda$}. To this end, let $v_\lambda$ be a highest weight vector in $V_\lambda(\mathcal{O}_L)$ (which we take to be a \emph{right} representation of $G$). More precisely, this is an element satisfying \[v_\lambda|t = \lambda(t)v_\lambda \hspace{4pt}\forall t \in T(\mathcal{O}_L), \hspace{12pt}v_\lambda|n = v_\lambda \hspace{4pt}\forall n \in N^{\mathrm{opp}}(\mathcal{O}_L).\] In particular, we can define a map \begin{align*} f_\lambda: G(\mathcal{O}_L) &\longrightarrow V_\lambda(\mathcal{O}_L)\\ g &\longmapsto v_\lambda|g.
\end{align*} Since we have invariance under $N^{\mathrm{opp}}$, this function descends to $N^{\mathrm{opp}}\backslash G$. We can then restrict this function to (the $\mathcal{O}_L$-points of) $\mathscr{X}$. \begin{mlem} Let $\lambda = (k_1,k_2,0)\in\mathbb{Z}^3.$ Then $V_\lambda(\mathcal{O}_L)$ can be realised as a subrepresentation of $\mathrm{Sym}^{k_1}(\mathcal{O}_L^3)\otimes \mathrm{Sym}^{k_2}(\mathcal{O}_L^3)$, and the highest weight vector is \[v_\lambda = \sum_{i=0}^{k_2}(-1)^i\binomc{k_2}{i}X^{k_1-i}Y^i\otimes U^iV^{k_2-i},\] where a general element has the form $\sum P(X,Y,Z)\otimes Q(U,V,W).$ \end{mlem} \begin{proof} See \cite{PP09}, Remark 2.4.3. \end{proof} \begin{mprop} We have $f_\lambda\bigg|_{\mathscr{X}} \in \mathbb{A}^P_\lambda(\mathcal{O}_L)\otimes V_\lambda(\mathcal{O}_L)$. \end{mprop} \begin{proof}We explicitly compute $v_\lambda|g$, where \[g = \threematrix{1}{x}{y}{1}{z}{1}.\] We see that this is equal to \begin{equation}\label{rholambda1} v_\lambda|g = \sum_{i=0}^{k_2}(-1)^i\binomc{k_2}{i}(X+xY+yZ)^{k_1-i}(Y+zZ)^i\otimes (U+xV+yW)^i(V+zW)^{k_2-i}. \end{equation} It is easy to see from this that the coefficient of each monomial is an element of $\mathbb{A}_\lambda^P(\mathcal{O}_L)$ (and in particular that the maximal degree of $z$ in this expression is $k_2$), and we conclude the result. \end{proof} For a distribution $\mu\in\mathbb{D}^P_\lambda(\mathcal{O}_L)$, define an `evaluation at $\mathbb{A}^P_\lambda(\mathcal{O}_L)\otimes V_\lambda(\mathcal{O}_L)$' map by setting \[\mu(f\otimes v) = \mu(f)\otimes v \in V_\lambda(\mathcal{O}_L).\] In particular, we can evaluate at $f_\lambda$.
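The shape of the highest weight vector can be checked by machine. The following Python sketch (our own toy model, representing elements of $\mathrm{Sym}^{k_1}\otimes\mathrm{Sym}^{k_2}$ by coefficient dictionaries) verifies that the vector $\sum_{i=0}^{k_2}(-1)^i\binom{k_2}{i}X^{k_1-i}Y^i\otimes U^iV^{k_2-i}$ is invariant under the right action of the one-parameter lower-unipotent subgroup in the $(2,1)$ entry; invariance under the other two lower-unipotent directions is immediate, since the vector involves neither $Z$ nor $W$:

```python
from math import comb
from collections import defaultdict

def v_lambda(k1, k2):
    # The highest weight vector in Sym^{k1} (x) Sym^{k2}, stored as
    # {(deg X, deg Y, deg U, deg V): coefficient}.
    v = defaultdict(int)
    for i in range(k2 + 1):
        v[(k1 - i, i, i, k2 - i)] += (-1) ** i * comb(k2, i)
    return dict(v)

def lower_translate(v, a):
    # Right action of the lower-unipotent element with parameter a in
    # the (2,1) entry: substitute Y -> Y + a*X and V -> V + a*U.
    out = defaultdict(int)
    for (dx, dy, du, dv), c in v.items():
        for j in range(dy + 1):
            for m in range(dv + 1):
                out[(dx + j, dy - j, du + m, dv - m)] += (
                    c * comb(dy, j) * comb(dv, m) * a ** (j + m))
    return {key: c for key, c in out.items() if c != 0}
```

Replacing $\binom{k_2}{i}$ by any other choice of coefficients breaks this invariance, which pins down the binomial appearing in the lemma.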
\begin{mdef}Define the \emph{specialisation map at weight $\lambda$} to be the map \[\mathrm{pr}_\lambda^1 : \mathbb{D}^P_\lambda(\mathcal{O}_L) \longrightarrow V_\lambda(\mathcal{O}_L)\] given by evaluation at $f_\lambda \in \mathbb{A}^P_\lambda(\mathcal{O}_L)\otimes V_\lambda(\mathcal{O}_L).$ \end{mdef} This map is $I$-equivariant, but not $\pi_i$-equivariant. As in \cite{PP09}, we introduce a twisted action of the $\pi_i$ to get around this. \begin{mdef} \label{twisted} Define a (right) action of $\Sigma$ on $V_\lambda(L)$ by \begin{align*} v\star \gamma &= v|\gamma, \hspace{4pt}\gamma \in I,\\ v\star \pi_i &= \lambda(\pi_i)^{-1}v|\pi_i. \end{align*} Let $V_\lambda^\star(L)$ denote the module $V_\lambda(L)$ with this twisted action. \end{mdef} Then we see that: \begin{mlem} The map $\mathrm{pr}_\lambda^1: \mathbb{D}^P_\lambda(L) \longrightarrow V_\lambda^\star(L)$ is $\Sigma$-equivariant. \end{mlem} \begin{mdef} Let $L_\lambda(\mathcal{O}_L) \defeq \mathrm{pr}_\lambda^1(\mathbb{D}_\lambda^P(\mathcal{O}_L))\subset V_\lambda^\star(L).$ Note that this is stable under the $\star$-action of $\Sigma$ since $\mathrm{pr}_\lambda^1$ is $\Sigma$-equivariant. \end{mdef} We have an action of $\Gamma \subset I$ on these coefficient spaces. In particular, we can define the group cohomology of these coefficient spaces, and then note that, for each integer $r$, the map $\mathrm{pr}_\lambda^1$ induces a map \[\rho_\lambda^1 \defeq \rho_\lambda^1(r): \mathrm{H}^r(\Gamma,\mathbb{D}^P_\lambda(\mathcal{O}_L)) \longrightarrow \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L)).\] These spaces come equipped with the natural Hecke action on cohomology, and the action of the $U_p$ operator is given by the matrix $\pi = \pi_1\pi_2$. \section{Filtrations and control theorems for $\mathrm{SL}_3$} We recall what we have done so far.
For a weight $\lambda = (k_1,k_2,0) \in \mathbb{Z}^3$, we defined a space $L_\lambda(\mathcal{O}_L)$ of classical coefficients, a space $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ of partially overconvergent coefficients, and a space $\mathbb{D}_\lambda(\mathcal{O}_L)$ of fully overconvergent coefficients (where $\mathbb{D}_\lambda(\mathcal{O}_L)$ is as defined in \cite{PP09}). We also defined maps $\mathrm{pr}_\lambda^i$ between these coefficient modules, and these induce maps \[ \mathrm{H}^r(\Gamma,\mathbb{D}_\lambda(\mathcal{O}_L)) \labelrightarrow{\rho_\lambda^2} \mathrm{H}^r(\Gamma, \mathbb{D}_\lambda^P(\mathcal{O}_L)) \labelrightarrow{\rho_\lambda^1} \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L))\] on the cohomology. $\\ \\$ In this section, we prove that if we restrict to the simultaneous \emph{small-slope} eigenspaces of the operators on the cohomology given by $\pi_1$ and $\pi_2$, the composition $\rho_\lambda$ of these maps is an isomorphism. We give the definition of small slope now. \begin{mdef}\label{heckeatp} Let $U_{p,i}$ be the operator on the cohomology induced by the element $\pi_i$ of Remark \ref{pi}, for $i=1,2$. We call these operators the \emph{Hecke operators at $p$}. \end{mdef} \begin{mdef}Let $\phi$ be an eigensymbol at $p$ (with classical or overconvergent coefficients) of weight $\lambda = (k_1,k_2,0)$, and write $U_{p,i}\phi = \alpha_i \phi$ for $i = 1,2$. We say $\phi$ is \emph{small slope} at $p$ if \[v_p(\alpha_1) < k_1-k_2+1 \hspace{12pt}\text{and} \hspace{12pt} v_p(\alpha_2) < k_2+1.\] \end{mdef} In particular, we will show that the restriction of $\rho_\lambda$ to the small slope subspaces is an isomorphism. We use two applications of Theorem \ref{liftingtheorem} to prove this.
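The small-slope condition is elementary to test. The following one-liner (the function name is ours) renders the definition above in Python; note in particular that ordinary classes, where both eigenvalues are $p$-adic units, are small slope for every dominant $\lambda$:

```python
def is_small_slope(vp_alpha1, vp_alpha2, k1, k2):
    # Small slope at p for weight lambda = (k1, k2, 0) with k1 >= k2 >= 0:
    # v_p(alpha_1) < k1 - k2 + 1 and v_p(alpha_2) < k2 + 1.
    return vp_alpha1 < k1 - k2 + 1 and vp_alpha2 < k2 + 1
```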
\subsection{Lifting to partially overconvergent coefficients} We now define a filtration on the module $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ that allows us to apply Theorem \ref{liftingtheorem}. \subsubsection{Filtrations on $\mathbb{D}_\lambda^P(\mathcal{O}_L)$} \begin{mdef}Define \[F^N\mathbb{D}_\lambda^P(\mathcal{O}_L) \defeq \left\{\mu \in \mathbb{D}_\lambda^P(\mathcal{O}_L): \mu(x^ry^sz^t) \in \pi_L^{N-(r+s)}\mathcal{O}_L\right\}.\] \end{mdef} \begin{mprop}\label{sigmastable} The filtration $F^N\mathbb{D}_\lambda^P(\mathcal{O}_L)$ is stable under the action of $\Sigma$. \end{mprop} \begin{proof} Let $\mu\in F^N\mathbb{D}_\lambda^P(\mathcal{O}_L).$ We know that, for $\gamma = (a_{ij}) \in I$, we have \begin{align*}\gamma\cdot x^ry^sz^t = &(a_{12}+a_{22}x+a_{32}y)^r(a_{13}+a_{23}x+a_{33}y)^s\\ &\times(-m_{32}+m_{22}z-(m_{12}z)x+m_{12}y)^t(a_{11}+a_{21}x+a_{31}y)^{k_1-k_2-r-s}\\ &\hspace{60pt}\times(m_{33}-m_{23}z-(m_{13}z)x-m_{13}y)^{k_2-t}, \end{align*} where $m_{ij}$ is the $(i,j)$th minor of $\gamma$, using Lemma \ref{explicitaction}. Write this expansion as \[\gamma\cdot x^ry^sz^t = \sum_{a,b\geq 0}\beta_{ab}(z)x^ay^b,\] where $\beta_{ab}(z)$ is a polynomial in $z$ of degree at most $k_2$. Then note that $p$ divides the terms $a_{21},a_{31},a_{32},m_{12},m_{13}$ and $m_{23}$, whilst the terms $a_{11},a_{22},a_{33},m_{22}$ and $m_{33}$ are all $p$-adic units. In particular, we examine the $p$-divisibility conditions on the coefficients $\beta_{ab}(z)$. Any monomial $x^ay^b$ coming from the first bracket in this expression has coefficient divisible by $p^{a+b-r}$, since $p|a_{32}$. Similarly, any such monomial in the second bracket has coefficient divisible by $p^{a+b-s}$.
Moreover, since in the remaining three brackets $p$ divides the coefficient of both $x$ and $y$ before expanding, any monomial in $x^ay^b$ arising from these brackets is divisible by $p^{a+b}$. Combining these observations, we see that $p^{a+b-(r+s)}$ divides $\beta_{ab}(z)$. Since we already know that $\mu(x^ay^bz^c) \in \pi_L^{N-(a+b)}\mathcal{O}_L$ for all $c\leq k_2$, we now see that \[\mu(\beta_{ab}(z)x^ay^b) \in p^{a+b-(r+s)}\pi_L^{N-(a+b)}\mathcal{O}_L \subset \pi_L^{N-(r+s)}\mathcal{O}_L,\] as required. $\\ \\$ Since $\pi_1$ and $\pi_2$ act on such monomials by multiplying by a non-negative power of $p$, they also preserve the filtration. Thus the filtration is stable under the action of $\Sigma$. \end{proof} We actually need a slightly finer filtration. \begin{mdef} Define \[\mathcal{F}^N\mathbb{D}_\lambda^P(\mathcal{O}_L) \defeq F^N\mathbb{D}_\lambda^P(\mathcal{O}_L)\cap \ker(\mathrm{pr}_\lambda^1).\] \end{mdef} Since $\mathrm{pr}_\lambda^1$ is $\Sigma$-equivariant, this filtration is also $\Sigma$-stable. The crux of our argument is then: \begin{mprop}\label{kernelcond} Suppose $\mu\in \ker(\mathrm{pr}_\lambda^1).$ Then \[\mu(x^ry^sz^t) = 0 \hspace{12pt}\text{for all }r+s \leq k_1-k_2, \hspace{4pt}0\leq t\leq k_2.\] \end{mprop} \begin{proof} We explicitly examine the map $\mathrm{pr}_\lambda^1$. Earlier, in equation (\ref{rholambda1}), we gave a formula for the expression $f_\lambda(x,y,z)$. If $\mu \in \ker(\mathrm{pr}_\lambda^1)$, then in particular $\mu(f_\lambda(x,y,z)) = 0$. We consider the monomials including the term $U^{k_2}$, keeping the notation of the previous section.
Such a term can occur only for $i=k_2$, so that these terms all appear in \[(-1)^{k_2}(X+xY+yZ)^{k_1-k_2}(Y+zZ)^{k_2}\otimes U^{k_2}.\] By expanding out this bracket, and considering the coefficients of each monomial, we see that we have $\mu(x^ry^sz^t) = 0$ for at least the range of $r,s$ and $t$ specified by the proposition. \end{proof} \begin{mrem}Note that, for general $\lambda$, this condition on $r+s$ is optimal. In particular, consider $\lambda = (k,1,0)$, for some integer $k\geq 1$. Then if $\mu\in\ker(\rho_\lambda^1)$, we do not necessarily have $\mu(x^k) = 0$, so in particular we cannot say anything general about the values $\mu(x^ry^s)$ with $r+s>k-1$. \end{mrem} This filtration satisfies the conditions of Theorem \ref{liftingtheorem}, as we see by: \begin{mlem}\label{partb} Let $\mu \in \mathcal{F}^N\mathbb{D}_\lambda^P(\mathcal{O}_L)$, and let $\alpha \in \mathcal{O}_L$ with $v_p(\alpha)<k_1-k_2+1$. Then \[\mu|\pi_1 \in \alpha\mathcal{F}^{N+1}\mathbb{D}_\lambda^P(\mathcal{O}_L).\] \end{mlem} \begin{proof} We have $\mu|\pi_1(x^ry^sz^t) = p^{r+s}\mu(x^ry^sz^t)$. From Proposition \ref{kernelcond}, we see that if $r+s\leq k_1-k_2$, then $\mu(x^ry^sz^t) = 0$. In particular, we have \[\mu|\pi_1(x^ry^sz^t) \in p^{k_1-k_2+1}\pi_L^{N-(r+s)}\mathcal{O}_L.\] As $v_p(\alpha)<k_1-k_2+1$, and $v_p(\alpha)$ is an integer multiple of $v_p(\pi_L)$, we have $p^{k_1-k_2+1}\in \alpha \pi_L\mathcal{O}_L$, so that \[\mu|\pi_1(x^ry^sz^t) \in \alpha\pi_L^{1+N-(r+s)}\mathcal{O}_L.\] Thus $\mu|\pi_1 \in \alpha\mathcal{F}^{N+1}\mathbb{D}_\lambda^P(\mathcal{O}_L)$, as required.
\end{proof} \subsubsection{A submodule of $\mathbb{D}_\lambda^P(\mathcal{O}_L)$} We require one further definition before we can apply Theorem \ref{liftingtheorem}; namely, a submodule of $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ that will play the role of $D^\alpha$ in condition (v) of Notation \ref{setup}. \begin{mdef} Let $\alpha\in\mathcal{O}_L$. Define \[\mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L) \defeq \left\{\mu\in\mathbb{D}_\lambda^P(\mathcal{O}_L): \mu(x^ry^sz^t)\in \alpha p^{-(r+s)}\mathcal{O}_L\right\}.\] \end{mdef} \begin{mprop} The subspace $\mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$ is stable under the action of $\Sigma$. \end{mprop} \begin{proof} Let $\mu \in \mathbb{D}_\lambda^P(\mathcal{O}_L)$ and $\gamma\in I$, and recall the proof of Proposition \ref{sigmastable}, in particular the computation \[\mu|\gamma(x^ry^sz^t) = \sum_{a,b\geq 0}\mu(\beta_{ab}(z)x^ay^b),\] where $p^{a+b-(r+s)}|\beta_{ab}(z)$. Now take $\mu$ to be in the smaller space $\mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$. Then $\mu(\beta_{ab}(z)x^ay^b) \in p^{a+b-(r+s)}\alpha p^{-(a+b)}\mathcal{O}_L = \alpha p^{-(r+s)}\mathcal{O}_L.$ Thus $\mu|\gamma \in \mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$, as required. $\\ \\$ As $\pi_1$ and $\pi_2$ act on monomials by multiplying by non-negative powers of $p$, stability under these operators is clear. \end{proof} \begin{mlem}\label{parta} Suppose $\mu\in\mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$. Then \[\mu|\pi_1 \in \alpha\mathbb{D}_\lambda^P(\mathcal{O}_L).\] \end{mlem} \begin{proof} Consider $\mu|\pi_1(x^ry^sz^t) = p^{r+s}\mu(x^ry^sz^t)$.
Since $\mu \in \mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$, we see that $\mu|\pi_1(x^ry^sz^t) \in \alpha\mathcal{O}_L,$ and the result follows immediately. \end{proof} \subsubsection{Summary and results} We can now apply Theorem \ref{liftingtheorem} to the small slope subspace in this situation. In particular, in the set-up of that theorem, let $D = \mathbb{D}_\lambda^P(\mathcal{O}_L)$ and $D^\alpha = \mathbb{D}_\lambda^{P,\alpha}(\mathcal{O}_L)$. We have written down a filtration of this space that satisfies the conditions of Theorem \ref{liftingtheorem}. In particular, we have all the objects of Notation \ref{setup} (i)-(v), and we have shown condition (a) of the theorem in Lemma \ref{parta} and condition (b) in Lemma \ref{partb}. So we have proved: \begin{mprop}\label{partial} Let $\alpha\in\mathcal{O}_L$ with $v_p(\alpha)<k_1-k_2+1.$ Let $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ be the module of partially overconvergent coefficients defined in Section \ref{ovcgtcoeffs}, and let $L_\lambda(\mathcal{O}_L) = \mathrm{pr}_\lambda^1(\mathbb{D}_\lambda^P(\mathcal{O}_L))$. Then the restriction \[\rho_\lambda^1 : \mathrm{H}^r(\Gamma, \mathbb{D}_\lambda^P(\mathcal{O}_L))^{U_{p,1}=\alpha} \isorightarrow \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L))^{U_{p,1}=\alpha}\] of $\rho_\lambda^1$ to the $\alpha$-eigenspaces of the $U_{p,1}$ operator is an isomorphism. \end{mprop} \subsection{From partial to fully overconvergent coefficients} We now change direction and focus on the action of the $U_{p,2}$ operator induced from $\pi_2$. In particular, by applying the theorem again with the $U_{p,2}$ operator, we can lift from partially to fully overconvergent coefficients. As the results are very similar to, and in many cases simpler than, those above, we present the material here in less detail.
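The valuation bookkeeping that drives both lifting steps can be checked in a toy computation. The following sketch is purely illustrative and is not part of the argument: it works only with $p$-adic valuations of moments, takes $L = \mathbb{Q}_p$ (so $\pi_L = p$ and valuations are integers), and ignores the $z$-variable; the function name is ours, not the paper's.

```python
# Toy check of the valuation bookkeeping in Lemma partb (illustrative only;
# assumes L = Q_p, so pi_L = p).  A "distribution" is modelled by the
# valuations of its moments mu(x^r y^s): membership in F^N n ker(pr^1)
# means v(mu(x^r y^s)) >= N - (r + s) (and >= 0, values lying in O_L),
# with mu(x^r y^s) = 0 whenever r + s <= k1 - k2.

def check_lifting_step(k1, k2, N, v_alpha, max_deg=10):
    """Verify: if v_p(alpha) < k1 - k2 + 1, then every surviving moment of
    (mu | pi_1) / alpha lies one step deeper in the filtration, i.e. has
    valuation >= max(0, (N + 1) - (r + s))."""
    assert v_alpha < k1 - k2 + 1
    for r in range(max_deg):
        for s in range(max_deg):
            if r + s <= k1 - k2:
                continue  # these moments vanish by the kernel condition
            v_mu = max(0, N - (r + s))        # worst case allowed in F^N
            v_new = v_mu + (r + s) - v_alpha  # pi_1 scales by p^(r+s)
            assert v_new >= max(0, (N + 1) - (r + s))
    return True

# The bound v_p(alpha) < k1 - k2 + 1 is exactly what makes the step work:
print(check_lifting_step(k1=5, k2=2, N=0, v_alpha=3))
print(check_lifting_step(k1=5, k2=2, N=4, v_alpha=0))
```

The point of the sketch is that each application of $\pi_1$ multiplies a surviving moment by at least $p^{k_1-k_2+1}$, which more than cancels the division by $\alpha$ and leaves one spare power of $p$ to deepen the filtration.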
$\\ \\$ Define a filtration on $\mathbb{D}_\lambda(\mathcal{O}_L)$ by \[\mathcal{F}^N\mathbb{D}_\lambda(\mathcal{O}_L) \defeq \left\{\mu \in \mathbb{D}_\lambda(\mathcal{O}_L): \mu(x^ry^sz^t) \in \pi_L^{N-t}\mathcal{O}_L\right\}\cap\ker(\mathrm{pr}_\lambda^2).\] This is $\Sigma$-stable by an argument very similar to the one above. We also define, for $\alpha\in\mathcal{O}_L$, \[\mathbb{D}_\lambda^\alpha(\mathcal{O}_L) \defeq \left\{\mu\in\mathbb{D}_\lambda(\mathcal{O}_L): \mu(x^ry^sz^t)\in \alpha p^{-t}\mathcal{O}_L\right\},\] which is also easily seen to be $\Sigma$-stable and to satisfy the conditions required of $D^\alpha$ in Theorem \ref{liftingtheorem}. When $v_p(\alpha)<k_2+1,$ we see that if $\mu\in\mathcal{F}^N\mathbb{D}_\lambda(\mathcal{O}_L)$, then $\mu|\pi_2 \in \alpha\mathcal{F}^{N+1}\mathbb{D}_\lambda(\mathcal{O}_L)$, again by an argument similar to that above, after studying the kernel of $\mathrm{pr}_\lambda^2$. Putting this together and using Theorem \ref{liftingtheorem}, we get: \begin{mprop}\label{full} Let $\alpha \in \mathcal{O}_L$ with $v_p(\alpha)<k_2+1$. Let $\mathbb{D}_\lambda(\mathcal{O}_L)$ and $\mathbb{D}_\lambda^P(\mathcal{O}_L)$ be the modules of fully and partially overconvergent coefficients respectively, as defined in Section \ref{ovcgtcoeffs}. Then the restriction \[\rho_\lambda^2 : \mathrm{H}^r(\Gamma, \mathbb{D}_\lambda(\mathcal{O}_L))^{U_{p,2}=\alpha} \isorightarrow \mathrm{H}^r(\Gamma,\mathbb{D}_\lambda^P(\mathcal{O}_L))^{U_{p,2}=\alpha}\] of $\rho_\lambda^2$ to the $\alpha$-eigenspaces of the $U_{p,2}$ operator is an isomorphism.
\end{mprop} \subsection{Summary of results} We can combine the results of Propositions \ref{partial} and \ref{full} to obtain the following constructive non-critical slope control theorem for $\mathrm{SL}_3$. \begin{mthm}\label{controltheorem}Consider the set-up of Notation \ref{fullnot} in the Introduction. In particular, let $\lambda = (k_1,k_2,0)$ be a dominant algebraic weight, and let $\alpha_1,\alpha_2 \in \mathcal{O}_L$ with $v_p(\alpha_1)<k_1-k_2+1$ and $v_p(\alpha_2) < k_2+1.$ Then the restriction \[\rho_\lambda :\mathrm{H}^r(\Gamma,\mathbb{D}_\lambda(\mathcal{O}_L))^{U_{p,i}=\alpha_i} \rightarrow \mathrm{H}^r(\Gamma,L_\lambda(\mathcal{O}_L))^{U_{p,i}=\alpha_i}\] of the specialisation map to the simultaneous $\alpha_i$-eigenspaces of the $U_{p,i}$ operators, for $i = 1,2$, is an isomorphism. \end{mthm} \begin{proof} This is an immediate consequence of the two propositions. Indeed, both $\rho_\lambda^1$ and $\rho_\lambda^2$ are $\Sigma$-equivariant, so a partial lift of a simultaneous $U_{p,1}$- and $U_{p,2}$-eigensymbol is likewise a simultaneous eigensymbol, which can then be lifted further to fully overconvergent coefficients. \end{proof} \small \renewcommand{\refname}{\normalsize References} {} \end{document}
\begin{document} \address{Sabanci University, Orhanli, 34956 Tuzla, Istanbul, Turkey} \email{[email protected]} \address{Department of Mathematics, The Ohio State University, 231 West 18th Ave, Columbus, OH 43210, USA} \email{[email protected]} \begin{abstract} The Dirac operators $$ Ly = i \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \frac{dy}{dx} + v(x) y, \quad y = \begin{pmatrix} y_1\\y_2 \end{pmatrix}, \quad x\in[0,\pi],$$ with $L^2$-potentials $$ v(x) = \begin{pmatrix} 0 & P(x) \\ Q(x) & 0 \end{pmatrix}, \quad P,Q \in L^2 ([0,\pi]), $$ considered on $[0,\pi]$ with periodic, antiperiodic or Dirichlet boundary conditions $(bc)$, have discrete spectra, and the Riesz projections $$ S_N = \frac{1}{2\pi i} \int_{ |z|= N-\frac{1}{2} } (z-L_{bc})^{-1} dz, \quad P_n = \frac{1}{2\pi i} \int_{ |z-n|= \frac{1}{4} } (z-L_{bc})^{-1} dz $$ are well--defined for $|n| \geq N$ if $N $ is sufficiently large. It is proved that $$\sum_{|n| > N} \|P_n - P_n^0\|^2 < \infty, $$ where $P_n^0, \, n \in \mathbb{Z},$ are the Riesz projections of the free operator. Then, by the Bari--Markus criterion, the spectral Riesz decompositions $$ f = S_N f + \sum_{|n| >N} P_n f, \quad \forall f \in L^2, $$ converge unconditionally in $L^2.$ \end{abstract} \subjclass[2000]{34L40 (primary), 47B06, 47E05 (secondary)} \maketitle \section{Introduction} The question of unconditional convergence of the spectral decompositions is one of the central problems in Spectral Theory of Differential Operators \cite{D58,DS3,Mar,Na1,Na2,Min}.
In the case of ordinary differential operators on a finite interval, say $I = [0,\pi],$ \begin{equation} \label{i1} \ell (y) = \frac{d^m y}{dx^m} + \sum_{k=0}^{m-2} q_k (x) \frac{d^k y}{dx^k}, \quad q_k \in H^k (I), \end{equation} with strongly regular boundary conditions $(bc)$, the eigenfunction decomposition \begin{equation} \label{i2} f(x) = \sum_k c_k (f) u_k (x), \quad \ell (u_k) = \lambda_k u_k, \; u_k \in (bc), \end{equation} {\em converges unconditionally} for every $f \in L^2 (I)$ (see \cite{Mih,Kes,DS3}). If $(bc)$ are regular but not strictly regular, the system of root functions (eigenfunctions and associated functions) is in general not a basis in $L^2.$ But if the root functions are combined properly in disjoint groups $B_n, \; \cup B_n = \mathbb{N}, $ then the series \begin{equation} \label{i3} f(x) = \sum_n P_n f, \quad P_n f = \sum_{k\in B_n} c_k (f) u_k (x), \end{equation} {\em converges unconditionally in } $L^2 $ (see \cite{Shk79,Shk83}). Let us be more specific in the case of operators of second order \begin{equation} \label{i4} \ell (y) = y^{\prime \prime} +q(x) y, \quad 0 \leq x \leq \pi. \end{equation} Then Dirichlet $bc =Dir:\; y(0) = y(\pi) =0 $ is {\em strictly regular}; however, Periodic $ bc= Per^+: \; y(0) = y(\pi), \; y^\prime(0) = y^\prime(\pi) $ and Antiperiodic $ bc= Per^-: \; y(0) =- y(\pi), \; y^\prime(0) = - y^\prime(\pi) $ are {\em regular, but not strictly regular.} The analysis -- even if it becomes more difficult and technical -- can be extended to singular potentials $q \in H^{-1}.$ A. Savchuk and A. Shkalikov showed (\cite{SS03}, Theorems 2.7 and 2.8) that for Dirichlet $bc$, as well as for (properly understood) Periodic or Antiperiodic $bc,$ the spectral decomposition (\ref{i3}) converges unconditionally. An alternative proof of this result is given in \cite{DM19}. For Dirac operators (\ref{01}) the results on unconditional convergence are sparse and not complete so far \cite{Shk83,Mal,MalOr,TY01,TY02,HO06}.
The case of separate boundary conditions, at least for smooth potential $v,$ has been studied in detail in \cite{Mal}. For periodic (or antiperiodic) $bc,\;$ B. Mityagin proved unconditional convergence of the series (\ref{i3}) with $\dim P_n = 2, \; |n| \geq N(v),$ for potentials $v \in H^b, \; b >1/2 $ -- see Theorem 8.8 in \cite{Mit04} for a precise statement. Our techniques from \cite{DM19} for analyzing the resolvents $(\lambda-L_{bc})^{-1}$ of Hill operators under the weakest (in the Sobolev scale) assumption $v \in H^{-1}$ on the ``smoothness'' of the potential are adjusted and extended in the present paper to Dirac operators with potentials in $L^2.$ We prove (see Theorem \ref{thm1} for a precise statement) that if $v \in L^2$ and $bc = Per^\pm$ or $Dir,$ then the sequence of deviations $\|P_n -P_n^0\|$ is in $\ell^2.$ Then the Bari--Markus criterion (see \cite{B,M} or \cite{GK}, Ch.6, Sect.5.3, Theorem 5.2) shows that the spectral decomposition \begin{equation} \label{i5} f = S_N f + \sum_{|n| >N} P_n f, \quad \forall f \in L^2, \end{equation} where, for $|n| \geq N(v),$ \begin{equation} \label{i6} \dim P_n = \begin{cases} 2 & bc = Per^{\pm}\\ 1 & bc =Dir \end{cases}, \end{equation} {\em converges unconditionally.} This is Theorem \ref{thm2}, the main result of the present paper. Further analysis requires a thorough discussion of the algebraic structure of {\em regular} and {\em strictly regular} $bc$ for Dirac operators. Then we can claim a general statement which is an analogue of (\ref{i5})--(\ref{i6}), or Theorem \ref{thm2}, with $bc= Dir$ in the case of strictly regular boundary conditions, and $bc =Per^\pm $ in the case of regular but not strictly regular boundary conditions. We will give all the details in another paper.
\section{Preliminary results} Consider the Dirac operator on $ I= [0,\pi] $ \begin{equation} \label{01} Ly = i \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \frac{dy}{dx} + v(x)y, \end{equation} where \begin{equation} \label{02} v(x)= \begin{pmatrix} 0 & P(x) \\ Q(x) & 0 \end{pmatrix}, \quad y = \begin{pmatrix} y_1\\ y_2 \end{pmatrix}, \end{equation} and $v$ is an $L^2$--potential, i.e., $P,Q \in L^2 (I). $ We equip the space $ H^0 $ of $L^2 (I)$--vector functions $F= \begin{pmatrix} f_1\\f_2 \end{pmatrix}$ with the scalar product $$ \langle F,G \rangle = \frac{1}{\pi} \int_0^\pi \left ( f_1(x) \overline{g_1(x)} + f_2(x) \overline{g_2(x)} \right ) dx. $$ Consider the following boundary conditions $(bc):$ (a) {\em periodic} $Per^+ : \quad y(0) = y(\pi),$ i.e., $ y_1 (0) = y_1 (\pi) $ and $ y_2 (0) = y_2 (\pi); $ (b) {\em anti-periodic} $Per^- : \; y(0) = - y(\pi),$ i.e., $ y_1 (0) = -y_1 (\pi) $ and $ y_2 (0) = -y_2 (\pi); $ (c) {\em Dirichlet} $Dir: \quad y_1 (0) = y_2 (0), \; y_1 (\pi) = y_2 (\pi).$ The corresponding closed operator with domain \begin{equation} \label{07} \Delta_{bc} = \left \{f \in (W_1^2 (I))^2 \; : \; \; f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \in (bc) \right \} \end{equation} will be denoted by $L_{bc},$ or respectively, by $L_{Per^\pm} $ and $ L_{Dir}.$ If $v=0,$ i.e., $P\equiv 0, Q \equiv 0,$ we write $L^0_{bc}$ (or simply $L^0$), or $L^0_{Per^\pm}, L^0_{Dir}$ respectively.
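As a quick numerical sanity check on the free operator (an illustrative aside, not part of the paper's argument), one can verify that $y(x) = (e^{-inx}, e^{inx})$ satisfies $L^0 y = ny$ and obeys the Dirichlet conditions above for integer $n$; the finite-difference derivative below is our own device for the check.

```python
# Illustrative check: for v = 0, y(x) = (exp(-inx), exp(inx)) satisfies
# L^0 y = i*diag(1,-1)*y' = n*y, and the Dirichlet conditions
# y_1(0) = y_2(0), y_1(pi) = y_2(pi) hold for integer n.
import cmath

def L0(y1, y2, x, h=1e-6):
    """Apply the free Dirac operator i*diag(1,-1)*d/dx via central differences."""
    d1 = (y1(x + h) - y1(x - h)) / (2 * h)
    d2 = (y2(x + h) - y2(x - h)) / (2 * h)
    return 1j * d1, -1j * d2

n = 3
y1 = lambda x: cmath.exp(-1j * n * x)
y2 = lambda x: cmath.exp(1j * n * x)

for x in [0.3, 1.1, 2.7]:
    w1, w2 = L0(y1, y2, x)
    assert abs(w1 - n * y1(x)) < 1e-4 and abs(w2 - n * y2(x)) < 1e-4

# Dirichlet boundary conditions (c) above:
assert abs(y1(0) - y2(0)) < 1e-12
assert abs(y1(cmath.pi) - y2(cmath.pi)) < 1e-12
print("L0 y = n y verified for n =", n)
```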
Of course, it is easy to describe the spectra and eigenfunctions for $L^0_{bc}.$ (a) $Sp (L^0_{Per^+}) = \{n \;\text{even} \} = 2 \mathbb{Z}; $ each number $n \in 2\mathbb{Z} $ is a double eigenvalue, and the corresponding eigenspace is \begin{equation} \label{011} E^0_n = Span \{e^1_n, e^2_n\},\quad n \in 2 \mathbb{Z}, \end{equation} where \begin{equation} \label{012} \displaystyle e^1_n (x) = \begin{pmatrix} e^{-inx}\\ 0 \end{pmatrix}, \quad e^2_n (x) = \begin{pmatrix} 0\\ e^{inx} \end{pmatrix}; \end{equation} (b) $Sp (L^0_{Per^-}) = \{n \;\text{odd} \} = 2 \mathbb{Z}+ 1; $ the corresponding eigenspaces $E_n^0 $ are given by (\ref{011}) and (\ref{012}) but with $ n \in 2\mathbb{Z} +1; $ (c) $ Sp (L^0_{Dir}) = \{n \in \mathbb{Z} \};$ each eigenvalue $n $ is simple. The corresponding normalized eigenfunction is \begin{equation} \label{013} g_n (x) = \frac{1}{\sqrt{2}} \left (e^1_n + e^2_n \right ), \quad n \in \mathbb{Z}, \end{equation} so the corresponding (one-dimensional) eigenspace is \begin{equation} \label{014} G_n^0 = Span \{ g_n \}. \end{equation} We study the spectral properties of the operators $L_{Per^\pm} $ and $L_{Dir} $ by using their Fourier representations with respect to the eigenvectors of the corresponding free operators given above in (\ref{011})--(\ref{014}). Let \begin{equation} \label{03} P(x) = \sum_{m \in 2\mathbb{Z}} p(m) e^{imx}, \qquad Q(x) = \sum_{m \in 2\mathbb{Z}} q(m) e^{imx}, \end{equation} and \begin{equation} \label{04} P(x) = \sum_{m \in 1+2\mathbb{Z}} p_1(m) e^{imx}, \qquad Q(x) = \sum_{m \in 1+ 2\mathbb{Z}} q_1(m) e^{imx}, \end{equation} be, respectively, the Fourier expansions of the functions $P$ and $Q $ with respect to the systems $\{e^{imx}, \, m \in 2\mathbb{Z}\}$ and $\{e^{imx}, \, m \in 1+2\mathbb{Z}\}.$ Then \begin{equation} \label{05} \|v\|^2 =\sum_{m\in 2\mathbb{Z}} \left ( |p(m)|^2 + |q(m)|^2 \right ) =\sum_{m\in 1+2\mathbb{Z}} \left ( |p_1(m)|^2 + |q_1(m)|^2 \right ).
\end{equation} In its Fourier representation, the operator $L^0 $ is diagonal, and $V$ is defined by its action on vectors $e_n^1$ and $ e_n^2,$ with $n \in 2\mathbb{Z}$ for $bc = Per^+$ and $n \in 1+ 2\mathbb{Z}$ for $bc = Per^-.$ In view of (\ref{02}) and (\ref{03}), we have \begin{equation} \label{12} Ve_n^1 = \sum_{k \in n + 2\mathbb{Z}} q(k+n) e_k^2, \quad Ve_n^2 = \sum_{k \in n + 2\mathbb{Z}} p(-k-n) e_k^1, \end{equation} so, the matrix representation of $V$ is \begin{equation} \label{13} V \sim \begin{pmatrix} 0 & V^{12}\\ V^{21} & 0 \end{pmatrix}, \quad (V^{12})_{kn} = p(-k-n), \quad (V^{21})_{kn} = q(k+n). \end{equation} In the case of Dirichlet boundary conditions the operator $L^0$ is diagonal as well. The matrix representation of $V$ is given by the following lemma. \begin{Lemma} \label{lem1} Let $(g_n)_{n \in \mathbb{Z}} $ be the orthonormal basis of eigenfunctions of $L^0 $ in the case of Dirichlet boundary conditions. Then \begin{equation} \label{14} V_{kn}:= \langle Vg_n, g_k \rangle = W(k+n), \quad k,n \in \mathbb{Z}, \end{equation} with \begin{equation} \label{15} W(m) = \begin{cases} (p(-m) +q(m))/2 & m\; \text{even} \\ (p_1(-m) +q_1(m))/2 & m\; \text{odd} \end{cases}. \end{equation} \end{Lemma} The proof follows from a direct computation of $\langle Vg_n, g_k \rangle.$ Let us mention that the sequences $p_1 (m)$ and $q_1 (m)$ in (\ref{15}) are Hilbert transforms of $p(n)$ and $q(n)$ (see \cite{DM15}, Lemma 2 in Section 1.3), but we do not need this fact. In the following only the relation (\ref{05}) is essential.
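Lemma \ref{lem1} can be confirmed numerically for a sample potential (an illustrative check, not part of the proof; the trigonometric polynomials and coefficient values below are arbitrary choices of ours, and only even $m = k+n$ is tested so that the first line of (\ref{15}) applies):

```python
# Illustrative check of Lemma 1: for g_n = (exp(-inx), exp(inx))/sqrt(2),
# <V g_n, g_k> = W(k+n), with W(m) = (p(-m) + q(m))/2 for even m, where
# p, q are the Fourier coefficients of P, Q over {exp(imx), m even}.
# Sample potential: P(x) = 2 e^{2ix} + (1-i) e^{-4ix},  Q(x) = 3 + i e^{2ix}.
import cmath
from math import pi

p = {2: 2.0, -4: 1.0 - 1.0j}     # p(m), m even
q = {0: 3.0, 2: 1.0j}            # q(m), m even
P = lambda x: sum(c * cmath.exp(1j * m * x) for m, c in p.items())
Q = lambda x: sum(c * cmath.exp(1j * m * x) for m, c in q.items())

def inner_VG(n, k, M=20000):
    """<V g_n, g_k> = (1/pi) int_0^pi (f1 conj(g1) + f2 conj(g2)) dx by the
    midpoint rule, where V g_n = (P e^{inx}, Q e^{-inx}) / sqrt(2)."""
    h = pi / M
    total = 0.0 + 0.0j
    for j in range(M):
        x = (j + 0.5) * h
        f1 = P(x) * cmath.exp(1j * n * x) / 2 ** 0.5
        f2 = Q(x) * cmath.exp(-1j * n * x) / 2 ** 0.5
        g1 = cmath.exp(-1j * k * x) / 2 ** 0.5
        g2 = cmath.exp(1j * k * x) / 2 ** 0.5
        total += (f1 * g1.conjugate() + f2 * g2.conjugate()) * h
    return total / pi

def W(m):                        # even m only, per the first case of (15)
    return (p.get(-m, 0) + q.get(m, 0)) / 2

for (n, k) in [(1, 3), (0, 2), (-1, 5)]:
    assert abs(inner_VG(n, k) - W(k + n)) < 1e-5
print("Lemma 1 verified on sample potential")
```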
In view of (\ref{011})--(\ref{014}) the operator $R^0_\lambda = (\lambda -L^0)^{-1} $ is well defined, respectively, for $\lambda \not \in 2\mathbb{Z}$ if $bc = Per^+,$ $\lambda \not \in 1+ 2\mathbb{Z}$ if $bc = Per^-,$ and $\lambda \not \in \mathbb{Z}$ if $bc = Dir.$ The operator $R^0_\lambda$ is diagonal, and we have \begin{equation} \label{21} R^0_\lambda e^1_n = \frac{1}{\lambda -n} e_n^1, \quad R^0_\lambda e^2_n = \frac{1}{\lambda -n} e_n^2 \quad \text{for} \; bc= Per^\pm, \end{equation} and \begin{equation} \label{22} R^0_\lambda g_n = \frac{1}{\lambda -n} g_n \quad \text{for} \; bc = Dir. \end{equation} The standard perturbation type formulae for the resolvent $R_\lambda = (\lambda - L^0 -V)^{-1}$ are \begin{equation} \label{23} R_\lambda = (1-R_\lambda^0 V)^{-1} R_\lambda^0 = \sum_{k=0}^\infty (R_\lambda^0 V)^k R_\lambda^0, \end{equation} and \begin{equation} \label{24} R_\lambda = R_\lambda^0 (1-V R_\lambda^0 )^{-1} = \sum_{k=0}^\infty R_\lambda^0 (V R_\lambda^0)^k. \end{equation} The simplest conditions that guarantee convergence of the series (\ref{23}) or (\ref{24}) in $\ell^2 $ are $$ \|R_\lambda^0 V\| <1, \quad \mbox{respectively,} \quad \|V R_\lambda^0\| < 1. $$ In the case of Dirac operators there are no such good estimates, but there are good estimates for the norms of $(R_\lambda^0 V)^2 $ and $(V R_\lambda^0)^2 $ (see \cite{DM6} and \cite{DM15}, Section 1.2, for more comments). But now we are going to suggest another approach that is borrowed from the study of Hill operators with periodic singular potentials (see \cite{DM16,DM18,DM19}). Notice that one can write (\ref{23}) or (\ref{24}) as \begin{equation} \label{25} R_\lambda = R^0_\lambda + R^0_\lambda V R^0_\lambda + \cdots = K^2_\lambda + \sum_{m=1}^\infty K_\lambda(K_\lambda V K_\lambda)^m K_\lambda, \end{equation} provided \begin{equation} \label{26} (K_\lambda)^2 = R^0_\lambda.
\end{equation} In view of (\ref{21}) and (\ref{22}), we define an operator $K= K_\lambda $ with the property (\ref{26}) by \begin{equation} \label{31} K_\lambda e^1_n = \frac{1}{\sqrt{\lambda -n}} e_n^1, \quad K_\lambda e^2_n = \frac{1}{\sqrt{\lambda -n}} e_n^2 \quad \text{for} \; bc= Per^\pm, \end{equation} and \begin{equation} \label{32} K_\lambda g_n = \frac{1}{\sqrt{\lambda -n}} g_n \quad \text{for} \; bc = Dir, \end{equation} where $$\sqrt{z}= \sqrt{r} e^{i\varphi/2} \quad \mbox{if} \quad z= re^{i\varphi}, \;\; -\pi \leq \varphi < \pi. $$ Then $R_\lambda $ is well--defined if \begin{equation} \label{33} \|K_\lambda V K_\lambda \|_{\ell^2 \to \ell^2} <1. \end{equation} In view of (\ref{12}) and (\ref{31}), for periodic or anti--periodic boundary conditions $bc = Per^\pm,$ we have \begin{equation} \label{34} (K_\lambda V K_\lambda) e^1_n = \sum_k \frac{q(k+n)}{(\lambda -k)^{1/2}(\lambda -n)^{1/2}} e_k^2, \quad (K_\lambda V K_\lambda) e^2_n = \sum_k \frac{p(-k-n)}{(\lambda -k)^{1/2}(\lambda -n)^{1/2}} e_k^1, \end{equation} so, the Hilbert--Schmidt norm of the operator $K_\lambda V K_\lambda$ is given by \begin{equation} \label{35} \|K_\lambda V K_\lambda\|^2_{HS} = \sum_{k,m} \frac{|q(k+m)|^2}{|\lambda -k||\lambda -m|} +\sum_{k,m} \frac{|p(-k-m)|^2}{|\lambda -k||\lambda -m|}, \end{equation} where $k,m \in 2\mathbb{Z}$ for $bc = Per^+$ and $k,m \in 1+ 2\mathbb{Z}$ for $bc = Per^-.$ In an analogous way (\ref{14}), (\ref{15}) and (\ref{32}) imply, for Dirichlet boundary conditions $bc =Dir,$ \begin{equation} \label{37} (K_\lambda V K_\lambda) g_n = \sum_k \frac{W(k+n)}{(\lambda -k)^{1/2}(\lambda -n)^{1/2}} g_k, \quad k,n \in \mathbb{Z}, \end{equation} and therefore, we have \begin{equation} \label{38} \|K_\lambda V K_\lambda\|^2_{HS} = \sum_{k,m} \frac{|W(k+m)|^2}{|\lambda -k||\lambda -m|},\quad k,m \in \mathbb{Z}. 
\end{equation} For convenience, we set \begin{equation} \label{40} r(m) = \max (|p(m)|,|p(-m)|) + \max (|q(m)|,|q(-m)|), \quad m \in 2\mathbb{Z}, \end{equation} if $bc = Per^\pm,$ and \begin{equation} \label{41} r(m) = |W(m)|, \quad m \in \mathbb{Z}, \end{equation} if $bc = Dir.$ Now we define operators $\bar{V} $ and $\bar{K}_\lambda $ which dominate, respectively, $V$ and $K_\lambda,$ as follows: \begin{equation} \label{42} \bar{V}e_n^1 = \sum_{k \in n + 2\mathbb{Z}} r(k+n) e_k^2, \quad \bar{V}e_n^2 = \sum_{k \in n + 2\mathbb{Z}} r(k+n) e_k^1 \quad \text{for} \; bc= Per^\pm, \end{equation} \begin{equation} \label{43} \bar{V} g_n = \sum_{k \in \mathbb{Z}} r(k+n) g_k \quad \text{for} \; bc = Dir, \end{equation} and \begin{equation} \label{44} \bar{K}_\lambda e^1_n = \frac{1}{\sqrt{|\lambda -n|}} e_n^1, \quad \bar{K}_\lambda e^2_n = \frac{1}{\sqrt{|\lambda -n|}} e_n^2 \quad \text{for} \; bc= Per^\pm, \end{equation} \begin{equation} \label{45} \bar{K}_\lambda g_n = \frac{1}{\sqrt{|\lambda -n|}} g_n \quad \text{for} \; bc = Dir. \end{equation} Since the matrix elements of the operator $K_\lambda V K_\lambda$ do not exceed, in absolute value, the matrix elements of $\bar{K}_\lambda \bar{V} \bar{K}_\lambda,$ we estimate from above the Hilbert--Schmidt norm of the operator $K_\lambda V K_\lambda$ by one and the same formula: \begin{equation} \label{46} \|K_\lambda V K_\lambda\|^2_{HS} \leq \|\bar{K}_\lambda \bar{V} \bar{K}_\lambda\|^2_{HS} = \sum_{i,k} \frac{|r(i+k)|^2}{|\lambda -i||\lambda -k|}, \end{equation} where $i,k \in 2\mathbb{Z}$ if $bc = Per^+$ and $i,k \in 1+ 2\mathbb{Z}$ if $bc = Per^-,$ or $ i,k \in \mathbb{Z}$ if $bc =Dir.$ Next we estimate the Hilbert--Schmidt norm of the operator $\bar{K}_\lambda \bar{V} \bar{K}_\lambda$ for $ \lambda \in C_n = \{\lambda:\; |\lambda -n|= 1/2 \}.
$ For each $\ell^2$--sequence $x=(x(j))_{j \in \mathbb{Z}} $ and $m \in \mathbb{N}$ we set \begin{equation} \label{50} \mathcal{E}_m (x) = \left ( \sum_{|j|\geq m} |x(j)|^2 \right )^{1/2}. \end{equation} \begin{Lemma} \label{lem2} In the above notations, if $n\neq 0,$ then \begin{equation} \label{51} \|\bar{K}_\lambda \bar{V} \bar{K}_\lambda\|^2_{HS} = \sum_{i,k} \frac{|r(i+k)|^2}{|\lambda -i||\lambda -k|} \leq C \left ( \frac{\|r\|^2}{\sqrt{|n|}} + (\mathcal{E}_{|n|} (r))^2 \right ), \quad \lambda \in C_n, \end{equation} where $C$ is an absolute constant. \end{Lemma} {\em Remark:} For convenience, here and hereafter we denote by $C$ any absolute constant. \begin{proof} Since \begin{equation} \label{52} 2|\lambda - i| \geq |n-i| \quad \text{if} \; i \neq n, \; \lambda \in C_n= \{\lambda:\; |\lambda -n|= 1/2 \}, \end{equation} the sum in (\ref{51}) does not exceed $$ 4 |r(2n)|^2 + 4\sum_{k\neq n} \frac{|r(n+k)|^2}{|n-k|} + 4\sum_{i\neq n} \frac{|r(n+i)|^2}{|n-i|} + 4\sum_{i,k\neq n} \frac{|r(i+k)|^2}{|n-i||n-k|}. $$ In view of (\ref{t1}) and (\ref{t2}) in Lemma \ref{lemt1}, each of the above sums does not exceed the right-hand side of (\ref{51}), which completes the proof. \end{proof} {\em Corollary: There is $N \in \mathbb{N}$ such that} \begin{equation} \label{53} \|K_\lambda V K_\lambda\| \leq 1/2 \quad \text{for}\;\; \lambda \in C_n, \; |n| >N. \end{equation} \section{Core results} By our Theorem 18 in \cite{DM15} (about spectra localization), for sufficiently large $|n|,$ say $|n|>N,$ the operator $L_{Per^\pm}$ has exactly two (counted with their algebraic multiplicity) periodic (for even $n$) or antiperiodic (for odd $n$) eigenvalues inside the disc of radius $1/2$ with center $n.$ The operator $L_{Dir}$ has, for all sufficiently large $|n|,$ one eigenvalue in every such disc.
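The contour integrals defining the Riesz projections can be sketched numerically in a finite diagonal model of the free operator (an illustrative computation of ours, not part of the proof): since the free resolvent is diagonal, the projection over $C_n$ reduces to integrating $1/(z-m)$ around the circle for each eigenvalue $m$.

```python
# Illustrative sketch: for a diagonal model with simple eigenvalues m in Z,
# the Riesz projection P_n^0 = (1/2 pi i) \oint_{|z-n|=1/2} (z - L^0)^{-1} dz
# is the rank-one coordinate projection onto the eigenvalue n.
from math import pi
import cmath

def contour_coefficient(n, m, radius=0.5, T=400):
    """(1/2 pi i) \oint_{|z-n|=radius} dz/(z-m), trapezoid rule on the circle
    (exponentially accurate for periodic analytic integrands)."""
    total = 0.0 + 0.0j
    for j in range(T):
        theta = 2 * pi * j / T
        z = n + radius * cmath.exp(1j * theta)
        dz = radius * 1j * cmath.exp(1j * theta) * (2 * pi / T)
        total += dz / (z - m)
    return total / (2j * pi)

n = 2
for m in range(-5, 6):
    c = contour_coefficient(n, m)
    expected = 1.0 if m == n else 0.0  # residue theorem: only m = n contributes
    assert abs(c - expected) < 1e-10
print("P_n^0 acts as the coordinate projection onto eigenvalue n =", n)
```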
Let $P_n$ and $P_n^0 $ be the Riesz projections corresponding to $L$ and $L^0,$ i.e., $$ P_n = \frac{1}{2\pi i} \int_{C_n} (\lambda - L)^{-1} d\lambda, \quad P_n^0 = \frac{1}{2\pi i} \int_{C_n} (\lambda - L^0)^{-1} d\lambda, $$ where $C_n = \{\lambda: \; |\lambda - n| =1/2 \}. $ \begin{Theorem} \label{thm1} Suppose $L$ and $L^0$ are, respectively, the Dirac operator (\ref{01}) with an $L^2$ potential $v$ and the free Dirac operator, subject to periodic, antiperiodic or Dirichlet boundary conditions $bc =Per^\pm $ or $Dir.$ Then, there is $N \in \mathbb{N}$ such that for $|n| >N $ the Riesz projections $P_n$ and $P_n^0 $ corresponding to $L$ and $L^0$ are well defined and we have \begin{equation} \label{110} \sum_{|n|>N} \|P_n - P_n^0\|^2 < \infty. \end{equation} \end{Theorem} \begin{proof} We now present the proof of the theorem up to a few technical inequalities. They will be proved later in Section 4, Lemmas \ref{lemt1} and \ref{lemt2}. 1. Let us notice that the operator--valued function $K_\lambda$ is analytic in $ \mathbb{C} \setminus \mathbb{R}_+$. But (\ref{25}), (\ref{111}) below and all formulas of this section -- which are essentially variations of (\ref{25}) -- always have even powers of $K_\lambda,$ and $K_\lambda^2 = R_\lambda^0 $ is analytic on $\mathbb{C}\setminus Sp(L^0).$ Certainly, this justifies the use of the Cauchy formula or the Cauchy theorem when warranted. In view of (\ref{53}), the corollary after Lemma \ref{lem2}, if $|n|$ is sufficiently large then the series in (\ref{25}) converges. Therefore, \begin{equation} \label{111} P_n - P_n^0 = \frac{1}{2\pi i} \int_{C_n} \sum_{s=0}^\infty K_\lambda (K_\lambda V K_\lambda )^{s+1} K_\lambda d\lambda.
\end{equation} {\em Remark.} We are going to prove (\ref{110}) by estimating the Hilbert--Schmidt norms $\|P_n - P_n^0\|_{HS}$ which dominate $\|P_n - P_n^0\|.$ Of course, these norms are equivalent as long as the dimensions $\dim (P_n - P_n^0)$ are uniformly bounded, because for any finite dimensional operator $T $ we have $$ \|T\| \leq \|T\|_{HS} \leq (\dim T)^{1/2} \|T\|, $$ and in the context of this paper for all projections $\dim P_n, \;\dim P_n^0 \leq 2.$ 2. If $bc = Dir,$ then, by (\ref{013}), $$ \|P_n - P_n^0\|_{HS}^2 = \sum_{m,k \in \mathbb{Z}} |\langle (P_n - P_n^0)g_m,g_k \rangle|^2. $$ By (\ref{111}), we get $$ \langle (P_n - P_n^0)g_m,g_k \rangle = \sum_{s=0}^\infty I_n(s,k,m), $$ where $$ I_n(s,k,m)= \frac{1}{2\pi i} \int_{C_n}\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda g_m, g_k \rangle d\lambda. $$ Therefore, $$ \sum_{|n|>N} \|P_n - P_n^0\|_{HS}^2 \leq \sum_{s,t=0}^\infty \sum_{|n|>N} \sum_{m,k \in \mathbb{Z}} |I_n(s,k,m)|\cdot |I_n(t,k,m)|. $$ Now, the Cauchy inequality implies \begin{equation} \label{112} \sum_{|n|>N} \|P_n - P_n^0\|_{HS}^2 \leq \sum_{s,t=0}^\infty (A(s))^{1/2}(A(t))^{1/2}, \end{equation} where \begin{equation} \label{113} A(s)=\sum_{|n|>N}\sum_{m,k \in \mathbb{Z}} |I_n(s,k,m)|^2. \end{equation} Notice that $A(s)$ depends on $N$ but this dependence is suppressed in the notation. From the matrix representation of the operators $K_\lambda $ and $V$ we get \begin{equation} \label{128} \langle K_\lambda (K_\lambda V K_\lambda )^{s+1} K_\lambda g_m, g_k \rangle = \sum_{j_1,\ldots,j_s} \frac{W(k + j_1) W(j_1+j_2) \cdots W(j_s +m)}{(\lambda -k)(\lambda -j_1)\cdots (\lambda -j_s)(\lambda -m)}, \end{equation} and therefore, \begin{equation} \label{129} I_n (s,k,m)= \frac{1}{2\pi i}\int_{C_n} \sum_{j_1, \ldots j_s} \frac{W(k + j_1) W(j_1+j_2) \cdots W(j_s +m)}{(\lambda -k)(\lambda -j_1)\cdots (\lambda -j_s)(\lambda -m)} d\lambda.
\end{equation} In view of (\ref{41}), we have \begin{equation} \label{130} \left |\frac{W(k + j_1) W(j_1+j_2) \cdots W(j_s +m)}{(\lambda -k)(\lambda -j_1)\cdots (\lambda -j_s)(\lambda -m)} \right | \leq B(\lambda, k,j_1,\ldots,j_s,m), \end{equation} where \begin{equation} \label{133} B(\lambda, k,j_1,\ldots,j_s,m)= \frac{r(k+j_1)r(j_1 +j_2) \cdots r(j_{s-1}+j_s) r(j_s +m)}{|\lambda -k||\lambda -j_1| \cdots |\lambda -j_s| |\lambda -m |}, \quad s>0, \end{equation} and \begin{equation} \label{134} B(\lambda,k,m)= \frac{r(m+k)}{|\lambda -k||\lambda -m |} \end{equation} in the case when $s=0 $ and there are no $j$-indices. Moreover, by (\ref{41}),(\ref{43}) and (\ref{45}), we have \begin{equation} \label{135} \sum_{j_1,\ldots,j_s} B(\lambda,k, j_1,\ldots,j_s,m) = \langle \bar{K}_\lambda (\bar{K}_\lambda \bar{V} \bar{K}_\lambda)^{s+1}\bar{K}_\lambda g_m, g_k \rangle. \end{equation} \begin{Lemma} \label{lemd} In the above notations, we have \begin{equation} \label{136} A(s) \leq B_1 (s)+B_2 (s)+B_3 (s)+B_4 (s), \end{equation} where \begin{equation} \label{137} B_1 (s)= \sum_{|n|>N} \sup_{\lambda \in C_n} \left ( \sum_{j_1, \ldots, j_s} B(\lambda, n,j_1,\ldots,j_s,n) \right )^2; \end{equation} \begin{equation} \label{138} B_2 (s)= \sum_{|n|>N} \sum_{k\neq n}\sup_{\lambda \in C_n}\left ( \sum_{j_1, \ldots, j_s} B(\lambda, k,j_1,\ldots,j_s,n) \right )^2; \end{equation} \begin{equation} \label{139} B_3 (s)= \sum_{|n|>N} \sum_{m\neq n} \sup_{\lambda \in C_n} \left ( \sum_{j_1, \ldots, j_s} B(\lambda, n,j_1,\ldots,j_s,m) \right )^2; \end{equation} \begin{equation} \label{140} B_4 (s)= \sum_{|n|>N} \sum_{m,k\neq n}\sup_{\lambda \in C_n}\left (\sum_{j_1, \ldots, j_s}^* B(\lambda, k,j_1,\ldots,j_s,m) \right )^2, \quad s\geq 1, \end{equation} where the symbol $*$ over the sum in the parentheses means that at least one of the indices $j_1, \ldots, j_s $ is equal to $n.$ \end{Lemma} \begin{proof} Indeed, in view of (\ref{113}), we have $$ A(s) \leq A_1 (s)+A_2 (s)+A_3 (s)+A_4 (s),$$ where
$$ A_1 (s) = \sum_{|n|>N} |I_n (s,n,n)|^2, \quad A_2 (s) = \sum_{|n|>N} \sum_{k\neq n} |I_n (s,k,n)|^2 $$ $$A_3 (s) = \sum_{|n|>N} \sum_{m\neq n} |I_n (s,n,m)|^2, \quad A_4 (s) = \sum_{|n|>N} \sum_{m,k\neq n} |I_n (s,k,m)|^2. $$ By (\ref{129})--(\ref{134}) we get immediately that $$ A_\nu (s) \leq B_\nu (s), \quad \nu= 1,2,3.$$ On the other hand, by the Cauchy formula, $$ \int_{C_n} \frac{W(k + j_1) W(j_1+j_2) \cdots W(j_s +m)}{(\lambda -k)(\lambda -j_1)\cdots (\lambda -j_s)(\lambda -m)} d\lambda = 0 \quad \text{if} \quad k, j_1, \ldots, j_s, m \neq n. $$ Therefore, removing from the sum in (\ref{129}) the terms with zero integrals, and estimating from above the remaining sum, we get $$ |I_n (s,k,m)| \leq \sup_{\lambda \in C_n}\left (\sum_{j_1, \ldots, j_s}^* B(\lambda, k,j_1,\ldots,j_s,m) \right ), \qquad m,k \neq n. $$ From here it follows that $A_4 (s) \leq B_4 (s),$ which completes the proof. \end{proof} 3. In view of (\ref{112}) and (\ref{136}), Theorem \ref{thm1} will be proved if we get ``good estimates'' of the sums $B_{\nu} (s), \; \nu =1,\ldots,4, $ that are defined by (\ref{137})--(\ref{140}). If $bc = Per^\pm,$ then using the orthonormal system of eigenvectors of the free operator $L^0$ given by (\ref{012}), we get \begin{equation} \label{0111} \|P_n - P_n^0\|_{HS}^2 =\sum_{\alpha, \beta=1}^2 \sum_{m,k} |\langle (P_n - P_n^0)e^\alpha_m,e^\beta_k \rangle|^2, \end{equation} where $m,k \in 2\mathbb{Z}$ if $n$ is even or $m,k \in 1+ 2\mathbb{Z}$ if $n$ is odd. By (\ref{111}), we have \begin{equation} \label{0112} \langle (P_n - P_n^0)e^\alpha_m,e^\beta_k \rangle = \sum_{s=0}^\infty I^{\alpha \beta}(n,s,k,m), \end{equation} where \begin{equation} \label{0113} I^{\alpha \beta}(n,s,k,m)= \frac{1}{2\pi i} \int_{C_n}\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^\alpha_m,e^\beta_k \rangle d\lambda.
\end{equation} Therefore, $$ \sum_{|n|>N} \|P_n - P_n^0\|_{HS}^2 \leq \sum_{\alpha, \beta=1}^2 \sum_{t,s=0}^\infty \sum_{|n|>N} \sum_{m,k} |I^{\alpha \beta}(n,s,k,m)|\cdot |I^{\alpha \beta}(n,t,k,m)|. $$ Now, the Cauchy inequality implies \begin{equation} \label{0114} \sum_{|n|>N} \|P_n - P_n^0\|_{HS}^2 \leq \sum_{\alpha, \beta=1}^2 \sum_{t,s=0}^\infty (A^{\alpha \beta}(s))^{1/2}(A^{\alpha \beta}(t))^{1/2}, \end{equation} where \begin{equation} \label{141} A^{\alpha \beta}(s)=\sum_{|n|>N}\sum_{m,k} |I^{\alpha \beta}(n,s,k,m)|^2. \end{equation} \begin{Lemma} \label{lemp} In the above notations, with $r$ given by (\ref{40}), $B(\lambda, k,j_1,\ldots,j_s,m)$ defined in (\ref{133}), (\ref{134}), and $B_j (s), \, j=1, \ldots, 4,$ defined by (\ref{137})--(\ref{140}), we have \begin{equation} \label{142} A^{\alpha \beta}(s) \leq B_1 (s)+B_2 (s)+B_3 (s)+B_4 (s), \quad \alpha, \beta = 1,2. \end{equation} \end{Lemma} \begin{proof} The matrix representations of the operators $V$ and $K_\lambda $ given in (\ref{13}) and (\ref{31}) imply that if $s $ is even, then $\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^\alpha_m,e^\alpha_k \rangle=0 $ for $ \alpha =1,2, $ and if $s$ is odd then \begin{equation} \label{143} \langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^1_m,e^1_k \rangle =\sum_{j_1,\ldots,j_s}\frac{p(-k-j_1 ) q(j_1 + j_2 )\cdots p(-j_{s-1}- j_s ) q(j_s + m )}{(\lambda-k)(\lambda-j_1) \cdots (\lambda-j_s)(\lambda-m) }, \end{equation} \begin{equation} \label{144} \langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^2_m,e^2_k \rangle =\sum_{j_1,\ldots,j_s}\frac{q(k+j_1 ) p(-j_1 - j_2 )\cdots q(j_{s-1}+ j_s ) p(-j_s - m )}{(\lambda-k)(\lambda-j_1) \cdots (\lambda-j_s)(\lambda-m) }.
\end{equation} In an analogous way it follows that if $s$ is odd then $\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^1_m,e^2_k \rangle=0 $ and $\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^2_m,e^1_k \rangle=0, $ and if $s$ is even then \begin{equation} \label{145} \langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^1_m,e^2_k \rangle =\sum_{j_1,\ldots,j_s}\frac{q(k+j_1 ) p(-j_1 -j_2 )\cdots p(-j_{s-1}- j_s ) q(j_s + m )}{(\lambda-k)(\lambda-j_1) \cdots (\lambda-j_s)(\lambda-m) }, \end{equation} \begin{equation} \label{146} \langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^2_m,e^1_k \rangle =\sum_{j_1,\ldots,j_s}\frac{p(-k-j_1 ) q(j_1 +j_2 )\cdots q(j_{s-1}+ j_s ) p(-j_s - m )}{(\lambda-k)(\lambda-j_1) \cdots (\lambda-j_s)(\lambda-m) }. \end{equation} From (\ref{40}), (\ref{137})--(\ref{140}) and the above formulas it follows that $$ |\langle K_\lambda(K_\lambda V K_\lambda )^{s+1}K_\lambda e^\alpha_m,e^\beta _k \rangle| \leq \sum_{j_1,\ldots,j_s} B(\lambda, k,j_1,\ldots,j_s,m),$$ which implies immediately \begin{equation} \label{0146} |I^{\alpha \beta}_n (s,k,m)| \leq \sup_{\lambda \in C_n}\left (\sum_{j_1, \ldots, j_s} B(\lambda, k,j_1,\ldots,j_s,m) \right ). \end{equation} By (\ref{141}), $$ A^{\alpha \beta}(s) \leq A^{\alpha \beta}_1 (s)+A^{\alpha \beta}_2 (s)+A^{\alpha \beta}_3 (s)+A^{\alpha \beta}_4 (s),$$ where $$ A^{\alpha \beta}_1 (s) = \sum_{|n|>N} |I^{\alpha \beta}_n (s,n,n)|^2, \quad A^{\alpha \beta}_2 (s) = \sum_{|n|>N} \sum_{k\neq n} |I^{\alpha \beta}_n (s,k,n)|^2 $$ $$A^{\alpha \beta}_3 (s) = \sum_{|n|>N} \sum_{m\neq n} |I^{\alpha \beta}_n (s,n,m)|^2, \quad A^{\alpha \beta}_4 (s) = \sum_{|n|>N} \sum_{m,k\neq n} |I^{\alpha \beta}_n (s,k,m)|^2. $$ Therefore, in view of (\ref{0146}) and (\ref{137})--(\ref{139}), we get $$ A^{\alpha \beta}_\nu (s) \leq B_\nu (s), \quad \nu =1,2,3.
$$ Finally, as in the proof of Lemma~\ref{lemd}, we take into account that in the sums (\ref{143})--(\ref{146}) the terms with indices $j_1, \ldots, j_s, m,k \neq n $ have zero integrals over the contour $C_n. $ Therefore, $$ |I^{\alpha \beta}_n (s,k,m)| \leq \sup_{\lambda \in C_n}\left (\sum^*_{j_1, \ldots, j_s} B(\lambda, k,j_1,\ldots,j_s,m) \right ), \quad m,k \neq n. $$ In view of (\ref{140}), this yields $ A^{\alpha \beta}_4 (s) \leq B_4 (s),$ which completes the proof. \end{proof} Such estimates are given in the next proposition. For convenience, we set for any $\ell^2 $--sequence $r= (r(j))$ \begin{equation} \label{147} \rho_N = \left ( \frac{\|r\|^2}{\sqrt{N}} + (\mathcal{E}_{N} (r))^2 \right )^{1/2}. \end{equation} \begin{Proposition} \label{prop1} In the above notations, \begin{equation} \label{148} B_\nu (s) \leq C\|r\|^2 \rho_N^{2s},\quad \nu =1,2,3, \qquad B_4 (s) \leq C s \|r\|^4 \rho_N^{2(s-1)}, \; s\geq 1, \end{equation} where $C$ is an absolute constant. \end{Proposition} \begin{proof} {\em Estimates for $B_1 (s).$} By (\ref{134}) and (\ref{137}), we have $$ B_1 (0)= \sum_{|n|>N} \sup_{\lambda \in C_n} \frac{|r(2n)|^2}{|\lambda -n|^2} = 4 (\mathcal{E}_{N} (r))^2 \leq 4\|r\|^2, $$ so (\ref{148}) holds for $ B_1 (s)$ if $ s=0.$ If $s=1,$ then by (\ref{133}), the sum $B_1 (1)$ from (\ref{137}) has the form $$B_1 (1) = \sum_{|n|>N} \sup_{\lambda \in C_n} \left | \sum_j \frac{r(n+j)r(j+n)}{|\lambda -n||\lambda - j||\lambda -n|} \right |^2. $$ By (\ref{52}), and since $|\lambda -n| = 1/2 $ for $\lambda \in C_n,$ we get $$ B_1 (1) \leq \sum_{|n|>N} \left (8\sum_{j\neq n} \frac{|r(j+n)|^2}{ |j-n|} + 8|r(2n)|^2\right )^2 $$ $$ \leq 128 \sum_{|n|>N}\left ( \sum_{j\neq n} \frac{|r(j+n)|^2}{ |j-n|} \right )^2 + 128\sum_{|n|>N}|r(2n)|^4. 
$$ By the Cauchy inequality and (\ref{t11}) in Lemma \ref{lemt2}, we have $$ \sum_{|n|>N}\left ( \sum_{j\neq n} \frac{|r(j+n)|^2}{ |j-n|} \right )^2 \leq \sum_{|n|>N} \sum_{j\neq n} \frac{|r(j+n)|^2}{ |j-n|^2} \|r\|^2 \leq C \|r\|^2\rho_N^2. $$ On the other hand, $\sum_{|n|>N}|r(2n)|^4 \leq \|r\|^2 (\mathcal{E}_N (r))^2 \leq \|r\|^2\rho_N^2, $ so (\ref{148}) holds for $ B_1 (s)$ if $ s=1.$ Next, we consider the case $s>1.$ In view of (\ref{133}), since $|\lambda -n|= 1/2$ for $\lambda \in C_n, $ the sum $B_1 (s)$ from (\ref{137}) can be written as $$ B_1 (s) = \sum_{|n|>N} 4 \sup_{\lambda \in C_n} \left |\sum_{j_1, \ldots,j_s} \frac{r(n+j_1)r(j_1+j_2)\cdots r(j_s +n)}{|\lambda-j_1||\lambda-j_2|\cdots |\lambda-j_s|}\right |^2. $$ Therefore, we have (with $j=j_1, k=j_s$) $$ B_1 (s) = 4\sum_{|n|>N} \sup_{\lambda \in C_n} \left |\sum_{j,k} \frac{r(n+j)}{|\lambda - j|^{1/2}}\cdot H_{jk} (\lambda) \cdot \frac{r(k +n)}{|\lambda - k|^{1/2}} \right |^2, $$ where $ (H_{jk}(\lambda))$ is the matrix representation of the operator $ H(\lambda) = (\bar{K}_\lambda \bar{V} \bar{K}_\lambda)^{s-1}. $ By (\ref{51}) in Lemma \ref{lem2}, $$ \|H(\lambda)\|_{HS} =\left ( \sum_{j,k} |H_{jk}(\lambda)|^2 \right )^{1/2} \leq \|\bar{K}_\lambda \bar{V} \bar{K}_\lambda \|_{HS}^{s-1} \leq \rho_N^{s-1} \quad \text{for} \; \lambda \in C_n, \; |n|>N. $$ Therefore, the Cauchy inequality implies $$ B_1 (s) \leq 4\|H(\lambda)\|^2_{HS} \cdot \sigma \leq 4\rho_N^{2(s-1)}\cdot \sigma, $$ where $$\sigma= \sum_{|n|>N} \sup_{\lambda \in C_n} \sum_{j,k} \frac{|r(n+j)|^2}{|\lambda - j|}\cdot \frac{|r(k+n)|^2}{|\lambda - k|}. $$ By (\ref{52}) and since $|\lambda -n| = 1/2 $ for $\lambda \in C_n,$ we have $$ \sigma \leq 4\sum_{|n|>N}\sum_{j,k \neq n} \frac{|r(n+j)|^2|r(n+k)|^2}{|n - j||n-k|}+ 4\sum_{|n|>N}|r(2n)|^2\sum_{k \neq n} \frac{|r(n+k)|^2}{|n-k|} $$ $$ + 4\sum_{|n|>N}|r(2n)|^2\sum_{j \neq n} \frac{|r(n+j)|^2}{|n - j|} + 4\sum_{|n|>N}|r(2n)|^4. 
$$ In view of (\ref{t12}) in Lemma \ref{lemt2}, the triple sum does not exceed $C\|r\|^2\rho_N^2.$ By (\ref{t1}) in Lemma \ref{lemt1}, each of the double sums can be estimated from above by $$C\sum_{|n|>N}|r(2n)|^2 \rho_N^2 \leq C\|r\|^2\rho_N^2, $$ and the same estimate holds for the single sum. Therefore, $$ B_1 (s) \leq C\rho_N^{2(s-1)}\cdot \|r\|^2 \rho_N^2, $$ which completes the proof of (\ref{148}) for $B_1 (s).$ {\em Estimates for $B_2 (s).$} By (\ref{134}) and (\ref{138}), we have $$ B_2 (0)= \sum_{|n|>N}\sum_{k \neq n} \sup_{\lambda \in C_n} \frac{|r(k+n)|^2}{|\lambda -k|^2|\lambda -n|^2}. $$ Taking into account that $|\lambda -n| = 1/2 $ for $\lambda \in C_n,$ we get, in view of (\ref{52}) and (\ref{t11}) in Lemma \ref{lemt2}, $$ B_2 (0) \leq 4\sum_{|n|>N}\sum_{k \neq n} \frac{|r(k+n)|^2}{|n-k|^2} \leq C \|r\|^2. $$ So, (\ref{148}) holds for $B_2 (s)$ if $ s=0.$ If $s=1,$ then, by (\ref{133}), the sum $B_2 (1)$ from (\ref{138}) has the form $$B_2 (1)= \sum_{|n|>N} \sum_{ k \neq n} \sup_{\lambda \in C_n} \left |\sum_j \frac{r(k+j)r(j+n)}{|\lambda -k||\lambda -j||\lambda -n|} \right |^2. $$ Since $|\lambda -n| = 1/2 $ for $\lambda \in C_n,$ we get, in view of (\ref{52}), $$ B_2 (1) \leq \sum_{|n|>N} \sum_{ k \neq n} \left |\sum_{j\neq n} 8 \frac{r(k+j)r(j+n)}{|n -k||n -j|} + 8 r(2n) \frac{r(k+n)}{|n-k|} \right |^2. $$ Therefore, $$ B_2 (1) \leq 128\sigma_1 + 128\sigma_2, $$ where (by the Cauchy inequality and (\ref{t11}) in Lemma \ref{lemt2}) $$ \sigma_1 = \sum_{|n|>N, k \neq n} \left (\sum_{j\neq n} \frac{r(k+j)r(j+n)}{|n -k||n -j|} \right )^2 \leq \sum_{|n|>N, k\neq n} \frac{1}{|n-k|^2}\left (\sum_{j\neq n} \frac{|r(n+j)|^2}{|n -j|^2} \right ) \cdot \|r\|^2 $$ $$ =\sum_{|n|>N, j \neq n} \frac{|r(n+j)|^2}{|n -j|^2} \sum_{k\neq n} \frac{\|r\|^2}{|n-k|^2} \leq C \rho_N^2 \|r\|^2, $$ and $$ \sigma_2 = \sum_{|n|>N, k \neq n} |r(2n)|^2 \frac{|r(n+k)|^2}{|n-k|^2}\leq C \rho_N^2 \|r\|^2.
$$ Thus, (\ref{148}) holds for $B_2 (s) $ if $ s=1.$ If $s>1,$ then by (\ref{133}) and $|\lambda -n| =1/2$ for $\lambda \in C_n,$ we have $$B_2 (s) = \sum_{|n|>N, k \neq n} 2\sup_{\lambda \in C_n} \left |\sum_{j_1,\ldots,j_s} \frac{r(k+j_1)r(j_1+j_2) \cdots r(j_s +n)}{|\lambda - k||\lambda -j_1||\lambda -j_2| \cdots |\lambda -j_s| } \right |^2. $$ In view of (\ref{43}) and (\ref{44}), we get (with $j=j_1, i=j_s$) $$B_2 (s) = 2\sum_{|n|>N, k \neq n} \sup_{\lambda \in C_n} \left |\sum_{j,i} \frac{r(k+j)}{|\lambda -k ||\lambda -j|^{1/2} } \cdot H_{ji} (\lambda) \cdot \frac{r(i +n)}{|\lambda -i|^{1/2}} \right |^2, $$ where $H_{ji} (\lambda)$ is the matrix representation of the operator $H(\lambda) = (\bar{K}_\lambda \bar{V} \bar{K}_\lambda)^{s-1}.$ Therefore, by the Cauchy inequality and (\ref{51}) in Lemma \ref{lem2}, \begin{equation} \label{165} B_2 (s) \leq 2 \sup_{\lambda \in C_n}\|H(\lambda)\|^2_{HS} \cdot \tilde{\sigma}\leq 2 \rho_N^{2(s-1)} \cdot \tilde{\sigma}, \end{equation} where $$ \tilde{\sigma} = \sum_{|n|>N, k \neq n} \sup_{\lambda \in C_n} \sum_{i,j} \frac{|r(k+j)|^2 |r(i+n)|^2}{|\lambda - k|^2|\lambda - j||\lambda - i|}. 
$$ From $|\lambda -n| = 1/2 $ for $\lambda \in C_n$ and (\ref{52}) it follows that $$ \tilde{\sigma} \leq 8(\tilde{\sigma}_1 +\tilde{\sigma}_2 +\tilde{\sigma}_3+\tilde{\sigma}_4), $$ with $$ \tilde{\sigma}_1 = \sum_{|n|>N} \sum_{k \neq n} \sum_{j,i\neq n} \frac{|r(k+j)|^2 |r(i+n)|^2}{|n - k|^2|n - j||n - i|} \leq C \|r\|^2 (\mathcal{E}_{2N} (r))^2 \leq C\|r\|^2 \rho_N^2 $$ (by (\ref{t14}) in Lemma \ref{lemt2}); $$ \tilde{\sigma}_2 = \sum_{|n|>N} \sum_{k \neq n} \sum_{j\neq n} \frac{|r(k+j)|^2 |r(2n)|^2 }{|n - k|^2|n - j|} $$ $$ \leq \sum_{|n|>N} |r(2n)|^2\sum_{k \neq n} \frac{1}{|n-k|^2} \sum_j |r(k+j)|^2 \leq C \|r\|^2 (\mathcal{E}_{2N} (r))^2 \leq C\|r\|^2 \rho_N^2; $$ $$ \tilde{\sigma}_3 =\sum_{|n|>N} \sum_{k \neq n} \sum_{i\neq n} \frac{|r(k+n)|^2 |r(n+i)|^2}{|n - k|^2|n - i|} $$ $$\leq \sum_{|n|>N} \sum_{k \neq n} \frac{|r(k+n)|^2}{|n - k|^2} \cdot \sum_i |r(n+i)|^2 \leq C \|r\|^2 \rho_N^2 $$ (by (\ref{t11}) in Lemma \ref{lemt2}); $$ \tilde{\sigma}_4 = \sum_{|n|>N, k \neq n} \frac{|r(k+n)|^2|r(2n)|^2 }{|n - k|^2} \leq \|r\|^2 \sum_{|n|>N, k \neq n} \frac{ |r(k+n)|^2}{|n - k|^2} \leq C \|r\|^2 \rho_N^2 $$ (by (\ref{t11}) in Lemma \ref{lemt2}). These estimates imply the inequality $\tilde{\sigma} \leq C \|r\|^2 \rho_N^2, $ which completes the proof of (\ref{148}) for $\nu=2, s > 1.$ {\em Estimates for $B_3 (s).$} The sums $B_3 (s)$ can be estimated in a similar way because the indices $k$ and $m$ play symmetric roles. 
More precisely, since $$ B(\lambda,k,i_1, \ldots,i_s,n) =B(\lambda,n,j_1, \ldots,j_s,k)$$ if $j_1 =i_s, \ldots, j_s = i_1, $ we have $B_3 (s) = B_2 (s).$ Thus, (\ref{148}) holds for $\nu=3.$ {\em Estimates for $B_4 (s).$} Here $s\geq 1 $ by the definition of $B_4 (s).$ Fix $s \geq 1$ and consider the sum in (\ref{140}) that defines $B_4 (s);$ then at least one of the indices $j_1, \ldots, j_s$ is equal to $n.$ Let $\tau \leq s$ be the least integer such that $j_\tau = n.$ Then, by (\ref{133}) or (\ref{134}), and since $|\lambda -n|=1/2$ for $\lambda \in C_n,$ we have $$ B(\lambda,k,j_1,\ldots,j_{\tau -1}, n,j_{\tau +1}, \ldots,j_s,m)= $$ $$ \frac{1}{2} B(\lambda,k,j_1, \ldots,j_{\tau -1}, n)\cdot B(\lambda, n,j_{\tau +1}, \ldots,j_s,m). $$ Therefore, $$ B_4 (s) \leq \sum_{\tau=1}^s \sum_{|n|>N} \sum_{k \neq n}\sup_{\lambda \in C_n} \left |\sum_{j_1,\ldots,j_{\tau -1}} B(\lambda,k,j_1, \ldots,j_{\tau -1},n) \right |^2 $$ $$ \times \sum_{m \neq n} \sup_{\lambda \in C_n} \left | \sum_{j_{\tau +1}, \ldots,j_s} B(\lambda,n,j_{\tau +1}, \ldots,j_s,m) \right |^2. $$ On the other hand, by the estimate of $B_3 (s-\tau) $ given by (\ref{148}), $$ \sum_{m \neq n} \sup_{\lambda \in C_n} \left | \sum_{j_{\tau +1}, \ldots,j_s} B(\lambda,n,j_{\tau +1}, \ldots,j_s,m) \right |^2 \leq C \|r\|^2 \rho_N^{2(s-\tau)},\quad |n|>N. $$ Thus, we have $$ B_4 (s) \leq C \|r\|^2 \sum_{\tau=1}^s \rho_N^{2(s-\tau)} \sum_{|n|>N} \sum_{k \neq n}\sup_{\lambda \in C_n} \left |\sum_{j_1,\ldots,j_{\tau -1}} B(\lambda,k,j_1, \ldots,j_{\tau -1},n) \right |^2. $$ Now, by (\ref{148}) for $\nu =2,$ $$ \sum_{|n|>N} \sum_{k \neq n}\sup_{\lambda \in C_n} \left |\sum_{j_1,\ldots,j_{\tau -1}} B(\lambda,k,j_1, \ldots,j_{\tau -1},n) \right |^2 \leq C \|r\|^2 \rho_N^{2(\tau -1)}. $$ Hence, $$ B_4 (s) \leq C \|r\|^4 \sum_{\tau=1}^s \rho_N^{2(s-1)}=C s \|r\|^4 \rho_N^{2(s-1)}, $$ which completes the proof of (\ref{148}). \end{proof} Now, we can complete the proof of Theorem \ref{thm1}.
Lemma \ref{lemp} (inequality (\ref{142})), together with the estimates (\ref{148}) of Proposition \ref{prop1} and the definition (\ref{147}), implies that \begin{equation} \label{201} A^{\alpha \beta} (s) \leq 4C \| r\|^2 ( 1 + \| r\|^2/\rho_N^2 ) (1+s) \rho_N^{2s} \end{equation} and \begin{equation} \label{202} \left (A^{\alpha \beta} (s)A^{\alpha \beta} (t) \right )^{1/2} \leq 4C \| r\|^2 ( 1 + \| r\|^2/\rho_N^2 ) (1+s)(1+t) \rho_N^{s+t}. \end{equation} With $\rho_N \leq 1/2,$ which holds for $N$ large enough in view of (\ref{147}), the inequality (\ref{202}) guarantees that the series on the right side of (\ref{0114}) converges and $$ \sum_{|n|>N} \|P_n -P_n^0\|^2 \leq \sum_{|n|>N} \|P_n -P_n^0\|_{HS}^2 \leq C_1 \| r\|^2 ( 1 + \| r\|^2/\rho_N^2 ) <\infty. $$ So, Theorem \ref{thm1} is proven subject to Lemmas \ref{lemt1} and \ref{lemt2} in the next section. \end{proof} \section{Technical Lemmas} In this section we use that \begin{equation} \label{t0} \sum_{n>N}\frac{1}{n^2} < \sum_{n>N}\left ( \frac{1}{n-1}- \frac{1}{n} \right ) = \frac{1}{N}, \quad N \geq 1, \end{equation} and \begin{equation} \label{t00} \sum_{p \neq \pm n} \frac{1}{(n^2-p^2)^2} < \frac{4}{n^2}, \quad n\geq 1. \end{equation} Indeed, $$ \frac{1}{(n^2-p^2)^2} = \frac{1}{4n^2} \left ( \frac{1}{n-p}+ \frac{1}{n+p} \right )^2 \leq \frac{1}{2n^2} \left ( \frac{1}{(n-p)^2}+ \frac{1}{(n+p)^2} \right ). $$ Therefore, the sum in (\ref{t00}) does not exceed $$\frac{1}{2n^2} \left ( \sum_{p \neq \pm n} \frac{1}{(n-p)^2}+\sum_{p \neq \pm n} \frac{1}{(n+p)^2} \right ) \leq \frac{1}{2n^2}\cdot 2 \frac{\pi^2}{3} <\frac{4}{n^2}.
$$ \begin{Lemma} \label{lemt1} If $r = (r(k)) \in \ell^2 (2\mathbb{Z}) $ (or $r = (r(k)) \in \ell^2 (\mathbb{Z}) $), then \begin{equation} \label{t1} \sum_{k\neq n} \frac{|r(n+k)|^2}{|n-k|} \leq \frac{\|r\|^2}{|n|} + (\mathcal{E}_{|n|} (r))^2, \quad |n| \geq 1; \end{equation} \begin{equation} \label{t2} \sum_{i,k\neq n} \frac{|r(i+k)|^2}{|n-i||n-k|} \leq C \left ( \frac{\|r\|^2}{\sqrt{|n|}} + (\mathcal{E}_{|n|} (r))^2 \right ), \quad |n| \geq 1, \end{equation} where $n \in \mathbb{Z},\; i,k \in n+ 2\mathbb{Z} $ (or, respectively, $i,k \in \mathbb{Z} $) and $C $ is an absolute constant. \end{Lemma} \begin{proof} If $|n-k| \leq |n|,$ then we have $|n+k|\geq 2|n|-|n-k|\geq |n|.$ Therefore, $$ \sum_{k\neq n}\frac{|r(n+k)|^2}{|n-k|} \leq \sum_{0<|n-k|\leq |n|} |r(n+k)|^2 + \sum_{|n-k|>|n|}\frac{|r(n+k)|^2}{|n|}\leq (\mathcal{E}_{|n|} (r))^2 + \frac{\|r\|^2}{|n|}, $$ which proves (\ref{t1}). Next, we prove (\ref{t2}). We have \begin{equation} \label{t5} \sum_{i,k\neq n} \frac{|r(i+k)|^2}{|n-i||n-k|} \leq \sum_{(i,k) \in J_1} + \sum_{(i,k) \in J_2} + \sum_{(i,k) \in J_3}, \end{equation} where $ J_1 = \left \{(i,k): \; 0<|n-i|<|n|/2, \; |n-k| < |n|/2 \right \}$, $$ J_2 = \left \{(i,k): \; i\neq n, \; |n-k|\geq \frac{|n|}{2} \right \}, \quad J_3 =\left \{(i,k): \; |n-i|\geq\frac{|n|}{2}, \; k\neq n \right \}. $$ For $(i,k) \in J_1$ we have $|i+k| = |2n - (n-i)-(n-k)| \geq 2|n|-|n-i|-|n-k|\geq |n|.$ Therefore, by the Cauchy inequality, $$ \sum_{(i,k) \in J_1} \leq \left ( \sum_{(i,k) \in J_1} \frac{|r(i+k)|^2}{|n-i|^2}\right )^{1/2} \left ( \sum_{(i,k) \in J_1} \frac{|r(i+k)|^2}{|n-k|^2} \right )^{1/2} \leq C (\mathcal{E}_{|n|} (r))^2.
$$ On the other hand, again by the Cauchy inequality, $$ \sum_{(i,k) \in J_2}=\sum_{(i,k) \in J_3} \leq \left ( \sum_{(i,k) \in J_3} \frac{|r(i+k)|^2}{|n-i|^2}\right )^{1/2} \left ( \sum_{(i,k) \in J_3} \frac{|r(i+k)|^2}{|n-k|^2} \right )^{1/2} $$ $$ \leq \left ( \sum_{|n-i|\geq \frac{|n|}{2}} \frac{1}{|n-i|^2} \sum_k |r(i+k)|^2 \right )^{1/2} \left( \sum_{k\neq n} \frac{1}{|n-k|^2} \sum_i |r(i+k)|^2 \right )^{1/2} \leq C\frac{\|r\|^2}{\sqrt{|n|}}, $$ which completes the proof. \end{proof} \begin{Lemma} \label{lemt2} If $r = (r(k)) \in \ell^2 (2\mathbb{Z}) $ (or $r = (r(k)) \in \ell^2 (\mathbb{Z}) $), then \begin{equation} \label{t11} \sum_{|n|>N,k\neq n} \frac{|r(n+k)|^2}{|n-k|^2} \leq C \left ( \frac{\|r\|^2}{N} + (\mathcal{E}_N (r))^2 \right ); \end{equation} \begin{equation} \label{t12} \sum_{|n|>N}\sum_{i,p \neq n} \frac{|r(n+i)|^2|r(n+p)|^2}{|n - i||n-p|} \leq C \left ( \frac{\|r\|^2}{N} + (\mathcal{E}_N (r))^2 \right )\|r\|^2; \end{equation} \begin{equation} \label{t13} \sum_{|n|>N,j, p\neq n} \frac{|r(j+p)|^2}{|n-j|^2|n-p|^2} \leq C \left ( \frac{\|r\|^2}{N} + (\mathcal{E}_N (r))^2 \right ); \end{equation} \begin{equation} \label{t14} \sum_{|n|>N}\sum_{i,j,p \neq n} \frac{|r(n+i)|^2|r(j+p)|^2}{|n - i||n-j||n-p|^2} \leq C \left ( \frac{\|r\|^2}{N} + (\mathcal{E}_N (r))^2 \right )\|r\|^2, \end{equation} where $C$ is an absolute constant. \end{Lemma} \begin{proof} With $\tilde{k} = n-k$ and $\tilde{n} = n+k$ it follows that whenever $|\tilde{k}|\leq |n| $ we have $|\tilde{n}|=|2n-\tilde{k}|\geq 2|n|-|\tilde{k}|\geq |n|.$ Therefore, $$ \sum_{|n|>N} \sum_{k\neq n}\frac{|r(n+k)|^2}{|n-k|^2} = \sum_{|n|>N}\sum_{0<|n-k|\leq |n|} +\sum_{|n|>N}\sum_{|n-k|>|n|} $$ $$ \leq \sum_{|\tilde{k}|>0} \frac{1}{|\tilde{k}|^2} \sum_{|\tilde{n}|>N} |r(\tilde{n})|^2 + \sum_{|n|>N}\frac{1}{n^2} \sum_{k} |r(n+k)|^2 \leq C\left ( (\mathcal{E}_N (r))^2 +\frac{\|r\|^2}{N} \right ), $$ which proves (\ref{t11}).
Since $\frac{1}{|n-i||n-p| } \leq \frac{1}{2} \left ( \frac{1}{|n-i|^2} +\frac{1}{|n-p|^2 } \right ),$ the sum in (\ref{t12}) does not exceed $$ \frac{1}{2}\sum_{|n|>N,i\neq n} \frac{|r(n+i)|^2}{|n-i|^2} \sum_p |r(n+p)|^2 + \frac{1}{2}\sum_{|n|>N,p\neq n} \frac{|r(n+p)|^2}{|n-p|^2} \sum_i |r(n+i)|^2. $$ In view of (\ref{t11}), the latter is less than $ C \left ( \frac{\|r\|^2}{N} + (\mathcal{E}_N (r))^2 \right )\|r\|^2,$ which proves (\ref{t12}). In order to prove (\ref{t13}), we set $\tilde{j}= n-j$ and $\tilde{p}= n-p. $ Then $$ \sum_{|n|>N; j,p\neq n} \frac{|r(j+p)|^2}{|n-j|^2|n-p|^2}= \sum_{\tilde{j},\tilde{p} \neq 0} \frac{1}{\tilde{j}^2} \frac{1}{\tilde{p}^2}\sum_{|n|>N} |r(2n-\tilde{j}-\tilde{p})|^2 $$ $$ \leq \sum_{0<|\tilde{j}|,|\tilde{p}|\leq N/2} \frac{1}{\tilde{j}^2} \frac{1}{\tilde{p}^2} \sum_{|n|>N} |r(2n-\tilde{j}-\tilde{p})|^2 +\sum_{|\tilde{j}| > N/2} \sum_{|\tilde{p}|\neq 0} \cdots + \sum_{|\tilde{j}|\neq 0} \sum_{|\tilde{p}|> N/2} \cdots $$ $$ \leq C(\mathcal{E}_N (r))^2 + \frac{C}{N}\|r\|^2 +\frac{C}{N}\|r\|^2, $$ which completes the proof of (\ref{t13}). Let $\sigma $ denote the sum in (\ref{t14}). The inequality $ab \leq (a^2 + b^2)/2,$ considered with $a=1/|n-i|$ and $b=1/|n-j|,$ implies that $\sigma\leq (\sigma_1 + \sigma_2)/2,$ where $$ \sigma_1 = \sum_{|n|>N, i\neq n} \frac{|r(n+i)|^2}{|n-i|^2} \sum_{p\neq n} \frac{1}{|n-p|^2} \sum_j |r(j+p)|^2 \leq C\left ( (\mathcal{E}_N (r))^2 +\frac{\|r\|^2}{N} \right ) \|r\|^2 $$ (by (\ref{t11})), and $$ \sigma_2 = \sum_{|n|>N} \sum_{j,p\neq n} \frac{|r(j+p)|^2}{|n-j|^2|n-p|^2} \sum_i |r(n+i)|^2 \leq C\left ( (\mathcal{E}_N (r))^2 +\frac{\|r\|^2}{N} \right )\|r\|^2 $$ (by (\ref{t13})). Thus (\ref{t14}) holds. \end{proof} \section{Conclusions} 1. The convergence of the series (\ref{110}) is the analytic core of the Bari--Markus Theorem (see \cite{GK}, Ch.~6, Sect.~5.3, Theorem 5.2), which guarantees that the series $\sum_{|n| >N} P_n f $ converges unconditionally in $L^2$ for every $f \in L^2.
$ But in order to have the identity $$ f = S_N f + \sum_{|n| >N} P_n f ,$$ we need to check the ``algebraic'' hypotheses of the Bari--Markus Theorem: (a) The system of projections \begin{eqnarray} \label{c01} \{S_N; \; \; P_n, \; \; |n| >N \} \end{eqnarray} is {\it complete}, i.e., the linear span of the system of subspaces \begin{eqnarray} \label{c02} \{E^*; \; \; E_n, \; \; |n| >N \}, \quad E^* = Ran \, S_N, \; E_n = Ran \, P_n, \end{eqnarray} is dense in $L^2 (I).$ (b) The system of subspaces (\ref{c02}) is {\em minimal,} i.e., there is no vector in one of these subspaces that belongs to the closed linear span of all the other subspaces. Condition (b) holds because the projections in (\ref{c01}) are continuous, commute, and satisfy $$ P_n S_N =0, \quad P_n P_m =0 \quad \text{for} \; m \neq n, \quad |m|,|n| >N. $$ The system (\ref{c01}) is complete; this fact has been well known since the early 1950s (see details in \cite{K,KL,GK}). More general statements are proved in \cite{MalOr} and in \cite{Mit04} (Theorems 6.1 and 6.4, or Proposition 7.1). Therefore, all hypotheses of the Bari--Markus Theorem hold, and we have the following theorem. \begin{Theorem} \label{thm2} Let $L$ be the Dirac operator (\ref{01}) with an $L^2$-potential $v,$ subject to the boundary conditions $bc = Per^\pm $ or $Dir.$ Then there is $N \in \mathbb{N}$ such that the Riesz projections $$ S_N = \frac{1}{2\pi i} \int_{ |z|= N-1/2 } (z-L_{bc})^{-1} dz, \quad P_n = \frac{1}{2\pi i} \int_{ |z-n|= 1/4 } (z-L_{bc})^{-1} dz $$ are well-defined, and $$ f = S_N f + \sum_{|n| >N} P_n f, \quad \forall f \in L^2; $$ moreover, this series converges unconditionally in $L^2.$ \end{Theorem} 2. General {\em regular} boundary conditions for the operator $L^0$ (or $L$) (\ref{01})--(\ref{02}) are given by a system of two linear equations \begin{eqnarray} \label{c1} y_1 (0) +b y_1 (\pi) + a y_2 (0) =0 \\ \nonumber d y_1 (\pi) + c y_2 (0) + y_2 (\pi)=0 \end{eqnarray} with the restriction \begin{equation} \label{c2} bc-ad \neq 0.
\end{equation} A regular boundary condition is {\em strictly regular} if, additionally, \begin{equation} \label{c3} (b-c)^2 + 4ad \neq 0, \end{equation} i.e., the characteristic equation \begin{equation} \label{c4} z^2 + (b+c)z + (bc-ad)=0 \end{equation} has two {\em distinct} roots. As we noticed in the Introduction, our main result (Theorem \ref{thm2}) can be extended to the cases of both strictly regular $(SR)$ and regular but not strictly regular $(R\setminus SR) \; bc.$ More precisely, the following statements hold. $(SR)$ case. Let $L_{bc}$ be an operator (\ref{01})--(\ref{02}) whose boundary conditions are strictly regular, i.e., (\ref{c1})--(\ref{c3}) hold. Then its spectrum $SP \, (L_{bc}) = \{\lambda_k, \; k \in \mathbb{Z} \}$ is discrete, $\sup |Im \,\lambda_k| < \infty, $ $\; |\lambda_k| \to \infty $ as $k \to \pm \infty,$ and all but finitely many eigenvalues $\lambda_k $ are simple, $L_{bc} u_k = \lambda_k u_k, \; |k| > N = N(v).$ Put $$ S_N = \frac{1}{2\pi i } \int_C (z-L_{bc})^{-1} dz, $$ where the contour $C$ is chosen so that all $\lambda_k, \; |k|\leq N, $ lie inside $C, $ and $\lambda_k, \; |k|> N, $ lie outside $C. $ Then the spectral decomposition $$ f= S_N f + \sum_{|k| > N} c_k (f) u_k, \quad \forall f \in L^2 $$ is well-defined and {\em converges unconditionally} in $L^2.$ $(R \setminus SR)$ case. Let $bc$ be regular, i.e., (\ref{c1}) and (\ref{c2}) hold, but not strictly regular, i.e., \begin{equation} \label{c5} (b-c)^2 + 4ad = 0, \end{equation} and $z_* = \exp (i\pi \tau)$ be a double root of (\ref{c4}).
Then its spectrum $SP \, (L_{bc}) = \{\lambda_k, \; k \in \mathbb{Z}\} $ is discrete; it lies in $\Pi_N \cup \bigcup_{|m|> N}D_m, \; N=N(v), $ where $$ \Pi_N = \{z\in \mathbb{C}: \; |Im \, (z-\tau)|, |Re \, (z-\tau)| < N-1/2 \} $$ and $D_m =\{z\in \mathbb{C}: \; |z-m- \tau|< 1/4 \}.$ The spectral decomposition $$ f= S_N f + \sum_{|m| > N} P_m f, \quad \forall f \in L^2 $$ is well-defined if we set $$ S_N = \frac{1}{2\pi i } \int_{\partial \Pi_N} (z-L_{bc})^{-1} dz, \quad P_m = \frac{1}{2\pi i } \int_{\partial D_m} (z-L_{bc})^{-1} dz,\quad |m| > N, $$ and it {\em converges unconditionally} in $L^2.$ A complete presentation and proofs of these general results will be given elsewhere. \end{document}
\begin{document} \title{Two explicit divisor sums} \begin{abstract} \noindent We give explicit bounds on sums of $d(n)^{2}$ and $d_{4}(n)$ where $d(n)$ is the number of divisors of $n$ and $d_{4}(n)$ is the number of ways of writing $n$ as a product of four numbers. In doing so we make a slight improvement on the upper bound for class numbers of quartic number fields. \end{abstract} \section{Introduction}\label{intro} \noindent Let $d(n)$ denote the number of divisors of $n$, and for $k\geq 2$ let $d_{k}(n)$ denote the number of ways of writing $n$ as a product of $k$ integers. Using the following Dirichlet series \begin{equation}\label{s1} \sum_{n=1}^{\infty} \frac{d(n)^{2}}{n^{s}} = \frac{\zeta^{4}(s)}{\zeta(2s)}, \quad \sum_{n=1}^{\infty} \frac{d_{k}(n)}{n^{s}} = \zeta^{k}(s), \quad (\Re(s) >1), \end{equation} we have, via Perron's formula, that \begin{equation}\label{s2} \sum_{n\leq x} d_{k}(n) \sim \frac{1}{(k-1)!} x (\log x)^{k-1}, \end{equation} and \begin{equation}\label{s2a} \sum_{n\leq x} d(n)^{2} \sim \frac{1}{\pi^{2}} x (\log x)^{3}. \end{equation} The purpose of this article is to consider good explicit versions of (\ref{s2a}) and (\ref{s2}) when $k=4$. The rolled-gold example is $\sum_{n\leq x} d(n)$, which gives a bound of the form (\ref{s2}) for $k=2$. Berkan\'{e}, Bordell\`{e}s and Ramar\'{e} \cite[Thm. 1.1]{Berk} gave several pairs of values $(\alpha, x_{0})$ such that \begin{equation}\label{study} \sum_{n\leq x} d(n) = x(\log x + 2 \gamma -1) + \Delta(x) \end{equation} holds with $|\Delta(x)| \leq \alpha x^{1/2}$ for $x\geq x_0$, and where $\gamma$ is Euler's constant. One such pair, which we shall use frequently, is $\alpha=0.397$ and $x_0= 5560$. The best known bound for (\ref{study}) is $\Delta(x) = O(x^{131/416 + \epsilon})$ by Huxley \cite{Huxley}. It seems hopeless to give a bound on the implied constant in this estimate.
Therefore weaker\footnote{It is worth mentioning Theorem 1.2 in \cite{Berk}, which gives $|\Delta(x)|\leq 0.764 x^{1/3} \log x$ for $x\geq 9995$. This could be used in our present approach, but the improvement is only apparent for very large $x$.}, yet-still-explicit bounds such as those in \cite{Berk} are very useful in applications. Bordell\`{e}s \cite{Border} considered the sum in (\ref{s2}), and showed that when $k\geq 2$ and $x\geq1$ we have \begin{equation}\label{hall} \sum_{n\leq x} d_{k}(n) \leq x \left( \log x + \gamma + \frac{1}{x}\right)^{k-1}. \end{equation} This exceeds the asymptotic bound in (\ref{s2}) by a factor of $(k-1)!$. Nicolas and Tenenbaum (see \cite{Border}, pg. 2) were able to meet the asymptotic bound in (\ref{s2}) by showing that for fixed $k$ and $x\geq 1$ we have \begin{equation}\label{lounge} \sum_{n\leq x} d_{k}(n) \leq \frac{x}{(k-1)!} \left( \log x +(k-1)\right)^{k-1}. \end{equation} We improve these results in our first theorem. \begin{thm}\label{parlour} For $x\geq 2$ we have \begin{equation}\label{chaise} \sum_{n\leq x} d_{4}(n) = C_1 x\log^3 x + C_2 x\log^2x + C_3 x\log x + C_4x + \vartheta(4.48 x^{3/4} \log x), \end{equation} where $$C_{1} = 1/6, \quad C_{2} = 0.654\ldots, \quad C_{3} = 0.981\ldots, \quad C_{4} = 0.272\ldots,$$ are exact constants given in (\ref{d4}). Furthermore, when $x\geq 193$ we have \begin{equation}\label{sun} \sum_{n\leq x} d_{4}(n) \leq \frac{1}{3} x \log ^{3} x. \end{equation} \end{thm} \noindent It may be noted that the result in (\ref{chaise}) is a sharper bound than (\ref{hall}) and (\ref{lounge}) for all $x\geq 2$. Other bounds of the form (\ref{sun}) are possible: we have selected one that is nice and neat, and valid when $x$ is not too large. Sums of $d_{k}(n)$ can be used to obtain upper bounds on class numbers of number fields. Let $K$ be a number field of degree $n_{K}= [K:\mathbb{Q}]$ and discriminant $d_{K}$.
Also, let $r_{1}$ (resp.\ $r_{2}$) denote the number of real (resp.\ complex) embeddings in $K$, so that $n_{K} = r_{1} + 2r_{2}$. Finally, let $$b= b_{K} = \left( \frac{n_{K}!}{n_{K}^{n_{K}}}\right) \left(\frac{4}{\pi}\right)^{r_{2}}|d_{K}|^{1/2},$$ denote the Minkowski bound, and let $h_{K}$ denote the class number. Lenstra \cite[\S 6]{Lenstra} --- see also Bordell\`{e}s \cite[Lem.\ 1]{Border} --- proved that \begin{equation}\label{duck} h_{K} \leq \sum_{m\leq b} d_{n_{K}}(m). \end{equation} We note that we need only upper bounds on (\ref{s2}) to give bounds on the class number. Bordell\`{e}s used (\ref{hall}) in its weaker form \begin{equation}\label{games} \sum_{n\leq x} d_{k}(n) \leq 2x(\log x)^{k-1}, \quad (x\geq 6), \end{equation} to this end. The bound obtained on $h_{K}$ is not the sharpest possible for all degrees. For example, much more work has been done on quadratic extensions (see \cite{Le}, \cite{Louboutin}, and \cite{RamL}). Using Theorem \ref{parlour} we are able to give an improved bound on the class number of quartic number fields. \begin{cor}\label{chalk} Let $K$ be a quartic number field with class number $h_{K}$ and Minkowski bound~$b$. Then, if $b\geq 193$, we have $$ h_{K} \leq \frac{1}{3} b\log^3 b.$$ \end{cor} We note that it should be possible to use Corollary \ref{chalk} to improve slightly Lemma 13 in \cite{Deb} when $n=4$. Turning to the sum in (\ref{s2a}), we first state the following result by Ramanujan \cite{Ram2}: \begin{equation}\label{bravo} \sum_{n\leq x} d(n)^{2} - x(A \log^{3} x + B \log^{2} x + C\log x + D) \ll x^{3/5 + \epsilon}. \end{equation} The error in (\ref{bravo}) was improved by Wilson \cite{Wilson} to $x^{1/2 + \epsilon}$. The constants $A, B, C, D$ can (these days) be obtained via Perron's formula. Of note is an elementary result by Gowers \cite{Gowers}, namely, that \begin{equation}\label{kitchen} \sum_{n\leq x} d(n)^{2} \leq x(\log x + 1)^{3} \leq 2x (\log x)^{3}, \quad (x\geq 1).
\end{equation} This is used by Kadiri, Lumley, and Ng \cite{Kadiri} in their work on zero-density estimates for the zeta-function. Although one would expect some lower-order terms, the bound in (\ref{kitchen}) is a factor of $2\pi^{2} \approx 19.7$ times the asymptotic bound in (\ref{s2a}), whence one should be optimistic about obtaining a saving. We obtain such a saving in our second main result. \begin{thm}\label{billiard} For $x\geq 2$ we have $$\sum_{n\leq x} d(n)^{2} = D_1 x\log^3 x + D_2 x\log^2x + D_3 x\log x + D_4x + \vartheta \left(9.73 x^\frac{3}{4} \log x + 0.73 x^\frac{1}{2} \right),$$ where $$D_{1} = \frac{1}{\pi^2}, \quad D_{2} = 0.745 \ldots, \quad D_{3} = 0.824 \ldots, \quad D_{4} = 0.461\ldots,$$ are exact constants given in (\ref{dn2}). Furthermore, for $x\geq x_j$ we have $$\sum_{n\leq x} d(n)^2 \leq K x \log ^{3} x,$$ where one may take $\{K, x_j\}$ to be, among others, $\{\tfrac{1}{4}, 433\}$ or $\{1, 7\}$. \end{thm} The outline of this article is as follows. Theorem \ref{parlour} is proved in Section \ref{boat}. A similar process would give good explicit bounds on $\sum_{n\leq x} d_{k}(n)$. We have not pursued this, but the potential for doing so is discussed. We then prove Theorem \ref{billiard} in Section \ref{ship}. \section{Bounding $d_{4}(n)$}\label{boat} Since $d_{4}(n) = d(n) * d(n)$, where $*$ denotes Dirichlet convolution, the hyperbola method gives us that \begin{equation*} \sum_{n\leq x} d_{4}(n) = 2 \sum_{a\leq \sqrt{x}} d(a) \sum_{n\leq x/a} d(n) - \left( \sum_{n\leq \sqrt{x}} d(n)\right)^{2}.
\end{equation*} Using (\ref{study}), we arrive at \begin{equation}\label{daisy} \begin{split} \sum_{n\leq x} d_{4}(n) &= 2x \left[ \left(\log x + 2\gamma -1\right) S_{1} (\sqrt{x}) - S_{2}(\sqrt{x})\right] + 2\sum_{a\leq \sqrt{x}} d(a) \Delta \left(\frac{x}{a} \right) \\ &\quad - \left\{\sqrt{x}\left(\frac{1}{2} \log x + 2\gamma -1\right) + \Delta(\sqrt{x})\right\}^{2}, \end{split} \end{equation} where \begin{equation*} S_{1}(x) = \sum_{n\leq x} \frac{d(n)}{n}, \quad\text{and}\quad S_{2}(x) = \sum_{n\leq x} \frac{d(n)\log n}{n}. \end{equation*} The absolute value of the second sum on the right-hand side of (\ref{daisy}), namely that involving $\Delta(x/a)$, can be bounded above by $$2 \alpha x^{1/2} S_{3}(\sqrt{x}),$$ where $$S_{3}(x) = \sum_{n\leq x} \frac{d(n)}{\sqrt{n}}.$$ We can approximate $S_1$, $S_2$, and $S_3$ with partial summation and the bound in (\ref{study}). We note that for applications in \S \ref{ship} we need only concern ourselves with values of $x\geq 1$. Berkan\'{e}, Bordell\`{e}s, and Ramar\'{e} \cite[Cor.\ 2.2]{Berk} give a bound for $S_{1}(x)$. As noted by Platt and Trudgian in \cite[\S 2.1]{Platt}, their constant $1.16$ should be replaced by $1.641$ as in Riesel and Vaughan \cite[Lem.\ 1]{RV}. To obtain an error term in Theorem \ref{parlour} of size $x^{3/4} \log x$ we should like an error term in $S_{1}(x)$ of size $x^{-1/2}$, which is right at the limit of what is achievable. We follow the method used by Riesel and Vaughan (\cite[pp. 48--50]{RV}) to write \begin{equation}\label{chef} S_{1}(x) = \frac{1}{2} \log^{2} x + 2 \gamma \log x + \gamma^{2} - 2 \gamma_{1} + \vartheta(c x^{-1/2}), \end{equation} with $c=1.001$ for $x\geq 6 \cdot 10^5$. One can directly check that this also holds for $2\leq x < 6 \cdot 10^5$. Taking larger values of $x$ reduces the constant $c$, but not to anything less than unity.
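Before estimating $S_{2}$ and $S_{3}$, we note that the hyperbola identity underlying (\ref{daisy}) is exact and easy to verify numerically. The following Python sketch (an illustration only, not part of the proofs; the function names are ours) sieves $d(n)$, forms $d_{4} = d * d$ by Dirichlet convolution, and checks that $\sum_{n\leq x} d_{4}(n) = 2\sum_{a\leq\sqrt{x}} d(a) D(x/a) - D(\sqrt{x})^{2}$, where $D(y)=\sum_{n\leq y} d(n)$.

```python
import math

def sieve_d(N):
    """d(n) for 1 <= n <= N, by sieving over divisors."""
    d = [0] * (N + 1)
    for a in range(1, N + 1):
        for m in range(a, N + 1, a):
            d[m] += 1
    return d

def sieve_d4(N):
    """d_4(n) for 1 <= n <= N, via the Dirichlet convolution d_4 = d * d."""
    d = sieve_d(N)
    d4 = [0] * (N + 1)
    for a in range(1, N + 1):
        for m in range(a, N + 1, a):
            d4[m] += d[a] * d[m // a]
    return d4

def hyperbola_check(x):
    """Return both sides of the hyperbola identity for sum_{n<=x} d_4(n)."""
    d = sieve_d(x)
    D = [0] * (x + 1)               # D[y] = sum_{n<=y} d(n)
    for n in range(1, x + 1):
        D[n] = D[n - 1] + d[n]
    s = math.isqrt(x)
    lhs = sum(sieve_d4(x)[1:])
    rhs = 2 * sum(d[a] * D[x // a] for a in range(1, s + 1)) - D[s] ** 2
    return lhs, rhs
```

Since the identity is exact, the two returned values agree for every integer $x\geq 1$; this gives a quick sanity check on the bookkeeping in (\ref{daisy}).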
For $S_{2}$, we have \begin{equation}\label{S2} S_{2}(x) = \frac{1}{3} \log^{3} x + \gamma \log^{2} x + 2\gamma\gamma_{1} - \gamma_{2} + E_{2}(x), \end{equation} where \begin{equation*} E_{2}(x) = \frac{\Delta(x)\log x}{x} - \int_{x}^{\infty} \frac{(\log t -1)\Delta(t)}{t^{2}}\, dt. \end{equation*} Using the bound in (\ref{study}) with $\alpha=0.397$ and $x_0=5560$ we have \begin{equation*} |E_{2}(x)| \leq \alpha\left( 3 + \frac{2}{\log x_{0}}\right) x^{-1/2} \log x, \qquad (x\geq x_{0}). \end{equation*} Lastly, for $S_{3}$ we have \begin{equation}\label{S3} S_{3}(x) = 2x^{1/2} \log x + 4(\gamma -1)x^{1/2} + E_{3}(x), \end{equation} where \begin{equation}\label{box} E_{3}(x) = \frac{\Delta(x)}{x^{1/2}} - \frac{1}{2} \int_{x}^{\infty} \frac{\Delta(t)}{t^{3/2}}\, dt. \end{equation} For the integral in (\ref{box}) to converge we need to use a bound of the form $\Delta(t)\ll t^{1/2 - \delta}$. The only such explicit bound we know of is Theorem 1.2 in \cite{Berk}: as pointed out in \S \ref{intro} this improves on results only for large values of $x$. Instead, since $S_{3}(x)$ has a relatively small contribution to the total error, we can afford a slightly larger bound on $E_{3}(x)$. In writing the error as \begin{equation*} E_{3}(x) = 3 - 2\gamma + \frac{\Delta(x)}{x^{1/2}} + \frac{1}{2} \int_{1}^{x} \frac{\Delta(t)}{t^{3/2}}\, dt \end{equation*} we can apply the bound in (\ref{study}) and the triangle inequality to get \begin{align*} |E_{3}(x)| & \leq (3-2\gamma) + \alpha + \frac{1}{2} \alpha \log x \\ & \leq \log x\left(\frac{\alpha}{2} + \frac{3 - 2\gamma + \alpha}{\log x_{0}}\right):= \beta \log x, \qquad (x\geq x_{0}). 
\end{align*} Thus, the bounds in (\ref{chef}), (\ref{S2}), and (\ref{S3}) can be used in (\ref{daisy}) to prove Theorem \ref{parlour}: \begin{align} \label{d4} \sum_{n\leq x} d_{4}(n) &= C_1 x\log^3 x + C_2 x\log^2x + C_3 x\log x + C_4x + E(x), \end{align} where $$|E(x)| \leq F_1 x^\frac{3}{4}\log x,$$ and we have \begin{align*} &C_1=\frac{1}{6}, \quad C_2= 2\gamma-\frac{1}{2} , \quad C_3= 6\gamma^2-4\gamma-4\gamma_1+1 , \\ & C_4= 4\gamma^3-6\gamma^2+ 4\gamma -12\gamma \gamma_1 + 4\gamma_1+2\gamma_2 -1 , \text{ and } F_1= 2c+6\alpha+ \frac{2\alpha}{\log x_{0}}. \end{align*} Recalling that $c=1.001, \alpha = 0.397$, and $x_{0} = 5560$, we prove the theorem for $x\geq 5560^2$. We directly calculated the partial sums of $d_4(n)$ to confirm that the bound in (\ref{chaise}) also holds for $2 \leq x < 5560^2$. We could bound the partial sums of $d_{k}(n)$ by generalising the previous method. When $k$ is even, one can use $d_{k}(n) = d_{k-2}(n) * d(n)$, and when $k$ is odd, one can use $d_{k}(n) = d_{k-1}(n) * 1$. We have not pursued this, but for small values of $k$, this is likely to lead to decent bounds. We expect that the error term could, potentially, blow up on repeated applications of this process. One may also consider a more direct approach using a `hyperboloid' method, that is, considering $d_{3}(n) = 1 * 1 * 1.$ We have not considered this here. \section{Bound on $\sum_{n\leq x} d(n)^2$}\label{ship} We define $$H(s) = \sum_{n=1}^\infty \frac{h(n)}{n^{s}} = \frac{1}{\zeta(2s)},$$ and let $$H^{*}(s) = \sum_{n\geq 1} |h(n)| n^{-s} = \prod_{p} \left( 1 + \frac{1}{p^{2s}}\right) = \frac{\zeta(2s)}{\zeta(4s)},$$ whence both $H(s)$ and $H^{*}(s)$ converge for $\Re(s)> \frac{1}{2}$. Referring to (\ref{s1}), we therefore have $d(n)^2 = d_4(n) \ast h(n)$.
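Since $H(s)=1/\zeta(2s)=\sum_{m\geq 1}\mu(m)m^{-2s}$, the function $h$ is supported on perfect squares, with $h(m^{2})=\mu(m)$. The factorisation $d(n)^{2}=d_{4}(n)\ast h(n)$ can therefore be checked numerically. The following Python sketch (an illustration under these definitions, not part of the paper) sieves $d$, $d_{4}$, and $\mu$, and recovers $d(n)^{2}$ from the convolution.

```python
import math

def sieve_d(N):
    """d(n) for 1 <= n <= N."""
    d = [0] * (N + 1)
    for a in range(1, N + 1):
        for m in range(a, N + 1, a):
            d[m] += 1
    return d

def sieve_mu(N):
    """Moebius function mu(n) for 1 <= n <= N."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

def d_squared_from_convolution(N):
    """Compute d(n)^2 as (d_4 * h)(n), with h(m^2) = mu(m), h = 0 off squares."""
    d = sieve_d(N)
    d4 = [0] * (N + 1)
    for a in range(1, N + 1):           # d_4 = d * d
        for m in range(a, N + 1, a):
            d4[m] += d[a] * d[m // a]
    mu = sieve_mu(N)
    out = [0] * (N + 1)
    for m in range(1, math.isqrt(N) + 1):
        for k in range(1, N // (m * m) + 1):
            out[k * m * m] += mu[m] * d4[k]
    return out
```

The output array agrees with $d(n)^{2}$ term by term, reflecting the Dirichlet-series identity $\zeta(s)^{4}/\zeta(2s)=\sum d(n)^{2}n^{-s}$.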
Hence, we can write $$\sum_{n\leq x} d(n)^2 = \sum_{a\leq x} h(a) \sum_{b\leq \frac{x}{a}} d_4(b).$$ This leads to the bound \begin{align} \label{dn2} \sum_{n\leq x} d(n)^2 = D_1 x\log^3 x + D_2 x\log^2x + D_3 x\log x + D_4x + \sum_{a\leq x} h(a)E\left(\frac{x}{a}\right), \end{align} where \begin{align*} D_1&=C_1 H(1),\quad D_2=C_2H(1)+3C_1H'(1),\quad D_3=C_3H(1)+2C_2H'(1)+3C_1H''(1), \\ D_4&=C_4H(1)+C_3H'(1)+C_2H''(1)+C_1H^{(3)}(1). \end{align*} Furthermore, we require a bound on $E(x)$ which holds for $x\geq 1$. Adapting the error term in (\ref{d4}) to hold for the desired range, we can write $$\left| \sum_{a\leq x} h(a)E\left(\frac{x}{a}\right) \right| \leq F_1 H^{*}(\tfrac{3}{4}) x^\frac{3}{4} \log x + (1-C_4)x^\frac{1}{2}.$$ We also note the following exact values, for ease of further calculations: \begin{equation}\label{tea} \begin{split} H(1) &= \frac{6}{\pi^{2}}, \quad H'(1) = -\frac{72 \zeta'(2)}{\pi^{4}}, \quad H''(1) = \frac{1728 \zeta'(2)^{2}}{\pi^{6}} - \frac{144 \zeta''(2)}{\pi^{4}},\\ H^{(3)}(1)&= -\frac{62208 \zeta'(2)^{3}}{\pi^{8}} + \frac{10368 \zeta'(2) \zeta''(2)}{\pi^{6}} - \frac{288 \zeta^{(3)}(2)}{\pi^{4}}.\\ \end{split} \end{equation} As an aside, our result in Theorem \ref{billiard} could have been achieved with Ramar\'{e}'s general result in \cite[Lem.\ 3.2]{Ram}, as modified in \cite[Lem.\ 14]{TQ} (with the constants repaired as in \cite{RV}), but some further generalisation would have been necessary. Instead, we proceeded directly as above. \end{document}
\begin{document} \begin{abstract} We show that every abstract homomorphism $\varphi$ from a locally compact group $L$ to a graph product $G_\Gamma$, endowed with the discrete topology, is either continuous or $\varphi(L)$ lies in a `small' parabolic subgroup. In particular, every locally compact group topology on a graph product whose graph is not `small' is discrete. This extends earlier work by Morris and Nickolas. We also show the following. If $L$ is a locally compact group and if $G$ is a discrete group which contains no infinite torsion group and no infinitely generated abelian group, then every abstract homomorphism $\varphi:L\to G$ is either continuous, or $\varphi(L)$ is contained in the normalizer of a finite nontrivial subgroup of $G$. As an application we obtain results concerning the continuity of homomorphisms from locally compact groups to Artin and Coxeter groups. \end{abstract} \thanks{{Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044--390685587, Mathematics M\"unster: Dynamics-Geometry-Structure.}} \maketitle \section{Introduction} We investigate the following type of question. \begin{quote} {\em Let $L$ be a locally compact group and let $G$ be a discrete group. Under what conditions on the group $G$ is an abstract (i.e. not necessarily continuous) homomorphism $\varphi:L\to G$ automatically continuous?} \end{quote} There are many results in this direction in the literature, see \cite{Kramer}, \cite{Conner}, \cite{Dudley}, \cite{Morris} or \cite{Paolini}. In particular, Dudley \cite{Dudley} proved that every abstract homomorphism from a locally compact group to a free group is automatically continuous. This was generalized by Morris and Nickolas \cite{Morris}.
They proved that every abstract homomorphism from a locally compact group to a free product of groups is either continuous, or the image of the homomorphism is conjugate to a subgroup of one of the factors of the free product. Our first aim is to prove similar results for the case where the codomain $G$ of an abstract homomorphism $L\to G$ is a graph product of arbitrary groups. Given a simplicial graph $\Gamma=(V, E)$ and a collection of groups $\mathcal{G} = \{ G_u \mid u \in V\}$, the \emph{graph product} $G_\Gamma$ is defined as the quotient \[ \left( \underset{u \in V}{\ast} G_u \right) / \langle \langle [G_v,G_w]\text{ for }\{v,w\}\in E \rangle \rangle. \] We call $G_\Gamma$ \emph{finite dimensional} if there exists a uniform bound on the sizes of cliques in $\Gamma$. Throughout, $L$ denotes a Hausdorff locally compact group with identity component $L^\circ$, and $G$ denotes a discrete group. We call $L$ \emph{almost connected} if the totally disconnected group $L/L^\circ$ is compact. By an \emph{abstract homomorphism} we mean a group homomorphism between topological groups which is not assumed to be continuous. We remark that every abstract homomorphism whose codomain is discrete is open. \begin{NewPropositionA} Let $\varphi:L\to G_\Gamma$ be an abstract homomorphism from an almost connected locally compact group $L$ to a finite dimensional graph product $G_\Gamma$. Then $\varphi(L)$ lies in a complete parabolic subgroup of $G_\Gamma$. \end{NewPropositionA} Using Proposition A, we show the following more general result. \begin{NewTheoremB} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a finite dimensional graph product $G_\Gamma$. Then either $\varphi$ is continuous, or $\varphi(L)$ lies in a conjugate of a parabolic subgroup $G_{S\cup\lk(S)}$, where $S\neq\varnothing$ is a clique.
If every composite $L\xrightarrow{\varphi}G_\Gamma\xrightarrow{r_v} G_v$ is continuous, then $\varphi$ is continuous. \end{NewTheoremB} In particular, every locally compact group topology on a finite dimensional graph product $G_\Gamma$ is discrete, unless $\Gamma$ is contained in the link of a clique. In the latter case, $G_\Gamma$ is a direct product of vertex groups and a smaller graph product, and then a locally compact topology on $G_\Gamma$ may indeed be nondiscrete. Our remaining results deal with a certain class $\mathcal G$ of discrete groups. Let $\mathcal G$ denote the class of all groups $G$ with the following two properties: \begin{enumerate} \item Every torsion subgroup $T\subseteq G$ is finite, and \item Every abelian subgroup $A\subseteq G$ is a (possibly infinite) direct sum of cyclic groups. \end{enumerate} The abelian subgroups $A$ in such a group are thus of the form $A=F\times\mathbb Z^{(J)}$, where $F$ is a finite abelian group and $\mathbb Z^{(J)}$ is free abelian of (possibly infinite) rank $\card(J)$. We remark that subgroups of free abelian groups are again free abelian \cite[A1.9]{HoMo}. We study abstract homomorphisms from locally compact groups to groups in this class. We show in Section 7 that the class $\mathcal G$ is huge. It is closed under finite products, under coproducts, and under passage to subgroups, see Proposition~\ref{LargeClass}. For example, every finitely generated hyperbolic group, every right-angled Artin group, every Artin group of finite type and every Coxeter group is in this class, see Propositions \ref{Hyperbolic}, \ref{Artin} and \ref{Coxeter}. Furthermore, the groups $\mathrm{GL}_n(\mathbb{Z})$, $\Out(F_n)$ and the mapping class groups $\Mod(S_g)$ of compact orientable surfaces of genus $g$ are in this class, see Proposition~\ref{OutFn}. Further examples of groups which are in the class $\mathcal G$ are diagram groups, see \cite[Theorem 16]{Guba}.
In particular, Thompson's group $F$ is a diagram group and this group contains a free abelian group which is not finitely generated. We obtain the following results. \begin{NewPropositionC} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a group $G$ in the class $\mathcal{G}$. Then $\varphi$ factors through the canonical projection $\pi:L\to L/L^\circ$. If $L$ is almost connected, then $\varphi(L)$ is finite. \end{NewPropositionC} \begin{NewTheoremD} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a group $G$ in the class $\mathcal{G}$. Then either $\varphi$ is continuous, or $\varphi(L)$ lies in the normalizer of a finite non-trivial subgroup of $G$. \end{NewTheoremD} The following is an immediate consequence of Theorem D. \begin{NewCorollaryE} Every abstract homomorphism from a locally compact group $L$ to a torsion free group $G$ in the class $\mathcal{G}$ is continuous. In particular, every abstract homomorphism from a locally compact group to a right-angled Artin group or to an Artin group of finite type is continuous. \end{NewCorollaryE} Related results on abstract homomorphisms into right-angled Artin groups and into Artin groups of non-exceptional finite type were recently proved in \cite{Corson}. Our proofs depend heavily on a theorem of Iwasawa on the structure of connected locally compact groups and on a theorem of van Dantzig on the existence of compact open subgroups in totally disconnected groups. For the proof of Proposition A we use the structure of the right-angled building $X_\Gamma$ associated to a graph product $G_\Gamma$. \section*{Acknowledgment} The authors thank the referee for careful reading of the manuscript and many helpful remarks, in particular concerning the infinite dimensional case in Theorem B. \section{Graph products}\label{GraphProductsSection} In this section we briefly present the main definitions and properties concerning graph products.
These groups are defined by presentations of a special form. A \emph{simplicial graph} $\Gamma=(V,E)$ consists of a set $V$ of \emph{vertices} and a set $E$ of $2$-element subsets of $V$ which are called \emph{edges}. We allow infinite graphs. Given a subset $S\subseteq V$, the \emph{graph generated by $S$} is the graph with vertex set $S$ and edge set $E|_S=\{\{v,w\}\in E\mid v,w\in S\}$. We call $S$ a \emph{clique} if $E|_S=\{\{v,w\}\mid v,w\in S\text{ with }v\neq w\}=\binom{S}{2}$. We count the empty set as a clique. We say that $\Gamma$ has \emph{finite dimension} if there is a uniform upper bound on the cardinality of cliques in $\Gamma$. For a subset $S\subseteq V$ we define its \emph{link} as \[ \lk(S)=\{w\in V\mid \{v,w\}\in E\text{ for all }v\in S\}. \] \begin{definition} Let $\Gamma$ be a simplicial graph, as defined above. Suppose that for every vertex $v\in V$ we are given a nontrivial\footnote{We need nontrivial vertex groups in order to obtain a building. Alternatively, one may remove all vertices $v$ from $\Gamma$ whose vertex group $G_v$ is trivial, without changing the resulting graph product.} abstract group $G_v$. The \emph{graph product} $G_\Gamma$ is the group obtained from the free product of the $G_v$, for $v\in V$, by adding the commutator relations $gh=hg$ for all $g\in G_v$, $h\in G_w$ with $\left\{v,w\right\}\in E$, i.e. \[ \left( \underset{u \in V}{\ast} G_u \right) / \langle \langle [G_v,G_w]\text{ for }\{v,w\}\in E \rangle \rangle. \] \end{definition} Graph products are special instances of graphs of groups, and in particular colimits in the category of groups \cite[\S5]{Davis}. We call the graph product \emph{finite dimensional} if $\Gamma$ has finite dimension as defined above, i.e. if there is an upper bound on the size of cliques in $\Gamma$. The first examples to consider are the extremes.
If $E=\varnothing$, then $G_{\Gamma}$ is the free product of the groups $G_v$, for $v\in V$. On the other hand, if $E=\binom{V}{2}$ is the set of all $2$-element subsets of $V$, then $G_\Gamma$ is the direct sum of the $G_v$, for $v\in V$. So graph products interpolate between free products and direct sums of groups. \subsection{Parabolic subgroups}\label{ParabolicSubgroups} Let $\Gamma=(V,E)$ be a simplicial graph, let $G_\Gamma$ denote the graph product of a family of groups $\{G_v\mid v\in V\}$ and let $S$ be a subset of $V$. The subgroup $G_S$ of $G_\Gamma$ generated by the $G_v$, for $v\in S$, is again a graph product, corresponding to the subgraph $\Gamma'=(S,E|_S)$. This follows from the Normal Form Theorem \cite[Thm.~3.9]{Green}. There is also a retraction homomorphism \[ r_S:G_\Gamma\to G_S \] which is obtained by substituting the trivial group for $G_v$ for all $v\in V-S$ \cite[Section 3]{Antolin}. If $S\subseteq V$ is a subset (resp. a clique), then $G_S$ is called a \emph{special parabolic subgroup} (resp. a \emph{special complete parabolic subgroup}). The conjugates in $G_\Gamma$ of the special (complete) parabolic subgroups are called (\emph{complete}) \emph{parabolic subgroups}. We note that parabolic subgroups behave well under inclusion and conjugation. For $R,S\subseteq V$ and $a,b\in G_\Gamma$ we have \[ \tag{1} aG_{R}a^{-1}\subseteq bG_{S}b^{-1}\Rightarrow R\subseteq S, \] see \cite[Corollary 3.8]{Antolin}. If $gG_Sg^{-1}\subseteq G_S$, then by \cite[Lemma 3.9]{Antolin} \[ \tag{2} gG_Sg^{-1}=G_S. \] Let $X$ be a subset of $G_{\Gamma}$. If the set of all parabolic subgroups containing $X$ has a minimal element, then this minimal parabolic subgroup containing $X$ is unique by the remarks above. In this case, it is called the \emph{parabolic closure} of $X$ and denoted by $\Pc(X)$. The parabolic closure always exists if $\Gamma$ is finite or if $X$ is finite \cite[Proposition 3.10]{Antolin}.
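The combinatorics of cliques, links, and the sets $S\cup\lk(S)$ are elementary to compute for a finite graph. The following Python sketch (our illustration; the function names are ours and not from the paper) implements these notions directly from the definitions above.

```python
from itertools import combinations

def is_clique(S, E):
    """S is a clique iff every pair of distinct vertices of S spans an edge."""
    return all(frozenset(p) in E for p in combinations(S, 2))

def link(S, V, E):
    """lk(S): vertices adjacent to every vertex of S.  A vertex of S is
    automatically excluded, since {w, w} is a singleton and hence not an edge."""
    return {w for w in V if all(frozenset({v, w}) in E for v in S)}

def dimension(V, E):
    """Size of a largest clique; the graph has finite dimension iff this is
    bounded, which is automatic for a finite graph."""
    return max(len(S) for r in range(len(V) + 1)
               for S in combinations(sorted(V), r) if is_clique(S, E))
```

For the path $a - b - c$ one gets $\lk(\{b\})=\{a,c\}$ and $\lk(\{a\})=\{b\}$; with Lemma~\ref{normalizer} below this says, for instance, that the normalizer of the special parabolic subgroup $G_{\{a\}}$ is $G_{\{a,b\}}$.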
Let $H\subseteq G_\Gamma$ be a subgroup. We denote by $\Nor_{G_\Gamma}(H)$ the normalizer of $H$ in $G_\Gamma$. For a parabolic subgroup of $G_\Gamma$ there is a good description of the normalizer. \begin{lemma}\cite[Lemma 3.12 and Proposition 3.13]{Antolin} \label{normalizer} \begin{enumerate} \item Let $H\subseteq G_\Gamma$ be a subgroup. Suppose that the parabolic closure of $H$ in $G_\Gamma$ exists. Then $\Nor_{G_\Gamma}(H)\subseteq \Nor_{G_\Gamma}(\Pc(H))$. \item Let $G_S$ be a non-trivial special parabolic subgroup of $G_\Gamma$. Then $\Nor_{G_\Gamma}(G_S)=G_{S\cup\lk(S)}$. \end{enumerate} \end{lemma} \section{Actions on cube complexes} A detailed description of {\rm CAT$(0)$}\ cube complexes can be found in \cite{Haefliger} and in \cite{Sageev}. Let $\mathcal{C}$ denote the class of finite dimensional {\rm CAT$(0)$}\ cube complexes and let $\mathcal{A}$ denote the subclass of $\mathcal{C}$ consisting of simplicial trees. Inspired by Serre's fixed point property ${\rm F}\mathcal{A}$, Bass introduced the property ${\rm F}\mathcal{A'}$ in \cite{Bass}. A group $G$ has property ${\rm F}\mathcal{A'}$ if every simplicial action of $G$ on every member $T$ of $\mathcal{A}$ is locally elliptic, i.e. if each $g\in G$ fixes some point on the tree $T$. A generalization of property ${\rm F}\mathcal{A'}$ was defined in \cite{Leder}. A group $G$ has property ${\rm F}\mathcal{C'}$ if every simplicial action of $G$ on every member $X$ of $\mathcal{C}$ is locally elliptic, i.e. if each $g\in G$ fixes some point on $X$. Bass proved in \cite{Bass} that every profinite group has property ${\rm F}\mathcal{A'}$. His result was generalized by Alperin to compact groups in \cite{Alperin} and to almost connected locally compact groups in \cite{Alperin2}. The next result was proved by Caprace in \cite[Theorem 2.5]{Caprace}.
\begin{proposition} \label{compact} Let $X$ be a locally Euclidean {\rm CAT$(0)$}\ cell complex with finitely many isometry types of cells, and let $L$ be a compact group acting as an abstract group on $X$ by cellular isometries. Then every element of $L$ is elliptic. In particular, every compact group has property ${\rm F}\mathcal{C}'$. \end{proposition} We recall that a group $G$ is called \emph{divisible} if $\left\{g^n\mid g\in G\right\}=G$ holds for all integers $n\geq1$. Another result which we will need in order to prove Proposition A is the following. \begin{lemma}\cite[Theorem 2.5 Claim 7]{Caprace} \label{divisible} Every divisible group has property ${\rm F}\mathcal{C}'$. \end{lemma} The following result is due to Sageev and follows from the proof of Theorem 5.1 in \cite{Sageev}, see also \cite[Theorem A]{Leder}. \begin{proposition} \label{globalfixedpoint} Let $G$ be a finitely generated group acting by simplicial isometries on a finite dimensional {\rm CAT$(0)$}\ cube complex. If the $G$-action is locally elliptic, then $G$ has a global fixed point. \end{proposition} The last fact we need for the proof of Proposition A concerning global fixed points is the following easy consequence of the Bruhat-Tits Fixed Point Theorem \cite[Lemma 2.1]{Marquis}. \begin{lemma} \label{boundedproduct} Suppose that a group $H$ acts isometrically on a complete {\rm CAT$(0)$}\ space $X$. If $H=H_1H_2\cdots H_r$ is a product of finitely many subgroups $H_j$ each fixing some point in $X$, then $H$ has a global fixed point. \end{lemma} \subsection{Graph products, cube complexes and the building} Associated to finite dimensional graph products are certain finite dimensional {\rm CAT$(0)$}\ cube complexes. We briefly describe the construction of these spaces. For a graph product $G_\Gamma$ we consider the poset \[ P=\left\{gG_T\mid g\in G_\Gamma\text{ and } T \text{ is a clique}\right\}, \] ordered by inclusion (we recall that we allow empty cliques).
The group $G_\Gamma$ acts by left multiplication on this poset and hence simplicially on the flag complex $X_\Gamma$ associated to this coset poset. This flag complex has a canonical cubical structure. With respect to this structure $X_\Gamma$ is the Davis realization of a right-angled building \cite[Theorem 5.1]{Davis}. By \cite[Theorem 11.1]{Davis} the Davis realization of every building is a complete {\rm CAT$(0)$}\ space. Hence $X_\Gamma$ is a finite dimensional {\rm CAT$(0)$}\ cube complex, and $G_\Gamma$ acts isometrically on $X_\Gamma$. The \emph{chambers} of $X_\Gamma$ correspond to the cosets of the trivial subgroup, i.e. to the elements of $G_\Gamma$. The $G_\Gamma$-stabilizer of a chamber (a maximal cube) is therefore trivial. The \emph{vertices} of $X_\Gamma$ correspond to the cosets of the $G_S$, where $S\subseteq V$ is an inclusion-maximal clique. The action of $G_\Gamma$ on $X_\Gamma$ preserves the canonical cubical structure. One nice property of this action is the following: if a subgroup $H\subseteq G_\Gamma$ has a global fixed point in $X_\Gamma$, then there exists a vertex in $X_\Gamma$ which is fixed by $H$. This follows from the fact that the action is type preserving. Furthermore, the stabilizer of a vertex $gG_T$ is equal to $gG_Tg^{-1}$. \begin{lemma} \label{globalFP} Let $G_\Gamma$ be a finite dimensional graph product and let $H$ be a subgroup. If the action of $H$ on the building $X_\Gamma$ is locally elliptic, then $H$ has a global fixed point. \end{lemma} \begin{proof} For each finite subset $X\subseteq H$, the finitely generated group $\langle X\rangle$ acts locally elliptically on $X_\Gamma$. Thus $\langle X\rangle$ has by Proposition \ref{globalfixedpoint} a fixed vertex $gG_S$, for some $g\in G_\Gamma$ and some maximal clique $S$.
It follows that the parabolic closure of $X$ is of the form $\Pc(X)=gG_{S_X}g^{-1}$, where $S_X$ is a clique uniquely determined by $X$. Since there is an upper bound on the size of cliques in $\Gamma$, there exists a finite set $Z\subseteq H$ such that $S_Z$ is maximal among all cliques $S_X$, for $X\subseteq H$ finite. We claim that $H\subseteq\Pc(Z)$. Let $h\in H$ and put $X=Z\cup\{h\}$. If we put $\Pc(X)=aG_{S_X}a^{-1}$ and $\Pc(Z)=bG_{S_Z}b^{-1}$, then \[ aG_{S_X}a^{-1}\supseteq bG_{S_Z}b^{-1} \] because $X\supseteq Z$. Then $S_X\supseteq S_Z$ holds by \ref{ParabolicSubgroups}(1). From the maximality of $S_Z$ we conclude that $S_Z=S_X$. Then $aG_{S_X}a^{-1}=bG_{S_Z}b^{-1}$ by \ref{ParabolicSubgroups}(2). It follows that $H\subseteq \Pc(Z)=bG_{S_Z}b^{-1}$, and thus $H$ has a global fixed point. \end{proof} \section{The proofs of Proposition A and Theorem B} \begin{NewPropositionA} Let $\varphi:L\to G_\Gamma$ be an abstract homomorphism from an almost connected locally compact group $L$ to a finite dimensional graph product $G_\Gamma$. Then $\varphi(L)$ lies in a complete parabolic subgroup of $G_\Gamma$. \end{NewPropositionA} \begin{proof} The group $L$ acts via \[ L\to G_\Gamma\to\Isom(X_\Gamma) \] isometrically and simplicially on the right-angled building $X_\Gamma$. Suppose first that $L$ is compact. Then the $L$-action is by Proposition \ref{compact} locally elliptic. Hence there is a global fixed point by Lemma~\ref{globalFP}. Suppose next that $L$ is connected. By Iwasawa's decomposition \cite[Theorem 13]{Iwasawa} we have \[ L=H_1H_2\cdots H_r K, \] where $K$ is a connected compact group and $H_j\cong\mathbb{R}$ for $j=1, \ldots, r$. Each group $H_j$ has a fixed point by Lemma \ref{divisible}, and $K$ has a fixed point by the result in the previous paragraph. Hence $L$ has a fixed point by Lemma~\ref{boundedproduct}. Now we consider the general case.
If $L$ is almost connected, then the identity component $L^\circ$ has a global fixed point by the previous paragraph. The fixed point set $Z\subseteq X_\Gamma$ of $L^\circ$ is a convex {\rm CAT$(0)$}\ cube complex, because the $L$-action is simplicial and type-preserving. By Proposition~\ref{compact}, the action of $L/L^\circ$ on $Z$ is locally elliptic. Hence the action of $L$ on $X_\Gamma$ is locally elliptic as well. Another application of Lemma~\ref{globalFP} shows that $L$ has a global fixed point. \end{proof} Now we may prove Theorem B. \begin{NewTheoremB} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a finite dimensional graph product $G_\Gamma$. Then either $\varphi$ is continuous, or $\varphi(L)$ lies in a conjugate of a parabolic subgroup $G_{S\cup\lk(S)}$, where $S\neq\varnothing$ is a clique. If every composite $L\xrightarrow{\varphi}G_\Gamma\xrightarrow{r_v} G_v$ is continuous, then $\varphi$ is continuous. \end{NewTheoremB} \begin{proof} Let $L^\circ$ be the connected component of the identity in $L$. We distinguish two cases. \noindent\emph{Case 1: $\varphi(L^\circ)$ is not trivial.} By Proposition A we know that $\varphi(L^\circ)\subseteq gG_Tg^{-1}$ where $T\subseteq V$ is a clique and $g\in G_\Gamma$. Hence ${\Pc}(\varphi(L^\circ))=hG_{S}h^{-1}$, where $\varnothing\neq S\subseteq T$ and $h\in G_\Gamma$. Since $\varphi(L)$ normalizes $\varphi(L^\circ)$, we have by Lemma~\ref{normalizer} that $\varphi(L)\subseteq\Nor_{G_\Gamma}(\Pc(\varphi(L^\circ)))$. This normalizer is of the form $h G_{S\cup\lk(S)} h^{-1}$, for some $h\in G_\Gamma$. We note that in Case 1, the homomorphism $\varphi$ is not continuous, since the image of a connected group under a continuous map is always connected and a connected subgroup of a discrete group is trivial.
\noindent\emph{Case 2: $\varphi(L^\circ)$ is trivial.} Then $\varphi$ factors through an abstract homomorphism $\psi:L/L^\circ\to G_\Gamma$, and $L/L^\circ$ is a totally disconnected locally compact group. By van Dantzig's Theorem \cite[III\S 4, No. 6]{Bourbaki} there exists a compact open subgroup $K$ in $L/L^\circ$. \noindent\emph{Subcase 2a: There is a compact open subgroup $K\subseteq L/L^\circ$ such that $\psi(K)$ is trivial.} Then the kernel of $\psi$ is open in $L/L^\circ$ and hence $\psi$ and $\varphi$ are continuous. \noindent\emph{Subcase 2b: There is no compact open subgroup $K\subseteq L/L^\circ$ such that $\psi(K)$ is trivial.} Let $\mathcal K$ denote the collection of all compact open subgroups of $L/L^\circ$. We note that $L$ acts on $\mathcal K$ by conjugation. For $K\in\mathcal K$ we put $\Pc(\psi(K))=gG_{S_K}g^{-1}$. Thus $S_K$ is a clique in $\Gamma$ which is uniquely determined by $K$. We choose $M\in\mathcal K$ in such a way that $S_M$ is minimal and we note that $S_M\neq\varnothing$. Given $a\in L/L^\circ$ we have $M\cap aMa^{-1}\in\mathcal K$ and \[\Pc(\psi(aMa^{-1}))=\psi(a)\Pc(\psi(M))\psi(a)^{-1}.\] From \ref{ParabolicSubgroups}(1) and \[ \Pc(\psi(M))\supseteq\Pc(\psi(M\cap aMa^{-1}))\subseteq\Pc(\psi(aMa^{-1})) \] we obtain that \[ S_M\supseteq S_{M\cap aMa^{-1}}\subseteq S_{aMa^{-1}}. \] Since both $S_{aMa^{-1}}$ and $S_M$ are minimal we conclude that \[S_{M}=S_{M\cap aMa^{-1}}=S_{aMa^{-1}}\] and that \[\Pc(\psi(M))=\Pc(\psi(aMa^{-1}))=\psi(a)\Pc(\psi(M))\psi(a)^{-1}.\] Therefore $\psi(a)$ normalizes $\Pc(\psi(M))$, whence \[ \varphi(L)= \psi(L/L^\circ)\subseteq hG_{S_M\cup\lk(S_M)}h^{-1}, \] for some $h\in G_\Gamma$ by Lemma~\ref{normalizer}. Suppose now towards a contradiction that $\varphi$ is not continuous, but that each composite $L\xrightarrow{\varphi}G_\Gamma\xrightarrow{r_v} G_v$ is continuous. Then $\varphi(L)\subseteq gG_{S\cup\lk(S)}g^{-1}$ for some nonempty clique $S$.
There is a direct product decomposition \[ G_{S\cup\lk(S)}=G_S\times G_{\lk(S)}=\prod_{v\in S}G_v\times G_{\lk(S)} \] and therefore $\varphi$ factors as a product of commuting homomorphisms \[ \varphi(a)=g\prod_{v\in S}\varphi_v(a)\varphi_{\lk(S)}(a)g^{-1}, \] with $\varphi_v=r_v\circ\varphi$ and $\varphi_{\lk(S)}=r_{\lk(S)}\circ\varphi$. Here we use the retractions $r$ introduced in Section~\ref{GraphProductsSection}. Since the $\varphi_v$ are all continuous and $\varphi$ is not, $\varphi_{\lk(S)}$ is not continuous. Hence we find a clique $T\subseteq\lk(S)$ such that $\varphi_{\lk(S)}(L)\subseteq hG_{T\cup(\lk(T)\cap\lk(S))}h^{-1}$. But then $S\cup T$ is a clique (because $T\subseteq\lk(S)$) which is strictly bigger than $S$. If we continue in this fashion, we end up after finitely many steps with an empty link, because $\Gamma$ has finite dimension. Thus $\varphi$ is a finite product of commuting continuous homomorphisms, and therefore itself continuous. This is a contradiction. \end{proof} The referee has pointed out that if every composite $L\xrightarrow{\varphi}G_\Gamma\xrightarrow{r_v} G_v$ is continuous, then $\varphi$ is continuous, whether or not the graph product is finite dimensional (see the proof of Theorem 3.3 in \cite{Conner}). \section{The proof of Proposition C} We consider abstract homomorphisms from locally compact groups $L$ into groups $G$ which are in the class $\mathcal G$. We recall from the introduction that for such a group $G$, every torsion subgroup $T\subseteq G$ is finite, and every abelian subgroup $A\subseteq G$ is a (possibly infinite) direct sum of cyclic groups. In particular, such a group $G$ has no nontrivial divisible abelian subgroups. \begin{NewPropositionC} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a group $G$ in the class $\mathcal{G}$. Then $\varphi$ factors through the canonical projection $\pi:L\to L/L^\circ$. If $L$ is almost connected, then $\varphi(L)$ is finite.
\end{NewPropositionC} \begin{proof} We first show that every homomorphism $\rho:K\to G$ has finite image if $K$ is compact. Suppose that $g\in K$. We claim that $\rho(g)$ has finite order. The subgroup $H=\overline{\langle g\rangle}$ is compact abelian, whence $\rho(H)=F\times \mathbb Z^{(J)}$, where $F$ is a finite abelian group. By Dudley's result \cite{Dudley}, a compact group has no nontrivial free abelian quotients. Therefore $\rho(H)$ is finite and in particular, $\rho(g)$ has finite order. Since $G$ contains no infinite torsion groups, $\rho(K)$ is finite. Now we show that $\varphi(L^\circ)$ is trivial. By Iwasawa's Theorem \cite[Theorem 13]{Iwasawa} there is a decomposition $L^\circ=H_1\cdots H_r K$, where $H_j\cong\mathbb R$ for $j=1,\ldots,r$ and where $K$ is a compact connected group. The groups $H_1,\ldots,H_r$ are abelian and divisible. From our assumptions on the class $\mathcal G$ we see that the abelian groups $\varphi(H_j)$ are trivial, for $j=1,\ldots,r$. The compact group $K$ is connected and therefore divisible \cite[Theorem 9.35]{HoMo}. A finite divisible group is trivial, and therefore $\varphi(K)$ is trivial as well. This shows that $\varphi(L^\circ)$ is trivial. The first paragraph of the present proof shows then that $\varphi(L)$ is finite if $L/L^\circ$ is compact. \end{proof} \section{The proof of Theorem D} We are now ready to prove Theorem D. \begin{NewTheoremD} Let $\varphi$ be an abstract homomorphism from a locally compact group $L$ to a group $G$ in the class $\mathcal{G}$. Then either $\varphi$ is continuous, or $\varphi(L)$ lies in the normalizer of a finite non-trivial subgroup of $G$. \end{NewTheoremD} \begin{proof} Let $L^\circ$ be the connected component of the identity in $L$. By Proposition C the homomorphism $\varphi$ factors through a homomorphism $\psi:L/L^\circ\to G$. The totally disconnected locally compact group $L/L^\circ$ contains by van Dantzig's Theorem \cite[III\S 4, No. 6]{Bourbaki} compact open subgroups. 
We distinguish two cases. \noindent\emph{Case 1: $\psi(K)$ is trivial for some compact open subgroup $K\subseteq L/L^\circ$.} Then the kernel of $\psi$ is open and therefore $\psi$ and $\varphi$ are continuous. \noindent\emph{Case 2: $\psi(K)$ is nontrivial for every compact open subgroup $K\subseteq L/L^\circ$.} By Proposition C, the image $\psi(K)$ of such a group $K$ is finite. Among the compact open subgroups of $L/L^\circ$ we choose $M$ such that $\psi(M)$ has minimal order. Given $g\in L/L^\circ$, the subgroup $M\cap gMg^{-1}$ is again compact and open, and $\psi(M\cap gMg^{-1})$ is contained in both $\psi(M)$ and $\psi(gMg^{-1})=\psi(g)\psi(M)\psi(g)^{-1}$. Since the latter group has the same order as $\psi(M)$, minimality forces $\psi(gMg^{-1})=\psi(M\cap gMg^{-1})=\psi(M)$. It follows that $\psi(g)$ normalizes the finite nontrivial subgroup $\psi(M)$; hence $\varphi(L)$ lies in the normalizer of $\psi(M)$. \end{proof} \section{Some remarks on the class $\mathcal G$} In this last section we show that the class $\mathcal{G}$ contains many groups. \begin{proposition}\label{LargeClass} The class $\mathcal G$ is closed under passage to subgroups, under passage to finite products, and under passage to arbitrary coproducts. \end{proposition} \begin{proof} If $H\subseteq G\in\mathcal G$, then clearly $H\in\mathcal G$. If $G_1,\ldots,G_r\in\mathcal G$ and if $T\subseteq \prod_{j=1}^rG_j$ is a torsion group, then each projection $\pi_j(T)=T_j\subseteq G_j$ is also a torsion group and therefore finite. Hence $T\subseteq\prod_{j=1}^rT_j$ is finite. Similarly, if $A\subseteq \prod_{j=1}^rG_j$ is abelian, then $A$ is contained in the product $\prod_{j=1}^r\pi_j(A)$, which is a direct sum of a finite abelian group and a free abelian group. Hence $A$ itself is a direct sum of a finite abelian group and a free abelian group. Finally, suppose that $(G_j)_{j\in J}$ is a family of groups in $\mathcal G$. By the Kurosh Subgroup Theorem \cite{Baer}, every subgroup of the coproduct $\coprod_{j\in J}G_j$ is itself a coproduct $F*\coprod_{j\in J}g_jU_jg_j^{-1}$, where $F$ is a free group, $U_j\subseteq G_j$ is a subgroup and the $g_j$ are elements of $\coprod_{i\in J}G_i$. If such a subgroup is abelian, then it is either cyclic or conjugate to a subgroup of one of the free factors; in either case it is a direct sum of cyclic groups. If it is a torsion group, then $F$ is trivial and at most one of the factors $g_jU_jg_j^{-1}$ is nontrivial, since a nontrivial free product contains elements of infinite order; hence the subgroup is conjugate to a subgroup of some $G_j$ and is therefore finite.
\end{proof} \begin{proposition} \label{Hyperbolic} Every hyperbolic group $G$ is in the class $\mathcal{G}$. \end{proposition} \begin{proof} By a theorem of Gromov \cite[Chap.8 Cor.36]{Gromov}, every torsion subgroup of a hyperbolic group is finite. Furthermore, every abelian subgroup of a hyperbolic group is finitely generated. \end{proof} \begin{proposition} \label{Artin} Let $A$ be an Artin group. If $A$ is a right-angled Artin group or an Artin group of finite type, then $A$ is torsion free and every abelian subgroup of $A$ is finitely generated. \end{proposition} \begin{proof} Every right-angled Artin group is torsion free by \cite[Corollary 3.28]{Green}. Moreover $A$ is a {\rm CAT$(0)$}\ group, see \cite{Charney4}. Hence every abelian subgroup of $A$ is finitely generated, see \cite[II Corollary 7.6]{Haefliger}. If $A$ is an Artin group of finite type, then $A$ is torsion free by \cite{Brieskorn}. By \cite[Corollary 4.2]{Bestvina}, every abelian subgroup of $A$ is finitely generated. \end{proof} We note that it is an open question if every Artin group is torsion free \cite[Conjecture 12]{Charney}. \begin{proposition} \label{Coxeter} Let $W$ be a Coxeter group. Then every torsion subgroup of $W$ is finite and every abelian subgroup of $W$ is finitely generated. \end{proposition} \begin{proof} It was proved in \cite[Theorem 14.1]{Moussong} that Coxeter groups are {\rm CAT$(0)$}\ groups. Hence every abelian subgroup of $W$ is finitely generated \cite[II Corollary 7.6]{Haefliger} and the order of finite subgroups of $W$ is bounded \cite[II Corollary 2.8(b)]{Haefliger}. Let $T\subseteq W$ be a torsion group. Since $W$ is a linear group \cite[Corollary 6.12.11]{Davis2} and since every finitely generated linear torsion group is finite \cite[I]{Schur}, it follows that every finitely generated subgroup of $T$ is finite. Since the order of finite subgroups of $T$ is bounded, $T$ is finite. 
\end{proof} \begin{proposition} \label{OutFn} The groups $\mathrm{GL}_n(\mathbb{Z})$, the groups $\Out(F_n)$ of outer automorphisms of free groups and the mapping class groups $\Mod(S_g)$ of orientable surfaces of genus $g$ are in the class $\mathcal{G}$. \end{proposition} \begin{proof} Since $\mathrm{GL}_n(\mathbb{Z})$ is a linear group, it follows that every finitely generated torsion subgroup is finite \cite[I]{Schur}. Since the order of finite subgroups in $\mathrm{GL}_n(\mathbb{Z})$ is bounded \cite{Minkowski}, we obtain that every torsion subgroup of $\mathrm{GL}_n(\mathbb{Z})$ is finite. It was proved in \cite{Malcev} that every abelian subgroup of $\mathrm{GL}_n(\mathbb{Z})$ is finitely generated. Hence $\mathrm{GL}_n(\mathbb{Z})$ is in the class $\mathcal{G}$. The kernel of the map $\Out(F_n)\to\mathrm{GL}_n(\mathbb{Z})$ which is induced by the abelianization of $F_n$ is torsion free \cite{Baumslag}. Since every torsion subgroup of $\mathrm{GL}_n(\mathbb{Z})$ is finite, it follows that every torsion subgroup of $\Out(F_n)$ is finite. Every abelian subgroup of $\Out(F_n)$ is finitely generated, see \cite{Lubotzky}. Thus $\Out(F_n)$ is in the class $\mathcal{G}$. It was proved in \cite[Theorem A]{Birman} that every abelian subgroup of $\Mod(S_g)$ is finitely generated. Further, it was proved in \cite[Theorem 1]{Nikolaev} that $\Mod(S_g)$ is a linear group. Hence every finitely generated torsion subgroup is finite \cite[I]{Schur}. We know by \cite{Hurwitz} that the order of finite subgroups in $\Mod(S_g)$ is bounded. Therefore every torsion subgroup of $\Mod(S_g)$ is finite. Thus $\Mod(S_g)$ is in the class $\mathcal{G}$. \end{proof} \end{document}
\begin{document} \title{Capacity Bounded Grammars and Petri Nets} \def\titlerunning{Capacity Bounded Grammars and Petri Nets} \def\authorrunning{R.~Stiebe, S.~Turaev} \author{Ralf Stiebe \institute{Fakult{\"a}t f{\"u}r Informatik\\ Otto-von-Guericke-Universit{\"a}t Magdeburg\\ PF 4120 -- D-39106 Magdeburg -- Germany} \email{[email protected]} \and Sherzod Turaev \institute{Universitat Rovira i Virgili\\ Facultat de Lletres -- GRLMC\\ E-43005 Tarragona -- Spain} \email{[email protected]} } \maketitle \begin{abstract} A capacity bounded grammar is a grammar whose derivations are restricted by assigning a bound to the number of occurrences of every nonterminal symbol in the sentential forms. In the paper the generative power and closure properties of capacity bounded grammars and their Petri net controlled counterparts are investigated. \end{abstract} \section{Introduction} \label{sec:introduction} The close relationship between Petri nets and language theory has been studied extensively for a long time \cite{cre:man,das:pau}. Results from the theory of Petri nets have been applied successfully to provide elegant solutions to complicated problems from language theory \cite{esp,hau:jan}. A context-free grammar can be associated with a context-free (communication-free) Petri net, whose places and transitions correspond to the nonterminals and the rules of the grammar, respectively, and whose arcs and weights reflect the change in the number of nonterminals when applying a rule. In some recent papers, context-free Petri nets enriched by additional components have been used to define regulation mechanisms for the defining grammar \cite{das:tur,tur}. Our paper continues the research in this direction by restricting the (context-free or extended) Petri nets with place capacity.
Quite obviously, a context-free Petri net with place capacity regulates the defining grammar by permitting only those derivations where the number of each nonterminal in each sentential form is bounded by its capacity. A similar mechanism was discussed in \cite{gin:spa1}, where the total number of nonterminals in each sentential form is bounded by a fixed integer. There it was shown that grammars regulated in this way generate the family of context-free languages of finite index, even if arbitrary nonterminal strings are allowed as left-hand sides. The main result of this paper is that, somewhat surprisingly, grammars with capacity bounds have a greater generative power. This paper is organized as follows. Section~\ref{sec:def} contains some necessary definitions and notations from language and Petri net theory. The concepts of grammars with capacities and grammars controlled by Petri nets with place capacities are introduced in section~\ref{sec:capacities}. The generative power and closure properties of capacity-bounded grammars are investigated in sections \ref{sec:power-gs} and \ref{sec:nb-cfg}. Results on grammars controlled by Petri nets with place capacities are given in section~\ref{sec:PNC}. \section{Preliminaries} \label{sec:def} Throughout the paper, we assume that the reader is familiar with basic concepts of formal language theory and Petri net theory; for details we refer to \cite{das:pau,han,rei:roz}. The set of natural numbers is denoted by $\mathbb{N}$, the power set of a set $S$ by $\Powerset{S}$. We use the symbol $\subseteq$ for inclusion and $\subset$ for proper inclusion. The \emph{length} of a string $w \in X^*$ is denoted by $|w|$, the number of occurrences of a symbol $a$ in $w$ by $|w|_a$ and the number of occurrences of symbols from $Y\subseteq X$ in $w$ by~$|w|_Y$. The \emph{empty} string is denoted by~$\lambda$.
A \emph{phrase structure grammar} (due to Ginsburg and Spanier \cite{gin:spa1}) is a quadruple $G=(V, \Sigma, S, R)$ where~$V$ and $\Sigma$ are two finite disjoint alphabets of \emph{nonterminal} and \emph{terminal} symbols, respectively, $S\in V$ is the \emph{start symbol} and \hbox{$R\subseteq V^+\times (V\cup \Sigma)^*$} is a finite set of \emph{rules}. A string $x\in (V\cup \Sigma)^*$ \emph{directly derives} a string $y\in (V\cup \Sigma)^*$ in $G$, written as $x\Rightarrow y$, if and only if there is a rule $u\to v\in R$ such that $x=x_1ux_2$ and $y=x_1vx_2$ for some $x_1, x_2\in (V\cup \Sigma)^*$. The reflexive and transitive closure of the relation $\Rightarrow$ is denoted by $\Rightarrow^*$. A derivation using the sequence of rules $\pi=r_1r_2\cdots r_k$, $r_i\in R$, $1\leq i\leq k$, is denoted by $\xRightarrow{\pi}$ or $\xRightarrow{r_1r_2\cdots r_k}$. The \emph{language} generated by $G$, denoted by $L(G)$, is defined by $L(G)=\{w\in \Sigma^*: S\Rightarrow^* w\}.$ A phrase structure grammar $G=(V, \Sigma, S, R)$ is called \emph{context-free} if each rule $u\to v\in R$ has $u\in V$. The family of context-free languages is denoted by $\mathbf{CF}$. A \emph{matrix grammar} is a quadruple $G=(V, \Sigma, S, M)$ where $V, \Sigma, S$ are defined as for a context-free grammar, $M$ is a finite set of \emph{matrices} which are finite strings (or finite sequences) over a set of context-free rules. The language generated by the grammar $G$ consists of all strings $w\in \Sigma^*$ such that there is a derivation $S\xRightarrow{r_1r_2\cdots r_n}w$ where $r_1r_2\cdots r_n$ is a concatenation of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$. The family of languages generated by matrix grammars without erasing rules (with erasing rules, respectively) is denoted by $\mathbf{MAT}$ (by $\mathbf{MAT}^{\lambda}$, respectively). 
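As a small aside (not from the paper; the encoding of rules as Python tuples is our own), the control condition of a matrix grammar, namely that the rule sequence $r_1r_2\cdots r_n$ used in a derivation decomposes into complete matrices, can be checked by a simple dynamic program:

```python
def is_matrix_controlled(seq, matrices):
    """Check whether the rule sequence seq = r_1 ... r_n is a concatenation
    of complete matrices from M (the control condition of a matrix grammar).
    ok[i] means: the prefix seq[:i] decomposes into full matrices."""
    seq = tuple(seq)
    matrices = [tuple(m) for m in matrices]
    ok = [False] * (len(seq) + 1)
    ok[0] = True
    for i in range(len(seq)):
        if ok[i]:
            for m in matrices:
                if m and seq[i:i + len(m)] == m:
                    ok[i + len(m)] = True
    return ok[len(seq)]
```

For the vector grammars defined below, the matrices may interleave (shuffle) instead of being concatenated, so the corresponding check is an interleaving test rather than a prefix decomposition.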
A \emph{vector grammar} is defined like a matrix grammar, but the derivation sequence $r_1r_2\cdots r_n$ has to be a shuffle of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$. A \emph{semi-matrix grammar} is defined like a matrix grammar, but the derivation sequence $r_1r_2\cdots r_n$ has to be a semi-shuffle of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$, i.\,e., an element of the shuffle of sequences from $\bigcup_{i=1}^t m_i^*$ where $$M=\{m_1,\ldots,m_t\}.$$ The language families generated by vector and semi-matrix grammars are denoted by ${\bf V}^{[\lambda]}$ and ${\bf sMAT}^{[\lambda]}$. A \emph{Petri net} (PN) is a construct $N = (P, T, F, \phi)$ where $P$ and $T$ are disjoint finite sets of \emph{places} and \emph{transitions}, respectively, $F \subseteq (P\times T) \cup (T\times P)$ is the set of \emph{directed arcs}, and $$\phi: (P\times T) \cup (T\times P) \rightarrow \{0, 1, 2, \dots\}$$ is a \emph{weight function} with $\phi(x,y)=0$ for all $(x,y)\in ((P\times T) \cup (T\times P))-F$. A mapping $$\mu: P \rightarrow \{0,1,2, \ldots\}$$ is called a \emph{marking}. For each place $p\in P$, $\mu(p)$ gives the number of \emph{tokens} in $p$. $^{\bullet}x=\{y: \, (y,x)\in F\}$ and $x^{\bullet}=\{y: \, (x,y)\in F\}$ are called the sets of \emph{input} and \emph{output} elements of $x\in P\cup T$, respectively. A sequence of places and transitions $\rho=x_1x_2\cdots x_n$ is called a \emph{path} if and only if no place or transition except possibly $x_1$ and $x_n$ appears more than once, and $x_{i+1}\in x^\bullet_{i}$ for all $1\leq i\leq n-1$. We denote by $P_\rho, T_\rho, F_\rho$ the sets of places, transitions and arcs of $\rho$. Two paths $\rho_1$, $\rho_2$ are called \emph{disjoint} if $P_{\rho_1}\cap P_{\rho_2}=\emptyset$ and $T_{\rho_1}\cap T_{\rho_2}=\emptyset$. A path $\rho=t_{1}p_{1}t_{2}p_{2}\cdots p_{k-1}t_{k}$ ($\rho=p_{1}t_{1}p_{2}t_{2}\cdots t_{k}p_{1}$, respectively) is called a \emph{chain} (\emph{cycle}, respectively).
A transition $t \in T$ is \emph{enabled} by a marking $\mu$ iff $\mu(p)\geq \phi(p,t)$ for all $p\in P$. In this case $t$ can \emph{occur}. Its occurrence transforms the marking $\mu$ into the marking $\mu'$ defined for each place $p \in P$ by $\mu'(p)=\mu(p)-\phi(p,t)+\phi(t,p)$. This transformation is denoted by $\mu\xrightarrow{t}\mu'$. A finite sequence $t_1t_2\cdots t_k$ of transitions is called \emph{an occurrence sequence} enabled at a marking $\mu$ if there are markings $\mu_1, \mu_2, \ldots, \mu_k$ such that $\mu \xrightarrow{t_1} \mu_1 \xrightarrow{t_2} \ldots \xrightarrow{t_k} \mu_k$. For each $1\leq i\leq k$, the marking $\mu_i$ is called \emph{reachable} from the marking $\mu$. $\mathcal{R}(N, \mu)$ denotes the set of all markings reachable from a marking $\mu$. A \emph{marked} Petri net is a system $N=(P, T, F, \phi, \iota)$ where $(P, T, F, \phi)$ is a Petri net and $\iota$ is the \emph{initial marking}. Let $M$ be a set of markings, which will be called \emph{final} markings. An occurrence sequence $\nu$ of transitions is called \emph{successful} for $M$ if it is enabled at the initial marking $\iota$ and ends at a final marking $\tau\in M$. A Petri net $N$ is said to be $k$-\emph{bounded} if the number of tokens in each place does not exceed a finite number $k$ for any marking reachable from the initial marking $\iota$, i.\,e., $\mu(p)\leq k$ for all $p\in P$ and for all $\mu\in \mathcal{R}(N, \iota)$. A Petri net is called \emph{bounded} if it is $k$-bounded for some $k\geq 1$. A Petri net with \emph{place capacity} is a system $N=(P, T, F, \phi, \iota,\kappa)$ where $(P, T, F, \phi,\iota)$ is a marked Petri net and $\kappa:P \to \mathbb{N}$ is a function assigning to each place a number of maximal admissible tokens. A marking $\mu$ of $N$ is \emph{valid} if $\mu(p)\leq \kappa(p)$ for each place $p\in P$. In a Petri net with place capacity, a transition $t \in T$ is \emph{enabled} by a marking $\mu$ if, in addition, the successor marking is valid.
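The occurrence rule, including the strengthened enabling condition in the presence of place capacities, can be sketched as follows (an illustrative Python fragment; markings are dictionaries and the function names are our own):

```python
def fire(marking, t, pre, post):
    """Successor marking: mu'(p) = mu(p) - phi(p, t) + phi(t, p)."""
    succ = dict(marking)
    for p, w in pre[t].items():
        succ[p] = succ.get(p, 0) - w
    for p, w in post[t].items():
        succ[p] = succ.get(p, 0) + w
    return succ

def enabled(marking, t, pre, post, capacity=None):
    """t is enabled iff mu(p) >= phi(p, t) for all places p; in a net with
    place capacity the successor marking must in addition be valid, i.e.
    respect the capacity of every place."""
    if any(marking.get(p, 0) < w for p, w in pre[t].items()):
        return False
    if capacity is not None:
        succ = fire(marking, t, pre, post)
        return all(succ[p] <= capacity[p] for p in succ)
    return True
```

Note how a capacity can disable a transition that would be enabled in the unrestricted net; it is exactly this effect that regulates the controlled grammars introduced later.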
A \emph{cf Petri net} with respect to a context-free grammar $G=(V,\Sigma, S, R)$ is a system $$N=(P, T, F, \phi, \beta, \gamma, \iota)$$ where \begin{itemize} \item labeling functions $\beta:P\rightarrow V$ and $\gamma:T\rightarrow R$ are bijections; \item $(p,t)\in F$ iff $\gamma(t)=A\rightarrow \alpha$ and $\beta(p)=A$ and the weight of the arc $(p,t)$ is 1; \item $(t,p)\in F$ iff $\gamma(t)=A\rightarrow \alpha$, $\beta(p)=x$ where $|\alpha|_x>0$ and the weight of the arc $(t,p)$ is $|\alpha|_x$; \item the initial marking $\iota$ is defined by $\iota(\beta^{-1}(S))= 1$ and $\iota(p) = 0$ for all $p\in P-\beta^{-1}(S)$. \end{itemize} Further we recall the definitions of extended cf Petri nets, and grammars controlled by these Petri nets (for details, see \cite{das:tur,tur}). Let $G=(V, \Sigma, S, R)$ be a context-free grammar with its corresponding cf Petri net $$N=(P, T, F, \phi, \beta, \gamma, \iota).$$ Let $T_1, T_2, \ldots, T_n$ be a partition of $T$. 1. Let $\Pi=\{\rho_1, \rho_2, \ldots, \rho_n\}$ be the set of disjoint chains such that $T_{\rho_i}=T_i$, $1\leq i\leq n$, and $$\bigcup_{\rho\in\Pi}P_\rho\cap P=\emptyset.$$ An \emph{$h$-Petri net} is a system $N_h=(P\cup Q, T, F\cup E, \varphi, \zeta, \gamma, \mu_0, \tau)$ where \hbox{$Q=\bigcup_{\rho\in\Pi}P_\rho$} and $E=\bigcup_{\rho\in\Pi}F_\rho$; the weight function $\varphi$ is defined by $\varphi(x,y)=\phi(x,y)$ if $(x,y)\in F$ and $\varphi(x,y)=1$ if $(x,y)\in E$; the labeling function $\zeta:P\cup Q\rightarrow V\cup\{\lambda\}$ is defined by $\zeta(p)=\beta(p)$ if $p\in P$ and $\zeta(p)=\lambda$ if $p\in Q$; the initial marking $\mu_0$ is defined by $\mu_0(p)=\iota(p)$ if $p\in P$ and $\mu_0(p)=0$ if $p\in Q$; $\tau$ is the final marking where $\tau(p)=0$ for all $p\in P\cup Q$. 2. 
Let $\Pi=\{\rho_1, \rho_2, \ldots, \rho_n\}$ be the set of disjoint cycles such that $T_{\rho_i}=T_i$, $1\leq i\leq n$, and $$\bigcup_{\rho\in\Pi}P_\rho\cap P=\emptyset.$$ A \emph{$c$-Petri net} is a system $N_c=(P\cup Q, T, F\cup E, \varphi, \zeta, \gamma, \mu_0, \tau)$ where $Q=\bigcup_{\rho\in\Pi}P_\rho$ and $E=\bigcup_{\rho\in\Pi}F_\rho$; the weight function $\varphi$ is defined by $\varphi(x,y)=\phi(x,y)$ if \hbox{$(x,y)\in F$} and $\varphi(x,y)=1$ if $(x,y)\in E$; the labeling function $\zeta:P\cup Q\rightarrow V\cup\{\lambda\}$ is defined by $\zeta(p)=\beta(p)$ if $p\in P$ and $\zeta(p)=\lambda$ if $p\in Q$; writing $P_{\rho_i}=\{p_{i,1},\ldots,p_{i,k_i}\}$, the initial marking $\mu_0$ is defined by $\mu_0(p)=\iota(p)$ if $p\in P$, and $\mu_0(p_{i,1})=1$, $\mu_0(p_{i,j})=0$ for $1\leq i\leq n$, $2\leq j\leq k_i$; $\tau$ is the final marking where $\tau(p)=0$ if $p\in P$, and $\tau(p_{i,1})=1$, $\tau(p_{i,j})=0$ for $1\leq i\leq n$, $2\leq j\leq k_i$. 3. Let $\Pi=\{\rho_1, \rho_2, \ldots, \rho_n\}$ be the set of cycles such that $T_{\rho_i}\!=T_i$, $1\leq i\leq n$, \hbox{$P_{\rho_1}\cap P_{\rho_2}\cap \cdots \cap P_{\rho_n}\!=\!\{p_0\}$} and $$\bigcup_{\rho\in\Pi}P_\rho\cap P=\emptyset.$$ An \emph{$s$-Petri net} is a system $N_s=(P\cup Q, T, F\cup E, \varphi, \zeta, \gamma, \mu_0, \tau)$ where $Q=\bigcup_{\rho\in\Pi}P_\rho, E=\bigcup_{\rho\in\Pi}F_\rho$; the weight function $\varphi$ is defined by $\varphi(x,y)=\phi(x,y)$ if $(x,y)\in F$ and $\varphi(x,y)=1$ if $(x,y)\in E$; the labeling function $\zeta:P\cup Q\rightarrow V\cup\{\lambda\}$ is defined by $\zeta(p)=\beta(p)$ if $p\in P$ and $\zeta(p)=\lambda$ if $p\in Q$; $\mu_0$ is the initial marking where $\mu_0(p_0)=1$, $\mu_0(p)=\iota(p)$ if $p\in P$ and $\mu_0(p)=0$ if $p\in Q-\{p_0\}$; $\tau$ is the final marking where $\tau(p_0)=1$ and $\tau(p)=0$ if $p\in (P\cup Q)-\{p_0\}$.
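The cf Petri net underlying all three extended constructions is straightforward to build from a context-free grammar; here is a sketch under our own encoding (places are the nonterminals themselves, transitions are rule indices):

```python
from collections import Counter

def cf_petri_net(rules, start):
    """cf Petri net of a context-free grammar: one place per nonterminal A
    and one transition per rule A -> alpha; the arc (A, t) has weight 1 and
    the arc (t, X) has weight |alpha|_X for every nonterminal X in alpha.
    The initial marking puts a single token on the start symbol's place."""
    nonterminals = {lhs for lhs, _ in rules}
    pre, post = {}, {}
    for t, (lhs, rhs) in enumerate(rules):
        pre[t] = {lhs: 1}
        post[t] = {X: n for X, n in Counter(rhs).items() if X in nonterminals}
    initial = {A: (1 if A == start else 0) for A in nonterminals}
    return pre, post, initial
```

Firing transition $t$ then mirrors one application of the rule $\gamma(t)$: the token count of a place records how often the corresponding nonterminal occurs in the current sentential form.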
\begin{example} Figure \ref{fig:xPNs} depicts extended cf Petri nets which are constructed with respect to the context-free grammar $G'=(\{S, A, B\}, \Sigma, S, R)$ where $R$ consists of $r_0: S\to AB$, $r_1: A\to \lambda$, $r_3: A\to aA$, $r_5: A\to bA$, $r_2: B\to \lambda$, $r_4: B\to aB$, $r_6: B\to bB$. $\diamond$ \end{example} \begin{figure} \caption{Extended Petri nets.} \label{fig:hPN} \label{fig:cPN} \label{fig:sPN} \label{fig:xPNs} \end{figure} A \emph{$z$-PN controlled grammar} is a system $G=(V, \Sigma, S, R, N_z)$ where \hbox{$G'=(V, \Sigma, S, R)$} is a context-free grammar and $N_z$ is a $z$-Petri net with respect to the context-free grammar $G'$, where $z\in\{h, c, s\}$. The \emph{language} generated by a $z$-Petri net controlled grammar $G$ consists of all strings $w\in \Sigma^*$ such that there is a derivation $S\xRightarrow{r_1r_2\cdots r_k}w$ and a successful occurrence sequence of transitions $\nu=t_1t_2\cdots t_k$ of~$N_z$ such that $r_1r_2\cdots r_k=\gamma(t_1t_2\cdots t_k)$. \section{Grammars and Petri nets with capacities} \label{sec:capacities} We will now introduce grammars with capacities and show some relations to similar concepts known from the literature. A \emph{capacity-bounded} grammar is a quintuple $G=(V,\Sigma,S,R,\kappa)$ where $G'=(V,\Sigma,S,R)$ is a grammar and $\kappa: V \to \mathbb{N}$ is a capacity function. The language of $G$ contains all words $w \in L(G')$ that have a derivation $S\Rightarrow^* w$ such that $|\beta|_A\leq \kappa(A)$ for all $A\in V$ and each sentential form $\beta$ of the derivation. The families of languages generated by arbitrary capacity-bounded grammars (due to Ginsburg and Spanier) and by context-free capacity-bounded grammars are denoted by $\mathbf{GS}_{\mathit{cb}}$ and $\mathbf{CF}_{\mathit{cb}}$, respectively. The capacity function mapping each nonterminal to $1$ is denoted by $\mathbf{1}$.
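To see the definition in action, consider a toy context-free grammar (our own example, not from the paper) with the rules $S\to SS$ and $S\to a$ and capacity $\kappa(S)=1$: the rule $S\to SS$ is never applicable, since the resulting sentential form would contain two occurrences of $S$, so the capacity-bounded language shrinks from $a^+$ to $\{a\}$. A small brute-force search over derivations makes this concrete:

```python
from collections import deque

def cb_language(rules, nonterminals, start, kappa, max_len):
    """Terminal words of length <= max_len having a derivation in which every
    sentential form beta satisfies |beta|_A <= kappa[A] for all A."""
    seen, words = {start}, set()
    queue = deque([start])
    while queue:
        form = queue.popleft()
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:
                nxt = form[:i] + rhs + form[i + len(lhs):]
                i = form.find(lhs, i + 1)
                n_term = sum(c not in nonterminals for c in nxt)
                if (nxt in seen or n_term > max_len
                        or any(nxt.count(A) > kappa[A] for A in nonterminals)):
                    continue
                seen.add(nxt)
                if n_term == len(nxt):
                    words.add(nxt)
                else:
                    queue.append(nxt)
    return words
```

With $\kappa(S)=1$ the search returns only $\{a\}$; relaxing the capacity to $2$ already yields $a$, $aa$, $aaa$, and so on, up to the length bound.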
Capacity-bounded grammars are closely related to nonterminal-bounded, derivation-bounded and finite index grammars. A grammar $G=(V,\Sigma,S,R)$ is \emph{nonterminal bounded} if $|\beta|_V\leq k$ for some fixed $k \in \mathbb{N}$ and all sentential forms $\beta$ derivable in $G$. The \emph{index} of a derivation in $G$ is the maximal number of nonterminal symbols in its sentential forms. $G$ is of \emph{finite index} if every word in $L(G)$ has a derivation of index at most $k$ for some fixed $k\in \mathbb{N}$. The family of context-free languages of finite index is denoted by $\mathbf{CF}_{\mathit{fin}}$. A \emph{derivation-bounded} grammar is a quintuple $G=(V,\Sigma,S,R,k)$ where $G'=(V,\Sigma,S,R)$ is a grammar and $k \in \mathbb{N}$ is a bound on the number of allowed nonterminals. The language of $G$ contains all words $w \in L(G')$ that have a derivation $S\Rightarrow^* w$ such that $|\beta|_V\leq k$, for each sentential form $\beta$ of the derivation. It is well-known that the family of derivation-bounded languages is equal to $\mathbf{CF}_{\mathit{fin}}$, even if arbitrary grammars due to Ginsburg and Spanier are permitted \cite{gin:spa2}. \begin{example}\label{exa:NBLnotCF1} Let $G=(\{S, A, B, C, D, E, F\}, \{a, b, c\}, S, R,\mathbf{1})$ be the capacity-bounded grammar where $R$ consists of the rules: $$ \begin{array}{llll} r_1: S\to ABCD, & r_2: AB\to aEFb, & r_3: CD\to cAD, &r_4: EF\to EC,\\ r_5: EF\to FC, & r_6: AD\to FD, & r_7: AD\to ED, & r_8: EC\to AB,\\ r_9: FD\to CD, & r_{10}: FC\to AF,& r_{11}: AF\to \lambda, & r_{12}: ED\to \lambda.
\end{array} $$ The possible derivations are exactly those of the form $$ \begin{array}{ll} S &\xRightarrow{r_1}ABCD\xRightarrow{(r_2r_3r_4r_6r_8r_9)^n}a^nABb^nc^nCD \xRightarrow{r_2r_3}a^{n+1}EFb^{n+1}c^{n+1}AD \\ & \xRightarrow{r_5r_7}a^{n+1}FCb^{n+1}c^{n+1}ED\xRightarrow{r_{10}r_{11}r_{12}}a^{n+1}b^{n+1}c^{n+1} \end{array} $$ (in the last phase, the sequences $r_{10}r_{12}r_{11}$ and $r_{12}r_{10}r_{11}$ could also be applied with the same result). Therefore, $L(G)=\{a^nb^nc^n: n\geq 1\}$. $\diamond$ \end{example} \begin{example}\label{exa:NBLnotCF2} Let $G=(\{S,A,B,C\},\{a,b,c\},S,R,\mathbf{1})$ be the context-free capacity-bounded grammar where $R$ consists of the rules $r_1: S\to aBbaAb$, $r_2: A\to aBb$, $r_3: B\to C$, $r_4: C\to A$, $r_5: A\to BC$, $r_6: A\to c$, and let $M$ be the regular set $M=a^*ccb^*a^*cb^*$. The derivations in $G$ generating words from $M$ are exactly those of the form $$ \begin{array}{ll} S &\xRightarrow{r_1}aBbaAb\xRightarrow{(r_3r_2r_4r_3r_2r_4)^n}a^nBb^na^nAb^n \xRightarrow{r_6r_3r_4}a^nAb^na^ncb^n\\ &\xRightarrow{(r_2r_3r_4)^m}a^{n+m}Ab^{n+m}a^ncb^n \xRightarrow{r_5r_4r_3r_6r_4r_6}a^{n+m}ccb^{n+m}a^ncb^n \end{array} $$ (one can also apply $r_3r_6r_4$ in the third phase and $r_5r_4r_6r_3r_4r_6$ in the last phase with the same result). Hence, $ L(G)\cap M=\{a^nccb^na^mcb^m: n\geq m\geq 1\}\not\in \mathbf{CF}, $ implying that $L(G)$ is not context-free. $\diamond$ \end{example} The above examples show that capacity-bounded grammars -- in contrast to derivation-bounded grammars -- can generate non-context-free languages. The generative power of capacity-bounded grammars will be studied in detail in the following two sections. The notions of finite index and bounded capacities can be extended to matrix, vector and semi-matrix grammars.
The corresponding language families are denoted by ${\bf MAT}^{[\lambda]}_{\mathit{fin}}$, ${\bf V}^{[\lambda]}_{\mathit{fin}}$, ${\bf sMAT}^{[\lambda]}_{\mathit{fin}}$, ${\bf MAT}^{[\lambda]}_{cb}$, ${\bf V}^{[\lambda]}_{cb}$, ${\bf sMAT}^{[\lambda]}_{cb}$. Control by Petri nets can also be extended in a natural way to Petri nets with place capacities. Since an extended cf Petri net $N_z$, $z\in\{h, c, s\}$, has two kinds of places, i.\,e., places labeled by nonterminal symbols and \emph{control} places, it is natural to consider two types of place capacities: first, only the places labeled by nonterminal symbols carry capacities (\emph{weak capacity}); second, all places of the net carry capacities (\emph{strong capacity}). A $z$-Petri net $N_z=(P\cup Q, T, F\cup E, \varphi, \zeta, \gamma, \mu_0, \tau)$ has \emph{weak capacity} if the corresponding cf Petri net $(P, T, F, \phi, \iota)$ is a Petri net with place capacity, and \emph{strong capacity} if the Petri net $(P\cup Q, T, F\cup E, \varphi, \mu_0)$ is. A grammar controlled by a $z$-Petri net with \emph{weak} (\emph{strong}) \emph{capacity} is a $z$-Petri net controlled grammar $G = (V, \Sigma, S, R, N_z)$ where $N_z$ has weak (strong) place capacity. We denote the families of languages generated by grammars (with erasing rules) controlled by $z$-Petri nets with weak and strong place capacities by $\mathbf{wPN}_{cz}$, $\mathbf{sPN}_{cz}$ ($\mathbf{wPN}^\lambda_{cz}$, $\mathbf{sPN}^\lambda_{cz}$), respectively, where $z\in\{h, c, s\}$. \section{The power of arbitrary grammars with capacities} \label{sec:power-gs} It will be shown in this section that arbitrary grammars (due to Ginsburg and Spanier) with capacities generate exactly the family of matrix languages of finite index. This is in contrast to derivation-bounded grammars, which generate only context-free languages of finite index.
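The first step in this programme, carried out in the lemma below, replaces an arbitrary capacity function by the constant function $\mathbf{1}$. The finite substitution $h$ at the heart of that construction can be sketched as follows (illustrative Python; encoding an indexed copy $(A,i)$ as the string \texttt{A1}, \texttt{A2}, \dots\ is our own convention):

```python
from itertools import product

def capacity_one_rules(rules, nonterminals, kappa):
    """Expand R into R': replace each rule alpha -> beta by all rules
    alpha' -> beta' with alpha' in h(alpha) and beta' in h(beta), where the
    finite substitution h fixes terminals and maps a nonterminal A to its
    indexed copies (A, 1), ..., (A, kappa(A))."""
    def h(word):
        choices = [[c + str(i) for i in range(1, kappa[c] + 1)]
                   if c in nonterminals else [c]
                   for c in word]
        return {tuple(choice) for choice in product(*choices)}
    return {(lhs2, rhs2)
            for lhs, rhs in rules
            for lhs2 in h(lhs)
            for rhs2 in h(rhs)}
```

A rule $S\to AA$ with $\kappa(S)=1$ and $\kappa(A)=2$, for instance, expands into the four rules $S_1\to A_iA_j$ with $i,j\in\{1,2\}$; every indexed copy may then occur at most once in a sentential form.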
First we show that we can restrict to grammars with capacities bounded by~$1$. Let $\mathbf{CF}_{\mathit{cb}}^{1}$ and $\mathbf{GS}_{\mathit{cb}}^{1}$ be the language families generated by context-free and arbitrary grammars with capacity function $\mathbf{1}$. \begin{lemma} $\mathbf{CF}_{\mathit{cb}}=\mathbf{CF}_{\mathit{cb}}^{1}$ and $\mathbf{GS}_{\mathit{cb}}=\mathbf{GS}_{\mathit{cb}}^{1}$. \end{lemma} \begin{proof} Let $G=(V,\Sigma,S,R, \kappa)$ be a capacity-bounded phrase structure grammar. We construct the grammar $G'=(V',\Sigma,(S,1),R')$ with capacity function $\mathbf{1}$ and \begin{eqnarray*} V'&=& \{(A,i) : A \in V, 1\leq i\leq \kappa(A)\},\\ R'&=& \{\alpha' \to \beta' : \alpha' \in h(\alpha), \beta' \in h(\beta), \mbox{ for some } \alpha \to \beta \in R\}, \end{eqnarray*} where $h:(V\cup \Sigma)^* \to (V' \cup \Sigma)^*$ is the finite substitution defined by $h(a)=\{a\}$, for $a \in \Sigma$, and\linebreak \hbox{$h(A)=\{(A,i): 1\leq i\leq \kappa(A)\}$}, for $A \in V$. It can be shown by induction on the number of derivation steps that $S \!\Rightarrow^*_{G,\kappa}\! \alpha$ holds iff \hbox{$(S,1) \!\Rightarrow^*_{G',1}\! \alpha'$}, for some $\alpha' \in h(\alpha)$. \end{proof} \begin{lemma} \label{lem:GScbSubsetMATfin} $\mathbf{GS}_{\mathit{cb}}\subseteq \mathbf{MAT}_{\mathit{fin}}$. \end{lemma} \begin{proof*} Consider some language $L\in \mathbf{GS}_{\mathit{cb}}$ and let $G=(V,\Sigma,S,R,\mathbf{1})$ be a capacity-bounded phrase structure grammar (due to Ginsburg and Spanier) such that $L=L(G)$. A word $\alpha\in (V\cup \Sigma)^*$ can be uniquely decomposed as $$ \alpha=x_1 \beta_1 x_2 \beta_2 \cdots x_n \beta_n x_{n+1}, x_1,x_{n+1} \in \Sigma^*, x_2,\ldots,x_n \in \Sigma^+, \beta_1,\ldots, \beta_n\in V^+. $$ The subwords $\beta_i$ are referred to as the \emph{maximal nonterminal blocks} of $\alpha$. Note that the length of a maximal block in any sentential form of a derivation in $G$ is bounded by $|V|$. 
We will first construct a capacity-bounded grammar $G'$ with $L(G')=L$ such that all words of $L$ can be derived in $G'$ by rewriting a maximal nonterminal block in every step. Let $G'=(V,\Sigma,S,R',\mathbf{1})$ where \begin{eqnarray*} R'&=& \{\alpha_1 \alpha \alpha_2 \to \alpha_1 \beta \alpha_2 : \alpha \to \beta \in R, \alpha_1,\alpha_2 \in V^*, |\alpha_1 \alpha \alpha_2|_A \leq 1, \mbox{ for all } A\in V\}. \end{eqnarray*} The inclusion $L(G) \subseteq L(G')$ is obvious since $R\subseteq R'$. On the other hand, any derivation step in $G'$ can be written as $\gamma_1 \underline{\alpha_1 \alpha \alpha_2} \gamma_2 \Rightarrow_{G'} \gamma_1 \underline{\alpha_1 \beta \alpha_2} \gamma_2$, where $\alpha \to \beta \in R$, implying that the same step can be performed in $G$ as $\gamma_1 \alpha_1 \underline{\alpha} \alpha_2 \gamma_2 \Rightarrow_{G,1} \gamma_1 \alpha_1 \underline{\beta} \alpha_2 \gamma_2.$ Thus $L(G')\subseteq L(G)$ holds as well. Moreover, any derivation step in $G$, $\gamma_1 \alpha_1 \underline{\alpha} \alpha_2 \gamma_2 \Rightarrow_{G,1} \gamma_1 \alpha_1 \underline{\beta} \alpha_2 \gamma_2$, $\alpha_1\alpha\alpha_2$ being a maximal nonterminal block, can be performed in $G'$ replacing the maximal nonterminal block $\alpha_1\alpha\alpha_2$ by $\alpha_1\beta\alpha_2$. In the second step we construct a context-free matrix grammar $H$ which simulates exactly those derivations in $G'$ that replace a maximal nonterminal block in each step. We introduce two alphabets \begin{eqnarray*} [V]&=&\{[\alpha] : \alpha \in V^+, |\alpha|_A\leq 1, \mbox{ for all } A \in V\}\mbox{ and } \overline{V}=\{\overline{A} : A \in V\}. \end{eqnarray*} The symbols of $[V]$ are used to encode each maximal nonterminal block as single symbols, while $\overline{V}$ is a disjoint copy of $V$. 
Any word $$\alpha=x_1 \beta_1 x_2 \beta_2 \cdots x_n \beta_n x_{n+1}, \quad x_1,x_{n+1} \in \Sigma^*, x_2,\ldots,x_n \in \Sigma^+, \beta_1,\ldots, \beta_n\in V^+,$$ such that $|\alpha|_A\leq 1$, for all $A \in V$, can be represented by the word $[\alpha]=x_1 [\beta_1] x_2 [\beta_2] \cdots x_n [\beta_n] x_{n+1}$, where the maximal nonterminal blocks in $\alpha$ are replaced by the corresponding symbols from $[V]$. The desired matrix grammar is obtained as $H=(V_H,\Sigma,S',M)$, with $V_H=[V]\cup V \cup \overline{V} \cup \{S'\}$ and the set of matrices defined as follows. For any rule $r=\alpha\to \beta$ in $R'$, $M$ contains the matrix $m_r$ consisting of the rules \begin{itemize} \item $[\alpha] \to [\beta]$ (note that $[\alpha] \in [V]$, but $[\beta]\in ([V]\cup\Sigma)^*$), \item $A \to \overline{A}$, for all $A \in V$ such that $|\alpha|_A=1$ and $|\beta|_A=0$, \item $\overline{A}\to A$, for all $A \in V$ such that $|\alpha|_A=0$ and $|\beta|_A=1$. \end{itemize} (The order of the rules in $m_r$ is arbitrary.) Additionally, $M$ contains the starting and the terminating matrices $$(S'\to [S] S \overline{A_1} \cdots \overline{A_m}) \mbox{ and } (\overline{S} \to \lambda,\overline{A_1} \to \lambda, \ldots,\overline{A_m} \to \lambda),$$ where $V=\{S,A_1,\ldots,A_m\}$. Intuitively, $H$ generates sentential forms of the shape $[\beta] \gamma$ where\linebreak $[\beta] \in ([V] \cup \Sigma)^*$ encodes a sentential form $\beta$ derivable in $G'$ and $\gamma \in (V\cup \overline{V})^*$ gives a count of the nonterminal symbols in $\beta$ as follows: $|\gamma|_A+|\gamma|_{\overline{A}}=1$ and $|\gamma|_A=|\beta|_A$, for all $A\in V$.
Formally, it can be shown by induction that a sentential form over $V_H \cup \Sigma$ can be generated after applying $k\geq 1$ matrices (not counting the terminating matrix) iff it has the form $[\beta] \gamma$ where \begin{itemize} \item $\beta \in (V\cup\Sigma)^*$ can be derived in $G'$ in $k-1$ steps, \item $\gamma \in \{S,\overline{S}\} \{A_1,\overline{A}_1\} \cdots\{A_m,\overline{A}_m\}$ and $|\gamma|_A=1$ iff $|\beta|_A=1$. \qed \end{itemize} \end{proof*} The converse inclusion can also be shown. \begin{lemma}\label{MATfinInCScb} $\mathbf{MAT}_{\mathit{fin}}\subseteq \mathbf{GS}_{\mathit{cb}}$. \end{lemma} \section{Capacity-bounded context-free grammars} \label{sec:nb-cfg} In this section, we investigate capacity-bounded context-free grammars. It turns out that they lie strictly between the context-free languages of finite index and the matrix languages of finite index. Closure properties of capacity-bounded languages with respect to AFL operations are briefly discussed at the end of the section. As a first result we show that the family of context-free languages of finite index is properly included in ${\bf CF}_{\mathit{cb}}$. \begin{lemma} \label{thm:hierarchyCapacityBounded1} ${\bf CF}_{\mathit{fin}} \subset {\bf CF}_{\mathit{cb}}$. \end{lemma} \begin{proof} Any context-free language generated by a grammar $G$ of index $k$ is also generated by the capacity-bounded grammar $(G,\kappa)$ where $\kappa$ is the capacity function constantly equal to $k$. The properness of the inclusion follows from Example~\ref{exa:NBLnotCF2}. \end{proof} An upper bound for ${\bf CF}_{\mathit{cb}}$ is given by the inclusion ${\bf CF}_{\mathit{cb}}\subseteq {\bf GS}_{\mathit{cb}}={\bf MAT}_{\mathit{fin}}$. We can prove the properness of the inclusion by presenting a language from ${\bf MAT}_{\mathit{fin}} \setminus {\bf CF}_{\mathit{cb}}$. \begin{lemma} $L=\{a^n b^n c^n : n\geq 1\} \notin \mathbf{CF}_{\mathit{cb}}$.
\end{lemma} \begin{proof} Consider a capacity-bounded context-free grammar $G=(V,\Sigma,S,R,\mathbf{1})$ such that $L \subseteq L(G)$. For $A \in V$, let $G_A=(V,\Sigma,A,R,\mathbf{1})$. The following holds obviously for any derivation in $G$ involving $A$: if $\alpha A \beta \Rightarrow^*_{G} xyz$, where $\alpha,\beta \in (V\cup \Sigma)^*$, $x,y,z \in \Sigma^*$ and $y$ is the yield of $A$, then $y \in L(G_A)$. On the other hand, for all $x,y,z \in \Sigma^*$ such that $y\in L(G_A)$, the relation $xAz \Rightarrow^*_{G} xyz$ holds. The nonterminal set $V$ can be decomposed as $V=V_{\mathit{inf}} \cup V_{\mathit{fin}}$, where \begin{eqnarray*} V_{\mathit{inf}} &=& \{A \in V : L(G_A) \mbox{ is infinite}\}\mbox{ and } V_{\mathit{fin}} = \{A \in V : L(G_A) \mbox{ is finite}\}. \end{eqnarray*} Let $K$ be a number such that $|w|<K$ for all $w \in \bigcup_{A \in V_{\mathit{fin}}} L(G_A)$. Consider the word $w=a^{rK} b^{rK} c^{rK}$, where $r$ is the maximal length of a right-hand side of a rule of~$R$. There is a derivation $S \Rightarrow^*_{G} w$. Consider the last sentential form $\alpha$ in this derivation that contains a symbol from $V_{\mathit{inf}}$, and let this symbol be $A$. All other nonterminals in $\alpha$ are from $V_{\mathit{fin}}$, and none of them generates a subword containing $A$ in the further derivation process. We thus obtain another derivation of $w$ in $G$ by postponing the rewriting of $A$ until all other nonterminals have vanished by applying to them the derivation sequence of the original derivation. This new derivation has the form $S\Rightarrow^*_{G} \alpha \Rightarrow^*_{G} xAz \Rightarrow^*_{G} xyz=w.$ The length of $y$ can be estimated by $|y|\leq rK$, as $A$ is in the first step replaced by a word over $(\Sigma \cup V_{\mathit{fin}})$ of length at most $r$, each of whose nonterminals yields a terminal word of length less than $K$. By the remarks at the beginning of the proof, any word $xy'z$ with $y' \in L(G_A)$ can be derived in $G$; since $L(G_A)$ is infinite, such words exist with $y'\neq y$. A case analysis shows that $xy'z$ is not in $L$ for any $y'\neq y$. Hence $L(G) \neq L$.
\end{proof} The results can be summarized as follows: \begin{theorem} \label{thm:hierarchyCapacityBounded} $\mathbf{CF}_{\mathit{fin}}\subset \mathbf{CF}_{\mathit{cb}} \subset \mathbf{GS}_{\mathit{cb}}=\mathbf{MAT}_{\mathit{fin}}.$ \end{theorem} As regards closure properties, we remark that the constructions showing the closure of $\mathbf{CF}$ under homomorphisms, union, concatenation and Kleene closure can easily be extended to the case of capacity-bounded languages. \begin{theorem}\label{thm:closureCapacityBounded} $\mathbf{CF}_{\mathit{cb}}$ is closed under homomorphisms, union, concatenation and Kleene closure. \end{theorem} \begin{proof} We give a proof only for the Kleene closure and leave the other cases to the reader. Let $L\in \mathbf{CF}_{\mathit{cb}}$ and let $G=(V,\Sigma,S,R,\mathbf{1})$ be a capacity-bounded context-free grammar such that $L=L(G)$. We construct $G'=(V\cup\{S'\},\Sigma,S',R\cup \{S'\to SS',S'\to \lambda\},\mathbf{1}).$ Any terminating derivation in $G'$ that applies the rule $S'\to SS'$ $k$ times generates a word\linebreak \hbox{$w=w_1 w_2 \cdots w_k$}, where $w_i$ is the yield of the $i$-th symbol $S$ introduced by $S'\to SS'$. The subderivation from $S$ to $w_i$ only uses rules from $R$. Moreover, any sentential form $\beta_i$ in this subderivation is a subword of some sentential form $\beta$ in the derivation of $w$ in $G'$. Hence, $|\beta_i|_A \leq |\beta|_A\leq 1$, for all $1\leq i \leq k$ and all $A \in V$. Consequently, $w_i\in L(G)=L$ and $w \in L^*$. Conversely, any word $w=w_1 w_2 \cdots w_k$ with $w_i\in L$, for $1\leq i\leq k$, can be obtained in $G'$ by the derivation $$ S'\Rightarrow SS' \Rightarrow^* w_1 S' \Rightarrow w_1SS' \Rightarrow^* w_1w_2S' \Rightarrow^* w_1w_2 \cdots w_kS' \Rightarrow w_1w_2\cdots w_k $$ where the subwords $w_i$ are derived from $S$ as in $G$.
\end{proof} As regards closure under intersection with regular sets and under inverse homomorphisms, the constructions showing the closure of $\mathbf{CF}$ cannot be extended, since they do not preserve the capacity bound. We suspect that $\mathbf{CF}_{\mathit{cb}}$ is closed under neither of these operations. \section{Control by Petri nets with place capacities} \label{sec:PNC} We will first establish the connection between context-free Petri nets with place capacities and capacity-bounded grammars. Later we will investigate the generative power of various extended context-free Petri nets with place capacities. The proof of the equivalence between context-free grammars and grammars controlled by cf Petri nets can be transferred immediately to capacity-bounded context-free grammars and cf Petri nets with place capacities: \begin{theorem} \label{thm:CapacityPetriNetGrammar} Grammars controlled by context-free Petri nets with place capacity functions generate the family of capacity-bounded context-free languages. \end{theorem} Let us now turn to grammars controlled by extended cf Petri nets with capacities. We will first study the generative power of capacity-bounded matrix and vector grammars, which are closely related to these Petri net grammars. \begin{theorem} \label{thm:matrixGrammarBounds} ${\bf MAT}_{\mathit{fin}}={\bf V}^{[\lambda]}_{\mathit{cb}}={\bf MAT}^{[\lambda]}_{\mathit{cb}}={\bf sMAT}^{[\lambda]}_{\mathit{cb}}$. \end{theorem} \begin{proof} We give the proof of ${\bf MAT}_{\mathit{fin}}={\bf V}^{\lambda}_{\mathit{cb}}$. The other equalities can be shown in an analogous way. Since ${\bf MAT}_{\mathit{fin}}={\bf V}_{\mathit{fin}}={\bf V}^{\lambda}_{\mathit{fin}}$, it suffices to prove ${\bf V}_{\mathit{fin}}\subseteq {\bf V}^{\lambda}_{\mathit{cb}}$ and ${\bf V}^{\lambda}_{\mathit{cb}}\subseteq {\bf V}^{\lambda}_{\mathit{fin}}$. The first inclusion is obvious because any vector grammar of finite index $k$ is equivalent to the same vector grammar with the capacity function constantly $k$.
To show ${\bf V}^{\lambda}_{\mathit{cb}}\subseteq {\bf V}^{\lambda}_{\mathit{fin}}$, consider a capacity-bounded vector grammar $$G=(\{A_0,A_1,\ldots,A_m\},\Sigma,A_0,M,\mathbf{1}).$$ (The proof that it suffices to consider the capacity function $\mathbf{1}$ is as for usual grammars.) To construct an equivalent vector grammar of finite index, we introduce the new nonterminal symbols $B_i,B'_i$, $0\leq i\leq m$, $C$, $C'$. For any rule $r: A\to \alpha$, we define the matrix $\mu(r)=(C\to C',s_0,s_1,\ldots,s_m,r,C'\to C)$ such that $s_i=B_i \to B'_i$ if $A=A_i$ and $|\alpha|_{A_i}=0$, $s_i=B'_i \to B_i$ if $A\neq A_i$ and $|\alpha|_{A_i}=1$, and $s_i$ is empty otherwise. Now we can construct $G'=(V',\Sigma,S',M')$ where $M'$ contains \begin{itemize} \item for any matrix $m=(r_1,r_2, \ldots, r_k)$, the matrix $m'=(\mu(r_1), \ldots, \mu(r_k))$, \item the start matrix $(S'\to A_0 B_0 B'_1 \cdots B'_m C)$, \item the terminating matrix $(C\to \lambda, B'_0\to \lambda,B'_1\to\lambda, \ldots, B'_m\to \lambda)$, \end{itemize} and $V'=V\cup \{B_i,B'_i: 0\leq i\leq m\} \cup \{S',C,C'\}$. The construction of $G'$ allows only derivation sequences in which complete submatrices $\mu(r)$ are applied: once the sequence $\mu(r)$ has been started, there is no symbol $C$ before $\mu(r)$ is finished, and no other submatrix can be started. It is easy to see that, after applying complete submatrices, $G'$ can generate exactly those sentential forms $\beta \gamma C$ with $\beta \in (V\cup \Sigma)^*$ and \hbox{$\gamma\in \{B_0,B'_0\}\{B_1,B'_1\} \cdots \{B_m,B'_m\}$} such that $\beta$ can be derived in $G$ and $|\gamma|_{B_i}=1$ iff $|\beta|_{A_i}=1$. Moreover, $G'$ is of index $2 |V|+1$. \end{proof} By constructions similar to those in \cite{tur} and Theorem~\ref{thm:matrixGrammarBounds}, we can show the following with respect to weak capacities: \begin{theorem}\label{lem:VfinInwPNch} For $z\in \{h,c,s\}$, ${\bf MAT}_{\mathit{fin}}=\mathbf{wPN}^{[\lambda]}_{cz}$.
\end{theorem} \begin{proof} We give only the proof for $z=h$; the other equations can be shown by analogous arguments. By Theorem~\ref{thm:matrixGrammarBounds} it is sufficient to show the inclusions ${\bf V}_{fin}\subseteq\mathbf{wPN}_{ch}$ and $\mathbf{wPN}^{\lambda}_{ch}\subseteq {\bf V}^{\lambda}_{cb}$. As regards the first inclusion, let $L$ be a vector language of finite index (with or without erasing rules), and let $ind(L)=k$, $k\geq 1$. Then there is a vector grammar $G=(V, \Sigma, S, M)$ such that $L=L(G)$ and $ind(G)\leq k$. Without loss of generality we assume that $G$ is without repetitions. Let $R$ be the set of rules of $M$. By Theorem 16 in \cite{tur}, we can construct an $h$-Petri net controlled grammar $G'=(V, \Sigma, S, R, N_h)$, $N_h=(P\cup Q, T, F\cup E, \varphi, \zeta, \gamma, \mu_0, \tau)$, which is equivalent to the grammar $G$. By definition, $|w|_V\leq k$ for every sentential form $w\in (V\cup\Sigma)^*$ in the grammar $G$, and hence $|w|_A\leq k$ for all $A\in V$. By the bijection $\zeta:P\cup Q\to V\cup\{\lambda\}$, we have $\mu(p)\leq k$ for every place $p=\zeta^{-1}(A)$ with $A\in V$ and every $\mu \in \mathcal{R}(N_h, \mu_0)$, i.\,e., the corresponding cf Petri net $(P, T, F, \phi, \beta, \gamma, \iota)$ has $k$-place capacity. Therefore~$G'$ is a grammar with weak place capacity. On the other hand, the construction of an equivalent vector grammar for an $h$-Petri net controlled grammar can be extended to the case of weak capacities simply by assigning the capacities of the corresponding places to the nonterminal symbols of the grammar. \end{proof} As regards strong capacities, there is no difference between weak and strong capacities for grammars controlled by $c$- and $s$-Petri nets because the number of tokens in every cycle is bounded by $1$. This yields: \begin{corollary}\label{lem:wPNx=sPNx} For $z\in \{c,s\}$, ${\bf MAT}_{\mathit{fin}}=\mathbf{sPN}^{[\lambda]}_{cz}$.
\end{corollary} The only families not yet characterized are $\mathbf{sPN}^{[\lambda]}_{ch}$. We conjecture that they are also equal to ${\bf MAT}_{\mathit{fin}}$. \section{Conclusions} \label{sec:conclusions} We have introduced grammars with capacity bounds and their Petri net controlled counterparts. In particular, we have shown that their generative power lies strictly between the context-free languages of finite index and the matrix languages of finite index. Moreover, we have studied extended context-free Petri nets with place capacities. A possible extension of the concept is to use capacity functions that allow an unbounded number of occurrences of some nonterminals. The investigation shows that for every grammar controlled by a cf Petri net with $k$-place capacity, $k\geq 1$, there exists an equivalent grammar controlled by a cf Petri net with $1$-place capacity, i.\,e., the families of languages generated by cf Petri nets with place capacities do not form a hierarchy with respect to the place capacities. \end{document}
\begin{document} \title{Localization for a random walk in slowly decreasing random potential} {\footnotesize \noindent $^{1,2}$Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas--UNICAMP, rua S\'ergio Buarque de Holanda 651, 13083--859, Campinas SP, Brazil\\ \noindent e-mail: \texttt{[email protected]}\\ \noindent e-mail: \texttt{[email protected]}; URL: \texttt{http://www.ime.unicamp.br/$\sim$popov}\\ \noindent $^{~3}$ Theoretical Soft Matter and Biophysics, Institute of Complex Systems, Forschungszentrum J\"ulich, \\ 52425 J\"ulich, Germany\\ \noindent e-mail: \texttt{[email protected]} } \begin{abstract} We consider a continuous time random walk $X$ in random environment on ${\mathbb Z}^+$ such that its potential can be approximated by the function $V: {\mathbb R}^+\to {\mathbb R}$ given by $V(x)=\sigma W(x) -\frac{b}{1-\alpha}x^{1-\alpha}$, where $\sigma W$ is a Brownian motion with diffusion coefficient $\sigma>0$ and the parameters $b$, $\alpha$ are such that $b>0$ and $0<\alpha<1/2$. We show that $\mathbf{P}$-a.s.\ (where $\mathbf{P}$ is the averaged law) $\lim_{t\to \infty} \frac{X_t}{(C^*(\ln\ln t)^{-1}\ln t)^{\frac{1}{\alpha}}}=1$ with $C^*=\frac{2\alpha b}{\sigma^2(1-2\alpha)}$. In fact, we prove this by showing that there is a trap located around $(C^*(\ln\ln t)^{-1}\ln t)^{\frac{1}{\alpha}}$ (with corrections of smaller order) where the particle typically stays up to time~$t$. This is in sharp contrast to what happens in the ``pure'' Sinai regime, where the location of this trap is random on the scale $\ln^2 t$.
\\[.3cm] \textbf{Keywords:} KMT strong coupling, Brownian motion with drift, localization, random walk in random environment, reversibility \\[.3cm] \textbf{AMS 2000 subject classifications:} 60J10, 60K37 \end{abstract} \section{Introduction and results} \label{s_intro} Suppose that $\omega=(\omega_x)_{x\geq 1}$ is a sequence of i.i.d.\ random variables. Fix $b>0$ and $\alpha \in (0,\frac{1}{2})$, and define the sequence $(q_y)_{y\geq 0}$ by $q_0=0$ and $q_y=\frac{\exp(\omega_y-by^{-\alpha})}{1+\exp(\omega_y-by^{-\alpha})}$ for $y\geq 1$. For each realization of~$\omega$, we consider the continuous time random walk~$X$ on~${\mathbb Z}^+$ with transition probabilities given by \begin{align*} {\mathtt P}_{\omega}[X_{t+h}&=y+1\mid X_t=y]=(1-q_y)h+o(h),\nonumber\\ {\mathtt P}_{\omega}[X_{t+h}&=y-1\mid X_t=y]=q_yh+o(h),\phantom{*****} \mbox{if $y\geq 1$}, \end{align*} as $h\to 0$. We will denote by ${\mathbb P}, {\mathbb E}$ the probability and expectation with respect to $\omega$, and by ${\mathtt P}_{\omega}$, ${\mathtt E}_{\omega}$ the (so-called ``quenched'') probability and expectation for the random walk in the fixed environment $\omega$. We will use the notation ${\mathtt P}_{\omega}^{x}$ for the quenched law of $X$ starting from~$x$; for the sake of brevity, we will omit the superscript $x$ whenever $x=0$. We make the following assumption: \noindent \textbf{Condition S.} We have \[ {\mathbb E}[\omega_1]= 0,\quad \sigma^2 := {\mathbb E}[\omega_1^2] \in (0, +\infty). \] The vanishing expectation of $\omega_1$ means that the random walk has a drift which is asymptotically decaying, which is the case of interest to be studied here.
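The dynamics of $X$ can be made concrete with a minimal simulation sketch (not part of the paper; the Rademacher environment and the parameter values are illustrative choices that satisfy Conditions S and K below). Since the total jump rate at every site is $q_y+(1-q_y)=1$, the walk holds for an exponential time of mean one and then steps left with probability $q_y$:

```python
import math, random

def simulate_walk(t_max, b=1.0, alpha=0.25, seed=0):
    """Sketch: simulate the continuous-time walk X on Z^+ up to time t_max.
    The environment omega is i.i.d. Rademacher (+1/-1), an illustrative
    choice satisfying Conditions S and K; b and alpha are free parameters."""
    rng = random.Random(seed)
    omega = {}                       # lazily sampled environment
    def q(y):                        # jump-left probability q_y (q_0 = 0)
        if y == 0:
            return 0.0
        if y not in omega:
            omega[y] = rng.choice([-1.0, 1.0])
        e = math.exp(omega[y] - b * y ** (-alpha))
        return e / (1.0 + e)
    t, x = 0.0, 0
    while True:
        # total jump rate q_x + (1 - q_x) = 1: exponential(1) holding time
        t += rng.expovariate(1.0)
        if t > t_max:
            return x
        x += -1 if rng.random() < q(x) else 1
```

The reflection at the origin is encoded by $q_0=0$, so the walk never leaves ${\mathbb Z}^+$.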
For technical reasons we also assume that the following condition holds: \noindent \textbf{Condition K.} There exists a $\theta_0>0$ such that ${\mathbb E}[e^{\theta{\omega}_1}]<\infty$ for all $|\theta|<\theta_0$. The choice of the rates $q_y$ has the interpretation of a random walk in a power law potential with amplitude $b$ on which a Sinai-type random potential is superimposed. Indeed, in the case $b=0$, Condition~S corresponds to Sinai's regime~\cite{Sinai} (after stating our main result, we will compare it with what happens in ``pure'' Sinai's regime). Random walks in an asymptotically decaying power-law potential play an important role in a number of applications in physics. As a very well-studied example we mention the condensation transition in the zero-range process where the grand-canonical stationary distribution on a single site is that of a random walk in a power-law (or logarithmic) potential \cite{ZRP1,ZRP2,ZRP3,ZRP4}. For $0<\alpha<1$ and $b<0$ there exists a finite critical particle density above which the grand-canonical stationary distribution does not exist. Then, in a canonical ensemble with fixed total particle number such that the total density exceeds the critical value, a macroscopic number of particles ``condenses'' on a single site. The same is true for $\alpha=1$ and $b\leq -2$, a case of particular importance e.g.\ in DNA denaturation where by a mapping to the dynamics of unzipped DNA strands the presence or absence of a condensation transition indicates whether the DNA denaturation transition is of first or second order \cite{Kafr02,Ambj06}. It is then natural to study the effect of quenched disorder which is usually modelled by a random potential of the type defined above. 
It turns out that the condensation transition persists only in the range $0<\alpha<1/2$ \cite{CGS}, which appears to be related to the smoothening of depinning transitions for directed polymers with quenched disorder, of which the DNA denaturation transition is an example \cite{GT1,GT2}. Directly from the viewpoint of random walks in random environments, the presence of quenched disorder in an asymptotically decaying power-law potential has been studied in detail in \cite{Menswade1,Menswade2} in a discrete time setting. The presence of a condensation transition corresponds to ergodicity of the random walk. Going beyond stationary properties, these authors relate the position of the random walk to some expected hitting times to obtain a series of interesting results on the speed of the random walk starting from the origin. In this respect the transient case is of particular interest. For $b>0$ and $\alpha \geq 1/2$ the scenario is not very much different from the case of pure Sinai disorder (no power law potential). Roughly speaking, the displacement of the random walk from the origin grows to leading order in time $t$ as $(\ln t)^2$, independently of $\alpha$. On the other hand, for $b>0$ and $0 < \alpha < 1/2$ it was proved in \cite{Menswade2} that for a.e.\ random environment~$\omega$ the position $\eta_t(\omega)$ of the walk a.s.\ satisfies $(\ln \ln t)^{-1/\alpha - \varepsilon} < \eta_t(\omega)/(\ln t)^{1/\alpha} < (\ln \ln t)^{2/\alpha + \varepsilon} $ for all but finitely many $t$. The approach used here allows us to go further. The main result of this paper is: \begin{theo} \label{Theo} Under Conditions~S and K, we have for ${\mathbb P}$-almost all realizations of $\omega$, \begin{equation*} \lim_{t\to \infty}\frac{X_t}{(C^*(\ln\ln t)^{-1}\ln t)^{\frac{1}{\alpha}}}=1,\phantom{***}\mbox{${\mathtt P}_{\omega}$-a.s.,} \end{equation*} with $C^*= \frac{2\alpha b}{\sigma^2(1-2\alpha)}$.
\end{theo} Observe that we define the model in a continuous-time setting rather than in discrete time. This brings about a (very) slight technical complication, but is better motivated from a physics perspective. Let us now comment on the relationship of our work with the classical model of one-dimensional RWRE in an i.i.d.\ environment (see e.g.\ \cite{Zeitouni}). As often happens with theorems of this kind, the proof of Theorem~\ref{Theo} is obtained by showing that the particle will eventually find a \textit{trap} (i.e., a piece of the environment with ``drift inside''), and then stay there up to time~$t$. It is well known that, for the RWRE in Sinai's regime, the location of this trap (scaled by $\ln^2 t$) is a random variable. However, it is interesting to observe that (as one can see from the proof of Theorem~\ref{Theo}) adding the power-law perturbation to Sinai's potential changes the situation: the position of \textit{the trap} becomes ``less random'' (there are still fluctuations, of course, but they are of smaller order). As an aside, we mention that, with $s(t):=(C^*(\ln\ln t)^{-1}\ln t)^{\frac{1}{\alpha}}$, we can also deduce from the proof of Theorem~\ref{Theo} the following upper bounds for some particular hitting times of $X$. For $\varepsilon\in (0,1)$, let $\tau_{(1-\varepsilon)s(t)}$ be the first hitting time of the point $\lfloor(1-\varepsilon)s(t)\rfloor$ by the random walk $X$. Then, for all $\varepsilon>0$ there exists $\delta>0$ such that ${\mathbb P}$-a.s., \[ {\mathtt P}_{\omega}[\tau_{(1-\varepsilon)s(t)}>t]\leq \exp\{-t^{\frac{\delta}{2}}\} \] for all $t$ large enough (see equation~(\ref{FUG1})). In the next section, we introduce some notation and recall some auxiliary facts which are necessary for the proof of Theorem~\ref{Theo}. In Section~\ref{Technical}, we prove various technical lemmas about the asymptotic behavior of the environment.
Finally, in Section~\ref{secTheo}, we give the proof of Theorem~\ref{Theo}. \section{Notation and auxiliary facts} Given a realization of~$\omega$, define the potential function, for $x\in\mathbb{R^+}$, by \[ U(x) :=\sum_{y=1}^{\lfloor x\rfloor}\ln\frac{q_y}{1-q_y}=\sum_{y=1}^{\lfloor x\rfloor}(\omega_y-by^{-\alpha}), \] where $\lfloor x\rfloor$ is the integer part of $x$ and $\sum_{y=1}^{\lfloor x\rfloor}:=0$ if $x<1$. The behavior of $U$ is of crucial importance for the analysis of the asymptotic properties of the random walk $X$ (cf.\ Propositions \ref{Confine} and \ref{escape} below). Conditions S and K allow us to couple the potential $U$ with a Brownian motion with power law drift, which greatly simplifies the proof of limit properties of the random walk $X$. Indeed, by the well-known Koml{\'o}s--Major--Tusn{\'a}dy strong approximation theorem (cf.\ Theorem 1 of \cite{KMT}), there exists (possibly in an enlarged probability space) a coupling of $\omega$ and a standard Brownian motion $W$ such that \begin{equation} \label{KMT} {\mathbb P}\Big[\limsup_{n\to \infty}\frac{\max_{1\leq m\leq n}|\sum_{i=1}^m\omega_i-\sigma W(m)|}{\ln n}\leq \hat{K}\Big]=1 \end{equation} for some finite constant $\hat{K}>0$. A useful consequence of (\ref{KMT}) is that if $x$ is not too far away from the origin, then $\sum_{i=1}^{\lfloor x\rfloor}\omega_i$ and $\sigma W(x)$ are rather close for the vast majority of environments. Hence, it is convenient to introduce the following set of ``good'' environments and to restrict our forthcoming computations to this set. Fix $M>\frac{1}{\alpha}$ and for any~$t>e$, let \begin{equation} \label{approx1} \Gamma(t): = \Big\{\omega : \Big|\sum_{i=1}^{\lfloor x\rfloor} \omega_i-\sigma W(x)\Big|\leq K\ln\ln t\;, \; x\in [0,\ln^{M}t]\Big\}.
\end{equation} By (\ref{KMT}) and the properties of the modulus of continuity of Brownian motion, we can choose $K \in (0, \infty)$ in such a way that for ${\mathbb P}$-almost all~$\omega$, it holds that $\omega\in\Gamma(t)$ for all~$t$ large enough (cf.\ e.g.\ \cite{CP} or \cite{Galles}, where this fact was used). On the other hand, using the fact that there exists a finite constant $C>0$ such that for all $x\geq 1$, \begin{equation} \label{approx2} \Big|\sum_{i=1}^{\lfloor x\rfloor}i^{-\alpha}-\int_1^xu^{-\alpha}du\Big|\leq C, \end{equation} we can define a new potential function $V$ by \[V(x):=\sigma W(x)-\frac{b}{1-\alpha}x^{1-\alpha}\] for all $x\in {\mathbb R}^+$; using~(\ref{approx1}) and~(\ref{approx2}), there exists a finite $K_1>0$ such that for all $t> e$ and $\omega \in \Gamma(t)$, $\max_{x\leq \ln^M t}|V(x)-U(x)|\leq K_1\ln\ln t$. Observe that $V$ is a Brownian motion with a power law drift. For convenience, from now on we will work with the potential $V$ instead of $U$ (see Fig.\ \ref{fig1}). \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{dessin1} \caption{Approximation of the potential $U$ by $V$.} \label{fig1} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{DessinDraw} \caption{On the definitions of $D^+_{[x_0,y_0]}(f)$ and $D^-_{[x_0,y_0]}(f)$.} \label{fig1b} \end{center} \end{figure} For a function $f:{\mathbb R}^+\to {\mathbb R}$ and $x_0<y_0$, let $D^+_{[x_0,y_0]}(f):=\sup_{u\in [x_0,y_0]}(f(u)-\inf_{v\in[x_0,u]}f(v))$ and $D^-_{[x_0,y_0]}(f):=\sup_{u\in [x_0,y_0]}(f(u)-\inf_{v\in[u,y_0]}f(v))$ be, respectively, the maximum draw-up and draw-down of the function $f$ on the interval $[x_0,y_0]$ (see Fig.~\ref{fig1b}).
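On a sampled path, the two functionals above reduce to a running-minimum scan; the following sketch (an illustration, not part of the paper) computes them for a path given as a list of values, using the fact that the draw-down of a path equals the draw-up of its time reversal:

```python
def draw_up(path):
    """Maximum draw-up: sup_u (f(u) - inf_{v<=u} f(v)) over a sampled path."""
    best, running_min = 0.0, path[0]
    for v in path:
        running_min = min(running_min, v)   # inf of f over [x_0, u]
        best = max(best, v - running_min)
    return best

def draw_down(path):
    """Maximum draw-down: sup_u (f(u) - inf_{v>=u} f(v)), i.e. the
    draw-up of the time-reversed path."""
    return draw_up(path[::-1])
```

For instance, on the path $0,2,1,3,-1,0$ the maximum draw-up is $3$ (from the start to the value $3$) and the maximum draw-down is $4$ (from $3$ down to $-1$).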
As we will see in the proof of Theorem~\ref{Theo}, these functionals applied to the potential $V$ are the key quantities determining the limiting behavior of the random walk $X$. The distribution of $D_{[x_0,y_0]}^+$ is not known for a Brownian motion with a power law drift. Fortunately, in our case we can locally approximate the power law drift by a linear one. For fixed intervals~$I$ the law of $D_I^+$ is known for a Brownian motion with linear drift (cf.\ (1) in~\cite{MAPAM}), but in this reference it is given in the form of an alternating series which is not easy to handle. If, instead of considering deterministic intervals~$I$, we consider intervals of size given by an exponential random variable independent of $W$, then the law of $D_I^+$ becomes much simpler and is more useful for our purposes. We now recall the following result, which can be found in \cite{Salminen}: \begin{prop} \label{Prop0} Let $T$ be a random variable with exponential distribution of mean $\mu$ and $W^{(\sigma,\nu)}$ a Brownian motion with diffusion coefficient $\sigma$ and linear drift $\nu$, that is, $W^{(\sigma,\nu)}(t)=\sigma W(t)+\nu t$ where $W$ is a standard Brownian motion. Assume that $T$ is independent of $W$. Then, \begin{equation*} P\Big[D_{[0,T]}^+(W^{(\sigma,\nu)})>a\Big]=\frac{\exp(\nu a\sigma^{-2})}{\cosh(a\sigma^{-1}\sqrt{2\mu^{-1}+\nu^2\sigma^{-2}})+\frac{\nu\sigma^{-1}}{\sqrt{2\mu^{-1}+\nu^2\sigma^{-2}}}\sinh(a\sigma^{-1}\sqrt{2\mu^{-1}+\nu^2\sigma^{-2}})} \end{equation*} for all $a\geq0$. \end{prop} It is then not difficult to establish the following \begin{cor} \label{Corro1} Suppose that $\nu<0$ and that $a$, $\nu$ and $\mu$ are functions of the real variable $t>0$.
If $a|\nu|\to \infty$, $\nu^2\mu \to \infty$ and $a(\mu |\nu|)^{-1}\to 0$ as $t\to \infty$, then \begin{equation*} P\Big[D_{[0,T]}^+(W^{(\sigma, \nu)})>a\Big]= \frac{1}{1+\frac{\sigma^2}{2\nu^2\mu}\exp(\frac{2|\nu|a}{\sigma^2})}(1+o(1)) \end{equation*} as $t\to \infty$. \end{cor} For all $A\subset {\mathbb Z}^+$ we define $\tau_A:=\inf\{t>0: X_t\in A\}$, the first hitting time of $A$ by the random walk~$X$. When $A=\{x\}$, $x\in {\mathbb Z}^+$, we simply write $\tau_x$ instead of $\tau_{\{x\}}$.\\ Let $I=[a,b]$ with $0\leq a<b<\infty$ be a finite interval of ${\mathbb Z}^+$ and let $H(I):=D^+_I(U)\wedge D^-_I(U)$ and $\tilde{M}:=D^+_I(U)\vee D^-_I(U)$. We will need the following upper bound on the probability of confinement, which comes from the proof of Proposition 4.1 of \cite{PGF}: \begin{prop} \label{Confine} There exists a positive constant $K_2$ such that, ${\mathbb P}$-a.s., for any finite interval $I=[a,b]$ and any point $x$ such that $a<x<b$, \begin{equation*} {\mathtt P}_{\omega}^x[\tau_{\{a,b\}}\geq t] \leq \exp \Big\{-\frac{t}{K_2(b-a)^3(b-a+\tilde{M})e^{H(I)}}\Big\} \end{equation*} for all $t>K_2(b-a)^3(b-a+\tilde{M})e^{H(I)}$. \end{prop} For the random walk $X$, we will eventually need to estimate the probability of escaping in one specific direction. In Proposition \ref{escape} we state the result only for the probability of escaping to the right; nevertheless, in Section \ref{secTheo} we will use this estimate in both directions. We define a reversible measure $\pi$ by $\pi(0):=1$ and $\pi(x):= e^{-U(x)}+e^{-U(x-1)}$ for $x\geq 1$ (observe that $\pi(x)(1-q_x)=q_{x+1}\pi(x+1)$ for all $x\in {\mathbb Z}^+$). For any finite interval $I$ of ${\mathbb Z}^+$, we define $h_I:=\mathop{\mathrm{arg\,max}}_{x\in I}U(x)$.
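The exact formula of Proposition \ref{Prop0} and the simplified form of Corollary \ref{Corro1} can be compared numerically. The following sketch (an illustration, not part of the paper) evaluates both; the parameter values in the usage note are arbitrary choices within the regime $a|\nu|\gg 1$, $\nu^2\mu\gg 1$, $a(\mu|\nu|)^{-1}\ll 1$ of the corollary:

```python
import math

def drawup_tail_exact(a, sigma, nu, mu):
    """Exact tail P[D^+_{[0,T]} > a] from Proposition Prop0 (T ~ Exp, mean mu)."""
    s = math.sqrt(2.0 / mu + nu ** 2 / sigma ** 2)
    x = a * s / sigma
    return math.exp(nu * a / sigma ** 2) / (
        math.cosh(x) + (nu / sigma) / s * math.sinh(x))

def drawup_tail_asymptotic(a, sigma, nu, mu):
    """Simplified form of Corollary Corro1 (valid for nu < 0 in the
    regime a|nu| -> infinity, nu^2 mu -> infinity, a/(mu|nu|) -> 0)."""
    return 1.0 / (1.0 + sigma ** 2 / (2.0 * nu ** 2 * mu)
                  * math.exp(2.0 * abs(nu) * a / sigma ** 2))
```

With, say, $a=20$, $\sigma=1$, $\nu=-1$ and $\mu=10^6$, the two expressions agree to well within one percent, and at $a=0$ the exact formula returns $1$, as it must.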
We will use the following estimate (see e.g.\ the proof of Proposition 4.2 in \cite{PGF}): \begin{prop} \label{escape} There exists a positive constant $K_3$ such that, ${\mathbb P}$-a.s., for any finite interval $I=[a,b]$ of ${\mathbb Z}^+$ we have \begin{equation*} {\mathtt P}_{\omega}^a[\tau_b<t] \leq K_3t\frac{\pi(h_I)}{\pi(a)} \end{equation*} for all $t>1$. \end{prop} Using the above expression for the reversible measure $\pi$, we have \begin{equation*} \frac{\pi(h_I)}{\pi(a)}\leq e^{-U(h_I)+U(a)}\Big(1+e^{U(h_I)-U(h_I-1)}\Big). \end{equation*} If $\omega\in \Gamma(t)$ and $h_I< \ln^M t$, we deduce that $|U(h_I)-U(h_I-1)|\leq 2K_1\ln\ln t$. Thus, we obtain the following upper bound for $\frac{\pi(h_I)}{\pi(a)}$: \begin{equation} \label{WATQ} \frac{\pi(h_I)}{\pi(a)}\leq e^{-U(h_I)+U(a)}(2K_1+1)\ln t. \end{equation} \section{Technical lemmas} \label{Technical} We start by proving four lemmas on the asymptotic behavior of the potential $V$. We mention that, since $V$ is defined on ${\mathbb R}^+$, all the intervals considered in this section are intervals of ${\mathbb R}^+$. Let us recall that $s(t)=(C^*(\ln\ln t)^{-1}\ln t)^{\frac{1}{\alpha}}$. In Lemma \ref{Lem1}, we show that ${\mathbb P}$-a.s., for all $t$ large enough, the maximum draw-up of $V$ before $(1-\varepsilon)s(t)$ is smaller than $(1-\delta)\ln t$, for $\delta$ suitably chosen (see Fig.\ \ref{fig2}). In Lemma \ref{Lem2}, we show that for any integer $N$ we have that, ${\mathbb P}$-a.s., for all $t$ large enough, there exists a partition of $[(1-\varepsilon)s(t), (1-\frac{\varepsilon}{2})s(t)]$ into $N$ intervals such that on each interval the maximum draw-down of $V$ is greater than $(1+\delta)\ln t$ (see Fig.\ \ref{fig2b}).
In Lemma \ref{Lem3}, we show that for any integer $N$ we have that, ${\mathbb P}$-a.s., for all $t$ large enough, there exists a partition of $[s(t), (1+\varepsilon)s(t)]$ into $N$ intervals such that on each interval the maximum draw-up of $V$ is greater than $(1+\delta)\ln t$, for $\delta$ suitably chosen (see Fig.\ \ref{fig2c}). Finally, in Lemma~\ref{Lem4}, we show that on the interval $[0,\ln^{\frac{1}{\alpha}} t]$ the range of $V$ is smaller than $2\ln^{\frac{1}{\alpha}}t$. The proofs of Lemmas \ref{Lem2} and \ref{Lem4} follow from standard properties of Brownian motion. To prove Lemmas~\ref{Lem1} and \ref{Lem3} we use essentially the same method: we first approximate the potential $V$ by a suitable drifted Brownian motion and then apply Corollary \ref{Corro1}. \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{dessin2} \caption{Maximum draw-up of $V$ before $(1-\varepsilon) s(t)$.} \label{fig2} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{dessin2b} \caption{Partition of $[(1-\varepsilon)s(t),(1-\frac{\varepsilon}{2}) s(t)]$ into $N=4$ intervals.} \label{fig2b} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{dessin2c} \caption{Partition of $[s(t), (1+\varepsilon) s(t)]$ into $N=3$ intervals.} \label{fig2c} \end{center} \end{figure} For $\varepsilon\in (0,1)$, $\delta\in (0,1)$ and $N\in {\mathbb N}$, let us define the following events \[ A_{\varepsilon,\delta}(t):=\Big\{D^+_{[0,(1-\varepsilon)s(t)]}(V)\leq(1-\delta)\ln t\Big\}, \] \begin{align*} B_{\varepsilon,\delta,N}(t)&:=\Big\{\mbox{there exists a partition of $[(1-\varepsilon)s(t),(1-\frac{\varepsilon}{2})s(t)]$ into $N$ intervals $I_j$ such that} \nonumber\\ & \phantom{****}D^-_{I_j}(V)>(1+\delta)\ln t, j=1,\dots,N\Big\}, \end{align*} and \begin{align*} C_{\varepsilon,\delta,N}(t)&:=\Big\{\mbox{there exists a partition of
$[s(t),(1+\varepsilon)s(t)]$ into $N$ intervals $J_j$ such that} \nonumber\\ & \phantom{****}D^+_{J_j}(V)>(1+\delta)\ln t, j=1,\dots,N\Big\}. \end{align*} We first show the following \begin{lm} \label{Lem1} For all $\varepsilon\in (0,1)$, there exists $\delta>0$ small enough such that ${\mathbb P}[\liminf_{t\to \infty}A_{\varepsilon,\delta}(t)]=1$. \end{lm} \textit{Proof.} Consider an exponential random variable $T$ with parameter 1, independent of $W$. Let us also introduce the drifted Brownian motion $W^{(\sigma, m_1)}(x):=\sigma W(x)+m_1x$, where $m_1:=-\frac{b}{(1-\varepsilon)^{\alpha}s^{\alpha}(t)}$ is the derivative of the function $-\frac{b}{1-\alpha}x^{1-\alpha}$ at the point $(1-\varepsilon)s(t)$ (see Fig.~\ref{fig3}). By the choice of $m_1$, the event $\{D^+_{[0,(1-\varepsilon)s(t)]}(V)>(1-2\delta)\ln t\}$ is contained in the event $\{D^+_{[0,(1-\varepsilon)s(t)]}(W^{(\sigma, m_1)})>(1-2\delta)\ln t\}$; this implies that \begin{align} \label{EVEnt1} {\mathbb P}[A^c_{\varepsilon,2\delta}(t)] &\leq {\mathbb P}\Big[D^+_{[0,((1-\varepsilon)\vee (T(\ln\ln t)^2 ))s(t)]}(W^{(\sigma,m_1)})>(1-2\delta)\ln t\Big]\nonumber\\ &\leq {\mathbb P}\Big[D^+_{[0,T(\ln\ln t)^2 s(t)]}\Big(W^{(\sigma,m_1)} \Big)>(1-2\delta)\ln t\Big]+{\mathbb P}\Big[T\leq \frac{1-\varepsilon}{(\ln\ln t)^2}\Big]. \end{align} \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.6]{dessin4} \caption{On the definition of $W^{(\sigma, m_1)}$.} \label{fig3} \end{center} \end{figure} As $T$ is exponentially distributed with parameter 1, the second term on the right-hand side of (\ref{EVEnt1}) satisfies \begin{equation} \label{RTY1} {\mathbb P}\Big[T\leq \frac{1-\varepsilon}{(\ln\ln t)^2}\Big]=\frac{1-\varepsilon}{(\ln\ln t)^{2}}(1+o(1)) \end{equation} as $t \to \infty$.
For the first term, by Corollary \ref{Corro1} we obtain \begin{equation} \label{RTY2} {\mathbb P}\Big[D^+_{[0,Ts(t)\ln\ln t ]}\Big(W^{(\sigma, m_1)} \Big)>(1-2\delta)\ln t\Big]=\frac{(1+o(1))}{1+(\ln t)^{(\frac{1}{\alpha}-2)\Big(\frac{1-2\delta}{(1-\varepsilon)^{\alpha}}-1\Big)+o(1)}} \end{equation} as $t\to \infty$. Now, let $\mu>0$ and consider the sequence of time intervals $I_n:=[t_n, t_{n+1})$, where $t_n:=e^{(1+\mu)^n}$ for $n\geq 0$. Choosing $0<2\delta<1-(1-\varepsilon)^{\alpha}$ and using (\ref{EVEnt1}), (\ref{RTY1}) and (\ref{RTY2}) we obtain that $\sum_{n\geq 0}{\mathbb P}[A^c_{\varepsilon,2\delta}(t_n)]<\infty$. Thus, by the Borel--Cantelli lemma we obtain that for ${\mathbb P}$-a.a.\ $\omega$ there exists $n_0=n_0(\omega)$ such that $\omega \in A_{\varepsilon,2\delta}(t_n)$ for all $n\geq n_0$. Now, let $n\geq n_0$ and suppose $t\in [t_n, t_{n+1})$. We have ${\mathbb P}$-a.s., \begin{align*} D^+_{[0,(1-\varepsilon)s(t)]}(V)&\leq D^+_{[0,(1-\varepsilon)s(t_{n+1})]}(V)\nonumber\\ &\leq (1-2\delta)\ln t_{n+1}\nonumber\\ &=(1-2\delta)(1+\mu)\ln t_n\nonumber\\ &\leq (1-2\delta)(1+\mu)\ln t. \end{align*} Choosing $\mu$ in such a way that $(1-2\delta)(1+\mu)\leq (1-\delta)$, we obtain that for ${\mathbb P}$-a.a.\ $\omega$ there exists $t_0=t_0(\omega)$ such that $\omega \in A_{\varepsilon,\delta}(t)$ for all $t\geq t_0$, which proves Lemma \ref{Lem1}. $\Box$\par \par\relax \begin{lm} \label{Lem2} For all $\varepsilon\in (0,1)$ and $\delta>0$ we have ${\mathbb P}[\liminf_{t\to \infty}B_{\varepsilon,\delta,N}(t)]=1$, for all $N\geq 1$. \end{lm} \textit{Proof.} Let $\mu>0$ be such that $\beta:=(1-\varepsilon)(1+\mu)^{\frac{1}{\alpha}}< (1-\frac{\varepsilon}{2})$ and consider the sequence of time intervals $I_n:=[t_n, t_{n+1})$, where $t_n:=e^{(1+\mu)^n}$ for $n\geq 0$.
Divide the interval $[\beta s(t),(1-\frac{\varepsilon}{2})s(t)]$ into $N$ intervals $\mathcal{I}_j$, $j=1,\dots,N$, of size $\eta s(t)$ with $\eta:=N^{-1}(1-\frac{\varepsilon}{2}-\beta)$. Let us define the following events \[ E_{\varepsilon,\delta,\mu}(t):=\bigcup_{j=1}^N\Big\{D^-_{\mathcal{I}_j}(V)\leq (1+\delta)\ln t\Big\}. \] We have \begin{align} \label{WER} {\mathbb P}[E_{\varepsilon,2\delta,\mu}(t)]&\leq \sum_{j=1}^{N}{\mathbb P}\Big[D^-_{\mathcal{I}_j}(\sigma W) \leq (1+2\delta)\ln t\Big]\nonumber\\ &= N{\mathbb P}\Big[\max_{s\in [0,\eta s(t)]}|W(s)|\leq \frac{1+2\delta}{\sigma}\ln t\Big]\nonumber\\ &\leq N{\mathbb P}\Big[\max_{s\in [0,\eta s(t)]}W(s)\leq \frac{1+2\delta}{\sigma}\ln t\Big]\nonumber\\ &=N\Big(1-2{\mathbb P}\Big[W(\eta s(t))> \frac{1+2\delta}{\sigma}\ln t\Big]\Big)\nonumber\\ &=N\Big(1-2\int_{\frac{(1+2\delta)\ln t}{\sigma (\eta s(t))^{1/2}}}^{\infty} \frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi}}dy\Big)\nonumber\\ &=N\sqrt{\frac{2}{\pi}}\frac{1+2\delta}{\sigma\eta^{\frac{1}{2}} (C^*)^{\frac{1}{2\alpha}}}(\ln\ln t)^{\frac{1}{2\alpha}}(\ln t)^{-(\frac{1}{2\alpha}-1)} (1+o(1)) \end{align} as $t\to \infty$. We obtain from (\ref{WER}) that $\sum_{n\geq 0}{\mathbb P}[E_{\varepsilon,2\delta,\mu}(t_n)]<\infty$. Thus, by the Borel--Cantelli lemma we obtain that for ${\mathbb P}$-a.a.\ $\omega$ there exists $n_0=n_0(\omega)$ such that $\omega \in E^c_{\varepsilon,2\delta,\mu}(t_n)$ for all $n\geq n_0$. Now, suppose that $n\geq n_0$ and $t\in [t_n, t_{n+1})$. Since we have $s^{\alpha}(t_n)\leq s^{\alpha}(t)\leq (1+\mu)s^{\alpha}(t_n)$ for large enough $n$, we deduce that ${\mathbb P}$-a.s., there exists a partition of $[(1-\varepsilon)s(t),(1-\frac{\varepsilon}{2})s(t)]$ into $N$ intervals $I_j$, $j=1,\dots,N$, such that on each one $D^-_{I_j}(V)> (1+2\delta)\ln t_n$.
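The chain of (in)equalities in (\ref{WER}) rests on the reflection principle, ${\mathbb P}[\max_{[0,s]}W\leq a]=1-2\,{\mathbb P}[W(s)>a]$. As a purely illustrative numerical sanity check (not part of the proof; parameter values arbitrary), the closed form can be compared against a Monte Carlo estimate on a discrete grid:

```python
import numpy as np
from math import erf, sqrt

def p_max_below_mc(a, s, n_paths=20_000, n_steps=2_000, seed=1):
    # Monte Carlo estimate of P[max_{[0,s]} W <= a] using a
    # discretised Brownian path on an n_steps grid.
    rng = np.random.default_rng(seed)
    dt = s / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    return np.mean(paths.max(axis=1) <= a)

def p_max_below_exact(a, s):
    # Reflection principle: 1 - 2 P[W(s) > a] = erf(a / sqrt(2 s)).
    return erf(a / sqrt(2.0 * s))
```

For $a=s=1$ the two values agree up to Monte Carlo and discretisation error (the grid maximum slightly underestimates the continuous-time maximum).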
Since $\ln t_n\leq \ln t\leq (1+\mu)\ln t_n$, we have $(1+2\delta)\ln t_n\geq \frac{1+2\delta}{1+\mu}\ln t\geq (1+\delta)\ln t$ for $\mu>0$ small enough. From these last observations, we conclude that for ${\mathbb P}$-a.a.\ $\omega$, there exists $t_0=t_0(\omega)$ such that $\omega \in B_{\varepsilon,\delta,N}(t)$ for all $t\geq t_0$, which proves Lemma \ref{Lem2}. $\Box$\par \par\relax \begin{lm} \label{Lem3} For all $\varepsilon\in (0,1)$, there exists small enough $\delta>0$ such that ${\mathbb P}[\liminf_{t\to \infty}C_{\varepsilon,\delta,N}(t)]=1$, for all $N\geq 1$. \end{lm} \textit{Proof.} Let $\mu>0$ be such that $(1+\beta):=(1+\frac{\varepsilon}{2})(1+\mu)^{\frac{1}{\alpha}}< (1+\varepsilon)$ and consider again the sequence of time intervals $I_n:=[t_n, t_{n+1})$, where $t_n=e^{(1+\mu)^n}$ for $n\geq 0$. Divide the interval $[(1+\beta) s(t),(1+\varepsilon)s(t)]$ into $N$ intervals $\mathcal{J}_j$, $j=1,\dots,N$ of size $\frac{\varepsilon-\beta}{N} s(t)$. Let us define the following events \[ F_{\varepsilon,\delta,\mu}(t):=\bigcup_{j=1}^N \Big\{D^+_{\mathcal{J}_j}(V)\leq(1+\delta)\ln t\Big\}. \] Let $m_2:=-\frac{b}{(1+2^{-1}\varepsilon)^{\alpha}s^{\alpha}(t)}$ be the derivative of the function $-\frac{b}{1-\alpha}x^{1-\alpha}$ at point $(1+\frac{\varepsilon}{2})s(t)$ and introduce the drifted Brownian motion $W^{(\sigma, m_2)}(x):=\sigma W(x)+m_2 x$ (see Fig.~\ref{fig4}).
By definition of $W^{(\sigma, m_2)}$, the event $\Big\{D^+_{[(1+\beta)s(t),(1+\beta+T(\ln\ln t)^{-1})s(t)]}(V)\leq(1+2\delta)\ln t\Big\}$ is contained in the event $\Big\{D^+_{[(1+\beta)s(t),(1+\beta+T(\ln\ln t)^{-1})s(t)]}(W^{(\sigma, m_2)})\leq(1+2\delta)\ln t\Big\}$; this leads to \begin{align} \label{event5} {\mathbb P}[F_{\varepsilon,2\delta,\mu}(t)]&\leq \sum_{j=1}^{N} {\mathbb P}\Big[D^+_{\mathcal{J}_j}(V)\leq(1+2\delta)\ln t\Big]\nonumber\\ &\leq N{\mathbb P}\Big[D^+_{[(1+\beta)s(t),((1+\beta+N^{-1}(\varepsilon-\beta))\wedge (1+\beta+T(\ln\ln t)^{-1}))s(t)]}(V)\leq (1+2\delta)\ln t\Big]\nonumber\\ &\leq N {\mathbb P}\Big[D^+_{[(1+\beta)s(t),(1+\beta+T(\ln\ln t)^{-1})s(t)]}(V)\leq(1+2\delta)\ln t\Big]+N{\mathbb P}\Big[T>\frac{\varepsilon-\beta}{N}\ln\ln t \Big]\nonumber\\ &\leq N {\mathbb P}\Big[D^+_{[(1+\beta)s(t),(1+\beta+T(\ln\ln t)^{-1})s(t)]}(W^{(\sigma, m_2)})\leq(1+2\delta)\ln t\Big]+N{\mathbb P}\Big[T>\frac{\varepsilon-\beta}{N}\ln\ln t \Big]\nonumber\\ &= N{\mathbb P}\Big[D^+_{[0,Ts(t)(\ln\ln t)^{-1} ]}\Big( W^{(\sigma, m_2)} \Big)\leq(1+2\delta)\ln t\Big]+N{\mathbb P}\Big[T>\frac{\varepsilon-\beta}{N}\ln\ln t \Big]. \end{align} \begin{figure}[!htb] \begin{center} \includegraphics[scale= 0.7]{dessin5} \caption{On the definition of $W^{(\sigma, m_2)}$.} \label{fig4} \end{center} \end{figure} As $T$ is exponentially distributed with parameter 1, we have for the second term of the right-hand side of (\ref{event5}) \begin{equation} \label{event6} N{\mathbb P}\Big[T>\frac{\varepsilon-\beta}{N}\ln\ln t \Big]=N\ln^{-\frac{\varepsilon-\beta}{N}} t.
\end{equation} For the first term, we use Corollary \ref{Corro1} to obtain that \begin{equation} \label{event7} {\mathbb P}\Big[D^+_{[0,Ts(t)(\ln\ln t)^{-1} ]}\Big(W^{(\sigma, m_2)} \Big)\leq(1+2\delta)\ln t\Big]=1-\frac{(1+o(1))}{1+(\ln t)^{(\frac{1}{\alpha}-2)\Big(\frac{1+2\delta}{(1+2^{-1}\varepsilon)^{\alpha}}-1\Big)+o(1)}} \end{equation} as $t\to \infty$. Choosing $0<2\delta<(1+2^{-1}\varepsilon)^{\alpha}-1$ and using (\ref{event5}), (\ref{event6}) and (\ref{event7}) we obtain that $\sum_{n\geq 0}{\mathbb P}[F_{\varepsilon,2\delta,\mu}(t_n)]<\infty$. Thus, by the Borel--Cantelli lemma we obtain that for ${\mathbb P}$-a.a.\ $\omega$ there exists $n_0=n_0(\omega)$ such that $\omega \in F^c_{\varepsilon,2\delta,\mu}(t_n)$ for all $n\geq n_0$. Now, let $n\geq n_0$ and suppose $t\in [t_n, t_{n+1})$. Since we have $s^{\alpha}(t_n)\leq s^{\alpha}(t)\leq (1+\mu)s^{\alpha}(t_n)$, we deduce that ${\mathbb P}$-a.s., there exists a partition of $[(1+\frac{\varepsilon}{2})s(t),(1+\varepsilon)s(t)]$ into $N$ intervals $J_j$, $j=1,\dots,N$, such that on each one $D^+_{J_j}(V)> (1+2\delta)\ln t_n$. As $\ln t_n\leq \ln t\leq (1+\mu)\ln t_n$, we have $(1+2\delta)\ln t_n\geq \frac{1+2\delta}{1+\mu}\ln t\geq (1+\delta)\ln t$ for $\mu>0$ small enough. From these last observations, we conclude that for ${\mathbb P}$-a.a.\ $\omega$, there exists $t_0=t_0(\omega)$ such that $\omega \in C_{\varepsilon,\delta,N}(t)$ for all $t\geq t_0$, which proves Lemma \ref{Lem3}. $\Box$\par \par\relax Finally, let $G(t):=\Big\{\max_{y\leq \ln^{1/\alpha} t}|V(y)|\leq 2\ln^{\frac{1}{\alpha}}t\Big\}$. We show the following \begin{lm} \label{Lem4} We have that ${\mathbb P}[\liminf_{t\to \infty}G(t)]=1$. \end{lm} \textit{Proof.} Let $n$ be a positive integer.
By \cite{PerMot}, Lemma 12.9, we have \begin{align*} {\mathbb P}\Big[\max_{y \leq \ln^{1/ \alpha} (n+1)}|V(y)|> 2\ln^{\frac{1}{\alpha}}n\Big]&\leq {\mathbb P}\Big[\max_{y\leq \ln^{1/\alpha} (n+1)}|W(y)|>\sigma^{-1}\ln^{\frac{1}{\alpha}}n\Big]\nonumber\\ &\leq 2{\mathbb P}\Big[\max_{y\leq \ln^{1/ \alpha} (n+1)}W(y)>\sigma^{-1}\ln^{\frac{1}{\alpha}}n\Big]\nonumber\\ &=2{\mathbb P}\Big[W( \ln^{\frac{1}{\alpha}} (n+1))>\sigma^{-1}\ln^{\frac{1}{\alpha}}n\Big]\nonumber\\ &\leq \frac{2}{\sqrt{2\pi \sigma^2}}e^{-\frac{\ln^{\frac{1}{\alpha}} (n+1)}{{2\sigma^2}}} \end{align*} for sufficiently large $n$. Since $\alpha\in(0,\frac{1}{2})$, we deduce that $\sum_{n>1}{\mathbb P}\Big[\max_{y\leq \ln^{1/\alpha} (n+1)}|V(y)|> 2\ln^{\frac{1}{\alpha}}n\Big]<\infty$. By the Borel--Cantelli lemma, we have that for ${\mathbb P}$-a.a.\ $\omega$ there exists $n_0=n_0(\omega)$ such that for all $n\geq n_0$ we have $\max_{y\leq \ln^{1/\alpha} (n+1)}|V(y)|\leq 2\ln^{\frac{1}{\alpha}}n$. Now consider $n\geq n_0$ and $t\in [n,n+1)$; we have that $\max_{y\leq \ln^{1/\alpha} t}|V(y)|\leq \max_{y\leq \ln^{1/\alpha} (n+1)}|V(y)|\leq 2\ln^{\frac{1}{\alpha}}n\leq 2\ln^{\frac{1}{\alpha}}t$. This shows that ${\mathbb P}[\liminf_{t\to \infty}G(t)]=1$ and concludes the proof of Lemma \ref{Lem4}. $\Box$\par \par\relax \section{Proof of Theorem~\ref{Theo}} \label{secTheo} In this last section, for the sake of brevity, expressions like $X_t=x$ or $\tau_x>t$ must be understood as $X_t=\lfloor x\rfloor$ or $\tau_{\lfloor x \rfloor}>t$ (where $\lfloor \cdot\rfloor$ is the integer part function) whenever $x$ is not necessarily an integer. Also, in contrast with the former section, all the intervals considered in this section are intervals of ${\mathbb Z}^+$. We will also need the function $\lceil\cdot \rceil:=\lfloor \cdot\rfloor+1$. Fix some $\varepsilon\in (0,1)$.
We start by showing that for ${\mathbb P}$-a.a.\ $\omega$, ${\mathtt P}_{\omega}[\liminf_{t \to \infty}s(t)^{-1}X_t\geq (1-\varepsilon)]=1$. Let $\delta\in (0,1)$ be such that Lemmas \ref{Lem1} and \ref{Lem2} hold. Take $N=\lfloor 2\delta^{-1}\rfloor$ and let $\omega$ be such that $\omega\in \liminf_{t\to \infty}(A_{\varepsilon, \delta}(t)\cap B_{\varepsilon, \delta,N}(t)\cap G(t)\cap \Gamma(t))$. Let us define \[ \hat{\tau}(t):=\inf\{u> \tau_{\lceil(1-\frac{\varepsilon}{2})s(\lfloor t\rfloor)\rceil}: X_u=(1-\varepsilon)s(\lceil t\rceil)\} \] for all $t\geq 3$, with the convention $\inf\{\emptyset\}=\infty$. We have, for all integers $n\geq 3$, \begin{align} \label{FUG} {\mathtt P}_{\omega}[\{\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil} \geq n\}\cup\{\hat{\tau}(n)-\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}\leq n\}] &\leq {\mathtt P}_{\omega}[\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil} \geq n]\nonumber\\ &\phantom{**}+{\mathtt P}_{\omega}[\hat{\tau}(n)-\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}\leq n]. \end{align} The next step is to apply Proposition \ref{Confine} to the first term of the right-hand side of (\ref{FUG}). Since $\omega\in \liminf_{t\to \infty}(A_{\varepsilon, \delta}(t)\cap G(t)\cap \Gamma(t))$, we have that for $n$ large enough $H([0,\lceil(1-\frac{\varepsilon}{2})s(n)\rceil])\leq D^+_{[0,\lceil(1-\frac{\varepsilon}{2})s(n)\rceil]}(U)\leq (1-\delta)\ln n+o(\ln n)$ and $\tilde{M}\leq 4\ln^{\frac{1}{\alpha}} n+o(\ln n)$. Therefore, by Proposition~\ref{Confine} we obtain \begin{align} \label{FUG1} {\mathtt P}_{\omega}[ \tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil} \geq n]\leq \exp{\{-n^{\delta+o(1)}\}} \end{align} as $n \to \infty$.
For the second term of the right-hand side of (\ref{FUG}), we have by the Markov property applied at time $\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}$, \begin{equation} {\mathtt P}_{\omega}[\hat{\tau}(n)-\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}\leq n]= {\mathtt P}_{\omega}^{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}[\tau_{(1-\varepsilon)s(n+1)}\leq n]. \end{equation} Since $\omega\in \liminf_{t\to \infty}B_{\varepsilon,\delta,N}(t)\cap \Gamma(t)$, there exists for $n$ large enough a partition $x_0=\lfloor(1-\varepsilon)s(n+1)\rfloor<x_1<\dots<x_{N-1}<x_N=\lceil(1-\frac{\varepsilon}{2})s(n)\rceil$ of $[\lfloor(1-\varepsilon)s(n+1)\rfloor,\lceil(1-\frac{\varepsilon}{2})s(n)\rceil]$ into $N=\lfloor 2\delta^{-1}\rfloor$ intervals $I_j=[x_{j-1},x_{j}]$, $j=1,\dots,N$, such that on each interval $D^-_{I_j}(U)>(1+\delta)\ln n-o(\ln n)$. By the Markov property we have \begin{align*} {\mathtt P}_{\omega}^{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}[\tau_{(1-\varepsilon)s(n+1)}\leq n] &\leq {\mathtt P}_{\omega}^{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}[\tau_{x_{j-1}}\leq n, j=1,\dots,N]\nonumber\\ &\leq \prod_{j=1}^{N}{\mathtt P}_{\omega}^{x_{j}}[\tau_{x_{j-1}}\leq n]. \end{align*} Applying Proposition \ref{escape} to the right-hand side of the last inequality and using bound (\ref{WATQ}), we obtain \begin{align} \label{FUG2} {\mathtt P}_{\omega}^{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}[\tau_{(1-\varepsilon)s(n+1)}\leq n]&\leq K_3^{\lfloor 2\delta^{-1}\rfloor}(n+1)^{-(2-\delta)+o(1)} \end{align} as $n\to \infty$. From (\ref{FUG}), (\ref{FUG1}) and (\ref{FUG2}), as $\delta\in (0,1)$, we deduce that $\sum_{n\geq 3}{\mathtt P}_{\omega}[\{\tau_{(1-\frac{\varepsilon}{2})s(n)} \geq n\}\cup\{\hat{\tau}(n)-\tau_{(1-\frac{\varepsilon}{2})s(n)}\leq n\}]<\infty$.
By the Borel--Cantelli lemma, we obtain that, ${\mathtt P}_{\omega}$-a.s., for all $n$ large enough $X_n>(1-\varepsilon)s(n)$. Now, for $t\in [n,n+1)$ and $n$ large enough, we have that $\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}<n\leq t$ and $\hat{\tau}(t)-\tau_{\lceil(1-\frac{\varepsilon}{2})s(n)\rceil}\geq n+1>t$, which implies $X_t> (1-\varepsilon)s(t)$. By Lemmas \ref{Lem1}, \ref{Lem2}, \ref{Lem4} and the definition of $\Gamma(t)$, we conclude that for ${\mathbb P}$-a.a.\ $\omega$, ${\mathtt P}_{\omega}[\liminf_{t \to \infty} s(t)^{-1}X_t\geq (1-\varepsilon)]=1$. We continue the proof of Theorem \ref{Theo} by showing that for ${\mathbb P}$-a.a.\ $\omega$, ${\mathtt P}_{\omega}[\limsup_{t \to \infty} s(t)^{-1}X_t\leq(1+\varepsilon)]=1$. Let $\delta\in (0,1)$ be such that Lemma \ref{Lem3} holds, $N=\lfloor 2\delta^{-1}\rfloor$ and $\omega$ be such that $\omega\in \liminf_{t\to \infty}(C_{\varepsilon, \delta,N}(t)\cap \Gamma(t))$. Since $\omega\in \liminf_{t\to \infty}(C_{\varepsilon, \delta,N}(t)\cap \Gamma(t))$, there exists for all large enough integers $n$ a partition $y_0=0<y_1<\dots<y_{N-1}<y_N=\lfloor(1+\varepsilon)s(n)\rfloor$ of $[0,\lfloor(1+\varepsilon)s(n)\rfloor]$ into $N=\lfloor 2\delta^{-1}\rfloor$ intervals $J_j=[y_{j-1},y_{j}]$, $j=1,\dots,N$, such that on each interval $D^+_{J_j}(U)>(1+\delta)\ln n-o(\ln n)$. By the Markov property we have \begin{align*} {\mathtt P}_{\omega}[\tau_{(1+\varepsilon)s(n)}\leq n]\leq \prod_{j=1}^{N}{\mathtt P}_{\omega}^{y_{j-1}}[\tau_{y_{j}}\leq n]. \end{align*} Applying Proposition \ref{escape} to the right-hand side of the last inequality and using bound (\ref{WATQ}), we obtain \begin{align} \label{ERT} {\mathtt P}_{\omega}[\tau_{(1+\varepsilon)s(n)}\leq n]\leq K_3^{\lfloor 2\delta^{-1}\rfloor}(n+1)^{-(2-\delta)+o(1)} \end{align} as $n\to \infty$.
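Each application of the Borel--Cantelli lemma in this section relies on the summability of bounds of the form $(n+1)^{-(2-\delta)}$ with $\delta\in(0,1)$ (the constant $K_3$ is irrelevant to convergence). As a hypothetical numerical illustration of this integral-test argument, outside the paper's own proof:

```python
def tail_bound(start, delta):
    # Integral test: sum_{n >= start} (n+1)^(-(2-delta))
    # is at most start^(-(1-delta)) / (1-delta) when delta in (0,1).
    return start ** (-(1.0 - delta)) / (1.0 - delta)

def partial_sum(n_terms, delta):
    # Partial sum of the Borel-Cantelli series sum_n (n+1)^(-(2-delta)).
    return sum((n + 1) ** (-(2.0 - delta)) for n in range(n_terms))

# For delta in (0,1) the exponent 2-delta exceeds 1, so the series
# converges: extending the partial sum tenfold adds less than the
# integral-test tail bound taken at the smaller truncation point.
s_small = partial_sum(10_000, 0.5)
s_large = partial_sum(100_000, 0.5)
```

Smaller $\delta$ gives a faster-decaying bound, hence a smaller partial sum, which is why any $\delta\in(0,1)$ suffices for the argument.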
From (\ref{ERT}), as $\delta\in (0,1)$, we deduce that $\sum_{n\geq 3}{\mathtt P}_{\omega}[\tau_{(1+\varepsilon)s(n)}\leq n]<\infty$. By the Borel--Cantelli lemma, we obtain that, ${\mathtt P}_{\omega}$-a.s., for all $n$ large enough $X_n<(1+\varepsilon)s(n)$. Now, for $t\in [n,n+1)$ and $n$ large enough, we have that $\tau_{(1+\varepsilon)s(t)}\geq \tau_{(1+\varepsilon)s(n)}\geq n+1> t$, which implies $X_t<(1+\varepsilon)s(t)$. By Lemma~\ref{Lem3} and the definition of $\Gamma(t)$, we conclude that for ${\mathbb P}$-a.a.\ $\omega$, $${\mathtt P}_{\omega}\Big[\limsup_{t \to \infty}\frac{X_t}{s(t)}\leq(1+\varepsilon)\Big]=1.$$ To sum up, we showed that for ${\mathbb P}$-a.a.\ $\omega$, \[ {\mathtt P}_{\omega}\Big[\liminf_{t\to \infty}\frac{X_t}{s(t)}\geq(1-\varepsilon), \limsup_{t \to \infty}\frac{X_t}{s(t)}\leq(1+\varepsilon)\Big]=1. \] As $\varepsilon$ is arbitrary, this shows Theorem \ref{Theo}. $\Box$\par \par\relax \section*{Acknowledgements} C.G.\ is grateful to FAPESP (grant 2009/51139--3) for financial support. G.M.S.\ thanks FAPESP (grant 2011/21089-4) and S.P.\ thanks CNPq (grant 301644/2011-0) for financial support. C.G.\ and S.P.\ thank FAPESP (grant 2009/52379-8) for financial support. C.G.\ and G.M.S.\ thank NUMEC for kind hospitality. \begin{thebibliography}{3} \bibitem{Ambj06} \textsc{T.~Ambj\"ornsson, S.~K.~Banik, O.~Krichevsky, R.~Metzler} (2006) Sequence sensitivity of breathing dynamics in heteropolymer DNA. \textit{Phys.\ Rev.\ Lett.} \textbf{97}, 128105. \bibitem{ZRP4} \textsc{I.~Armend\'ariz, M.~Loulakis} (2009) Thermodynamic limit for the invariant measures in supercritical zero range processes. \textit{Probab.\ Theory Relat.\ Fields} \textbf{145} (1-2), 175--188. \bibitem{CP} \textsc{F.~Comets, S.~Popov} (2003) Limit law for transition probabilities and moderate deviations for Sinai's random walk in random environment. \textit{Probab.\ Theory Relat.\ Fields} \textbf{126} (4), 571--609.
\bibitem{ZRP3} \textsc{P.~Ferrari, C.~Landim, V.~Sisko} (2007) Condensation for a fixed number of independent random variables. \textit{J.\ Stat.\ Phys.} \textbf{128} (5), 1153--1158. \bibitem{PGF} \textsc{A.~Fribergh, N.~Gantert, S.~Popov} (2010) On slowdown and speedup of transient random walks in random environment. \textit{Probab.\ Theory Relat.\ Fields} \textbf{147} (1-2), 43--88. \bibitem{Galles} \textsc{C.~Gallesco} (2011) On the moments of the meeting time of independent random walks in random environment. \textit{To appear in ESAIM: Probab.\ and Stat.} \bibitem{GT1} \textsc{G.~Giacomin, F.~L.~Toninelli} (2006) Smoothing of depinning transitions for directed polymers with quenched disorder. \textit{Phys.\ Rev.\ Lett.} \textbf{96}, 070602. \bibitem{GT2} \textsc{G.~Giacomin, F.~L.~Toninelli} (2009) On the irrelevant disorder regime of pinning models. \textit{Ann.\ Probab.} \textbf{37} (5), 1841--1875. \bibitem{CGS} \textsc{S.~Gro{\ss}kinsky, P.~Chleboun, G.~M.~Sch\"utz} (2008) Instability of condensation in the zero-range process with random interaction. \textit{Phys.\ Rev.\ E} \textbf{78}, 030101(R). \bibitem{ZRP2} \textsc{S.~Gro{\ss}kinsky, G.~M.~Sch\"utz, H.~Spohn} (2003) Condensation in the zero range process: stationary and dynamical properties. \textit{J.\ Stat.\ Phys.} \textbf{113}, 389--410. \bibitem{ZRP1} \textsc{I.~Jeon, P.~March, B.~Pittel} (2000) Size of the largest cluster under zero-range invariant measures. \textit{Ann.\ Probab.} \textbf{28}, 1162--1194. \bibitem{Kafr02} \textsc{Y.~Kafri, D.~Mukamel, L.~Peliti} (2002) Melting and unzipping of DNA. \textit{Eur.\ Phys.\ J.\ B} \textbf{27}, 135--146. \bibitem{KMT} \textsc{J.~Koml{\'o}s, P.~Major, G.~Tusn{\'a}dy} (1976) An approximation of partial sums of independent RV's and the sample DF. II. \textit{Z.\ Wahrsch.\ Verw.\ Gebiete} \textbf{34} (1), 33--58.
\bibitem{MAPAM} \textsc{M.~Magdon-Ismail, A.~Atiya, A.~Pratap, Y.~Abu-Mostafa} (2004) On the maximum drawdown of a Brownian motion. \textit{J.\ Appl.\ Prob.} \textbf{41}, 147--161. \bibitem{Menswade1} \textsc{M.V.~Menshikov, A.~R.~Wade} (2006) Random walk in random environment with asymptotically zero perturbation. \textit{J.\ Europ.\ Math.\ Soc.} \textbf{8} (3), 491--513. \bibitem{Menswade2} \textsc{M.V.~Menshikov, A.~R.~Wade} (2008) Logarithmic speeds for one-dimensional perturbed random walk in random environment. \textit{Stoch.\ Proc.\ Appl.} \textbf{118} (3), 389--416. \bibitem{PerMot} \textsc{P.~M\"orters, Y.~Peres} (2010) \textit{Brownian Motion.} Cambridge University Press. \bibitem{Salminen} \textsc{P.~Salminen, P.~Vallois} (2007) On maximum increase and decrease of Brownian motion. \textit{Ann.\ Inst.\ H.\ Poincar\'e} \textbf{43}, 655--676. \bibitem{Sinai} \textsc{Ya.G.~Sinai} (1982) The limiting behavior of one-dimensional random walk in random medium. \textit{Theory Probab.\ Appl.} \textbf{27}, 256--268. \bibitem{Zeitouni} \textsc{O.~Zeitouni} (2004) Random walks in random environment. \textit{Lectures on probability theory and statistics}, 189--312, \textit{Lecture Notes in Math.}, \textbf{1837}, Springer, Berlin. \end{thebibliography} \end{document}
\begin{document} \title{On the Design of Decentralised Data Markets\\ } \maketitle \pagestyle{plain} \begin{abstract} We present an architecture to implement a decentralised data market, whereby agents are incentivised to collaborate to crowd-source their data. The architecture is designed to reward data that furthers the market's collective goal, and to distribute rewards fairly to all those that contribute with their data. This is achieved by leveraging the concept of the Shapley value from game theory. Furthermore, we introduce trust assumptions based on provable honesty, as opposed to wealth or computational power, and we aim to reward agents that actively enable the functioning of the market. In order to evaluate the resilience of the architecture, we characterise its breakdown points for various adversarial threat models and we validate our analysis through extensive Monte Carlo simulations. \end{abstract} \section{Introduction} \label{sec: intro} \subsection{Preamble} \label{Preamble} In recent years there has been a shift in many industries towards data-driven business models \cite{stahl2014data}. Namely, with the advancement of the field of data analytics, and the increased ease with which data can be collected, it is now possible to use both these disruptive trends to develop insights in various situations, and to monetise these insights. Traditionally, users have made collected data available to large platform providers in exchange for services (for example, web browsing). However, the fairness and even the ethics of these business models continue to be questioned, with more and more stakeholders arguing that such platforms should recompense citizens in a more direct manner for data that they control \cite{stucke2017should}, \cite{8935575}, \cite{fbnytimes}, \cite{aperjis2012market}.
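As background for the fair-reward mechanism mentioned in the abstract, the Shapley value assigns each agent its average marginal contribution over all orders in which a coalition can form. The following is a minimal, purely illustrative sketch with a hypothetical characteristic function (it is not the paper's implementation, which must also address the cost of exact computation for large coalitions):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    # Exact Shapley value: each player's average marginal contribution
    # to value(), taken over all orderings in which the coalition forms.
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orderings = factorial(len(players))
    return {p: total / n_orderings for p, total in phi.items()}

# Hypothetical example: the value of a coalition of data sellers grows
# superadditively with the number of distinct data streams pooled.
coalition_value = lambda coalition: len(coalition) ** 2
rewards = shapley_values(["seller_a", "seller_b", "seller_c"], coalition_value)
```

By the efficiency property the rewards sum to the value of the grand coalition, and symmetric sellers receive equal shares.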
To give more context, Apple has recently responded to such calls by introducing changes to their ecosystem to enable users to retain ownership of data collected on their devices. At the time of writing, it was recently reported that these new privacy changes have caused the profits of Meta, Snap, Twitter and Pinterest to plummet (losing a combined value of \$278 billion since the update went into effect in late April 2021\footnote{\url{https://www.bloomberg.com/news/articles/2022-02-03/meta-set-for-200-billion-wipeout-among-worst-in-market-history}}). The privacy change introduced by Apple allows users to mandate that apps not track their data for targeted advertising. This small change has been received well amongst Apple users, with a reported 62\% of users opting out of the tracking \cite{attprivacy2021}. Clearly this change will have a profound impact on companies relying on selling targeted advertisements to the users of their products. Users can now decide how much data they wish to provide to these platforms and they seem keen to retain data ownership. It seems reasonable to expect that in the future, companies wishing to continue to harvest data from Apple devices will need to incentivise, in some manner, users or apps to make their data available.\newline The need for new ownership models to give users sovereignty over data is motivated by two principal concerns. The first regards fair recompense to the data harvester by data-driven businesses. While it is true that users receive value from companies in the form of the services their platforms provide (e.g., Google Maps), it is not obvious that the exchange of value is fair. The second arises from the potential for unethical behaviour that is inherent to the currently prevailing business models. Scenarios in which unethical behaviour has emerged from poor data-ownership models are well documented.
Examples of these include Google Project Nightingale\footnote{\url{https://www.bbc.co.uk/news/technology-50388464}} \footnote{\url{https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information}}, where sensitive medical data was collected from patients who could not opt out of having their data stored in Google Cloud servers. The scale of this project was the largest of its kind, with millions of patient records collected for processing health care data. Another infamous case study was the Cambridge Analytica (CA) scandal in 2015. Personal data of 87 million users was acquired via 270,000 users granting access to a third-party app that also exposed the users' friend networks, without those users having explicitly given CA permission to collect such data\footnote{\url{https://www.nytimes.com/2018/04/04/technology/mark-zuckerberg-testify-congress.html}} \cite{CAieee}. CA has a vast portfolio of elections it has worked to influence, the most notorious being the 2016 US presidential elections \cite{ur2019facebook}, \cite{trumpCA}.\newline It is important to understand that these cases are not anecdotal. Without adequate infrastructure to track and trade ownership, cases like the ones outlined above, involving for example mass privacy breaches, have the potential to become more frequent. Apple's actions are an important step in the direction of giving individuals ownership over their data and potentially alleviating such issues; however, one may correctly ask why users should trust Apple, or any other centralised authority, to preserve their privacy and not trade with their data. Motivated by this background, and by this latter question, we argue for a shift towards a more decentralised data ownership model, where this ownership can be publicly verified and audited.
We are interested in developing a data-market design that is hybrid in nature; {\em hybrid} in the sense that some non-critical components of the market are provided by trusted infrastructure, but where the essential components of the marketplace, governing ownership, trust, data veracity, etc., are all designed in a decentralised manner. The design of such markets is not new and there have been numerous attempts to design marketplaces to enable the exchange of data for money \cite{stahl2014data2}. This, however, is an extremely challenging endeavour. Data cannot be treated like a conventional commodity due to certain properties it possesses. It is easily replicable; its value is time-dependent, intrinsically combinatorial, and dependent on who has access to the data set. It is also difficult for companies to know the value of a data set a priori, and verifying its authenticity is challenging \cite{agarwal2019marketplace}, \cite{arrow2015economic}. These properties make marketplace models difficult to design, and their design has become an emergent research area.\newline In what follows, we describe a first step in the design of a marketplace where data can be exchanged. Furthermore, this marketplace provides certain guarantees to the buyer and seller alike. More specifically, the goal is to rigorously define and address the challenges related to the tasks of selling and buying data from unknown parties, whilst compensating the sellers fairly for their own data. As mentioned, to prevent monopolisation, a partially decentralised setting will be considered, focusing on the important case of data-rich environments, where collected data is readily available and not highly sensitive. Accordingly, this work focuses on a specific use case from the automotive industry that, we hope, might represent a first step towards more general architectures.
\subsection{Specific motivation} \label{Specific motivation} We focus on a class of problems where there is an oversupply of data but a lack of adequate ownership methods to underpin a market. One example of such a situation is where agents collaborate as part of coalitions in a crowd-sourced environment to make data available to potential buyers. More specifically, the interest is placed in the context of a city where drivers of vehicles wish to monetise the data harvested from their cars' sensors. An architecture is proposed that enables vehicle owners to sell their data in coalitions to buyers interested in purchasing their data.\newline While this situation is certainly a simplified example of a scenario in which there is a need for a data market, it remains of interest for two reasons. Firstly, situations of this nature prevail in many application domains. Scenarios where metrics of interest can be aggregated to generate a data-rich image of the state of a given environment are of value to a wide range of stakeholders, which, in the given context, could include anyone from vehicle manufacturers, mobility and transport companies to city councils. Secondly, this situation, while simplifying several aspects, still captures many pertinent aspects of more general data-market design: for example, detection of fake data; certification of data quality; resistance to adversarial attacks. The context of automotive data collection offers a ripe opportunity to develop a decentralised data market. The past decade has seen traditional vehicles transition from being purely mechanical devices to cyber-physical ones, having both a physical and a digital identity. From a practical viewpoint, vehicles are quickly increasing their sensing capabilities, especially given the development of autonomous driving research. Already, there is an excess of useful data collected by the latest generation of vehicles, and this data is of high value.
According to \cite{mckinzie2016}, \emph{“car data and shared mobility could add up to more than \$1.5 trillion by 2030”}. Such conditions prevail not only in the automotive sector; for example, devices such as smartphones, smart watches, modern vehicles, electric vehicles, e-bikes and scooters, as well as a host of other IoT devices, are capable of sensing many quantities that are of interest to a diverse range of stakeholders. In each of these applications the need for such marketplaces is already emerging. Companies such as Nissan, Tesla, PSA and others have already invested in demonstration pilots in this direction and are already developing legal frameworks for such applications\protect\footnote{\url{https://www.aidataanalytics.network/data-monetization/articles/tesla-automaker-or-data-company}} in anticipation of opportunities that may emerge. As mentioned, the main issue in the design of such a data market lies in the lack of an adequate ownership method. Who owns the data generated by the vehicle? The answer is unclear. A study by the Harvard Journal of Law \& Technology concludes that it is most likely the company that manufactured the car who owns the data, even though the consumer owns the smart car itself \cite{zhang2018owns}. According to the authors of the study, this is because the definition of ownership of data is not congruent with other existing definitions of ownership, such as intellectual property (IP), and therefore the closest proxy to owning a data set is having the rights to access, limit access to, use, and destroy data. Most importantly, consumers do not have the right to economically exploit the data they produce. Nonetheless, EU GDPR laws expressly state that users will be able to transfer their car data to a third party should they so wish. According to \cite{storing2017}, “The data portability principle was expressly created to encourage competition”.
However, if the data is owned by the automobile company, how can consumers verify who has access to their data? Placing trust assumptions on the car manufacturer should be rigorously justified before such marketplaces emerge. Given the lack of verifiability of a centralised authority, such as a car manufacturing company, we propose exploring decentralised or hybrid alternatives.\newline The objective of this paper is directly motivated by such situations and by the issues described so far. However, as previously discussed, rather than enabling manufacturers to monetise this data, we are interested in situations where device owners, or coalitions of device owners, own the data collected by their devices, and wish to make this data available for recompense. This is fundamentally different to the situation that prevails today, whereby users make their data freely available to platform providers such as Google in exchange for using their platform. Nevertheless, the recent actions of Apple suggest that this new inverted (and emancipated) business model, whereby providers compete and pay for data of interest, could emerge as an alternative model of data management, one in which users are able to control and manage the data that they reveal. Given this background context, we are interested in developing a platform whereby data owners can make available, and securely transfer ownership of, data streams to other participants in the market. \subsection{Related Work} \label{RelatedWork} \subsubsection{Decentralised vs Centralised Data Markets} \label{DecentralisedvsCntralised} Numerous works have proposed decentralised data markets. While many of these proposals use Blockchain architectures for their implementations, many simply utilise the underlying technology and fail to address Blockchain design flaws as they pertain to data markets \cite{ramachandran2018towards}, \cite{hynes2018demonstration}, \cite{travizano2018wibson}.
For example, Proof-of-Work (PoW) based Blockchains reward miners with the most computational power. Aside from the widely discussed issue of energy wastage, the PoW mechanism is itself an opportunity, hitherto unexploited, for a data market: as we shall shortly see, it can be adapted to generate work that is useful for the operation of the marketplace. In addition, Blockchain-based systems typically use commission-based rewards to guide the interaction between users of the network and Blockchain miners. Such a miner-user interaction mechanism is not suitable in the context of data markets, as it effectively prioritises wealthier users' access to the market. Moreover, miners with greater computational power are more likely to earn the right to append a block, and thus earn the commission. This reward can then be invested in more computational power, leading to a positive feedback loop in which more powerful miners become ever more likely to write blocks and earn further commissions. Similarly, wealthier agents are more likely to receive service for transactions of higher monetary value. This could cause traditional PoW-based Blockchains to centralise over time \cite{beikverdi2015trend}. Indeed, centralised solutions to data markets already exist, such as \cite{snowflake}, which focus primarily on implementing methods to share and copy data, and certain rights to it, such as read rights. Considering the aforementioned properties of PoW-based Blockchains, we explore other distributed ledger architectures to implement a decentralised data market. \subsubsection{Trust and Honesty Assumptions} \label{TrustAssumptions} Another possible categorisation of prior work relates to the trust assumptions made in the system. The work in \cite{rasouli2021data} assumes that upon being shared, the data is reported truthfully and fully.
In practice, this assumption rarely holds, and a mitigation for malicious behaviour in shared systems must be considered. This assumption is justified in their work by relying on a third-party auditor, which the authors of \cite{travizano2018wibson} also utilise. However, introducing an auditor simply shifts the trust assumption to the auditor's honest behaviour and forgoes decentralisation. In \cite{agarwal2019marketplace}, it is identified that the buyer may not be honest in their valuation of data. They propose an algorithmic solution that prices data by observing the gain in prediction accuracy that it yields to the buyer. However, this comes at the cost of privacy for the buyer: they must reveal their predictive task. In practice, many companies would not reveal this intellectual property, especially when it is the core of their business model. The work of \cite{narula2018zkledger} is an example of a publicly verifiable decentralised market. Their system allows its users to audit transactions without compromising privacy. Unfortunately, their ledger is designed for the transaction of a finite asset: creating or destroying the asset will fail to pass the auditing checks. For the transaction of money this is appropriate: it should not be possible to create or destroy wealth in the ledger (aside from public issuance and withdrawal transactions). However, for data this does not hold. Users should be able to honestly create assets by acquiring and declaring new data sets they wish to sell. Furthermore, their cryptographic scheme is built to transfer ownership of a single value through Pedersen commitments. \newline There is a need for trust assumptions in components of the data market, whether it be centralised or decentralised. However, we believe that the users of the data market should agree on what or whom to trust. A consensus mechanism is a means for a group of agents to agree on a certain proposition.
For users to trust the consensus mechanism, they must have a series of provable guarantees that it was executed correctly. It is not sufficient for the consensus mechanism to function correctly; it should also prove this to the users. We advocate for placing the trust assumptions in consensus mechanisms that can be verified. In other words, the users of a data market should have a means to agree on what they trust, and they should have a means to verify that this agreement was reached in a correct, honest manner. In fact, this verification should be decentralised and public. Shifting the trust to a third-party auditing mechanism to carry out the verification can lead to a recursion problem, where one could continuously question why a third, fourth, fifth and so on auditing party should be trusted, until these can generate a public and verifiable proof of honest behaviour.\newline \subsubsection{Consensus Mechanisms} \label{consensusmechanisms} Consensus mechanisms are crucial in distributed ledgers to ensure agreement on the state of the ledger. For example, in the context of the branch of Computer Science known as \emph{distributed systems}, consensus can be mapped to the fault-tolerant state-machine replication problem \cite{raynal2010communication}. In such systems, the users in the network must come to an agreement as to what is the accepted state of the network. Furthermore, it is unknown which of these users are faulty or malicious. This scenario is defined as a Byzantine environment, and the consensus mechanism used to address it must be Byzantine Fault Tolerant (BFT) \cite{castro1999practical}. In permissionless networks, probabilistic Byzantine consensus is achieved by means of certain cryptographic primitives \cite{consensuslit}. Commonly this is done by solving a computationally expensive puzzle. In permissioned networks, consensus is reached amongst a smaller subset of users in the network.
This is done through BFT consensus mechanisms such as Practical BFT (PBFT) \cite{pbft} and PAXOS \cite{paxos}. Often the permissioned users are elected according to how much stake in the network they hold, following a Proof-of-Stake (PoS) method. This centralisation enables a higher throughput of transactions at the cost of higher messaging overhead, but ensures immediate consensus finality. Such mechanisms also require precise knowledge of the users' membership \cite{iotablog}. Meanwhile, in permissionless consensus protocols the guarantee of consensus is only probabilistic, but they do not require high node synchronicity or precise node membership (i.e., exact knowledge of which users are in the quorum), and are more robust \cite{vukolic2015quest}, \cite{consensuslit}. When considering consensus mechanisms for permissionless distributed ledgers, there exists a wide range of mechanisms that are hybrid combinations of PoS and PoW (e.g., Snow White), PoW and BFT (e.g., PeerCensus), or PoS and BFT (e.g., Tendermint). Each consensus mechanism places greater importance on achieving different properties. For example, Tendermint focuses on deterministic, secure consensus with accountability guarantees and fast throughput \cite{buchman2016tendermint}. Snow White is a provably secure consensus mechanism that uses a reconfigurable PoS committee \cite{bentov2016snow}, and PeerCensus enables strong consistency in Bitcoin transactions, as opposed to eventual consistency \cite{decker2016bitcoin}. There also exists a class of probabilistic consensus mechanisms, such as FPC \cite{popov2021fpc}, Optimal Algorithms for Byzantine Agreement \cite{feldman1988optimal}, Randomised Byzantine Agreement \cite{toueg1984randomized} and Algorand \cite{gilad2017algorand}. We find this class of consensus mechanisms of particular interest in the context of a data market. Namely, the fact that they are probabilistic makes coercion of agents difficult for a malicious actor.
For malicious actors to ensure they are selected by the consensus algorithm, they must know a priori the randomness used in the mechanism, or coerce a supra-majority of agents in the network. Furthermore, we argue that selecting users in a pseudo-random way treats users equally, and is closer to achieving fairness than selecting users with the greatest wealth or greatest computational power. Another consideration of fairness is made in \cite{DBFT}, where the mechanism does not rely on a correct leader to terminate, thereby decentralising the mechanism. In some of the examples described above, such as \cite{gilad2017algorand}, agents with greater wealth in the ledger are more likely to be selected by a Verifiable Random Function to form part of the voting committee. However, we believe that in the context of the work presented here, voting power should be earned and not bought. Indeed, this right should be earned irrespective of wealth or computational power. This opens the question: how, then, should this power be allocated? The market should trust the actors that behave honestly and further the correct functioning of the market. Agents should prove their honesty and only then earn the right to be trusted. A collective goal for the data market can be defined, and agents who contribute to it should be adequately rewarded. This goal can be characterised mathematically, and each agent's marginal contribution can be evaluated. In this manner, rights and rewards can be granted proportionally. Algorand wishes to retain voting power amongst the agents with the most stake, based on the assumption that the more stake an agent has in the system, the more incentive they have to behave honestly. This assumption cannot be made in our data market. Owning more cars (i.e., more stake) does not equate to being more trustworthy, and therefore should not increase an agent's voting power, or their chances of participating in decision making.
In fact, owning more vehicles could be an incentive to misbehave in the data market and upload fake data, whether to mislead competitors or to force a favourable outcome for oneself as a malicious actor. Purposely reporting fake data in the context of mobility has been described in \cite{sanchez2018waze} and \cite{tahmasebian2020crowdsourcing}, where collectives reported fake high congestion levels to divert traffic from their neighbourhoods.\footnote{\url{https://www.washingtonpost.com/local/traffic-weary-homeowners-and-waze-are-at-war-again-guess-whos-winning/2016/06/05/c466df46-299d-11e6-b989-4e5479715b54_story.html}} This attack is known as \textit{data poisoning} and is a well-known attack on crowd-sourced applications, usually mounted through a Sybil attack. Furthermore, Algorand uses majority rule and its consensus mechanism only has two possible outcomes: accept the transaction or not (timeout or temporary consensus) \cite{gilad2017algorand}. In the context of the data market, this would not suffice. The consensus mechanism in the work presented here is used to determine which agents are the most trusted to compute the average or median of the data collected at a certain location. In other words, the consensus mechanism is a means to delegate a computation to someone based on how much they are trusted. Agents may be more or less trusted, with some being preferred over others. These preferences may also be stronger or weaker. Using a majority voting method that only yields two possible options fails to encapsulate this information and is known to exclude minorities. The disadvantages of majority rule systems such as First-Past-the-Post voting are well known and extensively documented \cite{laslier2012and}, \cite{blais1996electoral}, \cite{courtney1999plurality}. A common critique of these voting systems is that they do not achieve proportional representation and retain power within a wealthy minority.
Consequently, it could be argued that they are not an appropriate consensus mechanism for a context where we aim for decentralisation and fairness. \subsection{Structure of the Paper} Firstly, we introduce a series of desirable properties that the market must satisfy. These are outlined in section \ref{sec: design criteria}. Subsequently, a high-level overview of the working components of the data market is presented in section \ref{sec: architecture}, describing how each functional component contributes to achieving the desired properties described in the preceding section. Then, in section \ref{sec: blocks of datamarket}, we formalise the definitions used in each component of the data market, as well as the assumptions made; this section describes in detail how each component of the data market works. Finally, in section \ref{sec: attacks}, we describe the set of attacks considered, and in section \ref{sec: evaluation} the robustness of the components of the data market is evaluated. \section{Design Criteria for the Data Market} \label{sec: design criteria} Having discussed issues that pertain to and arise from poor data ownership models, we present a series of desirable criteria that the data market should achieve. More specifically, the work proposed here begins to address the following research questions that are associated with data market designs: \begin{itemize} \item[a.] How to protect the market against fake data or faulty sensors? \item[b.] Given an oversupply of data, how to ensure that everybody receives a fair amount of write access to the market? \item[c.] How to enable verifiable exchange of data ownership? \item[d.] How to select the data points, from all those available, that add the most value to the marketplace? \item[e.] How to protect the marketplace against adversarial attacks?
\end{itemize} Following directly from these open questions, the desirable criteria are defined as: \begin{itemize} \item \emph{Decentralised Decision Making:} The elements of the marketplace pertaining to trust, ownership and veracity are decentralised and do not rely on placing trust in third parties.\newline \item \emph{Verifiable centralisation:} Any infrastructure on which the data market relies that is centralised can be publicly verified. The reader should note that this ensures trust assumptions are placed only on components of the data market that are publicly verifiable.\newline \item \emph{Generalised Fairness:} Access to the data market is governed by the notion of the potential value that a data stream brings to the market (as defined by a given application). This determines which agents get priority in monetising their data. Agents with equally valuable data must be treated the same, and agents with data of no value should receive no reward. Further notions of fairness are considered and formalised by \cite{shapley53} under the definition of Shapley Fairness, described in definition \ref{Shapley Value}. These are the definitions of fairness that we use for this data market proposal.\newline \item \emph{Resistant to duplication of data:} The data market must not allow malicious attackers to earn rewards by duplicating their own data. Precisely, a distinction must be made between preventing the monetisation of duplicated data and preventing data duplication itself. \newline \item \emph{Resistant to fake data and faulty sensors:} The data market must be resilient to \emph{data poisoning} attacks, wherein adversaries collude to provide fake data to influence the network. Congruently, the data market must be resilient to poor quality data from honest actors with faulty sensors. Formally, the data for sale on the market must not deviate by more than a desired percentage from the ground truth.
For the purpose of this work, the ground truth is defined as data measured by centralised infrastructure. Given that this measurement is publicly verifiable (any agent in that location can verify that the measurement is true), this is considered an acceptable centralisation assumption. \newline \item \emph{Resistant to spam attacks:} The data market should not be susceptible to spam attacks; that is, malicious actors should not have the ability to flood and congest the network with fake or poor quality data. \end{itemize} \section{Preliminaries} The architecture for our proposed data market is illustrated in figure \ref{fig:Data Market Architecture}, and makes reference to several technology components that are briefly described in the subsequent sections.\newline \subsection{Distributed Ledger Technology} A distributed ledger technology (DLT) will be used to record access (or any other given right) to a dataset in the data market. A DLT is a decentralised database of transactions, where these transactions are timestamped and accessible to the members of the DLT; it allows agents to track ownership in a decentralised manner. Compared to a centralised storage system, this provides a geographically distributed, consensus-based, and verifiable system which is immutable after data has been written and confirmed. It is also more resilient to failure points than a centralised infrastructure. There are many types of DLT structures, but they all aim to provide a fast, reliable, and safe way of transferring value and data. Namely, DLTs strive to satisfy the following properties: have only one version of the ledger state, be scalable, make double spending impossible, and have fast transaction times. One example of a DLT is the IOTA Tangle, shown in figure \ref{fig:tangle}. In this DLT, all participants contribute to approving transactions, and the transactions are low to zero fee and near-instant.
Further, decentralisation is promoted through the alignment of incentives of all actors \cite{Schueffel2018}. \begin{figure} \caption{IOTA Tangle. Credit: IOTA Foundation} \label{fig:tangle} \end{figure} \subsection{Access control mechanism} Because DLTs are decentralised, they need a method to regulate who can write information to the ledger and who cannot. An access control mechanism is necessary to protect the distributed ledger from spam attacks. One way is by using Proof-of-Work (PoW), as is done in Blockchains, where computationally intense puzzles need to be solved to earn the right to write to the ledger. In this case, users with more computational power earn more access to the ledger. An alternative is Proof-of-Stake, where nodes can stake tokens to gain access rights proportional to the amount of tokens staked \cite{ghaffari_bertin_hatin_crespi_2020}. \subsection{Consensus Mechanisms} A consensus mechanism is a means for a collective to come to an agreement on a given statement. In section \ref{consensusmechanisms}, some examples of consensus mechanisms appropriate for Byzantine environments are discussed. Some of these utilise a voting mechanism to enable said consensus, and we now discuss an alternative voting mechanism that satisfies a different set of properties. It is important to note that there exist numerous methods of aggregating preferences, which are well studied in the field of social choice \cite{sen2008social}, and voting mechanisms provide different means to enable this aggregation \cite{votingTheory}. The taxonomy of voting systems is diverse: they can be probabilistic or deterministic; proportional or plurality rule; and ordinal as opposed to cardinal. Depending on the set of practical constraints or preferred properties of the implementation context, we encourage selecting the voting mechanism that best satisfies the desired criteria for a given application.
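As a minimal illustration of the hash-puzzle style of PoW access control described above (a sketch under our own assumptions: the function names and the hex-prefix difficulty encoding are illustrative, and do not correspond to any particular ledger implementation):

```python
import hashlib
import itertools

def proof_of_work(payload: bytes, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(payload || nonce) begins
    with `difficulty` zero hex digits. Expected cost grows as
    16**difficulty hashes, so a gatekeeper can price write access."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(payload: bytes, nonce: int, difficulty: int) -> bool:
    """Verification costs a single hash, regardless of difficulty."""
    digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# A node must present (payload, nonce) before writing to the ledger.
nonce = proof_of_work(b"coalition-7:quadrant-12", difficulty=3)
assert verify_pow(b"coalition-7:quadrant-12", nonce, difficulty=3)
```

The asymmetry between the expensive search and the one-hash verification is what makes such puzzles usable as a spam deterrent; later sections replace the wasted search with work that is useful to the market.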
Subsequently, we discuss Maximum Entropy Voting, and why it has desirable properties in the context of this data market. \subsubsection{Maximum Entropy Voting} \label{preliminary:max_entropy_voting} Within the classes of voting schemes, Maximum Entropy Voting (MEV) belongs to the family of probabilistic, proportional and ordinal systems. Arrow famously proved in \cite{arrow2012social} an impossibility theorem that applies to ordinal voting systems. It states that no ordinal voting system can satisfy all three of the following properties: \begin{defn}[Non-Dictatorial] There exists no single voter with the power to determine the outcome of the voting scheme. \end{defn} \begin{defn}[Independence of Irrelevant Alternatives] The output of the voting system for candidate A and candidate B should depend only on how the voters ordered candidate A and candidate B, and not on how they ordered other candidates. \end{defn} \begin{defn}[Pareto property] \label{Pareto} If all voters prefer candidate A to candidate B, then the voting system should output candidate A over candidate B. \end{defn} A relaxed version of the Pareto property, \emph{representative probability}, states that the probability of candidate A being placed above candidate B in the outcome should equal the fraction of voters preferring the one to the other. Whilst this impossibility theorem only applies to ordinal voting systems and not cardinal ones, it has been shown by Gibbard's theorem that every deterministic voting system (including cardinal ones) is either dictatorial or susceptible to tactical voting \cite{gibbard1973manipulation}. Gibbard later shows in \cite{gibbard1977manipulation} that the Random Dictator voting scheme satisfies a series of desirable properties, namely: voters are treated equally, it has strong strategy-proofness, and it is Pareto efficient. With this in mind, the reader can now appreciate the novelty of the work presented in \cite{max_entropy_mackay}.
Here, MEV is presented as a probabilistic system that first determines the set of voting outcomes that proportionally represent the electorate's preferences, and then selects the outcome within this set that minimises surprise. Let us elaborate: if one were to pick a probabilistic voting system that satisfies Arrow's properties to the greatest degree, the adequate system to choose would be Random Dictator. However, whilst computationally inexpensive to run, it suffers from a series of drawbacks. The one of greatest concern in the context of this work is the following: imagine a ballot is sampled that happens to contain a vote for an extreme candidate (or, in this case, for a malicious actor). The choices of a single individual who votes for extremes now dictate the entire electorate's leaders. In this scenario, a malicious agent would likely only vote for equally malicious agents, although the number of malicious agents is still assumed to be a minority. Could one reduce the amount of information taken from that sampled ballot? MEV proposes a way to sample ballots that, while still representing the electorate's views, minimises the amount of information taken from their preferences. In essence, this means selecting a ballot that reflects the least surprising outcome for the electorate, whilst ensuring that it is still within the set of most representative choices. Furthermore, MEV still satisfies relaxed versions of the Independence of Irrelevant Alternatives and Pareto properties, whilst not being dictatorial. It also enjoys the benefits of proportional voting schemes, as well as being less susceptible to tactical voting \cite{max_entropy_mackay}.
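To make the contrast concrete, the following toy sketch (our own illustration, not the algorithm of \cite{max_entropy_mackay}) shows Random Dictator's single-ballot sampling alongside the pairwise preference fractions that MEV's representative-probability property targets; MEV then chooses, among outcomes that match these fractions, the one of maximum entropy:

```python
import random
from itertools import combinations

def random_dictator(ballots, rng=random.Random(0)):
    """Sample one ballot uniformly at random; its top choice wins.
    A single (possibly extreme) voter can thus dictate the outcome."""
    return rng.choice(ballots)[0]

def pairwise_preferences(ballots):
    """Fraction of voters ranking a above b, for each ordered pair.
    Under representative probability, the chance that the elected
    outcome ranks a above b should match these fractions."""
    n = len(ballots)
    candidates = sorted(ballots[0])
    prefs = {}
    for a, b in combinations(candidates, 2):
        above = sum(1 for blt in ballots if blt.index(a) < blt.index(b))
        prefs[(a, b)] = above / n
        prefs[(b, a)] = 1 - above / n
    return prefs

ballots = [("A", "B", "C")] * 6 + [("B", "C", "A")] * 3 + [("C", "B", "A")] * 1
prefs = pairwise_preferences(ballots)
# 60% of voters rank A above B, so a representative outcome should
# place A above B with probability 0.6.
```

Random Dictator reproduces these fractions on average, but any single draw exposes the full ranking of one voter; MEV's entropy-maximising selection extracts as little information as possible from the sampled preferences.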
As a result, it can be argued that it is difficult to predict the exact outcome of a vote, and therefore MEV is secure against timed attacks\footnote{An attack wherein a malicious actor wishes to influence the outcome of an election with high certainty of success at a given instance in time.}, because it is costly to have high confidence of success. We therefore believe MEV offers a suite of benefits and properties that are desirable in the context of the data market presented here. \section{Architecture of the Data Market}\label{sec: architecture} \begin{figure*} \caption{Data Market Architecture. Credit for the images is given in \protect \footnotemark} \label{fig:Data Market Architecture} \end{figure*} As can be observed in figure \ref{fig:Data Market Architecture}, some of the functional components of the marketplace are decentralised, while some of the enabling infrastructure is provided by an external, possibly centralised, provider. In a given city, it is assumed that a number of agents in vehicles are collecting data about the state of the location they are in. They wish to monetise this information, but first it must be verified that they are real agents. They present a valid proof of their identity and location, as well as demonstrating that their information is timely and relevant. The agents that successfully generate the aforementioned proofs receive a validity token that allows them to form spatial coalitions with other agents in their proximity. They then vote on a group of agents in their coalition that will be entrusted to calculate the agreed-upon data value of their corresponding location. The chosen agents do this by aggregating the coalition's data following a specified algorithm. This procedure happens simultaneously in numerous locations of the city. At a given point in time, all the datasets that have been computed by a committee in a spatial coalition then enter the access control mechanism.
One can consider the data as a queue and the access control mechanism as the server. Here, datasets are ranked in order of priority by determining which data provides the greatest increase in value to the data market. The coalitions with the most valuable data perform the least amount of work: coalitions wishing to sell their data must complete a useful proof of work that is inversely proportional to the value they add to the market. This PoW entails calculating the added value of new data in the queue. Once this work is completed, the data can be sold. Buyers are allocated to sellers through a given bidding mechanism, and the reward of each sale is distributed amongst the sellers using the reward distribution function. Successful transactions are recorded on the distributed ledger, and the data corresponding to successful transactions is removed from the data market. \footnotetext{In order of appearance: Icons made by Freepik, Pixel perfect, juicy\_fish, srip, Talha Dogar and Triangle Squad from www.flaticon.com} In what follows, we describe the high-level functioning of each of the components shown in figure \ref{fig:Data Market Architecture}, and how each contributes to achieving the desired properties described in section \ref{sec: design criteria}. \subsection{Verification} Agents are verified by a centralised authority that ensures they provide a valid position, identity and dataset. This component ensures that spam attacks are expensive, as well as enabling verifiable centralisation: all agents in the market can verify the validity of a proof of position and identity because this information is public.\newline \subsection{Consensus} \subsubsection{Voting Scheme} In a decentralised environment, agents must agree on what data is worthy of being trusted and sold on the data market. Agents express their preferences for whom they trust to compute the most accepted value of a data point in a given location. This is carried out through a voting scheme.
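The trust-delegation step just described can be sketched as follows (a deliberately simplified, deterministic tally for illustration; the market itself favours a probabilistic scheme such as MEV, and the function names and median aggregation rule here are our own assumptions):

```python
import statistics
from collections import Counter

def elect_committee(ballots, k):
    """Elect the k most-trusted agents by approval tally.
    (Deterministic tally for illustration only; a probabilistic
    scheme such as MEV resists coercion far better.)"""
    tally = Counter(agent for ballot in ballots for agent in ballot)
    return {agent for agent, _ in tally.most_common(k)}

def data_consensus(readings):
    """Committee aggregation of the coalition's readings. The median
    tolerates up to half the inputs being fake or faulty before the
    agreed value can be dragged arbitrarily far from the truth."""
    return statistics.median(readings)

ballots = [("alice", "bob"), ("alice", "carol"), ("bob", "alice")]
committee = elect_committee(ballots, k=2)            # {'alice', 'bob'}
agreed = data_consensus([4.1, 3.9, 4.0, 99.0, 4.2])  # 4.1, outlier ignored
```

The median is one natural choice of aggregation algorithm because a minority of poisoned readings (such as the 99.0 above) cannot move the agreed value.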
\subsubsection{Data Consensus} Once a group of trusted agents is elected, they must then come to a consensus as to what the accepted data value of a location is. This is computed by the group following an algorithm that aggregates the coalition's data. These components enable the property of \emph{decentralised decision making}, allowing coalitions to govern themselves and dictate whom to trust for the decision making process. Furthermore, they make uploading fake data to the market costly, as malicious agents must coerce sufficiently many agents in the voting system to ensure enough coerced actors will be elected to compute the value of a dataset that they wish to upload. \subsection{Access Control} \subsubsection{Contribution Ranking} Once datapoints are agreed upon for the given locations, it is necessary to determine which ones should receive priority when being sold. Priority is given to the data that most increases the combined worth of the data market. This can be measured using the Shapley value, defined in \cite{shapley53}, which in this case measures the marginal contribution of a dataset towards increasing the value of the market with respect to a given objective function. A precise formalisation is presented in definition \ref{Shapley Value}. This component provides the property of generalised fairness: agents with more valuable data do less work to sell their data. \subsubsection{Useful Adaptive Proof of Work} Coalitions must perform a proof of work that is inversely proportional to how valuable to the market their data is deemed. The work performed is adaptive and, furthermore, useful to the functioning of the market, because the work performed is, in fact, calculating the worth of the new incoming data to the market. This feature ensures that spam attacks are costly and that the market is resistant to the duplication of profit by simply duplicating data.
This is because for every dataset a coalition wishes to sell, they have to complete a PoW. \subsection{Data Marketplace} This is where the collected and agreed-upon data of specific locations is posted for sale. The datasets themselves are not public; rather, what is published is a metadata label of each dataset, who owns it (the spatial coalition that crowd-sourced it) and the location it is associated with. Buyers can access and browse the market and place bids for specific datasets in exchange for monetary compensation. Buyers may wish to purchase access to the entire dataset, to a specific insight, or to other defined rights, such as the right to redistribute or perform further analytics on said dataset. Each right has a corresponding price. \subsection{Bidding Mechanism} This mechanism matches buyers to the sellers of data, and determines the price-per-right. At this stage, a spatial coalition formed of multiple agents is considered to be one seller. Successful sales are recorded in an immutable ownership record that is public, such that all participants of the market can see which agents have rightful access to a dataset. \subsection{Reward Distribution} Once a bid is successful, the reward of the sale is distributed amongst the participants of the spatial coalition that generated the sold dataset. This ensures that all agents participating in the crowd-sourcing of the dataset receive adequate compensation for it. \subsection{Distributed ledger} Successful transactions are recorded on a distributed ledger to provide a decentralised, immutable record of ownership. This ledger represents which agents have access to whose data, and what access rights they are allowed. \section{Building Blocks of the Data Market}\label{sec: blocks of datamarket} \subsection{Context} We present a case study with cars driving in a given city.
We focus on the Air Quality Index (AQI) as our metric of relevance, which is calculated by aggregating averages of different pollutant concentrations\footnote{\url{https://app.cpcbccr.com/ccr_docs/How_AQI_Calculated.pdf}}. To illustrate the function of the proposed data market, we divide the city into a grid with constant-sized quadrants. As agents drive across the city, they measure pollution concentration values of varying contaminants at different quadrants of the city. Only agents with a valid license plate are granted access to collect and sell data on the marketplace. \subsection{Assumptions} \label{assumptions} \begin{enumerate} \item For each potential data point that can be made available in the marketplace, there is an over-supply of measurements. \label{Assumption1} \item Competing sellers are interested in aggregating (crowd-sourcing) data points from the market to fulfil a specific purpose. \label{Assumption2} \item Competing buyers only purchase data from the regulated market, and each data point in the market has a unique identifier so that replicated data made available on secondary markets can be easily detected by data purchasers. \label{Assumption3} \item There is an existing mechanism that can verify the geographical location of an agent with a certain degree of confidence, and thus the provenance of the data that agent has collected. Several works corroborate that this is a reasonable assumption to make \cite{10.1145/1869790.1869797, 6072209, 8731780, 9149366, boeira2019decentralized}. \label{Assumption4} \item Following from \ref{Assumption4}, a Proof of Position algorithm is defined in \ref{PoP defn}. Furthermore, it is assumed that agents cannot be in more than one location at the same time. When an agent declares a measurement taken from a given location, we can verify this datapoint, the agent's ID and their declared position using \ref{PoP defn}.
\label{Assumption5} \item Following from assumptions \ref{Assumption1} and \ref{Assumption2}, the cases in which a buyer wishes to purchase data from a geographical location where no data is available are not accounted for. \label{Assumption6} \end{enumerate} \subsection{Definitions} \begin{defn}[Datapoint]\label{datapoint} A datapoint is defined as $x_i \in X$ where $x_i$ denotes the data point of agent $i$ and $X$ is the space of all possible measurements. \end{defn} \begin{defn}[Location quadrant]\label{location quadrant} The set of all possible car locations is defined as $\mathcal{L}$. A location quadrant $q$ is an element of $\mathcal{L}$, i.e., $q \in \mathcal{L}$. \end{defn} \begin{defn}[Buyer]\label{buyer} A buyer is defined as $m$, where $m \in M$ and $M$ is the set of agents looking to purchase ownership (or any other given rights) of the datasets that are available for sale on the marketplace. \end{defn} \begin{defn}[Agent]\label{Agent} An agent is defined as $a_{i,s} \in A$ where $A$ is the set of all agents competing to complete the marketplace algorithm to become sellers. The index $i \in \{1, \dots, N\}$, where $N$ is the total number of agents on the algorithmic marketplace at a given time interval $t \in T$. The index $s$ denotes the stage in the access control mechanism that agent $a_{i,s}$ is in, where $s \in \{1, 2\}$. In stage 1, agents are in the contribution ranking stage, where the value of their data is ranked according to their Shapley value. In stage 2, they must complete a useful, adaptive PoW before they can pass the access control mechanism and enter the data marketplace. For example, agent $a_{5,2}$ is agent number 5, currently in stage 2 of the access control mechanism. For brevity, in sections where an agent is not in the access control mechanism, we omit the use of the second index. \end{defn} \begin{defn}[Spatial Coalition]\label{Spatial Coalition} A spatial coalition is defined as a group of agents in the same location quadrant $q$.
The coalition is denoted as $C_q$. \end{defn} \begin{defn}[Crowdsourced Dataset] Agents in a spatial coalition $C_q$ aggregate their individual data to provide an agreed upon dataset $D_q$, of their location quadrant $q$. \end{defn} \begin{defn}[Value Function]\label{Value Function} The value function maps the aggregate of datapoints provided by a spatial coalition to utility for a buyer. For the purpose of this case study, the data provided will be valued with respect to a linear regression model to predict the Air Quality Index of a city. The function is denoted as $v(\mathnormal{S}) = y$ where $y$ is the utility allocated to a dataset and $\mathnormal{S}$ is a coalition of agents with corresponding datapoints. \end{defn} \begin{defn}[Shapley Value]\label{Shapley Value} The Shapley Value is defined in \cite{shapley53} as a way to distribute reward amongst the players of an $n$-person coalitional game. Each player $i$ in the game receives a value $\psi_i$ that corresponds to their reward. The Shapley Value satisfies the notions of Shapley fairness, which are: \begin{enumerate} \item Balance: \[\sum_{a_i \in A} \psi_{m}(a_i) = 1 \] \item Efficiency: the sum of the Shapley values of all the agents is equal to the value of the grand coalition of agents $[A]$: \[\sum_{a_i \in A} \psi_{a_i}(v) = v(A) \] \item Symmetry: if agents $a_i$ and $a_j$ are equivalent in the coalition of agents $\mathnormal{S}$, such that both agents provide data of the same value, i.e., $v(\mathnormal{S} \cup \{a_i\} ) = v(\mathnormal{S} \cup \{a_j\})$ for every subset $\mathnormal{S}$ of $A$ which contains neither $a_i$ nor $a_j$, then $\psi_{a_i}(v) = \psi_{a_j}(v)$ \item Additivity: if we define a coalition of agents to be $k = \{a_i, a_j\}$ then $\psi_{k} = \psi_{a_i} + \psi_{a_j}$ \item Null agent: an agent $a_i$ is null if $v(\mathnormal{S} \cup \{ a_i \}) = v(\mathnormal{S})$ for every subset $\mathnormal{S}$. If this is the case then $\psi_{a_i} = 0$.
\end{enumerate} Therefore, the formal definition of the Shapley value of an agent $a_i$ in a set of $A$ players is \[\psi_{a_i}(v) = \sum_{\mathnormal{S} \subseteq A \setminus \{a_i\}} \frac{|\mathnormal{S}|! \, (|A| - |\mathnormal{S}| - 1)!}{|A|!} \left(v(\mathnormal{S} \cup \{a_i\}) - v(\mathnormal{S})\right)\] The Shapley Value is the unique allocation $\psi$ that satisfies all the properties of Shapley fairness \cite{shapley53}. \end{defn} \begin{defn}[Smart Contract]\label{smart contract} A smart contract is a program that will automatically execute a protocol once certain conditions are met. It does not require intermediaries and allows for the automation of certain tasks \cite{7467408}\cite{szabo1994}. In our context, a smart contract will be executed by agent $a_{i, 2}$ to compute the Shapley value of agent $a_{j, 1}$’s dataset. The outputs will be the Shapley value of agent $a_{j, 1}$’s dataset and a new smart contract for agent $a_{i, 2}$. Calculating this newly generated smart contract serves as the proof of agent $a_{i, 2}$’s useful work. Every agent’s smart contract will also contain a record of the buyer IDs and the permissions that they have purchased from the agent. These could include permission to read the dataset, to compute analytics or to re-sell the dataset. \end{defn} \begin{defn}[Bidding Mechanism]\label{bidding mechanism} Following from assumption \ref{Assumption6}, there is a set of buyers $M_q$ for each $q \in \mathcal{L}$ wishing to purchase a dataset $D_q$ from that quadrant. A bidding mechanism $\mathnormal{BM}$ is defined as a function that returns a buyer $m \in M_q$ that will purchase $D_q$. Consequently, for all $q \in \mathcal{L}$: $m \gets \mathnormal{BM}(M_q, D_q)$. \end{defn} \begin{defn}[Reward Distribution Function]\label{reward distribution} The reward associated with the datapoint of a specific quadrant is defined as $v(C_q)$.
In other words, the value that the spatial coalition $C_q$ provides with their agreed upon datapoint $D_q$, of the location quadrant $q$. Each agent in $C_q$ receives a coefficient $\alpha = \frac{1}{|D_q - d_i|}$, where $d_i$ is the agent's individual datapoint. Consequently, the value $v(C_q)$ is split amongst all the agents in $C_q$ as follows: each agent receives $\left\| \frac{v(C_q)}{|C_q|}\times \alpha \right\|$. \end{defn} \begin{defn}[Commitment] \label{Commitment defn} An agent commits to their datapoint by generating a commitment that is binding, such that the datapoint cannot be changed once the commitment is provided. A commitment to a datapoint $d_i$, location quadrant $q$, timestamp $t$ and ID $i$ of an agent $a_i$ is defined as $\mathnormal{c} \gets \mathrm{Commitment}(a_i, d_i, q, t) $ \end{defn} \begin{defn}[Proof of ID] \label{PoID defn} Let the Proof of ID be an algorithm, $\mathrm{PoID}$, that verifies the valid identity of an agent $a_i$, with ID $i$. In the context presented, this identification will be the license plate. The algorithm returns a boolean $\alpha$ that is $True$ if the agent has presented a valid license plate and $False$ otherwise. Then $\mathrm{PoID}$ is defined as the following algorithm:\\ $\alpha$ $\gets$ $\mathrm{PoID}(i, c)$. This algorithm is executed by a central authority that can verify the validity of an agent's identity. \end{defn} \begin{defn}[Proof of Position] \label{PoP defn} Let Proof of Position be an algorithm, $\mathrm{PoP}$, that is called by an agent $a_i$, with ID $i$. The algorithm takes as inputs the agent's commitment $c$ and their location quadrant $q$. We define $\mathrm{PoP}$ as the following algorithm:\\ $\beta$ $\gets$ $\mathrm{PoP}(q, c)$\\ where the output is a boolean $\beta$ that is $True$ if the position $q$ matches the agent's true location and $False$ otherwise.
This algorithm is executed by a central authority that can verify the validity of an agent's position. \end{defn} \begin{defn}[TimeCheck]\label{timecheck} The function $\mathrm{TimeCheck}$ takes three arguments: the timestamp $t$ of a datapoint, the current time at which the function is executed, $timeNow$, and an acceptable range of elapsed time, $r$. The output of the function is $\gamma$: if $timeNow - t < r$, $\gamma$ takes the value $True$, and $False$ otherwise.\\ $\gamma$ $\gets$ $\mathrm{TimeCheck}(t, timeNow, r)$ \end{defn} \begin{defn}[Verify] \label{Verify defn} Let $\mathrm{Verify}$ be an algorithm that checks the outputs of $\mathrm{PoID}$, $\mathrm{PoP}$ and $\mathrm{TimeCheck}$. It returns a token $Token$ that takes the value $True$ iff $\alpha$, $\beta$ and $\gamma$ are all $True$, and $False$ otherwise.\\ $Token$ $\gets$ $\mathrm{Verify}(\alpha, \beta, \gamma)$\\ \end{defn} \begin{defn}[Reputation] \label{def:Reputation} An agent $a_i$ assigns a score of trustworthiness to an agent $a_j$. This score is denoted as $r_{i\rightarrow j}$. This reputation is given by one agent to another in a rational, efficient and proper manner, and is an assessment of honesty. \end{defn} \begin{defn}[Election Scheme] \label{es} We use a generalised definition for voting schemes, following the work in \cite{aida2021} and \cite{2015-ballot-secrecy}. An \textnormal{\fontfamily{lmss}\selectfont Election Scheme} is a tuple of probabilistic polynomial-time algorithms $(\mathrm{Setup, Vote, Partial-Tally, Recover})$ such that:\newline \noindent $\mathrm{Setup}$, denoted $(pk, sk) \gets \mathrm{Setup}(k)$, is run by the administrator. The algorithm takes security parameter $k$ as an input, and returns public key $pk$ and private key $sk$.\; \BlankLine \noindent $\mathrm{Vote}$, denoted $b \gets \mathrm{Vote}(pk, v, k)$, is run by the voters.
The algorithm takes public key $pk$, the voter's vote $v$ and security parameter $k$ as inputs and returns a ballot $b$, or an error ($\perp$).\; \BlankLine \noindent $\mathrm{Partial-Tally}$, denoted $e \gets \mathrm{Partial-Tally}(sk, \mathfrak{bb}, k)$, is run by the administrator. The algorithm takes secret key $sk$, bulletin board containing the list of votes $\mathfrak{bb}$, and security parameter $k$ as inputs and returns evidence $e$ of a computed partial tally.\; \BlankLine \noindent $\mathrm{Recover}$, denoted $\mathfrak{v} \gets \mathrm{Recover}(\mathfrak{bb}, e, pk)$, is run by the administrator. The algorithm takes bulletin board $\mathfrak{bb}$, evidence $e$ and public key $pk$ as inputs and returns the election outcome $\mathfrak{v}$. \end{defn} \begin{defn}[Ballot Secrecy] Ballot secrecy can be understood as the property of voters having a secret vote; namely, no third party can deduce how a voter voted. We utilise the definition for \textnormal{\fontfamily{lmss}\selectfont Ballot Secrecy} presented in \cite{2015-ballot-secrecy}. This definition accounts for an adversary that can control ballot collection. To win, the adversary must meaningfully deduce which set of votes the bulletin board they constructed corresponds to. This definition is formalised as a game where, if the adversary wins with a significant probability, the property of \textnormal{\fontfamily{lmss}\selectfont Ballot Secrecy} does not hold. An \textnormal{\fontfamily{lmss}\selectfont Election Scheme} is said to satisfy \textnormal{\fontfamily{lmss}\selectfont Ballot Secrecy} if, for a probabilistic polynomial-time adversary, the probability of success is negligible.
\end{defn} \section{The Data Market} \subsection{The Verification Algorithm} \LinesNumbered \begin{algorithm}[h] \SetKw{Return}{return} $\mathnormal{c} \gets \mathrm{Commitment}(a_i, d_i, q, t) $\; $\alpha$ $\gets$ $\mathrm{PoID}(i, c)$\; $\beta$ $\gets$ $\mathrm{PoP}(q, c)$\; $\gamma$ $\gets$ $\mathrm{TimeCheck}(t, timeNow, r)$\; $Token$ $\gets$ $\mathrm{Verify}(\alpha, \beta, \gamma)$\; \BlankLine \Return $Token$ $\gets$ $\{True, False\}$; \BlankLine \caption{Verification: Verifying Algorithm ($a_{i,0}$, $d_i$, $q$, $t$, $r$)} \end{algorithm} The validity of the data submission must be verified before the data reaches the data marketplace, to avoid retroactive correction of poor quality data. This is done through the Verifying Algorithm. Firstly, an agent provides an immutable commitment of their datapoint, location quadrant, timestamp and unique identifier (Line 1). Next, the agent submits their unique identifier to a centralised authority that verifies that this is a valid and real identity (Line 2). In practice, for this context, this identifier will be the agent's vehicle license plate. Subsequently, the agent generates a valid proof of position (Line 3). Following from assumption \ref{Assumption5}, an agent can only provide one valid outcome from algorithm \ref{PoP defn} at a given time instance $t$. Then, the datapoint is checked through $\mathrm{TimeCheck}$ to ensure it is not obsolete (Line 4). Finally, the outputs of all previous functions are verified to ensure the agent has produced a valid proof (Line 5). If and only if all of these are $True$, the agent is issued a unique valid token that allows them to participate in the consensus mechanism (Line 6). \subsection{Voting Scheme: Reputation-based Maximum Entropy Voting}\label{sec: voting scheme} In what follows we present an adaptation of the Maximum Entropy Voting scheme that takes into consideration the reputation of agents in the system.
Both components are introduced and will work as a single functional building block in the data market design. \subsubsection{Reputation} In the general sense, reputation can be seen as a performance or trustworthiness metric that is assigned to an entity or group. For the purpose of this work, reputation should be earned through proof of honesty and good behaviour. In this case, agents that can demonstrate they have produced an honest computation should receive adequate recompense. In our context, agents can act as administrators by running an election. They must prove that an election outcome was correctly computed to earn reputation. In the case of Maximum Entropy Voting, the administrator running the election must prove that the voting outcome was correctly computed and that it does indeed satisfy the optimisation formulation defined in equations \ref{pro:MEV0}. To provide guarantees of correctness to the voters, we propose using an end-to-end (E2E) verifiable voting scheme. E2E voting schemes require that all voters can verify the following three properties: their vote was \emph{cast as intended}, \emph{recorded as cast} and \emph{tallied as recorded} \label{E2Eproperties} \cite{ali2016overview}. An example of an E2E voting scheme is Helios \cite{adida2008helios}. This scheme uses homomorphic encryption, enabling the administrator to demonstrate the correctness of the vote aggregation operation. The operations required in the aggregation of votes for MEV can be done under homomorphic encryption, and an E2E voting scheme such as Helios could be used to carry out this step. This aggregation is then used to solve the optimisation problem and yield a final vote outcome. Once the optimisation is solved, the administrator can release the aggregation of votes and prove that the aggregation operation is correct and that the solution of the optimisation problem satisfies the KKT conditions.
Upon presenting the verifiable proofs mentioned above, agents behaving as administrators should receive reputation from other agents in the network. \newline \emph{Remark:} We note that the Helios voting scheme has been proven not to satisfy \textnormal{\fontfamily{lmss}\selectfont Ballot Secrecy} in \cite{2015-ballot-secrecy} and \cite{aida2021}. Proposing and testing an E2E verifiable voting scheme that satisfies definitions of \textnormal{\fontfamily{lmss}\selectfont Ballot Secrecy}, receipt-freeness and coercion resistance is beyond the scope of this work, although of interest for future work. \subsubsection{Reputation-based Maximum Entropy Voting} \begin{defn}[Vote] \label{def:S(i)} The vote of agent $a_i\in A$ is defined as a pairwise preference matrix $S(i)\in\mathbb{R}^{N\times N}$. Each entry is indexed by two agents in $A$ and its value is derived from datapoint $x_i$ (definition \ref{datapoint}) and reputation $r_{i\rightarrow j}$ (definition \ref{def:Reputation}). An example of a pairwise preference matrix for three agents is shown in equation \eqref{equ:preference-matrix-agent}. \end{defn} \begin{defn}[Administrator] An agent that carries out the vote aggregation and the computation of the election outcome is defined as an administrator, $\mathbb{A}$. \end{defn} \begin{defn}[Aggregation of Votes] \label{def:S(A)} The aggregation of all agents’ votes $S(A)$, is defined as the average of $S(i),i\in A$, as follows: \begin{equation} S(A):=\frac{1}{N}\sum_{a_i\in A} S(i). \label{equ:S(A)} \end{equation} \end{defn} \begin{defn}[Agent Ordering] \label{def:ordering} An agent ordering, denoted as $t$, is defined as a permutation of agents, as in \cite{max_entropy_mackay}, i.e., arranging all agents in order. Further, concerning computational complexity, we suggest $t$ be a combination of agents, i.e., selecting a subset of agents as the preferred group, such that the order of agents does not matter.
\end{defn} \begin{defn}[Ordering Set] \label{def:ordering-set} The ordering set $\mathcal{T}$ is the set of all possible agent orderings, such that $t$ is an element of $\mathcal{T}$. See Figure~\ref{tab:ordering-set} for an example of an ordering set of combinations with 3 agents. \end{defn} \begin{defn}[Probability Measure of Ordering Set] \label{def:pi} The (discrete) probability measure, $\pi:\mathcal{T}\rightarrow\mathbb{R}_{\geq 0}$, gives the probability of each ordering $t\in\mathcal{T}$ being selected as the outcome ordering $t^*$. The measure of maximal entropy that adheres to Representative Probability, described in \ref{Pareto}, i.e., the optimal solution of the optimisation problem defined in equations \ref{pro:MEV0}, is denoted as $\pi^*$. \end{defn} Given a set of agents of cardinality $\lvert A \rvert = N$, each agent $a_{i}$ has a data point $x_i\in X$ and a reputation $r_{i\rightarrow j}$ for every agent $a_{j}\in A$. The data point $x_i$ in this context is defined as the measurements of pollutants which an agent wants to submit and sell. The reputation $r_{i\rightarrow j} \in\mathbb{R}^+$ is a non-negative value that represents the individualised reputation of agent $a_{j}$ from the perspective of agent $a_{i}$. To combine maximum entropy voting and reputation, a key step is to move from reputation $r_{i\rightarrow j}$ to a pairwise preference matrix $S(i)\in\mathbb{R}^{N\times N}$.
The entry of a pairwise preference matrix is indexed by every two agents of $A$, and its value is defined as follows: \begin{equation} S(i)_{j,k}= \begin{cases} 1 & \text{if } a_i\text { prefers } a_{j} \text{ and } j \neq k\\ 0.5 & \text {if } a_i\text { prefers both equally and } j \neq k\\ 0 & \text {if } a_i\text { prefers } a_{k} \text{ or } j=k \end{cases} ,\label{equ:preference-matrix-agent} \end{equation} for $a_j,a_k\in A$, where $a_j$ is preferred to $a_k$ if and only if $\frac{1+\lvert x_i \rvert \cdot r_{i\rightarrow j}} {1+\lvert x_i - x_j \rvert} > \frac{1+\lvert x_i \rvert \cdot r_{i\rightarrow k}} {1+\lvert x_i - x_k \rvert} $ and both agents are equally preferred if the two values are equal, such that the reputation is scaled by the absolute difference from agent $a_i$'s datapoint. Likewise, we can find a pairwise preference matrix $S(i)$ for each agent $a_i$. The average of the pairwise preference matrices over all agents is denoted as the preference matrix $S(A)$, as in \ref{equ:S(A)}. $S(A)$ represents the pairwise preference of all agents in $A$: its entry $S(A)_{j,k}$ displays the proportion of agents that prefer agent $a_j$ over agent $a_k$. The original MEV \cite{max_entropy_mackay} runs an optimisation over all candidate orderings, which dominates the computational complexity of the problem because the number of orderings is the factorial of the number of candidates. As a variant of MEV, we consider agent combinations instead of permutations for the ordering set $\mathcal{T}$, such that $A$ is divided into a preferred group $\mathcal{P}$ of cardinality $M$ and a non-preferred group $\mathcal{NP}$, where $M$ is the number of winners needed. Hence, the cardinality of the ordering set decreases from $N!$ to $\frac{N!}{M!(N-M)!}$. For small $M$, this leads to a dramatic reduction of the computational complexity.
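The construction of an agent's pairwise preference matrix from data points and reputations, together with the combination-based ordering set, can be sketched as follows (a minimal illustration with our own variable names, not the paper's implementation):

```python
from itertools import combinations
from math import comb

def preference_matrix(i, x, r):
    """S(i): agent i's pairwise preference matrix. a_j is preferred to a_k
    iff (1 + |x_i| * r[i][j]) / (1 + |x_i - x_j|) exceeds the analogous
    score for a_k; diagonal entries are 0."""
    n = len(x)
    score = [(1 + abs(x[i]) * r[i][j]) / (1 + abs(x[i] - x[j])) for j in range(n)]
    S = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            if score[j] > score[k]:
                S[j][k] = 1.0
            elif score[j] == score[k]:
                S[j][k] = 0.5
    return S

def aggregate(S_list):
    """S(A): entrywise average of all agents' preference matrices."""
    n, m = len(S_list[0]), len(S_list)
    return [[sum(S[j][k] for S in S_list) / m for k in range(n)] for j in range(n)]

def ordering_set(agents, M):
    """Combination-based ordering set: each ordering is a preferred group
    of size M, so |T| = C(N, M) instead of N! permutations."""
    return list(combinations(agents, M))

# Three agents, one winner: 3 orderings instead of 3! = 6.
assert len(ordering_set([0, 1, 2], 1)) == comb(3, 1)
```

Note that every matrix built this way satisfies $S(i)_{j,k} + S(i)_{k,j} = 1$ for $j \neq k$, a property that carries over to the average $S(A)$.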
For each ordering $t\in\mathcal{T}$, we can define its pairwise preference matrix $S(t)$, whose entry is defined in equation \eqref{equ:preference-matrix-ordering}, analogously to equation \eqref{equ:preference-matrix-agent}: \begin{equation} S(t)_{j,k}= \begin{cases} 1 & \text{if }a_j \text{ is placed over } a_k \\ 0.5 & \text {if both are in the same group and } j \neq k\\ 0 & \text{if }a_k \text{ is placed over } a_j \text{ or } j=k \end{cases} ,\label{equ:preference-matrix-ordering} \end{equation} for $a_j,a_k\in A$. Let us define an \textbf{unknown} probability measure $\pi: \mathcal{T}\rightarrow\mathbb{R}_{\geq 0}$, where $\pi(t),t\in\mathcal{T}$ gives the probability of $t$ being chosen as the outcome ordering. Then, we construct a theoretical preference matrix $S(\pi)$ as follows: \begin{equation} S(\pi):=\sum_{t\in\mathcal{T}} \pi(t)\cdot S(t). \label{equ:S(pi)} \end{equation} The entry $S(\pi)_{j,k}$ gives, under probability measure $\pi$, the probability of the outcome ordering placing $a_j$ over $a_k$. Recall the definition of Representative Probability in Section~\ref{Pareto} or \cite{max_entropy_mackay}; it simply requires $S(\pi)=S(A)$. The entropy of $\pi$ measures the uncertainty of choosing elements in $\mathcal{T}$; the uniform distribution has the maximum entropy. The entropy associated with $\pi$ is defined as $-\sum_{t\in\mathcal{T}} \pi(t) \log \pi(t)$ \cite{gray2011entropy}. Hence, the original formulation of maximum entropy voting adhering to Representative Probability is given in \eqref{pro:MEV0}. In this formulation, by maximising the entropy, we ensure the solution $\pi^*$ is the most moderate probability measure that obeys Representative Probability in \ref{Pareto}.
\begin{equation} \begin{split} \pi^*=\arg\max_{\pi} & -\sum_{t\in\mathcal{T}} \pi(t) \log\pi(t) \\ \text{s.t.} &\sum_{t\in\mathcal{T}} \pi(t)\cdot S(t) = S(A) \\ &\sum_{t\in\mathcal{T}} \pi(t) = 1 \\ &\pi(t)\geq 0 \quad \forall {t\in\mathcal{T}} \end{split} \label{pro:MEV0} \end{equation} \subsubsection{A Motivating Example} Consider $A=\{a_i,a_j,a_k\}$ where only one winner is needed $(M=1)$; all possible combinations are shown in Figure~\ref{tab:ordering-set}, while the number of permutations would be $3!$. \begin{figure} \caption{The lower-cardinality ordering set when $A=\{a_i,a_j,a_k\}$ and $M=1$.} \label{tab:ordering-set} \end{figure} As an example, the pairwise preference matrix $S(t_1)$ is displayed in \eqref{equ:S(t)-example}, following the definition in \eqref{equ:preference-matrix-ordering}. \begin{equation} S(t_1)= \begin{tabular}{c|ccc} &$a_i$&$a_j$&$a_k$\\ \hline $a_i$&0 &1 &1 \\ $a_j$&0 &0 &1/2\\ $a_k$&0 &1/2&0\\ \end{tabular} \label{equ:S(t)-example} \end{equation} Suppose an optimal measure $\pi^*$ is obtained from the optimisation problem in equations~\ref{pro:MEV0}. Assuming $\pi^*(t_1)=0.3$, $\pi^*(t_2)=0.4$ and $\pi^*(t_3)=0.3$, to sample an outcome ordering $t^*$ from $\pi^*$, consider a prize wheel as in Figure~\ref{fig:prize-wheel}. The wheel includes $|\mathcal{T}|$ wedges, where each wedge represents one ordering $t$ and takes the share of $\pi^*(t)$. To obtain an outcome ordering, simply spin the wheel; $t^*$ is the wedge where the red arrow stops, i.e., $t_1$ in Figure~\ref{fig:prize-wheel}. \begin{figure} \caption{A prize wheel for sampling an outcome ordering $t\in\mathcal{T}$.} \label{fig:prize-wheel} \end{figure} The MEV procedure is summarised in algorithm \ref{alg:MEV}. Firstly, each agent $a_i$ constructs their vote, the pairwise preference matrix $S(i)$, from the data point $x_i$ and reputation $r_{i\rightarrow j}$ (Line 1).
Then, the average of all agents' pairwise preference matrices, $S(A)$, is calculated by the administrator $\mathbb{A}$, which is seen as the aggregation of all agents' votes (Line 2). Then, a low-cardinality ordering set of agents $\mathcal{T}$ is constructed from $M$, the number of winners needed (Line 3). For every possible ordering of candidates $t$, a theoretical pairwise preference matrix $S(t)$ is constructed (Line 4). These two steps can be computed by any agent in the election, or by the administrator. Then, the administrator solves the optimisation problem to maximise entropy, as defined in equations \ref{pro:MEV0}, to find a probability measure $\pi^*$ over the orderings (Line 6). This probability measure also adheres to the Representative Probability property \ref{Pareto}. Finally, the administrator samples an outcome ordering $t^*$ from $\pi^*$, using a ``prize-wheel'' sampling, as shown in Figure \ref{fig:prize-wheel} (Line 7). This ordering is the final election outcome. \begin{algorithm}[h!] \SetKw{Return}{return} $S(i) \eqref{equ:preference-matrix-agent} \gets$ $a_i(x_i,r_{i\rightarrow j})$\; $S(A) \eqref{equ:S(A)} \gets$ $\mathbb{A}(\mathnormal{S(i)})$ \; $\mathcal{T} \gets$ $M$, constructed as in Figure~\ref{tab:ordering-set}\; \For{$t \in \mathcal{T}$}{$S(t)$\eqref{equ:preference-matrix-ordering} $\gets$ $\mathcal{T}$\; } $\pi^*\gets$ $\mathbb{A}(\mathcal{T},\mathnormal{S(t),S(A)})$, solving equations~\ref{pro:MEV0}\; $t^*\gets$ $\mathbb{A}(\pi^*)$ as in Figure~\ref{fig:prize-wheel}\; \BlankLine \Return $t^*$; \BlankLine \caption{MEV Algorithm ($x_i$, $r_{i\rightarrow j},M$)} \label{alg:MEV} \end{algorithm} \subsection{Data Consensus}\label{sec: data consensus} In an oversubscribed environment, crowd-sourcing can be used to estimate an agreed upon measurement which should reflect the ground truth as closely as possible.
The assumption is that every agent measures the same source and should therefore obtain the same results within the margins of measurement precision. The only reasons for deviations are that either the sensor used is faulty or the agent is intentionally submitting incorrect results. Therefore, by comparing agents' measurements against each other, the aim is to sort out faulty and incorrect results. There are different ways to approach this, and in order to characterise them two concepts will be introduced, namely k-anonymity and the breakdown point. \subsubsection{K-Anonymity} One way to define k-anonymity is that data sourced from multiple agents satisfies k-anonymity if any individual data point cannot be related to fewer than $k$ agents, where $k$ is a positive integer \cite{971193}\footnote{\href{http://latanyasweeney.org/work/kanonymity.html}{latanyasweeney.org; k-anonymity}}. In other words, if an agreed upon dataset includes multiple measurements and is assigned a k-anonymity with $k=2$, it is not possible to identify a single measurement without a second agent revealing their measurement. \subsubsection{Breakdown Point} In general, the breakdown point characterises the robustness of an estimator and is usually dependent on the sample size $n$. For this work, the definition given below is used to characterise the \emph{theoretical breakdown point}. In such a way, the theoretical breakdown point $\textrm{BP}_{th}$ characterises the minimum share of malevolent agents needed to break the system and alter the agreed upon dataset arbitrarily, given the worst case configuration of the system \cite{breakdown_point}.
Complementary to that, we define the \emph{practical breakdown point} in definition \ref{def: practical break}. \begin{defn}[Practical Breakdown Point]\label{def: practical break} The practical breakdown point $\textrm{BP}_{pr}$ is the average share of malevolent agents at which the agreed upon dataset is arbitrarily altered, given naturally occurring configurations of the system. \end{defn} \subsubsection{Mean} The mean $\bar{x}$ in its simplest form is defined in equation \eqref{eq: mean}: \begin{equation}\label{eq: mean} \bar{x}=\frac{1}{n}\left(\sum_{i=1}^{n} x_{i}\right)=\frac{x_{1}+x_{2}+\cdots+x_{n}}{n} \end{equation} where $x_i$ are the individual measurements and $n$ is the sample size. The mean can be calculated in a decentralised and privacy preserving manner \cite{Overko2019}. The mean then achieves a k-anonymity of $k = n-1$. The theoretical breakdown point of the mean is $\frac{1}{n}$; in other words, a single measurement can cause the mean to take on arbitrarily high or low values. This can be mitigated with domain knowledge, i.e. restraining the range of valid measurements. However, even with this mitigation in place, a larger coalition of malevolent agents is still able to influence the mean significantly. This, in combination with the fact that the presence of malevolent agents can be expected in a data market, suggests that the mean is not sufficiently robust to compute an agreed upon dataset for most use cases. \subsubsection{Median} The median is the value separating the higher half from the lower half of a data sample. It can be defined for a numerically ordered, finite sample of size $n$ as follows: \begin{equation} \operatorname{median}(x)= \begin{cases} x_{(n+1) / 2} &\text{ if n is odd}\\ \frac{1}{2} \cdot (x_{(n / 2)} + x_{(n / 2)+1}) &\text{ if n is even}. \end{cases} \end{equation} This definition is invalid for an unordered sample of measurements.
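The piecewise definition above can be realised by sorting first; the following sketch (our own toy illustration) also makes the robustness contrast between the two estimators visible:

```python
def median(sample):
    """Median per the piecewise definition: sort, then take the middle
    element (odd n) or the average of the two middle elements (even n)."""
    xs = sorted(sample)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 == 1 else 0.5 * (xs[mid - 1] + xs[mid])

def mean(sample):
    return sum(sample) / len(sample)

# One corrupted measurement moves the mean arbitrarily (breakdown point
# 1/n), while the median stays with the honest majority.
honest = [10.1, 9.9, 10.0, 10.2, 9.8]
corrupted = honest[:-1] + [1e6]
assert median(corrupted) == 10.1
assert mean(corrupted) > 1000
```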
In order to compute the median for such a sample, the measurements need to be at least partly sorted numerically first. Given a multi-agent setting, this can be done in a distributed way by using a selection algorithm that finds the $k^{th}$ smallest element(s), as long as the data of the agents can be shared with other agents in their coalition. If data cannot be shared with other members, calculating the median in a privacy-preserving way demands a more complex scheme \cite{median_privat} and is not trivial. The theoretical breakdown point of the median characterises it as one of the most robust estimators and is given, for the worst case, by $\frac{1}{2}$. \subsubsection{Mean Median Algorithm} In an adversarial environment, the high robustness of the median is desirable; however, protection of privacy is often also of concern. The Mean Median Algorithm was therefore designed to estimate an agreed upon dataset in a robust and privacy-preserving way. It is a compromise: the algorithm is not as robust as the median and not as privacy-preserving as the mean, when compared individually. The first step of the algorithm is to randomly assign every agent to a group in such a way that there are $g$ groups with at least $s$ agents each. The way the parameters $g$ and $s$ are chosen determines the anonymity and robustness properties of the algorithm and will be discussed in the simulation section \ref{sec: simulation}. The next step is to calculate the mean within each group. The resulting mean is of k-anonymity with at least $k = s-1$. The median is then taken over the $g$ group means. This gives the breakdown point stated in equation \ref{eq: breakdown point mean median}.
\begin{equation}\label{eq: breakdown point mean median} \text{Breakdown point of }\operatorname{mean median}(x)= \frac{g}{2n} \end{equation} The relationship between $s$, $g$ and the number of agents $n$ is given by inequality \eqref{eq: mean median}. \begin{equation}\label{eq: mean median} n \geq s \cdot g \end{equation} \subsection{Stage 4: Access Control Mechanism to the Data Market} The previous stages of verification, voting and data consensus run simultaneously in numerous rounds, as vehicles sense data and form coalitions to provide an agreed upon value for a given location quadrant. By the time they reach the access control, there is an excess of datapoints for different locations, and these datapoints are on a queue to enter and be sold on the datamarket. This section outlines the access control mechanism to sell this oversupply of datapoints on the market, and in what order they should be prioritised to enter the datamarket. This access control mechanism can be considered to have two intermediary steps: firstly, all datapoints are assigned a priority; and secondly, proportionally to this priority, the coalition owning each datapoint must perform an adaptive, useful proof of work. \subsubsection{Contribution ranking: Shapley Value} At a given time $t$, a new set of datapoints will be submitted to a queue, to ultimately enter the datamarket. Let this set be $\mathbf{D_t} = \{D_{q1}, D_{q2}, D_{q3}, \ldots\}$, where each item of the set is the datapoint computed by a coalition $C_q$ of a given location quadrant, with $q1, q2, q3, \ldots \in \mathcal{L}$. For each element in $\mathbf{D_t}$, the Shapley value $\psi(C_q)$ is calculated. Note that each element in $\mathbf{D_t}$ is a datapoint that corresponds to a spatial coalition $C_q$. The grand coalition in this case is considered to be the union of all coalitions that have datapoints already for sale on the datamarket, denoted as $\mathbf{S}$.
Each datapoint in $\mathbf{D_t}$ is assessed using the Shapley value, which determines which datapoints would increase the overall value of the datamarket, with respect to the defined value function, should they be added to the grand coalition $\mathbf{S}$. In other words, the datapoints that receive a higher Shapley value would contribute more towards increasing the combined value of the data already for sale on the market. In this manner, the Shapley value is used as a metric to rank the most valuable datapoints with respect to a value function. \subsubsection{Useful, adaptive proof of work} Subsequently, once the datapoints in $\mathbf{D_t}$ have each received a Shapley value, they are assigned a proof of work they must complete. This proof of work is inversely proportional to the Shapley value: the more valuable a datapoint is deemed for the datamarket, the less proof of work the coalition owning it must complete to enter the market. This assigned proof of work is, in fact, computing the Shapley value of the next set of datapoints, $\mathbf{D_{t+1}}$. \subsubsection{A contextual example} In the context of agents measuring different levels of pollution, we illustrate an example of how the Shapley value would be used to rank the datapoints in terms of value, and allocate a proportional proof of work correspondingly. We use data on the pollution levels of a range of different contaminants, taken from a number of cities in India. The data has been made publicly available by the Central Pollution Control Board\footnote{\url{https://cpcb.nic.in/}}, the official portal of the Government of India. The cleaned and processed data was accessed from Kaggle\footnote{\url{https://www.kaggle.com/datasets/rohanrao/air-quality-data-in-india}}. We illustrate an example wherein a public authority is interested in purchasing data on pollution levels of different contaminants in order to predict the Air Quality Index (AQI) of a given location.
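Before turning to the concrete AQI example, the ranking step itself can be sketched. The following computes exact Shapley values by averaging marginal contributions over all coalitions; the value function here is a toy stand-in (coalition size), not the market's regression-based value function, and the datapoint names are illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: for each player, average the marginal
    contribution v(S + {p}) - v(S) over all subsets S of the others,
    with the standard combinatorial weights |S|! (n-|S|-1)! / n!."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(frozenset(s) | {p}) - v(frozenset(s)))
        phi[p] = total
    return phi

# Toy value function: the worth of a set of datapoints is simply its size,
# so every datapoint receives an equal share of the total value.
datapoints = ["D_q1", "D_q2", "D_q3"]
phi = shapley_values(datapoints, lambda s: len(s))
```

In the market, datapoints with higher $\psi$ would be assigned proportionally less proof of work; exact enumeration is exponential in the number of players, which is why sampling-based approximations such as SHAP are used in practice.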
We generate a linear regression model to predict AQI, as has been previously done in \cite{linearregAQI}, although other options for models to predict AQI have been explored in alternative works such as \cite{ameer2019comparative} and \cite{kumar2011forecasting}. We note that it is up to the buyer to select a model that best defines the objective they wish to achieve. A description of how AQI is calculated can be found online\footnote{\url{https://app.cpcbccr.com/ccr_docs/How_AQI_Calculated.pdf}}. Following from this calculation, it is reasonable to observe that the variables PM2.5 (Particulate Matter 2.5-micrometer in $\mathnormal{\mu g / m^3}$) and PM10 (Particulate Matter 10-micrometer in $\mathnormal{\mu g / m^3}$) are highly correlated with AQI. It can be seen from Figure \ref{fig:heatmap} that these are the two variables with the highest correlation to AQI. We include them, as well as NO, NO2, NOx, NH3, CO, SO2 and O3, as training features for the linear regression model. \begin{figure*} \caption{Correlation Matrix of the different pollutants measured with AQI} \label{fig:heatmap} \end{figure*} Agents collecting measurements of different pollutants have their dataset evaluated by a preceding set of agents that must calculate some proof of work. They receive the seller's objective value function, which in this case is the linear regression model, and access to another agent's dataset. We show the results of calculating the Shapley value of individual datapoints within a given dataset in Figure \ref{fig:scatterplot}. We simulate this using the SHAP library, presented in \cite{NIPS2017_7062}. Following from the SHAP documentation: "Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue. The plot below sorts features by the sum of SHAP value magnitudes over all samples, and uses SHAP values to show the distribution of the impacts each feature has on the model output.
The color represents the feature value (red high, blue low)" \cite{shapdocs}. This reveals that high PM2.5 and PM10 concentrations increase the predicted AQI. \begin{figure*} \caption{SHAP Values of the samples of each feature} \label{fig:scatterplot} \end{figure*} We also show the mean absolute value of the SHAP values for each pollution contaminant in Figure \ref{fig:barchart}. From this we can deduce that any measurement belonging to the highest SHAP value classes will be deemed more valuable, and thus the agent submitting it will have to perform less proof of work to sell it. Every spatial coalition $C_q$ would have its own total Shapley value, which is the aggregate of the Shapley values shown in Figure \ref{fig:barchart}. \begin{figure} \caption{Feature ranking} \label{fig:barchart} \end{figure} \subsubsection{Privacy Concerns} The reader may rightly question the privacy risks of an agent accessing another one's dataset to compute the Shapley value. What is to incentivise them to compute the Shapley value honestly, and what is to prevent them from stealing or duplicating another agent's data if they realise it has a high Shapley value? To address the first concern, a Shapley value calculation is only accepted and considered complete once enough agents have agreed on the same outcome. Under the Byzantine assumption that at least $2/3$ of agents are honest, once a consensus is reached on the value, it can be accepted as true. Secondly, in the market there is no protection against agents duplicating data, but they cannot monetise this copied data unless they go through the verification, consensus and access control stages again. Because we are in an environment with an oversupply of crowd-sourced data, the data is unlikely to be highly sensitive, and thus the incentive to go through these steps is very small. Finally, we address the concern of having a public value function.
We note that the value function is not the same as the buyer's predictive task, but rather the mathematical representation of the market's utility function. This information should be public, as it is the way the market agrees to assign value to data. Given that we propose this work in the context of a collective environment where the value function should dictate the entire market's objective, it should be public knowledge and not sensitive intellectual property. Furthermore, malicious buyers could attempt to propose an objective function that penalises data they are interested in, but that would not ensure that they pay a lower price for their desired data; it would only delay the access of said data into the market. The price would be dictated by the bidding mechanism, which is something that can be agreed upon by the entire collective to prevent issues like the aforementioned one. \section{Adversarial Attacks} \label{sec: attacks} In an environment like a decentralised network or a data market, one must take into account the possibility of attacks on the system. We proceed to describe their nature and how they are mitigated by the functional components of the data market architecture. \begin{defn}[Sybil Attack] Sybil Attacks are a type of attack in which an attacker creates a large number of pseudonymous identities, which they use to exert power over and influence the network. \end{defn} Sybil attacks are mitigated in the verification stage, as agents must present a valid proof of identity. This proof is granted to them through a centralised authority, but all other agents can verify that it exists and therefore that it must be valid. Generating multiple identities is made expensive in this proposed architecture, because agents must provide a valid license plate to enter the market and collect data. Unless the attacker purchases a real vehicle with a valid license plate, they cannot succeed in creating another identity, and therefore cannot sell data in the market.
\begin{defn}[Wormhole Attack] A Wormhole Attack involves a user maliciously reporting that they are in a location other than the one they are truly in. \end{defn} An attack can be mounted by a series of malicious actors claiming to measure data from a location they are not truly in, wishing to monetise this fraudulent data. To mitigate against this attack, agents must present a valid proof of position in the verification stage (defined in \ref{PoP defn}). This proof is assumed to be correct and sound, and by definition, agents are only able to present one valid proof. \begin{defn}[Data Poisoning] Data Poisoning is an attack where malicious agents collude to report fake data in order to influence the agreed upon state of a system. \end{defn} Malicious agents wishing to report fake data must influence enough agents in their spatial coalition to ensure that sufficient agents in the data consensus stage will compute a fake data point. Probabilistic voting schemes make the cost of this coercion significantly high. Furthermore, to sell the uploaded data point, the agent must perform a useful proof of work that is proportional to how valuable the data point is deemed: the more useful the data point, the less work the agent must carry out to sell it. Selling spam data will therefore be very time consuming for an attacker. \subsection{Evaluation} \label{sec: evaluation} \begin{figure*} \caption{Characterisation of data consensus algorithms' behaviour under different degrees of coordinated data poisoning attacks} \label{fig:algo characterisation} \end{figure*} In this section, the data market as well as its individual parts are analysed to assess their robustness against the previously introduced attacks. Simulations of the trust and truth consensus mechanism are carried out to gain deeper insights.
As previously discussed in section \ref{sec: attacks}, there are four types of unwanted instances that this work focuses on, namely Sybil Attacks, Wormhole Attacks, Data Poisoning, and Faulty Data. Data poisoning and faulty data are similar in the sense that untrue measurements are submitted, albeit for different reasons: in the case of faulty data this happens unintentionally, while data poisoning is intentional. Further, due to the (assumed) random nature of faulty data, where untrue measurements fall on all sides of the ground truth, such errors tend to cancel each other out when the agreed upon data is estimated. In contrast, when multiple agents build a malevolent coalition to influence the vote by submitting the same untrue measurement, their influence is greater. Therefore, data poisoning can be seen as the worst case scenario among the two, and by investigating it, a bound for both can be found. Note that the case of faulty data with the same systematic error on multiple sensors results in the same outcome as data poisoning, with the difference that the bias is chosen randomly. To further investigate data poisoning, simulations have been carried out. \subsection{Simulation Setup} A class of agents was created and a ground truth established, from which the honest agents measure their data point. To account for imperfect sensors and other sources of error, the process of taking a measurement is represented by sampling from a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$. Additionally, a set of agents was created which all have the same untrue measurement $\mu_{adv}$, to represent a group of dishonest agents forming a malevolent coalition to mount a \emph{data poisoning} attack. Further, a base reputation of 1 is assigned to all agents, and in a second step, every honest agent has a probability of being assigned a high reputation.
This is modeled by a weighted coin toss deciding if the agent is assigned a high reputation, and if so, the reputation is sampled from a Gaussian with mean $\mu_{rep}$ and standard deviation $\sigma_{rep}$. To simulate the governance and consensus mechanisms, models of the different data consensus algorithms and the voting mechanism were built and applied to the created agents. It is important to note that the mean median algorithm was implemented twice with different parameters, namely triplets and squareroot. The former means that the minimum number of agents per group is $s=3$, while for the squareroot implementation it is chosen dynamically depending on the number of agents $N$, with $s=\sqrt{N}$. In return, the triplets algorithm has a higher number of groups $g$ than the squareroot implementation, and therefore a higher robustness can be expected, as discussed in section \ref{sec: data consensus}. The simulations have been carried out with $S$ samples using Monte Carlo methods, varying numbers of agents $N$, and differently sized malevolent coalitions. Note that this setup assumes the honest actors to be independent and identically distributed, which implies that measurement errors are neither systematic nor correlated. It can be translated to a world where every agent takes their measurements independently with the same unbiased sensor system and spatio-temporal effects do not occur (within a spatial coalition). \emph{Remark:} We note that the purpose of the simulations in Figures \ref{fig:algo characterisation} and \ref{fig:algoboxplot} is to characterise the robustness of the data consensus algorithms against \emph{data poisoning} attacks. Therefore, the absolute value of the data that each agent submits is not of importance. Rather, our objective is to understand how resilient the algorithm is when a series of malevolent agents collude to send the same malicious value, which differs greatly from the ground truth.
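A minimal sketch of one Monte Carlo round of this setup, with the mean median (triplets) consensus under a coordinated attack; all parameter values here are illustrative, not those of the reported simulations, and remainder agents that do not fill a group are dropped for simplicity:

```python
import random
import statistics

def simulate_round(n_honest, n_adv, mu=50.0, sigma=2.0, mu_adv=500.0, s=3, seed=0):
    """One round of mean median consensus under a data poisoning attack.

    Honest agents sample from N(mu, sigma); the malevolent coalition all
    submit the same untrue value mu_adv.
    """
    rng = random.Random(seed)
    measurements = [rng.gauss(mu, sigma) for _ in range(n_honest)]
    measurements += [mu_adv] * n_adv
    rng.shuffle(measurements)  # random assignment of agents to groups
    usable = len(measurements) - len(measurements) % s
    groups = [measurements[i:i + s] for i in range(0, usable, s)]
    group_means = [statistics.mean(g) for g in groups]
    return statistics.median(group_means)  # the agreed upon value

# With no adversaries the agreed value stays close to the ground truth.
clean = simulate_round(n_honest=21, n_adv=0)
# A small coalition poisons some group means, but the median of means resists.
attacked = simulate_round(n_honest=18, n_adv=3)
```

With 3 adversaries among 21 agents (about 14\%, below the triplets practical breakdown point of 20\% observed below), at most three of the seven group means can be poisoned, so the median still lands on an honest group mean.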
\begin{figure*} \caption{Breakdown analysis of data consensus algorithms, with a coordinated data poisoning attack} \label{fig:algoboxplot} \end{figure*} \subsection{Evaluation of Results}\label{sec: simulation} Figure \ref{fig:algo characterisation} shows a simulation of the different data consensus algorithms which can be used to find the agreed upon dataset. The simulation has been repeated $S=15$ times with $N=1000$ agents. The behaviour of the system with a high number of agents can be considered to reflect the upper limit scenario of the system, and scaling effects can be observed when the number of agents is lower. It is important to examine this scaling effect because, in practice, varying numbers of agents will occur. To do so, the behaviour in the limit is examined first to establish the baseline. Generally, it can be seen that the more adversaries are present, the higher the deviation of the agreed upon dataset from the ground truth. For the median algorithm, in green, and the triplets algorithm, in blue, discrete steps can be seen, where at defined percentages of adversaries the deviation of the agreed upon dataset changes drastically. For the median algorithm, this happens once at 50 \%, which is the theoretical breakdown point. For the triplets algorithm, this happens three times, reflecting the fact that there are three agents per group. It can be interpreted that for up to 20 \% of adversaries, there are high chances of all three agents in the median group being honest. Further, between 20 \% and 50 \%, it is likely that one out of three is malevolent. This continues until, from 80 \% onwards, there are high chances of all agents in the median group being malevolent. Between the jumps, an upwards trend can be observed. This can be explained by the fact that the honest actors sample their data from a Gaussian distribution with non-zero $\sigma$.
As more and more adversaries distort the data, honest agents further away from $\mu$ are selected. Lastly, the squareroot algorithm shows a fairly continuous, almost linear, reaction to adversaries. This is a result of the bigger groups, with $s=\sqrt{N}$ and $N=1000$, which allow finer blends of honest and malevolent agents in the median group. In theory, there should also be a discrete step-wise increase similar to the triplets algorithm, because of the discrete number of agents $s$ underpinning the process. However, due to the smaller step size and the inherent randomness of the simulation, this can only be observed in the region between 80 \% and 100 \% of adversaries. To conclude, it can be said that at the baseline, the median algorithm is the most robust, with a practical breakdown point at 50 \%, followed by the mean median triplets with 20 \%, and the mean median squareroot with a breakdown point of about 2 \%. Using equation \eqref{eq: breakdown point mean median} to calculate the theoretical breakdown points for the mean median algorithm (16.65 \% for triplets and 1.55 \% for squareroot), it is easy to see, at least for the triplets implementation, that the practical breakdown points are higher. \begin{figure*} \caption{Characterisation of breakdown of MEV combined with Median Algorithm} \label{fig:MEV medianboxplot} \end{figure*} To investigate the scaling effect and behaviour of the algorithm when smaller numbers of agents are present, the same simulation was carried out with $N=20$ agents, see Figure \ref{fig:algoboxplot}, and presented as a boxplot instead. This results in bigger steps in which the percentage of adversaries is increased: for $N=20$, adding one adversary translates to an increase of 5 \%. The number of repeated samples was increased to $S=100$ to compensate for the increased uncertainty due to the lower agent count.
The boxplot shows the three algorithms for different shares of adversaries, namely 0, 5, 10, 20, 25, 45, 50, and 55 \%. It can be seen that, in general, the algorithms act similarly as in Figure \ref{fig:algo characterisation}, with two major differences. First, the boxplot clearly shows that, given a share of adversaries, the spread of the deviation is higher. This is a direct result of the lower number of agents, where chances of fluctuations are higher. Secondly, the practical breakdown point is shifted to 10 \% and 15 \% for squareroot and triplets, respectively. Given equation \eqref{eq: breakdown point mean median}, the theoretical breakdown points for the mean median implementations are 10 \% and 15 \%, which confirms the observation. For the squareroot implementation, this is a great improvement, which can be explained by the low number of agents. This results in $g=4$ groups, which, in proportion to the number of agents $N=20$, is higher than for $N=1000$ with $g=31$ groups. This shifts the theoretical breakdown point and therefore also the practical one. The explanation behind the shifting of the practical breakdown point of the triplets implementation lies less in a shift of the theoretical breakdown point (although there is a slight one) and more in the random nature of the allocation of the agents to the groups. Specifically, it is more likely to end up with two or more malevolent agents per group with higher numbers of agents. Equation \eqref{eq: Probbility of breakdownpoint} gives the probability that the breakdown occurs at the theoretical breakdown point, given the share of adversaries of the theoretical breakdown point. The chances are higher for the case of triplets with $N=20$, $g=6$ and $a=3$, than for triplets with $N=1000$, $g=333$ and $a=167$.
\begin{equation} \textrm{BP}_{th}=\frac{(g-1)!}{g^{a-1}(g-a)!} \label{eq: Probbility of breakdownpoint} \end{equation} where $\textrm{BP}_{th}$ is the probability of the breakdown of the Mean Median Algorithm at the theoretical breakdown point, $g$ is the number of groups, and $a$ is the number of adversaries at the theoretical breakdown point. Figure \ref{fig:MEV medianboxplot} represents the entire consensus mechanism, the main purpose of which is to demonstrate that the reputation system can increase the robustness, given that a functional reputation system exists. The way this simulation works is that the voting scheme (MEV) outputs a set of agents, the committee, which then computes the agreed upon dataset of a given location. Given the context of this data market, where anonymity is not demanded, the use of the median algorithm as a data aggregation algorithm can be justified. The size of the committee was chosen as $K=3$ to be able to make use of the proposed reduction in computational complexity, described in section \ref{sec: voting scheme}. Figure \ref{fig:MEV medianboxplot} shows two simulations plotted, which present the share of adversaries at which the practical breakdown point occurs (y-axis) as a function of the share of highly reputational agents among the honest ones (x-axis). The latter corresponds to the weight of the weighted coin toss mentioned above. The plot in blue has $N=15$ and $S=10$, and the one in orange $N=10$ and $S=15$. For both simulations, the same Gaussian was used to sample the reputation of the high-reputation honest actors, with $\mu_{rep}=100$ and $\sigma_{rep}=30$. For both simulations, a clear trend can be observed: with higher shares of reputational agents among the honest ones, the practical breakdown point occurs at higher shares of adversaries, with some outliers having a breakdown point of more than 80 \%. Precisely, this means that two out of the three committee members are honest.
At the same time, it is visible that a great spread is introduced by adding the voting consensus mechanism. To conclude the analysis of the robustness against \emph{data poisoning}, it can be said that with both the Max Entropy voting and the median in place, the system offers solid protection against \emph{data poisoning}. In order to be confident of success, the malevolent coalition has to be in control of 40 \% to 80 \% of the network, depending on the reputation system and the honesty of the other agents. \subsection{Conclusion} Fairness, decentralisation and verifiability are fundamental to the body of this work. The novelties of this work include ranking data in terms of how valuable it is to the market using the Shapley value, and proportionally adapting the proof of work to it. Furthermore, the proof of work is itself useful and necessary for the functioning of the market, and thus not wasteful. We also propose consensus through a voting scheme that satisfies desirable properties of fairness, and introduce an optimisation to make its computational complexity significantly lower for the context of this work. Most importantly, this voting scheme favours agents that can prove their honesty, as this is how reputation is earned in the system. Indeed, at the time of writing, Algorand has just announced that they are aware of the critique towards their voting algorithm: passive agents with more wealth have more voting power than the active agents that are enabling the functioning of the network. Algorand have stated they agree with this critique and will be rolling out changes in June 2022 to reward active network users\footnote{\url{https://algorand.foundation/news/governance-voting-update-g3}}. We hope the work presented here can be a first step in enabling the shift towards this direction.
For future work, we wish to explore how our voting scheme can be implemented in an end-to-end verifiable manner, and how the computation of the Shapley value for each agent's dataset can be done in a privacy-preserving way. Achieving the latter may enable us to relax the security assumptions on the honesty of the agents computing the Shapley value. \onecolumn \appendix \subsection{Conic Optimisation and Lagrangian Relaxations} Relative entropy programs (REPs) and second-order cone programs (SOCPs) are conic optimisation problems over the relative entropy cone and the second-order cone, respectively, possibly subject to other linear constraints. They can be solved via interior-point methods. Let $\mathbf{\pi},\mathbf{\delta},\mathbf{1}$ be $|\mathcal{T}|$-dimensional vectors, where the elements of $\mathbf{\pi}$ are $\pi(t),t\in\mathcal{T}$, and $\mathbf{1}$ is the all-ones vector of compatible size. The relative entropy cone $(\mathbf{\pi,1},\delta)\in\mathcal{RE}$ is defined as: \begin{equation} \mathcal{R E}:= \left\{ (\mathbf{\pi,1,\delta})\in\mathbb{R}^{|\mathcal{T}|}_{\geq 0}\times\mathbb{R}^{|\mathcal{T}|}_{\geq 0}\times \mathbb{R}^{|\mathcal{T}|}\vert\quad\pi(t)\log (\pi(t)/1) \leq\delta_t, \forall t\in\mathcal{T} \right\}. \label{equ:relative-entropy-cone} \end{equation} The objective function in \eqref{pro:MEV0} can be reformatted using \eqref{equ:relative-entropy-cone}: the relative entropy cone $(\mathbf{\pi,1},\delta)\in\mathcal{RE}$ implies that $-\sum_{t\in\mathcal{T}} \pi(t)\log \pi(t)\geq -\sum_{t\in\mathcal{T}}\delta_t$, so we can simply minimise $\sum_{t\in\mathcal{T}}\delta_t$ to obtain a maximum entropy solution. Hence, Problem~\ref{pro:MEV0} is re-formulated as \begin{equation} \begin{split} \min_{\pi,\delta} & \sum_{t\in\mathcal{T}}\delta_t \\ \text{s.t.} &\sum_{t\in\mathcal{T}} \pi(t)\cdot S(t) = S(A), \quad\sum_{t\in\mathcal{T}} \pi(t) = 1 \\ &\pi(t)\geq 0\textrm{ }\forall {t\in\mathcal{T}},\quad(\mathbf{\pi,1},\delta)\in\mathcal{RE}.
\end{split} \label{pro:MEV0-relative} \end{equation} If $\mathcal{T}$ is the set of combinations, the constraint $\sum_{t\in\mathcal{T}}\pi(t)\cdot S(t) = S(A)$ in Problem~\ref{pro:MEV0} or \ref{pro:MEV0-relative} cannot always be satisfied. Correspondingly, we lift this constraint into the objective function, with a multiplier $\lambda>0$. Let \begin{equation} S^{\textrm{diff}}:=\sum_{t\in\mathcal{T}}\pi(t)\cdot S(t)-S(A). \label{equ:RP-distortion} \end{equation} According to the definitions of $S(A)$ and $S(t)$, $S^{\textrm{diff}}$ is an $N\times N$ symmetric matrix with an all-zero diagonal. On the other hand, $S^{\textrm{diff}}$ measures the distortion of solution $\pi$ from the Representative Probability property \ref{Pareto}. Further, a second-order cone $(S^{\textrm{diff}},\eta)\in\mathcal{SO}$ is defined as in \eqref{equ:second-order-cone}: \begin{equation} \begin{split} \mathcal{SO} := \left\{ (S^{\textrm{diff}},\eta)\in\mathbb{R}^{N\times N}\times\mathbb{R}_{\geq 0} \left| \large\sqrt{\sum_{i,j\in A, i< j} 2\left(S^{\textrm{diff}}_{i,j}\right)^2} \leq\eta \right. \right\}, \end{split} \label{equ:second-order-cone} \end{equation} where $S^{\textrm{diff}}_{i,j}$ denotes the element of $S^{\textrm{diff}}$ in row $i$ and column $j$. The Lagrangian relaxation of Problem~\ref{pro:MEV0-relative}, using the second-order cone, reads \begin{equation} \begin{split} \min_{\pi,\delta,\eta} & \sum_{t\in\mathcal{T}}\delta_t + \lambda\; \eta\\ \text{s.t.} &\sum_{t\in\mathcal{T}} \pi(t)\cdot S(t) - S(A) = S^{\textrm{diff}}, \quad\sum_{t\in\mathcal{T}} \pi(t) = 1 \\ &\pi(t)\geq 0\textrm{ }\forall {t\in\mathcal{T}},\quad(\mathbf{\pi,1},\delta)\in\mathcal{RE},\quad (S^{\textrm{diff}},\eta)\in \mathcal{SO}.
\end{split} \label{pro:MEV0-second-cone} \end{equation} \subsection{Data Generation} \label{sec:data-generation} Algorithm~\ref{alg:MEV} indicates that the input $S(A)$ for Problems~\ref{pro:MEV0}, \ref{pro:MEV0-relative} and \ref{pro:MEV0-second-cone} is obtained from $x_i,r_{i\rightarrow j}$. The generation of measurements $x_i$ and reputations $r_{i\rightarrow j}$ can be divided into the honest-agent and the adversarial-agent cases. For the former, the measurement is sampled from a Gaussian distribution $\mathcal{N}(\mu,\,\sigma)$, while the untrue measurement $\mu_{adv}$ is assigned to all adversarial agents directly, as in \eqref{equ:measurement-generation}: \begin{equation} \begin{cases} x_i \sim \mathcal{N}(\mu,\,\sigma) & \text{if } a_i \text{ is honest}, \\ x_i=\mu_{adv} & \text{if } a_i\text{ is adversarial}. \end{cases} \label{equ:measurement-generation} \end{equation} For simplicity, we assume the reputation $r_{i\rightarrow j}=r_i$, for all $a_j\in A$. For generating $r_i,a_i\in A$, a base reputation of $1$ is assigned to all agents. Further, a Bernoulli random variable $r_i^{\mathcal{B}}\sim\mathcal{B}(1,\,p)$ is used to determine whether an honest agent is respected: $a_i$ is honest and respected if $r_i^{\mathcal{B}}=1$. An honest and respected agent is then given extra reputation $r_i^{\mathcal{N}}$, sampled from a Gaussian distribution $\mathcal{N}(\mu_{rep},\,\sigma_{rep})$. The procedure is displayed in \eqref{equ:reputation-generation}: \begin{equation} r_{i} = \begin{cases} 1 + r_i^{\mathcal{B}}\cdot r_i^{\mathcal{N}} & \text{if } a_i \text{ is honest}, \\ 1 & \text{if } a_i\text{ is adversarial}. \end{cases} \label{equ:reputation-generation} \end{equation} \subsection{Measuring Entropy} Given a set of agents $A$ and the number of winners needed $M$, we can build two ordering sets: one of combinations $\mathcal{T}_{\textrm{com}}$ and the other of permutations $\mathcal{T}_{\textrm{per}}$.
Suppose an optimal probability measure is obtained from Problem~\ref{pro:MEV0} for each ordering set, denoted as $\pi^*_{\textrm{com}}$ for $\mathcal{T}_{\textrm{com}}$ and $\pi^*_{\textrm{per}}$ for $\mathcal{T}_{\textrm{per}}$, with the same input $S(A)$. See Figure~\ref{app-tab:ordering-set} for an example when $A=\{a_i,a_j,a_k\}$ and $M=1$. Notice that for each element $t\in\mathcal{T}_{\textrm{com}}$, we can find $M!(N-M)!$ elements in $\mathcal{T}_{\textrm{per}}$ that are equivalent to $t$ in terms of the election results. We use $\sim$ to denote this equivalence relation. For instance, each row of Figure~\ref{app-tab:ordering-set} displays an equivalent tuple of $t\in\mathcal{T}_{\textrm{com}}$ and $\tau\in\mathcal{T}_{\textrm{per}}$. Specifically, $t_1\in\mathcal{T}_{\textrm{com}}$ is equivalent to $\tau_1,\tau_2\in\mathcal{T}_{\textrm{per}}$, because their election results are the same, i.e., only agent $a_i$ gets elected. Then, we have $t_1 \sim\tau_1\sim\tau_2$. \begin{figure} \caption{This table displays two ordering sets, i.e., the combination set and the permutation set, when $A=\{a_i,a_j,a_k\}$ and $M=1$.} \label{app-tab:ordering-set} \end{figure} To compare the entropy of $\pi^*_{\textrm{com}}$ and $\pi^*_{\textrm{per}}$, we suggest \begin{equation} \begin{split} \textrm{Entropy} (\pi^*_{\textrm{com}})&:=-\sum_{t\in\mathcal{T}_{\textrm{com}}} \pi^*_{\textrm{com}}(t) \log \pi^*_{\textrm{com}}(t) \\ \textrm{Entropy} (\pi^*_{\textrm{per}})&:=-\sum_{t\in\mathcal{T}_{\textrm{com}}} \left(\sum_{\tau\in\mathcal{T}_{\textrm{per}}, \tau \sim t} \pi^*_{\textrm{per}}(\tau) \right) \log \left(\sum_{\tau\in\mathcal{T}_{\textrm{per}}, \tau \sim t} \pi^*_{\textrm{per}}(\tau) \right) \end{split} \label{equ:entropy} \end{equation} \subsection{Numeric Illustrations} With $S(A)$ extracted from data generated in \ref{sec:data-generation}, we have the following implementations: \begin{itemize} \item ``Permutation'': solving Problem~\ref{pro:MEV0-relative} with input
$\mathcal{T}=\mathcal{T}_{\textrm{per}},S(t),S(A)$, and optimal solutions $\pi^*_{\textrm{per}}$. \item ``Combination\_Lag'': solving Problem~\ref{pro:MEV0-second-cone} with input $\mathcal{T}=\mathcal{T}_{\textrm{com}},\lambda=2,S(t),S(A)$, and optimal solutions $\pi^*_{\textrm{com}}$. \end{itemize} Both problems are solved with the MOSEK Optimizer API for Python 9.3.20 \cite{mosek}. Figure~\ref{fig:mev-lines} displays the runtime, the entropy in \eqref{equ:entropy} and the RP distortion $S^{\textrm{diff}}$ in \eqref{equ:RP-distortion} of the optimal solutions $\pi^*_{\textrm{com}}$ and $\pi^*_{\textrm{per}}$, when the number of agents $N$ is $6,\dots,15$ for ``Combination\_Lag'' and $6,\dots,9$ for ``Permutation''. Note that ``Permutation'' is not implemented for larger $N$ due to its sharply growing runtime. For each $N$, both implementations are conducted 6 times ($6\times 2$ runs in total), with a new $S(A)$ generated every time. The average entropy, runtime and RP distortion of the 6 runs are presented as solid curves for ``Permutation'' and dashed curves for ``Combination\_Lag''. \begin{figure} \caption{The average results of entropy, runtime and RP distortion over 6 runs of ``Combination\_Lag'' and ``Permutation'', with the number of agents $N$ being $6,\dots,15$ and $6,\dots,9$, respectively.} \label{fig:mev-lines} \end{figure} \end{document}
\begin{document} \makeatletter \def\rank{\mathop{\operator@font rank}\nolimits} \def\det{\mathop{\operator@font det}\nolimits} \makeatother \newtheorem{thm}{Theorem}[section] \newtheorem{ex}[thm]{Example} \newtheorem{lem}[thm]{Lemma} \newtheorem{rmk}[thm]{Remark} \newtheorem{defi}[thm]{Definition} \newtheorem{cor}[thm]{Corollary} \title[asymptotic stabilization]{Design of the state feedback-based feed-forward controller asymptotically stabilizing the overhead crane at the desired end position} \author{Robert Vrabel} \address{Robert Vrabel, Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology, Institute of Applied Informatics, Automation and Mechatronics, Bottova~25, 917 01 Trnava, Slovakia} \email{[email protected]} \date{{\bf\today}} \begin{abstract} The problem of feed-forward control of an overhead crane system is discussed. By combining Kalman's controllability theory and the Hartman--Grobman theorem from the theory of dynamical systems, a linear, continuous state feedback-based feed-forward controller is designed that stabilizes the crane system at the desired end position of the payload. The efficacy of the proposed controller is demonstrated by comparing the simulation experiment results for an overhead crane with and without a time-varying length of the hoisting rope. \end{abstract} \keywords{Overhead crane system; state feedback-based feed-forward controller; asymptotic stabilization; simulation experiment} \maketitle \section[Introduction]{Introduction} For overhead crane control, it is required that the trolley reach the desired location as fast as possible while the payload swing is kept as small as possible during the transfer process.
However, it is extremely challenging to achieve these goals simultaneously owing to the underactuated characteristics of the crane system; more specifically, the system is underactuated with respect to the load sway dynamics, which makes the linear part of the overhead crane system uncontrollable, in the sense of Kalman's control theory, by a continuous control law. For this reason, the development of efficient control schemes for overhead cranes has attracted wide attention from the control community. For example, in \cite{Park} a nonlinear control law for container cranes with load hoisting was investigated, using the feedback linearization technique and a Lyapunov function approach to decouple the swing-angle dynamics from the trolley-movement and varying-rope-length dynamics. The authors in \cite{Tagawa_et_al} and \cite{Tagawa2_et_al} developed a sensorless vibration control system for an overhead crane with varying wire length by using a simulation-based control technique. In the paper \cite{Yang}, an adaptive nonlinear coupling control law was presented for the motion control of an overhead crane with constant rope length and without considering the mass moment of inertia of the load. By utilizing a Lyapunov-based stability analysis, the authors achieved asymptotic tracking of the crane position and stabilization of the payload sway angle of an overhead crane. A sliding mode controller composed of an approximated control and a switching action was designed in \cite{Tuan} for the combined control of cargo lifting, trolley motion, and cargo-swing suppression. In \cite{Tomczyk}, the problems of load operation and positioning under different wind disturbances are discussed, using a dynamic model with a state simulator. In all of these mentioned papers, but also in others, see, e.g.
\cite{Sorensen}, \cite[p.~273]{Zhang} and the references therein, the control of the sway angle of the payload is not considered; therefore, this angle is assumed to be small, which allows some approximations and truncations in the reference model (Remark~\ref{approx}). Also recall that for mathematical models that neglect the mass moment of inertia of the payload, as is usual for simplified models of crane systems, the model is singular for rope lengths near zero, which makes the system practically unanalyzable from the point of view of engineering practice. So, the purpose of this article is to show that by adding a control force for load sway damping (in the case of varying rope length), the linear part of the system becomes state controllable and the original overhead crane system becomes asymptotically stabilizable at the required end position by a linear and continuous state feedback; moreover, exact formulas for the control forces are given. The efficacy of the proposed controller is demonstrated by comparing the computer simulation experiment results for the overhead crane with and without varying length of the hoisting rope, respectively. Anti-swing control of an automatic overhead crane system is required to transfer the payload without causing excessive swing at the end position. With a fully automated overhead crane, the operators make the settings, and the crane automatically takes care of repetitive or difficult actions. This is especially useful in demanding and hazardous environments. Also, automated overhead cranes can reduce labor costs, track inventory, optimize storage, reduce damage, increase productivity and reduce capital expenses. Most of the proposed anti-swing controls use feedbacks that require two sensors to measure the trolley position and the swing angle. However, installing a swing angle sensor on a real overhead crane is often troublesome and also more costly.
Moreover, vibration measurement sensors are required, and this causes faults of the crane systems, especially in severe industrial environments (\cite{Tagawa_et_al}). Our approach is based on the feed-forward design of the control forces $F_z,$ $F_l$ and $F_{\theta},$ see Fig.~\ref{fig:M1}, applied to the crane system using a simulation-based control strategy, which will stabilize the system at the desired and in advance known end position of the payload. \begin{figure} \caption{Schematic diagram of the 2-D overhead crane system. The coordinates of the payload are $z_p(t)=z(t)+l(t)\sin\theta(t),$ $y_p(t)=-l(t)\cos\theta(t)$} \label{fig:M1} \end{figure} \section{Crane system dynamics. Reference model} Basically, an overhead crane is made up of a trolley (cart) moving along a horizontal axis with a load hung from a flexible rope. Fig.~\ref{fig:M1} shows the swing motion of the load caused by the trolley movement, in which $z$ is the trolley moving direction, $y$ is the vertical direction, $\theta(t)$ is the sway angle of the load, $z(t)$ is the position of the trolley, and $l(t)$ is the hoist rope length. The symbols $F_z,$ $F_l$ and $F_{\theta}$ denote the control forces applied to the trolley in the $z$-direction, to the payload in the $l$-direction and in the direction perpendicular to the $l$-direction, respectively. The following assumptions are made throughout the paper: \begin{itemize} \item[i)] The payload and trolley are connected by a massless, rigid rope, that is, a pendulum motion of the load is considered; \item[ii)] The trolley moves in the $z$-direction; \item[iii)] The payload moves in the $z$-$y$ plane; \item[iv)] All frictional elements in the trolley and hoist motions can be neglected; \item[v)] The rope elongation is negligible.
\end{itemize} The information on the sway angle, sway angular velocity, trolley displacement and velocity, hoisting rope length and its time rate of change is not assumed to be known. So, this paper presents a sensorless anti-swing control strategy for the automatic overhead crane. Using the Lagrangian method, the Lagrangian equation with respect to the generalized coordinate $q_i$ can be obtained as \begin{equation}\label{Lagrange_eq} \frac{d}{dt}\bigg(\frac{\partial L}{\partial\dot q_i}\bigg)-\frac{\partial L}{\partial q_i}=F_i, \end{equation} where $i = 1, 2, 3,$ $L = KE - PE$ ($KE$ means the system kinetic energy and $PE$ denotes the system potential energy), the $q_i$'s are the generalized coordinates, here $q_1,$ $q_2$ and $q_3$ indicate $z,$ $l$ and $\theta,$ respectively, and $F_i$ represents the nonconservative generalized force ($F_z,$ $F_l$ and $F_{\theta}$) associated with those coordinates. For the system under consideration, the total kinetic energy and potential energy are \[ KE=\frac12(M+m){\dot z}^2+\frac12m{\dot l}^2+\frac12m(l\dot\theta)^2+m\dot z(\dot l\sin\theta+l\dot\theta\cos\theta)+\frac12I{\dot\theta}^2 \] and \[ PE=-glm\cos\theta; \] the potential energy of the trolley subsystem is kept unchanged. Here, $m$ is the payload mass, $M$ is the mass of the trolley with the hoist system, $I$ is the mass moment of inertia of the payload, and $g$ is the gravitational acceleration. Using the Lagrangian equation (\ref{Lagrange_eq}), one obtains the following relations between the generalized coordinates $z,$ $l$ and $\theta:$ \begin{equation}\label{eq:z} (M+m)\ddot z+m\ddot l\sin\theta+2m\dot l\dot\theta\cos\theta+lm\ddot\theta\cos\theta-lm{\dot\theta}^2\sin\theta=F_z, \end{equation} \begin{equation}\label{eq:l} m\ddot z\sin\theta+m\ddot l-lm{\dot\theta}^2-gm\cos\theta=F_l, \end{equation} and \begin{equation}\label{eq:theta} lm\ddot z\cos\theta+(ml^2+I)\ddot\theta+2lm\dot l\dot\theta+glm\sin\theta=F_{\theta}.
\end{equation} Let us introduce the state variables $x_1=:z,$ $x_2=:l,$ $x_3=:\theta,$ $x_4=:\dot z,$ $x_5=:\dot l,$ and $x_6=:\dot \theta.$ Now, solving the equations (\ref{eq:z}), (\ref{eq:l}) and (\ref{eq:theta}) with regard to the variables $\dot x_4,$ $\dot x_5$ and $\dot x_6,$ one obtains the linear system of equations \[ \left( \begin{array}{ccc} M+m & m\sin\left(x_{3}\right) & mx_{2}\cos\left(x_{3}\right)\\ m\sin\left(x_{3}\right) & m & 0\\ mx_{2}\cos\left(x_{3}\right) & 0 & m{x^2_{2}}+I \end{array} \right) \left( \begin{array}{c} \dot x_4\\ \dot x_5\\ \dot x_6 \end{array} \right) \] \[ =\left( \begin{array}{c} F_{z}+mx_{2}{x^2_{6}}\sin\left(x_{3}\right)-2mx_{5}x_{6}\cos\left(x_{3}\right)\\ F_{l}+mx_{2}{x^2_{6}}+gm\cos\left(x_{3}\right)\\ F_{\theta}-2mx_{2}x_{5}x_{6}-gmx_{2}\sin\left(x_{3}\right) \end{array} \right) \] and its solution \[ \dot x_4= -\frac{1}{Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM} \] \[ \times\Bigg[\cos\left(x_{3}\right)\left(F_{\theta}mx_{2}+2Imx_{5}x_{6}\right) \] \[ +\sin\left(x_{3}\right)\left(F_{l}m{x^2_{2}}+F_{l}I+Igm\cos\left(x_{3}\right)\right) \] \[ -F_{z}I-F_{z}m{x^2_{2}}\Bigg], \] \[ \dot x_5=\frac{1}{m\left(Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM\right)} \] \[ \times\Bigg[F_{l}IM+F_{l}m^2{x^2_{2}}+F_{l}Im+Mm^2{x^3_{2}}{x^2_{6}} \] \[ +\frac{1}{2}F_{\theta}m^2x_{2}\sin\left(2x_{3}\right)-F_{z}m^2{x^2_{2}}\sin\left(x_{3}\right)+F_{l}Mm{x^2_{2}} \] \[ -F_{z}Im\sin\left(x_{3}\right)-F_{l}m^2{x^2_{2}}{\cos^2\left(x_{3}\right)}+Igm^2\cos\left(x_{3}\right)+Im^2x_{2}{x^2_{6}}{\cos^2\left(x_{3}\right)} \] \[ +Mgm^2{x^2_{2}}\cos\left(x_{3}\right)+IMgm\cos\left(x_{3}\right)+Im^2x_{5}x_{6}\sin\left(2x_{3}\right)+IMmx_{2}{x^2_{6}}\Bigg], \] and \[ \dot x_6=\frac{1}{Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM} \] \[ \times\Bigg[F_{\theta}M+F_{\theta}m{\cos^2\left(x_{3}\right)}-F_{z}mx_{2}\cos\left(x_{3}\right)+F_{l}mx_{2}\cos\left(x_{3}\right)\sin\left(x_{3}\right) \] \[ -2Mmx_{2}x_{5}x_{6}-Mgmx_{2}\sin\left(x_{3}\right)\Bigg], \] where
$F_{z},F_l,F_\theta$ represent the control forces. Now let us substitute $(M+m)u_1$ for $F_{z},$ $mu_2-gm\cos\left(x_{3}\right)$ for $F_{l}$ (here the term $-gm\cos\left(x_{3}\right)$ compensates the weight of the load) and $Iu_3$ for $F_{\theta},$ where $u_1, u_2, u_3$ are new control variables. The reference mathematical model for designing a feed-forward controller takes the form of the control system $\dot x =G(x,u),$ $G=(G_1,\dots,G_6)^T$ with \[ G_1=x_4, \] \[ G_2=x_5, \] \[ G_3=x_6, \] \[ G_4= -\frac{1}{Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM} \] \[ \times\Bigg[\cos\left(x_{3}\right)\left(Imu_{3}x_{2}+2Imx_{5}x_{6}\right) \] \[ +\sin\left(x_{3}\right)\left(m^2u_{2}{x^2_{2}}+Imu_{2}-gm^2{x^2_{2}}\cos\left(x_{3}\right)\right) \] \[ -Iu_{1}\left(M+m\right)-m\left(M+m\right)u_{1}{x^2_{2}}\Bigg], \] \[ G_5=\frac{1}{{Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM}} \] \[ \times\Bigg[IMu_{2}+m^2u_{2}{x^2_{2}}+Imu_{2}+IMx_{2}{x^2_{6}} \] \[ -gm^2{x^2_{2}}\cos\left(x_{3}\right)-IMu_{1}\sin\left(x_{3}\right)-m^2u_{1}{x^2_{2}}\sin\left(x_{3}\right)+Mmu_{2}{x^2_{2}} \] \[ -Imu_{1}\sin\left(x_{3}\right)+gm^2{x^2_{2}}{\cos^3\left(x_{3}\right)}-m^2u_{2}{x^2_{2}}{\cos^2\left(x_{3}\right)} \] \[ +Mm{x^3_{2}}{x^2_{6}}+\frac12 Imu_{3}x_{2}\sin\left(2x_{3}\right)+Imx_{5}x_{6}\sin\left(2x_{3}\right) \] \[ -Mmu_{1}{x^2_{2}}\sin\left(x_{3}\right)+Imx_{2}{x^2_{6}}{\cos^2\left(x_{3}\right)}\Bigg], \] and \[ G_6= \frac{1}{Mm{x^2_{2}}+Im{\cos^2\left(x_{3}\right)}+IM} \] \[ \times\Bigg[IMu_{3}-x_{2}\left[2Mmx_{5}x_{6}+m\left(M+m\right)u_{1}\cos\left(x_{3}\right)\right] \] \[ +x_{2}\sin\left(x_{3}\right)\left[m\cos\left(x_{3}\right)\left(mu_{2}-gm\cos\left(x_{3}\right)\right)-Mgm\right]+Imu_{3}{\cos^2\left(x_{3}\right)}\Bigg].
\] \begin{rmk}\label{approx} The use of the control force $F_{\theta},$ represented by the control variable $u_3,$ admits greater angles of the payload sway during transportation; therefore, one must work with the complete system, and the often used small-angle approximations of the type $\sin\theta\approx 0$ (or $\sin\theta\approx\theta$), $\dot\theta^2\approx 0$ and $\cos\theta\approx 1$ cannot be applied in our analysis. \end{rmk} \section{Theoretical background. Control law design} Our approach to the asymptotic stabilization of the overhead crane system is based on two cornerstones of modern control theory and the theory of dynamical systems. Consider a linear time-invariant (LTI) system $\dot x=Ax+Bu,$ where $A$ and $B$ are $n\times n$ and $n\times m$ constant real matrices, respectively. A fundamental result of linear control theory is that the following three conditions are equivalent, see, e.~g. \cite{AntMich}: \begin{itemize} \item[(i)] the pair $(A,B)$ is controllable; \item[(ii)] $\mathop{\mathrm{rank}}\nolimits\mathcal{C}_{(A,B)}=n,$ where $\mathcal{C}_{(A,B)}=:\left( B\ AB\ A^2B\ \cdots\ A^{n-1}B\right)$ is the $n\times mn$ Kalman controllability matrix; \item[(iii)] for every $n$-tuple of real and/or complex conjugate numbers $\lambda_1,$ $\lambda_2,$ $\dots,$ $\lambda_n,$ there exists an $m\times n$ state feedback gain matrix $K$ such that the eigenvalues of the closed-loop system matrix $A_{cl}=A-BK$ are the desired values $\lambda_1,\lambda_2,\dots,\lambda_n.$ \end{itemize} In general, the nonlinear control system \begin{equation*} \dot x=G(x,u), \ t\geq 0,\ x\in\mathbb{R}^n,\ u\in\mathbb{R}^m, \ \dot{} =:d/dt, \end{equation*} with state feedback of the form $u=-Kx,$ for which it is assumed that $x=0$ is a solution, that is, $G(0,0)=0,$ is studied.
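As a numerical illustration of criterion (ii), the following NumPy sketch checks the Kalman rank condition for the pair $(A,B)$ that is obtained for the crane model in the next section (the matrices are hard-coded here; this is a check of the criterion, not part of the controller design):

```python
import numpy as np

# Linear part of the crane model: three decoupled double integrators
# (A = G_x(0,0), B = G_u(0,0) as computed for the variable-rope case)
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])

n = A.shape[0]
# Kalman controllability matrix (B  AB  A^2B  ...  A^{n-1}B)
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
assert np.linalg.matrix_rank(C) == n   # criterion (ii): (A, B) is controllable
```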
It is well known that if the pair $(A,B),$ where $A=G_x(0,0)$ and $B=G_u(0,0)$ are the corresponding Jacobian matrices with respect to the state and input variables, respectively, evaluated at $(0,0),$ is controllable, then the LTI system $\dot x=(A-BK)x$ is, in some neighborhood of the origin, topologically equivalent, preserving the parametrization by time, to the system $\dot x=G(x,-Kx),$ provided that the eigenvalues of the matrix $A-BK$ have non-zero real parts. The precise statement of this property is given by the Hartman--Grobman theorem, see, e.~g. \cite[p.~120]{Perko}, which provides the exact geometric characterization of the trajectories of the closed-loop system in the neighborhood of the equilibrium state. Thus, if the matrix $K$ is chosen such that all eigenvalues of $A-BK$ have negative real parts, the nonlinear system $\dot x=G(x,-Kx)$ is locally asymptotically stable in the neighborhood of $x=0.$ Here, the usual definition of local asymptotic stability is meant, that is, the solution $x=0$ of the system $\dot x=G(x,-Kx)$ is asymptotically stable if for every $\varepsilon>0$ there exists $\delta>0$ such that if $||x(0)||\leq\delta$ then $||x(t)||\leq\varepsilon$ for all $t\geq 0,$ and, moreover, $||x(t)||\rightarrow 0^+$ as $t\rightarrow \infty,$ see, e.~g., \cite[p.~19]{Barbashin}. Here $||\cdot||$ denotes the Euclidean vector norm. Now, by imposing the natural requirement for real working devices that the state variables $x_i,$ $i=1,\dots,6,$ be bounded, namely that $||x||\leq\Delta$ for some constant $\Delta>0,$ the local asymptotic stability of the closed-loop system $\dot x=G(x,-Kx)$ around its equilibrium position $x=0$ will be proved by (formally and generally) computing a lower bound of the region of attraction.
In engineering practice, one does not need to know this region explicitly because all generated trajectories are verified and validated for their suitability and appropriateness, and, moreover, a theoretical analysis of such highly coupled control systems as investigated in the present paper is practically impossible. Let the gain matrix $K$ be such that the real parts of all eigenvalues of the matrix $A_{cl}$ are negative. Then \[ \dot x=G(x,-Kx)=A_{cl}x+R_1(x), \] where $R_1(x)$ is the Taylor remainder; obviously $||R_1(x)||=o(||x||)$ as $||x||\rightarrow 0^+.$ This implies that there exists a constant $\sigma=\sigma(K)>0$ such that $||R_1(x)||\leq\sigma(K)||x||$ for all $||x||\leq\tilde\Delta(\sigma).$ Let us consider as a Lyapunov function candidate $V(x)=x^TPx,$ where the symmetric and positive definite matrix $P$ is a solution of the Lyapunov equation \[ PA_{cl}+A^T_{cl}P=-Q(K) \] for an appropriate choice of the symmetric and positive definite matrix $Q,$ which can be solved as an optimization problem with regard to the gain matrix $K.$ Let the constant $\sigma(K)$ be such that \[ \lambda_{\min}(Q(K))>2\sigma(K)\lambda_{\max}(P), \] where $\lambda_{\min}$ and $\lambda_{\max}$ denote the minimal and maximal eigenvalue of the matrix, respectively.
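Numerically, $P$ can be obtained from the Lyapunov equation with standard tools; the SciPy sketch below uses an illustrative two-dimensional Hurwitz matrix (not the crane model itself) and checks that the resulting $P$ is symmetric positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A_cl = np.array([[0.0, 1.0],
                 [-0.105, -0.65]])   # Hurwitz: poles -0.3 and -0.35
Q = np.eye(2)

# solve  P A_cl + A_cl^T P = -Q
# (scipy solves  a X + X a^T = q, so take a = A_cl^T and q = -Q)
P = solve_continuous_lyapunov(A_cl.T, -Q)

assert np.allclose(P, P.T)                      # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)        # P is positive definite
assert np.allclose(P @ A_cl + A_cl.T @ P, -Q)   # residual of the equation
```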
Then, along the trajectories of the system $\dot x=G(x,-Kx)=:g_K(x),$ we have \[ \dot V(x(t))=x^T(t)Pg_K(x(t))+g^T_K(x(t))Px(t) \] \[ =x^TP[A_{cl}x+R_1(x)]+[x^TA^T_{cl}+R^T_1(x)]Px \] \[ =x^T(PA_{cl}+A^T_{cl}P)x+2x^TPR_1(x)=-x^TQx+2x^TPR_1(x), \] by using the fact that $x^TPR_1(x)$ is a scalar, that is, $x^TPR_1(x)=\left(x^TPR_1(x)\right)^T=R^T_1(x)Px.$ Because for each $n\times n$ symmetric and positive definite real matrix $C$ we have \[ \lambda_{\min}(C)||x||^2\leq x^TCx\leq\lambda_{\max}(C)||x||^2,\ x\in\mathbb{R}^n \] (a special case of the Rayleigh--Ritz theorem, \cite[p.~176]{HornJohnson}) and, by the Cauchy--Schwarz inequality, \[ x^TPR_1(x)\leq||Px||\,||R_1(x)||\leq\lambda_{\max}(P)||x||\,||R_1(x)||\leq\sigma(K)\lambda_{\max}(P)||x||^2, \] one gets that \[ \dot V(x(t))\leq-\bigg( \lambda_{\min}(Q(K))-2\sigma(K)\lambda_{\max}(P) \bigg)||x||^2<0 \] for $||x||\leq\tilde\Delta,$ $x\neq0,$ which implies the local asymptotic stability of the zero solution of the closed-loop system $\dot x=G(x,-Kx).$ One of the main aims of the present paper lies in comparing the performance of the designed simulation- and state feedback-based feed-forward control for the overhead crane system, stabilizing the system in its end position, for the crane with variable and constant length of the hoisting rope, Section~\ref{variable_length} and Section~\ref{constant_length}, respectively. \section{Application to the overhead crane with variable length of rope. Simulation experiment in MATLAB}\label{variable_length} First, let us verify that the linear part of the overhead crane system $\dot x =G(x,u)$ is controllable at the point $(x,u)=(0,0):$ \[ A=G_x(0,0)= \left(\begin{array}{cccccc} 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right),\ B=G_u(0,0)= \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array}\right).
\] Because $A^2=0,$ for the rank of the controllability matrix the equality \[ \mathop{\mathrm{rank}}\nolimits \mathcal{C}_{A,B}=\mathop{\mathrm{rank}}\nolimits (B\ AB\ A^2B\ A^3B\ A^4B\ A^5B)=\mathop{\mathrm{rank}}\nolimits (B\ AB) \] holds, and since \[ (B\ AB)= \left(\begin{array}{cccccc} 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \end{array}\right) \] with $\det(B\ AB)= -1\neq0,$ the linear part of the system $\dot x=G(x,u)$ is controllable. As has been analyzed above, all eigenvalues of the closed-loop matrix $A_{cl}=A-BK$ can be arbitrarily assigned by appropriately selecting a state feedback gain matrix $K.$ \begin{rmk} At this place it is worth noting that if the sway angle control force $F_{\theta}$ is not considered, the crane system may not be locally asymptotically stabilizable at the desired final position by using a linear state feedback control law. \end{rmk} Let the desired (and permissible) eigenvalues of the closed-loop system $\dot x=A_{cl}x$ be $p=[-0.1\ -0.15\ -0.2\ -0.25\ -0.3\ -0.35],$ determined in practice on the basis of the crane work parameters such as, for example, the maximum permissible velocity of the equipment used. In general, it is advisable to choose these eigenvalues real to avoid oscillating trajectories. For (negative) real eigenvalues, the convergence to the equilibrium point as $t\rightarrow \infty$ will be monotonic.
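The gain matrix returned by MATLAB's {\tt place} in the next paragraph can be cross-checked by a direct eigenvalue computation; the following NumPy sketch (the gain values are copied from the MATLAB output reported below) verifies that it assigns exactly the desired poles:

```python
import numpy as np

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
p = [-0.1, -0.15, -0.2, -0.25, -0.3, -0.35]   # desired closed-loop poles

# gain matrix as reported by MATLAB's place(A,B,p)
K = np.array([[0.1050, 0.0,    0.0,    0.6500, 0.0,    0.0],
              [0.0,    0.0300, 0.0,    0.0,    0.3500, 0.0],
              [0.0,    0.0,    0.0250, 0.0,    0.0,    0.3500]])

eig = np.linalg.eigvals(A - B @ K)
assert np.allclose(np.sort(eig.real), np.sort(p))   # poles placed as desired
assert np.allclose(eig.imag, 0.0)                   # all poles are real
```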
The MATLAB output of the pole placement command {\tt K = place(A,B,p)} gives \[ K= \left(\begin{array}{cccccc} 0.1050 & 0 & 0 & 0.6500 & 0 & 0 \\ 0 & 0.0300 & 0 & 0 & 0.3500 & 0\\ 0 & 0 & 0.0250 & 0 & 0 & 0.3500 \end{array}\right), \] which locally asymptotically stabilizes the equilibrium point $x_e=0$ of the system $\dot x=G(x,-Kx).$ The routine {\tt place} in MATLAB uses the algorithm of \cite{KaNiDo} which, for multi-input systems, optimizes the choice of eigenvectors for a robust solution, minimizing the sensitivity of the assigned poles to perturbations in the system and gain matrices. A general theory regarding the pole placement problem for linear systems can be found in \cite[p.~335]{AntMich}. Now, let the desired end position of the payload be $\tilde x_{e,1} = 10,$ $\tilde x_{e,2} = 3$ and $\tilde x_{e,i} = 0$ for $i=3,4,5,6,$ corresponding to the placement $z=\tilde x_{e,1}$ and $y=-\tilde x_{e,2}$ in Fig.~\ref{fig:M1}. The state variables transformation $\tilde x=\Phi (x)$ defined by the formula \begin{equation*} \Phi: \tilde x_1= \tilde x_{e,1}+x_1,\, \tilde x_2=\tilde x_{e,2}-x_2,\, \tilde x_3=x_3,\, \tilde x_4 =x_4,\, \tilde x_5 =-x_5,\, \tilde x_6=x_6 \end{equation*} is used. Obviously, $\Phi^{-1}(\tilde x_e)=0.$ Then the original system $\dot x=G(x,u)$ transformed to the new equilibrium point $\tilde x_e$ is $\dot{\tilde x}=\tilde G\left(\tilde x, \tilde u\right),$ where \[ \tilde G\left(\tilde x, \tilde u\right)=\left(\tilde x_4,\tilde x_5,\tilde x_6,G_4\left(\Phi^{-1}(\tilde x), \tilde u\right),-G_5\left(\Phi^{-1}(\tilde x),\tilde u\right), G_6\left(\Phi^{-1}(\tilde x), \tilde u\right) \right)^T, \] with $\tilde u$ in place of $u.$ The direct calculation gives $\tilde A=:\tilde G_{\tilde x}(\tilde x_e,0)=A$ and \[ \tilde B=:\tilde G_{\tilde u}(\tilde x_e,0)= \left( \begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1 \end{array} \right).
\] The feedback control law will be sought in the form $\tilde u=-\tilde Kx,$ so that $\dot{\tilde x}=\tilde G\left(\tilde x, -\tilde K\Phi^{-1}(\tilde x)\right)$ with $\tilde A_{cl}=\tilde A-\tilde B\tilde K\Phi_{\tilde x}^{-1}(\tilde x_e),$ where $\Phi_{\tilde x}^{-1}(\tilde x_e)$ denotes the Jacobian matrix of the inverse of the transformation $\Phi$ evaluated at $\tilde x_e.$ A sufficient condition for the matrices $A_{cl}=A-BK$ and $\tilde A_{cl}$ to have the same eigenvalues is $BK=\tilde B\tilde K\Phi_{\tilde x}^{-1}(\tilde x_e),$ that is, the feedback gain matrix $\tilde K$ may be directly derived from the matrix $K$ by multiplying the second row and the second and fifth columns of $K$ by $(-1).$ For the purpose of numerical simulation, the following data will be used: \begin{itemize} \item[] $M=0.2\ [\times10^3\,\mathrm{kg}],\ $ $m=10\ [\times10^3\,\mathrm{kg}],\ $ $I=4\ [\times10^3\,\mathrm{kg\, m^2}],\ $ $g = 9.81\ [\mathrm{m\, s^{-2}}].$ \end{itemize} Now, let the starting position be $ \tilde x(0) = (0\ 3\ 0\ 0\ -0.5\ 0)^T,$ corresponding to $z=0$ and $y=-3$ in Fig.~\ref{fig:M1}, that is, the payload starts from rest and is pulled upwards with a velocity of $\tilde x_5=-0.5\ [\mathrm{m\, s^{-1}}].$ The time evolution of the state variables $\tilde x_i,$ $i=1,\dots,6$ is displayed in Fig.~\ref{solutions_xi} and the corresponding control forces $\tilde F_{z},$ $\tilde F_l$ and $\tilde F_{\theta}$ are depicted in Fig.~\ref{control_forces}.
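The qualitative behaviour of this experiment can be anticipated from the linearized closed-loop dynamics: the deviation $e$ of the state from the end position approximately obeys $\dot e=(A-BK)e$. The sketch below propagates the initial deviation of the scenario above through this linearization (the figures themselves come from the full nonlinear model, so this is only a rough consistency check):

```python
import numpy as np
from scipy.linalg import expm

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
K = np.array([[0.1050, 0.0,    0.0,    0.6500, 0.0,    0.0],
              [0.0,    0.0300, 0.0,    0.0,    0.3500, 0.0],
              [0.0,    0.0,    0.0250, 0.0,    0.0,    0.3500]])
A_cl = A - B @ K

# deviation of the starting state from the desired end position
e0 = np.array([-10.0, 0.0, 0.0, 0.0, -0.5, 0.0])
e_T = expm(A_cl * 100.0) @ e0        # linearized deviation after 100 s

# all poles lie in [-0.35, -0.1], so the deviation decays towards zero
assert np.all(np.linalg.eigvals(A_cl).real < 0)
assert np.linalg.norm(e_T) < 1e-2 * np.linalg.norm(e0)
```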
Summarizing, the general strategy for stabilizing the system under consideration at the desired end position is as follows: \begin{itemize} \item[i)] First, the controllability of the linear part of the mathematical model of the overhead crane system derived above must be verified; \item[ii)] Secondly, the range of the eigenvalues $\lambda_i$ of the closed-loop system is established, taking into account the technical limitations of the specific overhead crane, for example, the maximum permissible velocities of the trolley and hoist device or the jerk during starting; \item[iii)] Thirdly, using the appropriate state variables transformation, the equilibrium point $x=0$ of the system $\dot x=G(x,-Kx)$ is translated to the new position $\tilde x_e,$ which is the desired end position of the payload; \item[iv)] Fourthly, for the desired eigenvalues of the closed-loop system, the gain matrix $K$ is calculated using the MATLAB command {\tt place}; \item[v)] Finally, the numerical simulation in the MATLAB environment is performed and the appropriateness of the generated trajectory is verified. Subsequently, the control forces \[ \tilde F_{z}(t)=(M+m)\tilde u_1(t)\ [=F_{z}(t)], \] \[ \tilde F_l(t)=m\tilde u_2(t)-gm\cos\left(x_{3}(t)\right)\ [\tilde F_l(t)+F_l(t)=-2gm\cos(x_3(t))], \] and \[ \tilde F_{\theta}(t)=I\tilde u_3(t)\ [=F_{\theta}(t)], \] $\tilde u(t)=-\tilde Kx(t),$ stabilizing the overhead crane system at the desired end location of the payload, are applied to the system. \end{itemize} \begin{figure} \caption{The solutions $\tilde x_i,$ $i=1,\dots,6$ of the reference model with varying rope length} \label{solutions_xi} \end{figure} \begin{figure} \caption{The control forces $\tilde F_{z},$ $\tilde F_l$ and $\tilde F_{\theta}$ of the reference model with varying rope length} \label{control_forces} \end{figure} \section{Application to the overhead crane with constant length of rope.
Simulation experiment in MATLAB}\label{constant_length} For comparison purposes, in this section the simulation experiment with the same data as in the previous section is performed, with the difference that $\tilde l(t)=\tilde x_2(t)\equiv \tilde x_{e,2}=3$ and so $\dot{\tilde l}(t)=\tilde x_5(t)\equiv0.$ Thus, the number of state variables and governing equations reduces to four and only one control force, $F_z,$ is considered, so the control system is underactuated. Specifically, from the equations (\ref{eq:z}) and (\ref{eq:theta}) with $F_{\theta}\equiv0,$ one obtains for $\dot x_4$ and $\dot x_6$ the system of linear equations \[ \left( \begin{array}{cc} M+m & lm\cos\left(x_{3}\right)\\ lm\cos\left(x_{3}\right) & ml^2+I \end{array} \right) \left( \begin{array}{c} \dot x_4\\ \dot x_6 \end{array} \right) \] \[ =\left( \begin{array}{c} F_{z}+lm{x^2_{6}}\sin\left(x_{3}\right)\\ -glm\sin\left(x_{3}\right) \end{array} \right), \] and after substituting $u_1$ for $F_{z}$ one obtains the system $\dot x =G(x,u),$ $x=(x_1,x_3,x_4,x_6)^T,$ $u=u_1,$ $G=(G_1,G_3,G_4,G_6)^T,$ where \[ G_1=x_4, \] \[ G_3=x_6, \] \[ G_4 =\frac{1}{l^2m^2{\sin^2\left(x_{3}\right)}+Ml^2m+Im+IM} \] \[ \times\Bigg[Iu_{1}+\sin\left(x_{3}\right)\left(l^3m^2{x^2_{6}}+Ilm{x^2_{6}}\right) \] \[ +l^2mu_{1}+\frac12gl^2m^2\sin\left(2x_{3}\right)\Bigg], \] \[ G_6 = -\frac{1}{l^2m^2{\sin^2\left(x_{3}\right)}+Ml^2m+Im+IM} \] \[ \times\Bigg[\cos\left(x_{3}\right)\left[l^2m^2{x^2_{6}}\sin\left(x_{3}\right)+lmu_{1}\right] \] \[ +lm\left(Mg+gm\right)\sin\left(x_{3}\right)\Bigg]. \] Now, the controllability of the linear part of this system with \[ A=G_x(0,0)= \left(\begin{array}{cccc} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 0 & \frac{gl^2m^2}{Mml^2+I\left(M+m\right)} & 0 & 0\\ 0 & -\frac{glm\left(M+m\right)}{Mml^2+I\left(M+m\right)} & 0 & 0 \end{array}\right), \] \[ B=G_{u_1}(0,0)=\left(\begin{array}{c} 0\\ 0\\ \frac{ml^2+I}{Im+M\left(ml^2+I\right)}\\ -\frac{lm}{Mml^2+I\left(M+m\right)} \end{array}\right), \] will be verified.
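The closed-form determinant of the $4\times4$ controllability matrix stated in the following paragraph can be checked numerically; the NumPy sketch below uses the parameter values of the simulation experiments ($M=0.2$, $m=10$, $I=4$, $g=9.81$, constant rope length $l=3$):

```python
import numpy as np

# parameters as in the simulation experiments (M_t is the trolley mass M)
M_t, m, I, l, g = 0.2, 10.0, 4.0, 3.0, 9.81
D = M_t * m * l**2 + I * (M_t + m)

A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, g * l**2 * m**2 / D, 0.0, 0.0],
              [0.0, -g * l * m * (M_t + m) / D, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [(m * l**2 + I) / D], [-(l * m) / D]])

# controllability matrix (B  AB  A^2B  A^3B) and its determinant
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
det_C = np.linalg.det(C)

# closed-form expression for det C stated in the text
assert np.isclose(det_C, -g**2 * l**4 * m**4 / D**4)
assert not np.isclose(det_C, 0.0)   # hence the linear part is controllable
```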
The controllability matrix $\mathcal{C}_{(A,B)}=(B\ AB\ A^2B\ A^3B)$ is a square matrix and \[ \det\mathcal{C}_{(A,B)} = -\frac{g^2 l^4 m^4}{{\left(M m l^2+I \left(M+m\right)\right)}^4}\neq 0, \] which implies the controllability of the linear part of the system under consideration. \begin{figure} \caption{The solutions $\tilde x_i,$ $i=1,3,4,6$ of the reference model with constant rope length} \label{solutions_lconst_xi} \end{figure} \begin{figure} \caption{The control force $\tilde F_{z}$ of the reference model with constant rope length} \label{control_force_Lconst} \end{figure} For the simulation experiment, the eigenvalues $p = [-0.2\ -0.25\ -0.3\ -0.35]$ of the closed-loop system are used, representing a conservative selection from the set of eigenvalues used in the previous section, which are achieved for the gain matrix \[ K=(0.0010\ 99.1882\ 0.0159 \ -2.1061). \] The starting position is the same as in the previous simulation with a variable rope length, namely, $\tilde x(0)=(0\ 0\ 0\ 0)^T.$ The time evolution of the state variables $\tilde x_1,$ $\tilde x_3,$ $\tilde x_4,$ $\tilde x_6$ and the control force $\tilde F_z$ is depicted in Figs.~\ref{solutions_lconst_xi} and~\ref{control_force_Lconst}, respectively. Comparing the corresponding figures, one can see a substantial prolongation of the transportation time to the end position, although with significantly less payload sway. \section{Conclusions} In this paper, a linear control law for fully automated overhead crane systems was proposed, with the aim to suppress the sway motion and to reduce the overall transportation time using state feedback-based feed-forward control. It was shown that by an appropriate choice of the state feedback gain matrix, the crane system can be asymptotically stabilized around the desired end position.
Although the technical realization of the additional device for control of the sway angle (to ensure the controllability of the linear part of the system) requires some one-off implementation costs, the numerical simulations indicate a substantial reduction of the transportation time (up to 50\%) in comparison with the overhead crane system with fixed rope length, as demonstrated by the top-left sub-figures in Figs.~\ref{solutions_xi} and \ref{solutions_lconst_xi}, which may be desirable under certain circumstances. \end{document}
\begin{document} \title{Global existence of weak solutions of a model for electrolyte solutions -- Part 1: Two-component case} \author[1]{Matthias Herz} \author[1]{Peter Knabner} \renewcommand\Affilfont{\itshape\small} \affil[1]{Department of Mathematics, University of Erlangen-N\"urnberg, Cauerstr. 11, D-91058 Erlangen, Germany} \date{\today} \maketitle \begin{abstract} This paper analytically investigates the \dpnp, a mathematical model for electrolyte solutions. We consider electrolyte solutions that consist of a neutral fluid and two suspended, oppositely charged chemical species with arbitrary valencies~$\zl[1]>0>\zl[2]$. We prove global existence and uniqueness of weak solutions in two and three space dimensions. \par So far, most of the existence results have been proven for symmetric electrolyte solutions. These solutions consist of a neutral fluid and two suspended charged chemical species with symmetric valencies~$\pm\zl[]$. As many electrolyte solutions in biological and hydrodynamical applications are not symmetric, the presented extension of the previous existence results is an important step. \\[2.0mm] \textbf{Keywords:} Global existence, electrolyte solution, electrohydrodynamics, Moser iteration, generalized Schauder fixed point theorem, \dpnp. \end{abstract} \section{Introduction}\label{sec:Introduction} Many complicated phenomena in hydrodynamics and biology can be modeled in the context of electrolyte solutions. The reason for this is that models for electrolyte solutions must simultaneously capture the following three ubiquitous processes: (i) the transport of the charged particles, (ii) the hydrodynamic fluid flow, (iii) the electrostatics. Moreover, these processes are strongly coupled in electrolyte solutions.
Firstly, the electrostatic field is generated by the movement of the charged particles, and conversely, the movement of the charged particles is influenced by the electrostatic field. Secondly, the fluid flow changes the flux of the charged particles, and conversely, the moving charged particles lead to a force term, which generates an electroosmotic fluid flow. \par The classical models for electrolyte solutions that capture the fully coupled nature of these processes are the so-called \pnp{s} (for a fluid at rest) and the \dpnp{s} (for laminar flow in porous media). In particular, \pnp{s} are also known as drift--diffusion systems, van Roosbroeck systems, or semiconductor device equations. For a detailed derivation of these systems, we refer to \cite{DreyerGuhlkeMuller, Elimelech-book, Juengel-book, Masliyah-book, Russel-book, Samohyl, Schuss, Mielke10, Probstein-book}. Among many others, these models have been investigated analytically in \cite{BT2011, Burger12, Castellanos-book, GasserJungel97, Glitzky2004, Gajewski, HyongEtAll, Frehse-book, Markovich-book, Mielke10, Roubicek2005-1, Schuss, Wolfram, Schmuck11, herz_ex_dnpp}. \par So far, most of the analytical investigations have been carried out for electrolyte solutions, which consist of an electrically neutral solvent (at rest) and two oppositely charged chemical species with symmetric valencies~$\pm z$. One reason for this is that the symmetric valencies~$\pm1$ naturally occur in the context of semiconductor devices, and most of the analytical investigations are related to semiconductor devices. Previous existence results, which consider charged solutes with arbitrary valencies, were proven, amongst others, in \cite{BT2011, Burger12, Glitzky2004, Roubicek2005-1, Roubicek2006}. These papers considered electrolyte solutions with multiple suspended charged solutes. We investigate this multicomponent case in the second part of this work, whereas in this paper, we focus on the two-component case.
This means we consider electrolyte solutions that consist of a neutral solvent and two oppositely charged solutes. In this situation, the results of this paper go beyond the above mentioned papers. More precisely, the authors of \cite{BT2011} proved local in time existence. The results of \cite{Roubicek2005-1, Roubicek2006} were proven under the additional assumption of a volume-additivity constraint and by including an additional reaction force term in the transport equations. These additional assumptions allow the authors of \cite{Roubicek2005-1, Roubicek2006} to bypass the main difficulties, which we briefly sketch below. Finally, the paper~\cite{Burger12} dealt with a stationary system, and existence in two dimensions was established in \cite{Glitzky2004}. \par Additional difficulties occur in the proof of the crucial a~priori estimates if we allow for two oppositely charged solutes with arbitrary valencies~$\zl[1]>0>\zl[2]$. More precisely, the main difficulty is to obtain a~priori estimates for the solutes~$\cl$, which are independent of the electric field. Such a~priori estimates are easily obtained in the case of symmetric valencies~$\pm z$. To briefly sketch this, we consider two charged solutes~$\cl[1]$ (positively charged) and $\cl[2]$ (negatively charged) with symmetric valencies~$\pm z$. In the proof of a~priori estimates for $\cl$, we test the equations for $\cl$ with the standard test functions~$\test=\cl$, and we recall that, according to Gauss's law, the electric field~$\fieldEL$ satisfies $\grad\cdot\vecE=z(\cl[1]-\cl[2])$. Thereby, we obtain for the sum of the \enquote{electric~drift integrals}, which describe the electrophoretic motion, \begin{align*} &-2\sum_l \scp{\pm z\cl\vecE}{\grad\cl~}_{\Lp{2}{}} =z\scp{\grad\cdot\vecE}{(\cl[1])^2-(\cl[2])^2}_{\Lp{2}{}} \\ &= z^2 \scp{\cl[1] - \cl[2]}{(\cl[1])^2 -(\cl[2])^2}_{\Lp{2}{}} \geq 0, \quad \text{since } \sqbrac{a - b}\sqbrac{a^2 - b^2} = (a-b)^2(a+b) \geq 0 ~\text{ for } a,b\geq0~.
\end{align*} Due to this pointwise sign condition, we can omit the sum of the \enquote{electric~drift integrals}, and the a~priori estimates for $\cl$ are naturally independent of the electric field~$\fieldEL$. In the case of arbitrary valencies, such a pointwise sign condition does not hold if we use the standard test functions~$\test=\cl$. However, we propose to carry over this pointwise sign condition by using the weighted test functions~$\test=\abs{\zl}\cl$ instead. \par The contribution of this paper is to prove global existence, uniqueness, and boundedness of weak solutions for electrolyte solutions that consist of a neutral solvent and two charged solutes with arbitrary valencies. Moreover, we do not impose any further restrictions such as the often used electroneutrality constraint, cf. \cite{Allaire10}, or the volume additivity constraint, cf. \cite{Roubicek2005-1}. This result is a first step towards the treatment of multicomponent electrolyte solutions, which contain $L\in\setN$ solutes. The presented proof is based on the weighted test functions~$\test=\abs{\zl}\cl$. In particular, we use these weighted test functions in the proof of \cref{lemma:energy}, which is the basis for the subsequent a~priori estimates in \cref{lemma:bounded} and \cref{thm:aprioriBounds}. \par The rest of this paper is organized as follows: In \cref{sec:Model}, we present the \dpnp\ and in \cref{sec:Uniqueness}, we prove that solutions are unique. Then, we introduce the fixed point operator in \cref{sec:fpi-operator}. Finally, we show the crucial a~priori estimates in \cref{subsec:aprioriBounds}, and in \cref{subsec:fpi-point}, we show the global existence. \section{Model Equations}\label{sec:Model} Subsequently, we present the \dpnp. This system is a field-scale model\footnote{For a detailed introduction to the modeling of porous media and the notion of field-scales and pore-scales, we refer to \cite{bear-book}.} for electrolyte solutions in porous media.
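The pointwise sign condition above, and its weighted analogue for arbitrary valencies, can be checked numerically. The following minimal sketch samples random nonnegative concentrations; the weighted expression $(z_1c_1+z_2c_2)(z_1^2c_1^2-z_2^2c_2^2)$ is our reading of what the weighted test functions produce, and it reduces to $(a-b)(a^2-b^2)=(a-b)^2(a+b)\geq0$ via the substitution $a=z_1c_1$, $b=|z_2|c_2$.

```python
import random

random.seed(0)

def symmetric_term(a, b):
    # [a - b][a^2 - b^2] = (a - b)^2 (a + b) >= 0 for a, b >= 0
    return (a - b) * (a**2 - b**2)

def weighted_term(z1, z2, c1, c2):
    # Weighted analogue for arbitrary valencies z1 > 0 > z2:
    # substituting a = z1*c1, b = |z2|*c2 turns this expression
    # into (a - b)(a^2 - b^2), which is nonnegative as well.
    return (z1 * c1 + z2 * c2) * (z1**2 * c1**2 - z2**2 * c2**2)

samples = []
for _ in range(10_000):
    c1, c2 = random.uniform(0, 10), random.uniform(0, 10)
    z1, z2 = random.randint(1, 5), -random.randint(1, 5)
    samples.append((symmetric_term(c1, c2), weighted_term(z1, z2, c1, c2)))

min_sym = min(s for s, _ in samples)
min_wgt = min(w for _, w in samples)
print(min_sym, min_wgt)
```

In exact arithmetic both sampled quantities are nonnegative; floating-point round-off can at most produce negligibly small negative values.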
A rigorous derivation of field-scale \dpnp{s} from pore-scale systems was carried out, e.g., in \cite{Allaire10, RayMunteanKnabner}. Note that, commonly, volume effects dominate on field scales and surface effects such as electrostatic double-layer effects are negligible. However, a characteristic feature of porous media is that surface effects remain dominant even on field scales. This justifies considering field-scale \dpnp{s}. \par We now introduce some notation in order to present the model equations. \begin{enumerate}[label=({N}\arabic*), ref=({N}\arabic*), itemsep=1mm, start=\value{countNotation}+1] \item \textbf{Geometry: } For $n\in\{2,3\}$, let $\Omega\subset\setR^n$ be an $n$-dimensional bounded domain with boundary~$\Gamma$ and corresponding exterior normal field~$\vecnu$. Next, let $I:=(0,\timeEnd)$ be a time interval, and we introduce the time--space cylinder $\OmegaT:= I\times\Omega$ with lateral boundary~$\GammaT:=I\times\partial\Omega$. Furthermore, we suppose $\Omega$ to be a porous medium with constant porosity~$\theta$. \label{Not:Geom} \item \textbf{Variables: } We assume that $\Omega$ is fully saturated with a fluid, in which two charged chemical species are suspended. We denote the velocity field of the electrolyte solution by $\fieldF$, its pressure by $p$, and the electric field and the electrostatic potential in the electrolyte solution by $\vecE$ and $\potEL$. Next, we denote the number densities of the respective chemical species by~$\cl$, $l\in\{1,2\}$. Furthermore, we define the concentration vector by $\vecc:=(\cl[1],\cl[2])$. \label{Not:Variables} \item \textbf{Electrics: } The chemical species~$\cl$ carry a charge~$e\zl$. Here, $e$ is the elementary charge and $\zl\in\setZ$ the respective valency ($\zl\neq0$). W.l.o.g., we assume $\zl[1]>0>\zl[2]$. The chemical species~$\cl$ possess electric mobilities~$\wl$, where $\mob$ is the so-called mobility tensor.
According to the Einstein--Smoluchowski relation, it holds that $\Dl=\boltz\temp\mob$, see \cite[Chapter~6]{Masliyah-book}. Here, $\boltz$ is the Boltzmann constant and $\temp$ the temperature. Hence, we have the identity $\wl=e\zl(\boltz\temp)^{-1}\Dl$. We denote by $\chargeEL$ the free charge density and by $\rho_b$ a background charge density, e.g., coming from unresolved pore-scale inclusions inside $\Omega$. \label{Not:Electrics} \item \textbf{Coefficients: } We denote by $\Dl$ the diffusion-dispersion tensor, which is identical for all chemical species~$\cl$. Although the molecular diffusion might be different for each $\cl$, the dispersion coming from the tortuous geometry is by far dominant on the considered field scales. Since the geometry looks the same for all chemical species, we obtain a coinciding diffusion-dispersion tensor~$\Dl$. Next, we denote by $\permeabH$ the constant permeability tensor of the medium, by $\mu$ the dynamic viscosity of the fluid, and by $\permitEL:=\permitELscalar\Dl$ the constant electric permittivity tensor of the medium. For a rigorous derivation of the last relation, see \cite{RayMunteanKnabner}. We note that we have the identity $\permitEL =\permitELscalar\Dl=\permitELscalar\boltz\temp\mob$. \label{Not:coeff} \setcounter{countNotation}{\value{enumi}} \end{enumerate} We suppose that the boundary~$\Gamma$ of the domain~$\Omega$ is charged, e.g., from surfactants. As the solutes~$\cl$ carry charges, they interact with $\Gamma$ in a small boundary layer, the so-called electrostatic double-layer. This leads to a spatially inhomogeneous charge distribution, which gives rise to an electric field~$\fieldEL$. Simultaneously, the electric field~$\fieldEL$ generates an electric body force in the surrounding fluid. Thereby, an electroosmotic flow develops, which in turn interacts with the chemical species.
This leads to an interplay between the electrophoretic movement of the charged particles, the electroosmotic flow of the fluid, and a varying electric field. \par \dpnp{s} capture these coupled processes based on the following three conservation laws:\\[2.0mm] \begin{subequations} \textbf{Law 1 -- Gauss's law:} The surface charges and the charged solutes~$\cl$ give rise to an electric field~$\fieldEL$. For the electric field~$\fieldEL$, we solve Gauss's law. Additionally, we assume that the electric field is generated by an electrostatic potential~$\potEL$. Thus, we have $\vecE:=-\grad\potEL$. The boundary data are denoted by $\sigma$, and the initial data are obtained by substituting the initial data~$\clstart$ of the charged solutes~$\cl$ on the right hand side of Gauss's law and solving for the electric field~$\fieldEL$. Furthermore, we assume that inside the electrolyte solution, we have a background charge density~$\rho_b$, coming, e.g., from unresolved pore-scale inclusions inside~$\Omega$. Mathematically, Gauss's law reads for the redefined electric field $\vecE:=\permitEL\fieldEL$ as \begin{align} \permitEL^{-1}\fieldEL &= -\grad\potEL && \text{in } \OmegaT, \label{eq:Model-1a} \\ \grad\cdot\fieldEL &= \chargeEL + \rho_b && \text{in } \OmegaT, \label{eq:Model-1b} \\ \chargeEL &= \theta(\chargeELlongTwo) && \text{in } \OmegaT, \label{eq:Model-1c} \\ \fieldEL\cdot\vecnu &= \sigma && \text{on } \GammaT. \label{eq:Model-1d} \end{align} \textbf{Law 2 -- Darcy's law:} The velocity field~$\fieldF$ is subject to conservation of mass and momentum. On field scales, this is sufficiently well captured by Darcy's law, which connects the velocity field~$\fieldF$ and the pressure gradient~$\grad\press$. As we include electroosmotic flows, an electric body force term enters the equations. The boundary data are denoted by $f$, and the initial data are obtained by inserting $\clstart$ and $\fieldEL(0)$ on the right hand side of Darcy's law.
Mathematically, Darcy's law reads (with the redefined electric field $\vecE:=\permitEL\fieldEL$) as \begin{align} \permeabH^{-1} \fieldF &= \mu^{-1}\brac{-\grad\press + \permitEL^{-1}\forceEL} && \text{in } \OmegaT, \label{eq:Model-1e} \\ \grad\cdot\fieldF &= 0 && \text{in } \OmegaT, \label{eq:Model-1f} \\ \fieldF\cdot\vecnu &= f && \text{on } \GammaT. \label{eq:Model-1g} \end{align} \textbf{Law 3 -- Nernst--Planck equations:} The evolution of the chemical species~$\cl$ is subject to mass continuity. Here, the mass flux arises due to diffusion, convection, and an electric~drift. Such mass fluxes are called Nernst--Planck fluxes. We assume that the equations for $\cl$ are coupled through reaction rates~$\Rl$. The initial data are denoted by $\clstart$ and the flux boundary data by~$\gl$. Mathematically, the Nernst--Planck equations are given (with \ref{Not:coeff} and the redefined electric field $\vecE:=\permitEL\fieldEL$) by \begin{flalign} \theta\dert\cl + \grad\cdot\brac{\Dl\grad\cl + {\cl}[\fieldF + \coeffEL\fieldEL] } &= \theta\Rl(\Cl) && \text{in } \OmegaT, & \label{eq:Model-1h} \\[0.2cm] \brac{\Dl\grad\cl + {\cl}[\fieldF + \coeffEL\fieldEL] }\cdot\vecnu &= \gl && \text{on } \GammaT, & \label{eq:Model-1i} \\[0.2cm] \cl(0) &= \clstart && \text{on } \Omega. & \label{eq:Model-1j} \end{flalign} \end{subequations} \begin{remark} \textnormal{ Equations~\eqref{eq:Model-1a}--\eqref{eq:Model-1d} are Poisson's equation for $\potEL$ in mixed formulation. For this reason, \enquote{Poisson} is contained in the name \dpnp. For analytical investigations, it is of advantage to deal with Poisson's equation directly, as comprehensive regularity results for Poisson's equation are available, cf. \cite{GilbargTrudinger-book}.
However, the mixed formulation is of advantage, as we can easily introduce in \eqref{eq:Model-1a} the general electric field~$\fieldEL= -\grad\potEL -\dert\vecA$, which is the expression of an electric field in terms of the electromagnetic potentials according to Maxwell's equations, cf. \cite{LifshitzLandau-book2}. Furthermore, the mixed formulation is of advantage as a starting point for numerical approximations, since this leads to a direct approximation of the electric field~$\fieldEL$, cf. \cite{FrankRay2011_pnp}. } $\square$ \end{remark} \begin{remark} \textnormal{ The boundary flux in equation~\eqref{eq:Model-1i} can be equivalently expressed with equations \eqref{eq:Model-1d} and \eqref{eq:Model-1g} by \begin{align*} \brac{\Dl\grad\cl}\cdot\vecnu &= \gl - {\cl}f - (\permitELscalar\boltz\temp)^{-1}e\zl{\cl}\sigma \qquad \text{on }~ \GammaT~. \end{align*} Thus, the boundary flux condition is equivalent to a Robin boundary condition for the diffusion part. } $\square$ \end{remark} \begin{remark} \textnormal{ Equations~\eqref{eq:Model-1a}--\eqref{eq:Model-1j} contain the nonlinear coupling terms $\forceEL$ in Darcy's law, and $\cl\fieldF$, $\cl\fieldEL$ in the Nernst--Planck equations. These nonlinearities arise only after combining the three subsystems to a \dpnp. This reflects the fact that the coupling of initially isolated subprocesses leads to new nonlinearities in the resulting system. } $\square$ \end{remark} \subsection{Notation, Assumptions and Weak Formulation} We now introduce the required notation for the analytical investigations. \begin{enumerate}[label=({N}\arabic*), ref=({N}\arabic*), itemsep=0.0mm, start=\value{countNotation}+1] \item \textbf{Spaces: } For $k>0$, $p\in [1,\infty]$, we denote the Lebesgue spaces for scalar-valued and vector-valued functions by $\Lp{p}{}$ and the respective Sobolev spaces by $\Wkp{k}{p}{}$, cf.~\cite{Adams2-book}.
Furthermore, we set $\Hk{k}{}:=\Wkp{k}{2}{}$, and for the definition of the Bochner spaces~$\fspace{L^p}{I}{;V}$, $\fspace{H^k}{I}{;V}$ over a Banach space~$V$, we refer to~\cite{Roubicek-book}. The $\Hkdiv[][f]{k}$-spaces are defined, e.g., in~\cite{brezzi-book} by \\ $\Hkdiv[][f]{k}:=\cbrac{\Test\in \Hk{k}{}:~\nabla\cdot\Test\in \Hk{k}{}, \Test\cdot\nu = f \text{ on } \Gamma}$. \label{Not:spaces} \item \textbf{Products: } We denote by $\scp{\cdot}{\cdot}_H$ the inner product on a Hilbert space~$H$ and by $\dualp{\cdot}{\cdot}_{V^\ast\times V}$ the dual pairing between a Banach space~$V$ and its dual space~$V^\ast$. On $\setR^n$, we just write $\vecv\cdot\vecu:=\scp{\vecv}{\vecu}_{\setR^n}$, and on $\Lp{2}{}$, we just denote $\scp{\cdot}{\cdot}_{\Omega}:=\scp{\cdot}{\cdot}_{\Lp{2}{}}$. In particular, we abbreviate the dual pairing between $\Hk{1}{}$ and its dual~$\Hk{1}{}^\ast$ by $\dualp{\cdot}{\cdot}_{1,\Omega}:=\dualp{\cdot}{\cdot}_{\Hk{1}{}^\ast\times\Hk{1}{}}$. \label{Not:Prod} \setcounter{countNotation}{\value{enumi}} \end{enumerate} In order to successfully examine the above model, we introduce the following assumptions: \begin{enumerate}[label=({A}\arabic*), ref=({A}\arabic*), itemsep=0.0mm, start=\value{countAssumption}+1] \item \textbf{Geometry: } Let $n\in\{2,3\}$ and $\Omega\subset\setR^n$ be a bounded Lipschitz domain, i.e., $\Gamma\in C^{0,1}$. \label{Assump:Geom} \item \textbf{Initial data: } The initial data~$\clstart$ are nonnegative and bounded, i.e., \\ $0\leq\clstart(x)\leq M_0$ for a.e. $x\in\Omega$ for some $M_0 \in\setR_+$. \label{Assump:InitData} \item \textbf{Ellipticity: } The diffusivity tensor~$\Dl$ and the permeability tensor~$\permeabH$ satisfy \\ $\Dl\xi\cdot\xi>\alpha_D\abs{\xi}^2$ and $\permeabH^{-1}\xi\cdot\xi>\alpha_K\abs{\xi}^2$ for all $\xi\in\setR^n$, \\ $\Dl\xi\cdot\eta<C_D\abs{\xi}\abs{\eta}$ and $\permeabH^{-1}\xi\cdot\eta<C_K\abs{\xi}\abs{\eta}$ for all $\xi,\eta\in\setR^n$.
\label{Assump:Ellip} \item \textbf{Coefficients: } The porosity~$\theta$, the dynamic viscosity~$\mu$, and the electric permittivity~$\epsilon$ are positive constants. \label{Assump:Coeff} \item \textbf{Reaction rates: } The reaction rate functions $\Rl:\setR^2\rightarrow\setR$ are globally Lipschitz continuous, i.e., $\Rl\in \Ck[\setR^2]{0,1}{}$ with Lipschitz constant~$C_{\Rl}$. Furthermore, we assume $\Rl(\vecnull)=0$ and $\Rl(\vecv) \geq 0$ for all $\vecv\in\setR^2$ with $v_l\leq 0$. This means that, in case a chemical species vanishes, it can only be produced. \label{Assump:Reaction} \item \textbf{Boundary data: } We assume $\gl\in\Lp[\OmegaT]{\infty}{}$, $l=1,2$, $\sigma\in\Lp[\OmegaT]{\infty}{}$, and $f\in\Lp[\OmegaT]{\infty}{}$. Furthermore, we suppose that functions $\vecf,\vecsigma\in\Lp[I]{\infty}{;\Wkp{1}{\infty}{}}$ with $\vecsigma\cdot\vecnu=\sigma$ and $\vecf\cdot\vecnu=f$ exist. \label{Assump:BoundData} \item \textbf{Background charge density: } We assume $\rho_b\in \Lp[\OmegaT]{\infty}{}$ for the background charge density. \label{Assump:back-charge} \setcounter{countAssumption}{\value{enumi}} \end{enumerate} Equipped with the just introduced notation, we now define the weak formulation of the model.
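As a side remark, for a concrete symmetric tensor the constants in the ellipticity assumption can be read off from its extreme eigenvalues. The following minimal numerical sketch uses a hypothetical diffusion-dispersion tensor that is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric positive definite diffusion-dispersion tensor D
D = np.array([[2.0, 0.3],
              [0.3, 1.5]])

# For symmetric D, the sharp constants are the extreme eigenvalues:
#   alpha_D |xi|^2 <= D xi . xi   and   D xi . eta <= C_D |xi||eta|
eigvals = np.linalg.eigvalsh(D)
alpha_D, C_D = eigvals[0], eigvals[-1]

# Spot-check the two ellipticity inequalities on random vectors
ok_lower = ok_upper = True
for _ in range(1000):
    xi, eta = rng.normal(size=2), rng.normal(size=2)
    ok_lower &= D @ xi @ xi >= alpha_D * xi @ xi - 1e-12
    ok_upper &= D @ xi @ eta <= C_D * np.linalg.norm(xi) * np.linalg.norm(eta) + 1e-12
print(alpha_D, C_D, ok_lower, ok_upper)
```

Here $\alpha_D=\lambda_{\min}(D)$ and $C_D=\lambda_{\max}(D)$; for a nonsymmetric tensor, the lower bound would instead be computed from the symmetric part $(D+D^{\top})/2$.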
\begin{definition}[Weak solution]\label{def:weaksolution} The vector $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ is a weak solution of the \dpnp\ \eqref{eq:Model-1a}--\eqref{eq:Model-1j}, if and only if \begin{subequations} \begin{enumerate}[label=(\roman*), ref=(\roman*), itemsep=-1.5mm] \item $(\fieldEL,\potEL)\in \spaceSEF\times\spaceSP$ solves for all $(\Test,\test) \in \spaceTF\times\spaceTP$ \begin{align} \scp{\permitEL^{-1}\fieldEL}{\Test}_\Omega &= \scp{\potEL}{\grad\cdot\Test}_\Omega \label{eq:gaussWeak}\\ \scp{\grad\cdot\fieldEL}{\test}_{\Omega} &= \scp{\rho_b + \chargeEL}{\test}_\Omega \label{eq:gaussWeakDIV}, \\ \text{with the free} & \text{ charge density } \chargeEL \text{ given by }\chargeEL = \theta(\chargeELlongTwo)~ \nonumber. \end{align} \item $(\fieldF,\press)\in \spaceSF\times\spaceSP$ solves for all $(\Test,\test) \in \spaceTF\times\spaceTP$ \begin{align} \scp{\permeabH^{-1}\fieldF}{\Test}_\Omega &= \scp{\mu^{-1}\press}{\grad\cdot\Test}_\Omega + \scp{\coeffForceEL\forceEL}{\Test}_\Omega, \label{eq:darcyWeak}\\ \scp{\grad\cdot\fieldF}{\test}_\Omega &= 0~. \label{eq:darcyWeakDIV} \end{align} \item $\cl \in \fspace{L^\infty}{I}{;\Lp{2}{}} \cap \spaceSC \cap \Lp[\OmegaT]{\infty}{}$ solves for all $\test\in\spaceTC$ and for $l=1,2$ \begin{align}\label{eq:transportWeak} &\dualp{\theta\dert\cl}{\test}_{1,\Omega} + \scp{\Dl\grad\cl}{\grad\test}_\Omega - \scp{ {\cl}[\fieldF + \coeffEL\fieldEL] }{\grad\test}_\Omega \nonumber\\[2.0mm] & = \scp{\theta\Rl(\Cl)}{\test}_\Omega + \scp{\gl}{\test}_\Gamma~, \end{align} and $\cl$ takes its initial values in the sense that \begin{align*} \lim\limits_{t\searrow0} \scp{\cl(t) - \clstart}{\test}_\Omega ~=~0 \qquad \text{for all } \test\in\Lp{2}{}~.\\[-12mm] \nonumber \end{align*} $\square$ \end{enumerate} \end{subequations} \end{definition} \begin{remark} \textnormal{ We note that equation \eqref{eq:transportWeak} is not well-defined without having $\cl \in\Lp[\OmegaT]{\infty}{}$.
This is due to the fact that an embedding of the type $\Hkdiv{1}{}\hookrightarrow\Lp{p}{}$, for some $p>2$, does not hold true. Thus, we have $\fieldEL,\fieldF\in\Lp{2}{}$ at best, and for the existence of the convection integral and the electric~drift integral in \eqref{eq:transportWeak}, we need the estimate \begin{align*} \scp{{\cl}[\fieldF + \coeffEL\fieldEL]}{\grad\test}_\Omega ~\leq~ \norm{\cl}{\Lp{\infty}{}} \norm{\fieldF + \coeffEL\fieldEL}{\Lp{2}{}} \norm{\grad\test}{\Lp{2}{}}~. \end{align*} This shows that $\cl\in\Lp[\OmegaT]{\infty}{}$ is mandatory for a well-defined weak formulation. Consequently, we have to include $\Lp[\OmegaT]{\infty}{}$ in the solution space for $\cl$. } $\square$ \end{remark} \begin{remark} \textnormal{ In equations \eqref{eq:gaussWeak} and \eqref{eq:darcyWeak}, the test function space differs from the solution space, and the solutions $\fieldEL$ and $\fieldF$ are not admissible test functions. However, \eqref{eq:Model-1d}, \eqref{eq:Model-1g}, and \ref{Assump:BoundData} ensure that $\fieldEL-\vecsigma$ and $\fieldF-\vecf$ are admissible test functions. } $\square$ \end{remark} \section{Uniqueness}\label{sec:Uniqueness} In this section, we show that the solutions of the investigated \dpnp\ are unique. \begin{thm}[Uniqueness]\label{thm:unique} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ be a weak solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:weaksolution}. Then, $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}$ is unique. \end{thm} \begin{proof} Let us assume that $\brac{\fieldEL_i,\potEL_i,\fieldF_i,\press_i,\Cl_i}$, $i=1,2$, are two solutions of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} with identical data.
Furthermore, we denote the difference between these two solutions by \begin{align*} \brac{\fieldEL_{12},\potEL_{12},\fieldF_{12},\press_{12},\Cl_{12}}:=\brac{\fieldEL_1-\fieldEL_2,\potEL_1-\potEL_2,\fieldF_1-\fieldF_2,\press_1-\press_2,\Cl_1-\Cl_2}~. \end{align*} By subtracting the equations for the respective solutions, we obtain the error equations\\[2.0mm] \begin{subequations} \underline{Gauss's law:} \begin{align} \scp{\permitEL^{-1}\fieldEL_{12}}{\Test}_\Omega &= \scp{\potEL_{12}}{\grad\cdot\Test}_\Omega, \label{eq:gaussWeak-error} \\ \scp{\grad\cdot\fieldEL_{12}}{\test}_{\Omega} &= \scp{\chargeEL[,12]}{\test}_\Omega =\theta\sum_l\scp{\zl\cl[l,12] }{\test}_\Omega~. \label{eq:gaussWeakDIV-error} \end{align} \underline{Darcy's law:} \begin{align} \scp{\permeabH^{-1}\fieldF_{12}}{\Test}_\Omega &= \scp{\mu^{-1}\press_{12}}{\grad\cdot\Test}_\Omega + \theta\mu^{-1}\sum_l\scp{\zl\cl[l,1]\permitEL^{-1}\fieldEL_1-\zl\cl[l,2]\permitEL^{-1}\fieldEL_2}{\Test}_\Omega \label{eq:darcyWeak-error}\\ \scp{\grad\cdot\fieldF_{12}}{\test}_\Omega &= 0~. \label{eq:darcyWeakDIV-error} \end{align} \underline{Nernst--Planck equations:} \begin{align} & \dualp{\theta\dert\cl[l,12]}{\test}_{1,\Omega} + \scp{\Dl\grad\cl[l,12]}{\grad\test}_\Omega - \scp{ {\cl[l,1]}[\fieldF_1 + \coeffEL\fieldEL_1] }{\grad\test}_\Omega \nonumber\\[2.0mm] & + \scp{ {\cl[l,2]}[\fieldF_2 + \coeffEL\fieldEL_2] }{\grad\test}_\Omega = \theta \scp{\Rl(\Cl_1)-\Rl(\Cl_2)}{\test}_\Omega ~. \label{eq:transportWeak-error} \end{align} \end{subequations} We now show by contradiction that $\cl[l,12]\equiv0$ for $l=1,2$. To this end, we assume that \begin{align}\label{eq:unique-contradiction} \sum_l \norm{\cl[l,12]}{\Lp{2}{}}^2 > 0 ~~\Equivalent~~ \exists~\kappa>0 \text{ such that } \sum_l \norm{\cl[l,12]}{\Lp{2}{}}^2 \geq \kappa~. \end{align} Next, we test equation~\eqref{eq:transportWeak-error} with $\test=\cl[l,12]$ and sum over $l=1,2$.
Thereby, for the time integral and the diffusion integral, we obtain with \ref{Assump:Ellip} \begin{align*} \sum_l \dualp{\theta\dert\cl[l,12]}{\cl[l,12]}_{1,\Omega} + \scp{\Dl\grad\cl[l,12]}{\grad\cl[l,12]}_\Omega \geq \frac{\theta}{2}\derr \sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2 + \alpha_D \sum_l\norm{\grad\cl[l,12]}{\Lp{2}{}}^2. \end{align*} For the reaction integral, we obtain with \ref{Assump:Reaction} \begin{align*} \theta\sum_l \scp{\Rl(\Cl_1)-\Rl(\Cl_2)}{\cl[l,12]}_\Omega \leq \theta\sum_l C_{\Rl}\norm{ \abs{\Cl_{12}} \cl[l,12]}{\Lp{1}{}} \leq \theta\max_l C_{\Rl} \sum_l \norm{\cl[l,12]}{\Lp{2}{}}^2. \end{align*} We transform the convection integral and the electric~drift integral to \begin{align*} & -\sum_l \scp{ {\cl[l,1]}[\fieldF_1 + \coeffEL\fieldEL_1] - {\cl[l,2]}[\fieldF_2 + \coeffEL\fieldEL_2] }{\grad\cl[l,12]}_\Omega \\ &= -\sum_l \scp{ {\cl[l,1]}[\fieldF_{12} + \coeffEL\fieldEL_{12}]}{\grad\cl[l,12]}_\Omega \\ &~~~~ -\sum_l \scp{ {\cl[l,12]}[\fieldF_2 + \coeffEL\fieldEL_2] }{\grad\cl[l,12]}_\Omega ~~=: A.1 + A.2~, \end{align*} and we estimate $A.2$ with \eqref{eq:unique-contradiction} and \Young\ with a free parameter $\delta>0$, cf. \cite{GilbargTrudinger-book}, by \begin{align*} A.2 &\leq \delta\sum_l \norm{\grad\cl[l,12]}{\Lp{2}{}}^2 +\frac{\kappa}{\kappa}\frac{1}{2\delta}\norm{\fieldF_2 + \coeffEL\fieldEL_2}{\Lp{2}{}}^2\sum_l\norm{\cl[l,12]}{\Lp{\infty}{}}^2\\ &\leq \delta\sum_l \norm{\grad\cl[l,12]}{\Lp{2}{}}^2 + \frac{1}{2\kappa\delta}\sum_i\sqbrac{\norm{\fieldF_i + \coeffEL\fieldEL_i}{\Lp{2}{}}^2\norm{\Cl_i}{\Lp{\infty}{}}^2}\sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2. \end{align*} Analogously, for $A.1$ we obtain \begin{align*} A.1 &\leq \delta\sum_l \norm{\grad\cl[l,12]}{\Lp{2}{}}^2 + \frac{2}{\kappa\delta}\sum_i\sqbrac{\norm{\fieldF_i + \coeffEL\fieldEL_i}{\Lp{2}{}}^2\norm{\Cl_i}{\Lp{\infty}{}}^2}\sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2.
\end{align*} Altogether, with a proper choice of the free parameter~$\delta>0$, we arrive at \begin{align*} & \frac{\theta}{2}\derr \sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2 + \frac{\alpha_D}{2} \sum_l\norm{\grad\cl[l,12]}{\Lp{2}{}}^2 \\ &\leq \brac{\theta\max_l C_{\Rl}+\frac{4}{\kappa\alpha_D}\sum_i\sqbrac{\norm{\fieldF_i + \coeffEL\fieldEL_i}{\Lp{2}{}}^2\norm{\Cl_i}{\Lp{\infty}{}}^2}}\sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2. \end{align*} We note that the initial values vanish due to $\cl[l,12](0)=\clstart-\clstart=0$. Thus, applying \Grownwall, cf. \cite{Evans-book}, yields \begin{align*} \sum_l\norm{\cl[l,12](t)}{\Lp{2}{}}^2 \leq C \sum_l\norm{\cl[l,12](0)}{\Lp{2}{}}^2 =0 \quad \text{for a.e. } t\in[0,\timeEnd] ~. \end{align*} This is a contradiction to assumption \eqref{eq:unique-contradiction}. Hence, we have proven $\cl[l,12]\equiv0$. \par We proceed by testing equation~\eqref{eq:gaussWeakDIV-error} with $\test=\grad\cdot\fieldEL_{12}$. Thereby, with \Young\ and $\cl[l,12]\equiv0$, we directly obtain \begin{align*} \norm{\grad\cdot\fieldEL_{12}}{\Lp{2}{}}^2 \leq \theta\max_l\abs{\zl} \sum_l\norm{\cl[l,12]}{\Lp{2}{}}^2=0~. \end{align*} Next, we test equation~\eqref{eq:gaussWeakDIV-error} with $\test=\potEL_{12}$ and equation~\eqref{eq:gaussWeak-error} with $\Test=\fieldEL_{12}$. By adding these equations, we get with \ref{Assump:Ellip} and $\cl[l,12]\equiv0$ \begin{align*} \permitELscalar\alpha_D\norm{\fieldEL_{12}}{\Lp{2}{}}^2 \leq \scp{\permitEL^{-1}\fieldEL_{12}}{\fieldEL_{12}}_\Omega = \theta \sum_l\scp{\zl\cl[l,12] }{\potEL_{12}}_\Omega = 0~. \end{align*} We now test equation~\eqref{eq:gaussWeak-error} with $\Test\in\spaceTF$, for which we assume that $\grad\cdot\Test=\potEL_{12}$ and $\norm{\Test}{\Hkdiv{1}{}}\leq C\norm{\potEL_{12}}{\spaceTP}$ hold, cf. \cite[Chapter~7.2]{Quarteroni-book}. This gives with \ref{Assump:Ellip} and \Young\ \begin{align*} \norm{\potEL_{12}}{\Lp{2}{}}^2 \leq \frac{2}{\permitELscalar\alpha_D}\norm{\fieldEL_{12}}{\Lp{2}{}}^2 = 0~.
\end{align*} Hence, we have proven $\norm{\potEL_{12}}{\Lp{2}{}}^2 + \norm{\fieldEL_{12}}{\Hkdiv{1}{}}^2 \leq 0$, which means $\potEL_{12}\equiv0, \fieldEL_{12}\equiv\vecnull, \grad\cdot\fieldEL_{12}\equiv0$. \par Analogously, we test equation~\eqref{eq:darcyWeakDIV-error} with $\test=\grad\cdot\fieldF_{12}$. This shows $\norm{\grad\cdot\fieldF_{12}}{\Lp{2}{}}^2 =0$. Next, we test equation~\eqref{eq:darcyWeakDIV-error} with $\test=\press_{12}$ and equation~\eqref{eq:darcyWeak-error} with $\Test=\fieldF_{12}$. Then, we add these equations and obtain with \ref{Assump:Ellip}, $\cl[l,12]\equiv0$, and $\fieldEL_{12}\equiv\vecnull$ \begin{align*} \alpha_K \norm{\fieldF_{12}}{\Lp{2}{}}^2 &\leq \theta\mu^{-1}\sum_l\scp[]{\zl\cl[l,12]\permitEL^{-1}\fieldEL_1 + \zl\cl[l,2]\permitEL^{-1}\fieldEL_{12}}{\fieldF_{12}} = 0~. \end{align*} By testing equation~\eqref{eq:darcyWeak-error} with $\Test\in\spaceTF$, for which we assume according to \cite[Chapter~7.2]{Quarteroni-book} that $\grad\cdot\Test=\press_{12}$ and $\norm{\Test}{\Hkdiv{1}{}}\leq C\norm{\press_{12}}{\spaceTP}$ hold, we obtain with \ref{Assump:Ellip} and \Young\ \begin{align*} \norm{\press_{12}}{\Lp{2}{}}^2 &\leq \delta C\norm{\press_{12}}{\Lp{2}{}}^2 + \frac{C_k^2}{2\delta}\norm{\fieldF_{12}}{\Lp{2}{}}^2 + \frac{\theta}{2\delta\mu}\norm{\chargeEL[,1]\permitEL^{-1}\fieldEL_1 - \chargeEL[,2]\permitEL^{-1}\fieldEL_2}{\Lp{2}{}}^2 \\ &=:I.1+I.2+I.3~. \end{align*} We already know that $I.2=0$, and $I.3$ is estimated with $\cl[l,12]\equiv0$ and $\fieldEL_{12}\equiv\vecnull$ by \begin{align*} I.3 &= \norm{\chargeEL[,12]\permitEL^{-1}\fieldEL_1 + \chargeEL[,2]\permitEL^{-1}\fieldEL_{12}}{\Lp{2}{}}^2 = 0~. \end{align*} Thus, we have $\norm{\press_{12}}{\Lp{2}{}}=0$, and we have proven $\norm{\press_{12}}{\Lp{2}{}}^2 + \norm{\fieldF_{12}}{\Hkdiv{1}{}}^2 \leq 0$, which means $\press_{12}\equiv0, \fieldF_{12}\equiv\vecnull, \grad\cdot\fieldF_{12}\equiv0$.
\end{proof} \section{Fixed Point Operator} \label{sec:fpi-operator} In the next sections, we prove the existence of solutions of the \dpnp\ by applying a fixed point approach. The idea behind this method of proof can be roughly summarized as follows: \par Firstly, linearize the nonlinear system with a suitable linearization method. For that purpose, well-known and widely used numerical linearization schemes are often employed. Concerning \dpnp{s}, the most famous linearization scheme for numerical computations is the so-called Gummel iteration, cf. \cite{Gummel1964}. \par Secondly, reformulate the linearized system by means of an abstract operator. This operator is constructed exactly such that its images are the solutions of the linearized system. Furthermore, the construction must be carried out in such a way that the solutions of the nonlinear system are exactly the fixed points of this operator. Hence, the existence of solutions of the nonlinear system is equivalent to the existence of fixed points of the constructed operator. \par Thirdly, it remains to prove that the operator satisfies the assumptions of a fixed point theorem, which allows us to conclude that a fixed point exists. This is the reason why most of the subsequent proof consists in verifying the assumptions of the fixed point~\cref{thm:fpi-theorem}. \par We now linearize the \dpnp\ by the following Gummel-type approach. \begin{enumerate}[label=(L.\arabic*), ref=(L.\arabic*),topsep=2.0mm, itemsep=-1.0mm, start=0] \item We replace the free charge density~$\chargeEL$ by some given approximation~$\chargeELold$. \label{linStep:start} \item Thereby, we decouple Gauss's law from the remaining \dpnp, as for a given $\chargeELold$, we obtain a solution $(\fieldEL,\potEL)$ of Gauss's law independently of the remaining solution vector~$(\fieldF,\press,\Cl)$. In the following \cref{def:FixOp}, this is formulated by means of the operator~$\fixOp_1$.
\label{linStep:gauss} \item Next, we proceed by solving Darcy's law. From \ref{linStep:start} and \ref{linStep:gauss} we know that we can take $(\chargeELold,\fieldEL,\potEL)$ as a given input. Hence, we obtain a solution~$(\fieldF,\press)$ independently of the remaining solution vector~$\Cl$. In the following \cref{def:FixOp}, this is formulated by means of the solution operator~$\fixOp_2$. \label{linStep:darcy} \item Finally, we know from \ref{linStep:start}--\ref{linStep:darcy} that we can treat $(\chargeELold,\fieldEL,\potEL,\fieldF,\press)$ as a given input for the equations for $\cl$, which immediately gives the remaining solution~$\Cl$. This is formulated in the following \cref{def:FixOp} by means of the solution operator~$\fixOp_3$. \label{linStep:np} \end{enumerate} This Gummel-type linearization approach is rigorously formulated in the next definition. \begin{definition}[Fixed point operator]\label{def:FixOp} Let $K\subset X$ be a subset of the Banach space $X$, which is given by $X:= \sqbrac{\fspace{L^\infty}{I}{;\Lp{2}{}} \cap \spaceSC \cap \Lp[\OmegaT]{\infty}{}}^2$. We introduce the fixed point operator~$\fixOp$ by \begin{align*} \fixOp:=\fixOp_3\circ\fixOp_2\circ\fixOp_1: K \subset X \rightarrow X~. \end{align*} Herein, the suboperator $\fixOp_1$ is defined by \begin{subequations} \begin{align} \fixOp_1: &\begin{cases} \fixSet \rightarrow X\times\spaceSEF\times\spaceSP ~=:Y \\ \Clold ~\mapsto (\Clold,\fieldEL,\potEL), ~\text{ with } (\fieldEL,\potEL) \text{ solving for all } (\Test,\test) \in \spaceTF\times\spaceTP \end{cases} \nonumber\\ & \hspace{27.0mm} \scp{\permitEL^{-1}\fieldEL}{\Test}_\Omega = \scp{\potEL}{\grad\cdot\Test}_\Omega , \label{eq:gaussWeak-fpi} \\ & \hspace{27.0mm} \scp{\grad\cdot\fieldEL}{\test}_{\Omega} = \scp{\rho_b + \chargeELold}{\test}_\Omega , \label{eq:gaussWeakDIV-fpi} \\ & \hspace{27.0mm} \text{with the free charge density } \chargeELold \text{ given by }\chargeELold = \theta(\chargeELlongTwoold)~.
\nonumber \end{align} Furthermore, the suboperator $\fixOp_2$ is defined by \begin{align} \fixOp_2: &\begin{cases} \qquad Y ~~~~\rightarrow Y\times \spaceSF\times\spaceSP~=:Z \\ (\Clold,\fieldEL,\potEL) \mapsto (\Clold,\fieldEL,\potEL,\fieldF,\press), ~\text{ with } (\fieldF,\press) \text{ solving for all } (\Test,\test) \in \spaceTF\times\spaceTP \end{cases} \nonumber\\ & \hspace{27.0mm} \scp{\permeabH^{-1}\fieldF}{\Test}_\Omega = \scp{\mu^{-1}\press}{\grad\cdot\Test}_\Omega + \scp{\coeffForceEL\forceELold}{\Test}_\Omega, \label{eq:darcyWeak-fpi} \\ & \hspace{27.0mm} \scp{\grad\cdot\fieldF}{\test}_\Omega = 0 . \label{eq:darcyWeakDIV-fpi} \end{align} Finally, the suboperator $\fixOp_3$ is defined by \begin{align} \fixOp_3: &\begin{cases} \qquad~~~ Z \qquad~~\rightarrow X \\ (\Clold,\fieldEL,\potEL,\fieldF,\press) \mapsto \Cl=(\cl[1],\cl[2]), ~\text{ with } \cl \text{ solving for all } \test\in\spaceTC \text{ and } l=1,2 \end{cases} \nonumber\\ & \hspace{27.0mm} \dualp{\theta\dert\cl}{\test}_{1,\Omega} + \scp{\Dl\grad\cl}{\grad\test}_\Omega - \scp{ {\cl}[\fieldF + \coeffEL\fieldEL] }{\grad\test}_\Omega \nonumber\\[2.0mm] & \hspace{27.0mm} = \scp{\theta\Rl(\Cl)}{\test}_\Omega + \scp{\gl}{\test}_\Gamma , \label{eq:transportWeak-fpi} \\ & \hspace{27.0mm} \text{and } \cl \text{ takes its initial values in the sense that } \nonumber\\ & \hspace{27.0mm} \lim\limits_{t\searrow0} \scp{\cl(t) - \clstart}{\test}_\Omega ~=~0 \qquad \text{for all } \test\in\Lp{2}{}~. \nonumber\\[-12mm] \nonumber \end{align} \end{subequations} $\square$ \end{definition} \begin{remark} \textnormal{ We note that the fixed point operator~$\fixOp$ is solely a function of~$\Clold$. For this reason, a fixed point~$\Cl$ of $\fixOp$ is only a partial solution in the sense of \cref{def:weaksolution}, as $\Cl$ only solves the equations~\eqref{eq:transportWeak}.
However, the suboperators~$\fixOp_1$, $\fixOp_2$, and $\fixOp_3$ contain the necessary information about the remaining partial solutions $(\potEL,\fieldEL)$ and $(\press,\fieldF)$. Furthermore, in case a fixed point $\Cl=\fixOp(\Cl)$ exists, these suboperators ensure the existence of the partial solutions $(\potEL,\fieldEL)$ and $(\press,\fieldF)$, which yields the existence of a solution $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ according to \cref{def:weaksolution}. } $\square$ \end{remark} \begin{lemma}[well-definedness]\label{lemma:wellDef-fixOp} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid. Then, $\fixOp: \fixSet\subset X \rightarrow X$, defined in \cref{def:FixOp}, is well-defined. \end{lemma} \begin{proof} $\fixOp_1$ is well-defined in the first component, since it acts as the identity there. As to the components $(\fieldEL,\potEL)$, we know that for all $\Cl\in K$ unique solutions $(\fieldEL,\potEL)\in\spaceSEF\times\spaceSP$ of \eqref{eq:gaussWeak-fpi} and \eqref{eq:gaussWeakDIV-fpi} exist. This follows from \cite[Theorem 7.4.1]{Quarteroni-book}. However, $\potEL$ is only determined up to a constant. Imposing, e.g., a zero mean value constraint\footnote{\label{foot1}The mean value of a function $f\in\Lp{1}{}$ is defined by $\frac{1}{\abs{\Omega}}\Intdx{ f~}$.} leads to uniqueness of $\potEL$. Furthermore, we note that the time variable $t$ plays only the role of a parameter in the equations for $(\fieldEL,\potEL)$. This leads to uniform results with respect to $t$. Hence, $\fixOp_1$ is well-defined. \par $\fixOp_2$ is the identity in the first three components. For the last two components $(\fieldF,\press)$, we know that for all $(\Cl,\fieldEL)\in K\times\spaceSEF$ unique solutions of \eqref{eq:darcyWeak-fpi} and \eqref{eq:darcyWeakDIV-fpi} exist. This follows again from \cite[Theorem 7.4.1]{Quarteroni-book}.
Likewise, $\press$ is only determined up to a constant, and we obtain uniqueness by imposing, e.g., a zero mean value constraint. The existence is uniform in time, as $t$ plays just the role of a parameter. Thus, $\fixOp_2$ is well-defined. \par Applying Rothe's method, cf. \cite[Chapter 8.2]{Roubicek-book}, \cite{rektorys-book}, together with the regularities of $\fieldEL,\potEL$, $\fieldF,\press$ (according to \cref{thm:aprioriBounds}) guarantees with \cref{lemma:nonnegative} and \cref{lemma:bounded} the existence of unique weak solutions $\Cl\in X$ of equations~\eqref{eq:transportWeak-fpi}. Thus, $\fixOp_3$ is well-defined. \end{proof} \begin{lemma}[Regularity for Gauss's law]\label{lemma:regularity-gausslaw} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $(\fieldEL,\potEL,\fieldF,\press,\Cl)\in\setR^{4+2n}$ be a solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:FixOp}. Then, for the partial solution~$(\fieldEL,\potEL)$, we have \begin{align*} \potEL\in\Lp[I]{\infty}{;\Hk{2}{}/\setR} \qquad \text{ and } \qquad \fieldEL\in\Lp[I]{\infty}{;\Hk{1}{}}~. \end{align*} \end{lemma} \begin{proof} We recall from \cite{Grisvard-book,Quarteroni-book} that the equation \begin{align*} \scp{\permitEL\grad\psi}{\grad\test}_{\Omega} = \scp{\rho_b+\chargeELold}{\test}_\Omega + \scp{\sigma}{\test}_\Gamma \qquad \text{ for all } \test\in\Hk{1}{} \end{align*} possesses a unique solution~$\psi\in\Hk{2}{}$, if we impose a zero mean value constraint\footnote{See footnote \ref{foot1}}. As the time variable~$t$ plays only the role of a parameter in the equation for $\psi$, we obtain all results uniformly in time. This yields $\psi\in\Lp[I]{\infty}{;\Hk{2}{}}$. Hence, by defining \begin{align*} \potEL:=\psi\in\Lp[I]{\infty}{;\Hk{2}{}} \qquad\text{ and }\qquad \fieldEL:=\permitEL\grad\psi\in\Lp[I]{\infty}{;\Hk{1}{}}, \end{align*} we obtain a solution of equations~\eqref{eq:gaussWeak-fpi} and \eqref{eq:gaussWeakDIV-fpi}.
Finally, we already know from the proof of \cref{lemma:wellDef-fixOp}, that the above constructed solution~$(\fieldEL,\potEL)$ is the unique solution of equations~\eqref{eq:gaussWeak-fpi} and \eqref{eq:gaussWeakDIV-fpi}. \end{proof} \section{Global Existence of a Solution} \label{sec:Existence-twoComp} \subsection{A priori Estimates} \label{subsec:aprioriBounds} In this section, we show a~priori bounds for the solution vector $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$. We begin with some preliminary results, which we need throughout the rest of this paper. Henceforth, we denote by $C$ a generic constant, which may change from line to line in the calculations. \begin{lemma}[Algebraic Inequality]\label{lemma:sign} Let $p\geq0$ and $a,b\in\setR$ with $a\geq0$ and $b\geq0$. Then, we have \begin{align*} (a-b)(a^p-b^p)=(b-a)(b^p-a^p) \geq 0~. \end{align*} \end{lemma} \begin{proof} The equality is obvious and the inequality follows by considering the cases $a\geq b$ and $a<b$. \end{proof} \begin{lemma}[Boundary Interpolation]\label{lemma:interpol-boundary} Let $u\in\Hk{1}{}$ and suppose \ref{Assump:Geom}. Then, we have \begin{align*} \norm{u}{\Lp[\Gamma]{2}{}}^2 ~\leq~ \delta \norm{\grad u}{\Lp{2}{}}^2 + 2\delta^{-1} \norm{u}{\Lp{2}{}}^2 \qquad \text{ for all } \delta\in(0,1)~. \end{align*} \end{lemma} \begin{proof} Let $s\in [1/2,1)$ and let $\Hk{s}{}$ be the fractional Sobolev space defined in \cite[Chapter~7.35, 7.43]{Adams1-book}. According to \cite[Theorem~7.58]{Adams1-book} and \cite[Lemma~7.16]{Adams1-book}, we have the trace embedding and the interpolation inequality \begin{align*} \norm{u}{\Lp[\Gamma]{2}{}} \leq C \norm{u}{\Hk{s}{}} ~\leq~ C \norm{u}{\Hk{1}{}}^s \norm{u}{\Lp{2}{}}^{1-s} ~~\text{ for } s \in [1/2,1). \end{align*} Then, we choose $s=1/2$ and apply \Young\ with a free parameter $\delta\in(0,1)$. \end{proof} We now show a lower bound for the chemical species~$\cl$.
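As an aside, the elementary inequality of \cref{lemma:sign} is easy to verify numerically. The following minimal Python sketch (purely illustrative and not part of the analysis; the helper name \texttt{sign\_product} is our own) samples random nonnegative $a,b$ and exponents $p\geq0$ and checks the sign of the product, up to floating point rounding.

```python
import random

def sign_product(a, b, p):
    # (a - b) * (a**p - b**p); since x -> x**p is monotone increasing
    # for p >= 0 on [0, inf), both factors always share the same sign.
    return (a - b) * (a**p - b**p)

random.seed(1)
samples = [(random.uniform(0.0, 10.0), random.uniform(0.0, 10.0),
            random.uniform(0.0, 5.0)) for _ in range(10_000)]
# Allow a tiny negative tolerance for floating point rounding.
assert min(sign_product(a, b, p) for a, b, p in samples) >= -1e-9
```

The case distinction $a\geq b$ versus $a<b$ from the proof is exactly what the comment records: both factors share the same sign, so the product is nonnegative.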
\begin{lemma}[Nonnegativity]\label{lemma:nonnegative} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ be a weak solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:FixOp}. Then, we have for $l\in\{1,2\}$ \begin{align*} \cl\tx \geq 0 \quad \text{ for a.e. } t\in[0,\timeEnd],~ \text{ a.e. } x\in\Omega~. \end{align*} \end{lemma} \begin{proof} We note that the following proof is independent of whether $\clold=\cl$ or $\clold\neq\cl$. For $l=1,2$, we modify equations~\eqref{eq:transportWeak-fpi} with $\cl[l,+]:=\max(\cl,0)$ to \begin{align}\label{eq:transportWeak-modified} &\dualp{\theta\dert\cl}{\test}_{1,\Omega} + \scp{\Dl\grad\cl}{\grad\test}_\Omega - \scp{ {\cl[l,+]}[\fieldF + \coeffEL\fieldEL] }{\grad\test}_\Omega \nonumber\\[2.0mm] & = \scp{\theta\Rl(\Cl)}{\test}_\Omega + \scp{\gl}{\test}_\Gamma~. \end{align} Obviously, equations~\eqref{eq:transportWeak-fpi} and \eqref{eq:transportWeak-modified} are identical for nonnegative solutions~$\cl$. This means that nonnegative solutions~$\cl$ of \eqref{eq:transportWeak-modified} are solutions of \eqref{eq:transportWeak-fpi}. Furthermore, by involving \cref{thm:unique}, we know that nonnegative solutions~$\cl$ of \eqref{eq:transportWeak-modified} are the unique solutions of equations~\eqref{eq:transportWeak-fpi}. Hence, it suffices to show that \eqref{eq:transportWeak-modified} only allows for nonnegative solutions. \par To this end, we test \eqref{eq:transportWeak-modified} with $\cl[l,-]:=\min(\cl,0)$. Thereby, we obtain for the time integral and the diffusion integral with \ref{Assump:Ellip} \begin{align*} \sum_l \dualp{\theta\dert\cl}{\cl[l,-]}_{1,\Omega} + \scp{\Dl\grad\cl}{\grad\cl[l,-]}_\Omega \geq \frac{\theta}{2}\derr \sum_l\norm{\cl[l,-]}{\Lp{2}{}}^2 + \alpha_D \sum_l\norm{\grad\cl[l,-]}{\Lp{2}{}}^2.
\end{align*} The convection integral and the electric~drift integral vanish due to $\Omega\cap\cbrac{\cl<0}\cap\cbrac{\cl>0}=\varnothing$. For the reaction integrals and the surface integrals, we come with \ref{Assump:Reaction}, \ref{Assump:BoundData}, \cref{lemma:interpol-boundary}, and \Holder\ to \begin{align*} \theta\sum_l \scp{\Rl(\Cl)}{\cl[l,-]}_\Omega + \scp{\gl}{\cl[l,-]}_\Gamma \leq \norm{\gl}{\Lp[\Gamma]{\infty}{}}\sum_l\norm{\cl[l,-]}{\Lp[\Gamma]{1}{}} \leq C\norm{\gl}{\Lp[\Gamma]{\infty}{}}\sum_l\norm{\cl[l,-]}{\Lp{2}{}} . \end{align*} Combining the previous estimates leads us to \begin{align*} \frac{\theta}{2}\derr \sum_l\norm{\cl[l,-]}{\Lp{2}{}}^2 + \frac{\alpha_D}{2} \sum_l\norm{\grad\cl[l,-]}{\Lp{2}{}}^2 \leq C\norm{\gl}{\Lp[\Gamma]{\infty}{}}\sum_l\norm{\cl[l,-]}{\Lp{2}{}} . \end{align*} Either $\sum_l\norm{\cl[l,-]}{\Lp{2}{}}=0$ and we are done, or we have $\sum_l\norm{\cl[l,-]}{\Lp{2}{}}\geq \kappa$ for some $\kappa>0$. This gives \begin{align*} \frac{\theta}{2}\derr \sum_l\norm{\cl[l,-]}{\Lp{2}{}}^2 + \frac{\alpha_D}{2} \sum_l\norm{\grad\cl[l,-]}{\Lp{2}{}}^2 \leq \kappa^{-1}C\norm{\gl}{\Lp[\Gamma]{\infty}{}}\sum_l\norm{\cl[l,-]}{\Lp{2}{}}^2 . \end{align*} Applying \Grownwall\ and \ref{Assump:InitData} immediately yields $\sum_l\norm{\cl[l,-]}{\Lp{2}{}}=0$. \end{proof} Next, we prove energy estimates for the chemical species~$\cl$, by using the above-mentioned weighted test functions; see \cref{sec:Introduction}. These energy estimates are crucial for all following results. \begin{lemma}[Energy estimates]\label{lemma:energy} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ be a weak solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:FixOp}.
Then, we have \begin{align*} \sum_l\sqbrac{ \norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}}+\norm{\cl}{\Lp[I]{2}{;\spaceTC}} }~\leq~ C_0~. \end{align*} Herein, the dependency of the constant is \begin{align*} C_0=C_0\!\brac{\timeEnd,\max_l\abs{\zl},\norm{\gl}{\Lp[\OmegaT]{\infty}{}},\norm{f}{\Lp[\GammaT]{\infty}{}},\norm{\sigma}{\Lp[\GammaT]{\infty}{}},\norm{\rho_b}{\Lp[\OmegaT]{\infty}{}},\norm{\clstart}{\Lp{2}{}} }. \end{align*} \end{lemma} \begin{proof} For ease of readability, we split the proof into two cases.\\[2.0mm] \underline{Case 1: $\cl=\clold$}~~ In equations~\eqref{eq:transportWeak-fpi}, we choose the test functions $\varphi := \abs{\zl}\cl \in \spaceTC$ and we sum over $l=1,2$. Thereby, we get for the time integrals and the diffusion integrals with \ref{Assump:Ellip} \begin{align*} & \sum_l \dualp{\theta\dert\cl}{\abs{\zl}\cl}_{1,\Omega} + \sum_l\scp{\Dl\grad\cl}{\grad(\abs{\zl}\cl)}_\Omega \\ &\geq \frac{\theta}{2}\derr\sum_l \abs{\zl}\norm{\cl}{\Lp{2}{}}^2 + \alpha_D \sum_l\abs{\zl}\norm{\grad\cl}{\Lp{2}{}}^2~~. \end{align*} For the convection integrals, we first integrate by parts and insert equation \eqref{eq:darcyWeakDIV-fpi}. Second, we use \Holder\ and \cref{lemma:interpol-boundary} with the free parameter $\delta$ rescaled to $\norm{f}{\Lp[\GammaT]{\infty}{}}^{-1}\delta$. This leads us to \begin{align*} & -\sum_l\scp{{\cl}\fieldF}{\grad(\abs{\zl}\cl)}_\Omega = -\frac{1}{2} \sum_l\abs{\zl}\scp{\fieldF}{\grad\cl^2}_\Omega = -\frac{1}{2} \sum_l\abs{\zl}\scp{f}{\cl^2}_\Gamma \\ &\geq -\delta \sum_l\abs{\zl}\norm{\grad\cl}{\Lp{2}{}}^2 ~-~ \delta^{-1} \norm{f}{\Lp[\GammaT]{\infty}{}}^2 \sum_l\abs{\zl}\norm{\cl}{\Lp{2}{}}^2 . \end{align*} Analogously, for the electric~drift integral we integrate by parts and insert equation \eqref{eq:gaussWeakDIV-fpi}.
This yields \begin{align*} I_{el}:=& -\frac{e}{\permitELscalar\boltz\temp} \sum_l\scp{\zl{\cl}\fieldEL}{\grad(\abs{\zl}\cl)}_\Omega ~= -\frac{e}{\permitELscalar\boltz\temp} \sum_l\Sign(\zl)\scp{\abs{\zl}\cl\fieldEL}{\grad(\abs{\zl}\cl)}_\Omega\\ &= \frac{e}{\permitELscalar\boltz\temp} \sum_l\Sign(\zl)\sqbrac{~\scp{\rho_b+\chargeELold}{(\abs{\zl}\cl)^2}_\Omega - \scp{\sigma}{(\abs{\zl}\cl)^2}_\Gamma~}. \end{align*} Together with \Holder\ and \cref{lemma:interpol-boundary}, we arrive from this identity (with a rescaled $\delta$) at \begin{align*} I_{el} &\geq \frac{e}{\permitELscalar\boltz\temp} \sum_l\Sign(\zl)\scp{\chargeELold}{(\abs{\zl}\cl)^2}_\Omega -\delta \sum_l\abs{\zl}\norm{\grad\cl}{\Lp{2}{}}^2 \\ &~~~~ -\frac{2e^2\max_l\abs{\zl}^2}{(\permitELscalar\boltz\temp)^2}\sqbrac{\delta^{-1}\norm{\sigma}{\Lp[\GammaT]{\infty}{}}^2+\norm{\rho_b}{\Lp[\OmegaT]{\infty}{}}} \sum_l\abs{\zl}\norm{\cl}{\Lp{2}{}}^2 \\ &~~~~ =: I.a + I.b + I.c. \end{align*} Now it remains to control the integral~$I.a$. As we assumed $\cl=\clold$, the free charge density~$\chargeELold$ is given by $\chargeELold=\chargeELlongTwo$. This shows with \cref{lemma:sign} \label{page:signCondition} \begin{align*} I.a = \frac{e}{\permitELscalar\boltz\temp}~\scp{\chargeELlongTwo}{(\zl[1]\cl[1])^2-(\abs{\zl[2]}\cl[2])^2}_\Omega \geq 0. \end{align*} This sign condition\footnote{Thanks to $I.a\geq0$, we can avoid bounding the integral~$I.a$ by suitable norms of its integrands, which would cause serious problems. See \cref{sec:Introduction} for further details.} only holds true because we use the weighted test function~$\test=\abs{\zl}\cl$ instead of $\test=\cl$. For the surface integrals, we involve \ref{Assump:BoundData}, \cref{lemma:interpol-boundary}, and \Young.
Thereby, we get \begin{align*} & \sum_l \scp{\gl}{\abs{\zl}\cl}_\Gamma ~\leq~ \sum_l\abs{\zl}\norm{\cl}{\Lp[\Gamma]{2}{}}^2 + \max_l\abs{\zl}\sum_l\norm{\gl}{\Lp[\Gamma]{2}{}}^2 \\ &\leq \delta \sum_l \abs{\zl} \norm{\grad\cl}{\Lp{2}{}}^2 +2\delta^{-1} \sum_l \abs{\zl}\norm{\cl}{\Lp{2}{}}^2 +\max_l\abs{\zl} \sum_l\norm{\gl}{\Lp[\Gamma]{2}{}}^2 ~. \end{align*} Applying \ref{Assump:Reaction} and \Young, and recalling $\cl\geq0$, yields for the reaction integrals \begin{align*} & \sum_l\scp{\theta\Rl(\Cl)}{\abs{\zl}\cl}_\Omega \leq \theta \max_l C_{\Rl} \scp{\abs{\Cl}}{\sum_l\abs{\zl}\cl}_\Omega \leq \theta \max_l C_{\Rl} \scp{\sum_l\abs{\zl}\cl}{\sum_l\abs{\zl}\cl}_\Omega \\ &\leq 3\theta \max_l\abs{\zl}C_{\Rl} \sum_l \norm{\abs{\zl}\cl}{\Lp{2}{}}^2 ~. \end{align*} By combining the preceding estimates, we deduce with the choice~$\delta:=\frac{\alpha_D}{6}$ the estimate \begin{align}\label{eq:energy-energy1} & \frac{\theta}{2}\derr\sum_l \abs{\zl}\norm{\cl}{\Lp{2}{}}^2 + \frac{\alpha_D}{2} \sum_l\abs{\zl}\norm{\grad\cl}{\Lp{2}{}}^2 \nonumber\\ &\leq \frac{2e^2}{(\permitELscalar\boltz\temp)^2}\sqbrac{\frac{6}{\alpha_D} \norm{\sigma}{\Lp[\GammaT]{\infty}{}}^2+\norm{\rho_b}{\Lp[\OmegaT]{\infty}{}}} \sum_l\abs{\zl}\norm{\cl}{\Lp{2}{}}^2 \nonumber\\ &~~~~ +\sqbrac{\frac{6}{\alpha_D} \norm{f}{\Lp[\GammaT]{\infty}{}}^2 + 3\theta \max_l\abs{\zl}C_{\Rl} }\sum_l\abs{\zl}\norm{\cl}{\Lp{2}{}}^2 +\max_l\abs{\zl}\sum_l\norm{\gl}{\Lp[\Gamma]{2}{}}^2~. \end{align} For ease of readability, we introduce the abbreviation \begin{align*} B_0 &:= \max\!\brac{\frac{2}{\theta},\frac{2}{\alpha_D}} \frac{12e^2\max_l\abs{\zl}^2}{\alpha_D(\permitELscalar\boltz\temp)^2} \left[ \norm{\sigma}{\Lp[\GammaT]{\infty}{}}^2 + \norm{\rho_b}{\Lp[\OmegaT]{\infty}{}} + \norm{f}{\Lp[\GammaT]{\infty}{}}^2 + 3\theta \max_lC_{\Rl} \right].
\end{align*} Thus, we immediately obtain from \eqref{eq:energy-energy1} together with \Grownwall\ \begin{align*} \sum _l\abs{\zl}\norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}}^2 \leq e^{B_0\timeEnd} \sqbrac{\sum_l\abs{\zl}\norm{\clstart}{\Lp{2}{}}^2 + \max_l\abs{\zl}\sum_l\norm{\gl}{\Lp{2}{}}^2} ~=: \hat{C}^2_0(\timeEnd). \end{align*} We substitute this bound into \eqref{eq:energy-energy1} and we integrate in time over $[0,\timeEnd]$. This yields \begin{align}\label{eq:energy-statedbound-1} & \sum _l \abs{\zl}\sqbrac{\norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\cl}{\Lp[\OmegaT]{2}{}} } \nonumber\\ &\leq \hat{C}_0 + B_0^{\frac{1}{2}}\max_l\abs{\zl}\timeEnd^{\frac{1}{2}}\hat{C}_0+ \max_l\abs{\zl}\sum_l\norm{\gl}{\Lp{2}{}} ~=: C_0(\timeEnd)~. \end{align} \underline{Case 2: $\cl\neq\clold$}~~ Again, we test equations~\eqref{eq:transportWeak-fpi} with $\test:=\abs{\zl}\cl\in\spaceTC$. The above estimates for the respective integrals remain unchanged, except for the integral~$I.a$, which does not fulfill a sign condition this time. Thus, we now bound this integral with \Holder\ by \begin{align*} I.a &= \frac{e}{\permitELscalar\boltz\temp} \scp{\chargeELold}{\sum_l\Sign(\zl)\brac{\abs{\zl}\cl}^2}_\Omega ~\leq \frac{e}{\permitELscalar\boltz\temp} \max_l\abs{\zl}^2\norm{\Clold}{\Lp{\infty}{}}~~ \sum_l\abs{\zl}\norm{\cl}{\Lp{2}{}}^2. \end{align*} Herein, the constant depends on the $L^\infty$-norms of the $\clold$. However, this is uncritical in this case, as we assumed $\cl\neq\clold$. Recall that in \cref{def:FixOp} we introduced the space~$X$ and the set~$\fixSet\subset X$, and that we supposed $\Clold\in\fixSet$, which ensures that the $L^\infty$-norms of the $\clold$ remain finite. Thus, provided we know $\norm{\Clold}{\Lp{\infty}{}}\leq R$ for all $\Clold\in\fixSet$, the constant in the above estimate just depends on an additional parameter~$R$.
In conclusion, with the redefined constant \begin{align*} B_0 &:=\frac{24e^2\max_l\abs{\zl}^2C_{\Rl}}{\min(\theta,\alpha_D)\alpha_D(\permitELscalar\boltz\temp)^2} \left[ \norm{\sigma}{\Lp[\GammaT]{\infty}{}}^2 + \norm{\rho_b}{\Lp[\OmegaT]{\infty}{}} + \norm{f}{\Lp[\GammaT]{\infty}{}}^2 + 3\theta + \norm{\Clold}{\Lp{\infty}{}} \right], \end{align*} we obtain analogously to \eqref{eq:energy-statedbound-1} \begin{align}\label{eq:energy-statedbound-2} \sum _l \abs{\zl}\sqbrac{\norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\cl}{\Lp[\OmegaT]{2}{}} } \leq C_0(\timeEnd,\norm{\Clold}{\Lp{\infty}{}} )~. \end{align} \end{proof} \begin{remark}\label{remark:boundForFreeCharge} In particular, \cref{lemma:energy} ensures $\norm{\chargeELold}{\Lp[I]{\infty}{;\Lp{2}{}}} \leq \max_l\abs{\zl} C_0$. This uniform bound holds for both cases, $\chargeEL=\chargeELold$ and $\chargeEL\neq\chargeELold$. $\square$ \end{remark} Next, we show that the chemical species~$\cl$ are bounded. As the proof is rather long and technical, we separate it from the proof of the remaining a~priori bounds in \cref{thm:aprioriBounds}. \begin{lemma}[Boundedness]\label{lemma:bounded} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ be a weak solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:FixOp}. Then, we have \begin{align*} \sum_l \norm{\cl}{\Lp[\OmegaT]{\infty}{}} ~\leq~ C_M~. \end{align*} Herein, the dependency of the constant is \begin{align*} C_M=C_M \!\brac{\timeEnd,\max_l\abs{\zl},\norm{\gl}{\Lp[\OmegaT]{\infty}{}},\norm{f}{\Lp[\GammaT]{\infty}{}},\norm{\sigma}{\Lp[\GammaT]{\infty}{}},\norm{\rho_b}{\Lp[\OmegaT]{\infty}{}},\norm{\clstart}{\Lp{\infty}{}} } . \end{align*} \end{lemma} \begin{proof} As we have already established a lower bound for $\cl$ in \cref{lemma:nonnegative}, it remains to show an upper bound. To this end, we apply Moser's iteration technique in the following, cf.
\cite{Moser1,Moser2}. More precisely, we follow the proof of \cite[Theorem 6.15]{Liebermann-book} with a modified test function. \par Henceforth, we use the truncated solutions $\clm:=\min(\cl,m)$. For ease of readability, we split the proof into several steps.\\[2.0mm] \underline{Step 1: preliminary energy estimates}\\ The crucial step in Moser's iteration technique is to derive an energy estimate for $\cl$ to arbitrarily high powers, i.e., for $\cl^{\alpha+1}$ with $\alpha\geq0$. For that purpose, we test equations~\eqref{eq:transportWeak-fpi} by $\varphi := \brac{\clm}^{2\alpha+1}$ for $\alpha\geq0$, we sum over $l=1,2$, and we bound the respective integrals. This part of the proof is related to the proof of \cref{lemma:energy}. For this reason, we just briefly repeat the similar parts. Firstly, using the above test function yields with \ref{Assump:Ellip} for the diffusion integrals \begin{align*} (2\alpha+1) \sum_l\scp{\Dl\grad\cl}{(\clm)^{2\alpha}\grad\clm}_\Omega \geq \frac{\alpha_D}{\alpha+1} \sum_l\norm{\grad(\clm)^{\alpha+1}}{\Lp{2}{}}^2~. \end{align*} We first transform the convection integrals by integration by parts and by inserting equation \eqref{eq:darcyWeakDIV-fpi}. Then, we involve \Holder\ and \cref{lemma:interpol-boundary} (with a rescaled parameter $\delta$). Thereby, we arrive for the convection integrals at \begin{align*} & -\sum_l\scp{{\cl}\fieldF}{\grad\brac{\clm}^{2\alpha+1}}_\Omega = -\sum_l\frac{2\alpha+1}{(2\alpha+2)} \scp{f}{\brac{\clm}^{2\alpha+2}}_\Gamma \\ &\geq -\delta \sum_l\norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 ~-~ 2\delta^{-1} \norm{f}{\Lp[\GammaT]{\infty}{}}^2 \sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2.
\end{align*} Analogously, we transform the electric drift integrals with equation~\eqref{eq:gaussWeakDIV-fpi} to \begin{align*} I_{el}:=& -\frac{e}{\permitELscalar\boltz\temp} \sum_l\scp{\zl{\cl}\fieldEL}{\grad\brac{\clm}^{2\alpha+1}}_\Omega ~= -\frac{e(2\alpha+1)}{\permitELscalar\boltz\temp(2\alpha+2)} \sum_l\zl\scp{\fieldEL}{\grad\brac{\clm}^{2\alpha+2}}_\Omega\\ &= \frac{e(2\alpha+1)}{\permitELscalar\boltz\temp(2\alpha+2)} \sum_l\zl\sqbrac{~\scp{\rho_b+\chargeELold}{\brac{\clm}^{2\alpha+2}}_\Omega - \scp{\sigma}{\brac{\clm}^{2\alpha+2}}_\Gamma~}. \end{align*} Next, we apply \Holder\ and \cref{lemma:interpol-boundary}. Thereby, we come (with a rescaled $\delta$) to \begin{align*} I_{el} &\geq -\frac{e(2\alpha+1)\max_l\abs{\zl} }{\permitELscalar\boltz\temp(2\alpha+2)}\norm{\chargeELold}{\Lp{2}{}}\sum_l\norm{\brac{\clm}^{2\alpha+2}}{\Lp{2}{}} -\delta \sum_l\norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 \\ &~~~~ -\underbrace{\frac{2e^2}{(\permitELscalar\boltz\temp)^2}\max_l\abs{\zl}\sqbrac{\delta^{-1}\norm{\sigma}{\Lp[\GammaT]{\infty}{}}^2+\norm{\rho_b}{\Lp[\OmegaT]{\infty}{}}} }_{=:K_0} \sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 \\ &~~~~ =: I.a + I.b + I.c. \end{align*} The integrals~$I.b$ and $I.c$ are already of the desired form. As to the integral~$I.a$, we apply \GagNirenberg, cf. \cite{nirenberg1959}, \Young, \cref{lemma:energy}, and \cref{remark:boundForFreeCharge}.
This shows \label{page:withoutSignCondition} \begin{align*} I.a &= -\frac{e(2\alpha+1)}{\permitELscalar\boltz\temp(2\alpha+2)}\max_l\abs{\zl} \norm{\chargeELold}{\Lp{2}{}}\sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{4}{}}^2 \\ &\geq -\frac{e}{\permitELscalar\boltz\temp} \max_l\abs{\zl}^2 C_0C_{gn} \sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^{(4-n)/2}\norm{\brac{\clm}^{\alpha+1}}{\Hk{1}{}}^{n/2}\\ &\geq -\delta \sum_l\norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 - \underbrace{C(\delta^{-1})\frac{e^4}{(\permitELscalar\boltz\temp)^4} \max_l\abs{\zl}^8 C_0^4C_{gn}^4}_{=:K_1} \sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 . \end{align*} Substituting this into the above intermediate estimate results for the electric drift integral in \begin{align*} I_{el} \geq -\delta \sum_l\norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 -\sqbrac{K_0+K_1} \sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 . \end{align*} For the surface integrals, we involve \ref{Assump:BoundData} and \cref{lemma:interpol-boundary}. Furthermore, we apply \Young\ with $q=\frac{2\alpha+2}{2\alpha+1}$ and $p=2\alpha+2$. Thereby, we get \begin{align*} & \sum_l \scp{\gl}{\brac{\clm}^{2\alpha+1}}_\Gamma ~\leq~ \sum_l\norm{\brac{\clm}^{2\alpha+2}}{\Lp[\Gamma]{1}{}} + \sum_l\norm{\gl^{2\alpha+2}}{\Lp[\Gamma]{1}{}} \\ &\leq \delta \sum_l \norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 +2\delta^{-1} \sum_l \norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 +\sum_l\norm{\gl^{\alpha+1}}{\Lp[\Gamma]{2}{}}^2 ~.
\end{align*} A combination of the preceding estimates, together with the not yet considered time integrals and reaction integrals, yields with the choice~$\delta:=\frac{\alpha_D}{6(\alpha+1)}$ the preliminary energy estimate \begin{align}\label{eq:moser-energy-preliminary-1} & \theta \sum _l\dualp{\dert\cl}{\brac{\clm}^{2\alpha+1}}_{1,\Omega} + \frac{\alpha_D}{2(\alpha+1)}~\sum_l\norm{\grad\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 \nonumber\\ &\leq \frac{12(\alpha+1)}{\alpha_D}\sqbrac{\norm{f}{\Lp[\GammaT]{\infty}{}}+1+K_0+K_1}\sum_l\norm{\brac{\clm}^{\alpha+1}}{\Lp{2}{}}^2 \nonumber\\ &~~~~ +\sum_l\norm{\gl^{\alpha+1}}{\Lp[\Gamma]{2}{}}^2 + \sum_l\scp{\theta\Rl(\Cl)}{\brac{\clm}^{2\alpha+1}}_\Omega~. \end{align} \underline{Step 2: base case}~~ Before we continue the proof, we define for $j\in\setN_0$ a sequence of exponents~$\alpha_j$ by \begin{align} \label{eq:moser-exponents} 1+\alpha_j &:= \brac{\frac{n+2}{n}}^j ~~~\Hence~ 1+\alpha_j = (1+\alpha_{j-1})(1+\alpha_1)~. \end{align} Additionally, we cite from \cite[Proposition~3.2]{DiBenedetto-book} the parabolic embedding \begin{align}\label{eq:moser-parab-embedding} \norm{\cl}{\Lp[\OmegaT]{2\frac{n+2}{n}}{}} \leq C_S\!\brac{\timeEnd,\Gamma,n}~\sqbrac{ \norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\cl}{\Lp[\OmegaT]{2}{}} ~}. \end{align} We now begin Moser's iteration procedure, which we rigorously formulate as a mathematical induction. We start with $j=0$. Thus, we have $1+\alpha_0=1$, which means $\alpha_0=0$. Substituting this into \eqref{eq:moser-energy-preliminary-1}, we can safely let $m\rightarrow\infty$, as every integral remains finite. Furthermore, we estimate the reaction integrals and the time integrals exactly as in the proof of \cref{lemma:energy}. Thereby, we rediscover a slightly modified version of \eqref{eq:energy-energy1}.
That is, we arrive with $m\rightarrow\infty$ at \begin{align*} & \frac{\theta}{2} \sum _l\derr\norm{\cl}{\Lp{2}{}}^2 + \frac{\alpha_D}{2} \sum_l\norm{\grad\cl}{\Lp{2}{}}^2 \nonumber\\ &\leq \frac{12}{\alpha_D} \sqbrac{ \norm{f}{\Lp[\GammaT]{\infty}{}}+1+K_0+K_1+3\theta\max_l C_{\Rl} }\sum_l\norm{\cl}{\Lp{2}{}}^2 + \sum_l\norm{\gl}{\Lp[\Gamma]{2}{}}^2~. \end{align*} Again, we deduce with \Grownwall\ \begin{subequations} \begin{align}\label{eq:moser-bound1-0} & \sum _l \sqbrac{\norm{\brac{\cl}}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\brac{\cl}}{\Lp[\OmegaT]{2}{}} } \leq \hat{C}_0 + B_0^{\frac{1}{2}}\timeEnd^{\frac{1}{2}}\hat{C}_0+ \sum_l\norm{\gl}{\Lp{2}{}} ~=: C_0(\timeEnd)~. \end{align} This time, we have denoted the constants by \begin{align*} B_0:=&\max\!\brac{\frac{2}{\theta}, \frac{2}{\alpha_D}} \frac{12\max_l\abs{\zl}}{\alpha_D}\sqbrac{\norm{f}{\Lp[\GammaT]{\infty}{}}+1 + K_0+K_1 + 3\theta \max_l C_{\Rl} },\\ \hat{C}^2_0 := & e^{B_0\timeEnd} \sqbrac{\sum_l\norm{\clstart}{\Lp{2}{}}^2 + \sum_l\norm{\gl}{\Lp{2}{}}^2} . \end{align*} By combining \eqref{eq:moser-bound1-0}, the definition of $\alpha_1$ in \eqref{eq:moser-exponents}, and the embedding~\eqref{eq:moser-parab-embedding}, we finally obtain (with the elementary inequality $(a+b)^{1/p} \leq a^{1/p}+b^{1/p}$ for $a,b\geq0$ and $p\geq 1$) \begin{align}\label{eq:moser-bound2-0} & \sqbrac{\sum _l \norm{\cl}{\Lp[\OmegaT]{2(1+\alpha_1)}{}}^{2(1+\alpha_1)}}^{\frac{1}{2(1+\alpha_1)}} \leq \sum _l \norm{\cl}{\Lp[\OmegaT]{2(1+\alpha_1)}{}} = \sum _l \norm{\cl}{\Lp[\OmegaT]{2\frac{n+2}{n}}{}} \nonumber\\ &\leq C_S\sum _l \sqbrac{\norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\cl}{\Lp[\OmegaT]{2}{}} } \leq C_SC_0~. \end{align} \end{subequations} \underline{Step 3: induction hypothesis}~~ Let $j\in\setN_0$ and let $\alpha_j$ be the corresponding exponent defined in \eqref{eq:moser-exponents}.
We suppose that there exists a constant $C_{j-1}(\timeEnd)$ such that we have \begin{subequations} \begin{align} \sum _l \sqbrac{\norm{\brac{\cl}^{1+\alpha_{j-1}}}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\brac{\cl}^{1+\alpha_{j-1}}}{\Lp[\OmegaT]{2}{}} } &\leq C^{1+\alpha_{j-1}}_{j-1}(\timeEnd), \label{eq:moser-bound1-j} \\ \sum _l \norm{\brac{\cl}^{\alpha_j}}{\Lp[\OmegaT]{2}{}} &\leq C_{j-1}(\timeEnd)~. \label{eq:moser-bound2-j} \end{align} \end{subequations} We note that \eqref{eq:moser-bound1-j} and \eqref{eq:moser-bound2-j} reduce for $j=0$ exactly to \eqref{eq:moser-bound1-0} and \eqref{eq:moser-bound2-0}.\\[2.0mm] \underline{Step 4: inductive step}~~ We return with the choice of $\alpha:=\alpha_j$ to \eqref{eq:moser-energy-preliminary-1} and we can safely let $m\rightarrow\infty$, as we already know \eqref{eq:moser-bound2-j}. For the reaction integrals, this leads analogously to the base case with \Young\ $(p=\frac{2\alpha_j+2}{2\alpha_j+1}, q=2\alpha_j+2)$ to \begin{align*} \sum_l\scp{\theta\Rl(\Cl)}{\brac{\cl}^{2\alpha_j+1}}_\Omega \leq 3\theta \max_l C_{\Rl} \sum_l \norm{\brac{\cl}^{\alpha_j+1}}{\Lp{2}{}}^2 ~. \end{align*} Additionally, we estimate the time integrals analogously to the base case. Thus, after letting $m\rightarrow\infty$, by incorporating the bounds for the reaction integrals and the time integrals, and by introducing the abbreviations \begin{align*} A_j:=&\min\!\brac{ \frac{\theta}{(2\alpha_j+2)}, \frac{\alpha_D}{2(\alpha_j+1)} },\\ B_j:=&\frac{12(\alpha_j+1)}{A_j\alpha_D}\sqbrac{\norm{f}{\Lp[\GammaT]{\infty}{}}+ 1 + K_0 + K_1 + 3\theta \max_l C_{\Rl} }, \end{align*} we finally obtain the energy estimate \begin{align*} \derr\!\sum_l\norm{\brac{\cl}^{\alpha_j+1}}{\Lp{2}{}}^2 \!+\! \sum_l\norm{\grad\brac{\cl}^{\alpha_j+1}}{\Lp{2}{}}^2 \!\leq\! B_j\! \sum_l\norm{\brac{\cl}^{\alpha_j+1}}{\Lp{2}{}}^2 \!\!+\! \sum_l\norm{\gl^{\alpha_j+1}}{\Lp[\Gamma]{2}{}}^2.
\end{align*} Hence, we conclude with \Grownwall\ the uniform bound \begin{align*} \sum _l\norm{\brac{\cl}^{\alpha_j+1}}{\Lp[I]{\infty}{;\Lp{2}{}}}^2 \leq \underbrace{e^{B_j\timeEnd} \sqbrac{\sum_l\norm{\brac{\clstart}^{\alpha_j+1}}{\Lp{2}{}}^2 + \sum_l\norm{\gl^{\alpha_j+1}}{\Lp{2}{}}^2} }_{:= \hat{C}^{2(\alpha_j+1)}_j(\timeEnd)}. \end{align*} Next, we integrate the above energy estimate in time over $[0,\timeEnd]$ and we involve the preceding bound. Thereby, we deduce the stated inequality~\eqref{eq:moser-bound1-j} \begin{subequations} \begin{align}\label{eq:moser-bound1-j+1} & \sum _l \sqbrac{\norm{\brac{\cl}^{\alpha_j+1}}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\brac{\cl}^{\alpha_j+1}}{\Lp[\OmegaT]{2}{}} } \nonumber\\ &\leq \hat{C}^{\alpha_j+1}_j + B_j^{\frac{1}{2}}\timeEnd^{\frac{1}{2}}\hat{C}^{\alpha_j+1}_j+ \sum_l\norm{\gl^{\alpha_j+1}}{\Lp{2}{}} ~:= C^{\alpha_j+1}_j(\timeEnd)~. \end{align} Furthermore, with the definition of $\alpha_j$ in \eqref{eq:moser-exponents}, and the embedding~\eqref{eq:moser-parab-embedding}, we arrive at the stated bound~\eqref{eq:moser-bound2-j} \begin{align}\label{eq:moser-bound2-j+1} & \sqbrac{ \sum _l \norm{\cl}{\Lp[\OmegaT]{2(1+\alpha_{j+1})}{}}^{2(1+\alpha_{j+1})} }^{\frac{1}{2(1+\alpha_{j+1})}} \leq \sqbrac{ \sum _l \norm{\brac{\cl}^{\alpha_j+1}}{\Lp[\OmegaT]{2(1+\alpha_1)}{}} }^{\frac{1}{(1+\alpha_j)}} \nonumber\\ &\leq \sqbrac{ C_S\sum _l \brac{\norm{\brac{\cl}^{\alpha_j+1}}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\grad\brac{\cl}^{\alpha_j+1}}{\Lp[\OmegaT]{2}{}} } }^{\frac{1}{(1+\alpha_j)}} \nonumber\\ &\leq \sqbrac{C_SC^{1+\alpha_j}_j}^{\frac{1}{(1+\alpha_j)}} ~\leq C_S^{\frac{1}{(1+\alpha_j)}}C_j ~. \end{align} This shows that the induction hypothesis holds for all $j\in\setN$.\\[2.0mm] \underline{Step 5: limit case:}~~ We now consider the limit case $j\rightarrow\infty$. 
First of all, we note that we have \begin{align*} \norm{\Cl}{\Lp[\OmegaT]{p}{}} &:= \brac{ \sum_l \norm{\brac{\cl}}{\Lp[\OmegaT]{p}{}}^p }^{\frac{1}{p}} && 1\leq p < \infty~, \\ \norm{\Cl}{\Lp[\OmegaT]{\infty}{}} &:= \sum_l \norm{\brac{\cl}}{\Lp[\OmegaT]{\infty}{}}~, && p=\infty~. \end{align*} Hence, we can rewrite \eqref{eq:moser-bound2-j+1} as \begin{align}\label{eq:moser-bound2-j+1-new} \norm{\Cl}{\Lp[\OmegaT]{2(1+\alpha_{j+1})}{}} = \sqbrac{ \sum _l \norm{\cl}{\Lp[\OmegaT]{2(1+\alpha_{j+1})}{}}^{2(1+\alpha_{j+1})} }^{\frac{1}{2(1+\alpha_{j+1})}} \leq C_S^{\frac{1}{(1+\alpha_j)}}C_j ~. \end{align} \end{subequations} Our goal is to show that this inequality holds even in the limit case $j=\infty$. For that purpose, we recall that $a^{1/p}\rightarrow1$ as $p\rightarrow\infty$ for all $a>0$. Furthermore, from the definition of $\alpha_j$ in \eqref{eq:moser-exponents}, we know that $\alpha_j\rightarrow\infty$ as $j\rightarrow\infty$. Thus, with the definition of the constants $C_j$, we obtain in the limit \begin{align*} C_\infty &:= \lim_{j\rightarrow\infty} \brac{C_S^{\frac{1}{(1+\alpha_j)}}C_j} ~= \brac{\lim_{j\rightarrow\infty} C_S^{\frac{1}{(1+\alpha_j)}} } \brac{\lim_{j\rightarrow\infty} C_j } ~= \lim_{j\rightarrow\infty} C_j \\ &\leq \lim_{j\rightarrow\infty} \hat{C}_j + \lim_{j\rightarrow\infty} B_j^{\frac{1}{2(1+\alpha_j)}} \timeEnd^{\frac{1}{2(1+\alpha_j)}} \hat{C}_j + \lim_{j\rightarrow\infty}\sum_l\norm{\gl^{\alpha_j+1}}{\Lp{2}{}}^{\frac{1}{(1+\alpha_j)}}\\ &:= C_{\infty,1} + C_{\infty,2} + C_{\infty,3}. \end{align*} Next, we recall from \cite[Theorem~2.14]{Adams2-book} that for all $u\in\Lp{\infty}{}$ we have \begin{align}\label{eq:LpNorm->MaxNorm} \norm{u}{\Lp{\infty}{}} = \lim\limits_{p\rightarrow\infty} \norm{u}{\Lp{p}{}}~. \end{align} Together with \ref{Assump:BoundData}, this immediately reveals that $C_{\infty,3}=\lim_{j\rightarrow\infty} \sum_l\norm{\gl}{\Lp{2(\alpha_j+1)}{}} = \sum_l\norm{\gl}{\Lp{\infty}{}}$.
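For the convenience of the reader, we spell out the rescaling identity behind this computation: for nonnegative $u\in\Lp{2(\alpha_j+1)}{}$, we have \begin{align*} \norm{u^{\alpha_j+1}}{\Lp{2}{}}^{\frac{1}{1+\alpha_j}} = \sqbrac{ \norm{u}{\Lp{2(\alpha_j+1)}{}}^{2(\alpha_j+1)} }^{\frac{1}{2(\alpha_j+1)}} = \norm{u}{\Lp{2(\alpha_j+1)}{}}~. \end{align*} Applied to $u=\gl$, this identity transforms the limit defining $C_{\infty,3}$ into the form to which \eqref{eq:LpNorm->MaxNorm} applies.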
Furthermore, we arrive with \eqref{eq:LpNorm->MaxNorm}, the definition of $\hat{C}_j$ and $(e^x)^y=e^{xy}$ for $x,y>0$ at \begin{align*} C_{\infty,1} &\leq e^{\frac{B_0\timeEnd}{2}} \brac{\sum_l\norm{\clstart}{\Lp{\infty}{}} + \sum_l\norm{\gl}{\Lp{\infty}{}}}, \end{align*} where we used $\lim_{j\rightarrow\infty} \exp\!\brac{\frac{B_j\timeEnd}{2(1+\alpha_j)} } \leq \exp\!\brac{\frac{B_0\timeEnd}{2} }$ due to the definition of $B_j$ and $B_0$. Analogously, we get for $C_{\infty,2}$ with $j^{1/j}\rightarrow1$ as $j\rightarrow\infty$ \begin{align*} C_{\infty,2} &= \lim_{j\rightarrow\infty} B_j^{\frac{1}{2(1+\alpha_j)}} \timeEnd^{\frac{1}{2(1+\alpha_j)}} \hat{C}_j \leq e^{\frac{B_0\timeEnd}{2}} \brac{\sum_l\norm{\clstart}{\Lp{\infty}{}} + \sum_l\norm{\gl}{\Lp{\infty}{}}}. \end{align*} Combining the preceding estimates shows that \begin{align*} C_\infty \leq 2 e^{\frac{B_0\timeEnd}{2}} \brac{\sum_l\norm{\clstart}{\Lp{\infty}{}} + \sum_l\norm{\gl}{\Lp{\infty}{}}} + \sum_l\norm{\gl}{\Lp{\infty}{}}. \end{align*} We can now safely let $j\rightarrow\infty$ in \eqref{eq:moser-bound2-j+1-new}. Together with \eqref{eq:LpNorm->MaxNorm}, we thereby finally arrive at \begin{align*} &\sum_l \norm{\cl}{\Lp[\OmegaT]{\infty}{}} = \norm{\Cl}{\Lp[\OmegaT]{\infty}{}} = \lim_{j\rightarrow\infty} \norm{\Cl}{\Lp[\OmegaT]{2(\alpha_j+1)}{}} \\ &\leq 2 e^{\frac{B_0\timeEnd}{2}} \brac{\sum_l\norm{\clstart}{\Lp{\infty}{}} + \sum_l\norm{\gl}{\Lp{\infty}{}}} + \sum_l\norm{\gl}{\Lp{\infty}{}} ~=: C_M(\timeEnd)~. \end{align*} \end{proof} Finally, we show the desired a~priori bounds for a solution of the \dpnp. These a~priori bounds are crucial for the proof of \cref{thm:existence}. \begin{thm}[A priori Bounds]\label{thm:aprioriBounds} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid and let $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ be a weak solution of \eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:weaksolution}.
Then, we have \begin{align*} & \norm{\potEL}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\fieldEL}{\Lp[I]{\infty}{;\Lp{2}{}}} ~\leq~ C(\timeEnd)~, \\ & \norm{\press}{\Lp[I]{\infty}{;\Lp{2}{}}} + \norm{\fieldF}{\Lp[I]{\infty}{;\Lp{2}{}}} ~\leq~ C(\timeEnd)~, \\ & \sum_l\sqbrac{ \norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}}+\norm{\cl}{\Lp[I]{2}{;\spaceTC}}+\norm{\cl}{\Hk[I]{1}{;\spaceTC^\ast}}+\norm{\cl}{\Lp[\OmegaT]{\infty}{}} }~\leq~ C(\timeEnd)~. \end{align*} \end{thm} \begin{remark} \textnormal{ In the following proof, we derive the constants of the stated a~priori estimates in detail. This reveals how the constants depend on the data. However, for the following \cref{thm:existence}, especially the dependence on the end time~$\timeEnd$ is of interest. This is the reason why we just stated $C=C(\timeEnd)$ for the constants. } $\square$ \end{remark} \begin{proof} For ease of readability, we split the proof of the stated a~priori estimates into several steps.\\[2.0mm] \underline{Step~1.1 -- energy estimates and boundedness for $\cl$:}~~ These a~priori bounds are shown in \cref{lemma:energy} and \cref{lemma:bounded}.\\[2.0mm] \underline{Step~1.2 -- estimates for $\dert\cl$:}~~ We abbreviate $B:=\Lp[I]{2}{;\spaceTC}$. By involving equations~\eqref{eq:transportWeak-fpi}, we obtain the identity \begin{align*} & \theta\norm{\dert\cl}{\Lp[I]{2}{;\spaceTC^\ast}}:=\sup_{\norm{\test}{B}\leq1} \dualp[]{\theta\dert\cl}{\test}_{B^\ast\times B} \\ &= \sup_{\norm{\test}{B}\leq1} \sqbrac{ -\scp[]{\Dl\grad\cl}{\grad\test}_{\OmegaT} + \scp[]{{\cl}[\fieldF+\coeffEL\fieldEL]}{\grad\test}_{\OmegaT} + \scp[]{\theta\Rl(\Cl)}{\test}_{\OmegaT} + \scp[]{\gl}{\test}_{\GammaT} } \\ &=: I.1 ~+~ I.2 ~+~ I.3 ~+~ I.4~.
\end{align*} For $I.1$ and $I.3$, we arrive with \Holder, \ref{Assump:Ellip}, \ref{Assump:Reaction}, and \cref{lemma:bounded} at \begin{align*} I.1 + I.3 &\leq C_D\norm{\grad\cl}{\Lp[\OmegaT]{2}{}} \sqbrac{\sup_{\norm{\test}{B}\leq1} \norm{\grad\test}{\Lp[\OmegaT]{2}{}}} +C_{\Rl}\sum_l \norm{\cl}{\Lp[\OmegaT]{2}{}} \sqbrac{\sup_{\norm{\test}{B}\leq1}\norm{\test}{\Lp[\OmegaT]{2}{}}} \\ &\leq (C_D+C_{\Rl}) \sum_l \norm{\cl}{B} ~\leq~ (C_D+\max_l C_{\Rl})C_0(\timeEnd) =: C_1(\timeEnd). \end{align*} We bound the integral~$I.2$ with \Holder\ and \cref{lemma:bounded} by \begin{align*} I.2 &\leq \norm{{\cl}[\fieldF+\coeffEL\fieldEL]}{\Lp[\OmegaT]{2}{}} \sqbrac{ \sup_{\norm{\test}{B}\leq1}\norm{\grad\test}{\Lp[\OmegaT]{2}{}}} \\ &\leq \norm{\cl}{\Lp[\OmegaT]{\infty}{}} \norm{\fieldF+\coeffEL\fieldEL}{\Lp[\OmegaT]{2}{}} ~\leq~ C_M(\timeEnd) \norm{\fieldF+\coeffEL\fieldEL}{\Lp[\OmegaT]{2}{}} ~. \end{align*} For $I.4$, we immediately get with \Holder\ and \cref{lemma:interpol-boundary} \begin{align*} I.4 \leq \norm{\gl}{\Lp[\GammaT]{2}{}} \sqbrac{ \sup_{\norm{\test}{B}\leq1} \norm{\test}{\Lp[\GammaT]{2}{}} } \leq \norm{\gl}{\Lp[\GammaT]{2}{}} \sqbrac{ C\sup_{\norm{\test}{B}\leq1} \norm{\test}{\Lp[\OmegaT]{2}{}} } \leq C\norm{\gl}{\Lp[\GammaT]{2}{}}~. \end{align*} Thus, by combining the estimates for $I.1$ -- $I.4$, we have shown \begin{align}\label{eq:apriori-np-time} \norm{\dert\cl}{\Lp[I]{2}{;\spaceTC^\ast}} \leq C_1(\timeEnd) + C_M(\timeEnd) \norm{\fieldF+\coeffEL\fieldEL}{\Lp[\OmegaT]{2}{}} + C\norm{\gl}{\Lp[\GammaT]{2}{}}. \end{align} \underline{Step~1.3 -- a priori estimates for $\cl$:}~~ We now put \cref{lemma:energy}, \cref{lemma:bounded}, and the estimates~\eqref{eq:apriori-np-time} together.
In anticipation of estimates~\eqref{eq:apriori-gauss} and \eqref{eq:apriori-darcy}, we obtain the desired a~priori bound \begin{align*} & \sum_l\sqbrac{ \norm{\cl}{\Lp[I]{\infty}{;\Lp{2}{}}}+\norm{\cl}{\Lp[I]{2}{;\spaceTC}}+\norm{\cl}{\Hk[I]{1}{;\spaceTC^\ast}}+\norm{\cl}{\Lp[\OmegaT]{\infty}{}} } \\ &\leq C_0(\timeEnd) + C_M(\timeEnd) + C_1(\timeEnd) + \max(1,\coeffEL)(C_e+C_f)C_M(\timeEnd) + C\norm{\gl}{\Lp[\GammaT]{2}{}} . \end{align*} \underline{Step~2.1 -- estimate for $\grad\cdot\fieldEL$:}~~ We test equation~\eqref{eq:gaussWeakDIV-fpi} with $\test=\grad\cdot\fieldEL$. Thereby, we directly get with \Young\ \begin{align*} \norm{\grad\cdot\fieldEL}{\Lp{2}{}}^2 \leq \norm{\rho_b}{\Lp{2}{}}^2 + \theta\max_l\abs{\zl} \sum_l\norm{\clold}{\Lp{2}{}}^2~. \end{align*} Since this estimate holds uniformly in time, we take the supremum over $t\in[0,\timeEnd]$ and come to \begin{align*} \norm{\grad\cdot\fieldEL}{\Lp[I]{\infty}{;\Lp{2}{}}}^2 \leq \norm{\rho_b}{\Lp[I]{\infty}{;\Lp{2}{}}}^2 + \theta\max_l\abs{\zl} \sum_l\norm{\clold}{\Lp[I]{\infty}{;\Lp{2}{}}}^2~. \end{align*} Hence, together with \cref{lemma:energy} we finally have (assume $\norm{\clold}{\Lp[I]{\infty}{;\Lp{2}{}}}\leq R$ in case $\clold\neq\cl$) \begin{align*} \norm{\grad\cdot\fieldEL}{\Lp[I]{\infty}{;\Lp{2}{}}} &\leq \begin{cases} \norm{\rho_b}{\Lp[I]{\infty}{;\Lp{2}{}}} + \theta\max_l\abs{\zl} R & \text{ if } \cl\neq\clold,\\[2.0mm] \norm{\rho_b}{\Lp[I]{\infty}{;\Lp{2}{}}} + \theta\max_l\abs{\zl} C_0 & \text{ if } \cl=\clold. \end{cases}\\[2.0mm] &=:C_{1,e}(\timeEnd,R)~. \end{align*} \underline{Step~2.2 -- estimate for $\potEL$:}~~ Next, we test equation~\eqref{eq:gaussWeak-fpi} with $\Test\in\spaceTF$. Due to \cite[Chapter~7.2]{Quarteroni-book}, we can choose $\Test$ such that $\grad\cdot\Test=\potEL$ and $\norm{\Test}{\Hkdiv{1}{}}\leq K\norm{\potEL}{\spaceTP}$ holds. 
This yields with \ref{Assump:Ellip} and \Young\ \begin{align*} \norm{\potEL}{\Lp{2}{}}^2 \leq \frac{K^2}{\permitELscalar\alpha_D}\norm{\fieldEL}{\Lp{2}{}}^2 ~~~~\Hence~ \norm{\potEL}{\Lp[I]{\infty}{;\Lp{2}{}}}^2 \leq \frac{K^2}{\permitELscalar\alpha_D}\norm{\fieldEL}{\Lp[I]{\infty}{;\Lp{2}{}}}^2~. \end{align*} \underline{Step~2.3 -- estimate for $\fieldEL$:}~~ Due to $\fieldEL\in\spaceSEF$ and \ref{Assump:BoundData}, we test equation~\eqref{eq:gaussWeak-fpi} with $\Test=\fieldEL-\vecsigma\in\spaceTF$. In addition, we test equation~\eqref{eq:gaussWeakDIV-fpi} with $\test=\potEL$. By adding these equations, we get with \ref{Assump:Ellip} and \Young\ \begin{align*} & \frac{1}{\permitELscalar C_D}\norm{\fieldEL}{\Lp{2}{}}^2 ~= \scp{\permitEL^{-1}\fieldEL}{\fieldEL}_\Omega ~\leq \scp{\permitEL^{-1}\fieldEL}{\vecsigma}_\Omega - \scp{\potEL}{\grad\cdot\vecsigma}_\Omega+ \scp{\rho_b + \chargeELold}{\potEL}_\Omega \\ &\leq \frac{\delta_1}{\permitELscalar\alpha_D}\norm{\fieldEL}{\Lp{2}{}}^2 + \frac{1}{4\delta_1}\norm{\vecsigma}{\Lp{2}{}}^2 + \delta_2\norm{\potEL}{\Lp{2}{}}^2 + \frac{1}{4\delta_2} \sqbrac{\norm{\grad\cdot\vecsigma}{\Lp{2}{}}^2 + \norm{\rho_b + \chargeELold}{\Lp{2}{}}^2}. \end{align*} Thus, with a suitable choice of $\delta_1$ and $\delta_2$, the above estimate for $\potEL$, \cref{lemma:bounded}, and by taking the supremum over time, we arrive at (we assume $\cl=\clold$ and we skip the uncritical case $\cl\neq\clold$) \begin{align*} \norm{\fieldEL}{\Lp[I]{\infty}{;\Lp{2}{}}} \leq \kappa\sqbrac{\norm{\vecsigma}{\Lp[I]{\infty}{;\Hkdiv{1}{}}} + \norm{\rho_b}{\Lp[I]{\infty}{;\Lp{2}{}}} +\theta\max_l\abs{\zl}C_0 } ~:= C_{2,e}(\timeEnd).
\end{align*} \underline{Step~2.4 -- a priori estimate for $(\fieldEL,\potEL)$:}~~ Collecting the preceding inequalities for $\grad\cdot\fieldEL$, $\fieldEL$, and $\potEL$ shows \begin{align}\label{eq:apriori-gauss} \norm{\fieldEL}{\Lp[I]{\infty}{;\Hkdiv{1}{}}} + \norm{\potEL}{\Lp[I]{\infty}{;\Lp{2}{}}} \leq C_{1,e} + C_{2,e} + \frac{K}{\sqrt{\permitELscalar\alpha_D}} C_{2,e} ~=: C_e(\timeEnd). \end{align} \underline{Step~3.1 -- estimate for $\grad\cdot\fieldF$:}~~ We test equation~\eqref{eq:darcyWeakDIV-fpi} with $\test=\grad\cdot\fieldF$ and immediately obtain $\norm{\grad\cdot\fieldF}{\Lp{2}{}}^2 =0$ and thus $\norm{\grad\cdot\fieldF}{\Lp[I]{\infty}{;\Lp{2}{}}} =0$.\\[2.0mm] \underline{Step~3.2 -- estimate for $\press$:}~~ Next, we test equation~\eqref{eq:darcyWeak-fpi} with $\Test\in\spaceTF$. According to \cite[Chapter~7.2]{Quarteroni-book}, we find a $\Test$ such that $\grad\cdot\Test=\press$ and $\norm{\Test}{\Hkdiv{1}{}}\leq K\norm{\press}{\spaceTP}$ holds. This leads us with \ref{Assump:Ellip}, \Young, \cref{lemma:bounded}, and \eqref{eq:apriori-gauss} to (we assume $\cl=\clold$ and we skip the uncritical case $\cl\neq\clold$) \begin{align*} \norm{\press}{\Lp{2}{}}^2 &\leq \delta K \norm{\press}{\Lp{2}{}}^2 + \frac{\mu C_K}{2\delta}\norm{\fieldF}{\Lp{2}{}}^2 +\frac{1}{2\delta\permitELscalar\alpha_D}\norm{\chargeELold\fieldEL}{\Lp{2}{}}^2 \\ &\leq \delta K \norm{\press}{\Lp{2}{}}^2 + \frac{\mu C_K}{2\delta}\norm{\fieldF}{\Lp{2}{}}^2 +\frac{\theta\max_l\abs{\zl}}{2\delta\permitELscalar\alpha_D} C_e^2 C_M^2~. \end{align*} A suitable choice of $\delta>0$ immediately shows \begin{align*} \norm{\press}{\Lp[I]{\infty}{;\Lp{2}{}}} &\leq 2\mu K C_K \norm{\fieldF}{\Lp[I]{\infty}{;\Lp{2}{}}} + \underbrace{\frac{2K\theta\max_l\abs{\zl}}{\permitELscalar\alpha_D} C_e(\timeEnd) C_M(\timeEnd)}_{:=C_{1,f}} ~. 
\end{align*} \underline{Step~3.3 -- estimate for $\fieldF$:}~~ We test equation~\eqref{eq:darcyWeakDIV-fpi} with $\test=\press$ and equation~\eqref{eq:darcyWeak-error} with the test function~$\Test=\fieldF-\vecf$. Here, we take $\vecf$ according to \ref{Assump:BoundData}, which ensures $\Test\in\spaceTF$. Furthermore, adding these equations yields with \ref{Assump:Ellip}, \Young, \cref{lemma:bounded}, and \eqref{eq:apriori-gauss} (again, we assume $\cl=\clold$ and we skip the uncritical case $\cl\neq\clold$) \begin{align*} \alpha_K \norm{\fieldF}{\Lp{2}{}}^2 &\leq -\scp{\mu^{-1}\press}{\grad\cdot\vecf}_\Omega + \scp{\permeabH^{-1}\fieldF}{\vecf}_\Omega + \scp{\coeffForceEL\forceELold}{\fieldF-\vecf}_\Omega \\ &\leq \frac{\delta_1}{2}\norm{\press}{\Lp{2}{}}^2 + \frac{1}{2\mu^2\delta_1}\norm{\grad\cdot\vecf}{\Lp{2}{}}^2 + \delta_2\norm{\fieldF}{\Lp{2}{}}^2 + \brac{\frac{C_K^2}{2\delta_2}+\frac{1}{2}} \norm{\vecf}{\Lp{2}{}}^2 \\ &~~~~ +\sqbrac{\frac{1}{2\permitELscalar\alpha_D\mu\delta_2} + \frac{1}{2}} \norm{\forceELold}{\Lp{2}{}}^2 \\ &\leq \frac{\delta_1}{2}\norm{\press}{\Lp{2}{}}^2 + \frac{1}{2\mu^2\delta_1}\norm{\grad\cdot\vecf}{\Lp{2}{}}^2 + \delta_2\norm{\fieldF}{\Lp{2}{}}^2 + \brac{\frac{C_K^2}{2\delta_2}+\frac{1}{2}} \norm{\vecf}{\Lp{2}{}}^2 \\ &~~~~ +\theta\max_l\abs{\zl}\sqbrac{ \frac{1}{2\permitELscalar\alpha_D\mu\delta_2} + \frac{1}{2}}C_e^2 C_M^2 ~. \end{align*} We now insert the estimate for $\press$ and we choose $\delta_1$ and $\delta_2$ appropriately. Thereby, taking the supremum over time, we directly arrive at \begin{align*} \norm{\fieldF}{\Lp[I]{\infty}{;\Lp{2}{}}} &\leq \brac{\frac{4K C_K}{\mu\alpha^2_K}+\frac{2C_K}{\alpha_K}+\frac{2}{\alpha_K}} \norm{\vecf}{\Hkdiv{1}{}}^2 \\ &~~~~ +\theta\max_l\abs{\zl}\sqbrac{\frac{8K}{\permitELscalar\alpha_D\alpha_K}+\frac{1}{\permitELscalar\alpha_D\alpha_K\mu} + \frac{2}{\alpha_K}}C_e C_M ~=: C_{2,f}(\timeEnd).
\end{align*} \underline{Step~3.4 -- a priori estimate for $(\fieldF,\press)$:}~~ Combining the preceding estimates for $\grad\cdot\fieldF$, $\fieldF$, $\press$ shows \begin{align}\label{eq:apriori-darcy} \norm{\fieldF}{\Lp[I]{\infty}{;\Hkdiv{1}{}}} + \norm{\press}{\Lp[I]{\infty}{;\Lp{2}{}}} \leq C_{2,f} + 2\mu K C_KC_{2,f} + C_{1,f} ~=: C_f(\timeEnd)~. \end{align} \end{proof} \begin{remark} \textnormal{ The proof of \cref{thm:aprioriBounds} is valid in arbitrary space dimensions, i.e., for $\Omega\subset\setR^n$ with $n\geq2$. However, in \ref{Assump:Geom} we restrict ourselves to $n\leq3$, as in the proof of \cref{thm:existence} we use compact embeddings of Aubin--Lions type, which are valid only for $n\leq3$. } $\square$ \end{remark} \subsection{Existence of a fixed point}\label{subsec:fpi-point} In this section, we prove the existence of global weak solutions of the \dpnp. Our proof is based on the following fixed point theorem, see \cite[Corollary 9.6]{Zeidler1-book}. \begin{thm}\label{thm:fpi-theorem} Let $\fixOp: \fixSet\subset X\rightarrow \fixSet$ be continuous, where $\fixSet$ is a nonempty, compact, and convex set in a locally convex space $X$. Then, $\fixOp$ has a fixed point. \end{thm} A Banach space~$X$ equipped with the $\weakstar$-topology is a locally convex space $(X,\weakstar)$. Hence, the above fixed point theorem is tailored to Banach spaces that carry the $\weakstar$-topology. In our case, the $\weakstar$-topology is the natural choice for the following three reasons: \par Firstly, the a~priori estimates from \cref{subsec:aprioriBounds} are equivalent to $\weakstar$-compactness. Secondly, the solution space for $\cl$ includes $\Lp[\OmegaT]{\infty}{}$, which is not reflexive. Hence, the $\weakstar$-topology differs from the $\weak$-topology. Thirdly, when using the $\weakstar$-topology, we can reuse the a~priori estimates from \cref{subsec:aprioriBounds} for the $\weakstar$-continuity of the fixed point operator.
\par In summary, one may somewhat loosely state that in the $\weakstar$-topology, the compactness of $\fixSet$ and the continuity of $\fixOp$ are already contained in the a~priori estimates. However, this is valid only if the predual of the solution space is separable. In this case, the set-based topological notions and the sequence-based ones coincide. This enables us to prove the continuity of the operator with $\weakstar$-convergent sequences, instead of investigating preimages of $\weakstar$-open sets. \begin{thm}\label{thm:existence} Let \ref{Assump:Geom}--\ref{Assump:back-charge} be valid. Then, there exists a solution $\brac{\fieldEL,\potEL,\fieldF,\press,\Cl}\in \setR^{4+2n}$ of equations~\eqref{eq:Model-1a}--\eqref{eq:Model-1j} according to \cref{def:weaksolution}. \end{thm} \begin{proof} For ease of readability, we split the proof into several steps.\\[2.0mm] \underline{Step~1 -- the space $X$:}~~ First of all, we repeat the definition of the space~$X$ from \cref{def:FixOp} \begin{align*} X :=\sqbrac{\fspace{L^\infty}{I}{;\Lp{2}{}} \cap \spaceSC \cap \Lp[\OmegaT]{\infty}{}}^2 . \end{align*} Furthermore, we equip $X$ with the norm \begin{align*} \norm{\cdot}{X} := \norm{\cdot}{\fspace{L^\infty}{I}{;\Lp{2}{}}} + \norm{\cdot}{\fspace{L^2}{I}{;\Hk{1}{}}} + \norm{\cdot}{\fspace{H^1}{I}{;\Hk{1}{}^\ast}} + \norm{\cdot}{\Lp[\OmegaT]{\infty}{}} . \end{align*} Thus, $(X,\norm{\cdot}{X})$ is a Banach space. However, we henceforth consider the locally convex space $(X,\weakstar)$ and all topological terms refer to the $\weakstar$-topology. Furthermore, the predual $X_0$ of $X$ can be written according to \cite[Chapters I, IV]{Gajewski-book} as \begin{align*} X_0 := \sqbrac{\fspace{L^1}{I}{;\Lp{2}{}} + \fspace{L^2}{I}{;\Hk{1}{}^\ast} +\fspace{H^1}{I}{;\Hk{1}{}}^\ast + \Lp[\OmegaT]{1}{}}^2 .
\end{align*} Hence, $X_0$ is a separable Banach space with dual $X$ and the topological terms for $(X,\weakstar)$ based on sets are equivalent with those based on sequences,~cf.~\cite{Zeidler1-book,Gajewski-book}. In particular, the notion of $\weakstar$-continuous/compact is equal to sequentially $\weakstar$-continuous/compact.\\[2.0mm] \underline{Step~2 -- the set $\fixSet$:}~~ For $R>0$, we introduce the set $\fixSet$ as a ball of radius $R$ in $X$, i.e., \begin{align*} \fixSet := \cbrac{ \vecv=(v_1,v_2)\in X: ~~\norm{\vecv}{X}\leq R } \subset X~. \end{align*} $\fixSet$ is nonempty, convex, and $\weakstar$-compact due to the Banach--Alaoglu--Bourbaki theorem, cf. \cite[Theorem 1.7]{Roubicek-book}.\\[2.0mm] \underline{Step~3 -- the operator $\fixOp$:}~~ We consider the operator~$\fixOp$, which was already introduced in \cref{def:FixOp}. This operator is well defined due to \cref{lemma:wellDef-fixOp}.\\[2.0mm] \underline{Step~4 -- self-mapping property $\fixOp(\fixSet)\subset\fixSet$:}~~ Let $\Clold\in\fixSet$. The definition of the set~$\fixSet$ and the definition of the norm~$\norm{\cdot}{X}$ ensure that we have for $l=1,2$ \begin{align*} \norm{\clold}{\fspace{L^\infty}{I}{;\Lp{2}{}}} + \norm{\clold}{\fspace{L^2}{I}{;\Hk{1}{}}} + \norm{\clold}{\fspace{H^1}{I}{;\Hk{1}{}^\ast}} + \norm{\clold}{\Lp[\OmegaT]{\infty}{}} \leq R~. \end{align*} With this information, we return to the a~priori estimates in \cref{thm:aprioriBounds}. Carefully reading through the proof of \cref{thm:aprioriBounds} reveals in detail how the constants of the a~priori bounds are defined. More precisely, this shows that \begin{align*} \norm{\Cl}{X} \text{ is bounded in terms of } \begin{cases} \text{the data, the radius~$R$ and the end time~$\timeEnd$} & \text{if } \Cl\neq\Clold, \\ \text{the data and the end time~$\timeEnd$} & \text{if } \Cl=\Clold. \end{cases} \end{align*} In both cases, the constants of the a~priori estimate are partially independent of the end time~$\timeEnd$.
This means that we can split the constants into \begin{align*} C = C(\timeEnd) + C(\timeEnd,R) + C_d~. \end{align*} We now choose the radius~$R:=2C(\timeEnd) + 2C_d$~. In the remaining part~$C(\timeEnd,R)$, we assume a sufficiently small end time~$\timeEnd\ll1$ such that we have $C(\timeEnd,R) \leq C(\timeEnd)+C_d$~. This proves \begin{align*} \norm{\Cl}{X} ~\leq~ C ~=~ C(\timeEnd) + C(\timeEnd,R) + C_d ~\leq ~ 2C(\timeEnd) + 2C_d = R ~. \end{align*} Thus, we have $\fixOp(\fixSet)\subset\fixSet$. However, we note that due to the assumption $\timeEnd\ll1$, we are restricted to generally small time intervals $I=[0,\timeEnd]$.\\[2.0mm] \underline{Step~5 -- $\weakstar$-continuity of $\fixOp$}\\ Subsequently, we use the already mentioned equivalence between $\weakstar$-continuous and sequentially $\weakstar$-continuous. This means that we show the $\weakstar$-continuity with the criterion based on sequences. For that purpose, we consider a sequence $(\Clold)_k\subset\fixSet$, for which we assume that $\Clold_k\overset{*~}\rightharpoonup\Clold$ in $X$. As $\fixOp(\Clold_k)=\Cl_k$ is the solution of \eqref{eq:transportWeak-fpi}, we know together with $\Clold_k\in\fixSet$ and the just established self-mapping property that \begin{align}\label{eq:existence-conv-1} \Clold_k ~\overset{*~}\rightharpoonup~ \Clold ~~\text{ in } X \text{ with }~ \Clold\in\fixSet \qquad\text{and}\qquad \norm{\Cl_k}{X}=\norm{\fixOp(\Clold_k)}{X}\leq R~. \end{align} Consequently, $(\Cl_k)_k$ is a uniformly bounded sequence and a subsequence, denoted again by $(\Cl_k)_k$, $\weakstar$-converges to a unique limit $\Cl\in\fixSet$, i.e., it holds \begin{align}\label{eq:existence-conv-2} \Cl_k ~\overset{*~}\rightharpoonup~ \Cl ~~\text{ in } X \text{ with }~ \Cl\in\fixSet~. \end{align} The fixed point operator~$\fixOp$ is $\weakstar$-continuous if and only if $\Cl$ solves the \enquote{limit} PDE \eqref{eq:transportWeak-fpi}, which is generated by $\Clold$.
In this case, we have \begin{align}\label{eq:existence-defWeakstarContinuous} \Clold_k~\overset{*~}\rightharpoonup~ \Clold ~~\Hence~~ \Cl_k= \fixOp(\Clold_k) ~\overset{*~}\rightharpoonup~ \fixOp(\Clold)=\Cl \quad \text{ in } X~. \end{align} Thus, it remains to show that $\Cl$ solves the \enquote{limit} PDE \eqref{eq:transportWeak-fpi}, which is generated by $\Clold$. To show this, we return to \cref{def:FixOp}, we subtract the equations generated by $\Clold_k$ from the equations generated by $\Clold$, and we integrate in time. Thereby, we obtain the error equations\\[2.0mm] \begin{subequations} \underline{Gauss's law:} \begin{align} \scp{\permitEL^{-1}(\fieldEL_k-\fieldEL)}{\Test}_{\OmegaT} &= \scp{\potEL_k-\potEL}{\grad\cdot\Test}_{\OmegaT}, \label{eq:gaussWeak-weakstarError} \\ \scp{\grad\cdot(\fieldEL_k-\fieldEL)}{\test}_{\OmegaT} &= \theta \scp{\chargeELold[,k]-\chargeELold}{\test}_{\OmegaT}~. \label{eq:gaussWeakDIV-weakstarError} \end{align} \underline{Darcy's law:} \begin{align} \scp{\permeabH^{-1}(\fieldF_k-\fieldF)}{\Test}_{\OmegaT} &= \scp{\mu^{-1}(\press_k-\press)}{\grad\cdot\Test}_{\OmegaT} + \theta\mu^{-1}\scp{\chargeELold[,k]\permitEL^{-1}\fieldEL_k}{\Test}_{\OmegaT} \nonumber\\ &~~~~-\theta\mu^{-1}\scp{\chargeELold\permitEL^{-1}\fieldEL}{\Test}_{\OmegaT}, \label{eq:darcyWeak-weakstarError}\\ \scp{\grad\cdot(\fieldF_k-\fieldF)}{\test}_{\OmegaT} &= 0~. \label{eq:darcyWeakDIV-weakstarError} \\[6.0mm] \nonumber \end{align} \underline{Nernst--Planck equations:} \begin{align} & \dualp{\dert(\cl[l,k] - \cl)}{\test}_{\Lp[I]{2}{;\Hk{1}{}^\ast}\times\Lp[I]{2}{;\Hk{1}{}}} + \scp{\Dl\grad(\cl[l,k]-\cl)}{\grad\test}_{\OmegaT} \nonumber\\[2.0mm] & ~~-\scp{ {\cl[l,k]}\sqbrac{\fieldF_k +\coeffEL\fieldEL_k}-{\cl}\sqbrac{\fieldF+\coeffEL\fieldEL} }{\grad\test}_{\OmegaT} \nonumber\\[2.0mm] &= \theta\scp{\Rl(\Cl_k)-\Rl(\Cl)}{\test}_{\OmegaT} ~.
\label{eq:transportWeak-weakstarError} \end{align} \end{subequations} We note that \eqref{eq:existence-conv-1} and the Aubin--Lions lemma, cf.~\cite[Lemma 7.7]{Roubicek-book}, imply the norm-convergences \begin{align}\label{eq:existence-AubinLions} \clold[l,k] \rightarrow \clold ~\text{ in } \Lp[\OmegaT]{2}{} \qquad\text{and}\qquad \clold[l,k] \rightarrow \clold ~\text{ in } \Lp[I]{2}{;\Lp{3}{}}~. \end{align} Hence, analogously to the proofs of \cref{thm:unique} and \cref{thm:aprioriBounds}, we obtain for $(\fieldEL,\potEL)$ the norm-convergence \begin{align*} \norm{\potEL_k-\potEL}{\Lp[\OmegaT]{2}{}} + \norm{\fieldEL_k-\fieldEL}{\Lp[I]{2}{;\Hkdiv{1}{}}} \leq C \sum_l\norm{\clold[l,k]-\clold}{\Lp[\OmegaT]{2}{}} ~~\rightarrow 0~. \end{align*} \par Furthermore, for $(\fieldF,\press)$, we get analogously to the proof of \cref{thm:unique} or \cref{thm:aprioriBounds} \begin{align*} \norm{\press_k-\press}{\Lp[\OmegaT]{2}{}} + \norm{\fieldF_k-\fieldF}{\Lp[I]{2}{;\Hkdiv{1}{}}} \leq C \norm{\chargeELold[,k]\permitEL^{-1}\fieldEL_k-\chargeELold\permitEL^{-1}\fieldEL}{\Lp[\OmegaT]{2}{}}~. \end{align*} Applying \ref{Assump:Ellip}, \Holder, \cref{lemma:regularity-gausslaw}, and \eqref{eq:existence-AubinLions} yields for the right-hand side \begin{align*} & C\norm{\chargeELold[,k]\permitEL^{-1}\fieldEL_k-\chargeELold\permitEL^{-1}\fieldEL}{\Lp[\OmegaT]{2}{}} \\ &\leq C\norm{\chargeELold[,k]\permitEL^{-1}(\fieldEL_k-\fieldEL)}{\Lp[\OmegaT]{2}{}} + C\norm{(\chargeELold[,k]-\chargeELold)\permitEL^{-1}\fieldEL}{\Lp[\OmegaT]{2}{}} \\ &\leq CR\norm{\fieldEL_k-\fieldEL}{\Lp[\OmegaT]{2}{}} + C\norm{\fieldEL}{\Lp[I]{2}{;\Hk{1}{}}} \norm{\chargeELold[,k]-\chargeELold}{\Lp[I]{2}{;\Lp{3}{}}} ~~\rightarrow 0~. \end{align*} This shows the norm-convergence $\norm{\press_k-\press}{\Lp[\OmegaT]{2}{}} + \norm{\fieldF_k-\fieldF}{\Lp[I]{2}{;\Hkdiv{1}{}}} ~~\rightarrow 0~.$ \par As to the convergence for $\Cl_k$, we begin with the time integrals and the diffusion integrals.
From \eqref{eq:existence-conv-2} it follows that $\dert\cl[l,k]$ resp.\ $\grad\cl[l,k]$ $\weakstar$-converge\footnote{In both cases $\weakstar$-convergence is equivalent to $\weak$-convergence as the involved spaces are reflexive.} towards $\dert\cl$ in $\Lp[I]{2}{;\Hk{1}{}^\ast}$ resp.\ $\grad\cl$ in $\Lp[\OmegaT]{2}{}$. Thus, we have \begin{align*} \dualp{\dert(\cl[l,k] - \cl)}{\test}_{\Lp[I]{2}{;\Hk{1}{}^\ast}\times\Lp[I]{2}{;\Hk{1}{}}} + \scp{\Dl\grad(\cl[l,k]-\cl)}{\grad\test}_{\OmegaT} \rightarrow 0. \end{align*} For the convection integrals and the electric~drift integrals, we obtain \begin{align*} & \scp{ {\cl[l,k]}\sqbrac{\fieldF_k +\coeffEL\fieldEL_k}-{\cl}\sqbrac{\fieldF+\coeffEL\fieldEL} }{\grad\test}_{\OmegaT} \nonumber\\ &= \scp{ ({\cl[l,k]}-{\cl})\sqbrac{\fieldF +\coeffEL\fieldEL} }{\grad\test}_{\OmegaT} \nonumber\\ & ~~~~+\scp{ {\cl[l,k]}\sqbrac{(\fieldF_k-\fieldF) +\coeffEL(\fieldEL_k-\fieldEL)} }{\grad\test}_{\OmegaT} ~~=: I.1 + I.2~. \end{align*} Concerning $I.1$, we note that $\sqbrac{\fieldF +\coeffEL\fieldEL}\grad\test \in \Lp[\OmegaT]{1}{}$ and $\cl[l,k]-\cl\in\Lp[\OmegaT]{\infty}{}$ with $\Lp[\OmegaT]{\infty}{}=\Lp[\OmegaT]{1}{}^\ast$. Thereby, we arrive with \eqref{eq:existence-conv-2} at \begin{flalign*} I.1 &= \dualp{({\cl[l,k]}-{\cl})}{\sqbrac{\fieldF +\coeffEL\fieldEL}\grad\test}_{\Lp[\OmegaT]{1}{}^\ast\times\Lp[\OmegaT]{1}{}} \rightarrow 0 . \end{flalign*} Concerning $I.2$, with \Holder\ and the $\Lp[\OmegaT]{2}{}$-convergence of $\fieldEL_k$ and $\fieldF_k$, we arrive at \begin{align*} I.2 &\leq R\norm{\grad\test}{\Lp[\OmegaT]{2}{}} \brac{\norm{\fieldF_k-\fieldF}{\Lp[\OmegaT]{2}{}}+\coeffEL\norm{\fieldEL_k-\fieldEL}{\Lp[\OmegaT]{2}{}}} ~~\rightarrow 0. \end{align*} Thus, we get for the convection integrals and the electric~drift integrals \begin{align*} \scp{ {\cl[l,k]}\sqbrac{\fieldF_k +\coeffEL\fieldEL_k}-{\cl}\sqbrac{\fieldF+\coeffEL\fieldEL} }{\grad\test}_{\OmegaT} ~~\rightarrow0.
\end{align*} Finally, from \eqref{eq:existence-conv-2} and the Aubin--Lions lemma, we obtain for $\Cl_k$ the norm-convergence~$\cl[l,k]\rightarrow\cl$ in $\Lp[\OmegaT]{2}{}$. For the reaction integrals, this immediately leads with \Holder\ to \begin{align*} \theta\scp{\Rl(\Cl_k)-\Rl(\Cl)}{\test}_{\OmegaT} \leq \theta\max_lC_{\Rl} \norm{\test}{\Lp[\OmegaT]{2}{}} \sum_l\norm{\cl[l,k]-\cl}{\Lp[\OmegaT]{2}{}}\rightarrow 0. \end{align*} In summary, we have shown that $\Cl=\fixOp(\Clold)$ for an arbitrarily chosen subsequence $(\Cl_k)_k$. Therefore, the whole sequence $(\Cl_k)_k$ converges and the operator~$\fixOp$ is $\weakstar$-continuous in the sense of equation~\eqref{eq:existence-defWeakstarContinuous}.\\[2.0mm] \underline{Step~6 -- existence:}~~ A combination of Steps 1 -- 5 shows that we can apply \cref{thm:fpi-theorem}. This yields directly the existence of a solution~$(\fieldEL,\potEL,\fieldF,\press,\Cl)$ on a generally small time interval~$[0,\timeEnd]$. \par We now consider for an arbitrarily large end time~$\hat{T}$ a time interval $[0,\hat{T}]$, which we decompose with $(K+1)$~time points $0=:T_0 < T_1 < \ldots < T_K :=\hat{T}$ into $K$~subintervals $[T_i,T_{i+1}]$, $i \in \{0,\ldots,K-1\}$. Furthermore, we suppose that the subintervals $[T_i,T_{i+1}]$ are sufficiently small, such that Steps 1 -- 5 are fulfilled. Thus, a local solution~$(\fieldEL_i,\potEL_i,\fieldF_i,\press_i,\Cl_i)$ exists on $[T_i,T_{i+1}]$ and this solution satisfies the a~priori estimates from \cref{thm:aprioriBounds}. We now carefully check how the constants of the a~priori estimates depend on the end time~$T_{i+1}$ of the subinterval~$[T_i,T_{i+1}]$. This reveals that the dependence of these constants on $T_{i+1}$ behaves as $\exp(T_{i+1})$, which eliminates any possibility of a blow-up on $[T_i,T_{i+1}]$. Thus, it is admissible to take the partial solution~$\Cl_i$ as the initial value for the $(i+1)$-th solution~$(\fieldEL_{i+1},\potEL_{i+1},\fieldF_{i+1},\press_{i+1},\Cl_{i+1})$.
Together with \cref{thm:unique}, this leads to a continuation of the solution on the arbitrarily large time interval $[0,\hat{T}]$ and consequently to a global solution. \par However, we note that this continuation procedure does not lead to solutions on $[0,\infty)$. \end{proof} \section{Conclusion} The contribution of this paper was to show the global existence of unique solutions for two-component electrolyte solutions, which are captured by the \dpnp. Here, two-component means that we considered electrolyte solutions that consist of a neutral solvent and two oppositely charged solutes. In contrast to previous results, we allowed for two oppositely charged solutes with arbitrary valencies~$\zl[1]>0>\zl[2]$. Most importantly, we successfully established uniform a~priori estimates for the chemical species by using weighted test functions, i.e., instead of the standard test functions~$\test=\cl$, we used the weighted test functions~$\test=\abs{\zl}\cl$. By means of this technique we avoided further restrictions such as the electroneutrality constraint or the volume-additivity constraint. Therefore, the results of this paper apply to general two-component electrolyte solutions, which are captured by the \dpnp. We note that the a~priori estimates include a uniform $\Lp[\OmegaT]{\infty}{}$-bound for the charged solutes~$\cl$, which we obtained by the use of Moser's iteration technique. Moreover, the global existence and uniqueness result holds true in two and three space dimensions. \par To the best of our knowledge, in particular in the case of three spatial dimensions, this is the first global existence and uniqueness result for two-component electrolyte solutions that, firstly, are governed by the \dpnp; secondly, include two oppositely charged chemical species with arbitrary valencies; and, thirdly, are not subject to further restrictions such as the electroneutrality constraint or the volume-additivity constraint.
\end{document}
\begin{document} \title{Effective contraction of Skinning maps} \author[T.~Cremaschi]{Tommaso Cremaschi} \address{Department of Mathematics, University of Southern California, Kaprelian Hall\\ 3620 S.\ Vermont Ave., Los Angeles, CA 90089-2532} \email{[email protected]} \thanks{T.C.~was partially supported by the National Science Foundation under Grant No.~DMS-1928930 while participating in a program hosted by the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2020 semester. } \author[L.~Dello Schiavo]{Lorenzo Dello Schiavo} \address{Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria} \email{[email protected]} \thanks{ L.D.S.\ gratefully acknowledges funding of his current position by the Austrian Science Fund (FWF) grant F65, and by the European Research Council (ERC, grant No.~716117, awarded to Prof.\ Dr.~Jan Maas). } \begin{abstract} Using elementary hyperbolic geometry, we give an explicit formula for the contraction constant of the skinning map over moduli spaces of relatively acylindrical hyperbolic manifolds. \end{abstract} \keywords{Skinning map, Poincar\'e series, deformations of hyperbolic manifolds, Kleinian groups.} \maketitle \section{Introduction} Let~$M_1$, $M_2$ be hyperbolic manifolds \blue{of finite type, i.e.\ interiors of compact 3-manifolds,} with incompressible boundary, and homeomorphic geometrically finite ends~$E_1\subset M_1$ and~$E_2\subset M_2$. From a topological point of view, since~$M_1$ and~$M_2$ are tame, \cite{AG2004,CG2006}, the surfaces~$S_i$ corresponding to the boundary of the ends~$E_i$ are naturally homeomorphic. We can thus glue the two manifolds via an orientation-reversing homeomorphism~$\tau$, and obtain a new topological $3$-manifold~$M=M_1\cup_\tau M_2$.
Usually, one seeks sufficient conditions for~$M$ to admit a complete hyperbolic metric, which is relevant, for example, in the proof of geometrization for hyperbolic manifolds, \cite{Kap2001}. We call this the \emph{glueing problem} for~$M$. The \emph{skinning map}, described below, was first introduced by W.~P.~Thurston, exactly to study this glueing problem,~\cite{Th1982}. The moduli space~$GF(M,\mathcal P)$ of all hyperbolic metrics on~$M$ with geometrically finite ends and parabolic locus~$\mathcal P$ is parameterised by the Teichm\"uller space~$\mcT(\partial_0 M)$ with~$\partial_0 M$ the closure in~$\partial M$ of \blue{the complement~$\mathcal P^\mathrm{c}$ of $\mathcal P$}, viz.\ $GF(M,\mathcal P)=\mathcal T(\partial_0 M)$. For simplicity, let us here assume that~$\mathcal P$ only contains toroidal boundary components of~$M$. Now, let~$N\in GF(M,\mathcal P)$ be a uniformization, and~$S\in\pi_0(\partial_0 M)$ be a (non-toroidal) boundary component. The cover of~$N$ associated to~$\pi_1(S)$ is a quasi-Fuchsian manifold~$N_S$. The manifold~$N_S$ has two ends,~$A$ and~$B$, of which~$A$ is isometric to the end of~$M$ corresponding to~$S$. One defines the skinning map~$\sigma_M$ at~$N$ as the conformal structure of the new end~$B$. As it turns out, the skinning map is an analytic map~$\sigma_M\colon \mathcal T(\partial_0 M)\rightarrow \mathcal T(\overline{\partial_0 M})$, where the bar denotes opposite orientation. The glueing instruction determines an isometry~$\tau^*\colon \mathcal T(\partial_0 M)\rightarrow \mathcal T(\overline{\partial_0 M})$, and any fixed point of $\tau^*\circ\sigma_M$ gives a solution to the glueing problem by the Maskit Combination Theorem, e.g.~\cite{Kap2001}. 
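To make explicit why a fixed point of $\tau^*\circ\sigma_M$ exists once this composition is known to be uniformly contracting (McMullen's theorem, recalled below), one can invoke the Banach fixed-point theorem on the complete metric space $\mathcal T(\partial_0 M)$ with the Teichm\"uller metric $d_{\mathcal T}$. The following display is our sketch of this standard argument, not part of the results of this paper; the notation $F$, $X_k$, $c$ is ours:

```latex
% Sketch (ours): Banach fixed-point argument for the glueing problem.
% Assume d_T(F(x),F(y)) <= c d_T(x,y) with c < 1, for the Teichmueller
% metric d_T on T(partial_0 M). Then, for any starting point X_0,
\begin{align*}
F\eqdef \tau^*\circ\sigma_M\,\,\mathrm{,}\;\, \qquad
X_{k+1}\eqdef F(X_k)\,\,\mathrm{,}\;\, \qquad
d_{\mathcal T}(X_{k+1},X_k)\leq c^{\,k}\, d_{\mathcal T}(X_1,X_0)\,\,\mathrm{.}
\end{align*}
% Hence (X_k)_k is Cauchy and converges to the unique fixed point of F,
% i.e. to a solution of the glueing problem for M.
```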
Given a covering map between Riemann surfaces $\pi\colon Y\rightarrow X$ the \emph{Poincar\'e series operator} is a push-forward operator $\theta_{Y/X}:Q(Y)\rightarrow Q(X)$, similar to the push-forward of measures, pushing quadratic differentials on~$Y$ to quadratic differentials on~$X$. In \cite{McM1989}, C.~McMullen showed that the skinning map of an acylindrical manifold~$N$ is contracting, with contraction constant only depending on the topology of~$\partial_0 M$. Furthermore, he related the skinning map to the Poincar\'e series operator~$\theta$ by the following formula: \begin{equation}\label{eq:Intro:McMullen} \mathop{}\!\mathrm{d} \sigma_M^*(\phi)=\sum_{U\in BN} \theta_{U/X} \left(\phi\vert_U\right) \,\,\mathrm{,}\;\, \end{equation} where $BN$ is \blue{a} collection of sub-surfaces of $\im(\sigma)$. When~$M$ is acylindrical and~$P=\emp$, we have that $BN$ is just a collection of disks, the \emph{leopard spots} of~\cite{McM1989}. If~$P\neq\emp$ and~$M$ is relatively acylindrical, then we can also have punctured disks coming from peripheral cylinders of $M$. As a consequence of~\eqref{eq:Intro:McMullen}, one can \blue{estimate} the operator norm of the co-derivative map~$\mathop{}\!\mathrm{d}\sigma_M^*$ of the skinning map by bounding the Poincar\'e series operator of the corresponding surfaces. Using \blue{such an estimate}, we provide here effective bounds, in terms of the topology of~$\partial_0 M$, on the contraction of the skinning map in the acylindrical case. This builds on previous work~\cite{BarDil96} of D.~E.~Barrett and J.~Diller, who gave an alternative proof of McMullen's estimates on the norm of the Poincar\'e operator, \cite{McM1989}. Improving on the main result of~\cite{BarDil96} (Thm.~\ref{t:BD} below), we show: \begin{thm}\label{t:Main} Suppose $X$ is a Riemann surface of finite type and let $Y$ be a disk or a punctured disk. Further let~$\pi\colon Y\rightarrow X$ be a holomorphic covering map.
Then, the norm of the corresponding Poincar\'e series operator satisfies: \begin{align*} \norm{\theta}_\op < \frac{1}{1+C_{g,n,\ell}} <1 \end{align*} \blue{for some constant~$C_{g,n,\ell}>0$ depending only on the topology of $X\cong S_{g,n}$ and the injectivity radius $\ell$ of $X$.} \end{thm} In contrast with~\cite{BarDil96}, we compute the contraction constant~$C_{g,n,\ell}$ in a completely explicit way and, in the case \blue{under examination}, without any extra assumptions on~$\norm{\theta}_\op$. The constant~$C_{g,n,\ell}$ only depends on: the genus~$g$ of~$X$, the number of punctures~$n$ of~$X$, and the length~$\ell$ of the shortest closed geodesic in~$X$. So, we obtain an explicit bound over the moduli space of geometrically finite hyperbolic manifolds. Furthermore,~$C_{g,n,\ell}$ is continuous and decreasing as a function of $\ell$, in fact it is linear in $\ell$, and satisfies the following asymptotic expansion for $g,n\gg 1$. Let~$\chi\eqdef 2g-2+n$ be the Euler characteristic, and~$\kappa\eqdef 3g-3+n$ be the complexity of~$X$. Then, \begin{align*} \log \log \tonde{\tfrac{\ell}{C_{g,n,\ell}} } \asymp \tfrac{4}{\arcsinh(1)}\,\chi^2+\coth\ttonde{\tfrac{\pi}{12}}\,\chi + \pi \sinh\tonde{\tfrac{1}{2}\arcsinh\ttonde{\tanh(\pi/12)}}\,\kappa\,\,\mathrm{.} \end{align*} \paragraph{An application to infinite-type 3-manifolds} In~\cite{C2018b} the first named author studied the class~$\mathcal M^B$ of infinite-type 3-manifolds~$M$ admitting an exhaustion~$M=\cup_i M_i$ by hyperbolizable 3-manifolds~$M_i$ with incompressible boundary and with uniformly bounded genus. One can use skinning maps to study the space of hyperbolic metrics on the manifolds in $\mathcal M^B$ that admit hyperbolic structures. Indeed, consider all manifolds~$M\in \mathcal M^B$ such that for all $i\in{\mathbb N}$ every component $U_i\eqdef \overline{M_i\setminus M_{i-1}}$ is acylindrical.
By the main results of \cite{C2018b} this guarantees that~$M$ is in fact hyperbolic, which is in general not the case, see~\cite{C20171,C2018c}, or \cite{CS2018,CVP2020} for other examples of infinite-type hyperbolic 3-manifolds. We can thus think of a (hyperbolic) metric~$g$ on~$M$ as a gluing of (hyperbolic) metrics~$g_i$ on the~$U_i$'s, and so it makes sense to investigate the glueing of pairs~$U_i$,~$U_{i+1}$ via skinning maps. In order to approach the construction of~$g$ in this way, it is helpful to know that the contraction factor of the skinning maps over the Teichm\"uller spaces relative to~$U_i$ stays well below~$1$ uniformly in~$i$. The latter fact follows from Theorem~\ref{t:Main}, in view of the uniform bound on the genus of the~$M_i$'s. \section{Notation} Throughout the work,~$X$ is a hyperbolic Riemann surface of finite type. Let~$\overline X$ be the compact Riemann surface obtained by adding a single point to each end of~$X$. We indicate by \begin{itemize} \item $g$ the genus of~$X$; \item $n$ the cardinality of the set of punctures~$P\eqdef \overline{X}\setminus X$; \end{itemize} We may thus regard~$X$ as an element of the \blue{moduli space~$\mathcal M(S_{g,n})$} of the $n$-punctured Riemann surface of genus~$g$. Further let \begin{itemize} \item $\chi\eqdef 2g-2+n$ be the Euler characteristic of~$X$; \item $\kappa\eqdef 3g-3+n$ be the complexity of~$X$, with the exception of the surface~$S_{0,2}$ for which $\kappa\eqdef 0$. \end{itemize} We say that a curve in~$X$ is a \emph{short geodesic} if it is a closed geodesic of length less than $2\arcsinh(1)$, and we define \begin{itemize} \item $\Gamma$ the set of short geodesics on~$X$; \item $\ell\eqdef \min_{\gamma \in \Gamma} \ell(\gamma)$ \blue{(twice)} the \emph{injectivity radius} of~$X$.
\end{itemize} For any~$A\subset X$, denote by~$\card{A}$ the number of connected components of~$A$, and indicate by $U\in \pi_0(A)$ any such connected component. Let~$d$ be the intrinsic distance of~$X$ and further set \begin{align*} \enl{A}{s}\eqdef \set{x\in X : \dist(x,A)\leq s}\,\,\mathrm{,}\;\, \qquad s>0\,\,\mathrm{.} \end{align*} \paragraph{Regions} Denote by~$D$ the Poincar\'e disk, and set~$D^*\eqdef D\setminus\set{0}$. The \emph{cusp}~$\mcC_p$ about~$p\in P$ is the image of the punctured disk~$\set{0<\abs{z}<e^{-\pi}}$ under the holomorphic cover~$\pi_p\colon D^*\to X$ about~$p$. We start by recalling the following well-known fact. \begin{lem}[{\cite[Thm.~4.1.1]{Bu1992}}]\label{l:Buser} Let~$\gamma$ be a short closed geodesic in~$X$ of length~$\ell(\gamma)$, and set~$w\eqdef \arcsinh\left(\frac 1{\sinh(\ell(\gamma)/2) }\right)$. The collar~$\mcC_\gamma$ around~$\gamma$ is isometric to $[-w,w]\times \mbbS^1$ with the metric $\mathop{}\!\mathrm{d}\rho^2+\ell(\gamma)^2\cosh^2(\rho)\mathop{}\!\mathrm{d} t^2$. \end{lem} \blue{Note that in the previous statement the local metric, in Fermi coordinates, is parametrised with speed $\ell(\gamma)$, whence the $\ell(\gamma)^2$ factor.} We define: \begin{itemize} \item the \emph{cusp part}~$X_\cusps$ of~$X$ as~$X_\cusps\eqdef\cup_{p\in P} \, \mcC_p$; \item the \emph{core}~$X_\core$ of~$X$ as~$X_\core\eqdef X\setminus X_\cusps$; \item the \emph{thick part}~$X_\thick$ of~$X$ as~$X_\thick\eqdef X_\core\setminus \cup_{\gamma\in \Gamma}\, \mcC_\gamma$; \item the \emph{thin part}~$X_\thin$ of~$X$ as~$X_\thin\eqdef \overline{X\setminus X_\thick}$. \end{itemize} \paragraph{Quadratic differentials} Let~$T_{1,0}^*X$ be the holomorphic cotangent bundle of~$X$. A quadratic differential on~$X$ is any section~$\psi$ of~$T_{1,0}^*X\otimes T_{1,0}^*X$, satisfying, in local coordinates,~$\psi=\psi(z)\mathop{}\!\mathrm{d} z^2$.
A quadratic differential~$\psi$ is \emph{holomorphic} if its local trivializations~$\psi(z)$ are holomorphic. To each holomorphic quadratic differential~$\psi$ we can associate a measure~$\abs{\psi}$ on~$X$ defined by~$\abs{\psi}=\abs{\psi(z)}\cdot\abs{\mathop{}\!\mathrm{d} z}^2$. We denote by~$\av{\psi({\,\cdot\,})}$ the density of the measure~$\abs{\psi}$ with respect to the Riemannian volume of~$X$. We say that any~$\psi$ as above is \emph{integrable} if~$\norm{\psi}\eqdef \abs{\psi}\!(X)$ is finite, and we denote by~$Q(X)$ the space of all integrable holomorphic quadratic differentials on~$X$, endowed with the norm~$\norm{{\,\cdot\,}}$. When~$X$ has finite topological type,~$Q(X)$ is finite-dimensional, its dimension depending only on~$g$ and~$n$. \paragraph{Constants} Everywhere in this work,~$r,s,t,w$ and~$\varepsilon$ are free parameters. We shall make use of the following universal constants: \begin{itemize} \item $\varepsilon_0\eqdef \arcsinh(1)\approx 0.8813$ the two-dimensional Margulis constant; \item $c_1\eqdef \coth(\pi/12)\approx 3.9065$; \item $c_2\eqdef\arcsinh\ttonde{\tanh(\pi/12)}\approx 0.2532$; \item $c_3\eqdef \frac{\pi \sinh\tonde{\tfrac{1}{2}\arcsinh\ttonde{\tanh(\pi/12)}}}{\arcsinh(\tanh(\pi/12))}\approx 1.5750$; \item $c_4\eqdef \ttonde{1-\tanh^2(1/2)}^2\approx 0.6185$; \item $c_5\eqdef 4\pi\ttonde{1+\sinh(1)}\approx 27.3343$; \item $c_6\eqdef (e c_4)^{e^{2c_3+2}} \approx 76.5904$; \item $c_7\eqdef \max _x x\cdot\arcsinh\ttonde{\csch(x/2)}\approx 1.5536$. \end{itemize} Finally, for simplicity of notation, we shall make use of the following auxiliary constants, also depending on~$X$: \begin{itemize} \item $a_1\eqdef 4\abs{\chi}^2/\varepsilon +2\kappa \log c_1+2\, c_2\, c_3$; \item $a_2\eqdef \log(e\, c_4)\, e^{a_1+2(1+c_3)}$.
\end{itemize} \blue{We denote by~$a\wedge b$ the minimum of two quantities~$a,b\in{\mathbb R}$.} \section{Outline} We start by recalling the results of D.~E.~Barrett and J.~Diller \cite{BarDil96}, which we make explicit using classic hyperbolic geometry. The main result of \cite{BarDil96} is: \begin{thm}[{\cite[Thm.~1.1]{BarDil96}}]\label{t:BD} Suppose $X$, $Y$ are Riemann surfaces of finite type and let $\pi\colon Y\rightarrow X$ be a holomorphic covering map. Then, the norm of the corresponding Poincar\'e operator satisfies: \begin{equation*} \norm{\theta}_{\op}\eqdef \sup_{\substack{\phi\in Q(Y)\\\norm{\phi}=1}} \norm{\theta \phi} <1-k<1 \,\,\mathrm{.} \end{equation*} Furthermore, $k>0$ may be taken to depend only on the topology of $X$, $Y$, and the length $\ell$ of the shortest closed geodesic on $X$. As a function of $\ell$, \blue{the number} $k$ may be taken to be continuous and increasing. \end{thm} In order to prove the above theorem, consider a \blue{unit-norm} quadratic differential $\phi\in Q(Y)$ such that $\theta\phi\neq 0$. In~\cite{BarDil96}, the authors estimate \begin{equation*} 1-\norm{\theta\phi} \end{equation*} as follows. Let $K\subset \overline X$ be any compact set containing the set~$Z$ of zeroes of $\theta\phi$ and the punctures of $X$, viz.\ $Z\cup P\subset K$, and such that $\partial K$ is smooth. Further let \begin{equation}\label{eq:DefMr} m(r)\eqdef \min_{p\in\partial \enl K r}\av{\theta\phi} \,\,\mathrm{.} \end{equation} Then, for every $t>1$ and every $r_0>0$, \blue{\cite[Lem.~3.2]{BarDil96} proves the following estimate} \begin{equation}\label{keyestimate} 1-\norm{\theta\phi}\geq \int_0^{r_0}m(r)\quadre{t^{-1}\area(X\setminus \enl K r)-\length(\partial \enl K r)}\mathop{}\!\mathrm{d} r \,\,\mathrm{.} \end{equation} In general, the $t$ in the above estimate will depend on the geometry and topology of the covering surface $Y$.
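For reference, the numerical approximations of the universal constants $\varepsilon_0$, $c_1,\dotsc,c_5$, and $c_7$ listed in the Notation section can be verified with a few lines of Python. This snippet is ours, not part of the paper; the variable names mirror the constants, and $c_7$ is located by a coarse grid search rather than exact optimisation:

```python
# Numerical sanity check (ours) of the universal constants from the
# Notation section: eps0 = arcsinh(1), c1 = coth(pi/12), etc.
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def csch(x):
    return 1.0 / math.sinh(x)

eps0 = math.asinh(1.0)                   # two-dimensional Margulis constant
c1 = coth(math.pi / 12)
c2 = math.asinh(math.tanh(math.pi / 12))
c3 = math.pi * math.sinh(0.5 * c2) / c2  # since c2 = arcsinh(tanh(pi/12))
c4 = (1.0 - math.tanh(0.5) ** 2) ** 2
c5 = 4.0 * math.pi * (1.0 + math.sinh(1.0))
# c7 = max_x x * arcsinh(csch(x/2)), approximated on a grid over (0, 10)
c7 = max(x * math.asinh(csch(x / 2))
         for x in (i * 1e-4 for i in range(1, 100_000)))

print(round(eps0, 4), round(c1, 4), round(c2, 4), round(c3, 4),
      round(c4, 4), round(c5, 4), round(c7, 4))
```

The printed values agree with the approximations $0.8813$, $3.9065$, $0.2532$, $1.5750$, $0.6185$, $27.3343$, $1.5536$ stated in the Notation section.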
\blue{In the case at hand, however,~$Y$ is either the Poincar\'e disk or a punctured disk, and by work of J.~Diller \cite{Dil95}, we can assume that $t=1$.} It is likely that the constants of Diller can be made explicit as well, so that one could have a version of Theorem \ref{t:BD} where the constants are explicit in the topology of $X$, $Y$ and their injectivity radii. In the following sections, we give effective estimates for $m(r)$, $\area(X\setminus \enl K r)$, and $\length(\partial \enl K r)$. In order to estimate $m(r)$ we will need the following result from \cite{BarDil96}. \begin{thm}[{\cite[Thm.~4.4]{BarDil96}}]\label{BD4.4} \blue{ Let~$\psi\in Q(X)$ with zero set~$Z$. Suppose $W\subset X\setminus Z$ is a domain such that $\av{\psi(p)} \leq L$ for all $p\in W$, and set $\rho(p)\eqdef \min\set{1, \dist (p,\partial W)}$. } Then, if $\gamma\subset W$ is a path connecting $p_1$ and $p_2$ we have: \begin{equation*} \frac{\av{\psi(p_1)} }{ \av{\psi(p_2)} }\geq \left( \frac{ \av{\psi(p_2)} }{c_4L} \right) ^{-1+\exp\left(\int_\gamma \frac{\mathop{}\!\mathrm{d} s}{\tanh (\rho/2)}\right)} \,\,\mathrm{.} \end{equation*} \end{thm} \section{Effective Computations}\label{explicitcomps} The following is an easy lemma bounding the diameter of components of $\enl{X_\thick}{\varepsilon}$ or $\enl{X_\core}{\varepsilon}$. \begin{lem} Let~\blue{$X\in \mathcal M(S_{g,n})$}.
Then, \begin{enumerate}[$(i)$] \item\label{i:l:BD4.2:1} any pair of points in the same connected component of~$\enl{X_\thick}{\varepsilon}$ is joined by a path of length at most~$4\abs{\chi}/\varepsilon$; \item\label{i:l:BD4.2:2} any pair of points in~$\enl{X_\core}{\varepsilon}$ is joined by a path~$\gamma$ in~$\enl{X_\core}{\varepsilon}$ satisfying \begin{align} \ell(\gamma)\leq 4\abs{\chi}^2/\varepsilon+ 2\kappa\, \arcsinh\ttonde{\csch(\ell/2)} \,\,\mathrm{.} \end{align} \end{enumerate} \begin{proof} \blue{Assertion \iref{i:l:BD4.2:1} is a} consequence of the Bounded Diameter Lemma \cite{Th1978}. \iref{i:l:BD4.2:2} Using the fact that each component of $\enl{X_\thick}{\varepsilon}$ contains an essential pair of pants and that the maximal number of pairwise disjoint short curves is $\kappa$, we have: \paragraph{Claim} $\card{\enl{X_\thick}{\varepsilon}}\leq \abs{\chi}$ and~$\card{\enl{X_\thin}{\varepsilon}}\leq \kappa$. By short-cutting in the region we obtain: \paragraph{Claim} A length-minimizing~$\gamma$ enters each~$U\in\pi_0\ttonde{\enl{X_\core}{\varepsilon}}$, resp.~$U \in \pi_0\ttonde{\enl{X_\thin}{\varepsilon}}$, at most once. Let~$\gamma$ be length-minimizing. By~\iref{i:l:BD4.2:1} we have~$\length(\gamma \cap U)\leq 4\abs{\chi}/\varepsilon$ for every $U\in\pi_0\ttonde{\enl{X_\thick}{\varepsilon}}$. By the Collar Lemma \cite{Bu1992}, for every $U \in \pi_0\ttonde{\enl{X_\thin}{\varepsilon}}$, \begin{align*} \length(\gamma\cap U)\leq \diam (U)\leq 2\,\arcsinh\ttonde{\csch(\ell/2)} \,\,\mathrm{.} \end{align*} The conclusion follows combining the previous estimates with the two claims. \end{proof} \end{lem} \blue{The next Lemma is~\cite[Lem.~4.6]{BarDil96}. We just work out the constant explicitly. } \begin{lem} \label{4.6} Let~$L(s)\eqdef\displaystyle\max_{p\in \enl{X_\thick}{s}} \av{\psi(p)}$.
Then, \begin{enumerate}[$(i)$] \item\label{i:l:BD4.6:1} $L(0)\geq \tfrac{\ell \wedge 1}{16\, \abs{\chi}}\norm{\psi}$; \item\label{i:l:BD4.6:2} for all~$0\leq s\leq t$, we have~$L(s)\geq e^{s-t}L(t)$. \end{enumerate} \begin{proof} \iref{i:l:BD4.6:1} Firstly assume that at most half the mass of $\psi$ is concentrated inside the collars of short geodesics. As in~\cite[Lem.~4.6(i)]{BarDil96}, it follows that \begin{equation}\label{eq:l:BD4.6:1} \av{\psi}\geq \frac{\norm{\psi}}{2\, \area(X)}=\frac{\norm{\psi}}{4\pi\abs{\chi}} \geq \frac{\norm{\psi}}{16\,\abs{\chi}} \,\,\mathrm{.} \end{equation} Assume now that at least half the mass of $\psi$ is concentrated inside collars of short geodesics. Let~$\gamma$ be any such geodesic and let \blue{$\mcC\eqdef\mcC_\gamma$ be the collar around $\gamma$}. For~$r\leq R\eqdef \pi^2/\ell(\gamma)$ and $r$ satisfying~$\tan\ttonde{\pi r/(2R)}=\csch\ttonde{\ell(\gamma)/2}$, we have that \begin{align*} \frac{1}{2\, \area(X)}\norm{\psi}\leq& \int_\mcC \abs{\psi} = \int_0^{2\pi} \int_{e^{-r}}^{e^r} \frac{\abs{f(z)}}{\abs{z}^2}r \mathop{}\!\mathrm{d} r\mathop{}\!\mathrm{d}\theta \\ \leq& \int_0^{2\pi}\int_{e^{-r}}^{e^r} L r^{-1} \mathop{}\!\mathrm{d} r\mathop{}\!\mathrm{d}\theta =4\pi L r \,\,\mathrm{,}\;\, \intertext{hence that} \frac{\norm{\psi}}{2\pi r\, \area(X) } \leq&\ 4 L \,\,\mathrm{.} \end{align*} Computing both~$r$ and~$R$ in terms of~$\ell(\gamma)$, \begin{align*} L(0)\geq&\ \max_{\partial\mcC} \av{\psi} = \frac{4L R^2}{\pi^2}\cos^2 \frac{\pi r}{2R } \\ \geq&\ \frac{\norm{\psi}}{2\, \area(X)} \frac{R^2}{2R\arctan\tonde{\csch\ttonde{\ell(\gamma)/2}}} \cos^2\tonde{\arctan\tonde{\csch\ttonde{\ell(\gamma)/2}}} \,\,\mathrm{.} \intertext{Now, since~$\cos^2\tonde{\arctan\tonde{\csch(t)}}=\tanh^2(t)$, and substituting~$R\eqdef 2\pi/\ell(\gamma)$,} L(0)\geq &\ \frac{\norm{\psi}}{4\, \area(X)} \frac{R\, 
\tanh^2\ttonde{\ell(\gamma)/2}}{\arctan\tonde{\csch\ttonde{\ell(\gamma)/2}}} \\ =&\ \frac{\pi^2\norm{\psi}}{4\, \area(X)} \frac{\tanh^2\ttonde{\ell(\gamma)/2}}{\ell(\gamma)^2\cdot\arctan\ttonde{\csch(\ell(\gamma)/2)}} \cdot \ell(\gamma) \,\,\mathrm{.} \intertext{Since~$t\mapsto \tanh^2(t/2)/\ttonde{t^2\arctan(\csch(t/2))}$ has global minimum~$\tfrac{1}{2\pi}$ at~$t=0$, we have that} L(0)\geq&\ \frac{\pi\, \ell(\gamma)}{8\, \area(X)} \norm{\psi} \geq \frac{\ell}{16\abs{\chi}} \norm{\psi}\,\,\mathrm{.} \end{align*} Combining the above inequality with~\eqref{eq:l:BD4.6:1} yields the assertion. \iref{i:l:BD4.6:2} is~\cite[Lem.~4.6]{BarDil96}. \end{proof} \end{lem} Let $\log_+(x)\eqdef \max\set{0,\log(x)}$. We start with some estimates towards establishing \eqref{keyestimate}. \begin{lem}\label{l:BD5.1} For each connected component~$U\in \pi_0\ttonde{\enl{X_\thick}{s}}$, letting~$s=\log_+(c_1 t)$: \begin{enumerate}[$(i)$] \item\label{i:l:BD5.1:1} $\area(U) - t\, \length(\partial U)\geq \pi/3$; \item\label{i:l:BD5.1:2} for all $p\in U$: $\inj_p\geq c_2/t$; \item\label{i:l:BD5.1:3} given $p_1,p_2\in U$ there exists~$\gamma\subset U$ connecting~$p_1$ and~$p_2$ such that \begin{equation*} \ell(\gamma)\leq \frac{4\abs{\chi}^2} \varepsilon +2\kappa \log t +2\kappa \log c_1\,\,\mathrm{.} \end{equation*} \end{enumerate} \begin{proof} \iref{i:l:BD5.1:1} Let~$g_U$ and~$n_U$ respectively denote the genus of $U$ and the number of boundary components of~$U$. Further let~$A_1,\dotsc, A_{n_U}$ denote the embedded annuli bounded by short closed geodesics on one side and by connected components of~$\partial U$ on the other side. We allow for~$A_j$ being part of a cusp, in which case, on one side, it is bounded by a puncture rather than by a short geodesic.
By the Gauss--Bonnet Theorem, \begin{align*} \area(U)=2\pi(2g_U + n_U-2)-\sum \area(A_j) \,\,\mathrm{.} \end{align*} If~$n_U=0$ then $U=X$, which yields~$\area(U)-t\, \length(\partial U)=2\pi\abs{\chi}$. Thus, in the following we may assume without loss of generality that~$n_U\geq 1$. In this case, either~$g_U\geq 1$ and $n_U\geq 1$, or~$g_U=0$ and~$n_U\geq 3$. Thus, \begin{align*} \area(U)\geq 2\pi\, \frac{n_U}{3} -\sum_j\area(A_j) \,\,\mathrm{.} \end{align*} Let~$\ell_j$ denote the length of the geodesic component of~$\partial A_j$ and $L_j$ denote the length of the other component. Then, \begin{align*} \area(U)-t\, \length(\partial U) \geq 2\pi\, \frac{n_U}{3} + \sum_j\ttonde{(t-1)\area(A_j)-t(\area(A_j)+L_j)}\,\,\mathrm{.} \end{align*} By Lemma~\ref{l:Buser}, setting \begin{equation}\label{eq:wAnnuli} w_j\eqdef \arcsinh\left(\frac 1{\sinh(\ell_j/2) }\right)\,\,\mathrm{,}\;\, \end{equation} we have that \begin{equation*} \area(A_j)=\int_0^{w_j-s} \int_0^1\ell_j\cosh(\rho)\mathop{}\!\mathrm{d}\rho\mathop{}\!\mathrm{d} t=\ell_j\sinh(w_j-s) \end{equation*} and \begin{equation*} L_j=\ell_j\cosh(w_j-s)\,\,\mathrm{.} \end{equation*} We see that \begin{align*} \area(A_j)+L_j=\ell_j \ttonde{\sinh(w_j-s)+\cosh(w_j-s)}=\frac{e^{-s}\ell_j}{\tanh(\ell_j/4)} \end{align*} is monotone increasing in~$\ell_j$ (e.g.\ by differentiating w.r.t.\ $\ell_j$). Thus it achieves its minimum when the two boundary components of~$A_j$ coincide, in which case~$L_j=\ell_j$ and~$\area(A_j)=0$. In this case, $s$~measures the distance from the geodesic to the edge of the collar containing~$A_j$.
Therefore, by the Collar Lemma, $\sinh(\ell_j/2)=\csch(s)$, hence \begin{align*} \area(U)-t\, \length(\partial U)\geq&\ 2n_U\tonde{\pi/3 - t\, \arcsinh\ttonde{\csch(s)}} \\ \geq&\ 2\tonde{\pi/3 - t\, \arcsinh\ttonde{\csch(s)}} \,\,\mathrm{.} \end{align*} Letting the right-hand side above be larger than~$\pi/3$ we get \begin{align*} s\geq \arcsinh\ttonde{\csch(\pi/(6t))}\,\,\mathrm{,}\;\, \qquad t>1\,\,\mathrm{,}\;\, \qquad s=\log\tonde{\coth\ttonde{\tfrac{\pi}{12}} t}\,\,\mathrm{.} \end{align*} \iref{i:l:BD5.1:2} Let~$\mcC$ be a short collar in~$X$. For~$p\in \enl{X_\thick}{\varepsilon+s}\cap \mcC$, by the Collar Lemma we have that \begin{align*} \inj_p&\geq \arcsinh\left( e^{-\dist(p,\partial\mcC)}\right)\geq\arcsinh\left(e^{-s}\right)=\arcsinh\left(\tfrac 1 {c_1 t}\right)\geq \frac {c_2}{t} \end{align*} with $c_2\eqdef \arcsinh(1/c_1)$, and where the last inequality is sharp by a direct computation. \iref{i:l:BD5.1:3} Let~$p_1$,~$p_2\in U$. Then we can find a rectifiable curve~$\gamma$, connecting~$p_1$ to~$p_2$, and enjoying the following properties: \begin{enumerate}[$(a)$] \item if~$\gamma\cap \mcC\neq \emp$, then~$\gamma\cap \partial\mcC$ consists of two points belonging to distinct connected components of~$\partial\mcC$, and~$\length(\gamma\big\lvert_\mcC)\leq 2s$; \item in each connected component of~$\enl{X_\thick}{\varepsilon}$, the curve~$\gamma$ is a shortest path between its endpoints. \end{enumerate} See Fig.~\ref{Fig1} below. \begin{center}\begin{figure}[htb!] \centering \def\svgwidth{350pt} \input{figure1.pdf_tex} \caption{The piecewise geodesic curve~$\gamma$ connecting~$p_1$ to~$p_2$; the shaded regions~$\mathcal C$ are collars around short geodesics.
}\label{Fig1} \end{figure}\end{center} We can decompose~$\gamma$ into its components in \begin{align*} X_1\eqdef \enl{X_\thick}{\varepsilon} \qquad \text{and} \qquad X_2\eqdef \overline{\enl{X_\thick}{\varepsilon+s}\setminus \enl{X_\thick}{\varepsilon}}\subset \enl{X_\thin}{\varepsilon}\,\,\mathrm{.} \end{align*} By the Bounded Diameter Lemma \cite{Th1978}, the length of each component of~$\gamma$ in~$X_1$ is bounded by~$4\abs\chi/\varepsilon$, and we have at most~$\abs\chi$ such components. In each connected component of~$X_2\subset \enl{X_\thin}{\varepsilon}$ the length of~$\gamma$ is at most~$2s$, and there are at most~$\kappa$ such components. Thus, for $s=\log(c_1t)$ we get \begin{equation*} \ell(\gamma)\leq \frac{4\abs\chi ^2}\varepsilon+2s\kappa=\frac{4\abs\chi ^2}\varepsilon+2\kappa \log(c_1t) \,\,\mathrm{.} \qedhere \end{equation*} \end{proof} \end{lem} \blue{We now show how to estimate the quantities related to~$\enl{K}{r}$ in Equation~\eqref{keyestimate}. Let $Z$ be the zeroes of a given quadratic differential $\psi$. } \begin{lem}\label{areaest} Let~$U\in \pi_0\ttonde{\enl{X_\thick}{s}}$ and~$K\eqdef \overline{X\setminus U}\cup Z$.
Then, for~$r\in (0,1)$, $t>1$, and~$s=\log_+(c_1 t)$ \begin{align*} \area\ttonde{X\setminus \enl{K}{r}}-t\, \length\ttonde{\partial\enl{K}{r}}\geq&\ \pi/3 -\kappa rt\quadre{c_7- s+ 4\pi \ttonde{1+\sinh(1)} \frac{\abs{\chi}}{\kappa}} \\ \geq&\ \pi/3 -\kappa r t\quadre{4\pi \ttonde{1+\sinh(1)}+c_7 } \\ =&\ \pi/3 -\kappa r t (c_5+c_7) \,\,\mathrm{.} \end{align*} \begin{proof} Since~$\card{Z}\leq 2\abs{\chi}$ and~$r<1$, we have that \begin{align} \nonumber \length\ttonde{\partial\enl{K}{r}}\leq&\ \length(\partial U)+\length\ttonde{\partial \enl{Z}{r}} \leq \length(\partial U) + 2\pi\card{Z} \sinh(r) \\ \label{eq:l:Extra:1} \leq&\ \length(\partial U) + 4\pi\abs{\chi} \sinh(1) \, r \,\,\mathrm{.} \end{align} Furthermore, \begin{align*} \area\ttonde{X\setminus \enl{K}{r}}=&\ \area(X)-\area\ttonde{\enl{K}{r}} \\ \geq&\ \area(X)-\tonde{\area\ttonde{\overline{X\setminus U}}+\area\ttonde{\enl{\partial U^+}{r}}+\area\ttonde{\enl{Z}{r}}} \\ \geq&\ \area(U)-\area\ttonde{\enl{\partial U^+}{r}}-4\pi\abs{\chi}\ttonde{\cosh(r)-1} \\ \geq&\ \area(U)-\area\ttonde{\enl{\partial U^+}{r}} -4\pi\abs{\chi}r \\ \geq &\ \area(U) - t\, \area\ttonde{\enl{\partial U^+}{r}} -4\pi\abs{\chi} t r \end{align*} since~$t>1$. We can estimate~$\area\ttonde{\enl{\partial U^+}{r}}$ by assuming that~$\enl{\partial U^+}{r}$ is isometrically embedded, so that, by Lemma~\ref{l:Buser}, \begin{align*} \area\ttonde{\enl{\partial U^+}{r}}=\sum_j\ell(\gamma_j)\ttonde{\sinh(w_j-s+r)-\sinh(w_j-s)}\,\,\mathrm{.} \end{align*} Repeat the construction of the annuli~$A_j$ in Lemma~\ref{l:BD5.1}, and let~$w_j$ be defined as in~\eqref{eq:wAnnuli}.
By Taylor expansion of~$\sinh$ around~$w_j-s>0$, \blue{we have that} \begin{align*} \area(U) - t\, \area\ttonde{\enl{\partial U^+}{r}}\geq&\ \area(U)-r t\sum_j\ell(\gamma_j)(w_j-s)\\ \geq&\ \area(U)- r t \sum_j\, \ell(\gamma_j)\, \arcsinh\ttonde{\csch(\ell(\gamma_j)/2)} \\ &+r t\,\log(c_1 t)\sum_j \ell(\gamma_j) \\ \geq&\ \area(U)- c_7 \kappa rt+r t\, \log(c_1 t) \sum_j \ell(\gamma_j)\,\,\mathrm{.} \end{align*} As a function of the metric, the summation $\sum_j\ell(\gamma_j)$ attains its maximum over the \blue{moduli space~$\mathcal M(S_{g,n})$} when~$\ell(\gamma_j)=\varepsilon_0$ for each~$j$, thus its maximum is~$\kappa\varepsilon_0$. Therefore, \begin{align}\label{eq:l:Extra:2} \area\ttonde{X\setminus \enl{K}{r}}\geq \area(U)-r t \kappa\, c_7+rt\kappa \varepsilon_0\, \log(c_1 t)-4\pi\abs{\chi} t r \,\,\mathrm{.} \end{align} Multiplying~\eqref{eq:l:Extra:1} by~$-t$ and adding~\eqref{eq:l:Extra:2}, together with Lemma~\ref{l:BD5.1}\iref{i:l:BD5.1:1}, yields the conclusion. \end{proof} \end{lem} Let~$U$ be the component of~$\enl{X_\thick}{\varepsilon+s}$ containing~$p_{\max}(s)$, where~$s=\log(c_1 t)$ and~$p_{\max}$ satisfies Lemma~\ref{4.6}. Set~$K'\eqdef \overline{X\setminus U}$ and let~$K\eqdef K'\cup Z$. \blue{This is a slight refinement of the previous $K$, in which we chose a specific component $U$ and a slightly larger neighbourhood of $U$. The next Lemma will deal with paths in $X\setminus \enl K r$. When $r=0$, $X\setminus K=U\setminus Z$, which looks as in Fig.~\ref{Fig3} below.} \begin{center}\begin{figure}[htb!] \centering \def\svgwidth{250pt} \input{figure3.pdf_tex} \caption{The set $X\setminus K=\text{int}(U)\setminus Z$ is greyed out and the white points are zeroes of the quadratic differential.}\label{Fig3} \end{figure}\end{center} \begin{lem}\label{5.2} Fix~$t>1$.
If~$r< c_2 /(\abs{\chi}t)$, then any two points in $X\setminus \enl{K}{r}$ can be joined by a rectifiable curve in $X\setminus \enl{K}{r/2}$. \begin{proof} \blue{We start with the following claim.} \paragraph{Claim} Let~$V \in \pi_0\ttonde{\enl{K}{r/2}}$. If~$V\cap\enl{K'}{r/2}\neq \emp$, then~$V\subset \enl{K'}{r}$. Indeed, for~$c>0$ to be fixed later, let~$V\in \pi_0\ttonde{\enl{K}{c r}}$ with~$V\cap\enl{K'}{c r}\neq \emp$. We need to show that if $V$ is such a component, it does not separate $X\setminus \enl K r $. Fix~$p\in V\setminus \enl{K'}{c r}$. Since~$V$ is connected and contained in $\enl {K}{cr}$, then~$p$ is joined to~$\enl{K'}{c r}$ by a chain of disks of radius~$c r$ centered at points in~$Z$. Therefore~$\dist(p, \enl{K'}{cr})\leq 2 c \card{Z} r$. Choosing~$c<(2\card{Z}+1)^{-1}$, e.g.~ $c\eqdef \tfrac{1}{2}(2\card{Z}+1)^{-1}$, proves that $\dist(p, \enl{K'}{cr})\leq r/2$ and so that: \begin{align*} \dist(p, K')&\leq \dist(p, \enl{K'}{cr})+cr=2c\card{Z}r +cr\leq (2\card{Z}+1)cr\leq r/2\,\,\mathrm{,}\;\, \end{align*} proving that~$V\subset \enl{K'}{r}$. \blue{This concludes the proof of the claim.} Thus, we need to show that for $r< c_2/t$ and for all $p_0,p_1\in X\setminus \enl{K}{r}\subset \enl{U}{s}$ there exists a rectifiable curve~$\gamma\subset X\setminus \enl{K}{c_2 r}$ connecting~$p_0$ to~$p_1$. By the Collar Lemma, \begin{equation*} \inj_{\enl{U}{s}}\eqdef \min_{p\in \enl{U}{s}} \inj_p\geq \arcsinh(e^{-s})=\arcsinh\tonde{\frac{1}{c_1t}}\geq \frac{c_2}{t}\,\,\mathrm{,}\;\, \end{equation*} similarly to the proof of Lemma~\ref{l:BD5.1}\iref{i:l:BD5.1:2}. Now, argue by contradiction \blue{and assume} that there exists no rectifiable curve as in the assertion.
Then, there exists a rectifiable loop~$\alpha$ in~$\enl{Z}{r/2}$ separating~$X\setminus \enl{K}{r}\subset \enl{U}{s}$ into connected components so that~$p_0$ and~$p_1$ belong to two distinct such components. See the picture in Fig.~\ref{Fig2} below. \begin{center}\begin{figure}[htb!] \centering \def350pt{350pt} \input{figure2.pdf_tex} \caption{The two cases for the loop~$\alpha$ separating~$p_1$ from~$p_2$. The shaded regions are part of $\enl{K}{r/2}$ and the grey dots are zeroes of the quadratic differential.}\label{Fig2} \end{figure}\end{center} For any such~$\alpha$, \begin{equation*} \length(\alpha)\leq r \card{Z}<\card{Z}\frac {c_2}{\abs \chi t}\leq \frac {c_2} t\leq \inj_{\enl{U}{s}}\,\,\mathrm{.} \end{equation*} As a consequence,~$\alpha\subset \enl{U}{s}$ is null-homotopic and so we must be \blue{on the right side} of Fig.~\ref{Fig2}. Therefore, there exists $L\in {\mathbb R}^+$ such that $\alpha\subset B_L(q)$ for some $q\in \enl{U}{s}$ and $L\leq \ell(\alpha)/2< r/2$. Thus, the component $W\subset X\setminus \enl{K}{r}$ containing, say, $p_1$, lies in $B_L(q)\subset B_r(q)$, and by construction its distance from any zero is at least~$r$. Therefore, $W$ is at distance $r/2+L< r$ from a zero. However, since $\dist(W,Z)\geq r$, we have a contradiction. \end{proof} \end{lem} We now state the main lemma we will use in our estimate of \eqref{keyestimate}. \begin{lem}\label{5.3} Let~$r<c_2/(\abs\chi t)$, and set~$a_1\eqdef 4\abs{\chi}^2/\varepsilon +2\kappa \log c_1+2\, c_2\, c_3$.
Then, any two points in $\overline{X\setminus \enl{K}{r}}$ are joined by a rectifiable curve~$\gamma\subset \overline{X\setminus \enl{K}{r/2}}$ with the following properties: \begin{enumerate}[$(i)$] \item\label{i:l:BD5.3:1} $\gamma$ consists of length-minimising geodesic segments and at most one arc in each of the components of $\partial \enl{K}{r/2}$; \item\label{i:l:BD5.3:2} $\ell(\gamma)\leq a_1+ 2\kappa \log t$; \item\label{i:l:BD5.3:3} for $z\in Z$: $\length\ttonde{\gamma\cap B_w(z)}\leq 2(1+c_3) w$ for all $w>0$ such that~$B_w(z)$ is embedded. \end{enumerate} \begin{proof} \iref{i:l:BD5.3:1}--\iref{i:l:BD5.3:2} Fix points~$p_0,p_1\in \overline{X\setminus \enl{K}{r}}$. By Lemma \ref{l:BD5.1} there exists a rectifiable~$\gamma\subset U$ connecting them, with \begin{equation*} \ell(\gamma)\leq \frac{4\abs{\chi}^2}{\varepsilon} +2\kappa \log c_1 +2\kappa \log t \,\,\mathrm{.} \end{equation*} The curve~$\gamma$ intersects~$\enl{K}{r/2}$ in at most~$2\abs\chi$ components (i.e.\ balls around zeroes of~$\psi$). In each such component $V= B_{r/2}(z)$ (for some~$z\in Z$) we can replace~$\gamma\big\lvert_V$ by a shortest path on~$\partial V$, as in Lemma~\ref{l:BD5.1}\iref{i:l:BD5.1:3}.
Since~$V$ is a ball, the length of~$\gamma\big\lvert_V$ is bounded by half the circumference of a great circle on~$V$, i.e.\ \begin{equation}\label{eq:l:BD5.3:1} \pi\sinh(r/2)\leq \pi\sinh(r)\leq c_3 \, r \,\,\mathrm{,}\;\, \qquad r<\frac{c_2}{\abs{\chi} t}<\frac{c_2}{\abs{\chi}}\,\,\mathrm{.} \end{equation} By repeating this reasoning on each component~$V$ as above, we obtain a path $\gamma'\colon p_0\rightarrow p_1$ satisfying~\iref{i:l:BD5.3:1} and such that: \begin{align*} \length(\gamma')&\leq\ell(\gamma)+2\abs\chi c_3\,r \\ &\leq \frac{4\abs{\chi}^2}{\varepsilon} +2\kappa\log c_1 +2\kappa \log t+2 \frac {c_2 c_3}{t} && (t>1) \\ &\leq \frac{4\abs{\chi}^2}{\varepsilon} + 2\kappa\log c_1 +2\, c_2\, c_3+2\kappa \log t\,\,\mathrm{.} \end{align*} \iref{i:l:BD5.3:3} Let $z\in Z$ be a zero of~$\psi$ and fix~$w>0$. Each component~$\alpha$ of~$\gamma$ in~$\partial \enl{K}{c_2r}$ has length at most $c_3\, r$ and each geodesic arc of~$\gamma$ connecting an endpoint of~$\alpha$ to~$\partial B_w(z)$ has length at most~$w$. We now estimate \begin{equation*} \card{\pi_0\ttonde{\gamma\cap \overline{B_w(z)}}}\leq \begin{cases} 0 & w<r/2 \\ 1 & p_1,p_2\in \overline{B_w(z)} \\ 1 & p_1\in \overline{B_w(z)}\,\,\mathrm{,}\;\, p_2\not\in \overline{B_w(z)} \\ 1 & p_1,p_2\not\in \overline{B_w(z)} \end{cases} \end{equation*} The first bound holds by definition. The second holds by the convexity of hyperbolic balls: if $p_1,p_2\in B_w(z)$ then we can choose~$\gamma\subset B_w(z)$. The third and fourth follow from the fact that if~$\gamma$ had more than one component in~$B_w(z)$, then we could shortcut~$\gamma$ inside the ball. If~$w=r/2$, then~$\gamma\big\lvert_{\overline{B_w(z)}}\subset \partial B_w(z)$, and we may choose~$\gamma\big\lvert_{B_w(z)}$ to be a circumference arc, so that~$\length\ttonde{\gamma\big\lvert_{B_{r/2}(z)}}\leq \pi\sinh (r/2)\leq c_3 r$ by~\eqref{eq:l:BD5.3:1}.
If instead~$w>a_1 r$, then we may choose~$\gamma$ to be either a geodesic segment, or a union~$\gamma_1\cup\gamma_{2}\cup\gamma_3$, where~$\gamma_1$ and $\gamma_{2}$ are geodesic segments each connecting~$\partial B_w(z)$ to~$\partial B_{r/2}(z)$, and~$\gamma_3$ is a circumference arc on~$\partial B_{r/2}(z)$. In the first case,~$\length(\gamma\big\lvert_{B_w(z)})\leq 2w$. In the second case, \[ \length\ttonde{\gamma\big\lvert_{B_w(z)}}\leq 2w+\pi\sinh(a_1 r)\leq 2w+c_3 r\,\,\mathrm{.} \] Thus, we obtain that: \begin{equation*} \length\ttonde{\gamma\cap \overline{B_w(z)}}\leq \begin{cases} 0 & \text{if } w< r/2 \\ \displaystyle 2c_3 w & \text{if } w=r/2 \\ 2w+c_3w & \text{if } w\geq r/2 \end{cases}\qquad\qquad \leq 2(1+c_3)w \,\,\mathrm{,}\;\, \end{equation*} which concludes the proof. \end{proof} \end{lem} \blue{With~$m(r)$ as in~\eqref{eq:DefMr} we can now estimate~\eqref{keyestimate} and show our final result.} \begin{proof}[Proof of Theorem~\ref{t:Main}] Let~$r<c_2/\abs{\chi}\leq 2$. Let $s$ be as in Lemma \ref{areaest} and choose $U$ to be the component of $\enl{X_\thick}{s}$ containing the point $p_{\max}(s)$ as in Lemma \ref{4.6}. Let $Z$ be the set of zeroes of $\psi$, $K\eqdef \overline{X\setminus U} \cup Z$, and $K'\eqdef \overline{X\setminus U}$. Let $W\eqdef \enl{X_\thick}{s+1}\setminus Z$, $p_1\in\partial \enl{K}{r}$, and $p_2=p_{\max}(s)\in \enl{X_\thick}{s}\setminus Z$. Therefore, we have that $\langle \psi(p_2)\rangle=L(s)$.
Moreover, let $\gamma\subset W$ be a path from $p_1$ to $p_2$ satisfying the conditions of Lemma \ref{5.3} and note that \begin{equation*} \dist(p,\partial W)\geq \min \set{1,\dist(p,Z)}\,\,\mathrm{,}\;\, \qquad p\in\gamma \,\,\mathrm{.} \end{equation*} By Lemma \ref{4.6}\iref{i:l:BD4.6:2} we have that: \[ L(s+1)\leq e\cdot L(s) \,\,\mathrm{.} \] By \cite[Theorem 4.4]{BarDil96}, we have that: \begin{align*} \langle \psi(p_1)\rangle&\geq \langle \psi(p_2)\rangle \left(\frac{ \langle \psi(p_2)\rangle}{c_4 \cdot L(s+1)}\right)^{-1+\exp\tonde{ \int_\gamma\frac{\mathop{}\!\mathrm{d} s}{\tanh(1\wedge \dist(\gamma_s,Z))}}} \\ &\geq L(s)\left(\frac 1 {e\, c_4}\right)^{-1+\exp\tonde{\int_\gamma \coth\ttonde{1\wedge \dist(\gamma_s,Z)} \mathop{}\!\mathrm{d} s}} \\ &\geq e\, c_4\, L(0)\cdot \tonde{\frac{1}{e\, c_4}}^{\exp\tonde{\int_\gamma \coth\ttonde{1\wedge \dist(\gamma_s,Z)} \mathop{}\!\mathrm{d} s}} \,\,\mathrm{,}\;\, \intertext{where we can estimate~$L(0)$ by Lemma~\ref{4.6}\iref{i:l:BD4.6:1},} &\geq \frac{e\,c_4\,\ell}{16\abs{\chi}} \norm{\psi} \tonde{\frac{1}{e\, c_4}}^{\exp\tonde{\int_\gamma \coth\ttonde{1\wedge \dist(\gamma_s,Z)} \mathop{}\!\mathrm{d} s}} \\ &=\frac{e\,c_4\, \ell}{16\abs{\chi}} \norm{\psi} \exp\tonde{-\log(e\,c_4)\exp\tonde{\int_\gamma\coth\ttonde{1\wedge \dist(\gamma_s,Z)} \mathop{}\!\mathrm{d} s }}\,\,\mathrm{.} \end{align*} We now estimate~$\int_\gamma \coth(1\wedge \dist(\gamma_s,Z)) \mathop{}\!\mathrm{d} s$ from above by breaking it into two terms: \begin{equation*} \int_\gamma\frac{\mathop{}\!\mathrm{d} s}{\tanh\ttonde{1\wedge \dist(\gamma_s,Z)}}\leq \int_{\gamma\setminus \enl{Z}{1}}\mathop{}\!\mathrm{d} s+\int_{\gamma\cap \enl{Z}{1}}\frac{\mathop{}\!\mathrm{d} s}{\dist(\gamma_s,Z)}\,\,\mathrm{.} \end{equation*} The first term is bounded by $\ell(\gamma)$, while for the second term we have, by Lemma~\ref{5.3}\iref{i:l:BD5.3:3}, \begin{align*} \int_{\gamma\cap \enl{Z}{1}}\frac{\mathop{}\!\mathrm{d}
s}{\dist(\gamma_s,Z)}\leq& \int_1^{\frac{2}{r}} \length\ttonde{\gamma\cap \enl{Z}{1/u}} \mathop{}\!\mathrm{d} u \\ \leq& \int_1^{\frac{2}{r}}\frac{2(1+c_3)}{u^2} \mathop{}\!\mathrm{d} u = 2(1+c_3)(1-r/2)\,\,\mathrm{,}\;\, \end{align*} since~$r\leq 2$. By Lemma \ref{5.3}\iref{i:l:BD5.3:2} we have that: \begin{equation*} \ell(\gamma)\leq a_1+2\kappa\,\log t\,\,\mathrm{.} \end{equation*} Thus: \begin{align*} \int_\gamma\frac{\mathop{}\!\mathrm{d} s}{\tanh\ttonde{1\wedge \dist(\gamma_s,Z)}}\leq &\ a_1+2\kappa\,\log t+2(1+c_3)(1-r/2) \\ =&\ a_1+2(1+c_3)+2\kappa\,\log t-(1+c_3) r\,\,\mathrm{.} \end{align*} Therefore, since~$\log(e\,c_4)>0$, for all $p_1\in\partial \enl{K}{r}$ we get: \begin{equation*} \langle \psi(p_1)\rangle\geq \frac{e\, c_4\, \ell}{16\abs\chi} \norm{\psi} \exp\tonde{-\log(e\, c_4) \exp\ttonde{ a_1+2(1+c_3)+2\kappa\,\log t-(1+c_3) r} } \end{equation*} Thus, by minimizing over~$p_1\in\partial \enl{K}{r}$ we obtain: \begin{equation*} m(r) \geq \frac{e\, c_4\,\ell}{16\abs\chi}\norm{\psi} \exp\tonde{-\log(e\, c_4) \exp\ttonde{ a_1+2(1+c_3)+2\kappa\,\log t-(1+c_3) r} } \end{equation*} which for $a_2\eqdef \log(e\, c_4) \, e^{a_1+2(1+c_3)}>0$ can be rewritten as: \begin{equation*} m(r)\geq e\, c_4\, \frac{\ell}{16\abs\chi}\norm{\psi} \exp\ttonde{-a_2 t^{2\kappa} e^{-(1+c_3) r} } \end{equation*} Then, equation~\eqref{keyestimate} with $K\eqdef W$ becomes, for $r_0<\frac 1{4t}$, \begin{align*} 1-\norm\psi\geq&\ \int_0^{r_0} m(r) \tonde{ t^{-1} \area\ttonde{X\setminus \enl{K}{r}}-\length\ttonde{\partial \enl{K}{r}} }\mathop{}\!\mathrm{d} r\,\,\mathrm{.} \intertext{By Lemma~\ref{areaest} we thus have that, for every~$r_0<\tfrac{1}{4t}$,} 1-\norm\psi \geq&\ \frac{e\, c_4\, \ell}{16\abs\chi t}\norm{\psi} \int_0^{r_0} \exp\tonde{-a_2 t^{2\kappa} e^{-(1+c_3) r}} \ttonde{\pi/3 -\kappa r t (c_5+c_7) } \mathop{}\!\mathrm{d} r \\ \geq&\ \frac{e\, c_4\, \ell\, e^{-a_2 t^{2\kappa}}}{16\abs\chi t}\norm{\psi} \int_0^{r_0} \ttonde{\pi/3 -\kappa r t (c_5+c_7) }
\mathop{}\!\mathrm{d} r \,\,\mathrm{.} \intertext{Maximizing over~$r_0\in \ttonde{0, \tfrac{1}{4t}}$, additionally so that the integrand is non-negative, we therefore have} 1-\norm\psi \geq &\ \frac{e\, c_4\,\ell\, e^{-a_2 t^{2\kappa}}}{16\abs\chi t}\norm{\psi} \int_0^{\frac{1}{4t}\wedge \frac{\pi}{3\kappa t (c_5+c_7)}} \ttonde{\pi/3 -\kappa r t (c_5+c_7) } \mathop{}\!\mathrm{d} r \\ =& \ \frac{e\, \pi^2\, c_4}{288\, \kappa (c_5+c_7)} \frac{\ell\, e^{-a_2 t^{2\kappa}}}{\abs\chi t^2}\norm{\psi}\,\,\mathrm{,}\;\, \end{align*} and maximizing the right-hand side over~$t>1$, i.e.\ letting $t\to 1$, we conclude that \begin{align*} \norm{\psi}\leq& \frac{1}{1+\displaystyle \frac{C\,\ell\, e^{-a_2}}{\kappa\abs{\chi}} }\,\,\mathrm{,}\;\, \qquad C\eqdef \frac{e\, \pi^2\, c_4}{288\, (c_5+c_7)} \,\,\mathrm{.} \qedhere \end{align*} \end{proof} \paragraph{Contraction factors of skinning maps} We now apply our explicit bounds from Theorem \ref{t:Main} to get effective bounds on the contraction factor of the skinning map. Let $N\in AH(M,\mathcal P)$ be a pared acylindrical manifold so that \begin{itemize} \item $\mathcal P\subset\partial M$ is a collection of pairwise disjoint closed annuli and tori; \item $\mathcal P$ contains all torus components of $\partial M$ and $M$ is acylindrical relative to $\mathcal P$. \end{itemize} Let $\partial_0 M\eqdef \partial M\setminus\mathcal P$. By \cite[p.~443]{McM1989} we have that, for every such~$N$, \begin{equation*} \abs{\mathop{}\!\mathrm{d} \sigma}=\abs{\mathop{}\!\mathrm{d} \sigma^*} \,\,\mathrm{.} \end{equation*} By Theorem \ref{t:Main}, \begin{equation*} \mathop{}\!\mathrm{d}\sigma^*(\phi)=\sum_{U\in BN} \theta_{U/X} \left(\phi\vert_U\right)\leq \max_{X\in \partial_0M} \frac{1}{1+C_{g,n,\ell}} \norm\phi\,\,\mathrm{,}\;\, \end{equation*} where $\ell$ is the injectivity radius of the conformal boundary~$\partial_\infty N$ and $C_{g,n,\ell}$ is the constant from Theorem~\ref{t:Main}.
Thus, we obtain the following corollary: \begin{cor} Let~$(M,\mathcal P)$ be a pared acylindrical hyperbolic manifold. Then, the skinning map at $N\in AH(M,\mathcal P)$ has contraction factor bounded by \begin{equation*} \abs{\mathop{}\!\mathrm{d}\sigma}\leq \max_{X\in \partial_0M} \frac{1}{1+C_{g,n,\ell}} \,\,\mathrm{.}\end{equation*} \end{cor} {\small \begin{thebibliography}{10} \bibitem{AG2004} I.~Agol. \newblock {Tameness of hyperbolic 3-manifolds}. \newblock {\url{http://arxiv.org/abs/math/0405568}}, 2004. \bibitem{BarDil96} D.~E. Barrett and J.~Diller. \newblock Contraction properties of the {P}oincar\'{e} series operator. \newblock {\em Michigan Math. J.}, 43(3):519--538, 1996. \bibitem{Bu1992} P.~Buser. \newblock {\em Geometry and spectra of compact {R}iemann surfaces}. \newblock Modern Birkh\"{a}user Classics. Birkh\"{a}user Boston, Ltd., Boston, MA, 2010. \newblock Reprint of the 1992 edition. \bibitem{CG2006} D.~Calegari and D.~Gabai. \newblock Shrinkwrapping and the taming of hyperbolic 3-manifolds. \newblock {\em J. Amer. Math. Soc.}, 19(2):385--446, 2006. \bibitem{C20171} T.~Cremaschi. \newblock A locally hyperbolic 3-manifold that is not hyperbolic. \newblock {\em Proc. Amer. Math. Soc.}, 146(12):5475--5483, 2018. \bibitem{C2018b} T.~Cremaschi. \newblock Hyperbolization on infinite type 3-manifolds. \newblock \url{http://arxiv.org/abs/1904.11359}, 2019. \bibitem{C2018c} T.~Cremaschi. \newblock A locally hyperbolic 3-manifold that is not homotopy equivalent to any hyperbolic 3-manifold. \newblock {\em Conform. Geom. Dyn.}, 24:118--130, 2020. \bibitem{CS2018} T.~Cremaschi and J.~Souto. \newblock Discrete groups without finite quotients. \newblock {\em Topology Appl.}, 248:138--142, 2018. \bibitem{CVP2020} T.~Cremaschi and F.~Vargas~Pallete. \newblock {Hyperbolic limits of Cantor set complements in the sphere}. \newblock {\em Bull.\ London Math.\ Soc.}, April 2020. \bibitem{Dil95} J.~Diller.
\newblock {A canonical $\overline\partial$ problem for bordered Riemann surfaces}. \newblock {\em Indiana Univ.\ Math.\ J.}, 44:747--763, 1995. \bibitem{Kap2001} M.~Kapovich. \newblock {\em Hyperbolic manifolds and discrete groups}, volume 183 of {\em Progress in Mathematics}. \newblock Birkh\"{a}user Boston, Inc., Boston, MA, 2001. \bibitem{McM1989} C.~McMullen. \newblock Iteration on {T}eichm\"{u}ller space. \newblock {\em Invent. Math.}, 99(2):425--454, 1990. \bibitem{Th1978} W.~P. Thurston. \newblock {Geometry and Topology of 3-manifolds}. \newblock {\url{http://library.msri.org/books/gt3m/}}, 1978. \newblock {Princeton Mathematics Department Lecture Notes}. \bibitem{Th1982} W.~P. Thurston. \newblock Hyperbolic geometry and {$3$}-manifolds. \newblock In {\em Low-dimensional topology ({B}angor, 1979)}, volume~48 of {\em London Math. Soc. Lecture Note Ser.}, pages 9--25. Cambridge Univ. Press, Cambridge-New York, 1982. \end{thebibliography} } \end{document}
\begin{document} \title{Envelopes of $\alpha$-sections} \abstract{ \noindent Let $K$ be a planar convex body of area $|K|$, and take $0<\alpha<1$. An $\alpha$-section of $K$ is a line cutting $K$ into two parts, one of which has area $\alpha|K|$. This article presents a systematic study of the envelope of $\alpha$-sections and its dependence on $\alpha$. Several open questions are asked, one of them in relation to a problem of fair partitioning. } \ {\noindent\bf Keywords:} Convex body, alpha-section, envelope, floating body, fair partitioning. \ {\noindent\bf MSC Classification: 52A10, 52A38, 51M25, 51M04} \ \sec{1.}{Introduction} In this paper, unless explicitly stated otherwise, $K$ denotes a convex body in the Euclidean plane $E$; {\sl i.e.,} a compact convex subset of $E$ with nonempty interior. Let $\partial K$ denote the boundary of $K$ and $|K|$ its area. Given $\alpha\in\,]0,1[$, an {\sl $\alpha$-section of $K$} is an oriented line $\De \subset E$ cutting $K$ in two parts, one to the right, denoted by $K^-$, of area $|K^-|=\alpha|K|$, and the other to the left, $K^+$, of area $|K^+|=(1-\alpha)|K|$; here $K^{\pm}$ are compact sets, thus $K^+\cap K^-=\De\cap K$. Denote by $K_\alpha$ the intersection of all $K^+$ and call it the {\sl $\alpha$-core} of $K$; denote by $m_\alpha$ the envelope of all $\alpha$-sections of $K$. The purpose of this article is to study $m_\alpha$ and its relation to the $\alpha$-core. We refer to Sections~\reff{5.}-\reff{6.} for formal statements, and give in this introductory section an informal presentation. Since $K_\alpha$ is empty for $\alpha>\frac12$, we will often implicitly assume $\alpha\leq\frac12$ when dealing with $\alpha$-cores. The situation depends essentially upon whether $K$ is centrally symmetric (we will say `symmetric' in the sequel, for short) or not.
Precisely, we prove the following statements. If $K$ is symmetric then one has $m_\alpha=\partial K_\alpha$ for all $\alpha\in\,\big]0,\tfrac12\big[$. Moreover, we have the following equivalence: the envelope $m_\alpha$ is of class $\mathcal{C}^1$ for all $\alpha\in\,\big]0,\tfrac12\big[$ if and only if $K$ is strictly convex. If $K$ is non-symmetric then we cannot have $m_\alpha=\partial K_\alpha$ for all $\alpha\in\,\big]0,\tfrac12\big[$, because $m_\alpha$ exists for all $\alpha$, whereas $K_\alpha$ is empty for $\alpha$ close enough to $\frac12$. More precisely, there exists a critical value $\alpha_B\in\big[0,\tfrac12\big[$ such that for all $\alpha\in\,]0,\alpha_B]$ we have $m_\alpha=\partial K_\alpha$, and for all $\alpha\in\,\big]\alpha_B,\tfrac12\big[$ we have $m_\alpha\supsetneq\partial K_\alpha$. The case $\alpha_B=0$ can occur, {\sl e.g.}, if there exists a triangle containing $K$ with an edge entirely contained in $\partial K$. We also prove that $m_\alpha$ is never of class $\mathcal{C}^1$ for $\alpha\in\,\big]\alpha_B,\tfrac12\big[$, and that $m_\alpha$ is of class $\mathcal{C}^1$ for all $\alpha\in\,]0,\alpha_B[$ if and only if $\partial K$ does not contain two parallel segments. As a by-product, we obtain the following characterization: A convex body $K$ is non-symmetric if and only if there exists a triangle containing more than half of $K$ (in area), with one side entirely in $K$ and the two others disjoint from the interior of $K$.
Concerning the $\alpha$-core, we prove that there is another critical value, $\alpha_K\in\big[\frac49,\frac12\big]$, such that if $0<\alpha<\alpha_K$ then $K_\alpha$ is strictly convex with nonempty interior, if $\alpha=\alpha_K$ then $K_\alpha$ is reduced to one point, and if $\alpha_K<\alpha<1$ then $K_\alpha$ is empty. We emphasize that, when $K_\alpha$ is a point, this point is {\sl not} necessarily the mass center of $K$, see Section 8.9. The value $\alpha_K=\frac12$ occurs if and only if $K$ is symmetric, and the value $\alpha_K=\frac49$ occurs if and only if $K$ is a triangle. A similar study, for secants between parallel supporting lines to $K$, whose distances to the corresponding lines make a ratio of $\alpha/(1-\alpha)$, is the subject of the {\sl (ir)reducibility theory} of convex bodies. There too, the envelope of those secants is sometimes different from the intersection of the half-planes they define; and there exists a ratio, called critical, for which the latter object is reduced to a point. See for example \cit{h,kl,z1}. Our paper is closely related to previous works about slicing convex bodies, outer billiards (also called {\sl dual billiard}), floating bodies, and fair (or equi-) partitioning. It is also related to continuous families of curves in the sense of Gr\"unbaum, see Section 8.8 and the references therein. There is a vast literature on these subjects; we refer in the following to very few articles, and briefly present even fewer, that we find particularly relevant for our study. Further references can be found in those papers. Generalizing previous results on common tangents and common transversals to families of convex bodies \cit{bhj}, \cit{cgppsw}, J.~Kincses~\cit{k} showed that, for any {\sl well-separated} family of strictly convex bodies, the space of $\alpha$-sections is diffeomorphic to $\mathbb{S}^{d-k}$.
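As a quick numerical sanity check of the triangle value $\alpha_K=\frac49$ mentioned above (our illustration, not part of the paper's argument): for a triangle, the line parallel to a side through the centroid cuts off exactly $4/9$ of the area on the side of the opposite vertex. A minimal Python sketch, using the shoelace formula and a half-plane clip:

```python
# Sanity check: for the triangle (0,0), (1,0), (0,1), the line parallel to
# the side y = 0 through the centroid leaves 4/9 of the area towards (0,1).

def shoelace(poly):
    """Unsigned area of a polygon given as a list of (x, y) vertices."""
    a = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def clip_halfplane(poly, n, t):
    """Sutherland-Hodgman clip of a convex polygon to {p : <p, n> >= t}."""
    out = []
    for (p, q) in zip(poly, poly[1:] + poly[:1]):
        dp = p[0] * n[0] + p[1] * n[1] - t
        dq = q[0] * n[0] + q[1] * n[1] - t
        if dp >= 0:
            out.append(p)
        if (dp < 0) != (dq < 0):  # edge crosses the boundary line
            s = dp / (dp - dq)
            out.append((p[0] + s * (q[0] - p[0]), p[1] + s * (q[1] - p[1])))
    return out

T = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
gy = sum(y for _, y in T) / 3.0       # y-coordinate of the centroid
top = clip_halfplane(T, (0.0, 1.0), gy)  # part above the line y = gy
ratio = shoelace(top) / shoelace(T)
print(round(ratio, 6))  # -> 0.444444, i.e. 4/9
```

Since area ratios are affine invariants, the same computation gives $4/9$ for any triangle and any of its three sides.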
A {\sl billiard table} is a planar strictly convex body $K$. Choose a starting point $x$ outside the table and one of the two tangents through $x$ to $K$, say the right one, denoted by $D$; the image $T(x)$ of $x$ under the {\sl billiard map} $T$ is the point symmetric to $x$ with respect to the tangency point $D\cap\partial K$. A {\sl caustic} of the billiard is an invariant curve (an invariant torus in the terminology of KAM theory). The link between outer billiards and $\alpha$-sections is the following: If $K$ is the envelope of $\alpha$-sections of a convex set bounded by a curve $L$, for some $\alpha$, then $L$ is a caustic for the outer billiard of table $K$, {\sl cf. e.g.}~\cit{ft}. Outer billiards have been considered by several authors, such as J.~Moser~\cit{mo}, V.~F.~La\-zutkin~\cit{l}, E.~Gutkin and A.~Katok~\cit{gk}, D.~Fuchs and S.~Tabachnikov~\cit{ft}, and S.~Tabachnikov~\cit{t1}. It is therefore natural that several authors considered envelopes of $\alpha$-sections in the framework of outer billiards, see {\sl e.g.}, Lecture~11 of the book of D.~Fuchs and S.~Tabachnikov~\cit{ft}, and references therein. Besides those studies, the envelope of $\alpha$-sections seems to have been scarcely studied. With our notation, the set $K_{[\alpha]}$ bounded by $m_\alpha$ was called the {\sl floating body} of $K$ and its study goes back to C.~Dupin, see \cit{du,w}. On the other hand, our ``$\alpha$-core'' $K_\alpha$ was introduced by C.~Sch\"utt and E.~Werner \cit{sw} and studied in a series of papers \cit{sw,sw2,st,w} under the name of {\sl convex floating body}. For convex bodies $K$ in $\mathbb{R}^{d}$ and for $\alpha$ small enough, they gave estimates for ${\rm vol}_d(K)-{\rm vol}_d(K_{[\alpha]})$ and for ${\rm vol}_d(K)-{\rm vol}_d(K_\alpha)$, in relation to the affine surface area and to polygonal approximations.
M.~Meyer and S.~Reisner~\cit{mr} proved in arbitrary dimension that $K$ is symmetric if and only if $m_\alpha=\partial K_\alpha$ for any $\alpha\in\,\big]0,\tfrac12\big[\,$. They also proved that $m_\alpha$ is smooth if $K$ is strictly convex. A.~Stancu \cit{st} considered convex bodies $K \subset \mathbb{R}^{d}$ with boundary of class ${\cal C}^{\geq 4}$, and proved that there exists $\delta_K >0$ such that $K_{\delta}$ is homothetic to $K$, for some $\delta < \delta_K$, if and only if $K$ is an ellipsoid. The terminology of ``(convex) floating body'' is very suggestive of the floating theory in mathematical physics. For our study however, considering its close connections to $\alpha$-sections and fair partitioning, it seems more natural to use the term ``$\alpha$-core''. Some of our results, essentially Proposition~\reff{p1}, are known and already published. In that case we mention references after the statement. For the sake of self-containedness, however, we will provide complete proofs. We end the paper with several miscellaneous results and open questions. Our main conjecture is as follows: If $K\subseteq L$ are two convex bodies and $\alpha\in\,\big]0,\tfrac12\big[$, then there exists an $\alpha$-section of $L$ which either does not cross the interior of $K$, or is a $\beta$-section of $K$ for some $\beta\leq\alpha$. The conjecture has been recently proven in the case of planar convex bodies, see~\cit{fm}. There are two reasons which motivated us to undertake this systematic study of $\alpha$-sections and their envelopes. First, we found only one reference which focuses especially on the envelope of $\alpha$-sections, Lecture 11 of the nice book \cit{ft}, which is a simplified approach.
Although the results contained in the present article seem natural, and the proofs use only elementary tools and are most of the time simple, we hope that our work will be helpful in clarifying things. Our second motivation for studying $\alpha$-sections and their envelope was a problem of fair partitioning of a pizza. What we call {\sl a pizza} is a pair of planar convex bodies $K\subseteq L$, where $L$ represents the {\sl dough} and $K$ the {\sl topping} of the pizza. The problem of fair partitioning of convex bodies in $n$ pieces is a widely studied topic, see {\sl e.g.}~\cit{bbs,bm,b,bz,ka,kha,s,so}. Nevertheless, to our knowledge, our way of cutting has never been considered: We use a succession of double operations: a cut by a {\sl full} straight line, followed by a Euclidean move of one of the resulting pieces; then we repeat the procedure. The final partition is said to be {\sl fair} if each resulting slice has the same amount of $K$ and the same amount of $L$. The result of~\cit{fm} is the following: Given an integer $n\geq2$, there exists a fair partition of {\sl any} pizza $(K,L)$ into $n$ parts if and only if $n$ is even. \sec{2.}{Notation, conventions, and preliminaries} The notation ${\mathbb S}^1$ stands for the standard unit circle, ${\mathbb S}^1:=\mathbb{R}/(2\pi\mathbb{Z})$, endowed with its usual metric $d(\theta,\theta')=\min\{|\tau-\tau'|\ ;\ \tau\in\theta,\;\tau'\in\theta'\}$. On ${\mathbb S}^1$ we use the notation: $\theta\leq\theta'$ if there exist $\tau\in\theta,\;\tau'\in\theta'$ such that $\tau\leq\tau'<\tau+\pi$; it is not an order because it is not transitive.
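A small computational illustration (ours, not from the paper) of the metric $d$ on ${\mathbb S}^1$ and of the failure of transitivity: with $2\pi\approx6.283$, one has $0\leq2.5$ and $2.5\leq5.0$, yet not $0\leq5.0$ (rather $5.0\leq0$).

```python
# The relation theta <= theta' holds iff representatives satisfy
# tau <= tau' < tau + pi, i.e. iff (theta' - theta) mod 2*pi lies in [0, pi).
import math

TWO_PI = 2.0 * math.pi

def d(a, b):
    """Distance on S^1 = R/(2*pi*Z)."""
    diff = (a - b) % TWO_PI
    return min(diff, TWO_PI - diff)

def leq(a, b):
    """The (non-transitive) relation theta <= theta' from the text."""
    return (b - a) % TWO_PI < math.pi

print(leq(0.0, 2.5), leq(2.5, 5.0), leq(0.0, 5.0))  # -> True True False
```

So transitivity fails already for the triple $0$, $2.5$, $5.0$, which is the reason the paper calls this a notation rather than an order.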
Given $\theta\in{\mathbb S}^1$, let $\vec u(\theta)$ denote the unit vector of direction $\theta$, $\vec u(\theta)=(\cos\theta,\sin\theta)$. For convenience, we add arrows $\,\vec{\;}\,$ on vectors. Unless explicitly specified otherwise, all derivatives will be with respect to $\theta$, hence {\sl e.g.}, $\vec u\,'(\theta)=\frac{d\vec u}{d\theta}(\theta)=(-\sin\theta,\cos\theta)$ is the unit vector orthogonal to $\vec u(\theta)$ such that the frame $\big(\vec u(\theta),\vec u\,'(\theta)\big)$ is counterclockwise. Given an oriented straight line $\De$ in the plane, $\De^+$ denotes the {\sl closed} half-plane on the left bounded by $\De$, and $\De^-$ is the closed half-plane on the right. We identify oriented straight lines with points of the cylinder ${\bf C}={\mathbb S}^1\times\mathbb{R}$, associating each pair $(\theta,t)\in{\bf C}$ to the line oriented by $\vec u(\theta)$ and passing at the signed distance $t$ from the origin. In other words, the half-plane $\De^+$ is given by $\De^+=\{x\in\mathbb{R}^2\ ;\ \langle x,\vec u\,'(\theta)\rangle\geq t\}$. We endow ${\bf C}$ with the natural distance $d\big((\theta,t),(\theta',t')\big)=\big(d(\theta,\theta')^2+|t-t'|^2\big)^{1/2}$. Given $\alpha\in\,]0,1[$, an {\sl $\alpha$-section of $K$} is an oriented line $\De$ such that $|\De^-\cap K|=\alpha|K|$. For all $\alpha\in\,]0,1[$ and all $\theta\in[0,2\pi[$, there exists a unique $\alpha$-section of $K$ of direction $\theta$; it will be denoted by $\De(\alpha,\theta)$. This defines a continuous function $\De:\,]0,1[\,\times{\mathbb S}^1\to{\bf C}$.
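Computationally, the uniqueness of $\De(\alpha,\theta)$ can be exploited: for a fixed direction, the area $|\De^-\cap K|$ is continuous and nondecreasing in the offset $t$, so for a convex polygon the $\alpha$-section can be found by bisection on $t$. A Python sketch of this idea (our illustration, not from the paper; the helper names are ours):

```python
# Find the offset t of the alpha-section Delta(alpha, theta) of a convex
# polygon by bisection, using Delta^- = {x : <x, u'(theta)> <= t}.
import math

def area(poly):
    a = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def clip_le(poly, n, t):
    """Clip a convex polygon to the half-plane {p : <p, n> <= t}."""
    out = []
    for (p, q) in zip(poly, poly[1:] + poly[:1]):
        dp = t - (p[0] * n[0] + p[1] * n[1])
        dq = t - (q[0] * n[0] + q[1] * n[1])
        if dp >= 0:
            out.append(p)
        if (dp < 0) != (dq < 0):
            s = dp / (dp - dq)
            out.append((p[0] + s * (q[0] - p[0]), p[1] + s * (q[1] - p[1])))
    return out

def alpha_section(poly, alpha, theta):
    """Offset t with |Delta^- cap K| = alpha * |K|, found by bisection."""
    n = (-math.sin(theta), math.cos(theta))  # the normal u'(theta)
    target = alpha * area(poly)
    lo = min(x * n[0] + y * n[1] for x, y in poly)
    hi = max(x * n[0] + y * n[1] for x, y in poly)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area(clip_le(poly, n, mid)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
t = alpha_section(square, 0.25, 0.0)
print(round(t, 6))  # -> 0.25, the line y = 1/4 for the unit square
```

Monotonicity in $t$ is what makes bisection valid; the continuity of $\De$ in $(\alpha,\theta)$ stated above guarantees the computed line varies continuously with its inputs.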
We obviously have the symmetry \eq d{ \De^{\pm}(1-\alpha,\theta)=\De^\mp(\alpha,\theta+\pi). } For $0<\alpha\leq\frac12$, we call {\sl $\alpha$-core of $K$}, and denote by $K_\alpha$, the intersection of all left half-planes bounded by $\alpha$-sections: $$ K_\alpha=\bigcap_{\theta\in{\mathbb S}^1}\De^+(\alpha,\theta). $$ It is a compact convex subset of the plane, possibly reduced to one point or empty. Let $\De=\De(\alpha,\theta)$ be an $\alpha$-section of $K$ of direction $\vec u=\vec u(\theta)$, and let $b,c$ denote the endpoints of the chord $\De\cap K$, with $\vec{bc}$ having the orientation of $\vec u$ ({\sl i.e.}, the scalar product $\langle\vec{bc},\vec u\rangle$ is positive), see Figure~\reff{f1}. \begin{figure}[htb] \begin{center} \raisebox{0cm}{\framebox[2mm]{\ } \epsfysize=5cm \framebox[2mm]{\ } \epsfbox{f1.eps}} \begin{picture}(300,100) \put(60,0){\line(5,1){220}}\put(280,44){\vector(4,1){0}} \put(40,60){\line(5,1){20}}\put(60,64){\vector(4,1){0}} \put(50,50){$\vec u$} \put(40,60){\line(-1,5){4}}\put(36,80){\vector(-1,4){0}} \put(21,70){$\vec u\,'$} \put(103,8.6){\line(5,-2){45}}\put(148,-9.4){\vector(3,-1){0}} \put(103,8.6){\circle*{3}}\put(151,-13){$\vec b\,'$} \put(100,-3){$b$}\put(125,4.5){$\beta$} \put(158.65,19.8){\circle*{3}} \put(155,11){$m$} \put(214.3,31){\vector(1,4){10}}\put(214.3,31){\circle*{3}} \put(227,65){$\vec c\,'$} \put(215,22){$c$}\put(222,40){$\gamma$} \put(130,60){$K$}\put(180,100){$\partial K$} \put(240,25){$\Delta$} \put(255,5){$\Delta^-$}\put(250,55){$\Delta^+$} \end{picture} \end{center} \caption{{\sl Some notation}} \lb{f1} \end{figure} Let $m=\tfrac12(b+c)$ denote the midpoint of the chord $bc$. Let $h$ denote the half-length of $bc$; hence we have $b=m-h\vec u$ and $c=m+h\vec u$. The functions $b,c,m$ and $h$ are continuous with respect to $\alpha$ and $\theta$. As we will see in the proof of Proposition~\reff{p1}, they are also left- and right-differentiable with respect to $\theta$ at each $\theta_0\in{\mathbb S}^1$, {\sl e.g.}, the following limit exists (with the convention $\theta\to\theta_0^-$ for $\theta\to\theta_0$, $\theta<\theta_0$) $$ \vec b_l'(\theta_0)=\lim_{\theta\to\theta_0^-} \tfrac1{\theta-\theta_0}\big(b(\alpha,\theta)-b(\alpha,\theta_0)\big), $$ and similarly for $\vec b_r', \vec c_l\,', \vec c_r\,'$. They are also left- and right-differentiable with respect to $\alpha$, but we will not use this fact. We say that $b$ is {\sl a regular point of $\partial K$} ({\sl regular} for short) if there is a unique supporting line to $K$ at $b$ ({\it i.e.}, $\vec b_l'=\vec b_r'$); otherwise we call $b$ {\sl a corner point of $\partial K$} (a {\sl corner} for short).
If $b$ is regular, then $\beta=(\widehat{\vec b\,',\vec u})\in\,]0,\pi[$ denotes the angle between the tangent to $\partial K$ at $b$ and $\Delta$. If $b$ is a corner, then $\beta_l=(\widehat{\vec b_l',\vec u})$, resp. $\beta_r=(\widehat{\vec b_r',\vec u})$, is the angle between the left-tangent, resp. right-tangent, to $\partial K$ at $b$ and $\Delta$. Similarly, let $\gamma=(\widehat{\vec u,\vec c\,'})\in\,]0,\pi[$ (if $c$ is regular), resp. $\gamma_l,\,\gamma_r$ (if $c$ is a corner), be the angle between $\Delta$ and the tangent, resp. left-tangent, right-tangent, to $\partial K$ at $c$, see Figure~\ref{f1}. For a fixed $\alpha$, the values of $\theta$ such that $b(\alpha,\theta)$ or $c(\alpha,\theta)$ (or both) is a corner will be called {\sl singular}; those for which both $b$ and $c$ are regular will be called {\sl regular}. Observe that we always have $\beta_r\leq\beta_l$ and $\gamma_l\leq\gamma_r$, with equality if and only if $b$, resp. $c$, is regular. Also observe the following fact.
\begin{equation}\label{b}
b\mbox{ and }c\mbox{ admit parallel supporting lines if and only if }
\beta_r+\gamma_l\leq\pi\leq\beta_l+\gamma_r.
\end{equation}
Finally, observe that the angle between $\vec b\,'$ (resp. $\vec b_l'$, $\vec b_r'$) and the axis of abscissae is equal to $\theta-\beta(\alpha,\theta)$ (resp. $\theta-\beta_l(\alpha,\theta)$, $\theta-\beta_r(\alpha,\theta)$). Since these angles are increasing functions of $\theta$ that interlace, we have the following statement.
$$
\mbox{If }\theta<\theta'\mbox{ then }\theta-\beta_l(\alpha,\theta)\leq\theta-\beta_r(\alpha,\theta)
\leq\theta'-\beta_l(\alpha,\theta')\leq\theta'-\beta_r(\alpha,\theta').
$$
It follows that
\begin{equation}\label{t}
\lim_{\theta\to\theta_0^+}\beta_l(\alpha,\theta)=\lim_{\theta\to\theta_0^+}\beta_r(\alpha,\theta)=
\beta_r(\alpha,\theta_0)\leq\beta_l(\alpha,\theta_0)
=\lim_{\theta\to\theta_0^-}\beta_l(\alpha,\theta)=\lim_{\theta\to\theta_0^-}\beta_r(\alpha,\theta),
\end{equation}
and similarly
\begin{equation}\label{3b}
\lim_{\theta\to\theta_0^-}\gamma_l(\alpha,\theta)=\lim_{\theta\to\theta_0^-}\gamma_r(\alpha,\theta)
=\gamma_l(\alpha,\theta_0)\leq\gamma_r(\alpha,\theta_0)
=\lim_{\theta\to\theta_0^+}\gamma_l(\alpha,\theta)=\lim_{\theta\to\theta_0^+}\gamma_r(\alpha,\theta).
\end{equation}
In the same way we have
\begin{equation}\label{g}
\begin{array}{l}
\mbox{If }\alpha<\alpha'\mbox{ then }\beta_r(\alpha,\theta)\leq\beta_l(\alpha,\theta)
\leq\beta_r(\alpha',\theta)\leq\beta_l(\alpha',\theta)\\[2pt]
\mbox{and }\gamma_l(\alpha,\theta)\leq\gamma_r(\alpha,\theta)
\leq\gamma_l(\alpha',\theta)\leq\gamma_r(\alpha',\theta).
\end{array}
\end{equation}
As a consequence,
$$
\lim_{\alpha\to\alpha_0^-}\beta_l(\alpha,\theta)=\lim_{\alpha\to\alpha_0^-}\beta_r(\alpha,\theta)=
\beta_r(\alpha_0,\theta)\leq\beta_l(\alpha_0,\theta)
=\lim_{\alpha\to\alpha_0^+}\beta_l(\alpha,\theta)=\lim_{\alpha\to\alpha_0^+}\beta_r(\alpha,\theta),
$$
$$
\lim_{\alpha\to\alpha_0^-}\gamma_l(\alpha,\theta)=\lim_{\alpha\to\alpha_0^-}\gamma_r(\alpha,\theta)
=\gamma_l(\alpha_0,\theta)\leq\gamma_r(\alpha_0,\theta)
=\lim_{\alpha\to\alpha_0^+}\gamma_l(\alpha,\theta)=\lim_{\alpha\to\alpha_0^+}\gamma_r(\alpha,\theta).
$$
All these elements $b,c,m,h,\beta,\gamma$ can be considered as functions of both $\alpha$ and $\theta$. Nevertheless, as already said, all derivatives are taken with respect to $\theta$. Let $v=v(\alpha,\theta)$ be the scalar product $v=\langle\vec m',\vec u\rangle\in\mathbb{R}$. If $v$ has a discontinuity, then $v_l$ and $v_r$ denote its corresponding left and right limits. As we will see, $\vec m'$ is always collinear to $\vec u$, hence $v$ is the ``signed norm'' of $\vec m'$. We will also see that $v$ has discontinuities only if $b$ or $c$ (or both) is a corner. Since we chose $\theta$ as parameter, we will also see that $v$ is the signed radius of curvature of the curve $m$, but we prefer to refer to it as the {\sl velocity} of the current point $m$ of the envelope. The symmetry~\eqref{d} gives $m(\alpha,\theta+\pi)=m(1-\alpha,\theta)$, hence $\vec m'(\alpha,\theta+\pi)=\vec m'(1-\alpha,\theta)$.
Since $\vec u(\theta+\pi)=-\vec u(\theta)$, we obtain
\begin{equation}\label{w}
v_l(\alpha,\theta+\pi)=-v_l(1-\alpha,\theta)\;\mbox{ and }\;v_r(\alpha,\theta+\pi)=-v_r(1-\alpha,\theta).
\end{equation}
Our last notation is $V$ for the segment with endpoints $v_l$ and $v_r$; {\sl i.e.}, $V=[v_l,v_r]$ if $v_l\leq v_r$, and $V=[v_r,v_l]$ otherwise. Formulae~\eqref{w} yield
\begin{equation}\label{W}
V(\alpha,\theta+\pi)=-V(1-\alpha,\theta).
\end{equation}

\section{A ``digressing tour''}\label{2b.}

Before carrying on with $\alpha$-sections, we digress for a moment into a more general framework. The notions and results of this section are elementary and probably already known, but we did not find any reference in the literature. They can be considered as exercises in a graduate course on planar curves.

Recall that a {\sl ruled function} (or {\sl regulated function}) $R:{\mathbb S}^1\to\mathbb{R}$ is a uniform limit of piecewise constant functions. Equivalently, $R$ admits a left- and a right-limit, denoted below by $R_l$ and $R_r$, at each point of ${\mathbb S}^1$; see, {\sl e.g.},~\cite{di}. Recall also our notation $\vec u(\theta)=(\cos\theta,\sin\theta)$.

\begin{definition}\label{d1}
{\noindent\rm(a)} \ A {\sl tour} is a planar curve parametrized by its tangent. More precisely, we call $m:{\mathbb S}^1\to\mathbb{R}^2$ a {\sl tour} if $m$ is continuous, has a left- and a right-derivative at each point of ${\mathbb S}^1$, and if there exists a ruled function $R:{\mathbb S}^1\to\mathbb{R}$ such that $\vec m'_l(\theta)=R_l(\theta)\vec u(\theta)$ and $\vec m'_r(\theta)=R_r(\theta)\vec u(\theta)$.

{\noindent\rm(b)} \ The {\sl core} $K=K(m)$ of a tour $m$ is the intersection of all left half-planes delimited by the tangents to $m$ oriented by $\vec u$, {\sl i.e.}, $K=\bigcap_{\theta\in{\mathbb S}^1}D^+(\theta)$, where $D(\theta)=m(\theta)+\mathbb{R}\vec u(\theta)$.
\end{definition}

Tours are not necessarily simple curves. The case $m(\theta)=m(\theta+\pi)$ gives rise to a {\sl double half-tour}. This is the case, {\sl e.g.}, for the envelope of the half-sections of a planar convex body.
\begin{figure}[htb]
\begin{center}
\raisebox{0cm}{\epsfysize=5cm\epsfbox{astroid.eps}}
\hspace*{5cm}
\epsfysize=5cm\epsfbox{deltoid.eps}
\hspace*{17mm}
\end{center}
\caption{{\small Some (double half-)tours: left, the astroid; right, the deltoid. The astroid cannot be the envelope of the $\alpha$-sections of a planar convex body; the deltoid is probably such an envelope of half-sections, although we do not know how to prove it.}}
\label{f8}
\end{figure}

\begin{proposition}\label{p1.1}
Let $m$ be a tour, with associated ruled function $R$, and let $m^*$ denote its image in the plane: $m^*=m({\mathbb S}^1)$.

{\noindent\rm(a)} \ For each $\theta\in{\mathbb S}^1$, $R_l(\theta)$, resp.\ $R_r(\theta)$, is the signed left-, resp.\ right-radius of curvature of $m^*$ at $m(\theta)$.

{\noindent\rm(b)} \ If $R_l$ and $R_r$ do not vanish on ${\mathbb S}^1$, then $m^*$ is a $\mathcal{C}^1$ submanifold of the plane.

{\noindent\rm(c)} \ Conversely, if there exist $\theta_1,\theta_2\in{\mathbb S}^1$ with $\theta_1\leq\theta_2\leq\theta_1+\pi$ such that the product $R_r(\theta_1)R_l(\theta_2)$ is negative, then there exists $\theta_3\in[\theta_1,\theta_2]$ such that $m^*$ is not of class $\mathcal{C}^1$ at $m(\theta_3)$.

{\noindent\rm(d)} \ The core $K$ of $m$ is either empty, or a point, or a strictly convex body.

{\noindent\rm(e)} \ The boundary of the core, $\partial K$, is included in $m^*$. Moreover, $\partial K$ and $m^*$ coincide if and only if the functions $R_l$ and $R_r$ are nonnegative on ${\mathbb S}^1$.
\end{proposition}

{\noindent\sl Remarks.} \\
1. In the context of convex floating bodies, statement (d) above is already known, also in arbitrary dimension; see, {\sl e.g.}, Prop.~1~(iv) and (v) in \cite{w} or Theorem~3 in \cite{mr}. \\
2. In the case where $R$ vanishes without changing sign, $m^*$ may, or may not, be of class $\mathcal{C}^1$.

\noindent{\sl Proof.}\ 
{\noindent\rm(a)} \ Let $s$ denote the curvilinear abscissa on $m$ from some starting point, say $m(0)$. We have $s(\theta)=\displaystyle\int_0^\theta R_l(\tau)d\tau=\displaystyle\int_0^\theta R_r(\tau)d\tau$. Locally, if $R_l(\theta_0)$ and $R_r(\theta_0)$ are nonzero, then the tangent of $m$ at $m(\theta_0)$ is $\vec u(\theta_0)$ and we have
$$
\Big(\frac{d\vec u}{ds}\Big)_{l/r}(\theta_0)
=\frac{d\vec u}{d\theta}\;\Big(\frac{d\theta}{ds}\Big)_{l/r}(\theta_0)
=\frac1{R_{l/r}(\theta_0)}\,\vec u\,'(\theta_0).
$$
{\noindent\rm(b)} \ If $R_l$ and $R_r$ do not vanish, then $s$ is a homeomorphism (with inverse homeomorphism denoted $\theta$ for convenience) and $m\big(\theta(s)\big)=m(0)+\displaystyle\int_0^s\vec u\big(\theta(t)\big)dt$, with $\vec u$ and $\theta$ continuous; {\sl i.e.,} $m\circ\theta$ is of class $\mathcal{C}^1$.

{\noindent\rm(c)} \ We may assume, without loss of generality, that $R_r(\theta_1)>0$ and $R_l(\theta_2)<0$. If $\theta_1=\theta_2$, then $m(\theta_1)$ is a cusp ({\sl i.e.}, a point with one half-tangent), hence $m^*$ is not $\mathcal{C}^1$ at $m(\theta_1)$. In the sequel we assume $\theta_1<\theta_2$. Consider $E=\{\theta\in[\theta_1,\theta_2]\ ;\ R_r(\theta)>0\}$ and let $\theta_3=\sup E$. Since $0>R_l(\theta_2)=\displaystyle\lim_{\theta\to\theta_2^-}R_l(\theta)=\displaystyle\lim_{\theta\to\theta_2^-}R_r(\theta)$, we have $\theta\notin E$ for $\theta<\theta_2$, $\theta$ close enough to $\theta_2$. In the same manner, we have $\theta\in E$ if $\theta>\theta_1$, $\theta$ close enough to $\theta_1$. Therefore, we have $\theta_1<\theta_3<\theta_2$. Now two cases may occur. If there exists $\theta_4>\theta_3$ such that $R$ vanishes on the whole interval $\,]\theta_3,\theta_4[\,$, then $m$ is constant on $[\theta_3,\theta_4]$; we assume $\theta_4\leq\theta_2$ maximal with this property.
By contradiction, if $m^*$ is $\mathcal{C}^1$ at the point $m(\theta_3)=m(\theta_4)$, then necessarily we have $\theta_4=\theta_3+\pi$, a contradiction with $\theta_1<\theta_3\leq\theta_4\leq\theta_1+\pi$. In the other case, for all $\delta>0$ there exists $\theta\in\,]\theta_3,\theta_3+\delta[\,$ such that $R_r(\theta)<0$, and we obtain that $m(\theta_3)$ is a cusp.

Before the proofs of (d) and (e), we first establish two lemmas.

\begin{lemma}\label{l1}
The interior of the core $K$ of $m$ coincides with the intersection of all open half-planes ${\rm Int}\,\big(D^+(\theta)\big)$, $\theta\in{\mathbb S}^1$:
\begin{equation}\label{j}
{\rm Int}\,(K)=\bigcap_{\theta\in{\mathbb S}^1}{\rm Int}\,\big(D^+(\theta)\big).
\end{equation}
Furthermore, we have $\partial K\subset\bigcup_{\theta\in{\mathbb S}^1}D(\theta)$.
\end{lemma}

\noindent{\sl Proof.}\ The inclusion $\subseteq$ in~\eqref{j} is evident: for each $\theta$ we have $K\subset D^+(\theta)$, hence ${\rm Int}\,(K)\subset{\rm Int}\,\big(D^+(\theta)\big)$. Conversely, given $x\in K$, the map $\theta\mapsto\mbox{\rm dist}\big(x,D(\theta)\big)$ is continuous. Since ${\mathbb S}^1$ is compact, if $x$ is in the interior of $D^+(\theta)$ for all $\theta\in{\mathbb S}^1$, then this map has a minimum $\rho>0$, and the disc of center $x$ and radius $\rho$ is included in $K$. This proves~\eqref{j}. If $x\in\partial K$, then $x$ is in every closed half-plane $D^+(\theta)$ but, by~\eqref{j}, not in every open one, hence $x$ has to be on (at least) one of the lines $D(\theta)$.
\framebox[2mm]{\ }\medskip

\begin{lemma}\label{l2}
If $\theta_1\in{\mathbb S}^1$ and $z$ are such that $z\in\partial K\cap D(\theta_1)$, then $z=m(\theta_1)$.
\end{lemma}

\noindent{\sl Proof.}\ For small $\varepsilon\neq0$, positive or negative, we have
$$
m(\theta_1+\varepsilon)-m(\theta_1)=\displaystyle\int_{\theta_1}^{\theta_1+\varepsilon}R(\theta)\vec u(\theta)d\theta.
$$
By boundedness of $R_l$ and $R_r$, and by continuity of $\vec u$, we deduce that there exists $r$ equal to $R_l(\theta_1)$ or $R_r(\theta_1)$ such that
\begin{equation}\label{n}
m(\theta_1+\varepsilon)-m(\theta_1)=r\vec u(\theta_1)\varepsilon+o(\varepsilon).
\end{equation}
It follows that $D(\theta_1)$ and $D(\theta_1+\varepsilon)$ cross at a distance ${\mathcal O}(\varepsilon)$ from $m(\theta_1)$, see Figure~\ref{f3} below. If $z$ were different from $m(\theta_1)$ then, for $\varepsilon$ small, either negative or positive depending on the relative positions of $z$ and $m(\theta_1)$, we would have $z\in{\rm Int}\,\big(D^-(\theta_1+\varepsilon)\big)$, hence $z\notin K$, a contradiction.
\framebox[2mm]{\ }\medskip

{\noindent\sl We now return to the proof of Proposition~\ref{p1.1}.}

{\noindent\rm(d)} \ By contradiction, assume that $\partial K$ contains some segment $[x,y]$ with $x\neq y$, and take $z\in\,]x,y[\,$ arbitrarily. By Lemma~\ref{l1}, there exists $\theta_1\in{\mathbb S}^1$ such that $z\in D(\theta_1)$.
Since both $x$ and $y$ belong to $K\subset D^+(\theta_1)$, the line $D(\theta_1)$ contains both $x$ and $y$; {\sl i.e.}, $\theta_1$ is the direction of $\pm\overrightarrow{xy}$. By Lemma~\ref{l2}, we deduce that $z=m(\theta_1)$. Hence we have proved that {\sl any} $z\in\,]x,y[\,$ coincides with $m(\theta_1)$, where $\theta_1$ is one of the two directions $\pm\overrightarrow{xy}$; this is impossible, since $]x,y[$ is infinite while there are at most two such points.

{\noindent\rm(e)} \ The first assertion follows directly from Lemmas~\ref{l1} and~\ref{l2}. For the second one, we first proceed by contradiction and assume that $R_l(\theta_1)<0$ or $R_r(\theta_1)<0$ for some $\theta_1\in{\mathbb S}^1$, say $R_r(\theta_1)<0$. Then by~\eqref{n}, for $\varepsilon>0$ small enough, we have $m(\theta_1+\varepsilon)=m(\theta_1)+\varepsilon R_r(\theta_1)\vec u(\theta_1)+o(\varepsilon)$, hence $m(\theta_1)\in{\rm Int}\,\big(D^-(\theta_1+\varepsilon)\big)$, see Figure~\ref{f3}. It follows that $m(\theta_1)\notin K$.
\begin{figure}[htb]
\begin{center}
\begin{picture}(350,50)
\put(45,25){\vector(1,0){225}}
\put(150,25){\circle*{3}}
\put(226,25){\circle*{3}}
\put(59,6){\vector(3,1){140}}
\put(100,19.50){\circle*{3}}
\put(145,10){$m(\theta_1)$}
\put(82,3){$m(\theta_1+\varepsilon)$}
\put(205,45){$D(\theta_1+\varepsilon)$}
\put(272,16){$D(\theta_1)$}
\put(225,12){$z$}
\end{picture}
\end{center}
\caption{{\small Proof of Lemma~\ref{l2} and Proposition~\ref{p1.1}~(d)}}
\label{f3}
\end{figure}
Conversely, if $R_l$ and $R_r$ are nonnegative, take $\theta_0\in{\mathbb S}^1$ arbitrarily. Then, for all $\theta\in[\theta_0,\theta_0+\pi]$, we have
$$
m(\theta)=m(\theta_0)+\displaystyle\int_{\theta_0}^\theta\vec m'(\tau)d\tau
=m(\theta_0)+\displaystyle\int_{\theta_0}^\theta R_l(\tau)\vec u(\tau)d\tau.
$$
Therefore we obtain
$$
\langle\overrightarrow{m(\theta_0)m(\theta)},\vec u\,'(\theta_0)\rangle
=\displaystyle\int_{\theta_0}^\theta R_l(\tau)\langle\vec u(\tau),\vec u\,'(\theta_0)\rangle d\tau\geq0,
$$
since $\langle\vec u(\tau),\vec u\,'(\theta_0)\rangle=\sin(\tau-\theta_0)\geq0$.
In the same manner, we have, for all $\theta\in[\theta_0-\pi,\theta_0]$, $\langle\overrightarrow{m(\theta_0)m(\theta)},\vec u\,'(\theta_0)\rangle\geq0$; hence the whole curve $m^*$ lies in the half-plane $D^+(\theta_0)$. This holds for all $\theta_0\in{\mathbb S}^1$, hence $m^*\subset K$. Finally, for each $\theta\in{\mathbb S}^1$, since $m(\theta)\in D(\theta)$ and $K\subset D^+(\theta)$, $m(\theta)$ cannot be in ${\rm Int}\,(K)$, hence $m(\theta)\in\partial K$.
\framebox[2mm]{\ }\medskip

\section{Dependence with respect to $\theta$}\label{3.}

We begin this section with some known results; we give the proofs for the sake of completeness. See Section~\ref{2.}, especially Figure~\ref{f1}, for the notation. We recall that $m$ is the midpoint of the chord $bc$ and that $v_{l|r}=\langle\vec m_{l|r}',\vec u\rangle$.

\begin{proposition}\label{p1}
{\rm(a)} \ The curve $m$ is the envelope of the family $\{\Delta(\alpha,\theta),\ \theta\in{\mathbb S}^1\}$; {\sl i.e.}, the vector products $\vec m'_l\wedge\vec u$ and $\vec m'_r\wedge\vec u$ vanish identically.
\smallskip

{\noindent\rm(b)} \ If $b$ and $c$ are regular points of $\partial K$, then
\begin{equation}\label{v}
\vec m'=v\vec u~\mbox{ and }~v=\tfrac h2({\rm cotan}\,\beta+{\rm cotan}\,\gamma).
\end{equation}
In the case where $b$ or $c$ (or both) is a corner of $\partial K$, we have $\vec m'_l=v_l\vec u$, $\vec m'_r=v_r\vec u$, $v_l=\tfrac h2({\rm cotan}\,\beta_l+{\rm cotan}\,\gamma_l)$, and $v_r=\tfrac h2({\rm cotan}\,\beta_r+{\rm cotan}\,\gamma_r)$.
\smallskip

{\noindent\rm(c)} \ If $b$ and $c$ are regular, then $v$ is the signed radius of curvature of the curve $m$. If $b$ or $c$ (or both) is a corner, then $v_l$, resp.\ $v_r$, is the signed radius of curvature on the left, resp.\ on the right, of $m$.
\end{proposition}

Statement (a) is attributed to M.~M.~Day~\cite{d} by S.~Tabachnikov. Formula~\eqref{v} appears in a similar form in~\cite{gk}.
\smallskip

\noindent{\sl Proof.}\ We fix $\alpha\in\,]0,1[$ and $\theta\in{\mathbb S}^1$; we will not always indicate the dependence on $\alpha$ and $\theta$ of the functions $b,c,m,\vec u$, etc. Let $\varepsilon>0$, set $M(\varepsilon)=\Delta(\alpha,\theta)\cap\Delta(\alpha,\theta+\varepsilon)$ (see Figure~\ref{f2}), and consider the curvilinear triangles
$$
T_b(\varepsilon)=\Delta(\alpha,\theta)^-\cap\Delta(\alpha,\theta+\varepsilon)^+\cap K\;,\quad
T_c(\varepsilon)=\Delta(\alpha,\theta)^+\cap\Delta(\alpha,\theta+\varepsilon)^-\cap K.
$$
\begin{figure}[htb]
\begin{center}
\raisebox{0cm}{\epsfysize=5.5cm\epsfbox{geogebrafigure4.eps}}
\end{center}
\caption{{\small Proof of Proposition~\ref{p1}}}
\label{f2}
\end{figure}
\bigskip
We have $|\Delta(\alpha,\theta)^-\cap K|=|\Delta(\alpha,\theta+\varepsilon)^-\cap K|=\alpha|K|$, hence
$$
0=|T_b(\varepsilon)|-|T_c(\varepsilon)|=
\tfrac12\varepsilon\big(\|b-M(\varepsilon)\|^2-\|c-M(\varepsilon)\|^2\big)+o(\varepsilon).
$$
As a consequence, we obtain $\displaystyle\lim_{\varepsilon\to0^+}M(\varepsilon)=m$. The case $\varepsilon<0$ is similar.

Now we prove
\begin{equation}\label{B}
\vec b_r'=h\,{\rm cotan}\,\beta_r\,\vec u-h\vec u\,'.
\end{equation}
For $\varepsilon>0$, let $B(\varepsilon)$ denote the intersection of the right-tangent to $\partial K$ at $b$ with the line $\Delta(\alpha,\theta+\varepsilon)$, see Figure~\ref{f2}. By definition of the right-tangent, we have $B(\varepsilon)=b(\theta+\varepsilon)+o(\varepsilon)$.
Set $H(\varepsilon)=\|b-M(\varepsilon)\|$, $H_1(\varepsilon)=\langle B(\varepsilon)-b,\vec u\rangle$, $H_2(\varepsilon)=\langle M(\varepsilon)-B(\varepsilon),\vec u\rangle$, and $H_3(\varepsilon)=\langle b-B(\varepsilon),\vec u\,'\rangle$. We obtain the following linear system in $H_1,H_2,H_3$:
$$
H(\varepsilon)=H_1(\varepsilon)+H_2(\varepsilon),\quad
H_3(\varepsilon)=H_1(\varepsilon)\tan\beta_r=H_2(\varepsilon)\tan\varepsilon,
$$
giving
$$
B(\varepsilon)=b+\frac{\tan\varepsilon}{\tan\beta_r+\tan\varepsilon}H(\varepsilon)\vec u
-\frac{\tan\beta_r\tan\varepsilon}{\tan\beta_r+\tan\varepsilon}H(\varepsilon)\vec u\,'.
$$
As a consequence, we obtain $B(\varepsilon)=b+\varepsilon\,{\rm cotan}\,\beta_r\,H(\varepsilon)\vec u-\varepsilon H(\varepsilon)\vec u\,'+o(\varepsilon)$. Since $H(\varepsilon)=h+o(1)$, it follows that $b(\theta+\varepsilon)=B(\varepsilon)+o(\varepsilon)=b+\varepsilon\,{\rm cotan}\,\beta_r\,h\vec u-\varepsilon h\vec u\,'+o(\varepsilon)$, yielding~\eqref{B}. Similarly, we have $\vec b_l'=h\,{\rm cotan}\,\beta_l\,\vec u-h\vec u\,'$, $\vec c_l'=h\,{\rm cotan}\,\gamma_l\,\vec u+h\vec u\,'$, and $\vec c_r'=h\,{\rm cotan}\,\gamma_r\,\vec u+h\vec u\,'$. Statements (a) and (b) now follow from $m=\frac12(b+c)$. Since the expression of $v$ given by~\eqref{v} is a ruled function, statement (c) follows directly from Proposition~\ref{p1.1}~(a).
\framebox[2mm]{\ }\medskip

\begin{corollary}\label{c3}
For all $\theta_0\in{\mathbb S}^1$ and all $\alpha\in\,]0,1[\,$, we have
\begin{equation}\label{u}
\lim_{\theta\to\theta_0^-}v_l(\theta)=\lim_{\theta\to\theta_0^-}v_r(\theta)=v_l(\theta_0)
\ \mbox{ and }\
\lim_{\theta\to\theta_0^+}v_l(\theta)=\lim_{\theta\to\theta_0^+}v_r(\theta)=v_r(\theta_0).
\end{equation}
In other words, in the sense of the Pompeiu--Hausdorff distance, we have
\begin{equation}\label{V}
\lim_{\theta\to\theta_0^-}V(\theta)=\{v_l(\theta_0)\}
\ \mbox{ and }\
\lim_{\theta\to\theta_0^+}V(\theta)=\{v_r(\theta_0)\}.
\end{equation}
Therefore, the function $V$ is upper semi-continuous (for the inclusion) with respect to $\theta$.
\end{corollary}

\noindent{\sl Proof.}\ Immediate, using~\eqref{t}, \eqref{3b}, \eqref{v}, and the continuity of the cotangent function.
\framebox[2mm]{\ }\medskip

The next statement will be used both in Sections~\ref{4.} and~\ref{5.}.
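As a concrete check of Proposition~\ref{p1}~(b), consider again the unit disc: for the $\alpha$-section at distance $d$ from the centre one has $h=\sqrt{1-d^2}$ and, by symmetry, $\beta=\gamma$ with ${\rm cotan}\,\beta=d/h$, so formula~\eqref{v} gives $v=d$; indeed the envelope is $m(\theta)=-d\,\vec u\,'(\theta)$, the circle of radius $d$ traversed with constant velocity $d$. The following Python sketch (a numerical illustration with function names of our own choosing) compares the formula with a finite-difference derivative of the envelope:

```python
import math

def u(theta):
    # u(theta) = (cos theta, sin theta)
    return (math.cos(theta), math.sin(theta))

def m_point(d, theta):
    # Midpoint of the alpha-section of the unit disc in direction theta,
    # where d is the distance from the centre to the chord: m = -d*u'(theta).
    return (d * math.sin(theta), -d * math.cos(theta))

def velocity_fd(d, theta, eps=1e-6):
    # Central finite difference for v = <m'(theta), u(theta)>.
    (x1, y1), (x2, y2) = m_point(d, theta - eps), m_point(d, theta + eps)
    ux, uy = u(theta)
    return ((x2 - x1) * ux + (y2 - y1) * uy) / (2.0 * eps)

def velocity_formula(d):
    # v = (h/2)(cotan(beta) + cotan(gamma)); for the disc beta = gamma
    # and cotan(beta) = d/h with h = sqrt(1 - d*d), hence v = d.
    h = math.sqrt(1.0 - d * d)
    return (h / 2.0) * (d / h + d / h)
```

Both computations agree to the accuracy of the finite difference, and the velocity is constant in $\theta$, as rotational symmetry dictates.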
\begin{corollary}\label{c4}
For $\theta_1,\theta_2\in{\mathbb S}^1$ with $\theta_1\leq\theta_2\leq\theta_1+\pi$, we have
\begin{equation}\label{c}
{\rm conv}\big(V(\alpha,\theta_1)\cup V(\alpha,\theta_2)\big)
\subseteq\bigcup_{\theta_1\leq\theta\leq\theta_2}V(\alpha,\theta).
\end{equation}
\end{corollary}

\noindent{\sl Proof.}\ Let $v$ be in the above convex hull. If $v\in V(\alpha,\theta_1)\cup V(\alpha,\theta_2)$, there is nothing to prove. Otherwise assume, without loss of generality, that $\max V(\alpha,\theta_1)<v<\min V(\alpha,\theta_2)$, and set $\theta_0=\sup\{\theta\geq\theta_1\ ;\ \max V(\alpha,\theta)<v\}$. We have $\theta_1\leq\theta_0<\theta_2$; for all $\theta\in\,]\theta_0,\theta_2[$, $\max V(\alpha,\theta)\geq v$; and there exists a convergent sequence $\{\theta_n\}_{n\in\mathbb{N}}$ tending to $\theta_0$ with $\max V(\alpha,\theta_n)<v$. By~\eqref{u}, we obtain $\max V(\alpha,\theta_0)=v$, hence $v\in V(\alpha,\theta_0)$.
\framebox[2mm]{\ }\medskip

\section{The forwards, the backwards, and the zero sets}\label{4.}

In this section we fix $\alpha\in\,]0,1[\,$. Recall that $\theta$ is called regular if $\partial K$ is $\mathcal{C}^1$ at $b(\alpha,\theta)$ and $c(\alpha,\theta)$, and singular otherwise.

\begin{definition}\label{d2}
(a) \ Let $F(\alpha)$ be the set of all $\theta\in{\mathbb S}^1$ such that either $v(\alpha,\theta)>0$ (if $\theta$ is regular), or $V(\alpha,\theta)\,\cap\,]0,+\infty[\,\neq\emptyset$ (if $\theta$ is singular); we call it the {\sl forwards set}.
\smallskip

\noindent(b) \ Similarly, let $B(\alpha)$ be the set of all $\theta\in{\mathbb S}^1$ such that either $v(\alpha,\theta)<0$ (if $\theta$ is regular), or $V(\alpha,\theta)\,\cap\,]-\infty,0[\,\neq\emptyset$ (if $\theta$ is singular); we call it the {\sl backwards set}.
\smallskip

\noindent(c) \ Finally, let $Z(\alpha)$ denote the set of all $\theta\in{\mathbb S}^1$ such that either $v(\alpha,\theta)=0$ (if $\theta$ is regular), or $V(\alpha,\theta)$ contains $0$ (if $\theta$ is singular); we call it the {\sl zero set}.
\end{definition}

By the symmetry~\eqref{W}, we have (with the notation $[\theta_1,\theta_2]=\{\theta\in{\mathbb S}^1\ ;\ \theta_1\leq\theta\leq\theta_2\}$ for $\theta_1,\theta_2\in{\mathbb S}^1$)
\begin{equation}\label{z}
\theta\in Z(\alpha)\Leftrightarrow\theta+\pi\in Z(1-\alpha)\ \mbox{ and }\
\theta\in B(\alpha)\Leftrightarrow\theta+\pi\in F(1-\alpha).
\end{equation}
By Proposition~\ref{p1}~(b), we have
$$
\theta\in Z(\alpha)\Leftrightarrow\pi\in[\beta_l+\gamma_l,\beta_r+\gamma_r]\ (\mbox{or }\pi\in[\beta_r+\gamma_r,\beta_l+\gamma_l]).
$$
Since $\beta_r\leq\beta_l$ and $\gamma_l\leq\gamma_r$, by~\eqref{b} we deduce that, if $\theta\in Z(\alpha)$, then $b=b(\alpha,\theta)$ and $c=c(\alpha,\theta)$ admit parallel supporting lines of $K$. If one of the points $b,c$ or both is regular, the converse is also true; however, it can occur that $\beta_r+\gamma_l\leq\pi\leq\beta_l+\gamma_r$ but $\pi$ belongs neither to $[\beta_l+\gamma_l,\beta_r+\gamma_r]$ nor to $[\beta_r+\gamma_r,\beta_l+\gamma_l]$, and then $V=[v_l,v_r]$ (or $[v_r,v_l]$) does not contain $0$.
In the case where $\partial K$ is $\mathcal{C}^1$, by Proposition~\ref{p1}, $\theta$ belongs to $B(\alpha)$ if and only if $\beta(\alpha,\theta)+\gamma(\alpha,\theta)>\pi$, {\sl i.e.}, if and only if there exists a triangle $T=abc$ with one edge equal to the chord $bc$ (where $b=b(\alpha,\theta)$, $c=c(\alpha,\theta)$), the other two edges, $ab$ and $ac$, not crossing the interior of $K$, and which contains an amount $1-\alpha$ of $K$: $|T\cap K|=(1-\alpha)|K|$. In the case where $\partial K$ is not $\mathcal{C}^1$, this latter condition is necessary but not always sufficient to have $\theta\in B(\alpha)$. However, we will see in Section~\ref{5.} that this condition implies $\theta\in B(\alpha')$ for all $\alpha'>\alpha$.

Let us now describe $Z(\alpha)$ and $B(\alpha)$, denoted simply $Z$ and $B$ here. These sets can be very complicated, even if $\partial K$ is $\mathcal{C}^1$. In Section~\ref{7.} we present a construction which, to any prescribed closed subset $C$ of ${\mathbb S}^1$, associates a $\mathcal{C}^1$ convex body $K$ such that $Z\big(\tfrac12\big)$ coincides with $C$, up to countably many isolated points. We give next a brief description in the simplest cases. If $\theta_0$ is an isolated point of $Z$ which does not belong to the closure of $B$, then $m(\theta_0)$ is a point of zero curvature of $m$, but $m$ is still $\mathcal{C}^1$ at $\theta_0$.
If $\theta_1<\theta_2<\theta_1+\pi$ and $[\theta_1,\theta_2]\subset Z$ is an isolated connected component of $Z$, then $m$ has a corner; {\sl i.e.,} it is not $\mathcal{C}^1$ but has two half-tangents: a left-tangent oriented either by $\vec u(\theta_1)$ if $]\theta_1-\delta,\theta_1[\,\subset F$ for small $\delta>0$, or by $\vec u(\theta_1+\pi)$ if $]\theta_1-\delta,\theta_1[\,\subset B$, and a right-tangent oriented either by $\vec u(\theta_2)$ or by $\vec u(\theta_2+\pi)$. Moreover, in this situation, $m(\alpha,\theta_1)$ is a local center of symmetry of $\partial K$: the arc of $\partial K$ in the sector $\De(\alpha,\theta_1)^+\cap\De(\alpha,\theta_2)^-$ is symmetric to that in $\De(\alpha,\theta_1)^-\cap\De(\alpha,\theta_2)^+$. If $\theta_0$ is an isolated point of $Z$ and is the endpoint of both a segment $]\theta_1,\theta_0[$ of $F$ and a segment $]\theta_0,\theta_2[$ of $B$, then $m$ has a cusp at $m(\theta_0)$. \propo{p4}{ Let $\alpha\in\,]0,1[$. \smallskip {\noindent\rm(a) \ } The set $Z(\alpha)$ is closed in ${\mathbb S}^1$. \smallskip {\noindent\rm(b) \ } If $\partial K$ is $\mathcal{C}^1$, then the sets $F(\alpha)$ and $B(\alpha)$ are open in ${\mathbb S}^1$, and $F(\alpha)$, $B(\alpha)$, and $Z(\alpha)$ form a partition of \ ${\mathbb S}^1$. \smallskip {\noindent\rm(c) \ } In the general case, the three sets $F(\alpha)\cap B(\alpha)$, $\partial F(\alpha)$, and $\partial B(\alpha)$ are subsets of $Z(\alpha)$.
} {\noindent\sl Remark.} If $\partial K$ is not $\mathcal{C}^1$ then, in general, there exists $\alpha\in\,]0,1[\,$ such that $B(\alpha)$ and $F(\alpha)$ are not open. Indeed, if $c$ is a corner of $\partial K$, with half-tangents of directions $\theta_1,\theta_2$, and $b\in\partial K$ is such that all supporting lines to $K$ at $b$ have directions in $\,]\theta_1+\pi,\theta_2+\pi[\,$, then the $\alpha$-section (for some $\alpha$) passing through $b$ and $c$ has a direction $\theta_0\in F(\alpha)\cap B(\alpha)$, but any $\theta<\theta_0$ close enough to $\theta_0$ does not belong to $B(\alpha)$, and any $\theta>\theta_0$ close enough to $\theta_0$ does not belong to $F(\alpha)$. \noindent{\sl Proof. } (b) \ If $\partial K$ is $\mathcal{C}^1$, then the functions $\beta$ and $\gamma$ are continuous with respect to $\theta$, hence so is $v$; thus $Z=v^{-1}(\{0\})$ is a closed subset of ${\mathbb S}^1$, and $F=v^{-1}(\,]0,+\infty[\,)$ and $B=v^{-1}(\,]-\infty,0[\,)$ are open. Furthermore, they do not intersect, and their union is ${\mathbb S}^1$. \noindent(a) \ Let $\{\theta_n\}_{n\in\mathbb{N}}$ be a sequence converging to $\theta_0$, with $\theta_n\in Z(\alpha)$, {\sl i.e.,} $0\in V(\alpha,\theta_n)$. Then there exists a subsequence $\{\theta_{n_k}\}_{k\in\mathbb{N}}$ such that $\theta_{n_k}$ tends either to $\theta_0^-$ or to $\theta_0^+$, say to $\theta_0^-$. By~\rf V we have $\displaystyle\lim_{k\to+\infty}V(\alpha,\theta_{n_k})=\big\{v_l(\alpha,\theta_0)\big\}$, hence $v_l(\alpha,\theta_0)=0$, and $\theta_0\in Z(\alpha)$.
\noindent(c) \ Let $\theta_0\in\partial F(\alpha)$; then there exist sequences $\{\theta_n\}_{n\in\mathbb{N}}$ in $F(\alpha)$ and $\{\theta'_n\}_{n\in\mathbb{N}}$ in ${\mathbb S}^1\setminus F(\alpha)$, both converging to $\theta_0$. This means that $V(\theta_n)\cap\,]0,+\infty[\,\neq\emptyset$ and $V(\theta'_n)\subset\,]-\infty,0]$ for every $n\in\mathbb{N}$. By~\rf c, for each $n$ there exists $\theta''_n\in[\theta_n,\theta'_n]$ (or $\theta''_n\in[\theta'_n,\theta_n]$) such that $0\in V(\theta''_n)$, {\sl i.e.,} $\theta''_n\in Z(\alpha)$. Since the sequence $\{\theta''_n\}_{n\in\mathbb{N}}$ tends to $\theta_0$ and $Z(\alpha)$ is closed, we obtain $\theta_0\in Z(\alpha)$. The proof of $\partial B(\alpha)\subseteq Z(\alpha)$ is similar, and the inclusion $F(\alpha)\cap B(\alpha)\subseteq Z(\alpha)$ is obvious. \framebox[2mm]{\ }\medskip \sec{5.}{Dependence with respect to $\alpha$} The functions $b$ and $c$ are left- and right-differentiable with respect to $\alpha$; one finds, {\sl e.g.}, $\veccc{\left(\frac{\partial b}{\partial\alpha}\right)_l} =\frac1{2h}(\vec u\,'+{\rm cotan}\beta_r\,\vec u)$. However, we will not use their differentiability in $\alpha$, but only their monotonicity. Since the function ${\rm cotan}$ is decreasing on $\,]0,\pi[\,$, by~\rf g we immediately obtain the following statement, whose proof is omitted. \propo{p1bis}{ The functions $v,v_l$ and $v_r$ are nonincreasing in $\alpha$. More precisely, we have \eq m{ \max V(\alpha',\theta)\leq\min V(\alpha,\theta)\mbox{ for all }\theta\in{\mathbb S}^1 \mbox{ and all }0<\alpha<\alpha'<1.
} } Recall the sets $F(\alpha)$, $B(\alpha)$, and $Z(\alpha)$ introduced in Definition~\reff{d2}. Let \begin{eqnarray*} I_F&=&\{\alpha\in\,]0,1[\,\ ;\ F(\alpha)\neq\emptyset\},\\ I_B&=&\{\alpha\in\,]0,1[\,\ ;\ B(\alpha)\neq\emptyset\},\\ I_Z&=&\{\alpha\in\,]0,1[\,\ ;\ Z(\alpha)\neq\emptyset\}. \end{eqnarray*} The symmetry~\rf z implies \eq i{ \alpha\in I_F\Leftrightarrow1-\alpha\in I_B\mbox{ and }\alpha\in I_Z\Leftrightarrow1-\alpha\in I_Z. } Set $\alpha_B=\inf I_B$ and $\alpha_Z=\inf I_Z$. (We do not consider $\inf I_F$, which is always equal to $0$; see below.) \theo{t2}{ {\noindent\rm(a)} \ We have $I_B=\,]\alpha_B,1[$ (and hence $I_F=\,]0,1-\alpha_B[$ by~\rf i). If $\partial K$ is $\mathcal{C}^1$, then $I_Z=[\alpha_Z,1-\alpha_Z]$; otherwise $I_Z$ is one of the intervals $[\alpha_Z,1-\alpha_Z]$ or $]\alpha_Z,1-\alpha_Z[$. {\noindent\rm(b)} \ We have $\alpha_Z\leq\alpha_B\leq\frac12$. Moreover, if $\alpha_Z\neq\alpha_B$, then $\partial K$ contains two parallel segments. {\noindent\rm(c)} \ We have the following equivalences: \smallskip {\rm i.} \ $\alpha_B=\frac12$ if and only if $K$ is symmetric. \smallskip {\rm ii.} \ $\alpha_Z=0$ if and only if $\partial K$ contains a segment whose endpoints admit two parallel supporting lines to $K$. \smallskip {\rm iii.} \ $\alpha_B=0$ if and only if $\partial K$ contains a segment whose endpoints admit two parallel supporting lines to $K$, one of which intersects $\partial K$ at only one point. } {\noindent\sl Remarks.} 1.
Notice the change of behaviour of $Z$ and $B$: in Proposition~\reff{p4}, $Z(\alpha)$ is always closed, whereas $B(\alpha)$ may fail to be open if $\partial K$ is not $\mathcal{C}^1$; here $I_B$ is always open, whereas $I_Z$ may fail to be closed if $\partial K$ is not $\mathcal{C}^1$. \noindent2. An example of a planar convex body such that $I_Z$ is open is the quadrilateral $OICJ$ in Figure~\reff{f7}, Section 8.9, for which one finds $I_Z=\,\big]\frac1{2c},1-\frac1{2c}\big[\,$. \noindent{\sl Proof. } \noindent(a) \ By Proposition~\reff{p1bis}, if $0<\alpha<\alpha'<1$ then $B(\alpha)\subseteq B(\alpha')$. It follows that $I_B=\,]\alpha_B,1[\,$ or $I_B=[\alpha_B,1[$. We now prove that $\alpha_B\notin I_B$. If $\alpha\in I_B$, then there exists $\theta\in{\mathbb S}^1$ such that, say, $v_l(\alpha,\theta)<0$ (the case $v_r(\alpha,\theta)<0$ is similar). Let $\alpha'<\alpha$ and $\theta'<\theta$ be such that $b(\alpha',\theta')=b(\alpha,\theta)$; then by~\rf u\footnote{In fact \rf u is stated for $\alpha$ fixed, but its proof uses only~\rf t and~\rf v --- which can easily be adapted to our situation --- and the continuity of the function ${\rm cotan}$.}, both $v_l(\alpha',\theta')$ and $v_r(\alpha',\theta')$ tend to $v_l(\alpha,\theta)$ as $\theta'\to\theta$ and $\alpha'\to\alpha$, hence are negative for $(\alpha',\theta')$ close enough to $(\alpha,\theta)$. This shows that $\alpha'\in I_B$. As a consequence, $I_B=\,]\alpha_B,1[\,$. We now prove that $I_Z$ is convex. Let $\alpha_1<\alpha_2\in I_Z$; {\sl i.e.}, there exist $\theta_1$ and $\theta_2$ in ${\mathbb S}^1$ such that $0\in V(\alpha_1,\theta_1)\cap V(\alpha_2,\theta_2)$.
We may assume, without loss of generality, that $\theta_1\leq\theta_2\leq\theta_1+\pi$. Let $\alpha\in\,]\alpha_1,\alpha_2[$. By~\rf m, we have $V(\alpha,\theta_1)\cap\mathbb{R}^-\neq\emptyset$ and $V(\alpha,\theta_2)\cap\mathbb{R}^+\neq\emptyset$; hence, by~\rf c, we have $0\in V(\alpha,\theta)$ for some $\theta\in[\theta_1,\theta_2]$ (in the case $\theta_2=\theta_1+\pi$, both intervals $[\theta_1,\theta_2]$ and $[\theta_2,\theta_1+2\pi]$ work). This shows that $\alpha\in I_Z$. \noindent(b) \ If $\alpha$ belongs neither to $I_F$ nor to $I_B$, then we necessarily have $V(\alpha,\theta)=\{0\}$ for all $\theta\in{\mathbb S}^1$, hence the function $m(\alpha,\cdot)$ is constant. This implies $\alpha=\frac12$ and $K$ symmetric. As a consequence, we have $\alpha_B\leq\frac12$. We now prove that $\frac12\in I_Z$, yielding $I_Z\neq\emptyset$. Let $\theta_0\in{\mathbb S}^1$ be arbitrary. By~\rf W, $0\in{\rm conv}\big(V\big(\frac12,\theta_0\big)\cup V\big(\frac12,\theta_0+\pi\big)\big)$, hence by~\rf c we have $0\in V\big(\frac12,\theta\big)$ for some $\theta\in[\theta_0,\theta_0+\pi]$, so $Z\big(\tfrac12\big)\neq\emptyset$. We now prove that $\alpha_Z\leq\alpha_B$. If $\alpha_B=\frac12$, we are done; otherwise, let $\alpha\in I_B$, $\alpha\leq\frac12$. Then $v_l(\alpha,\theta)$ or $v_r(\alpha,\theta)$ is negative for some $\theta\in{\mathbb S}^1$, say $v_l(\alpha,\theta)<0$.
From~\rf w and~\rf m, it follows that $v_l(\alpha,\theta+\pi)=-v_l(1-\alpha,\theta)\geq-v_l(\alpha,\theta)>0$, hence by continuity there exists $\theta_1\in\,]\theta,\theta+\pi[$ such that $v_l(\alpha,\theta_1)=0$, so $\theta_1\in Z(\alpha)$ and $\alpha\in I_Z$. If moreover $\alpha_Z<\alpha_B$, then consider $\alpha_Z<\alpha<\alpha'<\alpha_B\leq\frac12$, so that $\alpha,\alpha'\in I_Z\setminus I_B$. Let $\theta\in Z(\alpha)$, $b=b(\alpha,\theta)$, $c=c(\alpha,\theta)$, and let $D_b$, $D_c$ be two parallel supporting lines to $K$ at $b$ and $c$ respectively; see Figure~\reff{f4}. \begin{figure}[htb] \begin{center} \raisebox{0cm}{\epsfysize5.5cm \epsfbox{geogebrafigure7.eps}} \end{center} \caption{{\small Proof of Theorem~\reff{t2}~(b)}} \lab{f4} \end{figure} Consider now $b'=b(\alpha',\theta)$ (for the same $\theta$) and $c'=c(\alpha',\theta)$. Since $\alpha<\alpha'$, the line $\De(\alpha',\theta)$ lies in the interior of $\De^+(\alpha,\theta)$. Since $B(\alpha')=\emptyset$, there do not exist supporting lines to $K$ at $b'$ and $c'$ crossing in $\De^+(\alpha',\theta)$. As a consequence, $b'$ must lie on $D_b$ and $c'$ on $D_c$, and the segments $[b',b]$ and $[c',c]$ are on $\partial K$. \noindent(c) i. \ If $\alpha_B=\frac12$ then, by (a), $\frac12\notin I_F\cup I_B$; hence, as already noted in the proof of (b), $K$ is symmetric. The converse is obvious. \noindent(c) ii. and iii.
\ If $\alpha_Z=0$, then there exists a sequence $\{\alpha_n\}_{n\in\mathbb{N}}$ tending to $0$, with $\alpha_n\in I_Z$; {\sl i.e.}, for any $n$ there exists $\theta_n\in{\mathbb S}^1$ with $0\in V(\alpha_n,\theta_n)$. By compactness, we may assume without loss of generality that the sequence $\{\theta_n\}_{n\in\mathbb{N}}$ converges to some $\theta_0\in{\mathbb S}^1$. Let $b_n=b(\alpha_n,\theta_n)$ and $c_n=c(\alpha_n,\theta_n)$. By continuity, the sequences $\{b_n\}_{n\in\mathbb{N}}$ and $\{c_n\}_{n\in\mathbb{N}}$ converge to some $b$, resp.\ $c\in\partial K$. Since the width of $K$ satisfies $0<w(K)\leq\|b_n-c_n\|$, we have $b\neq c$. Since $0\in V(\alpha_n,\theta_n)$, there are parallel supporting lines to $K$ at $b_n$ and $c_n$ for each $n$. Without loss of generality, we may also assume that these lines converge, yielding two parallel supporting lines at $b$ and $c$, denoted $D_b$ and $D_c$. Since the sequence of lines $\big\{\De(\alpha_n,\theta_n)\big\}_{n\in\mathbb{N}}$ tends to the oriented line through $b$ and $c$, denoted by $(bc)$, and $|\De^-(\alpha_n,\theta_n)\cap K|=\alpha_n|K|\to0$, we obtain $|(bc)^-\cap K|=0$, hence the segment $[b,c]$ is on $\partial K$. Conversely, if $[b,c]\subset\partial K$ and $b\neq c$ admit two parallel supporting lines to $K$, denoted by $D_b$ and $D_c$, let $\theta$ denote the direction of $(bc)$; {\sl i.e.,} $\theta$ is such that $\vec{bc}=k\vec u(\theta)$ for some $k>0$.
The angles $\beta_n=\beta\big(\frac1n,\theta\big)$ and $\gamma_n=\gamma\big(\frac1n,\theta\big)$ satisfy $\beta_n+\gamma_n\geq\pi$, hence $\theta\in B\big(\frac1n\big)\cup Z\big(\frac1n\big)$, so $B\big(\frac1n\big)\cup Z\big(\frac1n\big)\neq\emptyset$ and $\frac1n\in I_B\cup I_Z$; thus $\min(\alpha_Z,\alpha_B)=0$ and, since $\alpha_Z\leq\alpha_B$, we get $\alpha_Z=0$. If one of the supporting lines above, say $D_b$, intersects $\partial K$ only at $b$, then $b_n\notin D_b$, yielding $\beta_n+\gamma_n>\pi$, hence $\frac1n\in I_B$, showing $\alpha_B=0$. Conversely, if $\alpha_B=0$, then the former points $b_n$ and $c_n$ admit supporting lines which cross in $\De^+(\alpha_n,\theta_n)$ (see the comment after Definition~\reff{d2}). By contradiction, if both $D_b\cap\partial K$ and $D_c\cap\partial K$ contained more than $b$, resp.\ $c$, then for $n$ large enough we would have $b_n\in D_b$ and $c_n\in D_c$. Then, for $m,n$ such that $b_m\in\,]b_n,b[\,$ and $c_m\in\,]c_n,c[\,$, the only supporting lines at $b_m, c_m$ would be $D_b$ and $D_c$, which do not cross --- a contradiction. \framebox[2mm]{\ }\medskip \sec{6.}{The $\alpha$-core} In this section we compare the boundary $\partial K_\alpha$ with the image of $m(\alpha,\cdot)$, denoted by $m^*_\alpha$, for $\alpha\in\,]0,1[\,$. As a function of $\theta$, $m(\alpha,\cdot)$ is a tour in the sense of Definition~\reff{d1}, with $R_l=v_l(\alpha,\cdot)$ and $R_r=v_r(\alpha,\cdot)$, and core $K_\alpha$. In the case $\alpha=\frac12$, $m\big(\frac12,\cdot\big)$ is also a double half-tour.
Then, by Proposition~\reff{p1.1} (b) and (c) and Theorem~\reff{t2} (a), $m^*_\alpha$ is of class $\mathcal{C}^1$ if $0<\alpha<\alpha_Z$ or $1-\alpha_Z<\alpha<1$, and is not $\mathcal{C}^1$ if $\alpha_B<\alpha<1-\alpha_B$. In the case $\alpha=\alpha_Z$, $m^*_\alpha$ may or may not be $\mathcal{C}^1$. However, if $\alpha_Z\neq\alpha_B$, we will see that $m^*_\alpha$ is not $\mathcal{C}^1$ for $\alpha_Z<\alpha\leq\alpha_B$ either. By Proposition~\reff{p1.1} (d), $K_\alpha$ is strictly convex (or a single point, or empty), and by Proposition~\reff{p1.1} (e) we have $m^*_\alpha=\partial K_\alpha$ if and only if $B(\alpha)$ is empty. By Theorem~\reff{t2}~(a), we then have \eq8{ m^*_\alpha=\partial K_\alpha\Leftrightarrow\alpha\leq\alpha_B. } Besides, the function $\alpha\mapsto K_\alpha$ is continuous (for the Pompeiu--Hausdorff distance on compact sets in the plane) and decreasing (with respect to inclusion): if $\alpha<\alpha'$, then $K_{\alpha'}\subsetneq K_\alpha$. Since $K_\alpha=\emptyset$ if $\alpha>\frac12$, there exists a value $\alpha_K\leq\frac12$ such that \begin{itemize} \item $K_\alpha$ is strictly convex with nonempty interior if $0<\alpha<\alpha_K$, \item $K_{\alpha_K}$ is a single point, denoted by $T$, and \item $K_\alpha$ is empty if $\alpha_K<\alpha<1$. \end{itemize} By~\rf8, this value $\alpha_K$ is at least $\alpha_B$. It is noticeable that, when $K_{\alpha_K}$ is a single point, this point is {\sl not} necessarily the mass center of $K$; see Section~8.9. We end this section with the following statement, which gathers the preceding results. \propo{t3}{ {\noindent\rm(a)} \ If $\alpha_Z<\alpha<1-\alpha_Z$, then $m^*_\alpha$ is not $\mathcal{C}^1$.
{\noindent\rm(b)} \ We have $\frac49\leq\alpha_K\leq\frac12$, with equality on the left if and only if $K$ is a triangle, and equality on the right if and only if $K$ is symmetric. {\noindent\rm(c)} \ If $K$ is non-symmetric, then $\alpha_B<\alpha_K$ (whereas for $K$ symmetric we have $\alpha_B=\alpha_K=\frac12$). } \noindent{\sl Proof. } (a) It remains to prove that, if $\alpha_Z<\alpha\leq\alpha_B$, then $m_\alpha^*$ is not $\mathcal{C}^1$. Assume $\alpha_Z<\alpha_B$; by Theorem~\reff{t2}~(b) and its proof, there exist two parallel segments on $\partial K$, denoted by $[a,b]$ and $[d,c]$ with $abcd$ in convex position, such that the line oriented by $\vec{bc}$, denoted by $D_1$, is an $\alpha_Z$-section of $K$. Let $D_2$ denote the line oriented by $\vec{ad}$. Then $D_2$ is a $\beta$-section for some $\beta\geq\alpha_B$. For $\alpha\in\,]\alpha_Z,\alpha_B]$ close enough to $\alpha_Z$, there is an $\alpha$-section, denoted by $\De(\alpha,\theta_1)$, passing through $c$ and crossing $[a,b]$, and an $\alpha$-section, denoted by $\De(\alpha,\theta_2)$, passing through $b$ and crossing $[d,c]$; see Figure~\reff{f5}. These $\alpha$-sections intersect at some point $P\in\big[\frac12(a+d),\frac12(b+c)\big[\,$. Then we have $m(\alpha,\theta)=P$ for all $\theta\in[\theta_1,\theta_2]$, with $\vec{m'_l}(\alpha,\theta_1)$ collinear to $\vec{Pc}=\|\vec{Pc}\|\,\vec u_{\theta_1}$ and $\vec{m'_r}(\alpha,\theta_2)$ collinear to $\vec{bP}=\|\vec{bP}\|\,\vec u_{\theta_2}$. Since $\theta_1\neq\theta_2\neq\theta_1+\pi$, $m(\alpha,\cdot)$ is not $\mathcal{C}^1$ at $P$.
If $\alpha$ is not close to $\alpha_Z$, then $\De(\alpha,\theta_1)$ (passing through $c$) does not cross $[a,b]$, and $\De(\alpha,\theta_2)$ (passing through $b$) does not cross $[d,c]$. Nevertheless, the argument given above still holds, except that $\De(\alpha,\theta_1)$ and $\De(\alpha,\theta_2)$ no longer cross on $\big[\frac12(a+d),\frac12(b+c)\big[$. \begin{figure}[htb] \begin{center} \raisebox{0cm}{\epsfysize5.5cm \epsfbox{geogebrafigure5.eps}} \end{center} \caption{{\small Proof of Proposition~\reff{t3} (a)}} \lab{f5} \end{figure} \noindent(b) Suppose $\alpha_{K}=\frac{1}{2}$. Let $T$ be the unique point of $K_{1/2}$. With the notation of the proof of Proposition~\reff{p1}~(a), see Figure~\reff{f2}, since every half-section contains $T$, we have $M(\theta,\theta')=T$ for all $\theta\neq\theta'$. Now this proof implies that $$ \left\vert \theta-\theta'\right\vert \big(\left\Vert b(\theta)-T\right\Vert -\left\Vert c(\theta)-T\right\Vert\big)={\mathcal O}\big(\left\vert \theta -\theta'\right\vert^2\big), $$ for all $\theta<\theta'$. Letting $\theta'$ tend to $\theta$, this shows that $b(\theta)$ and $c(\theta)$ are symmetric about $T$. The fact that $\alpha_K\geq\frac49$, with $\alpha_K=\frac49$ if and only if $K$ is a triangle, is well known; see, {\sl e.g.},~\cit n. It reduces to the following statement, which we prove below.
\smallskip {\sl The mass center $G$ of $K$ lies in $\De^+\big(\frac49,\theta\big)$ for all $\theta$. Moreover, there exists $\theta\in{\mathbb S}^1$ such that $\De\big(\frac49,\theta\big)$ contains $G$ if and only if $K$ is a triangle.} \smallskip Fix $\theta$, let $\De=\De\big(\tfrac49,\theta\big)$, and consider the frame $\big(b=b(\theta),\vec u=\vec u(\theta),\vec v=\vec{u'}(\theta)\big)$. We have to prove that the $y$-coordinate $G_y$ of $G$ is nonnegative, and that it vanishes if and only if $K$ is a triangle. Let $D_b$ and $D_c$ be two supporting lines of $K$ at the points $b$ and $c=c(\theta)$; see Figure~\reff{f9}. \begin{figure}[htb] \begin{center} \raisebox{0cm}{\epsfysize6.5cm \epsfbox{geogebrafigure8.eps}} \end{center} \caption{{\small Proof of Proposition~\reff{t3} (b). The triangle $V=L\cup U$ is in bold.}} \lab{f9} \end{figure} These lines, together with the segment $[b,c]$, define two convex sets $\mathcal C^+={\rm conv}\big((\De^+\cap D_b)\cup(\De^+\cap D_c)\big)$ and $\mathcal C^-={\rm conv}\big((\De^-\cap D_b) \cup(\De^-\cap D_c)\big)$, one on each side of $\De$. The set $K^+=K\cap\De^+$ is included in $\mathcal C^+$ and the set $K^-=K\cap\De^-$ is included in $\mathcal C^-$. Let $a'\in D_b\cap\De^-$ and $a''\in D_c\cap\De^-$ be such that the triangles ${\rm conv}(a',b,c)$ and ${\rm conv}(a'',b,c)$ have area equal to $\frac49|K|$. Then the line $(a'a'')$ is parallel to $\De$, and for every $a\in[a',a'']$ the triangle ${\rm conv}(a,b,c)$ also has area equal to $\frac49|K|$.
For any such $a$, let $b'\in\partial K\cap[a,b[$ and $c'\in\partial K\cap[a,c[$. By continuity, one can choose $a$ such that the oriented line $\De'=(b'c')$ is parallel to $\De$. For this $a$, let $L={\rm conv}(a,b,c)$. We have $K^-\setminus L\subset\De'^+$ and $L\setminus K^-\subset\De'^-$; therefore the $y$-coordinate of the mass center $G^-$ of $K^-$ is larger than or equal to the $y$-coordinate of the mass center of $L$, with equality only when $K^-$ is a triangle. Furthermore, since $a\in\mathcal C^-$, the convex set $\mathcal C^+$ is included in the convex set $\mathcal C^{\prime+}={\rm conv}\big((\De^+\cap(ab))\cup(\De^+\cap(ac))\big)$. Let $U$ be the trapezoid of area $\frac59|K|$ defined by the lines $(ab)$, $(ac)$, $\De$, and a line $\De''\subset\De^+$ parallel to $\De$. Since $K^+\setminus U\subset\De''^+$ and $U\setminus K^+\subset\De''^-$, the $y$-coordinate of the mass center $G^+$ of $K^+$ is larger than or equal to the $y$-coordinate of the mass center of $U$, with equality only when $K^+=U$. It follows that $G_y$ is larger than or equal to the $y$-coordinate of the mass center of the union $V$ of $L$ and $U$, with equality only if $K=V$. Since the mass center of $V$ is on $\De$, we are done. \noindent(c) Suppose that $\alpha_B=\alpha_K$. By~\rf8, we obtain that $m(\alpha_K,\cdot)$ is constant, equal to $T$. It follows that, for all $\theta\in{\mathbb S}^1$, $T$ is the midpoint of a chord of direction $\theta$, hence $K$ is symmetric about $T$.
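The classical fact invoked above --- the mass center of a triangle lies in $\De^+\big(\frac49,\theta\big)$ for every $\theta$; equivalently, any line through the centroid of a triangle leaves at least $\frac49$ of the area on each side, with equality for lines parallel to a side --- can be checked numerically. The following sketch (our own illustration, not part of the proof) clips a reference triangle against lines through its centroid and records the smaller area fraction:

```python
import math

def clip(poly, a, b, c):
    """Keep the part of convex polygon `poly` with a*x + b*y <= c
    (Sutherland-Hodgman clipping against one half-plane)."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        f1, f2 = a * x1 + b * y1 - c, a * x2 + b * y2 - c
        if f1 <= 0:
            out.append((x1, y1))
        if f1 * f2 < 0:  # the edge crosses the line: add the intersection point
            t = f1 / (f1 - f2)
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def area(poly):
    """Shoelace formula."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
G = (1 / 3, 1 / 3)                            # centroid of the triangle
tot = area(tri)

fracs = []
for k in range(1000):
    th = math.pi * k / 1000
    a, b = math.sin(th), -math.cos(th)        # normal vector of a line through G
    c = a * G[0] + b * G[1]
    fracs.append(area(clip(tri, a, b, c)) / tot)

m = min(min(f, 1 - f) for f in fracs)
print(m)  # ≈ 0.4444 (= 4/9), attained for lines parallel to a side
```

The minimum is attained here at the sampled directions parallel to the sides of the triangle, in agreement with the equality case of the statement.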
\framebox[2mm]{\ }\medskip \sec{7.}{Miscellaneous results, remarks, and questions} {\noindent\bf8.1 \ } It is easy to check that, if an $\alpha$-section crosses two non-parallel segments of $\partial K$, then the corresponding midpoint of chord $m(\alpha)$ lies on an arc of hyperbola asymptotic to the lines extending these segments. In particular, if $K$ is a convex polygon, then for all $\alpha\in\,]0,1[$ the curve $m(\alpha,\cdot)$ is entirely made of arcs of hyperbolae. Of course, not only segments yield arcs of hyperbolae. One can check, for instance, that two arcs of hyperbolae on $\partial K$ also give an arc of hyperbola for $m(\alpha,\cdot)$, if $\alpha$ is small enough. \bigskip {\noindent\bf8.2 \ } We saw that an envelope of $\alpha$-sections is a tour in the sense of Definition~\reff{d1}. Conversely, which tours are envelopes of some $\alpha$-sections $m(\alpha,K)$, for which convex bodies $K$ and for which $0<\alpha<1$? For example, the astroid on the left of Figure~\reff{f8}, given by $\theta\mapsto(-\cos^3\theta,\sin^3\theta)$, {\sl i.e.,} with $R(\theta)=\frac32\sin(2\theta)$, cannot be such an envelope. More generally, following a remark by D.~Fuchs and S.~Tabachnikov in~\cit{ft}, a tour with a common tangent at two different points cannot be such an envelope, since each of the two points would have to be the midpoint of the chord.
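The hyperbola phenomenon of 8.1 is elementary to see in coordinates: if the two segments lie on the coordinate axes, a chord from $(a,0)$ to $(0,b)$ cuts off the area $ab/2$ near the corner, so fixing this area $A$ forces the midpoint $(a/2,b/2)$ onto the hyperbola $xy=A/2$, whose asymptotes are the two lines. A minimal numerical illustration (the normalization is ours, chosen to match the arcs $xy=\alpha/4$ computed in 8.6 for the triangle of area $\frac12$):

```python
A = 0.3                        # fixed cut-off area between chord and corner
mids = []
for i in range(1, 200):
    a = 0.1 + 0.02 * i         # chord endpoint (a, 0) on the x-axis
    b = 2 * A / a              # endpoint (0, b), so that the triangle area a*b/2 = A
    mids.append((a / 2, b / 2))

# every midpoint lies on the hyperbola x*y = A/2
dev = max(abs(x * y - A / 2) for (x, y) in mids)
print(dev)  # ~0 up to rounding
```

For a chord cutting off the fraction $\alpha$ of the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ (area $\frac12$), one has $A=\alpha/2$ and the midpoints lie on $xy=\alpha/4$, as stated in 8.6.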
\bigskip {\noindent\bf8.3 \ } The KAM theory applied to dual billiards shows that, if $m$ is a tour of class $\mathcal{C}^5$ (hence strictly convex), then there exist convex bodies $K$ with $\partial K$ arbitrarily close to $m$ such that $m=m(\alpha,K)$ (hence for some $\alpha$ arbitrarily close to $0$), and there exist arbitrarily large convex bodies $K$ such that $m=m(\alpha,K)$ (hence for some $\alpha$ arbitrarily close to $\frac12$). The curves $\partial K$ are invariant tori of the dual billiard. According to E.~Gutkin and A.~Katok~\cit{gk}, the works of J.~Moser~\cit m and R.~Douady~\cit d prove that these curves are convex. Notice the following apparent paradox. By Theorem~\reff{t2}(c)i., if $K$ is non-symmetric then, for $\alpha$ close to $\frac12$, $m(\alpha)$ is non-convex. This seems to contradict the above results, which imply that, given a strictly convex $\mathcal{C}^5$ non-symmetric curve $m$, there exist values of $\alpha$ arbitrarily close to $\frac12$ and convex bodies $K$, necessarily non-symmetric, such that $m=m(\alpha,K)$. However, these bodies $K$ have envelopes $m(\beta,K)$ with cusps for some other $\beta\in\,]\alpha,\frac12[$, although $\alpha$ is arbitrarily close to $\frac12$; there is no real contradiction. Several questions remain open. Among tours which present cusps, which ones are envelopes of sections? For instance, does there exist a {\sl symmetric} curve with cusps which is the $\alpha$-envelope of some (necessarily non-symmetric) convex body? Conversely, does there exist a non-symmetric convex body $K$ having a symmetric convex envelope $m(\alpha,K)$ for some $\alpha\in\,]0,1[\,$?
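For the dual (outer) billiard about a circular table, which underlies the discussion above and the argument of 8.4 below, the integrability is easy to observe numerically: reflecting a point through the tangency point of a tangent line from it to the circle preserves its distance to the center, so every orbit stays on a concentric circle. A small simulation sketch (the explicit parametrization of the tangency point is ours; it uses $p\cdot T=r^2$ and $|T|=r$):

```python
import math

def dual_billiard_step(p, r=1.0):
    """One step of the dual (outer) billiard map about the circle of
    radius r centered at the origin: reflect p through the tangency
    point T of one tangent line from p to the circle."""
    x, y = p
    d2 = x * x + y * y                   # |p|^2; p must lie outside: d2 > r^2
    s = math.sqrt(d2 - r * r)
    # tangency point T = (r^2 * p + r*s * p_perp) / |p|^2, with p_perp = (-y, x)
    tx = (r * r * x - r * s * y) / d2
    ty = (r * r * y + r * s * x) / d2
    return (2 * tx - x, 2 * ty - y)      # reflection of p through T

p0 = (2.0, 0.5)
p = p0
radii = []
for _ in range(500):
    p = dual_billiard_step(p)
    radii.append(math.hypot(*p))
print(min(radii), max(radii))  # both ≈ |p0|: the orbit stays on one circle
```

Since $|2T-p|^2=4r^2-4\,p\cdot T+|p|^2=|p|^2$, the radius is an exact invariant; the simulation only confirms this up to rounding.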
\bigskip {\noindent\bf8.4 \ } The link between envelopes of $\alpha$-sections and dual billiards yields a simple proof of the following (also simple) fact: the only convex bodies which have a circular envelope (or an elliptic one, since this is the same question modulo an affine transformation) are the discs with the same center as the envelope. Indeed, if the billiard table is a circle, then the dual billiard map is integrable\footnote{A classical conjecture states that only ellipses have an integrable dual billiard map.}. This means that each orbit remains on a circle centered at the origin, and is either periodic or dense in this circle, depending on whether the angle between the two tangents from the starting point to the table is in $\pi\mathbb{Q}$ or not. If the convex body $K$ were not a disc, then its boundary would cross at least one circle carrying dense orbits; being invariant, $\partial K$ would then contain a dense subset of this circle, hence the whole circle --- a contradiction. \bigskip {\noindent\bf8.5 \ } Let $C$ be a closed subset of ${\mathbb S}^1$. We construct below a $\mathcal{C}^1$ convex curve which is the boundary of a convex body for which $Z\big(\frac12\big)=C\cup S$ for some countable set $S$. As our convex curve, we start with the unit circle and deform it. Since $C$ is closed, ${\mathbb S}^1\setminus C$ is a countable union of open intervals. Let $]\theta_1,\theta_2[$ be one of them. Between $\theta_1$ and $\theta_2$, we deform the circle into a convex $\mathcal{C}^1$ curve arbitrarily close to the union of two segments, one, $[e^{i\theta_2},p]$, tangent to the circle at $e^{i\theta_2}$, the other $[e^{i\theta_1},p]$, such that the enclosed area remains unchanged. (For convenience, we use here the notation of complex numbers.)
We do the same symmetrically with respect to the line passing through the origin with direction $\frac12(\theta_1+\theta_2+\pi)$; {\sl i.e.}, we choose a $\mathcal{C}^1$ curve close to $[e^{i(\theta_1+\pi)},p']$, tangent to the circle at $e^{i(\theta_1+\pi)}$, and to $[e^{i(\theta_2+\pi)},p']$, with $p'=\overline{p}\,e^{i(\theta_1+\theta_2+\pi)}$; see Figure~\reff{f6}. Observe that the curve is no longer (centrally) symmetric. If the curve is chosen with large curvature near the points $p$ and $p'$, then the interval $]\theta_1,\theta_2[$ contains only two values of $\theta$ for which the half-section cuts the curve in a chord with two parallel tangents; these chords have one endpoint near $p$, resp.\ near $p'$, and no longer contain the origin. Doing this in every connected component of ${\mathbb S}^1\setminus C$, we obtain a convex $\mathcal{C}^1$ curve such that every $\theta\in C$ has a halving chord with parallel tangents, and all but two values of $\theta$ in each connected component of ${\mathbb S}^1\setminus C$ have a halving chord without parallel tangents. \begin{figure}[htb] \begin{center} \raisebox{0cm}{\epsfysize6cm \epsfbox{geogebrafigure6.eps}} \end{center} \caption{{\small A construction of a $\mathcal{C}^1$ convex body with (almost) prescribed zero set}} \lab{f6} \end{figure} \bigskip {\noindent\bf8.6 \ } It is easy to construct $K\varsubsetneq L$ such that $L_\alpha\subset K_\alpha$ for some value of $\alpha$; for example, take $L$ an equilateral triangle, $K$ the inscribed circle of $L$, and $\alpha\in\big[\frac49,\frac12\big]$.
Our calculation shows that this holds for all $\alpha\in\big[\alpha_1,\frac12\big]$, where $\alpha_1\approx0.40716$ satisfies, for some $t\in\,]0,\frac\pi2[\,$, $1-\frac32\big(1-\sqrt{1-2\alpha_1}\big)=\cos t$ and $\alpha_1=\frac1\pi(t-\cos t\sin t)$. Actually, if the radius of $K$ is $R$ (hence the height of $L$ is $3R$), then $K_\alpha$ is a disc of radius $R_\alpha=R\cos t$, where $\alpha=\frac1\pi(t-\cos t\sin t)$. Then $\partial L_\alpha$ is made of three arcs of hyperbolae. In a (non-orthonormal) frame where the triangle has vertices $(0,0)$, $(0,1)$, and $(1,0)$, two of these arcs of hyperbolae are $xy=\frac\alpha4$ and $x(1-x-y)=\frac\alpha4$, hence they cross at a point of abscissa $x_c=\frac12\big(1-\sqrt{1-2\alpha}\big)$. This proves that the three arcs of $\partial L_\alpha$ cross at a distance $R-\frac32R\big(1-\sqrt{1-2\alpha}\big)$ from the center. Is it true that, for every value of $\alpha\in\,]0,\frac12[\,$, there exists $K\subsetneq L$ such that $L_\alpha\subset K_\alpha$? (There cannot exist $K,L$ with $K\subsetneq L$, independent of $\alpha$, such that $L_\alpha\subset K_\alpha$ for all small $\alpha>0$.) Conversely, does there exist $\alpha_1>0$ such that for all pairs $K,L$ with $K\subsetneq L$, and all $\alpha<\alpha_1$, $L_\alpha\not\subset K_\alpha$? Is $\alpha_1\approx0.40716$ optimal in this respect? {\sl I.e.}, is it optimal when $K$ is a disc and $L$ a triangle? Or, instead, is it optimal when $K$ is an affinely regular hexagon inscribed in the triangle $L$?

\bigskip {\noindent\bf 8.7 \ } The following questions were asked by Jin-ichi Itoh, whom we would like to thank for his interest in our work. Let $w(K)$ denote the width of a convex body $K$, $\oslash(K)$ its diameter, $r(K)$ its inradius, and $R(K)$ its circumradius.
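The numerical value $\alpha_1\approx0.40716$ above can be recovered by eliminating $t$ from the two displayed relations; a short bisection sketch (an illustration, not the authors' computation):

```python
import math

def alpha_of(t):
    # alpha = (1/pi) * (t - cos(t) * sin(t)): area fraction cut off
    # when K_alpha is the disc of radius R*cos(t)
    return (t - math.cos(t) * math.sin(t)) / math.pi

def h(t):
    # the two relations are compatible when the crossing point of the
    # hyperbola arcs lies on the circle of radius R*cos(t):
    #   1 - (3/2)*(1 - sqrt(1 - 2*alpha)) = cos(t)
    a = alpha_of(t)
    return 1 - 1.5 * (1 - math.sqrt(1 - 2 * a)) - math.cos(t)

lo, hi = 1.0, 1.5            # h(lo) > 0 > h(hi), so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid

alpha1 = alpha_of((lo + hi) / 2)
assert abs(alpha1 - 0.40716) < 1e-4
```

The root is $t\approx1.4239$, which indeed gives $\alpha_1\approx0.40716$.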
Do we have $\frac{w(K)}{\oslash(K)}\leq\frac{w(K_\alpha)}{\oslash(K_\alpha)}$ for all $\alpha<\frac12$? Do we have $\frac{r(K)}{R(K)}\leq\frac{r(K_\alpha)}{R(K_\alpha)}$ for all $\alpha<\frac12$? Both answers are ``no'', as shown by the following counter-examples. For the first question, if $K$ is the unit Reuleaux triangle, so that $\oslash(K)=w(K)=1$, and $\alpha>0$ is small enough, then one can check that $\oslash(K_\alpha)=1-{\mathcal O}\big(\alpha^{2/3}\big)$, whereas $w(K_\alpha)<1-\alpha^{1/2}<\oslash(K_\alpha)$. For the second question, consider two small arcs of the circle of center $0$ and radius $2$ near the $x$-axis, two small arcs of the circle of center $0$ and radius $1$ near the $y$-axis, and choose for $K$ the convex hull of the union of these four arcs. In this manner we have $\frac{r(K)}{R(K)}=\frac12$. However, we have $r(K_\alpha)=\cos t_1$, where $t_1$ is such that
$$
\alpha=\tfrac1{|K|}(t_1-\sin t_1\cos t_1)=\tfrac{2}{3|K|}t_1^3+{\mathcal O}(t_1^5),
$$
and $R(K_\alpha)=2\cos t_2$, where $t_2$ is such that
$$
\alpha=\tfrac4{|K|}(t_2-\sin t_2\cos t_2)=\tfrac{8}{3|K|}t_2^3+{\mathcal O}(t_2^5),
$$
hence $t_2<t_1$, and so $\frac{r(K_\alpha)}{R(K_\alpha)}=\frac{\cos t_1}{2\cos t_2}<\frac12$. Whether there is a constant $k>0$ such that $\frac{w(K)}{\oslash(K)}\leq k\,\frac{w(K_\alpha)}{\oslash(K_\alpha)}$ for all convex bodies $K$ and all $\alpha$, idem for $\frac{r(K)}{R(K)}$, and which constant is optimal, seems to be another interesting question.

\bigskip {\noindent\bf 8.8 \ } There is a tight link between the region where cusps appear, as described in~\cit{ft}, and the regions $M_2$ and $M_3$ described by T. Zamfirescu in~\cit z.
Actually, let $K$ be a planar convex body and, for each $\theta\in{\mathbb S}^1$, consider the so-called {\sl midcurve}, {\it i.e.}, the locus of midpoints of all chords of direction $\theta$. If $\partial K$ is $\mathcal{C}^1$ and strictly convex, then the family of all these midcurves for $\theta\in{\mathbb S}^1$ is a continuous family in the sense of Gr\"unbaum~\cit g, for which the general results of~\cit z are at our disposal. In our situation, the regions $M_2$ and $M_3$ of~\cit z are the loci of points of ${\rm Int}\,(K)$ which are midpoints of at least $2$, resp.\ $3$, different chords. If $K$ is symmetric, then $M_2=M_3=\{g(K)\}$, the mass center of $K$. If $K$ is not symmetric, then it seems that we have
\eq6{
{\rm Int}\,(M_2)=M_3={\rm Int}\,\Big(\!\!\bigcup_{\alpha\in\,]0,\frac12[}(m_\alpha^*\setminus\partial K_\alpha)\Big).
}
Relation~\rf6 can already be verified when $K$ is a quadrilateral. For instance, the polygonal curve of all cusps described in Figure 11.15 of~\cit{ft} is indeed the boundary of $M_3$. Besides, when $K$ is a quadrilateral, a detailed analysis shows that the unique point $T$ of $K_{\alpha_K}$ is in $M_3$ and that the three chords bisected by $T$ are all $\alpha_K$-sections. This property of $T$ is likely to be true in the general case. Conversely, is $T$ the {\sl only} point of $M_3$ bisecting three $\alpha$-sections with the same $\alpha$? This has been checked in the case of quadrilaterals.

\bigskip {\noindent\bf 8.9 \ } We present here an explicit example showing that the unique point $T$ of $K_{\alpha_K}$ is not necessarily the mass center $G$ of $K$. The strategy is to use a quadrilateral and to show that the three chords bisected by $G$ are $\alpha$-sections for different values of $\alpha$; by Subsection~8.8, this implies $T\neq G$.
Given $c>1$, let $K=OIJC$ denote the quadrilateral ${\rm conv}(O,I,J,C)$, with $O=(0,0)$, $I=(1,0)$, $J=(0,1)$, $C=(c,c)$. The area of $K$ is $|K|=\frac12\,\|\veccc{IJ}\|\,\|\veccc{OC}\|=c$ and its mass center is $G=\big(\frac{2c+1}6,\frac{2c+1}6\big)$ (the midpoint of the segment determined by the centers of mass of the triangles $OJC$ and $OIC$). The point $G$ is the midpoint of exactly three chords, $AB$, $EF$, and $E'F'$, with $A,B$ on the line of equation $x+y=\frac{2c+1}3$, $E\in OJ$, $F\in IC$, and $E'\in OI$, $F'\in JC$, see Figure~\reff{f7}.

\begin{figure}[htb]
\begin{center}
\begin{picture}(140,120)
\put(0,-.3){\line(1,0){60}}\put(0,0){\vector(1,0){140}}
\put(0,.3){\line(1,0){60}}
\put(-.3,0){\line(0,1){60}}\put(0,0){\line(0,1){60}}
\put(.3,0){\vector(0,1){125}}
\multiput(32,32)(16,16){6}{{\line(1,1){10}}}\put(0,0){\line(1,1){15}}
\put(0,59.7){\line(2,1){120}}\put(0,60){\line(2,1){120}}
\put(0,60.3){\line(2,1){120}}
\put(59.7,0){\line(1,2){60}}\put(60,0){\line(1,2){60}}
\put(60.3,0){\line(1,2){60}}
\put(-10,14){\line(5,3){125}}
\put(14,-10){\line(3,5){75}}
\put(15,85){\line(1,-1){70}}
\put(130,-9){$x$}\put(-9,112){$y$}
\put(0,0){\circle*{3}}\put(-3,-13){$O$}
\put(60,0){\circle*{3}}\put(57,-13){$I$}
\put(0,60){\circle*{3}}\put(-11,58){$J$}
\put(120,120){\circle*{3}}\put(124,113){$C$}
\put(30,30){\circle*{3}}\put(21,18){$H$}
\put(50,50){\circle*{3}}\put(56,45){$G$}
\put(0,20){\circle*{3}}\put(-12,20){$E$}
\put(100,80){\circle*{3}}\put(100,68){$F$}
\put(20,0){\circle*{3}}\put(20,-12){$E'$}
\put(80,100){\circle*{3}}\put(68,100){$F'$}
\put(27,73.5){\circle*{3}}\put(24,79){$B$}
\put(73.5,27){\circle*{3}}\put(78,25){$A$}
\end{picture}
\end{center}
\caption{{\small The $\alpha$-center is not the mass center; here $c=2$}}
\lab{f7}
\end{figure}

These chords divide $K$ in proportions $(\alpha_{AB},1-\alpha_{AB})$, resp.\ $(\alpha_{EF},1-\alpha_{EF})$ and $(\alpha_{E'F'},1-\alpha_{E'F'})$ (by convention, $0<\alpha_{XY}\leq\frac12$). By symmetry, we have $\alpha_{EF}=\alpha_{E'F'}$, but there is no reason that $\alpha_{EF}=\alpha_{AB}$; as we will see, this is indeed not the case. The triangle $ABC$ is the image of $IJC$ by the dilation of center $C$ sending the point $H=\big(\frac12,\frac12\big)$ to $G$, hence of factor $\frac{\|\veccc{GC}\|}{\|\veccc{HC}\|}=\frac{4c-1}{3(2c-1)}$. With $|IJC|=c-\frac12$, this gives $|ABC|=\frac{(4c-1)^2}{18(2c-1)}$, so $\alpha_{AB}=\frac{(4c-1)^2}{18c(2c-1)}$. Since $F$ is on the line $IC$ of equation $c(x-1)=(c-1)y$, one finds $F=\big(\frac{2c+1}3,\frac{2c}3\big)$, hence $E=\big(0,\frac13\big)$.
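The coordinates and area fractions of this quadrilateral example can be double-checked numerically with the shoelace formula; a sketch in exact rational arithmetic, taking $c=2$ as in the figure (an independent check, not the authors' computation):

```python
from fractions import Fraction as Fr

def shoelace(pts):
    # exact polygon area via the shoelace formula
    s = Fr(0)
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

c = Fr(2)
O, I, J, C = (Fr(0), Fr(0)), (Fr(1), Fr(0)), (Fr(0), Fr(1)), (c, c)
G = (Fr(2 * c + 1, 6), Fr(2 * c + 1, 6))

# chord EF through G: E on the side OJ, F on the line IC of
# equation c(x - 1) = (c - 1) y
E = (Fr(0), Fr(1, 3))
F = (Fr(2 * c + 1, 3), Fr(2 * c, 3))
assert ((E[0] + F[0]) / 2, (E[1] + F[1]) / 2) == G   # G bisects EF
assert c * (F[0] - 1) == (c - 1) * F[1]              # F lies on IC

K_area = shoelace([O, I, C, J])
assert K_area == c                                   # |K| = c

alpha_EF = shoelace([O, I, F, E]) / K_area           # fraction cut off by EF
alpha_AB = Fr((4 * c - 1) ** 2, 18 * c * (2 * c - 1))  # fraction cut off by AB
assert alpha_EF == Fr(8 * c + 1, 18 * c)
assert alpha_EF != alpha_AB                          # hence T != G
```

For $c=2$ this gives $\alpha_{EF}=\frac{17}{36}$ and $\alpha_{AB}=\frac{49}{108}$, two different values, in agreement with the computation in the text.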
The area of the triangle $EIF$ is $\frac12\det(\veccc{EI}\;\veccc{EF})=\frac12\left|\begin{matrix} 1&\frac{2c+1}3\\ -\frac13&\frac{2c-1}3 \end{matrix}\right|=\frac{4c-1}9$, hence the area of the quadrilateral $OIFE$ is $|OIFE|=|OIE|+|EIF|=\frac16+\frac{4c-1}9=\frac{8c+1}{18}$. This gives $\alpha_{EF}=\frac{8c+1}{18c}\neq\alpha_{AB}$, hence $G\neq T$.

More generally, given a planar convex body $K$, we consider all chords whose midpoint coincides with the mass center $G$ of $K$. It is known~\cit v that there are at least three different such chords; these chords are $\alpha$-sections for different values of $\alpha$, ranging from some $\alpha_{\min{}}(K)$ to some $\alpha_{\max{}}(K)$. The quotient $\frac{\alpha_{\min{}}(K)}{\alpha_{\max{}}(K)}$ then measures in some sense the asymmetry of the body $K$. It is equal to $1$ if $K$ is (centrally) symmetric, or if $K$ has another symmetry which ensures that $G$ is the only affine-invariant point, {\sl e.g.}\ for $K$ a regular polygon, but it differs from $1$ in general. One can see that the minimum of this quotient is achieved for at least one affine class of convex bodies, and it would be worthwhile to determine the shape of these bodies and to compute the corresponding minimum. In the case of quadrilaterals, using Maple, we found that the minimum is attained precisely for a quadrilateral of our previous family, for $c=\frac74$, with $\frac{\alpha_{\min{}}(K)}{\alpha_{\max{}}(K)}=\frac{24}{25}$. The same questions can be asked in higher dimensions, replacing the midpoint of a chord by the mass center of an $\alpha$-section.

\bigskip {\noindent\bf 8.10 \ } We end this section with our main conjecture.
\conj{c1}{For any convex bodies $K,L$ with $K\subset L$, and any $\alpha\in\,]0,\frac12[\,$, there exists an $\alpha$-section of $L$ which is a $\beta$-section of $K$ for some $\beta\leq\alpha$.}

This conjecture has recently been proven in~\cit{fm} in the case of planar convex bodies. Another natural related question, the answer to which turns out to be negative, is the following. For $K\subset L$ convex bodies, does there always exist a half-section $\Delta$ of $L$ such that
$$
|K|\,\mbox{\rm length}(\Delta\cap L)\leq|L|\,\mbox{\rm length}(\Delta\cap K)\;?
$$
For a counter-example, consider for $K$ a thin pentagon of width varying from $\varepsilon$ on each side to $2\varepsilon$ in the middle, placed at the base of $L$, a triangle of height $1$, see Figure~\reff{f18}.

\begin{figure}[htb]
\begin{center}
\begin{picture}(140,110)
\setlength{\unitlength}{0.7pt}
\put(0,-.3){\line(1,0){200}}\put(0,0){\line(1,0){200}}
\put(0,.3){\line(1,0){200}}
\put(0,0){\line(2,3){100}}
\put(100,150){\line(2,-3){100}}
\put(0,-.3){\line(2,3){10}}\put(0,0){\line(2,3){10}}
\put(0,.3){\line(2,3){10}}
\put(10,14.7){\line(6,1){90}}\put(10,15){\line(6,1){90}}
\put(10,15.3){\line(6,1){90}}
\put(100,29.7){\line(6,-1){90}}
\put(100,30){\line(6,-1){90}}\put(100,30.3){\line(6,-1){90}}
\put(190,14.7){\line(2,-3){10}}\put(190,15){\line(2,-3){10}}
\put(190,15.3){\line(2,-3){10}}
\end{picture}
\end{center}
\caption{{\small In bold, the body $K$; in thin, the body $L$; here $\varepsilon=\frac1{10}$}}
\lab{f18}
\end{figure}

A detailed analysis shows that, for every half-section $\Delta$ of $L$, one has $|K|\,\mbox{\rm length}(\Delta\cap L)>|L|\,\mbox{\rm length}(\Delta\cap K)$.

\ 

{\noindent\bf Acknowledgements.} The authors thank Theodor Hangan and Tudor Zamfirescu for fruitful discussions and for having pointed out several references to them. They are also indebted to the anonymous referee who drew their attention to the reference~\cit{mr}. The third author thanks the {\sl Universit\'e de Haute Alsace} for a one-month grant and its hospitality, and acknowledges partial support from the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, grant PN-II-ID-PCE-2011-3-0533.

\begin{thebibliography}{99}
\bibi{bbs} I. B\'{a}r\'{a}ny, P. Blagojevi\'{c}, A. Sz\H{u}cs, Equipartitioning by a convex 3-fan, {\it Adv. Math.} 223 (2010), 579--593.
\bibi{bhj} I.~B\'ar\'any, A.~Hubard, J.~Jeronimo, Slicing convex sets and measures by a hyperplane, {\it Discrete Comput. Geom.} 39 (2008), 67--75.
\bibi{bm} I.~B\'ar\'any, J.~Matou\v{s}ek, Simultaneous partitions of measures by $k$-fans, {\it Discrete Comput. Geom.} 25 (2001), 317--334.
\bibi b S. Bereg, Orthogonal equipartitions, {\it Comput. Geom.: Theory Appl.} 42 (2009), 305--314.
\bibi{bz} P. V. M. Blagojevi\'c, G. M. Ziegler, Convex equipartitions via equivariant obstruction theory, {\it Israel J. Math.} 200 (2014), 49--77.
\bibi{cgppsw} S. E. Cappell, J. E. Goodman, J. Pach, R. Pollack, M. Sharir, R. Wenger, Common tangents and common transversals, {\it Adv. Math.} 106 (1994), 198--215.
\bibi d M.~M.~Day, Polygons circumscribed about closed convex curves, {\it Trans. Amer. Math. Soc.} 62 (1947), 315--319.
\bibi{di} J.~Dieudonn\'e, {\it Foundations of Modern Analysis}, Academic Press, 1969.
\bibi{du} C.~Dupin, {\it Applications de G\'eom\'etrie et de M\'ecanique, \`a la Marine, aux Ponts et Chauss\'ees, etc.}, Bachelier, Paris, 1822.
\bibi{fm} A. Fruchard, A. Magazinov, Fair partitioning by straight lines, submitted.
\bibi{ft} D.~Fuchs, S.~Tabachnikov, Lecture~11: Segments of equal area, 155--165, in {\it Mathematical Omnibus: Thirty Lectures on Classic Mathematics}, Amer. Math. Soc., Providence, RI, 2007.
\bibi g B. Gr\"unbaum, Continuous families of curves, {\it Canad. J. Math.} 18 (1966), 529--537.
\bibi{gk} E.~Gutkin, A.~Katok, Caustics for inner and outer billiards, {\it Comm. Math. Phys.} 173 (1995), 101--133.
\bibi{h} P. C.~Hammer, Convex bodies associated with a convex body, {\it Proc. Amer. Math. Soc.} 2 (1951), 781--793.
\bibi{ka} R. N. Karasev, Equipartition of several measures, arXiv:1011.476v2 [math.MG], 29 Nov 2010.
\bibi{kha} R. N. Karasev, A. Hubard, B. Aronov, Convex equipartitions: the spicy chicken theorem, {\it Geom. Dedicata} 170 (2014), 263--279.
\bibi k J.~Kincses, The topological type of the $\alpha$-sections of convex sets, {\it Adv. Math.} 217 (2008), 2159--2169.
\bibi{kl} V.~Klee, The critical set of a convex body, {\it Amer. J. Math.} 75 (1953), 178--188.
\bibi l V.~F.~Lazutkin, Existence of caustics for the billiard problem in a convex domain (Russian), {\it Izv. Akad. Nauk SSSR Ser. Mat.} 37 (1973), 186--216.
\bibi m V.~V.~Menon, A theorem on partitions of mass-distribution, {\it Pacific J. Math.} 16 (1966), 133--137.
\bibi{mr} M. Meyer, S. Reisner, A geometric property of the boundary of symmetric convex bodies and convexity of flotation surfaces, {\it Geom. Dedicata} 37 (1991), 327--337.
\bibi{mo} J.~Moser, {\it Stable and Random Motions in Dynamical Systems}, Ann. of Math. Studies 77, 1973.
\bibi n B. H. Neumann, On an invariant of plane regions and mass distributions, {\it J. London Math. Soc.} 20 (1945), 226--237.
\bibi s T. Sakai, Balanced convex partitions of measures in $\mathbb{R}^{2}$, {\it Graphs Combin.} 18 (2002), 169--192.
\bibi{sw} C.~Sch\"utt, E.~Werner, The convex floating body, {\it Math. Scand.} 66 (1990), 275--290.
\bibi{sw2} C.~Sch\"utt, E.~Werner, Homothetic floating bodies, {\it Geom. Dedicata} 49 (1994), 335--348.
\bibi{so} P. Sober\'{o}n, Balanced convex partitions of measures in $\mathbb{R}^{d}$, arXiv:1010.6191v2 [math.MG], 12 May 2011.
\bibi{st} A.~Stancu, The floating body problem, {\it Bull. London Math. Soc.} 38 (2006), 839--846.
\bibi{t1} S.~Tabachnikov, On the dual billiard problem, {\it Adv. Math.} 115 (1995), 221--249.
\bibi v U. Viet, Umkehrung eines Satzes von H. Brunn \"uber Mittelpunktseibereiche, {\it Math.-Phys. Semesterber.} 5 (1956), 141--142.
\bibi z T. Zamfirescu, Spreads, {\it Abh. Math. Sem. Univ. Hamburg} 50 (1980), 238--253.
\bibi{z1} T. Zamfirescu, Sur la r\'eductibilit\'e des corps convexes, {\it Math. Z.} 95 (1967), 20--33.
\bibi{w} E. Werner, Floating bodies and illumination bodies, in {\it Proceedings of the Conference ``Integral Geometry and Convexity''}, Wuhan 2004, World Scientific, Singapore.
\end{thebibliography}

{\small
\noindent Addresses of the authors:

\noindent Nicolas Chevallier and Augustin Fruchard\\
Laboratoire de Math\'ematiques, Informatique et Applications\\
Facult\'e des Sciences et Techniques\\
Universit\'e de Haute Alsace\\
2 rue des Fr\`eres Lumi\`ere\\
68093 Mulhouse cedex, FRANCE\\
E-mails: {\tt [email protected],\ [email protected]}

\ 

\noindent Costin V\^{\i}lcu\\
Simion Stoilow Institute of Mathematics of the Romanian Academy,\\
P.O. Box 1-764, Bucharest 70700, ROMANIA\\
E-mail: {\tt [email protected]}
}

\end{document}
\begin{document}

\title{\huge \bf Some qualitative properties of solutions to a reaction-diffusion equation with weighted strong reaction}

\author{
\Large Razvan Gabriel Iagar\,\footnote{Departamento de Matem\'{a}tica Aplicada, Ciencia e Ingenieria de los Materiales y Tecnologia Electr\'onica, Universidad Rey Juan Carlos, M\'{o}stoles, 28933, Madrid, Spain, \textit{e-mail:} [email protected]},\\[4pt]
\Large Ana I. Mu\~{n}oz\,\footnote{Departamento de Matem\'{a}tica Aplicada, Ciencia e Ingenieria de los Materiales y Tecnologia Electr\'onica, Universidad Rey Juan Carlos, M\'{o}stoles, 28933, Madrid, Spain, \textit{e-mail:} [email protected]},\\[4pt]
\Large Ariel S\'{a}nchez\,\footnote{Departamento de Matem\'{a}tica Aplicada, Ciencia e Ingenieria de los Materiales y Tecnologia Electr\'onica, Universidad Rey Juan Carlos, M\'{o}stoles, 28933, Madrid, Spain, \textit{e-mail:} [email protected]}\\[4pt]
}
\date{}

\maketitle

\begin{abstract}
We study the existence and qualitative properties of solutions to the Cauchy problem associated to the quasilinear reaction-diffusion equation
$$
\partial_tu=\Delta u^m+(1+|x|)^{\sigma}u^p,
$$
posed for $(x,t)\in\mathbb{R}^N\times(0,\infty)$, where $m>1$, $p\in(0,1)$ and $\sigma>0$. Initial data are taken to be bounded, non-negative and compactly supported. In the range $m+p\geq2$, we prove \emph{local existence of solutions} together with \emph{finite speed of propagation} of their supports for compactly supported initial conditions. We also show in this case that, for a given compactly supported initial condition, there exist \emph{infinitely many solutions} to the Cauchy problem, obtained by prescribing the evolution of their interface.
In the complementary range $m+p<2$, we obtain new \emph{Aronson-B\'enilan estimates} satisfied by solutions to the Cauchy problem, which are of independent interest as a priori bounds for the solutions. We apply these estimates to establish \emph{infinite speed of propagation} of the supports of solutions when $m+p<2$, that is, $u(x,t)>0$ for any $x\in\mathbb{R}^N$ and $t>0$, even when the initial condition $u_0$ is compactly supported.
\end{abstract}

\ 

\noindent {\bf MSC Subject Classification 2020:} 35B44, 35B45, 35K57, 35K59, 35R25, 35R37.

\smallskip

\noindent {\bf Keywords and phrases:} reaction-diffusion equations, weighted reaction, Aronson-B\'enilan estimates, finite speed of propagation, strong reaction, non-uniqueness.

\section{Introduction}

This paper deals with the qualitative theory of weak solutions to the Cauchy problem for the following reaction-diffusion equation
\begin{equation}\label{eq1}
\partial_tu=\Delta u^m+(1+|x|)^{\sigma}u^p, \qquad (x,t)\in \mathbb{R}^N\times(0,\infty), \ N\geq1,
\end{equation}
supplemented with the initial condition
\begin{equation}\label{eq2}
u(x,0)=u_0(x), \quad x\in\mathbb{R}^N.
\end{equation}
Throughout the paper, the exponents in \eqref{eq1} are assumed to lie in the following range:
\begin{equation}\label{exp}
m>1, \qquad 0<p<1, \qquad 0<\sigma<\infty,
\end{equation}
although we also give alternative proofs, or even slight improvements, of known results for $\sigma=0$.
We will work in general with bounded, compactly supported, non-negative and non-trivial initial conditions; more precisely,
\begin{equation}\label{icond}
u_0\in L^{\infty}(\mathbb{R}^N), \qquad {\rm supp}\,u_0\subseteq B(0,R) \ {\rm for \ some} \ R>0, \qquad u_0(x)\geq0 \ {\rm for \ any} \ x\in\mathbb{R}^N, \ u_0\not\equiv0,
\end{equation}
and further regularity assumptions (such as continuity) will be specified at the points where they are needed. In nonlinear diffusion problems it is standard to consider data belonging to the space $L^1_{\rm loc}(\mathbb{R}^N)$, and some of our results can certainly be extended to this weaker setting. However, for simplicity, and also since in the range of exponents \eqref{exp} finite time blow-up of bounded solutions is expected (as established, for example, for self-similar solutions in recent works such as \cite{IS20a, IS20b, IMS22}), we decided to avoid possible pointwise singularities at finite points and thus require \eqref{icond}. A deep study of Eq. \eqref{eq1} in the range $p>1$ has been performed by Andreucci and DiBenedetto in \cite{AdB91} for weights with any $\sigma\in\mathbb{R}$, that is, both positive and negative. Properties such as local existence of weak solutions under optimal growth conditions on the initial data, estimates, regularity of solutions and Harnack-type inequalities are obtained there. The authors also note in \cite{AdB91} that some of their results can be extended to the range $0<p<1$, but only when $\sigma<0$, and they leave open the case $\sigma>0$ with $0<p<1$. We find this latter case very interesting to study due to the interplay between the non-Lipschitz reaction term (which does not by itself produce finite time blow-up) and the unbounded weight.
The non-weighted equation
\begin{equation}\label{eq4}
u_t=\Delta u^m+u^p,
\end{equation}
corresponding to $\sigma=0$ in Eq. \eqref{eq1}, is nowadays quite well understood (at least in dimension $N=1$) after a series of works by de Pablo and V\'azquez \cite{dPV90, dPV91, dPV92, dP94}, and their outcome is very interesting, despite the fact that we are dealing with an ill-posed problem. It is shown that solutions \emph{always exist} and are global in time if $u_0$ satisfies a growth condition similar to the one required for existence in the porous medium equation, namely $u_0(x)=o(|x|^{2/(m-1)})$ \cite{dPV91}, and that there is always a minimal and a maximal solution, both constructed via approximations. The most interesting and surprising features of this problem with $\sigma=0$ are related to uniqueness. This property depends strongly on two aspects: the sign of the critical exponent $m+p-2$, and whether or not the initial condition $u_0$ is positive everywhere. More precisely,

$\bullet$ if $m+p-2\geq0$, uniqueness of solutions to the Cauchy problem \eqref{eq4}-\eqref{eq2} holds true if and only if ${\rm supp}\,u_0=\mathbb{R}^N$ \cite{dPV90}. If $u_0$ is compactly supported, then its support has the property of finite propagation and interfaces appear. Moreover, the extent of the non-uniqueness in this case is addressed in \cite{dPV92}, where it is shown that, given a rather general compactly supported initial condition $u_0$ and a function of time $\xi(t)$ advancing faster than the interface of the (unique) minimal solution constructed in \cite{dPV91}, there is always a solution to the Cauchy problem with data $u_0$ and interface at time $t>0$ given by $\xi(t)$. This is a very strong and sharp non-uniqueness property, giving rise in fact to an infinity of solutions.
$\bullet$ if $m+p-2<0$, things change radically due to the infinite speed of propagation: even if $u_0$ is compactly supported, it is shown in \cite{dPV90} that $u(x,t)>0$ for any $x\in\mathbb{R}^N$ and $t>0$; thus we have a property known as \emph{quasi-uniqueness}: there exists a unique solution to the Cauchy problem for any initial condition $u_0$ except for the trivial one $u_0\equiv0$, for which two different solutions are constructed.

Considering weighted reaction terms came as a natural extension of the already well developed knowledge on the ``classical'' reaction-diffusion equations with reaction of the form $u^p$, or more general functions resembling it (see \cite{QS,S4} as important monographs on this subject). Many results were achieved for the \emph{semilinear case $m=1$} and unbounded weights of the form $|x|^{\sigma}u^p$ (and sometimes even more general weights $V(x)$ instead of pure powers), always with $p>1$. Fujita-type exponents and conditions on the data for finite-time blow-up to occur were studied in celebrated papers by Baras and Kersner, Bandle and Levine, Pinsky et al., see for example \cite{BK87, BL89, Pi97, Pi98}. More recently, still with $m=1$, an interesting question was addressed: considering the equation
\begin{equation}\label{eq3.bis}
u_t=\Delta u+|x|^{\sigma}u^p,
\end{equation}
it is natural to ask whether $x=0$ (or, more generally, any zero of a power-like weight $V(x)$) can be a blow-up point. Examples of both possible situations (when $x=0$ is a blow-up point and when it is not) were constructed (mostly for Cauchy-Dirichlet problems posed in bounded domains) in a series of recent papers by Guo and collaborators \cite{GLS10, GLS13, GS11, GS18}.
Finer analysis of how blow-up occurs, with rates and local asymptotic behavior in self-similar form near the blow-up time and points, was performed by Filippas and Tertikas \cite{FT00} and in the very recent work by Mukai and Seki \cite{MS20}, for different ranges of the exponent $p>1$. Due to their further complexity, and to the fact that even in the non-weighted case $\sigma=0$ some difficult open problems remain (see for example \cite[Chapter 4]{S4}), equations such as \eqref{eq1} or its close relative
\begin{equation}\label{eq3}
u_t=\Delta u^m+|x|^{\sigma}u^p,
\end{equation}
with $m>1$ have received less attention in the literature. Apart from the quoted paper \cite{AdB91}, the Fujita-type exponent and rather sharp conditions on the initial data for finite time blow-up to hold have been obtained by Qi \cite{Qi98} and then Suzuki \cite{Su02} for the case $p>m>1$ (including also a part of the fast diffusion range $m<1$ in \cite{Qi98}). We recommend Suzuki's paper to the reader as a well-written basic work on the qualitative theory for these equations, while blow-up rates as $t\to T$, also for $p>m$, are proved in \cite{AT05}.

In recent years, the authors of the present work started a larger project of understanding the patterns (in self-similar form) that solutions to Eq. \eqref{eq3} may take, either in the case of global solutions or close to the blow-up time if blow-up occurs. This is an important part of the study, since it is well known that such patterns are a prototype of the general behavior of the equation, in the form of asymptotic profiles as $t\to\infty$ or $t\to T$, and they also bring a deeper understanding of the blow-up sets and rates. A series of papers \cite{IS19a, IS21, IS22, ILS23} addresses the question of the blow-up profiles of Eq. \eqref{eq3} for $m>1$ and $1\leq p\leq m$, where interesting and rather unexpected behaviors were established. In all these cases, solutions blow up in finite time when $\sigma>0$, but their specific blow-up behavior is shown to depend strongly on the magnitude of $\sigma$. Going back to the case of interest for us, $p\in(0,1)$, we have proved that blow-up profiles exist if the following condition is fulfilled:
\begin{equation}\label{const}
L:=\sigma(m-1)+2(p-1)>0.
\end{equation}
We classified such blow-up profiles in \cite{IMS22} in general dimension $N\geq2$, following previous results restricted to dimension $N=1$ in \cite{IS20b, IS20c}, obtaining again that the sign of $m+p-2$ is fundamental for their existence and behavior. More precisely,

$\bullet$ when $m+p-2>0$, all the blow-up self-similar profiles present an interface (that is, they are compactly supported) and there are two different types of possible interface behavior. This is both a manifestation of non-uniqueness (expected for compactly supported data) and a suggestion of possible non-existence of solutions when $u_0(x)>0$, as there is no pattern they can approach as $t\to T$.

$\bullet$ when $m+p=2$, self-similar blow-up patterns present a rather similar panorama, but with a single type of interface behavior.

$\bullet$ when $m+p<2$, a rather striking non-existence of any kind of blow-up profiles (either with interfaces or not) occurs \cite{IS20b}. Such an outcome suggests infinite speed of propagation and a complete non-existence of non-trivial solutions.

In this paper we thus begin the qualitative study of the Cauchy problem for Eq. \eqref{eq1} posed in $\mathbb{R}^N$.
We thus address the quite interesting (in view of precedents such as \cite{dPV90, dPV92}) question of speed of propagation and non-uniqueness for compactly supported data, deriving in the process some new Aronson-B\'enilan estimates for the solutions to Eq. \eqref{eq1} when $m+p<2$. We stress here that uniform lower bounds on the weight are essential in the forthcoming proofs; thus, technical complications introduced by the weight in a neighborhood of $x=0$ (where it is no longer uniformly positive) appear when trying to adapt the same proofs to the close relative Eq. \eqref{eq3}. But let us present below in more detail our main results.

\medskip

\noindent \textbf{Main results.} In order to state our results concerning the qualitative theory of solutions to Eq. \eqref{eq1}, we first have to introduce the notion of \emph{weak solution} that will be used throughout the paper. Let us denote by $u(t)$ the mapping $x\mapsto u(x,t)$ for $t>0$ fixed. We will slightly modify the functional framework from Andreucci and DiBenedetto \cite{AdB91} by passing the full Laplacian onto the test function in our weak formulation.

\begin{definition}\label{def.sol}
A non-negative function $u:\mathbb{R}^N\times(0,T)\to[0,\infty)$ is said to be a \emph{weak solution} to the Cauchy problem \eqref{eq1}-\eqref{eq2} with initial condition $u_0$ as in \eqref{icond} if the following assumptions are satisfied by $u$:

(a) \emph{Regularity assumption}: we have
\begin{equation}\label{reg.sol}
u\in C(0,T;L^1_{\rm loc}(\mathbb{R}^N))\cap L^{\infty}_{\rm loc}(\mathbb{R}^N\times(0,T)).
\end{equation}

(b) \emph{Weak formulation of \eqref{eq1}}: for any test function $\eta\in C_0^{\infty}(\mathbb{R}^N\times(0,T))$ and for any $t\in(0,T)$ we have
\begin{equation}\label{weak.sol}
\begin{split}
\int_{\mathbb{R}^N}u(x,t)\eta(x,t)\,dx&+\int_0^t\int_{\mathbb{R}^N}\left(-u(x,\tau)\eta_t(x,\tau)-u^m(x,\tau)\Delta\eta(x,\tau)\right)dx\,d\tau\\
&=\int_0^t\int_{\mathbb{R}^N}(1+|x|)^{\sigma}u^p(x,\tau)\eta(x,\tau)dx\,d\tau.
\end{split}
\end{equation}

(c) \emph{Taking the initial condition}: this is done in the $L^1$ sense, more precisely
\begin{equation}\label{limit.sol}
\lim\limits_{t\to0}u(t)=u_0, \qquad {\rm with \ convergence \ in} \ L^1_{\rm loc}(\mathbb{R}^N).
\end{equation}
\end{definition}

This change relaxes the functional assumptions of regularity for a solution, making it easier to obtain weak solutions by limiting processes. We will also need throughout the paper the notions of (weak) sub- and supersolution to Eq. \eqref{eq1}. We say that $u$ is a \emph{weak subsolution} (respectively, \emph{weak supersolution}) to Eq. \eqref{eq1} if condition (a) is fulfilled and condition (b) is modified in the sense that the equality sign is replaced by $\leq$ (respectively, $\geq$) for any test function $\eta\in C_0^{\infty}(\mathbb{R}^N\times(0,T))$ such that $\eta\geq0$ and for any $t\in(0,T)$. Moreover, the notions of weak solution, subsolution and supersolution to Eq. \eqref{eq1} (without the initial condition) can be defined in an obvious way on time intervals $[t_1,t_2]\subset(0,T)$ instead of $(0,T)$ by just removing assumption (c). With the notion of weak solution defined, we are in a position to give below the main theorems of this work.
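The weak formulation \eqref{weak.sol} arises, formally, by multiplying Eq. \eqref{eq1} by a test function $\eta$ and integrating by parts; we sketch the computation for the reader's convenience. For smooth solutions,
$$
\int_0^t\int_{\mathbb{R}^N}u_{\tau}\eta\,dx\,d\tau=\int_{\mathbb{R}^N}u(x,t)\eta(x,t)\,dx-\int_0^t\int_{\mathbb{R}^N}u\eta_{\tau}\,dx\,d\tau,
$$
since $\eta$ vanishes near $\tau=0$, while applying Green's formula twice in space gives
$$
\int_0^t\int_{\mathbb{R}^N}(\Delta u^m)\eta\,dx\,d\tau=\int_0^t\int_{\mathbb{R}^N}u^m\Delta\eta\,dx\,d\tau.
$$
In this way, no derivatives of $u$ appear in \eqref{weak.sol}, which is why the regularity required in (a) suffices.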
As explained above, it is expected from the results in papers such as \cite{dPV90, dPV91, dPV92} for equations with non-weighted reaction that in our range of exponents ill-posedness of the Cauchy problem will still hold true, and both existence and uniqueness of solutions are an issue. As we shall see below, if we consider initial conditions $u_0$ as in \eqref{icond}, these basic properties of existence and uniqueness of weak solutions are strongly related to the finite or infinite speed of propagation of the edge of their support. Throughout the paper, we will denote by $C_0(\mathbb{R}^N)$ the space of continuous, compactly supported functions on $\mathbb{R}^N$.

\medskip

\noindent \textbf{The range} $\mathbf{m+p\geq2}$. Let us first focus on the range of exponents for which $m+p\geq2$ and $\sigma>0$. In this case, we shall prove that, at least for some interval of time $t\in(0,T)$, there exist infinitely many weak solutions to Eq. \eqref{eq1}. We begin with the existence of at least one local (in time) weak solution, as stated in the following:

\begin{theorem}[Local existence of compactly supported solutions]\label{th.exist}
In our framework and notation, assume that $m+p\geq2$ and let $u_0$ be an initial condition as in \eqref{icond} satisfying moreover $u_0\in C_0(\mathbb{R}^N)$. Then there exist $T>0$ and at least one weak solution $u$ to the Cauchy problem \eqref{eq1}-\eqref{eq2} for $t\in(0,T)$ which remains continuous and compactly supported: $u(t)\in C_0(\mathbb{R}^N)$ for any $t\in(0,T)$.
\end{theorem}

The proof relies on the construction of so-called \emph{minimal solutions} to the Cauchy problem \eqref{eq1}-\eqref{eq2}, which are obtained through a limit process from a family of approximating (regularized) Cauchy problems.
It will be shown that such a minimal solution exists for any compactly supported initial condition $u_0$ and stays compactly supported for $t\in(0,T)$, a property known as \emph{finite speed of propagation}. The (local in time) finite speed of propagation will be proved with the aid of comparison with solutions and supersolutions in self-similar form introduced in the recent works \cite{IS20b, IS20c, IMS22}. The statement of Theorem \ref{th.exist} and precedents in the non-weighted case \cite{dPV92} suggest that uniqueness does not hold true. In fact, we infer from Theorem \ref{th.exist} that, at an intuitive level, the weight $V(x)=(1+|x|)^{\sigma}$ is equivalent to a constant for times $t\in(0,T)$ (that is, while the support of $u(t)$ remains finite), thus a non-uniqueness property similar to the one in the non-weighted case is expected. The next result characterizes the extent of this non-uniqueness, by showing that we can prescribe in infinitely many ways the evolution of the interface of a solution stemming from the same initial condition.

\begin{theorem}[Non-uniqueness of compactly supported solutions]\label{th.nonuniq}
In our framework and notation, assume that $m+p\geq2$ and let $u_0$ be an initial condition as in \eqref{icond} satisfying moreover $u_0\in C_0(\mathbb{R})$. Then there exist $T>0$ and infinitely many weak solutions to the Cauchy problem \eqref{eq1}-\eqref{eq2} for $t\in(0,T)$. The same existence of infinitely many weak solutions holds true for radially symmetric initial conditions $u_0\in C_0(\mathbb{R}^N)$ satisfying \eqref{icond}.
\end{theorem}

This result is rather similar to the non-weighted case $\sigma=0$, although the proof will be technically more involved: it is not clear whether, for $\sigma>0$, there exists a maximal solution, from which one would get almost for free a different, second solution (as holds true for $\sigma=0$, see \cite{dPV90, dPV91}), thus we have to use a different approach. The statement of Theorem \ref{th.nonuniq} will thus be strengthened and made more precise in Section \ref{sec.nonuniq}, where we show that the existence of infinitely many solutions is linked to a prescribed evolution of their interface in time, adapting but also slightly improving techniques from \cite{dPV92}.

\medskip

\noindent \textbf{The range} $\mathbf{m+p<2}$. In this complementary range, our main goal is to prove that solutions to the Cauchy problem for Eq. \eqref{eq1} with initial conditions as in \eqref{icond} have \emph{infinite speed of propagation}, that is, they become immediately positive at any point $x\in\mathbb{R}^N$. This fact extends to the non-homogeneous range $\sigma>0$ a similar result holding true for $\sigma=0$, established in \cite{dPV90}, but we use a different approach which gives, along the way, a result of independent interest in the study of the homogeneous equation \eqref{eq4}. We begin by establishing the following \emph{Aronson-B\'enilan estimates} (whose name stems from the celebrated short note \cite{ABE79} on solutions to the porous medium equation) for solutions to the homogeneous case $\sigma=0$. We thus introduce the \emph{pressure function}
$$
v=\frac{m}{m-1}u^{m-1}.
$$

\begin{theorem}[Aronson-B\'enilan estimates when $m+p<2$ and $\sigma=0$]\label{th.ABE}
Assume that $m+p<2$. Let $u$ be a weak solution to Eq.
\eqref{eq4} in $\mathbb{R}^N\times(0,T)$ with a continuous initial condition $u_0(x)=u(x,0)$ satisfying \eqref{icond} and let $v$ be the pressure variable introduced above. Then the following inequality
\begin{equation}\label{ABE}
\Delta v\geq-\frac{K}{t}, \qquad K=\frac{N}{N(m-1)+2},
\end{equation}
holds true in the sense of distributions in $\mathbb{R}^N$, that is,
\begin{equation}\label{ABEdist}
\int_0^{T}\int_{\mathbb{R}^N}\left(v(x,t)\Delta\varphi(x,t)+\frac{K}{t}\varphi(x,t)\right)\,dx\,dt\geq0,
\end{equation}
for any test function $\varphi\in C_0^{\infty}(\mathbb{R}^N\times(0,T))$ such that $\varphi\geq0$ in $\mathbb{R}^N\times(0,T)$.
\end{theorem}

\noindent \textbf{Remark.} Theorem \ref{th.ABE} is expected to hold true also \emph{for any $\sigma>0$}, provided $N\geq2$. We give a formal proof of this fact at the end of Section \ref{sec.ABE}. However, transforming it into a rigorous proof is not possible at present for $\sigma>0$, as we are lacking a well-posedness result for the Cauchy problem \eqref{eq1}-\eqref{eq2} in the range $m+p<2$. This is why we state this formal proof as a remark. We then employ the Aronson-B\'enilan estimates established in Theorem \ref{th.ABE}, and some consequences of them, in order to prove the infinite speed of propagation of the supports of solutions to Eq. \eqref{eq1} when $m+p-2<0$, in strong contrast to the results established in the range $m+p\geq2$. We include the range $\sigma=0$ in the statement, as our approach gives an alternative proof to the one in \cite{dPV90}.

\begin{theorem}[Infinite speed of propagation when $m+p<2$]\label{th.ISP}
If the exponents $m$ and $p$ satisfy $m+p<2$, Eq.
\eqref{eq1} has the property of infinite speed of propagation for any $\sigma\geq0$, that is, every weak solution (if it exists) to the Cauchy problem associated to Eq. \eqref{eq1} with continuous initial condition $u_0$ as in \eqref{icond} satisfies $u(x,t)>0$ for any $x\in\mathbb{R}^N$ and $t>0$.
\end{theorem}

The rest of the paper is devoted to the proofs of the main theorems, following the outlines explained in the comments near their statements. We then give at the end of the paper a list of open problems and possible extensions of our study that we believe to be interesting for future developments.

\section{Existence and finite speed of propagation when $m+p\geq2$}\label{sec.localex}

The goal of this section is to prove Theorem \ref{th.exist} in the range of parameters $m+p\geq2$. Let $u_0$ be a continuous initial condition as in \eqref{icond}. The scheme of the proof is based on the construction of a \emph{minimal solution} via an approximation process, showing that this minimal solution is a weak solution to the Cauchy problem \eqref{eq1}-\eqref{eq2} with compact support at any time $t\in(0,T)$ for some $T>0$.

\begin{proposition}\label{prop.minimal}
There exist some $T>0$ and a continuous weak solution $\underline{u}$, defined and compactly supported for $t\in(0,T)$, to the Cauchy problem \eqref{eq1}-\eqref{eq2} with $u_0$ as above, such that for any other weak solution $u$ to the Cauchy problem \eqref{eq1}-\eqref{eq2} (if it exists), we have
$$
\underline{u}(x,t)\leq u(x,t), \qquad {\rm for \ any} \ (x,t)\in\mathbb{R}^N\times(0,T).
$$
\end{proposition}

\begin{proof}
We divide the proof into three steps for the reader's convenience.
Let us stress at this point that, while Step 1 of the proof is an adaptation of an analogous construction in \cite{dPV91}, the idea in Steps 2 and 3 strongly departs from the one used in the previously mentioned work and employs results on self-similar solutions to Eq. \eqref{eq3} published recently by the authors.

\medskip

\noindent \textbf{Step 1. The construction of the minimal solution.} We approximate the Cauchy problem \eqref{eq1}-\eqref{eq2} by the following sequence of Cauchy problems, for any positive integer $k\geq1$:
\begin{equation}\label{approx.min}
(P_k) \ \left\{\begin{array}{ll}w_t=\Delta w^m+\min\{(1+|x|)^{\sigma},k\}f_k(w), & (x,t)\in\mathbb{R}^N\times(0,\infty),\\ w(x,0)=u_0(x), & x\in\mathbb{R}^N,\end{array}\right.
\end{equation}
where
$$
f_k(w)=\left\{\begin{array}{ll}\left(\frac{1}{k}\right)^{p-1}w, & {\rm if} \ 0\leq w\leq\frac{1}{k},\\ w^p, & {\rm if} \ w\geq\frac{1}{k}.\end{array}\right.
$$
Since the nonlinearity in \eqref{approx.min} is of the form $g(x)h(w)$ with $g\in L^{\infty}(\mathbb{R}^N)$, $g(x)\geq 1$ for any $x\in\mathbb{R}^N$ and $h$ a Lipschitz function, we infer by standard results for quasilinear parabolic equations (see for example \cite{DiB83, Sa83}) that the Cauchy problem $(P_k)$ admits a unique solution $w_k$ defined for $(x,t)\in\mathbb{R}^N\times(0,\infty)$, which is compactly supported and continuous. The comparison principle is in force for the Cauchy problem $(P_k)$, and $w_{k+1}$ is a supersolution to the problem $(P_k)$, thus $w_{k+1}\geq w_k$ for any $k\geq1$.
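Let us record, for the reader's convenience, why $f_k$ is indeed Lipschitz for fixed $k$: the two branches match at $w=1/k$, where both equal $(1/k)^p$, and, since $p\in(0,1)$, the map $w\mapsto pw^{p-1}$ is decreasing, whence
$$
\sup\limits_{w>0}|f_k'(w)|=\max\left\{k^{1-p},\sup\limits_{w\geq1/k}pw^{p-1}\right\}=\max\{k^{1-p},pk^{1-p}\}=k^{1-p}.
$$
Note that the Lipschitz constant degenerates as $k\to\infty$, which is consistent with the fact that $w\mapsto w^p$ itself is not Lipschitz near $w=0$.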
This allows us to introduce the pointwise (and monotone increasing) limit (which might become infinite starting from some finite time)
$$
\underline{u}(x,t)=\lim\limits_{k\to\infty}w_k(x,t)<\infty, \qquad (x,t)\in\mathbb{R}^N\times(0,T_{\infty}),
$$
which is well defined provided that $T_{\infty}>0$. This fact will follow from the construction of a ``universal'' family of supersolutions in self-similar form, which is postponed to Step 2 (in dimension $N=1$) and Step 3 (in dimension $N\geq2$) below. Moreover, the solutions $w_k$ are thus uniformly bounded on $\mathbb{R}^N\times(0,T)$ for some $T=T(u_0)>0$ depending on $u_0$, and this uniform boundedness, together with classical results in \cite{DiB83, Sa83}, implies that the family $(w_k)_{k\geq1}$ is uniformly equicontinuous in $\mathbb{R}^N\times[0,T]$, hence there exists a subsequence (relabeled also $w_k$ for simplicity) which converges locally uniformly to the same function $\underline{u}(x,t)$. The fact that $\underline{u}$ is a continuous weak solution to \eqref{eq1}-\eqref{eq2} for $(x,t)\in\mathbb{R}^N\times(0,T_{\infty})$ now follows readily from the previous convergences (assuming for now the outcome of Steps 2 and 3 below): indeed, Lebesgue's monotone convergence theorem ensures the assumptions (b) and (c) in Definition \ref{def.sol}, while the uniform bound by the supersolutions constructed in Steps 2 and 3 below, together with the equicontinuity, gives that $\underline{u}$ satisfies the regularity assumption (a) in Definition \ref{def.sol}.
Moreover, if $u$ is another weak solution to the Cauchy problem \eqref{eq1}-\eqref{eq2}, then it is a supersolution to the problem $(P_k)$ for any $k\geq1$, whence $u(x,t)\geq w_k(x,t)$ for any $k\geq1$ and $(x,t)\in\mathbb{R}^N\times(0,\infty)$, and by passing to the limit, $u\geq\underline{u}$ in $\mathbb{R}^N\times(0,T_{\infty})$, proving the minimality of $\underline{u}$.

\medskip

\noindent \textbf{Step 2. Supersolutions in dimension $N=1$.} We are left with the task of obtaining a uniform bound from above for all solutions $w_k$ to the Cauchy problems $(P_k)$, $k\geq1$, at least up to some (short) finite time. This follows by comparison with suitable supersolutions in self-similar form constructed in recent works by the authors, such as \cite{IMS22} for $N\geq2$ and $m+p\geq2$, respectively \cite{IS20b, IS20c} in dimension $N=1$ and either $m+p>2$ or $m+p=2$. If we restrict ourselves to dimension $N=1$ for this step, it is shown in the previously quoted works that, in our range of exponents together with the extra condition $\sigma>2(1-p)/(m-1)$, there are radially symmetric blow-up self-similar supersolutions to Eq. \eqref{eq3} in the form
\begin{equation}\label{SSS}
u(x,t)=(T-t)^{-\alpha}f(|x|(T-t)^{\beta}), \ \alpha=\frac{\sigma+2}{L}, \ \beta=\frac{m-p}{L},
\end{equation}
where $L>0$ has been defined in \eqref{const}, such that their self-similar profiles $f$ solve the differential equation
\begin{equation}\label{SSODE}
(f^m)''(\xi)-\alpha f(\xi)+\beta\xi f'(\xi)+\xi^{\sigma}f(\xi)^p=0, \qquad \xi=|x|(T-t)^{\beta}.
\end{equation}
Moreover, it is established in \cite[Proposition 4.1]{IS20b} if $m+p>2$ and in \cite[Proposition 3.1]{IS20c} if $m+p=2$ that the self-similar profiles $f$ of the supersolutions in the form \eqref{SSS} fulfill the following two additional properties:

$\bullet$ $f$ is strictly decreasing until reaching the zero level: $f(0)=A>0$ and $f'(\xi)<0$ at points $\xi\geq0$ where $f(\xi)>0$.

$\bullet$ $f$ presents an interface at some finite point $\xi_0\in(0,\infty)$, that is, $f(\xi_0)=0$, $f(\xi)>0$ for any $\xi\in(0,\xi_0)$ and $(f^m)'(\xi_0)=0$.

Here the blow-up time $T>0$ is a free parameter, and the functions defined in \eqref{SSS} are actually weak solutions to Eq. \eqref{eq3} except at the point $x=0$, where the condition $(f^m)'(0)=0$ required of a weak solution is not fulfilled. We adapt these supersolutions to our Eq. \eqref{eq1} by defining
\begin{equation}\label{supersol}
z(x,t)=(T-t)^{-\alpha}f((1+|x|)(T-t)^{\beta}),
\end{equation}
with $\alpha$, $\beta$ and $f$ as in \eqref{SSS}. Since $f$ is a supersolution to the differential equation \eqref{SSODE}, it is straightforward to check that $z$ is a supersolution to \eqref{eq1}. The amplitude $s(t)$ of the support of $z(t)$ at some $t\in(0,T)$ is given by $\xi_0=(1+s(t))(T-t)^{\beta}$, or equivalently
$$
s(t)=(T-t)^{-\beta}\xi_0-1\to\infty, \qquad {\rm as} \ t\to T.
$$
Moreover, the above supersolutions have been established in \cite{IS20b, IS20c} only under the condition $\sigma>2(1-p)/(m-1)$. However, if $0<\sigma\leq2(1-p)/(m-1)$, it follows from the fact that $1+|x|\geq1$ that $(1+|x|)^{\sigma}\leq(1+|x|)^{\sigma_1}$ for any $\sigma_1>2(1-p)/(m-1)$ and for any $x\in\mathbb{R}$. It thus follows that the supersolutions constructed for Eq.
\eqref{eq1} with such an exponent $\sigma_1>2(1-p)/(m-1)$ as above will serve also as supersolutions to Eq. \eqref{eq1} with smaller exponents $\sigma$. Since $T$ is a free parameter and $u_0$ is bounded and compactly supported, we can choose some $T_0>0$ sufficiently small such that
\begin{equation}\label{interm11}
z(x,0)=T_0^{-\alpha}f((1+|x|)T_0^{\beta})\geq\|u_0\|_{\infty}\geq u_0(x)
\end{equation}
for any $x\in{\rm supp}\,u_0$. This, together with the fact that any function $z$ as in \eqref{supersol} is a supersolution to Eq. \eqref{eq1}, gives that $z$ is also a supersolution to the Cauchy problem $(P_k)$ for any $k\geq1$, for which the comparison principle applies to give that
$$
w_k(x,t)\leq z(x,t), \qquad {\rm for \ any} \ (x,t)\in\mathbb{R}\times(0,T_0).
$$
In particular, we infer on the one hand that $T_{\infty}\geq T_0>0$ as claimed, and on the other hand, by passing to the limit as $k\to\infty$, we get
$$
\underline{u}(x,t)\leq z(x,t), \qquad {\rm for \ any} \ (x,t)\in\mathbb{R}\times(0,T_0),
$$
which proves the finite speed of propagation of the support of $\underline{u}$ at least for some short interval of time.

\medskip

\noindent \textbf{Step 3. Supersolutions in dimension $N\geq2$.} In this case, there are no longer bounded and decreasing supersolutions in the self-similar form used in Step 2. We thus construct suitable supersolutions to Eq. \eqref{eq3} by joining two different self-similar profiles. We begin again by fixing $\sigma>2(1-p)/(m-1)$ as a first case. We consider in general self-similar solutions to Eq.
\eqref{eq3} in the same form \eqref{SSS} as above, with the same exponents $\alpha$ and $\beta$, but whose profiles solve the differential equation
\begin{equation}\label{SSODE.N}
(f^m)''(\xi)+\frac{N-1}{\xi}(f^m)'(\xi)-\alpha f(\xi)+\beta\xi f'(\xi)+\xi^{\sigma}f(\xi)^p=0, \qquad \xi=|x|(T-t)^{\beta}.
\end{equation}
On the one hand, the analysis in \cite[Proposition 4.1]{IMS22} if $m+p>2$, respectively \cite[Proposition 4.2 and Lemma 4.3]{IMS22} if $m+p=2$, ensures that, given $\xi_0\in(0,\xi_*)$ sufficiently small, there exists a decreasing self-similar profile $f_2(\xi)$, solution to the differential equation \eqref{SSODE.N}, having an interface exactly at $\xi=\xi_0$ and a vertical asymptote as $\xi\to0$ with local behavior
$$
f_2(\xi)\sim\left\{\begin{array}{ll}C\xi^{-(N-2)/m}, & {\rm if} \ N\geq3,\\ C(-\ln\,\xi)^{1/m}, & {\rm if} \ N=2,\end{array}\right.
$$
as it follows from \cite[Lemma 3.2 and Lemma 3.5]{IMS22}, where $C>0$ denotes a positive constant that might change from one case to another. On the other hand, the analysis performed in \cite[Lemma 3.1]{IMS22} implies that, for any $A>0$, there exists a profile $f_1(\xi;A)$, local solution to Eq. \eqref{SSODE.N}, such that
$$
f_1(0;A)=A, \qquad f_1(\xi;A)\sim\left[A^{m-1}+\frac{\alpha(m-1)}{2mN}\xi^2\right]^{1/(m-1)}, \qquad {\rm as} \ \xi\to0,
$$
which is increasing in a right-neighborhood of the origin up to some maximum point $\xi_1(A)>0$. Thus, given an initial condition $u_0$ as in \eqref{icond}, one can choose for example $A=\|u_0\|_{\infty}$ and fix some $\xi_0\in(0,\xi_1(A))\cap(0,\xi_*)$ such that there exists a decreasing profile $f_2(\xi)$ as above, with vertical asymptote as $\xi\to0$ and edge of the support at $\xi=\xi_0$.
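The exponents $\alpha$ and $\beta$ employed in \eqref{SSS} and throughout this step can be recovered by direct substitution, a computation we record here for the reader's convenience. Inserting $u(x,t)=(T-t)^{-\alpha}f(\xi)$, $\xi=|x|(T-t)^{\beta}$, into Eq. \eqref{eq3}, the three terms $u_t$, $\Delta u^m$ and $|x|^{\sigma}u^p$ carry the time factors $(T-t)^{-\alpha-1}$, $(T-t)^{-\alpha m+2\beta}$ and $(T-t)^{-\alpha p-\sigma\beta}$, respectively. Matching the three exponents leads to the linear system
$$
\alpha(m-1)=2\beta+1, \qquad \alpha(1-p)+1=\sigma\beta,
$$
whose unique solution is $\alpha=(\sigma+2)/L$, $\beta=(m-p)/L$, with $L$ as in \eqref{const}; in particular, the condition $L>0$ is exactly what makes both exponents positive.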
These two profiles have to cross at some point $\overline{\xi}\in(0,\xi_1(A))$. We finally define a self-similar supersolution to Eq. \eqref{eq1} as follows:
\begin{equation}\label{supersol.N}
z(x,t)=(T-t)^{-\alpha}f((1+|x|)(T-t)^{\beta}), \qquad f(\xi)=\left\{\begin{array}{ll}f_1(\xi;A), & \xi\in[0,\overline{\xi}],\\ f_2(\xi), &\xi\geq\overline{\xi}, \end{array}\right.
\end{equation}
and notice that this is indeed a supersolution for any $T>0$, as it can be described alternatively as
$$
z(x,t)=\min\{z_1(x,t),z_2(x,t)\}, \qquad z_i(x,t)=(T-t)^{-\alpha}f_i((1+|x|)(T-t)^{\beta}), \qquad i=1,2,
$$
and $z_1$, $z_2$ are in fact solutions. The same considerations about the magnitude of $\sigma$ as at the end of Step 2 ensure that the supersolution defined in \eqref{supersol.N} also works for values of $\sigma$ smaller than $2(1-p)/(m-1)$. We thus complete the proof by choosing a sufficiently small $T_0>0$ such that \eqref{interm11} holds true on the support of $u_0$, and noticing that $z$ is a supersolution to the approximating problems $(P_k)$ leading to the minimal solution. This again implies that $T_{\infty}\geq T_0>0$, completing the proof.
\end{proof}

The solution $\underline{u}$ constructed in the proof of Proposition \ref{prop.minimal} will be referred to as \emph{the minimal solution} to the Cauchy problem \eqref{eq1}-\eqref{eq2} and denoted by $M(u_0)$ in the sequel. Notice that the above proof does not imply that the minimal solution necessarily blows up in finite time. In fact, it might blow up or not according to whether $\sigma$ is larger or smaller than $2(1-p)/(m-1)$, but this is not easy to prove since we lack a comparison principle, and we refer the reader to the section of open problems at the end.
\section{Non-uniqueness for $m+p\geq2$}\label{sec.nonuniq}

A natural question raised by the previous section is whether, in the case $m+p\geq2$ and for compactly supported and continuous data $u_0$, the minimal solution $\underline{u}$ constructed in Proposition \ref{prop.minimal} is the only solution to the Cauchy problem \eqref{eq1}-\eqref{eq2}. For $\sigma=0$, non-uniqueness follows easily from the construction of a different solution, called maximal, which is shown to be strictly positive for any $t>0$ and thus different from $\underline{u}$, see \cite{dPV91}. We cannot construct such a maximal solution to Eq. \eqref{eq1}, since strictly positive solutions might not even exist at all in some ranges of $\sigma$ (see a comment with a formal intuition for such non-existence in the final section of this work). Thus, in order to prove the existence of multiple solutions (and in fact an infinite number of them), we adapt to our case the deeper results in \cite{dPV92}, where infinitely many solutions to the Cauchy problem \eqref{eq4}-\eqref{eq2} are constructed, based on a prescribed evolution of their interfaces. As a preliminary fact, let us recall that Eq. \eqref{eq4} (that is, the case $\sigma=0$) admits an \emph{absolute minimal solution} in self-similar form
\begin{equation}\label{abs.min}
E(x,t)=t^{1/(1-p)}\varphi(|x|t^{-\gamma}), \qquad \gamma=\frac{m-p}{2(1-p)},
\end{equation}
with zero initial condition $E(x,0)=0$ for any $x\in\mathbb{R}$, according to \cite{dPV90}. It is also shown in \cite{dPV90} that such a solution lies below any solution (and supersolution) to Eq. \eqref{eq4}, and consequently also below any solution to Eq. \eqref{eq1} (which is a strict supersolution to Eq. \eqref{eq4}).
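The exponents in \eqref{abs.min} are again dictated by scaling; as a quick check, which we include for convenience, inserting the ansatz \eqref{abs.min} into $u_t=\Delta u^m+u^p$, the terms $E_t$ and $E^p$ both carry the time factor $t^{p/(1-p)}$, while $\Delta E^m$ carries $t^{m/(1-p)-2\gamma}$. Balancing the exponents forces
$$
\frac{m}{1-p}-2\gamma=\frac{p}{1-p}, \qquad \mbox{that is,} \qquad \gamma=\frac{m-p}{2(1-p)},
$$
so that all three terms balance and the equation satisfied by the profile $\varphi$ contains no remaining powers of $t$.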
Moreover, the profile $\varphi$ of $E$ is non-increasing and compactly supported, thus ${\rm supp}\,E(t)\subseteq[-\varrho_0t^{\gamma},\varrho_0t^{\gamma}]$, where $\varrho_0>0$ is the right-interface point of the profile $\varphi$. With these elements and notation in mind, we can prove Theorem \ref{th.nonuniq} as an immediate consequence of a stronger result, adapting a construction from \cite{dPV92}. More precisely, by prescribing the behavior of the interface (under some limitations), we can obtain a solution to the Cauchy problem \eqref{eq1}-\eqref{eq2} with exactly that given interface at every (small) time $t\in(0,T)$. We formalize this below in dimension $N=1$; the result for radially symmetric solutions in dimension $N\geq2$ is then completely analogous.

\begin{proposition}\label{prop.nonuniq1}
Let $u_0\in C_0(\mathbb{R})$ be a compactly supported initial condition such that ${\rm supp}\,u_0=[r_0,R_0]$ for some $r_0$, $R_0\in\mathbb{R}$. Let $M(u_0)$ be the minimal solution to the Cauchy problem \eqref{eq1}-\eqref{eq2} with initial condition $u_0$, defined for $t\in(0,T_0)$, and denote by $s_l(t)$, $s_r(t)$ the left and right interfaces of $M(u_0)(t)$ for $t\in(0,T_0)$, that is, ${\rm supp}\,M(u_0)(t)=[s_l(t),s_r(t)]$. Let $\xi_l(t)$, $\xi_r(t)$ be two continuous functions of time such that $\xi_l(0)\leq r_0$, $\xi_r(0)\geq R_0$ and
$$
\xi_l(t_1)-\xi_l(t_2)\geq s_l(t_1)-s_l(t_2), \qquad \xi_r(t_2)-\xi_r(t_1)\geq s_r(t_2)-s_r(t_1),
$$
for any $t_1$, $t_2\in(0,T_0)$ such that $t_1<t_2$. Then there exist a shorter time interval $(0,T)\subset(0,T_0)$ and a solution $u$ to the Cauchy problem \eqref{eq1}-\eqref{eq2} with initial condition $u_0$, defined for $t\in(0,T)$, such that its left and right interfaces are given exactly by $\xi_l(t)$ and $\xi_r(t)$ for any $t\in(0,T)$.
\end{proposition}

\begin{proof}
We divide the proof into two steps.

\medskip

\noindent \textbf{Step 1. The fundamental extension.} This step adapts to our problem Step 1 in \cite[Proof of Corollary 1.2]{dPV92}, and by its end we in fact already obtain the proof of Theorem \ref{th.nonuniq}. The goal here is to construct a solution to our Cauchy problem whose support has an instantaneous jump to the right from $R_0=s_r(0)$ to $R_0+r$ at time $t=0$, for a given (fixed) $r>0$ (and of course, a perfectly similar construction can be done for the left interface). Let then $r>0$ be given. For any $\epsilon>0$ sufficiently small (it is enough to start for example from $\epsilon<\|u_0\|_{\infty}/2$) there exists a last point, the closest one to the right interface $R_0$ of $u_0$ (that we denote by $R_{\epsilon}$), such that $u_0(R_{\epsilon})=\epsilon$. Let us introduce the following compactly supported and continuous initial condition:
\begin{equation}\label{new.data}
u_{\epsilon,0}(x)=\left\{\begin{array}{ll}u_0(x), & {\rm for} \ x\in(-\infty,R_{\epsilon}),\\ \frac{\epsilon(R_0+r-x)}{R_0+r-R_{\epsilon}}, & {\rm for} \ x\in[R_{\epsilon},R_0+r], \\ 0, & {\rm for} \ x\in(R_0+r,\infty),\end{array}\right.
\end{equation}
that is, adding a linear extension to $u_0$ from $R_{\epsilon}$ to the new edge of the support $R_0+r$. Let $u_{\epsilon}=M(u_{\epsilon,0})$ be the minimal solution associated to this new initial condition. By comparison between minimal solutions (which holds true since it is obvious that any solution to Eq.
\eqref{eq1} with a larger initial condition than the given $u_0$ becomes a supersolution to any of the problems $(P_k)$ approximating the minimal solution $M(u_0)$), it readily follows that $u_{\epsilon_2}(x,t)\geq u_{\epsilon_1}(x,t)$ for any $\epsilon_2>\epsilon_1>0$ and at any $(x,t)\in\mathbb{R}\times(0,T)$, where $T>0$ is, for example, the lifetime of the solution corresponding to the biggest $\epsilon$ chosen. Recalling the construction of the absolute minimal solution $E(x,t)$ to Eq. \eqref{eq4} (see the details in \cite[Theorem 4]{dPV90}) and the fact that any non-trivial solution to our Eq. \eqref{eq1} is a strict supersolution to Eq. \eqref{eq4}, we easily conclude that for any $(x_0,t_0)\in\mathbb{R}\times(0,T)$ with $x_0\in[R_{\epsilon},R_0+r]$ we have
\begin{equation}\label{interm8}
u_{\epsilon}(x,t)\geq E(x-x_0,t-t_0), \qquad t>t_0.
\end{equation}
We can then define
$$
\overline{u}(x,t)=\lim\limits_{\epsilon\to0}u_{\epsilon}(x,t), \qquad (x,t)\in\mathbb{R}\times(0,T),
$$
which is a monotone limit, and it is easy to see that $\overline{u}$ is a weak solution to Eq. \eqref{eq1} using the Monotone Convergence Theorem (in fact we even have uniform convergence by Dini's Theorem, since the limit is continuous). Let us stress here that we do not need in this construction a bound from above to guarantee that the limit is finite, since we are dealing with a decreasing limit. It is then obvious that
$$
\overline{u}(x,0)=\lim\limits_{\epsilon\to0}u_{\epsilon,0}(x)=u_0(x),
$$
and the comparison from below with $E$ in \eqref{interm8} transfers to the limit $\overline{u}$, proving that for any $t>0$ we have $\overline{u}(x,t)>0$ for $x\in(R_0,R_0+r)$.

\medskip

\noindent \textbf{Step 2.
The iterative construction.} We now perform an iterative construction based on a discretization in time and the application of Step 1 on each interval of the discretization to produce an instantaneous jump of the supports. Let then $\xi_l(t)$ and $\xi_r(t)$ be two continuous, increasing functions of time as in the statement of Proposition \ref{prop.nonuniq1}. Fix a time $t\in(0,T)$ for a $T>0$ sufficiently small (to be chosen later). For any positive integer $n$ we construct an approximate solution $u_n$ as in \cite[Proof of Corollary 1.2]{dPV92}. We briefly sketch the construction here for the sake of completeness. Consider a partition
$$
0=t_0<t_1<t_2<\cdots<t_{n-1}<t_n=t, \qquad t_j=\frac{jt}{n}
$$
and construct the function $u_n$ by induction. More precisely, assume first that $u_n$ is already constructed for $t\in[0,t_j]$ and we want to pass to $t_{j+1}$. To this end, we start from the edges of the support of $u_n(t_j)$ and make them jump by applying Step 1 both to the right (with $r=\xi_r(t_{j+1})-\xi_r(t_j)$) and to the left (with $r=\xi_l(t_j)-\xi_l(t_{j+1})$), using here the fact that the speed of advance of the prescribed interfaces to the left and to the right is higher than that of the interfaces of the minimal solution with initial condition $u_0$. The precise details are easy and given in the above mentioned proof of Corollary 1.2 in \cite{dPV92}. In order to pass to the limit as $n\to\infty$ in the iteration we need a uniform bound from above, to show that the limit solution does not escape to infinity. We cannot use a translation of a minimal solution or a construction based on it (as was done in \cite[Lemma 2.4]{dPV92}), since our equation is not invariant under translations; instead, we can bound the iterated solutions uniformly from above by a sufficiently large non-increasing supersolution in self-similar form
$$
U(x,t)=(T-t)^{-\alpha}f((1+|x|)(T-t)^{\beta}), \qquad \alpha=\frac{\sigma+2}{L}, \ \beta=\frac{m-p}{L},
$$
similar to the ones introduced in \eqref{SSS}, with a profile $f(\xi)$ solving \eqref{SSODE} and having a right interface at $\xi_0\in(0,\infty)$. Indeed, comparison from above with such a supersolution $U$ can be performed, as the iterative construction of $u_n$ is based on adding up only minimal solutions at each iteration step, provided that at the fixed time $t>0$ for which we have built $u_n$ the supports of $u_n$ and $U$ are ordered, that is,
\begin{equation}\label{interm9}
\xi_r(t)<\xi_0(T-t)^{-\beta}, \qquad \xi_l(t)>-\xi_0(T-t)^{-\beta},
\end{equation}
which also gives a limitation on the lifetime $T>0$ depending on the two prescribed functions $\xi_r$, $\xi_l$. We notice that for ``faster'' advancing interface functions $\xi_l(t)$, $\xi_r(t)$, a smaller lifetime $T$ is expected. Once this condition is satisfied, we can pass to the limit in the discretization as $n\to\infty$ and obtain the desired weak solution
$$
v(x,t)=\lim\limits_{n\to\infty}u_n(x,t), \qquad t\in(0,T),
$$
with $T>0$ chosen sufficiently small according to \eqref{interm9}. The bound from below by $E$ at every point of positivity (as in Step 1), together with the construction and the continuity of the prescribed functions $\xi_l$, $\xi_r$, ensures that the left and right interfaces of $v$ at every time $t\in(0,T)$ are given exactly by the continuous functions $\xi_l(t)$ and $\xi_r(t)$.
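The exponent pair $(\alpha,\beta)$ of the self-similar barrier $U$ is forced by dimensional balance, assuming Eq. \eqref{eq1} has the form $u_t=\Delta u^m+(1+|x|)^{\sigma}u^p$ as the computations of the next section indicate. The following sketch checks this symbolically; the explicit normalization $L=\sigma(m-1)-2(1-p)$ used in it is our assumption here, chosen to be consistent with the equivalence $L>0\Leftrightarrow\sigma>2(1-p)/(m-1)$ invoked in the comments on open problems at the end of the paper:

```python
import sympy as sp

m, p, s = sp.symbols('m p sigma', positive=True)
a, b = sp.symbols('alpha beta')

# Ansatz U = (T-t)^(-alpha) f(xi), xi = (1+|x|)(T-t)^beta. Matching the
# (T-t)-exponents of U_t, Delta(U^m) (each spatial derivative brings a
# factor (T-t)^beta) and (1+|x|)^sigma U^p (with (1+|x|) = xi (T-t)^(-beta))
# yields a linear system for alpha and beta:
eq1 = sp.Eq(-a - 1, -m*a + 2*b)      # U_t balances Delta(U^m)
eq2 = sp.Eq(-a - 1, -p*a - s*b)      # U_t balances (1+|x|)^sigma U^p
sol = sp.solve([eq1, eq2], [a, b])

# Assumed normalization of the constant L (hypothetical, but consistent
# with L > 0 <=> sigma > 2(1-p)/(m-1) used later in the text)
L = s*(m - 1) - 2*(1 - p)
print(sp.simplify(sol[a] - (s + 2)/L))   # 0
print(sp.simplify(sol[b] - (m - p)/L))   # 0

# The same L also controls the comparison sigma/(1-p) vs 2/(m-1) used in
# the non-existence discussion: their gap has the sign of L for m>1, p<1
print(sp.simplify((s/(1 - p) - 2/(m - 1))*(1 - p)*(m - 1) - L))   # 0
```

With any other positive normalization of $L$ the ratios $\alpha/\beta$ and the sign statements are unchanged, which is all the comparison argument uses.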
In the meantime, the universal bound from above by the supersolution $U$ in self-similar form ensures that, at least in the time interval $(0,T)$, we have $v(x,t)<\infty$ for any $x\in\mathbb{R}$. \end{proof} \medskip \noindent \textbf{Remarks. 1.} Step 1 above already completes the proof of Theorem \ref{th.nonuniq} in dimension $N=1$. Indeed, for every $r>0$ we can construct a different solution to the same Cauchy problem \eqref{eq1}-\eqref{eq2}. A totally analogous extension can be constructed for radially symmetric initial conditions $u_0\in C_0(\mathbb{R}^N)$ by replacing $x$ with $r=|x|$, which allows us to work with the radial variable exactly as in dimension $N=1$, but using for the comparison from above the supersolutions introduced in Step 2 of the proof of Proposition \ref{prop.minimal}, thus completing the proof of Theorem \ref{th.nonuniq} also in dimension $N\geq2$. \noindent \textbf{2.} Our Proposition \ref{prop.nonuniq1} also holds true for $\sigma=0$ and slightly improves the result for Eq. \eqref{eq4} in \cite[Theorem 1.1 and Theorem 5.1]{dPV92}, since we do not need to consider the non-increasing majorant $\tilde{u}_0$ of the initial condition employed in the above mentioned work. \section{Aronson-B\'enilan estimates when $m+p<2$}\label{sec.ABE} This section is devoted to the deduction of the Aronson-B\'enilan estimates \eqref{ABE} in the homogeneous case $\sigma=0$. Let us recall here the \emph{pressure function}
$$
v=\frac{m}{m-1}u^{m-1},
$$
since most of the forthcoming work will be performed on this function. \begin{proof}[Proof of Theorem \ref{th.ABE}] The proof is inspired by the one of \cite[Proposition 9.4]{VPME} but is technically more involved.
We first derive, after rather straightforward calculations, the \emph{pressure equation} which will be used further in the present work, that is, the parabolic PDE satisfied by the function $v$ introduced above:
\begin{equation}\label{eq.pres}
\begin{split}
&v_t=(m-1)v\Delta v+|\nabla v|^2+K(m,p)v^{(m+p-2)/(m-1)}, \\
&K(m,p)=m\left(\frac{m-1}{m}\right)^{(m+p-2)/(m-1)}.
\end{split}
\end{equation}
Let us notice here the strong influence of the sign of $m+p-2$ on this equation, through its last term. In order to go further, we set $w=\Delta v$ and next derive the partial differential equation solved by $w$. To this end, we calculate the terms separately. On the one hand, for the reaction term we get
\begin{equation*}
\frac{\partial}{\partial x_i}K(m,p)v^{(m+p-2)/(m-1)}=K(m,p)\left[\frac{m+p-2}{m-1}v^{(p-1)/(m-1)}\frac{\partial v}{\partial x_i}\right],
\end{equation*}
hence
\begin{equation*}
\frac{\partial^2}{\partial x_i^2}K(m,p)v^{\frac{m+p-2}{m-1}}=K(m,p)\left[\frac{m+p-2}{m-1}v^{\frac{p-1}{m-1}}\frac{\partial^2v}{\partial x_i^2}+\frac{(2-m-p)(1-p)}{(m-1)^2}v^{\frac{p-m}{m-1}}\left(\frac{\partial v}{\partial x_i}\right)^2\right].
\end{equation*}
The contribution of the reaction term thus gives
\begin{equation*}
\begin{split}
\sum\limits_{i=1}^N&\frac{\partial^2}{\partial x_i^2}K(m,p)v^{\frac{m+p-2}{m-1}}=K(m,p)\frac{m+p-2}{m-1}v^{\frac{p-1}{m-1}}w\\&+K(m,p)\frac{(2-m-p)(1-p)}{(m-1)^2}v^{\frac{p-m}{m-1}}|\nabla v|^2=\mathcal{R}_1(x,v)w+\mathcal{R}_2(x,v),
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
&\mathcal{R}_1(x,v)=K(m,p)\frac{m+p-2}{m-1}v^{\frac{p-1}{m-1}}<0, \\
&\mathcal{R}_2(x,v)=K(m,p)\frac{(2-m-p)(1-p)}{(m-1)^2}v^{\frac{p-m}{m-1}}|\nabla v|^2\geq0.
\end{split}
\end{equation*}
On the other hand, the diffusion term can be worked out as in \cite[Proposition 9.4]{VPME} to get in the end
\begin{equation*}
\begin{split}
w_t&=(m-1)v\Delta w+2m\nabla v\cdot\nabla w+(m-1)w^2+2\sum\limits_{i,j=1}^N\left(\frac{\partial^2 v}{\partial x_i\partial x_j}\right)^2+\mathcal{R}_1(x,v)w+\mathcal{R}_2(x,v)\\
&\geq(m-1)v\Delta w+2m\nabla v\cdot\nabla w+\left(m-1+\frac{2}{N}\right)w^2+\mathcal{R}_1(x,v)w+\mathcal{R}_2(x,v)
\end{split}
\end{equation*}
after a straightforward application of the Cauchy-Schwarz inequality. This can be written equivalently as $\mathcal{L}w\geq\mathcal{R}_2(x,v)$, where
$$
\mathcal{L}w:=w_t-(m-1)v\Delta w-2m\nabla v\cdot\nabla w-\left(m-1+\frac{2}{N}\right)w^2-\mathcal{R}_1(x,v)w
$$
is a uniformly parabolic operator. Since $\mathcal{R}_2(x,v)\geq0$, we deduce that $\mathcal{L}w\geq0$, and we aim to find a subsolution for $\mathcal{L}$ depending only on time. We thus take, for $t>0$,
$$
\underline{W}(x,t)=-\frac{C}{t}, \qquad C=\frac{N}{N(m-1)+2}
$$
and calculate
$$
\mathcal{L}\underline{W}=\frac{C}{t^2}-\frac{C}{t^2}+\frac{C\mathcal{R}_1(x,v)}{t}<0,
$$
since $m+p-2<0$ in our range of exponents. Applying the comparison principle to the parabolic operator $\mathcal{L}$, we infer that
$$
\Delta v(x,t)=w(x,t)\geq\underline{W}(t)=-\frac{N}{(N(m-1)+2)t}, \qquad {\rm for \ any} \ (x,t)\in\mathbb{R}^N\times(0,\infty),
$$
as stated. \medskip All the previous calculations and the application of the maximum principle to the operator $\mathcal{L}$ are fully justified for solutions $u$ such that (in the pressure variable) $\mathcal{L}$ is uniformly parabolic, that is, when $v$, $\nabla v$ are bounded and $v>0$ uniformly.
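Two algebraic facts underpin this step: the pressure equation with the exact constant $K(m,p)$, and the choice of $C$, which makes $\underline{W}_t$ cancel the quadratic term $(m-1+2/N)\underline{W}^2$ exactly. Both can be checked symbolically. The sketch below works in one space dimension, assuming the unweighted equation reads $u_t=(u^m)_{xx}+u^p$, and spot-checks the pressure equation at the arbitrary values $m=3$, $p=1/2$, $u=e^{x+t}$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
q, p = sp.symbols('q p', positive=True)   # q = m - 1 > 0
m = q + 1
u = sp.Function('u', positive=True)(x, t)

# Pressure variable and the constant K(m,p) from the statement above
v = m/(m - 1)*u**(m - 1)
K = m*((m - 1)/m)**((m + p - 2)/(m - 1))

# One-dimensional unweighted equation: u_t = (u^m)_xx + u^p
ut = sp.diff(u**m, x, 2) + u**p
vt = sp.diff(v, t).subs(sp.Derivative(u, t), ut)   # v_t via the chain rule

# Claimed pressure equation: v_t = (m-1) v v_xx + v_x^2 + K v^((m+p-2)/(m-1))
rhs = (m - 1)*v*sp.diff(v, x, 2) + sp.diff(v, x)**2 + K*v**((m + p - 2)/(m - 1))
check = (vt - rhs).subs({q: 2, p: sp.Rational(1, 2)}).subs(u, sp.exp(x + t)).doit()
print(sp.simplify(check))   # 0

# The constant C of the subsolution W = -C/t satisfies (m-1+2/N) C^2 = C,
# so that W_t and -(m-1+2/N) W^2 cancel exactly in L(W)
N = sp.Symbol('N', positive=True)
C = N/(N*(m - 1) + 2)
print(sp.simplify((m - 1 + 2/N)*C**2 - C))   # 0
```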
In order to extend the Aronson-B\'enilan estimates to general weak solutions, we proceed by approximation. Let us first consider a continuous initial condition $u_0$ as in \eqref{icond}. Then there exists a unique solution $u$ to the Cauchy problem \eqref{eq4}-\eqref{eq2} in a time interval $(0,T)$, according to \cite{dPV91}. Let $u_k$ be the solution to the Cauchy problem with initial condition
$$
u_{0,k}(x)=u_0(x)+\frac{1}{k},
$$
for any positive integer $k$. We infer from \cite[Theorem 2.1]{dPV91} and its proof that there exists a unique solution $u_k$ to \eqref{eq4} with initial condition $u_{0,k}$, and it satisfies $u_k(x,t)\geq 1/k$ for any $x\in\mathbb{R}^N$, $t>0$. We further find from \cite[Theorem 8.1, Chapter V]{LSU} (which applies to our approximating solutions $u_k$, since they are now bounded from below by a positive constant) that $u_k$ has the regularity required for \eqref{eq4} to hold true in a classical sense, and that all the space derivatives of $u_k$ up to second order and the time derivatives of $u_k$ up to first order are uniformly bounded. In this case, the previous calculation applies rigorously to $u_k$ and we get that
\begin{equation}\label{interm12}
\Delta v_k(x,t)\geq-\frac{K}{t}, \qquad v_k=\frac{m}{m-1}u_k^{m-1}.
\end{equation}
Moreover, the comparison principle for \eqref{eq4} (see \cite[Theorem 2.1]{dPV91}) entails that the solutions $u_k$, $k\geq1$, form a non-increasing sequence of functions, and thus there exists a limit
$$
\overline{u}(x,t)=\lim\limits_{k\to\infty}u_k(x,t), \qquad (x,t)\in\mathbb{R}^N\times(0,T),
$$
and the uniform bounds on $u_k$ and on their derivatives up to second order, together with the Arzel\`a-Ascoli Theorem, imply that $u_k\to\overline{u}$ locally uniformly, and the same holds true for their first order derivatives with respect to the space variables. Then the uniform boundedness of $u_k$ and $\partial_tu_k$ gives the continuity of the limit function $\overline{u}$ with respect to the time variable on $(0,T)$, while the fact that $\overline{u}$ belongs to $L^{\infty}_{\rm loc}$ is obvious, as it is bounded from above by any $u_k$. We thus fulfill the regularity assumption (a) in Definition \ref{def.sol}. The Monotone Convergence Theorem then easily gives that $\overline{u}$ satisfies assumptions (b) and (c) in Definition \ref{def.sol}; hence, since $u_k(x,0)=u_{0,k}(x)\to u_0(x)$ as $k\to\infty$ on $\mathbb{R}^N$, we readily infer that $\overline{u}$ is a weak solution to the Cauchy problem \eqref{eq4}-\eqref{eq2}. Uniqueness of solutions to the latter Cauchy problem, established in \cite{dPV90} for continuous and bounded initial conditions, then proves that $\overline{u}=u$. We then come back to \eqref{interm12}, which, after multiplication by a non-negative test function and integration by parts, reads
\begin{equation}\label{interm13}
\int_0^{T}\int_{\mathbb{R}^N}\left(v_k(x,t)\Delta\varphi(x,t)+\frac{K}{t}\varphi(x,t)\right)\,dx\,dt\geq0,
\end{equation}
for any $\varphi\in C_0^{\infty}(\mathbb{R}^N\times(0,T))$, $\varphi\geq0$.
We pass to the limit in \eqref{interm13} as $k\to\infty$, taking into account that $v_k\to v$ locally uniformly as $k\to\infty$, and obtain the claimed distributional form \eqref{ABEdist}. \end{proof} We end this section with a corollary which will be used in the sequel. \begin{corollary}\label{cor.ABE} Under the same conditions as in Theorem \ref{th.ABE}, we have
$$
u_t\geq-\frac{Ku}{t}, \qquad K=\frac{N}{N(m-1)+2},
$$
in the sense of distributions in $\mathbb{R}^N$. \end{corollary} \begin{proof} At a formal level, we infer from \eqref{eq.pres} that $v_t\geq(m-1)v\Delta v$, hence
$$
(m-1)\frac{u_t}{u}=\frac{v_t}{v}\geq(m-1)\Delta v\geq-\frac{N(m-1)}{(N(m-1)+2)t}
$$
and we reach the conclusion with the same constant $K$ as in \eqref{ABE}. For general weak solutions the estimate is proved by using the same approximation as in the proof of Theorem \ref{th.ABE}. \end{proof} \medskip \noindent \textbf{Remark. Formal proof of the Aronson-B\'enilan estimates for} $\mathbf{\sigma>0}$. At a formal level, the Aronson-B\'enilan estimates \eqref{ABE} or \eqref{ABEdist} hold true also for $\sigma>0$ and $N\geq2$. Indeed, a slightly longer but straightforward calculation along the first few lines of the proof of Theorem \ref{th.ABE}, computing the derivatives up to second order of the reaction term, but applied to Eq. \eqref{eq1} with $\sigma>0$, gives
\begin{equation*}
\begin{split}
\sum\limits_{i=1}^N&\frac{\partial^2}{\partial x_i^2}K(m,p)(1+|x|)^{\sigma}v^{\frac{m+p-2}{m-1}}=K(m,p)\frac{m+p-2}{m-1}(1+|x|)^{\sigma}v^{\frac{p-1}{m-1}}w\\
&+K(m,p)(1+|x|)^{\sigma-2}v^{\frac{p-m}{m-1}}\left[\frac{(2-m-p)(1-p)}{(m-1)^2}(1+|x|)^2|\nabla v|^2\right.\\
&\left.-2\sigma\frac{2-m-p}{m-1}(1+|x|)v\frac{x}{|x|}\cdot\nabla v+\sigma\left(\sigma-1+(N-1)\frac{1+|x|}{|x|}\right)v^2\right]\\
&=\mathcal{R}_1(x,v)w+\mathcal{R}_2(x,v),
\end{split}
\end{equation*}
where
$$
\mathcal{R}_1(x,v)=K(m,p)\frac{m+p-2}{m-1}(1+|x|)^{\sigma}v^{\frac{p-1}{m-1}}<0
$$
and $\mathcal{R}_2(x,v)$ gathers the rest of the terms. In order to proceed with the comparison principle as we did in the body of the proof of Theorem \ref{th.ABE}, we still need to have $\mathcal{R}_2(x,v)\geq0$. To this end, we write $\mathcal{R}_2$ as a square plus remainders and examine the latter. More precisely, using once more a standard Cauchy-Schwarz inequality for the scalar product $\nabla v\cdot x/|x|$, we find
\begin{equation*}
\begin{split}
\frac{\mathcal{R}_2(x,v)}{K(m,p)(1+|x|)^{\sigma-2}v^{\frac{p-m}{m-1}}}&\geq\left[\frac{2-m-p}{m-1}(1+|x|)|\nabla v|-\sigma v\right]^2+\frac{2-m-p}{m-1}(1+|x|)^2|\nabla v|^2\\
&+\sigma\left[(N-1)\frac{1+|x|}{|x|}-1\right]v^2
\end{split}
\end{equation*}
and, taking into account that we are in the range $m+p-2<0$, it follows that $\mathcal{R}_2(x,v)\geq0$ provided that
$$
\sigma\left[(N-1)\frac{1+|x|}{|x|}-1\right]\geq0,
$$
which holds true for $\sigma>0$ if $N\geq2$. We thus conclude, at a formal level, that \eqref{ABEdist} should hold true in this case.
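The regrouping of $\mathcal{R}_2$ as a perfect square plus remainders is a purely algebraic identity in the quantities $X=(1+|x|)|\nabla v|$ and $v$, with equality before the sign discussion; a minimal symbolic sketch (here $r$ abbreviates $(N-1)(1+|x|)/|x|$):

```python
import sympy as sp

# X stands for (1+|x|)|grad v|, V for v, r for (N-1)(1+|x|)/|x|
m, p, s, r, X, V = sp.symbols('m p sigma r X V')

a = (2 - m - p)/(m - 1)

# Bracket appearing in R_2 after Cauchy-Schwarz on grad v . x/|x|,
# viewed as a quadratic form in X and V
lhs = (2 - m - p)*(1 - p)/(m - 1)**2*X**2 - 2*s*a*X*V + s*(s - 1 + r)*V**2

# Claimed regrouping: perfect square plus the two remainder terms
rhs = (a*X - s*V)**2 + a*X**2 + s*(r - 1)*V**2

print(sp.simplify(sp.expand(lhs - rhs)))   # 0
```

Since $a>0$ in the range $m+p<2$, the first two terms on the right are non-negative, which is exactly what reduces the sign of $\mathcal{R}_2$ to the condition on the last term.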
However, we left this part out of the statement of Theorem \ref{th.ABE}, since the final approximation argument leading to the rigorous proof for $\sigma=0$ cannot be performed here, as we are missing an existence and uniqueness result for solutions to the Cauchy problem \eqref{eq1}-\eqref{eq2} when $\sigma>0$. \section{Infinite speed of propagation when $m+p<2$}\label{sec.ISP} In this part we use the Aronson-B\'enilan estimates in Theorem \ref{th.ABE} to establish the infinite speed of propagation of the supports of solutions to Eq. \eqref{eq1} when $m+p-2<0$, and thus complete the proof of Theorem \ref{th.ISP}. Let us stress again here that Theorem \ref{th.ISP} has been proved in \cite[Lemma 2.4]{dPV90} for $\sigma=0$. We give here an independent proof, based on a completely different argument, and extend it to exponents $\sigma>0$. \begin{proof}[Proof of Theorem \ref{th.ISP}] In a first step, let $\sigma=0$ and assume for contradiction that, for some compactly supported initial condition $u_0$ as in \eqref{icond}, $u(t)$ remains compactly supported for $t\in(0,T_0)$. Recall the pressure equation \eqref{eq.pres}, which for $\sigma=0$ reads
\begin{equation}\label{eq.pres2}
v_t=(m-1)v\Delta v+|\nabla v|^2+K(m,p)v^{(m+p-2)/(m-1)}.
\end{equation}
At a formal level, since $m+p-2<0$, fixing some $t\in(0,T_0)$ one easily reaches a contradiction in \eqref{eq.pres2}. Indeed, since $\Delta v(x,t)\geq-K/t$ and $|\nabla v(x,t)|^2\geq0$ at any point $x\in\mathbb{R}^N$, it follows that at the interface point $x=s(t)$ we would get $v_t(s(t),t)=+\infty$, in order to compensate the negative power $(m+p-2)/(m-1)$ in the last term of the right-hand side, and this for any $t\in(0,T_0)$. This is obviously equivalent to the infinite speed of propagation of the supports.
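The rigorous version of this argument multiplies \eqref{eq.pres2} by $v^{(2-m-p)/(m-1)}$, which turns the time derivative into $\frac{m-1}{1-p}\left(v^{(1-p)/(m-1)}\right)_t$, a positive power of $v$ since $p<1$ and $m+p<2$. The chain-rule identity behind that substitution can be checked symbolically; a small sketch (the spot-check values $m=2$, $p=1/2$, $v(t)=t^2$ are arbitrary):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
m, p = sp.symbols('m p', positive=True)
v = sp.Function('v', positive=True)(t)

# Chain rule: ((m-1)/(1-p)) (v^g)' = v^(g-1) v' with g = (1-p)/(m-1),
# and the exponent g - 1 equals (2-m-p)/(m-1):
g = (1 - p)/(m - 1)
print(sp.simplify(g - 1 - (2 - m - p)/(m - 1)))   # 0

# Spot-check of the full identity at m = 2, p = 1/2, v(t) = t^2
lhs = v**((2 - m - p)/(m - 1))*sp.diff(v, t)
rhs = (m - 1)/(1 - p)*sp.diff(v**g, t)
check = (lhs - rhs).subs({m: 2, p: sp.Rational(1, 2)}).subs(v, t**2).doit()
print(sp.simplify(check))   # 0
```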
More rigorously, since $m+p<2$, we multiply \eqref{eq.pres2} by $v^{(2-m-p)/(m-1)}$ and by a test function $\varphi\in C_0^{\infty}(\mathbb{R}^N)$, $\varphi\geq0$, then integrate over $\mathbb{R}^N$ and over any time interval $(\tau_0,\tau_1)\subset(0,T_0)$, and drop the second term in the right-hand side (which is always positive), to obtain
\begin{equation}\label{interm10}
\begin{split}
\frac{m-1}{1-p}&\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}(v^{(1-p)/(m-1)})_t\varphi\,dx\,dt\geq(m-1)\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}v^{(1-p)/(m-1)}\Delta v\varphi\,dx\,dt\\&+K(m,p)\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}\varphi\,dx\,dt\\
&\geq-\frac{N(m-1)}{N(m-1)+2}\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}\frac{1}{t}v^{(1-p)/(m-1)}\varphi\,dx\,dt+K(m,p)\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}\varphi\,dx\,dt.
\end{split}
\end{equation}
We now consider a sequence of test functions $(\varphi_n)_{n\geq1}$ defined as follows:
$$
\varphi_n(x)=1 \ {\rm for} \ |x|\leq n, \quad 0\leq\varphi_n(x)\leq 1 \ {\rm for \ any} \ x\in\mathbb{R}^N, \quad {\rm supp}\,\varphi_n\subseteq B(0,2n),
$$
where $B(0,2n)=\{x\in\mathbb{R}^N: |x|\leq 2n\}$, and let $\varphi=\varphi_n$ in \eqref{interm10} for any positive integer $n\geq1$. Since the support of $v$ is uniformly localized for $t\in[\tau_0,\tau_1]$ (as $\tau_1<T_0$), it follows that the right-hand side of \eqref{interm10} tends to $+\infty$ as $n\to\infty$, due only to the last integral, the one of $\varphi_n$ alone.
It thus follows that
$$
\lim\limits_{n\to\infty}\int_{\tau_0}^{\tau_1}\int_{\mathbb{R}^N}(v^{(1-p)/(m-1)})_t\varphi_n\,dx\,dt=+\infty,
$$
or equivalently
$$
\lim\limits_{n\to\infty}\int_{\mathbb{R}^N}\left[v^{(1-p)/(m-1)}(\tau_1)-v^{(1-p)/(m-1)}(\tau_0)\right]\varphi_n\,dx=+\infty,
$$
which contradicts the localization of the supports of $v(t)$ for $t\in[\tau_0,\tau_1]$. Let us notice here that the chosen range of exponents $p\in(0,1)$ and $m+p<2$ was decisive: after multiplication by the positive power $v^{(2-m-p)/(m-1)}$, the left-hand side also involves a positive power $v^{(1-p)/(m-1)}$ of $v$, so we do not create new singularities at the edges of the supports. We pass now to $\sigma>0$. Assume that there is a weak solution $u$ to the Cauchy problem \eqref{eq1}-\eqref{eq2} defined for $t\in(0,T)$ with some $T>0$. Since $(1+|x|)^{\sigma}\geq1$ for any $x\in\mathbb{R}^N$, we deduce that $u$ is a supersolution to the Cauchy problem \eqref{eq4}-\eqref{eq2}. We infer from the comparison principle (which holds true for \eqref{eq4} and non-trivial initial data in the range $m+p<2$, see \cite{dPV91}) that $u(x,t)>0$ for any $(x,t)\in\mathbb{R}^N\times(0,T)$. \end{proof} \section*{Some extensions and open problems} We gather here some extensions related to the previous results that we consider interesting. \medskip \noindent \textbf{1. Finite time blow-up.} A natural question is whether any solution to Eq. \eqref{eq1} blows up in finite time or there are some initial conditions $u_0$ producing (minimal) solutions that are global in time.
Our conjecture is that, if $L>0$, \emph{any non-trivial solution is expected to blow up} in finite time, while if $L\leq0$, there are initial conditions producing solutions that are global in time (we recall that $L$ is defined in \eqref{const}). A formal argument for general finite time blow-up when $L>0$ is based on comparison with the subsolutions in self-similar form obtained in our recent papers \cite{IS20b, IS20c, IMS22}. More precisely, it goes by contradiction as follows: assume that there exists $u_0\in C_0(\mathbb{R}^N)$ as in \eqref{icond} such that the minimal solution $M(u_0)$ is defined for $t\in(0,\infty)$. We infer by comparison with the absolute minimal solution $E$ defined in \cite{dPV90} and \eqref{abs.min} that for large $t$ the solution $M(u_0)(t)$ is as large as we want, both in amplitude and in support. We can thus find a blow-up self-similar solution to Eq. \eqref{eq3} as in \cite{IS20b, IS20c, IMS22} (which is a subsolution to Eq. \eqref{eq1}) below it. If comparison were allowed, then we would get an easy contradiction, proving that any minimal solution (and then any other solution) blows up in finite time. But, as we see very well by considering the subsolution $\underline{U}$ with initial condition $\underline{U}(x,0)=0$ for any $x\in\mathbb{R}^N$, which is defined explicitly by
\begin{equation}\label{subsol}
\underline{U}(x,t)=\left(\frac{1}{1-p}\right)^{1/(p-1)}t^{1/(1-p)}(1+|x|)^{\sigma/(1-p)},
\end{equation}
comparison does not hold true in general, as otherwise any solution (even the ones with compact support) could be compared with $\underline{U}$ in order to force it to become positive everywhere, contradicting the finite speed of propagation at least in the range $m+p\geq2$. This opens the question of whether the self-similar solutions to Eq. \eqref{eq1} are minimal solutions in the sense of the construction performed in Proposition \ref{prop.minimal}. \medskip \noindent \textbf{2. Non-existence of positive solutions if $\sigma>2(1-p)/(m-1)$.} Strongly connected to the first comment, and expecting (at a formal level) that we might compare a strictly positive solution to Eq. \eqref{eq1} with the subsolution $\underline{U}$ introduced in \eqref{subsol}, we conjecture that, if $L>0$ (that is, $\sigma>2(1-p)/(m-1)$), there are no solutions to Eq. \eqref{eq1} such that $u(x,t)>0$ for any $x\in\mathbb{R}^N$. Indeed, assuming that the comparison can be performed rigorously when $u(x,t)>0$ for any $x\in\mathbb{R}^N$, we would get a solution with the local behavior
$$
u(x,t)\geq C(1+|x|)^{\sigma/(1-p)}, \qquad {\rm as} \ |x|\to\infty, \qquad {\rm for \ any} \ t>0.
$$
But any solution to Eq. \eqref{eq1} is a supersolution to the standard porous medium equation, and classical results on the porous medium equation (see for example \cite{AC83, BCP84, CVW87}) state that there are no solutions to it (and it seems to us that the proofs can be extended to supersolutions) increasing at infinity faster than $|x|^{2/(m-1)}$. Since $\sigma/(1-p)>2/(m-1)$ if $L>0$, we would be in this case. In particular, since infinite speed of propagation is in force for $m+p<2$ by Theorem \ref{th.ISP}, we expect complete non-existence of non-trivial solutions if $L>0$ and $m+p<2$. \medskip \noindent \textbf{3. Establishing which self-similar solutions are minimal.} An immediate adaptation of the result in \cite{IS20b} proves that Eq.
\eqref{eq1} with $m+p>2$ presents two types of blow-up self-similar solutions
$$
U(x,t)=(T-t)^{-\alpha}f((1+|x|)(T-t)^{\beta}), \qquad \alpha=\frac{\sigma+2}{L}, \ \beta=\frac{m-p}{L},
$$
which differ with respect to the local behavior of the profile $f(\xi)$ near the interface point $\xi_0\in(0,\infty)$, namely
$$
f(\xi)\sim(\xi_0-\xi)^{1/(m-1)} \ ({\rm Type \ I}) \ \ {\rm or} \ \ f(\xi)\sim(\xi_0-\xi)^{1/(1-p)} \ ({\rm Type \ II}),
$$
both taken in the limit $\xi\to\xi_0$, $\xi<\xi_0$. It is also shown, at least at a formal level, that these solutions satisfy two different interface equations. Thus, a natural question is whether any of these self-similar solutions is minimal in the sense of the construction in Proposition \ref{prop.minimal}, which would allow us to compare and conclude finite time blow-up. This is not an easy question, and our intuition suggests that minimality has to do with the interface equation: we might expect the solutions with interface of Type I to be minimal, while the other ones are not. This conjecture is supported by the analogy with the minimality of the traveling wave solutions to Eq. \eqref{eq4}; see for example \cite[Theorem 4.1]{dPV91}. \medskip \noindent \textbf{4. Connection between non-uniqueness and blow-up time.} A much deeper open question is whether, in the range $\sigma>2(1-p)/(m-1)$, prescribing a blow-up time $T$ and a function $u_0$, there exists a unique solution to the Cauchy problem \eqref{eq1}-\eqref{eq2} with initial condition $u_0$ blowing up in finite time exactly at the given time $T$. More precisely, taking $u_0\in C_0(\mathbb{R})$ satisfying \eqref{icond}, there exists a minimal solution $M(u_0)$ which (assuming that point 1 in this enumeration of open problems holds true, as we strongly expect) comes with a finite blow-up time $T_0\in(0,\infty)$.
We have also proved in Section \ref{sec.nonuniq} that the Cauchy problem \eqref{eq1}-\eqref{eq2} has an infinite number of compactly supported solutions with interfaces advancing faster than the minimal one, and estimate \eqref{interm9} shows that a faster advancing speed of the interface implies a shorter lifetime before blow-up. Thus one may naturally wonder whether, given $T\in(0,T_0)$, there exists a solution to the Cauchy problem \eqref{eq1}-\eqref{eq2} blowing up exactly at this time $T$. We do not have by now a suggestion of how to approach this problem, but it is in our opinion an interesting and deep open question. \bigskip \noindent \textbf{Acknowledgements.} R. G. I. and A. S. are partially supported by the Project PID2020-115273GB-I00 and by the Grant RED2022-134301-T funded by MCIN/AEI/10.13039/501100011033 (Spain). \bibliographystyle{plain} \begin{thebibliography}{1} \bibitem{AdB91} D. Andreucci and E. DiBenedetto, \emph{On the Cauchy problem and initial traces for a class of evolution equations with strongly nonlinear sources}, Ann. Scuola Norm. Sup. Pisa, \textbf{18} (1991). \bibitem{AT05} D. Andreucci and A. F. Tedeev, \emph{Universal bounds at the blow-up time for nonlinear parabolic equations}, Adv. Differential Equations, \textbf{10} (2005), no. 1, 89-120. \bibitem{ABE79} D. G. Aronson and Ph. B\'enilan, \emph{R\'egularit\'e des solutions de l'\'equation des milieux poreux dans $\mathbb{R}^N$} (French), C. R. Acad. Sci. Paris S\'er. A, \textbf{288} (1979), 103-105. \bibitem{AC83} D. G. Aronson and L. A. Caffarelli, \emph{The initial trace of a solution of the porous medium equation}, Trans. Amer. Math. Soc., \textbf{280} (1983), no. 1, 351-366. \bibitem{BL89} C. Bandle and H. Levine, \emph{On the existence and nonexistence of global solutions of reaction-diffusion equations in sectorial domains}, Trans. Amer. Math.
Soc., \textbf{316} (1989), 595-622. \bibitem{BK87} P. Baras and R. Kersner, \emph{Local and global solvability of a class of semilinear parabolic equations}, J. Differential Equations, \textbf{68} (1987), 238-252. \bibitem{BCP84} P. B\'enilan, M. G. Crandall and M. Pierre, \emph{Solutions of the porous medium equation in $\mathbb{R}^N$ under optimal conditions on the initial values}, Indiana Univ. Math. J., \textbf{33} (1984), no. 1, 51-87. \bibitem{CVW87} L. A. Caffarelli, J. L. V\'azquez and N. I. Wolanski, \emph{Lipschitz continuity of solutions and interfaces of the $N$-dimensional porous medium equation}, Indiana Univ. Math. J., \textbf{36} (1987), no. 2, 373-401. \bibitem{DiB83} E. DiBenedetto, \emph{Continuity of weak solutions to a general porous medium equation}, Indiana Univ. Math. J., \textbf{32} (1983), no. 1, 83-118. \bibitem{FT00} S. Filippas and A. Tertikas, \emph{On similarity solutions of a heat equation with a nonhomogeneous nonlinearity}, J. Differential Equations, \textbf{165} (2000), no. 2, 468-492. \bibitem{GLS10} J.-S. Guo, C.-S. Lin and M. Shimojo, \emph{Blow-up behavior for a parabolic equation with spatially dependent coefficient}, Dynam. Systems Appl., \textbf{19} (2010), no. 3-4, 415-433. \bibitem{GLS13} J.-S. Guo, C.-S. Lin and M. Shimojo, \emph{Blow-up for a reaction-diffusion equation with variable coefficient}, Appl. Math. Lett., \textbf{26} (2013), no. 1, 150-153. \bibitem{GS11} J.-S. Guo and M. Shimojo, \emph{Blowing up at zero points of potential for an initial boundary value problem}, Commun. Pure Appl. Anal., \textbf{10} (2011), no. 1, 161-177. \bibitem{GS18} J.-S. Guo and P. Souplet, \emph{Excluding blowup at zero points of the potential by means of Liouville-type theorems}, J. Differential Equations, \textbf{265} (2018), no. 10, 4942-4964. \bibitem{ILS23} R. G. Iagar, M. Latorre and A.
S\'anchez, \emph{Blow-up patterns for a reaction-diffusion equation with weighted reaction in general dimension}, Adv. Differential Equations (accepted November 2022), Preprint ArXiv no. 2205.09407. \bibitem{IMS22} R. G. Iagar, A. I. Mu\~{n}oz and A. S\'anchez, \emph{Self-similar blow-up patterns for a reaction-diffusion equation with weighted reaction in general dimension}, Commun. Pure Appl. Analysis, \textbf{21} (2022), no. 3, 891-925. \bibitem{IS19a} R. G. Iagar and A. S\'anchez, \emph{Blow up profiles for a quasilinear reaction-diffusion equation with weighted reaction with linear growth}, J. Dynam. Differential Equations, \textbf{31} (2019), no. 4, 2061-2094. \bibitem{IS20a} R. G. Iagar and A. S\'anchez, \emph{Blow up profiles for a reaction-diffusion equation with critical weighted reaction}, Nonlinear Anal., \textbf{191} (2020), paper no. 111628, 24 pages. \bibitem{IS20b} R. G. Iagar and A. S\'anchez, \emph{Self-similar blow-up profiles for a reaction-diffusion equation with strong weighted reaction}, Adv. Nonl. Studies, \textbf{20} (2020), no. 4, 867-894. \bibitem{IS21} R. G. Iagar and A. S\'anchez, \emph{Blow up profiles for a quasilinear reaction-diffusion equation with weighted reaction}, J. Differential Equations, \textbf{272} (2021), 560-605. \bibitem{IS20c} R. G. Iagar and A. S\'anchez, \emph{Self-similar blow-up profiles for a reaction-diffusion equation with critically strong weighted reaction}, J. Dynam. Differential Equations, \textbf{34} (2022), no. 2, 1139-1172. \bibitem{IS22} R. G. Iagar and A. S\'anchez, \emph{Separate variable blow-up patterns for a reaction-diffusion equation with critical weighted reaction}, Nonlinear Anal., \textbf{217} (2022), Article ID 112740, 33p. \bibitem{LSU} O. A. Ladyzhenskaya, V. A. Solonnikov and N. N. Uraltseva, \emph{Linear and quasilinear equations of parabolic type}, Transl. Math.
Monographs, \textbf{23}, Amer. Math. Soc., Providence, 1968. \bibitem{MS20} A. Mukai and Y. Seki, \emph{Refined construction of Type II blow-up solutions for semilinear heat equations with Joseph-Lundgren supercritical nonlinearity}, Discrete Contin. Dyn. Syst., \textbf{41} (2021), no. 10, 4847-4885. \bibitem{dP94} A. de Pablo, \emph{Large-time behaviour of solutions of a reaction-diffusion equation}, Proc. Roy. Soc. Edinburgh Sect. A, \textbf{124} (1994), no. 2, 389-398. \bibitem{dPV90} A. de Pablo and J. L. V\'azquez, \emph{The balance between strong reaction and slow diffusion}, Comm. Partial Differential Equations, \textbf{15} (1990), no. 2, 159-183. \bibitem{dPV91} A. de Pablo and J. L. V\'azquez, \emph{Travelling waves and finite propagation in a reaction-diffusion equation}, J. Differential Equations, \textbf{93} (1991), no. 1, 19-61. \bibitem{dPV92} A. de Pablo and J. L. V\'azquez, \emph{An overdetermined initial and boundary-value problem for a reaction-diffusion equation}, Nonlinear Anal., \textbf{19} (1992), no. 3, 259-269. \bibitem{Pi97} R. G. Pinsky, \emph{Existence and nonexistence of global solutions for $u_t=\Delta u+a(x)u^p$ in $\mathbb{R}^d$}, J. Differential Equations, \textbf{133} (1997), no. 1, 152-177. \bibitem{Pi98} R. G. Pinsky, \emph{The behavior of the life span for solutions to $u_t=\Delta u+a(x)u^p$ in $\mathbb{R}^d$}, J. Differential Equations, \textbf{147} (1998), no. 1, 30-57. \bibitem{Qi98} Y.-W. Qi, \emph{The critical exponents of parabolic equations and blow-up in $\mathbb{R}^N$}, Proc. Roy. Soc. Edinburgh Sect. A, \textbf{128} (1998), no. 1, 123-136. \bibitem{QS} P. Quittner and Ph. Souplet, \emph{Superlinear parabolic problems. Blow-up, global existence and steady states}, Birkh\"auser Advanced Texts, Birkh\"auser Verlag, Basel, 2007. \bibitem{Sa83} P. E.
Sacks, \emph{The initial and boundary-value problem for a class of degenerate parabolic equations}, Comm. Partial Differential Equations, \textbf{8} (1983), no. 7, 693-733. \bibitem{S4} A. A. Samarskii, V. A. Galaktionov, S. P. Kurdyumov and A. P. Mikhailov, \emph{Blow-up in quasilinear parabolic equations}, de Gruyter Expositions in Mathematics, \textbf{19}, W. de Gruyter, Berlin, 1995. \bibitem{Su02} R. Suzuki, \emph{Existence and nonexistence of global solutions of quasilinear parabolic equations}, J. Math. Soc. Japan, \textbf{54} (2002), no. 4, 747-792. \bibitem{VPME} J. L. V\'azquez, \emph{The porous medium equation. Mathematical theory}, Oxford Monographs in Mathematics, Oxford University Press, 2007. \end{thebibliography} \end{document}
\begin{document} \pagestyle{empty} \ \vskip1cm \centerline {\Cfont On sharp embeddings of Besov and Triebel-Lizorkin spaces} \vskip.5cm \centerline {\Cfont in the subcritical case} \vskip2cm \centerline {\Bfont Jan Vyb\'\i ral} \vskip.3cm \centerline {\Afont Mathematisches Institut, Universit\"at Jena} \centerline {\Afont Ernst-Abbe-Platz 2, 07740 Jena, Germany} \centerline {email:\ {\tt [email protected]}} \vskip.5cm \centerline {\bf \today} \vskip.5cm \begin{abstract} We discuss the growth envelopes of the Fourier-analytically defined Besov and Triebel-Lizorkin spaces $B^s_{p,q}({\mathbb R}^n)$ and $F^s_{p,q}({\mathbb R}^n)$ for $s=\sigma_p=n\max(\frac 1p-1,0)$. These results may also be reformulated as optimal embeddings into the scale of Lorentz spaces $L_{p,q}({\mathbb R}^n)$. We close several open problems outlined already by H.~Triebel in \cite{T-SF} and explicitly formulated by D.~D.~Haroske in \cite{H}. \end{abstract} {\bf AMS Classification: }{46E35, 46E30} {\bf Keywords and phrases:} {Besov spaces, Triebel-Lizorkin spaces, rearrangement invariant spaces, Lorentz spaces, growth envelopes} \pagestyle{fancy} \section{Introduction and main results} We denote by $B^s_{p,q}({\mathbb R}^n)$ and $F^s_{p,q}({\mathbb R}^n)$ the Fourier-analytic Besov and Triebel-Lizorkin spaces (see Definition \ref{defsp} for details). The embeddings of these function spaces (and of other spaces of smooth functions) play an important role in functional analysis. If $s>\frac {n}{p}$, then these spaces are continuously embedded into $C({\mathbb R}^n)$, the space of all complex-valued bounded and uniformly continuous functions on ${\mathbb R}^n$, normed in the usual way. If $s<\frac np$, then these function spaces also contain unbounded functions. This statement also holds for $s=\frac np$ under some additional restrictions on the parameters $p$ and $q$. We refer to \cite[Theorem 3.3.1]{SiTr} for a complete overview.
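Since the borderline smoothness $\sigma_p=n\max(\frac 1p-1,0)$ recurs throughout the paper, we record a trivial numerical sketch of it. The snippet and its function name are ours, added purely for illustration:

```python
def sigma_p(p, n=1):
    """Borderline smoothness sigma_p = n * max(1/p - 1, 0)."""
    return n * max(1.0 / p - 1.0, 0.0)

# sigma_p vanishes for p >= 1 and equals n(1/p - 1) for 0 < p < 1
print(sigma_p(2.0), sigma_p(1.0), sigma_p(0.5, n=3))  # 0.0 0.0 3.0
```

In particular, the subcritical case $s=\sigma_p$ studied here is $s=0$ whenever $p\ge 1$.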
To describe the singularities of these unbounded elements, we use the technique of the non-increasing rearrangement. \begin{dfn}\label{defR} Let $\mu$ be the Lebesgue measure in ${\mathbb R}^n$. If $h$ is a measurable function on ${\mathbb R}^n$, we define the non-increasing rearrangement of $h$ by \begin{equation}\label{eq':2.1} h^*(t)=\sup \{\lambda>0: \mu\{x\in{\mathbb R}^n: |h(x)|>\lambda\}>t \},\qquad t\in (0,\infty). \end{equation} \end{dfn} To be able to apply this procedure to elements of $A^s_{p,q}({\mathbb R}^n)$ (with $A$ standing for $B$ or $F$), we have to know whether all the distributions in $A^s_{p,q}({\mathbb R}^n)$ may be interpreted as measurable functions. This is the case if, and only if, $A^s_{p,q}({\mathbb R}^n)\hookrightarrow L_1^{\rm loc}({\mathbb R}^n)$, the space of all measurable, locally integrable functions on ${\mathbb R}^n$. A complete treatment of this question may be found in \cite[Theorem 3.3.2]{SiTr}: \begin{equation}\label{B1} B^s_{p,q}({\mathbb R}^n)\hookrightarrow L_1^{\rm loc}({\mathbb R}^n) \Leftrightarrow \begin{cases}\text{either}&s>\sigma_p:=n\max(\frac 1p-1,0),\\ \text{or}\quad &s=\sigma_p, 1<p\le\infty, 0<q\le\min(p,2),\\ \text{or}\quad &s=\sigma_p, 0<p\le 1, 0<q\le1 \end{cases} \end{equation} and \begin{equation}\label{F1} F^s_{p,q}({\mathbb R}^n)\hookrightarrow L_1^{\rm loc}({\mathbb R}^n) \Leftrightarrow \begin{cases}\text{either}&s>\sigma_p,\\ \text{or}\quad &s=\sigma_p, 1\le p<\infty, 0<q\le2,\\ \text{or}\quad &s=\sigma_p, 0<p<1, 0<q\le\infty. \end{cases} \end{equation} Let us assume that a function space $X$ is embedded into $L_1^{\rm loc}({\mathbb R}^n)$. The {\it growth envelope function} of $X$ was defined by D.~D.~Haroske and H.~Triebel (see \cite{H'}, \cite{H}, \cite{T-SF} and the references given there) by \begin{equation*} {\mathcal E}^X_G(t):=\sup_{||f|X||\le 1}f^*(t),\quad 0<t<1.
\end{equation*} If ${\mathcal E}^X_G(t)\approx t^{-\alpha}$ for $0<t<1$ and some $\alpha>0$, then we define the {\it growth envelope index} $u_X$ as the infimum of all numbers $v$, $0<v\le\infty$, such that \begin{equation}\label{eq:index} \left( \int_{0}^\epsilon \left[\frac{f^*(t)}{{\mathcal E}^X_G(t)}\right]^{v}\frac{dt}{t}\right)^{1/v} \le c\, ||f|X|| \end{equation} (with the usual modification for $v=\infty$) holds for some $\epsilon>0, c>0$ and all $f\in X.$ The pair ${\mathfrak E_G}(X)=({\mathcal E_G^X},u_X)$ is called the {\it growth envelope} of the function space $X$. In the case $\sigma_p<s<\frac np$, the growth envelopes of $A^s_{p,q}({\mathbb R}^n)$ are known, cf. \cite[Theorem 15.2]{T-SF} and \cite[Theorem 8.1]{H}. If $s=\sigma_p$ and \eqref{B1} or \eqref{F1} is fulfilled in the $B$ or $F$ case, respectively, then the known information is not complete, cf. \cite[Rem. 12.5, 15.1]{T-SF} and \cite[Prop. 8.12, 8.14 and Rem. 8.15]{H}: \begin{thm} \label{thm1.1} (i) Let $1< p<\infty$ and $0<q\le \min(p,2)$. Then \begin{equation*} \mathfrak E_G(B^0_{p,q})=(t^{-\frac{1}{p}},u)\qquad\text{with}\quad q\le u\le p. \end{equation*} (ii) Let $1\le p<\infty$ and $0<q\le 2$. Then \begin{equation*} \mathfrak E_G(F^0_{p,q})=(t^{-\frac{1}{p}},p). \end{equation*} (iii) Let $0<p\le 1$ and $0<q\le 1$. Then \begin{equation*} \mathfrak E_G(B^{\sigma_p}_{p,q})=(t^{-1},u)\qquad\text{with}\quad q\le u\le 1. \end{equation*} (iv) Let $0<p<1$ and $0<q\le\infty$. Then \begin{equation*} \mathfrak E_G(F^{\sigma_p}_{p,q})=(t^{-1},u)\qquad\text{with}\quad p\le u\le 1. \end{equation*} \end{thm} We fill all the above-mentioned gaps. \begin{thm} \label{thm1.2} (i) Let $1\le p<\infty$ and $0<q\le \min(p,2)$. Then \begin{equation*} \mathfrak E_G(B^0_{p,q})=(t^{-\frac{1}{p}},p). \end{equation*} (ii) Let $0<p<1$ and $0<q\le 1$. Then \begin{equation*} \mathfrak E_G(B^{\sigma_p}_{p,q})=(t^{-1},q). \end{equation*} (iii) Let $0<p<1$ and $0<q\le\infty$.
Then \begin{equation*} \mathfrak E_G(F^{\sigma_p}_{p,q})=(t^{-1},p). \end{equation*} \end{thm} We also reformulate these results as optimal embeddings into the scale of Lorentz spaces (cf. Definition \ref{Lpq}): \begin{thm} \label{thm1.3} (i) Let $1\le p<\infty$ and $0<q\le \min(p,2)$. Then \begin{equation*} B^0_{p,q}({\mathbb R}^n)\hookrightarrow L_{p}({\mathbb R}^n). \end{equation*} (ii) Let $0<p<1$ and $0<q\le 1$. Then \begin{equation}\label{emb} B^{\sigma_p}_{p,q}({\mathbb R}^n)\hookrightarrow L_{1,q}({\mathbb R}^n). \end{equation} (iii) Let $0<p<1$ and $0<q\le\infty$. Then \begin{equation*} F^{\sigma_p}_{p,q}({\mathbb R}^n)\hookrightarrow L_{1,p}({\mathbb R}^n) \end{equation*} and all these embeddings are optimal with respect to the second fine parameter of the scale of the Lorentz spaces. \end{thm} \begin{rem} (i) Let us observe that \eqref{emb} improves \cite[Theorem 3.2.1]{SiTr} and \cite[Theorem 2.2.3]{SR}, where the embedding $B^{n(\frac 1p-1)}_{p,q}({\mathbb R}^n)\hookrightarrow L_{1}({\mathbb R}^n)$ is proved for all $0<p<1$ and $0<q\le 1.$ (ii) Let us also mention that growth envelopes for function spaces with minimal smoothness were recently studied in \cite{CGO}. These authors worked with spaces defined by differences, and therefore their results are of a different nature. \end{rem} \section{Preliminaries, notation and definitions} We use standard notation: ${\mathbb N}$ denotes the collection of all natural numbers, ${\mathbb R}^n$ is the Euclidean $n$-dimensional space, where $n\in{\mathbb N}$, and ${\mathbb C}$ stands for the complex plane. \begin{dfn}\label{Lpq} (i) Let $0< p\le\infty$. We denote by $L_p({\mathbb R}^n)$ the Lebesgue space endowed with the quasi-norm $$ ||f|L_p({\mathbb R}^n)||=\begin{cases} \displaystyle\biggl(\int_{{\mathbb R}^n}|f(x)|^p\,dx\biggr)^{1/p},\quad& 0< p<\infty,\\ \displaystyle\operatornamewithlimits{ess\,sup}_{x\in{\mathbb R}^n} |f(x)|,\quad &p=\infty. \end{cases} $$ (ii) Let $0<p,q\le\infty$.
Then the Lorentz space $L_{p,q}({\mathbb R}^n)$ consists of all $f\in L_1^{\rm loc}({\mathbb R}^n)$ such that the quantity $$ ||f|L_{p,q}({\mathbb R}^n)||=\begin{cases} \displaystyle\left(\int_0^\infty [t^{\frac 1p}f^*(t)]^q\frac{dt}{t}\right)^{1/q},\quad 0<q<\infty,\\ \displaystyle \sup_{0<t<\infty} t^{\frac 1p}f^*(t),\quad q=\infty \end{cases} $$ is finite. \end{dfn} \begin{rem} These definitions are well known; we refer to \cite[Chapter 4.4]{BS} for details and further references. We shall need only very few properties of these spaces. Obviously, $L_{p,p}({\mathbb R}^n)=L_p({\mathbb R}^n)$. If $0< q_1 \le q_2 \le\infty$, then $L_{p,q_1}({\mathbb R}^n)\hookrightarrow L_{p,q_2}({\mathbb R}^n)$; that is, the Lorentz spaces are monotonically ordered in the fine index $q$. We shall make use of the following lemma. \end{rem} \begin{lem}\label{lem1q} Let $0<q<1$. Then $||\cdot|L_{1,q}({\mathbb R}^n)||$ is a $q$-norm; that is, $$ ||f_1+f_2|L_{1,q}({\mathbb R}^n)||^q\le ||f_1|L_{1,q}({\mathbb R}^n)||^q+ ||f_2|L_{1,q}({\mathbb R}^n)||^q $$ holds for all $f_1,f_2\in L_{1,q}({\mathbb R}^n).$ \end{lem} \begin{proof} First note that the function $s\mapsto s^q$ is increasing on $(0,\infty)$ for every $0<q<\infty$. This leads to the identity \begin{equation}\label{eqq} (|f|^q)^*(t)= (f^*)^q(t), \end{equation} which holds for all $t>0$, $0<q<\infty$ and all measurable functions $f$. The reader may also consult \cite[Proposition 2.1.7]{BS}. Using \eqref{eqq} and $0<q<1$, we obtain \begin{align*} ||f_1+f_2|L_{1,q}({\mathbb R}^n)||^q&= \int_0^\infty t^{q-1}((f_1+f_2)^*(t))^qdt \le \int_0^\infty t^{q-1}((|f_1|+|f_2|)^*(t))^qdt\\ &= \int_0^\infty t^{q-1}((|f_1|+|f_2|)^q)^*(t)dt \le \int_0^\infty t^{q-1}(|f_1|^q+|f_2|^q)^*(t)dt. \end{align*} We observe that $t\mapsto t^{q-1}$ is a decreasing function on $(0,\infty)$ and that $$ \int_0^\xi (|f_1|^q+|f_2|^q)^*(t) dt\le \int_0^\xi (|f_1|^q)^*(t)dt +\int_0^\xi (|f_2|^q)^*(t) dt $$ holds for all $\xi\in(0,\infty)$. Hence, by Hardy's lemma (cf.
\cite[Proposition 2.3.6]{BS}), \begin{align*} ||f_1+f_2|L_{1,q}({\mathbb R}^n)||^q&\le \int_0^\infty t^{q-1}(|f_1|^q)^*(t)dt+\int_0^\infty t^{q-1}(|f_2|^q)^*(t)dt\\ &=||f_1|L_{1,q}({\mathbb R}^n)||^q+ ||f_2|L_{1,q}({\mathbb R}^n)||^q. \end{align*} \end{proof} Let $S({\mathbb R}^n)$ be the Schwartz space of all complex-valued, rapidly decreasing, infinitely differentiable functions on ${\mathbb R}^n$ and let $S'({\mathbb R}^n)$ be its dual, the space of all tempered distributions. For $f\in S'({\mathbb R}^n)$ we denote by $\widehat f= F f$ its Fourier transform and by $f^\vee$ or $F^{-1}f$ its inverse Fourier transform. We give a Fourier-analytic definition of the Besov and Triebel-Lizorkin spaces, which relies on the so-called {\it dyadic resolution of unity}. Let $\varphi\in S({\mathbb R}^n)$ with \begin{equation}\label{eq0} \varphi(x)=1\quad \text{if}\quad |x|\le 1\quad\text{and}\quad\varphi(x)=0\quad\text{if}\quad|x|\ge \frac 32. \end{equation} We put $\varphi_0=\varphi$ and $\varphi_j(x)=\varphi(2^{-j}x)-\varphi(2^{-j+1}x)$ for $j\in{\mathbb N}$ and $x\in{\mathbb R}^n$. This leads to the identity \begin{equation*} \sum_{j=0}^\infty\varphi_j(x)=1,\qquad x\in{\mathbb R}^n. \end{equation*} \begin{dfn}\label{defsp} (i) Let $s\in{\mathbb R}, 0< p,q\le\infty$. Then $B^{s}_{pq}({\mathbb R}^n)$ is the collection of all $f\in S'({\mathbb R}^n)$ such that \begin{equation}\label{eq1} ||f|B^s_{pq}({\mathbb R}^n)||=\biggl(\sum_{j=0}^\infty 2^{jsq}||(\varphi_j \widehat f)^\vee|L_p({\mathbb R}^n)||^q\biggr)^{1/q}<\infty \end{equation} (with the usual modification for $q=\infty$). (ii) Let $s\in{\mathbb R}, 0< p<\infty, 0< q\le\infty$. Then $F^{s}_{pq}({\mathbb R}^n)$ is the collection of all $f\in S'({\mathbb R}^n)$ such that \begin{equation}\label{eq2} ||f|F^s_{pq}({\mathbb R}^n)||=\biggl|\biggl|\biggl(\sum_{j=0}^\infty 2^{jsq}|(\varphi_j \widehat f)^\vee(\cdot)|^q\biggr)^{1/q}|L_p({\mathbb R}^n)\biggr|\biggr|<\infty \end{equation} (with the usual modification for $q=\infty$).
\end{dfn} \begin{rem} These spaces have a long history. In this context we recommend \cite {P}, \cite{T-FS1}, \cite{T-FS2} and \cite{T-FS3} as standard references. We point out that the spaces $B^s_{pq}({\mathbb R}^n)$ and $F^s_{pq}({\mathbb R}^n)$ are independent of the choice of $\varphi$ in the sense of equivalent (quasi-)norms. Special cases of these two scales include Lebesgue spaces, Sobolev spaces, H\"older-Zygmund spaces and many other important function spaces. \end{rem} We introduce the sequence spaces associated with the Besov and Triebel-Lizorkin spaces. Let $m\in{\mathbb Z}^n$ and $j\in{\mathbb N}_0$. Then $Q_{j\, m}$ denotes the closed cube in ${\mathbb R}^n$ with sides parallel to the coordinate axes, centred at $2^{-j}m$, and with side length $2^{-j}$. By $\chi_{j\, m}=\chi_{Q_{j\,m}}$ we denote the characteristic function of $Q_{j\,m}$. If $$ \lambda=\{\lambda_{j\,m}\in{\mathbb C}:j\in{\mathbb N}_0, m\in{\mathbb Z}^n\}, $$ $-\infty<s<\infty$ and $0<p,q\le \infty$, we set \begin{equation}\label{eq:2.10} ||\lambda| b^s_{pq}||=\biggl(\sum_{j=0}^\infty 2^{j (s-\frac np)q}\Bigl(\sum_{m\in{\mathbb Z}^n}|\lambda_{j\, m}|^p \Bigr)^{\frac qp}\biggr)^\frac 1q, \end{equation} appropriately modified if $p=\infty$ and/or $q=\infty$. If $p<\infty$, we also define \begin{equation}\label{defspqf'} ||\lambda|f^{s}_{pq}||=\biggl|\biggl| \biggl(\sum_{j=0}^\infty \sum_{m\in{\mathbb Z}^n}|2^{j s}\lambda_{j\,m}\chi_{j\,m}(\cdot)|^q\biggr)^{1/q} |L_p({\mathbb R}^n)\biggr|\biggr|. \end{equation} The connection between the function spaces $B^s_{pq}({\mathbb R}^n)$, $F^s_{pq}({\mathbb R}^n)$ and the sequence spaces $b^s_{pq}$, $f^s_{pq}$ may be given by various decomposition techniques; we refer to \cite[Chapters 2 and 3]{T-FS3} for details and further references. All the unimportant constants are denoted by the letter $c$, whose meaning may differ from one occurrence to another.
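For readers who wish to experiment with the sequence-space quasi-norm \eqref{eq:2.10}, here is a minimal numerical sketch. It is ours, not part of the standard treatment; the function name and the dictionary encoding of $\lambda$ are our own choices, and it covers only finitely supported sequences with finite $p,q$:

```python
def b_norm(lam, s, p, q, n=1):
    """Quasi-norm (2.10) of a finitely supported sequence lambda.

    lam maps a level j >= 0 to the list of its nonzero coefficients
    lambda_{j,m}; all omitted coefficients are taken to be zero.
    """
    total = 0.0
    for j, coeffs in lam.items():
        # inner l_p norm over m at fixed level j
        inner = sum(abs(c) ** p for c in coeffs) ** (1.0 / p)
        # dyadic weight 2^{j(s - n/p)}, accumulated into the outer l_q sum
        total += (2.0 ** (j * (s - n / p)) * inner) ** q
    return total ** (1.0 / q)

# a single unit coefficient at level j = 0 has norm 1 in every b^s_{pq}
print(b_norm({0: [1.0]}, s=0.5, p=2, q=2))  # 1.0
```

For $s=0$, $p=q=1$, $n=1$, a unit coefficient at level $j$ contributes $2^{-j}$, matching the weight $2^{j(s-n/p)}$ in \eqref{eq:2.10}.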
If $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$ are two sequences of positive real numbers, we write $a_n\lesssim b_n$ if, and only if, there is a positive real number $c>0$ such that $a_n\le c\, b_n$, $n\in{\mathbb N}.$ Furthermore, $a_n\approx b_n$ means that $a_n\lesssim b_n$ and simultaneously $b_n\lesssim a_n$. \section{Proofs of the main results} \subsection{Proof of Theorem \ref{thm1.2} (i)} In view of Theorem \ref{thm1.1}, it is enough to prove that for $1\le p<\infty$ and $0<q\le\min(p,2)$ the index $u$ associated to $B^0_{p,q}({\mathbb R}^n)$ is greater than or equal to $p$. Assume, to the contrary, that \eqref{eq:index} is fulfilled for some $0<v<p$, $\epsilon>0$, $c>0$ and all $f\in B^0_{p,q}({\mathbb R}^n)$. Let $\psi$ be a $C^{\infty}$ function on ${\mathbb R}^n$, not identically zero, supported in $[0,1]^n$ and with $\int_{{\mathbb R}^n}\psi(x)dx =0.$ Let $J\in{\mathbb N}$ be such that $2^{-Jn}<\epsilon$ and consider the functions \begin{equation}\label{atoms} f_j(x)=\sum_{m=1}^{2^{(j-J)n}}\lambda_{j\, m}\psi (2^j(x-(m,0,\dots,0))),\quad j>J, \end{equation} where $$ \lambda_{j\, m}=\frac{1}{m^{\frac 1p}\log^{\frac 1v} (m+1)},\quad m=1,\dots, 2^{(j-J)n}. $$ Then \eqref{atoms} represents an atomic decomposition of $f_j$ in the space $B^0_{p,q}({\mathbb R}^n)$ according to \cite[Chapter 1.5]{T-FS3} and we obtain (recall that $v<p$) \begin{align}\notag ||f_j|B^0_{p,q}({\mathbb R}^n)||&\lesssim 2^{-j\frac np}\left(\sum_{m=1}^{2^{(j-J)n}}\lambda_{j\, m}^p\right)^{1/p} \le 2^{-j\frac np}\left(\sum_{m=1}^\infty m^{-1}(\log(m+1))^{-\frac pv}\right)^{1/p}\\ &\lesssim 2^{-j\frac np}\label{eq:small}.
\end{align} On the other hand, \begin{align*} \left(\int_0^\epsilon \left[f_j^*(t)t^{\frac 1p}\right]^v\frac{dt}{t}\right)^{1/v} &\ge \left(\int_0^{2^{-Jn}}f_j^*(t)^v t^{v/p-1} dt\right)^{1/v} \gtrsim \left(\sum_{m=1}^{2^{(j-J)n}}\lambda_{j\, m}^v\int_{c\,2^{-jn}(m-1)}^{c\,2^{-jn}m} t^{v/p-1}dt\right)^{1/v}\\ &\gtrsim \left(\sum_{m=1}^{2^{(j-J)n}}\lambda_{j\, m}^v 2^{-jnv/p}m^{v/p-1} \right)^{1/v} =2^{-j\frac{n}{p}}\left(\sum_{m=1}^{2^{(j-J)n}}\frac{1}{m \log(m+1)}\right)^{1/v}. \end{align*} As the last sum diverges as $j\to\infty$, this contradicts \eqref{eq:small}, and hence \eqref{eq:index} cannot hold for all $f_j$, $j>J.$ \begin{rem} Observe that Theorem \ref{thm1.3} (i) is a direct consequence of Theorem \ref{thm1.2} (i). The embeddings $B^0_{1,q}({\mathbb R}^n)\hookrightarrow B^0_{1,1}({\mathbb R}^n)\hookrightarrow L_1({\mathbb R}^n)$ if $p=1$ and $B^0_{p,q}({\mathbb R}^n)\hookrightarrow F^0_{p,2}({\mathbb R}^n)=L_p({\mathbb R}^n)$ if $1<p<\infty$ show that $B^0_{p,q}({\mathbb R}^n)\hookrightarrow L_p({\mathbb R}^n)$. And Theorem \ref{thm1.2} (i) implies that if $B^0_{p,q}({\mathbb R}^n)\hookrightarrow L_{p,v}({\mathbb R}^n)$ for some $0<v< \infty$, then $p\le v.$ This proves the optimality of Theorem \ref{thm1.3} (i) within the scale of Lorentz spaces. \end{rem} \subsection{Proof of Theorem \ref{thm1.2} (ii) and Theorem \ref{thm1.3} (ii)} Let $0<p<1$, $0<q\le 1$ and $s=\sigma_p=n\left(\frac 1p-1\right)$. We first prove Theorem \ref{thm1.3} (ii), i.e., we show that $$ B^{\frac np-n}_{p,q}({\mathbb R}^n)\hookrightarrow L_{1,q}({\mathbb R}^n), $$ or, equivalently, $$ \left(\int_0^\infty [tf^*(t)]^q\frac{dt}{t}\right)^{1/q}\le c\, ||f|B^{\frac np-n}_{p,q}({\mathbb R}^n)||, \qquad f\in B^{\frac np-n}_{pq}({\mathbb R}^n).
$$ Let $$ f=\sum_{j=0}^\infty f_j=\sum_{j=0}^\infty \sum_{m\in {\mathbb Z}^n} \lambda_{j\, m} a_{j\, m} $$ be the optimal atomic decomposition of an $f\in B^{\frac np-n}_{p,q}({\mathbb R}^n)$, again in the sense of \cite{T-FS3}. Then \begin{equation}\label{eq:p1} ||f|B^{\frac np-n}_{p,q}({\mathbb R}^n)|| \approx \left(\sum_{j=0}^\infty 2^{-jqn}\left(\sum_{m\in{\mathbb Z}^n}|\lambda_{j\, m}|^p\right)^{q/p}\right)^{1/q} \end{equation} and by Lemma \ref{lem1q} \begin{equation}\label{eq:p2} ||f|L_{1,q}({\mathbb R}^n)||= ||\sum_{j=0}^\infty f_j|L_{1,q}({\mathbb R}^n)|| \le \left(\sum_{j=0}^\infty ||f_j|L_{1,q}({\mathbb R}^n)||^q\right)^{1/q}. \end{equation} We shall need only one property of the atoms $a_{j\, m}$, namely that their support is contained in the cube $\tilde Q_{j\, m}$, a cube centred at the point $2^{-j}m$ with sides parallel to the coordinate axes and side length $\alpha 2^{-j}$, where $\alpha>1$ is fixed and independent of $f$. We denote by $\tilde\chi_{j\, m}(x)$ the characteristic function of $\tilde Q_{j\, m}$ and by $\chi_{j\, l}$ the characteristic function of the interval $(l2^{-jn},(l+1)2^{-jn}).$ Hence $$ |f_j(x)|\le c\sum_{m\in{\mathbb Z}^n} |\lambda_{j\, m}|\tilde\chi_{j\, m}(x),\quad x\in{\mathbb R}^n $$ and \begin{align} \notag ||f_j|L_{1,q}({\mathbb R}^n)|| &\lesssim \left(\int_0^\infty \sum_{l=0}^\infty \left[(\lambda_j)^*_l \chi_{j\, l}(t)\right]^q t^{q-1}dt\right)^{1/q} \le \left(\sum_{l=0}^\infty \left[(\lambda_j)^*_l\right]^q \int_{2^{-jn}l}^{2^{-jn}(l+1)}t^{q-1}dt\right)^{1/q}\\ &\lesssim 2^{-jn} \left(\sum_{l=0}^\infty \left[(\lambda_j)^*_l\right]^q(l+1)^{q-1}\right)^{1/q} \lesssim 2^{-jn}||\lambda_j|\ell_p||.
\label{eq:p3} \end{align} The last inequality follows from $(l+1)^{q-1}\le 1$ and $\ell_p\hookrightarrow \ell_q$ if $p\le q.$ If $p>q$, the same follows by H\"older's inequality with respect to the indices $\alpha=\frac pq$ and $\alpha'=\frac{p}{p-q}$: $$ \left(\sum_{l=0}^\infty \left[(\lambda_j)^*_{l}\right]^q (l+1)^{q-1}\right)^{1/q} \le \left(\sum_{l=0}^\infty \left[(\lambda_j)^*_{l}\right]^{q\cdot\frac{p}{q}}\right)^{\frac 1q\cdot\frac{q}{p}} \cdot \left(\sum_{l=0}^\infty (l+1)^{(q-1)\cdot \frac{p}{p-q}}\right)^{\frac{1}{q}\cdot\frac{p-q}{p}} \le c\,||\lambda_j|\ell_p||. $$ Here, we used that for $0<q<p<1$ the exponent $\frac{(q-1)p}{p-q}=-1+\frac{(p-1)q}{p-q}$ is strictly smaller than $-1$. The proof now follows from \eqref{eq:p1}, \eqref{eq:p2} and \eqref{eq:p3}: $$ ||f|L_{1,q}({\mathbb R}^n)||\le \left(\sum_{j=0}^\infty ||f_j|L_{1,q}({\mathbb R}^n)||^q\right)^{1/q} \le c\, \left(\sum_{j=0}^\infty 2^{-jnq}||\lambda_j|\ell_p||^q\right)^{1/q} \le c\, ||f|B^{\sigma_p}_{p,q}({\mathbb R}^n)||. $$ \begin{rem} We actually proved that \eqref{eq:index} holds for $X=B^{\frac np-n}_{pq}({\mathbb R}^n)$, $v=q$ and $\epsilon=\infty$. This, together with Theorem \ref{thm1.1} (iii), immediately implies Theorem \ref{thm1.2} (ii). \end{rem} \subsection{Proof of Theorem \ref{thm1.2} (iii) and Theorem \ref{thm1.3} (iii)} Let $0<p<1$ and $0<q\le\infty$. By the Jawerth embedding (cf. \cite{J} or \cite{V}) and Theorem \ref{thm1.3} (ii) we get for any $0<p<\tilde p<1$ $$ F^{\sigma_p}_{p,q}({\mathbb R}^n)\hookrightarrow B^{\sigma_{\tilde p}}_{\tilde p,p}({\mathbb R}^n)\hookrightarrow L_{1,p}({\mathbb R}^n). $$ \begin{thebibliography}{99} \bibitem{BS} C.~Bennett and R.~Sharpley, {\it Interpolation of operators}, Academic Press, San Diego, 1988. \bibitem{CGO} A.~M.~Caetano, A.~Gogatishvili and B.~Opic, {\it Sharp embeddings of Besov spaces involving only logarithmic smoothness}, J. Approx. Theory 152 (2008), 188-214.
\bibitem{H'} D.~D.~Haroske, {\it Limiting embeddings, entropy numbers and envelopes in function spaces}, Habilitationsschrift, Friedrich-Schiller-Universit\"at Jena, Germany, 2002. \bibitem{H} D.~D.~Haroske, {\it Envelopes and sharp embeddings of function spaces}, Chapman \& Hall / CRC, Boca Raton, 2007. \bibitem{J} B.~Jawerth, {\it Some observations on Besov and Lizorkin-Triebel spaces}, Math. Scand. 40 (1977), 94-104. \bibitem{P} J.~Peetre, {\it New thoughts on Besov spaces}, Duke Univ. Math. Series, Durham, 1976. \bibitem{SR} W.~Sickel and T.~Runst, {\it Sobolev spaces of fractional order, Nemytskij operators, and nonlinear partial differential equations.} de Gruyter Series in Nonlinear Analysis and Applications, 3. Walter de Gruyter \& Co., Berlin, 1996. \bibitem{SiTr} W.~Sickel and H.~Triebel, {\it H\"older inequalities and sharp embeddings in function spaces of $B^s_{pq}$ and $F^s_{pq}$ type}, Z. Anal. Anwendungen, 14 (1995), 105-140. \bibitem{T-FS1} H.~Triebel, {\it Theory of function spaces}, Birkh\"auser, Basel, 1983. \bibitem{T-FS2} H.~Triebel, {\it Theory of function spaces II}, Birkh\"auser, Basel, 1992. \bibitem{T-SF} H.~Triebel, {\it The structure of functions}, Birkh\"auser, Basel, 2001. \bibitem{T-FS3} H.~Triebel, {\it Theory of function spaces III}, Birkh\"auser, Basel, 2006. \bibitem{V} J.~Vyb\'\i ral, {\it A new proof of the Jawerth-Franke embedding}, Rev. Mat. Complut. 21 (2008), 75-82. \end{thebibliography} \end{document}
\begin{document} \begin{frontmatter} \title{Current fluctuations of a system of one-dimensional random walks in random~environment\thanksref{T1}} \thankstext{T1}{This work was done while both authors were visiting Institut Mittag-Leffler (Djursholm, Sweden) for the program ``Discrete Probability,'' and while the first author was visiting the University of Wisconsin--Madison as a Van Vleck Visiting Assistant Professor.} \runtitle{Current fluctuations of RWRE} \begin{aug} \author[A]{\fnms{Jonathon} \snm{Peterson}\ead[label=e1]{[email protected]}\thanksref{t2}} \and \author[B]{\fnms{Timo} \snm{Sepp\"al\"ainen}\corref{}\ead[label=e2]{[email protected]}\thanksref{t3}} \thankstext{t2}{Supported in part by NSF Grant DMS-08-02942.} \thankstext{t3}{Supported in part by NSF Grant DMS-07-01091 and by the Wisconsin Alumni Research Foundation.} \runauthor{J. Peterson and T. Sepp\"al\"ainen} \affiliation{Cornell University and University of Wisconsin--Madison} \address[A]{Department of Mathematics\\ Cornell University\\ Malott Hall\\ Ithaca, New York 14850\\ USA\\ \printead{e1}} \address[B]{Department of Mathematics\\ University of Wisconsin--Madison\\ Van Vleck Hall, 480 Lincoln Dr.\\ Madison, Wisconsin 53706\\ USA\\ \printead{e2}} \end{aug} \received{\smonth{4} \syear{2009}} \revised{\smonth{12} \syear{2009}} \begin{abstract} We study the current of particles that move independently in a common static random environment on the one-dimensional integer lattice. A two-level fluctuation picture appears. On the central limit scale the quenched mean of the current process converges to a Brownian motion. On a smaller scale the current process centered at its quenched mean converges to a mixture of Gaussian processes. These Gaussian processes are similar to those arising from classical random walks, but the environment makes itself felt through an additional Brownian random shift in the spatial argument of the limiting current process. 
\end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{60K37} \kwd[; secondary ]{60K35}. \end{keyword} \begin{keyword} \kwd{Random walk in random environment} \kwd{current fluctuations} \kwd{central limit theorem}. \end{keyword} \end{frontmatter} \section{Introduction} We investigate the effect of a random environment on the fluctuations of particle current in a system of many particles. We take the standard model of random walk in random environment (RWRE) on the one-dimensional integer lattice, and let a large number of particles evolve independently of each other but in a common, fixed environment $\omega$. On the level of the averaged (annealed) distribution, particles interact with each other through the environment. We set the parameters of the model so that an individual particle has a positive asymptotic speed $\mathrm{v}_P$ and satisfies a central limit theorem around this limiting velocity under the averaged distribution. There is also a quenched central limit theorem that requires an environment-dependent correction $Z_n(\omega)$ to the asymptotic value $n\mathrm{v}_P$. We scale space and time by the same factor $n$. We consider initial particle configurations whose distribution may depend on the environment, but in a manner that respects spatial shifts. Under a fixed environment the initial occupation variables are required to be independent. We find a two-level fluctuation picture. On the scale $n^{1/2}$ the quenched mean of the current process behaves like a Brownian motion. In fact, up to $o(n^{1/2})$ deviations, this quenched mean coincides with the quenched CLT correction $Z_n(\omega)$ multiplied by the mean density of particles. Around its quenched mean, the current process fluctuates on the scale $n^{1/4}$. These fluctuations are described by the same self-similar Gaussian processes that arise for independent particles performing classical random walks.
But the environment-determined correction $Z_n(\omega)$ appears again, this time as an extra shift in the spatial argument of the limit process of the current. The broader context for this paper is the ongoing work to elucidate the patterns of universal current fluctuations in one-dimensional driven particle systems. A key object is the flux function $H(\mu)$ that gives the average rate of mass flow past a fixed point in space when the system is in a stationary state with mean density $\mu$. Known rigorous results have confirmed the following delineation. If $H $ is strictly convex or concave, then current fluctuations have magnitude $n^{1/3}$ and limit distributions are related to Tracy--Widom distributions from random matrix theory. If $H$ is linear, then the magnitude of current fluctuations is $n^{1/4}$ and limit distributions are Gaussian. The RWRE model has a linear flux. Our results show that in a sense it confirms the prediction stated above, but with additional features coming from the random environment. Limit processes possess covariances that are similar to those that arise for independent classical random walks. However, when the environment is averaged out, limit distributions can fail to be Gaussian. \subsection*{Literature} A standard reference on the basic RWRE model is \cite{zRWRE}. Further references to RWRE work follow below when we review basic results. Earlier related results for current fluctuations of independent particles appeared in papers \cite{durr-gold-lebo,kSTCP} and \cite{sepp-rw}. A central model for the study of fluctuations in the case of a concave flux is the asymmetric exclusion process. Key papers include \cite{bala-sepp-aom,ferr-spoh-06,joha} and \cite{quas-valko}. Though not a system with drift, the symmetric simple exclusion process shares some features with this class of systems with linear flux. Namely, in the stationary process current fluctuations have magnitude $t^{1/4}$ and fractional Brownian motion limits. 
This line of work began with \cite{arratia}; the most recent contributions, which give process-level limits, are \cite{jara-landim-06} and \cite{peli-seth}. Fluctuations of symmetric systems have also been studied with disorder on the bonds \cite{jara-09,jara-landim-08}. \subsection*{Organization of the paper} We define the model and state the results for the current process and its quenched mean in Section \ref{resultsec}. Section \ref{CLTreview} reviews known central limit results for the walk itself that we need for the proof. Sections \ref{qmeansec} and~\ref{currsec} prove the fluctuation theorems for the current. An \hyperref[UIapp]{Appendix} proves a uniform integrability result for the walk that is used in the proofs. \section{Description of the model and main results}\label{resultsec} We begin with the standard RWRE model on $\mathbb Z$ with the extra feature that we admit infinitely many particles. Let $\Omega:= [0,1]^{\mathbb Z}$ be the space of environments. For any environment $\omega= \{ \omega_x \}_{x\in\mathbb Z} \in\Omega$, let $\{X^{m,i}_\centerdot\}_{m,i}$ be a family of Markov chains with distribution $P_\omega$ given by the following properties: \begin{enumerate}[(1)] \item[(1)]$\{X^{m,i}_\centerdot\}_{m \in\mathbb Z, i\in\mathbb N}$ are independent under the measure $P_\omega$. \item[(2)]$P_\omega(X^{m,i}_0 = m ) = 1$, for all $m\in\mathbb Z$ and $i\in\mathbb N$. \item[(3)] The transition probabilities are given by \[ P_\omega(X^{m,i}_{n+1} = x+1 | X^{m,i}_n = x ) = 1- P_\omega (X^{m,i}_{n+1} = x-1 | X^{m,i}_n = x ) = \omega_x. \] \end{enumerate} A system of random walks in a random environment may then be constructed by first choosing an environment $\omega$ according to a probability distribution $P$ on $\Omega$ and then constructing the system of random walks $\{X^{m,i}_\centerdot\}$ as described above. The distribution $P_\omega$ of the random walks given the environment $\omega$ is called the \textit{quenched law}.
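Properties (1)--(3) are easy to realize in a simulation. The following sketch is ours, not taken from the paper (the ellipticity constant $\kappa=0.3$, the environment law, and all names are arbitrary choices): it draws an i.i.d. uniformly elliptic environment and checks empirically that the frequency of right-steps out of a fixed site approximates $\omega$ at that site.

```python
import random

random.seed(0)
KAPPA = 0.3  # uniform ellipticity: omega_x in [KAPPA, 1 - KAPPA]

def sample_environment(lo, hi):
    """An i.i.d. environment omega = {omega_x}: one value per site x."""
    return {x: random.uniform(KAPPA, 1 - KAPPA) for x in range(lo, hi)}

def walk_step(omega, x):
    """One step of the quenched chain: right w.p. omega_x, else left."""
    return x + 1 if random.random() < omega[x] else x - 1

omega = sample_environment(-100, 100)

# Empirical check of property (3): the fraction of right-steps out of a
# fixed site estimates omega at that site under P_omega.
site = 0
trials = 20000
rights = sum(walk_step(omega, site) == site + 1 for _ in range(trials))
freq = rights / trials
```

With the seed fixed the run is deterministic, and for $20000$ trials the empirical frequency `freq` should agree with `omega[site]` up to sampling error of order $10^{-2}$.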
The \textit{averaged law} $\mathbb{P}$ (also called the annealed law) is obtained by averaging the quenched law over all environments. That is, $\mathbb {P}(\cdot ) := \int_{\Omega} P_\omega(\cdot) P(d\omega)$. Often we will be considering events that only concern the behavior of a single random walk started at location $m$, and so we will use the notation $X^{m}_n$ in place of $X^{m,1}_n$. Moreover, if the random walk starts at the origin, we will further abbreviate the notation by $X_n$ in place of $X_n^0$. Expectations with respect to the measures $P$, $P_\omega$ and $\mathbb{P}$ will be denoted by $E_P$, $E_\omega $ and $\mathbb{E}$, respectively, and variances with respect to the measure $P_\omega$ will be denoted by $\operatorname{Var}_\omega$. Generic probabilities and expectations not defined in the RWRE model are denoted by $\mathbf{P}$ and $\mathbf{E}$. For the remainder of the paper we will make the following assumptions on the distribution $P$ of the environments. \begin{asm}\label{UEIIDasm} The distribution on environments is i.i.d. and uniformly elliptic. That is, the variables $\{\omega_x\}_{x\in\mathbb Z}$ are independent and identically distributed under the measure $P$, and there exists a $\kappa> 0$ such that $P(\omega_x \in[\kappa, 1-\kappa] ) = 1$. \end{asm} \begin{asm}\label{CLTasm} $E_P (\rho_0^2) < 1$, where $\rho_x := \frac{1-\omega_x}{\omega_x}$. \end{asm} The above assumptions on the distribution $P$ on environments imply that the RWRE are transient to $+\infty$ with strictly positive speed $\mathrm{v}_P$ \cite{sRWRE}. That is, \begin{equation}\label{LLN} \lim_{n\rightarrow\infty} \frac{X_n}{n} = \frac{1-E_P \rho _0}{1+E_P \rho_0} =: \mathrm{v}_P> 0,\qquad \mathrm{\mathbb{P}\mbox{-}a.s.} \end{equation} Moreover, Assumptions \ref{UEIIDasm} and \ref{CLTasm} imply that a quenched central limit theorem holds with a random (depending on the environment) centering. 
That is, there exists an explicit function of the environment $Z_n(\omega)$ and a constant $\sigma_1 > 0$ such that for $P\mathrm{\mbox{-}a.e.}$ environment $\omega$, \[ \lim_{n\rightarrow\infty} P_\omega\biggl( \frac{X_n - n\mathrm {v}_P+Z_n(\omega)}{\sigma_1 \sqrt{n}} \leq x \biggr) = \Phi(x)\qquad \forall x\in\mathbb R, \] where $\Phi$ is the standard normal distribution function. The environment-dependent centering in the above quenched central limit theorem cannot be replaced by a deterministic centering since it is known that there exists a constant $\sigma_2 > 0$\vspace*{1pt} such that the process $t\mapsto\frac{Z_{nt}(\omega)}{\sigma_2 \sqrt{n}}$ converges weakly to a standard Brownian motion.\vspace*{1pt} Definitions of $\sigma_1,\sigma_2$ and $Z_n(\omega)$ are provided in Section \ref{CLTreview} where we give a more detailed review of the known limit distribution results for RWRE under Assumptions \ref{UEIIDasm} and~\ref{CLTasm}. In this paper we will be concerned with a system of RWRE in a common environment with a finite (random) number of walks started at each site $x\in\mathbb Z$. Let $\eta_0(x)$ be the number of walks started from $x \in\mathbb Z$. We will allow the law of the initial configurations to depend on the environment (in a measurable way). Let $\theta$ be the shift operator on environments defined by $(\theta^x\omega)_y = \omega_{x+y}$. We will assume that our initial configurations are stationary in the following sense. \begin{asm}\label{ICasm} The distribution of $\eta_0$ is such that $\omega\mapsto P_\omega (\eta_0(0) = k)$ is a measurable function of $\omega$ for any $k\in\mathbb N$, and the law of $\eta_0$ respects the shifts of the environment: $P_\omega( \eta _0(x) = k ) = P_{\theta^x\omega}( \eta_0(0) = k )$. Also, given the environment~$\omega$, the $\{ \eta_0(x) \}$ are independent and independent of the paths of the random walks. \end{asm} We will also need the following moment assumptions. 
\begin{asm}\label{ICMasm} For some $\varepsilon>0$, \begin{equation} E_P [ E_\omega(\eta_0(x))^{2+\varepsilon} + \operatorname{Var}_\omega(\eta_0(x))^{2+\varepsilon} ]<\infty. \label{vpmomass1} \end{equation} \end{asm} To simplify notation, we will let $\bar\mu(\omega) := E_\omega[ \eta_0(0)]$. Note that Assumption \ref{ICasm} implies that $E_\omega[ \eta_0(m) ] = \bar\mu(\theta^m\omega)$. Let $\mu:= E_P[ \bar\mu(\omega)] = \mathbb{E}\eta_0(0)$ be the average density of the initial configuration of particles, and let $\sigma_0^2=E_P[ \operatorname{Var}_\omega(\eta_0(x)) ]$. The law of large numbers \eqref{LLN} implies that each random walk moves with asymptotic speed $\mathrm{v}_P$. The main object of study in this paper is the following two-parameter process. For $t\geq0$ and $r\in\mathbb R$, let \begin{eqnarray}\label{defYn} Y_n(t,r) &=& \sum_{m> 0} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} \leq nt \mathrm{v}_P+r\sqrt n \bigr\} } \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}- \sum_{m\leq0} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} > nt\mathrm{v}_P+ r\sqrt n \bigr\} } . \end{eqnarray} \begin{figure} \caption{A visual representation of the process $Y_n(t,r)$, which is the net (negative) current seen by an observer starting at the origin at time $0$ and ending at $nt\mathrm{v}_P + r\sqrt{n}$ at time $nt$.} \label{fig1} \end{figure} A visual description of the process $Y_n(t,r)$ is given in Figure \ref{fig1}. $Y_n(t,r)$ is similar to what was called the space--time current process in \cite{kSTCP} and studied in a constant environment (i.e., particles performing independent classical random walks). We altered the definition because the limit process of this version has a more natural description.
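Given simulated walk positions at time $nt$, the tally in \eqref{defYn} can be computed directly. The helper below is an illustrative sketch, not from the paper: the infinite sums over $m$ are truncated to the finitely many walks supplied, and all names are ours.

```python
def current_Y(positions, v, n, t, r):
    # net (negative) current Y_n(t,r) as in the definition above, truncated
    # to the walks provided; `positions` maps a starting site m to the list
    # of time-nt positions of the walks started at m
    cut = n * t * v + r * n ** 0.5  # the observer's endpoint nt*v_P + r*sqrt(n)
    total = 0
    for m, xs in positions.items():
        if m > 0:
            total += sum(1 for x in xs if x <= cut)  # right-to-left crossings
        else:
            total -= sum(1 for x in xs if x > cut)   # left-to-right crossings
    return total
```

The centered process $V_n(t,r)$ defined further below is obtained by subtracting the quenched mean of this tally.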
The process studied earlier in \cite{kSTCP} equals\vspace*{-2pt} \begin{eqnarray}\label{oldcurrent} Y_n(t,r) - Y_n(0,r) &=& \sum_{m> r\sqrt n} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{\bigl \{ X^{m,k}_{nt} \leq nt\mathrm{v}_P+r\sqrt n \bigr\} } \nonumber \\[-10pt] \\[-10pt] \nonumber &&{}- \sum_{m\leq r\sqrt n} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} > nt\mathrm{v}_P+ r\sqrt n \bigr\} } . \end{eqnarray} This process $Y_n(\cdot,r) - Y_n(0,r)$ is the net right-to-left particle current seen by an observer who starts at $r\sqrt{n}$ and moves with deterministic speed $\mathrm{v}_P$. Adapting the proof of \cite{kSTCP} to our definition of $Y_n(t,r)$ gives this theorem: \begin{theorem}[(Kumar \cite{kSTCP})] Assume that the environment is nonrandom.\vspace*{1pt} That is, there exists a $p\in (0,1)$ such that $P(\omega_x = p, \forall x\in\mathbb Z) = 1$. Let $\mathbb{E}(\eta_0) = \mu$ and $\operatorname{Var}(\eta_0) = \sigma_0^2$, and assume that $\mathbb{E}(\eta_0^{12}) < \infty$. Then, the process $n^{-1/4}( Y_n(\cdot, \cdot) - \mathbb{E}Y_n(\cdot,\cdot) )$ converges in distribution on the $D$-space of two-parameter cadlag processes. The limit is the mean zero Gaussian process $V^0(\cdot,\cdot)$ with covariance \begin{equation}\label{CRWcov} \mathbf{E}[ V^0(s,q)V^0(t,r) ] = \Gamma((s,q),(t,r)), \end{equation} where the covariance function $\Gamma$ is defined below in \eqref{Gammadef}. \end{theorem} The theorem above uses the higher moment assumption $\mathbb{E}(\eta_0^{12}) < \infty$ for process-level tightness. We have not proved such tightness; hence we get by with the moments assumed in \eqref{vpmomass1}. We turn to the results in the random environment.\vadjust{\goodbreak} The random environment adds a new layer of fluctuations to the current. These larger fluctuations are of order $\sqrt{n}$ and depend only on the environment. This is summarized by our first main result.
The process $Z_{nt}(\omega)$ in the statement below is the correction required in the quenched central limit theorem of the walk, defined in \eqref{Zdef} in Section \ref{CLTreview}. \begin{theorem} \label{QMCurrent} For any $\varepsilon>0$, $0<R, T<\infty$, \begin{equation}\qquad\lim_{n\to\infty} P\Bigl( \sup_{t\in[0,T], r\in[-R,R]} \bigl\vert E_\omega Y_n(t,r) - \mu r \sqrt{n} - \mu Z_{nt}(\omega) \bigr\vert \ge\varepsilon\sqrt n \Bigr) =0. \label{limYZ} \end{equation} Moreover, since $\{ n^{-1/2} Z_{nt}(\omega)\dvtx t\in\mathbb R_+\}$ converges weakly to $\{ \sigma_2 W(t)\dvtx t\in\mathbb R_+ \}$, where $W(\cdot)$ is a standard Brownian motion, the two-parameter process $\{ n^{-1/2} E_\omega Y_n(t,r)\dvtx t\in \mathbb R_+, r\in\mathbb R\}$ converges weakly to $\{ \mu\sigma_2 W(t) + \mu r\dvtx t\in\mathbb R_+, r\in\mathbb R\}$. \end{theorem} To see the next order of fluctuations, we center the current at its quenched mean. Define \begin{eqnarray}\label{vpdefVn} V_n(t,r) &=&Y_n(t,r) -E_\omega Y_n(t,r) \nonumber\\ &=& \sum_{m> 0} \Biggl( \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} \leq nt\mathrm{v}_P+r\sqrt{n} \bigr\} }\nonumber\\ &&\qquad {} - E_\omega(\eta_0(m))P_\omega\bigl\{ X^m_{nt} \leq nt\mathrm{v}_P+r\sqrt{n} \bigr\} \Biggr)\\ &&{} - \sum_{m\leq0} \Biggl( \sum_{k=1}^{\eta_0(m)}\mathbf{1}{ \bigl\{ X^{m,k}_{nt} > nt\mathrm{v}_P+r\sqrt{n} \bigr\} }\nonumber\\ &&\qquad\hspace*{14pt} {}- E_\omega(\eta_0(m))P_\omega\bigl\{ X^{m}_{nt} > nt\mathrm{v}_P+r\sqrt{n} \bigr\} \Biggr).\nonumber \end{eqnarray} The fluctuations of $V_n(t,r)$ are of order $n^{1/4}$ and the same as the current fluctuations in a deterministic environment, up to a random shift coming from the environment. We need to introduce some notation. For any $\alpha>0$, let $\phi_{\alpha^2}(\cdot)$ and $\Phi_{\alpha^2}(\cdot)$ be the density and distribution function, respectively, for a Gaussian distribution with mean zero and variance $\alpha^2$.
Also, let \begin{eqnarray}\label{Psidef} \Psi_{\alpha^2}(x) &:=& \alpha^2 \phi_{\alpha^2}(x) - x \Phi _{\alpha^2}(-x)\quad \mbox{and} \nonumber \\[-8pt] \\[-8pt] \nonumber \Psi_0(x) &:=& \lim_{\alpha\rightarrow0} \Psi_{\alpha ^2}(x) = x^-. \end{eqnarray} Then, for any $(s,q),(t,r) \in\mathbb R_+ \times\mathbb R$ define the covariance function \begin{eqnarray}\label{Gammadef} \Gamma((s,q),(t,r)) &:=& \mu\bigl( \Psi_{\sigma_1^2(s+t)}(q-r) - \Psi_{\sigma _1^2|s-t|}(q-r) \bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} + \sigma_0^2 \bigl( \Psi_{\sigma_1^2 s}(-q) + \Psi_{\sigma _1^2 t} (r) - \Psi _{\sigma_1^2(s+t)}(r-q) \bigr), \end{eqnarray} where $\sigma_1$ is the scaling factor in the quenched central limit theorem [see \eqref{clt1} in Section \ref{CLTreview} for a formula]. Given the above definitions, let $(V,Z)=( V(t,r), Z(t)\dvtx t\in\mathbb R_+, r\in\mathbb R )$ be the process whose joint distribution is defined as follows: \begin{longlist}[(ii)] \item[(i)] Marginally, $Z(\cdot)=\sigma_2 W(\cdot)$ for a standard Brownian motion $W(\cdot)$, and $\sigma_2$ is the scaling factor in the central limit theorem of the correction $Z_{nt}(\omega)$ [see \eqref{clt2} in Section \ref{CLTreview} for a formula]. \item[(ii)] Conditionally on the path $Z(\cdot)\in C(\mathbb R_+,\mathbb R)$, $V$ is the mean zero Gaussian process indexed by $\mathbb R_+\times\mathbb R$ with covariance \begin{eqnarray}\label{vpcov} &&\mathbf{E}[V(s,q)V(t,r) \vert Z(\cdot)] \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad=\Gamma \bigl(\bigl(s,q+Z(s)\bigr),\bigl(t,r+Z(t)\bigr)\bigr)\qquad \mbox{for $(s,q), (t,r)\in\mathbb R_+\times\mathbb R$.} \end{eqnarray} \end{longlist} An equivalent way to say this is to first take independent $(V^0,Z)$ with $Z$ as above and $V^0=\{V^0(t,r)\dvtx (t,r)\in\mathbb R_+\times\mathbb R\}$ the mean zero Gaussian process with covariance $ \Gamma((s,q),(t,r))$ from \eqref{Gammadef}, and then define $V(t,r)=V^0(t,r+Z(t))$. 
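The functions $\Psi_{\alpha^2}$ and $\Gamma$ are explicit and easy to evaluate. The sketch below (ours, not part of the paper) implements \eqref{Psidef} and \eqref{Gammadef}; it can be used to check special cases numerically, for instance that with $\mu=\sigma_0^2$ the covariance $\Gamma((s,0),(t,0))$ reduces to the fractional Brownian motion form $\frac{\mu\sigma_1}{\sqrt{2\pi}}(\sqrt s+\sqrt t-\sqrt{|s-t|})$ discussed further below.

```python
import math

def psi(a2, x):
    # Psi_{alpha^2}(x) = alpha^2 * phi_{alpha^2}(x) - x * Phi_{alpha^2}(-x),
    # with the alpha -> 0 limit Psi_0(x) = x^-
    if a2 == 0.0:
        return max(-x, 0.0)
    a = math.sqrt(a2)
    phi = math.exp(-x * x / (2 * a2)) / (a * math.sqrt(2 * math.pi))
    Phi_neg = 0.5 * math.erfc(x / (a * math.sqrt(2)))  # Phi_{alpha^2}(-x)
    return a2 * phi - x * Phi_neg

def gamma_cov(mu, sigma0_sq, sigma1_sq, s, q, t, r):
    # the covariance function Gamma((s,q),(t,r)) defined above
    return (mu * (psi(sigma1_sq * (s + t), q - r) - psi(sigma1_sq * abs(s - t), q - r))
            + sigma0_sq * (psi(sigma1_sq * s, -q) + psi(sigma1_sq * t, r)
                           - psi(sigma1_sq * (s + t), r - q)))
```

Since $\Psi_{\alpha^2}(0)=\alpha/\sqrt{2\pi}$, taking $\mu=\sigma_0^2$ and $q=r=0$ collapses the two groups of terms to $\mu(\Psi_{\sigma_1^2 s}(0)+\Psi_{\sigma_1^2 t}(0)-\Psi_{\sigma_1^2|s-t|}(0))$, which is the fractional Brownian motion covariance.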
The next theorem gives joint convergence of the centered current process and the environment-dependent shift. \begin{theorem} Under the averaged probability\vspace*{1.5pt} $\mathbb{P}$, as $n\to\infty$, the finite-dimensional distributions of the joint process $\{( n^{-1/4}V_n(t,r), n^{-1/2} Z_{nt}(\omega) )\dvtx t\in\mathbb R_+, r\in\mathbb R \}$ converge to those of the process $(V,Z)$. \label{findimthm} \end{theorem} Our proof shows additionally that \begin{eqnarray*} &&\lim_{n\to\infty} E_P\Biggl\vert E_\omega\exp\Biggl\{in^{-1/4} \sum_{k=1}^N \alpha_k V_n(t_k,r_k) \Biggr\}\\ &&\hspace*{41pt}\qquad{} - \mathbf{E}\exp\Biggl\{i \sum_{k=1}^N \alpha_k V(t_k,r_k) \Biggr\} \Biggr\vert=0 \end{eqnarray*} for any choice of time--space points $(t_1,r_1),\ldots,(t_N,r_N)\in \mathbb R_+\times\mathbb R$\vspace*{1pt} and $\alpha_1,\ldots,\break \alpha_N \in\mathbb R$. [See \eqref{vplim7} below.] This falls short of a quenched limit for $n^{-1/4}V_n$ (a~limit for a fixed $\omega$), but it does imply that if a quenched limit exists, the limit process is the one that we describe. We suspect, however, that no quenched limit exists since the techniques of this paper can be used to show that the quenched covariances of the process $n^{-1/4}V_n(\cdot,\cdot)$ do not converge $P\mathrm{\mbox{-}a.s.}$ The mean zero Gaussian process $\{u(t,r)\dvtx t\in\mathbb R_{+},r\in \mathbb R\}$ with covariance $ \mathbf{E}[u(s,q)u(t,r)]= \Gamma((s,q),(t,r))$ from \eqref{Gammadef} can be represented as the sum of two integrals: \begin{eqnarray}\label{udef} u(t,r)&=& \sqrt{\mu} \int\!\!\!\int_{[0,t]\times\mathbb R} \phi_{\sigma_1^2(t-s)}(r-x) \,dW(s,x) \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}+\sigma_0\int_{\mathbb R} \phi_{\sigma_1^2t}(r-x)B(x) \,dx, \end{eqnarray} where $W$ is a two-parameter Brownian motion on $\mathbb R_+ \times\mathbb R$ (Brownian sheet) and~$B$ an independent two-sided one-parameter Brownian motion on $\mathbb R$.
The process $u(t,r)$ is also a weak solution of the stochastic heat equation with initial data given by Brownian motion \cite{walsh}: \begin{equation} \quad u_t = \frac{\sigma_1^2}2 u_{rr} + \sqrt{\mu} \dot W ,\qquad u(0,r)=\sigma_0 B(r),\qquad (t,r)\in\mathbb R_{+}\times\mathbb R. \label{stheateq2} \end{equation} We obtain this type of process if we define $u(t,r)=V(t,r-Z(t))$, regarding the random path $-Z(\cdot)$ as the new spatial origin. We next remark on the distribution of the limiting process $V(t,r)$ in a couple of special cases. First we consider the case when $\sigma_0=0$ (this includes the case of deterministic initial configurations). If $\sigma_0=0$, then \eqref{Gammadef} and \eqref{vpcov} imply that, for any fixed $t\geq0$, the one-parameter process $V(t,\cdot)$ has conditional covariance \begin{eqnarray*} \mathbf{E}[V(t,q)V(t,r) \vert Z(\cdot)]&=&\Gamma \bigl(\bigl(t,q+Z(t)\bigr),\bigl(t,r+Z(t)\bigr) \bigr) \\ &=& \mu\bigl(\Psi_{2\sigma_1^2 t}(q-r) - \Psi_{0}(q-r)\bigr). \end{eqnarray*} In particular, the covariances of $V(t,\cdot)$ do not depend on the process $Z(\cdot)$ and are the same as in the classical random walk case. \begin{cor} If $\sigma_0 = 0$, then for any fixed $t\geq0$ the (averaged) finite-dimensional distributions of the one-parameter process $\{ n^{-1/4} V_n(t,r) \dvtx r\in\mathbb R\}$ converge to those of the one-parameter mean zero Gaussian process $V^0(t,\cdot)$ with covariances given by \eqref{CRWcov} with $s=t$. \end{cor} A second special case worth considering is when $\mu= \sigma_0^2$. In the case of classical random walks, $\mu= \sigma_0^2$ implies that \[ \mathbf{E}[ V^0(s,0)V^0(t,0) ] = \frac{\mu\sigma_1}{\sqrt{2\pi}} \bigl( \sqrt{s} + \sqrt {t} - \sqrt{|s-t|}\bigr), \] so that $V^0(\cdot,0)$ is a fractional Brownian motion with Hurst parameter $1/4$.
For RWRE, $\mu= \sigma_0^2$ implies that \begin{eqnarray}\label{condcov} &&\mathbf{E}[V(s,0)V(t,0) \vert Z(\cdot)] \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad= \mu\bigl( \Psi_{\sigma_1^2 s}(-Z(s)) + \Psi_{\sigma_1^2 t}(Z(t)) - \Psi_{\sigma _1^2|s-t|}\bigl(Z(t)-Z(s)\bigr) \bigr). \end{eqnarray} Since the right-hand side of \eqref{condcov} is a nonconstant random variable, the marginal distribution of $V(t,0)$ is non-Gaussian. Taking expectations of \eqref{condcov} with respect to $Z(\cdot)$ gives that \begin{equation}\label{specialcasecov} \mathbf{E}[V(s,0)V(t,0)] = \frac{\mu\sqrt{\sigma_1^2 + \sigma _2^2}}{\sqrt{2\pi}} \bigl( \sqrt{s} + \sqrt{t} - \sqrt{|s-t|}\bigr). \end{equation} Thus, we have the following. \begin{cor} If $\mu= \sigma_0^2$, then the process $V(\cdot,0)$ has covariances like those of a fractional Brownian motion, but is not a Gaussian process. \end{cor} \begin{rem} The condition that $\mu= \sigma_0^2$ is important because it includes the case when the configuration of particles is stationary under the dynamics of the random walks. For classical random walks, the stationary distribution on configurations of particles is the one in which the $\eta_0(x)$ are i.i.d. $\operatorname{Poisson}(\mu)$ random variables. Consider now the case where, given $\omega$, the $\eta_0(x)$ are independent and \begin{equation}\label{fdef} \qquad\quad \eta_0(x) \sim\operatorname{Poisson}(\mu f(\theta^x\omega))\qquad \mbox{where } f(\omega) = \frac{\mathrm{v}_P}{\omega_0}\Biggl( 1 + \sum_{i=1}^{\infty} \prod _{j=1}^i \rho_{j} \Biggr). \end{equation} It was shown in \cite{psHydro} that, given $\omega$, the above distribution on the configuration of particles is stationary under the dynamics of the random walks. Note that in this case, $E_\omega\eta_0(0) = \operatorname{Var}_\omega\eta_0(0) = \mu f(\omega)$.
Moreover, Assumptions \ref{UEIIDasm} and \ref {CLTasm} imply that $E_P \rho_0^{2+\varepsilon} < 1$ for some $\varepsilon>0$, and, thus, it can be shown that $E_P f(\omega)^{2+\varepsilon} < \infty$. Therefore, Assumptions \ref{ICasm} and \ref{ICMasm} are fulfilled in this special case. \end{rem} It is intuitively evident but not a corollary of our theorem that if the environment-dependent shift is introduced in the current process itself, the random shift $Z$ disappears from the limit process $V$. For the sake of completeness, we state this result too. For $(t,r)\in\mathbb R_+\times\mathbb R$ define \begin{eqnarray}\label{defYnq} Y_n^{(q)}(t,r) &=& \sum_{m> 0} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} \leq nt\mathrm{v}_P-Z_{nt}(\omega )+r\sqrt n \bigr\} } \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}- \sum_{m\leq0} \sum_{k=1}^{\eta_0(m)} \mathbf{1}{ \bigl\{ X^{m,k}_{nt} > nt\mathrm{v}_P -Z_{nt}(\omega)+ r\sqrt n \bigr\} } \end{eqnarray} and its centered version \[ V_n^{(q)}(t,r) = Y_n^{(q)}(t,r) - E_\omega Y_n^{(q)}(t,r). \] The process $V_n^{(q)}$ has the same limit as classical random walks. As above, let $V^0=\{V^0(t,r)\dvtx (t,r)\in\mathbb R_+\times\mathbb R\}$ be the mean zero Gaussian process with covariance~\eqref{CRWcov}. \begin{theorem} Under the averaged probability $\mathbb{P}$, as $n\to\infty$, the finite-dimensional distributions of the joint process $\{( n^{-1/4}V_n^{(q)}(t,r), n^{-1/2} Z_{nt}(\omega) )\dvtx\break t\in \mathbb R_+, r\in \mathbb R\}$ converge to those of the process $(V^0,Z)$ where $V^0$ and $Z$ are independent. \label{findimthmVq} \end{theorem} It can be shown, using the techniques of this paper, that $n^{-1/2} E_\omega Y_n^{(q)}(t,r)$ converges to zero in probability for any fixed $t$ and $r$. We suspect that the fluctuations of $E_\omega Y_n^{(q)}(t,r)$ are at most of order $n^{1/4}$, but at this point we have no result.
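Both the speed in \eqref{LLN} and the stationary density $f(\omega)$ of \eqref{fdef} are explicit and straightforward to evaluate numerically. The sketch below (ours, for illustration only) truncates the series in \eqref{fdef}; in a constant environment it returns $f\equiv 1$, consistent with the classical i.i.d. $\operatorname{Poisson}(\mu)$ stationary law.

```python
def speed(E_rho):
    # v_P = (1 - E_P[rho_0]) / (1 + E_P[rho_0]) from the law of large numbers,
    # valid (and positive) when E_P[rho_0] < 1
    return (1 - E_rho) / (1 + E_rho)

def f_density(omega, v, terms=200):
    # truncated series for f(omega) = (v_P/omega_0)(1 + sum_{i>=1} prod_{j=1}^i rho_j);
    # `omega` maps sites to omega_x and must cover sites 0..terms
    total, prod = 1.0, 1.0
    for i in range(1, terms + 1):
        prod *= (1 - omega[i]) / omega[i]  # running product rho_1 * ... * rho_i
        total += prod
    return (v / omega[0]) * total
```

Under Assumption \ref{CLTasm} the products $\rho_1\cdots\rho_i$ decay geometrically on average, so a modest truncation already gives an accurate value of $f(\omega)$.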
\section{Review of CLT for RWRE}\label{CLTreview} In this section we review some of the limiting distribution results for one-dimensional RWRE implied by Assumptions \ref{UEIIDasm} and \ref {CLTasm}. Before stating a theorem which summarizes what is known, we introduce some notation. Let $T_x := \inf\{ n\geq0\dvtx X_n = x \}$ be the hitting time of the site $x\in\mathbb Z$ of a~RWRE started at the origin, and for $x\in\mathbb Z$ let \begin{equation}\label{hdef} h(x,\omega) := \cases{ \mathrm{v}_P\displaystyle\sum_{i=0}^{x-1} ( E_{\theta^i \omega} T_1 - \mathbb {E}T_1 ), & \quad $x \geq1$, \vspace*{2pt}\cr 0, & \quad $x= 0$, \vspace*{2pt}\cr - \mathrm{v}_P\displaystyle\sum_{i=x}^{-1} ( E_{\theta^i \omega} T_1 - \mathbb {E}T_1 ), & \quad $x \leq-1$.} \end{equation} Define also \begin{equation}Z_{nt}(\omega) := h(\lfloor nt\mathrm{v}_P \rfloor ,\omega). \label{Zdef} \end{equation} \begin{theorem}[(\cite{gQCLT,kksStable,pThesis,zRWRE})]\label{QCLTthm} Let Assumptions \textup{\ref{UEIIDasm}} and \textup{\ref{CLTasm}} hold. Then, the following hold: \begin{enumerate}[(1)] \item[(1)] The RWRE satisfies a quenched functional central limit theorem with a random (depending on the environment) centering. For $n\in\mathbb N$ and $t\geq0$, let \begin{equation} B^n(t) := \frac{X_{nt} - nt\mathrm{v}_P+ Z_{nt}(\omega)}{\sigma_1 \sqrt{n}}\qquad \mbox{where } \sigma_1^2 := \mathrm{v}_P^3 E_P( \operatorname{Var}_\omega T_1 ). \label{clt1} \end{equation} Then, for $P$-a.e. environment $\omega$, under the quenched measure $P_\omega$, $B^n(\cdot)$ converges weakly to standard Brownian motion as $n\rightarrow\infty$. \item[(2)] Let \begin{equation} \zeta^n(t) := \frac{Z_{nt}(\omega)}{\sigma_2 \sqrt{n}}\qquad \mbox{where } \sigma _2^2 := \mathrm{v}_P^2 \operatorname{Var}( E_\omega T_1). \label{clt2} \end{equation} Then, under the measure $P$ on environments, $\zeta^n(\cdot)$ converges weakly to standard Brownian motion as $n\rightarrow\infty$. 
\item[(3)] The RWRE satisfies an averaged functional central limit theorem. Let \[ \mathbb{B}^n(t) := \frac{X_{nt} - nt\mathrm{v}_P}{\sigma\sqrt{n}}\qquad \mbox{where } \sigma^2 = \sigma_1^2 + \sigma_2^2. \] Then, under the averaged measure $\mathbb{P}$, $\mathbb{B}^n(\cdot)$ converges weakly to standard Brownian motion. \end{enumerate} \end{theorem} \begin{rem} The conclusions of Theorem \ref{QCLTthm} may still hold if the law on environments is not uniformly elliptic or i.i.d. but satisfies certain mixing properties \cite{gQCLT,kksStable,mrzStable,pThesis,zRWRE}. However, if the environment is i.i.d., the requirement that $E_P \rho _0^2 < 1$ in Assumption \ref{CLTasm} cannot be relaxed in order for Theorem \ref{QCLTthm} to hold \cite{kksStable,p1LSL2,pzSL1}. \end{rem} Let $B_\centerdot$ denote a standard Brownian motion with distribution $\mathbf{P}$. The quenched functional central limit theorem implies that, $P\mathrm{\mbox{-}a.s.}$, for any $s,t\geq0$ and $x,y\in\mathbb R$, \begin{eqnarray}\label{QCLT} &&\lim_{n\rightarrow\infty} P_\omega\biggl( \frac{X_{ns} - ns\mathrm {v}_P+Z_{ns}(\omega)}{\sigma_1\sqrt {n}} \leq x , \frac{X_{nt} - nt\mathrm{v}_P+Z_{nt}(\omega)}{\sigma _1\sqrt{n}} \leq y \biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad = \mathbf{P}( B_s \leq x, B_t \leq y ). \end{eqnarray} Moreover, for fixed $s,t > 0$, the convergence in \eqref{QCLT} is uniform in $x$ and $y$. In \cite{zRWRE}, only an averaged central limit theorem is proved. However, since $\mathbb{B}^n(t) = \frac{\sigma_1}{\sigma}B^n(t) - \frac{\sigma_2}{\sigma }\zeta^n(t)$, the averaged functional central limit theorem can be derived from the previous two parts of Theorem \ref{QCLTthm}.
Indeed, it follows immediately that the finite-dimensional distributions of $\mathbb {B}^n(t)$ converge to those of a Brownian motion (as in \cite{zRWRE}, this uses that convergences of terms like \eqref{QCLT} hold uniformly in $x$ and $y$). Thus, it only remains to show that~$\mathbb {B}^n(\cdot )$ is tight, but this is not too difficult. The random centering $nt\mathrm{v}_P- Z_{nt}(\omega)$ in the quenched CLT is more convenient than centering by the quenched mean $E_\omega X_{\lfloor nt \rfloor}$. Both centerings are essentially the same in the sense that they do not differ on the scale of $\sqrt{n}$: \begin{equation}\label{limqmeanZ} \lim_{n\rightarrow\infty} P\Bigl( \sup_{k\leq n} | E_\omega X_k - k\mathrm{v}_P+ Z_k(\omega) | \geq\varepsilon\sqrt{n} \Bigr) = 0\qquad \forall\varepsilon>0. \end{equation} But $Z_{nt}(\omega) = h( \lfloor nt\mathrm{v}_P \rfloor,\omega)$ is convenient because it is defined in terms of partial sums of the random variables $E_{\theta^i\omega} T_1$ for which there is an explicit formula in terms of the environment $\omega$ (see \cite{pThesis} or \cite{zRWRE}). We note the following lemma due to Goldsheid \cite{gQCLT} which we will use in several places in the remainder of the paper. \begin{lem}\label{Goldsheid} Let Assumptions \textup{\ref{UEIIDasm}} and \textup{\ref{CLTasm}} hold. Then there exists an $\eta>0$ and a~constant $C<\infty$ such that \begin{equation}\label{hLpbound} E_P \Bigl[ \sup_{1\leq k \leq n} |h(k,\omega)|^{2+2\eta} \Bigr] \leq C n^{1+\eta}\qquad \forall n\in\mathbb N. \end{equation} \end{lem} We conclude\vspace*{1pt} this section by stating a new result on the uniform integrability (under the averaged measure) of $n^{-1/2} (X_n - n\mathrm{v}_P)$. \begin{prop}\label{UIprop} Let $\sigma_1^2$ and $\sigma_2^2$ be defined as in Theorem \textup{\ref {QCLTthm}}. Then, \begin{equation}\label{UIclaim} \lim_{n\rightarrow\infty} \frac{1}{n} \mathbb{E}( X_n - n\mathrm{v}_P)^2 = \sigma_1^2 + \sigma_2^2. 
\end{equation} Moreover, there exists a constant $C<\infty$ such that \begin{equation}\label{supUIclaim} \mathbb{E}\Bigl[\sup_{k\leq n} (X_k - k \mathrm{v}_P)^2 \Bigr] \leq C n. \end{equation} \end{prop} The proof of Proposition \ref{UIprop} is given in the \hyperref[UIapp]{Appendix}. It should be noted that while the statement \eqref{limqmeanZ} does not appear anywhere in the literature (at least as far as we know), it is included in the proof of Proposition \ref{UIprop}. \section{Fluctuations of the quenched mean of the current}\label{qmeansec} In this section we prove Theorem \ref{QMCurrent} for the quenched mean of $Y_n(t,r)$. Introduce the notation \begin{eqnarray}\label{Wndef} W_n(t,r) &:=& E_\omega Y_n(t,r) - \mu r\sqrt{n} \nonumber\\ &=& \sum_{m> 0} E_\omega[ \eta_0(m) ] P_\omega\bigl( X^m_{nt} \leq nt\mathrm{v}_P+ r\sqrt{n} \bigr)\\ &&{} - \sum_{m\leq0}E_\omega[ \eta_0(m) ] P_\omega\bigl( X^m_{nt} > nt\mathrm{v}_P+ r\sqrt{n} \bigr) - \mu r\sqrt{n}.\nonumber \end{eqnarray} The task is to show that $\frac{1}{\sqrt{n}} W_n(t,r)$ can be approximated by $\frac{\mu}{\sqrt{n}} Z_{nt}(\omega)$ uniformly in both $r\in[-R,R]$ and $t\in [0,T]$ with probability tending to one. The main work goes toward approximation uniformly in $t\in[0,T]$ for a fixed $r$. Uniformity in $r\in[-R,R]$ then comes easily at the end of this section, completing the proof of Theorem \ref{QMCurrent}. Before the main work we prove two lemmas that remove a few technical difficulties. One technical difficulty is presented by small times $t$. For any fixed $\delta>0$ and $t\geq\delta$ we will use the quenched central limit theorem to approximate the probabilities in the definition of $W_n(t,r)$. However, we cannot do this approximation for arbitrarily small $t$ all at once. The following lemma will be used later to handle the small values of $t$.
\begin{lem}\label{supWnt} There exists a constant $C<\infty$ such that, for any $r\in\mathbb R$ and $\delta>0$, \[ \limsup_{n\rightarrow\infty} \frac{1}{\sqrt{n}} E_P \Bigl[ \sup _{t\in[0,\delta]} |W_{n}(t,r)| \Bigr] \leq C\sqrt{\delta}. \] \end{lem} \begin{pf} The triangle inequality implies that \begin{eqnarray}\label{triineq} &&\frac{1}{\sqrt{n}}E_P \Bigl[ \sup_{t\in[0,\delta]} |W_{n}(t,r) | \Bigr] \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\leq\frac{1}{\sqrt{n}} E_P [ |W_{n}(0,r)| ] + \frac {1}{\sqrt {n}} E_P \Bigl[ \sup_{t\in[0,\delta]} |W_{n}(t,r) - W_n(0,r) | \Bigr]. \end{eqnarray} For $r>0$, \[ \frac{1}{\sqrt{n}} W_n(0,r) = \frac{1}{\sqrt{n}} \sum_{0<m\leq r\sqrt {n}} E_\omega(\eta_0(m)) - \mu r = \frac{1}{\sqrt{n}} \sum _{0<m\leq r\sqrt {n}} \bar{\mu}(\theta^m\omega) - \mu r . \] A similar equality holds for $r\leq0$. Therefore, the ergodic theorem implies that the first term on the right-hand side of \eqref{triineq} vanishes as $n\rightarrow\infty$, and so it remains only to show that \begin{equation}\label{WtrW0r} \limsup_{n\rightarrow\infty} \frac{1}{\sqrt{n}} E_P \Bigl[ \sup _{t\in[0,\delta]} |W_{n}(t,r) - W_n(0,r) | \Bigr] \leq C\sqrt{\delta}. \end{equation} Recalling \eqref{oldcurrent} and the fact that $W_n(t,r)= E_\omega Y_n(t,r) - \mu r \sqrt{n}$, we obtain that \begin{eqnarray*} && W_n(t,r)-W_n(0,r) \\ &&\qquad= \sum_{m> r\sqrt{n}} E_\omega[ \eta_0(m) ] P_\omega\bigl( X^m_{nt} \leq nt\mathrm{v}_P+ r\sqrt{n} \bigr) \\ &&\qquad\quad {}- \sum_{m\leq r\sqrt{n}}E_\omega[ \eta_0(m) ] P_\omega \bigl( X^m_{nt} > nt\mathrm{v}_P+ r\sqrt{n} \bigr). 
\end{eqnarray*} Therefore, \begin{eqnarray*} && \sup_{t\in[0,\delta]} |W_n(t,r)-W_n(0,r)| \\ &&\qquad\leq\sum_{m>r\sqrt{n}} E_\omega[ \eta_0(m) ] \sup_{t\in [0,\delta]} P_{\theta ^m \omega}\bigl( X_{nt} - nt\mathrm{v}_P\leq r\sqrt{n} - m\bigr) \\ &&\qquad\quad{} + \sum_{m\leq r\sqrt{n}} E_\omega[ \eta_0(m) ] \sup_{t\in [0,\delta]} P_{\theta^m\omega}\bigl(X_{nt} - nt\mathrm{v}_P> r\sqrt{n} - m\bigr) \\ &&\qquad\leq\sum_{m>r\sqrt{n}} E_\omega[ \eta_0(m) ] P_{\theta^m \omega }\Bigl( \inf _{t\in[0,\delta]} ( X_{nt} - nt\mathrm{v}_P) \leq r\sqrt{n} - m \Bigr) \\ &&\qquad\quad {} + \sum_{m\leq r\sqrt{n}} E_\omega[ \eta_0(m) ] P_{\theta ^m\omega}\Bigl( \sup_{t\in[0,\delta]} ( X_{nt} - nt\mathrm{v}_P) > r\sqrt{n} - m \Bigr). \end{eqnarray*} Then, the shift invariance of $P$ and Assumption \ref{ICasm} imply that \begin{eqnarray*} && E_P\Bigl\{ \sup_{t\in[0,\delta]} |W_n(t,r)-W_n(0,r)| \Bigr\}\\ &&\qquad\leq E_P\biggl\{ E_\omega[ \eta_0(0) ] \biggl[ \sum _{m>r\sqrt{n}} P_{\omega}\Bigl( \inf_{t\in[0,\delta]} ( X_{nt} - nt\mathrm {v}_P) \leq r\sqrt{n} - m \Bigr) \\ &&\qquad\hspace*{71pt}\quad {}+ \sum_{m\leq r\sqrt{n}} P_{\omega}\Bigl( \sup_{t\in[0,\delta]} ( X_{nt} - nt\mathrm{v}_P) > r\sqrt{n} - m \Bigr) \biggr] \biggr\} \\ &&\qquad \leq E_P\Bigl\{ E_\omega[ \eta_0(0)] \Bigl[ E_\omega \Bigl(\sup_{t\in [0,\delta]} ( X_{nt} - nt\mathrm{v}_P)^-\Bigr)\\ &&\qquad\quad \hspace*{68pt}{} + E_\omega\Bigl(\sup_{t\in [0,\delta]} ( X_{nt} - nt\mathrm{v}_P)^+ \Bigr) + 1 \Bigr] \Bigr\} \\ &&\qquad \leq2 E_P \Bigl\{ E_\omega[ \eta_0(0) ] E_\omega\Bigl(\sup _{t\in[0,\delta ]} | X_{nt} - nt\mathrm{v}_P| \Bigr) \Bigr\} + \mu. \end{eqnarray*} The Cauchy--Schwarz inequality, along with Assumption \ref{ICMasm} and Proposition \ref{UIprop}, implies that the right-hand side is bounded above by $C\sqrt{n\delta} + \mu$. Dividing by $\sqrt{n}$ and taking $n\rightarrow\infty$, we obtain \eqref{WtrW0r}. 
\end{pf} A second technical difficulty in the analysis of $W_n(t,r)$ is restricting the sums in the definition of $W_n(t,r)$ to $[-a(n)\sqrt {n},a(n)\sqrt{n}]$, where $a(n)$ is some sequence tending to $\infty$ slowly (to be specified later, but growing more slowly than any power of $n$). Let $W_n(t,r) = W_{n,1}(t,r) + W_{n,2}(t,r)$, where \begin{eqnarray*} W_{n,1}(t,r) &=& \sum_{m=1}^{\lfloor a(n)\sqrt{n} \rfloor} E_\omega[\eta _0(m)] P_\omega\bigl(X^m_{nt} \leq nt\mathrm{v}_P+ r\sqrt{n} \bigr) \\ &&{} - \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^0 E_\omega[\eta _0(m)] P_\omega\bigl(X^m_{nt} > nt\mathrm{v}_P+ r\sqrt{n}\bigr) - \mu r \sqrt{n}. \end{eqnarray*} The next lemma implies that the main contributions to $W_n(t,r)$ come from $W_{n,1}(t,r)$. \begin{lem} \label{Wn2} For any $\varepsilon>0$, $T<\infty$ and $r\in\mathbb R$, \[ \lim_{n\rightarrow\infty}P\biggl( \sup_{t\in[0,T]} \frac{1}{\sqrt{n}}| W_{n,2}(t,r)| \geq\varepsilon\biggr) = 0. \] \end{lem} \begin{pf} It is enough to show that $E_P [\sup_{t\in[0,T]} |W_{n,2}(t,r)|] = o(\sqrt{n})$.
Similarly to the proof of Lemma \ref{supWnt}, we obtain that \begin{eqnarray*} &&\sup_{t\in[0,T]} |W_{n,2}(t,r)|\\ &&\qquad\leq\sum_{m>\lfloor a(n)\sqrt{n} \rfloor} E_\omega[\eta_0(m)] P_{\theta^m \omega}\Bigl( \inf_{t\in[0,T]} \bigl( X_{nt} - nt\mathrm{v}_P- r\sqrt{n} \bigr) \leq- m \Bigr) \\ &&\qquad\quad {} + \sum_{m\leq-\lfloor a(n)\sqrt{n} \rfloor} E_\omega[\eta _0(m)] P_{\theta^m\omega }\Bigl( \sup_{t\in[0,T]} \bigl( X_{nt} - nt\mathrm{v}_P- r\sqrt {n} \bigr) > - m \Bigr) , \end{eqnarray*} and the shift invariance of $P$ implies that \begin{eqnarray*} &&\hspace*{-4pt} E_P\Bigl\{ \sup_{t\in[0,T]} |W_{n,2}(t,r)| \Bigr\}\\ &&\hspace*{-4pt}\qquad\leq E_P\biggl\{ E_\omega[ \eta_0(0) ] \biggl[ \sum _{m>\lfloor a(n)\sqrt{n} \rfloor} P_{\omega}\Bigl( \inf_{t\in [0,T]} \bigl( X_{nt} - nt\mathrm{v}_P- r\sqrt{n} \bigr) \leq- m \Bigr) \\ &&\qquad\hspace*{67pt}\quad {}+ \sum_{m\leq-\lfloor a(n)\sqrt{n} \rfloor} P_{\omega}\Bigl( \sup _{t\in[0,T]} \bigl( X_{nt} - nt\mathrm{v}_P- r\sqrt{n} \bigr) > - m \Bigr) \biggr] \biggr\} \\ &&\hspace*{-4pt}\qquad \leq E_P \Bigl\{ E_\omega[ \eta_0(0)] \Bigl[ E_\omega \Bigl(\sup_{t\in [0,T]} \bigl( X_{nt} - nt\mathrm{v}_P-r\sqrt{n} + \bigl\lfloor a(n)\sqrt {n} \bigr\rfloor \bigr)^-\Bigr)\\ &&\qquad\hspace*{64pt}\quad {} + E_\omega\Bigl(\sup_{t\in[0,T]} \bigl( X_{nt} - nt\mathrm{v}_P -r\sqrt{n} - \bigl\lfloor a(n)\sqrt{n} \bigr\rfloor\bigr)^+ \Bigr) \Bigr] \Bigr\} \\ &&\hspace*{-4pt}\qquad \leq2E_P \Bigl\{ E_\omega[ \eta_0(0)]\Bigl[ E_\omega \Bigl( \sup_{t\in [0,T]} \bigl|X_{nt}- nt\mathrm{v}_P- r\sqrt{n}\bigr| \\ &&\qquad\hspace*{120pt}\quad{} \times \mathbf{1} \Bigl\{ \sup_{t\in[0,T]} \bigl|X_{nt} - nt\mathrm {v}_P- r\sqrt{n} \bigr|\\ &&\qquad\hspace*{213pt}\quad{} \geq a(n) \sqrt{n} \Bigr\} \Bigr) \Bigr] \Bigr\}. \end{eqnarray*} Let $p=2+\varepsilon$ for some $\varepsilon>0$ satisfying Assumption \ref{ICMasm}, and let $1/p+1/q = 1$. Note that $p>2$ implies that $q\in(1,2)$. 
Then, H\"older's inequality implies that \begin{eqnarray*} &&E_P\Bigl\{ \sup_{t\in[0,T]} |W_{n,2}(t,r)| \Bigr\} \\ &&\qquad\leq C E_P\Bigl\{ E_\omega\Bigl( \sup_{t\in[0,T]} \bigl|X_{nt}- nt\mathrm{v}_P- r\sqrt{n}\bigr|\\ &&\hspace*{86pt}\qquad{}\times \mathbf{1}{ \Bigl\{ \sup_{t\in[0,T]} \bigl|X_{nt} - nt\mathrm{v}_P- r\sqrt{n} \bigr| \geq a(n) \sqrt{n} \Bigr\} } \Bigr) ^q \Bigr\}^{1/q}, \end{eqnarray*} applying the Cauchy--Schwarz inequality to the inner expectation \begin{eqnarray*} && \leq C E_P\Bigl\{ E_\omega\Bigl( \sup_{t\in[0,T]} \bigl|X_{nt}- nt\mathrm{v}_P- r\sqrt{n}\bigr|^2 \Bigr)^{q/2}\\ &&\hspace*{14pt} \qquad{} \times P_\omega\Bigl( \sup_{t\in[0,T]} \bigl|X_{nt} - nt\mathrm{v}_P- r\sqrt{n} \bigr| \geq a(n) \sqrt{n} \Bigr)^{q/2} \Bigr\}^{1/q} \end{eqnarray*} by H\"older's inequality again and because probabilities are bounded above by $1$ \begin{eqnarray} \label{EWn2upper} & \leq& C \mathbb{E}\Bigl( \sup_{t\in[0,T]} \bigl|X_{nt}- nt\mathrm {v}_P- r\sqrt{n}\bigr|^2 \Bigr)^{1/2} \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}\times \mathbb{P}\Bigl( \sup_{t\in[0,T]} \bigl|X_{nt} - nt\mathrm{v}_P- r\sqrt {n} \bigr| \geq a(n) \sqrt{n} \Bigr)^{(2-q)/2q}. \end{eqnarray} Proposition \ref{UIprop} implies that (for a fixed $T<\infty$ and $r\in \mathbb R$) the first term on \eqref{EWn2upper} is $\mathcal{O}(\sqrt {n})$, and the averaged functional central limit theorem [part (3) of Theorem~\ref {QCLTthm}] implies that the last term in \eqref{EWn2upper} vanishes as $n\rightarrow\infty$. This completes the proof of the lemma. \end{pf} The majority of this section is devoted the the proof of the following proposition which is a slightly weaker version of Theorem \ref{QMCurrent}. \begin{prop}\label{Wn1} For any $\varepsilon>0$, $T<\infty$ and $r\in\mathbb R$, \[ \lim_{n\rightarrow\infty} P\biggl( \sup_{t\in[0,T]} \frac {1}{\sqrt{n}} | W_{n}(t,r) - \mu Z_{nt}(\omega) | \geq\varepsilon\biggr) = 0. 
\] Therefore, $\frac{1}{\sqrt{n}} W_{n}(\cdot,r)$ converges in distribution to $ \mu\sigma_2 W( \cdot)$, where $W(\cdot)$ is a standard Brownian motion. \end{prop} \begin{pf} For any $\delta>0$, \begin{eqnarray} & &\mathbb{P}\biggl( \sup_{t\in[0,T]} \frac{1}{\sqrt{n}} | W_{n}(t,r) - \mu Z_{nt}(\omega) | \geq\varepsilon\biggr) \nonumber\\ &&\qquad \leq\mathbb{P}\biggl( \sup_{t\in[0,\delta]} | W_n(t,r) | \geq\frac{\varepsilon}{2} \sqrt{n} \biggr) + \mathbb{P}\biggl( \sup_{t\in[0,\delta]} \mu| Z_{nt}(\omega) | \geq \frac{\varepsilon}{2} \sqrt{n} \biggr) \nonumber\\ &&\qquad\quad {} + \mathbb{P}\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | W_{n}(t,r) - \mu Z_{nt}(\omega) | \geq\varepsilon\biggr) \nonumber\\ &&\qquad \leq\frac{2}{\varepsilon\sqrt{n}} \mathbb{E}\Bigl[\sup _{t\in[0,\delta]} | W_n(t,r) |\Bigr] + \mathbb{P}\biggl( \sup_{t\in[0,\delta]} \mu| Z_{nt}(\omega) | \geq\frac{\varepsilon }{2} \sqrt{n} \biggr) \label{tlessd} \\ &&\qquad\quad {} + \mathbb{P}\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | W_{n,2}(t,r) | \geq\frac{\varepsilon}{2} \biggr) \label {tlessda} \\ &&\qquad\quad {} + \mathbb{P}\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | W_{n,1}(t,r) - \mu Z_{nt}(\omega) | \geq\frac{\varepsilon }{2} \biggr). \label{dtT} \end{eqnarray} Letting $n\rightarrow\infty$, Lemma \ref{supWnt} and the fact that $Z_{nt}(\omega )/\sqrt{n}$ converges to Brownian motion imply that the two terms in \eqref{tlessd} can be made arbitrarily small by taking $\delta \rightarrow0$. Also, Lemma \ref{Wn2} implies that the term in \eqref{tlessda} vanishes as $n\rightarrow\infty$. Thus, it is enough to show that, for any $\delta>0$, \eqref {dtT} vanishes as $n\rightarrow\infty$. 
For this, we need the following lemmas, whose proofs we defer for now.\vspace*{-1pt} \begin{lem} \label{WtildeW} Let \begin{eqnarray*} \widetilde{W}_{n,1}(t,r) &:=& \sum_{m=1}^{\lfloor a(n)\sqrt{n} \rfloor} E_\omega( \eta_0(m)) \Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\theta^m\omega)-m}{\sqrt {n}} + r \biggr) \\[-1pt] &&{} - \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^0 E_\omega ( \eta_0(m)) \Phi _{\sigma_1^2 t}\biggl( - \frac{Z_{nt}(\theta^m\omega)-m}{\sqrt{n}} - r \biggr)\\ &&{} - \mu r \sqrt{n}. \end{eqnarray*} Then, for any $\varepsilon>0$, $r\in\mathbb R$ and $0<\delta <T<\infty$, \[ \lim_{n\rightarrow\infty} P\biggl( \sup_{t \in[\delta,T]} \frac {1}{\sqrt{n}} |W_{n,1}(t,r) - \widetilde{W}_{n,1}(t,r)| \geq\varepsilon \biggr) = 0. \] \end{lem} \begin{lem} \label{tWhW} Let \begin{eqnarray*} \widehat{W}_{n,1}(t,r) &:=& \sum_{m=1}^{\lfloor a(n)\sqrt{n} \rfloor } E_\omega( \eta_0(m)) \Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)-m}{\sqrt{n}} + r \biggr) \\[-1pt] &&{} - \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^0 E_\omega( \eta _0(m)) \Phi_{\sigma_1^2 t}\biggl( - \frac{Z_{nt}(\omega)-m}{\sqrt {n}} - r \biggr) - \mu r \sqrt{n}. \end{eqnarray*} Then, for any $\varepsilon>0$, $r\in\mathbb R$ and $0<\delta <T<\infty$, \[ \lim_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}}|\widetilde {W}_{n,1}(t,r) - \widehat{W}_{n,1}(t,r)| \geq\varepsilon \biggr) = 0. \] \end{lem} \begin{lem} \label{hWbW} Let \begin{eqnarray*} \overline{W}_{n,1}(t,r) &:=& \sum_{m=1}^{\lfloor a(n)\sqrt{n} \rfloor} \mu\Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)-m}{\sqrt{n}} + r \biggr) \\ &&{} - \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^0 \mu\Phi_{\sigma _1^2 t}\biggl( - \frac{Z_{nt}(\omega)-m}{\sqrt{n}} - r \biggr) - \mu r \sqrt{n}. \end{eqnarray*} Then, for any $\varepsilon>0$, $r\in\mathbb R$ and $0<\delta <T<\infty$, \[ \lim_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}}|\widehat{W}_{n,1}(t,r) - \overline{W}_{n,1}(t,r)| \geq\varepsilon \biggr) = 0.
\] \end{lem} Assuming for now Lemmas \ref{WtildeW}, \ref{tWhW} and \ref{hWbW}, to finish the proof of Proposition~\ref{Wn1}, it remains to compare $\overline {W}_{n,1}(t,r)$ with $\mu Z_{nt}(\omega)$. Since $\Phi_{\sigma_1^2t}(\cdot)$ is strictly increasing and bounded above by 1, we have using a Riemann sum approximation that, for any $t\in[0,T]$, \begin{eqnarray} \label{RSapprox} \hspace*{18pt}&&\biggl| \frac{\overline{W}_{n,1}(t,r)}{\sqrt{n}} + \mu r \nonumber\\ &&{} - \mu\int _0^{a(n)} \biggl(\Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)}{\sqrt{n}} + r -x \biggr) - \Phi _{\sigma_1^2 t}\biggl( - \frac{Z_{nt}(\omega)}{\sqrt{n}} - r -x \biggr) \biggr)\,dx \biggr| \\ &&\qquad\leq\frac{2\mu}{\sqrt{n}}.\nonumber \end{eqnarray} It is an easy exercise in calculus to show that, for any $z\in\mathbb R$ and $A>0$, \[ \int_0^{A} \bigl(\Phi_{\alpha^2}( z -x ) - \Phi_{\alpha ^2}( -z -x )\bigr)\, dx = z + \Psi_{\alpha^2}(A+z) - \Psi_{\alpha^2}(A-z), \] where $\Psi_{\alpha^2}(x)$ is defined in \eqref{Psidef}. Therefore, \begin{eqnarray*} &&\int_0^{a(n)} \biggl(\Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega )}{\sqrt{n}} +r -x \biggr) - \Phi_{\sigma_1^2 t}\biggl( - \frac{Z_{nt}(\omega)}{\sqrt {n}} - r -x \biggr)\biggr)\, dx \\ &&\qquad = \frac{Z_{nt}(\omega)}{\sqrt{n}} + r + \Psi_{\sigma_1^2 t} \biggl(a(n)+\frac{Z_{nt}(\omega)}{\sqrt{n}} + r \biggr)\\ &&\qquad\quad {} - \Psi_{\sigma _1^2 t} \biggl(a(n)-\frac{Z_{nt}(\omega)}{\sqrt{n}} - r \biggr). 
\end{eqnarray*} Recalling \eqref{RSapprox}, this implies that for $\varepsilon>0$ and $n$ sufficiently large, \begin{eqnarray}\label{Psireduce} && P\biggl( \sup_{t\in[0,T]} \biggl| \frac{\overline {W}_{n,1}(t,r)}{\sqrt{n}} - \mu\frac{Z_{nt}(\omega)}{\sqrt{n}} \biggr| \geq\varepsilon\biggr) \nonumber\\ &&\qquad \leq P\biggl( \sup_{t\in[0,T]} \biggl| \Psi_{\sigma_1^2 t} \biggl(a(n)+\frac{Z_{nt}(\omega)}{\sqrt{n}}+ r \biggr)\\ &&\qquad\hspace*{42pt}\quad{} - \Psi_{\sigma _1^2 t} \biggl(a(n)-\frac{Z_{nt}(\omega)}{\sqrt{n}} - r \biggr) \biggr| \geq \frac{\varepsilon}{2 \mu} \biggr).\nonumber \end{eqnarray} A simple calculation shows that $\Psi_{\alpha^2}'(x) = - \Phi _{\alpha^2}(-x) < 0$, and so $\Psi_{\alpha^2}(x)$ is decreasing in $x$. Another direct calculation shows that $\frac{d}{d\alpha} \Psi _{\alpha^2} (x) = \alpha\phi_{\alpha^2}(x) >0$. Thus, $\Psi_{\alpha^2}(x)$ is increasing in $\alpha$. Consequently, if $|Z_{nt}(\omega)| \leq a(n)\sqrt{n}/2$ and $t\leq T$, \begin{eqnarray*} &&\biggl| \Psi_{\sigma_1^2 t}\biggl(a(n)+\frac{Z_{nt}(\omega)}{\sqrt {n}} + r \biggr) - \Psi_{\sigma_1^2 t}\biggl(a(n)-\frac{Z_{nt}(\omega)}{\sqrt{n}} - r \biggr) \biggr|\\ &&\qquad\leq2 \Psi_{\sigma_1^2 t}\bigl(a(n)/2 - |r| \bigr) \\ &&\qquad\leq2 \Psi_{\sigma_1^2 T}\bigl( a(n)/2 - |r| \bigr). \end{eqnarray*} Since $\lim_{x\rightarrow\infty} \Psi_{\sigma_1^2 T}(x) = 0$,\vspace*{1pt} we have $\Psi_{\sigma_1^2 T}(a(n)/2-|r|) < \frac{\varepsilon}{4\mu}$ for all $n$ large enough. Thus, recalling \eqref{Psireduce}, we obtain that, for any $\varepsilon>0$ and $n$ sufficiently large, \[ P\biggl( \sup_{t\in[0,T]} \biggl| \frac{\overline {W}_{n,1}(t,r)}{\sqrt{n}} - \mu\frac{Z_{nt}(\omega)}{\sqrt{n}} \biggr| \geq\varepsilon\biggr) \leq P\biggl( \sup_{t\in[0,T]}\biggl | \frac{Z_{nt}(\omega)}{\sqrt {n}} \biggr| \geq\frac{a(n)}{2} \biggr). \] Since $t\mapsto\frac{Z_{nt}(\omega)}{\sqrt{n}}$ converges in distribution to a Brownian motion, this last probability tends to zero as $n\rightarrow\infty$.
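The calculus identity and the monotonicity properties of $\Psi_{\alpha^2}$ used above can be checked numerically. The sketch below is illustrative only: it takes $\Psi_{\alpha^2}(x) = \alpha^2\phi_{\alpha^2}(x) - x\Phi_{\alpha^2}(-x)$, an assumed closed form chosen to match the stated properties $\Psi_{\alpha^2}'(x) = -\Phi_{\alpha^2}(-x)$ and $\lim_{x\to\infty}\Psi_{\alpha^2}(x)=0$, and compares both sides of the integral identity by quadrature.

```python
import math

def Phi(x, a=1.0):
    """CDF of a centered Gaussian with variance a**2."""
    return 0.5 * (1.0 + math.erf(x / (a * math.sqrt(2.0))))

def phi(x, a=1.0):
    """Density of a centered Gaussian with variance a**2."""
    return math.exp(-x * x / (2.0 * a * a)) / (a * math.sqrt(2.0 * math.pi))

def Psi(x, a=1.0):
    """Assumed closed form: satisfies Psi'(x) = -Phi(-x) and Psi(+inf) = 0."""
    return a * a * phi(x, a) - x * Phi(-x, a)

def lhs(z, A, a=1.0, n=20000):
    """Trapezoidal approximation of int_0^A (Phi(z - x) - Phi(-z - x)) dx."""
    h = A / n
    f = lambda x: Phi(z - x, a) - Phi(-z - x, a)
    return h * (0.5 * (f(0.0) + f(A)) + sum(f(k * h) for k in range(1, n)))

# Compare with the claimed right-hand side z + Psi(A + z) - Psi(A - z)
z, A, a = 0.7, 3.0, 1.3
err = abs(lhs(z, A, a) - (z + Psi(A + z, a) - Psi(A - z, a)))
```

With these definitions, `err` is of the order of the quadrature error, and one can likewise confirm that $\Psi_{\alpha^2}(x)$ is decreasing in $x$ and increasing in $\alpha$.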
This completes the proof of Proposition \ref{Wn1}. \end{pf} We now return to the proofs of Lemmas \ref{WtildeW}--\ref{hWbW}. \begin{pf*}{Proof of Lemma \protect\ref{WtildeW}} Let \begin{eqnarray*} D(n,\omega) &:=& \sup_{x\in\mathbb R} \biggl| P_\omega\biggl( \frac {X_n - n\mathrm{v}_P+ Z_n(\omega )}{\sqrt{n}} \leq x \biggr) - \Phi_{\sigma_1^2}(x) \biggr|\quad \mbox {and}\\ \bar{D}(n,\omega) &:=& \sup_{k\geq n} D(k,\omega). \end{eqnarray*} Theorem \ref{QCLTthm} implies that $\lim_{n\rightarrow\infty} \bar {D}(n,\omega) = 0$, $P\mathrm{\mbox{-}a.s.}$, and so by the bounded convergence theorem, $\lim _{n\rightarrow \infty} E_P[ \bar{D}(n,\omega)^p] = 0$ for any $p>0$. Thus, it is possible to choose the sequence $a(n)$ tending to infinity slowly enough so that \[ \lim_{n\rightarrow\infty} a(n) (E_P[ \bar{D}(\delta n,\omega)^2 ] )^{1/2} = 0\qquad \forall\delta>0 \] [e.g., let $a(n) = (E_P[ \bar{D}(\sqrt{n},\omega)^2 ])^{-1/4}$]. The definition of $D(n,\omega)$ implies that, for any $t>0$, \[ | W_{n,1}(t,r) - \widetilde{W}_{n,1}(t,r) | \leq \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^{\lfloor a(n)\sqrt{n} \rfloor} E_{\theta^m\omega}( \eta _0(0)) D(nt,\theta^m\omega) . 
\] Therefore, \begin{eqnarray*} &&P\Bigl( \sup_{t\in[\delta,T]} | W_{n,1}(t,r) - \widetilde {W}_{n,1}(t,r) | \geq\varepsilon\sqrt{n} \Bigr) \\ &&\qquad\leq P\Biggl( \sup_{t\in[\delta,T]} \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^{\lfloor a(n)\sqrt{n} \rfloor} E_{\theta^m\omega}( \eta _0(0)) D(nt,\theta^m\omega) \geq\varepsilon\sqrt {n} \Biggr) \\ &&\qquad\leq P\Biggl( \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^{\lfloor a(n)\sqrt{n} \rfloor} E_{\theta^m\omega}( \eta_0(0)) \bar{D}(\delta n,\theta^m\omega) \geq\varepsilon\sqrt{n} \Biggr) \\ &&\qquad\leq\frac{2 a(n)}{\varepsilon} E_P[E_{\omega}( \eta_0(0)) \bar {D}(\delta n,\omega)] \\ &&\qquad\leq\frac{2 a(n)}{\varepsilon} ( E_P[(E_\omega\eta_0(0))^2] )^{1/2} ( E_P[ \bar{D}(\delta n,\omega)^2] )^{1/2}, \end{eqnarray*} where the next to last inequality follows from Chebyshev's inequality and the shift invariance of $P$. Our choice of the sequence $a(n)$ ensures that this last term vanishes as $n\rightarrow\infty$. \end{pf*} \begin{pf*}{Proof of Lemma \protect\ref{tWhW}} Note that the mean value theorem implies \begin{eqnarray*} | \Phi_{\sigma_1^2t}(x) - \Phi_{\sigma_1^2 t}(y) | &\leq&\Bigl( \sup _{z\in\mathbb R} \Phi_{\sigma_1^2t}'(z) \Bigr) |x-y|\\ & =& \frac {1}{\sigma_1\sqrt{2\pi t}}|x-y|\qquad \forall x,y \in\mathbb R. \end{eqnarray*} Therefore, \begin{eqnarray*} &&\sup_{t\in[\delta,T]} |\widetilde{W}_{n,1}(t,r) - \widehat {W}_{n,1}(t,r)| \\ &&\qquad\leq\sup _{t\in[\delta,T]} \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^{\lfloor a(n)\sqrt{n} \rfloor} \bar\mu(\theta^m \omega) \frac{1}{\sigma_1\sqrt{2\pi t}} \biggl| \frac{Z_{nt}(\theta ^m\omega) - Z_{nt}(\omega )}{\sqrt{n}} \biggr| \\ &&\qquad\leq\frac{2 a(n)}{\sigma_1 \sqrt{2\pi\delta}} \sup_{t\in [\delta,T]} \max_{| m| \leq a(n)\sqrt{n}} | Z_{nt}(\theta^m\omega) - Z_{nt}(\omega) | \\ &&\qquad\quad {} \times\Biggl( \frac{1}{2 a(n)\sqrt{n}} \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor+1}^{\lfloor a(n)\sqrt{n} \rfloor} \bar\mu (\theta^m \omega) \Biggr). 
\end{eqnarray*} The ergodic theorem implies that the averaged sum on the last line converges to $\mu$, $P\mathrm{\mbox{-}a.s.}$ Thus, to finish the proof of the lemma, it is enough to show that \[ \lim_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \max_{| m| \leq a(n)\sqrt {n}} | Z_{nt}(\theta^m\omega) - Z_{nt}(\omega) | \geq \frac{\varepsilon\sqrt {n}}{a(n)} \biggr) = 0\qquad \forall\varepsilon>0. \] Since $Z_{nt}(\theta^m\omega) = h(m+\lfloor nt\mathrm{v}_P \rfloor ,\omega)-h(m,\omega)$, \[ |Z_{nt}(\theta^m\omega) - Z_{nt}(\omega)| \leq|h(m,\omega)| + |h(m+\lfloor nt\mathrm{v}_P \rfloor,\omega) - h(\lfloor nt\mathrm{v}_P \rfloor,\omega) |. \] Thus, \begin{eqnarray*} && \sup_{t\in[\delta,T]} \max_{| m| \leq a(n)\sqrt{n}} | Z_{nt}(\theta ^m\omega) - Z_{nt}(\omega) | \\ &&\qquad \leq2 \max_{x\in[0,nT]} \max_{1\leq m \leq a(n)\sqrt{n} } |h(x+m,\omega) - h(x,\omega) | \\ &&\qquad \leq6 \max_{0 \leq i\leq\sqrt{n}T/a(n)} \max_{1 \leq m \leq a(n)\sqrt{n}} \bigl|h\bigl(i \bigl\lfloor a(n)\sqrt{n} \bigr\rfloor+ m,\omega\bigr) - h\bigl(i \bigl\lfloor a(n)\sqrt{n} \bigr\rfloor,\omega\bigr)\bigr|. \end{eqnarray*} This implies that \begin{eqnarray*} && P\biggl( \sup_{t\in[\delta,T]} \max_{| m| \leq a(n)\sqrt{n}} | Z_{nt}(\theta^m\omega) - Z_{nt}(\omega) | \geq\frac {\varepsilon\sqrt{n}}{a(n)} \biggr)\\ &&\qquad \leq P\biggl( \max_{0 \leq i\leq\sqrt{n}T/a(n)} \max_{1 \leq m \leq a(n)\sqrt{n}} \bigl|h\bigl(i \bigl\lfloor a(n)\sqrt{n} \bigr\rfloor+ m,\omega\bigr) - h\bigl(i \bigl\lfloor a(n)\sqrt{n} \bigr\rfloor,\omega\bigr)\bigr| \\ &&\hspace*{288pt}\qquad\geq\frac{\varepsilon \sqrt{n}}{6a(n)} \biggr)\\ &&\qquad\leq\frac{\sqrt{n}T}{a(n)} P\biggl( \max_{1 \leq m \leq a(n)\sqrt{n}} |h(m,\omega)| \geq\frac{\varepsilon\sqrt{n}}{6a(n)} \biggr), \end{eqnarray*} where the last inequality is from a union bound and the shift invariance of $P$. 
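The mean value theorem bound used in this proof, $\sup_z \Phi_{\sigma_1^2 t}'(z) = \frac{1}{\sigma_1\sqrt{2\pi t}}$, can be checked numerically; the parameter values in the following sketch are arbitrary.

```python
import math

def Phi(x, var):
    """CDF of a centered Gaussian with variance `var`."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0 * var)))

sigma1, t = 1.4, 0.25
L = 1.0 / (sigma1 * math.sqrt(2.0 * math.pi * t))  # claimed Lipschitz constant

# The largest difference quotient of Phi_{sigma1^2 t} over a grid never
# exceeds L, and it nearly attains L for points close to the origin.
pts = [-4.0 + 0.05 * k for k in range(161)]
worst = max(
    (Phi(x, sigma1 ** 2 * t) - Phi(y, sigma1 ** 2 * t)) / (x - y)
    for i, x in enumerate(pts) for y in pts[:i]
)
```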
Recalling Lemma \ref{Goldsheid}, there exist constants $C,\eta>0$ such that, for any fixed $\varepsilon>0$ and $0<\delta<T<\infty$, \begin{eqnarray*} && P\biggl( \sup_{t\in[\delta,T]} \max_{| m| \leq a(n)\sqrt{n}} | Z_{nt}(\theta^m\omega) - Z_{nt}(\omega) | \geq\frac {\varepsilon\sqrt{n}}{a(n)} \biggr)\\ &&\qquad\leq\frac{\sqrt{n}T}{a(n)} \biggl( \frac{6a(n)}{\varepsilon\sqrt {n}} \biggr)^{2+2\eta} C \bigl( a(n)\sqrt{n} \bigr)^{1+\eta} \\ &&\qquad= \mathcal{O}( a(n)^{2+3\eta} n^{-\eta/2} ). \end{eqnarray*} Since $a(n)$ grows slower than polynomially in $n$, this last term vanishes as $n\rightarrow\infty$. \end{pf*} \begin{pf*}{Proof of Lemma \protect\ref{hWbW}} For any integer $R$ let \begin{eqnarray*} \widehat{W}_{n,1}^R(t,r) &:=& \sum_{m=1}^{\lfloor R \sqrt{n} \rfloor } E_\omega( \eta_0(m)) \Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)-m}{\sqrt{n}} + r \biggr) \\ &&{} - \sum_{m=-\lfloor R\sqrt{n} \rfloor+1}^0 E_\omega( \eta_0(m)) \Phi_{\sigma_1^2 t}\biggl( - \frac{Z_{nt}(\omega)-m}{\sqrt{n}} - r \biggr) - \mu r \sqrt{n} \end{eqnarray*} and \begin{eqnarray*} \overline{W}{}^R_{n,1}(t,r) &:=& \sum_{m=1}^{ \lfloor R \sqrt{n} \rfloor} \mu\Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)-m}{\sqrt{n}} + r \biggr) \\ &&{}- \sum_{m=- \lfloor R \sqrt{n} \rfloor+1}^0 \mu\Phi_{\sigma_1^2 t} \biggl( - \frac{Z_{nt}(\omega )-m}{\sqrt{n}} - r \biggr) - \mu r \sqrt{n}. 
\end{eqnarray*} Then, it is enough to show that \begin{equation}\label{hWbWR} \lim_{n\rightarrow\infty} \frac{1}{\sqrt{n}} E_P \Bigl[ \sup _{t\in[\delta,T]} |\widehat{W}_{n,1}^R(t,r) - \overline{W}{}^R_{n,1}(t,r)| \Bigr] = 0\qquad \forall R<\infty, \end{equation} and that \begin{eqnarray}\label{hWR} && \lim_{R\rightarrow\infty} \limsup_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | \widehat{W}_{n,1}(t,r) - \widehat {W}_{n,1}^R(t,r) | \geq\varepsilon\biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad = 0\qquad \forall\varepsilon>0 \end{eqnarray} and \begin{eqnarray}\label{bWR} &&\lim_{R\rightarrow\infty} \limsup_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | \overline{W}_{n,1}(t,r) - \overline {W}{}^R_{n,1}(t,r) | \geq\varepsilon\biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad= 0\qquad \forall\varepsilon>0. \end{eqnarray} To bound \eqref{hWbWR}, we fix another parameter $L$ and then divide the interval $(-\lfloor R\sqrt{n} \rfloor, \lfloor R\sqrt{n} \rfloor ]$ into $2 RL$ intervals, each of length approximately $ \sqrt{n}/L $. For ease of notation, let $B_{n,L}(\ell) := \{ m \in\mathbb Z\dvtx \frac{(\ell-1)\sqrt{n}}{L} < m \leq\frac {\ell\sqrt{n}}{L} \}$. Now, for any $m\in B_{n,L}(\ell)$ and $t \in[\delta,T]$, \begin{eqnarray*} \biggl| \Phi_{\sigma_1^2 t} \biggl( \frac{Z_{nt}(\omega) - m}{\sqrt{n}} + r \biggr) - \Phi_{\sigma_1^2 t}\biggl( \frac{Z_{nt}(\omega)}{\sqrt{n}} - \frac{\ell}{L} + r \biggr) \biggr| &\leq&\frac{1}{\sigma_1\sqrt{2\pi t}} \biggl| \frac{m}{\sqrt{n}} - \frac {\ell}{L} \biggr|\\ & \leq&\frac{C}{L}, \end{eqnarray*} where the constant $C$ depends only on $\sigma_1$ and $\delta>0$.
Thus,\vspace*{-1pt} \begin{eqnarray*} \frac{1}{\sqrt{n}} \widehat{W}_{n,1}^R(t,r) &=& \sum_{\ell= 1}^{RL}\biggl( \frac{1}{\sqrt{n}} \sum_{m\in B_{n,L}(\ell)} \bar\mu(\theta ^m\omega) \biggr) \Phi_{\sigma_1^2 t} \biggl( \frac{Z_{nt}(\omega)}{\sqrt{n}} - \frac{\ell}{L} + r \biggr) \\[-1pt] &&{} - \sum_{\ell= -RL+1}^{0}\biggl( \frac{1}{\sqrt{n}} \sum _{m\in B_{n,L}(\ell)} \bar\mu(\theta^m\omega) \biggr) \Phi_{\sigma_1^2 t}\biggl ( - \frac {Z_{nt}(\omega)}{\sqrt{n}} + \frac{\ell}{L} - r \biggr)\\[-1pt] &&{} + \frac{1}{\sqrt{n}} \sum_{m=1}^{\lfloor R \sqrt{n} \rfloor } \bar\mu(\theta ^m\omega) \mathcal{O}(L^{-1}) \\[-1pt] &&{}- \frac{1}{\sqrt{n}} \sum _{m=-\lfloor R\sqrt{n} \rfloor+ 1}^{0} \bar\mu(\theta^m\omega) \mathcal{O}(L^{-1}).\vspace*{-1pt} \end{eqnarray*} A similar equality also holds for $\overline{W}{}^R_{n,1}(t,r)$ with $\bar\mu (\theta^m\omega)$ replaced by $\mu$. Therefore, using the fact that $\Phi _{\alpha^2}$ is bounded by 1, we obtain that \begin{eqnarray*} && \sup_{t\in[\delta,T]} \frac{1}{\sqrt{n}} |\widehat {W}_{n,1}^R(t,r) - \overline {W}{}^R_{n,1}(t,r)|\\[-1pt] &&\qquad \leq\sum_{\ell= -RL+1}^{RL}\biggl| \frac{1}{\sqrt{n}} \sum _{m\in B_{n,L}(\ell)} \bigl( \bar\mu(\theta^m\omega) - \mu\bigr) \biggr|\\[-1pt] &&\qquad\quad {}+ \mathcal{O}(L^{-1}) \Biggl( \frac{1}{\sqrt{n}} \sum_{m=-\lfloor R\sqrt{n} \rfloor+ 1}^{\lfloor R \sqrt{n} \rfloor} \bar\mu(\theta^m\omega) + 2 R \mu \Biggr). \end{eqnarray*} Note that we were able to include the supremum over $t$ in the above inequality since the constant in the $\mathcal{O}(L^{-1})$ term is valid for any $t\geq\delta$. Taking expectations of the above with respect to the measure $P$ and letting $n\rightarrow\infty$, the ergodic theorem implies that the first term vanishes and the second term has $\limsup$ less than $4R\mu\mathcal{O}(L^{-1})$. Thus, taking $L\rightarrow\infty$ proves \eqref{hWbWR}. 
To bound \eqref{hWR}, let\vspace*{-1pt} \[ G_{n,R}:= \biggl\{ \omega\dvtx \sup_{t\in[\delta,T]} \biggl| \frac {Z_{nt}(\omega)}{\sqrt {n}} + r \biggr| \leq\frac{R}{2} \biggr\}. \] Since $t\mapsto Z_{nt}(\omega)/\sqrt{n}$ converges to a Brownian motion, \[ \lim _{R\rightarrow\infty} \lim_{n\rightarrow\infty} P(G_{n,R}) = 1 \] for any fixed $r\in\mathbb R$. Thus, \begin{eqnarray}\label{hWRG} && \lim_{R\rightarrow\infty} \limsup_{n\rightarrow\infty} P\biggl( \sup_{t\in[\delta,T]} \frac {1}{\sqrt{n}} | \widehat{W}_{n,1}(t,r) - \widehat {W}_{n,1}^R(t,r) | \geq\varepsilon\biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad \leq\lim_{R\rightarrow\infty}\limsup_{n\rightarrow\infty } \frac{1}{\varepsilon\sqrt {n}} E_P\Bigl[\sup_{t\in[\delta,T]} |\widehat{W}_{n,1}(t,r) - \widehat {W}_{n,1}^R(t,r) | \mathbf{1}_{G_{n,R}} \Bigr]. \end{eqnarray} If $\omega\in G_{n,R}$, $|m| \geq R\sqrt{n}$ and $t\leq T$, then $\Phi_{\sigma _1^2 t} ( | \frac{Z_{nt}(\omega)}{\sqrt{n}} + r | - \frac {|m|}{\sqrt{n}} ) \leq\Phi_{\sigma_1^2 T} ( \frac {R}{2} - \frac {|m|}{\sqrt{n}} )$. Therefore, \begin{eqnarray*} && \frac{1}{\sqrt{n}} E_P\Bigl[\sup_{t\in[\delta,T]} |\widehat{W}_{n,1}(t,r) - \widehat{W}_{n,1}^R(t,r) | \mathbf{1}_{G_{n,R}} \Bigr] \\ &&\qquad \leq\frac{1}{\sqrt{n}} \sum_{m=\lfloor R\sqrt{n} \rfloor +1}^{\lfloor a(n)\sqrt{n} \rfloor} \mu\Phi_{\sigma_1^2 T}\biggl( \frac{R}{2} - \frac{m}{\sqrt{n}} \biggr)\\ &&\qquad\quad {} + \frac{1}{\sqrt{n}} \sum_{m=-\lfloor a(n)\sqrt{n} \rfloor +1}^{-\lfloor R\sqrt{n} \rfloor} \mu\Phi_{\sigma_1^2 T}\biggl( \frac{R}{2} + \frac{m}{\sqrt{n}} \biggr) \\ &&\qquad \leq\mu\int_R^\infty\Phi_{\sigma_1^2 T}( R/2 - x ) \,dx + \mu\int_{-\infty}^{-R} \Phi_{\sigma_1^2 T} ( R/2 + x ) \,dx + \frac{\mu}{\sqrt{n}}, \end{eqnarray*} where the last inequality is from a Riemann sum approximation. Since the integrals in the last line can be made arbitrarily small by taking $R\rightarrow\infty$, recalling \eqref{hWRG} finishes the proof of \eqref{hWR}.
The proof of \eqref{bWR} is similar. \end{pf*} We conclude this section with the proof of Theorem \ref{QMCurrent}. \begin{pf*}{Proof of Theorem \protect\ref{QMCurrent}} To prove Theorem \ref{QMCurrent} from Proposition \ref{Wn1}, we need to justify the ability to include a supremum over $r\in[-R,R]$ inside the probability in the statement of Proposition \ref{Wn1}. A simple union bound implies that we may include a supremum over a finite set of $r$ values inside the probability in the statement of Proposition \ref{Wn1}. That is, for $N<\infty$ and $r_1,r_2,\ldots, r_N \in\mathbb R$,\looseness=1 \begin{equation}\label{finiter} \qquad\quad \lim_{n\rightarrow\infty} P\biggl( \max_{k\leq N} \sup_{t\in [0,T]} \frac{1}{\sqrt {n}} \bigl| E_\omega Y_{n}(t,r_k) - \mu r_k \sqrt{n} - \mu Z_{nt}(\omega) \bigr| \geq\varepsilon\biggr) = 0. \end{equation} Now, the definition of $Y_n(t,r)$ implies that $Y_n(t,r)$ is nondecreasing in $r$. Therefore, for any fixed $t$, $E_\omega Y_n(t,r) - \mu Z_{nt}(\omega)$ is nondecreasing in $r$. Choose $-R=r_1 < r_2 < \cdots< r_{N-1} < r_N = R$ such that $r_{k+1}-r_k \leq\frac{\varepsilon}{2\mu}$ for $k=1,\ldots, N-1$. Then, if $r\in[r_k,r_{k+1}]$, \begin{eqnarray*} && \bigl\{ \bigl| E_\omega Y_{n}(t,r) - \mu r \sqrt{n} - \mu Z_{nt}(\omega) \bigr| \geq\varepsilon\sqrt{n} \bigr\} \\ &&\qquad\subset\biggl\{ \bigl| E_\omega Y_{n}(t,r_k) - \mu r_k \sqrt{n} - \mu Z_{nt}(\omega) \bigr| \geq\frac{\varepsilon}{2}\sqrt{n} \biggr\} \\ &&\hspace*{12pt}\qquad{} \cup\biggl\{ \bigl| E_\omega Y_{n}(t,r_{k+1}) - \mu r_{k+1} \sqrt {n} - \mu Z_{nt}(\omega) \bigr| \geq\frac{\varepsilon}{2} \sqrt {n} \biggr\}. 
\end{eqnarray*} Taking unions over $r\in[-R,R]$ and $t\in[0,T]$ implies that \begin{eqnarray*} && \Bigl\{ \sup_{r\in[-R,R]} \sup_{t\in[0,T]} \bigl| E_\omega Y_{n}(t,r) - \mu r \sqrt{n} - \mu Z_{nt}(\omega) \bigr| \geq\varepsilon\sqrt {n} \Bigr\} \\ &&\qquad\subset\biggl\{ \max_{k\leq N} \sup_{t\in[0,T]} \bigl| E_\omega Y_{n}(t,r_k) - \mu r_k \sqrt{n} - \mu Z_{nt}(\omega) \bigr| \geq \frac{\varepsilon }{2} \sqrt{n} \biggr\}. \end{eqnarray*} Recalling \eqref{finiter} finishes the proof of Theorem \ref{QMCurrent}. \end{pf*} \section{Fluctuations of the centered current} \label{currsec} Theorems \ref{findimthm} and \ref{findimthmVq} are proved in a similar way. We spell out some details for Theorem \ref{findimthm} and restrict to a few remarks on Theorem \ref{findimthmVq}. The following representation of the covariance function $\Gamma ((s,q),(t,r))$ will be convenient (proof by calculus). Recall that $B_\centerdot$ denotes standard Brownian motion: \begin{eqnarray}\label{vpGa} \Gamma((s,q),(t,r)) &=& \mu\int_{-\infty}^{\infty}( \mathbf{P}[B_{\sigma _1^2s}\le q-x]\mathbf{P}[ B_{\sigma _1^2t}> r-x] \nonumber\\ &&\hspace*{32pt}{}- \mathbf{P}[B_{\sigma_1^2s}\le q-x, B_{\sigma_1^2t}> r-x] ) \,dx \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} + \sigma_0^2 \biggl\{ \int_{0}^\infty\mathbf {P}[B_{\sigma_1^2s}\le q-x]\mathbf{P}[ B_{\sigma_1^2t}\le r-x] \,dx\\ &&\hspace*{32pt}{} + \int_{-\infty}^{0} \mathbf{P}[B_{\sigma _1^2s}> q-x]\mathbf{P}[ B_{\sigma_1^2t}> r-x] \,dx \biggr\}.\nonumber \end{eqnarray} Pick time--space points $(t_1,r_1),\ldots,(t_N,r_N)\in\mathbb R_+\times\mathbb R$ and $\alpha_1,\ldots, \alpha_N, \beta_1,\ldots,\break \beta_N\in\mathbb R$. Form the linear combinations \[ (\bar V_n, \bar Z_n)= \Biggl(n^{-1/4} \sum_{i=1}^N \alpha_i V_n(t_i,r_i) , n^{-1/2} \sum_{i=1}^N \beta_i Z_{nt_i} \Biggr) \] and \[ (\bar V, \bar Z)= \Biggl( \sum_{i=1}^N \alpha_i V(t_i,r_i) , \sum_{i=1}^N \beta_i Z(t_i) \Biggr). 
\] Theorem \ref{findimthm} is proved by showing $(\bar V_n, \bar Z_n) \stackrel{\mathcal{D}}{\longrightarrow}(\bar V, \bar Z)$ for an arbitrary choice of $\{t_i,r_i,\alpha_i,\beta_i\}$. We can work with $\bar V_n$ alone for a while because much of its analysis is done under a fixed $\omega$, and then $Z_{n\centerdot}$ is not random: \begin{equation}\qquad\bar V_n=n^{-1/4} \sum_{i=1}^N \alpha_i V_n(t_i,r_i) =n^{-1/4}\sum_{x\in\mathbb{Z}} \sum_{i=1}^N \alpha_i \bigl[ \mathbf{1}_{ \{ x>0 \} } \phi_{x,i} - \mathbf{1}_{ \{ x\le0 \} } \psi_{x,i}\bigr], \label{vpVnsum} \end{equation} where \[ \phi_{x,i}=\sum_{k=1}^{\eta_0(x)}\mathbf{1}{ \bigl\{ X^{x,k}_{nt_i}\le nt_i\mathrm{v}_P +r_i\sqrt{n} \bigr\} } -E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm {v}_P+r_i\sqrt{n} \bigr\} \] and \[ \psi_{x,i}=\sum_{k=1}^{\eta_0(x)}\mathbf{1}{ \bigl\{ X^{x,k}_{nt_i}> nt_i\mathrm{v}_P+r_i\sqrt{n} \bigr\} } -E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{nt_i}> nt_i\mathrm {v}_P+r_i\sqrt{n} \bigr\}. \] Equation \eqref{vpVnsum} expresses $\bar V_n=n^{-1/4}\sum_{x\in \mathbb Z} u(x)$ as a sum of random variables \[ u(x) = \sum_{i=1}^N \alpha_i \bigl[ \mathbf{1}_{ \{ x>0 \} } \phi_{x,i} - \mathbf{1}_{ \{ x\le0 \} } \psi_{x,i}\bigr] \] that are independent and mean zero under the quenched measure $P_\omega $. They satisfy \begin{equation}\abs{u(x)}\le\sum_{i=1}^N\,\abs{\alpha_i} \bigl( \eta _0(x)+E_\omega(\eta _0(x))\bigr). \label{vpaux2.6} \end{equation} Again we will pick $a(n)\nearrow\infty$ and define \[ \bar V_n^*=n^{-1/4}\sum_{\abs{x}\le a(n)\sqrt{n}} u(x). \] We first show that the rest of the sum can be ignored. \begin{lem} $ {\lim_{n\to\infty} \mathbb{E} [(\bar V_n-\bar V_n^*)^2 ] =0. 
}$ \label{vpVV*lm} \end{lem} \begin{pf} By the independence of the $\{u(x)\}$ under $P_\omega$, \begin{eqnarray}\label{vpfd.07} \hskip12pt&&\mathbb{E}[(\bar V_n-\bar V_n^*)^2]\nonumber\\ &&\qquad = n^{-1/2}E_P\sum _{\abs{x}> a(n) \sqrt{n}} E_\omega[u(x)^2] \nonumber\\ &&\qquad\le Cn^{-1/2}\\ &&\qquad\quad\times\sum_{i=1}^N \sum_{\abs{x}> a(n)\sqrt{n}} E_P\Biggl[ \mathbf{1}_{ \{ x>0 \} } \operatorname{Var}_\omega\Biggl( \sum_{k=1}^{\eta_0(x)}\mathbf{1}{ \bigl\{ X^{x,k}_{nt_i}\le nt_i\mathrm{v}_P +r_i\sqrt{n} \bigr\} } \Biggr) \nonumber\\ &&\hspace*{90pt}\qquad\quad {} + \mathbf{1}_{ \{ x\le0 \} } \operatorname{Var}_\omega\Biggl( \sum_{k=1}^{\eta_0(x)}\mathbf{1}{ \bigl\{ X^{x,k}_{nt_i}> nt_i\mathrm{v}_P +r_i\sqrt{n} \bigr\} } \Biggr)\Biggr].\nonumber\hskip-12pt \end{eqnarray} Consider the first type of variance above: \begin{eqnarray*} &&\operatorname{Var}_\omega\Biggl( \sum_{k=1}^{\eta_0(x)}\mathbf{1}{ \bigl\{ X^{x,k}_{nt_i}\le nt_i\mathrm{v}_P +r_i\sqrt{n} \bigr\} } \Biggr)\\ &&\qquad = E_\omega(\eta_0(x)) \operatorname{Var}_\omega\bigl(\mathbf{1}{\bigl \{ X^{x}_{nt_i}\le nt_i\mathrm{v}_P +r_i\sqrt{n} \bigr\} } \bigr)\\ &&\qquad\quad {}+\operatorname{Var}_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm {v}_P+r_i\sqrt{n} \bigr\}^2 \\ &&\qquad \le[ E_\omega(\eta_0(x)) + \operatorname{Var}_\omega(\eta _0(x))] P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm{v}_P+r_i\sqrt{n} \bigr\}. \end{eqnarray*} The upshot is that to show the vanishing of \eqref{vpfd.07} we need to control terms of the type \begin{equation}\qquad n^{-1/2}\sum_{{x}> a(n)\sqrt{n}} E_P\bigl[ \bigl(E_\omega(\eta(x)) + \operatorname{Var}_\omega(\eta_0(x))\bigr) P_\omega\bigl\{X^x_{n}\le n\mathrm{v}_P+r\sqrt{n} \bigr\}\bigr] \label{vpfd.1} \end{equation} as $a(n)\to\infty$, together with its counterpart for $x<-a(n)\sqrt{n}$. For convenience we replaced time points $nt_i$ with $n$ and $r$ represents $\max r_i$. We treat the part in~\eqref{vpfd.1} with the variance and omit the rest. 
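The conditional variance computation above is the standard variance formula for a random sum of i.i.d. indicators: $\operatorname{Var}(\sum_{k=1}^{N} \mathbf{1}_{A_k}) = E[N]\,p(1-p) + \operatorname{Var}(N)\,p^2$ when each event $A_k$ has probability $p$ and is independent of $N$, as the walks are under $P_\omega$. A sketch that checks this exactly; the pmf of $N$ below is illustrative only, standing in for the law of $\eta_0(x)$.

```python
from math import comb, isclose

pN = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}  # hypothetical pmf of N
p = 0.35                               # probability of each walk event

EN = sum(n * w for n, w in pN.items())
VarN = sum(n * n * w for n, w in pN.items()) - EN ** 2

# Exact moments of S = sum of N i.i.d. Bernoulli(p): condition on N,
# since S | N = n is binomial(n, p).
ES = EN * p
ES2 = sum(
    w * sum(comb(n, k) * p ** k * (1 - p) ** (n - k) * k * k
            for k in range(n + 1))
    for n, w in pN.items()
)
VarS = ES2 - ES ** 2  # equals EN * p * (1 - p) + VarN * p**2
```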
Letting $a_1(n)=a(n)-r$, \begin{eqnarray*} &&n^{-1/2}\sum_{{x}> a(n)\sqrt{n}} E_P\bigl[ \operatorname{Var}_\omega(\eta(x)) P_\omega\bigl\{X^x_{n}\le n\mathrm{v}_P+r\sqrt{n} \bigr\}\bigr] \\ &&\qquad = n^{-1/2}\sum_{{x}> a(n)\sqrt{n}} E_P\bigl[ \operatorname{Var}_\omega (\eta(x)) P_{\theta^x\omega}\bigl\{X_{n}\le n\mathrm{v}_P+r\sqrt{n}-x \bigr\}\bigr] \\ &&\qquad \le n^{-1/2}\sum_{{y}> a_1(n)\sqrt{n}} E_P\bigl[ \operatorname{Var} _\omega(\eta(0)) P_{\omega}\bigl\{X_{n}- n\mathrm{v}_P\le-y \bigr\}\bigr] \\ &&\qquad= E_P\biggl[ \operatorname{Var}_\omega(\eta(0)) E_\omega\biggl\{ \biggl( \frac {X_{n}- n\mathrm{v}_P}{\sqrt{n}} +{a_1(n)}\biggr)^- \biggr\} \biggr]\\ &&\qquad\le\{ E_P[(\operatorname{Var}_\omega(\eta(0)))^p]\}^{1/p} \biggl\{ E_P\biggl[ \biggl( E_\omega\biggl\{ \biggl( \frac {X_{n}- n\mathrm{v}_P }{\sqrt{n}} +{a_1(n)}\biggr)^- \biggr\} \biggr)^q \biggr] \biggr\}^{1/q} \end{eqnarray*} for some $p>2$ and, hence, $q=p/(p-1)<2$. By assumption \eqref{vpmomass1}, the first factor above is a constant if we take $2<p<2+\varepsilon$. Then by the $L^2(\mathbb{P})$ boundedness of $n^{-1/2} (X_{n}- n\mathrm{v}_P)$ (Proposition~\ref{UIprop}), the second factor vanishes as $a(n)\to\infty$. \end{pf} Assume now by a truncation that for $\bar V_n^*$ the initial occupations satisfy \begin{equation}\eta_0(x)\le n^{1/4-\delta} \label{vptrunc} \end{equation} for a small $\delta>0$. Let momentarily $\widetilde V_{n}^*$ denote the variable with truncated occupations $\tilde\eta_0(x)=\lfloor\eta_0(x)\wedge n^{1/4-\delta} \rfloor$. \begin{lem} If $a(n)\nearrow\infty$ slowly enough, $\mathbb{E}[\abs{\bar V_{n}^*-\widetilde V_{n}^*}^2 ]\to0$. 
\end{lem} \begin{pf} With\vspace*{-1pt} $A^{x,k}_i$ denoting the random walk events that appear in $\phi_{x,i}$ and $B^{x,k}_i$ the ones in $\psi_{x,i}$, \begin{eqnarray*} &&\bar V_{n}^*-\widetilde V_{n}^* = \sum_{i=1}^N \alpha_i n^{-1/4} \Biggl[ \sum_{0<{x}\le a(n)\sqrt{n}} \Biggl( \sum_{k=\tilde \eta _0(x)+1}^{\eta_0(x)} \mathbf{1}_{ \{ A^{x,k}_i \} }\\ &&\hspace*{141pt}\qquad{} - E_\omega\bigl(\eta_0(x)-\tilde\eta _0(x)\bigr) P_\omega(A^{x}_i) \Biggr)\\ &&\hspace*{75pt}\qquad\quad {}- \sum_{ a(n)\sqrt{n}\le x\le0} \Biggl( \sum_{k=\tilde\eta _0(x)+1}^{\eta_0(x)} \mathbf{1}_{ \{ B^{x,k}_i \} }\\ &&\qquad\hspace*{152pt}{} - E_\omega\bigl(\eta_0(x)-\tilde\eta _0(x)\bigr) P_\omega(B^{x}_i) \Biggr) \Biggr]. \end{eqnarray*} Square and use independence across sites as in the beginning of the proof of Lemma \ref{vpVV*lm} to get \begin{eqnarray*} E_\omega\abs{\bar V_{n}^*-\widetilde V_{n}^*}^2 &\le& C n^{-1/2} \sum_{\abs{x}\le a(n)\sqrt{n}} \bigl[ \operatorname{Var}_\omega\bigl( \eta_0(x)-\tilde \eta _0(x)\bigr) + E_\omega\bigl(\eta_0(x)-\tilde\eta_0(x)\bigr)\bigr] \\ &\le& C n^{-1/2} \sum_{\abs{x}\le a(n)\sqrt{n}} E_\omega\bigl(\eta_0(x)^2\mathbf{1}{ \{ \eta_0(x)\ge n^{1/4-\delta } \} } \bigr). \end{eqnarray*} By shift-invariance, \[ \mathbb{E}\abs{\bar V_{n}^*-\widetilde V_{n}^*}^2 \le Ca(n)\mathbb{E}[\eta_0(0)^2\mathbf{1}{ \{ \eta_0(0)\ge n^{1/4-\delta} \} } ]. \] Assumption \eqref{vpmomass1} implies that $\mathbb{E}(\eta _0(0)^2)<\infty$ and, hence, the last expectation tends to $0$ as $n\to\infty$. The lemma follows. \end{pf} Consequently, Theorem \ref{findimthm} is not affected by this truncation. For the remainder of this proof we work with the truncated occupation variables that satisfy \eqref{vptrunc} without indicating it explicitly in the notation. Recall that for complex numbers such that $\abs{z_i},\abs{w_i}\le1$, \begin{equation}\Biggl\vert \prod_{i=1}^m z_i - \prod_{i=1}^m w_i \Biggr\vert \le\sum_{i=1}^m \abs{z_i-w_i}. 
\label{auxineq} \end{equation} Let \[ \sigma_{n,\omega}^2(x) = n^{-1/2} E_\omega[u(x)^2]. \] By \eqref{vpaux2.6} and the truncation \eqref{vptrunc}, \begin{equation}\sigma_{n,\omega}^2(x) \le Cn^{-1/2} E_\omega[\eta _0(x)^2] \le Cn^{-2\delta} \label{vpaux4} \end{equation} which is $<$1 for large enough $n$. Then \begin{eqnarray}\label{vpaux5} &&\biggl\vert E_\omega[e^{i\bar V_{n}^*}] -\prod_{\abs{x}\le a(n)\sqrt{n}} \biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr) \biggr\vert \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\le\sum_{\abs{x}\le a(n)\sqrt{n}} \biggl\vert E_\omega\bigl(e^{in^{-1/4}u(x)}\bigr) -\biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr)\biggr\vert \end{eqnarray} by an expansion of the exponential, as in the proof of the Lindeberg--Feller theorem in \cite{durr}, Section~2.4.b, page~115, \begin{eqnarray}\label{vpaux7} &\le&\frac{C\varepsilon(n)}{\sqrt n} \sum_{\abs{x}\le a(n)\sqrt{n}} E_\omega[u(x)^2] \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}+ \frac{C}{\sqrt n} \sum_{\abs{x}\le a(n)\sqrt{n}} E_\omega[u(x)^2 \mathbf{1}{ \{ \abs{u(x)}\ge n^{1/4}\varepsilon(n) \} } ] \end{eqnarray} for some $0<\varepsilon(n)\searrow0$ that we can choose. If $\varepsilon(n)n^{\delta}\to \infty$, then the truncation~\eqref{vptrunc} makes the second sum on line \eqref {vpaux7} vanish. Take $E_P$ expectation over the inequalities from \eqref{vpaux5} to \eqref{vpaux7}. Since $E_\omega[u(x)^2] \le CE_\omega[\eta_0(x)^2]$, moment assumption \eqref{vpmomass1} gives \begin{equation}\frac{C\varepsilon(n)}{\sqrt n} \sum_{\abs{x}\le a(n)\sqrt{n}} \mathbb{E}[u(x)^2] \le Ca(n)\varepsilon(n). \label{vpaux8} \end{equation} Thus, if $a(n)\nearrow\infty$ slowly enough so that $\varepsilon(n)=a(n) ^{-2}\gg n^{-\delta}$, \eqref{vpaux7} vanishes as $n\to\infty$. We have reached this intermediate conclusion: \begin{equation} \lim_{n\to\infty} E_P\biggl\vert E_\omega[e^{i\bar V_{n}^*}] -\prod_{ \abs{x}\le a(n)\sqrt{n}} \biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr) \biggr\vert=0. 
\label{vpaux9} \end{equation} The main technical work is encoded in the following proposition. Recall the definition of $\Gamma$ from \eqref{vpGa}. \begin{prop}\label{vpgprop} There exist bounded continuous functions $g_n$ on $\mathbb R^N$ with these properties: \begin{longlist}[(a)] \item[(a)] $\sup_n \norm{g_n}_\infty<\infty$ and $g_n\to g$ uniformly on compact subsets of $\mathbb R^N$ where $g$ is also bounded, continuous and satisfies \begin{eqnarray}\label{vpgGa} g(z_1,\ldots,z_N) = \sum_{1\le i,j\le N}\alpha_i\alpha_j \Gamma\bigl((t_i,r_i + z_i), (t_j,r_j +z_j) \bigr) \nonumber \\[-8pt] \\[-8pt] \eqntext{\mbox{for $z=(z_1,\ldots,z_N)\in\mathbb R^N$.}} \end{eqnarray} \item[(b)] The following limit holds in $P$-probability as $n\to\infty$: \begin{equation}\biggl\vert \sum_{\abs{x}\le a(n)\sqrt{n}} \sigma _{n,\omega}^2(x) - g_n( n^{-1/2}Z_{nt_1}, \ldots, n^{-1/2}Z_{nt_N} ) \biggr\vert \longrightarrow0. \label{vpsigmaglim} \end{equation} \end{longlist} \end{prop} \begin{pf*}{Proof of Theorem \protect\ref{findimthm} assuming Proposition \protect\ref{vpgprop}} By virtue of Lemma \ref{vpVV*lm}, it remains to show \begin{equation}\vert \mathbb{E}[e^{i\bar V_n^*+i\bar Z_n}] - \mathbf{E}[e^{i\bar V+i\bar Z}] \vert \to0. \label{vpaux9.2} \end{equation} (We need not put coefficients in front of $\bar V_n^*$ and $\bar Z_n$ because these coefficients can be subsumed in the $\alpha_i,\beta_i$ coefficients.) Define the random $N$-vectors \[ \mathbf{z}_n^{1,N}=(n^{-1/2}Z_{nt_1},\ldots, n^{-1/2}Z_{nt_N}) \quad \mbox{and}\quad \mathbf{z}^{1,N}=(Z(t_1),\ldots, Z(t_N)). \] Then the conditional distribution of $V$ given $Z$, described in conjunction with \eqref{vpcov} above, together with \eqref{vpgGa} gives \[ \mathbf{E}[e^{i\bar V+i\bar Z}] = \mathbf{E}\bigl[e^{ -{(1/2)} g(\mathbf {z}^{1,N}) +i\bar Z}\bigr]. 
\] Now bound the absolute value in \eqref{vpaux9.2} by \begin{eqnarray}\label{vpaux9.25} &&\bigl\vert E_P\bigl[ E_\omega(e^{i\bar V_n^*}) e^{i\bar Z_n(\omega)} \bigr] - \mathbf{E}\bigl[e^{-(1/2) g(\mathbf{z}^{1,N}) +i\bar Z}\bigr] \bigr\vert \nonumber\\ &&\qquad\le E_P\bigl\vert E_\omega(e^{i\bar V_{n}^*}) - e^{ -(1/2) g_n(\mathbf{z} _n^{1,N}) } \bigr\vert\\ &&\quad\qquad{} + \bigl \vert E_P\bigl[e^{-(1/2) g_n(\mathbf{z}_n^{1,N}) +i\bar Z_n(\omega )}\bigr] - \mathbf{E}\bigl[e^{-(1/2) g(\mathbf{z}^{1,N}) +i\bar Z}\bigr] \bigr\vert.\nonumber \end{eqnarray} The last absolute values expression above vanishes as $n\to\infty$ by the invariance principle $n^{-1/2}Z_{n\cdot}\stackrel{\mathcal {D}}{\longrightarrow}Z(\cdot)$ [Theorem \ref {QCLTthm}, part (2)] and by a simple property of weak convergence stated in Lemma \ref{vpweakcomvlm} after this proof. The second-to-last term is bounded as follows: \begin{eqnarray} && E_P\bigl\vert E_\omega(e^{i\bar V_{n}^*}) - e^{ -(1/2) g_n(\mathbf{z} _n^{1,N}) } \bigr\vert\nonumber\\ &&\qquad\le E_P\biggl\vert E_\omega(e^{i\bar V_{n}^*}) -\prod_{\abs{x}\le a(n)\sqrt{n}} \biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr) \biggr\vert\label {vpaux9.3} \\ &&\qquad\quad {} + E_P\biggl\vert \prod_{\abs{x}\le a(n)\sqrt{n}} \biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr) - \exp\biggl\{ -\frac12 \sum_{\abs{x}\le a(n)\sqrt{n}} \sigma_{n,\omega}^2(x) \biggr\} \biggr\vert\label{vpaux9.4}\hskip-12pt \\ &&\qquad\quad {} + E_P\biggl\vert \exp\biggl\{ -\frac12 \sum_{\abs{x}\le a(n)\sqrt{n}} \sigma _{n,\omega}^2(x) \biggr\} - \exp\biggl( -\frac12 g_n(\mathbf{z}_n^{1,N}) \biggr) \biggr\vert. \label{vpaux9.5} \end{eqnarray} Let $n\to\infty$. Line \eqref{vpaux9.3} after the inequality vanishes by \eqref{vpaux9}. 
Line \eqref{vpaux9.4} vanishes by the inequalities \begin{eqnarray*} \exp\biggl( -\frac12(1+n^{-2\delta}) \sum_{\abs{x}\le a(n) \sqrt{n}} \sigma_{n,\omega}^2(x) \biggr) &\le&\prod_{\abs{x}\le a(n)\sqrt{n}} \biggl(1-\frac12 \sigma_{n,\omega}^2(x)\biggr)\\ &\le&\exp\biggl( -\frac12\sum_{\abs{x}\le a(n)\sqrt{n}} \sigma _{n,\omega }^2(x) \biggr), \end{eqnarray*} where we used \eqref{vpaux4} and $ -y-y^2\le\log(1-y)\le-y$ for small $y>0$. Finally, line \eqref{vpaux9.5} vanishes by \eqref{vpsigmaglim}. We have shown that line \eqref{vpaux9.25} vanishes as $n\to\infty$ and thereby verified \eqref{vpaux9.2}. This completes the proof of Theorem \ref{findimthm}, assuming Proposition \ref{vpgprop}. \end{pf*} Lines \eqref{vpaux9.3}--\eqref{vpaux9.5}, $\mathbf{z}_n^{1,N}\mathop{\stackrel{\mathcal{D}}{\longrightarrow}}\limits_{n\to \infty}\mathbf{z}^{1,N}$ and $g_n\to g$ uniformly on compacts show that \begin{equation} E_P\bigl\vert E_\omega(e^{i\bar V_{n}^*}) - e^{ -(1/2) g(\mathbf {z}^{1,N}) } \bigr\vert\to0. \label{vplim7} \end{equation} This verifies the remark stated after Theorem \ref{findimthm}. The next lemma was used in the proof above. We omit its short and simple proof.\vspace*{-3pt} \begin{lem} Suppose $\zeta_n\stackrel{\mathcal{D}}{\longrightarrow}\zeta$ for random variables with values in some Polish space $S$. Let $f_n,f$ be bounded, continuous functions on $S$ such that $\sup_n \norm{f_n}_\infty<\infty$ and $f_n\to f$ uniformly on compact sets. Then $f_n(\zeta_n)\stackrel{\mathcal {D}}{\longrightarrow}f(\zeta)$. 
\label{vpweakcomvlm}\vspace*{-3pt} \end{lem} We turn to the proof of the main technical proposition, Proposition \ref{vpgprop}.\vspace*{-3pt} \begin{pf*}{Proof of Proposition \protect\ref{vpgprop}} Consider $n$ large enough so that $a(n)>\max_i\abs{r_i}$: \begin{eqnarray}\label{vpfindim1} \sum_{\abs{x}\le a(n)\sqrt{n}} \sigma_{n,\omega}^2(x) &=& n^{-1/2}\sum_{ \abs{x}\le a(n)\sqrt{n}} E_\omega[u(x)^2]\nonumber\\ &=& n^{-1/2}\sum_{\abs{x}\le a(n)\sqrt{n}} \operatorname{Cov}_\omega[u(x),u(x)] \nonumber \\[-8.5pt] \\[-8.5pt] \nonumber &=&\sum_{1\le i,j\le N}\alpha_i\alpha_j n^{-1/2}\sum_{\abs{x}\le a(n) \sqrt{n}} \bigl[ \mathbf{1}_{ \{ x>0 \} } \operatorname{Cov}_\omega(\phi_{x,i}, \phi_{x,j})\\ &&\hspace*{130pt}{}+ \mathbf{1}_{ \{ x\le0 \} } \operatorname{Cov}_\omega(\psi_{x,i}, \psi _{x,j})\bigr].\nonumber \end{eqnarray} Whenever we work with a fixed $(i,j)$ we let $((s,q),(t,r))$ represent $((t_i,r_i),\break(t_j, r_j))$ to avoid excessive subscripts. To each term above apply the formula for the covariance of two random sums, with $\{Z_i\}$ i.i.d. and independent of~$K$: \[ \operatorname{Cov}\Biggl( \sum_{i=1}^K f(Z_i) , \sum_{j=1}^K g(Z_j)\Biggr) = EK \operatorname{Cov}(f(Z),g(Z))+ \operatorname{Var}(K) Ef(Z) Eg(Z). 
\] The first covariance on the last line of \eqref{vpfindim1} develops as \begin{eqnarray} \label{vpfindim3} &&\operatorname{Cov}_\omega(\phi_{x,i}, \phi_{x,j})\nonumber\\ &&\qquad= E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n}, X^{x}_{nt}\le nt\mathrm{v}_P+r\sqrt{n} \bigr\} \nonumber\\ &&\qquad\quad {} - E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n} \bigr\}P_\omega\bigl\{ X^{x}_{nt}\le nt\mathrm{v}_P+r\sqrt {n} \bigr\} \nonumber\\ &&\qquad\quad {} + \operatorname{Var}_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n} \bigr\}P_\omega\bigl\{ X^{x}_{nt}\le nt\mathrm{v}_P+r\sqrt {n} \bigr\} \\ &&\qquad= - E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n}, X^{x}_{nt}> nt\mathrm{v}_P+r\sqrt{n} \bigr\} \nonumber\\ &&\qquad\quad {} + E_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n} \bigr\}P_\omega\bigl\{ X^{x}_{nt}> nt\mathrm{v}_P+r\sqrt {n} \bigr\} \nonumber\\ &&\qquad\quad {} + \operatorname{Var}_\omega(\eta_0(x)) P_\omega\bigl\{X^{x}_{ns}\le ns\mathrm{v}_P+q\sqrt{n} \bigr\}P_\omega\bigl\{ X^{x}_{nt}\le nt\mathrm{v}_P+r\sqrt {n} \bigr\}.\nonumber \end{eqnarray} Develop the second covariance in a similar vein, and then collect the terms: \begin{eqnarray}\label{vpfindim3.3} &&\hspace*{-4pt}\sum_{\abs{x}\le a(n)\sqrt{n}} \sigma_{n,\omega}^2(x) \nonumber\\ &&\hspace*{-4pt}\qquad= \sum_{1\le i,j\le N}\alpha_i\alpha_j \biggl[ n^{-1/2}\sum_{\abs{x}\le a(n)\sqrt{n}} E_\omega(\eta_0(x))\bigl( P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm{v}_P+r_i\sqrt{n} \bigr\} \nonumber\\ \label{vpfindim3.4} &&\qquad\quad\hspace*{179pt} {}\times P_\omega \bigl\{ X^{x}_{nt_j}> nt_j\mathrm{v}_P+r_j\sqrt{n} \bigr\}\\ &&\qquad\quad \hspace*{179pt}{}- P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm{v}_P+r_i\sqrt{n},\nonumber \\[-8pt] \\[-8pt] \label{vpfindim3.5}&&\qquad\quad \hspace*{209pt} X^{x}_{nt_j}> nt_j\mathrm{v}_P +r_j\sqrt{n} \bigr\}\bigr) \nonumber \\[-8pt] \\[-8pt] 
\label{vpfindim3.8}&&\qquad\hspace*{59pt}\quad {}+n^{-1/2}\sum_{\abs{x}\le a(n)\sqrt{n}} \operatorname{Var}_\omega(\eta_0(x)) \nonumber\\ &&\qquad\hspace*{151pt}{}\times \bigl( \mathbf{1}_{ \{ x>0 \} } P_\omega\bigl\{X^{x}_{nt_i}\le nt_i\mathrm{v}_P+r_i\sqrt{n} \bigr\}\nonumber\\ &&\qquad\quad \hspace*{154pt}{} \times P_\omega \{ X^{x}_{nt_j}\le nt_j\mathrm{v}_P+r_j\sqrt{n} \} \\ &&\qquad\quad \hspace*{154pt}{} + \mathbf{1}_{ \{ x\le0 \} } P_\omega\bigl\{X^{x}_{nt_i}> nt_i\mathrm{v}_P+r_i\sqrt{n} \bigr\}\nonumber\\ &&\qquad\quad \hspace*{177pt}{} \times P_\omega\bigl\{ X^{x}_{nt_j}> nt_j\mathrm{v}_P +r_j\sqrt{n} \bigr\} \bigr) \biggr]. \nonumber \end{eqnarray} The function $g_n(z_1,\ldots,z_N)$ required for Proposition \ref{vpgprop} is defined as the linear combination of integrals of Brownian probabilities\vspace*{1pt} that match up with the terms of the sum above. For $(z_1,\ldots,z_N)\in\mathbb R^N$, \begin{eqnarray}\label{vpg_n} &&\hspace*{-4pt}g_n(z_1,\ldots,z_N)\nonumber\\ &&\qquad= \sum_{1\le i,j\le N}\alpha_i\alpha_j \biggl[ \mu\int_{-a(n)}^{a(n)}( \mathbf{P}[B_{\sigma _1^2t_i}\le z_i+r_i-x]\nonumber\\ &&\qquad\hspace*{113pt} {}\times \mathbf{P}[ B_{\sigma_1^2t_j}> z_j+r_j-x]\nonumber\\ &&\qquad{}\hspace*{113pt} - \mathbf{P}[B_{\sigma_1^2t_i}\le z_i+r_i-x, B_{\sigma_1^2t_j}\nonumber\\ &&\qquad{}\hspace*{183pt}> z_j+r_j-x] ) \,dx \\ & &\hspace*{93pt}{} + \sigma_0^2 \biggl\{ \int_{0}^{a(n)} \mathbf{P}[B_{\sigma _1^2t_i}\le z_i+r_i-x]\nonumber\\ &&\hspace*{148pt}{}\times\mathbf{P}[ B_{\sigma_1^2t_j}\le z_j+r_j-x] \,dx\nonumber\\ &&\hspace*{122pt}{} + \int_{-a(n)}^{0} \mathbf{P}[B_{\sigma_1^2t_i}> z_i+r_i-x]\nonumber\\ &&\hspace*{162pt}{}\times \mathbf{P}[ B_{\sigma _1^2t_j}> z_j+r_j-x] \,dx \biggr\} \biggr].\nonumber \end{eqnarray} Let $g(z_1,\ldots,z_N)$ be the function defined by the above sum of integrals with $a(n)$ replaced by $\infty$. Then \eqref{vpgGa} holds by direct comparison with definition \eqref{vpGa}. 
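As a sanity check on the covariance formula for random sums used to develop \eqref{vpfindim3}, one can verify the identity $\operatorname{Cov}(\sum_{i\le K} f(Z_i), \sum_{j\le K} g(Z_j)) = EK\operatorname{Cov}(f(Z),g(Z)) + \operatorname{Var}(K)\, Ef(Z)\, Eg(Z)$ by exact enumeration in a toy discrete case. The following Python sketch does this with exact rational arithmetic; the laws of $K$ and $Z$ and the functions $f,g$ below are arbitrary illustrative choices, not taken from the paper.

```python
from itertools import product
from fractions import Fraction as F

# Toy distributions (arbitrary illustrative choices).
pK = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}   # law of K
pZ = {0: F(2, 5), 1: F(3, 5)}                            # law of Z
f = lambda z: 2 * z - 1
g = lambda z: z * z + 1

# Exact moments of the random sums S_f = sum_{i<=K} f(Z_i), S_g likewise,
# with {Z_i} i.i.d. and independent of K, by full enumeration.
ESf = ESg = ESfSg = F(0)
for k, pk in pK.items():
    for zs in product(pZ, repeat=k):
        p = pk
        for z in zs:
            p *= pZ[z]
        sf = sum(f(z) for z in zs)
        sg = sum(g(z) for z in zs)
        ESf += p * sf
        ESg += p * sg
        ESfSg += p * sf * sg
lhs = ESfSg - ESf * ESg                                  # Cov(S_f, S_g)

# Right-hand side: EK * Cov(f(Z), g(Z)) + Var(K) * Ef(Z) * Eg(Z).
EK = sum(k * p for k, p in pK.items())
VarK = sum(k * k * p for k, p in pK.items()) - EK * EK
Ef = sum(f(z) * p for z, p in pZ.items())
Eg = sum(g(z) * p for z, p in pZ.items())
Efg = sum(f(z) * g(z) * p for z, p in pZ.items())
rhs = EK * (Efg - Ef * Eg) + VarK * Ef * Eg

assert lhs == rhs
```

Because the enumeration uses exact fractions, the final assertion checks the identity with no numerical tolerance.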
Part~(a) of Proposition \ref{vpgprop} is now clear. To prove limit \eqref{vpsigmaglim} in part (b) of Proposition \ref {vpgprop}, namely, that \[ \biggl\vert \sum_{\abs{x}\le a(n)\sqrt{n}} \sigma_{n,\omega}^2(x) - g_n( n^{-1/2}Z_{nt_1}, \ldots, n^{-1/2}Z_{nt_N} ) \biggr\vert \stackrel{P}\longrightarrow0, \] we approximate the sums on lines \eqref{vpfindim3.3}--\eqref{vpfindim3.8} with the corresponding integrals from \eqref{vpg_n}. The steps are the same for each sum. We illustrate this reasoning with the sum of the terms on line \eqref{vpfindim3.4}, given by \begin{eqnarray}\label{vpdefU} && U_n(\omega)=\sum_{\abs{m}\le a(n)\sqrt{n}} E_\omega(\eta_0(m)) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{64pt}\qquad{}\times P_\omega\bigl\{X^{m}_{ns}\le ns\mathrm{v}_P+q\sqrt{n}, X^{m}_{nt}> nt\mathrm{v}_P+r\sqrt{n} \bigr\} \end{eqnarray} and the corresponding part of \eqref{vpg_n}, defined by \begin{eqnarray}\label{vpdefU*} && U^*_n(\omega)=\mu\int_{-a(n)}^{a(n)} \mathbf{P}\biggl\{B_{\sigma _1^2s}\le\frac {Z_{ns}(\omega)}{\sqrt{n}}-x+q , \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\hspace*{77pt}{} B_{\sigma_1^2t}> \frac{Z_{nt}(\omega)}{\sqrt{n}}-x+r \biggr\}\, dx. \end{eqnarray} The goal is to show \[ \lim_{n\to\infty}\abs{ n^{-1/2} U_n(\omega)- U^*_n(\omega)}=0\qquad \mbox{in $P$-probability. } \] The steps are the same as those employed in the proofs of Lemmas \ref{WtildeW}--\ref{hWbW}. First approximate $U_n$ with \begin{eqnarray}\label{vpdefwtU} &&\widetilde U_n(\omega)=\sum_{\abs{m}\le a(n)\sqrt{n}} E_\omega(\eta_0(m)) \mathbf{P}\biggl\{B_{\sigma_1^2s}\le\frac{Z_{ns}(\theta^m\omega )}{\sqrt{n}}-\frac {m}{\sqrt{n}}+q , \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{133pt}\qquad{} B_{\sigma_1^2t}> \frac{Z_{nt}(\theta^m\omega)}{\sqrt{n}}-\frac {m}{\sqrt{n}}+r \biggr\}. \end{eqnarray} This approximation is similar to the proof of Lemma \ref{WtildeW} and uses the fact that, for a fixed $s,t>0$, the limits of the form \eqref {QCLT} are uniform in $x,y\in\mathbb R$. 
Then remove the shift from $Z_n(\omega)$ by defining \begin{eqnarray}\label{vpdefwhU} &&\widehat U_n(\omega)=\sum_{\abs{m}\le a(n)\sqrt{n}} E_\omega(\eta_0(m)) \mathbf{P}\biggl\{B_{\sigma_1^2s}\le\frac{Z_{ns}(\omega)}{\sqrt {n}}-\frac{m}{\sqrt {n}}+q , \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\hspace*{132pt}{} B_{\sigma_1^2t}> \frac{Z_{nt}(\omega)}{\sqrt{n}}-\frac{m}{\sqrt {n}}+r \biggr\} \end{eqnarray} and showing that $ \lim_{n\to\infty}n^{-1/2} \abs{\widetilde U_n-\widehat U_n}=0$ in $P$-probability. For the last step, to show $ \lim_{n\to\infty}\abs{ n^{-1/2} \widehat U_n(\omega)- U^*_n(\omega)}=0$ in $P$-probability, truncate the sum~\eqref{vpdefwhU} and the integral \eqref{vpdefU*}, use a Riemann approximation of the sum, introduce an intermediate scale for further partitioning and appeal to the ergodic theorem, as was done in Lemma \ref{hWbW}. We omit these details since the corresponding steps were spelled out in full in Section \ref{qmeansec}. We have verified the part of the desired limit \eqref{vpsigmaglim} that comes from pairing up the sum on line \eqref{vpfindim3.4} with the second line of \eqref{vpg_n}. The remaining parts are handled similarly. This completes the proof of Proposition \ref{vpgprop}. \end{pf*} Theorem \ref{findimthm} has now been proved. The proof of Theorem \ref{findimthmVq} goes essentially the same way. The crucial difference comes at the point \eqref{vpdefwhU} where $\widehat U_n$ is introduced. Instead of $n^{-1/2}Z_{ns}(\omega)$ and $n^{-1/2}Z_{nt}(\omega)$ inside the Brownian probability $\mathbf{P}$, one has $n^{-1/2}(Z_{ns}(\omega)-Z_{ns}(\theta^m\omega))$ and $n^{-1/2}(Z_{nt}(\omega)-Z_{nt}(\theta^m\omega))$. These vanish on the scale considered here, with $\abs{m}\le a(n)\sqrt n$, by the arguments used in the proof of Lemma \ref{tWhW}. Consequently, in the subsequent approximation by $U_n^*$ at \eqref{vpdefU*}, the terms $n^{-1/2}Z_{ns}(\omega)$ and $n^{-1/2}Z_{nt}(\omega)$ have disappeared.
Then in the limit \eqref{vpaux9.2} in Proposition \ref{vpgprop} we can take $g_n(0,\ldots,0)$. \begin{appendix} \section*{Appendix: Uniform integrability of $\sup_{k\leq n} (X_k-k\mathrm{v}_P)/\sqrt{n}$}\label{UIapp} In this appendix we give the proof of Proposition \ref{UIprop}. The main tool used in the proof is a martingale representation that was given in the proof of the averaged central limit theorem in \cite{zRWRE}. Recall the definition of $h(x,\omega)$ in \eqref{hdef}, and let $\mathcal{F}_n := \sigma(X_i \dvtx i\leq n)$. Then, $M_n := X_n - n\mathrm{v}_P+ h(X_n, \omega)$ is an $\mathcal{F}_n$-martingale under the measure $P_\omega$. The correction term $h(X_n,\omega)$ may further be decomposed as $h(X_n,\omega) = Z_n(\omega) + R_n$, where $Z_n(\omega) = h(\lfloor n\mathrm{v}_P \rfloor,\omega)$ and $R_n := h(X_n,\omega)- Z_n(\omega)$. The main contributions to $X_n - n\mathrm{v}_P$ come from $M_n$ and $Z_n(\omega)$, while the term $R_n$ contributes on a scale of order less than $\sqrt{n}$. $M_n$ accounts for the fluctuations due to the randomness of the walk in a fixed environment, and $Z_n(\omega)$ accounts for the fluctuations due to the randomness of the environment. Using the above notation, we then have \setcounter{equation}{0} \begin{eqnarray}\label{Mnexpansion} \mathbb{E}( X_{n} - n\mathrm{v}_P)^2 &=& \mathbb{E}M_n^2 + E_P Z_n(\omega)^2 + \mathbb{E}R_n^2 \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} - 2 \mathbb{E}[ M_n R_n ]+ 2 \mathbb{E}[ Z_n(\omega) R_n]. \end{eqnarray} Note that the term $\mathbb{E}[ M_n Z_n(\omega) ]$ is missing on the right-hand side above. This is because $Z_n(\omega)$ depends only on the environment while $M_n$ is a martingale under~$P_\omega$ and, thus, $\mathbb{E}[M_n Z_n(\omega)] = E_P[ Z_n(\omega) E_\omega(M_n) ] = 0$.
Since H\"{o}lder's inequality implies that \[ \mathbb{E}[ M_n R_n ] + \mathbb{E}[ Z_n(\omega) R_n] \leq\bigl( (\mathbb{E}M_n^2)^{1/2} + (E_P Z_n(\omega)^2)^{1/2} \bigr)(\mathbb{E}R_n^2)^{1/2}, \] to complete the proof of \eqref{UIclaim}, it is enough to show \begin{equation} \label{MnZnL2} \lim_{n\rightarrow\infty} \frac{1}{n} \mathbb{E}M_n^2 = \sigma_1^2,\qquad \lim_{n\rightarrow\infty} \frac{1}{n} E_P Z_n(\omega)^2 = \sigma_2^2 \end{equation} and \begin{equation} \label{RnL2} \lim_{n\rightarrow\infty} \frac{1}{n} \mathbb{E}R_n^2 = 0. \end{equation} Since $Z_n(\omega) = h(\lfloor n\mathrm{v}_P \rfloor,\omega)$, to prove the second statement in \eqref{MnZnL2}, it is enough to show that \[ \lim_{n\rightarrow\infty} \frac{1}{n} E_P[ h(n,\omega)^2 ] = \mathrm{v}_P\operatorname{Var}( E_\omega T_1) = \frac{1}{\mathrm{v}_P}\sigma_2^2. \] However, since $h(n,\omega)$ is the sum of mean zero terms, \begin{eqnarray*} E_P[h(n,\omega)^2] &=& \operatorname{Var}(h(n,\omega)) \nonumber\\ &=& \mathrm{v}_P^2 \sum _{i=0}^{n-1} \operatorname{Var}( E_\omega T_1 ) + 2 \mathrm{v}_P^2 \sum_{0\leq i < j \leq n-1} \operatorname{Cov}( E_{\theta^i \omega} T_1, E_{\theta^j \omega} T_1 ) \\ &=& n \mathrm{v}_P^2 \operatorname{Var}( E_\omega T_1 ) + 2 \mathrm{v}_P^2 \sum _{k=1}^{n-1} (n-k) \operatorname{Cov}( E_{\omega} T_1, E_{\theta^k \omega} T_1 ),\nonumber \end{eqnarray*} where the last equality is due to the shift invariance of environments. Since $E_{\theta^k \omega} T_1 = 1 + \rho_k + \rho_k E_{\theta ^{k-1} \omega} T_1$ (see the derivation of a formula for $E_\omega T_1$ in \cite {sRWRE} or \cite{zRWRE}), the fact that $P$ is an i.i.d. law on environments implies that \[ \operatorname{Cov}( E_{\omega} T_1, E_{\theta^k \omega} T_1 ) = (E_P\rho_0) \operatorname{Cov} ( E_{\omega} T_1, E_{\theta^{k-1} \omega} T_1 ). \] Iterating this computation, we get that $\operatorname{Cov}( E_{\omega} T_1, E_{\theta^k \omega} T_1 ) = (E_P \rho_0)^k \operatorname{Var}( E_\omega T_1 )$. 
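The one-step identity behind this iteration, namely $\operatorname{Cov}(A, 1+\rho+\rho B) = (E\rho)\operatorname{Cov}(A,B)$ whenever $\rho$ is independent of $(A,B)$, can likewise be confirmed by exact enumeration. The following Python sketch is a standalone illustration; the toy joint law of $(A,B)$ and the law of $\rho$ are arbitrary choices, standing in for $(E_\omega T_1, E_{\theta^{k-1}\omega} T_1)$ and $\rho_k$.

```python
from fractions import Fraction as F

# Toy joint law of (A, B): an arbitrary correlated pair on a finite grid.
pAB = {(0, 0): F(1, 6), (0, 2): F(1, 3), (1, 1): F(1, 4), (2, 3): F(1, 4)}
# Law of rho, taken independent of (A, B).
pR = {F(1, 4): F(1, 2), F(3, 4): F(1, 2)}

def E(h, law):
    """Exact expectation of h under a finite law {outcome: probability}."""
    return sum(h(*x) * p for x, p in law.items())

# Joint law of (A, B, rho) under independence of rho.
pABR = {(a, b, r): p * q for (a, b), p in pAB.items() for r, q in pR.items()}

EA = E(lambda a, b: a, pAB)
EB = E(lambda a, b: b, pAB)
covAB = E(lambda a, b: a * b, pAB) - EA * EB
Er = sum(r * q for r, q in pR.items())

# Y = 1 + rho + rho*B plays the role of E_{theta^k omega} T_1 in the recursion.
EY = E(lambda a, b, r: 1 + r + r * b, pABR)
covAY = E(lambda a, b, r: a * (1 + r + r * b), pABR) - EA * EY

assert covAY == Er * covAB
```

Iterating the assertion's identity $k$ times, with a fresh independent $\rho$ at each step, is exactly how the displayed formula $\operatorname{Cov}( E_{\omega} T_1, E_{\theta^k \omega} T_1 ) = (E_P \rho_0)^k \operatorname{Var}( E_\omega T_1 )$ is obtained.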
Therefore, \begin{eqnarray*} E_P[h(n,\omega)^2] &=& n \mathrm{v}_P^2 \operatorname{Var}( E_\omega T_1 ) + 2 \mathrm{v}_P^2 \operatorname{Var}( E_\omega T_1) \sum _{k=1}^{n-1} (n-k) (E_P \rho_0)^k \\ & =& n \mathrm{v}_P^2 \operatorname{Var}( E_\omega T_1 ) \Biggl( 1 + 2 \sum _{k=1}^{n-1} (E_P \rho _0)^k \Biggr)\\ &&{} - 2\mathrm{v}_P^2 \operatorname{Var}(E_\omega T_1) \sum_{k=1}^{n-1} k (E_P \rho_0)^k . \end{eqnarray*} Since $E_P \rho_0 < 1$, this implies that \begin{eqnarray}\label{hL2} \lim_{n\rightarrow\infty} \frac{1}{n} E_P[h(n,\omega)^2] &= &\mathrm {v}_P^2 \operatorname{Var}( E_\omega T_1 )\biggl( 1 + 2 \frac{ E_P \rho_0 }{1- E_P \rho_0} \biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &=& \mathrm {v}_P\operatorname{Var}( E_\omega T_1 ), \end{eqnarray} where the last equality is from the explicit formula for $\mathrm {v}_P$ given in \eqref{LLN}. Thus, we have proved the second statement in \eqref{MnZnL2}. We now turn to the proof of the first statement in \eqref{MnZnL2}. Let \[ V_n := \sum_{k=1}^n E_\omega[ (M_{k+1} - M_k)^2 | \mathcal {F}_k ] . \] Note that $E_\omega V_n = E_\omega M_n^2$ since $M_n$ is a martingale under $P_\omega$. Thus, the first statement in \eqref{MnZnL2} is equivalent to $\lim_{n\rightarrow\infty} \mathbb{E}V_n /n = \sigma_1^2$. A direct computation (see the proof of the averaged central limit theorem on page 211 of \cite{zRWRE}) yields that $E_\omega[ (M_{k+1} - M_k)^2 | \mathcal{F}_k ] = g(\theta^{X_k} \omega)$, where \[ g(\omega) = \mathrm{v}_P^2 \bigl( \omega_0 (E_\omega T_1 - 1)^2 + (1-\omega_0)(E_{\theta^{-1}\omega} T_1 + 1)^2 \bigr). \] Let $Q$ be the measure on environments defined by $\frac{dQ}{dP}(\omega) = f(\omega)$, where $f(\omega)$ is defined in \eqref{fdef}. Under the averaged measure $\mathbb Q( \cdot) = E_Q[ P_\omega(\cdot) ] $, the sequence $\{ \theta^{X_k} \omega\}_{k\in\mathbb N}$ is stationary and ergodic.
Therefore, $\frac{V_n}{n} = \frac{1}{n} \sum_{k=1}^n g(\theta^{X_k} \omega )$ converges in $L^1(\mathbb Q)$ to \[ E_Q[ g(\omega)] = E_P\biggl[ \frac{dQ}{dP}(\omega) g(\omega) \biggr] = \mathrm{v}_P^3 E_P[ \operatorname{Var}_\omega T_1 ] = \sigma_1^2, \] where the second to last equality follows from the formulas for $\frac {dQ}{dP}(\omega)$ and $g(\omega)$ given above, the explicit formula for $\operatorname{Var} _\omega T_1$ shown in \cite{pThesis}, and the shift invariance of the law $P$. Since $\frac{dQ}{dP}(\omega) = f(\omega) \geq\mathrm{v}_P$, we obtain that \[ \mathbb{E}| V_n/n - \sigma_1^2 | = E_Q \biggl[ \frac {dP}{dQ}(\omega) E_\omega | V_n/n - \sigma_1^2 | \biggr] \leq\frac{1}{\mathrm {v}_P} E_{\mathbb Q}| V_n/n- \sigma_1^2 | \mathop{\longrightarrow}_{n\rightarrow\infty }0. \] Thus, since $V_n/n$ converges in $L^1(\mathbb Q)$ to $\sigma_1^2$, $V_n/n$ also converges to $\sigma_1^2$ in $L^1(\mathbb{P})$. Finally, we turn to the proof of \eqref{RnL2}. Fix a $\beta\in(1/2,1)$. Since $R_n = h(X_n, \omega) - h(\lfloor n\mathrm{v}_P \rfloor, \omega)$, \begin{eqnarray*} E_\omega R_n^2 &\leq&\sup_{x\dvtx |x-\lfloor n\mathrm{v}_P \rfloor| \leq n^{\beta}} |h(x,\omega) - h(\lfloor n\mathrm{v}_P \rfloor,\omega )|^2 \\ &&{}+ \sup_{|x|\leq n} 4|h(x,\omega)|^2 P_\omega(|X_n - \lfloor n\mathrm{v}_P \rfloor| > n^{\beta}). 
\end{eqnarray*} Then, the shift invariance of the measure $P$ and H\"older's inequality imply that, for any $\delta>0$, \begin{eqnarray*} \mathbb{E}R_n^2 & \leq&2E_P \Bigl[ \sup_{|x|\leq n^{\beta}} h(x,\omega)^2 \Bigr] + 4 E_P\Bigl[ \sup_{|x|\leq n} h(x,\omega)^2 P_\omega(|X_n - \lfloor n\mathrm{v}_P \rfloor| > n^{\beta}) \Bigr] \\ &\leq& E_P \Bigl[ \sup_{|x|\leq n^{\beta}} h(x,\omega)^2 \Bigr] +4\Bigl(E_P\Bigl[ \sup_{|x|\leq n} h(x,\omega)^{2+2\delta} \Bigr] \Bigr)^{1/(1+\delta)} \\ &&\hspace*{98pt}{}\times\mathbb{P} (|X_n - \lfloor n\mathrm{v}_P \rfloor| > n^{\beta})^{\delta /(1+\delta)} \\ &\leq& C n^{\beta} + C n \mathbb{P}(|X_n - \lfloor n\mathrm{v}_P \rfloor| > n^{\beta})^{\delta/(1+\delta)}, \end{eqnarray*} where the last inequality follows from Lemma \ref{Goldsheid}. The first term on the right above is $o(n)$ since $\beta< 1$, and the second term on the right is $o(n)$ because $\beta>1/2$ and the averaged central limit theorem implies that $\mathbb{P}(|X_n - \lfloor n\mathrm {v}_P \rfloor| > n^{\beta})$ tends to zero. This completes the proof of \eqref{RnL2} and thus also the first part of Proposition~\ref{UIprop}. To prove the second part of Proposition \ref{UIprop}, we again use the representation $X_n - n\mathrm{v}_P= M_n - Z_n(\omega) - R_n$. Then, \[ \mathbb{E}\Bigl[ \sup_{k\leq n} (X_k - k\mathrm{v}_P)^2 \Bigr] \leq3\mathbb{E}\Bigl[ \sup _{k\leq n} M_k^2 \Bigr] + 3\mathbb{E}\Bigl[ \sup_{k\leq n} Z_k(\omega)^2 \Bigr] + 3\mathbb{E}\Bigl[ \sup_{k\leq n} R_k^2 \Bigr]. \] Since $M_n$ is a martingale, Doob's inequality and the first statement in \eqref{MnZnL2} imply that \[ \mathbb{E}\Bigl[ \sup_{k\leq n} M_k^2 \Bigr] \leq4 \mathbb {E}[ M_n^2 ] = \mathcal{O}(n). 
\] The same argument given above, which showed that $\mathbb{E}R_n^2 = o(n)$, can be repeated to show that, for any $\beta\in(1/2,1)$, there exists a constant $C<\infty$ such that \[ \mathbb{E}\Bigl[ \sup_{k\leq n} R_k^2 \Bigr] \leq C n^{\beta} + C n \mathbb{P}\Bigl( \sup _{k\leq n} |X_k-k\mathrm{v}_P| \geq n^{\beta} \Bigr) = o(n), \] where in the last equality we used the averaged functional central limit theorem. To finish the proof of \eqref{supUIclaim}, we need to show that $E_P [ \sup_{k\leq n} Z_k(\omega)^2 ] = \mathcal{O}(n)$. Since $Z_n(\omega) = h(\lfloor n\mathrm{v}_P \rfloor,\omega)$, this is equivalent to showing that\break $E_P [ \sup_{k\leq n} h(k,\omega)^2 ] = \mathcal {O}(n)$. However, H\"older's inequality and \eqref{hLpbound} imply that there exist $\eta>0$ and $C<\infty$ such that \[ E_P \Bigl[ \sup_{k\leq n} h(k,\omega)^2 \Bigr] \leq\Bigl( E_P \Bigl[ \sup _{k\leq n} |h(k,\omega)|^{2+2\eta} \Bigr] \Bigr)^{1/(1+\eta)} \leq Cn. \] This completes the proof of Proposition \ref{UIprop}. \end{appendix} \printaddresses \end{document}
\begin{document} \title{The Rational Homotopy Type of Homotopy Fibrations Over Connected Sums} \author{Sebastian Chenery} \address{Mathematical Sciences, University of Southampton, Southampton SO17 1BJ, United Kingdom} \email{[email protected]} \subjclass[2020]{Primary 55P35; Secondary 57N65} \keywords{Loop spaces, connected sums} \begin{abstract} We provide a simple condition on rational cohomology for the total space of a pullback fibration over a connected sum to have the rational homotopy type of a connected sum, after looping. This takes inspiration from recent work of Jeffrey and Selick, in which they study pullback fibrations of this type, but under stronger hypotheses compared to our result. \end{abstract} \maketitle \section{Introduction} Taking inspiration from \cite{js}, we begin with a homotopy fibration $F\rightarrow L\xrightarrow{f} C$ in which all spaces have the homotopy type of Poincar\'e Duality complexes; that is to say simply connected, finite dimensional $CW$-complexes whose cohomology rings satisfy Poincar\'e Duality. Writing $dim(C)=n$ and $dim(L)=m$, let $B$ be another $n$-dimensional Poincar\'e Duality complex. Form the connected sum $B\#C$, and take the natural collapsing map $p:B\#C\rightarrow C$. Defining the $m$-dimensional complex $M$ as the pullback of $f$ across $p$, we have a homotopy fibration diagram \begin{equation}\label{dgm:main1} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] F \arrow[rr] \arrow[dd, equal] && M \arrow[dd] \arrow[rr] && B\#C \arrow[dd, "p"] \\ \\ F \arrow[rr] && L \arrow[rr, "f"] && C. \end{tikzcd} \end{equation} A natural question follows: to what extent does $M$ behave like a connected sum? Jeffrey and Selick give a partial answer to the above question in \cite{js}. 
They consider the question when each space is a closed, oriented, smooth, simply connected manifold, but in the stricter setting of fibre bundles, and construct a space $X'$ with the property that there is an isomorphism of homology groups \[H_k(M;\varmathbb{Z})\cong H_k(X';\varmathbb{Z})\oplus H_k(L;\varmathbb{Z})\] for $0<k<m$ \cite{js}*{Theorem 3.3}. This suggests that in certain circumstances we might expect there to be an $m$-dimensional manifold $X$, such that $M\simeq X\#L$. Jeffrey and Selick show that there are contexts in which such an $X$ exists, and others where it cannot exist\footnote{At time of writing, it is known that the current arXiv version of \cite{js} contains a mistake. It has been communicated to me privately that a new version has been prepared, which recovers the same main results, and will be published and uploaded online soon.}. Similar questions to the above have been asked recently. Duan \cite{duan} approaches the topic from a much more geometric, surgery theoretic viewpoint. In this work, the principal objects of concern are manifolds which exhibit a regular circle action; namely, a free circle action on an $n$-dimensional closed, oriented, smooth, simply connected manifold whose quotient space is an $(n-1)$-dimensional closed, oriented, smooth, simply connected manifold. Translated into the context of \cite{js}, Duan studies the situation when $F\simeq S^1$. If $L$ is of dimension at least 5, it is shown in \cite{duan} that the total space of the pullback fibration is indeed always diffeomorphic to a connected sum. Although the thrust of \cite{duan} is mainly concerned with constructing smooth manifolds that admit regular circle actions, it is interesting to remark that its strategy yields a specific class of examples for the situation as in Diagram (\ref{dgm:main1}). 
Other recent work includes that of Huang and Theriault \cite{huangtheriault}, in which they consider the loop space homotopy type of manifolds after stabilisation by connected sum with a projective space. They do so by combining the results of \cite{duan} with a homotopy theoretic analysis of special cases of Diagram (\ref{dgm:main1}). In this paper we give a special circumstance, recorded in Proposition \ref{prop:1}, in which the based loop space of $M$ is homotopy equivalent to the based loops of a connected sum. This takes its most dramatic form in the context of rational homotopy theory, which we state in the Main Theorem below. Let $\overline{C}$ and $\overline{L}$ denote the $(n-1)$- and $(m-1)$-skeleta of $C$ and $L$, respectively. \begin{mainthm}[Theorem \ref{thm:connsum}] Given spaces and maps as in Diagram (\ref{dgm:main1}), if \begin{enumerate} \item[(i)] the fibre map \(F\rightarrow M\) is (rationally) null homotopic, and \item[(ii)] both $H^*(\overline{C};\varmathbb{Q})$ and $H^*(\overline{L};\varmathbb{Q})$ are generated by more than one element, \end{enumerate} there is a rational homotopy equivalence $\Omega M\simeq \Omega (X\#L)$ for an appropriate \(CW\)-complex \(X\) which we construct in Section \ref{sec:pullbacks}. \end{mainthm} \noindent Thus we are able to give an affirmative answer in this situation, but after looping and up to rational homotopy equivalence. Examples of homotopy fibrations that fulfil the criteria of the Main Theorem include certain sphere bundles. Furthermore, note that a consequence of the Main Theorem is that there is an isomorphism of rational homotopy groups: $\pi_*(M)\otimes\varmathbb{Q}\simeq\pi_*(X\#L)\otimes\varmathbb{Q}$. The author would like to thank their PhD supervisor, Stephen Theriault, for the many enlightening discussions during the preparation of this work, as well as Paul Selick and Lisa Jeffrey for their generosity.
The author also wishes to thank the reviewer for their helpful and insightful comments. \subsection*{Competing Interests Declaration:} the author declares none. \section{Preliminaries} For two path connected and based spaces $X$ and $Y$, the \textit{(left) half-smash} of $X$ and $Y$ is the quotient space \[X\ltimes Y=(X\times Y)/(X\times y_0)\] where $y_0$ denotes the basepoint of $Y$. Furthermore, it is a well known result that if $Y$ is a co-$H$-space, then there is a homotopy equivalence $X\ltimes Y\simeq(X\wedge Y)\vee Y$. We now move to another definition. For a homotopy cofibration $A\xrightarrow{f} B \xrightarrow{j} C$, the map $f$ is called \textit{inert} if $\Omega j$ has a right homotopy inverse. This is an integral version of a notion used in rational homotopy theory, namely \textit{rational inertness}, which we define in Section \ref{sec:rational}. We will make use of the following result, due to Theriault \cite{t20}. \begin{thm}[Theriault]\label{thm:splitfib} Let $A\xrightarrow{f} B \xrightarrow{j} C$ be a homotopy cofibration of simply connected spaces, where the map $f$ is inert.
Then there is a homotopy fibration \[\Omega C\ltimes A\rightarrow B\xrightarrow{j}C.\] Moreover, this homotopy fibration splits after looping, so there is a homotopy equivalence $\Omega B\simeq\Omega C\times\Omega(\Omega C\ltimes A).$ $\square$ \end{thm} Take now a different situation, in which we have two homotopy cofibrations of simply connected spaces \[A\xrightarrow{f} B \xrightarrow{j} C\text{\; and \;}Y\xrightarrow{i}B\xrightarrow{p}X.\] In the diagram below, each complete row and column is a homotopy cofibration, and the bottom-right square is a homotopy pushout, defining the new space $Q$ and the maps $h$ and $q$: \begin{equation}\label{dgm:setup} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] && Y \arrow[rr, equal] \arrow[dd, "i"] && Y \arrow[dd, "j\circ i"] \\ &&&&& \\ A \arrow[rr, "f"] \arrow[dd, equal] && B \arrow[rr, "j"] \arrow[dd, "p"] && C \arrow[dd, "h"] \\ &&&&& \\ A \arrow[rr, "p\circ f"] && X \arrow[rr, "q"] && Q. \end{tikzcd} \end{equation} We record an elementary fact in the following lemma, for ease of reference. \begin{lem} \label{lem:loophinv} Take the setup of Diagram (\ref{dgm:setup}). If the maps $\Omega p$ and $\Omega q$ have right homotopy inverses, then so does $\Omega h$. Moreover, there is a homotopy equivalence \[\Omega C\simeq\Omega Q\times\Omega(\Omega Q\ltimes Y).\] \end{lem} \begin{proof} Let us denote the right homotopy inverses of $\Omega q$ and $\Omega p$ by $s$ and $t$, respectively. Then, by the homotopy commutativity of Diagram (\ref{dgm:setup}), $\Omega h$ has a right homotopy inverse given by the composite $\Omega j\circ t \circ s$. As $Y$ and $C$ are simply connected, the space $Q$ is as well, and the map $j\circ i$ is by definition inert.
Hence we may apply Theorem \ref{thm:splitfib} to the right-most column of (\ref{dgm:setup}), obtaining the asserted homotopy equivalence. \end{proof} Finally, recall that the spaces considered by Jeffrey and Selick in \cite{js} have the homotopy type of oriented, smooth, closed, simply connected manifolds, and are thus Poincar\'e Duality complexes; that is to say, they have the homotopy type of simply connected $CW$-complexes whose cohomology rings satisfy Poincar\'e Duality. For such a complex there exists a $CW$ structure having a single top-dimensional cell. For brevity, given a $k$-dimensional Poincar\'e Duality complex $Y$, let $\overline{Y}$ denote its $(k-1)$-skeleton, and note that there exists a homotopy cofibration \[S^{k-1}\xrightarrow{f}\overline{Y}\rightarrow \overline{Y}\cup_f e^k\simeq Y\] where $f$ is the attaching map of the top-cell of $Y$. Furthermore, given two \(k\)-dimensional Poincar\'e Duality complexes \(X\) and \(Y\), whose top-dimensional cells are attached by maps \(f\) and \(g\) (respectively), one forms their connected sum by means of the composite \[f+g:S^{k-1}\xrightarrow{\sigma}S^{k-1}\vee S^{k-1}\xrightarrow{f \vee g}\overline{X}\vee\overline{Y}\] where \(\sigma\) is the usual comultiplication. The homotopy cofibre of \(f+g\) is defined to be \(X\#Y\). In particular, \(\overline{X\#Y}\simeq\overline{X}\vee\overline{Y}\), and there is a homotopy cofibration \[\overline{X}\rightarrow X\#Y\rightarrow Y.\] \section{Pullbacks over Connected Sums}\label{sec:pullbacks} The situation we wish to study begins with a homotopy fibration \(F\rightarrow L\xrightarrow{f} C\), in which each space has the homotopy type of a Poincar\'e Duality complex. As in the introduction, let $dim(C)=n$ and $dim(L)=m$, and let $B$ be another $n$-dimensional Poincar\'e Duality complex. We form the connected sum $B\#C$, and take the natural collapsing map $p:B\#C\rightarrow C$.
Defining the $m$-dimensional complex $M$ as the pullback of $f$ across $p$, we have a homotopy fibration diagram \begin{equation}\label{dgm:main2} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] F \arrow[rr, "\alpha"] \arrow[dd, equal] && M \arrow[dd, "\pi"] \arrow[rr] && B\#C \arrow[dd, "p"] \\ \\ F \arrow[rr] && L \arrow[rr, "f"] && C \end{tikzcd} \end{equation} where we denote the induced map \(M\rightarrow L\) by \(\pi\) and the fibre map \(F\rightarrow M\) by \(\alpha\). \begin{lem}\label{lem:pushout} With spaces and maps as in Diagram (\ref{dgm:main2}), there is a homotopy pushout square \begin{equation*} \begin{tikzcd}[row sep=3em, column sep = 3em] F\times\overline{B} \arrow[r] \arrow[d, "p_1"] & M \arrow[d, "\pi"] \\ F \arrow[r] & L. \end{tikzcd} \end{equation*} Here the map \(p_1\) is the projection to the first factor. In particular, if \(\alpha\) is null homotopic, there is a homotopy cofibration \(F\ltimes\overline{B}\rightarrow M\xrightarrow{\pi} L\). \end{lem} \begin{proof} To prove the existence of the asserted homotopy pushout, we will use Mather's Cube Lemma. Indeed, consider the following diagram: \begin{equation}\label{dgm:cube} \begin{tikzcd}[row sep=1em, column sep = 1em] F\times\overline{B} \arrow[rr, "\beta"] \arrow[dr,swap,"p_1"] \arrow[dd,swap,"p_2"] && M \arrow[dd] \arrow[dr] \\ & F \arrow[rr, crossing over] && L \arrow[dd, "f"] \\ \overline{B} \arrow[rr] \arrow[dr] && B\#C \arrow[dr, "p"]\\ & * \arrow[rr] \arrow[from=uu, crossing over] && C. \\ \end{tikzcd} \end{equation} We must show that (\ref{dgm:cube}) commutes up to homotopy, that the bottom face is a homotopy pushout, and that the four vertical faces are homotopy pullbacks. The bottom face of (\ref{dgm:cube}) arises from the homotopy cofibration \(\overline{B}\rightarrow B\#C\xrightarrow{p}C\), and so is a homotopy pushout.
The front face is evidently a homotopy pullback, because it comes from the homotopy fibration we began with, as is the right-hand face of the cube, which is the right-hand square in Diagram (\ref{dgm:main2}). Furthermore, it is an elementary fact that the left-hand face of the cube, together with the projection maps, is also a homotopy pullback. What remains to show is that the map \(\beta:F\times\overline{B}\rightarrow M\) is chosen such that the diagram commutes up to homotopy and that the rear face is a homotopy pullback. Indeed, as the right-hand face is a homotopy pullback, \(\beta\) is induced by the existence of the composites \(F\times\overline{B}\rightarrow F\rightarrow L\) and \(F\times\overline{B}\rightarrow\overline{B}\rightarrow B\#C\), so the diagram does indeed homotopy commute. One then applies \cite{ark}*{Theorem 6.3.3}, which forces the rear face to be a homotopy pullback. In the special case in which the fibre map \(\alpha\) is null homotopic, we may pinch out a copy of \(F\) in the asserted pushout, giving the square \begin{equation*} \begin{tikzcd}[row sep=3em, column sep = 3em] F\ltimes\overline{B} \arrow[r] \arrow[d] & M \arrow[d, "\pi"] \\ * \arrow[r] & L. \end{tikzcd} \end{equation*} This square is equivalent to the stated homotopy cofibration. \end{proof} \begin{rem} Note that, because Diagram (\ref{dgm:main2}) is homotopy commutative, requiring \(\alpha\) to be null homotopic forces the fibre map \(F\rightarrow L\) of the original fibration to be null homotopic as well. \end{rem} We now give the thrust of this section, providing a circumstance in which the based loop space of the Poincar\'e Duality complex $M$ is homotopy equivalent to the based loop space of a connected sum. Let $X'=F\ltimes\overline{B}$ and $X=X'\cup e^m$ (the homotopy class of the attaching map \(S^{m-1}\rightarrow X'\) plays no role in what is to follow, so we suppress it in the definition of \(X\)).
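Before stating the proposition, it may help to see how concrete the space $X'$ can be. The following computation is purely illustrative (the choice of spheres is ours, not an assumption made in the text), and uses only the half-smash splitting recalled in the Preliminaries:

```latex
% Illustrative computation: the spheres S^r, S^a, S^b are chosen for
% concreteness and are not assumptions made in the paper.
% If F \simeq S^r and \overline{B} \simeq S^a \vee S^b with a, b \geq 2,
% then \overline{B} is a suspension, hence a co-H-space, and the
% splitting X \ltimes Y \simeq (X \wedge Y) \vee Y gives
\[
X' = S^r \ltimes (S^a \vee S^b)
   \simeq \bigl(S^r \wedge (S^a \vee S^b)\bigr) \vee (S^a \vee S^b)
   \simeq S^{r+a} \vee S^{r+b} \vee S^a \vee S^b,
\]
% using that the smash product distributes over wedges and that
% S^r \wedge S^a \simeq S^{r+a}.
```

With $F=S^1$ and $\overline{B}=S^3\vee S^3$, a computation of exactly this shape recovers the identification of $X'$ appearing in the example at the end of this section.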
\begin{prop}\label{prop:1} Take the situation as in Diagram (\ref{dgm:main2}), and suppose that the map $\varmathbb{O}mega p$ has a right homotopy inverse. Then the map $\varmathbb{O}mega \pi$ has a right homotopy inverse. Moreover, if \(\alpha\) is null homotopic and the attaching map of the top-cell of $L$ is inert, then \[\varmathbb{O}mega M\simeq \varmathbb{O}mega(X\#L).\] \end{prop} \begin{proof} Denoting the right homotopy inverse of $\varmathbb{O}mega p$ by $s:\varmathbb{O}mega C\rightarrow \varmathbb{O}mega(B\#C)$, consider the diagram \begin{equation*} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] && \varmathbb{O}mega L \arrow[dddr, equal, bend right=20] \arrow[dr, dashed, "\lambda"] \arrow[r, "\varmathbb{O}mega f"] & \varmathbb{O}mega C \arrow[drr, "s"] &&& \\ &&& \varmathbb{O}mega M \arrow[rr] \arrow[dd, "\varmathbb{O}mega\pi"] && \varmathbb{O}mega(B\#C) \arrow[dd, "\varmathbb{O}mega p"] \\ &&&&&&& \\ &&& \varmathbb{O}mega L \arrow[rr, "\varmathbb{O}mega f"] && \varmathbb{O}mega C \end{tikzcd} \end{equation*} where the map $\lambda$ will be detailed momentarily. Since the right-hand square of Diagram (\ref{dgm:main2}) is a homotopy pullback, so is the square above. Furthermore, since $\varmathbb{O}mega p\circ s\simeq 1_{\varmathbb{O}mega C}$, the diagram commutes. As $\varmathbb{O}mega M$ is the homotopy pullback of $\varmathbb{O}mega f$ across $\varmathbb{O}mega p$, the map $\lambda$ exists and we have that $\varmathbb{O}mega\pi\circ\lambda\simeq1_{\varmathbb{O}mega L}$. In other words, the map $\lambda$ is a right homotopy inverse for $\varmathbb{O}mega\pi$. Consequently, in the case when \(\alpha\) is null homotopic, we apply Theorem \ref{thm:splitfib} to the homotopy cofibration \[X'\rightarrow M\xrightarrow{\pi}L\] from the special case of Lemma \ref{lem:pushout}.
Indeed, since $\varmathbb{O}mega \pi$ has a right homotopy inverse, the map $X'\rightarrow M$ is by definition inert, so Theorem \ref{thm:splitfib} immediately gives us that \[\varmathbb{O}mega M\simeq \varmathbb{O}mega L\times \varmathbb{O}mega(\varmathbb{O}mega L\ltimes X').\] On the other hand, let us now consider the connected sum $X\#L$. Take two homotopy cofibrations: one given by the attaching map of the top-cell of $X\#L$, and the other by the inclusion of a wedge summand \[S^{m-1}\rightarrow X'\vee \overline{L}\rightarrow X\#L\text{\; and \;}X'\hookrightarrow X'\vee \overline{L}\xrightarrow{q} \overline{L}.\] We combine these to give a cofibration diagram, in the sense of (\ref{dgm:setup}): \begin{equation*} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] &&& X' \arrow[rr, equal] \arrow[dd, hook] && X' \arrow[dd] \\ &&&&&&& \\ &S^{m-1} \arrow[rr] \arrow[dd, equal] && X'\vee \overline{L} \arrow[dd, "q"] \arrow[rr] && X\#L \arrow[dd] \\ &&&&&&& \\ & S^{m-1} \arrow[rr] && \overline{L} \arrow[rr, "j"] && L. \end{tikzcd} \end{equation*} The map $q$ pinches to the second wedge summand, and therefore has a right homotopy inverse given by inclusion; hence $\varmathbb{O}mega q$ also has a right homotopy inverse. Moreover, if the attaching map of the top-cell of $L$ is inert, the map $\varmathbb{O}mega j$ has a right homotopy inverse, by definition. Thus Lemma \ref{lem:loophinv} applies, implying there is a homotopy equivalence \[\varmathbb{O}mega(X\#L)\simeq \varmathbb{O}mega L\times \varmathbb{O}mega(\varmathbb{O}mega L\ltimes X').\] Thus $\varmathbb{O}mega M$ and $\varmathbb{O}mega(X\#L)$ are both homotopy equivalent to $\varmathbb{O}mega L\times \varmathbb{O}mega(\varmathbb{O}mega L\ltimes X')$, and are therefore homotopy equivalent to each other.
\end{proof} \begin{exa} A general class of examples satisfying the requirement that \(\alpha\simeq*\) is given by sphere bundles, \(S^r\rightarrow L\rightarrow C\), where the pullback \(M\) has trivial \(r^{th}\) homotopy group. Consider for example the classical Hopf bundle \(S^1\rightarrow S^3\xrightarrow{\eta} S^2.\) Taking products with the trivial fibration $*\rightarrow S^4\rightarrow S^4$ yields a new homotopy fibration \[S^1\rightarrow S^3\times S^4\xrightarrow{\eta\times1}S^2\times S^4.\] Applying our construction with $B=S^3\times S^3$, we have the following pullback diagram of homotopy fibrations \begin{equation*} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] & S^1 \arrow[rr] \arrow[dd, equal] && M \arrow[dd, "\pi"] \arrow[r] & (S^3\times S^3)\#(S^2\times S^4) \arrow[dd, "p"]\\ \\ & S^1 \arrow[rr] && S^3\times S^4 \arrow[r, "\eta\times1"] & S^2\times S^4. \end{tikzcd} \end{equation*} Using techniques from \cite{t20}*{Section 9} it can be shown that $\varmathbb{O}mega(S^2\times S^4)$ retracts off $\varmathbb{O}mega((S^3\times S^3)\#(S^2\times S^4))$ via a right homotopy inverse for $\varmathbb{O}mega p$. Moreover, the attaching map for the top-cell of the product $S^3\times S^4$ is known to be inert. Now we show that \(\pi_1(M)\) is trivial. The existence of a right homotopy inverse for the map \(\varmathbb{O}mega p\) implies that its homotopy fibre (and consequently the homotopy fibre of \(\pi\)) is homotopy equivalent to \(\varmathbb{O}mega(\varmathbb{O}mega(S^2\times S^4)\ltimes(S^3\vee S^3))\), by Theorem \ref{thm:splitfib}. It is now easy to check that the long exact sequence of homotopy groups induced by the fibration sequence \(\varmathbb{O}mega(\varmathbb{O}mega(S^2\times S^4)\ltimes(S^3\vee S^3))\rightarrow M\xrightarrow{\pi}S^3\times S^4\) forces \(\pi_1(M)\) to be trivial.
Therefore Proposition \ref{prop:1} applies, with \[X'\simeq S^1\ltimes(S^3\vee S^3)\simeq S^3\vee S^3\vee S^4\vee S^4.\] By gluing a 7-cell to $X'$, we may take $X=(S^3\times S^4)\#(S^3\times S^4)$. Hence we obtain a homotopy equivalence \[\varmathbb{O}mega M\simeq \varmathbb{O}mega((S^3\times S^4)\#(S^3\times S^4)\#(S^3\times S^4)).\] To conclude this example, we remark that many of the situations considered by Duan in \cite{duan} also fit into this framework. \end{exa} \section{The Rational Homotopy Perspective}\label{sec:rational} We wish to apply Proposition \ref{prop:1} in the context of rational homotopy theory. Let \[S^{k-1}\xrightarrow{f}Y\xrightarrow{i}Y\cup_f e^k\] be a homotopy cofibration, where the map $f$ attaches a $k$-cell to $Y$ and $i$ is the inclusion. The map $f$ is \textit{rationally inert} if $\varmathbb{O}mega i$ induces a surjection in rational homology. This implies that, rationally, $\varmathbb{O}mega i$ has a right homotopy inverse. The following theorem was first proved in \cite{hl}*{Theorem 5.1}, though we prefer the statement found in \cite{fh}. \begin{thm}[Halperin-Lemaire]\label{thm:hl} If $Y\cup_f e^k$ is a Poincar\'e Duality complex and $H^*(Y;\varmathbb{Q})$ is generated by more than one element, then the attaching map $f$ is rationally inert. $\square$ \end{thm} \noindent This leads us to the statement and proof of the Main Theorem. \begin{thm}\label{thm:connsum} Given spaces and maps as in Diagram (\ref{dgm:main2}), if \begin{enumerate} \item[(i)] the map \(\alpha\) is (rationally) null homotopic, and \item[(ii)] both $H^*(\overline{C};\varmathbb{Q})$ and $H^*(\overline{L};\varmathbb{Q})$ are generated by more than one element, \end{enumerate} there is a rational homotopy equivalence $\varmathbb{O}mega M\simeq \varmathbb{O}mega (X\#L)$. \end{thm} \begin{proof} By Theorem \ref{thm:hl}, the attaching maps for the top-cells of $C$ and $L$ are rationally inert.
We have the homotopy pullback below, which is the right-hand square of (\ref{dgm:main2}) \begin{equation*} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] M \arrow[rr] \arrow[dd, "\pi"] && B\#C \arrow[dd, "p"] \\ \\ L \arrow[rr, "f"] && C. \end{tikzcd} \end{equation*} Rationalising spaces and maps in this pullback square, we see that Proposition \ref{prop:1} would apply if the map $\varmathbb{O}mega p$ were to have a rational right homotopy inverse, as the attaching map for the top-cell of $L$ is rationally inert. Thus we would have a rational homotopy equivalence $\varmathbb{O}mega M\simeq\varmathbb{O}mega (X\#L)$. It therefore remains to show that the map $\varmathbb{O}mega p$ has a rational right homotopy inverse. With this in mind, consider the following homotopy cofibration diagram \begin{equation*} \begin{tikzcd}[row sep=1.5em, column sep = 1.5em] &&& \overline{B} \arrow[rr, equal] \arrow[dd, hook] && \overline{B} \arrow[dd] \\ &&&&&&& \\ &S^{n-1} \arrow[rr] \arrow[dd, equal] && \overline{B}\vee \overline{C} \arrow[dd, "q"] \arrow[rr] && B\#C \arrow[dd, "p"] \\ &&&&&&& \\ & S^{n-1} \arrow[rr] && \overline{C} \arrow[rr, "i_C"] && C. \end{tikzcd} \end{equation*} As the pinch map $q$ has a right homotopy inverse, so does $\varmathbb{O}mega q$. Furthermore, the attaching map of the top-cell of $C$ is rationally inert, and therefore $\varmathbb{O}mega i_C$ has a right homotopy inverse after rationalisation. Therefore the map $\varmathbb{O}mega p$ also has a (rational) right homotopy inverse, by Lemma \ref{lem:loophinv}. \end{proof} \begin{rem} Recall that a simply connected space $Y$ is called \textit{rationally elliptic} if $dim(\pi_*(Y)\otimes\varmathbb{Q})<\infty$, and called \textit{rationally hyperbolic} otherwise \cite{fht}. We remark briefly on the rational hyperbolicity of the spaces discussed above. Indeed, suppose that the skeleton $\overline{B}$ is a suspension.
Then, as $X'=F\ltimes\overline{B}$, we have a homotopy equivalence $X'\simeq(F\wedge\overline{B})\vee\overline{B}$, which is again a suspension. Thus, rationally, $X'$ is homotopy equivalent to a wedge of spheres. Assuming $X'$ is rationally homotopy equivalent to a wedge containing more than one sphere of dimension greater than 1, this would imply the rational hyperbolicity of $\varmathbb{O}mega X$. Indeed, this is guaranteed if the ring $H^*(\overline{B};\varmathbb{Q})$ has more than one generator of degree 2 or more, or if $H^*(\overline{B};\varmathbb{Q})$ has one such generator and $F$ is not rationally contractible. Since $X'$ homotopy retracts off $\varmathbb{O}mega L\ltimes X'$, by Theorem \ref{thm:connsum} we have that $\varmathbb{O}mega X'$ retracts off $\varmathbb{O}mega M$. With the assumptions on $X'$ above, this implies that $\varmathbb{O}mega M$ is rationally hyperbolic. As a final observation, note that a natural situation in which $\overline{B}$ has the homotopy type of a suspension would be when $B$ is sufficiently highly connected: by \cite{ganeacogroups}, if $B$ is $k$-connected, $\overline{B}$ has the homotopy type of a suspension if $n\leq3k+1$. For example, take $B$ to be an $(n-1)$-connected $2n$-dimensional Poincar\'e Duality complex. \end{rem} \end{document}
\begin{document} \begin{center} \large \bf Birationally rigid Fano fibre spaces. II \end{center} \centerline{A.V.Pukhlikov} \parshape=1 3cm 10cm \noindent {\small \quad\quad\quad \quad\quad\quad\quad \quad\quad\quad {\bf }\newline In this paper we prove birational rigidity of large classes of Fano-Mori fibre spaces over a base of arbitrary dimension, bounded from above by a constant that depends on the dimension of the fibre only. In order to do that, we first show that if every fibre of a Fano-Mori fibre space satisfies certain natural conditions, then every birational map onto another Fano-Mori fibre space is fibre-wise. After that we construct large classes of fibre spaces (into Fano double spaces of index one and into Fano hypersurfaces of index one) which satisfy those conditions. Bibliography: 35 titles.} \section*{Introduction} {\bf 0.1. Birationally rigid Fano-Mori fibre spaces.} In this paper we investigate the problem of birational rigidity of Fano-Mori fibre spaces $\pi\colon V\to S$. We assume that the base $S$ is non-singular, the variety $V$ has at most factorial terminal singularities, the anticanonical class $(-K_V)$ is relatively ample and $$ \mathop{\rm Pic} V = {\mathbb Z} K_V\oplus \pi^* \mathop{\rm Pic} S. $$ Let $\pi'\colon V'\to S'$ be an arbitrary rationally connected fibre space, that is, a morphism of projective algebraic varieties, where the base $S'$ and the fibre of general position ${\pi'}^{-1}(s')$, $s'\in S'$, are rationally connected and $\mathop{\rm dim} V = \mathop{\rm dim} V'$. Consider a birational map $\chi\colon V\dashrightarrow V'$ (provided such maps exist). In order to describe the properties of the map $\chi$, of crucial importance is whether $\chi$ is fibre-wise or not, that is, whether this map transforms the fibres of the projection $\pi$ into the fibres of the projection $\pi'$.
It is expected (and confirmed by all known examples, see subsection 0.6) that the answer is positive if the fibre space $\pi$ is ``sufficiently twisted over the base''. Investigating this problem, one can choose various classes of Fano-Mori fibre spaces and various interpretations of the property to be ``twisted over the base''. In the present paper we prove the following fact. {\bf Theorem 1.} {\it Assume that the Fano-Mori fibre space $\pi\colon V\to S$ satisfies the conditions {\rm (i)} every fibre $F_s={\pi}^{-1}(s)$, $s\in S$, is a factorial Fano variety with terminal singularities and the Picard group $\mathop{\rm Pic} F_s = {\mathbb Z} K_{F_s}$, {\rm (ii)} for every effective divisor $D\in |-nK_{F_s}|$ on an arbitrary fibre $F_s$ the pair $(F_s,\frac{1}{n} D)$ is log canonical, and for every mobile linear system $\Sigma_s\subset |-nK_{F_s}|$ the pair $(F_s,\frac{1}{n} D)$ is canonical for a general divisor $D\in\Sigma_s$, {\rm (iii)} for every mobile family $\overline{\cal C}$ of curves on the base $S$, sweeping out $S$, and every curve $\overline{C}\in \overline{\cal C}$, the class of the following algebraic cycle of dimension $\mathop{\rm dim} F$ for any $N\geqslant 1$ $$ -N (K_V\circ \pi^{-1}(\overline{C}))-F $$ (where $F$ is the fibre of the projection $\pi$) is not effective, that is, it is not rationally equivalent to an effective cycle of dimension $\mathop{\rm dim} F$. Then every birational map $\chi\colon V\dashrightarrow V'$ onto the total space of a rationally connected fibre space $V'/S'$ is fibre-wise, that is, there exists a rational dominant map $\beta\colon S\dashrightarrow S'$, such that the following diagram commutes} $$ \begin{array}{rcccl} & V & \stackrel{\chi}{\dashrightarrow} & V' & \\ \pi & \downarrow & & \downarrow & \pi' \\ & S & \stackrel{\beta}{\dashrightarrow} & S'. \end{array} $$ Now we list the standard implications of Theorem 1, after which we discuss how restrictive the conditions (i)-(iii) are.
{\bf Corollary 1.} {\it Under the assumptions of Theorem 1, on the variety $V$ there are no structures of a rationally connected fibre space over a base of dimension higher than $\mathop{\rm dim} S$. In particular, the variety $V$ is non-rational. Every birational self-map of the variety $V$ is fibre-wise and induces a birational self-map of the base $S$, so that there is a natural homomorphism of groups $\rho\colon \mathop{\rm Bir} V\to \mathop{\rm Bir} S$, whose kernel $\mathop{\rm Ker} \rho$ is the group $\mathop{\rm Bir} F_{\eta} = \mathop{\rm Bir} (V/S)$ of birational self-maps of the generic fibre $F_{\eta}$ (over the generic non-closed point $\eta$ of the base $S$), whereas the group $\mathop{\rm Bir} V$ is an extension of the normal subgroup $\mathop{\rm Bir} F_{\eta}$ by the group $\Gamma=\rho(\mathop{\rm Bir} V)\subset \mathop{\rm Bir} S$:} $$ 1\to \mathop{\rm Bir} F_{\eta} \to \mathop{\rm Bir} V\to \Gamma \to 1. $$ How restrictive are the conditions (i)-(iii)? The condition (iii) belongs to the same class of conditions as the well-known $K^2$-condition and the $K$-condition for fibrations over ${\mathbb P}^1$ (see, for instance, \cite[Chapter 4]{Pukh13a}) and the Sarkisov condition for conic bundles (see \cite{S80,S82}). This condition measures the ``degree of twistedness'' of the fibre space $V/S$ over the base $S$. Below we illustrate this meaning of the condition (iii) by particular examples. We will see that this condition is not too restrictive: for a fixed method of constructing the fibre space $V/S$ and a fixed ``ambient'' fibre space $X/S$ the condition (iii) is satisfied by ``almost all'' families of fibre spaces $V/S$. In terms of numerical geometry of the varieties $V$ and $S$ the condition (iii) can be expressed in the following way. Let $$ A^*(V)=\mathop{\bigoplus}\limits^{\mathop{\rm dim} V}_{i=0} A^i(V) $$ be the numerical Chow ring of the variety $V$, graded by codimension.
Set $$ A_i(V)=A^{\mathop{\rm dim} V -i} (V) \otimes {\mathbb R} $$ and denote by the symbol $A^{\rm mov}_i (V)$ the closed cone in $A_i(V)$, generated by the classes of mobile cycles, the families of which sweep out $V$, and by the symbol $A^+_i (V)$ the pseudoeffective cone in $A_i(V)$, generated by the classes of effective cycles. Furthermore, by the symbol $A_{i,\leqslant j}(V)$ we denote the linear subspace in $A_i(V)$, generated by the classes of subvarieties of dimension $i$, the image of which on $S$ has dimension at most $j$. In the real space $A_{i,\leqslant j}(V)$ consider the closed cones $A_{i,\leqslant j}^{\rm mov}(V)$ of mobile and $A_{i,\leqslant j}^+(V)$ of pseudoeffective classes. In a similar way we define the real vector space $A_i(S)$ and the closed cones $A^{\rm mov}_i (S)$ and $A_i^+(S)$. If $\delta=\mathop{\rm dim} F$ is the dimension of the fibre of the projection $\pi$, then the operation of taking the preimage generates a linear map $$ \pi^*A_i(S) \to A_{\delta+i,\leqslant i}(V), $$ whereas $\pi^*(A_i^+(S))\subset A_{\delta+i,\leqslant i}^+(V)$ and $\pi^*(A_i^{\rm mov}(S))\subset A_{\delta+i,\leqslant i}^{\rm mov}(V)$. Now let us consider the linear map $$ \gamma\colon A_1(S)\to A_{\delta,\leqslant 1}(V), $$ defined by the formula $$ z\mapsto -(K_V\cdot \pi^* z). $$ The condition (iii) means that the image of the cone $\gamma(A_1^{\rm mov} (S))$ is contained in the boundary of the pseudoeffective cone $A^+_{\delta,\leqslant 1}(V)$, that is, $$ \gamma (A_1^{\rm mov} (S)) \cap \mathop{\rm Int} A^+_{\delta,\leqslant 1}(V) = \emptyset. $$ More precisely, for any class $z\in A_1^{\rm mov} (S)$ the intersection of the closed ray $$ \{ \gamma(z) - t[F]\,|\, t\in {\mathbb R}_+ \} $$ (where $[F]\in \mathop{\rm Int} A^+_{\delta,\leqslant 1}(V)$ is the class of the fibre of the projection $\pi$) with the cone $A^+_{\delta,\leqslant 1}(V)$ either is empty or consists of just one point $\gamma(z)$.
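To see what this cone condition rules out, the following heuristic computation (ours, not taken from the text) examines the trivial fibre space $V=F\times S$, with $\pi$ the projection onto $S$:

```latex
% Heuristic illustration, not from the paper: take V = F x S with
% pi = pr_S, so that K_V = K_F x S + F x K_S and
% pi^{-1}(\overline{C}) = F x \overline{C}.
% Then, as cycles of dimension \delta = dim F,
\[
-N\bigl(K_V\circ\pi^{-1}(\overline{C})\bigr) - F
  \;=\; N\bigl((-K_F)\times\overline{C}\bigr)
  \;+\;\bigl(-N(K_S\cdot\overline{C})-1\bigr)\,F.
\]
% If (K_S . \overline{C}) < 0, e.g. for \overline{C} a free rational
% curve on the rationally connected base S, then for N sufficiently
% large both summands are effective (recall -K_F is ample on the Fano
% fibre), so the class is effective and the condition (iii) fails.
```

This failure is consistent with the discussion below: a direct product $F\times S$ carries a second rationally connected fibre space structure, namely the projection onto $F$, so the condition (iii) could not have held for it.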
One may expect that the condition (iii) is close to a precise one (``if and only if''), that is, that its violation (or an essential deviation from this condition) implies the existence of another structure of a Fano-Mori fibre space on the variety $V$. The following remark gives an obvious way to check the condition (iii). {\bf Remark 0.1.} Assume that on the variety $V$ there is a numerically effective divisorial class $L$ such that $(L^{\delta}\cdot F)>0$ and the linear function $(\cdot L^{\delta})$ is non-positive on the cone $\gamma(A_1^{\rm mov} (S))$, that is to say, for any mobile curve $\overline{C}$ on $S$ the inequality \begin{equation}\label{10.06.2014.1} \left(L^{\delta}\cdot K_V\cdot \pi^{-1}(\overline{C})\right)\geqslant 0 \end{equation} holds. Then the condition (iii) is obviously satisfied. The conditions (i) and (ii), however, are much more restrictive. They mean that {\it all} fibres of the projection $\pi$ are varieties of sufficiently general position in their family. This implies that the dimension of the base for a fixed family of fibres is bounded from above (by a constant depending on the particular family, to which the fibres belong). In the examples considered in the present paper, for a sufficiently high dimension of the fibre $\delta=\mathop{\rm dim} F$ the dimension of the base is bounded from above by a number of order $\frac12\delta^2$. Recall that up to now not a single example was known of a fibration into higher-dimensional Fano varieties over a base of dimension two and higher with just one structure of a rationally connected fibre space (for a brief historical survey, see subsection 0.5). {\bf 0.2. Fibrations into double spaces of index one.} By the symbol ${\mathbb P}$ we denote the projective space ${\mathbb P}^M$, $M\geqslant 5$. Let ${\cal W}={\mathbb P}(H^0({\mathbb P},{\cal O}_{\mathbb P}(2M)))$ be the space of hypersurfaces of degree $2M$ in ${\mathbb P}$. The following fact is true.
{\bf Theorem 2.} {\it There exists a Zariski open subset ${\cal W}_{\rm reg}\subset {\cal W}$, such that for any hypersurface $W\in {\cal W}_{\rm reg}$ the double cover $\sigma\colon F\to {\mathbb P}$, branched over $W$, satisfies the conditions (i) and (ii) of Theorem 1, and moreover, the estimate} $$ \mathop{\rm codim} (({\cal W}\setminus {\cal W}_{\rm reg})\subset {\cal W})\geqslant \frac{(M-4)(M-1)}{2} $$ {\it holds.} An explicit description of the set ${\cal W}_{\rm reg}$ and a proof of Theorem 2 are given in \S 2. Fix a number $M\geqslant 5$ and a non-singular rationally connected variety $S$ of dimension $\mathop{\rm dim} S<\frac12 (M-4)(M-1)$. Let ${\cal L}$ be a locally free sheaf of rank $M+1$ on $S$ and $X={\mathbb P}({\cal L})={\bf Proj}\, \mathop{\oplus}\limits_{i=0}^{\infty}{\cal L}^{\otimes i}$ the corresponding ${\mathbb P}^M$-bundle. We may assume that ${\cal L}$ is generated by its sections, so that the sheaf ${\cal O}_{{\mathbb P}({\cal L})}(1)$ is also generated by its sections. Let $L\in \mathop{\rm Pic} X$ be the class of that sheaf, so that $$ \mathop{\rm Pic} X = {\mathbb Z} L\oplus \pi_X^* \mathop{\rm Pic} S, $$ where $\pi_X\colon X\to S$ is the natural projection. Take a general divisor $U\in |2(ML+\pi^*_X R)|$, where $R\in \mathop{\rm Pic} S$ is some class. If this linear system is sufficiently mobile, then by the assumption about the dimension of the base $S$ and Theorem 2 we may assume that for every point $s\in S$ the hypersurface $U_s= U\cap \pi^{-1}_X(s)\in {\cal W}_{\rm reg}$, and for that reason the double space branched over $U_s$ satisfies the conditions (i) and (ii) of Theorem 1. Let $\sigma\colon V\to X$ be the double cover branched over $U$. Set $\pi=\pi_X\circ\sigma\colon V\to S$, so that $V$ is a fibration into Fano double spaces of index one over $S$. Recall that the divisor $U\in |2(ML+\pi^*_X R)|$ is assumed to be sufficiently general. {\bf Theorem 3.} {\it Assume that the divisorial class $(K_S+R)$ is pseudoeffective.
Then for the fibre space $\pi\colon V\to S$ the claims of Theorem 1 and Corollary 1 hold. In particular, $$ \mathop{\rm Bir} V = \mathop{\rm Aut} V = {\mathbb Z}/2{\mathbb Z} $$ is the cyclic group of order 2.} {\bf Proof.} Since the conditions (i) and (ii) of Theorem 1 are satisfied by construction of the variety $V$, it remains to check the condition (iii). Let us use Remark 0.1. Elementary computations show that the inequality (\ref{10.06.2014.1}) up to a positive factor is the inequality $$ ((K_S+R)\cdot \overline{C})\geqslant 0. $$ Since the curve $\overline{C}$ belongs to a mobile family, sweeping out the base $S$, the last inequality holds if the class $(K_S+R)$ is pseudoeffective. Q.E.D. for the theorem. {\bf Example 0.2.} Take $S = {\mathbb P}^m$, where $m<\frac12 (M-4)(M-1)$, $X={\mathbb P}^M\times {\mathbb P}^m$ and $W_X$ is a generic hypersurface of bidegree $(2M,2l)$, where $l\geqslant m+1$. Then for the double cover $\sigma\colon V\to X$, branched over $W_X$, the claims of Theorem 1 and Corollary 1 are true. Note that for $l\leqslant m$ on the double cover $V$ there is another structure of a Fano fibre space: it is given by the projection $\pi_1\colon V\to {\mathbb P}^M$. Therefore, the condition (iii) of Theorem 1 and its realization in Theorem 3 turn out to be precise. {\bf 0.3. Fibrations into Fano hypersurfaces of index one.} The symbol ${\mathbb P}$ still stands for the projective space ${\mathbb P}^M$, $M\geqslant 10$. Fix $M$. Let ${\cal F}={\mathbb P}(H^0({\mathbb P},{\cal O}_{\mathbb P}(M)))$ be the space of hypersurfaces of degree $M$ in ${\mathbb P}$. The following fact is true. 
{\bf Theorem 4.} {\it There is a Zariski open subset ${\cal F}_{\rm reg}\subset {\cal F}$, such that every hypersurface $F\in {\cal F}_{\rm reg}$ satisfies the conditions (i) and (ii) of Theorem 1, and the following estimate holds:} \begin{equation}\label{14.06.2014.1} \mathop{\rm codim} (({\cal F}\setminus {\cal F}_{\rm reg})\subset {\cal F})\geqslant \frac{(M-7)(M-6)}{2}-5. \end{equation} An explicit description of the subset ${\cal F}_{\rm reg}$ and a proof of Theorem 4 are given in \S 2-3. Fix a non-singular rationally connected variety $S$ of dimension $\mathop{\rm dim} S<\frac12 (M-7)(M-6)-5$. As in subsection 0.2, let ${\cal L}$ be a locally free sheaf of rank $M+1$ on $S$ and $X={\mathbb P}({\cal L})={\bf Proj}\, \mathop{\oplus}\limits_{i=0}^{\infty}{\cal L}^{\otimes i}$ the corresponding ${\mathbb P}^M$-bundle in the sense of Grothendieck; we assume that ${\cal L}$ is generated by global sections. Let $\pi_X\colon X\to S$ be the projection, $L\in \mathop{\rm Pic} X$ the class of the sheaf ${\cal O}_{{\mathbb P}({\cal L})}(1)$. Consider a general divisor $$ V\in |ML+\pi^*_X R|, $$ where $R\in \mathop{\rm Pic} S$ is some divisor on the base. By the assumption about the dimension of the base made above and Theorem 4 we may assume that the Fano fibre space $\pi\colon V\to S$, where $\pi=\pi_X|_V$, satisfies the conditions (i) and (ii) of Theorem 1. {\bf Theorem 5.} {\it Assume that the divisorial class $(K_S+\left(1-\frac{1}{M}\right)R)$ is pseudoeffective. Then for the Fano fibre space $\pi\colon V\to S$ the claims of Theorem 1 and Corollary 1 are true. In particular, the group $$ \mathop{\rm Bir} V = \mathop{\rm Aut} V $$ is trivial.} {\bf Proof.} The conditions (i) and (ii) of Theorem 1 are satisfied by the generality of the divisor $V$. The inequality (\ref{10.06.2014.1}) up to a positive factor is the same as the inequality $$ ((M K_S+(M-1)R)\cdot \overline{C})\geqslant 0. $$ Therefore, by Remark 0.1, the condition (iii) of Theorem 1 also holds. Q.E.D.
for the theorem. {\bf Example 0.3.} Take $S = {\mathbb P}^m$, where $m\leqslant\frac12 (M-7)(M-6)-6$, $X={\mathbb P}^M\times {\mathbb P}^m$ and $V\subset X$ is a sufficiently general hypersurface of bidegree $(M,l)$, where $l$ satisfies the inequality $$ l\geqslant\frac{M}{M-1} (m+1). $$ Then the Fano fibre space $V/{\mathbb P}^m$ satisfies all assumptions of Theorem 1 and therefore for this fibre space the claim of Theorem 1 and that of Corollary 1 are true. Note that for $l\leqslant m$ on the variety $V$ there is another structure of a Fano fibre space, given by the projection $V\to {\mathbb P}^M$. Note also that if we fix the dimension $m$ of the base, then for $M\geqslant m$ the condition of Theorem 5 is close to the optimal one: it is satisfied for $l\geqslant m+2$, so that the only value of the integral parameter $l$, for which the problem of birational rigidity of the fibre space $V/{\mathbb P}^m$ remains open, is $l=m+1$. In that case the projection $V\to {\mathbb P}^M$ is a $K$-trivial fibre space. {\bf 0.4. The structure of the paper.} The present paper is organized in the following way. In \S 1 we prove Theorem 1. After that, in \S 2 we deal with the conditions of general position, which should be satisfied for every fibre of the fibre space $V/S$ in order for the conditions (i) and (ii) of Theorem 1 to hold. The conditions of general position (regularity) are given for Fano double spaces of index one and Fano hypersurfaces of index one. This makes it possible to define the sets ${\cal W}_{\rm reg}$ and ${\cal F}_{\rm reg}$, prove Theorem 2, and carry out the preparatory work for the proof of Theorem 4, the main technical fact of the present paper, which in an obvious way implies Theorem 5, geometrically the most impressive result of this paper. In \S 3 we complete the proof of Theorem 4; more precisely, we show that the condition (ii) of Theorem 1 is satisfied for a regular Fano hypersurface $F\in {\cal F}_{\rm reg}$.
The proof makes use of a combination of the technique of hypertangent divisors and the inversion of adjunction. Note that the approach of the present paper corresponds to the linear method of proving birational rigidity, see \cite[Chapter 7]{Pukh13a}; the technique of the quadratic method (in the first place, the technique of counting multiplicities) is not used. The assumption in Theorem 1 that the base $S$ of the fibre space $V/S$ is non-singular seems to be unnecessary and could be replaced by the condition that the singularities are at most terminal and (${\mathbb Q}$-)factorial. {\bf 0.5. Historical remarks and acknowledgements.} The starting point of the study of birational geometry of rationally connected fibre spaces seems to be the use of de Jonquières transformations (see, for instance, \cite{Hud}). In modern algebraic geometry, objects of this type started to be systematically investigated in the works of V.A.Iskovskikh and M.Kh.Gizatullin on pencils of rational curves \cite{I67,I70,Giz67} over non-closed fields, which followed the investigation of the ``absolute'' case in the papers of Yu.I.Manin \cite{M66,M67,M72}. We also point out the paper of I.V.Dolgachev \cite{Dol66}, which started (in the modern period) the study of $K$-trivial fibrations. After the breakthrough in three-dimensional birational geometry made in the classical paper of V.A.Iskovskikh and Yu.I.Manin on the three-dimensional quartic \cite{IM}, the problems of ``relative'' three-dimensional birational geometry were the next to be investigated; that is, the task was to describe birational maps of three-dimensional algebraic varieties fibred into conics over a rational surface or into del Pezzo surfaces over ${\mathbb P}^1$. The famous Sarkisov theorem gave an almost complete solution of the question of birational rigidity for conic bundles \cite{S80,S82}.
A similar question for pencils of del Pezzo surfaces remained completely open until 1996 \cite{Pukh98a}; see the introduction to the latter paper for a discussion of the reasons for those difficulties (the test class construction turned out to be unsuitable for studying varieties of that type). The method of proving birational rigidity realized in \cite{Pukh98a} generalized well to arbitrary dimension, to varieties fibred into Fano varieties over ${\mathbb P}^1$. In a long series of papers \cite{Pukh00d,Sob01,Sob02,Pukh04a,Pukh04b,Pukh06a,Pukh06b,Pukh09a} birational rigidity was shown for many classes of Fano fibre spaces over ${\mathbb P}^1$. At the same time, the birational geometry of the remaining families of three-dimensional varieties with a pencil of del Pezzo surfaces of degree 1 and 2 was investigated \cite{Grin00,Grin03a,Grin03b,Grin04}; the results obtained in that direction were nearly exhaustive. However, the base of the fibre spaces under investigation remained one-dimensional, and even Fano fibrations over surfaces seemed to be out of reach. The only exception in that series of results was the theorem about Fano direct products \cite{Pukh05} and the papers about direct products that followed \cite{Pukh08a,Ch08}. In those papers the Fano fibre spaces under consideration had both base and fibre of arbitrary dimension. However, the fibre spaces themselves were very special (direct products) and could not pretend to be {\it typical} Fano fibre spaces. The present paper gives, at long last, numerous examples of typical birationally rigid Fano fibre spaces with base and fibre of high dimension (for a fixed dimension of the fibre $\delta$ the dimension of the base is bounded by a constant of order $\sim \frac12\delta^2$). Theorem 1 can be viewed as a realization of the well known principle: ``sufficient twistedness'' of a fibre space over the base implies birational rigidity.
This principle was confirmed many times in the class of fibrations over ${\mathbb P}^1$; now it is extended to the class of fibre spaces over a base of arbitrary dimension. The main object of study in this paper is a fibre space into Fano hypersurfaces of index one, so that it is a follow-up to the paper \cite{Pukh00d}. From the technical viewpoint, the predecessors of this paper are \cite{Pukh08a,Pukh09b}, where the {\it linear} method of proving birational rigidity was developed. It is possible, however, that the quadratic techniques could be applied to the class of Fano fibre spaces over a base of arbitrary dimension as well. Various technical points related to the arguments of the present paper were discussed by the author in his talks given in 2009-2014 at Steklov Institute of Mathematics. The author is grateful to the members of the divisions of Algebraic Geometry and Algebra and Number Theory for their interest in his work. The author also thanks his colleagues in the Algebraic Geometry research group at the University of Liverpool for the creative atmosphere and general support. \section{Birationally rigid fibre spaces} In this section we prove Theorem 1. We do it in three steps: first, assuming that the birational map $\chi\colon V\dashrightarrow V'$ is not fibre-wise, we prove the existence of a maximal singularity of the map $\chi$, covering the base $S'$ (subsection 1.1). After that, we construct a sequence of blow ups of the base $S^+\to S$ such that the image of every maximal singularity on $S^+$ is a prime divisor (subsection 1.2). Finally, using a very mobile family of curves contracted by the projection $\pi'$, we obtain a contradiction with the condition (iii) of Theorem 1 (subsection 1.3). This implies that the map $\chi\colon V\dashrightarrow V'$ is fibre-wise, which completes the proof of Theorem 1. {\bf 1.1.
Maximal singularities of birational maps.} In the notation of Theorem 1, fix a birational map $\chi\colon V\dashrightarrow V'$ onto the total space $V'$ of a rationally connected fibre space $\pi'\colon V'\to S'$. Consider any very ample linear system $\overline{\Sigma'}$ on $S'$. Let $\Sigma'=(\pi')^*\overline{\Sigma'}$ be its pullback to $V'$, so that the divisors $D'\in\Sigma'$ are composed of fibres of the projection $\pi'$, and for that reason for any curve $C\subset V'$ contracted by the projection $\pi'$ we have $(D'\cdot C)=0$. The linear system $\Sigma'$ is obviously mobile. Let $$ \Sigma=(\chi^{-1})_*\Sigma'\subset |-nK_V+\pi^*Y| $$ be its strict transform on $V$, where $n\in{\mathbb Z}_+$. {\bf Lemma 1.1.} {\it For any mobile family of curves $\overline{C}\in\overline{\cal C}$ on $S$, sweeping out $S$, the inequality $(\overline{C}\cdot Y)\geqslant 0$ holds, that is to say, the numerical class of the divisor $Y$ is non-negative on the cone} $A^{\rm mov}_1(S)$. {\bf Proof.} This is almost obvious. For a general divisor $D\in\Sigma$ the cycle $(D\circ \pi^{-1}(\overline{C}))$ is effective. Its class is $-n(K_V\circ \pi^{-1}(\overline{C})) +(Y\cdot \overline{C})F$, so that the claim of the lemma follows from the condition (iii). Q.E.D. Obviously, the map $\chi$ is fibre-wise if and only if $n=0$. Therefore, if $n=0$, then the claim of Theorem 1 holds. So let us assume that $n\geqslant 1$ and show that this assumption leads to a contradiction. The linear system $\Sigma$ is mobile. Let us resolve the singularities of the map $\chi$: let $$ \varphi\colon\widetilde{V}\to V $$ be a birational morphism (a composition of blow ups with non-singular centres), where $\widetilde{V}$ is non-singular and the composition $\chi\circ\varphi\colon\widetilde{V}\dashrightarrow V'$ is regular.
Furthermore, consider the set ${\cal E}$ of prime divisors on $\widetilde{V}$, satisfying the following conditions: \begin{itemize} \item every divisor $E\in{\cal E}$ is $\varphi$-exceptional, \item for every $E\in{\cal E}$ the closed set $\chi\circ\varphi(E) \subset V'$ is a prime divisor on $V'$, \item the set $\chi\circ\varphi(E)$ for every $E\in{\cal E}$ covers the base: $\pi'[\chi\circ\varphi(E)]=S'$. \end{itemize} Setting $\widetilde{K}=K_{\widetilde{V}}$, write down $$ \widetilde{\Sigma}\subset|-n\widetilde{K}+(\pi^*Y- \sum_{E\in{\cal E}}\varepsilon(E)E)+\Xi|, $$ where $\widetilde{\Sigma}$, as usual, is the strict transform of the mobile linear system $\Sigma$ on $\widetilde{V}$, $\varepsilon(E)\in{\mathbb Z}$ is some coefficient and $\Xi$ stands for a linear combination of $\varphi$-exceptional divisors which do not belong to the set ${\cal E}$. {\bf Definition 1.1.} An exceptional divisor $E\in{\cal E}$ is called {\it a maximal singularity} of the map $\chi$ if $\varepsilon(E)>0$. Obviously, a maximal singularity satisfies {\it the Noether-Fano inequality} $$ \mathop{\rm ord}\nolimits_E\varphi^*\Sigma>na(E), $$ where $a(E)=a(E,V)$ is the discrepancy of the divisor $E$ with respect to $V$. In this paper we somewhat modify the standard concept of a maximal singularity, requiring in addition that it is realized by a divisor on $V'$ covering the base. Let ${\cal M}\subset{\cal E}$ be the set of all maximal singularities. {\bf Proposition 1.1.} {\it Maximal singularities do exist:} ${\cal M}\neq\emptyset$. {\bf Proof.} Assume the converse, that is, for any $E\in{\cal E}$ the inequality $\varepsilon(E)\leqslant 0$ holds.
Let ${\cal C}'$ be a family of rational curves on $V'$, satisfying the following conditions: \begin{itemize} \item the curves $C'\in{\cal C}'$ are contracted by the projection $\pi'$, \item the curves $C'\in{\cal C}'$ sweep out a dense open subset in $V'$, \item the curves $C'\in{\cal C}'$ do not intersect the set of points where the rational map $(\chi\circ\varphi)^{-1}\colon V'\dashrightarrow\widetilde{V}$ is not well defined. \end{itemize} Apart from that, we assume that a general curve $C'\in{\cal C}'$ intersects every divisor $\chi\circ\varphi(E)$, $E\in{\cal E}$, transversally at points of general position. Such a family of curves we will call {\it very mobile}. Obviously, very mobile families of rational curves do exist. Let $\widetilde{C}\cong C'$ be the inverse image of the curve $C'\in{\cal C}'$ on $\widetilde{V}$. Since the linear system ${\Sigma}'$ is pulled back from the base, for a divisor $\widetilde{D}\in\widetilde{\Sigma}$ we have the equality $(\widetilde{C}\cdot\widetilde{D})=0$. On the other hand, $(\widetilde{C}\cdot\widetilde{K})=(C'\cdot K_{V'})<0$ and $$ (\widetilde{C}\cdot(\pi^*Y-\sum_{E\in{\cal E}}\varepsilon(E)E))\geqslant 0, $$ since by the condition (iii) of our theorem $(\widetilde{C} \cdot\pi^*Y)\geqslant 0$ and by assumption $-\varepsilon(E)\in{\mathbb Z}_+$ for all $E\in{\cal E}$. Finally, the divisor $\Xi$ (which is not necessarily effective) is a linear combination of such $\varphi$-exceptional divisors $R\subset\widetilde{V}$, that $\pi'[\chi\circ\varphi(R)]$ is a proper closed subset of the base $S'$. So we have the equality $(\widetilde{C}\cdot\Xi)=0$. This implies that $$ (\widetilde{C}\cdot\widetilde{D})\geqslant n>0, $$ which is a contradiction. Therefore, ${\cal M}\neq\emptyset$. Q.E.D. for the proposition. 
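For the reader's convenience, the intersection-number bookkeeping that concludes the proof of Proposition 1.1 can be assembled into a single chain (this merely restates the estimates above): since $\widetilde{D}\in\widetilde{\Sigma}$ lies in the class $-n\widetilde{K}+(\pi^*Y-\sum_{E\in{\cal E}}\varepsilon(E)E)+\Xi$, we get
$$
0=(\widetilde{C}\cdot\widetilde{D})=-n(\widetilde{C}\cdot\widetilde{K})
+\Big(\widetilde{C}\cdot\big(\pi^*Y-\sum_{E\in{\cal E}}\varepsilon(E)E\big)\Big)
+(\widetilde{C}\cdot\Xi)\geqslant -n(C'\cdot K_{V'})\geqslant n>0,
$$
where the last two inequalities use $(\widetilde{C}\cdot\widetilde{K})=(C'\cdot K_{V'})\leqslant -1$ and the non-negativity of the middle term; this is the contradiction obtained above.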
{\bf Proposition 1.2.} {\it For any maximal singularity $E\in{\cal M}$ its centre $$ \mathop{\rm centre}(E,V)=\varphi(E) $$ on $V$ does not cover the base: $\pi(\mathop{\rm centre}(E,V))\subset S$ is a proper closed subset of the variety $S$.} {\bf Proof.} Assume the converse: the centre of some maximal singularity $E\in{\cal M}$ covers the base, $\pi(\mathop{\rm centre}(E,V))=S$. Let $F=\pi^{-1}(s)$, $s\in S$, be a fibre of general position. By assumption the strict transform $\widetilde{F}$ of the fibre $F$ on $\widetilde{V}$ has a non-empty intersection with $E$, and for that reason every irreducible component of the intersection $\widetilde{F}\cap E$ is a maximal singularity of the mobile linear system $\Sigma_F=\Sigma|_F\subset |-nK_F|$. However, by the condition (ii) of Theorem 1 on the variety $F$ there are no mobile linear systems with a maximal singularity. This contradiction proves the proposition. {\bf 1.2. The birational modification of the base of the fibre space $V/S$.} Now let us construct a sequence of blow ups of the base, the composition of which is a birational morphism $\sigma_S\colon S^+\to S$, and the corresponding sequence of blow ups of the variety $V$, the composition of which is a birational morphism $\sigma\colon V^+\to V$, where $V^+=V\mathop{\times}\nolimits_S S^+$, so that the following diagram commutes: $$ \begin{array}{rcccl} & V^+ &\stackrel{\sigma}{\to} & V &\\ \pi_+ & \downarrow & & \downarrow & \pi\\ & S^+ & \stackrel{\sigma_S}{\to} & S. &\\ \end{array} $$ The birational morphism $\sigma_S$ is constructed inductively as a composition of elementary blow ups $\overline{\sigma}_i\colon S_i\to S_{i-1}$, $i=1,\dots$, where $S_0=S$. Assume that the $\overline{\sigma}_i$ are already constructed for $i\leqslant k$ (if $k=0$, then we start with the base $S$). Set $V_k=V\mathop{\times}\nolimits_SS_k$ and let $\pi_k\colon V_k\to S_k$ be the projection.
Consider the irreducible closed subsets \begin{equation}\label{20.05.2014.1} \pi_k(\mathop{\rm centre}(E,V_k))\subset S_k, \end{equation} where $E$ runs through the set ${\cal M}$. By Proposition 1.2, all these subsets are proper subsets of the base $S_k$. If all of them are prime divisors on $S_k$, we stop the procedure: set $S^+=S_k$ and $V^+=V_k$. Otherwise, for $\overline{\sigma}_{k+1}$ we take the blow up of any set (\ref{20.05.2014.1}) that is inclusion-minimal among the sets (\ref{20.05.2014.1}) for $E\in{\cal M}$. It is easy to check that the sequence of blow ups $\overline{\sigma}_i$ terminates. Indeed, set $$ \alpha_k=\sum_{E\in{\cal M}}a(E,V_k). $$ Since the birational morphism $\sigma_{k+1}\colon V_{k+1}\to V_k$ is the blow up of a closed irreducible subset containing the centre of one of the divisors $E\in{\cal M}$ on $V_k$, we get the inequality $\alpha_{k+1}<\alpha_k$. The numbers $\alpha_i$ are by construction non-negative, which implies that the sequence of blow ups $\overline{\sigma}_i$ is finite. Therefore, for any maximal singularity $E\in{\cal M}$ the closed subset $\pi_+(\mathop{\rm centre}(E,V^+))\subset S^+$ is a prime divisor. {\bf 1.3. The mobile family of curves.} Again let us consider a very mobile family of curves ${\cal C}'$ on $V'$ and its strict transform ${\cal C}^+$ on $V^+$. Let $C^+\in{\cal C}^+$ be a general curve and $\overline{C^+}=\pi_+(C^+)$ the corresponding curve of the family $\overline{\cal C^+}$ on $S^+$. Furthermore, let $\Sigma^+$ be the strict transform of the linear system $\Sigma$ on $V^+$. For some class of divisors $Y^+$ on $S^+$ we have $$ \Sigma^+\subset |-nK^++\pi^*_+Y^+|, $$ where for simplicity of notation $K^+=K_{V^+}$. Note that even if $Y$ is an effective or mobile class on $S$, the class $Y^+$ is not its strict transform on $S^+$; that is to say, here we deviate from our principle of notation. The following observation is crucial. {\bf Proposition 1.3.} {\it The inequality $$ (\overline{C^+}\cdot Y^+)<0 $$ holds.
In particular, the class $Y^+$ is not pseudoeffective.} {\bf Proof.} Assume the converse: $$ (C^+\cdot\pi^*_+Y^+)=(\overline{C^+}\cdot Y^+)\geqslant 0. $$ We may assume that the resolution of singularities $\varphi$ of the map $\chi$ factors through the sequence of blow ups $\sigma\colon V^+\to V$, so that for the strict transform $\widetilde{\Sigma}$ of the linear system $\Sigma$ on $\widetilde{V}$ we have $$ \widetilde{\Sigma}\subset |-n\widetilde{K}+(\pi^*_+Y^+ -\sum_{E\in{\cal E}}\widetilde{\varepsilon}(E)E)+\widetilde{\Xi}|, $$ where $\widetilde{K}=K_{\widetilde{V}}$, $\widetilde{\varepsilon}(E)\in{\mathbb Z}$ and $\widetilde{\Xi}$ is a linear combination of exceptional divisors of the birational morphism $\widetilde{V}\to V^+$ which are not in the set ${\cal E}$. For the strict transform $\widetilde{C}\in\widetilde{\cal C}$ of the curve $C^+\in{\cal C}^+$ and the divisor $\widetilde{D}\in\widetilde{\Sigma}$ we have, as in the proof of Proposition 1.1, the equality $(\widetilde{C}\cdot\widetilde{D})=0$. By the construction of the divisor $\widetilde{\Xi}$ we have $(\widetilde{C}\cdot\widetilde{\Xi})=0$. Finally, $(\widetilde{C}\cdot\widetilde{K})<0$, whence we conclude that $$ (\widetilde{C}\cdot(\pi^*_+Y^+-\sum_{E\in{\cal E}} \widetilde{\varepsilon}(E)E))< 0. $$ By our assumption, for at least one divisor $E\in{\cal E}$ we have the inequality $\widetilde{\varepsilon}(E)> 0$. This divisor is automatically a maximal singularity, $E\in{\cal M}$. By our construction, however, we can say more: $E$ is a maximal singularity for the mobile linear system $\Sigma^+$ as well, that is, the pair $\left(V^+,\frac{1}{n}\Sigma^+\right)$ is not canonical and $E$ realizes a non-canonical singularity of that pair. However, $\pi_+(\mathop{\rm centre}(E,V^+))=\overline{E}\subset S^+$ is a prime divisor, so that $\pi^{-1}_+(\overline{E})\subset V^+$ is also a prime divisor.
The linear system $\Sigma^+$ has no fixed components; therefore for a general point $s\in\overline{E}$ and the corresponding fibre $F=\pi^{-1}_+(s)\subset V^+$ we have: the linear system $\Sigma_F=\Sigma^+|_F\subset |-nK_F|$ is non-empty and for $D_F\in\Sigma_F$ the pair $\left(F,\frac{1}{n}D_F\right)$ is not log canonical by the inversion of adjunction (see \cite{Kol93}). This contradicts the condition (ii) of our theorem. Proposition 1.3 is shown. Q.E.D. Finally, let us complete the proof of Theorem 1. Let us write down explicitly the divisor $\pi^*_+Y^+$ in terms of the partial resolution $\sigma$. Let ${\cal E}^+$ be the set of all exceptional divisors of the morphism $\sigma$ whose image on $V'$ is a divisor covering the base $S'$. Therefore, ${\cal E}^+$ can be identified with a subset of the set ${\cal E}$. In the course of the proof of Proposition 1.3 we established that $$ {\cal M}^+={\cal M}\cap{\cal E}^+\neq\emptyset. $$ Now we write $$ \pi^*_+Y^+=\pi^*Y-\sum_{E\in{\cal E}^+}\varepsilon_+(E)E+\Xi^+. $$ Besides, we have $$ K^+=\sigma^*K_V+\sum_{E\in{\cal E}^+}a_+(E)E+\Xi_K, $$ where all coefficients $a_+(E)$ are positive and the divisor $\Xi_K$ is effective, pulled back from the base $S^+$, and the image of each of its irreducible components on $V'$ has codimension at least 2, so that the general curve $C^+\in{\cal C}^+$ does not intersect the support of the divisor $\Xi_K$. Let $C\in{\cal C}$ be the image of $C^+$ on the original variety $V$ and $\overline{C}=\pi(C)\in\overline{\cal C}$ the projection of the curve $C$ to the base $S$. For a general divisor $D\in\Sigma$ and its strict transform $D^+\in\Sigma^+$ on $V^+$ the scheme-theoretic intersection $(D^+\circ\pi^{-1}_+(\overline{C^+}))$ is well defined; it is an effective cycle of dimension $\delta=\mathop{\rm dim}F$ on $V^+$.
For its numerical class we have the presentation \begin{equation}\label{16.05.2014.1} \begin{array}{c} \displaystyle (D^+\circ\pi^{-1}_+(\overline{C^+}))\sim-n(\sigma^*K_V\circ\pi^{-1}_+ (\overline{C^+}))+ \\ \\ \displaystyle +\left(\left[\sum_{E\in{\cal E}^+}(-na_+(E)-\varepsilon_+(E))E\right]\cdot C^+\right)F. \end{array} \end{equation} Since $(\overline{C}\cdot Y)\geqslant 0$ and $(C^+\cdot\pi^*_+Y^+)< 0$, we have $$ \left(-\left[\sum_{E\in{\cal E}^+}\varepsilon_+(E)E\right] \cdot C^+\right)< 0, $$ so that in the formula (\ref{16.05.2014.1}) the intersection of the divisor in square brackets with $C^+$ is negative. Therefore, $$ \sigma_*(D^+\circ\pi^{-1}_+(\overline{C^+}))\sim-n(K_V\circ\pi^{-1} (\overline{C}))+bF, $$ where $b<0$. Since on the left-hand side we have an effective cycle of dimension $\delta$ on $V$, we obtain a contradiction with the condition (iii) of our theorem. The proof of Theorem 1 is complete. Q.E.D. \section{Varieties of general position} In this section we state the explicit local conditions of general position for the double spaces (subsection 2.1) and hypersurfaces (subsection 2.2), defining the sets ${\cal W}_{\rm reg}\subset {\cal W}$ and ${\cal F}_{\rm reg}\subset {\cal F}$. In subsection 2.1 we prove Theorem 2. In subsections 2.3-2.5 we prove a part of the claim of Theorem 4, namely the estimate for the codimension of the complement ${\cal F}\setminus {\cal F}_{\rm reg}$; in subsection 2.5 we also consider some immediate geometric implications of the conditions of general position. {\bf 2.1. The double spaces of general position.} The open subset ${\cal W}_{\rm reg}\subset {\cal W}$ of hypersurfaces of degree $2M$ in ${\mathbb P}={\mathbb P}^M$ is defined by local conditions, which a hypersurface $W\in {\cal W}_{\rm reg}$ must satisfy at {\it every} point $o\in W$. These conditions depend on whether the point $o\in W$ is non-singular or singular. First, let us consider the condition of general position for a {\bf non-singular point} $o\in W$.
Let $(z_1,\dots,z_M)$ be a system of affine coordinates with the origin at the point $o$ and $$ w=q_1+q_2+\dots +q_{2M} $$ the affine equation of the branch hypersurface $W$, where the polynomials $q_i(z_*)$ are homogeneous of degree $i=1,\dots,2M$. At a non-singular point $o\in W$ (that is, $q_1\not\equiv 0$) the hypersurface $W$ must satisfy the condition (W1) the rank of the quadratic form $q_2|_{\{q_1=0\}}$ is at least 2. {\bf Proposition 2.1.} {\it Violation of the condition (W1) imposes on the coefficients of the quadratic form $q_2$ (with the linear form $q_1$ fixed) $$ \frac{(M-2)(M-1)}{2} $$ independent conditions.} {\bf Proof} is obvious. Q.E.D. Now let us consider the condition of general position for a {\bf singular point} $o\in W$. Let $$ w=q_2+q_3+\dots +q_{2M} $$ be the affine equation of the branch hypersurface $W$ with respect to a system of affine coordinates $(z_1,\dots,z_M)$ with the origin at the point $o$. At a singular point $o$ the hypersurface $W$ must satisfy the condition (W2) the rank of the quadratic form $q_2$ is at least 4. {\bf Proposition 2.2.} {\it Violation of the condition (W2) imposes on the coefficients of the quadratic form $q_2$ $$ \frac{(M-2)(M-1)}{2} $$ independent conditions.} {\bf Proof} is obvious. Q.E.D. Now we define the subset ${\cal W}_{\rm reg}\subset {\cal W}$ by requiring that $W\in {\cal W}_{\rm reg}$ satisfies the condition (W1) at every non-singular and the condition (W2) at every singular point. Obviously, ${\cal W}_{\rm reg}\subset {\cal W}$ is a Zariski open subset (possibly empty). {\bf Proposition 2.3.} {\it The following estimate holds:} $$ \mathop{\rm codim}(({\cal W}\setminus {\cal W}_{\rm reg}) \subset {\cal W})\geqslant \frac{(M-4)(M-1)}{2}.
$$ {\bf Proof} is obtained by the standard arguments, see \cite[Chapter 3]{Pukh13a}: one considers the incidence subvariety $$ {\cal I}=\{(o,W)\,|\, o\in W\}\subset {\mathbb P}\times{\cal W}; $$ for a fixed point $o\in {\mathbb P}$ the codimension of the set ${\cal W}_{\rm non-reg}(o)$ of hypersurfaces containing that point and non-regular at it is given by Propositions 2.1 and 2.2 (in the singular case $M$ more independent conditions are added, since $q_1\equiv 0$). After that one computes the dimension of the set $$ {\cal I}_{\rm non-reg}=\mathop{\bigcup}\limits_{o\in{\mathbb P}} \{o\}\times {\cal W}_{\rm non-reg}(o) $$ and considers the projection onto ${\cal W}$. This completes the proof. Q.E.D. Obviously, for any hypersurface $W\in {\cal W}_{\rm reg}$ the double cover $F\to {\mathbb P}$ branched over $W$ is an irreducible algebraic variety. Moreover, by the condition (W2) the variety $F$ belongs to the class of varieties with quadratic singularities of rank at least 5 \cite{EP}. Recall that ${\cal X}$ is a variety with quadratic singularities of rank at least $r$ if in a neighborhood of every point $o\in {\cal X}$ the variety ${\cal X}$ can be realized as a hypersurface in a non-singular variety ${\cal Y}$, and the local equation of ${\cal X}$ at the point $o$ is of the form $\beta_1(u_*)+\beta_2(u_*)+\dots =0$, where $(u_*)$ is a system of local parameters at the point $o\in {\cal Y}$, and either $\beta_1\not\equiv 0$, or $\beta_1\equiv 0$ and $\mathop{\rm rk} \beta_2\geqslant r$. It is clear that $\mathop{\rm codim} (\mathop{\rm Sing}{\cal X}\subset {\cal X})\geqslant r-1$, so that the variety $F$ is factorial \cite{CL}. Furthermore, it is easy to show (see \cite{EP}) that the class of quadratic singularities of rank at least $r$ is stable with respect to blow ups in the following sense. Let $B\subset {\cal X}$ be an irreducible subvariety.
Then there exists an open set ${\cal U}\subset {\cal Y}$ such that ${\cal U}\cap B\neq \emptyset$, ${\cal U}\cap B$ is a non-singular algebraic variety and for its blow up $$ \sigma_B\colon {\cal U}^+\to {\cal U} $$ we have that $({\cal X}\cap {\cal U})^+\subset {\cal U}^+$ is a variety with quadratic singularities of rank at least $r$. In order to see this, note the following simple fact: if ${\cal Z}\ni o$ is a non-singular divisor on ${\cal Y}$, where ${\cal Z}\neq {\cal X}$, and the scheme-theoretic restriction ${\cal X}|_{\cal Z}$ has at the point $o$ a quadratic singularity of rank $l$, then ${\cal X}$ has at the point $o$ a quadratic singularity of rank at least $l$. Now if $B\not\subset \mathop{\rm Sing}{\cal X}$, then the claim about stability is obvious. Therefore, we may assume that $B\subset \mathop{\rm Sing}{\cal X}$. The open set ${\cal U}\subset {\cal Y}$ can be chosen in such a way that $B\cap {\cal U}$ is a non-singular subvariety and the rank of the quadratic singularity at the points $o\in B\cap {\cal U}$ is constant and equal to $l\geqslant r$. But then inside the exceptional divisor ${\cal E}=\sigma_B^{-1}(B\cap {\cal U})$ the intersection $({\cal X}\cap {\cal U})^+\cap {\cal E}$ is a fibration into quadrics of rank $l$, so that $({\cal X}\cap {\cal U})^+\cap {\cal E}$ has at most quadratic singularities, of rank at least $l$. Therefore, by the remark above, $({\cal X}\cap {\cal U})^+\subset {\cal U}^+$ has quadratic singularities of rank at least $r$ as well. For an explicit analytic proof, see \cite{EP}. The stability with respect to blow ups implies that the singularities of the variety $F$ are terminal (for the particular case of one blow up this is obvious: the discrepancy of the irreducible exceptional divisor $({\cal X}\cap {\cal U})^+\cap {\cal E}$ with respect to ${\cal X}$ is positive; every exceptional divisor over ${\cal X}$ can be realized by a sequence of blow ups of centres).
Finally, $F$ satisfies the condition (ii) of Theorem 1, that is, the condition of divisorial canonicity; see the proof of part (ii) of Theorem 2 in \cite{Pukh05} and Theorem 4 in \cite{Pukh09b}. This completes the proof of Theorem 2. Q.E.D. {\bf 2.2. Fano hypersurfaces of general position.} As in the case of the double space, the open subset ${\cal F}_{\rm reg}\subset {\cal F}$ of hypersurfaces of degree $M$ in ${\mathbb P}={\mathbb P}^M$ is defined by local conditions, which a hypersurface $F\in {\cal F}_{\rm reg}$ should satisfy at every point $o\in F$. Again these conditions are different for non-singular and singular points $o\in F$. Consider first the conditions of general position for a {\bf non-singular point} $o\in F$. Let $(z_1,\dots,z_M)$ be a system of affine coordinates with the origin at the point $o$ and $$ w=q_1+q_2+q_3+\dots +q_{M} $$ the affine equation of the hypersurface $F$, where the polynomials $q_i(z_*)$ are homogeneous of degree $i=1,\dots,M$. Here is the list of conditions of general position which a hypersurface $F$ should satisfy at a non-singular point $o$. (R1.1) The sequence $$ q_1,q_2,\dots ,q_{M-1} $$ is regular in the local ring ${\cal O}_{o,{\mathbb P}}$, that is, the system of equations $$ q_1=q_2=\dots =q_{M-1}=0 $$ defines a one-dimensional subset, a finite set of lines in ${\mathbb P}$ passing through the point $o$. In particular, $q_1\not\equiv 0$. The equation $q_1=0$ defines the tangent space $T_oF$ (which we, depending on what we need, will consider either as a linear subspace in ${\mathbb C}^M$, or as its closure, a hyperplane in ${\mathbb P}$). Now set ${\bar q}_i=q_i|_{\{q_1=0\}}$ for $i=2,\dots,M$: these are polynomials on the linear space $T_o F\cong {\mathbb C}^{M-1}$. The condition (R1.1) means the regularity of the sequence $$ {\bar q}_2, {\bar q}_3,\dots , {\bar q}_{M-1}. $$ This form is more convenient for estimating the codimension of the set of hypersurfaces which do not satisfy the regularity condition.
(R1.2) The quadratic form ${\bar q}_2$ on the space $T_o F$ is of rank at least 6, and the linear span of every irreducible component of the closed algebraic set $$ \{q_1=q_2=q_3=0\} $$ in ${\mathbb C}^M$ is the hyperplane $\{q_1=0\}$, that is, the tangent hyperplane $T_o F$. An equivalent wording of this condition: every irreducible component of the closed set $\{ {\bar q}_2={\bar q}_3=0 \}$ in ${\mathbb P}^{M-2}={\mathbb P}(\{q_1=0\})$ is non-degenerate. (R1.3) For any hyperplane $P\subset {\mathbb P}$, $P\ni o$, different from the tangent hyperplane $T_o F\subset {\mathbb P}$, the algebraic cycle of the scheme-theoretic intersection of the hyperplanes $P$, $T_o F$, the projective quadric $\overline{\{q_2=0\}}\subset {\mathbb P}$ and $F$, that is, the cycle $$ (P\circ \overline{\{q_1=0\}} \circ \overline{\{q_2=0\}} \circ F), $$ is irreducible and reduced. (The bar denotes the closure in ${\mathbb P}$, and the operation $\circ$ of taking the cycle of the scheme-theoretic intersection is also considered in the space ${\mathbb P}$.) Now let us consider the conditions of general position for a {\bf singular point} $o\in F$. Let $(z_1,\dots,z_M)$ be a system of affine coordinates with the origin at the point $o$ and $$ f=q_2+q_3+\dots+q_M $$ the affine equation of the hypersurface $F$, where the polynomials $q_i(z_*)$ are homogeneous of degree $i=2,\dots, M$. Let us list the conditions of general position which must be satisfied by the hypersurface $F$ at a singular point $o$. (R2.1) For any linear subspace $\Pi\subset{\mathbb C}^M$ of codimension $c\in\{0,1,2\}$ the sequence \begin{equation}\label{15.05.2014.2} q_2|_{\Pi},\dots,q_{M-c}|_{\Pi} \end{equation} is regular in the ring ${\cal O}_{o,\Pi}$, that is, the system of equations $$ q_2|_{{\mathbb P}(\Pi)}=\dots=q_{M-c}|_{{\mathbb P}(\Pi)}=0 $$ defines in the space ${\mathbb P}(\Pi)\cong{\mathbb P}^{M-c-1}$ a finite set of points. (R2.2) The quadratic form $q_2(z_*)$ is of rank at least 8.
(R2.3) Now let us consider $(z_1,\dots,z_M)$ as homogeneous coordinates $(z_1:\dots :z_M)$ on ${\mathbb P}^{M-1}$. The divisor $$ \{q_3|_{\{q_2=0\}}=0\} $$ on the quadric $\{q_2=0\}$ is not a sum of three (not necessarily distinct) hyperplane sections of this quadric, taken from the same linear pencil. Now, arguing word for word as in subsection 2.1, we conclude that any hypersurface $F\in {\cal F}_{\rm reg}$ is an irreducible projective variety with factorial terminal singularities. Obviously, $K_F=-H_F$ and $\mathop{\rm Pic} F = {\mathbb Z} H_F$, where $H_F$ is the class of a hyperplane section of $F\subset {\mathbb P}$, that is, $F$ is a Fano variety of index one. In order to prove Theorem 4, we have to show the following two facts: --- the inequality (\ref{14.06.2014.1}), --- the divisorial log-canonicity of the hypersurface $F\in {\cal F}_{\rm reg}$, that is, the condition (ii) of Theorem 1 for the variety $F$. These two tasks are dealt with in the remaining part of this section and in \S 3, respectively. {\bf 2.3. The conditions of general position at a non-singular point.} Let $o\in F$ be a non-singular point. Fix an arbitrary non-zero linear form $q_1$ and consider the affine space of polynomials $$ q_1+{\cal P}^{\rm sing}=\{q_1+q_2+\dots +q_M\}, $$ where ${\cal P}^{\rm sing}$ is the space of polynomials of the form $f=q_2+q_3+\dots +q_M$. Let ${\cal P}_i\subset \{q_1+{\cal P}^{\rm sing}\}$, $i=1,2,3$, be the closures of the subsets consisting of the polynomials $f$ which do not satisfy the condition (R1.$i$), respectively. Set $$ c_i=\mathop{\rm codim} ({\cal P}_i\subset \{q_1+{\cal P}^{\rm sing}\}). $$ {\bf Proposition 2.4.} {\it For $M\geqslant 8$ the following equality holds:} $$ \min\{c_1,c_2,c_3\}=c_2=\frac{(M-6)(M-5)}{2}. $$ {\bf Proof} is easy to obtain by elementary methods. First of all, by Lemma 2.1, shown below (where one must replace $M$ by $(M-1)$), we obtain $$ c_1=\frac{(M-1)(M-2)}{2} + 2.
$$ Furthermore, a violation of the condition $\mathop{\rm rk} {\bar q}_2\geqslant 6$ imposes on the coefficients of the quadratic form $q_2$ $$ \frac{(M-6)(M-5)}{2}< c_1 $$ independent conditions. Assuming the condition $\mathop{\rm rk} {\bar q}_2\geqslant 6$ to be satisfied, we obtain that the quadric $\{{\bar q}_2=0\}$ is factorial. It is easy to check that reducibility or non-reducedness of the divisor $q_3|_{\{{\bar q}_2=0\}}$ on this quadric gives $$ \frac{M^3-6M^2-7M+54}{6}>\frac{(M-6)(M-5)}{2} $$ independent conditions on the coefficients of the cubic form $q_3$. Finally, let us consider a hyperplane $P\neq T_oF$ and the quadratic hypersurface $$ q_2|_{P\cap \{q_1=0\}}=0. $$ Its rank is at least 5, so that it is still factorial. Let us estimate from below the number of independent conditions which are imposed on the coefficients of the polynomials $q_3,\dots, q_M$ if the condition (R1.3) is violated. Define the values $v(\mu)$, $\mu=0,1,2,3$, by the table \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline $\mu$ & 0 & 1 & 2 & 3 \\ \hline $v(\mu)$ & 0 & 1 & $M$ & $\frac12 M(M+1) -1$ \\ \hline \end{tabular} \end{center} \noindent and set $$ f(j,\mu)={j+M-1\choose M-1} - {j+M-3\choose M-1} -v(\mu)+ v(\max (0, \mu-2)). $$ Now, using the factoriality of the quadric, we obtain the estimate $$ \begin{array}{c} c_3\geqslant f(M,3)-(M-2) - \\ \\ - \max\left[ \max\limits_{M-1\geqslant j\geqslant 2} (f(j,2)+f(M-j,1)), \max\limits_{M-1\geqslant j \geqslant 3} (f(j,3)+f(M-j,0))\right]. \end{array} $$ An elementary check shows that the right hand side of this estimate is strictly greater than $c_2$ (and grows exponentially as $M\to \infty$). Q.E.D. for the proposition. {\bf 2.4. The conditions of general position at a singular point.} Recall that ${\cal P}^{\rm sing}$ is the space of polynomials of the form $$ f=q_2+q_3+\dots+q_M $$ in the variables $z_*=(z_1,\dots,z_M)$, where $q_i(z_*)$ are homogeneous of degree $i$.
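Before proceeding, we note that the ``elementary check'' at the end of the proof of Proposition 2.4 above is easy to reproduce numerically. The following short script (our illustration, not part of the original argument) evaluates the right hand side of the estimate for $c_3$ and compares it with $c_2=(M-6)(M-5)/2$ for a range of values of $M$:

```python
from math import comb


def check_c3_bound(M):
    """Evaluate the right hand side of the estimate for c_3 in the proof of
    Proposition 2.4, together with c_2 = (M-6)(M-5)/2, for a given M >= 8."""
    # the table of values v(mu) from the proof
    v = {0: 0, 1: 1, 2: M, 3: M * (M + 1) // 2 - 1}

    def f(j, mu):
        # f(j, mu) = C(j+M-1, M-1) - C(j+M-3, M-1) - v(mu) + v(max(0, mu-2));
        # math.comb(n, k) returns 0 when k > n, matching the convention here
        return (comb(j + M - 1, M - 1) - comb(j + M - 3, M - 1)
                - v[mu] + v[max(0, mu - 2)])

    inner = max(
        max(f(j, 2) + f(M - j, 1) for j in range(2, M)),  # 2 <= j <= M-1
        max(f(j, 3) + f(M - j, 0) for j in range(3, M)),  # 3 <= j <= M-1
    )
    c3_bound = f(M, 3) - (M - 2) - inner
    c2 = (M - 6) * (M - 5) // 2
    return c3_bound, c2


# the right hand side exceeds c_2 for every M in the tested range
assert all(b > c for b, c in (check_c3_bound(M) for M in range(8, 41)))
```

For instance, for $M=8$ the script gives the lower bound $2040$ for $c_3$ against $c_2=3$, and the gap widens rapidly as $M$ grows.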
Let ${\cal P}^{\rm sing}_{\rm reg}\subset{\cal P}^{\rm sing}$ be the subset of polynomials satisfying the conditions (R2.1-R2.3).

{\bf Proposition 2.5.} {\it The following estimate holds:} $$ \mathop{\rm codim}(\overline{({\cal P}^{\rm sing}\setminus{\cal P}^{\rm sing}_{\rm reg})}\subset{\cal P}^{\rm sing})=\frac{(M-7)(M-6)}{2}. $$

{\bf Proof.} It is sufficient to show that violation of each of the conditions (R2.1-R2.3) at the point $o=(0,\dots,0)$ separately imposes on the polynomial $f$ at least $(M-7)(M-6)/2$ independent conditions. It is easy to check that violation of the condition (R2.2) imposes on the coefficients of the quadratic form $q_2(z_*)$ precisely $(M-7)(M-6)/2$ independent conditions. Therefore, considering the condition (R2.3), we may assume that the condition (R2.2) is satisfied; in particular, the quadric $\{q_2=0\}$ is factorial, and violation of the condition (R2.3) imposes on the coefficients of the cubic form $q_3(z_*)$ (with the polynomial $q_2$ fixed) $$ M\frac{M^2+3M-16}{6}\geqslant\frac{(M-7)(M-6)}{2} $$ independent conditions for $M\geqslant 4$. It remains to consider the case when the condition (R2.1) is violated.

{\bf Lemma 2.1.} {\it Violation of the condition (R2.1) for the value $c=0$ of the parameter imposes on the coefficients of the polynomial $f$ \begin{equation}\label{15.05.2014.1} \frac{M(M-1)}{2}+2 \end{equation} independent conditions.}

{\bf Proof} is obtained by the standard methods \cite[Chapter 3]{Pukh13a}. We only recall the scheme of the argument. Fix the first moment when the sequence of polynomials $q_2,\dots,q_M$ becomes non-regular: assume that the regularity is first violated for $q_k$, that is, the closed set $\{q_2=\dots=q_{k-1}=0\}$ has the ``correct'' codimension $k-2$ and $q_k$ vanishes on one of the components of that set.
For $k\leqslant M-1$ we apply the method of \cite{Pukh98b} and obtain that violation of the regularity condition imposes on the coefficients of the polynomial $f$ at least $$ {M+1\choose k}\geqslant\frac{(M+1)M}{2} $$ independent conditions; the right hand side of the last inequality is strictly higher than (\ref{15.05.2014.1}), which is what we need. Let us consider the last option: $$ \{q_2=\dots=q_{M-1}=0\}\subset{\mathbb P}^{M-1} $$ is a one-dimensional closed set and $q_M$ vanishes on one of its irreducible components, say $B$. The case when $B\subset{\mathbb P}^{M-1}$ is a line is a special one: it is easy to check that vanishing on a line in ${\mathbb P}^{M-1}$ imposes on the polynomials $q_2,\dots,q_M$ in total precisely (\ref{15.05.2014.1}) independent conditions. Therefore, we may assume that $B$ is not a line, that is, $\mathop{\rm dim} B<\mathop{\rm dim}\langle B\rangle =k$, where $k\geqslant 2$ and $\langle B\rangle$ denotes the linear span of $B$. Now we apply the method suggested in \cite{Pukh01}, fixing $k$ and the linear subspace $\langle B\rangle$. To begin with, consider the case $k\leqslant M-2$. In that case there are indices $$ i_1,\dots,i_{k-1}\in\{2,\dots,M-1\}, $$ such that the restrictions $q_{i_1}|_{\langle B\rangle},\dots, q_{i_{k-1}}|_{\langle B\rangle}$ form a good sequence and $B$ is one of its associated subvarieties (see \cite[Sec.3, Proposition 4]{Pukh01}; the details of this procedure are described in the proof of the cited proposition). Taking into account that $B\subset\langle B\rangle$ is by construction a non-degenerate curve, we see that decomposable polynomials of the form $l_1\dots l_a$, where $l_i$ are linear forms on $\langle B\rangle\cong{\mathbb P}^k$, cannot vanish on $B$. This gives $jk+1$ independent conditions for each of the polynomials $q_j$ for $j\not\in\{i_1,\dots,i_{k-1}\}$, so that in total we get at least $$ \frac{k(M-k)(M-k+1)}{2}+M-2k-1 $$ independent conditions for these polynomials (the minimum is attained for $i_1=M-k+1,\dots$, $i_{k-1}=M-1$).
Taking into account the condition $q_M|_B\equiv 0$ and the dimension of the Grassmannian of $k$-dimensional subspaces in ${\mathbb P}^{M-1}$, we obtain at least $$ M^2-kM+k^2-M+k+1 $$ independent conditions for $f$. It is easy to check that the last number is not smaller than (\ref{15.05.2014.1}). Finally, if $k=M-1$, that is, $B$ is a non-degenerate curve in ${\mathbb P}^{M-1}$, then the condition $q_M|_B\equiv 0$ gives at least $M(M-1)+1$ independent conditions for $q_M$. Proof of Lemma 2.1 is complete. Q.E.D.

Now let us complete the proof of Proposition 2.5. For a {\it fixed} linear subspace $\Pi\subset{\mathbb C}^M$ of codimension $c\in\{0,1,2\}$ violation of regularity of the sequence (\ref{15.05.2014.2}) imposes on the polynomial $f$ at least $(M-c)(M-c-1)/2+2$ independent conditions. Subtracting the dimension of the Grassmannian of subspaces of codimension $c$ in ${\mathbb C}^M$, we get the least value $(M-3)(M-6)/2$ for $c=2$. This completes the proof of Proposition 2.5. Q.E.D.

{\bf 2.5. Estimating the codimension of the complement to the set ${\cal F}_{\rm reg}$.} Recall that $F\in {\cal F}_{\rm reg}$ if and only if at every non-singular point $o\in F$ the conditions (R1.1-3) are satisfied, and at every singular point $o\in F$ the conditions (R2.1-3) are satisfied. Propositions 2.4 and 2.5 imply the following fact.

{\bf Proposition 2.6.} {\it The following estimate holds:} $$ \mathop{\rm codim}(({\cal F}\setminus {\cal F}_{\rm reg})\subset {\cal F})\geqslant \frac{(M-7)(M-6)}{2}-5. $$

{\bf Proof} is completely similar to the proof of Proposition 2.3 and follows from Propositions 2.4 and 2.5. Now let us consider some geometric facts which follow immediately from the conditions of general position. These facts will be needed in \S 3 to exclude log maximal singularities.
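The two arithmetic checks left to the reader above (the comparison with (\ref{15.05.2014.1}) at the end of the proof of Lemma 2.1, and the minimization over $c\in\{0,1,2\}$ concluding the proof of Proposition 2.5) can be double-checked mechanically. The following sketch is ours, not part of the original text, and the helper names are hypothetical:

```python
def lemma_2_1_final_check(M):
    # case 2 <= k <= M-2: the total condition count, minus the dimension of the
    # Grassmannian of k-dimensional subspaces of P^{M-1}, is at least the number
    # claimed in the text, which in turn is at least M(M-1)/2 + 2
    for k in range(2, M - 1):
        total = (k * (M - k) * (M - k + 1) // 2 + (M - 2 * k - 1)
                 + (M * k + 1) - (k + 1) * (M - 1 - k))
        claimed = M * M - k * M + k * k - M + k + 1
        assert total >= claimed >= M * (M - 1) // 2 + 2

def prop_2_5_final_check(M):
    # subtract the dimension of the Grassmannian of codimension-c subspaces of C^M
    vals = {c: (M - c) * (M - c - 1) // 2 + 2 - c * (M - c) for c in (0, 1, 2)}
    assert min(vals.values()) == vals[2] == (M - 3) * (M - 6) // 2
    # the count imposed on the coefficients of the cubic form q_3 (stated for M >= 4)
    assert M * (M * M + 3 * M - 16) // 6 >= (M - 7) * (M - 6) // 2

for M in range(8, 41):
    lemma_2_1_final_check(M)
    prop_2_5_final_check(M)
```

In particular, the minimum over $c$ is indeed attained at $c=2$, in agreement with the value $(M-3)(M-6)/2$ stated in the proof.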
In \cite{Pukh05} it was shown that for any effective divisor $D\sim nH$ on $F$ (where we write $H$ instead of $H_F$ to simplify the notation) the pair $(F,\frac{1}{n}D)$ is canonical at non-singular points $o\in F$. This fact will be used without special references. Now let $D_2=\{q_2|_F=0\}$ be the first hypertangent divisor, so that we have $D^+_2\in|2H-3E|$. Recall that $E\subset{\mathbb P}^{M-1}$ is an irreducible quadric of rank at least 8. Obviously, the divisor $D_2\in|2H|$ satisfies the equality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}D_2=\frac{3}{M}. $$ Here and below the symbol $\mathop{\rm mult}\nolimits_o/\mathop{\rm deg}$ means the ratio of the multiplicity at the point $o$ to the degree.

{\bf Lemma 2.2.} {\it Let $P\subset F$ be the section of the hypersurface $F$ by an arbitrary linear subspace in ${\mathbb P}$ of codimension two, containing the point $o$. Then the restriction $D_2|_P$ is an irreducible reduced divisor on the hypersurface} $P\subset{\mathbb P}^{M-2}$.

{\bf Proof.} The variety $P$ has at most quadratic singularities of rank at least 6 and for that reason it is factorial. Therefore, reducibility or non-reducedness of the divisor $D_2|_P$ would mean that the equality $D_2|_P=H_1+H_2$ holds, where $H_i$ are possibly coinciding hyperplane sections of $P$. By the condition (R2.2) the equalities $\mathop{\rm mult}_oH_i=2$ hold. However, $\mathop{\rm mult}_oD_2|_P=6$. Therefore, $D_2|_P$ cannot break into two hyperplane sections. Q.E.D. for the lemma.

{\bf Proposition 2.7.} {\it The pair $(F,\frac12 D_2)$ has no non log canonical singularities, the centre of which on $F$ contains the point} $o$: $LCS(F,\frac12 D_2)\not\ni o$.

{\bf Proof.} Assume the converse. In any case $$ \mathop{\rm codim} (LCS\left(F,\frac12 D_2\right)\subset F)\geqslant 6, $$ so we may consider the section $P\subset F$ of the hypersurface $F$ by a generic linear subspace of dimension 5, containing the point $o$.
Then the pair $(P,\frac12 D_2|_P)$ has the point $o$ as an isolated centre of a non log canonical singularity. Let $\sigma_P\colon P^+\to P$ be the blow up of the non-degenerate quadratic singularity $o\in P$ so that $E_P=E\cap P^+$ is a non-singular exceptional quadric in ${\mathbb P}^4$. Since $$ \frac12(D_2|_P)^+\sim H_P-\frac32 E_P\quad\mbox{and}\quad a(E_P,P)=2>\frac32 $$ (where $H_P$ is the class of a hyperplane section of $P\subset{\mathbb P}^5$), the pair $(P^+,\frac12(D_2|_P)^+)$ is not log canonical. The union $LCS(P^+,\frac12(D_2|_P)^+)$ of all centres of non log canonical singularities of that pair, intersecting $E_P$, is a connected closed subset of the exceptional quadric $E_P$, every irreducible component $S_P$ of which satisfies the inequality $\mathop{\rm mult}_{S_P}(D_2|_P)^+\geqslant 3$. Coming back to the original pair $(F,\frac12 D_2)$, we see that for some irreducible subvariety $S\subset E$ the inequality $\mathop{\rm mult}_SD^+_2\geqslant 3$ holds, where $S\cap P^+=S_P$, so that $\mathop{\rm codim}(S\subset E)\in\{1,2,3\}$. However, the case $\mathop{\rm codim}(S\subset E)=3$ is impossible: by the connectedness principle this equality means that $S_P$ is a point, and then $S\subset E$ is a linear subspace of codimension 3, which is impossible if $\mathop{\rm rk}q_2\geqslant 8$ (a 7-dimensional non-singular quadric does not contain linear subspaces of codimension 3). Consider the case $\mathop{\rm codim}(S\subset E)=2$. Let $\Pi\subset E$ be a general linear subspace of maximal dimension. Then $D^+_2|_{\Pi}$ is a cubic hypersurface that has multiplicity 3 along an irreducible subvariety $S_{\Pi}=S\cap\Pi$ of codimension 2. 
Therefore, $D^+_2|_{\Pi}$ is a sum of three (not necessarily distinct) hyperplanes in $\Pi$, containing the linear subspace $S_{\Pi}\subset\Pi$ of codimension 2, and for that reason $D^+_2|_E$ is a sum of three (not necessarily distinct) hyperplane sections from the same linear pencil as well, and $S$ is the intersection of the quadric $E$ and a linear subspace of codimension 2. However, this is impossible by the condition (R2.3). Finally, if $\mathop{\rm codim}(S\subset E)=1$, then $D^+_2|_E=3S$ is a triple hyperplane section of the quadric $E$, which is impossible by the condition (R2.3). This completes the proof of Proposition 2.7. Q.E.D.

Here is one more fact that will be useful later.

{\bf Proposition 2.8.} {\it For any hyperplane section $\Delta\ni o$ of the hypersurface $F$ the pair $(F,\Delta)$ is log canonical.}

{\bf Proof.} This follows from a well-known fact (see, for instance, \cite{Co00,Pukh09b}): if $(p\in X)$ is a germ of a non-degenerate quadratic three-dimensional singularity, $\sigma\colon \widetilde{X}\to X$ its resolution with the exceptional quadric $E_X\cong{\mathbb P}^1\times{\mathbb P}^1$ and $D_X$ a germ of an effective divisor such that $p\in D_X$ and $\widetilde{D}_X\sim -\beta E_X$, then the pair $(X,\frac{1}{\beta}D_X)$ is log canonical at the point $p$. Q.E.D.

\section{Exclusion of maximal singularities}

In this section we complete the proof of Theorem 4. The symbol $F$ stands for a fixed hypersurface of degree $M$ in ${\mathbb P}$, satisfying the regularity conditions: $F\in {\cal F}_{\rm reg}$. As we mentioned in \S 2, in \cite{Pukh05} it was shown that the pair $(F, \frac{1}{n} D)$ has no maximal singularities, the centre of which is not contained in the closed set $\mathop{\rm Sing} F$, for every effective divisor $D\sim nH$. In \cite{EP} it was shown that for any mobile linear system $\Sigma\subset |nH|$ the pair $(F, \frac{1}{n} D)$ is canonical for a general divisor $D\in \Sigma$, that is, $\Sigma$ has no maximal singularities.
Therefore, in order to complete the proof of Theorem 4 it is sufficient to show that for any effective divisor $D\sim nH$ the pair $(F, \frac{1}{n} D)$ is log canonical, and we only need to consider those log maximal singularities, the centre of which is contained in $\mathop{\rm Sing} F$. In subsection 3.1 we carry out preparatory work: by means of the technique of hypertangent divisors we obtain estimates for the ratio $\mathop{\rm mult}\nolimits_o/\mathop{\rm deg}$ for certain classes of irreducible subvarieties of the hypersurface $F$. After that we fix a pair $(F, \frac{1}{n} D)$ and assume that it is not log canonical. The aim is to bring this assumption to a contradiction. Let $B^*\subset \mathop{\rm Sing} F$ be the centre of the log maximal singularity of the divisor $D$, $o\in B^*$ a point of general position, $F^+\to F$ its blow up, $D^+$ the strict transform of the divisor $D$. In subsection 3.2 we study the properties of the pair $(F^+, \frac{1}{n} D^+)$: we show that this pair has a non log canonical singularity, the centre of which is a subvariety of the exceptional divisor of the blow up of the point $o$. After that, in subsections 3.2 and 3.3, we show that this is impossible, which completes the proof of Theorem 4.

{\bf 3.1. The method of hypertangent divisors.} Fix a singular point $o\in F$, a system of coordinates $(z_1,\dots,z_M)$ on ${\mathbb P}$ with the origin at that point and the equation $f=q_2+\dots+q_M$ of the hypersurface $F$.

{\bf Proposition 3.1.} {\it Assume that the variety $F$ satisfies the conditions (R2.1, R2.2) at the singular point $o$. Then the following claims hold.

{\rm (i)} For every irreducible subvariety $Y\subset F$ of codimension two the following inequality holds: $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\leqslant\frac{4}{M}. $$

{\rm (ii)} Let $\Delta\ni o$ be an arbitrary hyperplane section of the hypersurface $F$.
For every prime divisor $Y\subset \Delta$ the following inequality holds: $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\leqslant\frac{3}{M}. $$

{\rm (iii)} Let $P\ni o$ be the section of the hypersurface $F$ by an arbitrary linear subspace of codimension two. For every prime divisor $Y\subset P$ the following inequality holds:} $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\leqslant\frac{4}{M}. $$

{\bf Proof} is obtained by means of the method of hypertangent divisors \cite[Chapter 3]{Pukh13a}. For $k=2,\dots,M-1$ let $$ \Lambda_k=\left|\sum^k_{i=2}s_{k-i}(q_2+\dots+q_i)|_F=0\right| $$ be the $k$-{\it th hypertangent linear system}, where $s_j(z_*)$ are all possible homogeneous polynomials of degree $j$. For the blow up $\sigma\colon F^+\to F$ of the point $o$ with the exceptional divisor $E=\sigma^{-1}(o)$, naturally realized as a quadric in ${\mathbb P}^{M-1}$, we have $$ \Lambda^+_k\subset |kH-(k+1)E| $$ (where $\Lambda^+_k$ is the strict transform of the system $\Lambda_k$ on $F^+$). Let $D_k\in\Lambda_k$, $k=2,\dots,M-1$, be general hypertangent divisors. Let us show the claim (i). By the condition (R2.1) the equality \begin{equation}\label{05.05.2014.1} \mathop{\rm codim}\nolimits_o(\mathop{\rm Bs}\Lambda_k\subset F)=k-1 \end{equation} holds, where the symbol $\mathop{\rm codim}_o$ means the codimension in a neighborhood of the point $o$; therefore, $$ Y\cap D_4\cap D_5\cap\dots\cap D_{M-1} $$ in a neighborhood of the point $o$ is a closed one-dimensional set. We construct a sequence of irreducible subvarieties $Y_i\subset F$ of codimension $i$: $Y_2=Y$ and $Y_{i+1}$ is an irreducible component of the effective cycle $(Y_i\circ D_{i+2})$ with the maximal value of the ratio $\mathop{\rm mult}_o/\mathop{\rm deg}$. The cycle $(Y_i\circ D_{i+2})$ is well defined at every step, because by the equality (\ref{05.05.2014.1}) we have $Y_i\not\subset D_{i+2}$ for a general hypertangent divisor $D_{i+2}$.
By the construction of hypertangent linear system, at every step of our procedure the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{i+1}\geqslant\frac{i+3}{i+2}\cdot\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_i $$ holds, so that for the curve $Y_{M-2}$ we have the estimates $$ 1\geqslant\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y_{M-2}\geqslant \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\cdot\frac{5}{4}\cdot\frac{6}{5}\cdot\dots\cdot\frac{M}{M-1}, $$ which implies the claim (i). Let us prove the claim (ii). By Lemma 2.2, the divisor $D_2|_{\Delta}$ is irreducible and reduced, and by the condition (R2.1) it satisfies the equality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}D_2|_{\Delta}=\frac{3}{M}. $$ Therefore, we may assume that $Y\neq D_2|_{\Delta}$, so that $Y\not\subset D_2$ and the effective cycle of codimension two $(Y\circ D_2)$ on $\Delta$ is well defined and satisfies the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}(Y\circ D_2)\geqslant\frac32\cdot\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y. $$ Let $Y_2$ be an irreducible component of that cycle with the maximal value of the ratio $\mathop{\rm mult}_o/\mathop{\rm deg}$. Applying to $Y_2$ the technique of hypertangent divisors in precisely the same way as in the part (i) above, we see that by the condition (R2.1) the intersection $$ Y_2\cap D_4|_{\Delta}\cap D_5|_{\Delta}\cap\dots\cap D_{M-2}|_{\Delta} $$ in a neighborhood of the point $o$ is a one-dimensional closed set, where $D_4\in\Lambda_4,\dots$, $D_{M-2}\in\Lambda_{M-2}$ are general hypertangent divisors (note that the last hypertangent divisor is $D_{M-2}$, and not $D_{M-1}$, as in the part (i), because the dimension of $\Delta$ is one less than the dimension of $F$ and the condition (R2.1) provides the regularity of the truncated sequence $q_2|_{\Pi},\dots,q_{M-1}|_{\Pi}$, where in this case $\Pi$ is a hyperplane, cutting out $\Delta$ on $F$). 
Now, arguing word for word as in the proof of the claim (i), we obtain the estimate $$ 1\geqslant\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\cdot\frac32\cdot\frac54\cdot\frac65\cdot\dots\cdot\frac{M-1}{M-2}, $$ which implies that $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\leqslant\frac{8}{3(M-1)}. $$ For $M\geqslant 9$ the right hand side of the inequality does not exceed $3/M$, which proves the claim (ii). Let us show the claim (iii). We argue word for word as in the proof of the part (ii), with the only difference: in order to estimate the multiplicity of the cycle $(Y\circ D_2|_P)$ at the point $o$ we use the hypertangent divisors $$ D_4|_P,D_5|_P,\dots,D_{M-3}|_P $$ (one less than above), so that we get the estimate $$ 1\geqslant\frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\cdot\frac32\cdot\frac54\cdot\frac65\cdot\dots\cdot\frac{M-2}{M-3}, $$ which implies that $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}Y\leqslant\frac{8}{3(M-2)}. $$ For $M\geqslant 6$ the right hand side of the inequality does not exceed $4/M$, which proves the claim (iii). Proof of Proposition 3.1 is complete. Q.E.D. Let us resume the proof of Theorem 4.

{\bf 3.2. The blow up of a singular point.} Assume that the pair $(F,\frac{1}{n}D)$ is not log canonical for some divisor $D\in|nH|$, that is, for some prime divisor $E^*$ over $F$ (that is, a prime divisor $E^*\subset\widetilde{F}$, where $\psi\colon\widetilde{F}\to F$ is some birational morphism with $\widetilde{F}$ non-singular and projective) the {\it log Noether-Fano inequality} holds: $$ \mathop{\rm ord}\nolimits_{E^*}\psi^* D> n(a(E^*)+1). $$ By linearity of the inequality in $D$ and $n$ we may assume the divisor $D$ to be prime. Let $B^*=\psi(E^*)\subset F$ be the centre of the log maximal singularity $E^*$. We know that $B^*\subset \mathop{\rm Sing} F$; in particular, $\mathop{\rm codim}(B^*\subset F)\geqslant 7$.
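The telescoping products in the proof of Proposition 3.1 above, together with the two thresholds ($M\geqslant 9$ in part (ii) and $M\geqslant 6$ in part (iii)), can be verified in exact arithmetic. The sketch below is ours and not part of the original argument; the helper name is hypothetical:

```python
from fractions import Fraction

def hypertangent_product(first, last):
    # product of (i+1)/i over i = first..last (telescopes to (last+1)/first)
    p = Fraction(1)
    for i in range(first, last + 1):
        p *= Fraction(i + 1, i)
    return p

for M in range(9, 41):
    # claim (i): (5/4)(6/5)...(M/(M-1)) = M/4, whence mult/deg <= 4/M
    assert hypertangent_product(4, M - 1) == Fraction(M, 4)
    # claim (ii): extra factor 3/2, product truncated at (M-1)/(M-2)
    bound2 = 1 / (Fraction(3, 2) * hypertangent_product(4, M - 2))
    assert bound2 == Fraction(8, 3 * (M - 1)) <= Fraction(3, M)   # needs M >= 9
    # claim (iii): product truncated one step earlier, at (M-2)/(M-3)
    bound3 = 1 / (Fraction(3, 2) * hypertangent_product(4, M - 3))
    assert bound3 == Fraction(8, 3 * (M - 2)) <= Fraction(4, M)   # needs M >= 6
```

For $M=9$ the bound in part (ii) is sharp: $8/(3\cdot 8)=3/9=1/3$.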
Let $o\in B^*$ be a point of general position, $\varphi\colon F^+\to F$ its blow up, $E\subset F^+$ the exceptional quadric. Consider the first hypertangent divisor $D_2\in|2H|$ at the point $o$. By Lemma 2.2, the divisor $D_2$ is irreducible and reduced, and by Proposition 2.7, the pair $(F,\frac12 D_2)$ is log canonical at the point $o$. Therefore, $D\neq D_2$.

{\bf Proposition 3.2.} {\it The following inequality holds} $$ \mathop{\rm mult}\nolimits_oD\leqslant\frac{8}{3}n. $$

{\bf Proof.} Consider the effective cycle $(D\circ D_2)$ of codimension two. Obviously, $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}(D\circ D_2)\geqslant\frac32\cdot \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}D, $$ however, by Proposition 3.1, (i), the left-hand side of this inequality does not exceed $4/M$. Since $\mathop{\rm deg}D=nM$, Proposition 3.2 is shown. Q.E.D.

Write down $D^+\sim nH-\nu E$, where $\nu\leqslant\frac43 n$. Let us consider the section $P$ of the hypersurface $F$ by a general 5-dimensional linear subspace, containing the point $o$. Let $P^+$ be the strict transform of $P$ on $F^+$ and $E_P=P^+\cap E$ a non-singular three-dimensional quadric. Set also $D_P=D|_P$. Obviously, the pair $(P,\frac{1}{n}D_P)$ has the point $o$ as an isolated centre of a non log canonical singularity. Since $a(E_P)=2$ and $D^+_P\sim nH_P-\nu E_P$ (where $H_P$ is the class of a hyperplane section of the variety $P$) with $\nu\leqslant\frac43n<2n$, the pair $(P^+,\frac{1}{n}D^+_P)$ is not log canonical and the union $LCS_E(P^+,\frac{1}{n}D^+_P)$ of centres of all non log canonical singularities of that pair, intersecting $E_P$, is a connected closed subset of the exceptional quadric $E_P$. Let $S_P$ be an irreducible component of that set. Obviously, the inequality $$ \mathop{\rm mult}\nolimits_{S_P}D^+_P> n $$ holds. Furthermore, $\mathop{\rm codim}(S_P\subset E_P)\in\{1,2,3\}$.
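The bookkeeping behind Proposition 3.2 and the resulting bound $\nu\leqslant\frac43 n$ used just above can be replayed in exact fractions: from $\frac32\cdot\frac{\mathop{\rm mult}_o D}{nM}\leqslant\frac{4}{M}$ with $\mathop{\rm deg}D=nM$, and from $\mathop{\rm mult}_o D=2\nu$ at the double point $o$. A sketch of our own (the helper name is hypothetical):

```python
from fractions import Fraction

def check_prop_3_2(n, M):
    # (3/2)*(mult_o D)/(n*M) <= 4/M with deg D = n*M bounds mult_o D by
    # (4/M)*(2/3)*(n*M) = (8/3)*n, independently of M
    max_mult = Fraction(4, M) * Fraction(2, 3) * (n * M)
    assert max_mult == Fraction(8, 3) * n
    # mult_o D = 2*nu at the double point o, so nu <= (4/3)*n < 2n,
    # which is what makes the pair (P^+, (1/n)D_P^+) test applicable
    nu_max = max_mult / 2
    assert nu_max == Fraction(4, 3) * n < 2 * n
    return nu_max

for n in range(1, 25):
    for M in range(9, 31):
        check_prop_3_2(n, M)
```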
Returning to the original pair $(F,\frac{1}{n}D)$, we see that there is a non log canonical singularity of the pair $(F^+,\frac{1}{n}D^+)$, the centre of which is a subvariety $S\subset E$, such that $S\cap E_P=S_P$ and, in particular, $\mathop{\rm codim}(S\subset E)\in\{1,2,3\}$. Note at once that the case $\mathop{\rm codim}(S\subset E)=3$ is impossible: by the connectedness principle in that case $S_P$ is a point and for that reason $S$ is a linear subspace of codimension 3 on the quadric $E$ of rank at least 8, which is impossible. It is not hard to exclude the case $\mathop{\rm codim}(S\subset E)=1$, either. Assume that it does take place. Then the divisor $S$ is cut out on $E$ by a hypersurface of degree $d_S\geqslant 1$. Let $H_E$ be the class of a hyperplane section of the quadric $E$. The divisor $D^+|_E\sim\nu H_E$, so that $$ \frac43n\geqslant\nu>nd_S $$ and for that reason $S$ is a hyperplane section of the quadric $E$. Let $\Delta\in|H|$ be the uniquely determined hyperplane section of the hypersurface $F$, such that $\Delta\ni o$ and $\Delta^+\cap E=S$. The pair $(F^+,\Delta^+)$ is log canonical and for that reason $D\neq\Delta$. For the effective cycle $(D\circ\Delta)$ of codimension two on $F$ we have $$ \mathop{\rm mult}\nolimits_o(D\circ\Delta)\geqslant 2\nu+2\mathop{\rm mult}\nolimits_SD^+>4n, $$ so that $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}(D\circ\Delta)>\frac{4}{M}, $$ which contradicts Proposition 3.1. This excludes the case of a divisorial centre. {\bf 3.3. The case of codimension two.} Starting from this moment, assume that $\mathop{\rm codim}(S\subset E)=2$. {\bf Lemma 3.1.} {\it The subvariety $S$ is contained in some hyperplane section of the quadric $E$.} {\bf Proof.} Since $\mathop{\rm mult}_SD^+>n$ and $D^+|_E\sim \nu H_E$ with $\nu\leqslant\frac43n$, for every secant line $L\subset E$ of the subvariety $S$ we have $L\subset D^+$. 
Let $\Pi\subset E$ be a linear space of maximal dimension and of general position and $S_{\Pi}=S\cap\Pi$. The secant lines of the closed set $S_{\Pi}\subset\Pi$ of codimension two cannot sweep out $\Pi$, since $E\not\subset D^+$. Therefore, there are two options (see \cite[Lemma 2.3]{Pukh09b}): 1) the secant lines of the set $S_{\Pi}$ sweep out a hyperplane in $\Pi$, 2) $S_{\Pi}\subset\Pi$ is a linear subspace of codimension two. In the first case the secant lines $L\subset E$ of the set $S$ sweep out a divisor on $E$, which can only be a hyperplane section of the quadric $E$. In the second case $S$ contains all its secant lines and is a section of $E$ by a linear subspace of codimension two. Q.E.D. for the lemma.

As we have just shown, one of the two options takes place: either there is a unique hyperplane section $\Lambda$ of the quadric $E$, containing $S$ (Case 1), or $S=E\cap \Theta$, where $\Theta$ is a linear subspace of codimension two (Case 2). Let us study them separately.

Assume that {\bf Case 1} takes place. Then $S$ is cut out on $\Lambda$ by a hypersurface of degree $d_S\geqslant2$. Set $$ \mu=\mathop{\rm mult}\nolimits_SD^+\quad\mbox{and}\quad\gamma=\mathop{\rm mult}\nolimits_{\Lambda}D^+, $$ where $\mu>n$ and $\mu\leqslant 2\nu\leqslant\frac83n$.

{\bf Lemma 3.2.} {\it The following inequality holds:} $$ \gamma\geqslant\frac{2\mu-\nu}{3}. $$

{\bf Proof} is easy to obtain in the same way as the short proof of Lemma 3.5 in \cite[subsection 3.7]{Pukh00d}. Let $L\subset\Lambda$ be a general secant line of the set $S$. Consider the section $P$ of the hypersurface $F$ by a general 4-plane in ${\mathbb P}$, such that $P\ni o$ and $P^+\cap E$ contains the line $L$. Obviously, $o\in P$ is a non-degenerate quadratic point and $E_P=P^+\cap E\cong{\mathbb P}^1\times{\mathbb P}^1$ is a non-singular quadric in ${\mathbb P}^3$. Set $D_P=D|_P$. Obviously, $\gamma=\mathop{\rm mult}_LD^+_P$.
Let $\sigma_L\colon P_L\to P^+$ be the blow up of the line $L$, $E_L=\sigma^{-1}_L(L)$ the exceptional divisor; since ${\cal N}_{L/P^+}\cong{\cal O}\oplus{\cal O}_L(-1)$, the exceptional surface $E_L$ is a ruled surface of the type ${\mathbb F}_1$, so that $$ \mathop{\rm Pic}E_L={\mathbb Z}s\oplus{\mathbb Z}f, $$ where $s$ and $f$ are the classes of the exceptional section and the fibre, respectively. Furthermore, $E_L|_{E_L}=-s-f$. Let $D_L$ be the strict transform of $D^+_P$ on $P_L$. Obviously, $$ D_L\sim n H_P-\nu E_P-\gamma E_L $$ (where $H_P$ is the class of a hyperplane section of $P$), so that $$ D_L|_{E_L}\sim\gamma s+(\gamma+\nu)f. $$ On the other hand, $L$ is a general secant line of the set $S$ and for that reason $L$ contains at least two distinct points $p,q\in S$. Therefore, the divisor $D^+_P$ has at the points $p,q\in L$ the multiplicity $\mu$ and for that reason the effective 1-cycle $D_L|_{E_L}$ contains the corresponding fibres $\sigma^{-1}_L(p)$ and $\sigma^{-1}_L(q)$ over those points with multiplicity $(\mu-\gamma)$. Therefore, $$ \gamma+\nu\geqslant 2(\mu-\gamma), $$ whence follows the claim of the lemma. Q.E.D. Now let us consider the uniquely determined hyperplane section $\Delta$ of the hypersur\-face $F\subset{\mathbb P}$, such that $\Delta\ni o$ and $\Delta^+\cap E=\Lambda$. Set $D_{\Delta}=D|_{\Delta}$. Write down $$ D^+|_{\Delta^+}=D^+_{\Delta}+a\Lambda. $$ Obviously, $$ \mathop{\rm mult}\nolimits_oD_{\Delta}=2(\nu+a)\geqslant 2\nu+2\frac{2\mu-\nu}{3}=\frac43(\mu+\nu)>\frac83n. $$ Since as we noted above, the subvariety $S$ is cut out on the quadric $\Lambda$ by a hypersurfa\-ce of degree $d_S\geqslant 2$, the divisor $D^+_{\Delta}\sim nH_{\Delta}-(\nu+a)\Lambda$ can not contain $S$ with multiplicity higher than $$ \frac{1}{d_S}(\nu+a)\leqslant\frac{\nu+a}{2}. 
Since the pair $(F^+,\frac{1}{n}D^+)$ has a non log canonical singularity with the centre at $S$, the inversion of adjunction implies that the pair $(\Delta^+,\frac{1}{n}(D^+_{\Delta}+a\Lambda))$ has a non log canonical singularity with the centre at $S$ as well. Recall that $\mathop{\rm codim}(S\subset\Delta^+)=2$. Consider the blow up $\sigma_S\colon\widetilde{\Delta}\to\Delta^+$ of the subvariety $S$ and denote by the symbol $E_S$ the exceptional divisor $\sigma^{-1}_S(S)$. The following fact is well known.

{\bf Proposition 3.3.} {\it For some irreducible divisor $S_1\subset E_S$, such that the projection $\sigma_S|_{S_1}$ is birational, the inequality \begin{equation}\label{03.05.2014.1} \mathop{\rm mult}\nolimits_S(D^+_{\Delta}+a\Lambda)+\mathop{\rm mult}\nolimits_{S_1}(\widetilde{D}_{\Delta}+a\widetilde{\Lambda})>2n \end{equation} holds, where $\widetilde{D}_{\Delta}$ and $\widetilde{\Lambda}$ are the strict transforms of $D^+_{\Delta}$ and $\Lambda$ on $\widetilde{\Delta}$, respectively.}

{\bf Proof:} see Proposition 9 in \cite{Pukh05}.

Set $\mu_S=\mathop{\rm mult}_SD^+_{\Delta}$ and $\beta=\mathop{\rm mult}_{S_1}\widetilde{D}_{\Delta}$. Consider first the case of general position: $S_1\neq E_S\cap\widetilde{\Lambda}$. In that case $S_1\not\subset\widetilde{\Lambda}$ and the inequality (\ref{03.05.2014.1}) takes the following form: $$ \mu_S+\beta+a>2n. $$ Since $\mu_S\geqslant\beta$, it follows that $2\mu_S+a>2n$. On the other hand, we noted above that $2\mu_S\leqslant\nu+a$. As a result, we obtain the estimate $$ \nu+2a>2n. $$ Therefore, $\mathop{\rm mult}_oD_{\Delta}>\nu+2n>3n$. However, $D_{\Delta}\sim nH_{\Delta}$ is an effective divisor on the hyperplane section $\Delta$ and by Proposition 3.1, (ii), it satisfies the inequality $$ \frac{\mathop{\rm mult}_o}{\mathop{\rm deg}}D_{\Delta}\leqslant\frac{3}{M}. $$ This contradiction excludes the case of general position. Therefore, we are left with the only option: $S_1=E_S\cap\widetilde{\Lambda}$.
In that case the inequality (\ref{03.05.2014.1}) takes the following form: $$ \mu_S+\beta+2a>2n. $$ This inequality is weaker than the corresponding estimate in the case of general position, but in compensation we obtain the additional inequality $$ 2\mu_S+2\beta\leqslant\nu+a $$ (the restriction $D^+_{\Delta}|_{\Lambda}$ is cut out by a hypersurface of degree $(\nu+a)$ and contains the divisor $S\sim d_SH_{\Lambda}$ with multiplicity at least $\mu_S+\beta$). Combining the last two estimates, we obtain the inequality $$ \nu+5a>4n, $$ which implies that $5(\nu+a)>8n$ and so $\mathop{\rm mult}_oD_{\Delta}>\frac{16}{5}n$; as we mentioned above, this contradicts Proposition 3.1, (ii). This completes the exclusion of Case 1.

Therefore, {\bf Case 2} takes place: $S=E\cap\Theta$, where $\Theta\subset{\mathbb P}^{M-1}$ is a linear subspace of codimension two. Let $P\subset F$ be the section of the hypersurface $F$ by the linear subspace of codimension two in ${\mathbb P}$, which is uniquely determined by the conditions $P\ni o$ and $P^+\cap E=S$. Furthermore, let $|H-P|$ be the pencil of hyperplane sections of $F$, containing $P$. For a general hyperplane section $\Delta\in|H-P|$ we have: \begin{itemize} \item the divisor $D$ does not contain $\Delta$ as a component, so that the effective cycle $(D\circ\Delta)=D_{\Delta}$ of codimension two on $F$ is well defined; this cycle can be regarded as an effective divisor $D_{\Delta}\in|nH_{\Delta}|$ on the hypersurface $\Delta\subset{\mathbb P}^{M-1}$, \item for the strict transform $D^+_{\Delta}$ on $F^+$ the equality $\mathop{\rm mult}_SD^+_{\Delta}=\mathop{\rm mult}_SD^+$ holds. \end{itemize} Of course, the divisor $D_{\Delta}$ may contain $P$ as a component. Write down $D_{\Delta}=G+aP$, where $a\in{\mathbb Z}_+$ and $G$ is an effective divisor that does not contain $P$ as a component, $G\in|(n-a)H_{\Delta}|$.
Obviously, $G^+\sim(n-a)H_{\Delta}-(\nu-a)E_{\Delta}$, where $E_{\Delta}=\Delta^+\cap E$, and, besides, $$ \mathop{\rm mult}\nolimits_SG^+=\mathop{\rm mult}\nolimits_SD^+-a>n-a. $$ Set $m=n-a$. The effective cycle of codimension two $G_P=(G\circ P)$ on $\Delta$ is well defined and can be considered as an effective divisor $G_P\in|mH_P|$ on the hypersurface $P\subset{\mathbb P}^{M-2}$. The divisor $G_P$ satisfies the inequality $$ \mathop{\rm mult}\nolimits_oG_P\geqslant 2(\nu-a)+ 2\mathop{\rm mult}\nolimits_SG^+>4m. $$ This is impossible by Proposition 3.1, (iii). Therefore, the assumption that the pair $(F,\frac{1}{n} D)$ is not log canonical for some divisor $D\sim nH$ leads to a contradiction. Proof of Theorem 4 is complete.

\begin{flushleft} Department of Mathematical Sciences,\\ The University of Liverpool \end{flushleft} \noindent{\it [email protected]} \end{document}
\begin{document} \title[Homothetic variant of fractional Sobolev space ... revisited]{Homothetic variant of fractional Sobolev space with application to Navier-Stokes system revisited} \author{Jie Xiao} \address{Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL A1C 5S7, Canada} \email{[email protected]} \thanks{JX was in part supported by NSERC of Canada and URP of Memorial University.} \subjclass[2010]{{31C15, 35Q30, 42B37, 46E35}\\ {\it Key words and phrases.}\ {$\{Q_\alpha^{-1}\}_{0\le\alpha<1}$, $\lim_{\alpha\to 1}Q_\alpha^{-1}$, Navier-Stokes equations}} \date{} \dedicatory{} \keywords{} \begin{abstract} This note provides a deeper understanding of the main results obtained in the author's 2007 DPDE paper \cite{Xiao}. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{s1} \setcounter{equation}{0} This note is devoted to a further understanding of the results on the so-called Q-spaces on $\mathbb R^n$ and the incompressible Navier-Stokes equations on $\mathbb R^{1+n}_+=(0,\infty)\times\mathbb R^n$ established in the author's 2007 DPDE paper \cite{Xiao}. For $\alpha\in (-\infty,\infty)$, the space $Q_\alpha$ on $\mathbb R^n$ is defined as the class of all measurable complex-valued functions $f$ on $\mathbb R^n$ with \begin{equation} \label{eQ} \||f\||_{Q_\alpha}=\sup_{(r,x)\in \mathbb R^{1+n}_+}\left(r^{2\alpha-n}\iint_{B(x,r)\times B(x,r)}\frac{|f(y)-f(z)|^2}{|y-z|^{n+2\alpha}}\,dydz\right)^{\frac12}<\infty. \end{equation} Here and henceforth, $B(x,r)\subseteq\mathbb R^n$ stands for the open ball centered at $x$ with radius $r$. This space exists as a homothetic variant of the fractional Sobolev space $\dot{L}^2_\alpha$ on $\mathbb R^n$, where $$ f\in\dot{L}^2_\alpha\Longleftrightarrow\iint_{\mathbb R^n\times \mathbb R^n}\frac{|f(y)-f(z)|^2}{|y-z|^{n+2\alpha}}\,dydz<\infty.
$$
According to \cite{EsJPX, Xiao}, $\big({Q}_\alpha/\mathbb C,\||f\||_{{Q}_\alpha}\big)$ is not only a Banach space, but also affine invariant: if $(\lambda,x_0)\in\mathbb R^{1+n}_+$ then
$$
\phi(x)=\lambda x+x_0\Rightarrow\||f\circ\phi\||_{Q_\alpha}=\||f\||_{Q_\alpha}.
$$
Interestingly, one has the following structure:
$$
Q_\alpha=\begin{cases} BMO\ \hbox{as}\ \alpha\in (-\infty,0);\\ (-\Delta)^{-\frac{\alpha}{2}}\mathcal{L}_{2,n-2\alpha}\ \hbox{between}\ W^{1,n}\ \hbox{and}\ BMO \ \hbox{as}\ \alpha\in (0,1);\\ \mathbb C\ \hbox{as}\ \alpha\in [1,\infty), \end{cases}
$$
where $(-\Delta)^{-{\alpha}/{2}}$ stands for the $-\alpha/2$-th power of the Laplacian operator, and
$$
\begin{cases} f\in\mathcal{L}_{2,n-2\alpha}\Longleftrightarrow\||f\||^2_{\mathcal{L}_{2,n-2\alpha}}=\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2(\alpha-n)}\iint_{B(x,r)\times B(x,r)}|f(y)-f(z)|^2\,dydz<\infty;\\ f\in W^{1,n}\Longleftrightarrow \||f\||_{W^{1,n}}^n=\int_{\mathbb R^n}|\nabla f(x)|^n\,dx<\infty;\\ f\in BMO\Longleftrightarrow \||f\||_{BMO}^2=\sup_{(r,x)\in\mathbb R^{1+n}_+} r^{-2n}\iint_{B(x,r)\times B(x,r)}|f(y)-f(z)|^2\,dydz<\infty. \end{cases}
$$
As shown in \cite{Xiao}, the importance of this structure lies in an application of $Q_\alpha$ to treating the existence and uniqueness of the so-called mild solution $u=u(t,x)=(u_1(t,x),...,u_n(t,x))$ of the normalized incompressible Navier-Stokes system below, with pressure function $p=p(t,x)$ and initial data $a=a(x)=(a_1(x),...,a_n(x))$:
\begin{equation}\label{e181}
\left\{\begin{array}{r@{}l} \partial_t u-\Delta u+u\cdot\nabla u+\nabla p=0\ \ \hbox{on}\ \ \mathbb R^{1+n}_+;\\ \nabla\cdot u=0\ \ \hbox{on}\ \ \mathbb R^n;\\ u(0,\cdot)=a(\cdot)\ \ \hbox{on}\ \ \mathbb R^n, \end{array}\right.
\end{equation}
namely, $u$ solves the integral equation
\begin{equation}
\label{eIe}
u(t,x)=e^{t\Delta}a(x)-\int_0^t e^{(t-s)\Delta}P\nabla\cdot(u\otimes u)ds,
\end{equation}
where
$$
\left\{\begin{array}{r@{}l} e^{t\Delta}a(x)=(e^{t\Delta}a_1(x),...,e^{t\Delta}a_n(x));\\ P=\{P_{jk}\}_{j,k=1,...,n}=\{\delta_{jk}+R_jR_k\}_{j,k=1,...,n};\\ \delta_{jk}=\hbox{Kronecker\ symbol};\\ R_j=\partial_j(-\Delta)^{-\frac12}=\hbox{Riesz\ transform}. \end{array}\right.
$$
Even more interestingly, several relevant advances were made in \cite{M, QLi, KXZZ, GJLV, LiZh, LiYa, Lem1, Lem2, LiXiYa}. The principal results in these papers have strongly inspired the author to revisit and optimize the main results in \cite{Xiao}. The present article is divided into the following two sections between this Introduction and the References at the end:

\ref{s182}.\ \ \ $\{Q^{-1}_\alpha\}_{0\le\alpha<1}$ and its Navier-Stokes equations;

\ref{s183}.\ \ \ $\lim_{\alpha\to 1}Q^{-1}_\alpha$ and its Navier-Stokes equations.

\noindent{\it Notation}. $U\lesssim V$ or $V\gtrsim U$ stands for $U\le C V$ for a constant $C>0$ independent of $U$ and $V$; $U\approx V$ is used for both $U\lesssim V$ and $V\lesssim U$.

\section{$\big\{Q_{\alpha}^{-1}\big\}_{0\le\alpha<1}$ and its Navier-Stokes equations}\label{s182}
\setcounter{equation}{0}

\subsection{$\big\{(-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\big\}_{0\le\alpha<1}\ \&\ \big\{Q_{\alpha}^{-1}\big\}_{0\le\alpha<1}$}\label{s182a}

As an extension of John-Nirenberg's $BMO$ space \cite{JN}, the $Q$-space $Q_\alpha$ was first studied in \cite{EsJPX}, and then in \cite{DX1,DX2}.
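For orientation, the affine invariance of $\||\cdot\||_{Q_\alpha}$ recorded in the Introduction can be checked by a one-line change of variables (a routine verification added here, not reproduced from \cite{EsJPX}): with $\phi(x)=\lambda x+x_0$, the substitution $y'=\phi(y)$, $z'=\phi(z)$ gives $dy\,dz=\lambda^{-2n}dy'dz'$, $|y-z|^{-(n+2\alpha)}=\lambda^{n+2\alpha}|y'-z'|^{-(n+2\alpha)}$ and $\phi(B(x,r))=B(\phi(x),\lambda r)$, whence
$$
r^{2\alpha-n}\iint_{B(x,r)\times B(x,r)}\frac{|f(\phi(y))-f(\phi(z))|^2}{|y-z|^{n+2\alpha}}\,dydz
=(\lambda r)^{2\alpha-n}\iint_{B(\phi(x),\lambda r)\times B(\phi(x),\lambda r)}\frac{|f(y')-f(z')|^2}{|y'-z'|^{n+2\alpha}}\,dy'dz',
$$
and taking the supremum over $(r,x)\in\mathbb R^{1+n}_+$ yields $\||f\circ\phi\||_{Q_\alpha}=\||f\||_{Q_\alpha}$.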
Among several characterizations of $Q_\alpha$, the following, as a variant of \cite[Theorem 3.3]{DX1} (expanding Fefferman-Stein's basic result for $BMO=(-\Delta)^{-0}\mathcal L_{2,n}$ in \cite{FS}), is of independent interest: given $\alpha\in [0,1)$ and a $C^\infty$ function $\psi$ on $\mathbb R^n$ with
\begin{equation}\label{e182}
\left\{\begin{array}{r@{}l} \psi\in L^1;\\ |\psi(x)|\lesssim (1+|x|)^{-(n+1)}\ \ \hbox{for}\ \ x\in\mathbb R^{n};\\ \int_{\mathbb R^n}\psi(x)dx=0;\\ \psi_t(x)=t^{-n}{\psi(\frac{x}{t})}\ \ \hbox{for}\ \ (t,x)\in\mathbb R^{1+n}_+, \end{array}\right.
\end{equation}
one has:
\begin{equation}\label{e183}
f\in (-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha} \Longleftrightarrow\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2\alpha-n}\int_0^r\Big(\int_{B(x,r)}{|f\ast\psi_t(y)|^2}\,dy\Big){t^{-1-2\alpha}}\,dt<\infty.
\end{equation}
Here $\ast$ stands for the convolution operating on the space variable and
$$
(-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}=\begin{cases} BMO\ \hbox{for}\ \alpha=0;\\ Q_\alpha\ \hbox{for}\ \alpha\in (0,1). \end{cases}
$$
Upon choosing four $\psi$-functions in (\ref{e182})-(\ref{e183}), we can get four descriptions of $(-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}$ involving the Poisson and heat semigroups. To see this, denote by $e^{-t\sqrt{-\Delta}}(\cdot,\cdot)$ and $e^{t\Delta}(\cdot,\cdot)$ the Poisson and heat kernels respectively:
$$
\left\{\begin{array}{r@{}l} e^{-t\sqrt{-\Delta}}(x,y)={\Gamma\big(\frac{n+1}{2}\big)}{\pi^{-\frac{n+1}{2}}}t{(|x-y|^2+t^2)^{-\frac{n+1}{2}}};\\ e^{t\Delta}(x,y) =(4\pi t)^{-\frac{n}{2}}{\exp\big(-\frac{|x-y|^2}{4t}\big)}. \end{array}\right.
$$
And, for $\beta\in (-\infty,\infty)$, the notation $(-\Delta)^{\frac{\beta}{2}} f$, determined by the Fourier transform $\widehat{(\cdot)}$ via $\widehat{(-\Delta)^{\frac{\beta}{2}}f}(x)=(2\pi|x|)^\beta\hat{f}(x)$, represents the $\beta/2$-th power of the Laplacian
$$
-\Delta f=-\Delta_x f=-\sum_{j=1}^n\partial_j^2 f=-\sum_{j=1}^n\frac{\partial^2 f}{\partial x_j^2}.
$$
{\it Choice 1}: If
$$
\begin{cases} \psi_{1,0}(x)=\Gamma\big(\frac{n+1}{2}\big){\pi^{-\frac{n+1}{2}}}\big(1+|x|^2-(n+1)\big){(1+|x|^2)^{-\frac{n+3}{2}}};\\ (\psi_{1,0})_t(x)=t{\partial_t}e^{-t\sqrt{-\Delta}}(x,0), \end{cases}
$$
then
\begin{equation*}\label{e184}
f\in (-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\Longleftrightarrow\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2\alpha-n}{\int_0^r\big(\int_{B(x,r)} {|{\partial_t}e^{-t\sqrt{-\Delta}}f(y)|^2}\,dy\big){t^{1-2\alpha}}dt}<\infty.
\end{equation*}
{\it Choice 2}: If
$$
\begin{cases} \psi_{1,j}(x)=-(n+1)\Gamma\big(\frac{n+1}{2}\big){\pi^{-\frac{n+1}{2}}}\,x_j\,{(1+|x|^2)^{-\frac{n+3}{2}}};\\ (\psi_{1,j})_t(x)=t{\partial_j}e^{-t\sqrt{-\Delta}}(x,0), \end{cases}
$$
then
\begin{equation*}\label{e184a}
f\in (-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\Longleftrightarrow\sup_{(r,x)\in\mathbb R^{1+n}_+}{r^{2\alpha-n}\int_0^r\big(\int_{B(x,r)} {|{\nabla_y}e^{-t\sqrt{-\Delta}}f(y)|^2}\,dy\big){t^{1-2\alpha}}dt}<\infty,
\end{equation*}
where $\nabla_y$ is the gradient with respect to the space variable $y=(y_1,...,y_n)\in\mathbb R^n$.
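For the record, the passage from the weight $t^{-1-2\alpha}$ in (\ref{e183}) to the weight $t^{1-2\alpha}$ appearing in the Choices is pure bookkeeping: since, e.g., $(\psi_{1,0})_t=t\,\partial_te^{-t\sqrt{-\Delta}}(\cdot,0)$, one has
$$
|f\ast(\psi_{1,0})_t(y)|^2\,t^{-1-2\alpha}
=\big|t\,\partial_te^{-t\sqrt{-\Delta}}f(y)\big|^2\,t^{-1-2\alpha}
=\big|\partial_te^{-t\sqrt{-\Delta}}f(y)\big|^2\,t^{1-2\alpha},
$$
and similarly for the remaining choices.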
{\it Choice 3}: If
$$
\begin{cases} \psi_{2,0}(x)=-(4\pi)^{-\frac n2}\Big(n-\frac{|x|^2}{2}\Big)\exp\Big(-\frac{|x|^2}{4}\Big);\\ (\psi_{2,0})_t(x)=t\partial_t e^{t^2\Delta}(x,0), \end{cases}
$$
then
\begin{equation*}\label{e1851}
f\in (-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\Longleftrightarrow\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2\alpha-n}{\int_0^r\big(\int_{B(x,r)}|\partial_t e^{t^2\Delta}f(y)|^2\,dy\big){t^{1-2\alpha}}dt}<\infty.
\end{equation*}
{\it Choice 4}: If
$$
\begin{cases} \psi_{2,j}(x)=-(4\pi)^{-\frac n2}\Big(\frac{x_j}{2}\Big)\exp\Big(-\frac{|x|^2}{4}\Big);\\ (\psi_{2,j})_t(x)=t{\partial_j}e^{t^2\Delta}(x,0), \end{cases}
$$
then
\begin{equation*}\label{e185}
f\in (-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\Longleftrightarrow\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2\alpha-n}{\int_0^r\big(\int_{B(x,r)}|\nabla_y e^{t^2\Delta}f(y)|^2\,dy\big){t^{1-2\alpha}}dt}<\infty.
\end{equation*}
The previous characterizations lead to the following assertion, uniting \cite[Theorem 1.2 (iii)]{Xiao} and the corresponding result on $BMO^{-1}$ in \cite{KoTa}.

\begin{theorem}\label{t184} For $\alpha\in [0,1)$ let $Q_{\alpha}^{-1}=\big((-\Delta)^{-\alpha/2}\mathcal L_{2,n-2\alpha}\big)^{-1}$ be the class of all functions $f$ on $\mathbb R^n$ with
\begin{equation}
\label{eQ-1}
\|f\|_{Q_{\alpha}^{-1}}=\sup_{(r,x)\in\mathbb R^{1+n}_+}\left(r^{2\alpha-n}{\int_0^{r^2}\big(\int_{B(x,r)}|e^{t\Delta}f(y)|^2\,dy\big)\,t^{-\alpha}dt}\right)^{\frac12}<\infty.
\end{equation}
Then
\begin{equation}\label{e1810}
\nabla\cdot\big(Q_\alpha\big)^n=\hbox{div}\big(Q_\alpha\big)^n=Q_{\alpha}^{-1}.
\end{equation}
Consequently,
\begin{equation}
\label{e1811}
0\le \alpha_1<\alpha_2<1\Longrightarrow Q_{\alpha_2}^{-1}\subseteq Q_{\alpha_1}^{-1}.
\end{equation}
\end{theorem}

\begin{proof} The argument below, taken essentially from the proofs of \cite[Lemma 2.2 and Theorem 1.2 (ii)]{Xiao}, is valid for all $\alpha\in [0,1)$.

{\it Step 1}. We prove
$$
f_{j,k}={\partial_j\partial_k} (-\Delta)^{-1}f\ \ \&\ \ f\in Q_{\alpha}^{-1}\Longrightarrow f_{j,k}\in Q^{-1}_{\alpha}\ \ \hbox{for}\ \ j,k=1,2,...,n.
$$
Taking a $C^\infty_0$ function $\phi$ with
$$
\left\{\begin{array}{r@{}l} \hbox{supp}\,\phi\subset B(0,1);\\ \int_{\mathbb R^n}\phi(x)dx=1;\\ \phi_r(x)=r^{-n}\phi(x/r);\\ g_r(t,x)=\phi_r\ast\partial_j\partial_k (-\Delta)^{-1}e^{t\Delta}f(x), \end{array}\right.
$$
we get
$$
e^{t\Delta}f_{j,k}(x)=\partial_j\partial_k (-\Delta)^{-1}e^{t\Delta}f(x)=f_r(t,x)+g_r(t,x),
$$
the last equality defining $f_r(t,x)$. Upon denoting by $\dot{B}^{1,1}_1$ the predual of the homogeneous Besov space $\dot{B}^{-1,\infty}_\infty$ (consisting of all functions $f$ on $\mathbb R^n$ with $\|e^{t\Delta}f\|_{L^\infty}\lesssim t^{-1/2}$), we find (cf. \cite[p. 160, Lemma 16.1]{Lem})
$$
f\in Q^{-1}_{\alpha}\Longrightarrow f\in BMO^{-1}\subseteq\dot{B}^{-1,\infty}_\infty\Longrightarrow \|g_r(t,\cdot)\|_{L^\infty}\le{\big\|\partial_j\partial_k (-\Delta)^{-1}e^{t\Delta}f\big\|_{\dot{B}^{-1,\infty}_\infty}} {\|\phi_r\|_{\dot{B}^{1,1}_1}}\lesssim{r^{-1}\|f\|_{\dot{B}^{-1,\infty}_\infty}},
$$
thereby reaching
\begin{equation}\label{e1812}
\int_0^{r^2}\Big(\int_{B(x,r)}|g_r(t,y)|^2\,dy\Big)t^{-\alpha}dt\lesssim r^{n-2\alpha}\|f\|^2_{\dot{B}^{-1,\infty}_\infty}\lesssim r^{n-2\alpha}\|f\|^2_{Q^{-1}_{\alpha}}.
\end{equation}
Next, taking another $C^\infty_0$ function $\psi$ with $\psi=1$ on $B(0,10)$, writing
$$
\left\{\begin{array}{r@{}l} \psi_{r,x}(y)=\psi\big(\frac{y-x}{r}\big);\\ f_r=F_{r,x}+G_{r,x};\\ G_{r,x}={\partial_j\partial_k}(-\Delta)^{-1}\big(\psi_{r,x}e^{t\Delta}f\big)-\phi_r\ast{\partial_j\partial_k}(-\Delta)^{-1}\big(\psi_{r,x}e^{t\Delta}f\big), \end{array}\right.
$$
and employing the Plancherel formula for the space variable, we find
\begin{align*}
\int_0^{r^2}\big\|{\partial_j\partial_k}(-\Delta)^{-1}\psi_{r,x}e^{t\Delta}f\big\|_{L^2}^2t^{-\alpha}dt&\lesssim\int_0^{r^2}\Big(\int_{\mathbb R^n}\big|y_jy_k|y|^{-2}\widehat{\big(\psi_{r,x}e^{t\Delta}f\big)}(y)\big|^2 dy\Big)t^{-\alpha}dt\\
&\lesssim\int_0^{r^2}\big\|{\psi_{r,x}e^{t\Delta}f}\big\|_{L^2}^2t^{-\alpha}dt.
\end{align*}
At the same time, using Minkowski's inequality (for $\phi_r$) and the Plancherel formula once again, we read off
$$
\int_0^{r^2}\big\|\phi_r\ast{\partial_j\partial_k} (-\Delta)^{-1}\psi_{r,x}e^{t\Delta}f\big\|_{L^2}^2t^{-\alpha}dt\lesssim\int_0^{r^2}\big\|{\psi_{r,x}e^{t\Delta}f}\big\|_{L^2}^2t^{-\alpha}dt.
$$
Consequently,
$$
\int_0^{r^2}\big\|G_{r,x}(t,\cdot)\big\|_{L^2}^2t^{-\alpha}dt\lesssim\int_0^{r^2}\big\|{\psi_{r,x}e^{t\Delta}f}\big\|_{L^2}^2t^{-\alpha}dt.
$$
To handle $F_{r,x}$, we apply the following inequality (cf. \cite[p. 161]{Lem})
$$
\int_{B(x,r)}|F_{r,x}(t,y)|^2\,dy\lesssim r^{n+1}\int_{\mathbb R^n\setminus B(x,10r)}{|e^{t\Delta}f(w)|^2}{|x-w|^{-(n+1)}}\,dw
$$
to obtain
$$
\int_0^{r^2}\Big(\int_{B(x,r)}|F_{r,x}(t,y)|^2dy\Big)t^{-\alpha}dt\lesssim \sum_{l=1}^\infty\int_{B(x,r10^{1+l})\setminus B(x,r10^l)} \frac{\Big(\int_0^{r^2}|e^{t\Delta}f(w)|^2t^{-\alpha}dt\Big)}{(|w-x|r^{-1})^{n+1}}\,dw\lesssim r^{n-2\alpha}{\|f\|^2_{Q^{-1}_{\alpha}}}.
$$
A combination of the above estimates for $F_{r,x}$ and $G_{r,x}$ yields
\begin{equation}\label{e1813}
\int_0^{r^2}\int_{B(x,r)}|f_{r}(t,y)|^2t^{-\alpha}dydt\lesssim r^{n-2\alpha}\|f\|^2_{Q^{-1}_{\alpha}}.
\end{equation}
Of course, both (\ref{e1812}) and (\ref{e1813}) produce $f_{j,k}\in Q_{\alpha}^{-1}$, as desired.

{\it Step 2}. We check $\nabla\cdot\big(Q_\alpha\big)^n=Q_{\alpha}^{-1}$. If $f\in\nabla\cdot\big(Q_{\alpha}\big)^n$, then there exist $f_1,...,f_n\in Q_\alpha$ such that $f=\sum_{j=1}^n{\partial_j}f_j$. Thus, an application of the Minkowski inequality derives
$$
\|f\|_{Q_{\alpha}^{-1}}\le\sum_{j=1}^n\big\|{\partial_j}f_j\big\|_{Q_{\alpha}^{-1}}\lesssim\sum_{j=1}^n\|f_j\|_{Q_{\alpha}}<\infty.
$$
Conversely, if $f\in Q_{\alpha}^{-1}$, then an application of {\it Step 1} derives $f_{j,k}={\partial_j\partial_k}(-\Delta)^{-1}f\in Q^{-1}_{\alpha}$, whence giving $f_k=-{\partial_k}(-\Delta)^{-1}f\in Q_\alpha$. So,
$$
\widehat{\sum_{k=1}^n{\partial_k}f_k}=-\sum_{k=1}^n\widehat{f_{k,k}}=\hat{f}\Longrightarrow f\in \nabla\cdot\big(Q_\alpha\big)^n.
$$

{\it Step 3}. (\ref{e1811}) follows immediately from (\ref{e1810}).
\end{proof}

\subsection{Navier-Stokes system initiated in $\{(Q_\alpha^{-1})^n\}_{0\le\alpha<1}$}\label{s182b}

Classically, the Cauchy problem for (\ref{e181}) is to establish the existence of a solution (velocity) $u=u(t,x)=\big(u_1(t,x),...,u_n(t,x)\big)$ with a pressure $p=p(t,x)$ of the fluid at time $t\in (0,\infty)$ and position $x\in\mathbb R^n$ assuming the initial data/velocity $a=a(x)=(a_1(x),...,a_n(x))$.
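In this connection let us recall why (\ref{e181}) and the integral equation (\ref{eIe}) are interchangeable; the following reduction is standard and is sketched here only for the reader's convenience. Since $\nabla\cdot u=0$ gives $u\cdot\nabla u=\nabla\cdot(u\otimes u)$, and the Leray projection $P=\{\delta_{jk}+R_jR_k\}_{j,k=1,...,n}$ annihilates gradients, i.e.
$$
(P\nabla p)_j=\partial_jp+\partial_j(-\Delta)^{-1}\sum_{k=1}^n\partial_k^2\,p=\partial_jp-\partial_jp=0,
$$
applying $P$ to the first equation of (\ref{e181}) and using $P\partial_tu=\partial_tu$ and $P\Delta u=\Delta u$ (both because $\nabla\cdot u=0$) yields $\partial_tu-\Delta u=-P\nabla\cdot(u\otimes u)$, whence Duhamel's formula produces
$$
u(t,\cdot)=e^{t\Delta}a-\int_0^te^{(t-s)\Delta}P\nabla\cdot(u\otimes u)\,ds,
$$
which is exactly (\ref{eIe}).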
Of particular importance is the invariance of (\ref{e181}) under the scaling transform:
$$
\left\{\begin{array}{r@{}l} u(t,x)\mapsto u_\lambda(t,x)=\lambda u(\lambda^2t,\lambda x);\\ p(t,x)\mapsto p_\lambda(t,x)=\lambda^2p(\lambda^2t,\lambda x);\\ a(x)\mapsto a_\lambda(x)=\lambda a(\lambda x). \end{array}\right.
$$
Namely, if $(u(t,x), p(t,x), a(x))$ solves (\ref{e181}) then $(u_\lambda(t,x), p_\lambda(t,x), a_\lambda(x))$ also solves (\ref{e181}) for any $\lambda>0$. This suggests a consideration of (\ref{e181}) with an initial data enjoying such a scaling invariance. Through the scale invariance
$$
\|a_\lambda\|_{(L^n)^n}=\sum_{j=1}^n\|(a_j)_\lambda\|_{L^n}=\|a\|_{(L^n)^n},
$$
Kato proved in \cite{Ka} that (\ref{e181}) has mild solutions locally in time if $a\in (L^n)^n$ and globally if $\|a\|_{(L^n)^n}$ is small enough (for some generalizations of Kato's result, see e.g. \cite{Ta} and \cite{Y}). Note that $\|\cdot\|_{Q_{\alpha}^{-1}}$ is invariant under the scale transform $a(x)\mapsto \lambda a(\lambda x)$. So it is natural to extend Kato's results to $\{Q^{-1}_\alpha\}_{0\le\alpha<1}$. To do this, we introduce the following concept whose case with $\alpha=0$ coincides with the space triple $(BMO^{-1}_{T},\overline{VMO}^{-1},X_T)$ in \cite{KoTa}.

\begin{definition}\label{d183} Let $(\alpha,T)\in [0,1)\times (0,\infty]$.
\item{\rm (i)} A distribution $f$ on $\mathbb R^n$ is said to be in $Q_{\alpha;T}^{-1}$ provided
$$
\|f\|_{Q^{-1}_{\alpha;T}}=\sup_{(r,x)\in (0,{T})\times\mathbb R^n}\left(r^{2\alpha-n}\int_0^{r^2}\int_{B(x,r)}|e^{t\Delta}f(y)|^2t^{-\alpha}\,dydt\right)^{\frac12}<\infty.
$$
\item{\rm (ii)} A distribution $f$ on $\mathbb R^n$ is said to be in $\overline{VQ}_\alpha^{-1}$ provided $\lim_{T\to 0}\|f\|_{Q^{-1}_{\alpha;T}}=0.$
\item{\rm (iii)} A function $g$ on $\mathbb R^{1+n}_+$ is said to be in $X_{\alpha;T}$ provided
$$
\|g\|_{X_{\alpha;T}}=\sup_{t\in (0,T)}{t}^{\frac12}\|g(t,\cdot)\|_{L^\infty}+\sup_{(r,x)\in (0,{T})\times\mathbb R^n} \left(r^{2\alpha-n}\int_0^{r^2}\int_{B(x,r)}|g(t,y)|^2t^{-\alpha}dydt\right)^{\frac12}<\infty.
$$
\end{definition}

Clearly, if $0\le\alpha_1\le\alpha_2<1$ then $X_{\alpha_2;T}\subseteq X_{\alpha_1;T}$. Moreover, one has:
$$
\left\{\begin{array}{r@{}l} f_\lambda(x)=\lambda f(\lambda x);\\ g_\lambda(t,x)=\lambda g(\lambda^2 t,\lambda x);\\ (\lambda,t,x)\in (0,\infty)\times(0,\infty)\times\mathbb R^n, \end{array}\right. \Longrightarrow \left\{\begin{array}{r@{}l} \|f_\lambda\|_{Q^{-1}_{\alpha;\infty}}=\|f\|_{Q^{-1}_{\alpha;\infty}};\\ \|g_\lambda\|_{X_{\alpha;\infty}}=\|g\|_{X_{\alpha;\infty}}. \end{array}\right.
$$
Also, recalling (cf. \cite{CaMePl})
$$
f\in \dot{B}_{p,\infty}^{-1+\frac np}\ \hbox{under}\ p>n\Longleftrightarrow\|e^{t\Delta}f\|_{L^p}\lesssim t^{\frac{n-p}{2p}}\ \hbox{for\ all}\ t>0,
$$
one has
$$
p>n>\alpha p\Longrightarrow L^n\subseteq \dot{B}_{p,\infty}^{-1+\frac np} \subseteq Q^{-1}_{\alpha;\infty}=Q^{-1}_\alpha,
$$
which follows from the H\"older-inequality-based calculation for $r\in (0,1)$:
$$
\int_0^{r^2}\int_{B(x,r)}|e^{t\Delta}f(y)|^2t^{-\alpha}dydt\lesssim r^{n(1-\frac2p)}\int_0^{r^2}\|e^{t\Delta}f\|_{L^p}^2 t^{-\alpha}dt\lesssim r^{n-2\alpha}.
$$
In order to establish the existence and uniqueness of a mild solution of (\ref{e181}) with an initial data in $(Q_\alpha^{-1})^n$, we need two lemmas.

\begin{lemma}\label{l181} Given $(\alpha,T)\in [0,1)\times(0,\infty]$ and a function $f(\cdot,\cdot)$ on $\mathbb R^{1+n}_+$, let
$$
\mathsf{I}(f,t,x)=\int_0^t e^{(t-s)\Delta}\Delta f(s,x)ds\quad\forall\quad (t,x)\in\mathbb R^{1+n}_+.
$$
Then
\begin{equation}\label{e1814}
\int_0^T\big\|\mathsf{I}(f,t,\cdot)\big\|_{L^2}^2t^{-\alpha}dt\lesssim\int_0^T\big\|f(t,\cdot)\big\|_{L^2}^2t^{-\alpha}dt.
\end{equation}
\end{lemma}

\begin{proof} This lemma and its proof are basically the same as \cite[Lemma 3.1]{Xiao} and its argument under $\alpha\in (0,1)$. It is enough to verify (\ref{e1814}) for $T=\infty$, thanks to three facts: (i) $\mathsf{I}(f,\cdot,\cdot)$ depends only on the values of $f$ on $(0,t)\times\mathbb R^n$; (ii) if $T<\infty$ then one can extend $f$ by letting $f=0$ on $(T,\infty)$; (iii) we can define $f(\cdot)=0=\mathsf{I}(f,t,\cdot)$ for $t\in (-\infty,0)$. Through defining
$$
\kappa(t,x)=\left\{\begin{array}{r@{}l} \Delta e^{t\Delta}(x,0)\ \ \hbox{for}\ \ t>0;\\ 0\ \ \hbox{for}\ \ t\le 0, \end{array}\right.
$$
we get
$$
\mathsf{I}(f,t,x)=\int_{\mathbb R}\int_{\mathbb R^n}\kappa(t-s,x-y)f(s,y)dyds,
$$
whence finding that $\mathsf{I}$ is actually a convolution operator over $\mathbb R^{1+n}$. Due to
$$
\widehat{\kappa(t,\cdot)}(\zeta)=\int_{\mathbb R^n}\kappa(t,x)\exp(-2\pi i x\cdot \zeta)dx=-(2\pi)^2|\zeta|^2\exp\big(-(2\pi)^2t|\zeta|^2\big),
$$
we have
\begin{align*}
\widehat{\mathsf{I}(f,t,\cdot)}(\zeta)&=\int_{\mathbb R^{1+n}}f(s,y)\left(\int_{\mathbb R^n}\kappa(t-s,v)\exp(-2\pi i (v+y)\cdot\zeta)dv\right)dyds\\
&=-(2\pi)^2\int_0^t|\zeta|^2\exp\big(-(2\pi)^2(t-s)|\zeta|^2\big)\widehat{f(s,\cdot)}(\zeta)ds.
\end{align*}
This last formula, along with the Fubini theorem and the Plancherel formula, derives
\begin{align*}
\int_0^\infty\big\|\mathsf{I}(f,t,\cdot)\big\|^2_{L^2}t^{-\alpha}dt&\lesssim\int_0^\infty\left(\int_{\mathbb R^n}\Big(\int_0^t\frac{|\zeta|^2|\widehat{f(s,\cdot)}(\zeta)|}{\exp\big((2\pi)^2(t-s)|\zeta|^2\big)}\,ds\Big)^2d\zeta\right)t^{-\alpha}dt\\
&\approx\int_{\mathbb R^n}\left(\int_0^\infty\Big(\int_0^\infty \frac{\big(1_{\{0\le s\le t\}}\big)|\zeta|^2|\widehat{f(s,\cdot)}(\zeta)|}{\exp\big((2\pi)^2(t-s)|\zeta|^2\big)}ds\Big)^2t^{-\alpha}dt\right)d\zeta.
\end{align*}
This indicates that if one can verify
\begin{equation}\label{e1815}
\int_0^\infty\left(\int_0^\infty \Big(1_{\{0\le s\le t\}}\Big)\frac{|\zeta|^2|\widehat{f(s,\cdot)}(\zeta)|}{\exp\big((t-s)|\zeta|^2\big)}ds\right)^2t^{-\alpha}dt\lesssim\int_0^\infty |\widehat{f(t,\cdot)}(\zeta)|^2t^{-\alpha}dt,
\end{equation}
then the Plancherel formula can be used once again to produce
$$
\int_0^\infty\big\|\mathsf{I}(f,t,\cdot)\big\|_{L^2}^2t^{-\alpha}dt \lesssim\int_0^\infty\big\|f(t,\cdot)\big\|_{L^2}^2t^{-\alpha}dt,
$$
as required. To prove (\ref{e1815}), let us rewrite its left side as
$$
\int_0^\infty\left(\int_0^\infty K(s,t)F(s,\zeta)ds\right)^2dt,
$$
where
$$
\left\{\begin{array}{r@{}l} F(s,\zeta)=s^{-\frac{\alpha}{2}}|\widehat{f(s,\cdot)}(\zeta)|;\\ K(s,t)=\big(1_{\{0\le s\le t\}}\big)\big(\frac{s}{t}\big)^{\frac{\alpha}{2}}{|\zeta|^2}{\exp\big(-(t-s)|\zeta|^2\big)}. \end{array}\right.
$$
A simple calculation shows
$$
\left\{\begin{array}{r@{}l} \int_0^\infty K(s,t)ds=|\zeta|^2\int_0^t\big(\frac{s}{t}\big)^{\frac{\alpha}{2}}\exp(-(t-s)|\zeta|^2)ds\lesssim 1;\\ \int_0^\infty K(s,t)dt=|\zeta|^2\int_s^\infty\big(\frac{s}{t}\big)^{\frac{\alpha}{2}}\exp(-(t-s)|\zeta|^2)dt\lesssim 1, \end{array}\right.
$$
and then an application of the Schur lemma gives
$$
\int_0^\infty\left(\int_0^\infty K(s,t)F(s,\zeta)ds\right)^2dt\lesssim\int_0^\infty \big(F(t,\zeta)\big)^2dt,
$$
as desired.
\end{proof}

\begin{lemma}\label{l182} Given $\alpha\in [0,1)$ and a function $f$ on $(0,1)\times\mathbb R^n$, let
$$
\mathsf{J}(f;\alpha)=\sup_{(r,x)\in(0,1)\times\mathbb R^n}r^{2\alpha-n}\int_0^{r^2}\int_{B(x,r)}|f(t,y)|t^{-\alpha}dtdy.
$$
Then
\begin{equation}\label{e1816}
\int_0^1\Big\|\sqrt{-\Delta}e^{t\Delta}\int_0^t f(s,\cdot)ds\Big\|_{L^2}^2t^{-\alpha}dt\lesssim \mathsf{J}(f;\alpha)\int_0^1\big\|f(t,\cdot)\big\|_{L^1}t^{-\alpha}dt.
\end{equation}
\end{lemma}

\begin{proof} This lemma and its argument follow from \cite[Lemma 3.2]{Xiao} and its proof. To be short, let $\langle\cdot,\cdot\rangle$ be the inner product in $L^2$ with respect to the space variable $x\in\mathbb R^n$. Then
\begin{align*}
\|\cdots\|_{L^2}^2&=\int_{\mathbb R^n}\Big|\sqrt{-\Delta}e^{t\Delta}\int_0^t f(s,y)ds\Big|^2 dy\\
&=\int_0^t\int_0^t\left\langle \sqrt{-\Delta}e^{t\Delta}f(s,\cdot), \sqrt{-\Delta}e^{t\Delta}f(h,\cdot)\right\rangle dsdh.
\end{align*}
Consequently,
\begin{align*}
\int_0^1\|\cdots\|_{L^2}^2t^{-\alpha}dt&\lesssim\iint_{0<h<s<1}\left\langle |f(s,\cdot)|, (e^{2s\Delta}-e^{2\Delta})|f(h,\cdot)|\right\rangle\,s^{-\alpha} dsdh\\
&\lesssim\left(\int_0^1\|f(s,\cdot)\|_{L^1}\,{s^{-\alpha}ds}\right)\sup_{s\in (0,1]}\left\|\int_0^s e^{2s\Delta}|f(h,\cdot)|dh\right\|_{L^\infty}.
\end{align*}
From \cite[p. 163]{Lem} it follows that
$$
\sup_{(s,z)\in(0,1]\times\mathbb R^n}\int_0^s e^{2s\Delta}|f(h,z)|dh \lesssim\sup_{(r,x)\in(0,1)\times\mathbb R^n}r^{-n}\int_0^{r^2}\int_{B(x,r)}|f(s,y)|dyds,
$$
and hence
$$
\sup_{(s,z)\in(0,1]\times\mathbb R^n}\int_0^s e^{2s\Delta}|f(h,z)|dh\lesssim\sup_{(r,x)\in(0,1)\times\mathbb R^n}{r^{2\alpha-n}\int_0^{r^2}\int_{B(x,r)}|f(s,y)|\,{s^{-\alpha}}dsdy}.
$$
This in turn implies
$$
\int_0^1\|\cdots\|_{L^2}^2t^{-\alpha}dt\lesssim \mathsf{J}(f;\alpha)\int_0^1\|f(s,\cdot)\|_{L^1}\, {s^{-\alpha}ds},
$$
whence giving (\ref{e1816}).
\end{proof}

Below is the existence and uniqueness result for a mild solution to (\ref{e181}) established in \cite{KoTa, Xiao}.

\begin{theorem}\label{t183} Let $\alpha\in [0,1)$. Then
\item {\rm (i)} (\ref{e181}) has a unique small global mild solution $u$ in $(X_{\alpha;\infty})^n$ for all initial data $a$ with $\nabla\cdot a=0$ and $\|a\|_{(Q^{-1}_{\alpha})^n}$ being small.
\item{\rm (ii)} For any $T\in (0,\infty)$ there is an $\epsilon>0$ such that (\ref{e181}) has a unique small mild solution $u$ in $(X_{\alpha;T})^n$ on $(0,T)\times\mathbb R^n$ when the initial data $a$ satisfies $\nabla\cdot a=0$ and $\|a\|_{(Q^{-1}_{\alpha;T})^n}\le\epsilon$. Consequently, for all $a\in \big(\overline{VQ}_\alpha^{-1}\big)^n$ with $\nabla\cdot a=0$ there exists a unique small local mild solution $u$ in $(X_{\alpha;T})^n$ on $(0,T)\times\mathbb R^n$.
\end{theorem}

\begin{proof} For completeness, we give a proof based on a slight improvement of the argument for \cite[Theorem 1.4 (i)-(ii)]{Xiao}. Notice that the following estimate for a distribution $f$ on $\mathbb R^n$ (cf. \cite[Lemma 16.1]{Lem})
$$
\|e^{t\Delta}f\|_{L^\infty}^2\lesssim t^{-(1+\frac{n}{2})}\sup_{x\in\mathbb R^n}\int_0^t\int_{B(x,\sqrt{t})}|e^{s\Delta}f(y)|^2\,dyds\quad\forall\quad t\in (0,\infty)
$$
implies
$$
{t}^{\frac12}\|e^{t\Delta}f\|_{L^\infty}\lesssim\|f\|_{Q_{0;T}^{-1}}\lesssim\|f\|_{Q_{\alpha;T}^{-1}}\quad\hbox{for}\quad 0<t<T\le\infty.
$$
So, according to the Picard contraction principle (see e.g. \cite[p. 145, Theorem 15.1]{Lem}), we know that verifying Theorem \ref{t183} via the integral equation (\ref{eIe}) amounts to showing that the bilinear operator
$$
\mathsf B(u,v;t)=\int_0^t e^{(t-s)\Delta} P\nabla\cdot(u\otimes v)\,ds
$$
is bounded from $(X_{\alpha;T})^n\times(X_{\alpha;T})^n$ to $(X_{\alpha;T})^n$. Of course, $u\in (X_{\alpha;T})^n$ and $a\in (Q^{-1}_{\alpha;T})^n$ are respectively equipped with the norms:
$$
\left\{\begin{array}{r@{}} \|u\|_{(X_{\alpha;T})^n}=\sum_{j=1}^n\|u_j\|_{X_{\alpha;T}};\\ \|a\|_{(Q^{-1}_{\alpha;T})^n}=\sum_{j=1}^n\|a_j\|_{Q^{-1}_{\alpha;T}}. \end{array}\right.
$$

{\it Step 1}. We are about to show the $L^\infty$-bound:
\begin{equation}\label{e1817}
|\mathsf B(u,v;t)|\lesssim t^{-\frac12}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n}\ \ \forall\ \ t\in (0,T).
\end{equation}
Indeed, if $\frac{t}{2}\le s<t$ then
$$
\|e^{(t-s)\Delta} P\nabla\cdot(u\otimes v)\|_{L^\infty}\lesssim(t-s)^{-\frac12}{\|u\|_{L^\infty}\|v\|_{L^\infty}}\lesssim\big(s(t-s)^{\frac12}\big)^{-1}{\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n}}.
$$
Meanwhile, if $0<s<\frac{t}{2}$ then
$$
|e^{(t-s)\Delta}P\nabla\cdot(u\otimes v)|\lesssim\int_{\mathbb R^n}\frac{|u(s,y)||v(s,y)|}{({t}^{\frac12}+|x-y|)^{n+1}}dy\lesssim\sum_{{k}\in\mathbb Z^n}\big({t}^{\frac12}(1+|k|)\big)^{-(n+1)}\int_{x-y\in{t}^{\frac12}(k+[0,1]^n)}|u(s,y)||v(s,y)|\,dy.
$$
The Cauchy-Schwarz inequality is applied to imply
$$
\int_0^t\int_{x-y\in t^{\frac12}(k+[0,1]^n)}|u(s,y)||v(s,y)|dyds \lesssim t^{\frac{n}{2}}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n}.
$$
These inequalities in turn derive
\begin{align*}
|\mathsf B(u,v;t)|&\lesssim\int_0^{\frac t2}|e^{(t-s)\Delta}P\nabla\cdot(u\otimes v)|ds+\int_{\frac t2}^t|e^{(t-s)\Delta}P\nabla\cdot(u\otimes v)|ds\\
&\lesssim \left(t^{-\frac12}+\int_{\frac{t}{2}}^t s^{-1}(t-s)^{-\frac12}ds\right)\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n}\\
&\lesssim t^{-\frac12}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n},
\end{align*}
producing (\ref{e1817}).

{\it Step 2}. We are about to prove the $L^2$-bound:
\begin{equation}\label{e1818}
r^{2\alpha-n}\int_0^{r^2}\int_{B(x,r)}|\mathsf B(u,v;s)|^2s^{-\alpha}dyds\lesssim\|u\|^2_{(X_{\alpha;T})^n}\|v\|^2_{(X_{\alpha;T})^n}\ \ \forall\ \ (r^2,x)\in (0,T)\times\mathbb R^n.
\end{equation}
In fact, if
$$
\left\{\begin{array}{r@{}} 1_{r,x}=1_{B(x,10 r)};\\ \mathsf B(u,v;t)=\mathsf B_1(u,v;t)-\mathsf B_2(u,v;t)-\mathsf B_3(u,v;t);\\ \mathsf B_1(u,v;t)=\int_0^t e^{(t-h)\Delta}P\nabla\cdot\big((1-1_{r,x})u\otimes v\big)dh;\\ \mathsf B_2(u,v;t)=(-\Delta)^{-\frac 12}P\nabla\cdot\int_0^t e^{(t-h)\Delta}\Delta \Big((-\Delta)^{-\frac12}(I-e^{h\Delta})\big(1_{r,x}u\otimes v\big)\Big)dh;\\ \mathsf B_3(u,v;t)=(-\Delta)^{-\frac 12}P\nabla\cdot(-\Delta)^{\frac12} e^{t\Delta}\Big(\int_0^t\big(1_{r,x}u\otimes v\big)dh\Big);\\ I=\hbox{the\ identity\ operator}, \end{array}\right.
$$
then one has the following consideration under $0<t<r^2$ and $|y-x|<r$. First, we utilize the Cauchy-Schwarz inequality to get
\begin{align*}
|\mathsf B_1(u,v;t)|&\lesssim\int_0^t\int_{\mathbb R^n\setminus B(x,10r)}\frac{|u(h,z)||v(h,z)|}{((t-h)^{\frac12}+|y-z|)^{n+1}}dzdh\\
&\lesssim\int_0^{r^2}\int_{\mathbb R^n\setminus B(x,10r)}{|u(h,z)||v(h,z)|}{|x-z|^{-(n+1)}}dzdh\\
&\lesssim\left(\int_0^{r^2}\int_{\mathbb R^n\setminus B(x,10r)}{|u(h,z)|^2}{|x-z|^{-(n+1)}}dzdh\right)^{\frac12}\left(\int_0^{r^2}\int_{\mathbb R^n\setminus B(x,10r) }{|v(h,z)|^2}{|x-z|^{-(n+1)}}dzdh\right)^{\frac12}\\
&\lesssim r^{-1}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n},
\end{align*}
whence obtaining
$$
\int_0^{r^2}\int_{B(x,r)}|\mathsf B_1(u,v;t)|^2t^{-\alpha}dydt\lesssim r^{n-2\alpha}\|u\|_{(X_{\alpha;T})^n}^2\|v\|_{(X_{\alpha;T})^n}^2.
$$
Next, for $\mathsf B_2(u,v;t)$ set
$$
\mathsf M(h,y)=1_{r,x}(u\otimes v)=1_{r,x}(y)\big(u(h,y)\otimes v(h,y)\big).
$$
From the $L^2$-boundedness of the Riesz transform and Lemma \ref{l181} it follows that
$$
\int_0^{r^2}\big\|\mathsf B_2(u,v;t)\big\|_{L^2}^2\,{t^{-\alpha}}dt \lesssim\int_0^{r^2}\left\|(-\Delta)^{-\frac12}(I-e^{s\Delta})\mathsf M(s,\cdot)\right\|_{L^2}^2\,{s^{-\alpha}}ds.
$$
Note that $\sup_{s\in (0,\infty)}s^{-1}(1-\exp(-s^2))<\infty$. So, $(-\Delta)^{-\frac12}(I-e^{s\Delta})$ is bounded on $L^2$ with operator norm $\lesssim {s}^{\frac12}$. This fact, along with the Cauchy-Schwarz inequality, implies
$$
\int_0^{r^2}\big\|\mathsf B_2(u,v;t)\big\|_{L^2}^2\,{t^{-\alpha}dt}\lesssim r^{n-2\alpha}\|u\|_{(X_{\alpha;T})^n}^2\|v\|_{(X_{\alpha;T})^n}^2.
$$
In a similar manner, we establish the following estimate for $\mathsf B_3(u,v;t)$:
$$
\int_0^{r^2}\big\|\mathsf B_3(u,v;t)\big\|_{L^2}^2\,{t^{-\alpha}dt}\lesssim r^{4+n-2\alpha}\int_0^1\left\|(-\Delta)^{\frac12} e^{\tau\Delta}\int_0^\tau |\mathsf M(r^2\theta,r\cdot)|d\theta\right\|_{L^2}^2\,{\tau^{-\alpha}}d\tau.
$$
Note that Lemma \ref{l182} ensures that if
$$
\mathsf K(\mathsf M;\alpha)=\sup_{(\rho,z)\in (0,1)\times\mathbb R^n}\rho^{2\alpha-n}\int_0^{\rho^2}\int_{B(z,\rho)} |\mathsf M(r^2\theta,rw)|\theta^{-\alpha}dwd\theta
$$
then
$$
\int_0^1\left\|(-\Delta)^{\frac12} e^{\tau\Delta}\int_0^\tau |\mathsf M(r^2\theta,r\cdot)|d\theta\right\|_{L^2}^2\,{\tau^{-\alpha}d\tau}\lesssim \mathsf K(\mathsf M;\alpha)\int_0^1\Big\|\mathsf M(r^2\theta,r\cdot)\Big\|_{L^1}\,{\theta^{-\alpha}d\theta}.
$$
So, the easily-verified estimates
$$
\left\{\begin{array}{r@{}} \mathsf K(\mathsf M;\alpha)\lesssim r^{-2}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n};\\ \int_0^1\|\mathsf M(r^2\theta,r\cdot)\|_{L^1}\,{\theta^{-\alpha}d\theta}\lesssim r^{-2}\|u\|_{(X_{\alpha;T})^n}\|v\|_{(X_{\alpha;T})^n} \end{array}\right.
$$ yield $$ \int_0^{r^2}\|\mathsf B_3(u,v;t)\|_{L^2{}}^2t^{-\alpha}dt\lesssim r^{n-2\alpha}\|u\|^2_{(X_{\alpha;T}{})^n}\|v\|^2_{(X_{\alpha;T}{})^n}. $$ Putting the estimates for $\{\mathsf B_j(u,v)\}_{j=1}^3$ together, we reach (\ref{e1818}). Finally, the boundedness of $\mathsf B(\cdot,\cdot;t): (X_{\alpha;T}{})^n\times(X_{\alpha;T}{})^n\to (X_{\alpha;T}{})^n$ follows from both (\ref{e1817}) and (\ref{e1818}). Of course, $T=\infty$ and $T\in (0,\infty)$ assure (i) and (ii) respectively. \end{proof} \section{$\lim_{\alpha\to 1}Q^{-1}_\alpha$ and its Navier-Stokes equations}\label{s183} \setcounter{equation}{0} \subsection{$(-\Delta)^{-1/2}L_{2,n-2}\ \&\ \lim_{\alpha\to 1}Q^{-1}_\alpha$}\label{s183a} A careful observation of the analysis carried out in Section \ref{s182} reveals that one cannot take $\alpha=1$ in those lemmas and theorems. But, upon recalling $$ Q_\alpha=(-\Delta)^{-\alpha/2}\mathcal{L}_{2,n-2\alpha}\quad\forall\quad\alpha\in (0,1), $$ for which the proof given in the first group of estimates on \cite[p.
234]{Xiao} unfortunately contains five typos and the correct formulation reads as: \begin{align*} |(\psi_0)_t\ast f_2(y)|&\lesssim\int_{\mathbb R^n\setminus 2B}\frac{t|f(z)-f_{2B}|}{(t+|x-z|)^{n+1}}\,dz\\ &\lesssim\int_{\mathbb R^n\setminus 2B}\frac{t|f(z)-f_{2B}|}{|x-z|^{n+1}}\,dz\\ &\lesssim t\sum_{k=1}^\infty\int_{B_k}\frac{|f(z)-f_{2B}|}{|x-z|^{n+1}}\,dz\\ &\lesssim t\sum_{k=1}^\infty(2^kr)^{-(n+1)}\int_{B_k}|f-f_{2B}|\,dz\\ &\lesssim t r^{-(1+\alpha)}\|f\|_{\mathcal{L}_{2,n-2\alpha}}, \end{align*} and considering the limiting process of (\ref{eQ-1}) as $\alpha\to 1$ via the fact that $(1-\alpha)t^{-\alpha}dt$ converges weak-$\ast$ as $\alpha\to 1$ to the point-mass at $0$ and that $\int_{B(x,r)}|e^{t\Delta}f(y)|^2dy$ approaches $\int_{B(x,r)}|f(y)|^2dy$ as $t\to 0$, in Theorem \ref{t184}, (\ref{eQ-1}) and Definition \ref{d183} we can naturally define the limiting space $\lim_{\alpha\to 1}Q^{-1}_{\alpha}$ as the square Morrey space $L_{2,n-2}{}$ (cf. \cite{M}), the class of all $L^2_{loc}{}$-functions $f$ with \begin{equation} \label{mo} \|f\|_{L_{2,n-2}}=\sup_{(r,x)\in\mathbb R^{1+n}_+}\left(r^{2-n}\int_{B(x,r)}|f(y)|^2\,dy\right)^\frac12<\infty. \end{equation} In the light of (\ref{mo}) and a result on the Riesz operator $(-\Delta)^{-1/2}$ acting on the square Morrey space in \cite{A3}, we have $$ \left\{\begin{array}{r@{}} (-\Delta)^{-1/2}L_{2,n-2}\subseteq BMO;\\ L_{2,n-2}\subseteq BMO^{-1};\\ f_\lambda(x)=\lambda f(\lambda x)\ \ \forall\ \ (\lambda,x)\in\mathbb R^{1+n}_+;\\ \|f_\lambda\|_{L_{2,n-2}}=\|f\|_{L_{2,n-2}}\ \ \forall\ \ \lambda\in (0,\infty). \end{array} \right.
$$ Here it is worth pointing out that $(-\Delta)^{-1/2}L_{2,n-2}{}$ is also affine invariant under the norm $$ \|f\|_{(-\Delta)^{-1/2}L_{2,n-2}}=\|(-\Delta)^\frac12 f\|_{L_{2,n-2}}. $$ To see this, note that $$ f\in (-\Delta)^{-1/2}L_{2,n-2}\Longleftrightarrow f(x)=\int_{\mathbb R^n}{g(y)}{|y-x|^{1-n}}\,dy\ \ \hbox{for\ some}\ \ g\in L_{2,n-2}. $$ So, a simple computation gives $$ \begin{cases} f(\lambda x+x_0)=\int_{\mathbb R^n}{G_\lambda(y)}{|y-x|^{1-n}}\,dy\ \ \hbox{with}\\ G_\lambda(x)=\lambda g(\lambda x+x_0)\ \&\ \|G_\lambda\|_{L_{2,n-2}}=\|g\|_{L_{2,n-2}}. \end{cases} $$ The following assertion supports the above limiting process. \begin{theorem}\label{p181} $ (-\Delta)^{-\frac12}L_{2,n-2}\subseteq\cap_{\alpha\in (0,1)}Q_\alpha\ \&\ L_{2,n-2}\subseteq\cap_{\alpha\in (0,1)}{Q}^{-1}_{\alpha}. $ \end{theorem} \begin{proof} Let $\alpha\in (0,1)$ be given. For $f\in (-\Delta)^{-\frac12}L_{2,n-2}\subseteq BMO$, $j\in\mathbb Z$ and a Schwartz function $\psi$, let $$ \begin{cases} f=(-\Delta)^{-\frac12}g;\\ \psi_j(x)=2^{jn}\psi(2^jx);\\ \Delta_j(f)(x)=\psi_j\ast f(x);\\ \widehat{\Delta'_j(f)}(x)=|2^{j}x|^\alpha\hat{\psi}(2^{-j}x)\hat{f}(x);\\ \hbox{supp}\hat{\psi}\subset\{y\in\mathbb R^n:\ 2^{-1}\le|y|\le 2\};\\ \sum_{j}\hat{\psi}_j\equiv 1.
\end{cases} $$ A simple computation gives that for any cube $I$ (whose edges are parallel to the coordinate axes) in $\mathbb R^n$ with side length $\ell(I)$, \begin{equation} \label{tt} \ell(I)^{2\alpha-n}\iint_{I\times I}{|f(x)-f(y)|^2}{|x-y|^{-(n+2\alpha)}}\,dxdy\lesssim T_1(I)+T_2(I), \end{equation} where $$ \begin{cases} T_1(I)=\ell(I)^{2\alpha-n}\iint_{I\times I}{\Big|\sum_{j<-\log_2\ell(I)}\Delta_j(f)(x)-\sum_{j<-\log_2 \ell(I)}\Delta_j(f)(y)\Big|^2}{|x-y|^{-(n+2\alpha)}}\,dxdy;\\ T_2(I)=\ell(I)^{2\alpha-n}\iint_{I\times I}{\Big|\sum_{j\ge-\log_2\ell(I)}\Delta_j(f)(x)-\sum_{j\ge-\log_2 \ell(I)}\Delta_j(f)(y)\Big|^2}{|x-y|^{-(n+2\alpha)}}\,dxdy. \end{cases} $$ According to \cite[(3.2)]{QLi} and the last estimate for $\mathbb{IV}$ in \cite{QLi} as well as \cite[(22)]{Bar}, we get \begin{equation} \label{ttt} \begin{cases} \sup_I T_1(I)\lesssim\|f\|^2_{BMO}\sup_I\ell(I)^{2\alpha-n-2}\int_I\int_I|x-y|^{2-2\alpha-n}\,dxdy\lesssim\|g\|_{L_{2,n-2}}^2;\\ \sup_I T_2(I)\lesssim\sup_I \ell(I)^{2\alpha-n}\sum_{j\ge-\log_2\ell(I)}2^{2\alpha j}\|(-\Delta)^\frac12\Delta'_j g\|^2_{L^2(I)}\lesssim\|g\|^2_{L_{2,n-2}}. \end{cases} \end{equation} Each $\sup_I$ in (\ref{ttt}) ranges over all cubes $I$ with edges parallel to the coordinate axes. Thus, $f\in Q_\alpha$ follows from (\ref{tt}) and (\ref{ttt}) as well as (\ref{eQ}). This shows the first inclusion of Theorem \ref{p181}. Next, suppose $f\in L_{2,n-2}$.
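The ``easily-verified'' uniform boundedness of the heat semigroup on $L_{2,n-2}{}$ used just below can be seen, for instance, via Minkowski's integral inequality: writing $e^{t\Delta}f=G_t\ast f$ with the Gaussian kernel $G_t$ (so that $\|G_t\|_{L^1{}}=1$), for every ball $B(x,r)$ one has $$ \left(\int_{B(x,r)}|G_t\ast f(y)|^2\,dy\right)^\frac12\le\int_{\mathbb R^n}G_t(z)\left(\int_{B(x-z,r)}|f(w)|^2\,dw\right)^\frac12 dz\le r^\frac{n-2}{2}\|f\|_{L_{2,n-2}}, $$ whence $\sup_{t\in (0,\infty)}\|e^{t\Delta}f\|_{L_{2,n-2}}\le\|f\|_{L_{2,n-2}}$.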
Then the easily-verified uniform boundedness of the map $f\mapsto e^{t\Delta}f$ on $L_{2,n-2}{}$, i.e., $$ \sup_{t\in (0,\infty)}\|e^{t\Delta}f\|_{L_{2,n-2}{}}\lesssim \|f\|_{L_{2,n-2}}, $$ yields $$ r^{2\alpha-n}\int_0^{r^2}\Big(\int_{B(x,r)}|e^{s\Delta}f|^2\,dy\Big)\,{s^{-\alpha}ds} \lesssim r^{2(\alpha-1)}\int_0^{r^2}\|e^{s\Delta}f\|^2_{L_{2,n-2}}\,s^{-\alpha}ds\lesssim\|f\|^2_{L_{2,n-2}}, $$ which gives $f\in Q^{-1}_{\alpha}$ and verifies the second inclusion of Theorem \ref{p181}. \end{proof} \subsection{Navier-Stokes equations initiated in $(\lim_{\alpha\to 1}Q_\alpha^{-1})^n$}\label{s183b} When applying $\nabla$ to $\big((-\Delta)^{-1/2}L_{2,n-2}\big)^n$ or $\big((-\Delta)^{-1/2}\lim_{\alpha\to 1}Q_\alpha^{-1}\big)^n$ (cf. Theorem \ref{t184}), we are led to consider $L_{2,n-2}$ in a further study of (\ref{e181}). To see this clearly, let us introduce the following definition. \begin{definition}\label{d184} \item{\rm (i)} A function $f\in L_{2,n-2}{}$ is said to be in $VL_{2,n-2}{}$ provided that for any $\epsilon>0$ there is a $C^\infty_0{}$ function $h$ such that $\|f-h\|_{L_{2,n-2}{}}<\epsilon$; namely, $VL_{2,n-2}{}$ is the closure of $C^\infty_0{}$ in $L_{2,n-2}{}$. \item{\rm (ii)} Given $T\in (0,\infty]$, a function $g\in L^2_{loc}((0,T)\times\mathbb R^n)$ is said to be in $X_{2,n-2;T}$ provided $$ \|g\|_{X_{2,n-2;T}}=\sup_{t\in (0,T)}{t}^\frac12\|g(t,\cdot)\|_{L^\infty}+\sup_{t\in (0,T)}\|g(t,\cdot)\|_{L_{2,n-2}}<\infty. $$ \end{definition} Related to Theorem \ref{p181} is the inclusion $X_{2,n-2;T}\subseteq\cap_{\alpha\in (0,1)}{X}_{\alpha;T}$, which follows from $$ \int_0^{r^2}\int_{B(x,r)}|g(t,y)|^2\,t^{-\alpha}dydt\lesssim r^{n-2}\|g(t,\cdot)\|^2_{L_{2,n-2}}\int_0^{r^2}t^{-\alpha}\,dt\lesssim r^{n-2\alpha}.
$$ As a limiting case $\alpha\to 1$ of Theorem \ref{t183}, we have the following generalization of the 3D result \cite[Theorem 1 (A)-(B)]{Lem1} (cf. \cite{Ka1, Lem2}) on the existence of a mild solution to (\ref{e181}) under $a=(a_1,...,a_n)\in (L_{2,n-2}{})^n$ and $\|a\|_{(L_{2,n-2}{})^n}=\sum_{j=1}^n\|a_j\|_{L_{2,n-2}{}}.$ \begin{theorem}\label{t183a} \item {\rm (i)} (\ref{e181}) has a small global mild solution $u$ in $(X_{2,n-2;\infty})^n$ for all initial data $a=(a_1,...,a_n)$ with $\nabla\cdot a=0$ and $\|a\|_{(L_{2,n-2}{})^n}$ small. \item{\rm (ii)} For any $a=(a_1,...,a_n)\in \big({VL_{2,n-2}}{}\big)^n$ with $\nabla\cdot a=0$ there exists a $T>0$ depending on $a$ such that (\ref{e181}) has a small local mild solution $u$ in $C\big([0,T],(L_{2,n-2})^n\big)$. \end{theorem} \begin{proof} To prove this assertion, for $T\in (0,\infty]$ we introduce the intermediate space $X_{4,2;T}$ of all functions $g$ on $\mathbb R^{1+n}_+$ with $$ \|g\|_{X_{4,2;T}}=\sup_{t\in (0,T)}{t}^\frac12\|g(t,\cdot)\|_{L^\infty}+\sup_{t\in (0,T)}t^\frac14\|g(t,\cdot)\|_{L_{4,n-2}}<\infty, $$ where $$ \|g(t,\cdot)\|_{L_{4,n-2}}=\left(\sup_{(r,x)\in\mathbb R^{1+n}_+}r^{2-n}\int_{B(x,r)}|g(t,y)|^4\,dy\right)^\frac14. $$ Note the following estimate for $f\in L_{2,n-2}{}$ (cf.
\cite[Theorem 18.1]{Lem}): \begin{equation} \label{ete} |e^{t\Delta}f(x)|\lesssim\sum_{k\in\mathbb Z^n}\sup_{z\in k+[0,1]^n} \exp\Big(-\frac{|z|^2}{4}\Big)\int_{k+[0,1]^n}|f(x-{t}^\frac12 y)|\,dy\quad\forall\quad (t,x)\in\mathbb R^{1+n}_+, \end{equation} which, along with the Cauchy-Schwarz inequality, implies ${t}^\frac12\|e^{t\Delta}f\|_{L^\infty{}}\lesssim\|f\|_{L_{2,n-2}}.$ So, (\ref{ete}), plus the uniform boundedness of the map $f\mapsto e^{t\Delta}f$ on $L_{2,n-2}{}$, gives $$ \|e^{t\Delta}f\|_{L_{4,n-2}}\lesssim\|e^{t\Delta}f\|^{\frac12}_{L^\infty}\|e^{t\Delta}f\|^{\frac12}_{L_{2,n-2}}\lesssim t^{-\frac14}\|f\|_{L_{2,n-2}{}}. $$ Thus $$ \left\{\begin{array}{r@{}} \|e^{t\Delta}f(x)\|_{X_{4,2;T}}\lesssim \|f\|_{L_{2,n-2}};\\ \lim_{T\to 0}\|e^{t\Delta}f(x)\|_{X_{4,2;T}}=0\quad\hbox{for}\quad f\in VL_{2,n-2}. \end{array} \right. $$ Keeping the previous preparation and the Picard contraction principle in mind, we find that proving Theorem \ref{t183a}, via the integral equation (\ref{eIe}) and the iteration process $$ \left\{\begin{array}{r@{}} u^{(0)}(t,\cdot)=e^{t\Delta}a(\cdot);\\ u^{(j+1)}(t,\cdot)=u^{(0)}(t,\cdot)-\mathsf B\big(u^{(j)}(t,\cdot),u^{(j)}(t,\cdot),t\big);\\ j=0,1,2,\dots, \end{array} \right. $$ amounts to proving the boundedness of the bilinear operator $\mathsf B(\cdot,\cdot,t):\ (X_{4,2;T})^n\times(X_{4,2;T})^n\to (X_{4,2;T})^n$. However, this boundedness follows directly from the following estimates (cf.
\cite[(24)-(25)]{Lem1}) for $0<s<t<T$: $$ \left\{\begin{array}{r@{}} \frac{\|e^{(t-s)\Delta}P\nabla\cdot(u\otimes v)\|_{(L^\infty{})^n}}{(t-s)^{-\frac12}} \lesssim\min\Big\{\frac{s^{\frac14}\|u\|_{(L_{4,n-2}{})^n}\,s^{\frac14}\|v\|_{(L_{4,n-2}{})^n}}{\big({s(t-s)}\big)^\frac12},\, {s^{-1}{s}^\frac12\|u\|_{(L^\infty{})^n}{s}^\frac12 \|v\|_{(L^\infty{})^n}}\Big\};\\ \frac{\|e^{(t-s)\Delta}P\nabla\cdot(u\otimes v)\|_{(L_{4,n-2}{})^n}}{(t-s)^{-\frac12}}\lesssim s^{-\frac34}{\big(s^{\frac14}\|u\|_{(L_{4,n-2}{})^n}\big) \big(s^{\frac12}\|v\|_{(L^{\infty}{})^n}\big)}. \end{array} \right. $$ \end{proof} \begin{remark}\label{c181} Though Theorem \ref{t183} can be used to derive that if $\|a\|_{(L_{2,n-2}{})^n}$ is sufficiently small then there is a unique solution $u$ of (\ref{e181}) in $\big({X}_{\alpha;\infty}\big)^n$, Theorem \ref{t183} cannot guarantee $u\in ({X}_{2,n-2;\infty})^n$ since only the inclusion $X_{2,n-2;\infty}\subseteq\cap_{0<\alpha<1}X_{\alpha;\infty}$ is available. In any event, we always have $ \sup_{t\in (0,\infty)}{t}^\frac12\|u(t,\cdot)\|_{L^\infty{}}<\infty $ and even the more general estimate (cf. \cite[(49) \& Lemma 3]{Lem1}): $ \sup_{t\in (0,\infty)}{t}^\frac12\|u^{(j+1)}(t,\cdot)-u^{(j)}(t,\cdot)\|_{L^\infty{}}\lesssim (j+1)^{-2}. $ \end{remark} \frenchspacing \end{document}
\begin{document} \title{ Distributed optimal control problems for a class of elliptic hemivariational inequalities with a parameter and its asymptotic behavior} \author{ Claudia M. Gariboldi \footnote{\, Depto. Matem\'atica, FCEFQyN, Universidad Nacional de R\'io Cuarto, Ruta 36 Km 601, 5800 R\'io Cuarto, Argentina. E-mail: [email protected].} \hspace{.15cm} and \ Domingo A. Tarzia \footnote{\, Depto. Matem\'atica, FCE, Universidad Austral, Paraguay 1950, S2000FZF Rosario, Argentina.} \footnote{\, CONICET, Argentina. E-mail: [email protected].} } \date{} \maketitle Dedicated to Professor Stanislaw Mig\'orski on the occasion of his 60th birthday \noindent {\bf Abstract.} \ In this paper, we study optimal control problems on the internal energy for a system governed by a class of elliptic boundary hemivariational inequalities with a parameter. The system arises from a steady-state heat conduction problem with a non-monotone multivalued subdifferential boundary condition on a portion of the boundary of the domain, described by the Clarke generalized gradient of a locally Lipschitz function. We prove an existence result for the optimal controls and we show an asymptotic result for the optimal controls and the system states, when the parameter, like a heat transfer coefficient, tends to infinity on a portion of the boundary. \noindent {\bf Key words.} Elliptic hemivariational inequality, optimal control problems, asymptotic behavior, Clarke generalized gradient, mixed elliptic problem, convergence. \noindent {\bf 2020 Mathematics Subject Classification. } 35J65, 35J87, 49J20, 49J45. {\thispagestyle{empty}} \section{Introduction} We consider a bounded domain $\Omega$ in $\mathbb{R}^d$ whose regular boundary $\Gamma $ consists of the union of three disjoint portions $\Gamma_{i}$, $i=1$, $2$, $3$ with $|\Gamma_{i}|>0$, where $|\Gamma_i|$ denotes the $(d-1)$-dimensional Hausdorff measure of the portion $\Gamma_i$ on $\Gamma$.
The outward normal vector on the boundary is denoted by $n$. We formulate the following steady-state heat conduction problem with mixed boundary conditions ~\cite{AK, BBP, G, LCB, Ta2, Ta3}: \begin{eqnarray} && -\Delta u=g \ \ \mbox{in} \ \ \Omega, \ \ \quad u\big|_{\Gamma_{1}}=0, \ \ \quad-\frac{\partial u}{\partial n}\big|_{\Gamma_{2}}=q, \ \ \quad u\big|_{\Gamma_{3}}=b, \label{P} \end{eqnarray} where $u$ is the temperature in $\Omega$, $g$ is the internal energy in $\Omega$, $b$ is the temperature on $\Gamma_{3}$ and $q$ is the heat flux on $\Gamma_{2}$, which satisfy the hypothesis: $g\in H=L^2(\Omega)$, $q\in Q=L^2(\Gamma_2)$ and $b\in H^{\frac{1}{2}}(\Gamma_3)$. Throughout the paper we use the following notation \begin{eqnarray*} && V=H^{1}(\Omega), \quad V_{0}=\{v\in V \mid v = 0 \ \ \mbox{on} \ \ \Gamma_{1} \}, \\[2mm] && K=\{v\in V \mid v = 0 \ \ \mbox{on} \ \ \Gamma_{1},\ v = b \ \ \mbox{on} \ \ \Gamma_{3} \}, \quad K_{0}=\{v\in V \mid v = 0 \ \ \mbox{on} \ \ \Gamma_{1}\cup \Gamma_3 \}, \\[2mm] && a(u,v)=\int_{\Omega }\nabla u \, \nabla v \, dx, \quad L(v)= \int_{\Omega}g v \,dx - \int_{\Gamma_{2}}q \gamma (v) \,d\Gamma, \end{eqnarray*} where $\gamma \colon V \to L^2(\Gamma)$ denotes the trace operator on $\Gamma$. In what follows, we write $u$ for the trace of a function $u \in V$ on the boundary. In a standard way, we obtain the following variational formulation of (\ref{P}): \begin{eqnarray} && \hspace{-1cm} \mbox{find} \ \ u_{\infty}\in K \ \ \mbox{such that}\ \ a(u_{\infty},v)=L(v) \ \ \mbox{for all} \ \ v\in K_{0}. \label{Pvariacional} \end{eqnarray} The standard norms on $V$ and $V_0$ are denoted by \begin{eqnarray*} && \| v \|_V = \Big( \| v \|^2_{L^2(\Omega)} + \| \nabla v \|^2_{L^2(\Omega;\mathbb{R}^d)} \Big)^{1/2} \ \ \mbox{for} \ \ v \in V, \\ [2mm] && \| v \|_{V_0} = \| \nabla v \|_{L^2(\Omega;\mathbb{R}^d)} \ \ \mbox{for} \ \ v \in V_0.
\end{eqnarray*} It is well known by the Poincar\'e inequality, see~\cite{CLM, R, Ta2}, that on $V_0$ the above two norms are equivalent. Note that the form $a$ is bilinear, symmetric, continuous and coercive with constant $m_a > 0$, i.e. \begin{equation}\label{coercive} a(v, v) = \|v\|^{2}_{V_0} \ge m_a \|v\|^{2}_{V} \ \ \mbox{for all} \ \ v\in V_{0}. \end{equation} We remark that, under additional hypotheses on the data $g$, $q$ and $b$, problem (\ref{P}) can be considered as a steady-state two-phase Stefan problem, see, for example,~\cite{GT,TT,Ta, Ta3}. We can particularly see it in~\cite{GT} (Example 1 on page 629, Example 2 on page 630, and Example 3 on page 631); in~\cite{TT} (Examples (i) and (ii) on page 35, and Example (iii) on page 36); and in~\cite{Ta3} (Example 1 and Example 2 on page 180). Now, in this paper, we consider the following mixed nonlinear boundary value problem for an elliptic equation: \begin{equation}\label{Pjalfa} -\Delta u=g \ \ \mbox{in} \ \ \Omega, \ \quad u\big|_{\Gamma _{1}}=0, \ \quad -\frac{\partial u}{\partial n}\big|_{\Gamma_{2}}=q, \ \quad -\frac{\partial u}{\partial n}\big|_{\Gamma_{3}} \in \alpha \, \partial j(u), \end{equation} which has been recently studied in~\cite{GMOT}. Here $\alpha$ is a positive constant which can be considered as the heat transfer coefficient on the boundary, while the function $j \colon \Gamma_{3} \times \mathbb{R} \to \mathbb{R}$, called a superpotential (nonconvex potential), is such that $j(x, \cdot)$ is locally Lipschitz for a.e. $x \in \Gamma_3$ and not necessarily differentiable. Since in general $j(x, \cdot)$ is nonconvex, the multivalued condition on $\Gamma_3$ in problem (\ref{Pjalfa}) is described by a nonmonotone relation expressed by the generalized gradient of Clarke~\cite{C}.
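For instance (a standard illustration, not taken from the examples of~\cite{GMOT}), the locally Lipschitz and nonconvex function $j(r)=-|r|$ has the Clarke generalized gradient \begin{equation*} \partial j(r)= \begin{cases} \{-1\}, & r>0,\\ [-1,1], & r=0,\\ \{1\}, & r<0, \end{cases} \end{equation*} whose graph is a nonmonotone multivalued relation; hence a boundary condition of the form $-\frac{\partial u}{\partial n}\big|_{\Gamma_{3}} \in \alpha \, \partial j(u)$ with such a $j$ cannot be rewritten in terms of the subdifferential of a convex potential.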
Such a multivalued relation in problem (\ref{Pjalfa}) is met in certain types of steady-state heat conduction problems (the behavior of a semipermeable membrane of finite thickness, temperature control problems, etc.). Further, problem (\ref{Pjalfa}) can be considered as a prototype of several boundary semipermeability models, see~\cite{MO,NP,P,ZLM}, which are motivated by problems arising in hydraulics, fluid flow problems through porous media, and electrostatics, where the solution represents the pressure and the electric potentials. Note that the analogous problems with maximal monotone multivalued boundary relations (that is, the case when $j(x, \cdot)$ is a convex function) were considered in~\cite{Barbu,DL}, see also the references therein. Under the above notation, the weak formulation of the elliptic problem (\ref{Pjalfa}) becomes the following elliptic boundary hemivariational inequality~\cite{GMOT}: \begin{equation}\label{Pj0alfavariacional} \mbox{find} \ \ u \in V_0 \ \ \mbox{such that} \ \ a(u,v) + \alpha \int_{\Gamma_{3}}j^{0}(u;v)\, d\Gamma \geq L(v) \ \ \mbox{\rm for all} \ \ v\in V_{0}. \end{equation} Here and in what follows we often omit the variable $x$ and simply write $j(r)$ instead of $j(x, r)$. The stationary heat conduction models with nonmonotone multivalued subdifferential interior and boundary semipermeability relations cannot be described by convex potentials. They use locally Lipschitz potentials and their weak formulations lead to hemivariational inequalities, see~\cite[Chapter~5.5.3]{NP} and~\cite{P}. We mention that the theory of hemivariational and variational inequalities was proposed in the 1980s by Panagiotopoulos, see~\cite{NP,P0,P1}, as variational formulations of important classes of inequality problems in mechanics.
In the last few years, new kinds of variational, hemivariational, and variational-hemivariational inequalities have been investigated, see the recent monographs~\cite{CLM,MOS,SM}, and the theory has emerged today as a new and interesting branch of applied mathematics. We consider the distributed optimal control problem of the type studied in \cite{GaTa, Li, Tr} given by: \begin{equation}\label{OPVariational} \text{find}\quad g^{*}\in H \quad \text{such that} \quad J(g^{*})=\min_{g\in H}J(g) \end{equation} with \begin{equation} J(g)=\frac{1}{2}||u_{g}-z_{d}||^{2}_{H}+\frac{M}{2}||g||^{2}_{H} \end{equation} where $u_{g}$ is the unique solution to the variational equality (\ref{Pvariacional}), $z_{d}\in H$ is given and $M$ is a positive constant. The goal of this paper is to formulate, for each $\alpha>0$, the following new distributed optimal control problem \begin{equation}\label{OPHemivariational} \text{find}\quad g_{\alpha}^{*}\in H \quad \text{such that} \quad J_{\alpha}(g_{\alpha}^{*})=\min_{g\in H}J_{\alpha}(g) \end{equation} with \begin{equation} J_{\alpha}(g)=\frac{1}{2}||u_{\alpha g}-z_{d}||^{2}_{H}+\frac{M}{2}||g||^{2}_{H} \end{equation} where $u_{\alpha g}$ is a solution to the hemivariational inequality (\ref{Pj0alfavariacional}), $z_{d}\in H$ is given and $M$ is a positive constant, and to study the convergence of problem (\ref{OPHemivariational}) as the parameter $\alpha$ goes to infinity. The paper is structured as follows. In Section~\ref{Preliminaries} we establish preliminary concepts of the theory of hemivariational inequalities, which are necessary for the development of the following sections. In Section~\ref{OCP}, for each $\alpha >0$, we obtain an existence result for solutions to the optimal control problem (\ref{OPHemivariational}).
Finally, in Section~\ref{Asymptotic}, we prove the strong convergence of a sequence of optimal controls of the problems (\ref{OPHemivariational}) to the unique optimal control of the problem (\ref{OPVariational}), when the parameter $\alpha$ goes to infinity. Moreover, we obtain the strong convergence of the system states related to the problems (\ref{OPHemivariational}) to the system state related to the problem (\ref{OPVariational}), when $\alpha$ goes to infinity. These results generalize, for a locally Lipschitz function $j$ under the hypotheses $H(j)$ and $(H_1)$, the classical results obtained in~\cite{GaTa} for a quadratic superpotential $j$. \section{Preliminaries}\label{Preliminaries} In this section we recall standard notation and preliminary concepts, which are necessary for the development of this paper. Let $(X, \| \cdot \|_{X})$ be a reflexive Banach space, $X^{*}$ be its dual, and $\langle \cdot, \cdot \rangle$ denote the duality between $X^*$ and $X$. For a real valued function defined on $X$, we have the following definitions~\cite[Section~2.1]{C} and~\cite{DMP,MOS}. \begin{Definition} A function $\varphi \colon X\rightarrow \mathbb{R}$ is said to be locally Lipschitz, if for every $x\in X$ there exist a neighborhood $U_{x}$ of $x$ and a constant $L_{x}>0$ such that $$ |\varphi(y)-\varphi(z)|\leq L_{x}\|y-z\|_{X} \ \ \mbox{\rm for all} \ \ y, z\in U_{x}. $$ For such a function the generalized (Clarke) directional derivative of $\varphi$ at the point $x\in X$ in the direction $v\in X$ is defined by $$ \varphi^{0}(x;v)=\limsup\limits_{y \rightarrow x, \, \lambda \rightarrow 0^{+}} \frac{\varphi(y +\lambda v)-\varphi(y)}{\lambda} \, . $$ The generalized gradient (subdifferential) of $\varphi$ at $x$ is a subset of the dual space $X^{*}$ given by $$ \partial \varphi(x)=\{\zeta\in X^{*} \mid \varphi^{0}(x;v)\geq \langle \zeta,v\rangle \ \ \mbox{\rm for all} \ \ v \in X\}.
$$ \end{Definition} We consider the following hypothesis. \noindent ${\underline{H(j)}}$: $j\colon \Gamma_3 \times \mathbb{R} \to \mathbb{R}$ is such that \noindent \quad (a) $j(\cdot, r)$ is measurable for all $r \in \mathbb{R}$, \noindent \quad (b) $j(x, \cdot)$ is locally Lipschitz for a.e. $x \in \Gamma_3$, \noindent \quad (c) there exist $c_0$, $c_1 \ge 0$ such that $| \partial j(x, r)| \le c_0 + c_1 |r|$ for all $r \in \mathbb{R}$, a.e. $x\in \Gamma_3$, \noindent \quad (d) $j^0(x, r; b-r) \le 0$ for all $r \in \mathbb{R}$, a.e. $x \in \Gamma_{3}$, with a constant $b \in \mathbb{R}$. Note that existence results for elliptic hemivariational inequalities can be found in several contributions, see~\cite{CLM,unified,ELAS,MOS,NP}. In \cite[Theorem~4]{GMOT}, the hypothesis $H(j)$(d) is considered in order to obtain existence of a solution to problem \eqref{Pj0alfavariacional}. Moreover, under this condition the authors have studied the asymptotic behavior when $\alpha \to \infty$ (see \cite[Theorem~7]{GMOT}). We note that, if the hypothesis $H(j)$(d) is replaced by the relaxed monotonicity condition (see \cite[Remark~10]{GMOT} for details) \begin{equation*} (e)\qquad j^0(x, r; s-r) + j^0(x,s; r-s) \le m_j \, |r-s|^2 \end{equation*} for all $r$, $s \in \mathbb{R}$, a.e. $x\in\Gamma_3$ with $m_j \ge 0$, and the following smallness condition $$ (f)\qquad m_a > \alpha \, m_j \| \gamma\|^2 $$ is assumed, then problem (\ref{Pj0alfavariacional}) is uniquely solvable, see~\cite[Lemma~20]{ELAS} for the proof. However, this smallness condition is not suitable in the study of problem \eqref{Pj0alfavariacional} since, for a sufficiently large value of $\alpha$, it is not satisfied. Finally, in \cite{GMOT} we can find several examples of locally Lipschitz (nondifferentiable and nonconvex) functions which satisfy the above hypotheses.
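As a simple illustration of these hypotheses (a convex model case, so it is not among the nonconvex examples of~\cite{GMOT}), consider $j(x,r)=|r-b|$ with the constant $b$ of $H(j)$(d). Conditions (a)--(c) hold with $c_0=1$ and $c_1=0$, since $|\partial j(x,r)|\le 1$ for all $r\in\mathbb{R}$. Moreover, for $r\neq b$ the function $j(x,\cdot)$ is differentiable at $r$ and $$ j^0(x,r;b-r)=\mathrm{sign}(r-b)\,(b-r)=-|r-b|<0, $$ while $j^0(x,b;0)=0$, so $H(j)$(d) is satisfied; since $j^0(x,r;b-r)=0$ forces $r=b$, this $j$ also satisfies the hypothesis $(H_1)$ introduced in Section~\ref{Asymptotic}.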
\section{Optimal control problems}\label{OCP} We know, by \cite{GaTa}, that there exists a unique optimal pair $(g^{*},u_{g^{*}})\in H\times V_{0}$ of the distributed optimal control problem (\ref{OPVariational}). Now, we pass to a result on the existence of a solution to the optimal control problem (\ref{OPHemivariational}), in which the system is governed by the hemivariational inequality (\ref{Pj0alfavariacional}). \begin{Theorem}\label{existence} For each $\alpha > 0$, if $H(j) (a)-(d)$ holds, then the distributed optimal control problem (\ref{OPHemivariational}) has a solution. \end{Theorem} \begin{proof} By definition, for each $\alpha >0$, the functional $J_{\alpha}$ is bounded from below. Next, taking into account that the hemivariational inequality (\ref{Pj0alfavariacional}) has a solution (see \cite[Theorem 4]{GMOT}), for each $\alpha >0$ and each $g\in H$, we denote by $T_{\alpha}(g)$ the set of solutions of (\ref{Pj0alfavariacional}) and we have that \begin{equation}\label{min1} m=\inf\{J_{\alpha}(g), g\in H, u_{\alpha g}\in T_{\alpha}(g)\}\geq 0. \end{equation} Let $g_{n}\in H$ be a minimizing sequence to (\ref{min1}) such that \begin{equation}\label{min2} m\leq J_{\alpha}(g_{n}) \leq m + \frac{1}{n}. \end{equation} Taking into account that the functional $J_{\alpha}$ satisfies $$\lim\limits_{||g||_{H}\rightarrow +\infty}J_{\alpha}(g)=+\infty,$$ we obtain that there exists $C_{1} >0$ such that \begin{equation}\label{cota1} ||g_{n}||_{H}\leq C_{1}. \end{equation} Moreover, we can prove that there exists $C_{2} >0$ such that \begin{equation}\label{cota2} ||u_{\alpha g_{n}}||_{V_{0}}\leq C_{2}. \end{equation} In fact, let $u_\infty \in K$ be the solution to problem (\ref{Pvariacional}).
We have \begin{equation*}\begin{split} a(u_{\alpha g_{n}}, u_\infty -u_{\alpha g_{n}}) + \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{n}}; u_\infty-u_{\alpha g_{n}}) \, d\Gamma &\ge \int_{\Omega}g_{n}(u_\infty-u_{\alpha g_{n}})\, dx \\ &-\int_{\Gamma_{2}}q (u_\infty-u_{\alpha g_{n}})\, d\Gamma.\end{split} \end{equation*} Hence \begin{equation*}\begin{split} a(u_\infty - u_{\alpha g_{n}}, u_\infty -u_{\alpha g_{n}}) & \le a(u_\infty, u_\infty-u_{\alpha g_{n}}) + \alpha \int_{\Gamma_{3}}j^0(u_{\alpha g_{n}}; b-u_{\alpha g_{n}}) \, d\Gamma \\ & + \int_{\Omega}g_{n} (u_{\alpha g_{n}}-u_\infty)\, dx-\int_{\Gamma_{2}}q (u_\infty-u_{\alpha g_{n}})\, d\Gamma. \end{split}\end{equation*} From hypothesis $H(j)$(d), since the form $a$ is bounded (with positive constant $M_{a}$), we get \begin{equation*}\begin{split} \| u_\infty - u_{\alpha g_{n}} \|_{V_{0}}^2 & \le a(u_\infty, u_\infty-u_{\alpha g_{n}}) + \int_{\Omega}g_{n} (u_{\alpha g_{n}}-u_\infty)\, dx-\int_{\Gamma_{2}}q (u_\infty-u_{\alpha g_{n}})\, d\Gamma \\ & \le M_{a} \| u_\infty \|_V \| u_\infty - u_{\alpha g_{n}} \|_V + \left(||g_{n}||_{H}+||q||_{Q}||\gamma||\right) \| u_\infty - u_{\alpha g_{n}} \|_V\\ &\le \left( M_{a}\| u_\infty \|_V+C_{1}+C_{3}||q||_{Q}||\gamma||\right) \| u_\infty - u_{\alpha g_{n}} \|_{V_{0}} \end{split}\end{equation*} where $||\gamma||$ denotes the norm of the trace operator and $C_{3}$ is a positive constant due to the equivalence of norms. Subsequently, we obtain (\ref{cota2}). Therefore, there exist $f\in H$ and $\eta_{\alpha}\in V_{0}$ such that \[ u_{\alpha g_{n}}\rightharpoonup \eta_{\alpha}\quad \text{in}\quad V_{0} \quad\text{weakly} \qquad \text{and}\qquad g_{n}\rightharpoonup f\quad \text{in}\quad H \quad \text{weakly}.
\] Now, for all $g_{n}\in H$, we have \[ a(u_{\alpha g_{n}},v) + \alpha \int_{\Gamma_{3}}j^{0}(u_{\alpha g_{n}};v)\, d\Gamma \geq \int_{\Omega}g_{n}v \, dx -\int_{\Gamma_{2}}qv\, d\Gamma \ \ \mbox{\rm for all} \ \ v\in V_{0} \] and taking the upper limit, we obtain \begin{equation}\label{CCC} a(\eta_{\alpha}, v) + \alpha \limsup_{n\rightarrow +\infty} \int_{\Gamma_{3}} j^0(u_{\alpha g_{n}}; v) \, d\Gamma \ge \int_{\Omega}fv \, dx -\int_{\Gamma_{2}}qv\, d\Gamma \ \ \mbox{\rm for all} \ \ v \in V_0. \end{equation} By the compactness of the trace operator from $V$ into $L^2(\Gamma_{3})$, we have $u_{\alpha g_{n}} \big|_{\Gamma_{3}} \to \eta_{\alpha} \big|_{\Gamma_{3}}$ in $L^2(\Gamma_3)$, as $n \to +\infty$, and at least for a subsequence, $u_{\alpha g_{n}}(x) \to \eta_{\alpha}(x)$ for a.e. $x \in \Gamma_3$ and $|u_{\alpha g_{n}}(x)| \le h_{\alpha}(x)$ a.e. $x \in \Gamma_3$, where $h_{\alpha} \in L^2(\Gamma_3)$. Since the function $\mathbb{R}\times \mathbb{R} \ni (r, s) \mapsto j^0(x, r; s) \in \mathbb{R}$ a.e. on $\Gamma_3$ is upper semicontinuous, see \cite[Proposition 3]{GMOT}, we obtain $$ \limsup_{n\rightarrow +\infty} j^0(x,u_{\alpha g_{n}}(x); v(x)) \le j^0(x, \eta_{\alpha}(x); v(x)) \ \ \mbox{a.e.} \ \ x \in \Gamma_3. $$ Next, from $H(j)(c)$, we deduce the estimate \begin{equation*} |j^0(x, u_{\alpha g_{n}}(x); v(x))| \le (c_0 + c_1 |u_{\alpha g_{n}}(x)|) \, |v(x)| \le k_{\alpha}(x) \ \ \mbox{a.e.} \ \ x \in \Gamma_3 \end{equation*} where $k_{\alpha} \in L^1(\Gamma_3)$, $k_{\alpha}(x) = (c_0 + c_1 h_{\alpha}(x)) |v(x)|$, and we apply the dominated convergence theorem, see~\cite{DMP}, to get $$ \limsup_{n\rightarrow +\infty} \int_{\Gamma_3} j^0(u_{\alpha g_{n}}; v) \, d\Gamma \le \int_{\Gamma_3} \limsup_{n\rightarrow +\infty} j^0(u_{\alpha g_{n}}; v) \, d\Gamma \le \int_{\Gamma_3} j^0(\eta_{\alpha}; v)\, d\Gamma.
$$ Using the latter in (\ref{CCC}), we obtain \[ a(\eta_{\alpha},v) + \alpha \int_{\Gamma_{3}}j^{0}(\eta_{\alpha};v)\, d\Gamma \geq \int_{\Omega}fv \, dx -\int_{\Gamma_{2}}qv\, d\Gamma \ \ \mbox{\rm for all} \ \ v\in V_{0}, \] that is, $\eta_{\alpha}\in V_{0}$ is a solution to the hemivariational inequality (\ref{Pj0alfavariacional}). Thus, we have proved that \[ \eta_{\alpha}=u_{\alpha f} \] where $u_{\alpha f}$ is a solution of the hemivariational inequality (\ref{Pj0alfavariacional}) for the data $f\in H$ and $q\in Q$. Finally, from (\ref{min2}) and the weak lower semicontinuity of $J_{\alpha}$, we have \begin{equation}\begin{split} m&\geq \liminf_{n\rightarrow +\infty}J_{\alpha}(g_{n})\\& \geq \frac{1}{2}\liminf_{n\rightarrow +\infty} ||u_{\alpha g_{n}}-z_{d}||^{2}_{H}+\frac{M}{2}\liminf_{n\rightarrow +\infty}||g_{n}||^{2}_{H}\\ & \geq \frac{1}{2}||u_{\alpha f}-z_{d}||^{2}_{H}+\frac{M}{2}||f||^{2}_{H}=J_{\alpha}(f),\nonumber \end{split}\end{equation} and therefore, $(f,u_{\alpha f})$ is an optimal pair for the optimal control problem (\ref{OPHemivariational}). \end{proof} \section{Asymptotic behavior of the optimal controls}\label{Asymptotic} In this section we investigate the asymptotic behavior of the optimal solutions to problem~(\ref{OPHemivariational}) when $\alpha \rightarrow\infty$. To this end, we need the following additional hypothesis on the superpotential~$j$. \noindent ${\underline{(H_1)}}$: \quad if $j^0(x, r; b-r) = 0$ for all $r \in \mathbb{R}$, a.e. $x \in \Gamma_{3}$, then $r = b$. \begin{Theorem}\label{Theorem6} Assume $H(j)$ and $(H_1)$. If $(g_{\alpha},u_{\alpha g_{\alpha}})$ is an optimal solution to problem (\ref{OPHemivariational}) and $(g^{*},u_{\infty g^{*}})$ is the unique solution to problem (\ref{OPVariational}), then $g_{\alpha} \to g^{*}$ in $H$ strongly and $u_{\alpha g_{\alpha}} \to u_{\infty g^{*}}$ in $V$ strongly, when $\alpha \to \infty$.
\end{Theorem} \begin{proof} We carry out the proof in three steps. \newline \textbf{Step 1.} Since $(g_{\alpha},u_{\alpha g_{\alpha}})$ is an optimal solution to problem (\ref{OPHemivariational}), we have the following inequality \[ \frac{1}{2}||u_{\alpha g_{\alpha}}-z_{d}||^{2}_{H}+\frac{M}{2}||g_{\alpha}||^{2}_{H}\leq \frac{1}{2}||u_{\alpha g}-z_{d}||^{2}_{H}+\frac{M}{2}||g||^{2}_{H},\quad \forall g\in H \] and taking $g=0$, we obtain that there exists a positive constant $C_{1}$ such that \[ \frac{1}{2}||u_{\alpha g_{\alpha}}-z_{d}||^{2}_{H}+\frac{M}{2}||g_{\alpha}||^{2}_{H}\leq \frac{1}{2}||u_{\alpha 0}-z_{d}||^{2}_{H}\leq C_{1} \] because $\{u_{\alpha 0}\}$ is convergent when $\alpha \to \infty$, see \cite[Theorem 7]{GMOT}. Therefore, there exist positive constants $C_{2}$ and $C_{3}$, independent of $\alpha$, such that \begin{equation}\label{cota} ||g_{\alpha}||_{H}\leq C_{2}\quad \text{and}\quad ||u_{\alpha g_{\alpha}}||_{H}\leq C_{3}. \end{equation} Now, we choose $v = u_{\infty g^{*}} - u_{\alpha g_{\alpha}} \in V_0$ as a test function in the elliptic boundary hemivariational inequality (\ref{Pj0alfavariacional}) to obtain \begin{equation*} a(u_{\alpha g_{\alpha}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) + \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \, d\Gamma \ge L(u_{\infty g^{*}} - u_{\alpha g_{\alpha}}).
\end{equation*} From the equality $$a(u_{\alpha g_{\alpha}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) = - a(u_{\infty g^{*}} - u_{\alpha g_{\alpha}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) + a(u_{\infty g^{*}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}),$$ we get \begin{equation}\begin{split}\label{N2} & a(u_{\infty g^{*}} - u_{\alpha g_{\alpha}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) - \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \, d\Gamma \\ & \le a(u_{\infty g^{*}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) - L(u_{\infty g^{*}} - u_{\alpha g_{\alpha}}). \end{split}\end{equation} Taking into account that $j^0(x, u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) = j^0(x, u_{\alpha g_{\alpha}}; b-u_{\alpha g_{\alpha}})$ on $\Gamma_{3}$, and by $H(j)$(d), we have $j^0(x, u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \le 0$ on $\Gamma_{3}$. Hence \begin{equation*} a(u_{\infty g^{*}} - u_{\alpha g_{\alpha}},u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \le a(u_{\infty g^{*}}, u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) - L(u_{\infty g^{*}} - u_{\alpha g_{\alpha}}). \end{equation*} By the boundedness and coerciveness of $a$, we infer \begin{equation*} m_a \| u_{\infty g^{*}} - u_{\alpha g_{\alpha}} \|_V^2 \le (M_{a} \|u_{\infty g^{*}}\|_V + \| L \|_{V^*}) \, \| u_{\infty g^{*}} - u_{\alpha g_{\alpha}} \|_V \end{equation*} with $M_{a} > 0$, and subsequently \begin{equation}\begin{split}\label{ZZZ} \| u_{\alpha g_{\alpha}} \|_V &\le \| u_{\infty g^{*}} - u_{\alpha g_{\alpha}} \|_V + \| u_{\infty g^{*}} \|_V\\ & \le \frac{1}{m_a} (M_{a} \|u_{\infty g^{*}}\|_V + \| L \|_{V^*}) + \| u_{\infty g^{*}} \|_V \\& =: C_{4}, \end{split} \end{equation} where $C_{4} > 0$ is a constant independent of $\alpha$.
Hence, since $a(u_{\infty g^{*}} - u_{\alpha g_{\alpha}},u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \ge 0$, from (\ref{N2}), we have \begin{equation*}\begin{split} - \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \, d\Gamma &\le (M_{a} \|u_{\infty g^{*}}\|_V + \| L \|_{V^*}) \, \| u_{\infty g^{*}} - u_{\alpha g_{\alpha}} \|_V\\ & \le \frac{1}{m_a} (M_{a} \|u_{\infty g^{*}}\|_V + \| L \|_{V^*})^2 \\ &=: C_5, \end{split}\end{equation*} where $C_5 > 0$ is independent of $\alpha$. Thus \begin{equation}\label{N3} - \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}} - u_{\alpha g_{\alpha}}) \, d\Gamma \le \frac{C_5}{\alpha}. \end{equation} It follows from (\ref{ZZZ}) that $\{ u_{\alpha g_{\alpha}} \}$ remains in a bounded subset of $V$. Thus, there exists $\eta \in V$ such that, by passing to a subsequence if necessary, we have \begin{equation}\label{CONV8} u_{\alpha g_{\alpha}} \rightharpoonup \eta \ \ \mbox{weakly in} \ \ V, \ \mbox{as} \ \alpha \to \infty. \end{equation} Moreover, from (\ref{cota}) we have that there exists $h\in H$ such that \begin{equation}\label{convcontrol} g_{\alpha} \rightharpoonup h\ \ \mbox{weakly in} \ \ H, \ \mbox{as} \ \alpha \to \infty. \end{equation} \newline \textbf{Step 2.} Next, we will show that $h=g^{*}$ and $\eta =u_{\infty g^{*}}$. We observe that $\eta \in V_0$ because $\{u_{\alpha g_{\alpha}} \} \subset V_0$ and $V_0$ is sequentially weakly closed in $V$. Let $w \in K$ and $v = w - u_{\alpha g_{\alpha}} \in V_0$. From (\ref{Pj0alfavariacional}), we have \begin{equation*} L(w-u_{\alpha g_{\alpha}}) \le a(u_{\alpha g_{\alpha}}, w -u_{\alpha g_{\alpha}}) + \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; w-u_{\alpha g_{\alpha}}) \, d\Gamma.
\end{equation*} Since $w = b$ on $\Gamma_3$, by $H(j)$(d), we have \begin{equation*} \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; w-u_{\alpha g_{\alpha}}) \, d\Gamma = \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; b-u_{\alpha g_{\alpha}}) \, d\Gamma \le 0 \end{equation*} which implies \begin{equation}\label{INEQ2} L(w-u_{\alpha g_{\alpha}}) \le a(u_{\alpha g_{\alpha}}, w -u_{\alpha g_{\alpha}}). \end{equation} Next, we use the weak lower semicontinuity of the functional $V \ni v \mapsto a(v,v) \in \mathbb{R}$ and from (\ref{INEQ2}), we deduce \begin{equation}\label{INEQ4} \eta \in V_0 \ \ \mbox{satisfies} \ \ L(w-\eta) \le a(\eta, w-\eta) \ \ \mbox{for all} \ \ w \in K. \end{equation} Subsequently, we will show that $\eta \in K$. In fact, from (\ref{CONV8}), by the compactness of the trace operator, we have $u_{\alpha g_{\alpha}} \big|_{\Gamma_{3}} \to \eta \big|_{\Gamma_{3}}$ in $L^2(\Gamma_3)$, as $\alpha \to \infty$. Passing to a subsequence if necessary, we may suppose that $u_{\alpha g_{\alpha}} (x) \to \eta(x)$ for a.e. $x \in \Gamma_3$ and there exists $f \in L^2(\Gamma_3)$ such that $|u_{\alpha g_{\alpha}}(x)| \le f(x)$ a.e. $x \in \Gamma_3$. Using the upper semicontinuity of the function $\mathbb{R}\times \mathbb{R} \ni (r, s) \mapsto j^0(x, r; s) \in \mathbb{R}$ for a.e. $x \in \Gamma_3$, see \cite[Proposition 3 (iii)]{GMOT}, we get $$ \limsup_{\alpha\rightarrow \infty} j^0(x, u_{\alpha g_{\alpha}}(x); b - u_{\alpha g_{\alpha}}(x)) \le j^0(x,\eta(x); b-\eta(x)) \ \ \mbox{a.e.} \ \ x \in \Gamma_3.
$$ Next, taking into account the estimate \begin{equation*} |j^0(x, u_{\alpha g_{\alpha}}(x); b - u_{\alpha g_{\alpha}}(x))| \le (c_0 + c_1 |u_{\alpha g_{\alpha}} (x)|) \, |b-u_{\alpha g_{\alpha}}(x)| \le k(x) \ \ \mbox{a.e.} \ \ x \in \Gamma_3 \end{equation*} with $k \in L^1(\Gamma_3)$ given by $k(x) = (c_0 + c_1 f(x)) (|b| + f(x))$, by the dominated convergence theorem, see~\cite{DMP}, we obtain $$ \limsup_{\alpha\rightarrow \infty} \int_{\Gamma_3} j^0(u_{\alpha g_{\alpha}}; b - u_{\alpha g_{\alpha}}) \, d\Gamma \le \int_{\Gamma_3} j^0(\eta; b-\eta)\, d\Gamma. $$ Consequently, from $H(j)$(d) and (\ref{N3}), we have \begin{equation*} 0 \le -\int_{\Gamma_{3}} j^0(\eta; b-\eta) \, d\Gamma \le \liminf_{\alpha\rightarrow \infty} \left( -\int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; b-u_{\alpha g_{\alpha}}) \, d\Gamma \right) \le 0 \end{equation*} which gives $\int_{\Gamma_{3}} j^0(\eta; b-\eta) \, d\Gamma =0$. Again by $H(j)$(d), we get $j^0(x, \eta; b-\eta) = 0$ a.e. $x\in\Gamma_{3}$. Using $(H_1)$, we have $\eta(x) = b$ for a.e. $x \in \Gamma_3$, which together with (\ref{INEQ4}) implies \begin{equation*}\label{INEQ5} \eta \in K \ \ \mbox{satisfies} \ \ L(w-\eta) \le a(\eta, w-\eta) \ \ \mbox{for all} \ \ w \in K. \end{equation*} Next, we will prove that $\eta = u_{\infty h}$. To this end, let $v := w -\eta \in K_0$ with arbitrary $w \in K$. Hence, $L(v) \le a(\eta, v)$ for all $v \in K_0$. Recalling that $v \in K_0$ implies $-v \in K_0$, we obtain $a(\eta, v) \le L(v)$ for all $v \in K_0$. Hence, we conclude that $$ \eta \in K \ \ \mbox{satisfies} \ \ a(\eta, v) = L(v)\ \ \mbox{for all} \ \ v \in K_0, $$ i.e., $\eta \in K$ is a solution to problem (\ref{Pvariacional}). By the uniqueness of solution to problem (\ref{Pvariacional}), we have $\eta = u_{\infty h}$ and hence $u_{\alpha g_{\alpha}} \rightharpoonup u_{\infty h}$ weakly in $V$, as $\alpha \to \infty$.
\newline Now, by the optimality of $g_{\alpha}$, \[ J_{\alpha}(g_{\alpha})\leq J_{\alpha}(f), \quad \forall f \in H \] hence \begin{equation*}\begin{split} J(h)& =\frac{1}{2}||u_{\infty h}-z_{d}||_{H}^{2}+\frac{M}{2}||h||_{H}^{2}= \frac{1}{2}||\eta -z_{d}||_{H}^{2}+\frac{M}{2}||h||_{H}^{2}\\ &\leq \liminf_{\alpha \to \infty}J_{\alpha}(g_{\alpha})\leq \liminf_{\alpha \to \infty}J_{\alpha}(f)\\ & =\lim_{\alpha \to \infty}J_{\alpha}(f)=J(f), \quad \forall f\in H \end{split}\end{equation*} and from the uniqueness of the solution to the optimal control problem (\ref{OPVariational}), see \cite{GaTa}, we obtain that \[ h=g^{*}, \] therefore $u_{\infty h}=u_{\infty g^{*}}$. Thus we have, when $\alpha \to \infty$, \begin{equation*} g_{\alpha}\rightharpoonup g^{*} \ \ \text{weakly in} \ \ H \quad \text{and}\quad u_{\alpha g_{\alpha}} \rightharpoonup u_{\infty g^{*}} \ \ \text{weakly in} \ \ V . \end{equation*} \newline \textbf{Step 3.} Now, we prove the strong convergence $u_{\alpha g_{\alpha}} \to u_{\infty g^{*}}$ in $V$, as $\alpha \to \infty$. Choosing $v = u_{\infty g^{*}}-u_{\alpha g_{\alpha}} \in V_0$ in problem (\ref{Pj0alfavariacional}), we obtain \begin{equation*} a(u_{\alpha g_{\alpha}}, u_{\infty g^{*}} -u_{\alpha g_{\alpha}}) + \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}}-u_{\alpha g_{\alpha}}) \, d\Gamma \ge L(u_{\infty g^{*}}-u_{\alpha g_{\alpha}}). \end{equation*} Hence \begin{equation*}\begin{split} a(u_{\infty g^{*}} -u_{\alpha g_{\alpha}}, u_{\infty g^{*}} -u_{\alpha g_{\alpha}}) & \le a(u_{\infty g^{*}}, u_{\infty g^{*}} -u_{\alpha g_{\alpha}}) + L(u_{\alpha g_{\alpha}} - u_{\infty g^{*}}) \\& + \alpha \int_{\Gamma_{3}} j^0(u_{\alpha g_{\alpha}}; u_{\infty g^{*}}-u_{\alpha g_{\alpha}}) \, d\Gamma.
\end{split}\end{equation*} Since $u_{\infty g^{*}}= b$ on $\Gamma_3$, by $H(j)$(d) and the coerciveness of the form $a$, we have $$ m_a \, \| u_{\infty g^{*}} -u_{\alpha g_{\alpha}} \|^2_V \le a(u_{\infty g^{*}}, u_{\infty g^{*}} -u_{\alpha g_{\alpha}}) + L(u_{\alpha g_{\alpha}} - u_{\infty g^{*}}). $$ Employing the weak continuity of $a(u_{\infty g^{*}}, \cdot)$, the compactness of the trace operator and taking into account that $u_{\alpha g_{\alpha}}\rightarrow u_{\infty g^{*}}$ strongly in $H$, we conclude that $u_{\alpha g_{\alpha}} \to u_{\infty g^{*}}$ strongly in $V$, as $\alpha \to \infty$. \newline Finally, we prove the strong convergence of $g_{\alpha}$ to $g^{*}$ in $H$, when $\alpha\rightarrow \infty$. In fact, from $u_{\alpha g_{\alpha}}\rightarrow u_{\infty g^{*}}$ strongly in $H$, we deduce \begin{equation}\label{a} \lim_{\alpha\rightarrow \infty}\frac{1}{2}||u_{\alpha g_{\alpha}}-z_{d}||_{H}^{2}=\frac{1}{2}||u_{\infty g^{*}}-z_{d}||_{H}^{2} \end{equation} and as $g_{\alpha}\rightharpoonup g^{*}$ weakly in $H$, then \begin{equation}\label{b} ||g^{*}||_{H}^{2}\leq \liminf_{\alpha\rightarrow \infty}||g_{\alpha}||_{H}^{2}. \end{equation} Next, from (\ref{a}) and (\ref{b}), we obtain \begin{equation*} \frac{1}{2}||u_{\infty g^{*}}-z_{d}||_{H}^{2}+\frac{M}{2}||g^{*}||_{H}^{2}\leq \liminf_{\alpha\rightarrow \infty} \left(\frac{1}{2}||u_{\alpha g_{\alpha}}-z_{d}||_{H}^{2}+\frac{M}{2}||g_{\alpha}||_{H}^{2}\right), \end{equation*} that is \begin{equation*} J(g^{*})\leq \liminf_{\alpha\rightarrow \infty}J_{\alpha}(g_{\alpha}).
\end{equation*} On the other hand, from the definition of $g_{\alpha}$, we have \begin{equation*} J_{\alpha}(g_{\alpha})\leq J_{\alpha}(g^{*}) \end{equation*} then, taking into account that $u_{\alpha g^{*}}\rightarrow u_{\infty g^{*}}$ strongly in $H$, see~\cite[Theorem~7]{GMOT}, we obtain \begin{equation*} \limsup_{\alpha\rightarrow \infty}J_{\alpha}(g_{\alpha})\leq \limsup_{\alpha\rightarrow \infty}J_{\alpha}(g^{*})=J(g^{*}) \end{equation*} and therefore \begin{equation*} \lim_{\alpha\rightarrow \infty}J_{\alpha}(g_{\alpha})=J(g^{*}) \end{equation*} or equivalently \begin{equation}\label{c} \lim_{\alpha\rightarrow \infty}\left(\frac{1}{2}||u_{\alpha g_{\alpha}}-z_{d}||_{H}^{2}+\frac{M}{2}||g_{\alpha}||_{H}^{2}\right)=\frac{1}{2}||u_{\infty g^{*}}-z_{d}||_{H}^{2}+\frac{M}{2}||g^{*}||_{H}^{2}. \end{equation} Now, from (\ref{a}) and (\ref{c}), when $\alpha\rightarrow \infty$, we have \begin{equation*} ||g_{\alpha}||_{H}^{2}\rightarrow ||g^{*}||_{H}^{2} \end{equation*} and, as $g_{\alpha}\rightharpoonup g^{*}$ weakly in $H$, we deduce that $g_{\alpha}\rightarrow g^{*}$ strongly in $H$. This completes the proof. \end{proof} We remark that examples of locally Lipschitz functions $j$ satisfying the hypotheses $H(j)$ and $(H_1)$ can be found in~\cite{GMOT}. \section{Conclusions}\label{Conclusions} We have studied a parameter optimal control problem for systems governed by elliptic boundary hemivariational inequalities with a non-monotone multivalued subdifferential boundary condition on a portion of the boundary of the domain, described by the Clarke generalized gradient of a locally Lipschitz function. We have proved an existence result for the optimal controls and established an asymptotic result for the optimal controls and the system states when the parameter (the heat transfer coefficient on a portion of the boundary) tends to infinity.
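To make the hypotheses concrete, here is a minimal illustrative superpotential (our own example, not taken from \cite{GMOT}; the reader should check it against the full list of conditions in $H(j)$) for which the sign condition and $(H_1)$ are immediate:

```latex
% Sketch of an example: the locally Lipschitz function
%   j(r) = |r - b|
% has Clarke directional derivative
%   j^0(r; s) = \operatorname{sign}(r-b)\, s  for r \neq b,
%   j^0(b; s) = |s| .
% Consequently
%   j^0(r;\, b-r) = -\,|r - b| \le 0  for every r \in \mathbb{R},
% with equality if and only if r = b, so (H_1) holds; moreover
%   |j^0(r; s)| \le |s| \le (c_0 + c_1 |r|)\,|s|
% with c_0 = 1, c_1 = 0, which is the growth estimate used above.
\[
  j(r) = |r-b|, \qquad
  j^0(r;\, b-r) = -\,|r-b| \le 0, \qquad
  j^0(r;\, b-r) = 0 \iff r = b .
\]
```

This example also shows that the bound $|j^0| \le (c_0 + c_1|r|)|s|$ can hold with $c_1 = 0$, i.e., with no growth in $r$ at all.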
These results generalize, to a locally Lipschitz superpotential $j$ satisfying the hypotheses $H(j)$ and $(H_1)$, the classical results obtained in~\cite{GaTa} for a quadratic superpotential. \end{document}
\begin{document} \title[Deformation and moduli]{ A superficial working guide to deformations and moduli} \author{ F. Catanese}\footnote{I owe to David Buchsbaum the joke that an expert on algebraic surfaces is a `superficial' mathematician. } \address {Lehrstuhl Mathematik VIII\\ Mathematisches Institut der Universit\"at Bayreuth\\ NW II, Universit\"atsstr. 30\\ 95447 Bayreuth} \email{[email protected]} \thanks{The present work took place in the realm of the DFG Forschergruppe 790 ``Classification of algebraic surfaces and compact complex manifolds''. A major part of the article was written, and the article was completed, when the author was a visiting research scholar at KIAS. } \maketitle {\em Dedicated to David Mumford with admiration.} \tableofcontents \section*{Introduction} There are several ways to look at moduli theory; indeed, the same name can at first glance disguise completely different approaches to mathematical thinking; yet there is a substantial unity since, although often with different languages and purposes, the problems treated are substantially the same. The most classical approach and motivation is to consider moduli theory as the fine part of classification theory: the big quest is not just to prove that certain moduli spaces exist, but to use the study of their structure in order to obtain geometrical information about the varieties one wants to classify; and using each time the most convenient incarnation of `moduli'. For instance, as a slogan, we might think of moduli theory and deformation theory as analogues of the global study of an algebraic variety versus a local study of its singularities, done using power series methods. On the other hand, the shape of an algebraic variety is easily recognized when it has singularities! In this article most of our attention will be cast on the case of complex algebraic surfaces, which is already sufficiently intricate to defy many attempts of investigation.
But we shall try, as much as possible, to treat the higher dimensional and more general cases as well. We shall also stick to the world of complex manifolds and complex projective varieties, which allows us to find so many beautiful connections to related fields of mathematics, such as topology, differential geometry and symplectic geometry. David Mumford clarified the concept of biregular moduli through a functorial definition, which is extremely useful when we want a precise answer to questions concerning a certain class of algebraic varieties. The underlying elementary concepts are the concept of normal forms, and of quotients of parameter spaces by a suitable equivalence relation, often given by the action of an appropriate group. To give an idea through an elementary geometric problem: how many are the projective equivalence classes of smooth plane curves of degree 4 admitting 4 distinct collinear hyperflexes? A birational approach to moduli existed before, since, by the work of Cayley, Bertini, Chow and van der Waerden, varieties $X^n_d \subset \ensuremath{\mathbb{P}}^N$ in a fixed projective space, having a fixed dimension $n$ and a fixed degree $d$ are parametrized by the so called Chow variety $\sC h(n;d;N)$, over which the projective group $G:= \ensuremath{\mathbb{P}} GL (N+1, \ensuremath{\mathbb{C}})$ acts. And, if $Z$ is an irreducible component of $\sC h(n;d;N)$, the transcendence degree of the field of invariant rational functions $ \ensuremath{\mathbb{C}}(Z)^G$ was classically called the number of polarized moduli for the class of varieties parametrized by $Z$. This topic: `embedded varieties' is treated in the article by Joe Harris in this Handbook. A typical example leading to the concept of stability was: take the fourfold symmetric product $Z$ of $\ensuremath{\mathbb{P}}^2$, parametrizing 4-tuples of points in the plane. Then $Z$ has dimension 8 and the field of invariants has transcendence degree 0. 
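The vanishing of the transcendence degree in this example can be seen by a naive dimension count (a sketch using only standard facts, added here for illustration):

```latex
% Four unordered points of the plane depend on
%   4 \cdot \dim \mathbb{P}^2 = 4 \cdot 2 = 8
% parameters, while the acting group has
%   \dim \mathbb{P}GL(3,\mathbb{C}) = 3^2 - 1 = 8 .
% A 4-tuple in linear general position is a projective basis, so its
% G-orbit is open and dense in Z; hence generic orbits have dimension 8
% and the field of invariant rational functions is \mathbb{C}:
\[
  \dim Z = 8 = \dim \mathbb{P}GL(3,\ensuremath{\mathbb{C}})
  \quad\Longrightarrow\quad
  \operatorname{trdeg}_{\ensuremath{\mathbb{C}}}\, \ensuremath{\mathbb{C}}(Z)^G = \dim Z - 8 = 0 .
\]
```

The interesting invariants therefore live only on the lower-dimensional locus of degenerate configurations, which is exactly where the cross ratio appears.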
This is not a surprise, since 4 points in linear general position are a projective basis, hence they are projectively equivalent; but, if one takes 4 points to lie on a line, then there is a modulus, namely, the cross ratio. This example, plus the other basic example given by the theory of Jordan normal forms of square matrices (explained in \cite{Mum-Suom} in detail) guide our understanding of the basic problem of Geometric Invariant Theory: in which sense may we consider the quotient of a variety by the action of an algebraic group. In my opinion geometric invariant theory, in spite of its beauty and its conceptual simplicity, but in view of its difficulty, is a foundational but not a fundamental tool in classification theory. Indeed one of the most difficult results, due to Gieseker, is the asymptotic stability of pluricanonical images of surfaces of general type; it has as an important corollary the existence of a moduli space for the canonical models of surfaces of general type, but the methods of proof do not shed light on the classification of such surfaces (indeed boundedness for the families of surfaces with given invariants had followed earlier from the results of Moishezon, Kodaira and Bombieri). We use in our title the name `working': this may mean many things, but in particular here our goal is to show how to use the methods of deformation theory in order to classify surfaces with given invariants. The order in our exposition is more guided by historical development and by our education than by a stringent logical nesting. The first guiding concepts are the concepts of Teichm\"uller space and moduli space associated to an oriented compact differentiable manifold $M$ of even dimension. These however are only defined as topological spaces, and one needs the Kodaira-Spencer-Kuranishi theory in order to try to give the structure of a complex space to them.
A first question which we investigate, and about which we give some new results (proposition \ref{kur=teich} and theorem \ref{kur=teich-surf}), is: when is Teichm\"uller space locally homeomorphic to Kuranishi space? This equality has often been taken for granted, of course under the assumption of the validity of the so called Wavrik condition (see theorem \ref{kur3}), which requires the dimension of the space of holomorphic vector fields to be locally constant under deformation. An important role is played by the example of Atiyah about surfaces acquiring a node: we interpret it here as showing that Teichm\"uller space is non separated (theorem \ref{nonseparated}). In section 4 we see that it also underlies some pathological behaviour of automorphisms of surfaces, recently discovered together with Ingrid Bauer: even if deformations of canonical and minimal models are essentially the same, up to finite base change, the same does not occur for deformations of automorphisms (theorems \ref{main1} and \ref{path}). The connected components for deformation of automorphisms of canonical models $(X,G,\alpha)$ are bigger than the connected components for deformation of automorphisms of minimal models $(S,G,\alpha')$, the latter yielding subsets of the moduli spaces which are locally closed but not closed. To describe these results we explain first the Gieseker coarse moduli space for canonical models of surfaces of general type, which has the same underlying reduced space as the coarse moduli stack for minimal models of surfaces of general type.
We do not essentially talk about stacks (for which an elementary presentation can be found in \cite{fantechi}), but we clarify how moduli spaces are obtained by glueing together Kuranishi spaces, and we show the fundamental difference for the \'etale equivalence relation in the two respective cases of canonical and minimal models: we exhibit examples showing that the relation is not finite (proper) in the case of minimal models (a fact which underlies the definition of Artin stacks given in \cite{artinstacks}). We cannot completely settle here the question whether Teichm\"uller space is locally homeomorphic to Kuranishi space for all surfaces of general type, as this question is related to a fundamental question about the non existence of complex automorphisms which are isotopic to the identity, but different from the identity (see however the already mentioned theorem \ref{kur=teich-surf}). Chapter five is dedicated to the connected components of moduli spaces, and to the action of the absolute Galois group on the set of irreducible components of the moduli space, and surveys many recent results. We end by discussing concrete issues showing how one can determine a connected component of the moduli space by resorting to topological or differential arguments; we overview several results, without proofs but citing the references, and finally we prove a new result, theorem \ref{doublecover}, obtained in collaboration with Ingrid Bauer. There would have been many other interesting topics to treat, but these should probably better belong to a `part 2' of the working guide. \section{Analytic moduli spaces and local moduli spaces: Teichm\"uller and Kuranishi space} \subsection{Teichm\"uller space} Consider, throughout this subsection, an oriented real differentiable manifold $M$ of real dimension $2n$ (without loss of generality we may a posteriori assume $M$ and all the rest to be $\sC^{\infty}$ or even $\sC^{\omega}$, i.e., real-analytic). 
At a later point it will be convenient to assume that $M$ is compact. Ehresmann (\cite{ACS}) defined an {\bf almost complex structure} on $M$ as the structure of a complex vector bundle on the real tangent bundle $TM_{\ensuremath{\mathbb{R}}}$: namely, the action of $\sqrt {-1}$ on $TM_{\ensuremath{\mathbb{R}}}$ is provided by an endomorphism $$ J : TM_{\ensuremath{\mathbb{R}}} \ensuremath{\rightarrow} TM_{\ensuremath{\mathbb{R}}}, {\rm{\ with }}\ J^2 = - Id.$$ It is completely equivalent to give the decomposition of the complexified tangent bundle $TM_{\ensuremath{\mathbb{C}}} : = TM_{\ensuremath{\mathbb{R}}} \otimes_{\ensuremath{\mathbb{R}}}\ensuremath{\mathbb{C}}$ as the direct sum of the $i$, respectively $-i$ eigenbundles: $$ TM_{\ensuremath{\mathbb{C}}} = TM^{1,0} \oplus TM^{0,1} {\rm{ \ where }} \ TM^{0,1} = \overline {TM^{1,0} }.$$ In view of the second condition, it suffices to give the subbundle $TM^{1,0} $, or, equivalently, a section of the associated Grassmannian bundle $ \sG (n, TM_{\ensuremath{\mathbb{C}}})$ whose fibre at a point $x \in M$ is the variety of $n$-dimensional vector subspaces of the complex tangent space at $x$, $TM_{\ensuremath{\mathbb{C}}, x}$ (note that the section must take values in the open set $\mathcal T_n$ of subspaces $V$ such that $V$ and $\bar{V}$ generate). The space $\sA \sC (M)$ of almost complex structures, once $TM_{\ensuremath{\mathbb{R}}}$ (hence all associated bundles) is endowed with a Riemannian metric, has a countable number of seminorms (locally, the sup norm on a compact $K$ of all the derivatives of the endomorphism $J$), and is therefore a Fr\'echet space. One may for instance assume that $M$ is embedded in some $\ensuremath{\mathbb{R}}^N$. Assuming that $M$ is compact, one can also consider the Sobolev k-norms (i.e., for derivatives up to order k).
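For orientation, the most basic instance of these definitions (an illustration added here, not part of the original text) is the standard complex structure on $\ensuremath{\mathbb{R}}^2$:

```latex
% On M = \mathbb{R}^2 with coordinates (x,y), define J by
%   J(\partial/\partial x) = \partial/\partial y ,
%   J(\partial/\partial y) = -\,\partial/\partial x ,
% so that J^2 = -\mathrm{Id}. The i-eigenbundle of J acting on
% TM_{\mathbb{C}} is spanned by the usual Wirtinger vector field:
\[
  \frac{\partial}{\partial z}
  = \frac{1}{2}\Big( \frac{\partial}{\partial x}
      - i\,\frac{\partial}{\partial y} \Big),
  \qquad
  TM^{1,0} = \ensuremath{\mathbb{C}} \cdot \frac{\partial}{\partial z},
  \qquad
  TM^{0,1} = \ensuremath{\mathbb{C}} \cdot \frac{\partial}{\partial \bar z}
  = \overline{TM^{1,0}} ,
\]
% recovering the identification (\mathbb{R}^2, J) \cong \mathbb{C}.
```

Indeed $J(\partial_x - i\partial_y) = \partial_y + i\partial_x = i(\partial_x - i\partial_y)$, so $\partial/\partial z$ is an $i$-eigenvector, as claimed.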
A closed subspace of $\sA \sC (M)$ consists of the set $ \sC (M)$ of complex structures: these are the almost complex structures for which there are at each point $x$ local holomorphic coordinates, i.e., functions $z_1, \dots , z_n$ whose differentials span the dual $(TM^{1,0}_y)^{\vee}$ of $TM^{1,0}_y$ for each point $y$ in a neighbourhood of $x$. In general, the splitting $$TM_{\ensuremath{\mathbb{C}}}^{\vee} = (TM^{1,0})^{\vee} \oplus (TM^{0,1})^{\vee}$$ yields a decomposition of exterior differentiation of functions as $ df = \partial f + \bar{\partial} f$, and a function is said to be holomorphic if its differential is complex linear, i.e., $ \bar{\partial} f = 0$. This decomposition $ d= \partial + \bar{\partial} $ extends to higher degree differential forms. The theorem of Newlander-Nirenberg (\cite{NN}), first proven by Eckmann and Fr\"olicher in the real analytic case (\cite{E-F}, see also \cite{montecatini} for a simple proof) characterizes the complex structures through an explicit equation: \begin{theo} {\bf (Newlander-Nirenberg)} An almost complex structure $J$ yields the structure of a complex manifold if and only if it is integrable, which means $ \bar{\partial} ^2 = 0. $ \end{theo} Obviously the group of oriented diffeomorphisms of $M$ acts on the space of complex structures, hence one can define in a few words some basic concepts. \begin{defin} Let $\sD iff ^+ (M)$ be the group of orientation preserving diffeomorphisms of $M$, and let $\sC (M)$ be the space of complex structures on $M$. Let $\sD iff ^0 (M) \subset \sD iff ^+ (M)$ be the connected component of the identity, the so called subgroup of diffeomorphisms which are isotopic to the identity.
Then Dehn (\cite{dehn}) defined the mapping class group of $M$ as $$\sM ap (M) : = \sD iff ^+ (M) / \sD iff ^0 (M),$$ while the Teichm\"uller space of $M$, respectively the moduli space of complex structures on $M$, are defined as $$ \sT (M) : = \sC (M) / \sD iff ^0 (M) , \ \mathfrak M (M) : = \sC (M) / \sD iff ^+ (M).$$ \end{defin} These definitions are very clear, however they only show that these objects are topological spaces, and that $$ (*) \ \mathfrak M (M) = \sT (M) / \sM ap (M) .$$ The simplest examples here are two: complex tori and compact complex curves. The example of complex tori sheds light on the important question concerning the determination of the connected components of $\sC (M)$, which are called the deformation classes in the large of the complex structures on $M$ (cf. \cite{cat02}, \cite{cat04}). Complex tori are parametrized by an open set $\mathcal T_n$ of the complex Grassmann Manifold $Gr(n,2n)$, image of the open set of matrices $\{ \Omega \in Mat(2n,n; \ensuremath{\mathbb{C}}) \ | \ (i)^n det (\Omega \overline {\Omega}) > 0 \}.$ This parametrization is very explicit: if we consider a fixed lattice $ \Ga \cong \ensuremath{\mathbb{Z}}^{2n}$, to each matrix $ \Omega $ as above we associate the subspace $$ V = ( \Omega ) (\ensuremath{\mathbb{C}}^{n}),$$ so that $ V \in Gr(n,2n)$ and $\Ga \otimes \ensuremath{\mathbb{C}} \cong V \oplus \bar{V}.$ Finally, to $ \Omega $ we associate the torus $Y_V : = V / p_V (\Ga)$, $p_V : V \oplus \bar{V} \ensuremath{\rightarrow} V$ being the projection onto the first summand. Not only do we obtain in this way a connected open set inducing all the small deformations (cf. \cite{k-m71}), but indeed, as it was shown in \cite{cat02} (cf. also \cite{cat04}), $\mathcal T_n$ is a connected component of Teichm\"uller space (as the letter $\mathcal T$ suggests).
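In the simplest case $n=1$ the parametrization above reduces, up to normalization, to the familiar picture of elliptic curves; the following computation is a sketch under the stated conventions, added here for illustration:

```latex
% For n = 1, \Omega = (\omega_1, \omega_2)^t \in Mat(2,1;\mathbb{C}),
% and (\Omega\,\overline{\Omega}) denotes the 2x2 matrix with columns
% \Omega and \overline{\Omega}, so the open condition reads
%   i \det(\Omega\,\overline{\Omega})
%     = i(\omega_1\bar\omega_2 - \omega_2\bar\omega_1)
%     = 2\,\mathrm{Im}(\bar\omega_1 \omega_2)
%     = 2\,|\omega_1|^2\,\mathrm{Im}(\omega_2/\omega_1) > 0 .
% Normalizing \omega_1 = 1 and writing \tau := \omega_2, one gets
\[
  V = \ensuremath{\mathbb{C}} \cdot \binom{1}{\tau}, \qquad
  Y_V \cong \ensuremath{\mathbb{C}} / (\ensuremath{\mathbb{Z}} + \tau\,\ensuremath{\mathbb{Z}}), \qquad
  \mathrm{Im}\,\tau > 0 ,
\]
% so that \mathcal{T}_1 is identified with the upper half plane, the
% classical Teichm\"uller space of genus one.
```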
It was observed however by Kodaira and Spencer already in their first article (\cite {k-s58}, and volume II of Kodaira's collected works) that for $n \geq 2$ the mapping class group $ SL ( 2n, \ensuremath{\mathbb{Z}}) $ does not act properly discontinuously on $\mathcal T_n$. More precisely, they show that for every non empty open set $U \subset \mathcal T_n$ there is a point $t$ such that the orbit $ SL ( 2n, \ensuremath{\mathbb{Z}}) \cdot t$ intersects $U$ in an infinite set. This shows that the quotient is not Hausdorff at any point; probably it is not even a non separated complex space. Hence the moral is that for compact complex manifolds it is better to consider, rather than the moduli space, the Teichm\"uller space. Moreover, after some initial constructions by Blanchard and Calabi (cf. \cite{bla53}, \cite{bla54}, \cite{bla56}, \cite{cal58}) of non K\"ahler complex structures $X$ on manifolds diffeomorphic to a product $C \times T$, where $C$ is a compact complex curve and $T$ is a 2-dimensional complex torus, Sommese generalized their constructions, obtaining (\cite{somm75}) that the space of complex structures on a six dimensional real torus is not connected. These examples were then generalized in \cite{cat02} and \cite{cat04} under the name of {\bf Blanchard-Calabi manifolds}, showing (corollary 7.8 of \cite{cat04}) that also the space of complex structures on the product of a curve $C$ of genus $g \geq 2$ with a four dimensional real torus is not connected, and that there is no upper bound for the dimension of Teichm\"uller space (even when $M$ is fixed). The case of compact complex curves $C$ is instead the one which was originally considered by Teichm\"uller. In this case, if the genus $g$ is at least $2$, the Teichm\"uller space $\sT_g$ is a bounded domain, diffeomorphic to a ball, contained in the vector space of quadratic differentials $H^0 ( C, \ensuremath{\mathcal{O}}_C ( 2 K_C))$ on a fixed such curve $C$.
In fact, for each other complex structure on the oriented 2-manifold $M$ underlying $C$ we obtain a complex curve $C'$, and there is a unique extremal quasi-conformal map $ f : C \ensuremath{\rightarrow} C'$, i.e., a map such that the Beltrami distortion $\mu_f : = \bar{\partial} f / \partial f$ has minimal norm (see for instance \cite{hubbard} or \cite{ar-cor}). The fact that the Teichm\"uller space $\sT_g$ is homeomorphic to a ball (see \cite{Tro} for a simple proof) is responsible for the fact that the moduli space of curves $\mathfrak M_g$ is close to being a classifying space for the mapping class group (see \cite{mumshaf} and the articles by Edidin and Wahl in this Handbook). \subsection{Kuranishi space} Interpreting the Beltrami distortion as a closed $(0,1)$-form with values in the dual $(TC^{1,0})$ of the cotangent bundle $(TC^{1,0})^{\vee}$, we obtain a particular case of the Kodaira-Spencer-Kuranishi theory of local deformations. In fact, by Dolbeault's theorem, such a closed form determines a cohomology class in $H^1 ( \Theta_C)$, where $ \Theta_C$ is the sheaf of holomorphic sections of the holomorphic tangent bundle $(TC^{1,0})$: these cohomology classes are interpreted, in the Kodaira-Spencer-Kuranishi theory, as infinitesimal deformations (or derivatives of a family of deformations) of a complex structure: let us review briefly how. Local deformation theory addresses precisely the study of the {\bf small deformations} of a complex manifold $ Y = (M, J_0)$. We shall use here unambiguously the double notation $TM^{0,1} = TY^{0,1} ,\ TM^{1,0} = TY^{1,0} $ to refer to the splitting determined by the complex structure $J_0$.
$J_0$ is a point in $\sC (M)$, and a neighbourhood in the space of almost complex structures corresponds to a distribution of subspaces which are globally defined as graphs of an endomorphism $$ \phi : TM^{0,1} \ensuremath{\rightarrow} TM^{1,0},$$ called a {\bf small variation of complex structure}, since one then defines $$TM^{0,1}_{\phi} : = \{ (u, \phi (u))| \ u \in TM^{0,1} \} \subset TM^{0,1} \oplus TM^{1,0}.$$ In terms of $\bar{\partial}$ operators, the new one is simply obtained by considering $$\bar{\partial}_{\phi} : = \bar{\partial} + \phi, $$ and the integrability condition is given by the Maurer-Cartan equation $$ (MC) \ \ \bar{\partial} (\phi) + \frac{1}{2} [ \phi, \phi ] = 0,$$ where $[ \phi, \phi ] $ denotes the Schouten bracket, which is the composition of exterior product of forms followed by Lie bracket of vector fields, and which is graded commutative. Observe for later use that the form $ F(\phi) : = \bar{\partial} (\phi) + \frac{1}{2} [ \phi, \phi ] $ is $\bar{\partial} $-closed if $ \bar{\partial} (\phi)=0 $, since then $$ \bar{\partial} F(\phi) = \frac{1}{2} \bar{\partial} [ \phi, \phi ] = \frac{1}{2}( [ \bar{\partial} \phi, \phi ] + [ \phi, \bar{\partial} \phi ])= 0.$$ Recall also the theorem of Dolbeault: if $\Theta_Y$ is the sheaf of holomorphic sections of $TM^{1,0}$, then $H^j (\Theta_Y)$ is isomorphic to the quotient space $\frac{Ker(\bar{\partial} ) }{Im (\bar{\partial} )}$ of the space of $\bar{\partial} $-closed $(0,j)$-forms with values in $TM^{1,0}$ modulo the space of $\bar{\partial} $-exact $(0,j)$-forms with values in $TM^{1,0}$. Our $F$ is a map of degree 2 between two infinite-dimensional spaces, the space of $(0,1)$-forms with values in the bundle $TM^{1,0}$ and the space of $(0,2)$-forms with values in $TM^{1,0}$.
Observe that, since our original complex structure $J_0$ corresponds to $\phi = 0$, the derivative $DF$ of the above map $F$ at $\phi = 0$ is simply $$ DF(\phi) = \bar{\partial} (\phi),$$ hence the tangent space to the space of complex structures consists of the space of $ \bar{\partial} $-closed forms of type $(0,1)$ and with values in the bundle $ TM^{1,0}$. Moreover the derivative of $F$ surjects onto the space of $\bar{\partial} $-exact $(0,2)$-forms with values in $TM^{1,0}$. We are now going to show why we can restrict our consideration only to the class of such forms $\phi$ in the Dolbeault cohomology group $$H^1 (\Theta_Y): = ker ( \bar{\partial} ) / Im ( \bar{\partial}) .$$ This is done by answering the question: how does the group of diffeomorphisms act on an almost complex structure $J$? This is in general difficult to specify, but we can consider the infinitesimal action of a 1-parameter group of diffeomorphisms $$\{ \psi_t : = exp ( t (\theta + \bar{\theta} )) | t \in \ensuremath{\mathbb{R}} \},$$ corresponding to a differentiable vector field $\theta$ with values in $ TM^{1,0}$; from now on, we shall assume that $M$ is compact, hence the diffeomorphism $\psi_t$ is defined $\forall t \in \ensuremath{\mathbb{R}}$. We refer to \cite{kur3} and \cite{huy}, lemma 6.1.4, page 260, for the following calculation of the Lie derivative: \begin{lemma} Given a 1-parameter group of diffeomorphisms $\{ \psi_t : = exp ( t (\theta + \bar{\theta} )) | t \in \ensuremath{\mathbb{R}} \}$, $(\frac{d}{dt})_{t=0} (\psi_t ^* (J_0)) $ corresponds to the small variation $ \bar{\partial} (\theta)$.
\end{lemma} The lemma says, roughly speaking, that locally at each point $J$ the orbit for the group of diffeomorphisms in $\sD iff^0(M)$ contains a submanifold, having as tangent space the space of forms in the Dolbeault cohomology class of $0$, which has finite codimension inside another submanifold with tangent space the space of $ \bar{\partial} $-closed forms $\phi$. Hence the tangent space to the orbit space is the space of such Dolbeault cohomology classes. Even if we `heuristically' assume $ \bar{\partial} (\phi)= 0,$ it looks like we are still left with another equation with values in an infinite-dimensional space. However, the derivative $DF$ surjects onto the space of exact forms, while the restriction of $F$ to the subspace of $ \bar{\partial}$-closed forms ($\{ \bar{\partial} (\phi)= 0\}$) takes values in the space of $ \bar{\partial} $-closed forms: this is the moral reason why indeed one can reduce the above equation $ F=0$, associated to a map between infinite-dimensional spaces, to an equation $ k=0$ for a map $k : H^1 (\Theta_Y) \ensuremath{\rightarrow} H^2 (\Theta_Y)$, called the Kuranishi map. This is done explicitly via a miraculous equation (see \cite{k-m71}, \cite{Kodbook}, \cite{kur4} and \cite{montecatini} for details) set up by Kuranishi in order to reduce the problem to a finite-dimensional one (here Kuranishi, see \cite{kur3}, uses the Sobolev $r$-norm in order to be able to use the implicit function theorem for Banach spaces). Here is how the Kuranishi equation is set up. Let $\eta_1, \dots , \eta_m \in H^1 (\Theta_Y)$ be a basis for the space of harmonic $(0,1)$-forms with values in $TM^{1,0}$, and set $t : =(t_1, \dots, t_m) \in \ensuremath{\mathbb{C}}^m$, so that $ t \mapsto \sum_i t_i \eta_i$ establishes an isomorphism $\ensuremath{\mathbb{C}}^m \cong H^1 (\Theta_Y)$.
Then the {\em Kuranishi slice} (see \cite{palais} for a general theory of slices) is obtained by associating to $t$ the unique power series solution of the following equation: $$ \phi (t) = \sum_i t_i \eta_i + \frac{1}{2} \bar{\partial}^* G [ \phi (t), \phi (t)] , $$ satisfying moreover $\phi (t) = \sum_i t_i \eta_i + $ higher order terms ($G$ denotes here the Green operator). The upshot is that for these forms the integrability equation simplifies drastically; the result is summarized in the following definition. \begin{defin} The Kuranishi space $\frak B (Y)$ is defined as the germ of complex subspace of $H^1 (\Theta_Y)$ defined by $\{ t \in \ensuremath{\mathbb{C}}^m | \ H [ \phi (t), \phi (t)] = 0 \} $, where $H$ is the harmonic projector onto the space $ H^2 (\Theta_Y)$ of harmonic forms of type $(0,2)$ and with values in $TM^{1,0}$. The Kuranishi space $\frak B (Y)$ parametrizes exactly the set of small variations of complex structure $ \phi (t)$ which are integrable. Hence over $\frak B (Y)$ we have a family of complex structures which deform the complex structure of $Y$. \end{defin} It follows from the above arguments that the Kuranishi space $\frak B (Y)$ surjects onto the germ of the Teichm\"uller space at the point corresponding to the given complex structure $Y = (M, J_0)$.
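Let us briefly indicate, as a sketch and suppressing the sign conventions (which depend on the definition chosen for the bracket and for the Kuranishi equation, cf. \cite{Kodbook}), why the integrability equation simplifies along the Kuranishi slice. The Hodge decomposition of the $(0,2)$-form $[ \phi (t), \phi (t)]$ reads $$ [ \phi (t), \phi (t)] = H [ \phi (t), \phi (t)] + \bar{\partial} \bar{\partial}^* G [ \phi (t), \phi (t)] + \bar{\partial}^* G \bar{\partial} [ \phi (t), \phi (t)] ,$$ while applying $\bar{\partial}$ to the Kuranishi equation gives, since the forms $\eta_i$ are harmonic, $ \bar{\partial} \phi (t) = \frac{1}{2} \bar{\partial} \bar{\partial}^* G [ \phi (t), \phi (t)]$. Hence, up to the signs just mentioned, the Maurer-Cartan expression $ F(\phi (t)) = \bar{\partial} \phi (t) + \frac{1}{2} [ \phi (t), \phi (t)]$ reduces to $$ F(\phi (t)) = \frac{1}{2} H [ \phi (t), \phi (t)] + \frac{1}{2} \bar{\partial}^* G \bar{\partial} [ \phi (t), \phi (t)] ,$$ and a standard norm estimate, valid for $t$ small, shows that the second summand vanishes as soon as $ H [ \phi (t), \phi (t)] = 0$: thus $\phi (t)$ is integrable precisely for $ t \in \frak B (Y)$.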
This surjection fails badly to be a homeomorphism, and my favourite example for this is (see \cite{catrav}) the one of the Segre ruled surfaces $\ensuremath{\mathbb{F}}_n$, obtained as the blow up at the origin of the projective cone over a rational normal curve of degree $n$, and described by Hirzebruch biregularly as $\ensuremath{\mathbb{P}} ( \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} \oplus \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} (n)), n \geq 0.$ Kuranishi space is here the vector space $$ H^1 (\Theta_{\ensuremath{\mathbb{F}}_n}) \cong {\rm Ext}^1( \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(n), \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} ) \cong H^1 (\ensuremath{\mathbb{P}}^1, \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(-n)),$$ of dimension $n-1$ for $n \geq 2$, parametrizing projectivizations $\ensuremath{\mathbb{P}} (E)$, where the rank 2 bundle $E$ occurs as an extension $$ 0 \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} \ensuremath{\rightarrow} E \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} (n)\ensuremath{\rightarrow} 0. $$ By Grothendieck's theorem, however, $E$ is a direct sum of two line bundles, hence we get as a possible surface only a surface $\ensuremath{\mathbb{F}}_{n-2k}$, for each $ k \leq \frac{n}{2}$. Indeed Teichm\"uller space, in a neighbourhood of the point corresponding to $\ensuremath{\mathbb{F}}_n$, consists of just a finite number of points, one for each $\ensuremath{\mathbb{F}}_{n-2k}$, and $\ensuremath{\mathbb{F}}_{n-2k}$ is in the closure of $\ensuremath{\mathbb{F}}_{n-2h}$ if and only if $k \leq h$. The reason for this phenomenon is the following. Recall that the form $\phi$ can be infinitesimally changed by adding $ \bar{\partial} (\theta)$; now, for $\phi = 0$, nothing is changed if $ \bar{\partial} (\theta)=0$, i.e., if $\theta \in H^0 (\Theta_Y)$ is a holomorphic vector field.
But the exponentials of these vector fields, which are holomorphic on $ Y = \ensuremath{\mathbb{F}}_n$, but not necessarily on $\ensuremath{\mathbb{F}}_{n-2k}$, act transitively on each stratum of the stratification of ${\rm Ext}^1( \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(n), \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1} )$ given by isomorphism type (each stratum is thus the set of surfaces isomorphic to $\ensuremath{\mathbb{F}}_{n-2k}$). In other words, the jumping of the dimension of $H^0 (\Theta_{Y_t})$ for $t \in \frak B (Y)$ is responsible for the phenomenon. Indeed Kuranishi, improving on a result of Wavrik (\cite{Wav}), obtained in \cite{kur3} the following result. \begin{theo} {\bf( Kuranishi's third theorem)}\label{kur3} Assume that the dimension of $H^0 (\Theta_{Y_t})$ for $t \in \frak B (Y)$ is a constant function in a neighbourhood of $0$. Then there is $k \gg 0$ and a neighbourhood $\frak U$ of the identity map in the group $\sD iff (M)$, with respect to the $k$-th Sobolev norm, and a neighbourhood $U$ of $0$ in $ \frak B (Y)$ such that, for each $f \in \frak U$, and $t \neq t' \in U$, $f$ cannot yield a biholomorphism between $Y_t$ and $Y_{t'}$. \end{theo} Kuranishi's theorem (\cite{kur1},\cite{kur2}) shows that Teichm\"uller space can be viewed as being locally dominated by a complex space of locally finite dimension (its dimension, as we already observed, may however be unbounded, cf. cor. 7.7 of \cite{cat04}). A first consequence is that Teichm\"uller space is locally connected by holomorphic arcs, hence the determination of the connected components of $\sC (M)$, respectively of $\sT (M)$, can be done using the original definition of deformation equivalence, given by Kodaira and Spencer in \cite{k-s58}. \begin{cor} Let $Y = (M,J)$, $Y' = (M,J')$ be two different complex structures on $M$.
Define deformation equivalence as the equivalence relation generated by direct disk deformation equivalence, where $Y$, $Y'$ are said to be {\bf direct disk deformation equivalent} if and only if there is a proper holomorphic submersion with connected fibres $f \colon \sY \ensuremath{\rightarrow} \Delta$, where $\sY$ is a complex manifold, $\De \subset \ensuremath{\mathbb{C}}$ is the unit disk, and moreover there are two fibres of $f$ biholomorphic to $Y$, respectively $ Y'$. Then two complex structures on $M$ yield points in the same connected component of $\sT (M)$ if and only if they are in the same deformation equivalence class. \end{cor} In the next subsections we shall illustrate the meaning of the condition that the vector spaces $H^0 (\Theta_{Y_t})$ have locally constant dimension, in terms of deformation theory. Moreover, we shall give criteria implying that Kuranishi and Teichm\"uller space do locally coincide. \subsection{Deformation theory and how it is used} One can define deformations not only for complex manifolds, but also for complex spaces. The technical assumption of flatness then replaces the condition that $\pi$ be a submersion. \begin{defin} 1) A {\bf deformation} of a compact complex space $X$ is a pair consisting of 1.1) a flat proper morphism $ \pi : \ensuremath{\mathcal{X}} \ensuremath{\rightarrow} T$ between connected complex spaces (i.e., $\pi^* : \ensuremath{\mathcal{O}}_{T,t} \ensuremath{\rightarrow} \ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{X}},x}$ is a flat ring extension for each $ x$ with $ \pi (x) = t$); 1.2) an isomorphism $ \psi : X \cong \pi^{-1}(t_0) : = X_0$ of $X$ with a fibre $X_0$ of $\pi$. 2.1) A {\bf small deformation} is the germ $ \pi : (\ensuremath{\mathcal{X}}, X_0) \ensuremath{\rightarrow} (T, t_0)$ of a deformation.
2.2) Given a deformation $ \pi : \ensuremath{\mathcal{X}} \ensuremath{\rightarrow} T$ and a morphism $ f : T' \ensuremath{\rightarrow} T$ with $ f (t'_0) = t_0$, the {\bf pull-back} $ f^* (\ensuremath{\mathcal{X}})$ is the fibre product $ \ensuremath{\mathcal{X}}': = \ensuremath{\mathcal{X}} \times_T T'$ endowed with the projection onto the second factor $T'$ (then $ X \cong X'_0$). 3.1) A small deformation $ \pi : \ensuremath{\mathcal{X}} \ensuremath{\rightarrow} T$ is said to be {\bf versal} or {\bf complete} if every other small deformation $ \pi' : \ensuremath{\mathcal{X}}' \ensuremath{\rightarrow} T'$ is obtained from it via pull-back; it is said to be {\bf semi-universal} if the differential of $ f : T' \ensuremath{\rightarrow} T$ at $t'_0$ is uniquely determined, and {\bf universal} if the morphism $f$ is uniquely determined. 4) Two compact complex manifolds $X,Y$ are said to be {\bf direct deformation equivalent} if there are a deformation $ \pi : \ensuremath{\mathcal{X}} \ensuremath{\rightarrow} T$ of $X$ with $T$ irreducible and where all the fibres are smooth, and an isomorphism $ \psi' : Y \cong \pi^{-1}(t_1) : = X_1$ of $Y$ with a fibre $X_1$ of $\pi$. \end{defin} Let us however come back to the case of complex manifolds, observing that in a small deformation of a compact complex manifold one can shrink the base $T$ and assume that all the fibres are smooth. We can now state the results of Kuranishi and Wavrik (\cite{kur1}, \cite{kur2}, \cite{Wav}) in the language of deformation theory. \begin{theo} {\bf (Kuranishi).} Let $Y$ be a compact complex manifold: then I) the Kuranishi family $ \pi : (\ensuremath{\mathcal{Y}}, Y_0) \ensuremath{\rightarrow} (\frak B (Y), 0)$ of $Y$ is semiuniversal.
II) $(\frak B (Y), 0)$ is unique up to (non canonical) isomorphism, and is a germ of analytic subspace of the vector space $ H^1 (Y, \Theta_Y)$, inverse image of the origin under a local holomorphic map (called Kuranishi map and denoted by $k$) $ k : H^1 (Y, \Theta_Y) \ensuremath{\rightarrow} H^2 (Y, \Theta_Y) $ whose differential vanishes at the origin. Moreover the quadratic term in the Taylor development of the Kuranishi map $k$ is given by the bilinear map $ H^1 (Y, \Theta_Y) \times H^1 (Y, \Theta_Y) \ensuremath{\rightarrow} H^2 (Y, \Theta_Y)$, called Schouten bracket, which is the composition of cup product followed by Lie bracket of vector fields. III) The Kuranishi family is a versal deformation of $Y_t$ for $t \in \frak B (Y)$. IV) The Kuranishi family is universal if $H^0 (Y, \Theta_Y)= 0.$ V) {\bf (Wavrik)} The Kuranishi family is universal if $ \frak B (Y)$ is reduced and $h^0 (Y_t, \Theta_{Y_t}) : = {\rm dim}\ H^0 (Y_t, \Theta_{Y_t}) $ is constant for $t \in \frak B (Y)$ in a suitable neighbourhood of $0$. \end{theo} In fact Wavrik in his article (\cite{Wav}) gives a more general result than V); as pointed out by a referee, the same criterion has also been proven by Schlessinger (prop. 3.10 of \cite{FAR}). Wavrik says that the Kuranishi space is a local moduli space under the assumption that $h^0 (Y_t, \Theta_{Y_t})$ is locally constant. This terminology can however be confusing, as we shall show, since the Kuranishi space is in no way like the moduli space locally, even if one divides out by the action of the group $ Aut (Y)$ of biholomorphisms of $Y$. The first and most concrete question is how one can calculate the Kuranishi space and the Kuranishi family. In this regard, the first resource is to try to use the implicit function theorem.
For this purpose one needs to calculate the Kodaira-Spencer map of a family $ \pi : (\ensuremath{\mathcal{Y}}, Y_0) \ensuremath{\rightarrow} (T, t_0)$ of complex manifolds having a smooth base $T$. This is defined as follows: consider the cotangent bundle sequence of the fibration $$ 0 \ensuremath{\rightarrow} \pi^* (\Omega^1_T) \ensuremath{\rightarrow} \Omega^1_{\ensuremath{\mathcal{Y}}} \ensuremath{\rightarrow} \Omega^1_{\ensuremath{\mathcal{Y}}| T} \ensuremath{\rightarrow} 0,$$ and the direct image sequence of the dual sequence of bundles, $$ 0 \ensuremath{\rightarrow} \pi_* (\Theta_{\ensuremath{\mathcal{Y}}| T}) \ensuremath{\rightarrow} \pi_* (\Theta_{\ensuremath{\mathcal{Y}}}) \ensuremath{\rightarrow} \Theta_T \ensuremath{\rightarrow} \ensuremath{\mathcal{R}}^1 \pi_* (\Theta_{\ensuremath{\mathcal{Y}}| T}) .$$ Evaluation at the point $t_0$ yields a map $\rho$ of the tangent space to $T$ at $t_0$ into $ H^1 (Y_0, \Theta_{Y_0})$, which is the derivative of the variation of complex structure (see \cite{k-m71} for a more concrete description, but beware that the definition given above is the most effective for calculations). \begin{cor} Let $Y$ be a compact complex manifold and assume that we have a family $ \pi : (\ensuremath{\mathcal{Y}}, Y_0) \ensuremath{\rightarrow} (T, t_0)$ with smooth base $T$, such that $Y \cong Y_0$, and such that the Kodaira-Spencer map $\rho_{t_0}$ surjects onto $ H^1 (Y, \Theta_Y)$. Then the Kuranishi space $\frak B (Y)$ is smooth and there is a submanifold $T' \subset T$ which maps isomorphically to $\frak B (Y)$; hence the Kuranishi family is the restriction of $\pi$ to $T'$. \end{cor} The key point is that, by versality of the Kuranishi family, there is a morphism $f : T \ensuremath{\rightarrow} \frak B (Y)$ inducing $\pi$ as a pull-back, and $\rho$ is the derivative of $f$.
This approach clearly works only if $Y$ is {\bf unobstructed}, which simply means that $\frak B (Y)$ is smooth. In general it is difficult to describe the Kuranishi map, and even calculating the quadratic term is nontrivial (see \cite{quintics} for an interesting example). Even when it is difficult to calculate the Kuranishi map, however, Kuranishi theory gives a lower bound for the `number of moduli' of $Y$, since it shows that $\frak B (Y)$ has dimension $\geq h^1 (Y, \Theta_Y) - h^2 (Y, \Theta_Y)$. In the case of curves, $ H^2 (Y, \Theta_Y) = 0$, hence curves are unobstructed; in the case of a surface $S$, $$ {\rm dim}\, \frak B (S ) \geq h^1 ( \Theta_S) - h^2 ( \Theta_S) = - \chi ( \Theta_S) + h^0 ( \Theta_S) = 10 \chi (\ensuremath{\mathcal{O}}_S) - 2 K^2_S + h^0 ( \Theta_S),$$ where the last equality follows from Riemann-Roch for $\Theta_S$ and Noether's formula $12 \chi (\ensuremath{\mathcal{O}}_S) = K^2_S + e(S)$. The above is Enriques' inequality (\cite{enr}, observe that Max Noether postulated equality), proved by Kuranishi in all cases and also for non-algebraic surfaces. Recently there have been two examples where resorting to the Kuranishi theorem in the obstructed case has been useful. The first one appeared in a preprint by Clemens (\cite{clemens1}), who then published the proof in \cite{clemens}; it shows that if a manifold is K\"ahlerian, then there are fewer obstructions than foreseen, since a small deformation $Y_t$ of a K\"ahler manifold is again K\"ahler, hence the Hodge decomposition still holds for $Y_t$. Another independent proof was given by Manetti in \cite{Manobs}. \begin{theo}{\bf (Clemens-Manetti)}\label{Hodgekills} Let $Y$ be a compact complex K\"ahler manifold.
Then there exists an analytic automorphism of $ H^2 (Y, \Theta_Y)$ with linear part equal to the identity, such that the Kuranishi map $ k : H^1 (Y, \Theta_Y) \ensuremath{\rightarrow} H^2 (Y, \Theta_Y) $ takes indeed values in the intersection of the subspaces $$ Ker ( H^2 (Y, \Theta_Y) \ensuremath{\rightarrow} Hom ( H^q (\Omega^p_Y) , H^{q+2} (\Omega^{p-1}_Y))$$ (the linear map is induced by cohomology cup product and tensor contraction). \end{theo} Clemens' proof directly uses the Kuranishi equation, and a similar method was used by S\"onke Rollenske in \cite{rol1}, \cite{rol2} in order to study the deformation theory of complex manifolds yielding left-invariant complex structures on nilmanifolds. Rollenske proved, among other results, the following \begin{theo}{\bf (Rollenske)} Let $Y$ be a compact complex manifold corresponding to a left-invariant complex structure on a real nilmanifold. Assume that the following condition is verified: (*) the inclusion of the complex of left-invariant forms of pure antiholomorphic type in the Dolbeault complex $$(\bigoplus _p H^0(\sA ^{(0,p)}(Y)),\overline{ \partial})$$ yields an isomorphism of cohomology groups. Then every small deformation of the complex structure of $Y$ consists of left-invariant complex structures. \end{theo} The main idea, in spite of the technical complications, is to look at Kuranishi's equation, and to see that everything is then left-invariant. Rollenske went further in \cite{rol3} and showed that for the complex structures on nilmanifolds which are complex parallelizable the Kuranishi space is defined by explicit polynomial equations, and is most of the time singular. There have been several attempts at a more direct approach to the understanding of the Kuranishi map, namely, to proceed more algebraically, giving up the Kuranishi slice.
This approach has been pursued for instance in \cite{SS} and effectively applied by Manetti. For instance, as already mentioned, Manetti (\cite{Manobs}) gave a nice elegant proof of the above theorem \ref{Hodgekills} using the notion of differential graded Lie algebras, abbreviated by the acronym DGLA. The typical example of such a DGLA is provided by the Dolbeault complex $$(\bigoplus _p H^0 ( \sA ^{(0,p)}(TM^{1,0}_Y)),\overline{ \partial})$$ further endowed with the operation of Schouten bracket (here: the composition of exterior product followed by Lie bracket of vector fields), which is graded commutative. The main thrust is to look at solutions of the Maurer-Cartan equation $ \bar{\partial} (\phi) + \frac{1}{2} [ \phi, \phi ] = 0$ modulo gauge transformations, i.e., exponentials of sections in $H^0 ( \sA ^{(0,0)}(TM^{1,0}_Y))$. The deformation theory concepts generalize from the case of deformations of compact complex manifolds to the more general setting of DGLAs, which seem to govern almost all deformation-type problems (see for instance \cite{man09}). \subsection{Kuranishi and Teichm\"uller} Returning to our setting where we considered the closed subspace $ \sC (M)$ of $\sA \sC (M)$ consisting of the set of complex structures on $M$, it is clear that there is a universal tautological family of complex structures parametrized by $ \sC (M)$, and with total space $$\mathfrak U_{ \sC (M)} : = M \times \sC (M) ,$$ on which the group $\sD iff ^+ (M)$ naturally acts, in particular its subgroup $\sD iff ^0(M)$. A rather simple observation is that $\sD iff ^0(M)$ acts freely on $ \sC (M)$ if and only if for each complex structure $Y$ on $M$ the group of biholomorphisms $Aut(Y)$ contains no automorphism which is differentiably isotopic to the identity (other than the identity itself).
\begin{defin} A compact complex manifold $Y$ is said to be {\bf rigidified} if $Aut(Y) \cap \sD iff ^0(Y) = \{ Id_Y \}.$ A compact complex manifold $Y$ is said to be {\bf cohomologically rigidified} if $Aut(Y) \ensuremath{\rightarrow} Aut (H^* (Y, \ensuremath{\mathbb{Z}}))$ is injective, and {\bf rationally cohomologically rigidified} if $Aut(Y) \ensuremath{\rightarrow} Aut (H^* (Y, \ensuremath{\mathbb{Q}}))$ is injective. \end{defin} The condition of being rigidified is obviously stronger than the condition $H^0 (\Theta_Y)=0$, which is necessary, since otherwise there is a positive-dimensional Lie group of biholomorphic self-maps; it is weaker than the condition of being cohomologically rigidified. Compact curves of genus $ g \geq 2$ are rationally cohomologically rigidified: if $\tau : C \ensuremath{\rightarrow} C$ is an automorphism acting trivially on cohomology, then in the product $ C \times C$ the intersection number of the diagonal $\De_C$ with the graph $\Ga_{\tau}$ equals the self-intersection of the diagonal, which is the Euler number $ e(C) = 2 - 2g < 0$. But, if $\tau$ is not the identity, $\Ga_{\tau}$ and $\De_C$ are irreducible and distinct, and their intersection number is a non-negative number, equal to the number of fixed points of $\tau$, counted with multiplicity: a contradiction. It is an interesting question whether compact complex manifolds of general type are rigidified. It is known that already for surfaces of general type there are examples which are not rationally cohomologically rigidified (see a partial classification done by Jin Xing Cai in \cite{cai}), while examples which are not cohomologically rigidified might exist among surfaces isogenous to a product (potential candidates have been proposed by Wenfei Liu).
Jin Xing Cai pointed out to us that, for simply connected (compact) surfaces, by a result of Quinn (\cite{quinn}), every automorphism acting trivially on rational cohomology is isotopic to the identity, and that he conjectures that simply connected surfaces of general type are rigidified (equivalently, rationally cohomologically rigidified). \begin{rem} Assume that the complex manifold $Y$ has $H^0 (\Theta_Y)=0$, or satisfies Wavrik's condition, but is not rigidified: then by Kuranishi's third theorem, there is an automorphism $ f \in Aut (Y) \cap \sD iff ^0(Y)$ which lies outside of a fixed neighbourhood of the identity. Therefore $f$ acts on the Kuranishi space, hence, in order that the natural map from Kuranishi space to Teichm\"uller space be injective, $f$ must act trivially on $ \frak B (Y)$, which means that $f$ remains biholomorphic for all small deformations of $Y$. \end{rem} In any case, the condition of being rigidified implies that the tautological family of complex structures descends to a universal family of complex structures on Teichm\"uller space, $$\mathfrak U_{ \sT (M)} : = (M \times \sC (M)) / \sD iff ^0(M) \ensuremath{\rightarrow} \sC (M) / \sD iff ^0(M) = \sT (M) ,$$ on which the mapping class group acts. Fix now a complex structure yielding a compact complex manifold $Y$, and compare with the Kuranishi family $$\sY \ensuremath{\rightarrow} \frak B (Y).$$ Now, we already remarked that there is a locally surjective continuous map of $ \frak B (Y)$ to the germ $\sT (M)_Y$ of $\sT (M)$ at the point corresponding to the complex structure yielding $Y$. For curves this map is a local homeomorphism, and this fact provides a complex structure on Teichm\"uller space. \begin{rem} Indeed we observe that more generally, if 1) the Kuranishi family is universal at any point, and 2) $ \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$ is a local homeomorphism at every point, then Teichm\"uller space has a natural structure of complex space.
Moreover 3) since $ \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$ is surjective, it is a local homeomorphism iff it is injective; in fact, since $\sT(M)$ has the quotient topology and it is the quotient by a group action, and $ \frak B (Y)$ is a local slice for a subgroup of $\sD iff ^0(M)$, the projection $ \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$ is open. \end{rem} The simple idea used by Arbarello and Cornalba (\cite{ar-cor}) to reprove the result for curves is to establish the universality of the Kuranishi family for continuous families of complex structures. In fact, if any family is locally induced by the Kuranishi family, and we have rigidified manifolds only, then there is a continuous inverse to the map $ \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$, and we have the desired local homeomorphism between Kuranishi space and Teichm\"uller space. Since there are many cases (for instance, complex tori) where Kuranishi and Teichm\"uller space coincide, yet the manifolds are not rigidified, we give a simple criterion. \begin{prop}\label{kur=teich} 1) The continuous map $\pi \colon \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$ is a local homeomorphism between Kuranishi space and Teichm\"uller space if there is an injective continuous map $f \colon \frak B (Y) \ensuremath{\rightarrow} Z$, where $Z$ is Hausdorff, which factors through $\pi$. 2) Assume that $Y$ is a compact K\"ahler manifold and that the local period map $f$ is injective: then $\pi \colon \frak B (Y) \ensuremath{\rightarrow} \sT (M)_Y$ is a local homeomorphism. 3) In particular, this holds if $Y$ is K\"ahler with trivial canonical divisor \footnote{As observed by a referee, the same proof works when $Y$ is K\"ahler with torsion canonical divisor, since one can consider the local period map of the canonical cover of $Y$}. \end{prop} {\it Proof. 
} 1): observe that, since $ \frak B (Y)$ is locally compact and $Z$ is Hausdorff, it follows that $f$ is a homeomorphism onto its image $ Z' : = Im f \subset Z$. Given the factorization $ f = F \circ \pi$, the inverse of $\pi$ is the composition $ f^{-1} \circ F$, hence $\pi$ is a homeomorphism. 2): if $Y$ is K\"ahler, then every small deformation $Y_t$ of $Y$ is still K\"ahler, as is well known (see \cite{k-m71}). Therefore one has the Hodge decomposition $$ H^* (M, \ensuremath{\mathbb{C}}) = H^* (Y_t, \ensuremath{\mathbb{C}}) = \bigoplus_{p,q} H^{p,q} (Y_t)$$ and the corresponding period map $f \colon \frak B (Y) \ensuremath{\rightarrow} \mathfrak D$, where $ \mathfrak D$ is the period domain classifying Hodge structures of type $ \{ (h_{p,q}) |0 \leq p,q, \ p + q \leq 2n \}.$ As shown by Griffiths in \cite{griff1} (see also \cite{griff2} and \cite{vois}), the period map is indeed holomorphic, in particular continuous, and $ \mathfrak D$ is a separated complex manifold, hence 1) applies. 3): the previous criterion applies in several situations, for instance when $Y$ is a compact K\"ahler manifold with trivial canonical bundle.
In this case the Kuranishi space is smooth (this is the so-called Bogomolov-Tian-Todorov theorem, compare \cite{bogomolov}, \cite{tian}, \cite{todorov}, and see also \cite{ran} and \cite{kawamata} for more general results) and the local period map for the periods of holomorphic $n$-forms is an embedding, since the derivative of the period map, according to \cite{griff1}, is given by cup product $$\mu \colon H^1 (Y, \Theta_Y) \ensuremath{\rightarrow} \oplus_{p,q} Hom ( H^q (\Omega^p_Y) , H^{q+1} (\Omega^{p-1}_Y))$$ $$ = \oplus_{p,q} Hom (H^{p,q}(Y) , H^{p-1,q+1}(Y)) .$$ If we apply it for $q=0$, $p=n$, we get that $\mu$ is injective, since by Serre duality $ H^1 (Y, \Theta_Y) = H^{n-1} (Y, \Omega^1_Y \otimes \Omega^n_Y)^{\vee}$ and cup product with $H^0 (\Omega^n_Y)$ yields an isomorphism with $H^{n-1} (Y, \Omega^1_Y )^{\vee}$, which is by Serre duality exactly isomorphic to $ H^{1} (\Omega^{n-1}_Y)$. \qed As we shall see later, a similar criterion applies to show `Kuranishi = Teichm\"uller' for most minimal models of surfaces of general type. For more general complex manifolds satisfying the Wavrik condition, the Kuranishi family is universal at any point, so a program which has been in the air for quite a long time has been to glue together these Kuranishi families, by a sort of analytic continuation, giving another variant of Teichm\"uller space. We hope to be able to return to this point in the future.
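Let us spell out a concrete instance of 3), using only standard numerical facts not derived above: for a K3 surface $Y$ the triviality of $K_Y$ gives $\Theta_Y \cong \Omega^1_Y$, hence, using $b_1 (Y) = 0$ and Serre duality, $$ H^0 (\Theta_Y) \cong H^0 (\Omega^1_Y) = 0, \ \ H^2 (\Theta_Y) \cong H^0 (\Omega^1_Y)^{\vee} = 0, \ \ H^1 (\Theta_Y) \cong H^1 (\Omega^1_Y) \cong \ensuremath{\mathbb{C}}^{20}.$$ Therefore the Kuranishi space is smooth of dimension $20$, the Kuranishi family is universal by IV) of Kuranishi's theorem, and by proposition \ref{kur=teich} the germ of Teichm\"uller space at the point corresponding to $Y$ is a smooth complex manifold of dimension $20$.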
\section{The role of singularities} \subsection{Deformation of singularities and singular spaces} The basic analytic result is the generalization, due to Grauert, of Kuranishi's theorem (\cite{grauert}; see also \cite{sernesi} for the algebraic analogue). \begin{theo} {\bf Grauert's Kuranishi type theorem for complex spaces.} Let $X$ be a compact complex space: then I) there is a semiuniversal deformation $ \pi : (\ensuremath{\mathcal{X}}, X_0) \ensuremath{\rightarrow} (T, t_0)$ of $X$, i.e., a deformation such that every other small deformation $ \pi' : (\ensuremath{\mathcal{X}}', X'_0) \ensuremath{\rightarrow} (T', t'_0)$ is the pull-back of $\pi$ for an appropriate morphism $f : (T', t'_0) \ensuremath{\rightarrow} (T, t_0)$ whose differential at $t'_0$ is uniquely determined. II) $(T, t_0)$ is unique up to isomorphism, and is a germ of analytic subspace of the vector space $\ensuremath{\mathbb{T}} ^1$ of first order deformations. $(T, t_0)$ is the inverse image of the origin under a local holomorphic map (called Kuranishi map and denoted by $k$) $$ k : \ensuremath{\mathbb{T}} ^1 \ensuremath{\rightarrow} \ensuremath{\mathbb{T}} ^2 $$ to the finite dimensional vector space $\ensuremath{\mathbb{T}} ^2$ (called {\bf obstruction space}), whose differential vanishes at the origin (the point corresponding to $t_0$). If $X$ is reduced, or if the singularities of $X$ are local complete intersection singularities, then $\ensuremath{\mathbb{T}} ^1 = {\rm Ext }^1 (\Omega^1_X, \ensuremath{\mathcal{O}}_X ).$ If the singularities of $X$ are local complete intersection singularities, then $ \ensuremath{\mathbb{T}} ^2 = {\rm Ext }^2 (\Omega^1_X, \ensuremath{\mathcal{O}}_X) $ .
\end{theo} Recall once more that this result reproves the theorem of Kuranishi (\cite{kur1}, \cite{kur2}), which dealt with the case of compact complex manifolds, where $\ensuremath{\mathbb{T}} ^j = {\rm Ext }^j (\Omega^1_X, \ensuremath{\mathcal{O}}_X ) \cong H^j (X, \Theta_X)$, $\Theta_X : = \sH om ( \Omega^1_X, \ensuremath{\mathcal{O}}_X ) $ being the sheaf of holomorphic vector fields. There is also the local variant, concerning isolated singularities, which was obtained by Grauert in \cite{grauert1}, extending the earlier result by Tyurina in the unobstructed case where $ {\sE xt }^2 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{x_0} = 0$ (\cite{tyurina1}). \begin{theo} {\bf Grauert's theorem for deformations of isolated singularities.} Let $(X, x_0)$ be the germ of an isolated singularity of a reduced complex space: then I) there is a semiuniversal deformation $ \pi : (\ensuremath{\mathcal{X}}, X_0, x_0) \ensuremath{\rightarrow} (\ensuremath{\mathbb{C}}^n, 0) \times (T, t_0)$ of $(X, x_0)$, i.e., a deformation such that every other small deformation $ \pi' : (\ensuremath{\mathcal{X}}', X'_0, x'_0) \ensuremath{\rightarrow} (\ensuremath{\mathbb{C}}^n, 0) \times (T', t'_0)$ is the pull-back of $\pi$ for an appropriate morphism $f : (T', t'_0) \ensuremath{\rightarrow} (T, t_0)$ whose differential at $t'_0$ is uniquely determined.
II) $(T, t_0)$ is unique up to isomorphism, and is a germ of an analytic subspace of the vector space $\ensuremath{\mathbb{T}} ^1_{x_0}: = {\sE xt }^1 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{x_0},$ the inverse image of the origin under a local holomorphic map (called Kuranishi map and denoted by $k$) $$ k : \ensuremath{\mathbb{T}} ^1_{x_0} = {\sE xt }^1 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{x_0} \ensuremath{\rightarrow} \ensuremath{\mathbb{T}} ^2_{x_0} $$ to the finite dimensional vector space $\ensuremath{\mathbb{T}} ^2_{x_0}$ (called {\bf obstruction space}), whose differential vanishes at the origin (the point corresponding to $t_0$). The obstruction space $ \ensuremath{\mathbb{T}} ^2_{x_0} $ equals $ {\sE xt }^2 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{x_0}$ if the singularity of $X$ is normal. \end{theo} For the last assertion, see \cite{sernesi}, prop. 3.1.14, page 114. The case of complete intersection singularities was shown to be unobstructed by Tyurina in the hypersurface case (\cite{tyurina1}), and then in general by Kas and Schlessinger in \cite{k-s}. This case lends itself to a very explicit description. Let $(X,0)\subset \ensuremath{\mathbb{C}}^n$ be the complete intersection $ f^{-1} (0)$, where $$ f = (f_1, \dots , f_p) \colon (\ensuremath{\mathbb{C}}^n,0) \ensuremath{\rightarrow} (\ensuremath{\mathbb{C}}^p,0).$$ Then the ideal sheaf $\sI _X$ of $X$ is generated by $(f_1, \dots , f_p)$ and the conormal sheaf $\sN ^{\vee}_X : = \sI _X / \sI _X^2 $ is locally free of rank $p$ on $X$.
Dualizing the exact sequence $$ 0 \ensuremath{\rightarrow} \sN ^{\vee}_X \cong \ensuremath{\mathcal{O}}_X^p \ensuremath{\rightarrow} \Omega^1_{\ensuremath{\mathbb{C}}^n} \otimes \ensuremath{\mathcal{O}}_X \cong \ensuremath{\mathcal{O}}_X^n \ensuremath{\rightarrow} \Omega^1_X \ensuremath{\rightarrow} 0 $$ we obtain (as $ \Theta_X : = \sH om ( \Omega^1_X, \ensuremath{\mathcal{O}}_X)$) $$ 0 \ensuremath{\rightarrow} \Theta_X \ensuremath{\rightarrow} \Theta_{\ensuremath{\mathbb{C}}^n} \otimes \ensuremath{\mathcal{O}}_X \cong \ensuremath{\mathcal{O}}_X^n \ensuremath{\rightarrow} \sN_X \cong \ensuremath{\mathcal{O}}_X^p \ensuremath{\rightarrow} \sE xt^1 ( \Omega^1_X, \ensuremath{\mathcal{O}}_X) \ensuremath{\rightarrow} 0 ,$$ which represents $\ensuremath{\mathbb{T}} ^1_{0}: = {\sE xt }^1 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{0}$ as a quotient of $ \ensuremath{\mathcal{O}}_{X,0}^p$; it is a finite dimensional vector space, whose dimension is denoted as usual by $\tau$ (the so-called Tyurina number). Let $(g^1, \dots,g^{\tau}) \in \ensuremath{\mathcal{O}}_{X,0}^p$, $g^i = (g^i_1, \dots, g^i_p) $, represent a basis of $\ensuremath{\mathbb{T}} ^1_{0}$. Consider now the complete intersection $$ (\mathfrak X , 0) : = V (F_1, \dots, F_p) \subset (\ensuremath{\mathbb{C}}^n \times \ensuremath{\mathbb{C}}^{\tau},0)$$ where $$ F_j (x,t) : = f_j (x) + \sum_{i=1}^{ \tau} t_i g^i_j(x).$$ Then $$ \xymatrix { (X, 0) \ar@{^{(}->}[r]^i &(\mathfrak X , 0) \ar[r]^\phi & (\ensuremath{\mathbb{C}}^{\tau},0)}$$ where $i$ is the inclusion and $\phi$ is the projection, yields the semiuniversal deformation of $ (X,0)$.
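To see why $\phi$ induces an isomorphism on first order deformations (a standard check, which we make explicit here), differentiate the equations with respect to the parameters: $$ \frac{\partial F_j}{\partial t_i}\Big|_{t=0} = g^i_j ,$$ so the Kodaira-Spencer map of the family $\phi$ at the origin sends $\partial / \partial t_i$ to the class of $g^i = (g^i_1, \dots, g^i_p)$ in $\ensuremath{\mathbb{T}} ^1_{0}$, and is therefore bijective by the very choice of the $g^i$.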
In the case $p=1$ of hypersurfaces the above representation of $\ensuremath{\mathbb{T}} ^1_{0}: = {\sE xt }^1 (\Omega^1_X, \ensuremath{\mathcal{O}}_X )_{0}$ as a quotient of $ \ensuremath{\mathcal{O}}_{X,0}$ yields the well known formula: $$ \ensuremath{\mathbb{T}} ^1_{0} = \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{C}}^n,0} / (f, f_{x_1}, \dots, f_{x_n}),$$ where $ f_{x_i} : = \frac{\partial f}{\partial x_i}.$ The easiest example is then the one of an ordinary quadratic singularity, or node, where we have $p=1$ and $ f = \sum_{i=1}^{n} x_i^2$. Then our module is $\ensuremath{\mathbb{T}} ^1_{0} = \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{C}}^n, 0}/ (x_1, \dots, x_n) \cong \ensuremath{\mathbb{C}}$ and the deformation is $$ f + t = \sum_{i=1}^n x_i^2 + t = 0. $$ \subsection{ Atiyah's example and three of its implications} Around 1958 Atiyah (\cite{atiyah}) made a very important discovery concerning families of surfaces acquiring ordinary double points. His result was later extended by Brieskorn and Tyurina (\cite{tju}, \cite{brieskorn2}, \cite{nice}) to the more general case of rational double points, which are the rational hypersurface singularities, and which are referred to as RDP's or as Du Val singularities (Patrick Du Val classified them as the surface singularities which do not impose adjunction conditions, see \cite{duval}, \cite{artin}, \cite{reid1}, \cite{reid2}) or as Kleinian singularities (they are analytically isomorphic to a quotient $\ensuremath{\mathbb{C}}^2 / G$, with $ G \subset SL ( 2 , \ensuremath{\mathbb{C}})$).
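For instance, for the $A_2$-singularity $\{ xy = z^3 \} \subset \ensuremath{\mathbb{C}}^3$ (a sketch along the same lines as the node, anticipating the $A_n$-singularities which appear later in the discussion of simultaneous resolution), the formula above gives $$ \ensuremath{\mathbb{T}} ^1_{0} = \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{C}}^3,0} / (xy - z^3,\, y,\, x,\, z^2) \cong \ensuremath{\mathbb{C}} \oplus \ensuremath{\mathbb{C}} z ,$$ so that $\tau = 2$ and the semiuniversal deformation is $$ xy = z^3 + t_1 z + t_2 .$$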
The crucial part of the story takes place at the local level, i.e., when one deforms the ordinary double point singularity $$ X = \{ (u,v,w ) \in \ensuremath{\mathbb{C}}^3 | w^2 = uv \} .$$ In this case the semiuniversal deformation is, as we saw, the family $$\ensuremath{\mathcal{X}} = \{ (u,v,w,t ) \in \ensuremath{\mathbb{C}}^4| w^2 - t = uv \,\}$$ mapping to $\ensuremath{\mathbb{C}}$ via the projection onto the variable $t$; and one observes here that $\ensuremath{\mathcal{X}} \cong \ensuremath{\mathbb{C}}^3$. The minimal resolution of $X$ is obtained by blowing up the origin, but we cannot put the minimal resolutions of the $X_t$ together. One can give two reasons for this fact. The first is algebro-geometric: any normal modification of $\ensuremath{\mathcal{X}}$ which is an isomorphism outside the origin, and such that the fibre over the origin has dimension at most 1, must necessarily be an isomorphism. The second reason is that the restriction of the family (of manifolds with boundary) to the punctured disk $\{ t \neq 0 \}$ is not topologically trivial, its monodromy being given by a Dehn twist around the vanishing two dimensional sphere (see \cite{milnor}). As a matter of fact the square of the Dehn twist is differentiably isotopic to the identity, as is shown by the fact that the family $X_t$ admits a simultaneous resolution after we perform the base change $$ t = \tau^2 \Rightarrow w^2 - \tau^2 = uv.$$ \begin{defin} Let $\ensuremath{\mathcal{X}} \ensuremath{\rightarrow} T'$ be the family where $$\ensuremath{\mathcal{X}} = \{ (u,v,w,\tau ) | w^2 - \tau^2 = uv \}$$ and $T'$ is the affine line with coordinate $\tau$.
$\ensuremath{\mathcal{X}} $ has an isolated ordinary quadratic singularity which can be resolved either by blowing up the origin (in this way we get an exceptional divisor $ \cong \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$) or by taking the closure of the graph of one of two distinct rational maps to $\ensuremath{\mathbb{P}}^1$. The two latter resolutions are called the {\bf small resolutions}. One defines $\ensuremath{\mathcal{S}} \subset \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{P}}^1$ to be one of the small resolutions of $\ensuremath{\mathcal{X}}$, and $\ensuremath{\mathcal{S}}'$ to be the other one, namely: $$\ensuremath{\mathcal{S}} : \{ ((u,v,w,\tau), \xi) \in \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{P}}^1| \ \frac{w-\tau}{u} = \frac{v}{w+\tau} = \xi \}$$ $$\ensuremath{\mathcal{S}}' : \{ ((u,v,w,\tau), \eta) \in \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{P}}^1| \ \frac{w+\tau}{u} = \frac{v}{w-\tau} = \eta \}.$$ \end{defin} Now, the two families on the disk $\{ \tau \in \ensuremath{\mathbb{C}}| | \tau| < \epsilon \}$ are clearly isomorphic via the automorphism $\sigma_4$ such that $\sigma_4(u,v,w,\tau) = (u,v,w,- \tau) $. On the other hand, the restrictions of the two families to the punctured disk $\{ \tau \neq 0\}$ are clearly isomorphic via the automorphism acting as the identity on the variables $(u,v,w,\tau) $, since over the punctured disk these two families coincide with the family $\ensuremath{\mathcal{X}}$. This automorphism yields a birational map $\iota : \ensuremath{\mathcal{S}} \dasharrow \ensuremath{\mathcal{S}}'$ which however does not extend biregularly, since $\xi u = v \eta ^{-1}$. The automorphism $\sigma : = \sigma_4 \circ \iota $ acts on the restriction $\ensuremath{\mathcal{S}}^*$ of the family $\ensuremath{\mathcal{S}}$ to the punctured disk, and it acts on the given differentiably trivialized family $\ensuremath{\mathcal{S}}^*$ of manifolds with boundary via the Dehn twist on the vanishing 2-sphere.
For $\tau = 0$ the Dehn twist cannot yield a holomorphic map $\phi \colon S_0 \ensuremath{\rightarrow} S_0$, since every biholomorphism $\phi$ sends the (-2)-curve $E$ to itself ($E$ is the only holomorphic curve in its homology class), hence it acts on the normal bundle of $E$ by scalar multiplication, therefore by an action which is homotopic to the identity in a neighbourhood of $E$: a contradiction. From the above observations, one can derive several `moral' consequences, when one globalizes the procedure. Assume now that we have a family of compact algebraic surfaces $X_t$ such that $X_t$ is smooth for $ t \neq 0$, and, for $t =0$, it acquires a node. We can then take the corresponding families $S_{\tau}$ and $S'_{\tau}$ of smooth surfaces. We can view the family $S_{\tau}$ as the image of a 1-dimensional complex disk in the Teichm\"uller space $\sT(S_0)$ of $S_0$, and then the Dehn twist $\sigma$ yields a self map $$\sigma^* \colon \sT(S_0) \ensuremath{\rightarrow} \sT(S_0).$$ It has the property that $\sigma^* (S_{\tau}) = S_{- \tau}$ for $\tau \neq 0$, but for $\tau = 0$ we have that $\sigma^* (S_{0}) \neq S_0$, since a map homotopically equivalent to the Dehn twist cannot yield a biholomorphic map. Hence we get two different points of $\sT(S_0)$, namely $\sigma^* (S_{0})$ and $S_0$, which are both limits $ \lim_{\tau \ensuremath{\rightarrow} 0} \sigma^* (S_{\tau}) = \lim_{\tau \ensuremath{\rightarrow} 0} S_{- \tau}$, and the conclusion is the following theorem, which is a slightly different version of a result of Burns and Rapoport (\cite{b-r}). \begin{theo}\label{nonseparated} Let $S_0$ be a compact complex surface which contains a (-2)-curve $E$, i.e., a smooth rational curve with self intersection equal to $-2$, obtained from the resolution of a normal surface $X_0$ with exactly one singular point, which is an ordinary quadratic singularity. Assume further that $X_0$ admits a smoothing deformation.
Then the Teichm\"uller space $\sT(S_0)$ is not separated. \end{theo} That such a surface exists is obvious: it suffices, for each degree $ d \geq 2$, to consider a surface $X_0$ in $ \ensuremath{\mathbb{P}}^3$, with equation $ f_0 (x_0, x_1,x_2,x_3) = 0$, and such that there is no monomial divisible by $x_0^{d-1}$ appearing in $f_0$ with non zero coefficient. The required smoothing is gotten by setting $X_t : = \{ f_t : = f_0 + t x_0^d = 0 \}$. This example can of course be interpreted in a second way, and with a completely different wording (non separatedness of some Artin moduli stack), which I will try to briefly explain in a concrete way. It is clear that $\sigma^* (S_{0}) \neq S_0$ in Teichm\"uller space, but $\sigma^* (S_{0})$ and $ S_0$ yield the same point in the moduli space. Think of the family $S_{\tau}$ as a 1 dimensional complex disk in the Kuranishi space of $S_0$: then when we map this disk to the moduli space we have two isomorphic surfaces, namely, since $\sigma^* (S_{\tau}) = S_{- \tau}$ for $\tau \neq 0$, we identify the point $\tau$ with the point $-\tau$. If we consider a disk $\De $, then we get an equivalence relation in $\De \times \De$ which identifies $\tau$ with the point $-\tau$. We do not need to say that $\tau = 0$ is equivalent to itself, because this is self evident. However, we have seen that we cannot extend the self map $\sigma$ of the family $\ensuremath{\mathcal{S}}^*$ to the full family $\ensuremath{\mathcal{S}}$. Therefore, if we require that equivalences come from families, or, in other words, when we glue Kuranishi families, we obtain the following. The equivalence relation in $\De \times \De$ is the image of two complex curves, one being the disk $\De$, the other being the punctured disk $\De^*$. $\De$ maps to the diagonal $\De \times \De$, i.e., $ \tau \mapsto (\tau, \tau)$, while the punctured disk $\De^*$ maps to the antidiagonal, deprived of the origin, that is,$ \tau \neq 0, \tau \mapsto (\tau, - \tau)$. 
The quotient in the category of complex spaces is indifferent to the fact that we cannot have a family extending the isomorphism $\iota$ given previously across $\tau = 0$, and the quotient is the disk $\De_t$ with coordinate $ t : = \tau^2$. But over the disk $\De_t$ there will not be, as already remarked, a family of smooth surfaces. This example by Atiyah motivated Artin in \cite{artinstacks} to introduce his theory of Artin stacks, where one takes quotients by maps which are \'etale on both factors, but not proper (as the map of $\De^*$ into $\De \times \De$). A third implication of Atiyah's example will show up in the section on automorphisms. \section{Moduli spaces for surfaces of general type} \subsection{Canonical models of surfaces of general type.} In the birational class of a non-ruled surface there is, by the theorem of Castelnuovo (see e.g. \cite{arcata}), a unique (up to isomorphism) minimal model $S$. We shall assume from now on that $S$ is a smooth minimal (projective) surface of general type: this is equivalent (see \cite{arcata}) to the two conditions: (*) $K_S^2 > 0$ and $K_S$ is nef (we recall that a divisor $D$ is said to be {\bf nef} if, for each irreducible curve $C$, we have $ D \cdot C \geq 0$). It is very important that, as shown by Kodaira in \cite{kod-1}, the class of non-minimal surfaces is stable by small deformation; on the other hand, a small deformation of a minimal algebraic surface of general type is again minimal (see prop. 5.5 of \cite{bpv}). Therefore, the class of minimal algebraic surfaces of general type is stable by deformation in the large.
Even if the canonical divisor $K_S$ is nef, it need not however be ample; indeed {\em the canonical divisor $K_S$ of a minimal surface of general type $S$ is ample iff there does not exist an irreducible curve $C$ ($\neq 0$) on $S$ with $K_S \cdot C=0$ $\Leftrightarrow $ there is no (-2)-curve $C$ on $S$, i.e., a curve such that $C\cong \ensuremath{\mathbb{P}}^1$, and $C^2=-2$.} The number of (-2)-curves is bounded by the rank of the N\'eron-Severi lattice $NS(S)$ of $S$, and these curves can be contracted by a morphism $\pi \colon S \ensuremath{\rightarrow} X$, where $X$ is a normal surface which is called the {\bf canonical model} of $S$. The singularities of $X$ are exactly Rational Double Points (in the terminology of \cite{artin}), also called Du Val or Kleinian singularities, and $X$ is Gorenstein with canonical divisor $K_X$ such that $ \pi^* ( K_X) = K_S$. The canonical model is directly obtained from the $5$-th pluricanonical map of $S$, but it is abstractly defined as the Projective Spectrum (set of homogeneous prime ideals) of the canonical ring $$\ensuremath{\mathcal{R}} (S) : = \ensuremath{\mathcal{R}}(S,K_S) : = \bigoplus_{m \geq 0}H^0 (\ensuremath{\mathcal{O}}_S (mK_S)).$$ In fact if $S$ is a surface of general type the canonical ring $\ensuremath{\mathcal{R}}(S)$ is a graded $\ensuremath{\mathbb{C}}$-algebra of finite type (as first proven by Mumford in \cite{mum1}), and then the canonical model is $X=\mbox{\rm Proj}(\ensuremath{\mathcal{R}}(S,K_S))= \mbox{\rm Proj}(\ensuremath{\mathcal{R}}(X,K_X))$. By choosing a minimal homogeneous set of generators of $\ensuremath{\mathcal{R}}(S)$ of degrees $d_1, \dots, d_r$ one obtains a natural embedding of the canonical model $X$ into a weighted projective space (see \cite{dolg}). This is however not convenient for applying Geometric Invariant Theory, since one would then have to divide by non reductive groups, unlike the case of pluricanonical maps, which we now discuss.
In this context the following is the content of the theorem of Bombieri (\cite{bom}), which shows, with a very effective estimate, the boundedness of the family of surfaces of general type with fixed invariants $K^2_S$ and $ \chi (S) := \chi (\ensuremath{\mathcal{O}}_S)$. \begin{theo}{\bf (Bombieri)} Let $S$ be a minimal surface of general type, and consider the linear system $|mK_S|$ for $m\ge 5$, or for $m=4$ when $K^2_S \ge 2$. Then $|mK_S|$ yields a birational morphism $\varphi_m$ onto its image, called the m-th pluricanonical map of $S$, which factors through the canonical model $X$ as $\varphi_m = \psi_m \circ \pi$, where $\psi_m$ is the m-th pluricanonical map of $X$, associated to the linear system $|mK_X|$, and gives an embedding of the canonical model $$ \psi_m \colon X \overset{\cong}{\ensuremath{\rightarrow}} X_m \subset \ensuremath{\mathbb{P}} H^0 (\ensuremath{\mathcal{O}}_X (mK_X))^{\vee} = \ensuremath{\mathbb{P}} H^0 (\ensuremath{\mathcal{O}}_S (mK_S))^{\vee} .$$ \end{theo} \subsection{ The Gieseker moduli space} The theory of deformations of complex spaces is conceptually simple but technically involved, because Kodaira, Spencer, Kuranishi, Grauert et al. had to prove the convergence of the power series solutions which they produced. It is a matter of life that tori and algebraic K3 surfaces have small deformations which are not algebraic. But there are cases, like the case of curves and of surfaces of general type, where all small deformations are still projective, and then life simplifies incredibly, since one can deal only with projective varieties or projective subschemes. For these, the most natural parametrization, from the point of view of deformation theory, is given by the Hilbert scheme, introduced by Grothendieck (\cite{groth}). Let us illustrate this concept through the case of surfaces of general type.
For these, as we already wrote, the first important consequence of the theorem on pluricanonical embeddings is the finiteness, up to deformation, of the minimal surfaces $S$ of general type with fixed invariants $\chi (S) = a $ and $K^2_S = b$. In fact, their 5-canonical models $X_5$ are surfaces with Rational Double Points as singularities and of degree $ 25 b$ in a fixed projective space $ \ensuremath{\mathbb{P}}^N$, where $ N + 1 = P_5: = h^0 (5 K_S) = \chi (S) + 10 K^2_S = a + 10 b$. The Hilbert polynomial of $X_5$ equals $$ P (m) := h^0 (5 m K_S) = a + \frac{1}{2}( 5 m -1) 5 m b .$$ Grothendieck (\cite{groth}) showed that there are i) an integer $d$ and ii) a subscheme $\ensuremath{\mathbb{H}} = \ensuremath{\mathbb{H}}_P$ of the Grassmannian of subspaces of codimension $P(d)$ of $ H^0 (\ensuremath{\mathbb{P}}^N, \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^N} (d))$, called the Hilbert scheme, such that iii) $\ensuremath{\mathbb{H}} $ parametrizes the degree $d$ graded pieces $ H^0 (\sI_{\Sigma}(d))$ of the homogeneous ideals of all the subschemes $\Sigma \subset \ensuremath{\mathbb{P}}^N$ having the given Hilbert polynomial $P$. We can then talk about the Hilbert point of $\Sigma$ as the Pl\"ucker point $$ \Lambda^{P(d)} (r_{\Sigma}^{\vee}),$$ $$ r_{\Sigma} : H^0 (\ensuremath{\mathbb{P}}^N, \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^N} (d)) \ensuremath{\rightarrow} H^0 (\Sigma, \ensuremath{\mathcal{O}}_{\Sigma} (d))$$ being the restriction homomorphism (surjective for $d$ large). Inside $\ensuremath{\mathbb{H}}$ one has the open set $$ \ensuremath{\mathbb{H}} ^0 : = \{ \Sigma | \Sigma {\rm \ is\ reduced \ with \ only \ R.D.P.'s \ as \ singularities }\}.
$$ This is plausible, since rational double points are hypersurface singularities, and first of all the dimension of the Zariski tangent space is upper semicontinuous, as is the multiplicity: some more work is needed to show that the further property of being a `rational' double point is open. The result has been extended in greater generality by Elkik in \cite{elkik}. One can use the following terminology (based on results of Tankeev in \cite{tankeev}). \begin{defin}The 5-pseudo moduli space of surfaces of general type with given invariants $ K^2$, $\chi$ is the closed subscheme $ \ensuremath{\mathbb{H}} _0 \subset \ensuremath{\mathbb{H}}^0$ (defined by fitting ideals of the direct image of $\omega_{\Sigma}^{ \otimes 5} \otimes \ensuremath{\mathcal{O}}_{\Sigma}(-1) $), $$ \ensuremath{\mathbb{H}} _0 (\chi, K^2) : = \{ \Sigma \in \ensuremath{\mathbb{H}}^0| \omega_{\Sigma}^{ \otimes 5} \cong \ensuremath{\mathcal{O}}_{\Sigma}(1) \}. $$ \end{defin} Since $\ensuremath{\mathbb{H}}_0$ is a quasi-projective scheme, it has a finite number of irreducible components, called the {\bf deformation types} of the surfaces of general type with given invariants $ K^2$, $\chi$. As we shall see, the above deformation types of canonical models coincide with the equivalence classes for the relation of deformation equivalence between minimal surfaces of general type. \begin{rem}The group $ \ensuremath{\mathbb{P}} GL (N+1 , \ensuremath{\mathbb{C}})$ acts on $\ensuremath{\mathbb{H}}_0$ with finite stabilizers (corresponding to the groups of automorphisms of each surface) and the orbits correspond to the isomorphism classes of minimal surfaces of general type with invariants $ K^2$, $\chi$. Tankeev in \cite{tankeev} showed that a quotient by this action exists not only as a complex analytic space, but also as a Deligne-Mumford stack (\cite{d-m}).
\end{rem} Saying that the quotient is a stack is a way to remedy the fact that, over the locus of surfaces with automorphisms, there does not exist a universal family, so that we have only, in Mumford's original terminology, a coarse and not a fine moduli space. In a technically very involved paper (\cite{gieseker}) Gieseker showed that, if one replaces the 5-canonical embedding by an $m$-canonical embedding with much higher $m$, then the Hilbert point $ \Lambda^{P(d)} (r_{\Sigma}^{\vee})$ is a stable point; this means that, beyond the already mentioned property that the stabilizer is finite, there are polynomial functions which are invariant for the action of $ SL (N+1 , \ensuremath{\mathbb{C}})$ and which do not vanish at the point, so that the Hilbert point maps to a point of the Projective spectrum of the ring of $ SL (N+1 , \ensuremath{\mathbb{C}})$-invariants. The result of Gieseker then leads to the following \begin{theo}{\bf (Gieseker)} For $m$ very large, the quotient $$ \mathfrak M_{\chi, K^2}^{can}: = \ensuremath{\mathbb{H}} _0 (\chi, K^2)/ SL (N+1 , \ensuremath{\mathbb{C}})$$ exists as a quasi-projective scheme. It is independent of $m$ and is called the {\bf Gieseker moduli space} of canonical models of surfaces of general type with invariants $\chi, K^2$. \end{theo} It should be noted that at that time Gieseker only established the result for a field of characteristic zero; as he remarks in the paper, the only thing which was missing then in characteristic $p$ was the boundedness of the surfaces of general type with given invariants $\chi, K^2$. This result was provided by Ekedahl's extension of Bombieri's theorem to characteristic $p$ (\cite{ekedahl}, see also \cite{cf} and \cite{4auth} for a simpler proof).
\subsection{Minimal models versus canonical models} Let us go back to the assertion that deformation equivalence classes of minimal surfaces of general type are the same thing as deformation types of canonical models (a fact which is no longer true in higher dimension). We have more precisely the following theorem. \begin{theo}\label{can=min} Given two minimal surfaces of general type $S, S'$ and their respective canonical models $X, X'$, then $S$ and $S'$ are deformation equivalent $\Leftrightarrow$ $X$ and $X'$ are deformation equivalent. \end{theo} The idea of the proof can be simplified by the elementary observation that, in order to analyse deformation equivalence, one may restrict oneself to the case of families parametrized by a base $T$ with $ \dim (T) = 1$: indeed, two points in a complex space $ T \subset \ensuremath{\mathbb{C}}^n$ (or in an algebraic variety) belong to the same irreducible component of $T$ if and only if they belong to an irreducible curve $ T ' \subset T$. One may further reduce to the case where $T$ is smooth simply by taking the normalization $ T^0 \ensuremath{\rightarrow} T_{red} \ensuremath{\rightarrow} T$ of the reduction $T_{red}$ of $T$, and taking the pull-back of the family to $T^0$. But the crucial point underlying the result is the theorem on the so-called simultaneous resolution of singularities (cf. \cite{tju}, \cite{brieskorn}, \cite{brieskorn2}, \cite{nice}) \begin{theo}\label{simultaneous} {\bf (Simultaneous resolution according to Brieskorn and Tjurina).} Let $T : = \ensuremath{\mathbb{C}}^{\tau}$ be the base of the semiuniversal deformation of a Rational Double Point $ (X,0)$.
Then there exists a ramified Galois cover $ T' \ensuremath{\rightarrow} T$, with $T'$ smooth, $T' \cong \ensuremath{\mathbb{C}}^{\tau}$, such that the pull-back $ \ensuremath{\mathcal{X}} ' : = \ensuremath{\mathcal{X}} \times _T T'$ admits a simultaneous resolution of singularities $ p : \ensuremath{\mathcal{S}}' \ensuremath{\rightarrow} \ensuremath{\mathcal{X}}'$ (i.e., $p$ is bimeromorphic and all the fibres of the composition $ \ensuremath{\mathcal{S}}' \ensuremath{\rightarrow} \ensuremath{\mathcal{X}}' \ensuremath{\rightarrow} T'$ are smooth and equal, for $t'_0$, to the minimal resolution of singularities of $ (X,0)$). \end{theo} We reproduce Tjurina's proof for the case of $A_n$-singularities, observing that the case of the node was already described in the previous section. {\it Proof. } Assume that we have the $A_n$-singularity $$ \{ (x,y,z) \in \ensuremath{\mathbb{C}}^3 | xy = z^{n+1} \}.$$ Then the semiuniversal deformation is given by $$ \ensuremath{\mathcal{X}} : = \{ ((x,y,z) ,(a_2, \dots, a_{n+1}) ) \in \ensuremath{\mathbb{C}}^3 \times \ensuremath{\mathbb{C}}^n| xy = z^{n+1} + a_2 z^{n-1} + \dots + a_{n+1} \} ,$$ the family corresponding to the natural deformations of the simple cyclic covering. We take the ramified Galois covering with group $\ensuremath{\mathcal{S}}_{n+1}$ corresponding to the roots of the deformed degree $n+1$ polynomial, $$\ensuremath{\mathcal{X}}' : = \{ ((x,y,z), (\alpha_1, \dots, \alpha_{n+1})) \in \ensuremath{\mathbb{C}}^3 \times \ensuremath{\mathbb{C}}^{n+1}| \sum_j \alpha_j = 0, \ xy = \prod_j ( z - \alpha_j) \} .$$ One resolves the new family $\ensuremath{\mathcal{X}}'$ by defining $ \phi_i : \ensuremath{\mathcal{X}}' \dasharrow \ensuremath{\mathbb{P}}^1$ as $$ \phi_i : = (x , \prod_{j=1}^i ( z - \alpha_j))$$ and then taking the closure of the graph of $ \Phi : = (\phi_1, \dots, \phi_n) : \ensuremath{\mathcal{X}}' \dasharrow (\ensuremath{\mathbb{P}}^1)^n$.
\qed Here the Galois group $G$ of the covering $ T' \ensuremath{\rightarrow} T$ in the above theorem is the Weyl group corresponding to the Dynkin diagram of the singularity (whose vertices are the (-2)-curves in the minimal resolution, and whose edges correspond to the intersection points). That is, if $\sG$ is the simple algebraic group corresponding to the Dynkin diagram (see \cite{hum}), $H$ is a Cartan subgroup, and $N_H$ its normalizer, then the Weyl group is the factor group $W: = N_H / H$. For example, $A_n$ corresponds to the group $ SL(n+1, \ensuremath{\mathbb{C}})$, its Cartan subgroup is the subgroup of diagonal matrices, whose normalizer $N_H$ consists of the monomial matrices, so that here $W = N_H/H \cong \ensuremath{\mathcal{S}}_{n+1}$. E. Brieskorn (\cite{nice}) later found a direct explanation of this phenomenon. The Weyl group $W$ and the quotient $ T = T' / W$ play a crucial role in the understanding of the relations between the deformations of the minimal model $S$ and the canonical model $X$, which is a nice discovery by Burns and Wahl (\cite{b-w}). But, before we do that, let us make the following important observation, saying that the local analytic structure of the Gieseker moduli space is determined by the action of the group of automorphisms of $X$ on the Kuranishi space of $X$. \begin{rem} Let $X$ be the canonical model of a minimal surface of general type $S$ with invariants $\chi, K^2$. The isomorphism class of $X$ defines a point $[X] \in \mathfrak M_{\chi, K^2}^{can}$. Then the germ of complex space $(\mathfrak M_{\chi, K^2}^{can},{[X]})$ is analytically isomorphic to the quotient $\mathfrak B (X) / Aut (X)$ of the Kuranishi space of $X$ by the finite group $ Aut (X) = Aut (S)$. \end{rem} Forgetting for the time being about automorphisms, and concentrating on families, we want to explain the `local contributions to global deformations of surfaces', in the words of Burns and Wahl (\cite{b-w}).
Let $S$ be a minimal surface of general type and let $X$ be its canonical model. To avoid confusion between the corresponding Kuranishi spaces, denote by $\Def(S)$ the Kuranishi space of $S$ and by $\Def(X)$ the Kuranishi space of $X$. Their result explains the relation holding between $\Def(S)$ and $\Def(X)$. \begin{theo}{\bf (Burns - Wahl)} Assume that $K_S$ is not ample and let $\pi :S \ensuremath{\rightarrow} X$ be the canonical morphism. Denote by $\mathcal{L}_X$ the space of local deformations of the singularities of $X$ (Cartesian product of the corresponding Kuranishi spaces) and by $\mathcal{L}_S$ the space of deformations of a neighbourhood of the exceptional locus of $\pi$. Then $\Def(S)$ is realized as the fibre product associated to the Cartesian diagram \begin{equation*} \xymatrix{ \Def(S) \ar[d]\ar[r] & \Def (S_{Exc(\pi)})= : \mathcal{L}_S \cong \ensuremath{\mathbb{C}}^{\nu}, \ar[d]^{\lambda} \\ L \colon \Def(X) \ar[r] & \Def (X_{Sing X}) = : \mathcal{L}_X \cong \ensuremath{\mathbb{C}}^{\nu} ,} \end{equation*} where $\nu$ is the number of rational $(-2)$-curves in $S$, and $\lambda$ is a Galois covering with Galois group $W := \oplus_{i=1}^r W_i$, the direct sum of the Weyl groups $W_i$ of the singular points of $X$ (these are generated by reflections, hence yield a smooth quotient, see \cite{chevalley}). \end{theo} An immediate consequence is the following \begin{cor}{\bf (Burns - Wahl)} 1) $\psi:\Def(S) \ensuremath{\rightarrow} \Def(X)$ is a finite morphism; in particular, $\psi$ is surjective. \noindent 2) If the derivative of $\Def(X) \ensuremath{\rightarrow} \mathcal{L}_X$ is not surjective (i.e., the singularities of $X$ cannot be independently smoothed by the first order infinitesimal deformations of $X$), then $\Def(S)$ is singular.
\end{cor} Moreover one has a further corollary \begin{cor}{\bf \cite{cat5}}\label{ENR} If the morphism $L$ is constant, then $\Def(S)$ is everywhere non-reduced, $$ \Def(S) \cong \Def(X) \times \lambda^{-1} (0).$$ \end{cor} In \cite{cat5} several examples were exhibited, extending two previous examples by Horikawa and Miranda. In these examples the canonical model $X$ is a hypersurface of degree $d$ in a weighted projective space: $$ X_d \subset \ensuremath{\mathbb{P}} (1,1,p,q) , d > 2 + p + q, $$ where \begin{itemize} \item $ X_d \subset \ensuremath{\mathbb{P}} (1,1,2,3) $, $ d = 1 + 6 k$, $X$ has one singularity of type $A_1$ and one of type $A_2$, or \item $ X_d \subset \ensuremath{\mathbb{P}} (1,1,p,p+1) $, $ d = p (k (p+1) -1)$, $X$ has one singularity of type $A_p$, or \item $ X_d \subset \ensuremath{\mathbb{P}} (1,1,p,rp-1) $, $ d =(kp-1) (rp-1)$, $ r > p-2$, $X$ has one singularity of type $A_{p-1}$. \end{itemize} The philosophy in these examples (some hard calculations are however needed) is that all the deformations of $X$ remain hypersurfaces in the same weighted projective space, and, in view of the special arithmetic properties of the weights and of the degree, this forces $X$ to preserve its singularities. \subsection{Number of moduli done right} The interesting part of the discovery of Burns and Wahl is that they completely clarified the background of an old dispute going on in the late 1940s between Francesco Severi and Beniamino Segre. The (still open) question was: given a degree $d$, what is the maximum number $\mu (d)$ of nodes that a normal surface $X \subset \ensuremath{\mathbb{P}}^3$ of degree $d$ can have?
The answer is known only for small degree $ d\leq 6$: $\mu (2)=1$, $\mu (3)=4$ (Cayley's cubic), $\mu (4)=16$ (Kummer surfaces), $\mu (5)=31$ (Togliatti quintics), $\mu (6)=65$ (Barth's sextic). Severi made the following bold assertion: an upper bound is clearly given by the `number of moduli', i.e., the dimension of the moduli space of the surfaces of degree $d$ in $\ensuremath{\mathbb{P}}^3$; this number equals the difference between the dimension $\frac{(d+3)(d+2)(d+1)}{6} -1$ of the underlying projective space and the dimension $15$ of the group of projectivities, at least for $ d \geq 4$, when the general surface of degree $d$ has only a finite group of projective automorphisms. One should then have $ \mu (d) \leq \nu (d) : = \frac{(d+3)(d+2)(d+1)}{6} -16$, but Segre (\cite{segre}) found some easy examples contradicting this inequality, the easiest of which are the surfaces defined by an equation of the form $$ L_1(x)\cdot \dots \cdot L_{d}(x) - M (x)^2 = 0,$$ where $d$ is even, the $L_i(x)$ are linear forms, and $M(x)$ is a homogeneous polynomial of degree $\frac{d}{2}$. Whence the easiest Segre surfaces have $ \frac{1}{4} d^2 (d-1)$ nodes, corresponding to the points where $ L_i (x) = L_j (x) = M(x)= 0$: there are $\binom{d}{2}$ pairs $ i < j $, and each line $ L_i = L_j = 0$ meets the surface $ M = 0$ in $\frac{d}{2}$ points. This number grows asymptotically as $ \frac{1}{4} d^3$, versus Severi's upper bound, which grows like $ \frac{1}{6} d^3$ (in fact we know nowadays, by Chmutov in \cite{chmutov}, resp. Miyaoka in \cite{miyaoka}, that asymptotically $ \frac{5}{12} d^3 \leq \mu(d) \leq \frac{4}{9} d^3$). The problem with Severi's claim is simply that the nodes impose independent conditions infinitesimally, but only for the smooth model $S$: in other words, if $X$ has $\delta$ nodes, and $S$ is its desingularization, then $Def(S)$ has Zariski tangent dimension at least $\delta$, while it is not true that $Def(S)$ has dimension at least $\delta$.
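The numerology behind Segre's counterexamples is elementary and can be checked directly; the following quick sketch (merely illustrative, not part of the original sources) compares the node count of the Segre surfaces with Severi's claimed bound $\nu(d)$ and locates the first even degree where the claimed inequality fails:

```python
from itertools import count

def severi_bound(d):
    # Severi's claimed bound nu(d): dimension binom(d+3,3) - 1 of the
    # projective space of degree-d surfaces, minus dim PGL(4, C) = 15
    return (d + 3) * (d + 2) * (d + 1) // 6 - 16

def segre_nodes(d):
    # Nodes of L_1 ... L_d = M^2 (d even): binom(d,2) pairs of planes,
    # each pair meeting the degree-d/2 surface M = 0 in d/2 points
    assert d % 2 == 0
    return d * d * (d - 1) // 4

# First even degree d >= 4 for which Segre's count exceeds Severi's bound:
d0 = next(d for d in count(4, 2) if segre_nodes(d) > severi_bound(d))
print(d0, segre_nodes(d0), severi_bound(d0))  # -> 16 960 953
```

Since the Segre count grows like $d^3/4$ while $\nu(d) \sim d^3/6$, the violation is eventually automatic; the sketch only finds where it first occurs.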
Burns and Wahl, while philosophically rescuing Severi's intuition, showed in this way that there is a wealth of examples of obstructed surfaces $S$, thereby killing Kodaira and Spencer's dream that the cohomology dimension $h^1 (\Theta_S)$ would be the expected number of moduli. \subsection{The moduli space for minimal models of surfaces of general type} In this section we shall derive some further moral consequences from the result of Burns and Wahl. For simplicity, consider the case where the canonical model $X$ has only one double point, and recall the notation introduced previously, concerning the local deformation of the node, given by the family $$ uv = w^2 - t,$$ the pull back family $$\ensuremath{\mathcal{X}} = \{ (u,v,w,\tau ) | w^2 - \tau^2 = uv \}$$ and the two families $$\ensuremath{\mathcal{S}} : = \{ ((u,v,w,\tau), \xi) \in \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{P}}^1| \ \frac{w-\tau}{u} = \frac{v}{w+\tau} = \xi \}$$ $$\ensuremath{\mathcal{S}}' : = \{ ((u,v,w,\tau), \eta) \in \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{P}}^1| \ \frac{w+\tau}{u} = \frac{v}{w-\tau} = \eta \}.$$ There are two cases to be considered in the following oversimplified example: 1) $t \in \Delta$ is a coordinate of an effective smoothing of the node, hence we have a family $\ensuremath{\mathcal{S}}$ parametrized by $ \tau \in \Delta$; 2) we have no first order smoothing of the node, hence the spectrum of the ring $\ensuremath{\mathbb{C}}[\tau]/ (\tau^2)$ replaces $\Delta$. In case 1), we have two families $\ensuremath{\mathcal{S}}, \ensuremath{\mathcal{S}}'$ on a disk $ \Delta$ with coordinate $\tau$, which are isomorphic over the punctured disk.
This induces, on the disk $ \Delta$ with coordinate $\tau$ which is the base of the first family, an equivalence relation generated by the isomorphism of the family with itself exchanging $\tau$ with $- \tau$; in this case the ring of germs of holomorphic functions at the origin which are invariant for the resulting equivalence relation is just the ring of power series $\ensuremath{\mathbb{C}}\{t\}$, $ t = \tau^2$, and we have a smooth `moduli space'. In case 2), there is no room for identifying $\tau$ with $- \tau$, since if we do this on $\ensuremath{\mathbb{C}}[\tau]/ (\tau^2)$, which has only one point, we glue the families without inducing the identity on the base, and this is not allowed. In this latter case we are left with the non reduced scheme $Spec (\ensuremath{\mathbb{C}}[\tau]/ (\tau^2))$ as a `moduli space'. Recall now Mumford's definition of a coarse moduli space (\cite{mumford}, page 99, definition 5.6, and page 129, definition 7.4) for a functor of type $ \ensuremath{\mathcal{S}} urf$, such as $ \ensuremath{\mathcal{S}} urf^{min} $, associating to a scheme $T$ the set $ \ensuremath{\mathcal{S}} urf^{min} (T)$ of isomorphism classes of families of smooth minimal surfaces of general type over $T$, or as $ \ensuremath{\mathcal{S}} urf^{can} $, associating to a scheme $T$ the set $ \ensuremath{\mathcal{S}} urf^{can} (T)$ of isomorphism classes of families of canonical models of surfaces of general type over $T$. It should be a scheme $A$, given together with a morphism $\Phi$ from the functor $ \ensuremath{\mathcal{S}} urf $ to $h_A: = \sH om (-, A)$, such that \begin{enumerate} \item for all algebraically closed fields $k$, $\Phi(Spec \ k) \colon \ensuremath{\mathcal{S}} urf (Spec \ k) \ensuremath{\rightarrow} h_A (Spec \ k)$ is an isomorphism, \item any other morphism $\Psi$ from the functor $ \ensuremath{\mathcal{S}} urf $ to a functor $h_B$ factors uniquely as $\Psi = \chi \circ \Phi$, for a unique morphism $ \chi \colon h_A \ensuremath{\rightarrow} h_B$.
\end{enumerate} Since any family of canonical models $p \colon \sX \ensuremath{\rightarrow} T$ induces, once we restrict $T$ and choose a local frame for the direct image sheaf $p_* (\omega_{ \sX | T}^m)$, a family of pluricanonical models embedded in a fixed $\ensuremath{\mathbb{P}}^{P_m-1}$, the following theorem holds. \begin{theo}\label{Gieseker=coarse} The Gieseker moduli space $ \mathfrak M_{\chi, K^2}^{can}$ is the coarse moduli space for the functor $ \ensuremath{\mathcal{S}} urf^{can}_{\chi, K^2} $, i.e., for canonical models of surfaces $S$ of general type with given invariants $ \chi, K^2$. Hence it gives a natural complex structure on the topological space $ \mathfrak M(S)$, for $S$ as above. \end{theo} As for the case of algebraic curves, we do not have a fine moduli space, i.e., the functor is not representable by this scheme. Here, automorphisms are the main obstruction to the existence of a fine moduli space: dividing the universal family over the Hilbert scheme by the linear group we obtain a family over the quotient coarse moduli space such that the fibre over the isomorphism class of a canonical model $X$, in the case where the group of automorphisms $Aut (X)$ is non trivial, is the quotient $ X / Aut (X)$, which is then not isomorphic to $X$. Instead, in the case of the functor $ \ensuremath{\mathcal{S}} urf^{min} (T)$, there is a further problem: the equivalence relation (of isomorphism of families) is not proper on the parameter space, as we already mentioned. While for curves we have a Deligne-Mumford stack, which amounts, roughly speaking, to taking groupoid valued functors instead of set valued ones, this no longer holds for surfaces of general type. Therefore Artin in \cite{artinstacks} had to change the definition, allowing more general equivalence relations.
The result is (\cite{artinstacks}, Example 5.5 page 182) \begin{theo}{\bf (Artin)} There exists a moduli space $ \mathfrak M_{\chi, K^2}^{min}$ which is an algebraic Artin stack for minimal surfaces of general type with given invariants $ \chi, K^2$. \end{theo} The beginning step for Artin was to show that there is a finite number of algebraic families parametrizing all the minimal models with given invariants: this step is achieved by Artin in \cite{ArtinBrieskorn} showing that the simultaneous resolution of a family of canonical models can be done not only in the holomorphic category, but also over schemes, provided that one allows a base change process producing locally quasi-separated algebraic spaces. After that, one can consider the equivalence relation given by isomorphisms of families. We shall illustrate the local picture by considering the restriction of this equivalence relation to the base $Def(S)$ of the Kuranishi family. (I) First of all we have an action of the group $Aut(S)$ on $Def(S)$, and we must take the quotient by this action. In fact, if $g \in Aut(S)$, then $g$ acts on $Def(S)$, and if $\ensuremath{\mathcal{S}} \ensuremath{\rightarrow} Def(S)$ is the universal family, we can take the pull back family $g^* \ensuremath{\mathcal{S}} $. By the universality of the Kuranishi family, we have an isomorphism $g^* \ensuremath{\mathcal{S}} \cong \ensuremath{\mathcal{S}} $ lying over the identity of $Def(S)$, and by property (2) we must take the quotient of $Def(S)$ by this action of $Aut(S)$. (II) Let now $w \in W$ be an element of the Weyl group which acts on $Def(S)$ via the Burns-Wahl fibre product. We let $U_w$ be the open set of $\sL_S$ where the transformation $w$ acts freely (equivalently, $w$ being a pseudoreflection, $U_w$ is the complement of the hyperplane of fixed points of $w$), and we let $Def(S)_w$ be equal to the open set inverse image of $U_w$. 
Since the action of $w$ is free on $Def(S)_w$, we obtain that $w$ induces an isomorphism of the family $\ensuremath{\mathcal{S}}_w \ensuremath{\rightarrow} Def(S)_w$ with its pull back under $w$, inducing the identity on the base: hence we have to take the graph of $w$ on $Def(S)_w$, and divide $Def(S)_w$ by the action of $w$. (III) The `equivalence relation' on $Def(S)$ is thus generated by (I) and (II), but it is not really a proper equivalence relation. The complex space underlying $ \mathfrak M_{\chi, K^2}^{min}$ is obtained taking the subsheaf of $\ensuremath{\mathcal{O}}_{Def(S)}$ consisting of the functions which are invariant for this equivalence relation (i.e., in case (II), their restriction to $Def(S)_w$ should be $w$-invariant). $ \mathfrak M_{\chi, K^2}^{min}$ has the same associated reduced complex space as $ \mathfrak M_{\chi, K^2}^{can}$, but a different ringed space structure, as the examples of \cite{cat5} mentioned after corollary \ref{ENR} show; see the next subsection. In fact, the main difference is that $ \mathfrak M_{\chi, K^2}^{can}$ is locally, by Burns-Wahl's fibre product theorem, the quotient of $Def(S)$ by the group $ G'$ which is the semidirect product of the Weyl group $W$ by $Aut(S) = Aut(X)$, as a ringed space (the group $G'$ will make its appearance again in the concrete situation of Lemma \ref{nolift}). Whereas, for $ \mathfrak M_{\chi, K^2}^{min}$, the action on the set is the same, but the action of an element $w$ of the Weyl group on the sheaf of regular functions is only defined on the open set $Def(S)_w$, and this set $Def(S)_w$ may be empty if $Def(X)$ maps to a branch divisor of the quotient map $\sL_S \ensuremath{\rightarrow} \sL_X$. A general question for which we have not yet found the time to provide an answer is whether there is a quasi-projective scheme whose underlying complex space is $ \mathfrak M_{\chi, K^2}^{min}$: we suspect that the answer should be positive.
\subsection{Singularities of moduli spaces} In general one can define \begin{defin} The local moduli space $ (\mathfrak M_{\chi, K^2}^{min, loc},{[S]})$ of a smooth minimal surface of general type $S$ is the quotient $ Def(S) / Aut(S)$ of the Kuranishi space of $S$ by the action of the finite group of automorphisms of $S$. \end{defin} Caveat: whereas, for the canonical model $X$, $ Def(X) / Aut(X)$ is just the analytic germ of the Gieseker moduli space at the point corresponding to the isomorphism class of $X$, the local moduli space $ (\mathfrak M_{\chi, K^2}^{min, loc},{[S]}) : = Def(S) / Aut(S)$ is in general different from the analytic germ of the moduli space $( \mathfrak M_{\chi, K^2}^{min},{[S]})$, though it surjects onto the latter. But it is certainly equal to it in the special case where the surface $S$ has ample canonical divisor $K_S$. The Cartesian diagram of Burns and Wahl was used in \cite{cat5} to construct everywhere non reduced moduli spaces $ (\mathfrak M_{\chi, K^2}^{min, loc},{[S]}) : = Def(S) / Aut(S)$ for minimal models of surfaces of general type. In this case the basic theorem is \begin{theo}{\bf (\cite{cat5})} There are (generically smooth) connected components of Gieseker moduli spaces $ \mathfrak M_{\chi, K^2}^{can}$ such that all the canonical models in them are singular. Hence the local moduli spaces $ (\mathfrak M_{\chi, K^2}^{min, loc},{[S]}) $ for the corresponding minimal models are everywhere non reduced, and the same occurs for the germs $( \mathfrak M_{\chi, K^2}^{min},{[S]})$.
\end{theo} The reason is simple: we already mentioned that if we take the fibre product \begin{equation*} \xymatrix{ \Def(S) \ar[d]\ar[r] & Def (S_{Exc(\pi)})= : \mathcal{L}_S \cong \ensuremath{\mathbb{C}}^{\nu}, \ar[d]^{\lambda} \\ \Def(X) \ar[r] & Def (X_{Sing X}) = : \mathcal{L}_X \cong \ensuremath{\mathbb{C}}^{\nu} ,} \end{equation*} the lower horizontal arrow maps to a reduced point, hence $Def(S)$ is just the product $Def(X) \times \lambda^{-1} (0)$, and $\lambda^{-1} (0)$ is a non reduced point, spectrum of an Artin local ring of length equal to the cardinality of the Galois group $W := \oplus_{i=1}^r W_i$. Hence $\Def(S)$ is everywhere non reduced. Moreover, one can show that, in the examples considered, the general surface has no automorphisms, i.e., there is an open set for which the analytic germ of the Gieseker moduli space coincides with the Kuranishi family $\Def(X)$, and the family of canonical models just obtained is equisingular. Hence, once we consider the equivalence relation on $\Def(S)$ induced by isomorphisms, the Weyl group acts trivially (because of equisingularity, we get $\Def(S)_w = \emptyset$ $\forall w \in W$). Moreover, by our choice of a general open set, $Aut(S)$ is trivial. The conclusion is that $( \mathfrak M_{\chi, K^2}^{min},{[S]})$ is locally isomorphic to the Kuranishi family $\Def(S)$, hence everywhere non reduced. \qed Using an interesting result of M'nev about line configurations, Vakil (\cite{murphy}) was able to show that `any type of singularity' can occur for the Gieseker moduli space (his examples are such that $S = X$ and $S$ has no automorphisms, hence they produce the desired singularities also for the local moduli space, for the Kuranishi families; they also produce singularities for the Hilbert schemes, because his surfaces have $q(S) : = h^1(\ensuremath{\mathcal{O}}_S) = 0$). 
\begin{theo}{\bf (Vakil's `Murphy's law')} Given any singularity germ of finite type over the integers, there is a Gieseker moduli space $ \mathfrak M_{\chi, K^2}^{can}$ and a surface $S$ with ample canonical divisor $K_S$ (hence $S = X$) such that $ ( \mathfrak M_{\chi, K^2}^{can}, [X])$ realizes the given singularity germ. \end{theo} In the next section we shall see more instances where automorphisms play an important role. \section{Automorphisms and moduli} \subsection{Automorphisms and canonical models} The good thing about differential forms is that any group action on a complex manifold induces a group action on the vector spaces of differential forms. Assume that $G$ is a group acting on a surface $S$ of general type, or more generally on a K\"ahler manifold $Y$: then $G$ acts linearly on the Hodge vector spaces $H^{p,q}(Y) \cong H^q (\Omega^p_Y)$, and also on the vector spaces $ H^0 ( (\Omega^n_Y)^{\otimes m}) = H^0 (\ensuremath{\mathcal{O}}_Y (mK_Y))$, hence on the canonical ring $$\ensuremath{\mathcal{R}} (Y) : = \ensuremath{\mathcal{R}}(Y,K_Y) : = \bigoplus_{m \geq 0}H^0 (\ensuremath{\mathcal{O}}_Y (mK_Y)).$$ If $Y$ is a variety of general type, then $G$ acts linearly on the vector space $H^0 (\ensuremath{\mathcal{O}}_Y (mK_Y))$, hence linearly on the $m$-th pluricanonical image $Y_m$, which is an algebraic variety bimeromorphic to $Y$. Hence $G$ is contained in the algebraic group $ Aut (Y_m)$ and, if $G$ were infinite, as observed by Matsumura (\cite{matsumura}), $ Aut (Y_m)$ would contain a non trivial Cartan subgroup (hence $\ensuremath{\mathbb{C}}$ or $\ensuremath{\mathbb{C}}^*$) and $Y$ would be uniruled, a contradiction. This was the main argument of the following \begin{theo}{\bf (Matsumura)} The automorphism group of a variety $Y$ of general type is finite. \end{theo} Let us now specialize to the case where $S$ is a surface of general type, even if most of what we say remains valid also in higher dimension.
Take an $m$-pseudo moduli space $ \ensuremath{\mathbb{H}}_0 (\chi, K^2)$ with $m$ so large that the corresponding Hilbert points of the varieties $X_m$ are stable, and let $G$ be a finite group acting on a minimal surface of general type $S$ whose $m$-th canonical image is in $ \ensuremath{\mathbb{H}}_0 (\chi, K^2)$. Since $G$ acts on the vector space $V_m : = H^0 (\ensuremath{\mathcal{O}}_S (mK_S))$, the vector space splits uniquely, up to permutation of the summands, as a direct sum of irreducible representations $$ (**) \ \ V_m = \bigoplus_{\rho \in Irr (G)} W_{\rho}^{n(\rho)}.$$ We come now to the basic notion of a family of $G$-automorphisms. \begin{defin} A family of $G$-automorphisms is a triple $$ ((p \colon \ensuremath{\mathcal{S}} \ensuremath{\rightarrow} T), G, \alpha )$$ where: \begin{enumerate} \item $ (p \colon \ensuremath{\mathcal{S}} \ensuremath{\rightarrow} T)$ is a family in a given category (a smooth family for the case of minimal models of general type), \item $G$ is a (finite) group, \item $\alpha \colon G \times \ensuremath{\mathcal{S}} \ensuremath{\rightarrow} \ensuremath{\mathcal{S}} $ yields a biregular action $ G \ensuremath{\rightarrow} Aut (\ensuremath{\mathcal{S}})$, which is compatible with the projection $p$ and with the trivial action of $G$ on the base $T$ (i.e., $p(\alpha (g,x)) = p(x) , \ \forall g \in G, x \in \ensuremath{\mathcal{S}}$). \end{enumerate} As a shorthand notation, one may also write $ g(x)$ instead of $\alpha (g,x)$, and by abuse of notation say that the family of automorphisms is a deformation of the pair $(S_t,G)$ instead of the triple $(S_t,G, \alpha_t)$. \end{defin} \begin{prop}\label{actiontype} 1) A family of automorphisms of surfaces of general type (not necessarily minimal models) induces a family of automorphisms of canonical models. 2) A family of automorphisms of canonical models induces, if the base $T$ is connected, a constant decomposition type $(**)$ for $V_m (t)$.
3) A family of automorphisms of surfaces of general type admits a differentiable trivialization, i.e., in a neighbourhood of $ t_0 \in T$, a diffeomorphism as a family with $ ( S_0 \times T, p_T, \alpha_0 \times Id_T)$; in other words, with the trivial family for which $ g (y,t) = (g(y),t)$. \end{prop} {\it Proof. } We sketch only the main ideas. 1) follows since one can take the relative canonical divisor $ K : = K_{\ensuremath{\mathcal{S}}|T} $, the sheaf of graded algebras $$ \ensuremath{\mathcal{R}} (p): = \oplus_m p_* (\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{S}}} (mK))$$ and take the relative Proj, yielding $ \ensuremath{\mathcal{X}} : = Proj (\ensuremath{\mathcal{R}} (p) )$, whose fibres are the canonical models. 2) follows since for a representation space $(V, \rho')$ the multiplicity with which an irreducible representation $W$ occurs in $V$ is the dimension of $ Hom(W,V)^G$, which in turn is calculated by the average over $G$ of the trace of $\rho''(g)$, where $\rho''$ denotes the representation on $ Hom(W,V)$. If we have a family, we get a continuous integer valued function, hence a constant function. 3) Since $G$ acts trivially on the base $T$, it follows that for each $g \in G$ the fixed locus $ Fix(g)$ is a relative submanifold with a submersion onto $T$. By the use of stratified spaces (see \cite{mather}) and control data, one then finds a differentiable trivialization for the quotient analytic space $ \ensuremath{\mathcal{S}} / G$, hence a trivialization of the action.
\qed Let us then consider the case of a family of canonical models: by 2) above, and shrinking the base in order to make the summand $ \ensuremath{\mathcal{R}} (p)_m = p_* (\ensuremath{\mathcal{O}}_{\ensuremath{\mathcal{S}}} (mK))$ free, we get an embedding of the family $$ (\ensuremath{\mathcal{X}}, G) \hookrightarrow T \times (\ensuremath{\mathbb{P}} ( V_m = \bigoplus_{\rho \in Irr (G)} W_{\rho}^{n(\rho)} ), G).$$ In other words, all the canonical models $X_t$ are contained in a fixed projective space, where also the action of $G$ is fixed. Now, the canonical model $X_t$ is left invariant by the action of $G$ if and only if its Hilbert point is fixed by $G$. Hence, we get a closed subset $ \ensuremath{\mathbb{H}}_0 (\chi, K^2)^G \subset \ensuremath{\mathbb{H}}_0 (\chi, K^2)$ of the pseudomoduli space, and a corresponding closed subset of the moduli space. We thus obtain the following theorem. \begin{theo}\label{can-aut=closed} The surfaces of general type which admit an action of a given pluricanonical type $(**)$, i.e., with a fixed irreducible $G$-decomposition of their canonical ring, form a closed subvariety $( \mathfrak M_{\chi, K^2}^{can})^{G, (**)}$ of the moduli space $ \mathfrak M_{\chi, K^2}^{can}$. \end{theo} We shall see that the situation for the minimal models is different, because the subset of the moduli space where one has a fixed differentiable type is then not closed. \subsection{ Kuranishi subspaces for automorphisms of a fixed type} Proposition \ref{actiontype} is quite useful when one analyses the deformations of a given $G$-action. In the case of the canonical models, we just have to look at the fixed points for the action on a subscheme of the Hilbert scheme; whereas, for the case of the deformations of the minimal model, we have to look at the complex structures for which the given differentiable action is biholomorphic.
Hence we derive \begin{prop} Consider a fixed action of a finite group $G$ on a minimal surface of general type $S$, and let $X$ be its canonical model. Then we obtain closed subsets of the respective Kuranishi spaces, corresponding to deformations which preserve the given action, and yielding a maximal family of deformations of the $G$-action. These subspaces are $ \mathfrak B (S) \cap H^1(\Theta_S)^G = Def(S) \cap H^1(\Theta_S)^G$, respectively $ \mathfrak B (X) \cap {\rm Ext}^1(\Omega^1_X, \ensuremath{\mathcal{O}}_X)^G = Def (X) \cap {\rm Ext}^1(\Omega^1_X, \ensuremath{\mathcal{O}}_X)^G$. \end{prop} We refer to \cite{montecatini} for a proof of the first fact, while for the second the proof is based again on Cartan's lemma (\cite{cartan}), stating that the action of a finite group in an analytic neighbourhood of a fixed point can be linearized. Just a comment about the contents of the proposition: it says that in each of the two cases, the locus where a group action of a fixed type is preserved is a locally closed set of the moduli space. We shall see the distinction more clearly in the next subsection. \subsection{Deformations of automorphisms differ for canonical and for minimal models} The aim of this subsection is to illustrate the main principles of a rather puzzling phenomenon which I discovered in my joint work with Ingrid Bauer (\cite{burniat2}, \cite{burniat3}) on the moduli spaces of Burniat surfaces. Before dwelling on the geometry of these surfaces, I want to explain clearly what happens, and it suffices to take the example of nodal secondary Burniat surfaces, which I will denote by BUNS in order to abbreviate the name. For BUNS one has $K^2_S = 4, p_g (S) : = h^0(K_S) = 0$, and the bicanonical map is a Galois cover of the Del Pezzo surface $Y$ of degree 4 with just one node as singularity (the resolution of $Y$ is the blow up $Y'$ of the plane in 5 points, of which exactly 3 are collinear).
The Galois group is $G = (\ensuremath{\mathbb{Z}}/ 2)^2$, and over the node of $Y$ lies a node of the canonical model $X$ of $S$, which has no other singularities. Then we have BUES, i.e., extended secondary Burniat surfaces, whose bicanonical map is again a finite $(\ensuremath{\mathbb{Z}}/ 2)^2$ - Galois cover of the 1-nodal Del Pezzo surface $Y$ of degree 4 (and for these $S=X$, i.e., the canonical divisor $K_S$ is ample). All these actions on the canonical models fit together into a single family, but, if we pass to the minimal models, then the topological type of the action changes in a discontinuous way when we pass from the closed set of BUNS to the open set of BUES, and we have precisely two families. We have, more precisely, the following theorems (\cite{burniat3}): \begin{theo}\label{main1}{\bf (Bauer-Catanese)} The subset $\sN \sE \sB_4$ of the moduli space of surfaces of general type $\mathfrak M^{can}_{1,4}$, formed by the disjoint union of the open set corresponding to BUES ({\em extended} secondary Burniat surfaces) with the irreducible closed set parametrizing BUNS (nodal secondary Burniat surfaces), is an irreducible connected component, normal, unirational of dimension 3. For all surfaces $S$ in $\sN \sE \sB_4$ the bicanonical map of the canonical model $X$ is a finite cover of degree 4, with Galois group $G = (\ensuremath{\mathbb{Z}}/ 2)^2$, of the 1-nodal Del Pezzo surface $Y$ of degree 4 in $\ensuremath{\mathbb{P}}^4$. Moreover the Kuranishi space $\mathfrak B (S)$ of any such minimal model $S$ is smooth. \end{theo} \begin{theo}\label{path}{\bf (Bauer-Catanese)} The deformations of nodal secondary Burniat surfaces (secondary means that $K^2_S =4$) to extended secondary Burniat surfaces yield examples where $\Def(S,(\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2) \ensuremath{\rightarrow} \Def(X,(\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2)$ is not surjective.
Indeed the pairs $(X,G)$, where $G : = (\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2$ and $X$ is the canonical model of an extended or nodal secondary Burniat surface, and where the action of $G$ on $X$ is induced by the bicanonical map of $X$, belong to only one deformation type. If $S$ is a BUNS, then $\Def(S,(\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2) \subsetneq \Def(S)$, and $\Def(S,(\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2)$ consists exactly of all the BUNS; while for the canonical model $X$ of $S$ we have: $\Def(X,(\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2) = \Def(X)$. Indeed the pairs $(S,G)$, where $S$ is the minimal model of an extended or nodal Burniat surface, $G : = (\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2$ and the action is induced by the bicanonical map (it is unique up to automorphisms of $G$), belong to exactly two distinct deformation types, one given by BUNS, and the other given by BUES. \end{theo} The discovery of BUES came later, as a byproduct of the investigation of tertiary (3-nodal) Burniat surfaces, where we knew by the Enriques-Kuranishi inequality that tertiary Burniat surfaces cannot form a component of the moduli space: knowing that there had to be other deformations eventually helped us to find them. For BUNS, we first erroneously thought (see \cite{burniat2}) that they form a connected component of the moduli space, because $ G= (\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}})^2 \subset Aut(S)= Aut(X)$ for a BUNS, and BUNS are exactly the surfaces $S$ for which the action deforms, while we proved that for all deformations of the canonical model $X$ the action deforms. The description of BUNS and especially of BUES is complicated, so I refer simply to \cite{burniat3}; but the essence of the pathological behaviour can be understood from the local picture around the node of the Del Pezzo surface $Y$. We already described most of this local picture in a previous section.
We make here a first additional observation: \begin{prop}\label{def-act} Let $t \in \ensuremath{\mathbb{C}}$, and consider the action of $G: = (\ensuremath{\mathbb{Z}}/2 \ensuremath{\mathbb{Z}})^2$ on $\ensuremath{\mathbb{C}}^3$ generated by $\sigma_1(u,v,w) = (u,v,-w)$, $\sigma_2(u,v,w) = (-u,-v,w)$. Then the hypersurfaces $X_t = \{ (u,v,w)| w^2 = uv + t\}$ are $G$-invariant, and the quotient $X_t / G $ is the hypersurface $$ Y_t = Y_0 = Y := \{ (x,y,z)| z^2 = xy\} ,$$ which has a nodal singularity at the point $x=y=z=0$. $X_t \ensuremath{\rightarrow} Y$ is a family of finite bidouble coverings (Galois coverings with group $G: = (\ensuremath{\mathbb{Z}}/2 \ensuremath{\mathbb{Z}})^2$). We get in this way a flat family of (non flat) bidouble covers. \end{prop} \begin{proof} The invariants for the action of $G$ on $\ensuremath{\mathbb{C}}^3 \times \ensuremath{\mathbb{C}}$ are: $$ x: =u^2, y: = v^2, z : = uv , s: = w^2, t.$$ Hence the family $\ensuremath{\mathcal{X}}$ of the hypersurfaces $X_t$ is the inverse image of the family of hypersurfaces $ s = z +t$ on the product $$Y \times \ensuremath{\mathbb{C}}^2 = \{(x,y,z,s,t)| xy = z^2 \} .$$ It follows that the quotient of $X_t$ is isomorphic to $Y$. \end{proof} The following is instead a rephrasing and a generalization of the discovery of Atiyah in the context of automorphisms, and is the main local content of Theorem \ref{path}. It says that the family of automorphisms of the canonical models $X_t$, i.e., the automorphism group of the family $\ensuremath{\mathcal{X}}$, does not lift, even after base change, to the family $\ensuremath{\mathcal{S}}$ of minimal surfaces $S_{\tau}$.
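Before stating the lemma, note that the invariant-theoretic assertions of Proposition \ref{def-act} are easy to verify symbolically; the following quick sketch (using the sympy library, merely as an illustration and not as part of the original argument) checks the $G$-invariance of $X_t$ and the relations defining the quotient:

```python
import sympy as sp

u, v, w, t = sp.symbols('u v w t')

# Defining equation of X_t : w^2 = uv + t, i.e. F = 0
F = w**2 - u*v - t

# The two generators of G = (Z/2)^2 from Proposition def-act
sigma1 = {w: -w}          # (u, v, w) -> (u, v, -w)
sigma2 = {u: -u, v: -v}   # (u, v, w) -> (-u, -v, w)

# F is invariant under both generators, so each X_t is G-invariant
assert sp.simplify(F.subs(sigma1, simultaneous=True) - F) == 0
assert sp.simplify(F.subs(sigma2, simultaneous=True) - F) == 0

# The invariants x = u^2, y = v^2, z = uv, s = w^2 satisfy xy = z^2,
# and the relation s = z + t cutting out the quotient holds on X_t:
x, y, z, s = u**2, v**2, u*v, w**2
assert sp.simplify(x*y - z**2) == 0
assert sp.expand(s - z - t - F) == 0   # s - (z + t) equals F identically
```

The last assertion says that $s - (z+t)$ coincides with the defining polynomial of $X_t$, so the image of $X_t$ in the invariant coordinates is exactly the hypersurface $s = z + t$ on $Y \times \ensuremath{\mathbb{C}}^2$.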
\begin{lemma}\label{nolift} Let $G \cong (\ensuremath{\mathbb{Z}}/2 \ensuremath{\mathbb{Z}})^2$ be the group acting on $\ensuremath{\mathcal{X}}$ trivially on the variable $\tau$, and otherwise as follows: the action of $G$ on $\ensuremath{\mathbb{C}}^3$ is generated by $\sigma_1(u,v,w) = (u,v,-w)$, $\sigma_2(u,v,w) = (-u,-v,w)$ (we set then $\sigma_3 := \sigma_1 \sigma_2$, so that $\sigma_3(u,v,w) = (-u,-v,-w)$). The invariants for the action of $G$ on $\ensuremath{\mathbb{C}}^3 \times \ensuremath{\mathbb{C}}$ are: $$ x: =u^2, y: = v^2, z : = uv , s: = w^2, t.$$ Observe that the hypersurfaces $X_t = \{ (u,v,w)| w^2 = uv + t\}$ are $G$-invariant, and the quotient $X_t / G $ is the hypersurface $$ Y_t \cong Y_0 = \{ (x,y,z)| z^2 = xy\} ,$$ which has a nodal singularity at the point $x=y=z=0$. Let further $\sigma_4$ act by $\sigma_4(u,v,w,\tau) = (u,v,w,- \tau) $, let $G' \cong (\ensuremath{\mathbb{Z}}/2 \ensuremath{\mathbb{Z}})^3$ be the group generated by $G$ and $\sigma_4$, and let $H \cong (\ensuremath{\mathbb{Z}}/2 \ensuremath{\mathbb{Z}})^2$ be the subgroup $\{ Id, \sigma_2, \sigma_1\sigma_4 , \sigma_3\sigma_4\}$. The biregular action of $G'$ on $\ensuremath{\mathcal{X}}$ lifts only to a birational action on $\ensuremath{\mathcal{S}}$, respectively $\ensuremath{\mathcal{S}}'$. The subgroup $H$ acts on $\ensuremath{\mathcal{S}}$, respectively $\ensuremath{\mathcal{S}}'$, as a group of biregular automorphisms. The elements of $ G' \setminus H = \{ \sigma_1, \sigma_3, \sigma_4 , \sigma_2\sigma_4\}$ yield isomorphisms between $\ensuremath{\mathcal{S}}$ and $\ensuremath{\mathcal{S}}'$. The group $G$ acts on the punctured family $\ensuremath{\mathcal{S}} \setminus S_0$, in particular it acts on each fibre $S_{\tau}$, $\tau \neq 0$. Since $\sigma_4$ acts trivially on $S_0$, the group $G'$ acts on $S_0$ through its direct summand $G$.
The biregular actions of $G$ on $\ensuremath{\mathcal{S}} \setminus S_0$ and on $ S_0$ do not patch together to a biregular action on $\ensuremath{\mathcal{S}}$; in particular $\sigma_1$ and $\sigma_3$ yield birational maps which are not biregular: these are called Atiyah flops (cf. \cite{atiyah}). \end{lemma} Another, more geometrical, way to see that there is no $G$-action on the family $\ensuremath{\mathcal{S}}$ is the following: if $G$ acted on $\ensuremath{\mathcal{S}}$, and trivially on the base, then the fixed loci for the group elements would be submanifolds with a smooth map onto the parameter space $\ensuremath{\mathbb{C}}$ with parameter $\tau$. Hence all the quotients $ S_{\tau} / G$ would be homeomorphic. But for BUNS the quotient of $S_0$ by $G$ is the blow up $Y'$ of $Y$ at the origin, while for $\tau \neq 0$, $ S_{\tau} / G$ is just $Y$! \footnote{In the case of BUNS, $Y$ is a nodal Del Pezzo surface of degree $4$, whereas in the local analysis we use the same notation $Y$ for the quadric cone, which is the germ of the nodal Del Pezzo surface at the nodal singular point. } In fact, if one wants to construct the family of smooth models as a family of bidouble covers of a smooth surface, one has to take the blown up surface $Y'$ and its exceptional divisor $N$ ($N$ is called the nodal curve). \begin{rem} i) The simplest way to view $X_t$ is to see $\ensuremath{\mathbb{C}}^2$ as a double cover of $Y$ branched only at the origin, and then $X_t$ as a family of double covers of $\ensuremath{\mathbb{C}}^2$ branched on the curve $ uv + t = 0$, which acquires a double point for $t=0$.
ii) If we pull back the bidouble cover $X_t$ to $Y'$ and normalize it, we can see that the three branch divisors, corresponding to the fixed point sets of the three non trivial elements of the group $G$, are as follows: \begin{itemize} \item $D_3$ is, for $t=0$, the nodal curve $N$, and is the empty divisor for $t\neq 0$; \item $D_1$ is, for $t \neq 0$, the inverse image of the curve $z + t = 0$; while, for $t=0$, it is only its strict transform, i.e., a divisor made up of $F_1, F_2$, the proper transforms of the two branch lines ($\{x=z=0\}$, resp. $\{y=z=0\}$) on the quadric cone $Y$; \item $D_2$ is an empty divisor for $t=0$, and the nodal curve $N$ for $t\neq 0$. \end{itemize} \end{rem} The above remark shows that, in order to construct the smooth models, one has first of all to take a discontinuous family of branch divisors; moreover, for $ t \neq 0$, we then obtain a non minimal surface which contains two $(-1)$-curves ($S_t = X_t$ is then obtained by contracting these two $(-1)$-curves). \subsection{Teichm\"uller space for surfaces of general type} Recall the fibre product considered by Burns and Wahl: \begin{equation*} \xymatrix{ \Def(S) \ar[d]\ar[r] & Def (S_{Exc(\pi)}) =: \mathcal{L}_S \cong \ensuremath{\mathbb{C}}^{\nu}, \ar[d]^{\lambda} \\ \Def(X) \ar[r] & Def (X_{Sing X}) =: \mathcal{L}_X \cong \ensuremath{\mathbb{C}}^{\nu} .} \end{equation*} This gives a map $f \colon Def (S) \ensuremath{\rightarrow} Def (X) / Aut (X)$ of the Kuranishi space of $S$ into an open set of a quasiprojective variety, which factors through Teichm\"uller space. \begin{theo}\label{kur=teich-surf} Let $S$ be the minimal model of a surface of general type. Then the continuous map $\pi \colon Def (S) \ensuremath{\rightarrow} \sT (M)_S$ is a local homeomorphism between Kuranishi space and Teichm\"uller space if 1) $Aut (S)$ is a trivial group, or 2) $K_S$ is ample and $S$ is rigidified. \end{theo} {\it Proof. } We need only show that $\pi$ is injective.
Assume the contrary: then there are two points $t_1, t_2 \in Def(S)$ yielding surfaces $S_1$ and $S_2$ which are isomorphic through a diffeomorphism $\Psi$ isotopic to the identity. By the previous remark, the images of $t_1, t_2 $ inside $Def (X) / Aut (X)$ must be the same. Case 1): there exists then an element $w$ of the Weyl group of $\lambda$ carrying $t_1 $ to $t_2$, hence the composition of $w$ and $\Psi$ yields an automorphism of $S_1$. Since $Aut(S_1) = Aut (X_1)$ and the locus of canonical models with non trivial automorphisms is closed, we conclude that, taking $Def(S)$ as a suitably small germ, this automorphism is the identity. This is however a contradiction, since $w$ acts non trivially on the cohomology of the exceptional divisor, while $\Psi$ acts trivially. Case 2): in this case there is an automorphism $g$ of $S$ carrying $t_1 $ to $t_2$, and again the composition of $g$ and $\Psi$ yields an automorphism of $S_1$. We apply the same argument, since $g$ is not isotopic to the identity by our assumption. \qed \begin{rem} With more work one should be able to treat the more general case where we assume that $Aut (S)$ is non trivial, but $S$ is rigidified. In this case one should show that a composition $ g \circ w$ as above is not isotopic to the identity. The most interesting question is however whether every surface of general type is rigidified. \end{rem} \section{Connected components of moduli spaces and arithmetic of moduli spaces for surfaces} \subsection{Gieseker's moduli space and the analytic moduli spaces} As we saw, all 5-canonical models of surfaces of general type with invariants $ K^2$, $\chi$ occur in a big family parametrized by an open set of the Hilbert scheme $ \ensuremath{\mathbb{H}}^0$ parametrizing subschemes with Hilbert polynomial $ P (m)= \chi + \frac{1}{2}( 5 m -1) 5 m K^2 $, namely the open set $$ \ensuremath{\mathbb{H}}^0 (\chi, K^2) : = \{ \Sigma | \Sigma {\rm \ is\ reduced \ with \ only \ R.D.P.
's \ as \ singularities \ }\}. $$ Indeed, it is not necessary to consider the 5-pseudo moduli space of surfaces of general type with given invariants $ K^2$, $\chi$, which was defined as the closed subset $ \ensuremath{\mathbb{H}}_0 \subset \ensuremath{\mathbb{H}}^0$, $$ \ensuremath{\mathbb{H}}_0 (\chi, K^2) : = \{ \Sigma \in \ensuremath{\mathbb{H}}^0| \omega_{\Sigma}^{ \otimes 5} \cong \ensuremath{\mathcal{O}}_{\Sigma}(1) \} ,$$ at least if we are only interested in having a family which contains all surfaces of general type, and not in taking the quotient by the projective group. Observe however that if $\Sigma \in \ensuremath{\mathbb{H}}^0 (\chi, K^2)$, then $\Sigma$ is the canonical model $X$ of a surface of general type, embedded by a linear system $|D|$, where $D$ is numerically equivalent to $5K_S$, i.e., $D = 5 K_S+ \eta$, where $ \eta$ is numerically equivalent to $0$. Therefore the connected components $\sN$, respectively the irreducible components $\sZ$, of the Gieseker moduli space correspond to the connected, resp. irreducible, components of $ \ensuremath{\mathbb{H}}_0 (\chi, K^2)$, and in turn to the connected, resp. irreducible, components of $ \ensuremath{\mathbb{H}}^0 (\chi, K^2) $ which intersect $ \ensuremath{\mathbb{H}}_0 (\chi, K^2)$. We shall however, for the sake of brevity, talk about connected components $\sN$ of the Gieseker moduli space $\frak M^{can}_{a,b}$ even if these do not really parametrize families of canonical models. We refer to \cite{perugia} for a more ample discussion of the basic ideas which we are going to sketch here. $\frak M^{can}_{a,b}$ has a finite number of connected components, and these parametrize the deformation classes of surfaces of general type. By the classical theorem of Ehresmann (\cite{ehre}), deformation equivalent varieties are diffeomorphic, and moreover by a diffeomorphism carrying the canonical class to the canonical class.
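For the reader's convenience, here is the standard computation behind the Hilbert polynomial $P(m)$ quoted above (a sketch, using only Riemann--Roch on the canonical model $X$ and Kodaira vanishing, which gives $h^i(\ensuremath{\mathcal{O}}_X(5mK_X)) = 0$ for $i > 0$ and $m \geq 1$ since $K_X$ is ample): $$ h^0\big(\ensuremath{\mathcal{O}}_X(5mK_X)\big) = \chi\big(\ensuremath{\mathcal{O}}_X(5mK_X)\big) = \chi(\ensuremath{\mathcal{O}}_X) + \tfrac{1}{2}\,(5mK_X)\cdot(5mK_X - K_X) = \chi + \tfrac{1}{2}\,(5m-1)\,5m\,K^2 . $$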
Hence, having fixed the two numerical invariants $\chi (S) = a, K^2_S = b$, which are determined by the topology of $S$ (indeed, by the Betti numbers of $S$), we have a finite number of differentiable types. It is clear that the analytic moduli space $\frak M (S)$ that we defined at the onset is then the union of a finite number of connected components of $\frak M^{can}_{a,b}$. But how many, and how? A very optimistic guess was: one. A basic question was really whether a moduli space $\mathfrak M (S)$ would correspond to a unique connected component of the Gieseker moduli space; this question was abbreviated as the DEF = DIFF question, i.e., the question whether differentiable equivalence and deformation equivalence coincide for surfaces. I conjectured (in \cite{katata}) that the answer should be negative, on the basis of some families of simply connected surfaces of general type constructed in \cite{cat1}: these were then homeomorphic by the results of Freedman (see \cite{free} and \cite{f-q}), and it was then relatively easy to show (\cite{cat3}) that there were many connected components of the moduli space corresponding to homeomorphic but non diffeomorphic surfaces. It looked like the situation should be similar even if one fixed the diffeomorphism type. Friedman and Morgan instead made the `speculation' (1987) that the answer to the DEF = DIFF question should be positive (see \cite{f-m1}), motivated by the new examples of homeomorphic but not diffeomorphic surfaces discovered by Donaldson (see \cite{don} for a survey on this topic). The question was finally answered in the negative, and in every possible way (\cite{man4}, \cite{k-k}, \cite{cat4}, \cite{c-w}, \cite{bcg}). \begin{theo} (Manetti '98, Kharlamov--Kulikov 2001, C. 2001, C.--Wajnryb 2004, Bauer--C.--Grunewald 2005) The Friedman--Morgan speculation does not hold true and the DEF = DIFF question has a negative answer.
\end{theo} In my joint work with Bronek Wajnryb (\cite{c-w}) the question was also shown to have a negative answer even for simply connected surfaces. I showed later (\cite{cat02}) that each minimal surface of general type $S$ has a natural symplectic structure with class of the symplectic form equal to $c_1 (K_S)$, in such a way that to each connected component $\sN$ of the moduli space one can associate the pair of a differentiable manifold with a symplectic structure, unique up to symplectomorphism. Would this further datum determine a unique connected component, so that DEF = SYMPL? This also turned out to have a negative answer (\cite{cat09}). \begin{theo} Manetti surfaces provide counterexamples to the DEF = SYMPL question. \end{theo} I refer to \cite{cime} for a rather comprehensive treatment of the above questions. Let me just observe that the Manetti surfaces are not simply connected, so that the DEF = SYMPL question is still open in the case of simply connected surfaces. Concerning the question of canonical symplectomorphism of algebraic surfaces, Auroux and Katzarkov (\cite{a-k}) defined asymptotic braid monodromy invariants of a symplectic manifold, extending old ideas of Moishezon (see \cite{moi}). Quite recent work, not covered in \cite{cime}, is my joint work with L\"onne and Wajnryb (\cite{clw}), which investigates in this direction the braid monodromy invariants (especially the `stable' ones) for the surfaces introduced in \cite{cat1}. \subsection{Arithmetic of moduli spaces} A basic remark is that all these schemes are defined by equations involving only $\ensuremath{\mathbb{Z}}$ coefficients, since the defining equation of the Hilbert scheme is a rank condition on a multiplication map (see for instance \cite{green}), and similarly the condition $ \omega_{\Sigma}^{ \otimes 5} \cong \ensuremath{\mathcal{O}}_{\Sigma}(1) $ is also closed (see \cite{abvar}) and defined over $\ensuremath{\mathbb{Z}}$.
It follows that the absolute Galois group $ Gal (\overline{\ensuremath{\mathbb{Q}}} / \ensuremath{\mathbb{Q}})$ acts on the Gieseker moduli space $\frak M^{can}_{a,b}$. To explain how it concretely acts, it suffices to recall the notion of a conjugate variety. \begin{rem} 1) $\phi \in Aut (\ensuremath{\mathbb{C}})$ acts on $ \ensuremath{\mathbb{C}} [z_0, \dots, z_n]$ by sending $P (z) = \sum_{i =0}^n \ a_i z ^i \mapsto \phi (P) (z) : = \sum_{i =0}^n \ \phi (a_i) z ^i$. 2) Let $X$ be, as above, a projective variety $$X \subset \ensuremath{\mathbb{P}}^n_\ensuremath{\mathbb{C}}, \quad X : = \{ z | f_i(z) = 0 \ \forall i \}.$$ The action of $\phi$ extends coordinatewise to $ \ensuremath{\mathbb{P}}^n_\ensuremath{\mathbb{C}}$, and carries $X$ to another variety, denoted $X^{\phi}$ and called the {\bf conjugate variety}. Since $f_i(z) = 0 $ implies $\phi (f_i)(\phi (z) )= 0 $, we see that $$ X^{\phi} = \{ w | \phi (f_i)(w) = 0 \ \forall i \}.$$ \end{rem} If $\phi$ is complex conjugation, then it is clear that the variety $X^{\phi}$ that we obtain is diffeomorphic to $X$; but what happens in general, when $\phi$ is not continuous? Observe that, by the theorem of Steinitz, one has a surjection $ Aut (\ensuremath{\mathbb{C}}) \ensuremath{\rightarrow} Gal(\bar{\ensuremath{\mathbb{Q}}} /\ensuremath{\mathbb{Q}})$, and by specialization the heart of the question concerns the action of $Gal(\bar{\ensuremath{\mathbb{Q}}} /\ensuremath{\mathbb{Q}})$ on varieties $X$ defined over $\bar{\ensuremath{\mathbb{Q}}}$. For curves, since the dimensions of the spaces of differential forms of a fixed degree and without poles are the same for $X^{\phi}$ and $X$, we obtain a curve of the same genus; hence $X^{\phi}$ and $X$ are diffeomorphic.
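To make the definition of the conjugate variety concrete, here is a minimal example (mine, not from the text): for $\phi$ the complex conjugation, the conjugate of the affine curve $X = \{ (x,y) \mid y^2 = x^3 + i\, x \}$ is $$ X^{\phi} = \{ (x,y) \mid y^2 = x^3 - i\, x \}, $$ obtained by applying $\phi$ to the coefficients of the defining equation.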
But for higher dimensional varieties this breaks down, as discovered by Jean-Pierre Serre in the 60's (\cite{serre}), who proved the existence of a field automorphism $\phi \in Gal(\bar{\ensuremath{\mathbb{Q}}} /\ensuremath{\mathbb{Q}}) $ and a variety $X$ defined over $\bar{\ensuremath{\mathbb{Q}}}$ such that $X$ and the Galois conjugate variety $X^{\phi}$ have non isomorphic fundamental groups. In work in collaboration with Ingrid Bauer and Fritz Grunewald (\cite{almeria}, \cite{bcgGalois}) we discovered wide classes of algebraic surfaces for which the same phenomenon holds. A striking result in a similar direction was obtained by Easton and Vakil (\cite{east-vak}). \begin{theo} The absolute Galois group $Gal(\bar{\ensuremath{\mathbb{Q}}} /\ensuremath{\mathbb{Q}})$ acts faithfully on the set of irreducible components of the (coarse) moduli space of canonical surfaces of general type, $$ \frak M^{can} : = \cup_{a,b \geq 1} \frak M^{can}_{a,b}. $$ \end{theo} \subsection{Topology sometimes determines connected components} There are cases where the presence of a big fundamental group implies that a connected component of the moduli space is determined by some topological invariants. A typical case is that of surfaces isogenous to a product (\cite{isogenous}), where a surface is said to be isogenous to a (higher) product if and only if it is a quotient $$ (C_1 \times C_2)/G, $$ where $C_1, C_2$ are curves of genera $g_1, g_2 \geq 2$, and $G$ is a finite group acting freely on $ C_1 \times C_2$. \begin{theo}\label{fabiso}(see \cite{isogenous}).
a) A projective smooth surface is isogenous to a higher product if and only if the following two conditions are satisfied: 1) there is an exact sequence $$ 1 \rightarrow \Pi_{g_1} \times \Pi_{g_2} \rightarrow \pi = \pi_1(S) \rightarrow G \rightarrow 1, $$ where $G$ is a finite group and where $\Pi_{g_i}$ denotes the fundamental group of a compact curve of genus $g_i \geq 2$; 2) $e(S) (= c_2(S)) = \frac{4}{|G|} (g_1-1)(g_2-1)$. \noindent b) Any surface $X$ with the same topological Euler number and the same fundamental group as $S$ is diffeomorphic to $S$. The corresponding subset of the moduli space, $\mathfrak{M}^{top}_S = \mathfrak{M}^{diff}_S$, consisting of the surfaces orientedly homeomorphic, resp. orientedly diffeomorphic, to $S$, is either irreducible and connected or it contains two connected components which are exchanged by complex conjugation. In particular, if $S'$ is orientedly diffeomorphic to $S$, then $S'$ is deformation equivalent to $S$ or to $\bar{S}$. \end{theo} Other non trivial examples are the cases of Keum--Naie surfaces, Burniat surfaces and Kulikov surfaces (\cite{keumnaie}, \cite{burniat1}, \cite{chancough}): for these classes of surfaces the main result is that any surface homotopically equivalent to a surface in the given class belongs to a unique irreducible connected component of the moduli space. Just to give a flavour of some of the arguments used, let us consider a simple example which I worked out together with Ingrid Bauer. Let $S$ be a minimal surface of general type with $ q(S) \geq 2$. Then we have the Albanese map $$\alpha \colon S \ensuremath{\rightarrow} A: = Alb (S), $$ and $S$ is said to be of Albanese general type if $\alpha (S) : = Z$ is a surface.
This property is a topological property (see \cite{albanese}), since $\alpha$ induces a homomorphism of cohomology algebras $$\alpha ^* \colon H^* (A, \ensuremath{\mathbb{Z}}) \ensuremath{\rightarrow} H^* (S, \ensuremath{\mathbb{Z}})$$ and $H^* (A, \ensuremath{\mathbb{Z}}) $ is the full exterior algebra $\Lambda^* (H^1 (A, \ensuremath{\mathbb{Z}})) \cong \Lambda^* (H^1 (S, \ensuremath{\mathbb{Z}}))$ over $H^1 (S, \ensuremath{\mathbb{Z}})$. In particular, in the case where $ q(S)= 2$, the degree $d$ of the Albanese map equals the index of the image of $\Lambda^4 H^1 (S, \ensuremath{\mathbb{Z}})$ inside $H^4 (S, \ensuremath{\mathbb{Z}}) = \ensuremath{\mathbb{Z}} [S]$. The easiest case is the one where $d=2$, because then $K_S = R$, $R$ being the ramification divisor. Observe that the Albanese morphism factors through the canonical model $X$ of $S$ and a morphism $ a \colon X \ensuremath{\rightarrow} A$. Assume now that $a$ is a finite morphism, so that $2K_X = a^*(a_* (K_X))$. In particular, if we set $D : = a_* (K_X)$, then $ D^2 = 2 K_X^2 = 2 K_S^2$, and this number is also a topological invariant. By the standard formula for double covers we have that $ p_g (S) = h^0 (L) + 1$, where $D$ is linearly equivalent to $2L$; hence, if $L$ is a polarization of type $(d_1, d_2)$, then $ p_g (S) = d_1 d_2 + 1$, $ D$ is a polarization of type $(2d_1, 2d_2)$, and $4 d_1 d_2 = 2 L^2 = K_S^2$; hence in particular we have $$ K_S^2 = 4 (p_g -1) = 4 \chi(S), $$ since $q(S) = 2$. I can moreover recover the polarization type $(d_1, d_2)$ (where $d_1$ divides $d_2$) using the fact that $2 d_1$ is exactly the divisibility index of $D$. This is in turn the divisibility of $K_S$, since $K_S$ gives a linear form on $H^2 (A, \ensuremath{\mathbb{Z}})$ simply by the composition of pushforward and cup product, and this linear form is represented by the class of $D$.
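Summarizing, the numerical relations just used chain together as $$ K_S^2 = K_X^2 = \tfrac{1}{2}\, D^2 = 2\, L^2 = 4\, d_1 d_2 = 4\,(p_g(S) - 1) = 4\, \chi(S), $$ where we used $D^2 = 2 K_X^2$, $D \sim 2L$ with $L^2 = 2 d_1 d_2$, $p_g(S) = d_1 d_2 + 1$, and $\chi(S) = 1 - q(S) + p_g(S) = p_g(S) - 1$ since $q(S) = 2$.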
Finally, the canonical class $K_S$ is a differentiable invariant of $S$ (see \cite{don5} or \cite{mor}). The final argument is that, by formulae due to Horikawa (\cite{quintics}), if $ K_S^2 = 4 \chi(S)$ then necessarily the branch locus has only negligible singularities (see \cite{keumnaie}), which means that the normal finite cover branched over $D$ has rational double points as singularities. \begin{theo}{\bf (Bauer--Catanese)}\label{doublecover} Let $S$ be a minimal surface of general type whose canonical model $X$ is a finite double cover of an Abelian surface $A$, branched on a divisor $D$ of type $(2d_1, 2 d_2)$. Then $S$ belongs to an irreducible connected component $\sN$ of the moduli space of dimension $ 4 d_1 d_2 + 2 = 4 \chi(S) + 2$. Moreover, 1) any other surface which is diffeomorphic to such a surface $S$ belongs to the component $\sN$. 2) The Kuranishi space $Def(X)$ is always smooth. \end{theo} The assumption that $X$ is a finite double cover is a necessary one. For instance, Penegini and Polizzi (\cite{pepo2}) construct surfaces with $p_g (S)= q (S)= 2$ and $ K^2_S=6$ such that for the general surface the canonical divisor is ample (whence $ S=X$), while the Albanese map, which is generically finite of degree $2$, contracts an elliptic curve $Z$ with $ Z^2 = -2$ to a point. The authors then show that the corresponding subset of the moduli space consists of three irreducible connected components. Other very interesting examples with degree $d=3$ have been considered by Penegini and Polizzi in \cite{pepo}. \section{Smoothings and surgeries} Lack of time prevents me from developing this section. I refer the reader to \cite{cime} for a general discussion, and to the articles \cite{man4} and \cite{korean} for interesting applications of the $\ensuremath{\mathbb{Q}}$-Gorenstein smoothings technique (for the study of components of moduli spaces, respectively for the construction of new interesting surfaces).
There is here a relation with the topic of compactifications of moduli spaces. Arguments to show that certain subsets of the moduli spaces are closed involved taking limits of canonical models and studying certain singularities (see \cite{cat2}, \cite{man3}; see also \cite{riem} for the relevant results on deformations of singularities); in \cite{k-sb} a more general study was pursued of the singularities allowed on the boundary of the moduli space of surfaces. I refer to the article by Koll\'ar in this Handbook for the general problem of compactifying moduli spaces (Viehweg also devoted a big effort to this enterprise, see \cite{vieh}, \cite{lastopus}; another important reference is \cite{nikos}). An explicit study of compactifications of the moduli spaces of surfaces of general type was pursued in \cite{opstall}, \cite{ale-pardini}, \cite{wenfei}, \cite{soenkecomp}. There is here another relation, namely to the article by Abramovich and others in this Handbook, since the deformations of pairs $(Y,D)$, where $Y$ is a smooth complex manifold and $D = \cup_{i=1, \dots, h}D_i$ is a normal crossing divisor, are governed by the cohomology groups $$H^i (\Theta_Y (- \log D_1, \dots ,- \log D_h)),$$ for $i=1,2$, where the sheaf $\Theta_Y (- \log D_1, \dots ,- \log D_h)$ is the Serre dual of the sheaf $\Omega^1_Y ( \log D_1, \dots ,\log D_h)(K_{{Y}})$, with its residue sequence $$ 0 \ensuremath{\rightarrow} \Omega^1_{{Y}} (K_{{Y}}) \ensuremath{\rightarrow} \Omega^1_{{Y}}(\log D_1, \dots ,\log D_h) (K_{{Y}}) \ensuremath{\rightarrow} \bigoplus_{i=1}^h \ensuremath{\mathcal{O}}_{D_i} (K_{{Y}})\ensuremath{\rightarrow} 0. $$ These sheaves are the appropriate ones to calculate the deformations of ramified coverings; see for instance \cite{cat1}, \cite{Pardini}, \cite{bidouble}, \cite{burniat2}, and especially \cite{burniat3}.
I was taught about these by David Mumford back in 1977, when he had just been working on the Hirzebruch proportionality principle in the non compact case (\cite{Hirzprop}). \noindent {\bf Acknowledgements.} I would like to thank Ingrid Bauer, Barbara Fantechi and Keiji Oguiso for interesting conversations, Wenfei Liu and JinXing Cai for email correspondence, S\"onke Rollenske, Antonio J. Di Scala and two referees for reading the manuscript with care and pointing out corrections. I am grateful to Jun Muk Hwang, Jongnam Lee and Miles Reid for organizing the winter school `Algebraic surfaces and related topics', which took place at the KIAS in Seoul in March 2010: some themes treated in those lectures are reproduced in this text. A substantial part of the paper was written when I was a visiting KIAS scholar in March-April 2011: I am very grateful for the excellent atmosphere I found at the KIAS and for the very nice hospitality I received from Jun Muk Hwang and Jong Hae Keum. \end{document}
\begin{document} \title{Subwavelength guided modes for acoustic waves in bubbly crystals with a line defect} \begin{abstract} The recent development of subwavelength photonic and phononic crystals shows the possibility of controlling wave propagation at deep subwavelength scales. Subwavelength bandgap phononic crystals are typically created using a periodic arrangement of subwavelength resonators, in our case small gas bubbles in a liquid. In this work, a waveguide is created by modifying the sizes of the bubbles along a line in a dilute two-dimensional bubbly crystal, thereby creating a line defect. Our aim is to prove that the line defect indeed acts as a waveguide; waves of certain frequencies will be localized to, and guided along, the line defect. The key result is an original formula for the frequencies of the defect modes. Moreover, these frequencies are computed numerically using the multipole method, which illustrates our main results. \end{abstract} \noindent \textbf{Mathematics Subject Classification (MSC2000).}~35R30, 35C20. \smallskip \noindent \textbf{Keywords.}~bubble, subwavelength resonance, subwavelength phononic crystal, subwavelength waveguide, line defect, weak localization. \section{Introduction} Line defects in photonic or phononic bandgap crystals are of interest due to their possible applications in low-loss waveguides. The main mathematical problem of interest is to show that the spectrum of the defect operator has a non-zero overlap with the original bandgap. Moreover, it is also of interest to understand the nature and location of the defect spectrum. For previous works regarding line defects in bandgap crystals we refer to \cite{santosa,kuchment_line, kuchment_EM, Brown, Brown1, cardone1, Delourme_dilute, Fliss_DtN}.
In this work, we consider a line defect in a phononic bandgap crystal comprised of gas bubbles in a liquid. The gas bubbles are known to resonate at a low frequency, called the \textit{Minnaert frequency}. The corresponding wavelength is larger than the bubble by several orders of magnitude \cite{first, minnaert}. Based on this, it is possible to create \textit{subwavelength} bandgap crystals, which operate at wavelengths much larger than the unit cell size of the microstructured material. One of the main motivations for studying subwavelength bandgap materials is to manipulate wave propagation at subwavelength scales. A second motivation is for their use in devices where conventional bandgap materials, based on Bragg scattering, would create infeasibly large devices \cite{pnas, nature}. Mathematical properties of bubbly phononic bandgap materials have been studied in, for example, \cite{first, defectSIAM, doublenegative, bandgap, nearzero, effectivemedium}, and subwavelength phononic bandgap materials have been experimentally realised in \cite{experiment2013, Lemoult_sodacan, phononic1}. Wave localization due to a point defect in a bubbly bandgap material was first proven in \cite{defectSIAM}. In \cite{defectX}, where some additions and minor corrections to \cite{defectSIAM} were made, it is shown that the mechanism for creating localized modes using small perturbations is quite different depending on the volume fraction of the bubbles. In order to create localized modes in the dilute regime, the defect should be smaller than the surrounding bubbles, while in the non-dilute regime, the defect has to be larger. Based on this, in the case of a line defect, it is natural to expect different behaviour in these two different regimes. This suggests that different methods of analysis are needed in the two regimes. In this paper, we will mainly focus on the dilute regime, taking the radius of the bubbles sufficiently small. 
If the defect size is small, \textit{i.e.}{} if the size of the perturbed bubble is close to its original size, then the band structure of the defect problem will be a small perturbation of the band structure of the original problem \cite{MaCMiPaP, AKL}. This way, it is possible to shift the defect band upwards, and a part of the defect band will fall into the subwavelength bandgap. However, because of the curvature of the original band, it is impossible to create a defect band entirely inside the bandgap with this approach. In order to create defect bands which are entirely located inside the subwavelength bandgap, we have to consider slightly larger perturbations. In this paper, we will show that for arbitrarily small defects, a part of the defect band will lie inside the bandgap. Moreover, we will show that for suitably large perturbation sizes, the entire defect band will fall into the bandgap, and we will explicitly quantify the size of the perturbation needed in order to achieve this. Because of this, our results are more general than previous weak localization results, since we explicitly show how the defect band depends on the perturbation size. In order to have \textit{guided} waves along the line defect, the defect mode must not only be localized to the line, but must also propagate along the line. In other words, we must exclude the case of standing waves in the line defect, \textit{i.e.}{} modes which are localized in the direction of the line. As discussed in \cite{kuchment_review,kuchment_line}, such modes are associated with the point spectrum of the perturbed operator, which appears as a flat band in the dispersion relation.
Proving the absence of bound modes in phononic or photonic waveguides is a challenging problem; for example, in \cite{hardwall1} this was proven by imposing ``hard-wall'' Dirichlet or Neumann boundary conditions along the waveguide, while in \cite{surface4} the absence of bound modes was proven in the case of a simpler Helmholtz-type operator. In this paper, we use the explicit formula for the defect band to show that it is nowhere flat, and hence does not correspond to bound modes in the direction of the line. The paper is structured as follows. In Section \ref{sec-1} we discuss preliminary results on layer potentials, and outline the main results from \cite{bandgap}. In Section \ref{sec-2} we restrict to circular domains and follow the approach of \cite{defectSIAM,defectX} to model the line defect using the fictitious source superposition method, originally introduced in \cite{Wilcox}. In Section \ref{sec-3} we prove the existence of a defect resonance frequency, and derive an asymptotic formula in terms of the density contrast in the dilute regime. Using this formula, we show that the defect modes are localized to, and guided along, the line defect. In Section \ref{sec:num} we compute the defect band numerically, in order to verify the formula and also illustrate the behaviour in the non-dilute regime. The paper ends with some concluding remarks in Section \ref{sec-5}. In Appendix \ref{app:nondil}, we restrict ourselves to small perturbations to derive an asymptotic formula valid in the non-dilute regime. In Appendix \ref{app:noncirc} we outline the fictitious source superposition method in the case of non-circular domains. \section{Preliminaries} \label{sec-1} \subsection{Layer potentials} \label{sec:layerpot} Let $Y^2 =[-1/2,1/2)^2\subset \mathbb{R}^2$ be the unit cell and assume that the bubble occupies a bounded and simply connected domain $D \subset Y^2$ with $\partial D \in C^{1,s}$ for some $0<s<1$.
Let $\Gamma^0$ and $\Gamma^k, k>0,$ be the Green's functions of the Laplace and Helmholtz equations in dimension two, respectively, \textit{i.e.}{}, \begin{equation*} \begin{cases} \displaystyle \Gamma^k(x,y) = -\frac{i}{4}H_0^{(1)}(k|x-y|), \ & k>0, \\ \displaystyle \Gamma^0(x,y) = \frac{1}{2\pi}\ln|x-y|, & k=0, \end{cases} \end{equation*} where $H_0^{(1)}$ is the Hankel function of the first kind and order zero. Here, the outgoing Sommerfeld radiation condition is used for selecting the physical Helmholtz Green's function \cite{MaCMiPaP}. Let $\mathcal{S}_{D}^k: L^2(\partial D) \rightarrow H_{\textrm{loc}}^1(\mathbb{R}^2)$ be the single layer potential defined by \begin{equation*} \mathcal{S}_D^k[\phi](x) = \int_{\partial D} \Gamma^k(x,y)\phi(y) \: \mathrm{d} \sigma(y), \quad x \in \mathbb{R}^2. \end{equation*} Here, $H_{\textrm{loc}}^1(\mathbb{R}^2)$ denotes the space of functions that, on every compact subset of $\mathbb{R}^2$, are square integrable and have a weak first derivative that is also square integrable. We also define the Neumann--Poincar\'e operator $\mathcal{K}_D^{k,*}: L^2(\partial D) \rightarrow L^2(\partial D)$ by \begin{equation*} \mathcal{K}_D^{k,*}[\phi](x) = \int_{\partial D} \frac{\partial }{\partial \nu_x}\Gamma^k(x,y) \phi(y) \: \mathrm{d} \sigma(y), \quad x \in \partial D. \end{equation*} The following so-called jump relations of $\mathcal{S}_D^k$ on the boundary $\partial D$ are well-known (see, for example, \cite{MaCMiPaP}): \begin{equation*} \mathcal{S}_D^k[\phi]\big|_+ = \mathcal{S}_D^k[\phi]\big|_-, \end{equation*} and \begin{equation*} \frac{\partial }{\partial \nu}\mathcal{S}_D^k[\phi] \bigg|_{\pm} = \left(\pm\frac{1}{2} I + \mathcal{K}_D^{k,*}\right) [\phi].
\end{equation*} Here, $\partial/\partial \nu$ denotes the outward normal derivative, and $|_\pm$ denote the limits from outside and inside $D$. In two dimensions, we have the following expansion of the Green's function for the Helmholtz equation \cite{MaCMiPaP}: \begin{equation*}\label{eq:hankel} -\frac{i}{4}H_0^{(1)}(k|x-y|) = \frac{1}{2\pi} \ln |x-y| + \eta_k + \sum_{j=1}^\infty\left( b_j \ln(k|x-y|) + c_j \right) (k|x-y| )^{2j}, \end{equation*} where $\ln$ is the principal branch of the logarithm and $$ \eta_k = \frac{1}{2\pi}(\ln k+\gamma-\ln 2)-\frac{i}{4}, \quad b_j=\frac{(-1)^j}{2\pi}\frac{1}{2^{2j}(j!)^2}, \quad c_j=b_j\left( \gamma - \ln 2 - \frac{i\pi}{2} - \sum_{n=1}^j \frac{1}{n} \right),$$ with $\gamma$ being the Euler constant. Define, for $\phi \in L^2(\partial D)$, \begin{equation*} \hat{\mathcal{S}}_D^k[\phi](x) = \mathcal{S}_D^0[\phi](x) + \eta_k\int_{\partial D} \phi\: \mathrm{d} \sigma. \end{equation*} Then the following expansion holds: \begin{equation} \label{eq:Sexpansion} \mathcal{S}_D^k = \hat{\mathcal{S}}_{D}^k +O(k^2\ln k). \end{equation} We also introduce a quasi-periodic version of the layer potentials. For $\alpha\in [0,2\pi)^2$, the quasi-periodic Green's function $\Gamma^{\alpha, k}$ is defined to satisfy $$ (\Delta_x + k^2) \Gamma^{\alpha, k} (x,y) = \sum_{n\in \mathbb{Z}^2} \delta(x-y-n) e^{i n\cdot \alpha}, \qquad x,y\in Y,$$ where $\delta$ is the Dirac delta function. The function $\Gamma^{\alpha, k} $ is $\alpha$-quasi-periodic in $x$, \textit{i.e.}{}, $e^{- i \alpha\cdot x} \Gamma^{\alpha, k}(x,y)$ is periodic in $x$ with respect to $Y$.
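The leading behaviour in the free-space expansion above is easy to check numerically. The following sketch (my own, assuming SciPy's Hankel function \texttt{hankel1} is available) compares $-\frac{i}{4}H_0^{(1)}(k|x-y|)$ with the leading terms $\frac{1}{2\pi}\ln|x-y| + \eta_k$ for a small wavenumber $k$:

```python
import numpy as np
from scipy.special import hankel1

gamma = np.euler_gamma  # Euler's constant

def green(k, r):
    # Outgoing 2D Helmholtz Green's function: Gamma^k = -(i/4) H_0^{(1)}(k r)
    return -0.25j * hankel1(0, k * r)

def eta(k):
    # eta_k = (1/2pi)(ln k + gamma - ln 2) - i/4
    return (np.log(k) + gamma - np.log(2)) / (2 * np.pi) - 0.25j

k, r = 1e-4, 0.3
# Leading terms of the expansion: (1/2pi) ln r + eta_k;
# the remainder is O((kr)^2 ln(kr)), negligible at this k.
approx = np.log(r) / (2 * np.pi) + eta(k)
assert abs(green(k, r) - approx) < 1e-7
```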
We define the quasi-periodic single layer potential $\mathcal{S}_D^{\alpha,k}$ by $$\mathcal{S}_D^{\alpha,k}[\phi](x) = \int_{\partial D} \Gamma^{\alpha,k} (x,y) \phi(y) \: \mathrm{d}\sigma(y),\quad x\in \mathbb{R}^2.$$ It satisfies the following jump formulas: \begin{equation*} \mathcal{S}_D^{\alpha,k}[\phi]\big|_+ = \mathcal{S}_D^{\alpha,k}[\phi]\big|_-, \end{equation*} and $$ \frac{\partial}{\partial\nu} \mathcal{S}_D^{\alpha,k}[\phi] \Big|_{\pm} = \left( \pm \frac{1}{2} I +( \mathcal{K}_D^{-\alpha,k} )^*\right)[\phi]\quad \mbox{on}~ \partial D,$$ where $(\mathcal{K}_D^{-\alpha,k})^*$ is the operator given by $$ (\mathcal{K}_D^{-\alpha, k} )^*[\phi](x)= \int_{\partial D} \frac{\partial}{\partial\nu_x} \Gamma^{\alpha,k}(x,y) \phi(y) \: \mathrm{d}\sigma(y).$$ We recall that $\mathcal{S}_D^{\alpha,0} : L^2(\partial D) \rightarrow H^1(\partial D)$ is invertible for $\alpha \ne 0$ \cite{MaCMiPaP}. \subsection{Floquet transform} A function $f(x_1)$ is said to be $\alpha$-quasi-periodic in the variable $x_1\in \mathbb{R}$ if $e^{-i\alpha x_1}f(x_1)$ is periodic. Given a function $f\in L^2(\mathbb{R})$, the Floquet transform in one dimension is defined as \begin{equation}\label{eq:floquet} \mathcal{F}[f](x_1,\alpha) = \sum_{m\in \mathbb{Z}} f(x_1-m) e^{i\alpha m}, \end{equation} which is $\alpha$-quasi-periodic in $x_1$ and periodic in $\alpha$. Let $Y = [-1/2,1/2)$ be the unit cell and $Y^* := \mathbb{R} / 2\pi \mathbb{Z} \simeq [0,2\pi)$ be the Brillouin zone. The Floquet transform is an invertible map $\mathcal{F}:L^2(\mathbb{R}) \rightarrow L^2(Y\times Y^*)$, with inverse (see, for instance, \cite{kuchment, MaCMiPaP}) \begin{equation*} \mathcal{F}^{-1}[g](x_1) = \frac{1}{2\pi}\int_{Y^*} g(x_1,\alpha) \: \mathrm{d} \alpha.
\end{equation*} \subsection{Bubbly crystals and subwavelength bandgaps}\label{subsec:bandgap} Here we briefly review the subwavelength bandgap opening of a bubbly crystal from \cite{bandgap}. Assume that a single bubble occupies the region $D$ specified in Section \ref{sec:layerpot}. We denote by $\rho_b$ and $\kappa_b$ the density and the bulk modulus inside the bubble, respectively. We let $\rho_w$ and $\kappa_w$ be the corresponding parameters outside the bubble. We introduce \begin{equation*} v_w = \sqrt{\frac{\kappa_w}{\rho_w}}, \quad v_b = \sqrt{\frac{\kappa_b}{\rho_b}}, \quad k_w= \frac{\omega}{v_w} \quad \text{and} \quad k_b= \frac{\omega}{v_b} \end{equation*} as the speeds of sound outside and inside the bubbles and the wavenumbers outside and inside the bubbles, respectively. Here, $\omega$ is the operating frequency of the acoustic waves. Let $\mathcal{C} = \cup_{n\in\mathbb{Z}^2}(D+n)$ be the periodic bubbly crystal. Define, for $x\in \mathbb{R}^2$, $$\rho(x) = \rho_b\chi_{\mathcal{C}}(x) + \rho_w(1-\chi_{\mathcal{C}}(x)), \quad \kappa(x) = \kappa_b\chi_{\mathcal{C}}(x) + \kappa_w(1-\chi_{\mathcal{C}}(x)),$$ where $\chi_{\mathcal{C}}$ is the characteristic function of $\mathcal{C}$. We assume that there is a large contrast in the density, that is, the density contrast $\delta$ satisfies \begin{equation} \label{data2} \delta = \frac{\rho_b}{\rho_w} \ll 1. \end{equation} Recall that under (\ref{data2}), there exists a subwavelength resonance of the bubble in free space \cite{first}. In the following, we shall also make the assumption stated below. \begin{assump} \label{assumption1} Without loss of generality, we assume that $$v_w = v_b = 1.$$ \end{assump} In this case we have $k_b = k_w = \omega$. Assumption \ref{assumption1} only serves to simplify the expressions; the methods presented in this paper apply as long as the wave speeds outside and inside the bubbles are comparable to each other.
The wave propagation problem inside the periodic crystal can be modelled as \begin{equation}\label{eq:original} \kappa(x) \nabla \cdot \left(\frac{1}{\rho(x)} \nabla v(x) \right) + \omega^2 v(x) = 0, \quad x \in \mathbb{R}^2.\end{equation} We denote by $\Lambda_0$ the set of propagating frequencies, \textit{i.e.}{}, the set of $\omega$ such that $\omega^2$ is in the spectrum of the operator $$-\kappa \nabla \cdot \frac{1}{\rho} \nabla.$$ Denote by $Y_s = Y\times \mathbb{R}$ the unit strip and recall that $Y^2 = [-1/2,1/2)^2$ is the unit cell of the crystal. Applying the Floquet transform, first in the $x_1$-direction and then in the $x_2$-direction, equation \eqnref{eq:original} can be decomposed first as \begin{equation}\label{eq:Fonce} \begin{cases} \displaystyle \kappa(x) \nabla \cdot \left(\frac{1}{\rho(x)} \nabla v(x) \right) + \omega^2 v(x) = 0, \quad x \in Y_s, \\ \displaystyle e^{-i \alpha_1 x_1} v \,\,\, \mbox{is periodic in } x_1, \end{cases} \end{equation} where $\alpha_1 \in Y^*$, and then as \begin{equation}\label{eq-scattering-quasiperiodic} \begin{cases} \displaystyle \kappa(x) \nabla \cdot \left(\frac{1}{\rho(x)} \nabla v(x) \right) + \omega^2 v(x) = 0, \quad x \in Y^2, \\ \displaystyle e^{-i \alpha \cdot x} v \,\,\, \mbox{is periodic in } x, \end{cases} \end{equation} where $\alpha = (\alpha_1, \alpha_2) \in Y^*\times Y^*$. We denote by $\Lambda_{0,\alpha_1}$ the set of $\omega$ such that $\omega^2$ is in the spectrum of the operator implied by \eqnref{eq:Fonce} and by $\Lambda^{ess}_{0,\alpha_1}$ the essential part of this spectrum.
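The decomposition above rests on the Floquet inversion formula \eqref{eq:floquet}. As an illustrative sanity check (the test function and discretization are arbitrary choices), averaging $\mathcal{F}[f](x_1,\cdot)$ over the Brillouin zone recovers $f(x_1)$, since $\frac{1}{2\pi}\int_0^{2\pi}e^{i\alpha m}\,\mathrm{d}\alpha = \delta_{m,0}$:

```python
# Illustrative check of the 1-D Floquet inversion formula:
#   f(x1) = (1/2pi) \int_{Y*} F[f](x1, alpha) d(alpha),
# where F[f](x1, alpha) = sum_m f(x1 - m) exp(i alpha m).
import numpy as np

f  = lambda x: np.exp(-x**2)                 # arbitrary, rapidly decaying test function
x1 = 0.3
ms = np.arange(-30, 31)                      # truncated lattice sum
alphas = np.linspace(0, 2*np.pi, 512, endpoint=False)   # periodic grid on Y*

F = np.exp(1j * np.outer(alphas, ms)) @ f(x1 - ms)      # Floquet transform at fixed x1
f_rec = F.mean().real                        # periodic trapezoid rule for (1/2pi) * integral

assert abs(f_rec - f(x1)) < 1e-12
```

The periodic trapezoid rule integrates each mode $e^{i\alpha m}$, $|m| < 512$, exactly, so the reconstruction is accurate to machine precision here.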
It is known that \eqref{eq-scattering-quasiperiodic} has non-trivial solutions for discrete values of $\omega$: $$ 0 \le \omega_1^\alpha \le \omega_2^\alpha \le \cdots,$$ and we have the following band structure of propagating frequencies for the periodic bubbly crystal $\mathcal{C}$: \begin{align*} \displaystyle \Lambda_{0,\alpha_1} &= \left[\min_{\alpha_2\in Y^*} \omega_1^{(\alpha_1,\alpha_2)},\max_{\alpha_2\in Y^*} \omega_1^{(\alpha_1,\alpha_2)}\right] \cup \left[\min_{\alpha_2\in Y^*} \omega_2^{(\alpha_1,\alpha_2)}, \max_{\alpha_2\in Y^*} \omega_2^{(\alpha_1,\alpha_2)} \right] \cup \cdots, \\ \displaystyle \Lambda_0 &= \left[0,\max_{\alpha\in Y^*\times Y^*} \omega_1^\alpha\right] \cup \left[\min_{\alpha\in Y^*\times Y^*} \omega_2^\alpha, \max_{\alpha\in Y^*\times Y^*} \omega_2^\alpha \right] \cup \cdots. \end{align*} In \cite{bandgap}, it is proved that there exists a subwavelength spectral gap opening in the band structure. Let us briefly review this result. We look for a solution $v$ of \eqref{eq-scattering-quasiperiodic} of the following form: \begin{equation*} \label{Helm-solution} v = \begin{cases} \mathcal{S}_{D}^{\alpha,k_w} [\psi^\alpha]\quad & \text{in} ~ Y^2 \setminus \overline{D},\\ \mathcal{S}_{D}^{k_b} [\varphi^\alpha] &\text{in} ~ {D}, \end{cases} \end{equation*} for some densities $\varphi^\alpha, \psi^\alpha \in L^2(\partial D)$. Using the jump relations for the single layer potentials, one can show that~\eqref{eq-scattering-quasiperiodic} is equivalent to the boundary integral equation \begin{equation} \label{eq-boundary} \mathcal{A}^\alpha(\omega, \delta)[\Phi^\alpha] =0, \end{equation} where \[ \mathcal{A}^\alpha(\omega, \delta) = \begin{pmatrix} \mathcal{S}_D^{k_b} & -\mathcal{S}_D^{\alpha,k_w} \\ -\frac{1}{2}+ \mathcal{K}_D^{k_b,*}& -\delta\left( \frac{1}{2}+ \left(\mathcal{K}_D^{ -\alpha,k_w}\right)^*\right) \end{pmatrix}, \,\, \Phi^\alpha= \begin{pmatrix} \varphi^\alpha\\ \psi^\alpha \end{pmatrix}.
\] Since it can be shown that $\omega=0$ is a characteristic value of the operator-valued analytic function $\mathcal{A}^\alpha(\omega,0)$, we can conclude the following result from the Gohberg-Sigal theory \cite{MaCMiPaP, Gohberg1971}. \begin{lem} For any sufficiently small $\delta$, there exists a characteristic value $\omega_1^\alpha= \omega_1^\alpha(\delta)$ of the operator-valued analytic function $\mathcal{A}^\alpha(\omega, \delta)$ such that $\omega_1^\alpha(0)=0$ and $\omega_1^\alpha$ depends on $\delta$ continuously. \end{lem} The next theorem gives the asymptotic expansion of $\omega_1^\alpha$ as $\delta\rightarrow 0$. \begin{thm}{\rm{\cite{bandgap}}} \label{approx_thm} For $\alpha \ne 0$ and sufficiently small $\delta$, we have \begin{align*} \omega_1^\alpha= \sqrt{\frac{\delta {\mathrm{Cap}}_{D,\alpha}}{|D|}} + O(\delta^{3/2}), \end{align*} where the constant ${\mathrm{Cap}}_{D,\alpha}$ is given by $${\mathrm{Cap}}_{D,\alpha}:= - \langle(\mathcal{S}_D^{\alpha,0})^{-1} [\chi_{\partial D}], \chi_{\partial D}\rangle.$$ Here, $\langle \,\cdot \,,\,\cdot\, \rangle$ stands for the standard inner product of $L^2(\partial D)$ and $\chi_{\partial D}$ denotes the characteristic function of $\partial D$. \end{thm} Let $\omega_1^*=\max_\alpha \omega_1^\alpha$. The following theorem expresses the fact that a subwavelength bandgap opens in the band structure of the bubbly crystal. \begin{thm}{\rm{\cite{bandgap}}} \label{main_bandgap} For every $\varepsilon>0$, there exist $\delta_0>0$ and $\tilde \omega > \omega_1^*$ such that \begin{equation*} [ \omega_1^*+\varepsilon, \tilde\omega ] \subset [\max_\alpha \omega_1^\alpha, \min_\alpha \omega_2^\alpha] \end{equation*} for $\delta<\delta_0$.
\end{thm} \section{Integral representation for bubbly crystals with a defect} \label{sec-2} \subsection{Formulation of the line defect problem} \begin{figure} \caption{Illustration of the defect crystal and the material parameters.} \label{fig:defect} \end{figure} In the following, we will consider the case when all the bubbles are circular disks. This gives a convenient presentation and makes the problem similar to the point defect problem studied in \cite{defectSIAM,defectX}. In Appendix \ref{app:noncirc}, we outline the analysis in the case of non-circular bubbles. Consider a perturbed crystal, where all the disks along the $x_1$-axis are replaced by defect disks of radius $R_d$ with $0< R_d<R$. Denote the central defect disk by $D_d$ and let $$\mathcal{C}_d = \left( \bigcup_{m\in \mathbb{Z}} D_d + (m,0)\right) \cup \left( \bigcup_{\substack{ m\in \mathbb{Z} \\ n\in\mathbb{Z}\setminus\{0\}}} D+(m,n) \right)$$ be the perturbed crystal, depicted in Figure \ref{fig:defect}. Moreover, let $\varepsilon = R_d-R \in (-R,0)$ be the (negative) perturbation of the radius.
Define $$\rho_d(x) = \rho_b\chi_{\mathcal{C}_d}(x) + \rho_w(1-\chi_{\mathcal{C}_d}(x)), \quad \kappa_d(x) = \kappa_b\chi_{\mathcal{C}_d}(x) + \kappa_w(1-\chi_{\mathcal{C}_d}(x)).$$ The wave propagation problem inside the perturbed crystal can be modelled as \begin{equation}\label{eq:scattering} \kappa_d(x) \nabla \cdot \left(\frac{1}{\rho_d(x)} \nabla u(x) \right) + \omega^2 u(x) = 0, \quad x\in\mathbb{R}^2.\end{equation} We denote by $\Lambda_d$ the set of propagating frequencies in the line defect crystal, \textit{i.e.}{} the set of $\omega$ such that $\omega^2$ is in the spectrum of the operator $$-\kappa_d \nabla \cdot \frac{1}{\rho_d} \nabla.$$ Since the defect crystal is periodic in the $x_1$-direction, we can use the Floquet transform to decompose \eqnref{eq:scattering} as \begin{equation}\label{eq:scattering-strip} \begin{cases} \displaystyle \kappa_d(x) \nabla \cdot \left(\frac{1}{\rho_d(x)} \nabla u(x) \right) + \omega^2 u(x) = 0, \quad x \in Y_s, \\ \displaystyle e^{-i \alpha_1 x_1} u \,\,\, \mbox{is periodic in } x_1, \end{cases} \end{equation} where $\alpha_1 \in Y^*$ and $Y_s$ again denotes the strip $Y_s = [-1/2,1/2)\times \mathbb{R}$. We will denote by $\Lambda_{d,\alpha_1}$ the set of $\omega$ such that $\omega^2$ is in the spectrum of the operator implied by \eqnref{eq:scattering-strip} and by $\Lambda^{ess}_{d,\alpha_1}$ the corresponding essential part of the spectrum. In the strip $Y_s$, the perturbations $\rho_d-\rho$ and $\kappa_d-\kappa$ have compact support. Since the essential spectrum is stable under compact perturbations \cite{Figotin,MMMP4}, it can be shown that the essential spectra $\Lambda^{ess}_{0,\alpha_1}$ and $\Lambda^{ess}_{d,\alpha_1}$ coincide. In this paper, we want to show that introducing the line defect creates a defect band $\omega^\varepsilon(\alpha_1) \notin \Lambda_{0,\alpha_1}$.
Moreover, we want to show that $\varepsilon$ can be chosen such that $\omega^\varepsilon(\alpha_1) \notin \Lambda_0$ for all $\alpha_1 \in Y^*$, which means that any Bloch mode is localized to the line defect. We also want to show that $\omega^\varepsilon(\alpha_1)$ is not contained in the pure point part of $\Lambda_{d,\alpha_1}$, which means that there are no bound modes in the defect direction. \subsection{Effective sources for the defect} Here we describe an effective sources approach to the solution of \eqnref{eq:scattering-strip} in the strip. The idea is to model the defect bubble $D_d$ as an unperturbed bubble $D$ with additional fictitious monopole and dipole sources $f$ and $g$. This method was originally introduced in \cite{Wilcox} and was later applied in \cite{defectSIAM,defectX} to a point defect in a bubbly crystal. Let us consider the following problem: \begin{equation} \label{eq:scattering_fictitious} \left\{ \begin{array} {ll} &\displaystyle \nabla \cdot \frac{1}{\rho_w} \nabla \widetilde{u}+ \frac{\omega^2}{\kappa_w} \widetilde{u} = 0 \quad \text{in} \quad Y_s \setminus \mathcal{C}, \\ \noalign{\smallskip} &\displaystyle \nabla \cdot \frac{1}{\rho_b} \nabla \widetilde{u}+ \frac{\omega^2}{\kappa_b} \widetilde{u} = 0 \quad \text{in} \quad Y_s\cap\mathcal{C}, \\ \noalign{\smallskip} &\displaystyle \widetilde{u}|_{+} -\widetilde{u}|_{-} =f\delta_{m,0} \quad \text{on} \quad \partial D+(0,m), \ m\in\mathbb{Z}, \\ \noalign{\smallskip} & \displaystyle \frac{1}{\rho_w} \frac{\partial \widetilde{u}}{\partial \nu} \bigg|_{+} - \frac{1}{\rho_b} \frac{\partial \widetilde{u}}{\partial \nu} \bigg|_{-} =g\delta_{m,0} \quad \text{on} \quad \partial D+(0,m), \ m\in\mathbb{Z}, \\ \noalign{\smallskip} & \displaystyle e^{-i \alpha_1 x_1} \widetilde u \,\,\, \mbox{is periodic in } x_1, \end{array} \right. \end{equation} where $f$ and $g$ are the source terms and $\delta_{m,n}$ is the Kronecker delta.
Note that the sources are present only on the boundary of the central bubble $D$. We denote the solution to the original problem \eqnref{eq:scattering-strip} by $u$ and the solution to the effective source problem \eqref{eq:scattering_fictitious} by $\widetilde{u}$. We want to find appropriate conditions on $f$ and $g$ in order to achieve \begin{equation}\label{eq:coincide} u \equiv \widetilde{u} \quad \mbox{in } (Y_s\setminus D) \cup D_d. \end{equation} Then $u$ can be recovered by extending $\widetilde{u}$ to the whole region, including $D\setminus D_d$, using the boundary conditions on $\partial D$ and $\partial D_d$. The conditions on the effective sources $f$ and $g$ that are necessary in order to correctly model the defect will be characterized in the next subsection. \subsection{Characterization of the effective sources}\label{subsec:char_eff} Here we clarify the relation between the effective source pair $(f,g)$ and the layer density pair $(\varphi,\psi)$ defined in equation \eqnref{eq:u_rep_effective} below. First, we observe that away from the central unit cell $Y^2$, the equations \eqnref{eq:scattering-strip} and \eqref{eq:scattering_fictitious} satisfy the same geometric and quasi-periodic conditions. Thus, in order for \eqnref{eq:coincide} to hold, it is sufficient for $u$ and $\widetilde{u}$ to coincide inside the central unit cell $Y^2$. Inside $Y^2$, the solution $\widetilde{u}$ can be represented as \begin{align}\label{eq:u_rep_effective} \widetilde{u} = \begin{cases} H + \mathcal{S}_D^{k_w}[\psi] &\quad \mbox{in } Y^2\setminus \overline{D}, \\ \mathcal{S}_D^{k_b}[\varphi] &\quad \mbox{in } D, \end{cases} \end{align} for some pair $(\varphi,\psi)\in L^2(\partial D)^2$, with $H$ satisfying the homogeneous equation $(\Delta + k_w^2) H = 0$ in $Y^2$.
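Since the bubbles are circular, the jump relations from Section \ref{sec:layerpot} that underlie the representation \eqnref{eq:u_rep_effective} can be checked mode by mode: on the Fourier basis, the interior and exterior normal-derivative traces of $\mathcal{S}_D^k$ on a disk of radius $R$ are given by Bessel functions (as in the matrix representations used below), and the jump of size one reduces to the Wronskian identity $J_n(z)\big(H_n^{(1)}\big)'(z) - J_n'(z)H_n^{(1)}(z) = \frac{2i}{\pi z}$. An illustrative numerical verification (parameter values arbitrary):

```python
# Jump relation of S_D^k on a disk, checked per Fourier mode e^{int}:
#   d/dnu S_D^k |_-  ->  (-i pi R / 2) * k * Jn'(kR) * Hn(kR)
#   d/dnu S_D^k |_+  ->  (-i pi R / 2) * k * Jn(kR)  * Hn'(kR)
# The difference (outside minus inside) must be exactly 1, i.e. the identity.
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

R, k = 1.3, 0.8                                # arbitrary radius and wavenumber
for n in range(6):
    z = k * R
    inner = -1j * np.pi * R / 2 * k * jvp(n, z) * hankel1(n, z)   # trace from inside
    outer = -1j * np.pi * R / 2 * k * jv(n, z) * h1vp(n, z)       # trace from outside
    assert abs((outer - inner) - 1.0) < 1e-12                     # Wronskian identity
```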
In (\ref{eq:u_rep_effective}), the local properties of $\widetilde{u}$ around $\partial D$ are given by the single layer potentials, while $H$ can be chosen to make $\widetilde{u}$ satisfy the quasi-periodic condition. From the jump conditions given in Section \ref{sec:layerpot}, the pair $(\varphi,\psi)$ satisfies \begin{equation}\label{eq:AD} \mathcal{A}_D \begin{pmatrix} \varphi \\ \psi \end{pmatrix} := \begin{pmatrix} \mathcal{S}_D^{k_b} & - \mathcal{S}_D^{k_w} \\[0.3em] \displaystyle \partial \mathcal{S}_D^{k_b}/\partial \nu |_{-} & \displaystyle -\delta \partial \mathcal{S}_D^{k_w} / \partial \nu|_{+} \end{pmatrix} \begin{pmatrix} \varphi \\ \psi \end{pmatrix} = \begin{pmatrix} H|_{\partial D}- f \\[0.3em] \displaystyle\partial H/\partial \nu |_{\partial D} - g \end{pmatrix}. \end{equation} Similarly, inside $Y^2$, the solution $u$ can be represented as \begin{align*} u= \begin{cases} H + \mathcal{S}_{D_d}^{k_w}[\psi_d] &\quad \mbox{in } Y^2\setminus \overline{D_d}, \\ \mathcal{S}_{D_d}^{k_b}[\varphi_d] &\quad \mbox{in } D_d, \end{cases} \end{align*} where \begin{equation}\label{eq:ADd_original} \mathcal{A}_{D_d} \begin{pmatrix} \varphi_d \\ \psi_d \end{pmatrix}:= \begin{pmatrix} \mathcal{S}_{D_d}^{k_b} & - \mathcal{S}_{D_d}^{k_w} \\[0.3em] \displaystyle \partial \mathcal{S}_{D_d}^{k_b} /\partial \nu|_{-} & \displaystyle -\delta \partial \mathcal{S}_{D_d}^{k_w}/\partial \nu |_{+} \end{pmatrix} \begin{pmatrix} \varphi_d \\ \psi_d \end{pmatrix} = \begin{pmatrix} H|_{\partial {D_d}} \\[0.3em] \displaystyle\partial H /\partial \nu |_{\partial {D_d}} \end{pmatrix}.
\end{equation} Now, having the two solutions coincide inside $(Y^2\setminus D) \cup D_d$ is equivalent to the conditions \begin{equation}\label{eq:SDdSD_inside} \mathcal{S}_{D_d}^{k_b}[\varphi_d] \equiv \mathcal{S}_D^{k_b}[\varphi] \quad\mbox{in }D_d, \end{equation} and \begin{equation}\label{eq:SDdSD_outside} \mathcal{S}_{D_d}^{k_w}[\psi_d] \equiv \mathcal{S}_D^{k_w}[\psi] \quad\mbox{in }Y^2\setminus \overline{D}. \end{equation} Assuming $D$ is a disk, the above equations were solved in \cite{defectSIAM,defectX}, and we state the results in Proposition \ref{prop:effective} below. First, we introduce some notation. Since $D$ and $D_d$ are circular disks, we can use a Fourier basis for functions in $L^2(\partial D)$ or $L^2(\partial D_d)$. For $n\in \mathbb{Z}$, define the subspace $V_n$ of $L^2(\partial D)$ as $V_n : = \mbox{span}\{ e^{in \theta}\}$. Then define the subspace $V_{mn}$ of $L^2(\partial D)^2$ as $$ V_{mn} : = V_m \times V_n, \quad m,n\in \mathbb{Z}. $$ Similarly, let ${V}_{mn}^d$ be the subspace of $L^2(\partial D_d)^2$ with the same Fourier basis. Then it can be shown that the operator $\mathcal{A}_{D}$ in \eqref{eq:AD} has the following matrix representation as an operator from $V_{mn}$ to $V_{m'n'}$: \begin{equation*}\label{eq:AD_multipole} (\mathcal{A}_{D})_{V_{mn}\rightarrow V_{m'n'}} = \delta_{mm'}\delta_{nn'}\delta_{mn} \frac{(-i)\pi R}{2} \begin{pmatrix} J_n(k_b R)H_n^{(1)}(k_b R) & - J_n(k_w R)H_n^{(1)}(k_w R) \\ k_b J_n'(k_b R) H_n^{(1)}(k_b R) & - \delta k_w J_n(k_w R) \big(H_n^{(1)}\big)'(k_w R) \end{pmatrix}.
\end{equation*} Similarly, the operator $\mathcal{A}_{D_d}$ in \eqref{eq:ADd_original} is represented as follows: \begin{equation*}\label{eq:ADd_multipole} (\mathcal{A}_{D_d})_{V_{mn}^d\rightarrow V_{m'n'}^d} = \delta_{mm'}\delta_{nn'}\delta_{mn} \frac{(-i)\pi R_d}{2} \begin{pmatrix} J_n(k_b R_d)H_n^{(1)}(k_b R_d) & - J_n(k_w R_d)H_n^{(1)}(k_w R_d) \\ k_b J_n'(k_b R_d) H_n^{(1)}(k_b R_d) & - \delta k_w J_n(k_w R_d) \big(H_n^{(1)}\big)'(k_w R_d) \end{pmatrix}. \end{equation*} In \cite{defectSIAM,defectX}, the following proposition was shown. \begin{prop} \label{prop:effective} The density pair $(\varphi,\psi)$ and the effective sources $(f,g)$ satisfy the following relation: \begin{equation*}\label{eq:relation_density_source} (\mathcal{A}_D^\varepsilon - \mathcal{A}_D)\begin{pmatrix} \varphi \\[0.3em] \psi \end{pmatrix} = \begin{pmatrix} f \\[0.3em] g \end{pmatrix}, \end{equation*} where the operators $\mathcal{P}_1: L^2(\partial D)^2\rightarrow L^2(\partial D_d)^2$ and $\mathcal{P}_2: L^2(\partial D)^2\rightarrow L^2(\partial D_d)^2$ are defined by \begin{align*} (\mathcal{P}_1)_{V_{mn}\rightarrow V_{m'n'}^d} &= \delta_{mm'}\delta_{nn'}\delta_{mn} \frac{R}{R_d}\begin{pmatrix} \displaystyle\frac{H_n^{(1)}(k_b R) }{H_n^{(1)}(k_b R_d)} & 0 \\ 0& \displaystyle\frac{J_n(k_w R) }{J_n(k_w R_d)} \end{pmatrix}, \\ (\mathcal{P}_2)_{V_{mn}\rightarrow V_{m'n'}^d} &= \delta_{mm'}\delta_{nn'}\delta_{mn} \begin{pmatrix} \displaystyle \frac{J_n(k_w R_d) }{J_n(k_w R)} & 0 \\ 0& \displaystyle\frac{J_n'(k_w R_d) }{J_n'(k_w R)} \end{pmatrix}, \end{align*} and $\mathcal{A}_D^\varepsilon$ is defined as \begin{equation}\label{eq:ADd} \mathcal{A}_D^\varepsilon := (\mathcal{P}_2)^{-1}\mathcal{A}_{D_d} \mathcal{P}_1 . \end{equation} \end{prop} \subsection{Floquet transform of the solution} In view of Proposition \ref{prop:effective}, we can identify the solutions $u$ and $\widetilde{u}$. In this section, we derive an integral equation for the effective source problem \eqref{eq:scattering_fictitious}.
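The matrix representations above make Proposition \ref{prop:effective} amenable to a direct numerical consistency check. The illustrative computation below (parameter values arbitrary; $k_b=k_w=\omega$ as in Assumption \ref{assumption1}) assembles the $2\times2$ blocks of $\mathcal{A}_D$, $\mathcal{A}_{D_d}$, $\mathcal{P}_1$ and $\mathcal{P}_2$ on a single Fourier mode and verifies that the first column of $\mathcal{A}_D^\varepsilon = (\mathcal{P}_2)^{-1}\mathcal{A}_{D_d}\mathcal{P}_1$ coincides with that of $\mathcal{A}_D$, so that the perturbation $\mathcal{A}_D^\varepsilon-\mathcal{A}_D$ acts only through the second column:

```python
# Consistency check of A_D^eps = P2^{-1} A_{Dd} P1 on the n-th Fourier mode,
# with k_b = k_w = omega (Assumption 1). Parameter values are arbitrary.
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

R, Rd, omega, delta, n = 1.0, 0.7, 0.05, 1e-3, 1

def block(r):
    """2x2 block of A_D for a disk of radius r, on the n-th Fourier mode."""
    z = omega * r
    return -1j * np.pi * r / 2 * np.array([
        [jv(n, z) * hankel1(n, z),          -jv(n, z) * hankel1(n, z)],
        [omega * jvp(n, z) * hankel1(n, z), -delta * omega * jv(n, z) * h1vp(n, z)]])

P1 = (R / Rd) * np.diag([hankel1(n, omega * R) / hankel1(n, omega * Rd),
                         jv(n, omega * R) / jv(n, omega * Rd)])
P2 = np.diag([jv(n, omega * Rd) / jv(n, omega * R),
              jvp(n, omega * Rd) / jvp(n, omega * R)])

A_eps = np.linalg.inv(P2) @ block(Rd) @ P1     # block of A_D^eps

# first columns agree; the perturbation sits entirely in the second column
assert np.allclose(A_eps[:, 0], block(R)[:, 0])
assert not np.allclose(A_eps[:, 1], block(R)[:, 1])
```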
This problem is already quasi-periodically reduced in the $x_1$-direction, with quasi-periodicity $\alpha_1$. For some quasi-periodicity $\alpha_2 \in Y^*$, we set $\alpha = (\alpha_1,\alpha_2)$ and apply the Floquet transform to the solution $u$ in the $x_2$-direction as follows: $$ u^\alpha(x) = \sum_{m\in \mathbb{Z}} u(x-(0,m)) e^{i\alpha_2 m}. $$ The transformed solution $u^\alpha$ satisfies \begin{equation*} \label{eq:scattering_quasi} \left\{ \begin{array} {ll} &\displaystyle \nabla \cdot \frac{1}{\rho_w} \nabla u^\alpha+ \frac{\omega^2}{\kappa_w} u^\alpha = 0 \quad \text{in} \quad Y^2 \setminus \overline{D}, \\ \noalign{\smallskip} &\displaystyle \nabla \cdot \frac{1}{\rho_b} \nabla u^\alpha+ \frac{\omega^2}{\kappa_b} u^\alpha = 0 \quad \text{in} \quad D, \\ \noalign{\smallskip} &\displaystyle u^\alpha|_{+} -u^\alpha|_{-} =f \quad \text{on} \quad \partial D, \\ \noalign{\smallskip} & \displaystyle \frac{1}{\rho_w} \frac{\partial u^\alpha}{\partial \nu} \bigg|_{+} - \frac{1}{\rho_b} \frac{\partial u^\alpha}{\partial \nu} \bigg|_{-} =g \quad \text{on} \quad \partial D, \\ \noalign{\smallskip} & \displaystyle e^{-i \alpha \cdot x} u^\alpha \text{ is periodic}. \end{array} \right.
\end{equation*} The solution $u^\alpha$ is $\alpha$-quasi-periodic in the two-dimensional cell $Y^2$, and can be represented using quasi-periodic layer potentials as \begin{align*} u^\alpha = \begin{cases} \mathcal{S}_D^{\alpha,k_w}[\psi^\alpha], &\quad \mbox{in } Y^2\setminus \overline{D}, \\ \mathcal{S}_D^{k_b}[\varphi^\alpha], &\quad \mbox{in } D, \end{cases} \end{align*} where, as in equation \eqnref{eq-boundary}, the pair $(\varphi^\alpha,\psi^\alpha)\in L^2(\partial D)^2$ is the solution to \begin{equation*}\label{eq:phipsialpha} \mathcal{A}^\alpha(\omega,\delta) \begin{pmatrix} \varphi^\alpha \\[0.3em] \psi^\alpha \end{pmatrix} := \begin{pmatrix} \mathcal{S}_D^{k_b} & -\mathcal{S}_D^{\alpha,k_w} \\[0.3em] -\frac{1}{2}+ \mathcal{K}_D^{k_b,*}& -\delta\left( \frac{1}{2}+ \left(\mathcal{K}_D^{ -\alpha,k_w}\right)^*\right) \end{pmatrix} \begin{pmatrix} \varphi^\alpha \\[0.3em] \psi^\alpha \end{pmatrix} = \begin{pmatrix} - f \\[0.3em] - g \end{pmatrix}. \end{equation*} Since the operator $\mathcal{A}^\alpha$ is invertible for small enough $\delta$ and for $\omega$ inside the bandgap \cite{bandgap}, we have $$ \begin{pmatrix} \varphi^\alpha \\[0.3em] \psi^\alpha \end{pmatrix} = \mathcal{A}^\alpha(\omega,\delta)^{-1}\begin{pmatrix} - f \\[0.3em] - g \end{pmatrix}. $$ The solution $u$ to problem \eqnref{eq:scattering_fictitious} can be recovered by the inversion formula \begin{equation*} u(x)=\frac{1}{2\pi}\int_{Y^*} u^{(\alpha_1,\alpha_2)}(x) \: \mathrm{d}\alpha_2. \end{equation*} Now, by the same arguments as those in \cite{defectSIAM,defectX}, we obtain the following proposition.
\begin{prop} \label{prop:floquet} The density pair $(\varphi,\psi)$ and the effective source pair $(f,g)$ satisfy \begin{equation}\label{eq:IntAa} \begin{pmatrix} \varphi \\[0.3em] \psi \end{pmatrix} =\left(\frac{1}{2\pi}\int_{Y^*} \mathcal{A}^{(\alpha_1,\alpha_2)}(\omega,\delta)^{-1} \: \mathrm{d}\alpha_2 \right)\begin{pmatrix} -f \\[0.3em] -g \end{pmatrix}, \end{equation} for small enough $\delta$ and for $\omega \notin \Lambda_{0,\alpha_1}$ inside the bandgap. \end{prop} \subsection{The integral equation for the layer densities} Here we state the integral equation for the layer density pair $(\varphi,\psi)$. The following result is an immediate consequence of Propositions \ref{prop:effective} and \ref{prop:floquet}. \begin{prop} \label{prop:fg} The layer density pair $(\varphi,\psi)\in L^2(\partial D)^2$ satisfies the following integral equation: \begin{align} \label{eq:Mdensity} \mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)\begin{pmatrix} \varphi \\ \psi \end{pmatrix}:=\bigg(I+\left(\frac{1}{2\pi}\int_{Y^*}\mathcal{A}^\alpha(\omega,\delta)^{-1} \: \mathrm{d}\alpha_2\right)(\mathcal{A}_{D}^\varepsilon(\omega,\delta) - \mathcal{A}_{D}(\omega,\delta))\bigg) \begin{pmatrix} \varphi \\[0.3em] \psi \end{pmatrix} = \begin{pmatrix} 0 \\[0.3em] 0 \end{pmatrix}, \end{align} for small enough $\delta$ and for $\omega \notin \Lambda_{0,\alpha_1}$ inside the bandgap. \end{prop} This integral equation resembles the one for a point defect found in \cite{defectSIAM,defectX}. The similarity is not obvious, however; it can be seen as a consequence of the cancellation of $H$ in Proposition \ref{prop:effective}. The significance of Proposition \ref{prop:fg} is as follows.
If we can show that there is a characteristic value $\omega = \omega^\varepsilon$ of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ inside the bandgap, \textit{i.e.}{} if there is a non-trivial pair $(\varphi,\psi)$ such that $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega^\varepsilon) \left( \begin{smallmatrix} \varphi \\ \psi\end{smallmatrix} \right) = 0$, then $\omega^\varepsilon$ is a resonance frequency for the defect mode. \section{Subwavelength guided modes in the defect} \label{sec-3} Here, we will prove the existence of a resonance frequency $\omega = \omega^\varepsilon(\alpha_1)$ inside the bandgap of the unperturbed crystal at $\alpha_1$. We will give an asymptotic formula for $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ in terms of $\delta$ in the dilute regime. Moreover, we will show that the defect band is not contained in the pure point spectrum of the defect operator and that, for perturbation sizes $|\varepsilon|$ larger than some critical $\varepsilon_0$, the entire defect band is located in the bandgap region of the original operator. \subsection{Asymptotic expansions for small $\delta$} In this section, we will asymptotically expand $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ in the limit as $\delta \rightarrow 0$ and with $\omega$ in the subwavelength regime, \textit{i.e.}{}, $\omega = O(\sqrt{\delta})$. Throughout this section we assume that $\alpha \neq (0,0)$. We begin by studying the operator $(\mathcal{A}^\alpha(\omega, \delta))^{-1}$. Define $\psi_{\alpha}$ as $$\psi_{\alpha} = \left(\mathcal{S}_D^{\alpha,0}\right)^{-1} [\chi_{\partial D}].$$ Since we know that $\big(\frac{1}{2}I + (\mathcal{K}_D^{-\alpha,0})^*\big)[\psi_\alpha] = \psi_\alpha$, we can decompose this operator as $$ \frac{1}{2}I + (\mathcal{K}_D^{-\alpha,0})^* = P_\alpha + Q_\alpha,$$ where $$P_\alpha = -\frac{\langle\chi_{\partial D}, \cdot\rangle}{{\mathrm{Cap}}_{D,\alpha}}\psi_\alpha$$ is a projection onto the span of $\psi_\alpha$.
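Note that $P_\alpha$ is indeed a projection. By the definitions of $\psi_\alpha$ and ${\mathrm{Cap}}_{D,\alpha}$, and since the capacitance is real, we have $\langle\chi_{\partial D},\psi_\alpha\rangle = -{\mathrm{Cap}}_{D,\alpha}$, whence $P_\alpha[\psi_\alpha]=\psi_\alpha$ and, for every $\phi\in L^2(\partial D)$, $$P_\alpha^2[\phi] = -\frac{\langle\chi_{\partial D},\phi\rangle}{{\mathrm{Cap}}_{D,\alpha}}\, P_\alpha[\psi_\alpha] = -\frac{\langle\chi_{\partial D},\phi\rangle}{{\mathrm{Cap}}_{D,\alpha}}\,\psi_\alpha = P_\alpha[\phi].$$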
Then it can be shown that $Q_\alpha[\psi_\alpha] = 0$ and $Q_\alpha^* [\chi_{\partial D}] = 0$, where $Q_\alpha^*$ is the adjoint of $Q_\alpha$. For small $\delta$ and for $\omega=O(\sqrt{\delta})$ inside the corresponding bandgap, the operator $\mathcal{A}^\alpha(\omega, \delta)$ can be decomposed as $$\mathcal{A}^\alpha(\omega, \delta) = \begin{pmatrix} \mathcal{S}_D^{\omega} & -\mathcal{S}_D^{\alpha,\omega} \\ -\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}& 0 \end{pmatrix} - \delta \begin{pmatrix} 0 & 0 \\ 0 & P_\alpha \end{pmatrix} - \delta \begin{pmatrix} 0 & 0 \\ 0 & Q_\alpha \end{pmatrix} + O(\delta^3).$$ Define the operators $$A_0 = \begin{pmatrix} \mathcal{S}_D^{\omega} & -\mathcal{S}_D^{\alpha,\omega} \\ -\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}& 0 \end{pmatrix}, $$ and $$A_1 = I - \delta A_0^{-1} \begin{pmatrix} 0 & 0 \\ 0 & P_\alpha \end{pmatrix}.$$ The motivation for defining these operators is given in Lemmas \ref{lem:invA0A1} and \ref{lem:invAa}: introducing them enables the explicit computation of $(\mathcal{A}^\alpha)^{-1}$. We will compute the asymptotic expansion of these operators for small $\omega$ and $\delta$.
\begin{lem} \label{lem:invA0A1} The following results hold for $A_0$ and $A_1$: \begin{itemize} \item[(i)] For $\omega \neq 0$, $A_0: L^2(\partial D)^2 \rightarrow L^2(\partial D)^2$ is invertible, and as $\omega \rightarrow 0$ and $\delta \rightarrow 0$, $$A_0^{-1} = \begin{pmatrix} 0 & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^3\omega^2\ln \omega} \chi_{\partial D} +O\left(\frac{1}{\omega\ln\omega}\right) \\ -\left(\mathcal{S}_D^{\alpha,0}\right)^{-1} + O(\omega^2) & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^2\omega^2}\psi_\alpha +O\left(\frac{1}{\omega}\right) \end{pmatrix}.$$ \item[(ii)] Writing $\omega^\alpha := \omega_1^\alpha$, for $\omega \neq \omega^\alpha$, $A_1: L^2(\partial D)^2 \rightarrow L^2(\partial D)^2$ is invertible, and as $\omega \rightarrow 0$ and $\delta \rightarrow 0$, $$A_1^{-1} = \begin{pmatrix} I & -\frac{(\omega^\alpha)^2}{\omega^2R\ln \omega} \frac{\left\langle \chi_{\partial D}, \left(P_\alpha^\perp\right)^{-1}[\cdot] \right\rangle}{{\mathrm{Cap}}_{D,\alpha}}\chi_{\partial D} + O\left(\frac{\omega}{\ln\omega}\right)\\ 0 & \left(P_\alpha^\perp\right)^{-1} +O(\omega) \end{pmatrix},$$ where $P_\alpha^\perp = I-\frac{(\omega^\alpha)^2}{\omega^2}P_\alpha$. \end{itemize} \end{lem} \noindent \textit{Proof of (i).} We easily find that \begin{equation}\label{eq:Alem1}A_0^{-1} = \begin{pmatrix} 0 & \left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)^{-1} \\ -\left(\mathcal{S}_D^{\alpha,\omega}\right)^{-1} & \left(\mathcal{S}_D^{\alpha,\omega}\right)^{-1} \mathcal{S}_D^{\omega} \left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)^{-1} \end{pmatrix},\end{equation} which is well-defined since $-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}: L^2(\partial D) \rightarrow L^2(\partial D)$ is invertible for $\omega\neq0$ \cite{MaCMiPaP}.
From the low-frequency expansion of $\mathcal{S}_D^{\alpha,\omega}$ \cite{MaCMiPaP}, and using the Neumann series, we have \begin{align} \label{eq:Alem2} \left(\mathcal{S}_D^{\alpha,\omega}\right)^{-1} &= \left(\mathcal{S}_D^{\alpha,0}+O(\omega^2)\right)^{-1} \nonumber \\ &=\left(\mathcal{S}_D^{\alpha,0}\right)^{-1} +O(\omega^2). \end{align} Using the Fourier basis, the operator $-\frac{1}{2}I+ \mathcal{K}_D^{\omega,*}$ can be represented as \cite{bandgap} $$\left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)_{V_{m}\rightarrow V_{n}} = \delta_{mn}\left(-\frac{1}{2} + \frac{-i\pi R \omega}{4}\left(H_n^{(1)}(\omega R)J_n'(\omega R) + (H_n^{(1)})'(\omega R)J_n(\omega R)\right) \right). $$ Using standard asymptotics, we can compute \begin{align*} \left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)_{V_{n}\rightarrow V_{n}} = \begin{cases} -\frac{R^2}{2}\omega^2\left(2\pi\eta_\omega + \ln R\right) + O(\omega^3\ln\omega) \quad &n=0 ,\\ -\frac{1}{2} + O(\omega) & n\neq0 .\end{cases} \end{align*} Hence the operator $\left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)^{-1}$ can be written as \begin{equation}\label{eq:Alem3}\left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)^{-1} = -\frac{1}{\pi R^3\omega^2\left(2\pi\eta_\omega + \ln R\right)}\langle \chi_{\partial D}, \cdot \rangle \chi_{\partial D} + O\left(\frac{1}{\omega\ln\omega}\right).\end{equation} Moreover, we have from \eqnref{eq:Sexpansion} that $\mathcal{S}_D^{\omega}[\chi_{\partial D}] = (2\pi R\eta_\omega+ R\ln R)\chi_{\partial D} + O(\omega^2\ln\omega)$, and so \begin{equation}\label{eq:Alem4}\left(\mathcal{S}_D^{\alpha,\omega}\right)^{-1} \mathcal{S}_D^{\omega} \left(-\frac{1}{2}I+ \mathcal{K}_D^{\omega, *}\right)^{-1} = -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^2\omega^2}\psi_\alpha +O\left(\frac{1}{\omega}\right).\end{equation} Combining equations \eqnref{eq:Alem1}, \eqnref{eq:Alem2}, \eqnref{eq:Alem3} and \eqnref{eq:Alem4} proves (i).
\qed \noindent \textit{Proof of (ii).} Using the definition of $A_1$, and the expression for $A_0$, we can compute $$A_1 = I - \delta \begin{pmatrix} 0 & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^3\omega^2\ln \omega} \chi_{\partial D} +O\left(\frac{1}{\omega\ln\omega}\right) \\ 0 & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^2\omega^2}\psi_\alpha +O\left(\frac{1}{\omega}\right) \end{pmatrix}.$$ Recall the asymptotic expression of $\omega^\alpha$ given in Theorem \ref{approx_thm}: \begin{equation}\label{eq:delta} \omega^\alpha = \sqrt{\frac{\delta{\mathrm{Cap}}_{D,\alpha}}{\pi R^2}} +O(\delta^{3/2}). \end{equation} We then find that $$A_1 = \begin{pmatrix} I & \frac{(\omega^\alpha)^2}{\omega^2R\ln \omega} \frac{\langle \chi_{\partial D}, \cdot \rangle}{{\mathrm{Cap}}_{D,\alpha}}\chi_{\partial D} +O\left(\frac{\omega}{\ln\omega}\right) \\ 0 & I-\frac{(\omega^\alpha)^2}{\omega^2}P_\alpha +O(\omega) \end{pmatrix}.$$ Define $P_\alpha^\perp = I-\frac{(\omega^\alpha)^2}{\omega^2}P_\alpha$. For $\omega$ small enough, $A_1$ is invertible precisely when $P_\alpha^\perp$ is invertible, \textit{i.e.}{} when $\omega\neq \omega^\alpha$. Moreover, we have $$A_1^{-1} = \begin{pmatrix} I & -\frac{(\omega^\alpha)^2}{\omega^2R\ln \omega} \frac{\left\langle \chi_{\partial D}, \left(P_\alpha^\perp\right)^{-1}[\cdot] \right\rangle}{{\mathrm{Cap}}_{D,\alpha}}\chi_{\partial D} + O\left(\frac{\omega}{\ln\omega}\right)\\ 0 & \left(P_\alpha^\perp\right)^{-1} +O(\omega) \end{pmatrix}.$$ This proves (ii). \qed \begin{lem} \label{lem:invAa} For $\omega \neq \omega^\alpha$, and as $\omega \rightarrow 0$ and $\delta \rightarrow 0$, we have \begin{align*} (\mathcal{A}^\alpha(\omega, \delta))^{-1} &= A_1^{-1}A_0^{-1}\big(I + O(\delta)\big). \end{align*} \end{lem} \begin{proof} We have already established the invertibility of $A_0$ and $A_1$.
Using this fact, we have \begin{align*} \mathcal{A}^\alpha(\omega, \delta) &= A_0 - \delta \begin{pmatrix} 0 & 0 \\ 0 & P_\alpha \end{pmatrix} - \delta \begin{pmatrix} 0 & 0 \\ 0 & Q_\alpha \end{pmatrix} + O(\delta^3) \\ &= A_0\left( I - \delta A_0^{-1}\begin{pmatrix} 0 & 0 \\ 0 & P_\alpha \end{pmatrix} - \delta A_0^{-1}\begin{pmatrix} 0 & 0 \\ 0 & Q_\alpha \end{pmatrix} + O(\delta^2) \right) \\ &= A_0A_1\left( I - \delta A_1^{-1}A_0^{-1}\begin{pmatrix} 0 & 0 \\ 0 & Q_\alpha \end{pmatrix}+ O(\delta^2)\right). \end{align*} Because $Q_\alpha^* \chi_{\partial D} = 0$, we have that $$\delta A_1^{-1}A_0^{-1}\begin{pmatrix} 0 & 0 \\ 0 & Q_\alpha \end{pmatrix} = O(\delta).$$ We then have that \begin{align*} (\mathcal{A}^\alpha(\omega, \delta))^{-1} &= A_1^{-1}A_0^{-1}\big(I + O(\delta)\big)^{-1} \\ &= A_1^{-1}A_0^{-1}\big(I + O(\delta)\big), \end{align*} where the last step follows using the Neumann series. \end{proof} Next, we compute the operator $(\mathcal{A}_D^\varepsilon - \mathcal{A}_D)$. Using Proposition \ref{prop:effective} and equation (\ref{eq:ADd}), we have \begin{equation*} (\mathcal{A}_{D}^\varepsilon)_{V_{mn}\rightarrow V_{m'n'}} = \delta_{mn}\delta_{m'n'} \frac{(-i)\pi R}{2} \begin{pmatrix} J_n(\omega R)H_n^{(1)}(\omega R) & - J_n(\omega R)H_n^{(1)}(\omega R_d) \frac{J_n(\omega R) }{J_n(\omega R_d)}\\\omega J_n'(\omega R) H_n^{(1)}(\omega R) & - \delta \omega J_n(\omega R) \big(H_n^{(1)}\big)'(\omega R_d) \frac{J_n'(\omega R)}{J_n'(\omega R_d) }\end{pmatrix}.
\end{equation*} Consequently, the operator $(\mathcal{A}_D^\varepsilon - \mathcal{A}_D)$ is given by \begin{align*} (\mathcal{A}_D^\varepsilon - \mathcal{A}_D)_{V_{mn}\rightarrow V_{m'n'}} =& \delta_{mn}\delta_{m'n'} \frac{(-i)\pi RJ_n(\omega R)}{2} \begin{pmatrix} 0 & H_n^{(1)}(\omega R) - \frac{J_n(\omega R)H_n^{(1)}(\omega R_d) }{J_n(\omega R_d)}\\0 & \delta \omega \left(\big(H_n^{(1)}\big)'(\omega R) - \frac{J_n'(\omega R)\big(H_n^{(1)}\big)'(\omega R_d)}{J_n'(\omega R_d)} \right)\end{pmatrix}. \end{align*} Introduce the notation \begin{equation} \label{eq:Apert} \mathcal{A}_D^\varepsilon - \mathcal{A}_D := \begin{pmatrix} 0 & E_1^\varepsilon \\0 & E_2^\varepsilon\end{pmatrix}. \end{equation} Using asymptotic expansions of the Bessel function $J_n(z)$ and the Hankel function $H_n^{(1)}(z)$, for small $z$, straightforward computations show that \begin{align*} (E_1^\varepsilon)_{V_{m}\rightarrow V_{n}} &= \delta_{m,n}\frac{(-i)\pi R}{2}\frac{J_n(\omega R)}{J_n(\omega R_d)}\left(H_n^{(1)}(\omega R){J_n(\omega R_d)} - J_n(\omega R)H_n^{(1)}(\omega R_d) \right), \\ &= \begin{cases} \displaystyle \delta_{m,n}\left(R\ln\frac{R}{R_d} + O(\omega\ln\omega)\right), \qquad &n = 0 ,\\[0.5em] \displaystyle \delta_{m,n} \left(-\frac{R}{2|n|}\left(1-\frac{R^{2|n|}}{R_d^{2|n|}}\right) + O(\omega)\right), &n\neq 0.\end{cases} \end{align*} Moreover, we have \begin{align} \label{eq:E2} (E_2^\varepsilon)_{V_{m}\rightarrow V_{n}} &= \delta_{m,n}\frac{(-i)\pi RJ_n(\omega R)}{2} \delta \omega \left(\big(H_n^{(1)}\big)'(\omega R) - \frac{J_n'(\omega R)\big(H_n^{(1)}\big)'(\omega R_d)}{J_n'(\omega R_d)} \right), \\ \nonumber &= \begin{cases} \displaystyle \delta_{m,n}\left( \delta \left(1-\frac{R^2}{R_d^2} \right) + O(\delta\omega^2\ln\omega)\right), \qquad &n = 0, \\[0.5em] \displaystyle \delta_{m,n}O(\delta), &n\neq 0.\end{cases} \end{align} We are now ready to compute the full operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$.
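The leading-order entries of $E_1^\varepsilon$ above can be checked against a direct evaluation of the Bessel and Hankel functions. The following sketch is not part of the derivation; the values of $R$, $R_d$ and $\omega$ are illustrative, and scipy is assumed to be available:

```python
# Check the small-omega asymptotics of the diagonal entries of E_1^eps.
import numpy as np
from scipy.special import jv, hankel1

R, Rd, omega = 0.05, 0.04, 1e-4  # illustrative: R_d = R + eps with eps < 0

def E1(n):
    # exact diagonal entry of E_1^eps in the Fourier basis
    return (-1j * np.pi * R / 2) * (jv(n, omega * R) / jv(n, omega * Rd)) * (
        hankel1(n, omega * R) * jv(n, omega * Rd)
        - jv(n, omega * R) * hankel1(n, omega * Rd)
    )

# leading-order terms derived in the text
lead0 = R * np.log(R / Rd)              # n = 0
lead1 = -(R / 2) * (1 - (R / Rd) ** 2)  # n = 1

print(E1(0).real, lead0)
print(E1(1).real, lead1)
```

Since $H_n^{(1)} = J_n + iY_n$, the combination $H_n^{(1)}(\omega R)J_n(\omega R_d) - J_n(\omega R)H_n^{(1)}(\omega R_d)$ is purely imaginary, so $E_1^\varepsilon$ is real up to rounding, as the leading-order formulas predict.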
\begin{prop} The operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$ has the form \begin{equation} \label{eq:Mmatrix} \mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega) = \begin{pmatrix} I & M_1(\omega) \\ 0 & I + M_0(\omega) \end{pmatrix}, \end{equation} where the operators $M_0(\omega), M_1(\omega): L^2(\partial D) \rightarrow L^2(\partial D)$ depend on $\varepsilon,\delta,\alpha_1$. Moreover, as $\omega\rightarrow 0$, $\delta\rightarrow 0$ and $\omega \notin \Lambda_{0,\alpha_1}$, we have $$M_0(\omega) = -\frac{1}{2\pi}\int_{Y^*} \left( \left(P_\alpha^\perp\right)^{-1}(\mathcal{S}_{D}^{\alpha,0})^{-1}E_1^\varepsilon + \delta \left(1-\frac{R^2}{R_d^2} \right)\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^2\left(\omega^2-(\omega^\alpha)^2\right)}\psi_\alpha \right) \: \mathrm{d} \alpha_2 +O(\omega).$$ \end{prop} \begin{proof} The expression of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ given in \eqnref{eq:Mmatrix} follows from equations \eqnref{eq:Mdensity} and \eqnref{eq:Apert}.
Combining Lemmas \ref{lem:invA0A1} and \ref{lem:invAa}, we find that $$(\mathcal{A}^\alpha(\omega, \delta))^{-1} = \begin{pmatrix} \frac{(\omega^\alpha)^2}{\omega^2R\ln \omega} \frac{\left\langle \chi_{\partial D}, \left(P_\alpha^\perp\right)^{-1}\left(\mathcal{S}_D^{\alpha,0}\right)^{-1}[\cdot] \right\rangle}{{\mathrm{Cap}}_{D,\alpha}}\chi_{\partial D} + O\left(\frac{\omega}{\ln\omega}\right) & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^3\left(\omega^2-(\omega^\alpha)^2\right)\ln \omega} \chi_{\partial D} +O\left(\frac{1}{\omega\ln\omega}\right) \\ -\left(P_\alpha^\perp\right)^{-1}\left(\mathcal{S}_D^{\alpha,0}\right)^{-1} +O(\omega) & -\frac{\langle \chi_{\partial D}, \cdot \rangle}{\pi R^2\left(\omega^2-(\omega^\alpha)^2\right)}\psi_\alpha +O\left(\frac{1}{\omega}\right) \end{pmatrix}.$$ Combining this with equations \eqnref{eq:Mdensity}, \eqnref{eq:Apert} and \eqnref{eq:E2} yields the desired expression for $M_0(\omega)$. \end{proof} \begin{rmk} It is clear that $\omega=\omega^\varepsilon$ is a characteristic value of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ if and only if $\omega^\varepsilon$ is a characteristic value for $I + M_0$. We have thus reduced the characteristic value problem for the two-dimensional matrix operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ to the scalar operator $I+M_0$. \end{rmk} \subsection{Defect resonance frequency in the dilute regime} The following theorem is the main result of this paper. Again, we say a frequency $\omega$ is in the subwavelength regime if $\omega = O(\sqrt{\delta})$. \begin{thm} \label{thm:dilute} For $\delta$ and $R$ small enough, there is a unique characteristic value $\omega^\varepsilon(\alpha_1)$ of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$ such that $\omega^\varepsilon(\alpha_1)\notin \Lambda_{0,\alpha_1}$ and $\omega^\varepsilon(\alpha_1)$ is in the subwavelength regime.
Moreover, $$\omega^\varepsilon(\alpha_1) = \hat\omega+O\left(R^2 + \delta\right),$$ where $\hat\omega$ is the root of the following equation: \begin{equation} \label{eq:dilute} 1 + \left(\frac{\hat\omega^2R^2}{2\delta}\ln\frac{R}{R_d} + \left(1-\frac{R^2}{R_d^2} \right)\right)\frac{1}{2\pi}\int_{Y^*} \frac{(\omega^\alpha)^2}{\hat\omega^2-(\omega^\alpha)^2}\: \mathrm{d} \alpha_2 = 0.\end{equation} \end{thm} \begin{proof} We seek the characteristic values of the operator $I+M_0$. We consider the dilute regime, \textit{i.e.}{} where $R$ is small. As shown in \cite{bandgap}, in this case we have \begin{align}\label{eq:Sdilute} \mathcal{S}_D^{\alpha,0}[\phi] &= \mathcal{S}_D^0[\phi] + R \mathcal{R}_\alpha(0) \int_{\partial D}\phi \: \mathrm{d} \sigma + O(R^2\|\phi\|), \end{align} where $\mathcal{R}_\alpha(x) = \Gamma^{\alpha,0}(x) - \Gamma^0(x)$. In particular, $$\psi_\alpha = \left(\mathcal{S}_{D}^{\alpha,0}\right)^{-1}[\chi_{\partial D}] = -\frac{{\mathrm{Cap}}_{D,\alpha}}{2\pi R}\chi_{\partial D} + O\left(\frac{R^2}{\ln R}\right).$$ We will compute $M_0$ in the Fourier basis. It is known that \cite{MaCMiPaP} $$(\mathcal{S}_D^0)_{V_m\rightarrow V_n} = -\delta_{m,n}\frac{R}{2|n|}, \quad m\neq 0,$$ which gives $$\left((\mathcal{S}_{D}^{\alpha,0})^{-1}\right)_{V_m\rightarrow V_n} = -\delta_{m,n}\frac{2|n|}{R} + O(R), \quad m\neq 0.$$ Moreover, $$\left( \left(P_\alpha^\perp\right)^{-1}\right)_{V_m\rightarrow V_n} = \begin{cases} \delta_{m,n}\frac{\omega^2}{\omega^2-(\omega^\alpha)^2}+ O\left(\frac{R^2}{\ln R}\right), \quad & n= 0, \\ \delta_{m,n}, & n\neq 0.
\end{cases}$$ In total, we have on the subspace $V_0$, $$(I+M_0)_{V_0\rightarrow V_0} = 1 + \left(\frac{\omega^2R^2}{2\delta}\ln\frac{R}{R_d} + \left(1-\frac{R^2}{R_d^2} \right)\right)\frac{1}{2\pi}\int_{Y^*} \frac{(\omega^\alpha)^2}{\omega^2-(\omega^\alpha)^2}\: \mathrm{d} \alpha_2 +O\left(\frac{R^2}{\ln R} + \omega\right).$$ Moreover, if $n\neq 0$, then $$(I+M_0)_{V_m\rightarrow V_n} = \delta_{m,n}\left(\frac{R^{2|n|}}{R_d^{2|n|}}\right) +O\left(R^2 + \omega\right). $$ In summary, the operator $I+M_0$ can be written as $$ I+M_0(\omega) = \hat M(\omega) + O\left(R^2 + \omega\right),$$ where the limiting operator $\hat M(\omega)$ is a diagonal operator in the Fourier basis, with non-zero diagonal entries for $n\neq 0$. We conclude that $\omega = \hat\omega$ is a characteristic value for $\hat M(\omega)$ if and only if one of the diagonal entries vanishes at $\omega = \hat\omega$, \textit{i.e.}{} if \begin{equation}\label{eq:dilutepf} 1 + \left(\frac{\hat\omega^2R^2}{2\delta}\ln\frac{R}{R_d} + \left(1-\frac{R^2}{R_d^2} \right)\right)\frac{1}{2\pi}\int_{Y^*} \frac{(\omega^\alpha)^2}{\hat\omega^2-(\omega^\alpha)^2}\: \mathrm{d} \alpha_2 = 0. \end{equation} Next, we show that equation \eqnref{eq:dilutepf} has a zero $\hat{\omega} \notin \Lambda_{0,\alpha_1}$ satisfying $\hat{\omega} = O(\sqrt{\delta})$. Introduce the notation $$I(\omega,\alpha_1) = \frac{1}{2\pi}\int_{Y^*} \frac{(\omega^\alpha)^2}{\omega^2-(\omega^\alpha)^2}\: \mathrm{d} \alpha_2,$$ then equation \eqnref{eq:dilutepf} implies \begin{equation} \label{eq:Iinv} \hat\omega^2\left(\frac{R^2}{2\delta}\ln\frac{R}{R_d}\right) + \left(1-\frac{R^2}{R_d^2} \right) + \frac{1}{I(\hat{\omega},\alpha_1)} = 0. \end{equation} For a fixed $\alpha_1\in Y^*$, define $\omega^* = \omega^{(\alpha_1,\pi)},$ which is the edge of the first band in $\Lambda_{0,\alpha_1}$.
Observe that ${1}/{I(\omega,\alpha_1)}$ is monotonically increasing in $\omega$, and $$\lim_{\omega\rightarrow \omega^*}\frac{1}{I(\omega,\alpha_1)} = 0, \qquad \frac{1}{I(\omega,\alpha_1)} \rightarrow \frac{\omega^2}{\omega_0^2} \ \ \text{as} \ \ \omega\rightarrow \infty,$$ where $\omega_0^2$ is the average $$\omega_0^2 = \frac{1}{2\pi}\int_{Y^*} (\omega^\alpha)^2\: \mathrm{d} \alpha_2.$$ In the dilute regime, we can compute $$(\omega^*)^2 = -\frac{2\delta}{R^2\ln R} + O\left(\delta^2 + \frac{1}{R}\right),$$ so as $\omega \rightarrow \omega^*$, the left-hand side of equation \eqnref{eq:Iinv} tends to $$\frac{\ln R_d}{\ln R} - \frac{R^2}{R_d^2} + O\left(\delta + R\right).$$ Since $R_d < R$, the leading-order term is negative. On the other hand, as $\omega \rightarrow \infty$, the left-hand side of \eqnref{eq:Iinv} tends to $\infty$. Since the left-hand side of equation \eqnref{eq:Iinv} is monotonically increasing in $\omega$, this equation has a unique zero $\omega = \hat\omega$. It can be verified that this zero has multiplicity one. Moreover, $\hat{\omega}$ satisfies $\hat\omega = O(\sqrt{\delta})$. Now, we turn to the full operator $I+M_0(\omega)$. Since $I+M_0(\omega) = \hat M(\omega) + O\left(R^2 + \omega\right),$ by the Gohberg-Sigal theory \cite{MaCMiPaP, AKL, Gohberg1971}, close to $\hat\omega$ there is a unique characteristic value $\omega^\varepsilon$ of the operator $I+M_0(\omega)$, satisfying $$\omega^\varepsilon = \hat\omega+O\left(R^2 + \delta\right).$$ This concludes the proof. \end{proof} \begin{rmk} In the case of $R_d > R$, \textit{i.e.}{} larger defect bubbles, similar arguments show that any subwavelength frequency $\omega^\varepsilon(\alpha_1) \in \Lambda_{d,\alpha_1} \setminus \Lambda_{0,\alpha_1}$ must satisfy equation \eqnref{eq:dilute} in the dilute regime. However, it is easily verified that this equation has no solutions $\hat\omega > \omega^*$ when $R_d > R$.
The conclusion is that we must reduce the size of the defect bubbles in order to create subwavelength guided modes in the dilute regime. \end{rmk} \subsubsection{Absence of bound modes in the line defect direction} In this section, we will show that the defect band is not contained in the pure point spectrum of the defect crystal. \begin{lem} \label{lem:palpha1} For $(\alpha_1,\alpha_2) \in Y^*\times Y^*$, $\alpha_2 \neq 0$, the partial derivative of the quasi-periodic Green's function $$\frac{\partial}{\partial \alpha_1} \Gamma^{\alpha,0}(0)$$ is zero precisely when $\alpha_1 = 0$ or $\alpha_1 = \pi$. \end{lem} \begin{proof} From the spectral form of the Green's function \cite{MaCMiPaP}: $$\Gamma^{\alpha,0}(x) = -\sum_{m\in \mathbb{Z}^2} \frac{e^{i(\alpha+2\pi m)\cdot x}}{|\alpha+2\pi m|^2},$$ it can be easily shown that $$\nabla_\alpha \Gamma^{\alpha,0}(0) = 2\sum_{m\in \mathbb{Z}^2}\frac{\alpha + 2\pi m}{|\alpha + 2\pi m|^4}.$$ By symmetry of the summation, we find that $$\frac{\partial}{\partial \alpha_1} \Gamma^{\alpha,0}(0) = 0$$ if and only if $\alpha_1 = 0$ or $\alpha_1=\pi$. \end{proof} \begin{prop} \label{prop:palpha1} For $\delta$ and $R$ small enough, and for $\alpha_1 \neq 0,\pi$, the characteristic value $\omega^\varepsilon = \omega^\varepsilon(\alpha_1)$ satisfies $$\frac{\partial \omega^\varepsilon}{\partial \alpha_1} \neq 0.$$ \end{prop} \begin{proof} To simplify the computations, we introduce the following notation: \begin{alignat*}{3} &a = \frac{R^2}{4\pi\delta}\ln\frac{R}{R_d}, \quad \qquad &&b = \frac{1}{2\pi} \left(1-\frac{R^2}{R_d^2} \right), \\ &x = x(\alpha_1) = \hat{\omega}^2, &&y = y(\alpha_1,\alpha_2) = (\omega^\alpha)^2.
\end{alignat*} Then equation \eqnref{eq:dilute} reads $$\left(ax + b\right)\int_{Y^*} \frac{y}{x-y}\: \mathrm{d} \alpha_2 = -1.$$ Denote by $x' = \frac{\partial x}{\partial \alpha_1}$ and $y' = \frac{\partial y}{\partial \alpha_1}$, then we have $$ax' \int_{Y^*} \frac{y}{x-y}\: \mathrm{d} \alpha_2 - (ax+b)\int_{Y^*} \frac{x'y - xy'}{(x-y)^2}\: \mathrm{d}\alpha_2 = 0,$$ or equivalently, $$x'A + B = 0,$$ where $$A = a \int_{Y^*} \frac{y}{x-y}\: \mathrm{d} \alpha_2 -(ax+b)\int_{Y^*} \frac{y}{(x-y)^2}\: \mathrm{d}\alpha_2,$$ and $$B = (ax+b)x \int_{Y^*} \frac{y'}{(x-y)^2}\: \mathrm{d}\alpha_2.$$ First, we show that $A\neq 0$, which implies that the zeros of $x'$ coincide with the zeros of $B$. We have \begin{align*} A &= a \int_{Y^*} \frac{y}{x-y}\: \mathrm{d} \alpha_2 -(ax+b)\int_{Y^*} \frac{y}{(x-y)^2}\: \mathrm{d}\alpha_2 \\ &= \int_{Y^*} \frac{ay(x-y)-(ax+b)y}{(x-y)^2}\: \mathrm{d}\alpha_2, \\ &= -\int_{Y^*} \frac{y(ay+b)}{(x-y)^2}\: \mathrm{d}\alpha_2 < 0, \end{align*} since $y(ay+b) > 0$ for all $(\alpha_1,\alpha_2) \in Y^*\times Y^*$. Next, we show that the leading order of $B$ vanishes exactly at the points $\alpha_1 = 0$ and $\alpha_1 = \pi$. Using equations \eqnref{eq:delta} and \eqnref{eq:Sdilute}, we have \begin{align*} y' &= \frac{\partial}{\partial \alpha_1} (\omega^\alpha)^2 \\ &= \frac{\partial}{\partial \alpha_1} \left(\frac{-2\delta}{R^2\ln R + 2\pi R^3\mathcal{R}_\alpha(0)}\right) + O\left(\frac{R^3}{\ln R}+ \delta ^2\right)\\ &=\frac{4\pi R^3\delta}{\left(R^2\ln R + 2\pi R^3\mathcal{R}_\alpha(0)\right)^2}\frac{\partial}{\partial \alpha_1}\mathcal{R}_\alpha(0) + O\left(\frac{R^3}{\ln R}+\delta^2\right). \end{align*} Since $\mathcal{R}_\alpha = \Gamma^{\alpha,0} - \Gamma^0$, using Lemma \ref{lem:palpha1} we conclude that for $\delta$ and $R$ small enough and $\alpha_1 \neq 0,\pi$, $y'$ is non-zero for any $\alpha_2$. Hence $B$ is non-zero, which concludes the proof.
\end{proof} Proposition \ref{prop:palpha1} shows that the defect dispersion relation is not flat, apart from the local extrema at $\alpha_1 = 0$ and $\alpha_1 = \pi$. Thus, the defect band is not in the pure point spectrum of the defect operator, and the corresponding Bloch modes are not localized in the line defect direction. \subsubsection{Bandgap located defect bands} In this section, we will demonstrate that it is possible to position the entire defect band in the bandgap region with a suitable choice of $\varepsilon$. Recall that $\varepsilon = R_d - R$. As before, let $$I(\omega,\alpha_1) = \frac{1}{2\pi}\int_{Y^*} \frac{(\omega^\alpha)^2}{\omega^2-(\omega^\alpha)^2}\: \mathrm{d} \alpha_2.$$ \begin{lem} \label{lem:min} For a fixed $\omega \notin \Lambda_0$, the minimum $$\min_{\alpha_1 \in Y^*} I(\omega,\alpha_1)$$ is attained at $\alpha_1 = \alpha_0$ with $\alpha_0\rightarrow 0$ as $\delta\rightarrow 0$. \end{lem} \begin{proof} We begin by observing that the minima of $I(\omega,\alpha_1)$ and $\omega^{(\alpha_1,\alpha_2)}$ are attained at the same point $\alpha_1 = \alpha_0\in Y^*$. Using Lemma \ref{lem:palpha1}, for every fixed $\alpha_2 \neq 0$ the minimum of ${\mathrm{Cap}}_{D,\alpha}$ is attained at $\alpha_1 = 0$, so by Theorem \ref{approx_thm} the minimum of $\omega^{(\alpha_1,\alpha_2)}$ is attained at $\alpha_1 = \alpha_0$ with $\alpha_0 \rightarrow 0$ as $\delta \rightarrow 0$. Since $\omega^{(0,0)} = 0$ (see \cite{bandgap}) this is true for all $\alpha_2 \in Y^*$.
\end{proof} \begin{prop} For $\delta$ small enough, there exists an $\varepsilon$ such that for all $\alpha_1 \in Y^*$ we have $$\omega^\varepsilon(\alpha_1) \notin \Lambda_0.$$ \end{prop} \begin{proof} We want to show that $$\min_{\alpha_1 \in Y^*} \omega^\varepsilon(\alpha_1) > \max_{\alpha\in Y^*\times Y^*} \omega^{\alpha}.$$ Using Lemma \ref{lem:min}, it is easy to see that $$\min_{\alpha_1 \in Y^*} \omega^\varepsilon(\alpha_1)$$ is attained at $\alpha_1 = \alpha_0$. Moreover, from \cite{highfrequency} we know that $$\max_{\alpha\in Y^*\times Y^*} \omega^{\alpha}$$ is attained at $\alpha=\alpha^*=(\pi,\pi)$. Using Theorem \ref{thm:dilute}, we conclude that the lower edge of the defect band coincides with the upper edge of the unperturbed band if $$ \left(\frac{R^2}{R_d^2} - \frac{\ln R_d}{\ln R} \right) = \frac{1}{I(\omega^*,\alpha_0)} + O\left(\sqrt{\delta} + R\right). $$ For small enough $R$ and $\delta$, the right-hand side is positive, while the left-hand side ranges from $0$ to $+\infty$ for $\varepsilon \in (-R,0)$. Hence we can find a solution $\varepsilon_0$ to this equation, and the statement holds for $\varepsilon < \varepsilon_0$. \end{proof} \begin{rmk} In practice, for $\delta$ small enough, we can approximate $\varepsilon_0$ as the root of the equation \begin{equation}\label{eq:eps0} \left(\frac{R^2}{R_d^2} - \frac{\ln R_d}{\ln R} \right) = \frac{1}{I(\omega^*,0)}. \end{equation} \end{rmk} \section{Numerical illustrations} \label{sec:num} \subsection{Implementation} \subsubsection{Discretization of the operator} The operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$ was approximated as a matrix $M(\omega)$ using the truncated Fourier basis $e^{-iN\theta}$, $e^{-i(N-1)\theta}, \ldots, e^{iN\theta}$. We refer to \cite{defectSIAM,defectX} for the details of the discretization. The integral over $Y^*$ in \eqnref{eq:Mdensity} was approximated using the trapezoidal rule with $100$ discretization points.
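The integrands appearing in the $Y^*$-integrals are $2\pi$-periodic in $\alpha_2$, so the trapezoidal rule converges spectrally fast and $100$ points are ample. A minimal sketch of this quadrature step, with a model periodic integrand in place of the actual operator kernel:

```python
import math

def brillouin_average(g, n=100):
    # Trapezoidal rule for (1/2pi) * int_0^{2pi} g(a) da.
    # For a 2pi-periodic g the two endpoint half-weights coincide,
    # so the rule reduces to a plain average over a uniform grid.
    h = 2 * math.pi / n
    return sum(g(k * h) for k in range(n)) * h / (2 * math.pi)

# model integrand with a known average: (1/2pi) int 1/(2+cos a) da = 1/sqrt(3)
approx = brillouin_average(lambda a: 1.0 / (2.0 + math.cos(a)))
print(approx, 1 / math.sqrt(3))
```

For analytic periodic integrands the error of this rule decays exponentially in the number of points, which is why such a coarse grid suffices in practice.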
The characteristic value problem for $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$ was formulated as the root-finding problem $\det M(\omega) = 0$ and solved using Muller's method \cite{MaCMiPaP}. \subsubsection{Evaluation of the asymptotic formula} The integral over $Y^*$ in equation \eqnref{eq:dilute} was approximated using the trapezoidal rule with $100$ discretization points. Again, the equation was numerically solved using Muller's method. \subsection{Dilute regime} Figure \ref{fig:bandgap_dilute} shows the unperturbed band structure and the defect band for $\alpha_1$ over the Brillouin zone $[0,2\pi]$. The material parameters were chosen as $\kappa_b = \rho_b = 1$, $\kappa_w = \rho_w = 5000$, $R=0.05$ and $\varepsilon = -0.2R$. It can be seen that the entire defect band is located inside the deep subwavelength regime of the bandgap. Moreover, the defect frequencies computed using the asymptotic formula agree well with the values computed by discretizing the operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$. Also, we see that the defect band is not flat. In summary, these results show that the defect crystal supports guided modes in the subwavelength regime, localized to the line defect. \begin{figure*} \caption{(Dilute case) First two bands of the unperturbed crystal (left) and magnification of the first band and the defect mode (right). The defect band is computed using the asymptotic formula \eqnref{eq:dilute}.\label{fig:bandgap_dilute}} \end{figure*} \subsubsection{Computation of $\varepsilon_0$} In this section, we numerically compute the critical perturbation size, for which the entire defect band is located in the bandgap. The critical perturbation size was computed in two ways: by solving equation \eqnref{eq:eps0} for the leading order term, and by solving the root-finding problem $\omega^{\varepsilon_0}(0) = \omega^*$ where $\omega^\varepsilon$ was computed by discretizing the operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$.
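Muller's method is convenient for these characteristic-value problems because it requires no derivative of $\det M(\omega)$ and, unlike the secant method, can track roots into the complex plane. A minimal pure-Python sketch; the quadratic below is merely a stand-in for $\det M(\omega)$, which in practice comes from the discretized operator:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxit=100):
    # Muller's method: interpolate f by a parabola through the last
    # three iterates and step to the nearer root of that parabola.
    for _ in range(maxit):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        # choose the denominator of larger magnitude for stability
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        dx = -2 * c / denom
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            break
    return x2

# model characteristic equation with a known root at omega = sqrt(2)
root = muller(lambda w: w * w - 2.0, 1.0, 1.5, 2.0)
print(root)
```

Since the interpolating parabola is exact for a quadratic, the sketch converges in essentially one step here; for a general $\det M(\omega)$ the convergence order is about $1.84$.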
Figure \ref{fig:eps0} shows $\varepsilon_0$ for different $R$ in the dilute regime. The material parameters were chosen as $\kappa_b = \rho_b = 1$ and $\kappa_w = \rho_w = 10000$. The values obtained from the asymptotic formula and by discretizing the operator agree, with a smaller radius $R$ giving a smaller error. Quantitatively, for $R$ in this regime, we require a decrease of the bubble size by around $14\%$ to $26\%$ in order that the defect band be located inside the bandgap. \begin{figure*} \caption{Critical defect size $\varepsilon_0$, \textit{i.e.}{} the perturbation size for which the entire defect band lies inside the bandgap.\label{fig:eps0}} \end{figure*} \subsection{Non-dilute regime} Here we compute the defect band in the non-dilute regime, in both cases $\varepsilon < 0$ and $\varepsilon > 0$, corresponding to smaller and larger defect bubbles, respectively. Theorem \ref{thm:first} in Appendix \ref{app:nondil} shows that there is a defect frequency $\omega^\varepsilon$ in the bandgap for small $\varepsilon > 0$ but not for small $\varepsilon < 0$. \subsubsection{Larger defect bubbles} Figure \ref{fig:bandgap_ndilute} shows the band structure in the non-dilute case with $\varepsilon > 0$. The material parameters were chosen as $\kappa_b = \rho_b = 1$, $\kappa_w = \rho_w = 5000$, $R=0.4$ and $\varepsilon = 0.45R$. As expected from Theorem \ref{thm:first}, there is a defect band above the first band of the unperturbed crystal. Moreover, it is possible to position the entire band inside the bandgap. \begin{figure*} \caption{(Non-dilute case) First two bands of the unperturbed crystal (left) and magnification of the first band and the defect mode (right). The defect band was computed by discretizing the operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$.\label{fig:bandgap_ndilute}} \end{figure*} \begin{figure*} \caption{ (Non-dilute case) First two bands of the unperturbed crystal (left) and magnification of the first band and the defect mode (right).
The defect band was computed by discretizing the operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$.\label{fig:bandgap_ndilute2}} \end{figure*} \subsubsection{Smaller defect bubbles} Figure \ref{fig:bandgap_ndilute2} shows the band structure in the non-dilute case with $\varepsilon = -0.6R$. The material parameters were chosen as $\kappa_b = \rho_b = 1$, $\kappa_w = \rho_w = 5000$ and $R=0.4$. In this case a defect band is present inside the bandgap. Note that $\varepsilon$ is quite large in magnitude here, outside the small-$\varepsilon$ regime in which Theorem \ref{thm:first} applies. \section{Concluding remarks} \label{sec-5} In this paper, we have for the first time proved the possibility of creating subwavelength guided waves localized to a line defect in a bubbly phononic crystal. We have shown that introducing a defect line, by shrinking the bubbles along the line, creates a defect frequency band inside the bandgap of the original crystal. An arbitrarily small perturbation will create a non-zero overlap between the defect band and the bandgap, and we have explicitly quantified the defect size required to position the entire defect band inside the bandgap. Moreover, we have shown for the first time that the defect band is not contained in the pure point spectrum of the perturbed operator. This shows that we can create truly guided modes, which are not localized in the direction of the defect. In the future, we plan to study more sophisticated waveguides, with bends and junctions. Moreover, we also plan to study waveguides in phononic subwavelength bandgap crystals with non-trivial topology, rigorously proving the existence of topologically protected subwavelength states in bubbly crystals. \appendix \section{The resonance frequency of the defect mode for small perturbations} \label{app:nondil} Here we derive a formula for the resonance frequency of the defect mode in the case of small $\varepsilon$, following the approach of \cite{defectSIAM,defectX}.
The strength of this approach is that it is valid in both the dilute and non-dilute regimes. We begin by reformulating the integral equation \eqnref{eq:Mdensity} in terms of the effective sources $(f, g)$ instead of the layer densities $(\phi,\psi)$. The following proposition is a restatement of Proposition \ref{prop:fg}. \begin{prop}\label{prop:fg'} The effective source pair $(f,g)\in L^2(\partial D)^2$ satisfies the following integral equation: \begin{align}\label{eq:fg'} \mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)\begin{pmatrix} f \\ g \end{pmatrix}:=\bigg(I+(\mathcal{A}_{D}^\varepsilon(\omega,\delta) - \mathcal{A}_{D}(\omega,\delta))\frac{1}{2\pi}\int_{Y^*}\mathcal{A}^{(\alpha_1,\alpha_2)}(\omega,\delta)^{-1} \: \mathrm{d} \alpha_2\bigg) \begin{pmatrix} f \\[0.3em] g \end{pmatrix} = \begin{pmatrix} 0 \\[0.3em] 0 \end{pmatrix}, \end{align} for small enough $\delta$ and for $\omega \notin \Lambda_{0,\alpha_1}$ inside the bandgap. \end{prop} In this section, we derive an expression for the characteristic value $\omega^\varepsilon$ of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$ located slightly above the first band, valid in both the dilute and non-dilute regimes. Let us first analyse the operator $\int_{Y^*} (\mathcal{A}^{\alpha})^{-1}\: \mathrm{d}\alpha_2$. Since $\omega^\alpha$ is a simple pole of the mapping $\omega \mapsto \mathcal{A}^\alpha(\omega)^{-1}$, in a neighbourhood of $\omega^\alpha$ we can write \cite{MaCMiPaP} \begin{equation} \label{eq:polepencil} \mathcal{A}^\alpha(\omega)^{-1} = \frac{\mathcal{L}^\alpha}{\omega- \omega^\alpha} + \mathcal{R}^\alpha(\omega), \end{equation} where the operator-valued function $\mathcal{R}^\alpha(\omega)$ is holomorphic in a neighbourhood of $\omega^\alpha$, and the operator $\mathcal{L}^\alpha$ maps $L^2(\partial D)^2$ onto $\ker \mathcal{A}^\alpha(\omega^\alpha,\delta)$.
Let us write $$ \ker {\mathcal{A}^\alpha(\omega^\alpha)} = \mbox{span} \{\Psi^\alpha\}, \quad \ker {\big(\mathcal{A}^\alpha(\omega^\alpha)\big)^*} = \mbox{span} \{\Phi^\alpha\}, $$ where $^*$ denotes the adjoint operator. Then, as in \cite{MaCMiPaP, thinlayer}, it can be shown that $$ \mathcal{L}^\alpha = \frac{\langle \Phi^\alpha,\ \cdot\ \rangle \Psi^\alpha}{\langle \Phi^\alpha, \frac{d}{d \omega}\mathcal{A}^\alpha\big|_{\omega=\omega^\alpha} \Psi^\alpha \rangle}, $$ where again $\langle \,\cdot \,,\,\cdot\, \rangle$ stands for the standard inner product of $L^2(\partial D)^2$. Hence the operator $\mathcal{M}^{\varepsilon,\delta,\alpha_1}$ can be decomposed as $$ \mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega) = I + (\mathcal{A}_{D}^\varepsilon - \mathcal{A}_D) \frac{1}{2\pi}\int_{Y^*}\frac{\mathcal{L}^\alpha}{\omega-\omega^\alpha}\: \mathrm{d} \alpha_2 + (\mathcal{A}_{D}^\varepsilon - \mathcal{A}_D) \frac{1}{2\pi}\int_{Y^*} \mathcal{R}^\alpha(\omega) \: \mathrm{d}\alpha_2. $$ Note that the third term on the right-hand side is holomorphic with respect to $\omega$. Denote by $\alpha^* = (\alpha_1,\pi)$ and $\omega^*(\alpha_1) = \omega^{(\alpha_1,\pi)}$. Using arguments similar to those in \cite{highfrequency} and the fact that each bubble is a circular disk, we can prove the following result on the shape of the dispersion relation close to $\alpha^*$. \begin{lem} \label{lem:max_omegaalp} For a fixed $\alpha_1 \in Y^*$, the characteristic value $\omega^\alpha$ attains its maximum over $\alpha_2$ at $\alpha_2 = \pi$, \textit{i.e.}{} at $\alpha=\alpha^*$. Moreover, for $\alpha_2$ near $\pi$, we have $$ \omega^\alpha = \omega^*(\alpha_1) - \frac{1}{2}c_\delta(\alpha_1) (\alpha_2-\pi)^2 + o\left((\alpha_2-\pi)^2\right). $$ Here, $c_\delta(\alpha_1)$ is a positive function of $\alpha_1$ and $\delta$.
\end{lem} The operator $\int_{Y^*}\frac{\mathcal{L}^\alpha}{\omega-\omega^\alpha}\: \mathrm{d} \alpha_2$ becomes singular when $\omega\rightarrow \omega^\alpha$. Moreover, since we want to compute the defect band inside the bandgap of the periodic problem at $\alpha_1$, we can assume $\omega$ is inside this bandgap. Consequently, the singularity occurs as $\omega\rightarrow \omega^*$. Let us extract its singular part explicitly. Denote by $\mathcal{A}^* = \mathcal{A}^{(\alpha_1,\pi)}$, $\Phi^* = \Phi^{(\alpha_1,\pi)}$, $\Psi^* = \Psi^{(\alpha_1,\pi)}$ and $\mathcal{L}^* = \mathcal{L}^{(\alpha_1,\pi)}$. Moreover, denote by $B_j$ an operator-valued function which is bounded with respect to $\omega$ in a neighbourhood of $\omega^*$. Then, by Lemma \ref{lem:max_omegaalp}, we have \begin{align*} \frac{1}{2\pi}\int_{Y^*}\frac{\mathcal{L}^{(\alpha_1,\alpha_2)}}{\omega-\omega^{(\alpha_1,\alpha_2)}} \: \mathrm{d}\alpha_2 &= \frac{\mathcal{L}^*}{2\pi}\int_{0}^{2\pi} \frac{1}{\omega - \omega^*+\frac{1}{2} c_\delta(\alpha_1) (\alpha_2-\pi)^2} \: \mathrm{d}\alpha_2 + B_1(\omega) \\ &=\frac{\mathcal{L}^*}{2\pi}\sqrt{\frac{2}{(\omega-\omega^*)c_\delta(\alpha_1)}}\,2\arctan\left(\pi\sqrt{\frac{c_\delta(\alpha_1)}{2(\omega-\omega^*)}}\right) + B_1(\omega)\\ &=\frac{\mathcal{L}^*}{\sqrt{2(\omega-\omega^*)c_\delta(\alpha_1)}} + B_2(\omega). \end{align*} We therefore get \begin{align*} \mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)=I + \frac{1}{\sqrt{2(\omega-\omega^*)c_\delta(\alpha_1)}} (\mathcal{A}_{D}^\varepsilon(\omega^*)-\mathcal{A}_{D}(\omega^*))\mathcal{L}^* + {\mathcal{R}^\varepsilon(\omega)}, \end{align*} for some $\mathcal{R}^\varepsilon(\omega) = O(\varepsilon)$ which is analytic and bounded for $\omega$ close to $\omega^*$. We look for characteristic values $\omega= \omega^\varepsilon$ of $\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)$, \textit{i.e.}{} values such that there exists some $\Psi^\varepsilon\neq 0$ with ${\mathcal{M}^{\varepsilon,\delta,\alpha_1}(\omega)}\Psi^\varepsilon = 0$.
Expanding this equation, we have \begin{align*} \Psi^\varepsilon + \frac{1}{\sqrt{2(\omega^\varepsilon-\omega^*)c_\delta(\alpha_1)}} \frac{ (\mathcal{A}_{D}^\varepsilon-\mathcal{A}_{D})(\omega^*) \Psi^*}{\langle \Phi^*, \frac{d}{d \omega}\mathcal{A}^*\big|_{\omega=\omega^*} \Psi^* \rangle}\langle \Phi^*, \Psi^\varepsilon \rangle + {\mathcal{R}^\varepsilon(\omega^\varepsilon)}\Psi^\varepsilon =0. \end{align*} Taking the inner product with $\Phi^*$, we obtain \begin{align*} \langle \Phi^*, \Psi^\varepsilon \rangle \left(1 + \frac{1}{\sqrt{2(\omega^\varepsilon-\omega^*)c_\delta(\alpha_1)}} \frac{ \langle \Phi^*, (\mathcal{A}_{D}^\varepsilon-\mathcal{A}_{D})(\omega^*) \Psi^*\rangle}{\langle \Phi^*, \frac{d}{d \omega}\mathcal{A}^*\big|_{\omega=\omega^*} \Psi^* \rangle}\right) + \langle \Phi^*, {\mathcal{R}^\varepsilon(\omega^\varepsilon)}\Psi^\varepsilon\rangle =0. \end{align*} Since $\mathcal{R}^\varepsilon = O(\varepsilon)$, it follows from the above equation that $\langle \Phi^*, \Psi^\varepsilon \rangle\neq0$. We may therefore normalize $\Psi^\varepsilon$ so that $\langle \Phi^*, \Psi^\varepsilon \rangle = 1$. Solving for $\omega^\varepsilon$ then gives \begin{equation} \label{eq:exact} \omega^\varepsilon = \omega^* + \frac{1}{2c_\delta(\alpha_1)} \frac{1}{\left(1+\langle \Phi^*, {\mathcal{R}^\varepsilon(\omega^\varepsilon)}\Psi^\varepsilon\rangle\right)^2} \left(\frac{\langle \Phi^*, (\mathcal{A}_{D}^\varepsilon-\mathcal{A}_{D})(\omega^*) \Psi^*\rangle}{\langle \Phi^*, \frac{d}{d \omega}\mathcal{A}^*\big|_{\omega=\omega^*} \Psi^* \rangle}\right)^2. \end{equation} In order to derive a more explicit expression, we will consider the asymptotic limit $\delta \rightarrow 0$. As in \cite{defectSIAM,defectX}, we have the following lemma.
\begin{lem}\label{lem:estim1} The following results hold: \begin{itemize} \item[(i)] When $\delta\rightarrow 0$, we have $$ \big\langle \Phi^*, \frac{d}{d \omega}\mathcal{A}^*(\omega^*,\delta) \Psi^* \big\rangle = -2\pi{\omega^*\ln \omega^*} R^3 + O(\sqrt{\delta}), $$ which is positive for $\delta$ small enough. \item[(ii)] For a fixed $\varepsilon$, when $\delta\rightarrow 0$ we have \begin{align*} \left\langle\Phi^*,\big(\mathcal{A}_{D}^\varepsilon(\omega^*,\delta)-\mathcal{A}_{D}(\omega^*,\delta)\big){ \Psi^*} \right\rangle &=\delta\varepsilon \ln\omega^*\left(R\|\psi_{\alpha^*}\|^2_{L^2(\partial D)} - 2{\mathrm{Cap}}_{D,\alpha^*} \right) + O(\varepsilon\delta + \varepsilon^2\delta\ln\delta), \end{align*} where $\psi_{\alpha^*} = (\mathcal{S}_{D}^{\alpha^*,0})^{-1}[\chi_{\partial D}]$ and ${\mathrm{Cap}}_{D,\alpha^*} = -\langle \psi_{\alpha^*}, \chi_{\partial D}\rangle$. For small $\varepsilon$ and $\delta$, this expression is positive either for $\varepsilon <0$ and $R$ small enough, or for $\varepsilon >0$ and $R$ close enough to $1/2$. \end{itemize} \end{lem} Combining equation \eqnref{eq:exact} and Lemma \ref{lem:estim1}, we obtain the following result. \begin{thm} \label{thm:first} Assume that $\delta$ is small enough and that the pair $(R,\varepsilon)$ satisfies one of the following two assumptions: \begin{itemize} \item[(i)] $R$ small enough and $\varepsilon<0$ small enough in magnitude (dilute regime); \item[(ii)] $R$ close enough to $1/2$ and $\varepsilon>0$ small enough (non-dilute regime). \end{itemize} Then there exists one frequency value $\omega^\varepsilon(\alpha_1)$ such that the problem \eqref{eq:scattering-strip} has a non-trivial solution, and $\omega^\varepsilon(\alpha_1)$ lies slightly above $\omega^*(\alpha_1)$.
Moreover, as $\delta \rightarrow 0$ we have \begin{equation} \label{eq:defect-freq} \omega^\varepsilon(\alpha_1) = \omega^*(\alpha_1) + \frac{1}{2c_\delta(\alpha_1)}\left(\frac{\delta\varepsilon\left(R\|\psi_{\alpha^*}\|^2_{L^2(\partial D)} - 2{\mathrm{Cap}}_{D,\alpha^*} \right)}{2\pi{\omega^*(\alpha_1)} R^3}\right)^2 + O\left(\frac{\varepsilon^2\sqrt{\delta}}{\ln\delta} + \varepsilon^3\sqrt{\delta}\right), \end{equation} where $\alpha^* = (\alpha_1, \pi)$. \end{thm} \begin{rmk} It is easily verified that equation \eqnref{eq:dilute} evaluated for small $\varepsilon$ coincides with equation \eqnref{eq:defect-freq} evaluated for small $R$. \end{rmk} \section{Characterization of the effective sources for non-circular bubbles} \label{app:noncirc} Now let $D$ be a general simply connected domain with boundary $\partial D \in C^1$. In this section, we will restrict to the case of small size perturbations $\varepsilon <0 $. Define the defect bubble $D_d \subset D$ as the domain with boundary $$ \partial D_d = \{x + \varepsilon\nu_x \mid x\in \partial D \},$$ where $\nu_x$ is the outward unit normal of $\partial D$ at $x\in\partial D$. We will need some results given in \cite{thinlayer}. First, we introduce some notation. Define the mapping $p: \partial D \rightarrow \partial D_d, \ p(x) = x+\varepsilon \nu_x$. Let $x,y\in \partial D$ and let $\widetilde{x} = p(x) \in \partial D_d$ and $\widetilde{y} = p(y) \in \partial D_d$. Define $q: L^2( \partial D) \rightarrow L^2(\partial D_d), \ q(\phi)(\widetilde x) = \phi ( p^{-1}(\widetilde x)) $, and for a surface density $\phi$ on $\partial D$, define $\widetilde{\phi} = q(\phi)$ on $\partial D_d$. We also define the signed curvature $\tau = \tau(x), x\in \partial D$, in the following way. Let $x = x(t)$ be a parametrization of $\partial D$ by arc length.
Then define $\tau$ by $$ \frac{d^2}{dt^2}x(t) = -\tau \nu_x. $$ Observe that $\tau$ is independent of the orientation of $\partial D$. The following results are given in \cite{thinlayer}, but adjusted to the case where $\varepsilon <0$. \begin{prop}\label{prop:asympsingle} Let $k>0$. Let $\phi \in L^2(\partial D)$ and let $x,y,\widetilde{x},\widetilde{y},\widetilde{\phi}$ be as above. Then \begin{equation} \label{eq:asympSdD} \mathcal{S}_{D_d,D}^{k}[\phi](\widetilde{x}) = \mathcal{S}_D^k[\phi](x) +\varepsilon \left(-\frac{1}{2}I + \mathcal{K}_D^{k,*}\right)[\phi](x) + o(\varepsilon), \end{equation} \begin{equation} \label{eq:asympSd} \mathcal{S}_{D_d}^k[\widetilde{\phi}](\widetilde{x}) = \mathcal{S}_D^k[\phi](x) + \varepsilon \left(\mathcal{K}_D^k + \mathcal{K}_D^{k,*}\right)[\phi](x) + \varepsilon\mathcal{S}_D^k[\tau\phi](x) + o(\varepsilon), \end{equation} \begin{equation} \label{eq:asympSDd} \mathcal{S}_{D,D_d}^k[\widetilde{\phi}](x) = \mathcal{S}_D^k[\phi](x) + \varepsilon \left(-\frac{1}{2}I+ \mathcal{K}_D^k \right)[\phi](x) + \varepsilon\mathcal{S}_D^k[\tau\phi](x) + o(\varepsilon). \end{equation} \end{prop} \begin{prop} \label{prop:asympK} Let $\phi \in L^2(\partial D)$ and let $x,y,\widetilde{x},\widetilde{y},\widetilde{\phi}$ be as above. Then \begin{equation} \label{eq:asympK} \mathcal{K}_{D_d}^{k,*}[\widetilde{\phi}](\widetilde{x}) = \mathcal{K}_D^{k,*}[\phi](x) + \varepsilon\mathcal{K}_1^k[\phi](x) + o(\varepsilon), \end{equation} where $\mathcal{K}_1^k$ is given by \begin{equation*} \mathcal{K}_1^k[\phi](x) = \mathcal{K}_D^{k,*}[\tau\phi](x) - \tau(x) \mathcal{K}_D^{k,*}[\phi](x) + \frac{\partial \mathcal{D}_D^k}{\partial \nu}[\phi](x) - \frac{\partial^2}{\partial T^2}\mathcal{S}_D^k[\phi](x) - k^2\mathcal{S}_D^k[\phi](x).
\end{equation*} Here $\frac{\partial^2}{\partial T^2}$ denotes the second tangential derivative, which is independent of the orientation of $\partial D$. \end{prop} We also state the following result, which is given, for example, in \cite{McLean}. \begin{prop} \label{prop:hypersingular} For $x\in \partial D$ and $k\geq 0$ we have \begin{align*} \frac{\partial \mathcal{D}_D^k}{\partial \nu}[\phi](x) &= \left(\frac{1}{2}I + \mathcal{K}_D^{k,*}\right)\left(\mathcal{S}_D^k\right)^{-1}\left(-\frac{1}{2}I + \mathcal{K}_D^k\right)[\phi](x) \\ &= \left(-\frac{1}{2}I + \mathcal{K}_D^{k,*}\right)\left(\mathcal{S}_D^k\right)^{-1}\left(\frac{1}{2}I + \mathcal{K}_D^k\right)[\phi](x). \end{align*} \end{prop} As in Section \ref{sec-3}, we consider the defect problem \eqnref{eq:scattering}, modelled by the fictitious sources as in equation \eqnref{eq:scattering_fictitious}. Observe that Proposition \ref{prop:floquet} is valid even in the case of non-circular bubbles. To derive the analogue of Proposition \ref{prop:effective}, we again study equations \eqnref{eq:SDdSD_inside} and \eqnref{eq:SDdSD_outside}, \textit{i.e.}{}, \begin{align*} \mathcal{S}_{D_d}^{k_b}[\varphi_d] &\equiv \mathcal{S}_D^{k_b}[\varphi] \quad\mbox{in }D_d, \\ \mathcal{S}_{D_d}^{k_w}[\psi_d] &\equiv \mathcal{S}_D^{k_w}[\psi] \quad\mbox{in }Y^2\setminus \overline{D}. \end{align*} Since $\omega$ is in the subwavelength regime, $k_b$ is not a Dirichlet eigenvalue. Together with the uniqueness of the exterior Dirichlet problem, we conclude that it is sufficient to consider these equations on the boundaries. Using the notation from above, this means \begin{align*} \mathcal{S}_{D_d}^{k_b}[\varphi_d] &= \mathcal{S}_{D_d,D}^{k_b}[\varphi] ,\\ \mathcal{S}_{D,D_d}^{k_w}[\psi_d] &= \mathcal{S}_D^{k_w}[\psi].
\end{align*} Using the expansions \eqnref{eq:asympSdD}, \eqnref{eq:asympSd} and \eqnref{eq:asympSDd}, we find \begin{align*} \mathcal{S}_D^{k_b}[q^{-1}\varphi_d] + \varepsilon \left(\mathcal{K}_D^{k_b} + \mathcal{K}_D^{k_b,*}\right)[q^{-1}\varphi_d] + \varepsilon\mathcal{S}_D^{k_b}[\tau q^{-1}\varphi_d] &=\mathcal{S}_D^{k_b}[\varphi] +\varepsilon \left(-\frac{1}{2}I + \mathcal{K}_D^{k_b,*}\right)[\varphi] + o(\varepsilon),\\ \mathcal{S}_D^{k_w}[q^{-1}\psi_d](x) + \varepsilon \left(-\frac{1}{2}I+ \mathcal{K}_D^{k_w} \right)[q^{-1}\psi_d](x) + \varepsilon\mathcal{S}_D^{k_w}[\tau q^{-1}\psi_d](x) &= \mathcal{S}_D^{k_w}[\psi] + o(\varepsilon), \end{align*} with $q$ defined as above. From this we find that \begin{align} \label{eq:noncirc1} \begin{pmatrix} \varphi_d \\ \psi_d \end{pmatrix} =& Q\left(I + \varepsilon\begin{pmatrix} - \left(\mathcal{S}_D^{k_b}\right)^{-1}\left(\frac{1}{2}I + \mathcal{K}_D^{k_b} \right) - \tau & 0 \\ 0 & - \left(\mathcal{S}_D^{k_w}\right)^{-1}\left(-\frac{1}{2}I + \mathcal{K}_D^{k_w} \right) - \tau \end{pmatrix}\right)\begin{pmatrix} \varphi \\ \psi \end{pmatrix} + o(\varepsilon) \nonumber \\ =:& \mathcal{P}_1\begin{pmatrix} \varphi \\ \psi \end{pmatrix}, \end{align} where $Q$ is the bijection $Q: \left(L^2(\partial D)\right)^2 \rightarrow \left(L^2(\partial D_d)\right)^2, \ Q = (q, q)$, and $\mathcal{P}_1: \left(L^2(\partial D)\right)^2 \rightarrow \left(L^2(\partial D_d)\right)^2$.
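The step from the two boundary identities to the operator $\mathcal{P}_1$ is a first-order inversion: if $(\mathcal{S}+\varepsilon \mathcal{B})[u_\varepsilon]=\mathcal{S}[u]+\varepsilon \mathcal{C}[u]+o(\varepsilon)$ with $\mathcal{S}$ invertible, then $u_\varepsilon=u+\varepsilon\,\mathcal{S}^{-1}(\mathcal{C}-\mathcal{B})[u]+o(\varepsilon)$. A finite-dimensional sanity check of this pattern, with random matrices standing in for the layer-potential operators (a sketch, not the paper's operators):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 6, 1e-6
S = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in for S
B = rng.standard_normal((n, n))                   # stand-in for the O(eps) perturbation
C = rng.standard_normal((n, n))                   # stand-in for the O(eps) right-hand side
u = rng.standard_normal(n)

# Exact solution of (S + eps*B) u_eps = S u + eps*C u ...
u_eps = np.linalg.solve(S + eps * B, S @ u + eps * C @ u)
# ... versus the first-order expansion u + eps * S^{-1} (C - B) u.
u_first = u + eps * np.linalg.solve(S, (C - B) @ u)
assert np.linalg.norm(u_eps - u_first) < 1e-9 * np.linalg.norm(u)  # O(eps^2) discrepancy
```

The diagonal entries of $\mathcal{P}_1$ arise from exactly this expansion, with $\mathcal{B}=\mathcal{K}+\mathcal{K}^*+\mathcal{S}[\tau\,\cdot\,]$ (resp. its exterior analogue) and $\mathcal{C}$ the first-order term on the right-hand side.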
Using the asymptotic expansions \eqnref{eq:asympSd} and \eqnref{eq:asympK}, we can expand the operator $\mathcal{A}_{D_d}$ as \begin{equation} \label{eq:noncirc2} \mathcal{A}_{D_d} = Q \circ \left(\mathcal{A}_{D}(\omega,\delta) + \varepsilon \mathcal{A}_1(\omega,\delta)\right)\circ Q^{-1} + o(\varepsilon), \end{equation} where \begin{equation*} \label{eq:A1} \mathcal{A}_1(\omega,\delta) = \begin{pmatrix} \mathcal{K}_D^{k_b} + \mathcal{K}_D^{k_b,*} + \mathcal{S}_D^{k_b}[\tau\cdot] & -\left(\mathcal{K}_D^{k_w} + \mathcal{K}_D^{k_w,*} + \mathcal{S}_D^{k_w}[\tau\cdot]\right)\\ \mathcal{K}_1^{k_b} & -\delta \mathcal{K}_1^{k_w} \end{pmatrix}. \end{equation*} Using a Taylor expansion, we have that $$\frac{\partial}{\partial \nu} H |_{\partial D} = \frac{\partial}{\partial \nu} H |_{\partial D_d} - \varepsilon \frac{\partial^2}{\partial \nu^2}H |_{\partial D_d} + o(\varepsilon).$$ We use the Laplacian in the curvilinear coordinates defined by $T_{\widetilde x},\nu_{\widetilde x}$ for $\widetilde x\in \partial D_d$, \begin{equation*}\label{eq:lapcurve} \Delta = \frac{\partial^2}{\partial \nu^2} + \tau(\widetilde x)\frac{\partial}{\partial \nu} + \frac{\partial^2}{\partial T^2}.
\end{equation*} It is easily verified that the curvatures on the two boundaries satisfy $$\tau(\widetilde x) = \tau(x) +O(\varepsilon).$$ Hence we obtain $$ \frac{\partial^2}{\partial \nu^2}H |_{\partial D_d} = -\left(k_w^2 +\frac{\partial^2}{\partial T^2}\right) H |_{\partial D_d} - \tau \frac{\partial}{\partial \nu} H |_{\partial D_d} + O(\varepsilon).$$ In total, we have \begin{equation} \label{eq:noncirc3} \begin{pmatrix} H|_{\partial D} \\[0.3em] \displaystyle\partial H /\partial \nu |_{\partial D} \end{pmatrix} = \mathcal{P}_2^{-1} \begin{pmatrix} H|_{\partial D_d} \\[0.3em] \displaystyle\partial H /\partial \nu |_{\partial D_d} \end{pmatrix}, \end{equation} where the operator $\mathcal{P}_2^{-1}: L^2(\partial D_d)^2\rightarrow L^2(\partial D)^2 $ is given by $$ \mathcal{P}_2^{-1} = \left( I + \varepsilon\begin{pmatrix} \displaystyle 0 & -1 \\ k_w^2 +\partial_T^2 & \tau \end{pmatrix}\right)Q^{-1} + o(\varepsilon). $$ Combining equations \eqnref{eq:AD}, \eqnref{eq:ADd_original}, \eqnref{eq:noncirc1} and \eqnref{eq:noncirc3}, we arrive at \begin{equation*} \left(\mathcal{P}_2^{-1}\mathcal{A}_{D_d}\mathcal{P}_1 - \mathcal{A}_{D}\right)\begin{pmatrix}\varphi \\ \psi \end{pmatrix} = \begin{pmatrix}f \\ g \end{pmatrix}. \end{equation*} As before, we define $\mathcal{A}_D^\varepsilon = \mathcal{P}_2^{-1}\mathcal{A}_{D_d}\mathcal{P}_1$. Finally, we can compute this operator explicitly using equations \eqnref{eq:noncirc1}, \eqnref{eq:noncirc2} and \eqnref{eq:noncirc3} and Proposition \ref{prop:hypersingular} to obtain the following proposition, which is the analogue of Proposition \ref{prop:effective} in the case of non-circular bubbles.
\begin{prop} \label{prop:effective_noncirc} The density pair $(\varphi,\psi)$ and the effective sources $(f,g)$ satisfy the following relation: \begin{equation*} \left(\mathcal{A}_D^\varepsilon - \mathcal{A}_{D}\right)\begin{pmatrix} \varphi \\[0.3em] \psi \end{pmatrix} = \begin{pmatrix} f \\[0.3em] g \end{pmatrix}, \end{equation*} where the operator $\mathcal{A}_D^\varepsilon$ satisfies $$ \mathcal{A}_D^\varepsilon - \mathcal{A}_{D} = \varepsilon(\delta-1)\begin{pmatrix} 0 & \frac{1}{2}I + \mathcal{K}_D^{\omega,*} \\[0.3em] 0 & \omega^2 \mathcal{S}_D^\omega + \frac{\partial^2}{\partial T^2}\mathcal{S}_D^\omega \end{pmatrix} + o(\varepsilon) .$$ \end{prop} \end{document}
\begin{document} \title{Prolongations of convenient Lie algebroids} \author{Patrick Cabau \& Fernand Pelletier} \address{Unit\'e Mixte de Recherche 5127 CNRS, Universit\'e de Savoie Mont Blanc, Laboratoire de Math\'ematiques (LAMA), Campus Scientifique, 73370 Le Bourget-du-Lac, France} \email{[email protected], [email protected]} \date{} \maketitle \begin{abstract} We first define the concept of Lie algebroid in the convenient setting. In reference to the finite dimensional context, we adapt the notion of prolongation of a Lie algebroid over a fibred manifold to a convenient Lie algebroid over a fibred manifold. Then we show that this construction is stable under projective and direct limits under adequate assumptions. \end{abstract} \textbf{MSC 2010:} 53D35, 55P35.\\ \textbf{Keywords:} convenient Lie algebroid, prolongation of a convenient Lie algebroid, projective and direct limits of sequences of Banach Lie algebroids. \section{Introduction} In classical mechanics, the configuration space is a finite dimensional smooth manifold $M$ whose tangent bundle $p_M:TM \to M$ corresponds to the velocity space. This geometrical object plays a relevant role in the Lagrangian formalism between tangent and cotangent bundles (cf. \cite{Kle}). In \cite{Wei}, Weinstein develops a generalized theory of Lagrangian Mechanics on Lie algebroids. In \cite{Lib}, Libermann shows that such a formalism is not possible, in general, if we consider the tangent bundle of a Lie algebroid. The notion of prolongation of a Lie algebroid introduced by Higgins and Mackenzie \cite{HiMa} offers a nice context in which such a formalism was generalized by Mart\'inez (cf. \cite{Ma1} and \cite{Ma2}).\\ The notion of Lie algebroid in the Banach setting was simultaneously introduced in \cite{Ana} and \cite{Pe1}.
Unfortunately, in this setting, there are many obstacles to generalizing all the canonical Lie structures on a finite dimensional Lie algebroid, beginning with a suitable definition of a Lie bracket (cf. \cite{CaPe}). In this paper, we consider the more general convenient setting (cf. \cite{KrMi}) in which we give a definition of a convenient Lie algebroid (and more generally a partial convenient Lie algebroid) based on a precise notion of sheaf of Lie brackets \footnote{cf. Remark \ref{R_LocalGlobal}} on a convenient anchored bundle (cf. section \ref{___AlmostLieAlgebroidAndLieAlgebroid}).\\ As in finite dimension, given a convenient anchored bundle $ \left( \mathcal{A},\pi, M,\rho \right) $, \footnote{The anchor $\rho$ is a vector bundle morphism $\rho: \mathcal{A} \to TM$.} the total space of the prolongation $ \hat{\bf p}:\mathbf{T}\mathcal{M}\to \mathcal{M}$ of $ \left( \mathcal{A},\pi, M,\rho \right) $ over a fibred manifold $\mathbf{p}:\mathcal{M}\to M$, is the pullback over $\rho$ of the bundle $T\mathbf{p}: T\mathcal{M} \to TM$. Moreover, we have an anchor $\hat{\rho}: \mathbf{T}\mathcal{M} \to T\mathcal{M}$. \\ In finite dimension, the Lie bracket on $\mathcal{A}$ gives rise to a Lie bracket on $\mathbf{T}\mathcal{M}$. Unfortunately, this is no longer true in infinite dimension. If $\tilde{\pi}:\widetilde{\mathcal{A}}\to \mathcal{M}$ is the pullback of $\pi:\mathcal{A}\to M$ over $\mathbf{p}$, then the module of local sections of $\widetilde{\mathcal{A}}$ is no longer finitely generated by local sections along $\mathbf{p}$. For this reason, the construction of the prolongation of the bracket does not work as in finite dimension. Thus the prolongation of such a Lie algebroid is not again a Lie algebroid but only a strong partial Lie algebroid (cf. $\S$ \ref{___StructureofPartialLieAlgebroid}). We also define (non linear) connections on such a prolongation. As for the tangent bundle of a Banach vector bundle (cf.
\cite{AgSu}), we then show that the kernel bundle of $\hat{\bf p}:\mathbf{T}\mathcal{M}\to \mathcal{M}$ is split if and only if there exists a linear connection on $\mathcal{M}$.\\ In the Banach setting, it is proved that if the kernel of the anchor $\rho$ of a Banach Lie algebroid $(\mathcal{A}, \pi, M,\rho, [.,.]_\mathcal{A})$ is split and its range is closed, then the associated distribution on $M$ defines a (singular) foliation (cf. \cite{Pe1}). Under these assumptions, we show that the prolongation $ \left( \mathbf{T}\mathcal{A}, \hat{\mathbf{p}}, \mathcal{A}, \hat{\rho} \right) $ has the same properties, and the foliation defined by $\hat{\rho}(\mathbf{T}\mathcal{A})$ on $\mathcal{A}$ is exactly the set $\{\mathcal{A}_{| L}, \; L \textrm{ leaf of } \rho(\mathcal{A})\}$ (cf. Theorem \ref{T_tildefol}).\\ As an illustration of these notions, in the convenient setting, and not only in a Banach one, we end this work with the prolongation of projective (resp. direct) limits of projective (resp. ascending) sequences of fibred Banach Lie algebroids with finite or infinite dimensional fibres. \\ \emph{This work can be understood as the basis for further studies on how the Lagrangian formalism on finite dimensional Lie algebroids (cf. \cite{Ma2} for instance) can be generalized in this convenient framework.}\\ Section \ref{_ConvenientLieAlgebroid} is devoted to the presentation of the prerequisites needed in this paper about (partial) convenient Lie algebroids. After some preliminaries on notation, we introduce the notion of convenient anchored bundle. Then we define a notion of almost bracket on such a vector bundle. The definition of a convenient Lie algebroid and some of its properties are given in subsection \ref{___AlmostLieAlgebroidAndLieAlgebroid}. The next subsection presents the concept of partial convenient Lie algebroid.
The following subsection is devoted to the definitions of some derivative operators (the Lie derivative with respect to some sheaf of sections of $k$-forms and the exterior derivative), in particular for strong partial Lie algebroids. The last subsection recalls some results about the integrability of Banach Lie algebroids when the Banach Lie algebroid is split and the range of its anchor is closed (cf. \cite{Pe2}).\\ The important part of this work is contained in section \ref{__ProlongationOfAConvenientLieAlgebroidAlongAFibration}. In the first subsection, we build the prolongation of a convenient anchored bundle. Then, in subsection \ref{___ProlongationOfTheLieBracket}, we define a Lie bracket on local projectable sections of the total space of the prolongation. We then explain why this bracket cannot be extended to the whole set of local sections if the typical fibre of the anchored bundle is not finite dimensional, which is the essential difference from the finite dimensional setting. We end this section by showing that if a Banach Lie algebroid is split and the range of its anchor is closed, the same is true for its prolongation, and the range of the anchor of the prolongation also defines a foliation even if its Lie bracket is not defined on the set of all local sections.\\ The last two sections show that, under adequate assumptions, the prolongation of the projective (resp. direct) limit of a projective (resp. ascending) sequence of Banach Lie algebroids is exactly the projective (resp. direct) limit of the prolongations of this sequence. \\ In order to make these last two sections more accessible to readers unfamiliar with these notions, we have added two appendices recalling the needed concepts and results on projective and direct limits.
\section{Convenient Lie algebroid} \label{_ConvenientLieAlgebroid} \subsection{Local identifications and expressions in a convenient bundle}${}$\\ \label{___LocalIdentificationsAndExpressionsInAConvenientBundle} \emph{In all this paper, we work in the convenient setting and we refer to \cite{KrMi}.}\\ Consider a convenient vector bundle $\pi: \mathcal{A}\to M$ whose typical fibre is a convenient linear space $\mathbb{A}$. For any open subset $U\subset M$, we denote by $C^\infty(U)$ the ring of smooth functions on $U$ and by $\Gamma\left( \mathcal{A}_U \right) $ the $C^\infty(U)$-module of smooth sections of the restriction $ \mathcal{A}_U$ of $\mathcal{A}$ over $U$, simply $\Gamma(\mathcal{A})$ when $U=M$.\\ Consider a chart $(U,\phi)$ on $M$ such that we have a trivialization $\tau:{ \mathcal{A}}_U\to \phi(U)\times \mathbb{A}$. Then $T\phi$ is a trivialization of $TM_U$ on $\phi(U)\times \mathbb{M}$ and $T\tau$ is a trivialization of $T \mathcal{A}_U$ on $\phi(U)\times \mathbb{A}\times\mathbb{M}\times\mathbb{A}$.\\ For the sake of simplicity, we will denote these trivializations: \begin{description} \item[--] $ \mathcal{A}_U= \mathcal{A} _{|U}\equiv \phi(U)\times\mathbb{A}$; \item[--] $TM_U=TM_{|U}\equiv \phi(U)\times\mathbb{M}$; \item[--] $T \mathcal{A}_U=T \mathcal{A}_{| \mathcal{A}_U}\equiv(\phi(U)\times{\mathbb{A} })\times(\mathbb{M}\times{\mathbb{A} })$; \item[--] $T \mathcal{A}^*_{| \mathcal{A}^*_U}\equiv \phi(U)\times\mathbb{A}^{*}\times\mathbb{M}\times \mathbb{A}^*$; \end{description} where $U \subset M$ is identified with $\phi(U)$.\\ We will also use the following associated local coordinates, where $\equiv$ stands for the representation in the corresponding trivialization.
\begin{description} \item[] $\mathfrak{a}=(x,a)\equiv(\mathsf{x,a})\in U\times \mathbb{A}$; \item[] $(x,v)\equiv (\mathsf{x,v})\in U\times\mathbb{M}$; \item[] $(\mathfrak{a},\mathfrak{b})\equiv(\mathsf{x,a, v ,b})\in U\times\mathbb{A}\times\mathbb{M}\times \mathbb{A}$; \item[] $(\sigma,w,\eta)\equiv (\mathsf{x,\xi,w,\eta})\in U\times\mathbb{A}^{*}\times\mathbb{M}\times \mathbb{A}^*$. \end{description} \subsection{Convenient anchored bundle} \label{___ConvenientAnchoredBundle} Let $\pi:\mathcal{A}\to M$ be a convenient vector bundle whose fibre is a convenient linear space $\mathbb{A}$. \begin{definition} \label{D_Anchor} A morphism of vector bundles $\rho:\mathcal{A}\to TM$ is called an \textit{anchor} and the quadruple $ \left( \mathcal{A},\pi,M,\rho \right) $ is called a convenient anchored bundle\index{convenient!anchored bundle}. \end{definition} \begin{notations} \label{N_Anchor} ${}$ \begin{enumerate} \item In this section, if there is no ambiguity, the anchored bundle $(\mathcal{A},\pi,M,\rho)$ is fixed and, in all this work, the Lie bracket of vector fields on a convenient manifold will simply be denoted $[.,.]$. \item For any open set $U$ in $M$, the morphism $\rho$ gives rise to a $C^\infty(U)$-morphism of modules ${\rho}_U:\Gamma\left( \mathcal{A}_U\right) \to \mathfrak{X}(U) $ defined, for any $x \in U$ and any smooth section $\mathfrak{a}$ of $\mathcal{A}_U$, by: \[ \left( {\rho}_U\left( \mathfrak{a} \right) \right) \left( x\right) =\rho\left( \mathfrak{a}\left( x\right) \right) \] and still denoted by $\rho$. \item For any convenient spaces $\mathbb{E}$ and $\mathbb{F}$, we denote by $\operatorname{L}(\mathbb{E},\mathbb{F})$ the convenient space of bounded linear operators from $\mathbb{E}$ to $\mathbb{F}$ and, for $\mathbb{E}=\mathbb{F}$, we set $\operatorname{L}(\mathbb{E}):=\operatorname{L}(\mathbb{E},\mathbb{E})$; $\operatorname{GL}(\mathbb{E})$ is the group of bounded automorphisms of $\mathbb{E}$.
\item In local coordinates in a chart $(U,\phi)$, the restriction of $\rho$ to $U$ will give rise to a smooth field $\mathsf{x} \mapsto \mathsf{\rho_x}$ from $\phi(U)\equiv U$ to $\operatorname{L}(\mathbb{A},\mathbb{M})$. \end{enumerate} \end{notations} \subsection{Almost Lie bracket} \label{___AlmostLieBracket} \begin{definition} \label{D_AlmostLieBracketOnAnAnchoredBundle} An almost Lie bracket on an anchored bundle $\mathcal{A}$ is a sheaf of skew-symmetric bilinear maps \[ \lbrack.,.]_{\mathcal{A}_U}:\Gamma\left( \mathcal{A}_U\right) \times\Gamma\left( \mathcal{A}_U\right) \to\Gamma\left( \mathcal{A}_U\right) \] for any open set $U\subseteq M$ which satisfies the following properties: \begin{enumerate} \item [\textbf{(AL 1)}] The Leibniz identity:\index{Leibniz identity} \[ \forall\left( \mathfrak{a}_{1},\mathfrak{a}_{2}\right) \in \Gamma \left( \mathcal{A}_U \right) ^{2}, \forall f \in C^\infty(U) ,\ [\mathfrak{a}_{1},f\mathfrak{a}_{2}]_{\mathcal{A}_U}=f.[\mathfrak{a}_{1},\mathfrak{a}_{2}]_{\mathcal{A}_U}+df(\rho(\mathfrak{a}_{1})).\mathfrak{a}_{2}. \] \item[\textbf{(AL 2)}] For any open set $U\subseteq M$, the map \[ (\mathfrak{a}_1,\mathfrak{a}_2)\mapsto [\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_U} \] only depends on the $1$-jets of the sections $\mathfrak{a}_1$ and $\mathfrak{a}_2$ of $\mathcal{A}_U$. \end{enumerate} By abuse of notation, such a sheaf of almost Lie brackets will be denoted $[.,.]_\mathcal{A}$. \end{definition} \begin{remark} \label{R_LocalGlobal} In finite dimension, the bracket is defined on global sections and induces a Lie bracket on local sections which depends on the $1$-jets of sections. In the convenient setting (as in the Banach one), if $M$ is not smoothly regular, the set of restrictions to some open set $U$ of global sections of $\mathcal{A}$ could be different from $\Gamma(\mathcal{A}_U)$ but, unfortunately, we have no example of such a situation.
Thus, a bracket defined only on the whole space $\Gamma(\mathcal{A})\times \Gamma(\mathcal{A})$ need not give rise to a bracket on local sections of $\mathcal{A}$ and, even when it does, the condition \emph{\textbf{(AL 2)}} need not hold in general. \end{remark} In the context of local trivializations ($\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}), if $\operatorname{L}^2_{\operatorname{alt}}(\mathbb{A},\mathbb{A}) $ is the convenient space of bounded skew-symmetric operators on $\mathbb{A}$ with values in $\mathbb{A}$, there exists a smooth field \[ \begin{array}{cccc} \mathsf{C}: & U & \rightarrow &\operatorname{L}_{\operatorname{alt}}^2(\mathbb{A},\mathbb{A}) \\ & \mathsf{x} & \mapsto & \mathsf{C}_\mathsf{x} \end{array} \] such that, for $\mathfrak{a}_1(x)\equiv(\mathsf{x},\mathsf{a_1(x)})$ and $\mathfrak{a}_2(x)\equiv(\mathsf{x,a_2(x)})$, we have: \begin{eqnarray} \label{eq_loctrivct} [\mathfrak{a}_1,\mathfrak{a}_2]_U(x) \equiv (\mathsf{x},\,\mathsf{C_x(a_1(x),a_2(x))}+d \mathsf{a_2}(\mathsf{\rho_x(a_1(x))})-d \mathsf{a_1}(\mathsf{\rho_x(a_2(x))})). \end{eqnarray} \subsection{Almost Lie algebroid and Lie algebroid} \label{___AlmostLieAlgebroidAndLieAlgebroid} \begin{definition} \label{D_AlmostLieAlgebroid} The quintuple $ \left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $, where $ \left( \mathcal{A},\pi,M,\rho \right) $ is an anchored bundle and $[.,.]_{\mathcal{A}}$ an almost Lie bracket, is called a convenient almost Lie algebroid\index{convenient almost Lie algebroid}\index{almost Lie algebroid!convenient}.
\end{definition} In this way, the \emph{Jacobiator}\index{Jacobiator} is the $\mathbb{R}$-trilinear map $J_{\mathcal{A}_U}:\Gamma(\mathcal{A}_{U})^{3}\to\Gamma(\mathcal{A}_{ U})$ defined, for any open set $U$ in $M$ and any sections $\left( \mathfrak{a}_{1}, \mathfrak{a}_{2}, \mathfrak{a}_{3} \right) \in \Gamma(\mathcal{A}_{U})^3$, by \[ J_{\mathcal{A}_U}(\mathfrak{a}_{1},\mathfrak{a}_{2},\mathfrak{a}_{3} )=[\mathfrak{a}_{1},[\mathfrak{a}_{2},\mathfrak{a}_{3}]_{\mathcal{A}}]_{\mathcal{A}}+[\mathfrak{a}_{2},[\mathfrak{a}_{3},\mathfrak{a}_{1}]_{\mathcal{A}}]_{\mathcal{A}}+[\mathfrak{a}_{3},[\mathfrak{a}_{1},\mathfrak{a}_{2}]_{\mathcal{A}}]_{\mathcal{A}}. \] \begin{definition} \label{D_ConvenientLieAlgebroid} A convenient Lie algebroid \index{convenient Lie algebroid}\index{Lie algebroid!convenient} is a convenient almost Lie algebroid $ \left( \mathcal{A},\pi,M,\rho ,[.,.]_{\mathcal{A}} \right) $ such that the associated Jacobiator $J_{\mathcal{A}_U}$ vanishes identically on each module $\Gamma(\mathcal{A}_{U})$ for all open sets $U$ in $M$. \end{definition} We then have the following result (cf. \cite{BCP}, Chapter 3): \begin{proposition} \label{P_EquivalenceMorphismJEtensor} Consider a convenient almost Lie algebroid $\left( \mathcal{A},\pi,M,\rho ,[.,.]_{\mathcal{A}} \right) $. \begin{enumerate} \item For any open set $U\subseteq M$ and all $\left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right) \in \Gamma\left( \mathcal{A}_{ U} \right) ^2$, the map \[ \left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right)\mapsto \rho \left( [\mathfrak{a}_1,\mathfrak{a}_2]_\mathcal{A} \right) -[\rho(\mathfrak{a}_1), \rho(\mathfrak{a}_2)] \] only depends on the $1$-jet of $\rho$ at any $x\in U$ and the values of $\mathfrak{a}_1$ and $\mathfrak{a}_2$ at $x$.
\item If the Jacobiator $J_{\mathcal{A}_U}$ vanishes identically, then we have: \begin{equation} \label{eq_rhoCompatible} \forall \left( \mathfrak{a}_{1},\mathfrak{a}_{2} \right) \in \Gamma\left( \mathcal{A}_{ U} \right) ^2,\; \rho \left( [\mathfrak{a}_1,\mathfrak{a}_2]_\mathcal{A} \right) =[\rho(\mathfrak{a}_1), \rho(\mathfrak{a}_2)]. \end{equation} \item If the property (\ref{eq_rhoCompatible}) is true, then $J_{\mathcal{A}_U}$ is a bounded trilinear $C^\infty(U)$-morphism from $ \Gamma\left( \mathcal{A}_U \right)^3$ to $\Gamma\left( \mathcal{A}_U \right)$ which takes values in $\ker \rho$ over $U$.\\ \end{enumerate} \end{proposition} If, for each open set $U$, the property (\ref{eq_rhoCompatible}) is satisfied, then (3) implies that the family $\{ J_{\mathcal{A}_U}, U\textrm{ open in } M \}$ defines a sheaf of trilinear morphisms from the sheaf $\{ (\Gamma(\mathcal{A}_U))^3, U \textrm{ open in } M\}$ into the sheaf $\{ \Gamma(\mathcal{A}_U), U \textrm{ open in } M\}$. This sheaf will be denoted $J_{\mathcal{A}}$. \begin{corollary} \label{C_rhoLieAlgebraMorphism} If $ \left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ is a convenient Lie algebroid, then $\rho$ induces a morphism of Lie algebras from $\Gamma(\mathcal{A}_{ U})$ into $\mathfrak{X}(U)$ for any open set $U$ in $M$. \end{corollary} \begin{definition} \label{D_SplitConvenientLieAlgebroid} A convenient Lie algebroid $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ will be called split\index{split convenient Lie algebroid} if, for each $x\in {M}$, the kernel of $\rho_x=\rho_{| \pi^{-1}(x)}$ is supplemented in $ \pi^{-1}(x)$. \end{definition} For example, if $ \operatorname{ker}\rho_x$ is finite dimensional or finite codimensional for all $x\in M$, or if $\mathbb{A}$ is a Hilbert space, then $(\mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}})$ is split.
Another particular situation is the case where the anchor $\rho\equiv 0$; then $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $ is a \emph{Lie algebra Banach bundle}. \subsection{Structure of partial Lie algebroid} \label{___StructureofPartialLieAlgebroid} We have the following generalization of the notion of convenient Lie algebroid: \begin{definition} \label{D_PartialConvenientLieAlgebroid} Let $(\mathcal{A},\pi,M,\rho)$ be a convenient anchored bundle. Consider a sub-sheaf $\mathfrak{P}_M$ of the sheaf $\Gamma(\mathcal{A})_M$ of sections of $\mathcal{A}$. Assume that $\mathfrak{P}_M$ can be provided with a structure of a sheaf of Lie algebras which satisfies, for any open set $U$ in $M$: \begin{enumerate} \item[(i)] for any $(\mathfrak{a}_1,\mathfrak{a}_2)\in \left( \mathfrak{P}(U) \right) ^2$ and any $f\in C^\infty(U)$, we have the Leibniz condition \begin{eqnarray} \label{eq_rhoCompatibilitySheaf} [\mathfrak{a}_1,f\mathfrak{a}_2]_{\mathfrak{P}(U)}=df(\rho(\mathfrak{a}_1))\mathfrak{a}_2+f[\mathfrak{a}_1,\mathfrak{a}_2]_{\mathfrak{P}(U)}; \end{eqnarray} \item[(ii)] the Lie bracket $[.,.]_{\mathfrak{P}(U)}$ on $\mathfrak{P}(U)$ only depends on the $1$-jets of sections of ${\mathfrak{P}(U)}$; \item[(iii)] $\rho$ induces a Lie algebra morphism from $\mathfrak{P}(U)$ to $\mathfrak{X}(U)$. \end{enumerate} Then $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is called a convenient partial Lie algebroid\index{partial Lie algebroid}. The family $\{ [.,.]_{\mathfrak{P}(U)}, U \textrm{ open set in }M \}$ is called a sheaf bracket\index{sheaf bracket} and is denoted $[.,.]_\mathcal{A}$. \\ A partial convenient Lie algebroid $(\mathcal{A},\pi, M,\rho,\mathfrak{P}_M)$ is called strong\index{partial Lie algebroid!strong} if, for any $x\in M$, the stalk\index{stalk} \[ \mathfrak{P}_x=\underrightarrow{\lim}\{ \mathfrak{P}(U),\;\; \varrho^U_V:\mathfrak{P}(U) \to \mathfrak{P}(V),\;\; U,V \textrm{ open neighbourhoods of } x \textrm{ with } U \supset V \} \] is equal to $\pi^{-1}(x)$.
\end{definition} Any convenient Lie algebroid is a partial Lie algebroid.\\ More generally, if $(\mathcal{A},\pi, M,\rho)$ is a convenient anchored bundle, any convenient subbundle $\mathcal{B}$ of $\mathcal{A}$ such that $ \left( \mathcal{B}, \pi_{\mathcal{B}}=\pi_{| \mathcal{B}},M, \rho_{\mathcal{B}}=\rho_{| \mathcal{B}}, [.,.]_{\mathcal{B}} \right) $ is a convenient Lie algebroid provides a structure of convenient partial Lie algebroid on $\mathcal{A}$, which is not strong in general. Another type of example of convenient partial Lie algebroids will be described in the context of the prolongation of a convenient Lie algebroid in the next section. This convenient partial Lie algebroid will be a strong partial Lie algebroid. \begin{remark} \label{R_PartialLieBracket} In local coordinates, the Lie bracket $[.,.]_{\mathfrak{P}(U)}$ can be written as in (\ref{eq_loctrivct}). \end{remark} \subsection{Derivative operators} \label{___DerivativeOperators} \subsubsection{Preliminaries} \label{____Preliminaries} If $U$ is a $c^\infty$-open subset of a convenient space $\mathbb{E}$, the space $C^\infty(U,\mathbb{F})$ of smooth maps from $U$ to a convenient space $\mathbb{F}$ is a convenient space (cf. \cite{KrMi}, 3.7 and 3.11).\\ The space $L(\mathbb{E},\mathbb{F})$ of bounded linear maps from $\mathbb{E}$ to $\mathbb{F}$ endowed with the topology of uniform convergence on bounded subsets of $\mathbb{E}$ is a closed subspace of $C^\infty(\mathbb{E},\mathbb{F})$ and so is a convenient space.\\ More generally, the set $L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{F})$ of all bounded $k$-linear alternating mappings from $\mathbb{E}^k$ to $\mathbb{F}$ endowed with the topology of uniform convergence on bounded sets is a closed subspace of $C^\infty(\mathbb{E}^k,\mathbb{F})$ (cf.
\cite{KrMi}, Corollary 5.13) and so $L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{F})$ is a convenient space.\\ On the other hand, if $\bigwedge^k(\mathbb{E})$ is the set of alternating $k$-tensors on $\mathbb{E}$, then $\bigwedge^k(\mathbb{E})$ is isomorphic as a locally convex topological space to $L^{k}_{\operatorname{alt}}(\mathbb{E}):=L^{k}_{\operatorname{alt}}(\mathbb{E},\mathbb{R})$ (cf. \cite{KrMi}, Corollary 5.9) and so has a natural structure of convenient space.\\ Recall that bounded linear maps are smooth (cf. \cite{KrMi}, Corollary 5.5).\\ Let us consider a convenient vector bundle $\pi:\mathcal{A}\rightarrow M$ with typical fibre $\mathbb{A}$. We study the bundle \[ \begin{array} [c]{cccc} \pi^k: & L_{\operatorname{alt}}^k(\mathcal{A})=\displaystyle\bigcup_{x\in M}L_{\operatorname{alt}}^k(\mathcal{A}_x) & \to & M\\ & (x,\omega) & \mapsto & x. \end{array} \] Using any atlas for the bundle structure of $\pi:\mathcal{A}\rightarrow M$, it is easy to prove that $\pi^k:L_{\operatorname{alt}}^k(\mathcal{A})\rightarrow M$ is a convenient vector bundle. The vector space of local sections of $L_{\operatorname{alt}}^k(\mathcal{A}_U)$ is denoted by $\bigwedge^{k}\Gamma^*(\mathcal{A}_U)$ and is called the set of $k$-exterior differential forms on $\mathcal{A}_U$. We denote by $\bigwedge^k\Gamma^*(\mathcal{A})$ the sheaf of sections of $\pi^k: L_{\operatorname{alt}}^k(\mathcal{A})\to M$ and $\bigwedge\Gamma^*(\mathcal{A})=\displaystyle\bigcup_{k=0}^\infty\bigwedge^k\Gamma^*(\mathcal{A})$ the sheaf of associated graded exterior algebras.
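For instance, when $\mathcal{A}=TM$, this construction recovers ordinary differential forms; the identification below is a sketch based only on the definitions above.

```latex
% For A = TM, the fibre of pi^k at x is L^k_alt(T_xM), so over any open
% set U one recovers the usual space of differential k-forms on U:
\[
\textstyle\bigwedge^{k}\Gamma^{*}(TM_{U})
=\Gamma\left(L^{k}_{\operatorname{alt}}(TM_{U})\right)
=\Omega^{k}(U).
\]
```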
\\ {\bf In this section, we assume that $(\mathcal{A},\pi, M,\rho)$ is an anchored bundle and that $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is a fixed strong partial Lie algebroid}.\\ This situation is always satisfied if $(\mathcal{A},\pi, M,\rho, \left[.,.\right]_{\mathcal{A}})$ is a Lie algebroid and occurs for the prolongation of a convenient Lie algebroid (cf. $\S$ \ref{__ProlongationOfAConvenientLieAlgebroidAlongAFibration}). This context also occurs in the setting of partial Poisson manifolds (cf. \cite{PeCa}). \subsubsection{ Insertion operator } \label{____InteriorProduct} Let $\mathfrak{a}$ be a local section of $\mathcal{A}$ defined on an open set $U$. As in \cite{KrMi}, 33.10, we have \begin{proposition} \label{D_InsertionOperator} The insertion operator\index{insertion operator}\index{operator!insertion} $i_\mathfrak{a}$ is the graded endomorphism of degree $-1$ defined by: \begin{enumerate} \item \begin{enumerate} \item[(i)] For any function $f\in C^\infty(U)$ \begin{eqnarray} \label{eq_i0} i_{\mathfrak{a}}\left( f\right) =0 \end{eqnarray} \item[(ii)] For any $k$-form $\omega$ (where $k>0$), \begin{eqnarray} \label{eq_iq} \left( i_{\mathfrak{a}}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k-1}\right)(x)=\omega( \mathfrak{a}(x),\mathfrak{a}_1(x),\dots,\mathfrak{a}_{k-1}(x)). \end{eqnarray} \end{enumerate} \item $\hfil{ i_\mathfrak{a}(\omega \wedge \omega^\prime)=i_\mathfrak{a}(\omega)\wedge \omega^\prime+(-1)^{{\rm deg}\,\omega}\,\omega\wedge i_\mathfrak{a}(\omega^\prime). }$ \end{enumerate} \end{proposition} \subsubsection{Lie derivative} \label{____Liederivative} \begin{proposition} \label{P_Liekform} For $k\geq 0$, let $\omega$ be a local $k$-form, that is, an element of $\bigwedge^{k}\Gamma^*(\mathcal{A}_U)$ for some open set $U$ of $M$.
Given any section $\overline{\mathfrak{a}}\in \mathfrak{P}(U)$, the \emph{Lie derivative}\index{Lie derivative} with respect to $\overline{\mathfrak{a}}$, denoted by $L_{\overline{\mathfrak{a}}}^\rho$, is the graded endomorphism of degree $0$ defined in the following way: \begin{enumerate} \item For any function $f\in C^\infty(U)$, \begin{eqnarray} \label{eq_L0} L_{\overline{\mathfrak{a}}}^{\rho}(f) = i_{{\rho}\circ\overline{ \mathfrak{a}}}\left( df\right), \end{eqnarray} that is, $L_{\overline{\mathfrak{a}}}^{\rho}(f)=L_{\rho(\overline{\mathfrak{a}})}f$, where $L_{X}$ denotes the usual Lie derivative with respect to the vector field $X$ on $M$. \item For any $k$-form $\omega$ (where $k>0$), \begin{eqnarray} \label{eq_Lq} \begin{aligned} \left( L_{\overline{\mathfrak{a}}}^{\rho}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right)(x)= & L_{\overline{\mathfrak{a}}}^{\rho}\left( \omega\left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right) \right)(x)\\ &-{\displaystyle\sum\limits_{i=1}^{k}} \omega\left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{i-1} ,\left[ \overline{\mathfrak{a}},\overline{\mathfrak{a}}_{i}\right] _{\mathcal{A}},\mathfrak{a}_{i+1},\dots,\mathfrak{a}_{k}\right)(x) \end{aligned} \end{eqnarray} where $\overline{\mathfrak{a}}_i$ is any section of $\mathfrak{P}(U)$ such that $\overline{\mathfrak{a}}_i(x)={\mathfrak{a}}_i(x)$ for $i \in \{1, \dots, k\} $. \end{enumerate} \end{proposition} \begin{proof} Since the problem is local, we may assume that $U$ is a $c^\infty$-open set in $\mathbb{M}$ over which $\mathcal{A}$ is trivial. Fix a section $\overline{\mathfrak{a}}\in\mathfrak{P}(U)$ and some $x\in U$. After shrinking $U$ if necessary, if $f$ is a smooth function on $U$, it is clear that (\ref{eq_L0}) is well defined.\\ For $k>0$, let $\omega\in \bigwedge^{k}\Gamma^*(\mathcal{A}_U)$.
Since we have a strong partial Lie algebroid, this implies that for any $k$-tuple $(\mathfrak{a} _1,\dots,\mathfrak{a}_k)$ of local sections on $U$ of $\mathcal{A}$, the value $\omega(\mathfrak{a}_1(x),\dots,\mathfrak{a}_k(x))$ is well defined. Let $\overline{\mathfrak{a}}_i\in \mathfrak{P}(U)$ be such that $\mathfrak{a}_i(x)=\overline{\mathfrak{a}}_i(x)$ for $i \in \{1,\dots,k\}$; we apply the formula (\ref{eq_Lq}) to $(\overline{\mathfrak{a}},\overline{\mathfrak{a}}_1,\dots, \overline{\mathfrak{a}}_k)$. In our context, $\omega$ is a smooth field over $U$ with values in $L_{\operatorname{alt}}^k(\mathbb{A})$ and each $\overline{\mathfrak{a}}_i$ is a smooth map from $U$ to $\mathbb{A}$. In this way, we have \begin{align*} L^\rho_{\overline{\mathfrak{a}}}\left(\omega \left( \overline{ \mathfrak{a}}_{1},\dots,\overline{ \mathfrak{a}}_{k}\right)\right)(x) =& d_x\omega \left( \rho(\overline{\mathfrak{a}}); \overline{ \mathfrak{a}}_{1}(x),\dots,\overline{ \mathfrak{a}}_{k}(x) \right)\\ &+\sum_{i=1}^k\omega \left( \overline{ \mathfrak{a}}_{1}(x),\dots, d_x\overline{ \mathfrak{a}}_{i}(\rho(\overline{\mathfrak{a}})), \dots,\overline{ \mathfrak{a}}_{k}(x) \right). \end{align*} Since $[.,.]_{\mathfrak{P}(U)}$ only depends on the $1$-jets of sections, as for an almost Lie bracket (cf. Remark \ref{R_PartialLieBracket}), we have: \[ \left[ \overline{\mathfrak{a}},\overline{\mathfrak{a}}_{i}\right]_{\mathcal{A}}(x)=d_x\overline{\mathfrak{a}}_{i}(\rho( \overline{\mathfrak{a}}))-d_x\overline{\mathfrak{a}}(\rho( \overline{\mathfrak{a}}_i))+ C_x( \overline{\mathfrak{a}}(x), \overline{\mathfrak{a}}_i(x)).
\] It follows that we have \begin{align*} \left( L_{\overline{\mathfrak{a}}}^{\rho}\omega\right) \left( \mathfrak{a}_{1},\dots,\mathfrak{a}_{k}\right)(x)= d_x\omega\left(\rho(\overline{\mathfrak{a}}); \overline{ \mathfrak{a}}_{1}(x),\dots,\overline{ \mathfrak{a}}_{k}(x)\right)\\ + {\displaystyle\sum\limits_{i=1}^{k}} \omega\left( \overline{\mathfrak{a}}_{1}(x),\dots,\overline{\mathfrak{a}}_{i-1}(x) ,d_x\overline{\mathfrak{a}}(\rho( \overline{\mathfrak{a}}_i))- C_x( \overline{\mathfrak{a}}(x), \overline{\mathfrak{a}}_i(x)),\overline{\mathfrak{a}}_{i+1}(x),\dots,\overline{\mathfrak{a}}_{k}(x) \right), \end{align*} which implies that $L_{\overline{\mathfrak{a}}}^{\rho}\omega$ is a well defined skew-symmetric $k$-form on $\mathcal{A}_U$, since its value at $x$ only depends on the $1$-jet of $\overline{\mathfrak{a}}$ and of $\omega$ and the values of $(\mathfrak{a}_1,\dots,\mathfrak{a}_k)$ at $x$. Now, since $x\mapsto C_x$ is a smooth, hence bounded, map from $U$ to $L_{\operatorname{alt}}^2(\mathbb{A})$, since the differential of functions is a bounded morphism of convenient spaces (cf. \cite{KrMi}, 3), and since $\rho$ is a bounded morphism of convenient spaces, the proof is complete according to the uniform boundedness principle given in \cite{KrMi}, Proposition 30.1. \end{proof} \begin{remark} \label{R_Liederivativef} From the relation (\ref{eq_L0}), it is clear that the Lie derivative of a function is defined for any section of $\mathcal{A}_U$. Of course, this is also true for any $k$-form on a Lie algebroid. But, for a strong partial Lie algebroid, this is not true for any $k$-form with $k>0$, since the last formula in the previous proof shows clearly that $L_{\overline{\mathfrak{a}}}^{\rho}\omega$ also depends on the $1$-jet of ${\overline{\mathfrak{a}}}$. \end{remark} \begin{remark} \label{R_AlmostLieDerivative} Assume that $(\mathcal{A},\pi, M,\rho)$ is provided with an almost Lie bracket $\left[.,.\right]_{\mathcal{A}}$.
Then the Lie derivative $ L_{{\mathfrak{a}}}^{\rho}\omega$ is again well defined by an evident adaptation of formula (\ref{eq_Lq}) for any local section $\mathfrak{a}$ and $k$-form $\omega$ defined on some open set $U$. Moreover, if the Lie bracket on $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right)$ is induced by the almost Lie bracket $\left[.,.\right]_{\mathcal{A}}$, then the Lie derivative defined in Proposition \ref{P_Liekform} and the previous global one are compatible. \end{remark} \subsubsection{Exterior derivative} \label{____ExteriorDerivative} First, for any function $f$, we can define the $1$-form $d_{\rho}f$ by \begin{eqnarray} \label{eq_d0} d_{\rho}f={{\rho}^t}\circ df \end{eqnarray} where ${\rho}^t:T^{\prime}M\rightarrow \mathcal{A}^{\prime}$ is the transpose of $ \rho$. The Lie derivative with respect to any local section $\mathfrak{a}$ of $\mathcal{A}$ commutes with $d_{\rho}$. The \emph{exterior differential}\index{exterior differential} on $\bigwedge\Gamma^*(\mathcal{A})$ is defined as follows: \begin{proposition} \label{P_Exreriordidderential}${}$ \begin{enumerate} \item[(1)] The exterior differential $d_\rho$ is the graded endomorphism of degree $1$ on $\bigwedge\Gamma^*(\mathcal{A})$ defined in the following way: \begin{enumerate} \item For any function $f$, $d_{\rho}f$ is defined as above; \item For $k>0$ and any $k$-form $\omega$, the exterior differential $d_{\rho}\omega$ is the unique $(k+1)$-form such that, for all $\mathfrak{a}_{0},\dots,\mathfrak{a}_{q}\in \Gamma(\mathcal{A}_U)$ (with $q=k$), \begin{eqnarray} \label{eq_dext} \begin{aligned} \left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{q}\right)(x) & ={\displaystyle\sum\limits_{i=0}^{q}}\left( -1\right) ^{i}L_{\mathfrak{a}_{i}}^{\rho }\left( \omega\left( \mathfrak{a}_{0},\dots,\widehat{ \mathfrak{a}_{i}},\dots, \mathfrak{a}_{q}\right) \right)(x) \\ & +{\displaystyle\sum\limits_{0\leq i<j\leq q}}\left( -1\right) ^{i+j}\left( \omega\left( \left[
\overline{\mathfrak{a}}_{i}, \overline{\mathfrak{a}}_{j}\right] _{\mathcal{A}}, \mathfrak{a}_{0} ,\dots,\widehat{ \mathfrak{a}_{i}},\dots,\widehat{ \mathfrak{a}_{j}},\dots, \mathfrak{a}_{q}\right) \right)(x) \end{aligned} \end{eqnarray} where $\overline{\mathfrak{a}}_i$ is any section of $\mathfrak{P}(U)$ such that $\overline{\mathfrak{a}}_i(x)={\mathfrak{a}}_i(x)$ for $i \in \{0,\dots,q\}$. \end{enumerate} \item[(2)] For any $k$-form $\eta$ and any $l$-form $\zeta$, where $\left( k,l \right) \in \mathbb{N}^2$, we have the following properties: \begin{equation} \label{Eq_WedgeProduct} d_\rho(\eta\wedge\zeta)=d_\rho(\eta)\wedge \zeta+(-1)^k\eta\wedge d_\rho(\zeta); \end{equation} \begin{equation} \label{Eq_dcircd} d_{\rho}\circ d_{\rho}={d_\rho}^2=0. \end{equation} \end{enumerate} \end{proposition} \begin{proof}${}$\\ (1) Using the same context as in the proof of Proposition \ref{P_Liekform}, on the one hand, in local coordinates, we have \begin{align*} L_{\overline{\mathfrak{a}}_{i}}^{\rho} \left( \omega \left( \mathfrak{a}_{0},\dots,\widehat{\mathfrak{a}_{i}},\dots, \mathfrak{a}_{q}\right)\right)(x) &= d_x\omega\left(\rho(\overline{\mathfrak{a}}_i); \overline{ \mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}(x)},\dots,\overline{ \mathfrak{a}}_{q}(x)\right)\\ &\;\;\;+\sum_{j\neq i}\omega \left(\overline{ \mathfrak{a}}_{0}(x),\dots, d_x\overline{ \mathfrak{a}}_{j}(\rho(\overline{\mathfrak{a}}_i)), \dots,\widehat{\overline{ \mathfrak{a}}_{i}(x)},\dots,\overline{ \mathfrak{a}}_{q}(x)\right), \end{align*} where, in each summand, $d_x\overline{\mathfrak{a}}_{j}(\rho(\overline{\mathfrak{a}}_i))$ occupies the $j$-th slot. On the other hand, we have \begin{eqnarray*} \begin{aligned} &\omega\left( \left[ \overline{\mathfrak{a}}_{i}, \overline{\mathfrak{a}}_{j}\right] _{\mathcal{A}}, \mathfrak{a}_{0} ,\dots,\widehat{ \mathfrak{a}_{i}},\dots,\widehat{ \mathfrak{a}_{j}},\dots, \mathfrak{a}_{q}\right) (x)\\ &=\omega\left( d_x\overline{\mathfrak{a}}_{j}(\rho( \overline{\mathfrak{a}}_i))-d_x\overline{\mathfrak{a}}_{i}(\rho( \overline{\mathfrak{a}}_j))+ C_x( \overline{\mathfrak{a}}_i(x), \overline{\mathfrak{a}}_j(x)), \overline{\mathfrak{a}}_{0}(x) ,\dots,\widehat{ \overline{\mathfrak{a}}_{i}(x)},\dots,\widehat{ \overline{\mathfrak{a}}_{j}(x)},\dots, \overline{\mathfrak{a}}_{q}(x)\right). \end{aligned} \end{eqnarray*} Finally, as $\rho(\mathfrak{a}_i(x))=\rho(\overline{\mathfrak{a}}_i(x))$ for $i \in \{0,\dots, q\}$, we obtain: \begin{eqnarray*} \begin{aligned} &\left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{q}\right)(x)={\displaystyle\sum\limits_{i=0}^{q}}\left( -1\right) ^{i} d_x\omega\left(\rho(\overline{\mathfrak{a}}_i); \overline{ \mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}(x)},\dots,\overline{ \mathfrak{a}}_{q}(x) \right) \\ &+{\displaystyle\sum\limits_{0\leq i<j\leq q}}\left( -1\right)^{i+j}\left( \omega\left( C_x\left( \overline{\mathfrak{a}}_i(x), \overline{\mathfrak{a}}_j(x)\right), \overline{\mathfrak{a}}_{0}(x),\dots,\widehat{\overline{ \mathfrak{a}}_{i}}(x),\dots,\widehat{ \overline{\mathfrak{a}}_{j}}(x),\dots, \overline{\mathfrak{a}}_{q}(x) \right)\right). \end{aligned} \end{eqnarray*} Since this value only depends on the $1$-jet of $\omega$ at $x$ and the value of each $\overline{\mathfrak{a}}_i(x)$ for $i \in \{0,\dots,q\}$, it follows that $d_{\rho}\omega$ is a well defined $(q+1)$-form by the same arguments as at the end of the proof of Proposition \ref{P_Liekform}.\\ (2) According to the definition of the wedge product, the last formula in local coordinates for $ \left( d_{\rho}\omega\right) \left( \mathfrak{a}_{0},\dots, \mathfrak{a}_{q}\right)$ clearly implies relation (\ref{Eq_WedgeProduct}).\\ Since the Lie bracket on $\mathfrak{P}(U)$ satisfies the Jacobi identity for any open set $U$, and since the differential $d_\rho\omega$ only depends on the $1$-jet of $\omega$, as in finite dimension, it follows that $d_\rho(d_\rho \omega)=0$.
\end{proof} \subsubsection{Nijenhuis endomorphism} \label{____NijenhuisEndomorphism} In this subsection, we only consider the case of a convenient Lie algebroid $\left( \mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}} \right) $. Let $A$ be an endomorphism of $\mathcal{A}$. The \emph{Lie derivative of $A$} with respect to a local section $\mathfrak{a}$ is defined by \begin{eqnarray} \label{eq_LieDerivativeEndomorphism} L^\rho_\mathfrak{a}A(\mathfrak{b})=[\mathfrak{a},A \left( \mathfrak{b} \right) ]_\mathcal{A}-A \left( [\mathfrak{a},\mathfrak{b}]_\mathcal{A} \right) \end{eqnarray} for all local or global sections $\mathfrak{b}$ with the same domain as $\mathfrak{a}$.\\ The \emph{Nijenhuis tensor}\index{Nijenhuis tensor}\index{tensor!Nijenhuis} of $A$ is the tensor of type $(1,2)$ defined by: \begin{eqnarray} \label{eq_NijenhuisEndomorphism} N_A(\mathfrak{a},\mathfrak{b})=[A\mathfrak{a},A\mathfrak{b}]_\mathcal{A}-A[A\mathfrak{a},\mathfrak{b}]_\mathcal{A}-A[\mathfrak{a},A\mathfrak{b}]_\mathcal{A}+A^2[\mathfrak{a},\mathfrak{b}]_\mathcal{A} \end{eqnarray} for all local or global sections $\mathfrak{a}$ and $\mathfrak{b}$ with same domain. \begin{remark} \label{R_PartialAlgebroid} Consider a partial Lie algebroid $ \left( \mathcal{A},\pi, M,\rho,\mathfrak{P}_M \right) $. If $A$ is an endomorphism of the sheaf $\mathfrak{P}_M$, the same formulae are well defined for any local section $\overline{\mathfrak{a}}$ of $\mathfrak{P}_M$. In this way, we can also define the Nijenhuis tensor $N_A$ as a morphism of sheaves on $\mathfrak{P}_M$. \end{remark} \subsection{Lie morphisms and Lie algebroid morphisms} \label{___LieAlgebroidMorphism} Let $(\mathcal{A}_1, \pi_1, M_1,\rho_1, [.,.]_{\mathcal{A}_1})$ and $(\mathcal{A}_2, \pi_2, M_2,\rho_2, [.,.]_{\mathcal{A}_2})$ be two convenient Lie algebroids.\\ We consider a bundle morphism $\Psi:\mathcal{A}_1 \to \mathcal{A}_2$ over $\psi:M_1 \to M_2$.
\\ On the one hand, according to \cite{HiMa} in finite dimension, we can introduce: \begin{definition} \label{D_psiRelatedSections} Consider a section $\mathfrak{a}_1$ of $\mathcal{A}_1$ over an open set $U_1$ and a section $\mathfrak{a}_2$ of $\mathcal{A}_2$ over an open set $U_2$ which contains $\psi(U_1)$. We say that the pair of sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ is $\psi$-related\index{related pairs of sections} if we have \begin{description} \item{\bf (RS)} $\Psi\circ \mathfrak{a}_1=\mathfrak{a}_2\circ \psi$. \end{description} \end{definition} \begin{definition} \label{D_LieMorphism} $\Psi$ is called a Lie morphism\index{Lie morphism}\index{morphism!Lie} over $\psi$ if it fulfills the following conditions: \begin{description} \item[\textbf{(LM 1)}] $\rho_2 \circ \Psi= T\psi \circ \rho_1$; \item[\textbf{(LM 2)}] $\Psi \circ [\mathfrak{a}_1,\mathfrak{a}^\prime_1]_{\mathcal{A}_1} = [\mathfrak{a}_2,\mathfrak{a}'_2]_{\mathcal{A}_2}\circ \psi$ for all $\psi$-related pairs of sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ and $(\mathfrak{a}^\prime_1,\mathfrak{a}^\prime_2)$. \end{description} \end{definition} \begin{remark} \label{R_LieMorphismPartialAlgebroid} For $i \in \{1,2\}$, let $ \left( \mathcal{A}_i, \pi_i, M_i,\rho_i, \mathfrak{P}_{M_i} \right) $ be a partial Lie algebroid.\\ We consider a sheaf morphism $\Psi:\mathfrak{P}_{M_1} \to \mathfrak{P}_{M_2}$ over a smooth map $\psi:M_1 \to M_2$. Then Definition \ref{D_psiRelatedSections} makes sense for pairs of sections $(\mathfrak{a}_1,\mathfrak{a}_2)\in \mathfrak{P}_{M_1}\times\mathfrak{P}_{M_2}$ which are then called $\psi$-related.
If $[.,.]_{\mathcal{A}_i}$ is the sheaf bracket defined on $\mathfrak{P}_{M_i}$, the assumption \emph{\textbf{(LM 2)}} in Definition \ref{D_LieMorphism} also makes sense for two pairs of such $\psi$-related sections.\\ Thus $\Psi$ will be called a {\bf Lie morphism of partial convenient Lie algebroids} if it satisfies the assumptions \emph{\textbf{(LM 1)}} and \emph{\textbf{(LM 2)}} for two pairs of $\psi$-related sections of $\mathfrak{P}_{M_1}\times\mathfrak{P}_{M_2}$.\\ \end{remark} For any local $k$-form $\omega$ on $\mathcal{A}_2$ defined on $U_2$, we denote by $\Psi^*\omega$ the local $k$-form on $\mathcal{A}_1$ defined on $U_1=\psi^{-1}(U_2)$ by: \begin{eqnarray} \label{eq_PullbackOmega} (\Psi^*\omega)_{x_1}(\mathfrak{a}_1,\dots,\mathfrak{a}_k)=\omega_{\psi(x_1)}\left( \Psi(\mathfrak{a}_1),\dots,\Psi(\mathfrak{a}_k) \right) \end{eqnarray} for all $x_1\in U_1$.\\ On the other hand, as classically in finite dimension, we can introduce: \begin{definition} \label{D_ClassicLieAlgebroidMorphism} $\Psi$ is a Lie algebroid morphism over $\psi$ if and only if we have \begin{description} \item[\textbf{(LAM 1)}] $\Psi^*(d_{\rho _2}f) = d_{\rho _1} \left( f \circ \psi \right) $ for all $f \in C^\infty (U_2)$; \item[\textbf{(LAM 2)}] $\Psi^*(d_{\rho _2} \omega)=d_{\rho _1} \Psi^*(\omega)$ for any $1$-form $\omega$ on $ \{ \mathcal{A}_2 \} _{U_2}$. \end{description} \end{definition} It is easy to see that conditions \textbf{(LM 1)} and \textbf{(LAM 1)} are equivalent (cf. proof of Proposition \ref{P_psiDiffeomorphism}). Property \textbf{(LM 2)} implies property \textbf{(LAM 2)} for $\psi $-related sections but, in general, a pair of local sections $(\mathfrak{a}_1,\mathfrak{a}_2)$ of $\mathcal{A}_1$ and $ \mathcal{A}_2$ is not $\psi$-related, while each side of (\ref{eq_PullbackOmega}) is well defined for any such pair.
On the other hand, under the assumption of \textbf{(LM 2)}, we have \begin{eqnarray*} [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2}\left(\psi(x_1)\right)=([\mathfrak{a}'_1,\mathfrak{a}'_2]_{\mathcal{A}_2})\left(\psi(x_1) \right) \end{eqnarray*} for any $x_1\in U_1$. Therefore the relation \textbf{(LAM 2)} is satisfied for any such pair $ (\mathfrak{a}_1,\mathfrak{a}_2)$ of sections of $\mathcal{A}_1$ which are $\psi$-related to a pair $ (\mathfrak{a}_1^\prime,\mathfrak{a}_2^\prime)$ of sections of $\mathcal{A}_2$. Of course, this property is no longer true for any pair $(\mathfrak{a}_1,\mathfrak{a}_2)$ of local sections of $\mathcal{A}_1$, and so the bracket "$[\Psi (\mathfrak{a}_1),\Psi (\mathfrak{a}^\prime_1)]_{\mathcal{A}_2}(\psi(x_1))$" is not defined. Thus, in general, both definitions are not comparable. However, if $\psi$ is a local diffeomorphism, we have: \begin{proposition} \label{P_psiDiffeomorphism} Let $ \left( \mathcal{A}_1, \pi_1, M_1,\rho_1, [.,.]_{\mathcal{A}_1} \right) $ and $ \left( \mathcal{A}_2, \pi_2, M_2,\rho_2, [.,.]_{\mathcal{A}_2} \right) $ be two convenient Lie algebroids. We consider a bundle morphism $\Psi:\mathcal{A}_1\to \mathcal{A}_2$ over a local diffeomorphism $\psi:M_1\to M_2$. Then $\Psi$ is a Lie algebroid morphism if and only if it is a Lie morphism. \end{proposition} For instance, given any convenient Lie algebroid $ \left( \mathcal{A}, \pi, M,\rho, [.,.]_{\mathcal{A}} \right) $, then $\rho$ is a Lie morphism and a Lie algebroid morphism from $ \left( \mathcal{A}, \pi, M,\rho, [.,.]_{\mathcal{A}} \right) $ to the convenient Lie algebroid $(TM, p_M, M, Id, [.,.])$.\\ \begin{proof} Since the set of differentials $\{df, \; f \textrm{ smooth map around } x_2\in M_2\}$ is a separating family on $T_{x_2}M_2$, by an elementary calculation, we obtain the equivalence $\textbf{(LM 1)}\;\Leftrightarrow\; \textbf{(LAM 1)}$.\\ First, note that \textbf{(LM 2)} and \textbf{(LAM 2)} are properties of germs.
Thus the equivalence is in fact a local problem. Fix some $x^0_1\in M_1$ and $x^0_2=\psi(x^0_1)$. Since $\psi$ is a local diffeomorphism, the models of $M_1$ and of $M_2$ are the same convenient space $\mathbb{M}$ and we have charts $(U_1,\phi_1)$ and $(U_2,\phi_2)$ around $x^0_1$ in $M_1$ and $x^0_2$ in $M_2$ such that for $i \in \{1,2\}$: \begin{description} \item[--] $\phi_i(x^0_i)=0\in \mathbb{M}$; \item[--] $\phi_2\circ \psi\circ \phi_1^{-1}$ is a diffeomorphism between the $c^\infty$-open sets $\mathsf{U}_1:=\phi_1(U_1)$ and $\mathsf{U}_2:=\phi_2(U_2)$; \item[--] there is a trivialization $\tau_i: \{\mathcal{A}_i\} _{U_i}\to U_i \times\mathbb{A}_i$. \end{description} Thus, without loss of generality, we may assume that $M_i$ is a $c^\infty$-open neighbourhood of $0\in \mathbb{M}$, $\psi$ is a diffeomorphism from $M_1$ to $M_2$ and $\mathcal{A}_i=M_i\times \mathbb{A}_i$. In this way, the anchor $\rho_i$ is a smooth map from $M_i$ to $\operatorname{L}(\mathbb{A}_i,\mathbb{M})$ and each section $\mathfrak{a}_i$ of $\mathcal{A}_i$ is a smooth map from $M_i$ to $\mathbb{A}_i$.
Under this context, on the one hand, for all $x_1\in M_1$, we have \begin{eqnarray*} \begin{aligned} \Psi^*d_{\rho_2}\omega(\mathfrak{a},\mathfrak{a}')(x_1) & = {d}_{\psi(x_1)} \left( \omega(\Psi(\mathfrak{a}')) \right) \left( \rho_2\circ \Psi(\mathfrak{a}) \right) -{d}_{\psi(x_1)} \left( \omega(\Psi(\mathfrak{a})) \right) \left(\rho_2\circ \Psi(\mathfrak{a}') \right) \\ &-\omega \left( [\Psi(\mathfrak{a}),\Psi(\mathfrak{a}')]_{\mathcal{A}_2} \right) \left( \psi(x_1) \right).\\ \end{aligned} \end{eqnarray*} On the other hand, we have \begin{eqnarray*} \begin{aligned} d_{\rho_1}\Psi^*\omega(\mathfrak{a},\mathfrak{a}')(x_1)& ={d}_{\psi(x_1)}\left( \omega (\Psi(\mathfrak{a}')) \right) \left( T\psi\circ\rho_1(\mathfrak{a}) \right)\\ &-{d}_{\psi(x_1)}\left( \omega (\Psi(\mathfrak{a})) \right) \left(T\psi\circ\rho_1(\mathfrak{a}') \right) -\omega \left( \Psi([\mathfrak{a},\mathfrak{a}']_{\mathcal{A}_1}) \right) (\psi(x_1)).\\ \end{aligned} \end{eqnarray*} Note that for any two pairs of $\psi$-related sections $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$, as $\psi$ is a diffeomorphism, \textbf{(LM 2)} is equivalent to \begin{eqnarray}\label{eq_Psia1a2} [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2}\left(\psi(x_1)\right)=\Psi([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_1})\left(\psi(x_1)\right) \end{eqnarray} for all $x_1\in M_1$. Thus, if \textbf{(LM 1)} and \textbf{(LM 2)} are true, then, in the previous local context, \textbf{(LAM 2)} is equivalent to \begin{eqnarray} \label{eq_OmegaBracket} \omega \left( \Psi([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}_1}) \right)(\psi(x_1)) =\omega\left( [\Psi(\mathfrak{a}_1),\Psi(\mathfrak{a}_2)]_{\mathcal{A}_2} \right) (\psi(x_1)) \end{eqnarray} for any $1$-form $\omega$ on $\mathcal{A}_2$ and any $x_1\in M_1$.
As $\psi$ is a diffeomorphism, for any pair of sections $(\mathfrak{a}_1, \mathfrak{a}_2)$ of $\mathcal{A}_1$, if we set $ \mathfrak{a} _1^\prime=\Psi(\mathfrak{a}_1)\circ\psi^{-1}$ and $ \mathfrak{a}_2^\prime=\Psi(\mathfrak{a}_2)\circ\psi^{-1}$, then $(\mathfrak{a}_1, \mathfrak{a}^\prime_1)$ and $(\mathfrak{a}_2, \mathfrak{a}^\prime_2)$ are $\psi$-related and it follows that \textbf{(LM 1)} and \textbf{(LM 2)} imply \textbf{(LAM 1)} and \textbf{(LAM 2)}. Conversely, assume that \textbf{(LAM 1)} and \textbf{(LAM 2)} are true. Consider any two pairs of $\psi$-related sections $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$. In this case the relation (\ref{eq_OmegaBracket}) evaluated on $(\mathfrak{a}_1,\mathfrak{a}_2)$ is equivalent to \textbf{(LAM 2)} for any $1$-form $\omega $ on $U_2$. Since around each point in $M_2$, the set of germs of $1$-forms on $ \mathcal{A}_2$ is separating for germs of sections of $\mathcal{A}_2$, and as $\psi$ is a diffeomorphism, this implies (\ref{eq_OmegaBracket}).\\ It follows that the relation \textbf{(LM 2)} evaluated on both pairs $(\mathfrak{a}_1,\mathfrak{a}_1')$ and $(\mathfrak{a}_2,\mathfrak{a}_2')$ is satisfied, which ends the proof. \end{proof} \subsection{Foliations and Banach-Lie algebroids} \label{__FoliationsAndBanachLieAlgebroids} We first recall the classical notion of integrability of a distribution on a Banach manifold (cf. \cite{Pe1}). Let $M$ be a Banach manifold. \begin{enumerate} \item A distribution\index{distribution} $\Delta$ on $M$ is an assignment $\Delta: x\mapsto\Delta_{x}\subset T_{x}M$ on $M$, where $\Delta_{x}$ is a subspace of $T_{x}M$. The distribution $\Delta$ is called closed if $\Delta_x$ is closed in $T_xM$ for all $x\in M$. \item A vector field $X$ on $M$, defined on an open set $\operatorname{Dom}(X)$, is called tangent to a distribution $\Delta$ if $X(x)$ belongs to $\Delta_{x}$ for all $x\in\operatorname{Dom}(X)$.
\item Let $X$ be a vector field tangent to a distribution $\Delta$ and $\operatorname{Fl}^X_t$ its flow. We say that $\Delta$ is $X$-invariant if $T_x\operatorname{Fl}^X_t(\Delta_x)=\Delta_{\operatorname{Fl}^X_t(x)}$ for all $t$ for which $\operatorname{Fl}^X_t(x)$ is defined. \item A distribution $\Delta$ on $M$ is called integrable if, for all $x_{0}\in M$, there exists a weak submanifold $(N,\phi)$ of $M$ such that $\phi(y_{0})=x_{0}$ for some $y_{0}\in N$ and $T\phi(T_{y}N)=\Delta_{\phi(y)}$ for all $y\in N$. In this case $(N,\phi)$ is called an integral manifold of $\Delta$ through $x_0$. A leaf $L$ is a weak submanifold which is a maximal integral manifold. \item A distribution $\Delta$ is called involutive if for any vector fields $X$ and $Y$ on $M$ tangent to $\Delta$ the Lie bracket $[X,Y]$ defined on $\operatorname{Dom}(X)\cap\operatorname{Dom}(Y)$ is tangent to $\Delta$.\\ \end{enumerate} Classically, in the Banach context, when $\Delta$ is a supplemented subbundle of $TM$, according to the Frobenius Theorem, involutivity implies integrability.\\ In finite dimension, the famous results of H. Sussmann and P. Stefan give necessary and sufficient conditions for the integrability of smooth distributions.\\ A few generalizations of these results to the framework of Banach manifolds can be found in \cite{Ste}. In the context of this section, we have (cf. \cite{Pe1}): \begin{theorem} \label{T_IntegrabilityDistributionRangeAnchor} Let $(\mathcal{A},\pi,M,\rho,[.,.]_{\mathcal{A}})$ be a split Banach-Lie algebroid.\\ If $\rho(\mathcal{A})$ is a closed distribution, then this distribution is integrable. \end{theorem} Note that if $\rho$ is a Fredholm morphism, the assumptions of Theorem \ref{T_IntegrabilityDistributionRangeAnchor} are always satisfied. In the Hilbert framework, only the closedness of $\rho(\mathcal{A})$ is required.
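Two extreme cases illustrate the theorem; the following sketch is based only on the definitions above, not on \cite{Pe1}.

```latex
% Case rho = Id (the tangent algebroid): rho(A) = TM is a closed,
% supplemented distribution, integrable with the single leaf L = M.
% Case rho = 0 (a Lie algebra Banach bundle): each rho(A)_x = {0} is
% closed and trivially supplemented; the leaves are the points of M:
\[
\rho=\operatorname{Id}\;\Longrightarrow\;\rho(\mathcal{A})=TM,\quad L=M;
\qquad
\rho\equiv 0\;\Longrightarrow\;\rho(\mathcal{A})_x=\{0\},\quad L_x=\{x\}.
\]
```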
\section{Prolongation of a convenient Lie algebroid along a fibration} \label{__ProlongationOfAConvenientLieAlgebroidAlongAFibration} \subsection{Prolongation of an anchored convenient bundle } \label{___ProlongationOfAnAnchoredConvenientBundle} Let ${\bf p}:\mathcal{E}\rightarrow M$ be a convenient vector bundle with typical fibre $\mathbb{E}$ and $\mathcal{M}$ an open submanifold of $\mathcal{E}$ such that the restriction of ${\bf p}$ to $\mathcal{M}$ is a surjective fibration over $M$ of typical fibre $\mathbb{O}$ (an open subset of $\mathbb{E}$). We also consider an anchored convenient bundle $({\mathcal{A}},\pi,M,\rho)$. \begin{notations} \label{N_locM} If $(U,\phi)$ is a chart such that $\mathcal{E}_U$ and $\mathcal{A}_U$ are trivializable, then $TM_U=TM_{| U}$ and $T\mathcal{M}_U$ are also trivializable. In this case, we have trivializations and local coordinates \begin{description} \item[--] $\mathcal{E}_U\equiv U\times \mathbb{E}$ and $\mathcal{M}_U\equiv U\times \mathbb{O}$ with local coordinates $\mathsf{m}=(\mathsf{x,e})$ \item[--] $T\mathcal{M}_U\equiv (U\times \mathbb{O})\times\mathbb{M}\times \mathbb{E}$ with local coordinates $(\mathsf{m, v, z})$. \end{description} \end{notations} For $m\in {\bf p}^{-1}(x)$ we set \[ {\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}=\{(a,\mu)\in{\mathcal{A}}_x\times T_m{\mathcal{M}}: \; \rho(a)=T{\bf p}(\mu)\}. \] An element of ${\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}$ will be denoted $(m,a,\mu)$.\\ We set ${\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}=\displaystyle\bigcup_{m\in{\mathcal{M}}}{\mathbf{T}}^{\mathcal{A}}_m{\mathcal{M}}$ and we consider the projection $\hat{{\bf p}}:{\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\rightarrow {\mathcal{M}}$\\ defined by $\hat{{\bf p}}(m,a,\mu)=m$.\\ We introduce the following context: \begin{enumerate} \item Let $\widetilde{\pi}: \widetilde{\mathcal{A}}\rightarrow {\mathcal{M}}$ be the pull-back of the bundle $\pi:{\mathcal{A}}\rightarrow M$ by ${\bf p}:{\mathcal{M}}\rightarrow M$.
We denote by $\widetilde{\bf p}$ the canonical vector bundle morphism such that the following diagram is commutative: \[ \xymatrix{ \widetilde{\mathcal{A}} \ar[r]^{\widetilde{{\bf p}}}\ar[d]_{\widetilde{\pi}} & \mathcal{A} \ar[d]^{\pi}\\ \mathcal{M} \ar[r]^{{\bf p}} & M\\ } \] \item Consider the map \[ \begin{array} [c]{cccc} \hat{ \rho}: & {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}} & \to & T{\mathcal{M}} \\ & (m,a,\mu) & \mapsto & (m,\mu) \end{array} \] and let $ {\bf p}_{\widetilde{\mathcal{A}}}: {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\rightarrow \widetilde{\mathcal{A}}$ be the map defined by ${\bf p}_{\widetilde{\mathcal{A}}}(m,a,\mu)=(m,a)$. Then the following diagrams are commutative \[ \xymatrix{ {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}} \ar[r]^{{{\bf p}}_{\widetilde{\mathcal{A}}}}\ar[d]_{\hat{{\bf p}}} & \widetilde{\mathcal{A}} \ar[d]^{\widetilde{\pi}}\\ \mathcal{M} \ar[r]^{\operatorname{Id}} & \mathcal{M}\\ } \;\;\;\;\;\;\;\;\;\;\;\; \xymatrix{ {\mathbf{T}}^{\mathcal{ A}}\mathcal{M} \ar[r]^{\hat{ \rho}}\ar[d]_{{\bf p}_{\mathcal{A}}} & T\mathcal{M} \ar[d]^{T{\bf p}}\\ \mathcal{A} \ar[r]^{\rho} & TM\\ } \] where ${\bf p}_{\mathcal{A}}=\widetilde{\bf p}\circ {\bf p}_{\widetilde{\mathcal{A}}}$. \item If $ {\bf p}_{\mathcal{M}}: {T}{\mathcal{M}}\rightarrow {\mathcal{M}}$ is the tangent bundle, consider the associated vertical bundle ${\bf p}_\mathcal{M}^\mathbf{V} :\mathbf{V}\mathcal{M}\rightarrow \mathcal{M}$.
Then there exists a canonical bundle isomorphism ${\bf \nu}$ from the pull-back $\widetilde{\bf p}:\widetilde{\mathcal{E}}\rightarrow \mathcal{M}$ of the bundle ${\bf p}:\mathcal{E}\rightarrow M$ over ${\bf p}:\mathcal{M}\rightarrow M$ to ${\bf p}_\mathcal{M}^V :\mathbf{V}\mathcal{M}\rightarrow \mathcal{M}$ so that the following diagram is commutative: \begin{eqnarray} \label{eq_nu} \xymatrix{ \widetilde{\mathcal{E}} \ar[r]^{{\bf \nu}} \ar[d]_{\widetilde{\bf p}} & \mathbf{V}\mathcal{M} \ar[d]^{{\bf p}_\mathcal{M}^V }\\ \mathcal{M} \ar[r]^{\operatorname{Id}} & \mathcal{M}\\ } \end{eqnarray} \end{enumerate} \begin{theorem} \label{T_Prolongation} ${}$ \begin{enumerate} \item $\hat{\bf p}:{\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}\rightarrow {\mathcal{M}}$ is a convenient bundle with typical fibre $\mathbb{A}\times \mathbb{E}$ and $(\mathbf{T}^{\mathcal{A}}\mathcal{M},\hat{\bf p},\mathcal{M},\hat{\rho}) $ is an anchored bundle. \item ${{\bf p}}_{\widetilde{\mathcal{A}}}$ is a surjective bundle morphism whose kernel is a subbundle of $\mathbf{T}\mathcal{M}$. The restriction of $\hat{\rho}$ to $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ is a bundle isomorphism onto $\mathbf{V}\mathcal{M}$. \item Given an open subset $V$ of $M$, for each section $\mathbf{X}$ of $\mathbf{T}\mathcal{M}$ defined on the open set $\mathcal{V}={\mathbf{p}}^{-1}(V)\subset \mathcal{M}$, there exists a pair $(\mathfrak{a},X)$ of a section $\mathfrak{a}$ of $\widetilde{\mathcal{A}}$ and a vector field $X$ on $\mathcal{V}$ such that \begin{eqnarray} \label{eq_aX} \forall m\in \mathcal{V},\; T\mathbf{p}(X(m))=\rho \circ \widetilde{\mathbf{p}}(\mathfrak{a}(m)). \end{eqnarray} Conversely, such a pair $(\mathfrak{a},X)$ which satisfies (\ref{eq_aX}) defines a unique section $\mathbf{X}$ on $\mathcal{V}$, the associated pair of $\mathbf{X}$ is precisely $(\mathfrak{a},X)$ and, with these notations, we have $\hat{\rho}(\mathbf{X})=X$.
\end{enumerate} \end{theorem} \begin{proof} (1) Let $(U,\phi)$ be a chart on $M$ such that we have trivializations $\tau:\mathcal{A}_U\rightarrow \phi(U)\times \mathbb{A}$ and $\Phi:\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\subset \phi(U)\times \mathbb{E}$. Then $T\phi$ is a trivialization of $TM_U$ on $\phi(U)\times \mathbb{M}$ and $T\Phi$ is a trivialization of $T\mathcal{M}_U$ on $\phi(U)\times \mathbb{O}\times\mathbb{M}\times\mathbb{E}$.\\ To be very precise, according to the notations in $\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}, we have \begin{description} \item $\phi(x)\equiv\mathsf{x} $; \item $\tau(x,a)\equiv (\mathsf{x,a})$; \item $\Phi(x,e)\equiv (\mathsf{x,e})$ and for $m=(x,e)$, $\Phi(m)\equiv \mathsf{m}$; \item $T\phi(x,v)\equiv (\mathsf{x,v})$; \item $T\Phi(m, \mu)= T\Phi(x,e,v,z)\equiv(\mathsf{x,e,v,z})$ \end{description} where $\equiv$ stands for ``denoted''.\\ In this local context and with these notations, we have \begin{eqnarray} \label{eq_locTAM} \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\equiv\{\mathsf{(x,e,a,v, z})\in \phi(U)\times\mathbb{O}\times\mathbb{A}\times \mathbb{M}\times\mathbb{E}\;:\; \mathsf{v={\bf \rho}_x(a)}\} \end{eqnarray} where ${\bf \rho}$ corresponds to the local expression of the anchor. It follows that: \begin{eqnarray} \label{eq_LocTAMstrict} \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\equiv\{\mathsf{(x,e,a,\rho_x(a),z}) \;\;: (\mathsf{x,e,a,z})\in \phi(U)\times\mathbb{O}\times \mathbb{A}\times \mathbb{E}\}, \end{eqnarray} and so the map $\mathbf{T}\Phi: \mathbf{T}^{\mathcal{A}}\mathcal{M}_U\rightarrow \phi(U)\times\mathbb{O}\times \mathbb{A}\times \mathbb{E}$ defined by $\mathbf{T}\Phi(x,e,a,v,z)\equiv(\mathsf{x,e,a,z})$ is a smooth map which is bijective.
Moreover, for each $m=(x,e)\in \mathcal{M}_U$ the restriction $\mathbf{T}{\Phi}_m$ of $\mathbf{T}\Phi$ to $\mathbf{T}^{\mathcal{A}}_m\mathcal{M}_U$ is clearly linear and we have the following commutative diagram \[ \xymatrix{ \mathbf{T}^\mathcal{A}\mathcal{M}_U \ar[r]^{\mathbf{T}{\Phi}}\ar[d]_{\hat{\mathbf{p}}} & \phi(U)\times\mathbb{O}\times \mathbb{A}\times\mathbb{E} \ar[d]^{\hat{\pi}_1}\\ {\mathcal{M}}_U \ar[r]^{\Phi} & \phi(U )\times\mathbb{O}\\ } \] This shows that $\mathbf{T}{\Phi}$ is a local trivialization of $\mathbf{T}^\mathcal{A}\mathcal{M}_U$ modelled on $\mathbb{A}\times \mathbb{E}$.\\ Now, in this local context, $\hat{\rho}(x,e,a,v,z)\equiv (\mathsf{x,e,v,z})$.\\ Consider two such chart domains $U$ and $U'$ in $M$ such that $U\cap U'\not=\emptyset$. Then we have: \begin{itemize} \item[--] the transition maps associated to the trivializations $\tau$ and $\tau'$ in $\mathcal{A}$ are of type \[ \mathsf{(x,a)}\mapsto \mathsf{\left( t(x), G_x(a) \right) } \] where $\mathsf{x\mapsto G_{x}}$ takes values in $\operatorname{GL}(\mathbb{A})$, and is a smooth map from $U\cap U^\prime$ into $\operatorname{L}(\mathbb{A})$; \item[--] the transition maps associated to the trivializations $\Phi$ and $\Phi'$ in $\mathcal{M}$ are of type \[ \mathsf{(x,e)} \mapsto \mathsf{\left( t(x),F_x(e) \right) } \] where $\mathsf{x} \mapsto \mathsf{F_{x}}$ takes values in $\operatorname{GL}(\mathbb{E})$, and is a smooth map from $U\cap U^\prime$ into $\operatorname{L}(\mathbb{E})$; \item[--] if $\widetilde{\Phi}$ and $\widetilde{\Phi}'$ are the trivializations of $\widetilde{\mathcal{A}}$ associated to $\Phi$ and $\Phi'$, the transition maps associated to trivializations $\widetilde{\Phi}$ and $\widetilde{\Phi}'$ in $\widetilde{\mathcal{A}}$ are of type \[ \mathsf{(x,e,a)} \mapsto \mathsf{\left( t(x),F_x(e),G_{x}(a) \right) }; \] \item[--] the transition maps associated to trivializations $T\Phi$ and $T\Phi'$ in $T\mathcal{M}$ are of type \[ \mathsf{(x,e,v,z)} \mapsto \left( \mathsf{ t(x),F_x(e),} d\mathsf{_xt(v), H_{(x,e)}(z) } \right) \] where $\mathsf{(x,e)} \mapsto \mathsf{H_{(x,e)}}$ takes values in $\operatorname{GL}(\mathbb{E})$ and is a smooth map from $\mathcal{M}_{U\cap U^\prime}$ to $\operatorname{L}(\mathbb{E})$; \item[--] the transition maps associated to trivializations ${\bf T}\Phi$ and ${\bf T}\Phi'$ in ${\bf T}^\mathcal{A}\mathcal{M}$ are of type \[ \mathsf{(x,e,a,z)} \mapsto \mathsf{\left( t(x),F_x(e),G_{x}(a), H_{(x,e)}(z) \right) }. \] \end{itemize} Clearly, this implies that $\hat{\bf p}:{\bf T}^{\mathcal{A}}\mathcal{M}\to \mathcal{M}$ is a convenient bundle.\\ Now, in a trivialization $\tau$ and $T\phi$, we write $\rho(x,a)\equiv (\mathsf{x,a}) \mapsto (\mathsf{x,\rho_x(a)})$. If, in another trivialization $\tau'$ and $ T\phi'$, we write $\rho(x,a)\equiv (\mathsf{x',a'}) \mapsto (\mathsf{x',\rho'_{x'}(a')})$, then, for the associated transition maps, we have \[ \mathsf{\rho'_{x'}}= d \mathsf{_x t}\circ \mathsf{\rho_x}\circ \mathsf{G_x^{-1}}. \] It follows easily that $\hat{\rho}$ is a convenient bundle morphism.\\ (2) By construction, the following diagram is commutative: \[ \xymatrix{ {\bf T}^{\mathcal{A}}\mathcal{M} \ar[r]^{\hat{\rho}}\ar[d]_{{\bf p}_{\widetilde{\mathcal{A}}}} & T{\mathcal{M}} \ar[d]^{p_\mathcal{M}}\\ \widetilde{\mathcal{A}} \ar[r]^{\widetilde{\pi}} & \mathcal{M}\\ } \] In the trivialization $\widetilde{\Phi}:\widetilde{\mathcal{A}}_{\mathcal{M}_U}\rightarrow \phi(U)\times\mathbb{O}\times \mathbb{A}$, using the same convention as previously, we have \[ \{(x,e,a,v,z)\mapsto {{\bf p}}_{\widetilde{\mathcal{A}}}(x,e,a,v,z)\}\equiv \{(\mathsf{x,e,a,v,z})\mapsto (\mathsf{x,e,a})\}. \] Thus, by analogous arguments as in the proof of (1), it is clear that ${{\bf p}}_{\widetilde{\mathcal{A}}}$ is compatible with the transition maps associated to the trivializations over the chart domains $U$ and $U'$ of $M$ for ${\bf T}^{\mathcal{A}}\mathcal{M}$ and for $\widetilde{\mathcal{A}}$.
Thus ${\bf p}_{\widetilde{\mathcal{A}}}: {\bf T}^{\mathcal{A}}\mathcal{M}\to \widetilde{\mathcal{A}}$ is a convenient bundle morphism which is surjective.\\ \noindent From the construction of $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ we have \[ \ker{{\bf p}}_{\widetilde{\mathcal{A}}}=\{(m,a,\mu)\in \mathbf{T}_m^\mathcal{A}\mathcal{M}\;\;: a=0\}, \] and then the relation $\rho(a)=T{\bf p}(\mu)$ implies $\mu\in \mathbf{V}_m\mathcal{M}$. The definition of $\hat{\rho}$ implies that its restriction to $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ is an isomorphism onto $\mathbf{V}\mathcal{M}$, which ends the proof of (2).\\ (3) Let $\mathbf{X}$ be a section of $\mathbf{T}\mathcal{M}$ defined on $\mathcal{V}={\mathbf{p}}^{-1}(V)$. According to (2), if we set $\mathfrak{a}=\mathbf{p}_{\widetilde{\mathcal{A}}}\circ \mathbf{X}$ and $X=\hat{\rho}(\mathbf{X})$, the pair $(\mathfrak{a},X)$ is well defined and from the definition of $\mathbf{T}^\mathcal{A}\mathcal{M}$ the relation (\ref{eq_aX}) is satisfied. Conversely, if $\mathfrak{a}$ is a section of $\widetilde{\mathcal{A}}$ and $X$ a vector field on $\mathcal{V}$, the relation (\ref{eq_aX}) means exactly that $\mathbf{X}(m)=(\widetilde{\mathbf{p}}(\mathfrak{a}(m)), X(m))$ belongs to $\mathbf{T}_m^\mathcal{A}\mathcal{M}$; so we get a section $\mathbf{X}$ of $\mathbf{T}^\mathcal{A}\mathcal{M}$. Now it is clear that $\mathfrak{a}=\mathbf{p}_{\widetilde{\mathcal{A}}}\circ \mathbf{X}$ and $X=\hat{\rho}(\mathbf{X})$. \end{proof} \begin{definition} \label{D_ProlongantionOfAnAnchorBundleOverAFibration} The anchored bundle $(\mathbf{T}^{\mathcal{A}}\mathcal{M}, \hat{\mathbf{p}}, \mathcal{M}, \hat{\rho})$ is called the prolongation of $(\mathcal{A},\pi,M,\rho)$ over $\mathcal{M}$. The subbundle $\ker{{\bf p}}_{\widetilde{\mathcal{A}}}$ will be denoted $\mathbf{V}^{\mathcal{A}}\mathcal{M}$ and is called the vertical subbundle.
\end{definition} \begin{remark} \label{R_VerticalBundle} According to the proof of Theorem \ref{T_Prolongation}, if ${\bf V}^{\mathcal{A}}\mathcal{M}_U$ is the restriction of ${\bf V}^{\mathcal{A}}\mathcal{M}$ to $\mathcal{M}_U$, we have \[ {\bf V}^{\mathcal{A}}\mathcal{M}_U\equiv\{\mathsf{(x,e,0,0,z}) \;\;: (\mathsf{x,e,z})\in \phi(U)\times\mathbb{O}\times \mathbb{E}\}. \] \end{remark} \begin{examples} \label{Ex_Class} For simplicity in these examples we assume that the model space $\mathbb{M}$ of $M$ and the typical fiber $\mathbb{A}$ of $\mathcal{A}$ are Banach spaces. \begin{enumerate} \item If $\mathcal{A}={\mathcal{M}}=TM$ then we have ${\mathbf{T}}{\mathcal{A}}=TTM$ and ${\mathbf{T}}{\mathcal{A}}^*=TT^{*}M$ and the anchor $\hat{\rho}$ is the identity. \item If $\mathcal{A}={\mathcal{M}}$ then $\mathbf{T}^{\mathcal{A}}\mathcal{A}$ is simply denoted $\mathbf{T}\mathcal{A}$ and the anchor $\hat{\rho}$ is the map $(x,a,b,c)\mapsto (x,a,\rho(b),\nu(c))$ from $\mathbf{T}\mathcal{A}$ to $T\mathcal{A}$. \item If $\mathcal{M}=\mathcal{A}^*$ then $\mathbf{T}^{\mathcal{A}}\mathcal{A}^*$ is simply denoted $\mathbf{T}\mathcal{A}^*$ and the anchor $\hat{\rho}$ is the map $(x,\xi,a,\eta)\mapsto (x,\xi,\rho(a),\nu(\eta))$ from $\mathbf{T}\mathcal{A}^*$ to $T\mathcal{A}^*$. \item If $\mathcal{M}=\mathcal{A}\times_M {\mathcal{A}}^*$ then $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ is simply denoted $\mathbf{T}(\mathcal{A}\times\mathcal{A}^*)$ and the anchor $\hat{\rho}$ is the map $(x,a,\xi,b,c,\eta)\mapsto (x,a,\xi,\rho(b),\nu(c),\nu(\eta))$ from $\mathbf{T}(\mathcal{A}\times\mathcal{A}^*)$ to $T(\mathcal{A}\times\mathcal{A}^*)$. \item If $\mathcal{M}$ is a conic submanifold of $\mathcal{A}$ (cf. \cite{Pe2}), then $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ is simply denoted $\mathbf{T}\mathcal{M}$ and the anchor $\hat{\rho}$ is given by the same expression as in (2).
\end{enumerate} \end{examples} \begin{remark} \label{R_Chart} We come back to the previous general context: $(\mathcal{A},\pi, M,[.,.]_\mathcal{A})$ is a convenient Lie algebroid, $\mathbf{p}:\mathcal{E}\to M$ is a convenient vector bundle and $\mathcal{M}$ is an open submanifold of $\mathcal{E}$ which is fibred over $M$.\\ Let $(U, \phi)$ be a chart of $M$ such that $\mathcal{A}_U$ (resp. $\mathcal{E}_U$) is trivializable and we have denoted \begin{center} $\tau:\mathcal{A}_U\rightarrow \phi(U)\times \mathbb{A}$ (resp. $\Phi : \mathcal{E}_U\rightarrow \phi(U)\times \mathbb{E}$) \end{center} the associated trivialization (cf. notations in the proof of Theorem \ref{T_Prolongation}). Then we have a canonical anchored bundle \begin{center} $(\phi(U)\times \mathbb{A}, \pi_1,\phi(U), \mathsf{r}= T\phi\circ \rho \circ\tau^{-1})$ \end{center} where $\pi_1:\phi(U)\times \mathbb{A}\rightarrow \phi(U)$.\\ Therefore, the prolongation of $\phi(U)\times \mathbb{A}$ over $\phi(U)\times \mathbb{E}$ is then \[ \mathbf{T}^{\phi(U)\times \mathbb{A}}(\phi(U)\times \mathbb{E})=\phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{E} \] and the anchor $\hat{\mathsf{r}}$ is the map $(\mathsf{x,e,a,z}) \mapsto (\mathsf{x,e, r_x(a), z})$.\\ Note that the bundle $\mathbf{T}^{\phi(U)\times \mathbb{A}}(\phi(U)\times \mathbb{E})=\phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{E}$ can be identified with the subbundle \[ \left\{ (\mathsf{x,e,a,r_x(a),z})\in \phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{M}\times\mathbb{E} \right\} \] of $(\phi(U)\times\mathbb{A})\times_{\phi(U)}T(\phi(U)\times \mathbb{E})=\phi(U)\times \mathbb{E}\times\mathbb{A}\times\mathbb{M}\times\mathbb{E}.$\\ An analogous description is true for any open set $\mathcal{U}$ in $\mathcal{E}$ such that $\mathbf{p}(\mathcal{U})=U$ since $\mathcal{U}$ is contained in $\mathcal{M}_U$.\\ \end{remark} Given a fibred morphism $\Psi:{\mathcal{M}}\rightarrow {\mathcal{M}}'$ between the fibred manifolds ${\bf p}:{\mathcal{M}}\rightarrow M$ and ${\bf p}':{\mathcal{M}'}\rightarrow M'$
over $\psi:M\rightarrow M'$, and a morphism of anchored bundles $\varphi$ between $({\mathcal{A}},\pi, M,\rho)$ and $({\mathcal{A}}',\pi', M',\rho')$ over $\psi:M\rightarrow M'$, we get a map \begin{eqnarray}\label{eq_bfTPsi} \begin{array}[c]{cccc} \label{eq_TPsi} {\mathbf{T}}\Psi: & {\mathbf{T}}^{\mathcal{A}}{\mathcal{M}}& \to & {\mathbf{T}}^{\mathcal{A}'}{\mathcal{M}}'\\ & (m,a,\mu) & \mapsto & ( \Psi(m),\varphi(a),T_m\Psi(\mu) ) \end{array} \end{eqnarray} \begin{remark} \label{R_chartTM} As in the proof of Theorem \ref{T_Prolongation}, consider a chart $(\mathcal{U},\Phi)$ of $\mathcal{M}$ where $\mathbf{p}(\mathcal{U})=U$ and let $\tau:\mathcal{A}_U\rightarrow \phi(U)\times\mathbb{A}$ be an associated trivialization of $\mathcal{A}_U$. According to Remark \ref{R_Chart}, since $\Phi$ is a smooth diffeomorphism from $\mathcal{U}$ onto its range $\phi(U)\times\mathbb{O}$ in $\phi(U)\times\mathbb{E}$, then $\mathbf{T}\Phi: \mathbf{T}^{\mathcal{A}}\mathcal{M}_{| \mathcal{U}}\rightarrow \mathbf{T}^{\phi(U)\times \mathbb{A}}\mathcal{U}=\Phi(\mathcal{U})\times\mathbb{A}\times\mathbb{E}$ is a bundle isomorphism and so $(\mathbf{T}^{\mathcal{A}}\mathcal{M}_{| \mathcal{U}},\mathbf{T}\Phi)$ is a chart for $\mathbf{T}^\mathcal{A}\mathcal{M}$. \end{remark} \begin{notations} \label{N_Prolongations} From now on, the anchored bundle $(\mathcal{A},\pi,M,\rho)$ is fixed and, if no confusion is possible, we simply denote by $\mathbf{T}\mathcal{M}$ and $\mathbf{V}\mathcal{M}$ the sets $\mathbf{T}^{\mathcal{A}}\mathcal{M}$ and $\mathbf{V}^\mathcal{A}\mathcal{M}$ respectively. In particular {\bf when $\mathcal{M}=\mathcal{A}$, the prolongation $\mathbf{T}\mathcal{A}$ will be simply called the prolongation of the Lie algebroid $\mathcal{A}$}\index{prolongation of a convenient Lie algebroid}. The bundle $\mathbf{V}\mathcal{M}$ will be considered as a subbundle of $\mathbf{T}\mathcal{M}$ as well as of $T\mathcal{M}$.
\end{notations} \subsection{Connections on a prolongation} \label{___ConnectionsOnAProlongation} Classically (\cite{KrMi}, 37), a \emph{connection}\index{connection} \footnote{In the finite dimensional context such a connection is sometimes called a nonlinear connection.} on a convenient vector bundle $\mathbf{p}:\mathcal{E}\rightarrow M$ is a Whitney decomposition $T\mathcal{E}=H\mathcal{E}\oplus V\mathcal{E}$. Now, as in finite dimension, we introduce this notion on $\mathbf{T}\mathcal{M}$. \begin{definition} \label{D_NLConnection} A connection on $\mathbf{T}\mathcal{M}$ is a decomposition of this bundle into a Whitney sum $\mathbf{T}\mathcal{M}=\mathbf{H}\mathcal{M}\oplus \mathbf{V}\mathcal{M}$.\end{definition} Such a decomposition is equivalent to the datum of an endomorphism $\bf{N}$ of $\mathbf{T}\mathcal{M}$ such that $\mathbf{N}^2=\operatorname{Id}$ with $\mathbf{V}\mathcal{M}=\operatorname{ker} (\operatorname{Id}+\mathbf{N})$ and $\mathbf{H}\mathcal{M}=\operatorname{ker} (\operatorname{Id}-\mathbf{N})$ where $\operatorname{Id}$ is the identity morphism of $\mathbf{T}\mathcal{M}$. We naturally get two projections: \begin{description} \item[ ] $h_\mathbf{N} =\displaystyle\frac{1}{2}(\operatorname{Id}+\mathbf{N}): \mathbf{T}\mathcal{M}\rightarrow \mathbf{H}\mathcal{M}$ \item[ ] $v_\mathbf{N}=\displaystyle\frac{1}{2}(\operatorname{Id}-\mathbf{N}): \mathbf{T}\mathcal{M}\rightarrow \mathbf{V}\mathcal{M}$.
\end{description} $v_\mathbf{N}$ and $h_\mathbf{N}$ are called respectively the \emph{vertical}\index{projector!vertical} and \emph{horizontal} projector\index{projector!horizontal} of $\mathbf{N}$.\\ Using again the context of Remark \ref{R_chartTM}, we have charts $$\Phi:\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\;\;\textrm{ and}\;\;\mathbf{T}\Phi:\mathbf{T}\mathcal{M}_U\rightarrow \phi(U)\times \mathbb{O}\times \mathbb{A}\times\mathbb{E}.$$ If $\mathbf{N}$ is a connection on $\mathbf{T}\mathcal{M}$, then $\mathsf{N}=\mathbf{T}\Phi\circ \mathbf{N}\circ \mathbf{T}\Phi^{-1}$ is a non linear connection on the trivial bundle $ \phi(U)\times \mathbb{O}\times \mathbb{A}\times\mathbb{E}$. Thus $\mathsf{N}$ can be written as a matrix field of endomorphisms of $\mathbb{A}\times\mathbb{E}$ of type \begin{eqnarray} \label{eq_Christoffel} \begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma & -\operatorname{Id}_\mathbb{E} \end{pmatrix} \end{eqnarray} and so the associated horizontal (resp. vertical) projector is given by \begin{description} \item $\mathsf{h_N}_{\mathsf{m}}(\mathsf{a,z})=(\mathsf{a},-\digamma(\mathsf{a}))$; \item $\mathsf{v_N}_{\mathsf{m}}(\mathsf{a,z})=(\mathsf{0},\mathsf{z}+\digamma(\mathsf{a}))$. \end{description} The associated horizontal space in $\{\mathsf{m}\}\times \mathbb{A}\times \mathbb{E}$ is \[ \left\{ (\mathsf{a},-\digamma(\mathsf{a}))\;:\; \mathsf{a}\in \mathbb{A}\right\} \] and the associated vertical space in $\{\mathsf{m}\}\times \mathbb{A}\times \mathbb{E}$ is $\{\mathsf{m}\}\times \{\mathsf{0}\}\times \mathbb{E}$.\\ $\digamma$ is called the \emph{(local) Christoffel symbol of} ${\bf N}$.\\ Let $\widetilde{\bf p}: \widetilde{\mathcal{A}\times\mathcal{E}}\rightarrow \mathcal{M}$ be the fibred product bundle over $\mathcal{M}$ of $({\pi},{\bf p}): {\mathcal{A}}\times {\mathcal{E}}\rightarrow {M}$.
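As a direct check that the matrix form (\ref{eq_Christoffel}) does define a connection, one verifies that it squares to the identity and that its eigenspaces recover the horizontal and vertical subspaces:

```latex
\[
\begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma & -\operatorname{Id}_\mathbb{E} \end{pmatrix}^{2}
=\begin{pmatrix} \operatorname{Id}_\mathbb{A} & 0\\ -2\digamma+2\digamma & \operatorname{Id}_\mathbb{E} \end{pmatrix}
=\operatorname{Id}_{\mathbb{A}\times\mathbb{E}}.
\]
% The (+1)-eigenspace is {(a, -F(a)) : a in A} (horizontal), since
% N(a, -F(a)) = (a, -2F(a) + F(a)) = (a, -F(a));
% the (-1)-eigenspace is {0} x E (vertical).
```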
We have natural inclusions $\iota_1 :\widetilde{\mathcal{A}}\rightarrow \widetilde{\mathcal{A}\times\mathcal{E}}$ and $\iota_2 :\widetilde{\mathcal{E}}\rightarrow \widetilde{\mathcal{A}\times\mathcal{E}}$, given respectively by $\iota_1(m,a)=(m,a,0)$ and $\iota_2(m,z)=(m,0,z)$, such that \begin{eqnarray} \label{eq_DecompAE} \widetilde{\mathcal{A}\times\mathcal{E}}=\iota_1 (\widetilde{\mathcal{A}})\oplus \iota_2 (\widetilde{\mathcal{E}}). \end{eqnarray} With these notations, we have \begin{proposition} \label{P_IsoConnection} ${}$ \begin{enumerate} \item There exists a non linear connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$ if and only if there exists a convenient bundle morphism $\mathbf{H}$ from $\widetilde{\mathcal{A}}$ to $\mathbf{T}\mathcal{M}$ such that $\mathbf{T}\mathcal{M}=\mathbf{H}(\widetilde{\mathcal{A}})\oplus \mathbf{V}\mathcal{M}$. In this case $\mathbf{T}\mathcal{M}$ is isomorphic to $\widetilde{\mathcal{A}\times\mathcal{E}}$. \item Assume that $\mathbf{N}$ is a connection on ${\bf T}\mathcal{M}$. Let $\Upsilon$ be a semi-basic vector valued morphism\footnote{That is, a morphism from ${\bf T}\mathcal{M}$ to ${\bf V}\mathcal{M}$ such that $\Upsilon(\mathbf{Z})=0$ for any local vertical section $\mathbf{Z}$.}; then $\mathbf{N}+\Upsilon$ is a connection on ${\bf T}\mathcal{M}$. Conversely, given any nonlinear connection $\mathbf{N}'$ on ${\bf T}\mathcal{M}$, there exists a unique semi-basic vector valued morphism $\Upsilon$ such that $\mathbf{N}'=\mathbf{N}+\Upsilon$. \end{enumerate} \end{proposition} According to this Proposition we introduce: \begin{definition} \label{D_SplitProlongation} A prolongation of $\mathcal{A}$ over ${\bf p}:\mathcal{M}\to M$ is called a split prolongation\index{split prolongation} if there exists a Whitney decomposition ${\bf T}\mathcal{M}={\bf K}\mathcal{M}\oplus {\bf V}\mathcal{M}$. \end{definition} The following result is a clear consequence of Proposition \ref{P_IsoConnection}.
\begin{corollary} \label{C_SplitProlongation} Let ${\bf T}\mathcal{M}$ be a split prolongation. Then there exists a connection on ${\bf T}\mathcal{M}$ and ${\bf T}\mathcal{M}$ is isomorphic to $\widetilde{\mathcal{A}\times\mathcal{E}}$. \end{corollary} \begin{proof}[Proof of Proposition \ref{P_IsoConnection}]${}$\\ (1) Assume that we have a connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$ and let $\mathbf{T}\mathcal{M}=\mathbf{H}\mathcal{M}\oplus \mathbf{V}\mathcal{M}$ be the associated Whitney decomposition.\\ Let $\mathbf{H}$ be the restriction of $\mathbf{p}_{\widetilde{\mathcal{A}}}$ to $\mathbf{H}\mathcal{M}$. Since $\mathbf{V}\mathcal{M}$ is the kernel of the surjective morphism $\mathbf{p}_{\widetilde{\mathcal{A}}}$, it follows that $\mathbf{H}$ is an isomorphism onto $\widetilde{\mathcal{A}}$. Since we have an isomorphism $\nu:\widetilde{\mathcal{E}}\rightarrow \mathbf{V}\mathcal{M}$, according to (\ref{eq_DecompAE}), it follows easily that $\widetilde{\mathcal{A}\times\mathcal{E}}$ is isomorphic to $\mathbf{T}\mathcal{M}$. The converse is clear.\\ (2) At first, if $\Upsilon$ is semi-basic, then $\operatorname{ker} (\operatorname{Id}+\mathbf{N}+\Upsilon)=\mathbf{V}\mathcal{M}$ and clearly the range of $\operatorname{Id}+\mathbf{N}+\Upsilon$ is a subbundle of $\mathbf{T}\mathcal{M}$ which is supplementary to $\mathbf{V}\mathcal{M}$.\\ On the one hand, if $\mathbf{N}'$ is a connection, we set $\Upsilon=\mathbf{N}'-\mathbf{N}$. Then $\Upsilon(\mathbf{Z})=0$ for all local vertical sections $\mathbf{Z}$. On the other hand, $\Upsilon(\mathbf{X})=(\operatorname{Id} +\mathbf{N}')(\mathbf{X})-(\operatorname{Id} +\mathbf{N})(\mathbf{X})$, which belongs to $\mathbf{V}\mathcal{M}$.\\ \end{proof} A sufficient condition for the existence of a connection on $\mathbf{T}\mathcal{M}$ is given by the following result: \begin{theorem} \label{T_CShconnection} Assume that there exists a linear connection on the bundle $\mathbf{p}:\mathcal{E}\rightarrow M$. Then there exists a connection $\mathbf{N}$ on $\mathbf{T}\mathcal{M}$.
\end{theorem} \begin{proof} Let $\mathbf{p}^{\ast}TM$ (resp. $\mathbf{p}^{\ast}\mathcal{E}$) be the pull-back over $\mathcal{M}$ of $TM\rightarrow M$ (resp. $\mathcal{E}\rightarrow M$).\\ If there exists a linear connection $N$ on the bundle $\mathcal{E}\rightarrow M$, there exists a convenient bundle isomorphism \[ \kappa=(\kappa_1,\kappa_2): T\mathcal{E}\rightarrow \mathbf{p}^{\ast}TM\oplus \mathbf{p}^{\ast}\mathcal{E}\] (cf. Theorem 3.1 of \cite{AgSu} in the Banach setting and \cite{BCP}, Chapter 6, in the convenient setting). Therefore, for an open fibred submanifold $\mathcal{M}$ of $\mathcal{E}$, by restriction, we obtain an isomorphism (again denoted $\kappa$) \[ \kappa:T\mathcal{M}\rightarrow \mathbf{p}^{\ast}TM \oplus \mathbf{p}^{\ast}\mathcal{E}. \] Without loss of generality, we can identify $T\mathcal{M}$ with $\mathbf{p}^{\ast}TM \oplus \mathbf{p}^{\ast}\mathcal{E}$. For $m=(x,e)$, the fibre $\mathbf{T}_m\mathcal{M}$ is then \[ \mathbf{T}_m\mathcal{M}=\{(a,\mu)\in \mathcal{A}_x\times (T_xM\times\mathcal{E}_x)\;\;: \rho(a)=T\mathbf{p}(\mu)\}. \] But, under our identification of $T\mathcal{M}$ with $\mathbf{p}^{\ast}TM\oplus \mathbf{p}^{\ast}\mathcal{E}$, if $m=(x,e)$, this implies that $\mu\in T_m\mathcal{M}$ can be written as a pair $(v,z)\in T_xM\times \mathcal{E}_x$ and so we replace the condition $\rho(a)=T\mathbf{p}(\mu)$ by $\rho_x(a)=v$.\\ Recall that, on the one hand, we have a chart (cf. proof of Theorem \ref{T_Prolongation}): \[ \mathbf{T}\Phi: \mathbf{T}\mathcal{M}_U\rightarrow \Phi(\mathcal{M}_U)\times \mathbb{A}\times \mathbb{E}. \] The value of $\mathbf{T}\Phi(m, a,\mu)$ can be written $(\phi(x),\Phi_x(e),\tau_x(a),T_m\Phi(\mu))$ (value denoted $(\mathsf{x,e,a,z})$) with $T_x\phi (T\mathbf{p}(\mu))=T_x\phi(\rho_x(a))$.
But, under our assumption, we have $T_m\Phi(\mu)=(T_x\phi(\rho_x(a)),\mathsf{z})\in \{(\mathsf{x,e})\}\times \mathbb{M}\times \mathbb{E}$ and so we obtain \begin{eqnarray} \label{eq_TPhiassump} \mathbf{T}\Phi(m, a,\mu)\equiv (\mathsf{x,e,a,z}). \end{eqnarray} On the other hand, we have a trivialization $\widetilde{\tau\times\Phi}$ from $\widetilde{\mathcal{A}\times\mathcal{E}}_{\mathcal{M}_U}$ to $\Phi(\mathcal{M}_U)\times\mathbb{A}\times\mathbb{E}$ over $\Phi$. In fact, we have $(\widetilde{\tau\times\Phi})(x,e,a,z)=(\phi(x),\Phi_x(e),\tau_x(a),\Phi_x(z))$. \\ According to our assumption, the map $\widetilde{\Psi}:\widetilde{\mathcal{A}\times\mathcal{E}}\rightarrow \mathbf{T}\mathcal{M}$ given by \[ \widetilde{\Psi}(m,a,z)=(m,a,\rho(a),z) \] is well defined. In local coordinates, we have \[ \widetilde{\Psi}(\mathsf{x,e,a,z})\equiv(\mathsf{x,e,a,z}). \] Thus $\widetilde{\Psi}$ is the identity in local coordinates and so is a local bundle isomorphism. To complete the proof, we only have to show that under our assumption $\widetilde{\Psi}$ is a convenient bundle morphism.\\ By analogy with the notations used in the proof of Theorem \ref{T_Prolongation} (1), let $(U',\phi')$ be another chart on $M$ and consider all the corresponding trivializations $\tau'$, $\Phi'$ and $T\Phi'$. We set $(\mathsf{x',e',v',z'})=T\Phi'(\mathfrak{e},\mathfrak{z})$ (i.e. $(\mathfrak{e},\mathfrak{z})\equiv(\mathsf{x',e',v',z'})$ following our convention). In these new coordinates, we have $\widetilde{\Psi}(\mathfrak{e},\mathfrak{z}) \equiv(\mathsf{x',e',a',z'})$. Assume that $U\cap U'\not=\emptyset$.
For the change of coordinates, we set $\theta(\mathsf{x})=\phi'\circ\phi^{-1}(\mathsf{x})$, and each associated transition map gives rise to a smooth field of isomorphisms of convenient spaces as follows: \begin{eqnarray*} \begin{aligned} &T_\mathsf{x}\theta(\mathsf{v})=T_\mathsf{x}(\phi'\circ\phi^{-1})(\mathsf{v})\\ &\mathfrak{T}_\mathsf{x}(\mathsf{a})=\left(\tau' \circ\tau ^{-1}\right)_\mathsf{x}(\mathsf{a})\\ &\Theta_\mathsf{x}(\mathsf{e})=(\Phi'\circ \Phi^{-1})_\mathsf{x}(\mathsf{e}). \end{aligned} \end{eqnarray*} Thus, under our assumption, according to (\ref{eq_TPhiassump}), in fact we have \[ \mathbf{T}\Phi'\circ \mathbf{T}\Phi^{-1}(\mathsf{x,e,a,z})=(\theta(\mathsf{x}),\Theta_\mathsf{x}(\mathsf{e}), \mathfrak{T}_\mathsf{x}(\mathsf{a}), \Theta_\mathsf{x}(\mathsf{z})). \] Now with the previous notations we have \[ \left(\widetilde{\tau'\times\Phi'}\right)\circ \left(\widetilde{\tau\times\Phi}\right)^{-1}(\mathsf{x,e,a,z})=(\theta(\mathsf{x}),\Theta_\mathsf{x}(\mathsf{e}), \mathfrak{T}_\mathsf{x}(\mathsf{a}), \Theta_\mathsf{x}(\mathsf{z}))=\mathbf{T}\Phi'\circ \mathbf{T}\Phi^{-1}(\mathsf{x,e,a,z}). \] Since, in such local coordinates, $\widetilde{\Psi}$ is the identity map, under our assumption $\widetilde{\Psi}$ is a convenient bundle isomorphism. \end{proof} \subsection{Prolongation of the Lie bracket} \label{___ProlongationOfTheLieBracket} In the finite dimensional framework, a Lie bracket for smooth sections of $\hat{\bf p}:{\bf T}\mathcal{M}\to \mathcal{M}$ is well defined. Unfortunately, we will see that this is no longer true in this general convenient context.\\ According to the notations \ref{N_locM}, for each open set $U$ of $M$, we denote by $\Gamma(\textbf{T}\mathcal{M}_U)$, $\Gamma(\textbf{V}\mathcal{M}_U)$ and $\Gamma(\widetilde{\mathcal{A}}_U)$ the $C^\infty(\mathcal{M}_{U})$-modules of sections of $\textbf{T}\mathcal{M}_U$, ${\bf V}\mathcal{M}_U$ and $\widetilde{\mathcal{A}}_U$ respectively.
We also denote by $\Gamma(\mathcal{A}_U)$ the $C^\infty({U})$-module of sections of $\pi:\mathcal{A}_U\to U$. \begin{definition} \label{D_Mproj} Let $U$ be an open subset of $M$. A section $\mathbf{X}$ in $\Gamma({\bf T}\mathcal{M}_U)$ is called projectable\index{projectable section} if there exists $\mathfrak{a}\in\Gamma({\mathcal{ A}}_U)$ such that \[ {\bf p}_{\mathcal{A}}\circ\mathbf{ X}=\mathfrak{a}. \] \end{definition} Therefore $\mathbf{X}$ is projectable if and only if there exists a vector field $X$ on $\mathcal{M}_U$ and $\mathfrak{a}\in\Gamma(\mathcal{ A}_U)$ such that (see Theorem \ref{T_Prolongation}) \[ \mathbf{ X}=(\mathfrak{a}\circ{\bf p},X) \textrm{ with } T{\bf p}(X)=\rho\circ \mathfrak{a}. \] \textbf{Assume now that $(\mathcal {A},\pi,M,\rho,[.,.]_{\mathcal {A}})$ is a convenient Lie algebroid}. Let $\mathbf{X}_i=(\mathfrak{a}_i\circ{\bf p},X_i)$, $i \in \{ 1,2 \}$, be two projectable sections defined on $\mathcal{M}_U$. We set \begin{eqnarray} \label{eq_projectTMbracket} [\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}}=([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}}\circ {\bf p},[X_1,X_2]). \end{eqnarray} Since $\rho([\mathfrak{a}_1,\mathfrak{a}_2]_{\mathcal{A}})=[\rho(\mathfrak{a}_1),\rho(\mathfrak{a}_2)]$ and $T{\bf p} \left( [X_1,X_2] \right) =[T{\bf p}(X_1),T{\bf p}(X_2)]$, it follows that $[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}}$ is a well-defined projectable section. Moreover, we have \begin{eqnarray}\label{eq_HatRhoMorphism} \hat{\rho} \left( [\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}} \right) =[\hat{\rho}(\mathbf{X}_1),\hat{\rho}(\mathbf{X}_2)]. \end{eqnarray} \begin{comments} \label{Com_PartialBracket} Now, \textbf{in finite dimension}, it is well known that the module of sections of $\widetilde{\mathcal{A}}_U\to \mathcal{M}_U$ is the $C^\infty(\mathcal{M}_{U})$-module generated by the set of sections $\mathfrak{a}\circ {\bf p}$ where $\mathfrak{a}$ is any section of $\mathcal{A}_U\to U$.
Therefore, according to Theorem \ref{T_Prolongation}, the module $\Gamma({\bf T}\mathcal{M}_U)$ is generated, as a $C^\infty(\mathcal{M}_{U})$-module, by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$. This result is essentially a consequence of the fact that, in the local context used in the proof of Theorem \ref{T_Prolongation}, over such a chart domain $U$, the bundle $\widetilde{\mathcal{A}}_U$ is a finite dimensional bundle and so the module of sections of $\widetilde{\mathcal{A}}_U$ over $\mathcal{M}_U$ is finitely generated as a $C^\infty(\mathcal{M}_U)$-module. Thus, if $\mathcal{A}$ is a finite rank bundle over $M$, then the module $\Gamma({\bf T}\mathcal{M}_U)$ is generated, as a $C^\infty(\mathcal{M}_{U})$-module, by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$.\\ \textbf{Unfortunately, this is no longer true in the convenient context, nor even in the Banach setting in general}.\\ Note that, under some type of approximation property for $\mathbb{A}$, we can show that the $C^\infty(\mathcal{M}_{U})$-module generated by the sections $\mathfrak{a}\circ {\bf p}$, for $\mathfrak{a}\in\Gamma(\mathcal{A}_U)$, is dense in $\Gamma(\widetilde{\mathcal{A}}_U)$ as a convenient space. In this case, the $C^\infty(\mathcal{M}_{U})$-module generated by the set of all projectable sections of $\Gamma({\bf T}\mathcal{M}_U)$ will be dense in $\Gamma({\bf T}\mathcal{M}_U)$ (as a convenient space). We could hope that, in this context, the Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}_U}$ can be extended to $\Gamma({\bf T}\mathcal{M}_U)$.\\ \end{comments} \begin{definition} We denote by $\mathfrak{P}({\bf T}\mathcal{M}_U)$ the $C^\infty(\mathcal{M}_{U})$-submodule of $\Gamma({\bf T}\mathcal{M}_U)$ generated by the set of projectable sections defined on $\mathcal{M}_U$.
\end{definition} Each module $\mathfrak{P}({\bf T}\mathcal{M}_U)$ has the following properties: \begin{lemma} \label{L_extbracket} ${}$ \begin{enumerate} \item For any open subset $U$ in $M$, there exists a well defined Lie bracket $[.,.]_{{\bf T}\mathcal{M}_U}$ on $\mathfrak{P}({\bf T}\mathcal{M}_U)$ which satisfies the assumption of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle} and whose restriction to projectable sections is given by the relation (\ref{eq_projectTMbracket}). \item For each $x\in M$, there exists a chart domain $U$ around $x$ in $M$ such that ${\bf T}\mathcal{M}_U$ is trivializable over $\mathcal{M}_U$ and, for each $(a,v)\in {\bf T}_m\mathcal{M}$ where $m=(x,e)\in \mathcal{M}_U$, there exists a projectable section ${\bf X}$ defined on $\mathcal{M}_U$ such that ${\bf X}(m)=(a,v)$. \item Assume that we have a Whitney decomposition ${\bf T}\mathcal{M}={\bf K}\mathcal{M}\oplus{\bf V}\mathcal{M}$ and let $p_{\bf K}$ be the associated projection on ${\bf K}\mathcal{M}$. Then, for any section ${\bf X}\in \mathfrak{P}({\bf T}\mathcal{M}_U)$, the induced section ${\bf X}^K=p_{\bf K}\circ {\bf X}$ belongs to $\mathfrak{P} \left( {\bf K}\mathcal{M}_U \right) =\Gamma \left( {\bf K}\mathcal{M}_U \right) \cap \mathfrak{P} \left( {\bf T}\mathcal{M}_U \right) $. In particular, ${\bf X}^K$ is projectable if and only if ${\bf X}$ is so. \end{enumerate} \end{lemma} \begin{proof} (1) First of all, using the Leibniz formula, if $\mathbf{X}_1$ and $\mathbf{X}_2$ are projectable sections defined on $\mathcal{M}_U$, we have \[ [\mathbf{X}_1,f\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}=df(\hat{\rho}(\mathbf{X}_1))\mathbf{X}_2+ f[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}_U} \] for any $f\in C^\infty(\mathcal{M}_U)$.
Now any local section $\mathbf{X}$ of $\mathfrak{P}({\bf T}\mathcal{M}_U)$ is a finite sum \[ \mathbf{X}=f_1 \mathbf{X}_1+\cdots+ f_k\mathbf{X}_k \] where, for $i \in \{ 1,\dots,k \}$, each $\mathbf{X}_i$ is projectable and $f_i$ is a local smooth function on $\mathcal{M}_U$. Therefore, such a decomposition allows us to define the bracket $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}$ for all projectable sections $\mathbf{Y}$ defined on the same open set as $\mathbf{X}$. Note that, from (\ref{eq_projectTMbracket}) and the Leibniz formula, the value $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ only depends on the $1$-jet of $\mathbf{Y}$ and $\mathbf{X}$ at the point $m$ and so the value of $[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ is well defined. Since such a value is independent of the expression of $\mathbf{X}$, by similar arguments, we can define the bracket $[\mathbf{X}',\mathbf{X}]_{{\bf T}\mathcal{M}_U}(m)$ for any other local section $\mathbf{X}'$ of $\mathfrak{P}({\bf T}\mathcal{M}_U)$. Now since, by assumption, $[.,.]_\mathcal{A}$ and the Lie bracket of vector fields satisfy the assumption of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle}, the restriction of $[.,.]_{{\bf T}\mathcal{M}_U}$ to projectable sections also satisfies the assumption of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle}, and so its extension to sections of $\mathfrak{P}({\bf T}\mathcal{M}_U)$, for any open set $U$ of $M$, is an almost Lie bracket.
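Explicitly (this formula is not written out in the text, but it follows directly from the Leibniz rule above), for a projectable section $\mathbf{Y}$ and a section $\mathbf{X}=\displaystyle\sum_{i=1}^k f_i\mathbf{X}_i$ with each $\mathbf{X}_i$ projectable, the extended bracket reads
\[
[\mathbf{Y},\mathbf{X}]_{{\bf T}\mathcal{M}_U}=\sum_{i=1}^k \left( df_i(\hat{\rho}(\mathbf{Y}))\,\mathbf{X}_i+f_i\,[\mathbf{Y},\mathbf{X}_i]_{{\bf T}\mathcal{M}_U}\right).
\]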
\\ In this context, the jacobiator\index{jacobiator} on ${{\bf T}\mathcal{M}_U}$ is defined by \[ J_{{\bf T}\mathcal{M}_U}(\mathbf{X}_1,\mathbf{X}_2,\mathbf{X}_3) =[[\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}_U},\mathbf{X}_3]_{{\bf T}\mathcal{M}_U} +[[\mathbf{X}_2,\mathbf{X}_3]_{{\bf T}\mathcal{M}_U},\mathbf{X}_1]_{{\bf T}\mathcal{M}_U}+[ [\mathbf{X}_3,\mathbf{X}_1]_{{\bf T}\mathcal{M}_U},\mathbf{X}_2]_{{\bf T}\mathcal{M}_U}. \] But according to relation (\ref{eq_HatRhoMorphism}) for projectable sections, using the Leibniz property, it is easy to see that, for all $\mathbf{X}_1,\mathbf{X}_2$ in $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$, \[ \hat{\rho}([\mathbf{X}_1,\mathbf{X}_2]_{{\bf T}\mathcal{M}})=[\hat{\rho}(\mathbf{X}_1),\hat{\rho}(\mathbf{X}_2)]. \] On the other hand, from (\ref{eq_projectTMbracket}), for projectable sections $\mathbf{X}_i$, $i \in \{1,2,3\}$, since $[.,.]_{\mathcal{A}}$ and the Lie bracket of vector fields both satisfy the Jacobi identity, it follows that \[ J_{{\bf T}\mathcal{M}_U}(\mathbf{X}_1,\mathbf{X}_2,\mathbf{X}_3)=0. \] Therefore, according to these properties, it follows that $J_{{\bf T}\mathcal{M}_U}$ vanishes identically on $\mathfrak{P}({\bf T}\mathcal{M}_U)$, which ends the proof of (1). (2) Choose $x_0\in M$. According to the proof of Theorem \ref{T_Prolongation}, there exists a chart domain $U$ around $x_0$ such that \begin{description} \item $\mathcal{M}_U\equiv U\times\mathbb{O}$ \item $\mathcal{A}_U\equiv U\times\mathbb{A}$ \item $T\mathcal{M}_U\equiv U\times\mathbb{O}\times\mathbb{M}\times\mathbb{E}$ \item $\widetilde{\mathcal{A}}_U=U\times\mathbb{O}\times\mathbb{A}$ \item ${\bf T}\mathcal{M}_U\equiv U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E}$. \end{description} Consider $(a_0,v_0)\in {\bf T}_{m_0}\mathcal{M}$ where $m_0=(x_0,e_0)\in \mathcal{M}_U$. Using local coordinates, if $m_0\equiv \mathsf{(x_0,e_0)}$ and $(a_0,v_0)\equiv \mathsf{(a_0,v_0)}$, we consider the section \[ \mathsf{X}: U\times\mathbb{O}\to U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E} \] given by $ \mathsf{X(x,e)=(x,e, a_0, v_0)}$.
Then, by construction, the corresponding local section ${\bf X}\equiv \mathsf{X}$ is a projectable section defined on $\mathcal{M}_U$.\\ (3) Under the assumptions of (3), note that the restriction ${\bf p}_{{\bf K}\mathcal{M}}$ of ${\bf p}_{\widetilde{\mathcal{A}}}$ to ${\bf K}\mathcal{M}$ is an isomorphism onto $\widetilde{\mathcal{A}}$. Let ${\bf X}$ be a section of ${\bf T}\mathcal{M}_U$. Then the difference ${\bf X}-{\bf X}^K$ is a vertical section. Since the sum of two projectable sections is a projectable section, it follows that ${\bf X}$ is projectable if and only if ${\bf X}^K$ is projectable. \end{proof} \begin{notations} \label{N_SheafSectionBracketlocal} ${}$ \begin{enumerate} \item\textbf{Sheaf $\mathfrak{P}_\mathcal{M}$ of sections of $\mathbf{T}\mathcal{M}$:}\\ On the one hand, for any open set $\mathcal{U}$ of $\mathcal{M}$, if $U=\mathbf{p}(\mathcal{U})$, then $\mathcal{U}\subset \mathcal{M}_U$ and so we set $\mathfrak{P}(\mathcal{U}):=\{\mathbf{X}_{| \mathcal{U}},\; \mathbf{X}\in \mathfrak{P}(\mathcal{M}_U)\}$.\\ On the other hand, since the set of smooth sections of a Banach bundle defines a sheaf over its base, it follows that the set $\{\mathfrak{P}({\bf T}\mathcal{U}),\; \mathcal{U} \textrm{ open set in }\mathcal{M}\}$ defines a sub-sheaf of modules $\mathfrak{P}_\mathcal{M}$ of the sheaf of modules $\Gamma_\mathcal{M}$ of sections of ${\bf T}\mathcal{M}$. Thus $\{\mathfrak{P}(\mathcal{M}_U),\; U\textrm{ open set in } M\}$ generates a sheaf of modules $\mathfrak{P}_\mathcal{M}$ on $\mathcal{M}$.
According to Definition \ref{D_PartialConvenientLieAlgebroid}, in this context, when no confusion is possible, we simply denote by $[.,.]_{{\bf T}\mathcal{M}}$ the sheaf of brackets generated by $\{[.,.]_{{\bf T}\mathcal{M}_U}, U\textrm{ open set in } M\}$.\\ \item {\bf Local version of the Lie bracket $[.,.]_{{\bf T}\mathcal{M}} $:}\\ Each section ${\bf X}$ in $\mathfrak{P}({\bf T}\mathcal{M}_U)$ has a decomposition: \[ {\bf X}=\displaystyle\sum_{i=1}^p f_i\mathbf{X}_i \] where $f_i \in C^\infty(\mathcal{M}_U)$ and the ${\bf X}_i$ are projectable sections for all $i \in \{1,\dots,p \}$. We consider a chart domain $U$ in $M$ for which the situation considered in the proof of Lemma \ref{L_extbracket} (2) is valid. In the associated local coordinates, for any section ${\bf X}$ of ${\bf T}\mathcal{M}_U$, we have ${\bf X}(x,e)\equiv\mathsf{(x,e, a(x,e), z(x,e))}$. Now, ${\bf X}$ is projectable if and only if $\mathsf{a}$ only depends on $\mathsf{x}$. Under these notations, if ${\bf X}'\equiv \mathsf{(x,e, a'(x), z'(x,e))}$ is another projectable section, we have (cf.\ (\ref{eq_projectTMbracket})): \begin{eqnarray} \label{eq_BracketProject} \begin{aligned} \left[{\bf X},{\bf X}' \right]_{{\bf T}\mathcal{M}}(x,e) \equiv &\mathsf{(x,e, C_x(a,a')} + d \mathsf{a'(\rho_x(a))} - d \mathsf{a(\rho_x(a')),}\\ &d \mathsf{ z'(\rho_x(a),z)} - d \mathsf{ z(\rho_x(a'),z'))}. \end{aligned} \end{eqnarray} Now consider two sections ${\bf X}$ and ${\bf X}'$ in $\mathfrak{P} \left( {\bf T}\mathcal{M}_U \right)$. We can write ${\bf X}=\displaystyle\sum_{i=1}^p f_i\mathbf{X}_i$ and ${\bf X}'=\displaystyle\sum_{j=1}^q f'_j\mathbf{X}'_j$ where $f_i, f'_j \in C^\infty(\mathcal{M}_U)$ and the ${\bf X}_i$ and ${\bf X}'_j$ are projectable sections for all $i \in \{1,\dots,p\}$ and $j \in \{1,\dots,q\}$.
Since the value of the Lie bracket $[{\bf X},{\bf X}']_{{\bf T}\mathcal{M}}$ at $m$ only depends on the $1$-jet of ${\bf X}$ and ${\bf X}'$ at $m$, this value does not depend on the previous decompositions. Now ${\bf X}$ and ${\bf X}'$ can also be written as pairs $(\mathfrak{a}, X)$ and $(\mathfrak{a}', X')$ respectively. Of course, if $m=(x,e)$, we have \[ \mathfrak{a}(m)= \displaystyle\sum_{i=1}^p f_i(m)a_i(x) \textrm{ and } \mathfrak{a}'(m)= \displaystyle\sum_{j=1}^q f'_j(m)a'_j(x) \] \[ X= \displaystyle\sum_{i=1}^p f_iX_i \textrm{ and } X'= \displaystyle\sum_{j=1}^q f'_jX'_j. \] In local coordinates, we then have \[ {\bf X}\equiv \mathsf{(a,z)=\left( \displaystyle\sum_{i=1}^p f_i(m)a_i,\sum_{i=1}^p f_iz_i \right) } \] \[ {\bf X}'\equiv \mathsf{(a',z')= \left( \displaystyle\sum_{j=1}^q f'_ja'_j,\sum_{j=1}^q f'_jz'_j \right) }. \] Thus the Lie bracket $[{\bf X},{\bf X}']_{{\bf T}\mathcal{M}}$ has the following expression in local coordinates: \begin{eqnarray} \label{eq_BracketQuasiprojectable} \begin{aligned} \left[{\bf X},{\bf X}'\right]_{{\bf T}\mathcal{M}}& \equiv \mathsf{( C(a,a') } +d \mathsf{a'(\rho_x(a))}\\ &- d \mathsf{a(\rho_x(a')),} d \mathsf{z'(\rho_x(a),z)} -d \mathsf{z(\rho_x(a'),z'))} \end{aligned} \end{eqnarray} where $\mathsf{C}: U\to {L}_{alt}^2(\mathbb{A})$ is defined in local coordinates by the Lie bracket $[.,.]_\mathcal{A}$ and where \[ \mathsf{C(a,a')(m)= \displaystyle\sum_{i=1}^p \sum_{j=1}^q f_i(m)f'_j(m) C_x(a_i(x),a'_j(x))}. \] \end{enumerate} \end{notations} \begin{remark} \label{R_AFiniterank} According to Comments \ref{Com_PartialBracket}, when $\mathcal{A}$ is a finite rank vector bundle, then $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)=\Gamma(\mathbf{T}\mathcal{M}_U)$ and so $[.,.]_{\mathbf{T}\mathcal{M}_U}$ is defined for all sections of $\Gamma(\mathbf{T}\mathcal{M}_U)$.
\end{remark} Now, from Lemma \ref{L_extbracket}, $ \left( \mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}} \right)$ is a Lie algebra and $\hat{\rho}$ induces a Lie algebra morphism from $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ to the Lie algebra of vector fields on $\mathcal{M}_U$. In this way, we get: \begin{theorem} \label{T_PartialLieAlgebroid} The sheaf $\mathfrak{P}_\mathcal{M}$ on $\mathcal{M}$ gives rise to a strong partial convenient Lie algebroid structure on the anchored bundle $({\bf T}\mathcal{M}, \hat{p}, \mathcal{M}, \hat{\rho})$. Moreover, the restriction of the bracket $[.,.]_{{\bf T}\mathcal{M}}$ to the module of vertical sections induces a convenient Lie algebroid structure on the anchored subbundle $({\bf V}\mathcal{M}, \hat{p}_{|{\bf V}\mathcal{M}},\mathcal{M},\hat{\rho})$ which is independent of the bracket $[.,.]_\mathcal{A}$. \end{theorem} From Remark \ref{R_AFiniterank}, we obtain: \begin{corollary} \label{C_LieAlgebroidTM} If $\mathcal{A}$ is a Banach bundle with finite dimensional fibres, then\\ $ \left( {\bf T}\mathcal{M},\hat{p}, \mathcal{M}, \hat{\rho},[.,.]_{{\bf T}\mathcal{M}} \right) $ is a convenient Lie algebroid. \end{corollary} \begin{proof}[Proof of Theorem \ref{T_PartialLieAlgebroid}] From Lemma \ref{L_extbracket}, $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ is a Lie algebra and $\hat{\rho}$ induces a Lie algebra morphism from $(\mathfrak{P}({\bf T}\mathcal{M}_U), [.,.]_{{\bf T}\mathcal{M}})$ to the Lie algebra of vector fields on $\mathcal{M}_U$. This implies the same properties for $(\mathfrak{P}({\bf T}\mathcal{U}), [.,.]_{{\bf T}\mathcal{M}})$ for any open set $\mathcal{U}$ in $\mathcal{M}$.
Thus we obtain a sheaf $\mathfrak{P}_\mathcal{M}$ of Lie algebras and $C^\infty_\mathcal{M}$-modules on $\mathcal{M}$, which implies that $({\bf T}\mathcal{M},\hat{p}, \mathcal{M}, \hat{\rho},[.,.]_{{\bf T}\mathcal{M}})$ is a partial convenient Lie algebroid which is strong, according to Lemma \ref{L_extbracket} (2).\\ It remains to prove the last property. From its definition, the restriction of $\hat{\rho}$ to the vertical bundle ${\bf V}\mathcal{M}\to \mathcal{M}$ is an isomorphism onto the vertical bundle of the tangent bundle $T\mathcal{M}\to \mathcal{M}$. Now, from the definition of the bracket of projectable sections, it is clear that the bracket of two (local) vertical sections of ${\bf T}\mathcal{M}\to \mathcal{M}$ is also a (local) vertical section which is independent of the choice of $[.,.]_\mathcal{A}$.\\ \end{proof} \begin{remark} \label{R_liftmorphism} Consider a fibred morphism $\Psi:{\mathcal{M}}\to {\mathcal{M}}'$ between fibred manifolds ${\bf p}:{\mathcal{M}}\to M$ and ${\bf p}':{\mathcal{M}'}\to M'$ over $\psi:M\to M'$, and a morphism of anchored bundles $ \varphi$ between two Lie algebroids $({\mathcal{A}}, \pi, M,\rho, [.,.]_{\mathcal{A}})$ and $({\mathcal{A}}', \pi',M',\rho',[.,.]_{\mathcal{A}'})$ over $\psi:M\to M'$. Then, from Remark \ref{R_LieMorphismPartialAlgebroid}, the prolongation ${\bf T}\Psi: {{\bf T}}^{\mathcal{A}}{\mathcal{M}}\to {{\bf T}}^{\mathcal{A}'}{\mathcal{M}}'$ is a Lie morphism of partial convenient Lie algebroids from $({\bf T}^{\mathcal{A}}\mathcal{M},\mathcal{M},\hat{\rho},\mathfrak{P}_{\mathcal{M}})$ to $({\bf T}^{\mathcal{A}'}\mathcal{M}',\mathcal{M}',\hat{\rho}',\mathfrak{P}_{\mathcal{M}'})$. \end{remark} \subsection{Derivative operator on ${\bf T}\mathcal{M}$} \label{___LieDerivativeExteriorDifferentialAndNijenhuisTensorOnTM} Consider an open set $\mathcal{U}$ in $\mathcal{M}$. We simply denote by ${\bf T}\mathcal{U}$ the restriction of ${\bf T}\mathcal{M}$ to $\mathcal{U}$.
Note that $U={\bf p}(\mathcal{U})$ is an open set in $M$ and of course $\mathcal{U}$ is contained in the open set $\mathcal{M}_U$; so we have a natural restriction map from $\Gamma({\bf T}\mathcal{M}_U)$ (resp. $\mathfrak{P}({\bf T}\mathcal{M}_U)$) into $\Gamma({\bf T}\mathcal{U})$ (resp. $\mathfrak{P}({\bf T}\mathcal{U})$).\\ Since ${\bf T}\mathcal{M}$ has only a strong partial convenient Lie algebroid structure, by application of the results of subsection \ref{___DerivativeOperators}, we then have the following result: \begin{theorem} \label{T_DifferentialOperatorTM} Fix some open set $\mathcal{U}$ in $\mathcal{M}$. \begin{enumerate} \item Fix some projectable section $\mathfrak{u}\in \mathfrak{P}({\bf T}\mathcal{U})$. For any $k$-form $\omega$ on $\mathcal{U}$, the Lie derivative $L^{\hat{\rho}}_\mathfrak{ u}\omega$ is a well defined $k$-form on $\mathcal{U}$. \item For any $k$-form $\omega$ on $\mathbf{T}\mathcal{U}$, the exterior derivative $d_{\hat{\rho}}\omega$ of $\omega$ is a well defined $(k+1)$-form on $\mathcal{U}$.\\ \end{enumerate} \end{theorem} \subsection{Prolongations and foliations} \label{___ProlongationsAndFoliations} \emph{We assume that $(\mathcal{A},M,\rho,[.,.]_\mathcal{A})$ is a split Banach Lie algebroid}.\\ Under these assumptions, by application of Theorem \ref{T_Prolongation} and Theorem 2 in \cite{Pe1}, we obtain the following link between the foliation on $M$ and the foliation on $T\mathcal{M}$: \begin{theorem} \label{T_tildefol} Assume that $(\mathcal{A},\pi,M,\rho,[.,.]_\mathcal{A})$ is split and the distribution $\rho(\mathcal{A})$ is closed. \begin{enumerate} \item The distribution $\hat{\rho}({\bf T}\mathcal{A})$ is integrable on ${\bf T}\mathcal{M}$. \item Assume that $\mathcal{M}=\mathcal{A}$. Let $L$ be a leaf of $\rho(\mathcal{A})$ and $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ be the Banach-Lie algebroid which is the restriction of $(\mathcal{A}, \pi, M,\rho,[.,.]_{\mathcal{A}})$.
Then $\hat{L}=\mathcal{A}_L=\pi^{-1}(L)$ is a leaf of $\hat{\rho}(\mathbf{T}\mathcal{A})$ such that $\hat{\mathbf{p}}(\hat{L})=L$. \item In the previous context, the partial Banach-Lie algebroid which is the prolongation of $(\mathcal{A}_L,\pi_{|L},L,\rho_L,\mathfrak{P}_{\mathcal{A}_L})$ over $\hat{L}$ is exactly the restriction to $\hat{L}$ of the partial Banach-Lie algebroid $({\bf T}\mathcal{A}, \hat{p},\mathcal{A},\hat{\rho}, \mathfrak{P}_{\mathcal{A}})$. \end{enumerate} \end{theorem} \begin{remark} \label{R_prolongAL} ${}$ \begin{enumerate} \item If there exists a weak Riemannian metric on $\mathcal{A}$, the distribution $\rho(\mathcal{A})$ is closed and the assumptions of Theorem \ref{T_tildefol} are satisfied. These assumptions are always satisfied if $\rho$ is a Fredholm morphism. \item The set of leaves of the foliation defined by $\hat{\rho}(\mathbf{T}\mathcal{A})$ is \[ \{\mathcal{A}_L, \; L \textrm{ leaf of } \rho(\mathcal{A})\;\}. \] \end{enumerate} \end{remark} \begin{proof} From Theorem \ref{T_Prolongation}, if $m=(x,e)\in \mathcal{A}$, the fibre ${\bf T}_m\mathcal{M}$ can be identified with ${\mathcal{A}}_x\times \widetilde{\mathcal{E}}_m\equiv \mathbb{A}\times\mathbb{E}$, so we have $\hat{\rho}_m(a, v)\equiv (\mathsf{\rho_x(a),v})$. It follows that $\ker(\hat{\rho}_m)$ can be identified with $\ker\rho_x\times{\bf V}_m\mathcal{M}\subset {\mathcal{A}}_x\times T_m\mathcal{A}$. Thus, $\ker\hat{\rho}_m$ is split if and only if $\ker\rho_x$ is split. Moreover $\hat{\rho}_m({\bf T}_m\mathcal{A})=\rho_x(\mathcal{A}_x)\times {\bf V}_m\mathcal{M}$ is closed in $T_m\mathcal{M}\equiv\mathbb{M}\times\mathbb{E}$ if and only if $\rho_x(\mathcal{A}_x)$ is closed in $T_xM$.
Then (1) will be a consequence of \cite{BCP}, Theorem 8.39 if we show that, for any $(x,e)\in\mathcal{M}$, there exists an open neighbourhood $U$ of $x$ in $M$ such that the set $\mathfrak{P}({\bf T}\mathcal{M}_U)$ is a generating upper set for $\hat{\rho}({\bf T}\mathcal{M})$ around $(x,e)$ (cf. \cite{BCP}, Definition 8.38) and satisfies the condition {\bf (LB)} given in \cite{BCP}, $\S$ Integrability and Lie invariance.\\ Fix the point $m=(x,e)\in \mathcal{M}$ and choose a chart domain $U$ of $x\in M$ such that $\mathcal{M}_U$ and $\mathcal{A}_U$ are trivializable. Without loss of generality, according to the notations in $\S$ \ref{___LocalIdentificationsAndExpressionsInAConvenientBundle}, we may assume that $U\subset \mathbb{M}$, $\mathcal{M}_U=U\times\mathbb{O}\subset \mathbb{M}\times\mathbb{E}$, $\mathcal{A}_U=U\times\mathbb{A}$. Thus, according to the proof of Theorem \ref{T_Prolongation}, we have ${\bf T}\mathcal{M}_U=U\times\mathbb{O}\times \mathbb{A}\times\mathbb{E}$. In this context, if $\{\mathsf{x}\}\times\mathbb{A}=\ker \rho_\mathsf{x}\oplus \mathbb{S}$, then \[ \{\mathsf{(x,e)}\}\times \mathbb{A}\times\mathbb{E}=\{\mathsf{(x,e)}\}\times(\ker \rho_\mathsf{x}\oplus \mathbb{S})\times \mathbb{E}. \] Now consider the upper trivialization $\rho: U\times \mathbb{A}\to U\times\mathbb{M}(=TM_U)$ and the associated upper trivialization: \[ \hat{\rho}:U\times\mathbb{O}\times\mathbb{A}\times\mathbb{E}\to U\times\mathbb{O}\times\mathbb{M}\times\mathbb{E}(=T\mathcal{M}_U). \] Then, from the definition of $\hat{\rho}$, any upper vector field is of type \[ {\bf X}_{\mathsf{(a,v)}}=\hat{\rho}\mathsf{(a, v)= (\rho(a), v)} \] for any $\mathsf{(a, v)}\in \mathbb{A}\times\mathbb{E}$. From the proof of Lemma \ref{L_extbracket} (2), it follows that such a vector field is the image, under $\hat{\rho}$, of a projectable section.
Moreover, the stability of projectable sections under the Lie bracket $[.,.]_{{\bf T}\mathcal{M}_U}$ and the fact that $\hat{\rho}$ induces a Lie algebra morphism from $\mathfrak{P}({\bf T}\mathcal{M}_U)$ into the Lie algebra of vector fields on $\mathcal{M}_U$ imply that condition ({\bf LB}) is satisfied for the set $\mathfrak{P}({\bf T}\mathcal{M}_U)$.\\ Assume that $\mathcal{M}=\mathcal{A}$. Fix some leaf $L$ in $M$. If $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ is the Banach-Lie algebroid which is the restriction of $(\mathcal{A}, \pi, M,\rho,[.,.]_\mathcal{A})$, then $\pi^{-1}(L)=\mathcal{A}_L$ and $\rho_L(\mathcal{A}_L)=TL$. Now, by construction, the prolongation $\mathbf{T}\mathcal{A}_L$ of $\mathcal{A}_L$ relative to the Banach-Lie algebroid $(\mathcal{A}_L,\pi_{|L},L,\rho_L,[.,.]_{\mathcal{A}_L})$ is characterized by \[ \mathbf{T}_{(x,a)}\mathcal{A}_L=\{(b, (v,y))\in \mathcal{A}_x \times T_{(x,a)}\mathcal{A}_L\;:\; \rho_x(b)=y\}. \] It follows that $\mathbf{T}\mathcal{A}_L$ is the restriction of $\mathbf{T}\mathcal{A}$ to $\mathcal{A}_L$ and also $\hat{\rho}(\mathbf{T}\mathcal{A}_L)=T\mathcal{A}_L$. Since $L$ is connected, so is $\mathcal{A}_L$, and hence $\mathcal{A}_L$ is a leaf of $\hat{\rho}(\mathbf{T}\mathcal{M})$. \end{proof} \section{Projective limits of prolongations of Banach Lie algebroids} \label{__ProjectiveLimitsOfProlongationsOfBanachAnchoredBundles} \begin{definition} \label{D_ProjectiveSequenceofBanachLieAlgebroids} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_i \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp.
of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$).\\ A sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ is called compatible with algebroid prolongations if, for all $\left( i,j \right) \in\mathbb{N}^2$ such that $j\geq i$, we have \begin{description} \item[\textbf{(PSPBLAB 1)}] {\hfil $\xi_i^j(\mathcal{M}_j)\subset \mathcal{M}_i$;} \item[\textbf{(PSPBLAB 2)}] {\hfil $\mathbf{p}_i \circ \xi_i^j = \delta_i^j \circ \mathbf{p}_j$.} \end{description} \end{definition} Under the assumptions of Definition \ref{D_ProjectiveSequenceofBanachLieAlgebroids}, for each $i\in \mathbb{N}$, we denote by $(\mathbf{T}\mathcal{M}_i, \hat{\mathbf{p}}_i, \mathcal{M}_i, \hat{\rho}_i)$ the prolongation of $\mathcal{A}_i$ over $\mathcal{M}_i$ and by $[.,.]_{\bf{T}\mathcal{M}_i}$ the prolongation of the Lie bracket $[.,.]_{\mathcal{A}_i}$ on projectable sections of $\mathbf{T}\mathcal{M}_i$. We then have the following result. \begin{proposition} \label{P_ProjectiveLimitProlongationBracket} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp. Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations.
Then \begin{enumerate} \item $\left( \mathbf{T}\mathcal{M}=\underleftarrow{\lim}\mathbf{T}\mathcal{M}_i,\hat{\mathbf{p}}=\underleftarrow{\lim}\hat{\mathbf{p}}_i,\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i, \hat{\rho}=\underleftarrow{\lim}\hat{\rho}_i \right) $ is a Fr\'{e}chet anchored bundle which is the prolongation of $\left( \mathcal{A}=\underleftarrow{\lim}\mathcal{A}_i,\pi=\underleftarrow{\lim}\pi_i,M=\underleftarrow{\lim}M_i \right)$ over $\mathcal{M}$.\\ \item Consider any open set $U$ in $M$ and a sequence of open sets $U_i$ in $M_i$ such that $U=\underleftarrow{\lim}U_i$. We denote by $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ the $C^\infty(\mathcal{M}_U)$-module generated by all projective limits $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$ of projectable sections $\mathbf{X}_i$ of $\mathbf{T}\mathcal{M}_i$ over $\{\mathcal{M}_i\}_{U_i}$. Then there exists a Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}} $ defined on $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ which satisfies the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle} and is characterized by \[ [\mathbf{X}, \mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underleftarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i} \] where $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$ and $\mathbf{X}'=\underleftarrow{\lim}\mathbf{X}'_i$.\\ \end{enumerate} \end{proposition} Note that $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$ is a submodule of the module $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$ generated by projectable sections of $\mathbf{T}\mathcal{M}_U$. Therefore, by an argument analogous to that used in the proof of Theorem \ref{T_PartialLieAlgebroid}, the set $\{\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U), U \textrm{ open set in } M\}$ generates a sheaf $\mathfrak{P}^{pl}_\mathcal{M}$ of modules over $ \mathcal{M}$.
Moreover, for any open set $\mathcal{U}$ in $ \mathcal{M}$, according to Proposition \ref{P_ProjectiveLimitProlongationBracket}, the restriction of $\hat{\rho}$ to each $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{U})$ is a Lie algebra morphism into the Lie algebra of vector fields on $\mathcal{U}$. Thus we obtain: \begin{theorem} \label{T_ProjectiveLimitOfProlongationOfBanachAnchoredBundles} Consider a projective sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \zeta_i^j, \xi_i^j, \delta_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ \end{center} (resp. Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right),\left( \xi_{i}^{j}, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations. Then $\left( \mathbf{T}\mathcal{M}=\underleftarrow{\lim}\mathbf{T}\mathcal{M}_i,\hat{\mathbf{p}}=\underleftarrow{\lim}\hat{\mathbf{p}}_i,\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i, \hat{\rho}=\underleftarrow{\lim}\hat{\rho}_i \right) $ is a Fr\'{e}chet anchored bundle which is the prolongation of $\left( \mathcal{A}=\underleftarrow{\lim}\mathcal{A}_i,\pi=\underleftarrow{\lim}\pi_i,M=\underleftarrow{\lim}M_i \right)$ over $\mathcal{M}$. Moreover $\left(\mathbf{T}\mathcal{M},\hat{\bf p}, \mathcal{M}, \hat{\rho},\mathfrak{P}^{pl}_{\mathcal{M}}\right)$ is a strong partial Fr\'echet Lie algebroid.
\end{theorem} \begin{remark} \label{R_NotPartialLieAlgebroidProlongation} In general, since the projective limit of a projective sequence of Banach algebroids has only a structure of partial Fr\'echet Lie algebroid, it follows that $\mathfrak{P}^{pl}_\mathcal{M}$ is a subsheaf of modules of $\mathfrak{P}_\mathcal{M}$ and the inclusion is strict in the sense that, for each open set $\mathcal{U}$ in $\mathcal{M}$, the inclusion of $\mathfrak{P}^{pl}(\mathcal{U})$ in $\mathfrak{P}(\mathcal{U})$ is strict. Thus we do not have a structure of strong partial Fr\'echet Lie algebroid defined on $\mathfrak{P}_\mathcal{M}$ as in Theorem \ref{T_PartialLieAlgebroid}. \end{remark} \begin{proof}[Proof of Proposition \ref{P_ProjectiveLimitProlongationBracket}]${}$\\ (1) According to \textbf{(PSPBLAB 1)} and Theorem \ref{T_ProjectiveLimitOfBanachLieAlgebroids}, $\left( \underleftarrow{\lim}\mathcal{A}_i,\underleftarrow{\lim}\pi_i,\underleftarrow{\lim}M_i ,\underleftarrow{\lim}\rho_i \right)$ is a Fr\'echet anchored bundle. \\ From \textbf{(PSPBLAB 2)} and Proposition \ref{P_ProjectiveLimitOfBanachVectorBundles}, we obtain a structure of Fr\'echet vector bundle on $\left( \underleftarrow{\lim}\mathcal{E}_i,\underleftarrow{\lim}\mathbf{p}_i,\underleftarrow{\lim}M_i \right) $. Since each $\mathcal{M}_i$ is an open submanifold of $\mathcal{E}_i$ such that the restriction of $\mathbf{p}_i$ is a surjective fibration of $\mathcal{M}_i$ over $M_i$, and since $\xi_i^j(\mathcal{M}_j) \subset \mathcal{M}_i$, it follows that $(\mathcal{M}_i,\xi_i^j) _{(i,j)\in\mathbb{N}^2, j \geq i}$ is a projective sequence of Banach manifolds and so the restriction of $\mathbf{p}=\underleftarrow{\lim}\mathbf{p}_i$ to $\mathcal{M}=\underleftarrow{\lim}\mathcal{M}_i$ is a surjective fibration onto $M$.\\ Recall that \[ { \bf T}\mathcal{M}_j =\{ \left( a_j,\mu_j \right) \in \mathcal{A}_{x_j} \times T_{m_j}\mathcal{M}_j: \; \rho_j \left( a_j \right) = T_{m_j}{\bf p}_j \left( \mu_j \right) \}.
\] Let $\left( a_j,\mu_j \right)$ be in ${ \bf T}{\mathcal{M}_j}$ and consider $a_i=\xi_{i}^{j} \left( a_j \right) $ and $\mu_i=T\xi_i^j\left( \mu_j \right) $. We then have: \[ \rho_i\left( a_i \right) = \rho_i \circ \xi_{i}^{j} \left( a_j \right) = T\delta_i^j \circ \rho_j \left( a_j \right) \] and also \[ T_{m_i}{\bf p}_i \left( \mu_i \right) = T_{m_i}{\bf p}_i \circ T\xi_i^j \left( \mu_j \right) = T\delta_i^j \circ T_{m_j}{\bf p}_j \left( \mu_j \right). \] Since $\rho_j \left( a_j \right) = T_{m_j}{\bf p}_j \left( \mu_j \right)$, we then obtain $\rho_i \left( a_i \right) = T_{m_i}{\bf p}_i \left( \mu_i \right)$. So $\mathbf{T}\xi_i^j : \mathbf{T} \mathcal{M}_j \to \mathbf{T} \mathcal{M}_i$ is a morphism of Banach bundles and we have the following commutative diagram \[ \xymatrix{ \mathbf{T} \mathcal{M}_i \ar@{<-}[r]^{\mathbf{T}\xi_i^j}\ar[d]_{\hat{\rho}_i} & \mathbf{T} \mathcal{M}_j \ar[d]^{\hat{\rho}_j}\\ T\mathcal{M}_i \ar@{<-}[r]^{T\xi_i^j} & T\mathcal{M}_j\\ } \] We deduce that $\left( \left( {\mathbf{T}}^{\mathcal{A}_i} \mathcal{M}_i,\hat{\mathbf{p}}_i,\mathcal{M}_i,\hat{\rho}_i \right),\left( \mathbf{T}\xi_i^j,\xi_{i}^{j}\right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ is a projective sequence of Banach anchored bundles. Applying again Theorem \ref{T_ProjectiveLimitOfBanachLieAlgebroids}, we get a Fr\'echet anchored bundle structure on $\left( \underleftarrow{\lim}\bf{T}\mathcal{M}_i,\underleftarrow{\lim}\hat{\bf{p}}_i,\underleftarrow{\lim}\mathcal{M}_i \right)$ over $\underleftarrow{\lim}\mathcal{M}_i$ which appears as the prolongation of $\left( \underleftarrow{\lim}\mathcal{A}_i,\underleftarrow{\lim}\pi_i,\underleftarrow{\lim}M_i ,\underleftarrow{\lim}\rho_i \right) $ over $\underleftarrow{\lim}\mathcal{M}_i $.\\ (2) Let $U$ be an open set in $M$. There exist open sets $U_i$ in $M_i$ such that $\delta_i(U)\subset U_i$ for each $i\in \mathbb{N}$ and so that $U=\underleftarrow{\lim}U_i$.
Now, from the definition of $\{\mathcal{M}_i\}_{U_i}$, we must have $\mathcal{M}_U=\underleftarrow{\lim}\{\mathcal{M}_i\}_{U_i}$.\\ Recall that a projectable section $\mathbf{X}_i$ on $\{\mathcal{M}_i\}_{U_i}$ is characterized by a pair $(\mathfrak{a}_i, X_i)$ where $\mathfrak{a}_i$ is a section of $\{\mathcal{A}_i\}_{U_i}$ and $X_i$ is a vector field on $\{\mathcal{M}_i\}_{U_i}$ such that $\rho_i\circ \mathfrak{a}_i=T{\mathbf{p}_i}(X_i)$. Assume that $\mathfrak{a}=\underleftarrow{\lim}\mathfrak{a}_i$ and $X=\underleftarrow{\lim}X_i$. From the compatibility with the bonding maps of the sequences of sections $(\mathfrak{a}_i)$, anchors $(\rho_i)$ and vector fields $(X_i)$, we must then have $T{\bf p}(X)=\rho\circ \mathfrak{a}$. But from Theorem \ref{T_Prolongation}, it follows that $(\mathfrak{a}\circ \mathbf{p}, X)$ defines a projectable section $\mathbf{X}$ over $\mathcal{M}_U$ and so $\mathbf{X}=\underleftarrow{\lim}\mathbf{X}_i$.\\ On the other hand, recall that from (\ref{eq_projectTMbracket}), for each $i\in \mathbb{N}$, we have \begin{eqnarray} \label{Eq_BracketTMi} [\mathbf{X}_i,\mathbf{X}'_i]_{{\bf T}\mathcal{M}_i}=([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\circ {\bf p}_i,[X_i,X'_i]) \end{eqnarray} Now since $\xi_i^j$ is a Lie algebroid morphism over $\delta_i^j$, according to Definition \ref{D_LieMorphism} we have \begin{eqnarray} \label{Eq_bracketfij} \xi_i^j([\mathfrak{a}_j,\mathfrak{a}'_j]_{\mathcal{A}_j})(x_j)=\left([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\right)(\delta_i^j(x_j)). \end{eqnarray} Since $\delta_i^j\circ\mathbf{p}_j=\mathbf{p}_i\circ \xi_i^j$, we have: \begin{eqnarray} \label{Eq_bracketfijbis} \xi_i^j([\mathfrak{a}_j,\mathfrak{a}'_j]_{\mathcal{A}_j})\circ\mathbf{p}_j(m_j)=\left([\mathfrak{a}_i,\mathfrak{a}'_i]_{\mathcal{A}_i}\right)\circ \mathbf{p}_i\circ \xi_i^j(m_j). \end{eqnarray} Naturally, since $X_i$ (resp.
$X'_j$) are $\xi_i^j$-related, we also have \begin{eqnarray}\label{Eq_bracketxiij} [X_i,X'_i]\left(\xi_i^j(m_j)\right)=T\xi_i^j\left([X_j,X'_j]\right)(m_j). \end{eqnarray} From (\ref{Eq_BracketTMi}) and (\ref{eq_bfTPsi}) we then obtain \begin{eqnarray}\label{Eq_BracketbfTM} \begin{aligned} \mathbf{T}\xi_i^j\left([\mathbf{X}_j,\mathbf{X}^\prime_j]_{\mathbf{T}\mathcal{M}_j}\right)(m_j) &=\left(\xi_i^j([\mathfrak{a}_j,\mathfrak{a}^\prime_j]_{\mathcal{A}_j})\circ\mathbf{p}_j(m_j),T_{m_j}\xi_i^j([{X}_j,{X}^\prime_j])\right)\\ &=\left([\mathfrak{a}_i,\mathfrak{a}^\prime_i]_{\mathcal{A}_i}\circ\mathbf{p}_i\circ \xi_i^j(m_j),[{X}_i,X^\prime_i]\circ \xi_i^j(m_j)\right)\\ &=\left([\mathbf{X}_i,\mathbf{X}_i^\prime]_{\mathbf{T}\mathcal{M}_i}\right)\circ\xi_i^j(m_j). \end{aligned} \end{eqnarray} It follows that we can define: \begin{eqnarray}\label{Eq_ProjectiveLimit bracketTMi} [\mathbf{X},\mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underleftarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i}. \end{eqnarray} Now, since each bracket $[.,.]_{\mathbf{T}\mathcal{M}_i}$ satisfies the Jacobi identity, from $(\ref{Eq_ProjectiveLimit bracketTMi})$, the same is true for $[.,.]_{\mathbf{T}\mathcal{M}}$ on projective limits $\mathbf{X}$ and $\mathbf{X}'$ of projectable sections $(\mathbf{X}_i)$ and $(\mathbf{X}'_i)$. Finally, as \begin{eqnarray}\label{Eq_compatibilityrho} [\hat{\rho}_i(\mathbf{X}_i),\hat{\rho}_i(\mathbf{X}'_i)]=\hat{\rho}_i\left([\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i}\right),
\end{eqnarray} from the compatibility with the bonding maps of the sequences of sections $(\mathfrak{a}_i)$, anchors $(\rho_i)$, vector fields $(X_i)$ and Lie brackets $[.,.]_{\mathbf{T}\mathcal{M}_i}$ on the projective sequence $(\mathbf{T}\mathcal{M}_i)$, it follows that $\hat{\rho}$ satisfies the same type of relation as (\ref{Eq_compatibilityrho}).\\ Now, using the same arguments as in the proof of Lemma \ref{L_extbracket}, we can extend this bracket to a Lie bracket on the module $\mathfrak{P}^{pl}(\mathbf{T}\mathcal{M}_U)$, so that we obtain a Lie algebra, and the restriction of $\hat{\rho}$ to this Lie algebra is a morphism of Lie algebras into the Lie algebra of vector fields on $\mathcal{M}_U$. \end{proof} \section{Direct limits of prolongations of Banach Lie algebroids} \label{__DirectLimitsOfProlongationsOfBanachAnchoredBundles} As in the previous section, we introduce: \begin{definition} \label{D_DirectSequenceofBanachLieAlgebroids} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i} ,[.,.]_{\mathcal{A}_i}\right),\left( \eta_i^j, \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. of Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right),\left( \chi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i\leq j}$).\\ A sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ is called compatible with algebroid prolongations if, for all $\left( i,j \right) \in\mathbb{N}^2$ such that $i\leq j$, we have \begin{description} \item[\textbf{(DSPBLAB 1)}] {\hfil $\chi_i^j(\mathcal{M}_i)\subset \mathcal{M}_j$;} \item[\textbf{(DSPBLAB 2)}] {\hfil $\varepsilon_i^j \circ \mathbf{p}_i = \mathbf{p}_j \circ \chi_i^j$}.
\end{description} \end{definition} \begin{remark} The context of direct limit in which we work concerns ascending sequences of Banach manifolds $(M_i)_{i\in \mathbb{N}}$ where $M_i$ is a closed submanifold of $M_{i+1}$. The reason for this assumption is essentially that their direct limit has a natural structure of not necessarily Hausdorff (n.n.H.) convenient manifold. \\ Although each manifold $\mathcal{M}_i$ is open in $\mathcal{E}_i$, since $\mathcal{E}_i$ is a closed subbundle of $\mathcal{E}_j$, it follows that $ \left( \mathcal{M}_i,\chi_i^j \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ is an ascending sequence of convenient manifolds. \end{remark} As in the previous section, for each $i\in \mathbb{N}$, we denote by $(\mathbf{T}\mathcal{A}_i, \hat{\mathbf{p}}_i, \mathcal{A}_i, \hat{\rho}_i)$ the prolongation of $\mathcal{A}_i$ over $\mathcal{M}_i=\mathcal{A}_i$ and by $[.,.]_{\mathbf{T}\mathcal{A}_i}$ the prolongation of the Lie bracket $[.,.]_{\mathcal{A}_i}$ on projectable sections of $\mathbf{T}\mathcal{A}_i$.\\ Adapting the argument used in the proof of Proposition \ref{P_ProjectiveLimitProlongationBracket} to this setting of strong ascending sequences and direct limits, we obtain the result below. Note that, in this context, the prolongation is not Hausdorff in general. However, all the arguments used in the proofs are local and so they still work in this context. \begin{proposition} \label{P_DirectLimitProlongationBracket} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right) , \left( \eta_i^j,\xi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. Banach bundles $ \left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right) , \left( \xi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations.
Then \begin{enumerate} \item $\left( \underrightarrow{\lim}\mathbf{T}\mathcal{M}_i,\underrightarrow{\lim}\hat{\mathbf{p}}_i,\underrightarrow{\lim}\mathcal{M}_i, \underrightarrow{\lim}\hat{\rho}_i \right) $ is a convenient anchored bundle which is the prolongation of $\left( \underrightarrow{\lim}\mathcal{A}_i,\underrightarrow{\lim}\pi_i,\underrightarrow{\lim}M_i \right)$ over $\underrightarrow{\lim}\mathcal{M}_i$. \item Consider any open set $U$ in $M$ and a sequence of open sets $U_i$ in $M_i$ such that $U=\underrightarrow{\lim}U_i$. We denote by $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ the $C^\infty(U)$-module generated by all direct limits $\mathbf{X}=\underrightarrow{\lim}\mathbf{X}_i$ of projectable sections $\mathbf{X}_i$ of $\mathbf{T}\mathcal{M}_i$ over $\{\mathcal{M}_i\}_{U_i}$.\\ Then there exists a Lie bracket $[.,.]_{\mathbf{T}\mathcal{M}_U} $ defined on $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ which satisfies the assumptions of Definition \ref{D_AlmostLieBracketOnAnAnchoredBundle}, characterized by \[ [\mathbf{X}, \mathbf{X}']_{\mathbf{T}\mathcal{M}_U}=\underrightarrow{\lim}[\mathbf{X}_i,\mathbf{X}'_i]_{\mathbf{T}\mathcal{M}_i} \] where $\mathbf{X}=\underrightarrow{\lim}\mathbf{X}_i$ and $\mathbf{X}'=\underrightarrow{\lim}\mathbf{X}'_i$.\\ \end{enumerate} \end{proposition} As in the context of projective sequences, $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U)$ is a submodule of the module $\mathfrak{P}(\mathbf{T}\mathcal{M}_U)$ generated by projectable sections of $\mathbf{T}\mathcal{M}_U$. Therefore, again by an argument analogous to the one used in the proof of Theorem \ref{T_PartialLieAlgebroid}, the set $\{\mathfrak{P}^{dl}(\mathbf{T}\mathcal{M}_U), U \textrm{ open set in } M\}$ generates a sheaf $\mathfrak{P}^{dl}_\mathcal{M}$ over $\mathcal{M}$.
Moreover, for any open set $\mathcal{U}$ in $ \mathcal{M}$, according to Proposition \ref{P_DirectLimitProlongationBracket}, the restriction of $\hat{\rho}$ to each $\mathfrak{P}^{dl}(\mathbf{T}\mathcal{U})$ is a Lie algebra morphism into the Lie algebra of vector fields on $\mathcal{U}$. Thus we obtain: \begin{theorem} \label{T_DirectLimitOfProlongationOfBanachAnchoredBundles} Consider a direct sequence of Banach-Lie algebroid bundles \begin{center} $\left( \left( \mathcal{A}_{i},\pi_{i},M_{i},\rho_{i},[.,.]_{\mathcal{A}_i} \right),\left( \eta_i^j, \xi_{i}^{j}, \varepsilon_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$ \end{center} (resp. Banach bundles $\left( \left( \mathcal{E}_{i},\mathbf{\pi}_{i},M_{i} \right),\left( \xi_i^j, \varepsilon_i^j \right) \right) _{(i,j)\in\mathbb{N}^2, i \leq j}$) and a sequence of open fibred manifolds $\mathcal{M}_i $ of $\mathcal{E}_i$ compatible with algebroid prolongations. Then $\left( \underrightarrow{\lim}\mathbf{T}\mathcal{M}_i,\underrightarrow{\lim}\hat{\mathbf{p}}_i,\underrightarrow{\lim}\mathcal{M}_i, \underrightarrow{\lim}\hat{\rho}_i \right) $ is a convenient anchored bundle which is the prolongation of $\left( \underrightarrow{\lim}\mathcal{A}_i,\underrightarrow{\lim}\pi_i,\underrightarrow{\lim}M_i \right)$ over $\underrightarrow{\lim}\mathcal{M}_i$. Moreover $\left(\mathbf{T}\mathcal{M},\hat{\bf p}, \mathcal{M}, \hat{\rho},\mathfrak{P}^{dl}_{\mathcal{M}}\right)$ is a strong partial convenient Lie algebroid.
\end{theorem} \appendix \section{Projective limits} \label{_ProjectiveLimits} \subsection{Projective limits of topological spaces} \label{__ProjectiveLimitsOfTopologicalSpaces} \begin{definition} \label{D_ProjectiveSequenceTopologicalSpaces} A projective sequence of topological spaces\index{projective sequence!of topological spaces} is a sequence\\ $\left( \left( X_{i},\delta_{i}^{j}\right) \right)_{(i,j) \in \mathbb{N}^2,\; j \geq i}$ where \begin{description} \item[\textbf{(PSTS 1)}] For all $i\in\mathbb{N},$ $X_{i}$ is a topological space; \item[\textbf{(PSTS 2)}] For all $\left( i,j \right)\in\mathbb{N}^2$ such that $j\geq i$, $\delta_{i}^{j}:X_{j}\to X_{i}$ is a continuous map; \item[\textbf{(PSTS 3)}] For all $i\in\mathbb{N}$, $\delta_{i}^{i}={Id}_{X_{i}}$; \item[\textbf{(PSTS 4)}] For all $\left( i,j,k \right)\in\mathbb{N}^3$ such that $k \geq j \geq i$, $\delta_{i}^{j}\circ\delta_{j}^{k}=\delta_{i}^{k}$. \end{description} \end{definition} \begin{notation} \label{N_ProjectiveSequence} For the sake of simplicity, the projective sequence $\left( \left( X_{i},\delta_{i}^{j}\right) \right)_{(i,j) \in \mathbb{N}^2,\; j \geq i}$ will be denoted $\left( X_{i},\delta_{i}^{j} \right) _{j\geq i}$. \end{notation} An element $\left( x_{i}\right) _{i\in\mathbb{N}}$ of the product ${\displaystyle\prod\limits_{i\in\mathbb{N}}}X_{i}$ is called a \emph{thread}\index{thread} if, for all $j\geq i$, $\delta_{i}^{j}\left( x_{j}\right)=x_{i}$. \begin{definition} \label{D_ProjectiveLimitOfASequence} The set $X=\underleftarrow{\lim}X_{i}$\index{$X=\underleftarrow{\lim}X_{i}$} of all threads, endowed with the coarsest topology for which all the projections $\delta_{i}:X\to X_{i} $ are continuous, is called the projective limit of the sequence\index{projective limit!of a sequence} $\left( X_{i},\delta_{i}^{j} \right) _{j\geq i}$.
\end{definition} A basis\index{basis!of a topology} of the topology of $X$ consists of the subsets $\left( \delta_{i} \right) ^{-1}\left( U_{i}\right) $ where $U_{i}$ is an open subset of $X_{i}$ (so that $\delta_i$ is an open map whenever it is surjective). \begin{definition} \label{D_ProjectiveSequenceMappings} Let $\left( X_{i},\delta_{i}^{j} \right) _{j\geq i}$ and $\left( Y_{i},\gamma_{i}^{j} \right) _{j\geq i}$ be two projective sequences whose respective projective limits are $X$ and $Y$. A sequence $\left( f_{i}\right) _{i\in\mathbb{N}}$ of continuous mappings $f_{i}:X_{i}\to Y_{i}$, satisfying, for all $(i,j) \in \mathbb{N}^2,$ $j \geq i,$ the coherence condition\index{coherence condition} \[ \gamma_{i}^{j}\circ f_{j}=f_{i}\circ\delta_{i}^{j} \] is called a projective sequence of mappings\index{projective sequence!of mappings}. \end{definition} The projective limit of this sequence is the mapping \[ \begin{array} [c]{cccc} f: & X & \to & Y\\ & \left( x_{i}\right) _{i\in\mathbb{N}} & \mapsto & \left( f_{i}\left( x_{i}\right) \right) _{i\in\mathbb{N}} \end{array} \] The mapping $f$ is then continuous, as a projective limit of continuous mappings (cf. \cite{AbMa}). \subsection{Projective limits of Banach spaces} \label{__ProjectiveLimitsOfBanachSpaces} Consider a projective sequence $\left( \mathbb{E}_{i},\delta_{i}^{j} \right) _{j\geq i}$ of Banach spaces. \begin{remark} \label{R_ProjectiveSequenceOfBondingsMapsBetweenBanachSpacesDeterminedByConsecutiveRanks} Since we have a countable sequence of Banach spaces, according to the properties of the bonding maps, the sequence $\left( \delta_i^j\right)_{(i,j)\in \mathbb{N}^2, \;j\geq i}$ is completely determined by the sequence of bonding maps $\left( \delta_i^{i+1}\right) _{i\in \mathbb{N}}$. \end{remark} \subsection{Projective limits of differential maps} \label{__ProjectiveLimitsOfDifferentialMapsBetweenFrechetSpaces} The following proposition (cf.
\cite{Gal}, Lemma 1.2 and \cite{BCP}, Chapter 4) is essential. \begin{proposition} \label{P_ProjectiveLimitsOfDifferentialMaps} Let $\left( \mathbb{E}_i,\delta_i^j \right) _{j\geq i}$ be a projective sequence of Banach spaces whose projective limit is the Fréchet space $\mathbb{F}=\underleftarrow{\lim} \mathbb{E}_i$ and $ \left( f_i : \mathbb{E}_i \to \mathbb{E}_i \right) _{i \in \mathbb{N}} $ a projective sequence of differential maps whose projective limit is $f=\underleftarrow{\lim} f_i$. Then the following conditions hold: \begin{enumerate} \item $f$ is smooth in the convenient sense (cf. \cite{KrMi}); \item For all $x = \left( x_i \right) _{i \in \mathbb{N}}$, $df_x = \underleftarrow{\lim} { \left( df_i \right) }_{x_i} $; \item $df = \underleftarrow{\lim}df_i$. \end{enumerate} \end{proposition} \subsection{Projective limits of Banach manifolds} \label{__ProjectiveLimitsOfBanachManifolds} \begin{definition} \label{D_ProjectiveSequenceofBanachManifolds} The projective sequence $\left( M_{i},\delta_{i}^{j} \right) _{j\geq i}$ is called a \textit{projective sequence of Banach manifolds}\index{projective sequence!of Banach manifolds} if \begin{description} \item[\textbf{(PSBM 1)}] $M_{i}$ is a manifold modelled on the Banach space $\mathbb{M}_{i}$; \item[\textbf{(PSBM 2)}] $\left( \mathbb{M}_{i},\overline{\delta_{i}^{j}}\right) _{j\geq i}$ is a projective sequence of Banach spaces; \item[\textbf{(PSBM 3)}] For all $x=\left( x_{i}\right) \in M=\underleftarrow{\lim}M_{i}$, there exists a projective sequence of local charts $\left( U_{i},\xi_{i}\right) _{i\in\mathbb{N}}$ such that $x_{i}\in U_{i}$, where one has the relation \[ \xi_{i}\circ\delta_{i}^{j}=\overline{\delta_{i}^{j}}\circ\xi_{j}; \] \item[\textbf{(PSBM 4)}] $U=\underleftarrow{\lim}U_{i}$ is a non empty open set in $M$.
\end{description} \end{definition} Under the assumptions \textbf{(PSBM 1)} and \textbf{(PSBM 2)} of Definition \ref{D_ProjectiveSequenceofBanachManifolds}, the conjunction of the assumptions \textbf{(PSBM 3)} and \textbf{(PSBM 4)} around $x\in M$ is called \emph{the projective limit chart property} around $x\in M$, and $(U=\underleftarrow{\lim}U_{i}, \varphi=\underleftarrow{\lim}\xi_{i})$ is called a \emph{projective limit chart}. The projective limit $M=\underleftarrow{\lim}M_{i}$ has a structure of Fr\'{e}chet manifold modelled on the Fr\'{e}chet space $\mathbb{M} =\underleftarrow{\lim}\mathbb{M}_{i}$ and is called a \emph{$\mathsf{PLB}$-manifold}\index{$\mathsf{PLB}$-manifold}. The differentiable structure is defined \textit{via} the charts $\left( U,\varphi\right) $ where $\varphi =\underleftarrow{\lim}\xi_{i}:U\to \underleftarrow{\lim}\xi_{i}\left(U_{i}\right).$\\ $\varphi$ is a homeomorphism (a projective limit of homeomorphisms) and the changes of charts $\left( \psi\circ \varphi^{-1}\right) _{|\varphi\left( U\right) }=\underleftarrow{\lim }\left( \left( \psi_{i}\circ\left( \xi_{i}\right) ^{-1}\right) _{|\xi_{i}\left( U_{i}\right) }\right) $ between open sets of Fr\'{e}chet spaces are smooth in the sense of convenient spaces. \subsection{Projective limits of Banach vector bundles } \label{__ProjectiveLimitsOfBanachVectorBundles} Let $\left( M_{i},\delta_{i}^{j}\right) _{j\geq i}$ be a projective sequence of Banach manifolds where each manifold $M_{i}$ is modelled on the Banach space $\mathbb{M}_{i}$.\\ For any integer $i$, let $ \left( E_{i},\pi_{i},M_{i} \right) $ be a Banach vector bundle whose typical fibre is the Banach space $\mathbb{E}_{i}$, where $\left( \mathbb{E}_{i},\lambda_{i}^{j}\right) _{j\geq i}$ is a projective sequence of Banach spaces.
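A classical illustration of such a projective sequence of Banach spaces (a standard example, recalled here only as a guide and not used in the sequel) is given by the spaces of $C^k$ maps: for $k\in\mathbb{N}$, let $\mathbb{E}_k=C^k([0,1])$ endowed with the norm $\left\Vert f\right\Vert_k=\sum_{l=0}^{k}\sup_{t\in[0,1]}|f^{(l)}(t)|$, the bonding maps $\delta_i^j:C^j([0,1])\to C^i([0,1])$, $j\geq i$, being the natural (continuous) inclusions. Then
\[
\underleftarrow{\lim}\, C^k([0,1])=C^\infty([0,1]),
\]
endowed with the Fr\'{e}chet topology of uniform convergence of all derivatives.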
\begin{definition} \label{D_ProjectiveSequenceBanachVectorBundles} $\left( (E_i,\pi_i,M_i),\left(\xi_i^j,\delta_i^j \right) \right) _{j \geq i}$, where $\xi_i^j:E_j \to E_i$ is a morphism of vector bundles, is called a projective sequence of Banach vector bundles\index{projective sequence!of Banach vector bundles} on the projective sequence of manifolds $\left( M_{i},\delta_{i}^{j}\right) _{j\geq i}$ if, for all $ \left( x_{i} \right)\in M $, there exists a projective sequence of trivializations $\left( U_{i},\tau_{i}\right) $ of $\left( E_{i},\pi _{i},M_{i}\right) $, where the $\tau_{i}:\left( \pi_{i}\right) ^{-1}\left( U_{i}\right) \to U_{i}\times\mathbb{E}_{i}$ are local diffeomorphisms, such that $x_{i}\in U_{i}$ (open in $M_{i}$), $U=\underleftarrow{\lim}U_{i}$ is a non empty open set in $M$ and, for all $(i,j) \in \mathbb{N}^2$ such that $j\geq i,$ we have the compatibility condition \begin{description} \item[(\textbf{PLBVB})] $\left( \delta_{i}^{j}\times\lambda_{i}^{j}\right) \circ\tau_{j}=\tau_{i}\circ \xi_i^j$. \end{description} \end{definition} With the previous notations, $(U=\underleftarrow{\lim}U_{i}, \tau=\underleftarrow{\lim}\tau_i)$ is called a \emph{projective bundle chart limit}\index{projective bundle chart limit}. The triple of projective limits $(E=\underleftarrow{\lim}E_{i}, \pi=\underleftarrow{\lim}\pi_{i}, M=\underleftarrow{\lim}M_{i})$ is called a \emph{projective limit of Banach bundles} or a $\mathsf{PLB}$-bundle\index{$\mathsf{PLB}$-bundle} for short. \\ The following proposition generalizes the result of \cite{Gal} about the projective limit of tangent bundles of Banach manifolds (cf. \cite{DGV} and \cite{BCP}). \begin{proposition} \label{P_ProjectiveLimitOfBanachVectorBundles} Let $\left( (E_i,\pi_i,M_i),\left(\xi_i^j,\delta_i^j \right) \right)_{j \geq i}$ be a projective sequence of Banach vector bundles.
\\ Then $\left( \underleftarrow{\lim}E_i,\underleftarrow{\lim}\pi_i,\underleftarrow{\lim}M_i \right) $ is a Fr\'{e}chet vector bundle. \end{proposition} \begin{definition} \label{D_ProjectiveSequenceofBanachLieAlgebroids} $\left( \left( E_{i},\pi_{i},M_{i},\rho_{i}, [.,.]_{i} \right),\left( \xi_i^j, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ is called a projective sequence of Lie algebroids \index{projective sequence!of Lie algebroids} if \begin{description} \item[\textbf{(PSBLA 1)}] $\left( E_{i},\xi_i^j\right) _{j\geq i} $ is a projective sequence of Banach vector bundles $(\pi_{i}:E_{i}\to M_{i})_{i\in\mathbb{N}}$ over the projective sequence of manifolds $ \left( M_{i},\delta_{i}^{j}\right) _{j\geq i}$; \item[\textbf{(PSBLA 2)}] For all $\left( i,j \right) \in\mathbb{N}^2$ such that $j\geq i$, one has \[ \rho_{i}\circ \xi_i^j=T\delta_{i}^{j}\circ\rho_{j}; \] \item[\textbf{(PSBLA 3)}] $\xi_i^j:E_{j}\to E_{i}$ is a Lie algebroid morphism from $\left( E_{j},\pi_{j},M_{j},\rho_{j} \right) $ to $\left( E_{i},\pi_{i},M_{i},\rho_{i}\right) $. \end{description} \end{definition} We then have the following result (cf. \cite{BCP}): \begin{theorem} \label{T_ProjectiveLimitOfBanachLieAlgebroids} Let $\left( \left( E_{i},\pi_{i},M_{i},\rho_{i}, [.,.]_{i} \right),\left( \xi_i^j, \delta_{i}^{j} \right) \right) _{(i,j)\in\mathbb{N}^2, j \geq i}$ be a projective sequence of Banach-Lie algebroids. If $(M_i,\delta_i^j)_{(i,j)\in\mathbb{N}^2, j \geq i}$ is a submersive projective sequence, then $\left(\underleftarrow{\lim}E_{i},\underleftarrow{\lim}\pi_{i},\underleftarrow{\lim}M_{i},\underleftarrow{\lim}\rho_{i}\right) $ is a strong partial Fr\'{e}chet Lie algebroid.
\end{theorem} \section{Direct limits} \label{_Directlimits} \subsection{Direct limits of topological spaces} \label{__DirectLimitsOfTopologicalSpaces} Let $\left\{ \left( X_{i},\varepsilon_{i}^{j}\right) \right\} _{(i,j)\in I^2,\ i \preccurlyeq j}$ be a direct system of topological spaces and continuous maps. The direct limit $\left( X,\left( \varepsilon_{i}\right) _{i\in I}\right) $ of the sets becomes the direct limit in the category ${\bf TOP}$ of topological spaces if $X$ is endowed with the direct limit topology (DL-topology for short)\index{DL-topology}\index{topology!DL-}, i.e. the final topology with respect to the canonical maps $\varepsilon_{i}:X_{i}\to X$, which is the finest topology making all the maps $\varepsilon_{i}$ continuous. So $O\subset X$ is open if and only if $\varepsilon_{i}^{-1}\left( O\right) $ is open in $X_{i}$ for each $i\in I$. \begin{definition} \label{D_AscendingSequenceOfTopologicalSpaces} \index{ascending sequence!of topological spaces} Let $\mathcal{S}=\left( \left( X_{n},\varepsilon_{n}^{m}\right) \right) _{(m,n) \in\mathbb{N}^2,\ n\leq m}$ be a direct sequence of topological spaces such that each $\varepsilon_{n}^{m}$ is injective. Without loss of generality, we may assume that we have \[ X_{1}\subset X_{2}\subset\cdots\subset X_{n}\subset X_{n+1}\subset\cdots \] and $\varepsilon_{n}^{n+1}$ becomes the natural inclusion. \begin{description} \item[\textbf{(ASTS)}] $\mathcal{S}$ will be called an ascending sequence of topological spaces. \item[\textbf{(SASTS)}] Moreover, if each $\varepsilon_{n}^{m}$ is a topological embedding, then we will say that $\mathcal{S}$ is a \emph{strict ascending sequence of topological spaces} (also called an \textit{expanding sequence}). \end{description} \end{definition} \begin{notation} \label{N_DirectSequence} The direct sequence $\left( \left( X_{n},\varepsilon_{n}^{m}\right) \right)_{(n,m) \in \mathbb{N}^2,\; n \leq m}$ will be denoted $\left( X_{n},\varepsilon_{n}^{m} \right) _{n \leq m}$.
\end{notation} If $\left( X_{n},\varepsilon_{n}^{m}\right) _{n\leq m}$ is a strict ascending sequence of topological spaces\index{ascending sequence!of topological spaces}, each $\varepsilon_{n}$ is a topological embedding from $X_{n}$ into $X=\underrightarrow{\lim}X_{n}$. \subsection{Direct limits of ascending sequences of Banach manifolds} \label{__DirectLimitsOfAscendingSequencesOfBanachManifolds} \index{direct limit!of Banach manifolds} \begin{definition} \label{D_AscendingSequenceOfBanachManifolds} $\mathcal{M}=(M_{n},\varepsilon_{n}^{n+1}) _{n\in\mathbb{N}}$ is called an ascending sequence of Banach manifolds if, for any $n\in\mathbb{N}$, $\left( M_{n},\varepsilon_{n}^{n+1}\right) $ is a closed submanifold of $M_{n+1}$. \end{definition} \begin{proposition} \label{P_ConditionsDefiningDLCP}(cf. \cite{CaPe}, \cite{BCP}) Let $\mathcal{M}=\left( M_{n},\varepsilon_{n}^{n+1} \right) _{n\in\mathbb{N}}$ be an ascending sequence of Banach manifolds.\\ Assume that for $x\in M=\underrightarrow {\lim}M_{n}$, there exists a sequence of charts $\left( (U_{n},\phi_{n})\right) _{n \in \mathbb{N}}$ of $\left( M_{n}\right) _{n \in \mathbb{N}}$, such that: \begin{description} \item[\textbf{(ASC 1)}] $(U_{n})_{n\in\mathbb{N}}$ is an ascending sequence of chart domains; \item[\textbf{(ASC 2)}] {\hfil $\forall n\in\mathbb{N},\ \phi_{n+1}\circ\varepsilon_{n}^{n+1}=\iota_{n}^{n+1}\circ\phi_{n}$.} \end{description} Then $U=\underrightarrow{\lim}U_{n}$ is an open set of $M$ endowed with the $\mathrm{DL}$-topology and $\phi=\underrightarrow{\lim}\phi_{n}$ is a well defined map from $U$ to $\mathbb{M}=\underrightarrow{\lim}\mathbb{M}_{n}$.\\ Moreover, $\phi$ is a homeomorphism from $U$ onto the open set $\phi(U)$ of $\mathbb{M}$.
\end{proposition} \begin{definition} \label{D_DLChartProperty} We say that an ascending sequence $\mathcal{M}= (M_{n},\varepsilon_{n}^{n+1}) _{n\in\mathbb{N}}$ of Banach manifolds has the direct limit chart property \emph{\textbf{(DLCP)}}\index{direct!limit chart} at $x$ if it satisfies both \emph{\textbf{(ASC 1)}} and \emph{\textbf{(ASC 2)}}. \end{definition} We then have the fundamental result (cf. \cite{CaPe}). \begin{theorem} \label{T_LBCmanifold} Let $ \left( M_{n} \right) _{n\in\mathbb{N}}$ be an ascending sequence of Banach $C^\infty$-manifolds, modelled on the Banach spaces $\mathbb{M}_{n}$. Assume that \begin{description} \item[\textbf{(ASBM 1)}] $ \left( M_{n} \right) _{n\in\mathbb{N}}$ has the direct limit chart property \emph{\textbf{(DLCP)}} at each point $x\in M=\underrightarrow{\lim}M_{n}$ \item[\textbf{(ASBM 2)}] $\mathbb{M}=\underrightarrow{\lim}\mathbb{M}_{n}$ is a convenient space. \end{description} Then there is a unique n.n.H. convenient manifold structure on $M=\underrightarrow{\lim}M_{n}$ modelled on the convenient space $\mathbb{M}$ such that the topology associated to this structure is the $DL$-topology on $M$. 
\\ In particular, for each $n\in\mathbb{N}$, the canonical injection $\varepsilon_{n}:M_{n}\longrightarrow M$ is an injective conveniently smooth map and $(M_{n},\varepsilon_{n})$ is a closed submanifold of $M$.\\ Moreover, if each $M_n$ is locally compact or is open in $M_{n+1}$ or is a paracompact Banach manifold closed in $M_{n+1}$, then $M=\underrightarrow{\lim}M_{n}$ is provided with a Hausdorff convenient manifold structure.\\ \end{theorem} \subsection{Direct limits of Banach vector bundles} \label{__DirectLimitsOfBanachVectorBundles} \index{direct limit!of Banach vector bundles} \begin{definition} \label{D_AscendingSequenceBanachVectorBundles}$\left( (E_n,\pi_n,M_n),\left(\lambda_n^{n+1},\varepsilon_n^{n+1} \right) \right)_{n \in \mathbb{N}}$ is called a strong ascending sequence of Banach vector bundles if the following assumptions are satisfied: \begin{description} \item[\textbf{(ASBVB 1)}] $\mathcal{M}=(M_{n})_{n\in\mathbb{N}}$ is an ascending sequence of Banach $C^\infty$-manifolds, where $M_{n}$ is modelled on the Banach space $\mathbb{M}_{n}$ such that $\mathbb{M}_{n}$ is a supplemented Banach subspace of $\mathbb{M}_{n+1}$ and the inclusion $\varepsilon_{n}^{n+1}:M_{n}\to M_{n+1}$ is a $C^\infty$ injective map such that $(M_{n},\varepsilon_{n}^{n+1})$ is a closed submanifold of $M_{n+1}$; \item[\textbf{(ASBVB 2)}] The sequence $(E_{n})_{n\in\mathbb{N}}$ is an ascending sequence such that the sequence of typical fibres $\left(\mathbb{E}_{n}\right) _{n\in\mathbb{N}}$ of $(E_{n})_{n\in\mathbb{N}}$ is an ascending sequence of Banach spaces and $\mathbb{E}_{n}$ is a supplemented Banach subspace of $\mathbb{E}_{n+1}$; \item[\textbf{(ASBVB 3)}] For each $n\in\mathbb{N}$, $\pi_{n+1}\circ\lambda_{n}^{n+1}=\varepsilon_{n}^{n+1}\circ\pi_{n}$ where $\lambda _{n}^{n+1}:E_{n}\to E_{n+1}$ is the natural inclusion; \item[\textbf{(ASBVB 4)}] Any $x\in M=\underrightarrow{\lim}M_{n}$ has the direct limit chart property \emph{\textbf{(DLCP) }}for $(U=\underrightarrow 
{\lim}U_{n},\phi=\underrightarrow{\lim}\phi_{n})$; \item[\textbf{(ASBVB 5)}] For each $n\in\mathbb{N}$, there exists a trivialization $\Psi_{n}:\left( \pi_{n}\right) ^{-1}\left( U_{n}\right) \to U_{n}\times\mathbb{E}_{n}$ such that, for any $i\leq j$, the following diagram is commutative: \end{description} \[ \xymatrix{ \left( \pi_{i} \right) ^{-1} \left( U_{i} \right) \ar[r]^{\lambda_{i}^{j}}\ar[d]_{\Psi_{i}} & \left( \pi_{j} \right) ^{-1} \left( U_{j} \right) \ar[d]^{\Psi_j}\\ U_{i} \times \mathbb{E}_{i} \ar[r]^{\varepsilon_{i}^{j} \times \iota_{i}^{j}} & U_{j} \times \mathbb{E}_{j} } \] \end{definition} For example, the sequence $\left( \left( TM_n,\pi_n,M_n \right) , \left( T\varepsilon_n^{n+1},\varepsilon_n^{n+1} \right) \right) _{n \in \mathbb{N}}$ is a strong ascending sequence of Banach vector bundles whenever $ \left( M_{n} \right)_{n\in\mathbb{N}}$ is an ascending sequence, each model $\mathbb{M}_{n}$ being supplemented in $\mathbb{M}_{n+1}$, which has the direct limit chart property at each point $x \in M=\underrightarrow{\lim}M_{n}$. \begin{notation} From now on and for the sake of simplicity, the strong ascending sequence of vector bundles $\left( (E_n,\pi_n,M_n),\left(\lambda_n^{n+1},\varepsilon_n^{n+1} \right) \right)_{n \in \mathbb{N}}$ will be denoted $\left( E_{n},\pi_{n},M_{n}\right)_{\underrightarrow{n}}$. \end{notation} We then have the following result given in \cite{CaPe}. \begin{proposition} \label{P_StructureOnDirectLimitOfLinearBundles} Let $\left( E_{n},\pi_{n},M_{n}\right) _{\underrightarrow{n}}$ be a strong ascending sequence of Banach vector bundles. We have: \begin{enumerate} \item $\underrightarrow{\lim}E_{n}$ has a structure of not necessarily Hausdorff convenient manifold modelled on the LB-space $\underrightarrow{\lim}\mathbb{M}_{n}\times\underrightarrow{\lim}\mathbb{E}_{n}$ which has a Hausdorff convenient structure if and only if $M$ is Hausdorff.
\item $\left( \underrightarrow{\lim}E_{n},\underrightarrow{\lim}\pi _{n},\underrightarrow{\lim}M_{n}\right) $ can be endowed with a structure of convenient vector bundle whose typical fibre is $\underrightarrow{\lim}\mathbb{E}_{n}$. \end{enumerate} \end{proposition} \end{document}
\begin{document} \title{Almost quadratic gap between partition complexity and query/communication complexity} \date{} \author{Andris Ambainis$^1$ \and Martins Kokainis$^1$ } \maketitle \begin{abstract} \noindent We show nearly quadratic separations between two pairs of complexity measures: \begin{itemize} \item We show that there is a Boolean function $f$ with $D(f)=\Omega((D^{sc}(f))^{2-o(1)})$ where $D(f)$ is the deterministic query complexity of $f$ and $D^{sc}$ is the subcube partition complexity of $f$; \item As a consequence, we obtain that there is $f(x, y)$ such that $D^{cc}(f)=\Omega(\log^{2-o(1)}\chi(f))$ where $D^{cc}(f)$ is the deterministic 2-party communication complexity of $f$ (in the standard 2-party model of communication) and $\chi(f)$ is the partition number of $f$. \end{itemize} Both of those separations are nearly optimal: it is well known that $D(f)=O((D^{sc}(f))^{2})$ and $D^{cc}(f)=O(\log^2\chi(f))$. \end{abstract} \footnotetext[1]{Faculty of Computing, University of Latvia. E-mail: {\tt [email protected], [email protected]}. Supported by the European Commission FET-Proactive project QALGO, ERC Advanced Grant MQC and Latvian State Research programme NexIT project No.1.} \setcounter{page}{0} \thispagestyle{empty} \section{Introduction} Both query complexity and communication complexity of a function $f$ can be lower bounded by an appropriate measure of complexity on partitions of the input set with the property that $f$ is constant on every part of the partition. In the communication complexity setting, the {\em partition number} of $f:X\times Y\rightarrow \{0, 1\}$ (denoted by $\chi(f)$) is the smallest number of rectangles $X_i \times Y_i$ in a partition of $X\times Y$ with the property that $f$ is constant (either 0 or 1) on every $X_i \times Y_i$. If a deterministic communication protocol communicates $k$ bits, it specifies a partition into $2^k$ rectangles. 
Therefore \cite{Yao79}, $D^{cc}(f)\geq \log \chi(f)$ where $D^{cc}(f)$ is the deterministic communication complexity of $f$ in the standard two-party model. The corresponding notion in query complexity is the {\em subcube partition complexity} $D^{sc}(f)$ of a function $f:\{0, 1\}^n\rightarrow \{0, 1\}$ \cite{FKW,Kothari}. It is defined as the smallest $k$ for which $\{0, 1\}^n$ can be partitioned into subcubes $S_i$ so that each $S_i$ is defined by fixing at most $k$ variables and, on each $S_i$, $f$ is constant. Again, it is easy to see that $D^{sc}(f)$ is a lower bound for the deterministic decision tree complexity $D(f)$ (since any deterministic decision tree defines a partition of $\{0, 1\}^n$). Although we do not consider randomized complexity in this paper, we note that both of these measures have randomized counterparts \cite{FKW,JK,JLV} which provide lower bounds for randomized communication complexity and randomized query complexity. We study the question: how tight are the partition lower bounds? For communication complexity, Aho et al. \cite{AUY83} showed that $D^{cc}(f)=O(\log^2 \chi(f))$, by observing that $D^{cc}(f)$ is upper-bounded by the square of the non-deterministic communication complexity which, in turn, is at most $\log \chi(f)$. Since then, it has been an open problem to determine whether either of these two bounds is tight. For query complexity, it is well known that $D(f) = O(C^2(f))$ (where $C(f)$ is the standard certificate complexity without the unambiguity requirement) \cite{BW}. Since $C(f)\leq D^{sc}(f)$, we have $D(f) = O( \lr{D^{sc}(f)}^{2})$. On the other hand, Savicky \cite{Sav02} constructed a function $f$ with $D(f) = \Omega( \lr{D^{sc}(f)}^{1.128...})$.
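The subcube partition complexity just defined can be illustrated by a brute-force checker (a toy sketch of ours, feasible only for tiny $n$; the function name `subcube_partition_width` is hypothetical). Given candidate subcubes as partial assignments, it verifies that they partition $\{0,1\}^n$, that $f$ is constant on each, and reports the largest number of fixed variables.

```python
from itertools import product

def subcube_partition_width(n, f, parts):
    """Check that `parts` (each a dict index -> bit, i.e. a partial
    assignment) partition {0,1}^n with f constant on every subcube;
    return the largest number of fixed variables, or None if invalid."""
    values = [set() for _ in parts]
    for x in product((0, 1), repeat=n):
        hits = [j for j, a in enumerate(parts)
                if all(x[i] == v for i, v in a.items())]
        if len(hits) != 1:
            return None                 # not a partition of {0,1}^n
        values[hits[0]].add(f(x))
    if any(len(s) > 1 for s in values):
        return None                     # f not constant on some part
    return max(len(a) for a in parts)
```

For example, $\{x_0=1\}$, $\{x_0=0, x_1=1\}$, $\{x_0=0, x_1=0\}$ witness $D^{sc}(OR_2)\leq 2$, while the same partition fails for $XOR_2$ because $f$ is not constant on $\{x_0=1\}$.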
Recently, G\"o\"os, Pitassi and Watson \cite{GPW15} made progress on these longstanding open questions, by constructing a function $f$ with $D(f) = \tilde{\Omega}( \lr{D^{sc}(f)}^{1.5})$ and showing that a separation between $D(f)$ and $D^{sc}(f)$ can be ``lifted'' from query complexity to communication complexity. Thus, $D^{cc}(f)=\tilde{\Omega}(\log^{1.5} \chi(f))$. We improve this result by constructing $f$ with $D(f)=\Omega( \lr{D^{sc}(f)}^{2-o(1)})$ (which almost matches the upper bound of $D(f)=O( \lr{D^{sc}(f)}^{2}) $). This also implies a similar separation between $D^{cc}(f)$ and $\log \chi(f)$, via the lifting result of \cite{GPW15}. Our construction is based on the {\em cheat-sheet} method that was very recently developed by Aaronson, Ben-David and Kothari \cite{ABK} to give better separations between query complexity in different models of computation and related complexity measures. (In particular, they used cheat sheets to give the first superquadratic separation between quantum and randomized query complexities, for a total Boolean function.) The cheat-sheet method takes a function $f$ and produces a new function $f_{CS}$ consisting of a composition of several copies of $f$, together with ``cheat sheets'' that allow a quick verification of values of $f$, given a pointer to the right cheat sheet. In section \ref{sec:res}, we observe that cheat-sheet functions have the property that $f_{CS}^{-1}(1)$ can be partitioned into subcubes defined by fixing substantially fewer than $D^{sc}(f)$ variables. This property does not immediately imply a better separation between $D$ and $D^{sc}$, because the cheat-sheet construction does not give a similar partition for $f_{CS}^{-1}(0)$. However, we can compose the cheat-sheet construction with several rebalancing steps which rebalance the complexity of partitions for $f_{CS}^{-1}(0)$ and $f_{CS}^{-1}(1)$. Repeating this composed construction many times gives $D(f)=\Omega( \lr{D^{sc}(f)}^{2-o(1)})$.
\section{Preliminaries} We use $[n]$ to denote $\{1, 2, \ldots, n\}$ and $\log n$ to denote $\log_2 n$. We use the big-$\tilde{O}$ notation, a counterpart of the big-$O$ notation that hides polylogarithmic factors: \begin{itemize} \item $f(n)=\tilde{O}(g(n))$ if $f(n) = O(g(n) \log^c g(n))$ for some constant $c$ and \item $f(n)=\tilde{\Omega}(g(n))$ if $f(n)= \Omega(\frac{g(n)}{\log^c g(n)})$ for some constant $c$. \end{itemize} \subsection{Complexity measures for Boolean functions} We study the complexity of Boolean functions $f:\{0, 1\}^n \rightarrow \{0, 1\}$ (or, more generally, functions $f:\Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n \rightarrow \{0, 1\}$ where $\Sigma_i$ are finite sets of arbitrary size). We denote the input variables by $x_1, \ldots, x_n$. Then, the function is $f(x_1, \ldots, x_n)$. Often, we use $x$ as a shortcut for $(x_1, \ldots, x_n)$ and $f(x)$ as a shortcut for $f(x_1, \ldots, x_n)$. $AND_n$ denotes the function $AND_n(x_1, \ldots, x_n)$ which is 1 if $x_1=\ldots=x_n=1$ and 0 otherwise. $OR_n$ denotes the function $OR_n(x_1, \ldots, x_n)$ which is 1 if $x_i=1$ for at least one $i$ and 0 otherwise. For functions $f:\{0, 1\}^n \rightarrow \{0, 1\}$ and $g:\Sigma_1 \times \ldots \times \Sigma_k \rightarrow \{0, 1\}$, their composition is the function $f\circ g^n: (\Sigma_1 \times \ldots \times \Sigma_k)^n \rightarrow \{0, 1\}$ defined by \[ f\circ g^n (x_{11}, \ldots, x_{1k}, \ldots, x_{n1}, \ldots, x_{nk}) = f(g(x^{(1)}), \ldots, g(x^{(n)})) \] where $x^{(i)}$ denotes $(x_{i1}, \ldots, x_{ik})$. We consider the complexity of Boolean functions in the decision tree (query) model. More information on this model can be found in the survey by Buhrman and de Wolf \cite{BW}. A summary of recent results (that appeared after \cite{BW} was published) can be found in \cite{ABK}.
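The composition $f\circ g^n$ above can be written directly as a small helper (a sketch of ours; the name `compose` is hypothetical), taking the flat $nk$-tuple input exactly as in the displayed formula.

```python
def compose(f, g, n, k):
    """Return f o g^n for f on n bits and g on k-tuples, where the
    composed function takes a flat tuple of n*k input symbols."""
    def fg(x):
        # x[i*k:(i+1)*k] is the block x^{(i)} fed into the i-th copy of g
        return f(tuple(g(x[i*k:(i+1)*k]) for i in range(n)))
    return fg
```

For instance, $AND_2 \circ OR_2^2$ applied to $(1,0,0,0)$ evaluates $AND(OR(1,0), OR(0,0)) = 0$.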
{\bf Deterministic query complexity.} A {\em deterministic query algorithm} or {\em deterministic decision tree} is a deterministic algorithm ${\cal A}$ which accesses the input $(x_1, \ldots, x_n)$ by querying the input variables $x_i$. We say that ${\cal A}$ computes a function $f(x_1, \ldots, x_n)$ if, for any $(x_1, \ldots, x_n)\in\{0, 1\}^n$, the output of ${\cal A}$ is equal to $f(x_1, \ldots, x_n)$. The {\em deterministic query complexity} of $f$, $D(f)$, is the smallest $k$ such that there exists a deterministic query algorithm ${\cal A}$ that computes $f(x_1, \ldots, x_n)$ and, on every $(x_1, \ldots, x_n)$, queries at most $k$ input variables $x_i$. {\bf Partial assignments.} A \emph{partial assignment} in $ \Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n $, where $ \Sigma_i $ are finite alphabets, is a function $a : I_a \to \bigcup_{i=1}^n \Sigma_i $, where $ I_a \subset [n] $, satisfying $ a(i) \in \Sigma_i $ for all $ i \in I_a$. We say that an assignment $a$ {\em fixes a variable} $x_i$ if $i\in I_a$. The length of a partial assignment is the size of the set $I_a$. A string $x \in \Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n $ is \emph{consistent} with the assignment $a$ if $x_i =a(i)$ for all $ i \in I_a$; then we denote $ x \sim a$. Every partial assignment $a$ defines an associated \emph{subcube}, which is the set of all strings consistent with it: \[ S_C = \lrb{x \in \Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n \ \vline\ x \sim a }. \] In the other direction, for every subcube there is a unique partial assignment that defines it. {\bf Certificate complexity.} For $b\in \lrb{0,1}$, a \emph{$b$-certificate} for a function $f : \Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n \to\lrb{0,1}$ is a partial assignment $ a$ such that $ f(x) = b$ for all $x\sim a$.
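The $b$-certificate condition can be checked by brute force for tiny $n$ (a sketch of ours; `is_b_certificate` is a hypothetical helper name): enumerate all strings consistent with the partial assignment and verify that $f$ equals $b$ on each.

```python
from itertools import product

def is_b_certificate(f, n, a, b, alphabet=(0, 1)):
    """Check whether the partial assignment `a` (dict index -> symbol)
    is a b-certificate for f on alphabet^n: f must equal b on every
    string consistent with `a`.  Brute force, tiny n only."""
    return all(f(x) == b
               for x in product(alphabet, repeat=n)
               if all(x[i] == v for i, v in a.items()))
```

For $OR_3$, fixing a single variable to 1 is a 1-certificate, while a 0-certificate must fix all three variables to 0.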
The \emph{$b$-certificate complexity} of $f$ (denoted by $ C_b(f) $) is the smallest number $k$ such that, for every $x$ with $f(x)=b$, there exists a $b$-certificate $a$ of length at most $k$ with $x\sim a$. The \emph{certificate complexity} of $f$ (denoted by $ C(f) $) is the maximum of $ C_0(f) $ and $ C_1(f) $. Equivalently, we can say that $C_b(f)$ is the smallest number $k$ such that the set $f^{-1}(b)$ can be written as a union of subcubes $S_C$ corresponding to partial assignments of length at most $k$. {\bf Unambiguous certificate complexity.} The \emph{unambiguous $b$-certificate complexity} of $f$ (denoted by $ UP_b(f) $) is the smallest number $k$ such that we can choose a collection of $b$-certificates $a_1, \ldots, a_m$ of length at most $k$ with the property that, for every $x$ with $f(x)=b$, there is exactly one $a_i$ with $x\sim a_i$. Equivalently, $UP_b(f)$ is the smallest number $k$ for which the set $f^{-1}(b)$ can be written as a disjoint union of subcubes defined by fixing at most $k$ variables. The \emph{deterministic subcube partition complexity} of $f$ (denoted by $ D^{sc}(f) $ \cite{Kothari}) is defined as the maximum of $ UP_0(f) $ and $ UP_1(f) $; it is the smallest $k$ for which $\Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_n$ can be written as a disjoint union of subcubes obtained by fixing at most $k$ variables, with $f$ constant on every subcube. $D^{sc}(f)$ is also known as {\em two-sided unambiguous certificate complexity} \cite{GPW15} and has been studied under several other names (from {\em non-overlapping cover size} \cite{BOH} to {\em non-intersecting complexity} \cite{Belovs}). \subsection{Technical lemmas} Let $ f (x_1, \ldots, x_n, y_1, \ldots, y_k) : \lrb{0,1}^n \times [N]^k \to \lrb{0,1} $ be a function with some $\{0, 1\}$-valued variables and some variables which take values in a larger set $[N]$. We can ``Booleanize'' $f$ in the following way. Let $d=\lceil \log N \rceil$.
We fix a mapping $h$ from $\lrb{0,1}^{d}$ onto $[N]$ and define $ \tilde f : \lrb{0,1}^{n + k d} \to \lrb{0,1}$ with variables $x_i, i\in[n]$ and $y_{ij}, i\in[k], j\in [d]$ by \[ \tilde f(x_1, \ldots, x_n, y_{11}, \ldots, y_{kd}) = f(x_1, \ldots, x_n, h(y_{11}, \ldots, y_{1d}), \ldots, h(y_{k1}, \ldots, y_{kd})) .\] Similarly to equation (3) in \cite{GPW15}, we have \begin{lemma}\label{th:booleanization} For any measure $ m \in \lrb{D,UP_0, UP_1, C_0, C_1} $, \[ m(\tilde f ) \leq m(f) \cdot \lceil \log N \rceil. \] Moreover, $ D(f) \leq D(\tilde f ) $. \end{lemma} \begin{proof} In appendix \ref{app:tech}. \end{proof} A second technical result that we use is about combining partitions into subcubes for several sets $\Sigma_i^{n_i}$ into a partition for $\Sigma_1^{n_1}\times \Sigma_2^{n_2}\times \ldots \times \Sigma_m^{n_m} $. \begin{lemma}\label{th:product_partition} Suppose that $ \Sigma_1 $, \ldots, $ \Sigma_m $ are finite alphabets and $ n_1, \ldots, n_m \in \mbb N $. Suppose that for each $ i \in [m] $ there is a set $ K_i \subset \Sigma_i^{n_i} $ which can be partitioned into disjoint subcubes defined by assignments of length at most $ d_i $. Then the set $ K_1 \times K_2 \times \ldots \times K_m \subset \Sigma_1^{n_1}\times \Sigma_2^{n_2}\times \ldots \times \Sigma_m^{n_m} $ can be partitioned into disjoint subcubes defined by assignments of length at most $ d_1+\ldots+d_m $. \end{lemma} \begin{proof} In appendix \ref{app:tech}. \end{proof} \subsection{Communication complexity} In the standard model of communication complexity \cite{Yao79}, we have a function $f(x, y):X\times Y \rightarrow \{0, 1\}$, with $x$ given to one party (Alice) and $y$ given to another party (Bob). The {\em deterministic communication complexity} of $f$, $D^{cc}(f)$, is the smallest $k$ such that there is a protocol for Alice and Bob that computes $f(x, y)$ and, for any $x$ and $y$, the number of bits communicated between Alice and Bob is at most $k$.
The counterpart of $D^{sc}(f)$ for communication complexity is the partition number $\chi(f)$, defined as the smallest $k$ such that $X\times Y$ can be partitioned into rectangles $X_i\times Y_i$ for $i\in[k]$ so that each $(x, y)\in X\times Y$ belongs to exactly one of $X_i\times Y_i$ and, for each $i\in [k]$, $f(x, y)$ is the same for all $(x, y)\in X_i\times Y_i$. \section{Results} \label{sec:res} \begin{theorem} \label{th:main} There is a sequence of functions $f$ with $D(f)=\Omega((D^{sc}(f))^{2-o(1)})$. \end{theorem} \begin{proof}[Proof sketch.] We describe the main steps of the proof of Theorem \ref{th:main} (with each step described in more detail in the next subsection or in the appendix). We use the {\em cheat-sheet} construction of Aaronson, Ben-David and Kothari \cite{ABK} which transforms a given function $f$ into a new function $f^{t,c}_{\mathrm {CS}}$ in the following way. \begin{definition}\label{def:sh} Suppose that $ \Sigma $ is a finite alphabet and $ f : \Sigma^n \to \lrb{0,1} $ satisfies $ C(f) \leq c \leq n$. Let $ t \in \mbb N $ be fixed.
A function $ f^{t,c}_{\mathrm {CS}} : \Sigma^{tn} \times [n]^{2^t tc} \to \lrb{0,1} $ for an input $ (x,y) $, $x\in \Sigma^{t n}$, $y\in [n]^{2^t tc} $ is defined as follows: \begin{itemize} \item the input $ x$ is interpreted as a $ t \times n$ matrix with entries $ x_{p,q} \in \Sigma $, $ p \in [t] $, $ q\in [n] $; \item for $ p\in [t] $, the $ p $th row of $ x $ is denoted by $ x^{(p)} = (x_{p,1},x_{p,2},\ldots,x_{p,n}) \in \Sigma^n$ and is interpreted as an input to the function $f$, with the value of the function denoted $ b_p = f(x^{(p)}) \in \lrb{0,1}$; \item the input $ y $ is interpreted as a three-dimensional array of size $ 2^t \times t \times c $ with entries $ y_{p,q,r} \in [n]$, $ p \in [2^t] $, $ q\in [t] $, $ r \in [c] $; \item we denote $ \hat p =1+ \sum_{i=1}^{t} b_i \, 2^{i-1} \in [2^t] $; \item the function $ f^{t,c}_{\mathrm {CS}} $ is defined to be 1 for the input $ (x,y) $ iff for each $ q \in [t] $ the following properties simultaneously hold: \begin{enumerate} \item the $ c $ numbers $ y_{\hat p, q, 1} $, \ldots, $y_{\hat p, q, c} $ are pairwise distinct, and \item the assignment $ a^q : \lrb{y_{\hat p, q, 1} ,\ldots, y_{\hat p, q, c} } \to \Sigma$, defined by $$ a^q(y_{\hat p,q,r}) = x_{q, y_{\hat p,q,r}}, \quad \text{for all } r \in [c], $$ is a $ b_q $-certificate for $ f $. \end{enumerate} \end{itemize} \end{definition} The function $f^{t,c}_{\mathrm {CS}}$ can be computed by computing $f(x^{(p)})$ for all $ p\in [t] $, determining $\hat p$ and verifying whether the two properties hold. Alternatively, if one guessed $\hat p$, one could verify that $ b_p = f(x^{(p)})$ for all $p\in[t]$ by just checking the certificates $a^q$ (which can be substantially faster than computing $f(x^{(p)})$). This is the origin of the term ``cheat sheet'' \cite{ABK}: we can view $ y_{\hat p,q,r}$ for $ q\in [t] $, $ r \in [c] $ as a cheat sheet that allows one to check $ b_p = f(x^{(p)})$ without running a full computation of $f$.
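To make the definition concrete, here is a brute-force evaluator for $f^{t,c}_{\mathrm {CS}}$ (a sketch of ours for tiny parameters only; the name `cheat_sheet` is hypothetical). Indices are 0-based, so `p_hat` below equals the paper's $\hat p - 1$, and $y$ is passed as a flat tuple indexed by $p\cdot tc + q\cdot c + r$.

```python
from itertools import product

def cheat_sheet(f, n, t, c, alphabet=(0, 1)):
    """Brute-force evaluator for f_CS^{t,c}.  x: flat tuple of t*n
    symbols; y: flat tuple of 2^t * t * c pointers in range(n)."""
    def is_certificate(a, b):           # a: dict index -> symbol
        return all(f(z) == b for z in product(alphabet, repeat=n)
                   if all(z[i] == v for i, v in a.items()))
    def f_cs(x, y):
        rows = [tuple(x[p*n:(p+1)*n]) for p in range(t)]
        b = [f(r) for r in rows]
        p_hat = sum(bi << i for i, bi in enumerate(b))  # paper's hat-p minus 1
        for q in range(t):
            ptr = [y[p_hat*t*c + q*c + r] for r in range(c)]
            if len(set(ptr)) != c:      # pointers must be pairwise distinct
                return 0
            a = {j: rows[q][j] for j in ptr}
            if not is_certificate(a, b[q]):
                return 0
        return 1
    return f_cs
```

With $f = OR_2$, $t=1$, $c=2$, the block of $y$ selected by $\hat p$ must point at a valid certificate for the computed row value; repeated pointers or a non-certificate force the value 0.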
This construction has the following effect on the complexity measures $D(f)$, $C(f)$, $UP_0(f)$ and $UP_1(f)$: \begin{lemma}\label{th:sh_properties} Assume that $ f $ and $ c $ satisfy the conditions of Definition \ref{def:sh} and $ t \geq 2\log \lr{t D(f)} $. Then, we have: \begin{enumerate} \item $ \frac{t}{2} D(f) \leq D(f^{t,c}_{\mathrm {CS}}) \leq t D(f) + tc$; \item $ UP_1(f^{t,c}_{\mathrm {CS}}) \leq 2tc$; \item $ UP_0(f^{t,c}_{\mathrm {CS}}) \leq t D^{sc}(f) + 2tc$; \item $ C(f^{t,c}_{\mathrm {CS}}) \leq 3tc $. \end{enumerate} \end{lemma} \begin{proof} In section \ref{sec:lemmas}. \end{proof} We note that some of the variables of the resulting function $f^{t,c}_{\mathrm {CS}}$ take values in $[n]$, even if the original $f$ is Boolean. To obtain a Boolean function as a result, we ``Booleanize'' $f^{t,c}_{\mathrm {CS}}$ as described in Lemma \ref{th:booleanization}. For our purposes, the advantage of Lemma \ref{th:sh_properties} is that $UP_1(f^{t,c}_{\mathrm {CS}})$ for the resulting function $f^{t,c}_{\mathrm {CS}}$ may be substantially smaller than $UP_1(f)$ (because $UP_1(f^{t,c}_{\mathrm {CS}})$ depends on $C(f)$, which can be smaller than $UP_1(f)$)! However, going from $f$ to $f_{\mathrm {CS}}$ does not decrease $UP_0$ (because $UP_0(f^{t,c}_{\mathrm {CS}})$ depends on $D^{sc}(f) $). To deal with that, we combine the cheat-sheet construction with two other steps which rebalance $UP_0$ and $UP_1$. These two steps are composition with AND and OR, which have the following effect on the complexity measures $D(f)$, $C(f)$, $UP_0(f)$ and $UP_1(f)$: \begin{lemma}\label{th:AND_OR_composition} Suppose that $ g : \lrb{0,1}^N \to \lrb{0,1} $. Let $ f_{and} = AND_n \circ g^n $ and $ f_{or } = OR_n \circ g^n $.
Then \begin{align*} & D(f_{and}) = D(f_{or}) = n D(g); \\ & C_0 (f_{or}) =n C_0(g), \quad C_1(f_{or}) = C_1 (g); \\ & C_0 (f_{and}) = C_0(g), \quad C_1(f_{and}) = n C_1 (g); \\ & UP_0 (f_{or}) \leq n UP_0 (g), \quad UP_1(f_{or}) \leq (n-1) UP_0(g) + UP_1 (g) ; \\ & UP_0 (f_{and}) \leq (n-1) UP_1(g) + UP_0 (g), \quad UP_1(f_{and}) \leq nUP_1 (g). \end{align*} \end{lemma} \begin{proof} In appendix \ref{app:main}. \end{proof} By combining Lemmas \ref{th:booleanization}, \ref{th:sh_properties} and \ref{th:AND_OR_composition}, we get \begin{lemma}\label{th:block} Let $ f : \lrb{0,1}^N \to \lrb{0,1} $ be fixed. Suppose that $ t,c,n \in \mbb N $ satisfy $ \max \lrb{nC_0(f), C_1(f)} \leq c $ and $ t \geq 2\log \lr{ t D(f)} $. Consider the function $$ f' : \lrb{0,1}^{tNn^2 + 2^t tc n \lceil \log (Nn) \rceil } \to \lrb{0,1} $$ defined as follows: \[ f' = AND_n \circ \widetilde {h}_{\mathrm {CS}}^n, \] where $\widetilde {h}_{\mathrm {CS}} $ is the Boolean function associated with $ h^{t,c}_{\mathrm {CS}}$ and $ h: \lrb{0,1}^{Nn} \to \lrb{0,1} $ is defined as \[ h:= OR_n \circ f^n . \] Then the following estimates hold: \begin{enumerate} \item $ 0.5 tn^2 D(f) \leq D(f') \leq tn( c+n D(f) )\lceil \log (Nn) \rceil $; \item $ D^{sc}(f') \leq tn\lr{2 c + D^{sc}(f) } \lceil \log (Nn) \rceil $; \item $ C_0(f') \leq 3tc \lceil \log (Nn) \rceil $; \item $ C_1(f') \leq 3tcn \lceil \log (Nn) \rceil $. \end{enumerate} \end{lemma} \begin{proof} In appendix \ref{app:main}. \end{proof} Applying Lemma \ref{th:block} results in a function $f'$ for which $D(f')$ is roughly $n^2 D(f)$ but $D^{sc}(f')$ is roughly $n D^{sc}(f)$. We use Lemma \ref{th:block} repeatedly to construct a sequence of functions $f^{(1)}, f^{(2)}, \ldots$ in which $f^{(1)}=AND_n$ and each next function $f^{(m+1)}$ is equal to the function $f'$ obtained by applying Lemma \ref{th:block} to $f^{(m)}=f$. The complexity of those functions is described by the next lemma. \begin{lemma}\label{th:iteracija} Let $ m \in \mbb N $.
Then there are positive integers $ a_0 $, $ a_1 $, $ a_2 $, $ a_3 $ such that for all integers $ n \geq 2$ there exists $ N \in \mbb N $ and a function $ f^{(m)} : \lrb{0,1}^N \to \lrb{0,1} $ satisfying \begin{align*} & N \leq a_0 \, n^{9m} \log^{10m-10} (n), \\ & a_1 \, n^{2m-1} \log^{m-1}(n) \leq D(f^{(m)}) \leq a_2 \, n^{2m-1} \log^{2m-2}(n), \\ & D^{sc}(f^{(m)}) \leq a_3 \, n^{m} \log^{2m-2}(n), \\ & C_0 (f^{(m)}) \leq a_3 \, n^{m-1} \log^{2m-2}(n), \\ & C_1 (f^{(m)}) \leq a_3 \, n^{m} \log^{2m-2}(n). \end{align*} \end{lemma} \begin{proof} In appendix \ref{app:main}. \end{proof} Theorem \ref{th:main} now follows immediately from Lemma \ref{th:iteracija}, which implies that for every $ m\in \mbb N $ we have a family of functions satisfying \[ D^{sc} (f) = \tilde O(n^m) , \quad D(f ) = \tilde \Omega (n^{2m-1}). \] Then, $D(f)=\tilde\Omega((D^{sc}(f))^{2-\frac{1}{m}})$. Since this construction works for every $ m\in \mbb N $, we get that $D(f)=\Omega((D^{sc}(f))^{2-o(1)})$ for an appropriately chosen sequence of functions $f$. \end{proof} {\bf Communication complexity implications.} The standard strategy for transferring results from the domain of query complexity to communication complexity is to compose a function $f:\{0, 1\}^n \rightarrow \{0, 1\}$ with a function $g:X\times Y \rightarrow \{0, 1\}$, obtaining the function \[ f\circ g^n (x_1, \ldots, x_n, y_1, \ldots, y_n) = f(g(x_1, y_1), \ldots, g(x_n, y_n)) .\] We can then define $x=(x_1, \ldots, x_n)$ as Alice's input and $y=(y_1, \ldots, y_n)$ as Bob's input. Querying the $i^{\rm th}$ variable then corresponds to computing $g(x_i, y_i)$ and, if we have a query algorithm which makes $D(f)$ queries, we can obtain a communication protocol for $f\circ g^n$ with communication cost approximately $D(f) D^{cc}(g)$.
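The simulation just described can be sketched as a toy model (ours, not part of the formal argument; `lift_to_protocol` and `cost_g` are hypothetical names, with `cost_g` standing in for $D^{cc}(g)$): run a query algorithm for $f$, answering each query to variable $i$ by jointly evaluating $g(x_i, y_i)$ and charging its communication cost.

```python
def lift_to_protocol(query_alg, g, cost_g, x, y):
    """Simulate the query-to-communication lifting: `query_alg` takes
    an oracle i -> bit; each oracle call is answered by g(x[i], y[i])
    at a charge of cost_g bits.  Returns (output, total_bits)."""
    bits = [0]
    def oracle(i):
        bits[0] += cost_g          # Alice and Bob run a protocol for g
        return g(x[i], y[i])
    return query_alg(oracle), bits[0]
```

A query algorithm making $k$ oracle calls thus yields a protocol communicating $k \cdot$ `cost_g` bits, mirroring the $D(f)\,D^{cc}(g)$ upper bound.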
Building on earlier work by Raz and McKenzie \cite{RM}, G\"o\"os, Pitassi and Watson \cite{GPW15} have shown \begin{theorem} \label{th:gpw} \cite{GPW15} For any $n$, there is a function $g:X\times Y \rightarrow \{0, 1\}$ such that, for any $f:\{0, 1\}^n \rightarrow \{0, 1\}$, we have \begin{enumerate} \item $D^{cc}(f\circ g^n) = \Theta(D(f) \log n)$ and \item $\log \chi(f\circ g^n) = O(D^{sc}(f) \log n)$. \end{enumerate} \end{theorem} In this theorem, $D^{cc}(f\circ g^n) = O(D(f) \log n)$ and $\log \chi(f\circ g^n) = O(D^{sc}(f) \log n)$ follow easily by replacing queries to variables $x_i$ with computations of $g$. The part of the theorem that is not obvious (and is quite difficult technically) is that one also has $D^{cc}(f\circ g^n) = \Omega(D(f) \log n)$, i.e., communication protocols for $f\circ g^n$ cannot be better than the ones obtained from deterministic query algorithms for $f$. By combining Theorems \ref{th:main} and \ref{th:gpw}, we immediately obtain \begin{corollary} There exists $h$ with $D^{cc}(h)=\Omega(\log^{2-o(1)} \chi(h))$. \end{corollary} \subsection{Proof of Lemma \ref{th:sh_properties}} \label{sec:lemmas} \begin{proof}[Proof of Lemma \ref{th:sh_properties}] For brevity, we now denote $f^{t,c}_{\mathrm {CS}}$ as simply $f_{\mathrm {CS}}$. As before, the first $ tn $ variables are indexed by pairs $ (p,q) $, where $ p \in [t] $, $ q\in [n] $; the remaining $ 2^t tc $ variables are indexed by triples $ (p,q,r) $, where $ p \in [2^t] $, $ q\in [t] $, $ r \in [c] $. We denote $ x^{(p)} = (x_{p,1},x_{p,2},\ldots,x_{p,n}) \in \Sigma^n$ for each $ p \in [t] $. \textbf{Lower bound on deterministic complexity.} Let ${\cal A}$ be an adversarial strategy of setting the variables $x_1, \ldots, x_n$ in the function $f(x_1, \ldots, x_n)$ that forces an algorithm to make at least $D(f)$ queries. We use $t$ copies of this strategy ${\cal A}_1, \ldots, {\cal A}_t$, with $ {\cal A}_p$ setting the values of $x_{p, 1}, \ldots, x_{p, n}$ in $f(x_{p, 1}, \ldots, x_{p, n})$.
The overall adversarial strategy is \begin{enumerate} \item If a variable $x_{p, q}$ is queried and its value has not been set yet, use ${\cal A}_p$ to choose its value; \item If a variable $y_{p, q, r}$ is queried and its value has not been set yet: \begin{enumerate} \item Let $b$ be the $q^{\rm th}$ bit of $p-1$. \item Choose a certificate $a$ that certifies $f(x_{1}, \ldots, x_{n})=b$ and contains all variables $x_i$ with indices $i=y_{p, q, r'}$ for which we have already chosen values. If possible, choose $a$ so that, in addition to those requirements, $a$ is also consistent with the values among $x_{q, 1}, \ldots, x_{q, n}$ that we have already fixed. \item Set $y_{p, q, r}$ to be an index of a variable that is fixed by $a$ but is not among the $y_{p, q, r'}$ that have already been chosen. \item If $x_{q, y_{p, q, r}}$ is not already fixed, fix it using the adversarial strategy ${\cal A}_q$. \end{enumerate} \end{enumerate} If fewer than $\frac{t D(f)}{2}$ queries are made, fewer than $\frac{t}{2}$ of the values $f(x^{(p)})$ for $p\in[t]$ have been fixed. Thus, more than $\frac{t}{2}$ of the $f(x^{(p)})$ are not fixed yet. This means that there are more than $2^{\frac{t}{2}} > t D(f)$ possible choices for $\hat{p}$ that are consistent with the answers to queries that have been made so far. Since fewer than $\frac{t D(f)}{2}$ queries have been made, one of these choices has the property that no $y_{\hat{p}, q, r}$ has been queried for any $q$ and $r$. We now set the remaining variables $x_{p, q}$ so that $f(x^{(p)})$ equals the $p^{\rm th}$ bit of $\hat{p}-1$. Since no $y_{\hat{p}, q, r}$ has been queried, we are free to set them so that the required conditions are satisfied for every $q$ ($y_{\hat{p}, q, r}$ are all distinct and $a^q(y_{\hat{p}, q, r})=x_{q, y_{\hat{p}, q, r}}$) or so that they are not satisfied. Depending on that, we can have $f_{\mathrm {CS}}=0$ or $f_{\mathrm {CS}}=1$.
\textbf{Upper bound on deterministic complexity.} We first use $tD(f)$ queries to compute $f(x_{p, 1}, \ldots, x_{p, n})$ for all $p\in[t]$. Once we know $f(x^{(p)})$ for all $p\in [t]$, we can calculate $\hat{p}$ from Definition \ref{def:sh}. We then use $tc$ queries to query $y_{\hat{p}, q, r}$ for all $q\in[t]$ and $r\in [c]$ and $tc$ queries to query $x_{q, y_{\hat{p}, q, r}}$ for all $q\in[t]$ and $r\in [c]$. Then, we can check all the conditions of Definition \ref{def:sh}. \textbf{Unambiguous 1-certificate complexity.} We show $UP_1(f_{\mathrm {CS}} )\leq 2tc$ by giving a mapping from 1-inputs $(x, y)$ to 1-certificates $a$, with the property that, for any two 1-inputs $(x, y)$ and $(x', y')$, the corresponding 1-certificates $a$ and $a'$ are either the same or contradict one another in some variable. Then, for every 1-input $(x, y)$ there is exactly one certificate $a$ with $(x, y)\sim a$. To map a 1-input $(x, y)$ to a 1-certificate $a$, we do the following: \begin{enumerate} \item Let $ b_i =f( x^{(i)}) $, for each $ i\in [t] $, and $\hat{p}=1+\sum_{i=1}^t b_i 2^{i-1}$. Since $ f_{\mathrm {CS}}(x,y)=1 $, the $ c $ numbers $ y_{\hat p, i, 1} $, \ldots, $y_{\hat p, i, c} $ are distinct and the assignment $ a_i : \lrb{y_{\hat p, i, 1} ,\ldots, y_{\hat p, i, c} } \to \Sigma$, defined by $$ a_i(y_{\hat p,i,r}) = x_{i, y_{\hat p,i,r}}, \quad \text{for all } r \in [c], $$ is a valid $ b_i $-certificate for $ f $. We define that $a$ must fix all variables fixed by $a_1, \ldots, a_t$ in the same way (i.e., $ a $ fixes $ x_{i,j} $ with indices $ j =y_{\hat p, i,r} $, for all $ i\in [t] $ and $ r\in [c] $). \item We define that $a$ also fixes the variables $y_{\hat{p}, q, r}$, $ q \in [t] $, $ r\in [c] $, to the values that they have in the input $(x, y)$. \end{enumerate} In each stage $ tc $ variables are fixed. Thus, the length of the resulting partial assignment $a$ is $2tc$. 
Notice that $ a $ is a 1-certificate for $ f_{\mathrm {CS}} $, since the $ t $ assignments $ a_1 $, \ldots, $ a_t $ uniquely determine $ \hat p $, and all variables $ y_{\hat p,q,r} $ are fixed to valid values, ensuring that $ f_{\mathrm {CS}}= 1 $. We now show that, for any two different 1-certificates $a$, $a'$ constructed through this process, there is no input $(x, y)$ that satisfies both of them. If $a$ and $a'$ differ in the part that is fixed in the first stage, there are two possible cases: \begin{enumerate} \item There exists $j\in [t]$ such that $ a_j $ is a 0-certificate for $ f $, whereas $ a'_j $ is a 1-certificate for $ f $ (or vice versa). Then no $x^{(j)}$ can satisfy both of them. \item For every $q\in [t]$, $ a_q $ and $ a'_q $ are both $ b_q $-certificates, for the same $ b_q \in \lrb{0,1} $, but there exists $j\in [t]$ such that $a_j$ differs from $a'_j$. In this case $ a_1 $, \ldots, $ a_t $ and $ a'_1 $, \ldots, $ a'_t $ determine the same value $ \hat p $. Since $ a_j $ and $ a'_j $ are different certificates that fix the same variables (namely, they both fix $ x_{j,l} $ with indices $ l =y_{\hat p, j,r} $, $r\in [c]$), they must fix at least one variable to different values. Then, no $ x^{(j)} $ can satisfy both $ a_j $ and $ a'_j $. \end{enumerate} If $a$ and $a'$ are the same in the part fixed in the 1st stage, the values of the variables that we fix in the 1st stage uniquely determine $\hat{p}$ and, hence, the indices of the variables that are fixed in the 2nd stage. Hence, the only way $a$ and $a'$ could differ in the part fixed in the 2nd stage is if the same variable $y_{\hat{p}, q, r}$ is fixed to different values in $a$ and $a'$. This means that there is no $(x, y)$ that satisfies both $a$ and $a'$.
\textbf{Unambiguous 0-certificate complexity.} Similarly to the previous case, we give a mapping from 0-inputs $(x, y)$ to 0-certificates $a$, with the property that, for any two 0-inputs $(x, y)$ and $(x', y')$, the corresponding 0-certificates $a$ and $a'$ are either the same or contradict one another in some variable. To map a 0-input $(x, y)$ to a 0-certificate $a$, we do the following: \begin{enumerate} \item We fix a partition of $\Sigma^n$ into subcubes corresponding to assignments of length at most $D^{sc}(f)$ with the property that $f$ is constant on every subcube. For each $p\in[t]$, $(x_{p, 1}, \ldots, x_{p, n})$ belongs to some subcube in this partition. Let $a_p$ be the certificate that corresponds to this subcube. We define that $a$ must fix all variables fixed by $a_1, \ldots, a_t$ in the same way. \item Let $\hat{p}=1+\sum_{i=1}^t b_i 2^{i-1}$ where $b_i = f(x^{(i)})$. (We note that $a_1, \ldots, a_t$ determine the values of $f(x^{(1)}), \ldots, f(x^{(t)})$. Thus, $\hat{p}$ is determined by the variables that we fixed at the previous stage.) We define that $a$ also fixes the variables $y_{\hat{p}, q, r}$ to the values that they have in the input $(x, y)$. \item If there is $ q \in [t] $ such that the set $ \lrb{y_{\hat p, q,1}, \ldots, y_{\hat p, q,c}} $ is not equal to the set of variables fixed in some $b_q$-certificate (this includes the case when these $ c $ numbers are not distinct), the values of variables fixed by $a$ imply that $f_{\mathrm {CS}}(x, y)=0$. In this case, we stop. \item Otherwise, there must be $ q' \in [t] $ such that the set $ \lrb{y_{\hat p, q',1}, \ldots, y_{\hat p, q',c}} $ is equal to the set of variables fixed in some $b_{q'}$-certificate but the actual assignment $x_{q', y_{\hat p, q',1}}, \ldots, x_{q', y_{\hat p, q',c}}$ is not a valid $b_{q'}$-certificate. In this case, we define that $a$ also fixes $x_{q, y_{\hat p, q, r}}$ for all $q\in [t], r\in [c]$ to the values that they have in the input $(x, y)$.
Since this involves fixing $x_{q', y_{\hat p, q',1}}, \ldots, x_{q', y_{\hat p, q',c}}$, which do not constitute a valid $b_{q'}$-certificate, this implies $f_{\mathrm {CS}}(x, y)=0$. \end{enumerate} The first stage fixes at most $t D^{sc}(f)$ variables, the second stage fixes $tc$ variables and the last stage also fixes $tc$ variables (some of which might have already been fixed in the 1st stage). Thus, the length of the resulting 0-certificate $a$ is at most $t D^{sc}(f)+2tc$. We now show that, for any two different 0-certificates $a$, $a'$ constructed through this process, there is no input $(x, y)$ that satisfies both of them. If $a$ and $a'$ differ in the part that is fixed in the first stage, there exists $j\in [t]$ such that $a_j$ differs from $a'_j$. Since $a_j, a'_j$ correspond to different subcubes in a partition, there is no $x^{(j)}$ that satisfies both of them. If $a$ and $a'$ are the same in the part fixed in the 1st stage, the values of the variables that we fix in the 1st stage uniquely determine $\hat{p}$ and, hence, the indices of the variables that are fixed in the 2nd stage. Hence, the only way $a$ and $a'$ could differ in the part fixed in the 2nd stage is if the same variable $y_{\hat{p}, q, r}$ is fixed to different values in $a$ and $a'$. This means that there is no $(x, y)$ that satisfies both $a$ and $a'$. If $a$ and $a'$ are the same in the part fixed in the first two stages, the values of the variables that we fix in these stages uniquely determine the indices of the variables that are fixed in the last stage and the same argument applies. \textbf{Certificate complexity.} Since $ b $-certificate complexity is no larger than unambiguous $ b $-certificate complexity, we immediately conclude that \[ C_1 (f_{\mathrm {CS}}) \leq UP_1 (f_{\mathrm {CS}}) \leq 2tc. \] We show $C_0(f_{\mathrm {CS}} )\leq 3tc$ by giving a mapping from 0-inputs $(x, y)$ to 0-certificates $a$ (now different certificates are not required to contradict one another).
Then, the collection of all 0-certificates $a$ to which some $(x, y)$ is mapped covers $f_{\mathrm {CS}}^{-1}(0)$ (possibly with overlaps). To map a 0-input $(x, y)$ to a 0-certificate $a$, we do the following: \begin{enumerate} \item Let $a_p$, for each $p\in[t]$, be an $ f(x^{(p)}) $-certificate that is satisfied by $x^{(p)} = (x_{p, 1}, \ldots, x_{p, n})$. We define that $a$ must fix all variables fixed by $a_1, \ldots, a_t$ in the same way. \item Let $\hat{p}=1+\sum_{i=1}^t b_i 2^{i-1}$ where $b_i = f(x^{(i)})$. (Notice that $\hat{p}$ is determined by the variables that we fixed at the previous stage.) We define that $a$ also fixes the variables $y_{\hat{p}, q, r}$ to the values that they have in the input $(x, y)$. \item If there is $ q \in [t] $ such that the set $ \lrb{y_{\hat p, q,1}, \ldots, y_{\hat p, q,c}} $ is not equal to the set of variables fixed in some $b_q$-certificate, the values of variables fixed by $a$ imply that $f_{\mathrm {CS}}(x, y)=0$. In this case, we stop. \item Otherwise, there must be $ q' \in [t] $ such that the set $ \lrb{y_{\hat p, q',1}, \ldots, y_{\hat p, q',c}} $ is equal to the set of variables fixed in some $b_{q'}$-certificate but the actual assignment $x_{q', y_{\hat p, q',1}}, \ldots, x_{q', y_{\hat p, q',c}}$ is not a valid $b_{q'}$-certificate. In this case, we define that $a$ also fixes $x_{q, y_{\hat p, q, r}}$ for all $q\in [t], r\in [c]$ to the values that they have in the input $(x, y)$. Since this involves fixing $x_{q', y_{\hat p, q',1}}, \ldots, x_{q', y_{\hat p, q',c}}$, which do not constitute a valid $b_{q'}$-certificate, this implies $f_{\mathrm {CS}}(x, y)=0$. \end{enumerate} The first stage fixes at most $t c$ variables, the second stage fixes $tc$ variables and the last stage also fixes $tc$ variables (some of which might have already been fixed in the 1st stage). Thus, the length of the resulting 0-certificate $a$ is at most $3tc$. We conclude that $ C_0(f_{\mathrm {CS}}) \leq 3tc $ and also \[ C(f_{\mathrm {CS}}) \leq 3tc .
\] \end{proof} \section{Conclusions} A deterministic query algorithm ${\cal A}$ induces a partition of the Boolean hypercube $\{0, 1\}^n$ into subcubes that correspond to different computational paths that the algorithm can take. If ${\cal A}$ makes at most $k$ queries, each subcube is defined by the values of at most $k$ input variables. It is well known that one can also go in the opposite direction, with a quadratic loss. Given a partition of $\{0, 1\}^n$ into subcubes $S_i$ defined by fixing at most $k$ input variables with a function $f$ constant on every $S_i$, one can construct a query algorithm that computes $f$ with at most $k^2$ queries \cite{JLV}. In this paper, we show that this transformation from partitions to algorithms is close to being optimal, by exhibiting a function $f$ with a corresponding partition for which any deterministic query algorithm requires $\Omega(k^{2-o(1)})$ queries. Together with the ``lifting theorem'' of \cite{GPW15}, this implies a similar result for communication complexity: there is a communication problem $f$ for which the input set can be partitioned into $2^k$ rectangles with $f$ constant on every rectangle but any deterministic communication protocol needs to communicate $\Omega(k^{2-o(1)})$ bits. An immediate open question is whether randomized or quantum algorithms (protocols) still require $\Omega(k^{2-o(1)})$ queries (bits). It looks plausible that the lower bound on deterministic query complexity $D(f)$ for our construction can be adapted to randomized query complexity, with a constant factor loss every time we iterate our construction. If this is indeed the case, we would get a similar lower bound for randomized query algorithms. With randomized communication protocols, the situation is more difficult because the $D^{cc}(f\circ g^n) = \Theta(D(f) \log n)$ result of \cite{GPW15} has no randomized counterpart \cite{GJ+15}.
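The partition-to-algorithm transformation of \cite{JLV} discussed above can be made concrete. The following Python sketch (ours, for illustration only; the function names and the example partition are not from \cite{JLV}) evaluates $f$ by repeatedly picking a subcube consistent with the answers obtained so far and querying all of its fixed variables. For an unambiguous partition into subcubes each fixing at most $k$ variables, every round queries at least one not-yet-queried variable of the subcube containing the input, so at most $k^2$ queries are made in total.

```python
from itertools import product

def eval_from_partition(subcubes, values, x):
    """Evaluate f(x) given a partition of {0,1}^n into subcubes.

    subcubes: list of dicts {variable index: fixed bit}; values[i] is the
    (constant) value of f on subcubes[i].  Returns (f(x), #queries made).
    """
    known = {}  # answers to the queries made so far
    while True:
        # pick any subcube consistent with the answers so far
        S = next(s for s in subcubes
                 if all(known.get(v, b) == b for v, b in s.items()))
        for v in S:  # query all of its fixed variables
            known.setdefault(v, x[v])
        if all(known[v] == b for v, b in S.items()):  # x lies in S
            return values[subcubes.index(S)], len(known)

# unambiguous partition certifying OR_3 (each subcube fixes <= k = 3 vars)
cubes = [{0: 1}, {0: 0, 1: 1}, {0: 0, 1: 0, 2: 1}, {0: 0, 1: 0, 2: 0}]
vals = [1, 1, 1, 0]
for x in product([0, 1], repeat=3):
    value, queries = eval_from_partition(cubes, vals, x)
    assert value == max(x) and queries <= 9  # at most k^2 queries
```

The termination argument mirrors the text: any two subcubes of an unambiguous partition conflict on a commonly fixed variable, so the chosen subcube always queries a fresh variable of the true subcube.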
In the quantum case, our composed function $f$ no longer requires $\Omega(k^{2-o(1)})$ queries because one could use Grover's quantum search algorithm \cite{Grover} to evaluate $AND_n$ and $OR_n$. Using this approach, we can show that the function $f^{(m)}$ of Lemma \ref{th:iteracija} can be computed with $O(n^{m-\frac{1}{2}})$ quantum queries, which is less than our bound on $D^{sc}(f)$. Generally, it seems that we do not know functions $f$ for which the quantum query complexity $Q(f)$ is asymptotically larger than $D^{sc}(f)$. \begin{appendix} {\Huge Appendix} \section{Proofs of technical lemmas} \label{app:tech} \begin{proof}[Proof of Lemma \ref{th:booleanization}] If we have a query algorithm for $f$, we can replace each query to a variable $y_i$ by $d=\lceil \log N\rceil$ queries to the variables $y_{i1}, \ldots, y_{id}$ (and a computation of $h(y_{i1}, \ldots, y_{id})$) and obtain an algorithm for $\tilde{f}$. Conversely, we can transform an algorithm computing $\tilde{f}$ into an algorithm computing $f$ by replacing a query to $y_{ij}$ by a query to $y_i$. For the certificate complexity measures ($C_a$ and $UP_a$, $a\in\{0, 1\}$), let \[ x = (x_1, \ldots, x_n, h(y_{11}, \ldots, y_{1d}), \ldots, h(y_{k1}, \ldots, y_{kd})) \] be the input to $f$ corresponding to an input $\tilde{x}=(x_1, \ldots, x_n, y_{11}, \ldots, y_{kd})$ to $\tilde{f}$. We can transform a certificate for the function $f$ on the input $x$ into a certificate for the function $\tilde{f}$ on the input $\tilde x$ by replacing each variable $y_i$ with the $d=\lceil \log N\rceil$ variables $y_{i1}, \ldots, y_{id}$. This gives $C_a(\tilde f ) \leq C_a(f) \cdot \lceil \log N \rceil$. If the certificates for $f$ with which we start are unambiguous, then, for any two different certificates $I_1, I_2$, there is a variable in which they differ. If $I_1$ and $I_2$ differ in one of the $x_i$'s, the corresponding certificates for $\tilde f$ differ in the same $x_i$.
If $I_1$ and $I_2$ differ in one of the $y_i$'s, the corresponding certificates for $\tilde f$ differ in at least one of the $y_{ij}$ for the same $i$ and some $j\in [d]$. This gives $UP_a(\tilde f ) \leq UP_a(f) \cdot \lceil \log N \rceil$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:product_partition}] For each $ i\in [m] $ we can express $ K_i $ as \[ K_i = \bigcup_{j=1}^{t_i} S_{i,j}, \] where $ S_{i,j} \subset \Sigma_i^{n_i}$ are subcubes and for all $ j, j' \in [t_i] $ with $j\neq j'$ the subcubes are disjoint, i.e., $ S_{i,j} \cap S_{i,j'} = \emptyset$. Let $ a_{i,j} : I_{i,j} \to \Sigma_i $, $ I_{i,j} \subset [n_i] $, $ \lrv{I_{i,j}} \leq d_i $, be the partial assignment associated with the subcube $ S_{i,j} $, $ j \in [t_i] $, $ i \in [m] $. Let $ J = [t_1] \times [t_2] \times \ldots \times [t_m] $ and denote $ \mb j = (j_1, \ldots, j_m) \in J $. For each $ k\in [m] $ denote $ N_k = n_1+n_2+\ldots+n_k $. Define a set $ S_{\mb j} $ for each $ \mb j \in J $ as \[ S_{\mb j} = S_{1,j_1} \times S_{2,j_2} \times \ldots \times S_{m,j_m}. \] Fix any $ {\mb j} \in J$. Clearly, $ S_{\mb j} \subset \Sigma_1^{n_1}\times \Sigma_2^{n_2}\times \ldots \times \Sigma_m^{n_m} $. Denote $ I + k = \lrb{i + k \ \vline\ i \in I} $ for a set $ I \subset \mbb R $ and $ k \in \mbb R $. Notice that $ S_{\mb j} $ is a subcube, with the associated partial assignment $ A_{\mb j} : I_{\mb j} \to \bigcup_{i \in [m]} \Sigma_i $, where the set $ I_{\mb j} \subset [N_m] $ is defined as \[ I_{\mb j} = I_{1,j_1} \cup \lr{I_{2,j_2} + N_1}\cup \lr{I_{3,j_3} +N_2} \cup \ldots \cup \lr{I_{m,j_m} + N_{m-1}} \] and \[ A_{\mb j} (i) = \begin{cases} a_{1,j_1}(i) , & i \in I_{1,j_1} \subset [N_1] ,\\ a_{2,j_2}(i-N_1) , & i \in I_{2,j_2}+N_1 \subset [N_1+1 \,..\, N_2] ,\\ \ldots \\ a_{k,j_k}(i-N_{k-1}) , & i \in I_{k,j_k} + N_{k-1}\subset [N_{k-1}+1 \,..\, N_k],\\ \ldots \\ a_{m,j_m}(i-N_{m-1}) , & i \in I_{m,j_m} + N_{m-1}\subset [N_{m-1}+1 \,..\, N_m] .
\end{cases} \] We also observe that the length of the assignment $A_{\mb j}$ defining $ S_{\mb j} $ is \[ \lrv{I_{\mb j}} = \lrv{I_{1,j_1}}+\lrv{I_{2,j_2}}+ \ldots + \lrv{I_{m,j_m}} \leq d_1 + d_2+ \ldots+ d_m. \] Finally, the sets $ S_{\mb j} $, $ \mb j \in J $, define a partition of the whole space $ \Sigma_1^{n_1}\times \Sigma_2^{n_2}\times \ldots \times \Sigma_m^{n_m} $ into disjoint subcubes. To see that, fix any $ x=(x_1,\ldots, x_m) $ with $ x_i \in \Sigma_i^{n_i} $, $ i\in [m] $. Since $ S_{i,j} $, $ j\in [t_i] $, partition the space $ \Sigma_i^{n_i} $, there is a unique $ j_i \in [t_i] $ such that $ x_i \in S_{i,j_i} $, for all $ i \in [m] $. But then there is a unique $ \mb j = (j_1,\ldots, j_m) \in J $ such that $ x \in S_{\mb j} $. \end{proof} \section{Proofs of Lemmas \ref{th:AND_OR_composition}-\ref{th:iteracija}} \label{app:main} \begin{proof}[Proof of Lemma \ref{th:AND_OR_composition}] The equalities $ D(f_{and}) = D(f_{or}) = n D(g) $ immediately follow from \cite[Lemma 3.1]{Tal2013}. The equalities $ C_0 (f_{or}) =n C_0(g) $ and $ C_1(f_{or}) = C_1 (g) $ have been shown in \cite[Proposition 31]{Gilmer2013}. Since $ f_{and}= \neg OR_n \circ (\neg g)^n $, also $ C_0 (f_{and}) = C_0(g) $ and $ C_1(f_{and}) = n C_1 (g) $ follow from \cite[Proposition 31]{Gilmer2013}. It remains to show the upper bounds on $ UP_0 (f_{or}) $ and $ UP_1(f_{or}) $ (the proof for $ f_{and} $ is similar). Let $ UP_0 (g) = u_0 $ and $ UP_1 (g) = u_1 $. Then, for each $ b\in \lrb{0,1} $, $ g^{-1}(b) $ can be partitioned into disjoint subcubes defined by assignments of length at most $ u_b $. For an input $x\in \lrb{0,1}^{Nn}$, let $x=(x^{(1)}, \ldots, x^{(n)}), x^{(i)}\in \lrb{0,1}^N$. We have $ f_{or}(x) =0 $ iff $ g(x^{(k)}) = 0 $ for all $ k \in [n] $. Hence \[ f_{or}^{-1} (0) = \lr{g^{-1}(0)}^n. \] By Lemma \ref{th:product_partition}, $ f_{or}^{-1} (0) $ can be partitioned into disjoint subcubes defined by assignments of length at most $ n u_0 $.
Thus, $ UP_0 (f_{or}) \leq n UP_0 (g) $. For $f_{or}^{-1} (1)$, we have \[ f_{or}^{-1} (1) = \bigcup_{j=1}^n K_j , K_j = \lr{g^{-1}(0)}^{j-1} \times g^{-1}(1) \times \lrb{0,1}^{(n-j)N} \subset \lrb{0,1}^{Nn} . \] We note that the sets $ K_j $ are disjoint (since each $K_j$ consists of all $ x \in \lrb{0,1}^{Nn} $ for which $j$ is the smallest index with $g(x^{(j)})=1$). By Lemma \ref{th:product_partition}, $\lr{g^{-1}(0)}^{j-1} \times g^{-1}(1)$ can be partitioned into disjoint subcubes defined by assignments of length at most $ (j-1) u_0 + u_1$. This induces a partition of $K_j$ into subcubes defined by the same assignments. Hence, $ f_{or}^{-1} (1) $ can be partitioned into disjoint subcubes defined by assignments of length at most $ (n-1)u_0 + u_1 $. That is, $ UP_1(f_{or}) \leq (n-1) UP_0(g) + UP_1 (g) $. In particular, this implies that $ D^{sc}(f_{or}) \leq n D^{sc} (g) $; note that this also follows from \cite[Proposition 3]{Kothari}. \end{proof} \begin{proof}[Proof of Lemma \ref{th:block}] By Lemma \ref{th:AND_OR_composition}, \begin{align*} & D(h) = D(OR_n) \cdot D(f) = n D(f),\\ & D^{sc}(h) \leq D^{sc}(OR_n) \cdot D^{sc}(f) = n D^{sc}(f),\\ & C_0(h) =n C_0(f) \leq c,\\ & C_1(h) = C_1(f) \leq c . \end{align*} By Lemma \ref{th:sh_properties}, \begin{align*} & D(h^{t,c}_{\mathrm {CS}}) \geq 0.5D(h) = 0.5 tn D(f), \\ & D(h^{t,c}_{\mathrm {CS}}) \leq t D(h) + tc= tn D(f) + tc, \\ & UP_1(h^{t,c}_{\mathrm {CS}}) \leq 2tc, \\ & UP_0(h^{t,c}_{\mathrm {CS}}) \leq t D^{sc}(h) + 2tc \leq t n D^{sc}(f) + 2tc , \\ & C(h^{t,c}_{\mathrm {CS}}) \leq 3tc. \end{align*} By Lemma \ref{th:booleanization}, \begin{align*} & 0.5 tn D(f) \leq D(\widetilde {h}_{\mathrm {CS}}) \leq t(n D(f) +c) \lceil \log (Nn) \rceil, \\ & UP_1(\widetilde {h}_{\mathrm {CS}}) \leq 2tc \lceil \log (Nn) \rceil, \\ & UP_0(\widetilde {h}_{\mathrm {CS}}) \leq ( t n D^{sc}(f) +2 tc) \lceil \log (Nn) \rceil , \\ & C(\widetilde {h}_{\mathrm {CS}}) \leq 3tc \lceil \log (Nn) \rceil . 
\end{align*} Finally, again by Lemma \ref{th:AND_OR_composition}, \begin{align*} & D(f') = n D(\widetilde {h}_{\mathrm {CS}}), \\ & C_0(f') = C_0(\widetilde {h}_{\mathrm {CS}}) , \\ & C_1(f') = nC_1(\widetilde {h}_{\mathrm {CS}}) , \\ & UP_1(f') \leq n UP_1(\widetilde {h}_{\mathrm {CS}}) , \\ & UP_0(f') \leq (n-1)UP_1(\widetilde {h}_{\mathrm {CS}}) + UP_0(\widetilde {h}_{\mathrm {CS}}). \end{align*} Consequently, we have the following estimates: \begin{align*} & 0.5 tn^2 D(f) \leq D(f') \leq tn(n D(f) +c) D(f) \lceil \log (Nn) \rceil, \\ & C_0(f') \leq 3tc \lceil \log (Nn) \rceil, \\ & C_1(f') \leq 3tcn \lceil \log (Nn) \rceil , \\ & UP_1(f') \leq 2tcn \lceil \log (Nn) \rceil , \\ & UP_0(f') \leq \lr{2 tcn+ t n D^{sc}(f) } \lceil \log (Nn) \rceil . \end{align*} The last two inequalities imply $ D^{sc}(f') = \max \lrb {UP_1(f'),UP_0(f')} \leq \lr{2 tcn+ t n D^{sc}(f) } \lceil \log (Nn) \rceil $, concluding the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{th:iteracija}] The proof is by induction on $ m $. When $ m=1 $, take $ a_0 = a_1= a_2=a_3 =1 $. Then for any $ n \geq 2 $ one can choose $ N=n $, $ f^{(1)} = AND_n $. Suppose that for $ m=b $ there are constants $ a_0 $, $ a_1 $, $ a_2 $, $ a_3 $ s.t. for all $ n \geq 2$ there is $ f^{(m)} : \lrb{0,1}^N \to \lrb{0,1}$, satisfying the given constraints. We argue that for $ m=b+1 $ there are positive integers \begin{align*} & a_0' = 4 (a_2 + 4b) a_0+ 128 a_2^4 a_3(a_2 + 4b)(a_0 +19b-10) , \\ & a_1' = 2 (2b-1)a_1 , \\ & a_2' = 4(a_2+a_3)(a_2 + 4b) (a_0 +19b-10) , \\ & a_3' = 12 a_3 (a_2 + 4b ) (a_0 +19b-10) \end{align*} s.t. for all $ n \geq 2 $ there is $ N' \in \mbb N $ and $ f' : \lrb{0,1}^{N'} \to \lrb{0,1} $, satisfying the necessary properties. We fix $ n \geq 2 $. Let $ f^{(b)} $ be given by the inductive hypothesis. 
Let $ f^{(b+1)} =f'$ where $f'$ is obtained by applying Lemma \ref{th:block} to $ f=f^{(b)}$, with parameters $ {t= \lceil 4 \log \lr{ a_2 \, n^{2b-1} \log^{2b-2}(n)} \rceil +4} $ and $ {c= \max \lrb{nC_0(f^{(b)}), C_1(f^{(b)})} }$. Notice that for all $ D \geq 2 $ setting $ t \geq 4 \log (D) + 4 $ yields $ t \geq 2\log (tD) $. Hence the chosen value of $ t $ satisfies $ t \geq 2\log \lr{ t D(f^{(b)})} $ and we have $ f^{(b+1)} : \lrb{0,1}^{N'} \to \lrb{0,1}$, where \[ N' = tNn^2 + 2^t tc n \lceil \log (Nn) \rceil , \] and, by Lemma \ref{th:block}, \begin{align*} & 0.5 tn^2 D(f^{(b)}) \leq D(f^{(b+1)}) \leq tn (c + nD(f^{(b)}))\lceil \log (Nn) \rceil ; \\ & D^{sc}(f^{(b+1)}) \leq tn\lr{2 c + D^{sc}(f^{(b)}) } \lceil \log (Nn) \rceil ; \\ & C_0(f^{(b+1)}) \leq 3tc \lceil \log (Nn) \rceil; \\ & C_1(f^{(b+1)}) \leq 3tcn \lceil \log (Nn) \rceil . \end{align*} Notice that \begin{align*} & \lceil \log (Nn) \rceil \leq (a_0 +19b-10) \log (n); \\ & t=4+ \lceil 4 \log \lr{a_2 \, n^{2b-1} \log^{2b-2}(n)} \rceil \leq 4 (a_2 + 4b) \log (n); \\ & t > 4\log \lr{a_2 \, n^{2b-1} \log^{2b-2}(n)} \geq 4(2b-1 )\log (n);\\ & 2^t \leq 32 a_2^4 \, n^{8b-4} \log^{8b-8}(n);\\ & c \leq a_3 \, n^{b} \log^{2b-2}(n) ; \\ & 2 c + D^{sc}(f^{(b)}) \leq 3a_3 \, n^{b} \log^{2b-2}(n) ;\\ & c + nD(f^{(b)}) \leq (a_2 + a_3)\, n^{2b} \log^{2b-2}(n) \end{align*} We conclude that \begin{multline*} N' \leq 4 (a_2 + 4b) \log (n) a_0 \, n^{9b+2} \log^{10b-10} (n ) + \\ 32 a_2^4 \, n^{8b-4} \log^{8b-8}(n) 4 (a_2 + 4b) \log (n) a_3 \, n^{b} \log^{2b-2}(n) n (a_0 +19b-10) \log (n) =\\ 4 (a_2 + 4b) a_0 \, n^{9b+2} \log^{10b-9} (n) + 128 a_2^4 a_3(a_2 + 4b)(a_0 +19b-10) n^{9b-3}\log^{10b-8}(n) \leq \\ a_0' \, n^{9b+9} \log^{10b}(n). 
\end{multline*} Similarly, \begin{align*} & D(f^{(b+1)}) \leq 4(a_2 + 4b) \log (n) n \, (a_2+a_3)\, n^{2b} \log^{2b-2}(n) (a_0 +19b-10) \log (n) = a_2' \, n^{2b+1} \log^{2b}(n), \\ & D(f^{(b+1)}) \geq 2(2b-1) \log (n) n^2 \, a_1\, n^{2b-1} \log^{b-1}(n) = a_1' \, n^{2b+1} \log^{b}(n) , \\ & D^{sc}(f^{(b+1)}) \leq 12 (a_2 + 4b ) \log (n) n \, a_3 \, n^{b} \log^{2b-2}(n) (a_0 +19b-10) \log (n) = a_3' \, n^{b+1} \log^{2b}(n),\\ & C_0(f^{(b+1)}) \leq 12(a_2 + 4b ) \log (n) a_3 \, n^{b} \log^{2b-2}(n) (a_0 +19b-10) \log (n) = a_3' n^{b} \log^{2b}(n), \\ & C_1(f^{(b+1)}) \leq 12(a_2 + 4b ) \log (n) a_3 \, n^{b+1} \log^{2b-2}(n) (a_0 +11b-10) \log (n) = a_3' n^{b+1} \log^{2b}(n), \end{align*} completing the inductive step. \end{proof} \end{appendix} \end{document}
\begin{document} \title{On the Factorization of Rational Discrete-Time Spectral Densities} \author{Giacomo Baggio, Augusto~Ferrante \thanks{Giacomo Baggio is with the Dipartimento di Ingegneria dell'Informazione, Universit\`a di Padova, via Gradenigo, 6/B -- I-35131 Padova, Italy. E-mail: {\tt [email protected].} } \thanks{Augusto Ferrante is with the Dipartimento di Ingegneria dell'Informazione, Universit\`a di Padova, via Gradenigo, 6/B -- I-35131 Padova, Italy. E-mail: {\tt [email protected].} } \thanks{Partially supported by University of Padova under the grant ``Progetto di Ateneo".} } \markboth{DRAFT}{Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Journals} \maketitle \begin{abstract} In this paper, we consider an arbitrary matrix-valued, rational spectral density $\Phi(z)$. We show with a constructive proof that $\Phi(z)$ admits a factorization of the form $\Phi(z)=W^\top (z^{-1})W(z)$, where $W(z)$ is {\em stochastically minimal}. Moreover, $W(z)$ and its right inverse are analytic in regions that may be selected with the only constraint that they satisfy some symplectic-type conditions. By suitably selecting the analyticity regions, this extremely general result particularizes into a corollary that may be viewed as the discrete-time counterpart of the matrix factorization method devised by Youla in his celebrated work \cite{Youla-1961}.
\end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} The spectral factorization problem is a classical and extensively investigated problem in Linear-Quadratic optimal control theory \cite{Willems-1971,Stoorvogel-S-98,Aksikas-et-al-07,Ferrante-Ntog-Automatica-13,Swigart-Lall-14}, estimation theory and stochastic realization \cite{Lindquist-P-85-siam,Lindquist-P-91-jmsec,Picci-P-94,Ruckebusch-78-2,Ruckebusch-80,Ferrante-94-ieee,Ferrante-94-jmsec,Ferrante-M-P-93}, operator theory and network theory \cite{Brune-31,Anderson-Vongpanitlerd-1973,BarasDW,FaurreCG,Fuhrmann-95,Helton,Zemanian}, interpolation theory---from the classical paper \cite{NudeScw} to the recent works of Byrnes, Georgiou, Lindquist and coworkers, see \cite{Byrnes-et-al} and references therein---and passivity, from the classical positive-real systems theory \cite{Willems-1971,Anderson-Vongpanitlerd-1973,Brogliato-LME-07,Khalil-02} to the more recent negative-imaginary systems theory \cite{Petersen-Lanzon-10,Xiong-PL-10,Ferrante-N-13}, to mention just the main fields and a few references. Indeed, spectral factorization is the common denominator of a circle of ideas including LQ optimization methods, passivity theory, positivity, second-order stationary stochastic processes and Riccati equations. It seems therefore fair to say that spectral factorization is one of the cornerstones of modern systems and control theory. Since the pioneering works of Kolmogorov and Wiener in the forties, a variety of methods have been proposed for the analysis and solution of this problem under different assumptions and in different settings, see e.g., \cite{Anderson-et-al-1974, Jezek-Kucera-1985, Rissanen-1973, Tunnicliffe-Wilson-1972,Callier-1985,Youla-Kazanjian-1978,Moir1,Moir2,Moir3}, to cite but a few. We also refer to the relatively recent survey \cite{Sayed-K} that contains many other references and different points of view on this problem.
A particularly relevant result on this topic is the well-known procedure devised by Youla in \cite{Youla-1961}, which can be used to solve the rational multivariate spectral factorization problem in continuous-time. Remarkably, this method does not require any additional system-theoretic assumption: the rational spectrum $\Phi(s)$ may feature poles and zeroes on the imaginary axis, its rank may be deficient and it can be a non-proper rational function. Moreover, this method permits a generalization that allows for the selection of the region of analyticity of the spectral factor. This turns out to be a crucial feature in the solution of related control problems: For example, in \cite{Ferrante-Pandolfi-2002} a spectral factor having poles and zeroes in a certain region of the complex plane has been used to weaken the standard assumptions for the solvability of the classical Positive Real Lemma equations. Surprisingly, the discrete-time counterpart of this result is so far missing. The reason could be the difficulty of deriving a result that parallels the Oono-Yasuura algorithm that constitutes a fundamental step in Youla's work. In order to fill this gap, in this paper, we establish a general discrete-time spectral factorization result. In particular, we show that, given an arbitrary rational matrix function $\Phi(z)$ that is positive semi-definite on the unit circle, and two arbitrary regions featuring a geometry compatible with spectral factorization, $\Phi(z)$ admits a spectral factorization of the form $\Phi(z)=W^\top (z^{-1})W(z)$ where the poles and zeroes of $W(z)$ lie in the prescribed regions. The proof is constructive and gives, as a byproduct, stochastic minimality of the spectral factor (i.e., minimality of the McMillan degree of $W(z)$), which is a crucial feature in stochastic realization theory \cite{Lindquist-P-85-siam,Lindquist-P-91-jmsec,Ferrante-96-siam} and is one of the key aspects in the present analysis.
We consider the factorization of the form $\Phi(z)=W^\top (z^{-1})W(z)$ corresponding to optimal control and network synthesis problems. All the theory is, however, easily adaptable to obtain a dual counterpart for the factorization of the form $\Phi(z)=W(z)W^\top (z^{-1})$. The latter is the natural factorization associated with the representation of second-order stationary stochastic processes and hence with filtering and estimation problems. In fact, if $\Phi(z)$ is the spectral density of such a process $y(t)$, and $\Phi(z)$ admits a spectral factorization of the form $\Phi(z)=W(z)W^\top (z^{-1})$, then $y(t)$ may be represented as the output of a linear system with transfer function $W(z)$ driven by white noise $e(t)$. When all the poles of $W(z)$ lie inside the unit circle, $W(z)$ is called a {\em causal} spectral factor, as there is a causal relation between $e(t)$ and $y(t)$ \cite{Lindquist-P-85-siam}. If, moreover, the zeroes of $W(z)$ also lie inside the unit circle, $W(z)$ is called an {\em outer} spectral factor and the relation between $y(t)$ and $e(t)$ (which is, in this case, the innovation of $y(t)$) is causal and causally invertible. The outer spectral factor is essentially unique and may be recovered in our theory by suitably selecting the regions where the poles and zeroes of $W(z)$ are located; this may be viewed as the discrete-time counterpart of Youla's result. Of course, in most classical control applications, the outer spectral factor is the required solution. Nevertheless, when a-causal control and estimation problems are involved, see e.g. \cite{Willems-1971,Colaneri-F-siam,Colaneri-F-SCL,Ferrante-P-98}, and in stochastic realization theory, see \cite{Lindquist-P-85-siam,Picci-P-94,Ferrante-P-P-02-LAA}, spectral factors whose poles and zeroes lie in different regions of the complex plane become important.
This provides a strong motivation for our general result, where the regions for the poles and zeroes of the spectral factor can be suitably selected. The organization of the paper is as follows. In section \ref{sec:prob-def+main-res}, we formally introduce the discrete-time spectral factorization problem and, after a few definitions, we present our main results. In section \ref{sec:problem-statement}, we review some notions from polynomial and rational matrix theory. Section \ref{sec:pre-analysis} is devoted to presenting a number of preliminary results. In section \ref{sec:main-theorem}, we derive the proof of our main result and present some byproducts of our theory. Section VI shows a numerical example of the proposed factorization algorithm. Finally, in section \ref{sec:conclusions}, we draw some concluding remarks and we describe a number of possible future research directions. {\em General notation and conventions:} Given an arbitrary matrix $G$, we write $G^\top$, $\overline{G}$, $G^{-1}$, $G^{-L}$ and $G^{-R}$ for the transpose, complex conjugate, inverse, left inverse and right inverse of $G$, respectively. In what follows, $[G]_{ij}$ stands for the $(i,j)$-th entry of $G$ and $[G]_{i:j,k:h}$ for the sub-matrix obtained by extracting the rows from index $i$ to index $j$ ($i\leq j$) of $G$ and the columns from index $k$ to index $h$ ($k\leq h$) of $G$. If $\mathbf{v}$ is a vector, then $[\mathbf{v}]_i$ denotes the $i$-th component of $\mathbf{v}$. Here, as usual, $I_n$ is the $n\times n$ identity matrix, $\mathbf{0}_{m,n}$ is the $m\times n$ zero matrix and $\operatorname{diag}[a_1,\dots,a_n]$ represents the matrix whose diagonal entries are $a_1,\dots,a_n$. We denote by $\mathbb{R}[z]^{m\times n}$, $\mathbb{R}[z,z^{-1}]^{m\times n}$ and $\mathbb{R}(z)^{m\times n}$ the set of real $m\times n$ polynomial, Laurent polynomial (L-polynomial, for short) and rational matrices, respectively.
Given a rational matrix $G(z)\in\mathbb{R}(z)^{m\times n}$, we let $G^*(z):=G^\top(z^{-1})$, $G^{-*}(z):=[G^{-1}]^*(z)$, $G^{-R*}(z):=[G^{-R}]^*(z)$ and $G^{-L*}(z):=[G^{-L}]^*(z)$. We denote by $\mathrm{rk}(G)$ the normal rank of $G(z)$, i.e., the rank almost everywhere in $z\in\mathbb{C}$ of $G(z)$. The rational matrix $G(z)$ is said to be analytic in a region of the complex plane if all its entries are analytic in this region. Moreover, as in \cite{Youla-1961}, with a slight abuse of notation, when we say that a rational function $f(z)$ is analytic in a region $\mathbb{T}$ of the complex plane that is not open, we mean that $f(z)$ does not have poles in $\mathbb{T}$. In the case of a rational $f(z)$ this abuse does not cause any problems; in fact, $f(z)$ can have only finitely many poles, so that there exists a larger open region $\mathbb{T}_\varepsilon\supset \mathbb{T}$ in which $f(z)$ is indeed analytic. For example, if $f(z)$ is rational and does not have poles on the unit circle, we say that $f(z)$ is analytic on the unit circle in place of saying that $f(z)$ is analytic on an open annulus containing the unit circle. Notice that such an annulus does indeed exist. Finally, throughout the paper, we let $\mathbb{R}_0:=\mathbb{R}\setminus\{0\}$, $\mathbb{C}_0:=\mathbb{C}\setminus\{0\}$ and we denote by $\overline{\mathbb{C}}:=\mathbb{C}\cup \{\infty\}$ the extended complex plane. \section{Problem definition and Main result}\label{sec:prob-def+main-res} We start by introducing the object of our analysis and define the problem of spectral factorization: \begin{definition}[Para-Hermitian matrix] A rational matrix $G(z)\in\mathbb{R}(z)^{n\times n}$ is said to be \emph{para-Hermitian} if $G(z) = G^*(z)$.
\end{definition} \begin{definition}[Spectrum] A para-Hermitian rational matrix $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ is said to be a \emph{spectrum} if $\Phi(e^{j\omega})$ is positive semi-definite for all $\omega\in[0,2\pi)$ such that $\Phi(e^{j\omega})$ is defined. \end{definition} \begin{definition}[Para-unitary matrix] A rational matrix $G(z)\in\mathbb{R}(z)^{n\times n}$ is said to be \emph{para-unitary} if \[ G^*(z) G(z) =G(z) G^*(z)=I_n. \] \end{definition} \begin{remark} Notice that a para-Hermitian matrix $G(z)$ is Hermitian in the ordinary sense on the unit circle, while a para-unitary matrix $G(z)$ is unitary in the ordinary sense on the unit circle. \end{remark} The spectral factorization problem can be defined as follows: \begin{problem}\label{prob-sf} Given a spectrum $\Phi(z)$, find a factorization of the form \begin{equation}\label{sp-fac-def} \Phi(z)=W^*(z)W(z). \end{equation} \end{problem} A matrix function $W(z)$ satisfying (\ref{sp-fac-def}) is called a {\em spectral factor} of $\Phi(z)$. Clearly, Problem \ref{prob-sf} admits many solutions. For control applications we are interested in solutions featuring some additional properties: Typical requirements are minimal complexity---as measured by the McMillan degree of $W(z)$---full row-rank of $W(z)$, and the fact that the poles and/or the zeroes of $W(z)$ lie in certain regions of the complex plane. The most general kind of such regions are the following. \begin{definition}[(Weakly) Unmixed-symplectic]\label{def:unmixed-symplectic} A set $\mathscr{A}\subset \overline{\mathbb{C}}$ is {\em unmixed-symplectic}\footnote{The reason for the term ``symplectic'' is that $\mathscr{A}$ and $\mathscr{A}^*$ are symmetric with respect to the unit circle, a type of symmetry induced by the symplectic property, see, e.g. \cite{Ferrante-L-98}.
In this spirit, the corresponding property in continuous-time, where $\mathscr{A}^*:= \{\,z\,:\, -z\in\mathscr{A}\,\}$, $\mathscr{A}\cup \mathscr{A}^*$ is the whole complex plane with the exception of the imaginary axis and $\mathscr{A}\cap \mathscr{A}^*=\emptyset$, could be called ``unmixed-Hamiltonian". } if $$ \mathscr{A}\cup \mathscr{A}^*=\overline{\mathbb{C}}\setminus \{\,z\in\mathbb{C}\,:\, |z|=1\,\},\ \ {\rm and}\ \ \mathscr{A}\cap \mathscr{A}^*=\emptyset,$$ where $\mathscr{A}^*=\{\,z\,:\, z^{-1}\in\mathscr{A}\,\}$. The set $\mathscr{A}\subset \overline{\mathbb{C}}$ is {\em weakly unmixed-symplectic} if $$ \mathscr{A}\cup \mathscr{A}^*=\overline{\mathbb{C}},\ \ {\rm and}\ \ \mathscr{A}\cap \mathscr{A}^*=\{\,z\in\mathbb{C}\,:\, |z|=1\,\}.$$ \end{definition} We are now ready for our main result. \begin{theorem}\label{thmsf-dt-g} Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r\neq 0$. Let $\mathscr{A}_p$ and $\mathscr{A}_z$ be two unmixed-symplectic sets. Then, there exists a function $W(z)\in\mathbb{R}(z)^{r\times n}$ such that \begin{enumerate} \item $\Phi(z)=W^*(z)W(z)$. \label{item:thmsf-dt-g(1)} \item $W(z)$ is analytic in $\mathscr{A}_p$ and its right inverse $W^{-R}(z)$ is analytic in $\mathscr{A}_z$. \label{item:thmsf-dt-g(2)} \item \label{item:thmsf-dt-g(3)} $W(z)$ is {\em stochastically minimal}, i.e., the McMillan degree of $W(z)$ is a half of the McMillan degree of $\Phi(z)$.
\newcounter{temp} \setcounter{temp}{\value{enumi}} \end{enumerate} Moreover, \begin{enumerate} \setcounter{enumi}{\value{temp}} \item \label{item:thmsf-dt-g(4)} If $\mathscr{A}_p=\mathscr{A}_z$ then $W(z)$ satisfying points \ref{item:thmsf-dt-g(1)}) and \ref{item:thmsf-dt-g(2)}) is unique up to a constant, orthogonal matrix multiplier on the left, i.e., if $W_1(z)$ also satisfies points \ref{item:thmsf-dt-g(1)}) and \ref{item:thmsf-dt-g(2)}) then $W_1(z)=TW(z)$ where $T\in\mathbb{R}^{r\times r}$ is orthogonal. Therefore, if $\mathscr{A}_p=\mathscr{A}_z$, points \ref{item:thmsf-dt-g(1)}) and \ref{item:thmsf-dt-g(2)}) imply point \ref{item:thmsf-dt-g(3)}). \item \label{item:thmsf-dt-g(6)} If $\Phi(z)=L^*(z) L(z)$ is any factorization in which $L(z)\in\mathbb{R}(z)^{r\times n}$ is analytic in $\mathscr{A}_z$, then $L(z)=V(z)W(z)$, $V(z)\in\mathbb{R}(z)^{r\times r}$ being a para-unitary matrix analytic in $\mathscr{A}_z$. Moreover, given an arbitrary para-unitary matrix $V(z)\in\mathbb{R}(z)^{r\times r}$ analytic in $\mathscr{A}_p$, $L(z):=V(z)W(z)$ is analytic in $\mathscr{A}_p$ and satisfies $\Phi(z)=L^*(z) L(z)$, so that, if $\mathscr{A}_p=\mathscr{A}_z=:\mathscr{A}$, then $\Phi(z)=L^*(z) L(z)$ is a factorization in which $L(z)\in\mathbb{R}(z)^{r\times n}$ is analytic in $\mathscr{A}$ if and only if $L(z)=V(z)W(z)$, $V(z)\in\mathbb{R}(z)^{r\times r}$ being a para-unitary matrix analytic in $\mathscr{A}$. \item If $\Phi(z)$ is analytic on the unit circle, then points \ref{item:thmsf-dt-g(1)})-\ref{item:thmsf-dt-g(6)}) still hold even if $\mathscr{A}_p$ is weakly unmixed-symplectic. \label{item:thmsf-dt-g(7)} \item If $\Phi(z)$ is analytic on the unit circle and the rank of $\Phi(z)$ is constant on the unit circle, then points \ref{item:thmsf-dt-g(1)})-\ref{item:thmsf-dt-g(6)}) still hold even if $\mathscr{A}_p$ and/or $\mathscr{A}_z$ are weakly unmixed-symplectic.
\label{item:thmsf-dt-g(8)} \end{enumerate} \end{theorem} Of course, the most common requirement in control theory is that $W(z)$ is outer, which corresponds to setting $\mathscr{A}_p=\mathscr{A}_z=\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$ in the general case, $\mathscr{A}_p=\{\,z\in\overline{\mathbb{C}}\,:\,|z|\geq 1 \,\}$ and $\mathscr{A}_z=\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$ in the case when $\Phi(z)$ is analytic on the unit circle, and $\mathscr{A}_p=\mathscr{A}_z=\{\,z\in\overline{\mathbb{C}}\,:\,|z|\geq1 \,\}$ when $\Phi(z)$ is analytic on the unit circle and the rank of $\Phi(z)$ is constant on the unit circle. This particular case of the previous result corresponds to the following result, whose first 6 points are the discrete-time counterpart of the celebrated Youla's Theorem \cite[Thm.2]{Youla-1961}. \begin{theorem}\label{thmsf-dt} Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r\neq 0$. Then, there exists a matrix $W(z)\in\mathbb{R}(z)^{r\times n}$ such that \begin{enumerate} \item $\Phi(z)=W^*(z)W(z)$. \label{item:thmsf-dt(i)} \item $W(z)$ and its right inverse $W^{-R}(z)$ are both analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$. \label{item:thmsf-dt(ii)} \item $W(z)$ is unique up to a constant, orthogonal matrix multiplier on the left, i.e., if $W_1(z)$ also satisfies points \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}), then $W_1(z)=TW(z)$ where $T\in\mathbb{R}^{r\times r}$ is orthogonal. \label{item:thmsf-dt(iii)} \item Any factorization of the form $\Phi(z)=L^*(z) L(z)$ in which $L(z)\in\mathbb{R}(z)^{r\times n}$ is analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$, has the form $L(z)=V(z)W(z)$, where $V(z)\in\mathbb{R}(z)^{r\times r}$ is a para-unitary matrix analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$.
Conversely, any $L(z)=V(z)W(z)$, where $V(z)\in\mathbb{R}(z)^{r\times r}$ is a para-unitary matrix analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$, is a spectral factor of $\Phi(z)$ analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$. \label{item:thmsf-dt(iv)}
\item If $\Phi(z)$ is analytic on the unit circle, then $W(z)$ is analytic in $\{\, z\in\overline{\mathbb{C}}\,:\,|z|\geq 1\, \}$. \label{item:thmsf-dt(v)}
\item If $\Phi(z)$ is analytic on the unit circle and the rank of $\Phi(z)$ is constant on the unit circle, then $W(z)$ and its right inverse $W^{-R}(z)$ are both analytic in $\{\, z\in\overline{\mathbb{C}}\,:\,|z|\geq 1\, \}$. \label{item:thmsf-dt(vi)}
\item \label{item:thmsf-dt(vii)} $W(z)$ satisfying points \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}) is {\em stochastically minimal}, i.e., the McMillan degree of $W(z)$ is half the McMillan degree of $\Phi(z)$.
\end{enumerate}
\end{theorem}
\begin{remark}
Notice that the hypothesis $\mathrm{rk}(\Phi)\neq 0$ in the previous results is only assumed to rule out the trivial case of an identically zero spectrum $\Phi(z)$, for which the only spectral factorizations clearly correspond to $W(z)=\mathbf{0}_{m,n}$, with $m$ arbitrary, so that, in this case, $W(z)$ cannot be chosen to be full row-rank.
\end{remark}
\section{Mathematical preliminaries on rational matrices}\label{sec:problem-statement}
Let $f(z)=p(z)/q(z)\in \mathbb{R}(z)$, $q(z)\neq 0$, be a nonzero rational function. We can always write $f(z)$ in the form
\[
f(z)=\frac{n(z)}{d(z)}(z-\alpha)^\nu, \quad \forall\, \alpha\in \mathbb{C},
\]
where $\nu$ is an integer and $n(z),\, d(z)\in \mathbb{R}[z]$ are nonzero polynomials such that $n(\alpha)\neq 0$ and $d(\alpha)\neq 0$. The integer $\nu$ is called the {\em valuation of $f(z)$ at $\alpha$} and we denote it by the symbol $v_\alpha(f)$.
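For readers who wish to experiment, the valuation $v_\alpha(f)$ just defined can be computed symbolically. The following is a small illustrative sketch using the \texttt{sympy} library (the helper \texttt{valuation} is our own, not part of any standard package, and assumes $f$ is a nonzero rational function):

```python
import sympy as sp

z = sp.symbols('z')

def valuation(f, alpha):
    """v_alpha(f): the exponent nu with f = (z-alpha)**nu * n(z)/d(z),
    where n(alpha) != 0 and d(alpha) != 0.
    Assumes f is a nonzero rational function of z (hypothetical helper)."""
    num, den = sp.fraction(sp.cancel(f))
    nu = 0
    P = sp.Poly(num, z)
    while P.eval(alpha) == 0:                  # alpha is a zero of the numerator
        P = P.quo(sp.Poly(z - alpha, z))
        nu += 1
    Q = sp.Poly(den, z)
    while Q.eval(alpha) == 0:                  # alpha is a zero of the denominator
        Q = Q.quo(sp.Poly(z - alpha, z))
        nu -= 1
    return nu

f = (z - 1)**2 / (z * (z + 2))
print(valuation(f, 1))   # zero of multiplicity 2 -> 2
print(valuation(f, 0))   # simple pole -> -1
print(valuation(f, 3))   # neither zero nor pole -> 0
```

For the same $f$, the valuation at infinity of the next paragraph is simply $\deg q - \deg p = 2 - 2 = 0$.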
The valuation of $f(z)$ at infinity is defined as $v_\infty(f):=\deg q(z)-\deg p(z)$, where $\deg(\cdot)$ denotes the degree of a polynomial. If $f(z)$ is the null function, by convention, $v_\alpha(f)=+\infty$ for every $\alpha\in \overline{\mathbb{C}}$. If $v_\alpha(f)<0$, then $\alpha\in \overline{\mathbb{C}}$ is called a {\em pole} of $f(z)$ of multiplicity $-v_\alpha(f)$. If $v_\alpha(f)>0$, then $\alpha\in \overline{\mathbb{C}}$ is called a {\em zero} of $f(z)$ of multiplicity $v_\alpha(f)$. The rational function $f(z)$ is said to be \emph{proper} if $v_\infty(f)\geq 0$, and \emph{strictly proper} if $v_\infty(f)> 0$.
A polynomial matrix $G(z)\in\mathbb{R}[z]^{m\times n}$ is said to be {\em unimodular} if it has a polynomial inverse (either left, right or both). Similarly, an L-polynomial matrix $G(z)\in\mathbb{R}[z,z^{-1}]^{m\times n}$ is said to be {\em L-unimodular} if it has an L-polynomial inverse (either left, right or both). A square polynomial matrix $G(z)\in\mathbb{R}[z]^{n\times n}$ is unimodular if and only if its determinant is a nonzero constant $\alpha\in\mathbb{R}_0$. On the other hand, a square L-polynomial matrix $G(z)\in\mathbb{R}[z,z^{-1}]^{n\times n}$ is L-unimodular if and only if its determinant is a nonzero monomial $\alpha z^{k}$, $\alpha\in\mathbb{R}_0$, $k\in\mathbb{Z}$.
Consider now a nonzero real L-polynomial vector $\mathbf{v}(z)\in\mathbb{R}[z,z^{-1}]^{p}$. We can write it as
\[
\mathbf{v}(z)=\mathbf{v}_k z^k+\mathbf{v}_{k+1} z^{k+1}+\cdots+\mathbf{v}_{K-1} z^{K-1}+\mathbf{v}_K z^K,
\]
with $\mathbf{v}_k$ and $\mathbf{v}_K$, $k\leq K$, nonzero vectors in $\mathbb{R}^p$.
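The extreme exponents $k$ and $K$ above are easy to extract mechanically. As a quick sanity check, here is an illustrative \texttt{sympy} sketch (the helper \texttt{extreme\_degrees} is our own naming, assuming the entries are Laurent polynomials in $z$):

```python
import sympy as sp

z = sp.symbols('z')

def extreme_degrees(v):
    """Return (k, K): the least and greatest exponent of z carrying a
    nonzero coefficient in the nonzero L-polynomial vector v.
    (Illustrative helper; assumes Laurent-polynomial entries.)"""
    exps = set()
    for entry in v:
        for term in sp.Add.make_args(sp.expand(entry)):
            c, e = term.as_coeff_exponent(z)   # term = c * z**e
            if c != 0:
                exps.add(int(e))
    return min(exps), max(exps)

v = sp.Matrix([3*z**2 + z**-1, 5*z])
print(extreme_degrees(v))   # (-1, 2)
```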
We say that the integer $k$ is the {\em minimum-degree} of $\mathbf{v}(z)$, written $\min\,\deg\, \mathbf{v}$, while the integer $K$ is the {\em maximum-degree} of $\mathbf{v}(z)$, written $\max\,\deg\, \mathbf{v}$.\footnote{If $\mathbf{v}(z)$ is the zero vector, then $\min\,\deg\, \mathbf{v}$ and $\max\,\deg\, \mathbf{v}$ are left undefined.}
Let $G(z)\in\mathbb{R}[z,z^{-1}]^{m\times n}$ and let $k_i$ and $K_i$ be the minimum- and maximum-degree of the $i$-th column of $G(z)$, for all $i=1,\dots,n$. We define the {\em highest-column-degree coefficient matrix} of $G(z)$ as the constant matrix $G^{\rm hc}\in\mathbb{R}^{m\times n}$ whose $i$-th column consists of the coefficients of the monomials $z^{K_i}$ in the same column of $G(z)$. Furthermore, we define the {\em lowest-column-degree coefficient matrix} of $G(z)$ as the constant matrix $G^{\rm lc}\in\mathbb{R}^{m\times n}$ whose $i$-th column consists of the coefficients of the monomials $z^{k_i}$ in the same column of $G(z)$. By considering the rows of $G(z)$ instead of the columns, we can define, along the same lines as above, the {\em highest-row-degree coefficient matrix} of $G(z)$, $G^{\rm hr}\in\mathbb{R}^{m\times n}$, and the {\em lowest-row-degree coefficient matrix} of $G(z)$, $G^{\rm lr}\in\mathbb{R}^{m\times n}$.
A classical result in rational matrix theory is the following (see, e.g., \cite[Ch.6, \S5]{Kailath-1998}).
\begin{theorem}[Smith-McMillan] \label{thm:smith-mcmillan}
Let $G(z)\in\mathbb{R}(z)^{m\times n}$ and let $\mathrm{rk}(G)=r$.
There exist unimodular matrices $U(z)\in\mathbb{R}[z]^{r\times m}$ and $V(z)\in\mathbb{R}[z]^{n\times r}$ such that
\begin{align}\label{eq:smith-mcmillan-canonic-form}
D(z):&=U(z)G(z)V(z)\nonumber\\
&=\operatorname{diag}\left[\frac{\varepsilon_1(z)}{\psi_1(z)},\frac{\varepsilon_2(z)}{\psi_2(z)},\dots,\frac{\varepsilon_r(z)}{\psi_r(z)}\right],
\end{align}
where $\varepsilon_1(z),\varepsilon_2(z),\dots,\varepsilon_r(z), \psi_1(z), \psi_2(z),\dots,\psi_r(z)\in\mathbb{R}[z]$ are monic polynomials satisfying the conditions: {(i)} $\varepsilon_i(z)$ and $\psi_i(z)$ are relatively prime, $i=1,2,\dots,r$; {(ii)} $\varepsilon_i(z)\mid \varepsilon_{i+1}(z)$ and $\psi_{i+1}(z)\mid \psi_{i}(z)$, $i=1,2,\dots,r-1$.{\footnote{We write $p(z) \mid q(z)$, with $p(z), q(z)\in \mathbb{R}[z]$, to say that $p(z)$ divides $q(z)$.}}
\end{theorem}
The rational matrix $D(z)$ in (\ref{eq:smith-mcmillan-canonic-form}) is known as the {\em Smith-McMillan canonical form} of $G(z)$. (In general, we say that a rational matrix is {\em canonic} if it is of the form in (\ref{eq:smith-mcmillan-canonic-form}) and satisfies the conditions of the above theorem.) The (finite) zeroes of $G(z)$ coincide with the zeroes of $\varepsilon_r(z)$ and the (finite) poles of $G(z)$ with the zeroes of $\psi_1(z)$. Note that, unlike what happens in the scalar case, the sets of zeroes and poles of a rational matrix may not be disjoint.
Let $G(z)\in\mathbb{R}(z)^{m\times n}$ and write $G(z)=C(z)D(z)F(z)$, where $D(z)$ is the Smith-McMillan form of $G(z)$ and $C(z), F(z)$ are unimodular matrices. If $\mathrm{rk}(G)=m=n$, then the inverse of $G(z)$ has the form
\[
G^{-1}(z)=F^{-1}(z)D^{-1}(z)C^{-1}(z)
\]
and $D^{-1}(z)$ coincides with the Smith-McMillan canonical form of $G^{-1}(z)$, up to a permutation of the diagonal elements. Therefore, the poles of $G^{-1}(z)$ are exactly the zeroes of $G(z)$.
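For small examples, the diagonal of the Smith-McMillan form can be computed from the classical characterization of the invariant factors as quotients of gcds of $k\times k$ minors (of the polynomial matrix obtained by clearing denominators). The following sketch is our own helper on top of \texttt{sympy}; it returns only the diagonal entries $\varepsilon_i/\psi_i$, without the transformation matrices, and is intended purely as an illustration:

```python
import sympy as sp
from itertools import combinations

z = sp.symbols('z')

def smith_mcmillan_diag(G):
    """Diagonal entries eps_i/psi_i of the Smith-McMillan form of G(z),
    via Delta_k = monic gcd of all k x k minors of d(z)*G(z), where
    d(z) is a common denominator.  Illustrative helper, small matrices only."""
    m, n = G.shape
    d = sp.lcm([sp.fraction(sp.cancel(e))[1] for e in G])   # common denominator
    N = (G * d).applyfunc(sp.cancel)                        # polynomial matrix
    r = N.rank()
    gcds = [sp.Integer(1)]                                  # Delta_0 := 1
    for k in range(1, r + 1):
        minors = [N[list(rows), list(cols)].det()
                  for rows in combinations(range(m), k)
                  for cols in combinations(range(n), k)]
        gcds.append(sp.monic(sp.gcd(minors), z))            # Delta_k
    return [sp.cancel(gcds[k] / gcds[k - 1] / d) for k in range(1, r + 1)]

G = sp.Matrix([[1/(z*(z + 1)), 0],
               [0,             1/z]])
print(smith_mcmillan_diag(G))   # diagonal entries 1/(z*(z+1)) and 1/z
```

On this example one can check the divisibility conditions of the theorem directly: $\varepsilon_1=\varepsilon_2=1$, while $\psi_2=z$ divides $\psi_1=z(z+1)$.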
In a similar fashion, if $G(z)$ has normal rank $m$ ($n$), there always exists a right (left) inverse of $G(z)$ such that the poles of $G^{-R}(z)$ ($G^{-L}(z)$) coincide with the zeroes of $G(z)$.\footnote{The latter fact is not true for all the right/left inverses of $G(z)$, since, in general, the zeroes of $G(z)$ are among the poles of all such inverses (see \cite[Ch.6, \S5, Ex.14]{Kailath-1998}).} Indeed, we may take
\begin{align}
G^{-R}(z)&=F^{-R}(z)D^{-1}(z)C^{-1}(z),\label{eq:right-inv}\\
G^{-L}(z)&=F^{-1}(z)D^{-1}(z)C^{-L}(z).\label{eq:left-inv}
\end{align}
In the following, we consider only right and left inverses of the form (\ref{eq:right-inv}) and (\ref{eq:left-inv}), respectively.
Let $\alpha_1,\alpha_2,\dots,\alpha_t$ be the (finite) zeroes and (finite) poles of $G(z)$. We can write the Smith-McMillan canonical form of $G(z)$ as
\begin{align}
\mathrm{diag}\Big[(z-\alpha_1)^{\nu_1^{(1)}}\cdots (z-&\alpha_t)^{\nu_t^{(1)}},\dots,\nonumber\\
&(z-\alpha_1)^{\nu_1^{(r)}}\cdots (z-\alpha_t)^{\nu_t^{(r)}}\Big].\nonumber
\end{align}
The integer exponents $\nu_i^{(1)}\leq \nu_i^{(2)}\leq \cdots \leq \nu_i^{(r)}$ appearing in the above expression are called the {\em structural indices} of $G(z)$ at $\alpha_i$ and they are used to represent the zero-pole structure of $G(z)$ at $\alpha_i$. To obtain the zero-pole structure at infinity of $G(z)$, we can proceed as follows. We make the change of variable $z\to \lambda^{-1}$ and compute the Smith-McMillan form of $G(\lambda^{-1})$; the structural indices of $G(\lambda^{-1})$ at $\lambda=0$ then give the set of structural indices of $G(z)$ at $z=\infty$.
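The change of variable $z\to\lambda^{-1}$ is easy to carry out symbolically. As an illustrative sketch (again with \texttt{sympy}; for a matrix that is already diagonal and canonic, the structural indices at a point are just the valuations of the diagonal entries, so we restrict to that case and reuse a valuation helper of our own):

```python
import sympy as sp

z, lam = sp.symbols('z lam')

def valuation(f, alpha, var):
    """Exponent of (var - alpha) in the nonzero rational function f (own helper)."""
    num, den = sp.fraction(sp.cancel(f))
    v = 0
    P = sp.Poly(num, var)
    while P.eval(alpha) == 0:
        P = P.quo(sp.Poly(var - alpha, var)); v += 1
    Q = sp.Poly(den, var)
    while Q.eval(alpha) == 0:
        Q = Q.quo(sp.Poly(var - alpha, var)); v -= 1
    return v

# Smith-McMillan diagonal of some G(z): already diagonal and canonic here
D = [sp.Integer(1)/z, (z - 2)**2]

# structural indices at the finite point z = 2
print([valuation(d, 2, z) for d in D])                       # [0, 2]

# structural indices at z = infinity: substitute z -> 1/lam, look at lam = 0
print(sorted(valuation(d.subs(z, 1/lam), 0, lam) for d in D))  # [-2, 1]
```

The result at infinity agrees with the scalar definition $v_\infty(f)=\deg q-\deg p$: the entry $1/z$ has $v_\infty=1$ and the entry $(z-2)^2$ has $v_\infty=-2$.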
Lastly, if $p_1,\dots,p_h$ are the distinct poles (the pole at infinity included) of $G(z)$, we recall that the {\em McMillan degree} of $G(z)$ can be defined as (see, e.g., \cite[Ch.6, \S5]{Kailath-1998})
\begin{equation}\label{eq:mcmillan-degree}
\delta_M(G):=\sum_{i=1}^h\delta(G;p_i),
\end{equation}
where $\delta(G;p_i)$ is the degree of the pole $p_i$, i.e., the largest multiplicity that $p_i$ possesses as a pole of any minor of $G(z)$. In particular, if $D(z)$ in (\ref{eq:smith-mcmillan-canonic-form}) is the Smith-McMillan form of $G(z)$ and $G(z)$ has no pole at infinity, then $\delta(G;p_i)=\delta(D;p_i)$ for all $i=1,\dots,h$, which, in turn, yields $\delta_M(G)=\delta_M(D)=\sum_{i=1}^r \deg \psi_i(z)$.
\section{Preliminary results}\label{sec:pre-analysis}
In this section, we collect a set of lemmata which we will exploit in the constructive proof of the main theorem.
\begin{lemma}\label{lemma1}
A matrix $G(z)\in\mathbb{R}(z)^{m\times n}$ is analytic in $\mathbb{C}_0$ together with its inverse (either right, left or both) if and only if it is an L-unimodular polynomial matrix.
\end{lemma}
\begin{proof}
If $G(z)$ is L-unimodular, then $G(z)$ has an inverse (either left, right or both) which is L-polynomial. Hence, the only possible finite zeroes/poles of $G(z)$ are located at $z=0$. This, in turn, implies that $G(z)$ must be analytic together with its inverse in $\mathbb{C}_0$.
Vice versa, suppose that $G(z)$ is analytic with its inverse in $\mathbb{C}_0$. Firstly, we notice that the existence of a left or right inverse for $G(z)$ implies that the normal rank of $G(z)$ is either $r=n$ or $r=m$, respectively. Without loss of generality, we can suppose that $r=n$.
By the Smith-McMillan Theorem, we can write $G(z)=C(z)D(z)F(z)$, where $C(z)\in\mathbb{R}[z]^{m\times n}$ and $F(z)\in\mathbb{R}[z]^{n\times n}$ are unimodular (and, a fortiori, L-unimodular) polynomial matrices, and $D(z)\in\mathbb{R}(z)^{n\times n}$ is diagonal and canonic of the form
\[
D(z)=\mathrm{diag} \left[\frac{\varepsilon_1(z)}{\psi_1(z)}, \frac{\varepsilon_2(z)}{\psi_2(z)},\dots , \frac{\varepsilon_n(z)}{\psi_n(z)} \right].
\]
The analyticity of $G(z)$ in $\mathbb{C}_0$ implies that all $\psi_i(z)\in\mathbb{R}[z]$, $i=1,\dots,n$, are nonzero monomials. The Smith-McMillan canonical form of $G^{-L}(z)$ is given by
\[
\mathrm{diag} \left[\frac{\psi_n(z)}{\varepsilon_n(z)}, \frac{\psi_{n-1}(z)}{\varepsilon_{n-1}(z)},\dots , \frac{\psi_1(z)}{\varepsilon_1(z)} \right].
\]
Hence, the analyticity of $G^{-L}(z)$ in $\mathbb{C}_0$ implies that all $\varepsilon_i(z)\in\mathbb{R}[z]$, $i=1,\dots,n$, are nonzero monomials. Therefore, $D(z)$ is an L-unimodular polynomial matrix. Since $G(z)=C(z)D(z)F(z)$ is the product of three L-unimodular polynomial matrices, $G(z)$ must be an L-unimodular polynomial matrix.
\end{proof}
\begin{lemma}\label{lemma2}
Let $\mathscr{A}\subset \overline{\mathbb{C}}$ be an unmixed-symplectic set. A para-unitary matrix $G(z)\in\mathbb{R}(z)^{n\times n}$ analytic in $\mathscr{A}$ with inverse analytic in $\mathscr{A}$ is a constant orthogonal matrix.
\end{lemma}
\begin{proof}
The analyticity of the inverse of $G(z)$ in $\mathscr{A}$ implies that of $G(z^{-1})$ in the same region, and therefore that of $G(z)$ in $\mathscr{A}^*$. We also notice that on the unit circle it holds that $G^*(e^{j\omega}) G(e^{j\omega})=G^\top(e^{-j\omega}) G(e^{j\omega})=I_n$, $\forall\,\omega \in [0,2\pi)$, and we can write out the diagonal elements in expanded form as
\[
\sum_{i=1}^n |[G(e^{j\omega})]_{ik}|^2=1, \ \ \ \ \forall\, k=1,\dots,n, \ \forall\, \omega \in [0,2\pi).
\]
The latter equation implies that
\[
|[G(e^{j\omega})]_{ik}|\leq 1, \ \ \ \ \forall\, i,\, k=1,\dots,n, \ \forall\, \omega \in [0,2\pi),
\]
which proves the analyticity of $G(z)$ on the unit circle. By Definition \ref{def:unmixed-symplectic} of unmixed-symplectic set, it follows that $G(z)$ is analytic on the entire extended complex plane. This means that $G(z)$ is analytic and bounded in $\mathbb{C}$. Hence, we can apply Liouville's Theorem \cite[Ch.V, Thm.1.4]{Lang-1985} and conclude that $G(z)$ must be a constant orthogonal matrix.
\end{proof}
\begin{remark}
With the usual choice $\mathscr{A}=\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$, the previous lemma reads as follows: a para-unitary matrix $G(z)\in\mathbb{R}(z)^{n\times n}$ analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$ with inverse analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$ is a constant orthogonal matrix.
\end{remark}
\begin{definition}[Left-standard factorization]\label{def:ls-fact}
Let $G(z)\in \mathbb{R}(z)^{m\times n}$ and let $\mathrm{rk}(G)=r$. A decomposition of the form $G(z)=A(z)\Delta(z)B(z)$ is called a \emph{left-standard factorization} if
\begin{enumerate}
\item $\Delta(z)\in\mathbb{R}(z)^{r \times r}$ is diagonal and analytic with its inverse in $\{\, z\in \mathbb{C}_0 \,:\, |z|\neq1 \, \}$;
\item $A(z)\in\mathbb{R}(z)^{m \times r}$ is analytic together with its left inverse in $\{\, z\in \mathbb{C}_0 \,:\, |z|\leq 1 \, \}$;
\item $B(z)\in\mathbb{R}(z)^{r \times n}$ is analytic together with its right inverse in $\{\, z\in \mathbb{C} \,:\, |z|\geq 1 \, \}$.
\end{enumerate}
\end{definition}
\begin{remark}
If, in Definition \ref{def:ls-fact}, the roles of $A(z)$ and $B(z)$ are interchanged, we have a \emph{right-standard factorization}.
Hence, it follows that any left-standard factorization of $G(z)$ generates a right-standard factorization of $G^\top(z)$, $G^{-1}(z)$ (if $G(z)$ is nonsingular), and $G(z^{-1})$; e.g., in the first case we have $G^\top(z) =B^\top(z) \Delta(z) A^\top(z)$.
\end{remark}
\begin{lemma}\label{lemma3}
Any rational matrix $G(z)\in \mathbb{R}(z)^{m\times n}$ of normal rank $\mathrm{rk}(G)=r$ admits a left-standard factorization.
\end{lemma}
\begin{proof}
By the Smith-McMillan Theorem, we can write $G(z)=C(z)D(z)F(z)$, where $C(z)\in \mathbb{R}[z]^{m\times r}$, $F(z)\in \mathbb{R}[z]^{r\times n}$ are unimodular polynomial matrices and $D(z)\in \mathbb{R}(z)^{r\times r}$ is diagonal and canonic of the form
\[
D(z)=\mathrm{diag} \left[\frac{\varepsilon_1(z)}{\psi_1(z)}, \frac{\varepsilon_2(z)}{\psi_2(z)},\dots , \frac{\varepsilon_r(z)}{\psi_r(z)} \right].
\]
We factor $\varepsilon_i(z)\in\mathbb{R}[z]$ and $\psi_i(z)\in\mathbb{R}[z]$, $i=1,\dots,r$, in $D(z)$ into the product of three polynomials: the first without zeroes in $\{\, z\in \mathbb{C}\,:\, |z|\leq 1 \,\}$, the second without zeroes in $\{\, z\in \mathbb{C}\,:\, |z|\neq 1 \,\}$ and the third without zeroes in $\{\, z\in \mathbb{C}\,:\, |z|\geq 1 \,\}$. Thus, it is possible to write
\[
D(z)=D_-(z)\Delta(z)D_+(z),
\]
where $D_-(z)$ and its inverse are analytic in $\{\, z\in \mathbb{C}\,:\, |z|\leq 1 \,\}$, $\Delta(z)$ and its inverse in $\{\, z\in \mathbb{C}\,:\, |z|\neq 1 \,\}$, and $D_+(z)$ and its inverse in $\{\, z\in \mathbb{C}\,:\, |z|\geq 1 \,\}$. Finally, by choosing $A(z):=C(z)D_-(z)$ and $B(z):=D_+(z)F(z)$, we have that $G(z)=A(z)\Delta(z)B(z)$ is a left-standard factorization of $G(z)$.
\end{proof}
Left-standard factorizations are not unique. Indeed, any two decompositions are connected as follows.
\begin{lemma}\label{lemma4}
Let $G(z)\in \mathbb{R}(z)^{m\times n}$ be a rational matrix of normal rank $\mathrm{rk}(G)=r$ and let $G(z)=A(z)\Delta(z)B(z)=A_1(z)\Delta_1(z)B_1(z)$ be two left-standard factorizations of $G(z)$. Then,
\[
A_1(z)=A(z)M^{-1}(z), \quad B_1(z)=N(z)B(z),
\]
where $M(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ and $N(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ are two L-unimodular polynomial matrices such that
\begin{align}\label{eq:M(z)Delta(z)N(z)}
M(z)\Delta(z)N^{-1}(z)=\Delta_1(z).
\end{align}
\end{lemma}
\begin{proof}
By assumption,
$$G(z)=A(z)\Delta(z)B(z)=A_1(z)\Delta_1(z)B_1(z),$$
which, in turn, implies
\begin{align}\label{eq:lemma4}
\Delta_1^{-1}(z)A_1^{-L}(z)A(z)\Delta(z)=B_1(z)B^{-R}(z).
\end{align}
By Definition \ref{def:ls-fact} of left-standard factorization, the right-hand side of (\ref{eq:lemma4}) is analytic in $\{\,z\in\mathbb{C}\,:\,|z|\geq 1\,\}$, while the left-hand side of (\ref{eq:lemma4}) is analytic in $\{\,z\in\mathbb{C}_0\,:\,|z|< 1\,\}$. Therefore, it follows that $B_1(z)B^{-R}(z)$ is analytic in $\mathbb{C}_0$. Moreover, the inverse of $B_1(z)B^{-R}(z)$ satisfies
\[
[B_1(z)B^{-R}(z)]^{-1}=\Delta^{-1}(z)[A_1^{-L}(z)A(z)]^{-1}\Delta_1(z)
\]
and is also analytic in $\mathbb{C}_0$. Thus, by Lemma \ref{lemma1}, $N(z):=B_1(z)B^{-R}(z)$ must be an L-unimodular matrix. Similarly, $M(z):=A_1^{-L}(z)A(z)$ is an L-unimodular matrix. Finally, a rearrangement of (\ref{eq:lemma4}) yields (\ref{eq:M(z)Delta(z)N(z)}).
\end{proof}
\begin{remark}
Notice that, by replacing the word ``left-standard'' with the word ``right-standard'' in Lemmata \ref{lemma3} and \ref{lemma4}, we obtain, by minor modifications in the proofs, a right-standard counterpart of Lemmata \ref{lemma3} and \ref{lemma4}.
\end{remark}
Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a para-Hermitian matrix of normal rank $\mathrm{rk}(\Phi)=r$ and let $\Phi(z)=A(z)\Delta(z)B(z)$ be a left-standard factorization of $\Phi(z)$. We have that
\[
\Phi(z)=\Phi^*(z) =B^*(z) \Delta^*(z) A^*(z)
\]
is also a left-standard factorization of $\Phi(z)$. In particular, $\Delta^*(z)$ is equal to $\Delta(z)$, up to multiplication of its diagonal entries by suitable monomials of the form $\pm z^{k_i}$, i.e.,
\[
\Delta^*(z) =\Sigma(z) \Delta(z),
\]
where
\begin{equation}\label{eq:sigmadelta}
\Sigma(z)=\mathrm{diag}\left[e_1(z),e_2(z),\dots,e_r(z)\right]
\end{equation}
and $e_i(z)=\pm z^{k_i},\ k_i\in\mathbb{Z},\ i=1,\dots,r$. By invoking Lemma \ref{lemma4}, we can write
\begin{equation}\label{eq:A(z)B(z)}
A^*(z) = N(z)B(z), \quad B^*(z) = A(z)M^{-1}(z),
\end{equation}
where $N(z),\, M(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ are L-unimodular matrices.
The following two lemmata are used to establish a further characterization of a para-Hermitian matrix when it is positive semi-definite on the unit circle.
\begin{lemma}\label{lemma5-pre}
Let $G(z)\in\mathbb{R}(z)^{n\times n}$ and let $\mathbb{T}$ be a region of the complex plane such that
\begin{enumerate}
\item $G(z)$ is Hermitian on $\mathbb{T}$;
\item $\mathbf{x}^\top G(\lambda)\mathbf{x}\geq 0$, $\forall\,\mathbf{x}\in\mathbb{R}^n$ and $\forall\,\lambda\in\tilde{\mathbb{T}}\subseteq \mathbb{T}$ for which $G(\lambda)$ has finite entries.
\end{enumerate}
Let $D(z)\in\mathbb{R}(z)^{r\times r}$ be the Smith-McMillan canonical form of $G(z)$ and denote by $g_{\mathbf{ij}}^{(\ell)}$ and $d_{\mathbf{ij}}^{(\ell)}$ the $\ell\times \ell$ minors ($1\leq \ell\leq r$) of the rational matrices $G(z)$ and $D(z)$, respectively, obtained by selecting those rows and columns whose indices appear in the ordered $\ell$-tuples $\mathbf{i}$ and $\mathbf{j}$, respectively.
Then,
\[
\min_{\mathbf{i}} v_\alpha(d_{\mathbf{ii}}^{(\ell)})=\min_{\mathbf{i}} v_\alpha(g_{\mathbf{ii}}^{(\ell)}), \quad \forall \alpha\in\mathbb{T}.
\]
\end{lemma}
\begin{proof}
Firstly, we recall that for any rational matrix $G(z)$ it holds that
\[
\min_{\mathbf{i}} v_\alpha(d_{\mathbf{ii}}^{(\ell)})=\min_{\mathbf{ij}} v_\alpha(d_{\mathbf{ij}}^{(\ell)})=\min_{\mathbf{ij}} v_\alpha(g_{\mathbf{ij}}^{(\ell)}),\quad \forall \alpha\in\mathbb{C}.
\]
The latter result is well-known and is presented, for instance, as an exercise in \cite[Ch.6, \S 5, Ex.6]{Kailath-1998}. Hence, it remains to prove that
\begin{equation}\label{eq:lemma-val-g}
\min_{\mathbf{ij}} v_\alpha(g_{\mathbf{ij}}^{(\ell)})=\min_{\mathbf{i}} v_\alpha(g_{\mathbf{ii}}^{(\ell)}),\quad \forall \alpha\in\mathbb{T}.
\end{equation}
Since $G(z)$ is Hermitian positive semi-definite on the region $\tilde{\mathbb{T}}$, it admits a decomposition of the form $G(\lambda)=W(\lambda)\overline{W(\lambda)}^\top$ for all $\lambda\in\tilde{\mathbb{T}}$. By applying the Binet-Cauchy Theorem (see \cite[Vol.I, Ch.1, \S 2]{Gantmacher-1959}), we have
\begin{align}
&g_{\mathbf{ij}}^{(\ell)}(\lambda)=\sum_{\mathbf{h}}w_{\mathbf{ih}}^{(\ell)}(\lambda)\overline{w_{\mathbf{jh}}^{(\ell)}(\lambda)}, \quad \forall\,\lambda\in\tilde{\mathbb{T}}, \label{eq:corollary-minors-1}\\
&g_{\mathbf{ii}}^{(\ell)}(\lambda)=\sum_{\mathbf{h}}w_{\mathbf{ih}}^{(\ell)}(\lambda)\overline{w_{\mathbf{ih}}^{(\ell)}(\lambda)}=\sum_{\mathbf{h}}\left|w_{\mathbf{ih}}^{(\ell)}(\lambda)\right|^2, \quad \forall\,\lambda\in\tilde{\mathbb{T}},\label{eq:corollary-minors-2}
\end{align}
where $g_{\mathbf{ij}}^{(\ell)}(\lambda)$ and $w_{\mathbf{ij}}^{(\ell)}(\lambda)$ denote the $\ell\times \ell$ minors of the matrices $G(\lambda)$ and $W(\lambda)$, obtained by selecting those rows and columns whose indices appear in the ordered $\ell$-tuples $\mathbf{i}$ and $\mathbf{j}$, respectively.
Moreover, in both the summations (\ref{eq:corollary-minors-1})-(\ref{eq:corollary-minors-2}), $\mathbf{h}:=(h_1,\dots,h_\ell)$, $1\leq h_1<\cdots<h_\ell\leq n$, runs through all such multi-indices. By using the Cauchy-Schwarz inequality and (\ref{eq:corollary-minors-1})-(\ref{eq:corollary-minors-2}), we have
\begin{align}
\left|g_{\mathbf{ij}}^{(\ell)}(\lambda)\right|&=\left|\sum_{\mathbf{h}}w_{\mathbf{ih}}^{(\ell)}(\lambda)\overline{w_{\mathbf{jh}}^{(\ell)}(\lambda)}\right|\nonumber \\
&\leq \sqrt{\sum_{\mathbf{h}}\left|w_{\mathbf{ih}}^{(\ell)}(\lambda)\right|^2\sum_{\mathbf{h}}\left|w_{\mathbf{jh}}^{(\ell)}(\lambda)\right|^2}\nonumber \\
&= \sqrt{g_{\mathbf{ii}}^{(\ell)}(\lambda)g_{\mathbf{jj}}^{(\ell)}(\lambda)} \nonumber \\
&\leq \max\left\{g_{\mathbf{ii}}^{(\ell)}(\lambda), g_{\mathbf{jj}}^{(\ell)}(\lambda) \right\}, \ \ \ \forall\,\lambda\in\tilde{\mathbb{T}}. \label{eq:inequality-cs}
\end{align}
The latter inequality implies that for every zero $\alpha\in\mathbb{T}$ of multiplicity $k$ of a minor of $G(z)$, there exists at least one principal minor of $G(z)$ which has the same $\alpha$ either as a zero of multiplicity less than or equal to $k$ or as a pole of multiplicity greater than or equal to $0$. Similarly, inequality (\ref{eq:inequality-cs}) also implies that for every pole $\alpha\in\mathbb{T}$ of multiplicity $k$ of a minor of $G(z)$, there exists at least one principal minor of $G(z)$ which has the same pole with multiplicity greater than or equal to $k$. Therefore, we conclude that (\ref{eq:lemma-val-g}) holds.
\end{proof}
\begin{lemma}\label{lemma5}
Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r$ and let $D(z)\in\mathbb{R}(z)^{r\times r}$ be its Smith-McMillan canonical form. Then, the zeroes and poles on the unit circle of the diagonal elements of $D(z)$ have even multiplicity.
\end{lemma}
\begin{proof}
Firstly, we assume that the numerators and denominators of all entries in $\Phi(z)$ are relatively prime polynomials. Let
$\alpha_1=e^{j\omega_1},\, \alpha_2=e^{j\omega_2},\dots,\, \alpha_t=e^{j\omega_t}$
be the zeroes/poles on the unit circle of $\Phi(z)$ and let
$\nu_i^{(1)},\, \nu_i^{(2)},\dots,\, \nu_i^{(r)}$ $(\nu_i^{(1)}\leq \nu_i^{(2)}\leq \dots\leq\nu_i^{(r)})$
be the structural indices of $\Phi(z)$ at $\alpha_i$, $i=1,\dots,t$. Since $\Phi(z)$ is positive semi-definite on the unit circle, one can directly verify that the zeroes and poles on the unit circle of the principal minors of $\Phi(z)$ must have even multiplicity. Now, by setting $\mathbb{T}:=\{\,z\in\mathbb{C}\,:\,|z|=1 \,\}$, we can apply Lemma \ref{lemma5-pre}. By considering the minors of order $\ell=1$, it follows that $\nu_i^{(1)}$ is even for all $i=1,2,\dots,t$. Similarly, by considering the minors of order $\ell=2$ in Lemma \ref{lemma5-pre}, it follows that $\nu_i^{(1)}+\nu_i^{(2)}$ is even for all $i=1,2,\dots,t$. Since $\nu_i^{(1)}$ is even, then also $\nu_i^{(2)}$ must be even for all $i=1,2,\dots,t$. By iterating the argument, we conclude that every zero/pole on the unit circle of the diagonal elements of $D(z)$ has even multiplicity.
\end{proof}
\begin{remark}
Lemma \ref{lemma5-pre} can also be used to obtain an alternative proof of \cite[Lemma 4, point 2]{Youla-1961}, which represents the continuous-time counterpart of Lemma \ref{lemma5}.
\end{remark}
\begin{lemma}\label{lemma-review}
Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r$ and let $D(z)\in\mathbb{R}(z)^{r\times r}$ be its Smith-McMillan canonical form.
Then $D(z)$ can be written as
\begin{equation}\label{eq:D(z)-decomposition}
D(z) = \Sigma(z)\Lambda^*(z)\Theta^*(z) \Theta(z)\Lambda(z),
\end{equation}
where $\Lambda(z)$ is diagonal, canonic and analytic with its inverse in $\{\,z\in\mathbb{C}\,:\,|z|\geq 1\,\}$ and, if $z=0$ is either a zero, a pole or both of $D(z)$, $\Lambda(z)$ has the same structural indices at $z=0$ as $D(z)$; $\Theta(z)$ is diagonal, canonic and analytic with its inverse in $\{\, z\in\mathbb{C}\,:\, |z|\neq 1\,\}$; $\Sigma(z)$ has the form
\begin{equation}\label{eq:sigmadelta-2}
\Sigma(z)=\mathrm{diag}\left[e_1(z),e_2(z),\dots,e_r(z)\right],
\end{equation}
with $e_i(z)=\alpha_{i} z^{k_i},\, \alpha_{i}\in \mathbb{R}_0,\, k_i\in\mathbb{Z},\, i=1,\dots,r$.
\end{lemma}
\begin{proof}
By direct computation, we obtain
\begin{equation}\label{eq:sigmabarD}
D^*(z)=\Sigma'(z)\bar{D}(z),
\end{equation}
where $\bar{D}(z)$ is canonic and $\Sigma'(z)$ is a diagonal matrix with elements $\alpha z^k$, $\alpha\in \mathbb{R}_0$, $k\in\mathbb{Z}$, on its diagonal. Since $\Phi(z)$ is a spectrum, we can write
\[\Phi(z)=C(z)D(z)F(z)=F^*(z) D^*(z) C^*(z) =\Phi^*(z).\]
The matrices $C(z)\in\mathbb{R}[z]^{n\times r}$ and $F(z)\in\mathbb{R}[z]^{r\times n}$ are unimodular, while $F^*(z)$ and $C^*(z)$ are L-unimodular. By Lemma \ref{lemma1}, $F(z)$, $C(z)$, $F^*(z)$, $C^*(z)$ are analytic in $\mathbb{C}_0$ together with their inverses. Thus, we have (see \cite[Ch.6, \S5, Ex.6]{Kailath-1998})
\[
\min_{\mathbf{i}} v_\alpha(d_{\mathbf{ii}}^{(\ell)})=\min_{\mathbf{i}} v_\alpha({d^{*(\ell)}_{\mathbf{ii}}}),\ \ \forall \alpha\in\mathbb{C}_0,\ \forall \ell : 1\leq \ell\leq r,
\]
where $d_{\mathbf{ii}}^{(\ell)}$ and ${d^{*(\ell)}_{\mathbf{ii}}}$ denote the $\ell\times \ell$ minors of $D(z)$ and $D^*(z)$, respectively, obtained by selecting those rows and columns whose indices appear in the ordered $\ell$-tuple $\mathbf{i}$.
The previous equation implies that, for every $\alpha\in \mathbb{C}_0$ which is either a pole, a zero or both of $D(z)$, $D^*(z)$ has the same structural indices at $\alpha$ as $D(z)$. Therefore, since by (\ref{eq:sigmabarD}) $\bar{D}(z)$ is canonic, it follows that
\[
D^*(z)=\Sigma''(z) D(z),
\]
where $\Sigma''(z)$ is diagonal with elements $\alpha z^k$, $\alpha\in \mathbb{R}_0$, $k\in\mathbb{Z}$, on its diagonal. This means that any zero/pole at $\alpha\in\mathbb{C}_0$ in the diagonal terms of $D(z)$ is accompanied by a zero/pole at $\alpha^{-1}$, and we can always write $D(z)$ as
\begin{align}\label{eq:Dzdec}
D(z)=\Sigma_1(z)\Lambda^*(z) \Delta(z)\Lambda(z),
\end{align}
where $\Sigma_1(z)$ is diagonal with elements $\alpha z^{k}$, $\alpha\in\mathbb{R}_0$, $k\in\mathbb{Z}$, on its diagonal, and $\Lambda(z)$ and $\Delta(z)$ are diagonal, canonic and analytic with their inverses in $\{\,z\in\mathbb{C}\,:\,|z|\geq 1\,\}$ and $\{\, z\in\mathbb{C}\,:\, |z|\neq 1\,\}$, respectively. Moreover, if $z=0$ is either a pole, a zero or both of $D(z)$, $\Lambda(z)$ possesses the same structural indices at $z=0$ as $D(z)$. As a matter of fact, let $\alpha_{i,k}$, $i=1,\dots,p_k$, and $\beta_{j,k}$, $j=1,\dots,q_k$, be the zeroes and poles, respectively, in $\{\,z\in\mathbb{C}_0\,:\,|z|< 1\,\}$ of $[D(z)]_{kk}$ and let $h_k\in \mathbb{Z}$ be the valuation at $z=0$ of $[D(z)]_{kk}$.
We can write, for all $k=1,\dots,r$,
\begin{align*}
&[D(z)]_{kk}=z^{h_k}\frac{\prod_{i=1}^{p_k}(z-\alpha_{i,k}^{-1})(z-\alpha_{i,k})}{\prod_{j=1}^{q_k}(z-\beta_{j,k}^{-1})(z-\beta_{j,k})}[\Delta(z)]_{kk}\\
&= \underbrace{\gamma_k \frac{z^{h_k}}{z^{q_k-p_k}}}_{[\Sigma_1(z)]_{kk}} \underbrace{z^{-h_k} \frac{\prod_{i=1}^{p_k}(z^{-1}-\alpha_{i,k})}{\prod_{j=1}^{q_k}(z^{-1}-\beta_{j,k})}}_{[\Lambda^*(z)]_{kk}}[\Delta(z)]_{kk}\,\cdot \\
&\hspace{5cm} \cdot \underbrace{z^{h_k} \frac{\prod_{i=1}^{p_k}(z-\alpha_{i,k})}{\prod_{j=1}^{q_k}(z-\beta_{j,k})}}_{[\Lambda(z)]_{kk}}
\end{align*}
with $\gamma_k:=(-1)^{q_k-p_k}\frac{\prod_{j=1}^{q_k}\beta_{j,k}}{\prod_{i=1}^{p_k}\alpha_{i,k}}$. Now, by exploiting Lemma \ref{lemma5}, $\Delta(z)$ can be written as
\[
\Delta(z)=\Theta^2(z)=\Sigma_2(z)\Theta^*(z) \Theta(z),
\]
with $\Sigma_2(z)$ diagonal with elements $\pm z^{k}$, $k\in\mathbb{Z}$, on its diagonal and $\Theta(z)$ diagonal, canonic and analytic together with its inverse in $\{\,z\in\mathbb{C}\,:\,|z|\neq 1\,\}$. Finally, we can rearrange $D(z)$ in the form
\[
D(z)=\Sigma(z)\Lambda^*(z)\Theta^*(z) \Theta(z)\Lambda(z),
\]
where $\Sigma(z):=\Sigma_1(z)\Sigma_2(z)$ has the form in (\ref{eq:sigmadelta-2}).
\end{proof}
To conclude this section, we report below another useful result.
\begin{lemma}\label{lemma7}
Let $\Psi(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ be a para-Hermitian L-unimodular matrix which is positive definite on the unit circle. Then, $\Psi^{\rm hc}$ is nonsingular if and only if $\Psi(z)$ is a constant matrix.
\end{lemma}
\begin{proof}
If $\Psi(z)$ is a constant matrix, then $\Psi^{\rm hc}=\Psi(z)$ is nonsingular, by definition of L-unimodular matrix. Conversely, assume that $\Psi^{\rm hc}$ is nonsingular.
Let us denote by $K_i\in\mathbb{Z}$, $i=1,\dots,r$, the maximum-degree of the $i$-th column of $\Psi(z)$ and by $k_i\in\mathbb{Z}$, $i=1,\dots,r$, the minimum-degree of the $i$-th row of $\Psi(z)$. Since $\Psi(z)=\Psi^*(z)$, we have that $\det \Psi(z)$ is a nonzero real constant and
\begin{equation}\label{eq:Ki-ki}
K_i=-k_i, \quad i=1,\dots,r.
\end{equation}
Moreover, since $\Psi(z)$ is positive definite on the unit circle, the diagonal elements of $\Psi(z)$ cannot be equal to zero and, therefore, $K_i\geq 0$, $i=1,\dots,r$. Actually, the nonsingularity of $\Psi^{\rm hc}$ yields
\begin{equation}\label{eq:Kizero}
K_i=0,\quad i=1,\dots,r,
\end{equation}
since otherwise one can check, by exploiting the Leibniz formula for determinants, that the maximum-degree of $\det \Psi(z)$ would be strictly positive; but this is not possible since, as noticed above, $\det \Psi(z)$ is a nonzero real constant and so $\max\,\deg\, (\det \Psi(z)) = 0$. By (\ref{eq:Kizero}), all the entries of $\Psi(z)$ must have maximum-degree less than or equal to zero. But, by (\ref{eq:Ki-ki}), $k_i=-K_i$ for all $i=1,\dots,r$, and so (\ref{eq:Kizero}) also implies that all the entries of $\Psi(z)$ must have minimum-degree greater than or equal to zero. We conclude that
\[
\max\,\deg\, [\Psi(z)]_{ij}=\min\,\deg\, [\Psi(z)]_{ij}=0, \quad i,\, j=1,\dots,r,
\]
and, therefore, $\Psi(z)$ must be a constant matrix.
\end{proof}
\section{Proof of the main theorem}\label{sec:main-theorem}
We are now ready to prove our main result. For the sake of clarity and readability, we first prove the special case of Theorem \ref{thmsf-dt} and then proceed to the proof of our general Theorem \ref{thmsf-dt-g}.
{\bf \em Proof of Theorem \ref{thmsf-dt}:} We first prove statement \ref{item:thmsf-dt(iii)}). Let $W(z)$ and $W_1(z)$ be two matrices satisfying \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}). Then,
\begin{align}\label{eq:thmsf-dt-1}
W^*(z) W(z)=W_1^*(z) W_1(z).
\end{align} The latter equation implies $V^*(z) V(z)=I_r$, where $V(z):=W_1(z)W^{-R}(z)$ is analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$. Thus, $V(z)\in\mathbb{R}(z)^{r\times r}$ is a para-unitary matrix analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$. Moreover, we have that $\Delta_1(z):=W_1(z)-V(z)W(z)=W_1(z)[I_n-W^{-R}(z)W(z)]$ satisfies \begin{align}\label{eq-unicita} &\Delta_1^*(z)\Delta_1(z) =\nonumber \\ & = [I_n-W^*(z)W^{-R*}(z)]W_1^*(z)W_1(z)[I_n-W^{-R}(z)W(z)]\nonumber \\ & = [I_n-W^*(z)W^{-R*}(z)]W^*(z)W(z)[I_n-W^{-R}(z)W(z)]\nonumber \\ & = 0, \end{align} so that \begin{equation}\label{w1=vw} W_1(z)=V(z)W(z), \end{equation} yielding that $ V^{-1}(z)=W(z)W_1^{-R}(z)$ is analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$. In view of Lemma \ref{lemma2}, we conclude that $V(z)$ is a constant orthogonal matrix. Consider now statement \ref{item:thmsf-dt(iv)}) and let $\Phi(z)=L^*(z) L(z)$, where $L(z)\in\mathbb{R}(z)^{n\times r}$ is analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$. In this case, we can write \[ L^*(z) L(z)=W^*(z) W(z). \] The latter equation implies $V^*(z) V(z)=I_r$, where $V(z):=L(z)W^{-R}(z)$ and $W(z)\in\mathbb{R}(z)^{r\times n}$ is a rational matrix satisfying \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}). Since $L(z)$ and $W^{-R}(z)$ are both analytic in $\{\,z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$, $V(z)\in\mathbb{R}(z)^{r\times r}$ is a para-unitary matrix analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$. The same computation that led to (\ref{w1=vw}) now gives $L(z)=V(z)W(z)$. Now, we provide a constructive proof of statements \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}), which represent the core of the theorem. The procedure is divided into four steps. \emph{Step 1.} Reduce $\Phi(z)$ to the Smith-McMillan canonical form.
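The uniqueness statements just proved can be observed numerically on the example worked out in Section~\ref{sec:numerical-example}: the factor $W(z)$ obtained there satisfies $\Phi(z)=W^*(z)W(z)$, and pre-multiplying it by any constant orthogonal matrix yields another spectral factor. Below is a minimal Python sketch, with $\Phi(z)$ and $W(z)$ hard-coded from that example; the sample point and the rotation angle are arbitrary choices, not part of the original argument.

```python
import numpy as np

def Phi(z):
    """Spectral density Phi(z) from the numerical example of Section 6."""
    a = (-2*z + 6 - 2/z) / (-2*z + 5 - 2/z)
    b, c, d = z - 1, 1/z - 1, -z + 2 - 1/z
    return np.array([[a, b, b],
                     [c, d, d],
                     [c, d, d]])

def W(z):
    """Spectral factor obtained at the end of that example."""
    return np.array([[-1/z, 1/z - 1, 1/z - 1],
                     [1/(2*z - 1), 0, 0]])

def par_conj(F, z):
    """Para-conjugate of a real rational matrix: F*(z) = F(1/z)^T."""
    return F(1/z).T

z = 0.7 + 0.3j                       # arbitrary sample point
assert np.allclose(par_conj(W, z) @ W(z), Phi(z))

# conversely, V W is again a spectral factor for any constant orthogonal V
t = 0.3
V = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
W1 = lambda s: V @ W(s)
assert np.allclose(par_conj(W1, z) @ W1(z), Phi(z))
```

Since $V$ is real orthogonal, $W_1^*(z)W_1(z)=W^*(z)V^\top V W(z)=W^*(z)W(z)$, which is exactly what the pointwise check confirms.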
By using the same standard procedure described in \cite[Thm.~2]{Youla-1961}, we arrive at \begin{align} \Phi(z)=C(z)D(z)F(z), \end{align} where $C(z)\in\mathbb{R}[z]^{n\times r}$, $F(z)\in\mathbb{R}[z]^{r\times n}$ are unimodular polynomial matrices and $D(z)\in\mathbb{R}(z)^{r\times r}$ is diagonal and canonic. \emph{Step 2.} According to Lemma \ref{lemma-review}, we can write $D(z)$ in the form \begin{equation}\label{eq:smith-mcmillan-phi} D(z)=\Sigma(z)\Lambda^*(z)\tilde{\Delta}(z)\Lambda(z), \end{equation} where: \begin{enumerate} \item $\Lambda(z)\in\mathbb{R}(z)^{r\times r}$ is diagonal, canonic and analytic together with $\Lambda^{-1}(z)$ in $\{\,z\in\mathbb{C}\,:\,|z|\geq 1 \,\}$ and, if $z=0$ is either a pole, a zero or both of $D(z)$, $\Lambda(z)$ possesses the same structural indices at $z=0$ as $D(z)$; \item $\tilde{\Delta}(z):=\Theta^*(z) \Theta(z)=\tilde{\Delta}^*(z)$, where $\Theta(z)\in\mathbb{R}(z)^{r\times r}$ is diagonal, canonic and analytic together with $\Theta^{-1}(z)$ in $\{\,z\in\mathbb{C}\,:\,|z|\neq 1 \,\}$; \item $\Sigma(z)\in\mathbb{R}(z)^{r\times r}$ is diagonal of the form \[ \Sigma(z)=\mathrm{diag}\left[e_1(z),e_2(z),\dots,e_r(z)\right], \] where $e_i(z)=\alpha_{i} z^{k_i},\, \alpha_{i}\in \mathbb{R}_0,\, k_i\in\mathbb{Z},\, i=1,\dots,r$. \end{enumerate} Let us define \[ A(z) := C(z)\Sigma(z)\Lambda^*(z),\quad B(z) := \Lambda(z)F(z). \] We have that $\Phi(z)=A(z)\tilde{\Delta}(z)B(z)$ is a left-standard factorization of $\Phi(z)$. {\emph{Step 3.}} Let $I(z):=B^{-R}(z)\Theta^{-1}(z)$.
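The multiplicative splitting of Step 2 acts entrywise on the diagonal of $D(z)$, so it can be sanity-checked numerically on scalar entries. The check below uses the two diagonal entries of $D(z)$ from the example of Section~\ref{sec:numerical-example}, together with the $\Lambda$, $\Theta$, $\Sigma$ reported there (that example uses the analogous decomposition (\ref{eq:Dz-dec-thm1})); for scalars, $f^*(z)=f(1/z)$. The sample points are arbitrary.

```python
# Scalar check of d(z) = sigma(z) * lam(1/z) * theta(1/z) * theta(z) * lam(z)
# on the diagonal entries of D(z) from the numerical example (Section 6).
cases = [
    # d(z) = 1/(z(z-2)(z-1/2)):  lam = 1/(z(z-1/2)), theta = 1, sigma = -1/(2z^2)
    (lambda z: 1/(z*(z - 2)*(z - 0.5)),
     lambda z: 1/(z*(z - 0.5)), lambda z: 1.0, lambda z: -1/(2*z**2)),
    # d(z) = z(z-1)^2:  lam = 1, theta = z-1, sigma = -z^2
    (lambda z: z*(z - 1)**2,
     lambda z: 1.0, lambda z: z - 1, lambda z: -z**2),
]
for d, lam, theta, sigma in cases:
    for z in (0.8 + 0.4j, -1.3 + 0.2j):
        lhs = d(z)
        rhs = sigma(z) * lam(1/z) * theta(1/z) * theta(z) * lam(z)
        assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs))
```

Note how the split respects the analyticity regions: each `lam` and its reciprocal have all poles and zeros inside the closed unit disk, each `theta` has them only on the unit circle, and each `sigma` is of the form $\alpha z^{k}$.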
By (\ref{eq:A(z)B(z)}), we have $A^*(z)=N(z)B(z)$ and, therefore, \begin{align}\label{eq:Mtilde(z)-dt} &I^*(z)\Phi(z) I(z)= I^*(z) \Phi^*(z) I(z)\nonumber \\ & = \Theta^{-*}(z)B^{-R*}(z) B^*(z)\tilde{\Delta}^*(z) N(z) B(z) B^{-R}(z)\Theta^{-1}(z)\nonumber\\ & = \Theta^{-*}(z) \Theta^*(z) \Theta(z) N(z) \Theta^{-1}(z)\nonumber \\ & = \Theta(z) N(z) \Theta^{-1}(z)=:\Psi(z), \end{align} where $N(z)=A^*(z)B^{-R}(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ is an L-unimodular matrix. By (\ref{eq:Mtilde(z)-dt}), $\Psi(z)$ is a para-Hermitian matrix positive semi-definite on the unit circle. Actually, a good deal more is true. We notice that $A(z)\tilde{\Delta}(z)B(z)$ and $B^*(z)\tilde{\Delta}(z) A^*(z)$ are two left-standard factorizations of $\Phi(z)$. Hence, by replacing $\Delta_1(z)$ with $\tilde \Delta(z)=\tilde \Delta^*(z)$ in (\ref{eq:M(z)Delta(z)N(z)}), we obtain \begin{align}\label{eq:Delta(z)N(z)Delta(z)} \tilde \Delta(z) N(z) \tilde \Delta^{-1}(z)= M(z), \end{align} where $M(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$ is L-unimodular. Since $\tilde \Delta(z)=\Theta^*(z) \Theta(z)$ is diagonal and \[\Theta(z):=\mathrm{diag}[\theta_1(z),\dots,\theta_r(z)]\] is canonic, (\ref{eq:Delta(z)N(z)Delta(z)}) implies that $[N(z)]_{ij}$ is divisible by the L-polynomial $[\tilde \Delta(z)]_{jj}/[\tilde \Delta(z)]_{ii}$, $j\geq i$. But \begin{align} [\tilde \Delta(z)]_{ii}&=\theta_i^*(z) \theta_i(z)=\theta_i(1/z) \theta_i(z)=\pm z^{k_{i}} \theta_i^2(z),\nonumber \end{align} where $k_i\in\mathbb{Z}$, $i=1,\dots,r$. Hence, $[N(z)]_{ij}$ must be divisible by the polynomial \[ f_{ij}^2(z):=\frac{\theta_j^2(z)}{\theta_i^2(z)}, \ \ \ j\geq i, \] and, a fortiori, by \[ f_{ij}(z)= \frac{\theta_j(z)}{\theta_i(z)}, \ \ \ j\geq i. \] This suffices to establish that $\Psi (z)$ is L-polynomial.
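For the matrix $\Psi(z)$ arising in the numerical example of Section~\ref{sec:numerical-example}, one can observe numerically that $\det \Psi(z)$ is the same nonzero real constant at every $z$, which is the hallmark of L-unimodularity combined with para-Hermitianity. A quick plain-Python check follows; the sample points are arbitrary, and the value $1/4$ matches $\det \Psi_5$ computed at the end of that example.

```python
import cmath

def Psi(z):
    """Psi(z) from Step 3 of the numerical example (Section 6)."""
    p11 = -0.5*z + 1.5 - 0.5/z
    p12 = -2.25*z**3 + 12.5*z**2 - 21.5*z + 10.75 + 0.5/z
    p21 = 0.5*z + 10.75 - 21.5/z + 12.5/z**2 - 2.25/z**3
    p22 = (2.25*z**3 + 42.625*z**2 - 218.375*z + 347.5
           - 218.375/z + 42.625/z**2 + 2.25/z**3)
    return [[p11, p12], [p21, p22]]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

# det Psi(z) is the same nonzero real constant everywhere (here 1/4)
for z in (0.6 + 0.2j, -1.7 + 0.9j, cmath.exp(1.0j)):
    assert abs(det2(Psi(z)) - 0.25) < 1e-8
```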
Actually, by (\ref{eq:Mtilde(z)-dt}), it follows that the determinant of $\Psi(z)$ is a nonzero real constant. Hence, $\Psi(z)$ is L-unimodular and positive definite on the unit circle. The problem is now reduced to that of finding a factorization of $\Psi(z)$ of the form \begin{align} \Psi(z)=P^*(z) P(z), \end{align} where $P(z)\in\mathbb{R}[z]^{r\times r}$ is a unimodular polynomial matrix. After this is achieved, the desired factorization of $\Phi(z)$ is obtained as $\Phi(z)=W^*(z) W(z)$ with \begin{align}\label{eq:H(z)-dt} W(z):&= P(z)\Theta(z)B(z)\nonumber \\ &= P(z)\Theta(z)\Lambda(z)F(z)\nonumber \\ &= P(z)D_+(z)F(z), \end{align} where we have defined $D_+(z):=\Theta(z)\Lambda(z)$. Indeed, by straightforward algebra, \begin{align} W^*(z) W(z) & = B^*(z) \Theta^*(z) P^*(z) P(z)\Theta(z)B(z)\nonumber \\ & = B^*(z) \tilde{\Delta}(z) N(z)B(z)\nonumber \\ & = B^*(z) \tilde{\Delta}(z) A^*(z)\nonumber \\ & = \Phi^*(z) = \Phi(z).\nonumber \end{align} \emph{Step 4.} We illustrate an algorithm which provides a factorization of a para-Hermitian L-unimodular matrix $\Psi(z)=\Psi^*(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$, positive definite on the unit circle, into the product $P^*(z) P(z)$, where $P(z)$ is a unimodular polynomial matrix. The algorithm consists of the following two steps. First of all, we define $\Psi_1(z):=\Psi(z)$ and denote by $h\in\mathbb{N}$ the loop counter of the algorithm, which is initially set to $h:=1$. \begin{enumerate} \item \label{item:thmsf-dt-step4I} Let $K_i\in\mathbb{Z}$, $i=1,\dots,r$, be the maximum-degree of the $i$-th column of $\Psi_h(z)$ and $k_i\in\mathbb{Z}$, $i=1,\dots,r$, be the minimum-degree of the $i$-th row of $\Psi_h(z)$. Consider the {highest-column-degree coefficient matrix} of $\Psi_h(z)$, denoted by $\Psi_h^\mathrm{hc}$, and the lowest-row-degree coefficient matrix of $\Psi_h(z)$, denoted by $\Psi_h^\mathrm{lr}$.
As noticed in the proof of Lemma \ref{lemma7}, the positive definiteness of $\Psi_h(z)$ on the unit circle implies that $K_i\geq 0$ for all $i= 1,\dots,r$. Moreover, the para-Hermitianity of $\Psi_h(z)$ implies that $\Psi_h^\mathrm{hc}=(\Psi_h^\mathrm{lr})^\top$ which, in turn, yields $K_i=-k_i$ for all $i=1,\dots,r$. By Lemma \ref{lemma7}, it follows that $\Psi_h^{\mathrm{hc}}$ is nonsingular if and only if $\Psi_h(z)$ is a constant matrix. If $\Psi_h(z)$ is a constant matrix, we set $\bar{h}:=h$ and go to step \ref{item:thmsf-dt-step4II}). If this is not the case, we calculate a nonzero vector $\mathbf{v}_h:=[v_1 \ v_2 \ \dots \ v_r]^\top\in\mathbb{R}^r$ such that $\Psi_h^\mathrm{hc}\mathbf{v}_h=\mathbf{0}$. Let us define the \emph{active index set} \[ \mathcal{I}_h:=\{\,i \, :\, v_i\neq 0\,\} \] and the \emph{highest maximum-degree active index set}, $\mathcal{M}_h\subset \mathcal{I}_h$, \[ \mathcal{M}_h:=\{\,i\in\mathcal{I}_h \, :\, \ K_i\geq K_j, \ \forall\, j\in\mathcal{I}_h\,\}. \] We pick an index $p\in\mathcal{M}_h$. Then, we define the polynomial matrix \begin{align} &\qquad \qquad \quad \quad \quad \quad \ \ \ \ {\scriptsize \text{column $p$}} \nonumber \\ \Omega_h^{-1}(z)&:=\left[\begin{smallarray}{ccccccc} \ 1 \ & \cdots & 0 & \frac{v_1}{v_p}z^{K_p-K_1} & 0 & \cdots & 0 \\ 0 &\ \ddots \ & & \vdots & & & 0 \\ \vdots & &\ 1 \ & \frac{v_{p-1}}{v_p}z^{K_p-K_{p-1}} & & & \vdots \\ \vdots & & &\ 1\ & & & \vdots \\ \vdots & & & \frac{v_{p+1}}{v_p}z^{K_p-K_{p+1}} &\ 1 \ & & \vdots \\ 0 & & & \vdots & &\ \ddots \ & 0 \\ 0 & \cdots & 0 & \frac{v_r}{v_p}z^{K_p-K_r} & 0 & \cdots & \ 1 \ \end{smallarray}\right].\nonumber\\ \label{eq:matrix-reduction-dt} \end{align} Notice that the entry at $(i,p)$ of $\Omega_h^{-1}(z)$ has the form \begin{align}\label{eq:alpha-delta} \frac{v_i}{v_p}z^{K_p-K_i}=\alpha_iz^{\delta_i} , \ \ \ i=1,\dots,r, \end{align} with $\alpha_i:={v_i}/{v_p}\in\mathbb{R}$ and $\delta_i:=K_p-K_i\geq 0$.
In fact, if $K_i>K_p$, then $v_i=0$ and so $\alpha_i=0$. By (\ref{eq:matrix-reduction-dt}), $\det \Omega_h^{-1}(z)=1$ and, therefore, $\Omega_h^{-1}(z)\in\mathbb{R}[z]^{r\times r}$ is a unimodular polynomial matrix. By operating the transformation \[ \Psi_{h+1}(z):=\Omega_h^{-*}(z) \Psi_h(z) \Omega_h^{-1}(z), \] we obtain a new positive definite matrix $\Psi_{h+1}(z)$ with the same determinant as $\Psi_h(z)$. Furthermore, the maximum-degree of the $p$-th column of $\Psi_{h+1}(z)$ is lower than $K_p$, while the maximum-degree of the $i$-th column, $i\neq p$, is not greater than $K_i$. This fact needs a detailed explanation. If we post-multiply $\Psi_h(z)$ by $\Omega_h^{-1}(z)$, we obtain a matrix of the form \begin{align} &\Psi_h'(z):=\Psi_h(z)\Omega_h^{-1}(z)\nonumber\\ &=\left[\begin{array}{c|c|c} [\Psi_h(z)]_{1:r,1:p-1} & \boldsymbol{\psi}_{h}(z) & [\Psi_h(z)]_{1:r,p+1:r} \\ \end{array}\right],\nonumber \end{align} where all the L-polynomials in the $p$-th column vector \begin{align}\label{eq:psi-vector} \boldsymbol{\psi}_h(z)=[\Psi_h(z)]_{1:r,p:p}+ \sum_{i\neq p} \alpha_iz^{\delta_i} [\Psi_h(z)]_{1:r,i:i} \end{align} have maximum-degree lower than $K_p$, since $\Psi_h^\mathrm{hc}\mathbf{v}_h=\mathbf{0}$, and minimum-degree which satisfies \begin{align}\label{eq:min-deg-ki} \min\,\deg\, [\boldsymbol{\psi}_h(z)]_i\geq k_i=-K_i, \ \ \ i=1,\dots,r, \end{align} since in (\ref{eq:psi-vector}) $\delta_i\geq 0$ for all $i$ such that $\alpha_i\neq 0$.
Now, by pre-multiplying $\Psi_h'(z)$ by $\Omega_h^{-*}(z)$, the resulting matrix $\Psi_{h+1}(z)$ can be written in the form \begin{align}\label{eq:psihc} &\Psi_{h+1}(z)=\Omega_h^{-*}(z) \Psi_h(z) \Omega_h^{-1}(z)\nonumber\\ &=\left[\begin{smallarray}{c|c|c} [\Psi_h(z)]_{1:p-1,1:p-1} & \boldsymbol{\psi}_{h+1}'(z) & [\Psi_h(z)]_{1:p-1,p+1:r} \\ \hline \boldsymbol{\psi}'^\top_{h+1}(z^{-1}) & \psi_{h+1}''(z) & \boldsymbol{\psi}'''^\top_{h+1}(z^{-1}) \\ \hline [\Psi_h(z)]_{p+1:r,1:p-1} & \boldsymbol{\psi}_{h+1}'''(z) & [\Psi_h(z)]_{p+1:r,p+1:r}\end{smallarray}\right],\nonumber \end{align} where the $p$-th column vector \[\left[\begin{array}{c|c|c}\boldsymbol{\psi}'^\top_{h+1}(z) & \psi_{h+1}''(z) & \boldsymbol{\psi}'''^\top_{h+1}(z) \end{array}\right]^\top\] differs from $\boldsymbol{\psi}_h(z)$ only in the value of the $p$-th entry $\psi_{h+1}''(z)$. Moreover, the maximum-degree of $\psi_{h+1}''(z)$ cannot increase after the operation is performed, since \[ \psi_{h+1}''(z)=[\boldsymbol{\psi}_h(z)]_p +\sum_{i\neq p}\alpha_i z^{-\delta_i}[\boldsymbol{\psi}_h(z)]_i, \] and, by (\ref{eq:alpha-delta}), $\delta_i\geq 0$ for all $i$ such that $\alpha_i\neq 0$. We conclude that all the L-polynomials in the $p$-th column of $\Psi_{h+1}(z)$ have maximum-degree lower than $K_p$, while, by (\ref{eq:min-deg-ki}), the maximum-degree of all the other columns does not increase. We notice also that, since $\Psi_{h+1}(z)=\Psi_{h+1}^*(z)$, all the L-polynomials in the $p$-th row of $\Psi_{h+1}(z)$ have minimum-degree greater than $k_p=-K_p$, while the minimum-degree of all the other rows does not decrease. Eventually, we update the value of the loop counter $h$ by setting $h:=h+1$ and return to step \ref{item:thmsf-dt-step4I}).
\item \label{item:thmsf-dt-step4II} Since $\Psi_{\bar{h}}\in\mathbb{R}^{r\times r}$ is positive definite, we can always factorize it into the product $\Psi_{\bar{h}}=C^\top C$, where $C\in\mathbb{R}^{r\times r}$, by using standard techniques such as the Cholesky decomposition (see, e.g., \cite[Ch.~4]{Golub-1996}). Finally, we have constructed a unimodular polynomial matrix \[ P(z)=C \Omega_{\bar{h}-1}(z)\Omega_{\bar{h}-2}(z)\cdots \Omega_1(z) \] such that $\Psi(z)=P^*(z) P(z)$. \end{enumerate} It is worth noticing that the iterative procedure of step \ref{item:thmsf-dt-step4I}) always terminates (after a maximum of $K_1+\dots+K_r$ iterations), since at the $h$-th iteration the maximum-degree of a column of $\Psi_h(z)$ is reduced at least by one, while the maximum-degree of all the other columns does not increase. To complete the proof of statements \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}), we notice that, by construction, the rational matrix $W(z)$, as defined in (\ref{eq:H(z)-dt}), and its right inverse are analytic in $\{\,z\in\mathbb{C}\,:\,|z|>1 \,\}$. Moreover, we recall that, if $z=0$ is either a pole, a zero or both of $D(z)$, then $D_+(z)$ and $D(z)$ have the same zero-pole structure at $z=0$. Now, suppose, by contradiction, that $W(z)$ has a pole at $z=\infty$. Then $W^*(z)$ has a pole at $z=0$. But, since $\Phi(z)=W^*(z)W(z)$, it follows that \begin{align}\label{eq:Wstar-infty} W^*(z)&= \Phi(z)W^{-R}(z) \nonumber\\ &=C(z)D(z)F(z)F^{-R}(z)D_+^{-1}(z)P^{-1}(z) \nonumber\\ &=C(z)D(z)D_+^{-1}(z) P^{-1}(z)\nonumber\\ &=C(z)D_-(z)P^{-1}(z), \end{align} where $D_{-}(z):= D(z)D_+^{-1}(z)$ has no pole at $z=0$. Since $P^{-1}(z)$ and $C(z)$ are unimodular matrices, in view of (\ref{eq:Wstar-infty}), $W^*(z)$ has no pole at $z=0$ either. This yields a contradiction, so we conclude that $W(z)$ has no pole at infinity. Finally, by following a similar argument, it can be verified that $W^{-R}(z)$ has no pole at infinity either.
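The two-step procedure of Step 4 lends itself to a direct implementation. The following Python sketch represents a Laurent-polynomial matrix as a dictionary mapping exponents to coefficient matrices, and runs the reduction on the matrix $\Psi(z)$ of the numerical example in Section~\ref{sec:numerical-example}. The SVD-based kernel computation, the tolerance, and the tie-breaking rule for picking $p\in\mathcal{M}_h$ are implementation choices not prescribed by the algorithm, so the resulting constant matrix need not coincide with the $\Psi_5$ of the example; the determinant and positive definiteness, however, are invariant.

```python
import numpy as np

TOL = 1e-8

def lstar(A):
    """Para-conjugate A*(z) = A(1/z)^T of a Laurent matrix {exp: coeff}."""
    return {-k: M.T.copy() for k, M in A.items()}

def lmul(A, B):
    """Product of two Laurent-polynomial matrices (coefficient convolution)."""
    C = {}
    for ka, Ma in A.items():
        for kb, Mb in B.items():
            C[ka + kb] = C.get(ka + kb, 0) + Ma @ Mb
    return {k: M for k, M in C.items() if np.abs(M).max() > TOL}

def col_max_degrees(A, r):
    """K_j: maximum-degree of the j-th column."""
    return [max(k for k, M in A.items() if np.abs(M[:, j]).max() > TOL)
            for j in range(r)]

def hc(A, r):
    """Highest-column-degree coefficient matrix A^hc."""
    K = col_max_degrees(A, r)
    H = np.zeros((r, r))
    for j in range(r):
        H[:, j] = A[K[j]][:, j]
    return H

def reduce_to_constant(Psi, r, max_iter=100):
    """Step 4.I: reduce Psi(z) (para-Hermitian, L-unimodular, positive
    definite on the circle) to a constant matrix via Psi <- Om^-* Psi Om^-1."""
    for _ in range(max_iter):
        H = hc(Psi, r)
        if abs(np.linalg.det(H)) > TOL:      # nonsingular <=> Psi constant
            return Psi[0]
        v = np.linalg.svd(H)[2][-1]          # kernel vector of Psi^hc
        K = col_max_degrees(Psi, r)
        active = [i for i in range(r) if abs(v[i]) > TOL]
        p = max(active, key=lambda i: K[i])  # an index in M_h
        Om_inv = {0: np.eye(r)}
        for i in active:                     # build column p of Omega^{-1}
            if i != p:
                E = np.zeros((r, r))
                E[i, p] = v[i] / v[p]        # alpha_i
                d = K[p] - K[i]              # delta_i >= 0
                Om_inv[d] = Om_inv.get(d, np.zeros((r, r))) + E
        Psi = lmul(lstar(Om_inv), lmul(Psi, Om_inv))
    raise RuntimeError("reduction did not terminate")

# Psi(z) from Step 3 of the numerical example (Section 6)
Psi0 = {
    3: np.array([[0.0, -2.25], [0.0, 2.25]]),
    2: np.array([[0.0, 12.5], [0.0, 42.625]]),
    1: np.array([[-0.5, -21.5], [0.5, -218.375]]),
    0: np.array([[1.5, 10.75], [10.75, 347.5]]),
    -1: np.array([[-0.5, 0.5], [-21.5, -218.375]]),
    -2: np.array([[0.0, 0.0], [12.5, 42.625]]),
    -3: np.array([[0.0, 0.0], [-2.25, 2.25]]),
}
const = reduce_to_constant(Psi0, 2)
# the transformations preserve det Psi (= 1/4 here)
assert abs(np.linalg.det(const) - 0.25) < 1e-6
C = np.linalg.cholesky((const + const.T) / 2).T   # step 4.II: const = C^T C
```

Note that `np.linalg.cholesky` returns a lower-triangular $L$ with $\Psi_{\bar h}=LL^\top$, so $C:=L^\top$ gives the factorization $\Psi_{\bar h}=C^\top C$ required in step \ref{item:thmsf-dt-step4II}).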
Now consider statement \ref{item:thmsf-dt(v)}). If $\Phi(z)$ is analytic on the unit circle, then $\Theta(z)$ does not possess any finite pole. This, in turn, implies that $D_+(z)=\Theta(z)\Lambda(z)$ is analytic in $\{\, z\in\overline{\mathbb{C}}\,:\,|z|\geq 1 \,\}$. Thus, $W(z)$, as defined in (\ref{eq:H(z)-dt}), is also analytic in the same region. As for point \ref{item:thmsf-dt(vi)}), the additional assumption that the rank of $\Phi(z)$ is constant on the unit circle implies that $\Theta(z)$ does not possess any finite zero. Thus, $\Theta(z)=I_r$ and, by (\ref{eq:H(z)-dt}), \[ W^{-R}(z)=F^{-R}(z) \Lambda^{-1}(z) P^{-1}(z) \] is analytic in $\{\, z\in\overline{\mathbb{C}}\,:\,|z|\geq 1 \,\}$. Hence, $W(z)$ and its right inverse $W^{-R}(z)$ are both analytic in $\{\, z\in\overline{\mathbb{C}}\,:\,|z|\geq 1 \,\}$. Lastly, consider point \ref{item:thmsf-dt(vii)}). As shown in (\ref{eq:smith-mcmillan-phi}), the Smith-McMillan canonical form of $\Phi(z)$, $D(z)$, is connected to that of $W(z)$, $D_+(z)=\Theta(z)\Lambda(z)$, by \begin{align}\label{eq:D(z)-plus-dt} D(z)=\Sigma(z) D_+^*(z)D_+(z), \end{align} where $\Sigma(z)\in\mathbb{R}(z)^{r\times r}$ is a diagonal matrix with elements $ \alpha_{i} z^{k_i}$, $\alpha_{i}\in\mathbb{R}_0$, $k_i\in\mathbb{Z}$, on its diagonal. Let $p_1,\dots,p_h$ be the nonzero finite poles of $\Phi(z)$. By (\ref{eq:D(z)-plus-dt}), it follows that \begin{equation}\label{MMd=Sod} \delta(\Phi;p_i)=\begin{cases} \delta(W;p_i) & \text{if } | p_i|<1, \\ 2\delta(W;p_i) & \text{if } |p_i|=1,\\ \delta(W;1/p_i) & \text{if } |p_i|>1. \end{cases} \end{equation} Moreover, if $p\in\overline{\mathbb{C}}$ is a pole of $\Phi(z)$ of degree $\delta(\Phi;p)$, then $1/p$ is also a pole of $\Phi(z)$ of the same degree, and if $p\in\overline{\mathbb{C}}$ is not a pole of $\Phi(z)$, then neither $p$ nor $1/p$ is a pole of $W(z)$.
Thus, we have \begin{align}\label{eq:delta-pii-z} \sum_{i=1}^h\delta(\Phi;p_i)&=\sum_{\substack{i\, :\, |p_i|<1}} \delta(W;p_i)\, + \sum_{\substack{i\, :\, |p_i|>1}} \delta(W;1/p_i)\ + \nonumber\\ &\hspace{2.85cm} +\sum_{\substack{i\, :\, |p_i|=1}} 2\delta(W;p_i)\nonumber \\ &= 2\sum_{\substack{i\, :\, |p_i|\leq 1}} \delta(W;p_i). \end{align} By (\ref{eq:mcmillan-degree}), the McMillan degree of a rational matrix equals the sum of the degrees of all its poles, including the pole at infinity. If $\Phi(z)$ has no pole at infinity, then (\ref{eq:delta-pii-z}) directly yields $\delta_M(\Phi)=2\delta_M(W)$. Otherwise, assume that $\Phi(z)$ has a pole at infinity. Since $W(z)$ and $\Phi(z)$ have the same structural indices at $z=0$ and $W(z)$ has no pole at $z=\infty$, it follows that \begin{align}\label{eq:delta-zero} \delta(\Phi;\infty)=\delta(\Phi;0)=\delta(W;0)\ \ \ \ \text{and} \ \ \ \ \delta(W;\infty)=0. \end{align} Therefore, by equations (\ref{eq:delta-pii-z}) and (\ref{eq:delta-zero}), \begin{align} \delta_M(\Phi)&=\sum_{i=1}^h\delta(\Phi;p_i)+\delta(\Phi;0)+\delta(\Phi;\infty)\nonumber\\ &=2\sum_{\substack{i\, :\, |p_i|\leq 1}} \delta(W;p_i) +2\delta(W;0)=2\delta_M(W).\nonumber \end{align} \hspace*{\fill}~\QED\par\endtrivlist\unskip We are now ready to prove our main Theorem \ref{thmsf-dt-g}. Many of the ideas for this proof can be adapted from the proof of Theorem \ref{thmsf-dt}. {\bf \em Proof of Theorem \ref{thmsf-dt-g}:} { We first show how to modify the constructive procedure used in the proof of Theorem \ref{thmsf-dt} in order to obtain a spectral factor $W(z)$ which satisfies points 1) and 2).
With reference to step 2 in the proof of Theorem \ref{thmsf-dt}, we rearrange the Smith-McMillan form of $\Phi(z)$ as \begin{align}\label{eq:Dz-dec-thm1} D(z)=\Sigma(z) \Lambda^*(z)\tilde{\Delta}(z) \Lambda(z), \end{align} where the only difference with respect to the decomposition in (\ref{eq:smith-mcmillan-phi}) is that here $\Lambda(z)\in\mathbb{R}(z)^{r\times r}$ is diagonal, canonic and analytic in $\mathscr{A}_p\setminus\{\infty\}$ with $\Lambda^{-1}(z)$ analytic in $\mathscr{A}_z\setminus\{\infty\}$. Moreover, if $0\not\in \mathscr{A}_p$ and $z=0$ is a pole of $D(z)$, then $\Lambda(z)$ has the same negative structural indices at $z=0$ as $\Phi(z)$, and if $0\not\in \mathscr{A}_z$ and $z=0$ is a zero of $D(z)$, then $\Lambda(z)$ has the same positive structural indices at $z=0$ as $\Phi(z)$. Now, to apply the procedure described in the proof of Theorem \ref{thmsf-dt}, it suffices to prove that, for any choice of the unmixed-symplectic sets $\mathscr{A}_p$ and $\mathscr{A}_z$, the para-Hermitian matrix $\Psi(z)$, as defined in (\ref{eq:Mtilde(z)-dt}), is still L-unimodular. With reference to the notation introduced in the proof of Theorem \ref{thmsf-dt}, $\Psi(z)$ can be written as \begin{align}\label{eq:Xi(z)} \Psi(z)&= \Theta(z) N(z) \Theta^{-1}(z)\nonumber\\ &=\Theta(z)A^*(z)B^{-R}(z) \Theta^{-1}(z)\nonumber\\ &=\Theta(z)\Lambda(z)\Sigma^*(z)C^*(z)F^{-R}(z)\Lambda^{-1}(z) \Theta^{-1}(z)\nonumber\\ &=\Sigma^*(z)D_+(z)\Xi(z)D_+^{-1}(z), \end{align} where we have defined $\Xi(z):=C^*(z)F^{-R}(z)\in\mathbb{R}[z,z^{-1}]^{r\times r}$, which is L-unimodular and whose structure does not depend upon the choice of $\mathscr{A}_p$ and $\mathscr{A}_z$. Moreover, in this case, $D_+(z)=\Theta(z)\Lambda(z)$ is diagonal, canonic and analytic in $\mathscr{A}_p\setminus\{\infty\}$ with inverse analytic in $\mathscr{A}_z\setminus\{\infty\}$.
Let us first consider the standard choice $\mathscr{A}_p=\mathscr{A}_z=\{\, z\in\overline{\mathbb{C}}\,:\,|z|>1 \,\}$. In the proof of Theorem \ref{thmsf-dt}, we have shown that $\Psi(z)$ is L-unimodular. Since $D_+(z)$ is diagonal and canonic and $\Sigma^*(z)$ is L-unimodular, by (\ref{eq:Xi(z)}), it follows that $[\Xi(z)]_{ij}\in\mathbb{R}[z,z^{-1}]$ must be divisible (the concept of divisibility here is the one associated with the ring of L-polynomials) by the polynomial \[ p_{ij}(z):=\frac{[D_+(z)]_{jj}}{[D_+(z)]_{ii}}, \ \ \ j\geq i. \] On the other hand, let us consider the opposite choice $\mathscr{A}_p=\mathscr{A}_z=\{\, z\in{\mathbb{C}}\,:\,|z|<1 \,\}$. By using the right-standard counterpart of Lemma \ref{lemma4} and by following verbatim the argument used in step 3 of the proof of Theorem \ref{thmsf-dt}, it can be proven that $\Psi(z)$ is still L-unimodular. Hence, by (\ref{eq:Xi(z)}), $[\Xi(z)]_{ij}$ must also be divisible by the L-polynomial $p_{ij}(z^{-1})$, $j\geq i$. Therefore, $[\Xi(z)]_{ij}$ must be divisible by the L-polynomial \[ q_{ij}(z):= p_{ij}(z) p_{ij}(z^{-1}), \ \ \ j\geq i. \] Since, for any choice of the unmixed-symplectic sets $\mathscr{A}_p$ and $\mathscr{A}_z$, the factors of $[D_+(z)]_{jj}[D_+(z)]_{ii}^{-1}$, $j\geq i$, are contained in those of $q_{ij}(z)$, $[\Xi(z)]_{ij}$ must be divisible by the polynomial $[D_+(z)]_{jj}[D_+(z)]_{ii}^{-1}$, $j\geq i$, for any choice of $\mathscr{A}_p$ and $\mathscr{A}_z$. We conclude that $\Psi(z)$ must be an L-polynomial matrix for any choice of $\mathscr{A}_p$ and $\mathscr{A}_z$. But, since $\Psi(z)$ is para-Hermitian, $\det \Psi(z)$ is a real constant; hence, $\Psi(z)$ is L-unimodular. To prove point \ref{item:thmsf-dt-g(3)}), we need to show that the McMillan degree of the spectral factor $W(z)$ just obtained equals one half of the McMillan degree of $\Phi(z)$. To this aim, we can follow the same lines as the proof of point \ref{item:thmsf-dt(vii)}) of Theorem \ref{thmsf-dt}.
In fact, we can define $\mathscr{A}_{p,1}:=\mathscr{A}_p\setminus \left(\{\,z\in\mathbb{C}\,:\, |z|=1\,\}\cup\{0,\infty\}\right)$ and partition $\mathbb{C}_0$ as $$ \mathbb{C}_0=\{\,z\in\mathbb{C}\,:\, 1/z\in\mathscr{A}_{p,1}\,\}\cup \{\,z\in\mathbb{C}\,:\, |z|=1\,\}\cup \mathscr{A}_{p,1} $$ and replace equation (\ref{MMd=Sod}) with the more general expression for the degree of the pole $p_i$ of $\Phi(z)$ \begin{equation}\label{MMd=Sod-gen} \delta(\Phi;p_i)=\begin{cases} \delta(W;p_i) & \text{if } 1/p_i\in \mathscr{A}_{p,1}, \\ 2\delta(W;p_i) & \text{if } |p_i|=1,\\ \delta(W;1/p_i) & \text{if } p_i\in \mathscr{A}_{p,1}. \end{cases}\nonumber \end{equation} The rest of the proof remains the same. The proof of point \ref{item:thmsf-dt-g(4)}) is very similar to that of point \ref{item:thmsf-dt(iii)}) of Theorem \ref{thmsf-dt}. The only difference is that the para-unitary matrix function $V(z):=W_1(z)W^{-R}(z)$ and its inverse are not analytic in $\{\, z\in\overline{\mathbb{C}} \,:\, |z|> 1\,\}$, but they are analytic in $\mathscr{A}_p$, so that Lemma \ref{lemma2} still applies. As for point \ref{item:thmsf-dt-g(6)}), we define $V(z):=L(z)W^{-R}(z)$, which is clearly para-unitary and analytic in $\mathscr{A}_z$, and the same computation that led to (\ref{w1=vw}) gives $L(z)=V(z)W(z)$. On the other hand, if $V(z)$ is para-unitary and analytic in $\mathscr{A}_p$, then it is immediate to check that $L(z):=V(z)W(z)$ is a spectral factor of $\Phi(z)$ and is analytic in $\mathscr{A}_p$ as well. The proof of points \ref{item:thmsf-dt-g(7)}) and \ref{item:thmsf-dt-g(8)}) is exactly the same as that of points \ref{item:thmsf-dt(v)}) and \ref{item:thmsf-dt(vi)}) of Theorem \ref{thmsf-dt}. \hspace*{\fill}~\QED\par\endtrivlist\unskip} \subsection{Corollaries} To conclude this section, we present two straightforward corollaries of Theorem \ref{thmsf-dt}.
The first is a complete parametrization of the set of all spectral factors of a given spectrum. \begin{corollary} Let $\Phi(z)$ be a given spectrum and $W(z)$ be any spectral factor satisfying conditions \ref{item:thmsf-dt(i)}) and \ref{item:thmsf-dt(ii)}) of Theorem \ref{thmsf-dt}. Let $L(z)\in\mathbb{R}(z)^{m\times n}$. Then, $\Phi(z)=L^*(z) L(z)$ if and only if \[ L(z)=V(z)\left[\begin{array}{c} I_r \\ \hline \mathbf{0}_{m-r, r} \end{array}\right] W(z), \] where $V(z)\in\mathbb{R}(z)^{m\times m}$ is an arbitrary para-unitary matrix and $r=\mathrm{rk}(\Phi)$. \end{corollary} \begin{proof} By repeating an argument used in points \ref{item:thmsf-dt(iii)}) and \ref{item:thmsf-dt(iv)}) of Theorem \ref{thmsf-dt}, we have that $L(z)=U(z)W(z)$, with $U(z)\in\mathbb{R}(z)^{m\times r}$ a rational matrix satisfying $U^*(z)U(z)=I_r$. If we choose $V(z)\in\mathbb{R}(z)^{m\times m}$ to be any para-unitary matrix with $U(z)$ incorporated into its first $r$ columns, i.e., \[ U(z)=V(z)\left[\begin{array}{c} I_r \\ \hline \mathbf{0}_{m-r, r} \end{array}\right], \] we conclude. \end{proof} The next result characterizes the spectral factors of L-polynomial spectra. \begin{corollary} Let $\Phi(z)$ be a spectrum and $W(z)$ be the spectral factor provided in the (constructive) proof of Theorem \ref{thmsf-dt-g}. Assume that $\Phi(z)$ is L-polynomial. If $\infty\in\mathscr{A}_p$, then $W(z)$ is polynomial in $z^{-1}$ (so that $W^*(z)$ is polynomial in $z$). Otherwise, $0\in\mathscr{A}_p$ and $W(z)$ is polynomial in $z$ (so that $W^*(z)$ is polynomial in $z^{-1}$). \end{corollary} \begin{proof} We consider only the case $\infty\in\mathscr{A}_p$, the other being similar. If $\Phi(z)$ is L-polynomial, then the only finite pole it may possess is located at $z=0$. Since $W(z)$ does not have a pole at infinity, $W(z)$ must be polynomial in $z^{-1}$.
The latter fact, in turn, implies that $W^*(z)$ must be a polynomial matrix. \end{proof} \section{A numerical example}\label{sec:numerical-example} In this section, we show an application of the algorithm used in the constructive proof of Theorem \ref{thmsf-dt-g} to stochastic realization. To this aim, let us consider a purely non-deterministic, second-order process $\{y(t)\}_{t\in\mathbb{Z}}$ whose spectral density is \[ \Phi(z) =\begin{bmatrix} \frac{-2z+6-2z^{-1}}{-2z +5 -2z^{-1}} & z-1 & z-1 \\ z^{-1} -1 & -z +2 -z^{-1} &-z +2 -z^{-1}\\ z^{-1} -1 & -z +2 -z^{-1} &-z +2 -z^{-1} \end{bmatrix}. \] We want to compute a stochastically minimal, anti-causal realization of $\{y(t)\}_{t\in\mathbb{Z}}$ having all its zeros in the (closed) unit disk. Since our method has been developed to compute a spectral factorization of the form $\Phi(z)=W^\top (z^{-1}) W(z)$, this requirement corresponds to the choice $\mathscr{A}_z:=\{\,z\in\mathbb{C}\,:\, |z|<1\,\}$ and $\mathscr{A}_p:=\{\,z\in\mathbb{C}\,:\, |z|>1\,\}$. Notice that $\Phi(z)$ is non-proper, features a zero on the unit circle and is rank deficient, namely $\mathrm{rk}(\Phi)=2$. We now apply step-by-step the proposed factorization algorithm in order to compute a spectral factor $W(z)\in\mathbb{R}(z)^{2\times 3}$ analytic in $\mathscr{A}_p$ with right inverse analytic in $\mathscr{A}_z$. {\em Step 1.} The Smith-McMillan canonical form of $\Phi(z)$ is given by \[ D(z)=\operatorname{diag}\left[ \frac{1}{z(z-2)(z-\frac{1}{2})}, z(z-1)^2\right], \] so that $\Phi(z)$ can be decomposed as \[ \Phi(z)=C(z)D(z)F(z), \] where $C(z)\in\mathbb{R}[z]^{3\times 2}$ and $F(z)\in\mathbb{R}[z]^{2\times 3}$ are unimodular matrices.
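Before running the algorithm, it is a useful sanity check that $\Phi(z)$ really is a spectral density of rank 2: on the unit circle it must be Hermitian and positive semi-definite with exactly one zero eigenvalue. A short numerical confirmation (the evaluation points are arbitrary, avoiding the zero at $z=1$):

```python
import numpy as np

def Phi(z):
    """Spectral density Phi(z) of the example."""
    a = (-2*z + 6 - 2/z) / (-2*z + 5 - 2/z)
    b, c, d = z - 1, 1/z - 1, -z + 2 - 1/z
    return np.array([[a, b, b], [c, d, d], [c, d, d]])

for theta in (0.9, 2.1):                       # arbitrary points, z != 1
    M = Phi(np.exp(1j * theta))
    assert np.allclose(M, M.conj().T)          # Hermitian on |z| = 1
    eigs = np.sort(np.linalg.eigvalsh(M))
    assert abs(eigs[0]) < 1e-8                 # one zero eigenvalue ...
    assert eigs[1] > 1e-6                      # ... and rank exactly 2
```

The zero eigenvalue reflects the two identical rows of $\Phi(z)$, consistent with $\mathrm{rk}(\Phi)=2$.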
{\em Step 2.} The matrices $\Lambda(z)$, $\Theta(z)$ and $\Sigma(z)$ defined in (\ref{eq:Dz-dec-thm1}) have the form \begin{align*} &\Lambda(z)=\operatorname{diag}\left[ \frac{1}{z\left(z-\frac{1}{2}\right)}, 1 \right], \quad \Theta(z)=\operatorname{diag}\left[1, z-1\right],\\ &\Sigma(z)=\operatorname{diag}\left[-\frac{1}{2 z^2}, -z^2\right]. \end{align*} Note that $\Lambda(z)$ is analytic in $\mathscr{A}_p\setminus\{\infty\}$ with inverse analytic in $\mathscr{A}_z$. Let $A(z) = C(z)\Sigma(z)\Lambda^*(z)$, $B(z) = \Lambda(z)F(z)$. {\em Step 3.} The matrix $\Psi(z)=\Theta(z)N(z)\Theta^{-1}(z)$, with $N(z)=A^*(z)B^{-R}(z)$, is given by \begin{align*} &\Psi(z) = \Theta(z)N(z)\Theta^{-1}(z)\\ &=\left[ \begin{smallmatrix} -\frac{1}{2}z+\frac{3}{2}-\frac{1}{2}z^{-1} & -\frac{9}{4}z^3+\frac{25}{2}z^2-\frac{43}{2}z+\frac{43}{4}+\frac{1}{2}z^{-1} \\ \frac{1}{2}z+\frac{43}{4}-\frac{43}{2}z^{-1}+\frac{25}{2}z^{-2} -\frac{9}{4}z^{-3}& \psi_{22}(z) \end{smallmatrix} \right], \end{align*} where $\psi_{22}(z):=\frac{9}{4} z^3+\frac{341}{8} z^2-\frac{1747}{8} z+\frac{2780}{8}-\frac{1747}{8} z^{-1}+\frac{341}{8}z^{-2} +\frac{9}{4}z^{-3}$. It is worth noting that $\Psi(z)$ is para-Hermitian, L-unimodular and positive definite on the unit circle. {\em Step 4.} Let $\Psi_1(z):=\Psi(z)$. The highest-column-degree coefficient matrix of $\Psi_1(z)$ is \[ \Psi_1^{\mathrm{hc}}= \left[ \begin{array}{cc} -\frac{1}{2} & -\frac{9}{4} \\ \frac{1}{2} & \frac{9}{4} \\ \end{array} \right]. \] Since $\Psi_1^{\mathrm{hc}}$ is singular, we calculate a nonzero vector $\mathbf{v}_1\in\ker \Psi_1^{\mathrm{hc}}$. One such vector is, for instance, $\mathbf{v}_1 = [9\ -2]^\top$.
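The singularity of $\Psi_1^{\mathrm{hc}}$ and the stated kernel vector are immediate to confirm numerically:

```python
import numpy as np

Psi1_hc = np.array([[-0.5, -2.25],
                    [ 0.5,  2.25]])
v1 = np.array([9.0, -2.0])
assert abs(np.linalg.det(Psi1_hc)) < 1e-12   # Psi_1^hc is singular
assert np.allclose(Psi1_hc @ v1, 0)          # and v1 lies in its kernel
```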
Since the highest maximum-degree active index set is $\mathcal{M}_1=\{2\}$, we construct the unimodular matrix $\Omega_1^{-1}(z)$ of the form (\ref{eq:matrix-reduction-dt}), \[ \Omega_1^{-1}(z) = \left[ \begin{array}{cc} 1 & -\frac{9 }{2}z^2 \\ 0 & 1 \\ \end{array} \right], \] in order to reduce the maximum-degree of the second column of $\Psi_1(z)$: \begin{align*} &\Psi_2(z) = \Omega_1^{-*}(z)\Psi_1(z)\Omega_1^{-1}(z)\\ &=\left[ \begin{smallmatrix} -\frac{1}{2}z +\frac{3}{2}-\frac{1}{2}z^{-1} & \frac{23}{4} z^2-\frac{77}{4} z+\frac{43}{4}+\frac{1}{2}z^{-1} \\ \frac{1}{2} z +\frac{43}{4}-\frac{77}{4} z^{-1}+\frac{23}{4} z^{-2} & -\frac{23}{4} z^2-\frac{973}{4} z+\frac{2123}{4} -\frac{973}{4} z^{-1}-\frac{23}{4}z^{-2} \\ \end{smallmatrix}\right]. \end{align*} Since $\Psi_2^{\mathrm{hc}}$ is singular, we repeat the previous step. In this case, we have $\mathbf{v}_2=[23 \ 2]^\top\in\ker \Psi_2^{\mathrm{hc}}$, $\mathcal{M}_2=\{2\}$, and \[ \Omega_2^{-1}(z) = \left[ \begin{array}{cc} 1 & \frac{23}{2}z \\ 0 & 1 \\ \end{array} \right]. \] Hence, we compute the reduced matrix \begin{align*} \Psi_3(z) &= \Omega_2^{-*}(z)\Psi_2(z)\Omega_2^{-1}(z)\\ &=\left[ \begin{array}{cc} -\frac{1}{2}z+\frac{3}{2}-\frac{1}{2}z^{-1} & -2 z+5+\frac{1}{2}z^{-1} \\ \frac{1}{2}z+5-2z^{-1} & 2 z+21+ 2z^{-1} \\ \end{array} \right]. \end{align*} Again, $\Psi_3^{\mathrm{hc}}$ is singular. In this case, $\mathbf{v}_3=[-4 \ 1]^\top\in\ker \Psi_3^{\mathrm{hc}}$, $\mathcal{M}_3=\{2\}$, \[ \Omega_3^{-1}(z)=\left[ \begin{array}{cc} 1 & -4 \\ 0 & 1 \\ \end{array} \right] \] and we obtain \begin{align*} \Psi_4(z) &= \Omega_3^{-*}(z)\Psi_3(z)\Omega_3^{-1}(z)\\ &=\left[ \begin{array}{cc} -\frac{1}{2}z+\frac{3}{2}-\frac{1}{2}z^{-1} & -1+\frac{5}{2 }z^{-1} \\ \frac{5}{2}z-1 & 5 \\ \end{array} \right].
\end{align*} Yet another iteration is required, since $\Psi_4^{\mathrm{hc}}$ is also singular. Thus, we proceed by computing $\mathbf{v}_4=[-2 \ 1]^\top\in\ker \Psi_4^{\mathrm{hc}}$, $\mathcal{M}_4=\{1\}$, \[ \Omega_4^{-1}(z)=\left[\begin{array}{cc} 1 & 0 \\ -\frac{1}{2}z & 1 \\ \end{array} \right] \] and eventually we arrive at \begin{align*} \Psi_5 = \Omega_4^{-*}(z)\Psi_4(z)\Omega_4^{-1}(z)=\left[\begin{array}{cc} \frac{1}{4} & -1 \\ -1 & 5 \\ \end{array} \right]. \end{align*} The latter matrix is constant and positive definite; therefore, it admits a Cholesky factorization \[ \Psi_5=C^\top C,\quad C=\left[ \begin{array}{cc} \frac{1}{2} & -2 \\ 0 & 1 \\ \end{array} \right]. \] The fourth step of the algorithm is concluded, since we have found a factorization $\Psi(z)=P^*(z)P(z)$, with $P(z)$ unimodular of the form \begin{align*} P(z)&=C\Omega_4(z)\Omega_3(z)\Omega_2(z)\Omega_1(z)\\ &=\left[ \begin{array}{cc} -z+\frac{1}{2} & -\frac{1}{4} z \left(18 z^2-55 z+39\right) \\ \frac{1}{2}z & \frac{1}{4} \left(9 z^3-23 z^2+8 z+4\right) \\ \end{array} \right]. \end{align*} Finally, we have that \[ W(z)=P(z)\Theta(z)B(z)=\left[ \begin{array}{ccc} -\frac{1}{z} & \frac{1}{z}-1 & \frac{1}{z}-1 \\ \frac{1}{2 z-1} & 0 & 0 \\ \end{array} \right] \] is a stochastically minimal spectral factor of $\Phi(z)$ analytic in $\mathscr{A}_p$ with right inverse analytic in $\mathscr{A}_z$. Therefore, the sought realization is $$ y(t)=W^\top(z^{-1}) e(t), $$ with $e(t)$ white noise. \section{Concluding remarks and future directions}\label{sec:conclusions} In this paper, we have established a general result on spectral factorization for an arbitrary discrete-time spectrum. This result opens the way for many applications and generalizations of known results in several fields of systems theory, such as estimation and stochastic realization.
In particular, for these applications it will be important to further investigate the links between arbitrary spectral factors and stochastic minimality. A conjecture in this direction, which is currently under investigation, is the following. \begin{conjecture} Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r\neq 0$. Let $\mathscr{A}_p$ and $\mathscr{A}_z$ be two unmixed-symplectic sets. Let $W(z)$ be a spectral factor satisfying points \ref{item:thmsf-dt-g(1)}), \ref{item:thmsf-dt-g(2)}) and \ref{item:thmsf-dt-g(3)}) of Theorem \ref{thmsf-dt-g}. Then $W(z)$ is unique up to a constant, orthogonal matrix multiplier on the left, i.e., if $W_1(z)$ also satisfies points \ref{item:thmsf-dt-g(1)}), \ref{item:thmsf-dt-g(2)}) and \ref{item:thmsf-dt-g(3)}), then $W_1(z)=TW(z)$, where $T\in\mathbb{R}^{r\times r}$ is orthogonal. \end{conjecture} This conjecture would be a first step towards a complete parametrization of the set of all stochastically minimal right invertible spectral factors. We believe that this set can be parametrized very efficiently in terms of the all-pass divisors of a generalized {\em phase function} $T_0(z)$:\footnote{Notice that the definition of phase function employed in the conjecture is dual with respect to the classical definition used in stochastic realization, \cite{Lindquist-P-85-siam,Lindquist-P-91-jmsec}.} \begin{conjecture} Let $\Phi(z)\in\mathbb{R}(z)^{n\times n}$ be a spectrum of normal rank $\mathrm{rk}(\Phi)=r\neq 0$. Let $W_-(z)$ be the spectral factor corresponding to Theorem \ref{thmsf-dt} and $\overline{W}_+(z)$ be the spectral factor corresponding to $\mathscr{A}_p=\mathscr{A}_z:=\{\, z\in\mathbb{C}\,:\, |z|<1\,\}$. Let $T_0$ be the all-pass function defined by $T_0(z):=\overline{W}_+(z)W_-^{-R}(z)$.
Then, the set of all minimal right invertible spectral factors of $\Phi(z)$ is given by \begin{eqnarray*} &&\big\{\,W(z)=T_1(z) W_-(z)\,:\,\\ &&\hspace{1.5cm} T_1^\ast (z)T_1(z) =T_1 (z)T_1^\ast (z) =I_r,\\ &&\hspace{1.5cm} \delta_M(T_1(z))+\delta_M(T_0(z)T_1^\ast (z))=\delta_M(T_0(z))\,\big\}. \end{eqnarray*} \end{conjecture} \begin{thebibliography}{100} \bibitem{Aksikas-et-al-07} I.~Aksikas, J.~J.~Winkin, and D.~Dochain. \newblock Optimal LQ-Feedback Regulation of a Nonisothermal Plug Flow Reactor Model by Spectral Factorization. \newblock {\em IEEE Trans. Automat. Contr.}, 52(7):1179--1193, 2007. \bibitem{Anderson-Vongpanitlerd-1973} B.~D.~O.~Anderson and S.~Vongpanitlerd. \newblock {\em Network Analysis And Synthesis: A Modern Systems Theory Approach}. \newblock Prentice-Hall electrical engineering series. Prentice-Hall, Englewood Cliffs, NJ, USA, 1973. \bibitem{Anderson-et-al-1974} B.~D.~O.~Anderson, K.~Hitz, and N.~Diem. \newblock Recursive algorithm for spectral factorization. \newblock {\em {IEEE} Trans. Circuits Syst.}, 21(6):742--750, 1974. \bibitem{BarasDW} J.~S.~Baras and P.~Dewilde. \newblock {Invariant Subspace Methods in Linear Multivariable-Distributed System and Lumped-Distributed Network Synthesis}. \newblock In {\em Proc. IEEE (1976)}, 64:145--160, 1976. \bibitem{Brogliato-LME-07} B.~Brogliato, R.~Lozano, B.~Maschke, and O.~Egeland. \newblock {\em Dissipative Systems Analysis and Control Theory and Applications}. \newblock 2nd ed. New York: Springer-Verlag, 2007. \bibitem{Brune-31} O.~Brune. \newblock The synthesis of a finite two-terminal network whose driving-point impedance is a prescribed function of frequency. \newblock {\em Journal of Mathematical Physics}, 10:191--236, 1931. \bibitem{Byrnes-et-al} C.~I.~Byrnes, T.~T.~Georgiou, A.~Lindquist, and A.~Megretski. \newblock Generalized interpolation in $H^{\infty}$ with a complexity constraint.
\newblock {\em Transactions of the American Mathematical Society}, 358(3):965--987, 2006. \bibitem{Callier-1985} F.~M.~Callier. \newblock On polynomial matrix spectral factorization by symmetric extraction. \newblock {\em IEEE Trans. Automat. Contr.}, 30(5):453--464, 1985. \bibitem{Colaneri-F-siam} P.~Colaneri and A.~Ferrante. \newblock {Algebraic Riccati Equation and $J$-Spectral Factorization for $\mathcal{H}_{\infty}$ Filtering and Deconvolution}. \newblock {\em SIAM J. Contr. and Opt.}, 45(1):123--145, 2006. \bibitem{Colaneri-F-SCL} P.~Colaneri and A.~Ferrante. \newblock {Algebraic Riccati Equation and $J$-Spectral Factorization for $\mathcal{H}_{\infty}$ Estimation}. \newblock {\em Systems \& Control Letters}, 51(5):383--393, 2004. \bibitem{FaurreCG} P.~Faurre, M.~Clerget, and F.~Germain. \newblock {\em {Op\'erateurs Rationnels Positifs}}. \newblock Dunod, 1979. \bibitem{Ferrante-94-ieee} A.~Ferrante. \newblock A parametrization of minimal stochastic realizations. \newblock {\em IEEE Trans. Automat. Contr.}, 39(10):2122--2126, 1994. \bibitem{Ferrante-94-jmsec} A.~Ferrante. \newblock A Parametrization of the Minimal Square Spectral Factors of a Nonrational Spectral Density. \newblock {\em J. Math. Systems, Estimation, and Control}, 7(2):197--226, 1997. \bibitem{Ferrante-96-siam} A.~Ferrante. \newblock A Homeomorphic Characterization of Minimal Spectral Factors. \newblock {\em SIAM J. Contr. and Opt.}, 35(5):1508--1523, 1997. \bibitem{Ferrante-L-98} A.~Ferrante and B.~Levy. \newblock {Canonical Form for Symplectic Matrix Pencils}. \newblock {\em Linear Algebra Appl.}, 274:259--300, 1998. \bibitem{Ferrante-M-P-93} A.~Ferrante, G.~Michaletzky, and M.~Pavon. \newblock Parametrization of all minimal square spectral factors. \newblock {\em Systems \& Control Letters}, 21:249--254, 1993. \bibitem{Ferrante-Ntog-Automatica-13} A.~Ferrante and L.~Ntogramatzidis. \newblock {The Generalised Discrete Algebraic Riccati Equation in Linear-Quadratic Optimal Control}.
\newblock {\em Automatica}, 49:471--478, 2013. \bibitem{Ferrante-N-13} A.~Ferrante, and L.~Ntogramatzidis. \newblock Some new results in the theory of negative imaginary systems with symmetric transfer matrix function. \newblock {\em Automatica}, 49(7):2138--2144, 2013. \bibitem{Ferrante-Pandolfi-2002} A.~Ferrante and L.~Pandolfi. \newblock On the solvability of the positive real lemma equations. \newblock {\em Systems \& Control Letters}, 47(3):209--217, 2002. \bibitem{Ferrante-P-98} A.~Ferrante and G.~Picci. \newblock {Minimal Realization and Dynamic Properties of Optimal Smoothers}. \newblock {\em IEEE Trans. Automat. Contr.}, 45(11):2028--2046, 2000. \bibitem{Ferrante-P-P-02-LAA} A.~Ferrante, G.~Picci, and S.~Pinzoni. \newblock {Silverman Algorithm and the Structure of Discrete-Time Stochastic Systems}. \newblock {\em Linear Algebra Appl. {\rm (Special Issue on Linear Systems and Control)}}, 351-352:219--242, 2002. \bibitem{Fuhrmann} P.~A.~Fuhrmann. \newblock {\em Linear Operators and Systems in Hilbert Space}. \newblock McGraw-Hill, 1981. \bibitem{Fuhrmann-95} P.~A.~Fuhrmann. \newblock On the characterization and parametrization of minimal spectral factors. \newblock {\em J. Math. Systems, Estimation, and Control}, 5:383--444, 1995. \bibitem{Helton} J.~W.~Helton. \newblock {System with Infinite-Dimension State Space: The Hilbert Space Approach}. \newblock In {\em Proc. IEEE (1976)}, 64:145--160, 1976. \bibitem{Gantmacher-1959} F.~Gantmacher. \newblock {\em The Theory of Matrices}, \newblock volume I-II of AMS Chelsea Publishing Series. AMS Chelsea, 1959. \bibitem{Golub-1996} G.~H.~Golub and C.~F.~Van~Loan. \newblock {\em Matrix Computations (3rd Ed.)}. \newblock Johns Hopkins University Press, Baltimore, MD, USA, 1996. \bibitem{Jezek-Kucera-1985} J.~Je\v{z}ek and V.~Ku\v{c}era. \newblock Efficient algorithm for matrix spectral factorization. \newblock {\em Automatica}, 21(6):663--669, 1985. \bibitem{Kailath-1998} T.~Kailath. \newblock {\em Linear Systems}. 
\newblock Prentice-Hall information and system sciences series. Prentice Hall International, 1998. \bibitem{Khalil-02} H.~K.~Khalil. \newblock {\em Nonlinear Systems}. \newblock 3rd ed. New Jersey: Prentice Hall, 2002. \bibitem{Lang-1985} S.~Lang. \newblock {\em Complex Analysis}. \newblock Graduate Texts in Mathematics. Springer-Verlag, 1985. \bibitem{Lindquist-P-85-siam} A.~Lindquist and G.~Picci. \newblock Realization theory for multivariate stationary Gaussian processes. \newblock {\em SIAM J. Contr. and Opt.}, 23:809--857, 1985. \bibitem{Lindquist-P-91-jmsec} A.~Lindquist and G.~Picci. \newblock A geometric approach to modeling and estimation of linear stochastic systems. \newblock {\em J. Math. Systems, Estimation, and Control}, 1:241--333, 1991. \bibitem{Moir1} T.~J.~Moir. \newblock Control theoretical approach to multivariable spectral factorisation problem. \newblock {\em Electronics Letters}, 45:1215--1216, 2009. \bibitem{Moir2} T.~J.~Moir. \newblock A control theoretical approach to the polynomial spectral-factorization problem. \newblock {\em Circuits Systems and Signal Processing}, 30:987--998, 2011. \bibitem{Moir3} T.~J.~Moir. \newblock Spectral factorization using FFTs for large-scale problems. \newblock {\em International Journal of Adaptive Control and Signal Processing}, DOI: 10.1002/acs.2512, 2014. \bibitem{NudeScw} A.~A.~Nudel'man and N.~A.~Schwartzman. \newblock On the existence of the solutions to certain operatorial inequalities (in Russian). \newblock {\em Sib. Math. Z.}, 16:562--571, 1975. \bibitem{Petersen-Lanzon-10} I.~R.~Petersen and A.~Lanzon. \newblock Feedback control of negative-imaginary systems. \newblock {\em IEEE Control Systems Magazine}, 30(5):54--72, 2010. \bibitem{Picci-P-94} G.~Picci and S.~Pinzoni. \newblock Acausal models and balanced realizations of stationary processes. \newblock {\em Linear Algebra Appl.}, 205-206:997--1043, 1994. \bibitem{Rissanen-1973} J.~Rissanen.
\newblock Algorithms for the triangular decomposition of block Hankel and Toeplitz matrices with application to factoring positive matrix polynomials. \newblock {\em Mathematics of Computation}, 27:147--154, 1973. \bibitem{Ruckebusch-78-2} G.~Ruckebusch. \newblock Factorisations minimales de densit\'es spectrales et r\'epresentations markoviennes. \newblock In {\em Proc. 1re Colloque AFCET--SMF (1978)}. Palaiseau, France, 1978. \bibitem{Ruckebusch-80} G.~Ruckebusch. \newblock {\em Th\'eorie g\'eometrique de la r\'epresentation markovienne}. \newblock PhD thesis (Th\'ese de doctorat d'etat), Univ. Paris VI, 1980. \bibitem{Sayed-K} A.~H.~Sayed and T.~Kailath. \newblock A survey of spectral factorization methods. \newblock {\em Numer. Linear Algebra Appl.}, 8:467--496, 2001. \bibitem{Stoorvogel-S-98} A.~A.~Stoorvogel and A.~Saberi. \newblock The discrete-time algebraic {R}iccati equation and linear matrix inequality. \newblock {\em Linear Algebra Appl.}, 274:317--365, 1998. \bibitem{Swigart-Lall-14} J.~Swigart and S.~Lall. \newblock Optimal Controller Synthesis for Decentralized Systems Over Graphs via Spectral Factorization. \newblock {\em IEEE Trans. Automat. Contr.}, 59(9):2311--2323, 2014. \bibitem{Tunnicliffe-Wilson-1972} G.~Tunnicliffe-Wilson. \newblock The Factorization of Matricial Spectral Densities. \newblock {\em SIAM Journal on Applied Mathematics}, 23(4):420--426, 1972. \bibitem{Willems-1971} J.~C.~Willems. \newblock Least squares stationary optimal control and the algebraic Riccati equation. \newblock {\em IEEE Trans. Automat. Contr.}, 16(6):621--634, 1971. \bibitem{Xiong-PL-10} J.~Xiong, I.~R.~Petersen, and A.~Lanzon. \newblock A negative imaginary lemma and the stability of interconnections of linear negative imaginary systems. \newblock {\em IEEE Trans. Automat. Contr.}, 55(10):2342--2347, 2010. \bibitem{Youla-1961} D.~C.~Youla. \newblock On the factorization of rational matrices. \newblock {\em IRE Trans. Information Theory}, 7(3):172--189, 1961.
\bibitem{Youla-Kazanjian-1978} D.~C.~Youla and N.~Kazanjian. \newblock Bauer-type factorization of positive matrices and the theory of matrix polynomials orthogonal on the unit circle. \newblock {\em {IEEE} Trans. Circuits Syst.}, 25(6):57--69, 1978. \bibitem{Zemanian} A.~H.~Zemanian. \newblock {Infinite Electrical Networks}. \newblock In {\em Proc. IEEE}, 64(1):6--17, 1976. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} We describe a new approach to the word problem for Artin-Tits groups and, more generally, for the enveloping group~$\EG\MM$ of a monoid~$\MM$ in which any two elements admit a greatest common divisor. The method relies on a rewrite system~$\RD\MM$ that extends free reduction for free groups. Here we show that, if $\MM$ satisfies what we call the $3$-Ore condition about common multiples, which corresponds to type~FC in the case of Artin-Tits monoids, then the system~$\RD\MM$ is convergent. Under this assumption, we obtain a unique representation result for the elements of~$\EG\MM$, extending Ore's theorem for groups of fractions and leading to a solution of the word problem of a new type. We also show that there exist universal shapes for the van Kampen diagrams of the words representing~$1$. \end{abstract} \section{Introduction} The aim of this paper, the first in a series, is to describe a new approach to the word problem for Artin-Tits groups, which are those groups that admit a finite presentation~$\GR\SS\RR$ where $\RR$ contains at most one relation of the form $\ss... = \tt...$ for each pair of generators~$(\ss, \tt)$ and, if so, the relation has the form $\ss\tt\ss... = \tt\ss\tt...$, with both sides of the same length. Introduced and investigated by J.\,Tits in the 1960s, see~\cite{BriBou}, these groups remain incompletely understood except in particular cases, and even the decidability of the word problem is open in the general case~\cite{Chc, GoP2}. Our approach is algebraic, and it is relevant for every group that is the enveloping group of a cancellative monoid~$\MM$ in which every pair of elements admits a greatest common divisor (``gcd-monoid''). The key ingredient is a certain rewrite system (``reduction'')~$\RD\MM$ that acts on finite sequences of elements of~$\MM$ (``multifractions'') and is reminiscent of free reduction of words (deletion of factors~$\xx\xx\inv$ or $\xx\inv\xx$).
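For comparison, free reduction of words, which the rewrite system~$\RD\MM$ generalizes, admits a simple one-pass stack implementation. The sketch below is our own illustration and operates on words over a free group; the system~$\RD\MM$ of the paper operates on multifractions over a monoid, not on words.

```python
def free_reduce(word):
    """Free reduction for words over a free group.

    Letters are pairs (generator, exponent) with exponent +1 or -1;
    adjacent factors x x^{-1} and x^{-1} x are deleted until none
    remains. A single left-to-right pass with a stack suffices,
    because a deletion can only create a new deletable pair at the
    top of the stack.
    """
    stack = []
    for gen, exp in word:
        # the new letter cancels the previous one iff same generator,
        # opposite exponent
        if stack and stack[-1] == (gen, -exp):
            stack.pop()
        else:
            stack.append((gen, exp))
    return stack

# x x^{-1} y y x^{-1} x  reduces to  y y
w = [("x", 1), ("x", -1), ("y", 1), ("y", 1), ("x", -1), ("x", 1)]
assert free_reduce(w) == [("y", 1), ("y", 1)]
```

The returned word is the unique reduced representative, mirroring the convergence property that the paper establishes for $\RD\MM$ under the $3$-Ore condition.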
In the current paper, we analyze reduction in the special case when the monoid~$\MM$ satisfies an assumption called the $3$-Ore condition, deferring the study of more general cases to subsequent papers~\cite{Diu, Div, Dix}. When the approach works optimally, namely when the $3$-Ore condition is satisfied, it provides a description of the elements of the enveloping group~$\EG\MM$ of the monoid~$\MM$ that directly extends the classical result by \O.\,Ore \cite{Ore}, see~\cite{ClP}, which is the basis of the theory of Garside groups~\cite{Eps, Dir} and asserts that, if $\MM$ is a gcd-monoid and any two elements of~$\MM$ admit a common right multiple, then every element of~$\EG\MM$ admits a unique representation~$\aa_1\aa_2\inv$ with right gcd$(\aa_1, \aa_2) = 1$. Our statement here is parallel, and it takes the form: \begin{thrmA}\label{T:Main} Assume that $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition: \begin{quote} Any three elements of~$\MM$ that pairwise admit a common right multiple $($\resp left multiple$)$ admit a global common right multiple $($\resp left multiple$)$. \end{quote} \ITEM1 The monoid~$\MM$ embeds in its enveloping group~$\EG\MM$ and every element of~$\EG\MM$ admits a unique representation $\aa_1 \aa_2\inv \aa_3 \aa_4\inv\!... \, \aa_\nn^{\pm1}$ with $\aa_\nn \not= 1$, right gcd$(\aa_1, \aa_2) = 1$, and, for $\ii$ even $($\resp odd$)$, if $\xx$ divides~$\aa_{\ii+1}$ on the left $($\resp on the right $)$, then $\xx$ and $\aa_\ii$ have no common right $($\resp left$)$ multiple. \ITEM2 If, moreover, $\MM$ admits a presentation by length-preserving relations and contains finitely many basic elements, the word problem for~$\EG\MM$ is decidable.
\end{thrmA} The sequences involved in Theorem~A\ITEM1 are those that are irreducible with respect to the above alluded rewrite system~$\RD\MM$ (more precisely, a mild amendment~$\RDh\MM$ of it), and the main step in the proof is to show that, under the assumptions, the system~$\RD\MM$ is what is called locally confluent and, from there, convergent, meaning that every sequence of reductions leads to a unique irreducible sequence. The above result applies to many groups. In the case of a free group, one recovers the standard results about free reduction. In the case of an Artin-Tits group of spherical type and, more generally, of a Garside group, the irreducible sequences involved in Theorem~A have length at most two, and one recovers the standard representation by irreducible fractions occurring in Ore's theorem. But other cases are eligible. In the world of Artin-Tits groups, we show \begin{thrmB}\label{T:AT3Ore} An Artin-Tits monoid is eligible for Theorem~A if and only if it is of type~FC. \end{thrmB} The paper is organized as follows. Prerequisites about the enveloping group of a monoid and gcd-monoids are gathered in Section~\ref{S:GcdMon}. Reduction of multifractions is introduced in Section~\ref{S:Red} as a rewrite system, and its basic properties are established. In Section~\ref{S:Confl}, we investigate local confluence of reduction and deduce that, when the $3$-Ore condition is satisfied, reduction is convergent. In Section~\ref{S:AppliConv}, we show that, when reduction is convergent, then all expected consequences follow, in particular Theorem~A. We complete the study in the $3$-Ore case by showing the existence of a universal reduction strategy, implying that of universal shapes for van Kampen diagrams (Section~\ref{S:Univ}). Finally, in Section~\ref{S:AT}, we address the special case of Artin-Tits monoids and establish Theorem~B. 
\subsection*{Acknowledgments} The author thanks Pierre-Louis Curien, Jean Fromentin, Volker Gebhardt, Juan Gonz\'alez-Meneses, Vincent Jug\'e, Victoria Lebed, Luis Paris, Friedrich Wehrung, Bertold Wiest, Zerui Zhang, and Xiangui Zhao for their friendly listening and their many suggestions. \section{Gcd-monoids}\label{S:GcdMon} We collect a few basic properties about the enveloping group of a monoid (Subsection~\ref{SS:Multifrac}) and about gcd-monoids, which are monoids in which the divisibility relations enjoy lattice properties; in particular, greatest common divisors (gcds) exist (Subsection~\ref{SS:Div}). Finally, noetherianity properties are addressed in Subsection~\ref{SS:Noeth}. More details can be found in~\cite[Chap.\,II]{Dir}. \subsection{The enveloping group of a monoid}\label{SS:Multifrac} For every monoid~$\MM$, there exists a group~$\EG\MM$, the \emph{enveloping group} of~$\MM$, unique up to isomorphism, together with a morphism~$\can$ from~$\MM$ to~$\EG\MM$, with the universal property that every morphism of~$\MM$ to a group factors through~$\can$. If $\MM$ admits (as a monoid) a presentation~$\MON\SS\RR$, then $\EG\MM$ admits (as a group) the presentation~$\GR\SS\RR$. Our main subject is the connection between~$\MM$ and~$\EG\MM$, specifically the representation of the elements of~$\EG\MM$ in terms of those of~$\MM$. The universal property of~$\EG\MM$ implies that every such element can be expressed as \begin{equation}\label{E:AltProd} \iota(\aa_1) \iota(\aa_2)\inv \iota(\aa_3) \iota(\aa_4)\inv \pdots \end{equation} with $\aa_1, \aa_2, ...$ in~$\MM$. It will be convenient here to represent such decompositions using (formal) sequences of elements of~$\MM$ and to arrange the latter into a monoid. \begin{defi} If $\MM$ is a monoid, we denote by~$\FR\MM$ the family of all finite sequences of elements of~$\MM$, which we call \emph{multifractions on~$\MM$}. For~$\aav$ in~$\FR\MM$, the length of~$\aav$ is called its \emph{depth}, written~$\dh\aav$.
We write $\ef$ for the unique multifraction of depth zero (the empty sequence) and, for every~$\aa$ in~$\MM$, we identify~$\aa$ with the depth one multifraction~$(\aa)$. \end{defi} We use $\aav, \bbv, ...$ as generic symbols for multifractions, and denote by~$\aa_\ii$ the $\ii$th entry of~$\aav$ counted from~$1$. A depth~$\nn$ multifraction~$\aav$ has the expanded form $(\aa_1 \wdots \aa_\nn)$. In view of using the latter to represent the alternating product of~\eqref{E:AltProd}, we shall use~$/$ for separating entries, thus writing $\aa_1 \sdots \aa_\nn$ for $(\aa_1 \wdots \aa_\nn)$. This extends the usual convention of representing $\iota(\aa)\iota(\bb)\inv$ by the fraction~$\aa/\bb$; note that, with our convention, the left quotient $\iota(\aa)\inv\iota(\bb)$ corresponds to the depth three multifraction $1/\aa/\bb$. We insist that multifractions live in the monoid~$\MM$, and \emph{not} in the group~$\EG\MM$, in which $\MM$ need not embed. \begin{defi} For $\aav, \bbv$ in~$\FR\MM$ with respective depths~$\nn, \pp \ge 1$, we put \begin{equation*} \aav \opp \bbv = \begin{cases} \aa_1 \sdots \aa_{\nn} / \bb_1 \sdots \bb_\pp &\quad\text{if $\nn$ is even,}\\ \aa_1 \sdots \aa_{\nn-1} / \aa_\nn \bb_1 / \bb_2 \sdots \bb_\pp &\quad\text{if $\nn$ is odd,} \end{cases} \end{equation*} completed with $\ef \opp \aav = \aav \opp \ef = \aav$ for every~$\aav$. \end{defi} Thus $\aav \opp \bbv$ is the concatenation of~$\aav$ and~$\bbv$, except that the last entry of~$\aav$ is multiplied by the first entry of~$\bbv$ for $\dh\aav$ odd, \ie, when $\aa_\nn$ corresponds to a positive factor in~\eqref{E:AltProd}. It is easy to check that $\FR\MM$ is a monoid and to realize the group~$\EG\MM$ as a quotient of this monoid: \begin{prop}\label{P:EnvGroup} \ITEM1 The set~$\FR\MM$ equipped with~$\opp$ is a monoid with neutral element~$\ef$. It is generated by the elements~$\aa$ and~$1/\aa$ with~$\aa$ in~$\MM$. The family of all depth~one multifractions is a submonoid isomorphic to~$\MM$. 
\ITEM2 Let $\simeq$ be the congruence on~$\FR\MM$ generated by $(1, \ef)$ and the pairs $(\aa / \aa, \ef)$ and $(1 / \aa / \aa, \ef)$ with~$\aa$ in~$\MM$, and, for~$\aav$ in~$\FR\MM$, let~$\can(\aav)$ be the $\simeq$-class of~$\aav$. Then the group~$\EG\MM$ is (isomorphic to) $\FR\MM{/}{\simeq}$ and, for every~$\aav$ in~$\FR\MM$, we have \begin{equation}\label{E:Eval} \can(\aav) = \can(\aa_1) \, \can(\aa_2)\inv \, \can(\aa_3) \, \can(\aa_4)\inv \pdots. \end{equation} \end{prop} \begin{proof} \ITEM1 The equality of $(\aav \opp \bbv) \opp \ccv$ and $\aav \opp (\bbv \opp \ccv)$ is obvious when at least one of~$\aav$, $\bbv$, $\ccv$ is empty; otherwise, one considers the four possible cases according to the parities of~$\dh\aav$ and~$\dh\bbv$. That $\FR\MM$ is generated by the elements~$\aa$ and~$1/\aa$ with~$\aa$ in~$\MM$ follows from the equality \begin{equation}\label{E:Decomp} \aa_1 \sdots \aa_\nn = \aa_1 \opp 1/\aa_2 \opp \aa_3 \opp 1/\aa_4 \opp \pdots . \end{equation} Finally, by definition, $\aa \opp \bb = \aa\bb$ holds for all~$\aa, \bb$ in~$\MM$. \ITEM2 For every~$\aa$ in~$\MM$, we have $\aa \opp 1/\aa = \aa/\aa \simeq \ef$ and $1/\aa \opp \aa = 1/\aa/\aa \simeq \ef$, so $\can(1/\aa)$ is an inverse of~$\can(\aa)$ in~$\FR\MM{/}{\simeq}$. By~\eqref{E:Decomp}, the multifractions~$\aa$ and~$1/\aa$ with~$\aa$ in~$\MM$ generate the monoid~$\FR\MM$, hence $\FR\MM{/}{\simeq}$ is a group. As $\simeq$ is a congruence, the map~$\can$ is a homomorphism from~$\FR\MM$ to~$\FR\MM{/}{\simeq}$, and its restriction to~$\MM$ is a homomorphism from~$\MM$ to~$\FR\MM{/}{\simeq}$. Let $\phi$ be a homomorphism from~$\MM$ to a group~$\GG$. Extend~$\phi$ to~$\FR\MM$ by $$\phih(\ef) = 1 \quad \text{and} \quad \phih(\aav) = \phi(\aa_1) \, \phi(\aa_2)\inv \, \phi(\aa_3) \, \phi(\aa_4)\inv \pdots.$$ By the definition of~$\opp$, the map~$\phih$ is a homomorphism from the monoid~$\FR\MM$ to~$\GG$. 
Moreover, we have $\phih(1) = 1 = \phih(\ef)$ and, for every~$\aa$ in~$\MM$, \begin{gather*} \phih(\aa / \aa) = \phi(\aa)\phi(\aa)\inv = 1 = \phih(\ef),\quad \phih(1 / \aa / \aa) = \phi(1) \phi(\aa)\inv \phi(\aa) = 1 = \phih(\ef). \end{gather*} Hence $\aav \simeq \aav'$ implies $\phih(\aav) = \phih(\aav')$, and $\phih$ induces a well-defined homomorphism~$\hat\phi$ from $\FR\MM{/}{\simeq}$ to~$\GG$. Then, for every~$\aa$ in~$\MM$, we find $\phi(\aa) = \phih(\aa) = \hat\phi(\can(\aa))$. Hence, $\phi$ factors through~$\can$. Hence, $\FR\MM{/}{\simeq}$ satisfies the universal property of~$\EG\MM$. Finally, \eqref{E:Eval} directly follows from~\eqref{E:Decomp}, from the fact that $\can$ is a (monoid) homomorphism, and from the equality $\can(1/\aa) = \can(\aa)\inv$ for~$\aa$ in~$\MM$. \end{proof} Hereafter, we identify $\EG\MM$ with $\FR\MM{/}{\simeq}$. One should keep in mind that Prop.\,\ref{P:EnvGroup} remains formal (and essentially trivial) as long as no effective control of~$\simeq$ is obtained. We recall that, in general, $\can$ need not be injective, \ie, the monoid~$\MM$ need not embed in the group~$\EG\MM$. We now express the word problem for the group~$\EG\MM$ in the language of multifractions. If $\SS$ is a set, we denote by~$\SS^*$ the free monoid of all $\SS$-words, using~$\ew$ for the empty word. To represent group elements, we use words in~$\SS \cup \SSb$, where $\SSb$ is a disjoint copy of~$\SS$ consisting of one letter~$\INV\ss$ for each letter~$\ss$ of~$\SS$, due to represent~$\ss\inv$. The letters of~$\SS$ (\resp $\SSb$) are called \emph{positive} (\resp \emph{negative}). If $\ww$ is a word in~$\SS \cup \SSb$, we denote by~$\INV\ww$ the word obtained from~$\ww$ by exchanging~$\ss$ and~$\INV\ss$ everywhere and reversing the order of letters. Assume that $\MM$ is a monoid, and $\SS$ is included in~$\MM$. For $\ww$ a word in~$\SS$, we denote by~$\clp\ww$ the evaluation of~$\ww$ in~$\MM$, \ie, the element of~$\MM$ represented by~$\ww$. 
Next, for every word~$\ww$ in~$\SS \cup \SSb$, there exists a unique finite sequence $(\ww_1 \wdots \ww_{\nn})$ of words in~$\SS$ satisfying \begin{equation}\label{E:CanDec} \ww = \ww_1 \, \INV{\ww_2}\, \ww_3 \, \INV{\ww_4} \, \pdots \end{equation} with $\ww_\ii \not= \ew$ for $1 < \ii \le \nn$ (as $\ww_1$ occurs positively in~\eqref{E:CanDec}, the decomposition of a negative letter~$\INV\ss$ is $(\ew, \ss)$). We then define $\clp\ww$ to be the multifraction $\clp{\ww_1} \sdots \clp{\ww_{\nn}}$. Then we obtain: \begin{lemm}\label{L:WP1} For every monoid~$\MM$ and every generating family~$\SS$ of~$\MM$, a word~$\ww$ in~$\SS \cup \SSb$ represents~$1$ in~$\EG\MM$ if and only if the multifraction $\clp\ww$ satisfies $\clp\ww \simeq 1$ in~$\FR\MM$. \end{lemm} \begin{proof} For $\ww, \ww'$ in~$\SS^*$, write $\ww\equivp\ww'$ for $\clp\ww = \clp{\ww'}$. Let $\equiv$ be the congruence on the free monoid~$(\SS \cup \SSb)^*$ generated by~$\equivp$ together with the pairs $(\ss\INV\ss, \ew)$ and $(\INV\ss\ss, \ew)$ with $\ss \in \SS$. For~$\ww$ a word in~$\SS \cup \SSb$, let $\cl\ww$ denote the $\equiv$-class of~$\ww$. By definition, $\ww$ represents~$1$ in~$\EG\MM$ if and only if $\ww \equiv \ew$ holds. Now, let $\ww$ be an arbitrary word in~$\SS \cup \SSb$, and let $(\ww_1 \wdots \ww_{\nn})$ be its decomposition~\eqref{E:CanDec}. Then, in the group~$\EG\MM$, we find \begin{align*} \cl\ww &= \cl{\ww_1} \, \cl{\ww_2}\inv \, \cl{\ww_3} \, \cl{\ww_4}\inv \, \, \pdots &&\text{by~\eqref{E:CanDec}}\\ &= \can(\clp{\ww_1}) \, \can(\clp{\ww_2})\inv \, \can(\clp{\ww_3}) \, \can(\clp{\ww_4})\inv \, \pdots &&\text{by definition of~$\can$}\\ &= \can(\clp{\ww_1} / \clp{\ww_2} / \clp{\ww_3} / \pdots ) &&\text{by~\eqref{E:Eval}}\\ &= \can(\clp\ww) &&\text{by definition of~$\clp\ww$.} \end{align*} Hence $\ww \equiv \ew$, \ie, $\cl\ww = 1$, is equivalent to $\can(\clp\ww) = 1$, hence to $\clp\ww \simeq 1$ by Prop.~\ref{P:EnvGroup}\ITEM2. 
\end{proof} Thus solving the word problem for the group~$\EG\MM$ with respect to the generating set~$\SS$ amounts to deciding the relation $\clp\ww \simeq 1$, which takes place inside the ground monoid~$\MM$. \subsection{Gcd-monoids}\label{SS:Div} The natural framework for our approach is the class of gcd-monoids. Their properties are directly reminiscent of the standard properties of the gcd and lcm operations for natural numbers, with the difference that, because we work in general with non-commutative monoids, all notions come in a left and a right version. We begin with the divisibility relation(s), which play a crucial role in the sequel. \begin{defi}\label{D:Div} If $\MM$ is a monoid and $\aa, \bb$ lie in~$\MM$, we say that $\aa$ is a \emph{left divisor} of~$\bb$ or, equivalently, that $\bb$ is a \emph{right multiple} of~$\aa$, written $\aa \dive \bb$, if $\aa\xx = \bb$ holds for some~$\xx$ in~$\MM$. \end{defi} The relation~$\dive$ is reflexive and transitive. If $\MM$ is left cancellative, \ie, if $\aa\xx = \aa\yy$ implies $\xx = \yy$, the conjunction of $\aa \dive \bb$ and $\bb \dive \aa$ is equivalent to $\bb = \aa\xx$ with~$\xx$ invertible. Hence, if $1$ is the only invertible element in~$\MM$, the relation~$\dive$ is a partial ordering. When they exist, a least upper bound and a greatest lower bound with respect to~$\dive$ are called a \emph{least common right multiple}, or \emph{right lcm}, and a \emph{greatest common left divisor}, or \emph{left gcd}. If $1$ is the only invertible element of~$\MM$, the right lcm and the left gcd of~$\aa$ and~$\bb$ are unique when they exist, and we denote them by~$\aa \lcm \bb$ and~$\aa \gcd \bb$, respectively. Left-divisibility admits a symmetric counterpart, with left multiplication replacing right multiplication. We say that $\aa$ is a \emph{right divisor} of~$\bb$ or, equivalently, that $\bb$ is a \emph{left multiple} of~$\aa$, denoted $\aa \divet \bb$, if $\bb = \xx \aa$ holds for some~$\xx$.
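As a concrete sanity check of these divisibility notions, one can take the positive integers under multiplication, a commutative gcd-monoid in which left and right divisibility coincide. The sketch below is our own illustration; the monoids of interest in the paper are in general non-commutative, so the left and right versions genuinely differ there.

```python
from math import gcd

# Toy commutative gcd-monoid (illustration only): positive integers
# under multiplication. "a left-divides b" means a*x = b for some x;
# the left gcd is the usual gcd, and the right lcm always exists here.

def divides(a, b):
    # a "dive" b in (Z_{>0}, *): a*x = b for some positive integer x
    return b % a == 0

def lcm(a, b):
    return a * b // gcd(a, b)

# reflexivity and transitivity of the divisibility relation
assert divides(12, 12)
assert divides(3, 6) and divides(6, 42) and divides(3, 42)
# gcd is the greatest lower bound, lcm the least upper bound
assert gcd(12, 18) == 6 and lcm(12, 18) == 36
# the only invertible element is 1, so mutual divisibility forces
# equality: divisibility is a partial order
assert not (divides(4, 8) and divides(8, 4))
```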
We then have the derived notions of a left lcm and a right gcd, denoted~$\lcmt$ and~$\gcdt$ when they are unique. \begin{defi}\label{D:GcdMon} A \emph{gcd-monoid} is a cancellative monoid with no nontrivial invertible element, in which any two elements admit a left gcd and a right gcd. \end{defi} \begin{exam}\label{X:GcdMon} Every Artin-Tits monoid is a gcd-monoid: the non-existence of nontrivial invertible elements follows from the homogeneity of the relations, whereas cancellativity and existence of gcds (the latter amounts to the existence of lower bounds for the weak order of the corresponding Coxeter group) have been proved in~\cite{BrS} and~\cite{Dlg}. A number of further examples are known. Every Garside monoid is a gcd-monoid, but gcd-monoids are (much) more general: typically, every monoid defined by a presentation $\MON\SS\RR$ where $\RR$ contains at most one relation $\ss... = \tt...$ and at most one relation $...\ss = ...\tt$ for all~$\ss, \tt$ in~$\SS$ and satisfying the left and right cube conditions of~\cite{Dgp} and~\cite[Sec.\,II.4]{Dir} is a gcd-monoid. By~\cite[Sec.\,IX.1.2]{Dir}, this applies for instance to every Baumslag--Solitar monoid $\MON{\tta, \ttb}{\tta^\pp\ttb^\qq = \ttb^{\qq'}\tta^{\pp'}}$. \end{exam} The partial operations~$\lcm$ and~$\gcd$ obey various laws that we do not recall here. We shall need the following formula connecting the right lcm and multiplication: \begin{lemm}\label{L:IterLcm} If $\MM$ is a gcd-monoid, then, for all~$\aa, \bb, \cc$ in~$\MM$, the right lcm $\aa \lcm \bb\cc$ exists if and only if $\aa \lcm \bb$ and $\aa' \lcm \cc$ exist, where~$\aa'$ is defined by $\aa \lcm \bb = \bb \aa'$, and then we have \begin{equation}\label{E:IterLcm} \aa \lcm \bb\cc = \aa \opp \bb'\cc' = \bb\cc \opp \aa'', \end{equation} with $\aa \lcm \bb = \bb \aa' = \aa \bb'$ and $\aa' \lcm \cc = \aa' \cc' = \cc \aa''$. \end{lemm} \hangindent=40mm\hangafter=3 We skip the easy verification; see for instance~\cite[Prop.~II.2.12]{Dir}.
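Although the general verification is skipped, formula~\eqref{E:IterLcm} can be spot-checked in the toy commutative gcd-monoid of positive integers under multiplication, where every right lcm exists. This sketch is our own illustration, not part of the paper.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Spot check of (E:IterLcm) in (Z_{>0}, *): writing a v b = b*a1 = a*b1
# and a1 v c = a1*c1 = c*a2 (so a1, b1, c1, a2 play the roles of
# a', b', c', a''), the formula predicts a v (b*c) = a*b1*c1 = b*c*a2.
for a, b, c in [(4, 6, 10), (8, 6, 9), (12, 10, 45)]:
    a1 = lcm(a, b) // b    # a'
    b1 = lcm(a, b) // a    # b'
    c1 = lcm(a1, c) // a1  # c'
    a2 = lcm(a1, c) // c   # a''
    assert lcm(a, b * c) == a * b1 * c1 == b * c * a2
```

In the non-commutative setting the same bookkeeping applies, but the order of the factors in each product matters and is exactly the one recorded in the commutative diagram below.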
To prove and remember formulas like~\eqref{E:IterLcm}, it may be useful to draw diagrams and associate with every element~$\aa$ of the monoid~$\MM$ a labeled edge \begin{picture}(12,2)(0,0)\pcline{->}(1,0)(11,0)\taput{$\aa$}\end{picture}. Concatenation of edges is read as a product in~$\MM$ (which amounts to viewing~$\MM$ as a category), and equalities then correspond to commutative diagrams. With such conventions, \eqref{E:IterLcm} can be read in the diagram on the left.\ \begin{picture}(0,0)(142,0) \psset{nodesep=0.7mm} \psset{yunit=0.8mm} \pcline{->}(0,10)(15,10)\taput{$\bb$} \pcline{->}(15,10)(30,10)\taput{$\cc$} \pcline{->}(0,10)(0,0)\tlput{$\aa$} \pcline{->}(15,10)(15,0)\trput{$\aa'$} \pcline{->}(30,10)(30,0)\trput{$\aa''$} \pcline{->}(0,0)(15,0)\taput{$\bb'$} \pcline{->}(15,0)(30,0)\taput{$\cc'$} \end{picture} \begin{lemm}\label{L:Lcm} If $\MM$ is a gcd-monoid and $\aa\dd = \bb\cc$ holds, then $\aa\dd$ is the right lcm of~$\aa$ and~$\bb$ if and only if $1$ is the right gcd of~$\cc$ and~$\dd$. \end{lemm} \begin{proof} Assume $\aa\dd = \bb\cc = \aa \lcm \bb$. Let $\xx$ right divide~$\cc$ and~$\dd$, say $\cc = \cc' \xx$ and $\dd = \dd' \xx$. Then $\aa\dd = \bb\cc$ implies $\aa\dd'\xx = \bb\cc'\xx$, whence $\aa\dd' = \bb\cc'$. By definition of the right lcm, this implies $\aa\dd \dive \aa\dd'$, whence $\dd' = \dd \xx'$ for some~$\xx'$ by left cancelling~$\aa$. We deduce $\dd = \dd \xx' \xx$, whence $\xx' \xx = 1$. Hence $\cc$ and~$\dd$ admit no non-invertible common right divisor, whence $\cc \gcdt \dd = 1$. Conversely, assume $\aa\dd = \bb\cc$ with $\cc \gcdt \dd = 1$. Let $\aa\dd' = \bb\cc'$ be a common right multiple of~$\aa$ and~$\bb$, and let $\ee = \aa\dd \gcd \aa\dd'$. We have $\aa \dive \aa\dd$ and $\aa \dive \aa\dd'$, whence $\aa \dive \ee$, say $\ee = \aa \dd''$. Then $\aa\dd'' \dive \aa\dd$ implies $\dd'' \dive \dd$, say $\dd = \dd'' \xx$, and similarly $\aa\dd'' \dive \aa\dd'$ implies $ \dd'' \dive \dd'$, say $\dd' = \dd'' \xx'$. 
Symmetrically, from $\aa\dd = \bb\cc$ and $\aa\dd' = \bb\cc'$, we deduce $\bb \dive \aa\dd$ and $\bb \dive \aa\dd'$, whence $\bb \dive \ee$, say $\ee = \bb \cc''$. Then we find $\bb\cc = \aa \dd = \aa \dd'' \xx = \bb\cc'' \xx$, whence $\cc = \cc'' \xx$ and $\dd = \dd'' \xx$. Thus, $\xx$ is a common right divisor of~$\cc$ and~$\dd$. Hence, by assumption, $\xx$ is invertible. Similarly, we find $\bb\cc' = \aa\dd' = \aa\dd'' \xx' = \bb\cc'' \xx'$, whence $\cc' = \cc'' \xx'$, and, finally, $\aa\dd' = \aa\dd''\xx' = \aa\dd \xx\inv\xx'$, whence $\aa\dd \dive \aa\dd'$. Hence $\aa\dd$ is a right lcm of~$\aa$ and~$\bb$. \end{proof} \begin{lemm}\label{L:CondLcm} If $\MM$ is a gcd-monoid, then any two elements of~$\MM$ admitting a common right multiple $($\resp left multiple$)$ admit a right lcm $($\resp left lcm$)$. \end{lemm} \begin{proof} Assume that $\aa$ and~$\bb$ admit a common right multiple, say $\aa\dd = \bb\cc$. Let $\ee = \cc \gcdt \dd$. Write $\cc = \cc' \ee$ and $\dd = \dd' \ee$. As $\MM$ is right cancellative, $\aa\dd = \bb\cc$ implies $\aa\dd' = \bb\cc'$, and $\ee = \cc \gcdt \dd$ implies $\cc' \gcdt \dd' = 1$. Then Lemma~\ref{L:Lcm} implies that $\aa\dd'$ is a right lcm of~$\aa$ and~$\bb$. The argument for the left lcm is symmetric. \end{proof} When the conclusion of Lemma~\ref{L:CondLcm} is satisfied, the monoid~$\MM$ is said to \emph{admit conditional lcms}. Thus every gcd-monoid admits conditional lcms. \begin{rema} Requiring the absence of nontrivial invertible elements is not essential for the subsequent developments. Using techniques from~\cite{Dir}, one could drop this assumption and adapt everything. However, it is more pleasant to have unique gcds and lcms, which is always the case for the monoids we are mainly interested in. \end{rema} \subsection{Noetherianity}\label{SS:Noeth} If $\MM$ is a monoid, we shall denote by~$\MMinv$ the set of invertible elements of~$\MM$ (a subgroup of~$\MM$). 
We use $\div$ for the \emph{proper} left divisibility relation, where $\aa \div \bb$ means $\aa\xx = \bb$ for some non-invertible~$\xx$, hence for~$\xx \not= 1$ if $\MM$ is assumed to admit no nontrivial invertible element. Symmetrically, we use~$\divt$ for the proper right divisibility relation. \begin{defi} A monoid~$\MM$ is called \emph{noetherian} if $\div$ and~$\divt$ are well-founded, meaning that every nonempty subset of~$\MM$ admits a $\div$-minimal element and a $\divt$-minimal element. \end{defi} A monoid~$\MM$ is noetherian if and only if $\MM$ contains no infinite descending sequence with respect to~$\div$ or~$\divt$. It is well known that well-foundedness is characterized by the existence of a map to Cantor's ordinal numbers~\cite{Lev}. In the case of a monoid, the criterion can be stated as follows: \begin{lemm}\label{L:Wit} If $\MM$ is a cancellative monoid, the following are equivalent: \ITEM1 The proper left divisibility relation~$\div$ of~$\MM$ is well-founded. \ITEM2 There exists a map~$\wit$ from~$\MM$ to ordinal numbers such that, for all~$\aa, \bb$ in~$\MM$, the relation $\aa \div \bb$ implies $\wit(\aa) < \wit(\bb)$. \ITEM3 There exists a map~$\wit$ from~$\MM$ to ordinal numbers satisfying, for all~$\aa, \bb$ in~$\MM$, \begin{equation}\label{E:Wit} \wit(\aa\bb) \ge \wit(\aa) + \wit(\bb), \quad \text{and}\quad \wit(\aa) > 0 \text{\ for $\aa \notin \MMinv$}. \end{equation} \end{lemm} We skip the proof, which is essentially standard. A symmetric criterion holds for the well-foundedness of ~$\divt$, with \eqref{E:Wit} replaced by \begin{equation}\label{E:Witt} \wit(\aa\bb) \ge \wit(\bb) + \wit(\aa), \quad \text{and}\quad \wit(\aa) > 0 \text{\ for $\aa \notin \MMinv$}. \end{equation} We shall subsequently need a well-foundedness result for the transitive closure of left and right divisibility (``factor'' relation), a priori stronger than noetherianity. 
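To make Lemma~\ref{L:Wit}\ITEM3 concrete, here is a hedged Python sketch for a monoid with a homogeneous presentation, namely the $3$-strand braid monoid with the single relation $\tta\ttb\tta = \ttb\tta\ttb$: word length is invariant under the relation, so it induces a map~$\wit$ satisfying~\eqref{E:Wit}. The encoding and helper name are ours, and the check operates on words, not on the monoid itself.

```python
# Word length as a noetherianity witness: with a homogeneous relation
# (here aba = bab), every rewriting step preserves length, so the common
# length of all words representing an element is a well-defined map
# satisfying (E:Wit). Toy check on words, not a monoid implementation.

def apply_relation(w, lhs="aba", rhs="bab"):
    """Rewrite the first occurrence of lhs into rhs, if any."""
    i = w.find(lhs)
    return w if i < 0 else w[:i] + rhs + w[i + len(lhs):]

w = "aababa"
assert len(apply_relation(w)) == len(w)           # homogeneity: length preserved
u, v = "ab", "ba"
assert len(u + v) == len(u) + len(v)              # additivity of the witness
assert all(len(x) > 0 for x in ("a", "b", u, v))  # positive off the unit
```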
\begin{lemm}\label{L:Factor} If $\MM$ is a cancellative monoid, then the \emph{factor} relation defined by \begin{equation}\label{E:Factor} \aa \fac \bb \quad \Leftrightarrow \quad \exists\xx, \yy\, (\,\xx\aa\yy = \bb\ \text{and at least one of $\xx, \yy$ is not invertible} \, ). \end{equation} is well-founded if and only if $\MM$ is noetherian. \end{lemm} \begin{proof} Both $\div$ and $\divt$ are included in~$\fac$, so the assumption that $\fac$ is well-founded implies that both $\div$ and $\divt$ are well-founded, hence that $\MM$ is noetherian. Conversely, assume that $\aa_1 \antifac \aa_2 \antifac \pdots$ is an infinite descending sequence with respect to~$\fac$ and that $\div$ is well-founded. For each~$\ii$, write $\aa_\ii = \xx_\ii \aa_{\ii+1} \yy_\ii$ with $\xx_\ii$ and $\yy_\ii$ not both invertible. Let $\bb_\ii = \xx_1 \pdots \xx_{\ii-1} \aa_\ii$. Then, for every~$\ii$, we have $\bb_\ii = \bb_{\ii+1} \yy_\ii$, whence $\bb_{\ii+1} \dive \bb_\ii$. The assumption that $\div$ is well-founded implies the existence of~$\nn$ such that $\yy_\ii$ (which is well-defined, since $\MM$ is left cancellative) is invertible for~$\ii \ge \nn$. Hence $\xx_\ii$ is not invertible for~$\ii \ge \nn$. Let $\cc_\ii = \aa_\ii \yy_{\ii-1} \pdots \yy_1$. Then we have $\cc_\ii = \xx_\ii \cc_{\ii+1}$ for every~$\ii$. Hence $\cc_{\nn} \multt \cc_{\nn+1} \multt \pdots$ is an infinite descending sequence with respect to~$\divt$. Hence $\divt$ cannot be well-founded, and $\MM$ is not noetherian. \end{proof} A stronger variant of noetherianity is often satisfied (typically by Artin-Tits monoids). \begin{defi}\label{D:StrNoeth} A cancellative monoid~$\MM$ is called \emph{strongly noetherian} if there exists a map~$\wit : \MM \to \NNNN$ satisfying, for all~$\aa, \bb$ in~$\MM$, \begin{equation}\label{E:StrWit} \wit(\aa\bb) \ge \wit(\aa) + \wit(\bb), \quad \text{and}\quad \wit(\aa) > 0 \text{\ for $\aa \notin \MMinv$}. 
\end{equation} \end{defi} As $\NNNN$ is included in ordinals and its addition is commutative, \eqref{E:StrWit} implies~\eqref{E:Wit} and~\eqref{E:Witt} and, therefore, a strongly noetherian monoid is noetherian, but the converse is not true. An important consequence of strong noetherianity is the decidability of the word problem. \begin{prop}\label{P:WordPbMon} If $\MM$ is a strongly noetherian gcd-monoid, $\SS$ is finite, and $(\SS, \RR)$ is a recursive presentation of~$\MM$, then the word problem for~$\MM$ with respect to~$\SS$ is decidable. \end{prop} \begin{proof} First, we observe that, provided $\SS$ does not contain~$1$, the set of all words in~$\SS$ representing an element~$\aa$ of~$\MM$ is finite. Indeed, assume that $\wit : \MM \to \NNNN$ satisfies~\eqref{E:StrWit} and that $\ww$ represents~$\aa$. Write $\ww = \ss_1 \pdots \ss_\ell$ with $\ss_1 \wdots \ss_\ell \in \SS$. Then \eqref{E:StrWit} implies $\wit(\aa) \ge \sum_{\ii = 1}^{\ell} \wit(\ss_\ii) \ge \ell$, hence $\ww$ necessarily belongs to the finite set~$\SS^{\le\wit(\aa)}$ of words of length at most~$\wit(\aa)$. Then, by definition, $\MM$ is isomorphic to~${\SS^*\!}{/}{\equivp}$, where $\equivp$ is the congruence on~$\SS^*$ generated by~$\RR$. If $\ww, \ww'$ are words in~$\SS$, we can decide $\ww \equivp \ww'$ as follows. We start with~$\XX:= \{\ww\}$ and then saturate~$\XX$ under~$\RR$, \ie, we apply the relations of~$\RR$ to the words of~$\XX$ until no new word is added: as seen above, the $\equivp$-class of~$\ww$ is finite, so the process terminates in finite time. Then $\ww'$ is $\equivp$-equivalent to~$\ww$ if and only if $\ww'$ appears in the set~$\XX$ so constructed.
\end{proof} Prop.\,\ref{P:WordPbMon} applies in particular to all gcd-monoids that admit a finite homogeneous presentation~$(\SS, \RR)$, meaning that every relation of~$\RR$ is of the form $\uu = \vv$ where $\uu, \vv$ are words of the same length: then defining~$\wit(\aa)$ to be the common length of all words representing~$\aa$ provides a map satisfying~\eqref{E:StrWit}. Artin-Tits monoids are typical examples. \begin{rema} Strong noetherianity is called \emph{atomicity} in~\cite{Dfx}, and gcd-monoids that are strongly noetherian are called \emph{preGarside} in~\cite{GoP3}. \end{rema} The last notion we shall need is that of a basic element in a gcd-monoid. First, noetherianity ensures the existence of extremal elements, and, in particular, it implies the existence of \emph{atoms}, \ie, elements that are not the product of two non-invertible elements. \begin{lemm}\cite[Cor.\,II.2.59]{Dir}\label{L:Atoms} If $\MM$ is a noetherian gcd-monoid, then a subset of~$\MM$ generates~$\MM$ if and only if it contains all atoms of~$\MM$. \end{lemm} \begin{proof} Let $\aa \not= 1$ belong to~$\MM$. Let $\XX:= \{\xx \in \MM \setminus \{1\} \mid \xx \dive \aa\}$. As $\aa$ is non-invertible, $\XX$ is nonempty. As $\div$ is well-founded, $\XX$ has a $\div$-minimal element, say~$\xx$. Then $\xx$ must be an atom. So every non-invertible element is left divisible by an atom. Now, write $\aa = \xx \aa'$ with $\xx$ an atom. If $\aa'$ is not invertible, then, by the same argument, write $\aa' = \xx' \aa''$ with $\xx'$ an atom, and iterate. We have $\aa \multt \aa'$, since $\xx$ is not invertible (in a cancellative monoid, an atom is never invertible), whence $\aa \multt \aa' \multt \aa'' \multt \pdots$. As $\divt$ is well-founded, the process stops after finitely many steps. Hence $\aa$ is a product of atoms. The rest is easy.
\end{proof} \begin{defi}\label{D:RCClosed} A subset~$\XX$ of a gcd-monoid is called \emph{RC-closed} (``closed under right complement'') if, whenever $\XX$ contains $\aa$ and~$\bb$ and $\aa \lcm \bb$ exists, $\XX$ also contains the elements~$\aa'$ and~$\bb'$ defined by $\aa \lcm \bb = \aa \bb' = \bb \aa'$. We say \emph{LC-closed} (``closed under left complement'') for the counterpart involving left lcms. \end{defi} Note that nothing is required when $\aa \lcm \bb$ does not exist: for instance, in the free monoid based on~$\SS$, the family~$\SS \cup\{1\}$ is RC-closed. Lemma~\ref{L:Atoms} implies that, if $\MM$ is a noetherian gcd-monoid, then there exists a smallest generating subfamily of~$\MM$ that is RC-closed, namely the closure of the atom set under the right complement operation associating with all~$\aa, \bb$ such that $\aa \lcm \bb$ exists the (unique) element~$\aa'$ satisfying $\aa \lcm \bb = \bb \aa'$. \begin{defi}\cite{Dgk}\label{D:Primitive} If $\MM$ is a noetherian gcd-monoid, an element~$\aa$ of~$\MM$ is called \emph{right basic} if it lies in the closure of the atom set under the right complement operation. \emph{Left-basic} elements are defined symmetrically. We say that $\aa$ is \emph{basic} if it is right or left basic. \end{defi} Even if its atom family is finite, a noetherian gcd-monoid may contain infinitely many basic elements: for instance, in the Baumslag--Solitar monoid $\MON{\tta, \ttb}{\tta\ttb = \ttb\tta^2}$, all elements~$\tta^{2^\kk}$ with $\kk \ge 0$ are right basic. However, this cannot happen in an Artin-Tits monoid: \begin{prop}\cite{Din, DyH} A (finitely generated) Artin-Tits monoid contains finitely many basic elements. \end{prop} This nontrivial result relies on the (equivalent) result that every such monoid contains a finite Garside family, itself a consequence of the result that a Coxeter group admits finitely many low elements. 
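The Baumslag--Solitar example above can be made computational: in $\MON{\tta, \ttb}{\tta\ttb = \ttb\tta^2}$, an easy induction gives $\tta^\mm\ttb = \ttb\tta^{2\mm}$, so taking the right complement of~$\tta^\mm$ over~$\ttb$ doubles the exponent. The Python sketch below only tracks exponents under our own encoding; it does not implement the monoid itself, and the helper name is ours.

```python
# Iterating the right complement of powers of a over b in the monoid
# <a, b | ab = ba^2>: from a^m b = b a^(2m), the complement of a^m
# across b is a^(2m), so the closure of the atom a under right
# complement contains every a^(2^k). We only track the exponents.

def complement_over_b(m):
    """Exponent of the right complement: a^m lcm b = b * a^(2m)."""
    return 2 * m

basics, m = [], 1
for _ in range(5):
    basics.append(m)
    m = complement_over_b(m)
assert basics == [1, 2, 4, 8, 16]   # exponents of the right-basic a^(2^k)
```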
Typically, there are $10$ basic elements in the Artin-Tits monoid of type~$\Att$, namely $1$, the $3$~atoms, and the $6$~products of two distinct atoms. \section{Reduction of multifractions}\label{S:Red} Our main tool for investigating the enveloping group~$\EG\MM$ using multifractions is a family of partial depth-preser\-ving transformations that, when defined, map a multifraction to a $\simeq$-equivalent multifraction, which we shall see is smaller with respect to some possibly well-founded partial order. As can be expected, irreducible multifractions, \ie, those multifractions that are eligible for no reduction, will play an important role. In this section, we successively introduce and formally define the reduction rules~$\Red{\ii, \xx}$ in Subsection~\ref{SS:Principle}, then establish in Subsection~\ref{SS:Basic} the basic properties of the rewrite system~$\RD\MM$, and finally describe in Subsection~\ref{SS:Cut} a mild extension~$\RDh\MM$ of~$\RD\MM$ that is necessary for the final uniqueness result we aim at. \subsection{The principle of reduction}\label{SS:Principle} In this article, we only consider multifractions~$\aav$, where the first entry is positive. However, to ensure compatibility with~\cite{Diu} and~\cite{Div}, where multifractions with a negative first entry are also considered, it is convenient to adopt the following convention: \begin{defi} If $\aav$ is a multifraction, we say that $\ii$ is \emph{positive} (\resp \emph{negative}) \emph{in~$\aav$} if $\can(\aa_\ii)$ (\resp $\can(\aa_\ii)\inv$) occurs in~\eqref{E:AltProd}. \end{defi} So, everywhere in the current paper, $\ii$ is positive in~$\aav$ if and only if $\ii$ is odd. Let us start from free reduction. Assume that $\MM$ is a free monoid based on~$\SS$. A multifraction on~$\MM$ is a finite sequence of words in~$\SS$. 
Then every element of the enveloping group~$\EG\MM$, \ie, of the free group based on~$\SS$, is represented by a unique freely reduced word in~$\SS \cup \SSb$~\cite{LyS}, or equivalently, in our context, by a unique freely reduced multifraction~$\aav$, meaning that, if $\ii$ is negative (\resp positive) in~$\aav$, the first (\resp last) letters of~$\aa_\ii$ and~$\aa_{\ii +1}$ are distinct. The above (easy) result is usually proved by constructing a rewrite system that converges to the expected representative. For $\aav, \bbv$ in~$\FR\MM$, and for~$\ii$ negative (\resp positive) in~$\aav$ and $\xx$ in~$\SS$, let us declare that $\bbv = \aav \act \Rdiv{\ii, \xx}$ holds if the first (\resp last) letters of~$\aa_\ii$ and~$\aa_{\ii + 1}$ coincide and $\bbv$ is obtained from~$\aav$ by erasing these letters, \ie, if we have \begin{equation}\label{E:Div} \aa_\ii = \xx \bb_\ii \text{ and } \aa_{\ii + 1} = \xx \bb_{\ii + 1} \text{ ($\ii$ negative in~$\aav$)}, \ \text{or} \ \aa_\ii = \bb_\ii \xx \text{ and } \aa_{\ii + 1} = \bb_{\ii + 1} \xx \text{ ($\ii$ positive in~$\aav$)}. \end{equation} Writing $\aav \rd \bbv$ when $\bbv = \aav \act \Rdiv{\ii, \xx}$ holds for some~$\ii$ and~$\xx$ and $\rds$ for the reflexive--transitive closure of~$\rd$, one easily shows that, for every~$\aav$, there exists a unique irreducible~$\bbv$ satisfying $\aav \rds \bbv$, because the rewrite system~$\RDiv\MM$ so obtained is \emph{locally confluent}, meaning that \begin{equation}\label{E:LocConf0} \parbox{113mm}{If we have $\aav \rd \bbv$ and $\aav \rd \ccv$, there exists~$\ddv$ satisfying $\bbv \rds \ddv$ and $\ccv \rds \ddv$.} \end{equation} If we associate with a multifraction~$\aav$ a diagram of the type $\begin{picture}(36,4)(0,0) \psset{nodesep=0.7pt} \pcline{->}(0,0)(10,0)\taput{$\aa_1$} \pcline{<-}(10,0)(20,0)\taput{$\aa_2$} \pcline{->}(20,0)(30,0)\taput{$\aa_3$} \put(31,0){...} \end{picture}$ (with alternating orientations), then free reduction is illustrated as in Fig.\,\ref{F:Free}. 
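The free-reduction system~$\RDiv\MM$ over a free monoid is easy to make executable. In the sketch below (our encoding: a multifraction is a list of Python strings, with $\aa_\jj$ at index~$\jj-1$), each step erases a common last letter at a positive level or a common first letter at a negative level, exactly as in~\eqref{E:Div}, and iteration terminates at the freely reduced multifraction.

```python
# One step of free reduction on a multifraction over a free monoid:
# for i positive (odd, hence 0-based index even) erase a common last
# letter of a_i and a_{i+1}; for i negative (even) erase a common first
# letter. Returns None when the multifraction is irreducible.

def reduce_once(frac):
    a = list(frac)
    for j in range(len(a) - 1):              # a[j] is the paper's a_{j+1}
        u, v = a[j], a[j + 1]
        if not u or not v:
            continue
        if j % 2 == 0 and u[-1] == v[-1]:    # level j+1 positive
            a[j], a[j + 1] = u[:-1], v[:-1]
            return a
        if j % 2 == 1 and u[0] == v[0]:      # level j+1 negative
            a[j], a[j + 1] = u[1:], v[1:]
            return a
    return None

def freely_reduce(frac):
    """Iterate reduction steps until an irreducible multifraction is reached."""
    frac = list(frac)
    while (step := reduce_once(frac)) is not None:
        frac = step
    return frac

assert freely_reduce(["ab", "cb", "cd"]) == ["a", "", "d"]
```

Both sides of the assertion represent the same group element $\can(\tta)\can(\ttd)$, as Lemma~\ref{L:RedSimeq} will guarantee in general.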
\begin{figure} \caption{\small Free reduction: $\bbv = \aav \act \Rdiv{\ii, \xx}$.}\label{F:Free} \end{figure} Let now $\MM$ be an arbitrary cancellative monoid. The notion of an initial or final letter makes no sense, but it is subsumed in the notion of a left and a right divisor: $\xx$ left divides~$\aa$ if $\aa$ may be expressed as $\xx\yy$. Then we can extend the definition of~$\Rdiv{\ii, \xx}$ in~\eqref{E:Div} without change, allowing~$\xx$ to be any element of~$\MM$, and we still obtain a well-defined rewrite system~$\RDiv\MM$ on~$\FR\MM$ for which Fig.\,\ref{F:Free} is relevant. However, when $\MM$ is not free, $\RDiv\MM$ is of little interest because, in general, it fails to satisfy the local confluence property~\eqref{E:LocConf0}. \begin{exam}\label{X:Braid} Let $\MM$ be the $3$-strand braid monoid, given as $\MON{\tta, \ttb}{\tta\ttb\tta = \ttb\tta\ttb}$, and let $\aav = \tta / \tta\ttb\tta / \ttb$. One finds $\aav \act \Rdiv{1, \tta} = 1 / \tta\ttb / \ttb$ and $\aav \act \Rdiv{2, \ttb} = \tta / \tta\ttb / 1$, and one easily checks that no further division can ensure confluence. \end{exam} In order to possibly restore confluence, we extend the previous rules by relaxing some assumption. Assume for instance $\ii$ negative in~$\aav$. Then $\aav \act \Rdiv{\ii, \xx}$ is defined if $\xx$ left divides both~$\aa_\ii$ and~$\aa_{\ii + 1}$. We shall define $\aav \act \Red{\ii, \xx}$ by keeping the condition that $\xx$ divides~$\aa_{\ii + 1}$, but relaxing the condition that $\xx$ divides~$\aa_\ii$ into the weaker assumption that $\xx$ and~$\aa_\ii$ admit a common right multiple. Provided the ambient monoid is a gcd-monoid, Lemma~\ref{L:CondLcm} implies that $\xx$ and~$\aa_\ii$ then admit a right lcm, and there exist unique~$\xx'$ and~$\bb_\ii$ satisfying $\aa_\ii \xx' = \xx \bb_\ii = \xx \lcm \aa_\ii$.
In this case, the action of~$\Red{\ii, \xx}$ will consist in removing~$\xx$ from~$\aa_{\ii + 1}$, replacing~$\aa_\ii$ with~$ \bb_\ii$, and incorporating the remainder~$\xx'$ in~$\aa_{\ii - 1}$, see Fig.\,\ref{F:Red}. Note that, for $\xx$ left dividing~$\aa_\ii$, we have~$\xx' = 1$ and $\aa_\ii = \xx \bb_\ii$, so we recover the division~$\Rdiv{\ii, \xx}$. The case of $\ii$ positive in~$\aav$ is treated symmetrically, exchanging left and right everywhere. Finally, for $\ii = 1$, we stick to the rule~$\Rdiv{1, \xx}$, because there is no $0$th entry in which $\xx'$ could be incorporated. \begin{defi}\label{D:Red} If $\MM$ is a gcd-monoid, $\aav, \bbv$ belong to~$\FR\MM$, and $\ii \ge 1$ and $\xx \in \MM$ hold, we say that $\bbv$ is obtained from~$\aav$ by \emph{reducing~$\xx$ at level~$\ii$}, written $\bbv = \aav \act \Red{\ii, \xx}$, if we have $\dh\bbv = \dh\aav$, $\bb_\kk = \aa_\kk$ for $\kk \not= \ii - 1, \ii, \ii + 1$, and there exists~$\xx'$ satisfying $$\begin{array}{lccc} \text{for $\ii$ negative in~$\aav$:} &\bb_{\ii-1} = \aa_{\ii-1} \xx', &\xx \bb_\ii = \aa_\ii \xx' = \xx \lcm \aa_\ii, &\xx \bb_{\ii+1} = \aa_{\ii+1},\\ \text{for $\ii \ge 3$ positive in~$\aav$:\qquad} &\bb_{\ii-1} = \xx' \aa_{\ii-1}, &\bb_\ii \xx = \xx' \aa_\ii = \xx \lcmt \aa_\ii, &\bb_{\ii+1} \xx = \aa_{\ii+1},\\ \text{for $\ii = 1$ positive in~$\aav$:} &&\bb_\ii \xx = \aa_\ii, &\bb_{\ii+1} \xx = \aa_{\ii+1}. \end{array}$$ We write $\aav \rd \bbv$ if $\aav \act \Red{\ii, \xx}$ holds for some~$\ii$ and some~$\xx \not= 1$, and use $\rds$ for the reflexive--transitive closure of~$\rd$. The rewrite system~$\RD\MM$ so obtained is called \emph{reduction}. \end{defi} As is usual, we shall say that $\bbv$ is an \emph{$\RRR$-reduct} of~$\aav$ when $\aav \rds \bbv$ holds, and that $\aav$ is \emph{$\RRR$-irreducible} if no rule of~$\RD\MM$ applies to~$\aav$. 
\begin{figure} \caption{\small The rewriting relation $\bbv = \aav \act \Red{\ii, \xx}$.}\label{F:Red} \end{figure} \begin{exam}\label{X:ExRed} If $\MM$ is a free monoid, two elements of~$\MM$ admit a common right multiple only if one is a prefix of the other, so $\aav \act \Red{\ii, \xx}$ can be defined only when $\aav \act \Rdiv{\ii, \xx}$ is, and $\RD\MM$ coincides with the free reduction system~$\RDiv\MM$. Let now $\MM$ be the $3$-strand braid monoid, as in Example~\ref{X:Braid}. Then $\RD\MM$ properly extends~$\RDiv\MM$. For instance, considering $\aav = \tta / \tta\ttb\tta / \ttb$ again and putting $\bbv = \aav \act \Rdiv{1, \tta} = 1 / \tta\ttb / \ttb$, the elements~$\tta\ttb$ and~$\ttb$ admit a common right multiple, hence $\bbv$ is eligible for~$\Red{2, \ttb}$, leading to $\bbv \act \Red{2, \ttb} = \tta / \tta\ttb / 1$, which restores local confluence: $\aav \act \Rdiv{2,\ttb} = \aav \act \Rdiv{1,\tta}\Red{2, \ttb}$. More generally, assume that $\MM$ is a Garside monoid~\cite{Dgk}, for instance an Artin-Tits monoid of spherical type (see Section~\ref{S:AT}). Then lcms always exist, and can be used at each step: the whole of $\aa_{\ii+1}$ can always be pushed through~$\aa_\ii$, with no remainder at level~$\ii+1$. Starting from the highest level, one can push all entries down to levels $1$ and~$2$ and finish with a multifraction of the form $\aa_1 /\aa_2 /1\sdots1$. The process stops, because, at level~$1$, the only legal reduction is division. Finally, let $\MM$ be the Artin-Tits monoid of type~$\Att$, here defined as $$\MON{\tta, \ttb, \ttc}{\tta\ttb\tta = \ttb\tta\ttb, \ \ttb\ttc\ttb = \ttc\ttb\ttc, \ \ttc\tta\ttc = \tta\ttc\tta},$$ and consider $\aav := 1 /\ttc/\tta\ttb\tta$.
Both $\tta$ and $\ttb$ left divide~$\tta\ttb\tta$ and admit a common right multiple with~$\ttc$, so $\aav$ is eligible for~$\Red{2,\tta}$ and~$\Red{2, \ttb}$, leading to $\aav \act \Red{2,\tta} = \tta\ttc / \ttc\tta / \ttb\tta$ and $\aav \act \Red{2, \ttb} = \ttb\ttc / \ttc\ttb / \tta\ttb$. The latter are eligible for no reduction: this shows that a multifraction may admit several irreducible reducts, so we see that confluence cannot be expected in every case. \end{exam} The following statement directly follows from Def.\,\ref{D:Red}: \begin{lemm}\label{L:RedDef} Assume that $\MM$ is a gcd-monoid. \ITEM1 The multifraction~$\aav \act \Red{1, \xx}$ is defined if and only if we have $\dh\aav \ge 2$ and $\xx$ right divides both~$\aa_1$ and~$\aa_2$. If $\ii$ is negative $($\resp positive ${\ge}3$$)$ in~$\aav$, then $\aav \act \Red{\ii, \xx}$ is defined if and only if we have $\dh\aav > \ii$ and $\xx$ and~$\aa_\ii$ admit a common right $($\resp left$)$ multiple, and $\xx$ divides~$\aa_{\ii + 1}$ on the left $($\resp right$)$. \ITEM2 A multifraction~$\aav$ is $\RRR$-irreduc\-ible if and only if $\aa_1$ and~$\aa_2$ have no nontrivial common right divisor, and, for $\ii < \dh\aav$ negative $($\resp positive ${\ge}3$$)$ in~$\aav$, if $\xx \not= 1$ left $($\resp right$)$ divides~$\aa_{\ii+1}$, then $\xx$ and $\aa_\ii$ have no common right $($\resp left$)$ multiple. \end{lemm} \begin{rema}\label{R:Relax} The $\ii$th and $(\ii + 1)$st entries do not play symmetric roles in~$\Red{\ii, \xx}$: we demand that $\aa_{\ii+1}$ is a multiple of~$\xx$, but not that $\aa_\ii$ is a multiple of~$\xx$.
Note that, in~$\Red{\ii, \xx}$, if we see the factor~$\xx'$ of Def.~\ref{D:Red} and Fig.~\ref{F:Red} as the result of~$\xx$ crossing~$\aa_\ii$ (while $\aa_\ii$ becomes~$\bb_\ii$), then we insist that $\xx$ crosses the whole of~$\aa_\ii$: relaxing this condition and only requiring that $\xx$ crosses a divisor of~$\aa_\ii$ makes sense (at the expense of allowing the depth of the multifraction to increase), but leads to a rewrite system with different properties, see~\cite{Dix}. \end{rema} \subsection{Basic properties of reduction}\label{SS:Basic} The first, fundamental property of the transformations~$\Red{\ii, \xx}$ and the derived reduction relation~$\rds$ is their compatibility with the congruence~$\simeq$: reducing a multifraction on a monoid~$\MM$ does not change the element of~$\EG\MM$ it represents. \begin{lemm}\label{L:RedSimeq} If $\MM$ is a gcd-monoid and $\aav, \bbv$ belong to~$\FR\MM$, then $\aav \rds \bbv$ implies~$\aav \simeq \bbv$. \end{lemm} \begin{proof} As $\rds$ is the reflexive--transitive closure of~$\rd$, it is sufficient to establish the result for~$\rd$. Assume $\bbv = \aav \act \Red{\ii, \xx}$ with, say, $\ii$ negative in~$\aav$. By definition, we have $\xx \bb_\ii = \aa_\ii \xx' = \xx \lcm \aa_\ii$ in~$\MM$, whence $\can(\xx) \can (\bb_\ii) = \can(\aa_\ii) \can(\xx')$ and, from there, $\can(\aa_\ii)\inv \can(\xx) = \can(\xx') \can(\bb_\ii)\inv$ in~$\EG\MM$. Applying~\eqref{E:Eval} and $\can(1) = 1$, we obtain $\can(1 / \aa_\ii / \xx) = \can(\aa_\ii)\inv \can(\xx) = \can(\xx') \can(\bb_\ii)\inv = \can(\xx' / \bb_\ii / 1)$, whence $1 / \aa_\ii / \xx \simeq \xx' / \bb_\ii / 1$. Multiplying on the left by~$\aa_{\ii-1}$ and on the right by~$\bb_{\ii+1}$, we deduce $$\aa_{\ii-1} / \aa_\ii / \aa_{\ii+1} \simeq \bb_{\ii-1} / \bb_\ii / \bb_{\ii+1}$$ and, from there, $\aav \simeq \bbv$. 
The argument is similar for $\ii$ positive in~$\aav$, replacing $\xx \bb_\ii = \aa_\ii \xx'$ with $\bb_\ii \xx = \xx' \aa_\ii$ and $1 / \aa_\ii / \xx \simeq \xx' / \bb_\ii / 1$ with $\aa_\ii / \xx / 1 \simeq 1 / \xx' / \bb_\ii$, leading to $\aa_{\ii - 2} / \aa_{\ii-1} / \aa_\ii / \aa_{\ii+1} \simeq \bb_{\ii - 2} / \bb_{\ii-1} / \bb_\ii / \bb_{\ii+1}$ for $\ii \ge 3$, and to $\aa_\ii / \aa_{\ii+1} \simeq \bb_\ii / \bb_{\ii + 1}$ for~$\ii =~1$, whence, in any case, to~$\aav \simeq\nobreak \bbv$. \end{proof} The next property is the compatibility of reduction with multiplication. The verification is easy, but we do it carefully, because it is crucial. \begin{lemm}\label{L:CompatRed} If $\MM$ is a gcd-monoid, the relation~$\rds$ is compatible with multiplication on~$\FR\MM$. \end{lemm} \begin{proof} It suffices to consider the case of~$\rd$. By Prop.\,\ref{P:EnvGroup}\ITEM1, the monoid~$\FR\MM$ is generated by the elements~$\cc$ and~$1/\cc$ with~$\cc$ in~$\MM$, so it is sufficient to show that $\bbv = \aav \act \Red{\ii, \xx}$ implies $\cc \opp \aav \rd \cc \opp \bbv$, $1/\cc \opp \aav \rd 1/\cc \opp \bbv$, $\aav \opp \cc \rd \bbv \opp \cc$, and $\aav \opp 1/\cc \rd \bbv \opp 1/\cc$. The point is that multiplying by~$\cc$ or by $1/\cc$ does not change the eligibility for reduction because it removes no divisors. So, assume $\bbv = \aav \act \Red{\ii, \xx}$ with $\dh\aav = \nn$. Let $\aav':= \cc \opp \aav$ and $\bbv':= \cc \opp \bbv$. In the case $\ii \ge 2$, we find $\aa'_\ii = \aa_\ii$ and $\xx$ divides~$\aa'_{\ii+1} = \aa_{\ii+1}$, so $\aav' \act \Red{\ii, \xx}$ is defined, and we have $\bbv' = \aav' \act \Red{\ii, \xx}$. In the case $\ii = 1$, we find $\xx \divet \aa'_1 = \cc \aa_1$ and $\xx \divet \aa'_2 = \aa_2$, so $\aav' \act \Red{\ii, \xx}$ is defined, and we have $\bbv' = \aav' \act \Red{\ii, \xx}$. Put now $\aav':= 1/\cc \opp \aav$ and $\bbv':= 1/\cc \opp \bbv$. 
We have $\aav' = 1 / \cc / \aa_1 \sdots \aa_\nn$ and, similarly, $\bbv' = 1 / \cc / \bb_1 \sdots \bb_\nn$. We find now (in every case) $\aa'_{\ii + 2} = \aa_\ii$ and $\xx$ divides~$\aa'_{\ii+3} = \aa_{\ii+1}$, so $\aav' \act \Red{\ii + 2, \xx}$ is defined, and $\bbv' = \aav' \act \Red{\ii + 2, \xx}$ follows. Then let $\aav':= \aav \opp \cc$ and $\bbv':= \bbv \opp \cc$. If $\nn$ is negative in~$\aav$, we have $\aav' = \aa_1 \sdots \aa_\nn / \cc$ and $\bbv' = \bb_1 \sdots \bb_\nn / \cc$, whence $\bbv' = \aav' \act \Red{\ii, \xx}$. If $\nn$ is positive in~$\aav$, we find $\aav' = \aa_1 \sdots \aa_\nn\cc$ and $\bbv' = \bb_1 \sdots \bb_\nn\cc$, and $\bbv' = \aav' \act \Red{\ii, \xx}$ again: we must have $\ii < \nn$, and everything is clear for $\ii \le \nn - 2$; for $\ii = \nn-1$, there is no problem as $\xx \dive \aa_{\ii +1}$, \ie, $\xx \dive \aa_\nn$, implies $\xx \dive \aa'_\nn = \aa_\nn\cc$. So, we still have $\bbv' = \aav' \act \Red{\ii, \xx}$. Finally, put $\aav':= \aav \opp 1/\cc$ and $\bbv':= \bbv \opp 1/\cc$. If $\nn$ is negative in~$\aav$, we have $\aav' = \aa_1 \sdots \aa_\nn / 1 / \cc$ and $\bbv' = \bb_1 \sdots \bb_\nn / 1 / \cc$, whence $\bbv' = \aav' \act \Red{\ii, \xx}$ directly. Similarly, if $\nn$ is positive in~$\aav$, we find $\aav' = \aa_1 \sdots \aa_\nn / \cc$ and $\bbv' = \bb_1 \sdots \bb_\nn / \cc$, again implying $\bbv' = \aav' \act \Red{\ii, \xx}$. So the verification is complete. (Observe that the treatments of right multiplication by~$\cc$ and by~$1/\cc$ are not exactly symmetric.) \end{proof} A rewrite system is called \emph{terminating} if no infinite rewriting sequence exists, hence if every sequence of reductions from an element leads in finitely many steps to an irreducible element. We easily obtain a termination result for reduction: \begin{prop}\label{P:Termin} If $\MM$ is a noetherian gcd-monoid, then $\RD\MM$ is terminating. 
\end{prop} \begin{proof} Using $\fac$ for the factor relation of~\eqref{E:Factor}, we consider for each~$\nn$ the anti-lexicographic extension of~$\fac$ to $\nn$-multifractions: \begin{equation}\label{E:Factor1} \aav \fac_\nn \bbv \quad\Leftrightarrow\quad\exists\ii \in \{1 \wdots \nn\}\, (\aa_\ii \fac \bb_\ii \ \text{and}\ \forall\jj \in \{\ii+1 \wdots \nn\}\,(\aa_\jj = \bb_\jj)). \end{equation} Then, if $\aav, \bbv$ are $\nn$-multifractions, $\aav \rd \bbv$ implies $\bbv \fac_\nn \aav$. Indeed, $\bbv = \aav \act \Red{\ii, \xx}$ with $\xx \not= 1$ implies $\bb_\kk = \aa_\kk$ for $\kk \ge \ii +2$, and $\bb_{\ii+1} \fac \aa_{\ii+1}$: if $\ii$ is negative in~$\aav$, then $\bb_{\ii+1}$ is a proper right divisor of~$\aa_{\ii+1}$ whereas, if $\ii$ is positive in~$\aav$, then it is a proper left divisor. In both cases, we have $\bb_{\ii+1} \fac \aa_{\ii+1}$, whence $\bbv \fac_\nn \aav$. Hence an infinite sequence of $\RRR$-reductions starting from~$\aav$ results in an infinite $\fac_\nn$-descending sequence. If $\MM$ is a noetherian gcd-monoid, then, by Lemma~\ref{L:Factor}, $\fac$ is a well-founded partial order on~$\MM$. This implies that $\fac_\nn$ is a well-founded partial order for every~$\nn$: the indices~$\ii$ possibly occurring in a $\fac_\nn$-descending sequence must eventually stabilize, resulting in a $\fac$-descending sequence in~$\MM$. Hence no such infinite sequence may exist. \end{proof} \begin{coro}\label{C:Irred} If $\MM$ is a noetherian gcd-monoid, then every $\simeq$-class contains at least one $\RRR$-irreducible multifraction. \end{coro} \begin{proof} Starting from~$\aav$, every sequence of reductions leads in finitely many steps to an $\RRR$-irreducible multifraction. By Lemma~\ref{L:RedSimeq}, the latter belongs to the same $\simeq$-class as~$\aav$. \end{proof} Thus, when Corollary~\ref{C:Irred} is relevant, $\RRR$-irreducible multifractions are natural candidates for being distinguished representatives of $\simeq$-classes.
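In the free-monoid case, the measure used in this proof can be observed directly: a reduction step leaves every entry above level~$\ii+1$ untouched and replaces~$\aa_{\ii+1}$ by a proper factor, hence a strictly shorter word, so the tuple of entry lengths read from the last entry down decreases lexicographically. A minimal Python check, under our own encoding of multifractions as lists of words:

```python
# Anti-lexicographic measure from the termination proof, specialised to
# words: proper factors are strictly shorter, so comparing the tuples of
# lengths read from the last entry down gives a well-founded order that
# every reduction step strictly decreases.

def measure(frac):
    return tuple(len(u) for u in reversed(frac))

before = ["ab", "cb", "cd"]     # one division step at level 1 yields:
after  = ["a",  "c",  "cd"]
assert measure(after) < measure(before)   # (2, 1, 1) < (2, 2, 2)
```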
\begin{rema} Prop.\,\ref{P:Termin} makes the question of the termination of~$\RD\MM$ fairly easy. But it would not be so, should the definition be relaxed as alluded to in Remark~\ref{R:Relax}. On the other hand, as can be expected, noetherianity is crucial to ensure termination. For instance, the monoid $\MM = \MON{\tta, \ttb}{\tta = \ttb\tta\ttb}$ is a non-noetherian gcd-monoid, and the infinite sequence $1 / \tta / \tta \rd 1 / \tta\ttb / \tta\ttb \rd 1 / \tta\ttb^2 / \tta\ttb^2 \rd \pdots$ shows that reduction is not terminating for~$\MM$. \end{rema} Our last result is a simple composition rule for reductions at the same level. \begin{lemm}\label{L:IterRed} If $\MM$ is a gcd-monoid, $\aav$ belongs to~$\FR\MM$, and $\ii$ is negative (\resp positive) in~$\aav$, then $(\aav \act \Red{\ii, \xx}) \act \Red{\ii, \yy}$ is defined if and only if $\aav \act \Red{\ii, \xx\yy}$ $($\resp $\aav \act \Red{\ii, \yy\xx}$$)$ is, and then they are equal. \end{lemm} We skip the proof, which should be easy to read on Fig.\,\ref{F:IterRed}: the point is the rule for an iterated lcm, as given in Lemma~\ref{L:IterLcm}, and obvious on the diagram. \begin{figure} \caption{\small Composition of two reductions, here for $\ii$ negative in~$\aav$.} \label{F:IterRed} \end{figure} In Def.~\ref{D:Red}, we put no restriction on the parameter~$\xx$ involved in the rule~$\Red{\ii, \xx}$. Lemma~\ref{L:IterRed} implies that the relation~$\rds$ is not changed when one restricts to rules~$\Red{\ii, \xx}$ with~$\xx$ in some distinguished generating family, typically $\xx$ an atom when $\MM$ is noetherian. \subsection{Reducing depth}\label{SS:Cut} By definition, all rules of~$\RD\MM$ preserve the depth of multifractions. In order to possibly obtain genuinely unique representatives, we introduce an additional transformation erasing trivial final entries.
\begin{defi}\label{D:Cut} If $\MM$ is a gcd-monoid, then, for $\aav, \bbv$ in~$\FR\MM$, we declare $\bbv = \aav \act \Cut$ if the final entry of~$\aav$ is~$1$, and $\bbv$ is obtained from~$\aav$ by removing it. We write $\aav \rdh \bbv$ for either $\aav \rd \bbv$ or $\bbv = \aav \act \Cut{}$, and denote by~$\rdhs$ the reflexive--transitive closure of~$\rdh$. We put $\RDh\MM:= \RD\MM \cup \{\Cut\}$. \end{defi} It follows from the definition that an $\nn$-multifraction~$\aav$ is $\RRRh$-irreduc\-ible if and only if it is $\RRR$-irreducible and, in addition, satisfies $\aa_\nn \not= 1$. We now show that the rule~$\Cut{}$ does not change the properties of the system. Hereafter, we use $\One\nn$ (often abridged as~$\one$) for the $\nn$-multifraction~$1 \sdots 1$, $\nn$~factors. \begin{lemm}\label{L:RedCut} If $\MM$ is a gcd-monoid and $\aav, \bbv$ belong to~$\FR\MM$, then $\aav \rdhs \bbv$ holds if and only if $\aav \rds \bbv \opp \One\pp$ holds for some~$\pp \ge 0$. \end{lemm} \begin{proof} We prove, using induction on~$\mm$, that $\aav \rdh^\mm \bbv$ implies the existence of~$\pp$ satisfying $\aav \rds \bbv \opp \One\pp$. This is obvious for $\mm = 0$. For $\mm = 1$, by definition, we have either $\aav \rd \bbv$, whence $\aav \rds \bbv \opp \One0$, or $\bbv = \aav \act \Cut$, whence $\aav \rds \bbv \opp \One\pp$ with $\pp = 1$ for $\dh\aav$ odd and $\pp = 2$ for $\dh\aav$ even. Assume $\mm \ge 2$. Write $\aav \rdh^{\mm - 1} \ccv \rdh \bbv$. By induction hypothesis, we have $\aav \rds \ccv \opp \One\qq$ and $\ccv \rds \bbv \opp \One\rr$ for some~$\qq, \rr$. By Lemma~\ref{L:CompatRed}, we deduce $\ccv \opp \One\qq \rds \bbv \opp \One\rr \opp \One\qq$, whence, by transitivity, $\aav \rds \bbv \opp \One\rr \opp \One\qq$, which is $\aav \rds \bbv \opp \One\pp$ with $\pp = \qq + \rr$ (\resp $\pp = \qq + \rr - 1$) for~$\rr$ even (\resp odd).
Conversely, we have $\bbv = (\bbv \opp \One\pp) \act (\Cut)^\pp$ for~$\dh\bbv$ even and $\bbv = (\bbv \opp \One\pp) \act (\Cut)^{\pp - 1}$ for~$\dh\bbv$ odd, so $\aav \rds \bbv \opp \One\pp$ implies $\aav \rdhs \bbv$ in every case. \end{proof} \enlargethispage{2mm} \begin{lemm}\label{L:Cut} Assume that $\MM$ is a gcd-monoid. \ITEM1 The relation~$\rdhs$ is included in~$\simeq$ and is compatible with multiplication. \ITEM2 The rewrite system~$\RDh\MM$ is terminating if and only if $\RD\MM$ is. \end{lemm} \begin{proof} \ITEM1 Assume $\aav \rdhs \bbv$. By Lemma~\ref{L:RedCut}, we have $\aav \rds \bbv \opp \One\pp$ for some~$\pp$. First, we deduce $\aav \simeq \bbv \opp \One\pp$, whence $\aav \simeq \bbv$ owing to~\eqref{E:Decomp}. Next, let~$\ccv$ be an arbitrary multifraction. By Lemma~\ref{L:CompatRed}, $\aav \rds \bbv \opp \One\pp$ implies $\ccv \opp \aav \rds \ccv \opp \bbv \opp \One\pp$, whence $\ccv \opp \aav \rdhs \ccv \opp \bbv$. On the other hand, $\aav \rds \bbv \opp \One\pp$ implies $\aav \opp \ccv \rds \bbv \opp \One\pp \opp \ccv$. An easy induction from $1 / 1 / \xx \act \Red{2, \xx} = \xx / 1 / 1$ yields the general relation $\One\pp \opp \ccv \rds \ccv \opp \One\qq$ with $\qq = \pp$ for $\pp$ and~$\dh\ccv$ of the same parity, $\qq = \pp + 1$ for~$\pp$ even and $\dh\ccv$ odd, and $\qq = \pp - 1$ for~$\pp$ odd and $\dh\ccv$ even. We deduce $\aav \opp \ccv \rds \bbv \opp \ccv \opp \One\qq$, whence $\aav \opp \ccv \rdhs \bbv \opp \ccv$. \ITEM2 As $\RD\MM$ is included in~$\RDh\MM$, the direct implication is trivial. Conversely, applying~$\Cut{}$ strictly decreases the depth. Hence an $\RDh\MM$-sequence from an $\nn$-multifraction~$\aav$ contains at most~$\nn$ applications of~$\Cut{}$ and, therefore, an infinite $\RDh\MM$-sequence from~$\aav$ must include a (final) infinite $\RD\MM$-subsequence. \end{proof} We conclude with a direct application providing a two-way connection between the congruence~$\simeq$ and the symmetric closure of~$\rdhs$.
This connection is a sort of converse for Lemma~\ref{L:Cut}\ITEM1, and it will be crucial in Section~\ref{S:AppliConv} below. \begin{prop}\label{P:Zigzag} If $\MM$ is a gcd-monoid and $\aav, \bbv$ belong to~$\FR\MM$, then $\aav \simeq \bbv$ holds if and only if there exist $\rr \ge 0$ and multifractions $\ccv^0 \wdots \ccv^{2\rr}$ satisfying \begin{equation}\label{E:Zigzag} \aav = \ccv^0 \rdhs \ccv^1 \antirdhs \ccv^2 \rdhs \ \pdots \ \antirdhs \ccv^{2\rr} = \bbv. \end{equation} \end{prop} \begin{proof} Write $\aav \approx \bbv$ when there exists a zigzag as in~\eqref{E:Zigzag}. As $\rdhs$ is reflexive and transitive, $\approx$ is an equivalence relation. By Lemma~\ref{L:Cut}\ITEM1, $\rdhs$ is included in~$\simeq$, which is symmetric, hence $\approx$ is included in~$\simeq$. Next, by Lemma~\ref{L:Cut} again, $\rdhs$ is compatible with multiplication, hence so is~$\approx$. Hence, $\approx$ is a congruence included in~$\simeq$. As we have $1 \act \Cut = \ef$ and, for every~$\aa$ in~$\MM$, $$\aa / \aa \act \Red{1, \aa}\Cut\Cut = \ef, \quad 1/ \aa / \aa \act \Red{2, \aa}\Cut\Cut\Cut = \ef,$$ the relations $1 \approx \ef$, $\aa / \aa \approx \ef$ and $1/ \aa / \aa \approx \ef$ hold. By definition, $\simeq$ is the congruence generated by the pairs above, hence $\approx$ and $\simeq$ coincide. \end{proof} Using the connection between~$\rds$ and~$\rdhs$, we deduce \begin{coro} If $\MM$ is a gcd-monoid and $\aav, \bbv$ belong to~$\FR\MM$, then $\aav \simeq \bbv$ holds if and only if there exist $\pp, \qq, \rr \ge 0$ and multifractions $\ddv^0 \wdots \ddv^{2\rr}$ satisfying \begin{equation}\label{E:Zigzag1} \aav \opp \One\pp = \ddv^0 \rds \ddv^1 \antirds \ddv^2 \rds \ \pdots \ \antirds \ddv^{2\rr} = \bbv \opp \One\qq. \end{equation} \end{coro} \begin{proof} Assume that $\aav$ and~$\bbv$ are connected as in~\eqref{E:Zigzag}. 
Let $\nn = \max(\dh{\ccv^0} \wdots \dh{\ccv^{2\rr}})$ and, for each~$\ii$, let $\ddv^\ii$ be an $\nn$-multifraction of the form~$\ccv^\ii \opp \One{\mm}$. By Lemma~\ref{L:RedCut}, $\ccv^{2\ii} \rdhs \ccv^{2\ii - 1}$ implies $\ccv^{2\ii} \rds \ccv^{2\ii - 1} \opp \One\pp$ for some~$\pp$, whence, by Lemma~\ref{L:CompatRed}, $\ddv^{2\ii} \rds \ccv^{2\ii - 1} \opp \One\qq$ for some~$\qq$. As $\rds$ preserves depth, the latter multifraction must be~$\ddv^{2\ii - 1}$. The argument for~$\ddv^{2\ii} \rds \ddv^{2\ii + 1}$ is similar. \end{proof} \section{Confluence of reduction}\label{S:Confl} As in the case of free reduction and of any rewrite system, we are interested in the case when the system~$\RD\MM$ and its variant~$\RDh\MM$ are convergent, meaning that every element, here, every multifraction, reduces to a unique irreducible one. In this section, we first recall in Subsection~\ref{SS:ConfConf} the connection between the convergence of~$\RD\MM$ and its local confluence~\eqref{E:LocConf}, and we show that $\RD\MM$ and $\RDh\MM$ are similar in this respect. We thus investigate the possible local confluence of~$\RD\MM$ in Subsection~\ref{SS:LocConf}. This leads us to introduce in Subsection~\ref{SS:3Ore} what we call the $3$-Ore condition. 
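The notions at stake in this section (termination, local confluence, unique irreducible reducts) can be tested mechanically on a finite abstract rewrite system. The following Python sketch is only an illustration, not part of the formal development; the function names and the encoding (a map from an element to its one-step reducts, assumed terminating) are ours:

```python
from itertools import product

def normal_forms(steps, a):
    """Set of irreducible elements reachable from `a`;
    `steps` maps an element to its one-step reducts (terminating)."""
    succ = steps.get(a, [])
    if not succ:
        return {a}          # `a` is irreducible
    return set().union(*(normal_forms(steps, b) for b in succ))

def is_convergent(steps):
    """Every element reduces to a unique irreducible element."""
    return all(len(normal_forms(steps, a)) == 1 for a in steps)

def is_locally_confluent(steps):
    """Every one-step divergence b <- a -> c admits a common reduct;
    for a terminating relation, b and c then share a normal form."""
    return all(normal_forms(steps, b) & normal_forms(steps, c)
               for a in steps
               for b, c in product(steps.get(a, []), repeat=2))
```

On the system $\{a \to b,\ a \to c,\ b \to d,\ c \to d\}$ both tests succeed, whereas $\{a \to b,\ a \to c\}$ fails both, in accordance with Newman's Diamond Lemma recalled in the next subsection.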
\subsection{Convergence, confluence, and local confluence}\label{SS:ConfConf} A rewrite system, here on~$\FR\MM$, is called \emph{confluent} when \begin{equation}\label{E:Conf} \parbox{113mm}{If we have $\aav \rds \bbv$ and $\aav \rds \ccv$, there exists~$\ddv$ satisfying $\bbv \rds \ddv$ and $\ccv \rds \ddv$} \end{equation} (``diamond property''), and it is called \emph{locally confluent} when \begin{equation}\label{E:LocConf} \parbox{113mm}{If we have $\aav \rd \bbv$ and $\aav \rd \ccv$, there exists~$\ddv$ satisfying $\bbv \rds \ddv$ and $\ccv \rds \ddv$.} \end{equation} By Newman's classical Diamond Lemma~\cite{New, DeJ}, a terminating rewrite system is convergent if and only if it is confluent, if and only if it is locally confluent. In the current case, we saw in Prop.\,\ref{P:Termin} and Lemma~\ref{L:Cut} that, under the mild assumption that the ground monoid~$\MM$ is noetherian, the systems~$\RD\MM$ and~$\RDh\MM$ are terminating. So the point is to investigate the possible local confluence of these systems. Once again, $\RD\MM$ and~$\RDh\MM$ behave similarly. \begin{lemm}\label{L:RedRedh} If $\MM$ is a gcd-monoid, then $\RDh\MM$ is locally confluent if and only if $\RD\MM$ is. \end{lemm} \begin{proof} Assume that $\RD\MM$ is locally confluent. To establish that $\RDh\MM$ is locally confluent, it suffices to consider the mixed case $\bbv = \aav \act \Red{\ii, \xx}$, $\ccv = \aav \act \Cut$. So assume that $\aav \act \Cut$ and~$\aav \act \Red{\ii, \xx}$ are defined, with $\xx \not= 1$. Let~$\nn = \dh\aav$. The assumption that $\aav \act \Cut$ is defined implies $\aa_\nn = 1$, whereas the assumption that $\aav \act \Red{\ii, \xx}$ is defined with $\xx \not= 1$ implies that $\xx$ divides~$\aa_{\ii + 1}$ (on the relevant side), whence $\aa_{\ii + 1} \not= 1$. So the only possibility is $\ii + 1 < \nn$. 
Then we immediately check that $\aav \act \Cut\Red{\ii, \xx}$ and $\aav \act \Red{\ii, \xx}\Cut$ are defined, and that they are equal, \ie, \eqref{E:LocConf} is satisfied for $\ddv = \aav \act \Cut\Red{\ii, \xx}$. Conversely, assume that $\RDh\MM$ is locally confluent, and we have $\aav \rd \bbv$ and $\aav \rd \ccv$. By assumption, we have $\bbv \rdhs \ddv$ and $\ccv \rdhs \ddv$ for some~$\ddv$. By Lemma~\ref{L:RedCut}, we deduce the existence of~$\pp, \qq$ satisfying $\bbv \rds \ddv \opp \One\pp$ and $\ccv \rds \ddv \opp \One\qq$ and, as $\rds$ preserves depth, we must have $\pp = \qq$, and $\ddv \opp \One\pp$ provides the expected common $\RRR$-reduct. \end{proof} Thus we can summarize the situation in \begin{prop}\label{P:ConvConf} If $\MM$ is a noetherian gcd-monoid, the following are equivalent: \ITEM1 The rewrite system~$\RDh\MM$ is convergent; \ITEM2 The rewrite system~$\RD\MM$ is convergent; \ITEM3 The rewrite system~$\RD\MM$ is locally confluent, \ie, it satisfies~\eqref{E:LocConf}. \end{prop} \subsection{Local confluence of~$\RD\MM$}\label{SS:LocConf} In order to study the possible local confluence of~$\RD\MM$, we shall assume that a multifraction~$\aav$ is eligible for two rules~$\Red{\ii, \xx}$ and~$\Red{\jj, \yy}$, and try to find a common reduct for $\aav \act \Red{\ii, \xx}$ and~$\aav \act \Red{\jj, \yy}$. The situation primarily depends on the distance between~$\ii$ and~$\jj$. We begin with the case of remote reductions ($\vert\ii - \jj\vert \ge 2$). \begin{lemm}\label{L:Distant} Assume that both $\aav \act \Red{\ii, \xx}$ and $\aav \act \Red{\jj, \yy}$ are defined and $\vert\ii - \jj\vert \ge 2$ holds. Then $\aav \act \Red{\ii, \xx}\Red{\jj, \yy}$ and $\aav \act \Red{\jj, \yy}\Red{\ii, \xx}$ are defined and equal. 
\end{lemm} \begin{proof} The result is straightforward for $\vert\ii-\jj\vert \ge 3$ since, in this case, the three indices $\ii - 1, \ii, \ii + 1$ involved in the definition of~$\Red{\ii, \xx}$ are disjoint from the three indices $\jj - 1, \jj, \jj + 1$ involved in that of~$\Red{\jj, \yy}$ and, therefore, the two actions commute. The case $\vert\ii-\jj\vert = 2$ is not really more difficult. Assume for instance $\ii$ negative in~$\aav$ and $\ii = \jj + 2$ (see Fig.\,\ref{F:LocConfDisj}). Put $\bbv := \aav \act \Red{\ii, \xx}$ and $\ccv := \aav \act \Red{\jj, \yy}$. By definition of~$\Red{\ii, \xx}$, we have $\bb_{\jj + 1} \multe \aa_{\jj + 1}$ and $\bb_\jj = \aa_\jj$, so $\yy \dive \aa_{\jj + 1}$ implies $\yy \dive \bb_{\jj + 1}$, and the assumption that $\aav \act \Red{\jj, \yy}$ is defined implies that $\bbv \act \Red{\jj, \yy}$ is defined. Similarly, we have $\cc_{\ii+1} = \aa_{\ii+1}$ and $\cc_\ii = \aa_\ii$, so the assumption that $\aav \act \Red{\ii, \xx}$ is defined implies that $\ccv \act \Red{\ii, \xx}$ is defined too. Then we have $\bbv \act \Red{\jj, \yy} = \ccv \act \Red{\ii, \xx} = \ddv$, with \begin{multline*} \dd_{\ii - 3} = \aa_{\ii - 3}\yy', \ \yy\dd_{\ii - 2} = \aa_{\ii - 2} \yy' = \yy \lcm \aa_{\ii - 2}, \ \yy\dd_{\ii - 1} = \aa_{\ii - 1}\xx', \ \xx \dd_\ii = \aa_\ii \xx' = \xx \lcm \aa_\ii, \ \xx \dd_{\ii+1} = \aa_{\ii+1}. \end{multline*} The argument for $\ii$ positive in~$\aav$ is symmetric. \end{proof} \begin{figure} \caption{\small Local confluence of reduction for $\jj = \ii - 2$ (case $\ii$ negative in~$\aav$): starting with the grey path, if we can both push~$\xx$ through~$\aa_\ii$ at level~$\ii$ and~$\yy$ through~$\aa_{\ii - 2}$ at level~$\ii - 2$, then the two reductions can be performed in either order, with the same result.} \label{F:LocConfDisj} \end{figure} We turn to $\vert\ii - \jj\vert = 1$. This case is more complicated, but confluence can always be realized. \begin{lemm}\label{L:LocConfSucc} Assume that both $\aav \act \Red{\ii, \xx}$ and $\aav \act \Red{\jj, \yy}$ are defined and $\ii = \jj + 1$ holds.
Then there exist~$\vv$ and~$\ww$ such that $\aav \act \Red{\ii, \xx}\Red{\ii-1,\vv}$ and $\aav \act \Red{\ii-1,\yy} \Red{\ii, \xx} \Red{\ii-2, \ww}$ are defined and equal. \end{lemm} \begin{proof} (See Fig.\,\ref{F:LocConfSucc}.) Assume $\ii$ negative in~$\aav$. Put $\bbv := \aav \act \Red{\ii, \xx}$ and $\ccv := \aav \act \Red{\ii-1,\yy}$. By definition, there exist~$\xx', \yy'$ satisfying \begin{gather*} \bb_{\ii-2} = \aa_{\ii-2}, \quad \bb_{\ii-1} = \aa_{\ii - 1} \xx', \quad \xx\bb_\ii = \aa_\ii\xx' = \xx \lcm \aa_\ii, \quad \xx \bb_{\ii+1} = \aa_{\ii+1},\\ \cc_{\ii-2} = \yy' \aa_{\ii-2}, \quad \cc_{\ii-1}\yy = \yy' \aa_{\ii - 1} = \yy \lcmt \aa_{\ii - 1}, \quad \cc_\ii \yy = \aa_\ii, \quad \cc_{\ii+1} = \aa_{\ii+1}. \end{gather*} As $\aa_\ii$ is~$\cc_\ii \yy$ and $\aa_\ii \xx' = \xx \bb_\ii$ is the right lcm of~$\xx$ and~$\aa_\ii$, Lemma~\ref{L:IterLcm} implies the existence of~$\xx''$, $\uu$, and $\vv$ satisfying \begin{equation}\label{E:AdjCase1} \bb_\ii = \uu \vv \quad \text{with} \quad \cc_\ii \xx'' = \xx \uu = \xx \lcm \cc_\ii \quand \yy \xx' = \xx'' \vv = \yy \lcm \xx''. \end{equation} Let us first consider~$\ccv$. By construction, $\xx$ left divides~$\cc_{\ii + 1}$, which is~$\aa_{\ii + 1}$, and $\xx \lcm \cc_\ii$ exists. Hence $\ccv \act \Red{\ii, \xx}$ is defined. Call it~$\ddv$. The equalities of~\eqref{E:AdjCase1} imply \begin{equation*} \dd_{\ii-2} = \cc_{\ii-2} = \yy' \aa_{\ii - 2}, \quad \dd_{\ii-1} = \cc_{\ii - 1} \xx'', \quad \dd_\ii = \uu, \quad \dd_{\ii+1} = \bb_{\ii+1}. \end{equation*} Next, by~\eqref{E:AdjCase1} again, $\vv$ right divides~$\bb_\ii$, and we have \begin{equation}\label{E:AdjCase2} \yy' \bb_{\ii - 1} = \yy' \aa_{\ii - 1} \xx' = \cc_{\ii - 1} \yy \xx' = \cc_{\ii - 1} \xx'' \vv, \end{equation} which shows that $\vv$ and~$\bb_{\ii - 1}$ admit a common left multiple, hence a left lcm. It follows that $\bbv \act \Red{\ii - 1, \vv}$ is defined. Call it~$\eev$. 
By definition, we have \begin{equation*} \ee_{\ii-2} = \vv' \bb_{\ii - 2} = \vv' \aa_{\ii-2}, \quad \ee_{\ii-1} \vv = \vv' \bb_{\ii - 1} = \vv \lcmt \bb_{\ii - 1}, \quad \ee_\ii \vv = \bb_\ii, \quad \ee_{\ii+1} = \bb_{\ii+1} \end{equation*} (the colored path in Fig.\,\ref{F:LocConfSucc}). Now, merging $ \ee_\ii \vv = \bb_\ii$ with $\bb_\ii = \uu \vv$ in~\eqref{E:AdjCase1}, we deduce $\ee_\ii = \uu = \dd_\ii$. On the other hand, we saw in~\eqref{E:AdjCase2} that $\yy' \bb_{\ii - 1}$, which is also $\dd_{\ii - 1} \vv$, is a common left multiple of~$\vv$ and~$\bb_{\ii - 1}$, whereas, by definition, $\vv' \bb_{\ii - 1}$, which is also $\ee_{\ii - 1} \vv$, is the left lcm of~$\vv$ and~$\bb_{\ii - 1}$. By the definition of a left lcm, there must exist~$\ww$ satisfying \begin{equation}\label{E:AdjCase3} \yy' = \ww \vv' \quand \dd_{\ii - 1} = \ww \ee_{\ii - 1}. \end{equation} From the left equality in~\eqref{E:AdjCase3}, we deduce $\dd_{\ii - 2} = \yy' \aa_{\ii - 2} = \ww \vv' \aa_{\ii - 2} = \ww \ee_{\ii - 2}$. Hence $\eev$ is obtained from~$\ddv$ by left dividing the $(\ii - 2)$nd and $(\ii - 1)$st entries by~$\ww$. This means that $\eev = \ddv \act \Red{\ii - 2, \ww}$ holds, completing the argument in the general case. In the particular case $\ii = 2$, $\jj = 1$, the assumption that $\Red{1, \yy}$ is defined implies $\yy \divt \aa_{\ii - 1}$, which, with the above notation, implies $\yy'= 1$. In this case, we necessarily have $\ww = \vv' = 1$, so that $\Red{1, \vv}$ is well defined, and the result remains valid at the expense of forgetting about the term~$\Red{\ii-2, \ww}$, which is then trivial. Finally, the case $\ii$ positive in~$\aav$ is addressed similarly. 
\end{proof} \begin{figure} \caption{\small Local confluence for $\jj = \ii - 1$ (case $\ii$ negative in~$\aav$): starting with the grey path, if we can both push~$\xx$ through~$\aa_\ii$ at level~$\ii$ and $\yy$ through~$\aa_{\ii - 1}$ at level~$\ii - 1$, then the two resulting multifractions admit a common reduct.} \label{F:LocConfSucc} \end{figure} \begin{rema} Note that, in Lemma~\ref{L:LocConfSucc}, the parameters~$\vv$ and~$\ww$ occurring in the confluence solutions depend not only on the initial parameters~$\xx, \yy$, but also on the specific multifraction~$\aav$. \end{rema} The last case is $\ii = \jj$, \ie, two reductions at the same level. Here an extra condition appears. \begin{lemm}\label{L:LocConfLcm} Assume that $\aav \act \Red{\ii, \xx}$ and $\aav \act \Red{\ii, \yy}$ are defined, with $\ii$ negative $($\resp positive$)$ in~$\aav$, and $\aa_\ii, \xx$, and $\yy$ admit a common right $($\resp left$)$ multiple. Then $\aav \act \Red{\ii, \xx}\Red{\ii, \vv}$ and $\aav \act \Red{\ii, \yy} \Red{\ii, \ww}$ are defined and equal, where $\vv$ and~$\ww$ are defined by $\xx \lcm \yy = \xx\vv = \yy\ww$ $($\resp $\xx \lcmt \yy = \vv\xx = \ww\yy$$)$. \end{lemm} \begin{proof} (See Fig.\,\ref{F:LocConfLcm}.) Assume $\ii$ negative in~$\aav$. Put $\bbv:= \aav \act \Red{\ii, \xx}$ and $\ccv := \aav \act \Red{\ii, \yy}$. By assumption, $\aa_{\ii + 1}$ is a right multiple of~$\xx$ and of~$\yy$, hence $\xx \lcm \yy$ exists and $\aa_{\ii + 1}$ is a right multiple of the latter. Write $\zz = \xx \lcm \yy = \xx \vv = \yy \ww$. Thus $\zz$ left divides~$\aa_{\ii+1}$. On the other hand, by associativity of the lcm, the assumption that $\xx$, $\yy$, and $\aa_\ii$ admit a common right multiple implies that $\zz$ and $\aa_\ii$ admit a right lcm. Hence $\aav \act \Red{\ii, \zz}$ is defined.
By Lemma~\ref{L:IterRed}, we deduce $\aav \act \Red{\ii, \zz} = \aav \act \Red{\ii, \xx} \Red{\ii, \vv} = \aav \act \Red{\ii, \yy} \Red{\ii, \ww}$. The case of $\ii$ positive in~$\aav$ is symmetric. The particular case $\ii = 1$ raises no problem, since, if $\xx$ and $\yy$ right divide~$\aa_1$, then so does their left lcm~$\zz$. \end{proof} \begin{figure} \caption{\small Local confluence for $\ii = \jj$ (case $\ii$ negative in~$\aav$): if a global common multiple of~$\xx$, $\yy$, and~$\aa_\ii$ exists and $\xx$ and~$\yy$ cross~$\aa_\ii$, then so does their right lcm.} \label{F:LocConfLcm} \end{figure} \begin{rema} Inspecting the above proofs shows that allowing arbitrary common multiples rather than requiring lcms in the definition of reduction would not really change the situation, but would merely force possible extra division steps, making some arguments more tedious. In any case, irreducible multifractions are the same, since a multifraction may be irreducible only if adjacent entries admit no common divisor (on the relevant side). \end{rema} \subsection{The $3$-Ore condition}\label{SS:3Ore} The results of Subsection~\ref{SS:LocConf} show that reduction of multifractions is close to being locally confluent, \ie, to satisfying the implication~\eqref{E:LocConf}: the only possible failure arises in the case when $\aav \act \Red{\ii, \xx}$ and $\aav \act \Red{\ii, \yy}$ are defined but the elements~$\xx$, $\yy$, and~$\aa_\ii$ admit no common multiple. Here we consider those monoids for which such a situation is excluded. \begin{defi} We say that a monoid~$\MM$ satisfies the \emph{right} (\resp \emph{left}) \emph{$3$-Ore condition} if \begin{equation}\label{E:3Ore} \parbox{128mm}{if three elements of~$\MM$ pairwise admit a common right $($\resp left$)$ multiple, \\\null then they admit a global common right $($\resp left$)$ multiple.} \end{equation} Say that $\MM$ satisfies the \emph{$3$-Ore condition} if it satisfies the right and the left $3$-Ore conditions.
\end{defi} The terminology refers to Ore's Theorem: a (cancellative) monoid is usually said to satisfy the Ore condition if any two elements admit a common multiple, and this could also be called the \emph{$2$-Ore condition}, as it involves pairs of elements. Condition~\eqref{E:3Ore} is similar, but involves triples. Diagrammatically, the $3$-Ore condition asserts that every tentative lcm cube whose first three faces exist can be completed. The $2$-Ore condition implies the $3$-Ore condition (a common multiple of~$\aa$ and of a common multiple of~$\bb$ and~$\cc$ is a common multiple of~$\aa$, $\bb$, and~$\cc$), but the latter is weaker. \begin{exam} A free monoid satisfies the $3$-Ore condition. Indeed, two elements~$\aa, \bb$ admit a common right multiple only if one is a prefix of the other, so, if $\aa, \bb, \cc$ pairwise admit common right multiples, they are all prefixes of the longest of them, and therefore admit a global right multiple. \end{exam} Merging Lemmas~\ref{L:Distant}, \ref{L:LocConfSucc}, and~\ref{L:LocConfLcm} with Prop.\,\ref{P:ConvConf}, we obtain \begin{prop}\label{P:3OreConv} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then the systems~$\RD\MM$ and $\RDh\MM$ are convergent. \end{prop} It turns out that the implication of Prop.\,\ref{P:3OreConv} is almost an equivalence: whenever (a condition slightly stronger than) the convergence of~$\RD\MM$ holds, $\MM$ must satisfy the $3$-Ore condition; we refer to~\cite{Diu} for the proof (which requires the extended framework of signed multifractions developed there). Before looking at the applications of the convergence of~$\RD\MM$, we conclude this section with a criterion for the $3$-Ore condition. It involves the basic elements of Def.\,\ref{D:Primitive}.
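In the free monoid of the example above, the $3$-Ore condition can be checked mechanically, since two words admit a common right multiple exactly when one is a prefix of the other. The following Python sketch (the encoding and the function names are ours, for illustration only) makes this prefix test explicit:

```python
def common_right_multiple(u, v):
    """In a free monoid, u and v admit a common right multiple
    iff one is a prefix of the other; the right lcm is then the longer word."""
    if v.startswith(u):
        return v
    if u.startswith(v):
        return u
    return None  # no common right multiple

def right_3_ore(a, b, c):
    """Condition (E:3Ore) for the triple (a, b, c): if the three words
    pairwise admit common right multiples, they admit a global one."""
    pairwise = [common_right_multiple(x, y) for x, y in ((a, b), (a, c), (b, c))]
    if any(m is None for m in pairwise):
        return True  # the hypothesis fails, nothing to check
    # all three words are prefixes of the longest, hence it is a global multiple
    longest = max((a, b, c), key=len)
    return all(longest.startswith(w) for w in (a, b, c))
```

As expected from the example, `right_3_ore` returns `True` on every triple of words; in a monoid with relations, the prefix test would have to be replaced by a genuine right-lcm computation.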
\begin{prop}\label{P:3OreInd} A noetherian gcd-monoid~$\MM$ satisfies the $3$-Ore condition if and only if it satisfies the following two conditions: \begin{gather} \label{E:3OreInd1} \parbox{128mm}{if three right basic elements of~$\MM$ pairwise admit a common right multiple, \\\null then they admit a global common right multiple.}\\ \label{E:3OreInd2} \parbox{128mm}{if three left basic elements of~$\MM$ pairwise admit a common left multiple, \\\null then they admit a global common left multiple.} \end{gather} \end{prop} \begin{proof} The condition is necessary, since \eqref{E:3OreInd1} and \eqref{E:3OreInd2} are instances of~\eqref{E:3Ore}. Conversely, assume that $\MM$ is a noetherian gcd-monoid satisfying~\eqref{E:3OreInd1}. Let $\XX$ be the set of right basic elements in~$\MM$. We shall prove that $\MM$ satisfies the right $3$-Ore condition. Let us say that $\OOO(\aa, \bb, \cc)$ holds if either $\aa, \bb, \cc$ have a common right multiple, or at least two of them have no common right multiple. Then \eqref{E:3OreInd1} says that $\OOO(\aa, \bb, \cc)$ is true for all $\aa, \bb, \cc$ in~$\XX$, and our aim is to prove that $\OOO(\aa, \bb, \cc)$ is true for all $\aa, \bb, \cc$ in~$\MM$. We shall prove using induction on~$\mm$ the property \begin{equation}\label{E:Pm} \tag{$\PPP_\mm$} \parbox{128mm}{$\OOO(\aa, \bb, \cc)$ holds for all $\aa, \bb, \cc$ with $\aa \in \XX^\pp$, $\bb \in \XX^\qq$, $\cc \in \XX^\rr$ and $\pp + \qq + \rr \le \mm$.} \end{equation} As $\OOO(\aa, \bb, \cc)$ is trivial if one of~$\aa, \bb, \cc$ is~$1$, \ie, when one of $\pp, \qq, \rr$ is zero, the first nontrivial case is $\mm = 3$ with $\pp = \qq = \rr = 1$, and then \eqref{E:3OreInd1} gives the result. So $(\PPP_3)$ is true. Assume now $\mm \ge 4$, and let $\aa, \bb, \cc$, with $\aa \in \XX^\pp$, $\bb \in \XX^\qq$, $\cc \in \XX^\rr$ and $\pp + \qq + \rr \le \mm$, pairwise admit common right multiples. Then at least one of~$\pp, \qq, \rr$ is~$\ge 2$, say $\rr \ge 2$.
Write $\cc = \zz \cc'$ with $\zz \in \XX$ and $\cc' \in \XX^{\rr-1}$. By assumption, $\aa \lcm \cc$ is defined, so, by Lemma~\ref{L:IterLcm}, $\aa \lcm \zz$ exists and so does $\aa' \lcm \cc'$, where $\aa'$ is defined by $\aa \lcm \zz = \zz \aa'$. Similarly, $\bb \lcm \zz$ and $\bb' \lcm \cc'$ exist, where $\bb'$ is defined by $\bb \lcm \zz = \zz \bb'$ (see Fig.\,\ref{F:Pm}). Then $\aa, \bb$, and~$\zz$ pairwise admit common right multiples, and one has $\pp + \qq + 1 < \mm$, so, by the induction hypothesis, they admit a global common right multiple and, therefore, $\aa' \lcm \bb'$ is defined. On the other hand, as $\XX$ is RC-closed, $\aa \in \XX^\pp$ implies $\aa' \in \XX^\pp$: indeed, assuming $\aa = \xx_1 \pdots \xx_\pp$ with $\xx_1 \wdots \xx_\pp \in \XX$, Lemma~\ref{L:IterLcm} implies $\aa' = \xx'_1 \pdots \xx'_\pp$ with $\xx'_\ii$ and $\zz_\ii$ inductively defined by $\zz_0 = \zz$ and $\xx_\ii \lcm \zz_{\ii - 1} = \zz_{\ii - 1} \xx'_\ii = \xx_\ii \zz_\ii$ for $1 \le \ii \le \pp$. As $\XX$ is RC-closed, $\xx_\ii \in \XX$ implies $\xx'_\ii \in \XX$. Similarly, $\bb \in \XX^\qq$ implies $\bb' \in \XX^\qq$. So $\aa', \bb'$, and~$\cc'$ belong to~$\XX^\pp, \XX^\qq$, and~$\XX^{\rr-1}$. We saw that $\aa' \lcm \bb'$ exists. On the other hand, the assumption that $\aa \lcm \cc$ and $\bb \lcm \cc$ exist implies that $\aa' \lcm \cc'$ and $\bb' \lcm \cc'$ do. As we have $\pp + \qq + (\rr - 1) < \mm$, the induction hypothesis implies that $\aa', \bb', \cc'$ admit a common right multiple~$\dd'$, and then $\zz \dd'$ is a common right multiple for~$\aa, \bb, \cc$. Hence $(\PPP_\mm)$ is true. Now, as $\XX$ generates~$\MM$, every element of~$\MM$ lies in~$\XX^\pp$ for some~$\pp$. Hence the validity of $(\PPP_\mm)$ for every~$\mm$ implies that $\MM$ satisfies the right $3$-Ore condition. A symmetric argument using left basic elements gives the left $3$-Ore condition, whence, finally, the full $3$-Ore condition.
\end{proof} \begin{figure} \caption{\small Induction for Prop.\,\ref{P:3OreInd}.} \label{F:Pm} \end{figure} It follows that, under mild assumptions (see Subsection~\ref{SS:WordPb} below), the $3$-Ore condition is a decidable property of a presented noetherian gcd-monoid. \begin{rema} A simpler version of the above argument works for the $2$-Ore condition: any two elements in a noetherian gcd-monoid admit a common right multiple (\resp left multiple) if and only if any two right basic (\resp left basic) elements admit one. \end{rema} \section{Applications of convergence}\label{S:AppliConv} We now show that, as can be expected, multifraction reduction provides full control of the enveloping group~$\EG\MM$ when the rewrite system~$\RD\MM$ is convergent. We shall successively address the representation of the elements of~$\EG\MM$ by irreducible multifractions (Subsection~\ref{SS:Repr}), the decidability of the word problem for~$\EG\MM$ (Subsection~\ref{SS:WordPb}), and what we call Property~$\PropH$ (Subsection~\ref{SS:PropH}). \subsection{Representation of the elements of~$\EG\MM$}\label{SS:Repr} The definition of convergence and the connection between the congruence~$\simeq$ defining the enveloping group and $\RRR$-reduction easily imply: \begin{prop}\label{P:AppliConv} If $\MM$ is a noetherian gcd-monoid and $\RD\MM$ is convergent, then every element of~$\EG\MM$ is represented by a unique $\RRRh$-irreducible multifraction; for all~$\aav, \bbv$ in~$\FR\MM$, we have \begin{equation} \label{E:AppliConv1} \aav \simeq \bbv \quad \Longleftrightarrow \quad \redh(\aav) = \redh(\bbv), \end{equation} where $\redh(\aav)$ is the unique $\RDh\MM$-irreducible reduct of~$\aav$, and, in particular, \begin{equation} \label{E:AppliConv2} \text{$\aav$ represents~$1$ in~$\EG\MM$} \quad \Longleftrightarrow \quad \aav \rdhs \ef.
\end{equation} The monoid~$\MM$ embeds in~$\EG\MM$, and the product of~$\EG\MM$ is determined by \begin{equation}\label{E:AppliConv3} \can(\aav) \opp \can(\bbv) = \can(\redh(\aav \opp \bbv)). \end{equation} \end{prop} \begin{proof} By Prop.\,\ref{P:ConvConf}, the assumption that $\RD\MM$ is convergent implies that $\RDh\MM$ is convergent as well, so $\redh$ is well defined on~$\FR\MM$ and, by Cor.\,\ref{C:Irred}, every $\simeq$-class contains at least one $\RDh\MM$-irreducible multifraction. Hence, the unique representation result will follow from~\eqref{E:AppliConv1}. Assume $\aav \simeq \bbv$. By Prop.\,\ref{P:Zigzag}, there exists a zigzag $\ccv^0 \wdots \ccv^{2\rr}$ connecting~$\aav$ to~$\bbv$ as in~\eqref{E:Zigzag}. Using induction on~$\kk \ge 0$, we prove $\ccv^\kk \rdhs \redh(\aav)$ for every~$\kk$. For $\kk = 0$, the result is true by definition. For $\kk$ even, the result for~$\kk$ follows from the result for~$\kk - 1$ and the transitivity of~$\rdhs$. Finally, assume $\kk$ odd. We have $\ccv^{\kk - 1} \rdhs \ccv^\kk$ by~\eqref{E:Zigzag} and $\ccv^{\kk - 1} \rdhs \redh(\aav)$ by induction hypothesis. By definition of~$\redh$, this implies $\redh(\ccv^{\kk - 1}) = \redh(\ccv^\kk)$ and $\redh(\ccv^{\kk - 1}) = \redh(\aav)$, whence $\redh(\ccv^\kk) = \redh(\aav)$. For $\kk = 2\rr$, we find $\redh(\bbv) = \redh(\aav)$. Hence $\aav \simeq \bbv$ implies $\redh(\aav) = \redh(\bbv)$. The converse implication follows from Lemma~\ref{L:Cut}. So \eqref{E:AppliConv1} is established, and \eqref{E:AppliConv2} follows since, $\ef$ being $\RRRh$-irreducible, the latter is a particular instance of~\eqref{E:AppliConv1}. Next, no reduction of~$\RD\MM$ applies to a multifraction of depth one, \ie, to an element of~$\MM$, hence we have $\redh(\xx) = \xx$ for $\xx \not= 1$ and $\redh(1) = \ef$. Hence $\xx \not= \yy$ implies $\redh(\xx) \not= \redh(\yy)$, whence $\can(\xx) \not= \can(\yy)$.
So the restriction of~$\can$ to~$\MM$ is injective, \ie, $\MM$ embeds in~$\EG\MM$. Finally, \eqref{E:AppliConv1} implies $\can(\aav) = \can(\redh(\aav))$, and \eqref{E:AppliConv3} directly follows from~$\can$ being a morphism. \end{proof} Merging with~Prop.\,\ref{P:3OreConv}, we obtain the following result, which includes Item~\ITEM1 of Theorem~A in the introduction: \begin{thrm}\label{T:MainA1} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then every element of~$\EG\MM$ is represented by a unique $\RDh\MM$-irreducible multifraction. The monoid~$\MM$ embeds in the group~$\EG\MM$, and \eqref{E:AppliConv1}, \eqref{E:AppliConv2}, and \eqref{E:AppliConv3} are valid in~$\MM$. \end{thrm} We now quickly mention a few further consequences of the unique representation result. First, there exists a new, well-defined integer parameter for the elements of~$\EG\MM$: \begin{defi}\label{D:Depth} If $\MM$ is a gcd-monoid and $\RD\MM$ is convergent, then, for~$\gg$ in~$\EG\MM$, the \emph{depth}~$\dh\gg$ of~$\gg$ is the depth of the (unique) $\RDh\MM$-irreducible multifraction that represents~$\gg$. \end{defi} \begin{exam} The only element of depth~$0$ is~$1$, whereas the elements of depth~$1$ are the nontrivial elements of~$\MM$. The elements of depth~$2$ are the ones that can be expressed as a (right) fraction~$\aa / \bb$ with $\aa, \bb$ in~$\MM$, etc. If $\MM$ is a Garside monoid, and, more generally, if $\EG\MM$ is a group of right fractions for~$\MM$, every element of~$\EG\MM$ has depth at most~$2$. When $\EG\MM$ is a group of left fractions for~$\MM$, every element of~$\EG\MM$ has depth at most~$3$, possibly a sharp bound: for instance, in the Baumslag--Solitar monoid $\MON{\tta, \ttb}{\tta^2\ttb = \ttb\tta}$, the element $\ttb\inv\tta\inv\ttb$ is represented by the irreducible multifraction~$1/\tta\ttb/\ttb$ and hence has depth~$3$. On the other hand, a non-cyclic free group contains elements of arbitrarily large depth.
\end{exam} \begin{ques} Does there exist a gcd-monoid~$\MM$ such that $\RD\MM$ is convergent and the least upper bound of the depth on~$\EG\MM$ is finite and $\ge 4$? \end{ques} The following inequalities show that, when it exists, the depth behaves like a sort of gradation on the group~$\EG\MM$. They easily follow from the definition of the product on~$\FR\MM$ and can be seen to be optimal: \begin{gather} \label{E:DepthInv} \dh{\gg\inv} = \begin{cases} \dh\gg \text{\ or\ } \dh\gg + 1 &\text{for $\dh\gg$ odd,}\\ \dh\gg \text{\ or\ } \dh\gg - 1 &\text{for $\dh\gg$ even,} \end{cases}\\ \label{E:DepthProd} \max(\dh\gg - \dh\hh\sh, \dh\hh - \dh\gg\sh) \le \dh{\gg\hh} \le \begin{cases} \dh\gg + \dh\hh - 1 &\text{for $\dh\gg$ odd,}\\ \dh\gg + \dh\hh &\text{for $\dh\gg$ even,} \end{cases} \end{gather} with $\nn\sh$ standing for $\nn$ if $\nn$ is even and for $\nn + 1$ if $\nn$ is odd. By the way, changing the definition so as to ensure $\dh{\gg\inv} = \dh\gg$ seems difficult: thinking of the signed multifractions of~\cite{Diu}, we could wish to forget about the first entry when it is trivial, but this is useless: for instance, in the right-angled Artin-Tits monoid $\MON{\fr{a, b, c}}{\fr{ab=ba, bc=cb}}$, if $\fr{a/bc/a}$ represents $\gg$, then $\gg\inv$ is represented by~$\fr{b/a/c/a}$, leading in any case to $\dh{\gg\inv} = \dh\gg + 1$. In the same vein, we can associate with every nontrivial element of~$\EG\MM$ an element of~$\MM$: \begin{defi} If $\MM$ is a gcd-monoid and $\RD\MM$ is convergent, then, for~$\gg$ in~$\EG\MM \setminus \{1\}$, the \emph{denominator}~$\Den\gg$ of~$\gg$ is the last entry of the $\RRRh$-irreducible multifraction representing~$\gg$.
\end{defi} Then the usual characterization of the denominator of a fraction extends into: \begin{prop} If $\MM$ is a gcd-monoid and $\RD\MM$ is convergent, then, for every~$\gg$ in~$\EG\MM$ with $\dh\gg$ even $($\resp odd$)$, $\Den\gg$ is the $\dive$-smallest $($\resp $\divet$-smallest$)$ element~$\aa$ of~$\MM$ satisfying $\dh{\gg\aa} < \dh\gg$ $($\resp $\dh{\gg\aa\inv} < \dh\gg$$)$. \end{prop} We skip the (easy) verification. \subsection{The word problem for~$\EG\MM$}\label{SS:WordPb} In view of~\eqref{E:AppliConv2} and Lemma~\ref{L:WP1}, one might think that reduction directly solves the word problem for the group~$\EG\MM$ when $\RD\MM$ is convergent. This is essentially true, but some care and some additional assumptions are needed. The problem is the decidability of the relation~$\rds$, \ie, the question of whether, starting with a (convenient) presentation of a gcd-monoid, one can effectively decide whether, say, the multifraction represented by a word reduces to~$\one$. The question is \emph{not} trivial, because the existence of common multiples need not be decidable in general. However, we shall see that mild finiteness assumptions are sufficient. If $\SS$ is a generating subfamily of a monoid~$\MM$, then, for~$\aa$ in~$\MM$, we denote by~$\LG\SS\aa$ the minimal length of a word in~$\SS$ representing~$\aa$. \begin{lemm}\label{L:BoundCM} If $\MM$ is a noetherian gcd-monoid, $\SS$ is the atom set of~$\MM$, and $\LG\SS\xx \le C$ holds for every right basic element~$\xx$ of~$\MM$, then, for all~$\aa, \bb$ in~$\MM$ such that $\aa \lcm \bb$ exists, we have \begin{equation} \LG\SS{\aa \lcm \bb} \le C(\LG\SS\aa + \LG\SS\bb). \end{equation} \end{lemm} \begin{proof} Let $\XX$ be the set of right basic elements in~$\MM$. Assume that $\aa, \bb$ are elements of~$\MM$ such that $\aa \lcm \bb$ exists. Let $\pp:= \LG\SS\aa$, $\qq := \LG\SS\bb$. 
Then $\aa$ lies in~$\SS^\pp$ (\ie, it can be expressed as the product of $\pp$ elements of~$\SS$), hence a fortiori in~$\XX^\pp$, since $\SS$ is included in~$\XX$. Similarly, $\bb$ lies in~$\XX^\qq$. Now, as already mentioned in the proof of Prop.\,\ref{P:3OreInd}, a straightforward induction using Lemma~\ref{L:IterLcm} shows that, if $\aa$ and~$\bb$ lie in~$\XX^\pp$ and~$\XX^\qq$ and $\aa \lcm \bb$ exists, then one has $\aa \lcm \bb = \aa\bb' = \bb\aa'$ with $\aa' \in \XX^\pp$ and $\bb'$ in~$\XX^\qq$. We conclude that $\aa \lcm \bb$ lies in~$\XX^{\pp + \qq}$, and, therefore, we have $\LG\SS{\aa\lcm\bb} \le C(\pp + \qq)$. \end{proof} \begin{lemm}\label{L:Decid1} Assume that $\MM$ is a strongly noetherian gcd-monoid with finite\-ly many basic elements. Let $\SS$ be the atom set of~$\MM$. Then, for all~$\ii$ and~$\uu$ in~$\SS^*$, the relation ``\,$\clp\ww \act \Red{\ii, \clp\uu}$ is defined'' is decidable and the map ``\,$\ww \mapsto \clp\ww \act \Red{\ii, \clp\uu}$'' is computable. \end{lemm} \begin{proof} By~\cite[Thrm.~4.1]{Dfx}, $\MM$ admits a finite presentation~$(\SS, \RR)$: it suffices, for all~$\ss, \tt$ such that $\ss$ and~$\tt$ admit a common right multiple in~$\MM$, to choose two words~$\uu, \vv$ such that both $\ss\uu$ and~$\tt\vv$ represent~$\ss \lcm \tt$ in~$\MM$ and to put in~$\RR$ the relation~$\ss\uu = \tt\vv$. By definition, $\clp\ww \act \Red{\ii, \clp\uu}$ is defined if and only if, calling $(\ww_1 \wdots \ww_\nn)$ the decomposition~\eqref{E:CanDec} of~$\ww$, the elements~$\clp\uu$ and~$\clp{\ww_\ii}$ admit a common multiple, and $\clp\uu$ divides~$\clp{\ww_{\ii+1}}$ (both on the relevant side).
As seen in the proof of Prop.\,\ref{P:WordPbMon}, the set of all words in~$\SS$ that represent~$\clp{\ww_{\ii + 1}}$ is finite, hence we can decide $\clp\uu \dive \clp{\ww_{\ii + 1}}$ (\resp $\clp\uu \divet \clp{\ww_{\ii + 1}}$) by exhaustively enumerating the class of~$\ww_{\ii + 1}$ and checking whether some word in this class begins (\resp ends) with~$\uu$. Deciding the existence of $\clp\uu \lcm \clp{\ww_\ii}$ is a priori more difficult, because we do not remain inside some fixed class. However, by Lemma~\ref{L:BoundCM}, $\clp\uu \lcm \clp{\ww_\ii}$ exists if and only if, calling~$C$ the sup of the lengths of the (finitely many) words that represent a basic element of~$\MM$, there exist two equivalent words of length at most $C (\LG{}\uu + \LG{}{\ww_\ii})$ that respectively begin with~$\uu$ and with~$\ww_\ii$. This can be tested in finite time by an exhaustive search. Finally, when $\clp\ww \act \Red{\ii, \clp\uu}$ is defined, computing its value is easy, since it amounts to performing multiplications and divisions in~$\MM$, and the word problem for~$\MM$ is decidable. \end{proof} \begin{prop}\label{P:Decid} If $\MM$ is a strongly noetherian gcd-monoid with finite\-ly many basic elements and atom set~$\SS$, then the relations $\clp\ww \rds \one$ and $\clp\ww \rdhs \ef$ on~$(\SS \cup \INV\SS)^*$ are decidable. \end{prop} \begin{proof} For $\aav$ in~$\FR\MM$, consider the tree~$T_{\aav}$, whose nodes are pairs~$(\bbv, \ss)$ with $\bbv$ a reduct of~$\aav$ and $\ss$ a finite sequence in~$\NNNN \times \SS$: the root of~$T_{\aav}$ is~$(\aav, \ew)$, and the sons of~$(\bbv, \ss)$ are all pairs $(\bbv \act \Red{\ii, \xx}, \ss {}^\frown (\ii, \xx))$ such that $\bbv \act \Red{\ii, \xx}$ is defined. As $\SS$ is finite, the number of pairs~$(\ii, \xx)$ with $\xx$ in~$\SS$ and $\bbv \act \Red{\ii, \xx}$ defined is finite, so each node in~$T_{\aav}$ has finitely many immediate successors.
On the other hand, as $\MM$ is noetherian, $\RD\MM$ is terminating and, therefore, $T_{\aav}$ has no infinite branch. Hence, by K\"onig's lemma, $T_{\aav}$ is finite. Therefore, starting from a word~$\ww$ in~$\SS \cup \SSb$ and applying Lemma~\ref{L:Decid1}, we can exhaustively construct~$T_{\clp\ww}$. Once this is done, deciding $\clp\ww \rds \one$ (or, more generally, $\clp\ww \rds \clp{\ww'}$ for any~$\ww'$) is straightforward: it suffices to check whether $\one$ (or $\clp{\ww'}$) occurs in~$T_{\clp\ww}$, which amounts to checking finitely many $\equivp$-equivalences of words. The argument for~$\RDh\SS$ is similar, mutatis mutandis. \end{proof} Then we can solve the word problem for~$\EG\MM$: \begin{prop}\label{P:WordPb} If $\MM$ is a strongly noetherian gcd-monoid with finitely many basic elements and $\RD\MM$ is convergent, the word problem for~$\EG\MM$ is decidable. \end{prop} \begin{proof} Let $\SS$ be the atom set of~$\MM$. Then Prop.\,\ref{P:Decid} states the decidability of the relation $\clp\ww \rdhs \ef$ for words in~$\SS \cup \SSb$. By~\eqref{E:AppliConv2}, this relation is equivalent to $\clp\ww \simeq 1$, hence, by Lemma~\ref{L:WP1}, to $\ww$ representing~$1$ in~$\EG\MM$. \end{proof} Merging with Prop.\,\ref{P:3OreConv}, we deduce the second part of Theorem~A in the introduction: \begin{thrm}\label{T:MainA2} If $\MM$ is a strongly noetherian gcd-monoid satisfying the $3$-Ore condition and containing finitely many basic elements, the word problem for~$\EG\MM$ is decidable. \end{thrm} \subsection{Property~$\PropH$}\label{SS:PropH} We conclude with a third application of semi-convergence involving the property introduced in~\cite{Dia} and called Property~$\PropH$ in~\cite{Dib} and~\cite{GoR}.
We say that a presentation~$(\SS, \RR)$ of a monoid~$\MM$ satisfies \emph{Property~$\PropH$} if a word~$\ww$ in~$\SS \cup \SSb$ represents~$1$ in~$\EG\MM$ if and only if one can go from~$\ww$ to the empty word using only \emph{special} transformations of the following four types (we recall that $\equivp$ is the congruence on~$\SS^*$ generated by~$\RR$): - replacing a positive factor~$\uu$ of~$\ww$ (no letter~$\INV\ss$) by~$\uu'$ with $\uu' \equivp \uu$, - replacing a negative factor~$\INV\uu$ of~$\ww$ (no letter~$\ss$) by~$\INV{\uu'}$, with $\uu' \equivp \uu$, - deleting some length two factor $\INV\ss \ss$ or replacing some length two factor~$\INV\ss \tt$ with~$\vv \INV\uu$ such that $\ss\vv = \tt\uu$ is a relation of~$\RR$ (``right reversing'' relation~$\rev$ of~\cite{Dia}), - deleting some length two factor $\ss\INV\ss$ or replacing some length two factor~$\ss\INV\tt$ with~$\INV\uu\vv$ such that $\uu\ss = \vv\tt$ is a relation of~$\RR$ (``left reversing'' relation~$\revt$ of~\cite{Dia}). \noindent All special transformations replace a word with another word that represents the same element in~$\EG\MM$, and the point is that new trivial factors~$\ss\INV\ss$ or~$\INV\ss\ss$ are never added: words may grow longer (if some relation of~$\RR$ involves a word of length~$\ge 3$), but, in some sense, they must become simpler, a situation directly reminiscent of Dehn's algorithm for hyperbolic groups, see~\cite[Sec.\,1.2]{Dib} for precise results in this direction. Let us say that a presentation~$(\SS, \RR)$ of a gcd-monoid~$\MM$ is a \emph{right lcm presentation} if $\RR$ contains one relation for each pair~$(\ss, \tt)$ in~$\SS \times \SS$ such that $\ss$ and~$\tt$ admit a common right multiple and this relation has the form $\ss\vv = \tt\uu$ where both $\ss\vv$ and $\tt\uu$ represent the right lcm $\ss \lcm \tt$. A left lcm presentation is defined symmetrically. By~\cite[Thrm.~4.1]{Dfx}, every noetherian gcd-monoid admits (left and right) lcm presentations.
For instance, the standard presentation of an Artin-Tits monoid is an lcm presentation on both sides. \begin{prop}\label{P:PropH} If $\MM$ is a gcd-monoid and $\RD\MM$ is convergent, then Property~$\PropH$ is true for every presentation of~$\MM$ that is an lcm-presentation on both sides. \end{prop} \begin{proof}[Proof (sketch)] Assume that $(\SS, \RR)$ is the involved presentation. Let $\ww$ be a word in~$\SS \cup \SSb$. Then $\ww$ represents~$1$ in~$\EG\MM$ if and only if $\clp\ww \rds \one$ holds, where we recall that $\clp\ww$ is the multifraction $\clp{\ww_1} / \clp{\ww_2} / \clp{\ww_3} / \pdots$ assuming that the parsing of~$\ww$ is $\ww_1 \INV{\ww_2} \ww_3 \pdots$. So the point is to check that, starting from a sequence of positive words $(\uu_1 \wdots \uu_\nn)$ and~$\ss$ in~$\SS$, we can construct a sequence $(\vv_1 \wdots \vv_\nn)$ satisfying $$\clp{\vv_1} \sdots \clp{\vv_\nn} = \clp{\uu_1} \sdots \clp{\uu_\nn} \act \Red{\ii, \ss}$$ (assuming that the latter is defined) by using only special transformations. This is indeed the case, as we can take (for $\ii$ negative in~$\aav$) $\vv_\kk = \uu_\kk$ for $\kk \not= \ii - 1, \ii, \ii + 1$, and $$\vv_{\ii-1} = \uu_{\ii - 1} \ss', \quad \INV\ss \uu_\ii \rev \vv_\ii \INV{\ss'}, \quad \uu_{\ii + 1} \equivp \ss \vv_{\ii +1},$$ where $\rev$ is the right reversing relation alluded to above, which determines a right lcm~\cite{Dia, Dir}. \end{proof} \begin{coro}\label{C:PropH} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then Property~$\PropH$ is true for every presentation of~$\MM$ that is an lcm-presentation on both sides. \end{coro} By Prop.\,\ref{P:AT3Ore} (below), every Artin-Tits monoid of type~FC satisfies the $3$-Ore condition, hence is eligible for Cor.\,\ref{C:PropH}: this provides a new, alternative proof of the main result in~\cite{Dib}.
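To see the special transformations at work on the smallest possible example, consider the free abelian monoid $\MON{\fr{a, b}}{\fr{ab=ba}}$, whose standard presentation is an lcm presentation on both sides and whose right reversing takes the simple form $\INV\ss\tt \rev \tt\INV\ss$ for $\ss \not= \tt$. The following sketch (ours, purely illustrative, with ad hoc names) right-reverses a word into a fraction $\uu\INV\vv$ and then tests $\uu \equivp \vv$, which in this commutative case is just equality of letter multisets:

```python
# Toy illustration (ours): special transformations in the free abelian
# monoid <a, b | ab = ba>.  Letters 'a', 'b' are the generators, 'A', 'B'
# their inverses.  The lcm relations yield the right reversing rules
#   s^-1 s -> (empty)   and   s^-1 t -> t s^-1  (s != t),
# which bring every word to a fraction u v^-1.
def right_reverse(word):
    w = list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(w) - 1):
            s, t = w[i], w[i + 1]
            if s.isupper() and t.islower():          # factor s^-1 t
                if s.lower() == t:
                    del w[i:i + 2]                   # delete s^-1 s
                else:
                    w[i], w[i + 1] = t, s            # s^-1 t -> t s^-1
                changed = True
                break
    pos = [c for c in w if c.islower()]              # the word u
    neg = [c.lower() for c in w if c.isupper()]      # the word v
    return pos, neg                                  # terminal form u v^-1

def represents_one(word):
    # In the abelian case, u and v are +-equivalent iff they carry the
    # same letters, after which deletions empty the word.
    u, v = right_reverse(word)
    return sorted(u) == sorted(v)
```

For instance, the word coded by \texttt{"AbaB"}, namely $\INV a b a \INV b$, reverses to $b\INV b$ and is then emptied; Property~$\PropH$ asserts that the same scheme of special transformations suffices for every presentation that is an lcm presentation on both sides.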
\begin{rema} If $\MM$ is a noetherian gcd-monoid and $\SS$ is the atom family of~$\MM$, then $\MM$ admits a right lcm presentation~$(\SS, \RR_\rr)$ and a left lcm presentation~$(\SS, \RR_\ell)$ but, in general, $\RR_\rr$ and~$\RR_\ell$ need not coincide. Adapting the definition of Property~$\PropH$ to use~$\RR_\rr$ for~$\rev$ and $\RR_\ell$ for~$\revt$ makes every noetherian gcd-monoid~$\MM$ such that $\RD\MM$ is convergent eligible for Prop.\,\ref{P:PropH}. \end{rema} \section{The universal reduction strategy}\label{S:Univ} When the rewrite system~$\RD\MM$ is convergent, every sequence of reductions from a multifraction~$\aav$ leads in finitely many steps to~$\red(\aav)$. We shall see now that, when the $3$-Ore condition is satisfied, there exists a canonical sequence of reductions leading from~$\aav$ to~$\red(\aav)$, the remarkable point being that the recipe so obtained only depends on~$\dh\aav$. In Subsection~\ref{SS:LocIrred}, we establish technical preparatory results about how local irreducibility is preserved when reductions are applied. The universal recipe is established in Subsection~\ref{SS:Univ}, with a geometric interpretation in terms of van Kampen diagrams in Subsection~\ref{SS:Diagram}. Finally, we conclude in Subsection~\ref{SS:Torsion} with a few (weak) results about torsion. \subsection{Local irreducibility}\label{SS:LocIrred} \begin{defi} If $\MM$ is a gcd-monoid, a multifraction~$\aav$ on~$\MM$ is called \emph{$\ii$-irreducible} if $\aav \act \Red{\ii, \xx}$ is defined for no~$\xx \not= 1$. \end{defi} An $\nn$-multifraction is $\RRR$-irreducible if and only if it is $\ii$-irreducible for every~$\ii$ in~$\{1 \wdots \nn-1\}$. In general, $\ii$-irreducibility is not preserved under reduction. However, we shall see now that partial preservation results are valid, especially in the $3$-Ore case. \begin{lemm}\label{L:Irred1} Assume that $\MM$ is a gcd-monoid and $\bbv = \aav \act \Red{\ii, \xx}$ holds.
\ITEM1 If $\aav$ is $\jj$-irreducible for some $\jj \not= \ii - 2, \ii, \ii + 1$, then so is~$\bbv$. \ITEM2 If $\aav$ is $(\ii - 2)$-irreducible and $\bbv \act \Red{\ii-2, \zz}$ is defined, then, for~$\ii$ negative (\resp positive) in~$\aav$, we must have $\bb_{\ii - 1} = \xx' \lcmt \uu$ (\resp $\xx' \lcm \uu$), where $\xx'$ and~$\uu$ are defined by $\aa_\ii \xx' = \xx \bb_\ii$ (\resp $\xx' \aa_\ii = \bb_\ii \xx$) and $\zz \uu = \bb_{\ii - 1}$ (\resp $\uu \zz = \bb_{\ii - 1}$). \end{lemm} \begin{proof} \ITEM1 The result is trivial for $\jj \le \ii - 3$ and $\jj \ge \ii + 2$, as $\bb_\jj = \aa_\jj$ and $\bb_{\jj + 1} = \aa_{\jj + 1}$ then hold. We now consider the case $\jj = \ii - 1$, with $\ii \ge 4$ negative in~$\aav$. Assume that $\aav$ is $(\ii - 1)$-irreducible and $\bbv \act \Red{\ii-1, \yy}$ is defined (Fig.\,\ref{F:Irred}). Our aim is to show $\yy = 1$. By construction, we have $\bb_{\ii - 1} = \aa_{\ii-1} \xx'$ with $\aa_\ii \xx':= \xx \lcm \aa_\ii$. By Lemma~\ref{L:IterLcm}, the assumption that $\yy \lcmt \bb_{\ii-1}$ exists implies that $\yy \lcmt \xx'$ and $\yy' \lcmt \aa_{\ii-1}$ both exist, where $\yy'$ is determined by $\yy' \xx' = \xx' \lcmt \yy$. On the other hand, the equality $\aa_\ii \xx' = \xx \bb_\ii$ shows that $\aa_\ii \xx'$ is a common right multiple of~$\xx'$ and~$\bb_\ii$, hence a fortiori of~$\xx'$ and~$\yy$. By definition of~$\yy'$, this implies $\yy' \divet \aa_\ii$. Hence $\aav \act \Red{\ii-1, \yy'}$ is defined. As $\aav$ is $(\ii-1)$-irreducible, this implies $\yy' = 1$, which implies $\yy \divet \xx'$. Thus $\yy$ is a common right divisor of~$\xx'$ and~$\bb_\ii$. By definition, $\aa_\ii \xx'$ is the right lcm of~$\xx$ and~$\aa_\ii$, hence, by Lemma~\ref{L:Lcm}, $\xx' \gcdt \bb_\ii = 1$ holds. Therefore, the only possibility is $\yy = 1$, and $\bbv$ is $(\ii-1)$-irreducible.
For $\ii = 2$, the argument is similar: the assumption that $\yy$ right divides~$\bb_1$ implies that $\yy'$ right divides~$\aa_1$, as well as~$\aa_2$, and the assumption that $\aav$ is $1$-irreducible implies $\yy' = 1$, whence $\yy = 1$ as above. Finally, for $\ii$ positive in~$\aav$, the argument is symmetric, mutatis mutandis. \ITEM2 Assume that $\aav$ is $(\ii - 2)$-irreducible and $\bbv \act \Red{\ii - 2, \zz}$ is defined. We first assume $\ii \ge 4$ negative in~$\aav$. By definition, $\zz$ is a left divisor of~$\bb_{\ii - 1}$, say $\bb_{\ii - 1} = \zz \uu$. By construction, $\bb_{\ii - 1}$, which is $\aa_{\ii - 1} \xx'$, is a right multiple of~$\xx'$ and~$\uu$, hence $\xx' \lcmt \uu$ exists and it right divides~$\bb_{\ii - 1}$, say $\bb_{\ii - 1} = \vv (\xx' \lcmt \uu)$. Then $\vv$ is a left divisor of~$\aa_{\ii - 1}$. By construction, $\vv$ left divides~$\zz$, hence the assumption that $\bbv \act \Red{\ii - 2, \zz}$ is defined, which implies that $\zz$ and~$\aa_{\ii - 2}$ admit a common right multiple, a fortiori implies that $\vv$ and $\aa_{\ii - 2}$ admit a common right multiple. It follows that $\aav \act \Red{\ii - 2, \vv}$ is defined. As $\aav$ is assumed to be $(\ii - 2)$-irreducible, this implies $\vv = 1$, hence $\bb_{\ii - 1} = \uu \lcmt \xx'$. For $\ii \ge 5$ positive in~$\aav$, the argument is symmetric. Finally, for $\ii = 3$, $\vv$ is a common right divisor of~$\aa_1$ and~$\aa_2$, and the $1$-irreducibility of $\aav$ implies $\vv = 1$, whence $\bb_2 = \uu \lcm \xx'$ again. \end{proof} \begin{figure}\label{F:Irred} \end{figure} \begin{lemm}\label{L:Irred2} Assume that $\MM$ is a gcd-monoid satisfying the $3$-Ore condition and $\ccv = \aav \act \Red{\ii, \xx}\Red{\ii - 2, \zz}$ holds. If $\aav$ is $(\ii - 1)$- and $(\ii - 2)$-irreducible, then $\ccv$ is $(\ii - 1)$-irreducible. \end{lemm} \begin{proof} (See Fig.\,\ref{F:Irred} again.) Put $\bbv:= \aav \act \Red{\ii, \xx}$.
By Lemma~\ref{L:Irred1}\ITEM1, the $(\ii - 1)$-irreducibility of~$\aav$ implies that of~$\bbv$. However, as $\ii - 1 = (\ii - 2) + 1$, Lemma~\ref{L:Irred1}\ITEM1 is useless to deduce that $\bbv \act \Red{\ii - 2, \zz}$ is $(\ii - 1)$-irreducible. Now, assume that $\ccv \act \Red{\ii - 1, \yy}$ is defined with, say, $\ii$ negative in~$\aav$. Define $\xx'$ and~$\uu$ by $\aa_\ii \xx' = \xx \bb_\ii$ and $\zz \uu = \bb_{\ii - 1}$. By construction, we have $\cc_{\ii - 1} = \uu$. The existence of $\ccv \act \Red{\ii - 1, \yy}$ implies $\yy \divet \cc_\ii$ and the existence of~$\yy \lcmt \uu$. Next, $\xx'$ and~$\bb_\ii$ admit a common left multiple, namely~$\xx \bb_\ii$, hence a fortiori so do $\xx'$ and~$\yy$. Finally, $\uu$ and~$\xx'$ admit a common left multiple, namely~$\bb_{\ii - 1}$. As $\MM$ satisfies the $3$-Ore condition, $\yy$, $\xx'$, and~$\uu$ admit a common left multiple, hence $\yy \lcmt \xx' \lcmt \uu$ exists. As $\aav$ is also $(\ii - 2)$-irreducible, Lemma~\ref{L:Irred1}\ITEM2 implies $\bb_{\ii - 1} = \xx' \lcmt \uu$. Thus $\yy \lcmt \bb_{\ii - 1}$ exists, hence $\bbv \act \Red{\ii - 1, \yy}$ is defined. As $\bbv$ is $(\ii - 1)$-irreducible, we deduce $\yy = 1$. Hence $\ccv$ is $(\ii - 1)$-irreducible. As usual, the argument for $\ii$ positive in~$\aav$ is symmetric (and $\ii = 3$ is not special here). \end{proof} \subsection{The recipe}\label{SS:Univ} Our universal recipe relies on the existence, for each level, of a unique, well defined maximal reduction applying to a given multifraction. \begin{lemm}\label{L:RedMax} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, for every~$\aav$ in~$\FR\MM$ and every $\ii < \dh\aav$ negative $($\resp positive$)$ in~$\aav$, there exists~$\xx_\smax$ such that $\aav \act \Red{\ii, \xx}$ is defined if and only if $\xx \dive \xx_\smax$ $($\resp $\xx \divet \xx_\smax$$)$ holds.
\end{lemm} \begin{proof} Assume $\ii$ negative in~$\aav$ and let $\XX:= \{\xx \in \MM \mid \aav \act \Red{\ii, \xx} \text{ is defined}\}$. Let $\xx, \yy \in \XX$. By definition, $\xx$ and $\yy$ both left divide~$\aa_{\ii+1}$, so $\xx \lcm \yy$ exists, and it left divides~$\aa_{\ii+1}$. On the other hand, by assumption, $\xx \lcm \aa_\ii$ and~$\yy \lcm \aa_\ii$ exist. By the $3$-Ore condition, $(\xx \lcm \yy) \lcm \aa_\ii$ exists, whence $\xx \lcm \yy \in \XX$. Put $\YY := \{ \yy \mid \exists\xx{\in}\XX\, (\xx \yy = \aa_{\ii + 1})\}$. As $\MM$ is noetherian, there exists a $\divt$-minimal element in~$\YY$, say~$\yy_\smin$. Define~$\xx_\smax$ by $\xx_\smax \yy_\smin = \aa_{\ii + 1}$. By construction, $\xx_\smax$ lies in~$\XX$, so $\xx \dive \xx_\smax$ implies $\xx \in \XX$. Conversely, assume $\xx \in \XX$. We saw above that $\xx \lcm \xx_\smax$ exists and belongs to~$\XX$. Now, by the choice of~$\xx_\smax$, we must have $\xx \lcm \xx_\smax = \xx_\smax$, \ie, $\xx \dive \xx_\smax$. The argument for $\ii \ge 3$ positive in~$\aav$ is similar. For $\ii = 1$, the result reduces to the existence of a right gcd. \end{proof} \begin{nota} In the context of Lemma~\ref{L:RedMax}, we write $\aav \act \Redmax\ii$ for $\aav \act \Red{\ii, \xx_\smax}$ for $\ii < \dh\aav$, extended with $\aav \act \Redmax\ii := \aav$ for $\ii \ge \dh\aav$. Next, if $\U\ii$ is a sequence of integers, say $\U\ii = (\ii_1 \wdots \ii_\ell)$, we write $\aav \act \Redmax{\U\ii}$ for $\aav \act \Redmax{\ii_1} \pdots \Redmax{\ii_\ell}$. \end{nota} In this way, $\aav \act \Redmax\ii$ is defined for every positive integer~$\ii$. One should not forget that, in this expression, $\Redmax\ii$ depends on~$\aav$ and does not correspond to a fixed~$\Red{\ii, \xx}$. Note that, for every~$\aav$ with $\dh\aav \ge 2$, we have $\aav \act \Redmax1 = \aav \act \Rdiv{1, \aa_1 \gcdt \aa_2}$. 
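For a concrete (if degenerate) illustration, take $\MM$ to be the free abelian monoid~$\mathbb{N}^k$, written additively: there the right gcd is the componentwise minimum, and the identity $\aav \act \Redmax1 = \aav \act \Rdiv{1, \aa_1 \gcdt \aa_2}$ just noted becomes subtracting $\min(\aa_1, \aa_2)$ from the first two entries. A minimal sketch (ours, with an ad hoc name):

```python
def redmax_1(av):
    # Toy model (ours): entries of the multifraction av are tuples in N^k
    # (free abelian monoid, written additively).  The right gcd of the
    # first two entries is their componentwise minimum, and Redmax_1
    # divides (here: subtracts) it out of both.
    a1, a2 = av[0], av[1]
    g = tuple(min(x, y) for x, y in zip(a1, a2))     # a1 gcd a2
    b1 = tuple(x - m for x, m in zip(a1, g))
    b2 = tuple(y - m for y, m in zip(a2, g))
    return [b1, b2] + list(av[2:])
```

The result is $1$-irreducible by construction: the new first two entries have componentwise minimum zero, that is, trivial right gcd.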
By Lemma~\ref{L:RedMax}, a multifraction $\aav \act \Redmax\ii$ is always $\ii$-irreducible: $\aav$ is $\ii$-irreducible if and only if $\aav = \aav \act \Redmax\ii$ holds. The next result shows that conveniently reducing a multifraction that is $\ii$-irreducible for $\ii < \mm$ leads to a multifraction that is $\ii$-irreducible for $\ii \le \mm$, paving the way for an induction. \begin{lemm}\label{L:Irred3} Assume that $\MM$ is a gcd-monoid satisfying the $3$-Ore condition and $\aav$ is an $\nn$-multifraction that is $\ii$-irreducible for every~$\ii < \mm$. Put $\Sigma(\mm) = (\mm, \mm - 2, \mm - 4, ..., 2)$ $($\resp $(\mm, \mm - 2, ..., 1)$$)$ for $\mm$ even $($\resp odd$)$. Then $\aav \act \Redmax{\Sigma(\mm)}$ is $\ii$-irreducible for every~$\ii \le \mm$. \end{lemm} \begin{proof} Put $\aav^0 := \aav$ and $\aav^\kk:= \aav^{\kk - 1} \act \Redmax{\mm - 2 \kk + 2}$ for $\kk \ge 1$. We prove, using induction on~$\kk \ge 0$, \begin{equation}\label{E:k2}\tag{$\HHH_\kk$} \text{$\aav^\kk$ is $\ii$-irreducible for $\ii = 1 \wdots \mm$ with $\ii \not= \mm - 2\kk$.} \end{equation} By assumption, $\aav^0$, \ie, $\aav$, is $\ii$-irreducible for every $\ii < \mm$. Hence $(\HHH_0)$ is true. Next, we have $\aav^1 = \aav \act \Redmax\mm$. By Lemma~\ref{L:Irred1}\ITEM1, the $\ii$-irreducibility of~$\aav$ for $\ii < \mm$ implies that $\aav^1$ is $\ii$-irreducible for $\ii = 1 \wdots \mm - 1$ with $\ii \not= \mm - 2$. On the other hand, the definition of~$\Redmax\mm$ implies that $\aav^1$ is $\mm$-irreducible. Hence $(\HHH_1)$ is true. Assume now $\kk \ge 2$. By~$(\HHH_{\kk - 1})$, $\aav^{\kk - 1}$ is $\ii$-irreducible for $\ii = 1 \wdots \mm$ with $\ii \not= \mm - 2\kk +2$. Then, as above, Lemma~\ref{L:Irred1}\ITEM1 and the definition of~$\Redmax{\mm - 2 \kk + 2}$ imply that $\aav^\kk$ is $\ii$-irreducible for \break $\ii = 1 \wdots \mm$ with $\ii \not=\mm - 2\kk, \mm - 2\kk + 3$.
Now, $\aav^\kk = \aav^{\kk - 2} \act \Redmax{\mm - 2\kk + 4} \Redmax{\mm - 2 \kk + 2}$ also holds and, by~$(\HHH_{\kk - 2})$, $\aav^{\kk - 2}$ is both $(\mm - 2\kk + 2)$- and $(\mm - 2\kk + 3)$-irreducible. Then Lemma~\ref{L:Irred2} implies that $\aav^\kk$ is $(\mm - 2\kk + 3)$-irreducible. Thus, $\aav^\kk$ is $\ii$-irreducible for $\ii = 1 \wdots \mm$ with $\ii \not= \mm - 2\kk$. Hence $(\HHH_\kk)$ is true. Applying~$(\HHH_\kk)$ for $\kk = \lfloor (\mm + 1) / 2 \rfloor$, which gives $\mm - 2\kk < 1$, we obtain that $\aav^\kk$, which is $\aav \act \Redmax{\Sigma(\mm)}$, is $\ii$-irreducible for every~$\ii \le \mm$. \end{proof} Building on Lemma~\ref{L:Irred3}, we now easily obtain the expected universal recipe. \begin{prop}\label{P:Strat} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then, for every $\nn$-multifraction~$\aav$ on~$\MM$, we have \begin{equation}\label{E:Strat} \red(\aav) = \aav \act \Redmax{\univ\nn}, \end{equation} where $\univ\nn$ is empty for $\nn \le 1$, and is $(1, 2 \wdots \nn - 1)$ followed by~$\univ{\nn - 2}$ for~$\nn \ge\nobreak 2$. \end{prop} \begin{proof} An induction from Lemma~\ref{L:Irred3} shows that, for $\dh\aav = \nn$ and~$\Sigma$ as in Lemma~\ref{L:Irred3}, \begin{equation}\label{E:Strat1} \aav \act \Redmax{\Sigma(1)} \Redmax{\Sigma(2)} \pdots \Redmax{\Sigma(\nn - 1)} \end{equation} is $\ii$-irreducible for every $\ii < \nn$, hence it must be~$\red(\aav)$. Then we observe that the terms in~\eqref{E:Strat1} can be rearranged. Indeed, the proof of Lemma~\ref{L:Distant} shows that, as partial mappings on~$\FR\MM$, the transformations $\Red{\ii, \xx}$ and $\Red{\jj, \yy}$ commute for $\vert\ii - \jj\vert \ge 3$ (we claim nothing for $\vert \ii - \jj \vert = 2$). Applying this in~\eqref{E:Strat1} to push the high level reductions to the left gives~\eqref{E:Strat}. 
\end{proof} Thus, when $\MM$ is eligible for Prop.\,\ref{P:Strat}, reducing a multifraction~$\aav$ amounts to performing the following quadratic sequence of algorithmic steps: \texttt{for $\pp:= 1$ to $\lfloor \dh\aav /2 \rfloor$ do} \HS{4}\texttt{for $\ii:= 1$ to $\dh\aav + 1 - 2\pp$ do} \HS{8}\texttt{$\aav:= \aav \act \Redmax\ii$}. In particular, $\aav$ represents~$1$ in~$\EG\MM$ if and only if the process ends with a trivial multifraction (all entries equal to~$1$). By the way, the proof of Prop.\,\ref{P:Strat} shows that, for every $\nn'$-multifraction~$\aav$ with $\nn' \ge \nn$, the multifraction $\aav \act \Redmax{\univ\nn}$ is $\ii$-irreducible for every $\ii < \nn$. \subsection{Universal van Kampen diagrams}\label{SS:Diagram} Applying the rule of~\eqref{E:Strat} amounts to filling a diagram that only depends on the depth of the considered multifraction. For instance, we have for every $6$-multifraction~$\aav$ the universal recipe \begin{equation*} \red(\aav) = \aav \act \Redmax1 \Redmax2 \Redmax3 \Redmax4 \Redmax5 \Redmax1 \Redmax2 \Redmax3 \Redmax1, \end{equation*} and reducing~$\aav$ corresponds to filling the universal diagram of Fig.\,\ref{F:Recipe}. \begin{figure}\label{F:Recipe} \end{figure} Things take an interesting form when we consider a unital multifraction~$\aav$, \ie, one that represents~$1$ in~$\EG\MM$. By~\eqref{E:AppliConv2}, we must finish with a trivial multifraction, \ie, all arrows~$\aa'_\ii$ in Fig.\,\ref{F:Recipe} are equalities. If \VR(3, 0) $(\Gamma, *)$ is a finite, simply connected pointed graph, let us say that a multifraction~$\aav$ on a monoid~$\MM$ admits a \emph{van Kampen diagram of shape~$\Gamma$} if there is an $\MM$-labeling of~$\Gamma$ such that the outer labels from~$*$ are~$\aa_1 \wdots \aa_\nn$ and the labels in each triangle induce equalities in~$\MM$.
This notion is a mild extension of the usual one: if $\SS$ is any generating set for~$\MM$, then replacing the elements of~$\MM$ with words in~$\SS$ and equalities with word equivalence provides a van Kampen diagram in the usual sense for the word in~$\SS \cup \SSb$ then associated with~$\aav$. It is standard that, if $\aav$ is unital and $\MM$ embeds in~$\EG\MM$, then there exists a van Kampen diagram for~$\aav$, in the sense above. However, in general, there is no uniform constraint on the underlying graph of a van Kampen diagram, typically no bound on the number of cells or of spring and well vertices. What is remarkable is that Prop.\,\ref{P:Strat} provides one unique common shape that works for \emph{every} multifraction of depth~$\nn$. \begin{defi} \rightskip32mm We define $(\UG4, *)$ to be the pointed graph on the right, and, for $\nn \ge 6$ even, we inductively define~$(\UG\nn, *)$ to be the graph obtained by appending $\nn - 2$ adjacent copies of~$\UG4$ around~$(\UG{\nn - 2}, *)$ starting from~$*$, with alternating orientations, and connecting the last copy of~$(\UG4, *)$ with the first one, see Fig.\,\ref{F:UnivGr}. \begin{picture}(0,0)(-8,-2) \psset{nodesep=0.7mm} \put(-0.7, -0.7){$*$} \pcline{->}(1,0)(20,0) \pcline{->}(0,1)(0,16) \pcline{<-}(0,16)(20,16) \pcline{<-}(20,1)(20,16) \pcline{->}(0.7, 0.7)(10,8) \pcline{->}(20,16)(10,8) \pcline{<-}(0,16)(10,8) \pcline{->}(10,8)(20,0) \end{picture} \end{defi} \begin{figure} \caption{\small The universal graph~$\UG\nn$, here for $\nn = 4, 6, 8$.}\label{F:UnivGr} \end{figure} One easily checks that~$\UG\nn$ contains ${\frac14}\nn(\nn - 2) - 1$ copies of~$\UG4$, and ${\frac12}\nn(\nn - 3) - 1$ interior nodes, namely ${\frac18}\nn(\nn - 2) - 1$ wells, ${\frac18}(\nn - 2)(\nn - 4)$ springs, and ${\frac14}\nn(\nn - 2)$ four-prongs.
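Both the quadratic procedure above and the shape of~$\UG\nn$ are governed by the schedule~$\univ\nn$ of Prop.\,\ref{P:Strat}, which is straightforward to generate; a minimal sketch (ours, with an ad hoc name):

```python
def univ(n):
    # Schedule of levels for the universal reduction strategy:
    # empty for n <= 1, and (1, 2, ..., n-1) followed by univ(n-2)
    # for n >= 2.
    if n <= 1:
        return []
    return list(range(1, n)) + univ(n - 2)
```

For $\nn = 6$ this returns $(1, 2, 3, 4, 5, 1, 2, 3, 1)$, matching the recipe displayed above for $6$-multifractions; in general the schedule has $\lfloor \nn^2/4 \rfloor$ entries, in accordance with the quadratic loop.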
\begin{prop}\label{P:Tiling} If $\MM$ is a noetherian gcd-monoid that satisfies the $3$-Ore condition, then every unital $\nn$-multifraction~$\aav$ on~$\MM$ admits a van Kampen diagram of shape~$\UG\nn$. \end{prop} \begin{proof}[Proof (sketch)] We simply bend the diagram of Fig.\,\ref{F:Recipe} so as to close it and obtain a diagram, which we can view as included in the Cayley graph of~$\MM$, whose outer boundary is labeled with the entries of~$\aav$. There are two nontrivial points. First, we must take into account the fact that the $\nn/2$ right triangles (which are half-copies of~$\UG4$) are trivial. The point is that, if unital multifractions $\aav, \bbv$ satisfy $\aav \act \Red{1, \xx_1} \pdots \Red{\nn - 1, \xx_{\nn - 1}} = \bbv$ with $\bb_{\nn - 1} = \bb_\nn = 1$, then the number of copies of~$\UG4$ can be diminished from~$\nn - 1$ to~$\nn - 2$ in the first row (and the rest accordingly). Indeed, we easily check that $\bb_\nn = 1$ is equivalent to~$\aa_\nn = \xx_{\nn - 1}$, whereas $\bb_{\nn - 1} = 1$ is equivalent to~$\xx_{\nn - 2}\inv\aa_{\nn - 1} \divet \aa_\nn$, enabling one to contract $$\begin{picture}(110,17)(0,0) \psset{nodesep=0.7mm}\pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](0,0)(18,14)(36,14) (18,0) \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](18,0)(36,14) (54,14) (36,0) \pcline{->}(0,0)(18,0) \pcline{<-}(18,0)(36,0) \pcline{->}(0,0)(9,7) \pcline{->}(9,7)(18,0) \pcline{<-}(18,0)(27,7) \pcline{<-}(27,7)(36,0) \pcline[style=double](36,0)(45,7) \pcline{<-}(9,7)(18,14) \pcline[linecolor=color3]{->}(18,14)(27,7) \pcline{->}(27,7)(36,14) \pcline[linecolor=color3]{<-}(36,14)(45,7) \pcline[style=double](45,7)(54,14) \pcline{->}(18,14)(36,14)\taput{$\aa_{\nn - 1}$} \pcline{<-}(36,14)(54,14)\taput{$\aa_\nn$} \pcline[style=exist]{->}(18,14)(18,0) \pcline[style=exist]{<-}(36,14)(36,0) \psarc[style=thin](18,0){3.5}{40}{140} \psarc[style=thin](36,0){3.5}{40}{140} 
\put(18,6.7){$\Red{\nn\HS{-0.1}{-}\HS{-0.3}1}$} \put(36.5,6.7){$\Red{\nn}$} \put(23,11){\color{color3}$\xx_{\nn{-}1}$} \put(41,11){\color{color3}$\xx_{\nn}$} \put(60,7){into} \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](70,0)(88,14)(106,14) (88,0) \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](88,0)(106,14)(106,0) \pcline{->}(70,0)(88,0) \pcline{->}(88,14)(106,14)\taput{$\aa_{\nn - 1}$} \pcline[linecolor=color3]{->}(88,14)(97,7) \pcline[linecolor=color3]{->}(106,0)(106,14)\trput{$\aa_\nn \color{color3} = \xx_{\nn}$} \pcline[style=exist]{->}(88,14)(88,0) \put(88,6.7){$\Red{\nn\HS{-0.1}{-}\HS{-0.3}1}$} \put(100,6.7){$\Red{\nn}$} \pcline{->}(70,0)(79,7) \pcline{<-}(79,7)(88,14) \pcline{->}(79,7)(88,0) \pcline{<-}(88,0)(97,7) \pcline{->}(97,7)(106,14) \pcline{<-}(88,0)(106,0) \pcline{<-}(97,7)(106,0) \put(93,11){\color{color3}$\xx_{\nn{-}1}$} \psarc[style=thin](88,0){3.5}{40}{140} \end{picture}$$ Second, we must explain how the two graphs~$\UG4$ of the penultimate row in Fig.\,\ref{F:Recipe} can be contracted into one. This follows from the fact, established in~\cite[Lemma\,6.14]{Diu}, that, if $\cc_1 \sdots \cc_4$ and~$\dd_1 \sdots \dd_4$ label two copies of~$\UG4$ with a common fourth vertex (from~$*$) and we have $\cc_2 = \dd_3$ and $\cc_4 = \dd_1$, then $\cc_1 / \cc_2 / \dd_3 / \dd_4$ label one copy of~$\UG4$.
Thus we can contract $$\begin{picture}(88,20)(0,-3) \psset{nodesep=0.7mm}\pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](0,0)(0,14)(18,14) \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](0,0)(18,14)(36,14) (18,0) \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](18,0)(36,14) (36,0) \pcline{->}(0,0)(18,0)\tbput{$\cc_4$} \pcline{<-}(18,0)(36,0)\put(25.5,-2.7){$\dd_1$} \pcline{->}(0,0)(9,7) \pcline{->}(9,7)(18,0) \pcline{<-}(18,0)(27,7) \pcline{<-}(27,7)(36,0) \pcline{<-}(9,7)(18,14) \pcline{<-}(0,14)(9,7) \pcline{->}(18,14)(27,7) \pcline{->}(27,7)(36,14) \pcline{<-}(18,14)(36,14)\taput{$\dd_3$} \pcline{<-}(0,14)(0,0)\tlput{$\cc_1$} \pcline{->}(0,14)(18,14)\taput{$\cc_2$} \pcline{->}(18,14)(18,0)\tlput{$\cc_3$}\trput{$\dd_2$} \pcline{<-}(36,14)(36,0)\trput{$\dd_4$} \psarc[style=double](4,0){4}{180}{270} \pcline[style=double](3,-4)(33,-4) \psarc[style=double](32,0){4}{270}{0} \put(51,6){into} \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](70,0)(70,14)(88,14) \pspolygon[linearc=3,linewidth=4mm,linecolor=white,fillstyle=solid,fillcolor=color1](70,0)(88,14)(88,0) \pcline{->}(70,0)(70,14)\tlput{$\cc_1$} \pcline{<-}(70,14)(79,7) \pcline{->}(70,0)(88,0)\tbput{$\dd_1$} \pcline{<-}(70,14)(88,14)\taput{$\cc_2$} \pcline{->}(88,14)(88,0)\trput{$\dd_3$} \pcline{->}(70,0)(79,7) \pcline{<-}(79,7)(88,14) \pcline{->}(79,7)(88,0) \pcline{->}(79,7)(88,14) \end{picture}$$ One easily checks that what remains is a copy of~$\UG\nn$. \end{proof} \subsection{Application to the study of torsion}\label{SS:Torsion} The existence of the universal reduction strategy provides a powerful tool for establishing further properties, typically for extending to the $3$-Ore case some of the results previously established in the $2$-Ore case. Here we mention a few (very) partial results involving torsion. 
It is known~\cite{Dfz, Dha} that, if $\MM$ is a gcd-monoid satisfying the $2$-Ore condition, then the group~$\EG\MM$ is torsion free. \begin{conj} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then the group~$\EG\MM$ is torsion free. \end{conj} We establish below a few simple instances of this conjecture, using the following observation: \begin{lemm}\label{L:Cycle} If $\MM$ is a noetherian gcd-monoid and $\aa, \bb, \xx_1 \wdots \xx_\pp$ satisfy \begin{equation}\label{E:Cycle} \aa\xx_1 = \bb \xx_2, \ \aa\xx_2 = \bb\xx_3, ..., \ \aa\xx_{\pp - 1} = \bb\xx_\pp, \ \aa\xx_\pp = \bb \xx_1, \end{equation} then $\aa = \bb$ holds. \end{lemm} \begin{proof} Let $\wit$ be a map from~$\MM$ to the ordinals satisfying~\eqref{E:Wit}. By induction on~$\alpha$, we prove \begin{equation*}\label{E:Cycle1} \parbox{128mm}{$\PPP(\alpha)$: \quad If $\aa, \bb, \xx_1 \wdots \xx_\pp$ satisfy \eqref{E:Cycle} with $\min_\ii(\wit(\aa\xx_\ii)) \le \alpha$, then we have $\aa = \bb$.} \end{equation*} Assume first $\alpha = 0$. Let $\aa, \bb, \xx_1 \wdots \xx_\pp$ satisfy \eqref{E:Cycle} and $\min_\ii(\wit(\aa\xx_\ii)) \le \alpha$. We have $\wit(\aa\xx_\ii) = 0$ for some~$\ii$, whence $\aa\xx_\ii = 1$, whence $\aa = 1$. By~\eqref{E:Cycle}, we have $\aa\xx_\ii = \bb\xx_{\ii + 1}$ (with $\xx_{\pp + 1}$ meaning~$\xx_1$), whence $\bb\xx_{\ii + 1} = 1$ and, therefore, $\aa = \bb = 1$. So $\PPP(0)$ is true. Assume now $\alpha > 0$. Let $\aa, \bb, \xx_1 \wdots \xx_\pp$ satisfy \eqref{E:Cycle} and $\min_\ii(\wit(\aa\xx_\ii)) \le \alpha$. By~\eqref{E:Cycle}, $\aa$ and $\bb$ admit a common right multiple, hence a right lcm, say $\aa \bb' = \bb \aa'$. Then, for~$\ii \le \pp$, the equality $\aa\xx_\ii = \bb\xx_{\ii + 1}$ (with $\xx_{\pp + 1}$ meaning~$\xx_1$) implies the existence of~$\xx'_\ii$ satisfying $\xx_\ii = \bb' \xx'_\ii$ and $\xx_{\ii + 1} = \aa' \xx'_\ii$. It follows that $\aa', \bb', \xx'_1 \wdots \xx'_\pp$ satisfy (the counterpart of)~\eqref{E:Cycle}. 
Assume first $\aa \not= 1$. By assumption, we have $\wit(\aa\xx_\ii) \le \alpha$ for some~$\ii$ and, therefore, $\wit(\xx_\ii) = \wit(\aa'\xx'_{\ii - 1}) < \alpha$. Applying the induction hypothesis to $\aa', \bb', \xx'_1 \wdots \xx'_\pp$, we deduce $\aa' = \bb'$, whence $\aa = \bb$ by right cancelling~$\aa'$ in~$\aa\bb' = \bb\aa'$. Assume now $\bb \not= 1$. By assumption, we have $\wit(\bb\xx_\ii) \le \alpha$ for some~$\ii$ and, again, $\wit(\xx_\ii) = \wit(\aa'\xx'_{\ii - 1}) < \alpha$. Applying the induction hypothesis, we deduce as above $\aa' = \bb'$ and $\aa = \bb$. Finally, if both $\aa \not= 1$ and $\bb \not= 1$ fail, the only possibility is $\aa = \bb = 1$. In every case, $\aa = \bb$ holds, and $\PPP(\alpha)$ is satisfied. \end{proof} \begin{prop} If $\MM$ is a noetherian gcd-monoid satisfying the $3$-Ore condition, then no element~$\gg \not= 1$ of~$\EG\MM$ may satisfy $\gg^2 = 1$ with $\dh\gg \le 5$, or $\gg^3 = 1$ or $\gg^4 = 1$ with $\dh\gg \le 3$. \end{prop} \begin{proof} First, for~$\nn$ odd, $(\aa_1 \aa_2\inv \aa_3 \pdots \aa_\nn)^\pp = 1$ implies $(\aa_\nn\aa_1 \aa_2\inv \pdots \aa_{\nn-1}\inv)^\pp = 1$, so the existence of~$\gg$ satisfying $\gg^\pp = 1$ with $\dh\gg$ odd implies the existence of~$\gg'$ with $\dh{\gg'} = \dh\gg - 1$ satisfying $\gg'{}^\pp = 1$. Hence, the cases to consider are $\gg^2 = 1$ with $\dh\gg \le 4$, $\gg^3 = 1$ and $\gg^4 = 1$ with $\dh\gg \le 2$. Assume $\gg^2 = 1$ with $\dh\gg \le 4$. Let $\aa/\bb/\cc/\dd$ be the unique $\RRR$-irred\-ucible $4$-multifraction representing~$\gg$. Then $\dd/\cc/\bb/\aa$ represents~$\gg\inv$, and $\gg\inv = \gg$ together with Prop.\,\ref{P:Strat} imply \begin{equation}\label{E:Torsion1} \red(\dd/\cc/\bb/\aa) = \dd/\cc/\bb/\aa \act \Redmax1 \Redmax2 \Redmax3 \Redmax1 = \aa/\bb/\cc/\dd. \end{equation} Because $\aa/\bb/\cc/\dd$ is $\RRR$-irreducible, we have $\cc \gcdt \dd = 1$, so applying~$\Redmax1$ to~$\dd/\cc/\bb/\aa$ leaves the latter unchanged.
Hence there exist $\xx, \yy, \zz$ in~$\MM$ satisfying $\dd/\cc/\bb/\aa \act \Red{2, \xx}\Red{3, \yy} \Red{1, \zz} = \aa/\bb/\cc/\dd$. Expanding this equality provides $\xx', \yy'$ and $\bb', \cc'$ satisfying $$\cc\xx' = \xx\cc' \ (= \xx \lcm \cc), \ \xx\bb' = \bb, \ \yy'\bb' = \cc\yy \ (=\bb' \lcmt \yy), \ \aa = \dd\yy, \ \aa\zz = \dd\xx', \ \bb\zz = \yy'\cc'.$$ Eliminating~$\aa$, we obtain $\dd\yy\zz = \dd\xx'$, whence $\xx' = \yy\zz$ and, eliminating~$\bb$ and~$\xx'$, we are left with $\yy' \opp \bb'\zz = \xx \opp \cc' \ (= \cc\yy\zz)$ and $\xx\opp \bb'\zz = \yy' \opp \cc'$. Applying Lemma~\ref{L:Cycle} with $\pp = 2$ and $\xx_1 = \bb'\zz$, $\xx_2 = \cc'$, we deduce $\yy' = \xx$, whence $\cc' = \bb'\zz$, and, from there, $\bb\zz = \xx\bb'\zz = \xx\cc' = \cc\xx'$, which shows that $\bb$ and~$\cc$ admit a common right multiple. As, by assumption, $\aa/\bb/\cc/\dd$ is $\RRR$-irreducible, the only possibility is $\cc = 1$ and, from there, $\dd = 1$. Applying Prop.\,\ref{P:Strat}, we find $\red(1/1/\bb/\aa) = 1/1/\bb/\aa \act \Redmax2\Redmax3\Redmax1 = \bb/\aa/1/1$ and, merging with~\eqref{E:Torsion1}, we deduce $\bb/\aa/1/1 = \aa/\bb/1/1$, whence $\aa = \bb$. As, by assumption, $\aa$ and~$\bb$ admit no nontrivial common right divisor, we deduce $\aa = \bb = 1$ and, finally, $\gg = 1$. Assume now $\gg^3 = 1$ with $\dh\gg \le 2$. Let $\aa/\bb$ be the unique $\RRR$-irreducible $2$-multifraction representing~$\gg$. Then $\bb/\aa/\bb/\aa$ represents~$\gg^{-2}$ and the assumption $\gg^{-2} = \gg$ together with Prop.\,\ref{P:Strat} imply \begin{equation}\label{E:Torsion2} \red(\bb/\aa/\bb/\aa) = \bb/\aa/\bb/\aa \act \Redmax1 \Redmax2 \Redmax3 \Redmax1 = \aa/\bb/1/1. \end{equation} Arguing as above, we deduce the existence of $\xx, \zz$, $\xx', \yy'$ and $\aa', \bb'$ satisfying $\aa\xx' = \xx\aa' \ (= \xx \lcm \aa)$, $\bb = \xx\bb'$, $\aa = \yy'\bb'$, $\aa\zz = \bb\xx'$, and $\bb\zz = \yy'\aa'$.
Eliminating~$\aa$ and~$\bb$, we find $\xx \opp \bb'\xx' = \yy' \opp \bb'\zz$, $\xx \opp \bb'\zz = \yy' \opp \aa'$, $\xx \opp \aa' = \yy' \opp \bb'\xx'$. Applying Lemma~\ref{L:Cycle} with $\pp = 3$ and $\xx_1 = \bb'\xx'$, $\xx_2 = \bb'\zz$, $\xx_3 = \aa'$, we deduce $\yy' = \xx$, whence $\bb' \xx' = \bb' \zz = \aa'$ and, from there, $\xx' = \zz$. Merging with $\aa\zz = \bb\xx'$ and right cancelling~$\xx'$, we deduce $\aa = \bb$, whence $\aa = \bb = 1$ since $\aa \gcdt \bb = 1$ holds, and, finally, $\gg = 1$. (An alternative argument can be obtained by expanding $\red(\aa/\bb/\aa) = \red(\bb/\aa/\bb)$.) For $\gg^4 = 1$ with $\dh\gg \le 2$, we have $(\gg^2)^2 = 1$, whence $\gg^2 = 1$ by applying the result above to~$\gg^2$, which is legal by $\dh{\gg^2} \le 4$. We then deduce $\gg = 1$ by applying the first result to~$\gg$. \end{proof} A few more particular cases could be addressed similarly, but the complexity grows fast and it is doubtful that a general argument can be reached in this way. \section{The case of Artin-Tits monoids}\label{S:AT} Every Artin-Tits monoid is a noetherian gcd-monoid, hence it is eligible for the current approach. In this short final section, we address the question of recognizing which Artin-Tits monoids satisfy the $3$-Ore condition and are therefore eligible for Theorem~A. The answer is the following simple criterion, stated as Theorem~B in the introduction: \begin{prop}\label{P:AT3Ore} An Artin-Tits monoid satisfies the $3$-Ore condition if and only if it is of type~FC. \end{prop} We recall that an Artin-Tits monoid $\MM = \MON\SS\RR$ is \emph{of spherical type} if the Coxeter group~$\WW$ obtained by adding to~$\RR$ the relation~$\ss^2 = 1$ for every~$\ss$ in~$\SS$ is finite. In this case, the canonical lifting~$\Delta$ to~$\MM$ of the longest element~$\ww_0$ of~$\WW$ is a Garside element in~$\MM$, and $(\MM, \Delta)$ is a Garside monoid~\cite{Dfx}.
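For instance (a classical example, recalled here only for illustration), the $3$-strand braid monoid is the Artin-Tits monoid of spherical type~$A_2$: $$B_3^{+} \;=\; \MON{\{\sigma_1, \sigma_2\}}{\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2},$$ the associated Coxeter group is the symmetric group~$\mathfrak{S}_3$, which is finite, and the lift of its longest element is the Garside element $\Delta = \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$. Every element of~$B_3^{+}$ divides a power of~$\Delta$ on either side, so any two elements admit common left and right multiples.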
Then $\MM $ satisfies the $2$-Ore condition: any two elements of~$\MM$ admit a common right multiple, and a common left multiple. If $\MM = \MON\SS\RR$ is an Artin-Tits monoid, then, for~$\II \subseteq \SS$, the standard parabolic submonoid~$\MM_\II$ generated by~$\II$ is the Artin-Tits monoid $\MON\II{\RR_\II}$, where $\RR_\II$ consists of those relations of~$\RR$ that only involve generators from~$\II$. Then $\MM$ is of \emph{type~FC} (flag complex)~\cite{Alt, Chc, Jug} if every submonoid~$\MM_\II$ such that any two elements of~$\II$ admit a common multiple is spherical. The global lcm of~$\II$ is then denoted by~$\Delta_\II$. The specific form of the defining relations implies that, for every element~$\aa$ of an Artin-Tits monoid, the generators of~$\SS$ occurring in a decomposition of~$\aa$ do not depend on the decomposition. Call it the \emph{support}~$\Supp(\aa)$ of~$\aa$. An easy induction from Lemma~\ref{L:IterLcm} implies \begin{equation}\label{E:Supp} \Supp(\aa') \subseteq \Supp(\aa) \cup \Supp(\bb) \quad \text{for} \quad \aa \lcm \bb = \bb\aa' . \end{equation} We begin with two easy observations that are valid for all Artin-Tits monoids: \begin{lemm}\label{L:Supp} Assume that $\MM$ is an Artin-Tits monoid with atom family~$\SS$. \ITEM1 Assume $\aa \in \MM$, $\ss \in \SS \setminus \Supp(\aa)$, and $\ss \lcm \aa$ exists. Write $\ss \lcm \aa = \aa\uu$. Then $\ss \dive \uu$ holds. \ITEM2 Assume $\aa, \bb \in \MM$, and $\ss \in \Supp(\aa) \setminus \Supp(\bb)$ and $\tt \in \Supp(\bb) \setminus \Supp(\aa)$. If $\aa$ and $\bb$ admit a common right multiple, then so do $\ss$ and~$\tt$. \end{lemm} \begin{proof} \ITEM1 We use induction on the length~$\wit(\aa)$ of~$\aa$. For $\wit(\aa) = 0$, \ie, $\aa = 1$, we have $\uu = \ss$, and the result is true. Otherwise, write $\aa = \tt \aa'$ with $\tt \in \SS$. By assumption, we have $\tt \not= \ss$ and $\ss \lcm \tt$ exists since $\ss \lcm \aa$ does. 
By definition of an Artin-Tits relation, we have $\ss \lcm \tt = \tt \ss \vv$ for some~$\vv$. Applying Lemma~\ref{L:IterLcm} to~$\ss$, $\tt$, and~$\aa'$ gives $\ss \vv \lcm \aa' = \aa' \uu$, and then applying it to~$\aa'$, $\ss$, and $\vv$ gives $\uu = \uu' \vv'$ with $\uu', \vv'$ (and $\ww$) determined by $\ss \lcm \aa' = \ss\ww = \aa' \uu'$ and $\vv \lcm \ww = \ww\vv'$: $$\begin{picture}(38,17) \psset{nodesep=0.5mm} \psset{yunit=0.9mm} \pcline{->}(0,0)(15,0) \pcline{->}(15,0)(35,0) \pcline{->}(0,16)(0,0)\tlput{$\ss$} \pcline{->}(15,16)(15,8)\trput{$\ss$} \pcline{->}(15,8)(15,0)\trput{$\vv$} \pcline{->}(35,16)(35,8)\trput{$\uu'$} \pcline{->}(35,8)(35,0)\trput{$\vv'$} \pcline{->}(15,8)(35,8)\taput{$\ww$} \pcline{->}(0,16)(15,16)\taput{$\tt$} \pcline{->}(15,16)(35,16)\taput{$\aa'$} \psarc[style=thin](15,0){3}{90}{180} \psarc[style=thin](35,0){3}{90}{180} \psarc[style=thin](35,8){3}{90}{180} \psline[style=thin](38,0)(40,0)(40,16)(38,16)\put(41,8){$\uu$} \end{picture}$$ Now the assumption $\ss \notin \Supp(\aa)$ implies $\ss \notin \Supp(\aa')$ and, by definition, we have $\wit(\aa') < \wit(\aa)$. Then the induction hypothesis implies $\ss \dive \uu'$, whence $\ss \dive \uu' \vv' = \uu$. \ITEM2 Assume that $\aa \lcm \bb$ exists. Starting with arbitrary expressions of~$\aa$ and~$\bb$, write $\aa = \aa_1 \ss \aa_2$ and $\bb = \bb_1 \tt \bb_2$ with neither $\ss$ nor~$\tt$ in $\Supp(\aa_1) \cup \Supp(\bb_1)$.
Applying Lemma~\ref{L:IterLcm} repeatedly, we decompose the computation of~$\aa \lcm \bb$ into $3 \times 3$ steps: $$\begin{picture}(45,31)(0,0) \psset{nodesep=0.5mm} \psset{yunit=0.75mm} \pcline[style=exist]{->}(0,0)(15,0) \pcline[style=exist]{->}(15,0)(30,0) \pcline[style=exist]{->}(30,0)(45,0) \pcline{->}(0,12)(15,12) \pcline{->}(15,12)(30,12) \pcline[style=exist]{->}(30,12)(45,12) \pcline{->}(0,24)(15,24)\taput{$\bb'$} \pcline{->}(15,24)(30,24)\taput{$\vv$} \pcline[style=exist]{->}(30,24)(45,24) \pcline{->}(0,36)(15,36)\taput{$\bb_1$} \pcline{->}(15,36)(30,36)\taput{$\tt$} \pcline{->}(30,36)(45,36)\taput{$\bb_2$} \pcline{->}(0,36)(0,24)\tlput{$\aa_1$} \pcline{->}(0,24)(0,12)\tlput{$\ss$} \pcline{->}(0,12)(0,0)\tlput{$\aa_2$} \pcline{->}(15,36)(15,24)\trput{$\aa'$} \pcline{->}(15,24)(15,12)\trput{$\uu$} \pcline[style=exist]{->}(15,12)(15,0) \pcline{->}(30,36)(30,24) \pcline{->}(30,24)(30,12) \pcline[style=exist]{->}(30,12)(30,0) \pcline[style=exist]{->}(45,36)(45,24) \pcline[style=exist]{->}(45,24)(45,12) \pcline[style=exist]{->}(45,12)(45,0) \psarc[style=thin](15,0){3}{90}{180} \psarc[style=thin](30,0){3}{90}{180} \psarc[style=thin](45,0){3}{90}{180} \psarc[style=thin](15,12){3}{90}{180} \psarc[style=thin](30,12){3}{90}{180} \psarc[style=thin](45,12){3}{90}{180} \psarc[style=thin](15,24){3}{90}{180} \psarc[style=thin](30,24){3}{90}{180} \psarc[style=thin](45,24){3}{90}{180} \end{picture}$$ By assumption, neither~$\ss$ nor~$\tt$ belongs to $\Supp(\aa_1) \cup \Supp(\bb_1)$, hence neither belongs to $\Supp(\aa') \cup \Supp(\bb')$. Then \ITEM1 implies $\ss \dive \uu$ and $\tt \dive \vv$. By assumption, $\uu \lcm \vv$ exists, hence (by Lemma~\ref{L:IterLcm} once more) so does~$\ss \lcm \tt$. \end{proof} Putting things together, we can complete the argument. \begin{proof}[Proof of Prop.\,\ref{P:AT3Ore}] Assume that $\MM$ is an Artin-Tits monoid with atom set~$\SS$ and $\MM$ satisfies the $3$-Ore condition. 
We prove using induction on~$\card\II$ that, whenever $\II$ is a subset of~$\SS$ whose elements pairwise admit common multiples, then $\Delta_\II$ exists. For $\card\II \le 2$, the result is trivial. Assume $\card\II \ge 3$. Let $\ss \not= \tt$ belong to~$\II$, and put $\JJ := \II \setminus \{\ss, \tt\}$. Each of $\card(\JJ \cup \{\ss\})$, $\card(\JJ \cup \{\tt\})$, and $\card\{\ss, \tt\}$ is smaller than $\card\II$ hence, by induction hypothesis, $\Delta_{\JJ \cup \{\ss\}}$, which is $\Delta_\JJ \lcm \ss$, $\Delta_{\JJ \cup \{\tt\}}$, which is $\Delta_\JJ \lcm \tt$, and $\Delta_{\{\ss, \tt \}}$, which is $\ss \lcm \tt$, exist. The assumption that $\MM$ satisfies the $3$-Ore condition implies that $\Delta_\JJ \lcm \ss \lcm \tt$, which is~$\Delta_\II$, also exists. Hence $\MM$ is of type~FC. Conversely, assume that $\MM$ is of type~FC. Put $$\XX:= \{\xx \in \MM \mid \exists\II \subseteq \SS\, (\Delta_\II \text{ exists and } \xx \dive \Delta_\II)\}.$$ For each~$\ss$ in~$\SS$, we have $\Delta_{\{\ss\}} = \ss$, hence $\ss \in \XX$: thus $\XX$ contains all atoms, and therefore $\XX$ generates~$\MM$. Next, we observe that, for all~$\xx, \yy$ in~$\XX$, \begin{equation}\label{E:LcmExists} \xx \lcm \yy \text{ exists} \quad \Leftrightarrow\quad \forall \ss, \tt \in \Supp(\xx) \cup \Supp(\yy)\ (\ss \lcm \tt \text{ exists}). \end{equation} Indeed, put $\II:= \Supp(\xx) \cup \Supp(\yy)$, and assume first that $\xx \lcm \yy$ exists. Let $\ss, \tt$ belong to~$\II$. If $\ss$ and~$\tt$ both belong to~$\Supp(\xx)$, or both belong to~$\Supp(\yy)$, then $\ss \lcm \tt$ exists by definition of~$\XX$. Otherwise, we may assume $\ss \in \Supp(\xx) \setminus \Supp(\yy)$ and $\tt \in \Supp(\yy) \setminus \Supp(\xx)$, and Lemma~\ref{L:Supp}\ITEM2 implies that $\ss \lcm \tt$ exists as well. Conversely, assume that $\ss \lcm \tt$ exists for all~$\ss, \tt$ in~$\II$. Then the assumption that $\MM$ is of type~FC implies that $\Delta_\II$ exists. 
Then we have $\xx \dive \Delta_\II$ and $\yy \dive \Delta_\II$, hence $\xx \lcm \yy$ exists (and it divides~$\Delta_\II$). We deduce that the family~$\XX$ is RC-closed. Indeed, assume that $\xx, \yy$ lie in~$\XX$ and $\xx \lcm \yy$ exists. Write $\xx \lcm \yy = \xx \yy'$. By~\eqref{E:LcmExists}, $\ss \lcm \tt$ exists for all~$\ss, \tt$ in~$\Supp(\xx) \cup \Supp(\yy)$. As $\MM$ is of type~FC, this implies the existence of~$\Delta_\II$, where $\II$ is again $\Supp(\xx) \cup \Supp(\yy)$, and we deduce that both $\xx$ and~$\yy$ divide~$\Delta_\II$. As $\Delta_\II$ is a Garside element in~$\MM_\II$, this implies that $\yy'$ also left divides~$\Delta_\II$, hence it belongs to~$\XX$. Next, assume that $\xx, \yy, \zz$ lie in~$\XX$ and pairwise admit right lcms. By~\eqref{E:LcmExists}, we deduce that $\ss \lcm \tt$ exists for all~$\ss, \tt$ in~$\Supp(\xx) \cup \Supp(\yy)$, in~$\Supp(\yy) \cup \Supp(\zz)$, and in~$\Supp(\xx) \cup \Supp(\zz)$, whence for all~$\ss, \tt$ in $\JJ:= \Supp(\xx) \cup \Supp(\yy) \cup \Supp(\zz)$. As $\MM$ is of type~FC, this implies that $\Delta_\JJ$ exists. Then $\xx, \yy, \zz$ all divide~$\Delta_\JJ$, hence they admit a common multiple. Hence, $\XX$ satisfies~\eqref{E:3OreInd1}. A symmetric argument shows that $\XX$ satisfies~\eqref{E:3OreInd2}. By Proposition~\ref{P:3OreInd}, we deduce that $\MM$ satisfies the right $3$-Ore condition. \end{proof} It follows from Theorem~A that, if $\MM$ is an Artin-Tits monoid of type~FC, every element of~$\EG\MM$ is represented by a unique $\RRRh$-irreducible multifraction. From there, choosing any normal form on the monoid~$\MM$ (typically, the normal form associated with the smallest Garside family~\cite{Din, DyH}), this decomposition provides a unique normal form for the elements of the group~$\EG\MM$. \begin{ques} \ITEM1 Is there a connection between the above normal form(s) and the one of~\cite{Alt}?
\ITEM2 (L.\,Paris) Is there a connection between the above normal form(s) and the one associated with the action on the Niblo--Reeves CAT(0) complex \cite{NiR, GoP1}? \end{ques} A positive answer to the second question seems likely. If so, the current construction would provide a simple, purely algebraic construction of a geodesic normal form. Preliminary observations in this direction were proposed by B.\,Wiest~\cite{WieCAT}. \newcommand{\noopsort}[1]{} \end{document}
\begin{document} \title[Riemannian Foliations on Compact Lie Groups]{Totally Geodesic Riemannian Foliations on Compact Lie Groups} \author{Llohann D. Speran\c ca} \thanks{This work was partially supported by CNPq grant number 404266/2016-9 and FAPESP grant number 2017/10892-7.} \address{Universidade Federal de S\~ao Paulo, ICT\\ Av. Cesare Monsueto Giulio Lattes, 1211 - Jardim Santa Ines I\\ CEP 12231-280 \\ S\~ao Jos\'e dos Campos, SP, Brazil} \email{[email protected]} \subjclass[2010]{ MSC 53C35, MSC 53C20 \and MSC 53C12} \keywords{Lie groups, Riemannian foliations, symmetric spaces, holonomy, nonnegative sectional curvature} \begin{abstract} In 1986, Ranjan asked whether a submersion $\pi\colon G\to B$ from a compact simple Lie group with bi-invariant metric must be a coset foliation, provided the submersion is Riemannian with totally geodesic fibers. Here we answer this question affirmatively, even when the submersion is defined only on an open subset of $G$ (assuming suitable compactness hypotheses). \end{abstract} \maketitle \section{Introduction}\label{sec:1} The present work is dedicated to the simple question: how to fill a given geometric space with a geometric pattern? Or, following Thurston \cite{thurston1974construction}: how to construct a manifold out of stripped fabric? For instance, starting with a Lie group $G$, we could use its algebraic structure to construct a pattern: any Lie subgroup $H<G$ induces a decomposition of $G$ by both right cosets, $\cal F^+_H=\{gH~|~g\in G\}$, and left cosets, $\cal F^-_H=\{Hg~|~g\in G\}$. Such decompositions are called \textit{coset foliations}. In general, a \textit{foliation}\footnote{Only non-singular foliations with connected leaves are considered in this paper.} $\cal F$ on $M$ is the decomposition of $M$ into the maximal integral submanifolds of an involutive subbundle $\cal V=T\cal F\subseteq TM$. Such submanifolds are called \textit{leaves}.
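A classical instance (recalled here for illustration) is the Hopf fibration: for $G = SU(2) \cong S^3$ with a bi-invariant metric and $H \cong U(1)$ a circle subgroup, the leaves of the coset foliation $\cal F^+_H = \{gH~|~g\in G\}$ are the fibers of the Hopf map $$\pi\colon S^3 \to S^2(\tfrac{1}{2}),$$ which is a Riemannian submersion with totally geodesic circle fibers.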
Existence, obstructions and classifications of foliations are deep topological subjects (see e.g. Haefliger \cite{haefliger} and Thurston \cite{thurston1976existence,thurston1974theory,thurston1974construction}) and they acquire a geometric flavor by imposing distance rigidity between leaves: a foliation is called \textit{Riemannian} if its leaves are locally equidistant (see e.g. Molino \cite{molino1988riemannian} or Ghys \cite{ghys1984feuilletages}). The decomposition into the fibers of a Riemannian submersion is a main example of a Riemannian foliation: a submersion $\pi:M\to B$ is \emph{Riemannian} if the restriction $d\pi_p|_{(\ker d\pi_p)^\perp}$ is an isometry onto $ T_{\pi(p)}B$ for every $p\in M$ (see e.g. O'Neill \cite{oneill} or Gromoll--Walschap \cite{gw}). The classification of Riemannian submersions from compact Lie groups with bi-invariant metrics was posed as a problem by Grove \cite[Problem 5.4]{grove2002geometry}. Indeed, in such groups all known Riemannian foliations with totally geodesic leaves are coset foliations. Therefore, following Ranjan \cite{ranjan}, it is natural to ask whether coset foliations are the only Riemannian foliations with totally geodesic leaves on such groups. The affirmative answer to this question is supported by the following conjecture, commonly called ``Grove's Conjecture'' (see also Munteanu--Tapp \cite{tappmunteanu2}): \begin{conj}\label{conj:grove} Let $G$ be a compact simple Lie group with a bi-invariant metric. A Riemannian submersion $\pi\colon G\to B$ with connected totally geodesic fibers is induced either by left or right cosets. \end{conj} Here we answer Ranjan's question and prove Conjecture \ref{conj:grove}, without the simplicity assumption. \begin{theorem}\label{thm:grove} Let $\pi \colon G\to B$ be a Riemannian submersion with totally geodesic connected fibers on $G$, a compact connected Lie group with bi-invariant metric. Then $\pi$ is isometric to a coset foliation.
\end{theorem} Actually, our proof has only one non-local instance, which can be circumvented by suitable compactness hypotheses. In particular, the proof works well for foliations on compact groups and reduces the general problem to foliations whose leaves are totally geodesic flats (as the foliation defined by the fibers of a vector bundle): \begin{theorem}\label{thm:groveF} Let $\cal F$ be a Riemannian foliation with connected totally geodesic leaves on a connected open subset $U$ of a compact Lie group with bi-invariant metric $G$. Then $\cal V=T\cal F$ splits as $\cal V=\Delta_0\oplus\Delta_1$, where $\Delta_0$ defines a totally geodesic Riemannian foliation by Euclidean spaces and $\Delta_1$ is isometric to a coset foliation. Moreover, $\cal F$ is a coset foliation if it satisfies one of the following additional hypotheses: \begin{enumerate}[$(a)$] \item $\tilde L_{e}$, the universal cover of a fixed leaf, has no Euclidean factors; \item $L_{e}$ is complete and the integrability tensor of $\cal F$ is bounded along $L_{e}$; \item the closure of $L_{e}$ is a compact subset of $G$. \end{enumerate} \end{theorem} The proof is essentially algebraic, with two main geometrical instances: Theorems \ref{thm:A} and \ref{thm:tori}. Both are interesting on their own: Theorem \ref{thm:A} is a refinement of the celebrated Ambrose--Singer Theorem for foliations with totally geodesic fibers on spaces with non-negative sectional curvature; Theorem \ref{thm:tori} could readily be used in an attempt to generalize Theorem \ref{thm:grove} to symmetric spaces. The author is tempted to believe that the foliation defined by $\Delta_0$ comes (locally) from a metric product decomposition $G=G'\times \bb R^k$, and is therefore a coset foliation.
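A minimal instance of the flat factor $\Delta_0$ (an illustration, not needed in the sequel) already appears on the torus: on $G = T^2 = \bb R^2/\bb Z^2$ with the flat metric, fixing $v \neq 0$ and taking $$\cal F = \{\, p + \bb R v ~|~ p \in T^2 \,\},$$ each leaf is a totally geodesic flat (a closed geodesic when the slope of $v$ is rational, a dense line otherwise), and $\cal F$ is the coset foliation of the one-parameter subgroup $\exp(\bb R v)$.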
The conclusion of Theorem \ref{thm:grove} follows once we show that $\lie g$ decomposes in ideals $\lie g=\lie g_+\oplus \lie g_-$ such that, for all vectors $X,Y$ orthogonal to the leaves, \begin{equation}\label{eq:Aintroduction} [X,Y]^v=\Big( [X_-,Y_-] - [X_+,Y_+]\Big)^v, \end{equation} where $^v$ denotes the orthogonal projection to $\cal V=T\cal F$ and $X_\pm,Y_\pm$ are the $\lie g_\pm$-components of $X,Y$. It follows from \cite[Corollary 4.2]{tappmunteanu2} that $\pi$ is given by cosets (see section \ref{sec:1main} for details). The decomposition $\lie g_+\oplus\lie g_-$ is obtained by refining \cite[Theorem 1.5]{tappmunteanu2} and ideas in \cite{ranjan}. We observe that the hypotheses of Theorem \ref{thm:grove} cannot be relaxed: Kerin--Shankar \cite{kerin-shankar} presented infinite families of Riemannian submersions from compact Lie groups with bi-invariant metrics that cannot be realized as principal bundles (for instance, the composition $h\circ pr:SO(16)\to S^8$ of the orthonormal frame bundle $pr:SO(16)\to S^{15}$ with the Hopf map $S^{15}\to S^8$ is one such submersion.) Moreover, the simple group $SO(8)$ admits a foliation, $\cal F_{SO(8)}$, by totally geodesic round 7-spheres (obtained by trivializing the orthonormal frame bundle $SO(8)\to S^7$). The Kerin--Shankar examples do not have totally geodesic fibers and $\cal F_{SO(8)}$ is not Riemannian. The general classification of Riemannian foliations is wide open. For instance, classifications neither for totally geodesic Riemannian foliations on symmetric spaces, nor for generic Riemannian foliations on Lie groups are known (we refer to Lytchak \cite{lytchak2014polar}, Lytchak--Wilking \cite{lytchak2016riemannian} and Wilking \cite{wilking2001index} for important developments in other cases). The author believes that the proof here can be partially replicated for the symmetric space case, giving important first steps.
\subsection{Preliminaries and description of each step}\label{sec:1main} Given a Riemannian foliation $\cal F$ on $M$, we might think of $\cal F$ locally as a stripped fabric (or a Riemannian submersion) with leaves vertically placed. At each point $x\in M$, we decompose $T_xM$ as the tangent to the leaf $\cal V_x$ and its orthogonal complement $\cal H_x=(\cal V_x)^\bot$. We call $\cal V_x$ the \textit{vertical space} and $\cal H_x$ the \textit{horizontal space} at $x$. Given $X\in TM$, we denote by $X^h,X^v$ the horizontal and vertical components of $X$, respectively. A vector field $X$ is said to be \textit{basic horizontal} if it is $\cal H$-valued and, for every vertical field $V$, $[X,V]$ is vertical. Equivalently, if $\cal F$ is induced by a submersion $\pi$, $X$ is basic if $d\pi(X)$ is fiberwise constant. The flow of a basic horizontal vector field $X$ induces local diffeomorphisms between leaves (as a standard computation shows -- see e.g. Hirsch \cite[Proposition 17.6]{hirsch2012differential}). These (local) diffeomorphisms are called \textit{(local) holonomy transformations}. It is known that holonomy transformations are (local) isometries if and only if the leaves are totally geodesic (see e.g. Gromoll--Walschap \cite[Lemma 1.4.3]{gw}), which is the case at hand. Given a Riemannian foliation $\cal F$, the Gray--O'Neill integrability tensor $A\colon \cal H\times\cal H\to \cal V$ is defined by \begin{equation*} A_XY=\h[\bar X,\bar Y]^v, \end{equation*} where $\bar X,\bar Y$ are horizontal extensions of $X,Y$. We follow Ranjan \cite{ranjan} and denote by $A^\xi$ the opposite dual of $A$, defined by: \begin{equation*} \lr{A^\xi X,Y}=-\lr{A_XY,\xi}. \end{equation*} Let $\phi_t$ be the flow of a basic horizontal field $X$ and $c$ an integral curve of $X$.
For any given $\xi\in\cal V_{c(0)}$, we define its \textit{holonomy field} along $c$ by \[\xi(t)=d\phi_t(\xi).\] Alternatively, $\xi(t)$ is the only vector field along $c$ that satisfies \begin{gather*} \nabla_X \xi(t)=A^\xi X,\\ \xi(0)=\xi.\nonumber \end{gather*} The \textit{dual leaf at $p\in M$}, $L^\#_p$, is the subset of points in $M$ that can be joined to $p$ by horizontal curves (compare Wilking \cite{wilkilng-dual} or Gromoll--Walschap \cite[section 1.8]{gw}). When the leaves of $\cal F$ are the fibers of a principal $G$-bundle $\pi\colon P\to B$, the integrability tensor, infinitesimal holonomy fields and dual leaves replace classical objects: given a connection 1-form $\omega\colon TP\to \lie g$, the curvature 2-form satisfies $\Omega(X,Y)=-2\omega(A_XY)$; for any holonomy field along a horizontal curve $c$, $\omega_{c(t)}(\xi(t))=\omega_{c(0)}(\xi(0))$. That is, $\xi(t)$ is the restriction to $c$ of the action field defined by $\xi$; $L^\#_p$ is \textit{the holonomy bundle through $p$} (see e.g. \cite[Section~II]{knI} for a definition of the latter). The celebrated Ambrose--Singer Theorem \cite[Theorem 2]{ambrose-singer} identifies the Lie algebra of the holonomy group of $\pi$ with $\omega(T_pP(p))=\omega(T_pL^\#_p)$. Theorem \ref{thm:A} refines this result in the case of Riemannian foliations/submersions on non-negatively curved ambient spaces: \begin{theorem}\label{thm:A} Let $\cal F$ be a totally geodesic Riemannian foliation on a manifold $M$ of non-negative sectional curvature. Let $L^\#_p$ be the dual leaf of $\cal F$ through $p$. Then \begin{equation*} TL^\#_p\cap \cal V_p=span\{A_XY~|~X,Y\in \cal H_p\}. \end{equation*} \end{theorem} Theorem \ref{thm:A} is used twice: to obtain a local version of the splitting theorem \cite[Corollary 3.3]{lytchak2014polar} and as one of the last steps in the paper.
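Two extreme cases may help fix ideas (they are only meant as illustrations). If $M = B\times F$ is a Riemannian product foliated by the fibers $\{b\}\times F$, then $A \equiv 0$, the dual leaves are the slices $B\times\{\mathrm{pt}\}$, and both sides of the equality vanish. For the Hopf fibration $S^3\to S^2$, the horizontal distribution is bracket-generating, so that $$span\{A_XY~|~X,Y\in\cal H_p\} = \cal V_p, \qquad L^\#_p = S^3,$$ and both sides equal $\cal V_p$.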
When we are in the scope of Theorem \ref{thm:groveF} (i.e., $\cal F$ is a totally geodesic Riemannian foliation on an open subset of a Lie group with bi-invariant metric), Ranjan \cite{ranjan} makes a key observation: let ${e}$ be the identity of $G$. For every $\xi\in\cal V_{e}$, $X\in\cal H_{{e}}$, the Gray--O'Neill formulas imply: \begin{equation}\label{eq:Ranjan} (A^\xi)^2=(\h\ad_\xi)^2. \end{equation} By using it, \cite{ranjan} proves Conjecture \ref{conj:grove} for simple groups that have a maximal torus inside a leaf. Such a torus provides a decomposition of the basic horizontal fields, producing candidates for $\lie g_\pm$. Then the simplicity of the group is used to prove that either $\lie g_+$ or $\lie g_-$ is trivial. Without the maximal torus assumption, we introduce a new root system based on the integrability tensor of $\cal F$. \begin{theorem}\label{thm:tori} Let $\cal F$ be a Riemannian foliation with totally geodesic leaves on a manifold $M$. Let $L_p$ be the leaf through $p\in M$ and assume that a neighborhood of 0 in a subspace $\lie t^v\subseteq \cal V_p$ exponentiates to a totally geodesic flat. Suppose that one of the following hypotheses holds: \begin{enumerate}[$(a)$] \item $\exp(\lie t^v)$ is a complete totally geodesic flat and $A$ is bounded; \item $L_{e}$ is an open subset of a compact symmetric space without Euclidean factors. \end{enumerate} Then, $R(\eta,\xi)^h=A^\eta A^\xi-A^\xi A^\eta$ for all $\xi,\eta\in \lie t^v$. \end{theorem} Equation \eqref{eq:Ranjan} together with Theorem \ref{thm:tori} readily gives a decomposition $X=X_++X_-$, producing spaces $\cal H_+(\lie t^v)+\cal H_-(\lie t^v)\supseteq \cal H_{e}$. The Lie algebras $\lie g_+,\lie g_-$ are the ideals generated by $\cal H_+(\lie t^v),\cal H_-(\lie t^v)$. The bulk of the paper is devoted to proving that these ideals commute with each other.
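To indicate how \eqref{eq:Ranjan} enters (a sketch of the idea only): $A^\xi$ and $\h\ad_\xi$ are skew-symmetric endomorphisms of $\cal H_{e}$ with the same square, which suggests looking for a decomposition $$X = X_+ + X_-, \qquad A^\xi X_\pm = \pm\h\ad_\xi X_\pm,$$ the spaces $\cal H_\pm(\lie t^v)$ being spanned by the corresponding components as $\xi$ runs over~$\lie t^v$. Theorem \ref{thm:tori} is what makes such a decomposition consistent for all $\xi\in\lie t^v$ simultaneously.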
To this end, we build upon Munteanu--Tapp \cite[Theorem 1.5]{tappmunteanu2} (Theorem \ref{thm:tapp} below), which provides Lie algebraic relations between the original root system and the one in Theorem \ref{thm:tori}: A triple $\{X,V,\cal A\}\subseteq T_pM$ is called a \textit{good triple} if $\exp_p (tV(s))=\exp_p(sX(t))$ for all $s,t\in\bb R$, where $V(s),X(t)$ denote the Jacobi fields along $\exp(sX)$ and $\exp(tV)$ that satisfy $V(0)=V$, $X(0)=X$ and $V'(0)=\cal A=X'(0)$, respectively. Such conditions are achieved in totally geodesic Riemannian foliations by $X\in\cal H$, $V\in \cal V$ and $\cal A=A^VX$. \cite[Theorem 1.5]{tappmunteanu2} provides a key identity that is used in section \ref{sec:good}: \begin{theorem}[Theorem 1.5, \cite{tappmunteanu2}]\label{thm:tapp} Let $G$ be a compact Lie group with a bi-invariant metric and denote its Lie algebra by $\lie g$. The triple $\{X,V,A\}\subseteq \lie g$ is good if and only if, for all integers $n,m\geq 0$, \begin{equation*} [\ad_X^nB,\ad_V^m\bar B]=0, \end{equation*} where $B=\frac{1}{2}\ad_VX-A$ and $\bar B= -\frac{1}{2}\ad_VX-A$. \end{theorem} Once it is proved that $\lie g_+$ commutes with $\lie g_-$, it follows from \cite[Proposition 4.1, Corollary 4.2]{tappmunteanu2} that the foliation is given by cosets: on the one hand \begin{prop}[Munteanu--Tapp \cite{tappmunteanu2}, Proposition 4.1]\label{prop:MunteanuTapp} Let $\cal F_1,\cal F_2$ be Riemannian foliations with totally geodesic leaves on $M$. Suppose that their vertical spaces, together with their integrability tensors, coincide at a point ${e}$. Then $\cal F_1=\cal F_2$. \end{prop} On the other hand, assume that $M$ is an (open) subset of the product Lie group $G=G_1\times G_2$, equipped with a bi-invariant metric. Consider a subgroup $H<G_1\times G_2$ and define $\cal F_H$ as: \begin{equation}\label{eq:example_product} L_{(g_1,g_2)}=\{(h_1g_1,g_2h_2^{-1})~|~(h_1,h_2)\in H \}.
\end{equation} Then, \begin{equation}\label{eq:Asplit} A^{(\xi_1,\xi_2)}(X_1,X_2)= (\h \ad_{\xi_1} X_1, -\h\ad_{\xi_2}X_2 ). \end{equation} Observe that inverting the second coordinate on $G_1\times G_2$ is an isometry and carries $\mathcal F$ to a Riemannian foliation whose $A$-tensor satisfies $A^\xi X=\h\ad_\xi X$, for all $\xi\in \cal V_{e},X\in\cal H_{e}$. \begin{cor}[Munteanu--Tapp \cite{tappmunteanu2}, Corollary 4.2]\label{cor:TappMunteanu} Let $\mathcal F$ be a Riemannian foliation with totally geodesic leaves on a connected Lie group $G$ with bi-invariant metric. If $A^\xi X=\h\ad_\xi X$, for all $\xi\in \cal V_{e},X\in\cal H_{e}$, then $\cal V_{e}$ is a subalgebra and $\cal F$ is the foliation defined by the left cosets of the subgroup whose subalgebra is $\cal V_{e}$. \end{cor} Although \cite[Proposition 4.1]{tappmunteanu2} assumes completeness of $M$, its proof can be carried out as long as any two points of $M$ can be joined by a concatenation of vertical and horizontal geodesics. Theorem \ref{thm:groveF} follows by putting together \eqref{eq:Aintroduction}, \eqref{eq:Asplit} and Corollary \ref{cor:TappMunteanu}. The paper is divided as follows: in section \ref{ap:AS} we prove Theorem \ref{thm:A} and use it to reduce Theorem \ref{thm:groveF} to the case of an irreducible foliation. Section \ref{sec:thmB} deals with the proof of Theorem \ref{thm:tori} and presents the splitting $\cal V=\Delta_0\oplus\Delta_1$. The proofs of Theorems \ref{thm:grove} and \ref{thm:groveF} are completed in section \ref{sec:good}. \mbox{} The author would like to thank C. Dur\'an, K. Shankar and K. Tapp for suggestions and insightful conversations. Especially K. Shankar for pointing out \cite{berestovskii-nikonorov} (which was a crucial reference for an earlier version of the paper).
The author also would like to thank Miguel Dom\'ingues V\'azquez and the anonymous referee for many suggestions, and Universidade Federal do Paran\'a for hosting the author for most of this work. \section{An Ambrose--Singer theorem for non-negatively curved foliations}\label{ap:AS} Let $\pi\colon M\to B$ be a Riemannian submersion. For simplicity we assume that all submersions and foliations here have totally geodesic fibers/leaves. Define \begin{equation*}\label{eq:dualleaf} L^\#_p=\{c(1)\in M~|~c\colon [0,1]\to M~\text{horizontal},~c(0)=p\}. \end{equation*} We recall that, given a curve $\tilde c\colon [0,1]\to B$, its horizontal lifts define the \textit{holonomy diffeomorphism} $\phi_{\tilde c}\colon \pi^{-1}(\tilde c(0))\to \pi^{-1}(\tilde c(1))$ (by sending a point $q\in \pi^{-1}(\tilde c(0))$ to the endpoint of the horizontal lift of $\tilde c$ starting at $q$). When $\cal F$ is a Riemannian foliation, one can still define local holonomy diffeomorphisms, since $\cal F$ is locally given by submersions; their differentials define holonomy fields. Thus, given a horizontal curve $c\colon [0,1]\to M$ and $\xi\in \cal V_{c(0)}$, we denote $d\phi_c(\xi)=\xi(1)$, where $\xi(t)$ is the holonomy field defined by $\xi$, i.e., it satisfies: \begin{gather}\label{key} \nabla_{\dot c} \xi(t)=A^\xi \dot c,\\ \xi(0)=\xi.\nonumber \end{gather} When $\pi$ is a principal $H$-bundle, $L^\#_p\cap \pi^{-1}(b)$ coincides with an orbit of the holonomy group of $\pi$ at $b$. In this case, the Ambrose--Singer Theorem \cite{ambrose-singer} characterizes the Lie algebra of the holonomy group through the connection 2-form $\Omega$. The result naturally extends to the case of a (not necessarily principal) Riemannian foliation (as one can see from the proof in \cite[section 3.4.2]{clarke2012holonomy}): \begin{theorem}[Ambrose--Singer \cite{ambrose-singer}, Theorem 2] \label{thm:AS} Let $\cal F$ be a Riemannian foliation and $L_p^\#$ the dual leaf at $p\in M$.
Then, \begin{equation*} T_p L_{p}^\#\cap \cal V_p=span\{d\phi_c^{-1}(A_XY)~|~c \text{ horizontal},~c(0)=p,~X,Y\in\cal H \}. \end{equation*} \end{theorem} Although the theorem gives a semi-local characterization of $TL_p^\#$, one must still understand the behavior of the holonomy fields and of the $A$-tensor, which might be quite arbitrary objects. The situation can be greatly improved when the ambient space has non-negative sectional curvature. \begingroup \def\thetheorem{\ref{thm:A}} \begin{theorem} Let $\cal F$ be a totally geodesic Riemannian foliation on a manifold $M$ of non-negative sectional curvature. Let $L^\#_p$ be the dual leaf of $\cal F$ at $p$. Then \begin{equation*} TL^\#_p\cap \cal V_p=span\{A_XY~|~X,Y\in \cal H_p\}. \end{equation*} \end{theorem} \addtocounter{theorem}{-1} \endgroup Under this hypothesis, a vector in the cokernel of $A_X$ never leaves it (see Lemma \ref{prop:A-flatgeo}). Observe that the result is purely local since there is no hypothesis on the completeness of $M$. Moreover, pointwise information about $A$ spreads out through $M$: for instance, by combining Theorems \ref{thm:AS} and \ref{thm:A}, one concludes that $A=0$ at $p$ if and only if $L^\#_p$ is a \textit{polar section} for $\cal F$ (i.e., $L^\#_p$ intersects every leaf perpendicularly). In this case, one can show that the universal cover $\tilde M$ metrically splits as $\tilde M=\tilde L^\#_p\times \tilde L_p$ (see \cite[Theorem 1.4.1]{gw}). Now we proceed to the proof of Theorem \ref{thm:A}. As in the introduction, define $A^\xi\colon \cal H_q\to \cal H_q$ by \[\lr{A^\xi X,Z}=-\lr{A_XZ,\xi}. \] Denote $(\nabla_X A)^\xi Z=\nabla_X(A^\xi Z)-A^{\nabla_X^v\xi}Z-A^\xi \nabla_X^h Z$. By extending $\xi$ as a holonomy field, one sees that: \[\lr{(\nabla_X A)_XZ,\xi}=-\lr{(\nabla_X A)^\xi X,Z}.\] In the remainder of the section, we assume that $\cal F$ has totally geodesic leaves and that $M$ has non-negative sectional curvature. Theorem \ref{thm:A} is based on the next inequality.
\begin{lem}\label{lem:WNN} For each $p\in M$ there is a neighborhood $U$ and a constant $a>0$ such that \begin{gather}\label{eq:nn} a\|X\|\|Z\|\|A^\xi X\|\geq |\lr{(\nabla_XA)^\xi X,Z}| \end{gather} for all $X,Z\in\cal H_q$ and $\xi\in\cal V_q$, $q\in U$.\end{lem} \begin{proof} Given $X,Z\in\cal H$ and $\xi\in\cal V$, the Gray--O'Neill equations (\cite[page 44]{gw}) state that the unreduced sectional curvature $K(X,\xi+tZ)=R(X,\xi+tZ,\xi+tZ,X)$ satisfies \begin{equation}\label{eq:quadratictrick} K(X,\xi+tZ)=t^2K(X,Z)+2t\lr{(\nabla_XA)_XZ,\xi}+\|A^\xi X\|^2. \end{equation} Since $K(X,\xi+tZ)\geq 0$ for all $t$, the discriminant of the quadratic polynomial \eqref{eq:quadratictrick} in $t$ must be non-positive, that is, \begin{equation*} 0\leq K(X,Z)\|A^\xi X\|^2-\lr{(\nabla_XA)_XZ,\xi}^2. \end{equation*} On small neighborhoods, continuity of $K$ guarantees the existence of some $a>0$ such that $K(X,Z)\leq a^2\|X\|^2\|Z\|^2$; the inequality \eqref{eq:nn} follows by taking square roots. \end{proof} \begin{lem}\label{prop:A-flatgeo} Let $\xi(t)$ be a holonomy field along $\gamma(t)=\exp(tX)$, $X\in\cal H$. If $A^{\xi(0)}{X}=0$ then $A^{\xi(t)}{\dot \gamma(t)}=0$ for all $t$. \end{lem} \begin{proof} Take $\|X\|=1$ and $Z=A^{\xi}{\dot \gamma}$ in \eqref{eq:nn}. Recalling that $\nabla_{\dot \gamma}^v\xi=0$, we get \begin{align}\label{proof:flatgeo1} a\|A^\xi{\dot \gamma}\|^2&\geq \lr{(\nabla_{\dot \gamma}A)^\xi{\dot \gamma},A^\xi {\dot \gamma}}=\lr{\nabla_{\dot \gamma}(A^\xi{\dot \gamma}),A^\xi {\dot \gamma}}=\frac{1}{2}\frac{d}{dt}\|A^\xi {\dot \gamma}\|^2. \end{align} Inequality \eqref{proof:flatgeo1} is Gronwall's inequality for $u(t)=\|A^{\xi(t)}{\dot \gamma(t)}\|^2$, implying that \begin{equation*}\label{proof:flatgeo} \|A^{\xi(t)}{\dot \gamma(t)}\|^2\leq \|A^{\xi(0)}{\dot \gamma(0)}\|^2e^{2 a t} \end{equation*} for all $t>0$. In particular, if $A^{\xi(0)}{\dot \gamma(0)}=0$, then $A^{\xi(t)}{\dot \gamma(t)}=0$. Analogously, $A^{\xi(t)}{\dot \gamma(t)}=0$ for $t<0$ by replacing $X$ by $-X$ in the argument.
\end{proof} The main ingredient in the proof is the constancy of the rank of $\ker (A^{\xi(t)}\colon\cal H_{c(t)}\to \cal H_{c(t)})$ along $\exp(\ker A^\xi)$ (Proposition \ref{prop:Dconstantrank}). First we present two algebraic lemmas. \begin{lem}\label{lem:in1} Let $X,Y\in\cal H$ be orthonormal and $A^{\xi}X=0$. Then, \begin{gather*}\label{eq:eigen0} {2 a}\|A^\xi Y\|^2\geq \lr{(\nabla_XA)^\xi Y+(\nabla_YA)^\xi X,A^\xi Y}. \end{gather*} \end{lem} \begin{proof} For any $Z\in\cal H$, applying \eqref{eq:nn} to the vectors $Y+X$ and $Y-X$ (both of norm $\sqrt{2}$, whence the factor $2a$) gives: \begin{align*} 2 a \|Z\|\|A^\xi {(Y+X)}\|&\geq \lr{(\nabla_XA)^\xi X+(\nabla_XA)^\xi Y+(\nabla_YA)^\xi X+(\nabla_YA)^\xi Y,Z},\\ 2 a \|Z\|\|A^\xi{(Y-X)}\|&\geq -\lr{(\nabla_XA)^\xi X-(\nabla_XA)^\xi Y-(\nabla_YA)^\xi X+(\nabla_YA)^\xi Y,Z}. \end{align*} Observe that $A^\xi{(X+Y)}=A^\xi{(Y-X)}=A^\xi Y$ and take $Z=A^\xi Y$. The result now follows by summing up both inequalities. \end{proof} Consider the non-negative symmetric operator $D=-A^\xi A^\xi$ and recall that $\ker A^\xi=\ker D$. Given $DY=\lambda^2 Y$, $\lambda >0$, define $\bar Y=\lambda ^{-1}A^\xi Y$. We have $\|\bar Y\|=\|Y\|$ and $A^\xi \bar Y=-\lambda Y$. In particular, if $\|Y\|=1$, $\|DY\|=\lambda^2$ and $\|A^\xi Y\|=\|A^\xi \bar Y\|=\lambda$. \begin{lem}\label{lem:in2} Let $X,Y$ be unitary horizontals satisfying $A^\xi X=0$ and $DY=\lambda^2 Y\neq 0$. Then, \begin{equation*} \lr{(\nabla_YA)^\xi X,A^\xi Y}+\lr{(\nabla_{\bar Y}A)^\xi X,A^\xi \bar Y}=\lr{(\nabla_XA)^\xi \bar Y ,A^\xi \bar Y}. \end{equation*} \end{lem} \begin{proof} By combining the Bianchi identity for $R^v(X,Y)Z$ and Gray--O'Neill's equation $R^v(X,Y)Z=-(\nabla^v_ZA)_XY$ we get (see also Lemma 1.5.1 in \cite[page 26]{gw}): \begin{equation*} \lr{(\nabla_YA)^\xi X,\bar Y}=-\lr{(\nabla_XA)^\xi \bar Y,Y}-\lr{(\nabla_{\bar Y}A)^\xi Y,X}.
\end{equation*} By replacing $A^\xi Y=\lambda\bar Y$ and $A^\xi\bar Y=-\lambda Y$, we have: \begin{align*} \lr{(\nabla_YA)^\xi X,A^\xi Y}=&\lambda\lr{(\nabla_YA)^\xi X,\bar Y}=-\lambda[\lr{(\nabla_XA)^\xi \bar Y,Y}+\lr{(\nabla_{\bar Y}A)^\xi Y,X}]\\ =&\lr{(\nabla_XA)^\xi \bar Y,A^\xi\bar Y}-\lr{(\nabla_{\bar Y}A)^\xi X,A^\xi\bar Y}.\qedhere \end{align*} \end{proof} \begin{prop}\label{prop:Dconstantrank} Let $\xi(t)$ be a holonomy field along $\gamma(t)=\exp(t X_0)$, $X_0\in \cal H_p$, and suppose that $A^{\xi(0)} X_0=0$. If $\lambda(t)^2$ is a continuous eigenvalue of $D=-A^{\xi(t)}A^{\xi(t)}$, then $\lambda(t)$ either vanishes identically or it never vanishes. \end{prop} \begin{proof} We argue by contradiction. Assume that $\lambda$ vanishes at $t=0$ but there is $l>0$ such that $\lambda(t)>0$ for all $t\in (0,l)$. We further assume (by possibly reducing $l$) that $D$ has a smooth frame of eigenvectors along $\gamma((0,l))$. The proposition follows from Gronwall's inequality once we prove that \begin{equation}\label{eq:eigen} { a'}\lambda^2\geq \frac{d}{dt}\lambda^2, \end{equation} for some $a'>0$. In particular, $\lambda(t)^2\leq \lambda(\epsilon)^2e^{ a' t}$ for all $\epsilon\in (0,l)$, $t\in (\epsilon,l)$; letting $\epsilon\to 0$, $\lambda$ must vanish on $(0,l)$, a contradiction. Inequality \eqref{eq:eigen} follows from Lemmas \ref{lem:in1} and \ref{lem:in2}: let $Y$ be a smooth unitary vector field satisfying $DY=\lambda^2Y$. Applying Lemma \ref{lem:in1} to both $Y$ and $\bar Y$ gives \begin{align*} 2 a\lambda^2&\geq \lr{(\nabla_XA)^\xi Y+(\nabla_YA)^\xi X,A^\xi Y},\\ 2{ a}\lambda^2&\geq \lr{(\nabla_XA)^\xi \bar Y+(\nabla_{\bar Y}A)^\xi X,A^\xi \bar Y}.
\end{align*} Summing up gives: \begin{align*} 4{ a}\lambda^2 &\geq \lr{(\nabla_XA)^\xi Y,A^\xi Y}+\lr{(\nabla_XA)^\xi \bar Y,A^\xi \bar Y}\\&+\lr{(\nabla_YA)^\xi X,A^\xi Y}+\lr{(\nabla_{\bar Y}A)^\xi X,A^\xi \bar Y}.\nonumber \end{align*} Applying Lemma \ref{lem:in2}, we have $\lr{(\nabla_YA)^\xi X,A^\xi Y}+\lr{(\nabla_{\bar Y}A)^\xi X,A^\xi \bar Y}=\lr{(\nabla_XA)^\xi \bar Y ,A^\xi \bar Y}$. On the other hand, \begin{align*}\label{proof:eigen1} \lr{(\nabla_XA)^\xi \bar Y ,A^\xi \bar Y} =&\lr{\nabla_X(A^\xi \bar Y) ,A^\xi \bar Y}-\lr{A^\xi(\nabla_X \bar Y),A^\xi \bar Y} \\=&\h\frac{d}{dt}\lambda^2-\lambda^2\lr{\nabla_X \bar Y, \bar Y}=\h\frac{d}{dt}\lambda^2. \end{align*} Analogously, $\lr{(\nabla_XA)^\xi Y ,A^\xi Y}=\h\frac{d}{dt}\lambda^2$. Hence $4a\lambda^2\geq \frac{3}{2}\frac{d}{dt}\lambda^2$, and we can take $a'=\frac{8}{3}a$, concluding the proof. \end{proof} As a last step, fix $p\in M$ and denote $\lie a_q=span\{A_XY~|~X,Y\in\cal H_q\}$. Observe that $\lie a_q^\bot=\{\xi\in\cal V_q~|~A^\xi =0\}$ and recall that a horizontal curve can be smoothly approximated by a broken horizontal geodesic. Then, Proposition \ref{prop:Dconstantrank} gives: \begin{cor}\label{claim:2} For any horizontal curve $c$ with $c(0)=p$, $d\phi_c(\lie a_p^\bot)=\lie a_{c(1)}^\bot$. \end{cor} Theorem \ref{thm:A} follows directly from Corollary \ref{claim:2}: since $d\phi_c$ is an isometry, Corollary \ref{claim:2} implies that $d\phi_c^{-1}(\lie a_{\phi_c(p)})=\lie a_p$. Applying Theorem \ref{thm:AS}, one directly gets the equality $TL^\#_p\cap \cal V_p=\lie a_p$. $\;\;\;\Box$\\ Once Theorem \ref{thm:A} is established, one observes that $\lie a_p^\perp$ coincides with $\nu(L^\#_p)$, the space normal to $L^\#_p$. This yields the important local version of \cite[Proposition 3.1]{lytchak2014polar} below, which guarantees the same conclusion while trading the completeness of $M$ for the assumption of totally geodesic leaves of $\cal F$.
\begin{cor}\label{cor:flat} Let $\cal F$ be a totally geodesic Riemannian foliation on $M$, a non-negatively curved manifold. Then the sectional curvature $sec(\xi,X)$ vanishes for every $\xi\in \nu(L^\#)$ and $X\in \cal H$, and $\nu(L^\#)$ is parallel translated along $\exp(tX)$. \end{cor} \begin{proof} In the proof of Theorem \ref{thm:A}, we have shown that the distribution $p\mapsto \lie a^\perp_p=\nu(L^\#_p)$ is invariant under the holonomy transformations defined by horizontal geodesics. Moreover, if $\xi(t)$ is the holonomy field defined by $\xi\in\lie a^\perp _p$ along $\gamma(t)=\exp(tX)$, $X\in \cal H_p$, then \[sec(\xi(t),\dot \gamma(t))=\frac{\|A^{\xi(t)} \dot \gamma(t)\|^2}{\|\dot \gamma(t)\|^2\|\xi(t)\|^2}\equiv0.\] On the one hand, $\nabla_{\dot \gamma(t)}\xi(t)=A^{\xi(t)}\dot \gamma(t)=0$, thus $\xi(t)$ is parallel. On the other hand, $\xi(t)\in\lie a^\perp_{\gamma(t)}$ for every $t$, therefore $\lie a^\perp$ is parallel along $\gamma$. \end{proof} \subsection{Reduction to the single-dual-leaf case}\label{sec:pi1} Here we reduce the proof of Theorem \ref{thm:groveF} to the case of an irreducible $\cal F$ (i.e., with only one dual leaf). First we observe that it is sufficient to prove Theorem \ref{thm:groveF} locally: if $\{U_i\}$ is an open cover of $U$ such that $\cal F|_{U_i}$ is the restriction of a coset foliation defined by a connected subgroup, then $\cal F|_{U_i}$ and $\cal F|_{U_j}$ must be the restriction of the same coset foliation whenever $U_i\cap U_j\neq \emptyset$. To reduce to the case of only one dual leaf, we give a local version of the following result due to Lytchak (see also \cite{silva2020completeness}): \begin{theorem}[Corollary 3.3 \cite{lytchak2014polar}]\label{thm:lytchak} Let $\cal F$ be a regular Riemannian foliation on a simply connected compact symmetric space $M$.
Then there is a metric decomposition $M=M_1\times M_2$ and a foliation $\cal F_1$ on $M_1$ such that each slice $M_1\times\{x_2\}$ is a dual leaf for $\cal F$ and $\cal F$ satisfies \[\cal F=\{L\times M_2~|~L\in \cal F_1\}. \] \end{theorem} Again, let $\cal F$ be a Riemannian foliation and consider the decomposition $G=G_1\times G_2$ given by Theorem \ref{thm:lytchak}. If $\cal F_1$ is given by left $H$-cosets, $H<G_1$, then $\cal F$ is given by the left $H\times G_2$-cosets. We now establish our local version of Theorem \ref{thm:lytchak}: \begin{prop}\label{prop:Lytchak} Let $\cal F$ be a totally geodesic Riemannian foliation on a simply connected symmetric space $M$ with non-negative sectional curvature. Then there is a metric decomposition $M=M_1\times M_2$ and a foliation $\cal F_1$ on $M_1$ such that each slice $M_1\times\{x_2\}$ is a dual leaf for $\cal F$ and $\cal F$ satisfies \[\cal F=\{L\times M_2~|~L\in \cal F_1\}. \] \end{prop} \begin{proof} Theorem \ref{thm:lytchak} is based on \cite[Proposition 3.1]{lytchak2014polar} and the completeness of dual leaves of regular Riemannian foliations on compact non-negatively curved manifolds (\cite[Theorem 3, item (b)]{wilkilng-dual}). Here we argue how to trade completeness for the assumption of totally geodesic leaves in both ingredients. We first observe that dual leaves of (non-singular) totally geodesic foliations must be complete. In this case, holonomy transformations are local isometries between leaves, and can be lifted to full isometries defined on the universal cover of each leaf. Therefore the intersection of the dual leaf with a leaf $L$, $L^\#\cap L$, is (finitely covered by) the orbit of a proper group action on $\tilde L$. It follows that $L^\#$ is complete since $L^\#$ is invariant under holonomy transformations.
On the other hand, the proof of Proposition 3.1 in \cite{lytchak2014polar} is based on simple Lie algebraic computations (by identifying $M=G/K$ and $\lie g=TM\oplus \lie k$) and the following two properties: $sec(\nu(L^\#),\cal H)=0$ and that $\nu(L^\#)$ is invariant with respect to parallel translations along horizontal geodesics. These two facts are guaranteed by the arguments in \cite{wilkilng-dual} for singular Riemannian foliations on a complete ambient space with non-negative sectional curvature. In our case, they are recovered by Corollary \ref{cor:flat}. \end{proof} Considering the arguments above, it is sufficient to prove Theorems \ref{thm:grove}, \ref{thm:groveF} assuming that $\cal F$ has only one dual leaf. This assumption is used in section \ref{sec:proof} in order to apply Theorem \ref{thm:A}. \section{The $A$-root system}\label{sec:thmB} Both here and in section \ref{sec:good} we work with the complexification of some related spaces, especially $\lie g$ and $\cal H_{e}$. Given a vector space $V$, the complexification of $V$ will be denoted by $V^{\bb C}$. Given an operator $A\colon V\to V$, its natural complexification is denoted by the same letter $A\colon V^\bb C\to V^\bb C$ and is defined by $A(x+iy)=A(x)+iA(y)$. We follow Knapp \cite{knapp2013lie} and extend inner products on $V$ to $\bb C$-bilinear symmetric products on $V^\bb C$ (not to Hermitian, positive definite ones). The usual setting for a root system consists of an abelian real Lie algebra $\lie t$ acting on a linear space $V$ through a Lie algebra morphism $\rho\colon\lie t\to \End(V)$. For instance, one may endow $V$ with an inner product and suppose that $\rho(\lie t)\subseteq \End(V)$ is a subspace of commuting skew-adjoint linear endomorphisms of $V$. In this case, $\rho$ naturally defines an action on $V^{\bb C}$ and the subset $\rho(\lie t)\subseteq \End(V^\bb C) $ consists of endomorphisms with purely imaginary eigenvalues that can be diagonalized in a single basis.
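For concreteness, consider the simplest instance of this setting (a standard example, recorded only to fix conventions): take $V=\bb R^2$, $\lie t=\bb R$ and $\rho(t)=tJ$, where $Je_1=e_2$ and $Je_2=-e_1$. On $V^{\bb C}$ one has \[J(e_1\mp ie_2)=\pm i(e_1\mp ie_2),\] so $V^{\bb C}=V_{\alpha}\oplus V_{-\alpha}$, where $\alpha(t)=it$ and $V_{\pm\alpha}=\bb C(e_1\mp ie_2)$.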
The \textit{root decomposition induced by $\rho(\lie t)$} is defined by \begin{equation*} V^{\bb C}=\sum_{\alpha\in\Pi} V_\alpha, \end{equation*} where $V_\alpha$ is the \textit{weight space} of the linear function $\alpha:\lie t\to i\bb R$: \[V_\alpha=\{X\in V^{\bb C}~|~\rho(A)X=\alpha(A)X,~\forall A\in\lie t\}.\] Whenever $V_\alpha\neq \{0\}$, for $\alpha\neq 0$, we call $\alpha$ a \textit{root}. We denote the set of roots by $\Pi(\lie t)$. Let $\cal F$ be a Riemannian foliation. Let $\imath \colon \lie t^v \looparrowright L_p$ be an immersed totally geodesic flat with $\imath(0)=p$. Here we prove that, under certain conditions, $\rho_A(\xi)=A^\xi$ defines a representation of $\lie t^v$ on $\cal H_p$. We use the following convention for the Riemannian curvature: \[R(X,Y)Z=\nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z. \] \begingroup \def\thetheorem{\ref{thm:tori}} \begin{theorem} Let $\cal F$ be a Riemannian foliation with totally geodesic leaves on a manifold $M$. Let $L_p$ be the leaf through $p\in M$ and suppose that a neighborhood of $0$ in a subspace $\lie t^v\subseteq \cal V_p$ exponentiates to a totally geodesic flat. Suppose that one of the following hypotheses holds: \begin{enumerate}[$(a)$] \item $\exp(\lie t^v)$ is complete and $A$ is bounded; \item $L_{e}$ is an open subset of a compact symmetric space. \end{enumerate} Then, $R(\eta,\xi)^h=[A^\eta,A^\xi]$ for all $\xi,\eta\in \lie t^v$. \end{theorem} \addtocounter{theorem}{-1} \endgroup We begin by proving item $(a)$. \begin{proof}[Proof of item $(a)$] Consider basic horizontal fields $X,Y$ and vertical fields $\xi,\eta$ such that $\nabla_\xi\eta=\nabla_\eta\xi=0$ along $\exp(\lie t^v)$.
We have, \begin{align*} \lr{R(\eta,\xi)X,Y}=&\lr{\nabla_\eta\nabla_\xi X-\nabla_\xi\nabla_\eta X,Y} = \lr{\nabla_\eta (A^\xi X)- \nabla_\xi(A^\eta X),Y}\\ =& -\eta\lr{A_XY,\xi}+\xi\lr{A_XY,\eta}-\lr{ A^\xi X,A^\eta Y}+\lr{A^\eta X,A^\xi Y}\\ =& -\lr{\nabla_\eta(A_XY),\xi}+\lr{\nabla_\xi(A_XY),\eta}+\lr{(A^\eta A^\xi-A^\xi A^\eta)X,Y}. \end{align*} The proof is concluded by observing that $R^h(\eta,\xi)\cal V_p=0$, since fibers are totally geodesic, and $\lr{\nabla_\xi(A_XY),\eta}=-\lr{\nabla_\eta(A_XY),\xi}=0$. For the last equality, consider the geodesic $\gamma(t)=\exp(t\xi)$ and recall that $A_XY$ is a Jacobi field along $\gamma$. We have, \begin{equation*}\label{proof:tori} \xi\xi\lr{A_XY,\eta}=\lr{\nabla_\xi\nabla_\xi(A_XY),\eta}=\lr{R(\xi,\eta)\xi,A_XY}=0. \end{equation*} Therefore, $\varphi(t)=\lr{A_{X}Y,\eta}(\gamma(t))$ is a bounded affine function on the real line. In particular, $\varphi(t)$ must be constant and $\lr{\nabla_\xi(A_XY),\eta}=\xi\lr{A_XY,\eta}$ vanishes. \end{proof} In the proof above, the boundedness of $A$ is used to show that $A_XY|_{\exp(t\xi)}$ is bounded. Fortunately, there is a natural way to ensure such a bound using only local information. Let $\cal F$ be as in Theorem \ref{thm:groveF}, i.e., $\cal F$ is a totally geodesic Riemannian foliation on an open subset $U\subseteq G$ of a Lie group $G$ with bi-invariant metric. We assume that ${e}\in U$ without loss of generality. Recall that, although $L_{e}$ is defined only on $U$, $L_{e}$ can be isometrically identified with an open neighborhood of a complete symmetric space $\tilde L_{e}$ (\cite[Theorem 5.1]{helgasondifferential}). Since $\tilde L_{e}$ must have non-negative sectional curvature, it is locally isometric to a product $L_0\times L_1$, where $L_0$ is a Euclidean space and $L_1$ is a compact symmetric space. It is easy to conclude that $A_XY$ can only have unbounded components on $L_0$.
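To illustrate the role of the Euclidean factor (a standard fact, recorded here for the reader's convenience and not needed elsewhere): a Killing field on the Euclidean space $\bb R^n$ has the form \[\zeta(v)=Sv+b,\qquad S~\text{skew-symmetric},~b\in\bb R^n,\] so $\|\zeta(v)\|$ grows linearly in $\|v\|$ unless $S=0$; the bounded Killing fields on a Euclidean factor are therefore exactly the parallel (translation) fields.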
With this motivation in mind, we prove item $(b)$. \begin{proof}[Proof of item $(b)$] Observe that every isometry $\varphi:L_{e}\to L_{e}$ can be extended to an isometry of $\tilde L_{e}$ (for instance, if $\varphi\colon L_{e}\to L_{e}$ is an isometry, then its graph is a closed totally geodesic submanifold $\Gamma\subseteq L_{e}\times L_{e}$, hence a symmetric space itself. Therefore, there is a unique symmetric space $\tilde \Gamma$ containing $\Gamma$ as an open subset. One clearly sees that $\tilde \Gamma$ can be naturally identified as a submanifold of $\tilde L_{e}\times \tilde L_{e}$ which is the graph of an isometry $\tilde\varphi$). In particular, Killing fields on $L_{e}$ are the restrictions of Killing fields on $\tilde L_{e}$. Since $\tilde L_{e}$ is locally the product $L_0\times L_1$, a Killing field $\zeta$ on $L_{e}$ decomposes as the sum of a component $\zeta_0$ in $L_0$ and $\zeta_1$ in $L_1$. Since $L_1$ is compact, $\zeta$ is unbounded only if $\zeta_0$ is unbounded. The result now follows since $A_XY|_{L_{e}}$ is a Killing field whenever $X,Y$ are basic horizontal. \end{proof} \subsection{Splitting of totally geodesic foliations}\label{sec:split} At last, we observe that, even if $L_{e}$ has a Euclidean factor, we can still split the foliation and use Theorem \ref{thm:tori} on the compact factor. The splitting we have in mind is stated in the next result, which should be either known to, or at least expected by, specialists. \begin{theorem}\label{thm:split} Let $\cal F$ be a totally geodesic Riemannian foliation with simply connected dual leaves. For a fixed $L\in \cal F$, let $TL=\bigoplus_{i=0}^s\tilde\Delta_i$ be the de Rham decomposition of $TL$.
Then, there are smooth integrable distributions $\Delta_0,...,\Delta_s$ on $M$ such that, for every $i$: \begin{enumerate}[$(1)$] \item $\Delta_i$ is vertical and $\cal V=\bigoplus_{i=0}^s\Delta_i$; \item for every leaf $L'\in \cal F$, $TL'=\bigoplus_{i=0}^s\Delta_i|_{L'}$ is the de Rham decomposition of $TL'$; \item $\cal F_i$, the foliation defined by $\Delta_i$, is Riemannian and has totally geodesic leaves. \end{enumerate} \end{theorem} Let $\cal F$ be an irreducible Riemannian foliation with totally geodesic leaves. Let $L_p$ be the leaf through $p\in M$ and let $TL_p=\bigoplus_i\tilde\Delta_i$ be the de Rham decomposition of $TL_p$. Let $c\colon[0,1]\to M$ be a horizontal curve. Recall that holonomy fields define a linear isometry $d\phi_c\colon \cal V_{c(0)}\to \cal V_{c(1)}$, $d\phi_c(\xi)=\xi(1)$, where $\xi(t)$ is the holonomy field defined by $\xi$ along $c$. Moreover, by recalling that Riemannian foliations are locally given by Riemannian submersions, one concludes that $c$ defines an isometry between universal covers $\phi_c\colon \tilde L_{c(0)}\to \tilde L_{c(1)}$. Fix a leaf $L\in \cal F$ and $TL=\bigoplus \tilde \Delta_i$, its de Rham decomposition. Define $\Delta_i(q)=d\phi_c(\tilde{\Delta}_i)$, where $c$ is a horizontal curve joining $p$ to $q$. We claim that $\Delta_i$ is well defined if dual leaves are simply connected. Let $c_1,c_2:[0,1]\to M$ be horizontal curves joining $L$ to $q$. Then $(d\phi_{c_2})^{-1}d\phi_{c_1}=d\phi_{c}$, where $c$ is the concatenation of $c_1$ with the reverse of $c_2$. In particular, $d\phi_{c_1}(\tilde\Delta_i)=d\phi_{c_2}(\tilde \Delta_i)$ for every pair of horizontal curves $c_1,c_2$, such that $c_1(0),c_2(0)\in L$ and $c_1(1)=c_2(1)$, if and only if $d\phi_{c}(\tilde \Delta_i)=\tilde \Delta_i$ for every horizontal curve $c$, such that $c(0),c(1)\in L$.
Denote \[\Hol_L=\{\phi_c\colon\tilde L_p\to \tilde L_p~|~c(0),c(1)\in L,~c~\text{horizontal}\}.\] Observe that $\Hol_L$ is a subgroup of isometries of $\tilde L_p$. Moreover, according to Eschenburg--Heintze \cite{Eschenburg-Heintze}, it is sufficient to show that $\Hol_L$ does not exchange factors of the de Rham decomposition of $\tilde L$. On the one hand, if $\Hol_L$ exchanges factors, the action of $\Hol_L$ on $\tilde L$ has non-connected isotropy group at some point (if $\tilde L$ has two isometric factors $M_1\times M_1$, then the points in the diagonal have non-connected isotropy). On the other hand, if $\phi_c(p)=p$, then $c(0)=c(1)=p$ and $d\phi_c$ is naturally identified with the isotropy representation of $\phi_c$ at $p$. Therefore, it is sufficient to prove that the group: \[ H_p=\{d\phi_c\colon \cal V_p\to \cal V_p~|~c(0)=c(1)=p,~c~\text{horizontal}\} \] is connected. \begin{claim} The group $H_p$ is connected. \end{claim} \begin{proof} Let $\bar{\pi}\colon O(\cal V)\to M$ be the bundle of orthonormal frames of $\cal V$, i.e., \[O(\cal V)=\{b\colon \bb R^k\to \cal V_p~|~b~\text{linear isometry}\}.\] $O(k)$ acts on $O(\cal V)$ by right composition. Observe that $\tilde{\cal F}=\{\bar{\pi}^{-1}(L)~|~ L\in \cal F \}$ defines a foliation on $O(\cal V)$. One can make $\tilde{\cal F}$ Riemannian by observing that $\nabla^{\cal V}_X\xi=(\nabla_X\xi)^v$ defines a $O(k)$-invariant $\tilde{\cal F}$-horizontal distribution $\tilde{\cal H}$ and equipping each fiber with a bi-invariant metric. Let $b\in O(\cal V)$ be such that $\bar \pi(b)=p$. Denote the $\tilde{\cal F}$-dual leaf at $b$ by $\cal E_b$. One can observe that $\bar \pi|_{\cal E_b}\colon \cal E_b\to L^\#_p$ is a principal bundle with structure group $b^{-1}(\cal E_b\cap \bar{\pi}^{-1}(p))=b^{-1}H_pb$.
By definition, every point in $\cal E_b$ is connected to $b$ by a $\tilde{\cal F}$-horizontal curve, in particular, by a $\bar \pi$-horizontal curve (i.e., orthogonal to the $\bar\pi$-fibers). Therefore, $\bar \pi|_{\cal E_b}$ is irreducible as a principal bundle, concluding that $H_p$ is connected, since $L^\#_p$ is simply connected (see \cite{knI} for details). \end{proof} Observe that $\Delta_i\subseteq \cal V$ and $\bigoplus\Delta_i|_{L_q}$ is a de Rham decomposition in each leaf $L_q$. Therefore $\Delta_i|_{L_q}$ integrates to a Riemannian foliation with totally geodesic leaves on each $L_q$. Since $L_q$ is totally geodesic in $M$, the integral submanifolds of $\Delta_i$ are totally geodesic in $M$. It remains to prove that the foliation defined by $\Delta_i$ on $M$ is Riemannian. \begin{claim} Let $\cal F_i$ be the foliation defined by $\Delta_i$. Then $\cal F_i$ is Riemannian. \end{claim} \begin{proof} We follow Gromoll--Walschap \cite[Theorem 1.2.1]{gw} and show that the Lie derivative $\cal L_{U}g^{\Delta_i^\perp}=0$ for every $U\in\Delta_i$. Observe that $\Delta_i^\bot=\cal H\oplus (\oplus_{i\neq j}\Delta_j)$ and: $\cal L_Ug(\cal H, \cal V)=0$ and $\cal L_{U}g(\cal H, \cal H)=0$, since $\cal F$ is Riemannian and $U$ is vertical; $\cal L_{U}g({\Delta_j},\Delta_k)=0$, $j,k\neq i$, since the leaves of $\cal F$ are totally geodesic and the restriction of $\Delta_i$ to each leaf is Riemannian. \end{proof} \section{Good triples and the $\cal H_\pm$-decomposition}\label{sec:good} From now on, we specialize to the case of a totally geodesic Riemannian foliation $\cal F$ on an open subset $U\subseteq G$ of a compact Lie group $G$ with bi-invariant metric. Furthermore, we assume that $\cal F$ satisfies the hypotheses of Theorem \ref{thm:tori} and use it throughout. We call such a foliation a \textit{Ranjan foliation}. This section is the technical bulk of the paper and concludes the proof of Theorem \ref{thm:grove}.
Here we decompose $\lie g$ into commuting ideals $\lie g_+,\lie g_-$ satisfying \eqref{eq:Aintroduction} (we actually consider a third ideal $\lie g_0$ for technical reasons, but it can be incorporated in either $\lie g_+$ or $\lie g_-$). The decomposition $\lie g_+,\lie g_-$ will be achieved step by step: in section \ref{sec:brac1} we fix a maximal abelian subalgebra inside $\cal V_{e}$ and decompose the elements of $\cal H_{{e}}$ according to the relation between the root system of $\lie g$ and the one of Theorem \ref{thm:tori}. The process produces subspaces $\cal H_+(\lie t^v),\cal H_-(\lie t^v),\cal H_0(\lie t^v)\subseteq \lie g$; section \ref{sec:brac2} proves a strong commuting identity for $\cal H_+(\lie t^v),\cal H_-(\lie t^v)$ which is used throughout; in section \ref{sec:brac3} we expand $\cal H_\pm(\lie t^v)$ to subspaces $\cal H_\pm(\cal F)$ which are independent of the choice of $\lie t^v$; in section \ref{sec:proofHpm} we prove that $\cal H_\pm(\cal F)$ commute, providing the very important Lemma \ref{cor:A}. Using Lemma \ref{cor:A} and the irreducibility hypothesis, we put Theorem \ref{thm:A} into play in order to prove that $\ad_{\cal V_{e}}$ preserves the subalgebras generated by $\mathcal H_\pm(\cal F)$; finally, in section \ref{sec:proof} we prove that the algebras generated by $\cal H_\pm(\cal F)$ are ideals and that they commute. Together with Lemma \ref{cor:A} and Proposition \ref{prop:MunteanuTapp}, this concludes the proof of Theorem \ref{thm:grove}. The foliation defined by \eqref{eq:example_product} gives a picture of $\cal H_\pm(\cal F)$: let $H<G=G_1\times G_2$ and consider $\cal F_H$ as the foliation defined by the orbits of $(h_1,h_2)\cdot(g_1,g_2)= (h_1g_1,g_2h_2^{-1})$. Each vector in $\cal H_{(g_1,g_2)}$ has a component tangent to $G_1\times\{g_2\}$ and another tangent to $\{g_1\}\times G_2$. The subspaces spanned by such components are the desired subspaces $\cal H_+(\cal F)$ and $\cal H_-(\cal F)$, respectively.
In this case the ideals generated by $\cal H_+(\cal F),\cal H_-(\cal F)$ clearly commute. The whole paper is dedicated to showing that this is the general situation. All arguments in sections \ref{sec:brac1}--\ref{sec:proofHpm} follow from Theorem \ref{thm:tori} and Munteanu--Tapp's Theorem \ref{thm:tapp}. That is, assuming that both theorems hold at ${e}$, the results in sections \ref{sec:brac1}--\ref{sec:proofHpm} hold. Section \ref{sec:proof} further requires Lytchak's decomposition Theorem \ref{thm:lytchak} (or its local version, Proposition \ref{prop:Lytchak}) to reduce the general case to the case of a single dual leaf, which is required to apply Theorem \ref{thm:A}. Given a Ranjan foliation, the leaf through ${e}$, $L_{e}$, is a totally geodesic submanifold of a symmetric space, thus a symmetric space itself. In particular, $\lie t^v=\lie t\cap \cal V_{e}$ exponentiates to a maximal totally geodesic flat in $L_{e}$, as long as $\lie t$ is a totally geodesic abelian subalgebra in $\lie g$. Finally, we recall that the $(4,0)$ Riemannian curvature tensor of a bi-invariant metric satisfies: \begin{equation}\label{eq:R} R(X,Y,Z,W)=-\frac{1}{4}\lr{[X,Y],[Z,W]}. \end{equation} \subsection{The horizontal decomposition I}\label{sec:brac1} Consider a maximal vertical abelian subalgebra $\lie t^v\subseteq \cal V_{{e}}$ completed to a maximal abelian subalgebra $\lie t=\lie t^v\oplus \lie t'\subseteq \lie g=T_{e} G$. $\lie t$ and $\lie t^v$ act on $\lie g$ through the representations $\rho_{\ad}(\xi)=\frac{1}{2}\ad_\xi$ and $\rho_A(\xi)=A^\xi$, respectively.
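In the model foliation \eqref{eq:example_product} the two representations can be compared directly (this is only the motivating example, not part of the argument): by \eqref{eq:Asplit}, \[\rho_A(\xi_1,\xi_2)(X_1,X_2)=(\h\ad_{\xi_1}X_1,-\h\ad_{\xi_2}X_2),\qquad \rho_{\ad}(\xi_1,\xi_2)(X_1,X_2)=(\h\ad_{\xi_1}X_1,\h\ad_{\xi_2}X_2),\] so $\rho_A$ and $\rho_{\ad}$ agree on vectors tangent to the first factor and differ by a sign on the second. The spaces $\cal H_\pm$ introduced below isolate precisely these two behaviors.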
Given linear maps $\alpha\colon \lie t^v\to i\bb R$, $\alpha'\colon\lie t'\to i\bb R$, we consider the spaces: \begin{align*} \lie g_{\alpha,\alpha'}(\lie t)&=\{X\in\lie g^{\bb C}~|~\textstyle{\h}\ad_{\xi+\xi'}X=(\alpha(\xi)+\alpha'(\xi'))X,~ \text{for all }\xi\in\lie t^v,~ \xi'\in\lie t' \},\\ \lie g_\alpha(\lie t^v) &=\{X\in\lie g^{\bb C} ~|~\textstyle{\h}\ad_{\xi}X=\alpha(\xi)X,~ \text{for all }\xi\in\lie t^v\},\\ \cal H_\alpha(\lie t^v) &=\{X\in{\cal H}^{\bb C}_{\I} ~|~A^{\xi}X=\alpha(\xi)X,~ \text{for all }\xi\in\lie t^v \}. \end{align*} We call their elements $(\alpha,\alpha')$-weights, vertical $\alpha$-weights and $\alpha$-$A$-weights, respectively. Whenever one of these spaces is non-trivial, the corresponding linear map is called a root, vertical root or $A$-root, respectively. We denote the corresponding sets of roots by $\Pi(\lie t)$, $\Pi^v(\lie t^v)$ and $\Pi^A(\lie t^v)$. Observe that a vertical root $\alpha$ can always be completed to a root $(\alpha,\alpha')\colon\lie t\to i\bb R$ of $\lie g$ by a linear function $\alpha'\colon\lie t'\to i\bb R$: since $\lie t'$ commutes with $\lie t^v$, $\lie g_\alpha(\lie t^v)$ can be further decomposed as \[\lie g_\alpha(\lie t^v)=\sum_{\alpha'}\lie g_{\alpha,\alpha'}(\lie t). \] Conversely, the identification $(\lie t^v\oplus\lie t')^*=(\lie t^v)^*\oplus (\lie t')^*$ is given by the restrictions $\tilde \alpha\mapsto (\tilde\alpha|_{\lie t^v},\tilde \alpha|_{\lie t'})$. In particular, if $(\alpha,\alpha')$ is a root, $\alpha$ is a vertical root. We take advantage of this two-level decomposition: \begin{equation*} \lie g^{\bb C}=(\lie t^v)^{\bb C}+\sum_{\alpha\in\Pi^v(\lie t^v)}\lie g_\alpha(\lie t^v)=\lie t^\bb C+\sum_{(\alpha,\alpha')\in \Pi(\lie t)}\lie g_{\alpha,\alpha'}(\lie t), \end{equation*} and the decomposition based on Theorem \ref{thm:tori}: \[{\cal H}^{\bb C}_{\I}=\cal H_0(\lie t^v)+\sum_{\alpha\in\Pi^A(\lie t^v)}\cal H_\alpha(\lie t^v), \] where $\cal H_0(\lie t^v)=\cap_{\xi \in {\lie t^v}}\ker A^\xi$.
The $\lie g_{\alpha,\beta}(\lie t)$-, $\lie g_\alpha(\lie t^v)$-, $\cal H_\alpha(\lie t^v)$-components of $X$ will be denoted by $X_{\alpha,\beta}$, $X_\alpha$, $X^\alpha$, respectively. Our first step in this algebraic part is to relate $A$-weights to vertical weights. \begin{lem}\label{lem:root1} Let $\lie t^v$ be a maximal vertical abelian subalgebra. Then, \begin{equation*}\label{eq:PiA} \Pi^{A}(\lie t^v)=\{\alpha\in \Pi^{v}(\lie t^v)~|~{\cal H}^{\bb C}_{\I}\cap (\lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v))\neq \{0\}\}\subseteq \Pi^v(\lie t^v). \end{equation*} Moreover, if $X\in{\cal H}^{\bb C}_{\I}$ is an $\alpha$-$A$-weight, then \begin{equation*}\label{eq:Xalpha} X=X_{\alpha}+X_{-\alpha}\in \lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v). \end{equation*} \end{lem} \begin{proof} Since we are dealing with Ranjan foliations, Gray--O'Neill's equations give (compare Ranjan \cite{ranjan}, equation (1.3)): \begin{equation*}\label{eq:Atoad} -A^\xi A^\xi X= R(X,\xi)\xi=-\frac{1}{4}\ad_\xi^2 X \end{equation*} for every $\xi\in\cal V_{{e}}$. Therefore, if $X$ is either an $\alpha$-$A$-weight or satisfies $(\h\ad_\xi)^2X=\alpha(\xi)^2X$, we get \begin{equation}\label{eq:A=vweight} (A^\xi)^2X=\alpha(\xi)^2X={\frac{1}{4}}\ad_\xi^2X. \end{equation} Recall that two roots $\alpha,\beta\in\Pi^v(\lie t^v)$ satisfying $\alpha(\xi)^2=\beta(\xi)^2$ for all $\xi$ must satisfy $\alpha=\pm\beta$. In particular, \begin{align*} \lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v)&=\bigcap_{\xi\in\lie t^v}\ker\Big((\tfrac{1}{2}\ad_\xi)^2-\alpha(\xi)^2\id\Big),\\ \cal H_\alpha(\lie t^v)+\cal H_{-\alpha}(\lie t^v)&= \bigcap_{\xi\in\lie t^v}\ker \Big((A^\xi)^2-\alpha(\xi)^2\id\Big). \end{align*} Thus, $\cal H_\alpha(\lie t^v)+\cal H_{-\alpha}(\lie t^v)\subseteq \lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v)$.
Since $\cal H_\alpha(\lie t^v)+\cal H_{-\alpha}(\lie t^v)\subseteq {\cal H}^{\bb C}_{\I}$ and, conversely, any $X\in{\cal H}^{\bb C}_{\I}\cap (\lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v))$ satisfies \eqref{eq:A=vweight}, we conclude: \begin{gather*} \cal H_\alpha(\lie t^v)+\cal H_{-\alpha}(\lie t^v)={\cal H}^{\bb C}_{\I}\cap (\lie g_\alpha(\lie t^v)+\lie g_{-\alpha}(\lie t^v)).\qedhere \end{gather*} \end{proof} In particular, the vertical roots appearing in horizontal vectors are exactly the $A$-roots, i.e., \[X=X_0+\sum_{\alpha\in\Pi^A(\lie t^v)}X^\alpha=X_0+\sum_{\alpha\in\Pi^A(\lie t^v)}X_\alpha, \] where \[X_0\in \cal H_0(\lie t^v)=\cap_{\xi\in\lie t^v}\ker A^\xi={\cal H}^{\bb C}_{\I}\cap (\cap_{\xi\in\lie t^v}\ker\ad_\xi).\] Following Lemma \ref{lem:root1}, we define the projections $\pi_{\epsilon}(\lie t^v)\colon {\cal H}^{\bb C}_{\I}\to \lie g^\bb C$, $\epsilon=0,+,-$ by \[\textstyle \pi_0(\lie t^v)(X_0+\sum X^\alpha)=X_0, \qquad \pi_\pm(\lie t^v)(X^\alpha)=(X^\alpha)_{\pm\alpha}\in \lie g_{\pm\alpha}(\lie t^v). \] So, $X=\pi_0(\lie t^v)(X)+\pi_+(\lie t^v)(X)+\pi_-(\lie t^v)(X)$ for every $X\in{\cal H}^{\bb C}_{\I}$. Since $\lie g^\bb C$ is the complexification of $\lie g$, $\lie g^\bb C$ inherits two natural objects: a complex conjugation and the extension of the bi-invariant inner product $\lr{,}$ to a symmetric $\bb C$-bilinear form, also denoted by $\lr{,}$. We emphasize that we choose the $\bb C$-bilinear extension of $\lr{,}$, so both $A^\xi$ and $\ad_\xi$ are skew-symmetric. In particular: \begin{gather*}\label{eq:product1} \lr{X,Y}=\lr{X_0,Y_0}+\sum_{\alpha\in\Pi^A(\lie t^v)}\lr{X_\alpha,Y_{-\alpha}}=\lr{X_0,Y_0}+\sum_{\alpha\in\Pi^A(\lie t^v)}\lr{X^\alpha,Y^{-\alpha}}.
\end{gather*} We also observe that $\cal H_\epsilon(\lie t^v)$ are real subspaces, i.e., invariant by the complex conjugation, since they are (a sum of) the image of real operators restricted to the kernel of real operators: \begin{align*} \textstyle \cal H_\pm(\lie t^v)\cap (\lie g_\alpha(\lie t^v)\oplus \lie g_{-\alpha}(\lie t^v))= (A^\xi\pm\h\ad_\xi)(\cal H_\alpha(\lie t^v)\oplus \cal H_{-\alpha}(\lie t^v)) \\ =\textstyle (A^\xi\pm\h\ad_\xi)\ker\Big((A^\xi)^2-\alpha(\xi)^2\id_{{\cal H}^{\bb C}_{\I}}\Big). \end{align*} Analogously, $\cal H_0(\lie t^v)$ is the intersection of kernels of real operators. In particular, $\cal H_0(\lie t^v)$ is orthogonal to $\cal H_+(\lie t^v)+\cal H_-(\lie t^v)$. We gather the notations/statements in the last paragraphs as a lemma: \begin{lem} Let $\lie t^v$ be a vertical abelian subalgebra and $\pi_\epsilon(\lie t^v)$ be the projections defined by Lemma \ref{lem:root1}. Then: \begin{enumerate}[$(1)$] \item $X=X_0+X_++X_-$ for every $X\in{\cal H}^{\bb C}_{\I}$, where $X_\epsilon=\pi_\epsilon(\lie t^v)(X)$; \item $\cal H_\epsilon(\lie t^v)$ are real subspaces of $\lie g^\bb C$; \item $\cal H_0(\lie t^v)\perp (\cal H_+(\lie t^v)+\cal H_-(\lie t^v))$. \end{enumerate} \end{lem} We state another technical lemma to be used in the next section. \begin{lem}\label{lem:tori0} Let $\lie t^v$ be a maximal vertical abelian subalgebra and $\lie t\supseteq\lie t^v$ a maximal torus. Then, $\lie t$ decomposes orthogonally as $\lie t=\lie t^v\oplus \lie t'$ with $\lie t'\subseteq \cal H_0(\lie t^v)$. \end{lem} \begin{proof} Let $t\in\lie t$, $l\in\lie t^v$ and decompose $t$ in its vertical and horizontal components, $t=t^v+t^h$. On the one hand, $R(t,l)=-\frac{1}{4}\ad_{[t,l]}=0$. On the other hand, since fibers are totally geodesic, $R(\cal H,l,\xi,\eta)=0$ for all $\xi,\eta\in\cal V_{{e}}$. Thus \begin{equation*} 0=R(t,l,\xi,\eta)=R(t^h,l,\xi,\eta)+R(t^v,l,\xi,\eta)=R(t^v,l,\xi,\eta).
\end{equation*} In particular, $R(t^v,l,l,t^v)=\frac{1}{4}||[t^v,l]||^2=0$. Since $l\in \lie t^v$ is arbitrary and $\lie t^v$ is maximal, $t^v\in\lie t^v$. Since $t^h=t-t^v\in \lie t$, we conclude that $[t^h,l]=0$ for all $l\in \lie t^v$, thus $t^h\in\cal H_0(\lie t^v)$. \end{proof} We claim that, if $\lie t''$ is an abelian subalgebra in $\cal H_0(\lie t^v)$, there is a maximal abelian subalgebra $\lie t'\subseteq \cal H_0(\lie t^v)$ such that $\lie t'\supseteq \lie t''$. Indeed, recall that any abelian subalgebra of a compact Lie algebra can always be extended to a maximal abelian subalgebra. Therefore, $\lie t^v\oplus \lie t''$ can be extended to a maximal abelian subalgebra $\lie t$. Since $\lie t$ and $\lie t^v$ are arbitrary in Lemma \ref{lem:tori0}, we conclude that there is a $\lie t'\supseteq \lie t''$ such that $\lie t=\lie t^v\oplus \lie t'$. \subsection{The bracket identity}\label{sec:brac2} Fix a maximal vertical abelian subalgebra $\lie t^v$ and complete it to a maximal abelian subalgebra $\lie t=\lie t^v\oplus \lie t'$. We have: \begin{prop}\label{prop:bracmaster} For every $X,Y\in {\cal H}^{\bb C}_{\I}$ and $(\alpha,\alpha'), ~(\beta,\beta')\in\Pi(\lie t)$, \begin{equation*}\label{eq:bracmaster} [(X_+)_{\alpha,\alpha'},(Y_-)_{\beta,\beta'}]=0. \end{equation*} \end{prop} We break the proof into steps, stated as the next lemmas, and write $\pi_\pm(\lie t^v)(X^\alpha)=X^\alpha_\pm$. The first lemma is a restatement of Theorem \ref{thm:tapp} taking into account the $\pi_\pm(\lie t^v)$-decomposition. \begin{lem}\label{lem:brac0} Let $\xi\in\lie t^v$, $X\in {\cal H}^{\bb C}_{\I}$. Then, for all $n,m\geq 0$, \begin{equation*} [\ad_{X}^m\ad_\xi X_-,\ad_\xi^{n+1} X_+]=0. \end{equation*} \end{lem} \begin{proof} Recall from the discussion in section \ref{sec:1} that $\{X,\xi,A^\xi X\}$ is a good triple for every $X\in\cal H$ and $\xi\in \cal V$. We apply Theorem \ref{thm:tapp} to it.
Let $X^\alpha$ be an $\alpha$-$A$-weight and observe that \begin{gather*} \h\ad_\xi X^\alpha_\pm=\pm\alpha(\xi) X^\alpha_\pm,\quad\quad A^\xi (X^\alpha_++X^\alpha_-)=\alpha(\xi)X^\alpha. \end{gather*} Thus: \begin{align*}\textstyle B=(\h\ad_\xi -A^\xi) X &= \sum_{\alpha\in \Pi^{A}(\lie t^v)}\alpha(\xi)\left((X^\alpha_+-X^\alpha_-)- (X^\alpha_++X^\alpha_-)\right)\\ &=\sum_{\alpha\in \Pi^{A}(\lie t^v)}-2\alpha(\xi)X^\alpha_-=\ad_\xi X_-. \end{align*} Analogously, $\bar B=-\ad_\xi X_+$. \end{proof} Expanding the sum $X_\pm=\sum X^\alpha_\pm$, we get: \begin{cor}\label{cor:brac0} Let $X \in {\cal H}^{\bb C}_{\I}$, $\xi\in\lie t^v$. Then, for all $m\geq 0$ and $\beta\in\Pi^v(\lie t^v)$, \begin{equation*} \sum_{\alpha\in\Pi^A(\lie t^v)}\alpha(\xi)[\ad_{X}^m X_-^\alpha,X_+^\beta]=0. \end{equation*} \end{cor} \begin{proof} From Lemma \ref{lem:brac0}, we have \begin{equation*} 0=[\ad_{X}^m\ad_\xi X_-,\ad_\xi^{n+1} X_+]=\sum_{\alpha,\beta}\alpha(\xi)\beta(\xi)^{n+1} [\ad_{X}^m X_-^\alpha,X_+^\beta] \end{equation*} for every $n\geq 0$. Suppose $\xi$ is such that $\alpha(\xi)\neq \beta(\xi)$ for every pair of distinct $A$-roots $\alpha\neq \beta$, and $\beta(\xi)\neq 0$ for every $A$-root $\beta$. In this case, by taking enough values of $n$ we conclude that $\sum\alpha(\xi)[\ad_{X}^m X_-^\alpha,X_+^\beta]=0$ for each fixed $\beta$ (recall that the determinant of the Vandermonde matrix of a set of pairwise distinct values is non-zero). However, the set of such $\xi$'s is dense in $\lie t^v$, concluding the result for every $\xi$. \end{proof} \begin{lem}\label{lem:bracmaster} Let $X \in {\cal H}^{\bb C}_{\I}$. Then, for all $l\geq 0$ and $\alpha,\beta\in\Pi^v(\lie t^v)$, \begin{equation*} [ X_-^\alpha,\ad_{X_0}^lX_+^\beta]=0. \end{equation*} \end{lem} \begin{proof} We use induction on $s$ in: for all $m\geq 0$, \begin{equation}\label{proof:brac0:0} \sum_{\alpha\in\Pi^v(\lie t^v)}\alpha(\xi)[\ad_{X}^m X_-^\alpha,\ad_{X_0}^sX_+^\beta]=0. \end{equation} Observe that \eqref{proof:brac0:0} holds for $s=0$ (Corollary \ref{cor:brac0}).
As the induction hypothesis, we assume that \eqref{proof:brac0:0} holds for $s\leq k$ and compute $[\ad_X^mX^\alpha_-,\ad_{X_0}^{k+1}X^\beta_+]$ backwards: \begin{multline*} [\ad_X^{m+1}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]=[[X_0,\ad_X^{m}X^\alpha_-],\ad_{X_0}^{k}X^\beta_+]+[[X_-,\ad_X^{m}X^\alpha_-],\ad_{X_0}^{k}X^\beta_+]\\+[[X_+,\ad_X^{m}X^\alpha_-],\ad_{X_0}^{k}X^\beta_+] =\ad_{X_0}[\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]-[\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k+1}X^\beta_+]\\ +\ad_{X_-}[\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]-[\ad_X^{m}X^\alpha_-,[X_-,\ad_{X_0}^{k}X^\beta_+]]+[[X_+,\ad_X^{m}X^\alpha_-],\ad_{X_0}^{k}X^\beta_+]. \end{multline*} That is, \begin{multline} \label{proof:brac0:1} [\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k+1}X^\beta_+]= \ad_{X_0}[\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]-[\ad_X^{m+1}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]\\ +\ad_{X_-}[\ad_X^{m}X^\alpha_-,\ad_{X_0}^{k}X^\beta_+]-[\ad_X^{m}X^\alpha_-,[X_-,\ad_{X_0}^{k}X^\beta_+]]-[[\ad_X^{m}X^\alpha_-,X_+],\ad_{X_0}^{k}X^\beta_+]. \end{multline} In order to apply the induction hypothesis, we multiply both sides by $\alpha(\xi)$ and sum in $\alpha$. It follows that the first three terms on the right-hand side vanish. We deal with the last term in a separate claim. \begin{claim}\label{claim:bracmaster1} $\sum\alpha(\xi)[X_+,\ad_X^{m}X^\alpha_-]=0$. \end{claim} \begin{proof} It is sufficient to prove that $\sum_{\alpha}\alpha(\xi)[X_+^\beta,\ad_X^{m}X^\alpha_-]=0$ for every $\beta\in\Pi^A(\lie t^v)$. We use induction on $s$ in: for every $r\geq 0$, \begin{equation}\label{proof:lem:bracmaster1} \sum_{\alpha\in \Pi^A(\lie t^v)}\alpha(\xi)[\ad_X^rX_+^\beta,\ad_X^{s}X^\alpha_-]=0. \end{equation} The case $s=0$ is Corollary \ref{cor:brac0}.
Assuming that \eqref{proof:lem:bracmaster1} holds for $s\leq k$, we have \begin{align*} \sum_{\alpha\in \Pi^A(\lie t^v)}\alpha(\xi)[\ad_X^{r}X_+^\beta,\ad_X^{k+1}X^\alpha_-]=&\ad_X\sum_{\alpha\in \Pi^A(\lie t^v)}\alpha(\xi)[\ad_X^{r}X_+^\beta,\ad_X^{k}X^\alpha_-]\\&-\sum_{\alpha\in \Pi^A(\lie t^v)}\alpha(\xi)[\ad_X^{r+1}X_+^\beta,\ad_X^{k}X^\alpha_-]=0.\qedhere \end{align*} \end{proof} The fifth term on the right-hand side of \eqref{proof:brac0:1} thus vanishes after multiplying by $\alpha(\xi)$ and summing in $\alpha$, and the proof is completed once we observe that the fourth term vanishes as well, i.e., that $[X_-^\alpha,\ad_{X_0}^{k}X^\beta_+]=0$ for any $\alpha\in\Pi^A(\lie t^v)$, provided \eqref{proof:brac0:0} holds for $s\leq k$. Indeed, since $X_0$ commutes with $\lie t^v$, the term $[X_-^\alpha,\ad_{X_0}^{k}X^\beta_+]$ lies in $\lie g_{\beta-\alpha}(\lie t^v)$. Therefore, each term in the induction hypothesis \eqref{proof:brac0:0} with $m=0$: \begin{equation*} \sum_{\alpha\in\Pi^A(\lie t^v)}\alpha(\xi)[X_-^\alpha,\ad_{X_0}^{k}X^\beta_+] =0\label{proof:lem:bracmaster2} \end{equation*} lies in a different weight space. Since $\xi$ is arbitrary, each term must vanish, concluding the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:bracmaster}] We first prove the case $X=Y$, following the same lines as the proof of Corollary \ref{cor:brac0}. Since $X_0$ is itself horizontal and does not influence $X_\pm$, we consider $X=t+X_++X_-$ where $t\in\lie t'$ can be chosen at will. Given $\beta\in\Pi^A(\lie t^v)$, denote $\Pi_\beta=\{\beta'\colon \lie t'\to i\bb R~|~(\beta,\beta')\in\Pi(\lie t)\}$. Lemma \ref{lem:bracmaster} gives for all $l\geq 0$, \begin{equation*} 0=[ X_-^\alpha,\ad_{t}^lX_+^\beta]=\sum_{\beta'\in\Pi_\beta}\beta'(t)^l[ X_-^\alpha,(X_+^\beta)_{\beta,\beta'}]. \end{equation*} Consider $t$ such that the values $\beta'(t)$, $\beta'\in\Pi_\beta$, are all distinct and nonzero.
Taking enough values of $l$ gives \begin{equation}\label{proof:prop:bracmaster} 0=[X_-^\alpha,(X_+^\beta)_{\beta,\beta'}]=\sum_{\alpha'\in\Pi_{-\alpha}}[ (X_-^\alpha)_{-\alpha,\alpha'},(X_+^\beta)_{\beta,\beta'}]. \end{equation} Since $\alpha,\beta,\beta'$ are fixed, each term $[(X_-^\alpha)_{-\alpha,\alpha'},(X_+^\beta)_{\beta,\beta'}]$ lies in a different root space. Therefore, \eqref{proof:prop:bracmaster} implies that $[ (X_-^\alpha)_{-\alpha,\alpha'},(X_+^\beta)_{\beta,\beta'}]=0$ for all $\alpha,\alpha',\beta,\beta'$. Noting that $(X_\pm^\alpha)_{\pm \alpha,\alpha'}=(X^\alpha)_{\pm \alpha,\alpha'}$, and writing $X_+=\sum X^\alpha_+$, $X_-=\sum X^\beta_-$, we get $[ (X_-)_{-\alpha,\alpha'},(X_+)_{\beta,\beta'}]=0$. To proceed, recall that $\lie g$ is the product of an abelian Lie algebra and a semi-simple Lie algebra. In particular, the root space $\lie g_{\alpha,\alpha'}(\lie t)$ is 1-dimensional and the brackets $[,]\colon \lie g_{\alpha,\alpha'}(\lie t)\times\lie g_{\beta,\beta'}(\lie t)\to \lie g^\bb C$ are either zero, when $(\alpha+\beta,\alpha'+\beta')\notin\Pi(\lie t)\cup\{(0,0)\}$, or are non-degenerate, i.e., $[x,y]=0$ only if $x=0$ or $y=0$. Let $\pi_{\alpha,\alpha'}\colon \lie g^\bb C\to\lie g_{\alpha,\alpha'}(\lie t)$ be the linear projection onto $\lie g_{\alpha,\alpha'}(\lie t)$ and denote $\pi_{\alpha,\alpha'}^\pm=\pi_{\alpha,\alpha'}\circ \pi_\pm(\lie t^v)$. Suppose $(-\alpha+\beta,\alpha'+\beta')\in\Pi(\lie t)\cup\{(0,0)\}$ (if not, $[(X_-)_{-\alpha,\alpha'},(Y_+)_{\beta,\beta'}]$ trivially vanishes). Since $[\pi_{-\alpha,\alpha'}^-(X),\pi_{\beta,\beta'}^+(X)]=0$ for every $X\in{\cal H}^{\bb C}_{\I}$, we conclude that ${\cal H}^{\bb C}_{\I}=\ker \pi_{-\alpha,\alpha'}^-\cup \ker \pi_{\beta,\beta'}^+$. Since a vector space cannot be the union of two proper subspaces, one of the kernels coincides with ${\cal H}^{\bb C}_{\I}$. In particular, for every pair $\alpha,\alpha',\beta,\beta'$ and every $X,Y\in{\cal H}^{\bb C}_{\I}$, $[(X_-)_{-\alpha,\alpha'},(Y_+)_{\beta,\beta'}]=0$, proving the proposition.
\end{proof} Proposition \ref{prop:bracmaster} shows, in particular, that $\ker\pi^+_{\alpha,\alpha'}\cup \ker\pi^-_{-\alpha,-\alpha'}={\cal H}^{\bb C}_{\I}$. On the other hand, $\cal H_\epsilon(\lie t^v)$ are real spaces, therefore $\pi^\pm_{\alpha,\alpha'}\neq 0$ if and only if its complex conjugate, $\pi^{\pm}_{-\alpha,-\alpha'}$, satisfies $\pi^{\pm}_{-\alpha,-\alpha'}\neq 0$. Putting together these two pieces of information, we conclude that $\ker\pi^+_{\alpha,\alpha'}\cup\ker\pi^-_{\alpha,\alpha'}={\cal H}^{\bb C}_{\I}$, i.e., each root (together with its negative) can appear as a component of at most one of the two spaces $\cal H_+(\lie t^v)$, $\cal H_-(\lie t^v)$. These arguments establish a central property of the sets \begin{equation}\label{eq:upsilon} \Upsilon_\pm(\lie t)=\{(\alpha,\alpha')\in \Pi(\lie t)~|~\exists X\in{\cal H}^{\bb C}_{\I},~(X_\pm)_{\alpha,\alpha'}\neq 0 \}, \end{equation} which are main objects in the next section. We have shown: \begin{cor}\label{cor:root1} $\Upsilon_+(\lie t)\cap \Upsilon_-(\lie t)=\emptyset$. In particular, $\cal H_+(\lie t^v)\cap \cal H_-(\lie t^v)=\{0\}$ and $\cal H_+(\lie t^v)\perp \cal H_-(\lie t^v)$. \end{cor} The orthogonality follows since $\cal H_\pm(\lie t^v)$ are real spaces. \begin{rem} An important step both in here (see Corollary \ref{cor:A}) and in \cite{ranjan} is to show that: \begin{equation}\label{eq:A} A_{X}Y=\h\Big([X_-,Y_-]-[X_+,Y_+]\Big)^v. \end{equation} This equality implies that $A^\xi X=\h\ad_\xi X_+ -\h\ad_\xi X_-$ for all $\xi\in\cal V_{e}$. If one fixes $\lie t^v$ and carries out the same computations as in \cite{ranjan}, one can show that equation \eqref{eq:A} holds for $X=X^\alpha$, $Y=Y^\beta$, when $\alpha\neq \beta$. However, the information is lost for $\alpha=\beta$. We prove \eqref{eq:A} by using a much more refined decomposition.
\end{rem} \subsection{The horizontal decomposition II}\label{sec:brac3} We call an immersed subgroup $H\hookrightarrow G$ \textit{$\cal V$-maximal} if its adjoint representation leaves $\cal V_{{e}}$ invariant and acts transitively on the set of maximal vertical abelian subalgebras. That is, $\Ad_H(\cal V_{{e}})=\cal V_{{e}}$ and, once $\lie t^v$ is fixed, every other maximal vertical abelian subalgebra is of the form $\Ad_h \lie t^v$. We recall a few points: \begin{enumerate} \item if $L_{{e}}$ is a subgroup, then $H=F_{{e}}$ is $\cal V$-maximal; \item if $L_{{e}}$ is an irreducible symmetric space which is not a Lie group, then a $\cal V$-maximal $H$ can be chosen as the subgroup whose Lie algebra is $\lie h=[\cal V_{e}, \cal V_{e}]$ (see Conlon \cite{conlon1972class} or Berestovskii-Nikonorov \cite[Lemma 7]{berestovskii-nikonorov}). It follows that $\lie h\subseteq \cal H_{e}$; \item writing $h^*(\alpha,\beta)=(\alpha\circ \Ad_h^{-1},\beta\circ\Ad_h^{-1})$, \[ \Pi(\Ad_h\lie t)=\{h^*(\alpha,\beta)~|~(\alpha,\beta)\in\Pi(\lie t)\};\] \item $\lie g_{h^*(\alpha,\beta)}(\Ad_h\lie t)=\Ad_h (\lie g_{(\alpha,\beta)}(\lie t))$. \end{enumerate} Given a $\cal V$-maximal $H$, define the vector spaces \begin{align*}\label{eq:calHpm} {\cal H}_\pm(\cal F)&=\sum_{h\in H}\cal H_\pm(\Ad_h\lie t^v),\\ \cal H_0(\cal F)&={\cal H}^{\bb C}_{\I}\cap ({\cal H}_+(\cal F)+{\cal H}_-(\cal F))^\bot. \end{align*} It is clear that $\cal H_\epsilon(\cal F)$ is independent of $H$ and $\lie t^v$; however, both are crucial in our proof of the commutation. We now state a main result: \begin{theorem}\label{prop:Hpm} Let $\cal F$ be a Ranjan foliation. Then \begin{enumerate} \item $\cal H_+(\cal F)\bot\cal H_-(\cal F)$; \item $\cal H_+(\cal F)\cap \cal H_-(\cal F)=\{0\}$; \item $[\cal H_+(\cal F),\cal H_-(\cal F)]=\{0\}$. \end{enumerate} \end{theorem} The current section is a preliminary step for Theorem \ref{prop:Hpm}, in which we prove Proposition \ref{prop:Adtori}.
Theorem \ref{prop:Hpm} is proved in section \ref{sec:proofHpm}. We fix arbitrary maximal abelian subalgebras $\lie t^v\subseteq \lie t$ throughout. \begin{prop}\label{prop:Adtori} For every $h\in H$, $\cal H_\pm(\Ad_h\lie t^v)=\Ad_h\cal H_\pm(\lie t^v)$. \end{prop} The proof of Proposition \ref{prop:Adtori} uses Proposition \ref{prop:bracmaster}, to control the set of $\rm{Ad}_h\lie t$-roots, and the next three lemmas. \begin{lem}\label{lem:AdgH0} For every $h\in H$, $\cal H_0(\Ad_h\lie t^v)=\Ad_h\cal H_0(\lie t^v)$. \end{lem} \begin{proof} $\cal H_0(\lie t^v)={\cal H}^{\bb C}_{\I}\cap\left(\cap_{\xi\in\lie t^v}\ker \ad_\xi\right)$. Therefore: \begin{align*} \Ad_h\cal H_0(\lie t^v)=& \, (\Ad_h{\cal H}^{\bb C}_{\I})\cap\!\left(\;\underset{\mathclap{\xi\in\lie t^v}}{\cap}\Ad_h\ker \ad_\xi\!\right)\!\\ =&\, {\cal H}^{\bb C}_{\I}\cap \left(\;\underset{\mathclap{\xi\in\lie t^v}}{\cap}\ker \ad_{\rm{Ad}_h\xi}\!\right) ={\cal H}^{\bb C}_{\I}\cap\left(\;\;\underset{\mathclap{\;\;\;\xi\in\rm{Ad}_h\lie t^v}}{\cap\;}\;\;\ker \ad_{\xi}\right)\!\!. \qedhere \end{align*} \end{proof} Let $\Upsilon(\lie t)=\Upsilon_+(\lie t)\cup \Upsilon_-(\lie t)$. Since $\Ad_h$ preserves $\cal H_{{e}}$, $h^*\Upsilon(\lie t)=\Upsilon(\Ad_h\lie t)$. Moreover: \begin{lem}\label{prop:AdUpsilon} For every $h\in H$, $\Upsilon_\pm(\Ad_h\lie t)=h^*(\Upsilon_\pm(\lie t))$. \end{lem} \begin{proof} Given $(\alpha,\alpha')\in \Upsilon_+(\lie t)\cup\Upsilon_-(\lie t)$, consider the sets \[H^{\pm}_{\alpha,\alpha'}= \{h\in H~|~h^*(\alpha,\alpha')\in\Upsilon_\pm(\Ad_{h}\lie t)\}.\] Note that $H^+_{\alpha,\alpha'}\cap H^-_{\alpha,\alpha'}=\emptyset$ (Corollary \ref{cor:root1}) and $H^+_{\alpha,\alpha'}\cup H^-_{\alpha,\alpha'}=H$, since $\Upsilon(\Ad_h\lie t)=h^*\Upsilon(\lie t)$. Since $H$ is connected, it is sufficient to prove that $H^\pm_{\alpha,\alpha'}$ are open.
Analogously to the proof of Proposition \ref{prop:bracmaster}, consider the projections \[\pi^\pm_{\alpha,\alpha'}(h)=\pi_{h^*\alpha,h^*\alpha'}\circ\pi^\pm(\Ad_h\lie t^v).\] Now suppose that $(\alpha,\alpha')\in\Upsilon_\pm(\lie t)$. Then, there is $X\in{\cal H}^{\bb C}_{\I}$ such that $\pi^\pm_{\alpha,\alpha'}({e})(X)\neq 0$. Moreover, \[\pi^\pm_{\alpha,\alpha'}(h)(X) = \pi^\pm_{\alpha,\alpha'}(h)(X^{h^*\alpha})=\frac{1}{2h^*\alpha(\Ad_h\xi)}((\textstyle \h\ad_{\Ad_h\xi}\pm A^{\Ad_h\xi})X)_{h^*\alpha,h^*\alpha'} \] for any $\Ad_h\xi\in\Ad_h\lie t^v$ such that $h^*\alpha(\Ad_h\xi)=\alpha(\xi)\neq 0$. By fixing $\xi\in\lie t^v$, we see that $\pi^\pm_{\alpha,\alpha'}(h)$ is continuous as a family of operators with respect to $h$. Thus, if $\pi^\pm_{\alpha,\alpha'}(h)(X)\neq 0$, then $\pi^\pm_{\alpha,\alpha'}(h')(X)\neq 0$ for $h'$ close to $h$, concluding that $H^\pm_{\alpha,\alpha'}$ is open. \end{proof} Given a set $S\subseteq \lie g$, denote by $\cal{L}(S)$ the subalgebra generated by $S$. Define the auxiliary spaces: \begin{align*}\lie H_\pm(\lie t)&= \cal L\left({\bigoplus_{(\alpha,\beta)\in\Upsilon_\pm(\lie t)}}\lie g_{(\alpha,\beta)}(\lie t)\right);\\ \lie H_0(\lie t)&= (\lie H_+(\lie t)+\lie H_-(\lie t))^\perp.\end{align*} Proposition \ref{prop:bracmaster}, Corollary \ref{cor:root1} and invariance by complex conjugation guarantee that $[\lie H_+(\lie t),\lie H_-(\lie t)]=\{0\}$, $\lie H_+(\lie t)\bot \lie H_-(\lie t)$ and $\lie H_+(\lie t)\cap \lie H_-(\lie t)=\{0\}$. Moreover, Lemma \ref{prop:AdUpsilon} implies that $\lie H_\pm(\Ad_h\lie t)=\Ad_h \lie H_\pm(\lie t)$ for all $h\in H$. \begin{proof}[Proof of Proposition \ref{prop:Adtori}] Let $\pi_\pm(\lie t)\colon {\cal H}^{\bb C}_{\I}\to \lie H_\pm(\lie t)$ be the projections defined by the decomposition $\lie g^{\bb C}=\lie H_+(\lie t)\oplus\lie H_-(\lie t)\oplus\lie H_0(\lie t)$. From Corollary \ref{cor:root1}, it follows that $\pi_\pm(\lie t)({\cal H}^{\bb C}_{\I})=\cal H_\pm(\lie t^v)$.
From Lemma \ref{prop:AdUpsilon}, $\pi_{\pm}(\Ad_h\lie t)=\Ad_h\circ \pi_{\pm}(\lie t)\circ \Ad_{h^{-1}}$ for $h\in H$. Therefore, \begin{align*}\cal H_\pm(\Ad_h\lie t^v)&=\pi_\pm(\Ad_h\lie t)({\cal H}^{\bb C}_{\I})=\Ad_h(\pi_\pm(\lie t)(\Ad_{h^{-1}}{\cal H}^{\bb C}_{\I}))\\&=\Ad_h(\pi_\pm(\lie t)({\cal H}^{\bb C}_{\I}))=\Ad_h(\cal H_\pm(\lie t^v)). \qedhere\end{align*} \end{proof} Proposition \ref{prop:Adtori} gives a new characterization of $\cal H_\pm(\cal F)$: $\cal H_\pm(\cal F)$ is the smallest $\Ad_H$-invariant subspace containing $\cal H_\pm(\lie t^v)$. \subsection{Proof of Theorem \ref{prop:Hpm}}\label{sec:proofHpm} In order to prove Theorem \ref{prop:Hpm}, fix $\lie t^v$ and observe from Proposition \ref{prop:Adtori} and the Jacobi identity that $[\cal H_+(\cal F),\cal H_-(\cal F)]=\{0\}$ if and only if $[\Ad_H\cal H_+(\lie t^v), \cal H_-(\lie t^v)]=\{0\}$. Moreover, every element in $H$ can be written as $e^\theta$ for some $\theta\in \lie h$, since $\lie g$ is compact and $H$ is connected. Thus, a power series argument guarantees that $[\cal H_+(\cal F),\cal H_-(\cal F)]=\{0\}$ if and only if $[\ad_\theta^k\cal H_+(\lie t^v),\cal H_-(\lie t^v)]=\{0\}$ for every $\theta\in\lie h$ and $k\geq 0$. Our next aim is to show, by brute force, that $[\ad_\theta^k\cal H_+(\lie t^v),\cal H_-(\lie t^v)]=\{0\}$. We start by studying the elements of $\lie h$. If $L_{{e}}$ is a subgroup, we can take $\lie h=\cal V_I$. If $L_{e}$ is an irreducible symmetric space which is not a group, we choose $\lie h=[\cal V_{{e}},\cal V_{{e}}]$. However, $L_{{e}}$ might be reducible, so we write $\cal V_{{e}}=\bigoplus\Delta_i$, where $\exp(\Delta_i)$ are locally irreducible symmetric spaces. A standard argument shows that the sum of the aforementioned options produces a $\cal V$-maximal group. \begin{lem}\label{claim:Hpm1} If $i\neq j$, then $[\Delta_i,\Delta_j]=0$.
In particular, $H$ is $\cal V$-maximal, for $\lie h=\sum\lie h_i$, where $\lie h_i=\Delta_i$ if $\Delta_i$ is a subalgebra and $\lie h_i=[\Delta_i,\Delta_i]$ otherwise. \end{lem} \begin{proof} Since $L_{{e}}$ is totally geodesic and is locally isometric to a metric product $\exp(\Delta_0)\times\cdots\times\exp(\Delta_s)$, the curvature tensor of $G$ at the identity satisfies $R(\Delta_i,\Delta_j)=\{0\}$. Therefore, $\lr{R(\xi,\eta)\eta,\xi}=\frac{1}{4}\|[\xi,\eta]\|^2=0$ for all $\xi\in\Delta_i$, $\eta\in\Delta_j$. In particular, $\lie h=\sum \lie h_i$ integrates to a subgroup which is, up to covering, a product $H=\tilde H_0\times\cdots\times \tilde H_s$. To see that $H$ acts transitively on the set of maximal vertical abelian subalgebras, note that a maximal abelian subalgebra of $\cal V_{{e}}$ splits as $\lie t^v=\bigoplus \lie t^v\cap \Delta_i$ (one can use arguments as in Lemma \ref{lem:tori0}, for example). Thus, since each $\tilde H_i$ acts transitively on the set of maximal abelian subalgebras of $\Delta_i$, $H$ acts transitively on the set of maximal abelian subalgebras of $\cal V_{{e}}$. \end{proof} Whenever $\Delta_i$ is not a subalgebra, Berestovskii--Nikonorov \cite[Lemma 7]{berestovskii-nikonorov} guarantees that $\exp(\Delta_i\oplus \lie h_i)$ is the full subgroup of isometries of $\exp(\Delta_i)$ and that $(\Delta_i\oplus \lie h_i,\lie h_i)$ is a symmetric pair (note that $[\Delta_i,\lie h_i]\subseteq \Delta_i$, since $\Delta_i$ is a Lie triple system -- see e.g. Helgason \cite{helgasondifferential}).
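For completeness, the symmetric-pair relations can be checked directly from the Lie triple system property $[[\Delta_i,\Delta_i],\Delta_i]\subseteq\Delta_i$: setting $\lie h_i=[\Delta_i,\Delta_i]$, the Jacobi identity gives
\[ [\lie h_i,\Delta_i]\subseteq\Delta_i,\qquad [\Delta_i,\Delta_i]\subseteq\lie h_i,\qquad [\lie h_i,\lie h_i]=[[\Delta_i,\Delta_i],\lie h_i]\subseteq[\Delta_i,[\Delta_i,\lie h_i]]\subseteq[\Delta_i,\Delta_i]=\lie h_i, \]
which are precisely the inclusions defining the symmetric pair $(\Delta_i\oplus\lie h_i,\lie h_i)$.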
In this case, $\lie h_i$ is horizontal: it is orthogonal to $\Delta_j$, $j\neq i$, since $\lr{[\Delta_i,\Delta_i],\Delta_j}=\lr{\Delta_i,[\Delta_j,\Delta_i]}=\{0\}$; and it is orthogonal to $\Delta_i$ since $[\Delta_i,\lie h_i]\subseteq \Delta_i$. We decompose $\lie h=\lie h^v\oplus \lie h^h$ into its vertical and horizontal components, denoting by $\Delta^\vee$ (respectively, $\Delta^\wedge$) the sum of the $\Delta_i$-components which are subalgebras (respectively, which are not subalgebras). Observe that $[\lie h^h,\lie h^v]=\{0\}$ and decompose $\theta\in \lie h$ into its horizontal and vertical components, $\theta=Z+\zeta$. We proceed by induction on $m$ to show that: for all $m,n\geq 0$, \begin{equation}\label{proof:Hpm} [\ad_{\zeta}^m\ad_Z^n\cal H_+(\lie t^v),\cal H_-(\lie t^v)]=\{0\}. \end{equation} First we show that \eqref{proof:Hpm} holds for $m=0$ (Claim \ref{claim:Hpm2}); then, assuming that \eqref{proof:Hpm} holds for $m\leq k$, we show that it holds for $m=k+1$. \begin{claim}\label{claim:Hpm2} $[\ad_Z^n\cal H_+(\lie t^v),\cal H_-(\lie t^v)]=\{0\}$ and $\ad_Z^n \cal H_+(\lie t^v)\perp \cal H_-(\lie t^v)$ for all $n\geq 0$. \end{claim} \begin{proof} Let $Z\in\lie h^h$ and write $Z_\epsilon=\pi_\epsilon(\lie t^v)(Z)\in\cal H_\epsilon(\lie t^v)$. We choose $\lie t=\lie t^v\oplus \lie t'$ such that $Z_0\in\lie t'$. Since $[\lie H_+(\lie t),\cal H_-(\lie t^v)]=\{0\}$, $\lie H_+(\lie t)\perp\cal H_-(\lie t^v)$ and $\cal H_+(\lie t^v)\subseteq \lie H_+(\lie t)$, it is sufficient to show that $\lie H_+(\lie t)$ is $\ad_Z$-invariant. But $\ad_{Z_0}(\lie H_+(\lie t)) \subseteq \lie H_+(\lie t)$, since $Z_0\in\lie t$ and $\lie H_+(\lie t)$ is a sum of weight spaces; $\ad_{Z_+}(\lie H_+(\lie t)) \subseteq \lie H_+(\lie t)$, since $Z_+\in \lie H_+(\lie t)$; and $\ad_{Z_-}(\lie H_+(\lie t))=\{0\} \subseteq \lie H_+(\lie t)$, by Proposition \ref{prop:bracmaster}. \end{proof} From now on, we assume $\lie h^v\neq\{0\}$ and proceed to technical steps.
\begin{claim}\label{claim:Hpm0} $X_\pm^v\in\Delta^\wedge$. In particular, $\ad_\zeta^m(\cal H_\pm(\lie t^v))\subseteq{\cal H}^{\bb C}_{\I}$. \end{claim} \begin{proof} Let $X_\pm^\alpha=(X^{\alpha})_\pm\in\cal H_\pm(\lie t^v)$ be the $\cal H_\pm(\lie t^v)$-component of an $\alpha$-$A$-weight. Recall that $\Delta^\vee$ is a subalgebra and $[\Delta^\vee,\Delta^\wedge]=0$. Therefore, $\ad_{\Delta^\vee}$ preserves the decomposition $\Delta^\vee\oplus\Delta^\wedge\oplus {\cal H}^{\bb C}_{\I}$. Moreover, if $\alpha(\xi)\neq 0$ for some $\xi\in\Delta^\vee\cap \lie t^v$, \begin{equation*} {\cal H}^{\bb C}_{\I}\ni (\alpha(\xi)\pm \textstyle{\h}\ad_{\xi})X^\alpha=(\alpha(\xi)\pm\h\ad_{\xi})(X^\alpha_++X^\alpha_-)=2\alpha(\xi)X^\alpha_\pm. \end{equation*} Therefore, $(X_+^\alpha)^v\neq 0$ only if $\alpha(\xi)=0$ for every $\xi\in \Delta^{\vee}\cap \lie t^v$. Since $\cal H_0(\lie t^v)\subseteq {\cal H}^{\bb C}_{\I}$, we conclude that $(X_+^\alpha)^v\neq 0$ only if $\alpha(\xi')\neq 0$ for some $\xi'\in\lie t^v\cap \Delta^\wedge$; normalizing $\xi'$ (in $(\lie t^v)^{\bb C}$, if necessary), we may assume $\alpha(\xi')=1$. Thus, \[\lr{X^\alpha_\pm,\Delta^\vee}=\lr{\pm\tfrac{1}{2}\ad_{\xi'}X^\alpha_\pm,\Delta^\vee}=-\tfrac{1}{2}\lr{X^\alpha_\pm,\ad_{\xi'}\Delta^\vee}=0. \] The Claim is concluded by observing that $\Delta^\vee,\Delta^\wedge$ are real spaces. \end{proof} The next claim is a common induction step in the next proofs. \begin{claim}\label{claim:Hpm induction} Suppose that $X'\in {\cal H}^{\bb C}_{\I}\cap (\cal H_0(\lie t^v)+\cal H_+(\lie t^v))$ and $[X',\cal H_-(\lie t^v)]=0$. Then $\ad_\zeta X'\in {\cal H}^{\bb C}_{\I}\cap (\cal H_0(\lie t^v)+\cal H_+(\lie t^v))$ and $[\ad_\zeta X',\cal H_-(\lie t^v)]=0$. \end{claim} \begin{proof} Note that $\ad_\zeta X'\in {\cal H}^{\bb C}_{\I}\cap (\cal H_0(\lie t^v)+\cal H_+(\lie t^v))$: \begin{equation}\label{eq:commuting} \lr{\ad_\zeta X',\cal H_-(\lie t^v)}=\lr{\zeta,[X',\cal H_-(\lie t^v)]}=0.
\end{equation} Therefore, since $\ad_\zeta X'$ has no $\cal H_-(\lie t^v)$-component, \begin{equation*} [\ad_\zeta X',Y_-]=[(\ad_\zeta X')_0,Y_-]=[(\ad_\zeta X'_0)_0,Y_-]+[(\ad_\zeta X'_+)_0,Y_-]. \end{equation*} We show that both terms are zero. Since $\lie t^v\cap \Delta^\vee$ is a maximal torus, we can write $\zeta=\zeta_0+\sum\zeta_\alpha$, where $\zeta_0\in\lie t^v\cap \Delta^\vee$ and $\zeta_\alpha\in\lie g_\alpha(\lie t^v)$, $\alpha\neq 0$. On the other hand, $\ad_{\zeta_\alpha}X_0'\in\lie g_\alpha(\lie t^v)$. Thus, $(\ad_\zeta X'_0)_0=(\ad_{\zeta_0} X'_0)_0=0$, since $\ad_{\lie t^v}\cal H_0(\lie t^v)=0$. In its turn, the second term belongs to $\lie H_-(\lie t)$, for $\lie t'$ chosen such that $(\ad_\zeta X'_+)_0\in \lie t'$. On the other hand, by replacing $X'$ by $X'_+$ in equation \eqref{eq:commuting}, we have $[(\ad_\zeta (X'_+))_0,Y_-]=[\ad_\zeta X'_+,Y_-]$. Thus, \begin{equation*} \lr{[\ad_\zeta X'_+,Y_-],\lie H_-(\lie t)}=\lr{[\ad_\zeta Y_-,X'_+],\lie H_-(\lie t)}=\lr{\ad_\zeta Y_-,[X'_+,\lie H_-(\lie t)]}=0. \end{equation*} Since $\lie H_-(\lie t)$ is a real space, we conclude that $[\ad_\zeta X'_+,Y_-]=[\ad_\zeta X',Y_-]=0$. \end{proof} In particular, $\ad_\zeta^m X_+\in{\cal H}^{\bb C}_{\I}\cap(\cal H_0(\lie t^v)+\cal H_+(\lie t^v))$ for $m\geq 1$. \begin{claim}\label{claim:Hpm2.66} For any $X\in{\cal H}^{\bb C}_{\I}$, there is a decomposition $X_+=\overline X_++X'$ where $\overline X_+\in {\cal H}^{\bb C}_{\I}\cap \cal H_+(\lie t^v)$ and $\ad_\zeta X'=0$. Moreover, $\ad_Z^n \overline X_+\in {\cal H}^{\bb C}_{\I}\cap (\cal H_0(\lie t^v)+\cal H_+(\lie t^v))$. \end{claim} \begin{proof} We prove the Claim for each component $X^\alpha_+=(X^\alpha)_+$ of $X_+$, considering separate cases: if $\alpha(\lie t^v\cap \Delta^\vee)\neq \{0\}$, then $\overline X_+=X^\alpha_+$ and $X'=0$ satisfy the desired conditions.
In the remaining case, $\alpha(\lie t^v\cap \Delta^\vee)=\{0\}$ and, since $[\Delta^\vee,\Delta^\wedge]=\{0\}$: \[\ad_{\xi'}\ad_\zeta X_+^\alpha=\ad_\zeta\ad_{\xi'}X^\alpha_+=2\alpha(\xi')\ad_\zeta X^\alpha_+ \] for every $\xi'\in\lie t^v\cap \Delta^\wedge$. Note that $\ad_\zeta X^\alpha_+ \in \cal H_+(\lie t^v)$ (Claim \ref{claim:Hpm induction} plus the fact that $\ad_{\xi'}\ad_\zeta X^\alpha_+\neq 0 $ for some $\xi'\in\lie t^v$), thus $\ad_\zeta$ preserves the eigenspaces $V_\lambda$ of $\ad_{\xi'}$. Hence, $X^\alpha_+$ can be decomposed as $\overline X_++X'$, where $\overline X_+\in\ad_\zeta(V_\lambda)$ and $X'\in\ker\ad_\zeta$. The second statement follows since $\ad_Z$ preserves ${\cal H}^{\bb C}_{\I}$ and from Claim \ref{claim:Hpm2}: \begin{equation} \lr{\ad_Z^n \overline X_+,\cal H_-(\lie t^v)}=\lr{Z,[\ad_Z^{n-1}\overline X_+,\cal H_-(\lie t^v)]}=0.\qedhere \end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{prop:Hpm}] The main item in Theorem \ref{prop:Hpm} is the third, with which we start the proof. To this aim, we proceed by induction on $m\geq 1$ on: \begin{align}\label{proof:Hpm:0} \ad_\zeta^m\ad_Z^n X_+\in{\cal H}^{\bb C}_{\I}\cap(\cal H_0(\lie t^v)+\cal H_+(\lie t^v)),\\ [\ad_{\zeta}^{m}\ad_Z^nX_+,\cal H_-(\lie t^v)]=0, \label{proof:Hpm:1} \end{align} for every $n\geq 0$. From Claim \ref{claim:Hpm2.66}, induction on \eqref{proof:Hpm:0}, \eqref{proof:Hpm:1} is equivalent to induction on \begin{align*} \ad_\zeta^m\ad_Z^n \overline X_+\in{\cal H}^{\bb C}_{\I}\cap(\cal H_0(\lie t^v)+\cal H_+(\lie t^v)),\\ [\ad_{\zeta}^{m}\ad_Z^n\overline X_+,\cal H_-(\lie t^v)]=0, \end{align*} for $\overline X_+\in{\cal H}^{\bb C}_{\I}\cap \cal H_{+}(\lie t^v)$. The last induction follows from Claim \ref{claim:Hpm induction}, concluding item \textit{(3)} in Theorem \ref{prop:Hpm}. Since $\cal H_{\pm}(\cal F)$ are real spaces, item \textit{(2)} follows from item \textit{(1)}. Moreover, item \textit{(1)} holds once it is proved that $\ad_\zeta^m\ad^n_Z X_+\perp \cal H_-(\lie t^v)$ for every $m,n\geq 0$.
The case $m=0$ is covered by Claim \ref{claim:Hpm2}, and the case $m\geq 1$ by \eqref{proof:Hpm:0}. \end{proof} As an application of Theorem \ref{prop:Hpm}, we characterize the $A$-tensor. \begin{lem}\label{cor:A} Let $\cal F$ be a Ranjan foliation and write $X=X_0+X_++X_-$, where $X_\epsilon\in\cal H_\epsilon(\cal F)$. Then $A^\xi X=\h\ad_\xi(X_+-X_-).$ In particular, \begin{equation}\label{eq:Aproved} A_XY=\h\left([X_-,Y_-]-[X_+,Y_+] \right)^v. \end{equation}\end{lem} \begin{proof} Fix $X\in\cal H_{{e}}$ and $\xi\in\cal V_{{e}}$. Let $\lie t^v$ be a maximal abelian vertical subalgebra containing $\xi$. Using the linearity of $A^\xi$, we divide the proof into two cases: $(i)$ $X$ has no $\cal H_0(\lie t^v)$-component; $(ii)$ $X\in\cal H_0(\lie t^v)$. Denote by \begin{gather*} \pi^\epsilon\colon \cal H_+(\lie t^v)+\cal H_-(\lie t^v)+\cal H_0(\lie t^v)\to \cal H_\epsilon(\cal F)\\ \pi^\epsilon(\lie t^v)\colon {\cal H}^{\bb C}_{\I} \to \cal H_\epsilon(\lie t^v) \end{gather*} the respective orthogonal projections. Supposing that $\pi^0(\lie t^v)(X)=0$, we have \[\textstyle A^\xi X= A^\xi(\pi^+(\lie t^v)(X)+\pi^-(\lie t^v)(X))=\h\ad_\xi (\pi^+(\lie t^v)(X))-\h\ad_\xi(\pi^-(\lie t^v)(X)).\] On the other hand, since $\cal H_\pm(\lie t^v)\subseteq \cal H_\pm(\cal F)$, $\pi^\pm\circ \pi^\pm(\lie t^v)=\pi^\pm(\lie t^v)$ and $\pi^{\mp}\circ\pi^{\pm}(\lie t^v)=0$. Now, assume that $X\in\cal H_0(\lie t^v)$. We claim that $\pi^\pm(\cal H_0(\lie t^v))\subseteq \cal H_0(\lie t^v)$. By Claim \ref{claim:Hpm2} and \eqref{proof:Hpm:0}, we conclude that an element $X_\pm$ is composed of components lying either in $\cal H_0(\lie t^v)$ (components with $m\geq 1$ in \eqref{proof:Hpm:0}) or in $\sum_{\lie t\supseteq \lie t^v}\lie H_\pm(\lie t)$ (Claim \ref{claim:Hpm2}).
That is, $\cal H_\pm(\cal F)$ decomposes as \begin{equation}\label{eq:HFbounds} \cal H_\pm(\cal F)= (\cal H_\pm(\cal F)\cap \cal H_0(\lie t^v))\oplus\left( \cal H_\pm(\cal F)\cap \sum_{\lie t\supseteq \lie t^v}\lie H_\pm(\lie t)\right). \end{equation} Since the second space is orthogonal to $\cal H_0(\lie t^v)$, $\pi^\pm(\cal H_0(\lie t^v))$ has no component in it, thus concluding that $\pi^\pm(\cal H_0(\lie t^v))\subseteq \cal H_0(\lie t^v)$. With $A^\xi X=\h\ad_\xi(X_+-X_-)$ at hand, equation \eqref{eq:Aproved} is straightforward: \begin{align*} -2\lr{A_XY,\xi}&= 2\lr{A^\xi X,Y }=\lr{\ad_\xi (X_+-X_-),Y}=\lr{\ad_\xi (X_+-X_-),Y_++Y_-}\\ &=\lr{\xi,[X_+,Y_++Y_-]-[X_-,Y_++Y_-]}=\lr{\xi,[X_+,Y_+]-[X_-,Y_-]}.\qedhere \end{align*} \end{proof} \subsection{Proof of Theorem \ref{thm:grove}}\label{sec:proof} We now have all the elements to prove Theorem \ref{thm:grove}. To simplify notation, we denote $\cal H_\epsilon(\cal F)=\cal H_\epsilon$. As pointed out in section \ref{sec:pi1}, it is sufficient to prove Theorem \ref{thm:grove} for irreducible foliations. In this case, Theorem \ref{thm:A} guarantees that $\cal V_{e}$ is spanned by the image of the $A$-tensor. Thus Lemma \ref{cor:A} gives: \begin{equation*} \cal V_{e}\subseteq {\cal H}^{\bb C}_{\I}+ [\cal H_+ ,\cal H_+ ]+[\cal H_- ,\cal H_- ]. \end{equation*} In particular: \begin{equation}\label{eq:V} \lie g^\bb C \subseteq \cal H_0 +\cal L(\cal H_+ )+\cal L(\cal H_- ), \end{equation} where $\cal L(S)$ is the Lie algebra generated by $S\subseteq \lie g^\bb C$. Moreover: \begin{claim}\label{claim:H0} $[\cal H_0 ,\cal H_\epsilon ]\subseteq \cal H_\epsilon $. \end{claim} \begin{proof} Observe that \begin{align*} \cal H_0(\cal F)&={\cal H}^{\bb C}_{\I}\cap ({\cal H}_+(\cal F)+{\cal H}_-(\cal F))^\bot\\ &={\cal H}^{\bb C}_{\I}\cap \bigcap_{h\in H}({\cal H}_+(\Ad_h\lie t^v)+{\cal H}_-(\Ad_h\lie t^v))^\bot=\bigcap_{h\in H}\cal H_0(\Ad_h\lie t^v)\\ &=\{X\in{\cal H}^{\bb C}_{\I}~|~ \ad_\xi X=0,~\forall\xi\in\cal V_I\}.
\end{align*} In particular, $\cal H_0 $ is a subalgebra: let $X,Y\in\cal H_0 $, $\xi\in\cal V_{e}$, then \[\ad_\xi[X,Y]=[\ad_\xi X,Y]+[X,\ad_\xi Y]=0. \] Moreover, $[\cal H_0 ,\cal{V}_{e}]=\{0\}$, thus $\cal H_0^\bot$, ${\cal H}^{\bb C}_{\I}$ and therefore $\cal H_0 ^\perp\cap {\cal H}^{\bb C}_{\I}$ are invariant under $\ad_{\cal H_0 }$. In particular, $[\cal H_0 ,\cal H_+ +\cal H_- ]\subseteq {\cal H}^{\bb C}_{\I}\cap(\cal H_+ +\cal H_-) $. On the other hand, \[\lr{[\cal H_0 ,\cal H_\pm ],\cal H_\mp }=-\lr{\cal H_0 ,[\cal H_\pm ,\cal H_\mp ] }=0.\qedhere \] \end{proof} \begin{cor} $\cal L(\cal H_\pm )$ is an ideal. \end{cor} \begin{proof} Let $Z\in\lie g^\bb C$. Then $Z=Z_0+Z_++Z_-$, where $Z_0\in\cal H_0$ and $Z_\pm\in\cal L(\cal H_\pm )$ (equation \eqref{eq:V}). But $[Z_0,~\cal L(\cal H_\pm)]\subseteq \cal L(\cal H_\pm) $ by Claim \ref{claim:H0}; $[Z_\pm,\cal L(\cal H_\pm)]\subseteq \cal L(\cal H_\pm)$ by the definition of $\cal L(\cal H_\pm)$; and $[Z_\mp,\cal L(\cal H_\pm)]=\{0\}$ by Theorem \ref{prop:Hpm} and the Jacobi identity. \end{proof} Since $\cal L(\cal H_\pm)$ are real subspaces, their real parts are ideals of $\lie g$, which we denote by $\cal L(\cal H_\pm)$ as well. In particular, $G$ decomposes as $G= G_+{\times }G_-{\times }G_0$, where $G_\epsilon$ is the subgroup whose Lie algebra is \begin{align*} \lie g_\pm & = \cal L(\cal H_\pm)\cap (\cal L(\cal H_+)\cap \cal L(\cal H_-))^\perp,\\ \lie g_0& = (\lie g_++\lie g_-)^\perp+\cal L(\cal H_+)\cap \cal L(\cal H_-). \end{align*} By observing that $\cal H_\pm\perp \cal L(\cal H_\mp)$, we conclude that $\cal H_++\cal H_-\perp \lie g_0$. We claim that $A_{X}Y=\h([X_-,Y_-]-[X_+,Y_+])^v$, where $X_\pm$ denotes the $\lie g_\pm$-component of $X$. Denote by $\pi^\epsilon(Z)$ the $\cal H_\epsilon $-component of $Z\in{\cal H}^{\bb C}_{\I}$. Since $[\pi^0(Z),\xi]=0$ for all $\xi\in\cal V_{e}$, we conclude that $[\pi^0(Z)_\epsilon,\xi]=[\pi^0(Z)_\epsilon,\xi_\epsilon] =0$ for $\epsilon=0,+,-$.
Thus, \begin{align*} \lr{[X_-,Y_-],\xi}=&\lr{[\pi^-(X)+\pi^0(X)_-,\pi^-(Y)+\pi^0(Y)_-],\xi}\\ =&\lr{[\pi^-(X),\pi^-(Y)+\pi^0(Y)_-],\xi}+\lr{\pi^-(Y)+\pi^0(Y)_-,[\xi,\pi^0(X)_-]}\\ =&\lr{[\pi^-(X),\pi^-(Y)],\xi}+\lr{\pi^-(Y)+\pi^0(Y)_-,[\xi,\pi^0(X)_-]}+\lr{\pi^-(X),[\pi^0(Y)_-,\xi]}\\ =&\lr{[\pi^-(X),\pi^-(Y)],\xi}. \end{align*} Analogously, $\lr{[X_+,Y_+],\xi}=\lr{[\pi^+(X),\pi^+(Y)],\xi}$. The claim now follows from Lemma \ref{cor:A}. The proof is almost finished and follows from the Munteanu--Tapp Corollary \ref{cor:TappMunteanu}. It is only left to produce an isometry of $G$ whose resulting foliation satisfies $A^\xi X=\h\ad_\xi X$. Let $\Phi\colon G\to G$ be the isometric involution defined by \[\Phi(g_+,g_-,g_0)=(g_+,g_-^{-1},g_0), \] and consider \[\tilde{\cal F}=\{\Phi(L)~|~L\in \cal F \}.\] Denote $\tilde Z=d\Phi(Z)$, $\tilde{\cal V}_{e}=d\Phi(\cal V_{e})$, and by $\tilde A$ the $A$-tensor of $\tilde{\cal F}$. Observe that $d\Phi_e(Z_\pm)=\pm Z_\pm$ and recall that $\tilde A_{\tilde X}\tilde Y=d\Phi(A_XY)$. We have: \begin{align*} \tilde A_{\tilde X}\tilde Y&=d\Phi(A_XY)=\h\left(d\Phi([X_-,Y_-]-[X_+,Y_+]) \right)^{\tilde{\cal V}_{e}}\\ &=\h\left(-[X_-,Y_-]-[X_+,Y_+] \right)^{\tilde{\cal V}_{e}}=-\h\left([\tilde X_-,\tilde Y_-]+[\tilde X_+,\tilde Y_+] \right)^{\tilde{\cal V}_{e}}, \end{align*} where the third equality follows since $[X_\pm,Y_\pm]\in \lie g_\pm$ and the last since \[[\tilde X_\pm,\tilde Y_\pm]=[\pm X_\pm,\pm Y_\pm ]=[X_\pm,Y_\pm]. \] In particular, the respective dual tensor is given by \[\tilde A^{\tilde \xi}\tilde X=\h\ad_{\tilde \xi} \tilde X. \] Corollary \ref{cor:TappMunteanu} guarantees that $\tilde{\cal V}_I$ is a subalgebra and that $\tilde{\cal F}$ is the coset fibration defined by the subgroup integrated by $\tilde{\cal V}_{e}$, completing the proof. $\;\;\;\Box$\\ \end{document}
\begin{document} \title{Greedy Criterion in Orthogonal Greedy Learning} \author{{Lin Xu,~Shaobo Lin,~Jinshan Zeng,~Xia Liu and~Zongben Xu} \thanks{L. Xu, X. Liu and Z. B. Xu are with the Institute for Information and System Sciences, Xi'an Jiaotong University, Xi'an 710049, China.} \thanks{S. B. Lin is with the College of Mathematics and Information Science, Wenzhou University, Wenzhou 325035, China.} \thanks{J. S. Zeng is with the College of Computer Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China.}} \maketitle \begin{abstract} Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via the steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we find that SGD is not the unique greedy criterion and introduce a new greedy criterion, called the ``$\delta$-greedy threshold'', for learning. Based on the new greedy criterion, we derive an adaptive termination rule for OGL. Our theoretical study shows that the new learning scheme can achieve the existing (almost) optimal learning rate of OGL. Extensive numerical experiments are provided to show that the new scheme can achieve almost optimal generalization performance, while requiring less computation than OGL. \end{abstract} \begin{IEEEkeywords} Supervised learning, greedy algorithms, orthogonal greedy learning, greedy criterion, generalization capability. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{S}{upervised} learning focuses on synthesizing a function to approximate an underlying relationship between inputs and outputs based on finitely many input-output samples. Commonly, a system tackling supervised learning problems is called a learning system.
A standard learning system usually comprises a hypothesis space, an optimization strategy and a learning algorithm. The hypothesis space is a family of parameterized functions providing a candidate set of estimators, the optimization strategy formulates an optimization problem to define the estimator based on samples, and the learning algorithm is an inference procedure that numerically solves the optimization problem. Dictionary learning is a special learning system, whose hypothesis spaces are linear combinations of atoms in some given dictionaries. Here, the dictionary denotes a family of base learners \cite{Temlaykov2008}. For hypothesis spaces of this type, many regularization schemes such as the bridge estimator \cite{Armagan2009}, ridge estimator \cite{Golub1979} and Lasso estimator \cite{Tibshirani1995} are commonly used optimization strategies. When the scale of the dictionary is moderate (i.e., about hundreds of atoms), these optimization strategies can be effectively realized by various learning algorithms such as the regularized least squares algorithms \cite{Wu2006}, iterative thresholding algorithms \cite{Daubechies2004} and iterative re-weighted algorithms \cite{Daubechies2010}. However, when presented with a large dictionary, a large portion of the aforementioned learning algorithms are time-consuming and, even worse, they may make the corresponding learning systems sluggish. Greedy learning or, more specifically, learning by greedy-type algorithms, provides a possible way to circumvent the drawbacks of regularization methods \cite{Barron2008}. Greedy algorithms are stepwise inference processes that start from a null model and heuristically solve the problem of making the locally optimal choice at each step, with the hope of finding a global optimum. Within a moderate number of iterations, greedy algorithms possess a remarkable computational advantage over regularization schemes \cite{Temlaykov2008}.
This property has triggered intensive research on greedy algorithms in signal processing \cite{Dai2009,Kunis2008,Tropp2004}, inverse problems \cite{Donoho2012,Tropp2010}, sparse approximation \cite{Donoho2007,Temlaykov2011} and machine learning \cite{Barron2008,Chen2013a,Lin2013a}. \subsection{Motivations of greedy criteria} Orthogonal greedy learning (OGL) is a special greedy learning strategy. It selects a new atom based on SGD in each iteration and then constructs an estimator through orthogonal projection onto the subspace spanned by the selected atoms. It is well known that SGD needs to traverse the whole dictionary, which leads to an insufferable computational burden when the scale of the dictionary is large. Moreover, since the samples are noisy, the generalization capability of OGL is sensitive to the number of iterations. In other words, due to the SGD criterion, a slight change in the number of atoms may lead to a great change in the generalization performance. To overcome the above problems of OGL, a natural idea is to redesign the criterion for choosing a new atom by taking the ``greedy criterion'' issue into account. Fig. \ref{IdeaGC} gives an intuitive description of how to quantify the greedy criterion, where $r_k$ represents the residual at the $k$-th iteration, $g$ is an arbitrary atom from the dictionary and $\theta$ is the included angle between $r_k$ and $g$. In Fig. \ref{IdeaGC} (a), both $r_k$ and $g$ are normalized to the unit ball, since the greedy criterion focuses on orientation rather than magnitude. The cosine of the angle $\theta$ (cosine similarity) is used to quantify the greedy criterion. As shown in Fig. \ref{IdeaGC} (b), the atom $g_k$ possessing the smallest $\theta$ is regarded as the greediest one at each iteration in OGL. \begin{figure} \caption{An intuitive description of the greedy criterion. (a) Normalize the current residual $r_k$ and atoms $g$ to the unit ball. (b) The atom $g_k$ possessing the smallest $\theta$ is regarded as the greediest one at each iteration.} \label{IdeaGC} \end{figure} Since the greedy criterion can be quantified by the cosine similarity, a preferable way to circumvent the aforementioned problems of OGL is to weaken the level of greed by thresholding the cosine similarity. In particular, instead of traversing the dictionary, we can select the first atom satisfying the thresholding condition. Such a method essentially reduces the computational cost of OGL and makes the learning process more stable. \subsection{Our contributions} Different from the other three issues, the ``greedy criterion'' issue, to the best of our knowledge, has not been studied for the learning purpose. The aim of the present paper is to reveal the importance and necessity of studying the ``greedy criterion'' issue in OGL. The main contributions can be summarized as follows. $\bullet$ We argue that SGD is not the unique criterion for OGL. There are many other greedy criteria in greedy learning, which possess learning performance similar to that of SGD. $\bullet$ We use a new greedy criterion called the ``$\delta$-greedy threshold'' to quantify the level of greed in OGL. Although a similar criterion has already been used in greedy approximation \cite{Temlaykov2008a}, the novelty of translating it into greedy learning is that using this criterion can significantly accelerate the learning process. We can also prove that, if the number of iterations is appropriately specified, then OGL with the ``$\delta$-greedy threshold'' can reach the existing (almost) optimal learning rate of OGL \cite{Barron2008}. $\bullet$ Based on the ``$\delta$-greedy threshold'' criterion, we propose an adaptive termination rule for OGL and then provide a complete learning system called $\delta$-thresholding orthogonal greedy learning ($\delta$-TOGL).
Different from classical termination rules that are devoted to searching for the appropriate number of iterations based on the bias-variance balance principle \cite{Barron2008,Xu2014}, our study implies that the balance can also be attained through setting a suitable greedy threshold. This phenomenon reveals the essential importance of the ``greedy criterion'' issue. We also present the theoretical justification of $\delta$-TOGL. $\bullet$ We carefully analyze the generalization performance and computational cost of $\delta$-TOGL, comparing it with other popular learning strategies such as pure greedy learning (PGL) \cite{Barron2008, Temlaykov2008}, OGL, regularized least squares (RLS) \cite{Hoerl1970} and the fast iterative shrinkage-thresholding algorithm (FISTA) \cite{Beck2009}, through extensive numerical studies. The main advantage of $\delta$-TOGL is that it can reduce the computational cost without sacrificing the generalization capability. In many applications, it can learn hundreds of times faster than conventional methods. \subsection{Organization} The rest of the paper is organized as follows. In Section 2, we present a brief introduction to statistical learning theory and greedy learning. In Section 3, we introduce the ``$\delta$-greedy threshold'' criterion in OGL and provide its feasibility justification. In Section 4, based on the ``$\delta$-greedy threshold'' criterion, we propose an adaptive termination rule and the corresponding $\delta$-TOGL system. The theoretical feasibility of the $\delta$-TOGL system is also given in this section. In Section 5, we present numerical simulation experiments to verify our arguments. In Section 6, $\delta$-TOGL is tested with real-world data. In Section 7, we provide the detailed proofs of the main results. Finally, the conclusion is drawn in the last section.
\subsection{Statistical learning theory} Suppose that the samples $\mathbf{z}=(x_{i},y_{i})_{i=1}^{m}$ are drawn independently and identically from $Z:=X\times Y$ according to an unknown probability distribution $\rho $ which admits the decomposition \begin{equation} \rho (x,y)=\rho _{X}(x)\rho (y|x). \end{equation} Let $f:X\rightarrow Y$ be an approximation of the underlying relation between the input and output spaces. A commonly used measure of the quality of $f$ is the generalization error, defined by \begin{equation} \mathcal{E}(f):=\int_{Z}(f(x)-y)^{2}d\rho , \end{equation} which is minimized by the regression function \cite{Cucker2001} \begin{equation} f_{\rho }(x):=\int_{Y}yd\rho (y|x). \end{equation} The goal of learning is to find the best approximation of the regression function $f_{\rho }$. Let $L_{\rho _{_{X}}}^{2}$ be the Hilbert space of $\rho _{X}$ square integrable functions on $X$, with norm $\Vert \cdot \Vert _{\rho }.$ It is known that, for every $f\in L_{\rho _{X}}^{2}$, it holds that \begin{equation} \mathcal{E}(f)-\mathcal{E}(f_{\rho })=\Vert f-f_{\rho }\Vert _{\rho }^{2}. \end{equation} Without loss of generality, we assume $y \in [-M,M]$ almost surely. Thus, it is reasonable to truncate the estimator to $[-M,M]$. That is, if we define \begin{equation} \pi_Mu:=\left\{\begin{array}{l l} u, & \mbox{if}\ |u|\leq M \\ M\text{sign}(u),& \mbox{otherwise} \end{array} \right. \end{equation} as the truncation operator, where $\text{sign}(u)$ represents the sign function of $u$, then \begin{equation} \|\pi_Mf_{\bf z}-f_\rho\|^2_\rho\leq \|f_{\bf z}-f_\rho\|^2_\rho. \end{equation} \subsection{Greedy learning} The four most important elements of greedy learning are \textit{dictionary selection}, \textit{greedy criterion}, \textit{iterative format} and \textit{termination rule}. This is essentially different from greedy approximation, which focuses only on the \textit{dictionary selection} and \textit{iterative format} issues \cite{Temlaykov2008}.
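Before detailing these four elements, we note that the truncation operator $\pi_M$ defined above is simply a clipping operation. The following minimal sketch (our own illustrative code, not part of the paper) also checks numerically why truncation cannot increase the generalization error: it never moves an estimate farther from any target in $[-M,M]$.

```python
import numpy as np

def truncate(u, M):
    """Truncation operator pi_M: returns u where |u| <= M, else M*sign(u)."""
    return np.clip(u, -M, M)

# Truncation can only move an estimate closer to a target in [-M, M],
# which is why ||pi_M f_z - f_rho||_rho <= ||f_z - f_rho||_rho holds.
f_vals = np.array([-3.0, -0.5, 0.0, 2.5])   # raw estimator values
y_vals = np.array([-1.0, -0.5, 1.0, 1.0])   # targets inside [-1, 1]
assert np.all(np.abs(truncate(f_vals, 1.0) - y_vals)
              <= np.abs(f_vals - y_vals))
```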
Greedy learning concerns not only the approximation capability, but also the cost, such as the model complexity, which one must pay to achieve a specified approximation accuracy. In a nutshell, greedy learning can be regarded as a four-issue learning scheme. $\bullet$ \textit{Dictionary selection}: this issue concerns selecting a suitable dictionary for a given learning task. As a classical topic of greedy approximation, there is a great variety of dictionaries available for greedy learning. Typical examples include the greedy basis \cite{Temlaykov2008}, quasi-greedy basis \cite{Temlaykov2003}, redundant dictionary \cite{Devore1996}, orthogonal basis \cite{Temlyakov1998}, kernel-based sample dependent dictionary \cite{Chen2013,Lin2013a} and tree \cite{Friedman2001}. $\bullet$ \textit{Greedy criterion}: this issue specifies the criterion for choosing a new atom from the dictionary in each greedy step. Besides the widely used steepest gradient descent (SGD) method \cite{Devore1996}, there are also many methods such as the weak greedy \cite{Temlaykov2000}, thresholding greedy \cite{Temlaykov2008} and super greedy \cite{Liu2012} to quantify the greedy criterion for approximation purposes. However, to the best of our knowledge, only the SGD criterion is employed in greedy learning, since all the results in greedy approximation \cite{Liu2012,Temlaykov2000,Temlaykov2008} imply that SGD is superior to other criteria. $\bullet$ \textit{Iterative format}: this issue focuses on how to define a new estimator based on the selected atoms. Similar to ``dictionary selection'', the ``iterative format'' issue is also a classical topic in greedy approximation. There are several types of iterative schemes \cite{Temlaykov2008}. Among these, the three most commonly used are the pure greedy \cite{Konyagin1999}, orthogonal greedy \cite{Devore1996} and relaxed greedy \cite{Temlaykov2008a} formats.
Each iterative format possesses its own pros and cons \cite{Temlaykov2003,Temlaykov2008}. For instance, compared with the orthogonal greedy format, the pure and relaxed greedy formats have benefits in computation but suffer from either a low convergence rate or a narrow applicable scope. $\bullet$ \textit{Termination rule}: this issue specifies how to terminate the learning process. The termination rule is regarded as the main difference between greedy approximation and learning, and has been studied recently \cite{Barron2008,Chen2013,Lin2013a,Xu2014}. For example, Barron et al. \cite{Barron2008} proposed an $l^0$-based complexity regularization strategy as the termination rule, and Chen et al. \cite{Chen2013} provided an $l^1$-based adaptive termination rule. Let $H$ be a Hilbert space endowed with norm $\|\cdot\|_H$ and inner product $\langle\cdot,\cdot\rangle_H$. Let $\mathcal D=\{g\}_{g\in\mathcal D}$ be a given dictionary satisfying $\sup_{g\in \mathcal D,x\in X}|g(x)|\leq 1$. Denote $\mathcal L_1=\{f:f=\sum_{g\in \mathcal D}a_gg\}$ as a Banach space endowed with the norm \begin{equation} \|f\|_{\mathcal L_1}:=\inf_{\{a_g\}_{g\in\mathcal D}}\left\{\sum_{g\in \mathcal D}|a_g|:f=\sum_{g\in \mathcal D}a_gg\right\}. \end{equation} There exist several types of greedy algorithms \cite{Temlaykov2003}. The three most commonly used are the pure greedy algorithm (PGA) \cite{Konyagin1999}, orthogonal greedy algorithm (OGA) \cite{Devore1996} and relaxed greedy algorithm (RGA) \cite{Temlaykov2008a}. These algorithms are initialized with $f_0:=0$. The new approximation $f_k\; (k \ge 1)$ is defined based on $r_{k-1}:=f-f_{k-1}$. In OGA, $f_k$ is defined by \begin{equation} f_k=P_{V_{{\bf z},k}} f, \end{equation} where $P_{V_{{\bf z},k}}$ is the orthogonal projection onto the space $V_{{\bf z},k}=\mbox{span}\{g_1,\dots,g_k\}$ and $g_k$ is defined as \begin{equation} g_k=\arg\max_{g\in\mathcal D}|\langle r_{k-1},g\rangle_H|.
\end{equation} Given ${\bf z}=(x_i,y_i)_{i=1}^m$, the empirical inner product and norm are defined by \begin{equation} \langle f,g\rangle_m:=\frac1m\sum_{i=1}^mf(x_i)g(x_i), \end{equation} and \begin{equation} \|f\|_m^2:=\frac1m\sum_{i=1}^m|f(x_i)|^2. \end{equation} Setting $f_{\bf z}^0=0$, the four aforementioned issues are addressed in OGL as follows: \begin{itemize} \item Dictionary selection: Select a suitable dictionary $\mathcal D_n:=\{g_1,\dots,g_n\}$. \item Greedy criterion: \begin{equation}\label{OGAgd} g_k=\arg\max_{g\in\mathcal D_n}|\langle r_{k-1},g\rangle_m|. \end{equation} \item Iteration format: \begin{equation} f_{\bf z}^k=P_{V_{{\bf z},k}} f, \end{equation} where $P_{V_{{\bf z},k}}$ is the orthogonal projection onto $V_{{\bf z},k}=\mbox{span}\{g_1,\dots,g_k\}$ in the metric of $\langle\cdot,\cdot\rangle_m$. \item Termination rule: Terminate the learning process when $k$ satisfies a certain condition. \end{itemize} \section{Greedy criterion in OGL} Given a real functional $V: H\rightarrow\mathbf R$, the Fr\'{e}chet derivative of $V$ at $f$, $V'_f: H\rightarrow\mathbf R$, is a linear functional such that for $h\in H$, \begin{equation} \lim_{\|h\|_{ H}\rightarrow0}\frac{|V(f+h)-V(f)-V'_f(h)|}{\|h\|_{ H}}=0, \end{equation} and the gradient of $V$ as a map $\mbox{grad}V: H\rightarrow H$ is defined by \begin{equation} \langle \mbox{grad}V(f),h\rangle_H=V'_f(h),\ \mbox{for all}\ h\in H. \end{equation} The greedy criterion adopted in Eq.(\ref{OGAgd}) is to find $g_k\in \mathcal D_n$ such that \begin{equation} \langle -\mbox{grad}(A_m)(f_{\bf z}^{k-1}),g_k\rangle=\sup_{g\in \mathcal D_n}\langle -\mbox{grad}(A_m)(f_{\bf z}^{k-1}),g\rangle, \end{equation} where $A_m(f)=\sum_{i=1}^m|f(x_i)-y_i|^2$. Therefore, the classical greedy criterion is based on the steepest gradient descent (SGD) of $r_{k-1}$ with respect to the dictionary $\mathcal D_n$.
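To make the two criteria concrete, here is a minimal numerical sketch (our own illustrative code; the function names and the convention of storing the dictionary as an array whose rows are atom values at the sample points are ours). `sgd_atom` implements the SGD criterion of Eq. (\ref{OGAgd}), which must traverse the whole dictionary, while `delta_greedy_atom` anticipates the thresholding criterion introduced below, which stops at the first sufficiently correlated atom:

```python
import numpy as np

def sgd_atom(residual, dictionary):
    """Classical OGL criterion: traverse the whole dictionary and return
    the index of the atom maximizing the empirical inner product |<r, g>|_m."""
    scores = np.abs(dictionary @ residual) / residual.size
    return int(np.argmax(scores))

def delta_greedy_atom(residual, dictionary, delta):
    """delta-greedy threshold: return the index of the FIRST atom with
    |<r, g>|_m / ||r||_m > delta, or None if no atom qualifies."""
    m = residual.size
    r_norm = np.sqrt(np.mean(residual ** 2))
    for j, g in enumerate(dictionary):
        if abs(g @ residual) / m > delta * r_norm:
            return j
    return None

# Toy example: 3 atoms evaluated at m = 2 sample points.
D = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
r = np.array([0.2, 1.0])
assert sgd_atom(r, D) == 2                  # SGD scans all atoms
assert delta_greedy_atom(r, D, 0.5) == 1    # stops at the first atom above threshold
```

The per-iteration cost of the thresholded selection is at most that of SGD, and typically much lower, since the scan stops early.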
By normalizing the residual $r_k$, $k=0,1,2,\dots,n$, the greedy criterion in Eq. (\ref{OGAgd}) amounts to searching for $g_k$ satisfying \begin{equation} g_k=\arg\max_{g\in\mathcal D_n}\frac{|\langle r_{k-1},g\rangle_m|}{\|r_{k-1}\|_m}. \end{equation} Geometrically, the current $g_k$ minimizes the angle between $r_{k-1}/\|r_{k-1}\|_m$ and $g$, which is depicted in Fig. \ref{IdeaGC}. Recalling the definition of OGL, it is not difficult to verify that the angles satisfy \begin{equation} |\cos\theta_1|\geq|\cos\theta_2|\geq\cdots\geq|\cos\theta_k|\geq\cdots\geq|\cos\theta_n|, \end{equation} that is, \begin{equation} \frac{|\langle r_{0},g_1\rangle_m|}{\|r_{0}\|_m} \geq \cdots \geq \frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m} \geq \cdots \geq \frac{|\langle r_{n-1},g_n\rangle_m|}{\|r_{n-1}\|_m}, \end{equation} since $\frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m}=|\cos\theta_k|$. If the algorithm stops at the $k$-th iteration, then there exists a threshold $\delta \in [|\cos\theta_{k+1}|,|\cos\theta_k|]$ to quantify whether another atom should be added to construct the final estimator. In detail, if $|\cos{\theta_k}|\geq\delta$, then $g_k$ is regarded as an ``active atom'' and can be selected to build the estimator; otherwise, $g_k$ is a ``dead atom'' which should be discarded. Based on the above observations and motivated by the Chebyshev greedy algorithm with thresholds \cite{Temlaykov2008a}, we are interested in selecting an arbitrary ``active atom'' $g_k$ in $\mathcal D_n$, that is, \begin{equation}\label{our metric} \frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m} > \delta. \end{equation} If there is no $g_k$ satisfying Eq. (\ref{our metric}), then the algorithm terminates. We call the greedy criterion Eq. (\ref{our metric}) the ``$\delta$-greedy threshold'' criterion. In practice, the ``active atom'' is usually not unique. We can choose the first ``active atom'' satisfying Eq.
(\ref{our metric}) at each greedy iteration to accelerate the algorithm. Once the ``active atom'' is selected, the algorithm goes to the next greedy iteration and the ``active atom'' is redefined. Through such a greedy criterion, we can develop a new orthogonal greedy learning scheme, called thresholding orthogonal greedy learning (TOGL). The two corresponding elements of TOGL can be reformulated as follows: \begin{itemize} \item Greedy criterion: Let $g_k$ be an arbitrary (or the first) atom from $\mathcal D_n$ satisfying Eq. (\ref{our metric}). \item Termination rule: Terminate the learning process when either there is no atom satisfying Eq. (\ref{our metric}) or $k$ satisfies a certain condition. \end{itemize} Without considering the termination rule, the classical greedy criterion Eq. (\ref{OGAgd}) in OGL always selects the greediest atom at each greedy iteration. In contrast, Eq. (\ref{our metric}) slows down the speed of gradient descent and therefore may yield a more flexible model selection strategy. According to the bias and variance balance principle \cite{Cucker2007}, the bias decreases while the variance increases as a new atom is selected to build the estimator. If a lower-correlation atom is added, then the bias decreases more slowly and the variance also increases more slowly. Then, the balance can be achieved in TOGL in a more gradual manner than in OGL. Moreover, Eq. (\ref{our metric}) also provides a termination condition: if all atoms $g$ in $\mathcal D_n$ satisfy \begin{equation}\label{Stop 1} \frac{|\langle r_{k-1},g\rangle_m|}{\|r_{k-1}\|_m} \le \delta, \end{equation} then the algorithm terminates. The termination rule concerning $k$ in TOGL is necessary and is used to avoid certain extreme cases in practice. Indeed, using only the termination condition Eq. (\ref{Stop 1}) may drive the algorithm to select all atoms from $\mathcal D_n$. As Fig.
\ref{FSTC1} shows, if the target function $f$ is almost orthogonal to the space spanned by the dictionary and the atoms in the dictionary are almost linearly dependent, then $\delta$ would have to be chosen too small to distinguish which atoms are ``active''. Consequently, the corresponding learning scheme selects all atoms of the dictionary and therefore degrades the generalization capability of OGL. \begin{figure} \caption{The necessity of the termination rule concerning $k$ in TOGL.} \label{FSTC1} \end{figure} Now we present a theoretical assessment of TOGL. First, we give some notation and concepts, which will be used in the rest of the paper. For $r>0$, the space $\mathcal L_{1,\mathcal D_n}^r$ is defined to be the set of all functions $f$ such that there exists an $h\in\mbox{span}\{\mathcal D_n\}$ satisfying \begin{equation}\label{prior} \|h\|_{\mathcal L_1(\mathcal D_n)}\leq\mathcal B, \ \mbox{and}\ \|f - h\| \leq {\mathcal B}{n^{ - r}}, \end{equation} where $\|\cdot\|$ denotes the uniform norm for the continuous function space $C(X)$. The infimum of all $\mathcal B$ satisfying Eq. (\ref{prior}) defines a norm (for $f$) on $\mathcal L_{1,\mathcal D_n}^r$. Eq. (\ref{prior}) defines an interpolation space and is a natural assumption for the regression function in greedy learning \cite{Barron2008}. This assumption has already been adopted to analyze the learning capability of greedy learning \cite{Barron2008,Lin2013a,Xu2014}. Theorem \ref{THEOREM1} illustrates the performance of TOGL and, consequently, reveals the feasibility of the greedy criterion in Eq. (\ref{our metric}). \begin{theorem}\label{THEOREM1} Let $0<t<1$, $0<\delta\leq 1/2$, and $f_{\bf z}^{k,\delta}$ be the estimator deduced by TOGL.
If $f_\rho\in \mathcal L_{1,\mathcal D_n}^r$, then there exists a ${k^*} \in \mathbf N$ such that $$ \begin{aligned} & {\cal E}({\pi _M}f_{\bf{z}}^{{k^*},\delta}) - {\cal E}({f_\rho } ) \le \\ & C{{\cal B}^2}({(m{\delta ^2})^{ - 1}}\log m\log\frac{1}{\delta }\log\frac{2}{t} + {\delta ^2} + {n^{ - 2r}}) \end{aligned} $$ holds with probability at least $1-t$, where $C$ is a positive constant depending only on $d$ and $M$. \end{theorem} If $\delta= \mathcal O (m^{-1/4})$ and the size of the dictionary, $n$, is selected to be large enough, i.e., $n \geq \mathcal O({m^{\frac{1}{{4r}}}})$, then Theorem {\ref{THEOREM1}} shows that the generalization error of ${\pi_M}f_{\bf{z}}^{{k^*},\delta}$ is of order $\mathcal O (m^{-1/2}(\log m)^2)$. Up to a logarithmic factor, this bound is the same as that in \cite{Barron2008} and is the ``record'' of OGL. This implies that weakening the level of greed in OGL is a feasible way to avoid traversing the dictionary. It should also be pointed out that, different from OGL \cite{Barron2008}, there are two parameters, $k$ and $\delta$, in TOGL. Therefore, Theorem {\ref{THEOREM1}} only presents a theoretical verification that introducing the ``$\delta$-greedy threshold'' to measure the level of greed does not essentially degrade the generalization capability of OGL. Taking practical applications into account, eliminating the condition concerning $k$ in the termination rule is crucial. This is the scope of the following section, where an adaptive termination rule with respect to $\delta$ is presented. \section{$\delta$-thresholding orthogonal greedy learning} In the previous section, we developed a new greedy learning scheme called thresholding orthogonal greedy learning (TOGL) and theoretically verified its feasibility. However, there are two main parameters (i.e., the threshold value $\delta$ and the iteration number $k$) that should be simultaneously fine-tuned.
This puts extra pressure on parameter selection, which may discourage practitioners. Given this, we further propose an adaptive termination rule based only on the threshold value. Notice that the quantity $\|r_{k-1}\|_m/\|y(\cdot)\|_m$ decreases as more and more ``active'' atoms are selected, where $y(\cdot)$ is a function satisfying $y(x_i)=y_i, i=1,\dots,m$. A natural termination condition is therefore to use $\delta$ to bound $\| r_{k-1}\|_m/\|y(\cdot)\|_m$. Thus, we introduce another termination condition, \begin{equation}\label{Our metric2} \| r_{k-1}\|_m \leq \delta\|y(\cdot)\|_m, \end{equation} to replace the previous termination condition concerning $k$ in TOGL. Based on it, a new termination rule can be obtained: \begin{itemize} \item Termination rule: Terminate the learning process if either Eq. (\ref{Our metric2}) holds or there is no atom satisfying Eq. (\ref{our metric}). That is: \begin{equation}\label{Our metric3} \max_{g\in \mathcal D_n}|\langle r_{k},g\rangle_m|\leq\delta\|r_k\|_{m} \ \text{or} \ \|r_k\|_m\leq\delta\|y\|_m. \end{equation} \end{itemize} With this change, we obtain a new learning scheme, named $\delta$-thresholding orthogonal greedy learning ($\delta$-TOGL), presented in Algorithm 1. \begin{algorithm}[H] \caption{$\delta$-TOGL}\label{DTOGLalg} \begin{algorithmic} \STATE {{\textbf{Step 1 (Initialization)}}:\\ Given data ${\bf z}=(x_i,y_i)_{i=1}^m$ and dictionary $\mathcal D_n$.\\ Given a proper greedy threshold $\delta$.\\ Set the initial estimator $f_0=0$ and iteration $k:=0$.} \STATE {{\textbf{Step 2 ($\delta$-greedy threshold)}}:\\ Select $g_k$ to be an arbitrary atom from $\mathcal D_n$ satisfying $$ \frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m} > \delta. $$ } \STATE {\textbf{{Step 3 (Orthogonal projection)}}: \\ Let $V_{{\bf z},k} =\mbox{Span}\{g_1,\dots,g_{k}\}$. Compute $f^{\delta}_{{\bf z}}$ as: $$ {f^{\delta}_{{\bf z}}} = {P_{{\bf z},V_{{\bf z},k}}}({ y}).
$$ The residual is updated as $r_{k}:=y-f^{\delta}_{{\bf z}}$, where $P_{{\bf z},V_{{\bf z},k}}$ is the orthogonal projection onto the space $V_{{\bf z},k}$ with respect to $\langle\cdot,\cdot\rangle_m$.} \STATE {\textbf{{Step 4 (Termination rule)}}:\\ If the termination rule is satisfied, i.e., $$ \max_{g\in \mathcal D_n}|\langle r_{k},g\rangle_m|\leq \delta\|r_k\|_{m} \ \text{or} \ \|r_k\|_m\leq\delta\|y\|_m, $$ then the algorithm terminates and outputs the final estimator $f_{\bf z}^\delta$. \\ Otherwise, set $k:=k+1$ and turn to Step 2. } \end{algorithmic} \end{algorithm} The implementation of OGL requires traversing the dictionary, which has a complexity of $\mathcal O(mn)$. Inverting a $k \times k$ matrix in the orthogonal projection has a complexity of $\mathcal O(k^3)$. Thus, the $k$th iteration of OGL has a complexity of $ \mathcal O(mn + k^3)$. In Step 2 of $\delta$-TOGL, $g_k$ is an arbitrary atom from $\mathcal D_n$ satisfying the ``$\delta$-greedy threshold'' condition. This motivates us to select the first atom from $\mathcal D_n$ satisfying Eq. (\ref{our metric}). Then the complexity of $\delta$-TOGL is smaller than $\mathcal O(mn+k^3)$: it usually requires a complexity of $\mathcal O(m+k^3)$, and reaches $\mathcal O(mn+k^3)$ only in the worst case. $\delta$-TOGL thus essentially reduces the complexity of OGL, especially when $n$ is large. The memory requirements of OGL and $\delta$-TOGL are both $\mathcal O(mn)$. The following theorem shows that if $\delta$ is appropriately tuned, then the $\delta$-TOGL estimator $f_{\bf z}^\delta$ can realize the (almost) optimal generalization capability of OGL and TOGL. \begin{theorem}\label{THEOREM2} Let $0<t<1$, $0<\delta\leq 1/2$, and $f_{\bf z}^\delta$ be defined in Algorithm 1.
If $f_\rho\in \mathcal L_{1,\mathcal D_n}^r$, then the inequality $$ \begin{aligned} &\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho) \leq \\ & C{{\cal B}^2}({(m{\delta ^2})^{ - 1}}\log m \log\frac{1}{\delta }\log\frac{2}{t} + {\delta ^2} + {n^{ - 2r}}) \end{aligned} $$ holds with probability at least $1-t$, where $C$ is a positive constant depending only on $d$ and $M$. \end{theorem} If $n \geq \mathcal O({m^{\frac{1}{{4r}}}})$ and $\delta= \mathcal O (m^{-1/4})$, then the learning rate in Theorem \ref{THEOREM2} asymptotically equals $\mathcal O (m^{-1/2}(\log m)^2)$, which is the same as that of Theorem \ref{THEOREM1}. Therefore, Theorem \ref{THEOREM2} implies that using Eq. (\ref{Our metric2}) to replace the termination condition concerning $k$ is theoretically feasible. The most important highlight of Theorem \ref{THEOREM2} is that it provides a totally different way to circumvent the overfitting phenomenon of OGL. The termination rule is crucial for OGL, but designing an effective termination rule is a tricky problem. All the aforementioned studies \cite{Barron2008,Chen2013,Xu2014} of the termination rule attempted to design one by controlling the number of iterations directly. Since the generalization capability of OGL is sensitive to the number of iterations, the results are at times inadequate. The termination rule employed in the present paper is based on the study of the ``greedy-criterion'' issue of greedy learning. Theorem \ref{THEOREM2} shows that, besides controlling the number of iterations directly, setting a greedy threshold to redefine the greedy criterion can also yield an effective termination rule. Theorem \ref{THEOREM2} implies that this new termination rule theoretically works as well as the others. Furthermore, compared with $k$ in OGL, the generalization capability of $\delta$-TOGL is stable with respect to $\delta$, since the new criterion slows down the changes of bias and variance.
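To make the procedure concrete, Algorithm 1 can be sketched as follows. This is a minimal NumPy version of our own (the function name, the interface, and the choice of taking the first admissible atom, as in the $\delta$-TOGLR variant below, are ours), not the MATLAB code used in the experiments:

```python
import numpy as np

def delta_togl(G, y, delta, max_iter=None):
    """Sketch of delta-TOGL (Algorithm 1).

    G     : (m, n) design matrix whose column j holds atom g_j at the samples.
    y     : (m,) response vector.
    delta : greedy threshold, 0 < delta <= 1/2.
    Returns the indices of the selected atoms and the fitted values.
    """
    m, n = G.shape
    max_iter = n if max_iter is None else max_iter
    norm_m = lambda u: np.sqrt(np.mean(u ** 2))        # empirical norm ||.||_m
    selected, fit, r = [], np.zeros(m), y.astype(float).copy()
    for _ in range(max_iter):
        corr = np.abs(r @ G) / m                       # |<r, g>_m| for every atom
        # termination rule: no atom passes the delta-greedy threshold,
        # or the residual is already small relative to y
        if corr.max() <= delta * norm_m(r) or norm_m(r) <= delta * norm_m(y):
            break
        # Step 2: any atom above the threshold is admissible; take the first one
        j = int(np.argmax(corr > delta * norm_m(r)))
        selected.append(j)
        # Step 3: orthogonal projection of y onto the span of the selected atoms
        coef, *_ = np.linalg.lstsq(G[:, selected], y, rcond=None)
        fit = G[:, selected] @ coef
        r = y - fit
    return selected, fit
```

For simplicity the sketch still computes all $n$ correlations in order to test the termination rule; the $\mathcal O(m+k^3)$ variant discussed above would stop scanning the dictionary at the first admissible atom.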
\section{Simulation verifications} In this section, a series of simulations is carried out to verify our theoretical assertions. Firstly, we introduce the simulation settings, including the data sets, dictionary, greedy criteria and experimental environment. Secondly, we analyze the relationship between the greedy criteria and generalization performance in orthogonal greedy learning (OGL) and demonstrate that steepest gradient descent (SGD) is not the unique greedy criterion. Thirdly, we present a performance comparison of different greedy criteria and illustrate that the ``$\delta$-greedy threshold'' is feasible. Fourthly, we empirically study the performance of $\delta$-thresholding orthogonal greedy learning ($\delta$-TOGL) and justify its feasibility. Finally, we compare $\delta$-TOGL with other widely used dictionary-based learning methods and show that it is a promising learning scheme. \subsection{Simulation settings} Throughout the simulations, let ${\bf z}=\{(x_{i},y_{i})\}_{i=1}^{m_1}$ be the training samples with $\{x_i\}_{i=1}^{m_1}$ drawn independently and identically according to the uniform distribution on $[-\pi,\pi]$ and $y_{i}=f_{\rho }(x_{i})+\mathcal N(0,\sigma^2), $ where $$ {f_\rho}(x) = \frac{{\sin x}}{x}, \quad x \in [ - \pi ,\pi ]. $$ Four noise levels, $\sigma_1=0.1$, $\sigma_2=0.5$, $\sigma_3=1$ and $\sigma_4=2$, are used in the simulations.
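The data-generating process just described can be sketched in a few lines (a minimal version of our own; the helper name and the seed are ours):

```python
import numpy as np

def make_sinc_data(m, sigma, rng):
    """Sample m points of the sinc regression problem on [-pi, pi]."""
    x = rng.uniform(-np.pi, np.pi, size=m)
    f_rho = np.sinc(x / np.pi)   # np.sinc(t) = sin(pi t)/(pi t), so this is sin(x)/x
    y = f_rho + sigma * rng.standard_normal(m)
    return x, y

# training set with the first noise level sigma_1 = 0.1
rng = np.random.default_rng(0)
x_train, y_train = make_sinc_data(1000, sigma=0.1, rng=rng)
```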
The learning performance (in terms of the root mean squared error (RMSE)) of the different algorithms is then tested by applying the resultant estimators to the test set ${\bf z}_{test}= \{(x_{i}^{(t)},y_{i}^{(t)})\}_{i=1}^{m_2}$, which is generated similarly to ${\bf z}$ except that the outputs are noise-free, i.e., $ y_{i}^{(t)}=f_{\rho }(x_{i}^{(t)}).$ In each simulation, we use the Gaussian radial basis function (RBF) \cite{Chen1991} to build up the dictionary: $$ \left\{e^{-\|x-t_i\|^2/\eta^2}: i=1, \ldots,n\right\}, $$ where $\{t_i\}_{i=1}^n$ are drawn according to the uniform distribution on $[-\pi,\pi]$. Since the aim of each simulation is to compare $\delta$-TOGL with other learning methods on the same dictionary, we simply set $\eta=1$ throughout the simulations. We use four different criteria to select the new atom in each greedy iteration: $$ {g_k}: = \arg \mathop {\max }\limits_{g \in \mathcal D_n} | \langle {r_{k - 1}},g \rangle_m |, $$ $$ {g_k}: = \arg \mathop {\text{second} \max }\limits_{g \in \mathcal D_n} | \langle {r_{k - 1}},g \rangle_m |, $$ $$ {g_k}: = \arg \mathop {\text{third} \max }\limits_{g \in \mathcal D_n} | \langle {r_{k - 1}},g\rangle_m |, $$ and $$ {g_k} \ \text{randomly selected from} \ \mathcal{D}_n. $$ Here, $\arg \mathop { \text{second} \max }\limits_{}$ and $\arg \mathop { \text{third} \max }\limits_{}$ mean that $|\langle r_{k-1},g\rangle_m|$ attains its second and third largest values, respectively, and randomly selected means that $g_k$ is selected at random from the dictionary. We use the four abbreviations OGL1, OGL2, OGL3 and OGLR to denote the corresponding learning schemes, respectively. Let $\mathcal D_{n,k,\delta}$ be the set of atoms of $\mathcal D_n$ satisfying $ \frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m} > \delta$.
Four corresponding criteria are employed as follows: $$ {g_k}: = \arg \mathop {\max }\limits_{g \in \mathcal D_{n,k,\delta}} | \langle {r_{k - 1}},g \rangle_m |, $$ $$ {g_k}: = \arg \mathop {\text{second} \max }\limits_{g \in \mathcal D_{n,k,\delta}} | \langle {r_{k - 1}},g \rangle_m |, $$ $$ {g_k}: = \arg \mathop {\text{third} \max }\limits_{g \in \mathcal D_{n,k,\delta}} | \langle {r_{k - 1}},g\rangle_m |, $$ and $$ {g_k}= \text{First} (\mathcal D_{n,k,\delta}). $$ Here $\text{First}(\mathcal D_{n,k,\delta})$ denotes the first atom of $\mathcal D_n$ satisfying $ \frac{|\langle r_{k-1},g_k\rangle_m|}{\|r_{k-1}\|_m} > \delta$. We also use TOGL1 (or $\delta$-TOGL1), TOGL2 (or $\delta$-TOGL2), TOGL3 (or $\delta$-TOGL3) and TOGLR (or $\delta$-TOGLR) to denote the corresponding algorithms. All numerical studies are implemented in MATLAB R2015a on a Windows personal computer with Core(TM) i7-3770 3.40GHz CPUs and 16.00GB RAM. All statistics are averaged over 10 independent trials. \subsection{Greedy criteria in OGL} In this section, we examine the role of the greedy criterion in OGL by comparing the performance of OGL1, OGL2, OGL3 and OGLR. Let $m_1=1000$, $m_2=1000$ and $n=300$ throughout this subsection. Fig. \ref{OGA} shows the performance of OGL with the four different greedy criteria. \begin{figure*} \caption{The generalization performance of OGL with four different greedy criteria. (a) The noise level $\sigma_1=0.1$. (b) $\sigma_2=0.5$. (c) $\sigma_3=1$. (d) $\sigma_4=2$.} \label{OGA} \end{figure*} We observe that OGL1, OGL2 and OGL3 have similar performance, while OGLR performs worse. This shows that SGD is not the unique greedy criterion and demonstrates the necessity of studying the ``greedy criterion'' issue. Detailed comparisons are listed in Table \ref{OGATab}. Here TestRMSE and $k_{OGL}^*$ denote the theoretically optimal RMSE and number of iterations, where the parameter $k$ is selected according to the test data directly.
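The four selection rules can be sketched compactly (a helper of our own; `corr` is assumed to hold the values $|\langle r_{k-1},g\rangle_m|$ for all atoms of the current admissible set):

```python
import numpy as np

def select_atom(corr, rule, rng=None):
    """Pick an atom index from absolute correlations `corr` by one of four rules."""
    order = np.argsort(corr)[::-1]       # indices sorted by decreasing |<r, g>_m|
    if rule == "max":                    # OGL1: steepest gradient descent
        return int(order[0])
    if rule == "second":                 # OGL2: second largest correlation
        return int(order[1])
    if rule == "third":                  # OGL3: third largest correlation
        return int(order[2])
    if rule == "random":                 # OGLR: uniformly random atom
        return int(rng.integers(len(corr)))
    raise ValueError(rule)
```

In TOGL and $\delta$-TOGL the same rules apply after restricting `corr` to the admissible set $\mathcal D_{n,k,\delta}$.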
\begin{table}[htb] \renewcommand{\arraystretch}{1.3} \begin{center} \caption{Quantitative comparisons of OGL with different greedy criteria.}\label{OGATab} \begin{tabular}{|c|c|c|c|c|c|} \hline Methods & TestRMSE & ${k_{OGL}^*}$ & Methods & TestRMSE & ${k_{OGL}^*}$ \\ \hline \multicolumn{3}{|c|}{$\sigma=0.1$} & \multicolumn{3}{|c|}{$\sigma=0.5$} \\ \hline OGL1 &0.0249 &9 & OGL1 &0.0448 &7 \\ \hline OGL2 &0.0248 &9 & OGL2 &0.0436 &8 \\ \hline OGL3 &0.0251 &10 & OGL3 &0.0466 &8 \\ \hline OGLR &0.0304 &9 & OGLR &0.0647 &9 \\ \hline Methods & TestRMSE & ${k_{OGL}^*}$ & Methods & TestRMSE & ${k_{OGL}^*}$ \\ \hline \multicolumn{3}{|c|}{$\sigma=1$} & \multicolumn{3}{|c|}{$\sigma=2$} \\ \hline OGL1 &0.0780 &7 & OGL1 &0.1371 &5 \\ \hline OGL2 &0.0762 &7 & OGL2 &0.1374 &7 \\ \hline OGL3 &0.0757 &7 & OGL3 &0.1377 &7 \\ \hline OGLR &0.0995 &7 & OGLR &0.1545 &6 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Feasibility of ``$\delta$-greedy threshold''} In this simulation, we aim at verifying the feasibility of the ``$\delta$-greedy threshold'' criterion. For this purpose, we select the optimal $k$ according to the test data directly and compare the different greedy criteria satisfying Eq. (\ref{our metric}). Fig. \ref{TOGA} shows the simulation results. \begin{figure*} \caption{The generalization performance of TOGL with four different greedy criteria. (a) The noise level $\sigma_1=0.1$. (b) $\sigma_2=0.5$. (c) $\sigma_3=1$. (d) $\sigma_4=2$.} \label{TOGA} \end{figure*} \begin{figure*} \caption{The generalization performance of $\delta$-TOGL with four different greedy criteria. (a) The noise level $\sigma_1=0.1$. (b) $\sigma_2=0.5$. (c) $\sigma_3=1$. (d) $\sigma_4=2$.} \label{DTOGA} \end{figure*} \begin{figure*} \caption{The influence of the corresponding parameter on the training cost and sparsity in OGL and $\delta$-TOGL, respectively. (a) The training time of OGL. (b) The training time of $\delta$-TOGL. (c) The sparsity of the estimator in OGL.
(d) The sparsity of the estimator in $\delta$-TOGL. } \label{DTOGL} \end{figure*} Different from the previous simulation, we find in this experiment that the optimal RMSE of TOGLR is similar to that of TOGL1, TOGL2 and TOGL3. The main reason is that TOGL only appends atoms satisfying the ``$\delta$-greedy threshold'' criterion Eq. (\ref{our metric}). This implies that once an appropriate value of $\delta$ is preset, which particular admissible atom is selected matters little. Therefore, it agrees with Theorem \ref{THEOREM1} and demonstrates that the introduced ``$\delta$-greedy threshold'' is feasible. We also present quantitative comparisons in Table \ref{TOGATa}. \begin{table}[htb] \renewcommand{\arraystretch}{1.3} \begin{center} \caption{Quantitative comparisons for different greedy criteria in TOGL.}\label{TOGATa} \scalebox{1.1}[1.1]{ \begin{tabular}{|c|c|c|c|}\hline Methods & ${\delta}$ and $k$& TestRMSE &${k_{TOGL}^*}$ \\ \hline \multicolumn{4}{|c|}{$\sigma=0.1$} \\ \hline TOGL1 &[\text{1.00e-6,3.58e-5}]([9,13]) &0.0213 &8 \\ \hline TOGL2 &[\text{1.00e-6,1.70e-6}]([11,12]) &0.0213 &8 \\ \hline TOGL3 &[\text{1.00e-6,1.70e-6}]([12,13]) &0.0222 &10 \\ \hline TOGLR &\text{9.52e-6}(12) &0.0203 &11 \\ \hline \multicolumn{4}{|c|}{$\sigma=0.5$} \\ \hline TOGL1 &[\text{1.00e-6,6.95e-5}]([8,13]) &0.0384 &8 \\ \hline TOGL2 &[\text{1.00e-6,4.67e-5}]([9,13]) &0.0390 &8 \\ \hline TOGL3 &[\text{1.00e-6,9.06e-5}]([8,13]) &0.0371 &8 \\ \hline TOGLR &\text{6.95e-5}(9) &0.0379 &8 \\ \hline \multicolumn{4}{|c|}{$\sigma=1$} \\ \hline TOGL1 &[\text{1.00e-6,5.60e-6}]([11,13]) &0.0877 &8 \\ \hline TOGL2 &[\text{1.00e-6,4.30e-6}]([11,13]) &0.0862 &8 \\ \hline TOGL3 &[\text{1.00e-6,6.40e-6}]([11,13]) &0.0840 &8 \\ \hline TOGLR &\text{7.30e-6}(12) &0.0842 &8 \\ \hline \multicolumn{4}{|c|}{$\sigma=2$} \\ \hline TOGL1 &[\text{1.00e-6,1.18e-4}]([8,13]) &0.1402 &6 \\ \hline TOGL2 &[\text{1.00e-6,1.18e-4}]([8,13]) &0.1404 &6 \\ \hline TOGL3 &[\text{1.00e-6,1.03e-4}]([8,13]) &0.1408 &6 \\ \hline TOGLR
&\text{6.09e-5}(10) &0.1392 &5 \\ \hline \end{tabular}} \end{center} \end{table} In Table \ref{TOGATa}, the second column (``${\delta}$ and $k$'') compares the optimal $\delta$ and the corresponding $k$ (in brackets) derived only from Eq. (\ref{Stop 1}) in TOGL. We also use ${k_{TOGL}^*}$ to denote the optimal $k$ (with the best performance). The aim of recording these quantities is to verify that using only Eq. (\ref{Stop 1}) to build up the termination criterion is not sufficient. In fact, Table \ref{TOGATa} shows that for some data distributions, Eq. (\ref{Stop 1}) fails to find the optimal number of iterations $k$. Comparing Table \ref{TOGATa} with Table \ref{OGATab}, we find that the TestRMSE derived from TOGL is comparable with that of OGL, which confirms the feasibility of TOGL. \subsection{Feasibility of $\delta$-TOGL} The only difference between $\delta$-TOGL and TOGL lies in the termination rule. We first conduct simulations to verify the feasibility of the termination rule Eq. (\ref{Our metric3}); see Table \ref{DTOGAT}. Here, the second column (${\delta}$ and $k$) records the optimal $\delta$ and the corresponding $k$ derived from the termination rule Eq. (\ref{Our metric3}) in $\delta$-TOGL, and ${k_{\delta-TOGL}^*}$ denotes the optimal $k$ selected according to the test samples. We see that the value of $k$ obtained by Eq. (\ref{Our metric3}) is almost the same as ${k_{\delta-TOGL}^*}$ for all four types of noisy data. Furthermore, comparing Table \ref{DTOGAT} with Table \ref{TOGATa}, we find that their TestRMSE values are comparable. All these verify the feasibility and necessity of the termination rule Eq. (\ref{Our metric3}) in $\delta$-TOGL.
\begin{table}[htb] \renewcommand{\arraystretch}{1.3} \begin{center} \caption{Feasibility of the termination rule.}\label{DTOGAT} \begin{tabular}{|c|c|c|c|}\hline Methods & ${\delta}$ and $k$ & TestRMSE & ${k_{\delta-TOGL}^*}$ \\ \hline \multicolumn{4}{|c|}{$\sigma=0.1$} \\ \hline $\delta$-TOGL1 &[\text{4.30e-6,4.91e-6}](11) &0.0255 &11 \\ \hline $\delta$-TOGL2 &[\text{5.60e-6,6.40e-6}]([10,11]) &0.0254 &10 \\ \hline $\delta$-TOGL3 &\text{3.76e-6}(11) &0.0255 &11 \\ \hline $\delta$-TOGLR &\text{2.75e-5}(11) &0.0268 &11 \\ \hline \multicolumn{4}{|c|}{$\sigma=0.5$} \\ \hline $\delta$-TOGL1 &[\text{1.18e-4,1.35e-4}]([7,8]) &0.0407 &7 \\ \hline $\delta$-TOGL2 &[\text{2.01e-4,4.45e-4}](7) &0.0401 &7 \\ \hline $\delta$-TOGL3 &[\text{1.54e-4,2.29e-4}]([7,8]) &0.0407 &7 \\ \hline $\delta$-TOGLR &\text{1.35e-4}([8,9]) &0.0406 &9 \\ \hline \multicolumn{4}{|c|}{$\sigma=1$} \\ \hline $\delta$-TOGL1 &[\text{1.03e-4,1.76e-4}]([7,8]) &0.0747 &7 \\ \hline $\delta$-TOGL2 &[\text{1.03e-4,1.54e-4}]([7,8]) &0.0752 &7 \\ \hline $\delta$-TOGL3 &[\text{1.35e-4,1.54e-4}]([7,8]) &0.0733 &7 \\ \hline $\delta$-TOGLR &\text{3.89e-4}([7,8]) &0.0759 &7 \\ \hline \multicolumn{4}{|c|}{$\sigma=2$} \\ \hline $\delta$-TOGL1 &[\text{2.01e-4,2.99e-4}]([6,7]) &0.1529 &6 \\ \hline $\delta$-TOGL2 &[\text{2.29e-4,3.41e-4}]([6,7]) &0.1516 &6 \\ \hline $\delta$-TOGL3 &\text{2.29e-4}([6,7]) &0.1519 &5 \\ \hline $\delta$-TOGLR &\text{2.99e-4}([7,8]) &0.1537 &6 \\ \hline \end{tabular} \end{center} \end{table} From OGL to $\delta$-TOGL, the main parameter changes from $k$ to $\delta$. The following simulations aim at highlighting the role of the main parameters to illustrate the feasibility of $\delta$-TOGL. Similar to Fig. \ref{OGA}, we consider the relation between TestRMSE and the main parameter of $\delta$-TOGL in Fig. \ref{DTOGA}. It can be found from Fig.
\ref{DTOGA} that although there may be additional oscillations within a small range, the generalization capability of $\delta$-TOGL is not very sensitive to $\delta$ on the whole, which is different from OGL (see Fig. \ref{OGA}). We also examine the relation between the training and test costs and the main parameter in OGL and $\delta$-TOGL to illustrate the feasibility of $\delta$-TOGL. As the test time mainly depends on the sparsity of the estimator, we record the sparsity instead. In this simulation, the number of iterations in OGL ranges from $0$ to the size of the dictionary (i.e., $n=300$), and our theoretical assertions reveal that the range of $\delta$ in $\delta$-TOGL is $(0, 0.5]$. We create 50 candidate values of $\delta$ within $[10^{-6},1/2]$. It can be observed from the results in Fig. \ref{DTOGL} that the training time (in seconds) and sparsity of $\delta$-TOGL are far smaller than those of OGL, which implies that the computational cost of $\delta$-TOGL is much lower than that of OGL. \subsection{Comparisons} In this part, we compare $\delta$-TOGL with other classical dictionary-based learning schemes such as pure greedy learning (PGL) \cite{Friedman2001}, OGL \cite{Barron2008}, ridge regression \cite{Golub1979} and Lasso \cite{Tibshirani1995}. We employ the $\mathcal{L}_2$ regularized least-squares (RLS) solution for ridge regression and the fast iterative shrinkage-thresholding algorithm (FISTA) for Lasso \cite{Beck2009}. All the parameters, i.e., the number of iterations $k$ in PGL and OGL, the regularization parameter $\lambda$ in RLS and FISTA, and the greedy threshold $\delta$ in $\delta$-TOGL, are selected according to the test data (or test RMSE) directly, since we mainly focus on the impact of the theoretically optimal parameter rather than on validation techniques. The results are listed in Table \ref{table6}, where the standard errors of the test RMSE are also reported (numbers in parentheses).
\begin{table}[htb] \renewcommand{\arraystretch}{1.3} \begin{center} \caption{Comparing the performance of $\delta$-TOGL with other classic algorithms.}\label{table6} \scalebox{0.90}[1]{ \begin{tabular}{|c|c|c|c|c|}\hline Methods & Parameter & TestRMSE & Sparsity & Running time \\ \hline \multicolumn{5}{|c|}{Regression function $sinc$, dictionary ${\mathcal D}_n, n=300$, noise level $\sigma=0.1$} \\ \hline PGL &$k=78$&0.0284(0.0037) & 78.0 & 27.4 \\ \hline OGL &$k=9$&0.0218(0.0034) & 9.0 & 11.3\\ \hline $\delta$-TOGL1 &\text{$\delta=1.00e-4$}&0.0200(0.0044) &7.4 &4.0 \\ \hline $\delta$-TOGL2 &\text{$\delta=2.00e-4$}&0.0203(0.0064) &8.0 &3.9 \\ \hline $\delta$-TOGL3 &\text{$\delta=1.30e-6$}&0.0284(0.0074) &12.2 &4.3 \\ \hline $\delta$-TOGLR &\text{$\delta=5.11e-4$}&0.0219(0.0059) &9.1 &3.5 \\ \hline $\mathcal L_2$(RLS) &$\lambda=\text{5e-5}$&0.0313(0.0088) &300.0 &0.5 \\ \hline $\mathcal L_1$(FISTA) &$\lambda = \text{5e-6}$&0.0318(0.0102) &281.2 & 41.7 \\ \hline \multicolumn{5}{|c|}{Regression function $sinc$, dictionary ${\mathcal D}_n, n=1000$, noise level $\sigma=0.1$} \\ \hline PGL &$k=181$&0.0278(0.0044) &181.0 & 116.6 \\ \hline OGL &$k=9$&0.0255(0.0045) &9.0 & 62 \\ \hline $\delta$-TOGL1 &\text{$\delta=1.00e-4$}&0.0277(0.0072) &7.2 &5.8 \\ \hline $\delta$-TOGL2 &\text{$\delta=6.00e-4$}&0.0294(0.0119) &7.0 &5.8 \\ \hline $\delta$-TOGL3 &\text{$\delta=6.00e-6$}&0.0211(0.0036) &7.8 &6.0 \\ \hline $\delta$-TOGLR &\text{$\delta=3.68e-4$}&0.0284(0.0082) &10.4 &4.7 \\ \hline $\mathcal L_2$(RLS) &$\lambda=0.0037$& 0.0322(0.0103) &1000.0 &6.1 \\ \hline $\mathcal L_1$(FISTA) &$\lambda=\text{8e-6}$&0.0317(0.0079) &821.2 & 103.7 \\ \hline \multicolumn{5}{|c|}{Regression function $sinc$, dictionary ${\mathcal D}_n, n=2000$, noise level $\sigma=0.1$} \\ \hline PGL &$k=263$ & 0.0267(0.0036) & 263.0 & 236.4 \\ \hline OGL &$k=9$&0.0250(0.0054) &9.0 &374.7 \\ \hline $\delta$-TOGL1 &\text{$\delta=2.00e-4$}&0.0256(0.0078) &7.1 &9.5 \\ \hline $\delta$-TOGL2
&\text{$\delta=1.00e-4$}&0.0280(0.0089) &8.6 &9.3 \\ \hline $\delta$-TOGL3 &\text{$\delta=2.00e-6$}&0.0222(0.0082) &7.6 &9.2 \\ \hline $\delta$-TOGLR &\text{$\delta=4.176e-5$}&0.0266(0.0079) &10.6 &6.7 \\ \hline $\mathcal L_2$(RLS) &$\lambda= 0.0005$&0.0305(0.0088) &2000.0 &28.9 \\ \hline $\mathcal L_1$(FISTA) &$\lambda=\text{7e-6}$&0.0335(0.0079) &1252.4 & 176.3 \\ \hline \end{tabular}} \end{center} \end{table} From the results in Table \ref{table6}, we observe that the sparsities (i.e., the numbers of selected atoms) of the greedy-type strategies are far smaller than those of the regularization-based methods, while they enjoy better performance. This empirically verifies that greedy-type algorithms are more suitable for redundant dictionary learning, which is also consistent with \cite{Barron2008}. Furthermore, it can be found in Table \ref{table6} that, although the generalization performance of all the aforementioned learning schemes is similar, $\delta$-TOGL finishes the corresponding learning task within a remarkably short running time. Although PGL has a lower computational complexity than OGL, its convergence rate is quite slow. Generally, PGL needs tens of thousands of iterations to guarantee performance; indeed, we preset the default maximum number of iterations of PGL to $10000$ in the numerical studies. Therefore the applicable range of PGL is restricted. OGL possesses an almost optimal convergence rate and generally converges within a few iterations. However, its computational complexity is huge, especially in large-scale dictionary learning. Table \ref{table6} shows that, when the size of the dictionary $n$ is $300$ or $1000$, OGL runs faster than PGL; however, it is much slower than PGL when $n$ is $2000$. $\delta$-TOGL can significantly reduce the computational cost of OGL without sacrificing its generalization performance or sparsity, as the results of $\delta$-TOGL1, $\delta$-TOGL2, $\delta$-TOGL3 and $\delta$-TOGLR in Table \ref{table6} show.
This is mainly because an appropriate ``$\delta$-greedy threshold'' effectively filters out a mass of ``dead atoms'' from the dictionary. We also notice that $\delta$-TOGLR not only achieves good performance but also has the lowest computational complexity among the four $\delta$-TOGL learning schemes. This implies that selecting an ``active atom'' from the dictionary without traversal can further reduce the complexity without deteriorating the performance of OGL. \section{Real data experiments} We have verified that $\delta$-TOGL is a feasible learning scheme in the previous simulations. In particular, $\delta$-TOGLR possesses both good generalization performance and the lowest computational complexity. We now verify the learning performance of $\delta$-TOGLR on five real data sets and compare it with other classical dictionary-based learning methods, including PGL, OGL, RLS and FISTA. The first dataset is the Prostate cancer dataset \cite{Blake1998}. It consists of the medical records of 97 patients who have received a radical prostatectomy; each record comprises 8 clinical predictors and 1 response variable. The second dataset is the Diabetes data set \cite{Efron2004}, which contains 442 diabetes patients measured on 10 independent variables and 1 response variable. The third is the Boston Housing data set, created from a housing-values survey in the suburbs of Boston by Harrison \cite{Harrison1978}; it contains 506 instances with 13 attributes and 1 response variable. The fourth is the Concrete Compressive Strength (CCS) dataset \cite{Ye1998}, which contains 1030 instances with 8 quantitative independent variables and 1 dependent variable. The fifth is the Abalone dataset \cite{Nash1994}, collected for predicting the age of abalone from physical measurements; it contains 4177 instances measured on 8 independent variables and 1 response variable.
Similarly, we randomly divide each real data set into two disjoint equal parts. The first half serves as the training set and the second half serves as the test set. We also use the Z-score standardization method \cite{Kreyszig1979} to normalize the data sets, in order to avoid errors caused by large magnitude differences among the data dimensions. For each real data experiment, the Gaussian radial basis function is again used to build up the dictionary: $$ \left\{e^{-\|x-t_i\|^2/\eta^2}: i=1, \ldots,n\right\}, $$ where $\{t_i\}_{i=1}^n$ are taken to be the training samples themselves; thus the size of the dictionary equals the number of training samples. We set the width of the radial basis function to $\eta= \frac{d_{max}}{\sqrt{2n}}$, where $d_{max}$ is the maximum distance among all centers $\{t_i\}_{i=1}^n$, in order to avoid the radial basis function being too sharp or too flat. Table \ref{t7} documents the experimental results of generalization performance and running time on the aforementioned five real data sets. We can clearly observe that, for the small-scale dictionary, i.e., the Prostate data set, although $\delta$-TOGLR can achieve good performance, its running cost is greater than that of OGL and RLS. In fact, for each candidate threshold parameter $\delta$, the algorithm needs to be run from scratch, which cancels the computational advantage of $\delta$-TOGLR in small-scale dictionary learning. However, we also notice that, for the middle-scale dictionaries, i.e., Diabetes, Housing and CCS, $\delta$-TOGLR gradually begins to surpass the other learning methods in computation while maintaining generalization performance similar to that of OGL. Especially for large-scale dictionary learning, i.e., Abalone, $\delta$-TOGLR dominates the other methods by a large margin in computational cost and still possesses good performance.
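The dictionary construction with the data-driven width $\eta = d_{\max}/\sqrt{2n}$ can be sketched as follows (a vectorised version of our own; the function name is an assumption):

```python
import numpy as np

def rbf_dictionary(X, centers):
    """Gaussian RBF dictionary with data-driven width eta = d_max / sqrt(2 n).

    X       : (m, d) standardised inputs.
    centers : (n, d) centres; here, the training points themselves.
    Returns the (m, n) design matrix of atom values.
    """
    n = len(centers)
    # d_max: largest pairwise distance among the centres
    diffs = centers[:, None, :] - centers[None, :, :]
    d_max = np.sqrt((diffs ** 2).sum(-1)).max()
    eta = d_max / np.sqrt(2 * n)
    # squared distances from every sample to every centre
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / eta ** 2)
```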
\begin{table*}[htb] \renewcommand{\arraystretch}{1.6} \begin{center} \caption{The comparative results of performance and running time on five real data sets}\label{t7} \scalebox{1}[1]{ \begin{tabular}{|c|c|c|c|c|c|}\hline \backslashbox[2cm] {Methods}{Datasets} & Prostate & Diabetes & Housing & CCS & Abalone \\ \hline Dictionary size & $n=50$ & $n=220$ & $n=255$ & $n=520$ & $n=2100$ \\ \hline \multicolumn{6}{|c|}{Average performance } \\ \hline $\delta$-TOGLR & 0.4208 (0.0112) & 55.1226 (1.0347) & 4.045 (0.4256) & 7.1279 (0.3294) & 2.2460 (0.0915) \\ \hline PGL & 0.4280 (0.0081) & 56.3125 (2.0542) & 4.0716 (0.2309) & 11.2803 (0.0341) & 2.5880 (0.0106) \\ \hline OGL & 0.5170 (0.0119) & 54.6518 (2.8700) & 3.9447 (0.1139) & 6.0128 (0.1203) & 2.1725 (0.0088) \\ \hline RLS & 0.4415 (0.0951) & 57.3886 (1.5854) & 3.9554 (0.3236) & 9.8512 (0.2693) & 2.2559 (0.0514) \\ \hline FISTA & 0.6435 (0.0151) & 61.7636 (2.5811) & 5.1845 (0.1859) & 12.8127 (0.3019) & 3.4161 (0.0774) \\ \hline \multicolumn{6}{|c|}{Average running time} \\ \hline $\delta$-TOGLR & 0.58 & 1.11 & 0.89 & 0.82 & 4.22 \\ \hline PGL & 41.93 & 49.06 & 52.04 & 79.93 & 193.97 \\ \hline OGL & 0.16 & 1.11 & 1.42 & 7.46 & 787.2 \\ \hline RLS & 0.15 & 0.27 & 0.33 & 1.20 & 42.59 \\ \hline FISTA & 0.52 & 1.11 & 1.40 & 9.04 & 257.8 \\ \hline \end{tabular}} \end{center} \end{table*} \section{Conclusion and further discussions} In this paper, we study the greedy criteria in orthogonal greedy learning (OGL). The main contributions can be summarized in four aspects. Firstly, we propose that steepest gradient descent (SGD) is not the unique greedy criterion for selecting atoms from the dictionary in OGL, which paves a new way for exploring greedy criteria in greedy learning. To the best of our knowledge, this may be the first work concerning the ``greedy criterion'' issue in the field of supervised learning.
Secondly, motivated by a series of previous results of Temlyakov and his co-authors in greedy approximation \cite{Temlaykov2000,Temlaykov2003,Temlaykov2008,Temlaykov2008a,Liu2012}, we eventually use the ``$\delta$-greedy threshold'' criterion to quantify the level of greed for the learning purpose. Our theoretical result shows that OGL with such a greedy criterion yields a learning rate of order $ m^{-1/2} (\log m)^2$, which is almost the same as that of the classical SGD-based OGL in \cite{Barron2008}. Thirdly, based on the ``$\delta$-greedy threshold'' criterion, we derive an adaptive termination rule for the corresponding OGL and thus provide a completely new learning scheme called $\delta$-thresholding orthogonal greedy learning ($\delta$-TOGL). We also present a theoretical demonstration that $\delta$-TOGL can reach the existing (almost) optimal learning rate just as the iteration-based termination rule does in \cite{Barron2008}. Finally, we analyze the generalization performance of $\delta$-TOGL and compare it with other popular dictionary-based learning methods, including pure greedy learning (PGL), OGL, ridge regression and Lasso, through extensive numerical experiments. The empirical results verify that $\delta$-TOGL is a promising learning scheme, which possesses good generalization performance and learns much faster than conventional methods on large-scale dictionaries. \appendices \section{Proofs} Since Theorem \ref{THEOREM1} can be derived from Theorem \ref{THEOREM2} directly, we only prove Theorem \ref{THEOREM2} in this section. The methodology of the proof is somewhat standard in learning theory. In fact, we use the error decomposition strategy in \cite{Lin2013a} to divide the generalization error into the approximation error, the sample error and the hypothesis error. The main difficulty of the proof is to bound the hypothesis error; the main tool for this is borrowed from \cite{Temlaykov2008a}.
In order to give an error decomposition strategy for $\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho)$, we need to construct a function $f_k^*\in \mbox{span}(\mathcal D_n)$ as follows. Since $f_\rho\in \mathcal L_{1,\mathcal D_n}^r$, there exists an $h_\rho:=\sum_{i=1}^na_ig_i\in \mbox{Span}(\mathcal D_n)$ such that \begin{equation}\label{h} \|h_\rho\|_{\mathcal L_1(\mathcal D_n)}\leq\mathcal B,\ \mbox{and}\ \|f_\rho-h_\rho\|\leq \mathcal B n^{-r}. \end{equation} Define \begin{equation}\label{f*} f_0^*=0,\ f_k^*=\left(1-\frac1k\right)f^*_{k-1}+\frac{\sum_{i=1}^n|a_i|\|g_i\|_\rho}{k}g^*_k, \end{equation} where $$ g_k^*:=\arg\max\limits_{g\in \mathcal D_n'}\left\langle h_\rho-\left(1-\frac1k\right)f_{k-1}^*,g\right\rangle_{\rho}, $$ and $$ \mathcal D_n':=\left\{{g_i(x)}/{\|g_i\|_\rho}\right\}_{i=1}^n \bigcup \left\{-{g_i(x)}/{\|g_i\|_\rho}\right\}_{i=1}^n $$ with $g_i\in \mathcal D_n$. Let $f_{\bf z}^\delta$ and $f_k^*$ be defined as in Algorithm 1 and Eq. (\ref{f*}), respectively. Then we have \begin{eqnarray*} &&\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho)\\ &\leq& \mathcal E(f_k^*)-\mathcal E(f_\rho) + \mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(f_k^*)\\ &+& \mathcal E_{\bf z}(f_k^*)-\mathcal E(f_k^*)+\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta), \end{eqnarray*} where $\mathcal E_{\bf z}(f)=\frac1m\sum_{i=1}^m(y_i-f(x_i))^2$.
Introducing the shorthand notations $$ \mathcal D(k):=\mathcal E(f_k^*)-\mathcal E(f_\rho), $$ $$ \mathcal S({\bf z},k,\delta):=\mathcal E_{\bf z}(f_k^*)-\mathcal E(f_k^*)+\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta), $$ and $$ \mathcal P({\bf z},k,\delta):=\mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(f_k^*) $$ for the approximation error, the sample error and the hypothesis error, respectively, we have \begin{equation}\label{error decomposition} \mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho)=\mathcal D(k)+ \mathcal S({\bf z},k,\delta)+\mathcal P({\bf z},k,\delta). \end{equation} First, we give an upper bound for $\mathcal D(k)$, which can be found in Proposition 1 of \cite{Lin2013a}. \begin{lemma}\label{LEMMA1} Let $f_k^*$ be defined as in Eq. (\ref{f*}). If $f_\rho\in \mathcal L_{1,\mathcal D_n}^r$, then \begin{equation}\label{approximation error estimation} \mathcal D(k)\leq \mathcal B^2(k^{-1/2}+n^{-r})^2. \end{equation} \end{lemma} To bound the sample and hypothesis errors, we need the following Lemma \ref{LEMMA2}. \begin{lemma}\label{LEMMA2} Let $y(x)$ satisfy $y(x_i)=y_i$, and let $f_{\bf z}^\delta$ be defined as in Algorithm 1. Then, at most \begin{equation}\label{Estimate k} C\delta^{-2}\log\frac1\delta \end{equation} atoms are selected to build up the estimator $f_{\bf z}^\delta$. Furthermore, for any $h \in \mbox{span}(\mathcal D_n)$, we have \begin{equation}\label{estimate hypothesis error} \|y - f_{\bf z}^\delta\|_m^2\leq2\|y - h\|_m^2+ 2\delta^2\|h\|^2_{\mathcal L_1(\mathcal D_n)}. \end{equation} \end{lemma} \begin{proof} Estimate (\ref{Estimate k}) can be found in \cite[Theorem 4.1]{Temlaykov2008a}. Now we turn to prove (\ref{estimate hypothesis error}).
Our termination rule guarantees that either $\max_{g\in \mathcal D_n}|\langle r_{k},g\rangle_m|\leq\delta\|r_k\|_{m}$ or $\|r_k\|_m\leq\delta\|y\|_m.$ In the latter case the required bound follows from \begin{equation*} \begin{aligned} \|r_k\|_m & \leq\delta\|y\|_m \\ & \leq\delta(\|y-h\|_m+\|h\|_m) \\ & \leq\delta(\|y-h\|_m+\|h\|_{\mathcal L_1(\mathcal D_n)}). \end{aligned} \end{equation*} Thus, we may assume that $\max_{g\in \mathcal D_n}|\langle r_{k},g\rangle_m|\leq\delta\|r_k\|_{m}$ holds. By using the orthogonality relation $$ \langle y-f_k,f_k\rangle_m=0, $$ we have \begin{equation*} \begin{aligned} \|r_k\|_m^2 &= \langle r_k,r_k\rangle_m \\ &= \langle r_k,y-h\rangle_m+\langle r_k,h\rangle_m \\ & \leq \|y-h\|_m\|r_k\|_m+\langle r_k,h\rangle_m\\ & \leq \|y-h\|_m\|r_k\|_m+\|h\|_{\mathcal L_1(\mathcal D_n)}\max_{g\in \mathcal D_n}|\langle r_k,g\rangle_m| \\ & \leq \|y-h\|_m\|r_k\|_m+\|h\|_{\mathcal L_1(\mathcal D_n)}\delta\|r_k\|_m. \end{aligned} \end{equation*} Dividing both sides by $\|r_k\|_m$ gives $\|r_k\|_m\leq\|y-h\|_m+\delta\|h\|_{\mathcal L_1(\mathcal D_n)}$, and squaring together with $(a+b)^2\leq2a^2+2b^2$ yields (\ref{estimate hypothesis error}). This finishes the proof. \end{proof} Based on Lemma \ref{LEMMA2} and the fact that $\|f^*_k\|_{\mathcal L_1(\mathcal D_n)}\leq \mathcal B$ \cite[Lemma 1]{Lin2013a}, we obtain \begin{equation}\label{hypothesis error estimation} \mathcal P({\bf z},k,\delta)\leq 2\mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(f_k^*)\leq 2\mathcal B^2\delta^2. \end{equation} Now we turn to bound the sample error $\mathcal S({\bf z},k,\delta)$. Using the shorthand notations $$ S_1({\bf z},k):=\{\mathcal E_{\bf z}(f_k^*)-\mathcal E_{\bf z}(f_\rho)\}-\{\mathcal E(f_k^*)-\mathcal E(f_\rho)\} $$ and $$ S_2({\bf z},\delta):=\{\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho)\}-\{\mathcal E_{\bf z}(\pi_Mf_{\bf z}^\delta)-\mathcal E_{\bf z}(f_\rho)\}, $$ we write \begin{equation}\label{sample decomposition} \mathcal S({\bf z},k,\delta)=\mathcal S_1({\bf z},k)+\mathcal S_2({\bf z},\delta).
\end{equation} It can be found in Proposition 2 of \cite{Lin2013a} that for any $0<t<1$, with confidence at least $1-\frac{t}2$, \begin{equation}\label{S1 estimate} \mathcal S_1({\bf z},k)\leq \frac{7(3M+\mathcal B\log\frac2t)}{3m}+\frac12\mathcal D(k). \end{equation} Using \cite[Eq. (A.10)]{Xu2014} with $k$ replaced by $C\delta^{-2}\log\frac1\delta$, we have \begin{equation}\label{S2 estimate} \mathcal S_2({\bf z},\delta)\leq \frac12\left(\mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho)\right)+\log\frac2t\frac{C\delta^{-2}\log\frac1\delta\log m}{m} \end{equation} with confidence at least $1-t/2$. Therefore, combining (\ref{error decomposition}), (\ref{approximation error estimation}), (\ref{hypothesis error estimation}), (\ref{S1 estimate}), (\ref{S2 estimate}) and (\ref{sample decomposition}), we conclude that \begin{equation*} \begin{aligned} & \mathcal E(\pi_Mf_{\bf z}^\delta)-\mathcal E(f_\rho) \\ & \leq C\mathcal B^2\left( (m\delta^2)^{-1}\log m\log \frac{1}{\delta }\log\frac2t+\delta^2+n^{-2r}\right) \end{aligned} \end{equation*} holds with confidence at least $1-t$. This finishes the proof of Theorem \ref{THEOREM2}. \section*{Acknowledgment} The research was supported by the National Basic Research Program of China (973 Program) (2013CB329404) and the Key Project of the National Natural Science Foundation of China (Grant Nos. 11131006 and 91330204). \end{document}
\begin{document} \begin{titlepage} \centering \includegraphics[width=4cm]{logoUHi.jpg} { \Large \bfseries Reinforcement Learning Approach to Active}\\ { \Large \bfseries Learning for Image Classification}\\ \begin{minipage}{0.49\textwidth} \begin{flushleft} \large \emph{Author:}\\ Thorben \textsc{Werner} \\ 279870 \\ \end{flushleft} \end{minipage} ~ \begin{minipage}{0.46\textwidth} \begin{flushright} \large \emph{Supervisors:} \\ Prof. Dr. Dr. Lars \textsc{Schmidt-Thieme} \\ Mohsan \textsc{Jameel} \end{flushright} \end{minipage}\\ { \today}\\ { \large \bfseries Thesis submitted for}\\ \textsc{\Large Master of Science in IMIT - Angewandte Informatik}\\ \textsc{\large Wirtschaftsinformatik und Maschinelles Lernen}\\ \textsc{\large Stiftung Universität Hildesheim}\\ \textsc{\large Universitätsplatz 1, 31141 Hildesheim}\\ \end{titlepage} \setcounter{secnumdepth}{1} \noindent \textbf{Statement as to the sole authorship of the thesis:} \\Reinforcement Learning Approach to Active Learning for Image Classification.\\[1mm] I hereby certify that the master's thesis named above was solely written by me and that no assistance was used other than that cited. The passages in this thesis that were taken verbatim or with the same sense as that of other works have been identified in each individual case by the citation of the source or the origin, including the secondary sources used. This also applies for drawings, sketches, illustrations as well as internet sources and other collections of electronic texts or data, etc. The submitted thesis has not been previously used for the fulfillment of degree requirements and has not been published in English or any other language. I am aware of the fact that false declarations will be treated as fraud. \today, Hildesheim \thispagestyle{empty} \setcounter{tocdepth}{2} \begin{abstract} Machine Learning requires large amounts of labelled data to fit a model.
Many datasets are already publicly available, which, however, restricts the possible applications of machine learning to the domains of those public datasets. The ever-growing penetration of machine learning algorithms into new application areas requires solutions to the need for data in those new domains. This thesis works on active learning as one possible solution to reduce the amount of data that needs to be processed by hand, by processing only those datapoints that specifically benefit the training of a strong model for the task. \\ A newly proposed framework from \cite{howToActiveLearn} for framing the active learning workflow as a reinforcement learning problem is adapted for image classification and a series of three experiments is conducted. Each experiment is evaluated and potential issues with the approach are outlined. Each following experiment then proposes improvements to the framework and evaluates their impact. After the last experiment, a final conclusion is drawn, unfortunately rejecting this work's hypothesis and showing that the proposed framework is, at the moment, not capable of improving active learning for image classification with a trained reinforcement learning agent. \end{abstract} \tableofcontents \listoffigures \listoftables \lstlistoflistings \pagenumbering{arabic} \chapter{Basics} \section{Machine Learning} Machine Learning (ML) is one of the two major sub-fields of artificial intelligence, the other one being formal reasoning. ML is the science of learning abstract rules from data without human supervision. For this, one needs to decide on a target variable $y \in \{ 1, \cdots, C \}$ that will be the subject of the model's prediction. Next, a dataset of features \textbf{x} $= \{ x_1, \cdots, x_M \}$ is collected that will serve as input to the model. The result is a collection of observations and corresponding values of the target variable $D = \{($\textbf{x}$^{(1)}, y^{(1)}), \cdots, ($\textbf{x}$^{(N)}, y^{(N)}) \}$.
\\ The learning process is characterized by the model, which is defined by its parameters $\theta$, the employed loss function $L(D, \theta)$ and the learning procedure. Generally, the learning procedure tries to optimize $\theta$ so that the loss function is minimal. At this minimum the model is said to have optimal predictive performance. \\ A prominent modern form of rule abstraction is the organisation of layers that learn increasingly complex dependencies, starting with very simple abstractions in the first layer. This architecture is called a deep neural network (NN). \\[2mm] This section also provides a list of machine learning terms that will not be described in detail later in the thesis. \paragraph{Gradient Descent} Optimization technique used to train many different machine learning models. It uses the derivative of the loss function with respect to the parameters of the model to update the parameters in small steps, ideally minimizing the error the model is making in the process and fitting it to the task at hand. \paragraph{(Leaky) ReLU} Rectified Linear Unit. Non-linear function used in neural networks. It does not change non-negative inputs, but truncates all values below 0 to be exactly 0. The leaky version does not truncate values below 0, but rather discounts them with a factor $a$. \begin{figure} \caption{Visualization of ReLU (left) and Leaky ReLU (right)} \end{figure} \paragraph{Softmax} Exponential function that is used for many different purposes in machine learning. It normalizes a given vector of values to be non-negative and to sum to 1, turning it into a probability distribution. Softmax also emphasizes high values and suppresses low values in the process. \section{Neural Networks} Deep neural networks consist of layers of ``artificial neurons''. Each neuron creates a linear combination of its inputs $a = Wx$, with $W$ being the neuron's weight matrix and $x$ being the vector of inputs.
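The activation functions above and a single layer's computation can be sketched in a few lines of Python (an illustrative sketch; the function names are ours):

```python
import math

def relu(a):
    # truncate negative values to exactly 0
    return [max(0.0, x) for x in a]

def leaky_relu(a, alpha=0.01):
    # discount negative values with a factor alpha instead of truncating
    return [x if x >= 0 else alpha * x for x in a]

def softmax(a):
    # normalize to a probability distribution; subtracting max(a) is a
    # standard trick for numerical stability
    m = max(a)
    e = [math.exp(x - m) for x in a]
    s = sum(e)
    return [x / s for x in e]

def neuron_layer(W, x, phi=relu):
    # one layer: linear combination a = Wx followed by the activation phi
    return phi([sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W])
```

For example, `neuron_layer([[1.0, -1.0]], [2.0, 3.0])` computes $2 - 3 = -1$ and truncates it to $0$ via ReLU.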
Before the neuron passes this linear combination to the next layer, a non-linear activation function $\phi(a)$ is applied to introduce non-linearity into the network. The collection of all weight matrices defines the model parameters $\theta$.\\ Each linear combination followed by an activation function represents one rule that is learned by the network. The first layer learns simple rules on the raw data. The second layer, being fed the outputs of the first layer, learns linear combinations of the rules from the first layer, and so on. This architecture is able to learn higher and higher abstractions of the data with increasing depth of the network.\\[1mm] Neural networks have many different archetypes that are specialized to solve certain problems. The networks that were described up to this point were fully connected (dense) networks. Another archetype is the convolutional neural network (CNN), developed by Yann LeCun et al. \cite{origConvNet} to efficiently process images. Each layer of this type defines one or multiple sliding windows, called filters, that are moved across the image, combining a certain number of input pixels into a single pixel of higher abstraction. This combination again is a linear combination, usually followed by an activation function, and is defined by the weight matrix (kernel) of the filter. A visualization of this process can be found in Figure \ref{fig:convExample}. \begin{figure} \caption{Visualization of a convolution (\cite{bookDeepLearning})} \label{fig:convExample} \end{figure} \section{Active Learning} Burr Settles, the author of a heavily cited active learning survey \cite{alSurvey} and follow-up book \cite{alBook}, describes active learning (AL) as follows: ``Active learning [...] is a subfield of machine learning and, more generally, artificial intelligence.
The key hypothesis is that if the learning algorithm is allowed to choose the data from which it learns — to be “curious,” if you will — it will perform better with less training.'' (\cite{alSurvey} p.4)\\ A strong model that is trained on very few datapoints is especially valuable if the cost of acquiring labels is high, or if the model generally needs an immense quantity of random datapoints to learn effectively.\\ The basic active learning framework consists of four components: the unlabeled dataset $U$, the labeled dataset $L$, the model, parameterized by $\theta$, and a sampling strategy $\phi(p(U,\theta))$. \begin{lstlisting}[escapeinside={(&}{&)}, caption={Generic Active Learning Workflow}, captionpos=t, label={alg:activeLearning}]
INPUT: Unlabeled Dataset (&$U$&), Model (&$\theta$&), Sampling Strategy (&$\phi$&), Budget

(&$L$&) = {}  # Start with an empty labeled set
i = 0
while i < Budget:
    # Retrain model
    fit((&$\theta$&), (&$L$&))
    # Find most promising datapoint to label
    (&$x^*$&) = (&$\phi$&)(p((&$\theta$&), (&$U$&)))
    # Obtain label from oracle and add to dataset
    (&$L$&) = (&$L \cup (x^*, y^*)$&)
    (&$U$&) = (&$U \setminus x^*$&)
    i += 1
\end{lstlisting} One iteration of AL is performed by calculating the predictions of the model on the unlabeled dataset $P_\theta(\hat y \mid U)$. Then, based on that information, the sampling strategy $\phi$ picks a point $x^\star \in U$ that maximizes the expected improvement of the model when added to the labeled dataset. After $x^\star$ is selected, an oracle provides a label for the datapoint, it is added to the labeled set $L$, and the model is retrained. \section{Reinforcement Learning}\label{sec:basicRL} Reinforcement learning is the field of optimizing an agent, which acts in an environment, based on a reward signal. The agent tries to maximize the reward by assessing the state that it is currently in and estimating the expected reward of possible actions from this point.
Most of the time this is done by using the framework of a Markov Decision Process (MDP) to model the environment in the form of its state space $S$, possible actions $A$ and the reward function $r$, as described by Sutton and Barto (\cite{bookRL} pp. 47-68). The interactions of agent and environment take place during discrete timesteps $T \in \mathbb{N}$ and are subject to a state transition function $p$. The resulting tuple ($S, A, T, p, r$) fully describes the dynamics of the MDP.\\ Sutton and Barto characterize reinforcement learning (RL) as the third paradigm of machine learning, besides supervised and unsupervised learning (\cite{bookRL} p. 2). This is due to the additional challenges of RL that do not appear in traditional methods of ML, the most prevalent one being the need to balance exploration and exploitation in the training process. The agent should always pick the action that yields the maximum expected reward (exploitation), but in order to discover the true value of each action in different situations the agent needs to explore unknown or sub-optimal actions (exploration).\\ [1mm] This work focuses on Q-learning. Formally, an agent, represented by a neural network $\pi$, also called a Deep Q-Network (DQN), receives a state $s_i$ and predicts the so-called Q-value for every possible action $a_i$. This Q-value $\hat Q_\pi (s_i, a_i)$ indicates the quality of $a_i$ for the given state $s_i$. Apart from exploration scenarios, the agent always greedily picks the action with the highest Q-value.\\[1mm] The neural network is optimized via gradient descent with the mean squared error (MSE) as loss function, resulting in the standard regression problem displayed in Equation \ref{eq:bellmanTarget}. The target $y_i$ is given by the Bellman equation (\cite{bookRL} p. 157) and $i = 1, \cdots, N$ indicates the iterations during training.
\begin{equation}\label{eq:bellmanTarget} \pi^* = \underset{\pi}{argmin} \hspace{1mm} \frac{1}{2N} \sum\limits_{i=1}^N \| \hat Q_\pi (s_i, a_i) - y_i \|^2_2 \end{equation} \begin{equation*} y_i = r_i + \gamma \hspace{1mm} \underset{a}{max} \hspace{1mm} \hat Q_\pi(s_{i+1}, a) \end{equation*} An update is performed after an interaction of the agent with the environment, once both a reward $r_i$ and the follow-up state $s_{i+1}$ have been obtained. $\hat Q_\pi(s_i, a_i)$ is updated to reflect the received reward plus the best possible action in the follow-up state. The discounting factor $\gamma$ encourages the agent to find rewards early, as later ones are discounted by $\gamma^k$ after $k$ steps. The basic algorithm is summarized in Algorithm \ref{alg:simpleRL}. \begin{lstlisting}[escapeinside={(&}{&)}, caption={Reinforcement Learning Algorithm}, captionpos=t, label={alg:simpleRL}]
INPUT: Environment (&$Env$&), Agent (&$A_\pi$&), NumInteractions

i = 0
while i < NumInteractions:
    done = False  # indicates when a game terminated and the environment needs to be reset
    (&$s_i$&) = Env.reset()  # Reset the environment and receive initial state
    while not done:
        # Predict Q values and determine action
        Q, (&$a_i$&) = (&$A_\pi$&).predict((&$s_i$&))
        # Use action to interact with the environment
        (&$s_{i+1}$&), (&$r_i$&), done = (&$Env$&).step((&$a_i$&))
        # Use new state (&$s_{i+1}$&) to update the agent with Equation (&\ref{eq:bellmanTarget}&)
        (&$A_\pi$&).update((&$s_i$&), (&$a_i$&), (&$s_{i+1}$&), (&$r_i$&))
        (&$s_i$&) = (&$s_{i+1}$&)
        i += 1
\end{lstlisting} \chapter{Introduction} Machine Learning (ML) is starting to penetrate all areas of public and private life. This development gives rise to an ever-increasing need for training data to support all the new ML models. Since many models also require a ground truth to be attached to every datapoint, creating a new dataset becomes increasingly difficult and costly.
One not only needs to collect high amounts of data, but also needs to label each datapoint by hand. Companies like Google have started to provide labeling services for machine learning datasets, consisting of big workforces of human labeling experts \cite{googleLabeling}. For individuals, however, no good solution exists. To help anyone who needs to create a new labeled dataset, one can either try to automate the process, or reduce the amount of data that needs to be labeled by hand. One way to reduce the amount of data that needs to be labeled is to employ active learning. The goal is to train a strong model for a new ML task on as few datapoints as possible, requiring the datapoints to be carefully selected in order to maximize their benefit for model training. Even though many different sampling strategies for AL have existed for a long time, the current surge in reinforcement learning introduced a new angle of improvement for active learning. On a fundamental level, AL can be framed as a RL problem: a sequence of decisions (which datapoints to pick) leads to a final result whose quality can be evaluated (the performance of the model), therefore emitting a reward signal that can be used by a RL agent.\\ This thesis picks up the work of Fang et al. \cite{howToActiveLearn}, which proposes a formulation of stream-based AL as a RL problem and evaluates the performance of the trained agent on a Named Entity Recognition (NER) task. The paper will be described in detail in Section \ref{sec:howToActiveLearn}. \\[1mm] In a series of three experiments, this thesis will apply the work of \cite{howToActiveLearn} to image classification, propose several improvements to the training framework and the formulation of the RL problem, and evaluate the performance of the RL agent compared to traditional AL sampling strategies. \\ After a methodology section in Chapter \ref{chap:method}, the experiments are described in Chapters \ref{chap:exp1}-\ref{chap:exp3}.
A final conclusion and an extensive list of possible future works are given in Chapters \ref{chap:conclusion} and \ref{sec:futureWork}. \chapter{Related Work} \section{Active Learning} The field of active learning has been covered by multiple surveys already. A central piece is the ``Active Learning Literature Survey'' by Burr Settles \cite{alSurvey}, which includes a definition of the AL problem, an exhaustive list of sampling strategies, different AL frameworks and many practical considerations. More recent works, like the ``Survey on instance selection for active learning'' by Fu et al. \cite{newAlSurvey}, are also available, but do not add much additional value. \subsection{Active Learning for Image Classification} The work of Joshi et al. \cite{rwAL1} is a typical example of active learning for image classification. The authors use uncertainty sampling to select images from an unlabeled pool to train SVMs. The combination of uncertainty sampling and SVM models is the most common in this sub-field of AL. A more widespread study was conducted by Tuia et al. \cite{rwAL2}, who created a survey similar to \cite{alSurvey} but limited to applications for satellite image data. Hoi et al. \cite{rwAL3} explore the use of batch AL in the field of medical image analysis, again using SVMs as the predictive model for medical datasets like the UCI breast cancer dataset. \subsection{Active Learning with Neural Networks} As mentioned above, using neural networks in an active learning framework is quite uncommon. The work of Wang et al. \cite{rwAL4}, however, uses a CNN for face recognition and object detection. They state that one of the problems is that ``Most AL methods pay close attention to model/classifier training.
Their strategies to select the most informative samples are heavily dependent on the assumption that the feature representation is fixed.'' (\cite{rwAL4} p.1) The authors hint at the fact that most models used in AL have a fixed feature representation (mainly SVMs), whereas CNNs optimize their feature selection jointly with the decision boundary. To overcome this issue, Wang et al. propose to not only add samples of low confidence (high informativeness) to the labeled set, but also to complement it with high-confidence samples to quickly grow the training set of the CNN.\\ Gal et al. \cite{rwAL5} use the Bayesian equivalent of CNNs to reintroduce a proper model uncertainty that is missing from conventional CNNs, but is important for active learning. To prove this importance, the authors compare the performance of active learning with conventional CNNs against Bayesian CNNs and conclude that a proper uncertainty measure for the classification model at hand is highly beneficial for standard AL methods. \section{Reinforcement Learning} The combination of reinforcement learning and active learning is extremely rare in the current state of the art. One of two works is ``Learning how to Active Learn: A Deep Reinforcement Learning Approach'' by Fang et al. \cite{howToActiveLearn}. This paper serves as the baseline paper and will be discussed in detail below. \\ The second paper is ``Active learning for reward estimation in inverse reinforcement learning'' by Lopes et al. \cite{rwRL1}, which combines reinforcement learning and active learning in the opposite way to what this work intends: the authors use active learning to aid the learning process of a RL agent instead of enhancing active learning with a RL agent. \\ Considering the absence of compatible works in the field, this section will instead introduce three well-known papers that provide important pieces for the employed training framework.
\subsection{Human-level control through deep reinforcement learning (2015)} Probably the most cited paper in the field of reinforcement learning on Google Scholar, this work is the starting point for most reinforcement learning endeavors. The authors, Mnih et al. \cite{mnih2015}, compiled an excellent guide to successful reinforcement learning frameworks and training procedures, establishing the essential components of RL, like a memory buffer, a separate target network for Q-learning and the use of a deep convolutional architecture. \\ In addition, Mnih et al. provide a comprehensive list of hyperparameters that pose a solid configuration for a wide range of RL frameworks. \subsection{Prioritized Experience Replay (2016)} The uniform sampling of memories in RL is commonly known to be highly inefficient. To address this issue, Schaul et al. propose a technique called ``Prioritized Experience Replay'' \cite{prioReplay} that assigns a higher sampling probability to more informative samples (indicated by the respective loss of that sample). This vastly improves scenarios in which a small amount of important samples with non-zero reward is buried under a mass of ``failures'' with zero reward, which otherwise causes a need for excessive amounts of training iterations before an agent learns a useful policy (\cite{prioReplay} pp. 2-3). The authors apply their sampling technique to the experiments of \cite{mnih2015} and show significant improvements on various games compared to traditional uniform memory sampling. \subsection{Deep Reinforcement Learning with Double Q-Learning (2016)} The introduction of a second Q-network into the Q-learning framework was already proposed and implemented by previous papers; however, van Hasselt et al. \cite{ddqn} properly decoupled the target and action networks during optimization to further stabilize the training of reinforcement learning agents.
Motivated by a study on overestimation errors, the authors propose to use the online network to evaluate the greedy policy while the target network estimates its value. The paper includes empirical results showing that the Double DQN (DDQN) successfully overcomes the overestimation bias that normal DQN architectures suffer from. \chapter{Methodology}\label{chap:method} \section{Active Learning}\label{sec:al} The following sections describe the different sampling strategies $\phi$ that can be used in Algorithm \ref{alg:activeLearning}, and how active learning is applied to image classification. \subsection{Uncertainty Sampling} \label{sec:uncertaintySampling} The most common sampling strategy is to pick the instance from the unlabeled set $U$ about which the model is most ``unsure''. This is straightforward for probabilistic models, as they provide their certainty in the output. Different metrics can be employed to evaluate the uncertainty (\cite{alSurvey} pp. 12-15):\\ (i) the least confident value among all of the highest class probabilities \begin{align*} x^\star = \underset{x \in U}{argmax} \hspace{2mm} 1 - P_\theta(\hat y\mid x) \\ with \hspace{4mm} \hat y = \underset{y}{argmax} \hspace{2mm} P_\theta(y\mid x) \end{align*} (ii) the difference between the highest $\hat y_1$ and second highest $\hat y_2$ class probability, also known as ``Best vs Second Best'' (BvsSB) \begin{align*} x^\star = \underset{x \in U}{argmin} \hspace{2mm} P_\theta(\hat y_1 \mid x) - P_\theta(\hat y_2 \mid x) \end{align*} (iii) the Shannon entropy over the class probabilities \begin{equation*} x^\star = \underset{x \in U}{argmax} \hspace{1.5mm} - \sum\limits_{c=1}^C P_\theta(y_c\mid x) \log P_\theta(y_c\mid x) \end{equation*} A visual comparison of the sampling strategies can be found in Figure \ref{fig:samplingStratz}. The graphs picture a three-class scenario where the position within the triangle represents the predicted probability of each of the three classes for a given instance.
The color represents the amount of information that instance provides to the model if added to the labeled set. \begin{figure} \caption{Most informative areas in a 3-class problem according to the sampling strategies (\cite{alSurvey})} \label{fig:samplingStratz} \end{figure} Each strategy shares the property that the most informative samples can be found in the middle of the triangle, where the probability of each class is leveled at $\frac{1}{3}$, and the least informative samples are at the corners, where one class has a probability of 1 and the rest have 0. Considering the edges, however, the strategies differ significantly from one another. Strategies (i) and (iii) are both very centralised, favoring those instances that produce a low overall confidence in the prediction. Strategy (ii) only focuses on the two highest probabilities, favoring a high uncertainty between those two. By doing this, the BvsSB strategy solves a common problem of (i) and (iii): in situations with a large number of labels, the probability distribution $P_\theta(\hat y \mid x)$ tends to flatten out towards all the unlikely labels. This lowers the confidence of more likely labels compared to the mass of unlikely labels and causes a high entropy for every sample $x \in U$, thus hindering the performance of the least confident (i) and entropy (iii) strategies. \section{Learning how to Active Learn}\label{sec:howToActiveLearn} The starting point for this work is the paper ``Learning how to Active Learn: A Deep Reinforcement Learning Approach'' by Meng Fang, Yuan Li and Trevor Cohn \cite{howToActiveLearn}. Fang et al. trained a Named Entity Recognition (NER) model on a limited set of labeled data that was curated by an AL procedure. They motivate their work with the compelling use case of AL for low-resource languages. Contrary to existing methods, Fang et al. did not use well-known heuristics for the AL sampling (see Sec.
\ref{sec:al}) but trained a reinforcement learning agent to do the sampling instead. The authors use a high-resource language (English) to learn a sampling policy, which can then be transferred to a low-resource language setting (German, Dutch, Spanish).\\[1mm] To train the RL agent, Fang et al. reformulated the active learning problem into a Markov decision process (MDP) by presenting the agent a single unlabeled datapoint and defining the action space as \{\textit{Label}, \textit{NotLabel}\}. This policy can be optimized with reinforcement learning to pick those datapoints that maximize the expected improvement in model performance. \begin{lstlisting}[escapeinside={(&}{&)}, caption={Learn an active learning policy (\cite{howToActiveLearn} Alg. 1)}, captionpos=t, label={alg:baselineALRL}]
INPUT: Unlabeled Dataset (&$U$&), Budget (&$B$&), NER Model (&$\theta$&), RL Agent (&$Q_\pi$&), Memory Buffer (&$M$&)

for episode in [1, ..., N]:
    (&$L$&) = {}  # Start with an empty labeled set
    (&$\theta$&) = random
    for i in [1, ..., |D|]:
        construct state (&$s_i$&) using (&$x_i$&)
        (&$a_i$&) = (&$\underset{a}{argmax} \hspace{1mm} Q_\pi (s_i, a)$&)  # obtain agent's decision
        if (&$a_i$&) = 1:
            obtain annotation (&$y_i$&)
            (&$L$&) = (&$L$&) (&$\cup$&) ((&$x_i$&), (&$y_i$&))
            fit (&$\theta$&) to (&$L$&)
        generate reward (&$r_i$&) using validation data
        if |L| = (&$B$&):
            store transaction (&$M$&) = (&$M$&) (&$\cup$&) (&$(s_i, a_i, r_i, DONE)$&)
            break
        construct new state (&$s_{i+1}$&)
        store transaction (&$M$&) = (&$M$&) (&$\cup$&) (&$(s_i, a_i, r_i, s_{i+1})$&)
        sample minibatch (&$(s_j, a_j, r_j, s_{j+1}) \sim M$&)
        perform gradient descent according to Eq. (&\ref{eq:bellmanTarget}&)
\end{lstlisting} \paragraph{Note} Algorithm \ref{alg:baselineALRL} is meant to be as close as possible to its source and therefore deviates stylistically from the other algorithms presented in this thesis. \\[1mm] The paper details an implementation for the state representation and the reward function.
It proposes a state at the $i$-th time step, $s_i$, that consists of the concatenation of an embedding of the considered datapoint $x_i$, an embedding of the model marginals $p_\theta(y \mid x_i)$ and a confidence measure $C$ based on the model. The reward is calculated based on the improvement of the NER model on a validation set, using the F1 score as the metric. The construction of the deep Q-network (DQN) and the training procedure follow Mnih et al. \cite{mnih2015}. The authors use convolutional layers to embed the considered datapoint $x_i$ and the marginals $p_\theta(y \mid x_i)$, and a single-layer DQN to predict the Q-values. During training, experience replay \cite{mnih2015} and stochastic gradient descent with batch size 32 are used. The NER model is a standard linear-chain CRF (\cite{CRF}).\\ This thesis omits the presented graphs, as they display specific cross-lingual experiments which are not comparable to any of the following work. Generally, Fang et al. showed a clear improvement of the trained AL agent over the classic entropy-based sampling strategy in all experiments. \subsection{Active Learning for Image Classification} Conceptually, the AL workflow does not change when applied to image classification. However, the model used tends to be more complex. Since up to one model per picked datapoint is trained, AL usually uses easy-to-train models like SVMs. To enable the use of an arbitrary image dataset, a CNN is employed in the further experiments of this work.\\ This makes it hard to supplement the heuristic $\phi$ with information besides the final prediction $p_\theta(\hat y \mid x)$. Furthermore, the training procedure for CNNs exhibits a high amount of variance: different initializations require varying amounts of iterations to reach a certain level of performance, which might even be unobtainable for other initializations.
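The uncertainty measures from Section \ref{sec:uncertaintySampling}, which operate directly on the model's predicted class probabilities $p_\theta(\hat y \mid x)$, can be sketched in a few lines of Python (an illustrative sketch; the function names are ours):

```python
import math

def least_confident(probs):
    # (i) uncertainty = 1 - probability of the most likely class
    return 1.0 - max(probs)

def best_vs_second_best(probs):
    # (ii) margin between the highest and second-highest class probability
    #      (a SMALLER margin means a MORE informative instance)
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

def entropy(probs):
    # (iii) Shannon entropy over the class probabilities
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_most_informative(unlabeled_probs, strategy=entropy):
    # argmax over U for strategies (i) and (iii); BvsSB would instead take
    # the argmin over the margin
    return max(range(len(unlabeled_probs)),
               key=lambda i: strategy(unlabeled_probs[i]))
```

For instance, a near-uniform prediction such as $(0.34, 0.33, 0.33)$ scores higher than a confident one such as $(0.9, 0.05, 0.05)$ under all three measures.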
\section{Reinforcement Learning} A reinforcement learning approach is characterized by three components: (i) a neural network that acts as an agent, (ii) the environment in which that agent operates and (iii) the learning algorithm used to update the agent. \subsection{Active Learning Environment}\label{sec:ALEnv} The proposed environment for the active learning game conforms to the OpenAI Gym \cite{openAiGym} interface. It exposes the functions \begin{lstlisting}
def reset():
    return state

def step(action):
    return newState, float reward, bool done, info
\end{lstlisting} and the fields \begin{lstlisting}
int actionSpace
int[] stateSpace
\end{lstlisting} Apart from the usual functions, the environment needs to maintain the labeled and the unlabeled datasets as well as the image classification (IC) network for the active learning workflow.\\ The internal state of the environment is defined by a list of datapoint IDs that play the role of the current position of the agent in conventional RL environments. The datapoints referenced by these IDs are fed into the image classification model to calculate the current state vector. The exact nature of the state vector depends on the experimental setup and is detailed in the respective chapters below.\\ The action space is defined by a number of categorical actions. All actions but the last add an image to the labeled dataset and trigger the training process of the image classification model. The last action is always a special ``pick no image'' option. When it is chosen, a random image from the current state is replaced with a new one and no training of the IC model takes place.\\ The default reward of each action is 0, deviating only if reward shaping is enabled or the game has ended.
In those two cases the reward is equal to the improvement in F1-Score of the IC model on a held-out validation set compared to the last time a reward was generated.\\ To keep the running time feasible, model training is heavily restricted: the validation error is measured on a reduced validation set of 1000 samples and aggressive early stopping is employed, which stops training as soon as the validation error is no longer monotonically decreasing. This has potential negative effects on the reward signal. Even though the training set (the environment's labeled dataset) barely changes in most cases and the F1-Score should therefore deviate only slightly, the reward fluctuated heavily. To balance this, the F1-Score $F_i$ was internally smoothed by an exponential moving average $\tilde F_i$ \begin{equation*} \tilde F_i = \alpha \tilde F_{i-1} + (1-\alpha) F_i \hspace{5mm} \text{with} \hspace{5mm} \alpha = 0.7 \end{equation*} Each game in this environment is initialized with an initial labeled set of 5 images per class. In active learning this is called the seed set. \begin{figure} \caption{Architecture of the RL environment} \label{fig:envArch} \end{figure} \subsubsection{Additional Parameters of the active learning environment}\label{lst:alEnvFields} \begin{itemize} \item model - The image classification model that is used \item dataset - A tuple of training and validation data \item {\color{blue} int} initialPointsPerClass - Number of labeled images per class at game start \item {\color{blue} bool} rewardShaping - Determines whether a reward is generated after every interaction or only at the end of a game \item {\color{blue} int} budget - The number of images that can be added to the labeled set before the environment resets \item {\color{blue} int} maxInteractionPerGame - The number of interactions after which a game terminates even if the budget is not exhausted \end{itemize} \subsection{DQN / DDQN} The agent in reinforcement learning is modeled by a neural network.
The agent tries to act intelligently by assessing its current situation and the potential value of actions from this point. The resulting behavior is also known as a policy. A well-known approach is to estimate the so-called Q-Value of an action in a given state $s_i$ with a Deep Q-Network (DQN). The Q-Value models the expected sum of future rewards when following the current policy. In practice, this amounts to a regression problem, where the agent network takes a state as input and outputs one estimated Q-Value for each possible action.\\ To determine an action $a_i$ from the estimates one can simply take the action with the highest Q-Value. This behavior is called a greedy policy and is usually applied during evaluation of the agent. During training, however, the agent needs to resort to some form of exploration in order to arrive at optimal solutions, a behavior that is impossible with a greedy policy. To introduce variance, either an $\epsilon$-greedy policy can be used, where the agent takes a random action with probability $\epsilon$, or the Q-Values are passed through a softmax function to convert them into a probability distribution from which an action can be sampled.\\ \begin{equation}\label{eq:softmaxGreedy} a_i \sim Cat(A, P_\theta) \end{equation} \begin{equation*} P_{\theta, k} = \frac{\exp(Q_k(s_i) / \tau)}{\sum\limits_{j=1}^A \exp(Q_j(s_i) / \tau)} \hspace{3mm}, k = 1, \cdots, A \end{equation*} The parameter $\tau$ is called the greed parameter and controls the decisiveness of the distribution, singling out the highest value as $\tau$ approaches 0.\\ To optimize the agent network, Alg. \ref{alg:simpleRL} can be used. Following well-known papers, this procedure is extended with a Memory Replay Buffer \cite{mnih2015} and a Double DQN (DDQN) agent \cite{ddqn}. \paragraph{Memory Replay} Instead of using only the most recent interaction with the environment (namely the tuple ($s_i, a_i, s_{i+1}, r_i$) from Alg.
\ref{alg:simpleRL}), every interaction is stored in a memory buffer from which a minibatch of interactions can be retrieved for agent training. Using an interaction to fit the agent does not remove it from the memory, but the memory buffer usually has a maximum length. This gradually removes old and potentially less relevant interactions, while still raising the data efficiency by enabling interactions to be used multiple times. This thesis relies on uniform sampling of interactions and does not look into prioritized replay \cite{prioReplay}. \paragraph{DDQN} The double DQN architecture was proposed by van Hasselt et al. \cite{ddqn}. The authors tackle the common problem of overestimation in deep Q-networks by proposing an architecture with two Q-networks: one primary network to define the policy and one secondary network that is used in the formulation of the regression target.\\ The general target from Section \ref{sec:basicRL}, Eq. \ref{eq:bellmanTarget}: \begin{equation*} y_i^{DQN} = r_i + \gamma \hspace{1mm} \underset{a}{max} \hspace{1mm} \hat Q_\theta(s_{i+1}, a) \end{equation*} Rewriting the maximum as an explicit action selection and evaluation (\cite{ddqn} Eq. 4): \begin{equation*} y_i^{DQN} = r_i + \gamma \hspace{1mm} \hat Q_\theta \left( s_{i+1}, \hspace{1mm} \underset{a}{argmax} \hspace{1mm} \hat Q_\theta(s_{i+1}, a) \right) \end{equation*} Using the primary network to pick future actions and the secondary network $\hat Q_{\theta'}$ to evaluate the state (\cite{ddqn} Eq. 4): \begin{equation} y_i^{DDQN} = r_i + \gamma \hspace{1mm} \hat Q_{\theta'} \left( s_{i+1}, \hspace{1mm} \underset{a}{argmax} \hspace{1mm} \hat Q_\theta(s_{i+1}, a) \right) \end{equation} By separating action selection and action evaluation during training, van Hasselt et al. were able to show consistent improvements over the results of Mnih et al.
\cite{mnih2015}. The secondary network can be updated either by repeatedly switching the roles of the primary and secondary network or, following the original idea of a target network \cite{mnih2015}, by copying the weights from the primary network to the secondary network at a fixed interval $C$. \section{Evaluation of an Active Learning Agent}\label{sec:alEval} Since the classic sampling strategies for AL are simple-to-compute heuristics, the sampling strategy is usually presented the entire unlabeled dataset and picks the single most promising datapoint to be labeled. Then the classification model is retrained and the process is repeated until the budget is exhausted (see Alg. \ref{alg:activeLearning}). \\ The evaluation scheme for this work consists of one game in the AL environment with a budget of 800. After each added image the resulting F1-Score of the environment's image classification model is recorded and plotted into a graph of F1-Score versus number of added images.\\ Such a graph gives a good overview of the performance of a sampling agent in different stages of the AL process. Trivially, the higher the overall profile of the curve, the better the sampling strategy performs. The evaluation is performed multiple times and the results are averaged. As stated in Section \ref{sec:ALEnv}, the environment already tracks the F1-Score as a moving average. However, a second averaging with a sliding window of size 10 is applied before plotting to smooth the curves until they clearly separate and a trend becomes visible. \section{Baselines}\label{sec:baselines} Each AL agent needs to compete against three baselines: first, against a random agent, which picks random images from the unlabeled dataset until the budget is exhausted, and, second, against two variations of active learning with the BvsSB sampling strategy. A separate test of all mentioned sampling strategies of Sec.
\ref{sec:uncertaintySampling} showed that BvsSB consistently performs best.\\ Variant 1 of the BvsSB baseline conforms to the default AL workflow and picks the single most promising image out of all unlabeled datapoints in each iteration.\\ Variant 2 is based on the workflow of Experiment 2, where only a small subset of the unlabeled dataset is presented at each iteration. During testing, this variant did not present any improvement over the random baseline and was therefore extended with an adaptive threshold that controls how many images are skipped before an image is chosen to be added. The baseline starts with a high initial threshold that slowly decays over time if no presented image generates a high enough BvsSB score. If an image exceeds the threshold, it is added to the labeled set and the threshold is reset to the initial high value. The configuration of baseline variant 2 is as follows: \begin{itemize} \item Number of presented images per interaction: 5 \item Initial threshold: 0.8 \item Threshold decay: 0.05 \end{itemize} A comparison of the employed baselines can be seen in Fig. \ref{fig:baselines}. \begin{figure} \caption{Baselines for AL agent evaluation} \label{fig:baselines} \end{figure} The frequent drops in the random agent's curve are due to a phenomenon in the image classification model. When random images are added to its training set, the IC model occasionally, due to an unfortunate distribution of training data, does not predict certain classes at all during evaluation, which negatively impacts the F1-Score. The impact of this phenomenon decreases as the training set grows. \chapter{Experiment 1}\label{chap:exp1} This and all following experiments use MNIST as the dataset for the active learning process. The dataset is loaded through the \textit{keras.datasets.mnist} module and contains 60000 uint8 images of handwritten digits.
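The BvsSB (best-versus-second-best) quantity that the baselines rank images by can be computed directly from the IC model's class marginals. A minimal numpy sketch is given below; it is illustrative only, and note that it computes the raw margin, which is *small* for informative images, whereas the thresholded ``BvsSB score'' above is presumably normalized so that informative images receive high values.

```python
import numpy as np

def bvssb_margin(probs):
    """Best-versus-second-best margin: difference between the two largest
    class probabilities of each prediction. A small margin means the model
    is torn between two classes, i.e. the image is informative to label."""
    top2 = np.sort(np.asarray(probs), axis=-1)[..., -2:]
    return top2[..., 1] - top2[..., 0]
```

For a prediction such as `[0.5, 0.5, 0.0]` the margin is 0 (maximally ambiguous), while a confident prediction such as `[0.9, 0.05, 0.05]` yields a large margin.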
\paragraph{Note} The following chapters will contain statements like ``the agent is presented a single image''. The phrases ``presented image'' and ``presented state'' will be used interchangeably and are always a simplification of ``the presented state, which is based on one or multiple images''. \section{Experiment: Adaptation of the base paper to Image Classification} To use the approach from the base paper for image classification, some adjustments need to be made. Most notably, the classification model is switched from CRFs (\cite{howToActiveLearn} Chap. 4) to a convolutional neural network.\\ Additionally, some improvements are applied to the RL algorithm itself. Instead of using a single Q-network, Double Q-Learning \cite{ddqn} is employed: two networks are maintained simultaneously. This reduces the instability stemming from optimizing a moving target and fixes the overestimation problem described in \cite{ddqn}, where single-network setups are overly optimistic due to estimation errors. \\ Following the base paper, the training procedure is a sequence of states generated from single images, presented to the agent. For each state the agent decides whether the corresponding image is to be labeled or not. After the action is taken, the interaction is stored in a memory buffer and the agent is trained with a batch of memories. The state $s_i$ consists of (i) the entropy and BvsSB score of the IC model's prediction $p_\theta(y \mid x_i)$ for the image, (ii) metrics on the IC model itself, namely the mean, standard deviation and norm of each layer and (iii) the average F1-Score of the current IC model.\\ The reward function is adopted from the base paper and consists of the improvement in F1-Score of the image classification model on a validation dataset, as discussed in Sec. \ref{sec:ALEnv}. \\ In Experiments 1 and 2 the reward function can have two variants.
One variant generates a reward after each action of the agent (reward shaping), while the other only generates a reward after a finished game (budget or maximum number of interactions exhausted). \section{Execution and Results} A full specification of the setup can be found in the appendix. The most important parameters are listed below: \paragraph{Environment} \begin{itemize} \item \textbf{State Space} Vector $\in \mathbb{R}^{27}$ generated from a single image, containing (i) entropy and BvsSB score of the IC model's prediction, (ii) the IC model's layer-wise standard deviation, mean and norm, (iii) the average F1-Score of the current IC model \item \textbf{Action Space} $a \in \{label, notLabel\}$ \item Budget: 800 \item Reward Shaping: False \end{itemize} \begin{minipage}{0.48\textwidth} \paragraph{Agent} \begin{itemize} \item Dense 24: LeakyReLU \item BatchNorm \item Dense 12: LeakyReLU \item BatchNorm \item \textbf{Policy:} Softmax greedy \end{itemize} \end{minipage} \begin{minipage}{0.48\textwidth} \paragraph{General} \begin{itemize} \item Number of interactions: 12000 \item Exploration: 4000 \item Conversion: 4000 \item Evaluation runs: 15 \end{itemize} \end{minipage}\\[5mm] The agent is trained with games of budget 800 for a total of 12000 interactions. The exploration and conversion periods define the progression of the greed parameter $\tau$ for the softmax-greedy policy (Eq. \ref{eq:softmaxGreedy}) of the agent. During the exploration period $\tau$ is 1; during the conversion period it is gradually lowered to the final value of 0.2. Every interaction thereafter uses $\tau = 0.2$.\\[1mm] The training progress of the agent is visualized by the cumulative loss and reward per game. Each new game is marked with a vertical dotted line.
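The progression of the greed parameter and the softmax-greedy action selection (Eq. \ref{eq:softmaxGreedy}) can be sketched as follows. This is a minimal illustration: the linear decay during the conversion period is an assumption (the text only states that $\tau$ is gradually lowered), and the function names are not taken from the thesis code.

```python
import numpy as np

def greed_parameter(step, exploration=4000, conversion=4000, final_tau=0.2):
    """Tau schedule: 1 during exploration, (assumed linear) decay to
    final_tau during conversion, constant final_tau afterwards."""
    if step < exploration:
        return 1.0
    if step < exploration + conversion:
        frac = (step - exploration) / conversion
        return 1.0 + frac * (final_tau - 1.0)
    return final_tau

def softmax_greedy(q_values, tau, rng):
    """Sample an action from the softmax of the Q-Values at temperature tau."""
    z = np.asarray(q_values, dtype=float) / tau
    z -= z.max()                        # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)      # categorical sample, Eq. softmaxGreedy
```

For small $\tau$ the sampled action is almost always the greedy one; for $\tau = 1$ the distribution is comparatively flat, which provides the exploration needed during training.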
\begin{figure} \caption{Experiment 1 - Loss and reward progression} \label{fig:exp1_prog} \end{figure} The script caches the top-performing agent throughout the whole training process and uses that checkpoint for the evaluation, rather than the latest state of the agent. The evaluation is conducted as described in Section \ref{sec:alEval}. To consolidate the results, the evaluation is repeated 15 times and the curves are averaged over all runs. \begin{figure} \caption{Experiment 1 - Evaluation of trained agent compared to the baselines} \label{fig:exp1_eval} \end{figure} As stated in Section \ref{sec:alEval}, a higher curve profile indicates a higher performance during evaluation. Even though a significant lift over the random baseline can be observed, the agent lags behind both BvsSB baselines by a similar margin. The values in Table \ref{tab:exp1_eval} are averaged in a window of [-10, +10] around each evaluation point. \begin{table}[H] \centering \caption{Experiment 1 - Comparison of the agent's performance at fixed points, represented by the F1-Score of the IC model} \begin{tabular}{c | c | c | c} & 100 & 400 & 800 \\ \hline BvsSB 1 & 0.77 & 0.90 & 0.96 \\ BvsSB 2 & 0.78 & 0.87 & 0.91 \\ \textbf{DDQN} & 0.74 & 0.85 & 0.90 \\ Random & 0.72 & 0.81 & 0.86 \end{tabular} \label{tab:exp1_eval} \end{table} None of the baselines uses the exact state representation of the agent (in particular the number of presented images, see Sec. \ref{sec:baselines}), so none is directly comparable to it. However, both variants of BvsSB sampling are computationally inexpensive and use a very limited set of information compared to the agent, and should therefore ideally be matched or exceeded by it. A detailed discussion of all trained agents and the employed baselines can be found in the conclusion. \paragraph{Reward Shaping} A second experiment with reward shaping was unsuccessful, due to the agent navigating itself into a seemingly locked state during evaluation.
The agent training did not show abnormal behavior; during evaluation, however, the agent always reaches a point where it rejects every presented image and therefore cannot finish the evaluation game. This issue persisted even after many restarts of the training, and the variant was ultimately abandoned due to time constraints. \section{Discussion: Misformulation of the RL environment} The obtained results show the applicability of RL agents in an active learning setting. This, however, was already shown by the base paper, whose workflow was implemented in this experiment, even though the setting changed from Named Entity Recognition (NER) to image classification.\\ The main difference between the trained agent and the vastly superior BvsSB sampling methods is the contextualization of each decision. While the RL agent is only presented a single image and needs to decide whether to label it or not, the baselines are presented the full dataset (BvsSB Variant 1) or a subset of the unlabeled data (BvsSB Variant 2). This difference is likely the main source of the agent's lacking performance.\\ The approach of this first experiment suffers from two issues: (i) The agent has no form of context for its decision. It can only decide based on the presented state (derived from one image) and its own internal model of the problem, which will likely amount to some sort of threshold mechanism. If that internal model is flawed in some region of the state space, the agent may refuse every image presented to it. This happened multiple times during training and was only overcome by restarting the whole training process. (ii) The states depend only very weakly on each other. Every time the agent makes a decision, the presented image is replaced by a random image from a large unlabeled dataset ($|U| \approx 60000$).
Since the impact of a single added image on the environment is minimal in most cases, this can be compared to a labyrinth problem in which the agent is randomly relocated after each move (the follow-up state $s_{i+1}$ depends only very weakly on the previous state $s_i$ and action $a_i$). \chapter{Experiment 2}\label{chap:exp2} \section{Experiment: Remodeling the Environment} In order to improve the setup and overcome the issues discussed in the previous section, a new state and action space for the AL environment is proposed. \\ Instead of a single image, a sample of $s$ images is presented to the agent. The agent now decides whether (i) one of these images should be added to the labeled dataset or (ii) no image should be added. In case (i) the chosen image is added to the labeled set and replaced by a random image from the unlabeled set, while the rest of the image sample stays the same. In case (ii) no image is added to the labeled set and a random image from the sample is replaced. \\ In addition to addressing the previously discussed issues, this setup improves the comparability of the training process and the standard AL workflow by moving closer to the intuition of AL that the sampling agent should be able to choose the most promising datapoint out of a sample. It is expected that this will increase the training efficiency and the predictive power of the agent.\\ This approach changes the action space of the environment to size $s + 1$, where the additional action represents the \textit{add no image} action. \section{Execution and Results} Again, a full specification of the setup can be found in the appendix.
The most important parameters are listed below: \paragraph{Environment} \begin{itemize} \item \textbf{State Space} Vector $\in \mathbb{R}^{35}$ generated from a sample of images, containing (i) entropy and BvsSB score of the IC model's prediction for each image, (ii) the IC model's layer-wise standard deviation, mean and norm, (iii) the average F1-Score of the current IC model \item \textbf{Action Space} $a \in \{0, 1, \ldots, s\}$ \item Budget: 800 \item Sample size $s$: 5 \item Reward Shaping: False \end{itemize} \begin{minipage}{0.48\textwidth} \paragraph{Agent} \begin{itemize} \item Dense 48: LeakyReLU \item BatchNorm \item Dense 24: LeakyReLU \item BatchNorm \item \textbf{Policy:} Softmax greedy \end{itemize} \end{minipage} \begin{minipage}{0.48\textwidth} \paragraph{General} \begin{itemize} \item Number of interactions: 12000 \item Exploration: 4000 \item Conversion: 4000 \item Evaluation runs: 15 \end{itemize} \end{minipage}\\[5mm] Analogously to experiment 1, the agent is trained for 12000 interactions, with 4000 interactions of exploration and 4000 of conversion. The sample size $s$ for this experiment is set to 5, which results in an action space of $\{0,1,2,3,4,5\}$ with action $5$ being \textit{add no image}.\\ The agent was scaled up by a factor of 2 to compensate for the increased complexity of the state space.\\ Again, the training progress is visualized by the cumulative loss and reward per game. \begin{figure} \caption{Experiment 2 - Loss and reward progression} \label{fig:exp2_prog} \end{figure} The obtained performance is measured over 15 evaluation runs and compared to the baselines. \begin{figure} \caption{Experiment 2 - Evaluation of trained agent compared to the baselines} \label{fig:exp2_eval} \end{figure} The result is very comparable to the plot of experiment 1 (Fig. \ref{fig:exp1_eval}). To draw a comparison, the agents from both experiments are plotted.
\begin{figure} \caption{Comparison of the agents of experiment 1 and 2} \label{fig:exp2_eval2} \end{figure} From Figure \ref{fig:exp2_eval2}, it is apparent that the new agent actually performs worse than the agent from experiment 1. To confirm this, the following table contains averaged measurements from different points of the evaluation. \begin{table}[H] \centering \caption{Experiment 2 - Comparison of the agents' performances at fixed points, represented by the F1-Score of the IC model} \begin{tabular}{c | c | c | c} & 100 & 400 & 800 \\ \hline BvsSB 2 & 0.78 & 0.87 & 0.91 \\ DDQN Exp 1 & 0.742 & 0.853 & 0.900 \\ DDQN Exp 2 & 0.756 & 0.843 & 0.886 \\ Random & 0.72 & 0.81 & 0.86 \end{tabular} \label{tab:exp2_eval} \end{table} While for experiment 1 no baseline was directly comparable to the agent, for experiment 2 the agent uses the exact setup of baseline ``BvsSB Variant 2''. As stated in Section \ref{sec:baselines}, the only addition to the baseline is a decaying threshold that controls how many images are rejected on average. \section{Discussion: Quality of the Reward Signal}\label{sec:exp2_disc} Considering the firm expectation of increased performance, this experiment needs to be marked as a failure. The updated state contains all information that the previous version did, and even adds context to each decision of the agent. The fact that agent 2 performs worse than agent 1 indicates a mistake in the implementation or a misconception of the environment. \\ The implementation has been thoroughly checked and external libraries have been applied to the problem to verify the result. The comparison to the external libraries reinforced the results and did not reveal any new insights; the corresponding runs are consequently omitted in this thesis.
\\ The following discussion will therefore not focus on the quality of the implementation, but on possible misconceptions of the problem.\\[1mm] Given that the state space now contains all necessary information to at least match baseline variant 2, another possible angle of improvement is the reward signal issued by the environment, which remained unchanged between experiments 1 and 2.\\ Two possible issues can be discussed here. (i) The impact of each individual image that is added to the labeled set is minimal, and in some situations even negative. This means that each decision the agent makes has only a very small influence on the environment and therefore on the reward signal. (ii) Without reward shaping, the payoffs for the agent are extremely long-term. With an active learning budget of 800, the agent is issued a reward at most once every 800 interactions. This long-term nature, combined with the minimal impact of each individual decision, results in a high amount of noise that might negatively impact the agent's training procedure.\\ These two statements stem from a phenomenon that can be observed in the evaluation of both experiments so far (Fig. \ref{fig:exp1_prog} and Fig. \ref{fig:exp2_prog}). While the loss curves behave as expected and monotonically decrease, the cumulative rewards actually decrease over time as well. This is unexpected behavior for a reinforcement learning agent and might signal a problem with the reward signal of the environment. Generally, an agent should achieve increasing performance (an increasing reward curve) by fitting the reward signal of the environment better (a decreasing loss curve). Since this behavior is not observed (on the basis of the current observations it is even inverted), the next experiment tries to improve the quality of the reward signal. \\ [1mm] Apart from the theoretical discussion above, a simple sanity check has been employed to examine the learning behavior of the agent.
During training, the states are recorded in the memory buffer. From this buffer the states can be retrieved and concatenated, and the mean $\mu$ and standard deviation $\sigma$ of each feature can be measured. The values of $\mu$ and $\sigma$ can then be used to draw a sample $s$ from the state space by sampling each feature $m$ uniformly from a two-standard-deviation range. \begin{equation*} s_m \sim \mathrm{Uniform}(\mu_m - 2\sigma_m, \mu_m + 2\sigma_m) \hspace{4mm} \forall \, m = 1, \ldots, M \end{equation*} For this evaluation, the impact of the BvsSB and entropy scores of individual images on the respective Q-Value is tested. Concretely, the correlation of a feature in the state space with the respective Q-Value is measured. \\ \begin{figure} \caption{Visualization of the correlation plots} \label{fig:correlation} \end{figure} \begin{figure} \caption{Experiment 2 - Impact of the BvsSB-score on the predicted Q-Value} \label{fig:BvsSB_2} \end{figure} \begin{figure} \caption{Experiment 2 - Impact of the entropy-score on the predicted Q-Value} \label{fig:entropy_2} \end{figure} The depicted behavior of the agent is a good indicator that the agent is not capturing the relationship between the BvsSB and entropy scores of an image and the potential value of that image for the AL process. Not only do the predicted Q-Values fluctuate heavily, they rarely show the expected positive trend of assigning high predicted Q-Values to high BvsSB or entropy scores. \chapter{Experiment 3}\label{chap:exp3} \section{Experiment: Playing shorter Games} To improve the quality of the reward signal, three changes to the environment are proposed. (i) Instead of considering a sample of single images for each state, the environment bundles multiple images and averages their metrics. \begin{figure} \caption{Visualization of the new state with bundled images instead of single images} \end{figure} Consequently, each interaction of the agent now adds multiple images to the labeled set instead of just one.
This fixes the issue of the minimal impact of the agent's decisions discussed in the previous section.\\ (ii) Instead of playing one big game of active learning until the budget is exhausted and then fully resetting the whole environment, shorter sub-games with soft resets between them are played until the global budget is exhausted. This setup uses no reward shaping, but a reward is issued to the agent after each sub-game. Sub-games are played until the environment reaches the global AL budget and fully resets.\\ To understand hard and soft resets, the internal variables of the AL environment need to be considered. \begin{itemize} \item {\color{blue} array} \texttt{labeledDataset} - Labeled set for the active learning process \item {\color{blue} int} \texttt{budget} - Global active learning budget \item {\color{blue} int} \texttt{subgameLength} - Number of added images per sub-game \item {\color{blue} int} \texttt{addedImages} - Number of currently added images in the sub-game \item {\color{blue} float} \texttt{initalF1Score} - F1-Score at the start of the sub-game \item {\color{blue} float} \texttt{currentF1Score} - F1-Score after the last interaction \end{itemize} Consider the start of the training process for the agent. Initially the environment is hard-reset: the \texttt{labeledDataset} is initialized with a seed set, the image classification model $\theta$ is trained and evaluated, and \texttt{currentF1Score} and \texttt{initalF1Score} are set. \begin{lstlisting}[escapeinside={(&}{&)}, caption={Hard Reset of the Environment in Experiment 3}, captionpos=t, label={alg:hardReset}]
def hard_reset():
    (&\texttt{labeledDataset}&) = initSeedSet()
    (&$\theta$&) = fit((&\texttt{labeledDataset}&))
    (&\texttt{initalF1Score}&) = (&\texttt{currentF1Score}&) = evaluate((&$\theta$&))
    (&\texttt{addedImages}&) = 0
\end{lstlisting} At this point the agent starts to interact with the environment and adds images to the \texttt{labeledDataset}.
The number of added images is tracked by \texttt{addedImages}. Once \texttt{addedImages} is equal to or greater than \texttt{subgameLength}, a reward is issued based on \texttt{currentF1Score - initalF1Score} and the environment is soft-reset. The soft reset updates \texttt{initalF1Score} and resets \texttt{addedImages}, but crucially keeps \texttt{labeledDataset} intact. \begin{lstlisting}[escapeinside={(&}{&)}, caption={Soft Reset of the Environment in Experiment 3}, captionpos=t, label={alg:softReset}]
def soft_reset():
    (&\texttt{initalF1Score}&) = (&\texttt{currentF1Score}&)
    (&\texttt{addedImages}&) = 0
\end{lstlisting} These sub-games are played until enough images have been added to \texttt{labeledDataset} to exhaust the global \texttt{budget}. This is determined by $|$\texttt{labeledDataset}$|$ - $|$\texttt{seedSet}$|$ $>$ \texttt{budget}. Once that threshold is reached, the environment hard-resets. This procedure is repeated until the desired number of interactions for the agent training is reached.\\ To finish the enumeration, the third proposal is to (iii) introduce reward scaling by a constant factor. It has been observed that the reward signal of the sub-games spans only a small range around 0. To provide a more pronounced signal to the agent, the reward should be scaled to ideally match the typical $[0, 1]$ interval of reinforcement learning. \section{Execution and Results} One last time, a full specification of the setup can be found in the appendix. The most important parameters are listed below: \paragraph{Environment} \begin{itemize} \item \textbf{State Space} Vector $\in \mathbb{R}^{35}$ generated from a sample of images, containing (i) averaged entropy and BvsSB scores of the IC model's prediction for each image bundle, (ii) the IC model's layer-wise standard deviation, mean and norm, (iii) the average F1-Score of the current IC model \item \textbf{Action Space} $a \in \{0, 1, \ldots, s\}$ \item Budget: 800 \item Sample size $s$: 5 \item Images to Bundle: 5 \item Sub-Game Length: 50 \item Reward Scaling: 40 \end{itemize} \begin{minipage}{0.48\textwidth} \paragraph{Agent (Unchanged)} \begin{itemize} \item Dense 48: LeakyReLU \item BatchNorm \item Dense 24: LeakyReLU \item BatchNorm \item \textbf{Policy:} Softmax greedy \end{itemize} \end{minipage} \begin{minipage}{0.48\textwidth} \paragraph{General} \begin{itemize} \item Number of interactions: 8000 \item Exploration: 3000 \item Conversion: 3000 \item Evaluation runs: 15 \end{itemize} \end{minipage}\\[5mm] For this experiment the agent is trained for only 8000 interactions, with 3000 interactions each for exploration and conversion. Since 5 images are added per interaction, the AL budget is reached faster and more games are played. This leads to faster convergence. The sample size $s$ for this experiment remains at 5.\\ A constant scaling factor of 40 was applied to the reward signal.\\ Each dotted vertical line in Figure \ref{fig:exp3_prog} still corresponds to a full game of active learning, each consisting of several sub-games (hence there are multiple datapoints between each pair of lines). \begin{figure} \caption{Experiment 3 - Loss and reward progression} \label{fig:exp3_prog} \end{figure} The evaluation procedure, again, did not change compared to the previous experiments. \begin{figure} \caption{Experiment 3 - Evaluation of trained agent compared to the baselines} \label{fig:exp3_eval} \end{figure} As with experiment 2, the performance curve is very similar to those of experiments 1 and 2.
To draw a comparison, all three agents are plotted next to each other and their values are compared at the usual points in Table \ref{tab:exp3_eval}. \begin{figure} \caption{Experiment 3 - Evaluation of trained agents of all experiments} \label{fig:exp3_eval2} \end{figure} \begin{table}[H] \centering \caption{Experiment 3 - Comparison of the agents' performances at fixed points, represented by the F1-Score of the IC model} \begin{tabular}{c | c | c | c} & 100 & 400 & 800 \\ \hline BvsSB 2 & 0.78 & 0.87 & 0.91 \\ DDQN Exp 1 & 0.742 & 0.853 & 0.900 \\ DDQN Exp 2 & 0.756 & 0.843 & 0.886 \\ DDQN Exp 3 & 0.742 & 0.833 & 0.884 \\ Random & 0.72 & 0.81 & 0.86 \end{tabular} \label{tab:exp3_eval} \end{table} \section{Discussion: Assessing the changes} A decreased performance (Table \ref{tab:exp3_eval}), not only compared to experiment 1, but even compared to the direct predecessor, is an unexpected result. To tackle this issue, each of the three proposed changes to the environment is discussed in isolation. \paragraph{Reward Scaling} Since the distribution of the reward signal is difficult to read off Figure \ref{fig:exp3_prog}, a histogram of the scaled and unscaled rewards was generated (Figure \ref{fig:exp3_hist}). \begin{figure} \caption{Experiment 3 - Histogram of the received rewards, unscaled (top) and scaled by a factor of 40 (bottom)} \label{fig:exp3_hist} \end{figure} It is evident that the unscaled reward signal only provides minimal feedback for the agent's updates and should therefore be considered sub-optimal.\\ Considering the negative results from this experiment, it cannot be stated that the scaled rewards ultimately provide better feedback, but the scaled distribution at least resembles a common distribution of RL rewards. Obviously, an amplification of the reward signal is of little use if the signal itself is flawed, or does not provide useful information.
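The effect of constant scaling can be illustrated with synthetic rewards; the narrow band around 0 below is an assumed stand-in chosen to mimic the shape in Figure \ref{fig:exp3_hist}, not measured data:

```python
import numpy as np

# Synthetic stand-in for the raw sub-game rewards: a narrow band around 0.
rng = np.random.default_rng(0)
raw = rng.normal(loc=0.005, scale=0.01, size=1000)

# Constant scaling as in experiment 3 (factor 40).
scaled = 40 * raw

for name, r in [("raw", raw), ("scaled", scaled)]:
    counts, edges = np.histogram(r, bins=10)
    print(f"{name:>6}: min {r.min():+.3f}, max {r.max():+.3f}, std {r.std():.3f}")
```

Scaling stretches the spread of the signal by exactly the scaling factor while leaving its shape unchanged, which is why it can amplify but never repair a flawed reward.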
\paragraph{Shorter Sub-Games} To evaluate the impact of the sub-games being played, the rewards received during agent training (Fig. \ref{fig:exp3_prog}) are discussed. \\ Since many more games are played compared to experiments 1 and 2, many more rewards are issued and a more detailed curve of the received rewards can be plotted. \begin{figure} \caption{Experiment 3 - Linear Regression of the received rewards} \label{fig:exp3_lr} \end{figure} The first observation is that the signal fluctuates heavily. This is because the reward is generated by the improvement in F1-Score of the image classification model. Every time the environment is hard-reset (Alg. \ref{alg:hardReset}), the labeled set $L$ is small and every added bundle of images has a noticeable effect (producing positive rewards), while subsequent images have a lower impact on the growing set $L$ and therefore produce lower, or even negative, rewards. Each peak of the reward signal in Figure \ref{fig:exp3_lr} corresponds to one hard reset of the environment.\\ The second observation is that the linear trend of the reward signal seems to be negative. This is a strong indicator that the agent is not learning a useful strategy for this problem. An analysis of the agent's behavior can be found in paragraph ``General Analysis''. \paragraph{Image Bundling} Since the bundled images have not been tested in isolation, it is not possible to evaluate the specific impact of that change. Considering the negative outcome of this experiment, it can only be assumed that the positive impact was negligible, or that bundling actually contributed negatively to the reward signal. \paragraph{General Analysis} In this paragraph an analysis of the agent's behavior is performed. To evaluate the learned policy, the impact of certain input features on the predicted Q-Values is measured. For this, the process of drawing samples from a custom distribution, described in section ``Experiment 2 - Discussion'' (\ref{sec:exp2_disc}), is employed again (Fig.
\ref{fig:BvsSB_3}, \ref{fig:entropy_3}). Similarly, the entropy- and BvsSB-score are considered to be the most important features for the expected value of an image (and therefore its predicted Q-Value). The evaluation shows the same behavior as in experiment 2 (Fig. \ref{fig:BvsSB_2} and \ref{fig:entropy_2}). As already indicated in Section \ref{sec:exp2_disc}, the overall behavior of the agent is unexpected, to say the least. \\ The problems include: (i) Each of the 5 plots should depict the same behavior, since each plot corresponds to the same feature and simply uses a different bundle of images. (ii) The correlation of the BvsSB-/entropy-score with an image's value can be considered to be strictly positive, as both baselines for this work solely depend on these scores and reliably outperform any RL approach tested so far. Therefore the learned behavior in Figures \ref{fig:BvsSB_3} and \ref{fig:entropy_3} does not capture this correlation correctly. \begin{figure} \caption{Experiment 3 - Impact of the BvsSB-score on the predicted Q-Value} \label{fig:BvsSB_3} \end{figure} \begin{figure} \caption{Experiment 3 - Impact of the entropy-score on the predicted Q-Value} \label{fig:entropy_3} \end{figure} \chapter{Conclusion}\label{chap:conclusion} \section{Overall Results of the Experiments} This conclusion, like the thesis itself, starts with the paper ``Learning how to Active Learn [...]'' by Fang et al. \cite{howToActiveLearn}, which served as the baseline paper. The authors of said paper are able to use RL to learn an active learning sampling strategy that consistently outperforms their baselines for a Named Entity Recognition (NER) task. While those results indicated the applicability of RL to active learning, a couple of key differences to this thesis need to be pointed out. Fang et al. used CRFs to classify their data, while this thesis uses CNNs. This introduces significant differences in the amount of training data needed for the model (200 sampled points in \cite{howToActiveLearn} vs.
800 in Experiments 1-3). The different classification task of this thesis introduced the BvsSB baseline, which proved to be more powerful than entropy-based sampling (as discussed in Section \ref{sec:baselines}); it remains untested whether BvsSB would also outperform the baseline of \cite{howToActiveLearn} in their setting. \\ These differences were initially thought of as minor hurdles on the path to reproducing the results from \cite{howToActiveLearn}. Even though this thesis did not find a definitive cause for the lacking performance of experiment 1, these differences might be among the underlying problems of that approach. \\ Apart from the trivial conclusion that this work did not match the expectations, the rest of this section will focus on the individual experiments. \\[2mm] After replicating the algorithm from \cite{howToActiveLearn} and observing the lacking performance displayed in Figure \ref{fig:exp1_eval}, the hypothesis was proposed that a poor formulation of the active learning problem in the reinforcement learning framework was the root cause.\\ The following experiments 2 and 3 were meant to test this hypothesis and improve on the outlined problems. Experiment 2 implemented a more natural version of AL by presenting a sample of datapoints to the agent instead of single points, but failed to yield an improvement in performance. The discussion in Section \ref{sec:exp2_disc} identified the noisy reward signal as a possible problem. The final experiment proposed several changes to the environment to improve the reward signal, which did not help the performance of the third approach, but even lowered it slightly below that of the previous experiments.\\ Even though many different improvements to the setup could still be implemented (see Chapter \ref{sec:futureWork}), the original hypothesis needs to be rejected. \\ Experiments 2 and 3 indicate that a reformulation of the AL problem does not lift the performance of the system on its own.
\\ However, the conducted evaluation of the impact of the entropy- and BvsSB-scores on the respective Q-Values (Figures \ref{fig:BvsSB_2}, \ref{fig:entropy_2}, \ref{fig:BvsSB_3} and \ref{fig:entropy_3}) indicates a sub-optimal learning process during agent training. It is possible that by improving the training process the agent would be able to accurately capture the expected correlation and most likely improve its performance during evaluation. \\[2mm] At this point it is unclear whether an improvement based on another fundamental misconception would fix the poor performance of this system, or whether it is a case of finding the ``right mix'' of techniques to employ in order to arrive at a strong RL agent. To evaluate this question, the outlined problems and the advanced techniques described in Chapter \ref{sec:futureWork} can be addressed. If one of the techniques mentioned in the paragraph ``Advanced Concepts'' does indeed bring a significant lift in performance, it is very likely that the root problem was not a poor formulation of the RL environment, but instead the small size of the labeled set $L$ (in the case of advanced concept (iii) \cite{rwAL4}). On the other hand, if a comprehensive hyperparameter optimization proves successful, the original formulation of \cite{howToActiveLearn} likely was sufficient and the image classification use case just needs a high amount of fine-tuning. \chapter{Future Work}\label{sec:futureWork} \paragraph{Evaluation} The framework in this case is considered to include the evaluation procedure for the trained agent. While the evaluation clearly shows a lack of performance in the context of traditional active learning, it misses a comparison to any parameterized or end-to-end learned method. Such a comparison might give further insights on the state of the current system and how much it can be improved.
\paragraph{Environment} The central part of the framework is of course the RL environment, which presents the following angles for improvement: (i) It is essentially random whether any given state will receive a non-zero reward, since a reward is only issued when the budget is exhausted or a sub-game is finished, not in response to a specific state/action pair (as it would be in a labyrinth environment). (ii) The environment does not induce any form of labeling cost, which is a common theme in AL problems. (iii) The agent does not receive any negative reward for indefinitely rejecting every presented image sample. (iv) The current state representation does not include potentially important information like \begin{itemize} \item An embedded representation of the presented images, e.g. produced by a pretrained feature extractor \item The number of added images per class (and possibly metrics like the standard deviation of that distribution) \item Any representation of previous decisions or states \item The number of rejected samples since the last accepted one \end{itemize} \paragraph{Hyperparameters} Due to time constraints, a number of technical details can be improved upon. Many hyperparameters of the training have not been optimized with a proper search technique. This includes the type of optimizer and its parameters, the architecture of the RL agents, the progression of the learning rate and the greed parameter $\tau$ of the agent, as well as the momentum of the moving average for the internal F1-Score of the environment. \paragraph{Training Speed} Furthermore, the overall speed of the training can be improved by fine-tuning the fitting of the image classification model that takes place whenever an image is added to the labeled set. One possible solution would be to use a pretrained model and only apply transfer learning on the updated labeled set.
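A minimal sketch of that transfer-learning idea, with a frozen random projection standing in for the pretrained extractor and a logistic head standing in for the classifier's output layer; all shapes, names and data here are hypothetical illustrations, not the thesis setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pretrained" feature extractor: a fixed random projection
# standing in for the convolutional layers of the IC model.
W_frozen = rng.normal(size=(64, 32))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen forward pass, never updated

def fit_head(X, y, epochs=300, lr=0.1):
    """Refit only a small logistic head whenever the labeled set grows."""
    F = features(X)
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))     # sigmoid probabilities
        w -= lr * F.T @ (p - y) / len(y)       # logistic-regression gradient step
    return w

# Toy "labeled set": 64-dim inputs, label = sign of the first coordinate.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(float)
w = fit_head(X, y)
train_acc = ((features(X) @ w > 0) == (y > 0.5)).mean()
```

Because only the head is refit, each update after adding an image costs a few matrix-vector products instead of a full CNN training run.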
\paragraph{Number of Iterations} Compared to other applications of RL, all experiments ran for a small number of interactions (8000-12000). An extensive experiment with an increased number of iterations might yield better results. \\ Furthermore, the training did not contain a ``warm-up'' phase in which the agent collects experiences without actually training on them, in order to populate the memory buffer. \paragraph{Advanced Concepts} Apart from technical improvements, multiple advanced concepts can be applied to the problem. \\ (i) An advanced memory replay can be employed by implementing the work ``Prioritized Experience Replay'' by Schaul et al. \cite{prioReplay}. \\ (ii) Following the ideas of curriculum learning \cite{currLearning}, the agent may be trained on problems of increasing difficulty by slowly increasing the budget and/or the length of the sub-games. \\ (iii) The advanced sampling strategy from \cite{rwAL4} can be used to quickly grow the labeled set and stabilize the CNN training by automatically adding instances with high confidence.
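Concept (ii) could, for instance, be realised with a simple linear schedule over training progress; the start and end ranges below are illustrative assumptions, not values used in the experiments:

```python
def curriculum(interaction, total=8000,
               budget_range=(200, 800), subgame_range=(10, 50)):
    """Linearly ramp the AL budget and sub-game length over agent training."""
    t = min(max(interaction / total, 0.0), 1.0)   # training progress in [0, 1]
    budget = round(budget_range[0] + t * (budget_range[1] - budget_range[0]))
    subgame_length = round(subgame_range[0] + t * (subgame_range[1] - subgame_range[0]))
    return budget, subgame_length
```

At the start the agent would play short, cheap games; by the end of training it faces the full setting (budget 800, sub-game length 50).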
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}} \chapter{Appendix} \section{Full Project} The full project is available under\\ \href{https://github.com/ex0Therm1C/FinalExperiments}{https://github.com/ex0Therm1C/FinalExperiments} \section{Experiment 1 Full Setup} \subsection{Image Classifier} \begin{tabular}{l | l | c | l } Layer 1 & Layer 2 & Layer 3 & Layer 4 \\ \hline Conv2D 64 & Conv2D 32 & Flatten & Dense 24 \\ Size: 3 & Size: 3 && \\ Stride: 3 & & & \\ ReLU & ReLU && ReLU \end{tabular} \subsection{Agent} \begin{tabular}{l | c | l | c} Layer1 & Layer2 & Layer3 & Layer 4 \\ \hline Dense 24 & BatchNorm & Dense 12 & BatchNorm \\ LeakyReLU && LeakyReLU \\ 0.001 L2-Reg && 0.001 L2-Reg \\ Init.: HE-uniform && Init.: HE-uniform \\ \end{tabular}\\[2mm] \textbf{Policy:} Softmax Greedy \\ \textbf{Update Eq.} Van Hasselt \cite{ddqn} DDQN | $\gamma=0.9$\\ \textbf{Optimizer:} SGD \\ \subsection{Environment} \paragraph{State Space} Vector $\in \mathbb{R}^{27}$ generated from a single image, containing (i) entropy and BvsSB score of the IC model's prediction, (ii) IC model layer-wise std-deviation, mean and norm, (iii) average F1-Score of the current IC model \paragraph{Action Space} $a \in \{label, notLabel\}$ \subsection{General Parameters} \begin{tabular}{c | c} \hline icModelMaxEpochs & 50\\ earlyStoppingPatience & 1\\ agentBatchSize & 64 \\ targetNetworkUpdateRate (C) & 10 \\ budget & 800 \\ Reward Shaping & False \\ maxInteractionPerGame & 1200 \\ minTrainingInteractions & 12000 \\ greedParameterRange & [1, 0.2]\\ agentLearningRateRange & [0.001, 0.00001] \\ exploration & 4000 \\ conversion & 4000 \\ memoryMaxLength & 1000 \end{tabular} \section{Experiment 2 Full Setup} \subsection{Image Classifier} Unchanged from Experiment 1 \subsection{Agent} \begin{tabular}{l | c | l | c} Layer1 & Layer2 & Layer3 & Layer 4 \\ \hline Dense 48 & BatchNorm & Dense 24 & BatchNorm \\ LeakyReLU && LeakyReLU \\ 0.001 L2-Reg && 0.001 L2-Reg \\ Init.: HE-uniform && Init.: HE-uniform 
\\ \end{tabular}\\[2mm] \textbf{Policy:} Softmax Greedy \\ \textbf{Update Eq.} Van Hasselt \cite{ddqn} DDQN | $\gamma=0.9$\\ \textbf{Optimizer:} SGD \\ \subsection{Environment} \paragraph{State Space} Vector $\in \mathbb{R}^{35}$ generated from a sample of images, containing (i) entropy and BvsSB score of the IC model's prediction for each image, (ii) IC model's layer-wise std-deviation, mean and norm, (iii) average F1-Score of the current IC model \paragraph{Action Space} $a \in [0, 1, ... \hspace{1mm} , s + 1]$ \subsection{General Parameters} \begin{tabular}{c | c} \hline icModelMaxEpochs & 50\\ earlyStoppingPatience & 1\\ agentBatchSize & 64 \\ targetNetworkUpdateRate (C) & 10 \\ budget & 800 \\ Reward Shaping & False \\ sample size & 5 \\ maxInteractionPerGame & 1200 \\ minTrainingInteractions & 12000 \\ greedParameterRange & [1, 0.2]\\ agentLearningRateRange & [0.001, 0.00001] \\ exploration & 4000 \\ conversion & 4000 \\ memoryMaxLength & 1000 \end{tabular} \section{Experiment 3 Full Setup} \subsection{Image Classifier} Unchanged from Experiment 1 \subsection{Agent} Unchanged from Experiment 2 \subsection{Environment} \paragraph{State Space} Vector $\in \mathbb{R}^{35}$ generated from a sample of images, containing (i) averaged entropy and BvsSB scores of the IC model's prediction for each image bundle, (ii) IC model's layer-wise std-deviation, mean and norm, (iii) average F1-Score of the current IC model \paragraph{Action Space} $a \in [0, 1, ... 
\hspace{1mm} , s + 1]$ \subsection{General Parameters} \begin{tabular}{c | c} \hline icModelMaxEpochs & 50\\ earlyStoppingPatience & 1\\ agentBatchSize & 64 \\ targetNetworkUpdateRate (C) & 10 \\ budget & 800 \\ Reward Shaping & False \\ sample size & 5 \\ Images to Bundle & 5 \\ Sub-Game Length & 50 \\ Reward Scaling & 40 \\ maxInteractionPerGame & 1200 \\ minTrainingInteractions & 8000 \\ greedParameterRange & [1, 0.2]\\ agentLearningRateRange & [0.001, 0.00001] \\ exploration & 3000 \\ conversion & 3000 \\ memoryMaxLength & 1000 \end{tabular} \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \end{document}
\begin{document} \begin{center} {\large {\bf {\color{red}On a generalization of the Perron-Frobenius theorem \\in an ordered Banach space}}}\\ \end{center} \begin{center} {\bf Abdelkader Intissar}$^{1,2}$ \end{center} \n $^{1}$ Equipe d'Analyse Spectrale, Université de Corse, UMR-CNRS No. 6134, Quartier Grossetti\\ 20 250 Corté, France\\ Tél : 00 33 (0) 4 95 45 00 33 -Fax : 00 33 (0) 4 95 45 00 33\\ Email address : [email protected]\\ \n $^{2}$ Le Prador, 129 rue du Commandant Rolland, 13008 Marseille, France\\ Email address : [email protected]\\ \n {\bf {\color{red} Abstract}}\\ \n We work in an ordered Banach space with a closed generating positive cone. We show that a positive compact operator either has zero spectral radius or has a positive eigenvector with the corresponding eigenvalue equal to the spectral radius.\\ \n {\bf {\color{red} $\S$ 1 The Perron-Frobenius Theorem in $\mathbb{R}^{n}$}}\\ \n {\color{blue} {\bf (A) Nonnegative Vectors and Matrices}}\\ \n {\bf Definition 1.1}\\ \n {\color{red}$\bullet$} A vector $x \in \mathbb{R}^{n}$ is nonnegative, and we write $x \geq 0$, if its coordinates are nonnegative. It is positive, and we write $x > 0$, if its coordinates are (strictly) positive. Furthermore, a matrix $A \in M_{n\times m}(\mathbb{R})$ (not necessarily square) is nonnegative (respectively, positive) if its entries are nonnegative (respectively, positive); we again write $A \geq 0$ (respectively, $A > 0$).
More generally, we define an order relation $x \leq y$ whose meaning is $y - x \geq 0$.\\ \n {\color{red}$\bullet$} Given $x \in \mathbb{C}^{n}$, we let $|x|$ denote the nonnegative vector whose coordinates are the numbers $\mid x_{j}\mid$. Likewise, if $A \in M_{n}(\mathbb{C})$, the matrix $|A|$ has entries $|a_{i j}|$.\\ \n{\color{red}$\bullet$} Observe that given a matrix and a vector (or two matrices), the triangle inequality implies $|Ax| \leq |A| \cdot |x|$.\\ \n For a systematic study of positive matrices, we can consult volumes 1 and 2 of Gantmacher {\bf{\color{blue}[Gantmacher]}}.\\ \n {\bf Proposition 1.2}\\ \n A matrix $A$ is nonnegative if and only if $x \geq 0$ implies $Ax \geq 0$. It is positive if and only if $x \geq 0$ and $x \neq 0$ imply $Ax > 0.$\\ \n {\bf Proof}\\ \n Let us assume that $Ax \geq 0$ (respectively, $> 0$) for every $x \geq 0$ (respectively, $x \geq 0$ and $x \neq 0$). Then the $i$th column $A^{(i)}$ is nonnegative (respectively, positive), since it is the image of the $i$th vector of the canonical basis. Hence $A \geq 0$ (respectively, $A > 0$).\\ \n Conversely, $A \geq 0$ and $x \geq 0$ trivially imply $Ax \geq 0$. If $A > 0$, $x \geq 0$, and $x \neq 0$, there exists an index $l$ such that $x_{l} > 0$. Then $\displaystyle{(Ax)_{i} = \sum_{j}a_{ij}x_{j} \geq a_{il}x_{l} > 0}$, and hence $Ax > 0$. \\ \n {\bf Primitive and irreducible non-negative square matrices}\\ \n {\bf Definition 1.3} (see for example {\bf{\color{blue}[Sternberg]}})\\ \n {\color{red}$\bullet$} A non-negative square matrix $A$ is called {\color{red}primitive} if there is a $k$ such that all the entries of $A^{k}$ are positive.
\\ \n {\color{red}$\bullet$} It is called {\color{red}irreducible} if for any $i, j$ there is a $k = k(i, j)$ such that $\displaystyle{(A^{k})_{ij} > 0}$.\\ \n {\color{red}$\bullet$} An $n \times n$ matrix $A = (a_{ij})$ is said to be reducible if $n \geq 2$ and there exists a permutation matrix $P$ such that :\\ \n $^{t}PAP = \left [ \begin{array} {cc} A_{11}&A_{12}\\ \quad\\ 0&A_{22}\\ \end{array} \right ]$ \quad\\ \n where $A_{11}$ and $A_{22}$ are square matrices of order at least one. If $A$ is not reducible, then it is said to be irreducible.\\ \n {\bf Lemma 1.4}\\ \n If $A$ is irreducible then $I + A$ is primitive.\\ \n {\bf Proof} \\ \n Indeed, the binomial expansion $\displaystyle{(I + A)^{k} = I + kA + \frac{k(k-1)}{2}A^{2} + \cdots}$ will eventually have positive entries in all positions if $k$ is large enough.\\ \n An important point is the following:\\ \n {\bf Proposition 1.5}\\ \n If $A \in M_{n}(\mathbb{R})$ is nonnegative and {\color{red}irreducible}, then $\displaystyle{(I+A)^{n-1} > 0}$.\\ \n {\bf Proof}\\ \n Let $x \neq 0$ be nonnegative, and define $x^{m} = (I +A)^{m}x$, which is nonnegative too. Let us denote by $P_{m}$ the set of indices of the nonzero components of $x^{m}$: $P_{0}$ is nonempty. Because $\displaystyle{x_{i}^{m+1} \geq x_{i}^{m}}$, one has $\displaystyle{P_{m} \subset P_{m+1}}$. Let us assume that the cardinality $|P_{m}|$ of $P_{m}$ is strictly less than $n$. There are thus one or more zero components, whose indices form a nonempty subset $I$, the complement of $P_{m}$. Because $A$ is {\color{red}irreducible}, there exists some nonzero entry $a_{ij}$, with $i \in I$ and $j \in P_{m}$. Then $\displaystyle{x_{i}^{m+1} \geq a_{ij}x_{j}^{m} > 0}$, which shows that $P_{m+1}$ is not equal to $P_{m}$, and thus $\displaystyle{|P_{m+1}| > |P_{m}|}$.\\ \n By induction, we deduce that $\displaystyle{|P_{m}| \geq \min\{m+1,n\}}$. Hence $\displaystyle{|P_{n-1}| = n}$, meaning that $\displaystyle{x^{n-1} > 0}$.
We conclude with Proposition 1.2.\\ \n {\color{blue} {\bf (B) The Perron-Frobenius Theorem: Weak Form}}\\ \n Here we denote by $\sigma(A)$ the spectrum (the set of all eigenvalues) of a (square) matrix $A$, and by $\rho(A)$ the spectral radius of $A$, i.e., the quantity $\max \{ \mid \lambda \mid ; \lambda \in \sigma(A)\}$.\\ \n {\bf Theorem 1.6} \\ \n Let $A \in M_{n}(\mathbb{R})$ be a nonnegative matrix. Then its spectral radius $\rho(A)$ is an eigenvalue of $A$ associated with a nonnegative eigenvector.\\ \n {\bf Proof}\\ \n Let $\lambda$ be an eigenvalue of maximal modulus and $v$ an eigenvector, normalized by $\mid\mid v \mid\mid_{1} = 1$. Then $\displaystyle{\rho(A) \mid v \mid = |\lambda v| = |Av| \leq A|v|}$.\\ \n Let us denote by $\mathcal{C}$ the subset of $\mathbb{R}^{n}$ (actually a subset of the unit simplex $\mathcal{K}_{n}$) defined by the (in)equalities $\displaystyle{ \sum_{i} x_{i} = 1, x \geq 0}$, and $Ax \geq \rho(A)x$. This is a {\color{red}closed convex set}, nonempty, inasmuch as it contains $|v|$. Finally, it is bounded, because $x \in \mathcal{C}$ implies $0 \leq x_{j} \leq 1$ for every $j$; thus it is {\color{red} compact}. \\ \n Let us distinguish {\color{red}two cases}.\\ \n {\color{blue}1} There exists $x \in \mathcal{C}$ such that $Ax = 0$. Then $\rho(A)x \leq 0$ furnishes {\color{red}$\rho(A) = 0$}. The theorem is thus proved in this case.\\ \n {\color{blue}2} For every $x \in \mathcal{C}$, $Ax \neq 0$. Then let us define on $\mathcal{C}$ a {\color{red}continuous map} $f$ by\\ \n $\displaystyle{f (x) = \frac{1}{\mid\mid Ax \mid\mid_{1}} Ax}$.\\ \n It is clear that $f (x) \geq 0$ and that $\mid\mid f (x) \mid\mid_{1} = 1$. Finally,\\ \n $\displaystyle{Af (x) = \frac{1}{\mid\mid Ax \mid\mid_{1}} A(Ax) \geq \frac{1}{\mid\mid Ax \mid\mid_{1}} A(\rho(A)x) = \rho(A)f(x) }$,\\ \n so that $ f (\mathcal{C}) \subset \mathcal{C}$. Then {\color{red}Brouwer's theorem} (see {\bf{\color{blue}[Berger et al]}}, p.
217) asserts that a continuous function from a {\color{red}compact convex subset} of $\mathbb{R}^{n}$ into itself has a fixed point.\\ \n Thus let $y$ be a fixed point of $f$. It is a nonnegative eigenvector, associated with the eigenvalue $r = \mid\mid Ay \mid\mid_{1}$. Because $y \in \mathcal{C}$, we have $ry = Ay \geq \rho(A)y$ and thus $r \geq \rho(A)$, which implies $r = \rho(A)$.\\ \n {\bf Remark 1.7}\\ \n (i) The proof can be adapted to the case where a real number $r$ and a nonzero vector $y$ are given satisfying $y \geq 0$ and $Ay \geq ry$. \\ \n Just take for $\mathcal{C}$ the set of vectors $x$ such that $\displaystyle{\sum_{i}x_{i} = 1, x \geq 0}$, and $Ax \geq rx$. We then conclude that $\rho(A) \geq r$.\\ \n (ii) In section 2 (the main part of this work), we prove that a positive compact operator has either zero spectral radius or a positive eigenvector with the corresponding eigenvalue equal to the spectral radius.\\ \n {\color{blue} {\bf (C) The Perron-Frobenius Theorem: Strong Form}}\\ \n {\bf Theorem 1.8}\\ \n Let $A \in M_{n}(\mathbb{R})$ be a {\color{red}nonnegative irreducible} matrix. Then $\rho(A)$ is a {\color{red}simple} eigenvalue of $A$, associated with a {\color{red}positive} eigenvector. Moreover, $\rho(A) > 0$.\\ \n {\bf Proof}\\ \n For $r \geq 0$, we denote by $\mathcal{C}_{r}$ the set of vectors of $\mathbb{R}^{n}$ defined by the conditions \\ \n $x \geq 0$, \quad $\mid\mid x \mid\mid_{1} = 1$ and $Ax \geq rx$.\\ \n Each $\mathcal{C}_{r}$ is a {\color{red}convex compact} set. We know that if $\lambda$ is an eigenvalue associated with an eigenvector $x$ of unit norm $\mid\mid x \mid\mid_{1} = 1$, then $\displaystyle{\mid x \mid \in \mathcal{C}_{\mid \lambda\mid}}$.
In particular, $\mathcal{C}_{\rho(A)}$ is nonempty.\\ \n Conversely, if $\mathcal{C}_{r}$ is nonempty, then for $x \in \mathcal{C}_{r}$, $r = r\mid\mid x \mid\mid_{1} \leq \mid\mid Ax \mid\mid_{1} \leq \mid\mid A \mid\mid_{1} \mid\mid x \mid\mid_{1} = \mid\mid A \mid\mid_{1}$, and therefore $r \leq \mid\mid A \mid\mid_{1}$.\\ \n Furthermore, the map $r \longrightarrow \mathcal{C}_{r}$ is nonincreasing with respect to inclusion, and is ``left continuous'' in the following sense: if $r > 0$, one has $\displaystyle{\mathcal{C}_{r} = \bigcap_{s < r} \mathcal{C}_{s}}$.\\ \n Let us then define $\displaystyle{R = \sup\{r ; \mathcal{C}_{r} \neq \emptyset\}}$, so that $R \in [\rho(A), \mid\mid A \mid\mid_{1}]$. The monotonicity with respect to inclusion shows that $r < R$ implies $\mathcal{C}_{r} \neq \emptyset$.\\ \n If $x > 0$ and $\mid\mid x \mid\mid_{1} = 1$, then $Ax > 0$ because $A$ is {\color{red} nonnegative} and {\color{red} irreducible}.\\ \n Setting $\displaystyle{ r := \min_{j}(Ax)_{j}/x_{j} > 0}$, we have $\mathcal{C}_{r} \neq \emptyset$, whence $R \geq r > 0$. The set $\mathcal{C}_{R}$, being the intersection of a totally ordered family of nonempty compact sets, is nonempty.\\ \n Let $x \in \mathcal{C}_{R}$ be given. Lemma 1.9 below \\ \n {\bf Lemma 1.9}\\ \n Let $r \geq 0$ and $x \geq 0$ be such that $Ax \geq rx$ and $Ax \neq rx$. Then there exists $r' > r$ such that $\mathcal{C}_{r'}$ is nonempty.\\ \n shows that $x$ is an eigenvector of $A$ associated with the eigenvalue $R$. We observe that this eigenvalue is not less than $\rho(A)$ and infer that $\rho(A) = R$. Hence $\rho(A)$ is an eigenvalue associated with the eigenvector $x$.\\ \n The following lemma:\\ \n {\bf Lemma 1.10}\\ \n The nonnegative eigenvectors of $A$ are positive.
The corresponding eigenvalue is positive too.\\ \n ensures that $x > 0$ and $\rho(A) > 0$.\\ \n The simplicity of the eigenvalue $\rho(A)$ is given in\\ \n {\bf Lemma 1.11}\\ \n The eigenvalue $\rho(A)$ is simple.\\ \n Finally, we can state the following result:\\ \n {\bf Lemma 1.12}\\ \n Let $M, B \in M_{n}(\mathbb{C})$ be matrices, with $M$ irreducible and $|B| \leq M$. Then \\ \n $\rho(B) \leq \rho(M)$.\\ \n In the case of equality ($\rho(B) = \rho(M)$), the following hold:\\ \n {\color{red}$\bullet$} $|B| = M$.\\ \n {\color{red}$\bullet$} For every eigenvector $x$ of $B$ associated with an eigenvalue of modulus $\rho(M)$, $|x|$ is an eigenvector of $M$ associated with $\rho(M)$.\\ \n {\color{blue}{\bf (D) Proof of Lemmas}}\\ \n {\bf Proof of Lemma 1.9}\\ \n Set $y := (I_{n} + A)^{n-1}x$. Because $A$ is irreducible and $x \geq 0$ is nonzero, one has $y > 0$. Likewise, $Ay - ry = (I_{n} + A)^{n-1}(Ax - rx) > 0$. \n Let us define $r' := \min_{j}(Ay)_{j}/y_{j}$, which is strictly larger than $r$. We then have $Ay \geq r'y$, so that $\mathcal{C}_{r'}$ contains the vector $y/\mid\mid y \mid\mid_{1}$.\\ \n {\bf Proof of Lemma 1.10}\\ \n Given such a vector $x$ with $Ax = \lambda x$, we observe that $\lambda \in \mathbb{R}_{+}$. Then $\displaystyle{x = \frac{1}{(1+ \lambda)^{n-1}}(I_{n} + A)^{n-1}x}$ and the right-hand side is strictly positive, from Proposition 1.5. Inasmuch as $A$ is irreducible and nonnegative, we infer $Ax \neq 0$. Thus $\lambda \neq 0$; that is, $\lambda > 0$.\\ \n {\bf Proof of Lemma 1.11}\\ \n Let $P_{A}(X)$ be the characteristic polynomial of $A$. It is given as the composition of an $n$-linear form (the determinant) with polynomial vector-valued functions (the columns of $XI_{n} - A$). If $\phi$ is $p$-linear and if $\displaystyle{V_{1}(X), . . . ,V_{p}(X) }$ are polynomial vector-valued functions, then the derivative of the polynomial $\displaystyle{P(X) := \phi(V_{1}(X), . . .
,V_{p}(X)) }$ is given by \\ \n $\displaystyle{P'(X) = \phi(V_{1}^{'}, V_{2}, . . . ,V_{p})+ \phi(V_{1}, V_{2}^{'}, . . . ,V_{p}) + \cdots +\phi(V_{1}, V_{2}, . . . ,V_{p}^{'})}$.\\ \n One therefore has \\ \n $\displaystyle{P_{A}^{'}(X) = \det(e_{1},a_{2}, . . . ,a_{n}) + \det(a_{1},e_{2}, . . . ,a_{n})+ \cdots +\det(a_{1},a_{2}, . . . ,a_{n-1}, e_{n})}$,\\ \n where $a_{j}$ is the $j$th column of $XI_{n} - A$ and $\{e_{1}, . . . ,e_{n}\}$ is the canonical basis of $\mathbb{R}^{n}$.\\ \n Developing the $j$th determinant with respect to the $j$th column, one obtains\\ \n $\displaystyle{P_{A}^{'}(X) =\sum_{j=1}^{n}P_{A_{j}}(X)}$\\ \n where $A_{j} \in M_{n-1}(\mathbb{R})$ is obtained from $A$ by deleting the $j$th row and the $j$th column.\\ \n Let us now denote by $B_{j} \in M_{n}(\mathbb{R})$ the matrix obtained from $A$ by replacing the entries of the $j$th row and column by zeroes. This matrix is block-diagonal, the two diagonal blocks being $A_{j} \in M_{n-1}(\mathbb{R})$ and $0 \in M_{1}(\mathbb{R})$. Hence, the eigenvalues of $B_{j}$ are those of $A_{j}$, together with zero, and therefore $\rho(B_{j}) = \rho(A_{j})$. Furthermore, $|B_{j}| \leq A$, but $|B_{j}| \neq A$ because $A$ is irreducible and $B_{j}$ is block-diagonal, hence reducible. It follows (Lemma 1.12) that $\rho(B_{j}) < \rho(A)$. Hence $P_{A_{j}}$ does not vanish over $[\rho(A), +\infty)$. Because $P_{A_{j}} (t) \sim t^{n-1}$ at infinity, we deduce that $P_{A_{j}}(\rho(A)) > 0.$\\ \n Finally, $P_{A}^{'}(\rho(A))$ is positive and $\rho(A)$ is a simple root.\\ \n {\bf Proof of Lemma 1.12}\\ \n In order to establish the inequality, we proceed as above. If $\lambda$ is an eigenvalue of $B$, of modulus $\rho(B)$, and if $x$ is a normalized eigenvector, then $\rho(B)|x| \leq |B| \cdot |x| \leq M|x|$, so that $\mathcal{C}_{\rho(B)}$ is nonempty. Hence $\rho(B) \leq R = \rho(M)$.\\ \n Let us investigate the case of equality.
If $\rho(B) = \rho(M)$, then $ |x| \in \mathcal{C}_{\rho(M)}$, and therefore $|x|$ is an eigenvector: $M|x| = \rho(M)|x| = \rho(B)|x| \leq |B| \cdot |x|$. Hence, $(M- |B|)|x| \leq 0$. Because $|x| > 0$ (from Lemma 1.10) and $M-|B| \geq 0$, this gives $|B| = M$.\\ \n The above results have a long history. In fact, in 1907, Perron {\bf {\color{blue}[Perron1]}} and {\bf {\color{blue}[Perron2]}} gave proofs of the following famous theorem, which now bears his name, on positive matrices:\\ \n {\color{black}{\bf Perron Theorem (1907)}}\\ \n Let $A$ be a square positive matrix. Then $\rho(A)$ is a simple eigenvalue of $A$ and there is a corresponding positive eigenvector.\\ Furthermore, $\mid \lambda \mid < \rho(A)$ for all $\lambda \in \sigma(A), \lambda \neq \rho(A)$.\\ \n \n In 1912, Frobenius {\bf {\color{blue}[Frobenius]}} extended the Perron theorem to the class of irreducible nonnegative matrices:\\ \n {\color{black}{\bf Frobenius Theorem (1912)}}\\ \n Let $A \geq 0$ be irreducible. Then \\ \n (i) $\rho(A)$ is a simple eigenvalue of $A$, and there is a corresponding positive eigenvector.\\ \n (ii) If $A$ has $m$ eigenvalues of modulus $\rho(A)$, then they are of the form $\displaystyle{\rho(A)e^{\frac{2ik\pi}{m}}}$ ; \\ \quad \quad \quad $k = 0, \dots, m-1$.\\ \n (iii) The spectrum of $A$ is invariant under a rotation about the origin of the complex plane\\ \quad \quad \,\quad by $\frac{2\pi}{m}$, i.e., $\displaystyle{e^{\frac{2i\pi}{m}}\sigma(A) = \sigma(A)}$.\\ \n (iv) If $m > 1$ then there exists a permutation matrix $P$ such that :\\ \n $^{t}PAP = \left (\begin{array}{ccccc} 0&A_{12}&&&\\ \quad\\ &0&A_{23}&&\\ \quad\\ & &0&\ddots&\\ \quad\\ &&&\ddots&A_{m-1,m}\\ \quad\\ A_{m,1}&&&&0\\ \end{array} \right )$ \quad\\ \n where the zero blocks along the diagonal are square.\\ \n We refer the reader to the interesting paper of C. Bidard and M.
Zerner {\bf {\color{blue}[Bidard et al]}} for an application of the Perron-Frobenius theorem in relative spectral theory, and to {\bf{\color{blue}[Alintissar et al]}} for some economic models.\\
A natural extension of the concept of a nonnegative matrix is that of an integral operator with a nonnegative kernel. The following extension of Perron's theorem is due to Jentzsch {\bf{\color{blue}[Jentzsch]}}:\\
{\color{black}{\bf Jentzsch Theorem (1912)}}\\
Let $k(., .)$ be a continuous real function on the unit square with $k(s, t) > 0$ for all $0 \leq s, t \leq 1$. If $K: L^{2}[0, 1] \longrightarrow L^{2}[0, 1]$ denotes the integral operator with kernel $k$ defined by setting\\
$\displaystyle{ (Kf)(s) = \int_{0}^{1}k(s, t)f(t)dt, \quad f \in L^{2}[0, 1],}$\\
then\\
(i) $K$ has positive spectral radius;\\
(ii) the spectral radius $\rho(K)$ is a simple eigenvalue, with (strictly) positive eigenvector;\\
(iii) if $\lambda \neq \rho(K)$ is any other eigenvalue of $K$, then $\mid \lambda \mid < \rho(K)$.\\
A generalization of Jentzsch's theorem is given by Schaefer in his book {\bf {\color{blue}[Schaefer]}} as follows:\\
{\color{black}{\bf Schaefer Theorem (1974)}}\\
Let $(\mathbb{X}, \mathcal{T}, \tau)$ be a measure space with positive measure $\tau$ and let $L_{p}(\tau)$ be the set of all measurable functions on $\mathbb{X}$ whose absolute value raised to the $p$-th power has finite integral.\\
Let $T$ be a bounded integral operator defined on $L_{p}(\tau)$ by a kernel $\mathcal{N} \geq 0$.\\
We suppose that:\\
(i) There exists $n \in \mathbb{N}$ such that $T^{n}$ is a compact operator.\\
(ii) For every $\mathbb{S} \in \mathcal{T}$ with $\tau(\mathbb{S}) > 0$ and $\tau(\mathbb{X} - \mathbb{S}) > 0$ we have\\
$\displaystyle{\int_{\mathbb{X} - \mathbb{S}}\int_{\mathbb{S}}\mathcal{N}(s, t)d\tau(s)d\tau(t) > 0.}$\\
Then the spectral radius $r(T)$ of the integral operator $T$ is a simple eigenvalue associated with an eigenfunction $f$ satisfying $f(s) > 0 \,
\tau$-almost everywhere.\\
A fine study of the above theorems is given in an article of Zerner from 1987 {\bf {\color{blue}[Zerner]}}, in particular the following result:\\
{\color{black}{\bf Zerner Theorem (1987)}}\\
Suppose that $A$ is irreducible and that $\rho(A)$ is a pole of its resolvent. Then $\rho(A)$ is non-zero and it is a simple pole, any positive eigenvector associated with $\rho(A)$ is quasi-interior, and any positive eigenvector of the transpose $A'$ of $A$ associated with $\rho(A)$ is a strictly positive form.\\
If moreover $\rho(A)$ is of finite multiplicity, it is a simple eigenvalue.\\
{\bf Remark 1.13}\\
{\color{red}$\bullet$} We should not be under any illusions about the conclusion ``$\rho(A)$ non-zero''. In applications, we do not really see how one could prove that $\rho(A)$ is a pole without at the same time proving that it is non-zero.\\
{\color{red}$\bullet$} We refer the reader to an original application of Jentzsch's theorem in reggeon field theory; see T. Ando and M. Zerner $(1984)$ in {\bf {\color{blue}[Ando et al]}} and A.
Intissar and J.K. Intissar $(2019)$ in {\bf {\color{blue}[Intissar et al]}}.\\
{\bf {\color{red} $\S$ 2 The Perron-Frobenius Theorem in an ordered Banach space with closed generating positive cone}}\\
{\bf {\color{blue} (A) Introduction and statement of the results}}\\
Below we give some abstract properties of the order relation on a Banach space.\\
Let $(E, \leq)$ be an ordered vector space and let $\mathcal{C}$ be a proper ($\mathcal{C}\cap(-\mathcal{C}) = \{0\}$) convex positive cone in $E$. Then:\\
{\color{blue}$\bullet$} $\displaystyle{x \leq y \Longrightarrow x + a \leq y + a}$\\
{\color{blue}$\bullet$} $\displaystyle{x \leq y \iff y - x \in \mathcal{C}}$\\
{\color{blue}$\bullet$} $\mathcal{C} + \mathcal{C} \subset \mathcal{C}$\\
{\color{blue}$\bullet$} $\displaystyle{x \leq y \text{ and } t \geq 0 \Longrightarrow tx \leq ty}$\\
{\color{blue}$\bullet$} $\displaystyle{x_{1} \leq y_{1} \text{ and } x_{2} \leq y_{2} \Longrightarrow x_{1} + x_{2} \leq y_{1} + y_{2}}$\\
{\color{blue}$\bullet$} $\displaystyle{x \in \mathcal{C} \text{ and } -x \in \mathcal{C} \Longrightarrow x = 0}$\\
{\color{blue}$\bullet$} $\mathcal{C}$ is closed.\\
{\bf Definition 2.1}\\
A real Banach space $E$ equipped with a closed, convex, proper and generating cone $\mathcal{C}$ is called an ordered Banach space.\\
By proper is meant that there exists no $x \neq 0$ in $E$ such that both $x$ and $-x$ belong to $\mathcal{C}$, and by generating that every $x \in E$ has a decomposition $x = x^{+} - x^{-}$ where $x^{+}$ and $x^{-}$ both belong to $\mathcal{C}$.\\
The cone $\mathcal{C}$ defines an order relation in $E$.\\
A vector in $E$ is called positive (in agreement with the French terminology) if it belongs to $\mathcal{C}$, negative if its opposite is positive. The elements belonging to the cone $\mathcal{C}$ are called the positive elements of $E$.\\
An operator $A$ acting on an ordered Banach space $E$ is said to be positive if it transforms positive
elements into positive elements, i.e. $A(\mathcal{C}) \subset \mathcal{C}$.\\
In $1948$, in an abstract order-theoretic setting, in the important memoir {\bf{\color{blue}[Krein-Rutman]}} Krein and Rutman partially extended the Perron-Frobenius theorem to a positive compact linear operator leaving invariant a convex cone in a Banach space.\\
They obtained the following:\\
{\color{black}{\bf Krein-Rutman Theorem (1948)}}\\
(i) Let $A$ be a positive compact linear operator on $E$. Suppose that $A(\mathcal{C}) \subseteq \mathcal{C}$, where $\mathcal{C}$ is a closed generating cone in $E$. If $\rho(A) > 0$, then there exists a nonzero vector $x \in \mathcal{C}$ such that $Ax = \rho(A)x$.\\
(ii) Let $A$ be a positive compact linear operator on $E$. Assume there is a nonnegative vector $x$, a natural number $p$ and a positive real $\lambda$ such that\\
$\displaystyle{A^{p}x \geq \lambda^{p}x}$ \hfill {\bf {\color{blue} (2.1)}}\\
(The inequality $x \leq y$ means that $y - x \in \mathcal{C}$.)\\
Then $A$ has a positive eigenvector associated with an eigenvalue at least equal to $\lambda$.\\
Remember that the spectral radius of an operator is the radius of the smallest closed disk containing its spectrum. In the case of a compact operator, it is either zero or the largest modulus of its eigenvalues.\\
This suggests the following result, which is the main part of this work:\\
{\bf Theorem 2.2}\\
The spectral radius $\rho(A)$ of a positive compact linear operator $A$ on $E$ is either $0$ or an eigenvalue corresponding to a positive eigenvector.\\
{\bf Proof}\\
We give the proof in several steps.\\
This section will be devoted to the proof of this theorem.
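In finite dimensions, with the cone of componentwise-nonnegative vectors, condition (2.1) of the Krein-Rutman theorem can be checked numerically. The following sketch uses an illustrative irreducible nonnegative matrix and test vector (both are our choices, not taken from the text) and verifies that the hypothesis $A^{p}x \geq \lambda^{p}x$ forces an eigenvalue at least equal to $\lambda$:

```python
import numpy as np

# Krein-Rutman (ii) in R^3 with the cone of nonnegative vectors:
# if A^p x >= lambda^p x componentwise for some x >= 0, x != 0,
# then A has an eigenvalue (its spectral radius) at least lambda.
A = np.array([[0., 1., 1.],
              [1., 0., 2.],
              [1., 1., 0.]])
x = np.ones(3)
p = 2

# The largest lambda for which A^p x >= lambda^p x holds for this x:
lam = min(np.linalg.matrix_power(A, p) @ x / x) ** (1 / p)
assert (np.linalg.matrix_power(A, p) @ x >= lam**p * x - 1e-12).all()

rho = max(abs(np.linalg.eigvals(A)))
assert rho >= lam - 1e-12   # an eigenvalue at least equal to lambda exists
```

Here the cone is the nonnegative orthant, so the abstract inequality $x \leq y$ becomes a componentwise comparison.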
But before, we give the following corollary and remark, and two classical theorems from Kato's book on the structure of the spectrum of a {\color{red}compact} operator.\\
{\bf Corollary 2.3} (monotonicity of the spectral radius)\\
Let $A_{1}$ and $A_{2}$ be two positive compact linear operators on $E$ such that $A_{2} - A_{1}$ is positive.\\
Then the spectral radius of $A_{2}$ is at least equal to the spectral radius of $A_{1}$.\\
{\bf Proof}\\
Call $r$ the spectral radius of $A_{1}$. We may assume $r {\color{red} >} 0$, otherwise the conclusion is obvious. By Theorem 2.2, $r$ is an eigenvalue of $A_{1}$ corresponding to a positive eigenvector $u$. Then we have $A_{2}u \geq A_{1}u = ru$,\\
so that by (ii) of the Krein-Rutman theorem with $p = 1$, $A_{2}$ has an eigenvalue at least equal to $r$.\\
{\bf Remark 2.4}\\
From now on, $A$ will denote a positive compact linear operator on $E$ with {\color{red}non-zero spectral radius}.\\
Multiplying $A$ by a positive factor multiplies its spectral radius by the same factor, so we can assume the spectral radius of $A$ to be equal to one, and we shall do so henceforth.\\
A simple and well-known special case occurs if we suppose that $A$ has an eigenvalue of the form $\displaystyle{e^{\frac{ik\pi}{n}}}$. Then one is an eigenvalue of $A^{2n}$.\\
Let $u$ be an eigenvector associated with this eigenvalue $(A^{2n}u = u)$.\\
We assume $u$ to be nonnegative, otherwise we would take $-u$ instead. So we can apply (ii) of the Krein-Rutman theorem, obtaining a positive eigenvector of $A$ associated with an eigenvalue at least equal to {\color{red}one}; but it cannot be larger.\\
The spectrum of a {\color{red}compact} operator $A$ in $E$ has a simple structure ``analogous'' to that of an operator in a finite-dimensional space. This is expressed by the following theorem.\\
{\bf Theorem 2.5} (Kato theorem III.6.26 p. 185, {\bf {\color{blue}[Kato]}})\\
Let $A$ be a compact operator acting on $E$.
Then\\
(i) its spectrum $\sigma(A)$ is a countable set with no accumulation point different from zero.\\
(ii) each nonzero $\lambda \in \sigma(A)$ is an eigenvalue of $A$ with finite multiplicity, and $\overline{\lambda}$ is an eigenvalue of $A'$ with the same multiplicity.\\
In this section we will also use the following theorem, adapted from theorem III.6.17 p. 178 of [Kato], on the separation of the spectrum.\\
{\bf Theorem 2.6} (Kato theorem III.6.17 p. 178, {\bf {\color{blue}[Kato]}})\\
$E$ can be split into the direct sum of two closed subspaces $E'$ and $E''$, both invariant under $A$, with the following properties, where $A'$ and $A''$ denote the restrictions of $A$ to $E'$ and $E''$ respectively.\\
(i) All eigenvalues of $A'$ have modulus one.\\
(ii) The spectral radius of $A''$ is at most equal to $r$ (equal if we have chosen $r$ as small as possible).\\
Moreover, the eigenvalues of modulus one being isolated and of finite multiplicity, $E'$ has finite dimension.\\
It follows from the above theorem that a crucial step in the proof of Theorem 2.2 is that $E'$ contains a {\color{red}non-zero positive vector}.\\
This will be proved after the following proposition in the following step:\\
{\bf {\color{blue} (B) Construction of a sequence of nearly eigenvectors}}\\
{\bf Proposition 2.7}\\
The aim of this proposition is to construct a sequence of vectors $w_{k}$ and a sequence $p_{k}$ with the following properties:\\
($\alpha$) $w_{k}$ is positive and has norm one.\\
($\beta$) $\displaystyle{\lim \, p_{k} = + \infty}$ as $k \longrightarrow \infty$.\\
($\gamma$) There is a sequence of real numbers $\lambda_{k}$ such that:\\
(i) $\displaystyle{\liminf \lambda_{k} \geq 1}$ as $k \longrightarrow \infty$.\\
(ii) $\displaystyle{\lim \, z_{k} = 0}$ as $k \longrightarrow \infty$,
by setting $\displaystyle{z_{k} = \lambda_{k}w_{k} - A^{p_{k}}w_{k}}$.\\
{\bf Proof}\\
As a preliminary, we need the following simple consequence of the Hahn-Banach theorem. A positive form on $E$ is a form which is nonnegative on $\mathcal{C}$:\\
{\bf Lemma 2.8}\\
Let $u$ be any nonnegative vector. Then there is a continuous positive linear form $f$ on $E$ such that {\color{red}$f(u) > 0$}.\\
{\bf Proof of lemma}\\
$-\mathcal{C}$ is a closed convex set and $u \notin (-\mathcal{C})$. So there is an {\color{red}affine} form $f_{1}$ which is {\color{red}non-positive} on {\color{red}$-\mathcal{C}$} and {\color{red}positive} at $u$ ({\color{blue}{\bf [Bourbaki]}} ch. 11 $\S$ 3 proposition 4).\\
Let us define $f$ by\\
$\displaystyle{f_{1}(x) = f(x) + f_{1}(0)}$ \hfill {\bf {\color{blue} (2.2)}}\\
and check that it has the desired properties.\\
First, $\displaystyle{f(u) = f_{1}(u) - f_{1}(0)}$, where $f_{1}(u)$ is positive and $f_{1}(0)$ is not, thus $f(u) > 0$.\\
Now let $x \in \mathcal{C}$ and look at $\displaystyle{f_{1}(tx) = tf(x) + f_{1}(0)}$, which we know to be non-positive for all negative $t$, implying $f(x) \geq 0$. The lemma is proved.\\
Now for technicalities, we may assume that $A$ has an eigenvalue $e^{i\theta}$ where $\frac{\theta}{\pi}$ is irrational. Let $u + iv$ be a corresponding eigenvector. Here again we may and will assume $u$ nonnegative, and we write $v = v^{+} - v^{-}$ with $v^{+}$ and $v^{-}$ positive.\\ With $f$ given by the above lemma, we define an operator $B$ by\\
$\displaystyle{B(x) = \frac{f(x)}{f(u)}v^{+}}$ \hfill {\bf {\color{blue} (2.3)}}\\
{\bf Remark 2.9}\\
Note that $B$ is positive, compact (even of rank one) and maps $u$ to $v^{+}$.\\
Now let $p$ be any natural number such that $p\theta \in \,]0, \frac{\pi}{2}[ \, \mathrm{mod} \, 2\pi$; then we have:\\
$\displaystyle{A^{p}(u + iv) = e^{ip\theta}(u + iv)}$.
\hfill {\bf {\color{blue} (2.4)}}\\
Taking the real parts of the above equality, we get\\
$\displaystyle{A^{p}u = \cos(p\theta)u - \sin(p\theta)v}$ \hfill {\bf {\color{blue} (2.5)}}\\
so that\\
$\displaystyle{(A^{p} + \sin(p\theta)B)u = \cos(p\theta)u + \sin(p\theta)v^{-} \geq \cos(p\theta)u}$ \hfill {\bf {\color{blue} (2.6)}}\\
We apply (ii) of the Krein-Rutman theorem to the operator $\displaystyle{A^{p} + \sin(p\theta)B}$, the number $p$ of the statement of the theorem being here equal to one.\\
We conclude that there is a positive vector $x_{p}$ of norm one such that\\
$\displaystyle{(A^{p} + \sin(p\theta)B)x_{p} = \mu_{p}x_{p}}$ \hfill {\bf {\color{blue} (2.7)}}\\
with\\
$\displaystyle{\mu_{p} \geq \cos(p\theta)}$ \hfill {\bf {\color{blue} (2.8)}}\\
As $\displaystyle{\frac{\theta}{\pi}}$ is irrational, we can find a strictly increasing sequence $(p_{k})$ with\\
$\displaystyle{p_{k}\theta = \epsilon_{k} \, \mathrm{mod} \, 2\pi}$, $\epsilon_{k} > 0$, $\displaystyle{\lim \, \epsilon_{k} = 0}$ as $k \longrightarrow \infty$ \hfill {\bf {\color{blue} (2.9)}}\\
Setting $\displaystyle{w_{k} = x_{p_{k}}}$ and $\displaystyle{\lambda_{k} = \mu_{p_{k}}}$, we check ($\alpha$) and ($\beta$), and we have\\
$\displaystyle{z_{k} = \sin(p_{k}\theta)Bw_{k}}$ \hfill {\bf {\color{blue} (2.10)}}\\
so that\\
$\displaystyle{\mid\mid z_{k} \mid\mid \leq \epsilon_{k}\mid\mid B \mid\mid}$ \hfill {\bf {\color{blue} (2.11)}}\\
and by (2.8)\\
$\displaystyle{\lambda_{k} \geq \cos(p_{k}\theta)}$ \hfill {\bf {\color{blue} (2.12)}}\\
where the right-hand side has limit one as $k$ tends to infinity.
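The existence of the sequence $(p_{k})$ in (2.9) rests only on the irrationality of $\theta/\pi$. A brute-force search illustrates this; the value of $\theta$ below is an arbitrary illustrative choice, not one appearing in the text:

```python
import math

# Search for integers p with p*theta close to 0 modulo 2*pi from above,
# as in (2.9); theta = sqrt(2) is an illustrative irrational angle.
theta = math.sqrt(2)

best = math.inf
p_ks = []
for p in range(1, 20000):
    eps = (p * theta) % (2 * math.pi)
    if 0 < eps < best:
        best = eps
        p_ks.append(p)   # record each new, smaller epsilon_k

assert all(a < b for a, b in zip(p_ks, p_ks[1:]))  # strictly increasing p_k
assert best < 0.01                                 # epsilon_k shrinks towards 0
```

By equidistribution of the fractional parts of $p\theta/2\pi$, the recorded $\epsilon_{k}$ can be made as small as desired by enlarging the search range.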
Then properties ($\alpha$) to ($\gamma$) hold.\\
In this step, we consider the projector $P'$ with kernel $E''$ and range $E'$, and let $P'' = I - P'$; then we will show the following lemma:\\
{\bf Lemma 2.10}\\
$\displaystyle{\lim \, P''w_{k} = 0}$ as $k \longrightarrow \infty$.\\
{\bf Proof}\\
By property ($\gamma$) of the preceding step, we have $\displaystyle{\lambda_{k}w_{k} = A^{p_{k}}w_{k} + z_{k}}$, where $\lambda_{k}$ is bounded from below and $z_{k}$ converges to zero. So what we have to show is that $\displaystyle{P''A^{p_{k}}w_{k}}$ converges to zero.\\
Let $r'$ be a number satisfying $r < r' < 1$ (as the non-zero eigenvalues of $A$ are isolated, there is a number $r < 1$ such that the modulus of any eigenvalue of $A$ is either one or not larger than $r$).\\
Notice that\\
$\displaystyle{P''A^{p_{k}}w_{k} = A^{p_{k}}P''w_{k} = A''^{p_{k}}P''w_{k}}$ \hfill {\bf {\color{blue} (2.13)}}\\
By Gelfand's theorem, we know that the spectral radius of $A''$ is the limit of $\displaystyle{\mid\mid A''^{n} \mid\mid^{\frac{1}{n}}}$, so that we have, for $k$ large enough, the following inequality:\\
$\displaystyle{\mid\mid A''^{p_{k}}\mid\mid < r'^{p_{k}}}$ \hfill {\bf {\color{blue} (2.14)}}\\
Now, putting together (2.13) and (2.14), we get\\
$\displaystyle{\mid\mid P''A^{p_{k}}w_{k}\mid\mid \leq r'^{p_{k}} \mid\mid P'' \mid\mid}$ \hfill {\bf {\color{blue} (2.15)}}\\
As $r'$ is smaller than one, this ends the proof of the lemma.\\
{\bf Remark 2.11} (Extracting a convergent subsequence)\\
$\displaystyle{(P'w_{k})}$ is a bounded sequence in the finite-dimensional vector space $E'$. We can extract from it a convergent subsequence and, by the above lemma, the corresponding subsequence of the $w_{k}$'s converges to the same limit $w$.\\
As a limit of the $\displaystyle{P'w_{k}}$'s, $w$ belongs to $E'$.
As a limit of the $w_{k}$'s, it is positive and has norm one.\\
In this last step we examine the situation in $E'$; in particular, we derive the existence of a positive eigenvector in $E'$ from the following lemma:\\
{\bf Lemma 2.12}\\
Let $\mathcal{C}$ be a closed convex cone in a finite-dimensional vector space $F$. Assume $\mathcal{C}$ to be neither $\{0\}$ nor the whole space.\\
Let $T$ be a linear operator on $F$ which maps $\mathcal{C}$ into itself.\\
Then $T$ has an eigenvector belonging to $\mathcal{C}$.\\
{\bf Proof}\\
If $T$ maps some non-zero vector of $\mathcal{C}$ to $0$, we are through. If not, call $\mathbb{S}$ the unit sphere of some Euclidean metric on $F$ and $\mathcal{C}_{1}$ the intersection of $\mathbb{S}$ and $\mathcal{C}$.\\
$\mathcal{C}_{1}$ is homeomorphic to a closed ball of some space $\mathbb{R}^{k}$ (take for instance the stereographic projection from some point of $\mathbb{S}$ not belonging to $\mathcal{C}$; we get a compact convex set).\\
The mapping $\displaystyle{x \longrightarrow \frac{T(x)}{\mid\mid T(x) \mid\mid}}$ is continuous from $\mathcal{C}_{1}$ into itself. By Brouwer's theorem {\bf {\color{blue}[Brouwer]}}, it has a fixed point, and this is an eigenvector of $T$. The lemma is proved.\\
{\bf Summing up}\\
We have proved the existence of a positive eigenvector in $E'$. As has already been indicated in step 2, the corresponding eigenvalue is positive, the eigenvector being positive; it has modulus one, the eigenvector belonging to $E'$. It must therefore be one, the spectral radius of $A$.\\
{\bf References}\\
{\bf {\color{blue}[Alintissar et al]}} Alintissar, A., Intissar, A. and Intissar, J.K.: On dynamics of wage-price spiral and stagflation in some model economic systems, adsabs.harvard.edu/abs/2018arXiv181201707A\\
{\bf{\color{blue}[Ando et al]}} Ando, T. and Zerner, M.: Sur une valeur propre d'un opérateur, Commun. Math. Phys.
93, 123-139 (1984)\\
{\bf{\color{blue}[Berger et al]}} Berger, M. and Gostiaux, B.: Differential Geometry: Manifolds, Curves and Surfaces, volume 115 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1988.\\
{\bf{\color{blue}[Bidard et al]}} Bidard, C. and Zerner, M.: The Perron-Frobenius theorem in relative spectral theory, Mathematische Annalen, vol. 289 (1991), pp. 451-464.\\
{\bf{\color{blue}[Bourbaki]}} Bourbaki, N.: Eléments de mathématique, livre V, Espaces vectoriels topologiques, Hermann (1953)\\
{\bf{\color{blue}[Frobenius]}} Frobenius, G.F.: Über Matrizen aus nicht negativen Elementen, Sitzungsber. Kön. Preuss. Akad. Wiss. Berlin (1912), 456-477\\
{\bf{\color{blue}[Gantmacher1]}} Gantmacher, F.R.: The Theory of Matrices, Vol. 1, Chelsea, New York, 1959.\\
{\bf{\color{blue}[Gantmacher2]}} Gantmacher, F.R.: The Theory of Matrices, Vol. 2, Chelsea, New York, 1959.\\
{\bf{\color{blue}[Kato]}} Kato, T.: Perturbation Theory for Linear Operators, Springer, second edition (1980)\\
{\bf{\color{blue}[Krein-Rutman]}} Krein, M.G. and Rutman, M.A.: Linear operators leaving invariant a cone in a Banach space, Amer. Math. Soc. Transl. Ser. 1 10 (1950), 199-325 [originally Uspekhi Mat. Nauk 3 (1948), 3-95].\\
{\bf{\color{blue}[Intissar et al]}} Intissar, A. and Intissar, J.K.: A Complete Spectral Analysis of Generalized Gribov-Intissar's Operator in Bargmann Space, Complex Analysis and Operator Theory 13 (2019), 1481-1510\\
{\bf{\color{blue}[Jentzsch]}} Jentzsch, P.: Über Integralgleichungen mit positivem Kern, J. Reine Angew. Math. 141 (1912), 235-244.\\
{\bf{\color{blue}[Perron1]}} Perron, O.: Grundlagen für eine Theorie des Jacobischen Kettenbruchalgorithmus, Math. Ann. 63 (1907), 1-76.\\
{\bf{\color{blue}[Perron2]}} Perron, O.: Zur Theorie der Matrices, Math. Ann. 64 (1907), 248-263.\\
{\bf{\color{blue}[Schaefer]}} Schaefer, H.H.
: Banach Lattices and Positive Operators, Springer, Berlin/Heidelberg/New York (1974).\\
{\bf{\color{blue}[Sternberg]}} Sternberg, S.: The Perron-Frobenius theorem, Lecture 12,\\ {\color{blue}http://www.math.harvard.edu/library/sternberg/slides/1180912pf.pdf}\\
{\bf{\color{blue}[Zerner]}} Zerner, M.: Quelques propriétés spectrales des opérateurs positifs, Journal of Functional Analysis, 72 (1987), 381-417\\ \end{document}
\begin{document} \begin{abstract} We give a new approach to handling hypergraph regularity. This approach allows for vertex-by-vertex embedding into regular partitions of hypergraphs, and generalises to regular partitions of sparse hypergraphs. We also prove a corresponding sparse hypergraph regularity lemma. \end{abstract} \title{Regularity inheritance in hypergraphs} \section{Introduction} The regularity method is a rich topic in extremal combinatorics which has some remarkable applications. Its roots lie in Szemerédi's proof that dense subsets of the natural numbers contain arbitrarily long arithmetic progressions~\cite{szemeredi1975sets}, and since then many more applications have been found. The method consists of a \emph{regularity lemma} which states that any large structure can be decomposed into pieces which have random-like behaviour, and a \emph{counting lemma} which states that a random-like piece has approximately the same number of small substructures as an analogous genuinely random piece has in expectation. This paper primarily concerns the counting lemma in the setting of sparse hypergraphs. With a precise formulation of the aforementioned lemmas for graphs one can prove the \emph{triangle removal lemma}, which states that graphs on $n$ vertices that contain at most $o(n^3)$ triangles may be made triangle-free by removing at most $o(n^2)$ edges. From this result one can deduce Roth's theorem~\cite{roth1953}, that dense subsets of the natural numbers contain an arithmetic progression of length three. There are at least three natural and highly fruitful directions in which to generalise the above results: to larger subgraphs than the triangle, to hypergraphs, and to sparse host graphs. 
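The passage from the triangle removal lemma to Roth's theorem goes through the standard Ruzsa--Szemerédi reduction, which encodes three-term arithmetic progressions as triangles. The following sketch makes the correspondence concrete; the set $S$ below is an illustrative progression-free set of our choosing.

```python
# Ruzsa-Szemeredi reduction (sketch): for S a subset of [N], build a tripartite
# graph on X = [N], Y = [2N], Z = [3N] with x~y iff y-x in S, y~z iff z-y in S,
# and x~z iff (z-x)/2 in S.  A triangle then gives s1 + s2 = 2*s3 with each
# s_i in S, i.e. a three-term AP in S.
N = 30
S = {1, 2, 4, 8, 16}   # illustrative set with no nontrivial three-term AP

triangles = [(x, x + s1, x + s1 + s2)
             for x in range(N) for s1 in S for s2 in S
             if (s1 + s2) % 2 == 0 and (s1 + s2) // 2 in S]

# Since S is AP-free, every triangle is "trivial", i.e. has s1 == s2 == s3:
assert all(y - x == z - y for (x, y, z) in triangles)
assert len(triangles) == N * len(S)   # one edge-disjoint triangle per (x, s)
```

A dense AP-free $S$ thus yields a graph with many edge-disjoint triangles but few triangles in total, which is exactly the situation the removal lemma forbids for large $N$.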
A counting lemma gives sufficient pseudorandomness conditions for the existence of a (large number of) triangles in an $n$-vertex graph, and one could generalise this to larger subgraphs (in particular those whose size grows with $n$), such as a collection of $n/3$ vertex-disjoint triangles or the square of a Hamilton cycle. A key result along this line of thought is the blow-up lemma of Koml{\'o}s, S{\'a}rk{\"o}zy, and Szemer{\'e}di~\cite{KSSblow}. Another direction is to generalise the regularity and counting lemmas to hypergraphs, e.g. to prove a hypergraph removal lemma. As observed by Solymosi~\cite{solymosi2004}, a suitable hypergraph version of the above triangle removal lemma implies a multidimensional generalisation of Szemerédi's theorem. A third direction relates to sparse host graphs. Going in this direction, one can prove a \emph{relative removal lemma} for hypergraphs which are a subgraph of a sparse, highly pseudorandom \emph{majorising hypergraph}, and with it prove the Green--Tao theorem~\cite{GTprimes} which states that dense subsets of the primes contain arbitrarily long arithmetic progressions. This was recently done by Conlon, Fox, and Zhao~\cite{CFZrelative}, whose methods require weaker pseudorandomness properties of the primes than were originally required by Green and Tao. Combinations of these generalisations have also been developed, such as the hypergraph blow-up lemma of Keevash~\cite{keevash2011hypergraph}, and the blow-up lemma for sparse graphs of Allen, Böttcher, Hàn, Kohayakawa and Person~\cite{ABHKPblow}. The purpose of this paper is to develop an embedding method which generalises the standard one for graphs in a combination of all three directions. That is, we develop a tool for counting in regular subgraphs of sparse, pseudorandom hypergraphs that can be used to embed bounded-degree hypergraphs whose size grows with the number of vertices of the host hypergraph. 
The main advantage of our approach is that it constructs an embedding vertex-by-vertex in a way that generalises a well-studied approach to the counting lemma (and related embedding results) in graphs. In order to have applications of our embedding method, we also state and prove a sparse hypergraph regularity lemma. This is not especially novel, but to the best of our knowledge it was not explicitly in the literature (though it was well known how to prove such a thing). \subsection{Regularity in graphs, sparse graphs, and hypergraphs} In graphs there are several different notions of regularity, and when the graph in question is \emph{dense}, i.e.\ has $n$ vertices and $\Omega(n^2)$ edges, it is easy to show that many of these notions are essentially equivalent~\cite{T87random,Tpseudo,CGWquasi}. The original definition of regularity by Szemerédi~\cite{Sreg} for a pair $(X, Y)$ of disjoint sets of vertices in a graph roughly states that for large enough subsets $X'\subseteq X$ and $Y'\subseteq Y$, the density of the bipartite subgraphs induced by $(X,Y)$ and by $(X',Y')$ are approximately equal. An application of the Cauchy--Schwarz inequality shows that this is equivalent to containing approximately the minimum possible number of four-cycles crossing $(X, Y)$ given this density. These definitions can be extended to sparse graphs and hypergraphs, but the equivalence between different forms is a much more complex subject in these settings. In this paper we use a definition of regularity that is a generalisation of four-cycle minimality known as \emph{octahedron minimality}. One way of extending the large and influential body of work on dense, regular graphs to sparse graphs (those with $n$ vertices and $o(n^2)$ edges) is to consider a suitably well-behaved \emph{majorising graph} $\Gamma$ and study subgraphs $G\subseteq\Gamma$ which are in some way regular relative to $\Gamma$. 
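The four-cycle characterisation mentioned above can be made concrete: two applications of Cauchy--Schwarz show that the number of homomorphic four-cycles across a bipartite pair of density $d$ is at least $d^4|X|^2|Y|^2$, and regularity corresponds to near-equality. A quick numerical check on an arbitrary random bipartite graph:

```python
import numpy as np

# Homomorphism count of the four-cycle across a bipartite pair (X, Y),
# compared with the Cauchy-Schwarz minimum d^4 |X|^2 |Y|^2.
rng = np.random.default_rng(0)
nx, ny, p = 40, 50, 0.3
A = (rng.random((nx, ny)) < p).astype(float)   # bipartite adjacency matrix
d = A.sum() / (nx * ny)                        # edge density of the pair

M = A @ A.T                                    # codegree matrix on X
c4 = np.trace(M @ M)                           # labelled homomorphic 4-cycles
assert c4 >= d**4 * nx**2 * ny**2              # four-cycle minimality bound
```

The bound counts homomorphisms (degenerate four-cycles included), which is why it holds with no error term; octahedron minimality generalises this count to higher uniformities.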
Given proper definitions of these concepts, one obtains the dense setting by taking $\Gamma$ to be the complete graph. Though the behaviour of $\Gamma$ and of $G$ relative to $\Gamma$ are both types of pseudorandomness, from now on we use the term \emph{regularity} to refer to how edges of graphs (and hypergraphs) $G$ are distributed inside a majorising graph (or hypergraph) $\Gamma$, and reserve the term \emph{pseudorandomness} for the behaviour of $\Gamma$. For graphs, a pseudorandomness condition known as \emph{jumbledness}, which controls the number of edges between pairs of sets in the graph, is somewhat standard. Jumbledness is quite a strong condition, demanding control over edges between very small sets of vertices. For combinatorial applications, one can usually obtain such control. However, it has recently been observed that for many applications in number theory (in which one wants to work with a graph derived from a number theoretic object, such as the set of primes) jumbledness either fails to be true or at best requires the assumption of commonly believed but unproved conjectures. This motivates the use of a weaker notion of pseudorandomness in terms of small subgraph counts, also known as linear forms conditions, which follow from jumbledness and which one also can unconditionally obtain in the number theoretic applications. Briefly, this notion of pseudorandomness asserts that when we count small subgraphs in $\Gamma$, we obtain the same answer as if $\Gamma$ were truly random, up to a small constant relative error. We should point out that qualitatively, one cannot further weaken the pseudorandomness assumption: having a counting lemma for $G\subseteq\Gamma$ in particular implies that the number of small subgraphs in $\Gamma$ is as one would expect in a truly random structure. 
Quantitatively the story is different: in order to count subgraphs of $G$ of a given size, we need to assume counting of rather larger subgraphs in $\Gamma$ and with much smaller error terms. We do not believe our quantitative bounds are optimal, and it would be interesting to improve them. We made no attempt to optimise our proof, preferring clarity: we do not see any reason to believe that the proof strategy can give optimal quantitative bounds. The first main contribution of this paper is to define another notion of pseudorandomness for hypergraphs (and more general objects) $\Gamma$ called \emph{typically hereditary counting} (THC). We show that for a hypergraph $H$, counting conditions which depend on the maximum degree of $H$, but not on the number of vertices of $H$, imply THC strong enough for embedding $H$ into $\Gamma$ and into regular subgraphs of $\Gamma$. We also show that THC holds with high probability in random hypergraphs which are not too sparse. As mentioned above, there is a standard strategy which allows one to embed a (potentially large) bounded degree graph $H$ into a regular partition of a dense graph $G$. This strategy does not directly generalise to sparse graphs, or to hypergraphs of uniformity greater than $2$ whether dense or not. However a consequence of our results is that in all of these settings one obtains the THC property, and this does allow for a simple generalisation of the standard graph strategy, as we illustrate in Theorem~\ref{thm:THCexample}. This is particularly valuable because the THC property has only one error parameter $\varepsilon$, which does not change during the embedding process. In contrast the use of hypergraph regularity produces at least two error parameters (with very different sizes) and the error parameters tend to proliferate during the embedding process.
Given a rough understanding of our regularity and pseudorandomness concepts, a useful model to bear in mind is that a graph $G$ composed of unions of regular pairs shares many properties with a random model where for each pair, edges between the sets appear independently at random with probability equal to the density of the pair. A counting lemma generally shows that the number of copies of some small subgraph $H$ in $G$ is close to the expected such number in this analogous random model, and we will informally refer to the `expected number' of copies of $H$ in regular graphs $G$ even though the quantity is entirely deterministic. When considering a graph $G$ which is a subgraph of a pseudorandom majorising graph $\Gamma$, the analogous random model is to let each edge of $\Gamma$ be present in $G$ independently at random with a probability related to the relative density of $G$ in $\Gamma$. We give a hypergraph counting lemma (Theorem~\ref{thm:counting}) in the setting sketched above which is slightly stronger than the standard dense hypergraph counting lemma, and is comparable to what one obtains for sparse graphs with the methods of Conlon, Fox, and Zhao~\cite{CFZrelative}. Importantly, our methods also allow us to prove a \emph{one-sided counting lemma} (or embedding result) for bounded-degree hypergraphs $H$, where the required pseudorandomness of $\Gamma$ does not grow with the number of vertices of $H$, see Section~\ref{sec:countandembed}. To prove these counting and embedding results our main technique is \emph{regularity inheritance}, which has recently been successfully applied to embedding problems in sparse graphs~\cite{GKRSinherit,CFZextremal,ABSSregularity,ABHKPblow}. The basic idea is that given a pseudorandom graph $\Gamma$ and regular subgraph $G$, a typical vertex should have the property that its neighbourhood in $\Gamma$ is similarly pseudorandom, and its neighbourhood in $G$ is a similarly regular subgraph. 
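For a tripartite graph the random model described above predicts roughly $n^3 d_{12}d_{13}d_{23}$ triangles. A genuinely random tripartite graph (an illustrative stand-in for a union of regular pairs) matches this `expected number' closely:

```python
import numpy as np

# Triangle count in a random tripartite graph versus the 'expected number'
# n^3 * d_AB * d_BC * d_AC predicted by the analogous random model.
rng = np.random.default_rng(1)
n, p = 60, 0.5
AB = (rng.random((n, n)) < p).astype(float)   # bipartite pieces of the graph
BC = (rng.random((n, n)) < p).astype(float)
AC = (rng.random((n, n)) < p).astype(float)

triangles = np.einsum('ab,bc,ac->', AB, BC, AC)
expected = n**3 * AB.mean() * BC.mean() * AC.mean()
assert abs(triangles - expected) / expected < 0.1   # within 10% relative error
```

A counting lemma asserts that the same approximation holds deterministically whenever the three pairs are sufficiently regular.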
We give a form of this inheritance in hypergraphs where the pseudorandomness of $\Gamma$ is controlled by small subgraph counts (which THC-graphs are able to satisfy), and for our octahedron-minimality version of regularity. An outline of how to prove counting and embedding results given suitable regularity inheritance lemmas is quite straightforward. One assumes pseudorandomness and regularity hypotheses that imply, via regularity inheritance, that similar conditions hold in the neighbourhood of a typical vertex and, by iterating this, construct an embedding vertex-by-vertex. To make this work in hypergraphs is technically difficult because usual forms of (strong) hypergraph regularity require one to consider \emph{regular complexes} where edges of size $k$ are regular with respect to edges of size $(k-1)$, but the error parameter in this regularity is much larger than the density of the $(k-1)$-edges. The second main contribution of this paper is a formal definition of certain pseudorandomness and regularity conditions which we call a \emph{good partial embedding} (GPE) and several lemmas which prove that the above sketch can be rigorously applied to embed hypergraphs with this definition. It may not be clear from this sketch what the difference between GPE and THC is (and why we are not doing the same thing twice). We will discuss this in the concluding remarks. \subsection{Related work} Our methods can be considered as an extension to hypergraphs of certain ideas present in~\cite{ABSSregularity,ABHKPblow} for sparse graphs. Amongst other things, in these papers the authors develop regularity inheritance lemmas and apply them to control good partial embeddings in sparse graphs. The main idea is to develop techniques for working with graphs $G\subseteq\Gamma$ where the error parameter $\varepsilon$ in a regularity condition for $G$ relative to $\Gamma$ is much larger than the overall density of $\Gamma$. 
It is natural to draw on such techniques when working with hypergraphs because, even in the case of dense hypergraphs, one is forced to consider situations where regularity error parameters are larger than overall densities. It may be useful to compare our setting to that of Conlon, Fox, and Zhao~\cite{CFZrelative}, where the pseudorandomness of the majorising hypergraph $\Gamma$ is measured by linear forms conditions, which state that the number of certain subgraphs of $\Gamma$ is close to their expectation, and the regularity condition for subgraphs $G\subseteq\Gamma$ is known as \emph{weak regularity}. In~\cite{CFZrelative} they prove that to count copies of some $H$ in a weak-regular $G\subseteq\Gamma$ it suffices to be able to count subgraphs of at most $2v(H)$ vertices in $\Gamma$. There are two important differences between our methods and those of Conlon, Fox, and Zhao. Firstly, if one only needs lower bounds on the number of copies of $H$, our pseudorandomness condition on $\Gamma$ follows from counting conditions that are given in terms of the maximum degree of $H$ and do not grow with $v(H)$. Secondly, we work with an octahedron-minimality regularity that is much stronger than their weak regularity. Though it is significantly more technical to work with, the extra strength it has can be desirable in applications. That our pseudorandomness conditions for embedding some $H$ do not grow with $v(H)$ means we can find embeddings of bounded-degree $H$ into suitable $G$ where $v(H)$ grows with $v(G)$. This setting is well-studied in graphs and hypergraphs, leading notably to the aforementioned sparse blow-up lemma~\cite{ABHKPblow} for graphs, and the hypergraph blow-up lemma of Keevash~\cite{keevash2011hypergraph}. 
Keevash's result allows one to embed $n$-vertex hypergraphs $H$ of bounded degree in suitably regular $n$-vertex $G$, but works only in the dense case (where $\Gamma$ is the complete graph), and requires a regularity condition that results from the \emph{regular approximation lemma}~\cite{RSreg2} which has a rather different flavour than the octahedron-minimality definition which we use. Broadly speaking, our choice of regularity results in a setting that resembles sparse regular graphs, and our methods are accordingly inspired by that setting, while regular approximation allows one to work in a setting that resembles dense regular graphs and select techniques accordingly. There is a great deal of technical difficulty in making this precise, however. We expect that the methods developed in this paper will lead to a hypergraph blow-up lemma which avoids the technicalities of regular approximation and which applies also in the sparse setting. This is work in progress. \subsection{Organisation} The rest of this paper is outlined as follows. In the next section we state our main results after giving the necessary preliminary definitions. Section~\ref{sec:sketch} contains a detailed sketch of a counting lemma for dense graphs, embellished by comparisons to the significantly more general and technical setting of sparse hypergraphs. These comparisons motivate many of the rather technical concepts defined in this paper. In Section~\ref{sec:Gacount} we deal entirely with the majorising hypergraph $\Gamma$ and prove that our pseudorandomness property THC follows from certain counting conditions, and that it typically holds in suitable $\Gamma$ obtained from a random hypergraph. In Section~\ref{sec:reg} we state and prove a sparse hypergraph regularity lemma, and explain why its output matches what we need for our methods here. 
We then deduce counting and embedding results for sparse hypergraphs of a rather standard form from our more general results for good partial embeddings. In Section~\ref{sec:count} we prove the more general results for counting and embedding in good partial embeddings. In Section~\ref{sec:CS} we state and prove a range of results related to the Cauchy--Schwarz inequality that we use throughout the paper, and in Section~\ref{sec:inherit} we apply these results to prove Lemma~\ref{lem:k-inherit}. In the final section we give some concluding remarks and prove a sample application, Theorem~\ref{thm:THCexample}, of our methods. \section{Main concepts and results}\label{sec:main} Before we can state our main results we need some definitions, and here we give a word of warning that our usage of the term `hypergraph' differs slightly from what is standard. The usual definition of a \emph{hypergraph} consists of a vertex set $V$ and edge set $E$ containing subsets of $V$. Normally one is interested in \emph{$k$-uniform hypergraphs} which are hypergraphs with the additional condition that $E$ only contains subsets of $V$ of size exactly $k$. When considering hypergraph regularity, one is often forced to consider \emph{$k$-complexes} which correspond to a union of $\ell$-uniform hypergraphs for $\ell\in[k]$ on the same vertex set with the additional property that the edge set $E$ of a complex is down-closed: if $f\in E$ and $e\subseteq f$ then $e\in E$. We prefer to give alternative definitions to better separate the roles of complexes and hypergraphs in our methods. \subsection{Complexes, weighted hypergraphs, and homomorphisms} The main topics of this paper are counting and embedding in hypergraphs, and here we give precise definitions of these terms, and of the phrase `number of copies' that we used informally in the introduction. 
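The down-closure condition on a complex is easy to check computationally. The following minimal sketch (function and variable names are ours, purely illustrative) verifies it on a small example.

```python
from itertools import combinations

def is_complex(edges):
    """Check the down-closure property: every subset of an edge is an edge.

    `edges` is a collection of frozensets of vertices; a complex must in
    particular contain the empty edge."""
    edge_set = set(edges)
    for f in edge_set:
        for r in range(len(f)):
            for e in combinations(f, r):
                if frozenset(e) not in edge_set:
                    return False
    return True

# The down-closure of a single triangle on {1, 2, 3} is a 2-complex...
triangle = [frozenset(s) for r in range(4) for s in combinations({1, 2, 3}, r)]
assert is_complex(triangle)

# ...but the three 2-edges alone (a "uniform" edge set) are not down-closed.
assert not is_complex([frozenset(s) for s in combinations({1, 2, 3}, 2)])
```

The second assertion illustrates the distinction drawn above: a $k$-uniform hypergraph is not itself a complex until one passes to its down-closure.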
We are primarily interested in finding homomorphisms (which we will define) from a complex $H$ (as above) to a complex $\mathcal{G}$ that in applications is usually in some way `inspired by' a uniform hypergraph. For this reason we exclusively use \emph{complex} to refer to the object $H$ whose vertices form the domain of the homomorphism, and \emph{hypergraph} to refer to the `host graph' $\mathcal{G}$ whose vertices form the image of the homomorphism. The above definition of complex is standard, and if for some $k\ge 1$, the complex $H$ contains no edges of size greater than $k$, we say $H$ is a $k$-complex (we do not insist that $H$ contains edges of size exactly $k$). Contrasting with usual `uniform' usage, our definition of hypergraph allows for edges of each size from $0$ upwards, and we actually allow \emph{weighted hypergraphs}, but we are not interested in weights on $H$, and so do not refer to weighted complexes. It is convenient to avoid the assumption that our (weighted) hypergraphs $\mathcal{G}$ are down-closed\footnote{It is not entirely obvious what the weighted generalisation of down-closed should be, which is one good reason for avoiding the notion.}. As we will see when we come to the definition of a homomorphism, an edge of $\mathcal{G}$ whose subsets are not all contained in $\mathcal{G}$ cannot play a role in any homomorphisms from $H$ to $\mathcal{G}$, but it will nevertheless be convenient in the proof to allow such edges. Given a vertex set $V$, a \emph{weighted hypergraph} is a function from the power set of $V$ to the non-negative reals. We think of a normal, unweighted hypergraph as being equivalent to its characteristic function, but remind the reader that the more usual setting for the hypergraph regularity method is to embed $k$-complexes into $k$-complexes, so this characteristic function may be nonzero on edges of sizes $0,1,\dotsc,k$. 
Though it may seem odd to care about the weight of the empty edge, and in applications it will often simply be $1$ at the start of the proof, it turns out to be useful during the proof. The extra generality of weights turns out not to complicate our methods, and rather to simplify the notation. It is not essential to our approach; if one starts with unweighted hypergraphs, the functions appearing throughout will take only values in $\{0,1\}$; that is, they are unweighted hypergraphs. We use the letter $\Gamma$ and calligraphic letters $\mathcal{G}$, $\mathcal{H}$ for weighted hypergraphs, and the corresponding lower case letters $\gamma$, $g$, and $h$ for the weight functions. A \emph{homomorphism} $\phi$ from a complex $H$ to a weighted hypergraph $\mathcal{G}$ is a map $\phi:V(H)\to V(\mathcal{G})$ such that $\abs[\big]{\phi(e)}=\abs{e}$ for each $e\in H$, and the \emph{weight} of $\phi$ is \[ \mathcal{G}(\phi) := \prod_{e\in H}g\big(\phi(e)\big)\,. \] Note that this product does run over $e=\emptyset$ and edges of size $1$ in $H$. If $\mathcal{G}$ is an unweighted hypergraph, then the weight of $\phi$ is either $0$ or $1$, taking the latter value if and only if $\phi(e)$ is an edge of $\mathcal{G}$ (in the usual unweighted sense) for each $e\in H$, including edges of size one (vertices), and the empty edge (the reason for including $e=\emptyset$ becomes clear later). In other words, this is if and only if $\phi$ is a homomorphism according to the usual unweighted definition from $H$ to $\mathcal{G}$. We will be interested in summing the weights of homomorphisms, which is thus equivalent for unweighted hypergraphs to counting homomorphisms by the usual definition. Slightly abusing terminology for the sake of avoiding unwieldy phrases, we will talk about `counting homomorphisms' or `the number of homomorphisms' when what we really mean is `the sum of weights of homomorphisms'.
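The weight of a single homomorphism is a direct product over edges, including the empty edge and singletons; the following sketch (names and weights are hypothetical, chosen for illustration) shows how the weight collapses to $0$ or $1$ in the unweighted case.

```python
from math import prod

def hom_weight(H_edges, g, phi):
    """Weight of a homomorphism phi: the product of g over the images of all
    edges of H, including the empty edge and the singletons."""
    return prod(g(frozenset(phi[x] for x in e)) for e in H_edges)

# H: the down-closure of a single 2-edge {x, y}.
H = [frozenset(), frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'y'})]

# An unweighted (0/1) host in which the 2-edge {a, b} is absent:
present = {frozenset(), frozenset({'a'}), frozenset({'b'})}
g = lambda e: 1.0 if e in present else 0.0
assert hom_weight(H, g, {'x': 'a', 'y': 'b'}) == 0.0

# A weighted host giving every 2-edge weight 1/2 and all smaller edges weight 1:
g2 = lambda e: 0.5 if len(e) == 2 else 1.0
assert hom_weight(H, g2, {'x': 'a', 'y': 'b'}) == 0.5
```

Summing `hom_weight` over all maps $\phi$ is exactly the `counting of homomorphisms' referred to above.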
Bearing in mind that our weighted hypergraphs are inspired by $k$-uniform hypergraphs, we wish to consider weighted hypergraphs which contain edges of size $0,1,\dotsc, k$, but not of size $k+1$. If the weight function is to generalise the indicator function for edges in the unweighted setting then we should say that $\mathcal{G}$ is a $k$-graph to mean that $g(e)=0$ for any edge $e$ of size at least $k+1$. We prefer an alternative definition for convenience of notation. If one is interested in weights in $\mathcal{G}$ of edges up to size $k$, one can ask for a homomorphism from a $k$-complex into $\mathcal{G}$, which naturally excludes any edges of size at least $k+1$. It is more convenient for our purposes to say that $\mathcal{G}$ is a \emph{$k$-graph} to mean that $g(e)=1$ for all edges $e$ of size at least $k+1$, so that such edges do not affect the weight of any homomorphisms into $\mathcal{G}$. This affords a certain amount of flexibility in the homomorphism counting methods we develop. For example, let $H$ be a $(k+1)$-simplex (the down-closure of a single edge of size $k+1$), and $H'$ be obtained from $H$ by removing the edge of size $k+1$. If $\mathcal{G}$ is a $k$-graph then homomorphisms from $H$ and $H'$ to $\mathcal{G}$ receive the same weight and we do not need to distinguish between them. We are usually not interested in counting general homomorphisms from $H$ to $\mathcal{G}$; for simplicity we reduce to a \emph{partite setting} where we have identified special image sets in $V(\mathcal{G})$ for each vertex of $H$. More formally, for the partite setting we will have a complex $H$ on vertex set $X$, a $k$-graph $\mathcal{G}$ on vertex set $V$, a partition of $X$ into disjoint sets $\{X_j\}_{j\in J}$ indexed by $J$, and a partition of $V$ into disjoint sets $\{V_j\}_{j\in J}$ indexed by $J$. The sets $X_j$ and $V_j$ are called \emph{parts}. 
We say a set of vertices (e.g.\ in $X$) is \emph{crossing}, or \emph{partite}, if it contains at most one vertex from each part. As a shorthand, we say that $H$, $\mathcal{G}$ are $J$-partite to mean we have this setting: partitions of $V(H)$ and $V(\mathcal{G})$ indexed by $J$. If only the number of indices matters, we sometimes write e.g.~$k$-partite to mean $J$-partite for some set $J$ of size $k$. Given this partite setting, a \emph{partite homomorphism} from $H$ to $\mathcal{G}$ is a homomorphism from $H$ to $\mathcal{G}$ that maps each $X_j$ into $V_j$. That is, given an index set $J$ and partitions of $X$ and $V$ indexed by $J$, we consider those homomorphisms whose underlying maps from $X$ to $V$ `respect' the partitions. Given $x\in X_j$ we sometimes write $V_x$ for the part $V_j$ into which we intend to embed $x$; and for a crossing subset $e$ of $X$ we write $V_e=\prod_{x\in e}V_x$ for the collection of crossing $\abs{e}$-sets with vertices in $\bigcup_{x\in e}V_x$. Given the partite, weighted setup above we write $\mathcal{G}(H)$ for the expected weight of a uniformly random partite homomorphism from $H$ to $\mathcal{G}$, that is, the normalised sum over all partite homomorphisms $\phi$ from $H$ to $\mathcal{G}$ of the weight of $\phi$, \[ \mathcal{G}(H) := \Ex[\Big]{\prod_{e\in H}g\big(\phi(e)\big)} = \Big(\prod_{j\in J}\abs{V_j}^{-\abs{X_j}}\Big)\sum_{\phi}\prod_{e\in H}g\big(\phi(e)\big)\,. \] If $\mathcal{G}$ is constant on the sets $V_e$ for crossing $e\subseteq V(H)$, then we obtain $\mathcal{G}(H)=\mathcal{G}(\phi)$ for any partite homomorphism $\phi:H\to\mathcal{G}$, and a counting lemma states that $\mathcal{G}(H)$ is close to this `expected value' where the constants taken on the sets $V_e$ are (close to) the density of $\mathcal{G}$ on the appropriate $V_e$. In this paper we will primarily work with partite homomorphisms which map exactly one vertex of $H$ into each part of $\mathcal{G}$.
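The normalised sum defining $\mathcal{G}(H)$ can be evaluated by brute force on small instances; the following sketch (hypothetical names and weights) checks that when $\mathcal{G}$ is constant on the crossing $2$-edges, $\mathcal{G}(H)$ is exactly that constant.

```python
from itertools import product
from math import prod

def partite_count(H_edges, parts_H, parts_G, g):
    """G(H): the average, over all partite maps phi sending each X_j into V_j,
    of the homomorphism weight prod_{e in H} g(phi(e))."""
    X = [x for Xj in parts_H for x in Xj]
    choices = [parts_G[j] for j, Xj in enumerate(parts_H) for _ in Xj]
    n_maps = prod(len(c) for c in choices)
    total = sum(
        prod(g(frozenset(phi[x] for x in e)) for e in H_edges)
        for image in product(*choices)
        for phi in [dict(zip(X, image))]
    )
    return total / n_maps

# H: the down-closure of a single crossing 2-edge {x, y}; two parts each side.
H = [frozenset(), frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'y'})]
parts_H = [['x'], ['y']]
parts_G = [['a', 'b'], ['c', 'd']]

# G constant with weight d = 1/2 on crossing 2-edges and weight 1 elsewhere:
g = lambda e: 0.5 if len(e) == 2 else 1.0
assert partite_count(H, parts_H, parts_G, g) == 0.5
```

For non-constant $g$ the same routine computes the left-hand side of a counting lemma, whose right-hand side is the corresponding product of densities.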
We reduce the general setting to this one-vertex-per-part setting by the following somewhat standard `copying process'. \begin{definition}[Standard construction]\label{def:standard} Given an index set $J$, a $k$-complex $H$ with vertex set $X$ partitioned into $\{X_j\}_{j\in J}$ and a $k$-graph $\mathcal{G}$ with vertex set $V$ partitioned into $\{V_j\}_{j\in J}$, the \emph{standard construction} is as follows. Let $\mathcal{G}'$ be an $X$-partite $k$-graph with vertex sets $\{V'_x\}_{x\in X}$ where for each $x\in X$, the set $V'_x$ is a copy of the set $V_j$ such that $x\in X_j$, and where for each set $f\subseteq V(H)$ and each edge $e\in V'_f$ we define \[g'(e):=\begin{cases} 1 & \text{if $f\not\in H$}\,,\\ g(e') & \text{if $f\in H$}\,,\end{cases}\] where $e'$ is the natural projection of $e$ to $V(\mathcal{G})$. \end{definition} This construction defines a new $k$-graph $\mathcal{G}'$ (together with a partition of its vertices indexed by $X$) whose vertices are all copies of vertices in $\mathcal{G}$, with weights given precisely so that $J$-partite homomorphism counts from $H$ to $\mathcal{G}$ correspond to $X$-partite homomorphism counts from $H$ to $\mathcal{G}'$. One is forced to consider $H$ as $J$-partite for the former counts, and $X$-partite (with parts of size $1$) for the latter. That is, for $f\not\in H$ the edges in $V'_f$ all have weight one in $\mathcal{G}'$, so for each partite map $\phi:V(H)\to V(\mathcal{G}')$ we have \[\mathcal{G}(\phi)=\prod_{f\in H}g\big(\phi(f)\big)=\prod_{f\subseteq V(H)}g'\big(\phi(f)\big)=\mathcal{G}'(\phi)\,,\] where we abuse notation by identifying $\phi$ with its natural projection onto $V(\mathcal{G})$. \subsection{Density and link graphs} The notation for weighted hypergraphs gives us a rather compact way of expressing the `expected number' of homomorphisms from $H$ to $\mathcal{G}$ in the partite setting.
If $H$ and $\mathcal{G}$ are $J$-partite, and for $f\subseteq J$ we have constants $d(f)$ which represent the average weight $\mathcal{G}$ gives to edges in $V_f$, we can reuse the notation for $k$-graphs to represent the product of densities that forms the expected value of $\mathcal{G}(\phi)$. More formally, we have a $k$-graph $\mathcal{D}$ on vertex set $J$ whose weight function $d$ maps $f\mapsto d(f)$. If $\mathcal{G}$ is constant on each $V_f$, then (trivially) we have $\mathcal{G}(H)=\mathcal{D}(H)$. More generally, if $\mathcal{G}$ is not constant, but the edges are well-distributed (in a sense we will make precise later) and the density on each $V_f$ is about $d(f)$, we will say $\mathcal{D}$ is \emph{a density graph} for $\mathcal{G}$. Note that we do not insist that densities are given exactly by $\mathcal{D}$ (we allow a small error which we will specify later) and hence $\mathcal{D}$ is not given uniquely by $\mathcal{G}$. This turns out to be convenient for notation. Our definition of weighted hypergraph, in which the empty set is given a weight, is not always convenient. We will see that we cannot necessarily keep control of the weight of the empty set, and as a result we have to scale explicitly by it in many formulae. We will usually have $\mathcal{G}$ and $\mathcal{D}$ as above, except that $g(\emptyset)\ne d(\emptyset)$, and so we take care to scale by these values. In the model situation where $\mathcal{G}$ is constant on each $V_f$ we would have $\mathcal{G}(H)/g(\emptyset)=\mathcal{D}(H)/d(\emptyset)$. This rather formal interpretation of our notation does serve a purpose: we will use $g(\emptyset)$ for keeping track of the embedded weight in a partial embedding, for which we would otherwise have to invent further notation. The present choice of notation also avoids frequently having to explicitly exclude $\emptyset$ as a subset of some index set throughout the argument.
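To spell out the model computation behind this scaling (with the natural identification of edges of $H$ with their index sets): if $g$ equals $d(f)$ on each $V_f$ with $f\ne\emptyset$, then every partite homomorphism $\phi$ has the same weight, so

```latex
\mathcal{G}(H)
  \;=\; g(\emptyset)\prod_{\emptyset\ne e\in H} d(e)
  \;=\; \frac{g(\emptyset)}{d(\emptyset)}\,d(\emptyset)\prod_{\emptyset\ne e\in H} d(e)
  \;=\; \frac{g(\emptyset)}{d(\emptyset)}\,\mathcal{D}(H)\,,
```

and dividing through by $g(\emptyset)$ gives the stated identity $\mathcal{G}(H)/g(\emptyset)=\mathcal{D}(H)/d(\emptyset)$.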
Before stating our results we also require a definition of the \emph{link graph} of a vertex $v$ in a weighted hypergraph $\mathcal{G}$, which corresponds to our notion of the neighbourhood of $v$ in $\mathcal{G}$. Let $J$ be an index set, $i\in J$, and let $\mathcal{G}$ be a hypergraph with vertex sets $\{V_j\}_{j\in J}$. For a vertex $v\in V_i$, let $\mathcal{G}_{v}$ be the graph on $\{V_j\}_{j\in J\setminus\{i\}}$ with weight function $g_v$ defined as follows. For $f\subseteq J\setminus\{i\}$ and $e\in V_f$, we set \[ g_v(e) := g(e)\cdot g(v,e)\,. \] Note that we write $g(v,e)$ for the more cumbersome $g\big(\{v\}\cup e\big)$ and we do allow $e=\emptyset$ in this definition. The point of this definition is that a partite homomorphism from a complex $H$ to $\mathcal{G}$ which maps vertex $i\in V(H)$ to $v\in V(\mathcal{G})$ corresponds (in terms of weight) to a homomorphism from $H-i$ to $\mathcal{G}_v$. In the weighted setting we are required to replace the notion of the size of a set of vertices with the sum of the weights of the vertices, and for convenience we work with the following normalised version of this idea. We are primarily interested in showing that a large fraction of a part $V_j$ in some partite $k$-graph $\mathcal{G}$ has some `good' property. Given a subset $U\subseteq V_j$ we write $\vnorm{U}{\mathcal{G}}:=\Ex{\indicator{v\in U}g(v)}$, where the expectation is over a uniform choice of $v\in V_j$, so that $\vnorm{V_j}{\mathcal{G}}$ is the average weight of a vertex in $V_j$. Now if $U$ is the set of vertices satisfying some property, a statement of the form $\vnorm{U}{\mathcal{G}}\ge (1-\varepsilon)\vnorm{V_j}{\mathcal{G}}$ for a small $\varepsilon>0$ is the weighted generalisation of `a large fraction of the vertices in $V_j$ have the property'.
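The correspondence between homomorphisms into $\mathcal{G}$ with a fixed image $v$ and homomorphisms into the link $\mathcal{G}_v$ is a simple regrouping of the weight product, which the following sketch (hypothetical weights, names ours) verifies exhaustively on a tiny example.

```python
# A hypothetical weighted 2-partite host on parts {a, b} and {c, d}; edges
# not listed (including the empty edge) get weight 1.
w = {frozenset({'a', 'c'}): 0.25, frozenset({'b'}): 0.5}
g = lambda e: w.get(e, 1.0)

def link(g, v):
    """Weight function of the link graph: g_v(e) = g(e) * g({v} union e)."""
    return lambda e: g(e) * g(e | {v})

# H is the down-closure of one 2-edge {x, y}; H - x has the edges {} and {y}.
# The weight of phi (x -> v, y -> u) in G equals the weight of (y -> u) in G_v.
for v in ('a', 'b'):
    g_v = link(g, v)
    for u in ('c', 'd'):
        in_G = (g(frozenset()) * g(frozenset({v}))
                * g(frozenset({u})) * g(frozenset({v, u})))
        in_link = g_v(frozenset()) * g_v(frozenset({u}))
        assert in_G == in_link
```

Note that the weight $g(\emptyset)g(\{v\})$ is absorbed into $g_v(\emptyset)$, which is one reason the empty edge carries a weight in our setup.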
\subsection{Pseudorandomness for the majorising hypergraph} Our first main result is to define the pseudorandomness condition typically hereditary counting (Definition~\ref{def:THC}), and show that it follows from certain counting conditions (Theorem~\ref{thm:GaTHC}). The definition and theorem are important in their own right for the following reason. Given a bounded-degree complex $H$, Theorem~\ref{thm:GaTHC} gives sufficient counting conditions for a good lower bound on the number of homomorphisms from $H$ into $\Gamma$. Since we do not have a matching upper bound, this result is known as a \emph{one-sided counting lemma}. Note that we have not made any attempt to optimise the dependence on $H$ of these counting conditions, the key innovation is that for bounded-degree $H$ the size of the graphs appearing in the conditions is bounded (i.e.\ does not grow with $v(H)$). \begin{definition}[Typically hereditary counting (THC)]\label{def:THC} Given $k\ge 1$, a vertex set $J$ endowed with a linear order, and a density $k$-graph $\mathcal{P}$ on $J$, we say the $J$-partite $k$-graph $\Gamma$ is an \emph{$(\eta,c^*)$-THC graph} if the following two properties hold. \begin{enumerate}[label=\textup{(THC\arabic*)}] \item\label{thc:count} For each $J$-partite $k$-complex $R$ with at most $4$ vertices in each part and at most $c^*$ vertices in total, we have \[\Gamma(R)=\big(1\pm v(R)\eta\big)\tfrac{\gamma(\emptyset)}{p(\emptyset)}\mathcal{P}(R)\,.\] \item\label{thc:hered} If $|J|\ge 2$ and $x$ is the first vertex of $J$, there is a set $V_x'\subseteq V_x$ with $\vnorm{V_x'}{\Gamma}\ge(1-\eta)\vnorm{V_x}{\Gamma}$ such that for each $v\in V_x'$ the graph $\Gamma_v$ is an $(\eta,c^*)$-THC graph on $J\setminus\{x\}$ with density graph $\mathcal{P}_x$. 
\end{enumerate} \end{definition} Roughly, THC means that we can accurately count copies of small complexes (and the count corresponds to the expected number) and that this property is typically hereditary in the sense that for most vertices $v$ we can count in the link $\Gamma_v$, and we can count in typical links of $\Gamma_v$, and so on. The important point separating this definition from simply `we can count all small subgraphs accurately' is that we may take links a large number of times (depending on the number of vertices of $\Gamma$). It is immediate that when $\Gamma$ is the complete $J$-partite $k$-graph (that is, it assigns weight $1$ to all $J$-partite edges) then for any $c^*$ it is a $(0,c^*)$-THC graph, with density graph $\mathcal{P}$ being the complete $k$-graph on $J$ (and the ordering on $J$ is irrelevant). This is the setting we obtain (from the standard construction) when we are interested in embedding a $k$-complex $H$ on $J$ into a dense partite $k$-graph $\mathcal{G}$, which we think of as a relatively dense subgraph of the complete $J$-partite $k$-graph on $V(\mathcal{G})$. More importantly for this paper, the following result shows that counting conditions in $\Gamma$ of a type found frequently in the literature suffice for $\Gamma$ to be the majorising hypergraph in our upcoming counting and embedding results (stated in Section~\ref{sec:countandembed}). \begin{theorem}\label{thm:GaTHC} For all $\Delta,\,k\ge 2$, $c^*\ge \Delta+2$, and $0<\eta'<1/2$, there exists $\eta_0>0$ such that whenever $0<\eta<\eta_0$ the following holds. Let $J$ be a finite set and $H$ be a $J$-partite $k$-complex on vertex set $J$ with $\Delta(H^{(2)}) \le \Delta$.
Suppose that $\Gamma$ is a $J$-partite $k$-graph on vertex sets $\{V_j\}_{j\in J}$ which is identically $1$ on any $V_e$ such that $e\notin H$, and $\mathcal{P}$ is a density graph on $J$ such that for all $J$-partite $k$-complexes $F$ on at most $(\Delta+2)c^*$ vertices we have \[ \Gamma(F) = (1\pm\eta)\tfrac{\gamma(\emptyset)}{p(\emptyset)}\mathcal{P}(F)\,. \] Then $\Gamma$ is an $(\eta', c^*)$-THC graph. \end{theorem} The combination of Definition~\ref{def:THC} and Theorem~\ref{thm:GaTHC} has two major implications for this paper. Firstly, it gives a linear-forms type condition for a hypergraph $\Gamma$ to be sufficiently pseudorandom for the GPE techniques that allow us to prove embedding and counting lemmas (for regular $\mathcal{G}\subseteq\Gamma$) which are discussed in later subsections. Secondly, given such a counting lemma (or similar results from the literature) one can verify that the subgraph $\mathcal{G}$ itself satisfies THC via Theorem~\ref{thm:GaTHC}, and obtain one-sided counting in $\mathcal{G}$ directly from THC. In the final section of this paper we discuss the merits of these approaches and motivate the presence of both of them. We also show that THC holds with high probability when $\Gamma$ is a random hypergraph (Lemma~\ref{lem:randomTHC}), which allows for the methods of this paper to be applied in subgraphs of random hypergraphs. One way of achieving this would be to verify that the conditions of Theorem~\ref{thm:GaTHC} hold in suitable random hypergraphs, but we prefer to give a direct verification as it yields a better dependence of the probability on the structure of $\Gamma$, and shows that the THC property can be tractable in a direct manner. We view a random $k$-uniform hypergraph on $n$ vertices as a $k$-graph that is complete (i.e.\ weight $1$) on edges of size at most $k-1$, and for which weights of $k$-edges are independent Bernoulli random variables (taking values in $\{0,1\}$) with probability $p$.
Let $\Gamma=G^{(k)}(n,p)$ be this $k$-graph. For a finite set $J$, let $H$ be a $J$-partite $k$-complex on a vertex set $X$, and let $V(\Gamma)$ be partitioned into $\{V_j\}_{j\in J}$. Suppose that $X$ comes with a linear order. By the standard construction we obtain vertex sets $\{V_x'\}_{x\in X}$, and an $X$-partite $k$-graph $\Gamma'$. Note that, as observed after Definition~\ref{def:standard}, partite homomorphism counts in $\Gamma$ and in $\Gamma'$ are in correspondence. In particular, the counting property~\ref{thc:count} is equivalent to asking for the same bounds on homomorphism counts in $\Gamma$. Furthermore, if we embed an initial segment of $X$ and update $\Gamma'$ by taking links of all the embedded vertices, then~\ref{thc:count} in the link graph is the same as asking for a count of \emph{rooted homomorphisms} in $\Gamma$. There is a slight subtlety here, namely that if we embed two vertices of $X$ to (automatically) different vertices of $\Gamma'$ which correspond to one vertex of $\Gamma$, then the complex we count in a link of $\Gamma'$ and that which we count rooted in $\Gamma$ are not quite the same. We would like to know that in this setup $\Gamma'$ is well-behaved enough to apply the main results of this paper, which we might expect to be true provided $p$ is not too small. We prove that for $c^*\in\mathbb{N}$ and $\eta>0$, provided $p$ and the parts $V_j$ are large enough, with high probability $\Gamma'$ is an $(\eta,c^*)$-THC graph. To state the requirements on $p$ formally we give a definition of degeneracy. Suppose that $X$ is equipped with a fixed ordering, and let \[ \deg_k(H):=\max_{e\in H}\abs[\big]{\{f\in H^{(k)}: e\subseteq f,\,f\setminus e \text{ precedes } e\}}\,, \] where $f\setminus e$ precedes $e$ if and only if each vertex of $f\setminus e$ comes before every vertex of $e$ in the order on $X$. 
Then, when embedding vertices in order, part-way through the process an edge $e$ can be the set of unembedded vertices for at most $\deg_k(H)$ edges of size $k$ in $H$. We make no attempt to optimise the dependence of $p$ on the relevant parameters. \begin{lemma}\label{lem:randomTHC} Let $\eta>0$ be a real number, $c^*,\,\Delta,\, d,\,k\in\mathbb{N}$, and $J$ be a finite set. Suppose that $H$ is a $J$-partite $k$-complex of maximum degree $\Delta$ and degeneracy $\deg_k(H)\le d$ with vertex set $X$ equipped with some fixed ordering. For some fixed $0<\varepsilon<1$, let $\Gamma=G^{(k)}(n,p)$ be a random $k$-graph where $\min\big\{p^{4^kc^*d},\,p^{4^k\Delta+d}\big\}\ge (2\log n) n^{\varepsilon-1}$. Suppose also that $(1-\eta)^\Delta \ge 1/2$ and $\abs{X}\le n$. Then with probability at least $1-o(1)$ the following holds. For any partition $\{V_j\}_{j\in J}$ of $V(\Gamma)$ into parts of size at least $n_0=n/\log n$, writing $\Gamma'$ for the $X$-partite graph obtained by applying the standard construction to $H$, $\Gamma$, and $\{V_j\}_{j\in J}$, we have that $\Gamma'$ is an $(\eta, c^*)$-THC graph with density graph $\mathcal{Q}$ that gives weight $p$ to edges of $H^{(k)}$ and weight $1$ elsewhere. \end{lemma} \subsection{Regularity} As mentioned in the introduction, a dense bipartite graph is regular if and only if the number of copies of $C_4$ it contains is close to minimal for that density. To generalise this to hypergraphs, we need to define the \emph{octahedron graph}. We will need several related graphs later, so we give the general definition. Given a vector $\vec{a}$ with $k$ nonnegative integer entries, we define $\oct{k}{\vec{a}}$ to be the $k$-partite complex whose $j$th part has $\vec{a}_j$ vertices, and which contains all crossing $i$-edges for each $1\le i\le k$. Let $\vec{1}^k$ and $\vec{2}^k$ denote the $k$-vectors all of whose entries are respectively $1$ and $2$.
Then $\oct{k}{\vec{1}^k}$ is the complex generated by down-closure of a single $k$-uniform edge, while $\oct{k}{\vec{2}^k}$ is `the octahedron'. Note that $\oct{2}{\vec{2}^2}$ is the down-closure of the $2$-graph $C_4$. Later we will also require notation for two copies of $\oct{k}{1,\vec{a}}$ which share the same first vertex but are otherwise disjoint, for which we write\footnote{The `$+2$' in $+2\oct{k}{\vec{a}}$ is supposed to represent adding a common `tail' to two disjoint copies of $\oct{k}{\vec{a}}$.} $+2\oct{k}{\vec{a}}$. We are now in a position to define regularity for hypergraphs. Even when we are working in the `dense case', that is, we are thinking of $\mathcal{G}$ as a relatively dense subgraph of the complete hypergraph (as opposed to some much sparser `majorising hypergraph'), we will often need to introduce a graph $\Gamma$ which is not complete and of which $\mathcal{G}$ is a relatively dense subgraph. The reader should always think of $\Gamma$ as being a hypergraph whose good behaviour we have already established (and we are trying to show that $\mathcal{G}$ is also well behaved). \begin{definition}[Regularity of hypergraphs]\label{def:reg} Given $k\ge 1$ and nonnegative real numbers $\varepsilon$, $d$, let $\mathcal{G}$ and $\Gamma$ be $k$-partite hypergraphs on the same vertex parts. Suppose that for each $e$ with $|e|<k$ we have $g(e)=\gamma(e)$, and suppose that for each $e$ with $|e|=k$ we have $g(e)\le\gamma(e)$. Then we say that $\mathcal{G}$ is \emph{$(\varepsilon,d)$-regular} with respect to $\Gamma$ if \begin{align} \mathcal{G}\big(\oct{k}{\vec1^k}\big)&=(d\pm\varepsilon)\Gamma\big(\oct{k}{\vec{1}^k}\big) &\text{and}&& \mathcal{G}\big(\oct{k}{\vec2^k}\big)&\le\big(d^{2^k}+\varepsilon\big)\Gamma\big(\oct{k}{\vec2^k}\big)\,. 
\end{align} We say that $\mathcal{G}$ is \emph{$\varepsilon$-regular} with respect to $\Gamma$ to mean that the corresponding $(\varepsilon, d)$-regularity statement holds with $d= \mathcal{G}(\oct{k}{\vec{1}^k})/\Gamma(\oct{k}{\vec{1}^k})$. \end{definition} Note that in this definition we do not specify the octahedron density of $\mathcal{G}$ but only give an upper bound. The definition is only useful for graphs $\Gamma$ such that a matching lower bound holds for all $\mathcal{G}$, which we will see (Corollary~\ref{cor:relCSoctlow}) is the case when $\Gamma$ is sufficiently well behaved. Regularity for $k$-graphs is not usually discussed for $k=1$, but we use the notion as a shorthand for relative density in this paper. The definition makes sense when $k=1$, but only the first part of the assertion, that $\mathcal{G}$ has density close to $d$ with respect to $\Gamma$, is important. For any $1$-graph $\mathcal{G}$ on a vertex set $V$, we have \[ \mathcal{G}\big(\oct{1}{\vec2^1}\big) = \Ex{g(u)g(v)}[u,v\in V] = \Ex{g(v)}[v\in V]^2 = \mathcal{G}\big(\oct{1}{\vec1^1}\big)^2\,, \] and so imposing the upper bound on the octahedron count is superfluous, as essentially the same upper bound (the change in $\varepsilon$ being unimportant) follows from the density. \subsection{Counting and embedding results}\label{sec:countandembed} We can now state counting and embedding lemmas of a rather standard type that follow from our methods. In the dense case, that is when $\Gamma$ is a complete $J$-partite $k$-graph (for which THC is trivial), our counting lemma (Theorem~\ref{thm:counting}) is more or less the same as that given in~\cite{NRScount}. The notion of regularity used there is that of the regularity lemma in~\cite{RSreg}, which is slightly stronger than the octahedron minimality we use (see \cite{DHNRcharacterising} for the 3-uniform case).
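The octahedron minimality just mentioned can be made concrete in the graph case $k=2$, where $\Gamma$ is complete and can be ignored. The following sketch, on made-up data, computes the two densities appearing in Definition~\ref{def:reg} and checks the Cauchy--Schwarz lower bound that the definition implicitly relies on.

```python
from itertools import product

# A small 0/1 bipartite host (hypothetical data): a 6-cycle between the parts.
A, B = ['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']
E = {('a1', 'b1'), ('a1', 'b2'), ('a2', 'b2'), ('a2', 'b3'),
     ('a3', 'b3'), ('a3', 'b1')}
g = lambda a, b: 1.0 if (a, b) in E else 0.0

# Edge density: the homomorphism density of oct(1,1), a single crossing edge.
d = sum(g(a, b) for a, b in product(A, B)) / (len(A) * len(B))

# Octahedron density: the homomorphism density of oct(2,2), i.e. of C_4.
oct_density = sum(g(a, b) * g(a, bb) * g(aa, b) * g(aa, bb)
                  for a, aa, b, bb in product(A, A, B, B))
oct_density /= (len(A) * len(B)) ** 2

# Two applications of Cauchy-Schwarz give oct_density >= d**4;
# (eps, d)-regularity demands the near-matching upper bound d**4 + eps.
assert oct_density >= d ** 4
```

Here $d=2/3$ and the octahedron density is $2/9$, only slightly above $d^4=16/81$, so this particular host is regular with a modest $\varepsilon$.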
The embedding lemma, Theorem~\ref{thm:embedding}, is (as far as we know) not found in this form in the literature, but it does follow fairly easily from~\cite{NRScount}. A related but rather harder statement is found in~\cite{CFKO}. In the sparse case, our Theorem~\ref{thm:counting} essentially follows from the results of~\cite{NRScount} and of~\cite{CFZrelative}, though again it is not explicitly stated. We would like to stress that the main novelty here is that our proofs proceed via a vertex-by-vertex embedding. As is standard in this context, we write e.g.\ $0< \eta_0 \ll d_1,\dotsc,d_k$ to mean that there is an increasing function $f$ such that the argument is valid for $0 < \eta_0 \le f(d_1,\dotsc,d_k)$. Finally, we will say $\Gamma$, with density graph $\mathcal{P}$, is an $(\eta_0,c^*)$-THC graph for $H$ (where both $\Gamma$ and $H$ are $J$-partite) if applying the standard construction to $\Gamma$ and its density graph $\mathcal{P}$ yields an $(\eta_0,c^*)$-THC graph. \begin{theorem}[Counting lemma for sparse hypergraphs]\label{thm:counting} For all $k\ge 2$, finite sets $J$, and $J$-partite $k$-complexes $H$, given parameters $\eta_k$, $\eta_0$ and $\varepsilon_\ell$, $d_\ell$ for $1\le\ell\le k$ such that $0<\eta_0\ll d_1,\dotsc,d_k,\,\eta_k$, and for all $\ell$ we have $0<\varepsilon_{\ell}\ll d_{\ell},\dotsc,d_k,\,\eta_k$, the following holds. Let $c^*=\max\{2v(H)-1, 4k^2+k\}$.
Given any $J$-partite weighted $k$-graphs $\mathcal{G}\subseteq\Gamma$ and density graphs $\mathcal{D}$, $\mathcal{P}$, where $\Gamma$ is an $\big(\eta_0,c^*\big)$-THC graph for $H$ with density graph $\mathcal{P}$, and where for each $e\subseteq J$ of size $1\le\ell\le k$, the graph $\ind{\mathcal{G}}{V_e}$ is $\varepsilon_\ell$-regular with relative density $d(e)\ge d_\ell$ with respect to the graph obtained from $\ind{\mathcal{G}}{V_e}$ by replacing layer $\ell$ with $\Gamma$, we have \[ \mathcal{G}(H) = \big(1\pm v(H)\eta_k\big)\tfrac{g(\emptyset)}{d(\emptyset)p(\emptyset)}\mathcal{D}(H)\mathcal{P}(H)\,. \] \end{theorem} \begin{theorem}[Embedding lemma for sparse hypergraphs]\label{thm:embedding} For all $k\ge2$ and $\Delta\ge1$, given parameters $\eta_k$, $\eta_0$, and $\varepsilon_\ell$, $d_\ell$ for $1\le\ell\le k$ such that $0<\eta_0\ll d_1,\dotsc,d_k,\,\eta_k,\,\Delta$, and for all $\ell$ we have $0<\varepsilon_{\ell}\ll d_{\ell},\dotsc,d_k,\,\eta_k,\,\Delta$, the following holds. Let $\mathcal{G}\subseteq\Gamma$ be $J$-partite weighted $k$-graphs with associated density graphs $\mathcal{D}$, $\mathcal{P}$, where $\Gamma$ is an $(\eta_0,4k^2+k)$-THC graph for $H$ with density graph $\mathcal{P}$, and where for each $e\subseteq J$ of size $1\le\ell\le k$, the graph $\ind{\mathcal{G}}{V_e}$ is $\varepsilon_\ell$-regular with relative density $d(e)\ge d_\ell$ with respect to the graph obtained from $\ind{\mathcal{G}}{V_e}$ by replacing layer $\ell$ with $\Gamma$. Then we have \[ \mathcal{G}(H) \ge (1-\eta_k)^{v(H)}\tfrac{g(\emptyset)}{d(\emptyset)p(\emptyset)}\mathcal{D}(H)\mathcal{P}(H) \] for all $J$-partite $k$-complexes $H$ of maximum degree $\Delta$. \end{theorem} \subsection{Regularity inheritance} As stated in the introduction, we prove counting and embedding results via regularity inheritance.
For sparse graphs, a regularity inheritance lemma states that, given vertex sets $X$, $Y$, and $Z$ such that on each pair we have a regular subgraph of a sufficiently well-behaved majorising graph, neighbourhoods of vertices $z\in Z$ on one or two sides of the pair $(X,Y)$ typically induce another regular subgraph of the majorising graph. The cases `one side' and `two sides' (see~\cite{CFZextremal,ABSSregularity}) are usually stated as separate lemmas, and the quantitative requirements for `well-behaved' are a little different. In this paper, we will not try to optimise this quantitative requirement and so state one lemma which covers all cases. In addition to a regularity inheritance lemma one usually needs to make use of the (trivial) observation that given a regular pair $(X,Y)$ in a graph $G$, if $Y'$ is a subset of $Y$ which is not too small then most vertices in $X$ have about the expected neighbourhood in $Y'$ (see Section~\ref{sec:sketch}). Another way of phrasing this is to define a partite weighted graph $\mathcal{G}$ on $X\cup Y$, with weights on the crossing $2$-edges corresponding to edges of $G$ and weights on the vertices of $Y$ being the characteristic function of $Y'$; then for most $v\in X$ the link $1$-graph $\mathcal{G}_v$ has about the expected density (recall that regularity is trivial for $1$-graphs). We will need a generalisation of this observation to graphs of higher uniformity, where we will need not only that the link graph typically has the right density but also that it is typically regular. It is convenient to state this too as part of our general regularity inheritance lemma. Informally, the idea is the following.
If $\mathcal{G}\subseteq\Gamma$ are $\{0,\dots,k\}$-partite weighted graphs, which are equal on all edges except those in $V_{[k]}$ and $V_{\{0,\dots,k\}}$, and we have that $\mathcal{G}[V_{[k]}]$ and $\mathcal{G}[V_{\{0,\dots,k\}}]$ are respectively $(\varepsilon,d)$-regular and $(\varepsilon,d')$-regular with respect to $\Gamma$, and $\Gamma$ is sufficiently well-behaved, then for most $v\in V_0$ the graph $\mathcal{G}_v$ is $(\varepsilon',dd')$-regular with respect to $\Gamma_v$, where $\varepsilon'$ is not too much larger than $\varepsilon$. Following the general setting of the results outlined so far, our notion of `well-behaved' for $\Gamma$ is given in terms of small subgraph counts, exactly as one might have in typical links of a THC-graph. In the statement of the lemma we use the notation $\mathcal{H}^{(\ell)}$ to mean the $k$-graph which gives the same weight as $\mathcal{H}$ to edges of size $\ell$ but weight $1$ to all other crossing edges, and $\mathcal{H}\cdot\mathcal{H}'$ to mean the $k$-graph whose weight function is the pointwise product $h\cdot h'$. Recall that $+2\oct{k}{\vec a}$ represents two copies of $\oct{k}{1,\vec{a}}$ which share the first vertex, but are otherwise disjoint. \begin{lemma}\label{lem:k-inherit} For all $k\ge 1$ and $\varepsilon',\, d_0 > 0$, provided $\varepsilon,\, \eta>0$ are small enough that \[ \min\{\varepsilon', 2^{-k}\} \ge 2^{2^{k+6}}k^3\big(\varepsilon^{1/16}+\eta^{1/32}\big)d_0^{-2^{k+1}}\,, \] the following holds for all $d,\,d'\ge d_0$. Let $\{V_j\}_{0\le j\le k}$ be vertex sets, and $\mathcal{P}$ be a density graph on $\{0,\dotsc,k\}$.
Let $\mathcal{G}\le\Gamma$ be $(k+1)$-partite $(k+1)$-graphs on $V_0,\dotsc,V_k$ such that \begin{enumerate}[label=\textup{(INH\arabic*)}] \item\label{inh:count} for all complexes $R$ of the form $+2\oct{k}{\vec a}$ or $\oct{k+1}{\vec b}$, where $\vec a\in\{0,1,2\}^k$ and $\vec{b}\in\{0,1,2\}^{k+1}$, we have \[ \Gamma(R) = (1\pm \eta)\tfrac{\gamma(\emptyset)}{p(\emptyset)}\mathcal{P}(R)\,, \] \item\label{inh:GGam} $\mathcal{G}$ gives the same weight as $\Gamma$ to every edge except those of size $k+1$ and those in $V_{[k]}$, \item\label{inh:regJ} $\mathcal{G}^{(k+1)}\cdot\Gamma^{(\le k)}$ is $(\varepsilon, d')$-regular with respect to $\Gamma$, \item\label{inh:regf} $\ind{\mathcal{G}}{V_1,\dotsc,V_k}$ is $(\varepsilon, d)$-regular with respect to $\ind{\Gamma}{V_1,\dotsc,V_k}$. \end{enumerate} Then there exists a set $V_0'\subseteq V_0$ with $\vnorm{V_0'}{\Gamma}\ge (1-\varepsilon')\vnorm{V_0}{\Gamma}$ such that for every $v\in V_0'$ the graph $\mathcal{G}_v$ is $(\varepsilon', dd')$-regular with respect to $\Gamma_v$. \end{lemma} This is the promised regularity inheritance lemma. The quantification of the constants is crucial for the definition of a good partial embedding in the following Section~\ref{subsec:gpe}; in order for a useful counting lemma to follow from our approach one needs to be able to control the regularity error parameters at every step of a vertex-by-vertex embedding, and work with underlying densities much smaller than these errors. Observe that in the statement above, the quantities $d$ and $d'$ are relative densities of parts of $\mathcal{G}$ with respect to $\Gamma$; they need to be large compared to $\varepsilon'$ (and $\varepsilon$) in order for the statement to be interesting, and $\eta$ also needs to be small compared to $\varepsilon'$.
But the densities $p(e)$ from $\mathcal{P}$, which by \ref{inh:count} are approximately the absolute densities in $\Gamma$, can be (and in applications usually will be) very small compared to all other quantities. In typical applications $d$, $d'$, $\varepsilon$, $\varepsilon'$, $\eta$ will be constants fixed in a proof and independent of $v(\mathcal{G})$, while the $p(e)$ may well tend to zero as $v(\mathcal{G})$ grows. \subsection{Good partial embeddings and counting}\label{subsec:gpe} When using regularity inheritance to prove counting and embedding results, it is natural to describe one step of the embedding, isolate some common structure from each step, and prove the full result by induction. The `common structure' that we define is that of a \emph{good partial embedding} (GPE). Theorems~\ref{thm:counting} and~\ref{thm:embedding} follow from more general results for GPEs. Here we give motivation and definitions of the necessary ideas, and state these results for GPEs. A comparison of our methods to the simplest example of this approach in graphs is given in Section~\ref{sec:sketch}. We construct homomorphisms from $H$ to $\mathcal{G}$ vertex-by-vertex and count the contribution to the total number of homomorphisms from each choice of image as the construction proceeds. Given a complex $H$ and a $V(H)$-partite setup where we want to find a partite homomorphism from $H$ to a weighted graph $\mathcal{G}$, we start with a trivial partial embedding $\phi_0$ from $H_0:=H$ to $\mathcal{G}_0:=\mathcal{G}$ in which no vertices are embedded. Now for each $t=1,\dots,v(H)$ in succession, we choose a vertex $x_t$ of $H_{t-1}$ and a vertex $v_t$ of $V_{x_t}$. We set $\phi_t:=\phi_{t-1}\cup\{x_t\mapsto v_t\}$, and $H_t:=H_{t-1}\setminus\{x_t\}$, and we write $\mathcal{G}_t:=(\mathcal{G}_{t-1})_{v_t}$, that is, we take the link graph.
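A toy implementation of this vertex-by-vertex process may help fix ideas. The sketch below is ours (not from the paper): a weighted complex is modelled as a weight function $g$ on frozensets of vertices, and we assume the convention that the link of $v$ gives an edge $e$ the new weight $g(e)\,g(e\cup\{v\})$, so that repeatedly taking links accumulates the weights of all edges touched by the embedding.

```python
from itertools import combinations

def link(g, v):
    # Link of v (our convention): the new weight of an edge e is
    # g(e) * g(e ∪ {v}), folding the edges through v into the weights.
    return lambda e: g(e) * g(e | frozenset([v]))

def embed_one_by_one(g, images):
    """Take links at the chosen image vertices in order; the weight of the
    empty edge in the final, empty graph is the weight of the embedding."""
    for v in images:
        g = link(g, v)
    return g(frozenset())

def total_weight(g, images):
    """Direct computation: the product of g over all subsets of the images."""
    w = 1.0
    for r in range(len(images) + 1):
        for e in combinations(images, r):
            w *= g(frozenset(e))
    return w
```

For instance, with $g(e)=2^{-|e|}$ and three image vertices, both functions return $2^{-12}$, and the answer does not depend on the order in which the links are taken.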
The graph $\mathcal{G}_{v(H)}$ is an empty weighted graph with weight function $g_{v(H)}$: the only edge it contains is the empty set, and its weight is \[g_{v(H)}(\emptyset)=\prod_{e\subseteq V(H)}g\big(\phi(e)\big)=\mathcal{G}(\phi)\,.\] Obviously, in general the final value $\mathcal{G}(\phi)$ depends on the choices of the $v_t$ made along the way, but in the model case when for each $f\subseteq V(H)$ the function $g$ is constant, say equal to $d(f)$, on $V_f$ we obtain the same answer whichever choices we make. Furthermore, (trivially) at each step $t$, when we are to choose $v_t$, the average weight in $\mathcal{G}_{t-1}$ of vertices in $V_{x_t}$ depends only on the values $d(f)$ and not on the choices made; and a similar statement is true for the edges in each $V_f$. When the (strong) hypergraph regularity lemma is applied to a $k$-uniform subgraph of $\Gamma$, one ends up working with a subgraph $\mathcal{G}$ of $\Gamma$ which has the following properties. First, there is a vertex partition $\{V_j\}_{j\in J}$ indexed by $J$ of $V(\mathcal{G})=V(\Gamma)$. Second, for each $f\subseteq J$ with $2\le|f|\le k$, the graph $\mathcal{G}[V_f]$ is $\big(\varepsilon_{|f|},d(f)\big)$-regular with respect to the graph whose weight function is equal to that of $\mathcal{G}$ on edges of size at most $|f|-1$ and to $\Gamma$ on edges of size $|f|$. Here one should think of the edges of $\mathcal{G}$ of size $k-1$ and less as being output by the regularity lemma, and the $k$-edges as being the subgraph of $\Gamma$ which we are regularising. The difficulty is that, while we always have $\varepsilon_{|f|}\ll d(f)$, and indeed $\varepsilon_\ell\ll d(f)$ for any $f$ with $|f|\ge\ell$, it may be the case that $\varepsilon_\ell$ is large compared to the $d(f)$ with $|f|<\ell$. The solution to this is to separate counting and embedding into several steps. To begin with, we can count any small hypergraph to high precision in the ambient $\Gamma$ by assumption.
We define a hypergraph whose edges are given weight equal to $\Gamma$ on edges of size $3$ and above, but equal to $\mathcal{G}$ on edges of size two (and one). We can think of this hypergraph as being very regular and dense relative to $\Gamma$: the relative density parameters are $d(e)$ for $|e|=2$ which are much larger than the regularity parameter $\varepsilon_2$. Using our regularity inheritance lemma, we show that we can count any small hypergraph to high precision in this new hypergraph. This means we can now think of our new hypergraph as a well-behaved ambient hypergraph, and consider the hypergraph whose edges have weight equal to $\Gamma$ on edges of size $4$ and above, but equal to $\mathcal{G}$ on edges of size $3$ and below. The same argument shows we can count small hypergraphs to high precision in this hypergraph too, and so on. Our approach thus keeps track of a \emph{stack} of hypergraphs, where we assume that we can count in the bottom \emph{level} $\Gamma$ and inductively bootstrap our way to counting in the top level $\mathcal{G}$ by using the fact that each level is relatively dense and very regular with respect to the level below. In general, we may have a more complicated setup because we have embedded some vertices. We begin by defining abstractly the structure we consider, and will then move on to giving the conditions it must satisfy in order that we can work with it. It is convenient to introduce a complex $H$ and a partial embedding of that complex in order to define the update rule; we do not need to specify the graph into which $H$ is partially embedded. \begin{definition}[Stack of candidate graphs, update rule] Let $k\ge 2$, and suppose that a $k$-complex $H$, a partial embedding $\phi$ of $H$, and disjoint vertex sets $V_x$ for each $x\in V(H)$ which is unembedded (that is, $x\not\in\dom\phi$) are given.
Suppose that for each $0\le\ell\le k$ and each $e\subseteq V(H)\setminus\dom\phi$ we are given a subgraph $\cC^{(\ell)}(e)$ of $V_e$. We write $\cC^{(\ell)}$ for the union of the $\cC^{(\ell)}(e)$; that is, the graph with parts $\{V_x\}_{x\in V(H)\setminus\dom\phi}$ whose weight function is equal to that of $\cC^{(\ell)}(e)$ on $V_e$ for each $e\subseteq V(H)\setminus\dom\phi$. If $\cC^{(0)}\ge\cC^{(1)}\ge\dotsb\ge\cC^{(k)}$ then we call the collection of $k+1$ graphs a \emph{stack of candidate graphs}, and $\cC^{(\ell)}$ is the \emph{level $\ell$ candidate graph}. Given $x\in V(H)\setminus\dom\phi$ and $v\in V_x$, we form a stack of candidate graphs corresponding to the partial embedding $\phi\cup\{x\mapsto v\}$ according to the following \emph{update rule}. For each $0\le\ell\le k$, we let $\cC^{(\ell)}_{x\mapsto v}:=\cC^{(\ell)}_v$ be the link graph of $v$ in $\cC^{(\ell)}$. Note that trivially since $\cC^{(\ell)}\le\cC^{(\ell-1)}$ we have $\cC^{(\ell)}_{x\mapsto v}\le \cC^{(\ell-1)}_{x\mapsto v}$ for each $1\le\ell\le k$, so that this indeed gives a stack of candidate graphs. \end{definition} It will be important in what follows that we think of each $\cC^{(\ell)}$ both as specifying weights for an ongoing embedding of $H$, and also as a partite graph into which we expect to know the number of embeddings of some (small, not necessarily related to $H$) complex $R$. We are now in a position to define a \emph{good partial embedding} (GPE). Informally, this is a partial embedding of $H$ together with a stack of candidate graphs, such that for each $1\le\ell\le k$ the graph $\cC^{(\ell)}$ is relatively dense and regular with respect to $\cC^{(\ell-1)}$. We specify the relative density of each $\cC^{(\ell)}(e)$ explicitly in terms of a density $k$-graph $\mathcal{D}^{(\ell)}$ with densities $d^{(\ell)}(f)\in[0,1]$ for each $1\le\ell\le k$ and $f\subseteq V(H)$, which we think of as being the relative densities in the trivial GPE.
We denote by $\mathcal{D}^{(\ell)}_\phi$ the density $k$-graph obtained from $\mathcal{D}^{(\ell)}$ by repeatedly taking the neighbourhood of vertices $x\in\dom\phi$, so that $\mathcal{D}^{(\ell)}_\phi$ gives the `current' relative densities of $\cC^{(\ell)}$. We will need a collection of parameters which describe, respectively, the minimum relative densities in each level of the stack (with respect to the level below) at any step of the embedding (denoted $\delta_\ell$), the required accuracy of counting in each level (denoted $\eta_\ell$), and the regularity required in each level. The regularity parameters are somewhat complicated. In general, one should focus on the best- and worst-case regularity; it is necessary to have the other parameters, but one only needs the extra granularity they offer in certain parts of the argument. Briefly, when we say $\cC^{(\ell)}(e)$ is $\varepsilon_{\ell,r,h}$-regular, the $\ell$ indicates the level in the stack, $r=|e|$ gives the uniformity, and $h$ is the number of \emph{hits}, that is, how many times in creating $\phi$ we previously degraded the regularity of $\cC^{(\ell)}(e)$. This will turn out to be equal to \[\pi_{\phi}(e):=\big|\{x\in\dom\phi:\{x\}\cup e'\in H\text{ for some $\emptyset\neq e'\subseteq e$}\}\big|\,.\] The maximum value of $\pi_\phi(e)$ that we could observe in the proof is related to a degeneracy-like property of $H$ which we now define. Given a fixed linear order on $V(H)$, write $\vdeg(H)$ for the \emph{vertex-degeneracy} of $H$, which is \[ \vdeg(H):= \smash[b]{\max_{e\in H}}\abs[\Big]{\big\{x\in V(H) :\text{$x\le y$ for all $y\in e$, and $\{x\}\cup e'\in H$ for some $\emptyset\neq e'\subseteq e$}\big\}}\,. \] The definition of vertex-degeneracy was chosen precisely to make $\pi_{\phi}(e)\le \vdeg(H)$ hold for all unembedded $e\in H$ whenever $\phi$ is a partial embedding of $H$ with $\dom\phi$ an initial segment of $V(H)$. 
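Both $\pi_\phi(e)$ and the vertex-degeneracy can be computed by transcribing the displayed definitions directly. The following sketch is ours (hypothetical helper names; a complex is stored as a set of frozensets, closed under taking subsets):

```python
from itertools import combinations

def hits(H, dom_phi, e):
    """pi_phi(e): the number of x in dom_phi such that {x} ∪ e' is an edge
    of H for some nonempty e' ⊆ e.  Here H is a set of frozensets."""
    def adjacent(x):
        return any(frozenset({x}) | frozenset(sub) in H
                   for r in range(1, len(e) + 1)
                   for sub in combinations(sorted(e), r))
    return sum(1 for x in dom_phi if adjacent(x))

def vertex_degeneracy(H, order):
    """vdeg(H) with respect to the linear order `order` on V(H)."""
    pos = {v: i for i, v in enumerate(order)}
    best = 0
    for e in H:
        if not e:
            continue
        # Vertices preceding (or equal to) every vertex of e in the order.
        before = [x for x in order if all(pos[x] <= pos[y] for y in e)]
        best = max(best, hits(H, before, e))
    return best
```

For example, if $H$ is the downward closure of $\{1,2,3\}$ and $\{3,4\}$ with the natural order, the maximum is attained at $e=\{3,4\}$: the vertices $1$, $2$, $3$ all precede $e$ and are adjacent to it, giving vertex-degeneracy $3$, while a partial embedding with domain $\{1,2\}$ gives $\pi_\phi(\{3,4\})=2$.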
\begin{definition}[Ensemble of parameters, valid ensemble]\label{def:ensemble} Given integers $k$, $c^*$, $h^*$, and $\Delta$, an \emph{ensemble of parameters} is a collection $\delta_1,\dotsc,\delta_k$ of \emph{minimum relative densities}, $\eta_0,\dotsc,\eta_k$ of \emph{counting accuracy parameters}, and $\big(\varepsilon_{\ell,r,h}\big)_{\ell,r\in[k],\,h\in\{0,\dotsc,h^*\}}$ of \emph{regularity parameters}. For each $\ell\in[k]$ we define the \emph{best-case regularity} $\varepsilon_\ell:=\min_{r\in[k],h\in\{0,\dotsc,h^*\}}\varepsilon_{\ell,r,h}$ and the \emph{worst-case regularity} $\varepsilon'_\ell:=\max_{r\in[k],h\in\{0,\dotsc,h^*\}}\varepsilon_{\ell,r,h}$. An ensemble of parameters is \emph{valid} if the following statements all hold for each $1\le\ell\le k$. \begin{enumerate}[label=\textup{(VE\arabic*)}] \item\label{ve:worst} $\eta_0\ll \delta_1,\dotsc,\delta_\ell,\,\eta_\ell,\,k,\,c^*$ and for all $\ell'\in[\ell]$ we have $\varepsilon'_{\ell'}\ll\delta_{\ell'},\dotsc,\delta_\ell,\, \eta_\ell,\, k,\,c^*,\,\Delta$ such that the following hold: \begin{align} \eta_0 &\le \frac{\eta_\ell}{72(k+1)c^*} \prod_{0<\ell''\le\ell}\delta_{\ell''}^{c^*}\,, \\\varepsilon_{\ell'}' &\le \frac{\eta_\ell\delta_{\ell'}}{72k(k+1)\Delta^2} \prod_{\ell'<\ell''\le\ell} \delta_{\ell''}^{c^*} \,. \end{align} \item\label{ve:reginh} For each $r\in[k]$ and $0\le h\le h^*-1$, we have $\varepsilon_{\ell,r,h}\ll\varepsilon_{\ell,r,h+1},\,\delta_\ell$ small enough for Lemma~\ref{lem:k-inherit} (inheritance and link regularity) with input $\delta_\ell$ and $\varepsilon_{\ell,r,h+1}$. In particular $\varepsilon_{\ell,r,h}$ increases with $h$. \item\label{ve:sizeorder} For each $r\in[k-1]$ we have $\varepsilon_{\ell,r+1,h^*}\le\varepsilon_{\ell,r,0}$. \item\label{ve:count} The counting accuracy $(4k+1)\eta_{\ell-1}$ is good enough for each application of Lemma~\ref{lem:k-inherit} as above. 
That is, with inputs $\delta_\ell$ and $\varepsilon_{\ell,r,h}$ for $1\le h\le h^*$ we have $(4k+1)\eta_{\ell-1}$ small enough to apply Lemma~\ref{lem:k-inherit}. \end{enumerate} \end{definition} By this definition, we always have $\varepsilon_\ell=\varepsilon_{\ell,k,0}$ and $\varepsilon'_\ell=\varepsilon_{\ell,1,h^*}$. It is important to observe that we can obtain a valid ensemble of parameters by starting with $\delta_k$ and $\eta_k$, choosing $\varepsilon_{k,1,h^*}=\varepsilon'_k$ to satisfy \[ \varepsilon_{k,1,h^*} \le \frac{\eta_k\delta_{k}}{72k(k+1)\Delta^2}\,, \] then choosing in order \[ \varepsilon_{k,1,h^*-1}\gg\dots\gg\varepsilon_{k,1,0}\gg\varepsilon_{k,2,h^*}\gg\dots\gg\varepsilon_{k,2,1}\gg\dots\gg\varepsilon_{k,k,0}=\varepsilon_k\,, \] at which point we can calculate the required accuracy of counting $\eta_{k-1}$ and, given $\delta_{k-1}$, choose $\varepsilon'_{k-1}$ to match it, and repeat this process down the stack. In particular, this order of choosing constants is compatible with the strong hypergraph regularity lemma (see Section~\ref{sec:reg}), to which we would first input $\varepsilon_k$, be given a $d_{k-1}$ which means we can specify $\delta_{k-1}$, then choose $\varepsilon_{k-1}$, and be able to calculate $\delta_{k-2}$, and so on. Given a partial embedding $\phi$ of $H$, a stack of candidate graphs, $1\le\ell\le k$ and $e\subseteq V(H)\setminus\dom\phi$ with $|e|\ge 1$, we let $\ocC^{(\ell-1)}(e)$ denote the subgraph of $\cC^{(\ell-1)}$ induced by $\bigcup_{x\in e}V_x$. We let $\tcC^{(\ell)}(e)$ denote the graph obtained from $\ocC^{(\ell-1)}(e)$ by replacing the weights of edges in $V_e$ with the weights of $\cC^{(\ell)}(e)$. We will always consider regularity of $\tcC^{(\ell)}(e)$ with respect to $\ocC^{(\ell-1)}(e)$.
This may seem strange; if we are working with unweighted graphs then there may be edges at all levels of the complex $\ocC^{(\ell-1)}(e)$ which are not in $\ocC^{(\ell)}(e)$, and so we are insisting on a regularity involving some edges of $\cC^{(\ell)}(e)$ which do not contribute to the count of embeddings into $\cC^{(\ell)}$. But it turns out to be necessary. \begin{definition}[Good partial embedding]\label{def:gpe} Given $k\ge 2$, a $k$-complex $H$ of maximum degree $\Delta$, integers $c^*$, $h^*$, and for each $0\le\ell\le k$ a density $k$-graph $\mathcal{D}^{(\ell)}$ on $V(H)$, let $\delta_1,\dots,\delta_k$, $\eta_0,\dotsc,\eta_k$, and $\big(\varepsilon_{\ell,r,h}\big)_{\ell,r\in[k],\,h\in\{0,\dotsc,h^*\}}$ be a valid ensemble of parameters. Given $1\le\ell\le k$, we say that a partial embedding $\phi$ of $H$ together with a stack of candidate graphs $\cC^{(0)},\dots,\cC^{(\ell)}$ is an \emph{$\ell$-good partial embedding} ($\ell$-GPE) if \begin{enumerate}[label=\textup{(GPE\arabic*)}] \item\label{gpe:c0} The graph $\cC^{(0)}$ is an $\big(\eta_0,c^*\big)$-THC graph with density graph $\mathcal{D}^{(0)}_\phi$. \item\label{gpe:reg} For each $1\le\ell'\le \ell$ and $\emptyset\neq e\subseteq V(H)\setminus\dom\phi$, the graph $\tcC^{(\ell')}(e)$ is $(\varepsilon,d)$-regular with respect to $\ocC^{(\ell'-1)}(e)$, where \[\varepsilon=\varepsilon_{\ell',|e|,\pi_\phi(e)}\quad\text{ and }\quad d=d^{(\ell')}_\phi(e)=\prod_{\substack{f\subseteq V(H),\\e\subseteq f,\,f\setminus e\subseteq\dom\phi}}d^{(\ell')}(f)\,.\] \item\label{gpe:dens} The parameters $\delta_1,\dotsc,\delta_\ell$ are `global' lower bounds on the relative density terms in the sense that for each $1\le\ell'\le\ell$ and $\emptyset\neq e\subseteq V(H)\setminus\dom\phi$, we have \[ \delta_{\ell'}\le \prod_{\substack{f\subseteq V(H),\\ e\subseteq f}}d^{(\ell')}(f)\,. \] \end{enumerate} When we have a $k$-good partial embedding, we will usually simply say \emph{good partial embedding} (GPE).
\end{definition} If we were told that the trivial partial embedding was good, and that for every $x$ and $v\in V_x$, extending a good partial embedding $\phi$ of $H$ to $\phi\cup\{x\mapsto v\}$ and using the update rule to obtain a new stack of candidate graphs would result in a good partial embedding, then we would rather trivially conclude the desired counting lemma. We would simply count the number of ways to complete the embedding: when we come to embed some $x$ to $\cC^{(k)}(x)$ (with respect to the current GPE $\phi$) the density of $\cC^{(k)}(x)$ would be \[ \prod_{\ell=0}^k d^{(\ell)}_\phi(x) = \prod_{\ell=0}^k\prod_{\substack{f\subseteq V(H),\,x\in f, \\ f\setminus\{x\}\subseteq\dom\phi}}d^{(\ell)}(f) \] up to a relative error which is small provided that for each $\ell$, all the $\varepsilon_{\ell,r,h}$ are small enough compared to the $d^{(\ell)}(f)$. Since this formula does not depend on a specific $\phi$ but only on $\dom\phi$ (so, on the order we embed the vertices) we conclude that the total weight of embeddings of $H$ is \[ \tfrac{c^{(k)}(\emptyset)}{\prod\limits_{0\le\ell\le k}d^{(\ell)}(\emptyset)}\prod_{\ell=0}^k\mathcal{D}^{(\ell)}(H) = \tfrac{c^{(k)}(\emptyset)}{\prod\limits_{0\le\ell\le k}d^{(\ell)}(\emptyset)}\prod_{\ell=0}^k\prod_{f\subseteq V(H)}d^{(\ell)}(f) \] up to a relative error which is small provided that for each $\ell$, all the $\varepsilon_{\ell,r,h}$ are small enough given the $d^{(\ell)}(f)$ and $v(H)^{-1}$. This is the statement we would like to prove. Of course, it is unrealistic to expect that we always get a good partial embedding when we extend a good partial embedding. However, it is enough if we typically get a good partial embedding, and the next lemma states that this is the case. 
\begin{lemma}[One-step Lemma]\label{lem:onestep} Given $k\ge 2$, a $k$-complex $H$ of maximum degree $\Delta$ and vertex-degeneracy $\vdeg(H)\le \Delta'$, positive integers $c^*$ and $h^*$, a valid ensemble of parameters, a partial embedding $\phi$ and stack of candidate graphs $\cC^{(0)},\dots,\cC^{(k)}$ giving a GPE, let $B_0(x)$ denote the set of vertices $v\in V_x$ such that condition~\ref{gpe:c0} does not hold for the extension $\phi\cup\{x\mapsto v\}$ and the updated candidate graph $\cC^{(0)}_{x\mapsto v}$. For $1\le\ell\le k$, let $B_\ell(x)$ denote the set of vertices $v\in V_x$ such that $\phi\cup\{x\mapsto v\}$ and the updated candidate graphs do not form an $\ell$-GPE. Then for every $1\le\ell\le k$ such that $\ell(4k+1)\le c^*$ and $\ell(4k+1)+k\Delta'\le h^*$, we have \[ \vnorm{B_\ell(x)\setminus B_{\ell-1}(x)}{\cC^{(\ell-1)}(x)} \le k\Delta^2\varepsilon'_\ell\vnorm{V_x}{\cC^{(\ell-1)}(x)}\,. \] \end{lemma} The point of this collection of bounds on atypical vertices is that if a vertex $v$ is in $B_{\ell}(x)\setminus B_{\ell-1}(x)$ for some $\ell$, then we will be able to upper bound the count of $H$-copies extending $\phi\cup\{x\mapsto v\}$ in terms of the count of those $H$-copies in $\cC^{(\ell-1)}$ (which we show we can estimate accurately). This upper bound is bigger than the number we would like to get (the count in $\cC^{(k)}$) by the reciprocal of a product of some $d^{(\ell')}(f)$ terms, for various edges $f$ but only for $\ell'\ge\ell$. In particular, if $v(H)-\abs{\dom\phi}$ is not too large then this product is much larger than $\varepsilon'_\ell$, so that the vertices of $B_{\ell}(x)\setminus B_{\ell-1}(x)$ in total do not contribute much to the overall count. The corresponding counting lemma is then the following. 
\begin{lemma}[Counting Lemma for GPEs]\label{lem:GPEcount} Given $k\ge 2$, positive integers $\Delta$, $c^*$, $h^*$, and a valid ensemble of parameters, let $\phi$ be a partial embedding of a $k$-complex $H$ of maximum degree $\Delta$, and suppose that for some $1\le\ell\le k$, the stack of candidate graphs $\cC^{(0)},\dots,\cC^{(\ell)}$ gives an $\ell$-GPE. Write $r=v(H)-\abs{\dom\phi}$ and suppose that we have $c^*\ge\max\{2r-1, \ell(4k+1)\}$, $h^*\ge \ell(4k+1)+\vdeg(H)$, and $r\eta_\ell\le 1/2$. Then \begin{align} \cC^{(\ell)}(H-\dom\phi) &=(1\pm r\eta_\ell)\tfrac{c^{(\ell)}(\emptyset)}{\prod\limits_{0\le\ell'\le\ell}d_\phi^{(\ell')}(\emptyset)}\prod_{0\le\ell'\le\ell}\mathcal{D}^{(\ell')}_\phi(H-\dom\phi)\,. \end{align} \end{lemma} The right-hand side consists of a relative error term and a product of densities, where the $\emptyset$ terms correspond to edges of $H$ which are fully embedded by $\phi$, and the remaining terms correspond to the expected weight of edges not yet fully embedded by $\phi$. The proofs of Lemmas~\ref{lem:onestep} and~\ref{lem:GPEcount} are an intertwined induction, which we give in the following Section~\ref{sec:count}. Specifically, to prove Lemma~\ref{lem:onestep} for some $\ell\ge 1$ we assume Lemma~\ref{lem:GPEcount} for $\ell'<\ell$, and to prove Lemma~\ref{lem:GPEcount} for $\ell\ge 1$ we assume Lemma~\ref{lem:onestep} for $\ell'\le\ell$. The base case is provided by the observation that the counting conditions we require to prove Lemma~\ref{lem:onestep} for $\ell=1$, in $\cC^{(0)}$, hold because~\ref{gpe:c0} states that $\cC^{(0)}$ is a THC graph. If one is only interested in a lower bound for the purpose of embedding, our methods are significantly simpler because we trivially have zero as a lower bound for the total weight of embeddings using bad vertices, and one can afford the luxury of ignoring levels below $k$ of the stack.
Controlling this error is what requires $c^*\ge 2r-1$ in Lemma~\ref{lem:GPEcount}, but we would like to depend less on the global structure of $H$ in an embedding result, stated as Lemma~\ref{lem:GPEemb} below. \begin{lemma}[Embedding lemma for GPEs]\label{lem:GPEemb} Given $k\ge 2$, positive integers $\Delta$, $\Delta'$, $c^*\ge k(4k+1)$, $h^*\ge k(4k+1)+k\Delta'$, and a valid ensemble of parameters, let $\phi$ be a partial embedding of a $k$-complex $H$ of maximum degree $\Delta$ and vertex-degeneracy at most $\Delta'$, and suppose that the stack of candidate graphs $\cC^{(0)},\dots,\cC^{(k)}$ gives a $k$-GPE. Write $r=v(H)-\abs{\dom\phi}$. Then we have \begin{align} \cC^{(k)}(H-\dom\phi) &\ge (1- \eta_k)^r\tfrac{c^{(k)}(\emptyset)}{\prod\limits_{0\le\ell\le k}d_\phi^{(\ell)}(\emptyset)}\prod_{0\le\ell\le k}\mathcal{D}^{(\ell)}_\phi(H-\dom\phi)\,. \end{align} \end{lemma} Note that although Lemmas~\ref{lem:GPEcount} and~\ref{lem:GPEemb} only explicitly allow for counting embeddings in a partite graph where one vertex is embedded to each part, it is easy to deduce versions where multiple vertices may be embedded into each part by applying the standard construction at each level of the stack. It is trivial to check that for levels $1$ to $k$ the required regularity is carried over, and the homomorphism counts imposed on the bottom level are similarly preserved by the construction. \section{A sketch of counting in dense graphs}\label{sec:sketch} The following sketch proves what is perhaps the simplest non-trivial example of a counting lemma, and forms the basis of our methods. In a few places we use simple facts about dense, regular graphs that are much more difficult to prove in more general settings. Most of the technical work in this paper is dedicated to proving such facts in the setting of sparse hypergraphs. 
Let $X$, $Y$, and $Z$ be disjoint vertex sets in a graph $G$, each of size $n$, such that each pair of sets induces an $\varepsilon$-regular bipartite graph of density $d$ with $\varepsilon<d/2$. It is usual for us to use Szemerédi's original definition of regularity, which means here that for subsets $X'\subseteq X$ and $Y'\subseteq Y$ each of size at least $\varepsilon n$, there are $(d\pm\varepsilon)\abs{X'}\abs{Y'}$ edges between $X'$ and $Y'$, and similarly for the pairs $(X,Z)$ and $(Y,Z)$. We sketch a proof that the number of triangles with one vertex in each set is $(d^3\pm \xi)n^3$ with an error $\xi$ polynomial in $\varepsilon$ and $\varepsilon /d$. The proof requires standard properties of a regular pair and some standard notation: $\deg(u,S)$ is the number of edges $\{u,v\}\in E(G)$ with $v\in S$, and $N(u)$ is the set $\big\{v \in V(G) : \{u,v\}\in E(G)\big\}$. We first observe that, by regularity, for all but at most $4\varepsilon n$ vertices $x\in X$ we have both $\deg(x,Y)$ and $\deg(x,Z)$ in the range $(d\pm\varepsilon)n$. We also note that any vertex is in at most $n^2$ triangles. Another standard consequence of regularity is that for any typical $x\in X$ (that is, with $\deg(x,Y)$ and $\deg(x,Z)$ in the range $(d\pm\varepsilon)n$), the pair $\big(N(x)\cap Y,N(x)\cap Z\big)$ \emph{inherits regularity} and is $\big(\frac{\varepsilon}{d-\varepsilon}, d\big)$-regular. We now consider $N(x)\cap Y$ and note that similarly, for all but at most \[ \tfrac{2\varepsilon}{d-\varepsilon}\abs{N(x)\cap Y}\le 2\varepsilon\tfrac{d+\varepsilon}{d-\varepsilon}n\le 6\varepsilon n \] vertices $y\in N(x)\cap Y$, the vertices $x$ and $y$ have \[ \big(d\pm\tfrac{\varepsilon}{d-\varepsilon}\big)\abs{N(x)\cap Z}=\big(d\pm\tfrac{\varepsilon}{d-\varepsilon}\big)(d\pm\varepsilon)n \] common neighbours in $Z$ (and so are in that many triangles); and the atypical vertices in $N(x)\cap Y$ contribute at most $(d+\varepsilon)n\le n$ triangles each.
Pulling together these bounds, there are at least zero and at most $4\varepsilon n^3$ triangles using an atypical $x\in X$, and at most $6\varepsilon n^3$ using a typical $x$ but an atypical $y\in Y$. We also have the lower bound \[ (1-4\varepsilon)n\cdot \big(1-\tfrac{2\varepsilon}{d-\varepsilon}\big)(d-\varepsilon)n \cdot \big(d-\tfrac{\varepsilon}{d-\varepsilon}\big)(d-\varepsilon)n\,, \] and the upper bound \[ n\cdot (d+\varepsilon)n \cdot \big(d+\tfrac{\varepsilon}{d-\varepsilon}\big)(d+\varepsilon)n\,, \] on the number of triangles using typical $x$ and $y$. Given $\varepsilon<d/2$ we can bound $\varepsilon/(d-\varepsilon)\le 2\varepsilon/d$, and hence the above sketch indeed shows that there are $(d^3\pm\xi)n^3$ triangles with $\xi$ polynomial in $\varepsilon$ and $\varepsilon/d$. For a more general version where one counts copies of some small graph $H$, one considers embedding $H$ into $G$ one vertex at a time, keeping track at each step of the number of ways to extend the next embedding. For the general argument we do two things: argue that most ways of continuing the embedding are `typical', and that `atypical' choices do not contribute much. More generally, `typical' simply means that neighbourhoods (and common neighbourhoods) of embedded vertices are about the size one would expect from the densities of the regular pairs, and that most vertices are typical is a simple consequence of regularity; and the atypical choices do not contribute much because they are so few. Our methods for sparse hypergraphs follow the same lines as this sketch, but some of the steps are significantly more involved. Adapting the sketch to sparse graphs requires some similar modifications (see~\cite{ABHKPblow}), and our terminology is chosen to follow these developments, but hypergraphs present their own technical challenges.
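As a quick numerical sanity check (ours, not part of the proof): each pair in a random tripartite graph is, with high probability, $\varepsilon$-regular of density close to $d$, so a brute-force triangle count should land close to $d^3n^3$.

```python
import random

random.seed(0)
n, d = 40, 0.5
# Three independent random bipartite graphs between X, Y, Z = range(n).
exy = {(x, y) for x in range(n) for y in range(n) if random.random() < d}
exz = {(x, z) for x in range(n) for z in range(n) if random.random() < d}
eyz = {(y, z) for y in range(n) for z in range(n) if random.random() < d}

# Brute-force count of transversal triangles; here d^3 * n^3 = 8000.
triangles = sum(1 for x in range(n) for y in range(n) for z in range(n)
                if (x, y) in exy and (x, z) in exz and (y, z) in eyz)
```

Concentration of the triangle count means the observed value should differ from $8000$ by only a few percent.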
If we are given that $G$ has the THC property, then `every choice of image made so far in the partial embedding is typical' simply means that we always choose a vertex whose link gives a THC graph. By definition there are few vertices which are atypical. This is technically easy to work with (indeed, THC was designed to make this so). If we are using GPEs, then our definition of a GPE (Definition~\ref{def:gpe}) is what it means for every choice of image made so far in the partial embedding to have been `typical'. The fact that most ways of continuing the embedding are typical is our Lemma~\ref{lem:onestep}, and the control of embeddings using atypical vertices appears in Lemma~\ref{lem:GPEcount}. This requires technically more work---mainly because the definitions are complicated---but has the advantage that one has access to the properties of the majorising graph $\Gamma$, which (for example if $\Gamma$ is a random graph) can be useful. We finally point out that, in contrast to the above sketch where inheritance of regularity follows immediately from the definition, in sparse (hyper)graphs inheritance is not automatic. Our regularity inheritance lemma (Lemma~\ref{lem:k-inherit}) requires careful applications of the Cauchy--Schwarz inequality, and is crucial for proving Lemma~\ref{lem:onestep}. \section{Counting and embedding in \texorpdfstring{$\Gamma$}{Gamma}}\label{sec:Gacount} In this section we prove Theorem~\ref{thm:GaTHC} and Lemma~\ref{lem:randomTHC}. Both results give sufficient conditions for a $k$-graph $\Gamma$ to be a THC-graph in a way that is compatible with the hypotheses of our counting and embedding results (Theorems~\ref{thm:counting} and~\ref{thm:embedding}). \subsection{Counting implies THC} To prove Theorem~\ref{thm:GaTHC} we show the following. Suppose $J$ is a vertex set with a linear order. For convenience we will usually take $J=[m]$ with the natural order.
Suppose $\Gamma$ is an $[m]$-partite $k$-graph with density $k$-graph $\mathcal{D}$. Suppose that $H$ is a $k$-complex on $[m]$ with $\Delta(H^{(2)})\le\Delta$, and that $\Gamma$ is identically equal to $1$ on $V_e$ for each $e\not\in H$. Suppose that $c^*\ge 2\Delta+2$ and $\varepsilon^*>0$ are given, and that counts of all small (depending on $c^*$) subgraphs in $\Gamma$ match those in $\mathcal{D}$ to high accuracy (depending on $\varepsilon^*$ and $c^*$). Then $\Gamma$ is a $(c^*,\varepsilon^*)$-THC graph. A difficulty with proving this is that the definition of $(c^*,\varepsilon^*)$-THC is recursive; it is not easy to verify whether a given graph satisfies the definition. So we will begin by defining a graph with some additional structure which helps us to perform the verification. We say a set $X\subseteq [m]$ is a \emph{counting place} if $H^{(2)}[X]$ is a connected graph with at most $c^*$ vertices. We claim that it is enough to know accurate counts in $\Gamma$ of small $X$-partite graphs for all counting places $X$. Making this precise, we have \begin{proposition}\label{prop:countplace} Given $k$, $c^*$, and $\varepsilon>0$, let $J$ be a vertex set. Suppose that $\Gamma$ is a $J$-partite $k$-graph, and $\mathcal{D}$ is a density $k$-graph on $J$. Suppose that $H$ is a $k$-complex on $J$, suppose that if $e\not\in E(H)$ then $\Gamma$ is identically $1$ on $V_e$ and $d(e)=1$, and suppose that for each counting place $X\subseteq J$ and each $X$-partite $k$-complex $F$ with at most $c^*$ vertices we have \[\Gamma(F)=(1\pm\varepsilon)^{v(F)}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F)\,.\] Then for each $X\subseteq J$ with $|X|\le c^*$ and each $X$-partite $k$-complex $F$ with at most $c^*$ vertices we have \[\Gamma(F)=(1\pm\varepsilon)^{v(F)}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F)\,.\] \end{proposition} \begin{proof} Given $X$ and $F$, we are done if $X$ is a counting place, so suppose that $X$ is not a counting place.
Then $H^{(2)}[X]$ has components on vertex sets $X_1,\dots,X_\ell$; for each $i$, let the $k$-complex $F_i$ consist of the $X_i$-partite edges of $F$. By definition each $X_i$ is a counting place, so we have for each $i$ \[\Gamma(F_i)=(1\pm\varepsilon)^{v(F_i)}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F_i)\,.\] Now since edges of $\Gamma$ which are on sets $V_e$ for $e\not\in H$ are identically $1$, and since $H$ is a complex and hence down-closed, we have \[\Gamma(F)=\gamma(\emptyset)^{1-\ell}\prod_{i=1}^\ell\Gamma(F_i)=\gamma(\emptyset)^{1-\ell}\prod_{i=1}^\ell\Big((1\pm\varepsilon)^{v(F_i)}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F_i)\Big)=(1\pm\varepsilon)^{v(F)}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F)\,,\] as desired. \end{proof} Given a counting place $X$, and some $t<\min(X)$, we say a set $Y$ is \emph{of interest for $(t,X)$} if $Y$ consists of all vertices $1\le y\le t$ such that there exists $x\in X$ with $xy\in H^{(2)}$. We say $Y$ is \emph{of interest} if there exist $(t,X)$ such that $Y$ is of interest for $(t,X)$. Given a set $Y$ of interest, we let \begin{align*} s_H(Y)&:=\big|\{z\in[m]:zy\in H^{(2)}\text{ for some }y\in Y \text{ with }y>z\}\big|+|Y|\,,\text{ and}\\ p_H(Y)&:=\big|\{Y':Y\subseteqneq Y'\,,\,Y'\text{ is of interest, and }\max(Y')=\max(Y)\}\big|\,. \end{align*} When $H$ is clear from the context we omit it. Finally, given a set $Y$, we say $e=\{e_1,\dots,e_{|Y|}\}\in V_Y$ is a \emph{fail set for $Y$} if there is some $(t,X)$ such that $Y$ is of interest for $(t,X)$ and such that the link graph $\Gamma_e$ obtained by taking links of $\Gamma$ with successively $e_1,e_2,\dots,e_{|Y|}$, and the similarly defined density graph $\mathcal{D}_Y$, satisfy \[\Gamma_e(F)\neq(1\pm\varepsilon)\tfrac{\gamma_e(\emptyset)}{d_Y(\emptyset)}\mathcal{D}_Y(F)\,,\] for some $X$-partite $k$-complex $F$ with at most $c^*$ vertices.
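Counting places, sets of interest, and the quantities $s_H$ and $p_H$ are purely combinatorial objects determined by $H^{(2)}$ and $c^*$, so the definitions above can be made concrete in a few lines of code. The following sketch computes them for a hypothetical toy example in which $H^{(2)}$ is the path $1$-$2$-$3$-$4$-$5$ on $[5]$ with $c^*=3$; empty sets $Y$ are discarded for simplicity.

```python
from itertools import combinations

# Toy example (hypothetical, for illustration only): H^(2) is the path
# 1-2-3-4-5 on [m] = {1,...,5}, with c* = 3.
m, cstar = 5, 3
adj = {i: set() for i in range(1, m + 1)}
for a, b in [(1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[a].add(b)
    adj[b].add(a)

def connected(X):
    # Is H^(2)[X] connected?
    X = set(X)
    seen, stack = set(), [min(X)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend((adj[v] & X) - seen)
    return seen == X

# Counting places: X with H^(2)[X] connected and at most c* vertices.
places = [set(X) for r in range(1, cstar + 1)
          for X in combinations(range(1, m + 1), r) if connected(X)]

# Y is of interest for (t, X) if it consists of all y <= t with a
# neighbour in X; here we record the nonempty sets of interest.
interesting = set()
for X in places:
    for t in range(1, min(X)):
        Y = frozenset(y for y in range(1, t + 1) if adj[y] & X)
        if Y:
            interesting.add(Y)

def s(Y):
    # s_H(Y): back-neighbours of Y, plus |Y| itself.
    ext = {z for y in Y for z in adj[y] if z < y}
    return len(ext) + len(Y)

def p(Y):
    # p_H(Y): proper supersets of interest with the same maximum.
    return sum(1 for Yp in interesting if Y < Yp and max(Yp) == max(Y))
```

For this path example every set of interest is a singleton (for instance $\{2\}$ is of interest for $(2,\{3\})$), with $s(\{2\})=2$ and $p(\{2\})=0$.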
We \emph{decorate} the $[m]$-partite $k$-graph $\Gamma$ by choosing (possibly empty) sets $B^Y$ for each $Y$ which is of interest, where each $B^Y$ consists of edges in $V_{Y'}$ for some $Y'$ such that either $1\le |Y'|\le\Delta$ or $Y'=Y$. We will think of the $B^Y$ as \emph{bad sets}; we will say sets in $B^Y$ of size $|Y|$ are \emph{large bad sets} and the rest are \emph{small bad sets}. We say a decorated graph $\Gamma$ with bad sets $B^Y$ for $Y$ of interest, and density graph $\mathcal{D}$, is \emph{$(H,\varepsilon,\delta,c^*)$-safe} if the following are true. \begin{enumerate}[label=(S\arabic*)] \item\label{safe:count} for each counting place $X$ and each $X$-partite $k$-complex $F$ with $v(F)\le c^*$ we have \[\Gamma(F)=(1\pm\varepsilon)\tfrac{\gamma(\emptyset)}{d(\emptyset)}\mathcal{D}(F)\,,\] \item\label{safe:fail} for each $Y$ of interest and each fail set $e$ for $Y$, there is a set in $B^Y$ contained in $e$, and \item\label{safe:notbig} for each $Y$ of interest we have \begin{equation}\label{eq:safe:notbig} \sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in V_Z\cap B^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Z|\prod_{Z'\subseteq Z}d(Z')}\le\delta^{s(Y)}2^{-p(Y)}\,. \end{equation} \end{enumerate} If $\varepsilon$ and $\delta$ are chosen appropriately, and $c^*\ge 2\Delta(H)+2$, then an $(H,\varepsilon,\delta,c^*)$-safe graph will turn out to be a $(c^*,\varepsilon^*)$-THC graph. The next lemma formalises this. \begin{lemma}\label{lem:safe} Given integers $\Delta$ and $c^*$ such that $c^*\ge \Delta+2$, and $\delta>0$, if $\varepsilon>0$ is sufficiently small the following holds. Given a $k$-complex $H$ on $[m]$ such that $H^{(2)}$ has maximum degree at most $\Delta$, let $\Gamma$ be an $[m]$-partite $k$-graph with density graph $\mathcal{D}$. Suppose that whenever $e\not\in E(H)$ we have $\Gamma[V_e]$ identically equal to one.
Suppose that for each $Y$ of interest we are given a set $B^Y$, decorating $\Gamma$, such that the decorated graph is $(H,\varepsilon,\delta,c^*)$-safe. Let $H'$ be the $k$-complex on $\{2,3,\dots,m\}$ with edges $E(H')=\{e\in E(H):1\not\in e\}\cup\{e\setminus\{1\}:e\in E(H),1\in e\}$. Given $v\in V_1$, let $\Gamma_v$ be the link graph of $v$, and let $\mathcal{D}_1$ be the link graph of $1$ in $\mathcal{D}$. For all but at most $10\cdot 2^{(c^*+3)\Delta^{c^*+3}}\delta d(1)|V_1|$ total weight of vertices $v$ in $V_1$, there is a decoration of $\Gamma_v$ with respect to which $\Gamma_v$ is $(H',\varepsilon,\delta,c^*)$-safe with density graph $\mathcal{D}_1$. \end{lemma} The reader might wonder at this point why we do not combine~\ref{safe:fail} and~\ref{safe:notbig} (and avoid having decorations at all) by simply summing over fail sets for $Y$ rather than over sets in $B^Y$. The reason is that we are not able to show the required bound~\ref{safe:notbig} typically continues to hold in $\Gamma_v$; we have trouble if exceptionally many fail sets all contain one subset. However we can solve this problem by declaring such a subset to be itself bad, as we then do not have to control the fail sets which contain it. \begin{proof}[Proof of Lemma~\ref{lem:safe}] Given $\Delta$, $c^*\ge\Delta+2$, and $\delta>0$, we require $\varepsilon>0$ to be small enough that \[\delta^{\Delta^2c^*}2^{-2c^*\Delta^{c^*+1}}>2\Delta(c^*\Delta)^\Delta(4\Delta+8)\varepsilon\,.\] Since any $Y$ of interest in $H$ is a collection of neighbours in $H^{(2)}$ of some counting place $X$, which by definition is a set of size at most $c^*$, it follows that $|Y|\le\Delta c^*$ and that $s(Y)\le\Delta^2 c^*$. Since $X$ is connected in $H^{(2)}$, it follows that any two vertices of $Y$ are at distance at most $c^*+1$ in $H^{(2)}$, so the number of sets of interest containing any given vertex of $H$ is at most $1+\Delta+\dots+\Delta^{c^*+1}\le 2c^*\Delta^{c^*+1}$. In particular $p(Y)\le 2c^*\Delta^{c^*+1}$.
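The crude combinatorial bounds just used are easy to check mechanically; as a throwaway sanity check (not part of the proof), the following verifies the geometric-sum bound $1+\Delta+\dots+\Delta^{c^*+1}\le 2c^*\Delta^{c^*+1}$ over a small range of parameters.

```python
# Verify 1 + D + ... + D^(c*+1) <= 2 c* D^(c*+1) for small Delta and c*.
for Delta in range(1, 8):
    for cstar in range(2, 12):
        ball = sum(Delta ** i for i in range(cstar + 2))
        assert ball <= 2 * cstar * Delta ** (cstar + 1), (Delta, cstar)
print("geometric-sum bound verified for Delta < 8 and 2 <= c* < 12")
```

For $\Delta\ge 2$ the sum is at most $\tfrac{\Delta}{\Delta-1}\Delta^{c^*+1}\le 2\Delta^{c^*+1}$, and for $\Delta=1$ it is $c^*+2\le 2c^*$, so the check passes with room to spare.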
It follows that \[\delta^{s(Y)}2^{-p(Y)}>2\Delta|Y|^\Delta(4\Delta+8)\varepsilon\] holds for any $Y$ of interest in $H$. We begin by altering the sets $B^Y$ in order to avoid exceptionally many large bad sets all containing a small subset. Specifically, suppose $Y$ is of interest, and suppose $1$ is a neighbour in $H^{(2)}$ of at least one vertex of $Y$. We let $\bar{B}^Y$ be obtained as follows. We start with $\bar{B}^Y$ empty. Let $Z=Y\cap N_H^{(2)}(1)$. Whenever $e\in V_Z$ satisfies \[\frac{1}{|V_{Y\setminus Z}|\prod_{W\subseteq Y:W\not\subseteq Z}d(W)}\sum_{\substack{e'\in V_{Y}\cap B^Y\\e\subseteq e'}}\prod_{f\subseteq e':f\not\subseteq e}\gamma(f) > 1\] we add $e$ to $\bar{B}^Y$. We then add all sets in $B^Y$ which are not supersets of any set in $\bar{B}^Y$. By construction, every set of $B^Y$ has a subset in $\bar{B}^Y$, so by~\ref{safe:fail} every fail set for $Y$ has a subset in $\bar{B}^Y$. Observe that by~\ref{safe:notbig}, we have \begin{equation}\label{eq:barnotbig} \sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in V_Z\cap \bar{B}^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Z|\prod_{Z'\subseteq Z}d(Z')}\le\delta^{s(Y)}2^{-p(Y)}\,. \end{equation} Given $v\in V_1$ and any $Y$ of interest, let \[\bar{B}^Y_v:=\big\{e\in \bar{B}^Y:e\cap V_1=\emptyset\big\}\cup\big\{e\setminus\{v\}:e\in\bar{B}^Y,v\in e\big\}\,.\] If $v\in V_1$ has $\gamma(v)=0$ we say $v$ is disallowed for $Y$. If $\gamma(v)>0$, we say that $v$ is \emph{disallowed for $Y$} if either $1\in Y$, or $1y\in H^{(2)}$ for some $y\in Y$ and we have \begin{equation}\label{eq:THC:dis} \sum_{\substack{Z\subseteq Y\\1\not\in Z}}\frac{d(\emptyset)d(1)\sum_{e\in V_Z\cap \bar{B}^Y_v}\prod_{e'\subseteq e\cup\{v\}}\gamma(e')}{\gamma(\emptyset)\gamma(v)|V_Z|\prod_{Z'\subseteq Z\cup\{1\}}d(Z')}>\delta^{s(Y)-1}2^{-p(Y)-1}\,.
\end{equation} \begin{claim}\label{cl:fewdis} For each $Y$ such that either $1\in Y$ or $1y\in H^{(2)}$ for some $y\in Y$, the set of vertices $v\in V_1$ which are disallowed for $Y$ has total weight at most $10\delta d(1)|V_1|$. \end{claim} \begin{proof} We define \[S_1:=\big\{e\in\bar{B}^Y:\abs{e\cap V_1}>0\big\}\quad\text{and}\quad S_2:=\big\{e\cup\{v\}:e\in\bar{B}^Y,v\in V_1, e\cap V_1=\emptyset\big\}\,.\] We first aim to estimate \begin{equation}\label{eq:fewdis:bigsum} \sum_{W\subseteq Y\cup\{1\}}\frac{d(\emptyset)\sum_{e\in V_W\cap (S_1\cup S_2)}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_W|\prod_{W'\subseteq W}d(W')}\,. \end{equation} By~\eqref{eq:barnotbig}, the contribution to~\eqref{eq:fewdis:bigsum} made by edges $e\in S_1$ is at most $\delta^{s(Y)}2^{-p(Y)}$. It remains to estimate the contribution made by edges in $S_2$. These edges split up into the edges $S_L$ which come from large bad sets in $\bar{B}^Y$ and the edges $S_Z$ which come from small bad sets contained in $V_Z$ for some $Z\subseteq Y\setminus\{1\}$ of size at most $\Delta$. For a given $Z\subseteq Y\setminus\{1\}$ of size at most $\Delta$, we work as follows. First, we define three functions $\tilde{X},\tilde{Y},\tilde{W}$ from $V_Z$ to $\mathbb{R}^+_0$, as follows. We set \[\tilde{X}(e):=\prod_{e'\subseteq e}\gamma(e')\,,\quad\tilde{Y}(e):=\frac{1}{|V_1|}\sum_{v\in V_1}\prod_{e'\subseteq e}\gamma\big(e'\cup\{v\}\big)\quad\text{and}\quad\tilde{W}(e)=\begin{cases}1&\text{ if }e\in S_Z\\0&\text{ otherwise}\end{cases}\,.\] Letting $e$ be chosen uniformly at random in $V_Z$, these functions become random variables. Observe that $\Ex{\tilde{X}}$, $\Ex{\tilde{X}\tilde{Y}}$ and $\Ex{\tilde{X}\tilde{Y}^2}$ are, respectively, equal to $\Gamma(K_Z)$, $\Gamma(K_{Z\cup\{1\}})$ and $\Gamma(K_{Z,1,1})$, where the $k$-complex $K_{Z,1,1}$ is obtained from $K_{Z\cup\{1\}}$ by duplicating the vertex $1$ (with the $(Z\cup\{1\})$-partition in which both copies of $1$ are assigned to part $1$). 
Since these graphs have at most $\Delta+2$ vertices, by Proposition~\ref{prop:countplace} we have \begin{align*} \Ex{\tilde{X}}&=(1\pm\varepsilon)^{|Z|}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\prod_{Z'\subseteq Z}d(Z')\,,\\ \Ex{\tilde{X}\tilde{Y}}&=(1\pm\varepsilon)^{|Z|+1}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\prod_{Z'\subseteq Z}d(Z')d(Z'\cup\{1\})\,,\quad\text{and}\\ \Ex{\tilde{X}\tilde{Y}^2}&=(1\pm\varepsilon)^{|Z|+2}\tfrac{\gamma(\emptyset)}{d(\emptyset)}\prod_{Z'\subseteq Z}d(Z')d(Z'\cup\{1\})^2\,. \end{align*} Thus the conditions of Lemma~\ref{lem:ECSdist} are met, with $d_{\subref{lem:ECSdist}}:=\prod_{Z'\subseteq Z}d(Z'\cup\{1\})$ and with $\varepsilon_{\subref{lem:ECSdist}}:=(2\Delta+4)\varepsilon$. We conclude from Lemma~\ref{lem:ECSdist} that \[\Ex{\tilde{W}\tilde{X}\tilde{Y}}=\Big(1-\varepsilon_{\subref{lem:ECSdist}}\pm 2\sqrt{\tfrac{\varepsilon_{\subref{lem:ECSdist}}\Ex{\tilde{X}}}{\Ex{\tilde{W}\tilde{X}}}}\Big)\cdot d_{\subref{lem:ECSdist}}\Ex{\tilde{W}\tilde{X}}\le d_{\subref{lem:ECSdist}}\Ex{\tilde{W}\tilde{X}}+2\varepsilon_{\subref{lem:ECSdist}}d_{\subref{lem:ECSdist}}\Ex{\tilde{X}}\,.\] Substituting back the definitions, and using the above estimate for $\Ex{\tilde{X}}$, we have \[\frac{\sum_{e\in S_Z}\prod_{e'\subseteq e}\gamma(e')}{|V_{Z\cup\{1\}}|}\le \prod_{Z'\subseteq Z}d(Z'\cup\{1\})\cdot\sum_{e\in S_Z}\frac{\prod_{e'\subseteq e}\gamma(e')}{|V_Z|}+(4\Delta+8)\varepsilon\tfrac{\gamma(\emptyset)}{d(\emptyset)}\prod_{Z'\subseteq Z}d(Z')d(Z'\cup\{1\})\,,\] and rearranging we get \[\ \frac{d(\emptyset)\sum_{e\in S_Z}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_{Z\cup\{1\}}|\prod_{Z'\subseteq Z\cup\{1\}}d(Z')}\le \frac{d(\emptyset)\sum_{e\in V_Z\cap\bar{B}^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Z|\prod_{Z'\subseteq Z}d(Z')}+(4\Delta+8)\varepsilon\,.\] We can obtain a similar estimate for the large bad sets. Note that $S_L$ is non-empty only if $Y$ does not contain $\{1\}$. We let $Z$ be the neighbours in $H^{(2)}$ of $1$. 
We define $\tilde{X}$ and $\tilde{Y}$ exactly as above (with this $Z$), and for $e\in V_Z$ set \[\tilde{W}(e):=\frac{1}{|V_{Y\setminus Z}|\prod_{W\subseteq Y:W\not\subseteq Z}d(W)}\sum_{\substack{e'\in V_{Y}\cap B^Y\\e\subseteq e'}}\prod_{f\subseteq e':f\not\subseteq e}\gamma(f) \,.\] By definition of $\bar{B}^Y$, we have $0\le\tilde{W}(e)\le 1$ for each $e\in V_Z$, so that (by exactly the same calculation as above) we can apply Lemma~\ref{lem:ECSdist} to estimate $\Ex{\tilde{W}\tilde{X}\tilde{Y}}$. Observe that $d(Z')=\gamma(e')=1$ if $e'\in V_{Z'}$ and $Z'$ is a subset of $Y\cup\{1\}$ which is not contained in either $Z\cup\{1\}$ or $Y$, because such a $Z'$ must contain $1$ and some $y\in Y\setminus Z$; since $y\not\in Z$ we have $1y\not\in H^{(2)}$, so that $Z'\not\in H$. By the same calculation as above, and using this observation, we obtain \[\frac{d(\emptyset)\sum_{e\in S_L}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_{Y\cup\{1\}}|\prod_{Z'\subseteq Y\cup\{1\}}d(Z')}\le \frac{d(\emptyset)\sum_{e\in V_Y\cap\bar{B}^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Y|\prod_{Z'\subseteq Y\cup\{1\}}d(Z')}+(4\Delta+8)\varepsilon\,.\] Now summing these bounds for all the $S_Z$ and for $S_L$, we have \[\sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in S_Z}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_{Z\cup\{1\}}|\prod_{Z'\subseteq Z\cup\{1\}}d(Z')}\le \sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in V_Z\cap\bar{B}^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Z|\prod_{Z'\subseteq Z}d(Z')}+2\Delta|Y|^{\Delta}(4\Delta+8)\varepsilon\,,\] where the edges of $S_L$ are considered in the term $Z=Y$ and where we take $S_Z=\emptyset$ whenever $|Z|=0$ or $\Delta<|Z|<|Y|$ (so that the sum runs over fewer than $2\Delta|Y|^{\Delta}$ non-zero terms).
Using~\eqref{eq:barnotbig}, we can substitute $\delta^{s(Y)}2^{-p(Y)}$ as an upper bound for the sum on the right hand side, and we have $S_Z=V_{Z\cup\{1\}}\cap S_2$, so we obtain \[\sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in V_{Z\cup\{1\}}\cap S_2}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_{Z\cup\{1\}}|\prod_{Z'\subseteq Z\cup\{1\}}d(Z')}\le \delta^{s(Y)}2^{-p(Y)}+2\Delta|Y|^{\Delta}(4\Delta+8)\varepsilon\,.\] Putting this together with the already calculated contribution to~\eqref{eq:fewdis:bigsum} from $S_1$, we get \[ \sum_{W\subseteq Y\cup\{1\}}\frac{d(\emptyset)\sum_{e\in V_W\cap (S_1\cup S_2)}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_W|\prod_{W'\subseteq W}d(W')}\le 2\delta^{s(Y)}2^{-p(Y)}+2\Delta|Y|^{\Delta}(4\Delta+8)\varepsilon\le 3\delta^{s(Y)}2^{-p(Y)}\,, \] where the final inequality is by choice of $\varepsilon$. On the other hand, we can provide a lower bound on~\eqref{eq:fewdis:bigsum} by considering disallowed vertices for $Y$. By~\eqref{eq:THC:dis}, if $v\in V_1$ is disallowed for $Y$ then we have \[\sum_{\substack{Z\subseteq Y\\1\not\in Z}}\frac{d(\emptyset)\sum_{e\in V_Z\cap \bar{B}^Y_v}\prod_{e'\subseteq e\cup\{v\}}\gamma(e')}{\gamma(\emptyset)|V_{Z\cup\{1\}}|\prod_{Z'\subseteq Z\cup\{1\}}d(Z')}>\tfrac{\gamma(v)}{d(1)|V_1|}\delta^{s(Y)-1}2^{-p(Y)-1}\,,\] and hence \[\sum_{\substack{W\subseteq Y\cup \{1\}\\1\in W}}\frac{d(\emptyset)\sum_{e\in V_{W\setminus\{1\}}\cap \bar{B}^Y_v}\prod_{e'\subseteq e\cup\{v\}}\gamma(e')}{\gamma(\emptyset)|V_{W}|\prod_{W'\subseteq W}d(W')}>\tfrac{\gamma(v)}{d(1)|V_1|}\delta^{s(Y)-1}2^{-p(Y)-1}\,.\] Now for a given $W\subseteq Y\cup\{1\}$ such that $1\in W$, if $e\in V_{W\setminus\{1\}}$ then $e$ is in $\bar{B}^Y_v$ if and only if $e\cup\{v\}$ is in $S_1\cup S_2$. So the left hand side of the last equation is part of~\eqref{eq:fewdis:bigsum}, and furthermore as $v$ varies over $V_1$ we obtain all the terms in~\eqref{eq:fewdis:bigsum} exactly once.
In particular, if the total weight of vertices in $V_1$ which are disallowed for $Y$ exceeds $10\delta d(1)|V_1|$, then summing over the disallowed vertices we get \[\tfrac{10\delta d(1)|V_1|}{d(1)|V_1|}\delta^{s(Y)-1}2^{-p(Y)-1}<3\delta^{s(Y)}2^{-p(Y)}\,,\] which is a contradiction. We conclude that the total weight of vertices in $V_1$ which are disallowed for $Y$ is at most $10\delta d(1)|V_1|$, as desired. \end{proof} Now suppose $v\in V_1$ is not disallowed for any $Y$ of interest in $H$. Let $Y\subseteq V(H')$ be of interest in $H'$. Observe that at least one of $Y$ and $Y\cup\{1\}$ must be of interest in $H$. We define \[\hat{B}^{Y}:=\begin{cases} \bar{B}^{Y}_v\quad&\text{if $Y$ is of interest in $H$ and $Y\cup\{1\}$ is not}\,,\\ \bar{B}^{Y\cup\{1\}}_v\quad&\text{if $Y\cup\{1\}$ is of interest in $H$ and $Y$ is not}\,,\text{ and}\\ \bar{B}^{Y}_v\cup\bar{B}^{Y\cup\{1\}}_v\quad&\text{if $Y$ and $Y\cup\{1\}$ are of interest in $H$}\,.\end{cases}\] We claim that $\Gamma_v$, with density graph $\mathcal{D}_1$, and decorations $\hat{B}^Y$ for each $Y\subseteq V(H')$ of interest in $H'$, is $(H',\varepsilon,\delta,c^*)$-safe. To verify~\ref{safe:count}, let $X$ be a counting place in $H'$. By definition $X$ is also a counting place in $H$. If $\{1\}$ is not of interest for $(1,X)$, then $\Gamma_v[V_X]=\Gamma[V_X]$ and $\mathcal{D}_1[X]=\mathcal{D}[X]$, so~\ref{safe:count} holds for $X$. If $\{1\}$ is of interest for $(1,X)$, then considering the (only) $Z=\emptyset$ term of~\eqref{eq:THC:dis}, we see that if $\emptyset\in \bar{B}^{\{1\}}_v$ then the (only) term on the left hand side of~\eqref{eq:THC:dis} evaluates to $1$. Since $p(\{1\})=0$, we have $\delta^{s(\{1\})}2^{-p(\{1\})-1}\le\tfrac12<1$, which contradicts our assumption that $v$ is not disallowed for $\{1\}$. It follows that $\emptyset\not\in \bar{B}^{\{1\}}_v$, so $\{v\}\not\in\bar{B}^{\{1\}}$. In particular $\{v\}$ is not a fail set for $\{1\}$, so by definition~\ref{safe:count} holds for $X$.
To check~\ref{safe:fail}, let $Y$ be of interest in $H'$ and let $e$ be a fail set in $\Gamma_v$ for $Y$. Then either $\{v\}\cup e$ is a fail set in $\Gamma$ for $\{1\}\cup Y$ which is of interest in $H$, or $e$ is a fail set in $\Gamma$ for $Y$ which is of interest in $H$, or both. In any case, either $\bar{B}^{\{1\}\cup Y}$ contains a subset of $\{v\}\cup e$, in which case $\hat{B}^Y$ contains a subset of $e$, or $\bar{B}^Y$ contains a subset of $e$, in which case the same subset is in $\hat{B}^Y$. Finally, consider~\ref{safe:notbig}. Given $Y$ which is of interest in $H'$, there are three cases to consider. To begin with, suppose $Y$ is of interest in $H$ but $Y\cup\{1\}$ is not. If $1$ is not adjacent to any member of $Y$, then neither side of~\eqref{eq:safe:notbig} changes when we change $\Gamma$ to $\Gamma_v$ and $\mathcal{D}$ to $\mathcal{D}_1$ (Note that the quantities $\gamma(\emptyset)$ and $d(\emptyset)$, which are not the same as $\gamma_v(\emptyset)$ and $d_1(\emptyset)$ respectively, both cancel out), so we are done. If $1$ is adjacent to some vertices in $Y$, then by~\eqref{eq:THC:dis}, since $v$ is not disallowed,~\eqref{eq:safe:notbig} holds with a factor of $2$ to spare: the left hand side is exactly the left hand side of~\eqref{eq:THC:dis}, while on the right hand side we have $s_H(Y)=s_{H'}(Y)+1$ and $p_H(Y)=p_{H'}(Y)$. Now suppose $\{1\}\cup Y$ is of interest in $H$ but $Y$ is not. Then we have $s_{H'}(Y)=s_H(Y)-1$ and $p_{H'}(Y)=p_H(Y)$, so as above, by~\eqref{eq:THC:dis} we see that~\eqref{eq:safe:notbig} holds with a factor of $2$ to spare. Finally, suppose both $Y$ and $\{1\}\cup Y$ are of interest in $H$. Then $p_{H'}(Y)=p_H(Y)-1$, so whether or not $1$ is adjacent to a member of $Y$, the contribution of sets in $\bar{B}^Y$ to the left hand side of~\eqref{eq:safe:notbig} is by~\eqref{eq:THC:dis} at most half of the right hand side bound. 
Furthermore, exactly as above, the contribution of the sets in $\bar{B}^{Y\cup\{1\}}$ is at most half of the right hand side; so~\eqref{eq:safe:notbig} holds as desired. To complete the proof, we just need to show that the total weight of disallowed vertices in $V_1$ is small. Given $Y$ which is of interest in $H$, there are two possibilities. First, $1$ is neither in $Y$ nor adjacent to any member of $Y$ in $H^{(2)}$. In this case only vertices $v\in V_1$ with $\gamma(v)=0$ are disallowed for $Y$, so the total weight of vertices in $V_1$ disallowed for $Y$ is $0$. Second, $1$ is either in or adjacent to $Y$. Since $Y$ is of interest, in particular there is some counting place $X$ such that $Y$ is of interest for $\big(\max(Y),X\big)$. Now $X$ is connected in $H^{(2)}$ and has at most $c^*$ vertices, so it has diameter at most $c^*-1$. All members of $Y$ are adjacent to at least one member of $X$ in $H^{(2)}$, and $1$ is either in $Y$ or adjacent in $H^{(2)}$ to at least one member of $Y$. It follows that all members of $Y$ are at distance at most $c^*+3$ in $H^{(2)}$ from $1$. There are at most $(c^*+3)\Delta^{c^*+3}$ such vertices, so the number of possibilities for $Y$ is at most $2^{(c^*+3)\Delta^{c^*+3}}$. By Claim~\ref{cl:fewdis}, for each such $Y$ the total weight of disallowed vertices is at most $10\delta d(1)|V_1|$, giving the claimed bound. \end{proof} With Lemma~\ref{lem:safe} it is now easy to prove Theorem~\ref{thm:GaTHC}. We simply need to show that if $\Gamma$ is a graph in which we can count graphs (on rather more than $c^*$ vertices) to sufficiently high accuracy, then there is a decoration of $\Gamma$ which is safe. The decoration we use is trivial---for each $Y$ of interest we let $B^Y$ be the fail sets for $Y$. The only condition which is not trivially satisfied is~\ref{safe:notbig}, and we show that this condition holds by another application of the Cauchy--Schwarz inequality.
\begin{proof}[Proof of Theorem~\ref{thm:GaTHC}] It suffices to give a decoration for $\Gamma$ which yields an $(H,\varepsilon, \delta, c^*)$-safe graph where \begin{enumerate} \item $\varepsilon\le\eta'$ so that \ref{safe:count} gives the required counting for \ref{thc:count} in any such safe graph, \item $\delta^{\Delta^2c^*}2^{-2c^*\Delta^{c^*+1}}>2\Delta(c^*\Delta)^\Delta(4\Delta+8)\varepsilon$ so that we may apply Lemma~\ref{lem:safe}, and \item $20\cdot 2^{(c^*+3)\Delta^{c^*+3}}\delta \le \eta'$ so that the lemma guarantees that the total weight of vertices in $V_1$ whose link we do not know how to decorate to form another $(H,\varepsilon, \delta, c^*)$-safe graph is at most a fraction $\eta'$ of the total weight in $V_1$. \end{enumerate} To verify the third condition note that with $\varepsilon<\eta'$ and~\ref{safe:count} we have $(1-\eta')d(1) \le \vnorm{V_1}{\Gamma}$, and hence it suffices to have \[ 10\cdot 2^{(c^*+3)\Delta^{c^*+3}}\delta d(1)\abs{V_1} \le \frac{\eta'}{2}d(1)\abs{V_1} < \eta'(1-\eta')d(1)\abs{V_1} \le \eta'\vnorm{V_1}{\Gamma}\cdot |V_1|\,. \] It is easy to see that this relationship between the constants is possible to satisfy; we have $\delta\ll\eta',\,\Delta,\,c^*$ so can pick $\delta$ small enough first, and then $\varepsilon\ll\delta,\,\eta',\,\Delta,\,c^*$ can be chosen small enough. For the remainder of the proof, fix $\delta$ and $\varepsilon$ such that the above inequalities hold. We also state now that taking $\eta$ small enough that $3\eta^{1/4}\le \varepsilon$ and \[ 2^{4+\Delta^2+2^{1+c^*}}(c^*)^{c^*} \eta^{1/4} \le \delta^{\Delta^2c^*}2^{-2c^*\Delta^{c^*+1}} \] suffices. We now construct the decoration. The statement of the theorem gives us a density graph, hence we need only supply the sets $B^Y$ for each $Y$ of interest. We show that the trivial decoration suffices, i.e.\ for each $Y\subseteq J$ of interest, let $B^Y\subseteq V_Y$ be the fail sets for $Y$.
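That the constant hierarchy $\eta\ll\varepsilon\ll\delta\ll\eta',\Delta,c^*$ can indeed be satisfied is easy to check numerically for concrete parameters. The sketch below (with the hypothetical sample choices $\Delta=1$, $c^*=4$, $\eta'=0.1$) verifies all of the displayed inequalities at once.

```python
# One concrete instantiation of the constant hierarchy in the proof:
# Delta = 1, c* = 4, eta' = 0.1 are sample choices, not from the paper.
Delta, cstar, etap = 1, 4, 0.1
delta = 1e-5    # small enough for 20 * 2^((c*+3) Delta^(c*+3)) * delta <= eta'
eps = 1e-25     # small enough for the Lemma lem:safe condition, and eps <= eta'
eta = 1e-150    # small enough for both eta-conditions

lhs_safe = delta ** (Delta**2 * cstar) * 2.0 ** (-2 * cstar * Delta ** (cstar + 1))

assert eps <= etap
assert lhs_safe > 2 * Delta * (cstar * Delta) ** Delta * (4 * Delta + 8) * eps
assert 20 * 2.0 ** ((cstar + 3) * Delta ** (cstar + 3)) * delta <= etap
assert 3 * eta ** 0.25 <= eps
assert 2.0 ** (4 + Delta**2 + 2 ** (1 + cstar)) * cstar**cstar * eta ** 0.25 <= lhs_safe
print("constant hierarchy satisfied for this sample instantiation")
```

The same pattern (pick $\delta$, then $\varepsilon$, then $\eta$, each depending only on the previously fixed constants) works for any $\Delta$, $c^*$ and $\eta'$.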
Given this decoration, the fact that $\eta\le\varepsilon$ and the hypotheses of the theorem give conditions~\ref{safe:count} and~\ref{safe:fail} immediately; it remains to check that~\ref{safe:notbig} holds. Given our trivial definition of $B^Y$, note that the only $Z\subseteq Y$ which can contribute to the sum in~\eqref{eq:safe:notbig} is $Y$ itself, hence it suffices to show that for each $Y$ of interest, we have \begin{equation}\label{eq:safe:notbig:trivial} \frac{d(\emptyset)\sum_{e\in B^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Y|\prod_{Z'\subseteq Y}d(Z')}\le\delta^{s(Y)}2^{-p(Y)}\,. \end{equation} It is a simple application of Corollary~\ref{cor:ECSconc} to bound the contribution to the left-hand side of~\eqref{eq:safe:notbig:trivial} that arises from all fail sets for a particular $X$ and $F$, which we now give. Given a fixed counting place $X$, $t<\min(X)$, a set $Y$ of interest for $(t,X)$, and an $X$-partite $k$-complex $F$ on at most $c^*$ vertices, let $B^{Y,X,F}\subseteq B^Y$ be the fail sets of $Y$ that fail because \[ \Gamma_e(F)\neq(1\pm\varepsilon)\tfrac{\gamma_e(\emptyset)}{d_Y(\emptyset)}\mathcal{D}_Y(F) \] holds. Then $B^Y=\bigcup_{X,F}B^{Y,X,F}$ where the union is over all $X$ such that there exists $t<\min(X)$ such that $Y$ is of interest for $(t,X)$ and all $X$-partite $k$-complexes $F$ on at most $c^*$ vertices. Let $\Gamma^{Y,X,F}$ be equal to $\Gamma$ except on $V_Y$ where we set $\gamma^{Y,X,F}(e)=\gamma(e)\indicator{e\in B^{Y,X,F}}$. Now the left-hand side of~\eqref{eq:safe:notbig:trivial} is at most \begin{equation}\label{eq:lhs:XFsum} \sum_{X,F}\frac{d(\emptyset)}{\gamma(\emptyset)} \frac{\Gamma^{Y,X,F}(H[Y])}{\mathcal{D}(H[Y])}\,, \end{equation} and we bound each term in the sum. Consider the $k$-complex $2F$ formed of two vertex-disjoint copies of $F$, and let $\pi:V(2F)\to X$ be the projection from vertices of $2F$ to their part.
We can view $F$ as a subgraph of $2F$ hence $\pi$ also serves as the analogous projection for $F$, and we also extend $\pi$ to be the identity function on $Y$. Let $F'$ and $F''$ be obtained from $F$ and $2F$ respectively by adding the vertices $Y$ and any edge $f$ such that $\pi(f)\in H[X\cup Y]$. Note that $v(F')\le v(F'')\le 2c^*+\abs{Y} \le (\Delta+2)c^*$. Define the functions $\tilde{X},\tilde{Y}:V_Y\to \mathbb{R}_{\ge0}$ by \begin{align*} \tilde{X}(x_Y)&= \prod_{e\subseteq Y}\gamma(x_e) \,,\quad\text{and}\\ \tilde{Y}(x_Y)&=\Ex[\Big]{\prod_{e\in F,\, e\not\subseteq Y}\gamma(x_e)}[x_j\in V_{\pi(j)} \text{ for } j\in V(F)\setminus Y]\,, \end{align*} which become random variables when $x_Y\in V_Y$ is chosen uniformly at random, and let $d\subref{cor:ECSconc} := \mathcal{D}(F')/\mathcal{D}(H[Y])$. Then by the hypotheses of the theorem we have \begin{align*} \Ex{\tilde{X}} &= \Gamma(H[Y]) = (1\pm\eta)\mathcal{D}(H[Y])\,,\\ \Ex{\tilde{X}\tilde{Y}} &= \Gamma(F') = (1\pm\eta)\mathcal{D}(F') = (1\pm 4\eta)d\subref{cor:ECSconc}\cdot\Ex{\tilde{X}}\,,\\ \Ex{\tilde{X}\tilde{Y}^2} &= \Gamma(F'') = (1\pm\eta)\mathcal{D}(F'') = (1\pm 4\eta)d\subref{cor:ECSconc}^2\cdot\Ex{\tilde{X}}\,, \end{align*} and hence by Corollary~\ref{cor:ECSconc} with $\varepsilon\subref{cor:ECSconc}:= 4\eta$, the random variable $\tilde{W}$ which indicates the event $\tilde{Y} = (1\pm 3\eta^{1/4})d\subref{cor:ECSconc}$ satisfies $\Ex{\tilde{W}\tilde{X}} \ge (1-6\eta^{1/4})\Ex{\tilde{X}}$. Considering the complementary event, and rewriting this in terms of $\Gamma$ and $\mathcal{D}$ via the above estimates, we have \[ \Ex{(1-\tilde{W})\tilde{X}} \le 9\eta^{1/4}\mathcal{D}(H[Y])\,. 
\] But observe that $x_Y\in B^{Y,X,F}$ if and only if \[ \Gamma_{x_Y}(F)\neq(1\pm\varepsilon)\frac{\gamma_{x_Y}(\emptyset)}{d_Y(\emptyset)}\mathcal{D}_Y(F)\,, \] which occurs only if $\tilde{W}=0$ since $\tilde{Y} = \Gamma_{x_Y}(F)/\gamma_{x_Y}(\emptyset)$, $d\subref{cor:ECSconc}=\mathcal{D}(F')/\mathcal{D}(H[Y]) = \mathcal{D}_Y(F)/d_Y(\emptyset)$, and $3\eta^{1/4}<\varepsilon$. We conclude that \[ \frac{d(\emptyset)}{\gamma(\emptyset)} \frac{\Gamma^{Y,X,F}(H[Y])}{\mathcal{D}(H[Y])} \le 9\eta^{1/4}\,, \] which gives a bound on each term in the sum~\eqref{eq:lhs:XFsum}. To bound the number of terms, we use that $X$ is a counting place, $Y$ is a set of neighbours of $X$, and $F$ is an $X$-partite $k$-complex on at most $c^*$ vertices. These facts give the following crude estimates. Firstly, for any $Y$ of interest $\abs{Y}\le\Delta c^*$, and there are at most $2^{\Delta^2c^*}$ sets $X$ for which there exists $t$ such that $Y$ is of interest to $(t,X)$. Secondly, any such $X$ has size at most $c^*$ and $F$ has at most $c^*$ vertices, so there are at most $(c^*)^{c^*}$ ways of choosing the vertex partition of $F$ indexed by $X$, and at most $2^{2^{c^*}}$ choices for the edges of $F$. Then there are at most $2^{\Delta^2c^*}\cdot(c^*)^{c^*}\cdot2^{2^{c^*}}\le 2^{\Delta^2+2^{1+c^*}}(c^*)^{c^*}$ terms in the sum~\eqref{eq:lhs:XFsum}, and our calculations show that the trivial decoration we consider has \[ \sum_{Z\subseteq Y}\frac{d(\emptyset)\sum_{e\in V_Z\cap B^Y}\prod_{e'\subseteq e}\gamma(e')}{\gamma(\emptyset)|V_Z|\prod_{Z'\subseteq Z}d(Z')}\le 2^{4+\Delta^2+2^{1+c^*}}(c^*)^{c^*} \eta^{1/4}\,, \] and $\eta$ was chosen small enough that this gives~\ref{safe:notbig}, because (as in the proof of Lemma~\ref{lem:safe}) we have $s(Y)\le\Delta^2c^*$ and $p(Y)\le 2c^*\Delta^{c^*+1}$. 
\end{proof} \subsection{Random hypergraphs have THC} We now turn to the proof of Lemma~\ref{lem:randomTHC}, where we recall that $\Gamma$ is a random $k$-uniform hypergraph and we obtain $\Gamma'$ from $\Gamma$ by partitioning $V(\Gamma)$ into $\abs{J}$ parts and applying the standard construction with a $J$-partite $k$-complex $H$ of maximum degree $\Delta$. To show that $\Gamma'$ is a THC-graph involves showing that counts of complexes $R$ in $\Gamma'$ and in related $k$-graphs obtained by taking links are close to their expectation, and such counts will correspond to counts of weighted homomorphism-like objects in $\Gamma$. A difficulty arises here because $\Gamma'$ may contain multiple copies of a single edge of $\Gamma$ and so the $k$-edge weights in $\Gamma'$ are not necessarily independent Bernoulli random variables. In order to avoid trying to deal with $\Gamma$ and $\Gamma'$ simultaneously, we first state and prove the required property of $\Gamma$, which is rather technical. Let $Y$ be an initial segment of $X$ and $\phi:Y\to V(\Gamma)$ be a partite map. Let $Z$ be a vertex set disjoint from $Y$, equipped with a map $\rho$ that associates each $z\in Z$ to some $\rho(z)\in X\setminus Y$. For convenience we extend $\rho$ to be the identity map on $Y$. Let $R$ be a $J$-partite $k$-complex on $Z$, and let $R_\phi$ be the hypergraph with vertex set $\im\phi\cup Z$, and edge set \[ E(R_\phi) := E(R)\cup\{f\subseteq \dom\phi\cup Z : f\cap Z\ne\emptyset,\, \rho(f)\in H \}\,. \] We view $R_\phi$ as $J$-partite in the following way. Each vertex in $\im\phi$ is in $V_j$ for some $j\in J$, which naturally gives an association to the index $j$, and vertices in $Z$ are related to indices $j$ through the map $\rho:Z\to X$ and the partition of $X$ into parts indexed by $J$. We write $V_z$ for the $V_j$ to which $z\in Z$ is associated in this way. 
Then the homomorphism-like objects we consider in $\Gamma$ are partite maps $\psi$ from $R_\phi$ to $\Gamma$, where we insist that $\psi$ extends the identity map on $\im\phi$. This definition is rather difficult to parse, but a certain amount of complexity is necessary to deal with the case that $\phi$ is not injective. In any case, the idea is that $\psi$ signifies a copy of $R_\phi$ in $\Gamma$ `rooted' at some fixed vertices specified by $\im\phi$. We are interested in weighting such $\psi$ according to the subset $R_\phi^{(\ge2)}\subseteq R_\phi$ of edges of size at least $2$, preferring to deal separately with the empty set (which has weight $1$ in this setup) and vertex weights. For $z\in Z$, let $U_z\subseteq V_z$ be a set of exactly $n_1:= n p^d/(2\log n)$ vertices. We define \begin{equation}\label{eq:Ndef} N(\phi, R, U_Z) := \sum_{\psi}\prod_{e\in R_\phi^{(\ge2)}}\gamma\big(\psi(e)\big)\,, \end{equation} where the sum is over all maps $\psi:\im\phi\cup Z\to V(\Gamma)$ such that $\psi(w)=w$ for any $w\in\im\phi$ and $\psi(z)\in U_z$ for all $z\in Z$. Note that with $Y=\phi=\emptyset$ we have $R_\phi=R$, and since $\Gamma$ is complete on edges of size at most $1$, $n_1^{-\abs{Z}}N(\phi,R,U_Z)$ is then the partite count of copies of $R$ in $\Gamma$ that lie on $U_Z$. The main probabilistic tool we require for counting in $\Gamma'$ is a statement that for any suitable $R$, the count $N(\phi, R, U_Z)$ is close to its expectation with very high probability. It turns out that we are interested in $R$ of the following form. Given $Y$, let $H^4$ be the complex $H$ with each vertex blown up into $4$ copies. A \emph{suitable} $R$ is any subcomplex of $H^4$ on at most $c^*$ vertices which uses no copies of vertices in $Y$. Considering suitable $R$ is what requires us to work with $4^k\Delta$ and $4^kd$ in what follows. \begin{claim}\label{claim:Nconc} Consider the setup of Lemma~\ref{lem:randomTHC} and a suitable $R$ (according to the above definitions).
Then with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$ we have \begin{equation}\label{eq:Nconc} N(\phi, R, U_Z) = \big(1\pm\tfrac{1}{\log n}\big)n_1^{\abs{Z}}\prod_{e\in R_\phi^{(\ge2)}}q(e) \,. \end{equation} \end{claim} \begin{claimproof} Formally, we proceed by induction on $\abs{Z}$. The claim is trivial if $\abs{Z}\le 1$, as the product over $R_\phi^{(\ge2)}$ is empty. If $\abs{Z}\ge 2$, note that it suffices to consider injective maps $\psi$ in~\eqref{eq:Ndef}. Any non-injective partite map $\psi':\im\phi\cup Z\to V(\Gamma)$ of the form considered in~\eqref{eq:Ndef} is an injective partite map into $V(\Gamma)$ from the complex $R'$ on a vertex set $Z'$ formed from $R$ by identifying any vertices of $Z$ with the same images under $\psi'$. Applying the claim to $R'$ (which is on fewer vertices), we see that with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$, these non-injective maps contribute an amount at most twice the expectation of $N(\phi,R', U_{Z'})$. Comparing the expectations of $N(\phi,R, U_{Z})$ and $N(\phi,R', U_{Z'})$, identifying a pair $z,z'$ of vertices in $Z$ `costs' a factor $n_1$ but can gain a factor up to $p^{-4^k\Delta}$ since the edges involving $z$, of which there are at most $4^k\Delta$, are now coupled in $\Gamma$ with those containing $z'$. Then the assumptions on $n$ and $p$ imply that the contribution to $N(\phi,R,U_Z)$ from non-injective homomorphisms is at most a factor $O(n^{-\varepsilon})$ times the expected contribution from injective homomorphisms. Write $N^*$ for the contribution to $N(\phi, R, U_Z)$ from injective $\psi$, noting that the above argument shows that $N(\phi,R, U_Z)=\big(1\pm O(n^{-\varepsilon})\big)N^*$. For each injective $\psi$, the term $\mathbf{X}_\psi^*:=\prod_{e\in R_\phi^{(\ge2)}}\gamma\big(\psi(e)\big)$ appearing in $N^*$ is a product of independent Bernoulli random variables with probabilities given by $q(e)$.
The $\mathbf{X}_\psi^*$ themselves are therefore `partly dependent' Bernoulli random variables, each with the same probability $p^*=\prod_{e\in R_\phi^{(\ge2)}}q(e)$. Since we consider only the edges $R_\phi^{(\ge2)}$, if $\mathbf{X}_\psi^*$ and $\mathbf{X}_{\psi'}^*$ are dependent it must be because they agree on at least two vertices of $Z$. Then each $\mathbf{X}_\psi^*$ can be dependent on at most $\binom{\abs{Z}}{2}n_1^{\abs{Z}-2}$ other variables $\mathbf{X}_{\psi'}^*$. We apply a theorem of Janson~\cite[Corollary 2.6]{Jlargedeviations} which bounds the probability of large deviations in sums of partly dependent random variables. \begin{theorem}[Janson~\cite{Jlargedeviations}]\label{thm:janson} Let $\Psi$ be an index set, and $N^*=\sum_{\psi\in\Psi}\mathbf{X}^*_\psi$, such that each $\mathbf{X}^*_\psi$ is a Bernoulli random variable with probability $p^*\in(0,1)$. Let $\Delta_1^*$ be one more than the maximum degree of the graph on vertex set $\Psi$ in which $\psi$ and $\psi'$ are adjacent if and only if $\mathbf{X}_\psi^*$ and $\mathbf{X}_{\psi'}^*$ are dependent. Then for any $\delta>0$, \begin{equation}\label{eq:janson} \Pr\big[N^* = (1\pm\delta)\mathop{{}\mathbb{E}} N^*\big]\ge 1 - 2\exp\Bigg(-\frac{3\delta^2\abs{\Psi}p^*\big(1-\Delta_1^*/\abs{\Psi}\big)}{8\Delta_1^*}\Bigg)\,. \end{equation} \end{theorem} In the setup above, we have $n_1^{\abs{Z}}(1-\abs{Z}/n_1)^{\abs{Z}} \le \abs{\Psi} \le n_1^{\abs{Z}}$ and $\Delta_1^*\le \abs{Z}^2 n_1^{\abs{Z}-2}$. Since $\abs{Z}\le c^*$ is bounded by a constant, this means $\abs{\Psi}=\big(1\pm O(n_1^{-1})\big)n_1^{\abs{Z}}$, \begin{align*} \frac{\abs{\Psi}}{\Delta_1^*} &=\Omega(n_1^2)\,,& &\text{and} &\frac{\Delta_1^*}{\abs{\Psi}} & =O(n_1^{-2})\,. \end{align*} Moreover, we know that $p^*\ge p^{(\abs{Z}-1)4^kd}$ since embedding the first vertex of $Z$ is `free', and each remaining vertex can be the last vertex of at most $4^kd$ edges which occur with probability $p$ each.
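Before specialising $\delta$, it may help to record a rough computation showing where the quantities just estimated will be used. Since only at most $(\abs{Z}-1)4^kd$ edges contribute a probability factor $p$ to the product defining $p^*$, we have $p^*\ge p^{(\abs{Z}-1)4^kd}$, and substituting into the exponent of~\eqref{eq:janson} gives, up to constant factors,
\[
\frac{3\delta^2\abs{\Psi}p^*\big(1-\Delta_1^*/\abs{\Psi}\big)}{8\Delta_1^*}
= \Omega\Big(\delta^2 n_1^2\,p^{(\abs{Z}-1)4^kd}\Big)
= \Omega\Big(\frac{\delta^2 n^2\,p^{4^kd\abs{Z}}}{\log^2 n}\Big)\,,
\]
using $n_1=np^d/(2\log n)$, $p\le1$, and $2d\le 4^kd$ in the last step; the logarithmic factors are harmless and will be absorbed in what follows.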
Then for $\delta=1/(2\log n)$, the exponent on the right-hand side of~\eqref{eq:janson} is \[ -\Omega\big(p^{4^kd\abs{Z}}n^2\big) = -\Omega\big(n^{1+\varepsilon}\big)\,, \] by the assumptions on $n$ and $p$. The claim follows since the event~\eqref{eq:Nconc} that we wish to control occurs whenever $N^*=(1\pm\delta)\mathop{{}\mathbb{E}} N^*$ and $n$ is large enough. We have \[ \mathop{{}\mathbb{E}} N^* = \abs{\Psi}\prod_{e\in R_\phi^{(\ge2)}}q(e) = \big(1\pm O(n_1^{-1})\big)n_1^{\abs{Z}}\prod_{e\in R_\phi^{(\ge2)}}q(e)\,, \] and hence for large enough $n$, with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$, \[ N(\phi,R,U_Z)=\big(1\pm O(n^{-\varepsilon})\big)N^* = \big(1\pm O(n^{-\varepsilon})\big)\Big(1\pm \tfrac{1}{2\log n}\Big)\mathop{{}\mathbb{E}} N^* = \big(1\pm \tfrac{1}{\log n}\big)n_1^{\abs{Z}}\prod_{e\in R_\phi^{(\ge2)}}q(e)\,.\qedhere \] \end{claimproof} With the main probabilistic argument complete, we can now apply Claim~\ref{claim:Nconc} to the problem of showing $\Gamma'$ is an $(\eta,c^*)$-THC graph. \begin{proof}[Proof of Lemma~\ref{lem:randomTHC}] We start with a sketch of the proof. Given a fixed partition $\{V_j\}_{j\in J}$, to verify that $\Gamma'$ is an $(\eta,c^*)$-THC graph we must count suitable $X$-partite complexes $R$ in graphs obtained from $\Gamma'$ by embedding vertices of $H$. We are not required to consider arbitrary embeddings: at each step we are permitted by~\ref{thc:hered} to avoid a `bad set' of potential images, which we will exploit in due course. At first, no vertex of $H$ has been embedded and we count $R$ in $\Gamma'$, which by the standard construction is the same as counting $R$ in $\Gamma$. By Claim~\ref{claim:Nconc} with $Y=\phi=\emptyset$ and a union bound over suitable $R$, with high probability we have the required accurate counts of $R$ in $\Gamma$.
These counts are accurate enough to imply deterministically that there is a small `bad set' which, if avoided, allows us to embed the next vertex $x$ and continue the argument with `well-behaved' vertex weights in $\Gamma'_x$. When some initial segment $Y$ of $X$ has been embedded, say by a map $\phi':Y\to V(\Gamma')$, we always have an associated map $\phi:Y\to V(\Gamma)$ obtained by identifying the copies of parts $V_j$ made in the standard construction. Write $\Gamma'_{\phi'}$ for the $k$-graph obtained from $\Gamma'$ by taking the link of vertices in $\im\phi'$. By construction, the required counts of complexes $R$ in $\Gamma'_{\phi'}$ correspond to counts of $R_\phi$ in $\Gamma$, which we can control with Claim~\ref{claim:Nconc}. We handle vertex weights separately, and apply Claim~\ref{claim:Nconc} with subsets $U_z\subseteq V_z$ of vertices that receive weight $1$ in $\Gamma'_{\phi'}$. We then take a union bound over choices of partition to complete the lemma. The notion of `well-behaved' for vertex weights in $\Gamma'_{\phi'}$ that we maintain is as follows. Recall that given a partition $\{V_j\}_{j\in J}$, we have $V(\Gamma')$ partitioned into $\{V'_x\}_{x\in X}$ where $V'_x$ is a copy of the $V_j$ into which $x$ will be embedded. Since we view vertices in $Z$ as copies of vertices in $X$, we also write $V'_z$ for the part of $\Gamma'$ into which $z$ should be embedded. Given $Y\subseteq X$ and $\phi$, $\phi'$ as above, let $\mathcal{Q}_\phi$ be the density $k$-graph obtained from $\mathcal{Q}$ by taking links of vertices in $\im\phi$. For a fixed suitable $R$ on vertex set $Z$, let $\mathcal{A}_{Y,z}$ be the event that $\vnorm{V_z'}{\Gamma'_{\phi'}}\ge (1-\eta)^{\pi(z)}q_Y(z)$, where $\pi(z) = \abs[\big]{\{y\in Y : \{y,\rho(z)\}\in H\}}\le \Delta(H)$, and let $\mathcal{A}_Y$ be the intersection of $\mathcal{A}_{Y,z}$ for all $z\in Z$.
The event $\mathcal{A}_\emptyset$ holds with probability $1$ because $\Gamma$ gives weight $1$ to all vertices, and by avoiding bad vertices we will maintain $\mathcal{A}_Y$ as we embed. We are now ready to give the main proof. Suppose that the initial segment $Y\subseteq X$ has been embedded, with associated partite maps $\phi$ and $\phi'$ from $Y$ to $V(\Gamma)$ and $V(\Gamma')$ respectively; we count copies of suitable $R$ in $\Gamma'_{\phi'}$. Given $\mathcal{A}_Y$, since we have by assumption $(1-\eta)^\Delta \ge 1/2$, $\abs{V_z}\ge n/\log n$, and $q_Y(z)\ge p^d$, we can apply Claim~\ref{claim:Nconc} for every collection of $U_z$ such that $U_z\subseteq V_z$ is of size exactly $n_1$. There are $\prod_{z\in Z}\binom{\abs{V_z}}{n_1} = e^{O(n)}$ choices of collection, hence by Claim~\ref{claim:Nconc} and a union bound over collections, conditioned on $\mathcal{A}_Y$, with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$ the counts $N(\phi,R,U_Z)$ are close to their expectation for all such $U_Z$. In particular, the $N(\phi,R,U_Z)$ are `correct' for the collections where every $u\in U_z$ receives weight $1$ as a vertex in $\Gamma'_{\phi'}$. The count $N(\phi,R,U_Z)$ deals with edges of $R_\phi$ of size at least $2$, hence by the above argument and averaging over sets $U_z$ of vertices that receive weight $1$ in $\Gamma'_{\phi'}$, we obtain that with high probability the count $\Gamma'_{\phi'}(R)$ is close to its expectation. More precisely, by a union bound over the constant number of complexes $R$ to consider, and by averaging over the choice of collection $\{U_z\}_{z\in Z}$, we have, conditioned on $\mathcal{A}_Y$, with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$, \begin{equation}\label{eq:Gam'Wconc} \Gamma'_{\phi'}(R) = \big(1\pm\tfrac{1}{\log n}\big)\Big(\prod_{e\in R_\phi^{(\ge2)}}q(e)\Big)\prod_{z\in Z}\vnorm{V_z'}{\Gamma'_{\phi'}}\,, \end{equation} for all suitable $k$-complexes $R$.
Let $x$ be the next vertex to embed. To prove the lemma it now suffices to show that there is a subset $\tilde{V}_x'\subseteq V_x'$ with $\vnorm{\tilde{V}_x'}{\Gamma'_{\phi'}}\ge (1-\eta)\vnorm{V_x'}{\Gamma'_{\phi'}}$ such that $\mathcal{A}_{Y\cup\{x\}}$ holds. Then the above argument after $x$ has been embedded, and a union bound over the number of vertices to embed (at most $n$), gives the result. Suppose that we embed $x$ to $w\in V_x'$. Since $H$ has maximum degree $\Delta$, there are at most $\Delta$ vertices $z\in Z$ with $\ind{\Gamma'_{\phi'\cup\{x\mapsto w\}}}{V_z'}\ne \ind{\Gamma'_{\phi'}}{V_z'}$. Let $Z'$ be the set of these vertices. The counts~\eqref{eq:Gam'Wconc} imply that for $z\in Z'$, \begin{align*} \abs{V_x'}^{-1}\sum_{u\in V_x'}\gamma'_{\phi'}(u) &= \big(1\pm\tfrac{1}{\log n}\big)\vnorm{V_x'}{\Gamma'_{\phi'}}\,,\\ \abs{V_x'}^{-1}\abs{V_z'}^{-1}\sum_{uv\in V_{xz}'}\gamma'_{\phi'}(u)\gamma'_{\phi'}(v)\gamma'_{\phi'}(u,v) &= \big(1\pm\tfrac{1}{\log n}\big)q_Y(x,z)\vnorm{V_z'}{\Gamma'_{\phi'}}\cdot\vnorm{V_x'}{\Gamma'_{\phi'}}\,,\\ \abs{V_x'}^{-1}\sum_{u\in V_x'}\gamma'_{\phi'}(u)\Big(\abs{V_z'}^{-1}\sum_{v\in V_z'}\gamma'_{\phi'}(v)\gamma'_{\phi'}(u,v)\Big)^2 &= \big(1\pm\tfrac{1}{\log n}\big)q_Y(x,z)^2\vnorm{V_z'}{\Gamma'_{\phi'}}^2\cdot\vnorm{V_x'}{\Gamma'_{\phi'}}\,, \end{align*} hence we may apply Corollary~\ref{cor:ECSconc} with $\varepsilon\subref{cor:ECSconc}=4/\log n$ and $d\subref{cor:ECSconc}=q_Y(x,z)\vnorm{V_z'}{\Gamma'_{\phi'}}$ to obtain the following. For each $z\in Z'$ there is a set $B_z\subseteq V_x'$ with $\vnorm{B_z}{\Gamma'_{\phi'}} \le 8(\log n)^{-1/4}\vnorm{V_x'}{\Gamma'_{\phi'}}$ such that for all $w\in V_x'\setminus B_z$, if $x$ is embedded to $w$ we have \[ \vnorm{V_z'}{\Gamma'_{\phi'\cup\{x\mapsto w\}}} = \Big(1\pm\frac{4}{(\log n)^{1/4}}\Big)q_Y(x,z)\vnorm{V_z'}{\Gamma'_{\phi'}}\,.
\] Set $\tilde{V}_x'=V_x'\setminus \bigcup_{z\in Z'}B_z$, so that \[ \vnorm{\tilde{V}_x'}{\Gamma'_{\phi'}} \ge \Big(1-\frac{8\Delta}{(\log n)^{1/4}}\Big)\vnorm{V_x'}{\Gamma'_{\phi'}}\,, \] which is at least $(1-\eta)\vnorm{V_x'}{\Gamma'_{\phi'}}$ for large enough $n$. Then given $\mathcal{A}_Y$ and the counts~\eqref{eq:Gam'Wconc}, we have a small `bad set' which, if avoided when embedding $x$, implies that $\mathcal{A}_{Y\cup\{x\}}$ holds deterministically. So we can maintain well-behaved vertex weights throughout the embedding, and we may repeat the probabilistic argument above to control the counting properties~\eqref{eq:Gam'Wconc} after each vertex is embedded. There are at most $n$ embeddings, and hence with probability at least $1-\exp\big(-O(n^{1+\varepsilon})\big)$ the partition $\{V_j\}_{j\in J}$ yields a $\Gamma'$ with the required properties. To complete the proof we take a union bound over the $e^{O(n)}$ possible partitions. \end{proof} \section{A sparse hypergraph regularity lemma}\label{sec:reg} There are several approaches to generalising Szemer\'edi's regularity lemma to hypergraphs (e.g.\ \cite{RSreg,Ghypergraph}). Recall that the main idea is to partition a hypergraph into a bounded number of pieces, almost all of which are regular. Difficulties arise in giving a precise formulation of regularity that is both weak enough to be found by a regularity lemma and strong enough to support a counting lemma. We use a notion of octahedron minimality as our regularity condition (Definition~\ref{def:reg}), and in this section we describe how existing results imply that we can partition arbitrary hypergraphs into pieces which have the necessary structure. In dense hypergraphs, the combined use of (strong) regularity lemmas with compatible counting lemmas constitutes the standard hypergraph regularity method~\cite{RNSSKregmethod,RSregmethod}.
Our Theorem~\ref{thm:counting} is essentially a version of the counting lemma of~\cite{NRScount} for use with our definition of regularity. In the following subsection we show how to derive the setup of Theorem~\ref{thm:counting} from the regularity lemma of~\cite{RSreg}, allowing our Theorem~\ref{thm:counting} to be a drop-in replacement in many applications of the standard hypergraph regularity method. Versions of these tools for sparse graphs are less well-developed, but notably the weak regularity lemma and accompanying counting lemma of Conlon, Fox, and Zhao~\cite{CFZrelative} give a general technique for transferring results for dense hypergraphs to a sparse setting. We show how combined use of the regularity methods of~\cite{CFZrelative} and~\cite{RSreg,NRScount} can yield the setup of Theorems~\ref{thm:counting} and~\ref{thm:embedding}. In particular, we will state a sparse hypergraph regularity lemma, namely a sparse version of the R\"odl--Schacht regularity lemma~\cite{RSreg2}. We derive this from the dense version using the Conlon--Fox--Zhao weak regularity lemma. We should point out that the proof strategy works with only trivial changes to obtain sparse versions of the R\"odl--Skokan regularity lemma~\cite{RSreg}, and the regular slice lemma~\cite[Lemma~10, parts (a) and (b)]{ABCM}\footnote{The dense version has a part (c), but this part is false in the sparse setting.}. It has been well known in the area for some years that these regularity lemmas hold, but to the best of our knowledge no one has actually written them down with proofs. \subsection{Sparse hypergraphs after Conlon, Fox, and Zhao} We say, following~\cite{CFZrelative} (but with slightly different notation), that two weighted $k$-uniform hypergraphs $G$ and $H$ on a vertex set $V$ are a \emph{$\gamma$-discrepancy pair} in the following situation.
For any $(k-1)$-uniform unweighted graphs $F_1,\dots,F_k$ on $V$, let $S$ be the collection of $k$-sets in $V$ whose $(k-1)$-subsets can be labelled using each label $1,\dots,k$ exactly once, such that the label $i$ subset is in $F_i$. We say the edges of $S$ are \emph{rainbow for $F_1,\dots,F_k$}. If for any choice of the $F_i$ we have \[\Big|\sum_{e\in S}\big(g(e)-h(e)\big)\Big|\le\gamma|V|^k\] then $(G,H)$ is a $\gamma$-discrepancy pair. Note that this concept is interesting only when $\sum_{e\in\binom{V}{k}}g(e)$ is much larger than $\gamma |V|^k$. Since we want to work with sparse hypergraphs, in order to talk about discrepancy pairs (and in general to apply the machinery of~\cite{CFZrelative}) we will need to scale our weight functions so that the majorising hypergraph has density about $1$. Going with this, we say a $k$-uniform hypergraph $G$ on $V$ is \emph{upper $\eta$-regular} if for any $(k-1)$-uniform unweighted graphs $F_1,\dots,F_k$ on $V$, letting $S$ be the set of rainbow edges, we have \[\sum_{e\in S}(g(e)-1)\le\eta|V|^k\,.\] Conlon, Fox, and Zhao~\cite[Lemma~2.15]{CFZrelative} proved that if $\Gamma$ satisfies the `linear forms condition' then it (and trivially every subgraph of it) is upper $o(1)$-regular. More concretely, they proved the following. \begin{lemma}\label{lem:CFZupper} Given $\eta>0$ and $k$, there exists $\eta'>0$ such that the following holds. Suppose that $\Gamma$ is a $k$-uniform $n$-vertex weighted hypergraph, and for each unweighted $k$-uniform hypergraph $H$ on at most $2k$ vertices we have $\Gamma(H)=1\pm\eta'$. Then $\Gamma$, and all its subgraphs, are upper $\eta$-regular. \qed \end{lemma} Conlon, Fox, and Zhao also proved the following sparse `weak regularity lemma', which applies to upper-regular graphs. It is well known that from such a regularity lemma one can fairly easily, by iteration, prove a sparse strong regularity lemma, so in some sense this lemma does all the work.
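To orient the reader, we spell out the graph case $k=2$ of the definitions above (a remark included only for intuition). Here $F_1$ and $F_2$ are $1$-uniform graphs, that is, vertex subsets $A,B\subseteq V$, and the rainbow $2$-sets are precisely the pairs with one endpoint in $A$ and the other in $B$. The discrepancy condition then reads
\[
\Big|\sum_{\substack{uv:\, u\in A,\, v\in B}}\big(g(uv)-h(uv)\big)\Big|\le\gamma|V|^2\,,
\]
a cut-norm type closeness of the kind appearing in the Frieze--Kannan weak regularity lemma for graphs, while upper regularity asks that $G$ never exceed the constant weighting $1$ by much on any such cut.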
We state a somewhat weaker version (the original allows for directed hypergraphs and parts of different sizes, and gives a bound on the `complexity' of $\tilde{G}$ as one would need for iteration to a strong regularity lemma). \begin{theorem}[{\cite[Theorem~2.16]{CFZrelative}}]\label{thm:CFZweak} For any $\gamma>0$ and $k$-uniform weighted hypergraph $G$ on $V$ which is upper $\eta$-regular with $\eta\le2^{-80k/\gamma^2}$, there exists a $k$-uniform weighted hypergraph $\tilde{G}$ on $V$, such that $0\le\tilde{g}(e)\le 1$ for each $e\in\binom{V}{k}$ and such that $G$ and $\tilde{G}$ form a $\gamma$-discrepancy pair. \end{theorem} Finally, Conlon, Fox, and Zhao proved a counting lemma going with this concept of regularity. We will need only the special case of their result counting octahedra, which is the following. \begin{theorem}[{\cite[Theorem~2.17]{CFZrelative}}]\label{thm:CFZcount} For every $\delta>0$ and $k$ there exist $\varepsilon>0$ and $\eta>0$ such that the following holds. Suppose $\Gamma$ is a $k$-uniform weighted hypergraph on $[n]$, and for each unweighted $k$-uniform hypergraph $H$ on at most $4k$ vertices we have $\Gamma(H)=1\pm\eta$. Suppose that $G$ is a subgraph of $\Gamma$, and $\tilde{G}$ is a $k$-uniform weighted hypergraph on $[n]$ such that $0\le\tilde{g}(e)\le 1$ for each $e\in\binom{[n]}{k}$, and suppose that $G$ and $\tilde{G}$ form an $\varepsilon$-discrepancy pair. Then we have \[G\big(\oct{k}{\vec{2}^k}\big)=\tilde{G}\big(\oct{k}{\vec{2}^k}\big)\pm\delta\,.\] \end{theorem} We note that the requirement of~\cite{CFZrelative} for a large number of vertices disappears in our setting since we made constant choices explicit rather than using $o(1)$ notation.
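For orientation, when $k=2$ the complex $\oct{2}{\vec{2}^2}$ is the complete bipartite graph with two vertices on each side, that is, a $4$-cycle (together with its vertices and the empty set, viewed as a complex), so the conclusion of Theorem~\ref{thm:CFZcount} specialises to
\[
G\big(\oct{2}{\vec{2}^2}\big)=\tilde{G}\big(\oct{2}{\vec{2}^2}\big)\pm\delta\,:
\]
the two members of a discrepancy pair have nearly equal normalised $4$-cycle counts. This parallels the role of $4$-cycle counts in the Chung--Graham--Wilson theory of quasirandom graphs, with octahedra playing the analogous role for hypergraphs.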
\subsection{Hypergraph regularity lemmas} In this subsection we will derive sparse hypergraph regularity lemmas in which the concept of regularity is octahedron minimality from existing regularity lemmas in which the concept of regularity is different. For clarity, in this section we use the word \emph{oct-regular} for the concept we earlier defined as simply \emph{regular}, and we refer to the different concept of the original forms as \emph{disc-regular}. We give only the bare bones definitions needed to state our lemmas; for more intuition or for notation which is required to actually work with the resulting regular partitions, the reader should consult~\cite{RSreg} or~\cite{RSreg2}. Given $k$ and a vertex set $V$ with a \emph{ground partition} $\mathcal{P}$, we say the parts of $\mathcal{P}$ are \emph{$1$-cells}. A \emph{$(k-1)$-family of partitions} $\mathcal{P}^*$ on $V$ consists of the ground partition $\mathcal{P}$ of $V$, together with, for each $2\le j\le k-1$ and each $j$-set $J$ of $1$-cells, a \emph{supporting} partition of the $j$-sets of $V$ with one vertex in each member of $J$ into \emph{$j$-cells}. We say a partition is supporting if, for each $j$-cell of $\mathcal{P}^*$, there are $j$ $(j-1)$-cells such that each edge of the given $j$-cell is rainbow for the chosen $(j-1)$-cells. We will talk about a \emph{$j$-polyad} in such a partition, by which we mean a choice of $j$ $1$-cells, $\binom{j}{2}$ $2$-cells, and so on up to $\binom{j}{j-1}$ $(j-1)$-cells, which are supporting in the above sense. Given a $j$-polyad $\mathcal{J}$ and a weighted $j$-uniform hypergraph $G$, we say $G$ is \emph{$(\varepsilon,d,r)$-disc-regular} with respect to $\mathcal{J}$ if the following holds. Let $\mathbf{P}$ denote the $j$-sets supported by the union of the $(j-1)$-cells of $\mathcal{J}$. Let $H_1,\dots,H_r$ be any unweighted sub-$(j-1)$-graphs of the union of the $(j-1)$-cells in $\mathcal{J}$.
Let $Q_i$ denote the $j$-sets supported by $H_i$, for each $1\le i\le r$, and let $\mathbf{Q}$ be the union of the $Q_i$. Suppose that $|\mathbf{Q}|\ge\varepsilon|\mathbf{P}|$. Then we have \[\sum_{e\in\binom{V(G)}{j}}g(e)\mathbf{q}(e)=(d\pm\varepsilon)|\mathbf{Q}|\,.\] If there exists $d$ such that $G$ is $(\varepsilon,d,r)$-disc-regular with respect to $\mathcal{J}$, we say $G$ is $(\varepsilon,r)$-disc-regular with respect to $\mathcal{J}$. We say a $k$-uniform hypergraph $G$ is \emph{$(\varepsilon,r)$-disc-regular} with respect to a $(k-1)$-family of partitions $\mathcal{P}^*$ if the following holds. Choose uniformly at random a $k$-set of vertices intersecting each part of the ground partition in at most one vertex, and choose the $k$-polyad containing this set. Then with probability at least $1-\varepsilon$, $G$ is $(\varepsilon,r)$-disc-regular with respect to the chosen polyad. We say similarly that $G$ is \emph{$\varepsilon$-oct-regular} with respect to $\mathcal{P}^*$ if the same holds replacing $(\varepsilon,r)$-disc-regularity with $\varepsilon$-oct-regularity. Finally, we say a $(k-1)$-family of partitions is \emph{$(t_0,t_1,\varepsilon)$-oct-equitable} if the following hold. There are between $t_0$ and $t_1$ parts of the ground partition, and these parts differ in size by at most one. There are numbers $d_2,\dots,d_{k-1}$ such that $1/d_i$ is an integer at most $t_1$ for each $2\le i\le k-1$, and for each $2\le j\le k-1$, each $j$-polyad in $\mathcal{P}^*$, and each $j$-cell supported by that polyad, the $j$-cell is $d_j$-oct-regular with respect to the polyad. We say $\mathcal{P}^*$ is \emph{$(t_0,t_1,\varepsilon)$-disc-equitable} if we replace $\varepsilon$-oct-regularity with $(\varepsilon,1)$-disc-regularity, and impose the stronger condition that every part of the ground partition has exactly the same size. Probably the most commonly used form of hypergraph regularity is the following, due to R\"odl and Schacht. 
\begin{lemma}[{\cite[Lemma~23]{RSreg2}}]\label{lem:RSchRL} Let $k \geq 3$ be a fixed integer. For all positive integers $q$, $t_0$ and $s$, positive~$\varepsilon_k$ and functions $r: \mathbb{N} \rightarrow \mathbb{N}$ and $\varepsilon: \mathbb{N} \rightarrow (0,1]$, there exist integers~$t_1$ and~$n_0$ such that the following holds for all $n \ge n_0$ which are divisible by~$t_1!$. Let $V$ be a vertex set of size $n$, and suppose that~$G_1, \dots, G_s$ are edge-disjoint $k$-uniform hypergraphs on $V$, and that $\mathcal{Q}$ is a partition of $V$ into at most $q$ parts of equal size. Then there exists a $(k-1)$-family of partitions $\mathcal{P}^*$ on~$V$ such that \begin{enumerate}[label=\itmit{\alph{*}}] \item the ground partition of $\mathcal{P}^*$ refines $\mathcal{Q}$, \item $\mathcal{P}^*$ is $(t_0,t_1, \varepsilon(t_1))$-disc-equitable, and \item for each $1 \leq i \leq s$, $G_i$ is $(\varepsilon_k,r(t_1))$-disc-regular with respect to~$\mathcal{P}^*$. \end{enumerate} \end{lemma} It is often convenient to have the initial partition $\mathcal{Q}$ of the vertex set which is refined. In fact R\"odl and Schacht even allow for an initial family of partitions which is refined (in an appropriate sense), and we could similarly allow for such an initial family of partitions in our forthcoming sparse version Lemma~\ref{lem:RSchreglem}; but for clarity we prefer this version. A counting lemma going with this, taken from~\cite[Lemma~27]{ABCM} (but which is derived from~\cite{RSchCount} via~\cite{CFKO}), is the following (which we state specifically for counting octahedra in one polyad; this follows from the version of~\cite[Lemma~27]{ABCM} by the standard construction). 
\begin{lemma}\label{lem:RegCount} Let $k,s,r,m_0$ be positive integers, and let $d,d_2,\ldots,d_{k-1}$, $\varepsilon$, $\varepsilon_k$, $\beta$ be positive constants such that $1/d_i \in\mathbb{N}$ for any $2 \leq i \leq k-1$ and \[\frac{1}{m_0}\ll \frac{1}{r}, \varepsilon \ll \varepsilon_k,d_2,\ldots,d_{k-1}\quad\text{and}\quad \varepsilon_k \ll \beta\,.\] Then the following holds for all integers $m\ge m_0$. Let $\mathcal{J}$ be a $k$-polyad with $k$ clusters $V_1, \dots, V_k$ each of size $m$, and suppose that for each $2\le j\le k-1$, each $j$-cell of $\mathcal{J}$ is $(\varepsilon,d_j,1)$-disc-regular with respect to its supporting $(j-1)$-polyad. Let $G$ be an unweighted $k$-uniform hypergraph on $\bigcup_{i \in [k]} V_i$ which is supported on $\mathcal{J}$ and is $(\varepsilon_k,d,r)$-disc-regular with respect to $\mathcal{J}$, and write $\mathcal{G}$ for the unweighted $k$-complex obtained from $\mathcal{J}$ by adding all edges of $G$. Then we have \[\mathcal{G}\big(\oct{k}{\vec{2}^k}\big)=\left(d^{2^k} \pm \beta \right)\prod_{j = 2}^{k-1} d_j^{2^j\binom{k}{j}}\,.\] \end{lemma} We now state our sparse regularity lemma for hypergraphs. For convenience of use, we remove the divisibility condition on $n$ (removing it is straightforward); this motivates the slight difference between the definitions of disc-equitable and oct-equitable. \begin{lemma}\label{lem:RSchreglem} Let $k \geq 3$ be a fixed integer. For all positive integers $q$, $t_0$, and $s$, positive~$\varepsilon_k$ and functions $\varepsilon: \mathbb{N} \rightarrow (0,1]$, there exist integers~$t_1$ and~$n_0$, and an $\eta^*>0$, such that the following holds for all $n \ge n_0$. Let $V$ be a vertex set of size $n$, let $\Gamma$ be a $k$-uniform hypergraph on $V$, and suppose that~$G_1, \dots, G_s$ are edge-disjoint $k$-uniform subgraphs of $\Gamma$, and that $\mathcal{Q}$ is a partition of $V$ into at most $q$ parts whose sizes differ by at most one.
Suppose furthermore that there is some $p>0$ such that for any $k$-uniform unweighted hypergraph $H$ on at most $4k$ vertices we have $\Gamma(H)=(1\pm \eta^*)p^{e(H)}$. Then there exists a $(k-1)$-family of partitions $\mathcal{P}^*$ on~$V$ such that \begin{enumerate}[label=\itmit{\alph{*}}] \item\label{RL:a} the ground partition of $\mathcal{P}^*$ refines $\mathcal{Q}$, \item\label{RL:b} $\mathcal{P}^*$ is $(t_0,t_1, \varepsilon(t_1))$-oct-equitable, and \item\label{RL:c} for each $1 \leq i \leq s$, $G_i$ is $\varepsilon_k$-oct-regular with respect to~$\mathcal{P}^*$. \end{enumerate} \end{lemma} Lemma~\ref{lem:RegCount} shows that Lemma~\ref{lem:RSchRL} immediately implies the dense, unweighted case of this statement. Specifically, what we now prove is the case that $p=1$ and $\Gamma$ is the complete unweighted $k$-uniform hypergraph on $V$, and each $G_i$ is an unweighted $k$-uniform hypergraph on $V$ (that is, its weight function has range $\{0,1\}$). We defer the general case to later. We should note that, as proved in~\cite{DHNRcharacterising}, the dense unweighted case of Lemma~\ref{lem:RSchreglem} is formally weaker than Lemma~\ref{lem:RSchRL}. \begin{proof}[Proof of Lemma~\ref{lem:RSchreglem}, dense unweighted case] Given $k\ge 3$, positive integers $q$, $t_0$ and $s$, positive $\varepsilon_k$ and a function $\varepsilon:\mathbb{N}\rightarrow(0,1]$, we let $\varepsilon'_k<\varepsilon_k$ be small enough for Lemma~\ref{lem:RegCount} with input $\beta=\tfrac12\varepsilon_k$. We choose functions $r,\varepsilon'$ of $t_1$ which tend to infinity and zero respectively fast enough for Lemma~\ref{lem:RegCount} to apply provided all densities $d_2,\dots,d_{k-1}$ are at least $1/t_1$. In addition we insist $\varepsilon'(t_1)$ is small enough for application of Lemma~\ref{lem:RegCount} with input $\beta=\tfrac12\varepsilon(t_1)$ to count octahedra of uniformity between $2$ and $k-1$ inclusive.
Let $m_0$ be large enough for all these applications of Lemma~\ref{lem:RegCount}. Let $t_1$ and $n'_0$ be returned by Lemma~\ref{lem:RSchRL} for input $k,q,t_0,s,\varepsilon'_k,r,\varepsilon'$. If necessary, we increase $t_1$ such that $q$ divides $t_1!$. Let $n_0\ge\max(n'_0,m_0t_1)$ be sufficiently large for the following calculations. Given any $n\ge n_0$, let $G_1,\dots,G_s$ be edge-disjoint $k$-uniform unweighted hypergraphs on $[n]$. We add a set $N$ of $\lceil\tfrac{n}{t_1!}\rceil t_1!-n$ new vertices to the vertex set of each $G_i$ (and no new edges) to obtain $k$-graphs $G'_1,\dots,G'_s$ on $n':=\lceil\tfrac{n}{t_1!}\rceil t_1!$ vertices; by construction $n'$ is divisible by $t_1!$. We extend $\mathcal{Q}$ to a partition $\mathcal{Q}'$ on $[n']$ by adding the new vertices to the parts of $\mathcal{Q}$ such that the part sizes of $\mathcal{Q}'$ are equal (which is possible since $q$ divides $t_1!$ divides $n'$). We now apply Lemma~\ref{lem:RSchRL}, with the given inputs, to the $G'_1,\dots,G'_s$. The result is a family of partitions $\mathcal{R}^*$ which satisfies the conclusion of Lemma~\ref{lem:RSchRL}. Removing the added vertices $N$, we obtain a family of partitions $\mathcal{P}^*$, which we claim has the desired properties. The property~\ref{RL:a} follows from the construction and the fact that the ground partition of $\mathcal{R}^*$ refines $\mathcal{Q}'$. The partition $\mathcal{R}^*$ is $(t_0,t_1,\varepsilon'(t_1))$-disc-equitable, so by Lemma~\ref{lem:RegCount} it is also $(t_0,t_1,\tfrac12\varepsilon(t_1))$-oct-equitable, and whenever some $G'_i$ is $(\varepsilon'_k,r)$-disc-regular with respect to a $k$-polyad of $\mathcal{R}^*$, it is also $\tfrac12\varepsilon_k$-oct-regular with respect to that polyad.
Removing the set $N$ of at most $t_1!$ extra vertices reduces the size of any part by at most $t_1!$, and hence changes the number of $j$-edges using that part by at most $t_1!n^{j-1}$, and the number of copies of $\oct{j}{\vec{2}^j}$ using that part by at most $(t_1!)^2n^{2j-2}$. By choice of $n_0$ and by Lemma~\ref{lem:RegCount}, these numbers are tiny compared to respectively the number of edges and octahedra supported in any cell or polyad of $\mathcal{R}^*$. Consequently $\mathcal{P}^*$ is $(t_0,t_1,\varepsilon(t_1))$-oct-equitable, giving~\ref{RL:b}, and whenever $G'_i$ is $(\varepsilon'_k,r)$-disc-regular with respect to some polyad of $\mathcal{R}^*$, the graph $G_i$ is $\varepsilon_k$-oct-regular with respect to the corresponding polyad of $\mathcal{P}^*$, giving~\ref{RL:c}. \end{proof} We next use Theorems~\ref{thm:CFZweak} and~\ref{thm:CFZcount} to derive the general case of Lemma~\ref{lem:RSchreglem} from the dense unweighted case proved above. We should note that, although this proof involves two applications of a regularity lemma, the bounds on constants we obtain are essentially the same as in Lemma~\ref{lem:RSchRL}. The application of weak regularity causes all these bounds to increase by less than a double exponential; this is inconsequential given that the bounds of Lemma~\ref{lem:RSchRL} cannot (even for $2$-graphs) be better than tower-type. \begin{proof}[Proof of Lemma~\ref{lem:RSchreglem}, general case] Given $k$, $q$, $t_0$, $s$ and $\varepsilon_k$, and a function $\varepsilon:\mathbb{N}\to(0,1]$, let $t_1$ and $n_0$ be returned by the dense unweighted case of Lemma~\ref{lem:RSchreglem} for input as above but with $\tfrac1{2s}\varepsilon_k$ replacing $\varepsilon_k$. We choose $\gamma$ such that $2k!\gamma$ is small enough for Theorem~\ref{thm:CFZcount} with input $\delta=\tfrac1{4s}t_1^{-2^k}\varepsilon_k$. We let $\eta>0$ be small enough for Theorem~\ref{thm:CFZweak} with input $\gamma$.
We let $\eta'>0$ be small enough for Lemma~\ref{lem:CFZupper} with input $\eta$, and we let $\eta^*$ be small enough for the applications of Theorem~\ref{thm:CFZcount} and Lemma~\ref{lem:CFZupper}. Given an initial partition $\mathcal{Q}$ and $k$-uniform hypergraphs $\Gamma$, $G_1,\dots,G_s$ satisfying the conditions of the lemma, we proceed as follows. In order to apply the machinery of Conlon, Fox, and Zhao~\cite{CFZrelative} we need to scale the weight function of our majorising $k$-uniform hypergraph $\Gamma$ by $p^{-1}$. We write $p^{-1}\Gamma$ for the $k$-uniform hypergraph we obtain by this scaling, and similarly for the $G_i$. Given an unweighted $k$-uniform hypergraph $H$, we (slightly abusing notation) think of $\Gamma$ and $H$ as complexes where all edges of size less than $k$ are present with weight $1$, and write $\Gamma(H)$ for the corresponding homomorphism density. Our counting condition states that for any $H$ with at most $4k$ vertices, we have $\Gamma(H)=(1\pm\eta')p^{e(H)}$, and thus $(p^{-1}\Gamma)(H)=1\pm\eta'$. By Lemma~\ref{lem:CFZupper} it follows that $p^{-1}\Gamma$, and its subgraphs $p^{-1}G_i$, are upper $\eta$-regular. Applying Theorem~\ref{thm:CFZweak}, with input $\gamma$, separately to each $p^{-1}G_i$, we obtain weighted $k$-graphs $G'_i$ on $[n]$, whose weights are in $[0,1]$, such that $(p^{-1}G_i,G'_i)$ is a $\gamma$-discrepancy pair for each $i$. It follows that $(p^{-1}s^{-1}G_i,s^{-1}G'_i)$ is also a $\gamma$-discrepancy pair for each $i$. We now create unweighted $k$-graphs $G''_i$ as follows: for each $e\in\binom{[n]}{k}$ independently, we put $e$ into $G''_i$ with probability $s^{-1}g'_i(e)$, where $g'_i$ is the weight function of $G'_i$; thus $e$ is placed into at most one of the $G''_i$, and into none of them with the remaining probability. Since $0\le g'_i(e)\le 1$ for each $i$, we have $0\le \sum_{i\in[s]}g'_i(e)\le s$, so that the distribution we just described is indeed a probability distribution.
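The random splitting step above can be sketched in a few lines. The following toy implementation is ours (the function name and data layout are not from the text), and assumes the weights $g'_i(e)\in[0,1]$ are given explicitly:

```python
import itertools
import random

def split_randomly(weights, s, rng):
    """Randomly round weighted k-graphs G'_1,...,G'_s into edge-disjoint
    unweighted k-graphs G''_1,...,G''_s: independently for each k-set e,
    put e into G''_i with probability g'_i(e)/s, and into no graph with
    the remaining probability 1 - sum_i g'_i(e)/s.
    `weights` maps each k-set e to the list [g'_1(e), ..., g'_s(e)]."""
    G = [set() for _ in range(s)]
    for e, g in weights.items():
        u, acc = rng.random(), 0.0  # u is uniform in [0, 1)
        for i in range(s):
            acc += g[i] / s
            if u < acc:
                G[i].add(e)  # e lands in exactly one graph...
                break        # ...or in none, if u >= sum_i g'_i(e)/s
    return G

# Toy run: 6 vertices, k = 3, s = 2 graphs with all weights 1/2.
rng = random.Random(0)
edges = list(itertools.combinations(range(6), 3))
G = split_randomly({e: [0.5, 0.5] for e in edges}, 2, rng)
assert G[0].isdisjoint(G[1])  # the G''_i are edge-disjoint by construction
```

Edge-disjointness holds deterministically, since each $k$-set is placed in at most one graph; only the discrepancy estimates require the probabilistic argument.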
We claim that with high probability $(G''_i,s^{-1}G'_i)$ is a $\gamma$-discrepancy pair for each $i$. Indeed, suppose $i$ and unweighted $(k-1)$-graphs $F_1,\dots,F_k$ on $[n]$ are fixed before the sampling of the $G''_i$. The expected number of edges of $G''_i$ which are rainbow for $F_1,\dots,F_k$ is exactly equal to the sum of $s^{-1}g'_i(e)$ over $e$ rainbow for $F_1,\dots,F_k$. By the Chernoff bound, the probability of an additive error of $\gamma n^k$ is $o(2^{-kn^{k-1}})$. In other words, a given $F_1,\dots,F_k$ and $i$ witness the failure of our claim with probability $o(2^{-kn^{k-1}})$. Taking the union bound over the at most $s2^{kn^{k-1}}$ choices of $F_1,\dots,F_k$ and $i$, our claim fails with probability $o(1)$ as desired. Putting this together, we see that $(p^{-1}s^{-1}G_i,G''_i)$ is a $2\gamma$-discrepancy pair for each $i$. We now apply the dense unweighted case of Lemma~\ref{lem:RSchreglem}, with input $\tfrac{1}{2s}\varepsilon_k$ replacing $\varepsilon_k$ but otherwise with the same inputs as given to the general case, to the graphs $G''_1,\dots,G''_s$, obtaining a family $\mathcal{P}^*$ of partitions. We claim that this is the desired family of partitions, for which we only need check that condition~\ref{RL:c} holds for the $G_i$ as well as the $G''_i$. To that end, suppose we have a polyad $\mathcal{J}$ of $\mathcal{P}^*$ and an $i$ such that $G''_i$ is $\tfrac{1}{2s}\varepsilon_k$-oct-regular with respect to $\mathcal{J}$. It is enough to show that $G_i$ is also $\varepsilon_k$-oct-regular with respect to $\mathcal{J}$. We obtain graphs $H$ and $\tilde{H}$ by deleting (i.e.\ setting to weight zero) all edges of respectively $G_i$ and $G''_i$ which are not supported by $\mathcal{J}$. Trivially $H$ is still a subgraph of $\Gamma$, and we claim that $(p^{-1}s^{-1}H,\tilde{H})$ is a $2k!\gamma$-discrepancy pair.
Indeed, given any $(k-1)$-uniform unweighted hypergraphs $F_1,\dots,F_k$, we consider the intersections of each $F_j$ with the $(k-1)$-cells of $\mathcal{J}$. A $k$-set which is rainbow for the $F_j$ can only have non-zero weight in either $H$ or $\tilde{H}$ if its $(k-1)$-subsets are also contained in, and rainbow for, the $(k-1)$-cells of $\mathcal{J}$, so it suffices to look at the $k!$ different rainbow intersections of the $F_j$ with the $(k-1)$-cells of $\mathcal{J}$. For each such intersection, the weights of rainbow $k$-sets in $H$ and in $\tilde{H}$ are unchanged from those in $G_i$ and $G''_i$. Since the latter form a $2\gamma$-discrepancy pair, the contribution to the discrepancy of $p^{-1}s^{-1}H$ and $\tilde{H}$ by any given rainbow intersection is at most $2\gamma$, as required. Applying Theorem~\ref{thm:CFZcount}, we have \[(p^{-1}s^{-1}H)(\oct{k}{\vec{2}^k})=\tilde{H}(\oct{k}{\vec{2}^k})\pm\delta \,,\] and by choice of $\delta$ we conclude that $G_i$ is $\varepsilon_k$-oct-regular with respect to $\mathcal{J}$, as desired. \end{proof} \subsection{Relating GPEs and regularity} As usual, when one has a family of partitions and wishes to embed a complex $H$ into it, one must choose the cells of the family of partitions into which the edges of $H$ of all sizes are to be embedded, and (if the embedding is to be done by regularity) the $k$-polyads to which we want to embed $k$-edges must be regular (and relatively dense if we want to obtain an embedding with large weight). Making this choice can be quite technically difficult; the Regular Slice Lemma of~\cite{ABCM}, which as mentioned has a sparse counterpart in our setting, can help. Once the choice is made, Theorems~\ref{thm:counting} and~\ref{thm:embedding} give respectively a two-sided counting lemma for small hypergraphs, and a one-sided counting lemma for potentially large hypergraphs. We now explain how to prove these two theorems.
What we need to do is explain how to construct a stack of candidate graphs such that the trivial partial embedding, in which no vertices are embedded, is a good partial embedding. One should usually think of $\Gamma$ as being a random graph, so that its density graph has all edges of uniformity less than $k$ of weight $1$, and all edges of uniformity $k$ of weight $p$ for some $p\in(0,1]$, and $\mathcal{G}$ as being the subgraph consisting of the chosen cells of the family of partitions into which we want to embed, together with the supported $k$-edges in the resulting $k$-polyads (though we will not actually use this idea, and the following lemma is true in more generality). \begin{lemma}\label{lem:getGPE} For all $k\ge 2$, finite sets $J$, and $J$-partite $k$-complexes $H$, given parameters $\eta_k$, $\eta_0$ and $\varepsilon_\ell$, $d_\ell$ for $1\le\ell\le k$ such that $0<\eta_0\ll d_1,\dotsc,d_k,\,\eta_k$, and for all $\ell$ we have $0<\varepsilon_{\ell}\ll d_{\ell},\dotsc,d_k,\,\eta_k$, the following holds. Let $c^*=\max\{2v(H)-1, 4k^2+k\}$. Suppose we are given any $J$-partite weighted $k$-graphs $\mathcal{G}\subseteq\Gamma$ and density graphs $\mathcal{D}$, $\mathcal{P}$, where $\Gamma$ is an $(\eta_0,c^*)$-THC graph for $H$ with density graph $\mathcal{P}$, and where for each $e\subseteq J$ of size $1\le\ell\le k$, the graph $\ind{\mathcal{G}}{V_e}$ is $\varepsilon_\ell$-regular with relative density $d(e)\ge d_\ell$ with respect to the graph obtained from $\ind{\mathcal{G}}{V_e}$ by replacing layer $\ell$ with $\Gamma$. We define a stack of candidate graphs $\cC^{(0)},\dots,\cC^{(k)}$ as follows. To begin with, we apply the standard construction to $\Gamma$ and $\mathcal{P}$ in order to obtain a $v(H)$-partite graph $\cC^{(0)}$ with density graph $\mathcal{D}^{(0)}$, and to $\mathcal{G}$ to obtain a $v(H)$-partite graph $\cC^{(k)}$. 
We then let $\cC^{(\ell)}$ consist of all edges of $\cC^{(k)}$ of uniformity less than or equal to $\ell$, and all edges of $\cC^{(0)}$ of uniformity greater than $\ell$. For each $1\le\ell\le k$, we let $\mathcal{D}^{(\ell)}$ have weight one on all edges of uniformity not equal to $\ell$, and weight equal to that of $\mathcal{D}$ on edges of uniformity $\ell$. Then the trivial partial embedding of $H$ is a $k$-GPE. \end{lemma} \begin{proof} We set $h^*=k(4k+1)+\vdeg(H)$. Recall (after Definition~\ref{def:ensemble}) that we can construct a valid ensemble of parameters by choosing them in the order given there, using minimum relative densities $\delta_1,\dots,\delta_k$ calculated to satisfy~\ref{gpe:dens} (using the supplied $d_1,\dots,d_k$), and the counting accuracy $\eta_k$; and we can further place, if necessary, upper bounds on the $\eta_\ell$ for $1\le\ell\le k-1$ in terms of the parameters for larger $\ell$. Suppose that such a valid ensemble of parameters is given, with $\varepsilon_\ell$ being the best-case regularity at each level $\ell$. By the assumption on $\Gamma$ we have~\ref{gpe:c0}, and~\ref{gpe:dens} holds by assumption. It remains to verify~\ref{gpe:reg}. For a given $1\le\ell\le k$, it is trivial to verify the regularity statement for an edge $e$ of uniformity less than $k$: either $\cC^{(\ell)}(e)$ is equal to $\cC^{(\ell-1)}(e)$ (in which case regularity is automatic), or $|e|=\ell$, in which case by assumption $\cC^{(\ell)}(e)$ is a sufficiently regular subgraph of $\cC^{(\ell-1)}(e)$. It remains to verify~\ref{gpe:reg} for edges of size $k$. This is not trivially true; we need to show that a subgraph of $\Gamma$ is regular with respect to a certain regular subgraph. However, it follows immediately from Lemma~\ref{lem:slicing}. \end{proof} Given Lemma~\ref{lem:getGPE}, Theorems~\ref{thm:counting} and~\ref{thm:embedding} are immediate corollaries of Lemmas~\ref{lem:GPEcount} and~\ref{lem:GPEemb} respectively.
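One way to picture the stack of candidate graphs is the following toy sketch (the notation is ours: complexes are stored as maps from edges to weights), in which the level-$\ell$ graph takes its edges of uniformity at most $\ell$ from the target complex and the rest from the ambient one:

```python
import itertools

def interpolating_stack(C0, Ck, k):
    """Candidate graphs C^(0), ..., C^(k): level l takes edges of
    uniformity <= l from Ck and edges of uniformity > l from C0.
    C0 and Ck are dicts mapping frozenset edges to weights and are
    assumed to have the same edge sets."""
    return [{e: (Ck[e] if len(e) <= l else C0[e]) for e in C0}
            for l in range(k + 1)]

# Toy example with k = 2 on three vertices: the target complex Ck halves
# the weight of every 2-edge of the ambient complex C0.
C0 = {frozenset(e): 1.0
      for r in (1, 2) for e in itertools.combinations(range(3), r)}
Ck = {e: (0.5 if len(e) == 2 else w) for e, w in C0.items()}
stack = interpolating_stack(C0, Ck, 2)
assert stack[0] == C0  # bottom level is the ambient complex
assert stack[2] == Ck  # top level is the target complex
```

The point of the interpolation is that consecutive levels differ in the weights of a single uniformity only, which is what the level-by-level regularity conditions of a GPE refer to.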
\section{Counting and embedding for GPEs}\label{sec:count} In this section we prove Lemmas~\ref{lem:onestep}, \ref{lem:GPEcount}, and~\ref{lem:GPEemb}. As mentioned above, we prove the first two lemmas together, by induction on $\ell$ in each lemma. We begin by assuming Lemma~\ref{lem:GPEcount} for $\ell'<\ell$ in order to prove the bound on $B_\ell(x)$ claimed in Lemma~\ref{lem:onestep}. We will use Lemma~\ref{lem:GPEcount} to show that the various counting conditions for Lemma~\ref{lem:k-inherit} are met; the rest is simply bookkeeping. \begin{proof}[Proof of Lemma~\ref{lem:onestep} for $\ell\ge 1$] If a vertex $v\in V_x$ is in $B_\ell(x)\setminus B_{\ell-1}(x)$, then by definition there is a failure of regularity in the graph $\cC^{(\ell)}_{x\mapsto v}$ (obtained by applying the update rule to $\cC^{(\ell)}$). Specifically, there is some $e\subseteq H\setminus\big(\dom(\phi)\cup\{x\}\big)$ such that, although $\tcC^{(\ell)}(e)$ is $\big(\varepsilon_{\ell,|e|,\pi_{\phi}(e)},d\big)$-regular (with $d$ as given in~\ref{gpe:reg}) with respect to $\ocC^{(\ell-1)}(e)$, the graph $\tcC^{(\ell)}_{x\mapsto v}(e)$ is not $\big(\varepsilon_{\ell,|e|,\pi_{\phi\cup\{x\mapsto v\}}(e)},d_x\big)$-regular (with $d_x$ as given in~\ref{gpe:reg}) with respect to $\ocC^{(\ell-1)}_{x\mapsto v}(e)$. First, observe that if $\pi_{\phi}(e)=\pi_{\phi\cup\{x\mapsto v\}}(e)$, then this failure of regularity is impossible: we have $\tcC^{(\ell)}(e)=\tcC^{(\ell)}_{x\mapsto v}(e)$ and $\ocC^{(\ell-1)}(e)=\ocC^{(\ell-1)}_{x\mapsto v}(e)$. Thus there is an edge of $H$ which contains both $x$ and at least one vertex of $e$; since there are at most $\Delta$ edges of $H$ containing $x$, each of whose at most $k-1$ other vertices are in at most $\Delta-1$ different edges of $H$, there are in total at most $\Delta+(k-1)\Delta(\Delta-1)\le k\Delta^2$ choices of $e$.
Thus, in order to prove the $\ell$ case of Lemma~\ref{lem:onestep}, it suffices to show that for any given non-empty $e\subseteq H\setminus\big(\dom(\phi)\cup\{x\}\big)$, the total weight of vertices $v$ in $\cC^{(\ell)}(x)$ such that $\tcC^{(\ell)}_{x\mapsto v}(e)$ is not $\big(\varepsilon_{\ell,|e|,\pi_{\phi}(e)+1},d_x\big)$-regular with respect to $\ocC^{(\ell-1)}_{x\mapsto v}(e)$ is at most $\varepsilon'_\ell\vnorm{V_x}{\cC^{(\ell-1)}(x)}$. The idea is that Lemma~\ref{lem:k-inherit} should provide this desired bound. To that end, let $V=V_x\cup\bigcup_{y\in e}V_y$, let $\Gamma:=\cC^{(\ell-1)}[V]$ with the inherited vertex partition, and let $\mathcal{G}$ be obtained from $\Gamma$ by replacing the edges in $V_e$ and $V_{\{x\}\cup e}$ with those from $\cC^{(\ell)}$. Now by~\ref{gpe:reg} the graphs $\mathcal{G}\cap V_e$ and $\mathcal{G}\cap V_{e\cup\{x\}}$ are both $\varepsilon:=\varepsilon_{\ell,|e|,\pi_\phi(e)}$-regular with densities $d,d'\ge \delta_\ell$ with respect to $\Gamma$. With $\varepsilon':=\varepsilon_{\ell,|e|,\pi_\phi(e)+1}$ and $d_x=dd'$, and by the update rule, failure of regularity in the sense of an $\ell$-GPE coincides by definition with failure to inherit regularity in Lemma~\ref{lem:k-inherit}. The conditions~\ref{ve:reginh} and~\ref{ve:sizeorder} state that the constants above are compatible with Lemma~\ref{lem:k-inherit} in this case, and the conclusion of Lemma~\ref{lem:k-inherit} is the desired bound. It only remains to show that all the conditions of Lemma~\ref{lem:k-inherit} are met. By construction, we have~\ref{inh:GGam}, while~\ref{inh:regJ} and~\ref{inh:regf} are given by~\ref{gpe:reg}. Thus to complete the proof of Lemma~\ref{lem:onestep} we only need to show that the counting condition~\ref{inh:count} holds. Write $s=\abs{e}$ and $e'=\{x\}\cup e$. Note that we have $0<s\le k$.
We now justify that for any given $k$-complex $R$ of the form $\oct{s+1}{\vec a}$ or $\oct{s+1}{0,\vec b}$ with $\vec a\in\{0,1,2\}^{e'}$ and $\vec b\in\{0,1,2\}^e$, we can accurately count $R$ in $\Gamma$. This verifies~\ref{inh:count}. We separate two cases. First, if $\ell=1$ then $\Gamma$ is an induced subgraph of $\cC^{(0)}$. By~\ref{gpe:c0}, $\cC^{(0)}$ is an $(\eta_0, c^*)$-THC graph, and thus by~\ref{thc:count} (using $c^*\ge 4k+1$) and~\ref{ve:count}, we obtain the required count immediately. The second, slightly more difficult case is $\ell>1$. Here we aim to deduce the required count of $R$ from the $\ell-1$ case of Lemma~\ref{lem:GPEcount} (which is valid by induction). We obtain a stack of candidate graphs by applying the standard construction (with the $e'$-partite $k$-complex $R$) to the graphs $\cC^{(i)}[V]$ for $i=0,\dots,k$. Now the required count follows immediately from Lemma~\ref{lem:GPEcount} and condition~\ref{ve:count} on $\eta_{\ell-1}$, provided that we can justify that the trivial partial embedding of $R$ (in which no vertices are embedded) together with this stack of candidate graphs forms an $(\ell-1)$-GPE. To do this we need to specify the valid ensemble of parameters we use. These are identical to the valid ensemble we are provided with, \emph{except} that we shift the indices for hits in the regularity parameters, that is, we use $\varepsilon_{\ell',r,h}$ with $h_0\le h\le h^*$ where $h_0 = \max\{\pi_\phi(f): \emptyset\ne f\subseteq e'\}$. Recall that we have $h_0 \le \vdeg(H)$. By construction the property~\ref{gpe:c0} for the trivial embedding of $R$ is implied by \ref{gpe:c0} for $\phi$ and $H$. For~\ref{gpe:reg}, we use the assumption that $\phi$ is an $(\ell-1)$-GPE, so for each edge $f$ and each $1\le\ell'\le\ell-1$, we have that $\tcC^{(\ell')}(f)$ is $\big(\varepsilon_{\ell',|f|,\pi_{\phi}(f)},d_f\big)$-regular (with $d_f$ as given in~\ref{gpe:reg}) with respect to $\ocC^{(\ell'-1)}(f)$.
Since $\pi_{\phi}(f)\le h_0$ we have $\varepsilon_{\ell',|f|,\pi_{\phi}(f)}\le\varepsilon_{\ell',|f|,h_0}$, and so indeed the trivial partial embedding of $R$ with the given stack of candidate graphs satisfies the conditions~\ref{gpe:reg} for $1\le\ell'\le\ell-1$ and the shifted regularity parameters. Finally we must verify that the shifted ensemble is valid and suitable for use in Lemma~\ref{lem:GPEcount}. The `length' of the sequences of shifted regularity parameters is $h_0^*:=h^*-h_0\le h^*$, hence~\ref{ve:worst} and~\ref{ve:count} are implied by the same conditions for the unshifted ensemble. The property~\ref{ve:reginh} is unchanged by shifting, and~\ref{ve:sizeorder} holds because we have $\varepsilon_{\ell,r+1,h_0^*}\le\varepsilon_{\ell,r+1,h_0}\le\varepsilon_{\ell,r,0}\le\varepsilon_{\ell,r,h_0}$. For counting $R$ with the height $\ell-1$ case of Lemma~\ref{lem:GPEcount} we need $c^*\ge\max\{8k+1, (\ell-1)(4k+1)\}$, \[ h_0^*\ge h^*-\Delta'\ge \ell(4k+1) \ge (\ell-1)(4k+1)+\vdeg(R)\,, \] and $(4k+1)\eta_\ell\le 1/2$, which hold for this case by the assumptions of Lemma~\ref{lem:onestep} because $\ell\ge 2$. \end{proof} The second part of the intertwined induction is a proof of Lemma~\ref{lem:GPEcount}. We first give a proof of Lemma~\ref{lem:GPEemb} which assumes Lemma~\ref{lem:onestep} (for $\ell\le k$), because it serves as a good introduction to aspects of the method without the notation necessary for the induction on $\ell$, or calculations involving bad vertices. \begin{proof}[Proof of Lemma~\ref{lem:GPEemb}] We prove Lemma~\ref{lem:GPEemb} by induction on $r=v(H)-\abs{\dom\phi}$, assuming the $\ell\le k$ cases of Lemma~\ref{lem:onestep}. The statement for $r=0$ is a tautology, since then $H-\dom\phi=\emptyset$ and the empty set appears identically on both sides of the required count. For $r=1$, the statement follows directly from the definition of a GPE, without the need to apply Lemma~\ref{lem:onestep}.
The empty set is dealt with explicitly, so here we consider the weights $\cC^{(\ell)}(x)$ as functions on $V_x$, and by the \emph{density} of $\cC^{(\ell)}(x)$ we mean $\vnorm{V_x}{\cC^{(\ell)}(x)}$. Let $V(H)\setminus\dom\phi=\{x\}$, and note that by \ref{gpe:c0} we know that $\cC^{(0)}(x)$ has density \[ \vnorm{V_x}{\cC^{(0)}(x)}=(1\pm\eta_0)d^{(0)}_\phi(x)\,, \] and by \ref{gpe:reg}, for each $1\le\ell'\le\ell$, the graph $\cC^{(\ell')}(x)$ is a subgraph (in the sense of a weighted $1$-graph) of $\cC^{(\ell'-1)}(x)$ of relative density \[ d^{(\ell')}_\phi(x)\pm\varepsilon_{\ell'}'\,. \] Thus $\cC^{(k)}(x)$ has density \begin{align}\label{eq:GPEemb:cldens} (1\pm\eta_0)d^{(0)}_\phi(x)\prod_{\ell\in[k]}\left(1\pm\frac{\varepsilon'_{\ell}}{\delta_{\ell}}\right)d^{(\ell)}_\phi(x) &= (1\pm\eta_k)\prod_{0\le\ell\le k} d^{(\ell)}_\phi(x)\,, \end{align} because we have a valid ensemble of parameters ensuring for $\ell\in[k]$ that $ \eta_0,\, \varepsilon_{\ell}' \ll \delta_{\ell},\, \eta_{k},\,k $ by \ref{ve:worst}. Multiplied by the weight $c^{(k)}(\emptyset)$, this is the desired expression for $\cC^{(k)}(H-\dom\phi)$ in the case $r=1$. For $r\ge2$, fix any $x\in V(H)\setminus\dom\phi$. We will consider embedding $x$ to some $v\in V_x$ and use induction on $r$ to count the contribution from good choices of $v$. The key observation is that the update rule implies \[ \cC^{(\ell)}(H-\dom\phi) = \Ex[\big]{\cC^{(\ell)}_{x\mapsto v}\big(H-\dom\phi-\{x\}\big)}\,, \] where the expectation is over a uniformly random choice of $v\in V_x$.
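The displayed identity is simply an averaging over the image of $x$, with the pair weights at $v$ folded into the remaining candidate weights by the update rule. The following toy numerical check (toy weights of our own choosing, for a weighted count along a path $x$--$y$--$z$) illustrates it:

```python
import itertools

# Toy check of the vertex-exposure identity: for a weighted count over a
# path x - y - z, averaging first over the image v of x and folding the
# factor w_xy(v, .) into the weights seen by the remaining vertices (the
# "update rule") gives the same normalised count as averaging over all
# placements at once.
V = {"x": [0, 1], "y": [0, 1, 2], "z": [0, 1]}
w_xy = {(a, b): 0.5 if (a + b) % 2 else 1.0 for a in V["x"] for b in V["y"]}
w_yz = {(b, c): 0.25 * (b + c + 1) for b in V["y"] for c in V["z"]}

direct = sum(w_xy[a, b] * w_yz[b, c]
             for a in V["x"] for b in V["y"] for c in V["z"])
direct /= len(V["x"]) * len(V["y"]) * len(V["z"])

exposed = 0.0
for a in V["x"]:  # embed x at v = a and update the remaining weights
    updated = {(b, c): w_xy[a, b] * w_yz[b, c] for b in V["y"] for c in V["z"]}
    exposed += sum(updated.values()) / (len(V["y"]) * len(V["z"]))
exposed /= len(V["x"])

assert abs(direct - exposed) < 1e-12
```

The identity itself is exact (it is an instance of Fubini's theorem); all the work in the proof lies in controlling the conditional counts for good and bad choices of $v$.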
We separate three types of density term in the desired counting statement: $d^{(\ell)}(\emptyset)$ terms, $d^{(\ell)}(x)$ terms, and the remaining terms, for which we write \begin{equation}\label{eq:xi} \xi(\ell) :=\frac{\mathcal{D}^{(\ell)}_{\phi}\big(H-\dom\phi\big)}{d^{(\ell)}_{\phi}(\emptyset)d^{(\ell)}_{\phi}(x)} =\frac{\mathcal{D}^{(\ell)}_{\phi\cup\{x\mapsto v\}}\big(H-\dom\phi-x\big)}{d^{(\ell)}_{\phi\cup\{x\mapsto v\}}(\emptyset)}\,, \end{equation} where the second expression for $\xi(\ell)$ comes from the update rule. Note that, despite the appearance of $v$ in the notation on the right-hand side, $\xi(\ell)$ is a ratio of density terms which does not depend on the choice of $v$. The weight of the empty set is dealt with explicitly; analysing the choice of $v\in V_x$ gives the $d^{(\ell)}_\phi(x)$ terms, and the $\xi(\ell)$ terms are found by induction on $r$. We can afford to ignore bad vertices for a lower bound; we merely need to estimate $\vnorm{V_x\setminus B_k(x)}{\cC^{(k)}(x)}$ with Lemma~\ref{lem:onestep}. For $B_0(x)$ we have \begin{equation}\label{eq:GPEemb:B0} \vnorm{B_0(x)}{\cC^{(k)}(x)}\le\vnorm{B_0(x)}{\cC^{(0)}(x)}\le\eta_0\vnorm{V_x}{\cC^{(0)}(x)}\le2\eta_0d^{(0)}_\phi(x) \end{equation} by \ref{gpe:c0} and the condition~\ref{thc:hered} of a THC-graph. The same argument as for~\eqref{eq:GPEemb:cldens} gives that the density of $\cC^{(\ell')}(x)$ satisfies \begin{equation}\label{eq:GPEemb:sizeVx} \vnorm{V_x}{\cC^{(\ell')}(x)}=(1\pm\eta_0)d^{(0)}_\phi(x)\prod_{\ell''\in[\ell']}\Big(1\pm\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big)d^{(\ell'')}_\phi(x) = (1\pm\eta_{\ell'})\prod_{0\le\ell''\le\ell'} d^{(\ell'')}_\phi(x)\,.
\end{equation} Then by Lemma~\ref{lem:onestep} and \eqref{eq:GPEemb:sizeVx}, we calculate for $1\le\ell\le k$ the bound \begin{align} \vnorm{B_{\ell}(x)\setminus B_{\ell-1}(x)}{\cC^{(k)}(x)} &\le \vnorm{B_{\ell}(x)\setminus B_{\ell-1}(x)}{\cC^{(\ell-1)}(x)} \\&\le k\Delta^2\varepsilon'_{\ell}\cdot\vnorm{V_x}{\cC^{(\ell-1)}(x)} \\&\le 2k\Delta^2\varepsilon'_{\ell}\prod_{0\le\ell'<\ell} d^{(\ell')}_\phi(x)\,.\label{eq:GPEemb:badx} \end{align} We next give a short calculation which shows that \begin{equation}\label{eq:GPEemb:goodvs} \vnorm{V_x\setminus B_k(x)}{\cC^{(k)}(x)} \ge (1-\eta_k)\prod_{0\le\ell\le k} d^{(\ell)}_\phi(x)\,, \end{equation} by a careful collection of density terms and `compensating' error terms from lower levels of the stack. We have \begin{align} \vnorm{V_x\setminus B_k(x)}{\cC^{(k)}(x)} &\ge \vnorm{V_x}{\cC^{(k)}(x)} - \vnorm{B_0(x)}{\cC^{(0)}(x)} - \sum_{\ell\in[k]} \vnorm{B_{\ell}(x)\setminus B_{\ell-1}(x)}{\cC^{(k)}(x)} \\&\ge \left( (1-\eta_0)\prod_{\ell'\in[k]}\left(1-\frac{\varepsilon'_{\ell'}}{\delta_{\ell'}}\right)-\frac{2\eta_0}{\prod_{\ell'\in[k]}\delta_{\ell'}}-\sum_{\ell\in[k]}\frac{2k\Delta^2\varepsilon_\ell'}{\prod_{\ell'=\ell}^k\delta_{\ell'}}\right)\prod_{0\le\ell\le k}d^{(\ell)}_\phi(x) \\&\ge (1-\eta_k)\prod_{0\le\ell\le k}d^{(\ell)}_\phi(x)\,. \end{align} The $\delta_{\ell'}$ terms in the denominators of the second line correspond to `missing densities' lost because we can only account for failure of a regularity condition in level $\ell'$ of the stack with the regularity properties of that level. We can afford to write $\delta_{\ell'}$ terms instead of $d^{(\ell')}_\phi(x)$ because we have $\delta_{\ell'}\le d^{(\ell')}_\phi(x)$ by~\ref{gpe:dens}.
For $B_0(x)$ the missing densities are for levels $\ell'\in[k]$ but there is a very small $\eta_0$ to compensate, and for $B_{\ell}(x)\setminus B_{\ell-1}(x)$ we have a product of missing densities from levels $\ell$ to $k$ of the stack, but a comparatively small $\varepsilon_\ell'$ to compensate. With \eqref{eq:GPEemb:goodvs} in hand, we finish the proof with the induction on $r$. For any $v\in V_x\setminus B_k(x)$, note that applying the induction hypothesis is valid as the required lower bounds on $c^*$, $h^*$ still hold, and we have \begin{align} \cC^{(k)}_{x\mapsto v}(H-\dom\phi-x) &\ge (1-\eta_k)^{r-1}\tfrac{c^{(k)}_{x\mapsto v}(\emptyset)}{\prod_{0\le\ell\le k}d_{\phi\cup\{x\mapsto v\}}^{(\ell)}(\emptyset)}\prod_{0\le\ell\le k}\mathcal{D}^{(\ell)}_{\phi\cup\{x\mapsto v\}}\big(H-\dom\phi-x\big) \\&= (1-\eta_k)^{r-1}c^{(k)}_{x\mapsto v}(\emptyset)\prod_{0\le\ell\le k} \xi(\ell)\,, \end{align} where we have separated out the only term $c^{(k)}_{x\mapsto v}(\emptyset)$ which depends on $v$, so that the remaining product over $\ell$ is independent of $v$. By the update rule we have $c^{(k)}_{x\mapsto v}(\emptyset)=c^{(k)}(\emptyset)c^{(k)}(v)$, which gives \begin{align} \cC^{(k)}(H-\dom\phi) &= \Ex[\Big]{\cC^{(k)}_{x\mapsto v}\big(H-\dom\phi-\{x\}\big)}[v\in V_x] \\&\ge (1-\eta_k)^{r-1}c^{(k)}(\emptyset)\vnorm{V_x\setminus B_k(x)}{\cC^{(k)}(x)}\prod_{0\le\ell\le k}\xi(\ell) \\&= (1-\eta_k)^r\tfrac{c^{(k)}(\emptyset)}{\prod_{0\le\ell\le k}d_\phi^{(\ell)}(\emptyset)}\prod_{0\le\ell\le k}\mathcal{D}^{(\ell)}_\phi(H-\dom\phi)\,, \end{align} where for the last line we observe that density terms involving $x$ are taken care of by $\vnorm{V_x\setminus B_k(x)}{\cC^{(k)}(x)}$ via \eqref{eq:GPEemb:goodvs}, and the other terms are given by $\xi$ via~\eqref{eq:xi}. \end{proof} The proof for Lemma~\ref{lem:GPEcount} is similar, but we must proceed by induction on the height $\ell$ of the GPE and handle bad vertices more carefully.
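The bookkeeping behind this error-collection step can be verified numerically. The following sketch uses toy constants of our own choosing which respect the hierarchy $\eta_0,\varepsilon'_\ell\ll\delta_\ell,\eta_k$; it checks that the bracket in the second line of the display above is indeed at least $1-\eta_k$:

```python
# Sanity check (toy constants, ours) of the error collection: each
# bad-vertex class B_l(x) \ B_{l-1}(x) costs a product of "missing
# densities" delta_l ... delta_k in the denominator, but carries a much
# smaller compensating factor eta_0 or eps'_l in the numerator.
k, Delta = 3, 4
eta0, eta_k = 1e-12, 1e-2
delta = {1: 0.1, 2: 0.1, 3: 0.1}
eps = {1: 1e-9, 2: 1e-8, 3: 1e-7}  # eps'_l << delta_l, ..., delta_k

main = 1 - eta0
for l in range(1, k + 1):
    main *= 1 - eps[l] / delta[l]

prod_all = 1.0
for l in range(1, k + 1):
    prod_all *= delta[l]
loss0 = 2 * eta0 / prod_all        # cost of B_0(x)

loss = 0.0
for l in range(1, k + 1):          # cost of B_l(x) \ B_{l-1}(x)
    tail = 1.0
    for lp in range(l, k + 1):
        tail *= delta[lp]
    loss += 2 * k * Delta**2 * eps[l] / tail

assert main - loss0 - loss >= 1 - eta_k
```

The check makes visible why a valid ensemble must choose $\varepsilon'_\ell$ small in terms of all of $\delta_\ell,\dots,\delta_k$: the compensating factor for level $\ell$ must beat the whole tail product of missing densities.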
For the latter consideration, we use the following consequence of the Cauchy--Schwarz inequality, which we prove along with several related tools in Section~\ref{sec:CS}. \begin{restatable}{lemma}{ECSdist}\label{lem:ECSdist} Let $W$, $X$, and $Y$ be discrete random variables such that $W$ takes values in $[0,1]$, $X$ takes values in the non-negative reals, and $Y$ is real-valued. Suppose also that for $0\le\varepsilon\le 1$ and $d\ge0$ we have \begin{align} \Ex{XY}&=(1\pm\varepsilon)d\cdot\Ex{X}& &\text{and}& \Ex{XY^2}&\le(1+\varepsilon)d^2\cdot\Ex{X}\,. \end{align} Then \begin{align} \Ex{WXY} &= \left(1-\varepsilon\pm 2\sqrt{\frac{\varepsilon\Ex{X}}{\Ex{WX}}}\,\right)d\cdot \Ex{WX}\,, \shortintertext{and} \Ex{WXY^2} &= \left(1-2\varepsilon\pm7\sqrt{\varepsilon}\frac{\Ex{X}}{\Ex{WX}}\right)d^2\cdot \Ex{WX}\,. \end{align} \end{restatable} \begin{proof}[Proof of Lemma~\ref{lem:GPEcount} for $\ell\ge 1$] Given $\ell$, we prove the height $\ell$ case of Lemma~\ref{lem:GPEcount} by induction on $r=v(H)-\abs{\dom\phi}$, assuming the $\ell'\le\ell$ cases of Lemma~\ref{lem:onestep} and $\ell'<\ell$ cases of Lemma~\ref{lem:GPEcount}. As in the previous proof, the case $r=0$ is a tautology, and the statement for $r=1$ follows directly from the definition of $\ell$-GPE. The same applications of properties~\ref{gpe:c0}, \ref{gpe:reg}, and~\ref{ve:worst} as for~\eqref{eq:GPEemb:cldens} and~\eqref{eq:GPEemb:sizeVx} give again \begin{equation}\label{eq:GPEcount:sizeVx} \vnorm{V_x}{\cC^{(\ell')}(x)}=(1\pm\eta_0)d^{(0)}_\phi(x)\prod_{\ell''\in[\ell']}\Big(1\pm\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big)d^{(\ell'')}_\phi(x) = (1\pm\eta_{\ell'})\prod_{0\le\ell''\le\ell'} d^{(\ell'')}_\phi(x)\,. \end{equation} When $r=1$, with $\ell'=\ell$, and multiplied by the factor $c^{(\ell)}(\emptyset)$, this is the desired statement. Now given $r\ge 2$, fix $x\in V(H)\setminus\dom\phi$.
We use the statement of Lemma~\ref{lem:GPEcount} for heights $\ell'<\ell$ and with the complex $H-x$, and (the induction assumption in this proof) for height $\ell$. We have a partition of $V_x$ into the bad vertices $B_0(x)$, and $B_{\ell'}(x)\setminus B_{\ell'-1}(x)$ for $\ell'\in[\ell]$, and the good vertices $V_x\setminus B_\ell(x)$. As in the previous proof, we separately consider density terms for $\emptyset$, $x$, and the ones of the form $\xi(\ell')$ obtained via the induction on $r$. From~\eqref{eq:xi} recall that $\xi$ is independent of $v$. The desired counting statement is then \[ \cC^{(\ell)}(H-\dom\phi) = (1\pm r\eta_\ell)c^{(\ell)}(\emptyset)\prod_{0\le\ell'\le\ell}d^{(\ell')}_\phi(x)\xi(\ell')\,. \] As in the previous proof, by Lemma~\ref{lem:onestep}, \eqref{eq:GPEcount:sizeVx}, and the fact that in any valid ensemble we have $\eta_{\ell'-1}<1$ for all $\ell'$, we calculate for $1\le\ell'\le\ell$ the bound \begin{align} \vnorm{B_{\ell'}(x)\setminus B_{\ell'-1}(x)}{\cC^{(\ell')}(x)} &\le \vnorm{B_{\ell'}(x)\setminus B_{\ell'-1}(x)}{\cC^{(\ell'-1)}(x)} \\&\le k\Delta^2\varepsilon'_{\ell'}\cdot\vnorm{V_x}{\cC^{(\ell'-1)}(x)} \\&\le 2k\Delta^2\varepsilon'_{\ell'}\prod_{0\le\ell''<\ell'} d^{(\ell'')}_\phi(x)\,.\label{eq:GPEcount:badx} \end{align} By definition, for each $v\in V_x\setminus B_{\ell'}(x)$, the partial embedding $\phi\cup\{x\mapsto v\}$ together with the stack of candidate graphs $\cC^{(0)}_{x\mapsto v},\dots,\cC^{(\ell')}_{x\mapsto v}$ obtained by the update rule is an $\ell'$-GPE.
Applying for each $1\le\ell'\le\ell$ the $\ell'$ case of Lemma~\ref{lem:GPEcount} with the partial embedding $\phi\cup\{x\mapsto v\}$ and updated candidate graphs (where we note that $c^*$ and $h^*$ are large enough and $\eta_{\ell'}$ small enough for this to be valid), it follows that for each such choice of $v$ we have \begin{align} \cC^{(\ell')}_{x\mapsto v}(H-\dom\phi-x)&=\big(1\pm(r-1)\eta_{\ell'}\big)\tfrac{c^{(\ell')}_{x\mapsto v}(\emptyset)}{\prod_{0\le\ell''\le\ell'}d_{\phi\cup\{x\mapsto v\}}^{(\ell'')}(\emptyset)}\prod_{0\le\ell''\le\ell'}\mathcal{D}^{(\ell'')}_{\phi\cup\{x\mapsto v\}}(H-\dom\phi-x)\\ &=\big(1\pm(r-1)\eta_{\ell'}\big)c^{(\ell')}(\emptyset)c^{(\ell')}(v)\cdot\prod_{0\le\ell''\le\ell'}\xi(\ell'')\,, \end{align} where the second line follows from the update rule and definition of $\xi$. We carefully account for the empty set in level $\ell$ and not below. Then by the fact that $\cC^{(\ell)}\le\cC^{(\ell')}$, for each $1\le\ell'\le\ell$ and $v\in V_x\setminus B_{\ell'}(x)$ we have \begin{equation}\label{eq:GPEcount:countell} \begin{aligned} \cC^{(\ell)}_{x\mapsto v}(H-\dom\phi-x) &\le\tfrac{c^{(\ell)}(\emptyset)}{c^{(\ell')}(\emptyset)}\cdot\cC^{(\ell')}_{x\mapsto v}(H-\dom\phi-x) \\&=\big(1\pm(r-1)\eta_{\ell'}\big)c^{(\ell)}(\emptyset)c^{(\ell')}(v)\cdot\prod_{0\le\ell''\le\ell'}\xi(\ell'')\,. \end{aligned} \end{equation} Putting \eqref{eq:GPEcount:sizeVx}, \eqref{eq:GPEcount:badx}, and \eqref{eq:GPEcount:countell} together will give us the required lower bound on $\cC^{(\ell)}(H-\dom\phi)$, but for the upper bound we still need to show that the contribution made by $v\in B_0(x)$ is small.
Letting $H'$ be the $k$-complex on $r':=2r-1\le c^*$ vertices obtained by taking two disjoint copies of $H$ and identifying each vertex in $\dom\phi\cup\{x\}$ with the corresponding vertex in the other copy, we have the following counts in the bottom level of the stack by \ref{gpe:c0}, \begin{align} \cC^{(0)}(H-\dom\phi) &= (1\pm r\eta_0)\tfrac{c^{(0)}(\emptyset)}{d_\phi^{(0)}(\emptyset)}\mathcal{D}^{(0)}_\phi(H-\dom\phi) \\&= (1\pm r\eta_0)c^{(0)}(\emptyset)d^{(0)}_\phi(x)\xi(0)\,, \label{eq:GPE-EXYbd}\\ \cC^{(0)}(H'-\dom\phi) &=(1\pm r'\eta_0)\tfrac{c^{(0)}(\emptyset)}{d_\phi^{(0)}(\emptyset)}\mathcal{D}^{(0)}_\phi(H'-\dom\phi) \\&=(1\pm r'\eta_0)c^{(0)}(\emptyset)d^{(0)}_\phi(x)\xi(0)^2\,.\label{eq:GPE-EXY2bd} \end{align} From this, we apply Lemma~\ref{lem:ECSdist} to the experiment of choosing a uniform random $v\in V_x$, with \begin{align} X&:=c^{(0)}_{x\mapsto v}(\emptyset)\,, & Y&:=\cC^{(0)}_{x\mapsto v}(H-\dom\phi -x)/c^{(0)}_{x\mapsto v}(\emptyset)\,, & W&:=\mathbbm{1}_{v\in B_0(x)}\,. \end{align} Property~\ref{gpe:c0} gives $\Ex{X}=(1\pm\eta_0)c^{(0)}(\emptyset)d^{(0)}_\phi(x)$, and statements~\eqref{eq:GPE-EXYbd} and~\eqref{eq:GPE-EXY2bd} give bounds on $\Ex{XY}$ and $\Ex{XY^2}$. We also have $\Ex{WX}\le\eta_0\Ex{X}$ by \ref{gpe:c0} and condition~\ref{thc:hered}. Hence we conclude \begin{equation} \Ex{WXY}\le 5r'\eta_0 \cdot c^{(0)}(\emptyset) d^{(0)}_\phi(x)\xi(0) \le 10\eta_0 c^*\cdot c^{(0)}(\emptyset)d^{(0)}_\phi(x)\xi(0)\,. \end{equation} Again, taking care to deal with the empty set in level $\ell$, we deduce the upper bound \begin{equation}\label{eq:GPEcount:b0bound} 10\eta_0c^*\cdot c^{(\ell)}(\emptyset)d^{(0)}_\phi(x)\xi(0) \end{equation} on the contribution to $\cC^{(\ell)}(H-\dom\phi)$ from vertices $v\in B_0(x)$.
To complete the proof we substitute these bounds into the expression \begin{align*} \cC^{(\ell)}(H-\dom\phi) &= \Ex[\Big]{\indicator{v\notin B_{\ell}(x)}\cC^{(\ell)}_{x\mapsto v}\big(H-\dom\phi-\{x\}\big)} \\&\qquad\pm \sum_{\ell'\in[\ell]}\Ex[\Big]{\indicator{v\in B_{\ell'}(x)\setminus B_{\ell'-1}(x)}\cC^{(\ell)}_{x\mapsto v}\big(H-\dom\phi-\{x\}\big)} \\&\qquad\pm \Ex[\Big]{\indicator{v\in B_0(x)}\cC^{(0)}_{x\mapsto v}\big(H-\dom\phi-\{x\}\big)}\,. \end{align*} Using~\eqref{eq:GPEcount:sizeVx},~\eqref{eq:GPEcount:badx}, \eqref{eq:GPEcount:countell}, and~\eqref{eq:GPEcount:b0bound}, we obtain \begin{align*} \cC^{(\ell)}(H-\dom\phi) &= \big(1\pm(r-1)\eta_\ell\big)c^{(\ell)}(\emptyset)\vnorm{V_x\setminus B_\ell(x)}{\cC^{(\ell)}(x)} \cdot \prod_{0\le\ell''\le\ell}\xi(\ell'') \\&\qquad\pm \sum_{\ell'\in[\ell]}\big(1+(r-1)\eta_{\ell'}\big)c^{(\ell)}(\emptyset)\vnorm{B_{\ell'}(x)\setminus B_{\ell'-1}(x)}{\cC^{(\ell')}(x)}\cdot \prod_{0\le\ell''\le\ell'}\xi(\ell'') \\&\qquad\pm 10\eta_0c^*\cdot c^{(\ell)}(\emptyset)d^{(0)}_\phi(x)\xi(0) \\&= \big(1\pm(r-1)\eta_\ell\big)c^{(\ell)}(\emptyset)\bigg(1\pm\frac{\vnorm{B_\ell(x)}{\cC^{(\ell)}(x)}}{\vnorm{V_x}{\cC^{(\ell)}(x)}}\bigg)(1\pm\eta_0)\bigg(\prod_{\ell''\in[\ell]}\Big(1+\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big)\bigg)\cdot \prod_{0\le\ell''\le\ell}d^{(\ell'')}_\phi(x)\xi(\ell'') \\&\qquad\pm \sum_{\ell'\in[\ell]}\big(1+(r-1)\eta_{\ell'}\big)c^{(\ell)}(\emptyset)\cdot2k\Delta^2\frac{\varepsilon'_{\ell'}}{d^{(\ell')}_\phi(x)}\cdot\prod_{0\le\ell''\le\ell'}d^{(\ell'')}_\phi(x)\xi(\ell'') \\&\qquad\pm 10\eta_0c^*\cdot c^{(\ell)}(\emptyset)d^{(0)}_\phi(x)\xi(0)\,. \end{align*} This is almost the desired statement.
By collecting terms we have \[ \cC^{(\ell)}(H-\dom\phi) = (1\pm r\eta_\ell)c^{(\ell)}(\emptyset)\prod_{0\le\ell''\le\ell}d^{(\ell'')}_\phi(x)\xi(\ell'') \,, \] where the relative error is given by $r\eta_\ell$, provided the following holds: \begin{align} 1+r\eta_\ell &\ge \big(1+(r-1)\eta_\ell\big)\bigg(1+\frac{\vnorm{B_\ell(x)}{\cC^{(\ell)}(x)}}{\vnorm{V_x}{\cC^{(\ell)}(x)}}\bigg)(1+\eta_0)\prod_{\ell''\in[\ell]}\Big(1+\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big) \\&\qquad + \sum_{\ell'\in[\ell]}\big(1+(r-1)\eta_{\ell'}\big) 2k\Delta^2\cdot \frac{\varepsilon'_{\ell'}}{\delta_{\ell'}\prod_{\ell'<\ell''\le\ell} \delta_{\ell''}\xi(\ell'')} \\&\qquad + \frac{10\eta_0c^*}{\prod_{0<\ell''\le\ell} \delta_{\ell''}\xi(\ell'')}\,. \end{align} The definition of a valid ensemble is chosen to make this inequality hold. Considering the right-hand side, the first line can be made at most $1+(r-2/3)\eta_\ell$. Each of the two remaining terms can be made at most $\eta_\ell/3$. Essentially the point is that where we have products of `missing' minimum densities in the denominator of error terms, there is an $\varepsilon'_{\ell'}$ or $\eta_0$ to compensate in the numerator. The $\varepsilon_{\ell'}'$ parameters are chosen to be small enough to compensate for any product of minimum densities from the same level or higher, and $\eta_0$ is small enough to compensate for any densities in levels above $0$. Here we require the upper bound on $r$, since it implies the $\xi(\ell')$ terms corresponding to edges remaining after $x$ is embedded cannot be too small. We give the required calculations below, relying on the facts that for all $\ell'\ge 1$, we have \begin{align}\label{eq:GPEcount:densitylbs} \delta_{\ell'} &\le d^{(\ell')}_\phi(x)\,,& \delta_{\ell'}^{c^*-1} &\le \xi(\ell')\,. \end{align} The first bound states that the contribution to the final count at level $\ell'$ from embedding $x$ is at least $\delta_{\ell'}$, which holds by assumption: $\delta_{\ell'}$ is a minimum density.
Then with $2r-1\le c^*$ the first inequality implies the second because $\xi(\ell')$ is a product over the remaining $r-1$ vertices of their contributions. The next claim deals with the smaller two error terms, and a subsequent claim deals with the main term. \begin{claim} \ref{ve:worst} implies both \begin{gather} \sum_{\ell'\in[\ell]}\big(1+(r-1)\eta_{\ell'}\big) 2k\Delta^2\cdot \frac{\varepsilon'_{\ell'}}{\delta_{\ell'}\prod_{\ell'<\ell''\le\ell} \delta_{\ell''}\xi(\ell'')} \le \frac{\eta_\ell}{3}\,, \shortintertext{and} \frac{10\eta_0c^*}{\prod_{0<\ell''\le\ell} \delta_{\ell''}\xi(\ell'')} \le \frac{10\eta_0c^*}{\prod_{0<\ell''\le\ell} \delta_{\ell''}^{c^*}} \le \frac{\eta_\ell}{3}\,. \end{gather} \end{claim} \begin{claimproof} For the first statement, since we have $(r-1)\eta_{\ell'}\le 1/2$ and \eqref{eq:GPEcount:densitylbs} it suffices to ensure that \[ \varepsilon_{\ell'}' \le \frac{\eta_\ell}{9k^2\Delta^2} \delta_{\ell'}\prod_{\ell'<\ell''\le\ell} \delta_{\ell''}^{c^*} \] for each $\ell'\in[\ell]$, which holds by \ref{ve:worst}. In the second statement the first inequality holds by \eqref{eq:GPEcount:densitylbs}, and the second holds by \ref{ve:worst}. \end{claimproof} \begin{claim} \ref{ve:worst} implies \[ \big(1+(r-1)\eta_\ell\big)\bigg(1+\frac{\vnorm{B_\ell(x)}{\cC^{(\ell)}(x)}}{\vnorm{V_x}{\cC^{(\ell)}(x)}}\bigg)(1+\eta_0)\prod_{\ell''\in[\ell]}\Big(1+\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big) \le 1+\Big(r-\frac{2}{3}\Big)\eta_\ell \] \end{claim} \begin{claimproof} First we bound $\vnorm{B_\ell(x)}{\cC^{(\ell)}(x)}$. By~\eqref{eq:GPEcount:sizeVx}, \eqref{eq:GPEcount:badx}, and $\vnorm{B_0(x)}{\cC^{(0)}(x)}\le\eta_0\vnorm{V_x}{\cC^{(0)}(x)}$, we have \begin{align} \vnorm{B_\ell(x)}{\cC^{(\ell)}(x)} &\le \vnorm{B_0(x)}{\cC^{(0)}(x)}+\sum_{\ell'\in[\ell]}\vnorm{B_{\ell'}(x)\setminus B_{\ell'-1}(x)}{\cC^{(\ell')}(x)} \\&\le 2\eta_0d^{(0)}_\phi(x) + \sum_{\ell'\in[\ell]}2k\Delta^2\varepsilon_{\ell'}'\prod_{0\le\ell''<\ell'}d^{(\ell'')}_\phi(x)\,. 
\end{align} Hence (using that $\eta_\ell<1/2$), we have \begin{align} \frac{\vnorm{B_\ell(x)}{\cC^{(\ell)}(x)}}{\vnorm{V_x}{\cC^{(\ell)}(x)}} &\le \frac{4\eta_0}{\prod_{\ell'\in[\ell]}d^{(\ell')}(x)}+ \sum_{\ell'\in[\ell]}4k\Delta^2\frac{\varepsilon_{\ell'}'}{\prod_{\ell'\le\ell''\le\ell}d^{(\ell'')}(x)} \\&\le \frac{4\eta_0}{\prod_{\ell'\in[\ell]}\delta_{\ell'}}+ \sum_{\ell'\in[\ell]}4k\Delta^2\frac{\varepsilon_{\ell'}'}{\prod_{\ell'\le\ell''\le\ell}\delta_{\ell''}}\,. \end{align} For the claim, by $r\eta_\ell<1/2$ it now suffices to show \[ \Bigg(1+\frac{4\eta_0}{\prod_{\ell'\in[\ell]}\delta_{\ell'}}+ \sum_{\ell'\in[\ell]}4k\Delta^2\frac{\varepsilon_{\ell'}'}{\prod_{\ell'\le\ell''\le\ell}\delta_{\ell''}}\Bigg)(1+\eta_0)\prod_{\ell''\in[\ell]}\Big(1+\frac{\varepsilon'_{\ell''}}{\delta_{\ell''}}\Big) \le 1+\frac{\eta_\ell}{9} \le \frac{1+(r-2/3)\eta_\ell}{1+(r-1)\eta_\ell}\,. \] We use that $\eta_\ell$ is sufficiently small in terms of $k$. The first bracketed term and $(1+\eta_0)$ are each at most $1+\eta_\ell/36$ by \ref{ve:worst}, and similarly we have \[ 1+\frac{\varepsilon_{\ell''}'}{\delta_{\ell''}} \le 1+\frac{\eta_\ell}{72k} \le \Big(1+\frac{\eta_\ell}{36}\Big)^{1/k}\,, \] which shows the product over $[\ell]$ is also at most $1+\eta_\ell/36$. It follows that $(1+\eta_\ell/36)^3\le 1+\eta_\ell/9$ as required. \end{claimproof} This completes the proof of Lemma~\ref{lem:GPEcount}. \end{proof} \section{Homomorphism counts and the Cauchy--Schwarz inequality}\label{sec:CS} We can always consider a $k$-complex on $t$ vertices as $t$-partite with parts of size one, and in this case we represent the sum giving $\mathcal{H}(F)$ by an expectation as follows.
A partite homomorphism $\phi:F\to\mathcal{H}$ must map vertex $j$ of $F$ to a vertex $x_j\in V_j$ of $\mathcal{H}$, so with $J$ of size $t$ indexing the vertex sets, a partite homomorphism from $F$ to $\mathcal{H}$ is equivalent to a vector of vertices $x_J\in V_J$, and we have \[ \mathcal{H}(F) = \Ex[\Big]{\prod_{e\in F}h(x_e)}[x_J\in V_J]\,, \] where the expectation is over the uniform distribution on vectors $x_J\in V_J$, and we write $x_e$ for the natural projection of $x_J$ onto $V_e$. Let $\vec{a}\in\{0,1,2\}^J$. We use the following notation for the count of octahedra such as $\oct{k}{\vec{a}}$ in $\mathcal{H}$. Suppose that for $j\in J$ and $i\in[\vec a_j]$, vertices $x_j^{(i)}$ are chosen uniformly at random (with replacement) from $V_j$. For $e\subseteq J$ and $\omega\in\prod_{j\in J}[\vec a_j]$ we write $x_e^{(\omega)}$ for the vector indexed by $j\in e$ of vertices $x_j^{(\omega_j)}$. Then we have the notation \[ \mathcal{H}\big(F(\vec{a})\big) = \Ex[\Big]{\;\,\prod_{\substack{e\in F\\\mathclap{\omega:\omega_j\in[\vec a_j]}}}\;\,h(x_e^{(\omega)})}[x_{j}^{(i)}\in V_{j}\text{ for each $j\in J$ and $i\in[\vec a_j]$}]\,, \] for the expected weight of a uniformly random partite homomorphism from $F(\vec{a})$ to $\mathcal{H}$. With this notation in place, we turn to the main tool of the paper. \subsection{The Cauchy--Schwarz inequality and related results} We make extensive use of the Cauchy--Schwarz inequality in the form $\Ex{XY}^2\leq \Ex{X}\Ex{XY^2}$, where we take care to ensure that $X$ takes non-negative values throughout. First, we restate Lemma~\ref{lem:ECSdist} and give a proof. \ECSdist* \begin{proof} For the first statement, observe that $X$, $W$, and $1-W$ are all non-negative random variables.
Then we have \begin{align} (1+\varepsilon)d^2\cdot\Ex{X} &\ge \Ex{XY^2} =\Ex{WXY^2}+\Ex{(1-W)XY^2} \\&\ge \frac{\Ex{WXY}^2}{\Ex{WX}}+\frac{\Ex{(1-W)XY}^2}{\Ex{(1-W)X}}\,,\label{eq:CSdist:1} \end{align} where the second inequality is by two applications of Cauchy--Schwarz. Given fixed $\Ex{WXY}$ the right hand side is minimised when $\Ex{XY}=(1-\varepsilon)d\cdot\Ex{X}$, so we may assume $\Ex{(1-W)XY}=(1-\varepsilon)d\cdot\Ex{X}-\Ex{WXY}$. Let $\Ex{WXY}=(1-\varepsilon+c)d\cdot\Ex{WX}$. Then from~\eqref{eq:CSdist:1} we have \begin{align} (1+\varepsilon)\Ex{X} &\ge (1-\varepsilon+c)^2\Ex{WX}+\frac{\big((1-\varepsilon)\Ex{X}-(1-\varepsilon+c)\Ex{WX}\big)^2}{\Ex{(1-W)X}} \\&= (1-\varepsilon+c)^2\Ex{WX}+\frac{\big((1-\varepsilon)\Ex{(1-W)X}-c\Ex{WX}\big)^2}{\Ex{(1-W)X}} \\&=(1-\varepsilon)^2\Ex{X}+\frac{c^2\Ex{X}\Ex{WX}}{\Ex{(1-W)X}}\,, \end{align} and so \[ 3\varepsilon -\varepsilon^2 \ge c^2\frac{\Ex{WX}}{\Ex{(1-W)X}} \ge c^2\frac{\Ex{WX}}{\Ex{X}}\,, \] which is a contradiction if $c^2\ge 4\varepsilon \Ex{X}/\Ex{WX}$, as required. 
For the second statement, we have by Cauchy--Schwarz and the first part, \begin{align} \Ex{WXY^2} \ge \frac{\Ex{WXY}^2}{\Ex{WX}} &\ge \left(1-\varepsilon- 2\sqrt{\frac{\varepsilon\Ex{X}}{\Ex{WX}}}\,\right)^2d^2\cdot \Ex{WX} \\&\ge\left(1-2\varepsilon- 4\sqrt{\frac{\varepsilon\Ex{X}}{\Ex{WX}}}\,\right)d^2\cdot \Ex{WX}\,, \end{align} and similarly \begin{align} \Ex{(1-W)XY^2} &\ge \frac{\Ex{(1-W)XY}^2}{\Ex{(1-W)X}} \\&\ge \frac{\left((1-\varepsilon)\Ex{(1-W)X}-2\sqrt{\varepsilon\Ex{X}\Ex{WX}}\right)^2}{\Ex{(1-W)X}}d^2 \\&\ge\left((1-2\varepsilon)\Ex{(1-W)X}-4\sqrt{\varepsilon\Ex{X}\Ex{WX}}\right)d^2\,, \end{align} so that \begin{align*} \Ex{WXY^2} &\le (1+\varepsilon)d^2\cdot\Ex{X}-\left((1-2\varepsilon)\Ex{(1-W)X}-4\sqrt{\varepsilon\Ex{X}\Ex{WX}}\right)d^2 \\&\le\left(1-2\varepsilon+7\sqrt{\varepsilon}\frac{\Ex{X}}{\Ex{WX}}\right)d^2\cdot \Ex{WX}\,.\qedhere \end{align*} \end{proof} \begin{corollary}\label{cor:ECSconc} Let $X$ and $Y$ be random variables such that $X$ takes values in the non-negative reals and $Y$ is real-valued. Suppose also that for $0\le\varepsilon\le 1$ and $d\ge0$ we have \begin{align} \Ex{XY}&=(1\pm\varepsilon)d\cdot\Ex{X}& &\text{and}& \Ex{XY^2}&\le(1+\varepsilon)d^2\cdot\Ex{X}\,. \end{align} Let $W$ be the indicator of the event that $Y=(1\pm2\varepsilon^{1/4})d$. Then \[ \Ex{WX}\ge (1-4\varepsilon^{1/4})\Ex{X}\,. \] \end{corollary} \begin{proof} Write $\varepsilon'=2\varepsilon^{1/4}$ and let $Z$ indicate the event that $Y>(1+\varepsilon')d$. Then using Lemma~\ref{lem:ECSdist} with weight $1-Z$ we have \begin{align} (1+\varepsilon)d\cdot\Ex{X} &\ge \Ex{XY} = \Ex{(1-Z)XY}+\Ex{ZXY} \\&\ge\left(1-\varepsilon-2\sqrt{\frac{\varepsilon\Ex{X}}{\Ex{(1-Z)X}}}\,\right)d\cdot\Ex{(1-Z)X}+(1+\varepsilon')d\cdot\Ex{ZX} \\&\ge\left(1-\varepsilon-2\sqrt{\varepsilon}+\varepsilon'\frac{\Ex{ZX}}{\Ex{X}}\right)d\cdot\Ex{X}\,, \end{align} which implies that $\Ex{ZX}\le 2\varepsilon^{1/4}\Ex{X}$. 
With a similar argument we deal with the event that $Y<(1-\varepsilon')d$, now using the letter $Z$ for this event we calculate \begin{align} (1-\varepsilon)d\cdot\Ex{X} &\le \Ex{XY} = \Ex{(1-Z)XY}+\Ex{ZXY} \\&\le\left(1-\varepsilon+2\sqrt{\frac{\varepsilon\Ex{X}}{\Ex{(1-Z)X}}}\,\right)d\cdot\Ex{(1-Z)X}+(1-\varepsilon')d\cdot\Ex{ZX} \\&\le\left(1+2\sqrt{\varepsilon}-\varepsilon'\frac{\Ex{ZX}}{\Ex{X}}\right)d\cdot\Ex{X}\,, \end{align} which implies that $\Ex{ZX}\le 2\varepsilon^{1/4}\Ex{X}$. Together, the two arguments prove that $\Ex{WX}\geq (1-4\varepsilon^{1/4})\Ex{X}$ as required. \end{proof} \begin{corollary}\label{cor:higherECSconc} Let $X$ and $Y$ be random variables such that $X$ takes values in the non-negative reals and $Y$ is real-valued. Suppose also that for a natural number $t\ge2$, and reals $0\le\varepsilon<2^{2-2t}$ and $d\ge0$ we have \begin{align} \Ex{XY}&=(1\pm\varepsilon)d\cdot\Ex{X}& &\text{and}& \Ex{XY^{2^t}}&\le(1+\varepsilon)d^{2^t}\cdot\Ex{X}\,. \end{align} Let $W$ be the indicator of the event that $Y=(1\pm2\varepsilon^{1/8})d$. Then \[ \Ex{WX}\ge (1-4\varepsilon^{1/8})\Ex{X}\,. \] \end{corollary} \begin{proof} Let $Z=Y^{2^{t-1}}$ and $\tilde d=d^{2^{t-1}}$. Then by the Cauchy--Schwarz inequality we have \[ \Ex{XZ}^2\le \Ex{X}\Ex{XZ^2} = \Ex{X}\Ex{XY^{2^t}}\le (1+\varepsilon)\tilde d^2 \cdot \Ex{X}^2\,. \] By $t-1$ further applications of the Cauchy--Schwarz inequality we also have \[ \Ex{XZ}\ge \Ex{X}^{1-2^{t-1}}\Ex{XY}^{2^{t-1}} \ge (1-\varepsilon)^{2^{t-1}}\tilde d\cdot\Ex{X} \ge (1-2^{t-1}\varepsilon)\tilde d \cdot\Ex{X}\,. \] With $\tilde\varepsilon=\varepsilon^{1/2}\ge2^{t-1}\varepsilon$ this implies \begin{align} \Ex{XZ} &= (1\pm\tilde\varepsilon)\tilde d\cdot\Ex{X}\,,& \Ex{XZ^2} &\le (1+\tilde\varepsilon)\tilde d^2\cdot\Ex{X}\,. \end{align} The result now follows from Corollary~\ref{cor:ECSconc}.
Note that $Z=(1\pm2\tilde\varepsilon^{1/4})\tilde d$ implies the event $Y=(1\pm2\varepsilon^{1/8})d$ which is indicated by $W$, hence by Corollary~\ref{cor:ECSconc} we obtain $\Ex{WX}\ge (1-4\varepsilon^{1/8})\Ex{X}$. \end{proof} \subsection{Lower bounds on octahedra}\label{sec:lbocts} The common theme in the following results is an application of the Cauchy--Schwarz inequality to the expectation in a normalised homomorphism count. \begin{lemma}\label{lem:CSoctlow} For every natural number $k\ge 2$, vertex set $J$ of size $k$, index $i\in J$ and vectors $\vec{a},\,\vec{b},\,\vec{c}\in\{0,1,2\}^J$ which satisfy $\vec{a}_j=\vec{b}_j=\vec{c}_j$ for all $j\in J\setminus\{i\}$ and $\vec{a}_i=0$, $\vec{b}_i=1$, $\vec{c}_i=2$ the following holds. Let $\mathcal{H}$ be a $k$-partite $k$-graph on vertex set $\{V_j\}_{j\in J}$. Then \[ \mathcal{H}\big(\oct{k}{\vec{c}}\big)\ge\frac{\mathcal{H}\big(\oct{k}{\vec{b}}\big)^2}{\mathcal{H}\big(\oct{k-1}{\vec{a}}\big)}\,. \] \end{lemma} \begin{proof} We prove the case $J=\{0,1,\dotsc,k-1\}$ and $i=0$, writing $f=[k-1]$ for the indices on which $\vec{a}$, $\vec{b}$, and $\vec{c}$ agree. The other cases follow by relabelling indices. Observe that a copy of $\oct{k}{\vec{c}}$ simply consists of two copies of $\oct{k}{\vec{b}}$ agreeing on a copy of $\oct{k-1}{\vec{a}}$. Let $X$ be the random variable giving the weight of a uniform random copy of $\oct{k-1}{\vec{a}}$, and $Y$ be the random variable which, given a uniform random copy of $\oct{k-1}{\vec{a}}$, returns the total weight of the ways to extend it to a copy of $\oct{k}{\vec{b}}$. More concretely, we choose uniformly at random (with replacement) vertices $x_j^{(i)}\in V_j$ for each $j\in f$ and $i\in [\vec a_j]$, and let \begin{align} X&:=\prod_{\substack{e\subseteq f,\\\mathclap{\omega:\omega_i\in[\vec a_i]}}}\;g(x_e^{(\omega)})& &\text{and}& Y&:=\Ex[\Big]{\;\,\prod_{\substack{e\subseteq f,\\\mathclap{\omega:\omega_i\in[\vec a_i]}}}\;\,g(x_0,x_e^{(\omega)})}[x_0\in V_0]\,.
\end{align} Thus we have $\Ex{X}=\mathcal{H}\big(\oct{k-1}{\vec{a}}\big)$, $\Ex{XY}=\mathcal{H}\big(\oct{k}{\vec{b}}\big)$, and $\Ex{XY^2}=\mathcal{H}\big(\oct{k}{\vec{c}}\big)$. Since $X$ is a non-negative random variable, the Cauchy--Schwarz inequality $\Ex{XY}^2\le\Ex{X}\Ex{XY^2}$ gives the required statement. \end{proof} Lemma~\ref{lem:CSoctlow} justifies the term `minimal' used in the following definition. \begin{definition}\label{def:minimal} Let $\mathcal{H}$ be a $k$-partite $k$-graph and $\eta\ge0$. Then we say that $\mathcal{H}$ is \emph{$\eta$-minimal} if, for every $i\in [k]$ and for every $\vec{a},\vec{b},\vec{c}\in\{0,1,2\}^k$ which satisfy $\vec{a}_j=\vec{b}_j=\vec{c}_j$ for all $j\in [k]\setminus\{i\}$ and $\vec{a}_i=0$, $\vec{b}_i=1$, $\vec{c}_i=2$, we have \[ \mathcal{H}\big(\oct{k}{\vec{c}}\big) \le (1+\eta)\frac{\mathcal{H}(\oct{k}{\vec{b}})^2}{\mathcal{H}(\oct{k-1}{\vec{a}})}\,. \] \end{definition} Suppose $\Gamma$ and $\mathcal{G}$ are $k$-partite $k$-graphs, and $\mathcal{G}$ agrees with $\Gamma$ on edges of size $k-1$ and less. Suppose furthermore that the density of $\mathcal{G}$ relative to $\Gamma$ is $d$. If $\Gamma$ is a complete graph, then it is well known that $\mathcal{G}$ has at least $d^{2^k}$ times as many octahedra as $\Gamma$. For general $\Gamma$ this statement is false, but we will now show that if $\Gamma$ is $\eta$-minimal it is approximately true (and generalise it). \begin{corollary}\label{cor:relCSoctlow} For all natural numbers $k\geq 2$ and vectors $\vec{s},\,\vec{s'}\in\{1,2\}^{k}$ with $\vec{s}\ge\vec{s'}$ pointwise and such that $\vec{s}$ has $t$ more $2$ entries than $\vec{s'}$, the following holds. Suppose that $\mathcal{G}$ and $\Gamma$ are $k$-partite $k$-graphs on the same partite vertex set, with $g(e)=\gamma(e)$ for all $e$ with $|e|<k$, and suppose $\mathcal{G}\big(\oct{k}{\vec{s'}}\big)=d\cdot \Gamma\big(\oct{k}{\vec{s'}}\big)$. Moreover suppose that $\Gamma$ is $\eta$-minimal.
Then \[ \mathcal{G}\big(\oct{k}{\vec{s}}\big) \ge \frac{d^{2^t}}{(1+\eta)^{2^t-1}} \Gamma\big(\oct{k}{\vec{s}}\big)\,. \] \end{corollary} \begin{proof} We prove the case $\vec{s}=(\vec{2}^{t+a},\vec{1}^{k-t-a})$ where $a\ge 0$ is an integer; the other cases follow by relabelling indices. Letting $\vec{s}^{(i)}:=(\vec{2}^{t+a-i},\vec{1}^{k+i-t-a})$ and $\vec{r}^{(i)}:=(\vec{2}^{t+a-i-1},0,\vec{1}^{k+i-t-a})$, we have $\vec{s}^{(0)}=\vec{s}$ and $\vec{s}^{(t)}=\vec{s'}$. By Lemma~\ref{lem:CSoctlow} and $\eta$-minimality of $\Gamma$ respectively, for each $0\le i\le t-1$ we have \begin{align} \mathcal{G}\big(\oct{k}{\vec{s}^{(i)}}\big)&\ge\frac{\mathcal{G}\big(\oct{k}{\vec{s}^{(i+1)}}\big)^2}{\mathcal{G}\big(\oct{k-1}{\vec{r}^{(i+1)}}\big)}& &\text{and}& \Gamma\big(\oct{k}{\vec{s}^{(i)}}\big) &\le(1+\eta)\frac{\Gamma\big(\oct{k}{\vec{s}^{(i+1)}}\big)^2}{\Gamma\big(\oct{k-1}{\vec{r}^{(i+1)}}\big)}\,. \end{align} Note that since $\mathcal{G}$ and $\Gamma$ agree on edges of size less than $k$, the denominators in both fractions are equal, so for each $0\le i\le t-1$ we have \[\frac{\mathcal{G}\big(\oct{k}{\vec{s}^{(i)}}\big)}{\Gamma\big(\oct{k}{\vec{s}^{(i)}}\big)}\ge \frac{1}{1+\eta}\bigg(\frac{\mathcal{G}\big(\oct{k}{\vec{s}^{(i+1)}}\big)}{\Gamma\big(\oct{k}{\vec{s}^{(i+1)}}\big)}\bigg)^2\,,\] and thus \[\frac{\mathcal{G}\big(\oct{k}{\vec{s}^{(0)}}\big)}{\Gamma\big(\oct{k}{\vec{s}^{(0)}}\big)}\ge \frac{1}{(1+\eta)^{2^t-1}}\bigg(\frac{\mathcal{G}\big(\oct{k}{\vec{s}^{(t)}}\big)}{\Gamma\big(\oct{k}{\vec{s}^{(t)}}\big)}\bigg)^{2^t}=\frac{d^{2^t}}{(1+\eta)^{2^t-1}}\,,\] as desired. \end{proof} In particular, it follows that if $\mathcal{G}$ is $(\varepsilon,d)$-regular with respect to the $\eta$-minimal $\Gamma$, then $\mathcal{G}$ is itself $\varepsilon'$-minimal, where $\varepsilon'$ is small provided $\eta$ is sufficiently small and $\varepsilon$ is small enough compared to $d$. 
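The bookkeeping in the proof of Corollary~\ref{cor:relCSoctlow} is a straightforward unrolling: writing $\rho_i$ for the ratio $\mathcal{G}(\oct{k}{\vec{s}^{(i)}})/\Gamma(\oct{k}{\vec{s}^{(i)}})$, each squaring step loses at most a factor $1+\eta$, and $t$ steps produce exactly the exponents $2^t$ and $2^t-1$ in the statement. The following minimal numerical sketch confirms the closed form in the extremal case of equality at every step (the values of $\eta$, $d$, $t$ are arbitrary illustrative choices).

```python
# Check the unrolled recursion from the proof of Corollary relCSoctlow:
# if rho_i = rho_{i+1}^2 / (1+eta) at every step (the extremal case) and
# rho_t = d, then rho_0 = d^{2^t} / (1+eta)^{2^t - 1}.
# eta, d, t below are arbitrary illustrative parameter values.
eta, d, t = 0.01, 0.3, 4

rho = d  # rho_t = d by assumption
for _ in range(t):  # unroll the t squaring steps
    rho = rho ** 2 / (1 + eta)

closed_form = d ** (2 ** t) / (1 + eta) ** (2 ** t - 1)
assert abs(rho - closed_form) <= 1e-9 * closed_form
```

Since the recursion only ever squares the ratio and divides by $1+\eta$, the general (inequality) case follows by monotonicity of both operations on non-negative reals.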
\begin{corollary}\label{cor:subregular} Given $\varepsilon',\, d>0$, then for $\varepsilon$, $\eta$ small enough that \[ \varepsilon'\ge\max\Big\{1-\tfrac{(1-\varepsilon/d)^{2^k}}{(1+\eta)^{2^k-1}},\,\big(1+\varepsilon d^{-2^k}\big)(1+\eta)^{2^k-1}-1\Big\} \,,\] the following holds. Let $\Gamma$ and $\mathcal{G}$ be $k$-partite $k$-graphs on the same partite vertex set, such that $\gamma(e)=g(e)$ whenever $|e|< k$. Suppose that $\mathcal{G}$ is $(\varepsilon,d)$-regular with respect to $\Gamma$, and that $\Gamma$ is $\eta$-minimal. Then $\mathcal{G}$ is $\varepsilon'$-minimal, and for each $\vec{c}\in\{1,2\}^k$ we have $\mathcal{G}\big(\oct{k}{\vec{c}}\big)=(1\pm\varepsilon')d^r\Gamma\big(\oct{k}{\vec{c}}\big)$ with $r=\prod_{i\in [k]}\vec c_i$. Moreover we note that if the above inequality for $\varepsilon'$ is tight, we have \[ \varepsilon'\le 2^{2^k}\big(\varepsilon d^{-2^k}+\eta\big)\,. \] \end{corollary} \begin{proof} We begin with the second statement, comparing $\mathcal{G}\big(\oct{k}{\vec{c}}\big)$ to $\Gamma\big(\oct{k}{\vec{c}}\big)$. Corollary~\ref{cor:relCSoctlow} with $\vec s=\vec c$ and $\vec s'=\vec1^k$ and the regularity bound on $\mathcal{G}\big(\oct{k}{\vec1^k}\big)$ give the required lower bound, since for any $1\le r\le 2^k$ we have by choice of $\varepsilon$ and $\eta$, \[ \frac{(1-\varepsilon/d)^{r}}{(1+\eta)^{r-1}}\ge\frac{(1-\varepsilon/d)^{2^k}}{(1+\eta)^{2^k-1}}\ge 1-\varepsilon'\,. \] To obtain the upper bound, suppose for contradiction that $\mathcal{G}\big(\oct{k}{\vec{c}}\big)>(1+\varepsilon')d^r\Gamma\big(\oct{k}{\vec{c}}\big)$, where $r=\prod_{i\in [k]}\vec c_i$. 
Then applying Corollary~\ref{cor:relCSoctlow} with $\vec s= \vec 2^k$ and $\vec s'= \vec c$, we have \begin{align} \mathcal{G}\big(\oct{k}{\vec{2}^k}\big) &> \frac{\big((1+\varepsilon')d^r\big)^{2^k/r}}{(1+\eta)^{2^k/r-1}}\Gamma\big(\oct{k}{\vec{2}^k}\big) \ge \frac{(1+\varepsilon')^{2^k/r}}{(1+\eta)^{2^k-1}}d^{2^k}\Gamma\big(\oct{k}{\vec{2}^k}\big) \\&\ge \big(d^{2^k}+\varepsilon\big)\Gamma\big(\oct{k}{\vec{2}^k}\big)\,, \end{align} by $1\le r\le 2^k$ and choice of $\varepsilon$, $\eta$. This contradicts the $(\varepsilon, d)$-regularity of $\mathcal{G}$ with respect to $\Gamma$. The minimality argument is essentially identical. Suppose for contradiction that $\mathcal{G}$ is not $\varepsilon'$-minimal, and let $\vec{a},\,\vec{b},\,\vec{c}\in\{0,1,2\}^k$ be vectors witnessing this. That is, these vectors agree on $[k]\setminus\{j\}$ for some $j\in[k]$ and we have $\vec a_j=0$, $\vec b_j=1$, $\vec c_j=2$, and \begin{equation}\label{eq:subregular:fail} \mathcal{G}\big(\oct{k}{\vec{c}}\big)>(1+\varepsilon')\frac{\mathcal{G}\big(\oct{k}{\vec{b}}\big)^2}{\mathcal{G}\big(\oct{k-1}{\vec{a}}\big)}\,. \end{equation} Observe that $\vec{b}$ cannot contain any zero entries, since otherwise the three octahedron counts are the same as in $\Gamma$, and since $\varepsilon'\ge \eta$ the three vectors then witness that $\Gamma$ is not $\eta$-minimal. Let $t$ be the number of $2$ entries in $\vec{b}$. 
By Corollary~\ref{cor:relCSoctlow}, we have $\mathcal{G}\big(\oct{k}{\vec{b}}\big)\ge\frac{d^{2^t}}{(1+\eta)^{2^t-1}}\Gamma\big(\oct{k}{\vec{b}}\big)$, so since $\mathcal{G}$ and $\Gamma$ agree on edges of size at most $k-1$, we have \[ \mathcal{G}\big(\oct{k}{\vec{c}}\big)\gByRef{eq:subregular:fail}(1+\varepsilon')\frac{d^{2^{t+1}}}{(1+\eta)^{2^{t+1}-2}}\frac{\Gamma\big(\oct{k}{\vec{b}}\big)^2}{\Gamma\big(\oct{k-1}{\vec{a}}\big)}\ge \frac{(1+\varepsilon')d^{2^{t+1}}}{(1+\eta)^{2^{t+1}-1}}\Gamma\big(\oct{k}{\vec{c}}\big)\,, \] where the second inequality uses the $\eta$-minimality of $\Gamma$. Applying Corollary~\ref{cor:relCSoctlow} with $\vec s= \vec 2^k$ and $\vec s'= \vec c$, we obtain \begin{align} \mathcal{G}\big(\oct{k}{\vec{2}^k}\big) &>(1+\eta)^{2^{k-t-1}-1}\Bigg(\frac{(1+\varepsilon')d^{2^{t+1}}}{(1+\eta)^{2^{t+1}-1}}\Bigg)^{2^{k-t-1}}\Gamma\big(\oct{k}{\vec{2}^k}\big) \\&=\frac{(1+\varepsilon')^{2^{k-t-1}}}{(1+\eta)^{2^k-1}}d^{2^k}\Gamma\big(\oct{k}{\vec{2}^k}\big)\,. \end{align} Since $t\le k-1$, and since $(1+\varepsilon')(1+\eta)^{1-2^k}d^{2^k}\ge d^{2^k}+\varepsilon$, this is the desired contradiction to the $(\varepsilon,d)$-regularity of $\mathcal{G}$ with respect to $\Gamma$. \end{proof} \subsection{Regular subgraphs of regular graphs} In this section we show that, given an $\eta$-minimal graph $\Gamma$, if we replace the $\ell$-edges $V_{[\ell]}$ by a subgraph which is relatively dense and regular with respect to $\Gamma[V_1,\dots,V_\ell]$ then the result is still $\eta'$-minimal. This is a generalisation of the `Slicing Lemma' for $2$-graphs, which says that large subsets of a regular pair induce a regular pair; in other words, replacing $1$-edges with a relatively dense subgraph preserves regularity of the $2$-edges. Note that regularity is a trivial condition for $1$-graphs. 
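Before stating the lemma, it may help to see the Cauchy--Schwarz mechanism of Lemma~\ref{lem:CSoctlow} in its simplest unweighted instance $k=2$, where it reduces to the classical fact that the normalised $C_4$ count dominates the squared normalised cherry count. A brute-force sketch on a small hand-picked bipartite $0/1$ matrix (the matrix and names are illustrative, not from the paper):

```python
# Brute-force check of Lemma CSoctlow for k=2 with vectors a=(0,2), b=(1,2),
# c=(2,2): the normalised C4 count is at least the squared cherry count.
# The bipartite adjacency matrix h below is an arbitrary small example.
from itertools import product

h = [[1, 0, 1, 1],
     [0, 1, 1, 0],
     [1, 1, 0, 1]]          # h[u][v] = edge weight between V1 and V2
n1, n2 = len(h), len(h[0])

# H(oct b): one vertex in V1, two (with replacement) in V2 -- cherries
Hb = sum(h[u][v] * h[u][w]
         for u, v, w in product(range(n1), range(n2), range(n2))) / (n1 * n2 * n2)
# H(oct c): two vertices in each part -- four-cycles C4
Hc = sum(h[u][v] * h[u][w] * h[t][v] * h[t][w]
         for u, t, v, w in product(range(n1), range(n1),
                                   range(n2), range(n2))) / (n1 ** 2 * n2 ** 2)
# H(oct a): no 2-edges are spanned, so the normalised count is 1
Ha = 1.0

assert Hc >= Hb * Hb / Ha   # Cauchy-Schwarz: E[XY]^2 <= E[X] E[XY^2]
```

Here $X$ is identically $1$ (no lower-order weights) and $Y$ is the normalised codegree of the chosen pair in $V_2$, so the assertion is literally $\Ex{Y}^2\le\Ex{Y^2}$; the weighted statement of the lemma generalises this by carrying the lower-level weights along in $X$.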
\begin{lemma}\label{lem:slicing} Given $\varepsilon',\, d>0$, then for $\varepsilon$, $\eta$ small enough that \begin{align} \min\{\varepsilon',1/2\}\ge\eta+\max\Big\{ & 2^7k^3\Big(1-\tfrac{(1-\varepsilon/d)^{2^{k-1}}}{(1+\eta)^{2^{k-1}-1}}\Big),\, \\& 2^7k^3\left(\big(1+\varepsilon d^{-2^{k-1}}\big)(1+\eta)^{2^{k-1}-1}-1\right),\, \\& 2^9k^3\left(\big(1+100\sqrt{\eta}d^{-2^{k-1}}\big)(1+2\eta)-1\right)\Big\} \,, \end{align} the following holds. Let $\Gamma$ be an $\eta$-minimal $k$-partite $k$-graph with parts $V_1,\dots,V_k$, and let $\mathcal{G}$ be a subgraph on the same vertex set, which agrees with $\Gamma$ except on $V_{[\ell]}$ for some $\ell<k$, and which has the property that $\mathcal{G}[V_1,\dots,V_\ell]$ is $(\varepsilon,d)$-regular with respect to $\Gamma[V_1,\dots,V_\ell]$. Then $\mathcal{G}$ is $\varepsilon'$-minimal, and for each $\vec{s}\in\{0,1,2\}^k$ we have $\mathcal{G}\big(\oct{k}{\vec{s}}\big)=(1\pm\varepsilon')d^r\Gamma\big(\oct{k}{\vec{s}}\big)$ with $r:=\prod_{i\in[\ell]}\vec{s}_i$. Moreover, we note that when $\varepsilon'<1/2$ and the above inequality for $\varepsilon'$ is tight, we have \[ \varepsilon' \le 2^{2^{k-1}+18}k^3(\varepsilon +\sqrt\eta)d^{-2^{k-1}}\,. \] \end{lemma} \begin{proof} Let $\xi$ be maximal such that $\big(\frac{1+2k\xi}{1-2k\xi}\big)^2(1+\eta)\le1+\varepsilon'$, noting that this gives $2^{-7}k^{-3}(\varepsilon'-\eta)\le\xi\le (4k)^{-1}\varepsilon'$. The choice of $\varepsilon$, $\eta$ ensure that when Corollary~\ref{cor:subregular} is applied (e.g.\ to $\ind{\mathcal{G}}{V_1,\dotsc,V_\ell}$ and $\ind{\Gamma}{V_1,\dotsc,V_\ell}$) with $k\subref{cor:subregular}=\ell$ and $\varepsilon$, $\eta$ as in this lemma, the resulting $\varepsilon'\subref{cor:subregular}$ is at most $\xi$, and that \begin{equation}\label{eq:slicing:xibd} \big(1+100\sqrt{\eta}d^{-2^{k-1}}\big)(1+2\eta)\le1+\tfrac14\xi\,. \end{equation} The following claim, and choice of $\xi$, gives the desired counting in $\mathcal{G}$. 
\begin{claim}\label{claim:slicing} Given $\vec{s}\in\{0,1,2\}^k$, let $q:=\sum_{i\in[k]}\vec s_i$ and $r:=\prod_{i\in[\ell]}s_i$. Then we have \begin{equation}\label{eq:slice:G} \mathcal{G}\big(\oct{k}{\vec{s}}\big)=(1\pm q\xi)d^r\Gamma\big(\oct{k}{\vec{s}}\big)\,. \end{equation} \end{claim} The required counting statements in $\mathcal{G}$ follow because $q\le 2k$ and $\xi\le(4k)^{-1}\varepsilon'$. The desired $\varepsilon'$-minimality follows directly from this claim and $\eta$-minimality of $\Gamma$. Indeed, let $\vec{a}$, $\vec{b}$ and $\vec{c}$ be vectors in $\{0,1,2\}^k$ agreeing at all indices except $j$, and with $\vec a_j=0$, $\vec b_j=1$, $\vec c_j=2$. Let $t_{\vec{a}}=\prod_{i\in[\ell]}a_i$, and define similarly $t_{\vec{b}}$ and $t_{\vec{c}}$. Note that either $j\in[\ell]$ and we have $t_{\vec{a}}=0$ and $t_{\vec{c}}=2t_{\vec{b}}$, or $j\not\in[\ell]$ and all three are equal. Since $\sum_{i\in[k]}\vec{c}_i\le 2k$, we have by the claim, \begin{align*} \mathcal{G}\big(\oct{k}{\vec{c}}\big) &\le(1+2k\xi)d^{t_{\vec{c}}}\Gamma\big(\oct{k}{\vec{c}}\big) \\&\le(1+2k\xi)(1+\eta)d^{t_{\vec{c}}}\frac{\Gamma\big(\oct{k}{\vec{b}}\big)^2}{\Gamma\big(\oct{k}{\vec{a}}\big)} \\&\le(1+2k\xi)(1+\eta)d^{t_{\vec{c}}}\frac{(1-2k\xi)^{-2}d^{-2t_{\vec{b}}}\mathcal{G}\big(\oct{k}{\vec{b}}\big)^2}{(1+2k\xi)^{-1}d^{-t_{\vec{a}}}\mathcal{G}\big(\oct{k}{\vec{a}}\big)}=\Bigg(\frac{1+2k\xi}{1-2k\xi}\Bigg)^2(1+\eta)\frac{\mathcal{G}\big(\oct{k}{\vec{b}}\big)^2}{\mathcal{G}\big(\oct{k}{\vec{a}}\big)}\,, \end{align*} as desired. It remains only to prove the claim, which we now do by induction on the number of zeroes in $\vec{s}$ outside $[\ell]$. \begin{claimproof}[Proof of Claim~\ref{claim:slicing}] The base case is that all entries of $\vec{s}$ outside $[\ell]$ are equal to zero. Note that if $\vec{s}=\vec{0}^k$ then the claim is trivial, so we assume this is not the case, and hence $q=\sum_{i\in[k]}\vec{s}_i\ge 1$. 
Write $\vec s'\in\{0,1,2\}^\ell$ for the first $\ell$ entries of $\vec s$, which for the base case are the only entries which may be non-zero, giving $\mathcal{G}\big(\oct{k}{\vec{s}}\big)=\mathcal{G}[V_1,\dots,V_\ell]\big(\oct{\ell}{\vec{s}'}\big)$. Since $\mathcal{G}[V_1,\dots,V_\ell]$ is $(\varepsilon,d)$-regular with respect to $\Gamma[V_1,\dots,V_\ell]$, which is $\eta$-minimal, by Corollary~\ref{cor:subregular} and choice of $\varepsilon$, $\eta$, the claim statement follows. For the induction step, suppose that $j\notin[\ell]$ is such that $s_j\neq 0$. For $i=0,1,2$, let $\vec s^{(i)}$ be the vector equal to $\vec s$ at all entries except the $j$th, and with $\vec s^{(i)}_j=i$. By induction, the claim statement holds for $\vec s^{(0)}$. Again, write $\vec s'\in\{0,1,2\}^\ell$ for the first $\ell$ entries of $\vec s$. We define random variables $W$, $X$, $Y$ as follows. The random experiment we perform is to choose, for each $i\in[\ell]$, uniformly at random (with replacement) $\vec s^{(0)}_i$ vertices in $V_i$. We let $X$ be the weight of the copy of $\oct{k-1}{\vec s^{(0)}}$ in $\Gamma$ on these vertices, $WX$ be the weight of the copy of $\oct{k-1}{\vec s^{(0)}}$ in $\mathcal{G}$ on these vertices, and $XY$ be the expected weight, over a uniformly random choice of vertex in $V_j$, of the copy of $\oct{k}{\vec{s}^{(1)}}$ in $\Gamma$. Note that since $\mathcal{G}$ is a subgraph of $\Gamma$, we always have $0\le W\le 1$. More formally (and dealing with the trivial exceptional case $X=0$), let $x_i^{(m)}\in V_i$ be chosen independently, uniformly at random for each $i\in[k]\setminus\{j\}$ and $m\in[\vec s^{(0)}_i]$. 
Write $\Omega = \prod_{i\in[k]\setminus\{j\}}[\vec s_i]$ and $\Omega' = \prod_{i\in[k]}[\vec s^{(1)}_i]$, and define \begin{align*} X&= \prod_{e\subseteq[k]\setminus\{j\}}\prod_{\omega\in\Omega}\gamma\big(x_e^{(\vec{\omega})}\big) \,,\\ Y&=\Ex[\Big]{\prod_{e\subseteq[k],\,j\in e}\prod_{\omega\in\Omega'}\gamma\big(x_e^{(\vec{\omega})}\big)}[x_j^{(1)}\in V_j]\,,\quad\text{and}\\ W&=\begin{cases} \tfrac{1}{X}\prod_{e\subseteq[k]\setminus\{j\}}\prod_{\omega\in\Omega} g\big(x_e^{(\vec{\omega})}\big)&\text{ if }X>0 \\ 1 & \text{ if }X=0 \end{cases}\,. \end{align*} The key feature of these definitions is that $\Ex{XY^i}=\Gamma\big(\oct{k}{\vec{s}^{(i)}}\big)$ for each $i=0,1,2$, and similarly $\Ex{WXY^i}=\mathcal{G}\big(\oct{k}{\vec{s}^{(i)}}\big)$. If $\Ex{X}=0$ then trivially the claim holds, since $\Gamma\big(\oct{k}{\vec{s}}\big)=0$. So we may assume $\Ex{X}>0$, and let $d'$ be such that $\Ex{XY}=d'\Ex{X}$. By Lemma~\ref{lem:CSoctlow} and the $\eta$-minimality of $\Gamma$, we have $\Ex{XY^2}= (1\pm\eta)\frac{\Ex{XY}^2}{\Ex{X}}=(1\pm\eta)(d')^2\Ex{X}$. We are thus in a position to apply Lemma~\ref{lem:ECSdist}, with $\varepsilon\subref{lem:ECSdist}=\eta$. We obtain \begin{align*} \Ex{WXY}&=\Big(1-\eta\pm2\sqrt{\tfrac{\eta\Ex{X}}{\Ex{WX}}}\Big)d'\cdot\Ex{WX}\quad\text{and}\\ \Ex{WXY^2}&=\Big(1-2\eta\pm7\sqrt{\eta}\frac{\Ex{X}}{\Ex{WX}}\Big)(d')^2\Ex{WX}\,. \end{align*} Recall that from the induction hypothesis with $q=\sum_{i\in[k]}\vec s^{(0)}_i$ and $r=\prod_{i\in[\ell]}\vec s^{(0)}_i$ we have \[ \Ex{WX}=\mathcal{G}\big(\oct{k}{\vec s^{(0)}}\big)=(1\pm q\xi)d^r\Gamma\big(\oct{k}{\vec{s}^{(0)}}\big)=(1\pm q\xi)d^r\Ex{X}\,.
\] This gives \begin{align*} \mathcal{G}\big(\oct{k}{\vec{s}^{(1)}}\big)&=\Big(1-\eta\pm2\sqrt{\eta(1+2q\xi)d^{-r}}\Big)\cdot(1\pm q\xi)d^r\Gamma\big(\oct{k}{\vec{s}^{(1)}}\big)\quad\text{and}\\ \mathcal{G}\big(\oct{k}{\vec{s}^{(2)}}\big)&=\Big(1-2\eta\pm7\sqrt{\eta}(1+2q\xi)d^{-r}\Big)\cdot(1\pm q\xi)(1\pm2\eta)d^r\Gamma\big(\oct{k}{\vec{s}^{(2)}}\big)\,, \end{align*} where we use the $\eta$-minimality of $\Gamma$ in obtaining the second statement. By choice of $\xi$ and~\eqref{eq:slicing:xibd}, this proves the claim for $\vec{s}^{(i)}$ with $i=1,2$, and in particular for $\vec{s}$, as desired. \end{claimproof} The proof of Claim~\ref{claim:slicing} completes the proof of Lemma~\ref{lem:slicing}. \end{proof} \section{Inheritance of regularity}\label{sec:inherit} Our goal in this section is to prove Lemma~\ref{lem:k-inherit}. Note that in proving the counting and embedding lemmas (see Section~\ref{sec:count}) for $k$-graphs, we must apply Lemma~\ref{lem:k-inherit} for $k\subref{lem:k-inherit}$-graphs where $k\subref{lem:k-inherit}$ takes values up to $k$, which means we must mention $(k+1)$-partite $(k+1)$-graphs in the proof below. Our definitions mean that in Section~\ref{sec:count}, whenever we are applying Lemma~\ref{lem:k-inherit} to a $(k+1)$-partite $(k+1)$-graph, the graphs are trivial and equal to $1$ on edges of size $k+1$. This feature is visible in the graph case: when $k=2$ the inheritance lemmas of~\cite{CFZextremal,ABSSregularity} involve a $3$-partite graph, and to deduce similar results from our inheritance lemma one must form a $3$-graph from this $3$-partite graph by giving edges of size $3$ weight $1$. We begin with a brief outline of the method for proving Lemma~\ref{lem:k-inherit}. First, let $\mathcal{H}$ be the $k$-graph on $V_0,\dotsc, V_k$ with edge weights \[ h(e):= \begin{cases} \gamma(e) &\quad e\not\in V_{[k]} \\ g(e) &\quad e\in V_{[k]}\,.
\end{cases} \] By Lemma~\ref{lem:slicing} and~\ref{inh:regJ}, $\mathcal{G}$ is regular with respect to $\mathcal{H}$, and by~\ref{inh:regf}, $\ind{\mathcal{H}}{V_1,\dotsc,V_k}$ is regular with respect to $\ind{\Gamma}{V_1,\dotsc,V_k}$. This, together with Corollary~\ref{cor:subregular}, in particular allows us to estimate $\mathcal{G}\big(\oct{k+1}{\vec{1}^{k+1}}\big)$ and $\mathcal{G}\big(\oct{k+1}{1,\vec{2}^{k}}\big)$ accurately. These two quantities are by definition equal to the averages, over $v\in V_0$, of $\mathcal{G}_v\big(\oct{k}{\vec{1}^{k}}\big)$ and $\mathcal{G}_v\big(\oct{k}{\vec{2}^{k}}\big)$ respectively. Using~\ref{inh:count}, we conclude that on average the relative density of $\mathcal{G}_v$ with respect to $\Gamma_v$ is about $dd'$, and the number of octahedra it contains is about $(dd')^{2^k}$ times the number of octahedra in $\Gamma_v$. However, we can also give a lower bound on the average number of octahedra in $\mathcal{G}_v$ using its density relative to $\Gamma_v$ and Corollary~\ref{cor:relCSoctlow}, whenever $\Gamma_v$ satisfies the counting conditions of that lemma. The assumption~\ref{inh:count} implies that these counting conditions are typically satisfied, and the few atypical vertices do not much affect the argument. Using the defect Cauchy--Schwarz inequality and the fact that we know the average density of $\mathcal{G}_v$ relative to $\Gamma_v$, we conclude that the only way this lower bound does not contradict the previous estimate is if typically $\mathcal{G}_v$ has density about $dd'$ relative to $\Gamma_v$ and number of octahedra about $(dd')^{2^k}$ times the number in $\Gamma_v$. In other words, $\mathcal{G}_v$ is typically $(\varepsilon',dd')$-regular with respect to $\Gamma_v$, as desired. \begin{proof}[Proof of Lemma~\ref{lem:k-inherit}] We use the letter $v$ for a vertex in $V_0$ to draw attention to the special role of the set $V_0$, but use $x_j$ for a vertex in $V_j$ when $j\in [k]$. 
As in the proof of Lemmas~\ref{lem:GPEcount} and~\ref{lem:GPEemb}, we use the correspondence between copies of $\oct{k+1}{1,\vec{a}}$ in $\mathcal{G}$ or $\Gamma$, and the average of the counts of $\oct{k}{\vec{a}}$ in the graphs $\mathcal{G}_v$ or $\Gamma_v$ over $v\in V_0$. More precisely, we have for any $\vec{a}\in\{0,1,2\}^k$, \begin{align} \mathcal{G}\big(\oct{k+1}{1,\vec{a}}\big) &= \Ex[\big]{\mathcal{G}_{v}\big(\oct{k}{\vec{a}}\big)}[v\in V_0]\,,\label{eq:GF1acor}\\ \Gamma\big(\oct{k+1}{1,\vec{a}}\big) &= \Ex[\big]{\Gamma_{v}\big(\oct{k}{\vec{a}}\big)}[v\in V_0]\,.\label{eq:GamF1acor} \end{align} When $\Gamma_v$ is well-behaved (in a way we make precise below) we are able to count carefully in $\mathcal{G}_v$ but when $\Gamma_v$ is not well-behaved, we can bound weights in $\mathcal{G}_v$ from above by those in $\Gamma_v$. Though in general it is difficult to control the weight of the empty set, in this proof we only embed a single vertex into $V_0$, hence there is not much to control. Instead of the usual sprinkling of weights involving the empty set, for this proof we can assume without loss of generality that $\gamma(\emptyset)=g(\emptyset)=p(\emptyset)=1$ and avoid most of these factors. We will have similar correcting factors when counting in $\mathcal{G}_v$ and $\Gamma_v$, however. The first step of the proof is to use the counting conditions in $\Gamma$ to establish the existence of $U\subseteq V_0$ such that for each $v\in U$, $\Gamma_v$ is well-behaved. We also give additional properties of $\Gamma$ and $U$ that are useful later. Property~\ref{inh:count} specifies a kind of pseudorandomness for $\Gamma$, and the natural definition of a well-behaved vertex $v\in V_0$ is that its link $\Gamma_v$ is similarly pseudorandom, so our definition of $U$ will involve control of the counts of $\oct{k}{\vec{a}}$ in links. 
As ever, we must deal carefully with the weight of the empty set in these links, but for edges of size greater than one, we will see that~\ref{inh:count} implies concentration of the edge weights by the Cauchy--Schwarz inequality. We state the definition in terms of $\mathcal{P}$ rather than $\mathcal{P}_v$ for more convenient use later. Write $\eta'=2^{3/2}\eta^{1/4}$ (so we have $\eta'<1/2$), and let $U\subseteq V_0$ be those vertices $v\in V_0$ such that for any $\vec{a}\in\{0,1,2\}^k\setminus\{\vec0^k\}$ we have \begin{equation}\label{eq:Udef} \Gamma_v\big(\oct{k}{\vec{a}}\big)=(1\pm \eta')\frac{\gamma(v)}{p(0)}\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)\,. \end{equation} The counting assumptions~\ref{inh:count} are a form of pseudorandomness which suggests that $U$ will be a large subset of $V_0$, which we prove in the necessary weighted setting below. \begin{claim}\label{clm:U} \begin{enumerate} \item\label{itm:Gammin} $\Gamma$ is $16\eta$-minimal. \item\label{itm:Gamvmin} For $v\in U$, $\Gamma_v$ is $16\eta'$-minimal. \item\label{itm:Ularge} The contribution to $\Gamma\big(\oct{k+1}{\vec1^{k+1}}\big)$ from homomorphisms which use a vertex in $V_0\setminus U$ is at most $3^{k+3}\eta'\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)$. \end{enumerate} \end{claim} \begin{claimproof} To see \ref{itm:Gammin}, we use \ref{inh:count}. Let $j\in \{0\}\cup[k]$, and vectors $\vec{a},\vec{b},\vec{c}\in\{0,1,2\}^{k+1}$ be equal on $\{0\}\cup[k]\setminus\{j\}$ and satisfy $\vec{a}_j=0$, $\vec{b}_j=1$, $\vec{c}_j=2$. Then by \ref{inh:count} we have \begin{align} \Gamma\big(\oct{k+1}{\vec{a}}\big)\Gamma\big(\oct{k+1}{\vec{c}}\big) &\le (1+\eta)^2\cdot\mathcal{P}\big(\oct{k+1}{\vec{a}}\big)\mathcal{P}\big(\oct{k+1}{\vec{c}}\big) \\&= (1+\eta)^2\cdot\mathcal{P}\big(\oct{k+1}{\vec{b}}\big)^2 \\&\le \frac{(1+\eta)^2}{(1-\eta)^2}\cdot\Gamma\big(\oct{k+1}{\vec{b}}\big)^2\,, \end{align} which shows $\Gamma$ is minimal with parameter $(1+\eta)^2(1-\eta)^{-2}-1\le 16\eta$. 
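The final numerical estimate is elementary: since $\eta'=2^{3/2}\eta^{1/4}<\tfrac12$ we certainly have $\eta<\tfrac12$, and so
\[
\frac{(1+\eta)^2}{(1-\eta)^2}-1=\frac{(1+\eta)^2-(1-\eta)^2}{(1-\eta)^2}=\frac{4\eta}{(1-\eta)^2}\le\frac{4\eta}{(1/2)^2}=16\eta\,;
\]
the same computation with $\eta'$ in place of $\eta$ yields the constant $16\eta'$ in~\ref{itm:Gamvmin}.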
The proof of \ref{itm:Gamvmin} is similar but we use the definition of $U$. Let $j\in [k]$, and $\vec{a},\vec{b},\vec{c}\in\{0,1,2\}^k$ be equal on $[k]\setminus\{j\}$ and satisfy $\vec{a}_j=0$, $\vec{b}_j=1$, $\vec{c}_j=2$. If $\vec a=\vec 0^k$ then the required bound is trivial, otherwise by the fact that $v\in U$ we have \begin{align} \Gamma_v\big(\oct{k}{\vec{a}}\big)\Gamma_v\big(\oct{k}{\vec{c}}\big) &\le (1+\eta')^2\cdot\frac{\gamma(v)^2}{p(0)^2}\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)\mathcal{P}\big(\oct{k+1}{1,\vec{c}}\big) \\&= (1+\eta')^2\cdot\frac{\gamma(v)^2}{p(0)^2}\mathcal{P}\big(\oct{k+1}{1,\vec{b}}\big)^2 \\&\le \frac{(1+\eta')^2}{(1-\eta')^2}\cdot\Gamma_v\big(\oct{k}{\vec{b}}\big)^2\,, \end{align} which shows that when $v\in U$, $\Gamma_v$ is minimal with parameter $(1+\eta')^2(1-\eta')^{-2}-1\le 16\eta'$. Part \ref{itm:Ularge} resembles a step in the proof of Lemma~\ref{lem:GPEcount} involving $\cC^{(0)}$. We first establish a lower bound on $\vnorm{U}{\Gamma}$. Fix $\vec{a}\in\{0,1,2\}^k$ and recall that $+2\oct{k+1}{0,\vec{a}}$ is the $(k+1)$-complex obtained by taking two vertex-disjoint copies of $\oct{k+1}{1,\vec{a}}$ and identifying their first vertices. Consider the experiment where $v\in V_0$ is chosen uniformly at random, and let \begin{align} X&:=\gamma(v)\,, & Y&:=\frac{\Gamma_v(\oct{k}{\vec{a}})}{\gamma(v)}\,. \end{align} By \eqref{eq:GamF1acor} and \ref{inh:count} we have \begin{align} \Ex{XY} &=\Ex[\big]{\Gamma_v\big(\oct{k}{\vec{a}}\big)} =\Gamma\big(\oct{k+1}{1,\vec{a}}\big) =(1\pm\eta)\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big) \\&=(1\pm\eta)p(0)\cdot\frac{\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)}{p(0)}\,, \\\Ex{XY^2} &=\Ex*{\frac{\Gamma_v\big(\oct{k}{\vec{a}}\big)^2}{\gamma(v)}} =\Gamma\big({+2}\oct{k+1}{0,\vec{a}}\big) =(1\pm\eta)\mathcal{P}\big({+2}\oct{k+1}{0,\vec{a}}\big) \\&=(1\pm\eta)p(0)\cdot\left(\frac{\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)}{p(0)}\right)^2\,. 
\end{align} Noting that $\Ex{X}=(1\pm\eta)p(0)$ by \ref{inh:count}, we can apply Lemma~\ref{lem:ECSdist} and Corollary~\ref{cor:ECSconc} in the arguments below with an appropriate $\vec a$, $\varepsilon\subref{lem:ECSdist}=\varepsilon\subref{cor:ECSconc}=4\eta$, and \[ d\subref{lem:ECSdist}=d\subref{cor:ECSconc}=\frac{\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)}{p(0)}\,. \] To bound $\vnorm{U}{\Gamma}$, for $\vec{a}\in\{0,1,2\}^k\setminus\{\vec0^k\}$, let $U_{\vec{a}}\subseteq V_0$ be those vertices $v$ which satisfy \[ \Gamma_v\big(\oct{k}{\vec{a}}\big) = (1\pm \eta')\frac{\gamma(v)}{p(0)}\mathcal{P}\big(\oct{k+1}{1,\vec{a}}\big)\,, \] so that $U$ is the intersection of the $3^k-1$ different $U_{\vec{a}}$. By Corollary~\ref{cor:ECSconc} we have $\vnorm{U_\vec{a}}{\Gamma}\ge (1-2\eta')\vnorm{V_0}{\Gamma}$, and hence $\vnorm{U}{\Gamma}\ge(1-3^{k+1}\eta')\vnorm{V_0}{\Gamma}$, so that \begin{equation}\label{eq:Ularge} \vnorm{V_0\setminus U}{\Gamma}\le 3^{k+1}\eta'\vnorm{V_0}{\Gamma}\,. \end{equation} The contribution to $\Gamma\big(\oct{k+1}{\vec1^{k+1}}\big)$ from homomorphisms that use a vertex in $V_0\setminus U$ can be written as \begin{equation} \Ex[\big]{\Gamma_v\big(\oct{k}{\vec1^k}\big)\indicator{v\in V_0\setminus U}}\,, \end{equation} which is a weighting of $\Ex[\big]{\Gamma_v\big(\oct{k}{\vec1^k}\big)}$ by $W:=\indicator{v\in V_0\setminus U}$. We apply Lemma~\ref{lem:ECSdist} with this weight $W$ and $X$, $Y$ as above with $\vec a=\vec1^k$ to obtain \begin{align} \Ex[\big]{\Gamma_v\big(\oct{k}{\vec1^k}\big)\indicator{v\in V_0\setminus U}} &\leq \left(1-4\eta+4\sqrt{\frac{\eta\vnorm{V_0}{\Gamma}}{\vnorm{V_0\setminus U}{\Gamma}}}\right) \frac{\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)}{p(0)}\cdot \vnorm{V_0\setminus U}{\Gamma} \\&\le (1+\eta)\big(3^{k+1}\eta'+4\sqrt{3^{k+1}\eta\eta'}\big)\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)\,, \end{align} and note that the coefficient of $\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)$ here is at most $3^{k+3}\eta'$. 
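This last estimate can be checked directly: since $\eta'=2^{3/2}\eta^{1/4}$ we have $\sqrt{\eta}=\tfrac18(\eta')^2$, so that (using $\eta'<\tfrac12$ and $k\ge1$)
\[
4\sqrt{3^{k+1}\eta\eta'}=\tfrac12\cdot 3^{(k+1)/2}(\eta')^{5/2}\le 3^{k+1}\eta'\,,
\]
and hence $(1+\eta)\big(3^{k+1}\eta'+4\sqrt{3^{k+1}\eta\eta'}\big)\le 2\cdot 2\cdot 3^{k+1}\eta'\le 3^{k+3}\eta'$.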
\end{claimproof} With the set $U$ understood, we proceed by counting $\oct{k+1}{1,\vec2^k}$ in $\mathcal{G}$ two different ways. Firstly, we estimate counts of $\oct{k+1}{\vec1^{k+1}}$ and $\oct{k+1}{1,\vec2^k}$ in $\mathcal{G}$ with Corollary~\ref{cor:subregular} and Lemma~\ref{lem:slicing}. We give crude values of the constants that work in the argument, but make no effort to optimise them. Let $\mathcal{H}$ have layer $k+1$ given by $\mathcal{G}$, and lower layers given by $\Gamma$. Then by assumption~\ref{inh:regJ} $\mathcal{H}$ is $(\varepsilon,d')$-regular with respect to $\Gamma$, and we obtain $\mathcal{G}$ from $\mathcal{H}$ by replacing weights on $V_{[k]}$ with those from $\mathcal{G}$. By Claim~\ref{clm:U}\ref{itm:Gammin}, and Corollary~\ref{cor:subregular} for $(k+1)$-graphs, $\mathcal{H}$ is $\varepsilon_m$-minimal where \[ \varepsilon_m = 2^{2^{k+1}}\Big(\varepsilon(d')^{-2^{k+1}}+\eta\Big) > \max\Big\{1-\frac{(1-\varepsilon/d')^{2^{k+1}}}{(1+\eta)^{2^{k+1}-1}},\,\big(1+\varepsilon (d')^{-2^{k+1}}\big)(1+\eta)^{2^{k+1}-1}-1\Big\}\,. \] We can now apply Lemma~\ref{lem:slicing} for $(k+1)$-graphs to $\mathcal{G}$ and $\mathcal{H}$ to obtain the required counts in $\mathcal{G}$. 
With $\varepsilon\subref{lem:slicing}=\varepsilon$, $\eta\subref{lem:slicing}=\varepsilon_m$ as above, and $d\subref{lem:slicing}=d$, we obtain that for \[ \varepsilon_m' = 2^{2^k+22}k^3\big(\varepsilon^{1/2}(d')^{-2^k}+\eta^{1/2}\big)d^{2^{-k}}\,, \] the $(k+1)$-graph $\mathcal{G}$ is $\varepsilon_m'$-minimal, and the remaining assertions of Corollary~\ref{cor:subregular} and Lemma~\ref{lem:slicing} give \begin{align} \mathcal{G}\big(\oct{k+1}{\vec1^{k+1}}\big) &= (1\pm \varepsilon_m') d \cdot \mathcal{H}\big(\oct{k+1}{\vec1^{k+1}}\big) = (1\pm \varepsilon_m)(1\pm \varepsilon_m')dd'\Gamma\big(\oct{k+1}{\vec1^{k+1}}\big) \\& = (1\pm\varepsilon_m'') dd' \cdot \mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)\,,\label{eq:G11k} \\\mathcal{G}(\oct{k+1}{1,\vec2^k}) &\le (1+\varepsilon_m') d^{2^k}\mathcal{H}\big(\oct{k+1}{1,\vec2^k}\big) \le (1+\varepsilon_m)(1+\varepsilon_m')(dd')^{2^k}\Gamma\big(\oct{k+1}{1,\vec2^k}\big) \\&\le(1+\varepsilon_m'')(dd')^{2^k}\cdot\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)\,,\label{eq:G12k} \end{align} where \[ \varepsilon_m'' = 2^{2^k+25}k^3\big(\varepsilon^{1/2}(d')^{-2^{k+1}}+\eta^{1/2}\big)d^{2^{-k}}\,. \] The second method for counting $\oct{k+1}{1,\vec2^k}$ involves counting $\oct{k}{\vec2^k}$ in the links of vertices $v\in V_0$. We have $\mathcal{G}_v\le\Gamma_v$ and since we do not try to control $\mathcal{G}_v$ directly when $v\notin U$, we define \begin{align} d_v&= \begin{cases} \frac{\mathcal{G}_v(\oct{k}{\vec1^k})}{\Gamma_v(\oct{k}{\vec1^k})} & \text{if } v\in U\,,\\ 0 & \text{otherwise}\,. \end{cases} \end{align} \begin{claim}\label{clm:Edvbounds} Writing \[ \zeta:=\max\left\{\varepsilon_m''+\eta'+\frac{3^{k+3}\eta'}{dd'},\,\varepsilon_m''+2\eta'+2\eta'\varepsilon_m'',\,\frac{(1+16\eta')^{2^k-1}}{1-16\eta'} (1+\varepsilon_m'')-1\right\} \] we have \begin{align} \Ex{\gamma(v)d_v} &= (1\pm\zeta)dd'\cdot p(0) \,, &&\text{and}& \Ex[\big]{\gamma(v)d_v^{2^{k}}} &\le (1 +\zeta)(dd')^{2^{k}}\cdot p(0)\,.
\end{align} Moreover, we note that a crude calculation gives \[ \zeta \le 2^{2^{k+1}+50}k^3\big(\varepsilon^{1/2} + \eta^{1/4}\big)(dd')^{-2^{k+1}}\,. \] \end{claim} \begin{claimproof} First we bound $\Ex{\gamma(v)d_v}$. By \eqref{eq:GF1acor} we have \begin{align} \mathcal{G}(\oct{k+1}{\vec1^{k+1}}) &= \Ex[\big]{\mathcal{G}_v\big(\oct{k}{\vec1^k}\big)}\\ &\le \Ex[\big]{d_v\Gamma_v\big(\oct{k}{\vec1^k}\big)} + \Ex[\big]{\Gamma_v\big(\oct{k}{\vec1^k}\big)\indicator{v\in V_0\setminus U}}\,. \end{align} By the definition \eqref{eq:Udef} of $U$, for the first expectation we have an upper bound on $\Gamma_v(\oct{k}{\vec1^k})$ which depends only on $\gamma(v)$, and by Claim~\ref{clm:U}\ref{itm:Ularge} we have a bound on the final expectation which represents copies of $\oct{k+1}{\vec1^{k+1}}$ using a vertex in $V_0\setminus U$. We combine these facts with \eqref{eq:G11k} to obtain a lower bound on $\Ex{\gamma(v)d_v}$. That is, \begin{align} (1-\varepsilon_m'') dd' \cdot \mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big) &\le \mathcal{G}\big(\oct{k+1}{\vec1^{k+1}}\big) \\&\le (1+\eta')\frac{\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)}{p(0)}\Ex{\gamma(v)d_v} + 3^{k+3}\eta'\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)\,, \end{align} which yields the lower bound \begin{equation}\label{eq:Edv_lb} \Ex{\gamma(v)d_v} \ge \left(1-\varepsilon_m''-\eta'-\frac{3^{k+3}\eta'}{dd'}\right) dd'\cdot p(0)\,. \end{equation} For a corresponding upper bound we have \begin{align} (1+\varepsilon_m'') dd' \cdot \mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big) &\ge \mathcal{G}\big(\oct{k+1}{\vec1^{k+1}}\big) \ge \Ex[\big]{d_v\Gamma_v\big(\oct{k}{\vec1^k}\big)} \\&\ge (1-\eta')\frac{\mathcal{P}\big(\oct{k+1}{\vec1^{k+1}}\big)}{p(0)}\Ex{\gamma(v)d_v}\,, \end{align} by the definition of $U$ and \eqref{eq:G11k}. We conclude \begin{equation} \Ex{\gamma(v) d_v} \le \left(1+\varepsilon_m''+2\eta'+2\eta'\varepsilon_m''\right) dd'\cdot p(0)\,.
\end{equation} For the second statement, Claim~\ref{clm:U}\ref{itm:Gamvmin} means that when $v\in U$ we can apply Corollary~\ref{cor:relCSoctlow} to $\mathcal{G}_v\le\Gamma_v$ and obtain a lower bound on $\mathcal{G}_v\big(\oct{k}{\vec2^k}\big)$, \begin{align} \mathcal{G}_v\big(\oct{k}{\vec2^k}\big) &\ge \frac{d_v^{2^k}}{(1+16\eta')^{2^k-1}}\Gamma_v\big(\oct{k}{\vec2^k}\big)\label{eq:GvOkb-lb} \\&\geq \frac{1-\eta'}{(1+16\eta')^{2^k-1}}\frac{\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)}{p(0)} \cdot \gamma(v)d_v^{2^k}\,. \intertext{Then by \eqref{eq:GF1acor} again,} p(0) \mathcal{G}\big(\oct{k+1}{1,\vec2^k}\big) &\ge \frac{1-\eta'}{(1+16\eta')^{2^k-1}}\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)\Ex[\big]{\gamma(v)d_v^{2^k}}\,, \end{align} which together with \eqref{eq:G12k} implies the required upper bound \[ \Ex[\big]{\gamma(v)d_v^{2^k}} \le \frac{(1+16\eta')^{2^k-1}}{1-16\eta'} (1+\varepsilon_m'')(dd')^{2^k}\cdot p(0)\,.\qedhere \] \end{claimproof} Claim~\ref{clm:Edvbounds} means that we have concentration of $d_v$ by Corollary~\ref{cor:higherECSconc} with $X=\gamma(v)$ and $Y=d_v$. Writing $U_{\mathrm{conc}}\subseteq U$ for the vertices $v$ with $d_v= (1\pm2\zeta^{1/8})dd'$, we have \begin{equation}\label{eq:Uconc} \vnorm{U_{\mathrm{conc}}}{\Gamma}\ge\big(1-4\zeta^{1/8}\big)p(0)\,. \end{equation} It remains to show that for almost all of the weight in $U_{\mathrm{conc}}$, $\mathcal{G}_v$ is regular in the sense that the weight of $\oct{k}{\vec2^k}$ is close to minimal. Let $U_{\mathrm{reg}}\subseteq U$ be the vertices $v\in U$ with \[ \mathcal{G}_v\big(\oct{k}{\vec2^k}\big)\le (d_v^{2^k}+\varepsilon')\Gamma_v\big(\oct{k}{\vec2^k}\big)\,. \] For all vertices $v\in U$ we have the lower bound~\eqref{eq:GvOkb-lb} on $\mathcal{G}_v\big(\oct{k}{\vec2^k}\big)$, hence for $v\in U\setminus U_{\mathrm{reg}}$ we have an additive improvement on~\eqref{eq:GvOkb-lb} of at least $\varepsilon'\Gamma_v\big(\oct{k}{\vec2^k}\big)$.
Then we have \begin{align} \mathcal{G}\big(\oct{k+1}{1,\vec2^k}\big) &= \Ex[\big]{\mathcal{G}_v\big(\oct{k}{\vec2^k}\big)} \\&\ge \frac{1}{(1+16\eta')^{2^k-1}}\Ex[\Big]{\big(d_v^{2^k}+\varepsilon'\indicator{v\in U\setminus U_{\mathrm{reg}}}\big)\Gamma_v\big(\oct{k}{\vec2^k}\big)} \\&\ge \frac{1-\eta'}{(1+16\eta')^{2^k-1}}\left(\Ex[\big]{\gamma(v)d_v^{2^k}}+\varepsilon'\vnorm{U\setminus U_{\mathrm{reg}}}{\Gamma}\right)\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)/p(0) \\&\ge \frac{1-\eta'}{(1+16\eta')^{2^k-1}}\left(\frac{\Ex{\gamma(v)d_v}^{2^k}}{\vnorm{U}{\Gamma}^{2^k-1}}+\varepsilon'\vnorm{U\setminus U_{\mathrm{reg}}}{\Gamma}\right)\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)/p(0) \\&\ge \frac{(1-\eta')(1-\zeta)^{2^k}}{\big((1+16\eta')(1+\eta)\big)^{2^k-1}}\left( (dd')^{2^k}+\frac{\varepsilon'\vnorm{U\setminus U_{\mathrm{reg}}}{\Gamma}}{p(0)}\right)\mathcal{P}\big(\oct{k+1}{1,\vec2^k}\big)\,, \end{align} where the fourth line is by the Cauchy--Schwarz inequality, and the fifth is by Claim~\ref{clm:Edvbounds} and the fact that $\vnorm{U}{\Gamma}\le\vnorm{V_0}{\Gamma}\le (1+\eta)p(0)$. With \eqref{eq:G12k} we have \begin{align}\label{eq:Ureg} \vnorm{U\setminus U_{\mathrm{reg}}}{\Gamma} &\le \frac{1}{\varepsilon'}\left(\frac{\big((1+16\eta')(1+\eta)\big)^{2^k-1}(1+\varepsilon_m'')}{(1-\eta')(1-\zeta)^{2^k}}-1\right) (dd')^{2^k} p(0) \\&\le \frac{1}{\varepsilon'}2^{2^{k+2}+54}k^3\big(\varepsilon^{1/2}+\eta^{1/4}\big)(dd')^{-2^k}p(0) \\&\le 2^{2^{k+2}+54}k^3\big(\varepsilon^{1/4}+\eta^{1/8}\big)(dd')^{-2^k}p(0)\,, \end{align} where for the last line we use that $\varepsilon'\ge \max\{\varepsilon^{1/4},\,\eta^{1/8}\}$. Now, for Lemma~\ref{lem:k-inherit} we may take $V_0'=U_{\mathrm{conc}}\cap U_{\mathrm{reg}}$, since then for $v\in V_0'$ the link $\mathcal{G}_v$ inherits both the desired relative density and regularity from $\mathcal{G}$.
Moreover, by~\eqref{eq:Uconc} and~\eqref{eq:Ureg} we have \begin{align} \vnorm{V_0'}{\Gamma} &\ge \vnorm{U_{\mathrm{conc}}}{\Gamma} - \vnorm{U\setminus U_{\mathrm{reg}}}{\Gamma} \\&\ge \left(1-4\zeta^{1/8}-2^{2^{k+2}+54}k^3\big(\varepsilon^{1/4}+\eta^{1/8}\big)(dd')^{-2^k}\right)p(0) \\&\ge \left(1-2^{2^{k+6}}k^3\big(\varepsilon^{1/16}+\eta^{1/32}\big)(dd')^{-2^k}\right)\vnorm{V_0}{\Gamma}\,, \end{align} where we again use \ref{inh:count} for the last line. To complete the proof, observe that we choose $\varepsilon$, $\eta$ in terms of $\varepsilon'$, $d$, $d'$, and $k$ to satisfy \[ \min\{\varepsilon', 2^{-k}\} \ge 2^{2^{k+6}}k^3\big(\varepsilon^{1/16}+\eta^{1/32}\big)(dd')^{-2^k}\,.\qedhere \] \end{proof} \section{Concluding remarks} A feature of this paper is that we intend for the methods, in particular the definitions of THC and GPE, and the proofs that one can obtain these properties, to be of more interest than the results we obtain with them. Indeed, some of our theorems are similar to results that can be deduced from combinations of existing hypergraph regularity and counting methods in the literature. In this section we discuss methods presented in this paper and from the literature from the perspective of some possible applications, and highlight useful features of a number of different ways to prove hypergraph counting results. \subsection{Counting in sparse hypergraphs} Our THC and GPE methods (Theorems~\ref{thm:GaTHC}, \ref{thm:counting}, and~\ref{thm:embedding}) allow techniques which resemble those for working with the regularity setup of Rödl--Skokan~\cite{RSreg} to be used in sparse graphs. If one is interested (say) in a relative hypergraph removal lemma, then one could use Theorem~\ref{thm:counting} to give a somewhat direct proof which has the flavour of generalising the methods of the dense case (embedding vertex-by-vertex) to the sparse case.
Alternatively, Conlon, Fox, and Zhao~\cite{CFZrelative} show that one can transfer the dense case result (used as a black box) to the sparse case. Their approach is certainly easier to write down, and in many cases requires slightly weaker pseudorandomness of the majorising hypergraph (our approach would be much better for removing large graphs with small maximum degree, theirs would be better for cliques), but we claim that having a direct proof may be useful in other applications of the method. A direct proof may be more amenable to modification for use in applications that require more precise control of embedded vertices, such as the blow-up lemma. A simple example of this is already present here: in Theorem~\ref{thm:embedding} we can exploit the maximum degree of the graph to be embedded to allow embedding large graphs in a way that does not follow easily from the methods of~\cite{CFZrelative}. \subsection{Embedding in sparse hypergraphs} As mentioned briefly in Section~\ref{sec:main}, we give two self-contained ways to prove embedding results in this paper. Given a THC-graph $\Gamma$, the GPE methods yielding Theorem~\ref{thm:embedding} show that one can embed a bounded-degree but large (size growing with $v(\mathcal{G})$) complex $F$ into a regular subgraph $\mathcal{G}\subseteq\Gamma$. But one can also use Theorem~\ref{thm:counting} (or similar results from the literature) to count \emph{fixed size} subgraphs in $\mathcal{G}$, and apply Theorem~\ref{thm:GaTHC} to obtain that $\mathcal{G}$ is itself a THC-graph. An analogous embedding result for $F$ follows. For applications, the latter approach may save quite some effort: to describe $\mathcal{G}$ as a THC-graph requires only a density graph and the constants $c^*$ and $\eta$, and one can ignore the majorising hypergraph. In contrast, a GPE has an ensemble of parameters and much more structure to consider. 
One does lose information going to a THC-graph, however, and one may wish to keep the GPE formalism to allow appealing to special properties of $\Gamma$, such as when $\Gamma$ is a random graph. A particular example to bear in mind is the result of~\cite{ABETbandwidth}, in which it is shown (among other things) that a triangle factor with $n-cp^{-2}$ vertices has with high probability local resilience $\tfrac13-o(1)$ in $\Gamma=G(n,p)$ if $p$ is not too small. This number of vertices in the triangle factor is optimal, and it is possible to prove such a result only because one has access to the graph $\Gamma$ (using a graph version of the GPE formalism, as made explicit in~\cite{ABHKPblow}). If one attempted to prove such a result using the THC formalism, without access to $\Gamma$, then the best one could hope for would be to prove local resilience for an $\big(n-o(n)\big)$-vertex triangle factor. We conclude by sketching a new result, showing that sparse pseudorandom hypergraphs have the Ramsey property for large bounded degree hypergraphs, which one can prove using the THC formalism. \begin{theorem}\label{thm:THCexample} Given $\Delta,k,r\ge 2$ there exist $C,\varepsilon>0$ such that the following holds for any $p\in(0,1]$ and all sufficiently large $n$. Suppose that $\Gamma$ is an $n$-vertex $k$-graph such that for any $k$-graph $F$ with at most $C$ vertices we have $\Gamma(F)=(1\pm\varepsilon)p^{e(F)}$. Then however the $k$-edges of $\Gamma$ are $r$-coloured, there is a colour $c$ with the following property. For each $k$-graph $H$ with $\Delta(H)\le\Delta$ and $v(H)\le\varepsilon p^{\Delta} n$, there is a copy of $H$ in $\Gamma$ all of whose edges have colour $c$. \end{theorem} Note that one would expect that one can actually allow $H$ to have up to $\varepsilon n$ vertices. We expect it would not be very hard to prove this, but we prefer to give a clean illustration of how one can do embedding of moderately large graphs.
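For orientation, we spell out the simplest instance of this statement, the graph case $k=2$ with $\Delta=r=2$: if every graph $F$ on at most $C$ vertices satisfies $\Gamma(F)=(1\pm\varepsilon)p^{e(F)}$, then for any $2$-colouring of the edges of $\Gamma$ there is a colour $c$ such that every graph $H$ (for instance a path or a cycle) with
\[
\Delta(H)\le 2\quad\text{and}\quad v(H)\le\varepsilon p^{2}n
\]
has a copy in $\Gamma$ all of whose edges have colour $c$.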
\begin{proof}[Sketch proof] We choose $\Delta+2\ll C'\ll C$ and $0<\varepsilon\ll\eta\ll\eta'$. Given a $k$-graph $\Gamma$, and an $r$-colouring of its edges, we begin by applying the sparse hypergraph regularity lemma, Lemma~\ref{lem:RSchreglem} (with input $\varepsilon_k$ much smaller than $\eta$ and much larger than $\varepsilon$), to the $k$-graphs $G_1,\dots,G_r$, where $G_i$ consists of the colour-$i$ edges of $\Gamma$. By a straightforward counting argument, we find a collection of $\ell=R_r^{(k)}(k\Delta)$ clusters $V_1,\dots,V_\ell$ in the resulting family of partitions, and a collection of $2$-cells between all pairs, $3$-cells between all triples, and so on, with the following properties. First, each $i$-cell is regular for $2\le i\le k-1$, and each $i$-cell is supported on the chosen $(i-1)$-cells for $3\le i\le k-1$. Second, for each colour $1\le i\le r$, the graph $G_i$ is regular with respect to each polyad on the chosen cells. We now assign a colour to each $k$-set in $[\ell]$ by choosing one of the densest colours in the corresponding $k$-polyad. By definition of $\ell$, we can choose a colour $1\le c\le r$ and a subset $V'_1,\dots,V'_{k\Delta}$ of clusters such that in each $k$-polyad the graph $G_c$ is regular and has density at least $1/r$. Define a complex $G$ on vertex set $V'_1\cup\dots\cup V'_{k\Delta}$ by taking all the edges of the chosen cells on this vertex set, together with the edges of $G_c$ they support. Given a $k$-graph $H$ with maximum degree $\Delta$ and at most $\varepsilon p^{\Delta} n$ vertices, suppose $V(H)=[v(H)]$ and let $\mathcal{H}$ be the complex obtained from $H$ by down-closure. Note that $\mathcal{H}^{(2)}$ has maximum degree at most $(k-1)\Delta$, and hence there is a partition of $V(\mathcal{H})$ into $k\Delta$ parts such that no edge of $\mathcal{H}^{(2)}$ lies in any one part. We fix such a partition, and assign vertices of $H$ to the $k\Delta$ clusters of $G$ according to the partition.
Let $\mathcal{D}$ be the corresponding relative density graph, with $\mathcal{D}(e)$ being the relative density of the $|e|$-cell on clusters $e$ (if $|e|<k$) or of $G$ relative to the $k$-polyad on clusters $e$ (if $|e|=k$). By Theorem~\ref{thm:GaTHC}, we see that $\Gamma$ is an $(\eta,C')$-THC graph, and so is any graph obtained from $\Gamma$ by the standard construction. So applying Theorem~\ref{thm:counting}, we obtain that counts of graphs on up to $(\Delta+2)^2$ vertices in $G$ are as one would expect for the density graph $\mathcal{D}'$, where $\mathcal{D}'$ is obtained from $\mathcal{D}$ by keeping the weights of all edges the same, except for the $k$-edges whose weights are multiplied by $p$. By Theorem~\ref{thm:GaTHC} again, we see that $G$, and any graph obtained from it by the standard construction, is an $(\eta',\Delta+2)$-THC graph. We apply the standard construction to $G$ to obtain a $v(H)$-partite graph $G_0$, with corresponding density graph $\mathcal{R}_0$ obtained by applying the standard construction to $\mathcal{D}'$. We now choose in order, for $1\le i\le v(H)$, an image $v_i$ for the vertex $i$ of $V(H)$ in $X_i$. We do this as follows. First, we look at the vertices of $X_i$ in $G_{i-1}$. These vertices have weight either zero or one in $G_{i-1}$, and the total weight is (because $G_{i-1}$ is an $(\eta',\Delta+2)$-THC graph) equal to $(1\pm\eta')r_{i-1}(i)|X_i|$. Of the vertices with weight one, at most $\eta' r_{i-1}(i)|X_i|$ vertices $v$ are such that the link graph $\big(G_{i-1}\big)_v$ fails to be an $(\eta',c^*)$-THC graph with density graph $\big(\mathcal{R}_{i-1}\big)_i$. We choose a vertex $v_i$ which is not among these failing vertices, and which corresponds to a vertex of $G$ not previously used. We set $G_i:=\big(G_{i-1}\big)_{v_i}$ and $\mathcal{R}_i:=\big(\mathcal{R}_{i-1}\big)_i$. To see that this is always possible, it is enough to check that $v(H)<(1-2\eta')r_{i-1}(i)|X_i|$.
This is true by choice of $\varepsilon$ and because the product defining $r_{i-1}(i)$ contains at most $\Delta$ terms coming from the $k$-level of $\mathcal{R}_0$ (because $\Delta(H)\le\Delta$). \end{proof} \renewcommand*{\bibfont}{\small} \printbibliography \end{document}
\begin{document} \maketitle \begin{abstract} We analyze the transience, recurrence, and irreducibility properties of general sub-Markovian resolvents of kernels and their duals, with respect to a fixed sub-invariant measure $m$. We give a unifying characterization of the invariant functions, revealing that an $L^p$-integrable function is harmonic if and only if it is harmonic with respect to the weak dual resolvent. Our approach is based on potential theoretical techniques for resolvents in weak duality. We prove the equivalence between the $m$-irreducible recurrence of the resolvent and the extremality of $m$ in the set of all invariant measures, and we apply this result to the extremality of Gibbs states. We also show that our results can be applied to non-symmetric Dirichlet forms, in general and in concrete situations. A second application is the extension of the so-called {\it Fukushima ergodic theorem} for symmetric Dirichlet forms to the case of sub-Markovian resolvents of kernels. \end{abstract} \section{Introduction} Questions on recurrence, transience, and irreducibility of Markov processes have been treated in various frameworks and with specific tools, from both the probabilistic and the analytic viewpoint: see \cite{ChFu11}, \cite{Ge80}, \cite{Os92}, \cite{St94}, \cite{Fu07}, \cite{FuOsTa11}, and \cite{MaUeWa12} for continuous time processes, as well as \cite{MeTw93} and \cite{No97} for Markov chains, and the references therein. The purpose of this paper is twofold: first, we aim to clarify the connection between different definitions of transience, recurrence, and irreducibility, and to unify various characterizations of these notions. Second, we want to analyze whether transience, recurrence, and irreducibility are stable when passing to the dual structure, i.e. the dual Markov process or the dual resolvent, respectively, with the underlying measure being a sub-invariant measure for the initial resolvent.
On the way, we also obtain a number of new results on the subject, based on potential theoretical techniques. Motivated by relevant examples arising mainly in infinite dimensional settings, we present here an approach to this subject in an $L^p$-context, for sub-Markovian resolvents. It turns out to be a unifying method, revealing, in particular, applications to invariant and Gibbs measures. The structure and main results of this paper are as follows. In the first part of Section 2 we study different characterizations of transience, recurrence, and irreducibility of a sub-Markovian resolvent of kernels $\mathcal{U}$ on a Lusin measurable space $E$ with respect to a $\sigma$-finite sub-invariant measure $m$. We emphasize that we do not assume any continuity of the resolvent; our proofs rely on the weak duality for the resolvent $\mathcal{U}$ and corresponding potential theoretical techniques (see (\ref{eq 2.1}) below), in contrast to the proofs in \cite{Fu07} and \cite{FuOsTa11}, where the main ingredients are {\it Hopf's maximal inequality} and the continuity of the transition function. When $\mathcal{U}$ is the resolvent of a right process, we show that $m$-transience and $m$-irreducible recurrence are respectively equivalent to the transience and recurrence of the process in the stronger sense of \cite{Ge80}, outside some $m${\it -inessential} set; see Propositions \ref{prop 2.1.14} and \ref{prop 2.1.18}. This probabilistic counterpart was studied in \cite{FuOsTa11} for $m$-symmetric Hunt processes. Then, we give a characterization of invariant functions in $L^p(E, m)$, $1 \leq p \leq \infty$, unifying the approaches from stochastic processes, Dirichlet forms, positivity preserving semigroups, and ergodic theory (see Theorem \ref{thm 2.19}). Our results also cover and extend the ones in \cite{Sc04}, and we shall use them in Sections 4 and 5 to prove the equivalence of irreducibility and extremality of invariant measures, resp. extremality of Gibbs states.
A second consequence of Theorem \ref{thm 2.19} states that an element $u$ from the kernel of the generator of an $L^p$-strongly continuous sub-Markovian resolvent of contractive operators also belongs to the kernel of the co-generator on $L^p$ induced by weak duality; see Corollary \ref{prop 3} and the discussion at the beginning of Section 2. In Section 3 we apply the results of the previous one to prove the equivalence of irreducible recurrence and ergodicity of a sub-Markovian resolvent of kernels with respect to a sub-invariant $\sigma$-finite measure, as stated in Proposition \ref{prop 3.5}, extending the so-called {\it Fukushima ergodic theorem} for a (quasi-)regular Dirichlet form; see \cite{FuOsTa11}, Theorem 4.7.3 and \cite{AlKoRo97a}, Theorem 4.6. The key ingredient is Theorem \ref{thm 3.1}, which states the strong convergence of an $L^p$-uniformly bounded resolvent family of continuous operators $(\alpha U_\alpha)_{\alpha > 0}$ to the projection on the kernel of $\mathcal{I} - \beta U_\beta$, as $\alpha$ tends to $0$, for one (hence for all) $\beta > 0$. The central result of Section 4 is Theorem \ref{thm 2.26}, which states that the sub-Markovian resolvent of kernels $\mathcal{U}$ is $m$-recurrent and $m$-irreducible if and only if the measure $m$ is extremal in the set of all invariant probability measures for $\mathcal{U}$. This extends results from \cite{AlKoRo97a}, \cite{AlKoRo97b}, and \cite{DaZa96}, Section 3.1, concerning the ergodicity and extremality of invariant measures. In Section 5 we apply the obtained results on irreducibility and extremality of invariant measures to the context of (non-symmetric) Dirichlet forms. In Corollary \ref{coro 5.3} we give a characterization of the irreducibility of a Dirichlet form. It improves the one in \cite{AlKoRo97a}, Proposition 2.3, where the forms are symmetric, recurrent, and given by a square field operator.
We would like to point out another consequence of Corollary \ref{coro 5.3}, namely that both the recurrence and the irreducibility of a strongly sectorial (non-symmetric) Dirichlet form are equivalent to the respective properties of its symmetric part. We illustrate this by a concrete example in infinite dimensions (see Corollary \ref{coro 5.8}). The main results of this last section are given in a subsection on the extremality of Gibbs states. Recall that in \cite{AlKoRo97a} the authors extend classical results of Holley and Stroock for the Ising model, proving that a Gibbs state is extremal if and only if the corresponding Dirichlet form is irreducible (or, equivalently, ergodic), for classes of lattice models with non-compact, but linear spin space. In particular, numerous examples of irreducible Dirichlet forms on infinite dimensional state spaces are obtained. For applications to more general models see \cite{AlKoRo97b}. Our purpose is to recapture two of the main results in \cite{AlKoRo97a} as particular cases of Theorem \ref{thm 2.26} and thus to place the problem in a broader context. The key point is Theorem \ref{thm 5.5}, according to which the space of Gibbs measures which are absolutely continuous with respect to a fixed Gibbs measure $m$ coincides with the space of all $\mathcal{U}$-invariant probability measures which are absolutely continuous with respect to $m$; here, $\mathcal{U}$ is the resolvent of the Dirichlet form. Theorem \ref{thm 5.6} is the main result on the equivalence between the extremality of Gibbs states and the irreducibility of the corresponding Dirichlet form. In order to give a better overview of the paper, we summarize its structure and the main results in the following two diagrams.
\newgeometry{margin=0.3 cm} \begin{landscape} \begin{center} \includegraphics[scale=0.8]{Diagrama.pdf} \end{center} \end{landscape} \restoregeometry Finally, we point out that the situation where we are given a sub-Markovian resolvent of contractive operators $\mathcal{V}=(V_\alpha)_{\alpha > 0}$ on $L^p(E, m)$, $p \in [1, \infty)$, where $m$ is a sub-invariant measure on $E$, is covered by our framework, since one can always construct a sub-Markovian resolvent of kernels $\mathcal{U}= (U_\alpha)_{\alpha > 0}$ on $(E, \mathcal{B})$ such that $U_\alpha = V_\alpha$ as operators on $L^p(E,m)$ for all $\alpha > 0$. We give more details on this in the beginning of the next section. \section{Transience, recurrence, and irreducibility of a sub-Markovian resolvent of kernels} \noindent {\bf Preliminaries on resolvents of kernels, $L^p$-resolvents, and duality.} Throughout we follow the terminology of \cite{BeBo04}. Let $(E, \mathcal{B})$ be a Lusin measurable space and $\mathcal{U} = (U_\alpha)_{\alpha > 0}$ be a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$. Throughout, we denote by $p\mathcal{B}$ the space of all positive $\mathcal{B}$-measurable functions defined on $E$. The {\it initial kernel} of $\mathcal{U}$ is defined as $U := \sup\limits_{\alpha > 0} U_\alpha = \sup\limits_{n} U_{\frac{1}{n}}$. Recall that a function $u \in p\mathcal{B}$ is called {\it $\mathcal{U}$-supermedian} if $\alpha U_{\alpha} u \leq u$ for all $\alpha > 0$; it is called {\it $\mathcal{U}$-excessive} if it is $\mathcal{U}$-supermedian and $\alpha U_{\alpha} u \nearrow u$ as $\alpha$ tends to infinity. We denote by $\mathcal{S}(\mathcal{U})$ and $\mathcal{E}(\mathcal{U})$ the sets of all $\mathcal{U}$-supermedian (resp. $\mathcal{U}$-excessive) functions. A $\sigma$-finite measure $\mu$ is called {\it sub-invariant} w.r.t. $\mathcal{U}$ if $\mu \circ \alpha U_{\alpha} \leq \mu$, $\alpha > 0$.
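As a quick aid to the reader (a routine consequence of the resolvent equation, assuming nothing beyond the definitions above), the initial kernel $U$ is related to the kernels $U_\alpha$ as follows:

```latex
% Routine consequence of the resolvent equation; only the identity
%   U_\beta - U_\alpha = (\alpha - \beta)\, U_\alpha U_\beta   (0 < \beta < \alpha)
% for the sub-Markovian resolvent \mathcal{U} is assumed.
\begin{align*}
  U_\beta f &= U_\alpha f + (\alpha - \beta)\, U_\alpha U_\beta f,
    && f \in p\mathcal{B},\ 0 < \beta < \alpha, \\
  U f &= U_\alpha f + \alpha\, U_\alpha U f,
    && \text{letting } \beta \searrow 0,\ \text{by monotone convergence}.
\end{align*}
```

In particular $\alpha U_{\alpha} (U f) \leq U f$ for all $\alpha > 0$, so every potential $U f$ is $\mathcal{U}$-supermedian; this identity is used repeatedly below, e.g. in the proof of Proposition \ref{prop2.3}.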
For $\beta > 0$ we denote by $\mathcal{U}_{\beta}$ the $\beta$-order sub-Markovian resolvent of kernels associated to $\mathcal{U}$, that is $\mathcal{U}_{\beta} := (U_{\beta + \alpha})_{\alpha > 0}$. If $m$ is a $\sigma$-finite sub-invariant measure, then by \cite{BeBo04}, Theorem 1.4.14, there exists a second sub-Markovian resolvent of kernels $\mathcal{U}^\ast = (U^\ast_\alpha)_{\alpha > 0}$ on $(E, \mathcal{B})$ such that \begin{equation} \label{eq 2.1} \mathop{\int}\limits_{E} f U_{\alpha}g dm = \mathop{\int}\limits_{E} g U_{\alpha}^{\ast} f dm \; \mbox{for all} \; f,g \in p\mathcal{B} \; {\rm and} \; \alpha > 0. \end{equation} Such a sub-Markovian resolvent is uniquely determined $m$-a.e. and it is called the {\it adjoint} of $\mathcal{U}$ w.r.t. $m$. Using H\"older's inequality and extending by linearity, one can easily check that $\mathcal{U}$ becomes a sub-Markovian family of contractive operators on $L^p(E,m)$ for all $1 \leq p < \infty$. Conversely, if $\mathcal{V}:=(V_{\alpha})_{\alpha > 0}$ is a sub-Markovian resolvent of contractive operators on $L^p(E, m)$ for some $p \in [1, \infty)$, where $m$ is a $\sigma$-finite measure on $(E, \mathcal{B})$ such that $\int \alpha V_{\alpha}f dm \leq \int f dm$ for all $\alpha > 0$ and $f \in p\mathcal{B} \cap L^p(E, m)$, then by \cite{BeBo04}, Proposition 1.4.13 and Lemma A.1.9, there exist two sub-Markovian resolvents of kernels $\mathcal{U} = (U_{\alpha})_{\alpha >0}$ and $\mathcal{U}^{\ast} = (U_{\alpha}^{\ast})_{\alpha > 0}$ on $(E, \mathcal{B})$ such that: $a)$ $U_{\alpha}=V_{\alpha}$ as operators on $L^p(E,m)$ for all $\alpha>0$; $b)$ \; $\mathcal{U}$ and $\mathcal{U}^{\ast}$ are in {\it weak} duality with respect to $m$, that is (\ref{eq 2.1}) holds. \noindent Moreover, if $\mathcal{V}$ is strongly continuous, i.e.
$\lim\limits_{\alpha \to \infty} \| \alpha V_\alpha f - f\|_{L^p}=0$ for all $f \in L^p(E,m)$, $1 \leq p < \infty$, then by \cite{BeBoRo06a}, Remark 2.3, and Corollary 2.4, we also have $c)$ \; $1 \in \mathcal{E}(\mathcal{U}_{\beta})\cap\mathcal{E}(\mathcal{U}^{\ast}_{\beta})$, $\sigma(\mathcal{E}(\mathcal{U}_{\beta})) = \mathcal{B} = \sigma(\mathcal{E}(\mathcal{U}^{\ast}_{\beta}))$; $d)$ \; Every point of $E$ is a non-branch point for $\mathcal{U}$ and $\mathcal{U}^{\ast}$.\\ We note that, as in \cite{MaRo92}, Chapter II, Proposition 4.3, the strong continuity of $\mathcal{V}$ for $1 \leq p < \infty$ is satisfied if one can find a dense subset $D \subset L^p(E,m)$ such that $\alpha V_\alpha f \mathop{\longrightarrow}\limits_{\alpha \to \infty} f$ in $m$-measure for all $f \in D$. A second approach to strong continuity is given by the next result, for which we refer to \cite{BeBoRo06a}, Remark 2.3, and Corollary 2.4 and \cite{BeBo04}, Subsection 7.5. \begin{prop} \label{prop 2.0} Let $\mathcal{U}$ be a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$. If $\mathcal{E}(\mathcal{U}_\beta)$ is min-stable, contains the positive constant functions, and generates $\mathcal{B}$, then $\mathcal{U}$ becomes a strongly continuous sub-Markovian resolvent of contractive operators on $L^p(E, m)$ for every $\sigma$-finite sub-invariant measure $m$ and $1 \leq p < \infty$.
\end{prop} We would also like to stress that if we deal with a strongly continuous sub-Markovian resolvent of contractions $\mathcal{U}$ on $L^p(E,m)$ then one can always find a larger Lusin topological space $E \subset E_1$, $E \in \mathcal{B}(E_1)$, $\mathcal{B} = \mathcal{B}(E_1)|_E$, and an $E_1$-valued right Markov process such that its resolvent $\mathcal{U}^1$, regarded on $L^p(E_1,\overline{m})$, coincides with $\mathcal{U}$ and $U^1_\alpha(1_{E_1 \setminus E}) = 0$, where $\overline{m}$ is the measure on $(E_1, \mathcal{B}(E_1))$ extending $m$ by zero on $E_1 \setminus E$; see \cite{BeBoRo06a}, Theorem 2.2, and also \cite{BeRo15} for the extension to $E_1$ of the adjoint resolvent. Taking into account that the properties of $m$-transience, $m$-recurrence, and $m$-irreducibility are preserved by modifying the initial space with some zero measure set, the results presented in this paper have a probabilistic counterpart. As mentioned in the Introduction, due to the above remarks we can assume without loss of generality that $\mathcal{U}$ is a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$. We also fix a $\sigma$-finite sub-invariant measure $m$. \begin{defi*}[(cf. \cite{Fu07})] i) The resolvent $\mathcal{U}$ is called {\it $m$-transient} provided there exists $f \in p\mathcal{B} \cap L^1(E, m)$ such that $m$-a.e. we have $f >0$ and $U f < \infty$. ii) The resolvent $\mathcal{U}$ is called {\it $m$-recurrent} if for every $f \in p\mathcal{B}\cap L^1(E, m)$ we have $m(\{ x \in E : 0 < Uf(x) < \infty \}) = 0$. \end{defi*} \begin{prop} \label{prop2.1} The following assertions are equivalent. i) $\mathcal{U}$ is $m$-transient. ii) $\mathcal{U}^{\ast}$ is $m$-transient. iii) There exists $f_0 \in p\mathcal{B} \cap L^1(E, m)$, $0 < f_0 \leq 1$ $m$-a.e., such that $U f_0$ is bounded. iv) For every $f \in p\mathcal{B} \cap L^1(E, m)$ we have $U f < \infty$ $m$-a.e.
\end{prop} \begin{proof} i) $\Rightarrow$ iii) Let $f \in p\mathcal{B} \cap L^1(E, m)$ be such that $m$-a.e. $f > 0$ and $U f < \infty$. For every $n \in \mathbb{N}^{\ast}$ let us put $A_n := \{ x \in E : Uf(x) \leq n \}.$ Then clearly $m(E \setminus \mathop{\bigcup}\limits_{n} A_n) = 0$ and by the complete maximum principle we get $U(f\cdot 1_{A_n}) \leq n.$ If we put $f_0 := \inf (1, \mathop{\sum}\limits_{n = 1}^{\infty} \dfrac{1}{n\cdot 2^n} f \cdot 1_{A_n}) $ then $0 < f_0 \leq 1$ $m$-a.e. and $U f_0 \leq 1$ on $E$. ii) $\Rightarrow$ iv) Applying i) $\Rightarrow$ iii) for $\mathcal{U}^{\ast}$ we get a function $g_0 \in p\mathcal{B} \cap L^1(E, m)$, $0 < g_0 \leq 1$ $m$-a.e. such that $U^{\ast}g_0$ is bounded. If $f \in p\mathcal{B} \cap L^1(E, m)$ then $\int g_0 U f dm = \int fU^{\ast}g_0 dm < \infty$. Since $g_0 >0$ $m$-a.e. we deduce that $U f < \infty$ $m$-a.e. The implications iii) $\Rightarrow$ i) and iv) $\Rightarrow$ i) are trivial, so ii) $\Rightarrow$ i); since the roles of $\mathcal{U}$ and $\mathcal{U}^{\ast}$ may be interchanged, we also obtain i) $\Leftrightarrow$ ii). \end{proof} \begin{rem} a) For a different proof of the equivalence i) $\Leftrightarrow$ iv), which makes heavy use of Hopf's maximal inequality and the continuity of the transition function, see \cite{Fu07}, Proposition 1.1, i) and \cite{FuOsTa11}, Lemma 1.5.1. b) If $\mathcal{U}$ is the resolvent of a right process and $m$ is a $\sigma$-finite sub-invariant measure, then by \cite{Fu07}, Section 3.1, $\mathcal{U}$ is $m$-transient if and only if there exists a sequence of Borel finely open sets $(B_n)_{n \geq 1}$ increasing to $E$ such that q.e. in $x \in E$ (i.e. for all $x \in E$ outside some $m$-polar set) the last exit time of $B_n$ is finite $P_x$-a.e. In particular, if q.e. in $x \in E$ the process has finite lifetime $P_x$-a.e., then $\mathcal{U}$ is $m$-transient. \end{rem} For the reader's convenience we present the proof of the following essentially known result (see e.g. \cite{BeBo04}, Proposition 1.1.11).
\begin{prop} \label{prop 2.2} Let $v \in p\mathcal{B}$ be such that $\alpha U_{\alpha} v \leq v \; m\mbox{-a.e. for all} \; \; \; \alpha > 0.$ Then there exists an $\mathcal{U}$-excessive function $v'$ such that \[ v = v' \; m\mbox{-a.e.} \] Moreover, if $v$ is bounded then $v'$ can be chosen bounded. \end{prop} \begin{proof} We consider the sequence $(v_n)_{n \geq 0}$ defined inductively as follows \[ v_0 := v \; {\rm and} \; v_{n+1} := \mathop{\sup}\limits_{\alpha \in \mathbb{Q}_{+}}\sup(v_n, \alpha U_{\alpha} v_n). \] Then clearly the sequence is increasing and for all $\alpha \in \mathbb{Q}_{+}$ and $n \in \mathbb{N}$ we have $\alpha U_{\alpha} v_n \leq v_{n+1}$. Taking $v'' := \sup v_n$ we get that $\alpha U_{\alpha} v'' \leq v''$ for all $\alpha \in \mathbb{Q}_{+}$ and therefore $v''$ is an $\mathcal{U}$-supermedian function. Because $v_n = v$ $m$-a.e. for all $n$ (by induction: the sub-invariance of $m$ implies that $U_{\alpha} h = 0$ $m$-a.e. whenever $h \in p\mathcal{B}$ vanishes $m$-a.e., so $m$-a.e. equality is preserved at each step) we obtain that $v = v''$ $m$-a.e. The required function $v'$ will be the $\mathcal{U}$-excessive regularization of $v''$, $v' = \mathop{\sup}\limits_{k} k U_{k} v''$. In order to prove the second part of the statement we only have to notice that if $v$ is bounded then so are $v''$ and its $\mathcal{U}$-excessive regularization $v'$. \end{proof} The next proposition collects several characterizations of $m$-recurrence. \begin{prop} \label{prop2.3} The following assertions are equivalent. i) $\mathcal{U}$ is $m$-recurrent. ii) $\mathcal{U}^{\ast}$ is $m$-recurrent. iii) There exists $f_0 \in p\mathcal{B} \cap L^1(E, m)$ such that $U f_0 = +\infty$ $m$-a.e. iv) For every $f \in p\mathcal{B}$ we have \[ U f = +\infty \quad m\mbox{-a.e. on} \; \; \; [f > 0]. \] v) For every $v \in L^\infty(E, m)$ such that $\alpha U_\alpha v \leq v$ $m$-a.e. for all $\alpha > 0$ we have \[ \alpha U_{\alpha} v = v \quad m\mbox{-a.e. for all} \; \; \; \alpha > 0. \] v$'$) For every $\mathcal{U}$-excessive function $v$ we have \[ \alpha U_{\alpha} v = v \quad m\mbox{-a.e. for all} \; \; \; \alpha > 0.
\] \end{prop} \begin{proof} i) $\Rightarrow$ iv). Let $f \in p\mathcal{B} \cap L^1(E, m)$ and $A = [U f = 0]$. Since $U(f 1_A) \leq U f = 0$ on $A$, by the complete maximum principle it follows that $U_{\alpha}(f 1_A) = 0$ and thus $m([f > 0] \cap A) = 0$. By hypothesis we conclude that $U f = +\infty$ $m$-a.e. on $[f > 0]$. iv) $\Rightarrow$ i) Let $M = [0 < U f < \infty]$. By iv) we get $[U f = \infty] \supset [f > 0]$ $m$-a.e. Let $B = [U f < \infty] \cap [f > 0]$. Then $m(B) = 0$ and $[f 1_{E \setminus B} > 0] \subset [U f = \infty]$. It follows (again by the complete maximum principle, since $U f = \infty$ on $[f 1_{E \setminus B} > 0]$) that $U(f 1_{E \setminus B}) \leq \dfrac{1}{n}U f$ for all $n$ and thus $U(f 1_{E \setminus B}) = 0$ on $[U f < \infty]$, hence $U f = 0$ $m$-a.e. on $[U f < \infty]$, i.e. $m(M) = 0$. Clearly we have iv) $\Rightarrow$ iii). iii) $\Rightarrow$ ii) Let $f_0 \in p\mathcal{B} \cap L^1(E, m)$ be such that $U f_0 = +\infty$ $m$-a.e. If $g \in p\mathcal{B}\cap L^1(E, m)$ is such that $\int g dm > 0$ then we claim that for every $A \in \mathcal{B}$ with $A \subset [g > 0]$ such that $U^{\ast}(g 1_A)$ is bounded we have $m(A) = 0$. Indeed, in the contrary case we have $m(A) > 0$ and since $U^{\ast}(g 1_A)$ is bounded we arrive at the contradictory relation \[ \infty = \int g 1_A U f_0 dm = \int f_0 U^{\ast}(g 1_A) dm < \infty. \] We conclude that $U^{\ast} g = \infty$ $m$-a.e. on $[g > 0]$. By the implication iv) $\Rightarrow$ i) applied to $\mathcal{U}^{\ast}$ we deduce that assertion ii) holds. ii) $\Rightarrow$ iv). Let $g \in p\mathcal{B}$, $0<g<1$, be such that $m(g) < \infty$ and take $f \in p\mathcal{B}$. Then $\inf(f,g) \in p\mathcal{B} \cap L^1(E, m)$, $U\inf(f,g) \leq Uf$ and $[\inf(f,g)>0] = [f>0]$. Therefore, we may assume that $f \in p\mathcal{B} \cap L^1(E, m)$. Now suppose that there exists $\mathcal{B} \ni A \subset [f > 0]$ with $m(A) > 0$ and $U f$ bounded on $A$. It follows that the function $U(f 1_A)$ is bounded and \[ \int f 1_A U^{\ast}(f 1_A) dm = \int f 1_A U(f 1_A) dm < \infty. \] Consequently $U^{\ast}(f 1_A) < \infty$ $m$-a.e. on $A$ and by hypothesis ii) we get that $U^{\ast}(f 1_A) = 0$ $m$-a.e.
on $A$. We deduce that $\int f 1_A U(f 1_A) dm = 0$, hence $U(f 1_A) = 0$ $m$-a.e. on $A$, which leads to the contradictory relation $m(A) = 0$. We conclude that $U f = +\infty$ $m$-a.e. on $[f > 0]$. iv) $\Rightarrow$ v) Let $\alpha, \beta > 0$ and $v \in L^\infty(E, m)$ such that $\alpha U_\alpha v \leq v$ $m$-a.e. Then we have \[ U_{\beta}(v - \alpha U_{\alpha} v) = U_{\alpha}(v - \beta U_{\beta} v) \leq \dfrac{\| v - \beta U_{\beta} v \|_\infty}{\alpha} \leq \dfrac{2 \|v\|_\infty}{\alpha} \; \; \; m\mbox{-a.e.} \] and so \[ U(v - \alpha U_{\alpha} v) \leq \dfrac{2 \|v\|_\infty}{\alpha} \quad m\mbox{-a.e.} \] If $m(v - \alpha U_{\alpha} v) > 0$ then by hypothesis iv) we get the contradictory relation $$ \dfrac{2 \|v\|_\infty}{\alpha} \geq U(v - \alpha U_{\alpha} v) = +\infty \quad m\mbox{-a.e. on} \; [v - \alpha U_{\alpha} v > 0]. $$ Hence $\alpha U_{\alpha} v = v$ $m$-a.e. v) $\Rightarrow$ v$'$) If $\alpha > 0 $, $v \in \mathcal{E}(\mathcal{U})$ and $v_n := \inf(v, n)$, $n \in \mathbb{N}^{\ast}$, then $(v_n)_n \subset b\mathcal{E}(\mathcal{U})$, $\alpha U_{\alpha} v_n = v_n$ $m$-a.e. and $v_n \nearrow v$ pointwise. Hence $\alpha U_{\alpha} v = v$ $m$-a.e. v$'$) $\Rightarrow$ i) If $f \in p\mathcal{B} \cap L^1(E, m)$ and $\alpha > 0$ then $U f = U_{\alpha} f + \alpha U_{\alpha} U f$ and from v$'$) we have $\alpha U_{\alpha} U f = U f$ $m$-a.e. It follows that for all $\alpha > 0$ we have $m$-a.e. $U_{\alpha} f = 0$ on $[U f < \infty]$ and we conclude that $U f = 0$ $m$-a.e. on $[U f < \infty]$. \end{proof} \begin{rem} The implications i) $\Leftrightarrow$ iii) $\Rightarrow$ iv) in Proposition \ref{prop2.3} should be compared to \cite{Fu07}, Proposition 1.1, ii) and \cite{FuOsTa11}, Lemma 1.6.4 and Theorem 4.7.1, ii), where the context is that of a strongly continuous sub-Markovian semigroup on $L^2(E, m)$, respectively of a symmetric Dirichlet form and the proofs are based on Hopf's maximal inequality. 
\end{rem} As a consequence we have the following useful result. \begin{coro} \label{coro 2.4} Assume that $\alpha U_\alpha^\ast 1 = 1$ $m$-a.e. for all $\alpha > 0$, and that there exists an $m$-a.e. strictly positive $m$-integrable $\mathcal{U}$-excessive function. Then $\mathcal{U}$ is $m$-recurrent. Consequently, if $m(E) < \infty$ then the following assertions are equivalent. i) $\mathcal{U}$ is $m$-recurrent. ii) $\alpha U_\alpha 1 = 1$ $m$-a.e., $\alpha > 0$. iii) $\alpha U_\alpha^\ast 1 = 1$ $m$-a.e., $\alpha > 0$. \end{coro} \begin{proof} Let $s \in \mathcal{E}(\mathcal{U}) \cap L^1(E,m)$, $s >0$ $m$-a.e. If $u \in \mathcal{E}(\mathcal{U})$ and $u_n := \inf(ns, u)$ for all $n \in \mathbb{N}^\ast$, then $u_n \nearrow u$ $m$-a.e. and $(u_n)_n \subset \mathcal{E}(\mathcal{U}) \cap L^1(E,m)$. Since \[ \int{\alpha U_\alpha u_n dm} = \int{u_n \alpha U_\alpha^\ast 1 dm} = \int{u_n dm}, \] it follows that $m$-a.e. $\alpha U_\alpha u_n = u_n$ for all $n$ and therefore $\alpha U_\alpha u = u$. By Proposition \ref{prop2.3} we conclude that $\mathcal{U}$ is $m$-recurrent. Assume now that $m(E) < \infty$. The implication i) $\Rightarrow$ ii) follows by Proposition \ref{prop2.3}. ii) $\Rightarrow$ iii) Since $\alpha U_\alpha^\ast 1 \leq 1$ and $\int{\alpha U_\alpha^\ast 1 dm} = \int{\alpha U_\alpha 1 dm} = \int{1 dm} < \infty$, condition iii) is satisfied. The implication iii) $\Rightarrow$ i) follows by the first part of the statement. \end{proof} \begin{rem} a) By Proposition \ref{prop 2.9} below, the resolvent may be recurrent without possessing any excessive functions except the constant ones. In such situations the first assertion in Corollary \ref{coro 2.4} is not applicable unless $m(E)<\infty$. b) If $\mathcal{U}$ is the resolvent of a symmetric Dirichlet form $(\mathcal{E}, D(\mathcal{E}))$ on $L^2(E, m)$ then by \cite{FuOsTa11}, Theorem 1.6.3 (see also Theorem 1.6.5), $\mathcal{U}$ is $m$-recurrent if and only if there exists a sequence $(u_n)_{n} \subset D(\mathcal{E})$ such that $u_n \nearrow 1$ $m$-a.e.
and $\lim\limits_{n} \mathcal{E}(u_n,u_n) = 0$. Hence, if $m(E) < \infty$, then $\mathcal{U}$ is $m$-recurrent if and only if $1 \in D(\mathcal{E})$ and $\mathcal{E}(1,1) = 0$, which is in fact a particular case of Corollary \ref{coro 2.4} (see also Corollary \ref{coro 5.3}). \end{rem} \begin{defi*} A set $A \in \mathcal{B}$ is called {\it $\mathcal{U}$-absorbing} (with respect to $m$) provided that \[ U(1_{E \setminus A}) = 0 \quad m\mbox{-a.e. on} \; A. \] \end{defi*} \begin{rem} \label{rem 2.5} a) If the set $A$ is $\mathcal{U}$-absorbing (with respect to $m$) and $B \in \mathcal{B}$ is such that $A = B$ $m$-a.e. (i.e. $m(A \Delta B) = 0$), then $B$ is also $\mathcal{U}$-absorbing. b) If $\beta > 0$ then a set $A \in \mathcal{B}$ is $\mathcal{U}$-absorbing if and only if it is $\mathcal{U}_{\beta}$-absorbing. \end{rem} \begin{prop} \label{prop2.4} The following assertions are equivalent for a set $A \in \mathcal{B}$. i) The set $A$ is $\mathcal{U}$-absorbing (with respect to $m$). ii) The set $E \setminus A$ is $\mathcal{U}^{\ast}$-absorbing (with respect to $m$). iii) There exists a set $B \in \mathcal{B}$ such that $1_{E \setminus B} \in \mathcal{E}(\mathcal{U})$ and \[ A = B \quad m \mbox{-a.e.} \] iv) There exists a $\mathcal{U}$-excessive function $u$ such that \[ A = [u = 0] \quad m\mbox{-a.e.} \] v) There exists a $\mathcal{U}$-excessive function $u$ such that \[ A = [u < +\infty] \quad m\mbox{-a.e.} \] \end{prop} \begin{proof} i) $\Leftrightarrow$ ii). If $U(1_{E \setminus A}) = 0$ on $A$ $m$-a.e. then $0 = \int 1_A U(1_{E \setminus A}) dm = \int 1_{E \setminus A} U^{\ast}(1_A) dm$, hence $U^{\ast}(1_A) = 0$ on $E \setminus A$ $m$-a.e. Therefore the set $E \setminus A$ is $\mathcal{U}^{\ast}$-absorbing; the converse implication follows in the same way. i) $\Rightarrow$ iii) Let $B = [U(1_{E \setminus A}) = 0]$.
By i) we have \[ A \subset B \quad m\mbox{-a.e.} \] If we put $M := B\setminus A$ then $U(1_M) \leq U(1_{E \setminus A}) = 0$ on $M$ and by the complete maximum principle we deduce that $U(1_M) = 0$ and $m(M) = 0$. It follows that $B = A$ $m$-a.e. and since $U(1_{E\setminus A}) \in \mathcal{E}(\mathcal{U})$ we get also that $1_{E\setminus B} \in \mathcal{E}(\mathcal{U})$. The implication iii) $\Rightarrow$ iv) is clear, and iv) $\Rightarrow$ i) follows by assertion a) of Remark \ref{rem 2.5} since the set $[u = 0]$ is $\mathcal{U}$-absorbing. iii) $\Rightarrow$ v). Let $B \in \mathcal{B}$ be such that $A = B$ $m$-a.e. and $1_{E \setminus B} \in \mathcal{E}(\mathcal{U})$. Then the function $u$ defined by \[ u := \left\{\begin{array}{ll} \infty & {\rm on} \; E \setminus B\\ 0 & {\rm on} \; B \end{array} \right. \] is $\mathcal{U}$-excessive and clearly $B = [u < \infty]$. v) $\Rightarrow$ i). Let $u \in \mathcal{E}(\mathcal{U})$ be such that $A = [u < +\infty]$ $m$-a.e. and put $B := [u < \infty]$. Then $U(1_{E \setminus B}) \leq \dfrac{1}{n}u$ on $E$ for all $n$, hence $U(1_{E \setminus B}) = 0$ on $B$. Therefore $B$ is $\mathcal{U}$-absorbing. \end{proof} \begin{coro} \label{coro 2.7} If $(A_n)_{n}$ is a sequence of $\mathcal{U}$-absorbing sets then $\mathop{\bigcup}\limits_{n} A_n$ and $\mathop{\bigcap}\limits_{n} A_n$ are also $\mathcal{U}$-absorbing. \end{coro} \begin{proof} By Proposition \ref{prop2.4}, iii), for every $n$ there exist $B_n \in \mathcal{B}$ with $A_n = B_n$ $m$-a.e. and $u_n := 1_{E \setminus B_n} \in \mathcal{E}(\mathcal{U})$; in particular $A_n = [u_n = 0]$ $m$-a.e. Let $u := \mathop{\inf}\limits_{n} u_n$. Then $[u = 0] = \mathop{\bigcup}\limits_{n}[u_n = 0]$ (the $u_n$ being indicator functions) and $\alpha U_{\alpha} u \leq u$ $m$-a.e. for all $\alpha > 0$. From Proposition \ref{prop 2.2} and using again Proposition \ref{prop2.4} we conclude that $\mathop{\bigcup}\limits_{n}A_n$ is $\mathcal{U}$-absorbing. The equivalence i) $\Leftrightarrow$ ii) in the above proposition implies now that $\mathop{\bigcap}\limits_{n} A_n$ is also $\mathcal{U}$-absorbing.
\end{proof} \begin{defi*} The resolvent $\mathcal{U} = (U_{\alpha})_{\alpha > 0}$ is called {\it $m$-irreducible} provided that there exists no nontrivial $\mathcal{U}$-absorbing set (with respect to $m$), i.e., if $A \in \mathcal{B}$ is $\mathcal{U}$-absorbing then either $m(A) = 0$ or $m(E \setminus A) = 0$. \end{defi*} By Proposition \ref{prop2.4} it follows that $\mathcal{U}$ and $\mathcal{U}^{\ast}$ are simultaneously $m$-irreducible. The next result expresses the dichotomy of $\mathcal{U}$ under the assumption of irreducibility. \begin{prop} \label{prop 2.5} Assume that $\mathcal{U}$ is $m$-irreducible. Then the resolvent $\mathcal{U} = (U_{\alpha})_{\alpha > 0}$ is either $m$-transient or $m$-recurrent. \end{prop} \begin{proof} Suppose that $\mathcal{U}$ is not $m$-recurrent; then there exists $f \in p\mathcal{B} \cap L^1(E, m)$ such that $m([0 < U f < \infty]) > 0$. Then $m([U f< \infty]) > 0$ and $m([U f > 0]) > 0$ and by Proposition \ref{prop2.4} the sets $[U f < +\infty]$ and $[U f = 0]$ are $\mathcal{U}$-absorbing. Since $\mathcal{U}$ is $m$-irreducible, we deduce that $0 = m([U f = +\infty]) = m([U f = 0])$. Therefore we have $m$-a.e. $U f < \infty$ and $f > 0$, hence $\mathcal{U}$ is $m$-transient. \end{proof} \begin{rem} Proposition \ref{prop 2.5} was already proved in the case of a strongly continuous sub-Markovian resolvent on $L^2(E,m)$; we refer to \cite{Fu07}, Theorem 1.1, i) and \cite{FuOsTa11}, Lemma 1.6.4 iii). We also recall that in the case of a convolution semigroup on $\mathbb{R}^d$, by \cite{Fu07}, Theorem 1.2, the dichotomy still holds without requiring irreducibility. \end{rem} Recall that if $\beta > 0$ and $f \in p\mathcal{B}$ we may consider the ($\beta$-order) {\it reduced function} of $f$, defined by \[ R_{\beta} f := \inf \{ v \in \mathcal{S}(\mathcal{U}_{\beta}) : v \geq f\}.
\] Due to a result of Mokobodzki (see for example \cite{BeBo04}, Theorem 1.1.9) we have that $R_{\beta} f$ is $\mathcal{B}$-measurable and it is $\mathcal{U}_{\beta}$-supermedian. Notice that if $\beta' < \beta$ then $\mathcal{S}(\mathcal{U}_{\beta'})\subset \mathcal{S}(\mathcal{U}_{\beta})$ and consequently $R_{\beta} f \leq R_{\beta'} f$. Therefore if $f \in p\mathcal{B}$, we may consider $R_0 f$, the $0$-order reduced function of $f$, defined by \[ R_0 f(x) := \mathop{\sup}\limits_{\beta} R_{\beta}f(x) = \mathop{\lim}\limits_{\beta \searrow 0} R_{\beta} f(x), \; x \in E. \] It follows that $R_0 f$ is an $\mathcal{U}$-supermedian function. If $u \in \mathcal{S}(\mathcal{U})$ and $A \in \mathcal{B}$ then $R_0 (1_{A}u)\in \mathcal{S}(\mathcal{U}) = \mathop{\bigcap}\limits_{\beta > 0} \mathcal{S}(\mathcal{U}_{\beta})$, it is dominated by $u$ and equal to $u$ on $A$. Therefore \[ R_0 (1_Au) = \inf \{ v\in \mathcal{S}(\mathcal{U}) : v\geq u \; {\rm on} \; A \}. \] As we mentioned above, the resolvent need not possess $0$-order excessive functions other than the constant functions (with respect to $m$). This is the case if and only if the resolvent is irreducible and recurrent, and we express this fact in the next proposition (for the equivalence i) $\Leftrightarrow$ ii) below see also \cite{Fu07}, Theorem 1.1, ii)). For a probabilistic approach (in terms of an $m$-symmetric right process) to the implication i) $\Rightarrow$ iv) we refer to \cite{FuOsTa11}, Theorem 4.7.1, where condition iv) below holds q.e. (i.e. outside some $m$-polar set) and not only $m$-a.e. \begin{prop} \label{prop 2.9} The following assertions are equivalent. i) $\mathcal{U}$ is $m$-irreducible and $m$-recurrent. ii) For every $f \in p\mathcal{B} \cap L^1(E, m)$ with $\int f dm > 0$ we have $U f = +\infty$ $m$-a.e. iii) If $ f \in p\mathcal{B}$ then $m$-a.e. we have either $U f = 0$ or $U f = +\infty$. iv) We have $m$-a.e.
that every $\mathcal{U}$-excessive function is constant and $\alpha U_{\alpha} 1 = 1$, $\alpha > 0$. \end{prop} \begin{proof} i) $\Rightarrow$ ii). Let $f \in p\mathcal{B} \cap L^1(E, m)$ with $\int f dm > 0$ and set $A := [f > 0]$; then $m(A) > 0$ and by Proposition \ref{prop2.3} (since $\mathcal{U}$ is $m$-recurrent) it follows that \[ m([U f = +\infty]) \geq m([U f = +\infty] \cap A) > 0. \] The set $[U f < +\infty]$ is $\mathcal{U}$-absorbing (cf. Proposition \ref{prop2.4}) and therefore, $\mathcal{U}$ being $m$-irreducible, we deduce that $m([U f < +\infty]) = 0$, hence $U f = +\infty$ $m$-a.e. ii) $\Rightarrow$ iii). Let $g \in pb\mathcal{B} \cap L^1(E, m)$, $g > 0$. If $f \in p\mathcal{B}$ and $f_n := \inf(f, ng)$, $n \in \mathbb{N}^{\ast}$, then $(f_n)_n \subset p\mathcal{B} \cap L^1(E, m)$ and $f_n \nearrow f$ pointwise. If $m([U f > 0]) > 0$ then $m(U f) > 0$ and therefore $\int f dm > 0$. We consider $n_0 \in \mathbb{N}^{\ast}$ such that $\int f_{n_0} dm > 0$ and by hypothesis ii) we get $m$-a.e. $U f \geq U f_{n_0} = +\infty$. iii) $\Rightarrow$ i). Let $f \in b\mathcal{B} \cap L^1(E, m)$, $f > 0$. Then $U f > 0$ and therefore by iii) we deduce that $U f = +\infty$ $m$-a.e. From Proposition \ref{prop2.3} we conclude that $\mathcal{U}$ is $m$-recurrent. Let now $A$ be $\mathcal{U}$-absorbing. We may assume that $1_{E \setminus A} \in \mathcal{E}(\mathcal{U})$ (see Proposition \ref{prop2.4}). Therefore we have $U(1_{E \setminus A}) =0$ on $A$ and $U(1_{E \setminus A}) > 0$ on $E \setminus A$. If $m(A) > 0$ then by hypothesis iii) we get $U(1_{E \setminus A}) = 0$ $m$-a.e. and therefore $m(E \setminus A) = 0$. i) $\Rightarrow$ iv). Let $u \in \mathcal{E}(\mathcal{U})$ be such that $\int_{E} u dm > 0$. We may assume that $u \leq 1$ $m$-a.e. and notice that if $v \in \mathcal{S}(\mathcal{U})$ and $v \leq u$ $m$-a.e. then (cf. Proposition \ref{prop2.3}) there exists $w \in b\mathcal{S}(\mathcal{U})$ such that $u = v + w$ $m$-a.e.
Let $G \in \mathcal{B}$ such that $m(G) > 0$. We claim that \[ R_0(1_{G}u) = u \quad m\mbox{-a.e.} \] Indeed, if $w \in b\mathcal{S}(\mathcal{U})$ is such that $u = R_0(1_G u) + w$ $m$-a.e., because $R_0(1_G u) = u$ on $G$ we get that $[w = 0] \supset G$ $m$-a.e., hence $m([w = 0]) \geq m(G) > 0$. Since $\mathcal{U}$ is $m$-irreducible we conclude that $w = 0$ $m$-a.e. For every $\alpha \in (0, 1]$ we consider the set $G_{\alpha} \in \mathcal{B}$ defined by $G_{\alpha} := [u > \alpha].$ By the above considerations we deduce that if $m(G_{\alpha}) > 0$ then $m$-a.e. we have $R_0(1_{G_{\alpha}} u) = u$ and $R_0 1_{G_{\alpha}} = 1$. From $\alpha \leq u$ on $G_{\alpha}$ it follows that $\alpha \leq u$ $m$-a.e. on $E$. Let further $\alpha_0 := \sup\{ \alpha > 0 : m(G_{\alpha}) > 0 \}.$ Then $u \geq \alpha_0$ $m$-a.e. and $m(G_{\alpha}) = 0$ for every $\alpha > \alpha_0$, hence $u = \alpha_0$ $m$-a.e. iv) $\Rightarrow$ i). By Proposition \ref{prop2.3} it follows clearly that $\mathcal{U}$ is $m$-recurrent. If $A$ is $m$-absorbing then by Proposition \ref{prop2.4} there exists $B \in \mathcal{B}$ such that $A = B$ $m$-a.e. and $1_{E \setminus B} \in \mathcal{E}(\mathcal{U})$. Since by hypothesis the function $1_{E \setminus B}$ must be $m$-a.e. constant, we get that either $m(B) = 0$ or $m(E \setminus B) = 0$. Therefore $\mathcal{U}$ is $m$-irreducible. \end{proof} \noindent {\bf Transience, recurrence, and irreducibility of a right process.} In this subsection $\mathcal{U}$ is the resolvent of a right (Markov) process $X=(\Omega, \mathcal{F}, \mathcal{F}_t, X_t, P^x)$ with values in $E$, and $m$ is a sub-invariant $\sigma$-finite measure. \begin{defi*} (cf. \cite{Ge80}) i) The resolvent $\mathcal{U}$ (or the process $X$) is called {\it transient} provided there exists a strictly positive Borel measurable function $f$ such that $Uf <\infty$. ii) The resolvent $\mathcal{U}$ (or the process $X$) is called {\it recurrent} if $U1_B=0$ or $U1_B=\infty$ for all $B \in \mathcal{B}$.
\end{defi*} \begin{rem} i) By \cite{Ge80}, Proposition 2.2 and Proposition 2.4, the following probabilistic characterizations hold: i.1) $\mathcal{U}$ is transient if and only if there exists a sequence of Borel finely open sets $(B_n)_{n \geq 1}$ increasing to $E$ such that the last exit time of $B_n$ is finite $P^x$-a.s. for all $x \in E$. i.2) $\mathcal{U}$ is recurrent if and only if any excessive function is constant, and furthermore, if and only if the last exit time of any finely open set is infinite almost surely. ii) Following the lines of \cite{FuOsTa11}, Lemma 4.8.1 one can show that recurrence as defined above is, as a matter of fact, equivalent to the (apparently stronger) so-called Harris recurrence: $\int\limits^\infty_0 1_B(X_s)ds = \infty$ $P^x$-a.s. for all $x \in E$ whenever $B \in \mathcal{B}$ with $U(1_B) > 0$. \end{rem} Recall that a set $A \in \mathcal{B}$ is called {\it absorbing} if there exists an excessive function $v \in p\mathcal{B}$ such that $A=[v=0]$. We remark that $A$ is absorbing if and only if $1_{E \setminus A}$ is excessive, and if and only if there exists an excessive function $v \in p\mathcal{B}$ such that $A=[v < \infty]$. If $B \in \mathcal{B}$ is $m$-negligible such that $E \setminus B$ is absorbing then the set $B$ is called $m${\it -inessential}. As in \cite{BeRo11}, Section 3, if $A \in \mathcal{B}$ such that $E \setminus A$ is $m$-inessential, then we may consider the following two modifications of $\mathcal{U}$: - the {\it restriction} $\mathcal{U}{'}$ of $\mathcal{U}$ on $A$, i.e. the sub-Markovian resolvent of kernels on $(A, \mathcal{B}|_A)$ defined as: \[ U'_\alpha f = U_\alpha \overline{f} |_A \; \; \; \mbox{for all } f \in p\mathcal{B}|_A, \] where $\overline{f} \in p\mathcal{B}$ is such that $\overline{f}|_A = f$.
- the {\it ($1$-order) trivial modification of $\mathcal{U}$ on $A$}, namely the sub-Markovian resolvent $\mathcal{U}^A = (U^A_\alpha)_{\alpha > 0}$ on $(E, \mathcal{B})$ defined by \[ U^A_\alpha f = 1_A U_\alpha (f 1_A) + \frac{1}{1+\alpha} f 1_{E \setminus A}, \; \; \; \alpha > 0, f \in p\mathcal{B}. \] Then both of the above resolvents induced by $\mathcal{U}$ and $A$ are the resolvents of some right processes with state spaces $(A, \mathcal{B}|_A)$ and $(E, \mathcal{B})$, respectively. \begin{rem} \label{rem 2.1.15} i) $\mathcal{U}^A$ is an $m$-version of $\mathcal{U}$, that is $U^A_\alpha f = U_\alpha f$, $\alpha > 0$, $m$-a.e. for all $f \in p\mathcal{B}$. ii) $\mathcal{U}$ is $m$-transient, $m$-recurrent, or $m$-irreducible if and only if $\mathcal{U}{'}$, and hence $\mathcal{U}^A$, is $m$-transient, $m$-recurrent, or $m$-irreducible, respectively. \end{rem} \begin{prop} \label{prop 2.1.14} The following assertions are equivalent. i) $\mathcal{U}$ is $m$-transient. ii) There exists a Borel set $A$ such that $E \setminus A$ is $m$-inessential and the $1$-order trivial modification $\mathcal{U}^A$ is transient. \end{prop} \begin{proof} Since the implication ii) $\Rightarrow$ i) is clear, we prove only the converse. Let $f_0 \in p\mathcal{B}$ be such that $m$-a.e. we have that $f_0 > 0$ and $Uf_0 < \infty$. Clearly, we may assume that $f_0 > 0$ on $E$. If $A:= [Uf_0 < \infty]$ then $m(E \setminus A) = 0$, hence $E \setminus A$ is $m$-inessential. Finally, if $f_1 \in p\mathcal{B}$ is such that $f_1 = f_0$ on $A$ and $f_1 = 1$ on $E \setminus A$, then $U^A f_1 = 1_A U(f_0 1_A) + f_{1}1_{E \setminus A} < \infty$ on $E$. \end{proof} We say that $\mathcal{U}$ is {\it irreducible} if for any absorbing set $A \in \mathcal{B}$ we have either $A = \emptyset$ or $A = E$. \begin{prop} \label{prop 2.1.16} The following assertions are equivalent. i) $\mathcal{U}$ is irreducible. ii) For every $\alpha \geq 0$, each $\alpha$-sub-invariant measure $\mu$ is a reference measure.
iii) $\mathcal{U}$ is $m$-irreducible and $m$ is a reference measure. \end{prop} \begin{proof} i) $\Rightarrow$ ii). Let $\mu$ be an $\alpha$-sub-invariant measure and $A \in \mathcal{B}$ a $\mu$-negligible set. Then the set $[U_\alpha 1_A = 0]$ is absorbing and its complement is $\mu$-negligible, and because $\mathcal{U}$ is irreducible we get that $U_\alpha 1_A = 0$. ii) $\Rightarrow$ iii). Clearly, we only have to check that $\mathcal{U}$ is $m$-irreducible. If $A \in \mathcal{B}$ is $m$-absorbing such that $m(A) > 0$ then there exists $x \in A$ such that $U_\alpha 1_{E \setminus A}(x) = 0$. Since the measure $\delta_x \circ U_\alpha$ is a reference measure (as an $\alpha$-sub-invariant measure) it follows that $U_\alpha 1_{E \setminus A} = 0$. But by Proposition \ref{prop 2.2} there exists an excessive $m$-version $v$ of $1_{E \setminus A}$. Consequently, $v= \sup\limits_{\alpha} \alpha U_\alpha v = \sup\limits_{\alpha} \alpha U_\alpha 1_{E \setminus A} = 0$, hence $m(E \setminus A) = 0$. iii) $\Rightarrow$ i). Let $v \in p\mathcal{B}$ be an excessive function and $A:=[v=0]$. In particular, we have that $A$ is $m$-absorbing. If $m(A)=0$ then $U_\alpha 1_A =0$, hence $1_{E\setminus A} \geq \alpha U_\alpha 1_{E\setminus A}=\alpha U_\alpha 1 \nearrow 1$. It follows that $A = \emptyset$. Now assume that $m(E \setminus A)=0$, so that $U_\alpha 1_{E \setminus A} =0$. Because $E \setminus A$ is finely open, by \cite{BeBo04}, Proposition 1.3.2 we have that $\liminf\limits_{\alpha \to \infty} \alpha U_\alpha 1_{E \setminus A} = 1$ on $E \setminus A$. In conclusion, $E \setminus A= \emptyset$. \end{proof} Let $\mu$ be a $\sigma$-finite measure on $(E, \mathcal{B})$.
As in \cite{BeBo97}, we say that the $\mu${\it -quasi-Lindel\"of property holds} (for the {\it fine topology} on $E$, which is the coarsest topology on $E$ making all $\alpha$-excessive functions continuous) if: for any collection $\mathcal{G}$ of finely open Borel subsets of $E$ there exists a countable subcollection $(G_k)_{k \in \mathbb{N}}$ such that the set $\bigcup\limits_{G \in \mathcal{G}}G \setminus \bigcup\limits_{k \in \mathbb{N}} G_k$ is $\mu$-semipolar. If $\bigcup\limits_{k \in \mathbb{N}} G_k$ differs from $\bigcup\limits_{G \in \mathcal{G}}G$ by a semipolar set, then we say that the {\it quasi-Lindel\"of property} holds. \begin{rem} It is known that the quasi-Lindel\"of property holds if and only if $\mathcal{U}$ possesses a {\it reference} measure (i.e. there exists a $\sigma$-finite measure $\lambda$ such that $Uf=0$ whenever $\lambda(f)=0$ for $f \in p\mathcal{B}$). Also, the $m$-quasi-Lindel\"of property holds if and only if there exists a set $A \in \mathcal{B}$ such that $E \setminus A$ is $m$-inessential and the restriction of $m$ to $A$ is a reference measure for the restriction of $\mathcal{U}$ on $A$ (see \cite{BeBo97}, Section 3, and the references therein). We reiterate that $m$ is a sub-invariant measure, fixed at the beginning of this subsection. \end{rem} \begin{prop} \label{prop 2.1.17} The following assertions are equivalent. i) The $m$-quasi-Lindel\"of property holds for $\mathcal{U}$ and $\mathcal{U}$ is $m$-irreducible. ii) There exists a Borel set $A$ such that $E \setminus A$ is $m$-inessential and the restriction of $\mathcal{U}$ to $A$ is irreducible. \end{prop} \begin{proof} i) $\Rightarrow$ ii). If the $m$-quasi-Lindel\"of property holds for $\mathcal{U}$ then, by \cite{BeBo97}, Theorem 3.1, there exists a Borel set $A$ such that $E \setminus A$ is $m$-inessential and $m$ is a reference measure for $\mathcal{U}{'}$. But $\mathcal{U}{'}$ is $m$-irreducible, so assertion ii) follows by Proposition \ref{prop 2.1.16}.
The implication ii) $\Rightarrow$ i) follows by Proposition \ref{prop 2.1.16} and \cite{BeBo97}, Theorem 3.1. \end{proof} The next result is a version of Lemma 2.1 from \cite{BeBoRo06a}. \begin{lem} \label{lem 2.1.15} If $E_0 \in \mathcal{B}$ is finely closed and $m(E \setminus E_0) = 0$ then there exists a set $F \subset E_0$ such that $E \setminus F$ is $m$-inessential. \end{lem} \begin{proof} Let $(E_n)_{n \geq 0} \subset \mathcal{B}$ be the sequence defined inductively, starting from $E_0$, by $E_{n+1} = E_n \cap [U(1_{E \setminus E_n}) = 0]$, $n \geq 0$. If $F := \bigcap\limits_n E_n$ then $\mathcal{B} \ni F \subset E_0$, $m(E \setminus F) = 0$, and $U1_{E \setminus F}=0$ on $F$. Moreover, $F$ is finely closed, as an intersection of finely closed sets. Therefore, the function $1_{E \setminus F}$ is supermedian and finely lower semicontinuous. By \cite{BeBo04}, Corollary 1.3.4 we get that $1_{E \setminus F}$ is excessive. Clearly, $E \setminus F$ is $m$-inessential. \end{proof} \begin{prop} \label{prop 2.1.18} The following assertions are equivalent. i) The $m$-quasi-Lindel\"of property holds for $\mathcal{U}$ and $\mathcal{U}$ is $m$-recurrent and $m$-irreducible. ii) There exists a Borel set $A$ such that $E \setminus A$ is $m$-inessential and the restriction of $\mathcal{U}$ to $A$ is recurrent. \end{prop} \begin{proof} i) $\Rightarrow$ ii). By Proposition \ref{prop 2.1.17} and Remark \ref{rem 2.1.15} there exists a Borel set $A$ such that $E \setminus A$ is $m$-inessential and the restriction $\mathcal{U}{'}$ is irreducible and $m$-recurrent. Therefore, if $B \in \mathcal{B}|_A$ then $U{'} 1_B = 0$ or $U{'} 1_B = \infty$, $m$-a.e. In the first case, let $E_0:=[U{'} 1_B = 0]$, so that $m(A \setminus E_0)=0$. From Lemma \ref{lem 2.1.15} there exists a non-empty absorbing set $F \subset E_0$. Consequently, $E_0 = A$. The other case is similar. ii) $\Rightarrow$ i).
Since $m(E \setminus A) = 0$, by Remark \ref{rem 2.1.15} it follows that all $\mathcal{U}$-excessive functions are constant $m$-a.e., and by Proposition \ref{prop 2.9} we obtain that $\mathcal{U}$ is $m$-recurrent and $m$-irreducible. The $m$-quasi-Lindel\"of property follows by Proposition \ref{prop 2.1.17}. \end{proof} \noindent {\bf Irreducibility and invariance.} As in \cite{AlKoRo97a}, a real-valued function $v \in \bigcup\limits_{1 \leq p \leq \infty} L^p(E, m)$ is called {\it $\mathcal{U}$-invariant} (with respect to $m$) provided that for all $\alpha > 0$ and $f \in bp\mathcal{B}$ we have \[ U_{\alpha}(vf) = v U_{\alpha} f \quad m\mbox{-a.e.} \] A set $A \in \mathcal{B}$ is called {\it $\mathcal{U}$-invariant} if the function $1_A$ is $\mathcal{U}$-invariant. It is easy to check that the collection of all $\mathcal{U}$-invariant sets is a $\sigma$-algebra. \begin{rem} \label{rem 2.14} Let $v$ be a $\mathcal{U}$-invariant function. i) If $u$ is a $\mathcal{B}$-measurable real-valued function and $u = v$ $m$-a.e. then $u$ is also $\mathcal{U}$-invariant. ii) If $v \geq 0$ then there exists a $\mathcal{U}$-excessive function $u$ such that $u = v$ $m$-a.e. If in addition $\alpha U_{\alpha} 1 = 1$ $m$-a.e. then $\alpha U_{\alpha} v = v$ $m$-a.e. Indeed, the assertion follows since $\alpha U_{\alpha} v = v\alpha U_{\alpha} 1 \leq v$ $m$-a.e. \end{rem} For every $p \in [1, \infty]$ let $\mathcal{A}_p$ be the set of all $\mathcal{U}$-invariant functions from $L^p(E, m)$. \begin{prop} \label{prop 2.15} The set $\mathcal{A}_p$, $1 \leq p \leq \infty$, is a vector lattice with respect to the pointwise order relation. \end{prop} \begin{proof} It is clear that $\mathcal{A}_p$ is a vector space. If $u \in \mathcal{A}_p$, $\alpha > 0$ and $f \in bp\mathcal{B}$ then we have $m$-a.e. \[ U_{\alpha}(u^{+}f) = U_{\alpha}(1_{[u>0]}u f) = uU_{\alpha}(1_{[u > 0]}f) \leq u^{+}U_{\alpha} f.
\] Consequently we have also $U_{\alpha}(u^{-} f) \leq u^{-}U_{\alpha} f$ and therefore \[ U_{\alpha}(|u|f) \leq |u|U_{\alpha} f \quad m\mbox{-a.e.} \] On the other hand we have $m$-a.e. \[ \pm u U_{\alpha} f = U_{\alpha} (\pm u f) \leq U_{\alpha}(|u|f), \] and thus $|u|U_{\alpha}f \leq U_{\alpha}(|u|f)$, hence $|u|\in \mathcal{A}_p$. \end{proof} \begin{prop} \label{prop 2.16} The following assertions are equivalent for a real-valued function $u \in L^p(E, m)$. i) $u$ is $\mathcal{U}$-invariant. ii) $u$ is $\mathcal{U}^{\ast}$-invariant. iii) For all $f, g\in bp\mathcal{B} \cap L^{p'}(E, m)$ and $\alpha > 0$ we have \[ \int f u U_{\alpha}^{\ast} g dm = \int g u U_{\alpha} f dm. \] \end{prop} \begin{proof} Notice that $u \in \mathcal{A}_p$ if and only if for all $f,g \in bp\mathcal{B} \cap L^{p'}(E, m)$ we have \[ \int g U_{\alpha}(uf) dm = \int gu U_{\alpha} f dm. \] The equivalence i) $\Leftrightarrow$ iii) follows now since \[ \int g U_{\alpha}(uf) dm = \int f u U_{\alpha}^{\ast} g dm. \] We have also i) $\Leftrightarrow$ ii) since property iii) is the same for $\mathcal{U}$ and $\mathcal{U}^{\ast}$. \end{proof} \begin{coro} \label{coro 2.17} If $A \in \mathcal{B}$ then the following assertions are equivalent. i) The function $1_A$ is $\mathcal{U}$-invariant. ii) The sets $A$ and $E \setminus A$ are both $\mathcal{U}$-absorbing. iii) There exists a function $s \in \mathcal{E}(\mathcal{U}) \cap \mathcal{E}(\mathcal{U}^\ast)$ such that $A = [s=0]$ $m$-a.e. iv) There exists a $\mathcal{U}$-invariant function $s$ such that $A = [s=0]$ $m$-a.e. \end{coro} The next main theorem collects several characterizations of invariance and also shows that, like absorbance, invariance is determined by only one operator $U_\alpha$. Let \[ \mathcal{I}_p : = \{ u \in L^p(E, m) : \alpha U_\alpha u = u \; m\mbox{-a.e.}, \; \alpha > 0 \}. \] \begin{thm} \label{thm 2.19} Let $u \in L^p(E, m)$, $1 \leq p < \infty $ and consider the following conditions.
i) $\alpha U_{\alpha}u = u$ $m$-a.e. for one (and therefore for all) $\alpha > 0$. ii) $\alpha U_{\alpha}^\ast u = u$ $m$-a.e., $\alpha > 0$. iii) The function $u$ is $\mathcal{U}$-invariant. iv) $U_\alpha u = u U_\alpha 1$ and $ U_\alpha^\ast u = u U_\alpha^\ast 1$ $m$-a.e. for one (and therefore for all) $\alpha > 0$. v) The function $u$ is measurable w.r.t. the $\sigma$-algebra of all $\mathcal{U}$-invariant sets. Then $\mathcal{I}_p$ is a vector lattice w.r.t. the pointwise order relation and i) $\Leftrightarrow$ ii) $\Rightarrow$ iii) $\Leftrightarrow$ iv) $\Leftrightarrow$ v). If $\alpha U_\alpha 1 = 1$ or $\alpha U^\ast_\alpha 1 = 1$ $m$-a.e. then assertions i) - v) are equivalent. If $m(E) < \infty$ and $p = \infty$ then all of the statements above are still true. \end{thm} \begin{proof} i) $\Leftrightarrow$ ii) $\Rightarrow$ iii) and $\mathcal{I}_p$ is a vector lattice. It is clear that $\mathcal{I}_p$ is a vector space. If $u \in \mathcal{I}_p$ and $c$ is a positive real number then $m$-a.e. $\alpha U_\alpha (u-c)^+ \geq \alpha U_\alpha (u-c) \geq u - c$, hence $\alpha U_\alpha (u-c)^+ \geq (u-c)^+$ and by H\"older's inequality we get $m$-a.e. \[ \alpha U_\alpha ((u-c)^+)^p \geq \alpha^p U_\alpha ((u-c)^+)^p (U_\alpha 1)^{p-1} \geq (\alpha U_\alpha (u-c)^+)^p \geq ((u-c)^+)^p. \] Since $(u-c)^+ \in L^p(E, m)$ we have \[ \int{((u-c)^+)^p dm} \leq \int{\alpha U_\alpha ((u-c)^+)^p dm} = \int{((u-c)^+)^p \alpha U_\alpha^\ast 1 dm} \leq \int{((u-c)^+)^p dm}, \] therefore $m$-a.e. \begin{equation} \label{eq 2.2} \alpha U_\alpha ((u-c)^+)^p = ((u-c)^+)^p, \alpha U_\alpha (u-c)^+ = (u-c)^+ \;{\rm and} \; \alpha U_\alpha (u-c)^- \leq (u-c)^-. \end{equation} If we take $c = 0$ in the second relation in (\ref{eq 2.2}) we obtain that $\mathcal{I}_p$ is a vector lattice, hence we may assume that $u$ is positive.
Furthermore, by Proposition \ref{prop2.4} it follows that the sets $[u \leq c] = [(u-c)^+ = 0]$ and $[ u \geq c] = [(u-c)^- = 0]$ are $\mathcal{U}$-absorbing for any $c \in \mathbb{R}_+$. Because $[u > c] = \bigcap\limits_{n=1}^\infty [u \geq c + \frac{1}{n}]$, from Corollary \ref{coro 2.7} and Corollary \ref{coro 2.17} we obtain that $1_{[u \leq c]}$ and consequently $1_{[b < u \leq c]}$ are $\mathcal{U}$-invariant for every $b, c \in \mathbb{R}_+^\ast$. By approximating $u$ with linear combinations of functions of type $1_{[b < u \leq c]}$ and using monotone convergence we deduce that $u$ is $\mathcal{U}$-invariant and the implication i) $\Rightarrow$ iii) is proved. We continue by showing that i) implies ii) (the converse follows by duality). For simplicity, let us generically write $A : = [b < u \leq c] \subset [u > 0]$, $b, c \in \mathbb{R}_+^\ast$ and recall that $u \in \mathcal{I}_p$ and $1_A$ are $\mathcal{U}$-invariant. In particular, we have $m$-a.e. that $U_\alpha u = u U_\alpha 1$, $\alpha U_\alpha 1 = 1$ on $[u > 0] \supset A$, $\alpha U^\ast_\alpha 1_A \leq 1_A$ and the function $1_A$ is integrable. Then \[ \int{1_A dm} \geq \int{\alpha U_\alpha^\ast 1_A dm} = \int{1_A \alpha U_\alpha 1 dm} = \int{1_A dm}, \] hence $\alpha U_\alpha^\ast 1_A = 1_A$ $m$-a.e. and again by approximating with step functions we conclude that $\alpha U_\alpha^\ast u = u$ $m$-a.e. Clearly iii) implies iv). iv) $\Rightarrow $ v). Assume that $u$ satisfies iv) for one $\alpha > 0$. If $c$ is a positive real number then $(u-c)^+ \in L^p(E, m)$ and $U_\alpha (u-c)^+ \geq U_\alpha (u-c) = (u-c) U_\alpha 1$, hence $U_\alpha (u-c)^+ \geq (u-c)^+ U_\alpha 1$ $m$-a.e. Moreover, since $U_\alpha ((u-c)^+)^p (U_\alpha 1)^{p-1} \geq (U_\alpha (u-c)^+)^p \geq ((u-c)^+)^p (U_\alpha 1)^p$ we have that $U_\alpha ((u-c)^+)^p \geq ((u-c)^+)^p U_\alpha 1$ $m$-a.e.
Analogously, we get $U_\alpha^\ast (u-c)^+ \geq (u-c)^+ U_\alpha^\ast 1$ and $U_\alpha^\ast ((u-c)^+)^p \geq ((u-c)^+)^p U_\alpha^\ast 1$ $m$-a.e. Then \[ \int{U_\alpha ((u-c)^+)^p +U_\alpha^\ast ((u-c)^+)^p dm} = \int{((u-c)^+)^p (U_\alpha^\ast 1 + U_\alpha 1) dm} \leq \] \[ \leq \int{U_\alpha ((u-c)^+)^p +U_\alpha^\ast ((u-c)^+)^p dm} \] which implies $m$-a.e. \begin{equation} \label{eq 2.3} U_\alpha (u-c)^+ = (u-c)^+ U_\alpha 1 \; {\rm and} \; U_\alpha^\ast (u-c)^+ = (u-c)^+ U_\alpha^\ast 1. \end{equation} Then $U_\alpha \inf (n(u-c)^+, 1) \leq \inf ( n(u-c)^+, 1) U_\alpha 1$ $m$-a.e. and letting $n$ tend to infinity we get $ U_\alpha 1_{[u > c]} \leq 1_{[u > c]} U_\alpha 1$ and analogously, $ U^\ast_\alpha 1_{[u > c]} \leq 1_{[u > c]} U^\ast_\alpha 1$ $m$-a.e. By Remark \ref{rem 2.5}, b) it follows that $[u \leq c]$ is $\mathcal{U}$-invariant. Taking $c = 0$ in (\ref{eq 2.3}) we obtain that the set of functions satisfying condition iv) is a vector lattice w.r.t. the pointwise order relation, so we may assume that $u$ is positive. It follows that condition v) holds. v) $\Rightarrow$ iii). Since $\mathcal{A}_p$ is a lattice we may assume that $u$ is positive. If $u$ satisfies v) then it can be approximated by an increasing sequence of invariant simple functions and by monotone convergence we conclude that $u$ is $\mathcal{U}$-invariant. Finally, if $\alpha U_\alpha 1 = 1$ or $\alpha U^\ast_\alpha 1 = 1$ $m$-a.e. then $u \in \mathcal{A}_p$ if and only if $u \in \mathcal{I}_p$ and all of the assertions are equivalent. \end{proof} \begin{rem} \label{rem 2.16} i) Similar characterizations of invariance, as in Theorem \ref{thm 2.19} but in the recurrent case and for functions which are bounded or integrable with bounded negative parts, were already obtained in \cite{Sc04}; it is shown there, too, that in terms of semigroups (under a strong analyticity assumption) absorbance and invariance are determined by only one operator.
ii) If $u \in L^p(E, m)$, $1 \leq p < \infty$ is in $\mathcal{I}_p$ (resp. is $\mathcal{U}$-invariant) then $\inf{(u, c)}$ is in $\mathcal{I}_p$ (resp. is $\mathcal{U}$-invariant) for all positive real numbers $c$. This is true by relation (\ref{eq 2.2}) (resp. (\ref{eq 2.3})) (see the proof of Theorem \ref{thm 2.19}) and the fact that $\inf{(u, c)} = u - (u - c)^+$. iii) A set $A \in \mathcal{B}$ is $\mathcal{U}$-invariant if and only if $U_\alpha 1_A = 1_A U_\alpha 1$, since the last equality implies that $U_\alpha 1_{E \setminus A} = 1_{E \setminus A} U_\alpha 1$ $m$-a.e., hence $A$ and $E \setminus A$ are $\mathcal{U}$-absorbing. However, if $u \in L^p(E, m)$, $1 \leq p \leq \infty$, we do not know if $U_\alpha u = u U_\alpha 1$ $m$-a.e. (without assuming $ U_\alpha^\ast u = u U_\alpha^\ast 1$ $m$-a.e. as in condition iv) of the above theorem) is enough for $u$ to be $\mathcal{U}$-invariant. \end{rem} If $p=\infty=m(E)$ then we have the following version of Theorem \ref{thm 2.19}. \begin{prop} \label{prop 2.17} If $\mathcal{U}$ is $m$-recurrent then the following assertions are equivalent for a function $u \in L^\infty (E, m)$. i) $\alpha U_\alpha u = u$ $m$-a.e., $\alpha > 0$. ii) $\alpha U^\ast_\alpha u = u$ $m$-a.e., $\alpha > 0$. iii) The function $ u $ is $\mathcal{U}$-invariant. iv) The function $ u $ is measurable w.r.t. the $\sigma$-algebra of all $\mathcal{U}$-invariant sets. \end{prop} \begin{proof} The equivalence i) $\Leftrightarrow$ iv) follows by \cite{Sc04}, Corollary 21. Also, from Proposition \ref{prop2.3} we have that $\mathcal{U}^\ast$ is $m$-recurrent, hence ii) $\Leftrightarrow$ iv). The implication iv) $\Rightarrow$ iii) is obtained by approximating with simple functions, and since iii) $\Rightarrow$ i) is clear, the proof is complete. \end{proof} However, we recall that if $m(E)<\infty$ then Proposition \ref{prop 2.17} is just a particular case of Theorem \ref{thm 2.19}.
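To fix ideas, we include an elementary finite illustration of the preceding equivalences (a toy example, not taken from the cited references). Let $E = \{1, 2, 3, 4\}$, let $m$ be the counting measure and let $U_\alpha := (\alpha I - Q)^{-1}$, $\alpha > 0$, be the Markovian resolvent generated by the block-diagonal rate matrix
\[
Q = \begin{pmatrix} -1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix}.
\]
Since $Q$ is symmetric with vanishing row sums, $m$ is sub-invariant and $\alpha U_\alpha 1 = 1$. Moreover, $\alpha U_\alpha u = u$ if and only if $Qu = 0$, that is, if and only if $u$ is constant on each of the blocks $\{1,2\}$ and $\{3,4\}$; accordingly, the $\mathcal{U}$-invariant sets are precisely $\emptyset$, $\{1,2\}$, $\{3,4\}$, and $E$, in agreement with the equivalence of conditions i), iii), and v) in Theorem \ref{thm 2.19}. In particular, this resolvent is not $m$-irreducible, and the bounded $\mathcal{U}$-invariant functions are exactly those which are constant on the two blocks.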
The next proposition shows that in condition iv) of Theorem \ref{thm 2.19} we can put inequality instead of equality. \begin{prop} \label{prop 2.18} The following assertions are equivalent for a function $u \in L^p(E,m)$ such that $u^- \in L^1(E, m)$, $1 \leq p < \infty$. i) $U_\alpha u = u U_\alpha 1$ and $ U_\alpha^\ast u = u U_\alpha^\ast 1$ $m$-a.e., $\alpha > 0$. ii) $U_\alpha u \geq u U_\alpha 1$ and $U_\alpha^\ast u \geq u U_\alpha^\ast 1$ $m$-a.e., $\alpha > 0$. \end{prop} \begin{proof} Since the implication i) $\Rightarrow$ ii) is trivial we prove only the converse. If condition ii) holds for $u$ then it holds for $u^+$ too and $U_\alpha (u^+)^p (U_\alpha 1)^{p-1} \geq (U_\alpha u^+)^p \geq (u^+)^p (U_\alpha 1)^p$, hence $U_\alpha (u^+)^p \geq (u^+)^p U_\alpha 1$ $m$-a.e. Because the same relations hold for $\mathcal{U}^\ast$ we have \[ \int{U_\alpha (u^+)^p dm} = \int{(u^+)^p U_\alpha^\ast 1 dm} \leq \int{U_\alpha^\ast (u^+)^p dm} = \int{(u^+)^p U_\alpha 1 dm} \leq \int{U_\alpha (u^+)^p dm}. \] It follows that $u^+$ satisfies i), hence $U_\alpha u^- \leq u^- U_\alpha 1 \; {\rm and} \; U_\alpha^\ast u^- \leq u^- U_\alpha^\ast 1$ $m$-a.e. Then \[ \int{U_\alpha u^- dm} = \int {u^- U_\alpha^\ast 1 dm} \leq \int{U_\alpha^\ast u^- dm} = \int{u^- U_\alpha 1 dm} \leq \int{U_\alpha u^- dm}, \] thus condition i) is also verified by $u^-$, and hence by $u = u^+ - u^-$. \end{proof} \begin{prop} \label{prop 2.20} If $\mathcal{U}$ is $m$-recurrent or $m$-symmetric then a set $A \in \mathcal{B}$ is $\mathcal{U}$-absorbing if and only if it is $\mathcal{U}$-invariant. \end{prop} \begin{proof} The symmetric case follows by Corollary \ref{coro 2.17}. Assume that $\mathcal{U}$ is $m$-recurrent. If $A$ is $\mathcal{U}$-absorbing then by Proposition \ref{prop2.4} there exists $B \in \mathcal{B}$ such that $B = A$ $m$-a.e. and $1_{E \setminus B} \in \mathcal{E}(\mathcal{U})$. By Proposition \ref{prop2.3} we have that $\alpha U_\alpha 1_{E \setminus B} = 1_{E \setminus B}$ $m$-a.e., $\alpha > 0$.
To get that $A$ is $\mathcal{U}$-invariant we can simply apply Proposition \ref{prop 2.17} or notice that $ \alpha U_\alpha 1_A = \alpha U_\alpha 1 - \alpha U_\alpha 1_{E \setminus B} = 1_A $ $m$-a.e., hence $E \setminus A$ is $\mathcal{U}$-absorbing and the implication follows by Corollary \ref{coro 2.17}. The converse is clear. \end{proof} \begin{coro} \label{coro 2.21} Consider the following assertions. i) $\mathcal{U}$ is $m$-irreducible. ii) Every $L^p(E, m)$-integrable $\mathcal{U}$-invariant function is constant, $1 \leq p < \infty$. iii) Every bounded $\mathcal{U}$-invariant function is constant. Then i) $\Rightarrow$ ii) $\Leftarrow$ iii). If $m(E) < \infty$ then ii) $\Leftrightarrow$ iii). In addition, if $\mathcal{U}$ is $m$-symmetric then i) $\Leftrightarrow$ ii) $\Leftrightarrow$ iii). If $\mathcal{U}$ is $m$-recurrent then i) $\Leftrightarrow$ iii). \end{coro} \begin{proof} i) $ \Rightarrow $ ii). If $\mathcal{U}$ is $m$-irreducible then the $\sigma$-algebra of all $\mathcal{U}$-invariant sets is trivial and assertion ii) follows by Theorem \ref{thm 2.19}. The implication iii) $\Rightarrow$ ii) follows by Proposition \ref{prop 2.15} and Remark \ref{rem 2.16}, i). If $m(E) < \infty$ then the converse is clear. In addition, if $\mathcal{U}$ is $m$-symmetric then by Proposition \ref{prop 2.20} it follows that iii) $\Rightarrow$ i), hence all three assertions are equivalent. Assume now that $\mathcal{U}$ is $m$-recurrent. If iii) holds and $A$ is $\mathcal{U}$-absorbing then by Proposition \ref{prop 2.20} it follows that $1_A$ is $\mathcal{U}$-invariant and therefore it is constant $m$-a.e. Thus $\mathcal{U}$ is $m$-irreducible. Conversely, assume that i) is satisfied. Then there are no non-trivial $\mathcal{U}$-invariant sets and assertion iii) is deduced from Proposition \ref{prop 2.17}.
\end{proof} \section{Irreducibility and ergodicity of $L^p$-resolvents} In this section we study ergodic properties of a sub-Markovian resolvent of kernels under additional hypotheses such as transience and irreducible recurrence. Let $m$ be a $\sigma$-finite measure on $(E, \mathcal{B})$. Further we shall use the notation $(\cdot, \cdot)$ to express the duality between $L^p(E, m)$ and $L^{p'}(E, m)$ and $\| \cdot \|_p$ for the $L^p(E, m)$-norm; $p'$ is the conjugate exponent of $p$: $\frac{1}{p} + \frac{1}{p'} = 1$. We say that a resolvent family $\mathcal{U} = (U_\alpha)_{\alpha > 0}$ of operators on $L^p(E,m)$, $1<p<\infty$, is {\it ergodic} if the strong limit $\lim\limits_{\alpha \to 0}\alpha U_{\alpha}u$ exists for all $u \in L^p(E, m)$. The next theorem states a version, convenient for the present context, of the classical result concerning Abel-ergodicity of a pseudo-resolvent family $(U_\alpha)_{\alpha > 0}$ of operators on a locally convex space; cf. \cite{Yo80}, Chapter VIII, Section 4 and \cite{WaEr93}. For the moment we drop the sub-Markov property and work under the more general assumption that $\mathcal{U}$ is merely uniformly bounded. \begin{thm} \label{thm 3.1} Let $m$ be a $\sigma$-finite measure on $(E, \mathcal{B})$ and $\mathcal{U} = (U_\alpha)_{\alpha > 0}$ be a resolvent family of continuous linear operators on $L^p(E,m)$, $1<p<\infty$, such that $\|\alpha U_\alpha \|_p \leq M$ for all $\alpha > 0$ and for some positive constant $M < \infty$. Then $\mathcal{U}$ is ergodic. More precisely, for one (hence for all) $\beta > 0$ and all $u \in L^p(E,m)$ there exists $u' \in {\rm Ker}(\mathcal{I} - \beta U_\beta)$ such that \[ \mathop{\lim}\limits_{\alpha \to 0} \| \alpha U_{\alpha}u - u'\|_p = 0. \] \end{thm} \begin{proof} Step I.
We claim that for every $f \in L^p(E,m)$ there exists $\alpha_n \searrow 0$ such that $(\alpha_n U_{\alpha_n} f)_n$ is weakly convergent to some element from Ker$(\mathcal{I} - \beta U_\beta)$ and, as a consequence, that Ker$(\mathcal{I} - \beta U_\beta)$ separates Ker$(\mathcal{I} - \beta U^\ast_\beta)$, in the sense that if $v \in \mbox{Ker}(\mathcal{I} - \beta U^\ast_\beta)$ and $(u,v) = 0$ for all $u \in {\rm Ker}(\mathcal{I} - \beta U_\beta)$ then $v = 0$, where $ U^\ast_\beta$ is the adjoint operator of $U_\beta$ on $L^{p'}(E,m)$. To prove this, let $f \in L^p(E,m)$; since $(\alpha U_\alpha f)_{\alpha > 0}$ is bounded and $L^p(E,m)$ is reflexive, there exist $f' \in L^p(E,m)$ and $\alpha_n \downarrow 0$ such that $(g_n)_n := (\alpha_n U_{\alpha_n} f)_n$ is weakly convergent to $f'$. By passing to a subsequence we may assume that the sequence of Ces\`aro means $(\dfrac{1}{n}\mathop{\sum}\limits_{k =1}^{n}g_k)_n$ converges strongly to $f'$. Then \[ \beta U_\beta(\dfrac{1}{n}\mathop{\sum}\limits_{k =1}^{n}g_k) = \dfrac{1}{n}\mathop{\sum}\limits_{k=1}^{n}\alpha_k \beta U_\beta U_{\alpha_k}f = \dfrac{1}{n}\mathop{\sum}\limits_{k=1}^{n}\dfrac{\alpha_k \beta}{\beta - \alpha_k}(U_{\alpha_k} f - U_\beta f) \mathop\rightarrow\limits_{n} f'. \] It follows that $f' \in {\rm Ker}(\mathcal{I} - \beta U_\beta)$. Assume now that $v \in \mbox{Ker}(\mathcal{I} - \beta U^\ast_\beta)$ and $(u,v) = 0$ for all $u \in {\rm Ker}(\mathcal{I} - \beta U_\beta)$. If $f \in L^p(E, m)$ then by the first part of this proof there exist $f' \in {\rm Ker}(\mathcal{I} - \beta U_\beta)$ and $\alpha_n \downarrow 0$ such that $(\alpha_n U_{\alpha_n} f)_n$ is weakly convergent to $f'$. Then \[ (f,v) = (f, \alpha_n U^\ast_{\alpha_n} v) = (\alpha_n U_{\alpha_n}f, v) \mathop{\rightarrow}\limits_n (f', v) = 0. \] Since $f$ was arbitrarily chosen it follows that $v = 0$ and Ker$(\mathcal{I} - \beta U_\beta)$ separates Ker$(\mathcal{I} - \beta U^\ast_\beta)$. Step II. We show now that $(\alpha U_\alpha u)_{\alpha > 0}$ is strongly convergent for all $u \in L^p(E, m)$.
Choose $\beta > 0$ and consider the subspace \[ G := \mbox{Ker}(\mathcal{I} - \beta U_\beta) \oplus \left\{ \beta U_\beta f - f : f \in L^p(E, m) \right\} \] of $L^p(E,m)$ and take $v \in L^{p'}(E, m)$ such that $(u, v) = 0$ for all $u \in G$. Since $v$ is orthogonal to each element of the form $\beta U_\beta f - f$, $f \in L^p(E, m)$ we have that $v \in {\rm Ker}(\mathcal{I} - \beta U^\ast_\beta)$ and by Step I it follows that $v = 0$. This means that $G$ is dense in $L^p(E, m)$ and because $(\alpha U_\alpha)_{\alpha > 0}$ is uniformly bounded it is enough to prove that $(\alpha U_\alpha u)_{\alpha > 0}$ is strongly convergent for $u \in G$ and in fact, for elements of the type $\beta U_\beta f - f$, $f \in L^p(E, m)$. By the resolvent equation we have \[ \|\alpha U_\alpha (\beta U_\beta f - f)\|_p = \alpha \| \alpha U_\alpha U_\beta f - U_\beta f \|_p \leq \alpha \frac{M+M^2}{\beta}\|f\|_p \mathop{\rightarrow}\limits_{\alpha \to 0} 0 \] for all $f \in L^p(E, m)$ and Step II is complete. It is clear now that Step I and Step II prove the theorem. \end{proof} \begin{rem} i). Recall that if $(T_t)_{t \geq 0}$ is a uniformly bounded strongly continuous semigroup on a reflexive Banach space then the Ces\`aro means $\dfrac{1}{t}\int\limits_0^t{T_s ds}$ converge strongly, as $t$ tends to infinity, to the projection onto the null space of its generator. This property is known as {\it mean ergodicity} and its proof, for which we refer to \cite{EnNa99}, Chapter V, Theorem 4.5 and Example 4.7, follows the same lines as the one of Theorem \ref{thm 3.1}. We emphasize that if a strongly continuous semigroup is uniformly bounded then so is its corresponding resolvent, but the converse is not true in general, so the assumption on the boundedness of the resolvent is weaker. At the same time, it is natural that the resolvent gives more information about the semigroup than its integral means do. ii).
If $(T_t)_{t\geq 0}$ is the transition function of a right Markov process on $(E,\mathcal{B})$ which is $m$-recurrent, then by \cite{Fi98}, Theorem 1.1, the following {\it quasi-sure} form of Theorem \ref{thm 3.1} holds: let $\Sigma$ be the $\sigma$-algebra of all $m$-invariant sets and set $\mu := q \cdot m$ with $q>0$ and $m(q)=1$. Then, for all measurable functions $f$ and $g \geq 0$ from $L^1(m)$, there exists an $m$-polar set $B\in \mathcal{B}$ such that $$ \lim\limits_{t \to \infty}\frac{\int\limits_0^tT_s f(x) ds}{\int\limits_0^tT_s g(x) ds} = \frac{\mu(f\slash q \mid \Sigma)}{\mu(g\slash q \mid \Sigma)} $$ for all $x \in [Ug>0] \setminus B$; for the corresponding statement in terms of resolvents see Theorem 6.1 from \cite{Fi98}. This result is a generalization of its $m$-semipolar version proved in \cite{Fu74}, Theorem 3.1, for standard Markov processes in duality (see also \cite{Sh76} and \cite{Sh77}), and, as a matter of fact, it is the quasi-sure refinement in continuous time of the well-known ergodic result of Chacon and Ornstein, \cite{ChOr60}. \end{rem} In view of Theorem \ref{thm 2.19} we would like to give better insight into Theorem \ref{thm 3.1} in the situation that $\mathcal{U}$ is a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$ such that $\mathcal{E}(\mathcal{U}_\beta)$ is min-stable, contains the positive constant functions, and generates $\mathcal{B}$, and $m$ is a $\sigma$-finite sub-invariant measure, or equivalently, according to the discussion in the beginning of Section 2, that $\mathcal{U}$ (and hence $\mathcal{U}^\ast$) is a strongly continuous sub-Markovian resolvent of contractive operators on $L^p(E, m)$, $1 < p < \infty$. For every $1 < p < \infty$ we denote by $({\sf L}_p, D({\sf L}_p))$ the generator associated to $\mathcal{U}$ as a strongly continuous resolvent of operators on $L^p(E,m)$: \[ D({\sf L}_p) = U_{\alpha}(L^p(E, m)), \; \alpha >0, \; {\sf L}_p(U_{\alpha}f) = \alpha U_{\alpha} f - f, \; f\in L^p(E, m).
\] The corresponding generator associated to $\mathcal{U}^\ast$ is denoted by $({\sf L}^{\ast}_{p'}, D({\sf L}^{\ast}_{p'}))$. We point out that the adjoint operator of ${\sf L}_p$ is ${\sf L}^\ast_{p'}$ and not ${\sf L}^\ast_{p}$. The following corollary is a direct consequence of Theorem \ref{thm 2.19}. \begin{coro} \label{prop 3} Let $1 \leq p < \infty$. Then the following assertions hold. i) {\rm Ker}${\sf L}_p$ = {\rm Ker}${\sf L}^\ast_p$. ii) {\rm Ker}${\sf L}_p \cap L^{p'}(E, m) \subset {\rm Ker}{\sf L}^\ast_{p'}$. iii) If $u \in {\rm Ker}{\sf L}_p$ then $u$ is $\mathcal{U}$-invariant. iv) If $\alpha U_\alpha 1 = 1$ or $\alpha U^\ast_\alpha 1 = 1$ $m$-a.e. then the converse of iii) is also true for any $u \in L^p(E, m)$. \end{coro} In potential theoretical terms, Corollary \ref{prop 3}, i), states that the harmonic and coharmonic functions belonging to $L^{p}$ coincide. In combination with Theorem \ref{thm 3.1}, it means that for functions $u \in L^p(E,m)$, the limit of $\alpha U_\alpha u$, $\alpha \searrow 0$, produces both harmonic and coharmonic functions.\\ From now on we consider the same framework as in Section 2, that is, $\mathcal{U}=(U_\alpha)_{\alpha > 0}$ is a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$ and $m$ is a $\sigma$-finite sub-invariant measure with respect to $\mathcal{U}$. In particular, $\mathcal{U}$ becomes a resolvent family of contractive operators on $L^p(E,m)$ for all $1 < p < \infty$, hence $\mathcal{U}$ is ergodic in the sense of Theorem \ref{thm 3.1}. In the next two propositions we exploit the ergodic property of $\mathcal{U}$ under the additional assumptions of $m$-transience and $m$-irreducible recurrence, respectively. \begin{prop} \label{prop 3.4} Consider the following assertions. i) $\mathcal{U}$ is $m$-transient. ii) If $u \in L^p(E, m)$, $1 < p < \infty$, and $\alpha U_\alpha u = u$ then $u = 0$ $m$-a.e.
iii) For all $u \in L^p(E, m)$, $1 < p < \infty$ we have \[ \lim\limits_{\alpha \to 0} \| \alpha U_\alpha u \|_p = 0. \] Then i) $\Rightarrow$ ii) $\Leftrightarrow$ iii). If $m(E) < \infty$ and $\mathcal{U}$ is $m$-irreducible then i), ii), and iii) are equivalent. \end{prop} \begin{proof} i) $\Rightarrow$ iii). We may assume that $u$ is positive. If $u \in p\mathcal{B} \cap L^p(E, m) \cap L^1(E, m)$ then by Theorem \ref{thm 3.1} there exists $ u' \in L^p(E, m)$ such that $\alpha U_\alpha u' = u'$ and $\lim\limits_{\alpha \to 0} \| \alpha U_\alpha u - u' \|_p = 0 $. By Proposition \ref{prop2.1} we have $\alpha U_\alpha u \leq \alpha U u \mathop{\longrightarrow}\limits_{\alpha \to 0} 0$ $m$-a.e., therefore $u' = 0$ $m$-a.e. and $\lim\limits_{\alpha \to 0} \alpha U_\alpha u = 0$ in $ L^p(E, m) $. Let now $u \in p\mathcal{B} \cap L^p(E, m)$, let $u' \in L^p(E, m)$ be such that $\lim\limits_{\alpha \to 0} \|\alpha U_\alpha u - u' \|_p = 0$ (cf. Theorem \ref{thm 3.1}), and let $(u_n)_n \subset p\mathcal{B} \cap L^p(E, m) \cap L^1(E, m)$ be such that $\lim\limits_{n \to \infty} \| u_n - u \|_p = 0$. Then \[ \|u'\|_p = \lim\limits_{\alpha \to 0} \| \alpha U_\alpha u \|_p \leq \lim\limits_{\alpha \to 0} ( \| \alpha U_\alpha (u - u_n) \|_p + \| \alpha U_\alpha u_n \|_p) \leq \] \[ \leq \| u - u_n \|_p + \lim\limits_{\alpha \to 0} \| \alpha U_\alpha u_n \|_p = \| u - u_n \|_p \mathop{\longrightarrow}\limits_{n} 0 \] hence $u' = 0$ $m$-a.e. iii) $\Rightarrow$ ii). If $u \in L^p(E,m)$ is such that $\alpha U_\alpha u = u$ then by iii) we have $u = 0$ $m$-a.e. ii) $\Rightarrow$ iii). If $u \in L^p(E, m)$ then by Theorem \ref{thm 3.1} there exists $u' \in L^p(E,m)$ such that $\alpha U_\alpha u' = u'$ and $\lim\limits_{\alpha \to 0} \| \alpha U_\alpha u - u' \|_p = 0$. By ii) we have $u' = 0$ $m$-a.e. Assume now that $m(E) < \infty$ and $\mathcal{U}$ is $m$-irreducible.
Since $1 \in L^p(E,m)$, if iii) holds then $\alpha U_\alpha 1 $ converges to $0$ in $L^p(E, m)$ as $\alpha$ goes to $0$, hence $m$-a.e. along a subsequence, so $\mathcal{U}$ is not $m$-recurrent. By Proposition \ref{prop 2.5} it follows that $\mathcal{U}$ is $m$-transient. \end{proof} \begin{prop} \label{prop 3.5} Consider the following conditions. i) $\mathcal{U}$ is $m$-irreducible and $m$-recurrent. ii) If $u \in L^p(E,m)$ and $\alpha U_\alpha u = u$ then $u$ is constant. iii) For all $u \in L^p(E, m) $ we have \[ \mathop{\lim}\limits_{\alpha \to 0}\| \alpha U_{\alpha}u - c_u \|_p = 0, \] where $c_u$ is the constant defined by \[ c_u := \left\{\begin{array}{ll} \dfrac{\int u dm}{m(E)} &, \; {\rm if} \; m(E) < \infty \\ 0 &, \; {\rm if} \; m(E) = +\infty. \end{array} \right. \] Then i) $\Rightarrow$ iii) $\Rightarrow$ ii). If $m(E) = \infty$ then ii) $\Leftrightarrow$ iii). If $m(E) < \infty$ then i) $\Leftrightarrow$ iii). If in addition $\mathcal{U}$ is $m$-recurrent then i), ii), and iii) are equivalent. \end{prop} \begin{proof} i) $\Rightarrow$ iii). Let $u \in L^p(E, m)$. By Theorem \ref{thm 3.1} there exists $ u' \in L^p(E,m)$ such that $\alpha U_\alpha u' = u'$ and $\lim\limits_{\alpha \to 0} \| \alpha U_\alpha u - u' \|_p = 0 $. Furthermore, by Proposition \ref{prop 2.9} we have that $u'$ is constant. Clearly $u' = 0$ if $m(E) = \infty$. If $m(E) < \infty$ then $\int u' dm = \mathop{\lim}\limits_{\alpha \to 0}(\alpha U_{\alpha}u, 1) = \mathop{\lim}\limits_{\alpha \to 0}(u, \alpha U_{\alpha}^{\ast}1) = \int u dm$ and so $u' = \dfrac{\int u dm}{m(E)}$. The implication iii) $\Rightarrow$ ii) is clear. If $m(E) = \infty$ and ii) holds then for $u \in L^p(E, m)$ and $u'$ provided by Theorem \ref{thm 3.1} we have that $u'$ is constant and therefore $u' = 0$, hence iii) is satisfied. Assume now that $m(E) < \infty$.
Under assertion iii), if $u$ is a bounded excessive function then $\int u dm \geq \int \alpha U_\alpha u dm \mathop\to\limits_{\alpha \to 0} \int u dm$, hence $\alpha U_\alpha u = u$ $m$-a.e. and in fact $u = c_u$ $m$-a.e. By Proposition \ref{prop 2.9} we get that $\mathcal{U}$ is $m$-irreducible recurrent. If in addition $\mathcal{U}$ is $m$-recurrent and ii) holds, it follows once again that every bounded excessive function is constant and by Proposition \ref{prop 2.9} we conclude that $\mathcal{U}$ is $m$-irreducible, so that assertions i), ii), and iii) are equivalent. \end{proof} \begin{rem} \label{rem 3.6} a) If $m(E) < \infty$ then the $L^p$-ergodicity in assertion iii) of Proposition \ref{prop 3.4} and of Proposition \ref{prop 3.5}, respectively, for $p > 1$ implies also the $L^1$-ergodicity. This follows easily by the density of $L^1(E, m) \cap L^p(E, m)$ in $L^1(E, m)$. b) If $\mathcal{U}$ is the resolvent of an $m$-symmetric right process $X$, then by \cite{FuOsTa11}, Theorem 4.7.3, if $\mathcal{U}$ is $m$-irreducible and $m$-recurrent then for all Borel measurable and $m$-integrable functions $u$ we have $P_m$-a.s. and $P_x$-a.s. for q.e. $x \in E$ that \[ \lim\limits_{t \to \infty}\frac{1}{t}\int\limits_0^t u(X_s)ds = c_u. \] This pathwise ergodicity is entailed by a corresponding ergodicity in terms of shift invariance, for which we refer to Theorem 4.7.2 of the same monograph. \end{rem} \section{Extremality of invariant measures} As in the previous sections, we assume that $\mathcal{U} = (U_\alpha)_{\alpha > 0}$ is merely a sub-Markovian resolvent of kernels on $(E, \mathcal{B})$. Let $\mathcal{I}$ be the set of all $\mathcal{U}$-invariant probability measures, i.e. $$ \mathcal{I} : = \{\mu : \mu \; \mbox{ is a} \; \mbox{ probability measure such that } \; \mu \circ \alpha U_\alpha = \mu, \; \alpha > 0 \}.
$$ We fix a $\mathcal{U}$-sub-invariant probability measure $m$ and denote by $\mathcal{I}_{m, ac}$ the subset of $\mathcal{I}$ which consists of all measures that are absolutely continuous with respect to $m$. Also, let $\mathcal{U}^\ast$ be the adjoint resolvent of $\mathcal{U}$ with respect to $m$ (cf. (2.1)). \begin{lem} \label{lem 2.24} The following assertions are equivalent for a probability measure $\mu$. i) $\mu \in \mathcal{I}_{m, ac}$. ii) There exists a function $u \in p\mathcal{B}$ such that $\alpha U_\alpha u = u$ $m$-a.e. for all $\alpha > 0$, $\mu = u \cdot m$, and $m(u) = 1$. In particular, the function $u$ is $\mathcal{U}$-invariant (or equivalently, $\mathcal{U}^\ast$-invariant). \end{lem} \begin{proof} i) $\Rightarrow$ ii). Let $u \in p\mathcal{B} \cap L^1(E, m)$ such that $\mu = u \cdot m \in \mathcal{I}_{m, ac}$. Then for every $f \in bp\mathcal{B}$ we have \begin{equation} \label{eq 4.1} \int{f \alpha U_\alpha^\ast u dm} = \int{u \alpha U_\alpha f dm} = \int{fu dm} \end{equation} hence $\alpha U_\alpha^\ast u = u$, $\alpha > 0$. By Theorem \ref{thm 2.19} we conclude that $u$ is $\mathcal{U}$-invariant. The implication ii) $\Rightarrow$ i) follows by Theorem \ref{thm 2.19} and relation (\ref{eq 4.1}). \end{proof} Let $\mathcal{G}^m$ be the set of all probability measures $\mu$ on $(E, \mathcal{B})$ of the form $\mu = u\cdot m$, where $u$ is $\mathcal{U}$-invariant (with respect to $m$). \begin{rem} \label{rem 2.22} i) Because the $\mathcal{U}$-invariant functions are $\mathcal{U}^{\ast}$-excessive, it follows that $\mathcal{G}^m$ is a set of sub-invariant measures. ii) $\mathcal{G}^{m}$ is a non-empty convex set; $m \in \mathcal{G}^{m}$. iii) $\mathcal{U}$ is $m$-recurrent if and only if $m \in \mathcal{I}$. iv) We have the inclusion $\mathcal{I}_{m, ac} \subset \mathcal{G}^m $. If $\mathcal{U}$ is $m$-recurrent then $\mathcal{G}^m = \mathcal{I}_{m, ac}$. \end{rem} \begin{thm} \label{thm 2.26} Consider the following assertions.
i) $\mathcal{U}$ is $m$-irreducible. ii) $\mathcal{G}^{m} = \{m\}$. iii) The measure $m$ is extremal in $\mathcal{G}^{m}$. iv) The measure $m$ is extremal in $\mathcal{I}$. Then i) $\Rightarrow$ ii) $\Leftrightarrow$ iii). If $\mathcal{U}$ is $m$-symmetric then assertions i) - iii) are equivalent. If $\mathcal{U}$ is $m$-recurrent then assertions i) - iv) are equivalent. \end{thm} \begin{proof} The implication i) $\Rightarrow$ ii) follows by Corollary \ref{coro 2.21}. ii) $\Leftrightarrow$ iii). Clearly, ii) implies iii). Assume that iii) holds and let $\mu \in \mathcal{G}^m$, $\mu = u\cdot m$. If $u \leq 1$ or $u \geq 1$ then, since $\int u \wedge 1 dm = 1$, we get that $\mu = m$. Assume that $\mu \neq m$. Consequently we have $0 < \int u \wedge 1 dm < 1$, hence if we put $\alpha := \int u \wedge 1 dm$ then $\alpha \in (0,1)$. By Proposition \ref{prop 2.15} it follows that $u \wedge 1$ is also a $\mathcal{U}$-invariant function. Therefore the measures $\mu_1 := \dfrac{u \wedge 1}{\alpha}\cdot m$ and $\mu_2 := \dfrac{1 - u\wedge 1}{1-\alpha}\cdot m$ belong to $\mathcal{G}^m$ and clearly we have $m = \alpha \mu_1 + (1-\alpha)\mu_2$. The measure $m$ being extremal in $\mathcal{G}^m$, we get $\mu_1 = m$, i.e. $u \wedge 1 = \alpha < 1$ $m$-a.e.; but then $u = \alpha$ $m$-a.e. and $\int u dm = \alpha < 1$, which is a contradiction. Therefore ii) holds. Assume now that $\mathcal{U}$ is $m$-symmetric. Clearly, it is enough to show that ii) $\Rightarrow$ i), so let $A \subset E$ be a $\mathcal{U}$-absorbing set. By Proposition \ref{prop 2.20} we get that the function $1_A$ is $\mathcal{U}$-invariant. If $m(A) > 0$ then the measure $\dfrac{1_A}{m(A)}\cdot m$ belongs to $\mathcal{G}^m = \{m\}$ so $m(E \setminus A) = 0$, hence $\mathcal{U}$ is $m$-irreducible. Let us consider the last case, when $\mathcal{U}$ is $m$-recurrent. iii) $\Rightarrow$ iv).
If $m = \alpha m_1 + ( 1 - \alpha ) m_2$ with $m_1$, $m_2 \in \mathcal{I}$ and $\alpha \in (0, 1)$ then by Lemma \ref{lem 2.24} and Remark \ref{rem 2.22}, iv) we have that $m_1 \in \mathcal{I}_{m, ac} = \mathcal{G}^m$, hence $m_1 = m$ and $m$ is extremal in $\mathcal{I}$. iv) $\Rightarrow$ i). First we notice that under condition iv), from Remark \ref{rem 2.22}, iv) it follows that iii) and hence ii) hold. Let now $A \subset E$ be a $\mathcal{U}$-absorbing set. By Proposition \ref{prop 2.20} we get that the function $1_A$ is $\mathcal{U}$-invariant. If $m(A) > 0$ then the measure $\dfrac{1_A}{m(A)}\cdot m$ belongs to $\mathcal{G}^m = \{m\}$ so $m(E \setminus A) = 0$. In conclusion we obtain that $\mathcal{U}$ is $m$-irreducible. \end{proof} As an application of Theorem \ref{thm 2.26}, we end this section with the following known result (cf. e.g. \cite{DaZa96}, Proposition 3.2.5, stated there for strongly continuous semigroups) on the singularity of extremal invariant measures. We remark that, in contrast with the previous work, we drop the strong continuity assumption. The key ingredient is the ergodicity of $\mathcal{U}$ with respect to an extremal measure. \begin{prop} \label{prop 3.8} If $\mu$ and $\nu$ are extremal measures in $\mathcal{I}$ such that $\mu \neq \nu$ then $\mu$ and $\nu$ are singular. \end{prop} \begin{proof} Let $A \in \mathcal{B}$ such that $\mu (A) \neq \nu (A)$. By Remark \ref{rem 2.22}, iii), Theorem \ref{thm 2.26}, and Proposition \ref{prop 3.5} there exists a sequence $(\alpha_n)_{n \geq 1}$ decreasing to $0$ such that \[ \lim\limits_n \alpha_n U_{\alpha_n} 1_A = \mu (A), \; \mu \mbox{-a.e. and} \; \lim\limits_n \alpha_n U_{\alpha_n} 1_A = \nu (A), \; \nu \mbox{-a.e.} \] If we set \[ \Gamma_1 : = \left\{ x \in E : \lim\limits_n \alpha_n U_{\alpha_n} 1_A(x) = \mu (A) \right\},\ \Gamma_2 : = \left\{ x \in E : \lim\limits_n \alpha_n U_{\alpha_n} 1_A(x) = \nu (A) \right\}, \] then $\Gamma_1 \cap \Gamma_2 = \emptyset$ and $\mu(\Gamma_1) = \nu(\Gamma_2) = 1$.
Therefore $\mu$ and $\nu$ are singular. \end{proof} \section{Irreducibility of (non-symmetric) Dirichlet forms} In this section we assume that $\mathcal{U}$ and $\mathcal{U}^{\ast}$ are the resolvent and the co-resolvent, respectively, of a (non-symmetric) Dirichlet form $(\mathcal{E}, D(\mathcal{E}))$ on $L^2(E, m)$, i.e., \[ U_{\alpha}(L^2(E, m)) \subset D(\mathcal{E}), \; U_{\alpha}^{\ast}(L^2(E, m)) \subset D(\mathcal{E}) \] and \[ \mathcal{E}_{\alpha}(U_{\alpha}f, u) = \mathcal{E}_{\alpha}(u, U_{\alpha}^{\ast}f) = (f, u)_{L^2(E, m)} \] for all $\alpha > 0$, $f \in L^2(E, m)$ and $u \in D(\mathcal{E})$, where $\mathcal{E}_{\alpha}: = \mathcal{E} + \alpha(\cdot, \cdot)_{L^2(E, m)}$; for the definition of the (non-symmetric) Dirichlet form see \cite{MaRo92}, Definition 4.5. Recall that $\mathcal{U}$ and $\mathcal{U}^\ast$ become (uniquely) strongly continuous sub-Markovian resolvents of contractive operators on $L^2(E, m)$ and $m$ is a sub-invariant measure (cf. \cite{MaRo92}, Theorem 2.8 and Theorem 4.4; see also Chapter II, Section 5). According to the discussion at the beginning of Section 2, we can assume that $\mathcal{U}$ and $\mathcal{U}^\ast$ are sub-Markovian resolvents of kernels in weak duality with respect to $m$. In particular, all notions that are related to $\mathcal{U}$ depend implicitly on the fixed measure $m$. We suppose that $\mathcal{E}$ satisfies the {\it (strong) sector condition}, that is, there exists a constant $k > 0$ such that \[ |\mathcal{E}(u, v)| \leq k\mathcal{E}(u, u)^{\frac{1}{2}} \mathcal{E}(v,v)^{\frac{1}{2}} \] for all $u,v \in D(\mathcal{E})$. We denote by $({\sf L}, D({\sf L}))$ (resp. $({\sf L}^{\ast}, D({\sf L}^{\ast}))$) the generator (resp.
co-generator) of the form $(\mathcal{E}, D(\mathcal{E}))$, \[ D({\sf L}) := U_{\alpha}(L^2(E, m)), \; {\sf L}(U_{\alpha}f) := \alpha U_{\alpha} f - f, \; f\in L^2(E, m) \] and recall that $\mathcal{E}(u,v) = -({\sf L}u,v)_{L^2(E, m)}$ for all $u \in D({\sf L})$ and $v \in D(\mathcal{E}).$ For the reader's convenience we restate and prove the next well-known characterization of zero-energy elements. \begin{lem} \label{lem 5.1} The following assertions are equivalent for $u \in L^2(E, m)$. i) $u \in D(\mathcal{E})$ and $\mathcal{E}(u,u) = 0$. ii) $u \in D({\sf L})$ and ${\sf L}u = 0$. iii) $\alpha U_{\alpha}u = u$ for one (equivalently, for all) $\alpha > 0$. \end{lem} \begin{proof} i) $\Rightarrow$ iii). Let $u \in D(\mathcal{E})$ with $\mathcal{E}(u,u) = 0$ and let $f \in L^2(E, m)$. By the sector condition we get $\mathcal{E}(u, U_{\alpha}^{\ast} f) = 0$, hence \[ (u - \alpha U_{\alpha}u, f)_{L^2(E, m)} = \mathcal{E}_{\alpha}(u, U_{\alpha}^{\ast}f) - \alpha(u, U_{\alpha}^{\ast}f) = \mathcal{E}(u, U_{\alpha}^{\ast}f) = 0, \] and therefore $\alpha U_{\alpha} u = u$. The implication iii) $\Rightarrow$ ii) is clear by the definition of $({\sf L}, D({\sf L}))$ and ii) $\Rightarrow$ i) follows since $\mathcal{E}(u,u) = -({\sf L}u, u)$ if $u \in D({\sf L})$. \end{proof} \begin{prop} \label{prop 5.2} The following assertions are equivalent for $u \in L^2(E, m) \cap L^{\infty}(E, m)$. i) $u$ is $\mathcal{U}$-invariant (with respect to $m$). ii) If $v \in D({\sf L})$ then $uv \in D({\sf L})$ and ${\sf L}(uv) = u{\sf L}v$. iii) If $v,w \in D(\mathcal{E})$ then $uv \in D(\mathcal{E})$ and $\mathcal{E}(uv, w) = \mathcal{E}(v, uw)$. \end{prop} \begin{proof} i) $\Rightarrow$ ii). Let $\alpha > 0$ and $v = U_{\alpha}f$, $f \in L^2(E, m)$. If $u$ is $\mathcal{U}$-invariant then $uv = U_{\alpha}(uf)$. Therefore $uv \in D({\sf L})$ and \[ {\sf L}(uv) = \alpha U_{\alpha}(uf) - uf = u(\alpha U_{\alpha}f - f) = u{\sf L}v. \] ii) $\Rightarrow$ i). Let $f \in L^2(E, m)$, $v = U_{\alpha}f$.
Then by ii) there exists $g \in L^2(E, m)$ such that $uv = U_{\alpha}g$ and from ${\sf L}(uv) = u{\sf L}v$ we get \[ {\sf L}(uv) = \alpha U_{\alpha}g - g = \alpha uv - g, \; u{\sf L}v = u(\alpha U_{\alpha}f - f) = \alpha uv - uf. \] Hence $g = uf$ and thus $u U_{\alpha}f = U_{\alpha}(uf)$, i.e. $u$ is $\mathcal{U}$-invariant. i) $\Rightarrow$ iii). Let $v,w \in D(\mathcal{E})$ and $u$ be $\mathcal{U}$-invariant. Then \[ \mathop{\sup}\limits_{\alpha} \mathcal{E}^{\alpha}(uv, uv) = \mathop{\sup}\limits_{\alpha}\int \alpha uv(uv - \alpha U_{\alpha} uv) dm = \mathop{\sup}\limits_{\alpha}\int u^2\alpha v(v -\alpha U_{\alpha}v) dm \leq \] \[ \leq \| u \|^2_{\infty}\mathop{\sup}\limits_{\alpha}\int \alpha v(v - \alpha U_{\alpha}v) dm \leq \|u\|^2_{\infty}k^2\mathcal{E}(v,v) < \infty. \] We deduce that $uv \in D(\mathcal{E})$ and therefore \[ \mathcal{E}(uv, w) = \mathop{\lim}\limits_{\alpha \to \infty}\mathcal{E}^{\alpha}(uv, w) = \mathop{\lim}\limits_{\alpha \to \infty}\int \alpha w(uv - \alpha U_{\alpha}(uv)) dm = \] \[ = \mathop{\lim}\limits_{\alpha \to \infty}\int\alpha uw(v -\alpha U_{\alpha}v) dm = \mathcal{E}(v, uw). \] iii) $\Rightarrow$ i). Let $v = U_{\alpha} f$, $w = U_{\alpha}^{\ast}g$ with $f,g \in bp\mathcal{B}\cap L^2(E, m)$. By hypothesis iii) we have $uv, uw \in D(\mathcal{E})$ and \[ \int uf U_{\alpha}^{\ast}g dm = \mathcal{E}_{\alpha}(v, uw) = \mathcal{E}_{\alpha}(uv, w) = \int gu U_{\alpha}f dm. \] According to Proposition \ref{prop 2.16} we conclude that $u$ is $\mathcal{U}$-invariant. \end{proof} \begin{rem} By Theorem \ref{thm 2.19}, if $u \in L^2(E, m) \cap L^{\infty}(E, m)$ satisfies any (and hence all) of the conditions in Lemma \ref{lem 5.1} then $u$ also satisfies the ones in Proposition \ref{prop 5.2}. If $\mathcal{E}$ is Markovian then the converse is also true. \end{rem} As in \cite{Fu07}, Definition 4.1, the Dirichlet form $\mathcal{E}$ is called {\it recurrent} (resp.
{\it irreducible}) if the associated resolvent $\mathcal{U}$ is $m$-recurrent (resp. $m$-irreducible). Let $(\widetilde{\mathcal{E}}, D(\widetilde{\mathcal{E}}))$ be the symmetric part of $\mathcal{E}$, \[ \widetilde{\mathcal{E}}(u,v):= \frac{1}{2}(\mathcal{E}(u,v) + \mathcal{E}(v,u)) \; \mbox{for all} \; u,v \in D(\widetilde{\mathcal{E}}) := D(\mathcal{E}). \] \begin{coro} \label{coro 5.3} Assume that $m(E) < \infty$. i) The following assertions are equivalent. i.1) $\mathcal{E}$ is recurrent. i.2) $\widetilde{\mathcal{E}}$ is recurrent. i.3) $1 \in D(\mathcal{E})$ and $\mathcal{E}(1,1) = 0$. ii) If $\mathcal{E}$ is recurrent then the following assertions are equivalent. ii.1) $\mathcal{E}$ is irreducible. ii.2) $\widetilde{\mathcal{E}}$ is irreducible. ii.3) If $u \in D(\mathcal{E})$ and $\mathcal{E}(u,u)=0$ then $u$ is constant. ii.4) If $u \in D({\sf L})$ and ${\sf L}u = 0$ then $u$ is constant. ii.5) If $u \in D({\sf L})$ such that for all $v \in D({\sf L})$ we have that $uv \in D({\sf L})$ and ${\sf L}(uv) = u{\sf L}v$ then $u$ is constant. ii.6) $\int{(\alpha U_\alpha u - \frac{1}{m(E)}\int{u dm})^2 dm} \mathop{\longrightarrow}\limits_{\alpha \to 0} 0$ for all $u \in L^2(E, m)$. \end{coro} \begin{proof} The equivalence i.1) $\Leftrightarrow$ i.3) follows by Lemma \ref{lem 5.1} and Corollary \ref{coro 2.4}. Since $D(\widetilde{\mathcal{E}}) = D(\mathcal{E})$ and $\widetilde{\mathcal{E}}(1,1) = \mathcal{E}(1,1)$, the same argument shows that i.2) is also equivalent to i.3). The fact that assertions ii.1) - ii.6) are equivalent follows by a simple combination of Lemma \ref{lem 5.1}, Proposition \ref{prop 5.2}, Corollary \ref{coro 2.21}, and Proposition \ref{prop 3.5}. \end{proof} \noindent {\bf Extremality of Gibbs states.} From now on our framework is the one considered in \cite{AlKoRo97a}. All measures which appear are probability measures on a locally convex topological vector space $E$ and its Borel $\sigma$-algebra $\mathcal{B}$.
The set $\mathcal{F}C_b^\infty$ of all {\it finitely based smooth bounded functions on $E$} is defined as \[ \mathcal{F}C_b^\infty : = \{ f(l_1, \ldots, l_n) : n \in \mathbb{N}, f \in C_b^\infty(\mathbb{R}^n), l_1, \ldots, l_n \in E'\}, \] where $E'$ is the topological dual space of $E$. For $K \subset E$ and $(b_k)_{k \in K}$ a family of $\mathcal{B}$-measurable functions, we denote by $\mathcal{G}^b$ the set of all probability measures $m$ on $E$ such that $b_k \in L^2(E, m)$ and \begin{equation} \label{eq 5.1} \int{\frac{\partial u}{\partial k} dm} = - \int{u b_k dm}, \end{equation} for all $u \in \mathcal{F}C_b^\infty$ and $k \in K$. Elements in $\mathcal{G}^b$ are called {\it Gibbs states associated with} $b$. We fix $K$, $b$, and $m \in \mathcal{G}^b$, and we consider the corresponding Dirichlet form $(\mathcal{E}_{m, k}, D(\mathcal{E}_{m, k}))$ defined as the closure on $L^2 (E, m)$ of \[ \mathcal{E}_{m, k}(u, v) = \int{\frac{\partial u}{\partial k}\frac{\partial v}{\partial k} dm}; \; \; \; \; \; \; u,\; v \in \mathcal{F}C_b^\infty. \] Hereinafter we assume that $K$ is countable and \[ \sum\limits_{k \in K} |l(k)|^2 < \infty \; \; \; \; \; \; {\rm for \; all} \; l \in E'. \] Then we can define the Dirichlet form $(\mathcal{E}_m, D(\mathcal{E}_m))$ by setting \[ D(\mathcal{E}_m) : = \{ u \in \bigcap\limits_{k \in K} D(\mathcal{E}_{m, k}) : \sum\limits_{k \in K} \mathcal{E}_{m, k}(u, u) < \infty \} \] and \[ \mathcal{E}_m(u, v) : = \frac{1}{2} \sum\limits_{k \in K} \mathcal{E}_{m, k}(u, v), \; \; \; \; u, \; v \in D(\mathcal{E}_m). \] For more details on the definition and closability of the forms introduced above see \cite{AlKoRo97a} and the references therein.
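To fix ideas, let us record a standard finite-dimensional example (included only for orientation, and not needed in the sequel): take $E = \mathbb{R}^d$, let $K = \{e_1, \ldots, e_d\}$ be the canonical basis, and let $m$ be the standard Gaussian measure on $\mathbb{R}^d$. Integration by parts yields \[ \int \frac{\partial u}{\partial e_k}\, dm = \int u(z)\, z_k \, m(dz), \; \; \; \; u \in \mathcal{F}C_b^\infty, \] so relation (\ref{eq 5.1}) holds with $b_k(z) := - z_k$, i.e. $m$ is a Gibbs state associated with $b$, and the corresponding form $\mathcal{E}_m(u,v) = \frac{1}{2}\int \langle \nabla u, \nabla v \rangle\, dm$ is the classical Ornstein--Uhlenbeck Dirichlet form on $L^2(\mathbb{R}^d, m)$.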
Further, we denote by $\mathcal{U} = (U_\alpha)_{\alpha > 0}$ the resolvent of kernels associated to $(\mathcal{E}_m, D(\mathcal{E}_m))$ and, as in Section 3, let $\mathcal{I}_{m,ac}$ be the set of all $\mathcal{U}$-invariant probability measures which are absolutely continuous with respect to $m$. \begin{rem} Since $1 \in D(\mathcal{E}_m)$ and $\mathcal{E}_m(1,1) = 0$, by Corollary \ref{coro 5.3} we have that $\mathcal{E}_m$ is recurrent. \end{rem} Let $\mathcal{G}^b_{m, ac}$ be the set of all probability measures from $\mathcal{G}^b$ which are absolutely continuous with respect to $m$. We now give a characterization of $\mathcal{G}^b_{m, ac}$ in terms of excessive functions. \begin{thm} \label{thm 5.5} The following assertions are equivalent for $\rho \in p\mathcal{B} \cap L^1(E, m)$ such that $m(\rho) = 1$. i) There exists a $\mathcal{U}$-excessive function which is an $m$-version of $\rho$. ii) The measure $\rho \cdot m$ belongs to $ \mathcal{G}^b$. Consequently, $\mathcal{I}_{m, ac} = \mathcal{G}^b_{m, ac}$. \end{thm} \begin{proof} i) $\Rightarrow$ ii). Without loss of generality we may assume that $\rho \in \mathcal{E}(\mathcal{U})$. Suppose first that $\rho \in bp\mathcal{B} \cap \mathcal{E}(\mathcal{U})$. By Proposition \ref{prop2.3} it follows that $\alpha U_\alpha \rho = \rho$ for all $\alpha > 0$, hence $\rho \in D(\mathcal{E}_{m})$. From Lemma \ref{lem 5.1} we have that $\mathcal{E}_m(\rho, \rho) = 0$ and by the chain rule (see \cite{AlKoRo97a}, Remark 1.1, iii)) and \cite{AlKuRo90}, Theorem 2.5, we conclude that $\rho \cdot m \in \mathcal{G}^b$. If $\rho \in \mathcal{E}(\mathcal{U})$ then $(\inf(\rho, n))_{n} \subset bp\mathcal{B} \cap \mathcal{E}(\mathcal{U})$ converges pointwise to $\rho$ and by the dominated convergence theorem applied in relation (\ref{eq 5.1}) we get that $\rho \cdot m \in \mathcal{G}^b$. ii) $\Rightarrow$ i).
If $\rho \cdot m \in \mathcal{G}^b$ then by \cite{BoRo95}, Lemma 6.14, it follows that $\sqrt{\rho} \in D(\mathcal{E}_m)$ and $\mathcal{E}_m(\sqrt{\rho}, \sqrt{\rho}) = 0$. By Lemma \ref{lem 5.1} and Theorem \ref{thm 2.19} it follows that $\sqrt{\rho}$ is $\mathcal{U}$-invariant. Since \[ \rho = \rho \alpha U_\alpha 1 = \alpha \sqrt{\rho} U_\alpha \sqrt{\rho} = \alpha U_\alpha \rho \quad m \mbox{-a.e.} \] by Proposition \ref{prop 2.2} we get that condition i) is satisfied. \end{proof} Now, Theorem \ref{thm 5.5} places us in the context of Theorem \ref{thm 2.26} and we obtain: \begin{thm} \label{thm 5.6} The following assertions are equivalent. i) The form $\mathcal{E}_m$ is irreducible. ii) $\mathcal{G}^b_{m, ac} = \{m\}$. iii) The measure $m$ is extremal in $\mathcal{G}^b_{m, ac}$. iv) The measure $m$ is extremal in $\mathcal{G}^b$. \end{thm} \begin{proof} The fact that i), ii), and iii) are equivalent follows by Theorem \ref{thm 5.5} and Theorem \ref{thm 2.26}. Clearly iv) implies iii). If iii) holds and $m = \alpha m_1 + (1 - \alpha) m_2$ with $\alpha \in (0, 1)$ and $m_1, m_2 \in \mathcal{G}^b$, then $m_1$ and $m_2$ belong to $\mathcal{G}^b_{m, ac}$, hence $m=m_1=m_2$. \end{proof} In the spirit of \cite{RoWa01}, further connections between irreducibility, extremality of Gibbs measures, and functional inequalities can be studied. In this sense, we would also like to refer to the classical work of \cite{StZe92a}, \cite{StZe92b}, and \cite{StZe92c}. \noindent {\bf Example} (The non-symmetric case). Assume now that there exists a separable real Hilbert space $(H, \langle , \rangle_H)$ densely and continuously embedded into $E$ and $K$ is an orthonormal basis of $H$. By the chain rule, for every $u \in \mathcal{F}C_b^\infty$ and $z \in E$ fixed, $h \mapsto \frac{\partial u}{\partial h}(z)$ is a continuous linear functional on $H$, hence $\nabla u(z) \in H$ is uniquely defined by \[ \langle \nabla u(z), h\rangle_H = \frac{\partial u}{\partial h} (z), \; h \in H.
\] Then \[ \mathcal{E}_m (u,v) = \frac{1}{2}\int \langle \nabla u, \nabla v\rangle_H dm. \] Let $A$ be a map from $E$ to the space $\mathcal{L}(H)$ of all bounded linear operators on $H$, such that $z \mapsto \langle A(z)h_1, h_2\rangle_H$ is $\mathcal{B}(E)$-measurable for all $h_1$, $h_2 \in H$. Additionally, suppose that there exists $C > 0$ with \[ \langle A(z)h, h \rangle_H \geq C \|h\|^2_H \; \mbox{ for all} \; h \in H, \] and that $\|\widetilde{A}\| \in L^1(E, m)$ and $\|\check{A}\| \in L^\infty(E, m)$, where $\widetilde{A} := \frac{1}{2}(A + \hat{A})$, $\check{A} := \frac{1}{2}(A - \hat{A})$ and $\hat{A}(z)$ denotes the adjoint of $A(z)$, $z\in E$. Let $b, c \in L^\infty(E \rightarrow H, m)$ be such that for all $u \in \mathcal{F}C_b^\infty$ it holds \begin{equation} \label{eq 5.2} \int \langle b, \nabla u\rangle_H dm \geq 0 \; \; \mbox{ and } \; \; \int \langle c, \nabla u\rangle_H dm \geq 0. \end{equation} Then by \cite{MaRo92}, Chapter II, Section 3, \[ \mathcal{E}'(u, v) := \int \langle A \nabla u, \nabla v\rangle_H dm + \int u \langle b, \nabla v \rangle_H dm + \int \langle c, \nabla u\rangle_H v dm \] is a closable densely defined positive bilinear form on $L^2(E, m)$, whose closure is a Dirichlet form. Moreover, if $J$ is a symmetric finite positive measure on $(E \times E, \mathcal{B} \otimes \mathcal{B})$ such that $(\mathcal{E}_J, \mathcal{F}C_b^\infty)$ given by \[ \mathcal{E}_J(u,v):=\int \int (u(x) - u(y)) (v(x) - v(y)) J(dxdy), \; u,v \in \mathcal{F}C_b^\infty, \] is closable, then $(\mathcal{E} := \mathcal{E}' + \mathcal{E}_J, \mathcal{F}C_b^\infty)$ is closable and its closure is a Dirichlet form. Note that since $1 \in \mathcal{F}C_b^\infty$, $m(E) < \infty$, and $\mathcal{E}(1,1)=0$, by Corollary \ref{coro 5.3} it follows that $\mathcal{E}$, $\mathcal{E}'$, and $\mathcal{E}_J$ are recurrent. \begin{coro} \label{coro 5.8} The following assertions hold. i) Let $\mathcal{E}_m$ be irreducible (equivalently, let $m$ be extremal in $\mathcal{G}^b$).
Then $\mathcal{E}$ is irreducible. ii) Assume that $\int\langle b + c, \nabla u\rangle_Hdm = 0$ for all $u \in \mathcal{F}C_b^\infty$ and there exists $C' > 0$ such that $\langle A(z)h,h\rangle_H \leq C' \|h\|_H^2$ for all $z \in E$ and $h \in H$. Then $\mathcal{E}'$ is irreducible if and only if $\mathcal{E}_m$ is irreducible. \end{coro} \begin{proof} i) By the strict ellipticity of $A$ and the relations (\ref{eq 5.2}), it is straightforward to check that $D(\mathcal{E}) \subset D(\mathcal{E}_m)$ and if $u \in D(\mathcal{E})$ such that $\mathcal{E}(u,u)=0$, then $\mathcal{E}_m(u,u)=0$. Now, the assertion follows by Corollary \ref{coro 5.3}, ii). ii) Notice that if $u \in \mathcal{F}C_b^\infty$, then integrating by parts and using the first assumption in ii) we get \[ \mathcal{E}'(u,u) = \int\langle A \nabla u, \nabla u\rangle_H dm. \] Therefore, $2C\, \mathcal{E}_m(u,u) \leq \mathcal{E}'(u,u) \leq 2C'\, \mathcal{E}_m(u,u)$, hence $D(\mathcal{E}') = D(\mathcal{E}_m)$, and the statement follows by applying Corollary \ref{coro 5.3}, ii). \end{proof} \affiliationone{ Lucian Beznea\\ Simion Stoilow Institute of Mathematics\\ of the Romanian Academy,\\ Research unit No. 2, P.O. Box 1-764,\\ RO-014700 Bucharest, Romania,\\ and University of Bucharest, Faculty\\ of Mathematics and Computer Science \email{e-mail: [email protected]}} \affiliationtwo{ Iulian C\^impean\\ Simion Stoilow Institute of Mathematics\\ of the Romanian Academy,\\ Research unit No. 2, P.O. Box 1-764,\\ RO-014700 Bucharest\\ Romania \email{[email protected]}} \affiliationthree{ Michael R\"ockner\\ Fakult\"at f\"ur Mathematik, Universit\"at\\ Bielefeld,\\ Postfach 100 131, D-33501 Bielefeld\\ Germany \email{[email protected]} } \end{document}
\begin{document} \begin{abstract} Based on Beurling's theory of balayage, we develop the theory of non-uniform sampling in the context of the theory of frames for the settings of the Short Time Fourier Transform and pseudo-differential operators. There is sufficient complexity to warrant new examples generally, and to resurrect the formulation of balayage in terms of covering criteria with an eye towards an expanded theory as well as computational implementation. \end{abstract} \maketitle \section{Introduction} \subsection{Background and theme} There has been a great deal of work during the past quarter century in analyzing, formulating, validating, and extending sampling formulas, \begin{equation}\label{eq:sampling} f(x) = \sum f(x_n) s_n, \end{equation} for non-uniformly spaced sequences $\{x_n\}$, for specific sequences of sampling functions $s_n$ depending on $x_n$, and for classes of functions $f$ for which such formulas are true. For glimpses into the literature, see the Journal of Sampling Theory in Signal and Image Processing, the influential book by Young \cite{youn2001}, edited volumes such as \cite{BenFer2001}, and specific papers such as those by Jaffard \cite{jaff1991} and Seip \cite{seip1995}. This surge of activity is intimately related to the emergence of wavelet and Gabor theories and more general frame theory. Further, it is firmly buttressed by the profound results of Paley-Wiener \cite{PalWie1934}, Levinson \cite{levi1940}, Duffin-Schaeffer \cite{DufSch1952}, Beurling-Malliavin \cite{BeuMal1962}, \cite{BeuMal1967}, Beurling (unpublished 1959-1960 lectures), and H.~J.~Landau \cite{land1967}, which themselves have explicit origins in the work of Dini \cite{dini1917}, as well as of G. D. Birkhoff (1917), J. L. Walsh (1921), and Wiener (1927); see \cite{PalWie1934}, page 86, for explicit references. The setting will be in terms of classical spectral criteria to prove non-uniform sampling formulas such as (\ref{eq:sampling}).
Our theme is to generalize non-uniform sampling in this setting to the Gabor theory \cite{groc1991}, \cite{FeiSun2006}, \cite{LabWeiWil2004}, as well as to the setting of time-varying signals and pseudo-differential operators. The techniques are based on Beurling's methods from 1959-1960, \cite{beur1966}, \cite{beur1989}, pages 299-315, \cite{beur1989}, pages 341-350, which incorporate balayage, spectral synthesis, and strict multiplicity. Our formulation is in terms of the theory of frames. \subsection{Definitions} Let $\mathcal{S}(\mathbb{R}^d)$ be the Schwartz space of rapidly decreasing smooth functions on $d$-dimensional Euclidean space ${\mathbb R}^d$. We define the Fourier transform and inverse Fourier transform of $f \in \mathcal{S}(\mathbb{R}^d)$ by the formulas, $$ \widehat{f}(\gamma) = \int_{\mathbb{R}^d} f(x) e^{-2 \pi i x \cdot \gamma} \ dx \quad \text{ and } \quad (\widehat{f})^{\vee}(x) = f(x) = \int_{\widehat{\mathbb{R}}^d} \widehat{f}(\gamma) e^{2 \pi i x \cdot \gamma} \ d\gamma, $$ respectively. $\widehat{\mathbb{R}}^d$ denotes ${\mathbb R}^d$ considered as the spectral domain. If $F \in \mathcal{S}(\widehat{\mathbb{R}}^d)$, then we write $F^\vee(x) = \int_{\widehat{\mathbb{R}}^d}F(\gamma)e^{2\pi i x \cdot \gamma}\,d\gamma$. The notation ``$\int$'' designates integration over ${\mathbb R}^d$ or $\widehat{\mathbb{R}}^d$. The Fourier transform extends to tempered distributions. If $X \subseteq {\mathbb R}^d$, where $X$ is closed, then $M_b(X)$ is the space of bounded Radon measures $\mu$ with support, $\text{supp}\,(\mu)$, contained in $X$. $C_b({\mathbb R}^d)$ denotes the space of complex-valued bounded continuous functions on ${\mathbb R}^d$. \begin{defn}\label{defn:frame}(Frame) Let $H$ be a separable Hilbert space.
A sequence $\{x_{n}\}_{n \in {\mathbb Z}} \subseteq H$ is a \emph{frame} for $H$ if there are positive constants $A$ and $B$ such that \[\forall \ f \in H, \quad A \lVert f \rVert^{2} \leq \sum_{n \in {\mathbb Z}} |\langle f,x_{n}\rangle|^{2} \leq B \lVert f \rVert^{2} . \] The constants $A$ and $B$ are lower and upper frame bounds, respectively. They are not unique. We choose $B$ to be the infimum over all upper frame bounds, and we choose $A$ to be the supremum over all lower frame bounds. If $A = B$, we say that the frame is a tight frame or an $A$-tight frame for $H$. \end{defn} \begin{defn}(Fourier frame) Let $E \subseteq \mathbb{R}^d$ be a sequence and let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be a compact set. Notationally, let $e_{x}(\gamma) = e^{2 \pi i x \cdot \gamma}$. The sequence $\mathcal{E}(E) = \{e_{-x}: x \in E \}$ is a \emph{Fourier frame} for $L^2(\Lambda)$ if there are positive constants $A$ and $B$ such that \[\forall \ F \in L^2(\Lambda), \quad A \lVert F \rVert^{2}_{L^2(\Lambda)} \leq \sum_{x \in E} |\langle F,e_{-x}\rangle|^{2} \leq B \lVert F \rVert^{2}_{L^2(\Lambda)}. \] Define the \emph{Paley-Wiener space}, $$PW_{\Lambda}= \{f \in L^2({\mathbb R}^d): \text{supp}\, (\widehat{f}) \subseteq \Lambda\}.$$ Clearly, $\mathcal{E}(E)$ is a Fourier frame for $L^2(\Lambda)$ if and only if the sequence, $$ \{(e_{-x} \ \mathbb{1}_{\Lambda})^\vee: x \in E \} \subseteq PW_{\Lambda}, $$ is a frame for $PW_{\Lambda}$, in which case it is called a \emph{Fourier frame} for $PW_{\Lambda}$. Note that $\inner{F}{e_{-x}} = f(x)$ for $f \in PW_{\Lambda}$, where $\widehat{f} = F \in L^2(\widehat{\mathbb{R}}^d)$ can be considered an element of $L^2(\Lambda).$ \end{defn} \begin{rem} Frames were first defined by Duffin and Schaeffer \cite{DufSch1952}, but appeared explicitly earlier in Paley and Wiener's book \cite{PalWie1934}, page 115.
See Christensen's book \cite{chri2003} and Kova\v{c}evi\'{c} and Chebira's articles \cite{KovChe2007a}, \cite{KovChe2007b} for recent expositions of theory and applications. If $\{x_n\}_{n \in {\mathbb Z}} \subseteq H$ is a frame, then there is a topological isomorphism $S : H \longrightarrow \ell^2({\mathbb Z})$ such that \begin{equation} \label{eq:iso} \forall x \in H, \quad x = \sum_{n \in {\mathbb Z}} \inner{x}{S^{-1}(x_n)}x_n = \sum_{n \in {\mathbb Z}} \inner{x}{x_n}S^{-1}(x_n). \end{equation} Equation (\ref{eq:iso}) illustrates the natural role that frames play in studying non-uniform sampling formulas (\ref{eq:sampling}), see Example \ref{ex:1}. \end{rem} Beurling introduced the following definition in his 1959-1960 lectures. \begin{defn}(Balayage) Let $E \subseteq {\mathbb R}^d$ and $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be closed sets. \emph{Balayage} is possible for $(E, \Lambda) \subseteq {\mathbb R}^d \times \widehat{\mathbb{R}}^d$ if \begin{equation*} \forall \mu \in M_b({\mathbb R}^d), \mbox{ }\exists \nu \in M_b(E) \mbox{ such that } \widehat{\mu} = \widehat{\nu} \mbox{ on } \Lambda . \end{equation*} \end{defn} \begin{rem} {\it a.} The set $\Lambda$ is a collection of group characters in analogy to the Newtonian potential theoretic setting, e.g., \cite{beur1989}, pages 341-350, \cite{land1967}. {\it b.} The notion of balayage in potential theory is due to Christoffel (1871), e.g., see the remarkable book \cite{ButFeh1981}, edited by Butzer and Feh\'{e}r, and the article therein by Brelot. Then, Poincar\'{e} (1890 and 1899) used the idea of balayage as a method of solving the Dirichlet problem for the Laplace equation. Letting $D \subseteq {\mathbb R}^d$, $d\geq 3$, be a bounded domain, a balayage or sweeping of the measure $\mu = \delta_y$, $y \in D$, to $\partial D$ is a measure $\nu_y \in M_b(\partial D)$ whose Newtonian potential coincides outside of $D$ with the Newtonian potential of $\delta_y$.
In fact, $\nu_y$ is unique and is the harmonic measure on $\partial D$ for $y \in D$, e.g., \cite{kell1929}, \cite{dela1949}. One then formulates a more general balayage problem: for a given mass distribution $\mu$ inside a closed bounded domain $\overline{D} \subseteq {\mathbb R}^d$, find a mass distribution $\nu$ on $\partial D$ such that the potentials are equal outside $\overline{D}$ \cite{land1972}, cf. \cite{AdaHed1999}. Let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be a closed set. Define $$\mathcal{C}(\Lambda) = \{f \in C_{b}({\mathbb R}^d) : \text{supp}\,(\widehat{f}) \subseteq \Lambda\},$$ cf. the role of $\mathcal{C}(\Lambda)$ in \cite{shap1972}. \end{rem} \begin{defn}(Spectral synthesis) A closed set $\Lambda \subseteq \widehat{\mathbb{R}}^d$ is a set of \emph{spectral synthesis (S-set)} if \begin{equation} \label{eq:sset} \forall f \in \mathcal{C}(\Lambda) \text{ and } \forall \mu \in M_b(\mathbb{R}^d), \quad \widehat{\mu} = 0 \text{ on } \Lambda \Rightarrow \int f \,d\mu = 0, \end{equation} see \cite{bene1975}. \end{defn} \begin{rem} {\it a.} The problem of characterizing S-sets emanated from Wiener's Tauberian theorem ideas, and was developed by Beurling in the 1940s. It is ``synthesis'' in that one wishes to approximate $f \in L^{\infty}({\mathbb R}^d)$ in the $\sigma(L^\infty ({\mathbb R}^d), L^1 ({\mathbb R}^d))$ (weak-$\ast$) topology by finite sums of characters $\gamma: L^\infty ({\mathbb R}^d) \rightarrow {\mathbb C}$, where $\gamma$ can be considered an element of $\widehat{\mathbb{R}}^d$ and where $\text{supp}\, (\delta_\gamma ) \subseteq \text{supp}\,(\widehat{f})$, which is the so-called spectrum of $f$. Such an approximation is elementary to achieve by convolutions of the measures $\delta_\gamma$, but in this case we lose the essential property that the spectra of the approximants be contained in the spectrum of $f$.
It is a fascinating problem whose complete resolution is equivalent to the characterization of the ideal structure of $L^1({\mathbb R}^d)$, a veritable Nullstellensatz of harmonic analysis. {\it b.} We obtain the annihilation property of (\ref{eq:sset}) in the case that $f$ and $\mu$ have balancing smoothness and irregularity. For example, if $\widehat{f} \in D'(\widehat{\mathbb{R}}^d),\,\widehat{\mu} = \phi \in C_c^\infty (\widehat{\mathbb{R}}^d)$, and $\phi = 0$ on $\text{supp}\, (\widehat{f})$, then $\widehat{f}(\phi) = 0$, where $\widehat{f}(\phi)$ is sometimes written $\inner{\widehat{f}}{\phi}$. The sphere $S^2 \subseteq \widehat{\mathbb{R}}^3$ is not an S-set (Laurent Schwartz, 1947), and every non-discrete locally compact abelian group $\widehat{G}$, e.g., $\widehat{\mathbb{R}}^d$, contains non-S-sets (Paul Malliavin, 1959). On the other hand, polyhedra are S-sets, whereas the 1/3-Cantor set is an S-set with non-S-subsets. We refer to \cite{bene1975} for an exposition of the theory. \end{rem} \begin{defn}(Strict multiplicity) A closed set $\Gamma \subseteq \widehat{\mathbb{R}}^d$ is a set of \emph{strict multiplicity} if \begin{equation*} \exists \mu \in M_b(\Gamma)\setminus\{0\} \mbox{ such that } \lim_{\norm{x} \to \infty} |\mu^\vee (x) | = 0. \end{equation*} \end{defn} \begin{rem} The study of sets of strict multiplicity has its origins in Riemann's theory of sets of uniqueness for trigonometric series, see \cite{bary1964}, \cite{zygm1968}. An early, important, and difficult result is due to Menchov (1916): \begin{equation*} \exists \Gamma \subseteq \widehat{\mathbb{R}} / {\mathbb Z} \mbox{ and } \exists \mu \in M_b(\Gamma) \setminus \{0\} \mbox{ such that } |\Gamma| = 0 \mbox{ and } \mu^\vee (n) = O((\log |n|)^{-1/2}), |n| \rightarrow \infty. \end{equation*} ($|\Gamma|$ is the Lebesgue measure of $\Gamma$.)
There are refinements of Menchov's result, aimed at increasing the rate of decrease, due to Bary (1927), Littlewood (1936), Salem (1942, 1950), and Iva\v{s}ev-Musatov (1952, 1956). \end{rem} \subsection{Results of Beurling} The results in this subsection stem from 1959-1960, and the proofs are sometimes sophisticated, see \cite{beur1989}, pages 341-350. Throughout, $E \subseteq {\mathbb R}^d$ is closed and $\Lambda \subseteq \widehat{\mathbb{R}}^d$ is compact. The following is a consequence of the open mapping theorem. \begin{prop}\label{prop:110} Assume balayage is possible for $(E, \Lambda)$. Then \begin{equation*} \exists K >0 \text{ such that } \forall \mu \in M_b({\mathbb R}^d) , \, \inf \{ \norm{\nu}_1 : \nu \in M_b(E) \text{ and } \widehat{\nu} = \widehat{\mu} \text{ on } \Lambda \} \leq K \norm{\mu}_1 . \end{equation*} ($\norm{\cdot}_1$ designates the total variation norm.) \end{prop} The smallest such $K$ is denoted by $K(E, \Lambda)$, and we say that balayage is not possible if $K(E,\Lambda) = \infty$. In fact, \emph{if $\Lambda$ is a set of strict multiplicity, then balayage is possible for $(E,\Lambda)$ if and only if} $K(E, \Lambda) < \infty$, e.g., see Lemma 1 of \cite{beur1989}, pages 341-350. Let $J(E, \Lambda)$ be the smallest $J \geq 0$ such that \begin{equation*} \forall f \in \mathcal{C}(\Lambda) \text{, } \sup_{x \in {\mathbb R}^d} |f(x)| \leq J \sup_{x \in E} |f(x)|. \end{equation*} $J(E, \Lambda)$ could be $\infty$. The Riesz representation theorem is used to prove the following result. Part \emph{c} is a consequence of parts \emph{a} and \emph{b}. \begin{prop} {\it a.} If $\Lambda$ is a set of strict multiplicity, then $K(E, \Lambda) \leq J(E, \Lambda)$. {\it b.} If $\Lambda$ is an S-set, then $J(E,\Lambda) \leq K(E,\Lambda)$. {\it c.} Assume that $\Lambda$ is an S-set of strict multiplicity and that balayage is possible for $(E, \Lambda)$. If $f \in \mathcal{C}(\Lambda)$ and $f = 0$ on $E$, then $f$ is identically $0$.
\end{prop} \begin{prop} Assume that $\Lambda$ is an S-set of strict multiplicity. Then, balayage is possible for $(E, \Lambda)$ $\Leftrightarrow$ \begin{equation*} \exists K(E, \Lambda) > 0 \text{ such that } \forall f \in \mathcal{C}(\Lambda), \quad \norm{f}_{\infty}\leq K(E,\Lambda) \sup_{x \in E}|f(x)|. \end{equation*} \end{prop} The previous results are used in the intricate proof of Theorem \ref{theorem:balayage1}. \begin{thm}\label{theorem:balayage1} Assume that $\Lambda$ is an S-set of strict multiplicity, and that balayage is possible for $(E, \Lambda)$ and therefore $K(E, \Lambda) < \infty$. Let $\Lambda_\epsilon = \{ \gamma \in \widehat{\mathbb{R}}^d: \text{dist}\,(\gamma,\Lambda) \leq \epsilon \}$. Then, \begin{equation*} \exists \, \epsilon_0 > 0 \text{ such that } \forall \, 0 < \epsilon < \epsilon_0 \text{, } K(E,\Lambda_\epsilon) < \infty, \end{equation*} i.e., balayage is possible for $(E, \Lambda_\epsilon)$. \end{thm} The following result for ${\mathbb R}^d$ is not explicitly stated in \cite{beur1989}, pages 341-350, but it goes back to Beurling's 1959-1960 lectures, see \cite{wu1998}, Theorem E in \cite{land1967}, Landau's comment on its origins \cite{land2011}, and Example \ref{ex:fouierframebalayage}. In fact, using Theorem \ref{theorem:balayage1} and Ingham's theorem (Theorem \ref{thm:balayage3}), Beurling obtained Theorem \ref{theorem:balayage2}. We have chosen to state Ingham's theorem (Theorem \ref{thm:balayage3}) in Section 2 as a basic step in the proof of Theorem \ref{thm:balayage4}, which supposes Theorem \ref{theorem:balayage1} and which we chose to highlight as \textit{A fundamental identity of balayage} and in terms of its quantitative conclusion, (\ref{eq:JB6}) and (\ref{eq:JB7}). In fact, Theorem \ref{thm:balayage4} essentially yields Theorem \ref{theorem:balayage2}, see Example \ref{ex:fouierframebalayage}.
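Before turning to Beurling's frame theorem, the abstract frame inequalities and the reconstruction formula (\ref{eq:iso}) can be illustrated in a finite-dimensional model. The Python sketch below is our illustration only: an arbitrary random spanning family in $\mathbb{R}^4$ stands in for a frame, the optimal frame bounds are the extreme eigenvalues of the frame operator, and reconstruction uses the canonical dual frame.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 9
X = rng.standard_normal((N, d))      # rows x_n span R^d, hence form a frame

S = X.T @ X                           # frame operator: S f = sum <f, x_n> x_n
evals = np.linalg.eigvalsh(S)
A, B = evals[0], evals[-1]            # optimal frame bounds

fvec = rng.standard_normal(d)
energy = np.sum((X @ fvec) ** 2)      # sum_n |<f, x_n>|^2
norm2 = float(np.dot(fvec, fvec))     # ||f||^2

# Canonical dual frame reconstruction: f = sum_n <f, S^{-1} x_n> x_n
f_rec = X.T @ (X @ np.linalg.solve(S, fvec))
```

In infinite dimensions the point of Beurling's theory is precisely to certify, via balayage, that the lower bound $A$ is strictly positive for the exponential system $\mathcal{E}(E)$.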
\begin{defn} A sequence $E \subseteq {\mathbb R}^d$ is \emph{separated} if $$\exists \, r >0 \text{ such that } \inf \{\norm{x-y}: x, y \in E \text{ and } x \neq y \} \geq r.$$ \end{defn} \begin{thm}\label{theorem:balayage2} Assume that $\Lambda \subseteq \widehat{\mathbb{R}}^d$ is an S-set of strict multiplicity and that $E \subseteq {\mathbb R}^d$ is a separated sequence. If balayage is possible for $(E,\Lambda)$, then $\mathcal{E}(E)$ is a Fourier frame for $L^2(\Lambda)$, i.e., $\{(e_{-x} \ \mathbb{1}_{\Lambda})^\vee: x \in E \}$ is a Fourier frame for $PW_{\Lambda}$. \end{thm} \begin{example}\label{ex:1} The conclusion of Theorem \ref{theorem:balayage2} is the assertion $$ \forall f \in PW_\Lambda , \quad f = \sum_{x \in E}f(x)S^{-1}(f_x) = \sum_{x \in E}\inner{f}{S^{-1}(f_x)}f_x, $$ where $$ f_x(y) = (e_{-x} \ \mathbb{1}_{\Lambda} )^\vee(y) $$ and $$ S(f) = \sum_{x \in E} f(x) (e_{-x} \ \mathbb{1}_{\Lambda} )^\vee, $$ cf. (\ref{eq:sampling}) and (\ref{eq:iso}). Clearly, $f_x$ is a type of sinc function. Smooth sampling functions can be introduced into this setup, e.g., Theorem 7.45 of \cite{BenFra1994}, Chapter 7. \end{example} \begin{rem} Theorem \ref{theorem:balayage2} and results in \cite{beur1966} led to the Beurling covering theorem, see Section \ref{sec:covering}. \end{rem} \subsection{Outline} Now that we have described the background and recalled the required definitions from harmonic analysis and Beurling's fundamental theorems, we proceed to Section \ref{sec:balayageidentity}, where we state a basic theorem due to Ingham, as well as what we have called Beurling's fundamental identity of balayage. This result is a powerful technical tool that we use throughout. In Section \ref{sec:stft}, we prove two theorems that are the basis for our frame theoretic non-uniform sampling theory for the Short Time Fourier Transform (STFT).
The second of these theorems, Theorem \ref{thm:stft-frame}, is compared with an earlier result of Gr{\"o}chenig, which itself goes back to work of Feichtinger and Gr{\"o}chenig. Section \ref{sec:ex} is devoted to examples that we formulated as avenues for further development, integrating balayage with other theoretical notions. In Section \ref{sec:pdo} we prove the frame inequalities necessary to provide a non-uniform sampling formula for pseudo-differential operators defined by a specific class of Kohn-Nirenberg symbols. We view this as the basis for a much broader theory. Our last mathematical section, Section \ref{sec:covering}, is a brief recollection of Beurling's balayage results, but formulated in terms of covering criteria and due to a collaboration of one of the authors in the 1990s with Dr. Hui-Chuan Wu. Such coverings in terms of polar sets of given band width are a natural vehicle for extending the theory developed herein. Finally, in the Epilogue, we note the important related contemporary research being conducted in terms of quasicrystals, as well as other applications. \section{A fundamental identity of balayage} \label{sec:balayageidentity} By construction, and slightly paraphrased, Ingham \cite{ingh1934} proved the following result for the case $d = 1$, see \cite{beur1966}, page 115, for a modification which gives the $d>1$ case. In fact, Beurling gave a version for $d > 1$ in 1953; it is unpublished. In 1962, Kahane \cite{kaha1962} went into depth about the $d > 1$ case. \begin{thm} \label{thm:balayage3} Let $\epsilon > 0$ and let $\Omega:[0,\infty) \rightarrow (0, \infty)$ be a continuous function, increasing to infinity. Assume the following conditions: \begin{equation} \int_1^\infty \Omega(r) \,\frac{dr}{r^2}<\infty , \end{equation} \begin{equation} \int \exp(-\Omega(\norm{x}))\,dx <\infty , \end{equation} and $\Omega (r) > r^a$ on some interval $[r_0, \infty)$ and for some $a<1$.
Then, there is $h\in L^1({\mathbb R}^d)$ for which $h(0)=1$, $\text{supp}\, (\widehat{h})\subseteq \overline{B(0,\epsilon)}$, and $$|h(x)| = O(e^{-\Omega(\norm{x})}), \quad \norm{x}\rightarrow \infty .$$ \end{thm} Ingham also proved the converse, which, in fact, requires the Denjoy-Carleman theorem for quasi-analytic functions. If balayage is possible for $(E,\Lambda)$ and $E \subseteq {\mathbb R}^d$ is a closed sequence, e.g., if $E$ is separated, then Proposition \ref{prop:110} allows us to write $\widehat{\mu} = \sum_{x \in E} a_x(\mu)\widehat{\delta_x}$ on $\Lambda$, where $\sum_{x \in E}|a_x(\mu)|\leq K(E,\Lambda) \norm{\mu}_1$. In the case $\mu = \delta_y$, we write $a_x(\mu) = a_x(y)$. We refer to the following result as \emph{A fundamental identity of balayage}. \begin{thm} \label{thm:balayage4} Let $\Omega$ satisfy the conditions of Ingham's Theorem \ref{thm:balayage3}. Assume that $\Lambda$ is a compact S-set of strict multiplicity, that $E$ is a separated sequence, and that balayage is possible for $(E,\Lambda)$. Choose $\epsilon > 0$ from Beurling's Theorem \ref{theorem:balayage1} so that $K(E,\Lambda_\epsilon) < \infty$. For this $\epsilon > 0$, take $h$ from Ingham's Theorem \ref{thm:balayage3}. Then, we have \begin{equation}\label{eq:JB6} \forall y \in {\mathbb R}^d \text{ \rm{and} } \forall f \in \mathcal{C}(\Lambda), \quad f(y) = \sum_{x \in E} f(x) a_x(y) h(x-y), \end{equation} where \begin{equation}\label{eq:JB7} \sup_{y \in {\mathbb R}^d} \sum_{x \in E}|a_x(y)| \leq K(E,\Lambda_\epsilon) < \infty . \end{equation} In particular, we have $$\forall y \in {\mathbb R}^d \text{ \rm{and} }\forall \gamma \in \Lambda , \quad e^{2 \pi i y \cdot \gamma} = \sum_{x \in E}a_x(y)h(x-y)e^{2 \pi i x \cdot \gamma}.
$$ \begin{proof} Since balayage is possible for $(E,\Lambda_\epsilon)$, we have that $(\delta_y)^\wedge = (\sum_{x \in E}a_x(y)\delta_x)^\wedge$ on $\Lambda_\epsilon$ and that $$\sum_{x \in E}|a_x(y)| \leq K(E, \Lambda_\epsilon)\norm{\delta_y}_1$$ for each $y\in {\mathbb R}^d$. Thus, (\ref{eq:JB7}) is obtained. Next, for each fixed $y\in {\mathbb R}^d$, define the measure, $$\eta_y (w) = h_y(w)\left(\delta_y - \sum_{x \in E}a_x(y)\delta_x\right)(w) \in M_b({\mathbb R}^d),$$ where $h_y (w) = h(w-y)$. Then, we have \begin{align*} (\eta_y)^\wedge(\gamma) &= \left[ (h_y)^\wedge \ast \left(\delta_y - \sum_{x\in E}a_x(y)\delta_x \right)^\wedge \right](\gamma)\\ &= \int \widehat{h}(\gamma - \lambda) e^{-2 \pi i y \cdot (\gamma - \lambda)} \left(\delta_y - \sum_{x\in E}a_x(y)\delta_x \right)^\wedge (\lambda) \, d \lambda \\ &= \int_{(\Lambda_\epsilon)^c} \widehat{h}(\gamma - \lambda) e^{-2 \pi i y \cdot (\gamma - \lambda)}\left(\delta_y - \sum_{x\in E}a_x(y)\delta_x \right)^\wedge (\lambda) \, d \lambda \\ \end{align*} on $\widehat{\mathbb{R}}^d$. If $\gamma \in \Lambda$ and $\lambda \in (\Lambda_\epsilon)^c$, then $\widehat{h} (\gamma - \lambda) = 0$. Consequently, we obtain $$\forall y \in {\mathbb R}^d\text{ and }\forall \gamma \in \Lambda, \quad (\eta_y)^\wedge (\gamma) = 0.$$ Thus, since $\Lambda$ is an S-set and $h(0) = 1$, we obtain (\ref{eq:JB6}) from the definition of $\eta_y$. \end{proof} \end{thm} \begin{example}\label{ex:fouierframebalayage} Theorem \ref{thm:balayage4} can be used to prove Beurling's sufficient condition for a Fourier frame in terms of balayage (Theorem \ref{theorem:balayage2}), see part b. For convenience, let $\Lambda$ be symmetric about $0 \in \widehat{\mathbb{R}}^d$, i.e., $-\Lambda = \Lambda$. {\it a.} Using the notation of Theorem \ref{thm:balayage4}, we have the following estimate.
\begin{align*} \sum_{x \in E} \left|\int a_x(y)h(x-y)f(y)\,dy\right|^2 &\leq \sum_{x\in E} \int |a_x(y)||h(x-y)|^2 \,dy \int |a_x(y)||f(y)|^2 \,dy \\ &\leq C \norm{h}_2^2 \int \left(\sum_{x \in E}|a_x(y)|\right)|f(y)|^2\,dy \\ &\leq C \norm{h}_2^2 K(E, \Lambda_\epsilon) \norm{f}_2^2, \end{align*} where $C$ is a uniform bound of $\{|a_x(y)|: x \in E, y \in {\mathbb R}^d \}$. {\it b.} It is sufficient to prove the lower frame bound. Let $F \in L^2(\Lambda)$ be considered as an element of $(PW_\Lambda)^\wedge$, i.e., $\widehat{f} = F$ vanishes off of $\Lambda$ and $f \in L^2({\mathbb R}^d)$. We shall show that \begin{equation}\label{eq:JB8} A \norm{F}_{L^2(\Lambda)} \leq \left(\sum_{x \in E}|f(x)|^2 \right)^{1/2}, \end{equation} where $A$ is independent of $F\in L^2(\Lambda)$. \begin{align*} \norm{F}_{L^2(\Lambda)}^2 &= \int_{\Lambda}\overline{F(\lambda)}\left(\int f(y) e^{-2 \pi i y \cdot \lambda}\,dy\right)\,d\lambda \\ & = \int_{\Lambda}\overline{F(\lambda)}\left(\int f(y) \left(\sum_{x \in E}a_x(y) h(x-y)e^{-2 \pi i x \cdot \lambda}\right)\,dy \right)\,d\lambda \\ &= \sum_{x \in E}\overline{f(x)}\left(\int a_x(y)h(x-y)f(y)\,dy\right) \\ &\leq \left(\sum_{x \in E} |f(x)|^2\right)^{1/2}\left(\sum_{x\in E} \left|\int a_x(y)h(x-y)f(y)\, dy \right|^2\right)^{1/2} \\ &\leq \left[ C \norm{h}_2^2 K(E,\Lambda_\epsilon)\right]^{1/2}\left(\sum_{x\in E} |f(x)|^2 \right)^{1/2}\norm{f}_2, \end{align*} and so we set $A = 1 / [C\norm{h}_2^2 K(E, \Lambda_\epsilon)]^{1/2}$ to obtain (\ref{eq:JB8}). \end{example} \section{Short time Fourier transform (STFT) frame inequalities} \label{sec:stft} \begin{defn} \emph{a.} Let $f, g \in L^2(\mathbb{R}^{d})$.
The {\it short-time Fourier transform} (STFT) of $f$ with respect to $g$ is the function $V_{g}f$ on $\mathbb{R}^{2d}$ defined as \[ \quad V_{g}f(x, \omega) = \int f(t) \overline{g(t-x)} \ e^{- 2 \pi i t \cdot \omega } \ dt,\] \end{defn} \noindent see \cite{groc2001}, \cite{groc2006}.\\ \emph{b.} The STFT is uniformly continuous on $\mathbb{R}^{2d}$. Further, for a fixed ``window'' $g \in L^{2}(\mathbb{R}^{d})$ with $\|g\|_{2} = 1$, we can recover the original function $f \in L^{2}(\mathbb{R}^{d})$ from its STFT $V_{g}f$ by means of the vector-valued integral inversion formula, \begin{equation} \label{eq:InversionSTFT} f = \int \int V_{g}f(x, \omega) \ e_{\omega} \tau_{x} g \ d\omega \ dx, \end{equation} where modulation $e_{\omega}$ was defined earlier and translation $\tau_x$ is defined as $\tau_{x}g(t) = g(t-x)$. Explicitly, Equation (\ref{eq:InversionSTFT}) signifies that we have the vector-valued mapping, $(x,\omega) \mapsto e_{\omega} \tau_{x} g \in L^{2}(\mathbb{R}^{d})$, and \[\forall \ h \in L^{2}({\mathbb R}^d), \ \langle f, h \rangle = \int \int \left[ \int V_{g}f(x, \omega) (e_{\omega} \tau_{x} g(t) ) \overline{h(t)} \ dt \right] d\omega dx.\] Also, if $\widehat{f} = F$ and $\widehat{g} = G$, where $f, g \in L^{2}(\mathbb{R}^d)$, then one obtains the {\it fundamental identity of time-frequency analysis,} \begin{equation} \label{eq:time-frequency} V_{g}f(x,\omega) = e^{-2 \pi i x \cdot \omega } V_{G}F(\omega,-x). \end{equation} \emph{c.} Let $g_0(x) = 2^{d/4} e^{- \pi \| x \|^2 }.$ Then $G_0(\gamma) = \widehat{g}_0(\gamma) = 2^{d/4} e^{- \pi \| \gamma \|^2 }$ and $\| g_0 \|_2 = 1$, see \cite{BenCza2009} for properties of $g_0.$ The {\it Feichtinger algebra}, ${\mathcal S}_0({\mathbb R}^d),$ is \[ \mathcal{S}_0(\mathbb{R}^d) = \{ f \in L^2(\mathbb{R}^d) \colon \| f \|_{\mathcal{S}_0} = \| V_{g_0}f \|_1 < \infty \}.
\] For now it is useful to note that the Fourier transform is an isometric isomorphism of $\mathcal{S}_0(\mathbb{R}^d)$ onto $\mathcal{S}_0(\widehat{\mathbb{R}}^d)$, and, in particular, $f \in \mathcal{S}_0(\mathbb{R}^d)$ if and only if $F \in \mathcal{S}_0(\widehat{\mathbb{R}}^d)$. \begin{thm} \label{thm:1NonUniform} Let $E = \{x_n\}\subseteq {\mathbb R}^d$ be a separated sequence, that is symmetric about $0 \in \mathbb{R}^{d}$; and let $\Lambda \subseteq \widehat{\mathbb{R}}^{d}$ be an S-set of strict multiplicity, that is compact, convex, and symmetric about $0 \in \widehat{\mathbb{R}}^d$. Assume balayage is possible for $(E, \Lambda)$. Further, let $g \in L^2(\mathbb{R}^{d}), \,\widehat{g} = G,$ have the property that $\norm{g}_2 = 1$. {\it a.} We have that $$ \exists \ A > 0, \quad \text{such that } \quad \forall f \in PW_\Lambda \backslash \{0\}, \quad \widehat{f} = F, $$ \begin{equation} \label{eq:z} A \| f \|_{2}^{2} = A \|F \|_{2}^{2} \leq \sum_{x \in E} \int | V_{G} F(\omega, x)|^{2} \ d\omega = \sum_{x \in E} \int | V_{g}f(x, \omega)|^2 \ d\omega. \end{equation} {\it b.} Let $g \in \mathcal{S}_0(\mathbb{R}^d)$. We have that $$ \exists \ B > 0, \quad \text{such that } \quad \forall f \in PW_\Lambda \backslash \{0\}, \quad \widehat{f} = F, $$ \begin{equation} \label{eq:zz} \sum_{x \in E} \int | V_{g} f(x, \omega)|^{2} \ d\omega = \sum_{x \in E} \int | V_{G}F(\omega, -x)|^2 \ d\omega \leq B \| F \|_2^2 = B \| f \|_2^2, \end{equation} where $B$ can be taken as $2^{d/2}\ C \| V_{g_0}g \|_{1}^2$ and where $$ C = {\rm sup}_{u \in {\mathbb R}^d} \sum_{x \in E} e^{-\|x-u\|^2}; $$ see the technique in \cite{FeiZimm1998}, Lemma 3.2.15, cf. \cite{FeiSun2007}, Lemma 3.2.
\begin{proof} {\it a.i.} We first combine the STFT and balayage to compute \begin{eqnarray} \label{eqn:eqn4balayage} & & \| f \|_{2}^{2} = \int_{\Lambda} F(\gamma) \ \overline{F(\gamma) } \ d \gamma \\ & = & \int_{\Lambda} F(\gamma) \ \left( \int \int \overline{ V_{G}F(y, \omega)} \ \overline{\ e_{\omega}(\gamma)} \ \overline{ G(\gamma - y)} \ d\omega \ dy \right) \ d \gamma \nonumber \\ & = & \int_{\Lambda} F(\gamma) \ \left( \int \int \overline{ V_{G}F(y, \omega)} \ \overline{ G(\gamma - y)} \left( \ \sum_{x \in E} \overline{a_{x}(\omega)} \ \overline{h(x - \omega)} \ e^{ -2 \pi i x \cdot \gamma} \right) \ d\omega \ dy \right) \ d \gamma \nonumber \\ & = & \int \int \overline{ V_{G}F(y, \omega)} \ \left( \ \sum_{x \in E} \overline{a_{x}(\omega)} \ \overline{h(x - \omega)} \ \int F(\gamma) \ \overline{G(\gamma - y)} \ e^{- 2 \pi i x \cdot \gamma} \ d \gamma \right) \ d\omega \ dy \nonumber \\ & = & \int \int \overline{ V_{G}F(y, \omega)} \ \left( \ \sum_{x \in E} \overline{a_{x}(\omega)} \ \overline{h(x - \omega)} \ V_{G}F(y, x ) \right) d \omega \ dy \nonumber \\ & = & \int \left[ \ \sum_{x \in E} \left( \int \overline{ V_{G}F(y, \omega)} \ \overline{a_{x}(\omega)} \ \overline{ h(x - \omega)} \ d \omega \right) \ V_{G}F(y, x) \right] \ dy \nonumber \\ & \leq & \int \left( \sum_{x \in E} \left| \int a_{x}(\omega) \ h(x - \omega) \ V_{G}F(y, \omega) \ d \omega \right|^{2} \right)^{1/2} \left(\sum_{x \in E} \left| V_{G}F(y, x) \right|^{2} \right)^{1/2} \ dy . \nonumber \end{eqnarray} {\it a.ii.} We shall show that there is a constant $C > 0,$ independent of $f \in PW_{\Lambda}$, such that \begin{equation} \label{eq:eqn5cauchy} \forall \ y \in \mathbb{R}^{d}, \ \sum_{x \in E} \left| \int a_{x}(\omega) \ h(x - \omega) V_{G}F(y, \omega) \ d \omega \right|^{2} \ \leq C^2 \int \left| V_{G}F(y, \omega) \right|^{2} \ d\omega.
\end{equation} The left side of (\ref{eq:eqn5cauchy}) is bounded above by \begin{eqnarray*} & & \sum_{x \in E} \left( \int | a_{x}(\omega)| \ |h(x - \omega)|^{2} \ d\omega \right) \left(\int | a_{x}(\omega)| \ |V_{G}F(y, \omega)|^{2} \ d\omega \right) \\ & \leq & \sum_{x \in E} \left(K_{1} \ \int |h(x - \omega)|^{2} \ d\omega \right) \left(\int | a_{x}(\omega)| \ |V_{G}F(y, \omega)|^{2} \ d\omega \right) \\ & = & K_{1} \ \| h \|_2^2 \ \sum_{x \in E} \int | a_{x}(\omega)| \ |V_{G}F(y, \omega)|^{2} \ d\omega \\ & = & K_{1} \ \| h \|_2^2 \ \int \left(\sum_{x \in E} | a_{x}(\omega)| \right) |V_{G}F(y, \omega)|^{2} \ d\omega \\ & \leq & K_1 \ K_2 \ \| h \|_2^2 \ \int |V_{G}F(y, \omega)|^{2} \ d\omega, \end{eqnarray*} where we began by using H\"{o}lder's inequality and where $K_1$ and $K_2$ exist because of (\ref{eq:JB7}) in Theorem \ref{thm:balayage4}. Let $C^2 = K_1 K_2 \ \| h \|_2^2$. {\it a.iii.} Combining parts \emph{a.i} and \emph{a.ii}, we have from (\ref{eqn:eqn4balayage}) and (\ref{eq:eqn5cauchy}) that \begin{eqnarray*} & & \| f \|_2^{2} = \int_{\Lambda} F(\gamma) \ \overline{F(\gamma) } \ d \gamma \\ & \leq & \int \ C \ \left(\int |V_{G}F(y, \omega) |^{2} \ d\omega \right)^{1/2} \ \left(\sum_{x \in E} |V_{G}F(y, x) |^{2} \right)^{1/2} \ dy \\ & \leq & C \ \left(\int \int |V_{G}F(y, \omega) |^{2} \ d\omega \ dy \right)^{1/2} \ \left(\int \sum_{x \in E} |V_{G}F(y, x)|^{2} \ dy \right)^{1/2} \\ & = & C \ \left( \int_{\Lambda} |F(\gamma)|^2 \ d\gamma \right)^{1/2} \ \left(\int \sum_{x \in E} |V_{G}F(y, x)|^{2} \ dy \right)^{1/2}, \end{eqnarray*} where we have used H\"{o}lder's inequality and the fact that the STFT is an isometry from $L^2(\mathbb{R}^d)$ into $L^2(\mathbb{R}^{2d})$. 
Consequently, by the symmetry of $E$, we have \begin{eqnarray*} \frac{1}{C^2} \| f \|_2^{2} & = & \frac{1}{C^2} \ \int_{\Lambda} |F(\gamma)|^2 \ d\gamma \\ & \leq & \int \sum_{x \in E} | V_{G} F(\omega, -x)|^{2} \ d\omega = \int \sum_{x \in E} | V_{g}f(x, \omega)|^2 d \omega, \end{eqnarray*} where we have used ($\ref{eq:time-frequency}$). Part \emph{a} is completed by setting $A = 1/C^2.$ {\it b.i.} The proof of (\ref{eq:zz}) will require the reproducing formula \cite{FeiSun2007}, page 412: \begin{equation} \label{eq:FeiSun} V_{g}f(y, \gamma) = \langle V_{g_0}f, V_{g_0}( e_{\gamma}\tau_{y}g) \rangle, \end{equation} where $\widehat{g}_0 = G_0.$ Equation (\ref{eq:FeiSun}) is a consequence of the inversion formula, $$ f = \int \int V_{g_0}f(x, \omega) e_{\omega} \tau_x g_0 \ d\omega \ dx, $$ and substituting the right side into the definition $\langle f, e_{\gamma} \tau_y g \rangle$ of $V_gf(y, \gamma).$ Equation (\ref{eq:FeiSun}) is valid for all $f, g \in L^2(\mathbb{R}^d).$ {\it b.ii.} Using Equation (\ref{eq:FeiSun}) from part {\it b.i} we compute $$ \sum_{x \in E} \int | V_{g} f(x, \omega)|^{2} d\omega $$ $$ = \int \sum_{x \in E} |\langle V_{g_0}f, V_{g_0}( e_{\omega}\tau_{x}g) \rangle|^2 d\omega $$ $$ = \int \sum_{x \in E} |\int \int \overline{V_{g_0}f(y,\gamma)}\ V_{g_0}( e_{\omega}\tau_{x}g)(y,\gamma) \ dy\ d\gamma|^2 d\omega $$ $$ \leq \int \sum_{x \in E}\left(\left(\int \int|V_{g_0}f(y,\gamma)|^2|V_{g_0}(e_{\omega}\tau_{x}g)(y,\gamma)| \ dy\ d\gamma\right)\left(\int \int |V_{g_0}(e_{\omega}\tau_{x}g)(y,\gamma)|\ dy\ d\gamma\right)\right) d\omega. $$ {\it b.iii.} Since $$ V_{g_0}(e_{\omega}\tau_{x}g)(y,\gamma) = \int g(t-x)\ \overline{g_{0}(t-y)}\ e^{-2\pi i t\cdot (\gamma - \omega)}dt $$ $$ = e^{-2\pi i x\cdot (\gamma - \omega)}\ \int g(u)\ \overline{g_{0}(u+(x-y))}\ e^{-2\pi i u\cdot (\gamma - \omega)}du, $$ we have $$ |V_{g_0}(e_{\omega}\tau_{x}g)(y,\gamma)| \leq |V_{g_0}g(y-x, \gamma - \omega)|.
$$ Inserting this inequality into the last term of part {\it b.ii}, the inequality of part {\it b.ii} becomes $$ \sum_{x \in E} \int | V_{g} f(x, \omega)|^{2} \ d\omega $$ $$ \leq \int \sum_{x \in E}\left(\left(\int \int|V_{g_0}f(y,\gamma)|^2|V_{g_0}g(y-x,\gamma -\omega)| \ dy\ d\gamma \right) \left(\int \int |V_{g_0}g(y-x,\gamma - \omega)|\ dy\ d\gamma \right)\right) d\omega $$ $$ = \|V_{g_0}g\|_1\ \int \sum_{x \in E}\left(\int \int|V_{g_0}f(y,\gamma)|^2|V_{g_0}g(y-x,\gamma -\omega)| \ dy\ d\gamma\right)\ d\omega $$ $$ \leq \|V_{g_0}g\|_1\ \int \int|V_{g_0}f(y,\gamma)|^2\ \left(\int \sum_{x \in E} |V_{g_0}g(y-x,\gamma -\omega)|\ d\omega\right)\ dy\ d\gamma. $$ {\it b.iv.} By the reproducing formula, Equation (\ref{eq:FeiSun}), the integral-sum factor in the last term of part {\it b.iii} is $$ \int \sum_{x \in E} |V_{g_0}g(y-x,\gamma -\omega)|\ d\omega $$ $$ = \int \sum_{x \in E} |\int\int\ V_{g_0}g(z,\zeta)\ \overline{V_{g_0}(e_{\gamma - \omega}\tau_{y - x}g_0)(z,\zeta)}\ dz\ d\zeta|d\omega $$ $$ = \int \sum_{x \in E} |\int\int\ V_{g_0}g(z,\zeta)\ \left(\overline{\int g_{0}(u)\overline{g_{0}(u-(z+x-y))} \ e^{-2\pi i u\cdot (\zeta - \gamma + \omega)}\ du}\right) dz\ d\zeta|\ d\omega $$ $$ = \int \sum_{x \in E} |\int\int\ V_{g_0}g(z,\zeta)\ \overline{V_{g_0}g_{0}(z+(x-y), \zeta + (\omega - \gamma))}\ dz\ d\zeta|d\omega $$ $$ \leq \int \int |V_{g_0}g(z,\zeta)|\ \left(\int \sum_{x \in E} |V_{g_0}g_{0}(z+(x-y),\zeta +(\omega - \gamma))| d\omega\right)\ dz\ d\zeta.
$$
{\it b.v.} Substituting the last term of part {\it b.iv} in the last term of part {\it b.iii}, the inequality of part {\it b.ii} becomes
$$
\sum_{x \in E} \int | V_{g} f(x, \omega)|^{2} \ d\omega
$$
$$
\leq \|V_{g_0}g\|_1 \int\int|V_{g_0}f(y,\gamma)|^2 \times
$$
$$
\left(\int\int|V_{g_0}g(z,\zeta)| \left(\sum_{x \in E}\left(\int|V_{g_0}g_0(z+(x-y),\zeta + (\omega - \gamma))| d\omega \right)\right) dz\ d\zeta\right)dy\ d\gamma
$$
$$
= \|V_{g_0}g\|_1 \int\int|V_{g_0}f(y,\gamma)|^2\left(\int\int |V_{g_0}g(z,\zeta)|\left(\sum_{x \in E} K(x,y,z,\gamma,\zeta)\right)dz\ d\zeta\right)dy\ d\gamma,
$$
where
$$
K(x,y,z,\gamma,\zeta) = e^{-\frac{\pi}{2}\|z+(x-y)\|^2}\ \int e^{-\frac{\pi}{2}\|\zeta + (\omega - \gamma)\|^2}d\omega.
$$
Hence,
$$
\sum_{x \in E} \int | V_{g} f(x, \omega)|^{2} \ d\omega \leq 2^{\frac{d}{2}}\ C\ \|V_{g_0}g\|_{1}^2\ \|V_{g_0}f\|_{2}^2,
$$
where
$$
C = {\rm sup}_{u \in {\mathbb R}^d} \sum_{x \in E} e^{-\|x-u\|^2}.
$$
The fact, $C < \infty$, is straightforward to verify, but see \cite{NarWar1991} and \cite{NarSivWar1994}, Lemma 2.1, for an insightful, refined estimate of $C.$ The proof of part {\it b} is completed by a simple application of Equation (\ref{eq:STFT_orthogonality}).
\end{proof}
\end{thm}

We now recall a special case of a fundamental theorem of Gr{\"o}chenig for non-uniform Gabor frames, see \cite{groc1991}, Theorem S, and \cite{groc2001}, Theorem 13.1.1, cf. \cite{FeiGro1988} and \cite{FeiGro1989} for a precursor of this result, presented in an almost perfectly disguised way for the senior author to understand. The general case of Gr{\"o}chenig's theorem is true for the class of modulation spaces, $M_{v}^{1}({\mathbb R}^d),$ where the Feichtinger algebra, ${\mathcal S}_{0}({\mathbb R}^d),$ is the case that the weight $v$ is identically $1$ on ${\mathbb R}^d.$ Gr{\"o}chenig's proof at all levels of generalization involves a significant analysis of convolution operators on the Heisenberg group.
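Two Gaussian facts drive the final bound in part {\it b.v} above: $\int_{\mathbb{R}^d} e^{-\frac{\pi}{2}\|\omega\|^2}\, d\omega = 2^{d/2}$, and $C = {\rm sup}_u \sum_{x \in E} e^{-\|x-u\|^2} < \infty$ for separated $E$. A minimal numerical sketch; the choice $E = \mathbb{Z}^2$ and all grid parameters are illustrative assumptions:

```python
import numpy as np

# (i) int_R exp(-(pi/2) t^2) dt = sqrt(2), the d = 1 case of the 2^{d/2} factor.
t = np.linspace(-10, 10, 20001)
dt = t[1] - t[0]
integral_1d = np.exp(-0.5 * np.pi * t**2).sum() * dt
print(integral_1d)  # approximately sqrt(2) = 1.41421...

# (ii) C = sup_u sum_{x in E} exp(-||x - u||^2) for E = Z^2 (truncated);
#      by periodicity it suffices to take the sup over the cell [0, 1]^2.
n = np.arange(-20, 21)
X, Y = np.meshgrid(n, n)
C = max(
    np.exp(-((X - u1)**2 + (Y - u2)**2)).sum()
    for u1 in np.linspace(0, 1, 21)
    for u2 in np.linspace(0, 1, 21)
)
print(C)  # finite; close to pi for this particular lattice
```

The separated structure of $E$ is exactly what makes the sup finite; for denser and denser $E$ the constant blows up.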
See \cite{groc2001} for an authoritative exposition of modulation spaces as well as their history.

\begin{thm}
\langlebel{thm:groch}
Given $g \in \mathcal{S}_0(\mathbb{R}^d)$, there is $r = r(g) > 0$ such that if $E = \{(s_n, \sigma_n)\} \subseteq {\mathbb R}^d \times {\widehat{\mathbb R}}^d$ is a separated sequence with the property that
$$
\bigcup_{n=1}^{\infty} \overline{B((s_n,{\sigma}_n),r(g))} ={\mathbb R}^d \times {\widehat{\mathbb R}}^d,
$$
then the frame operator, $S = S_{g,E},$ defined by
$$
S_{g,E}\,f = {\sum}_{n=1}^{\infty}\langlengle f, {\tau}_{s_n}e_{\sigma_n}g\ranglengle\, {\tau}_{s_n}e_{\sigma_n}g,
$$
is invertible on $\mathcal{S}_0(\mathbb{R}^d)$. Moreover, every $f \in \mathcal{S}_0(\mathbb{R}^d)$ has a non-uniform Gabor expansion,
$$
f = {\sum}_{n=1}^{\infty} \langlengle f, \tau_{s_n} e_{\sigma_n} g \ranglengle S_{g,E}^{-1}(\tau_{s_n} e_{\sigma_n}g),
$$
where the series converges unconditionally in $ \mathcal{S}_0(\mathbb{R}^d)$. ($E$ depends on $g.$)
\end{thm}

The following result can be compared with Theorem \ref{thm:groch}.

\begin{thm}
\langlebel{thm:stft-frame}
Let $E = \{(s_{n}, \sigma_{n})\} \subseteq \mathbb{R}^d \times \widehat{\mathbb{R}}^d $ be a separated sequence; and let $\Lambda \subseteq \widehat{\mathbb{R}}^d \times \mathbb{R}^{d}$ be an S-set of strict multiplicity that is compact, convex, and symmetric about $0 \in \widehat{\mathbb{R}}^d \times \mathbb{R}^{d}.$ Assume balayage is possible for $(E, \Lambda)$. Further, let $g \in L^2(\mathbb{R}^{d}),\, \widehat{g} = G,$ have the property that $\norm{g}_2 = 1$. We have that
$$
\exists \ A, \ B > 0, \quad \text{ such that } \quad \forall f \in \mathcal{S}_0({\mathbb R}^d), \quad \text{for which } \quad {\rm supp}(\widehat{V_gf}) \subseteq \Lambda,
$$
\begin{equation}
\langlebel{eq:stft-frame}
A \norm{f}_2^2 \leq \, {\sum}_{n=1}^{\infty} | V_{g} f(s_n, \sigma_n)|^{2} \leq B \norm{f}_2^2.
\end{equation}
Consequently, the frame operator, $S = S_{g,E},$ is invertible, in the $L^2({\mathbb R}^d)$ norm, on the subspace of $\mathcal{S}_0(\mathbb{R}^d)$ whose elements $f$ have the property, ${\rm supp}\,(\widehat{V_gf}) \subseteq \Lambda.$ Moreover, every $f \in \mathcal{S}_0(\mathbb{R}^d)$ satisfying the support condition, ${\rm supp}(\widehat{V_gf}) \subseteq \Lambda,$ has a non-uniform Gabor expansion,
$$
f = {\sum}_{n=1}^{\infty} \langlengle f, \tau_{s_n} e_{\sigma_n} g \ranglengle S_{g,E}^{-1}(\tau_{s_n} e_{\sigma_n}g),
$$
where the series converges unconditionally in $L^2(\mathbb{R}^d)$. ($E$ does not depend on $g.$)
\begin{proof}
{\it a.} Using Theorem \ref{thm:balayage4} for the setting $ \mathbb{R}^d \times \widehat{\mathbb{R}}^d$, where $h \in L^1(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)$ from Ingham's theorem has the property that ${\rm supp}(\widehat{h}) \subseteq \overline{B(0, \epsilon)} \subseteq \widehat{\mathbb{R}}^d \times \mathbb{R}^{d},$ we compute
\begin{equation}
\langlebel{eq:moyal-beurl}
\int | f(x) |^2 \ dx = \int \int | V_gf(y, \omega) |^2 \ dy \ d\omega
\end{equation}
$$
= \int \int \overline{V_{g}f(y, \omega)} {\sum}_{n=1}^{\infty} a_{s_n,{\sigma}_n}(y, \omega) h(s_n - y, \sigma_n - \omega) V_gf(s_n, \sigma_n) \ dy \ d\omega,
$$
where
$$
V_gf(y, \omega) = {\sum}_{n=1}^{\infty} a_{s_n,{\sigma}_n}(y, \omega) h(s_n - y, \sigma_n - \omega) V_gf(s_n, \sigma_n)
$$
and
$$
{\rm sup}_{(y,\omega) \in {\mathbb R}^d \times \widehat{\mathbb R}^d}\, \sum_{n=1}^{\infty} |a_{s_n,{\sigma}_n}(y, \omega)| \leq K(E,{\Lambda}_\epsilon) < \infty.
$$
Interchanging summation and integration on the right side of Equation (\ref{eq:moyal-beurl}), we use H{\"o}lder's inequality to obtain
$$
\int | f(x) |^2 \ dx \leq
$$
\begin{equation}
\langlebel{eq:prodVbeurl}
\left(\sum_{n=1}^{\infty} |V_{g}f(s_n,{\sigma}_n)|^2\right)^{1/2}\ \left(\sum_{n=1}^{\infty}|\int\int a_{s_n,{\sigma}_n}(y,\omega) h(s_n - y, {\sigma}_n - \omega)\ \overline{V_{g}f(y,\omega)}\ dy\ d{\omega}|^2\right)^{1/2}
\end{equation}
$$
\leq {S_1}^{1/2}\ {S_2}^{1/2},
$$
where $S_1$ and $S_2$ denote the two sums in (\ref{eq:prodVbeurl}). We bound the second sum $S_2$ using H{\"o}lder's inequality for the integrand,
$$
[(a_{s_n,{\sigma}_n}(y,\omega))^{1/2}h(s_n - y, {\sigma}_n - \omega)] [(a_{s_n,{\sigma}_n}(y,\omega))^{1/2} \overline{V_{g}f(y,\omega)}],
$$
as follows:
$$
S_2 \leq \sum_{n=1}^{\infty} \left(\int \int|a_{s_n,{\sigma}_n}(y,\omega)||h(s_n - y, {\sigma}_n - \omega)|^2\,dy\ d{\omega}\ \int \int|a_{s_n,{\sigma}_n}(y,\omega)||V_{g}f(y,\omega)|^2\,dy\,d{\omega}\right)
$$
\begin{equation}
\langlebel{eq:S2ineq}
\leq K_1\ \sum_{n=1}^{\infty} \left(\int\int |h(s_n - y, {\sigma}_n -\omega)|^2\,dy\,d{\omega}\,\int\int |a_{s_n,{\sigma}_n}(y,\omega)||V_{g}f(y,\omega)|^2\,dy\,d{\omega}\right)
\end{equation}
$$
= K_1\ \norm{h}_2^2 \int\int \left(\sum_{n=1}^{\infty} |a_{s_n,{\sigma}_n}(y,\omega)||V_{g}f(y,\omega)|^2\right)\ dy\ d{\omega}\:\leq\: K_1K_2 \norm{h}_2^2\norm{f}_2^2,
$$
where $K_1$ is a uniform bound on $\{a_{s_n,{\sigma}_n}(y,\omega)\},$ $K_2$ invokes the full power of Theorem \ref{thm:balayage4}, and $\norm{f}_2^2 = \norm{V_gf}_2^2.$ Combining (\ref{eq:prodVbeurl}) and (\ref{eq:S2ineq}), we obtain
$$
\norm{f}_2^2 \leq (S_1K_1K_2)^{1/2}\norm{h}_2\norm{f}_2,
$$
and so the left hand inequality of (\ref{eq:stft-frame}) is valid with $A = 1/(K_1K_2\norm{h}_2^2).$

{\it b.} The right hand inequality of (\ref{eq:stft-frame}) follows directly from the Plancherel-P{\'o}lya theorem, cf. Theorem \ref{thm:1NonUniform}{\it b}.
\end{proof}
\end{thm}

\begin{example}
\langlebel{ex:suppFT}
{\it a.} In comparing Theorem \ref{thm:groch} with Theorem \ref{thm:stft-frame}, a possible weakness of the former is the dependence of $E$ on $g,$ whereas a possible weakness of the latter is the hypothesis that ${\rm supp}(\widehat{V_{g}f}) \subseteq \Lambda.$ We now show that this latter constraint is of no major consequence. Let $f,g \in L^{1}({\mathbb R}^d) \cap L^{2}({\mathbb R}^d).$ We know that $V_{g}f \in L^{2}({\mathbb R}^d \times \widehat{{\mathbb R}}^d),$ and
$$
\widehat{V_{g}f}(\zeta,z) = \int \int \left(\int f(t)\ g(t-x)\ e^{-2 \pi i t \cdot \omega}\ dt\right)\ e^{-2 \pi i(x \cdot \zeta + z \cdot \omega)}\ dx\ d{\omega}.
$$
The right side is
$$
\int \int f(t)\ \left(\int g(t-x)\ e^{-2 \pi i x \cdot \zeta}\ dx\right)\ e^{-2 \pi i t\cdot \omega} \ e^{-2 \pi i z \cdot \omega}\ dt\ d{\omega},
$$
where the interchange in integration follows from the Fubini-Tonelli theorem and the hypothesis that $f,g \in L^{1}({\mathbb R}^d).$ This, in turn, is
$$
\hat{g}(-\zeta)\ \int \left(\int f(t)\ e^{-2 \pi i t\cdot \zeta}\ e^{-2 \pi i t\cdot \omega}\ dt\right)\ e^{-2 \pi i z \cdot \omega}\ d{\omega}
$$
$$
= \hat{g}(-\zeta)\ \int \hat{f}(\zeta + \omega)\ e^{-2 \pi i z \cdot \omega}\ d{\omega} = e^{2 \pi i z \cdot \zeta}\ f(-z)\ \hat{g}(-\zeta).
$$
Consequently, we have shown that
\begin{equation}
\langlebel{eqn:suppFT}
f,g \in L^{1}({\mathbb R}^d) \cap L^{2}({\mathbb R}^d) \implies \widehat{V_{g}f}(\zeta,z) = e^{2 \pi i z \cdot \zeta}\ f(-z)\ \hat{g}(-\zeta).
\end{equation}
Let $d=1$ and let $\Lambda = [-\Omega,\Omega] \times [-T,T] \subseteq \widehat{{\mathbb R}}^d \times {\mathbb R}^d.$ We can choose $g \in PW_{[-\Omega,\Omega]},$ where $\hat{g}$ is even and smooth enough so that $g \in L^{1}(\mathbb R).$ For this window $g,$ we take any even $f \in L^{2}(\mathbb R)$ which is supported in $[-T,T].$ Equation (\ref{eqn:suppFT}) applies.
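The modulus version of this formula, $|\widehat{V_{g}f}(\zeta,z)| = |f(-z)|\,|\hat{g}(-\zeta)|$, which is what the support statement uses, can be checked numerically. A sketch for $d = 1$ with $f = g$ a Gaussian, using the conjugate-free convention of the computation above; the grids and test points are illustrative assumptions:

```python
import numpy as np

f = lambda s: np.exp(-np.pi * s**2)      # f = g = Gaussian, so fhat = f as well

t = np.linspace(-6, 6, 601); dt = t[1] - t[0]
x = np.linspace(-6, 6, 121); dx = x[1] - x[0]
w = np.linspace(-6, 6, 121); dw = w[1] - w[0]

# V_gf(x, w) = int f(t) g(t - x) e^{-2 pi i t w} dt, discretized on the (x, w) grid
G = f(t)[:, None] * f(t[:, None] - x[None, :])
P = np.exp(-2j * np.pi * np.outer(t, w))
V = G.T @ P * dt

# compare |(V_gf)^(zeta, z)| with |f(-z)| |ghat(-zeta)| at a few points
max_err = 0.0
for zeta, z in [(0.0, 0.0), (0.3, 0.5), (1.0, -0.7)]:
    phase = np.outer(np.exp(-2j * np.pi * x * zeta), np.exp(-2j * np.pi * w * z))
    What = (V * phase).sum() * dx * dw
    max_err = max(max_err, abs(abs(What) - f(-z) * f(-zeta)))
print(max_err)  # tiny: the two moduli agree
```

Since the Gaussian is even and Fourier self-dual, $|f(-z)|\,|\hat{g}(-\zeta)| = e^{-\pi(z^2 + \zeta^2)}$ here, so the decay of $\widehat{V_gf}$ in $(\zeta, z)$ is visible directly.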
{\it b.} Theorems \ref{thm:groch} and \ref{thm:stft-frame} give non-uniform Gabor frame expansions. Generally, for $g \in L^2({\mathbb R})$, if $\{e_{\sigma_n}{\tau_{s_n}}g\}$ is a frame for $L^2({\mathbb R}),$ then $E = \{(s_n,\sigma_n)\} \subseteq {\mathbb R} \times \widehat{\mathbb R}$ is a finite union of separated sequences and $D^{-}(E) \geq 1,$ where $D^{-}$ denotes the lower Beurling density, \cite{ChrDenHei1999}. (Beurling density has been analyzed deeply in terms of Fourier frames, e.g., \cite{beur1989}, \cite{land1967}, \cite{jaff1991}, and \cite{seip1995}, and it is defined as
$$
D^{-}(E) = {\rm lim}_{r \rightarrow \infty}\ \frac{n^{-}(r)}{r^2},
$$
where $n^{-}(r)$ is the minimal number of points from $E \subseteq {\mathbb R} \times \widehat{\mathbb R}$ in a ball of radius $r/2$.) For perspective, in the case of $\{e_{mb}{\tau}_{na}g : m,n \in {\mathbb Z}\}$, this necessary condition is equivalent to the condition $ab \leq 1.$ It is also well-known that if $ab > 1,$ then $\{e_{mb}{\tau}_{na}g : m,n \in {\mathbb Z}\}$ is not complete in $L^2({\mathbb R}).$ As such, one might expect $\{e_{\sigma_n}{{\tau}_{s_n}}g\}$ to be incomplete whenever $D^{-}(E) < 1;$ however, this is not the case, as has been shown by explicit construction, see \cite{BenHeiWal1995}, Theorem 2.6. Other sparse complete Gabor systems have been constructed in \cite{rome2002} and \cite{wang2004}.
\end{example}

\begin{example}
\langlebel{ex:exXmu}
{\it a.} Let $(X, \mathcal{A}, \mu)$ be a measure space, i.e., $X$ is a set, $\mathcal{A}$ is a $\sigma-$algebra in the power set $\mathcal{P}(X)$, and $\mu$ is a measure on $\mathcal{A}$, see \cite{BenCza2009}. Let $H$ be a complex, separable Hilbert space. Assume
$$
\mathcal{F} \colon X \rightarrow H
$$
is a weakly measurable function in the sense that for each $f \in H,$ the complex-valued mapping $x \mapsto \langlengle f, \mathcal{F}(x)\ranglengle$ is measurable.
$\mathcal{F}$ is a $(X, \mathcal{A}, \mu)$--{\it frame} for $H$ if \[ \exists \ A, B > 0 \mbox{ such that } \forall \ f \in H,\quad A \| f \|^2 \leq \int_X | \langlengle f, \mathcal{F}(x) \ranglengle |^2 \ d\mu(x) \leq B \| f \|^2. \] Typically, $\mathcal{A}$ is the Borel algebra $\mathcal{B}(\mathbb{R}^d)$ for $X = \mathbb{R}^d$ and $\mathcal{A} = \mathcal{P}(\mathbb{Z})$ for $X = \mathbb{Z}.$ In these cases we use the terminology, $(X, \mu)$-frame. {\it b.} Continuous and discrete wavelet and Gabor frames are special cases of $(X, \mathcal{A}, \mu)$-frames and could have been formulated as such from the time of \cite{daub1992} (1986) and \cite{HeiWal1989} (1989). In mathematical physics the idea was introduced in \cite{kais1990}, \cite{AliAntGaz1993}, and \cite{AliAntGaz2000}. Recent mathematical contributions are found in \cite{GabHan2003} and \cite{ForRau2005}. $(X, \mathcal{A}, \mu)$-frames are sometimes referred to as {\it continuous frames.} Also, in a slightly more concrete way we could have let $X$ be a locally compact space and $\mu$ a positive Radon measure on $X$. {\it c.} Let $X = \mathbb{Z}, \mathcal{A} = \mathcal{P}(\mathbb{Z})$, and $\mu = c,$ where $c$ is counting measure, $c(Y) = \mbox{card}(Y)$. Define $\mathcal{F}(n) = x_n \in H, n \in \mathbb{Z},$ for a given complex, separable Hilbert space, $H.$ We have $$ \forall \ f \in H, \ \int_{\mathbb{Z}} | \langlengle f, x_n \ranglengle |^2 \ d\ c(n) = \sum_{n \in \mathbb{Z}} \int_{\{n\}} | \langlengle f, x_n \ranglengle |^2 \ d\ c(n) = \sum_{n \in \mathbb{Z}} | \langlengle f, x_n \ranglengle |^2. $$ Thus, {\it frames} $\{ x_n \}$ for H, as defined in Definition \ref{defn:frame}, are $(\mathbb{Z}, \mathcal{P}(\mathbb{Z}), c)$--frames. For the present discussion we also refer to them as {\it discrete frames.} {\it d.} Let $X = \mathbb{R}^d, \mathcal{A} = \mathcal{B}(\mathbb{R}^d)$, and $\mu = p$ a probability measure, i.e. 
$p(\mathbb{R}^d) = 1$; and let $H = \mathbb{R}^d.$ The measure $p$ is a {\it probabilistic frame} for $H = \mathbb{R}^d$ if
$$
\exists \ A, B > 0 \mbox{ such that } \forall \ x \in \mathbb{R}^d \ (= H),\quad A \| x \|^2 \leq \int_X | \langlengle x, y \ranglengle |^2 \ d\ p(y) \leq B \| x \|^2,
$$
see \cite{ehle2012}, \cite{EhlOko2013}. Define
$$
\mathcal{F} \colon X = \mathbb{R}^d \rightarrow H = \mathbb{R}^d
$$
by $\mathcal{F}(x) = x \in \mathbb{R}^d.$ Suppose $\mathcal{F}$ is a $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d), p)$-frame for $H = \mathbb{R}^d.$ Then
$$
\forall \ x \in H, \quad A \| x \|^2 \leq \int_X | \langlengle x, y \ranglengle |^2 \ d \ p(y) \leq B \| x \|^2,
$$
and this is precisely the same as saying that $p$ is a probabilistic frame for $H = \mathbb{R}^d.$ Suppose we try to generalize probabilistic frames to the setting in which $X$ is locally compact as well as being a vector space, the vector space structure being natural for probabilistic applications. This simple extension cannot be effected since Hausdorff, locally compact vector spaces are, in fact, finite dimensional (F. Riesz).

{\it e.} Let $(X, \mathcal{A}, \mu)$ be a measure space and let $H$ be a complex, separable Hilbert space. A {\it positive operator-valued measure} ($POVM$) is a function $\pi \colon \mathcal{A} \rightarrow \mathcal{L}(H),$ where $\mathcal{L}(H)$ is the space of bounded linear operators on $H$, such that $\pi(\emptyset) = 0, \pi(X) = I$ (Identity), $\pi(A)$ is a positive, and therefore self-adjoint (since $H$ is a complex vector space), operator on $H$ for each $A \in \mathcal{A},$ and
$$
\forall \ \mbox{ disjoint } \{A_j \}_{j=1}^{\infty} \subseteq \mathcal{A}, \quad x, y \in H \implies \langlengle \pi \left(\cup_{j=1}^{\infty} A_j \right) x, y \ranglengle = \sum_{j=1}^{\infty} \langlengle \pi (A_j) x, y \ranglengle.
$$
$POVMs$ are a staple in quantum mechanics, see \cite{AliAntGaz2000}, \cite{BenKeb2008} for rationale and references.
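The POVM axioms just listed can be sanity-checked in the simplest finite-dimensional setting, where a 1-tight discrete frame $\{x_n\}$ gives $\pi(A)x = \sum_{n \in A} \langlengle x, x_n \ranglengle x_n$. The following sketch rests on illustrative assumptions: $H = \mathbb{R}^2$, $X = \{0, 1, 2\}$, $\mathcal{A} = \mathcal{P}(X)$, and the scaled Mercedes-Benz frame:

```python
import numpy as np

# A 1-tight frame for R^2: sqrt(2/3) times three equally spaced unit vectors.
angles = 2 * np.pi * np.arange(3) / 3
frame = np.sqrt(2 / 3) * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def pi_op(A):
    """pi(A) = sum_{n in A} <., x_n> x_n, realized as a 2x2 matrix."""
    if not A:
        return np.zeros((2, 2))
    return sum(np.outer(frame[n], frame[n]) for n in A)

# pi(X) = I (this is exactly 1-tightness) and pi(empty) = 0
assert np.allclose(pi_op({0, 1, 2}), np.eye(2))
assert np.allclose(pi_op(set()), np.zeros((2, 2)))
# each pi(A) is positive (symmetric with nonnegative spectrum)
for A in [{0}, {0, 1}, {1, 2}]:
    M = pi_op(A)
    assert np.allclose(M, M.T) and np.linalg.eigvalsh(M).min() >= -1e-12
# additivity over disjoint sets
assert np.allclose(pi_op({0}) + pi_op({1, 2}), pi_op({0, 1, 2}))
print("POVM axioms verified for the 1-tight frame")
```

The only axiom that uses tightness is $\pi(X) = I$; positivity and additivity hold for any finite frame.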
If $\{x_n\} \subseteq H$ is a 1-tight discrete frame for $H$, then it is elementary to see that the formula,
$$
\forall \ x \in H \mbox{ and } \forall \ A \in \mathcal{P}(\mathbb{Z}), \ \pi(A) x = \sum_{n \in A} \langlengle x, x_n \ranglengle x_n,
$$
defines a $POVM.$ Conversely, if $H = \mathbb{C}^d$ and $\pi$ is a $POVM$ for $X$ countable, then by the spectral theorem there is a corresponding 1-tight discrete frame. This relationship between tight frames and $POVMs$ extends to more general $(X, \mathcal{A}, \mu)$-frames, e.g., \cite{AliAntGaz2000}, Chapter 3. In this setting, and related to {\it probability of quantum detection error,} $P_e$, which is defined in terms of $POVMs,$ Kebo and one of the authors have proved the following for $H = \mathbb{C}^d, \{ y_j \}_{j=1}^{N} \subseteq H,$ and $\{\rho_j > 0 \}_{j=1}^N, \sum_{j=1}^N \rho_j = 1 \colon$ there is a 1-tight discrete frame $\{x_n\}_{n=1}^N \subseteq H$ for $H$ that minimizes $P_e$, \cite{BenKeb2008}, Theorem A.2.

{\it f.} Let $X = \mathbb{R}^{2d}$ and let $H = L^2(\mathbb{R}^d)$. Given $g \in L^2(\mathbb{R}^d)$, define the function
\begin{align*}
\mathcal{F} \colon \mathbb{R}^{2d} & \rightarrow L^2( \mathbb{R}^d) \\
(x, \omega) & \mapsto e^{2 \pi i t \cdot \omega} \ g(t - x).
\end{align*}
$\mathcal{F}$ is an $(\mathbb{R}^{2d}, \mathcal{B}(\mathbb{R}^{2d}), m)$-frame for $L^2(\mathbb{R}^{d}),$ where $m$ is Lebesgue measure on $\mathbb{R}^{2d}$; and, in fact, it is a tight frame for $L^2(\mathbb{R}^{d})$ with frame constant $A = B = \| g \|_2^2.$ To see this we need only note the following consequence of the orthogonality relations for the $STFT$:
\begin{equation}
\langlebel{eq:STFT_orthogonality}
\| V_gf \|_2 = \| g \|_{L^2(\mathbb{R}^{d}) } \| f \|_{ L^2(\mathbb{R}^{d})}.
\end{equation}
Equation (\ref{eq:STFT_orthogonality}) is also used in the proof of (\ref{eq:InversionSTFT}).
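The tight-frame claim of part {\it f}, i.e., Equation (\ref{eq:STFT_orthogonality}), can be checked by discretizing the STFT. A sketch for $d = 1$; the window, signal, and grids below are illustrative assumptions:

```python
import numpy as np

t = np.linspace(-6, 6, 601); dt = t[1] - t[0]
g = np.exp(-np.pi * t**2)              # Gaussian window
f = t * np.exp(-np.pi * t**2)          # an odd test signal

x = np.linspace(-6, 6, 121); dx = x[1] - x[0]
w = np.linspace(-6, 6, 121); dw = w[1] - w[0]

# V_g f(x, w) = int f(t) conj(g(t - x)) e^{-2 pi i t w} dt  (g is real here)
G = f[:, None] * np.exp(-np.pi * (t[:, None] - x[None, :])**2)
P = np.exp(-2j * np.pi * np.outer(t, w))
V = G.T @ P * dt

lhs = (np.abs(V)**2).sum() * dx * dw          # ||V_g f||_{L^2(R^2)}^2
rhs = (g**2).sum() * dt * (f**2).sum() * dt   # ||g||_2^2 ||f||_2^2
print(lhs, rhs)  # the two sides agree to high accuracy
```

Dividing by $\| g \|_2^2$ shows the continuous Gabor system is a tight $(X, \mu)$-frame with constant $\| g \|_2^2$, exactly as asserted.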
{\it g.} Clearly, Theorems \ref{thm:1NonUniform}, \ref{thm:groch}, and \ref{thm:stft-frame} can be formulated in terms of $(X,\mu)$--frames.
\end{example}

\section{Examples and modifications of Beurling's method}
\langlebel{sec:ex}

\subsection{Generalizations of Beurling's Fourier frame theorem}
\langlebel{sec:123-weights}

Using more than one measure, we can extend Theorem \ref{theorem:balayage2} to more general types of Fourier frames. For clarity we give the result for three simple measures.

\begin{lem}\langlebel{lemma:Fourier1}
Assume the notation and hypotheses of Theorems \ref{thm:balayage3} and \ref{thm:balayage4}. Then,
\[ \forall f \in PW_{\Lambda}\setminus \{0\},\, \widehat{f} = F,\]
\[ \sum_{x \in E} \left| \int a_{x}(y) h(x-y) f(y) \,dy \right|^2 \leq [K(E,\Lambda_{\epsilon}) \norm{h}_2]^2 \int_{\Lambda} |F(\gamma)|^2 \ d\gamma.\]
\begin{proof}
We compute:
\begin{align*}
& \sum_{x \in E} \left|\int a_x(y)h(x-y)f(y)\,dy \right|^2 \\
&\leq \sum_{x\in E} \left| \left(\int | a_x(y)^{1/2} h(x-y)|^2 \,dy\right)^{1/2} \left(\int |a_x(y)^{1/2} f(y)|^2 \,dy \right)^{1/2} \right|^{2}\\
&\leq \sup_{x\in E} \left( \int | a_x(y) | | h(x-y)|^2 \,dy\right) \left(\sum_{x\in E} \int |a_x(y)| | f(y)|^2 \,dy \right)\\
&\leq K(E, \Lambda_\epsilon) \sup_{x\in E} \left(\int |a_{x}(y)| |h(x-y)|^2 \,dy \right) \int_{\Lambda} |F(\gamma)|^2\,d\gamma \\
&\leq K(E, \Lambda_\epsilon)^{2} \norm{h}_2^2\int_{\Lambda} |F(\gamma)|^2\,d\gamma,
\end{align*}
where we have used the Plancherel theorem to obtain the third inequality.
\end{proof}
\end{lem}

\begin{thm}
\langlebel{theorem:Generalized_fourier_frames}
Let $E =\{x_n\} \subseteq {\mathbb R}^d$ be a separated sequence, and let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be a compact S-set of strict multiplicity. Assume that $\Lambda$ is a compact, convex set that is symmetric about $0 \in \widehat{\mathbb{R}}^d$.
If balayage is possible for $(E, \Lambda)$, then \[\exists \ A, B > 0 \text{ \rm{ such that} } \forall f \in PW_{\Lambda}\setminus \{0\},\, F = \widehat{f},\] \begin{align}\begin{split}\langlebel{eq:a} & A^{1/2} \frac{\int_{\Lambda} |F(\gamma) + F(2 \gamma) + F(3 \gamma)|^2 \ d\gamma}{\left( \int_{\Lambda} |F(\gamma)|^{2} \ d \gamma \right)^{1/2}}\\ &\leq \left(\sum_{x \in E} |f(x)|^{2}\right)^{1/2} + \frac{1}{2}\left(\sum_{x \in E} |f(\frac{1}{2} x)|^{2}\right)^{1/2} + \frac{1}{3}\left(\sum_{x \in E} |f(\frac{1}{3} x)|^{2}\right)^{1/2} \\ &\leq \ B^{1/2} \left(\int_{\Lambda} |F(\gamma)|^2 \,d\gamma \right)^{1/2}. \end{split} \end{align} \begin{proof} By hypothesis, we can invoke Theorem \ref{theorem:balayage1} to choose $\epsilon > 0$ so that balayage is possible for $(E, \Lambda_{\epsilon})$, i.e., $K(E, \Lambda_{\epsilon}) < \infty$. For this $\epsilon > 0$ and appropriate $\Omega,$ we use Theorem \ref{thm:balayage3} to choose $h \in L^{1}(\mathbb{R}^{d})$ for which $h(0) = 1, \text{supp}\,(\widehat{h}) \subseteq \overline{B(0,\epsilon)},$ and $|h(x)| = O(e^{- \Omega(\|x\|)} ), \|x\| \rightarrow \infty.$ Therefore, for a fixed $y \in \mathbb{R}^{d}$ and $g \in \mathcal{C}(\Lambda)$, Theorem \ref{thm:balayage4} allows us to assert that \begin{eqnarray*} & & g(y) + g(2y) + g(3y) \\ & = & \sum_{x \in E} g(x) \left( a_x(y) h(x-y) + a_x(2y) h(x - 2y) + a_x(3y) h(x - 3y) \right) \end{eqnarray*} and \[ \sum_{x \in E} \left|a_x(j y)\right| \leq K(E, \Lambda_{\epsilon}), \ j = 1,2,3. 
\] Hence, if $\gamma \in \Lambda$ is fixed and $g(w) = e^{-2 \pi i w \cdot \gamma},$ then \begin{eqnarray*} & & e^{-2 \pi i y \cdot \gamma} + e^{-2 \pi i (2y) \cdot \gamma} + e^{-2 \pi i (3y) \cdot \gamma} \\ & = & \sum_{x \in E} \left( a_x(y) h(x - y) + a_x(2y) h(x - 2y) + a_x(3y) h(x - 3y) \right) \ e^{-2 \pi i x \cdot \gamma},\\ \end{eqnarray*} which we write as \[ \sum_{x \in E} b_x(y) e^{-2 \pi i x \cdot \gamma}.\] Since $L^{1}(\mathbb{R}^d) \cap PW_{\Lambda}$ is dense in $PW_{\Lambda},$ we take $f \in L^{1}(\mathbb{R}^d) \cap PW_{\Lambda}$ in the following argument without loss of generality. We compute \begin{eqnarray*} & & \sum_{x \in E} e^{-2 \pi i x \cdot \gamma} \int b_x(y) f(y) \,dy \\ & = & \int f(y) \left( \sum_{x \in E} b_x(y) e^{-2 \pi i x \cdot \gamma} \right) \,dy \\ & = & \int f(y) \left( e^{-2 \pi i y \cdot \gamma} + e^{-2 \pi i (2y) \cdot \gamma} + e^{-2 \pi i (3y) \cdot \gamma} \right) \,dy \\ & = & F(\gamma) + F(2 \gamma) + F(3 \gamma) = J_F(\gamma). \end{eqnarray*} As such, we have \[J_F(\gamma) = \sum_{x \in E} \widetilde{f}(x) e^{-2 \pi i x \cdot \gamma}, \quad \text{where }\widetilde{f}(x) = \int b_x(y) f(y) \,dy.\] Next, we compute the following inequality for the inner product $ \langlengle J_F, J_F \ranglengle_{\Lambda}:$ \begin{align}\langlebel{eq:b} &\int_{\Lambda} J_F(\gamma) \overline{J_F(\gamma) } \ d\gamma = \ \int_{\Lambda} J_F(\gamma) \left( \sum_{x \in E} \overline{\widetilde{f}(x)} e^{2 \pi i x \cdot \gamma} \right) d\gamma \nonumber\\ &=\sum_{x \in E} \overline{\widetilde{f}(x)} \left( \int_{\Lambda} J_F(\gamma) e^{2 \pi i x \cdot \gamma} \,d\gamma \right) \ = \ \sum_{x \in E} \overline{\widetilde{f}(x)} \left( f(x) + \frac{1}{2} f(\frac{x}{2}) + \frac{1}{3}f(\frac{x}{3}) \right)\\ &\leq \left( \sum_{x \in E} | \widetilde{f}(x)|^2 \right)^{1/2} \left( \sum_{x \in E} \left| f(x) + \frac{1}{2} f(\frac{x}{2}) + \frac{1}{3}f(\frac{x}{3}) \right|^2 \right)^{1/2} \nonumber\\ &\leq \left(\sum_{x \in E} |\widetilde{f}(x)|^2 
\right)^{1/2} \left[ \left(\sum_{x \in E} |f(x)|^2 \right)^{1/2} + \frac{1}{2} \left(\sum_{x \in E} |f(\frac{x}{2})|^2 \right)^{1/2} + \frac{1}{3} \left(\sum_{x \in E} |f(\frac{x}{3} )|^2 \right)^{1/2} \right]\nonumber
\end{align}
by H{\"o}lder's and Minkowski's inequalities. Further, there is $A > 0$ such that
\begin{equation}
\langlebel{eq:c}
\sum_{x \in E} |\widetilde{f}(x)|^2 \leq \frac{1}{A} \int_{\Lambda} |F(\gamma)|^2 \,d\gamma. \hspace{2in}
\end{equation}
This is a consequence of Lemma \ref{lemma:Fourier1}. Combining the definition of $J_F$ with the inequalities \eqref{eq:b} and \eqref{eq:c} yields the first inequality of \eqref{eq:a}. The second inequality of \eqref{eq:a} only requires the assumption that $E$ be separated, and, as such, it is a consequence of the Plancherel-P\'{o}lya theorem, which asserts that if $E$ is separated, then
\[\exists \ B_j \text{ such that } \forall \ f \in PW_{\Lambda}, \]
\[\sum_{x \in E} \left| f\left(\frac{x}{j}\right)\right|^2 \leq B_j \ \| f \|_{2}^{2}, \ j = 1,2,3,\]
see \cite{bene1992}, pages 474-475, \cite{land1967}, \cite{SteWei1971}, pages 109-113.
\end{proof}
\end{thm}

Theorem \ref{theorem:Generalized_fourier_frames} can be generalized extensively.\\

\begin{example}
Assume the setting of Theorem \ref{theorem:Generalized_fourier_frames}.

{\it a.} Define the set $\{e_{j,x}^\vee : j = 1, 2, 3 \text{ and } x \in E\}$ of functions on ${\mathbb R}^d$ by
$$e_{j,x}(\gamma) = \frac{1}{j} \mathbb{1}_\Lambda (\gamma)e^{-2 \pi i (1/j)x \cdot \gamma},$$
and define the mapping $S: PW_\Lambda \rightarrow PW_\Lambda$ by
$$Sf = \sum_{j=1}^3 \sum_{x \in E} \inner{f}{e_{j,x}^\vee}e_{j,x}^\vee.$$
We compute
$$\forall f \in PW_\Lambda, \quad \inner{Sf}{f} = \sum_{j=1}^3 \frac{1}{j^2} \sum_{x \in E} \left|f\left(\frac{x}{j}\right)\right|^2.$$

{\it b.} Let $f \in PW_\Lambda$, $\widehat{f} = F$, and define $J_F(\gamma) = F(\gamma) + F(2 \gamma) + F(3 \gamma)$.
Since $(a + b + c)^2 \leq 3(a^2+b^2+c^2)$ for $a, b, c \in {\mathbb R}$, Theorem \ref{theorem:Generalized_fourier_frames} and part {\it a} allow us to write the frame-type inequality,
\begin{equation}
\langlebel{eq:x}
\frac{A}{3} \frac{\inner{J_F}{J_F}^2}{\norm{F}_2^2}\leq \inner{Sf}{f} = \norm{Lf}_{\ell^2}^2 \leq B \norm{f}_2^2,
\end{equation}
where $Lf = \{\inner{f}{e_{j,x}^\vee}: j = 1, 2, 3 \text{ and } x \in E\}$ so that $S = L^\ast L$. The inequalities \eqref{eq:x} do not a priori define a frame for $PW_\Lambda$. However, $\{e_{j,x}: j = 1, 2, 3 \text{ and } x \in E\}$ is a frame for $PW_\Lambda$ with frame operator $S$. This is a consequence of Theorem \ref{theorem:balayage2}.
\end{example}

\begin{thm}
\langlebel{theorem:weightedbalayage}
Let $E = \{x_{n}\} \subseteq \mathbb{R}^d$ be a separated sequence, and let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be an S-set of strict multiplicity. Assume that $\Lambda$ is a compact, convex set that is symmetric about $0 \in \widehat{\mathbb{R}}^d$. Further, let $G \in L^\infty(\widehat{\mathbb{R}}^d)$ be non-negative. If balayage is possible for $(E, \Lambda)$, then
\[ \exists \ A, B > 0, \text{ such that } \forall \ f \in PW_\Lambda \setminus \{0\}, F = \widehat{f},\]
\begin{align}\langlebel{eq:X}
A \frac{\left(\int_{\Lambda} |F(\gamma)|^2 \ G(\gamma) \,d\gamma\right)^2}{\int_{\Lambda} |F(\gamma)|^2 \,d\gamma} & \leq \sum_{x \in E} |\left(F \ G \right)^\vee (x) |^{2}\\
& \leq B \int_\Lambda \left| F(\gamma) \right|^2 \,d\gamma \nonumber.
\end{align}
We can take $A = 1/\left(K(E, \Lambda_\epsilon)^2\norm{h}_2^2 \right)$ and $B = B_1 \norm{G}_\infty^2$, where $B_1$ is the Bessel bound in the Plancherel-P\'{o}lya theorem for $PW_\Lambda$.
\begin{proof}
By hypothesis, we can invoke Theorem \ref{theorem:balayage1} to choose $\epsilon > 0$ so that balayage is possible for $(E, \Lambda_\epsilon)$, i.e., $K(E, \Lambda_\epsilon) < \infty$.
For this $\epsilon >0$ and appropriate $\Omega$, we use Theorem \ref{thm:balayage3} to choose $h \in L^1({\mathbb R}^d)$ for which $h(0) = 1$, $\text{supp}\,{\widehat{h}} \subseteq \overline{B(0, \epsilon)}$, and $|h(x)| = O(e^{-\Omega(\norm{x})}), \norm{x} \rightarrow \infty$. Consequently, we have \[\forall \ y \in \mathbb{R}^d \text{ and } \forall \ \gamma \in \Lambda,\] \[e^{-2 \pi i y \cdot \gamma} = \sum_{x \in E} a_x(y) h(x - y) e^{-2 \pi i x \cdot \gamma}, \text{ where } \sum_{x \in E} |a_x(y)| \leq K(E, \Lambda_{\epsilon}).\] If $f \in PW_\Lambda$, $\widehat{f} = F$, and noting that $F \in L^1({\widehat{\mathbb{R}}^d})$, we have the following computation: \begin{eqnarray}\langlebel{eq:y} & & \int_{\Lambda} |F(\gamma)|^2 G(\gamma) \ d \gamma \nonumber \\ & = & \int_{\Lambda} F(\gamma) G(\gamma) \left(\int \overline{f(w)} \left( \sum_{x \in E} a_x(w) h(x - w) e^{2 \pi i x \cdot \gamma} \right) \,dw \right) \,d\gamma \nonumber\\ & = & \sum_{x \in E} \left(\int_{\Lambda} F(\gamma) G(\gamma) e^{2 \pi i x \cdot \gamma} \ d\gamma \right) \left(\int \overline{f(w)} a_{x}(w) h(x - w) \ dw \right) \\ & \leq & \left(\sum_{x \in E} | (F G) ^\vee (x)|^2 \right)^{1/2} \left( \sum_{x \in E} \left| \int \overline{f(w)} a_{x}(w) h(x - w) \ dw \right|^2 \right)^{1/2}\nonumber\\ & \leq & K(E, \Lambda_{\epsilon}) \norm{h}_{2} \left( \int_{\Lambda} | F(\gamma) |^2 \, d\gamma \right)^{1/2} \left(\sum_{x \in E} \left| (F G)^\vee (x) \right|^2 \right)^{1/2},\nonumber \end{eqnarray} where the last step is a consequence of Lemma \ref{lemma:Fourier1}. Clearly, \eqref{eq:y} gives the first inequality of \eqref{eq:X}. As in Theorem \ref{theorem:Generalized_fourier_frames}, the second inequality of \eqref{eq:X} only requires the assumption that $E$ be separated, and, as such, it is a consequence of the Plancherel-P\'{o}lya theorem for $PW_\Lambda$. 
\end{proof} \end{thm} Theorem \ref{theorem:weightedbalayage} is an elementary generalization of the classical result for the case $G = 1$ on ${\mathbb R}$, and itself has significant generalizations to other weights $G$. We have not written $(FG)^\vee$ as a convolution since for such generalizations there are inherent subtleties in defining the convolution of distributions, e.g., \cite{schw1966}, Chapitre VI, \cite{meye1981}, see \cite{bene1997}, pages 99-102, for contributions of Hirata and Ogata, Colombeau, et al. Even in the case of Theorem \ref{theorem:weightedbalayage}, $G^\vee = g$ is in the class of pseudo-measures, which themselves play a basic role in spectral synthesis \cite{bene1975}. \subsection{A bounded operator $B: L^{p}(\mathbb{R}^{d}) \rightarrow l^{p}(E), \ p > 1$} \langlebel{sec:landau} {\it a.} In Example \ref{ex:fouierframebalayage}{\it b} we proved the lower frame bound assertion of Theorem \ref{theorem:balayage2}. This can also be achieved using Beurling's generalization of balayage to so-called linear balayage operators $B$, see \cite{beur1989}, pages 348-350. In fact, with this notion and assuming the hypotheses of Theorem \ref{thm:balayage4}, Beurling proved that the mapping, \begin{eqnarray*} L^{p}(\mathbb{R}^d) & \longrightarrow & l^{p}(E),\quad p > 1,\\ k & \mapsto & \{k_{x} \}_{x \in E}, \end{eqnarray*} where $$ \forall \ x \in E, \quad k_x = \int_{\mathbb{R}^d} a_{x}(y) h(x-y) k(y) \ dy, $$ has the property that $$ \exists \ C_p > 0 \text{ such that } \forall \ k \in L^{p}(\mathbb{R}^d), $$ \begin{equation} \langlebel{eqn:Landau1} \sum_{x \in E} |k_x |^{p} \leq C_p \int |k(y)|^{p} dy. \end{equation} Let $p = 2$ and fix $f \in PW_{\Lambda}.$ We shall use (\ref{eqn:Landau1}) and the definition of norm to obtain the desired lower frame bound. This is H.J. Landau's idea. 
Set
$$
I_k = \int_{\Lambda} F(\gamma) \overline{K(\gamma)} d\gamma, \quad \quad \widehat{f} = F,
$$
where $K^{\vee} = k \in L^{2}({\mathbb R}^d).$ By balayage, we have
\[K(\gamma) = \sum_{x \in E} k_x e^{-2 \pi i x \cdot \gamma} \text{ on } \Lambda; \]
and so,
\[I_k = \sum_{x \in E} f(x) \overline{k_x}, \]
allowing us to use (\ref{eqn:Landau1}) to make the estimate,
\[ |I_k|^2 \leq C \|K \|_{2}^{2} \ \sum_{x \in E}|f(x)|^2.\]
By definition of $\|f \|_{2}$, we have
\[ \| f \|_{2} = \sup_{K} \frac{| I_{k} |}{\| K \|_{2}} \leq C^{1/2} \left( \sum_{x \in E}|f(x)|^2 \right)^{1/2},\]
and this is the lower frame bound inequality with bound $A = 1/C.$\\
Because of this approach we can think of balayage as ``$l^{2}-L^{2}$ balayage''.

{\it b.} Motivated by part {\it a}, we shall say that $l^{1}-L^{2}$ {\it balayage is possible} for $(E, \Lambda)$, where $E$ is separated and $\Lambda$ is a compact set of positive measure $| \Lambda |$, if
\[ \exists \ C > 0 \text{ such that } \forall \ k \in L^{2}(\mathbb{R}^d), \widehat{k} = K,\]
\[\sum_{x \in E} | k_x | \leq C \int_{\Lambda} | K(\gamma)| \ d \gamma\]
and
\[K(\gamma) = \sum_{x \in E} k_x e^{-2 \pi i x \cdot \gamma} \text{ on } \Lambda. \]
For fixed $f \in PW_{\Lambda}$ and using the notation of part {\it a,} we have
\begin{equation}\langlebel{eqn17}
|I_k|^2 \leq \sum_{x \in E} |k_x|^2 \sum_{x \in E} | f(x) |^2.
\end{equation}
An elementary calculation gives
\[\sum_{x \in E} |k_x|^2 \leq C^2 | \Lambda | \int_{\Lambda} | K(\gamma)|^2 d\gamma, \]
which, when substituted into (\ref{eqn17}), gives
\[\frac{1}{C^2 | \Lambda |} \left( \frac{| I_k |^2}{ \int_{\Lambda} | K(\gamma)|^2 d\gamma} \right) \leq \sum_{x \in E} | f(x) |^2.\]
We obtain the desired lower frame inequality with bound $ A = 1/(C^2 | \Lambda |).
$ \section{Pseudo-differential operator frame inequalities} \langlebel{sec:pdo} Let $\sigma \in \mathcal{S}^{\prime}(\mathbb{R}^d \times \widehat{\mathbb{R}}^d).$ The operator, $K_{\sigma},$ formally defined as \[ (K_{\sigma} f)(x) = \int \sigma(x, \gamma) \widehat{f}(\gamma) e^{2 \pi i x \cdot \gamma} \ d\gamma, \] is the \emph{pseudo-differential operator} with Kohn-Nirenberg symbol, $\sigma$, see \cite{groc2001} Chapter 14, \cite{groc2006} Chapter 8, \cite{horm1979}, and \cite{stei1993}, Chapter VI. For consistency with the notation of the previous sections, we shall define pseudo-differential operators, $K_s,$ with tempered distributional Kohn-Nirenberg symbols, $s \in \mathcal{S}^{\prime}(\mathbb{R}^d \times \widehat{\mathbb{R}}^d),$ as \[ (K_{s} \widehat{f})(\gamma) = \int s(y, \gamma) f(y) e^{-2 \pi i y \cdot \gamma} \ dy. \] Further, we shall actually deal with Hilbert-Schmidt operators, $K \colon L^2(\widehat{\mathbb{R}}^d) \rightarrow L^2(\widehat{\mathbb{R}}^d)$; and these, in turn, can be represented as $K = K_s,$ where $s \in L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)$. Recall that $K \colon L^2(\widehat{\mathbb{R}}^d) \rightarrow L^2(\widehat{\mathbb{R}}^d)$ is a \emph{Hilbert-Schmidt operator} if \[ {\sum}_{n=1}^{\infty} \| K e_n \|_2^2 < \infty \] for some orthonormal basis, $\{e_n\}_{n=1}^{\infty},$ for $L^2(\widehat{\mathbb{R}}^d)$, in which case the \emph{Hilbert-Schmidt norm} of $K$ is defined as \[ \| K \|_{HS} = \left(\sum_{n=1}^{\infty} \| K e_n \|_2^2 \right)^{1/2}, \] and $\| K \|_{HS}$ is independent of the choice of orthonormal basis. 
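In a finite-dimensional model, where the kernel becomes a matrix and the $e_n$ become the columns of an orthonormal basis, the basis independence of $\| K \|_{HS}$ reduces to a property of the Frobenius norm. A numerical sketch; the random matrix and basis are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # discrete "kernel"

def hs_norm_sq(K, basis):
    """sum_n ||K e_n||^2 over the columns e_n of an orthonormal basis."""
    return sum(np.linalg.norm(K @ basis[:, j])**2 for j in range(basis.shape[1]))

std = np.eye(n)                                          # standard orthonormal basis
Q, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))  # a random orthonormal basis

frob_sq = (np.abs(K)**2).sum()               # the squared L^2 norm of the kernel entries
assert np.isclose(hs_norm_sq(K, std), frob_sq)
assert np.isclose(hs_norm_sq(K, Q), frob_sq)  # independent of the orthonormal basis
print("HS norm matches the kernel L^2 norm in both bases")
```

The same computation previews the Hilbert-Schmidt criterion recalled next: for an integral operator, $\| K \|_{HS}$ is the $L^2$ norm of its kernel.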
The first theorem about Hilbert-Schmidt operators is the following \cite{RieNag1955}: \begin{thm} If $K \colon L^2(\widehat{\mathbb{R}}^d) \rightarrow L^2(\widehat{\mathbb{R}}^d)$ is a bounded linear mapping and $(K \widehat{f})(\gamma) = \int m(\gamma, \lambda) \widehat{f}(\lambda) \ d\lambda,$ for some measurable function $m$, then $K$ is a Hilbert-Schmidt operator if and only if $m \in L^2(\widehat{\mathbb{R}}^{2d})$ and, in this case, $\| K \|_{HS} = \| m \|_{L^2(\widehat{\mathbb{R}}^{2d})}.$ \end{thm} The following is our result about pseudo-differential operator frame inequalities. \begin{thm}\label{thm:pseudoD} Let $E = \{x_n\} \subseteq \mathbb{R}^d$ be a separated sequence that is symmetric about $0 \in \mathbb{R}^d$; and let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be an S-set of strict multiplicity that is compact, convex, and symmetric about $0 \in \widehat{\mathbb{R}}^d$. Assume balayage is possible for $(E, \Lambda)$. Further, let $K$ be a Hilbert-Schmidt operator on $L^2(\widehat{\mathbb{R}}^d)$ with pseudo-differential operator representation, \[ (K \widehat{f})(\gamma) = (K_{s} \widehat{f})(\gamma) = \int s(y, \gamma) f(y) e^{-2 \pi i y \cdot \gamma} \ dy, \] where $s_{\gamma}(y) = s(y, \gamma) \in L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)$ is the Kohn-Nirenberg symbol and where we make the further assumption that \begin{equation}\label{eqn:assumpThm5.2} \forall \gamma \in \widehat{\mathbb{R}}^d, \quad s_{\gamma} \in C_b(\mathbb{R}^d) \quad \text{and} \quad {\rm supp}\,(s_{\gamma} e_{- \gamma})^{\widehat{}} \subseteq \Lambda.
\end{equation} Then, \[ \exists A,\,B > 0 \quad \text{such that} \quad \forall f \in L^2(\mathbb{R}^d) \backslash \{0\}, \] \begin{equation}\label{eqn:Thm5.2} A \frac{ \| K_s \widehat{f} \|_2^4}{\| f \|_2^2} \leq \sum_{x \in E} | \langle (K_s \widehat{f})(\cdot), \overline{s(x, \cdot)} \ e_x(\cdot) \rangle |^2 \leq B \ \| s \|_{L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)}^{2} \| K_s \widehat{f} \|_2^2. \end{equation} \end{thm} \begin{proof} {\it a.} In order to prove the assertion for the lower frame bound, we first combine the pseudo-differential operator $K_s$, with Kohn-Nirenberg symbol $s$, and balayage to compute \begin{align}\label{eqn:5A} & \int | (K_s \widehat{f})(\gamma) |^2 \ d\gamma = \int \overline{(K_s \widehat{f})(\gamma)} (K_s \widehat{f})(\gamma) \ d\gamma \\ = & \int \overline{(K_s \widehat{f})(\gamma)} \left( \int s(y, \gamma) f(y) e^{-2 \pi i y \cdot \gamma} \ dy \right) d\gamma \nonumber \\ = & \int \overline{(K_s \widehat{f})(\gamma)} \left( \int f(y) k(y, \gamma) \ dy \right) d\gamma \nonumber \\ = & \int \overline{(K_s \widehat{f})(\gamma)} \left( \int f(y) \left( \sum_{x \in E} k(x, \gamma) a_x(y, \gamma) h(x-y) \right) dy \right) d\gamma, \nonumber \end{align} where $k_{\gamma}(y) = k(y, \gamma) = s(y, \gamma) e^{-2 \pi i y \cdot \gamma}$ on $\mathbb{R}^d$ and $k_{\gamma} \in \mathcal{C}(\Lambda)$ for each fixed $\gamma \in \widehat{\mathbb{R}}^d$, and where \begin{equation} \label{eqn:5B} \sup_{\gamma \in \widehat{\mathbb{R}}^d} \sup_{y \in {\mathbb{R}}^d} \sum_{x \in E}\,| a_x(y, \gamma)| \leq K(E, \Lambda_{\epsilon}) = C < \infty. \end{equation} Because of Theorems \ref{thm:balayage3} and \ref{thm:balayage4}, we do not need to have the function $h$ depend on $\gamma \in \widehat{\mathbb{R}}^d$. Further, because of (\ref{eqn:5B}) and the estimates we shall make, we can write $a_x(y, \gamma) = a_x(y)$.
Thus, the right side of (\ref{eqn:5A}) is \begin{align} \label{eqn:5C} & \int f(y) \left[ \sum_{x \in E} a_x(y) h(x-y) \left( \int \overline{(K_s \widehat{f})(\gamma)} k(x, \gamma) \ d\gamma \right) \right] dy \\ = & \sum_{x \in E} \left( \int f(y) a_x(y) h(x-y) \ dy \int \overline{(K_s \widehat{f})(\gamma)} k(x, \gamma) \ d\gamma \right) \nonumber \\ \leq & \left( \sum_{x \in E} \left| \int f(y) a_x(y) h(x-y) \ dy \right|^2 \right)^{1/2} \left( \sum_{x \in E} \left| \int \overline{(K_s \widehat{f})(\gamma)} k(x, \gamma) \ d\gamma \right|^2 \right)^{1/2}, \nonumber \end{align} by the Cauchy--Schwarz inequality applied to the sum over $x$. Note that, by H\"{o}lder's inequality applied to the integral, we have \begin{align} \label{eqn:5D} & \sum_{x \in E} \left| \int f(y) a_x(y) h(x-y) \ dy \right|^2 \\ \leq & \sum_{x \in E} \left| \left( \int | a_x(y) | | h(x-y) |^2 \ dy \right)^{1/2} \left( \int | f(y) |^2 | a_x(y)| \ dy \right)^{1/2} \right|^2 \nonumber \\ \leq & \sum_{x \in E} \left( C \int | h(x-y) |^2 \ dy \right) \left( \int | f(y) |^2 | a_x(y)| \ dy \right) \nonumber \\ \leq & C \| h \|_2^2 \int \left( \sum_{x \in E} | a_x(y) | \right) \left| f(y) \right|^2 \ dy \nonumber \\ \leq & C^2 \| h \|_2^2 \| f \|_2^2. \nonumber \end{align} Combining (\ref{eqn:5A}), (\ref{eqn:5C}), and (\ref{eqn:5D}), we obtain \[ \| K_s \widehat{f} \|_2^2 \leq C \|h\|_2 \| f \|_2 \left( \sum_{x \in E} \left| \int (K_s \widehat{f})(\gamma) k(x, \gamma) \ d\gamma \right|^2 \right)^{1/2}. \] Consequently, setting $A = 1/(C\|h\|_2)^2$, we have \begin{align*} \forall f \in L^2(\mathbb{R}^d) \backslash \{0\}, \quad A \frac{\| K_s \widehat{f} \|_2^4}{\| f \|_2^2} & \leq \sum_{x \in E} \left| \int (K_s \widehat{f} )(\gamma) s(x, \gamma) e^{-2 \pi i x \cdot \gamma} \ d\gamma \right|^2 \\ & = \sum_{x \in E} | \langle (K_s \widehat{f})(\cdot), \overline{s(x, \cdot)} e_x(\cdot) \rangle |^2, \end{align*} and the assertion for the lower frame bound is proved.
{\it b.i.} In order to prove the assertion for the upper frame bound, we begin by formally defining \[ \forall f \in L^2({\mathbb R}^d), \quad (I_s \widehat{f})(x) = \int s(x,\gamma) (K_s \widehat{f})(\gamma) e^{-2 \pi i x \cdot \gamma} \ d\gamma, \] which is the inner product in (\ref{eqn:Thm5.2}). Note that $I_s \widehat{f} \in L^2(\mathbb{R}^d).$ In fact, we know $K_s \widehat{f} \in L^2(\widehat{\mathbb{R}}^d)$ and $s \in L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)$ so that \[ | I_s \widehat{f}(x) |^2 \leq \int | s(x, \gamma) |^2 \ d\gamma \int | K_s \widehat{f}(\gamma) |^2 \ d\gamma \] by H\"{o}lder's inequality, and, hence, \begin{equation}\label{eqn:I_s_f_hat} \| I_s \widehat{f} \|_2^2 \leq \| s \|_{ L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)}^2 \| K_s \widehat{f} \|_2^2. \end{equation} {\it b.ii.} We shall now show that supp$((I_s \widehat{f})\ \widehat{} \ ) \subseteq \Lambda$, and to this end we use (\ref{eqn:assumpThm5.2}). We begin by computing \begin{align*} (I_s\widehat{f})\ \widehat{}\ (\omega) & = \int \left( \int s(y, \gamma) (K_s \widehat{f})(\gamma) \ e^{-2 \pi i y \cdot \gamma} \ d\gamma \right) e^{-2 \pi i y \cdot \omega} \ dy \\ & = \int (K_s \widehat{f})(\gamma) \left( \int k_{\gamma}(y) e^{-2 \pi i y \cdot \omega} \ dy \right) \ d\gamma \\ & = \int (K_s \widehat{f})(\gamma) (k_{\gamma})^{\widehat{}} (\omega) \ d\gamma, \end{align*} where \[k_{\gamma}(y) = k(y, \gamma) = s(y,\gamma) e^{-2 \pi i y \cdot \gamma} = (s_{\gamma} e_{- \gamma})(y), \] as in part {\it a}. Also, supp$(k_{\gamma})^{\widehat{}} \ \subseteq \Lambda$ by our assumption, (\ref{eqn:assumpThm5.2}); that is, for each $\gamma \in \widehat{\mathbb{R}}^d, (k_{\gamma})^{\widehat{}} \ = 0$ a.e. on $\widehat{\mathbb{R}}^d \backslash \Lambda$.
Since supp$(I_s\widehat{f})\ {\widehat{}} \ $ is the smallest closed set outside of which $(I_s\widehat{f})\ \widehat{}\ $ is 0 a.e., we need only show that if supp$(L) \subseteq \widehat{\mathbb{R}}^d \backslash \Lambda$ then \[ \int L(\omega) (I_s\widehat{f})\ \widehat{}\ (\omega) \ d\omega = 0. \] This follows because \[ \int L(\omega) (I_s\widehat{f})\ \widehat{} \ (\omega) \ d\omega = \int (K_s \widehat{f})(\gamma) \left( \int L(\omega) (k_{\gamma})^{\widehat{}} (\omega) \ d\omega \right) \ d\gamma\] and $(k_{\gamma})^{\widehat{}} = 0$ on $\widehat{\mathbb{R}}^d \backslash \Lambda.$ {\it b.iii.} Because of parts ${\it b. i}$ and ${\it b. ii}$, we can invoke the P{\'o}lya-Plancherel theorem to assert the existence of $B > 0$ such that \[ \forall f \in L^2(\mathbb{R}^d), \quad \sum_{x \in E} | (I_s \widehat{f}) (x) |^2 \leq B \| I_s \widehat{f} \|_2^2, \] and the upper frame inequality of (\ref{eqn:Thm5.2}) follows from (\ref{eqn:I_s_f_hat}). \end{proof} \begin{example} We shall define a Kohn-Nirenberg symbol class whose elements $s$ satisfy the hypotheses of Theorem \ref{thm:pseudoD}. Choose $\{ \lambda_j \} \subseteq \text{int}(\Lambda), a_j \in C_b(\mathbb{R}^d) \cap L^2(\mathbb{R}^d),$ and $b_j \in C_b(\widehat{\mathbb{R}}^d) \cap L^2(\widehat{\mathbb{R}}^d)$ with the following properties:\\ {\it i.} \; $ \sum_{j=1}^{\infty} | a_j(y) b_j(\gamma)|$ is uniformly bounded and converges uniformly on $\mathbb{R}^d \times \widehat{\mathbb{R}}^d$;\\ {\it ii.} \; $ \sum_{j=1}^{\infty} \| a_j \|_2 \| b_j \|_2 < \infty;$\\ {\it iii.} \; $ \forall j = 1, \ldots, \ \exists \epsilon_j > 0$ such that $\overline{B(\lambda_j, \epsilon_j)} \subseteq \Lambda$ and supp$(\widehat{a}_j) \subseteq \overline{B(0, \epsilon_j)}.$\\ \noindent These conditions are satisfied for a large class of functions $a_j$ and $b_j$.
The Kohn-Nirenberg symbol class consists of the functions, $s$, defined as \[ s(y, \gamma) = \sum_{j=1}^{\infty} a_j(y) b_j(\gamma) e^{-2 \pi i y \cdot (\lambda_j - \gamma)}; \] these satisfy the hypotheses of Theorem \ref{thm:pseudoD}. To see this, first note that condition \emph{i} tells us that, if we set $s_{\gamma}(y) = s(y, \gamma),$ then \[ \forall \gamma \in \widehat{\mathbb{R}}^d, \quad s_{\gamma} \in C_b(\mathbb{R}^d). \] Condition \emph{ii} allows us to assert that $s \in L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)$ since we can use Minkowski's inequality to make the estimate, \[ \| s \|_{L^2(\mathbb{R}^d \times \widehat{\mathbb{R}}^d)} \leq \sum_{j=1}^{\infty} \left( \int \int \left| b_j(\gamma) a_j(y) e^{-2 \pi i y \cdot (\lambda_j - \gamma)} \right|^2 \ dy \ d\gamma \right)^{1/2} = \sum_{j=1}^{\infty} \| a_j \|_2 \|b_j \|_2 . \] Finally, using condition \emph{iii}, we obtain the support hypothesis, supp$(s_{\gamma} e_{-\gamma})^{\widehat{}} \subseteq \Lambda,$ of Theorem \ref{thm:pseudoD} for each $\gamma \in \widehat{\mathbb{R}}^d$, because of the following calculations: \[ (s_{\gamma} e_{-\gamma})^{\widehat{}}(\omega) = \sum_{j=1}^{\infty} b_j(\gamma) (\widehat{a}_{j} \ast \delta_{- \lambda_j})(\omega) \] and, for each $j$, \[\text{supp}(\widehat{a}_j \ast \delta_{- \lambda_j}) \subseteq \overline{B(0, \epsilon_j)} - \{ \lambda_j \} = \overline{B(-\lambda_j, \epsilon_j)} \subseteq \Lambda,\] where the last inclusion uses the symmetry of $\Lambda$ about the origin. \end{example} \section{The Beurling covering theorem} \label{sec:covering} Let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be a convex, compact set which is symmetric about the origin and has non-empty interior. Then $\norm{\cdot}_\Lambda$, defined by $$ \forall \gamma \in \widehat{\mathbb{R}}^d ,\quad \norm{\gamma}_\Lambda = \inf \{\rho > 0 : \gamma \in {\rho}\Lambda \}, $$ is a norm on $\widehat{\mathbb{R}}^d$ equivalent to the Euclidean norm.
The polar set $\Lambda^\ast \subseteq {\mathbb R}^d$ of $\Lambda$ is defined as $$ \Lambda^\ast = \{x \in {\mathbb R}^d: x \cdot \gamma \leq 1, \text{ for all } \gamma \in \Lambda \}. $$ It is elementary to check that $\Lambda^\ast$ is a convex, compact set which is symmetric about the origin, and that it has non-empty interior. \begin{example} Let $\Lambda = [-1,1] \times [-1,1]$. Then, for $(\gamma_1,\gamma_2)\in \widehat{\mathbb{R}}^2$, $$ \norm{(\gamma_1,\gamma_2)}_\Lambda = \inf \{\rho >0: |\gamma_1| \leq \rho, |\gamma_2| \leq \rho \}= \norm{(\gamma_1,\gamma_2)}_\infty. $$ The polar set of $\Lambda$ is $$ \Lambda^\ast = \{(x_1,x_2): |x_1|+|x_2|\leq 1\} = \{(x_1,x_2): \norm{(x_1,x_2)}_1 \leq 1\}. $$ \end{example} \begin{thm}\label{thm:covering}(Beurling covering theorem) Let $\Lambda \subseteq \widehat{\mathbb{R}}^d$ be a convex, compact set which is symmetric about the origin and has non-empty interior, and let $E \subseteq {\mathbb R}^d$ be a separated set satisfying the covering property, $$\bigcup_{y \in E}\tau_y \Lambda^\ast = {\mathbb R}^d.$$ If $\rho <1/4$, then $\{(e_{-x} \ \mathbb{1}_{\Lambda})^\vee: x \in E \}$ is a Fourier frame for $PW_{\rho\Lambda}$. \end{thm} Theorem \ref{thm:covering} \cite{BenWu1999b}, \cite{BenWu2000} involves the Paley-Wiener theorem and properties of balayage, and it depends on the theory developed in \cite{beur1989}, pages 341-350, \cite{beur1966}, and \cite{land1967}. For a recent development, see \cite{OleUla2012}.
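Returning to the example $\Lambda = [-1,1]^2$, the identification of $\Lambda^\ast$ with the closed $\ell^1$ unit ball can be verified numerically. The Python sketch below is an illustration only, with ad hoc sample points; it uses the fact that, for this $\Lambda$, the supremum of $x \cdot \gamma$ over $\gamma \in \Lambda$ is attained at a vertex.

```python
import numpy as np

# Check of the example: for Lambda = [-1,1]^2, the sup over gamma in Lambda
# of x . gamma is attained at a vertex, so x lies in the polar set Lambda*
# exactly when max over vertices v of x . v is <= 1, i.e. when ||x||_1 <= 1.
vertices = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(1000, 2))            # ad hoc sample points

in_polar = np.max(X @ vertices.T, axis=1) <= 1    # x in Lambda* ?
in_l1_ball = np.abs(X).sum(axis=1) <= 1           # ||x||_1 <= 1 ?
agree = bool(np.all(in_polar == in_l1_ball))      # the two tests coincide
```

The two membership tests coincide exactly, since $\max_{\pm} (\pm x_1 \pm x_2) = |x_1| + |x_2|$.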
\section{Epilogue} \label{sec:epilogue} This paper is rooted in Beurling's deep ideas and techniques dealing with balayage, which themselves have spawned wondrous results in a host of areas: from Kahane's creative formulation and theory exposited in \cite{kaha1970}; to the setting of various locally compact abelian groups, with surprising twists and turns and many open problems, e.g., \cite{shap1977}, \cite{shap1978}; to the new chapter on quasi-crystals led by Yves Meyer, e.g., \cite{hof1995}, \cite{meye1995}, \cite{laga2000}, \cite{MatMey2009}, \cite{MatMey2010}, \cite{MatMeyOrt2013}, \cite{meye2012}, as well as the revisiting by Beurling \cite{beur1985}. Even with the focused theme of this paper, there is the important issue of implementation and computation vis-\`{a}-vis balayage and genuine applications of non-uniform sampling. \end{document}
\begin{document} \title{An algorithmic proof of Bachet's conjecture and the Lagrange-Euler method} \section{Introduction} The goal of this note is to present a proof of Bachet's conjecture based exclusively on the fundamental theorem of arithmetic. The novelty of this proof consists in its introduction of a partial order on the rational integers through the unique factorization property. In general, the proofs of Bachet's conjecture by the Lagrange--Euler method (cf. \cite{Landau1958}, \cite{Davenport1983}, \cite{Cohen2007}) rely on infinite descent. In the proposed proof we do not assume the existence of a ``minimal solution'', but rather we show the existence of the desired solution through an algorithmic method. \\ \\ This approach should also be suitable for generalized versions of Bachet's conjecture for algebraic integers. This is due to the fact that total orders are often impossible to introduce in algebraic extensions of $\mathbb{Q}$. However, if unique factorization is used as the basis for ordering, it is likely to be possible to apply our approach and obtain the desired results. For example, \cite{Deutsch2002} possibly had to restrict her work to totally ordered fields due to the problem of ordering. \section{Definitions} \begin{definition}[Initial interval of primes] Let $\Pi_n = \{ p_1=2,p_2=3,...,p_n \}$ be the interval of the first $n$ prime numbers. \end{definition} \noindent Let $S(\Pi_n) = \{ w = p_1^{i_1}...p_n^{i_n} \mid i_1,i_2,...,i_n \in \mathbb{Z}_{\geq 0} \}$. Given an element $w \in S(\Pi_n)$, we always write $w= p_1^{\alpha_1}...p_n^{\alpha_n}$, even if some of the powers are 0. The leading prime factor of $w$ is the prime of greatest index whose corresponding power is not 0. \begin{definition}[The $L$ map] Let $w = p_1^{\alpha_1}p_2^{\alpha_2}...p_n^{\alpha_n}$ where $\alpha_i \in \mathbb{Z}_{\geq 0}$. Let $L :S \rightarrow \mathbb{Z}_{\geq 0}$, $L(w) = k$, where $k$ is the index of the leading prime; we set $L(1) = 0$.
\end{definition} \begin{definition}[The $\nu$ map] Let $w = p_1^{\alpha_1}p_2^{\alpha_2}...p_n^{\alpha_n}$ where $\alpha_i \in \mathbb{Z}_{\geq 0}$. Let $\nu : S \rightarrow \mathbb{Z}_{\geq 0}$, where $\nu(w) = \alpha_{L(w)}$. \end{definition} \noindent The above mappings are well-defined due to the unique factorization property of $\mathbb{Z}$. \begin{definition}[The partial order on $S$] Given $w_1$ and $w_2$, we write $w_1 \prec w_2$ if either $L(w_1) < L(w_2)$, or $L(w_1) = L(w_2)$ and $\nu(w_1) < \nu(w_2)$. \end{definition} \noindent We shall now give some properties of this partial order on the set $S$. If $w_1 = w_2 w_3$, then $L(w_1) = \max(L(w_2),L(w_3))$; moreover, if $L(w_2) < L(w_3)$, then $\nu(w_1) = \nu(w_3)$, while if $L(w_2) = L(w_3)$, then $\nu(w_1) = \nu(w_2) + \nu(w_3)$. \begin{definition}[Reduced Solution] Given the system of equations \begin{equation*} \begin{cases} x_1^2 + x_2^2 + x_3^2 + x_4^2 - px_5 = 0 \\ (x_1,x_2,...,x_5) = 1 \end{cases} \end{equation*} \noindent a solution $(a_1,a_2,a_3,a_4,a_5)$ is called a reduced solution if every prime factor of $a_5$ precedes $p$. \end{definition} \section{The Result} \begin{lemma} For any prime $p$ the system of equations \begin{equation} \begin{cases} x_1^2 + x_2^2 + x_3^2 + x_4^2 - px_5 = 0 \\ (x_1,x_2,...,x_5) = 1 \end{cases} \end{equation} \noindent has a reduced solution. \end{lemma} \begin{lemma} Let $(a_1,a_2,a_3,a_4,a_5)$ be a reduced solution of equation (1) and let $p' \mid a_5$.
Then the system of equations \begin{equation} \begin{cases} y_1^2 + y_2^2 + y_3^2 + y_4^2 - p'y_5 = 0 \\ (y_1,y_2,...,y_5) =1 \end{cases} \end{equation} \noindent has a reduced solution $(b_1,b_2,b_3,b_4,b_5)$ such that: \begin{equation} \begin{aligned} a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4 \equiv 0 \mod p' \\ a_1b_2 - a_2b_1 + a_3b_4 - a_4b_3 \equiv 0 \mod p' \\ a_1b_3 - a_3b_1 + a_4b_2 - a_2b_4 \equiv 0 \mod p' \\ a_1b_4 - a_4b_1 + a_2b_3 - a_3b_2 \equiv 0 \mod p' \end{aligned} \end{equation} \end{lemma} \begin{theorem} Let $p$ be an arbitrary prime; then $x_1^2 + x_2^2 + x_3^2 + x_4^2 = p$ is solvable. \end{theorem} \begin{proof} Let $(a_1,a_2,a_3,a_4,a_5)$ be a reduced solution of equation (1) with $a_5 \in S$. Let $p' = p_n$ be the leading prime factor of $a_5$, and let $(b_1,b_2,b_3,b_4,b_5)$ be a reduced solution of equation (2) subject to (3). By taking the product of equations (1) and (2), we obtain: \begin{equation} (a_1^2 + a_2^2 + a_3^2 + a_4^2)(b_1^2 + b_2^2 + b_3^2 + b_4^2) - p p_n a_5b_5 = 0 \end{equation} By Euler's identity, this equation can be rewritten as \begin{equation} c_1^2 + c_2^2 + c_3^2 + c_4^2 - pp_na_5b_5 = 0, \end{equation} \noindent where, by Lemma 2, $\gcd(c_1,c_2,c_3,c_4) = d$ with $d \equiv 0 \mod p_n$. We can therefore reduce by $d$ and obtain: \begin{equation} \begin{cases} {a_1^{1}}^2 + {a_2^{1}}^2 + {a_3^{1}}^2 + {a_4^{1}}^2 - pa_5^{1} = 0 \\ (a_1^1,a_2^1,a_3^1,a_4^1,a_5^1) = 1 \end{cases} \end{equation} \noindent where $a_{i}^1 = \frac{c_i}{d}$ for $i=1,\ldots,4$ and $a_5^1 = \frac{p_na_5b_5}{d^2}$. Let us show that $a_5 \succ a_5^1$. First, $L(a_5b_5) = L(a_5)$ follows from the fact that all prime divisors of $b_5$ precede $p_n$. By multiplying $a_5b_5$ by $p_n$, $L(p_na_5b_5) = L(a_5)$. When one divides by $d^2$, there are two possibilities: \begin{itemize} \item Possibility 1: $L(\frac{p_na_5b_5}{d^2}) < L(a_5)$; then $ a_5^1 \prec a_5 $. \\ \item Possibility 2: $L(\frac{p_na_5b_5}{d^2}) = L(a_5)$; however, in this case, $\nu(\frac{p_na_5b_5}{d^2}) < \nu(a_5)$, so again $a_5^1 \prec a_5$.
\end{itemize} \noindent If $L(a_5) > L(a_5^1)$, then the leading prime factor of $a_5^1$ strictly precedes $p_n$. Otherwise, if $L(a_5) = L(a_5^1)$ (implying $\nu(a_5) > \nu(a_5^1)$), we repeat this procedure for $p_n$. \newline \newline To finalize the proof, one should note that this reduction procedure can be repeated. Moreover, the number of consecutive steps in which $L$ stays constant before the inequality becomes strict is bounded by the power of the prime we are reducing over. Therefore, we have the following ordered chain and its associated finite non-increasing sequence: \begin{eqnarray} a_5 \succ a_5^1 \succ ... \succ a_5^{k-1} \succ a_5^k \succ ... \succ a_5^t = 1 \\ L(a_5) \geq L(a_5^1) \geq ... \geq L(a_5^{k-1})> L(a_5^k)\geq ... > L(a_5^t) = 0 \end{eqnarray} \noindent which completes the proof. \end{proof} \begin{theorem}[Lagrange's four square theorem] Every positive integer is a sum of four integer squares. \end{theorem} \begin{proof} This follows from the unique factorization property of $\mathbb{Z}$, Theorem 1, and Euler's identity. \end{proof} \section{Remarks (Sketch of proofs of lemma 1 and lemma 2)} Proof of Lemma 1: Using Chevalley's theorem, the equation $x_1^2 + x_2^2 + x_3^2 + x_4^2 - px_5 = 0$ has a non-zero solution $(a_1,a_2,a_3,a_4,a_5)$. It is possible to find a solution $(b_1,b_2,b_3,b_4,b_5)$ such that every $b_i$, $i=1,\ldots,4$, is a least absolute residue modulo $p$; consequently, all prime factors of $b_5$ precede $p$. Let $\gcd(b_1,b_2,b_3,b_4)=d$. If $d=1$, then we have a reduced solution. Otherwise, by dividing by $d$ we obtain a reduced solution. \newline \newline \noindent Proof of Lemma 2: One can assume, per the lemma's formulation, that we are given $(a_1,a_2,a_3,a_4,a_5)$, a reduced solution of equation (1). Find the least absolute residues $(c_1,c_2,c_3,c_4)$ corresponding to $(a_1,a_2,a_3,a_4)$ modulo $p'$ (where $p'$ is an arbitrary prime divisor of $a_5$).
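The bilinear forms in the congruences (3) are exactly those of Euler's four-square identity, the multiplicative step behind equation (5). The following Python sketch (with ad hoc random inputs; the function name is ours) checks the identity numerically:

```python
import random

# Numerical check (illustration only) of Euler's four-square identity:
# (a1^2+a2^2+a3^2+a4^2)(b1^2+b2^2+b3^2+b4^2) = c1^2+c2^2+c3^2+c4^2,
# where the c_i are the bilinear forms appearing in the congruences (3).
def euler_product(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 + a2*b2 + a3*b3 + a4*b4,
            a1*b2 - a2*b1 + a3*b4 - a4*b3,
            a1*b3 - a3*b1 + a4*b2 - a2*b4,
            a1*b4 - a4*b1 + a2*b3 - a3*b2)

random.seed(0)
ok = True
for _ in range(200):
    a = [random.randint(-9, 9) for _ in range(4)]
    b = [random.randint(-9, 9) for _ in range(4)]
    c = euler_product(a, b)
    ok = ok and (sum(x*x for x in c)
                 == sum(x*x for x in a) * sum(x*x for x in b))
```

This is the identity expressing the multiplicativity of the quaternion norm.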
Using the properties of residues and their arithmetic, one can manipulate the $a$'s and $c$'s to obtain solutions modulo $p'$ without the relative-primality condition. This last condition is then satisfied, independently of the arithmetic modulo $p'$, by dividing the final result by $\gcd(c_1,c_2,c_3,c_4)$, which yields $(b_1,b_2,b_3,b_4,b_5)$. \newline \newline \end{document}
\begin{document} \title{A universal regularization method for ill-posed Cauchy problems for quasilinear partial differential equations} \author{Michael V. Klibanov \\ \\ Department of Mathematics \& Statistics, University of North Carolina \\ at Charlotte, Charlotte, NC 28223, USA\\ Email: \texttt{\ }[email protected] } \date{} \maketitle \begin{abstract} For the first time, a globally convergent numerical method is presented for ill-posed Cauchy problems for quasilinear PDEs. The key idea is to use Carleman Weight Functions to construct globally strictly convex Tikhonov-like cost functionals. \end{abstract} \textbf{Keywords}: Carleman estimates; Ill-Posed Cauchy problems; quasilinear PDEs \textbf{2010 Mathematics Subject Classification:} 35R30. \section{Introduction} \label{sec:1} This is the first publication where a globally convergent numerical method is presented for ill-posed Cauchy problems for a broad class of quasilinear Partial Differential Equations (PDEs). All previous works were concerned only with the linear case. First, we present the general framework of this method. Next, we specify that framework for ill-posed Cauchy problems for quasilinear elliptic, parabolic and hyperbolic PDEs. Let $G\subset \mathbb{R}^{n}$ be a bounded domain with a piecewise smooth boundary $\partial G.$ Let $\Gamma \subseteq \partial G$ be a part of the boundary.\ Let $A$ be a quasilinear Partial Differential Operator (PDO) of the second order acting on functions $u$ defined in the domain $G$ (details are given in section 2). Suppose that $\Gamma $ is not a characteristic hypersurface of $A$. Consider the Cauchy problem for the quasilinear PDE generated by the operator $A$ with the Cauchy data at $\Gamma .$ Suppose that this problem is ill-posed in the classical sense, e.g. the Cauchy problem for a quasilinear elliptic equation. In this paper we construct a universal globally convergent regularization method for solving such problems. 
This method works for those PDOs for which Carleman estimates are valid for the linearized versions of the operators. Since these estimates are valid for the three main types of second order linear PDOs, elliptic, parabolic and hyperbolic ones, our method is quite general. The key idea of this paper is to construct a weighted Tikhonov-like functional. The weight is the Carleman Weight Function (CWF), i.e. the function which is involved in the Carleman estimate for the principal part of the corresponding PDO. Given a ball $B\left( R\right) $ of an arbitrary radius $R$ in a certain Sobolev space, one can choose the parameter $\lambda \left( R\right) $ of this CWF so large that the weighted Tikhonov-like functional is strictly convex on $B\left( R\right) $ for all $\lambda \geq \lambda \left( R\right) $. The strict convexity, in turn, guarantees the convergence of the gradient method of minimization of this functional when starting from an arbitrary point of the ball $B\left( R\right) $. Since no restrictions are imposed on the radius $R$, this is \emph{global convergence}. On the other hand, the major problem with conventional Tikhonov functionals for nonlinear ill-posed problems is that they usually have many local minima and ravines. The latter implies convergence of the gradient method only if it starts in a sufficiently small neighborhood of the solution, i.e. local convergence. The first globally strictly convex cost functionals for nonlinear ill-posed problems were constructed by Klibanov \cite{Klib97,Kpar} for Coefficient Inverse Problems (CIPs), using CWFs. Based on a modification of this idea, some numerical studies for 1-d CIPs were performed in the book of Klibanov and Timonov \cite{KT}. Recently there has been renewed interest in this topic; see Baudoin, DeBuhan and Ervedoza \cite{Baud}, Beilina and Klibanov \cite{BKconv}, Klibanov and Thanh \cite{KNT}, and Klibanov and Kamburg \cite{KK}.
In particular, the paper \cite{KNT} contains some numerical results. However, globally strictly convex cost functionals were not constructed for ill-posed Cauchy problems for quasilinear PDEs in the past. Concerning applications of Carleman estimates to inverse problems, we refer to the method which was first proposed by Bukhgeim and Klibanov \cite{BukhK,Bukh1,Klib1}. The method of \cite{BukhK,Bukh1,Klib1} was originally designed for proofs of uniqueness theorems for CIPs with single measurement data; see, e.g., some follow-up works of Bukhgeim \cite{Bukh}, Klibanov \cite{K92}, Klibanov and Timonov \cite{KT}, surveys of Klibanov \cite{Ksurvey} and Yamamoto \cite{Y}, as well as sections 1.10 and 1.11 of the book of Beilina and Klibanov \cite{BK}. There is a huge number of publications about ill-posed Cauchy problems for linear PDEs. Hence, we now refer only to a few of them. We note first that in the conventional case of a linear ill-posed problem, the Tikhonov regularization functional is generated by a bounded linear operator; see, e.g., the books of Ivanov, Vasin and Tanana \cite{Iv} and Lavrentiev, Romanov and Shishatskii \cite{LRS}. In contrast, the idea closest to the one of this paper is to use unbounded linear PDOs as generators of Tikhonov functionals for ill-posed Cauchy problems. This idea was first proposed in the book of Lattes and Lions \cite{LL}. In \cite{LL}, Tikhonov functionals were written in variational form. Since Carleman estimates were not used in \cite{LL}, convergence rates for minimizers were not established. The idea of using Carleman estimates for establishing convergence rates of minimizers of those Tikhonov functionals was first proposed in the works of Klibanov and Santosa \cite{KS} and of Klibanov and Malinsky \cite{KM}. Next, it was explored in works of Klibanov with coauthors \cite{Cao,ClK,KKKN}, where accurate numerical results were obtained; also see a recent survey \cite{Klib}.
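The convexification principle recalled above, namely that strict convexity of a regularized functional on a ball $B(R)$ forces convergence of the gradient method from an arbitrary starting point, can be illustrated on a toy strictly convex quadratic. The following Python sketch is not the functional of this paper; the matrix, data, and all parameter values are ad hoc assumptions.

```python
import numpy as np

# Toy illustration of global convergence under strict convexity: gradient
# descent on J(u) = ||Au - f||^2 + beta ||u||^2 converges to the unique
# minimizer from an arbitrary (here, far) starting point.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))
f = rng.standard_normal(30)
beta = 1.0

def grad_J(u):
    # J'(u) = 2 A^T (A u - f) + 2 beta u
    return 2.0 * A.T @ (A @ u - f) + 2.0 * beta * u

L = 2.0 * (np.linalg.norm(A, 2) ** 2 + beta)   # Lipschitz constant of J'
gamma = 1.0 / L                                # admissible step size
u = 100.0 * rng.standard_normal(20)            # arbitrary, far starting point
for _ in range(5000):
    u = u - gamma * grad_J(u)

u_star = np.linalg.solve(A.T @ A + beta * np.eye(20), A.T @ f)
err = np.linalg.norm(u - u_star)               # small despite the far start
```

No smallness assumption on the initial guess is needed here, in contrast to the merely local convergence available for non-convex Tikhonov functionals.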
In addition, Bourgeois and Dard\'{e} have used this idea for the case of the Cauchy problem for the Laplace equation; see, e.g., \cite{B5,B9}. We also refer to Berntsson, Kozlov, Mpinganzima and Turesson \cite{Kozlov3}, Hao and Lesnic \cite{HaoLes}, Kabanikhin \cite{Kab}, Kabanikhin and Karchevsky \cite{Karch1}, Kozlov, Maz'ya and Fomin \cite{Kozlov1} and Li, Xie and Zou \cite{Zou} for some other numerical methods for ill-posed Cauchy problems for linear PDEs. All functions considered below are real-valued. In section 2 we present the general framework of our method. Next, in sections 3, 4 and 5 we specify this framework for quasilinear elliptic, parabolic and hyperbolic PDEs respectively. \section{The method} \label{sec:2} Let $A$ be a quasilinear PDO of the second order in $G$ with its principal part $A_{0},$ \begin{eqnarray} A\left( u\right) &=&\sum\limits_{\left\vert \alpha \right\vert =2}a_{\alpha }\left( x\right) D^{\alpha }u+A_{1}\left( x,\nabla u,u\right) , \label{2.100} \\ A_{0}u &=&\sum\limits_{\left\vert \alpha \right\vert =2}a_{\alpha }\left( x\right) D^{\alpha }u, \label{2.1001} \\ \text{ where functions }a_{\alpha } &\in &C^{1}\left( \overline{G}\right) ,A_{1}\in C\left( \overline{G}\right) \times C^{3}\left( \mathbb{R} ^{n+1}\right) . \label{2.1002} \end{eqnarray} Let $k_{n}=\left[ n/2\right] +2,$ where $\left[ n/2\right] $ is the largest integer which does not exceed the number $n/2.$ By the embedding theorem \begin{equation} H^{k_{n}}\left( G\right) \subset C^{1}\left( \overline{G}\right) \text{ and } \left\Vert f\right\Vert _{C^{1}\left( \overline{G}\right) }\leq C\left\Vert f\right\Vert _{H^{k_{n}}\left( G\right) },\forall f\in H^{k_{n}}\left( G\right) , \label{2.10005} \end{equation} where the constant $C=C\left( n,G\right) >0$ depends only on the listed parameters. \textbf{Cauchy Problem}.
\emph{Let the hypersurface }$\Gamma \subseteq \partial G$\emph{\ and assume that }$\Gamma $ \emph{is not a characteristic hypersurface of the operator }$A_{0}.$\emph{\ Consider the following Cauchy problem for the operator }$A$\emph{\ with the Cauchy data }$g_{0}\left( x\right) ,g_{1}\left( x\right) ,$\emph{\ } \begin{equation} A\left( u\right) =0, \label{2.1003} \end{equation} \begin{equation} u\mid _{\Gamma }=g_{0}\left( x\right) ,\partial _{n}u\mid _{\Gamma }=g_{1}\left( x\right) ,x\in \Gamma . \label{2.1004} \end{equation} \emph{Determine the solution }$u\in H^{k_{n}}\left( G\right) $\emph{\ of the problem (\ref{2.1003}), (\ref{2.1004}) either in the entire domain }$G$\emph{ \ or at least in its subdomain.} \subsection{The Carleman estimate} \label{sec:2.1} Let the function $\xi \in C^{2}\left( \overline{G}\right) $ and $\left\vert \nabla \xi \right\vert \neq 0$ in $\overline{G}.$ For a number $c\geq 0$ denote \begin{equation} \xi _{c}=\left\{ x\in \overline{G}:\xi \left( x\right) =c\right\} ,G_{c}=\left\{ x\in G:\xi \left( x\right) >c\right\} . \label{2.0} \end{equation} We assume that $G_{c}\neq \varnothing .$ Choose a sufficiently small $ \varepsilon >0$ such that $G_{c+2\varepsilon }\neq \varnothing .$ Obviously, $G_{c+2\varepsilon }\subset G_{c+\varepsilon }\subset G_{c}.$ Let $\Gamma _{c}=\left\{ x\in \Gamma :\xi \left( x\right) >c\right\} \neq \varnothing .$ Hence, the boundary of the domain $G_{c}$ consists of two parts, \begin{equation} \partial G_{c}=\partial _{1}G_{c}\cup \partial _{2}G_{c},\partial _{1}G_{c}=\xi _{c},\partial _{2}G_{c}=\Gamma _{c}. \label{2.1} \end{equation} Let $\lambda >1$ be a large parameter. Consider the function $\varphi _{\lambda }\left( x\right) ,$ \begin{equation} \varphi _{\lambda }\left( x\right) =\exp \left( \lambda \xi \left( x\right) \right) . 
\label{2.2} \end{equation} It follows from (\ref{2.1}) and (\ref{2.2}) that \begin{equation} \min_{\overline{G}_{c}}\varphi _{\lambda }\left( x\right) =\varphi _{\lambda }\left( x\right) \mid _{\xi _{c}}=e^{\lambda c}. \label{2.3} \end{equation} \textbf{Definition 2.1}. \emph{We say that the operator }$A_{0}$\emph{\ admits the pointwise\ Carleman estimate in the domain }$G_{c}$\emph{\ with the CWF }$\varphi _{\lambda }\left( x\right) $\emph{\ if there exist constants }$\lambda _{0}\left( G_{c},A_{0}\right) >1,C_{1}\left( G_{c},A_{0}\right) >0$\emph{\ depending only on the domain }$G$\emph{\ and the operator }$A_{0}$\emph{\ such that the following estimate holds} \begin{eqnarray} \left( A_{0}u\right) ^{2}\varphi _{\lambda }^{2}\left( x\right) &\geq &C_{1}\lambda \left( \nabla u\right) ^{2}\varphi _{\lambda }^{2}\left( x\right) +C_{1}\lambda ^{3}u^{2}\varphi _{\lambda }^{2}\left( x\right) + \func{div}U, \label{2.6} \\ \forall \lambda &\geq &\lambda _{0},\forall u\in C^{2}\left( \overline{G} \right) ,\forall x\in G_{c}. \label{2.7} \end{eqnarray} \emph{In (\ref{2.6}) the vector function }$U\left( x\right) $ \emph{ satisfies the following estimate} \begin{equation} \left\vert U\left( x\right) \right\vert \leq C_{1}\lambda ^{3}\left[ \left( \nabla u\right) ^{2}+u^{2}\right] \varphi _{\lambda }^{2}\left( x\right) ,\forall x\in G_{c}. \label{2.8} \end{equation} \subsection{The main result and the gradient method} \label{sec:2.2} Let $R>0$ be an arbitrary number. Denote \begin{equation} B\left( R\right) =\left\{ u\in H^{k_{n}}\left( G_{c}\right) :\left\Vert u\right\Vert _{H^{k_{n}}\left( G_{c}\right) }<R,u\mid _{\Gamma _{c}}=g_{0}\left( x\right) ,\partial _{n}u\mid _{\Gamma _{c}}=g_{1}\left( x\right) \right\} , \label{1} \end{equation} \begin{equation} H_{0}^{k_{n}}\left( G_{c}\right) =\left\{ u\in H^{k_{n}}\left( G_{c}\right) :u\mid _{\Gamma _{c}}=\partial _{n}u\mid _{\Gamma _{c}}=0\right\} . 
\label{2} \end{equation} To solve the above Cauchy problem, we consider the following minimization problem. \textbf{Minimization Problem}. \emph{Assume that the operator }$A_{0}$\emph{ \ satisfies the Carleman estimate (\ref{2.6}), (\ref{2.7}) for a certain number }$c\geq 0.$ \emph{Minimize the functional }$J_{\lambda ,\beta }\left( u\right) $ \emph{in} \emph{(\ref{2.1005})} \emph{on the set }$ B\left( R\right) $\emph{, where} \begin{equation} J_{\lambda ,\beta }\left( u\right) =e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left[ A\left( u\right) \right] ^{2}\varphi _{\lambda }^{2}dx+\beta \left\Vert u\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}, \label{2.1005} \end{equation} \emph{where }$\beta \in \left( 0,1\right) $\emph{\ is the regularization parameter.} Thus, by solving this problem we approximate the function $u$ in the subdomain $G_{c}\subset G.$ The multiplier $e^{-2\lambda \left( c+\varepsilon \right) }$ is introduced to balance the two terms on the right-hand side of (\ref{2.1005}): it allows one to take $\beta \in \left( 0,1\right) .$ Theorem 2.1 is the main result of this paper. Note that since $e^{-\lambda \varepsilon }\ll 1$ for sufficiently large $\lambda ,$ the requirement $\beta \in \left( e^{-\lambda \varepsilon },1\right) $ of this theorem means that the regularization parameter $\beta $ can range from very small values up to unity. \textbf{Theorem 2.1}. \emph{Let }$R>0$\emph{\ be an arbitrary number. Let }$ B\left( R\right) $\emph{\ and }$H_{0}^{k_{n}}\left( G_{c}\right) $\emph{\ be the sets defined in (\ref{1}) and (\ref{2}) respectively.
Then for every point }$u\in B\left( R\right) $\emph{\ there exists the Fr\'{e}chet derivative }$J_{\lambda ,\beta }^{\prime }\left( u\right) \in H_{0}^{k_{n}}\left( G_{c}\right) .$\emph{\ Assume that the operator }$A_{0}$ \emph{\ admits the pointwise\ Carleman estimate in the domain }$G_{c}$\emph{ \ in the sense of Definition 2.1 and let }$\lambda _{0}\left( G_{c},A_{0}\right) >1$\emph{ \ be the constant of this definition. Then there exists a sufficiently large number }$\lambda _{1}=\lambda _{1}\left( G_{c},A,R\right) >\lambda _{0}$ \emph{\ such that for all }$\lambda \geq \lambda _{1}$\emph{\ and for every } $\beta \in \left( e^{-\lambda \varepsilon },1\right) $\emph{\ the functional }$J_{\lambda ,\beta }\left( u\right) $\emph{\ is strictly convex on the set } $B\left( R\right) ,$ \begin{equation*} J_{\lambda ,\beta }\left( u_{2}\right) -J_{\lambda ,\beta }\left( u_{1}\right) -J_{\lambda ,\beta }^{\prime }\left( u_{1}\right) \left( u_{2}-u_{1}\right) \end{equation*} \begin{equation*} \geq C_{2}e^{2\lambda \varepsilon }\left\Vert u_{2}-u_{1}\right\Vert _{H^{1}\left( G_{c+2\varepsilon }\right) }^{2}+\frac{\beta }{2}\left\Vert u_{2}-u_{1}\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2},\forall u_{1},u_{2}\in B\left( R\right) . \end{equation*} \emph{Here the number }$C_{2}=C_{2}\left( A,R,c\right) $\emph{\ depends only on the listed parameters. } We now show that this theorem implies the global convergence of the gradient method for the minimization of the functional (\ref{2.1005}) on the set $B\left( R\right) $. Consider an arbitrary function $u_{1}\in B\left( R\right) ,$ which is our starting point for the iterations of this method. Let the step size of the gradient method be $\gamma >0$. For brevity, we do not indicate the dependence of the functions $u_{n}$ on the parameters $\lambda ,\beta ,\gamma $. Consider the sequence $\left\{ u_{n}\right\} _{n=1}^{\infty }$ of the gradient method, \begin{equation} u_{n+1}=u_{n}-\gamma J_{\lambda ,\beta }^{\prime }\left( u_{n}\right) ,n=1,2,...
\label{2.200} \end{equation} \textbf{Theorem 2.2}. \emph{Let }$\lambda _{1}$\emph{\ be the parameter of Theorem 2.1. Choose a number }$\lambda \geq \lambda _{1}.$ \emph{Let }$\beta \in \left( e^{-\lambda \varepsilon },1\right) .$ \emph{Assume that the functional }$J_{\lambda ,\beta }$\emph{\ achieves its minimal value on the set }$B\left( R\right) $\emph{\ at a point }$u_{\min }\in B\left( R\right) .$ \emph{Then such a point }$u_{\min }$ \emph{is unique. Consider the sequence (\ref{2.200}), where }$u_{1}\in B\left( R\right) $ \emph{is an arbitrary point}. \emph{Assume that }$\left\{ u_{n}\right\} _{n=1}^{\infty }\subset B\left( R\right) .$ \emph{Then there exists a sufficiently small number }$\gamma =\gamma \left( \lambda ,\beta ,B\left( R\right) \right) \in \left( 0,1\right) $ \emph{and a number }$q=q\left( \gamma \right) \in \left( 0,1\right) $\emph{\ such that the sequence (\ref{2.200}) converges to the point }$u_{\min },$ \begin{equation*} \left\Vert u_{n+1}-u_{\min }\right\Vert _{H^{k_{n}}\left( G_{c}\right) }\leq q^{n}\left\Vert u_{1}-u_{\min }\right\Vert _{H^{k_{n}}\left( G_{c}\right) }. \end{equation*} Since the starting point $u_{1}$ of the sequence (\ref{2.200}) is an arbitrary point of the set $B\left( R\right) $ and no restriction is imposed on $R$, Theorem 2.2 claims the \emph{global convergence} of the gradient method. This is in contrast with the merely local convergence of the gradient method for non-convex functionals. Since it was shown in \cite{BKconv} how a direct analog of Theorem 2.2 follows from an analog of Theorem 2.1, although for a different cost functional, we do not prove Theorem 2.2 here. We also note that similar global convergence results can likely be proven for other versions of the gradient method, e.g. the conjugate gradient method; however, we do not pursue this here for brevity.
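The iteration (\ref{2.200}) and the linear convergence rate of Theorem 2.2 can be illustrated by a purely schematic finite-dimensional sketch, in which the strictly convex functional $J_{\lambda ,\beta }$ is replaced by a strictly convex quadratic form; all names and parameter values below are illustrative stand-ins rather than the actual functional (\ref{2.1005}).

```python
import numpy as np

# Hypothetical finite-dimensional stand-in for the functional J_{lambda,beta}:
# J(u) = 0.5 * u^T A u - b^T u with A symmetric positive definite, so J is
# strictly convex and has the unique minimizer u_min = A^{-1} b.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)        # SPD -> strict convexity
b = rng.standard_normal(5)
u_min = np.linalg.solve(A, b)

def grad_J(u):
    return A @ u - b                 # Frechet derivative of J at u

# Gradient method u_{n+1} = u_n - gamma * J'(u_n), as in (2.200).
gamma = 1.0 / np.linalg.norm(A, 2)   # small step size gamma in (0, 1)
u = rng.standard_normal(5)           # arbitrary starting point u_1
errs = []
for n in range(50):
    u = u - gamma * grad_J(u)
    errs.append(np.linalg.norm(u - u_min))

# Linear (geometric) convergence: the error ratio stays below some q < 1.
ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
assert max(ratios) < 1.0
```

With this step size the contraction factor is $q=1-\lambda _{\min }/\lambda _{\max }<1$ for the eigenvalues of the stand-in matrix, mirroring the role of strict convexity in Theorem 2.2.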
\subsection{Proof of Theorem 2.1} In this proof $C_{2}=C_{2}\left( A,R,c\right) >0$ denotes different numbers depending only on the listed parameters. Let $u_{1},u_{2}\in B\left( R\right) $ be two arbitrary functions and let $h=u_{2}-u_{1}.$ Then (\ref{2.1006}) implies that \begin{equation} h\in H_{0}^{k_{n}}\left( G_{c}\right) . \label{2.11} \end{equation} Let \begin{equation} D=\left( A\left( u_{2}\right) \right) ^{2}-\left( A\left( u_{1}\right) \right) ^{2}. \label{2.12} \end{equation} We now rewrite the expression for $D$ in a form convenient for us. First, recall the Taylor formula with the Lagrange form of the remainder, \begin{equation*} f\left( y+z\right) =f\left( y\right) +f^{\prime }\left( y\right) z+\frac{ z^{2}}{2}f^{\prime \prime }\left( \eta \right) ,\forall y,z\in \mathbb{R} ,\forall f\in C^{2}\left( \mathbb{R}\right) , \end{equation*} where $\eta =\eta \left( y,z\right) $ is a number located between $y$ and $y+z$. By (\ref{2.10005}) \begin{equation} \left\Vert h\right\Vert _{C^{1}\left( \overline{G}_{c}\right) }=\left\Vert u_{2}-u_{1}\right\Vert _{C^{1}\left( \overline{G}_{c}\right) }\leq 2CR. \label{2.13} \end{equation} Hence, using this formula, (\ref{2.1001}) and (\ref{2.13}), we obtain for the operator $A_{1}$ \begin{equation*} A_{1}\left( x,\nabla \left( u_{1}+h\right) ,u_{1}+h\right) =A_{1}\left( x,\nabla u_{1},u_{1}\right) \end{equation*} \begin{equation*} +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h+F\left( x,\nabla u_{1},u_{1},h\right) , \end{equation*} where the function $F$ satisfies the following estimate \begin{equation} \left\vert F\left( x,\nabla u_{1},u_{1},h\right) \right\vert \leq C_{2}\left( \left( \nabla h\right) ^{2}+h^{2}\right) ,\forall x\in G_{c},\forall u_{1}\in B\left( R\right) .
\label{2.14} \end{equation} Hence, \begin{equation*} A\left( u_{1}+h\right) =A_{0}\left( u_{1}+h\right) +A_{1}\left( x,\nabla \left( u_{1}+h\right) ,u_{1}+h\right) = \end{equation*} \begin{equation*} A\left( u_{1}\right) +\left[ A_{0}\left( h\right) +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h\right] +F\left( x,\nabla u_{1},u_{1},h\right) . \end{equation*} Hence, by (\ref{2.12}) \begin{equation*} D=2A\left( u_{1}\right) \left[ A_{0}\left( h\right) +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h\right] \end{equation*} \begin{equation} +\left[ A_{0}\left( h\right) +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h\right] ^{2}+F^{2} \label{2.15} \end{equation} \begin{equation*} +2\left[ A_{0}\left( h\right) +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h\right] F. \end{equation*} The expression in the first line of (\ref{2.15}) is linear with respect to $ h $. We denote it as $Q\left( u_{1}\right) \left( h\right) .$ Consider the linear functional \begin{equation} \widetilde{J}_{u_{1}}\left( h\right) =\dint\limits_{G_{c}}Q\left( u_{1}\right) \left( h\right) \varphi _{\lambda }^{2}dx+2\beta \left[ u_{1},h \right] , \label{2.16} \end{equation} where $\left[ ,\right] $ denotes the scalar product in $H^{k_{n}}\left( G_{c}\right) .$ Clearly, $\widetilde{J}_{u_{1}}\left( h\right) :H_{0}^{k_{n}}\left( G_{c}\right) \rightarrow \mathbb{R}$ is a bounded linear functional. 
Hence, by the Riesz theorem, there exists a unique element $P\left( u_{1}\right) \in H_{0}^{k_{n}}\left( G_{c}\right) $ such that $\widetilde{J}_{u_{1}}\left( h\right) =\left[ P\left( u_{1}\right) ,h \right] ,\forall h\in H_{0}^{k_{n}}\left( G_{c}\right) .$ Furthermore, $ \left\Vert P\left( u_{1}\right) \right\Vert _{H^{k_{n}}\left( G_{c}\right) }=\left\Vert \widetilde{J}_{u_{1}}\right\Vert .$ This proves the existence of the Fr\'{e}chet derivative \begin{equation} J_{\lambda ,\beta }^{\prime }\left( u_{1}\right) =P\left( u_{1}\right) \in H_{0}^{k_{n}}\left( G_{c}\right) . \label{2.17} \end{equation} Let \begin{equation*} S\left( x,u_{1},h\right) =D-2A\left( u_{1}\right) \left[ A_{0}\left( h\right) +\dsum\limits_{i=1}^{n}\partial _{u_{x_{i}}}A_{1}\left( x,\nabla u_{1},u_{1}\right) h_{x_{i}}+\partial _{u}A_{1}\left( x,\nabla u_{1},u_{1}\right) h\right] . \end{equation*} Then, using (\ref{2.13})-(\ref{2.15}) and the Cauchy-Schwarz inequality, we obtain \begin{equation*} S\geq \frac{1}{2}\left( A_{0}h\right) ^{2}-C_{2}\left( \left( \nabla h\right) ^{2}+h^{2}\right) ,\forall x\in G_{c},\forall u_{1}\in B\left( R\right) . \end{equation*} Hence, using (\ref{2.16}) and (\ref{2.17}), we obtain \begin{equation*} J_{\lambda ,\beta }\left( u_{1}+h\right) -J_{\lambda ,\beta }\left( u_{1}\right) -J_{\lambda ,\beta }^{\prime }\left( u_{1}\right) \left( h\right) \end{equation*} \begin{equation} \geq \frac{1}{2}e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left( A_{0}h\right) ^{2}\varphi _{\lambda }^{2}dx-C_{2}e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left( \left( \nabla h\right) ^{2}+h^{2}\right) \varphi _{\lambda }^{2}dx+\beta \left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}. \label{2.18} \end{equation} Next, integrate (\ref{2.6}) over the domain $G_{c},$ using the Gauss formula together with (\ref{2.7}) and (\ref{2.8}). Then replace $u$ with $h$ in the resulting formula.
Even though there is no guarantee that $h\in C^{2}\left( \overline{G}_{c}\right) ,$ a density argument ensures that the resulting inequality remains valid. Hence, using (\ref{2.11}), we obtain \begin{equation*} \frac{1}{2}e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left( A_{0}h\right) ^{2}\varphi _{\lambda }^{2}dx\geq \frac{C_{1}}{2}e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left( \lambda \left( \nabla h\right) ^{2}+\lambda ^{3}h^{2}\right) \varphi _{\lambda }^{2}dx \end{equation*} \begin{equation*} -\frac{C_{1}}{2}\lambda ^{3}e^{-2\lambda \varepsilon }\dint\limits_{\xi _{c}}\left( \left( \nabla h\right) ^{2}+h^{2}\right) dS. \end{equation*} Substituting this in (\ref{2.18}) and using again (\ref{2.13}) and $\beta >e^{-\lambda \varepsilon }$, we obtain for sufficiently large $\lambda $ \begin{equation*} J_{\lambda ,\beta }\left( u_{1}+h\right) -J_{\lambda ,\beta }\left( u_{1}\right) -J_{\lambda ,\beta }^{\prime }\left( u_{1}\right) \left( h\right) \end{equation*} \begin{equation*} \geq C_{2}e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{c}}\left( \lambda \left( \nabla h\right) ^{2}+\lambda ^{3}h^{2}\right) \varphi _{\lambda }^{2}dx-C_{2}e^{-2\lambda \varepsilon }\lambda ^{3}\left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}+\beta \left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2} \end{equation*} \begin{equation*} \geq C_{2}e^{2\lambda \varepsilon }\left\Vert h\right\Vert _{H^{1}\left( G_{c+2\varepsilon }\right) }^{2}-C_{2}e^{-2\lambda \varepsilon }\lambda ^{3}\left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}+\frac{ e^{-\lambda \varepsilon }}{2}\left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}+\frac{\beta }{2}\left\Vert h\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2} \end{equation*} \begin{equation*} \geq C_{2}e^{2\lambda \varepsilon }\left\Vert h\right\Vert _{H^{1}\left( G_{c+2\varepsilon }\right) }^{2}+\frac{\beta }{2}\left\Vert h\right\Vert
_{H^{k_{n}}\left( G_{c}\right) }^{2}.\text{ \ \ \ \ }\square \end{equation*} \section{Cauchy problem for the quasilinear elliptic equation} \label{sec:3} In this section we apply Theorem 2.1 to the Cauchy problem for the quasilinear elliptic equation. We now rewrite the operator $A$ in (\ref {2.100}) as \begin{equation} Au:=L_{ell}\left( u\right) =\dsum\limits_{i,j=1}^{n}a_{i,j}\left( x\right) u_{x_{i}x_{j}}+A_{1}\left( x,\nabla u,u\right) ,x\in G, \label{3.1} \end{equation} \begin{equation} A_{0}u:=L_{0}u=\dsum\limits_{i,j=1}^{n}a_{i,j}\left( x\right) u_{x_{i}x_{j}}, \label{3.2} \end{equation} where $a_{i,j}\left( x\right) =a_{j,i}\left( x\right) ,\forall i,j$ and $ L_{0}$ is the principal part of the operator $L_{ell}.$ We impose assumption (\ref {2.1002}). The ellipticity of the operator $L_{0}$ means that there exist two constants $\mu _{1},\mu _{2}>0,\mu _{1}\leq \mu _{2}$ such that \begin{equation} \mu _{1}\left\vert \eta \right\vert ^{2}\leq \dsum\limits_{i,j=1}^{n}a_{i,j}\left( x\right) \eta _{i}\eta _{j}\leq \mu _{2}\left\vert \eta \right\vert ^{2},\forall x\in \overline{G},\forall \eta =\left( \eta _{1},...\eta _{n}\right) \in \mathbb{R}^{n}. \label{3.4} \end{equation} As above, let $\Gamma \subset \partial G$ be the part of the boundary $ \partial G$ where the Cauchy data are given. Assume that the equation of $ \Gamma $ is $\Gamma =\left\{ x\in \mathbb{R}^{n}:x_{1}=p\left( \overline{x} \right) ,\overline{x}=\left( x_{2},...,x_{n}\right) \in \Gamma ^{\prime }\subset \mathbb{R}^{n-1}\right\} $ and that the function $p\in C^{2}\left( \overline{\Gamma }^{\prime }\right) .$ Here $\Gamma ^{\prime }\subset \mathbb{R}^{n-1}$ is a bounded domain.
Changing variables as $x=\left( x_{1}, \overline{x}\right) \Leftrightarrow \left( x_{1}^{\prime },\overline{x} \right) ,$ where $x_{1}^{\prime }=x_{1}-p\left( \overline{x}\right) ,$ we obtain that in the new variables $\Gamma =\left\{ x\in \mathbb{R}^{n}:x_{1}=0,\overline{x}\in \Gamma ^{\prime }\right\} .$ For simplicity of notation, we keep the same notation $x_{1}$ for the new variable $x_{1}^{\prime }.$ This change of variables does not affect the ellipticity of the operator $L_{ell}$. Let $X>0$ be a certain number. Without loss of generality, we assume that \begin{equation} G\subset \left\{ x_{1}>0\right\} ,\text{ }\Gamma =\left\{ x\in \mathbb{R} ^{n}:x_{1}=0,\left\vert \overline{x}\right\vert <X\right\} \subset \partial G. \label{3.5} \end{equation} \textbf{Cauchy Problem for the Quasilinear Elliptic Equation}. \emph{Suppose that conditions (\ref{3.5}) hold. Find a function }$u\in H^{k_{n}}\left( G\right) $\emph{\ that satisfies the equation } \begin{equation} L_{ell}\left( u\right) =0 \label{3.6} \end{equation} \emph{and has the following Cauchy data }$g_{0},g_{1}$ \emph{at }$\Gamma $ \begin{equation} u\mid _{\Gamma }=g_{0}\left( \overline{x}\right) ,u_{x_{1}}\mid _{\Gamma }=g_{1}\left( \overline{x}\right) . \label{3.7} \end{equation} Let $\lambda >1$ and $\nu >1$ be two large parameters, which we define later. Consider two arbitrary numbers $a,c=const.\in \left( 0,1/2\right) ,$ where $a<c$. To introduce the Carleman estimate, consider functions $\psi \left( x\right) $, $\varphi _{\lambda }\left( x\right) $ defined as \begin{equation} \psi \left( x\right) =x_{1}+\frac{\left\vert \overline{x}\right\vert ^{2}}{ X^{2}}+a,\varphi _{\lambda }\left( x\right) =\exp \left( \lambda \psi ^{-\nu }\right) .
\label{3.70} \end{equation} Then the analogs of the sets (\ref{2.0}) and $\Gamma _{c}$ are \begin{eqnarray} G_{c} &=&\left\{ x:x_{1}>0,\left( x_{1}+\frac{\left\vert \overline{x} \right\vert ^{2}}{X^{2}}+a\right) ^{-\nu }>c^{-\nu }\right\} , \label{3.8} \\ \xi _{c} &=&\left\{ x:x_{1}>0,\left( x_{1}+\frac{\left\vert \overline{x} \right\vert ^{2}}{X^{2}}+a\right) ^{-\nu }=c^{-\nu }\right\} , \label{3.80} \\ \Gamma _{c} &=&\left\{ x:x_{1}=0,\left( \frac{\left\vert \overline{x} \right\vert ^{2}}{X^{2}}+a\right) ^{-\nu }>c^{-\nu }\right\} . \label{3.9} \end{eqnarray} Hence, $\partial G_{c}=\xi _{c}\cup \Gamma _{c}.$ Below in this section we keep notations (\ref{3.8})-(\ref{3.9}). We assume that $G_{c}\neq \varnothing $ and $\overline{G}_{c}\subseteq G.$ By (\ref{3.5}) and (\ref {3.9}), $\Gamma _{c}\subset \Gamma .$ For a sufficiently small number $ \varepsilon >0$ define the subdomain $G_{c+2\varepsilon }$ as \begin{equation} G_{c+2\varepsilon }=\left\{ x:x_{1}>0,\left( x_{1}+\frac{\left\vert \overline{x}\right\vert ^{2}}{X^{2}}+a\right) ^{-\nu }>c^{-\nu }+2\varepsilon \right\} . \label{3.91} \end{equation} Hence, $G_{c+2\varepsilon }\subset G_{c}.$ Lemma 3.1 follows immediately from Lemma 3 of \S 1 of Chapter 4 of the book \cite{LRS}. \textbf{Lemma 3.1 }(Carleman estimate).
\emph{There exist a sufficiently large number } $\nu _{0}=\nu _{0}\left( a,\mu _{1},\mu _{2},\max_{i,j}\left\Vert a_{i,j}\right\Vert _{C^{1}\left( \overline{G}_{c}\right) },X,n\right) >1$\emph{\ and a sufficiently large absolute constant }$\lambda _{0}>1$\emph{ \ such that for all }$\nu \geq \nu _{0},\lambda \geq \lambda _{0}$\emph{\ and for all functions }$u\in C^{2}\left( \overline{G}_{1/2}\right) $\emph{\ the following pointwise Carleman estimate is valid for all }$x\in G_{1/2}$ \emph{\ with a constant }$C=C\left( n,\max_{i,j}\left\Vert a_{i,j}\right\Vert _{C^{1}\left( \overline{G}_{1/2}\right) }\right) >0$ \emph{and with the function }$\varphi _{\lambda }$\emph{\ from (\ref{3.70})} \begin{eqnarray*} \left( L_{0}u\right) ^{2}\varphi _{\lambda }^{2} &\geq &C\lambda \left\vert \nabla u\right\vert ^{2}\varphi _{\lambda }^{2}+C\lambda ^{3}u^{2}\varphi _{\lambda }^{2}+\func{div}U, \\ \left\vert U\right\vert &\leq &C\lambda ^{3}\left[ \left( \nabla u\right) ^{2}+u^{2}\right] \varphi _{\lambda }^{2}. \end{eqnarray*} This Carleman estimate allows us to construct the weighted Tikhonov functional to solve the Cauchy problem (\ref{3.6}), (\ref{3.7}). Similarly to (\ref{2.1005}), we minimize the functional $J_{\lambda ,\beta ,ell}\left( u\right) $ in (\ref{3.10}) on the set $B\left( R\right) $ defined in (\ref{1}), where \begin{equation} J_{\lambda ,\beta ,ell}\left( u\right) =e^{-2\lambda \left( c^{-\nu }+\varepsilon \right) }\dint\limits_{G_{c}}\left[ L_{ell}\left( u\right) \right] ^{2}\varphi _{\lambda }^{2}dx+\beta \left\Vert u\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2}, \label{3.10} \end{equation} where the CWF $\varphi _{\lambda }\left( x\right) $ is the one in (\ref{3.70}) and the functions $g_{0}$ and $g_{1}$ are the ones in (\ref{3.7}). Hence, Lemma 3.1 and Theorem 2.1 immediately imply Theorem 3.1. \textbf{Theorem 3.1.} \emph{Let }$R>0$\emph{\ be an arbitrary number.
Let }$ B\left( R\right) $\emph{\ and }$H_{0}^{k_{n}}\left( G_{c}\right) $\emph{\ be the sets defined in (\ref{1}) and (\ref{2}) respectively. Then for every point }$u\in B\left( R\right) $\emph{\ there exists the Fr\'{e}chet derivative }$J_{\lambda ,\beta ,ell}^{\prime }\left( u\right) \in H_{0}^{k_{n}}\left( G_{c}\right) .$\emph{\ Choose the numbers }$\nu =\nu _{0},$ $\lambda _{0}$ \emph{as in Lemma 3.1. There exists a sufficiently large number }$\lambda _{1}=\lambda _{1}\left( R,L_{ell}\right) >\lambda _{0}>1$ \emph{\ such that for all }$\lambda \geq \lambda _{1}$\emph{\ and for every } $\beta \in \left( e^{-\lambda \varepsilon },1\right) $\emph{\ the functional }$J_{\lambda ,\beta ,ell}\left( u\right) $\emph{\ is strictly convex on the set }$B\left( R\right) ,$ \begin{equation*} J_{\lambda ,\beta ,ell}\left( u_{2}\right) -J_{\lambda ,\beta ,ell}\left( u_{1}\right) -J_{\lambda ,\beta ,ell}^{\prime }\left( u_{1}\right) \left( u_{2}-u_{1}\right) \end{equation*} \begin{equation*} \geq C_{2}e^{2\lambda \varepsilon }\left\Vert u_{2}-u_{1}\right\Vert _{H^{1}\left( G_{c+2\varepsilon }\right) }^{2}+\frac{\beta }{2}\left\Vert u_{2}-u_{1}\right\Vert _{H^{k_{n}}\left( G_{c}\right) }^{2},\forall u_{1},u_{2}\in B\left( R\right) . \end{equation*} \emph{Here the number }$C_{2}=C_{2}\left( L_{ell},R,c\right) >0$\emph{\ depends only on the listed parameters. } \section{Quasilinear parabolic equation with the lateral Cauchy data} \label{sec:4} Choose an arbitrary $T=const.>0$ and denote $G_{T}=G\times \left( -T,T\right) .$ Let $L_{par}$ be the quasilinear elliptic operator of the second order in $G_{T},$ which we define in the same way as the operator $ L_{ell}$ in (\ref{3.1})-(\ref{3.4}), with the only difference that now its coefficients depend on both $x$ and $t$ and the domain $G$ is replaced with the domain $G_{T}.$ Let $L_{0,par}$ be the similarly defined principal part of the operator $L_{par},$ see (\ref{3.2}).
Next, we define the quasilinear parabolic operator as $P=\partial _{t}-L_{par}$. The principal part of $P$ is $P_{0}=\partial _{t}-L_{0,par}.$ Thus, \begin{equation} L_{par}\left( u\right) =\dsum\limits_{i,j=1}^{n}a_{i,j}\left( x,t\right) u_{x_{i}x_{j}}+A_{1}\left( x,t,\nabla u,u\right) , \label{5.1} \end{equation} \begin{equation} Au:=Pu=u_{t}-L_{par}\left( u\right) ,\left( x,t\right) \in G_{T}, \label{5.2} \end{equation} \begin{equation} P_{0}u=u_{t}-L_{0,par}u=u_{t}-\dsum\limits_{i,j=1}^{n}a_{i,j}\left( x,t\right) u_{x_{i}x_{j}}, \label{5.3} \end{equation} \begin{equation} a_{i,j}\in C^{1}\left( \overline{G}_{T}\right) ,\text{ }A_{1}\in C\left( \overline{G}_{T}\right) \times C^{3}\left( \mathbb{R}^{n+1}\right) , \label{5.4} \end{equation} \begin{equation} \mu _{1}\left\vert \eta \right\vert ^{2}\leq \dsum\limits_{i,j=1}^{n}a_{i,j}\left( x,t\right) \eta _{i}\eta _{j}\leq \mu _{2}\left\vert \eta \right\vert ^{2},\forall \left( x,t\right) \in \overline{ G}_{T},\forall \eta =\left( \eta _{1},...\eta _{n}\right) \in \mathbb{R}^{n}. \label{5.5} \end{equation} Let $\Gamma \subset \partial G$, $\Gamma \in C^{2},$ be the subsurface of the boundary $\partial G$ with the same properties as the ones in section 3.\ Denote $\Gamma _{T}=\Gamma \times \left( -T,T\right) .$ Without loss of generality we assume that $\Gamma $ is the same as in (\ref{3.5}). Consider the parabolic equation \begin{equation} P\left( u\right) =u_{t}-L_{par}\left( u\right) =0\text{ \ in }G_{T}.\text{ } \label{4.2} \end{equation} \textbf{Cauchy Problem with the Lateral Data for Quasilinear Parabolic Equation (\ref{4.2}).} \emph{Assume that conditions (\ref{3.5}) hold. Find a function }$u\in H^{k_{n+1}}\left( G_{T}\right) $\emph{\ that satisfies equation (\ref{4.2}) and has the following lateral Cauchy data }$ g_{0},g_{1}$ \emph{at }$\Gamma _{T}$ \begin{equation} u\mid _{\Gamma _{T}}=g_{0}\left( \overline{x},t\right) ,u_{x_{1}}\mid _{\Gamma _{T}}=g_{1}\left( \overline{x},t\right) .
\label{4.3} \end{equation} We now introduce the Carleman estimate, which is similar to the one of section 3. Let $\lambda >1$ and $\nu >1$ be two large parameters, which we define later. Consider two arbitrary numbers $a,c=const.\in \left( 0,1/2\right) ,$ where $a<c$. Consider functions $\psi \left( x,t\right) $, $ \varphi _{\lambda }\left( x,t\right) $ defined as \begin{equation} \psi \left( x,t\right) =x_{1}+\frac{\left\vert \overline{x}\right\vert ^{2}}{ X^{2}}+\frac{t^{2}}{T^{2}}+a,\text{ }\varphi _{\lambda }\left( x,t\right) =\exp \left( \lambda \psi ^{-\nu }\right) . \label{4.30} \end{equation} The analogs of (\ref{3.8})-(\ref{3.91})\emph{\ }are \begin{eqnarray} G_{T,c} &=&\left\{ \left( x,t\right) :x_{1}>0,\left( x_{1}+\frac{\left\vert \overline{x}\right\vert ^{2}}{X^{2}}+\frac{t^{2}}{T^{2}}+a\right) ^{-\nu }>c^{-\nu }\right\} , \label{4.4} \\ \xi _{c} &=&\left\{ \left( x,t\right) :x_{1}>0,\left( x_{1}+\frac{\left\vert \overline{x}\right\vert ^{2}}{X^{2}}+\frac{t^{2}}{T^{2}}+a\right) ^{-\nu }=c^{-\nu }\right\} , \label{4.40} \\ \Gamma _{c} &=&\left\{ \left( x,t\right) :x_{1}=0,\left( \frac{\left\vert \overline{x}\right\vert ^{2}}{X^{2}}+\frac{t^{2}}{T^{2}}+a\right) ^{-\nu }>c^{-\nu }\right\} , \label{4.5} \\ \partial G_{T,c} &=&\xi _{c}\cup \Gamma _{c}, \label{4.6} \\ G_{T,c+2\varepsilon } &=&\left\{ \left( x,t\right) :x_{1}>0,\left( x_{1}+ \frac{\left\vert \overline{x}\right\vert ^{2}}{X^{2}}+\frac{t^{2}}{T^{2}} +a\right) ^{-\nu }>c^{-\nu }+2\varepsilon \right\} . \label{4.7} \end{eqnarray} We assume that \begin{equation} G_{T,c}\neq \varnothing ,G_{T,c}\subset G_{T}\text{ and }\overline{G} _{T,c}\cap \left\{ t=\pm T\right\} =\varnothing . \label{4.700} \end{equation} In (\ref{4.7}) $\varepsilon >0$ is so small that $G_{T,c+2\varepsilon }\neq \varnothing .$ Below in this section we use notations (\ref{4.30})-(\ref{4.7}).
By (\ref{3.5}) and (\ref{4.5}), $\Gamma _{c}\subset \Gamma _{T}.$ Lemma 4.1 follows immediately from Lemma 3 of \S 1 of Chapter 4 of the book \cite {LRS}. \textbf{Lemma 4.1 }(Carleman estimate). \emph{Let }$P_{0}$\emph{\ be the parabolic operator defined via (\ref{5.1})-(\ref{5.5}). There exist a sufficiently large number }$\nu _{0}=\nu _{0}\left( a,c,\mu _{1},\mu _{2},\max_{i,j}\left\Vert a_{i,j}\right\Vert _{C^{1}\left( \overline{G} _{T,c}\right) },X,T\right) >1$\emph{\ and a sufficiently large absolute constant }$\lambda _{0}>1$\emph{\ such that for all }$\nu \geq \nu _{0},\lambda \geq \lambda _{0}$\emph{\ and for all functions }$u\in C^{2,1}\left( \overline{G}_{T,1/2}\right) $\emph{\ the following pointwise Carleman estimate is valid for all }$\left( x,t\right) \in G_{T,1/2}$\emph{\ with a constant }$C=C\left( n,\max_{i,j}\left\Vert a_{i,j}\right\Vert _{C^{1}\left( \overline{G}_{T}\right) }\right) >0$ \emph{and} \emph{with the function }$\varphi _{\lambda }$\emph{\ defined in (\ref{4.30})} \begin{eqnarray*} \left( P_{0}u\right) ^{2}\varphi _{\lambda }^{2} &\geq &C\lambda \left\vert \nabla u\right\vert ^{2}\varphi _{\lambda }^{2}+C\lambda ^{3}u^{2}\varphi _{\lambda }^{2}+\func{div}U+V_{t}, \\ \left\vert U\right\vert ,\left\vert V\right\vert &\leq &C\lambda ^{3}\left[ \left( \nabla u\right) ^{2}+u_{t}^{2}+u^{2}\right] \varphi _{\lambda }^{2}. \end{eqnarray*} Let $R>0$ be an arbitrary number. Similarly to (\ref{1}) and (\ref{2}), let \begin{equation} B\left( R\right) =\left\{ u\in H^{k_{n+1}}\left( G_{T,c}\right) :\left\Vert u\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }<R,u\mid _{\Gamma _{c}}=g_{0}\left( \overline{x},t\right) ,\partial _{n}u\mid _{\Gamma _{c}}=g_{1}\left( \overline{x},t\right) \right\} , \label{5.6} \end{equation} \begin{equation} H_{0}^{k_{n+1}}\left( G_{T,c}\right) =\left\{ u\in H^{k_{n+1}}\left( G_{T,c}\right) :u\mid _{\Gamma _{c}}=\partial _{n}u\mid _{\Gamma _{c}}=0\right\} .
\label{5.7} \end{equation} To solve the problem (\ref{4.2}), (\ref{4.3}), we minimize the functional $ J_{\lambda ,\beta ,par}\left( u\right) $ in (\ref{4.8}) on the set $B\left( R\right) $ defined in (\ref{5.6}), where \begin{equation} J_{\lambda ,\beta ,par}\left( u\right) =e^{-2\lambda \left( c^{-\nu }+\varepsilon \right) }\dint\limits_{G_{T,c}}\left[ P\left( u\right) \right] ^{2}\varphi _{\lambda }^{2}dxdt+\beta \left\Vert u\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }^{2}, \label{4.8} \end{equation} where the operator $P$ is defined via (\ref{5.1})-(\ref{5.5}). Hence, Lemma 4.1 and Theorem 2.1 imply Theorem 4.1. \textbf{Theorem 4.1. }\emph{Let }$R>0$\emph{\ be an arbitrary number.\ Let }$ B\left( R\right) $\emph{\ and }$H_{0}^{k_{n+1}}\left( G_{T,c}\right) $ \emph{ be the sets defined in (\ref{5.6}) and (\ref{5.7}) respectively. Then} \emph{ for every point }$u\in B\left( R\right) $\emph{\ there exists the Fr\'{e}chet derivative }$J_{\lambda ,\beta ,par}^{\prime }\left( u\right) \in H_{0}^{k_{n+1}}\left( G_{T,c}\right) .$\emph{\ Choose the numbers }$\nu =\nu _{0},$ $\lambda _{0}$ \emph{as in Lemma 4.1. There exists a sufficiently large number }$\lambda _{1}=\lambda _{1}\left( R,P\right) >\lambda _{0}>1$ \emph{\ such that for all }$\lambda \geq \lambda _{1}$\emph{\ and for every } $\beta \in \left( e^{-\lambda \varepsilon },1\right) $\emph{\ the functional }$J_{\lambda ,\beta ,par}\left( u\right) $\emph{\ is strictly convex on the set }$B\left( R\right) ,$ \begin{equation*} J_{\lambda ,\beta ,par}\left( u_{2}\right) -J_{\lambda ,\beta ,par}\left( u_{1}\right) -J_{\lambda ,\beta ,par}^{\prime }\left( u_{1}\right) \left( u_{2}-u_{1}\right) \end{equation*} \begin{equation*} \geq C_{3}e^{2\lambda \varepsilon }\left\Vert u_{2}-u_{1}\right\Vert _{H^{1}\left( G_{T,c+2\varepsilon }\right) }^{2}+\frac{\beta }{2}\left\Vert u_{2}-u_{1}\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }^{2},\forall u_{1},u_{2}\in B\left( R\right) .
\end{equation*} \emph{Here the number }$C_{3}=C_{3}\left( L_{par},R,c\right) >0$\emph{\ depends only on the listed parameters. } \section{Quasilinear hyperbolic equation with lateral Cauchy data} \label{sec:5} In this section, notations for the domain $G\subset \mathbb{R}^{n}$ and the time cylinder $G_{T}$ are the same as the ones in section 4. Denote $ S_{T}=\partial G\times \left( -T,T\right) .$ The Carleman estimate of Lemma 5.1 of this section was proved in Theorem 1.10.2 of the book of Beilina and Klibanov \cite{BK}. Other forms of Carleman estimates for the hyperbolic case can be found in, e.g., Theorem 3.4.1 of the book of Isakov \cite{Is}, Theorem 2.2.4 of the book of Klibanov and Timonov \cite{KT}, Lemma 2 of \S 4 of Chapter 4 of the book of Lavrentiev, Romanov and Shishatskii \cite{LRS} and in Lemma 3.1 of Triggiani and Yao \cite{Trig}. Let $a_{l}$ and $a_{u}$ be two numbers such that $0<a_{l}<a_{u}.$ Let the function $a\left( x\right) $ satisfy the following conditions in $G$ \begin{equation} a\left( x\right) \in \left[ a_{l},a_{u}\right] ,a\in C^{1}\left( \overline{G} \right) . \label{6.1} \end{equation} In addition, we assume that there exists a point $x_{0}\in G$ such that \begin{equation} \left( \nabla a\left( x\right) ,x-x_{0}\right) \geq 0,\forall x\in \overline{ G}, \label{6.2} \end{equation} where $\left( \cdot ,\cdot \right) $ denotes the scalar product in $\mathbb{R }^{n}$. In particular, if $a\left( x\right) \equiv 1,$ then (\ref {6.2}) holds for any $x_{0}\in G.$ We need inequality (\ref{6.2}) for the validity of the Carleman estimate of Lemma 5.1. Assume that the function $ A_{1}$ satisfies condition (\ref{5.4}).
Consider the quasilinear hyperbolic equation in the time cylinder $G_{T}$ with the lateral Cauchy data $ g_{0}\left( x,t\right) ,g_{1}\left( x,t\right) ,$ \begin{eqnarray} L_{hyp}u &=&a\left( x\right) u_{tt}-\Delta u-A_{1}\left( x,t,\nabla u,u\right) =0\text{ in }G_{T}, \label{6.3} \\ u &\mid &_{S_{T}}=g_{0}\left( x,t\right) ,\partial _{n}u\mid _{S_{T}}=g_{1}\left( x,t\right) . \label{6.4} \end{eqnarray} Denote $L_{0,hyp}u=a\left( x\right) u_{tt}-\Delta u.$ \textbf{Cauchy Problem with the Lateral Data for the Hyperbolic Equation ( \ref{6.3})}. \emph{Find a function }$u\in H^{k_{n+1}}\left( G_{T}\right) $ \emph{\ satisfying conditions (\ref{6.3}), (\ref{6.4}).} Let $\eta \in \left( 0,1\right) $ and let $\lambda >1$ be a large parameter. Define functions $\xi \left( x,t\right) $ and $\varphi _{\lambda }\left( x,t\right) $ as \begin{equation} \xi \left( x,t\right) =\left\vert x-x_{0}\right\vert ^{2}-\eta t^{2},\varphi _{\lambda }\left( x,t\right) =\exp \left[ \lambda \xi \left( x,t\right) \right] . \label{6.5} \end{equation} Similarly to (\ref{2.0}), for a number $c>0$ define the hypersurface $\xi _{c}$ and the domain $G_{T,c}$ as \begin{equation} \xi _{c}=\left\{ \left( x,t\right) \in G_{T}:\xi \left( x,t\right) =c\right\} ,\text{ }G_{T,c}=\left\{ \left( x,t\right) \in G_{T}:\xi \left( x,t\right) >c\right\} . \label{6.6} \end{equation} \textbf{Lemma 5.1 }(Carleman estimate). \emph{Let }$n\geq 2$\emph{\ and conditions (\ref{6.1}) be satisfied. Also, assume that there exists a point } $x_{0}\in G$\emph{\ such that (\ref{6.2}) holds.
Denote }$M=M\left( x_{0},G\right) =\max_{x\in \overline{G}}\left\vert x-x_{0}\right\vert .$ \emph{\ Choose a number }$c>0$\emph{\ such that }$G_{T,c}\neq \varnothing .$ \emph{\ Let }$\varphi _{\lambda }\left( x,t\right) $\emph{\ be the function defined in (\ref{6.5}), let the sets }$\xi _{c},G_{T,c}$\emph{\ be the ones defined in (\ref{6.6}), and let conditions (\ref{4.700}) be valid for the case of the domain }$G$\emph{\ of this section. Then there exists a number }$\eta _{0}=\eta _{0}\left( G,M,a_{l},a_{u},\left\Vert \nabla a\right\Vert _{C\left( \overline{G}\right) }\right) \in \left( 0,1\right) $\emph{\ such that for any }$\eta \in \left( 0,\eta _{0}\right) $\emph{\ one can choose a sufficiently large number }$\lambda _{0}=\lambda _{0}\left( G,M,a_{l},a_{u},\left\Vert \nabla a\right\Vert _{C\left( \overline{G}\right) },\eta _{0},c\right) >1$\emph{\ and a number }$C_{4}=C_{4}\left( G,M,a_{l},a_{u},\left\Vert \nabla a\right\Vert _{C\left( \overline{G}\right) },\eta _{0},c\right) >0$\emph{\ such that for all }$u\in C^{2}\left( \overline{G}_{T,c}\right) $\emph{\ and for all }$\lambda \geq \lambda _{0}$\emph{\ the following pointwise Carleman estimate is valid:} \begin{equation*} \left( L_{0,hyp}u\right) ^{2}\varphi _{\lambda }^{2}\geq C_{4}\lambda \left( \left\vert \nabla u\right\vert ^{2}+u_{t}^{2}\right) \varphi _{\lambda }^{2}+C_{4}\lambda ^{3}u^{2}\varphi _{\lambda }^{2}+\func{div}U+V_{t}\text{ in }G_{T,c}, \end{equation*} \begin{equation*} \left\vert U\right\vert ,\left\vert V\right\vert \leq C_{4}\lambda ^{3}\left( \left\vert \nabla u\right\vert ^{2}+u_{t}^{2}+u^{2}\right) \varphi _{\lambda }^{2}. \end{equation*} \emph{In the case }$a\left( x\right) \equiv 1$\emph{\ one can take }$\eta _{0}=1.$ Again, let $R>0$ be an arbitrary number.
Similarly to (\ref{5.6}) and (\ref{5.7}), let \begin{equation} B\left( R\right) =\left\{ u\in H^{k_{n+1}}\left( G_{T,c}\right) :\left\Vert u\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }<R,u\mid _{S_{T}}=g_{0}\left( x,t\right) ,\partial _{n}u\mid _{S_{T}}=g_{1}\left( x,t\right) \right\} , \label{6.9} \end{equation} \begin{equation} H_{0}^{k_{n+1}}\left( G_{T,c}\right) =\left\{ u\in H^{k_{n+1}}\left( G_{T,c}\right) :u\mid _{S_{T}}=\partial _{n}u\mid _{S_{T}}=0\right\} . \label{6.10} \end{equation} To solve the Cauchy problem posed in this section, we minimize the functional $J_{\lambda ,\beta ,hyp}\left( u\right) $ in (\ref{6.7}) on the set $B\left( R\right) $ defined in (\ref{6.9}), where \begin{equation} J_{\lambda ,\beta ,hyp}\left( u\right) =e^{-2\lambda \left( c+\varepsilon \right) }\dint\limits_{G_{T,c}}\left[ L_{hyp}\left( u\right) \right] ^{2}\varphi _{\lambda }^{2}dxdt+\beta \left\Vert u\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }^{2}. \label{6.7} \end{equation} Hence, Lemma 5.1 and Theorem 2.1 imply Theorem 5.1. \textbf{Theorem 5.1}. \emph{Let }$R>0$\emph{\ be an arbitrary number. Let }$B\left( R\right) $\emph{\ and }$H_{0}^{k_{n+1}}\left( G_{T,c}\right) $\emph{\ be the sets defined in (\ref{6.9}) and (\ref{6.10}) respectively. Then for every point }$u\in B\left( R\right) $\emph{\ there exists the Fr\'{e}chet derivative }$J_{\lambda ,\beta ,hyp}^{\prime }\left( u\right) \in H_{0}^{k_{n+1}}\left( G_{T,c}\right) .$\emph{\ Let} $\lambda _{0}=\lambda _{0}\left( G,M,a_{l},a_{u},\left\Vert \nabla a\right\Vert _{C\left( \overline{G}\right) },\eta _{0},c\right) >1$ \emph{be the number of Lemma 5.1.
There exists a sufficiently large number }$\lambda _{1}=\lambda _{1}\left( R,L_{hyp},G,M,a_{l},a_{u},\left\Vert \nabla a\right\Vert _{C\left( \overline{G}\right) },\eta _{0},c\right) >\lambda _{0}>1$\emph{\ such that for all }$\lambda \geq \lambda _{1}$\emph{\ and for every }$\beta \in \left( e^{-\lambda \varepsilon },1\right) $\emph{\ the functional }$J_{\lambda ,\beta ,hyp}\left( u\right) $\emph{\ is strictly convex on the set }$B\left( R\right) ,$ \begin{equation*} J_{\lambda ,\beta ,hyp}\left( u_{2}\right) -J_{\lambda ,\beta ,hyp}\left( u_{1}\right) -J_{\lambda ,\beta ,hyp}^{\prime }\left( u_{1}\right) \left( u_{2}-u_{1}\right) \end{equation*} \begin{equation*} \geq C_{5}e^{2\lambda \varepsilon }\left\Vert u_{2}-u_{1}\right\Vert _{H^{1}\left( G_{T,c+2\varepsilon }\right) }^{2}+\frac{\beta }{2}\left\Vert u_{2}-u_{1}\right\Vert _{H^{k_{n+1}}\left( G_{T,c}\right) }^{2},\forall u_{1},u_{2}\in B\left( R\right) . \end{equation*} \emph{Here the number }$C_{5}=C_{5}\left( L_{hyp},R,c\right) >0$\emph{\ depends only on the listed parameters.} \end{document}
\begin{document} \title{On Structural Parameterizations of Star Coloring} \begin{abstract} A \emph{star coloring} of a graph $G$ is a proper vertex coloring such that every path on four vertices uses at least three distinct colors. The minimum number of colors required for a star coloring of $G$ is called the star chromatic number, denoted by $\chi_s(G)$. Given a graph $G$ and a positive integer $k$, the \textsc{Star Coloring Problem} asks whether $G$ has a star coloring using at most $k$ colors. This problem is {\sf NP}-complete even on restricted graph classes such as bipartite graphs. In this paper, we initiate a study of \textsc{Star Coloring}{} from the parameterized complexity perspective. We show that \textsc{Star Coloring}{} is fixed-parameter tractable when parameterized by (a) neighborhood diversity, (b) twin-cover, and (c) the combined parameters clique-width and the number of colors. \end{abstract} \section{Introduction} A coloring $f:V(G) \rightarrow \{1,2,\ldots,k\}$ of a graph $G=(V,E)$ is a \emph{star coloring} if (i) $f(u)\neq f(v)$ for every edge $uv\in E(G)$, and (ii) every path on four vertices uses at least three distinct colors. The \emph{star chromatic number} of $G$, denoted by $\chi_s{(G)}$, is the smallest integer $k$ such that $G$ is star colorable using $k$ colors. Given a graph $G$ and a positive integer $k$, the \textsc{Star Coloring}{} problem asks whether $G$ has a star coloring using at most $k$ colors. The name star coloring comes from the fact that the subgraph induced by any two color classes (a color class is the subset of vertices assigned a given color) is a disjoint union of stars. \textsc{Star Coloring}{}~\cite{gebremedhin2009efficient} is used in the computation of Hessian matrices. A Hessian matrix is a square matrix of second-order partial derivatives of a scalar-valued function.
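The two defining conditions of a star coloring can be checked directly by brute force on small graphs. Below is a minimal Python sketch (the function name and graph encoding are illustrative, not from the text) that tests a given coloring by enumerating all paths on four vertices:

```python
def is_star_coloring(adj, f):
    """Check that f is a star coloring of the graph given by the
    adjacency dict adj (vertex -> set of neighbors): f must be proper,
    and every path on four vertices must use >= 3 distinct colors."""
    # condition (i): properness
    for u in adj:
        for v in adj[u]:
            if f[u] == f[v]:
                return False
    # condition (ii): every P4 u1-u2-u3-u4 on distinct vertices
    for u1 in adj:
        for u2 in adj[u1]:
            for u3 in adj[u2]:
                if u3 == u1:
                    continue
                for u4 in adj[u3]:
                    if u4 in (u1, u2):
                        continue
                    if len({f[u1], f[u2], f[u3], f[u4]}) < 3:
                        return False
    return True
```

For the path on four vertices, the proper $2$-coloring $1,2,1,2$ is rejected while a coloring using a third color is accepted, consistent with $\chi_s(P_4)=3$.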
Hessian matrices are used in large-scale optimization problems, parametric sensitivity analysis~\cite{buskens2001sensitivity}, image processing, computer vision~\cite{lorenz1997multi}, and control of dynamical systems in real time~\cite{buskens2001sensitivity}. Typically, Hessian matrices that arise in a large-scale application are sparse. The computation of a sparse Hessian matrix using the automatic differentiation technique requires a seed matrix. Coleman and Moré~\cite{coleman1984estimation} showed that the computation of a seed matrix can be formulated using a star coloring of the adjacency graph of a Hessian matrix. \textsc{Star Coloring}{} was first introduced by Gr{\"u}nbaum in~\cite{grunbaum1973acyclic}. The computational complexity of the problem has been studied on several graph classes. The problem is polynomial-time solvable on cographs~\cite{lyons2011acyclic} and line graphs of trees~\cite{omoomi2018polynomial}. For any $k\geq 3$, it is {\sf NP}-complete to decide whether a bipartite graph has a star coloring using at most $k$ colors~\cite{coleman1983estimation}. It has also been shown that \textsc{Star Coloring}{} is {\sf NP}-complete on planar bipartite graphs~\cite{albertson2004coloring} and line graphs of subcubic graphs~\cite{lei2018star} when $k=3$. Recently, Shalu and Cyriac~\cite{shalu2022complexity} showed that $k$-\textsc{Star Coloring} is {\sf NP}-complete for graphs of degree at most four, where $k\in \{4, 5\}$. To the best of our knowledge, the problem has not been studied in the framework of parameterized complexity. In this paper, we initiate the study of \textsc{Star Coloring}{} from the viewpoint of parameterized complexity. In parameterized complexity, the running time of an algorithm is measured as a function of the input size and a secondary measure called a parameter.
A parameterized problem is said to be fixed-parameter tractable (FPT) with respect to a parameter $k$ if the problem can be solved in $f(k) n^{O(1)}$ time, where $f$ is a computable function independent of the input size $n$ and $k$ is a parameter associated with the input instance. For more details on parameterized complexity, we refer the reader to the text~\cite{cygan2015parameterized}. As \textsc{Star Coloring}{} is {\sf NP}-complete even when $k=3$, the problem is para-{\sf NP}-complete when parameterized by the number of colors $k$. This motivates us to study the problem with respect to structural graph parameters, which measure structural properties of the input graph. The parameter tree-width~\cite{robertson1983graph}, introduced by Robertson and Seymour, is one of the most investigated structural graph parameters for graph problems. The \textsc{Star Coloring}{} problem is expressible in monadic second order logic (MSO)~\cite{harshita2017fo}. Using the meta theorem of Courcelle~\cite{courcelle1992monadic}, the problem is FPT when parameterized by the tree-width of the input graph. Clique-width~\cite{courcellecw} is another graph parameter, which is a generalization of tree-width. If a graph has bounded tree-width, then it has bounded clique-width; however, the converse may not always be true (e.g., complete graphs). Courcelle's meta theorem can also be extended to graphs of bounded clique-width. It was shown in~\cite{courcelle2000linear} that all problems expressible in MSO logic that do not use edge set quantifications (called $MS_1$ logic) are FPT when parameterized by the clique-width. However, the \textsc{Star Coloring}{} problem cannot be expressed in $MS_1$ logic~\cite{harshita2017fo,fomin2010intractability}. Motivated by this, we study the parameterized complexity of the problem with respect to the combined parameters clique-width and the number of colors and show that \textsc{Star Coloring}{} is FPT.
Next, we consider the parameters neighborhood diversity~\cite{lampis2012algorithmic} and twin-cover~\cite{ganian2015improving}. These parameters are weaker than clique-width in the sense that graphs of bounded neighborhood diversity (resp.\ twin-cover) have bounded clique-width; however, the converse may not always be true. Moreover, these two parameters are not comparable with the parameter tree-width, and they generalize the parameter vertex cover~\cite{ganian2015improving} (see Fig.~\ref{fig:my_label-par}). We show that \textsc{Star Coloring}{} is FPT with respect to neighborhood diversity or twin-cover. \begin{figure} \caption{Hasse diagram of some structural graph parameters. An edge from a parameter $k_1$ to a parameter $k_2$ means that there is a function $f$ such that for all graphs $G$, we have $k_1(G) \leq f(k_2(G))$. The parameters considered in this paper are indicated by $\ast$. } \label{fig:my_label-par} \end{figure} \section{Preliminaries} For $k \in \mathbb{N}$, we use $[k]$ to denote the set $\{1,2,\ldots,k\}$. If $f: A \rightarrow B$ is a function and $C \subseteq A$, $f|_C$ denotes the restriction of $f$ to $C$, that is, $f|_C: C \rightarrow B$ such that for all $x \in C$, $f|_C(x)=f(x)$. All graphs we consider in this paper are undirected, connected, finite and simple. For a graph $G=(V,E)$, we denote the vertex set and edge set of $G$ by $V(G)$ and $E(G)$ respectively. We use $n$ to denote the number of vertices and $m$ to denote the number of edges of the graph. For simplicity, an edge between vertices $x$ and $y$ is denoted by $xy$. For a subset $X \subseteq V(G)$, the graph $G[X]$ denotes the subgraph of $G$ induced by the vertices of $X$. If $f:V(G) \rightarrow[k]$ is a coloring of $G$ using $k$ colors, then we use $f^{-1}(i)$ to denote the subset of vertices of $G$ which are assigned the color $i$.
For a subset $U \subseteq V(G)$, we use $f(U)$ to denote the set of colors used to color the vertices of $U$, i.e., $f(U)=\bigcup \limits_{u \in U} \{f(u)\}$. For a vertex set $X \subseteq V(G)$, we denote by $G - X$ the graph obtained from $G$ by deleting all vertices of $X$ and their incident edges. The open neighborhood of a vertex $v$, denoted $N(v)$, is the set of vertices adjacent to $v$, and the set $N[v]=N(v) \cup \{v\}$ denotes the closed neighborhood of $v$. The neighborhood of a vertex set $S \subseteq V(G)$ is $N(S)=(\cup_{v \in S} N(v)) \setminus S$. For a fixed coloring of $G$, we say a path is \emph{bi-colored} if the coloring uses at most two colors on the vertices of the path. \section{Neighborhood Diversity}\label{sec:nd} In this section, we show that \textsc{Star Coloring}{} is FPT when parameterized by neighborhood diversity. The key idea is to reduce star coloring on graphs of bounded neighborhood diversity to the integer linear programming problem (ILP). The latter is FPT when parameterized by the number of variables. \begin{theorem}[\cite{frankilp,kannanilp,lenstrailp}]\label{thm:ndilp} The $q$-variable \textsc{Integer Linear Programming Feasibility} problem can be solved using $O(q^{2.5q+o(q)}n)$ arithmetic operations and space polynomial in $n$, where $n$ is the number of bits of the input. \end{theorem} We now define the parameter neighborhood diversity and state some of its properties. \begin{definition}[Neighborhood Diversity~\cite{lampis2012algorithmic}]\label{def:nd} Let $G=(V,E)$ be a graph. Two vertices $u,v\in V(G)$ are said to have the \emph{same type} if and only if $N(u)\setminus \{v\}=N(v)\setminus \{u\}$. A graph $G$ has neighborhood diversity at most $t$ if there exists a partition of $V(G)$ into at most $t$ sets $V_1, V_2, \dots, V_t$ such that all vertices in each set have the same type. \end{definition} Observe that each $V_i$ either forms a clique or an independent set in $G$.
Also, for any two distinct types $V_i$ and $V_j$, either each vertex in $V_i$ is adjacent to each vertex in $V_j$, or no vertex in $V_i$ is adjacent to any vertex in $V_j$. We call a set $V_i$ a \emph{clique type} (resp.\ \emph{independent type}) if $G[V_i]$ is a clique (resp.\ an independent set). It is known that a smallest-sized partition of $V(G)$ into clique types and independent types can be found in polynomial time \cite{lampis2012algorithmic}. Hence, we assume that the types $V_1, V_2, \dots, V_t$ of the graph $G$ are given as input. We now present the main result of the section. \begin{theorem}\label{thm:nd} \textsc{Star Coloring}{} can be solved in $O(q^{2.5q+o(q)} n)$ time, where $q=2^t$ and $t$ is the neighborhood diversity of the graph. \end{theorem} Let $G=(V,E)$ be a graph with the types $V_1, V_2, \dots, V_t$. For each $A\subseteq \{1, 2, \dots, t\}$, we denote a \emph{subset type} of $G$ by $T_A=\{V_i\mid i\in A\}$. We denote the set of all types adjacent to type $V_i$ by $adj(V_i)$. That is, $V_j\in adj(V_i)$ if every vertex in $V_j$ is adjacent to every vertex of $V_i$. Given a graph $G$ and its types, we construct the ILP instance in the following manner. \noindent \textbf{Construction of the ILP instance:} For each $A\subseteq [t]$, let $n_A$ be the variable that denotes the number of colors assigned to vertices in every type of $T_A$ and not used in any of the types from $\{V_1, V_2, \dots, V_t\}\setminus T_A$. For example, if $A = \{1, 3, 4\}$ (i.e., $T_A = \{V_1, V_3 , V_4 \}$) and $n_A = 2$, then there are two colors, say $c_1$ and $c_2$, such that both $c_1$ and $c_2$ are assigned to vertices in each of the types $V_1$, $V_3$ and $V_4$ and not assigned to any of the vertices in the types $\{V_1 , V_2 , \dots , V_t\} \setminus \{V_1, V_3 , V_4 \}$. This is the critical part of the proof: we keep track of how many colors are used exclusively in the types of $T_A$ rather than which colors are used.
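Since the same-type relation of Definition \ref{def:nd} is an equivalence relation, the type partition can be computed by comparing each vertex against one representative per type. The following is a minimal Python sketch (illustrative names; neighborhoods are given as sets), not the polynomial-time algorithm of \cite{lampis2012algorithmic}:

```python
def nd_partition(adj):
    """Partition the vertices of a graph (adjacency dict: vertex ->
    set of neighbors) into types: u and v have the same type iff
    N(u) \ {v} == N(v) \ {u}, as in the definition above."""
    types = []
    for v in adj:
        placed = False
        for T in types:
            u = T[0]  # one representative suffices: same-type is an equivalence
            if adj[u] - {v} == adj[v] - {u}:
                T.append(v)
                placed = True
                break
        if not placed:
            types.append([v])
    return types
```

On the star $K_{1,3}$, for instance, this yields two types: the center and the set of three leaves, so the neighborhood diversity of $K_{1,3}$ is $2$.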
Since we have a variable $n_A$ for each $A\subseteq [t]$, the number of variables is $2^t$. We now describe the constraints for the ILP, each with a short description explaining the significance of, or the information captured by, the constraint. \begin{enumerate}\setlength\itemsep{1.2em} \item[(C0)] Discard all subset types $T_A$ containing two types $V_i, V_j$ where $V_j\in adj(V_i)$. To ensure that no two adjacent vertices are assigned the same color, we introduce this constraint, which only considers subset types $T_A$ in which no two types are adjacent. \item[(C1)] The sum of all the variables is at most $k$. That is, $\sum\limits_{A\subseteq[t]}n_A\leq k$. We introduce this constraint to ensure that the number of colors used in any coloring is at most $k$. \item[(C2)] For each clique type $V_i$, $i\in [t]$, the sum of the variables $n_A$ for which $V_i\in T_A$ is equal to the number of vertices in $V_i$. That is, $\sum\limits_{A:V_i\in T_A}n_A=|V_i|$. To ensure that no two vertices in the clique type $V_i$ are assigned the same color, we introduce this constraint. \item[(C3)] For each independent type $V_i$, where $i\in [t]$, the sum of the variables $n_A$ for which $V_i\in T_A$ is at least one and at most the minimum of $k$ and the number of vertices in $V_i$. That is, $1\leq \sum\limits_{A:V_i\in T_A} n_A \leq \min\{k, |V_i|\}$. To ensure that the number of colors used for coloring an independent type $V_i$ is at least one and at most the minimum of $k$ and $|V_i|$, we introduce this constraint. \item[(C4)] For each combination of four distinct types, say $V_{i_1}, V_{i_2}, V_{i_3}$ and $V_{i_4}$, where $i_1, i_2, i_3, i_4\in [t]$, with $V_{i_1}, V_{i_3}\in adj(V_{i_2})$ and $V_{i_4}\in adj(V_{i_3})$, we have the following constraint: If the sum of the variables $n_A$ for which $V_{i_1}, V_{i_3}\in T_A$ is at least one, then the sum of the variables $n_{B}$ for which $V_{i_2}, V_{i_4}\in T_{B}$ should be equal to zero.
That is, $$\sum\limits_{\substack{A:V_{i_1}, V_{i_3}\in T_A \mbox{ where } \\ V_{i_1}, V_{i_3}\in adj(V_{i_2}) \mbox{ and } V_{i_4}\in adj(V_{i_3})}} n_A \geq 1 \implies \sum\limits_{B: V_{i_2}, V_{i_4}\in T_{B}} n_{B} =0.$$ This constraint ensures that if there exists a vertex in $V_{i_1}$ and a vertex in $V_{i_3}$ that are assigned the same color, then the sets of colors used to color the vertices of $V_{i_2}$ and $V_{i_4}$ are disjoint. \item[(C5)] For every combination of three distinct types, say $V_{i_1}, V_{i_2}, V_{i_3}$, where $i_1, i_2, i_3\in [t]$, with $V_{i_1}$ being an independent type and $V_{i_2}, V_{i_3}\in adj(V_{i_1})$, we have the following constraint: If the sum of the variables $n_A$ for which $V_{i_1}\in T_A$ is strictly less than the number of vertices in $V_{i_1}$, then the sum of variables $n_{B}$ for which $V_{i_2}, V_{i_3}\in T_{B}$ is equal to zero. $$\sum\limits_{\substack{A:V_{i_1}\in T_A, \mbox{ where } V_{i_2}, V_{i_3}\in adj(V_{i_1}) \mbox{ and } \\ V_{i_1} \mbox{ is an independent type}}} n_A < |V_{i_1}| \implies \sum\limits_{B: V_{i_2}, V_{i_3}\in T_B } n_B =0.$$ This constraint ensures that if there exist two vertices in $V_{i_1}$ that are assigned the same color, then every vertex in $V_{i_2}$ is assigned a color different from every vertex in $V_{i_3}$. \item[(C6)] For every combination of two distinct independent types $V_{i_1}, V_{i_2}$, where $i_1, i_2\in [t]$ with $V_{i_1}\in adj(V_{i_2})$, if the sum of the variables $n_A$ for which $V_{i_1}\in T_A$ is less than the number of vertices in $V_{i_1}$, then the sum of variables $n_{B}$ for which $V_{i_2}\in T_{B}$ is equal to the number of vertices in $V_{i_2}$, and vice-versa. The former constraint is illustrated below while the latter constraint can be constructed by swapping $V_{i_1}$ and $V_{i_2}$ in the former constraint. 
$$\sum\limits_{\substack{A: V_{i_1}\in T_A \mbox{ where } V_{i_1}\in adj(V_{i_2}) \\ \mbox{ and } V_{i_1}, V_{i_2} \mbox{ are independent types} }} n_A < |V_{i_1}| \implies \sum\limits_{\substack{B: V_{i_2}\in T_B }} n_B = |V_{i_2}|.$$ This constraint ensures that if there exist two vertices in $V_{i_1}$ that are assigned the same color, then all vertices in $V_{i_2}$ are assigned distinct colors. A similar statement holds for the latter constraint. \item[(C7)] For each $A\subseteq [t]$, $n_A\geq 0$. The number of colors used exclusively in all the types of $T_A$ is at least 0. \end{enumerate} The construction of the ILP instance is complete. We use Theorem \ref{thm:ndilp} to obtain a feasible assignment for the ILP. Using this, we find a star coloring of $G$. We now show that $G$ has a star coloring using at most $k$ colors if and only if there exists a feasible assignment to the ILP. \begin{lemma}\label{lem:ilp-color} If there exists a feasible assignment to the ILP then $G$ has a star coloring using at most $k$ colors. \end{lemma} \begin{proof} Using a feasible assignment returned by the ILP, we construct a star coloring $f:V(G)\rightarrow [k]$ of $G$. Let $A_1, A_2, \ldots, A_{2^t}$ be the subsets of $[t]$ in some fixed order. For each $A_i$, we associate the set of colors $c(A_i)=\{ \sum\limits_{j=0}^{i-1} n_{A_j}+1, \sum\limits_{j=0}^{i-1} n_{A_j}+2, \ldots, \sum\limits_{j=0}^{i-1} n_{A_j}+n_{A_i}\}$, where $n_{A_0}=0$. Now, for each type $V_j$, we associate the set of colors $c(V_j)= \bigcup_{A_i : j \in A_i} c(A_i)$. If $V_j$ is a clique type, then from constraint (C2), $|c(V_j)|=|V_j|$ for every $j$. Therefore, we color the vertices of $V_j$ with distinct colors from the set $c(V_j)$. If $V_j$ is an independent type, then from constraint (C3), $1 \leq |c(V_j)| \leq \min \{k, |V_j|\}$. In this case, we greedily color the vertices of $V_j$ with colors from the set $c(V_j)$ such that each color is used at least once in $V_j$. This finishes the description of the coloring $f$ of $G$.
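The construction of $f$ just described can be sketched in a few lines of Python; the function and argument names are illustrative, with subsets $A$ encoded as tuples of type indices and $n[A]$ the feasible ILP values:

```python
def build_coloring(subsets, n, types, kind):
    """Sketch of the coloring built from a feasible ILP assignment.

    subsets : the sets A (tuples of type indices) in some fixed order
    n       : dict mapping each A to its feasible value n_A
    kind    : kind[j] is "clique" or "independent" for type V_j
    """
    # hand out consecutive blocks of colors c(A_i) via prefix sums
    colors_of, next_color = {}, 1
    for A in subsets:
        colors_of[A] = list(range(next_color, next_color + n[A]))
        next_color += n[A]
    f = {}
    for j, Vj in enumerate(types):
        # c(V_j): union of the color blocks of all subsets A with j in A
        palette = [c for A in subsets if j in A for c in colors_of[A]]
        if kind[j] == "clique":
            assert len(palette) == len(Vj)  # guaranteed by constraint (C2)
            for v, c in zip(Vj, palette):
                f[v] = c
        else:
            # independent type: use each palette color at least once,
            # which is possible since (C3) gives len(palette) <= len(Vj)
            for i, v in enumerate(Vj):
                f[v] = palette[i % len(palette)]
    return f
```

This sketch only performs the color assignment; it presupposes that the input values satisfy (C0)-(C7), exactly as the proof does.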
We now argue that $f$ is a star coloring of $G$. To show that $f$ is a proper coloring, we need to show that every vertex is assigned a color and adjacent vertices do not receive the same color. The coloring process described above ensures that every vertex is colored. Also, $f$ is a proper coloring because of the constraints (C0) and (C2). The former constraint ensures that the subset types $T_A$ considered do not contain a pair of adjacent types, while the latter constraint ensures that no two vertices in a clique type are assigned the same color. Thus $f$ is a proper coloring. We now show that there is no bi-colored path on four vertices. Suppose for contradiction that there exists a path $u_1-u_2-u_3-u_4$ on four vertices such that $f(u_1)=f(u_3)$ and $f(u_2)=f(u_4)$. \begin{itemize}\setlength\itemsep{1em} \item \textbf{Vertices $u_1,u_2,u_3, u_4$ belong to four distinct types. } WLOG, let $u_1, u_2, u_3, u_4$ belong to $V_1, V_2, V_3, V_4$ respectively. From the definition of neighborhood diversity, we have $V_1, V_3 \in adj(V_2)$ and $V_4 \in adj(V_3)$. As $f(u_1)=f(u_3)$ and $f(u_2)=f(u_4)$, there exist two sets $A\subseteq [t]$ and $B\subseteq [t]$ such that $V_1, V_3\in T_A$, $V_2, V_4\in T_B$, $n_A \geq 1$ and $n_B\geq 1$. This cannot happen because of the constraint (C4). \item \textbf{Vertices $u_1,u_2,u_3, u_4$ belong to three distinct types.} WLOG, let $u_1, u_2, u_3, u_4$ belong to $V_1, V_2, V_1, V_3$ respectively. Since $f(u_1)=f(u_3)$, it is the case that $\sum\limits_{A:V_1\in T_A} n_A < |V_1|$, implying that $V_1$ is an independent type. Since $V_1$ is an independent type with two vertices assigned the same color and $f(u_2)=f(u_4)$, there exists $B\subseteq [t]$ such that $V_2, V_3\in T_B$ and $n_B\geq 1$. This cannot happen because of the constraint (C5). \item \textbf{Vertices $u_1,u_2,u_3, u_4$ belong to two distinct types. } WLOG, let $u_1, u_3\in V_1$ and $ u_2, u_4\in V_2$.
Arguments similar to those in the above case can be applied to show that $V_1$ and $V_2$ are independent types, and this case cannot arise due to constraint (C6). \end{itemize} Thus $f$ is a star coloring of $G$ using at most $k$ colors. \qed \end{proof} \begin{lemma}\label{lem:color-ilp} If $G$ has a star coloring using at most $k$ colors then there exists a feasible assignment to the ILP. \end{lemma} \begin{proof} Let $f:V(G)\rightarrow [k]$ be a star coloring of $G$ using $k$ colors. For each $A\subseteq [t]$, we set $$n_A=|\bigcap \limits_{V_i \in T_A} f(V_i) - \bigcup \limits_{V_i \notin T_A} f(V_i)|.$$ That is, $n_A$ is the number of colors that appear in each of the types in $T_A$ and do not appear in any of the types from $\{V_1,\ldots, V_t \}\setminus T_A$. We now show that such an assignment satisfies the constraints (C0)-(C7). \begin{enumerate} \item Since $f$ is a proper coloring of $G$, no two vertices in two adjacent types are assigned the same color. Hence the constraint (C0) is satisfied. \item Using the fact that $f$ is a star coloring that uses $k$ colors and from the definition of $n_A$, where each color is counted towards exactly one variable, we see that the constraint (C1) is satisfied. For each of the remaining variables $n_A$ with which no color is associated, we have that $n_A=0$. Hence the constraint (C7) is satisfied. \item When $V_i$ is a clique type, we have that $|f(V_i)|=|V_i|$. The expression $\sum\limits_{A:V_i\in T_A}n_A$ denotes the number of colors used in $V_i$ in the coloring $f$, which equals $|V_i|$. Hence the constraint (C2) is satisfied. \item When $V_i$ is an independent type, the number of colors used in $V_i$ is at most the minimum of $k$ and $|V_i|$. In addition, we need at least one color to color the vertices of $V_i$. Hence $1 \leq |f(V_i)| \leq \min\{k, |V_i|\}$. Since $\sum\limits_{A:V_i\in T_A}n_A =|f(V_i)|$, the constraint (C3) is satisfied. \item Since $f$ is a star coloring, there is no bi-colored $P_4$.
Thus, for every combination of four types, say $V_1, V_2, V_3$ and $V_4$, if there exists a color assigned to a vertex in $V_1$ and to a vertex in $V_3$ with $V_1, V_3\in adj(V_2)$ and $V_4\in adj(V_3)$, then all the vertices in $V_2\cup V_4$ should be assigned distinct colors. That is, there is no $B\subseteq [t]$ for which $V_2, V_4\in T_B$ and $n_B\geq 1$. Hence the constraint (C4) is satisfied. \end{enumerate} Similarly, we can show that constraints (C5) and (C6) are also satisfied. \qed \end{proof} The running time of the algorithm depends on the time taken to construct an ILP instance and obtain a feasible assignment for the ILP using Theorem \ref{thm:ndilp}. The former takes polynomial time while the latter takes $O(q^{2.5q+o(q)}n)$ time, where $q=2^t$ is the number of variables. This completes the proof of Theorem \ref{thm:nd}. \section{Twin Cover} In this section, we show that \textsc{Star Coloring}{} is FPT when parameterized by twin-cover. Ganian~\cite{ganian2015improving} introduced the notion of twin-cover, which is a generalization of vertex cover. Note that the parameters neighborhood diversity and twin-cover are not comparable (see Section~3.4 in \cite{ganian2015improving}). We now define the parameter twin-cover and state some of its properties. \begin{definition}[Twin Cover~\cite{ganian2015improving}]\label{def:tw} Two vertices $u$ and $v$ of a graph $G$ are said to be {\it twins} if $N(u)\setminus \{v\}=N(v)\setminus \{u\}$ and \emph{true twins} if $N[u]=N[v]$. A {\it twin-cover} of a graph $G$ is a set $X \subseteq V (G)$ of vertices such that for every edge $uv \in E(G)$ either $u \in X$ or $v \in X$, or $u$ and $v$ are true twins. \end{definition} \begin{remark} If $X \subseteq V (G)$ is a twin-cover of $G$ then (i) $G - X$ is a disjoint union of cliques, and (ii) for each clique $K$ in $G - X$ and each pair of vertices $u, v$ in $K$, $N (u) \cap X = N(v) \cap X$.
\end{remark} \begin{theorem}\label{thm:tw} \textsc{Star Coloring}{} can be solved in $O(q^{2.5q+o(q)}n)$ time, where $q=2^{2^t}$ and $t$ is the size of a twin-cover of the graph. \end{theorem} \noindent \textbf{Overview of the Algorithm:} Given an instance $(G, k,t)$ of \textsc{Star Coloring}{} and a twin-cover $X\subseteq V(G)$ of size $t$ in $G$, the goal is to check if there exists a star coloring of $G$ using at most $k$ colors. The algorithm consists of the following four steps. \begin{enumerate} \item We guess the coloring $f: X \rightarrow[t']$ of $X$ in a star coloring of $G$ (where $t' \leq t$). Then we construct an auxiliary graph $G'$ from $G$ where the neighborhood diversity of $G'$ is bounded by a function of $t$. \item We show that $G$ has a star coloring $g$, using at most $k$ colors, such that $g|_X=f$ if and only if $G'$ has a star coloring $h$, using at most $k$ colors, such that $h|_X=f$. \item We construct a graph $\mathcal{B}$, which is a subgraph of $G'$, such that $G'$ has a star coloring $h$, using at most $k$ colors, with $h|_X=f$ if and only if $\mathcal{B}$ has a proper coloring using at most $k-t'$ colors, where $t'=|f(X)|$. \item We show that the neighborhood diversity of $\mathcal{B}$ is bounded by a function of $t$. Then we use the FPT algorithm parameterized by neighborhood diversity from \cite{ganian2015improving} to check whether $\mathcal{B}$ has a proper coloring using at most $k-t'$ colors and decide if there exists a star coloring of $G'$ using at most $k$ colors. \end{enumerate} Given a graph $G$, there exists an algorithm to compute a twin-cover of size at most $t$ (if one exists) in $O(1.2738^t+tn)$ time~\cite{ganian2015improving}. Hence, we assume that an instance $(G,k,t)$ of \textsc{Star Coloring}{} and a twin-cover $X=\{v_1, v_2, \dots, v_t\}\subseteq V(G)$ of size $t$ in $G$ are given.
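The defining property of a twin-cover in Definition \ref{def:tw} can be verified directly edge by edge. A minimal Python sketch (illustrative names; `adj` maps each vertex to its set of neighbors):

```python
def is_twin_cover(adj, X):
    """Check that X is a twin-cover: every edge uv has an endpoint
    in X, or u and v are true twins (closed neighborhoods coincide)."""
    for u in adj:
        for v in adj[u]:
            if u in X or v in X:
                continue
            # uncovered edge: u and v must be true twins, N[u] == N[v]
            if adj[u] | {u} != adj[v] | {v}:
                return False
    return True
```

For instance, the empty set is a twin-cover of any complete graph, since all its vertices are pairwise true twins, whereas a path on three vertices requires its middle vertex.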
That is, $G[V\setminus X]$ is a disjoint union of cliques. By the definition of twin-cover, all vertices in a clique $K$ from $G[V\setminus X]$ have the same neighborhood in $X$. Similar to the proof of Theorem \ref{thm:nd}, we define subset types. For each $A\subseteq [t]$, let $T_A=\{v_i\mid i\in A\}\subseteq X$ denote a subset type of $G$. For every subset type $T_A$, we denote a \emph{clique type} of $G$ by $K_A=\{K \mid K \mbox{ is a clique in } G[V\setminus X] \mbox{ and } N(K) \cap X=T_A \}$. Step 1 of the algorithm is to initially guess the colors of the vertices in $X$ in a star coloring of $G$. Let $f : X \rightarrow [t']$ be such a coloring, where $t' \leq t$. The rest of the proof is to check if $f$ can be extended to a coloring $g:V(G) \rightarrow [k]$ such that $g |_{X}=f$. Let $X_i=f^{-1}(i)\subseteq X$ be the set of vertices from $X$ that are assigned the color $i\in [t']$ in $f$. We now construct an auxiliary graph $G'$ from $G$ by repeated application of Claims \ref{cla:claim1}, \ref{cla:claim2} and Reduction Rule \ref{red:1}. \begin{claim}\label{cla:claim1} Let $K_A$ be a clique type such that $|K_A|\geq 2$ and there exist two vertices in $X_i\cap T_A$ for some $i\in [t']$. Let $G^{\star}$ be the graph obtained from $G$ by adding edges between every pair of non-adjacent vertices in $\bigcup\limits_{K\in K_A}V(K)$. Then $(G,k,t)$ is a yes-instance of \textsc{Star Coloring}{} if and only if $(G^{\star}, k,t)$ is a yes-instance of \textsc{Star Coloring}{}. \end{claim} \begin{proof} Let $K,K'\in K_A$ be two cliques and $u, v\in X_i\cap T_A$ (i.e., $f(u)=f(v)=i$). For the forward direction, let $g$ be a star coloring of $(G,k,t)$. Since $g|_X=f$ and $g$ is a star coloring, no two vertices in $\bigcup\limits_{K\in K_A}V(K)$ are assigned the same color. Suppose not; then there exist two vertices $w,w'\in \bigcup\limits_{K\in K_A}V(K)$ such that $g(w)=g(w')$, and $w-u-w'-v$ is a bi-colored $P_4$.
Observe that $g$ is also a star coloring of $(G^{\star}, k, t)$. For the reverse direction, let $h$ be a star coloring of $(G^{\star}, k, t)$ that uses at most $k$ colors. Since $G$ is a subgraph of $G^{\star}$, we have that $h$ is also a star coloring of $(G, k, t)$. \qed \end{proof} Notice that a clique type $K_A$ satisfying the assumptions of Claim \ref{cla:claim1} will now have $|K_A|=1$. We now look at the clique types $K_A$ such that $|K_A|\geq 2$ and apply the following reduction rule. \begin{reduction rule}\label{red:1} Let $K_A$ be a clique type with $|K_A|\geq 2$ and $|X_i\cap T_A|\leq 1$, for all $i\in [t']$. Also, let $K\in K_A$ be an arbitrarily chosen clique with the maximum number of vertices over all cliques in $K_A$. Then $(G, k, t)$ is a yes-instance of \textsc{Star Coloring}{} if and only if $(G-\bigcup\limits_{K'\in K_A \setminus \{K\} } V(K'), k, t)$ is a yes-instance of \textsc{Star Coloring}{}. \end{reduction rule} \begin{lemma}\label{lem:rrs} Reduction Rule \ref{red:1} is safe. \end{lemma} \begin{proof} Suppose $(G, k,t)$ is a yes-instance of \textsc{Star Coloring}{}. Then it is easy to see that $(G-\bigcup\limits_{K'\in K_A \setminus \{K\} } V(K'), k, t)$ is a yes-instance of \textsc{Star Coloring}{}, since the restriction of a star coloring to a subgraph is again a star coloring. For the reverse direction, let $g$ be a star coloring of the reduced instance. We show how to extend $g$ to the vertices of each deleted clique $K'\in K_A\setminus \{K\}$ while maintaining the star coloring requirement. We use the colors from $g(K)$ (assigned to the vertices of $K$) to color the vertices of the deleted clique $K'$: every vertex in $K'$ is assigned a distinct color from $g(K)$. This is possible as $|K'|\leq |K|$. We now prove that there is no bi-colored $P_4$ in $G$. Suppose there exists a bi-colored $P_4$ in $G$ because of the coloring assigned to $K'$. Notice that this happens only when there exist two vertices in $T_A$ that are assigned the same color.
But then $K_A$ would satisfy the assumptions of Claim \ref{cla:claim1} and we would already have applied it, a contradiction. \qed \end{proof} After the application of Claim \ref{cla:claim1}, we repeatedly apply Reduction Rule \ref{red:1} to the clique types $K_A$ for which $|K_A|\geq 2$, thereby ensuring that each such clique type is reduced to a single clique. Thus for all clique types $K_A$, we have that $|K_A|\leq 1$. Notice that after the application of Claim~\ref{cla:claim1} and Reduction Rule~\ref{red:1}, the resulting graph has bounded neighborhood diversity. However, a proper coloring of the resulting graph may not yield a star coloring. The following claim helps us to reduce our problem to proper coloring parameterized by neighborhood diversity. \begin{claim claim}\label{cla:claim2} Let $K_A$ and $K_B$, with $A\neq B$, be two clique types such that, for some $i\in [t']$, there exist two vertices $u,v\in X_i$ with $u\in T_A\cap T_B$ and $v\in T_B$. Let $G^{\star}$ be the graph obtained from $G$ by adding edges between every pair of non-adjacent vertices in $V(K_A)\cup V(K_B)$. Then $(G,k,t)$ is a yes-instance of \textsc{Star Coloring}{} if and only if $(G^{\star}, k,t)$ is a yes-instance of \textsc{Star Coloring}{}. \end{claim claim} \begin{proof} For the forward direction, let $g$ be a star coloring of $(G,k,t)$. Because of the vertices $u$ and $v$, no two vertices in $V(K_A)\cup V(K_B)$ are assigned the same color by $g$. Hence $g$ is also a star coloring of $(G^{\star}, k, t)$. The reverse direction is trivial: since $G$ is a subgraph of $G^{\star}$, a star coloring of $(G^{\star}, k, t)$ is also a star coloring of $(G, k, t)$. \qed \end{proof} We are now ready to explain the steps of our algorithm in detail. \noindent \textbf{Step 1:} Given an instance $(G, k, t)$ of \textsc{Star Coloring}{}, we construct an auxiliary graph.
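The clique types manipulated throughout these steps can be computed directly from the twin cover. The following sketch is purely illustrative (the function name \texttt{clique\_types} and the adjacency-set representation are our own choices, not from the paper): it groups the cliques of $G[V\setminus X]$ by their common neighborhood $T_A = N(K)\cap X$.

```python
def clique_types(adj, X):
    """Group the cliques of G - X by their common neighborhood in X.

    adj: dict mapping each vertex to the set of its neighbors.
    X:   a twin cover of G, so G - X is a disjoint union of cliques and
         all vertices of one clique have the same neighbors in X.
    Returns a dict: subset type T_A (a frozenset of X) -> list of cliques K_A.
    """
    X = set(X)
    # The connected components of G - X are exactly its cliques.
    cliques, seen = [], set()
    for v in set(adj) - X:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - X - comp)
        seen |= comp
        cliques.append(comp)
    # Bucket each clique K by its subset type T_A = N(K) & X; since X is a
    # twin cover, any single vertex of K determines the whole neighborhood.
    types = {}
    for K in cliques:
        T_A = frozenset(adj[next(iter(K))] & X)
        types.setdefault(T_A, []).append(K)
    return types

# Toy example: twin cover X = {1, 2}; cliques {3, 4} and {5} attach to {1},
# and clique {6, 7} attaches to {1, 2}.
adj = {1: {2, 3, 4, 5, 6, 7}, 2: {1, 6, 7},
       3: {1, 4}, 4: {1, 3}, 5: {1},
       6: {1, 2, 7}, 7: {1, 2, 6}}
types = clique_types(adj, {1, 2})
assert sorted(map(sorted, types[frozenset({1})])) == [[3, 4], [5]]
assert types[frozenset({1, 2})] == [{6, 7}]
```

Claim \ref{cla:claim1} then merges all cliques of a type whenever two vertices of $T_A$ share a color, and Reduction Rule \ref{red:1} keeps only a largest clique per remaining type.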
The graph constructed after repeated application of Claim \ref{cla:claim1}, Reduction Rule \ref{red:1} and Claim \ref{cla:claim2} is the auxiliary graph $G'$. We now argue that the neighborhood diversity of $G'$ is bounded by a function of $t$. Consider the partition $\{V(K_A)~|~A \subseteq [t]\} \cup \{\{v_i\}~|~ v_i \in X\}$ of $V(G')$. Notice that each clique type $K_A$ of $G'$ forms a single type in the neighborhood diversity partition (see Section \ref{sec:nd} for more details). That is, all the vertices in $K_A$ have the same neighborhood in $X$. This is true because initially all vertices in $K_A$ have the same neighborhood in $X$ (by the definition of a twin cover), and during the process of adding edges (Claims \ref{cla:claim1} and \ref{cla:claim2}), either all the vertices in $K_A$ are made adjacent to all the vertices in a type $K_B$ ($A\neq B$) or none of them are adjacent to any vertex in $K_B$. Thus the number of such types is at most $2^t$. Including the vertices of $X$, we have that the neighborhood diversity of $G'$ is at most $2^t +t$. \noindent \textbf{Step 2:} We need to show that $(G, k, t)$ is a yes-instance of \textsc{Star Coloring}{} if and only if $(G', k, t)$ is a yes-instance of \textsc{Star Coloring}{}. This is accomplished by the correctness of Claims \ref{cla:claim1}, \ref{cla:claim2} and Reduction Rule \ref{red:1}. \noindent \textbf{Step 3:} The next step of the algorithm is to find a set of colors from $[t']$ that can be assigned to the vertices in $V\setminus X$. Towards this, for each $A \subseteq[t]$, we guess a subset of colors $D_A \subseteq [t']$ of size at most $|V(K_A)|$ that can be assigned to the vertices in the clique type $K_A$ in a star coloring of $G$ (extending the coloring $f$ of $X$) that uses at most $k$ colors. For the guess $D_A$, we arbitrarily (as it does not matter which vertices are assigned a specific color) assign colors from $D_A$ to vertices in $K_A$ such that $|D_A|$ vertices in $K_A$ are colored distinctly.
Given the guess $D_A$ for each $K_A$, we can check in $2^{O(t)}$ time whether the color set $D_A$ associated with $K_A$ indeed yields a proper coloring (considering the coloring $f$ of $X$ and its neighboring types). In a valid guess, some vertices $Q\subseteq V(G')\setminus X$ are assigned colors from $[t']$. The uncolored vertices of $G'$ should be given a color from $[k]\setminus [t']$. Let $g:X\cup Q\rightarrow [t']$ be a coloring such that $g(v)=f(v)$ if $v \in X$, and $g(v)=\ell$ if $v \in Q$, where $\ell$ is the color assigned to $v$ as per the above assignment. We now extend this partial coloring $g$ of $Q\cup X$ to a full coloring of $G'$, where the vertices in $V(G')\setminus (Q\cup X)$ are assigned colors from $[k]\setminus [t']$. Let $\mathcal{B}$ be the subgraph of $G'$ obtained by deleting the vertices $Q \cup X$ from $G'$. Notice that $\mathcal{B}$ has neighborhood diversity at most $2^t$. \begin{claim claim}\label{cla:subgraph-b} There exists a star coloring of $G'$ extending $g$ using at most $k$ colors if and only if there exists a proper coloring of $\mathcal{B}$ using at most $k-t'$ colors. \end{claim claim} \begin{proof} Let $h: V(G') \rightarrow [k]$ be a star coloring of $G'$ such that $h|_{Q\cup X}=g$. Clearly $|h(\mathcal{B})|=|h(V \setminus (Q\cup X))| \leq k-t'$. That is, $h$ restricted to the vertices of $\mathcal{B}$ is a proper coloring of $\mathcal{B}$ which uses at most $k-t'$ colors. For the reverse direction, let $c:V(\mathcal{B})\rightarrow [k-t']$ be a proper coloring. We construct a coloring $h:V(G')\rightarrow [k]$ using the coloring $c$ as follows: $h(v)=c(v)+t'$ if $v \in \mathcal{B}$ (so that the vertices of $\mathcal{B}$ use colors from $[k]\setminus [t']$), and $h(v)=g(v)$ otherwise. We show that $h$ is a star coloring of $G'$. Suppose not; without loss of generality, let $u_1-u_2-u_3-u_4$ be a bi-colored $P_4$, with $u_1\in K_{A_1}$, $u_2\in X$, $u_3\in K_{A_2}$ and $u_4\in X$ (notice that this is the only way a bi-colored $P_4$ can exist), for some $A_1, A_2\subseteq [t]$.
That is, $h(u_1)=h(u_3)$ and $h(u_2)=h(u_4)$. Also, $A_1\neq A_2$, since $c$ is a proper coloring and the vertices of a single clique type are pairwise adjacent. But then $K_{A_1}$ and $K_{A_2}$, together with the equally colored vertices $u_2$ and $u_4$ in $X$, satisfy the assumptions of Claim \ref{cla:claim2}, so we would have applied it; as a consequence, each vertex in $K_{A_1}$ would have been adjacent to each vertex in $K_{A_2}$, a contradiction. \qed \end{proof} \noindent \textbf{Step 4:} It is known that proper coloring is FPT parameterized by neighborhood diversity \cite{ndrobert}. The algorithm in \cite{ndrobert} uses integer linear programming with $2^{2k}$ variables, where $k$ is the neighborhood diversity of the graph. Since $\mathcal{B}$ has neighborhood diversity at most $2^t$, we have that the number of variables $q\leq 2^{2^t}$. We use the algorithm to test whether $\mathcal{B}$ has a proper coloring using at most $k-t'$ colors. \noindent{}\textbf{Running time:} Step 1 of the algorithm takes $O(t^t)$ time to guess a coloring of $X$. Reduction Rule \ref{red:1} and Claims \ref{cla:claim1} and \ref{cla:claim2} can be applied in $2^{O(t)} n^{O(1)}$ time. Step 2 can be processed in $2^{O(t)} n^{O(1)}$ time. Step 3 involves guessing the colors that the clique types can take from the colors used in $X$, and this takes $O(2^{2^t})$ time. Constructing $\mathcal{B}$ takes polynomial time. Step 4 is applying the FPT algorithm parameterized by neighborhood diversity from \cite{ndrobert} on $\mathcal{B}$, which takes $O(q^{2.5q+o(q)} n)$ time, where $q\leq 2^{2^t}$. The latter dominates, and hence the running time of the algorithm is $O(2^{2^t}q^{2.5q+o(q)}n^{O(1)})$, where $q\leq 2^{2^t}$. This completes the proof of Theorem \ref{thm:tw}. \input{cw.tex} \section{Conclusion} In this paper, we study the parameterized complexity of \textsc{Star Coloring}{} with respect to several structural graph parameters.
We show that \textsc{Star Coloring}{} is FPT when parameterized by (a) neighborhood diversity, (b) twin cover, and (c) the combined parameter clique-width and the number of colors. We conclude the paper with the following open problems for further research. \begin{enumerate} \item What is the parameterized complexity of \textsc{Star Coloring}{} when parameterized by distance to cluster or distance to co-cluster? \item It is known that graph coloring admits a polynomial kernel when parameterized by distance to clique~\cite{gutin2021parameterized}. Does \textsc{Star Coloring}{} also admit a polynomial kernel parameterized by distance to clique? \end{enumerate} \noindent \textbf{Acknowledgments:} We would like to thank the anonymous referees for their helpful comments. The first and second authors acknowledge SERB-DST for supporting this research via grants PDF/2021/003452 and SRG/2020/001162, respectively. \end{document}
\begin{document} \title{Theorems on the Geometric Definition \ of the Positive Likelihood Ratio (LR+)} \begin{abstract} From the fundamental theorem of screening (FTS) we obtain the following mathematical relationship relating the pre-test probability of disease $\phi$ to the positive predictive value $\rho(\phi)$ of a screening test: \begin{center} \begin{large} $\displaystyle\lim_{\varepsilon \to 2}{\displaystyle \int_{0}^{1}}{\rho(\phi)d\phi} = 1$ \end{large} \end{center} \ where $\varepsilon$ is the screening coefficient, the sum of the sensitivity ($a$) and specificity ($b$) parameters of the test in question. However, given the invariant points on the screening plane, identical values of $\varepsilon$ may yield different shapes of the screening curve, since $\varepsilon$ does not respect traditional commutative properties. In order to compare the performance of two screening curves with identical $\varepsilon$ values, we derive two geometric definitions of the positive likelihood ratio (LR+), defined as the likelihood of a positive test result in patients with the disease divided by the likelihood of a positive test result in patients without the disease, which help distinguish the performance of the two screening tests. The first definition uses the angle $\beta$ created on the vertical axis by the line between the origin invariant and the prevalence threshold $\phi_e$, such that $LR+ = \frac{a}{1-b} = \cot^{2}(\beta)$. The second definition projects two lines $(y_1,y_2)$ from any point on the curve to the invariant points on the plane and defines the LR+ as the ratio of their derivatives $\frac{dy_1}{dx}$ and $\frac{dy_2}{dx}$.
Using the concepts of the prevalence threshold and the invariant points on the screening plane, the work herein presented provides a new geometric definition of the positive likelihood ratio (LR+) throughout the prevalence spectrum and describes a formal measure to compare the performance of two screening tests whose screening coefficients $\varepsilon$ are equal. \end{abstract} \section{The Fundamental Theorem of Screening} From the fundamental theorem of screening we obtain the following mathematical relationship relating the positive predictive value ($\rho$) to the pre-test probability ($\phi$), which equals the prevalence of disease amongst individuals at baseline risk \cite{manrai2014medicine}: \begin{large} \begin{equation} \displaystyle\lim_{\varepsilon \to 2}{\displaystyle \int_{0}^{1}}{\rho(\phi)d\phi} = 1 \end{equation} \end{large} \ where $\varepsilon$ is equal to the sum of the sensitivity ($a$) and specificity ($b$) parameters of the screening test in question \cite{balayla2020prevalence}. Equation (1) holds since the Euclidean plane, henceforth referred to as the screening plane, which contains the domain and range of the screening curve, is a square of dimensions $1 \times 1$ and, consequently, of area 1. \subsection{The Screening Plane} Graphically, the screening plane can be depicted as follows, with the vertical axis representing the positive predictive value and the horizontal axis representing the pre-test probability or prevalence of disease: \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, ymin=0, ymax=1, legend pos = south east, ymajorgrids=true, xmajorgrids=true, grid style=dashed, width=6cm, height=6cm, ] \end{axis} \end{tikzpicture} \\ \ \textbf{Figure 1. The Screening Plane} \end{center} \ As can be readily observed, the range and domain of the screening curve span $[0,1]$, and all curves share \emph{at least} two invariant points at [0,0] and [1,1] \cite{balayla2020prevalence}.
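These invariant points follow directly from the positive predictive value formula $\rho(\phi)=\frac{a\phi}{a\phi+(1-b)(1-\phi)}$ used throughout this paper; a minimal numerical sketch (the parameter values below are chosen arbitrarily for illustration):

```python
def ppv(phi, a, b):
    """Positive predictive value rho(phi) for sensitivity a, specificity b."""
    return a * phi / (a * phi + (1 - b) * (1 - phi))

# Whatever the test parameters, every screening curve passes through
# the invariant points [0,0] and [1,1].
for a, b in [(0.95, 0.99), (0.75, 0.85), (0.2, 0.4)]:
    assert ppv(0.0, a, b) == 0.0
    assert ppv(1.0, a, b) == 1.0
```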
While we restrict the screening plane to this square in order to obtain clinically useful information, the screening curve extends beyond this area. \section{Invariant Points} In mathematics, a fixed point (sometimes shortened to fixpoint, also known as an invariant point) of a function is an element of the function's domain that is mapped to itself by the function. That is to say, $c$ is a fixed point of the function $f$ if $f(c) = c$ \cite{birkhoff1922invariant}. In other words, there is a point with coordinates $\lbrace c, f(c)\rbrace$ that equals $\lbrace c,c\rbrace$. Invariant points do not move when a specific transformation is applied. However, points which are invariant under one transformation may not be invariant under a different transformation \cite{birkhoff1922invariant}. We can illustrate graphically the invariant points at the extremes of the screening plane at [0,0] and [1,1], with multiple screening curves with different sensitivity and specificity parameters in the same plane as follows: \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, ymin=0, ymax=1, legend pos = south east, ymajorgrids=true, xmajorgrids=true, grid style=dashed, width=6cm, height=6cm, ] \addplot [ domain= 0:1, color= blue, ] {(0.95*x)/(0.95*x+(1-0.99)*(1-x))}; \addplot [ domain= 0:1, color= orange, ] {(0.85*x)/(0.85*x+(1-0.95)*(1-x))}; \addplot [ domain= 0:1, color= red, ] {(0.75*x)/(0.75*x+(1-0.85)*(1-x))}; \addplot [ domain= 0:1, color= gray, ] {(0.5*x)/(0.5*x+(1-0.5)*(1-x))}; \addplot [ domain= 0:1, color= black, ] {(0.2*x)/(0.2*x+(1-0.4)*(1-x))}; \addplot [ domain= 0:1, color= magenta, ] {(0.1*x)/(0.1*x+(1-0.1)*(1-x))}; \addplot [ domain= 0:1, color= brown , ] {(0.02*x)/(0.02*x+(1-0.02)*(1-x))}; \end{axis} \end{tikzpicture} \textbf{Figure 2.
Screening curves with different $\varepsilon$ values} \end{center} \section{Non-Commutative Properties of $\varepsilon$} Two identical values of $\varepsilon$ will yield different screening curves depending on the individual values of the sensitivity and specificity. In this sense, the make-up of $\varepsilon$ is non-commutative because the equation for the positive predictive value is not linear. Take as an example two different tests whose $\varepsilon$ values equal 1.70. In the first case, we have a sensitivity of 95$\%$ and a specificity of 75$\%$. In the second case, we have the reverse, a sensitivity of 75$\%$ and a specificity of 95$\%$ \cite{balayla2020derivation}. Graphically, these scenarios yield the following curves: \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, ymin=0, ymax=1, legend pos = south east, ymajorgrids=true, xmajorgrids=true, grid style=dashed, width=6cm, height=6cm, ] \addplot [ domain= 0:1, color= blue, ] {(0.95*x)/(0.95*x+(1-0.75)*(1-x))}; \addplot [ domain= 0:1, color= orange, ] {(0.75*x)/(0.75*x+(1-0.95)*(1-x))}; \end{axis} \end{tikzpicture} \textbf{Figure 3. Identical values of $\varepsilon$ yield different screening curves} \end{center} It is clear therefore that despite $\varepsilon_1$ = $\varepsilon_2$ = 1.70, the areas under the curve are different such that: \ \begin{center} \begin{large} ${\displaystyle \int_{0}^{1}}{{\rho_2}(\phi)d\phi}>{\displaystyle \int_{0}^{1}}{{\rho_1}(\phi)d\phi}$ \end{large} \end{center} To overcome the non-commutative properties of $\varepsilon$ and determine which test performs better under standard conditions, we can take advantage of the invariant points of the screening curve at the extremes of the domain, namely the origin [0,0] and the endpoint [1,1].
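This asymmetry in the areas under the two curves can be confirmed numerically. The sketch below uses a plain midpoint Riemann sum (our own illustrative choice of quadrature, not a method from this paper) to integrate the two $\varepsilon = 1.70$ curves of Figure 3:

```python
def ppv(phi, a, b):
    """Positive predictive value for sensitivity a and specificity b."""
    return a * phi / (a * phi + (1 - b) * (1 - phi))

def area(a, b, n=10000):
    """Midpoint Riemann sum of rho(phi) over [0, 1]."""
    h = 1.0 / n
    return sum(ppv((i + 0.5) * h, a, b) for i in range(n)) * h

# Both tests have epsilon = a + b = 1.70, yet swapping sensitivity and
# specificity changes the curve, and hence the area under it.
area1 = area(0.95, 0.75)  # sensitivity 95%, specificity 75%
area2 = area(0.75, 0.95)  # sensitivity 75%, specificity 95%
assert area2 > area1
```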
\begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, ymin=0, ymax=1, xmin=0,xmax=1, legend pos = south east, ymajorgrids=true, xmajorgrids=true, grid style=dashed, width=6cm, height=6cm, ] \addplot[mark=*,color=red] coordinates {(0,0)}; \addplot[mark=*,color=red] coordinates {(1,1)}; \end{axis} \end{tikzpicture} \\ \ \textbf{Figure 4. Invariant points (red) on the screening plane} \end{center} \ \section{The Likelihood Ratio} The likelihood ratio is a computable statistic which provides a direct estimate of how much a test result will change the odds of having a disease \cite{hayden1999likelihood}. In essence, the likelihood ratio (LR) for a dichotomous test is defined as the likelihood of a test result in patients with the disease divided by the likelihood of the test result in patients without the disease. Both the positive and negative likelihood ratios can be calculated depending on the clinical scenario by simply using the sensitivity and specificity parameters of a test. An LR close to 1 means that the test result does not change the likelihood of disease or the outcome of interest appreciably \cite{hayden1999likelihood}. The more the likelihood ratio for a positive test (LR+) exceeds 1, the more likely the disease or outcome \cite{grove1984positive}. The more a likelihood ratio for a negative test is less than 1, the less likely the disease or outcome. Thus, LRs correspond to the clinical concepts of ruling in and ruling out disease \cite{hayden1999likelihood}. \section{The Prevalence Threshold} We have previously defined the prevalence threshold as the prevalence level on the screening curve below which screening tests start to produce an increasing number of false positive results \cite{balayla2020prevalence}.
In technical terms, this is equivalent to the inflection point, also known as the point of greatest curvature, on the screening curve below which the rate of change of a test's positive predictive value drops at a differential pace relative to the prevalence \cite{balayla2020prevalence}. This value, termed $\phi_e$, is defined at the following point on the prevalence (pre-test probability) axis: \ \begin{large} \begin{equation} \phi_e = \frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)}=\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}} \end{equation} \end{large} \begin{theorem} Let $S$ be the screening plane of area 1 with invariant points [0,0] and [1,1] where lies the screening curve's continuous function $0<\rho(\phi)<1$. Then there is a vertical function $f(a,b)$ which transects the screening curve at the prevalence threshold, such that the square of the cotangent of the angle $\beta$ formed between the line $y_1$ from the origin to the prevalence threshold and the vertical axis equals the positive likelihood ratio (LR+). \end{theorem} \subsection{Proof of Theorem 1} If we take the point [0,0], we can draw a line from the origin to the intersection of the prevalence threshold with the screening curve. The coordinates of this intersecting point are [$\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}$, $\sqrt{\frac{a}{1-b}}\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}$]. This intersection allows us to determine the slope $m$ of the line that crosses that point and the origin at [0,0] (Figure 5).
\begin{large} \begin{equation} m = \frac{\Delta y}{\Delta x} = \frac{y_2-y_1}{x_2-x_1}=\frac{\sqrt{\frac{a}{1-b}}\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}-0}{\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}-0} = \sqrt{\frac{a}{1-b}} \end{equation} \end{large} Since by definition the point [0,0] falls on the line, the linear equation from the origin to the prevalence threshold point is therefore simply: \begin{large} \begin{equation} f(x) = \sqrt{\frac{a}{1-b}}x \end{equation} \end{large} \ where $\frac{a}{1-b}$ is the positive likelihood ratio (LR+), defined as the likelihood of a positive test result in patients \emph{with} the disease divided by the likelihood of a positive test result in patients \emph{without} the disease. To distinguish between two screening curves with identical $\varepsilon$ values, we make use of the angle $\beta$ created by the line between the invariant origin and the prevalence threshold $\phi_e$ to form a right-angled triangle as follows: \begin{center} \textbf{Figure 5. Representation of the angle $\beta$ on the screening curve} \\ \ \hspace*{-3.7em} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, y label style={at={(current axis.above origin)},rotate=270,anchor=north}, ymin=0, ymax=1, xmin=0,xmax=1, legend pos = south east, width=10cm, height=10cm, ] \addplot [ domain= 0.34:1, color= blue, ] {(0.95*x)/(0.95*x+(1-0.75)*(1-x))}; \addplot [ domain= 0.048:0.34, color= blue, style=dashed, ] {(0.95*x)/(0.95*x+(1-0.75)*(1-x))}; \addplot [ domain= 0:0.34, color= red, ] {((0.95/(1-0.75))^0.5)*x}; \addplot [ domain= 0:0.34, color= red, style=dashed, ] {0.66}; \addplot[samples=50, smooth,domain=0:6,magenta, name path=three, style=dashed] coordinates {(0.34,0)(0.34,0.66)}; \end{axis} \draw (0.6,1.2) arc (62:80:2); \node[] at (76.5:1.0) {$\beta$}; \node[] at (21:3.5) {$\phi_e$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-2pt,yshift=0pt] (-1,0) -- (-1,5.5) node [blue,midway,xshift=-2.8cm] {$\Delta y = 
\sqrt{\frac{a}{1-b}}\frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)}$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (0.2,5.7) -- (3,5.7) node [blue,midway,xshift=0.6cm,yshift=0.9cm] {\large $\Delta x = \frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)} $}; \end{tikzpicture} \end{center} \ \ We can determine the value of $\beta$ through the following trigonometric identities: \begin{large} \begin{equation} \tan(\beta) =\frac{\mathrm{opp}}{\mathrm{adj}}= \frac{\Delta x}{\Delta y} = \frac{1}{\sqrt{\frac{a}{1-b}}}=\sqrt{\frac{1-b}{a}} \end{equation} \end{large} \ We can therefore isolate $\beta$ such that: \begin{large} \begin{equation} \beta = \arctan\left(\sqrt{\frac{1-b}{a}}\right) \end{equation} \end{large} For simplicity's sake we can define the expression $\sqrt{\frac{1-b}{a}}$ as $\Psi$. From the above relationship we can infer the critical relationship between a test's parameters and the angle $\beta$: \begin{large} \begin{equation} \lim_{\Psi \to 0} \beta = 0 \end{equation} \end{large} We now have enough information to distinguish between the shapes of two different screening curves which have the same $\varepsilon$. Notably, the test whose angle $\beta$ is lower will take up a greater area in the screening plane and therefore will offer a greater positive predictive value for a given risk level. Otherwise stated: \begin{large} \begin{equation} \beta_2 < \beta_1\leftrightarrow \Psi_2 < \Psi_1 \rightarrow \rho_2 > \rho_1 \end{equation} \end{large} The latter follows from the non-commutative properties of $\varepsilon$.
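These relations are easy to verify numerically. The sketch below computes $\beta$ for the two $\varepsilon = 1.70$ tests of Figure 3 and checks that the test with the smaller angle has the pointwise larger positive predictive value, as stated in equation (8):

```python
import math

def beta(a, b):
    """Angle between the vertical axis and the line from the origin to
    the prevalence threshold: beta = arctan(sqrt((1 - b) / a))."""
    return math.atan(math.sqrt((1 - b) / a))

def ppv(phi, a, b):
    """Positive predictive value for sensitivity a and specificity b."""
    return a * phi / (a * phi + (1 - b) * (1 - phi))

beta1 = beta(0.95, 0.75)  # sensitivity 95%, specificity 75%
beta2 = beta(0.75, 0.95)  # sensitivity 75%, specificity 95%

# beta_2 < beta_1, and accordingly rho_2 > rho_1 across the prevalence range.
assert beta2 < beta1
assert all(ppv(p, 0.75, 0.95) > ppv(p, 0.95, 0.75)
           for p in (0.1, 0.3, 0.5, 0.7, 0.9))
```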
\subsection{Using the angle $\beta$ to determine the LR+ of a screening curve} From the relationship above in (5), and knowing that the positive likelihood ratio (LR+) \cite{deeks2004diagnostic} is defined as the ratio of the sensitivity over the complement of the specificity ($\frac{a}{1-b}$), we obtain: \begin{large} \begin{equation} \tan(\beta) = \frac{1}{\sqrt{\frac{a}{1-b}}} \Leftrightarrow \sqrt{\frac{a}{1-b}} = \frac{1}{\tan(\beta)}=\cot(\beta) \end{equation} \end{large} \ \\ \ where $\cot(\beta)$ is the cotangent of the angle $\beta$. Thus, a new geometric formulation for the positive likelihood ratio (LR+) \cite{deeks2004diagnostic} ensues, now defined as: \begin{large} \begin{equation} \cot^{2}(\beta) = \frac{a}{1-b} \end{equation} \end{large} \section{Deriving the likelihood ratio LR+ by means of the prevalence threshold} \begin{theorem} Let $S$ be the screening plane of area 1 with invariant points [0,0] and [1,1] where lies the screening curve's continuous function $0<\rho(\phi)<1$. Then there is a vertical function $f(a,b)$ which transects the screening curve at the prevalence threshold, such that the ratio $\chi$ between the derivatives $\frac{dy_1}{dx}$ and $\frac{dy_2}{dx}$ of the linear equations $y_1$, $y_2$ which stem from the invariant points on the plane to the prevalence threshold equals the positive likelihood ratio (LR+). \end{theorem} \subsection{Proof of Theorem 2} While the angular definition makes intuitive sense, we can derive the positive likelihood ratio (LR+) by using the prevalence threshold as a transecting function that divides the screening curve into two halves. Notably, as stated in previous work defining the prevalence threshold, the half of the screening curve below the prevalence threshold represents the risk range where the reliability of the test in question drops exponentially and the number of false positive results increases most rapidly.
This time, we will use the second invariant point, namely [1,1], to define the linear equation from the prevalence threshold [$\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}$, $\sqrt{\frac{a}{1-b}}\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}$] to this point. In so doing, we thus obtain: \begin{large} \begin{equation} m = \frac{\Delta y}{\Delta x} = \frac{y_2-y_1}{x_2-x_1}=\frac{1-\sqrt{\frac{a}{1-b}}\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}}{1-\frac{\sqrt{1-b}}{\sqrt{a}+\sqrt{1-b}}} \end{equation} \end{large} \ We can use the invariant point [1,1] to determine the y-intercept and thus obtain the following final linear equation for this line: \begin{large} \begin{equation} y=\frac{\left(1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}\sqrt{\frac{a}{1-b}}\right)}{1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}}x\ +\ \left[1-\frac{\left(1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}\sqrt{\frac{a}{1-b}}\right)}{1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}}\right] \end{equation} \end{large} If we take the ratio of the derivatives $\chi$ of both equations in (4) and (12) we obtain the following: \begin{large} \begin{equation} \Rightarrow \chi =\frac{\frac{dy_1}{dx}}{\frac{dy_2}{dx}} = \frac{\sqrt{\frac{a}{1-b}}}{\frac{\left(1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}\sqrt{\frac{a}{1-b}}\right)}{1-\frac{\sqrt{1-b}}{\sqrt{1-b}+\sqrt{a}}}} =\frac{a}{1-b} \end{equation} \end{large} This is the equation for the positive likelihood ratio of a screening test (LR+). It thus follows that we can re-define the prevalence threshold as the single dividing point on the screening curve from which only two lines can be drawn to the invariant points on the screening curve such that the ensuing ratio of their derivatives equals the positive likelihood ratio.
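Equation (13) can also be confirmed numerically: at the prevalence threshold, the ratio of the slopes of the two lines drawn to the invariant points recovers $\frac{a}{1-b}$. A minimal sketch (the parameter values are chosen arbitrarily for illustration):

```python
import math

def prevalence_threshold(a, b):
    """phi_e = sqrt(1 - b) / (sqrt(a) + sqrt(1 - b)), equation (2)."""
    return math.sqrt(1 - b) / (math.sqrt(a) + math.sqrt(1 - b))

def ppv(phi, a, b):
    """Positive predictive value for sensitivity a and specificity b."""
    return a * phi / (a * phi + (1 - b) * (1 - phi))

a, b = 0.95, 0.75
x = prevalence_threshold(a, b)
y = ppv(x, a, b)

# Slopes of the lines from the invariant points [0,0] and [1,1]
# to the prevalence-threshold point on the screening curve.
m1 = y / x
m2 = (1 - y) / (1 - x)

# The threshold point lies on the line of slope sqrt(a / (1 - b)) ...
assert math.isclose(y, math.sqrt(a / (1 - b)) * x)
# ... and the ratio of the two slopes is the positive likelihood ratio.
assert math.isclose(m1 / m2, a / (1 - b))
```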
Graphically, we can depict the two linear functions in red and blue as follows: \begin{center} \hspace*{-8.4em} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\phi$, ylabel = {$\rho(\phi)$}, y label style={at={(current axis.above origin)},rotate=270,anchor=north}, ymin=0, ymax=1, xmin=0,xmax=1, legend pos = south east, width=10cm, height=10cm, ] \addplot [ domain= 0.34:1, color= black, ] {(0.95*x)/(0.95*x+(1-0.75)*(1-x))}; \addplot [ domain= 0:0.34, color= black, ] {(0.95*x)/(0.95*x+(1-0.75)*(1-x))}; \addplot [ domain= 0:0.34, color= red, ] {((0.95/(1-0.75))^0.5)*x}; \addplot [ domain= 0.34:1, color= blue, ] {(0.339/0.660)*x+0.489}; \addplot [ domain= 0:0.34, color= red, style=dashed, ] {0.66}; \addplot [ domain= 0.34:1, color= blue, style=dashed, ] {0.66}; \addplot[samples=50, smooth,domain=0:6,red, name path=three, style=dashed] coordinates {(0.34,0)(0.34,0.66)}; \addplot[samples=50, smooth,domain=0:6,blue, name path=three, style=dashed] coordinates {(1,0.66)(1,1)}; \end{axis} \node[] at (47:4.6) {$\phi_e$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-2pt,yshift=0pt] (-1,0) -- (-1,5.5) node [red,midway,xshift=-2.8cm] {$\Delta y_a = \sqrt{\frac{a}{1-b}}\frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)}$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt] (0.2,5.7) -- (3,5.7) node [red,midway,xshift=0.6cm,yshift=0.9cm] {\large $\Delta x_a = \frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)} $}; \draw [decorate,decoration={brace,amplitude=10pt}, xshift=275pt,yshift=7pt] (-1,8) -- (-1,5.5) node [blue,midway,xshift=2.4cm, yshift = 0cm] {\tiny $\Delta y_b = 1-\sqrt{\frac{a}{1-b}}\frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)}$}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=55pt,yshift=124.9pt] (6.4,1) -- (1,1) node [blue,midway,xshift=0cm,yshift=-1.0cm] {\large $\Delta x_b = 1-\frac{\sqrt{a\left(-b+1\right)}+b-1}{(\varepsilon-1)} $}; \end{tikzpicture} \textbf{Figure 6. 
Prevalence threshold transects the screening curve} \\ \ \end{center} \section{Generalized version of the theorem on the Geometric Definition of the Positive Likelihood Ratio (LR+)} \begin{theorem} Let $S$ be the screening plane of area 1 with invariant points [0,0] and [1,1] where lies the screening curve's continuous function $0<\rho(\phi)<1$. Then there is a vertical function $f(a,b)$ which transects the screening curve at any given point, such that the ratio $\chi$ between the derivatives $\frac{dy_1}{dx}$ and $\frac{dy_2}{dx}$ of the linear equations $y_1$, $y_2$ which stem from the invariant points on the plane to that point on the curve equals the positive likelihood ratio (LR+). \end{theorem} \subsection{Proof of Theorem 3} While the use of the prevalence threshold as a transecting function that divides the screening curve into two halves can yield the LR+, this point is but a special case of a broader definition. Indeed, drawing a line from each invariant point to any point on the curve yields linear equations whose ratio of derivatives yields the LR+. This time, we will use the PPV equation to obtain the coordinates of any point on the curve as [$\phi,\frac{a\phi}{a\phi+\left(1-b\right)\left(1-\phi\right)}$].
Determining the equation of the lines that stem from the invariant origin [0,0] to any point on the curve we obtain: \begin{large} \begin{equation} m_1 = \frac{\Delta y}{\Delta x} = \frac{\frac{a\phi}{ a\phi+(1-b)(1-\phi)}-0}{\phi-0} = \frac{\frac{a\phi}{ a\phi+(1-b)(1-\phi)}}{\phi} \end{equation} \end{large} \ We can likewise use the invariant point [1,1] to determine the slope of the second line: \begin{large} \begin{equation} m_2 = \frac{\Delta y}{\Delta x} = \frac{1-\frac{a\phi}{ a\phi+(1-b)(1-\phi)}}{1-\phi} \end{equation} \end{large} If we take the ratio of the derivatives $\chi$ of both equations in (14) and (15) we obtain the following: \begin{large} \begin{equation} \Rightarrow \chi =\frac{\frac{dy_1}{dx}}{\frac{dy_2}{dx}} = \frac{\frac{\frac{a\phi}{ a\phi+(1-b)(1-\phi)}}{\phi}}{\frac{1-\frac{a\phi}{ a\phi+(1-b)(1-\phi)}}{1-\phi}}=\frac{a}{1-b} \end{equation} \end{large} Once again, we retrieve the equation for the positive likelihood ratio of a screening test (LR+). It thus follows that Theorem 3 generalizes the geometrization of the LR+ from the prevalence threshold point, as in Theorem 2, to any point on the screening curve. \section{Discussion} By having a foundational understanding of the interpretation of sensitivity, specificity, predictive values, and likelihood ratios, healthcare providers can better understand outputs from current and new diagnostic assessments, aiding in decision-making and ultimately improving healthcare for patients. That said, it is important to understand the limitations of the LR. First, and perhaps most obviously, the accuracy of a LR depends entirely upon the relevance and quality of the studies that generated the numbers (sensitivity and specificity) that inform that LR. 
Even when a LR is used in this fashion, there are definite limits to the accuracy that we can presume underlies the number: for example, the sensitivity and specificity evidence as originally generated may be flawed, and the pre-test probability judgment can vary widely, which means that there are margins of error that should be considered even under ideal circumstances. In addition, LRs have never been validated for use in series or in parallel. In other words, there is no precedent to suggest that LRs can be used one after the other (i.e. using one LR to generate a post-test probability, and then using this as a pre-test probability for application of a different LR) or simultaneously, to arrive at a more accurate probability or diagnosis. It is important to keep these limitations in mind when using LRs because in many ways it is quite counter-intuitive to imagine that only one question at a time can be addressed when seeing a patient in the clinical environment with all of its inherent complexity. Despite this seemingly narrow use, LRs remain an invaluable and unique tool, as there is no other established method for adjusting a probability of disease based on known diagnostic test properties. \section{Conclusion} Using the concepts of the prevalence threshold and the invariant points on the screening plane, the work herein presented provides a geometric definition of the positive likelihood ratio (LR+) and describes a formal measure to compare the performance of two screening tests whose screening coefficients $\varepsilon$ are equal. Indeed, when faced with two screening tests whose sensitivity and specificity parameters add to the same value, the angle $\beta$ and the ratio of derivatives $\chi$ can help distinguish their performance throughout the prevalence/risk spectrum. \end{document}
\begin{document} \title{RLZAP:\\Relative Lempel-Ziv with Adaptive Pointers\thanks{Supported by the Academy of Finland through grants 258308, 268324, 284598 and 285221. Parts of this work were done during the second author's visit to the University of Helsinki and during the third author's visits to Illumina Cambridge Ltd.\ and the University of A Coru\~na, Spain.}} \author{Anthony J.\ Cox\inst{1} \and Andrea Farruggia\inst{2} \and Travis Gagie\inst{3,4} \and\\ Simon J.\ Puglisi\inst{3,4} \and Jouni Sir\'en\inst{5}} \authorrunning{Cox et al.} \institute{Illumina Cambridge Ltd, UK \and University of Pisa, Italy \and Helsinki Institute for Information Technology, Finland \and University of Helsinki, Finland \and Wellcome Trust Sanger Institute, UK} \maketitle \begin{abstract} Relative Lempel-Ziv (\rlz{}) is a popular algorithm for compressing databases of genomes from individuals of the same species when fast random access is desired. With Kuruppu et al.'s (SPIRE 2010) original implementation, a reference genome is selected and then the other genomes are greedily parsed into phrases exactly matching substrings of the reference. Deorowicz and Grabowski ({\it Bioinformatics}, 2011) pointed out that letting each phrase end with a mismatch character usually gives better compression because many of the differences between individuals' genomes are single-nucleotide substitutions. Ferrada et al.\ (SPIRE 2014) then pointed out that also using relative pointers and run-length compressing them usually gives even better compression. In this paper we generalize Ferrada et al.'s idea to also handle short insertions, deletions and multi-character substitutions well. We show experimentally that our generalization achieves better compression than Ferrada et al.'s implementation with comparable random-access times.
\end{abstract} \section{Introduction} \label{sec:introduction} Next-generation sequencing technologies can quickly and cheaply yield far more genetic data than can fit into an everyday computer's memory, so it is important to find ways to compress it while still supporting fast random access. Often the data is highly repetitive and can thus be compressed very well with LZ77~\cite{ZL77}, but then random access is slow. For many applications, however, we need only store a database of genomes from individuals of the same species, which are not only highly repetitive collectively but also all very similar to each other. Kuruppu, Puglisi and Zobel~\cite{KPZ10} proposed choosing one of the genomes as a reference and then greedily parsing each of the others into phrases exactly matching substrings of that reference. They called their algorithm Relative Lempel-Ziv (\rlz{}) because it can be viewed as a version of LZ77 that looks for phrase sources only in the reference, which greatly speeds up random access later. (Ziv and Merhav~\cite{ZM93} introduced a similar algorithm for estimating the relative entropy of the sources of two sequences.) RLZ is now popular for compressing not only such genomic databases but also other kinds of repetitive datasets; see, e.g.,~\cite{HPZ11a,HPZ11b}. Deorowicz and Grabowski~\cite{DG11} pointed out that letting each phrase end with a mismatch character usually gives better compression on genomic databases because many of the differences between individuals' genomes are single-nucleotide substitutions, and gave a new implementation with this optimization. Ferrada, Gagie, Gog and Puglisi~\cite{FGGP14} then pointed out that often the current phrase's source ends two characters before the next phrase's source starts, so the distances between the phrases' starting positions and their sources' starting positions are the same.
They showed that using relative pointers and run-length compressing them usually gives even better compression on genomic databases. In this paper we generalize Ferrada et al.'s idea to also handle short insertions, deletions and substitutions well. In Section~\ref{sec:preliminaries} we review in detail \rlz{} and Deorowicz and Grabowski's and Ferrada et al.'s optimizations. We also discuss how \rlz{} can be used to build relative data structures and why the optimizations that work to better compress genomic databases fail for this application. In Section~\ref{sec:adaptive} we explain the design and implementation of \rlz{} with adaptive pointers (\rlzap{}): in short, after parsing each phrase, we look ahead several characters to see if we can start a new phrase with a similar relative pointer; if so, we store the intervening characters as mismatch characters and store the new relative pointer encoded as its difference from the previous one. We present our experimental results in Section~\ref{sec:experiments}, showing that \rlzap{} achieves better compression than Ferrada et al.'s implementation with comparable random-access times. Finally, in Section~\ref{sec:future} we discuss directions for future work. Our implementation and datasets are available for download from \mbox{\url{http://github.com/farruggia/rlzap}\ .} \section{Preliminaries} \label{sec:preliminaries} In this section we discuss the previous work that is the basis and motivation for this paper. We first review in greater detail Kuruppu et al.'s implementation of \rlz{} and Deorowicz and Grabowski's and Ferrada et al.'s optimizations. We then quickly summarize the new field of {\em relative data structures} --- which concerns when and how we can compress a new instance of a data structure using an instance we already have for a similar dataset --- and explain how it uses \rlz{} and why it needs a generalization of Deorowicz and Grabowski's and Ferrada et al.'s optimizations.
\subsection{RLZ} \label{subsec:rlz} To compute the \rlz{} parse of a string \(S [1..n]\) with respect to a reference string $R$ using Kuruppu et al.'s implementation, we greedily parse $S$ from left to right into phrases \begin{eqnarray*} && S [p_1 = 1..p_1 + \ell_1 - 1]\\ && S [p_2 = p_1 + \ell_1..p_2 + \ell_2 - 1]\\ && \hspace{12ex} \vdots\\ && S [p_t = p_{t - 1} + \ell_{t - 1}..p_t + \ell_t - 1 = n] \end{eqnarray*} such that each \(S [p_i..p_i + \ell_i - 1]\) exactly matches some substring \(R [q_i..q_i + \ell_i - 1]\) of $R$ --- called the $i$th phrase's {\em source} --- for \(1 \leq i \leq t\), but \(S [p_i..p_i + \ell_i]\) does not exactly match any substring in $R$ for \(1 \leq i \leq t - 1\). For simplicity, we assume $R$ contains every distinct character in $S$, so the parse is well-defined. Suppose we have constant-time random access to $R$. To support constant-time random access to $S$, we store an array \(Q [1..t]\) containing the starting positions of the phrases' sources, and a compressed bitvector \(B [1..n]\) with constant query time (see, e.g.,~\cite{KKP14} for a discussion) and 1s marking the first character of each phrase. Given a position $j$ between 1 and $n$, we can compute in constant time \[S [j] = R \left[ \rule{0ex}{2ex} Q [B.\rank (j)] + j - B.\select (B.\rank (j)) \right]\,.\] If there are few phrases then $Q$ is small and $B$ is sparse, so we use little space. For example, if \begin{eqnarray*} R & = & \mathsf{ACATCATTCGAGGACAGGTATAGCTACAGTTAGAA}\\ S & = & \mathsf{ACATGATTCGACGACAGGTACTAGCTACAGTAGAA} \end{eqnarray*} then we parse $S$ into \[\mathsf{ACAT}, \mathsf{GA}, \mathsf{TTCGA}, \mathsf{CGA}, \mathsf{CAGGTA}, \mathsf{CTA}, \mathsf{GCTACAGT}, \mathsf{AGAA}\,,\] and store \begin{eqnarray*} Q & = & 1, 10, 7, 9, 15, 24, 23, 32\\ B & = & 10001010000100100000100100000001000\,.
\end{eqnarray*} To compute \(S [25]\), we compute \(B.\rank (25) = 7\) and \(B.\select (7) = 24\), which tell us that \(S [25]\) is \(25 - 24 = 1\) character after the initial character in the 7th phrase. Since \(Q [7] = 23\), we look up \(S [25] = R [24] = \mathsf{C}\). \subsection{GDC} \label{subsec:gdc} Deorowicz and Grabowski~\cite{DG11} pointed out that with Kuruppu et al.'s implementation of \rlz{}, single-character substitutions usually cause two phrase breaks: e.g., in our example \(S [1..11] = \mathsf{ACATGATTCGA}\) is split into three phrases, even though the only difference between it and \(R [1..11]\) is that \(S [5] = \mathsf{G}\) and \(R [5] = \mathsf{C}\). They proposed another implementation, called the Genome Differential Compressor (GDC), that lets each phrase end with a mismatch character --- as the original version of LZ77 does --- so single-character substitutions usually cause only one phrase break. Since many of the differences between individuals' DNA are single-nucleotide substitutions, GDC usually compresses genomic databases better than Kuruppu et al.'s implementation. Specifically, with GDC we parse $S$ from left to right into phrases \(S [p_1..p_1 + \ell_1], S [p_2 = p_1 + \ell_1 + 1..p_2 + \ell_2], \ldots, S [p_t = p_{t - 1} + \ell_{t - 1} + 1..p_t + \ell_t = n]\) such that each \(S [p_i..p_i + \ell_i - 1]\) exactly matches some substring \(R [q_i..q_i + \ell_i - 1]\) of $R$ --- again called the $i$th phrase's source --- for \(1 \leq i \leq t\), but \(S [p_i..p_i + \ell_i]\) does not exactly match any substring in $R$, for \(1 \leq i \leq t - 1\). Suppose again that we have constant-time random access to $R$. To support constant-time random access to $S$, we store an array \(Q [1..t]\) containing the starting positions of the phrases' sources, an array \(M [1..t]\) containing the last character of each phrase, and a compressed bitvector \(B [1..n]\) with constant query time and 1s marking the last character of each phrase. 
Given a position $j$ between 1 and $n$, we can compute in constant time \[S [j] = \left\{ \begin{array}{l@{\hspace{2ex}}l} M [B.\rank (j)] & \mbox{if \(B [j] = 1\),}\\[1ex] R \left[ \rule{0ex}{2ex} Q [B.\rank (j) + 1] + j - B.\select(B.\rank (j)) - 1 \right] & \mbox{otherwise,} \end{array} \right.\] assuming \(B.\select (0) = 0\). In our example, we parse $S$ into \[\mathsf{ACATG}, \mathsf{ATTCGAC}, \mathsf{GACAGGTAC}, \mathsf{TAGCTACAGT}, \mathsf{AGAA}\,,\] and store \begin{eqnarray*} Q & = & 1, 6, 13, 21, 32\\ M & = & \mathsf{GCCTA}\\ B & = & 00001000000100000000100000000010001\,. \end{eqnarray*} To compute \(S [25]\), we compute \(B [25] = 0\), \(B.\rank (25) = 3\) and \(B.\select (3) = 21\), which tell us that \(S [25]\) is \(25 - 21 - 1 = 3\) characters after the initial character in the 4th phrase. Since \(Q [4] = 21\), we look up \(S [25] = R [24] = \mathsf{C}\). \subsection{Relative pointers} \label{subsec:pointers} Ferrada, Gagie, Gog and Puglisi~\cite{FGGP14} pointed out that after a single-character substitution, the source of the next phrase in GDC's parse often starts two characters after the end of the source of the current phrase: e.g., in our example the source for \(S [1..5] = \mathsf{ACATG}\) is \(R [1..4] = \mathsf{ACAT}\) and the source for \(S [6..12] = \mathsf{ATTCGAC}\) is \(R [6..11] = \mathsf{ATTCGA}\). This means the distances between the phrases' starting positions and their sources' starting positions are the same. They proposed an implementation of \rlz{} that parses $S$ like GDC does but keeps a relative pointer, instead of the explicit pointer, and stores the list of those relative pointers run-length compressed. Since the relative pointers usually do not change after single-nucleotide substitutions, \rlz{} with relative pointers usually gives even better compression than GDC on genomic databases. 
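In code, the GDC access rule can be sketched as follows on the running example. This is an illustrative Python model only: plain strings and linear-time \texttt{rank}/\texttt{select} helpers stand in for the compressed bitvectors with constant query time that an actual implementation would use.

```python
# Sketch of GDC-style random access, using the running example.
# Naive rank/select over plain strings stand in for compressed bitvectors.
R = "ACATCATTCGAGGACAGGTATAGCTACAGTTAGAA"
Q = [1, 6, 13, 21, 32]  # 1-based starting positions of the phrases' sources in R
M = "GCCTA"             # mismatch (last) character of each phrase
B = "00001000000100000000100000000010001"  # 1s mark each phrase's last character

def rank(bv, j):
    """Number of 1s in bv[1..j] (1-based)."""
    return bv[:j].count("1")

def select(bv, k):
    """1-based position of the k-th 1; select(0) = 0."""
    pos = 0
    for _ in range(k):
        pos = bv.index("1", pos) + 1
    return pos

def access(j):
    """Return S[j] (1-based), without ever storing S itself."""
    r = rank(B, j)
    if B[j - 1] == "1":                    # j is a phrase's mismatch character
        return M[r - 1]
    # 1-based formula: S[j] = R[Q[r + 1] + j - select(r) - 1]
    return R[Q[r] + j - select(B, r) - 2]  # extra -1 makes the index into R 0-based
```

For example, \texttt{access(25)} follows the worked example and returns \(R [24] = \mathsf{C}\), and concatenating \texttt{access(1)} through \texttt{access(35)} reconstructs $S$.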
(We note that Deorowicz, Danek and Niemiec~\cite{DDN15} recently proposed a new version of GDC, called GDC2, that has improved compression but does not support fast random access.) Suppose again that we have constant-time random access to $R$. To support constant-time random access to $S$, we store the array $M$ of mismatch characters and the bitvector $B$ as with GDC. Instead of storing $Q$, we build an array \(D [1..t]\) containing, for each phrase, the difference \(q_i - p_i\) between its source's starting position and its own starting position. We store $D$ run-length compressed: i.e., we partition it into maximal consecutive subsequences of equal values, store an array $V$ containing one copy of the value in each subsequence, and a bitvector \(L [1..t]\) with constant query time and 1s marking the first value of each subsequence. Given $k$ between 1 and $t$, we can compute in constant time \[D [k] = V [L.\rank (k)]\,.\] Given a position $j$ between 1 and $n$, we can compute in constant time \[S [j] = \left\{ \begin{array}{l@{\hspace{2ex}}l} M [B.\rank (j)] & \mbox{if \(B [j] = 1\),}\\[1ex] R \left[ \rule{0ex}{2ex} D [B.\rank (j) + 1] + j \right]& \mbox{otherwise.} \end{array} \right.\] In our example, we again parse $S$ into \[\mathsf{ACATG}, \mathsf{ATTCGAC}, \mathsf{GACAGGTAC}, \mathsf{TAGCTACAGT}, \mathsf{AGAA}\,,\] and store \begin{eqnarray*} M & = & \mathsf{GCCTA}\\ B & = & 00001000000100000000100000000010001\,, \end{eqnarray*} but now we store \(D = 0, 0, 0, -1, 0\) as \(V = 0, -1, 0\) and \(L = 10011\) instead of storing $Q$. To compute \(S [25]\), we again compute \(B [25] = 0\) and \(B.\rank (25) = 3\), which tell us that \(S [25]\) is in the 4th phrase. We add 25 to the 4th relative pointer \(D [4] = V [L.\rank (4)] = -1\) and obtain 24, so \(S [25] = R [24]\). 
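Continuing the example, the run-length compressed relative pointers can be modelled the same way; again a naive Python sketch, with $V$, $L$, $M$ and $B$ as in the text standing in for their compressed counterparts.

```python
# Sketch of random access with run-length compressed relative pointers.
R = "ACATCATTCGAGGACAGGTATAGCTACAGTTAGAA"
M = "GCCTA"             # mismatch (last) character of each phrase
B = "00001000000100000000100000000010001"  # 1s mark each phrase's last character
V = [0, -1, 0]          # one value per run of equal relative pointers
L = "10011"             # 1s mark the first pointer of each run

def rank(bv, j):
    """Number of 1s in bv[1..j] (1-based)."""
    return bv[:j].count("1")

def D(k):
    """k-th relative pointer q_k - p_k, recovered from V and L."""
    return V[rank(L, k) - 1]

def access(j):
    """Return S[j] (1-based): M for mismatches, else R[D[rank(j) + 1] + j]."""
    r = rank(B, j)
    if B[j - 1] == "1":
        return M[r - 1]
    return R[D(r + 1) + j - 1]  # -1 makes the 1-based index into R 0-based
```

For \(j = 25\) this adds the 4th relative pointer \(D (4) = -1\) to 25 and returns \(R [24] = \mathsf{C}\), as in the text.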
A single-character insertion or deletion usually causes only a single phrase break in the parse but a new run in $D$, with the values in the run being one less or one more than the values in the previous run. In our example, the insertion of \(S [21] = \mathsf{C}\) causes the value to decrement to -1, and the deletion of \(R [30] = \mathsf{T}\) (or, equivalently, of \(R [31] = \mathsf{T}\)) causes the value to increment to 0 again. In larger examples, where the values of the relative pointers are often a significant fraction of $n$, it seems wasteful to store a new value uncompressed when it differs only by 1 from the previous value. For example, suppose $R$ and $S$ are thousands of characters long, \begin{eqnarray*} R [1873..1907] & = & \ldots\mathsf{ACATCATTCGAGGACAGGTATAGCTACAGTTAGAA}\ldots\\ S [2009..2043] & = & \ldots\mathsf{ACATGATTCGACGACAGGTACTAGCTACAGTAGAA}\ldots \end{eqnarray*} and GDC still parses \(S [2009..2043]\) into the same phrases as before, with their sources in \(R [1873..1907]\). The relative pointers for those phrases are \(-136, -136,\) \(-136, -137, -136\), so we store \(-136, -137, -136\) for them in $V$, which takes at least a couple of dozen bits without further compression. \subsection{Relative data structures} \label{subsec:structures} As mentioned in Section~\ref{sec:introduction}, the new field of relative data structures concerns when and how we can compress a new instance of a data structure, using an instance we already have for a similar dataset. Suppose we have a basic FM-index~\cite{FM05} for $R$ --- i.e., a rank data structure over the Burrows-Wheeler Transform (BWT)~\cite{BW94} of $R$, without a suffix-array sample --- and we want to use it to build a very compact basic FM-index for $S$.
Since $R$ and $S$ are very similar, it is not surprising that their BWTs are also fairly similar: \begin{eqnarray*} \BWT (R) & = & \mathsf{AAGGT\$TTGCCTCCAAATTGAGCAAAGACTAGATGA}\\ \BWT (S) & = & \mathsf{AAGGT\$GTTTCCCGAAAATGAACCTAAGACGGCTAA}\,. \end{eqnarray*} Belazzougui, Gog, Gagie, Manzini and Sir\'en~\cite{BGGMS14} (see also~\cite{BBGMS15}) showed how we can implement such a relative FM-index for $S$ by choosing a common subsequence of the two BWTs and then storing bitvectors marking the characters not in that common subsequence, and rank data structures over those characters. They also showed how to build a relative suffix-array sample to obtain a fully-functional relative FM-index for $S$, but reviewing that is beyond the scope of this paper. An alternative to Belazzougui et al.'s basic approach is to compute the \rlz{} parse of \(\BWT (S)\) with respect to \(\BWT (R)\) and then store the rank for each character just before the beginning of each phrase. We can then answer a rank query \(\BWT (S).\rank_X (j)\) by finding the beginning \(\BWT (S) [p]\) of the phrase containing \(\BWT (S) [j]\) and the beginning \(\BWT (R) [q]\) of that phrase's source, then computing \[\BWT (S).\rank_X (p - 1) + \BWT (R).\rank_X (q + j - p) - \BWT (R).\rank_X (q - 1)\,.\] Unfortunately, single-character substitutions between $R$ and $S$ usually cause insertions, deletions and multi-character substitutions between \(\BWT (R)\) and \(\BWT (S)\), so Deorowicz and Grabowski's and Ferrada et al.'s optimizations no longer help us, even when the underlying strings are individuals' genomes. On the other hand, on average those insertions, deletions and multi-character substitutions are fairly few and short~\cite{LMS12}, so there is still hope that those optimized parsing algorithms can be generalized and applied to make this alternative practical. 
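As a small sanity check of this alternative, the sketch below computes a Kuruppu-style greedy \rlz{} parse of \(\BWT (S)\) against \(\BWT (R)\), stores the rank of every character just before the beginning of each phrase, and answers rank queries with the formula above. It is an illustrative Python model only: the naive substring searches and counts stand in for a real FM-index over \(\BWT (R)\).

```python
import bisect

def rlz_parse(S, R):
    """Greedy RLZ parse of S against R: a list of (start in S, source start
    in R, length), all 0-based. Assumes every character of S occurs in R."""
    phrases, p = [], 0
    while p < len(S):
        l = 0
        while p + l < len(S) and S[p:p + l + 1] in R:
            l += 1
        phrases.append((p, R.find(S[p:p + l]), l))
        p += l
    return phrases

def build_relative_rank(S, R):
    """Store, for each phrase, the rank of every character just before it."""
    phrases = rlz_parse(S, R)
    starts = [p for p, _, _ in phrases]
    pre = [{c: S[:p].count(c) for c in set(S)} for p, _, _ in phrases]
    return phrases, starts, pre

def rel_rank(R, phrases, starts, pre, X, j):
    """Number of occurrences of X in S[0:j], answered via R only."""
    if j == 0:
        return 0
    i = bisect.bisect_right(starts, j - 1) - 1  # phrase containing position j - 1
    p, q, _ = phrases[i]
    return pre[i].get(X, 0) + R[q:q + (j - p)].count(X)
```

On the two BWTs above, the answers agree with counting directly in \(\BWT (S)\), even though only the per-phrase ranks and \(\BWT (R)\) are consulted at query time.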
Our immediate concern is with a recent implementation of relative suffix trees~\cite{GNPS15}, which uses relative FM-indexes and relatively-compressed longest-common-prefix (LCP) arrays. Deorowicz and Grabowski's and Ferrada et al.'s optimizations also fail when we try to compress the LCP arrays, and when we use Kuruppu et al.'s implementation of \rlz{}, the arrays take a substantial fraction of the total space. In our example, however, \begin{eqnarray*} \LCP (R) & = & \mathsf{0,\!1,\!1,\!4,\!3,\!1,\!2,\!2,\!3,\!2,\!1,\!2,\!2,\!0,\!3,\!2,\!3,\!1,\!1,\!0,\!2,\!2,\!1,\!1,\!2,\!1,\!2,\!0,\!2,\!3,\!2,\!1,\!2,\!1,\!2}\\ \LCP (S) & = & \mathsf{0,\!1,\!1,\!4,\!3,\!2,\!2,\!1,\!2,\!2,\!2,\!1,\!2,\!0,\!3,\!2,\!1,\!4,\!1,\!3,\!0,\!2,\!3,\!2,\!1,\!1,\!1,\!3,\!0,\!3,\!2,\!3,\!1,\!1,\!1} \end{eqnarray*} are quite similar: e.g., they have a common subsequence of length 26, almost three quarters of their individual lengths. LCP values tend to grow at least logarithmically with the size of the strings, so good compression becomes more important. \section{Adaptive Pointers} \label{sec:adaptive} We generalize Ferrada et al.'s optimization to handle short insertions, deletions and substitutions by introducing {\em adaptive pointers} and by allowing more than one mismatch character at the end of each phrase. An adaptive pointer is represented as the difference from the previous non-adaptive pointer. Henceforth we say a phrase is \emph{adaptive} if its pointer is adaptive, and \emph{explicit} otherwise. In this section we first describe our parsing strategy and then describe how we can support fast random access. \subsection{Parsing} \label{subsec:parsing} The parsing strategy is a generalization of the Greedy approach for adaptive phrases. The parser first computes the \emph{matching statistics} between input $S$ and reference $R$: for each suffix $S[i;n]$ of $S$, a suffix $R[k;m]$ of $R$ with the longest \LCP{} with $S[i;n]$ is found.
Let \MatchRelPtr{i} be the relative pointer $k - i$ and \MatchLen{i} be the length of the \LCP{} between the two suffixes $S[i;n]$ and $R[k;m]$. Parsing scans $S$ from left to right, in one pass. Let us assume $S$ has already been parsed up to a position $i$, and let us assume the most recent explicit phrase starts at position $h$. The parser first tries to find an adaptive phrase (\emph{adaptive step}); if it fails, it looks for an explicit phrase (\emph{explicit step}). Specifically: \begin{enumerate} \item \emph{adaptive step}: the parser checks, for the current position $i$, whether \begin{inparaenum}[(i)]\item the relative pointer $\MatchRelPtr{i}$ can be represented as an adaptive pointer, that is, if the differential $\MatchRelPtr{i} - \MatchRelPtr{h}$ can be represented as a signed binary integer of at most $\DeltaBits$ bits, and \item if it is convenient to start a new adaptive phrase instead of representing literals as they are, that is, whether $\MatchLen{i} \cdot \log \sigma > \DeltaBits{}$\end{inparaenum}. The parser outputs the adaptive phrase and advances $\MatchLen{i}$ positions if both conditions are satisfied; otherwise, it looks for the leftmost position $k$ in range $i + 1$ up to $i + \LookAhead$ where both conditions are satisfied. If it finds such a position $k$, the parser outputs literals $S[i;k-1]$ and an adaptive phrase; otherwise, it goes to step~\ref{en:explicit-step}. \item\label{en:explicit-step} \emph{explicit step}: in this step the parser goes back to position $i$ and scans forward until it has found a position $k \geq i$ where at least one of these two conditions is satisfied:\begin{inparaenum}[(i)]\item match length \MatchLen{k} is greater than a parameter $\MinExplicitLength{}$; \item the match is followed by an adaptive phrase\end{inparaenum}. It then outputs a literal range $S[i;k-1]$ and the explicit phrase found.
\end{enumerate} The purpose of the two conditions on the explicit phrase is to avoid having spurious explicit phrases which are not associated with meaningfully aligned substrings. It is important to notice that our data structure logically represents an adaptive/explicit phrase followed by a literal run as a single phrase: for example, an adaptive phrase of length $5$ followed by a literal sequence $\mathsf{GAT}$ is represented as an adaptive phrase of length $8$ with the last $3$ symbols represented as literals. \subsection{Representation} \label{subsec:representation} In order to support fast random access to $S$, we deploy several data structures, which can be grouped into two sets with different purposes: \begin{enumerate} \item \textbf{Storing the parsing}: a set of data structures mapping any position $i$ to some useful information about the phrase $P_i$ containing $S[i]$, that is:\begin{inparaenum}[(i)]\item the position $\Start{i}$ of the first symbol in $P_i$; \item $P_i$'s length $\Length{i}$; \item its relative pointer $\Rel{i}$; \item the number of phrases $\PrevPhr{i}$ preceding $P_i$ in the parsing, and \item the number of explicit phrases $\AbsPhr{i} \leq \PrevPhr{i}$ preceding $P_i$.\end{inparaenum} \item \textbf{Storing the literals}: a set of data structures which, given a position $i$ and the information about phrase $P_i$, tells whether $S[i]$ is a literal in the parsing and, if this is the case, returns $S[i]$. \end{enumerate} Here we provide a detailed illustration of these data structures. \paragraph{Storing the parsing.} The parsing is represented by storing two bitvectors. The first bitvector $\PhraseBV{}$ has $|S|$ entries, marking with a $1$ characters in $S$ at the beginning of a new phrase in the parsing. The second bitvector $\ExplicitBV{}$ has $m$ entries, one for every phrase in the parsing, and marks every explicit phrase in the parsing with a $1$, otherwise $0$.
A rank/select data structure is built on top of $\PhraseBV{}$, and a rank data structure on top of $\ExplicitBV{}$. In this way, given $i$ we can efficiently compute the phrase index $\PrevPhr{i}$ as $\PhraseBV{}.rank{}(i)$, the explicit phrase index $\AbsPhr{i}$ as $\ExplicitBV{}.rank(\PrevPhr{i})$ and the phrase beginning $\Start{i}$ as $\PhraseBV{}.\select{}(\PrevPhr{i})$. Experimentally, bitvector \PhraseBV{} is sparse, while \ExplicitBV{} is usually dense. Bitvector \PhraseBV{} can be represented with any efficient implementation for sparse bitvectors; our implementation, detailed in Section~\ref{sec:experiments}, employs the Elias-Fano based \textsf{SDarrays} data structure of Okanohara and Sadakane \cite{OS07}, which requires $m \log \frac{|S|}{m} + O(m)$ bits and supports rank in $O(\log \frac{|S|}{m})$ time and select in constant time. Bitvector \ExplicitBV{} is represented plainly, taking $m$ bits, with any $o(m)$-space $O(1)$-time rank implementation on top of it (\cite{OS07,RRR07}). In particular, it is interesting to notice that only one $\rank{}$ query is needed for extracting an unbounded number of consecutive symbols from $S$, since each starting position of consecutive phrases can be accessed with a single $\select{}$ query, which has very efficient implementations on sparse bitvectors. Both explicit and relative pointers are stored using minimal binary codes in tables $A$ and $R$, respectively. These integers are not compressed using statistical encoding because this would prevent efficient random access to the sequence. Each explicit and relative pointer thus takes $\lceil \log n \rceil$ and $\lceil \log{}(\LookAhead{}) \rceil + 1$ bits of space, respectively. To compute $\Rel{i}$, we first check if the phrase is explicit by checking if $\ExplicitBV{}[\PrevPhr{i}]$ is set to one; if it is, then $\Rel{i} = A[\AbsPhr{i}]$, otherwise it is $\Rel{i} = A[\AbsPhr{i}] + R[\PrevPhr{i} - \AbsPhr{i}]$. \paragraph{Storing literals.} Literals are extracted as follows.
Let us assume we are interested in accessing $S[i]$, which is contained in phrase $P_j$. First, it is determined whether $S[i]$ is a literal or not. Since literals in a phrase are grouped at the end of the phrase itself, it is sufficient to store, for every phrase $P_k$ in the parsing, the number of literals $\Literals{k}$ at its end. Thus, knowing the starting position $\Start{j}$ and length $\Length{j}$ of phrase $P_j$, symbol $S[i]$ is a literal if and only if $i > \Start{j} + \Length{j} - \Literals{j}$. All literals are stored in a table $L$, where $L[k]$ is the $k$-th literal found by scanning the parsing from left to right. How we represent $L$ depends on the kind of data we are dealing with. In our experiments, described in Section~\ref{sec:experiments}, we consider differentially-encoded LCP arrays and DNA. For \textsf{DLCP} values, $L$ simply stores all values using minimal binary codes. For \textsf{DNA} values, a more refined implementation (which we describe in a later paragraph) is needed to use less than $3$ bits on average for each symbol. So, in order to retrieve the literal $S[i]$, we need a way to compute its index in $L$, which is equal to $i - (\Start{j} + \Length{j} - \Literals{j})$ plus the prefix sum $\sum_{k = 1}^{j-1} \Literals{k}$. In the following paragraphs we detail two solutions for efficiently storing \Literals{k} values and computing prefix sums. \paragraph{Storing literal counts.} Here we detail a simple and fast data structure for storing \Literals{-} values and for computing prefix sums on them. The basic idea is to store \Literals{-} values explicitly, and accelerate prefix sums by storing the prefix sum of some regularly sampled positions. To provide fast random access, the maximum number of literals in a phrase is limited to $2^\MaxLit{} - 1$, where $\MaxLit{}$ is a parameter chosen at construction time. Every value $\Literals{-}$ is thus collected in a table $L$, stored using $\MaxLit{}$ bits each.
Since each phrase cannot have more than $2^\MaxLit{} - 1$ literals, we split each run of more than $2^\MaxLit{} - 1$ literals into the minimal number of phrases which do meet the limit. In order to speed up the prefix sum computation on $L$, we sample one every \SampleInterval{} positions and store prefix sums of sampled positions into a table \Prefix{}. To accelerate prefix sum computation further, we employ a $256$-entry table \StaticTable{} which maps any sequence of $8 / \MaxLit{}$ elements into their sum. Here, we constrain $\MaxLit{}$ to be a power of two not greater than $8$ (that is, either $1$, $2$, $4$ or $8$) and \SampleInterval{} to be a multiple of $8 / \MaxLit{}$. In this way we can compute the prefix sum with just one look-up into \Prefix{} and at most $\frac{\SampleInterval{}}{8 / \MaxLit{}}$ queries into \StaticTable{}. Using \StaticTable{} is faster than summing elements in $L$ because it replaces costly bitshift operations with efficient byte-accesses to $L$. This is because $8 / \MaxLit{}$ elements of $L$ fit into one byte; moreover, those bytes are aligned to byte-boundaries because $\SampleInterval{}$ is a multiple of $8 / \MaxLit{}$, which in turn implies that the sampling interval spans entire bytes of $L$. \paragraph{Storing DNA literals.} Every literal is collected into a table $J$, where each element is represented using a fixed number of bits. For the DNA sequences we consider in our experiments, this would imply using $3$ bits, since the alphabet is $\{A, C, G, T, N\}$. However, since symbols $N$ occur less often than the others, it is more convenient to handle those as exceptions, so other literals can be stored in just $2$ bits. In particular, every $N$ in table $J$ is stored as one of the other four symbols in the alphabet (say, $A$) and a bit-vector \NExc{} marks every position in $J$ which corresponds to an $N$. Experimentally, bitvector \NExc{} is sparse and the $1$s are usually clustered together into a few regions.
In order to reduce the space needed to store \NExc{}, we designed a simple bit-vector implementation to exploit this fact. In our design, \NExc{} is divided into equal-sized chunks of length $C$. A bitvector $\mathsf{Chunk}$ marks those chunks which contain at least one bit set to $1$. Marked chunks of \NExc{} are collected into a vector $V$. Because of the clustering property we just mentioned, most of the chunks are not marked, but marked chunks are locally dense. Because of this, bitvector $\mathsf{Chunk}$ is implemented using a sparse representation, while each chunk employs a dense representation. Good experimental values for $C$ are around $16$--$32$ bits, so each chunk is represented with a fixed-width integer. In order to check whether a position $i$ is marked in \NExc{}, we first check if chunk $c = \lfloor i / C \rfloor$ is marked in $\mathsf{Chunk}$; if it is not, position $i$ is certainly not marked. If it is marked, we compute $\mathsf{Chunk}.rank(c)$ to get the index of the marked chunk in $V$, and then test bit $i \bmod C$ of that chunk. \section{Experiments} \label{sec:experiments} We implemented \rlzap{} in C++11 with bitvectors from Gog et al.'s \textsf{sdsl} library \mbox{(\url{https://github.com/simongog/sdsl-lite})}, and compiled it with \verb=gcc= version \verb=4.8.4= with flags \verb|-O3|, \verb|-march=native|, \verb|-ffast-math|, \verb|-funroll-loops| and \verb|-DNDEBUG|. We performed our experiments on a computer with a $6$-core Intel Xeon X5670 clocked at 2.93GHz, $40$GiB of DDR3 RAM clocked at 1333MHz and running Ubuntu 14.04.
As noted in Section~\ref{sec:introduction}, our code is available at \mbox{\url{http://github.com/farruggia/rlzap}\ .} We performed our experiments on the following four datasets: \begin{itemize} \item \textsf{Cere}: the genomes of $39$ strains of the \emph{Saccharomyces cerevisiae} yeast; \item \textsf{E.\@ Coli}: the genomes of $33$ strains of the \emph{Escherichia coli} bacteria; \item \textsf{Para}: the genomes of $36$ strains of the \emph{Saccharomyces paradoxus} yeast; \item \textsf{DLCP}: differentially-encoded LCP arrays for three human genomes, with 32-bit entries. \end{itemize} These files are available from \url{http://acube.di.unipi.it/rlzap-dataset}. For each dataset we chose the file (i.e., the single genome or DLCP array) with the lexicographically largest name to be the \emph{reference}, and made the concatenation of the other files the \emph{target}. We then compressed the target against the reference with Ferrada et al.'s optimization of \rlz{} --- which reflects the current state of the art, as explained in Section~\ref{sec:introduction} --- and with \rlzap{}. For the DNA files (i.e., \textsf{Cere}, \textsf{E.\@ Coli} and \textsf{Para}) we used $\LookAhead{} = 32$, $\MinExplicitLength{} = 32$ and $\DeltaBits{} = 2$, while for \textsf{DLCP} we used $\LookAhead{} = 8$, $\MinExplicitLength{} = 4$ and $\DeltaBits{} = 4$. We chose these parameters during a calibration step performed on a different dataset, which we will describe in the full version of this paper. Table~\ref{tab:compression} shows the compression achieved by \rlz{} and \rlzap{}. (We note that, since the DNA datasets are each over an alphabet of \(\{\mathsf{A}, \mathsf{C}, \mathsf{G}, \mathsf{T}, \mathsf{N}\}\) and {\sf N}s are rare, the targets for those datasets can be compressed to about a quarter of their size even with only, e.g., Huffman coding.)
Notice that \rlzap{} consistently achieves better compression than \rlz{}, with its space usage ranging from about 17\% less for \textsf{Cere} to about 32\% less for \textsf{DLCP}. \begin{table}[t] \caption{Compression achieved by \rlz{} and \rlzap{}. For each dataset we report in MiB ($2^{20}$ bytes) the size of the reference and the size of the target uncompressed and compressed with each method.} \label{tab:compression} \begin{center} \begin{tabular}{l @{\hspace{3ex}} r @{\hspace{3ex}} r @{\hspace{3ex}} r @{\hspace{8ex}} r @{\hspace{3ex}} r @{\hspace{6ex}}} \toprule Dataset & \multicolumn{1}{l}{Reference} & \multicolumn{1}{l}{Target} & \multicolumn{3}{l}{Compressed Target Size (MiB)} \\ & size (MiB) & size (MiB) & \rlz{} & \rlzap{} \\ \midrule \textsf{Cere} & 12.0 & 451 & 9.16 & 7.61 \\ \textsf{E. \@Coli} & 4.8 & 152 & 30.47 & 21.51 \\ \textsf{Para} & 11.3 & 398 & 15.57 & 10.49 \\ \textsf{DLCP} & 11,582 & 23,392 & 1,745.33 & 1,173.81 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[t] \caption{Extraction times per character from \rlz{}- and \rlzap{}-compressed targets. For each file in each target, we compute the mean extraction time for $2^{24} / \ell$ pseudo-randomly chosen substrings of each length $\ell$; we then take the mean of these means.} \label{tab:decompression_time} \begin{center} \begin{tabular}{C{5em} C{6em} R{3em} R{3em} R{3em} R{3em} R{3em} R{3em}} \toprule \multirow{2}*{Dataset} & \multirow{2}*{Algorithm} & \multicolumn{6}{c}{Mean extraction time per character (ns)}\\ & & 1 & 4 & 16 & 64 & 256 & 1024 \\ \midrule \multirow{2}*{\textsf{Cere}} & \rlz{} & 234 & 59 & 16.4 & 4.4 & 1.47 & 0.55 \\ & \rlzap{} & 274 & 70 & 19.5 & 5.7 & 2.34 & 1.26 \\ \midrule \multirow{2}*{\textsf{E.
\@Coli}} & \rlz{} & 225 & 62 & 20.1 & 7.7 & 4.34 & 3.34 \\ & \rlzap{} & 322 & 91 & 31.3 & 15.3 & 10.78 & 9.47 \\ \midrule \multirow{2}*{\textsf{Para}} & \rlz{} & 235 & 59 & 17.2 & 5.2 & 2.23 & 1.03 \\ & \rlzap{} & 284 & 74 & 21.2 & 6.9 & 3.09 & 2.26 \\ \midrule \multirow{2}*{\textsf{DLCP}} & \rlz{} & 756 & 238 & 61.5 & 20.5 & 9.00 & 6.00 \\ & \rlzap{} & 826 & 212 & 57.5 & 19.0 & 8.00 & 4.50 \\ \bottomrule \end{tabular} \end{center} \end{table} Table~\ref{tab:decompression_time} shows extraction times for \rlz{}- and \rlzap{}-compressed targets. \rlzap{} is noticeably slower than \rlz{} for DNA, while it is slightly faster for the DLCP dataset when at least four characters are extracted. We believe \rlzap{} outperforms \rlz{} on the DLCP dataset because its parsing is generally more cache-friendly: our measurements indicate that on this dataset \rlzap{} causes about 36\% fewer L2 and L3 cache misses than \rlz{}. Even for DNA, \rlzap{} is still fast in absolute terms, taking just tens of nanoseconds per character when extracting at least four characters. On DNA files, \rlzap{} achieves better compression at the cost of slightly longer extraction times. On differentially-encoded LCP arrays, \rlzap{} outperforms \rlz{} in all regards, except for a slight slowdown when extracting substrings of length less than 4. In short, \rlzap{} is competitive with the state of the art even for compressing DNA and, as we hoped, advances it for relative data structures. Our next step will be to integrate it into the implementation of relative suffix trees mentioned in Subsection~\ref{subsec:structures}. \section{Future Work} \label{sec:future} In the near future we plan to perform more experiments to tune \rlzap{} and discover its limitations. For example, we will test it on the balanced-parentheses representations of suffix trees' shapes, which are an alternative to LCP arrays, and on the BWTs in relative FM-indexes.
We also plan to investigate how to minimize the bit-complexity of our parsing --- i.e., how to choose the phrases and sources so as to minimize the number of bits in our representation --- building on the results by Farruggia, Ferragina and Venturini~\cite{FFV14a,FFV14b} about minimizing the bit-complexity of LZ77. \rlzap{} can be viewed as a bounded-lookahead greedy heuristic for computing a glocal alignment~\cite{BMPDCDB03} of $S$ against $R$. Such an alignment allows for genetic recombination events, in which potentially large sections of DNA are rearranged. We note that standard heuristics for speeding up edit-distance computation and global alignment do not work here, because even a low-cost path through the dynamic programming matrix can occasionally jump arbitrarily far from the diagonal. \rlzap{} runs in linear time, which is attractive, but it may produce a suboptimal alignment --- i.e., it is not an admissible heuristic. In the longer term, we are interested in finding practical admissible heuristics. For example, if a long enough substring of $S$ aligns well enough against a particular substring of $R$ and badly enough against any other substring or small collection of substrings of $R$ (which we can check with LCP queries), then any optimal alignment of $S$ against $R$ should include most of that subalignment. This observation should help us find an optimal alignment when the \rlz{} parse of $S$ with respect to $R$ is small but, e.g., there are few or no long approximate repetitions within $R$, so the LZ77 parse of $R$ is fairly large. Apart from their direct biological interest, optimal or nearly optimal glocal alignments can also help us design more data structures. For example, consider the problem of representing the mapping between orthologous genes in several species' genomes; see, e.g.,~\cite{Kub14}.
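As a reference point for this discussion, the greedy parse underlying \rlz{} can be sketched in a few lines. This is a naive quadratic-time model (our implementation uses suffix-array machinery, and \rlzap{} additionally tracks relative pointers and literal runs); it is intended only to show the phrase/source structure that an alignment-based parser would have to improve on:

```python
def rlz_parse(ref, tgt):
    """Greedy RLZ parse: phrases are (pos, len) copies from ref, or ('lit', c)."""
    phrases, i = [], 0
    while i < len(tgt):
        best_pos, best_len = 0, 0
        for j in range(len(ref)):  # naive longest-match search
            l = 0
            while j + l < len(ref) and i + l < len(tgt) and ref[j + l] == tgt[i + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:          # symbol absent from ref: emit a literal
            phrases.append(('lit', tgt[i]))
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def rlz_decode(ref, phrases):
    out = []
    for p in phrases:
        if p[0] == 'lit':
            out.append(p[1])
        else:
            pos, length = p
            out.append(ref[pos:pos + length])
    return ''.join(out)

ref = "ACGTACGTTTACGA"
tgt = "ACGTTTACGAXACGT"
phrases = rlz_parse(ref, tgt)
```

Each greedy phrase choice here is locally optimal but, exactly as discussed above, the resulting sequence of (source, length) pairs need not minimize the total encoding cost.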
Given two genomes' indices and the position of a base-pair in one of those genomes, we would like to quickly return the positions of all corresponding base-pairs in the other genome. Only a few base-pairs in one genome correspond to two or more base-pairs in the other and, ignoring those, this problem reduces to representing compressed permutations. A feature of these permutations is that base-pairs tend to be mapped in blocks, possibly with some slight reordering within each block. We can extract this block structure by computing a glocal alignment, either between the genomes or between the permutation and its inverse. \enlargethispage{10ex} \end{document}
\begin{document} \title{Sample file for SIAM \LaTeX\ macro package\thanks{This work was supported by the Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania.}} \author{P.~Duggan \and V.~A.~U. Thors} \maketitle \begin{abstract} An example of SIAM \LaTeX\ macros is presented. Various aspects of composing manuscripts for SIAM's journal series are illustrated with actual examples from accepted manuscripts. SIAM's stylistic standards are adhered to throughout, and illustrated. \end{abstract} \begin{keywords} sign-nonsingular matrix, LU-factorization, indicator polynomial \end{keywords} \begin{AMS} 15A15, 15A09, 15A23 \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{P. DUGGAN AND V. A. U. THORS}{SIAM MACRO EXAMPLES} \section{Introduction and examples} This paper presents a sample file for the use of SIAM's \LaTeX\ macro package. It illustrates the features of the macro package, using actual examples culled from various papers published in SIAM's journals. It is to be expected that this sample will provide examples of how to use the macros to generate standard elements of journal papers, e.g., theorems, definitions, or figures. This paper also serves as an example of SIAM's stylistic preferences for the formatting of such elements as bibliographic references, displayed equations, and equation arrays, among others. Some special circumstances are not dealt with in this sample file; for such information one should see the included documentation file. {\em Note:} This paper is not to be read in any form for content. The conglomeration of equations, lemmas, and other text elements was put together solely for typographic illustrative purposes and does not make any sense as lemmas, equations, etc. \subsection{Sample text} Let $S=[s_{ij}]$ ($1\leq i,j\leq n$) be a $(0,1,-1)$-matrix of order $n$. Then $S$ is a {\em sign-nonsingular matrix} (SNS-matrix) provided that each real matrix with the same sign pattern as $S$ is nonsingular.
There has been considerable recent interest in constructing and characterizing SNS-matrices \cite{bs}, \cite{klm}. There has also been interest in strong forms of sign-nonsingularity \cite{djd}. In this paper we give a new generalization of SNS-matrices and investigate some of their basic properties. Let $S=[s_{ij}]$ be a $(0,1,-1)$-matrix of order $n$ and let $C=[c_{ij}]$ be a real matrix of order $n$. The pair $(S,C)$ is called a {\em matrix pair of order} $n$. Throughout, $X=[x_{ij}]$ denotes a matrix of order $n$ whose entries are algebraically independent indeterminates over the real field. Let $S\circ X$ denote the Hadamard product (entrywise product) of $S$ and $X$. We say that the pair $(S,C)$ is a {\em sign-nonsingular matrix pair of order} $n$, abbreviated SNS-{\em matrix pair of order} $n$, provided that the matrix \[A=S\circ X+C\] is nonsingular for all positive real values of the $x_{ij}$. If $C=O$ then the pair $(S,O)$ is an SNS-matrix pair if and only if $S$ is an SNS-matrix. If $S=O$ then the pair $(O,C)$ is an SNS-matrix pair if and only if $C$ is nonsingular. Thus SNS-matrix pairs include both nonsingular matrices and sign-nonsingular matrices as special cases. The pairs $(S,C)$ with \[S=\left[\begin{array}{cc}1&0\\0&0\end{array}\right],\qquad C=\left[\begin{array}{cc}1&1\\1&1\end{array}\right]\] and \[S=\left[\begin{array}{ccc}1&1&0\\1&1&0\\0&0&0\end{array}\right],\qquad C=\left[\begin{array}{ccc}0&0&1\\0&2&0\\ 3&0&0\end{array}\right]\] are examples of SNS-matrix pairs.
\subsection{A remuneration list} In this paper we consider the evaluation of integrals of the following forms: \begin{equation} \int_a^b \left( \sum_i E_i B_{i,k,x}(t) \right) \left( \sum_j F_j B_{j,l,y}(t) \right) dt,\label{problem} \end{equation} \begin{equation} \int_a^b f(t) \left( \sum_i E_i B_{i,k,x}(t) \right) dt,\label{problem2} \end{equation} where $B_{i,k,x}$ is the $i$th B-spline of order $k$ defined over the knots $x_i, x_{i+1}, \ldots, x_{i+k}$. We will consider B-splines normalized so that their integral is one. The splines may be of different orders and defined on different knot sequences $x$ and $y$. Often the limits of integration will be the entire real line, $-\infty$ to $+\infty$. Note that (\ref{problem}) is a special case of (\ref{problem2}) where $f(t)$ is a spline. There are five different methods for calculating (\ref{problem}) that will be considered: \begin{remunerate} \item Use Gauss quadrature on each interval. \item Convert the integral to a linear combination of integrals of products of B-splines and provide a recurrence for integrating the product of a pair of B-splines. \item Convert the sums of B-splines to piecewise B\'{e}zier format and integrate segment by segment using the properties of the Bernstein polynomials. \item Express the product of a pair of B-splines as a linear combination of B-splines. Use this to reformulate the integrand as a linear combination of B-splines, and integrate term by term. \item Integrate by parts. \end{remunerate} Of these five, only methods 1 and 5 are suitable for calculating (\ref{problem2}). The first four methods will be touched on and the last will be discussed at length.
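Method 1 can be made concrete: an $n$-point Gauss--Legendre rule is exact for polynomials of degree up to $2n-1$, so applying it on each knot interval integrates the product of two splines exactly once $n$ is large enough. A small NumPy sketch, with the integrand represented (for illustration only) as a plain polynomial on each interval rather than in B-spline form:

```python
import numpy as np

def gauss_on_interval(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    mid, half = (a + b) / 2, (b - a) / 2
    return half * np.sum(w * f(mid + half * x))

# Product of a quadratic and a cubic piece: degree 5, so a 3-point rule
# (exact through degree 5) suffices on each interval of the knot sequence.
knots = [0.0, 0.5, 1.0]
f = lambda t: (t ** 2) * (t ** 3)
total = sum(gauss_on_interval(f, a, b, 3) for a, b in zip(knots, knots[1:]))
# Exact value: the integral of t^5 over [0, 1] is 1/6.
```

Splitting at the knots matters because the product of two splines is only piecewise polynomial; a single rule across a knot would lose the exactness guarantee.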
\subsection{Some displayed equations and \{{\tt eqnarray}\}s} By introducing the product topology on $R^{m \times m} \times R^{n \times n}$ with the induced inner product \begin{equation} \langle (A_{1},B_{1}), (A_{2},B_{2})\rangle := \langle A_{1},A_{2}\rangle + \langle B_{1},B_{2}\rangle,\label{eq2.10} \end{equation} we calculate the Fr\'{e}chet derivative of $F$ as follows: \begin{eqnarray} F'(U,V)(H,K) &=& \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T} - P(H\Sigma V^{T} + U\Sigma K^{T})\rangle \nonumber \\ &=& \langle R(U,V),H\Sigma V^{T} + U\Sigma K^{T}\rangle \label{eq2.11} \\ &=& \langle R(U,V)V\Sigma^{T},H\rangle + \langle \Sigma^{T}U^{T}R(U,V),K^{T}\rangle. \nonumber \end{eqnarray} In the middle line of (\ref{eq2.11}) we have used the fact that the range of $R$ is always perpendicular to the range of $P$. The gradient $\nabla F$ of $F$, therefore, may be interpreted as the pair of matrices: \begin{equation} \nabla F(U,V) = (R(U,V)V\Sigma^{T},R(U,V)^{T}U\Sigma ) \in R^{m \times m} \times R^{n \times n}. \label{eq2.12} \end{equation} Because of the product topology, we know \begin{equation} {\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n)) = {\cal T}_{U}{\cal O} (m) \times {\cal T}_{V}{\cal O} (n), \label{eq2.13} \end{equation} where ${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$ stands for the tangent space to the manifold ${\cal O} (m) \times {\cal O} (n)$ at $(U,V) \in {\cal O} (m) \times {\cal O} (n)$ and so on. The projection of $\nabla F(U,V)$ onto ${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$, therefore, is the product of the projection of the first component of $\nabla F(U,V)$ onto ${\cal T}_{U}{\cal O} (m)$ and the projection of the second component of $\nabla F(U,V)$ onto ${\cal T}_{V}{\cal O} (n)$.
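The componentwise projection just described can be checked numerically. At a point $Q$ of ${\cal O}(k)$ the tangent space is $\{WQ : W^{T}=-W\}$, and the Frobenius-orthogonal projection of a matrix $G$ onto it keeps the skew-symmetric part of $GQ^{T}$. A NumPy sketch (with random stand-ins for the gradient components; this illustrates the projection itself, not the objective $F$):

```python
import numpy as np

def proj_tangent(G, Q):
    """Project G orthogonally (Frobenius inner product) onto the tangent
       space of the orthogonal group at Q, i.e. onto {W @ Q : W skew}."""
    W = (G @ Q.T - Q @ G.T) / 2  # skew-symmetric part of G Q^T
    return W @ Q

rng = np.random.default_rng(0)
m, n = 5, 3
U, _ = np.linalg.qr(rng.standard_normal((m, m)))  # a point of O(m)
V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a point of O(n)
G1 = rng.standard_normal((m, m))                  # stand-in gradient components
G2 = rng.standard_normal((n, n))
# Under the product topology the projection acts componentwise:
g = (proj_tangent(G1, U), proj_tangent(G2, V))
```

The projection is idempotent and its output times $Q^{T}$ is skew, which is exactly the membership test for the tangent space.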
In particular, we claim that the projection $ g(U,V)$ of the gradient $\nabla F(U,V)$ onto ${\cal T}_{(U,V)}({\cal O} (m) \times {\cal O} (n))$ is given by the pair of matrices: \begin{eqnarray} g(U,V) = && \left( \frac{R(U,V)V\Sigma^{T}U^{T}-U\Sigma V^{T}R(U,V)^{T}}{2}U, \right. \nonumber \\[-1.5ex] \label{eq2.14}\\[-1.5ex] &&\quad \left. \frac{R(U,V)^{T}U\Sigma V^{T}-V \Sigma^{T}U^{T}R(U,V)}{2}V \right).\nonumber \end{eqnarray} Thus, the vector field \begin{equation} \frac{d(U,V)}{dt} = -g(U,V) \label{eq2.15} \end{equation} defines a steepest descent flow on the manifold ${\cal O} (m) \times {\cal O} (n)$ for the objective function $F(U,V)$. \section{Main results} Let $(S,C)$ be a matrix pair of order $n$. The determinant \[\det (S\circ X+C)\] is a polynomial in the indeterminates of $X$ of degree at most $n$ over the real field. We call this polynomial the {\em indicator polynomial} of the matrix pair $(S,C)$ because of the following proposition. \begin{theorem} \label{th:prop} The matrix pair $(S,C)$ is a {\rm SNS}-matrix pair if and only if all the nonzero coefficients in its indicator polynomial have the same sign and there is at least one nonzero coefficient. \end{theorem} \begin{proof} Assume that $(S,C)$ is a SNS-matrix pair. Clearly the indicator polynomial has a nonzero coefficient. Consider a monomial \begin{equation} \label{eq:mono} b_{i_{1},\ldots,i_{k};j_{1},\ldots,j_{k}}x_{i_{1}j_{1}}\cdots x_{i_{k}j_{k}} \end{equation} occurring in the indicator polynomial with a nonzero coefficient. By taking the $x_{ij}$ that occur in (\ref{eq:mono}) large and all others small, we see that any monomial that occurs in the indicator polynomial with a nonzero coefficient can be made to dominate all others. Hence all the nonzero coefficients have the same sign. The converse is immediate.
\qquad\end{proof} For SNS-matrix pairs $(S,C)$ with $C=O$ the indicator polynomial is a homogeneous polynomial of degree $n$. In this case Theorem \ref{th:prop} is a standard fact about SNS-matrices. \begin{lemma}[{\rm Stability}] \label{stability} Given $T>0$, suppose that $\| \epsilon (t) \|_{1,2} \leq h^{q-2}$ for $0 \leq t \leq T$ and $q \geq 6$. Then there exists a positive number $B$ that depends on $T$ and the exact solution $\psi$ only such that for all $0 \leq t \leq T$, \begin{equation} \label{Gron} \frac {d}{dt} \| \epsilon (t) \| _{1,2} \leq B ( h^{q-3/2} + \| \epsilon (t) \|_{1,2})\;. \end{equation} The function $B(T)$ can be chosen to be nondecreasing in time. \end{lemma} \begin{theorem} \label{th:gibson} The maximum number of nonzero entries in a {\rm SNS}-matrix $S$ of order $n$ equals \[\frac{n^{2}+3n-2}{2}\] with equality if and only if there exist permutation matrices such that $P|S|Q=T_{n}$ where \begin{equation} \label{eq:gibson} T_{n}=\left[\begin{array}{cccccc} 1&1&\cdots&1&1&1\\ 1&1&\cdots&1&1&1\\ 0&1&\cdots&1&1&1\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&\cdots&1&1&1\\ 0&0&\cdots&0&1&1\end{array}\right]. \end{equation} \end{theorem} We note for later use that each submatrix of $T_{n}$ of order $n-1$ has all 1s on its main diagonal. We now obtain a bound on the number of nonzero entries of $S$ in a SNS-matrix pair $(S,C)$ in terms of the degree of the indicator polynomial. We denote the strictly upper triangular (0,1)-matrix of order $m$ with all 1s above the main diagonal by $U_{m}$. The all 1s matrix of size $m$ by $p$ is denoted by $J_{m,p}$. \begin{proposition}[{\rm Convolution theorem}] \label{pro:2.1} Let \begin{eqnarray*} a\ast u(t) = \int_0^t a(t- \tau) u(\tau) d\tau, \hspace{.2in} t \in (0, \infty). \end{eqnarray*} Then \begin{eqnarray*} \widehat{a\ast u}(s) = \widehat{a}(s)\widehat{u}(s).
\end{eqnarray*} \end{proposition} \begin{lemma} \label{lem:3.1} For $s_0 >0$, if $$ \int_0^{\infty} e^{-2s_0 t}v^{(1)}(t) v(t) dt \; \leq 0 \;, $$ then \begin{eqnarray*} \int_0^{\infty} e^{-2s_0 t} v^2(t) dt \; \leq \; \frac{1}{2s_0} v^2(0). \end{eqnarray*} \end{lemma} {\em Proof}. Applying integration by parts, we obtain \begin{eqnarray*} \int_0^{\infty} e^{-2s_0 t} [v^2(t)-v^2(0)] dt &=&\lim_{t\rightarrow \infty}\left ( -\frac{1}{2s_0}e^{-2s_0 t}v^2(t) \right ) +\frac{1}{s_0} \int_0^{\infty} e^{-2s_0 t}v^{(1)}(t)v(t)dt\\ &\leq& \frac{1}{s_0} \int_0^{\infty} e^{-2s_0 t} v^{(1)}(t)v(t) dt \;\; \leq \;\; 0. \end{eqnarray*} Thus $$ \int_0^{\infty} e^{-2s_0 t} v^2(t) dt \;\;\leq v^2(0) \int_0^{\infty} \;\;e^{-2s_0 t} dt\;\;=\;\;\frac{1}{2s_0} v^2(0).\eqno\endproof $$ \begin{corollary}\label{c4.1} Let $ \mbox{\boldmath$E$} $ satisfy $(5)$--$(6)$ and suppose $ \mbox{\boldmath$E$}^h $ satisfies $(7)$ and $(8)$ with a general $ \mbox{\boldmath$G$} $. Let $ \mbox{\boldmath$G$}= \nabla \times {\bf \Phi} + \nabla p,$ $p \in H_0^1 (\Omega) $. Suppose that $\nabla p$ and $ \nabla \times {\bf \Phi} $ satisfy all the assumptions of Theorems $4.1$ and $4.2$, respectively. In addition suppose all the regularity assumptions of Theorems $4.1$--$4.2$ are satisfied. Then for $ 0 \le t \le T $ and $ 0 < \epsilon \le \epsilon_0 $ there exists a constant $ C = C(\epsilon, T) $ such that $$ \Vert (\mbox{\boldmath$E$} - \mbox{\boldmath$E$}^h)(t) \Vert_0 \le C h^{k+1- \epsilon}, $$ where $ C $ also depends on the constants given in Theorems $4.1$ and $4.2$. \end{corollary} \begin{definition} Let $S$ be an isolated invariant set with isolating neighborhood $N$. An {\em index pair} for $S$ is a pair of compact sets $(N_{1},N_{0})$ with $N_{0} \subset N_{1} \subset N$ such that: \begin{romannum} \item $cl(N_{1} \backslash N_{0})$ is an isolating neighborhood for $S$.
\item $N_{i}$ is positively invariant relative to $N$ for $i=0,1$, i.e., given $x \in N_{i}$ and $x \cdot [0,t] \subset N$, then $x \cdot [0,t] \subset N_{i}$. \item $N_{0}$ is an exit set for $N_{1}$, i.e. if $x \in N_{1}$, $x \cdot [0, \infty ) \not\subset N_{1}$, then there is a $T \geq 0$ such that $x \cdot [0,T] \subset N_{1}$ and $x \cdot T \in N_{0}$. \end{romannum} \end{definition} \subsection{Numerical experiments} We conducted numerical experiments in computing inexact Newton steps for discretizations of a {\em modified Bratu problem}, given by \begin{eqnarray} {\displaystyle \Delta w + c e^w + d\,\frac{\partial w}{\partial x} } &=&{\displaystyle f \quad {\rm in}\ D, }\nonumber\\[-1.5ex] \label{bratu} \\[-1.5ex] {\displaystyle w }&=&{\displaystyle 0 \quad {\rm on}\ \partial D , } \nonumber \end{eqnarray} where $c$ and $d$ are constants. The actual Bratu problem has $d=0$ and $f \equiv 0$. It provides a simplified model of nonlinear diffusion phenomena, e.g., in combustion and semiconductors, and has been considered by Glowinski, Keller, and Rheinhardt \cite{GloKR85}, as well as by a number of other investigators; see \cite{GloKR85} and the references therein. See also problem 3 by Glowinski and Keller and problem 7 by Mittelmann in the collection of nonlinear model problems assembled by Mor\'e \cite{More}. The modified problem (\ref{bratu}) has been used as a test problem for inexact Newton methods by Brown and Saad \cite{Brown-Saad1}. In our experiments, we took $D = [0,1]\times[0,1]$, $f \equiv 0$, $c=d=10$, and discretized (\ref{bratu}) using the usual second-order centered differences over a $100\times100$ mesh of equally spaced points in $D$. In {\rm GMRES}$(m)$, we took $m=10$ and used fast Poisson right preconditioning as in the experiments in \S2. The computing environment was as described in \S2. All computing was done in double precision.
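For orientation, the discretization just described is easy to reproduce at toy scale. The sketch below uses the same second-order centered differences on the unit square but a much coarser mesh, smaller constants $c=d=1$ (so that plain Newton from the zero initial guess converges safely), and a dense direct solve in place of preconditioned GMRES$(m)$; it models the setup, not the solver used in the experiments:

```python
import numpy as np

N = 10                      # interior grid points per side (toy size)
h = 1.0 / (N + 1)
c = d = 1.0                 # milder than the c = d = 10 used in the experiments
idx = lambda i, j: i * N + j

def F(w):
    """Residual of the centered-difference discretization of
       Delta w + c*e^w + d*w_x = 0 with zero Dirichlet boundary values."""
    r = np.zeros(N * N)
    for i in range(N):
        for j in range(N):
            wc = w[idx(i, j)]
            wE = w[idx(i + 1, j)] if i + 1 < N else 0.0
            wW = w[idx(i - 1, j)] if i - 1 >= 0 else 0.0
            wN = w[idx(i, j + 1)] if j + 1 < N else 0.0
            wS = w[idx(i, j - 1)] if j - 1 >= 0 else 0.0
            r[idx(i, j)] = ((wE + wW + wN + wS - 4 * wc) / h**2
                            + c * np.exp(wc) + d * (wE - wW) / (2 * h))
    return r

def J(w):
    """Dense Jacobian of F (fine at this toy size)."""
    A = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            k = idx(i, j)
            A[k, k] = -4 / h**2 + c * np.exp(w[k])
            if i + 1 < N: A[k, idx(i + 1, j)] = 1 / h**2 + d / (2 * h)
            if i - 1 >= 0: A[k, idx(i - 1, j)] = 1 / h**2 - d / (2 * h)
            if j + 1 < N: A[k, idx(i, j + 1)] = 1 / h**2
            if j - 1 >= 0: A[k, idx(i, j - 1)] = 1 / h**2
    return A

w = np.zeros(N * N)         # zero initial approximate solution
for _ in range(10):         # plain Newton iteration
    r = F(w)
    if np.linalg.norm(r) < 1e-10:
        break
    w += np.linalg.solve(J(w), -r)
```

An inexact Newton method would replace the direct solve with an iterative solver such as GMRES$(m)$, stopped once the linear residual is small relative to $\|F(w)\|$.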
\begin{figure}[ht] \caption{{\rm Log}$_{10}$ of the residual norm versus the number of {\rm GMRES$(m)$} iterations for the finite difference methods.} \label{diff} \end{figure} In the first set of experiments, we allowed each method to run for $40$ {\rm GMRES}$(m)$ iterations, starting with zero as the initial approximate solution, after which the limit of residual norm reduction had been reached. The results are shown in Fig.~\ref{diff}. In Fig.~\ref{diff}, the top curve was produced by method FD1. The second curve from the top is actually a superposition of the curves produced by methods EHA2 and FD2; the two curves are visually indistinguishable. Similarly, the third curve from the top is a superposition of the curves produced by methods EHA4 and FD4, and the fourth curve from the top, which lies barely above the bottom curve, is a superposition of the curves produced by methods EHA6 and FD6. The bottom curve was produced by method A. In the second set of experiments, our purpose was to assess the relative amount of computational work required by the methods which use higher-order differencing to reach comparable levels of residual norm reduction. We compared pairs of methods EHA2 and FD2, EHA4 and FD4, and EHA6 and FD6 by observing in each of 20 trials the number of {\rm GMRES}$(m)$ iterations, number of $F$-evaluations, and run time required by each method to reduce the residual norm by a factor of $\epsilon$, where for each pair of methods $\epsilon$ was chosen to be somewhat greater than the limiting ratio of final to initial residual norms obtainable by the methods. In these trials, the initial approximate solutions were obtained by generating random components as in the similar experiments in \S2. We note that for every method, the numbers of {\rm GMRES}$(m)$ iterations and $F$-evaluations required before termination did not vary at all over the 20 trials.
The {\rm GMRES}$(m)$ iteration counts, numbers of $F$-evaluations, and means and standard deviations of the run times are given in Table \ref{diffstats}. \begin{table} \caption{Statistics over $20$ trials of {\rm GMRES$(m)$} iteration numbers, $F$-evaluations, and run times required to reduce the residual norm by a factor of $\epsilon$. For each method, the number of {\rm GMRES$(m)$} iterations and $F$-evaluations was the same in every trial.} \begin{center} \footnotesize \begin{tabular}{|c|c|c|c|c|c|} \hline && Number of & Number of & Mean Run Time & Standard \\ Method & $\epsilon$ & Iterations & $F$-Evaluations& (Seconds) & Deviation \\ \hline \lower.3ex\hbox{EHA2} & \lower.3ex\hbox{$10^{-10}$} & \lower.3ex\hbox{26} & \lower.3ex\hbox{32} & \lower.3ex\hbox{47.12} & \lower.3ex\hbox{.1048} \\ FD2 & $10^{-10}$ & 26 & 58 & 53.79 & .1829 \\ \hline \lower.3ex\hbox{EHA4} & \lower.3ex\hbox{$10^{-12}$} & \lower.3ex\hbox{30} & \lower.3ex\hbox{42} & \lower.3ex\hbox{56.76} & \lower.3ex\hbox{.1855} \\ FD4 & $10^{-12}$ & 30 & 132 & 81.35 & .3730 \\ \hline \lower.3ex\hbox{EHA6} & \lower.3ex\hbox{$10^{-12}$} & \lower.3ex\hbox{30} & \lower.3ex\hbox{48} & \lower.3ex\hbox{58.56} & \lower.3ex\hbox{.1952} \\ FD6 & $10^{-12}$ & 30 & 198 & 100.6 & .3278 \\ \hline \end{tabular} \end{center} \label{diffstats} \end{table} In our first set of experiments, we took $c=d=10$ and used right preconditioning with a fast Poisson solver from {\rm FISHPACK} \cite{Swarztrauber-Sweet}, which is very effective for these fairly small values of $c$ and $d$. We first started each method with zero as the initial approximate solution and allowed it to run for 40 {\rm GMRES}$(m)$ iterations, after which the limit of residual norm reduction had been reached. Figure \ref{pdep} shows plots of the logarithm of the Euclidean norm of the residual versus the number of {\rm GMRES}$(m)$ iterations for the three methods.
We note that in Fig.~\ref{pdep} and in all other figures below, the plotted residual norms were not the values maintained by {\rm GMRES}$(m)$, but rather were computed as accurately as possible ``from scratch.'' That is, at each {\rm GMRES}$(m)$ iteration, the current approximate solution was formed and its product with the coefficient matrix was subtracted from the right-hand side, all in double precision. It was important to compute the residual norms in this way because the values maintained by {\rm GMRES}$(m)$ become increasingly untrustworthy as the limits of residual norm reduction are neared; see \cite{Walker88}. It is seen in Fig.~\ref{pdep} that Algorithm EHA achieved the same ultimate level of residual norm reduction as the FDP method and required only a few more {\rm GMRES}$(m)$ iterations to do so. \begin{figure}[t] \caption{{\rm Log}$_{10}$ of the residual norm versus the number of {\rm GMRES}$(m)$ iterations for $c=d=10$ with fast Poisson preconditioning. Solid curve: Algorithm {\rm EHA}; dotted curve: {\rm FDP} method; dashed curve: {\rm FSP} method.} \label{pdep} \end{figure} In our second set of experiments, we took $c=d=100$ and carried out trials analogous to those in the first set above. No preconditioning was used in these experiments, both because we wanted to compare the methods without preconditioning and because the fast Poisson preconditioning used in the first set of experiments is not cost effective for these large values of $c$ and $d$. We first allowed each method to run for 600 {\rm GMRES}$(m)$ iterations, starting with zero as the initial approximate solution, after which the limit of residual norm reduction had been reached. \section*{Acknowledgments} The author thanks the anonymous authors whose work largely constitutes this sample file. He also thanks the INFO-TeX mailing list for the valuable indirect assistance he received. \begin{thebibliography}{10} \bibitem{bs} {\sc R.~A. Brualdi and B.~L.
Shader}, {\em On sign-nonsingular matrices and the conversion of the permanent into the determinant}, in Applied Geometry and Discrete Mathematics, The Victor Klee Festschrift, P. Gritzmann and B. Sturmfels, eds., American Mathematical Society, Providence, RI, 1991, pp. 117--134. \bibitem{djd} {\sc J. Drew, C.~R. Johnson, and P. van den Driessche}, {\em Strong forms of nonsingularity}, Linear Algebra Appl., 162 (1992), to appear. \bibitem{g} {\sc P.~M. Gibson}, {\em Conversion of the permanent into the determinant}, Proc. Amer. Math. Soc., 27 (1971), pp.~471--476. \bibitem{klm} {\sc V.~Klee, R.~Ladner, and R.~Manber}, {\it Signsolvability revisited}, Linear Algebra Appl., 59 (1984), pp.~131--157. \bibitem{m} {\sc K. Murota}, LU-{\em decomposition of a matrix with entries of different kinds}, Linear Algebra Appl., 49 (1983), pp.~275--283. \bibitem{Axelsson} {\sc O.~Axelsson}, {\em Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations}, Linear Algebra Appl., 29 (1980), pp.~1--16. \bibitem{Brown-Saad1} {\sc P.~N. Brown and Y.~Saad}, {\em Hybrid {K}rylov methods for nonlinear systems of equations}, SIAM J. Sci. Statist. Comput., 11 (1990), pp.~450--481. \bibitem{DES} {\sc R.~S. Dembo, S.~C. Eisenstat, and T.~Steihaug}, {\em Inexact {N}ewton methods}, SIAM J. Numer. Anal., 19 (1982), pp.~400--408. \bibitem{EES} {\sc S.~C. Eisenstat, H.~C. Elman, and M.~H. Schultz}, {\em Variational iterative methods for nonsymmetric systems of linear equations}, SIAM J. Numer. Anal., 20 (1983), pp.~345--357. \bibitem{Elman} {\sc H.~C. Elman}, {\em Iterative methods for large, sparse, nonsymmetric systems of linear equations}, Ph.D. thesis, Department of Computer Science, Yale University, New Haven, CT, 1982. \bibitem{GloKR85} {\sc R.~Glowinski, H.~B.
Keller, and L.~Rheinhart}, {\em Continuation-conjugate gradient methods for the least-squares solution of nonlinear boundary value problems}, SIAM J. Sci. Statist. Comput., 6 (1985), pp.~793--832. \bibitem{Golub-VanLoan} {\sc G.~H. Golub and C.~F. Van~Loan}, {\em Matrix Computations}, Second ed., The Johns Hopkins University Press, Baltimore, MD, 1989. \bibitem{More} {\sc J.~J. Mor\'e}, {\em A collection of nonlinear model problems}, in Computational Solutions of Nonlinear Systems of Equations, E.~L. Allgower and K.~Georg, eds., Lectures in Applied Mathematics, Vol. 26, American Mathematical Society, Providence, RI, 1990, pp.~723--762. \bibitem{Saad} {\sc Y.~Saad}, {\em Krylov subspace methods for solving large unsymmetric linear systems}, Math. Comp., 37 (1981), pp.~105--126. \bibitem{Saad-Schultz} {\sc Y.~Saad and M.~H. Schultz}, {\em {\rm GMRES}: A generalized minimal residual method for solving nonsymmetric linear systems}, SIAM J. Sci. Statist. Comput., 7 (1986), pp.~856--869. \bibitem{Swarztrauber-Sweet} {\sc P.~N. Swarztrauber and R.~A. Sweet}, {\em Efficient {\rm FORTRAN} subprograms for the solution of elliptic partial differential equations}, ACM Trans. Math. Software, 5 (1979), pp.~352--364. \bibitem{Walker88} {\sc H.~F. Walker}, {\em Implementation of the {\rm GMRES} method using {H}ouseholder transformations}, SIAM J. Sci. Statist. Comput., 9 (1988), pp.~152--163. \bibitem{Walker89} \sameauthor, {\em Implementations of the {\rm GMRES} method}, Computer Phys. Comm., 53 (1989), pp.~311--320. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} We prove the $L^p$ boundedness of a maximal operator associated with a dyadic frequency decomposition of a Fourier multiplier, under a weak regularity assumption. \end{abstract} \maketitle \section{Introduction} Consider a Mikhlin-Hörmander multiplier $m$ on $\mathbb{R}^d$ satisfying the assumption \begin{equation} \abs{\partial^\gamma m(\xi)}\leq A\abs{\xi}^{-\abs{\gamma}} \label{mhtype} \end{equation} for all multi-indices $\gamma$ with $\abs{\gamma}\leq L$ for some integer $L>d$. Let $\chi \in C_c^{\infty}(\mathbb{R})$ be supported in $(1/2,2)$ such that $\sum_{j=-\infty}^{\infty}\chi(2^jt)=1$ for $t>0$, and let $\phi=\chi(\lvert\cdot\rvert)$. For a given Schwartz function $f$, let $Sf:=\mathcal{F}^{-1}[m\widehat{f}\,]$, and for $n\in \mathbb{Z}$ let $S_n$ be defined by \begin{equation} \widehat{S_nf}(\xi):=\sum_{j\leq n}\phi(2^{-j}\xi)m(\xi)\widehat{f}(\xi). \label{opdefn} \end{equation} We are interested in bounds for the maximal function \begin{equation} S_*f(x):=\sup_{n\in \mathbb{Z}}\,\abs{S_nf(x)}. \label{opdefnmax} \end{equation} The above operator was studied by Guo, Roos, Seeger and Yung in \cite{guo2019maximal}, in connection with proving $L^p$ bounds for a maximal operator associated with families of Hilbert transforms along parabolas. The multiplier $m$ in \cite{guo2019maximal} was assumed to satisfy the condition \begin{equation} \label{initialassump} \sup_{t>0}\,\norm{\phi m(t\cdot)}_{\mathcal{L}^1_{\beta}}=B(m)<\infty \end{equation} with $\beta>d$. Here $\mathcal{L}^1_{\beta}$ is the potential space of functions $g$ with $(I-\Delta)^{\beta/2}g\in L^1$ (we note the analogy with condition (\ref{mhtype}) here).
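A function $\chi$ with these properties exists by a standard telescoping construction: take a smooth $\theta$ with $\theta \equiv 1$ on $[-1,1]$ and $\theta \equiv 0$ outside $(-2,2)$, and set $\chi(t)=\theta(t)-\theta(2t)$; then $\chi$ is supported in $(1/2,2)$ on the positive axis and $\sum_j \chi(2^jt)$ telescopes to $1$ for $t>0$. A quick numerical check of this construction (one standard choice of $\theta$, not necessarily the one intended here):

```python
import math

def psi(s):                      # smooth, 0 for s <= 0, positive for s > 0
    return math.exp(-1.0 / s) if s > 0 else 0.0

def theta(t):                    # smooth cutoff: 1 on [-1, 1], 0 outside (-2, 2)
    a, b = psi(2.0 - abs(t)), psi(abs(t) - 1.0)
    return a / (a + b)

def chi(t):                      # supported in (1/2, 2) for t > 0
    return theta(t) - theta(2.0 * t)

def dyadic_sum(t, J=20):         # partial sum of sum_j chi(2^j t)
    return sum(chi((2.0 ** j) * t) for j in range(-J, J + 1))
```

Adjacent terms of the sum share the value $\theta(2^{j+1}t)$, so the partial sum collapses to $\theta(2^{-J}t)-\theta(2^{J+1}t)$, which equals $1$ once $J$ is large enough relative to $\log_2 t$.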
With the above hypothesis, the authors were able to prove a pointwise Cotlar-type inequality \begin{equation} \label{initialCotlar} S_*f(x)\leq \frac{1}{(1-\delta)^{1/r}}(M(\abs{Sf}^r)(x))^{1/r}+C_{d,\beta}\delta^{-1}B(m)Mf(x) \end{equation} for $f\in L^p(\mathbb{R}^d)$ and for almost every $x$ (with $r>0$ and $0<\delta\leq 1/2$). Here $Mf$ denotes the standard Hardy--Littlewood maximal function. Inequality (\ref{initialCotlar}) easily implies $L^p$ and weak (1,1) bounds for the maximal operator $S_*$. It is natural to ask whether one could weaken the assumption (\ref{initialassump}) and still establish $L^p$ bounds on the operator $S_*$, possibly without the intermediate step of proving a pointwise inequality of the form (\ref{initialCotlar}) for all exponents $r$. In this paper, we answer this question in the affirmative. In particular, we show that for the $L^p$ bounds on $S_*$ to hold, it is enough for the multiplier $m$ to satisfy the much weaker condition \begin{equation} \sup_{t>0} \int | \mathcal{F}^{-1} [\phi m(t\cdot)](x)| (\log{(2+|x|)})^{\alpha} dx =: B(m)<\infty,\qquad \alpha >3. \label{assump} \end{equation} Our main result is the following: \begin{thm}\label{main thm} $S_*$, as defined in (\ref{opdefnmax}) for a multiplier $m$ satisfying (\ref{assump}), is of weak-type (1,1) and bounded on $L^p$ for $p\in (1,\infty)$, with the respective operator norm $\lesssim_p B(m)$. \end{thm} We remark here that it is not possible to do away with the smoothness assumption entirely. In other words, the condition \begin{equation*} \sup_{t>0} \int | \mathcal{F}^{-1} [\phi m(t\cdot)](x)| dx <\infty \end{equation*} alone is not enough to guarantee $L^p$ bounds on the maximal operator $S_*$, or even on the singular operator $S$. For counterexamples, we refer to \cite[Section~5]{littman1968LpMultiplier} and \cite[Section~3]{stein1967boundedness}. We shall denote the Littlewood--Paley pieces of $m$ by $m_j$.
More precisely, for $j\in \mathbb{Z}$, we define $m_j(\xi):=\phi(2^{-j}\xi)m(\xi)$. Furthermore, let $a_j(\xi)=m_j(2^j\xi)=\phi(\xi)m(2^j\xi)$. Observe that $\mathrm{supp}(m_j)\subset \{2^{j-1}<\abs{\xi}<2^{j+1}\}$ and $\mathrm{supp}(a_j)\subset \{1/2<\abs{\xi}<2\}$. Let $K_j=\mathcal{F}^{-1}[m_j]$. Then (\ref{opdefn}) can be re-written as \begin{equation} S_nf(x)=\sum_{j\leq n} K_j*f(x). \label{opdefnspace} \end{equation} However, in order to quantify the smoothness condition in (\ref{assump}), it is useful to partition $K_j$ (for each $j\in \mathbb{Z}$) on the space side as well (see \cite{grafakos2006maximal}). To this effect, let $\eta_0\in C_c^\infty(\mathbb{R}^d)$ be such that $\eta_0$ is even, $\eta_0(x)=1$ for $|x|\leq 1/2$ and $\eta_0$ is supported where $|x|\leq 1$. For $l\in \mathbb{N}$, let $\eta_l(x)=\eta_0(2^{-l}x)-\eta_0(2^{-l+1}x)$. For $l\in \mathbb{N}\cup \{0\}$ we define \begin{equation} K^{l,j}(x)=\eta_l(2^jx)\mathcal{F}^{-1}[m_j](x). \label{kljdefn} \end{equation} By the assumption (\ref{assump}), we then have \begin{equation} \label{kljnorm} \norm{K^{l,j}}_{L^1}\lesssim B(\log{(2+2^l)})^{-\alpha}. \end{equation} Now the multiplier corresponding to $K^{l,j}$ is given by $2^{-jd}\widehat{\eta_l}(2^{-j}\cdot)*m_j$, which, unlike $m_j$, is not compactly supported. However, the rapid decay of $\widehat{\eta_l}$ still leads to the multiplier ``essentially'' being supported in a slightly thicker (but still compact) dyadic annulus, with the other frequency regions contributing negligible error terms. We make these ideas rigorous in Section \ref{error}. The arguments used are similar in spirit to those in \cite[Section 5]{carbery1986variants}. Another source of reference is \cite{seeger1988some}. As a corollary, we prove that the singular operator \begin{equation} S^lf(x)=\sum_{j\in \mathbb{Z}}K^{l,j}*f(x) \label{op_l} \end{equation} is bounded on $L^2$, with operator norm $\lesssim Bl^{-\alpha}$.
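The spatial windows $\eta_l$ admit the same kind of concrete realization. The sketch below is again our own illustration (the bump \texttt{eta0} is one admissible choice of $\eta_0$, not the paper's): it verifies numerically that the partial sums $\sum_{l}\eta_l$ telescope to $1$ on a bounded window, and that $\eta_l$ lives in a dyadic annulus.

```python
import numpy as np

def smooth_step(s):
    # C^infinity step: 0 for s <= 1/2, 1 for s >= 1
    def h(u):
        return np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-300)), 0.0)
    a = h(2.0 * s - 1.0)
    b = h(2.0 - 2.0 * s)
    return a / (a + b)

def eta0(x):
    # even bump: 1 on |x| <= 1/2, supported in |x| <= 1
    return 1.0 - smooth_step(np.abs(x))

def eta(l, x):
    # dyadic spatial windows: eta_l = eta_0(2^{-l} .) - eta_0(2^{-l+1} .) for l >= 1
    return eta0(x) if l == 0 else eta0(2.0**(-l) * x) - eta0(2.0**(-l + 1) * x)

x = np.linspace(-40.0, 40.0, 2001)
partial = sum(eta(l, x) for l in range(0, 12))
# the partial sums telescope to eta_0(2^{-11} x), which equals 1 on this window
print(np.max(np.abs(partial - 1.0)))
# eta_5 is supported in the dyadic annulus 2^3 <= |x| <= 2^5
print(np.max(np.abs(eta(5, x)[np.abs(x) <= 8.0])))
```

The same telescoping is what makes the decomposition $K_j=\sum_{l\geq 0}K^{l,j}$ exact.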
In Section \ref{majsec}, we use Bernstein's inequality (see \cite{wolff2003lectures}) to establish $L^p$ bounds for the aforementioned portion with the major contribution and with compact frequency support. We also establish a pointwise estimate on its gradient. In Section \ref{weakSl}, using the estimates from Section \ref{majsec}, we prove that the Calder\'on-Zygmund operator $S^l$ associated to the multiplier $m$ is of weak type (1,1), with the operator norm $\lesssim Bl^{-\alpha+1}$. We do so by establishing the result for the operator $T^lf:=\sum_{j\in \mathbb{Z}}H^{l,j}*f$, where $H^{l,j}$ is the portion of the kernel $K^{l,j}$ with the major contribution. The arguments flow in the same vein as those in the proof of the Mikhlin-Hörmander Multiplier Theorem (see \cite{grafakos2008classical}, \cite{H}). We also establish $L^p$ bounds on $T^l$ using interpolation. In Section \ref{largesec}, we investigate the properties of the truncated operator $T_n^l f:= \sum_{j\leq n} H^{l,j}*f$ (for $n\in \mathbb{Z}$ and $l\geq 0$), with the aim of establishing $L^p$ bounds for the associated maximal operator $T^l_*f=\sup_{n\in \mathbb{Z}}\,\abs{T_n^l f}$. We wish to show that for $l\geq 0$, the operator $T^l_*$ is bounded on $L^p$ for $1<p<\infty$ with the corresponding operator norm $\lesssim_p Bl^{-\alpha+1}$ ($\lesssim B$ for $l=0$). Since our assumption (\ref{assump}) on $m$ is much weaker than that in \cite{guo2019maximal}, a pointwise inequality like (\ref{initialCotlar}) for all $r>0$ seems out of reach. However, by using similar ideas, we are able to establish a Cotlar type inequality \[ T^l_* f(x)\lesssim_d \frac{1}{(1-\delta)^{1/p}}(M(\abs{T^lf}^p)(x))^{1/p}+2^\frac{ld}{p}l^{-\alpha+1}(1+\delta^{-1/p})B(m)(M(\abs{f}^p)(x))^{1/p} \] for each $T^l_*$ and for a large enough exponent $p$ (here $0<\delta\leq 1/2$). Roughly speaking, a choice of $p=p_l \sim l$ will work. In particular, $p_l\rightarrow \infty$ as $l\rightarrow\infty$.
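To see why an exponent $p\sim l$ suffices to tame the factor $2^{ld/p}$ in the Cotlar type inequality, note the elementary estimate (our own bookkeeping, recorded here for the reader's convenience):
\[
p\geq l \quad \Longrightarrow \quad 2^{\frac{ld}{p}}\leq 2^{d},
\]
so once $p\gtrsim l$, the coefficient of the second term above is $\lesssim_d l^{-\alpha+1}(1+\delta^{-1/p})B(m)$, and the decay in $l$ survives the choice of large exponent.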
Using this inequality, we can conclude that $T^l_*$ is bounded (with norm $\lesssim_p Bl^{-\alpha+1}$) for all $p\in [p_l,\infty)$. This idea of keeping track of the explicit dependence on the exponent $l$ and using the decay in $l$ to sum the pieces up has also been used in \cite{grafakos2006maximal}, albeit for a different maximal operator than the one considered here. For $p\in (1, p_l)$, however, we rely on a weak (1,1) estimate for $T^l_*$ (which is not hard to obtain) and then an interpolation between $1$ and $p_l$, which causes us to pick up a power of $l$. In other words, we are only able to retain a decay of $l^{-\alpha+2}$ (hence the assumption $\alpha>3$). This is a trade-off of working with the weaker logarithmic regularity assumption. Finally, in Section \ref{weakmax}, we establish a weak (1,1) estimate for the maximal operator $T^l_*$ and obtain $L^p$ bounds for the sum $\sum_{l\geq 0}T^l_*$, and consequently for $S_*$ (we use the decay in $l$ and the condition $\alpha>3$ here). \section{The Error Terms} \label{error} Let $\Psi\in C_c^\infty(\mathbb{R}^d)$ with $\Psi=1$ on $\{1/4\leq |\xi|\leq 4\}$ and supported on $\{1/5\leq |\xi|\leq 5\}$. For $j\in \mathbb{Z}$, define $\Psi_j(\xi)=\Psi(2^{-j}\xi)$. Then \begin{equation} \sum_{j\in \mathbb{Z}}K^{l,j}= \sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j) + \sum_{j\in \mathbb{Z}}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j) \label{twoterms} \end{equation} where $K^{l,j}$ is as defined in (\ref{kljdefn}). In this section, we will show that the contribution from the second sum above can be made as small as required using the rapid decay of $\widehat{\eta_l}$. To control this sum, we study the corresponding multiplier given by \[ \sum_{j\in \mathbb{Z}}2^{-jd}(1-\Psi_j)(\xi)\widehat{\eta_l}(2^{-j}\cdot)*m_j(\xi)=\sum_{j\in \mathbb{Z}} (1-\Psi_j(\xi))2^{-jd}\int m_j(\omega)\widehat{\eta_l}(2^{-j}(\xi-\omega))\, d\omega . \] \begin{lem} \label{errorterms} Let $l\geq 0$.
For any $N\in \mathbb{N}$, multi-index $\gamma$ and $\xi\neq 0$, we have the estimate \begin{equation} \sum_{j\in \mathbb{Z}}2^{-jd}\abs{(1-\Psi_j)\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)} \lesssim_{\eta, N, \gamma,d} B2^{l(d+\abs{\gamma}-N)} \abs{\xi}^{-\abs{\gamma}}. \end{equation} \end{lem} \begin{proof} Fix $\xi\neq 0$. Let $k\in \mathbb{Z}$ be such that $2^{k-1}\leq \abs{\xi}<2^{k+1}$. As $\Psi_k=1$ on $\{2^{k-2}\leq \abs{\xi}\leq 2^{k+2}\}$, we can split the sum under consideration into two parts \begin{align*} &\sum_{j\in \mathbb{Z}}2^{-jd}\abs{(1-\Psi_j)\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)} \\ &\leq \sum_{j> k+2}2^{-jd}\abs{\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)} +\sum_{j<k-2}2^{-jd}\abs{\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)}. \end{align*} \subsection*{First Term:} We have \begin{align*} &\sum_{j>k+2}2^{-jd}\abs{\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)} \\ &\leq \sum_{j>k+2}2^{-jd}\int_{2^{j-1}\leq \abs{\omega}\leq 2^{j+1}}\abs{ m_j(\omega)}\abs{\partial^\gamma_\xi\widehat{\eta_l}(2^{-j}(\xi-\omega))}\, d\omega. \end{align*} Now we observe that for $2^{j-1}\leq \abs{\omega}\leq 2^{j+1}$, we have $\abs{\omega-\xi}\geq \abs{\omega}/2$ and $2^{-j}\abs{\omega}/2\sim 1$.
Hence, the Schwartz decay of $\widehat{\eta_l}$ yields \begin{align*} &\sum_{j>k+2}2^{-jd}\abs{\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)} \\ &\lesssim_{\eta,N,\gamma}\sum_{j>k+2} 2^{-jd}\int_{2^{j-1}\leq \abs{\omega}\leq 2^{j+1}}\norm{ m}_{L^\infty}2^{l(d+\abs{\gamma})}2^{-j\abs{\gamma}}\Big(2^{l-j}\abs{\omega}/2\Big)^{-N}\, d\omega\\ &\lesssim_{d,\eta,N,\gamma}\sum_{j>k+2}2^{-jd}\norm{m}_{L^\infty}2^{l(d+\abs{\gamma}-N)}2^{-j\abs{\gamma}}\int_{2^{j-1}\leq \abs{\omega}\leq 2^{j+1}}\, d\omega\\ &\lesssim_{d,\eta,N,\gamma}\sum_{j>k+2}2^{d}\norm{m}_{L^\infty}2^{l(d+\abs{\gamma}-N)}2^{-j\abs{\gamma}} \lesssim_{d,\eta,N,\gamma}B2^{l(d+\abs{\gamma}-N)}2^{-(k+2)\abs{\gamma}} \lesssim_{d,\eta,N,\gamma}B2^{l(d+\abs{\gamma}-N)}\abs{\xi}^{-\abs{\gamma}}. \end{align*} \subsection*{Second Term:} In this case, for $2^{j-1}\leq \abs{\omega}\leq 2^{j+1}$ we have that $\abs{\omega-\xi}\geq \abs{\xi}/2$ and $2^{-j}\abs{\xi}/2\geq 1$. Hence \begin{align*} &\sum_{j<k-2}2^{-jd}\abs{\partial^{\gamma}_\xi (\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)}\\ &\lesssim_{\eta,N,\gamma}\sum_{j<k-2}2^{-jd}\int_{2^{j-1}\leq \abs{\omega}\leq 2^{j+1}}\norm{ m}_{L^\infty}2^{l(d+\abs{\gamma})}2^{-j\abs{\gamma}}\Big(2^{l-j}\abs{\xi}/2\Big)^{-N}\, d\omega\\ &\lesssim_{\eta,N,\gamma}\sum_{j<k-2}2^{-jd}\norm{m}_{L^\infty}2^{l(d+\abs{\gamma}-N)}\int_{2^{j-1}\leq \abs{\omega}\leq 2^{j+1}}\abs{\xi}^{-\abs{\gamma}}\Big(2^{-j}\abs{\xi}/2\Big)^{-N+\abs{\gamma}}\, d\omega\\ &\lesssim_{\eta,N,\gamma,d}\norm{m}_{L^\infty}2^{l(d+\abs{\gamma}-N)}\abs{\xi}^{-\abs{\gamma}}\sum_{j<k-2}2^{-(k-2-j)(N-\abs{\gamma})} \lesssim_{\eta,N,\gamma,d}B2^{l(d+\abs{\gamma}-N)}\abs{\xi}^{-\abs{\gamma}}, \end{align*} where in the last step we assumed, as we may, that $N>\abs{\gamma}$. \end{proof} Now a simple application of the Mikhlin-H\"ormander Multiplier Theorem gives us \begin{thm} \label{errorthm} Let $l\geq 0$.
For any $N_0\in \mathbb{N}$, $p\in (1,\infty)$ and Schwartz function $f$, we have \begin{equation*} \norm{\sum_{j\in \mathbb{Z}}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)*f}_{L^p}\lesssim_d \max(p,(p-1)^{-1})C_{\eta,N_0,\Psi}B2^{-lN_0}\norm{f}_{L^p}. \end{equation*} Furthermore, we also have \[\norm{\sum_{j\in \mathbb{Z}}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)*f}_{L^{1,\infty}}\lesssim_d C_{\eta,N_0,\Psi}B2^{-lN_0}\norm{f}_{L^1}.\] \end{thm} \begin{proof} We need to prove that \[\abs{\sum_{\beta\leq \gamma}c_{\beta,\gamma}\sum_{j\in \mathbb{Z}}\partial^\beta(1-\Psi_j)(\xi)\partial^{\gamma-\beta}(2^{-jd}\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)}\lesssim_d C_{\eta,N_0,\Psi}B2^{-lN_0}\abs{\xi}^{-\abs{\gamma}}\] where $\gamma$ is a multi-index with $\abs{\gamma}\leq d/2+1$ and $\xi\neq 0$. We observe that for all values of $j$ except for $j_1,j_2$ where $2^{j_1+2}\leq \abs{\xi}<5\cdot 2^{j_1}$ or $2^{j_2}/5\leq \abs{\xi}<2^{j_2-2}$, we can use Lemma \ref{errorterms} directly (as then $\abs{(1-\Psi_j)(\xi)}$ is a constant). Further, for the two remaining cases, we observe that $\abs{\xi}\sim 2^{j_k}$ $(k=1,2)$. Hence we can bound the term $\abs{\partial^\beta(1-\Psi_{j_k})}$ by $C_{\Psi}\abs{\xi}^{-\abs{\beta}}$ and apply the previous lemma (with $N=N_0+d+\abs{\gamma}-\abs{\beta}$) to the term $\abs{\partial^{\gamma-\beta}(2^{-jd}\widehat{\eta_l}(2^{-j}\cdot)*m_j)(\xi)}$ to bound it above by $C_{\eta,N_0}B2^{-lN_0}\abs{\xi}^{-(\abs{\gamma}-\abs{\beta})}$. The result then follows by summing up. \end{proof} As a consequence, we obtain the $L^2$ boundedness of $S^l$ (as defined in (\ref{op_l})), with polynomial decay in $l$. \begin{thm} \label{l2bound} For $f\in L^2$ and $l>0$, we have \[\norm{S^l(f)}_{L^2}\lesssim Bl^{-\alpha}\norm{f}_{L^2}.\] We also have \[\norm{S^0(f)}_{L^2}\lesssim B\norm{f}_{L^2}.\] \end{thm} \begin{proof} It is enough to prove the above for a Schwartz function $f$.
Now \[S^l(f)=\sum_{j\in \mathbb{Z}}K^{l,j}*f=\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)*f + \sum_{j\in \mathbb{Z}}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)*f.\] As we have already established the $L^2$ boundedness of the second term in Theorem \ref{errorthm} (with as good a decay in $l$ as required), we only need to prove the theorem for the first term. To this effect, let \[f=\sum_{k\in \mathbb{Z}}\Delta_k f\] be a Littlewood-Paley decomposition of $f$. Then \begin{align*} \norm{\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)*f}_{L^2} &= \norm{\sum_{j\in \mathbb{Z}}\sum_{k\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)*\Delta_k f}_{L^2}\\ &\lesssim \norm{\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)*(\Delta_{j-1}+\Delta_{j}+\Delta_{j+1}) f}_{L^2} \end{align*} where we have used the fact that the frequency support of $K^{l,j}*\mathcal{F}^{-1}(\Psi_j)$ is contained in $\{2^j/5\leq \abs{\xi}\leq 5\cdot 2^j\}$. From (\ref{kljnorm}), we also have that \[\norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^1}\leq \norm{K^{l,j}}_{L^1}\norm{\mathcal{F}^{-1}(\Psi_j)}_{L^1}\lesssim B (\log{(2+2^l)})^{-\alpha}\norm{\mathcal{F}^{-1}(\Psi)}_{L^1}. \] Hence \begin{align*} \norm{\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)*f}_{L^2} &\lesssim B (\log{(2+2^l)})^{-\alpha} \norm{\sum_{j\in \mathbb{Z}}(\Delta_{j-1}+\Delta_{j}+\Delta_{j+1}) f}_{L^2}\\ &\lesssim B (\log{(2+2^l)})^{-\alpha}\norm{f}_{L^2} \end{align*} which proves the theorem. \end{proof} \section{Estimates for the Majorly Contributing Portion of the Kernel} \label{majsec} We now turn our attention to the first term in (\ref{twoterms}), which is the one with the main contribution. The main advantage we have now is that the $j$th term in \[\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)\] has frequency support in the annulus $\{2^j/5\leq \abs{\xi}\leq 5\cdot 2^j\}$.
Hence, we can use Bernstein's inequality to get bounds on the $L^p$ norm (of each term, with $1<p<\infty$) and the $L^{\infty}$ norm (of the derivative). \begin{prop}\label{new} Let $q\in (1,\infty)$, and let $q'$ denote the Hölder conjugate exponent of $q$. Then for all $l\in \mathbb{N}\cup \{0\}$ and $j\in \mathbb{Z}$, we have \begin{enumerate} \item $\norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}}\lesssim B 2^{\frac{jd}{q}}(\log{(2+2^l)})^{-\alpha}$. \item For $x\in \mathbb{R}^d$, $|\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(x)|\lesssim 2^{j}|K^{l,j}*\mathcal{F}^{-1}(\Psi_j)(x)|$. \end{enumerate} \end{prop} \begin{proof} For the first part, we have \begin{align*} \norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}} &\leq \norm{K^{l,j}}_{L^1}\norm{\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}} \lesssim B(\log{(2+2^l)})^{-\alpha}2^{jd/q}\norm{\mathcal{F}^{-1}(\Psi)}_{L^{q'}} \lesssim_{\Psi} B(\log{(2+2^l)})^{-\alpha}2^{jd/q} \end{align*} where in the second step we have used (\ref{kljnorm}) and the scaling identity $\norm{\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}}=2^{jd/q}\norm{\mathcal{F}^{-1}(\Psi)}_{L^{q'}}$. For the second part, we recall that the Fourier transform of $\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))$ is supported on a dyadic annulus of radius $2^j$. The assertion then follows from Bernstein's inequality (see \cite[Proposition 5.3]{wolff2003lectures}). \end{proof} \section{Weak (1,1) Boundedness of \texorpdfstring{$S^l$}{}} \label{weakSl} Let $\mathds{1}_l$ be the characteristic function of the set $\{x\colon 2^{l-2}\leq \abs{x}\leq 2^{l}\}$ for $l>0$ and of the set $\{x\colon \abs{x}\leq 1\}$ for $l=0$, so that $K^{l,j}\mathds{1}^{l,j}=K^{l,j}$, where we denote $\mathds{1}_l(2^jx)$ by $\mathds{1}^{l,j}(x)$. Then (\ref{twoterms}) can be rewritten as \begin{equation} \label{twotermswithchar} \sum_{j\in \mathbb{Z}}K^{l,j}= \sum_{j\in \mathbb{Z}}K^{l,j}\mathds{1}^{l,j}=\sum_{j\in \mathbb{Z}}K^{l,j}*\mathcal{F}^{-1}(\Psi_j)\mathds{1}^{l,j} + \sum_{j\in \mathbb{Z}}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)\mathds{1}^{l,j}.
\end{equation} The advantage of (\ref{twotermswithchar}) over (\ref{twoterms}) is that it preserves information about the compact support of the kernel $K^{l,j}$, a property we will exploit quite often in the forthcoming proofs. Now for $f\in L^1(\mathbb{R}^d)$, we can estimate \[ \norm{\sum_{j\in \mathbb{Z}}K^{l,j}*f}_{L^{1,\infty}} \leq \norm{\sum_{j\in \mathbb{Z}}H^{l,j}*f}_{L^{1,\infty}} + \norm{\sum_{j\in \mathbb{Z}}(K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)\mathds{1}^{l,j})*f}_{L^{1,\infty}} \] where we define $H^{l,j}:=(K^{l,j}*\mathcal{F}^{-1}(\Psi_j))\mathds{1}^{l,j}$. Also, let the operator $T^l$ be defined as $T^lf:= \sum_{j\in \mathbb{Z}}H^{l,j}*f$. By Theorem \ref{errorthm}, we conclude that $\norm{\sum_{j\in \mathbb{Z}}(K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)\mathds{1}^{l,j})*f}_{L^{1,\infty}}\lesssim Bl^{-\alpha+1}\norm{f}_{L^1}$. Hence, in order to prove that $S^l$ is of weak type $(1,1)$ (with the respective norm $\lesssim_d Bl^{-\alpha+1}$), it is enough to prove the same for $T^l$, which is the content of the next theorem. The proof we give here essentially uses the same ideas as Hörmander's original proof of the Mikhlin-Hörmander Multiplier Theorem (see \cite{H}; see also \cite{grafakos2008classical}). \begin{thm}\label{weak thm} For all $f\in L^1(\mathbb{R}^d)$, we have \[\norm{T^lf}_{L^{1,\infty}}\lesssim Bl^{-\alpha+1}\norm{f}_{L^1}\] for $l\geq 1$, and \[\norm{T^0f}_{L^{1,\infty}}\lesssim B\norm{f}_{L^1}.\] \end{thm} \begin{proof} We prove the result for $l>0$; the result for $l=0$ follows similarly. Also, for this proof, we may assume that $B=1$. Let $f\in L^1(\mathbb{R}^d)$ and fix $\sigma >0$. Let \[ f= f_0+f_1 \] be the standard Calder\'on-Zygmund decomposition of $f$ at the level $l^{\alpha-1}\sigma$.
More precisely, let $\{I_k\}_{k\in \mathbb{N}}$ be axis-parallel cubes with centres $\{a_k\}_{k\in \mathbb{N}}$ respectively, such that \begin{align*} &l^{\alpha-1}\sigma<\abs{I_k}^{-1}\int_{I_k}\abs{f(y)}\, dy\leq 2^dl^{\alpha-1}\sigma,\\ &\abs{f(x)}\leq l^{\alpha-1}\sigma \textrm{ a.e.\ for } x\not\in \underset{k\in \mathbb{N}}{\bigcup}I_k, \end{align*} \[ f_0(x)= \begin{cases} f(x)-\abs{I_k}^{-1}\int_{I_k}f(y)\, dy, & x\in I_k,\\ 0, &\textrm{otherwise,} \end{cases} \qquad f_1(x)= \begin{cases} \abs{I_k}^{-1}\int_{I_k}f(y)\, dy, & x\in I_k,\\ f(x), &\textrm{otherwise.} \end{cases} \] Now \begin{align*} &\textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f(x)}>\sigma \})\\ \leq &\textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}>\sigma/2 \})+\textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_1(x)}>\sigma/2 \}). \end{align*} We estimate $\textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}>\sigma/2 \})$ \begin{align*} &\leq \textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}>\sigma/2 \}\bigcap (\underset{k\in \mathbb{N}}{\bigcup}2I_k)^c)+ \textrm{meas}(\underset{k\in \mathbb{N}}{\bigcup}2I_k)\\ &\lesssim_d \textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}>\sigma/2 \}\bigcap (\underset{k\in \mathbb{N}}{\bigcup}2I_k)^c)+\frac{l^{-\alpha+1}\norm{f}_1}{\sigma}.
\tag{a} \label{a} \end{align*} Now since the mean value of $f_0$ over $I_k$ vanishes, we have \begin{align*} &\int_{(\underset{k\in \mathbb{N}}{\bigcup}2I_k)^c}\abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}\, dx\\ &\leq \sum_{k\in \mathbb{N}}\int_{I_k}\Bigg(\int_{(\underset{k\in \mathbb{N}}{\bigcup}2I_k)^c}\abs{\sum_{j\in \mathbb{Z}}H^{l,j}(x-y)-H^{l,j}(x-a_k)}\,dx\Bigg )\abs{f_0(y)}\, dy\\ &\lesssim_d l^{-\alpha+1}\sum_{k\in \mathbb{N}}\int_{I_k}\abs{f_0(y)}\, dy \lesssim_d l^{-\alpha+1}\norm{f}_1, \tag{b} \label{b} \end{align*} provided we prove that \[ \int_{(\underset{k\in \mathbb{N}}{\bigcup}2I_k)^c}\abs{\sum_{j\in \mathbb{Z} }H^{l,j}(x-y)-H^{l,j}(x-a_k)}\,dx\lesssim_d l^{-\alpha+1},\: y\in I_k. \tag{c} \label{c} \] We postpone the proof of (\ref{c}) and first conclude the estimates. By (\ref{a}), (\ref{b}) and (\ref{c}), we obtain \[ \textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_0(x)}>\sigma/2 \})\lesssim_d \sigma^{-1}l^{-\alpha+1}\norm{f}_1 +\sigma^{-1}l^{-\alpha+1}\norm{f}_1\lesssim_d \sigma^{-1}l^{-\alpha+1}\norm{f}_1. \tag{d} \label{d} \] By Theorem \ref{l2bound} (together with Theorem \ref{errorthm}), we have $\norm{T^lf}_{L^2}\lesssim l^{-\alpha+1}\norm{f}_{L^2}$, and hence \[ \sigma^2\,\textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_1(x)}>\sigma/2 \}) \lesssim \norm{\sum_{j\in \mathbb{Z}}H^{l,j}*f_1}_2^2\lesssim l^{2(-\alpha+1)}\norm{f_1}_2^2 \] \begin{align*} &=l^{2(-\alpha+1)}\Bigg (\sum_{k\in \mathbb{N}}\abs{I_k}^{-1}\Big\lvert\int_{I_k}f(x)\,dx\Big \rvert^2+\int_{(\underset{k\in \mathbb{N}}{\bigcup}I_k)^c}\abs{f(x)}^2\, dx\Bigg ) \\ &\lesssim_d l^{-\alpha+1}\sigma\Big \{ \sum_{k\in \mathbb{N}}\Big\lvert\int_{I_k}f(x)\,dx\Big \rvert+\int_{(\underset{k\in \mathbb{N}}{\bigcup}I_k)^c}\abs{f(x)}\, dx\Big \} \end{align*} which enables us to conclude \[ \textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f_1(x)}>\sigma/2 \})\lesssim \frac{l^{-\alpha+1}\norm{f}_1}{\sigma}.
\tag{e} \label{e} \] Combining the estimates (\ref{d}) and (\ref{e}) yields \[ \textrm{meas}(\{x\colon \abs{\sum_{j\in \mathbb{Z}}H^{l,j}*f(x)}>\sigma \})\lesssim_d \frac{l^{-\alpha+1}\norm{f}_1}{\sigma}. \] There remains the proof of (\ref{c}). It is sufficient to prove that \[ \sum_{j\in \mathbb{Z}}\int_{\abs{x}\geq 2t}\abs{H^{l,j}(x-y)-H^{l,j}(x)}\, dx\lesssim l^{-\alpha+1}\; (\abs{y}\leq t,\ t>0). \] Fix $t>0$, and let $m\in \mathbb{Z}$ be such that $t\sim 2^m$. Now for $j> -m$, \begin{align*} \int_{\abs{x}\geq 2t}\abs{H^{l,j}(x-y)-H^{l,j}(x)}\, dx &\leq 2\int_{\abs{x}\geq t}\abs{H^{l,j}(x)}\, dx \lesssim\int_{\abs{x}\geq 2^m}\abs{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)(x)}\mathds{1}^{l,j}(x)\, dx. \end{align*} Now we observe that for the last term to be non-zero, $2^{l-j}\geq 2^m$. Hence we have \begin{align*} &\sum_{j>-m}\int_{\abs{x}\geq 2t}\abs{H^{l,j}(x-y)-H^{l,j}(x)}\, dx\\ &\lesssim \sum_{-m< j\leq l-m} \int_{\abs{x}\geq 2^m}\abs{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)(x)}\mathds{1}^{l,j}(x)\, dx \lesssim l\, \sup_{j\in \mathbb{Z}}\norm{K^{l,j}}_{L^1}\norm{\mathcal{F}^{-1}(\Psi)}_{L^1}\lesssim_{\Psi} l(\log{(2+2^l)})^{-\alpha}\\ &\lesssim l^{-\alpha+1} \end{align*} where we have used (\ref{kljnorm}) to conclude the second to last inequality. Moreover, by Proposition \ref{new}, we have $ \abs{\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(x)}\lesssim 2^{j} \abs{ (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(x)} $ which yields \begin{align*} \sum_{j\leq -m }\int_{\abs{x}\geq 2t}\abs{H^{l,j}(x-y)-H^{l,j}(x)}\, dx &\leq \sum_{j\leq -m }\int_0^1\int_{\mathbb{R}^d}\abs{\langle y,\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(x-\tau y) \rangle }\, dx\,d\tau \\ &\lesssim\sum_{j\leq -m } t \norm{\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))}_1\\ &\lesssim\sum_{j\leq -m } t2^j \norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_1 \lesssim_{\Psi}\sum_{j\leq -m } \norm{K^{l,j}}_{L^1}\, t2^j \lesssim (\log{(2+2^l)})^{-\alpha}\, t\, 2^{-m} \lesssim l^{-\alpha}. \end{align*} The last two estimates give (\ref{c}), and the proof is complete.
\end{proof} The following theorem now follows almost immediately using standard $L^p$ interpolation theory. \begin{thm} \label{lpforSl} $T^l$ defines a bounded operator on $L^p$ for $p\in (1,\infty)$, with \[\norm{T^l}_{L^p}\lesssim_d \max(p,(p-1)^{-1})Bl^{-\alpha+1}\] for $l>0$. We also have \[\norm{T^0}_{L^p}\lesssim_d \max(p,(p-1)^{-1})B.\] \end{thm} \begin{proof} The operator $T^l$ is bounded on $L^2$ (by Theorems \ref{l2bound} and \ref{errorthm}) and maps $L^1$ to $L^{1,\infty}$ (by Theorem \ref{weak thm}), with norm $\lesssim Bl^{-\alpha+1}$ ($\lesssim B$ for $l=0$). Interpolating between the weak (1,1) and $L^2$ estimates then yields the required norm for the operator acting on $L^p$ with $p\in (1,2)$. Further, a duality argument yields the desired result for $p\in (2,\infty)$ as well. This concludes the proof of the theorem. \end{proof} \section{Boundedness on \texorpdfstring{$L^p$}{} for large \texorpdfstring{$p$}{}} \label{largesec} In this section, we investigate the properties of the truncated operator $T_n^l f:= \sum_{j\leq n} H^{l,j}*f$ (for $n\in \mathbb{Z}$ and $l\geq 0$), with the aim of establishing $L^p\rightarrow L^p$ bounds for the associated maximal operator $T^l_*f=\sup_{n\in \mathbb{Z}}\,\abs{T_n^l f}$. Let $M[f]$ denote the standard Hardy-Littlewood maximal function. We wish to show that for $l\geq 0$, the operator $T^l_*$ is bounded on $L^p$ for $1<p<\infty$ with the corresponding operator norm $\lesssim_p Bl^{-\alpha+1}$ ($\lesssim_p B$ for $l=0$). We will achieve this by proving a Cotlar type inequality \[ T^l_* f(x)\lesssim_d \frac{1}{(1-\delta)^{1/q}}(M(\abs{T^lf}^q)(x))^{1/q}+2^\frac{ld}{q}l^{-\alpha+1}(1+\delta^{-1/q})B(m)(M(\abs{f}^q)(x))^{1/q} \] for each $T^l_*$ and for a large enough exponent $q$ (here $0<\delta\leq 1/2$). Roughly speaking, a choice of $q=q_l \sim l$ will work. In particular, $q_l\rightarrow \infty$ as $l\rightarrow\infty$. Using this inequality, we can conclude that $T^l_*$ is bounded (with norm $\lesssim_p Bl^{-\alpha+1}$) for all $p\in (q_l,\infty)$.
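Since all the majorants below are powers of the Hardy-Littlewood maximal function, it may help to recall its weak (1,1) behaviour on a point mass. The following brute-force sketch is a toy illustration on a one-dimensional grid, entirely separate from the argument of this paper (the function name and the grid are our own): it computes the centered maximal function of an approximate unit point mass and observes that $\lambda\,\textrm{meas}\{Mf>\lambda\}$ stays bounded by a constant multiple of $\norm{f}_{L^1}$.

```python
import numpy as np

def hardy_littlewood(f, dx):
    """Centered Hardy-Littlewood maximal function on a 1-D grid (brute force)."""
    n = len(f)
    csum = np.concatenate([[0.0], np.cumsum(np.abs(f)) * dx])
    out = np.zeros(n)
    for i in range(n):
        best = 0.0
        for r in range(1, n):
            lo, hi = max(i - r, 0), min(i + r + 1, n)
            # average of |f| over the (possibly truncated) window of radius r
            avg = (csum[hi] - csum[lo]) / ((hi - lo) * dx)
            best = max(best, avg)
        out[i] = best
    return out

# approximate point mass of total mass 1 at the origin
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
f = np.zeros_like(x)
f[200] = 1.0 / dx          # so that the L^1 norm equals 1
Mf = hardy_littlewood(f, dx)
for lam in np.geomspace(0.5, 50.0, 12):
    level_set = np.sum(Mf > lam) * dx
    print(lam, lam * level_set)   # stays bounded, consistent with weak (1,1)
```

The analogous bound fails for the $L^1$ norm of $Mf$ itself, which is why weak-type estimates are the natural endpoint throughout this section.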
The following lemma is the main step in establishing the Cotlar type inequality. The ideas used are similar to Lemma A.1 in \cite{guo2019maximal}. \begin{lem}\label{mainlemma} Fix $\Tilde{x}\in \mathbb{R}^d$, $n\in \mathbb{Z}$ and $q>1$. Let $g(y)=f(y)\mathds{1}_{B(\Tilde{x},2^{-n})}(y)$ and $h=f-g$. Then we have \begin{enumerate}[label=(\roman*)] \item \label{l1} $|T_n^l g(\Tilde{x})| \lesssim \begin{cases} B(\mq{f}(\Tilde{x}))^{1/q}, &l=0,\\ 0, & l>0. \end{cases}$ \item \label{l2} $ |T_n^l h(\Tilde{x})-T^l h(\Tilde{x})| \lesssim \begin{cases} 0, & l=0,\\ B2^{ld/q}l^{-\alpha+1}(\mq{f}(\Tilde{x}))^{1/q}, &l>0. \end{cases}$ \item \label{l3} For $\abs{w-\Tilde{x}}\leq 2^{-n-1}$, we have\\ $|T^lh(\Tilde{x})-T^lh(w)| \lesssim \begin{cases} B(\mq{f}(\Tilde{x}))^{1/q}, &l=0,\\ B2^{ld/q}l^{-\alpha+1}(\mq{f}(\Tilde{x}))^{1/q}, &l>0. \end{cases}$ \end{enumerate} \end{lem} \begin{proof} We may assume without loss of generality that $B=1$. To prove \ref{l1}, we consider, for $j\leq n$, \begin{align*} \abs{H^{l,j}*g(\Tilde{x})} &\leq \int_{\abs{\Tilde{x}-y}\leq 2^{-n}}\abs{H^{l,j}(\Tilde{x}-y)g(y)}\,dy\\ &\leq \norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}}\Bigg(\int_{\abs{\Tilde{x}-y}\leq 2^{-n}}\abs{g(y)}^q|\mathds{1}^{l,j}(\Tilde{x}-y)|^q\,dy\Bigg )^{1/q}. \end{align*} We note that $\mathds{1}^{l,j}(\Tilde{x}-y)$ is supported around $\Tilde{x}$ in either a dyadic annulus of radius $\sim 2^{l-j}$ for $l>0$ or a disc of radius $2^{-j}$ for $l=0$. As $j\leq n$, the second term above is non-zero only when $l=0$. In this case, we estimate \begin{align*} \abs{H^{0,j}*g(\Tilde{x})} &\lesssim 2^{jd/q}(\log{2})^{-\alpha} 2^{-nd/q}\Bigg(2^{nd}\int_{\abs{\Tilde{x}-y}\leq 2^{-n}}\abs{g(y)}^q\,dy\Bigg )^{1/q} \quad \textrm{(using Proposition \ref{new})}\\ &\lesssim 2^{(j-n)d/q} (\mq{g}(\Tilde{x}))^{1/q}. \end{align*} Summing up in $j\leq n$, the assertion now follows as $\abs{g}\leq \abs{f}$.
For \ref{l2}, we observe that $\abs{T_n^l h(\Tilde{x})-T^lh(\Tilde{x})}\leq \sum_{j>n}\abs{H^{l,j}*h(\Tilde{x})}$. For $j>n$, we then have \begin{align*} \abs{H^{l,j}*h(\Tilde{x})} &\leq \int_{\abs{\Tilde{x}-y}\geq 2^{-n}}\abs{H^{l,j}(\Tilde{x}-y)h(y)}\,dy\\ &\leq \norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}}\Bigg(\int_{\abs{\Tilde{x}-y}\geq 2^{-n}}\abs{h(y)}^q|\mathds{1}^{l,j}(\Tilde{x}-y)|^q\,dy\Bigg )^{1/q}. \end{align*} Again, using the support property of $\mathds{1}^{l,j}$, we observe that the second term above is non-zero only when $l>0$ and $j<l+n$. For each such $j$, we estimate \begin{align*} \abs{H^{l,j}*h(\Tilde{x})} &\lesssim 2^{jd/q}(\log{(2+2^l)})^{-\alpha}\Bigg(\int_{\abs{\Tilde{x}-y}\geq 2^{-n}}\abs{h(y)}^q|\mathds{1}^{l,j}(\Tilde{x}-y)|^q\,dy\Bigg )^{1/q} \quad \textrm{(using Proposition \ref{new})}\\ &\lesssim 2^{jd/q} l^{-\alpha}\Bigg(\int_{\abs{\Tilde{x}-y}\sim 2^{l-j}}\abs{h(y)}^q\,dy\Bigg )^{1/q} \leq 2^{jd/q} l^{-\alpha}2^{(l-j)d/q}\Bigg(2^{(j-l)d}\int_{\abs{\Tilde{x}-y}\leq 2^{l-j}}\abs{h(y)}^q\,dy\Bigg )^{1/q} \\ &\lesssim l^{-\alpha}2^{ld/q}(\mq{h}(\Tilde{x}))^{1/q}. \end{align*} Summing up in $n<j<n+l$ and noting that $\abs{h}\leq \abs{f}$, we get \[ \sum_{j>n}\abs{H^{l,j}*h(\Tilde{x})}\lesssim 2^{ld/q}l^{-\alpha+1}(\mq{f}(\Tilde{x}))^{1/q}. \] Now for \ref{l3}, we consider the terms $H^{l,j}*h(\Tilde{x})-H^{l,j}*h(w)$ separately for $j\leq n$ and $j>n$. The sum $\sum_{j>n}\abs{H^{l,j}*h(\Tilde{x})}$ was already dealt with in \ref{l2} and, as before, only matters for $l>0$. Since $\abs{w-\Tilde{x}}\leq 2^{-n-1}$, we have $\abs{w-y}\approx \abs{\Tilde{x}-y}$ for $\abs{\Tilde{x}-y}\geq 2^{-n}$, and the previous calculation leads to \[ \sum_{j>n}\abs{H^{l,j}*h(w)}\lesssim \begin{cases} 0, & l=0,\\ B2^{ld/q}l^{-\alpha+1}(\mq{f}(\Tilde{x}))^{1/q}, &l>0. \end{cases} \] It remains to consider the terms for $j\leq n$.
We write \[ H^{l,j}*h(\Tilde{x})-H^{l,j}*h(w)= \int_0^1\int_{\abs{\Tilde{x}-y}\geq 2^{-n}}\Big \langle \Tilde{x}-w, \nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(w+s(\Tilde{x}-w)-y)\Big \rangle h(y)\,dy\, ds. \] Since $\abs{w-\Tilde{x}}\leq 2^{-n-1}$, we can replace $\abs{w+s(\Tilde{x}-w)-y}$ in the integrand with $\abs{\Tilde{x}-y}$. Also, Proposition \ref{new} yields \[ \abs{\nabla (K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(\Tilde{x}-y)}\lesssim 2^j \abs{(K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(\Tilde{x}-y)}. \] Thus we have \begin{align*} &\abs{H^{l,j}*h(\Tilde{x})-H^{l,j}*h(w)}\\ &\lesssim 2^j\abs{\Tilde{x}-w}\int_{\abs{\Tilde{x}-y}\geq 2^{-n}} \abs{(K^{l,j}*\mathcal{F}^{-1}(\Psi_j))(\Tilde{x}-y)h(y)}\,dy\\ &\leq 2^{j-n-1}\norm{K^{l,j}*\mathcal{F}^{-1}(\Psi_j)}_{L^{q'}}\Bigg(\int_{\abs{\Tilde{x}-y}\geq 2^{-n}}\abs{h(y)}^q|\mathds{1}^{l,j}(\Tilde{x}-y)|^q\,dy\Bigg )^{1/q}. \end{align*} Proposition \ref{new} now gives us \begin{align*} \abs{H^{l,j}*h(\Tilde{x})-H^{l,j}*h(w)} &\lesssim 2^{j-n-1}2^{jd/q}(\log{(2+2^l)})^{-\alpha}\Bigg(\int_{\abs{\Tilde{x}-y}\sim 2^{l-j}}\abs{h(y)}^q\,dy\Bigg )^{1/q} \\ &\lesssim 2^{j-n-1}2^{jd/q}(\log{(2+2^l)})^{-\alpha}2^{(l-j)d/q}(\mq{h}(\Tilde{x}))^{1/q} \\ &\lesssim \begin{cases} 2^{ld/q} 2^{j-n-1}(\mq{f}(\Tilde{x}))^{1/q}, & l=0,\\ 2^{ld/q} 2^{j-n-1}l^{-\alpha}(\mq{f}(\Tilde{x}))^{1/q}, &l>0. \end{cases} \end{align*} Summing in $j\leq n$ leads to \[ \sum_{j\leq n}\abs{H^{l,j}*h(\Tilde{x})-H^{l,j}*h(w)}\lesssim \begin{cases} 2^{ld/q} (\mq{f}(\Tilde{x}))^{1/q}, & l=0,\\ 2^{ld/q} l^{-\alpha}(\mq{f}(\Tilde{x}))^{1/q}, &l>0. \end{cases} \] \end{proof} We can now prove the main result of this section. \begin{prop}\label{cotlarlemma} Let $\alpha>3$, $q>1$ and let $B(m)$ be as in (\ref{assump}). Let $f$ be a Schwartz function.
Then for almost every $x$ and for $0<\delta \leq 1/2$, we have \[ T^l_* f(x)\lesssim \frac{1}{(1-\delta)^{1/q}}(M(\abs{T^lf}^q)(x))^{1/q}+C_dA_l(1+\delta^{-1/q})(M(\abs{f}^q)(x))^{1/q} \] where \[ A_l= \begin{cases} B, &l=0,\\ B2^{ld/q} l^{-\alpha+1}, & l>0. \end{cases} \] \end{prop} \begin{proof} The proof is essentially the same as that of an analogous result in \cite{guo2019maximal}, which is in turn a modification of the argument for the standard Cotlar inequality regarding truncations of singular integrals (see \cite{stein1993harmonic}, Section 1.7). Fix $\Tilde{x}\in \mathbb{R}^d$ and $n\in \mathbb{Z}$, and define $g$ and $h$ as in Lemma \ref{mainlemma}. For $w$ (to be chosen later) with $\abs{w-\Tilde{x}}\leq 2^{-n-1}$ we can write \begin{align} T_n^l f(\Tilde{x}) &= T_n^l g(\Tilde{x})+(T_n^l-T^l)h(\Tilde{x})+T^lh(\Tilde{x})\nonumber\\ &= T_n^l g(\Tilde{x})+(T_n^l -T^l)h(\Tilde{x})+T^lh(\Tilde{x})-T^lh(w)+T^lf(w)-T^lg(w). \label{terms} \end{align} By Lemma \ref{mainlemma}, we have \[ \abs{T_n^l g(\Tilde{x})}+\abs{(T_n^l-T^l)h(\Tilde{x})}+\abs{T^lh(\Tilde{x})-T^lh(w)}\lesssim A_l(\mq{f}(\Tilde{x}))^{1/q}. \] All that remains is to consider the term $T^lf(w)-T^lg(w)$ for $w$ in a substantial subset of $B(\Tilde{x},2^{-n-1})$. By Theorem \ref{lpforSl} and Chebyshev's inequality, we have that for all $f\in L^q(\mathbb{R}^d)$ and all $\lambda>0$ \[ \textrm{meas}\{x\colon \abs{T^lf(x)}>\lambda\}\lesssim A_l^q\lambda^{-q}\norm{f}_q^q. \] Now let $\delta\in (0,1/2]$ and consider the set \[ \Omega_n(\Tilde{x},\delta)=\{w\colon \abs{w-\Tilde{x}}<2^{-n-1}, \, \abs{T^lg(w)}>2^{d/q}\delta^{-1/q}A_l(\mq{f}(\Tilde{x}))^{1/q}\}. \] In (\ref{terms}) we can estimate the term $\abs{T^lg(w)}$ by $A_l2^{d/q}\delta^{-1/q}(\mq{f}(\Tilde{x}))^{1/q}$ whenever $w\in B(\Tilde{x}, 2^{-n-1})\setminus \Omega_n(\Tilde{x},\delta)$.
Hence we obtain \begin{equation} \abs{T_n^l f(\Tilde{x})}\lesssim \underset{w\in B(\Tilde{x}, 2^{-n-1})\setminus \Omega_n(\Tilde{x},\delta)}{\textrm{inf}} \abs{T^lf(w)}+C_dA_l(1+\delta^{-1/q})(\mq{f}(\Tilde{x}))^{1/q}. \end{equation} By the weak type inequality for $T^l$ we have \begin{align*} \textrm{meas}(\Omega_n(\Tilde{x},\delta)) &\leq \frac{A_l^q\norm{g}_q^q}{2^d\delta^{-1}A_l^q\mq{f}(\Tilde{x})}=\frac{\delta}{2^d\mq{f}(\Tilde{x})}\int_{\abs{\Tilde{x}-y}\leq 2^{-n}}\abs{f(y)}^q\, dy\\ &\leq \delta 2^{-d}\,\textrm{meas}(B(\Tilde{x},2^{-n}))=\delta\, \textrm{meas}(B(\Tilde{x},2^{-n-1})). \end{align*} Hence $\textrm{meas}(B(\Tilde{x}, 2^{-n-1})\setminus \Omega_n(\Tilde{x},\delta))\geq (1-\delta)\,\textrm{meas}(B(\Tilde{x},2^{-n-1}))$ and thus \begin{align*} \underset{w\in B(\Tilde{x}, 2^{-n-1})\setminus \Omega_n(\Tilde{x},\delta)}{\textrm{inf}} \abs{T^lf(w)} &\leq \Bigg ( \frac{1}{\textrm{meas}(B(\Tilde{x}, 2^{-n-1})\setminus \Omega_n(\Tilde{x},\delta))}\int_{B(\Tilde{x}, 2^{-n-1})}\abs{T^lf(w)}^q\, dw\Bigg )^{1/q}\\ &\leq \Bigg ( \frac{1}{(1-\delta)\textrm{meas}(B(\Tilde{x},2^{-n-1}))}\int_{B(\Tilde{x}, 2^{-n-1})}\abs{T^lf(w)}^q\, dw\Bigg )^{1/q}. \end{align*} We obtain \[ \abs{T_n^l f(\Tilde{x})}\lesssim \frac{1}{(1-\delta)^{1/q}}(M(\abs{T^lf}^q)(\Tilde{x}))^{1/q}+A_l(1+\delta^{-1/q})(M(\abs{f}^q)(\Tilde{x}))^{1/q} \] uniformly in $n$, which implies Proposition \ref{cotlarlemma}. \end{proof} The above proposition, in conjunction with Theorem \ref{lpforSl}, immediately leads to the following: \begin{thm}\label{large thm} $T_*^l$ is bounded on $L^p$ for $p\in (l+2,\infty)$. \end{thm} \begin{proof} Fix $\delta=1/2$ and $q=l+2$. With this choice of $q$, the factor $2^{ld/q}$ appearing in $A_l$ is bounded by the dimensional constant $2^d$. We note that both $f\rightarrow T^lf$ and $f\rightarrow (\mq{f})^{1/q}$ are bounded operators on $L^p$ for $p\in (l+2,\infty)$, with operator norms bounded by $pBl^{-\alpha+1}$ ($pB$ for $l=0$) and $p/(p-l-2)$ respectively (up to multiplication by a dimensional constant).
Hence, by Proposition \ref{cotlarlemma}, we obtain \[ \norm{T^l_*}_{L^p}\lesssim_d \begin{cases} B, &l=0\\ l^{-\alpha+1}\frac{p^2}{p-l-2}B, &l>0. \end{cases} \] \end{proof} \section{Weak (1,1) Boundedness of the Maximal Operator} \label{weakmax} In this section, we will prove that each of the pieces $T^l_*$ is of weak type $(1,1)$, with the respective norm $\lesssim_d Bl^{-\alpha+2}$ ($\lesssim_d B$ for $l=0$). Combining this result with Theorem \ref{large thm} and interpolating, we will obtain bounds on the operator norm of $T^l_*$ on $L^p$ for all $p\in (1,\infty)$, with a decay of $l^{-\alpha+2}$. This will allow us to achieve our final goal of summing up the pieces to get bounds on the operator $S^*$. The proof we give here is essentially the same as the one for Theorem \ref{weak thm}, except for one notable difference. In order to obtain a weak bound for the ``good'' function in the Calder\'on--Zygmund decomposition, we use the bounds on $L^{p_l}$ (as given by Theorem \ref{large thm}, with $p_l=4l$) in place of those on $L^2$, noting that the operator norm in the former case is $\lesssim Bl^{-\alpha+2}$ (for $l>0$). The upshot is that the power of $l$ in the weak (1,1) norm of $T^l_*$ goes up by $1$. Instead of repeating the entire argument, we sketch an outline. \begin{thm} \label{weak2} $T_*^l$ is weak (1,1) bounded with \[ \norm{T_*^l}_{L^1\rightarrow L^{1,\infty}}\lesssim \begin{cases} B, &l=0,\\ Bl^{-\alpha+2},&l>0. \end{cases} \] \end{thm} \begin{proof} Let $n\in \mathbb{Z}$ and $f\in L^1(\mathbb{R}^d)$. Fix $\sigma>0$. We sketch the proof for $l>0$ (the one for $l=0$ proceeds in almost the same way). Also, we may assume $B=1$. As before, we make a Calder\'on--Zygmund decomposition of $f$ at the level $l^{\alpha-1}\sigma$. For the ``good'' function $f_1$, we use the bound $\norm{T_n^l}_{L^{p_l}}\lesssim Bl^{-\alpha+2}$ on $L^{p_l}$ with $p_l=4l$ for $l>0$ and $\norm{T^0_n}_{L^{3}}\lesssim B$ on $L^{3}$.
We obtain for $l>0$ \[ \norm{T_n^l f_1}_{L^{1,\infty}}\lesssim Bl^{-\alpha+2}\norm{f}_{L^1} \] (and the corresponding result for $l=0$). The argument for the ``bad'' part $f_0$ proceeds exactly as in the proof of Theorem \ref{weak thm}, with the index $n$ playing no real role, and we get for $l>0$ \[ \norm{T_n^l f_0}_{L^{1,\infty}}\lesssim Bl^{-\alpha+1}\norm{f}_{L^1} \] (and the corresponding result for $l=0$). The result then follows by combining the two estimates and taking a supremum over $n$. \end{proof} Theorems \ref{large thm} and \ref{weak2} together, via $L^p$ interpolation, lead to \begin{thm}\label{large thm2} $T_*^l$ is bounded on $L^p$ for $p\in (1,\infty)$, with \[ \norm{T^l_*}_{L^p}\lesssim_d \begin{cases} B\max(p,(p-1)^{-1}), &l=0\, , p\in(1,\infty)\\ B\max(p,(p-1)^{-1})l^{-\alpha+2}, &l>0\, , p\in(1,4l)\\ Bpl^{-\alpha+1}, &l>0\, , p\in[4l,\infty). \end{cases} \] \end{thm} \begin{proof} The result for $l=0$ follows directly from $L^p$ interpolation and the estimates proved earlier. For $l>0$ and $p\in [4l,\infty)$, it follows easily from Theorem \ref{large thm} and the observation that $p^2/(p-l-2)\lesssim p$ for $p\geq 4l$. For $l>0$ and $p\in (1,4l)$, it follows by interpolating between the weak (1,1) estimate in Theorem \ref{weak2} and the bound contained in Theorem \ref{large thm} for $p=4l$, again making the observation that $p^2/(p-l-2)\lesssim p=4l$. \end{proof} Finally, we give the proof of Theorem \ref{main thm}. \begin{proof} It is enough to prove the theorem for $S_n$ for a fixed $n\in \mathbb{Z}$, as the result then follows by taking the supremum over $n\in \mathbb{Z}$.
For $p\in (1,\infty)$ and a Schwartz function $f$, we have \[\norm{S_nf}_{L^p}\leq \sum_{l\geq 0}\norm{S_n^l f}_{L^p}\leq \sum_{l\geq 0}\norm{T_n^l f}_{L^p}+\sum_{l\geq 0}\norm{\sum_{j\leq n}K^{l,j} *\mathcal{F}^{-1}(1-\Psi_j)*f}_{L^p}.\] Using Theorem \ref{large thm2} for the first sum and Theorem \ref{errorthm} for the second one (and noting that summing over $j\leq n$ instead of $j\in \mathbb{Z}$ does not affect the proof), we get \[\norm{S_nf}_{L^p}\lesssim B \max(p,(p-1)^{-1})\sum_{l\geq 1}(l^{-\alpha+2}+2^{-l})\lesssim_p B.\] For weak (1,1) boundedness, we argue in a similar way, using Theorem \ref{weak2} this time in place of Theorem \ref{large thm2}. \end{proof} \nocite{*} \end{document}
\begin{document} \title[Piecewise Smooth Holomorphic Systems]{Piecewise Smooth Holomorphic Systems} \author[L. F. S. Gouveia, Gabriel Rondón and P. R. da Silva]{Luiz F. S. Gouveia, Gabriel Rondón and Paulo R. da Silva} \address{S\~{a}o Paulo State University (Unesp), Institute of Biosciences, Humanities and Exact Sciences. Rua C. Colombo, 2265, CEP 15054--000. S. J. Rio Preto, S\~ao Paulo, Brazil.} \email{[email protected]} \email{[email protected]} \email{[email protected]} \thanks{ } \subjclass[2010]{32A10, 34C20, 34A34, 34A36, 34C05.} \keywords{piecewise smooth holomorphic systems, limit cycles, regularization} \date{} \dedicatory{} \maketitle \begin{abstract} The normal forms associated with holomorphic systems are well known in the literature. In this paper we are concerned with piecewise smooth holomorphic systems (PWHS). Specifically, we classify the possible phase portraits of these systems from the known normal forms and the typical singularities of PWHS. We are also interested in understanding how the trajectories of the regularized system associated with a PWHS transit through the region of regularization. Moreover, while holomorphic systems have no limit cycles, piecewise smooth holomorphic systems can, so we provide conditions ensuring the existence of limit cycles for these systems. Additional conditions are provided to guarantee the stability and uniqueness of such limit cycles. Finally, we give some families of PWHS that have homoclinic orbits. \end{abstract} \section{Introduction} Holomorphic systems $\dot{z}=f(z)$ have interesting dynamical properties, for example, the fact that these systems have no limit cycles and that they have a finite number of equilibrium points, which are isolated provided that $f$ is not identically null.
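For instance, for the linear holomorphic system $\dot{z}=iz$, writing $z=x+iy$ gives $(\dot{x},\dot{y})=(-y,x)$, whose orbits are the circles $x^2+y^2=c$; every periodic orbit is surrounded by a continuum of periodic orbits of the same period $2\pi$, so no periodic orbit is isolated, and hence none is a limit cycle.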
Moreover, holomorphic polynomial systems involve fewer parameters: while a polynomial system of degree $n$ depends on $n^2+3n+2$ parameters, a polynomial holomorphic system depends only on $2n+2$ parameters. Furthermore, holomorphic functions are of interest in several areas of applied science, for example, in the study of fluid dynamics. In this context, it is possible to verify that the complex potential of the conjugate holomorphic system $\dot{z}=\overline{f(z)}$ is a primitive of $f(z)$. For more information see, for instance, \cite{BatGK,Mars,Conw}. In this paper, we are interested in the study of piecewise smooth holomorphic systems (PWHS), \begin{equation}\label{ch4:eq111} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{+}=f^{+}(z)=u_1+iv_1, \text{ when } \operatorname{Re}(z)> 0,\\[5pt] \dot{z}^{-}=f^{-}(z)=u_2+iv_2,\text{ when }\operatorname{Re}(z)<0, \end{array} \right. \end{aligned} \end{equation} where $z=x+iy$ and $f^{\pm}(z)$ are holomorphic functions defined in a domain $\mathcal{V}\subseteq\mathbb{C}$ and satisfying that \begin{itemize} \item[(i)] $u_{1,2}=\operatorname{Re}(f^{\pm})$ and $v_{1,2}=\operatorname{Im}(f^{\pm})$ are continuous; \item[(ii)] the partial derivatives $(u_{1,2})_x,(u_{1,2})_y,(v_{1,2})_x,(v_{1,2})_y$ exist in $\mathcal{V};$ and \item[(iii)] the partial derivatives satisfy the Cauchy--Riemann equations \[ \begin{array}{rcl} (u_{1})_x =(v_{1})_y,& (u_{1})_y=-(v_{1})_x,\quad \forall z=x+iy\in\mathcal{V},\\ (u_{2})_x =(v_{2})_y,& (u_{2})_y=-(v_{2})_x,\quad \forall z=x+iy\in\mathcal{V}.\\ \end{array} \] \end{itemize} We remark that the straight line $\Sigma=\{\operatorname{Re}(z)=0\}$ divides the plane into the two half-planes $\Sigma^\pm$ given by $\{z :\operatorname{Re}(z)> 0\}$ and $\{z :\operatorname{Re}(z)< 0\}$, respectively.
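To illustrate conditions (i)--(iii), take $f^{+}(z)=z^2$ and $f^{-}(z)=1$. Then $u_1=x^2-y^2$, $v_1=2xy$, $u_2=1$ and $v_2=0$, and a direct computation gives $(u_1)_x=2x=(v_1)_y$ and $(u_1)_y=-2y=-(v_1)_x$, so the Cauchy--Riemann equations hold on all of $\mathbb{C}$ (for $f^{-}$ they hold trivially).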
The trajectories on $\Sigma$ are defined following the Filippov convention. Throughout this article we use the normal forms associated with the holomorphic functions given in \cite{BT} and \cite{GGJ2}, namely: $1$, $(a+ib)z$, $z^n$, $\frac{\gamma z^n}{1+z^{n-1}},$ and $\frac{1}{z^n}.$ For more details see Proposition \ref{GGJ}. A priori these normal forms depend on the notion of conformal conjugation. One of the properties of PWHS that we prove here is that the sliding, sewing and tangential regions are preserved by conformal conjugation, see Theorem \ref{foldtofold}. In particular, Lemma \ref{foldtofold1} establishes that regular-fold singularities are preserved by conformal conjugation. We use this last result to characterize the type of tangential contact with $\Sigma$ of the holomorphic functions that are conformally conjugated to one of the normal forms. For more information see Theorem \ref{car_nf}. An interesting property is that the regularized vector field associated to \eqref{ch4:eq111} loses the property of being holomorphic, see Theorems \ref{teo:reg} and \ref{teoreg}. For that, we use {\it the principle of identity of analytic functions}, which states: given functions $f$ and $g$ analytic on a domain $D$ (an open and connected subset of $\mathbb{C}$), if $f=g$ on some $S\subseteq D$, where $S$ has an accumulation point in $D$, then $f=g$ on $D$. Also, we are interested in regularizations of PWHS around visible regular-fold singularities. More specifically, using Theorem 1 of \cite{NR} and the normal forms associated with the holomorphic functions, we propose to understand how the trajectories of the regularized system transit through the region of regularization, see Theorem \ref{ta}.
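As a simple illustration of this principle, if $f$ is analytic on a domain $D$ containing the origin and $f(1/n)=0$ for every integer $n\geq 1$, then the zero set of $f$ has $0$ as an accumulation point in $D$, and hence $f\equiv 0$ on $D$.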
\begin{figure}[h] \begin{center} \begin{overpic}[scale=0.35]{fig_fc.pdf} \put(-7,27){$\Sigma^-$} \put(-7,68){$\Sigma^+$} \put(102,51){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{ex_cyclefc}. The red trajectory is the limit cycle of \eqref{ex_cyclefc}.} \label{limit_cycle_fc} \end{center} \end{figure} In addition, we are concerned with the existence of limit cycles for PWHS. One of the reasons for this study is the fact that holomorphic systems have no limit cycles; for more information see, for instance, \cite{Ben,Bro,GGJ2,Oto1,Oto2,GXG,NeeKing,Sverdlove}. For that we use the normal forms mentioned above and establish conditions for the existence of limit cycles, see Theorems \ref{cl1}, \ref{cl2}, and \ref{cl3}. Furthermore, additional conditions are provided to guarantee the stability and uniqueness of such limit cycles. In particular, Theorem \ref{cl1} establishes that piecewise linear holomorphic systems whose equilibrium points are on the manifold $\Sigma$ have at most one limit cycle. Also, Corollary \ref{cor:lc} establishes necessary and sufficient conditions for the existence of such a limit cycle. For example, if we consider the PWHS \begin{equation}\label{ex_cyclefc} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{+}=(1-i)(z+1),\text{ when } \operatorname{Im}(z)>0, \\[5pt] \dot{z}^{-}=-iz, \text{ when } \operatorname{Im}(z)< 0, \end{array} \right. \end{aligned} \end{equation} then it has a unique unstable limit cycle (see Figure \ref{limit_cycle_fc}). In the context of piecewise linear systems in the real plane, the maximum number of limit cycles varies depending on the number of zones generated by the discontinuity manifold $\Sigma$. For example, in \cite{MR1681463}, Freire et al.
considered two zones separated by a straight line and proved that such piecewise linear systems in the real plane have at most one limit cycle. However, when considering three zones (for example, the discontinuity manifold $\Sigma$ could consist of two parallel straight lines) it is possible to prove the existence of more than one limit cycle; for more details see, for instance, \cite{math8050755,MR3360760,MR3328261}. We emphasize that the focus on the existence of limit cycles in PWHS is one of the main novelties of the present study. One of the main challenges in this context is that constructing the first return map is rather complicated; however, using the normal forms associated with the holomorphic functions in their polar form makes the construction much more tractable. For the construction of the limit cycles we use the symmetry of the polar equation of the orbits of $z^n$ and $\frac{1}{z^n}$ and the invariance of the rays of such normal forms. Finally, we use the invariant rays of the normal forms $z^n$ and $\frac{1}{z^n}$ to construct homoclinic orbits of PWHS; for more details see Propositions \ref{propho1} and \ref{propho2}. \subsection{Structure of the paper} In Section \ref{sec:preliminares}, we present some basic results on holomorphic functions that will be used throughout the paper. In Section \ref{sec:PWHS}, we use the normal forms given in Proposition \ref{GGJ} to classify the sliding, sewing, and tangential regions. For the tangential region, we study the type of tangential singularities existing in PWHS. In Section \ref{sec:reg}, we perform an analysis of the regularization of PWHS. In Section \ref{sec:limitcycles}, we establish conditions for the existence of limit cycles in PWHS. Finally, in Section \ref{sec:homoclinic_orbits} we give some families of PWHS that have homoclinic orbits.
\section{Preliminaries}\label{sec:preliminares} In this section we establish some basic results that will be used throughout the paper. \subsection{Holomorphic functions} Let $F$ be a holomorphic function on a domain $\mathcal{V}\subseteq\mathbb{C}$. Thus for any $z_0\in\mathcal{V}$ \begin{equation} F(z)=A_0+A_1(z-z_0)+A_2(z-z_0)^2+...,\quad A_k=a_k+ib_k=\dfrac{F^{(k)}(z_0)}{k!} \label{analF} \end{equation} for $z\in D(z_0,R_{z_0})\subseteq\mathcal{V}$, where $D(z_0,R_{z_0})$ is the largest possible $z_0$--centered disk contained in $\mathcal{V}.$ Up to a translation we can always assume that $z_0 = 0$. If $F$ is holomorphic in a punctured disc $D(z_0,R)\setminus\{z_0\}$ and is not differentiable at $z_0$, we say that $z_0$ is a singularity of $F$. In this case $F(z)$ is equal to its Laurent series in $D(z_0,R)\setminus\{z_0\}$, \begin{equation} F(z)=\sum_{k=1}^{\infty}\dfrac{B_k}{(z-z_0)^k}+ \sum_{k=0}^{\infty}A_k(z-z_0)^k, \label{laurent}\end{equation} where \[B_k=\dfrac{1}{2\pi i}\int_{C_{\varepsilon}}F(z)(z-z_0)^{k-1}dz,\quad A_k=\dfrac{1}{2\pi i}\int_{C_{\varepsilon}}\dfrac{F(z)}{(z-z_0)^{k+1}}dz\] with $C_{\varepsilon}$ parameterized by $z(t)=\varepsilon e^{it}, \varepsilon\sim0$. \\ If $B_k\neq0$ for an infinite set of indices $k$ we say that $z_0$ is an \textit{essential singularity}, and if there exists $n \geq1$ such that $B_n \neq0$ and $B_k = 0$ for every $k>n$ then we say that $z_0$ is a \textit{pole of order $n$}.
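For example, $F(z)=e^{z}/z^2=\frac{1}{z^2}+\frac{1}{z}+\frac{1}{2}+\frac{z}{6}+\dots$ has $B_2=1$, $B_1=1$ and $B_k=0$ for $k>2$, so the origin is a pole of order $2$; on the other hand, $F(z)=e^{1/z}=\sum_{k\geq 0}\frac{1}{k!\,z^{k}}$ has $B_k=1/k!\neq 0$ for every $k\geq 1$, so the origin is an essential singularity.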
Moreover, $B_1$ is called the \textit{residue} of $F$ at $z_0$ and it is denoted by $B_1 = \operatorname{res} (F, z_0)$.\\ Let $F:D(0,R)\setminus\{0\}\rightarrow\mathbb{C}$ be a holomorphic function as in \eqref{laurent} with $z_0=0, B_k=c_k+id_k$ and $A_k=a_k+ib_k$. Consider the ordinary differential equation \begin{equation}\label{hde} \dot{z}(t)=F(z(t)),\quad t\in\mathbb{R}. \end{equation} The solution of \eqref{hde} passing through $z\in D(0,R)\setminus\{0\}$ at $t=0$ is denoted by $\varphi_F(t,z).$\\ We have \[F(z)=\sum_{k=1}^{\infty}\dfrac{c_k+id_k}{z^k}+ \sum_{k=0}^{\infty}(a_k+ib_k)z^k.\] A direct calculation using Newton's binomial formula gives us $z^k=(x+iy)^k=p_k+iq_k$ with $p_k$ and $q_k$ as in the table \begin{equation} \begin{array}{llll} \hline &k &p_k &q_k \\ \hline\\ &1 & x & y\\ \hline\\ &2 & x^2-y^2&2x y\\ \hline\\ &3 & x^3-3xy^2 & 3x^2y-y^3\\ \hline\\ &4 & x^4-6x^2y^2+y^4 & 4x^3y-4xy^3\\ \hline\\ &5 & x^5-10x^3y^2+5xy^4 & 5x^4y-10x^2y^3+y^5\\ \hline &...&...&...\\ \end{array} \label{Tpq} \end{equation} Thus $$(a_k+ib_k)z^k=(a_kp_k-b_kq_k)+i(b_kp_k+a_kq_k)$$ and $$\frac{c_k+id_k}{z^k}=\frac{(c_kp_k+d_kq_k)+i(d_kp_k-c_kq_k)}{(x^2+y^2)^k}.$$ Hence $\dot{x}=\operatorname{Re} (F(z))$ and $\dot{y}=\operatorname{Im}(F(z))$ must satisfy the following system \begin{equation}\left\{\begin{array}{ll} \dot{x}&= \displaystyle\sum_{k=1}^{\infty}\left(c_k\dfrac{p_k}{(x^2+y^2)^k}+d_k\dfrac{q_k}{(x^2+y^2)^k}\right)+a_0+ \displaystyle\sum_{k=1}^{\infty}\left(a_kp_k-b_kq_k\right)\\ \dot{y}&= \displaystyle\sum_{k=1}^{\infty}\left(d_k\dfrac{p_k}{(x^2+y^2)^k}-c_k\dfrac{q_k}{(x^2+y^2)^k}\right)+b_0+
\displaystyle\sum_{k=1}^{\infty}\left(b_kp_k+a_kq_k\right) \end{array} \right.\label{hvf} \end{equation} with $p_k,q_k$ given in Table \eqref{Tpq}. We refer to system \eqref{hvf} as a holomorphic system. The coefficients $c_k,d_k$ are zero provided that $F$ is holomorphic at $0$.\\ \noindent\textbf{Remark.} If $F=u+iv$ is holomorphic in $D(0,R)\setminus\{0\}$ and is not identically null, then system \eqref{hvf} has a finite number of equilibrium points and all of them are isolated. In fact, if there exists a sequence of distinct equilibria $(x_n,y_n)$ of \eqref{hvf}, then the sequence $z_n=x_n+iy_n$ is formed by zeros of $F$. Passing to $\overline{D(0,R)}$ if necessary, we can assume that $z_n$ admits a convergent subsequence $z_{n_k}$. In this case $F$ vanishes on a set that has an accumulation point. It follows from the principle of identity of analytic functions that $F \equiv 0$. \subsection{Conformally conjugate holomorphic functions} In this section we introduce the notion of conformally conjugated holomorphic functions, which allows us to obtain the normal forms for this class of functions. Before that, we need to define conformal mappings. \begin{definition} A map $\Phi:\mathbb{C}\to\mathbb{C}$ is called conformal if it preserves angles. \end{definition} In \cite{GAvila} it was proved that the angle between two curves which intersect at a point $z_0$ is preserved by conformal maps (see Figure \ref{angle_conformal_3}).
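A standard example is $\Phi(z)=z^2$, which is conformal on $\mathbb{C}\setminus\{0\}$ since $\Phi'(z)=2z\neq 0$ there; at the critical point $z=0$ conformality fails, as the rays $\arg z=0$ and $\arg z=\theta$ are mapped to the rays $\arg w=0$ and $\arg w=2\theta$, so the angle between them is doubled.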
\begin{figure}[h] \begin{overpic}[width=12cm]{angle_conformal_3} \put(49,25){$\Phi$} \put(24,21){\scriptsize $\theta$ \par} \put(94,22){$\Phi(C_1)$} \put(7.5,27){$C_2$} \put(101,11){$u$} \put(77,30){$v$} \put(85,16){\scriptsize $\theta$ \par} \put(34,27){$C_1$} \put(68,29){$\Phi(C_2)$} \put(44,11){$x$} \put(20,30){$y$} \end{overpic} \caption{The angle between any two curves is preserved.} \label{angle_conformal_3} \end{figure} An interesting geometric property of complex analytic functions is that, at non-critical points (points with nonzero derivative), they preserve angles and consequently define conformal mappings. \begin{proposition} If $w=\Phi(z)$ is an analytic function and $\Phi'(z)\neq 0$, then $\Phi$ defines a conformal map. \end{proposition} Notice that the converse is also valid, because every planar conformal map comes from a complex analytic function with nonvanishing derivative. \begin{remark} Let $\Phi(z)$ be a conformal map with $\Phi(0)=0$. Then the linear approximation of $\Phi$ near $0$ (the first two terms of its Taylor series) is given by $$\Phi(z)\approx \Phi(0)+\Phi'(0)z=\Phi'(0)z,$$ and if $\gamma(t)$ is a curve with $\gamma(t_0)=0$ for some $t_0\in\mathbb{R},$ then $\Phi(\gamma(t))\approx\Phi'(0)\gamma(t)$ for all $t$ near $t_0$. \end{remark} We will classify the local phase portraits of piecewise smooth holomorphic systems. To do this, we start by introducing the concept of conformal conjugation. Let $F$ and $G$ be holomorphic functions defined in some punctured neighborhood of $0\in\mathbb{C}$.
We say that $F$ and $G$ are \textit{$0$--conformally conjugated} if there exist $R>0$ and a conformal map $\Phi:D(0,R)\rightarrow D(0,R)$ such that $\Phi(0)=0$ and $\Phi(\varphi_F(t,z)) =\varphi_G(t,\Phi(z))$, for any $z\in D(0,R)\setminus\{0\}$ and all $t$ for which the above expressions are well defined and the corresponding points are in $D(0,R)$. Let $F$ and $G$ be holomorphic functions defined in some punctured neighborhoods of $z_1\in\mathbb{C}$ and $z_2\in\mathbb{C}$, respectively. We say that $F$ and $G$ are \textit{$z_1z_2$--conformally conjugated} if $F(z-z_1)$ and $G(z-z_2)$ are $0$--conformally conjugated.\\ If $F$ and $G$ are holomorphic in $D(0,R)$ then we have: \begin{itemize} \item If $F(0)\neq0$, $G(0)\neq0$ then $F$ and $G$ are $0$--conformally conjugated; \item If $F(0)\neq0$, $G(0)=0$ then $F$ and $G$ are not $0$--conformally conjugated; \item If $F(0)=0$, $G(0)=0$ and $F,G$ are non constant then \[ \Phi(\varphi_F(t,z)) =\varphi_G(t,\Phi(z))\Leftrightarrow \Phi'(z)F(z)=G(\Phi(z)),\] for $|z|$ sufficiently small. \end{itemize} The following proposition, whose proof can be found in \cite{BT,GGJ2}, gives us important information about the normal forms of holomorphic functions. \begin{proposition}\label{GGJ} Let $F$ be a holomorphic function defined in some punctured neighborhood of $w_0\in\mathbb{C}$. \begin{itemize} \item [(a)] If $F(w_0)\neq0$ then $F$ and $G(z)\equiv 1$ are $w_00$--conformally conjugated. \item [(b)] If $F(w_0)=0$ and $F'(w_0)\neq0$ then $F$ and $G(z)\equiv F'(w_0)z$ are $w_00$--conformally conjugated. \item [(c)] If $F(w_0)=0$, $w_0$ is a zero of $F$ of order $n>1$ and $\operatorname{Res}(1/F,w_0)=1/\gamma$ then $F$ and $G(z)\equiv \gamma z^n/(1+z^{n-1})$ are $w_00$--conformally conjugated.
\item[(d)] If $F(w_0)=0$, $w_0$ is a zero of $F$ of order $n>1$ and $\operatorname{Res}(1/F,w_0)=0$ then $F$ and $G(z)\equiv z^n$ are $w_00$--conformally conjugated. \item[(e)] If $w_0$ is a pole of $F$ of order $n$ then $F$ and $G(z)\equiv \frac{1}{z^n}$ are $w_00$--conformally conjugated. \end{itemize} \end{proposition} Due to the beauty of the argument used in \cite{GGJ2} to prove the following result, we reproduce its proof here. \begin{proposition}\label{teo_nocl} Let $F$ be a holomorphic function defined in a domain $\mathcal{V}\subseteq\mathbb{C}$. The phase portrait of $\dot{z}=F(z)$ has no limit cycle. \end{proposition} \begin{proof} Suppose $\gamma$ is a periodic orbit of $\dot{z} = F(z)$ with period $T$, i.e. $\varphi_F(T,z) = z$ for every $z \in\gamma$. Consider the transition function given by $\xi(z) = \varphi_F(T,z)$. The transition function is analytic and equals the identity at all points of $\gamma$. By the principle of identity of analytic functions, it coincides with the identity in a neighborhood of $\gamma$. This means that the periodic orbit belongs to a continuum of periodic orbits, all with the same period $T$.\end{proof} \section{Piecewise smooth holomorphic systems}\label{sec:PWHS} This section is devoted to the study of piecewise smooth holomorphic systems, \begin{equation}\label{ch4:eq1} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{+}=f^{+}(z)=u_1+iv_1, \text{ when } \operatorname{Re}(z)> 0,\\[5pt] \dot{z}^{-}=f^{-}(z)=u_2+iv_2,\text{ when }\operatorname{Re}(z)<0, \end{array} \right. \end{aligned} \end{equation} where $z=x+iy$ and $f^{\pm}(z)$ are holomorphic functions.
The straight line $\Sigma=\{\operatorname{Re}(z)=0\}$ divides the plane into the two half-planes $\Sigma^\pm$ given by $\{z :\operatorname{Re}(z)> 0\}$ and $\{z :\operatorname{Re}(z)< 0\}$, respectively. The trajectories on $\Sigma$ are defined following the Filippov convention. \begin{itemize} \item[(i)] $\Sigma^w =\{z\in\Sigma:u_1u_2 >0 \}$ is the \textit{sewing} region; \smallskip \item[(ii)] $\Sigma^s=\{z\in\Sigma:u_1u_2 <0 \}$ is the \textit{sliding} region; \smallskip \item[(iii)] $\Sigma^t=\{z\in\Sigma:u_1u_2=0 \}$ is the \textit{tangency} region. \end{itemize} We say that $p\in\Sigma^{s}$ is an \textit{attracting sliding point}, and write $p\in\Sigma^{s}_s$, if $u_1<0$ and $u_2>0$. We say that $p\in\Sigma^{s}$ is a \textit{repelling sliding point}, and write $p\in\Sigma^{s}_u$, if $u_1>0$ and $u_2<0$. The orbits of the PWHS through $\Sigma^w$ are naturally concatenated. The orbits through $\Sigma^s$ follow the flow of the \emph{sliding vector field} $F^{\Sigma}$, which is the linear convex combination of $(u_1,v_1)$ and $(u_2,v_2)$ tangent to $\Sigma$: \begin{equation}\label{GeralSVF} F^{\Sigma} = \Big(0,\dfrac{u_1v_2-u_2v_1}{u_1-u_2}\Big). \end{equation} \subsection{Phase portrait of the PWHS} To study the phase portraits, we shall combine the items of Proposition \ref{GGJ}.\\ \noindent\textbf{Case 1.} We take $\dot{z}^{+}=f'(p)(z-z_0)$, where $f'(p)=a+ib$ and $z_0=x_0+iy_0$. The PWHS is \begin{equation}\label{case2a} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=1,\text{ when } \operatorname{Re}(z)<0, \\[5pt] \dot{z}^{+}=f'(p)(z-z_0),\text{ when } \operatorname{Re}(z)>0. \end{array} \right.
\end{aligned} \end{equation} In Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(1,0),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)), \text{ when } x>0. \end{array} \right. \end{aligned} \end{equation} As $u_1u_2=-ax_0-b(y-y_0)$ in $\Sigma$, we get the following table: \begin{equation*}\label{table_case2} \begin{array}{|| c| c|c| c | c | c ||} \hline a&b&x_0&\Sigma^w &\Sigma^{s}_s & \Sigma^t \\ \hline\hline +&0&0 & & & \mathbb{R}\\ \hline +&0&+ & &\mathbb{R} & \\ \hline +&0&- &\mathbb{R} & & \\ \hline -&0&0 & & & \mathbb{R} \\ \hline -&0&+ &\mathbb{R} & & \\ \hline -&0&- & &\mathbb{R} &\\ \hline \mathbb{R}&+&\mathbb{R}&(-\infty,y_0-\frac{a}{b}x_0) & (y_0-\frac{a}{b}x_0,+\infty) &y_0-\frac{a}{b}x_0\\ \hline \mathbb{R}&-&\mathbb{R}&(y_0-\frac{a}{b}x_0,+\infty) & (-\infty,y_0-\frac{a}{b}x_0)& y_0-\frac{a}{b}x_0 \\ \hline \end{array} \end{equation*} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.4]{case2_e.pdf} \put(23,86){$\Sigma^-$} \put(64,86){$\Sigma^+$} \put(102,82){$\Sigma^s_s$} \put(102,71){$\Sigma^w$} \put(102,61){$\Sigma^t$} \put(43,-5){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{case2a}, with $z_0=2+2i,$ $a=1$ and $b=1$.} \label{caso2_e} \end{center} \end{figure} \noindent\textbf{Case 2.} Now $\dot{z}^{-}$ and $\dot{z}^{+}$ are given by $(a+ib)(z-z_0)$ and $(c+id)(z-z_0)$, respectively. The PWHS is \begin{equation}\label{case3} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=(a+ib)(z-z_0),\text{ when } \operatorname{Re}(z)<0, \\[5pt] \dot{z}^{+}=(c+id)(z-z_0),\text{ when } \operatorname{Re}(z)>0. \end{array} \right.
\end{aligned} \end{equation} In Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=(c(x-x_0)-d(y-y_0),d(x-x_0)+c(y-y_0)), \text{ when } x>0. \end{array} \right. \end{aligned} \end{equation} As $u_1u_2=d(y-y_0)(ax_0+b(y-y_0))$ in $\Sigma$ (note that $c=0$ in all the cases considered below), we get the following table: \begin{equation*}\label{table_case2} \begin{array}{|| c |c |c | c|c| c | c | c | c||} \hline a&b&c&d&x_0&\Sigma^w &\Sigma^{s}_s & \Sigma^{s}_u & \Sigma^t \\ \hline\hline 0&+&0&+&\mathbb{R} &\mathbb{R}\setminus\{y_0\} && & y_0\\ \hline 0&+&0&-&\mathbb{R} & & (-\infty,y_0)& (y_0,+\infty) & y_0 \\ \hline +&+&0&-&0& &(-\infty,y_0) & (y_0,+\infty) &y_0\\ \hline +&+&0&-&+& (y_0-\frac{a}{b}x_0,y_0) & (-\infty,y_0-\frac{a}{b}x_0) & (y_0,+\infty) &y_0;y_0-\frac{a}{b}x_0\\ \hline +&+&0&-&-&(y_0,y_0-\frac{a}{b}x_0) & (-\infty,y_0) & (y_0-\frac{a}{b}x_0,+\infty)&y_0;y_0-\frac{a}{b}x_0 \\ \hline \end{array} \end{equation*} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.4]{case3_b.pdf} \put(23,86){$\Sigma^-$} \put(64,86){$\Sigma^+$} \put(102,81.5){$\Sigma^{s}_s$} \put(102,70.5){$\Sigma^{s}_u$} \put(102,60.5){$\Sigma^t$} \put(43,-5){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{case3}, with $z_0=1-2i$, $b>0,$ and $d<0$.} \label{caso3_b} \end{center} \end{figure} \noindent\textbf{Case 3.} Here, we consider $\dot{z}^{-}=f'(p)(z-z_0)$ and $\dot{z}^{+}=(z-z_0)^{n}$, where $n=2,$ $f'(p)=a+ib$, and $z_0=x_0+iy_0$.
The PWHS is \begin{equation}\label{case4} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=(a+ib)(z-z_0),\text{ when } \operatorname{Re}(z)<0, \\[5pt] \dot{z}^{+}=(z-z_0)^2,\text{ when } \operatorname{Re}(z)>0. \end{array} \right. \end{aligned} \end{equation} In Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=((x-x_0)^{2}-(y-y_0)^{2},2(x-x_0)(y-y_0)), \text{ when } x> 0. \end{array} \right. \end{aligned} \end{equation} As $u_1u_2=(x_0^2-(y-y_0)^2)(-ax_0-b(y-y_0))$ in $\Sigma$, we get the following table: \begin{equation*}\label{table_case3} \begin{array}{|| c |c| c | c | c ||} \hline b& x_0&\Sigma^w &\Sigma^{s}_s & \Sigma^{s}_u\\ \hline\hline +&0 & (y_0,+\infty) &(-\infty,y_0) & \\ \hline +&+ &(y_{min},y_{max})\cup & (-\infty,y_{min})& (y_{max},y_0+x_0)\\ & &(y_0+x_0,\infty) & & \\ \hline +&-& (y_0+x_0,y_{min})\cup & (-\infty,y_0+x_0)\cup & (y_0-\frac{a}{b}x_0,y_0-x_0)\\ && (y_{max},+\infty) &(y_0-x_0,y_0-\frac{a}{b}x_0) & \\ \hline -&0& (-\infty,y_0) & (y_0,+\infty) & \\ \hline -&+&(-\infty,y_{min})\cup & (y_0-\frac{a}{b}x_0,y_0-x_0)\cup & (y_0-x_0,y_0-\frac{a}{b}x_0) \\ &&(y_{max},y_0+x_0) & (y_0+x_0,\infty) & \\ \hline -&-& (-\infty,y_0+x_0)\cup & (y_{max},+\infty)& (y_0+x_0,y_{min}) \\ && (y_{min},y_{max}) & & \\ \hline 0&\operatorname{sgn}(x_0)=\operatorname{sgn}(a)&\mathbb{R}\setminus[y^0_{min},y^0_{max}] & & (y^0_{min},y^0_{max}) \\ \hline 0&\operatorname{sgn}(x_0)\neq \operatorname{sgn}(a)& (y^0_{min},y^0_{max}) & \mathbb{R}\setminus[y^0_{min},y^0_{max}] & \\ \hline \end{array} \end{equation*} where $y_{min}:=\min\{y_0-\frac{a}{b}x_0,y_0-x_0\},$
$y_{max}:=\max\{y_0-\frac{a}{b}x_0,y_0-x_0\},$ $y^0_{min}:=\min\{y_0+x_0,y_0-x_0\},$ $y^0_{max}:=\max\{y_0+x_0,y_0-x_0\}.$\\ \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.3]{case4a.pdf} \put(7,31){$\Sigma^-$} \put(22,31){$\Sigma^+$} \put(39,31){$\Sigma^-$} \put(53,31){$\Sigma^+$} \put(71,31){$\Sigma^-$} \put(85,31){$\Sigma^+$} \put(101,29){$\Sigma^{s}_s$} \put(101,25){$\Sigma^{s}_u$} \put(101,21.5){$\Sigma^w$} \put(101,18){$\Sigma^t$} \put(11,-2){$x_0<0$} \put(43,-2){$x_0=0$} \put(76,-2){$x_0>0$} \end{overpic} \caption{Phase portrait of PWHS \eqref{case4}, with $z_0=x_0+i,$ $a=1,$ and $b=2$.} \label{4_a} \end{center} \end{figure} \noindent\textbf{Case 4.} Now, we consider $\dot{z}^{-}=f'(p)(z-z_0)$ and $\dot{z}^{+}=\frac{1}{(z-z_0)^{n}}$, with $n=1,$ $f'(p)=a+ib$, $a,b>0,$ and $z_0=x_0+iy_0$. The PWHS is \begin{equation}\label{caso5} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=(a+ib)(z-z_0),\text{ when } x<0, \\[5pt] \dot{z}^{+}=\frac{1}{z-z_0}, \text{ when } x> 0. \end{array} \right. \end{aligned} \end{equation} Writing in Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=\left(\frac{x-x_0}{(x-x_0)^{2}+(y-y_0)^{2}},-\frac{y-y_0}{(x-x_0)^{2}+(y-y_0)^{2}}\right), \text{ when } x> 0. \end{array} \right.
\end{aligned} \end{equation} As $u_1u_2=\frac{x_0(ax_0+b(y-y_0))}{x_0^{2}+(y-y_0)^{2}}$ in $\Sigma,$ we get the following table: \begin{equation*}\label{table_case5} \begin{array}{||c|c|c|c|c|c||} \hline b& x_0&\Sigma^w &\Sigma^{s}_s & \Sigma^{s}_u & \Sigma^t \\ \hline\hline \mathbb{R}&0 & & & & \mathbb{R}\setminus\{y_0\}\\ \hline +&+ &(y_0-\frac{a}{b}x_0,+\infty) & (-\infty,y_0-\frac{a}{b}x_0)& & y_0-\frac{a}{b}x_0 \\ \hline +&-& (-\infty,y_0-\frac{a}{b}x_0) & & (y_0-\frac{a}{b}x_0,+\infty)&y_0-\frac{a}{b}x_0\\ \hline -&+& (-\infty,y_0-\frac{a}{b}x_0) & (y_0-\frac{a}{b}x_0,+\infty) & &y_0-\frac{a}{b}x_0\\ \hline -&-& (y_0-\frac{a}{b}x_0,+\infty) & &(-\infty,y_0-\frac{a}{b}x_0) &y_0-\frac{a}{b}x_0\\ \hline \end{array} \end{equation*} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.3]{caso_5_.pdf} \put(7,31){$\Sigma^-$} \put(22,31){$\Sigma^+$} \put(39,31){$\Sigma^-$} \put(53,31){$\Sigma^+$} \put(71,31){$\Sigma^-$} \put(85,31){$\Sigma^+$} \put(101,28){$\Sigma^{s}_s$} \put(101,24){$\Sigma^{s}_u$} \put(101,20){$\Sigma^w$} \put(101,15){$\Sigma^t$} \put(11,-2){$x_0<0$} \put(43,-2){$x_0=0$} \put(76,-2){$x_0>0$} \end{overpic} \caption{Phase portrait of PWHS \eqref{caso5}, with $z_0=x_0-i,$ $a=1,$ and $b=1$.} \label{case5} \end{center} \end{figure} \noindent\textbf{Case 5.} Now, we consider $\dot{z}^{-}=f'(p)(z-z_0)$ and $\dot{z}^{+}=\frac{\gamma(z-z_0)^n}{1+(z-z_0)^{n-1}}$, with $n=2,$ $\gamma=1$, $f'(p)=a+ib$, and $z_0=iy_0$. The PWHS is \begin{equation}\label{case6} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=(a+ib)(z-iy_0),\text{ when } x<0, \\[5pt] \dot{z}^{+}=\frac{(z-iy_0)^2}{1+(z-iy_0)}, \text{ when } x> 0. \end{array} \right.
\end{aligned} \end{equation} Writing in Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(ax-b(y-y_0),bx+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=\left(\frac{x^2+x^3-(y-y_0)^2+x(y-y_0)^2}{(x+1)^2+(y-y_0)^{2}},\frac{(2x+x^2+(y-y_0)^2)(y-y_0)}{(x+1)^2+(y-y_0)^{2}}\right), \text{ when } x> 0. \end{array} \right. \end{aligned} \end{equation} As $u_1u_2=\frac{b(y-y_0)^3}{1+(y-y_0)^{2}}$ in $\Sigma,$ we get the following table: \begin{equation*}\label{table_case6} \begin{array}{||c|c|c|c||} \hline b&\Sigma^w &\Sigma^{s}_s&\Sigma^t \\ \hline\hline +& (y_0,+\infty) &(-\infty,y_0)& y_0 \\ \hline -&(-\infty,y_0) & (y_0,+\infty)& y_0 \\ \hline \end{array} \end{equation*} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.36]{case6.pdf} \put(23,88){$\Sigma^-$} \put(64,88){$\Sigma^+$} \put(102,81){$\Sigma^s_s$} \put(102,71){$\Sigma^w$} \put(102,60){$\Sigma^t$} \put(43,-6){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{case6}, with $z_0=i,$ $a=1,$ and $b=1$.} \label{caso6} \end{center} \end{figure} Now, consider $f^+$ and $f^-$ as vector fields in the plane (see Remark \ref{pcomp}), i.e. $f^+=(u_1,v_1)$ and $f^-=(u_2,v_2)$.
Then, we can write the sewing region, the attracting sliding region, and the repelling sliding region as follows: \[\begin{array}{rcl} \Sigma^w&=&\{p\in \Sigma:\, f^+h(p)\cdot f^-h(p)>0\},\\ \Sigma^{s}_s&=&\{p\in \Sigma:\, f^+h(p)<0,\, f^-h(p)>0\}, \,\text{and}\\ \Sigma^{s}_u&=&\{p\in \Sigma:\, f^+h(p)>0,\, f^-h(p)<0\},\\ \end{array} \] where $h(x,y)=x,$ $\Sigma=\{(x,y)\,|\,x=0\}=h^{-1}(0)$, and $f^{\pm}h(p)=\langle\nabla h(p),f^{\pm}(p)\rangle$ denotes the Lie derivative of $h$ in the direction of the vector fields $f^{\pm}.$ Recall that if $f_\pm^h(t):= h\circ \varphi_{f^\pm}(t,p),$ where $t\mapsto \varphi_{f^\pm}(t,p)$ is the trajectory of $f^\pm$ starting at $p,$ then $(f_\pm^h)'(0)=\langle\nabla h(p),f^{\pm}(p)\rangle=f^\pm h(p).$ \begin{remark}\label{pcomp} We emphasize that there is a change of coordinates $\rho:\mathbb{C}\to\mathbb{R}^2$ between vector fields in the complex plane and vector fields in the real plane (see Figure \ref{fig:conm}). \end{remark} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.2]{conm.pdf} \put(-2,90){$\mathbb{C}$} \put(-2,-1){$\mathbb{R}^2$} \put(90,90){$\mathbb{C}$} \put(90,-1){$\mathbb{R}^2$} \put(48,95){$F$} \put(48,-11){$\widetilde{F}$} \put(-9,50){$\rho$} \put(101,50){$\rho$} \end{overpic} \caption{Change of coordinates $\rho:\mathbb{C}\to\mathbb{R}^2$ associated with the vector fields $F$ and $\widetilde{F}$.} \label{fig:conm} \end{center} \end{figure} \begin{definition} We say that $\widetilde{F}$ and $\widetilde{G}$ are $0$-conformally conjugated as functions from $\mathbb{R}^2$ to $\mathbb{R}^2$ if $F$ and $G$ are $0$-conformally conjugated as functions from $\mathbb{C}$ to $\mathbb{C}$.
\end{definition} \begin{theorem}\label{foldtofold} Suppose that $f^\pm$ and $g^\pm$ are holomorphic and $0$-conformally conjugate with conformal map $\Phi$. Then $\Phi$ preserves sewing and sliding regions, i.e.: \begin{itemize} \item[(a)] If $p\in\Sigma^w,$ then $\Phi(p)\in\widetilde{\Sigma}^w:=\Phi(\Sigma^w).$ \item[(b)] If $p\in\Sigma^{s}_s,$ then $\Phi(p)\in\widetilde{\Sigma}^s_a:=\Phi(\Sigma^{s}_s).$ \item[(c)] If $p\in\Sigma^{s}_u,$ then $\Phi(p)\in\widetilde{\Sigma}^s_r:=\Phi(\Sigma^{s}_u).$ \end{itemize} \end{theorem} \begin{proof} We prove item $(a)$; items $(b)$ and $(c)$ are verified analogously. Consider $p\in\Sigma^w,$ so that $f^+h(p)\cdot f^-h(p)=f^+_1(p)\cdot f^-_1(p)>0.$ Now, let $\widetilde{h}$ be a function such that $\widetilde{\Sigma}=\widetilde{h}^{-1}(0).$ Notice that $\widetilde{\Sigma}=\Phi(\Sigma)$ implies that $h(x,y)=\widetilde{h}\circ\Phi(x,y),$ for all $(x,y)\in\Sigma.$ Moreover, \begin{equation}\label{hwideh_} \begin{array}{rcl} \nabla h(p)&=&\nabla\widetilde{h}(\Phi(p))D\Phi(p).\\ \end{array} \end{equation} To prove that $\Phi(p)\in\widetilde{\Sigma}^w$ it is enough to verify that $g^+\widetilde{h}(p)\cdot g^-\widetilde{h}(p)>0.$ Indeed, consider $f_{\pm}^{\widetilde{h}}(t)=\widetilde{h}\circ\varphi_{g^\pm}(t,\Phi(p)).$ Since $f^\pm$ and $g^\pm$ are $0$-conformally conjugate, we have $\varphi_{g^\pm}(t,\Phi(x,y))=\Phi\circ\varphi_{f^\pm}(t,x,y),$ for all $(x,y)\in D(0,R)\setminus\{0\}.$ Hence, $f_{\pm}^{\widetilde{h}}(t)=\widetilde{h}\circ\Phi\circ\varphi_{f^\pm}(t,p)$ and, using equation \eqref{hwideh_}, we get $(f_{\pm}^{\widetilde{h}})'(0)=f_1^\pm(p)$. Therefore, $g^+\widetilde{h}(p)\cdot g^-\widetilde{h}(p)=f_1^+(p)\cdot f_1^-(p)>0$.
Consequently, we get item $(a).$ \end{proof} Using the same ideas as in the proof of Theorem \ref{foldtofold}, we obtain the following result. \begin{corollary} Suppose that $f^\pm$ and $g^\pm$ are holomorphic and $0$-conformally conjugate with conformal maps $\Phi^\pm$. If $\Phi^\pm(\Sigma)=\widetilde{\Sigma}$, then $$\Phi=\Big(\frac{1+\operatorname{sgn} (x)}{2} \Big) \Phi^+ +\Big(\frac{1-\operatorname{sgn} (x)}{2}\Big)\Phi^-$$ preserves sewing and sliding regions. \end{corollary} \subsection{Tangential points} In the Filippov context, the notion of {\it $\Sigma$-singular points} comprises the tangential points $\Sigma^t$, constituted by the contact points of $f^+=u_1+iv_1$ and $f^-=u_2+iv_2$ with $\Sigma,$ i.e. $\Sigma^t=\{p\in \Sigma:\, u_1\cdot u_2 = 0\},$ where $\Sigma=\{\Re(z)=0\}.$ Here, we are interested in contact points of finite degeneracy. For that reason, consider the following definition. \begin{definition}\label{rts} Consider the PWHS given by \eqref{ch4:eq1}, $k\in\mathbb{N},$ and $q\in\Sigma^t.$ \begin{itemize} \item $q$ is called a contact of multiplicity $k$ between $f^+$ and $\Sigma$ when $u_1(q)=0,$ $v_1(q)\neq 0,$ $\frac{\partial^{i-2}}{\partial y^{i-2}}\left(\frac{\partial v_1}{\partial x}\right)(q)= 0,$ for each $i=2,\cdots, k-1,$ and $\frac{\partial^{k-2}}{\partial y^{k-2}}\left(\frac{\partial v_1}{\partial x}\right)(q)\neq 0$. Moreover, if $k$ is even, then $q$ is visible when $v_1^{k-1}(q)\frac{\partial^{k-2}}{\partial y^{k-2}}\left(\frac{\partial v_1}{\partial x}\right)(q)<0$ and invisible otherwise.
\item $q$ is called a contact of multiplicity $k$ between $f^-$ and $\Sigma$ when $u_2(q)=0,$ $v_2(q)\neq 0,$ $\frac{\partial^{i-2}}{\partial y^{i-2}}\left(\frac{\partial v_2}{\partial x}\right)(q)= 0,$ for each $i=2,\cdots, k-1,$ and $\frac{\partial^{k-2}}{\partial y^{k-2}}\left(\frac{\partial v_2}{\partial x}\right)(q)\neq 0$. Moreover, if $k$ is even, then $q$ is visible when $v_2^{k-1}(q)\frac{\partial^{k-2}}{\partial y^{k-2}}\left(\frac{\partial v_2}{\partial x}\right)(q)>0$ and invisible otherwise. \end{itemize} \end{definition} \begin{definition} Consider the PWHS given by \eqref{ch4:eq1} and $q\in\Sigma^t$: \begin{itemize} \item If $q$ is a visible (resp. invisible) contact of multiplicity $k$ between $f^+$ and $\Sigma$ and $q$ is a visible (resp. invisible) contact of multiplicity $k$ between $f^-$ and $\Sigma,$ then it is called a visible-visible (resp. invisible-invisible) tangential singularity of multiplicity $k.$ \item If $q$ is a visible (resp. invisible) contact of multiplicity $k$ between $f^+$ and $\Sigma$ and $q$ is an invisible (resp. visible) contact of multiplicity $k$ between $f^-$ and $\Sigma,$ then it is called an invisible-visible (resp. visible-invisible) tangential singularity of multiplicity $k.$ \end{itemize} \end{definition} In what follows, we characterize the contacts of multiplicity $k=2$ of the holomorphic functions $f^\pm$ with $\Sigma$. These contacts are known as fold singularities. \begin{proposition}\label{rfs} Let $F=u+iv$ be a holomorphic function defined in some punctured neighborhood of $z_0\in\mathbb{C}$ and $z=x+iy$. Then $q$ is a fold singularity of $F$ with respect to $\Sigma=\{\Re(z)=0\}$ if, and only if, $u(q)=0$, $v(q)\neq 0,$ and $\Im(F'(q))\neq 0$.
\end{proposition} \begin{proof} Since $F$ is a holomorphic function at $q$, we have $F'(q)=\frac{\partial u}{\partial x}(q)+i\frac{\partial v}{\partial x}(q).$ Thus, $\Im(F'(q))=\frac{\partial v}{\partial x}(q).$ The result follows from Definition \ref{rts} for $k=2.$ \end{proof} Since conformal maps preserve angles, tangential contacts are also preserved by these maps. Furthermore, conformal maps locally preserve the geometry in the vicinity of a tangential contact, so if a tangential contact has even (resp. odd) multiplicity, then this parity is preserved by such maps. Now, consider $f^+$ and $f^-$ as vector fields in the plane (see Remark \ref{pcomp}), i.e. $f^+=(u_1,v_1)$ and $f^-=(u_2,v_2)$. Then, we can write the set of tangential points as $\Sigma^t=\{p\in \Sigma:\, f^+h(p)\cdot f^-h(p) = 0\},$ where $h(x,y)=x,$ $\Sigma=\{(x,y)\,|\,x=0\}=h^{-1}(0)$, and $f^{\pm}h(p)=\langle\nabla h(p),f^{\pm}(p)\rangle$ denotes the Lie derivative of $h$ in the direction of the vector fields $f^{\pm}.$ In addition, $(f^{\pm})^ih(p)=f^{\pm}((f^{\pm})^{i-1}h)(p)$ for $i>1.$ Recall that $p$ is a {\it contact of order $k-1$} (or multiplicity $k$) between $f^\pm$ and $\Sigma$ if $0$ is a root of multiplicity $k$ of $f(t):= h\circ \varphi_{f^\pm}(t,p),$ where $t\mapsto \varphi_{f^\pm}(t,p)$ is the trajectory of $f^\pm$ starting at $p.$ Equivalently, $f^\pm h(p) = (f^\pm)^2h(p) = \ldots = (f^\pm)^{k-1}h(p) =0,\text{ and } (f^\pm)^{k} h(p)\neq 0.$ In addition, an even multiplicity contact, say $2k,$ is called {\it visible} for $f^+$ (resp. $f^-$) when $(f^{+})^{2k}h(p)>0$ (resp. $(f^{-})^{2k}h(p)<0$). Otherwise, it is called {\it invisible}. Recall that a {\it regular-tangential singularity of multiplicity $2k$} is formed by a contact of multiplicity $2k$ of $f^+$ and a regular point of $f^-,$ or vice versa.
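The fold criterion of Proposition \ref{rfs} is easy to check numerically. The following minimal Python sketch (the helper \texttt{is\_fold} and the sample field are ours, not from the text) tests $u(q)=0$, $v(q)\neq 0$, and $\Im(F'(q))\neq 0$ at points of $\Sigma=\{\Re(z)=0\}$, approximating $F'(q)$ by a central difference:

```python
def is_fold(F, q, h=1e-6, tol=1e-8):
    """Numerical test of the fold criterion of Proposition `rfs`:
    u(q) = 0, v(q) != 0, Im(F'(q)) != 0, for q on Sigma = {Re(z) = 0}.
    F'(q) is approximated by a central difference (F holomorphic)."""
    Fq = F(q)
    dF = (F(q + h) - F(q - h)) / (2 * h)
    return abs(Fq.real) < tol and abs(Fq.imag) > tol and abs(dF.imag) > tol

# Sample holomorphic field F(z) = (z - z0)^2 (Case 3 with n = 2), z0 = 1 + i.
z0 = 1 + 1j
F = lambda z: (z - z0) ** 2

# On Sigma, u(iy) = Re(F(iy)) = 1 - (y - 1)^2 vanishes exactly at y = 0 and y = 2.
print(is_fold(F, 0j))   # True: fold singularity at the origin
print(is_fold(F, 2j))   # True: fold singularity at q = 2i
print(is_fold(F, 1j))   # False: u(q) = 1 != 0 there
```

This mirrors item (b) of Proposition \ref{Th:folds}: for even $n$, every tangential point of $(z-z_0)^n$ on $\Sigma$ is a fold.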
In the literature, when $k=1$, a regular-tangential singularity of multiplicity 2 is called a regular-fold singularity. \begin{lemma}\label{foldtofold1} Suppose that $G:\mathbb{R}^2\to\mathbb{R}^2$ and $F:\mathbb{R}^2\to\mathbb{R}^2$ are $0$-conformally conjugate $C^2$ maps with conformal map $\Phi$. If $p\in\Sigma$ is a fold singularity associated to $G,$ then $\Phi(p)\in\widetilde{\Sigma}:=\Phi(\Sigma)$ is a fold singularity associated to $F.$ \end{lemma} \begin{proof} Let $p\in\Sigma$ be a fold singularity associated to $G$; then $0$ is a root of multiplicity 2 of $g(t)=h\circ\varphi_G(t,p).$ Thus, $g(0)=0,$ $g'(0)=\nabla h(p)G(p)=0$, and $g''(0)=D^2h_p\left(G(p),G(p)\right)+\nabla h(p)\frac{\partial^2\varphi_G}{\partial t^2}(0,p)\neq 0,$ where we have used the chain rule and that $\frac{\partial\varphi_G}{\partial t}(0,p)=G(p).$ Now, consider the function $\widetilde{h}$ such that $\widetilde{\Sigma}=\widetilde{h}^{-1}(0).$ Notice that $\widetilde{\Sigma}=\Phi(\Sigma)$ implies that $h(x,y)=\widetilde{h}\circ\Phi(x,y),$ for all $(x,y)\in\Sigma.$ Moreover, \begin{equation}\label{hwideh} \begin{array}{rcl} \nabla h(p)&=&\nabla\widetilde{h}(\Phi(p))D\Phi(p),\\ D^2h(p)&=&D^2\widetilde{h}_{\Phi(p)}(D\Phi(p),D\Phi(p))+\nabla\widetilde{h}(\Phi(p))D^2\Phi_p.\\ \end{array} \end{equation} To prove that $\Phi(p)$ is a fold singularity of $F$ it is enough to verify that $0$ is a root of multiplicity 2 of $f(t)=\widetilde{h}\circ\varphi_F(t,\Phi(p)).$ Indeed, since $G$ and $F$ are $0$-conformally conjugate, we have $\varphi_F(t,\Phi(x,y))=\Phi\circ\varphi_G(t,x,y),$ for all $(x,y)\in D(0,R)\setminus\{0\}.$ Hence, using the equations of \eqref{hwideh}, we get $f(0)=\widetilde{h}\circ\varphi_F(0,\Phi(p))= h(p)=0$, $f'(0)=\nabla\widetilde{h}(\Phi(p))D\Phi(p)G(p)=\nabla h(p)G(p)=0$, and $$f''(0)=D^2h_p(G(p),G(p))+\nabla h(p)\frac{\partial^2\varphi_G}{\partial t^2}(0,p)\neq 0.$$ Therefore, $0$ is a root of multiplicity 2, and we can conclude the result. \end{proof} The following result determines the type of contacts of the normal forms given in Proposition \ref{GGJ}. \begin{proposition}\label{Th:folds} Let $G=u+iv$ be a holomorphic function defined in some punctured neighborhood of $z_0$. Consider $z\in \mathbb{C}\setminus\{z_0\}$ and the constants $\gamma\in\mathbb{R}$, $n\in\mathbb{N},$ and $k\in\mathbb{Z}$. \begin{enumerate} \item[(a)] If $G(z)=(a+ib)(z-z_0)$, then $G$ only has fold singularities with respect to $\Sigma$ when $b\neq 0$. \item[(b)] If $G(z)=(z-z_0)^n$ and $n$ is even, then $G$ only has fold singularities with respect to $\Sigma$. \item[(c)] If $G(z)=(z-z_0)^n$ and $n>1$ is odd, then $G$ only has fold singularities with respect to $\Sigma$ when $\frac{(n-1)(2k+1)}{2n} \notin \mathbb{Z}$. \item[(d)] If $G(z)=\frac{1}{(z-z_0)^n}$ and $n$ is even, then $G$ only has fold singularities with respect to $\Sigma$. \item[(e)] If $G(z)=\frac{1}{(z-z_0)^n}$ and $n$ is odd, then $G$ only has fold singularities with respect to $\Sigma$ when $\frac{(n+1)(2k+1)}{2n} \notin \mathbb{Z}$. \item[(f)] If $G(z)=\frac{\gamma(z-z_0)^n}{1+(z-z_0)^{n-1}}$, then $G$ only has tangential singularities of even multiplicity with respect to $\Sigma$. \end{enumerate} \end{proposition} \begin{proof} First, consider $G(z)=(a+ib)(z-z_0)$, whose polar form is given by $$G(z)=|(a+ib)(z-z_0)|\cos(\theta)+i|(a+ib)(z-z_0)|\sin(\theta).$$ Hence, $u(z)=|(a+ib)(z-z_0)|\cos(\theta)$ and $v(z)=|(a+ib)(z-z_0)|\sin(\theta).$ Notice that $u=0$ if, and only if, $\theta=\frac{(2k+1)\pi}{2},$ for all $k\in\mathbb{Z}$. Therefore, $v=|(a+ib)(z-z_0)|(-1)^k\neq 0$ when $\theta=\frac{(2k+1)\pi}{2},$ for all $k\in\mathbb{Z}$.
In addition, the derivative of the complex function $(a+ib)(z-z_0)$ is $a+ib$; consequently, $\Im(G'(z))=b$ when $\theta=\frac{(2k+1)\pi}{2},$ for all $k\in\mathbb{Z}$. By Proposition \ref{rfs}, we can conclude item $(a)$. Now, consider $G(z)=(z-z_0)^n$. Writing $G$ in its polar form, we have that $G(z)=|z-z_0|^n\cos(n\theta)+i|z-z_0|^n\sin(n\theta).$ Thus, $u(z)=|z-z_0|^n\cos(n\theta)$ and $v(z)=|z-z_0|^n\sin(n\theta).$ Notice that $u=0$ if, and only if, $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$. Therefore, $v=|z-z_0|^n(-1)^k\neq 0$ when $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$. Moreover, since the derivative of the complex function $(z-z_0)^n$ is $n(z-z_0)^{n-1}$, we have that $$\Im(G'(z))=n|z-z_0|^{n-1}\sin\left(\frac{(n-1)(2k+1)\pi}{2n}\right),$$ when $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$. Moreover, if $n$ is even, then $\frac{(n-1)(2k+1)}{2n}\notin\mathbb{Z},$ i.e. $\Im(G'(q))\neq 0.$ In addition, if $n$ is odd, then $\Im(G'(q))\neq 0$ if, and only if, $\frac{(n-1)(2k+1)}{2n}\notin\mathbb{Z}.$ By Proposition \ref{rfs}, we get items $(b)$ and $(c)$. On the other hand, consider $G(z)=\frac{1}{(z-z_0)^n}$. Writing $G$ in its polar form, we get $$G(z)=|z-z_0|^{-n}\cos(n\theta)-i|z-z_0|^{-n}\sin(n\theta).$$ Thus, $u(z)=|z-z_0|^{-n}\cos(n\theta)$ and $v(z)=-|z-z_0|^{-n}\sin(n\theta)$. Notice that $u=0$ if, and only if, $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$. Therefore, $v=-|z-z_0|^{-n}(-1)^k\neq 0$ when $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$.
Moreover, since the derivative of the complex function $(z-z_0)^{-n}$ is $-n(z-z_0)^{-n-1}$, we have that $$\Im(G'(z))=n|z-z_0|^{-n-1}\sin\left(\frac{(n+1)(2k+1)\pi}{2n}\right),$$ when $\theta=\frac{(2k+1)\pi}{2n},$ for all $k\in\mathbb{Z}$. Thus, if $n$ is even, then $\frac{(n+1)(2k+1)}{2n}\notin\mathbb{Z},$ i.e. $\Im(G'(q))\neq 0.$ In addition, if $n$ is odd, then $\Im(G'(q))\neq 0$ if, and only if, $\frac{(n+1)(2k+1)}{2n}\notin\mathbb{Z}.$ By Proposition \ref{rfs}, we get items $(d)$ and $(e)$. Finally, consider $G(z)=\frac{\gamma(z-z_0)^n}{1+(z-z_0)^{n-1}}$. Let $w=\Phi(z)=(z-z_0)^{1-n}$ be a conformal map. Thus, we obtain the vector field $F(w)=\frac{\gamma(1-n)}{1+\frac{1}{w}}.$ Writing $F$ in its polar form, we get $$F(w)=\frac{\gamma(1-n)(1+|w|^{-1}\cos(\theta))}{1+2|w|^{-1}\cos(\theta)+|w|^{-2}}+i\frac{\gamma(1-n)|w|^{-1}\sin(\theta)}{1+2|w|^{-1}\cos(\theta)+|w|^{-2}}.$$ Hence, $$\begin{array}{rcl} u(w)&=&\dfrac{\gamma(1-n)(1+|w|^{-1}\cos(\theta))}{1+2|w|^{-1}\cos(\theta)+|w|^{-2}},\\ v(w)&=&\dfrac{\gamma(1-n)|w|^{-1}\sin(\theta)}{1+2|w|^{-1}\cos(\theta)+|w|^{-2}}.\\ \end{array}$$ Notice that $u=0$ if, and only if, $1+|w|^{-1}\cos(\theta)=0$, which implies that $\cos(\theta)\neq 0.$ Hence, $v(w)=\frac{\gamma(n-1)\cos(\theta)}{\sin(\theta)}\neq 0$ when $1+|w|^{-1}\cos(\theta)=0$. Moreover, since the derivative of the complex function $\frac{\gamma(1-n)}{1+\frac{1}{w}}$ is $\frac{\gamma(1-n)}{(1+w)^2}$, we have that $$\Im(F'(w))=\frac{2\gamma(1-n)\cos(\theta)}{\sin(\theta)}\neq 0,$$ when $1+|w|^{-1}\cos(\theta)=0$. By Proposition \ref{rfs}, we get item $(f)$.
\end{proof} Now, we establish the main theorem of this section, which is a direct consequence of Propositions \ref{GGJ} and \ref{Th:folds} and Lemma \ref{foldtofold1}. \begin{theorem}\label{car_nf} Let $F$ be a holomorphic function defined in some punctured neighborhood of $z_0\in\mathbb{C}.$ Consider $z\in \mathbb{C}\setminus\{z_0\}$ and the constants $n\in\mathbb{N},$ $k\in\mathbb{Z},$ and $\gamma\in\mathbb{R}$. \begin{itemize} \item[(a)] If $F(z_0)=0$ and $\Im(F'(z_0))\neq0,$ then there exists a conformal map $\Phi$ such that $F$ only has fold singularities with respect to $\Phi(\Sigma).$ \item[(b)] If $F(z_0)=0$, $z_0$ is a zero of $F$ of order $n>1$ with $n$ even, and $\operatorname{Res}(1/F,z_0)=0,$ then there exists a conformal map $\Phi$ such that $F$ only has fold singularities with respect to $\Phi(\Sigma)$. \item[(c)] If $F(z_0)=0$, $z_0$ is a zero of $F$ of order $n>1$ with $n$ odd, $\frac{(n-1)(2k+1)}{2n} \notin \mathbb{Z},$ and $\operatorname{Res}(1/F,z_0)=0,$ then there exists a conformal map $\Phi$ such that $F$ only has fold singularities with respect to $\Phi(\Sigma)$. \item[(d)] If $z_0$ is a pole of $F$ of order $n$ with $n$ even, then there exists a conformal map $\Phi$ such that $F$ only has fold singularities with respect to $\Phi(\Sigma)$. \item[(e)] If $z_0$ is a pole of $F$ of order $n$ with $n$ odd and $\frac{(n+1)(2k+1)}{2n} \notin \mathbb{Z},$ then there exists a conformal map $\Phi$ such that $F$ only has fold singularities with respect to $\Phi(\Sigma)$. \item[(f)] If $F(z_0)=0$, $z_0$ is a zero of $F$ of order $n>1$, and $\operatorname{Res}(1/F,z_0)=1/\gamma,$ then there exists a conformal map $\Phi$ such that $F$ only has tangential singularities of even multiplicity with respect to $\Phi(\Sigma)$.
\end{itemize} \end{theorem} Now, we present an example of a holomorphic function defined in some punctured neighborhood of $z_0=0\in\mathbb{C}$, with an essential singularity at $z_0=0,$ which has infinitely many contacts of multiplicity 3. \begin{example} Consider the ODE \begin{equation}\label{eqesse} \dot{z}=z^m\exp\left(\frac{1}{z^n}\right), \end{equation} where $n\geq 1$ and $m\geq n+1$. After a rescaling of time, writing $z=x+iy$, system \eqref{eqesse} becomes a real smooth planar system in a punctured neighborhood of the origin of the form \begin{equation} \begin{aligned} \left\{\begin{array}{l} \dot{x}=P_m(x,y)\cos\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right)-Q_m(x,y)\sin\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right),\\ \dot{y}=P_m(x,y)\sin\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right)+Q_m(x,y)\cos\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right), \end{array} \right. \end{aligned} \end{equation} where $P_m$, $Q_m$, and $R_n$ are given by $$\begin{array}{rcl} P_m(x,y)&=&\displaystyle\sum_{j=0}^{l}\frac{m!}{(m-2j)!(2j)!}x^{m-2j}(-1)^jy^{2j},\\ Q_m(x,y)&=&\displaystyle\sum_{j=1}^{s}\frac{m!}{(m-(2j-1))!(2j-1)!}x^{m-(2j-1)}(-1)^{j-1}y^{2j-1},\\ R_n(x,y)&=&\displaystyle\sum_{j=1}^{s}\frac{n!}{(n-(2j-1))!(2j-1)!}x^{n-(2j-1)}(-1)^{j}y^{2j-1},\end{array}$$ where $l=s=\frac{m}{2}$ when $m$ is even, $l=\frac{m-1}{2}$ and $s=\frac{m+1}{2}$ when $m$ is odd, $s=\frac{n}{2}$ when $n$ is even, and $s=\frac{n+1}{2}$ when $n$ is odd.
Now, consider the functions $$\begin{array}{rcl} u_1(x,y)&=&\displaystyle P_m(x,y)\cos\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right)-Q_m(x,y)\sin\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right),\\ v_1(x,y)&=&\displaystyle P_m(x,y)\sin\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right)+Q_m(x,y)\cos\left(\frac{R_n(x,y)}{(x^2+y^2)^n}\right). \end{array}$$ Notice that if $n$ is odd, $m$ is even, and $k\geq 1$, then $u_1(p_k)=0$ if, and only if, $p_k=\left(0,\sqrt[n]{\frac{2(-1)^{\frac{n+1}{2}}}{(2k+1)\pi}}\right)$. And if $n,m$ are odd and $k\geq 1$, then $u_1(q_k)=0$ if, and only if, $q_k=\left(0,\sqrt[n]{\frac{(-1)^{\frac{n+1}{2}}}{k\pi}}\right)$. Moreover, $$\begin{array}{rcl} v_1(p_k)&=&\left(\sqrt[n]{\frac{2(-1)^{\frac{n+1}{2}}}{(2k+1)\pi}}\right)^m(-1)^{k+\frac{m}{2}}\neq 0,\\ \frac{\partial v_1}{\partial x}(p_k)&=&0,\\ \frac{\partial}{\partial y}\left(\frac{\partial v_1}{\partial x}\right)(p_k)&=&mn(-1)^{\frac{m+n-1}{2}+k}\left(\sqrt[n]{\frac{2(-1)^{\frac{n+1}{2}}}{(2k+1)\pi}}\right)^{m-n-2}\neq 0,\\ \end{array}$$ and $$\begin{array}{rcl} v_1(q_k)&=&\left(\sqrt[n]{\frac{(-1)^{\frac{n+1}{2}}}{k\pi}}\right)^m(-1)^{k+\frac{m-1}{2}}\neq 0,\\ \frac{\partial v_1}{\partial x}(q_k)&=&0,\\ \frac{\partial}{\partial y}\left(\frac{\partial v_1}{\partial x}\right)(q_k)&=&mn(-1)^{\frac{m+n}{2}+1+k}\left(\sqrt[n]{\frac{(-1)^{\frac{n+1}{2}}}{k\pi}}\right)^{m-n-2}\neq 0, \end{array}$$ for all $k\in\mathbb{Z}.$ By Definition \ref{rts}, we conclude that $p_k$ and $q_k$ are contacts of multiplicity 3, for all $k\in\mathbb{Z}$.
\end{example} We conclude this section by presenting some PWHS from Section \ref{sec:PWHS} that exhibit at least one of the singularities described above. \begin{example}\label{ex_reg} We take $\dot{z}^{+}=f'(p)(z-z_0)$, where $f'(p)=a+ib,$ $b\neq 0,$ and $z_0=x_0+iy_0$. The PWHS is given by \begin{equation} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{-}=1,\text{ when } \Re(z)<0, \\[5pt] \dot{z}^{+}=f'(p)(z-z_0),\text{ when } \Re(z)>0. \end{array} \right. \end{aligned} \end{equation} In Cartesian coordinates, we have \begin{equation} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(1,0),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)), \text{ when } x>0. \end{array} \right. \end{aligned} \end{equation} Taking $q=(0,y_0-\frac{a}{b}x_0)$, we have that $u_1(q)=0,$ $v_1(q)=-\frac{(b^2+a^2)}{b}x_0\neq 0$ for $x_0\neq 0,$ $\frac{\partial v_1}{\partial x}(q)=b\neq 0,$ and $f^-(q)=(1,0).$ Thus, by Definition \ref{rts}, we conclude that $q$ is a visible regular-fold singularity when $x_0>0$ and an invisible regular-fold singularity when $x_0<0$ (see Figure \ref{ex1_contact}). \end{example} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.4]{ex1_contact.pdf} \put(25,102){$\Sigma^-$} \put(75,102){$\Sigma^+$} \put(50,-6){$\Sigma$} \end{overpic} \caption{The visible regular-fold singularity $q=(0,1)$, for $a=0$ and $b=1.$} \label{ex1_contact} \end{center} \end{figure} \begin{example} Consider $\dot{z}^{-}$ and $\dot{z}^{+}$ given by $(a+ib)(z-z_0)$ and $id(z-z_0)$, respectively, with $b,d\neq 0$ and $z_0=x_0+iy_0$.
In Cartesian coordinates, we have \begin{equation}\label{ex:vis_inv} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=(-d(y-y_0),d(x-x_0)), \text{ when } x>0. \end{array} \right. \end{aligned} \end{equation} Suppose that $a=0$ and $x_0\neq 0.$ Taking $q=(0,y_0)$, we have that $u_1(q)=0,$ $v_1(q)=-dx_0\neq 0,$ and $\frac{\partial v_1}{\partial x}(q)=d\neq 0.$ In addition, notice that $u_2(q)=0,$ $v_2(q)=-bx_0\neq 0,$ and $\frac{\partial v_2}{\partial x}(q)=b\neq 0.$ Thus, by Definition \ref{rts}, we conclude that $q$ is an invisible-visible fold singularity of \eqref{ex:vis_inv} when $x_0>0$ and a visible-invisible fold singularity of \eqref{ex:vis_inv} when $x_0<0$. Suppose that $a\neq 0$ and $x_0\neq 0.$ \begin{itemize} \item Taking $q=(0,y_0)$, we have that $u_1(q)=0,$ $v_1(q)=-dx_0\neq 0,$ and $\frac{\partial v_1}{\partial x}(q)=d\neq 0.$ In addition, notice that $f^-(q)=(-ax_0,-bx_0)\neq (0,0).$ Thus, by Definition \ref{rts}, we conclude that $q$ is a visible regular-fold singularity of \eqref{ex:vis_inv} when $x_0>0$ and an invisible regular-fold singularity of \eqref{ex:vis_inv} when $x_0<0$. \item Taking $q=(0,y_0-\frac{a}{b}x_0)$, we have that $u_2(q)=0,$ $v_2(q)=-\frac{b^2+a^2}{b}x_0\neq 0,$ and $\frac{\partial v_2}{\partial x}(q)=b\neq 0.$ In addition, notice that $f^+(q)=(\frac{ad}{b}x_0,-dx_0)\neq (0,0).$ Thus, by Definition \ref{rts}, we conclude that $q$ is an invisible regular-fold singularity of \eqref{ex:vis_inv} when $x_0>0$ and a visible regular-fold singularity of \eqref{ex:vis_inv} when $x_0<0$ (see Figure \ref{ex2_contact}).
\end{itemize} \end{example} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.4]{ex2_contact.pdf} \put(25,102){$\Sigma^-$} \put(75,102){$\Sigma^+$} \put(50,-6){$\Sigma$} \end{overpic} \caption{The invisible regular-fold singularity $q_1=(0,1)$ and the visible regular-fold singularity $q_2=(0,2)$ for $a=1,$ $b=2,$ and $d=1.$} \label{ex2_contact} \end{center} \end{figure} \begin{example} Consider $\dot{z}^{-}=f'(p)(z-z_0)$ and $\dot{z}^{+}=(z-z_0)^{n}$, with $n=2$ and $z_0=x_0+iy_0$. In Cartesian coordinates, we have \begin{equation}\label{ex:vis_inv_case4} \begin{aligned} \left\{\begin{array}{l} (\dot{x}^{-},\dot{y}^{-})=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0)),\text{ when } x<0, \\[5pt] (\dot{x}^{+},\dot{y}^{+})=((x-x_0)^{2}-(y-y_0)^{2},2(x-x_0)(y-y_0)), \text{ when } x> 0. \end{array} \right. \end{aligned} \end{equation} Suppose that $b\neq 0$ and $x_0\neq 0.$ \begin{itemize} \item Taking $q=(0,y_0+x_0),$ we have that $u_1(q)=0,$ $v_1(q)=-2x_0^2\neq 0,$ and $\frac{\partial v_1}{\partial x}(q)=2x_0\neq 0.$ In addition, notice that $$u_2(q)=-x_0(a+b)=\left\{\begin{array}{rcl} 0&\text{if}&a= -b,\\ \neq 0&\text{if}& a\neq-b, \end{array}\right.$$ $$v_2(q)=x_0(a-b)=\left\{\begin{array}{rcl} 0&\text{if}&a= b,\\ \neq 0&\text{if}& a\neq b, \end{array}\right.$$ and $\frac{\partial v_2}{\partial x}(q)=b\neq 0.$ Thus, by Definition \ref{rts}, we conclude that if $x_0>0$ and $a=-b$, then $q$ is an invisible-visible fold singularity of \eqref{ex:vis_inv_case4}, and if $x_0<0$ and $a= -b$, then $q$ is a visible-invisible fold singularity of \eqref{ex:vis_inv_case4}.
Moreover, if $x_0>0$ and $a\neq -b,$ then $q$ is a visible fold singularity of $f^+$ and a regular point of $f^-,$ and if $x_0<0$ and $a\neq -b,$ then $q$ is an invisible fold singularity of $f^+$ and a regular point of $f^-$ (see Figure \ref{ex3_contact}). \item Taking $q=(0,y_0-x_0),$ we have that $u_1(q)=0,$ $v_1(q)=2x_0^2\neq 0,$ and $\frac{\partialartial v_1}{\partialartial x}(q)=-2x_0\neq 0.$ In addition, notice that $$u_2(q)=x_0(b-a)=\left\{\betaegin{array}{rcl} 0&if&a=b,\\ \neq 0&if& a\neq b, \varepsilonnd{array}\right.$$ $$v_2(q)=-x_0(a+b)=\left\{\betaegin{array}{rcl} 0&if&a= -b,\\ \neq 0&if& a\neq -b, \varepsilonnd{array}\right.$$ and $\frac{\partialartial v_2}{\partialartial x}(q)=b\neq 0.$ Thus, by Definition \ref{rts} we conclude that if $x_0>0$ and $a=b$ then $q$ is an invisible-visible fold singularity of \varepsilonqref{ex:vis_inv_case4} and if $x_0<0$ and $a= b$ then $q$ is a visible-invisible fold singularity of \varepsilonqref{ex:vis_inv_case4} (see Figure \ref{ex3_contact}). \item Taking $q=(0,y_0-\frac{a}{b}x_0),$ we have that $u_2(q)=0,$ $v_2(q)=-x_0\frac{a^2+b^2}{b}\neq 0,$ and $\frac{\partialartial v_2}{\partialartial x}(q)=b\neq 0.$ In addition, notice that $$u_1(q)=\frac{(b-a)(b+a)}{b^2}x_0^2=\left\{\betaegin{array}{rcl} 0&if&b=\partialm a,\\ \neq 0&if& b\neq\partialm a, \varepsilonnd{array}\right.$$ $$v_1(q)=\frac{2a}{b}x_0^2=\left\{\betaegin{array}{rcl} 0&if&a= 0,\\ \neq 0&if& a\neq 0, \varepsilonnd{array}\right.$$ and $\frac{\partialartial v_1}{\partialartial x}(q)=\frac{-2a}{b}x_0.$ Thus, by Definition \ref{rts} we conclude that if $x_0>0$ and $b=\partialm a,$ then $q$ is an invisible-visible fold singularity of \varepsilonqref{ex:vis_inv_case4} and if $x_0<0$ and $b=\partialm a,$ then $q$ is a visible-invisible fold singularity of \varepsilonqref{ex:vis_inv_case4}.
Moreover, if $x_0<0$ and $b\neq \partialm a,$ then $q$ is a visible fold singularity of $f^-$ and a regular point of $f^+,$ and if $x_0>0$ and $b\neq \partialm a,$ then $q$ is an invisible fold singularity of $f^-$ and a regular point of $f^+.$ \varepsilonnd{itemize} Now, suppose that $b=0,$ $a\neq 0,$ and $x_0\neq 0.$ Taking $q=(0,y_0\partialm x_0),$ we have that $u_1(q)=0,$ $v_1(q)=\mp 2x_0^2\neq 0,$ and $\frac{\partialartial v_1}{\partialartial x}(q)=\partialm 2x_0\neq 0.$ In addition, notice that $u_2(q)=-ax_0\neq 0$ and $v_2(q)=\partialm ax_0\neq 0,$ so that $f^-(q)\neq (0,0).$ Thus, by Definition \ref{rts} we conclude that $q$ is a visible regular-fold singularity when $x_0>0$ and $q$ is an invisible regular-fold singularity when $x_0<0.$ \varepsilonnd{example} \betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.4]{ex3_contact.pdf} \partialut(25,102){$\Sigma^-$} \partialut(75,102){$\Sigma^+$} \partialut(50,-6){$\Sigma$} \varepsilonnd{overpic} \caption{The invisible regular-fold singularity $q_1=(0,-1)$ and the visible-invisible fold singularity $q_2=(0,3)$ for $a=1$ and $b=1.$} \lambdabel{ex3_contact} \varepsilonnd{center} \varepsilonnd{figure} \varepsilonnsuremath{\mathbb{S}}ection{Regularization of PWHS}\lambdabel{sec:reg} In this section we are interested in determining whether the regularized system associated with a PWHS preserves the property of being holomorphic. For that, consider two holomorphic ordinary differential equations \[\deltaot{z}^+=f^+(z),\quad \deltaot{z}^-=f^-(z)\] defined in $\mathbb{C}.$ A piecewise-smooth holomorphic system is $\deltaot{z}=F(z)$ with \betaegin{equation} \lambdabel{sis1} F=\Big(\frac{1+\overlineperatorname{sgn} (\mathbb{R}e)}{2} \Big) f^+ +\Big(\frac{1-\overlineperatorname{sgn} (\mathbb{R}e)}{2}\Big)f^-. \varepsilonnd{equation} The set $\Sigma=\{z\in\mathbb{C}:\mathbb{R}e(z)=0\}$ is called the \varepsilonmph{switching manifold}.
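The sign-function definition \varepsilonqref{sis1} can be sketched numerically. The following minimal Python sketch uses placeholder holomorphic branch fields (not the normal forms studied below) and only checks that $F$ agrees with $f^{\partialm}$ on either side of the switching manifold.

```python
# Sketch of the piecewise-smooth field F of (sis1):
# F = ((1 + sgn(Re z))/2) f+ + ((1 - sgn(Re z))/2) f-.
def pwhs(f_plus, f_minus):
    def F(z):
        s = (z.real > 0) - (z.real < 0)   # sgn(Re z); equals 0 on Sigma
        return 0.5 * (1 + s) * f_plus(z) + 0.5 * (1 - s) * f_minus(z)
    return F

f_plus = lambda z: (2 + 1j) * z           # placeholder holomorphic branches
f_minus = lambda z: 1 + 0j

F = pwhs(f_plus, f_minus)
assert F(1 + 2j) == f_plus(1 + 2j)        # F agrees with f+ on Re(z) > 0
assert F(-1 + 2j) == f_minus(-1 + 2j)     # F agrees with f- on Re(z) < 0
```

On $\Sigma$ itself the formula returns the average $\frac{f^++f^-}{2}$, reflecting the fact that $F$ is in general discontinuous across the switching manifold.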
The regularization process of a piecewise smooth vector field $F$ consists of obtaining a one-parameter family of continuous vector fields $F_{\varepsilon}$ converging to $F$ when $\varepsilon\thetao 0.$ More specifically, the Sotomayor-Teixeira regularization (\varepsilonmph{ST-regularization}) is the one-parameter family $F^{\varepsilon}$ given by \betaegin{equation} \lambdabel{regst} F^{\varepsilon}= \Big(\frac{1+\varphi(\mathbb{R}e/\varepsilon)}{2}\Big) f^+ + \Big(\frac{1-\varphi(\mathbb{R}e/\varepsilon)}{2}\Big)f^-, \varepsilonnd{equation} where $\varphi:\mathbb{R}\rightarrow[-1,1]$ is a \varepsilonmph{Sotomayor-Teixeira transition function}, i.e. a smooth function satisfying $\varphi(t)=1$ for $t\gammaeq 1$, $\varphi(t)=-1$ for $t\leq -1$, $\varphi'(t)>0$ for $t\in(-1,1)$, and $\varphi^{(i)}(\partialm1)=0$ for $i=1,2,\ldots,n$. The regularization is smooth for $\varepsilon > 0$ and satisfies $F^{\varepsilon}= f^+$ on $\{z\in\mathbb{C}:\mathbb{R}e(z)\gammaeq\varepsilon\}$ and $F^{\varepsilon}= f^-$ on $\{z\in\mathbb{C}:\mathbb{R}e(z)\leq-\varepsilon\}$. \betaegin{theorem}\lambdabel{teo:reg} Let $\varphi$ be a Sotomayor-Teixeira transition function. If there exists some $\varepsilon>0$ such that the regularization \varepsilonqref{regst} is holomorphic then $f^+(z)=f^-(z)$ for all $z\in\mathbb{C}$. \varepsilonnd{theorem} \betaegin{proof} The result is an immediate consequence of the identity principle for analytic functions: if two analytic functions coincide on a non-empty open subset of a connected domain, then they coincide throughout that domain. Indeed, the holomorphic function $F^{\varepsilon}$ agrees with $f^+$ on the open set $\{z\in\mathbb{C}:\mathbb{R}e(z)>\varepsilon\}$, so $F^{\varepsilon}=f^+$ on $\mathbb{C}$; analogously, $F^{\varepsilon}=f^-$ on $\mathbb{C}$, and hence $f^+=f^-$. \varepsilonnd{proof} Recently, some authors have considered a broader family of transition functions (see Definition \ref{reggeral}) that includes analytic functions such as $\thetaanh(x)$ and $\frac{2}{\partiali}\alpharctan(x)$ and other non-analytic functions such as the Sotomayor-Teixeira transition functions.
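For concreteness, here is a minimal numerical sketch of the ST-regularization \varepsilonqref{regst}. It uses the $C^1$ cubic $\varphi(t)=(3t-t^3)/2$ on $[-1,1]$ as transition function and placeholder branch fields, and checks that $F^{\varepsilon}$ agrees with $f^{\partialm}$ outside the band $|\mathbb{R}e(z)|<\varepsilon$.

```python
def phi(t):
    """A Sotomayor-Teixeira transition function: phi(+-1) = +-1,
    phi' > 0 on (-1, 1), phi'(+-1) = 0 (the C^1 cubic (3t - t^3)/2)."""
    if t >= 1:
        return 1.0
    if t <= -1:
        return -1.0
    return 0.5 * (3 * t - t**3)

def regularize(f_plus, f_minus, eps):
    """The ST-regularization F^eps of (regst)."""
    def F(z):
        p = phi(z.real / eps)
        return 0.5 * (1 + p) * f_plus(z) + 0.5 * (1 - p) * f_minus(z)
    return F

f_plus = lambda z: z**2        # illustrative branch fields
f_minus = lambda z: 1 + 0j
F = regularize(f_plus, f_minus, eps=0.1)
assert F(0.2 + 0j) == f_plus(0.2 + 0j)     # Re(z) >= eps  -> F = f+
assert F(-0.2 + 0j) == f_minus(-0.2 + 0j)  # Re(z) <= -eps -> F = f-
```

Inside the band, $F^{\varepsilon}$ interpolates continuously between the two branches, which is exactly the mechanism exploited in the slow-fast analysis below.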
Readers are referred to \cite{MR3976635,kris,MR3927112} for more information on these transition functions. \betaegin{definition}\lambdabel{reggeral} A transition function is a $C^n$ function $\partialhi:\mathbb{R}\rightarrow[-1,1]$ which is strictly increasing, that is, $\partialhi'(s)>0$ for every $s$ such that $\partialhi(s)\in(-1,1)$, and satisfies $\partialhi(s)\rightarrow\partialm 1$ as $s\rightarrow\partialm\infty.$ \varepsilonnd{definition} Notice that, as $\varepsilon\thetao 0,$ $$F^\varepsilon(z)\thetao\left\{\betaegin{array}{rcl} f^+(z)&\thetaext{for}& \mathbb{R}e(z)>0,\\ \\ f^-(z)&\thetaext{for}& \mathbb{R}e(z)<0. \varepsilonnd{array}\right.$$ Theorem \ref{teo:reg} still holds for the transition functions of Definition \ref{reggeral}. \betaegin{theorem}\lambdabel{teoreg} Let $\partialhi$ be a transition function satisfying the conditions of Definition \ref{reggeral}. If there exists some $\varepsilon>0$ such that the regularization \varepsilonqref{regst} is holomorphic then $f^+(z)=f^-(z)$ for all $z\in\mathbb{C}$. \varepsilonnd{theorem} \betaegin{proof} Since $F^\varepsilon=u^\varepsilon+iv^\varepsilon$ is a holomorphic function, its partial derivatives satisfy the Cauchy-Riemann equations: \betaegin{equation}\lambdabel{cre} u^\varepsilon_x=v^\varepsilon_y,\quad u^\varepsilon_y=-v^\varepsilon_x,\quad \forall z=x+iy\in\mathbb{C}.
\varepsilonnd{equation} Writing $f^{+}=u_1+iv_1$ and $f^-=u_2+iv_2$, we have \betaegin{equation}\betaegin{array}{rcl}\lambdabel{cr4} u^\varepsilon_x &=&\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}u_1(x+iy)+\frac{1+\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial u_1}{\partialartial x}(x+iy)-\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}u_2(x+iy)+\frac{1-\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial u_2}{\partialartial x}(x+iy); \\ u^\varepsilon_y &=&\frac{1+\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial u_1}{\partialartial y}(x+iy)+\frac{1-\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial u_2}{\partialartial y}(x+iy); \\ v^\varepsilon_x &=&\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}v_1(x+iy)+\frac{1+\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial v_1}{\partialartial x}(x+iy)-\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}v_2(x+iy)+\frac{1-\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial v_2}{\partialartial x}(x+iy); \\ v^\varepsilon_y &=&\frac{1+\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial v_1}{\partialartial y}(x+iy)+\frac{1-\partialhi(\frac{x}{\varepsilon})}{2}\frac{\partialartial v_2}{\partialartial y}(x+iy). \varepsilonnd{array}\varepsilonnd{equation} Since $f^+$ and $f^-$ are holomorphic functions, their partial derivatives satisfy the Cauchy-Riemann equations: \betaegin{equation} \betaegin{array}{rcl}\lambdabel{cr2} (u_1)_x &=&(v_1)_y,\quad (u_1)_y=-(v_1)_x,\quad \forall z=x+iy\in\mathbb{C};\\ (u_2)_x &=&(v_2)_y,\quad (u_2)_y=-(v_2)_x,\quad \forall z=x+iy\in\mathbb{C}. \varepsilonnd{array}\varepsilonnd{equation} Thus, substituting \varepsilonqref{cr2} in \varepsilonqref{cr4} and using \varepsilonqref{cre}, we get that $\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}(u_1(x+iy)-u_2(x+iy))=0$ and $\frac{\partialhi'(\frac{x}{\varepsilon})}{2\varepsilon}(v_1(x+iy)-v_2(x+iy))=0$, respectively.
As $\partialhi'(\frac{x}{\varepsilon})>0$ whenever $\partialhi(\frac{x}{\varepsilon})\in(-1,1),$ which holds for all $x$ in an open interval containing the origin, we get that $f^+(x+iy)=f^-(x+iy)$ for all such $x$ and all $y\in\mathbb{R}.$ Therefore, using the principle of identity of the analytic functions, we can conclude that $f^+(z)=f^-(z)$ for all $z\in\mathbb{C}.$ \varepsilonnd{proof} Now, the trajectories of the regularized system \varepsilonqref{regst} are the solutions of the slow--fast system \betaegin{equation} \varepsilon\deltaot{\betaar{x}}= \deltafrac{(1+\varphi(\betaar{x})) u_1 + (1-\varphi(\betaar{x}))u_2}{2},\quad \deltaot{y}= \deltafrac{(1+\varphi(\betaar{x}))v_1 + (1-\varphi(\betaar{x}))v_2}{2}, \lambdabel{singbasic}\varepsilonnd{equation} where $x=\varepsilon\betaar{x}$. We refer to the set $\mathcal{S}=\{(\betaar{x},y):(1+\varphi(\betaar{x}))u_1 + (1-\varphi(\betaar{x}))u_2=0\}$ as the \thetaextit{critical manifold}. System \varepsilonqref{singbasic} with $\varepsilon=0$ is called the \thetaextit{reduced system}. \betaegin{theorem} The sliding region $\Sigma^s$ is homeomorphic to the normally hyperbolic part of the critical manifold $\mathcal{S}$ and the sliding vector field $F^{\Sigma}$ is topologically equivalent to the reduced system. \lambdabel{teoBB} \varepsilonnd{theorem} See Figure \ref{figPR}. \betaegin{figure}[h] \betaegin{overpic}[width=3.5cm]{figF1} \partialut(75,77){$f^+$} \partialut(-3,77){$f^-$} \partialut(23,40){$\mathcal{S}$} \partialut(51,40){$\Sigma^s$} \varepsilonnd{overpic} \caption{\varepsilonnsuremath{\mathbb{S}}mall{The vertical strip shows the phase portrait of the slow-fast system with $\varepsilon=0,$ $\betaar{x} \in [-1,1],$ $y \in\mathbb{R}.$ The red curve is the slow manifold $\mathcal{S}$, which corresponds to the sliding region $\Sigma^s$.
The double arrow represents the fast flow.}} \lambdabel{figPR} \varepsilonnd{figure} In \cite{BS} and \cite{kris}, asymptotic methods and blow-up methods, respectively, were used to study $C^n$-regularizations of generic regular-fold singularities. Following \cite{GST}, these authors used the local normal form of Filippov systems, close to $\Sigma=\{x=0\}$, around a visible fold-regular singularity, which is given by $\widetilde{f^-}=(1,0)$ and $\widetilde{f^+}=(2y,1).$ Notice that $\widetilde{f^+}=u_1+iv_1$ is not a holomorphic function because $(u_1)_y=2\neq 0=-(v_1)_x.$ In what follows, we are concerned with studying the regularization of PWHS around visible regular-fold singularities. For that, we use the normal forms given in Proposition \ref{GGJ} and Theorem 1 of \cite{NR}. Consider $\deltaot{z}^{+}=G(z)$, where $G$ is one of the following fields: \betaegin{itemize} \item[(i)] $G(z)=f'(p)(z-z_0)$, with $f'(p)=a+ib,$ $z_0=x_0+iy_0,$ $b<0,$ and $x_0>0$; \item[(ii)] $G(z)=(z-z_0)^2$, with $x_0>0$; \item[(iii)] $G(z)=\frac{1}{(z-z_0)^2}$, with $x_0<0$; \item[(iv)]$G(z)=\frac{(z-z_0)^2}{1+(z-z_0)}$, with $0<x_0<1$. \varepsilonnd{itemize} The PWHS is given by \betaegin{equation}\lambdabel{reg_sys_1} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{-}=1,\thetaext{ when } \mathbb{R}e(z)<0, \\[5pt] \deltaot{z}^{+}=G(z),\thetaext{ when } \mathbb{R}e(z)>0. \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} In cartesian coordinates, we have \betaegin{equation}\lambdabel{carcoord_reg_1} \betaegin{aligned} \left\{\betaegin{array}{l} (\deltaot{x}^{-},\deltaot{y}^{-})=(1,0),\thetaext{ when } x<0, \\[5pt] (\deltaot{x}^{+},\deltaot{y}^{+})=\widetilde{f^+}(x,y), \thetaext{ when } x>0, \varepsilonnd{array} \right.
\varepsilonnd{aligned} \varepsilonnd{equation} where \betaegin{itemize} \item[(i)] $\widetilde{f^+}(x,y)=(a(x-x_0)-b(y-y_0),b(x-x_0)+a(y-y_0))$, \item[(ii)] $\widetilde{f^+}(x,y)=((x-x_0)^2-(y-y_0)^2,2(x-x_0)(y-y_0))$, \item[(iii)] $\widetilde{f^+}(x,y)=\left(\deltafrac{(x-x_0+y-y_0)(x-x_0-y+y_0)}{((x-x_0)^2+(y-y_0)^2)^2},-\deltafrac{2(x-x_0)(y-y_0)}{((x-x_0)^2+(y-y_0)^2)^2}\right)$, and \item[(iv)] $\betaegin{array}{rcl} \widetilde{f^+}(x,y)&=&\left(\frac{x^3+x^2(1-3x_0)+x_0^2-x_0^3+x(-2x_0+3x_0^2+(y-y_0)^2)-(1+x_0)(y-y_0)^2}{1+x^2-2x(x_0-1)-2x_0+x_0^2+(y-y_0)^2}\right.,\\ & &\left.+\frac{(x^2-2x(x_0-1)-2x_0+x_0^2+(y-y_0)^2)(y-y_0)}{1+x^2-2x(x_0-1)-2x_0+x_0^2+(y-y_0)^2}\right),\varepsilonnd{array}$ \varepsilonnd{itemize} respectively. Notice that \betaegin{itemize} \item[(i)] $p=(0,y_0-\frac{a}{b}x_0)$, \item[(ii)] $p=(0,y_0-x_0)$, \item[(iii)] $p=(0,y_0+x_0)$, and \item[(iv)] $p=\left(0,\frac{-\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}+y_0+x_0y_0}{1+x_0}\right)$ \varepsilonnd{itemize} are visible regular-fold singularities, respectively. In what follows we study the dynamics of the regularized system of \varepsilonqref{carcoord_reg_1} around $p.$ For that, we consider the translations: \betaegin{itemize} \item[(i)] $\hat{x}=x$ and $\hat{y}=y-y_0+\frac{a}{b}x_0,$ \item[(ii)] $\hat{x}=x$ and $\hat{y}=y-y_0+x_0,$ \item[(iii)] $\hat{x}=x$ and $\hat{y}=y-y_0-x_0,$ and \item[(iv)] $\hat{x}=x$ and $\hat{y}=y-\frac{-\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}+y_0+x_0y_0}{1+x_0},$ \varepsilonnd{itemize} respectively. 
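The tangency condition behind these fold points can be checked numerically: in cases (i)-(iii) the first component of $\widetilde{f^+}$ vanishes at the stated point $p$ on $\Sigma=\{x=0\}$. The sample parameter values below are chosen only for illustration; the tangency identity itself holds for any admissible parameters.

```python
# Tangency check for cases (i)-(iii): u1(p) = 0, where u1 is the first
# component of f~+ and p is the stated fold point on {x = 0}.
a, b, x0, y0 = 1.0, -2.0, 1.0, 0.5       # case (i) requires b < 0, x0 > 0

u1_i = lambda x, y: a * (x - x0) - b * (y - y0)
u1_ii = lambda x, y: (x - x0)**2 - (y - y0)**2

def u1_iii(x, y, x0=-1.0, y0=0.5):       # case (iii) requires x0 < 0
    return ((x - x0 + y - y0) * (x - x0 - y + y0)) \
        / ((x - x0)**2 + (y - y0)**2)**2

assert abs(u1_i(0.0, y0 - a / b * x0)) < 1e-12   # p = (0, y0 - (a/b) x0)
assert abs(u1_ii(0.0, y0 - x0)) < 1e-12          # p = (0, y0 - x0)
assert abs(u1_iii(0.0, 0.5 + (-1.0))) < 1e-12    # p = (0, y0 + x0), x0 = -1
```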
Then, the vector field $\widetilde{f^+}$ at the coordinates $(\hat{x},\hat{y})$ is given by \betaegin{itemize} \item[(i)] $\widehat{f^+}(\hat{x},\hat{y})=\left(a(\hat{x}-x_0)-b\left(\hat{y}-\frac{a}{b}x_0\right),b(\hat{x}-x_0)+a\left(\hat{y}-\frac{a}{b}x_0\right)\right),$ \item[(ii)] $\widehat{f^+}(\hat{x},\hat{y})=\left((\hat{x}-x_0)^2-(\hat{y}-x_0)^2,2(\hat{x}-x_0)(\hat{y}-x_0)\right),$ \item[(iii)] $\widehat{f^+}(\hat{x},\hat{y})=\left(\deltafrac{(\hat{y}+\hat{x})(-\hat{y}+\hat{x}-2x_0)}{((\hat{x}-x_0)^2+(\hat{y}+x_0)^2)^2},\deltafrac{-2(\hat{x}-x_0)(\hat{y}+x_0)}{((\hat{x}-x_0)^2+(\hat{y}+x_0)^2)^2}\right),$ and \item[(iv)] $\betaegin{array}{rl} \widehat{f^+}(\hat{x},\hat{y})=&\left(-\frac{-2xx_0+\hat{y}^2(-1+x-x_0)(1+x_0)+x(1+x_0)(x+x^2-3xx_0+2x_0^2)+2\hat{y}(1-x+x_0)\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}{-1+x_0-\hat{y}^2(1+x_0)-x(2+x-2x_0)(1+x_0)+2\hat{y}\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}\right.,\\ &\left.\frac{(\hat{y}+\hat{y}x_0-\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4})(2x_0-\hat{y}^2(1+x_0)-x(2+x-2x_0)(1+x_0)+2\hat{y}\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4})}{(1+x_0)(-1+x_0-\hat{y}^2(1+x_0)-x(2+x-2x_0)(1+x_0)+2\hat{y}\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4)}}\right),\varepsilonnd{array}$ \varepsilonnd{itemize} respectively. Recall that $\hat{p}=(0,0)$ is a visible regular-fold singularity of $\widehat{f^+}$. Now, since \betaegin{itemize} \item[(i)] $\widehat{f_2^+}(\hat{p})=-\frac{(a^2+b^2)}{b}x_0>0,$ \item[(ii)] $\widehat{f_2^+}(\hat{p})=2x_0^2>0,$ \item[(iii)] $\widehat{f_2^+}(\hat{p})=\frac{1}{2x_0^2}>0,$ and \item[(iv)] $\widehat{f_2^+}(\hat{p})=-\frac{2x_0\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}{(x_0-1)(1+x_0)}>0,$ \varepsilonnd{itemize} then there exists a neighborhood $\mathcal{U}$ of $\hat{p}$, such that $\widehat{f_2^+}(\hat{x},\hat{y})>0,$ for all $(\hat{x},\hat{y})\in \mathcal{U}$. 
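These positivity claims can be verified numerically for cases (i)-(iii): at $\hat{p}=(0,0)$ the first component of $\widehat{f^+}$ vanishes and the second equals the stated closed form. The parameter values below are illustrative.

```python
# Check f1^(p̂) = 0 and f2^(p̂) > 0 against the stated closed forms.
a, b = 1.0, -2.0           # case (i) requires b < 0

def fhat_i(x, y, x0):      # translated linear field, case (i)
    return (a * (x - x0) - b * (y - a / b * x0),
            b * (x - x0) + a * (y - a / b * x0))

def fhat_ii(x, y, x0):     # case (ii)
    return ((x - x0)**2 - (y - x0)**2, 2 * (x - x0) * (y - x0))

def fhat_iii(x, y, x0):    # case (iii)
    D = ((x - x0)**2 + (y + x0)**2)**2
    return ((y + x) * (-y + x - 2 * x0) / D, -2 * (x - x0) * (y + x0) / D)

x0 = 1.0                                       # cases (i), (ii): x0 > 0
for f, val in [(fhat_i, -(a**2 + b**2) / b * x0),
               (fhat_ii, 2 * x0**2)]:
    u, v = f(0.0, 0.0, x0)
    assert abs(u) < 1e-12 and abs(v - val) < 1e-12 and v > 0

x0 = -1.0                                      # case (iii): x0 < 0
u, v = fhat_iii(0.0, 0.0, x0)
assert abs(u) < 1e-12 and abs(v - 1 / (2 * x0**2)) < 1e-12 and v > 0
```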
Performing a time rescaling in $\widehat{f^+},$ we get $\widecheck{f^+}(\hat{x},\hat{y})=(f(\hat{x},\hat{y}),1),$ where \betaegin{itemize} \item[(i)] $f(\hat{x},\hat{y})=\deltafrac{a(\hat{x}-x_0)-b(\hat{y}-\frac{a}{b}x_0)}{b(\hat{x}-x_0)+a(\hat{y}-\frac{a}{b}x_0)},$ \item[(ii)] $f(\hat{x},\hat{y})=\deltafrac{(\hat{x}-x_0)^2-(\hat{y}-x_0)^2}{2(\hat{x}-x_0)(\hat{y}-x_0)},$ \item[(iii)] $f(\hat{x},\hat{y})=\deltafrac{(\hat{y}+\hat{x})(\hat{y}-\hat{x}+2x_0)}{2(\hat{x}-x_0)(\hat{y}+x_0)},$ and \item[(iv)] $f(\hat{x},\hat{y})=-\frac{(1+x_0)(-2xx_0+\hat{y}^2(-1+x-x_0)(1+x_0)+x(1+x_0)(x+x^2-3xx_0+2x_0^2)+2\hat{y}(1-x+x_0)\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}{(\hat{y}+\hat{y}x_0-\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4})(2x_0-\hat{y}^2(1+x_0)-x(2+x-2x_0)(1+x_0)+2\hat{y}\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4})},$ \varepsilonnd{itemize} respectively. Notice that $\widehat{f^+}$ and $\widecheck{f^+}$ have the same orbits in $\mathcal{U}$ with the same orientation. Now, expanding $f$ around $(\hat{x},\hat{y})=(0,0),$ we get $$f(\hat{x},\hat{y})=\alphalpha\hat{y}+g(\hat{y})+\hat{x}\vartheta(\hat{x},\hat{y}),$$ where \betaegin{itemize} \item[(i)] $\alphalpha=\frac{b^2}{(a^2+b^2)x_0},$ $g(\hat{y})=\frac{ab^3}{(a^2+b^2)^2x_0^2}\hat{y}^2+\mathbb{C}O(\hat{y}^3)$ and $\vartheta(\hat{x},\hat{y})=\frac{-ab}{(a^2+b^2)x_0}+\mathbb{C}O(\hat{x},\hat{y}),$ \item[(ii)] $\alphalpha=\frac{1}{x_0},$ $g(\hat{y})=\frac{1}{2x_0^2}\hat{y}^2+\mathbb{C}O(\hat{y}^3)$, and $\vartheta(\hat{x},\hat{y})=-\frac{1}{x_0}+\mathbb{C}O(\hat{x},\hat{y}),$ and \item[(iii)] $\alphalpha=-\frac{1}{x_0},$ $g(\hat{y})=\frac{1}{2x_0^2}\hat{y}^2+\mathbb{C}O(\hat{y}^3)$, and $\vartheta(\hat{x},\hat{y})=-\frac{1}{x_0}+\mathbb{C}O(\hat{x},\hat{y}),$ and \item[(iv)] $\alphalpha=\frac{(1+x_0)^2}{x_0},$ $g(\hat{y})=\frac{(1+x_0)^3(1+2(x_0-1)x_0)}{2x_0\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}\hat{y}^2+\mathbb{C}O(\hat{y}^3)$, and 
$\vartheta(\hat{x},\hat{y})=\frac{(1+x_0)(-1+x_0+x_0^2)}{\varepsilonnsuremath{\mathbb{S}}qrt{x_0^2-x_0^4}}+\mathbb{C}O(\hat{x},\hat{y}),$ \varepsilonnd{itemize} respectively. Now, using Theorem 1 of \cite{NR}, we get the following result. \betaegin{theorem}\lambdabel{ta} Consider system \varepsilonqref{carcoord_reg_1}, i.e. $\widetilde{f^+}$ has a visible fold singularity at $p=(0,p^*),$ $\widetilde{f_2^+}(p)>0,$ and $\widetilde{f^-}=(1,0).$ For $n\gammaeqslant 2,$ consider the regularized system $F^{\varepsilon}$ \varepsilonqref{regst}. Then, there exist $\rho_0,\thetaheta_0>0,$ and constants $\betaeta<0$ and $c,r>0,$ such that for every $\rho\in(\varepsilon^\lambda,\rho_0],$ $\thetaheta\in[y_\varepsilon,\thetaheta_0],$ $\lambda\in(0,\lambda^*),$ with $\lambda^*= \frac{n}{2n-1},$ $q=1-\deltafrac{\lambdambda}{\lambdambda^*}\in(0,1),$ and $\varepsilon>0$ sufficiently small, the flow of $F^{\varepsilon}$ defines a map $U_{\varepsilon}$ between the transversal sections $\widehat V_{\rho,\lambdambda}^{\varepsilon}=[\varepsilon,x_{\rho,\lambdambda}^{\varepsilon}]\thetaimes\{-\rho+p^*\}$ and $\widetilde V_{\thetaheta}^{\varepsilon}=[x_\thetaheta^\varepsilon,x_\thetaheta^\varepsilon+r e^{-\frac{c}{\varepsilon^q}}]\thetaimes\{\thetaheta+p^*\},$ satisfying \[ \betaegin{array}{cccl} U_{\varepsilon}:& \widehat V_{\rho,\lambdambda}^{\varepsilon}& \longrightarrow& \widetilde V_{\thetaheta}^{\varepsilon}\\ &x&\longmapsto&x_{\thetaheta}^{\varepsilon}+\mathbb{C}O(e^{-\frac{c}{\varepsilon^q}}), \varepsilonnd{array} \] where \[\betaegin{array}{rcl} x_{\thetaheta}^{\varepsilon}& =&\frac{\alphalpha\thetaheta^2}{2}+\mathbb{C}O(\thetaheta^3)+\varepsilon+\mathcal{O}(\varepsilon\thetaheta)+\mathcal{O}(\thetaheta^{2} y_\varepsilon)+\mathcal{O}(y_{\varepsilon}^{2}), \quad\thetaext{and} \\ y_\varepsilon&=&\varepsilon^{\lambdambda^*}\varepsilonta+\mathbb{C}O(\varepsilon^{\lambdambda^*+\frac{1}{2n-1}}), \quad\thetaext{for}\quad \varepsilonta>0, \\ 
x^\varepsilon_{\rho,\lambda}&=&\frac{\alphalpha\rho^2}{2}+\mathbb{C}O(\rho^3)+\varepsilon+\mathcal{O}(\varepsilon \rho)+\betaeta \varepsilon^{2\lambda}+\mathcal{O}(\varepsilon^{3\lambda})+\mathcal{O}(\varepsilon^{1+\lambda}). \varepsilonnd{array} \] (see Figure \ref{figMAP1}). \varepsilonnd{theorem} \betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.52]{poincaremap1.pdf} \partialut(65,12){$\widehat V_{\rho,\lambda}^{\varepsilon}$} \partialut(62,80.5){$\widetilde V_{\thetaheta}^{\varepsilon}$} \partialut(86,78){$y=\thetaheta+p^*$} \partialut(86,20){$y=-\rho+p^*$} \partialut(28,-5){$\Sigma$} \partialut(26,51){$p$} \partialut(44,-5){$x=\varepsilon$} \partialut(4,-5){$x=-\varepsilon$} \partialut(62,89){$(\thetaheta,x_{\thetaheta}^{\varepsilon})$} \partialut(55,61){$U_{\varepsilon}(x)$} \partialut(61,15){$x$} \partialut(77,15){$x^\varepsilon_{\rho,\lambda}$} \varepsilonnd{overpic} \varepsilonnd{center} \betaigskip \caption{The transition map $U_{\varepsilon}$ of $F^\varepsilon.$ The dotted curve is the trajectory of $\widetilde{f^+}$ passing through the visible regular-fold singularity. The red curve is the Fenichel manifold.} \lambdabel{figMAP1} \varepsilonnd{figure} \varepsilonnsuremath{\mathbb{S}}ection{Limit cycles of PWHS}\lambdabel{sec:limitcycles} In Proposition \ref{teo_nocl} it was shown that holomorphic systems have no limit cycles; however, PWHS can have limit cycles. For that reason, in this section we focus on finding conditions for the existence of limit cycles of PWHS formed by the normal forms given in Proposition \ref{GGJ}. We start by studying the linear case. In this case, we consider equilibrium points on the switching manifold $\Sigma=\{z\in\mathbb{C}:\Im(z)=0\}.$ \betaegin{theorem}\lambdabel{cl1} Piecewise linear holomorphic systems whose equilibrium points lie on the manifold $\Sigma$ have at most one limit cycle.
\varepsilonnd{theorem} \betaegin{proof} Without loss of generality suppose that the piecewise linear holomorphic system has one of its equilibrium points at the origin. Thus, this system can be written as follows: \betaegin{equation}\lambdabel{ex1_cycle} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=(a+ib)(z-x_0),\thetaext{ when } \Im{(z)}>0, \\[5pt] \deltaot{z}^{-}=(c+id)z, \thetaext{ when } \Im{(z)}< 0, \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} where $a,b,c,d$ and $x_0$ are real numbers. It is easy to verify that if any of the coefficients $b,d$ or $x_0$ are zero then the system has no limit cycles. So we assume that these coefficients are not zero. Consider $w_0\in\mathbb{R}^+.$ If $b>0,$ then \[z^+(t)=(w_0-x_0)e^{at}(\cos (bt)+i\varepsilonnsuremath{\mathbb{S}}in (bt))+x_0\] is a solution of $\deltaot{z}^{+}=(a+bi)(z-x_0)$ satisfying that $z^+(0)=w_0$ and $z^{+}(\frac{\partiali}{b})=x_0-(w_0-x_0)e^{\frac{a\partiali}{b}}$. In addition, \[z^-(t)=-e^{ct}((w_0-x_0)e^\frac{a\partiali}{b}-x_0)(\cos(dt)+i\varepsilonnsuremath{\mathbb{S}}in(dt))\] is a solution of $\deltaot{z}^{-}=(c+id)z$ such that $z^{-}(0)=x_0-(w_0-x_0)e^{\frac{a\partiali}{b}}$ and $z^{-}(\frac{\partiali}{d})=e^\frac{c\partiali}{d}((w_0-x_0)e^\frac{a\partiali}{b}-x_0),$ with $d>0$. On the other hand, if $b<0,$ then \[z^+(t)=-(w_0+x_0)e^{at}(\cos (bt)+i\varepsilonnsuremath{\mathbb{S}}in (bt))+x_0\] is a solution of $\deltaot{z}^{+}=(a+bi)(z-x_0)$ satisfying that $z^+(0)=-w_0$ and $z^{+}(-\frac{\partiali}{b})=x_0+(w_0+x_0)e^{-\frac{a\partiali}{b}}$. Moreover, \[z^-(t)=e^{ct}((w_0+x_0)e^{-\frac{a\partiali}{b}}+x_0)(\cos(dt)+i\varepsilonnsuremath{\mathbb{S}}in(dt))\] is a solution of $\deltaot{z}^{-}=(c+id)z$ such that $z^{-}(0)=x_0+(w_0+x_0)e^{-\frac{a\partiali}{b}}$ and $z^{-}(-\frac{\partiali}{d})=e^{-\frac{c\partiali}{d}}((-w_0-x_0)e^{-\frac{a\partiali}{b}}-x_0),$ with $d<0$. 
\betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.8]{frmap.pdf} \partialut(35,3.5){$w_0$} \partialut(59,3.5){$-w_0$} \partialut(22,8.5){$\Pi(w_0)$} \partialut(70,8.5){$\Pi(-w_0)$} \partialut(103,6){$\Sigma$} \partialut(47,6){$\Sigma$} \partialut(14,-4.5){$b,d>0$} \partialut(75,-4.5){$b,d<0$} \varepsilonnd{overpic} \caption{The Poincaré map around $z=\partialm w_0$.} \lambdabel{frmap} \varepsilonnd{center} \varepsilonnd{figure} Therefore, the Poincaré map around $z=\partialm w_0$ is given by $$\Pi(z)=e^{\partialm\frac{c\partiali}{d}}((z-x_0)e^{\partialm\frac{a\partiali}{b}}-x_0)$$ and $\Pi'(\partialm w_{0})=e^{\partialm(\frac{a}{b}+\frac{c}{d})\partiali}$ (see Figure \ref{frmap}). Now, we must seek solutions for the equation $\Pi(\partialm w_{0})=\partialm w_{0}$. The number of roots of this equation corresponds to the number of limit cycles. If $\frac{a}{b}+\frac{c}{d}=0,$ then $\Pi(\partialm w_{0})=\partialm w_{0}$ has no solution. Otherwise, we have a unique solution given by $w_0=\deltafrac{e^{\frac{c\partiali}{d}}(1+e^{\frac{a\partiali}{b}})x_0}{-1+e^{(\frac{a}{b}+\frac{c}{d})\partiali}}$ provided that $b>0$ and $w_0=\deltafrac{(1+e^{\frac{a\partiali}{b}})x_0}{-1+e^{(\frac{a}{b}+\frac{c}{d})\partiali}}$ provided that $b<0$; thus we have a unique limit cycle $\Gammaamma$. Finally, using the first derivative of the Poincaré map, we can conclude that $\Gammaamma$ is stable (resp. unstable) provided that $\overlineperatorname{sgn}(b)\neq \overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d})$ (resp. $\overlineperatorname{sgn}(b)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d})$). \varepsilonnd{proof} An immediate consequence of the proof of the previous theorem is the following result. \betaegin{corollary}\lambdabel{cor:lc} Let $b,d,$ and $x_0$ be non-zero real numbers and $a,c\in\mathbb{R}$.
The piecewise linear holomorphic system \betaegin{equation} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=(a+ib)(z-x_0),\thetaext{ when } \Im{(z)}>0, \\[5pt] \deltaot{z}^{-}=(c+id)z, \thetaext{ when } \Im{(z)}< 0, \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} has a unique limit cycle $\Gammaamma$ if, and only if, $a,b,c,d,$ and $x_0$ satisfy some row of tables \varepsilonqref{table0} or \varepsilonqref{table00} and $\frac{a}{b}+\frac{c}{d}\neq 0$. Moreover, $\Gammaamma$ is stable (resp. unstable) provided that $\overlineperatorname{sgn}(b)\neq \overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d})$ (resp. $\overlineperatorname{sgn}(b)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d})$). \varepsilonnd{corollary} \betaegin{minipage}[t]{.58\thetaextwidth} \raggedright \betaegin{equation}\lambdabel{table0} \betaegin{array}{|| c | c | c | c | c ||} \hline a& b &c & d &x_0 \\ \hline\hline +&+ &+ &+&+ \\ \hline -&+ &- &+&-\\ \hline -&- &- &-&+ \\ \hline +&- &+ &-&- \\ \hline +&+ &- &+&\overlineperatorname{sgn}(x_0)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d}) \\ \hline -&+ &+ &+&\overlineperatorname{sgn}(x_0)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d})\\ \hline -&- &+ &-&\overlineperatorname{sgn}(x_0)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d}) \\ \hline +&- &- &-&\overlineperatorname{sgn}(x_0)=\overlineperatorname{sgn}(\frac{a}{b}+\frac{c}{d}) \\ \hline \varepsilonnd{array} \varepsilonnd{equation} \varepsilonnd{minipage} \betaegin{minipage}[t]{.35\thetaextwidth} \raggedright \betaegin{equation}\lambdabel{table00} \betaegin{array}{|| c | c | c | c | c ||} \hline a& b &c & d &x_0 \\ \hline\hline 0&+ &+ &+&+ \\ \hline 0&+&- &+&- \\ \hline 0&-&+ &-&-\\ \hline 0&- &- &-&+ \\ \hline +&+ &0 &+&+ \\ \hline -&+&0 &+&- \\ \hline -&-&0 &-&+\\ \hline +&- &0 &-&- \\ \hline \varepsilonnd{array} \varepsilonnd{equation} \varepsilonnd{minipage} It is important to emphasize that the conditions given in the
previous corollary are not vacuous. Indeed, taking $a=-1$ or $a=0,$ $b=1,$ $c=-1,$ $d=1,$ and $x_0=-1,$ we have the existence of a unique stable limit cycle (see Figure \ref{limit_cycle_1}). \betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.4]{fig_focus_focus1.pdf} \partialut(-4,13){$\Sigma^-$} \partialut(-4,35){$\Sigma^+$} \partialut(101,24){$\Sigma$} \partialut(50,13){$\Sigma^-$} \partialut(50,35){$\Sigma^+$} \partialut(47,24){$\Sigma$} \partialut(19,-2){$a=-1$} \partialut(74,-2){$a=0$} \varepsilonnd{overpic} \caption{Phase portrait of PWHS \varepsilonqref{ex1_cycle} with $b=1$, $c=-1,$ $d=1,$ and $x_0=-1$. The red trajectory is the limit cycle of \varepsilonqref{ex1_cycle}.} \lambdabel{limit_cycle_1} \varepsilonnd{center} \varepsilonnd{figure} Now, we study the analytic vector fields $z^n$ for $n\gammaeq 2$, which are divided into 5 cases that depend on $n$. For that, we use the symmetry of this normal form and that the rays $\thetaheta=\frac{j\partiali}{n-1}$, $j\in\{1,\cdots, 2(n-1)\}$ (resp. $\thetaheta=\frac{j\partiali}{2(n-1)}$, $j\in\{1,3,\cdots, 4(n-1)-1\}$) are invariant under the flow of the equation $\deltaot{z}=z^n,$ with $n$ even (resp. $\deltaot{z}=iz^n$, with $n$ odd). Moreover, we consider virtual equilibrium points of $z^n$ in the following sense: given a piecewise smooth vector field \betaegin{equation} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=f^{+}(z), \thetaext{ when } \Im(z)> 0,\\[5pt] \deltaot{z}^{-}=f^{-}(z),\thetaext{ when }\Im(z)<0, \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} we say that an equilibrium point $z_0$ of $f^+$ (resp. $f^-$) is virtual when $z_0\in\Sigma^-$ (resp. $z_0\in\Sigma^+$).
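The ray invariance invoked above can be verified numerically: on the stated rays, $z^n$ (resp. $iz^n$) is a real multiple of $z$, so each ray is carried into itself by the flow. A minimal sketch:

```python
import cmath
import math

def collinear(w, z, tol=1e-9):
    """True if w is a real multiple of z (same or opposite direction)."""
    return abs((w * z.conjugate()).imag) < tol

# even n: dz/dt = z^n leaves the rays theta = j*pi/(n-1) invariant
for n in (2, 4, 6):
    for j in range(1, 2 * (n - 1) + 1):
        z = cmath.rect(1.7, j * math.pi / (n - 1))
        assert collinear(z**n, z)

# odd n: dz/dt = i z^n leaves the rays theta = j*pi/(2(n-1)), j odd, invariant
for n in (3, 5):
    for j in range(1, 4 * (n - 1), 2):
        z = cmath.rect(1.7, j * math.pi / (2 * (n - 1)))
        assert collinear(1j * z**n, z)
```

The check rests on the angle identity $\alpharg(z^n)=\thetaheta+(n-1)\thetaheta\varepsilonquiv\thetaheta \pmod{\partiali}$ on those rays (and similarly with the extra $\partiali/2$ contributed by the factor $i$ when $n$ is odd).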
\betaegin{theorem}\lambdabel{cl2} Given $n\in\mathbb{N}_{n\gammaeq2}$, there exist $a,b,d,$ and $y_0$ non-zero real numbers and $z_0=x_0+iy_0\in\mathbb{C}$ satisfying table \varepsilonqref{table1}, such that the PWHS \betaegin{equation}\lambdabel{2_ex_cycle} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=i^m(z+z_0)^n,\thetaext{ when } \Im{(z)}>0, \\[5pt] \deltaot{z}^{-}=(a+ib)(z-d), \thetaext{ when } \Im{(z)}< 0, \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} has a unique stable limit cycle, where $m=0$ if $n$ is even and $m=1$ otherwise. \varepsilonnd{theorem} \betaegin{equation}\lambdabel{table1} \betaegin{array}{|| c |c| c | c | c | c | c |c||} \hline n& k&a &b & d & x_0&y_0& \thetaext{Main condition} \\ \hline\hline 2& &- &+ & \mathbb{R} & \mathbb{R}&+& d>-x_0 \\ \hline 4k-1&\gammaeq 1&- & -& - & 0 &+ & \cot\left(\deltafrac{n\partiali}{2(n-1)}\right)y_0<-\deltafrac{d(1+e^\frac{a\partiali}{b})}{1-e^{\frac{a\partiali}{b}}}<0 \\ \hline 4k&\gammaeq 1 & - & - & -&0 & + & \cot\left(\deltafrac{n\partiali}{2(n-1)}\right)y_0<-\deltafrac{d(1+e^\frac{a\partiali}{b})}{1-e^{\frac{a\partiali}{b}}}<0\\ \hline 4k-2&>1 & - & + & + &0& + &0<\deltafrac{d(1+e^\frac{a\partiali}{b})}{1-e^{\frac{a\partiali}{b}}}<\cot\left(\deltafrac{(n-2)\partiali}{2(n-1)}\right)y_0\\ \hline 4k+1&\gammaeq 1 & - & + & + &0& + &0<\deltafrac{d(1+e^\frac{a\partiali}{b})}{1-e^{\frac{a\partiali}{b}}}<\cot\left(\deltafrac{(n-2)\partiali}{2(n-1)}\right)y_0\\ \hline \varepsilonnd{array} \varepsilonnd{equation} The proof of this theorem is an immediate consequence of the following 5 propositions. \betaegin{proposition} Let $a$ and $b$ be non-zero real numbers, $d\in\mathbb{R},$ and $z_0=x_0+iy_0,$ with $y_0>0$. 
If $a<0<b$ and $d>-x_0,$ then the PWHS \betaegin{equation}\lambdabel{ex2_cycle} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=(z+z_0)^2,\thetaext{ when } \Im{(z)}>0, \\[5pt] \deltaot{z}^{-}=(a+ib)(z-d), \thetaext{ when } \Im{(z)}< 0, \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} has a unique stable limit cycle. \varepsilonnd{proposition} \betaegin{proof} Writing system \varepsilonqref{ex2_cycle} in Cartesian coordinates, we have \betaegin{equation}\lambdabel{csys_2} \betaegin{aligned} \left\{\betaegin{array}{l} (\deltaot{x}^{+},\deltaot{y}^{+})=((x+x_0)^2-(y+y_0)^2,2(x+x_0)(y+y_0)),\thetaext{ when } y>0, \\[5pt] (\deltaot{x}^{-},\deltaot{y}^{-})=(a(x-d)-by,b(x-d)+ay), \thetaext{ when } y< 0. \varepsilonnd{array} \right. \varepsilonnd{aligned} \varepsilonnd{equation} Now, consider the equation for the orbits of system \varepsilonqref{csys_2} when $y>0$ \betaegin{equation}\lambdabel{orb_2} \frac{dy}{dx}=\frac{2(x+x_0)(y+y_0)}{(x+x_0)^2-(y+y_0)^2}, \varepsilonnd{equation} or, equivalently, \betaegin{equation}\lambdabel{orb_22} -2(x+x_0)(y+y_0)dx+((x+x_0)^2-(y+y_0)^2)dy=0. \varepsilonnd{equation} We emphasize that equation \varepsilonqref{orb_22} becomes exact when multiplied by the integrating factor $\mu(y)=\frac{1}{(y+y_0)^2}.$ Thus, the solution of equation \varepsilonqref{orb_2}, with initial condition $x(0)=w_0>-x_0$ and $y(0)=0$, in implicit form is $$\frac{(x+x_0)^2}{y+y_0}+y=\frac{(w_0+x_0)^2}{y_0}.$$ Notice that $y=0$ if, and only if, $x=w_0$ or $x=-2x_0-w_0.$ Hence, there exists $t_0>0$ such that $x(t_0)=-2x_0-w_0$ and $y(t_0)=0$. Moreover, \[z^-(t)=-(d+w_0+2x_0)e^{at}(\cos (bt)+i\varepsilonnsuremath{\mathbb{S}}in (bt))+d\] is a solution of $\deltaot{z}^{-}=(a+bi)(z-d)$ satisfying that $z^-(0)=-2x_0-w_0$ and $z^{-}(\frac{\partiali}{b})=d+(d+2x_0+w_0)e^{\frac{a\partiali}{b}}$.
Therefore, the Poincaré map at $z=w_0$ is given by $\Pi(w_{0})=d+(d+2x_0+w_0)e^{\frac{a\partiali}{b}}$ and $\Pi'(w_{0})=e^{\frac{a\partiali}{b}}<1$. Now, we must seek solutions for the equation $\Pi(w_{0})=w_{0}$. The number of roots of this equations correspond to the number of limit cycles. Since $a\neq 0,$ then we have a unique solution, given by $\deltafrac{d+e^\frac{a\partiali}{b}(d+2x_0)}{1-e^{\frac{a\partiali}{b}}}$, thus we have only one limit cycle, which is stable. \varepsilonnd{proof} Emphasize that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=1,$ $d=1,$ and $z_0=i$ we have the existence of a unique limit cycle (see Figure \ref{limit_cycle_2}). \betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.26]{limit_cycle_2.pdf} \partialut(-7,23){$\Sigma^-$} \partialut(-7,75){$\Sigma^+$} \partialut(102,52){$\Sigma$} \varepsilonnd{overpic} \caption{Phase portrait of PWHS \varepsilonqref{ex1_cycle} with $a=-1,$ $b=1,$ $d=1,$ and $z_0=i$. The red trajectory is the limit cycle of \varepsilonqref{ex2_cycle}.} \lambdabel{limit_cycle_2} \varepsilonnd{center} \varepsilonnd{figure} \betaegin{remark} Recall that the PWHS \varepsilonqref{ex2_cycle} with $a=0$ and $b\neq 0$ has no limit cycles. Even more, if $a=0,$ $b\neq 0,$ and $d\neq -x_0$ (resp. $a=0,$ $b\neq 0,$ and $d=-x_0$), then we have no periodic orbits (resp. we have infinite periodic orbits). \varepsilonnd{remark} \betaegin{proposition}\lambdabel{prop_polar} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,b,d<0,$ $y_0>0,$ $n=4k$ for some integer $k\gammaeq 1,$ and $\cot\left(\frac{n\partiali}{2(n-1)}\right)y_0<-\frac{d(1+e^\frac{a\partiali}{b})}{1-e^{\frac{a\partiali}{b}}}<0,$ then the PWHS \betaegin{equation}\lambdabel{ex4_cycle} \betaegin{aligned} \left\{\betaegin{array}{l} \deltaot{z}^{+}=(z+iy_0)^n,\thetaext{ when } \Im{(z)}>0, \\[5pt] \deltaot{z}^{-}=(a+ib)(z-d), \thetaext{ when } \Im{(z)}< 0, \varepsilonnd{array} \right. 
\varepsilonnd{aligned} \varepsilonnd{equation} has a unique stable limit cycle. \varepsilonnd{proposition} \betaegin{proof} Consider $\deltaot{z}^{+}=(z+iy_0)^n$. First, we shall prove that the solutions of $z^+$ are symmetric about the $y-$axis. Indeed, writing $\deltaot{z}^+$ in its polar form we have \betaegin{equation}\lambdabel{pczn4} \left\{\betaegin{array}{rcl} \deltaot{r}&=&r^n\cos(n-1)\thetaheta,\\ \deltaot{\thetaheta}&=&r^{n-1}\varepsilonnsuremath{\mathbb{S}}in(n-1)\thetaheta, \varepsilonnd{array}\right. \varepsilonnd{equation} where $z+y_0i=re^{i\thetaheta}=r(\cos(\thetaheta)+i\varepsilonnsuremath{\mathbb{S}}in(\thetaheta)).$ It is easy to see that the orbits of this system satisfy the following equation: \betaegin{equation} \lambdabel{rzn4} r=|\varepsilonnsuremath{\mathbb{S}}in(n-1)\thetaheta|^{\frac{1}{n-1}}e^C. \varepsilonnd{equation} Since equation \varepsilonqref{rzn4} evaluated in $\partiali-\thetaheta$ and $\thetaheta$ are the same, then the orbits of \varepsilonqref{pczn4} are symmetric with respect to the straight line $\thetaheta=\frac{\partiali}{2}.$ Therefore, we can conclude the symmetry of the solutions of $\deltaot{z}^+$ with respect to $y-$axis. Now, consider the solution $z^+(t)$ of \varepsilonqref{ex4_cycle} with initial condition $z^+(0)=-w_0<0$. By the symmetry of the solutions of \varepsilonqref{pczn4}, we have that there exists $t_0>0$ such that $z^+(t_0)=w_0.$ Moreover, \[z^-(t)=-(d-w_0)e^{at}(\cos (bt)+i\varepsilonnsuremath{\mathbb{S}}in (bt))+d\] is a solution of $\deltaot{z}^{-}=(a+bi)(z-d)$ satisfying that $z^-(0)=w_0$ and $z^{-}(-\frac{\partiali}{b})=d+(d-w_0)e^{-\frac{a\partiali}{b}}$. Therefore, the Poincaré map around $z=-w_0$ is given by $\Pi(z)=d+(d+z)e^{-\frac{a\partiali}{b}}$ and $\Pi'(-w_{0})=e^{-\frac{a\partiali}{b}}<1$. Now, we must seek solutions for the equation $\Pi(-w_{0})=-w_{0}$. 
Since $a\neq 0$, it has a unique solution, given by $w_0=\dfrac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle}), and it is stable.
\end{proof}
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.3]{limit_cycle_unique2.pdf}
\put(101,10){$\Sigma$}
\put(8,55){$\frac{(n-2)\pi}{2(n-1)}$}
\put(75,55){$\frac{n\pi}{2(n-1)}$}
\end{overpic}
\caption{Uniqueness of the limit cycle.}
\label{limit_cycle_unique2}
\end{center}
\end{figure}
\begin{remark}\label{unique_cycle} The limit cycle found lies in the sector determined by the rays $\frac{(n-2)\pi}{2(n-1)}$ and $\frac{n\pi}{2(n-1)}$; by the invariance of the rays of $z^+$ and the orientation of the trajectories, this limit cycle is unique (see Figure \ref{limit_cycle_unique2}). \end{remark}
Notice that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=-1,$ $d=-\frac{1}{2},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_4}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.40]{limit_cycle_4.pdf}
\put(-7,23){$\Sigma^-$}
\put(-7,58){$\Sigma^+$}
\put(82,43){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex4_cycle} with $n=4,$ $a=-1,$ $b=-1,$ $d=-\frac{1}{2},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex4_cycle}.}
\label{limit_cycle_4}
\end{center}
\end{figure}
\begin{proposition} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a<0<b,$ $y_0,d>0,$ $n=4k-2$ for some integer $k> 1,$ and $0<\frac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}<\cot\left(\frac{(n-2)\pi}{2(n-1)}\right)y_0,$ then the PWHS
\begin{equation}\label{ex3_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(z+iy_0)^n,\text{ when } \Im{(z)}>0, \\[5pt]
\dot{z}^{-}=(a+ib)(z-d), \text{ when } \Im{(z)}< 0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{+}=(z+iy_0)^n$. Writing $\dot{z}^+$ in polar form we have
\begin{equation}\label{pczn3}
\left\{\begin{array}{rcl}
\dot{r}&=&r^n\cos(n-1)\theta,\\
\dot{\theta}&=&r^{n-1}\sin(n-1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ By the proof of Proposition \ref{prop_polar}, we know that the solutions of $z^+$ are symmetric about the $y$-axis. Now, consider the solution $z^+(t)$ of \eqref{ex3_cycle} with initial condition $z^+(0)=w_0>0$. By the symmetry of the solutions of \eqref{pczn3}, there exists $t_0>0$ such that $z^+(t_0)=-w_0.$ Moreover, \[z^-(t)=-(d+w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{-}=(a+bi)(z-d)$ satisfying $z^-(0)=-w_0$ and $z^{-}(\frac{\pi}{b})=d+(d+w_0)e^{\frac{a\pi}{b}}$. Consequently, the Poincaré map at $z=w_0$ is given by $\Pi(w_{0})=d+(d+w_0)e^{\frac{a\pi}{b}}$, with $\Pi'(w_{0})=e^{\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(w_{0})=w_{0}$. Since $a\neq 0$, it has a unique solution, given by $\dfrac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle}), and it is stable.
\end{proof}
It is important to note that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=1,$ $d=\frac{1}{5},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_3}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.40]{limit_cycle_3.pdf}
\put(-7,23){$\Sigma^-$}
\put(-7,58){$\Sigma^+$}
\put(82,42){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex3_cycle} with $n=6,$ $a=-1,$ $b=1,$ $d=\frac{1}{5},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex3_cycle}.}
\label{limit_cycle_3}
\end{center}
\end{figure}
\begin{proposition}\label{prop_polar2} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,b,d<0,$ $y_0>0,$ $n=4k-1$ for some integer $k\geq 1,$ and $\cot\left(\frac{n\pi}{2(n-1)}\right)y_0<-\frac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}<0,$ then the PWHS
\begin{equation}\label{ex5_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=i(z+iy_0)^n,\text{ when } \Im{(z)}>0, \\[5pt]
\dot{z}^{-}=(a+ib)(z-d), \text{ when } \Im{(z)}< 0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{+}=i(z+iy_0)^n$. First, we shall prove that the solutions of $z^+$ are symmetric about the $y$-axis. Indeed, writing $\dot{z}^+$ in polar form we have
\begin{equation}\label{pczn5}
\left\{\begin{array}{rcl}
\dot{r}&=&-r^n\sin(n-1)\theta,\\
\dot{\theta}&=&r^{n-1}\cos(n-1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ It is easy to see that the orbits of this system satisfy the equation
\begin{equation}\label{rzn5}
r=|\cos(n-1)\theta|^{\frac{1}{n-1}}e^C.
\end{equation}
Since equation \eqref{rzn5} takes the same value at $\pi-\theta$ and at $\theta$, the orbits of \eqref{pczn5} are symmetric with respect to the straight line $\theta=\frac{\pi}{2};$ hence the solutions of $\dot{z}^+$ are symmetric with respect to the $y$-axis. Now, consider the solution $z^+(t)$ of \eqref{ex5_cycle} with initial condition $z^+(0)=-w_0<0$. By the symmetry of the solutions of \eqref{pczn5}, there exists $t_0>0$ such that $z^+(t_0)=w_0.$ Moreover, \[z^-(t)=-(d-w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{-}=(a+bi)(z-d)$ satisfying $z^-(0)=w_0$ and $z^{-}(-\frac{\pi}{b})=d+(d-w_0)e^{-\frac{a\pi}{b}}$. Therefore, the Poincaré map around $z=-w_0$ is given by $\Pi(z)=d+(d+z)e^{-\frac{a\pi}{b}}$, with $\Pi'(-w_{0})=e^{-\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(-w_{0})=-w_{0}$. Since $a\neq 0$, it has a unique solution, given by $w_0=\dfrac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle}), and it is stable.
\end{proof}
Notice that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=-1,$ $d=-\frac{1}{2},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_5}).
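The fixed point and the contraction property in the proof above are easy to check numerically. The following sketch (illustrative only, not part of the proof; the name `Pi` for the lower half-return map is ours) uses the example values $a=-1$, $b=-1$, $d=-\tfrac{1}{2}$, $y_0=1$, $n=3$ and verifies that $-w_0$ is indeed a fixed point, that the map is a contraction, and that these parameters satisfy the main condition of Table \eqref{table1} for the row $n=4k-1$:

```python
import math

# Example parameters used for Figure limit_cycle_5 (n = 4k-1 with k = 1).
a, b, d, y0, n = -1.0, -1.0, -0.5, 1.0, 3

# Lower half-return map from the proof: Pi(z) = d + (d + z) * e^{-a*pi/b}.
Pi = lambda z: d + (d + z) * math.exp(-a * math.pi / b)

# Claimed unique fixed point: w0 = d(1 + e^{a*pi/b}) / (1 - e^{a*pi/b}).
E = math.exp(a * math.pi / b)
w0 = d * (1 + E) / (1 - E)

assert abs(Pi(-w0) - (-w0)) < 1e-12        # -w0 is a fixed point of Pi
assert 0 < math.exp(-a * math.pi / b) < 1  # |Pi'| < 1: the cycle is stable
# Main condition: cot(n*pi/(2(n-1))) * y0 < -w0 < 0.
t = n * math.pi / (2 * (n - 1))
assert (math.cos(t) / math.sin(t)) * y0 < -w0 < 0
```

The same three-line check works for every row of Table \eqref{table1} after substituting the corresponding half-return map.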
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.40]{limit_cycle_5.pdf}
\put(-7,23){$\Sigma^-$}
\put(-7,58){$\Sigma^+$}
\put(82,43){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex5_cycle} with $n=3,$ $a=-1,$ $b=-1,$ $d=-\frac{1}{2},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex5_cycle}.}
\label{limit_cycle_5}
\end{center}
\end{figure}
\begin{proposition} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a<0<b,$ $y_0,d>0,$ $n=4k+1$ for some integer $k\geq 1,$ and $0<\frac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}<\cot\left(\frac{(n-2)\pi}{2(n-1)}\right)y_0,$ then the PWHS
\begin{equation}\label{ex6_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=i(z+iy_0)^n,\text{ when } \Im{(z)}>0, \\[5pt]
\dot{z}^{-}=(a+ib)(z-d), \text{ when } \Im{(z)}< 0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{+}=i(z+iy_0)^n$. Writing $\dot{z}^+$ in polar form we have
\begin{equation}\label{pczn6}
\left\{\begin{array}{rcl}
\dot{r}&=&-r^n\sin(n-1)\theta,\\
\dot{\theta}&=&r^{n-1}\cos(n-1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ By the proof of Proposition \ref{prop_polar2}, we know that the solutions of $z^+$ are symmetric about the $y$-axis. Now, consider the solution $z^+(t)$ of \eqref{ex6_cycle} with initial condition $z^+(0)=w_0>0$. By the symmetry of the solutions of \eqref{pczn6}, there exists $t_0>0$ such that $z^+(t_0)=-w_0.$ Moreover, \[z^-(t)=-(d+w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{-}=(a+bi)(z-d)$ satisfying $z^-(0)=-w_0$ and $z^{-}(\frac{\pi}{b})=d+(d+w_0)e^{\frac{a\pi}{b}}$. Consequently, the Poincaré map at $z=w_0$ is given by $\Pi(w_{0})=d+(d+w_0)e^{\frac{a\pi}{b}}$, with $\Pi'(w_{0})=e^{\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(w_{0})=w_{0}$. Since $a\neq 0$, it has a unique solution, given by $\dfrac{d(1+e^{\frac{a\pi}{b}})}{1-e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle}), and it is stable.
\end{proof}
Notice that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=1,$ $d=\frac{3}{10},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_6}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.40]{limit_cycle_6.pdf}
\put(-7,23){$\Sigma^-$}
\put(-7,58){$\Sigma^+$}
\put(82,42){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex6_cycle} with $n=5,$ $a=-1,$ $b=1,$ $d=\frac{3}{10},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex6_cycle}.}
\label{limit_cycle_6}
\end{center}
\end{figure}
We now study vector fields admitting poles, $\frac{1}{z^n}$ for $n\geq 1$, which split into four cases depending on $n$. For that, we use the symmetry of this normal form and the fact that the rays $\frac{j\pi}{n+1}$, $j\in\{1,\dots, 2(n+1)\}$ (resp. $\frac{j\pi}{2(n+1)}$, $j\in\{1,3,\dots, 4(n+1)-1\}$) are invariant under the flow of the equation $\dot{z}=\frac{1}{z^n},$ with $n$ even (resp. $\dot{z}=\frac{i}{z^n}$, with $n$ odd).
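The invariance of these rays follows from the polar forms used below: for $\dot{z}=1/z^n$ the angular equation is $\dot{\theta}=-r^{-n-1}\sin(n+1)\theta$, and for $\dot{z}=i/z^n$ it is $\dot{\theta}=r^{-n-1}\cos(n+1)\theta$, so a ray $\theta=\mathrm{const}$ is invariant exactly when the trigonometric factor vanishes. A small numerical sketch of this observation (illustrative only; the function names are ours):

```python
import math

# Angular factor of theta' for dz/dt = 1/z^n (n even): -sin((n+1)*theta).
def ang_even(n, theta):
    return -math.sin((n + 1) * theta)

# Angular factor of theta' for dz/dt = i/z^n (n odd): cos((n+1)*theta).
def ang_odd(n, theta):
    return math.cos((n + 1) * theta)

# n even: the rays j*pi/(n+1), j = 1, ..., 2(n+1), are invariant.
n = 2
for j in range(1, 2 * (n + 1) + 1):
    assert abs(ang_even(n, j * math.pi / (n + 1))) < 1e-9

# n odd: the rays j*pi/(2(n+1)), j = 1, 3, ..., 4(n+1)-1, are invariant.
n = 3
for j in range(1, 4 * (n + 1), 2):
    assert abs(ang_odd(n, j * math.pi / (2 * (n + 1)))) < 1e-9
```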
For this normal form we consider real singularities of pole type in the following sense: given a piecewise smooth vector field
\begin{equation}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=f^{+}(z), \text{ when } \Im(z)> 0,\\[5pt]
\dot{z}^{-}=f^{-}(z),\text{ when }\Im(z)<0,
\end{array}
\right.
\end{aligned}
\end{equation}
we say that a singularity of pole type $z_0$ of $f^+$ (resp. $f^-$) is real when $z_0\in\Sigma^+$ (resp. $z_0\in\Sigma^-$). We recall that it is also possible to construct limit cycles using virtual singularities.
\begin{theorem}\label{cl3} Given $n\in\mathbb{N}_{n\geq 1}$, there exist non-zero real numbers $a,b,d,$ and $y_0$ satisfying Table \eqref{table2}, such that the PWHS
\begin{equation}\label{1_ex_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(a+ib)(z-d), \text{ when } \Im{(z)}> 0,\\[5pt]
\dot{z}^{-}=\frac{i^m}{(z+iy_0)^n},\text{ when } \Im{(z)}<0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle, where $m=0$ if $n$ is even and $m=1$ otherwise. \end{theorem}
\begin{equation}\label{table2}
\begin{array}{|| c|c | c | c | c | c | c ||}
\hline
n&k &a &b & d & y_0& \text{Main condition} \\
\hline\hline
4k-2&\geq 1 & - & - & + & + &0<\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<\cot\left(\dfrac{n\pi}{2(n+1)}\right)y_0\\
\hline
4k-1&\geq 1 & - & - & + & + & 0<\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<\cot\left(\dfrac{n\pi}{2(n+1)}\right)y_0 \\
\hline
4k&\geq 1 & - & + & - & + & \cot\left(\dfrac{(n+2)\pi}{2(n+1)}\right)y_0<-\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<0\\
\hline
4k+1&\geq 0 & - & + & - & + & \cot\left(\dfrac{(n+2)\pi}{2(n+1)}\right)y_0<-\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<0\\
\hline
\end{array}
\end{equation}
The proof of this theorem is an immediate consequence of the following four propositions.
\begin{proposition}\label{prop_polar3} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,b<0,$ $d,y_0>0,$ $n=4k-2$ for some integer $k\geq 1,$ and $0<\frac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<\cot\left(\frac{n\pi}{2(n+1)}\right)y_0,$ then the PWHS
\begin{equation}\label{ex7_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(a+ib)(z-d), \text{ when } \Im{(z)}> 0, \\[5pt]
\dot{z}^{-}=\frac{1}{(z+iy_0)^n},\text{ when } \Im{(z)}<0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{-}=\frac{1}{(z+iy_0)^n}$. First, we shall prove that the solutions of $z^-$ are symmetric about the $y$-axis. Indeed, writing $\dot{z}^-$ in polar form we have
\begin{equation}\label{pczn7}
\left\{\begin{array}{rcl}
\dot{r}&=&r^{-n}\cos(n+1)\theta,\\
\dot{\theta}&=&-r^{-n-1}\sin(n+1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ It is easy to see that the orbits of this system satisfy the equation
\begin{equation}\label{rzn7}
r=\frac{e^C}{|\sin(n+1)\theta|^{\frac{1}{n+1}}}.
\end{equation}
Since equation \eqref{rzn7} takes the same value at $\pi-\theta$ and at $\theta$, the orbits of \eqref{pczn7} are symmetric with respect to the straight line $\theta=\frac{\pi}{2};$ hence the solutions of $\dot{z}^-$ are symmetric with respect to the $y$-axis. Now, consider the solution $z^-(t)$ of \eqref{ex7_cycle} with initial condition $z^-(0)=w_0>0$. By the symmetry of the solutions of \eqref{pczn7}, there exists $t_0>0$ such that $z^-(t_0)=-w_0.$ Moreover, \[z^+(t)=-(d+w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{+}=(a+bi)(z-d)$ satisfying $z^+(0)=-w_0$ and $z^{+}(-\frac{\pi}{b})=d+(d+w_0)e^{-\frac{a\pi}{b}}$. Consequently, the Poincaré map at $z=w_0$ is given by $\Pi(w_{0})=d+(d+w_0)e^{-\frac{a\pi}{b}}$, with $\Pi'(w_{0})=e^{-\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(w_{0})=w_{0}$. Since $a\neq 0$, it has a unique solution, given by $\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle_2}), and it is stable.
\end{proof}
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.3]{limit_cycle_unique3.pdf}
\put(101,51){$\Sigma$}
\put(28,1){$\frac{(n+2)\pi}{2(n+1)}$}
\put(53,2){$\frac{n\pi}{2(n+1)}$}
\end{overpic}
\caption{Uniqueness of the limit cycle.}
\label{limit_cycle_unique}
\end{center}
\end{figure}
\begin{remark}\label{unique_cycle_2} The limit cycle found lies in the sector determined by the rays $\frac{n\pi}{2(n+1)}$ and $\frac{(n+2)\pi}{2(n+1)}$; by the invariance of the rays of $z^-$ and the orientation of the trajectories, this limit cycle is unique (see Figure \ref{limit_cycle_unique}). \end{remark}
It is important to emphasize that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=-1,$ $d=\frac{1}{2},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_7}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.29]{limit_cycle_7.pdf}
\put(-7,27){$\Sigma^-$}
\put(-7,68){$\Sigma^+$}
\put(102,51){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex7_cycle} with $n=2,$ $a=-1,$ $b=-1,$ $d=\frac{1}{2},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex7_cycle}.}
\label{limit_cycle_7}
\end{center}
\end{figure}
\begin{proposition} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,d<0,$ $b,y_0>0,$ $n=4k$ for some integer $k\geq 1,$ and $\cot\left(\frac{(n+2)\pi}{2(n+1)}\right)y_0<-\frac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<0,$ then the PWHS
\begin{equation}\label{ex8_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(a+ib)(z-d), \text{ when } \Im{(z)}> 0, \\[5pt]
\dot{z}^{-}=\frac{1}{(z+iy_0)^n},\text{ when } \Im{(z)}<0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{-}=\frac{1}{(z+iy_0)^n}$. Writing $\dot{z}^-$ in polar form we have
\begin{equation}\label{pczn8}
\left\{\begin{array}{rcl}
\dot{r}&=&r^{-n}\cos(n+1)\theta,\\
\dot{\theta}&=&-r^{-n-1}\sin(n+1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ By the proof of Proposition \ref{prop_polar3}, we know that the solutions of $z^-$ are symmetric about the $y$-axis. Now, consider the solution $z^-(t)$ of \eqref{ex8_cycle} with initial condition $z^-(0)=-w_0<0$. By the symmetry of the solutions of \eqref{pczn8}, there exists $t_0>0$ such that $z^-(t_0)=w_0.$ Moreover, \[z^+(t)=-(d-w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{+}=(a+bi)(z-d)$ satisfying $z^+(0)=w_0$ and $z^{+}(\frac{\pi}{b})=d+(d-w_0)e^{\frac{a\pi}{b}}$. Therefore, the Poincaré map around $z=-w_0$ is given by $\Pi(z)=d+(d+z)e^{\frac{a\pi}{b}}$, with $\Pi'(-w_{0})=e^{\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(-w_{0})=-w_{0}$. Since $a\neq 0$, it has a unique solution, given by $w_0=\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle_2}), and it is stable.
\end{proof}
Notice that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=1,$ $d=-\frac{1}{5},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_8}).
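As a numerical sanity check (illustrative only; `Pi` is our name for the upper half-return map from the proof), the parameters $a=-1$, $b=1$, $d=-\tfrac{1}{5}$, $y_0=1$, $n=4$ used in this example indeed give a fixed point of the return map and satisfy the main condition of Table \eqref{table2} for the row $n=4k$:

```python
import math

# Example parameters for the pole case dz^-/dt = 1/(z+i*y0)^n with n = 4k, k = 1.
a, b, d, y0, n = -1.0, 1.0, -0.2, 1.0, 4

E = math.exp(a * math.pi / b)
# Upper half-return map from the proof: Pi(z) = d + (d + z) * e^{a*pi/b}.
Pi = lambda z: d + (d + z) * E
# Claimed fixed point: w0 = d(1 + e^{a*pi/b}) / (-1 + e^{a*pi/b}).
w0 = d * (1 + E) / (-1 + E)

assert w0 > 0 and abs(Pi(-w0) - (-w0)) < 1e-12  # -w0 is the unique fixed point
assert 0 < E < 1                                 # |Pi'| = e^{a*pi/b} < 1: stability
# Main condition: cot((n+2)*pi/(2(n+1))) * y0 < -w0 < 0.
t = (n + 2) * math.pi / (2 * (n + 1))
assert (math.cos(t) / math.sin(t)) * y0 < -w0 < 0
```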
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.29]{limit_cycle_8.pdf}
\put(-7,27){$\Sigma^-$}
\put(-7,68){$\Sigma^+$}
\put(102,51){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex8_cycle} with $n=4,$ $a=-1,$ $b=1,$ $d=-\frac{1}{5},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex8_cycle}.}
\label{limit_cycle_8}
\end{center}
\end{figure}
\begin{proposition}\label{prop_polar4} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,d<0,$ $b,y_0>0,$ $n=4k+1$ for some integer $k\geq 0,$ and $\cot\left(\frac{(n+2)\pi}{2(n+1)}\right)y_0<-\frac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<0,$ then the PWHS
\begin{equation}\label{ex9_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(a+ib)(z-d), \text{ when } \Im{(z)}> 0, \\[5pt]
\dot{z}^{-}=\frac{i}{(z+iy_0)^n},\text{ when } \Im{(z)}<0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{-}=\frac{i}{(z+iy_0)^n}$. First, we shall prove that the solutions of $z^-$ are symmetric about the $y$-axis. Indeed, writing $\dot{z}^-$ in polar form we have
\begin{equation}\label{pczn9}
\left\{\begin{array}{rcl}
\dot{r}&=&r^{-n}\sin(n+1)\theta,\\
\dot{\theta}&=&r^{-n-1}\cos(n+1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ It is easy to see that the orbits of this system satisfy the equation
\begin{equation}\label{rzn9}
r=\frac{e^C}{|\cos(n+1)\theta|^{\frac{1}{n+1}}}.
\end{equation}
Since equation \eqref{rzn9} takes the same value at $\pi-\theta$ and at $\theta$, the orbits of \eqref{pczn9} are symmetric with respect to the straight line $\theta=\frac{\pi}{2};$ hence the solutions of $\dot{z}^-$ are symmetric with respect to the $y$-axis. Now, consider the solution $z^-(t)$ of \eqref{ex9_cycle} with initial condition $z^-(0)=-w_0<0$. By the symmetry of the solutions of \eqref{pczn9}, there exists $t_0>0$ such that $z^-(t_0)=w_0.$ Moreover, \[z^+(t)=-(d-w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{+}=(a+bi)(z-d)$ satisfying $z^+(0)=w_0$ and $z^{+}(\frac{\pi}{b})=d+(d-w_0)e^{\frac{a\pi}{b}}$. Therefore, the Poincaré map around $z=-w_0$ is given by $\Pi(z)=d+(d+z)e^{\frac{a\pi}{b}}$, with $\Pi'(-w_{0})=e^{\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(-w_{0})=-w_{0}$. Since $a\neq 0$, it has a unique solution, given by $w_0=\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle_2}), and it is stable.
\end{proof}
Recall that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=1,$ $d=-\frac{1}{2},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_9}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.29]{limit_cycle_9.pdf}
\put(-7,27){$\Sigma^-$}
\put(-7,68){$\Sigma^+$}
\put(102,51){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex9_cycle} with $n=1,$ $a=-1,$ $b=1,$ $d=-\frac{1}{2},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex9_cycle}.}
\label{limit_cycle_9}
\end{center}
\end{figure}
\begin{proposition} Let $a,b,d,$ and $y_0$ be non-zero real numbers. If $a,b<0,$ $d,y_0>0,$ $n=4k-1$ for some integer $k\geq 1,$ and $0<\frac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}<\cot\left(\frac{n\pi}{2(n+1)}\right)y_0,$ then the PWHS
\begin{equation}\label{ex10_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=(a+ib)(z-d), \text{ when } \Im{(z)}> 0, \\[5pt]
\dot{z}^{-}=\frac{i}{(z+iy_0)^n},\text{ when } \Im{(z)}<0,
\end{array}
\right.
\end{aligned}
\end{equation}
has a unique stable limit cycle. \end{proposition}
\begin{proof}
Consider $\dot{z}^{-}=\frac{i}{(z+iy_0)^n}$. Writing $\dot{z}^-$ in polar form we have
\begin{equation}\label{pczn10}
\left\{\begin{array}{rcl}
\dot{r}&=&r^{-n}\sin(n+1)\theta,\\
\dot{\theta}&=&r^{-n-1}\cos(n+1)\theta,
\end{array}\right.
\end{equation}
where $z+y_0i=re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$ By the proof of Proposition \ref{prop_polar4}, we know that the solutions of $z^-$ are symmetric about the $y$-axis. Now, consider the solution $z^-(t)$ of \eqref{ex10_cycle} with initial condition $z^-(0)=w_0>0$. By the symmetry of the solutions of \eqref{pczn10}, there exists $t_0>0$ such that $z^-(t_0)=-w_0.$ Moreover, \[z^+(t)=-(d+w_0)e^{at}(\cos (bt)+i\sin (bt))+d\] is a solution of $\dot{z}^{+}=(a+bi)(z-d)$ satisfying $z^+(0)=-w_0$ and $z^{+}(-\frac{\pi}{b})=d+(d+w_0)e^{-\frac{a\pi}{b}}$. Consequently, the Poincaré map at $z=w_0$ is given by $\Pi(w_{0})=d+(d+w_0)e^{-\frac{a\pi}{b}}$, with $\Pi'(w_{0})=e^{-\frac{a\pi}{b}}<1$. We now seek solutions of the equation $\Pi(w_{0})=w_{0}$. Since $a\neq 0$, it has a unique solution, given by $\dfrac{d(1+e^{\frac{a\pi}{b}})}{-1+e^{\frac{a\pi}{b}}}$; thus there is exactly one limit cycle (see Remark \ref{unique_cycle_2}), and it is stable.
\end{proof}
We emphasize that the conditions given in the previous proposition are not empty. Indeed, taking $a=-1,$ $b=-1,$ $d=\frac{3}{10},$ and $y_0=1$ we obtain a limit cycle (see Figure \ref{limit_cycle_10}).
\begin{figure}[h]
\begin{center}
\begin{overpic}[scale=0.29]{limit_cycle_10.pdf}
\put(-7,27){$\Sigma^-$}
\put(-7,68){$\Sigma^+$}
\put(102,51){$\Sigma$}
\end{overpic}
\caption{Phase portrait of PWHS \eqref{ex10_cycle} with $n=3,$ $a=-1,$ $b=-1,$ $d=\frac{3}{10},$ and $y_0=1$. The red trajectory is the limit cycle of \eqref{ex10_cycle}.}
\label{limit_cycle_10}
\end{center}
\end{figure}
To end this section, we give an example of a limit cycle of a PWHS using the normal form $\frac{\gamma z^n}{1+z^{n-1}},$ for $n=2$ and $\gamma=1.$
\begin{example} The PWHS
\begin{equation}\label{ex11_cycle}
\begin{aligned}
\left\{\begin{array}{l}
\dot{z}^{+}=\dfrac{(z+\frac{i}{5})^2}{1+(z+\frac{i}{5})},\text{ when } \Im{(z)}>0, \\[5pt]
\dot{z}^{-}=i(z+0.0381415), \text{ when } \Im{(z)}< 0,
\end{array}
\right.
\end{aligned}
\end{equation}
has at least one unstable limit cycle. Indeed, writing $\dot{z}^+$ in polar form we have
\begin{equation}\label{pczn11}
\left\{\begin{array}{rcl}
\dot{r}&=&\frac{r^2(r+\cos(\theta))}{1+r^2+2r\cos(\theta)},\\
\dot{\theta}&=&\frac{r\sin(\theta)}{1+r^2+2r\cos(\theta)},
\end{array}\right.
\varepsilonnd{equation} where $z+\frac{i}{5}=re^{i\thetaheta}=r(\cos(\thetaheta)+i\varepsilonnsuremath{\mathbb{S}}in(\thetaheta)).$ It is easy to see that the orbits of system \varepsilonqref{pczn11} satisfy the equation $r=\frac{-\varepsilonnsuremath{\mathbb{S}}in(\thetaheta)}{\thetaheta-c_1},$ where $c_1=\alpharctan\left(\frac{\frac{1}{5}}{w_0}\right)+\frac{\frac{1}{5}}{w_0^2+(\frac{1}{5})^2},$ with $z^+(0)=w_0.$ Thus, the solutions of system \varepsilonqref{pczn11} can be parametrized by $$z^+(\thetaheta)=\frac{-\varepsilonnsuremath{\mathbb{S}}in(\thetaheta)\cos(\thetaheta)}{\thetaheta-c_1}-i\left(\frac{\varepsilonnsuremath{\mathbb{S}}in^2(\thetaheta)}{\thetaheta-c_1}+\frac{1}{5}\right).$$ Notice that if $w_1=\frac{1}{20}$ and $w_2=\frac{13}{100}$, then $z^+(w_1)\alphapprox -0.100229$ and $z^+(w_2)\alphapprox -0.238348.$ Now, \[z_1^-(t)=-0.062087(\cos (t)+i\varepsilonnsuremath{\mathbb{S}}in (t))-0.0381415\]\quad and \[z_2^-(t)=-0.200207(\cos (t)+i\varepsilonnsuremath{\mathbb{S}}in (t))-0.0381415\] are the solutions of $\deltaot{z}^{-}=i(z+0.0381415)$ satisfying that $z_1^-(0)=-0.100229$ and $z_2^{-}(0)=-0.238348$, respectively. Thus, $z_1^-(\partiali)\alphapprox 0.0239455$ and $z_2^-(\partiali)\alphapprox 0.162065.$ Therefore, $\varepsilonnsuremath{\mathbb{D}}elta=\partiali(w_1)-w_1>0$ and $\varepsilonnsuremath{\mathbb{D}}elta=\partiali(w_2)-w_2<0.$ Then, by continuity there exists $w_0\in(w_1,w_2),$ such that $\partiali(\widetilde{w_0})=\widetilde{w_0}.$ Consequently, system \varepsilonqref{ex11_cycle} has a periodic orbit. Even more, numerical approximations indicate that it is an unstable limit cycle(see Figure \ref{limit_cycle_11}). \varepsilonnd{example} \betaegin{figure}[h] \betaegin{center} \betaegin{overpic}[scale=0.35]{limit_cycle_11.pdf} \partialut(-7,27){$\Sigma^-$} \partialut(-7,68){$\Sigma^+$} \partialut(102,51){$\Sigma$} \varepsilonnd{overpic} \caption{Phase portrait of PWHS \varepsilonqref{ex11_cycle}. 
The red trajectory is the limit cycle of \eqref{ex11_cycle}.} \label{limit_cycle_11} \end{center} \end{figure} \section{Homoclinic Orbits of PWHS}\label{sec:homoclinic_orbits} This section is devoted to giving some families of PWHS that have homoclinic orbits. To this end, notice that it is possible to form homoclinic orbits in PWHS by considering $\dot{z}^-=(z-z_0)^n$ or $\dot{z}^-=\frac{1}{(z-z_0)^n}$ together with $\dot{z}^+=ibz$. For this we use the invariant rays of $\dot{z}^-$ and, depending on the case, we consider $z_0$ as a real singularity or a real equilibrium point of $\dot{z}^-$. \begin{proposition}\label{propho1} Let $n\in\mathbb{N}$ with $n\geq 1$, and let $b$ and $y_0$ be non-zero real numbers such that $y_0>0$ and the sign of $b$ is as given in table \eqref{hol_table1}. Then the PWHS \begin{equation}\label{ex_hom_orb_1} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{+}=ibz, \text{ when } \Im{(z)}> 0,\\[5pt] \dot{z}^{-}=\frac{i^m}{(z+iy_0)^n},\text{ when } \Im{(z)}<0, \end{array} \right. \end{aligned} \end{equation} has at least one homoclinic orbit, where $m=0$ if $n$ is even and $m=1$ otherwise. \begin{equation}\label{hol_table1} \begin{array}{||c|c||} \hline n& b \\ \hline\hline 4k-1&+ \\ \hline 4k&-\\ \hline 4k-2&+\\ \hline 4k+1&-\\ \hline \end{array} \end{equation} \end{proposition} \begin{proof} Without loss of generality assume $b>0.$ Consider the invariant rays $\frac{n\pi}{2(n+1)}$ and $\frac{(n+2)\pi}{2(n+1)}$ associated with $z^-.$ Notice that these rays intersect $\Sigma$ at the points $x=\cot\left(\frac{n\pi}{2(n+1)}\right)y_0$ and $x=\cot\left(\frac{(n+2)\pi}{2(n+1)}\right)y_0,$ respectively.
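The key point is that these two intersection abscissas are antipodal: the two ray angles sum to $\pi$, so their cotangents are negatives of each other, and the half-turn of $\dot{z}^{+}=ibz$ carries one point to the other. A quick numerical sanity check of this antipodality (our own sketch, using only the Python standard library):

```python
import math

def ray_angles(n):
    # Angles of the two invariant rays used in the proof above.
    return n * math.pi / (2 * (n + 1)), (n + 2) * math.pi / (2 * (n + 1))

def sigma_intersections(n, y0=1.0):
    # x-coordinates where the rays based at -i*y0 meet Sigma = {Im z = 0}.
    t1, t2 = ray_angles(n)
    return y0 / math.tan(t1), y0 / math.tan(t2)

# The two angles sum to pi, so cot(t2) = -cot(t1): the intersection points
# are antipodal, and the half-turn of dz/dt = ibz maps one to the other.
checks = []
for n in range(1, 9):
    x1, x2 = sigma_intersections(n)
    checks.append(abs(x1 + x2) < 1e-12)
print(all(checks))  # True
```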
Now, \[z^+(t)=\cot\left(\frac{n\pi}{2(n+1)}\right)y_0\left(\cos (bt)+i\sin (bt)\right)\] is a solution of $\dot{z}^{+}=ibz$ satisfying $z^+(0)=\cot\left(\frac{n\pi}{2(n+1)}\right)y_0$ and $z^{+}(\frac{\pi}{b})=\cot\left(\frac{(n+2)\pi}{2(n+1)}\right)y_0$. Thus, we get a homoclinic orbit of \eqref{ex_hom_orb_1} (see Figure \ref{hol_orb_1}). \end{proof} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.35]{hol_orb_1.pdf} \put(-9,27){$\Sigma^-$} \put(-9,68){$\Sigma^+$} \put(102,51){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{ex_hom_orb_1}, with $n=1,$ $b=1,$ and $y_0=1$. The red trajectory is a homoclinic orbit of \eqref{ex_hom_orb_1}.} \label{hol_orb_1} \end{center} \end{figure} \begin{proposition}\label{propho2} Let $n\in\mathbb{N}$ with $n>2$, and let $b$ and $y_0$ be non-zero real numbers such that $y_0>0$ and the sign of $b$ is as given in table \eqref{hol_table2}. Then the PWHS \begin{equation}\label{ex_hom_orb_2} \begin{aligned} \left\{\begin{array}{l} \dot{z}^{+}=ibz, \text{ when } \Im{(z)}> 0,\\[5pt] \dot{z}^{-}=i^m(z+iy_0)^n,\text{ when } \Im{(z)}<0, \end{array} \right. \end{aligned} \end{equation} has at least one homoclinic orbit, where $m=0$ if $n$ is even and $m=1$ otherwise.
\begin{equation}\label{hol_table2} \begin{array}{||c|c||} \hline n& b \\ \hline\hline 4k-1&- \\ \hline 4k&-\\ \hline 4k-2&+\\ \hline 4k+1&+\\ \hline \end{array} \end{equation} \end{proposition} \begin{proof} Without loss of generality assume $b>0.$ Consider the invariant rays $\frac{n\pi}{2(n-1)}$ and $\frac{(n-2)\pi}{2(n-1)}$ associated with $z^-.$ Notice that these rays intersect $\Sigma$ at the points $x=\cot\left(\frac{n\pi}{2(n-1)}\right)y_0$ and $x=\cot\left(\frac{(n-2)\pi}{2(n-1)}\right)y_0,$ respectively. Now, \[z^+(t)=\cot\left(\frac{(n-2)\pi}{2(n-1)}\right)y_0\left(\cos (bt)+i\sin (bt)\right)\] is a solution of $\dot{z}^{+}=ibz$ satisfying $z^+(0)=\cot\left(\frac{(n-2)\pi}{2(n-1)}\right)y_0$ and $z^{+}(\frac{\pi}{b})=\cot\left(\frac{n\pi}{2(n-1)}\right)y_0$. Thus, we get a homoclinic orbit of \eqref{ex_hom_orb_2} (see Figure \ref{hol_orb_2}). \end{proof} \begin{figure}[h] \begin{center} \begin{overpic}[scale=0.35]{hol_orb_2.pdf} \put(-9,27){$\Sigma^-$} \put(-9,68){$\Sigma^+$} \put(102,51){$\Sigma$} \end{overpic} \caption{Phase portrait of PWHS \eqref{ex_hom_orb_2}, with $n=4,$ $b=-1,$ and $y_0=1$. The red trajectory is a homoclinic orbit of \eqref{ex_hom_orb_2}.} \label{hol_orb_2} \end{center} \end{figure} \section{Acknowledgments} This article was possible thanks to the scholarship granted by the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES), in the scope of the Program CAPES-Print, process number 88887.310463/2018-00, International Cooperation Project number 88881.310741/2018-01. Paulo Ricardo da Silva is also partially supported by São Paulo Research Foundation (FAPESP) grant 2019/10269-3.
Luiz Fernando Gouveia is supported by São Paulo Research Foundation (FAPESP) grant 2020/04717-0. Gabriel Rondón is supported by São Paulo Research Foundation (FAPESP) grant 2020/06708-9. \bibliographystyle{abbrv} \bibliography{references1} \end{document}
\begin{document} \begin{abstract} We study the Fano scheme of $k$-planes contained in the hypersurface cut out by a generic sum of products of linear forms. In particular, we show that under certain hypotheses, linear subspaces of sufficiently high dimension must be contained in a coordinate hyperplane. We use our results on these Fano schemes to obtain a lower bound for the product rank of a form. This provides a new lower bound for the product ranks of the $6\times 6$ Pfaffian and $4\times 4$ permanent, as well as giving a new proof that the product and tensor ranks of the $3\times 3$ determinant equal five. Based on our results, we formulate several conjectures. \end{abstract} \title{Fano Schemes for Generic Sums of Products of Linear Forms} \section{Introduction} Given an embedded projective variety $X\subset \mathbb{P}^n$, its Fano scheme $\mathbf{F}_k(X)$ is the fine moduli space parametrizing projective $k$-planes contained in $X$. Such Fano schemes have been considered extensively for the case of sufficiently general hypersurfaces \cite{altman:77a, barth:81a,langer:97a} but less so for particular hypersurfaces \cite{harris:98a, beheshti:06a, ilten:15a}. In this article, we study the Fano schemes $\mathbf{F}_k(X)$ for the special family of irreducible hypersurfaces \[ X=X_{r,d}=V\left(\sum_{i=1}^r\prod_{j=1}^d c_{ij}x_{ij}\right)\subset \mathbb{P}^{rd-1}, \quad c_{ij} \in \mathbb{K}^* \] for any $r>1,d>2$. Up to projective equivalence these hypersurfaces do not depend on the choice of the $c_{ij} \in \mathbb{K}^*$. Hence, in the following we may assume that $c_{ij}=1$ for all $1 \leq i \leq r$ and $1 \leq j \leq d$. Moreover, for $d > 2$ the hypersurfaces $X_{r,d}$ are always singular along a union of coordinate linear subspaces of codimension $2r$. We exclude the case $d=2$ since this is a smooth quadric hypersurface with significantly different behaviour.
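As an illustration of the singularity claim, for $r=2$, $d=3$ the gradient of the defining polynomial vanishes identically once two variables from each of the $r$ products are set to zero, giving a codimension-$4=2r$ component of the singular locus. A small sketch of this check (assuming sympy is available; the chosen vanishing variables are just one such component):

```python
import sympy as sp

# Defining polynomial of X_{2,3} in P^5: f = x11*x12*x13 + x21*x22*x23.
xs = sp.symbols('x11 x12 x13 x21 x22 x23')
x11, x12, x13, x21, x22, x23 = xs
f = x11*x12*x13 + x21*x22*x23

gradient = [sp.diff(f, v) for v in xs]

# One codimension 2r = 4 component of the singular locus: kill two
# variables in each of the r = 2 products.
on_component = {x11: 0, x12: 0, x21: 0, x22: 0}
print(all(g.subs(on_component) == 0 for g in gradient))  # True

# Killing only one variable per product does not force a singular point.
off_component = {x11: 0, x21: 0}
print(all(g.subs(off_component) == 0 for g in gradient))  # False
```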
In \cite[\S 3]{ilten:16a}, Z.~Teitler and the first author considered the Fano scheme $\mathbf{F}_5(X_{4,3})$. With the help of a computer-assisted calculation, they observed the curious fact that every $5$-plane $L$ of $X_{4,3}$ is either contained in a coordinate hyperplane, or there exist $1\leq a < b \leq 4$ such that $L$ is contained in $V(x_{a1}x_{a2}x_{a3}+x_{b1}x_{b2}x_{b3})$. This motivates the following definition: \begin{defn}[$\lambda$-splitting] Consider $\lambda\in\mathbb{N}$. A $k$-plane $L$ contained in $X_{r,d}$ admits a $\lambda$-\emph{splitting} if there exist $1\leq a_1 < a_2 <\ldots <a_\lambda\leq r$ such that $L$ is contained in \[ V\left(\sum_{i=1}^\lambda\prod_{j=1}^d x_{a_ij}\right)\subset \mathbb{P}^{rd-1}. \] We say that $\mathbf{F}_k(X_{r,d})$ is $m$-split if every $k$-plane of $X_{r,d}$ admits a $\lambda$-splitting for some $\lambda\leq m$. \end{defn} \noindent The above-mentioned observation from \cite{ilten:16a} can now be rephrased as the statement that $\mathbf{F}_5(X_{4,3})$ is two-split. We make two conjectures regarding the splitting behaviour of these Fano schemes: \begin{conj}[One-Splitting]\label{conj:one-split} Assume $r\geq 2$ and $d\geq 3$. The Fano scheme $\mathbf{F}_k(X_{r,d})$ is one-split if and only if \[ k\geq \begin{cases} \frac{r}{2}\cdot d & r\ \textrm{even}\\ \frac{r-1}{2}\ \cdot d + 1 & r\ \textrm{odd}\\ \end{cases}. \] \end{conj} \begin{conj}[Two-Splitting]\label{conj:two-split} Assume $r$ is even and $d\geq 3$. The Fano scheme $\mathbf{F}_k(X_{r,d})$ is two-split if \[ k\geq \frac{r}{2}\cdot d-1. \] \end{conj} \noindent We show in Example \ref{ex:sharp} that the bound on $k$ of Conjecture \ref{conj:one-split} is indeed necessary for one-splitting. However, the sufficiency of the conditions of Conjectures \ref{conj:one-split} and \ref{conj:two-split} for one- and two-splitting is less obvious.
Our belief in these conjectures is motivated by our Theorem \ref{thm:conj} below and the connection with the property $C^d_m$ described below. It would be interesting to formulate conjectures characterizing $m$-splitting of $\mathbf{F}_k(X_{r,d})$ in general (including the case $r$ odd and $m=2$), but we do not know what form they should take. For our application below (Theorem \ref{thm:bound}), understanding the cases $m=1$ and $m=2$ suffices. The following example illustrates the ideas we will use to attack Conjectures \ref{conj:one-split} and \ref{conj:two-split}: \begin{ex}[$\mathbf{F}_5(X_{4,3})$ is two-split] We are considering the hypersurface $X_{4,3}$ in $\mathbb{P}^{11}$, equipped with coordinates $x_{11},\ldots,x_{43}$. Any $5$-plane $L$ in $\mathbb{P}^{11}$ can be represented as the rowspan of a full rank $6\times 12$ matrix $B=(b_{\alpha,ij})$, with rows indexed by $\alpha=0,\ldots ,5$ and column $ij$ corresponding to the homogeneous coordinates $x_{ij}$ on $\mathbb{P}^{11}$. We define linear forms $y_{ij}$ in $\mathbb{K}[z_0,\ldots, z_5]$ by \[y_{ij}=\sum_\alpha b_{\alpha,ij}z_\alpha \] and note that $L\subset X_{4,3}$ if and only if the form \[ h:=\sum_{i=1}^4\prod_{j=1}^3 y_{ij} \] is equal to zero. Since $B$ has full rank, we can assume that there is a submatrix $B'$ of $B$ consisting of six columns which form the identity matrix. Grouping the columns of $B$ into the $4$ blocks whose indices $ij$ have the same $i$ value, we see that either: \begin{enumerate} \item There are two blocks which each contain at least two columns of $B'$. This implies $h=f_1z_0z_1+f_2z_2z_3-\boldsymbol{\ell}_1-\boldsymbol{\ell}_2$, where $f_1,f_2$ are linear forms and $\boldsymbol{\ell}_1,\boldsymbol{\ell}_2$ are products of three linear forms. Or, \item Every block contains at least one column of $B'$, and there is a block containing three columns of $B'$. This implies $h=z_0z_1z_2+f_1z_3+f_2z_4+f_3z_5$.
\end{enumerate} If $L\subset X_{4,3}$ and hence $h=0$, the second case cannot occur, since $z_0z_1z_2$ is not in the ideal generated by $z_3,z_4,z_5$. In the first case, the equation $h=0$ translates to \[ \boldsymbol{\ell}_1+\boldsymbol{\ell}_2=z_0z_1f_1+z_2z_3f_2. \] We will see in \S \ref{sec:sums} that this is only possible if either one of $f_1,f_2$ vanishes (in which case $L$ is one-split), or (after permuting indices) $\boldsymbol{\ell}_1=z_0z_1f_1$ (in which case $L$ is two-split). \end{ex} More generally, in our study of $\mathbf{F}_k(X_{r,d})$ we are led to consider degree $d>1$ homogeneous equations of the form \begin{equation}\label{eqn:sumprod} \boldsymbol{\ell}_1+\ldots+\boldsymbol{\ell}_m=\sum_{i=0}^mf_i\mathbf{x}_i \end{equation} where the $\mathbf{x}_i$ are pairwise coprime squarefree monomials, the $\boldsymbol{\ell}_i$ are degree $d$ products of linear forms (possibly equal to $0$), and the $f_i$ are degree $d-\deg \mathbf{x}_i$ products of linear forms in some polynomial ring (also possibly equal to $0$). The following property will be essential in our analysis: \begin{defn}[Property $C^d_m$] We say that $C^d_m$ is true if, for any equation of the form \eqref{eqn:sumprod} satisfying $\deg \mathbf{x}_i+\deg \mathbf{x}_j\geq d+2$ for all $i\neq j$, it follows that there is some $i$ for which $f_i=0$. \end{defn} \begin{ex} Property $C_1^2$ is simply stating the obvious fact that for variables $x_1,\ldots,x_4$, the form $x_1x_2+x_3x_4$ is not a product of linear forms. Property $C_1^3$ states the less-obvious fact that for variables $x_1,\ldots,x_5$ and non-zero linear form $f$, the form $x_1x_2x_3+fx_4x_5$ is not a product of linear forms. \end{ex} Our first main result relates the above definition to our two conjectures: \begin{thm}\label{thm:reduction} Fix $r\geq 2$, $d\geq 3$ such that either \begin{enumerate} \item $d$ is even, \item $r$ is even and $d\geq r$, or \item $r$ is odd and $r\leq 5$.
\end{enumerate} Suppose that $C^d_m$ is true for all \[m\leq \frac{r-1}{2}.\] Then Conjectures \ref{conj:one-split} and \ref{conj:two-split} hold for this choice of $(r,d)$. \end{thm} Secondly, we use this to prove our conjectures in some special cases. \begin{thm}\label{thm:conj} Property $C^d_m$ is true if $m\leq 2$ or if $d\leq 4$. Furthermore, Conjectures \ref{conj:one-split} and \ref{conj:two-split} hold if $r\leq 6$ or if $d= 4$. \end{thm} Our analysis of Equation \eqref{eqn:sumprod} makes use of relatively elementary methods. However, a more sophisticated approach should also be possible. Equation \eqref{eqn:sumprod} posits that $\sum_{i=0}^mf_i\mathbf{x}_i$ is a point in the $(m-1)$th secant variety of a Chow variety parametrizing degree $d$ products of linear forms. Equations for the Chow variety are classical, going back to Brill and Gordan \cite{GKZ}. More recently, Y.~Guan has provided some equations for secant varieties of Chow varieties \cite{guan2,guan1}. It would be interesting to see if these equations shed light on the vanishing of the $f_i$ from Equation \eqref{eqn:sumprod}. Our motivation for studying $\mathbf{F}_k(X_{r,d})$ is twofold. Firstly, we wish to add to the body of examples of varieties $X$ for which one understands the geometry of $\mathbf{F}_k(X)$. If the Fano scheme $\mathbf{F}_k(X_{r,d})$ is $m$-split for some $m<r$, then $k$-dimensional linear subspaces of $X_{r,d}$ can be understood in terms of linear subspaces of $X_{r',d}$ for certain $r'<r$. We illustrate this by describing the irreducible components of $\mathbf{F}_k(X_{r,d})$ for $k\geq(r-2)(d-1)+1$ whenever $r\leq d+1$ or $d=4$, see Examples \ref{ex:one} and \ref{ex:three}. We also characterize when $\mathbf{F}_k(X_{r,d})$ is connected, see Theorem \ref{thm:connected}. Secondly, we may use our results to obtain lower bounds on the \emph{product rank} of certain forms.
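The block-counting dichotomy used in the example of $\mathbf{F}_5(X_{4,3})$ above (either two blocks each contain at least two columns of $B'$, or every block contains at least one and some block contains three) can be checked exhaustively, since each of the four blocks has only three columns. A short sketch of that enumeration:

```python
from itertools import product

# Blocks = the 4 products; each block has 3 columns, and B' has 6 columns total.
n_valid = 0
dichotomy_holds = True
for counts in product(range(4), repeat=4):   # columns of B' per block, each 0..3
    if sum(counts) != 6:
        continue
    n_valid += 1
    case1 = sum(1 for c in counts if c >= 2) >= 2   # two blocks with >= 2 columns
    case2 = min(counts) >= 1 and max(counts) == 3   # all blocks >= 1, one with 3
    if not (case1 or case2):
        dichotomy_holds = False
print(n_valid, dichotomy_holds)  # 44 True
```

Note that the only distribution with fewer than two blocks holding two or more columns is $(3,1,1,1)$ up to permutation, which is exactly the second case.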
Recall that the \emph{product rank} (also known as Chow rank) of a degree $d$ form $f$ is the smallest number $r$ such that we can write \begin{equation*} f=\boldsymbol{\ell}_1+\ldots+\boldsymbol{\ell}_r \end{equation*} where the $\boldsymbol{\ell}_i$ are products of $d$ linear forms. We denote the product rank of $f$ by $\mathbf{pr}(f)$. Note that product rank may be used to give a lower bound on \emph{tensor rank}, see \cite[\S1.3]{ilten:16a} for details. A form $f$ in $n+1$ variables is \emph{concise} if it cannot be written as a form in fewer variables after a linear change of coordinates. The hypersurface $V(f)$ of a concise degree $d$ form of product rank at most $r$ is isomorphic to the intersection of $X_{r,d}\subset \mathbb{P}^{rd-1}$ with an $n$-dimensional linear subspace. This allows us to relate properties concerning linear subspaces contained in $V(f)$ to product rank. Generalizing \cite[Theorem 3.1]{ilten:16a}, we prove the following: \begin{thm}\label{thm:bound} Let $f$ be a concise irreducible degree $d>1$ form in $n+1$ variables such that $V(f)\subset \mathbb{P}^{n}$ is covered by $k$-planes, and let $r\in \mathbb{N}$. \begin{enumerate} \item If $\mathbf{F}_k(X_{r,d})$ is one-split, then $\mathbf{pr}(f)\neq r$. \item If $r$ is even, $k>n-r$, and \[\mathbf{F}_k(X_{r,d}),\ \mathbf{F}_{k-d}(X_{r-2,d}),\ldots,\ \mathbf{F}_{k-\frac{d(r-4)}{2}}(X_{4,d})\] are two-split, then $\mathbf{pr}(f)\neq r$. \end{enumerate} \end{thm} Applying this to the $3\times 3$ determinant of a generic matrix, we recover that its product and tensor ranks are five \cite{ilten:16a}. Note that we have replaced the computer-aided computation of $\mathbf{F}_{5}(X_{4,3})$ with a conceptual proof. We may also apply Theorem \ref{thm:bound} to the $4\times 4$ determinant $\det_4$ of a generic matrix to obtain $\mathbf{pr}(\det_4)\geq 7$; this is equal to the lower bound one obtains from Derksen and Teitler's lower bound on the Waring rank \cite{derksen:15a}.
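For quadratic forms the product rank is easy to compute: over an algebraically closed field, a quadratic form is a product of two linear forms exactly when its associated symmetric matrix has rank at most $2$. This confirms, for instance, that $\mathbf{pr}(x_1x_2+x_3x_4)=2$ (it is a sum of two products but, having rank $4$, not a single product), which is the content of property $C^2_1$ above. A sketch of this check (assuming sympy is available):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')
xs = [x1, x2, x3, x4]

def gram(q):
    # Symmetric (Gram) matrix of a quadratic form q; over an algebraically
    # closed field, rank <= 2 iff q is a product of two linear forms.
    return sp.Matrix(4, 4, lambda i, j: sp.Rational(1, 2) * sp.diff(q, xs[i], xs[j]))

print(gram(x1*x2 + x3*x4).rank())          # 4: not a single product, so pr = 2
print(gram((x1 + x2) * (x3 - x4)).rank())  # 2: a product of two linear forms
```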
In Example \ref{ex:pfaffian} we apply the theorem to the Pfaffian $f$ of a generic $6 \times 6$ skew-symmetric matrix to obtain $\mathbf{pr}(f)\geq 7$, beating the previous lower bound of $6$. Finally, in Example \ref{ex:perm} we use a slightly different argument to obtain that the product rank (and tensor rank) of the $4\times 4$ permanent is at least $6$, beating the previous lower bound of $5$. The rest of the paper is organized as follows. In \S \ref{sec:sums}, we study equations of the form \eqref{eqn:sumprod}. We use this in \S \ref{sec:fano} to show our splitting results for the Fano schemes $\mathbf{F}_{k}(X_{r,d})$, as well as studying several cases in more detail. Finally, we prove Theorem \ref{thm:bound} in \S \ref{sec:pr} and apply our results to a number of examples including the $6\times 6$ Pfaffian and $4\times 4$ permanent. For simplicity, we will be working over an arbitrary algebraically closed field $\mathbb{K}$. Note however that all our main results clearly hold for arbitrary fields simply by restricting from $\mathbb{K}$ to any subfield. \section{Special Sums of Products of Linear Forms}\label{sec:sums} \subsection{Preliminaries} In this section we will prove that property $C^d_m$ holds for $m \leq 2$ or $d \leq 4$. We will obtain this result by using induction arguments. These arguments involve a refined version of the $C^d_m$ property. Consider an equation of the form \begin{equation} \boldsymbol{\ell}_1+\ldots+\boldsymbol{\ell}_m=\sum_{i=1}^{k+n} f_i\mathbf{x}_i \label{eq:2}, \end{equation} with $n > m$. As before, the $\mathbf{x}_i$ are pairwise coprime squarefree monomials and the $\boldsymbol{\ell}_i$ are degree $d$ products of linear forms, possibly equal to zero. In fact, whenever we say that some polynomial $g$ is homogeneous of degree $d$, we include the possibility that $g=0$. We now assume simply that the $f_i$ are degree $(d-\deg \mathbf{x}_i)$ forms (or zero), no longer requiring that they be products of linear forms.
It will be convenient to order the summands on the right hand side so that \[\deg \mathbf{x}_1 \leq \deg \mathbf{x}_2 \leq \ldots \leq \deg \mathbf{x}_{k+n}.\] We will maintain this ordering convention throughout all of \S\ref{sec:sums}. Similarly to property $C^d_m$, we make the following definition. \begin{defn}[Property $C^d_{k,m,n}$] We say that $C^d_{k,m,n}$ is true if, for any equation of the form \eqref{eq:2} satisfying \begin{align} \deg \mathbf{x}_i+\deg \mathbf{x}_j\geq d+1 && \text{ for } i > k,\label{eq:3}\\ \deg \mathbf{x}_i+\deg \mathbf{x}_j\geq d+2 && \text{ for } i,j > k,\label{eq:4} \end{align} it follows that there are $i_1, \ldots, i_{n-m} > k$ for which $f_{i_j}=0$. \end{defn} \noindent Note that by definition $C^d_{0,m,m+1}$ implies $C_m^d$. \begin{lem} \label{lem:multiple-vanishing} Fix $d$ and $m$ and assume that $C^d_{k,m,m+1}$ holds for every $k \geq 0$. Then $C^d_{k,m,n}$ holds for every $n > m$. \end{lem} \begin{proof} We argue by induction on $n$; the base case $n=m+1$ holds by assumption. Obviously, the hypotheses for $C^d_{k,m,n}$ imply those for $C^d_{k+1,m,n-1}$. Hence, by the induction hypothesis we have the vanishing of $(n-m-1)$ of the $f_i$, with $i> k+1$. Using this we end up with the hypotheses for $C^d_{k,m,n-((n-m)-1)}=C^d_{k,m,m+1}$ being fulfilled, and we see that another of the $f_i$ with $i > k$ has to vanish. \end{proof} \subsection{Property $C_1^d$} We will first analyze the case $m=1$. Therefore, we consider an equation of the form \begin{equation}\label{eqn:form} \boldsymbol{\ell}=\sum_{i=1}^r f_i\mathbf{x}_i \end{equation} with $f_i$ forms of degree $(d-\deg \mathbf{x}_i)$ (possibly equal to zero), and $\boldsymbol{\ell}$ a product of linear forms, also possibly equal to zero. As before, we order indices such that $\deg \mathbf{x}_1\leq \deg \mathbf{x}_2\leq \ldots\leq \deg \mathbf{x}_r$.
\begin{rem}[Cancellation]\label{rem:cancellation} Assume we are given a variable $x$ which divides $\boldsymbol{\ell}$, one monomial $\mathbf{x}_i$, and all $f_j$ for $j\neq i$. Setting \begin{align*} \boldsymbol{\ell}'={\boldsymbol{\ell}}/{x}\qquad \mathbf{x}_j'=\begin{cases} {\mathbf{x}_j}/{x}&i=j\\ \mathbf{x}_j &i\neq j \end{cases} \qquad f_j'=\begin{cases} {f_j}/{x}&i\neq j\\ f_j &i= j \end{cases} \end{align*} leads to \begin{equation*} \boldsymbol{\ell}'=\sum_{i=1}^r f_i'\mathbf{x}_i' \end{equation*} where we have reduced from forms of degree $d$ to degree $d-1$. We call this the \emph{cancellation} of \eqref{eqn:form} by $x$. \end{rem} \begin{lemma}\label{lemma:dcoeff} Let $l$ be a linear form dividing $\sum f_i\mathbf{x}_i$, where the $f_i$ are forms of degree $(d-\deg \mathbf{x}_i)$. \begin{enumerate} \item If $\deg \mathbf{x}_1+ \deg \mathbf{x}_2\geq d+2$, then for all $i$, $l$ divides $\mathbf{x}_i$ or $f_i$. \item If $\deg \mathbf{x}_1 +\deg \mathbf{x}_2 \geq d+1$ and $l$ is a monomial, then for all $i$, $l$ divides $\mathbf{x}_i$ or $f_i$. \end{enumerate} \end{lemma} \begin{proof} We first prove the second statement. We have \[\sum f_i\mathbf{x}_i=xg\] for some variable $x$ and form $g$. Expanding the left hand side as a sum of monomials, we see that the degree condition ensures that no terms from $f_i\mathbf{x}_i$ cancel with $f_j\mathbf{x}_j$ for $i\neq j$. But every monomial on the right hand side is divisible by $x$, hence also on the left hand side. The claim follows. For the first statement, we reduce to the second by performing a change of coordinates taking $l$ to a monomial. This can be achieved while preserving all variables in the $\mathbf{x}_1,\ldots,\mathbf{x}_r$ with at most one exception, say in $\mathbf{x}_i$. After factoring out this one linear form from $\mathbf{x}_i$, the pairwise sums of degrees are still at least $d+1$ and we may apply the second claim.
\end{proof} \begin{lemma}\label{lemma:divide} Suppose $\boldsymbol{\ell}=\sum_{i=1}^r f_i\mathbf{x}_i$ for $r\geq 2$, $f_i$ forms of degree $(d- \deg \mathbf{x}_i)$. Assume $\boldsymbol{\ell}\neq 0$, and let $\lambda$ be the number of distinct factors of $\boldsymbol{\ell}$. If \[ \left\lceil \frac{\prod \deg \mathbf{x}_i}{\lambda}\right\rceil > \frac{\prod \deg \mathbf{x}_i}{\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2} \] then there is a variable $x$ dividing both $\boldsymbol{\ell}$ and one of the $\mathbf{x}_i$. This is true if $\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2 >\lambda$, in particular if $\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2>d$. \end{lemma} \begin{proof} For each $\mathbf{x}_i$, choose some variable $x_i$ dividing it. Setting $x_1=x_2=\ldots=x_r=0$ will result in the equality $\boldsymbol{\ell}=0$, hence there exists one factor of $\boldsymbol{\ell}$ depending only on $x_1,\ldots,x_r$. There are $\prod_i \deg \mathbf{x}_i$ possible ways to choose the $x_i$, and $\lambda$ factors of $\boldsymbol{\ell}$, so there must be one factor of $\boldsymbol{\ell}$ which depends only on the $x_1,\ldots,x_r$ for \[ \left\lceil \frac{\prod \deg \mathbf{x}_i}{\lambda}\right\rceil \] different choices. On the other hand, the intersection of more than \[ \frac{\prod \deg \mathbf{x}_i}{\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2} \] choices of the $x_1,\ldots,x_r$ contains at most one variable. Hence, if the above inequality is satisfied, the claim follows. \end{proof} \begin{rem} If $\boldsymbol{\ell}=0$, the conclusion of the above lemma is trivial since every variable divides $\boldsymbol{\ell}$. \end{rem} \begin{prop}\label{prop:rkonebasis} Suppose \[ \boldsymbol{\ell}=f_1\mathbf{x}_1+f_2\mathbf{x}_2 \] with $f_i$ forms of degree $(d-\deg \mathbf{x}_i)$.
\begin{enumerate} \item If $\deg \mathbf{x}_1 + \deg \mathbf{x}_2 \geq d+2$, then either $f_1$ or $f_2$ vanishes.\label{part:a1} \item If $\boldsymbol{\ell}$ is not squarefree and $\deg \mathbf{x}_1 + \deg \mathbf{x}_2 \geq d+1$, then either $f_1$ or $f_2$ vanishes.\label{part:a2} \end{enumerate} In particular, property $C^d_1$ holds for every $d>0$. \end{prop} \begin{proof} For the first case, the hypothesis $\deg \mathbf{x}_1 + \deg \mathbf{x}_2\geq d+2$ implies in particular that $\deg \mathbf{x}_1 \geq 2$ and $\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2 >d$. Hence, Lemma \ref{lemma:divide} implies the existence of a variable $x$ dividing both $\boldsymbol{\ell}$ and one of the $\mathbf{x}_i$. But then $f_j\mathbf{x}_j$ is divisible by $x$ for $j\neq i$, hence $x$ divides $f_j$. Cancelling by $x$, we may proceed by induction on the degree $d$. For the second case, we proceed with a similar argument. The inequality \[\deg \mathbf{x}_1 + \deg \mathbf{x}_2 \geq d+1\]implies that $\deg \mathbf{x}_1 \cdot \deg \mathbf{x}_2>d-1$, which is larger than or equal to the number of distinct factors of $\boldsymbol{\ell}$. Thus, we again find a variable $x$ dividing both $\boldsymbol{\ell}$ and one of the $\mathbf{x}_i$. If in fact $x^2$ divides $\boldsymbol{\ell}$, then after factoring out one power of $x$ from $\mathbf{x}_i$ and $f_j$, Lemma \ref{lemma:dcoeff} guarantees that $x$ divides $f_i$ as well. Dividing $\boldsymbol{\ell}$, $f_i$, and $f_j$ by $x$, we reduce to the first case. If $x^2$ does not divide $\boldsymbol{\ell}$, we may cancel by $x$ as in the first case, maintaining that $\boldsymbol{\ell}/x$ is still not squarefree. To finish, we again proceed by induction on the degree $d$. \end{proof} \begin{rem}\label{rem:rkone} It is clear that the degree bounds in Proposition \ref{prop:rkonebasis} cannot be improved upon. 
If $\deg \mathbf{x}_1 + \deg \mathbf{x}_2= d+1$ and $x_1,x_2$ are variables dividing $\mathbf{x}_1,\mathbf{x}_2$ respectively, then setting $f_1=\mathbf{x}_2/x_2$ and $f_2=\mathbf{x}_1/x_1$ gives \[ f_1\mathbf{x}_1+f_2\mathbf{x}_2=(x_1+x_2)\frac {\mathbf{x}_1}{x_1}\frac{\mathbf{x}_2}{x_2}. \] Likewise, if $\deg \mathbf{x}_1+ \deg \mathbf{x}_2\leq d$ there are non-trivial degree $d$ syzygies between $\mathbf{x}_1$ and $\mathbf{x}_2$, so we cannot expect the second claim to hold. \end{rem} We next prove a stronger version of $C_{k,1,2}^d$. \begin{prop}\label{prop:rkone} Suppose for $r\geq 3$, $d \geq 2$ \begin{equation}\label{eqn:form2} \boldsymbol{\ell}=\sum_{i=1}^r f_i\mathbf{x}_i \end{equation} with $f_i$ being forms of degree $(d-\deg \mathbf{x}_i)$. Then if $\deg \mathbf{x}_1 + \deg \mathbf{x}_{r-1}\geq d+1$, some $f_{i}$ with $\deg \mathbf{x}_i\geq \deg \mathbf{x}_{r-1}$ must vanish. In particular, $C^d_{k,1,2}$ holds for every $d > 0$, $k \geq 0$. \end{prop} \begin{proof} Set $\alpha=\deg \mathbf{x}_1$ and $\beta=\deg \mathbf{x}_{r-1}$. It suffices to prove the proposition in the case that $\deg \mathbf{x}_1=\ldots=\deg \mathbf{x}_{r-2}=\alpha$ and $\deg \mathbf{x}_{r-1}= \deg \mathbf{x}_r=\beta$. Indeed, we may absorb variables from $\mathbf{x}_2,\ldots,\mathbf{x}_{r-2},\mathbf{x}_r$ into the corresponding $f_i$ to reduce to this case. Henceforth we will assume we are in such a situation. We begin by proving the claim when $r=3$ and $\alpha=1$, that is, $\mathbf{x}_1$ is a single variable $x$. By using Proposition \ref{prop:rkonebasis}(\ref{part:a1}), we see that modulo $x$, either $f_2$ or $f_3$ must vanish. But since $\alpha+\beta\geq d+1$, $f_2$ and $f_3$ are both just constants, hence one must vanish outright. Next, we consider the case when $r=3$ and $\alpha>1$. First, we show that some $f_i$ must vanish, with no restriction on its degree. We apply Lemma \ref{lemma:divide} to find a variable $x$ dividing some monomial $\mathbf{x}_i$ and $\boldsymbol{\ell}$.
Applying Lemma \ref{lemma:dcoeff}, we may conclude that $x$ divides $f_j$ for $j\neq i$. In particular, if $d=2$, this implies that $f_j=0$ for $j\neq i$. For $d>2$, we may cancel by $x$ to reduce the degree by one and conclude by induction on degree that $f_i=0$ for some $i$. Now we show that we can impose the desired degree restriction on $f_i$. Indeed, if $i=2,3$, or $i=1$ and $\alpha=\beta$, this is automatic. If instead $i=1$ and $\alpha<\beta$, then we have $\boldsymbol{\ell}=f_2\mathbf{x}_2+f_3\mathbf{x}_3$ satisfying the hypotheses of Proposition \ref{prop:rkonebasis}(\ref{part:a1}), from which the claim follows. It remains to consider the cases when $r\geq 4$. We will now induct on $r$. First assume that $\alpha<\beta$. By setting any variable $x$ in $\mathbf{x}_i$ equal to zero for $i\leq r-2$, we reduce to an equation of the form \eqref{eqn:form2} with one fewer summand on the right hand side, yet $\alpha$, $\beta$, and $d$ the same. Hence, by induction, $x$ divides $f_{r-1}$ or $f_{r}$. Now, there are $\alpha\cdot (r-2)$ variables appearing in the $\mathbf{x}_i$ for $i\leq r-2$, yet \[\deg f_{r-1}+\deg f_r=2(d-\beta)\leq 2(\alpha-1).\] Thus, if $r>3$ then either $f_{r-1}$ or $f_r$ must vanish. If instead $r\geq 4$ and $\alpha=\beta$, we may again apply Lemma \ref{lemma:divide} followed by Lemma \ref{lemma:dcoeff} to find a variable $x$ dividing some $\mathbf{x}_i$ and $f_j$ for $j\neq i$. We may reorder the monomials such that $i=1$, since all have the same degree $\alpha$. Cancelling by $x$, we again find ourselves in the situation of \eqref{eqn:form2}, but now with $\alpha<\beta$, so by the above, $xf_{r-1}$ or $xf_{r}$ vanishes, thus $f_{r-1}$ or $f_r$ does as well. \end{proof} \begin{rem} Proposition \ref{prop:rkone} is sharp in the following sense. Suppose that in \eqref{eqn:form2}, we have $\deg \mathbf{x}_1 + \deg \mathbf{x}_{r-1} \leq d$. Then \emph{none} of the $f_i$ need vanish. 
Indeed, for $i=2,\ldots,r-1$ we can take $f_i=g_i\mathbf{x}_1$ for any forms $g_i$ of degree $(d-\deg \mathbf{x}_1 - \deg \mathbf{x}_i)$, and \[f_1=\sum_{i=2}^{r-1}-g_i\mathbf{x}_i.\] Then \[\sum_{i=1}^r f_i\mathbf{x}_i=f_r\mathbf{x}_r\] so if $f_r$ is a product of linear forms, then so is the whole sum, yet for appropriate choice of $g_i$ none of the $f_i$ will vanish. \end{rem} \subsection{Property $C_2^d$} We now move to the case of $C_2^d$: \begin{prop}\label{prop:rktwobasis} Suppose \begin{equation}\label{eqn:rktwobasis} \boldsymbol{\ell}_1+\boldsymbol{\ell}_2=f_1\mathbf{x}_1+f_2\mathbf{x}_2+f_3\mathbf{x}_3 \end{equation} with $f_i$ forms of degree $(d-\deg \mathbf{x}_i)$. If $\deg \mathbf{x}_1+\deg \mathbf{x}_2\geq d+2$, then either $f_1$, $f_2$, or $f_3$ vanishes. In particular, $C^d_2$ holds for every $d > 0$. \end{prop} \begin{proof} We will prove the statement by induction on the degree $d$. For $d=1$ there is nothing to prove. If we can show that $\boldsymbol{\ell}_1,\boldsymbol{\ell}_2$ have a common factor $l$, then we are done. Indeed, by Lemma \ref{lemma:dcoeff}, $l$ must divide either each $\mathbf{x}_i$ or $f_i$. Pulling $l$ out of each $f_i$ where we can, and out of $\mathbf{x}_i$ in at most one position, allows us to ``cancel by $l$'' in a fashion similar to Remark \ref{rem:cancellation}. We thus reduce the degree and the claim follows by induction. In the following, we will assume that no common factor $l$ of $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$ exists, and that all $f_i$ are non-zero. For simplicity, we may assume that $\deg \mathbf{x}_2= \deg \mathbf{x}_3$, since this case implies the more general one. We denote $\deg \mathbf{x}_1$ by $\alpha$, and $\deg \mathbf{x}_2$ by $\beta$. Our hypothesis on degrees is now simply $\alpha+\beta\geq d+2$. Consider any factor $l$ of $\boldsymbol{\ell}_1$ or $\boldsymbol{\ell}_2$. 
Setting $l=0$, we reduce to the case of Proposition \ref{prop:rkonebasis}(\ref{part:a1}) (if $l$ divides some $\mathbf{x}_i$) or Proposition \ref{prop:rkone} (by absorbing into some $f_i$ a variable of $\mathbf{x}_i$ appearing in $l$). In either case, we see that modulo $l$, some $f_i$ must vanish, that is, $l$ is a factor of $f_i$. We may proceed to do this for all distinct divisors of $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$. But since \[\deg f_1+\deg f_2+\deg f_3 \leq 2(d-\beta)+(d-\alpha),\] we conclude that together $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$ have at most \[ 2(d-\beta)+(d-\alpha)\leq 2d-\beta-2 \] distinct factors. It follows that either both $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$ contain a square, or else one of them is squarefree and the other has at most $d-\beta-2$ distinct factors. Assume first that $\boldsymbol{\ell}_1$ is squarefree, and fix some factor $l$. We now argue in a similar fashion to the proof of Lemma \ref{lemma:divide}. For each of $\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3$, fix a variable $y_i$ so that $l$ and all remaining variables are linearly independent. For each $\mathbf{x}_i$, choose some variable $x_i\neq y_i$ dividing $\mathbf{x}_i$. Setting $x_1=x_2=x_3=0$ will result in the equality $\boldsymbol{\ell}_1=-\boldsymbol{\ell}_2$, hence one factor of $\boldsymbol{\ell}_2$ is zero modulo $x_1,x_2,x_3,l$. There are $(\alpha-1)(\beta-1)^2$ possible ways to choose the $x_i$, and at most $d-\beta-2$ factors of $\boldsymbol{\ell}_2$, so there must be one fixed factor of $\boldsymbol{\ell}_2$ which is zero mod $x_1,x_2,x_3,l$ for \[ \left\lceil \frac{(\alpha-1)(\beta-1)^2}{d-\beta-2}\right\rceil \] different choices. On the other hand, the intersection of more than $\beta^2$ choices of the $x_1,x_2,x_3$ contains no variable. Hence, since $d-\beta-2<\alpha-1$, it follows that there is a factor of $\boldsymbol{\ell}_2$ which is zero modulo $l$, that is, which agrees with $l$ up to scaling.
We now instead assume that both $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$ contain factors with multiplicity at least two. Consider any factor $l$ of $\boldsymbol{\ell}_1$ or $\boldsymbol{\ell}_2$. As long as $l$ is not a variable in $\mathbf{x}_2$ or $\mathbf{x}_3$, we may set $l=0$ and conclude that $l$ divides $f_2$ or $f_3$. Indeed, if $l$ divides $\mathbf{x}_1$ this follows from Proposition \ref{prop:rkonebasis}. Otherwise we may absorb into some $f_i$ a variable of $\mathbf{x}_i$ made linearly dependent modulo $l$, and then apply Proposition \ref{prop:rkone} followed by Proposition \ref{prop:rkonebasis}(\ref{part:a2}) to conclude that two of $f_1$, $f_2$, and $f_3$ vanish modulo $l$. If at most one factor $l$ of $\boldsymbol{\ell}_1$, $\boldsymbol{\ell}_2$ divides $\mathbf{x}_2$ or $\mathbf{x}_3$ but not $f_2$ or $f_3$, we thus obtain that $\boldsymbol{\ell}_1,\boldsymbol{\ell}_2$ have at most $1+2(d-\beta)$ distinct factors. But then either $\boldsymbol{\ell}_1$ or $\boldsymbol{\ell}_2$ has at most $d-\beta$ distinct factors, so an argument similar to the previous case, where $\boldsymbol{\ell}_1$ was squarefree, shows that $\boldsymbol{\ell}_1$ and $\boldsymbol{\ell}_2$ would have to possess a common factor. So we now finally consider the case that at least two distinct factors $x,y$ of $\boldsymbol{\ell}_1,\boldsymbol{\ell}_2$ are variables found in $\mathbf{x}_2$ and $\mathbf{x}_3$, neither dividing $f_2$ nor $f_3$. It follows by Proposition \ref{prop:rkonebasis} that each such factor must divide $f_1$. Without loss of generality, we assume that $x$ divides $\mathbf{x}_3$ and $\boldsymbol{\ell}_1$. We obtain \begin{align*} \boldsymbol{\ell}_2\equiv f_2\mathbf{x}_2 \mod x, \end{align*} so $\boldsymbol{\ell}_2$ has $\beta$ factors which only depend on $x$ and a single variable of $\mathbf{x}_2$. If $y$ divides $\mathbf{x}_3$ and $\boldsymbol{\ell}_2$, then setting $x=y=0$, we obtain $f_2\mathbf{x}_2\equiv 0$ $\mod x,y$, a contradiction.
If instead $y$ divides $\mathbf{x}_3$ and $\boldsymbol{\ell}_1$ we obtain \begin{align*} \boldsymbol{\ell}_2\equiv f_2\mathbf{x}_2 \mod y \end{align*} and $\boldsymbol{\ell}_2$ has $\beta$ factors which only depend on $y$ and a single variable of $\mathbf{x}_2$. Since $\boldsymbol{\ell}_2$ has $d<2\beta$ factors, one must also just be a variable $w$ of $\mathbf{x}_2$. So in this case, we conclude that a variable $w$ of $\mathbf{x}_2$ divides $\boldsymbol{\ell}_2$. If instead $y$ divides $\mathbf{x}_2$, we see by setting $x=0$ that $y$ must divide $\boldsymbol{\ell}_2$, so we can take $w=y$ to produce $w$ as above. We thus may assume that we are in the situation of variables $x,w$ with $x$ dividing $\mathbf{x}_3$ and $\boldsymbol{\ell}_1$, and $w$ dividing $\mathbf{x}_2$. By Proposition \ref{prop:rkonebasis}, $w$ divides $f_j$ for $j=1$ or $j=3$. Now let $k\in\{1,3\}$ be such that $k\neq j$. We thus obtain \begin{align*} \boldsymbol{\ell}_1\equiv f_k\mathbf{x}_k \mod w, \end{align*} hence $\boldsymbol{\ell}_1$ has $\deg \mathbf{x}_k $ factors which depend only on $w$ and a single variable of $\mathbf{x}_k$. The right hand side of Equation \eqref{eqn:rktwobasis} clearly contains monomials divisible by $\mathbf{x}_j$. But the left hand side cannot: while each monomial of $\boldsymbol{\ell}_1$ has degree at least $\deg \mathbf{x}_k$ in the variables of $\mathbf{x}_k$ and $w$, and each monomial of $\boldsymbol{\ell}_2$ has degree at least $\beta=\deg \mathbf{x}_2$ in the variables of $\mathbf{x}_2$ and $x$, the part of $\mathbf{x}_j$ relatively prime to $x$ has degree at least $\deg \mathbf{x}_j-1$. The inequality \[ \deg \mathbf{x}_j-1+ \deg \mathbf{x}_k\geq d+1 \] then shows that this is impossible. We conclude that in fact some $f_i$ must equal zero. \end{proof} \begin{rem} Proposition \ref{prop:rktwobasis} is optimal. Indeed, suppose that $\deg \mathbf{x}_1 + \deg \mathbf{x}_2 \leq d+1$.
Then by Remark \ref{rem:rkone}, for appropriate non-vanishing choices of $f_1,f_2$, $f_1\mathbf{x}_1+f_2\mathbf{x}_2$ is a product of linear forms, so $f_1\mathbf{x}_1+f_2\mathbf{x}_2+f_3\mathbf{x}_3$ is a sum of two products of linear forms for any choice of $f_3$. \end{rem} \subsection{Property $C_m^d$ for $d\leq 4$} We now prove a lemma that will help us with the degree four case: \begin{lemma}\label{lemma:help} Fix $d\geq 2$, $k>0$, and $m>0$ and let $n=m+1$. Assume that (\ref{eq:3}) and (\ref{eq:4}) hold and that $C_{k',m',n'}^d$ holds whenever $m'<m$, or whenever $m'=m$ and $k'<k$. In Equation \eqref{eq:2}, consider any linear form $l$ dividing $p$ of the summands $\boldsymbol{\ell}_i$ on the left hand side. Then $l$ must also divide $p$ of the $f_i$ with $i>k$. \end{lemma} \begin{proof} Assume $l$ divides some factor $x_j$ of $\mathbf{x}_j$. Setting $l=0$, we now have that the hypothesis for $C^d_{k,m-p,m}$ is fulfilled. Since we have assumed that $C^d_{k,m-p,m}$ is true, $p$ of the $f_i$ with $i > k$ must vanish modulo $l$. Even if $l$ does not divide any $\mathbf{x}_j$, we still may set $l=0$, modifying the right hand side of the equation $\boldsymbol{\ell}_1+\ldots+\boldsymbol{\ell}_m=\sum_{i=1}^{k+n} f_i\mathbf{x}_i$ to replace one factor of some $\mathbf{x}_j$ by a linear form $f$ which is no longer a monomial. Now we have to distinguish two cases. Let us assume first that $j > k$. Then, since the degree of $\mathbf{x}_j$ drops by one, we are in the situation of $C^d_{k+1,m-p,m}$. As before, by our assumption, $p$ of the $f_i$ with $i > k$ must vanish modulo $l$. If $j \leq k$ then the fact that the degree of $\mathbf{x}_j$ drops may violate condition (\ref{eq:4}). However, we may bring the summand $f_j \mathbf{x}_j$ to the left hand side of the equation. This leaves us in the situation of $C^d_{k-1,m-p+1,m+1}$ and our assumption again provides the vanishing of $p$ of the $f_i$, with $i > k$.
\end{proof} We now use this lemma to show $C_m^d$ for arbitrary $m$ and $d\leq 4$: \begin{prop}\label{prop:degfour} If $d\leq 4$, property $C_{k,m,n}^d$ holds for arbitrary $k>0$ and $n>m>0$. In particular, $C_m^d$ holds for $m>0$. \end{prop} \begin{proof} By Lemma~\ref{lem:multiple-vanishing} it is enough to show that $C^d_{k,m,m+1}$ holds for all $m,k>0$. Now, we prove $C^d_{k,m,m+1}$ by induction on $m$ and $k$. Note that, for $k$ arbitrary, $C^d_{k,1,2}$ follows from Proposition~\ref{prop:rkone}. Moreover, Lemma~\ref{lem:multiple-vanishing} then provides $C^d_{k,1,n}$ for arbitrary $n > 1$. Assume we have proven that property $C^d_{k',m',m'+1}$ holds whenever $m' < m$, or whenever $m'=m$ and $k' < k$. For $d \leq 4$ we have $\sum_{i>k} \deg f_i \leq m+1$. Either one of the $f_i$ has to vanish, which would prove our claim, or by Lemma \ref{lemma:help}, all the linear factors of the $\boldsymbol{\ell}_i$ occur as one of the (at most) $(m+1)$ linear factors of the $f_i$ for $i>k$. Here, by a \emph{linear factor} we mean an equivalence class of linear forms, where two linear forms are equivalent if one is a non-zero scalar multiple of the other. By the above, the linear factors of the $f_i$ form a multiset $L$ of cardinality at most $m+1$ and every $\boldsymbol{\ell}_j$ is divisible by one of the elements of $L$. On the other hand, we have seen by Lemma \ref{lemma:help} that every $l\in L$ may divide at most $m(l)$ of the $\boldsymbol{\ell}_j$, where $m(l)$ denotes the multiplicity of $l$ in $L$. Since $\# L \leq (m+1)$, there can be at most one $\boldsymbol{\ell}_j$ which is divisible by more than one linear factor. This implies that $\boldsymbol{\ell}_i = l_i^d$ for all but one of the summands on the left hand side. For $m > 2$ this implies that we can write the left hand side as the sum of only $(m-1)$ products, since we can write $l_i^d + l_j^d$ as a product of linear forms.
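To spell out the factorization invoked here (a standard fact, using only that $\mathbb{K}$ is algebraically closed): writing $\mu_1,\ldots,\mu_d\in\mathbb{K}$ for the roots of $t^d+1$, we have \[ l_i^d+l_j^d=\prod_{s=1}^d\left(l_i-\mu_s l_j\right), \] a product of $d$ linear forms.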
Then using the induction hypothesis for $C^d_{k,m-1,m}$ concludes the proof for the case $m > 2$. To conclude, we consider the case $m=2$. If none of the $f_i$ (with $i > k$) vanishes, we have seen that at most three linear factors $l_1$, $l_2$ and $l_3$ can occur on the left hand side. We choose $\lambda \in \mathbb{K}$ and set $l_2 = \lambda l_3$. Now, the left hand side depends only on two linear forms. In this situation the left hand side is actually a product of linear forms (since $\mathbb{K}$ is algebraically closed). In the same way as in the proof of Lemma \ref{lemma:help} (when setting one of the $l_i$ to zero), we see by the induction hypothesis that one of the $f_i$ has to be divisible by $(l_2 - \lambda l_3)$. We have only finitely many choices for the linear factors of the $f_i$, but we have infinitely many choices for $\lambda \in \mathbb{K}$. Hence, there are distinct $\lambda, \lambda' \in \mathbb{K}$ with $(l_2 - \lambda' l_3)$ dividing $(l_2 - \lambda l_3)$, which implies either $l_2 = 0$ or $l_3=0$. In any case, one of the summands $\boldsymbol{\ell}_1$ or $\boldsymbol{\ell}_2$ has to vanish, and we are in the case of $C^d_{k,1,3}$. \end{proof} \subsection{Consequences} The following lemma derives a consequence of the property $C^d_m$ which will be used later. \begin{lemma}\label{lemma:uniqueness} Consider an equation of the form \begin{equation}\label{eqn:unique} \boldsymbol{\ell}_1+\ldots+\boldsymbol{\ell}_m=\sum_{i=1}^m\mathbf{x}_i \end{equation} where the $\boldsymbol{\ell}_i$ are degree $d\geq 3$ products of linear forms, and the $\mathbf{x}_i$ are pairwise relatively prime squarefree monomials of degree $d$. If property $C_{m-1}^d$ is true, then there is a permutation $\sigma\in S_m$ such that $\boldsymbol{\ell}_i=\mathbf{x}_{\sigma(i)}$ for all $i$. \end{lemma} \begin{proof} Consider any factor $l_i$ of some $\boldsymbol{\ell}_i$.
If $l_i$ does not divide any $\mathbf{x}_j$, we may set $l_i=0$, modifying the right hand side of Equation \eqref{eqn:unique} to replace one factor of some $\mathbf{x}_j$ by a linear form $f$ which is no longer a monomial. But this equation still satisfies the hypotheses necessary for $C_{m-1}^d$, as long as $d\geq 3$, so in fact, $l_i$ must have divided one of the $\mathbf{x}_j$ all along. We thus see that every factor of each $\boldsymbol{\ell}_i$ is just a variable, up to scaling. By comparing the monomials on both sides of \eqref{eqn:unique}, we find the desired permutation. \end{proof} \begin{rem} We may interpret the above lemma geometrically as saying that, if $C_{r-1}^d$ is true, then the subgroup of $PGL(rd-1)$ taking $X_{r,d}$ to itself is generated by the semidirect product of the torus \[ T=\{x_{11}\cdots x_{1d}=x_{21}\cdots x_{2d}=\ldots=x_{r1}\cdots x_{rd}\} \] with the copy of the symmetric group $S_r$ permuting the indices $i$ of $x_{ij}$, and the $r$ copies of $S_d$ permuting the indices $j$ of $x_{ij}$ for some fixed $1\leq i \leq r$. \end{rem} \section{Fano Schemes and Splitting}\label{sec:fano} \subsection{Main results}\label{sec:mainproof} In this section, we will prove Theorems \ref{thm:reduction} and \ref{thm:conj}. For $n=rd-1$, consider projective space $\mathbb{P}^{n}$ with coordinates $x_{ij}$, $1\leq i \leq r$ and $1\leq j \leq d$. Let $L$ be a $k$-dimensional linear subspace of $\mathbb{P}^{n}$. We may represent $L$ as the rowspan of a full rank $(k+1)\times rd$ matrix $B=(b_{\alpha,ij})$, with rows indexed by $\alpha=0,\ldots ,k$ and column $ij$ corresponding to the homogeneous coordinates $x_{ij}$ on $\mathbb{P}^{n}$. We define linear forms $y_{ij}$ in $S=\mathbb{K}[z_0,\ldots, z_k]$ by \[y_{ij}=\sum_\alpha b_{\alpha,ij}z_\alpha, \] along with degree $d$ forms \[ \mathbf{y}_i=\prod_{j=1}^d y_{ij}.
\] The condition that $L$ is contained in $X_{r,d}$ is equivalent to the condition \begin{equation}\label{eqn:infano} \sum_i \mathbf{y}_{i}=0. \end{equation} The condition that $L$ is one-split is equivalent to the condition that some $y_{ij}$ vanishes, and also to the condition that some $\mathbf{y}_i$ vanishes. The condition that $L$ is two-split is equivalent to the existence of $1 \leq a_1 < a_2 \leq r$ such that $\mathbf{y}_{a_1} + \mathbf{y}_{a_2}=0$. \begin{ex}[$k$-planes which are not one-split]\label{ex:sharp} For $r=2m$ and $k=md-1$, let $L$ be any $k$-plane with $y_{ij}$ all linearly independent for $i\leq m$, $y_{(i+m)1}=-y_{i1}$, and $y_{(i+m)j}=y_{ij}$ for $j>1$. Then clearly $L$ is contained in $X_{r,d}$, but is not one-split (although it is two-split). For $r=2m+1$ and $k=md$, consider forms $y_{ij}$ satisfying $\{y_{ij}\}_{i\leq m}$ and $y_{r1}$ all linearly independent, and \begin{align*} y_{(i+m)1}&=-y_{i1}\qquad&\textrm{for}\ i<m,\\ y_{(i+m)j}&=y_{ij}\qquad&\textrm{for}\ i<m,\ j>1,\\ y_{rj}&=y_{(2m)j}=y_{mj}&\textrm{for}\ j>1,\\ y_{(2m)1}&=-y_{m1}-y_{r1}. \end{align*} Let $L$ be the corresponding $md$-plane. Clearly $L$ is contained in $X_{r,d}$, but is not one-split. We thus see that the bound on $k$ in Conjecture \ref{conj:one-split} is sharp. \end{ex} We henceforth assume that $L\subset X_{r,d}$, that is, that $\sum_i \mathbf{y}_i=0$, and that none of the $\mathbf{y}_i$ vanish, that is, $L$ is not one-split. Without loss of generality, we may inductively reorder the forms $\mathbf{y}_i$ as follows: given $\mathbf{y}_1,\ldots,\mathbf{y}_s$, we take $\mathbf{y}_{s+1}$ to be any form such that the dimension of the vector space spanned by the $\{y_{ij}\}_{i\leq s+1}$ is maximal. After this re-ordering, we may define integers $\lambda_1,\lambda_2,\ldots,\lambda_r$ inductively by requiring that the dimension of the vector space spanned by the $\{y_{ij}\}_{i\leq s}$ is equal to $\sum_{i \leq s} \lambda_i$.
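To illustrate these invariants on a concrete case, consider the plane of Example \ref{ex:sharp} with $r=2$ (so $m=1$ and $k=d-1$): the factors $y_{11},\ldots,y_{1d}$ of $\mathbf{y}_1$ are linearly independent, while every factor of $\mathbf{y}_2$ lies in their span, so \[ \lambda_1=d,\qquad \lambda_2=0, \] and we may take $z_{1j}=y_{1j}$.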
By the way we have ordered the forms $\mathbf{y}_i$, this implies that $\lambda_1\geq \lambda_2\geq \ldots \geq \lambda_r$. Indeed, by our choice of $\mathbf{y}_s$ we must have $\sum_{i \leq s} \lambda_i \geq \lambda_{s+1}+ \sum_{i \leq s-1} \lambda_i$ and, hence, $\lambda_s \geq \lambda_{s+1}$ for every $1 \leq s \leq r-1$. We may then choose a new basis \[ z_{ij},\qquad 1\leq i \leq r,\qquad 1\leq j \leq \lambda_i, \] for the degree one piece of $S$ with the property that each $z_{ij}$ is a factor of $\mathbf{y}_i$, and each factor of $\mathbf{y}_i$ is in the span of \[\{z_{hj}\}_{1 \leq h \leq i,1\leq j \leq \lambda_h}.\] We will now assume that \[ k\geq \begin{cases} \frac{r}{2}\cdot d-1 & r\ \textrm{even}\\ \frac{r-1}{2}\cdot d + 1 & r\ \textrm{odd}\\ \end{cases}. \] \begin{lemma}\label{lemma:help2} For $s\geq 0$, suppose that $\lambda_{r-s}=0$. If $r$ is odd, then $s \leq \frac{r-3}{2}$. If $r$ is even, then $s \leq \frac{r}{2}-1$. In particular, we always have $s+1 \leq r-s-1$. \end{lemma} \begin{proof} We have \begin{equation} k+1=\sum_{i=1}^r \lambda_i=\sum_{i=1}^{r-s-1}\lambda_i \leq (r-s-1)d. \label{eq:help} \end{equation} For $r$ odd, our assumptions on $k$ imply $\frac{r-1}{2}d + 2\leq (r-s-1)d$. Hence, we have $s+\frac{2}{d} \leq \frac{r-1}{2}$ which implies $s < \frac{r-1}{2}$. Since $s$ is an integer we obtain $s \leq \frac{r-3}{2}$. For $r$ even, \eqref{eq:help} implies $\frac{r}{2}d \leq (r-s-1)d$, which directly implies the claim. \end{proof} \begin{lemma}\label{lemma:lambdas} For $s\geq 0$, suppose that $\lambda_{r-s}=0$. Assume further that either \begin{enumerate} \item $d$ is even, \item $r$ is even and $d\geq r-2s$, or \item $r$ is odd and $r-2s\leq 6$. \end{enumerate} Then $\lambda_{s+1}+\lambda_{s+2}\geq d+2$. \end{lemma} \begin{proof} We have that \[ k+1=\sum_{i=1}^r \lambda_i=\sum_{i=1}^{r-s-1} \lambda_i \leq sd+\sum_{i={s+1}}^{r-s-1} \lambda_i.
\] Note that by Lemma~\ref{lemma:help2} we have $s+1 \leq r-s-1$, so the summation on the right hand side makes sense. Using our assumption on $k$ we thus have \begin{equation}\label{eq:ineq} \sum_{i={s+1}}^{r-s-1} \lambda_i \geq \begin{cases} \frac{r-2s}{2}\cdot d & r\ \textrm{even}\\ \frac{r-2s-1}{2}\ \cdot d + 2 & r\ \textrm{odd}\\ \end{cases}. \end{equation} Suppose that $\lambda_{s+1}+\lambda_{s+2}\leq d+1$. If $d$ is even, then $\lambda_{s+2}\leq \frac{d}{2}$, and thus $\lambda_{i}\leq \frac{d}{2}$ for all $i\geq s+2$. But then \[ \sum_{i={s+1}}^{r-s-1} \lambda_i\leq (d+1)+(r-2s-3)\frac{d}{2}=\frac{r-2s-1}{2}\cdot d+1 \] contradicting \eqref{eq:ineq}, since $d\geq 3$. Assume instead that $d$ is odd. Then $\lambda_{i}\leq \frac{d+1}{2}$ for all $i\geq s+2$, so \[ \sum_{i={s+1}}^{r-s-1} \lambda_i\leq (r-2s-1)\frac{d+1}{2}. \] But this contradicts \eqref{eq:ineq} if $r$ is even and $d\geq r-2s$, or if $r$ is odd and $r-2s\leq 6$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:reduction}] First note that $\lambda_r=0$. Indeed, if not, then $\mathbf{y}_r$ contains a factor which is not in the span of the factors of the $\mathbf{y}_i$ for $i<r$, so it is impossible to satisfy Equation \eqref{eqn:infano}. Suppose that we have inductively shown that $\lambda_{r-s}=0$ for some $s\leq \frac{r-1}{2}-1$. Then by Lemma \ref{lemma:lambdas}, we have that $\lambda_{s+1}+\lambda_{s+2}\geq d+2$. If $\lambda_{r-s-1}\neq 0$, we set $z_{i1}=0$ for $i=s+3,\ldots,r-s-1$ and use property $C_{s+1}^d$ applied to \[ -\sum_{i=r-s}^r\mathbf{y}_i =\sum_{i=1}^{s+2} \mathbf{y}_i \qquad\mod \{z_{i1}\}_{s+3\leq i\leq r-s-1} \] to conclude that some $\mathbf{y}_i$ for $i\leq s+2$ vanishes modulo $\{z_{i1}\}_{s+3\leq i\leq r-s-1}$. But by our construction of the $\mathbf{y}_i$, this is impossible, and we conclude that $\lambda_{r-s-1}=0$. 
We proceed in this fashion until we obtain $\lambda_{t}=0$ for \[ t=\left\lceil\frac{r}{2}\right\rceil+1, \] since for \[ s=\left\lfloor \frac{r-1}{2}-1\right\rfloor, \] we have \[ t\geq r-s-1. \] If $r$ is odd, we conclude again by Lemma \ref{lemma:lambdas} that $\lambda_{t-2}+\lambda_{t-1}\geq d+2$, and an appropriate application of property $C_{(r-1)/2}^d$ shows that some $\mathbf{y}_i$ must vanish, a contradiction. If $r$ is even, we must have $\lambda_1=\ldots=\lambda_{r/2}=d$. This is impossible if $k$ satisfies the bound of Conjecture \ref{conj:one-split}, completing the claim regarding one-splitting. For the claim regarding two-splitting, we may apply Lemma \ref{lemma:uniqueness} to conclude that $\mathbf{y}_1=-\mathbf{y}_j$ for some $j>r/2$. But this implies two-splitting. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:conj}] The first part of the Theorem is simply Propositions \ref{prop:rkonebasis}, \ref{prop:rktwobasis}, and \ref{prop:degfour}. The statement regarding Conjectures \ref{conj:one-split} and \ref{conj:two-split} follows immediately from Theorem \ref{thm:reduction} except in the cases ($r=4$, $d=3$), ($r=6$, $d=3$), and ($r=6$, $d=5$). The obstruction in all these cases is that in the proof of Theorem \ref{thm:reduction}, we cannot use Lemma \ref{lemma:lambdas} to conclude that $\lambda_1+\lambda_2\geq d+2$. However, we may use Proposition \ref{prop:rkone} to compensate. Consider for example the case $r=6$, $d=3$. If $\lambda_1+\lambda_2\leq d+1=4$, then we must in fact have $\lambda_1=\lambda_2=\ldots=\lambda_4=2$, and $\lambda_5=1$. Setting $z_{51}=0$, we may apply Proposition \ref{prop:rkone} to reach a contradiction. Thus, $\lambda_5=\lambda_6=0$. A similar argument shows that $\lambda_3=0$ as well, and we conclude as in the proof of Theorem \ref{thm:reduction}. The other two cases are similar, and left to the reader. \end{proof} \begin{rem}\label{rem:3d} Consider the Fano scheme $\mathbf{F}_k(X_{r,d})$.
If we only know that $C^d_m$ is true for all $m\leq M$ for some $M$ strictly less than $(r-1)/2$, we may still use the above arguments to conclude that $\mathbf{F}_k(X_{r,d})$ is one-split if $k$ is sufficiently large. For example, we know that $C^d_m$ is always true for $m=1,2$. For $r\leq 6$, we already know by Theorem \ref{thm:conj} exactly when $\mathbf{F}_k(X_{r,d})$ is one-split, so assume that $r\geq 7$. We claim that if $k\geq d(r-3)$, then $\mathbf{F}_k(X_{r,d})$ must be one-split. Indeed, if $d$ is even, Lemma \ref{lemma:lambdas} applies and arguing as in the proof of Theorem \ref{thm:conj} shows that if $L$ is not one-split, then $\lambda_r=\lambda_{r-1}=\lambda_{r-2}=0$. But this contradicts $k\geq d(r-3)$. For $d$ odd, slightly more care is needed. Assume some $k$-plane $L$ is not one-split. The arguments from Lemma \ref{lemma:lambdas} will apply if \[k\geq \frac{(r-1)(d+1)}{2},\] in which case we are done as above. But this inequality is satisfied except for the case $r=7$, $d=3$. As always, $\lambda_7=0$. But certainly $\lambda_2+\lambda_3\geq d+1=4$, so using Proposition \ref{prop:rkone} in place of $C^3_1$ we conclude that $\lambda_{6}=0$. But then one easily verifies that $\lambda_2+\lambda_3\geq d+2=5$, so $\lambda_5=0$, which is impossible. \end{rem} \subsection{Consequences and examples} We now want to use our results on splitting to study the geometry of $\mathbf{F}_k(X_{r,d})$. We first note the following result: \begin{thm}\label{thm:connected} The Fano scheme $\mathbf{F}_k(X_{r,d})$ is non-empty if and only if $k< r(d-1)$. Such a Fano scheme is connected if and only if $k< r(d-1)-1$. \end{thm} \begin{proof} Consider the subtorus $T$ of $(\mathbb{K}^*)^{rd}$ cut out by \[ x_{11}x_{12}\cdots x_{1d}=x_{21}x_{22}\cdots x_{2d}=\ldots=x_{r1}x_{r2}\cdots x_{rd}. \] This torus acts naturally on $\mathbb{P}^{rd-1}$. Since it fixes $X_{r,d}$, this action induces an action on $X_{r,d}$, and hence also on $\mathbf{F}_k(X_{r,d})$.
It is straightforward to check that the only $k$-planes of $\mathbb{P}^{rd-1}$ fixed by $T$ are intersections of coordinate hyperplanes. Thus, any torus fixed point of $\mathbf{F}_k(X_{r,d})$ corresponds to a $k$-plane $L$ whose associated non-zero forms $y_{ij}$ of \S \ref{sec:mainproof} are all linearly independent. Recall that such a $k$-plane $L$ is contained in $X_{r,d}$ if and only if Equation \eqref{eqn:infano} is satisfied. But since by assumption the non-zero $y_{ij}$ are linearly independent, this is equivalent to requiring that for each $i$, there is some $j$ such that $y_{ij}=0$. Since every component of $\mathbf{F}_k(X_{r,d})$ must contain a torus fixed point, it follows immediately that if $k\geq r(d-1)$, $\mathbf{F}_k(X_{r,d})$ must be empty. The non-emptiness of $\mathbf{F}_k(X_{r,d})$ for $k< r(d-1)$ is also clear. Assume now that $k=r(d-1)-1$. By Remark \ref{rem:3d}, it inductively follows that any $k$-plane of $X_{r,d}$ must be torus fixed. But there are $d^r$ such fixed $k$-planes, so $\mathbf{F}_k(X_{r,d})$ is not connected. Suppose finally that $k<r(d-1)-1$, and let $L$ be a torus fixed $k$-plane contained in $X_{r,d}$. We prove that $\mathbf{F}_k(X_{r,d})$ is connected by deforming $L$ to a $k$-plane satisfying \[y_{11}=y_{21}=\ldots=y_{r1}=0.\] Since the set of all such $k$-planes forms a connected subscheme of $\mathbf{F}_k(X_{r,d})$ isomorphic to the Grassmannian $G(k+1,r(d-1))$, and every irreducible component of the Fano scheme contains a torus fixed point, it follows that $\mathbf{F}_k(X_{r,d})$ is connected. To see that we can deform $L$ to a $k$-plane of the desired type, let $j_1,\ldots,j_r$ be such that $y_{1j_1},\ldots,y_{rj_r}$ all vanish; these must exist since $L$ is torus fixed and contained in $X_{r,d}$. Let $i$ be the smallest index for which $j_i\neq 1$.
The set of all $k$-planes satisfying $y_{1j_1}=\ldots=y_{rj_r}=0$ forms a closed subscheme of $\mathbf{F}_k(X_{r,d})$ isomorphic to $G(k+1,r(d-1))$. Since $k<r(d-1)-1$, this set contains a $k$-plane $L'$ satisfying $y_{i1}=0$ along with $y_{1j_1}=\ldots=y_{rj_r}=0$, and $L$ deforms to $L'$. Replacing $L$ with $L'$ we can continue this procedure until we arrive at a $k$-plane satisfying $y_{11}=y_{21}=\ldots=y_{r1}=0$ as desired. \end{proof} \begin{rem} Theorem \ref{thm:connected} is in stark contrast to the situation for the Fano scheme $\mathbf{F}_k(X)$ for a general degree $d>2$ hypersurface $X\subset \mathbb{P}^n$. Such a Fano scheme $\mathbf{F}_k(X)$ is non-empty if and only if $\phi(n,k,d)\geq 0$, and connected if $\phi(n,k,d)\geq 1$, where \[ \phi(n,k,d)=(k+1)(n-k)-{{k+d}\choose{k}}, \] see \cite{langer:97a}. \end{rem} We now illustrate on several examples how our results help determine the irreducible component structure of $\mathbf{F}_k(X_{r,d})$. \begin{ex}[$\mathbf{F}_k(X_{r,d})$ for $k\geq (r-2)(d-1)+2$]\label{ex:one} For $k\geq (r-2)(d-1)+2$ and $r\geq 3$, Conjecture \ref{conj:one-split} would imply that $\mathbf{F}_k(X_{r,d})$ is one-split. Assume this to be true. Considering any $k$-plane $L$ contained in $X_{r,d}$, we know that some $x_{ij}$ must vanish. Intersecting $L$ with $x_{i1}=x_{i2}=\ldots=x_{id}=0$, we obtain a linear subspace $L'$ in $X_{r-1,d}$ of dimension $k'$, where $k'\geq k-(d-1)\geq (r-3)(d-1)+2$. Hence, $L'$ is also (conjecturally) one-split, as long as $r-1\geq 3$. We may proceed in this fashion until we obtain a linear subspace $L''$ in $X_{2,d}$ of dimension $k''$, where $k''\geq k-(r-2)(d-1)\geq 2$. If $L''$ is also one-split, then $L$ is contained in an $(r(d-1)-1)$-plane of the form \[ x_{1j_1}=x_{2j_2}=\ldots=x_{rj_r}=0 \] for some choice of $j_1,\ldots,j_r$. The $k$-planes in this fixed $(r(d-1)-1)$-plane are parametrized by the Grassmannian $G(k+1,r(d-1))$. 
This leads to $d^r$ irreducible components of $\mathbf{F}_k(X_{r,d})$, each isomorphic in its reduced structure to $G(k+1,r(d-1))$. If on the other hand $L''$ is not one-split, then Equation \eqref{eqn:infano} implies that after some permutation in the $j$ indices, $y_{1j}$ and $y_{2j}$ are linearly dependent for all $j$. In particular, $L''$ is contained in a $(d-1)$-plane of $X_{2,d}$, appearing in a $(d-1)$-dimensional family. Thus, the plane $L$ is contained in an $(r-1)(d-1)$-plane of $X_{r,d}$, which is moving in a $(d-1)$-dimensional family. This can only occur if $k\leq (r-1)(d-1)$. In such cases, it follows that the corresponding irreducible component of $\mathbf{F}_k(X_{r,d})$ has dimension $(d-1)+(k+1)((r-1)(d-1)-k)$, and there are \[ {r\choose 2} d^{r-2}\cdot (d!) \] such components. To summarize, the Fano scheme has two types of irreducible components: \begin{itemize} \item {\bf Type A}: $d^r$ components of dimension $(k+1)(r(d-1)-(k+1))$, isomorphic in their reduced structures to a Grassmannian; general $k$-planes in such components are contained in the intersection of $r$ coordinate hyperplanes. \item {\bf Type B}: Assuming $k\leq (r-1)(d-1)$, ${r\choose 2} d^{r-2}\cdot (d!)$ components of dimension $(d-1)+(k+1)((r-1)(d-1)-k)$; general $k$-planes in such components are contained in the intersection of $(r-2)$ coordinate hyperplanes. \end{itemize} This analysis relied on Conjecture \ref{conj:one-split}. By Theorem \ref{thm:conj}, this holds true if $r \leq 6$ or $d=4$, so we know our above conclusions are true as long as this is satisfied. Furthermore, by Remark \ref{rem:3d}, the one-splitting we need follows if $k\geq d(r-3)$. But this is always satisfied as long as $r\leq d+2$. \end{ex} The above example is somewhat elementary, since all the Fano schemes appearing in the reduction steps are one-split or two-split.
However, if we understand the structure of a Fano scheme which is not one-split (or even two-split), we can leverage this to an understanding of $\mathbf{F}_k(X_{r,d})$ for larger values of $r$. We will illustrate in the next two examples. \begin{ex}[Special components of $\mathbf{F}_d (X_{3,d})$]\label{ex:base} By Example \ref{ex:sharp}, we know that $\mathbf{F}_d(X_{3,d})$ is not one-split; since $r=3$, it is also not two-split. Nonetheless, with a bit of work, we can completely describe these Fano schemes. In this example, we will describe a special type of irreducible component; all components will be dealt with in Example \ref{ex:three}. We begin with the case $d=2$ (although we have usually been assuming $d>2$). The variety $X_{3,2}$ is just a non-singular quadric fourfold; it is well-known that $\mathbf{F}_2(X_{3,2})$ is the disjoint union of two copies of $\mathbb{P}^3$. We now suppose that $d>2$. Let $L$ be a $d$-plane of $X_{3,d}$ which is not one-split. After reordering indices, we may assume that $y_{11},\ldots,y_{1\lambda_1},y_{21},\ldots,y_{2\lambda_2}$ are linearly independent, with $\lambda_1+\lambda_2\geq d+1$ and $\lambda_1\geq \lambda_2$. If $\lambda_2=1$, then we may replace $\lambda_1$ with $\lambda_1-1$ and $\lambda_2$ with $\lambda_2+1$, unless all $y_{2j}$ are linearly dependent. But this is easily seen to contradict Equation \eqref{eqn:infano}. So we may assume that $\lambda_2\geq 2$. For each choice $y_{1j_1},y_{2j_2}$ with $j_i\leq \lambda_i$, by setting $y_{1j_1}=y_{2j_2}=0$, we find that one $y_{3j}$ depends only on $y_{1j_1},y_{2j_2}$. A simple counting argument shows that some $y_{3j}$ can only depend on some $y_{1j_1}$ or $y_{2j_2}$. But by Equation \eqref{eqn:infano}, this form must also divide some $y_{2j_2'}$ or respectively $y_{1j_1'}$ (for $j_i'>\lambda_i$). Factoring this out of Equation \eqref{eqn:infano}, we arrive at the situation of a $k'$-plane in $X_{3,d-1}$, with $k'\geq d-1$.
If $k'>d-1$, then this plane is one-split, contradicting our assumption, so in fact $k'=d-1$. We continue in this fashion of reducing degree until we arrive at one of the two toric components of $\mathbf{F}_2(X_{3,2})$. The resulting component of $\mathbf{F}_d(X_{3,d})$ is a $(d-2)$-fold iterated $\mathbb{P}^2$ bundle over the toric component, and hence has dimension $3+2(d-2)=2d-1$. There are \[ 2\cdot {d \choose{2}}^3 \left({(d -2)}!\right)^2 \] such components: we choose one of two toric components of $\mathbf{F}_2(X_{3,2})$; then for each index $i$ we choose two of the $y_{ij}$ which are not getting factored out. We then match each of the remaining $y_{1j}$ with a $y_{2j'}$ and $y_{3j''}$, of which there are $\left((d -2)!\right)^2$ ways. \end{ex} We now leverage the above example to lower the bound on $k$ in Example \ref{ex:one} by one: \begin{ex}[$\mathbf{F}_{k}(X_{r,d})$ for $k=(r-2)(d-1)+1$]\label{ex:three} Let $L$ be any $k$-plane of $X_{r,d}$, for $k=(r-2)(d-1)+1$. As in Example \ref{ex:one}, Conjecture \ref{conj:one-split} would imply that $L$ is one-split, as long as $r\geq 5$. Arguing as in Example \ref{ex:one}, we successively reduce to a $k'$-plane $L'$ in $X_{4,d}$, with $k'\geq 2(d-1)+1=2d-1$. By Theorem \ref{thm:conj}, $L'$ is two-split. Suppose first that $L'$ is not one-split. Then after permuting $\{1,2,3,4\}$, we may assume that $\mathbf{y}_1+\mathbf{y}_2=\mathbf{y}_3+\mathbf{y}_4=0$. The factors of $\mathbf{y}_1$ and $\mathbf{y}_2$ must agree up to scaling, and similarly for $\mathbf{y}_3$ and $\mathbf{y}_4$. Similar to the component of type $B$ in Example \ref{ex:one}, we see that $L'$ is a $(2d-1)$-plane of $X_{4,d}$, moving in a $2(d-1)$-dimensional family. Thus, the plane $L$ also is moving in a $2(d-1)$-dimensional family. It follows that the corresponding irreducible component of $\mathbf{F}_k(X_{r,d})$ has dimension $2(d-1)$, and there are \[ {r\choose r-4,2,2} d^{r-4}\cdot (d!)^2 \] such components.
If $L'$ is one-split, we may reduce further to a $k''$-plane $L''$ in $X_{3,d}$ with $k''\geq d$. Suppose next that $L''$ is not one-split. Then $k''=d$, and $L''$ corresponds to a point in one of the $(2d-1)$-dimensional irreducible components described in Example \ref{ex:base}. It follows that the corresponding irreducible component of $\mathbf{F}_k(X_{r,d})$ has dimension $2d-1$, and there are \[ {r \choose 3}d^{r-3}\cdot 2\cdot {d \choose 2}^3 \left((d-2)!\right)^2 \] such components. Finally, if $L''$ is also one-split, then we get components of types A and B similar to those appearing in Example \ref{ex:one}. To summarize, assuming that the necessary splitting conjectures are true, $\mathbf{F}_{k}(X_{r,d})$ for $k=(r-2)(d-1)+1$ has the following irreducible components: \begin{itemize} \item {\bf Type A}: $d^r$ components of dimension \[2((r-2)(d-1)+2)(d-2),\] isomorphic in their reduced structures to a Grassmannian; general $k$-planes in such components are contained in the intersection of $r$ coordinate hyperplanes. \item {\bf Type B}: ${r\choose 2} d^{r-2}\cdot (d!)$ components of dimension \[(d-1)+( (r-2)(d-1)+2)(d-2);\] general $k$-planes in such components are contained in the intersection of $(r-2)$ coordinate hyperplanes. \item {\bf Type C}: there are \[{r \choose 3}d^{r-3}\cdot 2\cdot {d \choose 2}^3 \left((d-2)!\right)^2\] components of dimension $2d-1$; general $k$-planes in such components are contained in the intersection of $(r-3)$ coordinate hyperplanes. \item {\bf Type D}: there are \[{r\choose r-4,2,2} d^{r-4}\cdot (d!)^2\] components of dimension $2(d-1)$; general $k$-planes in such components are contained in the intersection of $(r-4)$ coordinate hyperplanes. \end{itemize} This analysis relied on appropriate splitting statements, which (as in Example \ref{ex:one}) hold true if $r \leq 6$, $d=4$, or $r\leq d+1$. \end{ex} \begin{ex}[$\mathbf{F}_5(X_{4,3})$] For a concrete example, consider $\mathbf{F}_5(X_{4,3})$.
By Example \ref{ex:three}, we see that this Fano scheme has the following components: \begin{center} \begin{tabular}{l r r} & Dimension & Number\\ \hline {\bf Type A} & 12 & 81\\ {\bf Type B} & 8 & 324\\ {\bf Type C} & 5 & 648\\ {\bf Type D} & 4 & 216\\ \end{tabular} \end{center} \end{ex} \section{Product Rank}\label{sec:pr} \subsection{Bounding product rank} \begin{proof}[Proof of Theorem \ref{thm:bound}] Assume that $\mathbf{pr}(f)\leq r$. Since $f$ is concise, this implies that there is an $n$-dimensional linear space $Y\subset \mathbb{P}^{rd-1}$ such that $V(f)=X_{r,d}\cap Y$. Since we are assuming that $V(f)$ is covered by $k$-planes, there must be a positive-dimensional irreducible subvariety $S\subset \mathbf{F}_k(X_{r,d})$ such that the $k$-planes corresponding to points in $S$ are all contained in $Y$, and such that the linear span of these $k$-planes is exactly $Y$. Now, if all $k$-planes parametrized by $S$ are contained in a coordinate hyperplane of $\mathbb{P}^{rd-1}$, we can clearly write $f$ as a sum of $r-1$ products of linear forms, that is, $\mathbf{pr}(f)\neq r$. But this is certainly the case if $\mathbf{F}_k(X_{r,d})$ is one-split. Assume instead that $r$ is even and the two-splitting assumption of the theorem is fulfilled. As above, if every $k$-plane parametrized by $S$ is contained in a coordinate hyperplane, we are done. Otherwise, by the two-splitting assumption, we can permute the indices $i=1,\ldots,r$ such that every $k$-plane $L$ parametrized by $S$ is contained in \[ V(x_{i1}\cdots x_{id}+x_{(i-1)1}\cdots x_{(i-1)d}) \] for $i=2,4,\ldots,r$. Using the notation from \S \ref{sec:fano}, this tells us that \begin{equation}\label{eqn:ys} y_{i1}\cdots y_{id}+y_{(i-1)1}\cdots y_{(i-1)d}=0 \end{equation} for $i=2,4,\ldots,r$.
After reordering the $y_{ij}$ for each fixed $i$, we conclude (by unique factorization of polynomials) that the forms $y_{ij}$ and $y_{(i-1)j}$ are proportional for $i=2,4,\ldots,r$ and $j\leq d$ for all $k$-planes $L$ in $S$. For some fixed $s\in\{2,4,\ldots,r\}$, suppose that the ratio $y_{sj}/y_{(s-1)j}$ is some constant $c_j$ as $L$ ranges over $S$. Note that these constants satisfy $\prod_j c_j=-1$. Then every $L$ in $S$ is contained in the linear space \[ V\left(\{x_{sj}-c_jx_{(s-1)j}\}_{j\leq d}\right) \] so their span $Y$ is as well. This means that after restricting to $Y$ we have \[ \sum_{i=1}^r x_{i1}\cdots x_{id}=\sum_{i\neq s,s-1} x_{i1}\cdots x_{id} \] so the product rank of $f$ is at most $r-2$. We have thus arrived at the situation where for each fixed $i=2,4,\ldots,r$, there is some $j\leq d$ such that the ratio between $y_{ij}$ and $y_{(i-1)j}$ is non-constant over $S$. A straightforward calculation shows that the dimension of the span of two general $k$-planes $L,L'$ in $S$ must be at least $k+r$, leading to the inequality $k+r\leq n$; by assumption, this is a contradiction. \end{proof} \begin{rem}\label{rem:better} Suppose that in the situation of part two of Theorem \ref{thm:bound}, we know that the family of $k$-planes $S\subset \mathbf{F}_k(V(f))$ covering $V(f)$ is $m$-dimensional. Then the hypothesis $k>n-r$ may be replaced with the condition $k>n-m-r/2$. Indeed, in the conclusion of the proof of the theorem, the assumption on the dimension of $S$ guarantees that at least $m$ of the ratios $y_{ij}/y_{(i-1)j}$ vary independently of each other. Combining Equation \eqref{eqn:ys} with the fact that at least one ratio $y_{ij}/y_{(i-1)j}$ varies for each $i$ guarantees that in fact a total of at least $m+r/2$ ratios vary. As above, this shows that the dimension of the span of two general $k$-planes $L,L'$ in $S$ must be at least $k+m+r/2$, leading to the desired contradiction.
\end{rem} \subsection{Examples of bounds on product rank} \begin{ex}[$3\times 3$ determinant]\label{ex:det3} In \cite{ilten:16a}, Z.~Teitler and the first author prove that $\mathbf{pr}(\det_3)>4$ over $\mathbb{C}$, where $\det_3$ is the determinant of a generic $3\times 3$ matrix. H.~Derksen gave an expression for $\det_3$ as a sum of $5$ multihomogeneous products of linear forms in \cite{derksen:16a}, so we conclude $\mathbf{pr}(\det_3)=5$. This also shows that the tensor rank of $\det_3$ equals five. The proof that $\mathbf{pr}(\det_3)>4$ consisted of a computer calculation showing that $\mathbf{F}_5(X_{4,3})$ is two-split, and then a special case of Theorem \ref{thm:bound}. Our Theorem \ref{thm:conj} makes this computer calculation unnecessary, and is valid in arbitrary characteristic. We conclude that the product and tensor ranks of $\det_3$ are at least five over \emph{any} field. Derksen's identity requires $2$ to be invertible, so we conclude that except in characteristic $2$, $\mathbf{pr}(\det_3)=5$. We do not know whether $\mathbf{pr}(\det_3)$ is $5$ or $6$ in characteristic $2$. \end{ex} \begin{ex}[$4 \times 4$ determinant]\label{ex:det4} Let $\det_4$ be the $4\times 4$ determinant; this is easily seen to be a concise form. The projective hypersurface $V(\det_4)$ is covered by $11$-dimensional linear spaces, see e.g.~\cite{ilten:15a}. By Theorem \ref{thm:conj}, we know that $\mathbf{F}_{11}(X_{6,4})$ and $\mathbf{F}_{7}(X_{4,4})$ are both two-split, so we may apply Theorem \ref{thm:bound} to conclude that $\mathbf{pr}(\det_4)\neq 6$. A similar application of Theorem \ref{thm:bound} shows that $\mathbf{pr}(\det_4)\neq 4,5$. If $\mathbf{pr}(\det_4)\leq 3$, then the projective hypersurface $V(\det_4)\subset \mathbb{P}^{15}$ must be a cone, in which case all maximal linear subspaces would contain a common line. But this is not the case, so we conclude that $\mathbf{pr}(\det_4)\geq 7$ (in arbitrary characteristic).
This is exactly the bound on product rank in characteristic zero which follows from Z.~Teitler and H.~Derksen's bound on Waring rank. They show that the Waring rank of $\det_4$ is at least $50$ \cite{derksen:15a}, from which it follows that $\mathbf{pr}(\det_4)\geq 7$ by \cite[\S 1.2]{ilten:16a}. Our above argument for the product rank of $\det_4$ can be generalized to show that, for $n\geq 3$, $\mathbf{pr}(\det_n)\geq 2n-1$, as long as we assume that Conjecture \ref{conj:two-split} holds. However, for $n\geq 5$ this is much worse than the bound that follows from known lower bounds on Waring rank \cite{derksen:15a}. \end{ex} \begin{ex}[$6 \times 6$ Pfaffian]\label{ex:pfaffian} Let $f$ be the Pfaffian of a generic $6\times 6$ skew-symmetric matrix; this is also a concise form. Derksen and Teitler show that the Waring rank of $f$ is at least $24$ \cite{derksen:15a}. Section 1.2 of \cite{ilten:16a} then implies that $\mathbf{pr}(f)\geq 6$. We will use Theorem \ref{thm:bound} to show that $\mathbf{pr}(f)\neq 6$, and hence $\mathbf{pr}(f)\geq 7$, a new lower bound. First note that by Theorem \ref{thm:conj}, $\mathbf{F}_9(X_{6,3})$ is one-split. Secondly, we have that $V(f)\subset \mathbb{P}^{14}$ is covered by projective $9$-planes. Indeed, for any $6\times 6$ singular skew-symmetric matrix $A$ with $0\neq v\in\mathbb{K}^6$ in its kernel, consider the linear space of all $6\times 6$ skew-symmetric matrices $B$ satisfying \[ B\cdot v=0. \] This is clearly a linear space of singular skew-symmetric matrices containing $A$. There are six linear conditions cutting out this linear space, but they are linearly dependent, since \[ v^\mathrm{tr}\cdot B\cdot v=0 \] holds identically for skew-symmetric $B$. Hence, $A$ is contained in a linear space of dimension $14-5=9$. The claim $\mathbf{pr}(f)\neq 6$ now follows from Theorem \ref{thm:bound}.
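The dimension count in the last step is easy to verify numerically. The following sketch (ours, not from the paper) writes the system $B\cdot v=0$ in the $15$ coordinates $b_{ij}$ ($i<j$) of a skew-symmetric $6\times 6$ matrix and checks that its rank is $5$, so the solution space has projective dimension $14-5=9$.

```python
from fractions import Fraction

def skew_kernel_rank(v):
    """Rank of the linear system B.v = 0 on skew-symmetric matrices B,
    in the unknowns b_ij (i < j) with B[i][j] = b_ij = -B[j][i]."""
    n = len(v)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Equation p reads: sum_j B[p][j] * v[j] = 0.
    rows = [[Fraction(v[j]) if p == i else Fraction(-v[i]) if p == j else Fraction(0)
             for (i, j) in pairs] for p in range(n)]
    # Gaussian elimination over the rationals to compute the rank.
    rank, col = 0, 0
    while rank < n and col < len(pairs):
        piv = next((r for r in range(rank, n) if rows[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(n):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

r = skew_kernel_rank([1, 2, 3, 4, 5, 6])
print(r, 15 - r - 1)  # rank 5, so projective dimension 9
```

The rank is at most $5$ because $v^{\mathrm{tr}}Bv=0$ holds identically; the computation confirms it is exactly $5$ for a generic choice of $v$.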
\end{ex} For our final example, we must use a different argument than Theorem \ref{thm:bound}, since the permanental hypersurface is not covered by high-dimensional linear spaces: \begin{ex}[$4\times 4$ permanent]\label{ex:perm} Let $\perm_4$ be the permanent of a generic $4\times 4$ matrix. Assume that the characteristic of $\mathbb{K}$ is not two; in characteristic two the permanent coincides with the determinant, so Example \ref{ex:det4} applies. Shafiei has shown that the Waring rank of $\perm_4$ is at least $35$ \cite{shafiei:15a}, from which it follows that $\mathbf{pr}(\perm_4)\geq 5$ by \cite[\S 1.2]{ilten:16a}. We will show that in fact, $\mathbf{pr}(\perm_4)\geq 6$. On the other hand, Glynn's formula gives $8$ as an upper bound for the product rank of $\perm_4$ \cite{glynn:10a}. Our result also gives a lower bound of $6$ on the \emph{tensor rank} of $\perm_4$. To fix notation, suppose that $\perm_4$ is the permanent of the matrix \[ M=\left(\begin{array}{c c c c} z_{11} &z_{12}& z_{13}& z_{14}\\ z_{21} &z_{22}& z_{23}& z_{24}\\ z_{31} &z_{32}& z_{33}& z_{34}\\ z_{41} &z_{42}& z_{43}& z_{44}\\ \end{array} \right). \] The hypersurface $V(\perm_4)\subset \mathbb{P}^{15}$ contains exactly eight $11$-planes \cite{ilten:15a}: $H_i$ and $V_j$ for $i,j\leq 4$ are given respectively by the vanishing of the $i$th row or $j$th column of $M$. Any two of the $H_i$, or any two of the $V_j$, span the entire space $\mathbb{P}^{15}$. If $\mathbf{pr}(\perm_4)\leq 5$, then $V(\perm_4)$ is isomorphic to $X_{5,4}$ intersected with a $15$-dimensional linear space. If, under the embedding of $V(\perm_4)$ in $X_{5,4}$, two of the $H_i$ or two of the $V_j$ are contained in a common coordinate hyperplane, it follows that $\mathbf{pr}(\perm_4)\leq 4$, contradicting the above bound. Thus, we may assume that this is not the case. On the other hand, it follows from Theorem \ref{thm:conj} that any $11$-plane of $X_{5,4}$ is contained in the intersection of three coordinate hyperplanes.
Since no pair of $H_i$ or $V_j$ is contained in a common coordinate hyperplane, some pair $(H_a,V_b)$ must be contained in a common coordinate hyperplane. Now, $H_a$ and $V_b$ span the $14$-dimensional linear space $L=V(z_{ab})\subset \mathbb{P}^{15}$. But since this $14$-dimensional linear space is contained in a coordinate hyperplane of $X_{5,4}$, we conclude that $\mathbf{pr}(\perm_4')\leq 4$, where $\perm_4'$ is obtained from $\perm_4$ by setting $z_{ab}=0$. We will show that this is impossible. Indeed, in such a situation we would have $V(\perm_4')\subset \mathbb{P}^{14}$ isomorphic to the intersection of $X_{4,4}$ with a $14$-dimensional linear space. Intersecting $H_i$ and $V_j$ with $L$, we arrive at a set of eight $11$- or $10$-dimensional planes $H_i',V_j'$ with properties similar to the above. As above, if a pair of $H_i'$ or $V_j'$ is contained in a coordinate hyperplane in $X_{4,4}$, then we would have that $\mathbf{pr}(\perm_4')\leq 3$. But if this is not the case, an argument similar to the above shows that $\mathbf{pr}(\perm_4'')\leq 3$, where $\perm_4''$ is obtained from $\perm_4$ by setting two variables equal to zero. The key step of the argument is here another application of Theorem \ref{thm:conj}, showing that any $10$-plane of $X_{4,4}$ is contained in the intersection of four coordinate hyperplanes. To arrive at the final contradiction, first note that $\mathbf{pr}(\perm_4')\leq 3$ implies $\mathbf{pr}(\perm_4'')\leq 3$. The latter implies in particular that one can write $\perm_4''$ as a form in $12$ variables, that is, $V(\perm_4'')\subset\mathbb{P}^{13}$ must be a cone. However, utilizing the natural torus action on $V(\perm_4'')$ similarly to \cite[Proposition 2.3]{ilten:15a}, one easily verifies that the intersections of $H_1,H_2,H_3,H_4$ with this $\mathbb{P}^{13}\subset \mathbb{P}^{15}$ are all maximal linear subspaces of $V(\perm_4'')$. But their common intersection is empty, which contradicts $V(\perm_4'')$ being a cone.
We conclude that $\mathbf{pr}(\perm_4'')>3$, which in turn implies $\mathbf{pr}(\perm_4')>4$, which finally implies $\mathbf{pr}(\perm_4)>5$. \end{ex} \renewcommand{\MR}[1]{\relax} \end{document}
\begin{document} \title{Pure $\Sigma_2$-Elementarity beyond the Core \footnote{Preprint of article in press: G. Wilken, Pure $\Sigma_2$-elementarity beyond the core, Annals of Pure and Applied Logic 172 (2021), https://doi.org/10.1016/j.apal.2021.103001 }} \author{Gunnar Wilken\\ Structural Cellular Biology Unit\\ Okinawa Institute of Science and Technology\\ 1919-1 Tancha, Onna-son, 904-0495 Okinawa, Japan\\ {\tt [email protected]} } \maketitle \begin{abstract} \noindent We display the entire structure ${\cal R}_2$ coding $\Sigma_1$- and $\Sigma_2$-elementarity on the ordinals. This will enable the analysis of pure $\Sigma_3$-elementary substructures. \end{abstract} \section{Introduction} Intriguing theorems that serve as demonstrable examples of mathematical incompleteness have been a subject of great interest ever since G\"odel established his incompleteness theorems \cite{G31}, showing that Hilbert's programme (see \cite{Hilbert}, and for a recent account \cite{Z06}) could not be executed as originally expected. Some of the most appealing such theorems exemplifying incompleteness are the Paris-Harring\-ton theorem \cite{PH}, Goodstein sequences \cite{Goodstein, KirbyParis, WW13}, Kruskal's theorem \cite{Kr60, RW93}, its extension by Friedman \cite{S85}, and the graph minor theorem by Robertson and Seymour, see \cite{FRS87} and, for a general reference on so-called concrete mathematical incompleteness, Friedman's book \cite{Fr11}. The term concrete incompleteness refers to natural mathematical theorems independent of significantly strong fragments of $\mathrm{ZFC}$, Zermelo-Fraenkel set theory with the axiom of choice. Techniques and theorems on the phase transition from provability to unprovability that bear on methods and results from analytic number theory were developed by Weiermann \cite{We03,We09} in order to further understand phenomena of mathematical independence, with interesting contributions from others, e.g.\ Lee \cite{L14}.
A natural generalization of Kruskal's theorem was shown by Carlson \cite{C16}, using elementary patterns of resemblance \cite{C01} as basic structures of nested trees. The embeddability of tree structures into one another plays a central role in the area of well-quasi orderings, cf.\ \cite{C16}, maximal order types of which can be measured using ordinal notation systems from proof theory, cf.\ \cite{Schm79} and the introduction to \cite{WW11}. Elementary patterns of resemblance (in short: patterns) of order $n$ are finite structures of orderings $(\le_i)_{i\le n}$ where $\le_0$ is a linear ordering and $\le_1,\ldots,\le_n$ are forests such that 1) $\le_{i+1}\subseteq\le_i$ and 2) $a\le_{i+1}b$ whenever $a\le_i b\le_i c$ and $a\le_{i+1}c$ for all $i<n$ and all $a,b,c$ in the universe of the pattern. Patterns that do not contain further non-trivial functions or relations are called pure patterns. Kruskal's theorem establishes that the collection of finite trees is well-quasi ordered with respect to inf-preserving embeddings \cite{Kr60}. When it comes to patterns, the natural embeddings are coverings, i.e.\ $\le_0$-embeddings that maintain the relations $\le_i$ for $i=1,\ldots,n$. Carlson's theorem \cite{C16} states that pure patterns of order $2$ are well-quasi ordered with respect to coverings. This is provable in the extension of $\mathrm{RCA}_0$, the base theory of reverse mathematics with recursive comprehension, by the uniform $\Pi^1_1$-reflection principle for ${\operatorname{KP}\!\ell_0}$, a subsystem of set theory axiomatizing a mathematical universe that is a limit of admissible sets, see \cite{Ba75,C16}. In \cite{W18} we give a proof, which is independent of \cite{C17}, of the fact that Carlson's theorem is unprovable in ${\operatorname{KP}\!\ell_0}$, or equivalently, in $\Pi^1_1{\operatorname{-CA}_0}$.
This latter subsystem of second order number theory, where induction is restricted to range over sets and set comprehension is restricted to $\Pi^1_1$-formulae, plays a prominent role in reverse mathematics; see Simpson \cite{S09} for the relevance of these theories in mathematics. Pohlers \cite{P98} provides an extensive exposition of various subsystems of set theory and second order number theory, equivalences and comparisons in strength, and their proof-theoretic ordinals. Note that in \cite{P98} ${\operatorname{KP}\!\ell_0}$ goes by the name ${\operatorname{KP}\!\ell^r}$. \subsection*{\boldmath Discovery of patterns, the structure ${\cal R}_1$, and Cantor normal form\unboldmath} Patterns were discovered by Carlson \cite{C00} during the model construction that verifies the consistency of epistemic arithmetic with the statement \emph{I know I am a Turing machine}, thereby proving a conjecture by Reinhardt; see Section 21.2 of \cite{W} for a short summary. The consistency proof uncovers, through a demand for $\Sigma_1$-elementary substructures via a well-known finite set criterion, see Proposition 1.2 of \cite{C99}, here part 1 of Proposition \ref{letwocriterion}, a structure ${\cal R}_1=\left({\mathrm{Ord}};\le,\le_1\right)$ of ordinals, where $\le$ is the standard linear ordering on the ordinals and the relation $\le_1$ is defined recursively in $\beta$ by \[\alpha\le_1\beta:\Leftrightarrow (\alpha;\le,\le_1) \preceq_{\Sigma_1} (\beta;\le,\le_1).\] We therefore have $\alpha\le_1\beta$ if and only if $(\alpha;\le,\le_1)$ and $(\beta;\le,\le_1)$ satisfy the same $\Sigma_1$-sentences over the language $(\le,\le_1)$ with parameters from $\alpha=\{\gamma\mid\gamma<\alpha\}$, where $\Sigma_1$-formulas are quantifier-free formulas preceded by finitely many existential quantifiers.
${\cal R}_1$ was analyzed in \cite{C99} and shown to be recursive and periodic in multiples of the ordinal $\varepsilon_0$, the proof-theoretic ordinal of Peano arithmetic (${\operatorname{PA}}$). Pure patterns of order $1$ comprise the finite isomorphism types of ${\cal R}_1$ and provide ordinal notations that denote their unique pointwise minimal coverings within ${\cal R}_1$, an observation Carlson elaborated in \cite{C01}. In ${\cal R}_1$ the relation $\le_1$ can still be described by a relatively simple recursion formula, see Proposition \ref{Roneformula} below, given in terms of Cantor normal form notation, as was carried out in \cite{C99}. Ordinal notations in Cantor normal form, indicated by the notation $=_{\mathrm{\scriptscriptstyle{CNF}}}$, are built up from $0,+$, and $\omega$-exponentiation, where $\omega$ denotes the least infinite ordinal, see \cite{P09,Sch77} for reference. We write $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}$ if $\alpha$ satisfies the equation with $\alpha_1\ge\ldots\ge\alpha_n$. If $\alpha$ is represented as a sum of weakly decreasing additive principal numbers (i.e.\ powers of $\omega$) $\rho_1\ge\ldots\ge\rho_m$, we also write $\alpha=_{\mathrm{\scriptscriptstyle{ANF}}}\rho_1+\ldots+\rho_m$, where we allow $m=0$ to cover the case $\alpha=0$. $\varepsilon_0$ is the least fixed point of $\omega$-exponentiation, as in general the class function $\alpha\mapsto\varepsilon_\alpha$ enumerates the class ${\mathbb E}:=\{\xi\mid\xi=\omega^\xi\}$ of epsilon numbers. Note that Cantor normal form notations with parameters from an initial segment, say $\tau+1$, of the ordinals provide notations for the segment of ordinals below the least epsilon number greater than $\tau$.
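Cantor normal form arithmetic below $\varepsilon_0$ is readily mechanized. The following sketch (the encoding and names are ours, purely for illustration) represents an ordinal as the weakly decreasing tuple of its CNF exponents, each exponent again such a tuple, and implements comparison and the non-commutative ordinal addition.

```python
# Ordinals below epsilon_0 in Cantor normal form: alpha = omega^a1 + ... + omega^an
# with a1 >= ... >= an, encoded as the tuple of exponents (each again such a tuple).
ZERO = ()          # 0 is the empty sum
ONE = (ZERO,)      # omega^0 = 1
OMEGA = (ONE,)     # omega^1

def cmp_cnf(a, b):
    """Compare two CNF ordinals; returns -1, 0, or 1."""
    for x, y in zip(a, b):
        c = cmp_cnf(x, y)
        if c != 0:
            return c
    # If one sum is a proper initial segment of the other, the longer is larger.
    return (len(a) > len(b)) - (len(a) < len(b))

def add_cnf(a, b):
    """Ordinal addition: terms of a below the leading term of b are absorbed."""
    if not b:
        return a
    return tuple(t for t in a if cmp_cnf(t, b[0]) >= 0) + b

assert add_cnf(ONE, OMEGA) == OMEGA             # 1 + omega = omega
assert add_cnf(OMEGA, ONE) == (ONE, ZERO)       # omega + 1
assert cmp_cnf(add_cnf(OMEGA, ONE), OMEGA) > 0  # omega + 1 > omega
```

The asymmetry $1+\omega=\omega<\omega+1$ falls out of the absorption rule in `add_cnf`.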
Defining ${\operatorname{lh}}(\alpha)$, the length of $\alpha$, to be the greatest $\beta$ such that $\alpha\le_1\beta$, if such $\beta$ exists, and ${\operatorname{lh}}(\alpha):=\infty$ otherwise, the recursion formula for ${\cal R}_1$ reads as follows: \begin{prop}[\cite{C99}]\label{Roneformula} For $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}<\varepsilon_0$, where $n>0$ and $\alpha_n=_{\mathrm{\scriptscriptstyle{ANF}}}\rho_1+\ldots+\rho_m$, we have \[{\operatorname{lh}}(\alpha)=\alpha+{\operatorname{lh}}(\rho_1)+\ldots+{\operatorname{lh}}(\rho_m).\] \end{prop} While, as shown in \cite{C99}, the structure ${\cal R}_1$ becomes periodic in multiples of $\varepsilon_0$, with the proper multiples of $\varepsilon_0$ characterizing those ordinals $\alpha$ which satisfy ${\operatorname{lh}}(\alpha)=\infty$, in ${\cal R}_2$ this isomorphic repetition of the interval $[1,\varepsilon_0]$ with respect to additive translation holds only up to the ordinal $\varepsilon_0\cdot(\omega+1)$, since the pointwise least $<_2$-pair is $\varepsilon_0\cdot\omega<_2\varepsilon_0\cdot(\omega+1)$ and hence ${\operatorname{lh}}(\varepsilon_0\cdot(\omega+1))=\varepsilon_0\cdot(\omega+1)$, as was shown in \cite{CWc}. \subsection*{G\"odel's program and patterns of embeddings} G\"odel's program to solve incompleteness progressively by the introduction of large cardinal axioms, see \cite{F96,K09}, together with the heuristic correspondence between large cardinals and ordinal notations through reflection properties and embeddings, motivated Carlson to view patterns as a programmatic approach to ordinal notations, as we pointed out earlier in \cite{W18} and \cite{W}.
According to his view, the concept of patterns of resemblance, in which binary relations code the property of elementary substructure, can be generalized to patterns of embeddings that involve codings of embeddings, and would ultimately tie in with inner model theory in the form of an ultra-fine structure. It seems plausible that ${\cal R}_n$-patterns ($n<\omega$) just suffice to analyze set-theoretic systems of $\Pi_m$-reflection ($m<\omega$), see \cite{R94,PS}. \subsection*{\boldmath Additive patterns of order $1$ and the structure ${\cal R}_1^+$\unboldmath} The extension of ${\cal R}_1$ to a relational structure ${\cal R}_1^+=\left({\mathrm{Ord}};0,+;\le,\le_1\right)$, containing the graphs of $0$ and ordinal addition, gives rise to additive patterns of order $1$ and is the central object of study in \cite{C01,W06,W07a,W07b,W07c,CWa,A15}. Such extensions are of interest when it comes to applications of patterns to ordinal analysis. Following Carlson's recommendation in \cite{C01}, \cite{W06} gives a sense of the relation $\le_1$ in ${\cal R}_1^+$; moreover, an easily accessible characterization of the Bachmann-Howard structure in terms of $\Sigma_1$-elementarity is given in full detail. The Bachmann-Howard ordinal characterizes the segment of ordinals below the proof-theoretic ordinal of Kripke-Platek set theory with infinity, ${\operatorname{KP}\!\omega}$, which axiomatizes the notion of (infinite) admissible set, cf.\ Barwise \cite{Ba75}, or equivalently $\operatorname{ID}_1$, the theory of non-iterated positive inductive definitions, cf.\ \cite{P09}. A benefit from such characterizations, as pointed out in the introduction to \cite{W11}, is an illustrative semantics for Skolem hull notations, cf.\ \cite{P91}, the type of ordinal notations that have been most useful in ordinal analysis so far, by the description of ordinals given in hull notation as least solutions to $\Sigma_1$-sentences over the language $(0,+,\le,\le_1)$.
The pattern approach to ordinal notations, on the other hand, is explained in terms of ordinal arithmetic in the style of \cite{RW93}. \cite{W07c} presents an ordinal assignment producing pointwise minimal instantiations of ${\cal R}_1^+$-patterns in the ordinals. In turn, pattern-characterizations are assigned to ordinals given in classical notation. In \cite{CWa} normal forms are considered, also from the viewpoint of \cite{C01}. The elementary recursive isomorphism between Skolem-hull notations and pattern notations for additive patterns of order $1$, established in \cite{W07a,W07b,W07c}, was of considerable interest to proof-theorists, as it explained the new concept of patterns in a language familiar to proof-theorists. It was shown in \cite{W07b,W07c} that notations in terms of ${\cal R}_1^+$-patterns characterize the proof-theoretic ordinal of $\Pi^1_1{\operatorname{-CA}_0}$. By simply incorporating (the graph of) ordinal addition into the pattern's language, patterns of order one considerably increase in strength. A similar phenomenon for classical notations is observed in the opposite direction in \cite{SchS} when withdrawing basic functions like addition from a notation system, as doing so causes a collapse, in the case of a system for $\Pi^1_1{\operatorname{-CA}_0}$ down to $\varepsilon_0$. On the other hand, the analysis of additive patterns of order $1$ is easily modified when adding (the graphs of) other basic arithmetic functions for expressive convenience, as we pointed out in earlier work. Such a procedure does not increase the order type of the resulting notation system any further, cf.\ Section 9 of \cite{C01}.
\subsection*{Skolem-hull based ordinal notations and arithmetic} The analysis of ${\cal R}_1^+$ in \cite{W07b} and the assignment of minimal ordinal solutions to additive patterns of order $1$ in \cite{W07c} and \cite{CWa} rely on a toolkit of ordinal arithmetic based on the relativized Skolem-hull notation systems ${\operatorname{T}}^\tau$ introduced in \cite{W07a}. The parameter $\tau$ is intended to be either $1$ or an arbitrary epsilon number, which we often denote as $\tau\in{\mathbb E}_1={\mathbb E}\cup\{1\}$. This type of ordinal notation system builds upon contributions over many decades mainly by Bachmann, Aczel, Feferman, Bridge, Sch\"utte, Buchholz, Rathjen, and Weiermann (see \cite{B55,Br75,B75,B81,R90,RW93}, and the introduction of \cite{W07a}). Preparatory variants of systems ${\operatorname{T}}^\tau$ that can be extended to stronger ordinal notation systems and arithmetic were provided in \cite{WW11}, starting with work by Buchholz and Sch\"utte in \cite{BSch76}. The so far most powerful notation systems of this classical type were introduced by Rathjen, resulting in a notation system for $\Pi^1_2{\operatorname{-CA}_0}$ \cite{R05}, and have quite recently been elaborated more completely for an analysis of the provably recursive functions of reflection by Pohlers and Stegert \cite{PS}. We now go into more detail regarding ordinal arithmetic sufficiently expressive to display pattern notations on the basis of order $1$ (additive) or order $2$ (pure). Let $\Omega^{>\tau}$ be the least regular cardinal greater than ${\mathrm{Card}}(\tau)\cup\aleph_0$. We also simply write $\Omega$ for $\Omega^{>\tau}$ when the dependence of $\Omega$ on $\tau\in{\mathbb E}_1$ is easily understood from the context.
For convenience we set ${\operatorname{T}}^0:={\operatorname{T}}^1={\operatorname{T}}$ for the original notation system that provides notations for the ordinals below the proof-theoretic ordinal of theories such as $\Pi^1_1{\operatorname{-CA}_0}$, ${\operatorname{KP}\!\ell_0}$, and ${\operatorname{ID}_{<\omega}}$, the latter being the theory of finitely iterated inductive definitions, cf.\ e.g.\ \cite{P98}. In the presence of constants for all ordinals less than $\tau$ and ordinal addition, stepwise collapsing, total, injective, unary functions $(\vartheta_i)_{i<\omega}$, where $\vartheta_0=\vartheta^\tau$ is relativized to $\tau$, give rise to unique terms for all ordinals in the notation system ${\operatorname{T}}^\tau$. The relativized systems ${\operatorname{T}}^\tau$ we are referring to were carefully introduced in Section 3 of \cite{W07a}, including complete proofs, but for the reader's convenience we provide a review of ordinal arithmetic in terms of the systems ${\operatorname{T}}^\tau$ in the next section, including the formal definition of the $\vartheta$-functions. However, for now, the descriptions given here should suffice to understand the larger picture. The Veblen function \cite{V1908} is a binary function in which the first argument indicates a lower bound of the fixed-point level. While $\varphi(0,\cdot)$ enumerates the additive principal numbers, i.e.\ the class of ordinals greater than $0$ that are closed under ordinal addition, which we often denote as ${\mathbb P}$, $\varphi(1,\cdot)$ enumerates the $\varepsilon$-numbers starting with $\varepsilon_0$. In general, $\alpha\mapsto\varphi(\xi+1,\alpha)$ enumerates the fixed points of $\varphi(\xi,\cdot)$, and $\varphi(\lambda,\cdot)$ enumerates the common fixed points of all $\varphi(\xi,\cdot)$, $\xi<\lambda$, where $\lambda$ is a limit ordinal.
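For the reader's orientation, the first levels of the Veblen hierarchy can be written out explicitly; the following identities are standard:
\[
\varphi(0,\alpha)=\omega^{\alpha},\qquad
\varphi(1,\alpha)=\varepsilon_{\alpha},\qquad
\varepsilon_0=\varphi(1,0)=\sup\{\omega,\,\omega^{\omega},\,\omega^{\omega^{\omega}},\ldots\},
\]
while $\varphi(2,0)$ is the least ordinal $\alpha$ satisfying $\alpha=\varepsilon_{\alpha}$.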
The fixed-point free variant $\bar{\varphi}$ of the Veblen function omits fixed points, meaning that $\bar{\varphi}(0,\cdot)$ enumerates those additive principal numbers that are not epsilon numbers, $\bar{\varphi}(1,\cdot)$ enumerates all epsilon numbers $\alpha$ such that $\alpha<\varepsilon_\alpha$, and so on. Therefore, the first argument of the function $\bar{\varphi}$ denotes the exact fixed-point level. Now, the fixed-point level of $\Gamma$-numbers, i.e.\ ordinals $\alpha$ such that $\alpha=\varphi(\alpha,0)$, cannot be expressed unless we introduce a ternary Veblen function. In the context of ${\operatorname{T}}^\tau$-systems this problem of finding names for increasing fixed-point levels is resolved by arithmetization through higher $\vartheta_j$-functions, relying on the regularity properties of the $\Omega_i$, in that they cannot be reached by enumeration functions from below. With ordinal addition being part of the set of functions over which Skolem hulling is performed, the function $\xi\mapsto\vartheta^\tau(\xi)$, where $\xi<\Omega$, enumerates all additive principal numbers in the interval $[\tau,\Omega)$ that are not epsilon numbers greater than $\tau$, for the sake of uniqueness of notation. Fixed points of increasing level, in the sense explained above in the context of the Veblen function, are in the range of $\vartheta^\tau$, however, as enumerations starting from multiples of $\Omega$. For example, $\xi\mapsto\vartheta^\tau(\Omega+\xi)$, where $\xi<\Omega$, enumerates the ordinals $\{\varepsilon_\alpha\mid\tau,\alpha<\varepsilon_\alpha<\Omega\}$. At each multiple of $\Omega$ in the domain of $\vartheta^\tau$ the enumeration of ordinals below $\Omega$ of next higher fixed-point level begins, again in fixed-point omitting manner, which means that $\xi<\vartheta^\tau(\Delta+\xi)$ whenever $\Delta$ is such a multiple of $\Omega$ and $\xi<\Omega$.
Letting $\Omega_0:=\tau$, $\Omega_1:=\Omega^{>\tau}$, and generally $\Omega_{i+1}$ to be the least (infinite) regular cardinal greater than $\Omega_i$ for $i<\omega$, this algorithm of denoting ordinals allows for a powerful mechanism extending the fixed-point free variant of the Veblen function. Generalizing from $\vartheta^\tau=\vartheta_0$, the function $\vartheta_i$ enumerates additive principal numbers starting with $\Omega_i$ in fixed-point free manner: \begin{lem}[cf.\ 4.2 of \cite{W07a}]\label{tenumlem} Let $i<\omega$. \begin{enumerate} \item[a)] $\vartheta_i(0)=\Omega_i$. \item[b)] For any $\alpha<\Omega_{i+1}$ we have $\vartheta_i(1+\alpha)=\bar{\omega}^{\Omega_i+\alpha}$, \end{enumerate} where $\xi\mapsto\bar{\omega}^{\xi}$ denotes the enumeration function of the additive principal numbers that are not epsilon numbers. \end{lem} Consequently, fixed points of increasing level are enumerated by $\vartheta_i$ with arguments starting from multiples of $\Omega_{i+1}$, which in turn are (additively) composed of values of the function $\vartheta_{i+1}$. Note that we have $\vartheta_i=\vartheta^{\Omega_i}$ for $i<\omega$, where the right-hand side $\vartheta$-function is a $\vartheta_0$-function that is relativized to $\Omega_i$. \medskip \noindent{\bf Examples.} Assuming $\tau=1$, we give a few instructive examples of values of $\vartheta$-functions: $\vartheta_0(0)=1=\omega^0=\varphi(0,0)$, $\vartheta_0(1)=\omega=\varphi(0,1)$, $\vartheta_0(\varepsilon_0)=\omega^{\varepsilon_0+1}$, $\vartheta_0(\vartheta_1(0))=\vartheta_0(\Omega)=\varepsilon_0=\varphi(1,0)$, $\vartheta_0(\Omega+\varepsilon_0)=\varepsilon_{\varepsilon_0}$, $\vartheta_0(\Omega+\Omega)=\varphi(2,0)$, $\vartheta_0(\Omega^2)=\Gamma_0$ where $\Omega=\aleph_1$ and $\Omega^2=\vartheta_1(\vartheta_1(0))$, and $\vartheta_0(\vartheta_1(\vartheta_2(0)))=\vartheta_0(\varepsilon_{\Omega+1})$ denoting the Bachmann-Howard ordinal.
The proof-theoretic ordinal of the theory of $n$-times iterated inductive definitions $\mathrm{ID}_n$, $0<n<\omega$, is $|\mathrm{ID}_n|=\vartheta_0(\vartheta_1(\ldots(\vartheta_{n+1}(0))\ldots))$. For $n<\omega$ the ordinal $\vartheta_0(\vartheta_1(\ldots(\vartheta_{n+1}(0))\ldots))$ is equal to the least $\le_1$-predecessor of the pointwise minimal $<_2$-chain of $(n+2)$-many ordinals, as was first shown in \cite{CWc}. Finally, $\sup\{\vartheta_0(\ldots\vartheta_n(0)\ldots)\mid n<\omega\}$ is the proof-theoretic ordinal of $\mathrm{ID}_{<\omega}$, ${\operatorname{KP}\!\ell_0}$, and $\Pi^1_1{\operatorname{-CA}_0}$, and is the limit of expressibility of ordinals in terms of pure patterns of order $2$, see \cite{W18}. Taking the functions $\vartheta_i$ for granted for now, we can formally define ${\operatorname{T}}^\tau$ and state a first lemma on the collapsing nature of the functions $\vartheta_i$. \begin{defi}[\boldmath Inductive Definition of ${\operatorname{T}}^\tau$, cf.\ 3.22 of \cite{W07a}\unboldmath]\label{Ttdefi} \mbox{} \begin{itemize} \item $\tau\subseteq{\operatorname{T}}^\tau$ \item $\xi,\eta\in{\operatorname{T}}^\tau\Rightarrow\xi+\eta\in{\operatorname{T}}^\tau$ \item $\xi\in{\operatorname{T}}^\tau\cap\Omega_{i+2}\Rightarrow\vartheta_i(\xi)\in{\operatorname{T}}^\tau$ for all $i<\omega$. \end{itemize} \end{defi} \begin{lem}[cf.\ 3.30 of \cite{W07a}] For every $i<\omega$ the function $\vartheta_i\restriction_{{\operatorname{T}}^\tau\cap\Omega_{i+2}}$ is $1$-$1$ and has values in ${\mathbb P}\cap[\Omega_i,\Omega_{i+1})$. \end{lem} Thus every ordinal in ${\operatorname{T}}^\tau$ can be identified with a unique term built up from parameters below $\tau$ using $+$ and the functions $\vartheta_i$ $(i<\omega)$, if we assume that additive compositions are given in additive normal form. The intersection ${\operatorname{T}}^\tau\cap\Omega$ turns out to be an initial segment of the ordinals, which for $\tau=1$ is the proof-theoretic ordinal of $\Pi^1_1{\operatorname{-CA}_0}$.
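For illustration, with $\tau=1$ the notation for $\varphi(2,0)$ from the examples above is generated by the closure clauses in three steps:

```latex
\[
0\in{\operatorname{T}}^1
\;\Rightarrow\;
\vartheta_1(0)=\Omega\in{\operatorname{T}}^1
\;\Rightarrow\;
\Omega+\Omega\in{\operatorname{T}}^1
\;\Rightarrow\;
\vartheta_0(\Omega+\Omega)=\varphi(2,0)\in{\operatorname{T}}^1,
\]
```

using, in turn, the clause for parameters below $\tau$, the clause for $\vartheta_1$ (as $0<\Omega_3$), closure under $+$, and the clause for $\vartheta_0$ (as $\Omega+\Omega<\Omega_2$).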
The essential reason for ${\operatorname{T}}^\tau\cap\Omega$ to be an ordinal is that all proper components $\eta\in{\operatorname{T}}^\tau\cap\Omega$ used to compose the unique notation of some $\xi\in{\operatorname{T}}^\tau\cap\Omega$ satisfy $\eta<\xi$, see Lemma \ref{segmentlem} and Theorem \ref{tnmthm} (3.12 and 3.14 of \cite{W07a}). For a term of the shape $\alpha:=\vartheta^\tau(\Delta+\xi)\in{\operatorname{T}}^\tau$, where $\Delta$ is a multiple of $\Omega$ and $\xi<\Omega$, as was mentioned earlier, this means that besides $\xi<\alpha$ we also have $\eta<\alpha$ for every subterm $\eta$ of $\Delta$ such that $\eta<\Omega$. The choice of regular cardinals for the $\Omega_i$ is merely a convenience that allows for short proofs. One can choose recursively regular ordinals instead (see \cite{R93}), at the cost of more involved proofs to demonstrate recursiveness (see \cite{Schl93}). The usage of regular cardinals causes gaps in notation systems constructed in a similar fashion as the systems ${\operatorname{T}}^\tau$, the first such gap being $[{\operatorname{T}}^\tau\cap\Omega,\Omega)$. Patterns arise in a more dynamic manner from local reflection properties. This is one major reason why, despite the elegance and brevity of their definition, their calculation is quite involved. Another, intrinsically connected, major reason for the subtleties in pattern calculation is the necessity to calculate and analyze in terms of the connectivity components of the relations $\le_i$. We will return to the discussion of connectivity components later. As explained above, we developed an expressive ordinal arithmetic in \cite{W07a} on the basis of Skolem hull term systems for initial segments $\tau^\infty:={\operatorname{T}}^\tau\cap\Omega^{>\tau}$ of ${\mathrm{Ord}}$, which was further extended in \cite{CWa}, \cite{CWc}, and \cite{W18}.
The following definition transfinitely iterates the closure under $\tau\mapsto\tau^\infty={\operatorname{T}}^\tau\cap\Omega^{>\tau}$ continuously through all of ${\mathrm{Ord}}$. Note that we have chosen the Greek lowercase letter $\upsilon$ (upsilon) in order to avoid ambiguity of notation. \begin{defi}[modified 9.1 of \cite{W07a}]\label{upsilondefi} Let $(\upsilon_\iota)_{\iota\in{\mathrm{Ord}}}$ be the sequence defined by \begin{enumerate} \item $\upsilon_0:=0$, \item $\upsilon_{\xi+1}:=\upsilon_\xi^\infty={\operatorname{T}}^{\upsilon_\xi}\cap\Omega^{>\upsilon_\xi}$, \item $\upsilon_\lambda:=\sup\{\upsilon_\iota\mid\iota<\lambda\}$ for $\lambda\in\mathrm{Lim}$, i.e.\ for limit ordinals $\lambda$. \end{enumerate} \end{defi} In Corollary 5.10 of \cite{W07b} we have shown that the maximal $<_1$-chain in ${\cal R}_1^+$ is $\mathrm{Im}(\upsilon)\setminus\{0\}$. The results from \cite{W07b}, \cite{W07c}, and from Sections 5 and 6 of \cite{CWa} were applied in a case study for an ordinal analysis of $\operatorname{ID}_{<\omega}$, \cite{Pae}, following Buchholz' style of operator-controlled derivations \cite{B92}. Proof-theoretic analysis based on patterns as originally intended by Carlson uses $\beta$-logic \cite{Cb}. As we pointed out in \cite{W}, the intrinsic semantic content of patterns, given via the notion of elementary substructure directly on the ordinals, in connection with their immediate combinatorial characterization and the concise elegance of their definition on the one hand, and the remarkable intricacies in their calculation, revealing mathematical depth not yet fully understood, on the other hand, create the impression that patterns contribute to the quest for natural well-orderings, cf.\ \cite{F}.
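For later reference, the first steps of the iteration $\iota\mapsto\upsilon_\iota$ defined above unfold as follows, combining identities recalled earlier in this introduction:

```latex
\[
\upsilon_1=1^\infty={\operatorname{T}}^1\cap\Omega^{>1}={\operatorname{T}}^1\cap\Omega,
\quad\mbox{the proof-theoretic ordinal of }\Pi^1_1{\operatorname{-CA}_0},
\]
\[
\upsilon_2=\upsilon_1^\infty={\operatorname{T}}^{\upsilon_1}\cap\Omega^{>\upsilon_1},\qquad
\upsilon_\omega=\sup\{\upsilon_n\mid n<\omega\}.
\]
```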
\subsection*{\boldmath The structure ${\cal R}_2$ and its core, notes on ${\cal R}_2^+$ and ${\cal R}_3$\unboldmath} Returning to the discussion of pure patterns of order $2$, let \[{\cal R}_2=\left({\mathrm{Ord}};\le,\le_1,\le_2\right)\] be the structure of the ordinals with the standard linear ordering $\le$ and the partial orderings $\le_1$ and $\le_2$, simultaneously defined by induction on $\beta$ in \[\alpha\le_i\beta:\Leftrightarrow \left(\alpha;\le,\le_1,\le_2\right) \preceq_{\Sigma_i} \left(\beta;\le,\le_1,\le_2\right)\] where $\preceq_{\Sigma_i}$ is the usual notion of $\Sigma_i$-elementary substructure (without bounded quantification), as analyzed thoroughly below the ordinal $\upsilon_1=1^\infty$ of ${\operatorname{KP}\!\ell_0}$ in \cite{CWc,W17,W18}. $\le_i$-relationships between ordinals considered as members of structures of patterns can be verified using finite-set criteria. The following criterion can be extended successively to provide criteria for the relations $\le_i$, $i<\omega$, see e.g.\ Proposition 3.9 of \cite{W}. \begin{prop}[cf.\ 7.4 of \cite{CWc}]\label{letwocriterion} Let $\alpha,\beta$ be such that $\alpha<\beta$, and let $X,\tilde{Y},Y$ be finite sets of ordinals such that $X,\tilde{Y}\subseteq\alpha$ and $Y\subseteq[\alpha,\beta)$. Consider the following properties: \begin{enumerate} \item $X<\tilde{Y}<\alpha$ and there exists an isomorphism $h:X\cup\tilde{Y}\stackrel{\cong}{\longrightarrow} X\cup Y$. \item For all finite $\tilde{Y}^+$, where $\tilde{Y}\subseteq\tilde{Y}^+\subseteq\alpha$, $h$ can be extended to an isomorphism $h^+$ such that \[h^+: X\cup\tilde{Y}^+\stackrel{\cong}{\longrightarrow} X\cup Y^+\] for a suitable superset $Y^+\subseteq\beta$ of $Y$. \end{enumerate} If for all such $X$ and $Y$ there exists a set $\tilde{Y}$ that satisfies property 1, then we have $\alpha<_1\beta$. If $\tilde{Y}$ can be chosen so that additionally property 2 holds, then we even have $\alpha<_2\beta$.
\end{prop} {\bf Proof.} A proof is given in Section 7 of \cite{CWc} and in greater detail in \cite{W} (Propositions 3.1 and 3.6). \mbox{ } $\Box$ Note that for any $\alpha\in{\mathrm{Ord}}$ the class $\{\beta\mid\alpha\le_i\beta\}$ is closed (under limits) for $i=1,2$ and is a closed interval for $i=1$. The criterion for $\le_1$ can be applied to see that in ${\cal R}_2$ we have $\alpha\le_1\alpha+1$ if and only if $\alpha\in\mathrm{Lim}$ and for every $<_2$-predecessor $\beta$ of $\alpha$ the ordinal $\alpha$ is a proper supremum of $<_2$-successors of $\beta$, see Lemma \ref{loalpllem}. Another basic observation is that whenever $\alpha<_2\beta$, $\alpha$ must be the proper supremum of an infinite $<_1$-chain, i.e.\ the order type of the set of $<_1$-predecessors of $\alpha$ must be a limit ordinal, see Lemma \ref{ktwoinflochainlem}. Another useful elementary observation, proved in \cite{CWc} and in \cite{W} (Lemma 3.7), is the following. \begin{lem}[7.6 of \cite{CWc}]\label{letwoupwlem} Suppose $\alpha<_2\beta$, $X\subseteq_\mathrm{fin}\alpha$, and $\emptyset\not=Y\subseteq_\mathrm{fin}[\alpha,\beta)$. \begin{enumerate} \item There exist cofinally many $\tilde{Y}\subseteq\beta$ such that $X\cup\tilde{Y}\cong X\cup Y$. More generally, for any $Z\subseteq_\mathrm{fin}\alpha$ with $X<Z$, if $\alpha\models\forall x\exists\tilde{Z}\;(x<\tilde{Z}\wedge\mbox{``}X\cup Z\cong X\cup\tilde{Z}\mbox{''})$ then this also holds in $\beta$.
\item Cofinally in $\alpha$, copies $\tilde{Y}\subseteq\alpha$ of $Y$ can be chosen that, besides satisfying $X<\tilde{Y}$ and $X\cup\tilde{Y}\cong X\cup Y$, also maintain $\le_1$-connections to $\beta$: for any $y\in Y$ such that $y<_1\beta$ the corresponding $\tilde{y}$ satisfies $\tilde{y}<_1\alpha$.\mbox{ } $\Box$ \end{enumerate} \end{lem} $\operatorname{Core}({\cal R}_2)$, the \emph{core} of ${\cal R}_2$, i.e.\ the union of the pointwise minimal instantiations of all finite isomorphism types of ${\cal R}_2$, was analyzed in \cite{W18} and shown to coincide (in domain) with the initial segment of the ordinals below $1^\infty$. Note that any (finite) subset of ${\cal R}_2$ gives rise to a substructure of ${\cal R}_2$; hence, it represents a (finite) isomorphism type of ${\cal R}_2$. It was shown in \cite{W18} that the collections of pure patterns of order $2$ as defined above and of finite isomorphism types of ${\cal R}_2$ coincide, and that each pure pattern of order $2$ has a \emph{unique isominimal} representative $P\subset{\cal R}_2$, where a finite substructure $Q\subset{\cal R}_2$ is isominimal if and only if $Q\le_\mathrm{\scriptscriptstyle{pw}} R$ for every $R\subset{\cal R}_2$ such that $Q\cong R$. Here $Q\le_\mathrm{\scriptscriptstyle{pw}} R$ means that $Q$ is pointwise less than or equal to $R$ with respect to increasing enumerations. The notions of core and isominimality were introduced by Carlson in \cite{C01}. In the case of patterns of order $1$ the increased strength resulting from basic arithmetic functions such as addition is matched by pure patterns of order $2$: we have shown in \cite{CWc} that any pure pattern of order $2$ has a covering below $1^\infty$, and that $1^\infty$ is the least such ordinal. According to a conjecture by Carlson, this phenomenon of compensation holds generally for all orders.
Despite the fact that the cores of ${\cal R}_1^+$ and ${\cal R}_2$ cover the same initial segment of the ordinals, their structures differ considerably. While, as pointed out in \cite{W}, the core of ${\cal R}_1^+$ shows a great deal of uniformity, reminding one of Girard's notion of dilator, cf.\ \cite{G81}, and giving rise to Montalb\'an's Question 27 in \cite{Mo11}, the core of ${\cal R}_2$ is a structure whose regularity is far less obvious, due to the absence of the uniformity provided by ordinal addition. In this context the ordinal $1^\infty$ is obtained as a collapse when weakening additive pattern notations of order $2$, based on the structure ${\cal R}_2^+=\left({\mathrm{Ord}};0,+;\le,\le_1,\le_2\right)$, to pure patterns of order $2$ arising in ${\cal R}_2$. We claim, as in \cite{W}, that the segment of countable ordinals denoted by the Skolem-hull notation system derived from the first $\omega$-many weakly inaccessible cardinals covers (the domain of) $\operatorname{Core}({\cal R}_2^+)$. Note that the notation system based on the first weakly inaccessible cardinal matches the proof-theoretic ordinal of the set theory $\operatorname{KPI}$, which axiomatizes an admissible universe that is a limit of admissible sets, and which is equivalent to the system $\Delta^1_2$-$\operatorname{CA}+\operatorname{BI}$ of second order number theory, first analyzed by J\"ager and Pohlers in \cite{JP82}, cf.\ also \cite{P98}. See Buchholz' seminal work \cite{B92} for an analysis of $\operatorname{KPI}$ via operator controlled derivations. The analysis of ${\cal R}_2^+$ is a topic of future work and requires a generalization of ordinal arithmetical methods, the beginning of which is outlined in \cite{WW11}. Note that \cite{C09} discusses patterns of order $2$ in a general way over Ehrenfeucht-Mostowski structures, however on the basis of modified relations $\le_1$, $\le_2$; see Section 5 of \cite{C09}.
In this article, we generalize the approach taken in \cite{CWc,W17,W18}, which in turn naturally extends the arithmetical analysis of pure $\Sigma_1$-elementarity given by Carlson in \cite{C99}, in order to arithmetically characterize the relations $\le_1$ and $\le_2$ in all of ${\cal R}_2$, not just its core. Define \[I:=\{\iota\in{\mathrm{Ord}}\mid\iota>1\mbox{ and not of the form }\iota=\lambda+1\mbox{ where }\lambda\in\mathrm{Lim}\}\] and let the expression $\iota\minusp1$ for $\iota\in{\mathrm{Ord}}$ denote $\iota_0$ if $\iota=\iota_0+1$ for some $\iota_0$, and simply $\iota$ otherwise. The following theorem is an immediate corollary of Theorem \ref{maintheo} of the present article. \begin{theo}[\boldmath Maximal $<_2$-chain in ${\cal R}_2$\unboldmath]\label{maxchaintheo} $\mbox{ }$ \begin{enumerate} \item The sequence $(\upsilon_\iota)_{\iota>0}$ is a $<_1$-chain through the ordinals, supporting the maximal $<_2$-chain through the ordinals, which is $(\upsilon_\iota)_{\iota\in I}$. Here maximality means that for any $\iota\in I$ (actually for any ordinal $\iota$), the enumeration of all $<_2$-predecessors of $\upsilon_\iota$ is given by $(\upsilon_\xi)_{\xi\in I\cap\iota}$. \item For $\iota\in{\mathrm{Ord}}\setminus I$ the ordinal $\upsilon_\iota$ is $\upsilon_{\iota\minusp1}$-$\le_1$-minimal, i.e.\ there does not exist any $\alpha\in(\upsilon_{\iota\minusp1},\upsilon_\iota)$ such that $\alpha<_1\upsilon_\iota$, and it does not have any $<_2$-successor. \item For every $\iota\in I$ the ordinal $\upsilon_\iota$ is a $<_2$-predecessor of every proper multiple of $\upsilon_{\iota\minusp1}$ greater than $\upsilon_\iota$. The ordinals of the form $\upsilon_\lambda$, where $\lambda\in\mathrm{Lim}$, comprise the set of suprema of $<_2$-chains of limit order type.
\mbox{ } $\Box$ \end{enumerate} \end{theo} We give a simple illustration of Theorem \ref{maxchaintheo} below, where $\lambda$ stands for any limit ordinal. All indicated ordinals are $\le_1$-connected; the blue edges, with the exception of the connection $\upsilon_\lambda<_2\upsilon_{\lambda+1}$, indicate the maximal $<_2$-chain of part 1 of the theorem. The \emph{least} $<_2$-successor of, for instance, $\upsilon_2$ is $\upsilon_2+\upsilon_1$, which in turn does not have any $<_2$-successors itself. The least $<_2$-successor of $\upsilon_3$ is $\upsilon_3+\upsilon_2$, and so forth, while the least $<_2$-successor of $\upsilon_\lambda$ is $\upsilon_\lambda\cdot2$ and the least $<_2$-successor of $\upsilon_{\lambda+2}$ is $\upsilon_{\lambda+2}+\upsilon_{\lambda+1}$, etc. Note that according to the theorem, $\upsilon_1$ and $\upsilon_{\lambda+1}$ do not have any $<_2$-successors. The ordinal $\upsilon_1$ is $\le_1$-minimal, and the greatest $<_1$-predecessor of $\upsilon_{\lambda+1}$ is $\upsilon_\lambda$.
\color{blue} \betagin{pgfpicture}{0cm}{0cm}{15cm}{2cm} \pgfxyline(1,1.1)(1,0.9) \pgfxyline(2,1)(4,1) \pgfxyline(2,1.1)(2,0.9) \pgfxyline(3,1.1)(3,0.9) \pgfxyline(4,1.1)(4,0.9) \pgfxyline(4,1)(4.8,1) \pgfputat{\pgfxy(2.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfputat{\pgfxy(3.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfputat{\pgfxy(4.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfxyline(7.2,1)(8,1) \pgfputat{\pgfxy(7.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfxyline(8,1.1)(8,0.9) \pgfxyline(9,1.1)(9,0.9) \pgfxyline(10,1)(12,1) \pgfxyline(10,1.1)(10,0.9) \pgfxyline(11,1.1)(11,0.9) \pgfxyline(12,1.1)(12,0.9) \pgfxyline(12,1)(12.8,1) \pgfputat{\pgfxy(12.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfputat{\pgfxy(10.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfputat{\pgfxy(11.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}} \pgfxycurve(8,1)(8.25,1.2)(8.75,1.2)(9,1) \pgfstroke {\color{red} \pgfputat{\pgfxy(8.5,1.25)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}}} \pgfxycurve(8,1)(8.5,1.8)(9.5,1.8)(10,1) \pgfstroke {\color{red} \pgfputat{\pgfxy(9,1.8)}{\pgfbox[center,base]{\color{red}${\scriptscriptstyle <_2}$}}} {\color{black} \pgfputat{\pgfxy(1,0.5)}{\pgfbox[center,base]{$\upsilon_1$}} \pgfputat{\pgfxy(2,0.5)}{\pgfbox[center,base]{$\upsilon_2$}} \pgfputat{\pgfxy(3,0.5)}{\pgfbox[center,base]{$\upsilon_3$}} \pgfputat{\pgfxy(4,0.5)}{\pgfbox[center,base]{$\upsilon_4$}} \pgfputat{\pgfxy(6,0.5)}{\pgfbox[center,base]{$\cdots$}} \pgfputat{\pgfxy(8,0.5)}{\pgfbox[center,base]{$\upsilon_{\lambda}$}} \pgfputat{\pgfxy(9,0.5)}{\pgfbox[center,base]{$\upsilon_{\lambda+1}$}} \pgfputat{\pgfxy(10,0.5)}{\pgfbox[center,base]{$\upsilon_{\lambda+2}$}} \pgfputat{\pgfxy(11,0.5)}{\pgfbox[center,base]{$\upsilon_{\lambda+3}$}} 
\pgfputat{\pgfxy(12,0.5)}{\pgfbox[center,base]{$\upsilon_{\lambda+4}$}} \pgfputat{\pgfxy(13,0.5)}{\pgfbox[center,base]{$\cdots$}} } \end{pgfpicture} \color{black} Theorem \ref{maintheo} also shows that the gap $[\upsilon_1,\upsilon_2)$ contains a $<_1$-chain of order type $\upsilon_2$ that is $<_1$-connected to $\upsilon_2$, and that the gap $[\upsilon_{\lambda+1},\upsilon_{\lambda+2})$ contains a $<_1$-chain of order type $\upsilon_{\lambda+2}$ that is $<_1$-connected to $\upsilon_{\lambda+2}$; in general, for any successor ordinal $\iota$ the interval $[\upsilon_\iota,\upsilon_{\iota+1})$ contains a $<_1$-chain of order type $\upsilon_{\iota+1}$ that is $<_1$-connected to $\upsilon_{\iota+1}$. Theorem \ref{maxchaintheo} provides an overview of the general structure of ${\cal R}_2$, and it follows from a detailed arithmetical characterization of the relations $\le_1$ and $\le_2$ in all of ${\cal R}_2$, shown here through a generalization of Theorem 7.9 and Corollary 7.13 of \cite{CWc} from the initial segment $1^\infty=\upsilon_1$ to all of ${\mathrm{Ord}}$: Theorem \ref{maintheo}, which for $i=1,2$ explicitly describes the $\le_i$-predecessors, and Corollary \ref{maincor}, which describes the $\le_i$-successors of a given ordinal. As a byproduct, a flaw in the proof of Theorem 7.9 of \cite{CWc} is corrected here in the generalized version; see also the paragraph preceding Theorem \ref{maintheo}. It is worth mentioning that Theorem \ref{maxchaintheo} corrects the claim made in the first paragraph of Section 8 of \cite{CWc}, where the role of the ordinals $\{\upsilon_{\lambda+1}\mid\lambda\in\mathrm{Lim}\}$ was overlooked. The exact description of the $<_i$-predecessors and $\le_i$-successors for $i=1,2$ (in particular the greatest one, whenever such exists) of an ordinal $\alpha$ relies on its tracking chain ${\mathrm{tc}}(\alpha)$ and the notion of maximal extension ($\operatorname{me}$) of tracking chains.
These notions are carefully elaborated in Section \ref{tstcsec} in the generalized form needed for an analysis of the entire structure ${\cal R}_2$, and can be derived from the term decomposition of $\alpha$ in Skolem hull notation. Tracking chains and their (maximal) extensions essentially make visible the surrounding (nested) $\le_i$-connectivity components in which $\alpha$ is located. Incomplete fragments of the general big picture of ${\cal R}_2$ described above repeat in a cofinal manner of growing complexity throughout all of ${\cal R}_2$, with the union of the pointwise minimal isomorphic copies of all finite patterns comprising $\operatorname{Core}({\cal R}_2)$, the universe of which is $\upsilon_1$, as shown in \cite{W18}. The $\le_i$-relationships claimed to hold in Theorem \ref{maintheo} are verified using Proposition \ref{letwocriterion} by transfinite induction through the ordinals. Conversely, starting from the backbone $\mathrm{Im}(\upsilon)$ provided by Theorem \ref{maxchaintheo} and applying reflection properties given by Lemma \ref{letwoupwlem} and the converse of Proposition \ref{letwocriterion} in a transfinitely iterated manner, we may eventually arrive back at (an isomorphic copy of) ${\cal R}_2$, cf.\ \cite{C01,C09} for an elaboration of such an approach. As a consequence of Theorem \ref{maintheo}, the converse of Proposition \ref{letwocriterion} holds in ${\cal R}_2$. The results established here enable us to show in \cite{W} that on the initial segment $\upsilon_{\omega^2+2}$ the structures ${\cal R}_2$ and ${\cal R}_3$ agree, where ${\cal R}_3=\left({\mathrm{Ord}};\le,\le_1,\le_2,\le_3\right)$ and \[\alpha\le_i\beta:\Leftrightarrow \left(\alpha;\le,\le_1,\le_2,\le_3\right) \preceq_{\Sigma_i} \left(\beta;\le,\le_1,\le_2,\le_3\right)\] simultaneously for $i=1,2,3$ and recursively in $\beta$, while \[\upsilon_{\omega^2}<_3\upsilon_{\omega^2+2}\] is the least occurrence of a $<_3$-pair in ${\cal R}_3$.
A detailed arithmetical analysis of the structure ${\cal R}_3$, using extended arithmetical means as for the analysis of ${\cal R}_2^+$, is the subject of ongoing work that will also be based on the present article. \subsection*{Organization of this article, stand-alone readability} The present article generalizes \cite{CWc}, with several improvements and corrections, and provides the necessary preparation for the result in \cite{W} (Section 21.4) and for future work on ${\cal R}_3$. \cite{W} starts out from an earlier, slightly longer, version of this introduction and reviews basic insights around patterns along with an outline of how they were discovered. The interested reader not yet familiar with patterns will find Sections 21.2 and 21.3 of \cite{W} helpful before reading the present article in detail. However, previous knowledge of \cite{W} is not required for understanding this article, which is intended to be readable as a stand-alone text. Clearly, for proofs and more details the reader is ultimately referred to the cited previous work on the subject; however, we have decided to renew several proofs in order to increase the accessibility of the work. Section 2 provides a review of the Skolem-hull based ordinal arithmetic developed for the analysis of patterns. Section 3 introduces the reader to the generalized concepts of tracking sequences and chains. These provide the necessary machinery to describe the nested structure of $\le_1$- and $\le_2$-connectivity components of ${\cal R}_2$. Besides the required generalization as compared to \cite{CWc}, the exposition contains considerably more explanations, examples, and motivations, as well as renewed and simplified proofs. The conceptual and technical improvement and simplification of Section 4 of \cite{CWc} in particular was first published in \cite{W17}, used in \cite{W18}, and is reviewed and extended for application in the present article. Section 4 contains the complete description of ${\cal R}_2$.
The proof of the main theorem, Theorem \ref{maintheo}, begins with an overview called a \emph{proof map} and contains simple examples for the various cases that need to be discussed. \section{Preliminaries} Here we give a review of the ordinal notational and arithmetical tools developed in \cite{W07a}, Section 5 of \cite{CWa}, and \cite{CWc}. For a reference on basic ordinal arithmetic we recommend Pohlers' book on proof theory, \cite{P09}. Let $\mathrm{Lim}_0$ denote the class $\{0\}\cup\mathrm{Lim}$, where $\mathrm{Lim}$ is the class of limit ordinals. Generally, for a set or class $X$ of ordinals we define $\mathrm{Lim}(X)$\index{$\mathrm{Lim}(X)$} to be the set of all $\alpha\in X$ that are proper suprema of subsets of $X$. By ${\mathbb P}$ we denote the class of additive principal numbers, i.e.\ the image $\{\omega^\eta\mid\eta\in{\mathrm{Ord}}\}$ of the $\omega$-exponentiation function, and we write ${\mathbb P}_0:=\{0\}\cup{\mathbb P}$ for the class of all ordinals that are closed under ordinal addition. By ${\mathbb L}$ we denote the class $\{\omega^\eta\mid\eta\in\mathrm{Lim}\}=\mathrm{Lim}({\mathbb P})$. By ${\mathbb M}$ we denote the class of multiplicative principal numbers, i.e.\ nonzero ordinals closed under ordinal multiplication. Note that ${\mathbb M}\subseteq{\mathbb P}$ since \[{\mathbb M}=\{1\}\cup\{\omega^{\omega^\eta}\mid\eta\in{\mathrm{Ord}}\}.\] As in the introduction, let ${\mathbb E}=\{\eta\mid\eta=\omega^\eta\}$ denote the class of all epsilon numbers, i.e.\ fixed points of $\omega$-exponentiation, and set ${\mathbb E}_1:=\{1\}\cup{\mathbb E}$. By ${\mathbb E}^{>\delta}$ we denote the class of epsilon numbers greater than $\delta$. Similarly, ${\mathbb P}^{\le\eta}$ denotes the set of additive principal numbers that are less than or equal to $\eta$, etc.
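A few concrete instances may help fix this notation; each follows immediately from the definitions just given and is added here purely for illustration:

```latex
\[
\omega=\omega^{\omega^0}\in{\mathbb M},\qquad
\omega^2\in{\mathbb P}\setminus{\mathbb M},\qquad
\omega^\omega=\omega^{\omega^1}\in{\mathbb M}\cap{\mathbb L},\qquad
\varepsilon_0\in{\mathbb E}\subseteq{\mathbb M},\qquad
\omega^2+\omega\notin{\mathbb P}.
\]
```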
Representations in normal form are sometimes explicitly marked, such as additive normal form (${\mathrm{\scriptscriptstyle{ANF}}}$, weakly decreasing additive principal summands) or multiplicative normal form of ordinals $\alpha\in{\mathbb P}$ (${\mathrm{\scriptscriptstyle{MNF}}}$, weakly decreasing multiplicative principal factors), or Cantor normal form (${\mathrm{\scriptscriptstyle{CNF}}}$) itself, i.e.\ $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}$ such that $\alpha_1\ge\ldots\ge\alpha_n$, $n\ge 0$. Here $n=0$ covers the case $\alpha=0$. For $\alpha=_{\mathrm{\scriptscriptstyle{ANF}}}\alpha_1+\ldots+\alpha_n$ we define ${\mathrm{end}}(\alpha):=\alpha_n$ and set ${\mathrm{end}}(0):=0$, while ${\operatorname{mc}}(\alpha)$ denotes the greatest additive component of $\alpha$, i.e.\ $\alpha_1$ if $n>0$ and $0$ otherwise. For an ordinal $\alpha=_{\mathrm{\scriptscriptstyle{ANF}}}\alpha_1+\ldots+\alpha_n$ the notation $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\beta+\gamma$ is a shorthand for $\beta=_{\mathrm{\scriptscriptstyle{ANF}}}\alpha_1+\ldots+\alpha_{n-1}$ and $\gamma=\alpha_n$ if $n>0$, and $\beta=\gamma=0$ otherwise, for completeness. This notation implies that $\gamma={\mathrm{end}}(\alpha)$. For $\alpha\in{\mathbb P}$ such that $\alpha=_{\mathrm{\scriptscriptstyle{MNF}}}\alpha_1\cdot\ldots\cdot\alpha_n$ the notation $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\beta\cdot\gamma$ indicates that $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\alpha_1\cdot\ldots\cdot\alpha_{n-1}$ and $\gamma=\alpha_n$ if $n>1$, and $\beta=1$, $\gamma=\alpha$ otherwise, for completeness. This has the effect that the last multiplicative principal factor in the multiplicative normal form of $\alpha$ (also written as ${\operatorname{lf}}(\alpha)$) is equal to $\gamma$. We define the ordinal $\alpha\minusp\beta$ as usual: in case $\alpha\le\beta$ it is $0$, while in case $\alpha>\beta$ it is the least $\gamma$ such that $\alpha=\gamma+\beta$ if such a $\gamma$ exists, and $\alpha$ otherwise.
If $\alpha\le\beta$ then we write $-\alpha+\beta$\index{$-\alpha+\beta$} for the unique $\gamma$ such that $\alpha+\gamma=\beta$. By $(1/\gamma)\cdot\alpha$ we denote the least ordinal $\delta$ such that $\alpha=\gamma\cdot\delta$, whenever such an ordinal exists. We write $\alpha\mid\beta$\index{$\alpha\mid\beta$} if $\beta$ is a (possibly zero) multiple of $\alpha$, i.e.\ $\exists\xi\,(\beta=\alpha\cdot\xi)$. For $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}$, ${\mathrm{log}}(\alpha)$ denotes the exponent of the greatest additive component of $\alpha$, i.e.\ $\alpha_1$ if $\alpha>0$ and $0$ otherwise. ${\mathrm{logend}}(\alpha)$\index{logend} is defined to be $0$ if $\alpha=0$ and $\alpha_n$ if $\alpha=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{\alpha_1}+\ldots+\omega^{\alpha_n}$. For $\alpha\in{\mathbb P}_0$ we also write ${\mathrm{log}}(\alpha)$ instead of ${\mathrm{logend}}(\alpha)$. For a function $f$ and a subset $X$ of its domain we denote the image of $X$ under $f$ by $f[X]=\{f(x)\mid x\in X\}$. Inequalities like $X<Y$ or $\alpha<X$, where $X,Y$ are sets of ordinals, mean the conjunction of all inequalities obtained by taking elements of the sets concerned. For sets $X$ and $Y$ we denote the set $\{x\mid x\in X\:\&\: x\not\in Y\}$ by $X\setminus Y$. Intervals of ordinals are often written in the following way: $(\alpha,\beta)=\{\gamma\mid\alpha<\gamma<\beta\}$, $[\alpha,\beta]=\{\gamma\mid\alpha\le\gamma\le\beta\}$, and mixed forms $(\alpha,\beta]$ and $[\alpha,\beta)$ are defined analogously. Clearly, we also simply have $\alpha=\{\gamma\mid\gamma<\alpha\}$. Sequences of ordinals (also called ordinal vectors) are often written as $\vec{\alpha}=(\alpha_1,\ldots,\alpha_n)$, and prepending or appending an ordinal $\alpha$ is written as $\alpha^\frown\vec{\alpha}=(\alpha,\alpha_1,\ldots,\alpha_n)$ and $\vec{\alpha}^\frown\alpha=(\alpha_1,\ldots,\alpha_n,\alpha)$, respectively. Concatenation of $\vec{\alpha}$ with $\vec{\beta}=(\beta_1,\ldots,\beta_m)$ is written as $\vec{\alpha}^\frown\vec{\beta}=(\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_m)$. Similar notation is used for sequences of ordinal vectors.
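The basic operations just introduced are all computable on ordinal notations. As a hypothetical illustration (not part of the cited material), the following Python sketch represents ordinals below $\varepsilon_0$ by their Cantor normal form, encoded as weakly decreasing tuples of exponents, and implements comparison, ordinal addition, ${\mathrm{end}}$, ${\operatorname{mc}}$, and left subtraction $-\alpha+\beta$:

```python
# Ordinals below epsilon_0 in Cantor normal form (CNF):
# alpha = omega^a1 + ... + omega^an with a1 >= ... >= an is encoded
# as the tuple (a1, ..., an) of its exponents, each of which is
# itself such a tuple.  () encodes 0 and ((),) encodes 1.

ZERO = ()
ONE = (ZERO,)
OMEGA = (ONE,)  # omega = omega^1

def cmp_ord(a, b):
    """Compare two CNF ordinals; returns -1, 0, or 1.  For weakly
    decreasing exponent tuples this is the lexicographic order,
    with proper prefixes being smaller."""
    for x, y in zip(a, b):
        c = cmp_ord(x, y)
        if c != 0:
            return c
    return (len(a) > len(b)) - (len(a) < len(b))

def add(a, b):
    """Ordinal addition: summands of a whose exponent lies below the
    leading exponent of b are absorbed by b (e.g. 1 + omega = omega)."""
    if not b:
        return a
    return tuple(e for e in a if cmp_ord(e, b[0]) >= 0) + b

def end(a):
    """Last additive component end(alpha); end(0) = 0."""
    return (a[-1],) if a else ZERO

def mc(a):
    """Greatest additive component mc(alpha); mc(0) = 0."""
    return (a[0],) if a else ZERO

def left_sub(a, b):
    """For a <= b, the unique gamma with a + gamma = b (written -a+b)."""
    assert cmp_ord(a, b) <= 0
    if not a:
        return b
    if cmp_ord(a[0], b[0]) < 0:
        return b  # a is absorbed by b, so a + b = b
    return left_sub(a[1:], b[1:])
```

For instance, `add(OMEGA, ONE)` yields the encoding of $\omega+1$, whose ${\mathrm{end}}$ is $1$ and whose ${\operatorname{mc}}$ is $\omega$, and `left_sub` recovers $-\omega+\omega\cdot2=\omega$, in accordance with the definitions above.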
\subsection{Stepwise collapsing functions \boldmath $(\vartheta_i)_{i\in\omega}$ \unboldmath} \cite{W06} can be seen as an introduction to the kind of ordinal arithmetic reviewed here, however with ordinal addition \emph{and} $\omega$-exponentiation as basic functions over which Skolem hulling is performed, and with only one collapsing function $\vartheta$ instead of a family $(\vartheta_i)_{i\in\omega}$ of stepwise collapsing functions. \cite{W06} therefore only covers notations for ordinals below the Bachmann-Howard ordinal, but provides an elementary and quickly accessible treatment of the Bachmann-Howard structure both in terms of hull notations and additive patterns of order one. However, we do not assume knowledge of \cite{W06} for the understanding of this article. The main reference for this subsection is Section 3 of \cite{W07a}, which contains detailed proofs. Here we give an outline of the construction of the systems ${\operatorname{T}}^\tau$, which along the lines of the introduction reduces to the formal definition of the system $(\vartheta_i)_{i\in\omega}$ of collapsing functions. The construction can be seen as a straightforward direct limit construction, since ${\operatorname{T}}^\tau$ and $(\vartheta_i)_{i\in\omega}$ are obtained from increasing initial segments, i.e.\ through a limit of end-extensions. Once established, the usefulness of these relativized notations becomes apparent through algebraic exactness and the absence of normality conditions that would have to be verified permanently, as is the case when one works with non-injective collapsing functions. The usage of the functions $\vartheta_i$ for increasing $i$ will allow us to arithmetically characterize pure patterns with an increasing maximal number of nestings of $\le_2$, as indicated in the examples following Lemma \ref{tenumlem}.
Let us fix the general setting for the definition of relativized hull notations as in the introduction: \emph{Let $\tau\in{\mathbb E}_1$, set $\Omega_0:= \tau$, suppose that $\Omega_1$ is an uncountable regular cardinal number greater than $\tau$, and let $\Omega_{i+1}$ for $i\in(0,\omega)$ be the regular cardinal successor of $\Omega_i$.} \subsubsection{\boldmath The notation systems ${\operatorname{T}}^n_m$\unboldmath} The following definition, taken from \cite{W07a}, is fundamental for the construction of the systems ${\operatorname{T}}^\tau$. It provides the necessary support framework for the definition of the $\vartheta^n_m$-functions, which in turn yield the desired $\vartheta_i$-functions via successive end-extension as $n\to\omega$. We finally obtain ${\operatorname{T}}^\tau$ as a direct limit of the systems $({\operatorname{T}}^n_0)_{n<\omega}$. Apart from their role as auxiliary systems, for $\tau=1$ the system ${\operatorname{T}}^1_0$ provides a notation system suited for an analysis of Peano arithmetic (${\operatorname{PA}}$), as ${\operatorname{T}}^1_0\cap\Omega=\varepsilon_0$, and ${\operatorname{T}}^{n+1}_0$ provides a notation system for the theory ${\operatorname{ID}_n}$ of $n$-times iterated inductive definitions, for $n\in(0,\omega)$. \begin{defi}[3.1 of \cite{W07a}] Let $n\in(0,\omega)$. Descending from $m=n-1$ down to $m=0$ we define sets of ordinals ${\operatorname{C}}^n_m(\alpha,\beta)$\index{${\operatorname{C}}^n_m$} where $\beta<\Omega_{m+1}$ and ordinals $\vartheta^n_m(\alpha)$\index{$\vartheta^n_m$} by simultaneous recursion on $\alpha<\Omega_{m+2}$.
\noindent For each $\beta<\Omega_{m+1}$ the set ${\operatorname{C}}^n_m(\alpha,\beta)$ is defined inductively by \begin{itemize} \item $\Omega_m\cup\beta\subseteq{\operatorname{C}}^n_m(\alpha,\beta)$ \item $\xi,\eta\in{\operatorname{C}}^n_m(\alpha,\beta)\;\Rightarrow\;\xi+\eta\in{\operatorname{C}}^n_m(\alpha,\beta)$ \item $\xi\in{\operatorname{C}}^n_m(\alpha,\beta)\cap\Omega_{k+2}\;\Rightarrow\;\vartheta^n_k(\xi)\in{\operatorname{C}}^n_m(\alpha,\beta)$ for $m<k<n$ \item $\xi\in{\operatorname{C}}^n_m(\alpha,\beta)\cap\alpha\;\Rightarrow\;\vartheta^n_m(\xi)\in{\operatorname{C}}^n_m(\alpha,\beta)$. \end{itemize} Having defined $\vartheta^n_m(\xi)$ for all $\xi<\alpha$ and ${\operatorname{C}}^n_m(\alpha,\beta)$ for every $\beta<\Omega_{m+1}$ we set \[\vartheta^n_m(\alpha):=\min(\set{\xi<\Omega_{m+1}}{{\operatorname{C}}^n_m(\alpha,\xi)\cap\Omega_{m+1}\subseteq\xi\wedge\alpha\in{\operatorname{C}}^n_m(\alpha,\xi)} \cup\{\Omega_{m+1}\}).\] \end{defi} Note that we have $\vartheta^n_m(0)=\Omega_m$ for $m<n$. The function $\vartheta^n_{n-1}$ is not a proper collapsing function, simply because $\Omega_n\not\in{\operatorname{C}}^n_m(\alpha,\beta)$, as will become clear in the sequel. The next two lemmas follow immediately from the above definition. \begin{lem}[3.2 of \cite{W07a}] Let $\alpha, \alpha_1, \alpha_2, \gamma<\Omega_{m+2}$ and $\beta, \beta_1, \beta_2, \delta<\Omega_{m+1}$. \begin{enumerate} \item[a)] If $\delta\subseteq{\operatorname{C}}^n_m(\alpha,\beta)$ then ${\operatorname{C}}^n_m(\alpha,\delta)\subseteq{\operatorname{C}}^n_m(\alpha,\beta)$. \item[b)] For $\alpha_1\le\alpha_2$ and $\beta_1\le\beta_2$ we have ${\operatorname{C}}^n_m(\alpha_1,\beta_1)\subseteq{\operatorname{C}}^n_m(\alpha_2,\beta_2)$. \item[c)] We have ${\operatorname{C}}^n_m(\alpha,\beta)=\bigcup_{\gamma<\alpha}{\operatorname{C}}^n_m(\gamma,\beta)$ for $\alpha\in\mathrm{Lim}$ and similarly ${\operatorname{C}}^n_m(\alpha,\beta)=\bigcup_{\delta<\beta}{\operatorname{C}}^n_m(\alpha,\delta)$ for $\beta\in\mathrm{Lim}$. \item[d)] ${\mathrm{Card}}({\operatorname{C}}^n_m(\alpha,\beta))<\Omega_{m+1}$.
\end{enumerate} \end{lem} In the following we write ${\operatorname{C}}^n_m(\Omega_{m+2},\beta)$ for $\bigcup_{\alpha<\Omega_{m+2}}{\operatorname{C}}^n_m(\alpha,\beta)$ and ${\operatorname{C}}^n_m(\alpha,\Omega_{m+1})$ for $\bigcup_{\beta<\Omega_{m+1}}{\operatorname{C}}^n_m(\alpha,\beta)$. \begin{lem}[3.3 of \cite{W07a}]\label{thtseglem} Let $\alpha<\Omega_{m+2}$ and $\beta<\Omega_{m+1}$. \begin{enumerate} \item[a)] $\vartheta^n_m(\alpha)={\operatorname{C}}^n_m(\alpha,\vartheta^n_m(\alpha))\cap\Omega_{m+1}$. \item[b)] $\vartheta^n_m(\alpha)\in{\mathbb P}\cap[\Omega_m,\Omega_{m+1}]$. \item[c)] Let $\xi=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_l$. Then $\xi\in{\operatorname{C}}^n_m(\alpha,\beta)$ iff $\xi_1,\ldots,\xi_l\in{\operatorname{C}}^n_m(\alpha,\beta)$. \end{enumerate} \end{lem} As in Lemma \ref{tenumlem}, additive principal numbers which are not epsilon numbers can be characterized as follows. Recall that the function $\omegaq{\cdot}$ enumerates the non-epsilon additive principal numbers. \begin{lem}[3.5 of \cite{W07a}]\label{enumlem} Let $m<n$. For every $\alpha<\Omega_{m+1}$ we have \[\vartheta^n_m(1+\alpha)=\omegaq{\Omega_m+\alpha}.\] \end{lem} \begin{defi}[3.6 of \cite{W07a}] For $m<n$ we define \[{\operatorname{T}}^n_m:={\operatorname{C}}^n_m(\Omega_{m+2},0)\] and set ${\operatorname{T}}^n_n:=\Omega_n$ for convenience.\index{${\operatorname{T}}^n_m$} \end{defi} \begin{lem}[3.7 of \cite{W07a}] For $m<n$ the set ${\operatorname{T}}^n_m$ is inductively characterized as follows: \begin{itemize} \item $\Omega_m\subseteq{\operatorname{T}}^n_m$ \item $\xi,\eta\in{\operatorname{T}}^n_m\;\Rightarrow\;\xi+\eta\in{\operatorname{T}}^n_m$ \item $\xi\in{\operatorname{T}}^n_m\cap\Omega_{k+2}\;\Rightarrow\;\vartheta^n_k(\xi)\in{\operatorname{T}}^n_m$ for $m\le k<n$. \end{itemize} \end{lem} The next lemma, whose most important claim is that the $\vartheta$-functions are collapsing functions, depends on the regularity of the cardinals $\Omega_n$, where $0<n<\omega$.
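As an illustration (ours, not taken from \cite{W07a}), consider the smallest case $\tau=1$ and $m=0$, so that $\Omega_0=1$: together with $\vartheta^n_0(0)=\Omega_0=1$, Lemma \ref{enumlem} gives
\[\vartheta^n_0(1)=\omega,\qquad \vartheta^n_0(2)=\omega^2,\qquad \vartheta^n_0(\omega)=\omega^\omega,\qquad \vartheta^n_0(\varepsilon_0)=\varepsilon_0\cdot\omega,\]
the last value because the enumeration $\omegaq{\cdot}$ skips the epsilon number $\varepsilon_0$. The first properly collapsed value (for $n\ge 2$) is then $\vartheta^n_0(\Omega_1)=\varepsilon_0$: by the defining minimization it is the least $\xi$ that is closed under addition and under the previously obtained values $\vartheta^n_0(\alpha)$ for arguments $\alpha<\xi$.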
In \cite{W06} we showed that the $\vartheta$-function defined there is a total collapsing function on the segment $\varepsilon_{\Omega+1}$, the least epsilon number greater than $\Omega$. The analogue here is that $\vartheta^n_m$ is a total collapsing function on the set ${\operatorname{C}}^n_m(0,\Omega_{m+1})\cap\Omega_{m+2}$ (for $m<n-1$). The latter set is in fact the largest segment of ordinals having notations in ${\operatorname{T}}^n_{m+1}$. \begin{lem}[3.8 of \cite{W07a}]\label{collapslem} Let $m<n$. For all $\alpha\in{\operatorname{C}}^n_m(0,\Omega_{m+1})\cap\Omega_{m+2}$ we have \[\vartheta^n_m(\alpha)<\Omega_{m+1}\;\mbox{ and }\;\vartheta^n_m(\alpha)\not\in{\operatorname{C}}^n_m(\alpha,\vartheta^n_m(\alpha)).\] \end{lem} \begin{cor}[3.9 of \cite{W07a}]\label{idsetslem} For $m<n$ we have \[{\operatorname{T}}^n_m\subseteq{\operatorname{T}}^n_{m+1}={\operatorname{C}}^n_m(0,\Omega_{m+1})={\operatorname{C}}^n_m(\Omega_{m+2},\Omega_{m+1}).\] \end{cor} In order to compare $\vartheta^n_m$-terms we need to detect the additive principal parts of ordinals in ${\operatorname{T}}^n_{m+1}$. This is done by the following definition. The symbol $\subseteq_\mathrm{fin}$ indicates a finite subset. \begin{defi}[3.10 of \cite{W07a}] Let $m<n$. By recursion on the definition of ${\operatorname{T}}^n_{m+1}$ we define ${\operatorname{P}^n_m}(\xi)\subseteq_\mathrm{fin}\Omega_{m+1}$\index{${\operatorname{P}^n_m}$} for every $\xi\in{\operatorname{T}}^n_{m+1}$. \begin{itemize} \item ${\operatorname{P}^n_m}(\xi):=\{\xi_1,\ldots,\xi_r\}$, if $\xi=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_r<\Omega_{m+1}$ \item ${\operatorname{P}^n_m}(\xi):={\operatorname{P}^n_m}(\xi_1)\cup{\operatorname{P}^n_m}(\xi_2)$, if $\xi_1,\xi_2\in{\operatorname{T}}^n_{m+1}$ and $\xi=_{\mathrm{NF}}\xi_1+\xi_2>\Omega_{m+1}$ \item ${\operatorname{P}^n_m}(\xi):={\operatorname{P}^n_m}(\eta)$, if $\xi=\vartheta^n_k(\eta)$, $\eta\in{\operatorname{T}}^n_{m+1}\cap\Omega_{k+2}$, $m<k<n$.
\end{itemize} We define $\xi^{\ast^n_m}:=\max(({\operatorname{P}^n_m}(\xi)\setminus\Omega_m)\cup\{0\})$\index{${\ast^n_m}$} for $\xi\in{\operatorname{T}}^n_{m+1}$. \end{defi} According to the next lemma, ${\operatorname{P}^n_m}(\xi)$ is by this definition uniquely determined for every $\xi\in{\operatorname{T}}^n_{m+1}$. The lemma provides a criterion for the comparison of ordinals within ${\operatorname{T}}^n_m$ which is elementary recursive in $\Omega_m$. \begin{lem}[3.11 of \cite{W07a}]\label{propslem} For $n>0$ and $m\in\{0,\ldots,n-1\}$ we have \begin{enumerate} \item[a)] ${\operatorname{P}^n_m}$ is well defined. \item[b)] Let $\alpha<\Omega_{m+2}$ and $\beta<\Omega_{m+1}$. For every $\xi\in{\operatorname{T}}^n_{m+1}$ we have \[\xi\in{\operatorname{C}}^n_m(\alpha,\beta) \Leftrightarrow {\operatorname{P}^n_m}(\xi)\subseteq{\operatorname{C}}^n_m(\alpha,\beta).\] \item[c)] $\alpha^{\ast^n_m}<\vartheta^n_m(\alpha)$ for all $\alpha\in{\operatorname{T}}^n_{m+1}\cap\Omega_{m+2}$. \item[d)] The restriction of $\vartheta^n_m$ to ${\operatorname{T}}^n_{m+1}\cap\Omega_{m+2}$ is $1$-$1$. We have \[\vartheta^n_m(\alpha)<\vartheta^n_m(\gamma) \Leftrightarrow \left(\alpha<\gamma \:\&\: \alpha^{\ast^n_m}<\vartheta^n_m(\gamma)\right) \vee \vartheta^n_m(\alpha)\le\gamma^{\ast^n_m}\] for all $\alpha,\gamma\in{\operatorname{T}}^n_{m+1}\cap\Omega_{m+2}$. \end{enumerate} \end{lem} As mentioned in the introduction, below $\Omega_{m+1}$ the Skolem hulls ${\operatorname{C}}^n_m(\alpha,\beta)$ form initial segments of the ordinals. This property is essential for their role as the basis of an ordinal notation system. \begin{lem}[3.12 of \cite{W07a}]\label{segmentlem} For $m<n$ we have \begin{enumerate} \item[a)] ${\operatorname{C}}^n_m(\alpha,\alpha^{\ast^n_m}+1)={\operatorname{C}}^n_m(\alpha,\vartheta^n_m(\alpha))$ for all $\alpha\in{\operatorname{T}}^n_{m+1}\cap\Omega_{m+2}$. \item[b)] ${\operatorname{C}}^n_m(\alpha,\beta)\cap\Omega_{m+1}\in{\mathrm{Ord}}$ for all $\alpha<\Omega_{m+2}$ and all $\beta<\Omega_{m+1}$.
\end{enumerate} \end{lem} For $m<n$ and $k\in\{m,\ldots,n-1\}$ the ordinal $\theta^n_k$ defined below is the supremum of all ordinals in the segment $[\Omega_k,\Omega_{k+1})$ that have a notation within ${\operatorname{T}}^n_m$. Moreover, according to the theorem below, $\theta^n_m$ is the maximal segment of ordinals having a notation in ${\operatorname{T}}^n_m$. \begin{defi}[3.13 of \cite{W07a}] \label{thetadefi} Let $n>0$. For $k<\omega$ we define \index{$\Theta^n_m$}\index{$\theta^n_m$} \[\Theta^n_{n-1}(k):=(\vartheta^n_{n-1})^{(k)}(0)\;\mbox{ and }\;\theta^n_{n-1}:=\sup_{k<\omega}\,\Theta^n_{n-1}(k)\] where $(\vartheta^n_{n-1})^{(k)}(0)$ denotes the $k$-fold application of $\vartheta^n_{n-1}$ to $0$. Descending from $m=n-2$ down to $m=0$ we define \[\Theta^n_m(k):=\vartheta^n_m(\Theta^n_{m+1}(k))\;\mbox{ and }\;\theta^n_m:=\sup_{k<\omega}\,\Theta^n_m(k).\] For convenience of notation we set $\Theta^n_n(k):=\Theta^n_{n-1}(k)$ for $k<\omega$, $\theta^n_n:=\theta^n_{n-1}$ (since the $\vartheta^n_{n-1}$-function is not a collapsing function), and $\theta^n_{n+1}:=0$. \end{defi} \begin{theo}[3.14 of \cite{W07a}]\label{tnmthm} For $m<n$ we have \[{\operatorname{T}}^n_m\cap\Omega_{m+1}=\theta^n_m.\] \end{theo} \begin{cor}[3.15 of \cite{W07a}]\label{thtsubscor} Let $m<n$, $\alpha<\Omega_{m+2}$, and $\beta<\Omega_{m+1}$. Then all terms of a shape $\vartheta^n_k(\eta)$ in ${\operatorname{C}}^n_m(\alpha,\beta)$ satisfy $\eta<\theta^n_{k+1}$. \end{cor} \subsubsection{\boldmath The notation system ${\operatorname{T}}^\tau$\unboldmath} The following lemma is central for seeing that ${\operatorname{T}}^\tau$ can be obtained as a direct limit of the systems ${\operatorname{T}}^n_0$.
\begin{lem}[3.20 of \cite{W07a}]\label{endextlem} For all $m\le n$ we have $\theta^n_m<\theta^{n+1}_m$, and for $m<n$ \[{\operatorname{C}}^n_m(\alpha,\beta)\cap\theta^n_{m+1}={\operatorname{C}}^{n+1}_m(\alpha,\beta)\cap\theta^n_{m+1} \quad\&\quad \vartheta^n_m(\alpha)=\vartheta^{n+1}_m(\alpha)\] for all $\alpha<\theta^n_{m+1}$ and all $\beta<\Omega_{m+1}$. \end{lem} \begin{cor}[3.21 of \cite{W07a}] Let $m<n$. ${\operatorname{T}}^n_m\subseteq{\operatorname{T}}^{n+1}_m$, and for $k$ such that $m\le k<n$ the functions $\vartheta^n_k$ and $\vartheta^{n+1}_k$\index{$\theta_m$} agree on ${\operatorname{T}}^n_m\cap\Omega_{k+2}$. \end{cor} \begin{defi}[\boldmath $\vartheta_m$ and ${\operatorname{T}}_m$\unboldmath, 3.22 of \cite{W07a}, cf.\ Def.\ \ref{Ttdefi}]\label{Tmdef} For $m<\omega$ we set \[\theta_m:=\sup_{n>m}\theta^n_m,\] and we define a function $\vartheta_m:\theta_{m+1}\to\Omega_{m+1}$\index{$\vartheta_m$} by \[\vartheta_m(\alpha):=\vartheta^n_m(\alpha)\] for $\alpha<\theta_{m+1}$, where $n>m$ is large enough to satisfy $\alpha<\theta^n_{m+1}$. ${\operatorname{T}}_m$\index{${\operatorname{T}}_m$} is defined inductively as follows: \begin{itemize} \item $\Omega_m\subseteq{\operatorname{T}}_m$ \item $\xi,\eta\in{\operatorname{T}}_m\;\Rightarrow\;\xi+\eta\in{\operatorname{T}}_m$ \item $\xi\in{\operatorname{T}}_m\cap\Omega_{k+2}\;\Rightarrow\;\vartheta_k(\xi)\in{\operatorname{T}}_m$ for $k\ge m$. \end{itemize} \end{defi} It is immediate from the previous lemma that the functions $\vartheta_m$ are well defined. The well-definedness of ${\operatorname{T}}_m$, which means that $\sup({\operatorname{T}}_m\cap\Omega_{k+2})\le\theta_{k+1}$ for $m\le k$, follows from the next theorem. This theorem establishes the systems of relativized ordinal notations based on Skolem hull operators that we aim for.
\begin{theo}[3.23 of \cite{W07a}] For $m<\omega$ we have ${\operatorname{T}}_m=\bigcup_{n>m}{\operatorname{T}}^n_m$ and \[{\operatorname{T}}_m\cap\Omega_{m+1}=\theta_m=\sup_{n\ge m}\vartheta_m(\cdots(\vartheta_n(0))\cdots).\] \end{theo} \begin{cor}[3.24 of \cite{W07a}] For every $k\ge m$ we have $\sup({\operatorname{T}}_m\cap\Omega_{k+2})=\theta_{k+1}$. For $m<n$ we have \[{\operatorname{T}}^n_m\cap\Omega_{m+1}=\theta^n_m=\vartheta_m(\cdots(\vartheta_n(0))\cdots).\] \end{cor} Notice that by the end-extension properties shown in the previous lemma and theorem, each ${\operatorname{T}}_m$ again gives rise to a notation system with parameters from $\Omega_m$ that provides a unique term for every ordinal which is an element of some ${\operatorname{T}}^n_m$ with $n>m$ (refining the clause for ordinal addition by a normal form condition, as mentioned in the introduction). The comparison of $\vartheta$-terms in ${\operatorname{T}}_m$ can be carried out within a sufficiently large fragment ${\operatorname{T}}^n_m$, $n>m$. The notation system ${\operatorname{T}}_m$ as well as the criterion for the comparison of its elements are now easily seen to be elementary recursive in $\Omega_m$. From now on we will only need to consider the notation system ${\operatorname{T}}_0$. \begin{conv}[cf.\ 3.25 of \cite{W07a}] In our setting the ordinal notations are relativized to $\tau$. Later on we will indicate this explicitly by writing $\vartheta^\tau$\index{$\vartheta^\tau$} and ${\operatorname{T}}^\tau$\index{${\operatorname{T}}^\tau$} instead of $\vartheta_0$ and ${\operatorname{T}}_0$. As already defined in the introduction, we have \[\tau^\infty:={\operatorname{T}}^\tau\cap\Omega=\theta_0.\] \end{conv} The notion of term height in the following sense is often useful in inductive proofs. It recovers the least fragment in which to find a given notation (modulo the trivial embedding of a system ${\operatorname{T}}^n_m$ into ${\operatorname{T}}_m$).
\begin{defi}[3.26 of \cite{W07a}]\label{htdef} We define a function ${\operatorname{ht}_\tau}:{\operatorname{T}}^\tau\to\omega$\index{${\operatorname{ht}_\tau}$} as follows: \[{\operatorname{ht}_\tau}(\alpha):=\left\{\begin{array}{ll} m+1 & {\small \mbox{ if } m=\max\set{k}{\mbox{there is a subterm of $\alpha$ of shape $\vartheta_k(\eta)$}}}\\ 0 & {\small \mbox{ if such $m$ does not exist.}} \end{array}\right.\] \end{defi} \begin{lem}[3.27 of \cite{W07a}]\label{htlem} For $\alpha<{\operatorname{T}}^\tau\cap\Omega_1$ \[{\operatorname{ht}_\tau}(\alpha)=\min\set{n}{\alpha<\vartheta_0(\cdots(\vartheta_n(0))\cdots)}.\] ${\operatorname{ht}_\tau}$ is weakly increasing as the indices of the occurring $\vartheta$-functions increase. \end{lem} The following notion of subterm is crucial for the comparison of $\vartheta$-terms. Subterms of lower cardinality become parameters when comparing $\vartheta$-terms of higher cardinality, a natural property if we take into account that the $\vartheta$-functions are collapsing functions. \begin{defi}[3.28 of \cite{W07a}]\label{subtermdefi} We define sets of subterms ${\operatorname{Sub}^\tau_m}(\alpha)$\index{${\operatorname{Sub}^\tau_m}$} for $m<\omega$ and notations $\alpha$ in ${\operatorname{T}}^\tau$ by recursion on the buildup of ${\operatorname{T}}^\tau$: \begin{itemize} \item ${\operatorname{Sub}^\tau_m}(\alpha):=\{\alpha\}$ for parameters $\alpha<\tau$ \item ${\operatorname{Sub}^\tau_m}(\alpha):=\{\alpha\}\cup{\operatorname{Sub}^\tau_m}(\xi)\cup{\operatorname{Sub}^\tau_m}(\eta)$ for $\xi,\eta\in{\operatorname{T}}^\tau$ s.t.\ $\alpha=_{\mathrm{NF}}\xi+\eta>\tau$ \item ${\operatorname{Sub}^\tau_m}(\alpha):=\left\{\begin{array}{cl} \{\alpha\}\cup{\operatorname{Sub}^\tau_m}(\xi) & \mbox{ if } \;k\ge m\\ \{\alpha\} & \mbox{ if } \;k<m \end{array}\right.$\\ for $\alpha=\vartheta_k(\xi)$ where $\xi\in{\operatorname{T}}^\tau\cap\Omega_{k+2}$.
\end{itemize} We define the additive principal parts of level $m$ of $\alpha\in{\operatorname{T}}^\tau$ by\index{${\operatorname{P}_m}$}\index{${\ast_m}$} \[{\operatorname{P}_m}(\alpha):={\operatorname{Sub}^\tau_m}(\alpha)\cap{\mathbb P}\cap[\Omega_m,\Omega_{m+1}) \mbox{ and } \alpha^{\ast_m}:=\max\left({\operatorname{P}_m}(\alpha)\cup\{0\}\right).\] The set of parameters $<\tau$ used in the unique term denoting some $\alpha\in{\operatorname{T}}^\tau$ is denoted by\index{$\operatorname{Par}^\tau$} \[\operatorname{Par}^\tau(\alpha):={\operatorname{Sub}^\tau_0}(\alpha)\cap\tau.\] \end{defi} \begin{conv}[3.29 of \cite{W07a}] In order to make the setting of relativization explicit we write ${\operatorname{P}^\tau_m}$\index{${\operatorname{P}^\tau_m}$} for ${\operatorname{P}_m}$ and ${}^{\ast^\tau_m}$ for ${}^{\ast_m}$. We write ${\operatorname{P}^\tau}$\index{${\operatorname{P}^\tau}$} for ${\operatorname{P}^\tau_0}$, and instead of ${}^{\ast_0}$ we will also write ${}^{\ast^\tau}$.\index{${\ast^\tau}$} \end{conv} \begin{rmk}[Remarks following 3.29 of \cite{W07a}]\mbox{ } \begin{enumerate} \item ${\operatorname{Sub}^\tau_m}(\alpha)$ consists of the subterms of $\alpha$, where parameters below $\tau$ as well as subterms of a shape $\vartheta_k(\eta)$ with $k<m$ are considered atomic. \item ${\operatorname{P}_m}(\alpha)$ consists of the $\vartheta_m$-subterms of $\alpha$ which are not in the scope of a $\vartheta_k$-function with $k<m$. \item By Lemma \ref{propslem}, part c), and the end-extension properties shown above it follows that the notion ${}^{\ast_m}$ is consistent with the notion ${}^{\ast^n_m}$, $m<n$, on the common domain. It also follows that \[\alpha^{\ast_m}=\max\left({\operatorname{Sub}^\tau_{m+1}}(\alpha)\cap{\mathbb P}\cap[\Omega_m,\Omega_{m+1})\cup\{0\}\right).\] \item The notion ${\operatorname{P}_m}$ takes more subterms into consideration than ${\operatorname{P}^n_m}$ since ${\operatorname{Sub}^\tau_m}$ also decomposes $\vartheta_m$-subterms.
However, ${\operatorname{P}}_{m+1}$ is consistent with ${\operatorname{P}^n_m}$ on the common domain. \item In order to clarify the definition of $\operatorname{Par}^\tau$ consider the following examples: $\operatorname{Par}^{\varepsilon_0}(\omega+1)=\{\omega+1\}$ and $\operatorname{Par}^{\varepsilon_0}(\varepsilon_0+\omega+1)=\{1,\omega\}$. \end{enumerate} \end{rmk} The following lemma concerning $\vartheta$-terms within ${\operatorname{T}}^\tau$ and their comparison will be used frequently without further mention. \begin{lem}[3.30 of \cite{W07a}]\label{thttcomplem} For $m<\omega$ the function $\vartheta_m\restriction_{{\operatorname{T}}^\tau\cap\Omega_{m+2}}$ is $1$-$1$ and has values in ${\mathbb P}\cap[\Omega_m,\Omega_{m+1})$. Let $\alpha,\gamma\in{\operatorname{T}}^\tau\cap\Omega_{m+2}$. Then $\alpha^{\ast_m}<\vartheta_m(\alpha)$ and \[\vartheta_m(\alpha)<\vartheta_m(\gamma) \;\Leftrightarrow\; \left(\alpha<\gamma\wedge\alpha^{\ast_m}<\vartheta_m(\gamma)\right)\vee\vartheta_m(\alpha)\le\gamma^{\ast_m}.\] \end{lem} \begin{rmk}\label{extrarmk} Note that in particular we have \begin{enumerate} \item ${\operatorname{P}^\tau}(\alpha)$ is the set of all subterms of $\alpha$ of a form $\vartheta^\tau(\xi)$ for some $\xi$. \item $\alpha^{\ast^\tau}=\max({\operatorname{P}^\tau}(\alpha)\cup\{0\})$. \item $\alpha=\alpha^{\ast^\tau}$ whenever $\alpha=\vartheta^\tau(\xi)$ for some $\xi$. \item $\operatorname{Par}^\tau(\alpha)$ is the set of parameters $<\tau$ used in the unique term denoting $\alpha$. \end{enumerate} \end{rmk} Note that the ordinal defined by a $\vartheta$-term, say $\vartheta(\xi)$, is characterized as the least $\theta>\xi^\star$ that is closed under parameters and basic functions and satisfies $\vartheta(\zeta)<\theta$ for all $\zeta<\xi$ with $\zeta^\star<\theta$, cf.\ Lemma 4.10 of \cite{W06}.
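The comparison criterion of Lemma 3.30 is effective, and it may be instructive to see it executed. The following Python sketch (ours, not part of the cited papers) implements it for the smallest fragment only: $\tau=1$, a single collapsing function $\vartheta=\vartheta_0$, and one higher level $\Omega=\Omega_1$; the identifications $\vartheta(0)=1$ and $\vartheta(\Omega)=\varepsilon_0$ follow the lemmas above, and all function names are ours.

```python
# Toy comparison of theta-terms per the criterion of Lemma 3.30,
# restricted to tau = 1 with one collapsing function theta and one
# uncountable level Omega.  Term representation:
#   ('0',)          the ordinal 0
#   ('W',)          Omega
#   ('t', a)        theta(a)
#   ('+', (a, ...)) additive normal form, components non-increasing

ZERO, W = ('0',), ('W',)

def th(a):
    """theta(a)."""
    return ('t', a)

def components(a):
    """Additive components of a term (empty for 0)."""
    if a == ZERO:
        return []
    return list(a[1]) if a[0] == '+' else [a]

def star(a):
    """a^*: the maximal theta-subterm of a, or 0 if there is none;
    a theta-term is its own star (cf. the remark on P^tau)."""
    if a[0] == 't':
        return a
    if a[0] != '+':          # 0 and Omega contain no theta-subterm
        return ZERO
    best = ZERO
    for p in a[1]:
        s = star(p)
        if lt(best, s):
            best = s
    return best

def lt(a, b):
    """Strict comparison a < b of terms."""
    if a == b:
        return False
    if a == ZERO or b == ZERO:
        return a == ZERO
    ca, cb = components(a), components(b)
    if len(ca) > 1 or len(cb) > 1:   # compare normal forms lexicographically
        for x, y in zip(ca, cb):
            if x != y:
                return lt(x, y)
        return len(ca) < len(cb)
    if a == W or b == W:             # every theta-value lies below Omega
        return b == W
    x, y = a[1], b[1]                # a = theta(x), b = theta(y); Lemma 3.30:
    # theta(x) < theta(y)  iff  (x < y and x^* < theta(y))  or  theta(x) <= y^*
    sy = star(y)
    return (lt(x, y) and lt(star(x), b)) or lt(a, sy) or a == sy

def plus(a, b):
    """Additive normal form of a + b (components of a below the leading
    component of b are absorbed)."""
    cb = components(b)
    if not cb:
        return a
    keep = [p for p in components(a) if not lt(p, cb[0])]
    total = keep + cb
    return total[0] if len(total) == 1 else ('+', tuple(total))
```

For instance, `lt(th(th(ZERO)), th(W))` holds, matching $\omega<\varepsilon_0$ under the identifications above, and `lt(th(W), th(th(W)))` holds via the second disjunct of the criterion, since $\vartheta(\Omega)$ is the star of the argument $\vartheta(\Omega)$.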
\subsection{Localization}\label{localizationsec} The notion of \emph{localization}, introduced in Section 4 of \cite{W07a} and refined in \cite{CWa}, is to be understood in terms of closure properties or fixed point levels in the sense discussed in the introduction. As an example, given an ordinal $\alpha\not\in{\mathbb E}$ we ask for the greatest epsilon number $\beta<\alpha$, if such exists. The ordinal $\beta$ is of higher fixed point level and has stronger closure properties than $\alpha$, as $\beta$ is closed under $\omega$-exponentiation. The refinement also considers degrees of limit point thinning, i.e.\ (transfinitely) iterated applications of the operation $\mathrm{Lim}(\cdot)$. We now formally fix a notation already used in the introduction and define $\alpha^+$ for given $\alpha=\vartheta^\tau(\Delta+\eta)$, which is the least ordinal greater than $\alpha$ that is of the same fixed point level as $\alpha$, as becomes clear shortly. All ordinals mentioned are assumed to be represented in ${\operatorname{T}}^\tau$. \begin{conv}[cf.\ 4.1 of \cite{W07a}]\label{thtargconv} $\alpha=\vartheta^\tau(\Delta+\eta)$ automatically means that $\Omega_1\mid\Delta$ and $\eta<\Omega_1$. In a situation where some $\alpha=\vartheta^\tau(\Delta+\eta)$ is fixed, we will write $\alpha^+$ for $\vartheta^\tau(\Delta+\eta+1)$. \end{conv} We will apply this notation frequently and use Greek capital letters to indicate that part of the argument which is a (possibly zero) multiple of $\Omega_1$. Note that for $\Delta$, being a proper multiple of $\Omega$ means being a sum of $\vartheta_1$-terms, while $\eta$ can be additively composed of $\vartheta^\tau$-terms and parameters below $\tau$. Recall Lemma \ref{tenumlem} from the introduction, which characterizes the additive principal numbers in the interval $(\tau,\tau^\infty)$ that are not epsilon numbers. Epsilon numbers in the interval $(\tau,\tau^\infty)$ are characterized as follows.
\begin{lem}[4.3 of \cite{W07a}] For $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$ we have $\alpha\in{\mathbb E}^{>\tau}$ if and only if $\Delta>0$. \end{lem} The (informal) notion of fixed point level is justified by the following lemma. We sometimes also call $\Delta$ the fixed point level of an ordinal $\vartheta^\tau(\Delta+\eta)$. \begin{lem}[4.4 of \cite{W07a}]\label{fixlevcarlem} For $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$ we have $\eta=\sup_{\sigma<\eta}\vartheta^\tau(\Delta+\sigma)$ if and only if $\eta=\vartheta^\tau(\Gamma+\rho)$ with $\Gamma>\Delta$ and $\eta>\Delta^{\ast^\tau}$. \end{lem} An immediate consequence of Lemma \ref{thttcomplem} is that for any $\alpha$ of the form $\vartheta^\tau(\Delta+\eta)$ the interval $(\alpha,\alpha^+)$ does not contain ordinals of fixed point level greater than or equal to $\Delta$. \begin{lem}[4.5 of \cite{W07a}] For $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$ and $\vartheta^\tau(\Gamma+\rho)\in(\alpha,\alpha^+)$ we have $\Gamma<\Delta$. \end{lem} Recalling part 3 of Remark \ref{extrarmk}, we are now prepared to make sense of the notion of localization. \begin{defi}[4.6 of \cite{W07a}]\label{localizationdef} Let $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$. We define a finite sequence of ordinals as follows: Set $\alpha_0:=\tau$. Suppose $\alpha_n$ is already defined and $\alpha_n<\alpha$. Let $\alpha_{n+1}:=\vartheta^\tau(\xi)\in{\operatorname{P}^\tau}(\alpha)\setminus(\alpha_n+1)$ where $\xi$ is maximal.
This yields a finite sequence $\tau=\alpha_0<\ldots<\alpha_n=\alpha$ for some $n<\omega$ which we call the \boldmath{\bf $\tau$-localization}\unboldmath\ of $\alpha$.\index{localization!$\tau$-localization of $\alpha$} \end{defi} For example, let $\alpha=\vartheta_0(\vartheta_1(\vartheta_2(0)))$ be the Bachmann-Howard ordinal, let $\beta$ be the least $\Gamma$-number greater than $\alpha$, i.e.\ $\beta=\Gamma_{\alpha+1}=\vartheta^\alpha(\Omega^2)=\vartheta_0(\vartheta_1(\vartheta_1(0))+\alpha)$, and let $\gamma=\varepsilon_{\beta+1}=\vartheta^\beta(\vartheta_1(0))=\vartheta_0(\vartheta_1(0)+\beta)$ be the least epsilon number greater than $\beta$. Then the $1$-localization of $\gamma$ is $(1,\alpha,\beta,\gamma)$, the $\alpha$-localization of $\gamma$ is $(\alpha,\beta,\gamma)$, whereas the $\beta$-localization of $\gamma$ is just $(\beta,\gamma)$ and the $\gamma$-localization of $\gamma$ is simply $(\gamma)$, the trivial localization. Note that $\alpha_1,\ldots,\alpha_n$, in case of $n\ge 2$, forms by definition a sequence of $\vartheta^\tau$-terms with strictly decreasing arguments. The following lemma summarizes properties of localization starting from this observation. \begin{lem}[4.7, 4.8, and 4.9 of \cite{W07a}]\label{localipic} Let $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$, $\alpha>\tau$, and let $(\tau=\alpha_0,\ldots,\alpha_n=\alpha)$ be the $\tau$-localization of $\alpha$, where $\alpha_i=\vartheta^\tau(\Delta_i+\eta_i)$ for $i=1,\ldots,n$. Then \begin{itemize} \item[a)] For $i<n$ and any $\beta=\vartheta^\tau(\Gamma+\rho)\in(\alpha_i,\alpha_{i+1})$ we have $\Gamma+\rho<\Delta_{i+1}+\eta_{i+1}$. \item[b)] $(\Delta_i)_{1\le i\le n}$ forms a strictly descending sequence of multiples of $\Omega_1$. \item[c)] For $i<n$ the sequence $(\alpha_0,\ldots,\alpha_i)$ is the $\tau$-localization of $\alpha_i$.
\end{itemize} The guiding picture for localizations is \[\tau<\alpha_1<\ldots<\alpha_n=\alpha<\alpha^+=\alpha_n^+<\ldots<\alpha_1^+.\] \end{lem} The notion of \emph{fine localization} is obtained by iterated application of the operator $\alpha\mapsto\bar{\alpha}$ from \cite{W07a}, defined there in the proof of Lemma 8.2 and extended in Section 5 of \cite{CWa}, as follows. The predecessor of an ordinal $\alpha\in(\tau,\tau^\infty)$ in its $\tau$-localization is the greatest ordinal below $\alpha$ that has a (strictly) greater fixed point level than $\alpha$, if such an ordinal exists, and $\tau=\alpha_0$ otherwise. In contrast, the predecessor $\bar{\alpha}$ of $\alpha$ in its $\tau$-fine-localization is the greatest ordinal below $\alpha$ that is of the same fixed point level and of a degree of limit point thinning greater than or equal to that of $\alpha$, if such an ordinal exists, and the predecessor of $\alpha$ in its $\tau$-localization otherwise. According to Corollary 6.3 of \cite{CWa} a pattern notation (of least cardinality) for an ordinal $\alpha\in\mathrm{Core}({\cal R}_1^+)$, i.e.\ $\alpha<1^\infty$, is obtained by closure of the set $\{0,\alpha\}$ under additive decomposition, the function ${\operatorname{lh}}$ for ${\cal R}_1^+$, and the operator $\bar{\cdot}$. Closure under additive decomposition means that for any ordinal $\beta=_{\mathrm{\scriptscriptstyle{ANF}}}\beta_1+\ldots+\beta_n$ in the set, the ordinals $\beta_i$ and $\beta_1+\ldots+\beta_i$ for $i=1,\ldots,n$ are also in the set. \begin{defi}[5.1 of \cite{CWa}]\label{barop} Let $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$, $\alpha>\tau$, and let $\tau=\alpha_0,\ldots,\alpha_n=\alpha$ be its $\tau$-localization. $\bar{\alpha}\in[\tau,\alpha)$ is defined as follows. \begin{itemize} \item Suppose $\eta$ is of the form ${\eta^\prime}+\eta_0$ where $\eta_0\in{\mathbb P}$ and either ${\eta^\prime}=0$ or $\eta=_{\mathrm{NF}}{\eta^\prime}+\eta_0$.
Then if either $\eta_0=1$ or ${\eta^\prime}<\sup_{\sigma<{\eta^\prime}}\vartheta^\tau(\Delta+\sigma)$ we set $\bar{\alpha}:=\vartheta^\tau(\Delta+{\eta^\prime})$. \item In all remaining cases we let $\bar{\alpha}:=\alpha_{n-1}$. \end{itemize} \end{defi} In case of $\alpha\in{\mathbb L}$, i.e.\ for $\alpha$ a limit of additive principal numbers, the above definition is consistent with the definition given in \cite{W07a}. For $\alpha\in{\mathbb P}\setminus{\mathbb L}$, that is, for $\alpha$ of a shape $\alpha=\omega^{\alpha^\prime+1}$, the above definition yields $\bar{\alpha}=\omega^{\alpha^\prime}$. For further clarification, consider the following examples, in which we assume that $\vartheta_0=\vartheta^\tau$ for $\tau=1$: \begin{enumerate} \item $\overline{\varepsilon_0\cdot\omega}=\varepsilon_0$, where $\varepsilon_0\cdot\omega=\vartheta_0(\vartheta_0(\vartheta_1(0)))$, \item $\overline{\varepsilon_{\omega+\omega}}=\varepsilon_\omega$, where $\varepsilon_{\omega+\omega}=\vartheta_0(\vartheta_1(0)+\omega+\omega)$ and $\omega=\vartheta_0(\vartheta_0(0))$, \item $\overline{\varepsilon_{\omega^2}}=1$, where $\varepsilon_{\omega^2}=\vartheta_0(\vartheta_1(0)+\omega^2)$ and $\omega^2=\vartheta_0(\vartheta_0(0)+\vartheta_0(0))$, \item $\overline{\varepsilon_{\Gamma_0+1}}=\Gamma_0=\overline{\varepsilon_{\Gamma_0+\omega}}$, where $\varepsilon_{\Gamma_0+1[+\omega]}=\vartheta_0(\vartheta_1(0)+\vartheta_0(\vartheta_1(\vartheta_1(0)))[+\omega])$, \item $\overline{\varepsilon_{\Gamma_0+2}}=\varepsilon_{\Gamma_0+1}$, where $\varepsilon_{\Gamma_0+2}=\vartheta_0(\vartheta_1(0)+\vartheta_0(\vartheta_1(\vartheta_1(0)))+\vartheta_0(0))$, \item $\overline{\Gamma_{\omega^\omega+\omega^2}}=\Gamma_{\omega^\omega}$, where $\Gamma_{\omega^\omega[+\omega^2]}=\vartheta_0(\vartheta_1(\vartheta_1(0))+\vartheta_0(\vartheta_0(\vartheta_0(0)))[+\omega^2])$. 
\end{enumerate} See also Lemma \ref{simplebarlem} for a more general statement about ordinals $\alpha\in{\mathbb P}\setminus{\mathbb M}$. As pointed out in \cite{CWa}, the iterated application of $\bar{\cdot}$ to $\alpha$ leads to $\alpha_{n-1}$ after finitely many steps, from there to $\alpha_{n-2}$, and finally to $\tau$. This follows from the lemmas concerning $\tau$-localization cited below. \begin{lem}[5.2 of \cite{CWa}]\label{loclexordlem} Let $\alpha,\beta\in(\tau,\tau^\infty)\cap{\mathbb P}$ with $\alpha<\beta$. For their $\tau$-localizations $\tau=\alpha_0,\ldots,\alpha_n=\alpha$ and $\tau=\beta_0,\ldots,\beta_m=\beta$ we have \[\vec{\alpha}:=(\alpha_1,\ldots,\alpha_n)<_\mathrm{\scriptscriptstyle{lex}}(\beta_1,\ldots,\beta_m)=:\vec{\beta}.\] \end{lem} \begin{lem}[5.3 of \cite{CWa}]\label{subtermloclem} Let $\alpha=\vartheta^\tau(\xi)\in{\operatorname{T}}^\tau$ with $\tau$-localization $\vec{\alpha}:=(\alpha_0,\ldots,\alpha_n)$ and let $\beta\in{\operatorname{P}^\tau}(\xi)$. If there is $\alpha_i\in{\operatorname{P}^\tau}(\beta)$ where $0\le i\le n$ then $(\alpha_0,\ldots,\alpha_i)$ is an initial sequence of the $\tau$-localization of any $\gamma\in[\beta,\alpha]$. \end{lem} \begin{lem}[5.4 of \cite{CWa}]\label{albarloclem} Let $\alpha\in(\tau,\tau^\infty)\cap{\mathbb P}$ with $\tau$-localization $(\alpha_0,\ldots,\alpha_n)$. Then the $\tau$-locali\-zation of $\bar{\alpha}$ is $(\alpha_0,\ldots,\alpha_{n-1}=\bar{\alpha})$ if $\bar{\alpha}\in{\operatorname{P}^\tau}(\alpha)$, or $(\alpha_0,\ldots,\alpha_{n-1},\bar{\alpha})$ otherwise. In the latter case, for $\alpha=\vartheta^\tau(\Delta+\eta)$ and $\bar{\alpha}=\vartheta^\tau(\Gamma+\rho)$ we have \[(\Delta+\eta)^{\ast^\tau}=(\Gamma+\rho)^{\ast^\tau}.\] \end{lem} \begin{defi}[5.5 of \cite{CWa}] The \boldmath{\bf $\tau$-fine-localization}\unboldmath\ of $\alpha\in(\tau,\tau^\infty)\cap{\mathbb P}$ is defined to be either $(\tau,\alpha)$ if $\bar{\alpha}=\tau$, or the $\tau$-fine-localization of $\bar{\alpha}$ concatenated with $\alpha$.
\end{defi} \begin{lem}[5.6 of \cite{CWa}]\label{finelocinitialseqlem} The $\tau$-localization of $\alpha$ is a subsequence of the $\tau$-fine-localization of $\alpha$. \end{lem} \begin{lem}[5.9 of \cite{CWa}]\label{klexfineloclem} Let $\alpha,\beta\in[\tau,\tau^\infty)\cap{\mathbb P}$ with $\tau$-fine-localizations $\vec{\alpha}=(\alpha_0,\ldots,\alpha_n)$ and $\vec{\beta}=(\beta_0,\ldots,\beta_m)$ be given. Then $\alpha<\beta$ if and only if $\vec{\alpha}<_\mathrm{\scriptscriptstyle{lex}}\vec{\beta}$. \end{lem} In order to obtain a formal notion of limit point thinning in the context of ${\operatorname{T}}^\tau$, we first characterize the function ${\mathrm{logend}}$ introduced earlier in terms of ${\operatorname{T}}^\tau$. The indicated correction removes an unintended flaw in the original formulation. \begin{lem}[corrected 4.10 of \cite{W07a}]\label{logendcharlem} For $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$ we have \[{\mathrm{logend}}(\alpha)=\left\{\begin{array}{ll} \alpha & {\small \mbox{ if }\Delta>0}\\ \eta+1 & {\small \mbox{ if } \Delta=0 \mbox{ and } \eta=\delta+n \mbox{ such that } \delta\in{\mathbb E}^{>\tau}, n<\omega}\\ (-1+\tau)+\eta & {\small \mbox{ otherwise.}} \end{array}\right.\] \end{lem} We now define an operator $\zeta$ that, given a setting of relativization $\tau$ and an ordinal $\alpha$ in the image of the enumeration function $\eta\mapsto\vartheta^\tau(\Delta+\eta)$, $\eta<\tau^\infty$, outputs the \emph{degree of limit point thinning of $\alpha$}, written as $\zeta^\tau_\alpha$. \begin{defi}[4.11 of \cite{W07a}]\label{zetaldefi} For $\alpha=\vartheta^\tau(\Delta+\eta)\in{\operatorname{T}}^\tau$ we define \[\zeta^\tau_\alpha:=\left\{\begin{array}{ll} {\mathrm{logend}}(\eta) & {\small \mbox{ if } \eta<\sup_{\sigma<\eta}\vartheta^\tau(\Delta+\sigma)}\\ 0 & {\small \mbox{ otherwise.}} \end{array}\right.\] \end{defi} The connection between the operators $\bar{\cdot}$ and $\zeta$ cannot be stated yet.
We first need to introduce the notions of \emph{base transformation}, \emph{cofinality operator}, and \emph{translation}. \subsection{Base transformation, cofinality operators, and translation}\label{arithmeticsubsec} Here we still cite results from \cite{W07a,CWa}, but we summarize them as in Subsection 2.3 of \cite{W07c}. \emph{Base transformation}, as introduced and treated in detail in Section 5 of \cite{W07a}, is the crucial notion which allows us to express essential uniformity properties, both in the development of a strong ordinal arithmetic and in the formulation of characteristic properties of elementary substructure on the ordinals. It enables a precise comparison of ordinals modulo their relativizations. \begin{defi}[\boldmath ${\operatorname{T}}^\tau_\sigma$, $\pi_{\sigma,\tau}$\unboldmath] Let $\tau\in{\mathbb E}$ and $\sigma\in\{1\}\cup({\mathbb E}\cap\tau)$. \begin{itemize} \item ${\operatorname{T}}^\tau_\sigma:=\set{\alpha\in{\operatorname{T}}^\tau}{\operatorname{Par}^\tau(\alpha)\subseteq\sigma}$. \item The base transformation $\pi_{\sigma,\tau}:{\operatorname{T}}^\tau_\sigma\to{\operatorname{T}}^\sigma$ maps $\alpha\in{\operatorname{T}}^\tau_\sigma$ to the ordinal one obtains from the term representation of $\alpha$ in ${\operatorname{T}}^\tau$ by substituting every occurring function $\vartheta^\tau$ by $\vartheta^\sigma$, i.e.\ $\pi_{\sigma,\tau}(\alpha):=\alpha[\vartheta^\tau/\vartheta^\sigma]$. \end{itemize} \end{defi} \begin{lem}[5.3, 5.5, and 5.6 of \cite{W07a}] $\pi_{\sigma,\tau}$ is a $(<,+)$-isomorphism, and base transformation commutes with the process of localization as well as with the degree of limit point thinning (i.e.\ $\pi_{\sigma,\tau}(\zeta^\tau_\alpha)=\zeta^\sigma_{\pi_{\sigma,\tau}(\alpha)}$). For $\alpha\in{\operatorname{T}}^\tau_\sigma$ we have $\operatorname{Par}^\tau(\alpha)=\operatorname{Par}^\sigma(\pi_{\sigma,\tau}(\alpha))$.
\end{lem} The \emph{cofinality operators $\iota_{\tau,\al}$ and $\lambda^\tau$}, introduced in Section 7 of \cite{W07a}, allow for the exact classification of the \emph{cofinality properties} of additive principal numbers in ${\operatorname{T}}t\cap\Omega_1$, which are also crucial in the analysis of $\le_1$ and $\le_2$. It will become clear in Lemma \ref{cofinlem} what is meant by cofinality properties of ordinals. \begin{defi}[\boldmath${\operatorname{T}}trestral$, $\iota_{\tau,\al}$, $\lambdatal$\unboldmath]\label{lataldefi} Let $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$. \begin{itemize} \item The restriction of ${\operatorname{T}}t$ to $\alpha$ is defined by ${\operatorname{T}}trestral:=\set{\beta\in{\operatorname{T}}t}{\beta^{\ast^\tau}<\alpha}$. \item If $\Delta>0$ we define $\iota_{\tau,\al}:{\operatorname{T}}trestral\to{\operatorname{T}}al$ by recursion on the definition of ${\operatorname{T}}t$, transforming every ${\operatorname{T}}t$-term for an ordinal $\xi<\alpha$ into the parameter $\xi$, defining $\iota_{\tau,\al}(\xi)$ for any non-principal $\xi>\alpha$ homomorphically, and substituting every $\vartheta_{k+1}$-function by $\vartheta_k$ (where $\vartheta_0=\varthetaal$). \footnote{Note that ${\operatorname{T}}trestral$ does not contain any $\varthetat$-terms which are greater than or equal to $\alpha$.} \item If $\alpha>\tau$ we define \[\lambdatal:=\left\{\begin{array}{ll} \iota_{\tau,\al}(\Delta)+{\ze^\tau}atal & \mbox{{\small if $\alpha\in{\mathbb E}$}} \\ {\ze^\tau}atal & \mbox{{\small otherwise.}} \end{array}\right.\] \end{itemize} \end{defi} As can be seen from the above definition, the operator $\lambda$ measures not only the fixed point level, but also the degree of limit point thinning.
Easy examples are $\lambda_{\varepsilon_0}=\varepsilon_0$ and $\lambda_{\varepsilon_1}=\varepsilon_1$, while $\lambda_{\varepsilon_\omega}=\varepsilon_\omega+1$ and $\lambda_{\varepsilon_{\omega^2}}=\varepsilon_{\omega^2}+2$, the added $1$ and $2$ being the respective degrees of limit point thinning ${\mathrm{logend}}(\omega)=1$ and ${\mathrm{logend}}(\omega^2)=2$; here we have omitted the superscript $\tau=1$ and used the enumeration function $\alpha\mapsto\varepsilon_\alpha$ of the epsilon numbers. For completeness we summarize basic properties of the $\iota$-operator that also justify proofs by induction on ${\operatorname{ht}_\tau}$ and ${\operatorname{ht}_\al}$, respectively, as given in Definition \ref{htdef}, and provide an important upper bound. \begin{lem}[7.2 and 7.3 of \cite{W07a}]\label{iotalem} Let $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$ where $\Delta>0$. \begin{enumerate} \item[a)] $\iota_{\tau,\al}$ is a $(<,+)$-isomorphism of ${\operatorname{T}}trestral$ and ${\operatorname{T}}al$, and we have $\iota_{\tau,\al}(\Delta)<\alpha^+$. \item[b)] Suppose $\xi\in{\operatorname{T}}trestral$, $\xi>\alpha$. Then ${\operatorname{ht}_\al}(\iota_{\tau,\al}(\xi))<{\operatorname{ht}_\al}(\xi)$. \item[c)] $\xi\in{\operatorname{T}}al\;\Rightarrow\;{\operatorname{ht}_\tau}(\iota_{\tau,\al}^{-1}(\xi))\le\max\singleton{{\operatorname{ht}_\tau}(\alpha), {\operatorname{ht}_\al}(\xi)+1}$. \end{enumerate} \end{lem} The interplay between base transformation and the operator $\iota_{\tau,\al}$ is described as follows. \begin{lem}[7.8 of \cite{W07a}]\label{iopiinterplaylem} Let $\gamma,\alpha\in{\operatorname{T}}t\cap\Omega_1$ be epsilon numbers such that $\tau<\gamma<\alpha$.
We have \[\iota_{\tau,\ga}=\pi_{\ga,\al}\circ\iota_{\tau,\al}\restriction_{\operatorname{T}}trestrga,\] i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}trestrga}\arrow[2]{e,t}{\iota_{\tau,\ga}}\arrow{se,b}{\iota_{\tau,\al}}\node[2]{{\operatorname{T}}ga}\\ \node{}\node{{\operatorname{T}}alga}\arrow{ne,b}{\pi_{\ga,\al}} \end{diagram}\] In particular, the image of $\iota_{\tau,\al}\restriction_{\operatorname{T}}trestrga$ is ${\operatorname{T}}alga$. \end{lem} The notion of term translation was introduced in Section 6 of \cite{W07a}. For a given epsilon number $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$ (where $\Delta>0$) it is possible to define (relativized elementary recursive) \emph{partial translation procedures} ${}^{\operatorname{t}^\tau_\al}:{\operatorname{T}}t\to{\operatorname{T}}al$ and ${}^{\operatorname{t}^\al_\tau}:{\operatorname{T}}al\to{\operatorname{T}}t$ between the systems ${\operatorname{T}}t$ and ${\operatorname{T}}al$, see Definition 6.2 of \cite{W07a}. According to Lemma 6.3 of \cite{W07a}, ${}^{\operatorname{t}^\tau_\al}$ and ${}^{\operatorname{t}^\al_\tau}$ are correct on ${\operatorname{T}}trestralplus$ and ${\operatorname{T}}alrestralplus$, respectively. These two restrictions contain the same ordinals, and $\alphaplus=\varthetat(\Delta+\eta+1)=\varthetaal(\Delta)$. The details are quite technical and can be taken for granted here, as we only use translation implicitly and in a straightforward manner. The translation procedure ${}^{\operatorname{t}^\al_\tau}$ allows us to consider the ordinal $\lambdatal$ as a ${\operatorname{T}}t$-term since $\iota_{\tau,\al}(\Delta)<\alpha^+$. We will omit translation superscripts since their application can easily be understood from the context. \begin{lem}[7.6, 7.7, and 7.10 of \cite{W07a}]\label{latallem} Suppose $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$, $\alpha>\tau$.
\begin{enumerate} \item[a)] We have $\lambdatal=0$ if and only if $\alpha\not\in{\mathbb L}$. \item[b)] $\lambdatal<\alpha^+$. \item[c)] ${\operatorname{ht}_\al}(\lambdatal)<{\operatorname{ht}_\tau}(\alpha)$ in case of $\Delta>0$. \item[d)] Suppose that $\Delta>0$, i.e.\ $\alpha\in{\mathbb E}$, and $\beta=\varthetat(\Gamma+\rho)\in(\alpha,\alphaplus)$. Then we have \[\lambdatbe=\lambdaalbetal=\lambdaalbe.\] \item[e)] If $\tau\in{\mathbb E}$ and $\sigma\in{\mathbb E}\cap\tau$ such that $\alpha\in{\operatorname{T}}ts$ then \[\lambdatal\in{\operatorname{T}}ts \quad\mbox{ and }\quad \pi_{\si,\tau}(\lambdatal)=\lambdaspistal,\] i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}ts\cap(\tau,{\tau_i}nf)\cap{\mathbb P}}\arrow[2]{e,t}{\lambdatau}\arrow{s,l}{\pi_{\si,\tau}}\node[2]{{\operatorname{T}}ts}\arrow{s,r}{\pi_{\si,\tau}}\\ \node{{\operatorname{T}}s\cap(\sigma,\sigmainf)\cap{\mathbb P}}\arrow[2]{e,b}{\lambdasi}\node[2]{{\operatorname{T}}s} \end{diagram}\] \end{enumerate} \end{lem} The characterization of the (intuitive notion of) cofinality properties of an ordinal can now be stated precisely. The next two lemmas establish that the cofinality properties of an additive principal number $\alpha\in{\operatorname{T}}t$ are exactly classified by $\lambdatal$. First of all, note that immediately by definition $\lambdatal=0$ if and only if $\alpha$ is not a limit of additive principal numbers. \begin{lem}[8.1 of \cite{W07a}]\label{cofinlem} Let $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$, $\alpha>\tau$, be given.
Then for every $\lambda<\lambdatal$ we have \[\alpha=\left\{\begin{array}{ll} \sup\set{\gamma\in(\tau,\alpha)\cap{\mathbb E}}{\pi_{\ga,\al}^{-1}(\lambdatga)\ge\lambda}& \mbox{{\small if $\alpha\in\mathrm{Lim}({\mathbb E})$}} \\[2ex] \sup\set{\gamma\in(\tau,\alpha)\cap{\mathbb P}}{\lambdatga\ge\lambda} & \mbox{{\small otherwise.}} \end{array}\right.\] \end{lem} Fine-localization commutes with base transformation and, in the sense of the following lemma, with translation. Part 1 of the following lemma, which slightly generalizes Lemma 8.2 of \cite{W07a}, gives a characterization of $\alphabar$ in terms of cofinality properties of $\alpha$. The lemma also justifies that the operator $\bar{\cdot}$ does not carry a superscript $\tau$: for $\beta\in(\alpha,\alphainf)\subseteq(\tau,{\tau_i}nf)$, $\alpha\in{\operatorname{T}}t\cap{\mathbb E}^{<\Omega}$, the ordinal $\betabar$ is independent of the term representation of $\beta$, whether given in ${\operatorname{T}}t$ or in ${\operatorname{T}}al$. \begin{lem}[5.7 of \cite{CWa}] \label{finelocbasicpropslem} Let $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}t$, $\alpha>\tau$.
\begin{enumerate} \item We have \[\alphabar=\left\{\begin{array}{ll} \max\left(\set{\gamma\in(\tau,\alpha)\cap{\mathbb E}}{\pi_{\ga,\al}^{-1}(\lambdatga)\ge\lambdatal}\cup\singleton{\tau}\right) & \mbox{ if } \alpha\in{\mathbb E}\\ \max\left(\set{\gamma\in(\tau,\alpha)\cap{\mathbb P}}{\lambdatga\ge\lambdatal}\cup\singleton{\tau}\right) & \mbox{ otherwise.} \end{array}\right.\] \item The operator $\bar{\cdot}$ commutes with base transformation, i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}ts\cap(\tau,{\tau_i}nf)\cap{\mathbb P}}\arrow[3]{e,t} {\bar{\cdot} \scriptscriptstyle{\:in\:}{\operatorname{T}}t}\arrow{s,l}{\pi_{\si,\tau}}\node[3]{{\operatorname{T}}ts\cap[\tau,{\tau_i}nf)\cap{\mathbb P}}\arrow{s,r}{\pi_{\si,\tau}}\\ \node{{\operatorname{T}}s\cap(\sigma,\sigmainf)\cap{\mathbb P}}\arrow[3]{e,b} {\bar{\cdot} \scriptscriptstyle{\:in\:}{\operatorname{T}}s}\node[3]{{\operatorname{T}}s\cap[\sigma,\sigmainf)\cap{\mathbb P}} \end{diagram}\] That is to say, for $\alpha\in{\operatorname{T}}ts\cap(\tau,{\tau_i}nf)\cap{\mathbb P}$ we have \[\alphabar\in{\operatorname{T}}ts\quad\mbox{and}\quad\overline{\pi_{\si,\tau}(\alpha)}=\pi_{\si,\tau}(\alphabar).\] \item For $\alpha\in{\mathbb E}$ the operator $\bar{\cdot}$ commutes with the translation mappings ${\operatorname{t}^\tau_\al}$ and ${\operatorname{t}^\al_\tau}$: \[\begin{diagram} \node{{\operatorname{T}}t\cap(\alpha,\alpha^+)\cap{\mathbb P}}\arrow[2]{e,t}{{\operatorname{t}^\tau_\al}}\arrow{s,r}{\:\bar{\cdot} \scriptscriptstyle{\:in\:}{\operatorname{T}}t} \node[2]{{\operatorname{T}}al\cap(\alpha,\alpha^+)\cap{\mathbb P}}\arrow[2]{e,t}{{\operatorname{t}^\al_\tau}}\arrow{s,r}{\:\bar{\cdot} \scriptscriptstyle{\:in\:}{\operatorname{T}}al} \node[2]{{\operatorname{T}}t\cap(\alpha,\alpha^+)\cap{\mathbb P}}\arrow{s,r}{\:\bar{\cdot} \scriptscriptstyle{\:in\:}{\operatorname{T}}t}\\ \node{{\operatorname{T}}t\cap[\alpha,\alpha^+)\cap{\mathbb P}}\arrow[2]{e,b}{{\operatorname{t}^\tau_\al}} \node[2]{{\operatorname{T}}al\cap[\alpha,\alpha^+)\cap{\mathbb P}}\arrow[2]{e,b}{{\operatorname{t}^\al_\tau}} \node[2]{{\operatorname{T}}t\cap[\alpha,\alpha^+)\cap{\mathbb P}} \end{diagram}\] \item If $\alpha\in{\mathbb E}$ with $\tau$-fine-localization $(\alpha_0,\ldots,\alpha_n)$ and $\beta\in(\alpha,\alpha^+)$ with $\alpha$-fine-localization $(\beta_0,\ldots,\beta_m)$, then \[\tau=\alpha_0,\ldots,\alpha_n=\alpha=\beta_0^{\operatorname{t}^\al_\tau},\ldots,\beta_m^{\operatorname{t}^\al_\tau}=\beta^{\operatorname{t}^\al_\tau}\] is the $\tau$-fine-localization of $\beta$. \end{enumerate} \end{lem} For more details regarding term representations with respect to (fine) localization, closure and cofinality properties, and invariance with respect to base transformation and translation, see \cite{W07a,CWa}. In particular, Lemma 5.8 of both \cite{W07a} and \cite{CWa} is informative, and Lemma 5.10 of \cite{CWa} characterizes $\omega$-exponentiation in ${\operatorname{T}}t\cap(\tau,{\tau_i}nf)$, also with respect to $\tau$-fine-localization and base transformation. Here we only need a simplified version of this latter lemma. \begin{lem}[cf.\ 5.10 of \cite{CWa}]\label{simplebarlem} Let ${\ze^\tau}a<\alpha=\omega^{\ze^\tau}a\in(\tau,{\tau_i}nf)$.
We have \[\alpha=\varthetat\left(1+(-\tau+{\ze^\tau}a)\minusp \operatorname{e}_{\ze^\tau}a\right)\] where $\operatorname{e}_{\ze^\tau}a:=\left\{\begin{array}{ll} 1&\mbox{ if }{\ze^\tau}a=\varepsilon+n\mbox{ for some }\varepsilon\in{\mathbb E}\mbox{ and }n<\omega\\ 0&\mbox{ otherwise.}\end{array}\right.$ \medskip \noindent For any $\sigma\in{\mathbb E}\cap\tau$ such that ${\ze^\tau}a\in{\operatorname{T}}ts$ we have \[\pi_{\si,\tau}(\alpha)=\varthetas\left(1+(-\sigma+\pi_{\si,\tau}({\ze^\tau}a))\minusp \operatorname{e}_{\ze^\tau}a\right)=\omega^{\pi_{\si,\tau}({\ze^\tau}a)}.\] \medskip \noindent Suppose ${\ze^\tau}a=_{\mathrm{\scriptscriptstyle{CNF}}}\omega^{{\ze^\tau}a_1}+\ldots+\omega^{{\ze^\tau}a_k}$ with $k>1$. We have \[{\ze^\tau}atal={\ze^\tau}a_k \quad\mbox{ and }\quad\alphabar=\omega^{\ze^\tau}apr\quad\mbox{ where }\quad{\ze^\tau}apr=\omega^{{\ze^\tau}a_1}+\ldots+\omega^{{\ze^\tau}a_{k-1}}.\] \end{lem} This shows that for $\alpha\in{\mathbb P}\setminus{\mathbb M}$ the operator $\bar{\cdot}$ cancels the last factor in the multiplicative normal form of $\alpha$, and the degree of limit point thinning of $\alpha$ is indeed ${\ze^\tau}a_k$, which for $\alpha\in{\mathbb P}\setminus{\mathbb L}$ is $0$. In other words, for $\alpha=_{\mathrm{NF}}\eta\cdot\xi\in{\mathbb P}\setminus{\mathbb M}$ we have $\alphabar=\eta$ and ${\ze^\tau}atal=\log(\log(\xi))={\mathrm{logend}}(\log(\alpha))$ for any $\tau\in{\mathbb E}_1$ such that $\alpha\in(\tau,{\tau_i}nf)$. \section{Generalized Tracking Sequences and Chains}\label{tstcsec} In this section we establish connections with earlier work on the structure ${\cal R}_2$, in particular \cite{CWc}, \cite{W17}, and \cite{W18}, and generalize notions originally developed there so as to be applicable to the entire ordinal structure ${\cal R}_2$.
In order to make this article more accessible we review quite extensively the relevant notions and results from \cite{CWc}, with improvements from the book chapter \cite{W17} and to some extent from \cite{W18}. The notions of \emph{tracking sequences and chains}, which were introduced in \cite{CWc}, are motivated easily, despite the relatively involved technical preparations and definitions required to formulate them. The approach is natural when analyzing well-partial orderings and was already chosen by Carlson to calculate the structure ${\cal R}_1$ in \cite{C99}: one considers enumeration functions of connectivity components. In the case of ${\cal R}_1$ there is only one such function, called $\kappa$, the domain of which is $\varepsilon_0+1$, since the $\varepsilon_0$-th component of ${\cal R}_1$ was shown in \cite{C99} to be $[\varepsilon_0,\infty)$, i.e.\ $\kappa_{\varepsilon_0}=\varepsilon_0<_1\alpha$ for all $\alpha>\varepsilon_0$, which is sometimes written as $\varepsilon_0<_1\infty$. Within connectivity components one can consider components relative to the root of the component. In ${\cal R}_1$ this is again very easy: $[\varepsilon_0+1,\varepsilon_0\cdot2)\cong [0,\varepsilon_0)$, as in general ${\mathrm{Ord}}\cong [\alpha+1,\infty)$ for all $\alpha$. Considering the nesting of connectivity components now allows us to locate any ordinal in the segment $\varepsilon_0\cdot\omega$ by a finite sequence of $\kappa$-indices. This means that we can \emph{track down} an ordinal in terms of nested connectivity components. The ordinal $\varepsilon_0\cdot\omega$ is the least supremum of an infinite $<_1$-chain, and it is therefore not surprising that the least $<_2$-pair of ${\cal R}_2$ is $\varepsilon_0\cdot\omega<_2\varepsilon_0\cdot(\omega+1)$, see \cite{CWc} and Theorem 3.8 of \cite{W} for a direct proof. Note that, therefore, ${\cal R}_1$ and ${\cal R}_2$ agree on the initial segment $\varepsilon_0\cdot(\omega+1)$.
In ${\cal R}_1^+$, as compared to ${\cal R}_1$, the situation is dynamized, and the role of nonzero multiples of $\varepsilon_0$ is taken by $\mathrm{Im}(\upsilon)\setminus\{0\}$ with $\upsilon$ as in Definition \ref{upsilondefi}, as was shown in \cite{W07b}. There, the enumeration function $\kappa$ is therefore relativized to $\kappa^\alpha$, so that $\kappa^\alpha$ enumerates the $\alpha$-$\le_1$-minimal ordinals, i.e.\ the ordinals greater than or equal to $\alpha$ that do not have $<_1$-predecessors strictly greater than $\alpha$. The index of the maximum $\alpha$-$\le_1$-minimal component that is $\le_1$-connected to $\alpha$ is defined to be $\lambda_\alpha$, and thereby the domain of $\kappa^\alpha$ is $\lambda_\alpha+1$. As was shown in \cite{W07b}, if $\alpha\in{\operatorname{T}}t\cap{\mathbb P}\cap(\tau,{\tau_i}nf)$ for some $\tau\in{\mathbb E}_1$, then $\lambdaal=\lambdatal$, i.e.\ the index $\lambdaal$ of the largest $\alpha$-relativized $\le_1$-connectivity component that is $\le_1$-connected to $\alpha$ is equal to the cofinality of $\alpha$ given by the cofinality operator defined in \ref{lataldefi}, namely $\lambdatal$. In short, $\alpha\le_1\kappa^\alpha_\lambdatal$, and the problem of finding the largest $\beta$ such that $\alpha\le_1\beta$, called ${\operatorname{lh}}(\alpha)$, the length of $\alpha$, is reduced to the calculation of ${\operatorname{lh}}(\kappa^\alpha_\lambdatal)$. For a summary of the results of \cite{W07b} see Subsection 2.4 of \cite{W07c}. In ${\cal R}_2$ any additive principal number $\alpha<\upsilon_\omega$ is the maximum of a maximal finite chain \begin{equation}\label{principalchain} \alpha_0<_1\alpha_1<_2\cdots<_2\alpha_n=\alpha, \end{equation} for suitable $n<\omega$, such that $\alpha_{i-1}$ is the greatest $<_2$-predecessor of $\alpha_i$ for $i=2,\ldots,n$, and $\alpha_0$ is $\le_1$-minimal.
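Two minimal illustrations of (\ref{principalchain}), in line with the facts recalled above: for $\alpha=\varepsilon_0$ the chain is trivial with $n=0$, since $\varepsilon_0$ is $\le_1$-minimal, while for the additive principal number $\alpha=\omega^{\varepsilon_0+1}=\varepsilon_0\cdot\omega$ we have $n=1$ and the chain
\[\varepsilon_0<_1\varepsilon_0\cdot\omega;\]
the latter ordinal has no $<_2$-predecessor, being the root of the least $<_2$-pair mentioned above.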
Note that if $n>0$, $\alpha_0$ is therefore the least $<_1$-predecessor of $\alpha_1$, and according to Lemma \ref{ktwoinflochainlem} we indeed have $\alpha_0\not<_2\alpha_1$. Note further that for every $i<n$ the ordinal $\alpha_{i+1}$ is $\alpha_i$-$\le_2$-minimal, i.e.\ $\beta\le\alpha_i$ for any $\beta$ such that $\beta<_2\alpha_{i+1}$. The existence of such sequences was shown in \cite{CWc} for $\alpha\le 1^\infty=\upsilon_1$. The \emph{tracking sequence} for such $\alpha$ characterizes the sequence $\alpha_0,\ldots,\alpha_n$ in terms of nested connectivity components by providing their indices. In order to track down additively decomposable ordinals as well, the process is iterated, leading to a \emph{tracking chain}: a finite sequence of tracking sequences. For additively decomposable ordinals the iterated descent along greatest $<_2$-predecessors does not in general suffice to locate the ordinal in terms of nested $\le_i$-components, $i=1,2$. Analyzing ${\cal R}$-structures in terms of enumerations of connectivity components lays bare their internal structure and reveals their regularity. An evaluation function $\mathrm{o}$, for \emph{ordinal}, is defined in order to recover the actual ordinal from its description by indices. \subsection{\boldmath Ordinal operators for ${\cal R}_2$\unboldmath}\label{ordopsec} While the $\lambda$-operator, which was first introduced in \cite{W07a}, could be motivated independently as a cofinality operator, as we did in the previous section, the $\mu$-operator, first introduced in \cite{CWc}, is specifically tailored for index calculation in ${\cal R}_2$.
The ($\tau$-relativized) ordinal operator $\alpha\mapsto\lambdatal$ returning the index of the largest (relative) $\le_1$-connectivity component in the context of ${\cal R}_1^+$ is recovered in ${\cal R}_2$ in an analogous way, see part a) of Lemma \ref{rhomulamestimlem}; however, the nesting of $\le_2$-connectivity components within a $\le_1$-component has to be considered as well. Each ordinal $\alpha$ such that $\alpha<_2\beta$ for some $\beta$ is the supremum of an infinite $<_1$-chain, along which $\le_2$-connectivity components can occur: \begin{lem}[7.5 of \cite{CWc}]\label{ktwoinflochainlem} If $\alpha<_2\beta$ then $\alpha$ is the supremum of an infinite $<_1$-chain. \end{lem} {\bf Proof.} For any $\rho<\alpha$ we have $\beta\models\exists x\:\forall y>x\:(\rho<x<_1 y)$. Hence the same holds true in $\alpha$. We obtain $\rho_1<_1\rho_2<_1\rho_3<_1\ldots<_1\alpha$. \mbox{ } $\Box$ The ($\tau$-relativized) ordinal operator $\alpha\mapsto\mu^\tall$ and an enumeration function $\xi\mapsto\nu^\alpha_\xi$, where $\xi\le\mu^\tall$, are defined so as to keep track of such infinite $<_1$-chains accommodating $\le_2$-connectivity components. $\mu^\tall$ characterizes the order type of the $<_1$-chain leading to the greatest newly arising $<_2$-component, with the refinement that this $<_1$-chain is subject to a notion of relativized $\le_2$-minimality. This means that $\nu^\alpha$ does not enumerate ordinals within the newly arising $<_2$-components apart from their roots. To clarify this latter statement, call such a root $\beta$, which (apart from a possible additive offset to place it into a surrounding component) is equal to $\nu^\alpha_\xi$ for some $\xi$, and call its greatest $\le_2$-successor $\gamma$.
Then the enumeration function $\nu^\alpha$ omits all ordinals in the interval $(\beta,\gamma]$ and, in case of $\xi<\mu^\tall$, continues with the least ordinal $\delta$ that is $\le_1$-connected to the greatest newly arising $\le_2$-component, which is indexed by $\mu^\tall$. It should further be noted that any non-trivial $\le_2$-component is itself the supremum of an infinite $<_1$-chain, which has the consequence that the function $\nu$ also enumerates those ordinals on such $<_1$-(sub-)chains that do not have any $<_2$-successor themselves but lead to the next non-trivial $<_2$-component. This entails that the image of $\mu^\tau$ consists of additive principal numbers. Easy examples, again in the setting $\tau=1$ omitted as superscript, are $\mu_{\varepsilon_0}=\omega$, which is the order type of the $<_1$-chain leading to the ordinal $\varepsilon_0\cdot\omega$, which is the least ordinal that has a $<_2$-successor. While we still have $\mu_{\varepsilon_\omega}=\omega$ (but ${\operatorname{lh}}(\varepsilon_\omega)=\varepsilon_\omega\cdot(\omega+1)+1$), the index $\mu_{\varphi(2,0)}=\omega^2$ leads to the least ordinal that has two $<_2$-successors, namely $\varphi(2,0)\cdot\omega^2<_2\varphi(2,0)\cdot(\omega^2+i)$, $i=1,2$. The index $\mu_{\varphi(\omega,0)}=\omega^\omega$ governs the chain \[\varphi(\omega,0)<_1\ldots<_1\varphi(\omega,0)\cdot\omega^\omega<_2\varphi(\omega,0)\cdot(\omega^\omega+\omega)<_1\varphi(\omega,0)\cdot(\omega^\omega+\omega)+1={\operatorname{lh}}(\varphi(\omega,0)),\] where along the infinite $<_1$-chain of order type $\omega^\omega$ we find ordinals $\alpha$ such that $\alpha<_2\alpha+\varphi(\omega,0)\cdot n$ for every $n<\omega$, nested in a cofinally increasing manner.
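Anticipating Definition \ref{muindex} below, these three values can be checked directly; the following is a sketch, assuming the standard term representations $\varepsilon_0=\vartheta(\Omega)$, $\varphi(2,0)=\vartheta(\Omega\cdot2)$, and $\varphi(\omega,0)=\vartheta(\Omega\cdot\omega)$. Writing the respective $\Delta$ as $\Omega_1\cdot(\lambda+k)$, the pairs $(\lambda,k)$ are $(0,1)$, $(0,2)$, and $(\omega,0)$, and since the indicator function $\chi$ vanishes on the parameters involved, we obtain
\[\mu_{\varepsilon_0}=\omega^{0+0+1}=\omega,\quad \mu_{\varphi(2,0)}=\omega^{0+0+2}=\omega^2,\quad\mbox{ and }\quad \mu_{\varphi(\omega,0)}=\omega^{\omega+0+0}=\omega^\omega.\]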
This provides an example for the following elementary observation, a complete proof of which is also given in \cite{W} (Lemma 3.3): \begin{lem}[7.2 of \cite{CWc}]\label{loalpllem} \begin{enumerate} \item In ${\cal R}_1$ we have (see \cite{C99}) \[\alpha\le_1\alpha+1\quad\Leftrightarrow\quad\alpha\in\mathrm{Lim}.\] \item In ${\cal R}_2$ we have \[\alpha\le_1\alpha+1\quad\Leftrightarrow\quad\alpha\in\mathrm{Lim}\;\:\&\:\;\forall\beta(\beta<_2\alpha\Rightarrow\alpha=\sup\set{\gamma<\alpha}{\beta\le_2\gamma}).\] \end{enumerate} \end{lem} The ordinal $\alpha:=\varphi(\varepsilon_0,0)$ gives rise to an infinite $<_1$-chain leading to its largest $\le_2$-component at $\beta:=\nu^\al_{\mu_\alpha}=\alpha\cdot\varepsilon_0$, where $\mu_\alpha=\varepsilon_0$ and the superscript $\tau=1$ is suppressed. This $\le_2$-component is \[\beta<_2\beta+\beta=:\gamma<_1\gamma+\varepsilon_0\cdot\omega=:\delta<_2\delta+\varepsilon_0\] and contains an infinite $<_1$-chain below $\delta$. The $\le_2$-component $\delta<_2\delta+\varepsilon_0$ is not new, as the interval $(\gamma,\delta+\varepsilon_0]$ is isomorphic to the initial segment $\varepsilon_0\cdot(\omega+1)+1$. Values of the $\mu$-operator look very canonical so far, but a subtlety arises for the first time when we consider the prominent ordinal $\Gamma_0=\min\{\alpha\mid\alpha=\varphi(\alpha,0)\}=\varthetanod(\varthetae(\varthetae(0)))$. While $\varphi(\omega,0)\cdot\omega^\omega$ was the least example of an ordinal $\alpha$ such that there exist ordinals $\beta$ and $\gamma$ satisfying $\alpha<_2\beta<_1\gamma$, the ordinal $\Gamma_0$ is the least $<_1$-predecessor of the least ordinal $\alpha$ such that there exist $\beta,\gamma$ with $\alpha<_2\beta<_1\gamma$ \emph{and} $\alpha<_2\gamma$. We find $\alpha=\Gamma_0^2\cdot\omega$, $\beta=\Gamma_0^2\cdot(\omega+1)$, and $\gamma=\Gamma_0^2\cdot(\omega+1)+\Gamma_0$.
The \emph{index} $\mu_{\Gamma_0}$ leading to $\alpha$ is $\Gamma_0\cdot\omega$. The ordinal $\alpha=\nu_{\mu_{\Gamma_0}}$ is the supremum of the first infinite chain in ${\cal R}_2$ of the form \[\alpha_1<_2\beta_1<_1\alpha_2<_2\beta_2<_1\alpha_3<_2\beta_3<_1\ldots\] of alternating $<_1$- and $<_2$-connections. Here $\alpha_1=\nu_{\Gamma_0}=\Gamma_0^2$ is the root of a $\le_2$-component that contains an element ($\beta_1$) apart from the root that is $<_1$-connected to the greater $\le_2$-components rooted in the ordinals $\alpha_2,\alpha_3,\ldots,\alpha$. The ordinals $\alpha_1,\alpha_2,\alpha_3,\ldots$ are the least witnesses of such a phenomenon in ${\cal R}_2$. It turns out that there is a simple criterion, in terms of indices of the $\nu$-function, for whether this phenomenon occurs or not. If we call the set of ordinals $\{\gamma\mid\Gamma_0\le_1\gamma\le_1\alpha\}$ the \emph{main line} (here for the $\le_1$-connectivity component rooted in $\Gamma_0$), then we can say that the $\le_2$-components rooted in the $\alpha_i$ \emph{fall back onto the main line}, namely at the ordinals $\beta_i$. This way of expressing this characteristic phenomenon in ${\cal R}_2$ has occasionally been used in earlier work. \cite{CWc} starts with the so-called \emph{indicator function} $\chi$, indicating the occurrence of the phenomenon just described. For the reader's convenience we review the definition and key properties of $\chi$.
\begin{defi}[3.1 of \cite{CWc}]\label{indicatorchi} For $\tau\in{\mathbb E}$ the {\bf\boldmath indicator function\unboldmath} $\chi^\tau:{\operatorname{T}}t\to\singleton{0,1}$\index{$\xit$@$\chi^\tau$} is defined by \begin{itemize} \item $\chi^\tau(\xi):=0$ for parameters $\xi<\tau$ \item $\chi^\tau(\tau):=1$ \item $\chi^\tau(\eta+\xi):=\chi^\tau(\xi)$ if $\eta+\xi>\tau$ is in normal form \item Let $i<\omega$ and $\xi=\Delta+\eta\in\mathrm{dom}(\varthetai)$ where $\eta<\Omega_{i+1}\mid\Delta$, with $\xi>0$ in case of $i=0$. \begin{itemize} \item $\chi^\tau(\varthetai(\xi)):=\chi^\tau(\Delta)$ if $\eta=\sup_{\sigma<\eta}\varthetai(\Delta+\sigma)$ or ${\mathrm{logend}}(\eta)=0$ \item $\chi^\tau(\varthetai(\xi)):=\chi^\tau(\xi)$ otherwise. \end{itemize} \end{itemize} Let $\chi^\taucheck:{\operatorname{T}}t\to\singleton{0,1}$ be the dual indicator function, i.e.\ $\chi^\taucheck:=1-\chi^\tau$.\index{$\xitc$@$\chi^\taucheck$} \end{defi} \begin{lem}[3.2 of \cite{CWc}]\label{chibasetrafolem} Let $\sigma,\tau\in{\mathbb E}$, $\sigma<\tau$, and $\alpha\in{\operatorname{T}}ts$. Then $\chi^\si(\pi_{\si,\tau}(\alpha))=\chi^\tau(\alpha)$, i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}ts}\arrow[2]{e,t}{\chi^\tau}\arrow{se,b}{\pi_{\si,\tau}}\node[2]{\singleton{0,1}}\\ \node{}\node{{\operatorname{T}}s}\arrow{ne,b}{\chi^\si} \end{diagram}\] The analogous statement holds for $\chi^\taucheck$. \end{lem} \begin{lem}[3.3 of \cite{CWc}]\label{chiinvlem} Let $\tau\in{\mathbb E}$ and $\alpha=\varthetat(\Delta+\eta)>\tau$. \begin{enumerate} \item[a)] $\chi^\tau(\alpha)$ is equal to each of the following: $\chi^\tau(\beta+\alpha)$ for all $\beta<{\tau_i}nf$, $\chi^\tau({\mathrm{logend}}(\alpha))$, $\chi^\tau(\omega^\alpha)$, $\chi^\tau(\beta\cdot\alpha)$ for all $\beta\in(0,{\tau_i}nf)$, $\chi^\tau\left((1/\beta)\cdot\alpha\right)$ for all $\beta\in{\mathbb P}^{<\alpha}$, and $\chi^\tau(\lambdatal)$.
\item[b)] If $\alpha\in{\mathbb E}$ then for all $\xi\in{\operatorname{T}}trestral$ such that $\chi^\al(\iota_{\tau,\al}(\xi))=1$ we have $\chi^\tau(\xi)=0$. \end{enumerate} \end{lem} In order to complete our list of instructive examples of values of the ordinal operator $\mu$, consider the Bachmann-Howard ordinal $\alpha:=\varthetanod(\varthetae(\vartheta_2(0)))$, which is the least $<_1$-predecessor of the least $<_2$-chain of the form $\beta<_2\gamma<_2\delta$ in ${\cal R}_2$ (in fact, in any ${\cal R}_n$ for $n>1$). $\alpha$ has fixed point level $\varepsilonom=\varthetae(\vartheta_2(0))$, and setting $\tau:=1$ we have \[\mu^\tall=\iota_{\tau,\al}(\varepsilonom)=\varepsilonal,\quad \beta=\nu^\alpha_{\varepsilonal}=\varepsilonal, \quad\gamma=\varepsilonal\cdot\omega, \quad\mbox{ and }\quad\delta=\varepsilonal\cdot(\omega+1).\] With this preparation we can now review the precise definition of $\mu$: \begin{defi}[3.4 of \cite{CWc}]\label{muindex} Let $\tau\in{\mathbb E}_1$ and $\alpha\in(\tau,{\tau_i}nf)\cap{\mathbb E}$, say $\alpha=\varthetat(\Delta+\eta)$ where $\Delta=\Omega_1\cdot(\lambda+k)$ such that $\lambda\in\mathrm{Lim}_0$ and $k<\omega$. We define \[\mu^\tall:=\omega^{\iota_{\tau,\al}(\lambda)+\chi^\al(\iota_{\tau,\al}(\lambda))+k}.\]\index{$\mu^\tall$} \end{defi} The next lemma justifies inductive proofs along ${\operatorname{ht}_\tau}$. The more refined estimate is useful when dealing with localizations. The subsequent algebraic lemmas concerning the notions of translation and base transformation will later be used without explicit mention. \begin{lem}[3.5 of \cite{CWc}]\label{muestimlem} ${\operatorname{ht}_\al}(\mu^\tall)\le{\operatorname{ht}_\al}(\lambdatal)<{\operatorname{ht}_\tau}(\alpha)$ and $\mu^\tall,(\mu^\tall)^+<\alphaplus$. \end{lem} \begin{lem}[3.6 of \cite{CWc}]\label{mutranslem} Let $\alpha=\varthetat(\Delta+\eta)\in{\mathbb E}^{>\tau}$.
For every $\beta=\varthetat(\Gamma+\rho)\in(\alpha,\alphaplus)\cap{\mathbb E}$ we have \[\mu^\taube=\mu_\albetal=\mu_\albe.\] \end{lem} \begin{lem}[3.7 of \cite{CWc}]\label{mubasetrafolem} Let $\sigma,\tau\in{\mathbb E}$, $\sigma<\tau$, and $\alpha=\varthetat(\Delta+\eta)\in{\operatorname{T}}ts\cap(\tau,{\tau_i}nf)\cap{\mathbb E}$. Then \[\mu^\tall\in{\operatorname{T}}ts\quad\mbox{ and }\quad\pi_{\si,\tau}(\mu^\tall)=\mu^\sipistal,\] i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}ts\cap(\tau,{\tau_i}nf)\cap {\mathbb E}}\arrow[2]{e,t}{\mu^\tau}\arrow{s,l}{\pi_{\si,\tau}}\node[2]{{\operatorname{T}}ts}\arrow{s,r}{\pi_{\si,\tau}}\\ \node{{\operatorname{T}}s\cap(\sigma,\sigmainf)\cap{\mathbb E}}\arrow[2]{e,b}{\mu^\si}\node[2]{{\operatorname{T}}s} \end{diagram}\] \end{lem} \begin{lem}[3.8 of \cite{CWc}]\label{munoncofinlem} Let $\tau\in{\mathbb E}_1$ and $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ and $\gamma\in{\mathbb E}\cap(\alphabar,\alpha)$. Then we have \[\pi_{\ga,\al}^{-1}(\mu^\tau_\gamma)\le\mu^\tall.\] \end{lem} With $\tau$ and $\alpha$ as in the above definition of $\mu^\tall$, the ordinal operator $\rhoargs{\alpha}{\xi}$, where $\xi\le\mu^\tall$, denotes the index of that $\nu^\alpha_\xi$-relativized $\le_1$-connectivity component which contains the largest $\le_2$-successor of $\nu^\alpha_\xi$, where $\nu^\alpha_\xi$ is the $\xi$-th such newly arising $\le_2$-component in the $\alpha$-th component, assuming that there is no surrounding component causing an additive offset. This latter phenomenon is taken care of by the notion of tracking chain, i.e.\ the nesting of tracking sequences, and will be discussed later. \begin{defi}[3.9 of \cite{CWc}]\label{rhoindex} Let $\tau\in{\mathbb E}$ and $\alpha<{\tau_i}nf$ where ${\mathrm{logend}}(\alpha)=\lambda+k$ such that $\lambda\in\mathrm{Lim}_0$ and $k<\omega$.
We define \[\rho^\tall:=\tau\cdot(\lambda+k\minusp\chi^\tau(\lambda)).\]\index{$\rho^\tall$} \end{defi} \begin{lem}[3.10 of \cite{CWc}] $\rho^\tall\le\tau\cdot{\mathrm{logend}}(\alpha)$ and ${\operatorname{ht}_\tau}(\rho^\tall)\le\max\singleton{1,{\operatorname{ht}_\tau}(\alpha)}$. \end{lem} \begin{lem}[3.11 of \cite{CWc}]\label{rhobasetrafolem} Let $\sigma,\tau\in{\mathbb E}$, $\sigma<\tau$, and $\alpha\in{\operatorname{T}}ts\cap{\tau_i}nf$. Then we have \[\rho^\tall\in{\operatorname{T}}ts\quad\mbox{ and }\quad\pi_{\si,\tau}(\rho^\tall)=\rhoargs{\sigma}{\pi_{\si,\tau}(\alpha)},\] i.e.\ the following diagram is commutative: \[\begin{diagram} \node{{\operatorname{T}}ts\cap{\tau_i}nf}\arrow[2]{e,t}{\varrho^\tau}\arrow{s,l}{\pi_{\si,\tau}}\node[2]{{\operatorname{T}}ts}\arrow{s,r}{\pi_{\si,\tau}}\\ \node{{\operatorname{T}}s\cap\sigmainf}\arrow[2]{e,b}{\varrho^\si}\node[2]{{\operatorname{T}}s} \end{diagram}\] \end{lem} The lemma below shows the interrelations between the operators from the previous section (\cite{W07a}) and the new ones (\cite{CWc}). Note in particular part a), where the new index operators $\mu$ and $\varrho$ fall into place with $\iota$. \begin{lem}[3.12 of \cite{CWc}, corrected in \cite{W17}]\label{rhomulamestimlem} Let $\tau\in{\mathbb E}_1$ and $\alpha=\varthetat(\Delta+\eta)\in(\tau,{\tau_i}nf)\cap{\mathbb E}$. Then we have \begin{enumerate} \item[a)] $\iota_{\tau,\al}(\Delta)=\varrho^\al_{\mutal}$ and hence $\lambdatal=\varrho^\al_{\mutal}+{\ze^\tau}atal$. \item[b)] ${\varrho^\al_\be}\le\lambdatal$ for every $\beta\le\mu^\tall$. For $\beta<\mu^\tall$ such that \footnote{This condition is missing in \cite{CWc}. However, that inequality was only applied under this condition, cf.\ Def.\ 5.1 and Lemma 5.7 of \cite{CWc}.} $\chi^\al(\beta)=0$ we even have ${\varrho^\al_\be}+\alpha\le\lambdatal$.
\item[c)] If $\mu^\tall<\alpha$ we have $\alpha\le\lambdatal<\alpha^2$, while otherwise \[\max\left({\mathbb E}^{\le\mu^\tall}\right)=\max\left({\mathbb E}^{\le\lambdatal}\right).\] \item[d)] If $\lambdatal\in{\mathbb E}^{>\alpha}$, we have $\mu^\tall=\lambdatal\cdot\omega$ in case of $\chi^\al(\lambdatal)=1$, and $\mu^\tall=\lambdatal$ otherwise. \end{enumerate} \end{lem} Note that whenever $\mu^\tall\in{\mathbb E}^{>\alpha}$, we have $\mu^\tall=\iota_{\tau,\al}(\Delta)={\operatorname{mc}}(\lambdatal)$, where ${\operatorname{mc}}$ denotes the largest additive component as introduced in the previous section. An easy example already mentioned earlier might be helpful to illustrate part a): The ordinal $\varepsilon_\omega$ is the $\varepsilon_\omega$-th $\le_1$-minimal ordinal in ${\cal R}two$, $\kappa_{\varepsilon_\omega}=\varepsilon_\omega$, $\mu_{\varepsilon_\omega}=\omega$, $\nu^{\varepsilon_\omega}_\omega=\varepsilon_\omega\cdot\omega$, $\varrho^{\varepsilon_\omega}_\omega=\varepsilon_\omega$, ${\ze^\tau}a_{\varepsilon_\omega}=1$, and $\lambda_{\varepsilon_\omega}=\varepsilon_\omega+1$. \subseteqsubsection*{Extending the domain of ordinal operators} Note that for $\tau\in{\mathbb E}one$ we have ${\tau_i}nf\in\mathrm{Im}(\upsilon)$ and $(\tau,{\tau_i}nf)\cap\mathrm{Im}(\upsilon)=\emptyset$. Recall Definitions \ref{lataldefi}, \ref{muindex}, \ref{rhoindex}, and \ref{indicatorchi} of the ordinal operators $\lambda$, $\mu$, $\varrho$, and $\chi$, respectively. The (partial) extension of their domain to $\mathrm{Im}(\upsilon)$ is motivated by Theorem \ref{maxchaintheo}. \betagin{defi}\lambdabel{lamuextdefi} For $\tau\in{\mathbb E}one$ we extend the definitions of the ordinal operators $\lambda$, $\mu$, $\varrho$, and $\chi$ as follows. 
\[\mu^\tau_{{\tau_i}nf}:=({\tau_i}nf)^\infty=:\lambda^\tau_{{\tau_i}nf}.\] \[\varrho^\tau_{{\tau_i}nf}:={\tau_i}nf\quad\mbox{ and }\quad\chi^\tau({\tau_i}nf):=0.\] For $\lambda\in\mathrm{Lim}$ we set \[\lambda_{\upsilon_\lambda}:=\upsilon_{\lambda+1}.\] \end{defi} We thus obtain \[\mu_{\upsilon_{\lambda+k}}=\upsilon_{\lambda+k+1}=\lambda_{\upsilon_{\lambda+k}}\] for $\lambda\in\mathrm{Lim}nod$ and $k\in(0,\omega)$. Note that the expressions $\lambda_0$ and $\mu_{\upsilon_\lambda}$ for $\lambda\in\mathrm{Lim}nod$ remain undefined. \subseteqsection{\boldmath Tracking sequences relative to limit $\upsilon$-segments\unboldmath} \noindent As mentioned earlier, by definition the $\kappa$-function enumerates $\le_1$-minimal ordinals in ${\cal R}one$ as well as in ${\cal R}two$, and the $\nu$-functions essentially enumerate $\le_2$-components relative to the component they occur in. Precise definitions will be given later. Returning to the chain (\ref{principalchain}) of ordinals below any additive principal number (less than $\upsilon_\omega$), $\alpha_0$ is therefore an element in the image of $\kappa$, while $\alphai$ for $i=1,\ldots,n$ is an element of the image of that $\nu$-function which relates to the context given by $\alphanod,\ldots,\alphaimin$. This context dependence is indicated by a superscript $\nu^\alphavec$ where the vector $\alphavec$ stands for the sequence of \emph{indices} of the ordinals $\alphanod,\ldots,\alphaimin$. For arbitrary ordinals we will also need context dependent enumeration functions $\kappa^\alphavec$ of relativized $\le_1$-components, so that $\kappa^{()}$ becomes another notation for simply $\kappa$. So, given an ordinal $\alpha$ in ${\cal R}two$, we want to calculate the indices of the nested $\le_i$-components it occurs in. 
For additive principal $\alpha$ this will be a sequence, called a \emph{tracking sequence}, or ${\mathrm{ts}}(\alpha)$, and for arbitrary $\alpha$ it will be a \emph{tracking chain}, or ${\mathrm{tc}}(\alpha)$, a sequence of tracking sequences, where the first element of each tracking sequence is a $\kappa$-index (relativized from the second tracking sequence in the chain on), and the later indices in each sequence are $\nu$-indices. It turns out that tracking sequences can easily be characterized using the $\mu$-operator, and we denote the independently defined set (or class) of sequences by ${\operatorname{T}}S$, in relativized form by ${\operatorname{T}}St$. We cite the following definition of ${\operatorname{T}}St$ from \cite{CWc}, thereby correcting a flaw in the original formulation that caused a deviation from the intended meaning. \betagin{defi}[corrected 4.2 of \cite{CWc}]\lambdabel{TSdefi} Let $\tau\in{\mathbb E}one$. A nonempty sequence $(\alphae,\ldots,\alphan)$ of ordinals below ${\tau_i}nf$ is called a {\bf \boldmath$\tau$-tracking sequence\unboldmath}\index{$\tau$-tracking sequence} if \betagin{enumerate} \item $(\alphae,\ldots,\alphanmin)$ is either empty or $\alphae,\ldots,\alphanmin\in{\mathbb E}$ and is such that $\tau<\alphae<\ldots<\alphanmin$. \item $\alphan\in{\mathbb P}$ and is such that $\alphan\ge\tau$ if $n=1$ and $\alphan>1$ if $n>1$. \item $\alphaie\le\mu^\talli$ for every $i\in\sigmangleton{1,\ldots,n-1}$. \end{enumerate} By ${\operatorname{T}}St$\index{$\tst$@${\operatorname{T}}St$} we denote the set of all $\tau$-tracking sequences. For convenience we set ${\operatorname{T}}S^0:={\operatorname{T}}Se$. \end{defi} According to Lemma \ref{muestimlem} the length of a tracking sequence is bounded in terms of the largest index of $\vartheta$-functions in the term representation of the first element of the sequence. 
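As an illustration of Definition \ref{TSdefi}, grounded in the example given after Lemma \ref{rhomulamestimlem} (where the value $\mu_{\varepsilon_\omega}=\omega$ was stated), consider the sequence $(\varepsilon_\omega,\omega)$ with $\tau=1$: \[\varepsilon_\omega\in{\mathbb E}\mbox{ with }1<\varepsilon_\omega,\qquad \omega\in{\mathbb P}\mbox{ with }\omega>1,\qquad \omega\le\mu_{\varepsilon_\omega}=\omega,\] so all three conditions are met and $(\varepsilon_\omega,\omega)$ is a $1$-tracking sequence. By contrast, $(\varepsilon_\omega,\omega^2)$ fails condition 3, since $\omega^2>\mu_{\varepsilon_\omega}$.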
\betagin{defi}[cf.\ 4.3 of \cite{CWc}]\lambdabel{RSdefi} The set of sequences obtained from ${\operatorname{T}}St$ by erasing the last entry in each sequence is denoted by ${\cal R}St$ and called the set of {\bf \boldmath$\tau$-reference sequences\unboldmath}. \end{defi} Note that ($\tau$-)reference sequences are, apart from the empty sequence, sequences $(\alphae,\ldots,\alphan)$ of strictly increasing epsilon numbers greater than $\tau$ that are \emph{$\mu$-covered}, that is, $\alpha_{i+1}\le\mu^{\alpha_{i-1}}_{\alphai}$ for $i=1,\ldots,n-1$, setting $\alphanod:=\tau$. Note that $\mu^{\alpha_{i-1}}_{\alphai}=\mu^\tau_\alphai$ via translation, see Lemma \ref{mutranslem}. We therefore sometimes omit the superscript when a suitable relativization parameter is understood from the context. $\mu$-coverings were first explicitly discussed in Subsection 3.1 of \cite{W17}; we will return to this notion shortly. Reference sequences $\alphavec$ characterize the contexts of relativization needed for the definition of $\kappa^\alphavec$ and $\nu^\alphavec$, where $\nu$ requires $\alphavec$ not to be the empty sequence. We now modify the definitions of ${\operatorname{T}}S$, which was defined to be just ${\operatorname{T}}Se$ in earlier work, and ${\cal R}S$, which used to be defined as ${\cal R}S^1$, to apply to all of ${\mathrm{Ord}}$. Since $\upsilon_\omega$ is the supremum of the first infinite $\kappa^\tauwo$-chain in ${\cal R}two$, an additional parameter $\lambda\in\mathrm{Lim}nod$ comes into the picture, as we now need to relate to the interval $[\upsilon_\lambda,\upsilon_{\lambda+\omega})$ in which we want to consider tracking sequences.
The initial sequence $(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})$ for tracking sequences leading into the interval $[\upsilon_{\lambda+m},\upsilon_{\lambda+m+1})$, which might appear redundant at first sight, is needed if $m>0$, since $\upsilon_{\lambda+1}$ is the $\kappa$-index (relative to $\upsilon_\lambda$ if $\lambda\in\mathrm{Lim}$) that specifies the $\le_1$-component (relative to $\upsilon_\lambda$ if $\lambda\in\mathrm{Lim}$) in which nested $\le_2$-components are specified by the tracking sequence, cf.\ Theorem \ref{maxchaintheo}. \betagin{defi}[\boldmath$\lambdaRS$ and $\lambdaTS$\unboldmath]\lambdabel{laRSlaTSdefi} For $\lambda\in\mathrm{Lim}nod$ we define \betagin{enumerate} \item $\alphavec\in\lambdaRS$ if and only if $\alphavec$ is of a form\footnote{For the sake of notational simplicity, we deviate from the usual convention that a vector $\alphavec$ has components $\alphae,\ldots,\alphan$ where $n$ is the length of $\alphavec$, whenever an explicit redefinition is given.} $(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan)$ where $m<\omega$ and either $n=0$ or $n>0$ and $\alphae,\ldots,\alphan\in{\mathbb E}$ such that $\alphae<\ldots<\alphan$, where $\alphae\in(\upsilon_{\lambda+m},\upsilon_{\lambda+m+1})$, and $\alphaie\le\mu_{\alphai}$ for $i=1,\ldots,n-1$. \item $\alphavec^\frown\beta\in\lambdaTS$ if and only if either $\alphavec=()$ and $\beta\in{\mathbb P}\cap[\upsilon_\lambda,\upsilon_{\lambda+1}]$ or $()\not=\alphavec=(\alpha_1,\ldots,\alphan)\in\lambdaRS$ and $\beta\in{\mathbb P}\cap(1,\mu_{\alphan}]$. \end{enumerate} We define the extended class ${\cal R}S$ (respectively ${\operatorname{T}}S$) as the union of all $\lambdaRS$ ($\lambdaTS$, respectively). 
\end{defi} Note that the above definition accords with the already defined ${\cal R}S$ and ${\operatorname{T}}S$: ${\cal R}S$ comprises the sequences in $0$-${\cal R}S$ with elements below $\upsilon_1=1^\infty$ and ${\operatorname{T}}S$ comprises the sequences below $\upsilon_1$ in $0$-${\operatorname{T}}S$. For $\lambda\in\mathrm{Lim}$ we have $(\upsilon_\lambda)\in\lambdaTS\setminus\lambdaRS$. While the parameter $\lambda$ in $\lambdaRS$ and $\lambdaTS$ explicitly mentions the interval $[\upsilon_\lambda,\upsilon_{\lambda+\omega})$ in which the sequences reside, $\lambda$ can always be recovered from the first element in the sequence. The empty sequence $()\in\lambdaRS$ clearly does not cause any problem. Tracking sequences for additive principal numbers were first introduced in Definition 3.13 of \cite{CWc}. Here we first state the assignment of the proper tracking sequence of a multiplicative principal number, which is based on the notion of localization reviewed in Subsection \ref{localizationsec}. \betagin{defi}[cf.\ 3.5 of \cite{W17}]\lambdabel{trsofmzdefi} Let $\tau\in{\mathbb E}one$ and $\alpha\in{\mathbb M}\cap(\tau,{\tau_i}nf)$ with $\tau$-localization $\tau=\alpha_0,\ldots,\alpha_n=\alpha$. The {\bf \boldmath tracking sequence of $\alpha$\unboldmath} above $\tau$\index{tracking sequence}, ${\mathrm{ts}}t(\alpha)$\index{${\mathrm{ts}}tmz$}, is defined as follows. If there exists the largest index $i\in\{1,\ldots,n-1\}$ such that $\alpha\le\mu^\talli$, then \[{\mathrm{ts}}t(\alpha):={\mathrm{ts}}t(\alphai)^\frown(\alpha),\] otherwise ${\mathrm{ts}}t(\alpha):=(\alpha)$. We extend the definition of ${\mathrm{ts}}t$ to \[{\mathrm{ts}}t({\tau_i}nf):=({\tau_i}nf).\] \end{defi} We will apply ${\mathrm{ts}}t$ to ordinals in ${\mathbb M}$ in particular when defining the evaluation function $\mathrm{o}$. The extension of the assignment ${\mathrm{ts}}t$ to all additive principal numbers in $(\tau,{\tau_i}nf)$ is somewhat more involved. 
\betagin{defi}[3.15 of \cite{W17}]\lambdabel{trsofhzdefi} Let $\tau\in{\mathbb E}one$ and $\alpha\in[\tau,{\tau_i}nf)\cap{\mathbb P}$. The tracking sequence of $\alpha$ above $\tau$\index{tracking sequence}, ${\mathrm{ts}}t(\alpha)$\index{${\mathrm{ts}}thz$}, is defined as in Definition \ref{trsofmzdefi} if $\alpha\in{\mathbb M}^{>\tau}$, and otherwise recursively in the multiplicative decomposition of $\alpha$ as follows. \betagin{enumerate} \item If $\alpha\le\tau^\omega$ then ${\mathrm{ts}}t(\alpha):=(\alpha)$. \item Otherwise. Then $\alphabar\in[\tau,\alpha)$ and $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\alphabar\cdot\beta$ for some $\beta\in{\mathbb M}^{>1}$. Let ${\mathrm{ts}}t(\alphabar)=(\alphae,\ldots,\alphan)$ and set $\alpha_0:=\tau$.\footnote{As verified in parts 1 and 2 of Lemma \ref{trsbasicpropslem} we have $\beta\le\alpha_n$.} \betagin{enumerate} \item[2.1.] If $\alphan\in{\mathbb E}^{>\alpha_{n-1}}$ and $\beta\le\mu^\talln$ then ${\mathrm{ts}}t(\alpha):=(\alphae,\ldots,\alphan,\beta)$. \item[2.2.] Otherwise. For $i\in\sigmangleton{1,\ldots,n}$ let $(\beta^i_1,\ldots,\beta^i_{m_i})$ be ${\mathrm{ts}}^{\alphai}(\beta)$ provided $\beta>\alphai$, and set $m_i:=1$, $\beta^i_1:=\alphai\cdot\beta$ if $\beta\le\alphai$. We first set the insertion index \[i_0:=\max\left(\sigmangleton{1}\cup\set{j\in\sigmangleton{2,\ldots,n}}{\beta^j_1\le\mu^\tau_{\alpha_{j-1}}}\right),\] then define ${\mathrm{ts}}t(\alpha):=(\alphae,\ldots,\alpha_{i_0-1},\beta^{i_0}_1,\ldots,\beta^{i_0}_{m_{i_0}})$. \end{enumerate} \end{enumerate} For technical convenience we set ${\mathrm{ts}}^0:={\mathrm{ts}}^1$, and instead of ${\mathrm{ts}}^1$ we also simply write ${\mathrm{ts}}$. \index{${\mathrm{ts}}t$!${\mathrm{ts}}$} \end{defi} The following lemma shows that the above definition is sound, that the image of ${\mathrm{ts}}t$ is contained in ${\operatorname{T}}St$, and states basic properties of ${\mathrm{ts}}t$. 
Its proof proceeds by straightforward induction along the definition of ${\mathrm{ts}}t$, i.e.\ the length of $\tau$-localization of multiplicative principal numbers and the number of factors in the multiplicative normal form of additive principal numbers. \betagin{lem}[3.14 of \cite{CWc}]\lambdabel{trsbasicpropslem} Let $\tau\in{\mathbb E}one$ and $\alpha\in[\tau,{\tau_i}nf)\cap{\mathbb P}$. Let further $(\alphae,\ldots,\alphan)$ be ${\mathrm{ts}}t(\alpha)$, the tracking sequence of $\alpha$ above $\tau$. \betagin{enumerate} \item If $\alpha\in{\mathbb M}$ then $\alphan=\alpha$ and ${\mathrm{ts}}t(\alphai)=(\alphae,\ldots,\alphai)$ for $i=1,\ldots,n$. \item If $\alpha=_{\mathrm{\scriptscriptstyle{NF}}}\eta\cdot\xi\not\in{\mathbb M}$ then $\alphan\in{\mathbb P}\cap[\xi,\alpha]$ and $\alphan=_{\mathrm{\scriptscriptstyle{NF}}}\alphanbar\cdot\xi$. \item $(\alphae,\ldots,\alpha_{n-1})$ is either empty or a strictly increasing sequence of epsilon numbers in the interval $(\tau,\alpha)$. \item For $1\le i\le n-1$ we have $\alphaie\le\mu^\talli$, and if $\alphai<\alphaie$ then $(\alphae,\ldots,\alphaie)$ is a subsequence of the $\tau$-localization of $\alphaie$. \end{enumerate} We therefore have ${\mathrm{ts}}t(\alpha)\in{\operatorname{T}}St$. \end{lem} The following lemma is part of showing (in Theorems 3.19 and 3.20 of \cite{W17}) that ${\mathrm{ts}}$ is a $(\le,\le_\mathrm{\scriptscriptstyle{lex}})$-isomorphism between $\upsilon_1\cap{\mathbb P}$ and ${\operatorname{T}}S$. We will generalize this isomorphism to all of ${\mathbb P}$. However, we will need to define the evaluation function $\mathrm{o}$ for additive principal numbers first, which will be shown to be the inverse of ${\mathrm{ts}}$. \betagin{lem}[3.15 of \cite{CWc}]\lambdabel{citedinjtrslem} Let $\tau\in{\mathbb E}one$ and $\alpha,\gamma\in[\tau,{\tau_i}nf)\cap{\mathbb P}$, $\alpha<\gamma$. 
Then we have \[{\mathrm{ts}}t(\alpha)<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}t(\gamma).\] \end{lem} Note that the proof of the above lemma given in \cite{CWc} is in fact an induction along the inductive definition of ${\mathrm{ts}}t(\gamma)$ with a subsidiary induction along the inductive definition of ${\mathrm{ts}}t(\alpha)$. The function $\upsilon$ gives rise to a segmentation of the ordinals into intervals $[\upsilon_\iota,\upsilon_{\iota+1})$, which we will use for a generalization of the notion of tracking sequence. \betagin{defi}[\boldmath$\upsilonseg$\unboldmath]\lambdabel{upssegdefi} For $\alpha\in{\mathrm{Ord}}$ let $(\lambda,m)\in\mathrm{Lim}nod\times\omega$ be $<_\mathrm{\scriptscriptstyle{lex}}$-minimal such that $\alpha<\upsilon_{\lambda+m+1}$ and define $\upsilonseg(\alpha):=(\lambda,m)$, the {\bf \boldmath$\upsilon$-segment\unboldmath} of $\alpha$. \end{defi} Note that for $\tau\in{\mathbb E}one$ we have ${\tau_i}nf=\upsilon_{\lambda+m+1}$ where $(\lambda,m):=\upsilonseg(\tau)$. Recall that by $(1/\gamma)\cdot\alpha$ we denote the least ordinal $\delta$ such that $\alpha=\gamma\cdot\delta$, whenever such an ordinal exists. Note that in the extension of the function ${\mathrm{ts}}$ to all of ${\mathbb P}$ below, the original definition of ${\mathrm{ts}}$ is referred to as ${\mathrm{ts}}^0$. The technical appearance of the following definition is due to the fact that, in each relativized instance, ordinal notations cover at most an interval of the form $[\upsilon_\alpha,\upsilon_{\alpha+1})$. Note that in the case where $\gamma<\alpha<\gamma^\omega$ we need to explicitly cancel the leading factor $\gamma$, which is the second-to-last element of the tracking sequence, resulting in the final $\nu$-index $(1/\gamma)\cdot\alpha$. \betagin{defi}[Extended domain \boldmath${\mathrm{ts}}$\unboldmath]\lambdabel{latrsdefi} For $\alpha\in{\mathbb P}$ let $(\lambda,m):=\upsilonseg(\alpha)$ be the $\upsilon$-segment of $\alpha$.
For better readability we set $\gammavec:=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})$ and $\gamma:=\upsilon_{\lambda+m}$. We define \[{\mathrm{ts}}(\alpha):=\left\{\betagin{array}{ll} {\mathrm{ts}}^{\upsilon_\lambda}(\alpha)&\mbox{if }m=0\mbox{, otherwise:}\\[2mm] \gammavec&\mbox{if }\alpha=\gamma,\\[2mm] \gammavec^\frown(1/\gamma)\cdot\alpha&\mbox{if }\gamma<\alpha<\gamma^\omega,\\[2mm] \gammavec^\frown{\mathrm{ts}}^{\gamma}(\alpha)&\mbox{if }\alpha\ge\gamma^\omega. \end{array}\right. \] \end{defi} Having defined the assignment function ${\mathrm{ts}}$ of tracking sequences on ${\mathbb P}$, we need to calculate the evaluation function $\mathrm{o}$ on ${\operatorname{T}}S$. The evaluation function $\mathrm{o}$ was first given in \cite{CWc}, but redefined in \cite{W17}, which led to a substantial disentanglement that made its elementary recursiveness directly visible. For the verification proof that these functions are inverses of each other, we need to establish a definition of $\mathrm{o}$ that provides expressions in multiplicative normal form accompanied by the tracking sequences of the initial products of such normal forms. Along the way we will characterize the fixed points of $\mathrm{o}$, i.e.\ all $\alphavec^\frown\beta\in{\operatorname{T}}S$ such that $\mathrm{o}(\alphavec^\frown\beta)=\beta$. Clearly, fixed points are pivotal in the proof of the order-isomorphism between ${\mathbb P}$ and ${\operatorname{T}}S$, and decomposition into multiplicative normal form carries the proof forward. More specifically, given a sequence $\alphavec^\frown\beta\in{\operatorname{T}}S$ with evaluation $\gamma=_{\mathbb N}F\eta\cdot\xi\not\in{\mathbb M}$, we want to determine the tracking sequence of $\eta$. It turns out that $\xi={\operatorname{lf}}(\beta)$, i.e.\ either $\beta$ itself or its last factor.
The latter can occur if $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$ with $k>1$, $\beta_1\in{\mathbb E}^{>\alphan}$, and $\beta_2\le\mu_{\beta_1}$, where $\alphavec=(\alphae,\ldots,\alphan)$. The tracking sequence of $\eta$ is determined by an auxiliary function $\operatorname{h}$, which in turn uses two functions $\operatorname{sk}$ (for \emph{skimmed sequence}) and $\operatorname{mts}$ (for \emph{minimal tracking sequence}). The former sequence, say $\operatorname{sk}be(\gamma)$, extends a given point on a local main line (i.e.\ starting from a $\nu$-index $\gamma$ and extending along nested $\nu$-indices through the $\le_2$-component rooted in the start point indexed by $\gamma$) as far as possible without obtaining evaluating last factors below $\beta$. The latter, say $\operatorname{mts}al(\beta)$, produces a minimal $\mu$-covering from $\alpha$ to $\beta$, obtaining large evaluating factors as quickly as possible. For the reader's convenience we include all relevant (slightly modified) definitions, starting with $\mu$-coverings. Correcting definitions in \cite{W17} technically, we need to allow $(\alpha)$ to be a $\mu$-covering of $\alpha$ itself and let a $\mu$-covering from $\alpha$ to $\beta$ start with $\alpha$. \betagin{defi}[cf.\ 3.2 of \cite{W17}]\lambdabel{mucovering} Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, and $\beta\in{\mathbb P}\cap[\alpha,\alphainf)$. A sequence $(\alpha_0,\dots,\alphan)$ where $\alpha_0=\alpha$, $\alphan=\beta$, such that $(\alpha_1,\ldots,\alphan)\in{\operatorname{T}}Sal$ and $\alpha<\alpha_1\le\mu^\tall$ if $n>0$, is called a {\bf \boldmath$\mu$-covering\unboldmath} from $\alpha$ to $\beta$. \end{defi} \betagin{lem}[3.3 of \cite{W17}]\lambdabel{mucovloc} Any $\mu$-covering from $\alpha$ to $\beta$ is a subsequence of the $\alpha$-localization of $\beta$. \end{lem} \betagin{defi}[3.4 of \cite{W17}]\lambdabel{maxminmucov} Let $\tau\in{\mathbb E}one$. 
\betagin{enumerate} \item For $\alpha\in{\mathbb P}\cap(\tau,{\tau_i}nf)$ we define $\operatorname{max-cov}tau(\alpha)$ to be the longest subsequence $(\alphae,\ldots,\alphane)$ of the $\tau$-localization of $\alpha$ which satisfies $\tau<\alphae$, $\alphane=\alpha$, and which is {\it $\mu$-covered}, i.e.\ which satisfies $\alphaie\le\mu^\talli$ for $i=1,\ldots,n$. \item For $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ and $\beta\in{\mathbb P}\cap[\alpha,\alphainf)$ we denote the shortest subsequence $(\beta_0,\beta_1,\ldots,\beta_n)$ of the $\alpha$-localization of $\beta$ which is a $\mu$-covering from $\alpha$ to $\beta$ by $\operatorname{min-cov}^\al(\beta)$, if such sequence exists. \end{enumerate} \end{defi} \betagin{defi}[cf.\ 3.6 of \cite{W17}]\lambdabel{mts} Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, $\beta\in{\mathbb P}\cap[\alpha,\alphainf)$, and let $\alpha=\alpha_0,\ldots,\alphan=\beta$ be the $\alpha$-localization of $\beta$. If there exists the least index $i\in\{0,\ldots,n-1\}$ such that $\alphai<\beta\le\mu^\talli$, then \[\operatorname{mts}al(\beta):=\operatorname{mts}al(\alphai)^\frown(\beta),\] otherwise $\operatorname{mts}al(\beta):=(\alpha)$. \end{defi} Note that $\operatorname{mts}al(\beta)$ reaches $\beta$ if and only if it is a $\mu$-covering from $\alpha$ to $\beta$. We will see a criterion for this to hold in Lemma \ref{mtshatlem}. \betagin{lem}[3.7 of \cite{W17}]\lambdabel{covcharlem} Fix $\tau\in{\mathbb E}one$. \betagin{enumerate} \item For $\alpha\in{\mathbb P}\cap(\tau,{\tau_i}nf)$ let $\operatorname{max-cov}tau(\alpha)=(\alphae,\ldots,\alphane)=\alphavec$. If $\alphae<\alpha$ then $\alphavec$ is a $\mu$-covering from $\alphae$ to $\alpha$ and $\operatorname{mts}ale(\alpha)\subseteqseteq\alphavec$. \item If $\alpha\in{\mathbb M}\cap(\tau,{\tau_i}nf)$ then $\operatorname{max-cov}tau(\alpha)={\mathrm{ts}}t(\alpha)$. 
\item Let $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$ and $\beta\in{\mathbb P}\cap[\alpha,\alphainf)$. Then $\operatorname{min-cov}^\al(\beta)$ exists if and only if $\operatorname{mts}al(\beta)$ is a $\mu$-covering from $\alpha$ to $\beta$, in which case these sequences are equal, characterizing the lexicographically maximal $\mu$-covering from $\alpha$ to $\beta$. \end{enumerate} \end{lem} The following ordinal operator provides a useful upper bound when calculating the reach of connectivity components. The subsequent Lemma \ref{mtshatlem} justifies the definition further. \betagin{defi}[cf.\ 3.16 of \cite{CWc}]\lambdabel{alhatdefi}\index{$\alphahat$} Let $\tau\in{\mathbb E}one$ and $\alpha\in(\tau,{\tau_i}nf)\cap{\mathbb E}$. We define \[\alphahat:=\min\set{\gamma\in{\mathbb M}^{>\alpha}}{{\mathrm{ts}}al(\gamma)=(\gamma)\:\&\:\mu^\tall<\gamma}.\] For $\alpha=\upsilon_\xi$, $\xi>0$, we set \[\alphahat:=\upsilon_{\xi+1}.\] \end{defi} Note that in the above context we have $\alphahat\le\alphaplus$ if $\alpha\not\in\mathrm{Im}(\upsilon)$. As is the case with $\alpha^+$, we suppress the base $\tau$ in the notation $\widehat{\alpha}$, assuming that it will always be well understood from the respective context. \betagin{lem}[3.17 of \cite{CWc}]\lambdabel{widehatestimlem} Let $\tau,\alpha$ be as in the above definition, $\alpha\not\in\mathrm{Im}(\upsilon)$. Then \[\widehat{\beta}\le\widehat{\alpha}\quad\mbox{ for any }\quad\beta\in{\operatorname{T}}al\cap{\mathbb E}\cap(\alpha,\mu^\tall].\] We further have $\lambdatal<\widehat{\alpha}$. \end{lem} \betagin{lem}[cf.\ 3.8 of \cite{W17}]\lambdabel{mtshatlem} Let $\tau\in{\mathbb E}one$, $\alpha\in{\mathbb E}\cap(\tau,{\tau_i}nf)$, and $\beta\in{\mathbb M}\cap[\alpha,\alphainf)$. Then $\operatorname{mts}al(\beta)$ is a $\mu$-covering from $\alpha$ to $\beta$ if and only if $\beta<\alphahat$. This holds if and only if either $\beta=\alpha$ or for ${\mathrm{ts}}al(\beta)=(\beta_1,\ldots,\beta_m)$ we have $\beta_1\le\mu^\tall$.
\end{lem} \betagin{defi}[cf.\ 4.9 of \cite{CWc}] For $\beta\in{\mathbb M}^{>1}$ and $\gamma\in{\mathbb E}\setminus\mathrm{Im}(\upsilon)$ let $\operatorname{sk}_\beta(\gamma)$\index{$\operatorname{sk}$} be the maximal sequence $\delta_1,\ldots,\delta_l$ such that (setting $\delta_0:=1$) \betagin{itemize} \item $\delta_1=\gamma$ and \item if $i\in\sigmangleton{1,\ldots,l-1}\:\&\:\delta_i\in{\mathbb E}^{>\delta_{i-1}}\:\&\:\beta\le\mu_{\delta_i}$, then $\delta_{i+1}=\mathrm{o}erline{\mu_{\delta_i}\cdot\beta}$. \end{itemize} \end{defi} Note that equivalently, one obtains the ordinal $\mathrm{o}erline{\mu_{\delta_i}\cdot\beta}$ in the above definition by \emph{skipping} the factors strictly below $\beta$ from the multiplicative normal form of $\mu_{\delta_i}$. Lemma \ref{muestimlem} guarantees that the above definition terminates. We have $(\delta_1,\ldots,\delta_{l-1})\in{\cal R}S$ and $(\delta_1,\ldots,\delta_l)\in{\operatorname{T}}S$. Notice that $\beta\le\delta_i$ for $i=2,\ldots,l$. Note that if $\beta=\omega$, the sequence $\gamma=\delta_1,\ldots,\delta_l$ is maximal such that $\delta_{i+1}=\mu_{\delta_i}$, $i=1,\ldots,l-1$. After the above preparations we can now assemble the auxiliary sequence $\operatorname{h}_\ga(\alphavec^\frown\beta)$ that plays a key role in the definition of the evaluation function $\mathrm{o}$. \betagin{defi}[cf.\ 3.11 of \cite{W17}]\lambdabel{hgamalbe} Let $\alphavec^\frown\gamma\in\lambdaRS$, where $\gamma\not\in\mathrm{Im}(\upsilon)$, and $\beta\in{\mathbb M}\cap(1,\gammahat)$. If $\beta>\gamma$ let $\operatorname{mts}ga(\beta)={\vec{\eta}}^\frown(\varepsilon,\beta)$. \[\operatorname{h}_\be(\alphavecga):=\left\{\betagin{array}{ll} \alphavec^\frown{\vec{\eta}}^\frown\operatorname{sk}be(\varepsilon)&\mbox{ if }\gamma<\beta<\gammahat\\[2mm] \alphavec^\frown\operatorname{sk}be(\gamma)&\mbox{ if }\beta\le\gamma\mbox{ and }\beta\le\mu_\ga\\[2mm] \alphavec^\frown\gamma&\mbox{ if }\beta\le\gamma\mbox{ and }\beta>\mu_\ga. 
\end{array}\right.\] \end{defi} We are now going to extend Definition 3.14 of \cite{W17} (2.3 of \cite{W18}) to the extended class ${\operatorname{T}}S$ as the union over all $\lambdaTS$. If $\alphavec^\frown\beta\in\lambdaTS$ is of the form $(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})$ where $m<\omega$ or $\alphavec^\frown\beta=(1+\upsilon_\lambda)$, we will have $\mathrm{o}(\alphavec^\frown\beta)=\beta$, extending the class of fixed points of $\mathrm{o}$. As mentioned earlier, the evaluation function $\mathrm{o}$ enables a smooth definition of the component enumerating functions $\kappa$ and $\nu$ in the next subsection. \betagin{defi}[cf.\ 3.14 of \cite{W17}]\lambdabel{odef} Let $\alphavecbe\in\lambdaTS$, where $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan)\in\lambdaRS$ for the maximal such $m<\omega$ and $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$. Let $\betapr:=1$ if $k=1$ and $\betapr:=\beta_2\cdot\ldots\cdot\beta_k$ otherwise. We set $\alphanod:=1+\upsilon_{\lambda+m}$, $\alphane:=\beta$, $h:=\htarg{\alphanod}(\alphae)+1$, and $\gammavec_i:={\mathrm{ts}}^{\alphaimin}(\alphai)$ for $i=1,\ldots,n$, while \[\gammavec_{n+1}:=\left\{\betagin{array}{ll} (\beta)&\mbox{if }\beta\le\alphan\\[2mm] {\mathrm{ts}}aln(\beta_1)^\frown\beta_2&\mbox{if } k>1,\beta_1\in{\mathbb E}^{>\alphan}\:\&\:\beta_2\le\mu_{\beta_1}\\[2mm] {\mathrm{ts}}aln(\beta_1)&\mbox{otherwise,} \end{array}\right.\] and write $\gammavec_i=(\gamma_{i,1},\ldots,\gamma_{i,m_i})$, $i=1,\ldots,n+1$. We then define \[\mathrm{lSeq}(\alphavecbe):=(m_1,\ldots,m_{n+1})\in[h]^{\le h}\] where $[h]^{\le h}$ is the set of sequences of natural numbers $\le h$ of length at most $h$, ordered lexicographically. We may now define $\mathrm{o}(\alphavecbe)$ recursively in $\mathrm{lSeq}(\alphavecbe)$, as well as auxiliary parameters $n_0(\alphavecbe)$ and $\gamma(\alphavecbe)$, which are set to $0$ where not defined explicitly. 
\betagin{enumerate} \item If $\alphavec=()$ and $\beta=1+\upsilon_\lambda$, then $\mathrm{o}((\beta)):=\beta$. \item If $\alphavec\not=()$\footnote{The corresponding condition of part 2 of Definition 2.3 in \cite{W18} should read $n\ge 1$, not $n>1$.} and $\beta_1\le\alphan$, then $\mathrm{o}(\alphavecbe):=_{\mathbb N}F\mathrm{o}(\alphavec)\cdot\beta$. \item If $\beta_1\in{\mathbb E}^{>\alphan}$, $k>1$, and $\beta_2\le\mu_{\beta_1}$, then set $n_0(\alphavecbe):=n+1$, $\gamma(\alphavecbe):=\beta_1$, and define \[\mathrm{o}(\alphavecbe):=_{\mathbb N}F\mathrm{o}(\operatorname{h}_{\beta_2}(\alphavec^\frown\beta_1))\cdot\betapr.\] \item Otherwise. Then setting \[n_0:=n_0(\alphavecbe):=\max\left(\{i\in\{1,\ldots,n+1\}\mid m_i>1\}\cup\{0\}\right),\] define \[\mathrm{o}(\alphavecbe):=_{\mathbb N}F\left\{\betagin{array}{ll} \beta&\mbox{if } n_0=0\\[2mm] \mathrm{o}(\operatorname{h}_{\beta_1}({\alphavec_{\restriction_{n_0-1}}}^\frown\gamma))\cdot\beta&\mbox{if } n_0>0, \end{array}\right.\] where $\gamma:=\gamma(\alphavecbe):=\gamma_{n_0,m_{n_0}-1}$. \end{enumerate} \end{defi} The just defined extension of the evaluation function $\mathrm{o}$ to ${\operatorname{T}}S$ is easily seen to have the following desired properties. \betagin{theo}[cf.\ 3.19, 3.20, and 3.21 of \cite{W17}]\lambdabel{trsvallem} The (class) function ${\mathrm{ts}}:{\mathbb P}\to{\operatorname{T}}S$ is a $(<,<_\mathrm{\scriptscriptstyle{lex}})$-order isomorphism with inverse $\mathrm{o}$: \betagin{enumerate} \item For $\alpha\in{\mathbb P}$ we have \[\mathrm{o}({\mathrm{ts}}(\alpha))=\alpha.\] \item For $\alphavec^\frown\beta\in{\operatorname{T}}S$ we have \[{\mathrm{ts}}(\mathrm{o}(\alphavec^\frown\beta))=\alphavec^\frown\beta.\] \item $\mathrm{o}$ is strictly increasing with respect to the lexicographic ordering on ${\operatorname{T}}S$ and continuous in the last vector component. \end{enumerate} \end{theo} For proofs and further details see Section 3 of \cite{W17}. 
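The running example fits this picture; the following computation is an added illustration, assuming ${\mathrm{ts}}(\varepsilon_\omega)=(\varepsilon_\omega)$, which matches the value $\kappa_{\varepsilon_\omega}=\varepsilon_\omega$ stated after Lemma \ref{rhomulamestimlem}. Since $\omega\le\varepsilon_\omega$, case 2 of Definition \ref{odef} applies to $(\varepsilon_\omega,\omega)$ and, using $\mathrm{o}((\varepsilon_\omega))=\varepsilon_\omega$ from part 1 of the theorem, yields \[\mathrm{o}((\varepsilon_\omega,\omega))=_{\mathbb N}F\mathrm{o}((\varepsilon_\omega))\cdot\omega=\varepsilon_\omega\cdot\omega,\] in agreement with $\nu^{\varepsilon_\omega}_\omega=\varepsilon_\omega\cdot\omega$, so that by part 2 we obtain ${\mathrm{ts}}(\varepsilon_\omega\cdot\omega)=(\varepsilon_\omega,\omega)$.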
Note that part 1 of the above theorem is proved by induction along the definition of ${\mathrm{ts}}(\alpha)$, while part 2 is proved by induction on $\mathrm{lSeq}(\alphavecbe)$ along the ordering $(\mathrm{lSeq},<_\mathrm{\scriptscriptstyle{lex}})$. \subseteqsection{Connectivity components of \boldmath${\cal R}two$\unboldmath} We can now define the complete system of enumeration functions of (relativized) connectivity components, using the evaluation function $\mathrm{o}$. To this end we define $\kappa=\kappa^{{\scriptscriptstyle{\mathbf{()}}}}$ on all of ${\mathrm{Ord}}$, which will be conceptually justified in Remark \ref{globalkapparmk}, and define functions $\kappa^\alvec$ and $\nu^\alvec$ for $()\not=\alphavec\in\lambdaRS$ where $\lambda\in\mathrm{Lim}nod$, extending Definition 4.1 of \cite{W17} (Definition 2.4 of \cite{W18}) to $\lambdaRS$. We first define these functions on the additive principal numbers. \betagin{defi}[cf.\ 2.4 of \cite{W18}]\lambdabel{kappanuprincipals} Let $\alphavec\in\lambdaRS$ where $\lambda\in\mathrm{Lim}nod$, $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan)$ for the maximal such $m<\omega$, and set $\alphanod:=1+\upsilon_{\lambda+m}$. If $\alphavec\not=()$, we define for $\beta$ such that $\alphavec^\frown\beta\in\lambdaTS$ \[\nu^\alphavec_\beta:=\mathrm{o}(\alphavecbe).\] Now let $\beta\in{\mathbb P}$ such that $\beta\le\lambdaaln$ if that exists (i.e.\ $\lambda+m>0$ if $n=0$) and $\beta\le\upsilon_1$ otherwise. Let $\beta=_{\mathrm{\scriptscriptstyle{MNF}}}\beta_1\cdot\ldots\cdot\beta_k$ and set $\betapr:=(1/\beta_1)\cdot\beta$. We first define an auxiliary sequence $\gammavec\in\xi\operatorname{-RS}$ for some $\xi\in\mathrm{Lim}nod$, $\xi\le\lambda$.\\[2mm] {\bf Case 1:} $\beta\le\alphan$. Here we consider two subcases:\\[2mm] {\bf Subcase 1.1:} $n>0$ and there exists the maximal $i\in[0,\ldots,n-1]$ such that $\alphai<\beta$. 
Then we set \[\gammavec:=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphai).\] {\bf Subcase 1.2:} Otherwise. Then we have $\beta\le\alphanod$. Let $(\xi,l)\in\mathrm{Lim}nod\times\omega$ be $\le_\mathrm{\scriptscriptstyle{lex}}$-minimal such that $\beta\le\upsilon_{\xi+l+1}$, so that $(\xi,l)\le_\mathrm{\scriptscriptstyle{lex}}(\lambda,m)$, and set \[\gammavec:=(\upsilon_{\xi+1},\ldots,\upsilon_{\xi+l}).\] {\bf Case 2:} $\beta>\alphan$. Then $\gammavec:=\alphavec$.\\[2mm] Writing $\gammavec=(\gamma_1,\ldots,\gamma_r)$ we now define \[\kappa^\alvecbe:=\left\{\betagin{array}{ll} \mathrm{o}(\gammavec)\cdot\betapr&\mbox{if } r>0,\; \beta_1=\gamma_r,\mbox{ and }k>1\\[2mm] \mathrm{o}(\gammavec^\frown\beta)&\mbox{otherwise.} \end{array}\right.\] For arbitrary $\beta\in{\mathbb P}$ let $(\lambda,m)\in\mathrm{Lim}nod\times\omega$ be $\le_\mathrm{\scriptscriptstyle{lex}}$-minimal such that $\beta\le\upsilon_{\lambda+m+1}$ and set $\alphavec:=(\upsilon_{\lambda+1},\dots,\upsilon_{\lambda+m})$. Writing $\kappa_\beta$ instead of $\kappa^{()}_\beta$, we define \[\kappa_\beta:=\kappa^\alvecbe,\] and call $\kappa$ the {\bf global \boldmath$\kappa$-function of ${\cal R}two$\unboldmath}. \end{defi} \betagin{rmk}\lambdabel{globalkapparmk} We still need to define the $\kappa$- and $\nu$-functions for arguments that are additively decomposable. For orientation and clarification we make some statements about the global $\kappa$-function that follow from our results in Section \ref{structuresec}. Recall that an ordinal $\beta\ge\alpha$ is called $\alpha$-$\le_i$-minimal if $\gamma\le\alpha$ for any $\gamma$ such that $\gamma<_i\beta$. \betagin{itemize} \item The restriction of the global $\kappa$-function of ${\cal R}two$ to the initial segment $1^\infty=\upsilon_1$ enumerates all $\le_1$-minimal ordinals, the largest one being $\upsilon_1$.
\item For any $\lambda\in\mathrm{Lim}$ the $\upsilon_\lambda$-$\le_1$-minimal ordinals are enumerated by $\beta\mapsto\upsilon_\lambda+\kappa_\beta$ for $\beta\le\upsilon_{\lambda+1}$, and the greatest $<_1$-predecessor of $\upsilon_{\lambda+1}$ is $\upsilon_\lambda$. \item For any $\xi\in\mathrm{Lim}nod$, $m<\omega$, the $\upsilon_{\xi+m+1}$-$\le_1$-minimal ordinals are enumerated by $\beta\mapsto\upsilon_{\xi+m+1}+\kappa_\beta$ for $\beta\le\upsilon_{\xi+m+1}$. We have $\upsilon_{\xi+m+1}\cdot 2<_1\infty$. \end{itemize} Note that $\kappa$ as defined here acts as the identity on $\mathrm{Im}(\upsilon)$. The global $\kappa$-function can therefore equally be defined using Definition \ref{upssegdefi}, since for $\beta\in{\mathbb P}$ and $(\xi,m):=\upsilonseg(\beta)$, we have $\kappa_\beta=\kappa^\alphavec_\beta$, where $\alphavec:=(\upsilon_{\xi+1},\ldots,\upsilon_{\xi+m})$, as $\kappa^\alphavec_\beta=\kappa^\alphavecpr_\beta$ for $\beta=\upsilon_{\xi+m+1}$ and $\alphavecpr=\alphavec^\frown\beta$. Defining $\kappa$ on all of ${\mathrm{Ord}}$, despite the fact that $\upsilon_1<_1\infty$ and hence the largest $\le_1$-connectivity component in ${\cal R}two$ is $[\upsilon_1,\infty)$, is justified by the uniformity of $\kappa$ through all $\upsilon$-segments: it simply end-extends continuously. \end{rmk} In order to extend the above definition to non-principal indices $\beta$, we need some preparation. We introduce a term measure on terms that use finitely many parameters from $\mathrm{Im}(\upsilon)$. Suppose $\lambda\in\mathrm{Lim}nod$, $m<\omega$, and $\alpha\in{\operatorname{T}}^{\upsilon_{\lambda+m}}$. This term representation uses finitely many parameters below $\upsilon_{\lambda+m}$, each of which, in turn, can be represented in a system ${\operatorname{T}}^{\upsilon_\iota}$ for some $\iota<\lambda+m$ with parameters below $\upsilon_\iota$.
Hereditarily resolving all parameters (using transfinite recursion) results in a term representation of $\alpha$ that makes use of finitely many relativized $\varthetanod$-functions $\varthetat$ where $\tau\in(1+\upsilon_{\iota_1},\ldots,\upsilon_{\iota_l})$ for an increasing sequence of indices $0\le\iota_1,\ldots,\iota_l=\lambda+m$. A term measure can therefore be defined elementary recursively relative to such a resolved term representation for any ordinal $\alpha$. We adapt this motivation to the setting of $\lambdaRS$ as follows. For $\lambda=m=0$ we obtain a parameter-free representation for notations below $\upsilon_1$ as in \cite{W18}. We begin with a useful auxiliary notion. Note that we can in general assume that parameters of terms in ${\operatorname{T}}t$ can themselves be represented as terms of some suitable ${\operatorname{T}}s$ where $\sigma,\tau\in{\mathbb E}one$, $\sigma<\tau$. \betagin{defi} Let ${\vec{\tau}}=(\tau_1,\ldots,\tau_{n+1})$ be a strictly increasing sequence of ordinals in ${\mathbb E}one$ and $\tau:=\tau_{n+1}$. We say that $\alpha\in{\operatorname{T}}t$ is given in ${\operatorname{T}}tvec$-representation with parameter set $\operatorname{Par}^{\vec{\tau}}(\alpha)$ if either $n=0$ and $\operatorname{Par}^{\vec{\tau}}(\alpha)=\operatorname{Par}^\tau(\alpha)$ or $n>0$ and each parameter term $\beta\in\operatorname{Par}^\tau(\alpha)$ is given in ${\operatorname{T}}sivec$-representation where ${\vec{\sigma}}=(\tau_1,\ldots,\tau_n)$ and \[\operatorname{Par}^{\vec{\tau}}(\alpha)=\bigcup_{\beta\in\operatorname{Par}^\tau(\alpha)}\operatorname{Par}^{\vec{\sigma}}(\beta).\] \end{defi} For clarification, in the case $n=0$, where $\tau=\tau_1$, we trivially have ${\operatorname{T}}^{(\tau)}={\operatorname{T}}t$ and $\operatorname{Par}^{(\tau)}(\alpha)=\operatorname{Par}^\tau(\alpha)$.
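A schematic instance of the preceding definition (a sketch with a hypothetical term, not taken from the source): for ${\vec{\tau}}=(\tau_1,\tau_2)$, consider a term $\alpha\in{\operatorname{T}}^{\tau_2}$ whose only parameter $\beta\in\operatorname{Par}^{\tau_2}(\alpha)$ lies in $[\tau_1,\tau_2)$ and is itself given as a ${\operatorname{T}}^{\tau_1}$-term with parameters below $\tau_1$. Then the defining union collapses to a single parameter set:

```latex
% Sketch: two-step parameter resolution for a single parameter beta.
\[
  \operatorname{Par}^{(\tau_1,\tau_2)}(\alpha)
  \;=\; \bigcup_{\beta\in\operatorname{Par}^{\tau_2}(\alpha)}\operatorname{Par}^{(\tau_1)}(\beta)
  \;=\; \operatorname{Par}^{\tau_1}(\beta)
  \;\subseteq\; \tau_1 .
\]
```

This is the $n=1$ case of the definition, using that $\operatorname{Par}^{(\tau_1)}(\beta)=\operatorname{Par}^{\tau_1}(\beta)$ in the base case.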
For $n>0$ we have $\operatorname{Par}^{\vec{\tau}}(\alpha)\subseteqseteq\tau_1$; this set contains $\operatorname{Par}^\tau(\alpha)\cap\tau_1$ but can in general be much larger. Note that terms $\alpha$ in ${\operatorname{T}}tvec$-representation as above can have $\vartheta^{\tau_i}$-subterms for $i=1,\ldots,n+1$, but for $i,j\in\{1,\ldots,n+1\}$ with $i<j$, while a $\vartheta^{\tau_j}$-subterm can itself have $\vartheta^{\tau_i}$-subterms, this cannot happen the other way around. \betagin{defi}\lambdabel{resolvingseqdefi} Setting $\alphanod:=1+\upsilon_\lambda$ where $\lambda\in\mathrm{Lim}nod$, let $\alphavec\in\lambdaRS$ be of the form $\alphavec=(\alphae,\ldots,\alphamn)$ where $m,n<\omega$, $\alphae=\upsilon_{\lambda+1},\ldots,\alpham=\upsilon_{\lambda+m}$, and $\alphame\in(\upsilon_{\lambda+m},\upsilon_{\lambda+m+1})$ if $n>0$. The term system $\lambdaTalvec$ is obtained from ${\operatorname{T}}almn$ by successive substitution of parameters from $(\alphai,\alphaie)$ by their ${\operatorname{T}}ali$-representations, for $i=m+n-1,\ldots,0$. The parameters $\alphai$ are represented by the terms $\varthetaali(0)$, $0\le i\le m+n$. Remaining unresolved parameters are below $\alphanod$. More formally, we proceed as follows. \betagin{itemize} \item $\alpha\in{\operatorname{T}}^{\alphamn}$ is a ${\operatorname{T}}^{(\alphamn)}$-term. $\operatorname{Par}^{(\alphamn)}(\alpha):=\operatorname{Par}^{\alphamn}(\alpha)$. \item Let $\alpha$ be a ${\operatorname{T}}^{(\alpha_{m+n-i},\ldots,\alpha_{m+n})}$-term, $0\le i\le m+n-1$. Replace those parameters of $\alpha$ that are in the set $\operatorname{Par}^{(\alpha_{m+n-i},\ldots,\alpha_{m+n})}(\alpha)\cap[\alpha_{m+n-i-1},\alpha_{m+n-i})$ by ${\operatorname{T}}^{\alpha_{m+n-i-1}}$-terms.
The resulting representation of $\alpha$ is a ${\operatorname{T}}^{(\alpha_{m+n-i-1},\ldots,\alpha_{m+n})}$-term, and the set of parameters $\operatorname{Par}^{(\alpha_{m+n-i-1},\ldots,\alpha_{m+n})}(\alpha)$ is the set of remaining (and new) parameters in the new representation of $\alpha$ after replacement. All parameters are now below $\alpha_{m+n-i-1}$. \item We also call the ${\operatorname{T}}^{\alphanod^\frown\alphavec}$-representation of $\alpha$ the $\lambdaTalvec$-term representation of $\alpha$ and the corresponding parameter set $\lambda$-$\operatorname{Par}^\alphavec(\alpha)$. We have $\lambda$-$\operatorname{Par}^\alphavec(\alpha)\subseteqseteq\alphanod$. \end{itemize} For $\alpha\in{\operatorname{T}}almn$ in $\lambdaTalvec$-representation, let ${\vec{\iota}}:=(1+\upsilon_{\iota_1},\ldots,\upsilon_{\iota_l})$ be the uniquely defined, (possibly empty) finite increasing sequence below $\upsilon_\lambda$ needed to resolve all parameters of $\alpha$. This results in a ${\operatorname{T}}tvec$-representation of $\alpha$, where ${\vec{\tau}}={\vec{\iota}}^\frown\alphanod^\frown\alphavec$, that uses relativized $\varthetanod$-functions $\varthetati$ for $i=1,\ldots,l+m+n+1$. More formally, we proceed as follows. \betagin{itemize} \item Suppose that $\alpha$ is given in ${\operatorname{T}}sivec$-representation with parameter set $\operatorname{Par}^{\vec{\sigma}}(\alpha)$ still containing nonzero elements. Let $(\xi,k)$, where $\xi\in\mathrm{Lim}nod$ and $k<\omega$, be $<_\mathrm{\scriptscriptstyle{lex}}$-minimal such that $\operatorname{Par}^{\vec{\sigma}}(\alpha)\subseteqseteq\upsilon_{\xi+k+1}$, and set $\sigma:=1+\upsilon_{\xi+k}$. 
Replace all parameters in $\alpha$ that are in the interval $[\sigma,\sigmainf)$ by ${\operatorname{T}}s$-terms (using $\varthetas$-terms) to obtain $\alpha$ in ${\operatorname{T}}^{\sigma^\frown{\vec{\sigma}}}$-representation with parameter set $\operatorname{Par}^{\sigma^\frown{\vec{\sigma}}}(\alpha)$ consisting of the remaining (and new) parameters, which are now below $\sigma$. \item Starting from the $\lambdaTalvec$-representation of $\alpha$, i.e.\ starting with ${\vec{\sigma}}=\alphanod^\frown\alphavec$, after finitely many steps all nonzero parameters in $\alpha$ are resolved, and we call the resulting sequence ${\vec{\tau}}$ the {\bf resolving sequence} for $\alpha$. \end{itemize} \end{defi} \betagin{defi}[cf.\ 4.3 of \cite{W17} (2.6 of \cite{W18})]\lambdabel{Ttauvec} For $\alpha\in{\operatorname{T}}almn$ in $\lambdaTalvec$-representation, where $\lambda\in\mathrm{Lim}nod$, $\alphavec=(\alphae,\ldots,\alphamn)\in\lambdaRS$, and $\alphanod:=1+\upsilon_\lambda$, let ${\vec{\tau}}=(\tau_1,\ldots,\tau_r)$ be a sequence of strictly increasing ordinals in ${\mathbb E}one$ containing the resolving sequence for $\alpha$ as defined above. The {\bf length} ${\operatorname{l}^\tauvec}(\alpha)$ of the representation of $\alpha$ as ${\operatorname{T}}tvec$-term $\alpha$ is defined by induction on the build-up of $\alpha$ as follows. \betagin{enumerate} \item ${\operatorname{l}^\tauvec}(0):=0$, \item ${\operatorname{l}^\tauvec}(\beta):={\operatorname{l}^\tauvec}(\gamma)+{\operatorname{l}^\tauvec}(\delta)$ if $\beta=_{\mathbb N}F\gamma+\delta$, and \item ${\operatorname{l}^\tauvec}(\vartheta(\eta)):=\left\{\betagin{array}{l@{\quad}l} 1&\mbox{ if }\quad\eta=0,\\ {\operatorname{l}^\tauvec}(\eta)+4&\mbox{ if }\quad\eta>0, \end{array}\right.$\\[2mm] where $\vartheta\in\{\vartheta^{\tau_i}\mid 1\le i\le r\}\cup\{\vartheta_{i+1}\mid i\in{\mathbb N}\}$. 
\end{enumerate} \end{defi} Note that in the above definition the term $\alpha\in{\operatorname{T}}^{\alpha_{m+n}}$ in $\lambdaTalvec$-representation uniquely determines which relativized $\varthetat$-terms occur in its resolved term notation. If ${\vec{\tau}}$ is or contains the resolving sequence for $\alpha$, then for each such $\varthetat$-term, $\tau$ is an element of ${\vec{\tau}}$. For $\lambda=m=0$ the above definition is compatible with Definition 4.3 of \cite{W17} and Definition 2.6 of \cite{W18}. In this case the sequence $1^\frown\alphavec$ is resolving for $\alpha$, so the $0$-${\operatorname{T}}^\alphavec$-term representation is completely resolved and directly corresponds to the notion of ${\operatorname{T}}tvec$-representation in \cite{W17,W18}. \betagin{lem}[cf.\ Subsection 2.1 of \cite{W18}]\lambdabel{ltveclem} For $\alpha\in{\operatorname{T}}almn$ in $\lambdaTalvec$-representation, where $\lambda\in\mathrm{Lim}nod$, $\alphavec=(\alphae,\ldots,\alphamn)\in\lambdaRS$, and $\alphanod:=1+\upsilon_\lambda$, let ${\vec{\tau}}=(\tau_1,\ldots,\tau_r)$ be a sequence of strictly increasing ordinals in ${\mathbb E}one$ containing the resolving sequence for $\alpha$ as defined above. \betagin{enumerate} \item Setting $\tau:=\alphamn$, if $\alpha=\varthetat(\Delta+\eta)\in{\mathbb E}$ such that $\alpha\le\mu_\tau$, then we have \betagin{equation*}\lambdabel{iotalen} {\operatorname{l}^\tauvec}(\Delta)={\operatorname{l}^\tauvec}al(\iota_{\tau,\alpha}(\Delta))<{\operatorname{l}^\tauvec}(\alpha). \end{equation*} \item If $\alpha\in{\operatorname{T}}tvec\cap{\mathbb P}^{>1}\cap\Omega_1$, let $\tau\in\{\tau_0,\ldots,\tau_r\}$ (where $\tau_0:=1$) be maximal such that $\tau<\alpha$. Then we have \betagin{equation*}\lambdabel{barlen} {\operatorname{l}^\tauvec}(\alphabar) < {\operatorname{l}^\tauvec}(\alpha), \end{equation*} and \betagin{equation*}\lambdabel{zelen} {\operatorname{l}^\tauvec}({\ze^\tau}atal) < {\operatorname{l}^\tauvec}(\alpha).
\end{equation*} In case of $\alpha\not\in{\mathbb E}$ we have \betagin{equation*}\lambdabel{loglen} {\operatorname{l}^\tauvec}(\log(\alpha)), {\operatorname{l}^\tauvec}(\log((1/\tau)\cdot\alpha)) < {\operatorname{l}^\tauvec}(\alpha), \end{equation*} and for $\alpha\in{\mathbb E}$ we have \betagin{equation*}\lambdabel{lalen} {\operatorname{l}^\tauvec}al(\lambdatal) < {\operatorname{l}^\tauvec}(\alpha). \end{equation*} \end{enumerate} \end{lem} {\bf Proof.} The inequalities follow directly from the term representations for the ordinal operations applied; see Lemma \ref{logendcharlem} and Definitions \ref{barop}, \ref{zetaldefi}, and \ref{lataldefi}. \mbox{ } $\Box$ Preparations are now complete to extend Definition \ref{kappanuprincipals} by the following two definitions. In order to do so, we need to define the function ${\mathrm{dp}_\alvec}$, simultaneously with defining $\kappa^\alvec$. The \emph{depth} function $\mathrm{dp}$ was first introduced in \cite{C99} for the analysis of ${\cal R}one$. $\mathrm{dp}$ satisfies the equation $\kappa_{\alpha+1}=\kappa_\alpha+\mathrm{dp}(\alpha)+1$ for $\alpha<\upsilon_1$, i.e.\ ${\operatorname{lh}}(\kappa_\alpha)=\kappa_\alpha+\mathrm{dp}(\alpha)$, where we let $\mathrm{dp}$ operate on indices. Clause 5 of the following definition applies when recovering and generalizing the recursion formula for ${\cal R}one$ in the context of ${\cal R}two$. However, our extension of $\mathrm{dp}$ to ${\mathrm{dp}_\alvec}$ for all of ${\cal R}two$ no longer characterizes the $\le_1$-reach of an ordinal.
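To make the recursion formula $\kappa_{\alpha+1}=\kappa_\alpha+\mathrm{dp}(\alpha)+1$ concrete, here is a short derivation (a sketch; it combines the formula with the clauses $\mathrm{dp}(1)=0$ and $\mathrm{dp}(\gamma+\delta)=\mathrm{dp}(\delta)$ for sums in normal form, which are stated in the definition of $\mathrm{dp}$ below): beyond the first successor step, each further successor index contributes exactly $1$, since the depth of a successor is $0$:

```latex
% Sketch: iterating the recursion formula for kappa on successor indices.
% Since dp(alpha+1) = dp(1) = 0, the second successor step adds exactly 1:
\[
  \kappa_{\alpha+2}
  \;=\; \kappa_{\alpha+1} + \mathrm{dp}(\alpha+1) + 1
  \;=\; \kappa_{\alpha+1} + 1
  \;=\; \kappa_{\alpha} + \mathrm{dp}(\alpha) + 2 .
\]
```

So $\mathrm{dp}(\alpha)$ measures the gap between the components indexed by $\alpha$ and $\alpha+1$; consecutive successor indices afterwards address immediately adjacent components.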
In case that the relativized $\le_1$-component that ${\mathrm{dp}_\alvec}$ is applied to falls back onto a surrounding main line (recall the informal outline given before Definition \ref{indicatorchi}), ${\mathrm{dp}_\alvec}$ leads us only to the point where that happens, as required for understanding the internal structure of local components, cf.\ clause 2 of Definition \ref{nuvaldefi}, rather than leading us to the $\le_1$-reach of the component it is applied to. For an explanation in different words and some greater detail, see \cite{CWc} before Definition 4.4. \betagin{defi}[cf.\ 4.4 of \cite{CWc}]\lambdabel{kappadpdefi}\index{$\kappa^\alvec$}\index{${\mathrm{dp}_\alvec}$} Let $\alphavec\in\lambdaRS$ where $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alpha_1,\ldots,\alpha_n)$ and $\alphanod:=1+\upsilon_{\lambda+m}$, where $\alphae\in(\upsilon_{\lambda+m},\upsilon_{\lambda+m+1})$ if $n>0$. We define global functions $\kappa,\mathrm{dp}:{\mathrm{Ord}}\to{\mathrm{Ord}}$, omitting superscripts $()$ for ease of notation, as well as, for $\alphavec\not=()$, local functions $\kappa^\alvec,{\mathrm{dp}_\alvec}$ where $\mathrm{dom}kval=[0,\lambdaaln]$ and $\mathrm{dom}({\mathrm{dp}_\alvec})=\mathrm{dom}kval$ if $n>0$ while $\mathrm{dom}({\mathrm{dp}_\alvec})=\upsilon_{\lambda+m+1}$ if $n=0$, simultaneously by recursion on ${\operatorname{l}^\tauvec}(\beta)$, where ${\vec{\tau}}$ is the resolving sequence for $\beta$ in $\lambdaTalvec$-representation, extending Definition \ref{kappanuprincipals}. The clauses extending the definition of $\kappa^\alvec$ are as follows. \betagin{enumerate} \item $\kappa^\alvec_0:=0$, \item\lambdabel{kappapl} $\kappa^\alvecbe:=\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma)+\kappa^\alvecde$ for $\beta=_{\mathbb N}F\gamma+\delta$. \end{enumerate} \noindent ${\mathrm{dp}_\alvec}$ is defined as follows, using $\nu$ as already defined on $\lambdaTS$. 
\betagin{enumerate} \item $\mathrm{dp}(\upsilon_\xi):= 0$ for all $\xi\in{\mathrm{Ord}}$. \item ${\mathrm{dp}_\alvec}(0):=0$, ${\mathrm{dp}_\alvec}(1):=0$, and ${\mathrm{dp}_\alvec}(\alphan):=0$ in case of $\alphavec\not=()$, \item ${\mathrm{dp}_\alvec}(\beta):={\mathrm{dp}_\alvec}(\delta)$ if $\beta=_{\mathbb N}F\gamma+\delta$, \item\lambdabel{dpred} ${\mathrm{dp}_\alvec}(\beta):=\mathrm{dp}_{\alphavecpr}(\beta)$ if $\alphavec\not=()$ for $\beta\in{\mathbb P}\cap(1,\alphan)$ where $\alphavec={\alphavecpr}^\frown\alphan$, \item for $\beta\in{\mathbb P}^{>\alphan}\setminus{\mathbb E}$ let $\gamma:=(1/\alphan)\cdot\beta$ and $\log(\gamma)=_{\mathrm{\scriptscriptstyle{ANF}}}\gamma_1+\ldots+\gamma_m$ and set \[{\mathrm{dp}_\alvec}(\beta):=\kappa^\alvec_{\gamma_1}+{\mathrm{dp}_\alvec}(\gamma_1)+\ldots+\kappa^\alvec_{\gamma_m}+{\mathrm{dp}_\alvec}(\gamma_m),\] \item\lambdabel{dpeps} for $\beta\in{\mathbb E}^{>\alphan}$ let $\gammavec:=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan,\beta)$, and define \[{\mathrm{dp}_\alvec}(\beta):=\nu^\gavec_{\mu^\alphan_\beta}+\kappa^\gavec_{\lambdaalnbe}+{\mathrm{dp}_\gavec}(\lambdaalnbe).\] \end{enumerate} \end{defi} \betagin{defi}[cf.\ 4.4 of \cite{CWc}]\index{$\nu^\alvec$}\lambdabel{nuvaldefi} Let $\alphavec\in\lambdaRS$ be of the form $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alpha_1,\ldots,\alpha_n)\not=()$ and set $\alphanod:=1+\upsilon_{\lambda+m}$.
We define the local function $\nu^\alvec$ on $[0,\mu_\aln]$, extending Definition \ref{kappanuprincipals} and setting $\alpha:=\mathrm{o}(\alphavec)$, by \betagin{enumerate} \item $\nu^\alvec_0:=\alpha$, \item\lambdabel{nuple} $\nu^\alvec_{\beta}:=\nu^\alvec_\gamma+\kappa^\alvec_{{\varrho^\aln_\ga}}+{\mathrm{dp}_\alvec}({\varrho^\aln_\ga})+\chi^\alncheck(\gamma)\cdot\alpha$ if $\beta=\gamma+1$, \item $\nu^\alvec_\beta:=\nu^\alvec_\gamma+\kappa^\alvec_{{\varrho^\aln_\ga}}+{\mathrm{dp}_\alvec}({\varrho^\aln_\ga})+\nu^\alvec_{\delta}$ if $\beta=_{\mathbb N}F\gamma+\delta\in\mathrm{Lim}$. \end{enumerate} \end{defi} \betagin{rmk} In Corollary \ref{nuimagecor} we will see that the image of $\nu^\alphavec$ indeed consists of multiples of $\alpha$ and that infinite additive principal numbers in its domain are mapped to additive principal numbers greater than $\alpha$. It is not obvious, but it is a crucial property of the indicator function $\chi$ that clause 2 of the above definition yields a multiple of $\alpha$ also when $\chi^\alphan(\gamma)=1$. \end{rmk} It is easy to see that the properties of $\kappa$-, $\mathrm{dp}$-, and $\nu$-functions established in Section 4 of both \cite{CWc} and \cite{W17} extend as expected to the functions defined above. We essentially only need estimates of components in the interior of $\upsilon$-segments. Of particular importance are monotonicity and continuity of $\kappa$- and $\nu$-functions (Corollary \ref{kappanuhzcor}) and the verification of agreement on the common domain with the definitions given in \cite{CWc} (Theorem \ref{agreementthm}). \betagin{lem}[cf.\ 4.6 of \cite{W17}]\lambdabel{kdpmainlem} Let $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan)\in\lambdaRS$ for the maximal such $m<\omega$, and set $\alphanod:=1+\upsilon_{\lambda+m}$. \betagin{enumerate} \item Let $\gamma\in\mathrm{dom}kval\cap{\mathbb P}$, $\gamma\not\in\mathrm{Im}(\upsilon)$.
If $\gamma=_{\mathrm{\scriptscriptstyle{MNF}}}\gamma_1\cdot\ldots\cdot\gamma_k\ge\alphan$, setting $\gammapr:=(1/\gamma_1)\cdot\gamma$, we have \[(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega=\left\{\betagin{array}{ll} \mathrm{o}(\alphavec)\cdot\gammapr\cdot\omega&\mbox{if } \gamma_1=\alphan\\[2mm] \mathrm{o}(\alphavec^\frown\gamma\cdot\omega)&\mbox{otherwise.} \end{array}\right.\] If $\gamma<\alphan$ we have $(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega<\mathrm{o}(\alphavec)$. \item For $\gamma\in\mathrm{dom}kval-({\mathbb E}\cup\{0\})$ we have \[{\mathrm{dp}_\alvec}(\gamma)<\kappa^\alvecga.\] \item For $\gamma\in\mathrm{dom}({\mathrm{dp}_\alvec})\cap{\mathbb E}^{>\alphan}$ such that $\mu_\ga<\gamma$ we have \[{\mathrm{dp}_\alvec}(\gamma)<\mathrm{o}(\alphavec^\frown\gamma)\cdot\mu_\ga\cdot\omega.\] \item For $\gamma\in\mathrm{dom}kval\cap{\mathbb E}^{>\alphan}$, $\gamma\not\in\mathrm{Im}(\upsilon)$, we have \[\kappa^\alvecga\cdot\omega\le{\mathrm{dp}_\alvec}(\gamma) \quad\mbox{ and }\quad {\mathrm{dp}_\alvec}(\gamma)\cdot\omega=_{\mathbb N}F\mathrm{o}(\operatorname{h}_\om(\alphavecga))\cdot\omega.\] \item Let $\gamma\in\mathrm{dom}nuval\cap{\mathbb P}$, $\gamma=_{\mathrm{\scriptscriptstyle{MNF}}}\gamma_1\cdot\ldots\cdot\gamma_k\not\in\mathrm{Im}(\upsilon)$. We have \[(\nu^\alvecga+\kappa^\alphavec_{\varrho^\alphan_\gamma}+{\mathrm{dp}_\alvec}(\varrho^\alphan_\gamma))\cdot\omega=\left\{\betagin{array}{ll} \mathrm{o}(\alphavec)\cdot\gamma\cdot\omega&\mbox{if } \gamma_1\le\alphan\\[2mm] \mathrm{o}(\operatorname{h}_\om(\alphavec^\frown\gamma))\cdot\omega&\mbox{if } \gamma\in{\mathbb E}^{>\alphan}\\[2mm] \mathrm{o}(\alphavec^\frown\gamma)\cdot\omega&\mbox{otherwise.} \end{array}\right.\] \end{enumerate} \end{lem} {\bf Proof.} The lemma is shown by simultaneous induction on ${\operatorname{l}^\tauvec}(\gamma)$, where ${\vec{\tau}}$ is the resolving sequence for $\gamma$, over all parts. For a detailed proof see \cite{W17}. 
\mbox{ } $\Box$ \betagin{cor}[cf.\ 4.7 of \cite{W17}]\lambdabel{kappanuhzcor} Let $\alphavec\in{\cal R}S$. We have \betagin{enumerate} \item $\kappa^\alvec_{\gamma\cdot\omega}=(\kappa^\alvecga+{\mathrm{dp}_\alvec}(\gamma))\cdot\omega$ for $\gamma\in{\mathbb P}$ such that $\gamma\cdot\omega\in\mathrm{dom}kval$. \item $\nu^\alvec_{\gamma\cdot\omega}=(\nu^\alvecga+\kappa^\alphavec_{\varrho^\alphan_\gamma}+{\mathrm{dp}_\alvec}(\varrho^\alphan_\gamma))\cdot\omega$ for $\gamma\in{\mathbb P}$ such that $\gamma\cdot\omega\in\mathrm{dom}nuval$. \end{enumerate} $\kappa^\alvec$ and for $\alphavec\not=()$ also $\nu^\alvec$ are strictly monotonically increasing and continuous. \end{cor} \betagin{theo}[cf.\ 4.8 of \cite{W17}]\lambdabel{agreementthm} Let $\alphavec=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})^\frown(\alphae,\ldots,\alphan)\in\lambdaRS$ for the maximal such $m<\omega$, and set $\alphanod:=1+\upsilon_{\lambda+m}$. For $\beta\in{\mathbb P}$ let $\delta:=(1/\betabar)\cdot\beta$, so that $\beta=_{\mathbb N}F\betabar\cdot\delta$ if $\beta\not\in{\mathbb M}$. \betagin{enumerate} \item For all $\beta\in\mathrm{dom}kval\cap{\mathbb P}^{>\alphan}$, $\beta\not\in\mathrm{Im}(\upsilon)$, we have \[\kappa^\alvecbe=\kappa^\alvec_{\betabar+1}\cdot\delta.\] \item If $\alphavec\not=()$, then for all $\beta\in\mathrm{dom}nuval\cap{\mathbb P}^{>\alphan}$, $\beta\not\in\mathrm{Im}(\upsilon)$, we have \[\nu^\alvecbe=\nu^\alvec_{\betabar+1}\cdot\delta.\] \end{enumerate} The definitions of $\kappappa,\nu$, and $\mathrm{dp}$ extend the definitions given in \cite{CWc} and \cite{W17}. \end{theo} For the following estimates recall Definition \ref{alhatdefi}. Note that these estimates confirm that our relativized systems ${\operatorname{T}}t$ just suffice to contain relative connectivity components that occur between elements of $\mathrm{Im}(\upsilon)$. \betagin{lem}[cf.\ 4.9 of \cite{W17}]\lambdabel{hatmainlem} Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$, $n>0$. 
\betagin{enumerate} \item For all $\beta$ such that $\alphavecbe\in{\operatorname{T}}S$ and $\beta\not\in\mathrm{Im}(\upsilon)$ we have \[\mathrm{o}(\alphavecbe)<\mathrm{o}(\alphavec)\cdot\widehat{\alphan}.\] \item For all $\gamma$ such that $\alphavecga\in{\cal R}S$ and $\gamma\not\in\mathrm{Im}(\upsilon)$ we have \[\mathrm{o}(\operatorname{h}_\om(\alphavecga))<\mathrm{o}(\alphavecga)\cdot\gammahat.\] \end{enumerate} \end{lem} \betagin{cor}[cf.\ 4.10 of \cite{W17}]\lambdabel{kdpnuestimcor} For all $\alphavecga\in{\cal R}S$ such that $\gamma\not\in\mathrm{Im}(\upsilon)$ the ordinal $\mathrm{o}(\alphavecga)\cdot\gammahat$ is a strict upper bound of \[\mathrm{Im}(\kappa^{\alphavecga}),\: \mathrm{Im}(\nu^{\alphavecga}),\: {\mathrm{dp}_\alvec}(\gamma),\: \mbox{and } \nu^{\alphavecga}_{\mu_\gamma}+\kappa^{\alphavecga}_{\lambda_\gamma}+\mathrm{dp}_{\alphavecga}(\lambda_\gamma).\] \end{cor} {\bf Proof.} This directly follows from Lemmas \ref{kdpmainlem} and \ref{hatmainlem}. \mbox{ } $\Box$ In order to formulate the assignment of tracking chains to ordinals in Subsection \ref{tcassignmentsubsec} we need to introduce a suitable notion of tracking sequence relative to a given context, as we did in \cite{CWc}. We first introduce an evaluation function for relativized tracking sequences. \betagin{defi}[cf.\ 4.13 of \cite{CWc}] Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$ where $n>0$.
We define \[{\operatorname{T}}Salvec:=\set{\gammavec\in{\operatorname{T}}Saln}{\gamma_1\le\lambdaalnminaln}\]\index{$\tst$@${\operatorname{T}}St$!${\operatorname{T}}Salvec$} and for $\gammavec^\frown\beta\in{\operatorname{T}}Salvec$ \[\mathrm{o}^\alvec\left(\gammavec^\frown\beta\right):=\left\{\betagin{array}{l@{\quad}l} \kappa^\alvec_{\beta} & \mbox{if } \gammavec=()\\[2mm] \nu^{\alphavec^\frown\gammavec}_\beta & \mbox{otherwise.} \end{array}\right.\]\index{$\mathrm{o}$!$\mathrm{o}^\alvec$} For convenience we identify ${\operatorname{T}}S^{()}$ with ${\operatorname{T}}S$ and $\mathrm{o}^{()}$ with $\mathrm{o}$. \end{defi} {\bf Remark.} Note that this is well-defined thanks to part c) of Lemma \ref{rhomulamestimlem}. Notice also that ${\operatorname{T}}Salvec$ is a $<_\mathrm{\scriptscriptstyle{lex}}$-initial segment of ${\operatorname{T}}Saln$ and that in the case $\alphan\in\mathrm{Im}(\upsilon)$ the sets ${\operatorname{T}}Salvec$ and ${\operatorname{T}}Saln$ coincide, since $\gammavec\in{\operatorname{T}}Saln$ implies that $\gamma_1<\alpha_n^\infty$. If $\alphan\in\mathrm{Im}(\upsilon)$, the evaluation functions $\mathrm{o}^\alvec$ and $\mathrm{o}$ agree. We have the following \betagin{lem}[cf.\ 4.14 of \cite{CWc}]\lambdabel{lamveclem} Let $\alphavec=(\alphae,\ldots,\alphan)\in{\cal R}S$ where $n>0$ and $\alphan\not\in\mathrm{Im}(\upsilon)$. Let $\lambda_1:={\operatorname{mc}}(\lambda_\alphan)$, and whenever $\lambda_i$ is defined and $\lambda_i\in{\mathbb E}^{>\lambda_{i-1}}$ (setting $\lambda_0:=\alphan$), let $\lambda_{i+1}:=\mu_{\lambda_i}$. If we denote the resulting vector by $(\lambda_1,\ldots,\lambda_k)=:\vec{\lambda}$ then ${\operatorname{T}}Salvec$ is the initial segment of ${\operatorname{T}}Saln$ with $<_\mathrm{\scriptscriptstyle{lex}}$-maximum $\vec{\lambda}$. 
We have \[\mathrm{o}^\alvec(\vec{\lambda})={\operatorname{mc}}(\kappa^\alphavec_{\lambda_\alphan}+\mathrm{dp}_\alphavec(\lambda_\alphan)).\] \end{lem} {\bf Proof.} The proof is by evaluation of ${\operatorname{mc}}(\kappa^\alphavec_{\lambda_\alphan}+\mathrm{dp}_\alphavec(\lambda_\alphan))$ using Lemmas \ref{kdpmainlem}, \ref{hatmainlem}, and Corollary \ref{kdpnuestimcor}. \mbox{ } $\Box$ The analogue to Lemma \ref{trsvallem} is as follows. Notice that we have to be careful regarding multiples of indices versus their evaluations. \betagin{lem}[cf.\ 4.15 of \cite{CWc}]\lambdabel{reltrsvallem} Let $\alphavec$ and $\gammavec^\frown\beta\in{\operatorname{T}}Salvec$ be as in the above definition and set $\alpha:=\mathrm{o}(\alphavec)$. \betagin{enumerate} \item We have \[{\mathrm{ts}}aln(\alphan\cdot((1/\alpha)\cdot\mathrm{o}^\alvec(\gammavec^\frown\beta)))=\gammavec^\frown\beta.\] \item For $\delta\in{\mathbb P}\cap[\alphan,\alpha_n^\infty)$ such that ${\mathrm{ts}}aln(\delta)\in{\operatorname{T}}Salvec$ we have \[\mathrm{o}^\alvec({\mathrm{ts}}aln(\delta))=\alpha\cdot((1/\alphan)\cdot\delta).\] \item If $\alphan\not\in\mathrm{Im}(\upsilon)$, setting $\lambda:=\alphan\cdot((1/\alpha)\cdot{\operatorname{mc}}(\kappa^\alphavec_{\lambda_\alphan}+\mathrm{dp}_\alphavec(\lambda_\alphan)))$ we have \[{\mathrm{ts}}aln(\lambda)=\vec{\lambda}\in{\operatorname{T}}Salvec\] for $\vec{\lambda}$ as defined in Lemma \ref{lamveclem}, and the mapping ${\mathrm{ts}}aln$ is a $<$-$<_\mathrm{\scriptscriptstyle{lex}}$-order isomorphism of \[\set{\delta\in{\mathbb P}\cap[\alphan,\alpha_n^\infty)}{{\mathrm{ts}}aln(\delta)\in{\operatorname{T}}Salvec}=[\alphan,\lambda]\cap{\mathbb P}\] with ${\operatorname{T}}Salvec$. \item If $\alphan\in\mathrm{Im}(\upsilon)$, the mapping ${\mathrm{ts}}aln$ is a $<$-$<_\mathrm{\scriptscriptstyle{lex}}$-order isomorphism of ${\mathbb P}\cap[\alphan,\alpha_n^\infty)$ with ${\operatorname{T}}Salvec$. 
\end{enumerate} \end{lem} {\bf Proof.} Once the first claim of the lemma is shown by induction along $<_\mathrm{\scriptscriptstyle{lex}}$ on ${\operatorname{T}}Salvec$, the remaining claims follow using Lemmas \ref{citedinjtrslem} and \ref{lamveclem}. In proving the first claim for $\gammavec^\frown\beta\in{\operatorname{T}}Salvec$, say $\gammavec=(\gamma_1,\ldots,\gamma_m)$ where $m\ge 0$, we proceed in analogy with the course of proof of Lemma \ref{trsvallem} (Theorem 3.20 of \cite{W17}), replacing $\alphavec$ with $\alphavec^\frown\gammavec$, $\alphan$ with $\gamma_m$ (setting $\gamma_0:=\alphan$), and $\alpha$ with $\gamma:=\alphan\cdot((1/\alpha)\cdot\mathrm{o}(\gammavec))$ in the case $m>0$, where by the i.h.\ we have ${\mathrm{ts}}aln(\gamma)=\gammavec$. \mbox{ } $\Box$ \subseteqsection{Extending the concept of tracking chains} Every additive principal number $\alpha$ in ${\cal R}two$ can be described uniquely by a tracking sequence in the way we explained at the beginning of this Section, cf.\ (\ref{principalchain}). Tracking chains, first introduced in Section 5 of \cite{CWc}, are sequences of tracking sequences in the sense that each of these sequences starts with a $\kappa$-index and then possibly continues with $\nu$-indices. However, these $\kappa$- and $\nu$-indices do not need to be (infinite) additive principal numbers as is the case with tracking sequences. From the second sequence on, the $\kappa$-indices may have to be relativized to the local connectivity component specified by the preceding sequences. $\kappa$-indices specify the (relativized local) $\le_1$-component, while $\nu$-indices of suitably relativized $\nu$-functions describe the selection of the nested $\le_2$-components. Extending the \emph{address space} from the set of tracking sequences to the set of tracking chains allows us to describe all ordinals uniquely in terms of indices of nested $\le_i$-components.
Characterizing this general address space for ordinals in terms of conditions on the indices of nested $\le_i$-components is unfortunately more complicated than expected. Before providing the formal definitions determining the class ${\operatorname{T}}C$ of tracking chains for ${\cal R}two$, we mention two simple examples of tracking chains. First, as mentioned above, the tracking chain of an additive principal number $\alpha$ is $(\alphavec)$, where $\alphavec:={\mathrm{ts}}(\alpha)$ is the tracking sequence of $\alpha$. Second, an ordinal $\beta=\kappa_{\beta_1}+\ldots+\kappa_{\beta_n}$ for suitable indices $\beta_1>\ldots>\beta_n$ has the tracking chain $((\beta_1),\ldots,(\beta_n))$. For $\beta$ below $\varepsilonn$ the conditions on the $\beta_i$ are simply $\beta_1<\varepsilonn$ and $0<\beta_{i+1}\le{\mathrm{logend}}(\beta_i)$ for $i=1,\ldots,n-1$, as shown in \cite{C99}. Strictly speaking, ordinals greater than $\upsilon_1$ drop out of the usual $2$-dimensional format of tracking chains for ordinals below $1^\infty=\upsilon_1$ as introduced in \cite{CWc}. While a suitable address space for ${\cal R}om$ would require a bookkeeping formalism handling $\omega\times\omega$-matrices of ordinals, all but finitely many entries of which are left blank, and in which addresses for ordinals below $\upsilon_1$ would be represented in an equivalent but different way, as long as we stay in ${\cal R}two$, we can extend the address space ${\operatorname{T}}C$ of tracking chains without too many complications. As we will see, ordinals $\upsilon_\lambda$ where $\lambda\in\mathrm{Lim}$ have $\upsilon_{\lambda+1}$-many $\upsilon_\lambda$-$\le_1$-minimal successors. We will denote the ordinal $\upsilon_\lambda$ by the chain $((\upsilon_\lambda))$, its least $<_2$-successor $\upsilon_\lambda\cdot 2$ by $((\upsilon_\lambda\cdot 2))$, and the largest $\upsilon_\lambda$-$\le_1$-minimal ordinal $\upsilon_{\lambda+1}$ by $((\upsilon_{\lambda+1}))$.
The least $<_2$-successor of $\upsilon_2$ above $\upsilon_\lambda$ is denoted by $((\upsilon_\lambda+\upsilon_2))$. We will denote the ordinal $\upsilon_{\omega^2+2}$, which we considered in the introduction in the context of ${\cal R}three$, simply by $((\upsilon_{\omega^2+1},\upsilon_{\omega^2+2}))$. Note that we lose a nice property of the original tracking chains: namely, that tracking chains of all $<_2$-predecessors of an ordinal occur as initial chains of its tracking chain. However, in the presence of infinite $<_2$-chains this property cannot be kept anyway. We are now going to extend Definition 5.1 of \cite{CWc} to a system of tracking chains for all of ${\cal R}two$. The first sequence of a generalized tracking chain $\alphavec$ will determine the $\upsilon$-segment in which the ordinal with address $\alphavec$ is located. This will be called $\upsilonseg(\alphavec)$, the $\upsilon$-segment of $\alphavec$. For better accessibility we are going to split up the definition of the class ${\operatorname{T}}C$ of tracking chains for all of ${\cal R}two$ into several steps, as compared to the original definition in \cite{CWc}. We begin with templates for tracking chains, which we call \emph{index chains}, and some useful general terminology. \begin{defi}[Index chains, their domains, associated and initial chains]\label{indexchaindefi} \mbox{ } \begin{enumerate} \item An {\bf index chain} is a sequence $\alphavec=(\alphaevec,\ldots,\alphanvec)$, $n\in(0,\omega)$, of ordinal vectors $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ with $m_i\in(0,\omega)$ for $1\le i\le n$.
\item We define $\mathrm{dom}(\alphavec)$\index{$\mathrm{dom}(\alphavec)$} to be the set of all index pairs of $\alphavec$, that is \[\mathrm{dom}(\alphavec):=\set{(i,j)}{1\le i\le n\:\&\:1\le j \le m_i}.\] \item The vector ${\vec{\tau}}=\left(\vec{\tau}_1,\ldots,\vec{\tau}_n\right)$ defined by $\taucp{i,j}:={\mathrm{end}}(\alphacp{i,j})$ for $(i,j)\in\mathrm{dom}(\alphavec)$ (that is, $\taucp{i,j}$ is the least additive component of $\alphacp{i,j}$) is called the {\bf\boldmath chain associated with $\alphavec$\unboldmath}.\index{chain associated with an index chain} \item The {\bf initial chains $\alphavecrestrarg{i,j}$ of $\alphavec$}\index{initial chain}\index{$\alphavecrestrarg{i,j}$}, where $(i,j)\in\mathrm{dom}(\alphavec)$, are \[\alphavecrestrarg{i,j}:=((\alphacp{1,1},\ldots,\alphacp{1,m_1}),\ldots,(\alphacp{i-1,1},\ldots,\alphacp{i-1,m_{i-1}}),(\alphacp{i,1},\ldots,\alphacp{i,j})).\] By $\alphavecrestrarg{i}$\index{$\alphavecrestri$!$\alphavecrestrarg{i}$ for tracking chains} we abbreviate $\alphavecrestrarg{i,m_i}$. For convenience we set $\alphavecrestrarg{i,0}:=()$ for $i=0,1$ and $\alphavecrestrarg{i+1,0}:=\alphavecrestrarg{i,m_i-1}$ for $1\le i<n$. Initial chains of ${\vec{\tau}}$ are defined in the same way. \end{enumerate} \end{defi} Next we impose conditions familiar from tracking sequences, cf.\ Definitions \ref{TSdefi}, \ref{RSdefi}, and \ref{laRSlaTSdefi}, in order to introduce a notion of regularity with respect to $\nu$-indices. \begin{defi}\label{nuregdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an index chain with associated chain ${\vec{\tau}}$. We call $\alphavec$ a {\bf \boldmath$\nu$-regular index chain\unboldmath} if for every $i\in\{1,\ldots,n\}$ such that $m_i>1$ \[\taucp{i,1}<\ldots<\taucp{i,m_i-1}\] where $\taucp{i,j}\in{\mathbb E}$ and \[\alphacp{i,j+1}\le\mu_\taucp{i,j}\] for $j=1,\ldots,m_i-1$.
\end{defi} For $\nu$-regular index chains the following definition is meaningful; it allows us to introduce a notion of \emph{$\upsilon$-segmentation} of such index chains. \begin{defi}[cf.\ Definition \ref{upssegdefi}] For a $\nu$-regular index chain $\alphavec=(\alphaevec,\ldots,\alphanvec)$ let $\lambda\in\mathrm{Lim}\cup\{0\}$ be maximal such that $\upsilon_\lambda\le\alphacp{1,1}$ and $t<\omega$ be maximal such that $\alphavec_1$ is of the form \[\alphavec_1=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+t},\gamma_1,\ldots,\gamma_l),\] hence $\gamma_1<\upsilon_{\lambda+t+1}$ if $l>0$. Then $(\lambda,t)$ indicates the {\bf \boldmath$\upsilon$-segment\unboldmath} of $\alphavec$, $\upsilonseg(\alphavec):=(\lambda,t)$. \end{defi} \begin{defi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be a $\nu$-regular index chain with associated chain ${\vec{\tau}}$ and set $(\lambda,t):=\upsilonseg(\alphavec)$. We say that $\alphavec$ is {\bf \boldmath$\upsilon$-segmented\unboldmath} if the following two conditions hold. \begin{enumerate} \item $\alphacp{1,1}\in[\upsilon_\lambda,\upsilon_{\lambda+1}]$, where \[\alphacp{1,1}=\upsilon_\lambda\mbox{ implies }n=m_n=1,\] and unless $\alphavec=((0))$, all indices $\alphacp{i,j}$ are nonzero. \item The sequences \[(\upsilonseg(\alphacp{i,1}))_{2\le i\le n}\mbox{ and }(\upsilonseg(\taucp{i,1}))_{2\le i\le n}\] are $\le_\mathrm{\scriptscriptstyle{lex}}$-weakly decreasing with upper bound $(\lambda,t)$. \end{enumerate} \end{defi} \begin{defi}\label{segmentationdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an $\upsilon$-segmented $\nu$-regular index chain with associated chain ${\vec{\tau}}$ and set $(\lambda,t):=\upsilonseg(\alphavec)$.
Let $s_1\in\{1,\ldots,n\}$ be minimal such that \[\taucp{s_1,1}<\upsilon_\lambda\] if such $s_1$ exists, in which case we further let $s_1<\ldots<s_p\le n$ indicate all indices $i\ge s_1$ where $\upsilonseg(\taucp{i,1})$ strictly decreases, hence \[\upsilonseg(\taucp{s_j-1,1})>_\mathrm{\scriptscriptstyle{lex}}\upsilonseg(\taucp{s_j,1})\] for $j=2,\ldots,p$; otherwise we set $p:=0$ and $s_0:=1$. Let \[(\lambda_j,t_j):=\upsilonseg(\taucp{s_j,1})\] for $j=1,\ldots,p$ indicate the corresponding $\upsilon$-segments and set for convenience $(\lambda_0,t_0):=(\lambda,t)$. We call \begin{enumerate} \item $p$ the {\bf segmentation depth}, \item $(s_1,\ldots,s_p)$ the {\bf segmentation signature}, and \item $(\lambda_i,t_i)_{0\le i\le p}$ the {\bf sequence of \boldmath$\upsilon$-segments\unboldmath} \end{enumerate} of $\alphavec$. \end{defi} Note that it is entirely possible to have $p>0$ and $s_1=1$, a situation that occurs when $\alphacp{1,1}>\upsilon_\lambda$ while $\taucp{1,1}<\upsilon_\lambda$ where $\lambda\in\mathrm{Lim}$. The notion of \emph{unit} defined next allows us to trace back the greatest $\kappa^\tauwo$-predecessor of an ordinal whose tracking chain is an initial chain $\alphavecrestrarg{i,1}$, $1\le i\le n$, of $\alphavec$. It determines the setting of relativization (reference sequence) of the relativized $\kappa$-function applicable in the $\le_2$-component rooted in that greatest $\kappa^\tauwo$-predecessor of the ordinal with tracking chain $\alphavecrestrarg{i,1}$. \begin{defi}[Units]\label{unitsdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an $\upsilon$-segmented $\nu$-regular index chain with associated chain ${\vec{\tau}}$, segmentation depth $p$, segmentation signature $(s_1,\ldots,s_p)$, and sequence of $\upsilon$-segments $(\lambda_i,t_i)_{0\le i\le p}$.
The {\bf\boldmath $i$-th unit ${\tau_i^\star}$\unboldmath}\index{unit}\index{${\tau_i^\star}$} of $\alphavec$ and its {\bf index pair \boldmath$i^\star$\unboldmath}\index{index pair}\index{$*$@$i^\star$} for $1\le i\le n$ are defined as follows. \[i^\star\mbox{, }{\tau_i^\star}:=\left\{\begin{array}{ll} (l,j)\mbox{, }\;\;\taucp{l,j} & \mbox{ if $<_\mathrm{\scriptscriptstyle{lex}}$-max.\ } (l,j)\in\mathrm{dom}(\alphavec) \mbox{ exists s.t.\ }(l,j)<_\mathrm{\scriptscriptstyle{lex}}(i,1)\mbox{, }j<m_l\mbox{ and }\taucp{l,j}\le\taucp{i,1}\\[2mm] (s_j,0)\mbox{, }1+\upsilon_{\lambda_j+t_j} & \mbox{ if otherwise max.\ }j\in\{1,\ldots,p\}\mbox{ exists s.t.\ }s_j\le i\\[2mm] (1,0)\mbox{, }\;1+\upsilon_\lambda & \mbox{ otherwise.} \end{array}\right.\] For technical convenience we set $\taucp{i,0}:={\tau_i^\star}$ for $i=1,\ldots,n$. \end{defi} Note that the definition of $\taucp{i,0}$ deviates from Definition 5.1 of \cite{CWc} but is conceptually more appropriate. Clearly, for $j=1,\ldots,p$ we have $(s_j)^\star=(s_j,0)$ by definition. Note further that, unless $\alphacp{1,1}=0$, for $i=1,\ldots,n$ we have by definition \[i^\star<_\mathrm{\scriptscriptstyle{lex}}(i,1),\quad {\tau_i^\star}\le\taucp{i,1}, \quad\mbox{ and }\quad\upsilonseg(\taucp{i,1})=\upsilonseg({\tau_i^\star}).\] For $\alphavec=((1))$, i.e.\ $\alphacp{1,1}=1$, we have $\tau_1^\star=1$, and for $\alphavec=((\upsilon_\lambda))$ where $\lambda\in\mathrm{Lim}$ we have $\tau_1^\star=\upsilon_\lambda$. The more general notion of \emph{base} also considers greatest $\kappa^\tauwo$-predecessors of ordinals whose tracking chain is an initial chain $\alphavecrestrarg{i,j}$, $j\ge 1$, of a tracking chain $\alphavec$. In the case $j>1$ the base is given by $\taucp{i,j-1}$, so the next definition simply introduces a more convenient terminology for naming bases.
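For a further illustration of units (a sketch, in which we assume that the segmentation clauses do not interfere), consider a $\nu$-regular index chain $\alphavec=((\alphacp{1,1},\alphacp{1,2}),(\alphacp{2,1}))$ with $\taucp{1,1}\le\taucp{2,1}$. The only index pair $(l,j)<_\mathrm{\scriptscriptstyle{lex}}(2,1)$ with $j<m_l$ is $(1,1)$, so the first clause applies and
\[2^\star=(1,1),\qquad\tau_2^\star=\taucp{1,1},\]
while $1^\star=(1,0)$ with $\tau_1^\star=1+\upsilon_\lambda$, where $(\lambda,t)=\upsilonseg(\alphavec)$, provided no segmentation index $s_j\le 1$ exists.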
\begin{defi}[Bases]\label{basesdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an $\upsilon$-segmented $\nu$-regular index chain with associated chain ${\vec{\tau}}$. For $(i,j)\in\mathrm{dom}(\alphavec)$ we define the index pair $(i,j)^\prime$\index{$'$@$(i,j)^\prime$} by $i^\star$ if $j=1$ and by $(i,j-1)$ otherwise. The {\bf base $\taucppr{i,j}$ of $\taucp{i,j}$ in $\alphavec$}\index{base} is defined by \index{$\taucppr{i,j}$} \[\taucppr{i,j}:=\left\{\begin{array}{l@{\quad}l} {\tau_i^\star} & \mbox{ if } j=1\\[2mm] \taucp{i,j-1} & \mbox{ if } j>1.\end{array}\right.\] For $1\le i\le n$ we define the {\bf\boldmath $i$-th maximal base ${\tau^\prime_i}$\unboldmath} of $\alphavec$\index{base!maximal base}\index{${\tau^\prime_i}$} by \[{\tau^\prime_i}:=\taucppr{i,m_i}.\] \end{defi} With the above preparations we can determine how many $\alpha$-$\le_1$-minimal $<_1$-successors an ordinal $\alpha$ represented by the tracking chain $\alphavec=(\alphaevec,\ldots,\alphanvec)$ has. This type of condition also applies to the initial chains $\alphavecrestrarg{i,1}$, $1\le i\le n$. We call these upper bounds on the number of immediate $<_1$-successors the critical indices. The formal definition is as follows. \begin{defi}[Critical \boldmath$\kappa$-indices\unboldmath]\label{critkapdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an $\upsilon$-segmented $\nu$-regular index chain with associated chain ${\vec{\tau}}$.
We define the {\bf\boldmath $i$-th critical index of $\alphavec$\unboldmath}\index{critical index of a tracking chain}, written as $\rho_i(\alphavec)$, in short as $\rho_i$ if no confusion is likely, by \[\rho_i:=\left\{\begin{array}{ll} \log\left((1/{\tau_i^\star})\cdot\taucp{i,1}\right)+1&\mbox{if }\;m_i=1\\[2mm] \rhoargs{{\tau^\prime_i}}{\taucp{i,m_i}}+{\tau^\prime_i}&\mbox{if }\; m_i>1\:\&\:\taucp{i,m_i}<\mu_{{\tau^\prime_i}}\:\&\:\chi^{\tau^\prime_i}(\taucp{i,m_i})=0\\[2mm] \rhoargs{{\tau^\prime_i}}{\taucp{i,m_i}}+1&\mbox{if }\; m_i>1\:\&\:\taucp{i,m_i}<\mu_{{\tau^\prime_i}}\:\&\:\chi^{\tau^\prime_i}(\taucp{i,m_i})=1\\[2mm] \lambda_{{\tau^\prime_i}}+1&\mbox{otherwise.}\end{array}\right.\] \end{defi} Note that $\alpha<\rho_i$ implies that $\alpha\le\taucp{i,1}$ if $m_i=1$, and $\alpha\le\lambda_{\taucp{i,m_i-1}}$ if $m_i>1$, since according to part b) of Lemma \ref{rhomulamestimlem} we have $\rho_i\le\lambda_{{\tau^\prime_i}}+1$ where ${\tau^\prime_i}=\taucp{i,m_i-1}$. Note that the value given by the second clause is not a successor ordinal, since we are approaching an ordinal that should be indexed by a $\nu$-index and therefore need to avoid ambiguity. This preference of $\nu$-indices over $\kappa$-indices whenever possible also motivates the second part of part 2 in the following definition, whereas part 1 is a necessity: If ${\tau_i^\star}=\taucp{i,1}$ then we must have $m_i=1$ since there do not exist $<_1$-successors of successor-$\kappa^\tauwo$-successors (and hence we also must have $i=n$), cf.\ Lemma \ref{loalpllem}. \begin{defi}[\boldmath$\kappa$-regularity\unboldmath]\label{kapparegdefi} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be an $\upsilon$-segmented $\nu$-regular index chain with associated chain ${\vec{\tau}}$. $\alphavec$ is called {\bf \boldmath $\kappa$-regular\unboldmath} if the following conditions hold.
\begin{enumerate} \item For all $i\in\{1,\ldots,n\}$ such that $m_i>1$ \[{\tau_i^\star}<\taucp{i,1}.\] \item For any $i\in\{1,\ldots,n-1\}$ \[\alphacp{i+1,1}<\rho_i,\] and in case of ${\tau^\prime_i}<\taucp{i,m_i}\in{\mathbb E}$ \[\alphacp{i+1,1}\not=\taucp{i,m_i}.\] \end{enumerate} In short, such $\alphavec$ is called a {\bf \boldmath$\kappa\nu\upsilon$-regular\unboldmath} index chain. \end{defi} Note that the class of $\kappa\nu\upsilon$-regular index chains is closed under non-empty initial chains. The following lemma is technical but states an important property of $\kappa\nu\upsilon$-regular index chains. Essentially, this was shown in part b) of Lemma 5.8 (using Lemma 5.7) of \cite{CWc}, but we provide a new proof to enhance clarity. \begin{lem}[cf.\ 5.7 and 5.8 of \cite{CWc}]\label{rserslem} Let $\alphavec=(\alphaevec,\ldots,\alphanvec)$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, be a $\kappa\nu\upsilon$-regular index chain with associated chain ${\vec{\tau}}$. For every $i$, $1\le i \le n$, we have \[\taucp{i,1}\le\lambda_{\tau_i^\star},\] provided that ${\tau_i^\star}>1$. \end{lem} {\bf Proof.} We may assume that $i=n$ and $m_n=1$ since otherwise we can simply truncate $\alphavec$ to $\alphavecrestrarg{i,1}$. Suppose that ${\tau_n^\star}>1$. We consider cases according to Definition \ref{unitsdefi}. Let $p$, $s_1,\ldots,s_p$, and $(\lambda_l,t_l)_{0\le l\le p}$ be as in Definition \ref{segmentationdefi}.\\[2mm] {\bf Case 1:} $n^\star=(j,k)\in\mathrm{dom}(\alphavec)$. Then we have $j<n$ and $k<m_j$.\\[1mm] {\bf Subcase 1.1:} $k+1<m_j$. Then $\taucp{n,1}<\taucp{j,k+1}\le\mu_\taucp{j,k}$ and $\taucp{j,k+1}\in{\mathbb E}^{>\taucp{j,k}}$. By part c) of Lemma \ref{rhomulamestimlem} we have \[\taucp{j,k+1}\le\lambda_\taucp{j,k}.\] {\bf Subcase 1.2:} $k+1=m_j$.
We have $\taucp{j+1,1}<\rho_j$, and since $\rho_j\le\lambda_\taucp{j,k}+1$ by part b) of Lemma \ref{rhomulamestimlem}, we have \[\taucp{j+1,1}\le\lambda_\taucp{j,k}.\] {\bf Subcase 1.2.1:} $\min\{l\in(j,n)\mid m_l>1\}$ exists. Then we have \[\taucp{n,1}<\taucp{l,1}\le\ldots\le\taucp{j+1,1}\le\lambda_\taucp{j,k}.\] {\bf Subcase 1.2.2:} Otherwise. Then we have \[\taucp{n,1}\le\ldots\le\taucp{j+1,1}\le\lambda_\taucp{j,k}.\] {\bf Case 2:} ${\tau_n^\star}=\upsilon_{\lambda_p+t_p}$ where $p>0$, $\lambda_p+t_p>0$, and $n^\star=(s_p,0)$. Then according to the definition we have \[\taucp{n,1}\in[\upsilon_{\lambda_p+t_p},\upsilon_{\lambda_p+t_p+1}).\] {\bf Case 3:} ${\tau_n^\star}=\upsilon_\lambda$ where $n^\star=(1,0)$ and $\lambda\in\mathrm{Lim}$. Then according to the definition we have \[\taucp{n,1}\in[\upsilon_\lambda,\upsilon_{\lambda+1}).\] Regarding cases 2 and 3, note that $\lambda_{\upsilon_\iota}=\upsilon_{\iota+1}$ for all $\iota>0$. \mbox{ } $\Box$ Next we consider stepwise (maximal) extension of $\kappa\nu\upsilon$-regular index chains and show that the class of such index chains is closed under maximal extension. Index chains of the form $((\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m}))$, which in principle could be maximally extended infinitely many times, are excluded from this extension procedure as they do not play any role in the upcoming definition of tracking chains. For clarification, this notion of maximal extension does not change the local $\le_i$-component it originates from. In particular, it does not jump from a local $\le_2$-component indexed by a submaximal $\nu$-index to another $\le_2$-component indexed by a larger $\nu$-index in the same domain. \begin{defi}[cf.\ 5.2 of \cite{CWc}]\label{maxextdefi} Let $\alphavec$ be a $\kappa\nu\upsilon$-regular index chain with components $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$ and associated chain ${\vec{\tau}}$.
The {\bf extension index}\index{extension index} for $\alphavec$ is defined via the following cases, setting $\tau:=\taucp{n,m_n}$ and ${\tau^\prime}:={\tau^\prime_n}$: \begin{enumerate} \item[0.] ${\tau^\prime}<\tau\in\mathrm{Im}(\upsilon)$: Then an extension index for $\alphavec$ is not defined. \item $m_n=1$: We consider three subcases: \begin{enumerate} \item[1.1.] ${\tau^\prime}=\tau$: Then $\alphavec$ is already maximal. An extension index for $\alphavec$ does not exist. \item[1.2.] ${\tau^\prime}<\tau\in{\mathbb E}$: Then the extension index for $\alphavec$ is $\alphacp{n,2}:=\mu_\tau$. \item[1.3.] Otherwise: Then the extension index for $\alphavec$ is $\alphacp{n+1,1}:=\log\left((1/{\tau^\prime})\cdot\tau\right)$. \end{enumerate} \item $m_n>1$: We consider three subcases. \begin{enumerate} \item[2.1.] $\tau=1$: Then $\alphavec$ is already maximal. An extension index for $\alphavec$ does not exist. \item[2.2.] ${\tau^\prime}<\tau\in{\mathbb E}$: Here we consider another two subcases. \begin{enumerate} \item[2.2.1.] $\tau=\mu_{\tau^\prime}<\lambda_{\tau^\prime}$: Then the extension index for $\alphavec$ is $\alphacp{n+1,1}:=\lambda_{\tau^\prime}$. \item[2.2.2.] Otherwise: Then the extension index for $\alphavec$ is $\alphacp{n,m_n+1}:=\mu_\tau$. \end{enumerate} \item[2.3.] Otherwise: We consider again two subcases. \begin{enumerate} \item[2.3.1.] $\tau<\mu_{\tau^\prime}$: Then the extension index for $\alphavec$ is $\alphacp{n+1,1}:=\rhoargs{{\tau^\prime}}{\tau}$. \item[2.3.2.] Otherwise: Then the extension index for $\alphavec$ is $\alphacp{n+1,1}:=\lambda_{\tau^\prime}$.
\end{enumerate} \end{enumerate} \end{enumerate} If the extension index for $\alphavec$ is defined, we denote the extension of $\alphavec$ by this index by $\operatorname{ec}(\alphavec)$\index{$\operatorname{ec}$} and call this extended chain the {\bf\boldmath maximal $1$-step extension\unboldmath} of $\alphavec$.\index{extension!maximal $1$-step extension} If the iteration of maximal $1$-step extensions terminates after finitely many steps, we call the resulting index chain the {\bf maxplus extension} of $\alphavec$ and denote it by $\operatorname{me}pl(\alphavec)$.\index{$\operatorname{me}$!$\operatorname{me}pl$} \end{defi} \begin{rmk} In \cite{CWc} we called the extension of $\alphavec$ by the extension index the \emph{extension candidate} for $\alphavec$ since it might fail to be a tracking chain. Here we keep definitions essentially compatible with earlier work but avoid the notion of tracking chain since it has not been defined yet. In earlier work we called $\operatorname{ec}(\alphavec)$ the maximal $1$-step extension only if it was a tracking chain. The maxplus extension of the index chain $\alphavec$, $\operatorname{me}pl(\alphavec)$, might fail to be a tracking chain, cf.\ condition 2 of Definition \ref{trackingchaindefi}. Regarding the formulation of the new clause 0, note that the extension index of the chain $\alphavec=((\upsilon_1,\upsilon_1))$ is $\alphacp{2,1}=\upsilon_1^2$ in the same way as $\Gamma_0^2$ is the extension index of the chain $((\Gamma_0,\Gamma_0))$, by an application of clause 2.3.1. \end{rmk} \begin{lem} The class of $\kappa\nu\upsilon$-regular index chains is closed under maximal $1$-step extensions. \end{lem} {\bf Proof.} This follows immediately from the definitions. \mbox{ } $\Box$ The following lemma establishes that maximal extension is a finite process. For an alternative, more general proof of termination regarding arbitrary $1$-step extensions, see Definition 5.3 and Lemma 5.4 of \cite{CWc}.
\begin{lem}\label{meterminationlem} The process of iterated maximal $1$-step extensions always terminates; hence $\operatorname{me}pl(\alphavec)$ always exists. \end{lem} {\bf Proof.} The ${\operatorname{l}^{\vec{\tau}}}$-measure applied from the second extension index on strictly decreases if we omit those applications of the $\mu$-operator (cases 1.2 and 2.2.2) that are directly followed by an application of the $\lambda$-operator (cases 2.2.1 and 2.3.2). Note that clause 2.3.1 can only be applied at the beginning of the process of maximal extension since all $\nu$-indices occurring as maximally extending indices are maximal, i.e.\ obtained by application of the $\mu$-operator. If the $\mu$-operator is applied twice in a row during maximal extension, say first extending by $\mu_\tau$ and next by $\mu_{\mu_\tau}$, then according to the definition we have $\mu_\tau=\lambda_\tau$. Otherwise the next maximally extending index after $\mu_\tau$ is the $\kappa$-index $\lambda_\tau$. \mbox{ } $\Box$ \begin{defi} Let $\alphavec$ be a $\kappa\nu\upsilon$-regular index chain with components $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$ and associated chain ${\vec{\tau}}$.
The $<_\mathrm{\scriptscriptstyle{lex}}$-greatest index pair $(i,j)$ of $\alphavec$ after which the elements of $\alphavec$ fall onto the main line starting at $\alphacp{i,j}$ is called the {\bf\boldmath critical main line index pair of $\alphavec$\unboldmath}.\index{index pair!critical main line index pair} The formal definition is as follows: If there exists a $<_\mathrm{\scriptscriptstyle{lex}}$-maximal $(i,j)\in\mathrm{dom}(\alphavec)$ such that $j<m_i$ and $\alphacp{i,j+1}<\mu_\taucp{i,j}$, and if $(i,j)$ satisfies the following conditions: \begin{enumerate} \item $\chi^\taucp{i,j}(\taucp{i,j+1})=1$ and \item $\alphavec$ is reached by maximal $1$-step extensions starting from $\alphavecrestrarg{i,j+1}$, \end{enumerate} then $(i,j)$ is called the critical main line index pair of $\alphavec$, written as $\operatorname{cml}(\alphavec)$. Otherwise $\alphavec$ does not possess a critical main line index pair.\index{$\operatorname{cml}$} \end{defi} \begin{defi}[cf.\ 5.1 of \cite{CWc}]\label{trackingchaindefi} Let $\alphavec$ be a $\kappa\nu\upsilon$-regular index chain with components $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$ and associated chain ${\vec{\tau}}$. $\alphavec$ is called a {\bf tracking chain}\index{tracking chain} if the following conditions hold. \begin{enumerate} \item All non-empty proper initial chains $\alphavecrestrarg{i,j}$ of $\alphavec$ are tracking chains. \item If $m_n=1$ and if $\alphavec$ possesses a critical main line index pair $\operatorname{cml}(\alphavec)=(i,j)$, then \[\taucp{n,1}\not=\taucp{i,j}.\] \end{enumerate} By ${\operatorname{T}}C$\index{${\mathrm{tc}}$@${\operatorname{T}}C$} we denote the class of all tracking chains. For a tracking chain $\alphavec$ with $\upsilonseg(\alphavec)=(\lambda,t)$ we also write $\alphavec\in\lambdatTC$ or $\alphavec\in\lambdaTC$.
\medskip \noindent {\bf Useful notation:} \begin{enumerate} \item By $(i,j)^+$\index{$+$@$(i,j)^+$} we denote the immediate $<_\mathrm{\scriptscriptstyle{lex}}$-successor of $(i,j)$ in $\mathrm{dom}(\alphavec)$ if that exists and $(n+1,1)$ otherwise. For convenience we set $(0,0)^+:=(1,1)$ and $(i,0)^+:=(i,1)$ for $i=1,\ldots,n$. \item Due to frequent future occurrences, we introduce the following notation for the modification of the last index of a tracking chain. \[\alphavec[\xi]:=\left\{\begin{array}{l@{\quad}l} \alphavecrestrarg{n-1}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n-1},\xi) & \mbox{ if } \xi>0\mbox{ or } (n,m_n)=(1,1)\\ \alphavecrestrarg{n-1}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n-1}) & \mbox{ if } \xi=0\mbox{ and } m_n>1\\ \alphavecrestrarg{n-1} & \mbox{ if } \xi=0, n>1,\mbox{ and } m_n=1. \end{array}\right.\]\index{$\alphavec[\xi]$} \end{enumerate} \end{defi} \begin{rmk} Note that $\alphavec[\xi]$ might not be a tracking chain. This has to be verified when this notation is used. In the case $\xi\in(0,\alphacp{n,m_n})$ the second part of condition 2 of Definition \ref{kapparegdefi} has to be checked. \end{rmk} \begin{defi}[Extension of tracking chains, \boldmath$\operatorname{me}(\alphavec)$\unboldmath]\label{tcextensiondefi} An {\bf extension}\index{extension} of a tracking chain $\alphavec$ is a tracking chain of which $\alphavec$ is an initial chain. A {\bf\boldmath $1$-step extension\unboldmath}\index{extension!$1$-step extension} is an extension by exactly one additional index. The {\bf maximal extension} of $\alphavec$ is denoted by $\operatorname{me}(\alphavec)$ and is the tracking chain obtained from $\alphavec$ after the maximum possible number of maximal $1$-step extensions according to Definition \ref{maxextdefi}.
\end{defi} \begin{rmk} Note that $\operatorname{me}pl(\alphavec)$ and $\operatorname{me}(\alphavec)$, where $\alphavec\in{\operatorname{T}}C$, either coincide or differ by one index extending $\operatorname{me}(\alphavec)$ to $\operatorname{me}pl(\alphavec)$ that does not satisfy condition 2 of Definition \ref{trackingchaindefi}. \end{rmk} The following \emph{key} lemma and proof cover Lemma 5.5 and Corollary 5.6 of \cite{CWc}. For details see the alternative formulation and proof of Lemma 5.5 of \cite{CWc}; however, the proof given below should suffice. \begin{lem}[5.5 and 5.6 of \cite{CWc}]\label{cmlmaxextcor} Let $\alphavec\in{\operatorname{T}}C$ be maximal, i.e.\ $\alphavec=\operatorname{me}(\alphavec)$, with maximal index pair $(n,m_n)$ and associated chain ${\vec{\tau}}$. Suppose that $\operatorname{cml}(\alphavec)=:(i,j)$ exists. \begin{enumerate} \item We have $(i,j+1)<_\mathrm{\scriptscriptstyle{lex}}(n,m_n)$, $\operatorname{ec}(\alphavec)=\operatorname{me}pl(\alphavec)$ exists, and the extending index with index pair $(n+1,1)$ is a successor multiple of $\taucp{i,j}$ with $(n+1)^\star=(i,j)$. \item For $(k,l)\in\mathrm{dom}(\alphavec)$ such that $(i,j+1)<_\mathrm{\scriptscriptstyle{lex}}(k,l)$ and $l<m_k$ if $m_k>1$ we have $\taucp{i,j}<\taucp{k,l}$ and $\chi^\taucp{i,j}(\taucp{k,l})=1$. \end{enumerate} \end{lem} {\bf Proof.} We first explain why $\operatorname{ec}(\alphavecrestrarg{i,j+1})$ always exists and is always a tracking chain (see the beginning of the proof of Lemma 5.5 of \cite{CWc}). If $\taucp{i,j}<\taucp{i,j+1}\in{\mathbb E}$, then case 2.2.2 of Definition \ref{maxextdefi} applies, and the extension is clearly a tracking chain, otherwise case 2.3.1 applies.
In this latter case $\alphavec$ is extended by $\alphacp{i+1,1}=\rhoargs{\taucp{i,j}}{\taucp{i,j+1}}=\taucp{i,j}\cdot\lambda$ where $\lambda:=\log(\taucp{i,j+1})$ is a limit ordinal, since $\taucp{i,j+1}\in{\mathbb L}^{\ge\taucp{i,j}}$ due to the assumption $\chi^\taucp{i,j}(\taucp{i,j+1})=1$. Hence $\taucp{i+1,1}>\taucp{i,j}=\taucppr{i+1,1}$, implying that $\operatorname{ec}(\alphavec)\in{\operatorname{T}}C$. According to Lemma \ref{chiinvlem} we have $\chi^\taucp{i,j}(\taucp{i,j+1})=\chi^\taucp{i,j}(\lambda)=\chi^\taucp{i,j}({\mathrm{end}}(\lambda))=1$ and hence also $\chi^\taucp{i,j}(\taucp{i+1,1})=1$. Now consider the subset $J_0$ of $\mathrm{dom}(\alphavec)\cup\{(n+1,1)\}$ of (index pairs of) maximally extending indices starting with the extending index of $\alphavecrestrarg{i,j+1}$, which is obtained from an application of either case 2.2.2 or case 2.3.1 as mentioned before. Observe that the process of maximal extension, shown to always terminate in Lemma \ref{meterminationlem}, can only end with a $\kappa$-index. We reduce $J_0$ to the set $J$ by cancelling those (index pairs of) indices obtained from applications of cases 1.2 and 2.2.2 that are immediately followed by applications of cases 2.2.1 or 2.3.2, i.e.\ applications of the $\mu$-operator immediately before application of the $\lambda$-operator, cf.\ the proof of Lemma \ref{meterminationlem}. The index pairs specified in part 2 then comprise the set $J\setminus\{(n+1,1)\}$.
According to Lemma \ref{chiinvlem}, part a), we have \[\chi^{\taucp{i,j}}(\taucp{k,l})=\chi^{\taucp{i,j}}(\taucp{i,j+1})=1\] for every $(k,l)\in J$, cf.\ the proof of Lemma 5.5 of \cite{CWc} and an alternative argument \footnote{Corrections to be made on p.\ 60 of \cite{W18}: $\alphavecpl:=\operatorname{me}pl(\alphavecpr)=\operatorname{me}pl(\alphavec)$ in line 8, and in line -11 after defining ${\mathrm{bs}}pr_{l+1}$, add: ${\mathrm{bs}}_{k+1}$ is then defined to be ${{\mathrm{bs}}pr_{k+1}}^\frown\sigma_{k+1}$ if $\sigma_{k+1}\in{\mathbb E}^{>\sigma^\prime_{k+1}}$, otherwise ${\mathrm{bs}}_{k+1}:={\mathrm{bs}}pr_{k+1}$.} in Subsection 2.2 of \cite{W18}. Note that in particular $\taucp{i,j}\le\taucp{k,l}$ for all $(k,l)\in J$, where equality holds if and only if $(k,l)=(n+1,1)$, which settles part 2. Due to Lemma \ref{chiinvlem}, part b), early termination, i.e.\ $\taucppr{k,l}=\taucp{k,l}$ for $(k,l)\in J\setminus\{(n+1,1)\}$, is not possible. For the index $\alphacp{n+1,1}$ maximally extending $\alphavec$ to $\operatorname{ec}(\alphavec)=\operatorname{me}pl(\alphavec)$ we have ${\mathrm{end}}(\alphacp{n+1,1})=\taucp{n+1,1}=\taucp{i,j}$, completing the proof of the lemma. \mbox{ } $\Box$ In the following definition we provide a notion of \emph{reference sequence} that replaces the notion of \emph{characteristic sequence} in Definition 5.3 of \cite{CWc} and allows us to obtain the analogues of Lemmas 5.7, 5.8, and 5.10 of \cite{CWc} without reiterating similar arguments given in the respective proofs. \begin{defi}[cf.\ 5.1, 5.3 of \cite{CWc}]\label{charseqdefi} Let $\alphavec=\left(\alphaevec,\ldots,\alphanvec\right)\in{\operatorname{T}}C$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$, with associated chain ${\vec{\tau}}$, segmentation depth $p$, segmentation signature $(s_1,\ldots,s_p)$, and sequence of $\upsilon$-segments $(\lambda_i,t_i)_{0\le i\le p}$ as in Definition \ref{segmentationdefi}.
\begin{enumerate} \item For $i\in\{1,\ldots,n\}$ and $j\in\{0,\ldots,m_i\}$ the {\bf reference sequence ${\mathrm{rs}}_{i,j}(\alphavec)$ of $\alphavec$ at $(i,j)$} \index{reference sequence}\index{${\mathrm{rs}}$} is defined by \[{\mathrm{rs}}_{i,j}(\alphavec):=\left\{\begin{array}{rl} {\mathrm{rs}}_{i^\star}(\alphavec)^\frown{\vec{\sigma}}&\mbox{ if }i^\star\in\mathrm{dom}(\alphavec)\\[2mm] (\upsilon_{\lambda_l+1},\ldots,\upsilon_{\lambda_l+t_l})^\frown{\vec{\sigma}} &\mbox{ if }i^\star=(s_l,0)\mbox{ for some }l\in\{1,\ldots,p\}\\[2mm] {\vec{\sigma}}&\mbox{ otherwise,} \end{array}\right.\] where ${\vec{\sigma}}:=(\taucp{i,1},\ldots,\taucp{i,j})$. \item For $(i,j)\in\mathrm{dom}(\alphavec)$ the {\bf reference index pair $\refcp{i,j}(\alphavec)$ of $\alphavec$ at $(i,j)$}\index{index pair!reference index pair}\index{$\refcp{}$} is \[\refcp{i,j}(\alphavec):=\left\{\begin{array}{ll} (i,j-1) &\mbox{ if } (i,j-1)\in\mathrm{dom}(\alphavec)\cup\{(1,0)\}\\[2mm] \refcp{i-1,m_{i-1}}(\alphavec) &\mbox{ otherwise.} \end{array}\right.\] \item Setting $(i_0,j_0):=\refcp{i,j}(\alphavec)$, the {\bf evaluation reference sequence ${\mathrm{ers}}_{i,j}(\alphavec)$ of $\alphavec$ at $(i,j)\in\mathrm{dom}(\alphavec)$} \index{ers} is \[{\mathrm{ers}}_{i,j}(\alphavec):=\left\{\begin{array}{ll} () & \mbox{ if }(i_0,j_0)=(1,0)\\[2mm] {\mathrm{rs}}_{i_0,j_0}(\alphavec) & \mbox{ otherwise.} \end{array}\right.\] For convenience we also define ${\mathrm{ers}}_{1,0}(\alphavec):=()$. \end{enumerate} \end{defi} \begin{lem}[cf.\ 5.8 of \cite{CWc}]\label{rslem} In the situation of the above definition the following statements hold. \begin{enumerate} \item For all $(i,j)\in\mathrm{dom}(\alphavec)$ we have \[{\mathrm{rs}}_{i,j}(\alphavec)={\mathrm{rs}}_{i^\star}(\alphavec)^\frown(\taucp{i,1},\ldots,\taucp{i,j}),\] and if $j>1$ we have ${\mathrm{rs}}_{i,j}(\alphavec)={\mathrm{rs}}_{i,j-1}(\alphavec)^\frown\taucp{i,j}$.
\item For all $i\in\{1,\ldots,n\}$ and $j\in\{0,\ldots,m_i-1\}$ we have \[{\mathrm{rs}}ij(\alphavec)\in{\cal R}S.\] \item For $(i,j)\in\mathrm{dom}(\alphavec)$ let $\gammavec:={\mathrm{rs}}_{i,j-1}(\alphavec)$. Then we have \[\taucp{i,j}\in\left\{\betagin{array}{l@{\quad}l}\mathrm{dom}kvga & \mbox{ if }j=1\\ \mathrm{dom}nuvga & \mbox{ if }j>1.\end{array}\right.\] \item For $(i,j)\in\mathrm{dom}(\alphavec)$ we have \[{\mathrm{ers}}ij(\alphavec)\in{\cal R}S.\] \end{enumerate} \end{lem} {\bf Proof.} Part 1 follows directly from the definition. Note that if $i^\star=(j,0)$ for some $j$, then $j^\star=(j,0)$. According to Lemma \ref{rserslem} we have $\taucp{i,1}\le\lambda_{\tau_i^\star}$ whenever ${\tau_i^\star}>1$. If ${\tau_i^\star}>1$ and $\taucp{i,1}\in{\mathbb E}^{>{\tau_i^\star}}$, part c) of Lemma \ref{rhomulamestimlem} yields $\taucp{i,1}\le\mu_{\tau_i^\star}$ as well. If $1\le j<m_i$ we have $\taucp{i,1}\in{\mathbb E}^{>{\tau_i^\star}}$, so proceeding from $i=1$ up to $i=n$ we obtain part 2. Parts 3 and 4 then readily follow. \mbox{ } $\Box$ The evaluation reference sequence $\gammavec:={\mathrm{ers}}ij(\alphavec)$ for $(i,j)\in\mathrm{dom}(\alphavec)$ is defined to be the sequence in ${\cal R}S$ that matches the correct setting of relativization needed to evaluate the index $\alphacp{i,j}$ in terms of $\kappa^\gammavec$ (if $j=1$) or $\nu^\gammavec$ (if $j>1$), as intended in the definition of tracking chains. In order to have the index $\alphacp{i,j}$ in the domain of the corresponding $\kappa$- or $\nu$-function, we have defined the notion of reference index pair $\refcp{i,j}(\alphavec)$. Note that we will make use of the global $\kappa$-function where possible, cf.\ Definition \ref{kappanuprincipals}. The well-definedness of $\tilde{\tau}cp{i,j}$ and $\alphaticp{i,j}$ in part 1 of the following definition is warranted by $\kappa$- and $\nu$-regularity, see Definitions \ref{critkapdefi}, \ref{kapparegdefi}, and \ref{nuregdefi}. 
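To illustrate the interplay of reference index pairs and evaluation reference sequences, consider (as a sketch only) a chain $\alphavec=((\alphacp{1,1},\alphacp{1,2}),(\alphacp{2,1}))\in{\operatorname{T}}C$, assuming that the second clause in part 1 of Definition \ref{charseqdefi} does not apply at $i=1$, so that ${\mathrm{rs}}_{1,1}(\alphavec)=(\taucp{1,1})$. Then
\[\refcp{1,1}(\alphavec)=(1,0),\quad\refcp{1,2}(\alphavec)=(1,1),\quad\refcp{2,1}(\alphavec)=\refcp{1,m_1}(\alphavec)=(1,1),\]
and hence
\[{\mathrm{ers}}_{1,1}(\alphavec)=(),\quad{\mathrm{ers}}_{1,2}(\alphavec)={\mathrm{ers}}_{2,1}(\alphavec)={\mathrm{rs}}_{1,1}(\alphavec)=(\taucp{1,1}),\]
so that $\alphacp{1,1}$ is evaluated by means of the global $\kappa$-function, whereas $\alphacp{1,2}$ and $\alphacp{2,1}$ are evaluated relative to $(\taucp{1,1})$, via the $\nu$- and the $\kappa$-function, respectively.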
\betagin{defi}[cf.\ 5.9 of \cite{CWc}]\lambdabel{trchevaldefi} Let $\alphavec=\left(\alphaevec,\ldots,\alphanvec\right)\in{\operatorname{T}}C$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$, with associated chain ${\vec{\tau}}$. \betagin{enumerate} \item The {\bf evaluations $\tilde{\tau}cp{i,j}$ and $\alphaticp{i,j}$}\index{evaluation function!evaluation of tracking chains} \index{$\tilde{\tau}cp{i,j}$,$\alphaticp{i,j}$} for $(i,j)\in\mathrm{dom}(\alphavec)$ are defined as follows, setting ${\vec{\varsigma}}:={\mathrm{ers}}ij(\alphavec)$. \[\tilde{\tau}cp{i,1}:=\kappa^{\vec{\varsigma}}_\taucp{i,1},\quad\alphaticp{i,1}:=\kappa^{\vec{\varsigma}}_\alphacp{i,1}\quad\quad\mbox{ if }j=1,\] \[\tilde{\tau}cp{i,j}:=\nu^{\vec{\varsigma}}_\taucp{i,j},\quad\alphaticp{i,j}:=\nu^{\vec{\varsigma}}_\alphacp{i,j}\quad\quad\:\mbox{ if }j>1.\] For $i\in\{1,\dots,n\}$, setting $i^\star=:(k,l)$ we define $\tilde{\tau}cp{k,l}:={\tau_i^\star}$ in case of $l=0$. For convenience we define $\tilde{\tau}cp{i,0}:=\tilde{\tau}cp{i^\star}$ for $i=1,\ldots,n$. \item The {\bf initial values $\set{\ordcp{i,j}(\alphavec)}{(i,j)\in\mathrm{dom}(\alphavec)}$ of $\alphavec$}\index{initial value} are defined, setting for convenience $m_0:=0$, $\ordcp{0,0}(\alphavec):=0$, and $\ordcp{1,0}(\alphavec):=0$, for $i=1,\ldots,n$ by \[\ordcp{i,1}(\alphavec):=\ordcp{i-1,m_{i-1}}(\alphavec)+\alphaticp{i,1}\] and\index{$\mathrm{o}$!$\ordcp{i,j}$} \[\ordcp{i,j+1}(\alphavec):=\ordcp{i,j}(\alphavec)+(-\tilde{\tau}cp{i,j}+\alphaticp{i,j+1})\mbox{ for }1\le j<m_i.\] We define the {\bf value of $\alphavec$} by $\mathrm{o}(\alphavec):=\ordcp{n,m_n}(\alphavec)$\index{$\mathrm{o}$!$\mathrm{o}$ for tracking chains}, which is the terminal initial value of $\alphavec$. For $\alpha\in{\mathrm{Ord}}$ and $\alphavec\in{\operatorname{T}}C$ such that $\alpha=\mathrm{o}(\alphavec)$ we call $\alphavec$ a {\bf tracking chain for $\alpha$}.
\index{value of a tracking chain} \end{enumerate} \end{defi} \betagin{rmk}[\cite{CWc}] The correction $-\tilde{\tau}cp{i,j}$ in the above definition avoids double summation: Consider the easy example of the chain $((\varepsilonn,1))$ which codes $\varepsilonn\cdot2$. Notice that $-\tilde{\tau}cp{i,j}+\alphaticp{i,j+1}$ is always a non-zero multiple of $\tilde{\tau}cp{i,j}$. We clearly have $\ordcp{i,j}(\alphavec)=\mathrm{o}(\alphavecrestrarg{(i,j)})$. \end{rmk} The sequences given by ${\mathrm{rs}}ij(\alphavec)$ directly provide the (possibly pruned) tracking sequences required when evaluating indices of the associated chain ${\vec{\tau}}$ in their proper setting of relativization, as specified in the following lemma. \betagin{lem}[cf.\ Lemma 5.10 of \cite{CWc}]\lambdabel{evallem} In the situation of Definition \ref{trchevaldefi}, \betagin{enumerate} \item $\tilde{\tau}cp{i,1}=\kappa^{{\vec{\varsigma}}star}_\taucp{i,1}$ where ${\vec{\varsigma}}star:={\mathrm{rs}}istar(\alphavec)$, \item ${\mathrm{ts}}(\tilde{\tau}cp{i,j})={\mathrm{rs}}ij(\alphavec)$ for $(i,j)\in\mathrm{dom}(\alphavec)$ such that $\taucp{i,1}\in{\mathbb E}^{>{\tau_i^\star}}$ (if ${\tau_i^\star}>1$) and $\taucp{i,j}>1$ (if $j>1$), and \item $(i,j)=(k,l)$ for $(i,j),(k,l)\in\mathrm{dom}(\alphavec)$ such that ${\mathrm{ts}}(\tilde{\tau}cp{i,j})={\mathrm{ts}}(\tilde{\tau}cp{k,l})$, $\taucp{i,1}\in{\mathbb E}^{>{\tau_i^\star}}$, $\taucp{k,1}\in{\mathbb E}^{>{\tau_k^\star}}$ $\:\&\:$ $\taucp{i,j},\taucp{k,l}>1$. \end{enumerate} \end{lem} {\bf Proof.} Part 1: Let us assume that $\alphavec\not=((0))$, which is the only possibility for the trivial case $\taucp{i,1}=0$ (where $i=1$). Lemma \ref{rserslem} warrants (in the nontrivial case where ${\tau_i^\star}>1$) that $\taucp{i,1}\in\mathrm{dom}(\kappa^{{\vec{\varsigma}}star})$. 
Observe that in the case $i^\star=(j,0)$ for some $j$ we have $\kappa^{{\vec{\varsigma}}star}_\taucp{i,1}=\kappa_\taucp{i,1}$ according to the definition of the global $\kappa$-function, Definition \ref{kappanuprincipals}, and the fact that \[\kappa^{\betavec^\frown\beta}_\beta=\kappa^\betavec_\beta\mbox{ whenever } \betavec^\frown\beta\in{\cal R}S.\] {\bf Case 1:} ${\mathrm{ers}}_{i,1}(\alphavec)=()$. This rules out $i^\star\in\mathrm{dom}(\alphavec)$. Hence $\tilde{\tau}cp{i,1}=\kappa_\taucp{i,1}$, as observed above.\\[2mm] {\bf Case 2:} ${\vec{\varsigma}}:={\mathrm{ers}}_{i,1}(\alphavec)={\mathrm{rs}}_{k,l}(\alphavec)$ where $(k,l)=\refcp{i,1}(\alphavec)\in\mathrm{dom}(\alphavec)$.\\[2mm] {\bf Subcase 2.1:} $(r,s):=i^\star\in\mathrm{dom}(\alphavec)$. Then we have $(r,s)\le_\mathrm{\scriptscriptstyle{lex}}(k,l)$ and are done if equality holds. Otherwise we have $\taucp{r,s}\le\taucp{i,1}<\taucp{k,l}$ and ${\mathrm{rs}}_{r,s}(\alphavec)$ is a proper initial sequence of ${\mathrm{rs}}_{k,l}(\alphavec)$, because $(r,s)$ is the $<_\mathrm{\scriptscriptstyle{lex}}$-maximal index pair $(u,v)\in\mathrm{dom}(\alphavec)$ with $\taucp{u,v}\le\taucp{i,1}$, $(u,v)<_\mathrm{\scriptscriptstyle{lex}}(i,1)$, and $v<m_u$. By definition we obtain $\tilde{\tau}cp{i,1}=\kappa^{\vec{\varsigma}}_\taucp{i,1}=\kappa^{\vec{\varsigma}}star_\taucp{i,1}$.\\[2mm] {\bf Subcase 2.2:} $i^\star=(j,0)$ for some $j$. Then, as observed above, $\kappa^{{\vec{\varsigma}}star}_\taucp{i,1}=\kappa_\taucp{i,1}$.\\[2mm] {\bf 2.2.1:} $j\le k$. Then ${\mathrm{rs}}istar(\alphavec)$ is a proper initial sequence of ${\mathrm{rs}}_{k,l}(\alphavec)$, say ${\mathrm{rs}}_{k,l}(\alphavec)={\mathrm{rs}}istar(\alphavec)^\frown(\beta_1,\ldots,\beta_r)$ where $\beta_r=\taucp{k,l}$ and $\taucp{i,1}<\beta_1$. As in Subcase 2.1 it follows that $\tilde{\tau}cp{i,1}=\kappa^{\vec{\varsigma}}_\taucp{i,1}=\kappa^{\vec{\varsigma}}star_\taucp{i,1}$, which is equal to $\kappa_\taucp{i,1}$.\\[2mm] {\bf 2.2.2:} $k<j$.
Then all components of ${\vec{\varsigma}}$ are strictly greater than $\taucp{i,1}$, and we obtain $\tilde{\tau}cp{i,1}=\kappa^{\vec{\varsigma}}_\taucp{i,1}=\kappa_\taucp{i,1}$. This concludes the proof of part 1. Part 2 follows for $j=1$ from part 1 since in the case ${\vec{\varsigma}}star:={\mathrm{rs}}istar(\alphavec)\not=()$, due to the assumption $\taucp{i,1}\in{\mathbb E}^{>{\tau_i^\star}}$, we have \[\kappa^{{\vec{\varsigma}}star}_\taucp{i,1}=\nu^{{\vec{\varsigma}}star}_\taucp{i,1},\] so that according to Theorem \ref{trsvallem} ${\mathrm{ts}}(\tilde{\tau}cp{i,1})={\vec{\varsigma}}star^\frown\taucp{i,1}={\mathrm{rs}}_{i,1}(\alphavec)$. In the case $j>1$, setting ${\vec{\varsigma}}:={\mathrm{ers}}ij(\alphavec)$, which is equal to ${\mathrm{rs}}_{i,j-1}(\alphavec)$, we have $\tilde{\tau}cp{i,j}=\nu^{\vec{\varsigma}}_\taucp{i,j}$, so that again according to Theorem \ref{trsvallem} ${\mathrm{ts}}(\tilde{\tau}cp{i,j})={\vec{\varsigma}}^\frown\taucp{i,j}={\mathrm{rs}}ij(\alphavec)$. Note that for $\taucp{i,j}=1$ we have $\tilde{\tau}cp{i,j}=\mathrm{o}({\vec{\varsigma}})\cdot 2$, which is not in the domain of ${\mathrm{ts}}$. Part 3 is seen by induction on the length of ${\mathrm{ts}}(\tilde{\tau}cp{i,j})$, using part 2, according to which the assumption implies that ${\mathrm{rs}}ij(\alphavec)={\mathrm{rs}}_{k,l}(\alphavec)$. Let $(\lambda_q,t_q)_{0\le q\le p}$ be the sequence of $\upsilon$-segments of $\alphavec$ according to Definition \ref{segmentationdefi}, $(\lambda_0,t_0)$ being the $\upsilon$-segment and $p$ the segmentation depth of $\alphavec$. We consider the following cases.\\[2mm] {\bf Case 1:} $1<j\le m_i$ and $1<l\le m_k$. Then the i.h.\ yields $(i,j-1)=(k,l-1)$ and we are done.\\[2mm] {\bf Case 2:} $1<j\le m_i$ and $l=1$. We show that this contradicts the assumption and is therefore impossible.\\[2mm] {\bf Subcase 2.1:} $k^\star\in\mathrm{dom}(\alphavec)$. 
Then we have ${\mathrm{rs}}_{i,j-1}(\alphavec)={\mathrm{rs}}_{k^\star}(\alphavec)$, and the i.h.\ yields $(i,j-1)=k^\star$. On the other hand, by assumption we have $\taucp{i,j}=\taucp{k,1}$, which implies that $(i,j)\le_\mathrm{\scriptscriptstyle{lex}} k^\star$. Contradiction. \\[2mm] {\bf Subcase 2.2:} $k^\star\not\in\mathrm{dom}(\alphavec)$. In this situation $k^\star=(1,0)$ is not possible since, by assumption, ${\mathrm{rs}}ij(\alphavec)$ and ${\mathrm{rs}}_{k,l}(\alphavec)$ have the same length. Therefore, ${\mathrm{rs}}_{k,1}(\alphavec)$ is of the form $(\upsilon_{\lambda_r+1},\ldots,\upsilon_{\lambda_r+t_r},\taucp{k,1})$ where $r\in\{1,\ldots,p\}$, $\lambda_r+t_r<\lambda_0$, ${\tau_k^\star}=1+\upsilon_{\lambda_r+t_r}$, and $\taucp{k,1}\in{\mathbb E}^{>{\tau_k^\star}}$, and hence ${\mathrm{rs}}_{i,j-1}(\alphavec)=(\upsilon_{\lambda_r+1},\ldots,\upsilon_{\lambda_r+t_r})$, which is impossible since $(i,j-1)\in\mathrm{dom}(\alphavec)$. \\[2mm] {\bf Case 3:} $j=1$ and $1<l\le m_k$. We argue as in Case 2 to show that this case does not occur.\\[2mm] {\bf Case 4:} $j=1$ and $l=1$. Then we have $\taucp{i,1}=\taucp{k,1}$ and need to show that $i=k$.\\[2mm] {\bf Subcase 4.1:} $i^\star,k^\star\in\mathrm{dom}(\alphavec)$. Then we have ${\mathrm{ts}}(\tilde{\tau}cp{i^\star})={\mathrm{ts}}(\tilde{\tau}cp{k^\star})$, and hence by the i.h.\ $i^\star=k^\star$. Let us assume to the contrary that $i<k$. This implies $(i,1)\le_\mathrm{\scriptscriptstyle{lex}} k^\star$, since due to $\kappa$-regularity of $\alphavec$ there exists $h\in\{i,\ldots,k-1\}$ such that $m_h>1$, hence $i^\star<_\mathrm{\scriptscriptstyle{lex}} k^\star$, contradicting the i.h. In the same way we see that $k<i$ cannot hold either.\\[2mm] {\bf Subcase 4.2:} $i^\star,k^\star\not\in\mathrm{dom}(\alphavec)$.
In this case ${\mathrm{rs}}_{i,1}(\alphavec)={\mathrm{rs}}_{k,1}(\alphavec)$ is either equal to $(\taucp{i,1})$ or of the form $(\upsilon_{\lambda_r+1},\ldots,\upsilon_{\lambda_r+t_r},\taucp{i,1})$ where $r\in\{1,\ldots,p\}$. Assuming that $i<k$, we would either have $m_i=\ldots=m_{k-1}=1$, which implies that $\taucp{i,1}=\ldots=\taucp{k,1}\in{\mathbb E}^{>\taucp{i,1}}$, contradicting $\taucp{i+1,1}<\taucp{i,1}$ by $\kappa$-regularity, or there would exist the least $h\in\{i,\ldots,k-1\}$ such that $m_h>1$, so that we would obtain $\taucp{k,1}=\taucp{i,1}\ge\ldots\ge\taucp{h,1}$ and hence $(h,1)\le_\mathrm{\scriptscriptstyle{lex}} k^\star$, which implies $k^\star\in\mathrm{dom}(\alphavec)$, contradicting our assumption. \\[2mm] {\bf Subcase 4.3:} $i^\star\in\mathrm{dom}(\alphavec)$ and $k^\star\not\in\mathrm{dom}(\alphavec)$. Then $k^\star=(1,0)$ is impossible (as in Subcase 2.2), and ${\mathrm{rs}}_{i,1}(\alphavec)$ is of the form $(\upsilon_{\lambda_r+1},\ldots,\upsilon_{\lambda_r+t_r},\taucp{i,1})$ where $r\in\{1,\ldots,p\}$. Hence ${\mathrm{rs}}_{i^\star}(\alphavec)=(\upsilon_{\lambda_r+1},\ldots,\upsilon_{\lambda_r+t_r})$ while $i^\star\in\mathrm{dom}(\alphavec)$, which is impossible as in Subcase 2.2. \\[2mm] {\bf Subcase 4.4:} $i^\star\not\in\mathrm{dom}(\alphavec)$ and $k^\star\in\mathrm{dom}(\alphavec)$. Then we argue as in Subcase 4.3. \mbox{ } $\Box$ Our intermediate goal is to establish an order isomorphism between tracking chains and their evaluations. To formulate the estimates on values of tracking chains that will allow us to reach that goal, we introduce a notion of \emph{depth} of a tracking chain, applying the function $\mathrm{dp}$ introduced in Definition \ref{kappadpdefi} in accordance with the concept of tracking chains.
\betagin{defi}[cf.\ 5.11 of \cite{CWc}]\lambdabel{dpfdefi} Let $\alphavec=\left(\alphaevec,\ldots,\alphanvec\right)\in{\operatorname{T}}C$ where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $i=1,\ldots,n$, with associated chain ${\vec{\tau}}$. $\mathrm{dp}(\alphavec)$ is defined as follows. Let $\tau:=\taucp{n,m_n}$ and ${\tau^\prime}:={\tau^\prime_n}$. Let further $\tilde{\tau}pr$ be the evaluation of ${\tau^\prime}$. We set ${\vec{\varsigma}}:={\mathrm{rs}}_{n,m_n-1}(\alphavec)$ and define \[\mathrm{dp}(\alphavec):=\left\{\betagin{array}{l@{\quad}l} \mathrm{dp}_{\vec{\varsigma}}(\tau) & \mbox{ if } m_n=1\\[2mm] \kappa^{\vec{\varsigma}}_{\rhoargs{{\tau^\prime}}{\tau}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{{\tau^\prime}}{\tau})+\check{\chi}^{\tau^\prime}(\tau)\cdot\tilde{\tau}pr & \mbox{ if } m_n>1\:\&\:\tau<\mu_{\tau^\prime}\\[2mm] \kappa^{\vec{\varsigma}}_{\lambda_{\tau^\prime}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_{\tau^\prime}) & \mbox{ if } m_n>1\:\&\:\tau=\mu_{\tau^\prime}. \end{array}\right.\] \end{defi} Note that ${\tau^\prime}$ is equal to $\taucp{n,m_n-1}$ if $m_n>1$ and to ${\tau_n^\star}$ if $m_n=1$, that $\tilde{\tau}pr$ is equal to $\tilde{\tau}cp{n,m_n-1}$ if $m_n>1$ and to $\tilde{\tau}cp{n^\star}$ if $m_n=1$, and that ${\vec{\varsigma}}$ as defined above is equal to ${\mathrm{ts}}(\tilde{\tau}pr)$ unless ${\tau_n^\star}=1+\upsilon_{\lambda_r}$ for some $r\le p$ (assuming $(\lambda_i,t_i)_{0\le i\le p}$ to be the sequence of $\upsilon$-segments of $\alphavec$), in which case we have ${\vec{\varsigma}}=()$. The sequence ${\vec{\varsigma}}$ is the setting of relativization for the $\kappa$-function enumerating $\alpha$-$\le_1$-minimal ordinals, where $\alpha:=\mathrm{o}(\alphavec)$. The technical Lemma 5.12 of \cite{CWc} carries over to tracking chains $\alphavec$ such that $\mathrm{o}(\alphavec)\not\in\mathrm{Im}(\upsilon)$, which is just what we need here. For the reader's convenience we present the content of Lemma 5.12 of \cite{CWc} in a way that may be easier to follow.
Each part is formulated separately and records an observation about the interplay between (maximal) extensions of tracking chains and the auxiliary function $\mathrm{dp}$, which is useful in the context of order isomorphisms between tracking chains and ordinals, in particular in the next subsection, Lemma \ref{tcassgnmntlem} and its corollary, where we provide an assignment of tracking chains to ordinals. Essentially, the consequences stated in the following lemma rest on the strict monotonicity of the $\kappa$- and $\nu$-functions verified in Corollary \ref{kappanuhzcor}. \betagin{lem}[cf.\ Lemma 5.12 of \cite{CWc}]\lambdabel{tcevalestimlem} Assume the setting of Definition \ref{dpfdefi}, and let $\alpha:=\mathrm{o}(\alphavec)$. \betagin{enumerate} \item In case of $m_n>1$ and $\tau<\mu_{\tau^\prime}$ we have \[\mathrm{o}(\alphavec)+\mathrm{dp}(\alphavec)=\mathrm{o}(\alphavec[\alphacp{n,m_n}+1]).\] \item $\mathrm{dp}(\alphavec)=0$ if and only if there does not exist any proper extension of $\alphavec$. \item If $\operatorname{ec}(\alphavec)$ exists, but $\operatorname{ec}(\alphavec)\not\in{\operatorname{T}}C$, then \[\taucp{n+1,1}=\taucp{\operatorname{cml}(\alphavec)}\quad\mbox{ and }\quad{\mathrm{end}}(\mathrm{dp}(\alphavec))=\tilde{\tau}cp{\operatorname{cml}(\alphavec)},\] where $\taucp{n+1,1}={\mathrm{end}}(\alphacp{n+1,1})$ and $\operatorname{ec}(\alphavec)=\alphavec^\frown(\alphacp{n+1,1})$. \item If $\alphavecpl:=\operatorname{ec}(\alphavec)\in{\operatorname{T}}C$ exists, there are the following three cases to consider. \betagin{enumerate} \item $m_n>1$, $\tau<\mu_{\tau^\prime}$, and $\chi^{\tau^\prime}(\tau)=0$. Then \[\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)<\mathrm{o}(\alphavec^\frown(\rhoargs{{\tau^\prime}}{\tau}+1))<\alpha +\mathrm{dp}(\alphavec).\] \item $m_n>1$, $\tau<\mu_{\tau^\prime}$, and $\chi^{\tau^\prime}(\tau)=1$.
Then $\operatorname{cml}(\operatorname{me}(\alphavec))=(n,m_n-1)$, $\operatorname{me}pl(\alphavec)\not\in{\operatorname{T}}C$, and \[\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)=\alpha +\mathrm{dp}(\alphavec).\] \item Otherwise. Then we again have \[\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)=\alpha +\mathrm{dp}(\alphavec).\] \end{enumerate} \item For a non-maximal $1$-step extension $\alphavecpl\not=\operatorname{ec}(\alphavec)$ of $\alphavec$ according to Definition \ref{tcextensiondefi}, $\alphavecpl$ is of one of the forms \[\alphavecpl=\alphavec^\frown(\alphacp{n+1,1})\quad\mbox{ or }\quad \alphavecpl=(\alphaevec,\ldots,\alphanminvec,\alphanvec^\frown\alphacp{n,m_n+1}),\] and we set $\alphacp{n,m_n+1}:=0$ if $\alphavecpl$ is of the former form, and $\alphacp{n+1,1}:=0$ if it is of the latter form. We define \[\alphavecpr:=\left\{\betagin{array}{l@{\quad}l} \alphavec^\frown(\alphacp{n+1,1}+1) & \mbox{ if } \alphacp{n,m_n+1}=0\\[2mm] (\alphaevec,\ldots,\alphanminvec,\alphanvec^\frown(\alphacp{n,m_n+1}+1)) & \mbox{ if } \alphacp{n+1,1}=0, \end{array}\right.\] and consider two cases.
\betagin{enumerate} \item $\alphavecpr\not\in{\operatorname{T}}C$: In this case we have $m_n>1$, $\tau=\mu_{\tau^\prime}\in{\mathbb E}\cap({\tau^\prime},\lambda_{\tau^\prime})$, $\alphacp{n,m_n+1}=\mu_\tau$, and \[\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)<\mathrm{o}(\alphavec^\frown(\tau+1))\le\alpha +\mathrm{dp}(\alphavec).\] \item $\alphavecpr\in{\operatorname{T}}C$: Then we have \[\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)\le\mathrm{o}(\alphavecpr)\le\alpha +\mathrm{dp}(\alphavec)\quad\mbox{and}\quad\mathrm{o}(\alphavecpl)+\mathrm{dp}(\alphavecpl)<\alpha +\mathrm{dp}(\alphavec).\] \end{enumerate} \item For any extension $\betavec$ of $\alphavec$ we have \betagin{enumerate} \item $\mathrm{o}(\betavec)+\mathrm{dp}(\betavec)\le\alpha +\mathrm{dp}(\alphavec)$ and \item $\mathrm{o}(\betavec)<\alpha +\mathrm{dp}(\alphavec)$ if $m_n>1$ and $\tau<\mu_{\tau^\prime}$. \end{enumerate} \item The ordinal $\mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))$ is calculated in the following three scenarios. 
\betagin{enumerate} \item If $m_n>1$, $\tau<\mu_{\tau^\prime}$, and $\chi^{\tau^\prime}(\tau)=1$ we have \[\mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))=\alpha+\mathrm{dp}(\alphavec)=\mathrm{o}(\alphavec[\alphacp{n,m_n}+1]).\] \item If $\alphavec$ does not possess a critical main line index pair $\operatorname{cml}(\alphavec)$ then $\mathrm{dp}(\operatorname{me}(\alphavec))=0$ and \[\mathrm{o}(\operatorname{me}(\alphavec))=\left\{\betagin{array}{l@{\quad}l} \alpha + \mathrm{dp}_{\vec{\varsigma}}(\tau)&\mbox{ if }m_n=1\\[2mm] \alpha + \kappa^{\vec{\varsigma}}_{\rhoargs{{\tau^\prime}}{\tau}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{{\tau^\prime}}{\tau})&\mbox{ if }m_n>1\mbox{ and }\tau<\mu_{\tau^\prime}\\[2mm] \alpha + \kappa^{\vec{\varsigma}}_{\lambda_{\tau^\prime}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_{\tau^\prime})&\mbox{ otherwise,} \end{array}\right.\] which only deviates from $\alpha+\mathrm{dp}(\alphavec)$ in the case $m_n>1\:\&\:\tau<\mu_{\tau^\prime}$. 
\item If $\operatorname{cml}(\alphavec)=:(i,j)$ exists then \betagin{eqnarray*} \mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))&=&\alpha+\mathrm{dp}(\alphavec)\\[2mm] &=&\left\{\betagin{array}{l@{\quad}l} \alpha + \mathrm{dp}_{\vec{\varsigma}}(\tau)&\mbox{ if }m_n=1\\[2mm] \alpha + \kappa^{\vec{\varsigma}}_{\rhoargs{{\tau^\prime}}{\tau}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{{\tau^\prime}}{\tau})&\mbox{ if }(n,m_n)=(i,j+1)\\[2mm] \alpha + \kappa^{\vec{\varsigma}}_{\lambda_{\tau^\prime}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_{\tau^\prime})&\mbox{ otherwise} \end{array}\right.\\[2mm] &=&\mathrm{o}(\alphavecrestrarg{(i,j+1)}[\alphacp{i,j+1}+1]), \end{eqnarray*} and \[\mathrm{dp}(\operatorname{me}(\alphavec))=\kappa^{\vec{\varsigma}}pr_{\taucp{i,j}(\xi+1)}\] where, say, $(r,k_r)$ is the $<_\mathrm{\scriptscriptstyle{lex}}$-greatest index pair of $\operatorname{me}(\alphavec)$, ${\vec{\varsigma}}pr:={\mathrm{rs}}_{r,k_r-1}(\operatorname{me}(\alphavec))$, and $\taucp{i,j}(\xi+1)$ for suitable $\xi$ is the extending index of $\operatorname{ec}(\operatorname{me}(\alphavec))$. \end{enumerate} \end{enumerate} \end{lem} {\bf Proof.} Part 1 follows from the definition of the $\nu$-functions. Part 2 is a direct consequence of the definitions of tracking chain and $\mathrm{dp}$. Part 3 is a consequence of Lemma \ref{cmlmaxextcor}, using Lemma \ref{evallem}, as carried out in detail in the beginning of the proof of Lemma 5.12 of \cite{CWc}. Part 4 follows from the definitions involved with the aid of the monotonicity of $\kappa$- and $\nu$-functions, and Lemma \ref{cmlmaxextcor} for part 4(b). Part 5 is shown as in the proof of Lemma 5.12 of \cite{CWc} using monotonicity of $\kappa$- and $\nu$-functions. 
Part 6 is shown by means of the preceding parts on $1$-step extensions, by induction on the number of $1$-step extensions from which $\betavec$ results (rather than by induction along $<_\mathrm{\scriptscriptstyle{lex}}$ on ${\mathrm{cs}}pr(\alphavec)$, as stated in the proof of Lemma 5.12 of \cite{CWc}). The first equality of part 7(a) follows from parts 6(a), 4(b), and 4(c), where each extension step is chosen maximally. In part 7(c) the last stated equality was already shown by part 1 of Lemma \ref{cmlmaxextcor}. Details of the verification of part 7 are given in the proof of Lemma 5.12 of \cite{CWc}. \mbox{ } $\Box$ \betagin{cor}\lambdabel{nuimagecor} Let $\alphavec\in{\cal R}S$, $\alphavec=(\alphae,\ldots,\alphan)$ where $n>0$, and $\alpha:=\mathrm{o}(\alphavec)$. The image of $\nu^\alphavec$ consists of multiples of $\alpha$. For $\beta\in\mathrm{dom}(\nu^\alphavec)$ we have $\nu^\alphavec_\beta\in{\mathbb P}^{>\alpha}$ if and only if $\beta\in{\mathbb P}^{>1}$. \end{cor} {\bf Proof.} Part 7(c) of the above Lemma \ref{tcevalestimlem} shows that in the only case in question, that is, for $\gamma\in\mathrm{dom}(\nu^\alphavec)$ such that $\chi^{\alphan}(\gamma)=1$, the last summand of $\nu^\alphavec_{\gamma+1}$ is $\alpha$, which is seen as follows. Let $\betavec:=\operatorname{me}((\alphavec^\frown\gamma))$ with $<_\mathrm{\scriptscriptstyle{lex}}$-greatest index pair $(r,k_r)$ and set ${\vec{\varsigma}}pr:={\mathrm{rs}}_{r,k_r-1}(\betavec)$. The last summand of $\nu^\alphavec_{\gamma+1}$ is then $\kappa^{\vec{\varsigma}}pr_\alphan=\kappa^\alphavec_\alphan=\alpha$, since $\alphavec\subseteq{\vec{\varsigma}}pr$ by Lemma \ref{cmlmaxextcor}. The second claim now readily follows. Note that $\nu^\alphavec_0=\alpha$ and $\nu^\alphavec_1=\alpha\cdot 2$. \mbox{ } $\Box$ \betagin{defi}[5.13 of \cite{CWc}] We define a linear ordering $\kappa^\tauc$\index{$<_tc$@$\kappa^\tauc$,$\le_\mathrm{TC}$} on ${\operatorname{T}}C$ as follows.
Let $\alphavec, \betavec\in{\operatorname{T}}C$ be given, say, of the form \[\alphavec=((\alphacp{1,1},\ldots,\alphacp{1,m_1}),\ldots,(\alphacp{n,1},\ldots,\alphacp{n,m_n}))\] and \[\betavec=((\betacp{1,1},\ldots,\betacp{1,k_1}),\ldots,(\betacp{l,1},\ldots,\betacp{l,k_l})).\] Let $(i,j)$ where $1\le i\le\min\sigmangleton{n,l}$ and $1\le j\le\min\sigmangleton{m_i,k_i}$ be $<_\mathrm{\scriptscriptstyle{lex}}$-maxi\-mal such that $\alphavecrestrarg{i,j}=\betavecrestrarg{i,j}$, if that exists, and $(i,j):=(1,0)$ otherwise. \betagin{eqnarray*} \alphavec\kappa^\tauc\betavec\quad:\Leftrightarrow&\quad&(i,j)=(n,m_n)\not=(l,k_l)\\ &\;\;\vee&(j<\min\sigmangleton{m_i,k_i}\:\&\:\alphacp{i,j+1}<\betacp{i,j+1})\\ &\;\;\vee&(j=m_i<k_i\:\&\: i<n\:\&\:\alphacp{i+1,1}<\taucp{i,j})\\ &\;\;\vee&(j=k_i<m_i\:\&\: i<l\:\&\:\taucp{i,j}<\betacp{i+1,1})\\ &\;\;\vee&(j=k_i=m_i\:\&\: i<\min\sigmangleton{n,l}\:\&\:\alphacp{i+1,1}<\betacp{i+1,1})\\[1ex] \alphavec\le_\mathrm{TC}\betavec\quad:\Leftrightarrow&&\alphavec\kappa^\tauc\betavec\:\vee\:\alphavec=\betavec. \end{eqnarray*} \end{defi} Note that in the above definition, the first disjunction term covers the case where $\alphavec$ is a proper initial chain of $\betavec$, the second covers the situation where at $(i,j+1)$ the tracking chain $\betavec$ branches into a component with larger $\nu$-index, and the third, fourth, and fifth disjunction term cover the situation where $\betavec$ branches into a component with larger $\kappa$-index. In this latter situation the third term applies to the scenario where the branching $\kappa$-index of $\betavec$ is not given explicitly but rather by continuation with a $\nu$-index, whereas in the fourth term the lower $\kappa$-index of $\alphavec$ is not given explicitly but by continuation with a $\nu$-index. 
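As a minimal illustration of the first disjunction term (relying on the example from the remark following Definition \ref{trchevaldefi}, according to which $((\varepsilonn,1))$ codes $\varepsilonn\cdot2$, and on the assumption that $((\varepsilonn))$ is the tracking chain coding $\varepsilonn$): since $((\varepsilonn))$ is a proper initial chain of $((\varepsilonn,1))$, we obtain
\[((\varepsilonn))\kappa^\tauc((\varepsilonn,1)),\]
in accordance with $\varepsilonn<\varepsilonn\cdot2$.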
\betagin{lem}[5.14 of \cite{CWc}]\lambdabel{tcorderisolem} For all $\alphavec,\betavec\in{\operatorname{T}}C$ we have \[\alphavec\kappa^\tauc\betavec\quad\Leftrightarrow\quad\mathrm{o}(\alphavec)<\mathrm{o}(\betavec).\] \end{lem} {\bf Proof.} We first observe that $({\operatorname{T}}C,\kappa^\tauc)$ is a linear ordering. The lemma then follows from the definitions of $\kappa^\tauc$ and $\mathrm{o}$ using the strict monotonicity of $\kappa$- and $\nu$-functions, Corollary \ref{kappanuhzcor}, and Lemma \ref{tcevalestimlem}, by verifying that $\mathrm{o}(\alphavec)<\mathrm{o}(\betavec)$ whenever $\alphavec,\betavec\in{\operatorname{T}}C$ such that $\alphavec\kappa^\tauc\betavec$. \mbox{ } $\Box$ \betagin{cor}[cf.\ 5.15 of \cite{CWc}] For any $\alpha\in{\mathrm{Ord}}$ there exists at most one tracking chain for $\alpha$.\mbox{ } $\Box$ \end{cor} \subsection{Assignment of tracking chains to ordinals}\lambdabel{tcassignmentsubsec} We will obtain an order isomorphism between $({\mathrm{Ord}},<)$ and $({\operatorname{T}}C,\kappa^\tauc)$ once we extend the inverse function ${\mathrm{tc}}$ from Definition 6.1 of \cite{CWc} to an assignment of tracking chains to all ordinals. The following adaptation of the notion of relative tracking sequence allows us to conveniently formulate the extension of the assignment of tracking chains from ordinals below $\upsilon_1$ to all ordinals. \betagin{defi}[cf.\ Definition 4.16 of \cite{CWc}]\lambdabel{relchaindefi} Let ${\vec{\tau}}=(\tau_1,\ldots,\tau_n)\in{\cal R}S$ and $\beta\in{\mathbb P}$, $\beta\le\kappa^{\vec{\tau}}_{\lambda_{\tau_n}}+\mathrm{dp}_{\vec{\tau}}(\lambda_{\tau_n})$ if $n>0$. Denote the $\upsilon$-segment of $\beta$ by $(\xi,u):=\upsilonseg(\beta)$ and set $\tau_0:=\upsilon_{\xi+u}=:\tilde{\tau}_0$. Let $k\in\sigmangleton{0,\ldots,n}$ be maximal such that $\mathrm{o}({\vec{\tau}}restrk)\le\beta$, and set $\tilde{\tau}_k:=\mathrm{o}({\vec{\tau}}restrk)$ if $k>0$.
The {\bf tracking sequence ${\mathrm{ts}}[{\vec{\tau}}](\beta)$ of $\beta$ relative to the reference sequence ${\vec{\tau}}$} is defined by \index{tracking sequence!tracking sequence of $\beta$ relative to $\alpha$}\index{${\mathrm{ts}}t$!${\mathrm{ts}}relal$} \[{\mathrm{ts}}[{\vec{\tau}}](\beta):=\left\{\betagin{array}{l@{\quad}l} {\mathrm{ts}}^{\tau_k}(\tau_k\cdot(1/\tilde{\tau}_k)\cdot\beta)&\mbox{ if } k>0\\[2mm] {\mathrm{ts}}^{\tau_0}(\beta)&\mbox{ if } k=0. \end{array}\right. \] \end{defi} {\bf Remark.} ${\mathrm{ts}}[{\vec{\tau}}]$ aims at a tracking sequence with starting point $\tilde{\tau}_k$ instead of $0$. In the above situation for ${\mathrm{ts}}[{\vec{\tau}}](\beta)$ to make sense, i.e.\ to be related to $\mathrm{o}({\vec{\tau}})$, we should have $\beta_1\le\lambda^{\tau_{k-1}}_{\tau_k}$ in case of $k>0$, where $\beta_1$ is the first element of ${\mathrm{ts}}[{\vec{\tau}}](\beta)$. It is easy to see (using Lemmas \ref{rhomulamestimlem}, \ref{citedinjtrslem}, and \ref{reltrsvallem}) that this holds if $k\in(0,n)$. However, in case of $k=n>0$ this holds if and only if $\beta\le\kappa^{\vec{\tau}}_{\lambda_{\tau_n}}+\mathrm{dp}_{\vec{\tau}}(\lambda_{\tau_n})$ as follows from Lemmas \ref{lamveclem} and \ref{reltrsvallem}. Note that ${\mathrm{ts}}[()]$ and ${\mathrm{ts}}$ are not equal. \betagin{lem}[cf.\ 4.17 of \cite{CWc}]\lambdabel{reltrslem} Let ${\vec{\tau}}=(\tau_1,\ldots,\tau_n)\in{\cal R}S$ and $\beta,\gamma\in{\mathbb P}$ such that $\beta,\gamma\le\kappa^{\vec{\tau}}_{\lambda_{\tau_n}}+\mathrm{dp}_{\vec{\tau}}(\lambda_{\tau_n})$ if $n>0$. Set $\tau_{n+1}:=\lambda_{\tau_n}+1$ if $n>0$ and $\tau_1:=\upsilon_{\xi+u+1}$ where $(\xi,u)=\upsilonseg(\beta)$ otherwise. \betagin{enumerate} \item[a)] With $k$ as in the above definition we have \[\tau_k\le\beta_1<\tau_{k+1}\] where $\beta_1$ is the first element of ${\mathrm{ts}}[{\vec{\tau}}](\beta)$. 
\item[b)] If $\beta<\gamma$ then \[{\mathrm{ts}}[{\vec{\tau}}](\beta)<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}[{\vec{\tau}}](\gamma).\] \end{enumerate} \end{lem} {\bf Proof.} The lemma is proved by application of Lemmas \ref{citedinjtrslem} and \ref{reltrsvallem}, using part a) to show part b). \mbox{ } $\Box$ Recall part 4 of Lemma \ref{rslem}, according to which ${\mathrm{ers}}ij(\alphavec)\in{\cal R}S$ for all $(i,j)\in\mathrm{dom}(\alphavec)$ where $\alphavec\in{\operatorname{T}}C$. The assignment of tracking chains to ordinals given below is based on tracking sequences relativized to such evaluation reference sequences. \betagin{defi}[cf.\ 6.1 of \cite{CWc}]\lambdabel{tcassignmentdefi} For $\alpha\in{\mathrm{Ord}}$ we define the {\bf\boldmath tracking chain assigned to\unboldmath} \index{tracking chain!tracking chain assigned to $\alpha$} $\alpha$, ${\mathrm{tc}}(\alpha)$\index{${\mathrm{tc}}$}, by recursion on the length of the additive decomposition of $\alpha$. We define ${\mathrm{tc}}(0):=((0))$, and if $\alpha\in{\mathbb P}$ we set ${\mathrm{tc}}(\alpha):=({\mathrm{ts}}(\alpha))$. Now suppose that ${\mathrm{tc}}(\alpha)=\alphavec=(\alphaevec,\ldots,\alphanvec)$ is the tracking chain already assigned to some $\alpha>0$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$, with associated chain ${\vec{\tau}}$, $(\lambda,t):=\upsilonseg(\alphavec)$, and segmentation parameters $p,s_l,(\lambda_l,t_l)$ for $l=1,\ldots,p$ as in Definition \ref{segmentationdefi}, and let $\beta\in{\mathbb P}$, $\beta\le{\mathrm{end}}(\alpha)$. For technical reasons, we set $\alphacp{n+1,1}:=0$ and $m_{n+1}:=1$. The definition of ${\mathrm{tc}}(\alpha+\beta)$, the tracking chain assigned to $\alpha+\beta$, requires the following preparations.
\betagin{itemize} \item For $(i,j)\in\mathrm{dom}(\alphavec)$ let \[(\betaucp{i,j}_1,\ldots,\betaucp{i,j}_{r_{i,j}}):={\mathrm{ts}}[{\mathrm{ers}}ij(\alphavec)](\beta).\] For the tracking chain of $\beta$ (according to Definition \ref{latrsdefi}) let \[(\beta_1,\ldots,\beta_r):={\mathrm{ts}}(\beta).\] \item Let $(i_0,j_0)$, where $1\le i_0\le n$ and $1\le j_0< m_{i_0}$, be $<_\mathrm{\scriptscriptstyle{lex}}$-maximal with \[\alphacp{i_0,j_0+1}<\mu_\taucp{i_0,j_0}\] if that exists, otherwise set $(i_0,j_0):=(1,0)$. \item Let $(k_0,l_0)$ be either $(1,0)$ or satisfy $1\le k_0\le n+1$ and $1\le l_0\le m_{k_0}$, so that $(k_0,l_0)$ is $<_\mathrm{\scriptscriptstyle{lex}}$-minimal with $(i_0,j_0)\le_\mathrm{\scriptscriptstyle{lex}}(k_0,l_0)$ and \betagin{enumerate} \item for all $k\in\sigmangleton{k_0,\ldots,n}$ we have \[\alphacp{k+1,1}+\betaucp{k,m_k}_1\ge\rho_k\] \item for all $k\in\sigmangleton{k_0,\ldots,n}$ and all $l\in\sigmangleton{1,\ldots,m_k-2}$ such that $(k_0,l_0)<_\mathrm{\scriptscriptstyle{lex}}(k,l)$ we have \[\taucp{k,l+1}+\betaucp{k,l+1}_1>\lambda_\taucp{k,l}.\] \end{enumerate} \end{itemize} {\bf Case 0:} $(k_0,l_0)=(1,0)$. Then ${\mathrm{tc}}(\alpha+\beta)$ is defined by \[((\alphacp{1,1}+\betaucp{1,1}_1,\betaucp{1,1}_2,\ldots,\betaucp{1,1}_\rcp{1,1})).\] {\bf Case 1:} $(i_0,j_0)=(k_0,l_0)\in\mathrm{dom}(\alphavec)$. Then we set $(i,j):=(i_0,j_0+1)$ and consider three subcases: \betagin{itemize} \item[{\bf 1.1:}] $\beta<\tilde{\tau}cp{i_0,j_0}$. Then ${\mathrm{tc}}(\alpha+\beta)$ is defined to be \[\alphavecrestrarg{i,j}^\frown\left(\rhoargs{\taucp{i,j-1}}{\taucp{i,j}}+\betaucp{i,j}_1,\betaucp{i,j}_2,\ldots,\betaucp{i,j}_\rcp{i,j}\right).\] \item[{\bf 1.2:}] $\beta=\tilde{\tau}cp{i_0,j_0}$. Then ${\mathrm{tc}}(\alpha+\beta)$ is defined by \[\alphavecrestrarg{i,j}[\alphacp{i,j}+1].\] \item[{\bf 1.3:}] $\beta>\tilde{\tau}cp{i_0,j_0}$. 
Then there is an $r_0\in(0,r)$ such that $\beta_{r_0}=\taucp{i_0,j_0}=\taucp{i,j-1}$, and ${\mathrm{tc}}(\alpha+\beta)$ is defined by \[\alphavecrestrarg{i-1}^\frown(\alphacp{i,1},\ldots,\alphacp{i,j-1}, \alphacp{i,j}+\beta_{r_0+1},\beta_{r_0+2},\ldots,\beta_r).\] \end{itemize} {\bf Case 2:} $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(k_0,l_0)$. Then there are the following subcases: \begin{itemize} \item[{\bf 2.1:}] $k_0=n+1$ and $\betaucp{n,m_n}_1=\taucp{n,m_n}\in{\mathbb E}^{>{\tau^\prime_n}}$. Then $\beta=\tilde{\tau}_{n,m_n}$, and ${\mathrm{tc}}(\alpha+\beta)$ is defined by \[\alphavecrestrarg{n-1}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},1).\] \item[{\bf 2.2:}] $k_0\le n$, $l_0\in\{1,\ldots,m_{k_0}-2\}$ and $\taucp{k,l}+\betaucp{k,l}_1\le\lambda_\taucp{k,l-1}$ for $(k,l):=(k_0,l_0+1)$. Then we define ${\mathrm{tc}}(\alpha+\beta)$ by \[\alphavecrestrarg{k,l}^\frown\left(\taucp{k,l}+\betaucp{k,l}_1,\betaucp{k,l}_2,\ldots,\betaucp{k,l}_\rcp{k,l}\right),\] provided this vector satisfies condition 2 of Definition \ref{trackingchaindefi}; otherwise we have $(i_0,j_0)\in\mathrm{dom}(\alphavec)$, $\beta=\tilde{\tau}_{i_0,j_0}$, and ${\mathrm{tc}}(\alpha+\beta)$ is defined as in Case 1.2. \item[{\bf 2.3:}] Otherwise. Then $k_0>i_0$, $l_0=1$, and $\alphacp{k+1,1}+\betaucp{k,m_k}_1<\rho_k$ for $k:=k_0-1$, and ${\mathrm{tc}}(\alpha+\beta)$ is defined by \[\alphavecrestrarg{k}^\frown\left(\alphacp{k+1,1}+\betaucp{k,m_k}_1,\betaucp{k,m_k}_2,\ldots,\betaucp{k,m_k}_\rcp{k,m_k}\right),\] provided this vector satisfies condition 2 of Definition \ref{trackingchaindefi}; otherwise we have $(i_0,j_0)\in\mathrm{dom}(\alphavec)$, $\beta=\tilde{\tau}_{i_0,j_0}$, and ${\mathrm{tc}}(\alpha+\beta)$ is defined as in Case 1.2. \end{itemize} \end{defi} \begin{rmk} Note that compared to \cite{CWc} we have changed the indexing of the $\betaucp{i,j}$-sequences: $\betaucp{i,j}$ here corresponds to $\betaucp{i,j-1}$ in \cite{CWc}.
Regarding part 1 of the definition of $(k_0,l_0)$, note that for $0<k<n$ by definition of ${\mathrm{ers}}$ we have ${\mathrm{ers}}_{k+1,1}(\alphavec)={\mathrm{ers}}_{k,m_k}(\alphavec)$. Writing the more intuitive ${\mathrm{ers}}_{k+1,1}(\alphavec)$ instead of ${\mathrm{ers}}_{k,m_k}(\alphavec)$ would require setting ${\mathrm{ers}}_{n+1,1}(\alphavec)$ to ${\mathrm{ers}}_{n,m_n}(\alphavec)$. Case 0 corresponds to the scenario in which adding $\beta$ to $\alpha$ means jumping into a larger ($\upsilon_\lambda$-)$\le_1$-connectivity component; it was included in Case 1.3 in \cite{CWc}. Since in our more general setting we no longer have $\tilde{\tau}_{1,0}=0$, as was the case in \cite{CWc}, we cover this case separately. Note that in Case 0 we use the tracking sequence $\betaucp{1,1}={\mathrm{ts}}^{\upsilon_{\xi+u}}(\beta)$ where $(\xi,u)=\upsilonseg(\beta)$. Case 1.3 now solely covers the situation of jumping into a larger (non-trivial) $\le_2$-connectivity component on the surrounding main line. In Case 1.3 we use ${\mathrm{ts}}(\beta)$ for the assignment. Case 2.1 takes care of condition 2 of $\kappa$-regularity, Definition \ref{kapparegdefi}. \end{rmk} \begin{lem}[cf.\ 6.2 of \cite{CWc}]\label{tcassgnmntlem} Let $\alpha\in{\mathrm{Ord}}$. \begin{enumerate} \item[a)] ${\mathrm{tc}}(\alpha)\in{\operatorname{TC}}$, i.e.\ ${\mathrm{tc}}(\alpha)$ meets all conditions of Definition \ref{trackingchaindefi}. \item[b)] There exists exactly one tracking chain for $\alpha$, namely ${\mathrm{tc}}(\alpha)$, which satisfies $\mathrm{o}({\mathrm{tc}}(\alpha))=\alpha$. \end{enumerate} \end{lem} {\bf Proof.} By adaptation of the proof of Lemma 6.2 of \cite{CWc}, including several corrections. Note that the proof in \cite{CWc} actually proceeds by induction on the length of the additive decomposition of $\alpha$, rather than by induction on $\alpha$, as was stated there.
The proof extensively utilizes Lemma \ref{tcevalestimlem} and the monotonicity of the $\kappa$- and $\nu$-functions. The case $\alpha=0$ is trivial, and using Lemma \ref{trsvallem} we see that the claims hold whenever $\alpha\in{\mathbb P}$. Now suppose the claims have been shown for some $\alpha>0$ with assigned tracking chain ${\mathrm{tc}}(\alpha)=\alphavec$ as in the definition, and suppose $\beta\le{\mathrm{end}}(\alpha)$. We adopt the terminology of the previous definition and commence proving the inductive step for $\alpha+\beta$ by showing the following preparatory claims. \begin{claim}\label{tcassgnclmone} If $(i_0,j_0)\not=(1,0)$ and $\beta\le\tilde{\tau}_{i_0,j_0}$ then $\betaucp{i_0,j_0+1}_1\le\taucp{i_0,j_0}$. Equality holds if and only if $\beta=\tilde{\tau}_{i_0,j_0}$. \end{claim} In order to show the claim, let us assume that $(i_0,j_0)\not=(1,0)$ and $\beta\le\tilde{\tau}_{i_0,j_0}$. In the case $\betaucp{i_0,j_0+1}_1=\taucp{i_0,j_0}$ the assumption implies $r_{i_0,j_0+1}=1$ and $\beta=\tilde{\tau}_{i_0,j_0}$. On the other hand, in the case $\beta=\tilde{\tau}_{i_0,j_0}$ we clearly have ${\mathrm{ts}}[{\mathrm{ers}}_{i_0,j_0+1}(\alphavec)](\beta)=(\taucp{i_0,j_0})$. Now assume $\beta<\tilde{\tau}_{i_0,j_0}$. Write ${\vec{\sigma}}=(\sigma_1,\ldots,\sigma_s)$ for the sequence ${\mathrm{ts}}(\tilde{\tau}_{i_0,j_0})$, which is equal to the reference sequence ${\mathrm{rs}}_{i_0,j_0}(\alphavec)$, so that $\tilde{\tau}_{i_0,j_0}=\mathrm{o}({\vec{\sigma}})$ and $\taucp{i_0,j_0}=\sigma_s$. Let $r<s$ be maximal such that $\tilde{\sigma}_r:=\mathrm{o}({\vec{\sigma}}{\scriptscriptstyle{\restriction {r}}})\le\beta$.
If $r>0$, setting $\beta':=\sigma_r\cdot(1/\tilde{\sigma}_r)\cdot\beta$, we have $\beta'\le\beta$ and, by Lemma \ref{citedinjtrslem}, \[\betaucp{i_0,j_0+1}={\mathrm{ts}}^{\sigma_r}(\beta')<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}^{\sigma_r}(\tilde{\tau}_{i_0,j_0})=(\sigma_{r+1},\ldots,\sigma_s),\] whence $\betaucp{i_0,j_0+1}_1<\taucp{i_0,j_0}$. If $r=0$ and hence $\beta<\tilde{\sigma}_1$, let $(\xi,u):=\upsilonseg(\beta)$, so that \[\betaucp{i_0,j_0+1}={\mathrm{ts}}^{\upsilon_{\xi+u}}(\beta)<_\mathrm{\scriptscriptstyle{lex}}{\vec{\sigma}}\] and thus $\betaucp{i_0,j_0+1}_1<\taucp{i_0,j_0}$, using again Lemma \ref{citedinjtrslem} if necessary. This concludes the proof of Claim \ref{tcassgnclmone}.\mbox{ } $\Box$ \begin{claim}\label{tcassgnclmtwo} If $(i_0,j_0)\not=(1,0)$ and $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=1$ then $\beta\le\tilde{\tau}_{i_0,j_0}$ implies $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(k_0,l_0)$. \end{claim} For the proof of this claim assume $(i_0,j_0)\not=(1,0)$, $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=1$, $\beta\le\tilde{\tau}_{i_0,j_0}$, and let $(i,j)$ be the $\le_\mathrm{\scriptscriptstyle{lex}}$-maximal index pair such that $\alphavecrestrarg{(i,j)}$ is a common initial chain of $\alphavec$ and $\gammavec:=\operatorname{me}(\alphavecrestrarg{i_0,j_0+1})$, hence $(i_0,j_0+1)\le_\mathrm{\scriptscriptstyle{lex}}(i,j)$. By Lemma \ref{cmlmaxextcor} we know that $\operatorname{ec}(\gammavec)$ and hence $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ exists. Note also that by part 2 of Lemma \ref{cmlmaxextcor} we have ${\mathrm{ers}}_{i_0,j_0+1}(\alphavec)\subseteq{\mathrm{ers}}_{i,j}(\alphavec)$, which due to the assumption $\beta\le\tilde{\tau}_{i_0,j_0}$ entails $\betaucp{i,j}=\betaucp{i_0,j_0+1}$, and hence \[\betaucp{i,j}_1\le\taucp{i_0,j_0}\] according to Claim \ref{tcassgnclmone}.
In order to derive a contradiction we now assume that $(i_0,j_0)=(k_0,l_0)$ and discuss the possible cases in the definition of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$. For convenience of notation we set $\tau:=\taucp{i,j}$ and ${\tau^\prime}:=\taucppr{i,j}$. \\[2mm] {\bf Case 1:} $j=1$. Then we have $m_i=1$ by the maximality of $(i_0,j_0)$ and $(i,j)$, hence $i>i_0$ and $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$. \\[2mm] {\bf Subcase 1.1:} ${\tau^\prime}<\tau\in{\mathbb E}$. By part 2 of Lemma \ref{cmlmaxextcor} and the assumption $k_0=i_0<i$ we then have \[\taucp{i_0,j_0}<\tau=\log((1/{\tau^\prime})\cdot\tau)<\rho_i\le\alphacp{i+1,1}+\betaucp{i,1}_1.\] However, we have already seen that $\betaucp{i,1}_1\le\taucp{i_0,j_0}$, and by part 2 of Definition \ref{kapparegdefi} regarding $\kappa\nu\upsilon$-regularity of tracking chains we have $\alphacp{i+1,1}<\tau$, whence $\alphacp{i+1,1}+\betaucp{i,1}_1<\rho_i$. Contradiction. \\[2mm] {\bf Subcase 1.2:} Otherwise. Then by the maximality of $(i,j)$ the index $\alphacp{i+1,1}$ is strictly less than $\log((1/{\tau^\prime})\cdot\tau)$, which is the extending index of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ and, according to Lemma \ref{cmlmaxextcor}, a proper (non-zero) multiple of $\taucp{i_0,j_0}$. We run into the same contradiction as in Subcase 1.1. \\[2mm] {\bf Case 2:} $j>1$. Then ${\tau^\prime}=\taucp{i,j-1}$. \\[2mm] {\bf Subcase 2.1:} ${\tau^\prime}<\tau\in{\mathbb E}$. \\[2mm] {\bf 2.1.1:} $\tau=\mu_{\tau^\prime}<\lambda_{\tau^\prime}$. Then $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$, which implies $(k_0,l_0)<_\mathrm{\scriptscriptstyle{lex}}(i,j-1)$. The extending index of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ is then $\lambda_{\tau^\prime}$, a proper multiple of $\taucp{i_0,j_0}$.
If $j<m_i$ we obtain the contradiction $\tau+\betaucp{i,j}_1\le\lambda_{\tau^\prime}$, otherwise we obtain the contradiction $\alphacp{i+1,1}+\betaucp{i,j}_1<\rho_i=\lambda_{\tau^\prime}+1$ in a similar fashion as in Case 1. \\[2mm] {\bf 2.1.2:} Otherwise. The extending index of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ is then $\mu_\tau$, and $m_i=j$. By part 2 of Definition \ref{kapparegdefi}, i.e.\ $\kappa\nu\upsilon$-regularity of tracking chains, $\alphacp{i+1,1}\not=\tau$. By the assumptions of this case and using Lemma \ref{cmlmaxextcor} we have $\taucp{i_0,j_0}<\tau$. We first consider the case $(i,j)=(i_0,j_0+1)$. Then $\alphacp{i+1,1}<\tau$ and $\rho_i=\tau+1$. We obtain the contradiction $\alphacp{i+1,1}+\betaucp{i,j}_1<\rho_i$. Now assume $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$. Again we have $\rho_i=\tau+1$, $\alphacp{i+1,1}<\tau$, and we run into the same contradiction. \\[2mm] {\bf Subcase 2.2:} Otherwise. Then again $m_i=j$. \\[2mm] {\bf 2.2.1:} $\tau<\mu_{\tau^\prime}$. This can only occur if $(i,j)=(i_0,j_0+1)$, thus ${\tau^\prime}=\taucp{i_0,j_0}$ and $\tau=\taucp{i_0,j_0+1}$. The extending index of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ is (the $\kappa$-index) $\rhoargs{{\tau^\prime}}{\tau}$, a proper multiple of $\taucp{i_0,j_0}$, and $\rho_i=\rhoargs{{\tau^\prime}}{\tau}+1$. We are then confronted with the contradiction $\alphacp{i+1,1}+\betaucp{i,j}_1<\rho_i$. \\[2mm] {\bf 2.2.2:} Otherwise, that is, $\tau=\mu_{\tau^\prime}$. This implies $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$, and the extending index of $\operatorname{ec}(\alphavecrestrarg{(i,j)})$ is $\lambda_{\tau^\prime}$ which again is a proper multiple of $\taucp{i_0,j_0}$. Thus $\alphacp{i+1,1}+\betaucp{i,j}_1\le\lambda_{\tau^\prime}<\lambda_{\tau^\prime}+1=\rho_i$. Contradiction.\\[2mm] Our assumption $(i_0,j_0)=(k_0,l_0)$ therefore cannot hold true, which concludes the proof of Claim \ref{tcassgnclmtwo}. 
\mbox{ } $\Box$ \\[2mm] We are now prepared to verify the lemma for each of the seven clauses of the assignment of ${\mathrm{tc}}(\alpha+\beta)$ to $\alpha+\beta$. We check that ${\mathrm{tc}}(\alpha+\beta)\in{\operatorname{TC}}$ and take a first step toward calculating its ordinal value $\mathrm{o}({\mathrm{tc}}(\alpha+\beta))$. A uniform argument exploiting the careful choice of the index pair $(k_0,l_0)$ will then be given in the last part of this proof to complete the treatment of the individual clauses. \\[2mm] {\bf Case 0:} $(k_0,l_0)=(1,0)$. Then $(i_0,j_0)=(1,0)$, hence all $\nu$-indices of $\alphavec$ are maximal, i.e.\ given by the $\mu$-operator. Letting $(\xi,u):=\upsilonseg(\beta)$ and applying either Lemma \ref{trsvallem} in the case $u=0$ or Lemma \ref{reltrsvallem} to the reference sequence $(\upsilon_{\xi+1},\ldots,\upsilon_{\xi+u})$ if $u>0$, according to part 7(b) of Lemma \ref{tcevalestimlem} we obtain \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(1,1)}))+\beta, \end{eqnarray*} and will show later that this is equal to $\alpha+\beta$. \\[2mm] {\bf Case 1:} $(i_0,j_0)=(k_0,l_0)\in\mathrm{dom}(\alphavec)$. Here we have $\alphacp{i_0,j_0+1}<\mu_\taucp{i_0,j_0}$. Let ${\vec{\varsigma}}:={\mathrm{ers}}(\taucp{i_0,j_0+1})$, which according to Lemma \ref{evallem} is equal to ${\mathrm{ts}}(\tilde{\tau}_{i_0,j_0})$. \\[2mm] {\bf Subcase 1.1:} $\beta<\tilde{\tau}_{i_0,j_0}$. Then by Claim \ref{tcassgnclmone} we have $\betaucp{i_0,j_0+1}_1<\taucp{i_0,j_0}$, and Claim \ref{tcassgnclmtwo} yields $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=0$, hence ${\mathrm{tc}}(\alpha+\beta)\in{\operatorname{TC}}$. Since $\alphacp{i_0+1,1}+\betaucp{i_0,m_{i_0}}_1\ge\rho_{i_0}$, we must have $m_{i_0}>j_0+1$ and hence $\rhoargs{\taucp{i_0,j_0}}{\taucp{i_0,j_0+1}}=\taucp{i_0,j_0+1}$.
According to part 7(b) of Lemma \ref{tcevalestimlem} and Lemmas \ref{trsvallem} and \ref{reltrsvallem} we have \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{(i_0,j_0+1)})+\mathrm{dp}_{\vec{\varsigma}}(\taucp{i_0,j_0+1})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i_0,j_0+1)}))+\beta. \end{eqnarray*} It remains to be shown that this is equal to $\alpha+\beta$. \\[2mm] {\bf Subcase 1.2:} $\beta=\tilde{\tau}_{i_0,j_0}$. Again, by Claim \ref{tcassgnclmone} we have $\betaucp{i_0,j_0+1}=(\taucp{i_0,j_0})$, and by Claim \ref{tcassgnclmtwo} $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=0$. ${\mathrm{tc}}(\alpha+\beta)\in{\operatorname{TC}}$ is immediate. We compute similarly as above \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{(i_0,j_0+1)})+\kappa^{\vec{\varsigma}}_{\rhoargs{}{\taucp{i_0,j_0+1}}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{}{\taucp{i_0,j_0+1}})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i_0,j_0+1)}))+\beta, \end{eqnarray*} and again it remains to be shown that this is equal to $\alpha+\beta$. \\[2mm] {\bf Subcase 1.3:} $\beta>\tilde{\tau}_{i_0,j_0}$. Making use of Lemma \ref{tcevalestimlem} we observe that \[\tilde{\tau}_{i_0,j_0}<\beta\le{\mathrm{end}}(\alpha)={\mathrm{end}}(\alphaticp{n,m_n})<\nu^{\vec{\varsigma}}_{\mu_\taucp{i_0,j_0}},\] which, since ${\mathrm{ts}}(\nu^{\vec{\varsigma}}_{\mu_\taucp{i_0,j_0}})={\vec{\varsigma}}^\frown\mu_\taucp{i_0,j_0}$ by Lemma \ref{trsvallem}, implies by Lemma \ref{citedinjtrslem} that \[{\vec{\varsigma}}<_\mathrm{\scriptscriptstyle{lex}}{\mathrm{ts}}(\beta)<_\mathrm{\scriptscriptstyle{lex}}{\vec{\varsigma}}^\frown\mu_\taucp{i_0,j_0},\] whence ${\vec{\varsigma}}$ is a proper initial segment of ${\mathrm{ts}}(\beta)$. Thus there is an $r_0\in(0,r)$ such that $\taucp{i_0,j_0}=\beta_{r_0}$. We now see that ${\mathrm{tc}}(\alpha+\beta)\in{\operatorname{TC}}$.
We will apply Lemma \ref{trsvallem} for the evaluation of ${\mathrm{ts}}(\beta)$, considering two cases.\\[2mm] {\bf 1.3.1:} $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=0$. Then by part 7(b) of Lemma \ref{tcevalestimlem} we have \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i_0,j_0+1)}))+\beta. \end{eqnarray*} {\bf 1.3.2:} $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=1$. Then by part 7(a) of Lemma \ref{tcevalestimlem} we have \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{(i_0,j_0+1)}[\alphacp{i_0,j_0+1}+1])+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i_0,j_0+1)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i_0,j_0+1)}))+\beta. \end{eqnarray*} We leave the task of showing that this is equal to $\alpha+\beta$ for later. \\[2mm] {\bf Case 2:} $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(k_0,l_0)$. \\[2mm] {\bf Subcase 2.1:} $k_0=n+1$ and $\betaucp{n,m_n}_1=\taucp{n,m_n}\in{\mathbb E}^{>{\tau^\prime_n}}$. Since $\beta\le\tilde{\tau}_{n,m_n}$ we then have $\beta=\tilde{\tau}_{n,m_n}$, and ${\mathrm{tc}}(\alpha+\beta)\in{\operatorname{TC}}$ is clear. Since $k_0=n+1$ we have $\taucp{n,m_n}<\rho_n$, and realizing that $-\tilde{\tau}_{n,m_n}+\nu^{{\mathrm{rs}}_{n,m_n}(\alphavec)}_1=\tilde{\tau}_{n,m_n}$ we obtain \[\mathrm{o}({\mathrm{tc}}(\alpha+\beta))=\alpha+\beta.\] {\bf Subcase 2.2:} $k_0\le n$, $l_0\in\{1,\ldots,m_{k_0}-2\}$ and $\taucp{k,l}+\betaucp{k,l}_1\le\lambda_\taucp{k,l-1}$ for $(k,l):=(k_0,l_0+1)$. \\[2mm] {\bf 2.2.1:} $\alphavecrestrarg{(k,l)}^\frown\left(\taucp{k,l}+\betaucp{k,l}_1,\betaucp{k,l}_2,\ldots,\betaucp{k,l}_\rcp{k,l}\right)$ satisfies condition 2 of Definition \ref{trackingchaindefi} for tracking chains. Then this vector defines ${\mathrm{tc}}(\alpha+\beta)$ and is easily seen to be a tracking chain.
Note that since $\taucp{k,l}=\mu_\taucp{k,l-1}\in{\mathbb E}^{>\taucp{k,l-1}}\cap\lambda_\taucp{k,l-1}$ and $\alphacp{k,l+1}=\taucp{k,l+1}=\mu_{\taucp{k,l}}$ we have $\alphavecrestrarg{k,l+1}<_\mathrm{\scriptscriptstyle{lex}}\operatorname{ec}(\alphavecrestrarg{k,l})$, implying that $\alphavecrestrarg{k,l+1}$ does not possess a critical main line index pair. Part 7(b) of Lemma \ref{tcevalestimlem} therefore yields, setting ${\vec{\varsigma}}:={\mathrm{ers}}_{k,l+1}(\alphavec)$, \[\mathrm{o}(\operatorname{me}(\alphavecrestrarg{k,l+1}))= \mathrm{o}(\alphavecrestrarg{k,l+1})+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{k,l}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{k,l}).\] Setting ${\vec{\varsigma}}':={\mathrm{ers}}_{k,l}(\alphavec)$, we now compute, using either Lemma \ref{trsvallem} or Lemma \ref{reltrsvallem} as in Case 0, \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{k,l})+\mathrm{dp}_{{\vec{\varsigma}}'}(\taucp{k,l})+\beta\\ &=&\mathrm{o}(\alphavecrestrarg{k,l})+\nu^{\vec{\varsigma}}_\taucp{k,l+1}+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{k,l}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{k,l})+\beta\\ &=&\mathrm{o}(\alphavecrestrarg{k,l+1})+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{k,l}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{k,l})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{k,l+1}))+\beta, \end{eqnarray*} and leave the task of showing this to be equal to $\alpha+\beta$ for later. \\[2mm] {\bf 2.2.2:} Otherwise. Then ${\mathrm{tc}}(\alpha+\beta)=\alphavecrestrarg{i,j}[\alphacp{i,j}+1]\in{\operatorname{TC}}$ where $(i,j):=(i_0,j_0+1)$. According to the assumptions defining this case we have $r_{k,l}=1$, $\betaucp{k,l}_1=\taucp{i_0,j_0}$, and $\taucp{k,l}+\betaucp{k,l}_1=\lambda_\taucp{k,l-1}$, which is the extending index of $\operatorname{ec}(\alphavecrestrarg{k,l})\not\in{\operatorname{TC}}$, and thus $\operatorname{me}(\alphavecrestrarg{i,j})=\alphavecrestrarg{k,l}$.
Defining ${\vec{\varsigma}}'$ and ${\vec{\varsigma}}$ as in the previous Subcase 2.2.1, setting ${\vec{\varsigma}}'':={\mathrm{ers}}_{i,j}(\alphavec)$, and noticing that $\mathrm{dp}_{{\vec{\varsigma}}'}(\lambda_\taucp{k,l-1})=0$, Lemmas \ref{trsvallem}, \ref{reltrsvallem}, and part 7(c) of Lemma \ref{tcevalestimlem} justify the computation \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{i,j})+\kappa^{{\vec{\varsigma}}''}_{\rhoargs{}{\taucp{i,j}}}+ \mathrm{dp}_{{\vec{\varsigma}}''}(\rhoargs{}{\taucp{i,j}})\\ &=&\mathrm{o}(\alphavecrestrarg{k,l})+\kappa^{{\vec{\varsigma}}'}_{\lambda_\taucp{k,l-1}}\\ &=&\mathrm{o}(\alphavecrestrarg{k,l})+\mathrm{dp}_{{\vec{\varsigma}}'}(\taucp{k,l})+\beta\\ &=&\mathrm{o}(\alphavecrestrarg{k-1}^\frown(\alphacp{k,1},\ldots,\alphacp{k,l},\mu_\taucp{k,l}))+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{k,l}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{k,l})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{k,l+1}))+\beta, \end{eqnarray*} where the last equality holds by part 7(b) of Lemma \ref{tcevalestimlem}, since the tracking chain $\alphavecrestrarg{k,l+1}=\alphavecrestrarg{k-1}^\frown(\alphacp{k,1},\ldots,\alphacp{k,l},\mu_\taucp{k,l})$ does not possess a critical main line index pair. That this is equal to $\alpha+\beta$ will be shown later. \\[2mm] {\bf Subcase 2.3:} Otherwise. Then $k_0>i_0$, $l_0=1$, and $\alphacp{k+1,1}+\betaucp{k,m_k}_1<\rho_k$ for $k:=k_0-1$. \\[2mm] {\bf 2.3.1:} The vector $\alphavecrestrarg{k}^\frown\left(\alphacp{k+1,1}+\betaucp{k,m_k}_1,\betaucp{k,m_k}_2,\ldots,\betaucp{k,m_k}_\rcp{k,m_k}\right)$ satisfies condition 2 of Definition \ref{trackingchaindefi} for tracking chains. Then ${\mathrm{tc}}(\alpha+\beta)$ is defined by this vector and is easily seen to be a tracking chain, since we have already handled Subcase 2.1. Let us first assume that $k=n$.
Using Lemma \ref{trsvallem} or Lemma \ref{reltrsvallem} as before, we then have \[\mathrm{o}({\mathrm{tc}}(\alpha+\beta))=\alpha+\beta.\] Now we suppose $k<n$. We observe that $\alphavecrestrarg{k+1,1}$ does not possess a critical main line index pair since $\alphacp{k+1,1}<\rho_k\minusp1$, and it is only possible to have $m_k>1$ and $\taucp{k,m_k}<\mu_{\tau^\prime_k}$ if $(k,m_k)=(i_0,j_0+1)$. Now Lemmas \ref{trsvallem}, \ref{reltrsvallem}, and part 7(b) of Lemma \ref{tcevalestimlem} yield, setting ${\vec{\varsigma}}:={\mathrm{ers}}_{k,m_k}(\alphavec)$, \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{k+1,1})+\mathrm{dp}_{\vec{\varsigma}}(\taucp{k+1,1})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{k+1,1}))+\beta, \end{eqnarray*} which will be shown to be equal to $\alpha+\beta$. \\[2mm] {\bf 2.3.2:} Otherwise. Then ${\mathrm{tc}}(\alpha+\beta)=\alphavecrestrarg{i,j}[\alphacp{i,j}+1]\in{\operatorname{TC}}$ where $(i,j):=(i_0,j_0+1)$. In this final case we have $r_{k,m_k}=1$, $\betaucp{k,m_k}=(\taucp{i,j-1})$, $\operatorname{me}(\alphavecrestrarg{i,j})=\alphavecrestrarg{k}$, and $\alphacp{k+1,1}+\taucp{i,j-1}=\rho_k\minusp1$ is the extending index of $\operatorname{ec}(\alphavecrestrarg{k})\not\in{\operatorname{TC}}$. The assumption $m_k>1$ and $\taucp{k,m_k}<\mu_{\tau^\prime_k}$ would imply $(k,m_k)=(i,j)$, but since $\operatorname{ec}(\alphavecrestrarg{k})\not\in{\operatorname{TC}}$, this would contradict Lemma \ref{cmlmaxextcor}, according to which $\operatorname{ec}(\alphavecrestrarg{i,j})\in{\operatorname{TC}}$. We therefore either have $\rho_k\minusp1=\log((1/{\tau_k^\star})\cdot\taucp{k,1})$ with $k>i$ in the case $m_k=1$, or we have $\rho_k\minusp1=\lambda_{\tau^\prime_k}$ with $\taucp{k,m_k}=\mu_{\tau^\prime_k}$ in the case $m_k>1$. Let ${\vec{\varsigma}}:={\mathrm{ers}}_{k,m_k}(\alphavec)$. The index $\alphacp{k+1,1}$ must be a multiple (possibly zero in the case $k=n$) of $\taucp{i,j-1}$, which is seen as follows.
Assume we had $0<\taucp{k+1,1}<\taucp{i,j-1}$. Then $k<n$, and by part 6 of Lemma \ref{tcevalestimlem}, since $\alphavec$ is an extension of $\alphavecrestrarg{k+1,1}$, \[\mathrm{o}(\alphavecrestrarg{k+1,1})\le\alpha\le\mathrm{o}(\alphavecrestrarg{k+1,1})+\mathrm{dp}_{\vec{\varsigma}}(\taucp{k+1,1})\] and therefore by monotonicity of $\kappa^{\vec{\varsigma}}$ \[\tilde{\tau}_{k+1,1}=\kappa^{\vec{\varsigma}}_\taucp{k+1,1}\le\kappa^{\vec{\varsigma}}_\taucp{k+1,1}+\mathrm{dp}_{\vec{\varsigma}}(\taucp{k+1,1})<\kappa^{\vec{\varsigma}}_{\taucp{k+1,1}+1} <\kappa^{\vec{\varsigma}}_\taucp{i,j-1}=\tilde{\tau}_{i,j-1}=\beta,\] so that either $\mathrm{o}(\alphavecrestrarg{k+1,1})=\alpha$ and ${\mathrm{end}}(\alpha)=\tilde{\tau}_{k+1,1}<\beta$ or $\mathrm{o}(\alphavecrestrarg{k+1,1})<\alpha$ and ${\mathrm{end}}(\alpha)\le\mathrm{dp}_{\vec{\varsigma}}(\taucp{k+1,1})<\beta$, contradicting the assumption $\beta\le{\mathrm{end}}(\alpha)$. We are now prepared for another twofold application of Lemma \ref{tcevalestimlem}, first of part 7(c), then of part 7(b). In the case $k=n$ we are finished with the second equation, while otherwise we continue the computation as shown, where again ${\vec{\varsigma}}'':={\mathrm{ers}}_{i,j}(\alphavec)$. \begin{eqnarray*} \mathrm{o}({\mathrm{tc}}(\alpha+\beta))&=&\mathrm{o}(\alphavecrestrarg{i,j})+\kappa^{{\vec{\varsigma}}''}_{\rhoargs{}{\taucp{i,j}}}+\mathrm{dp}_{{\vec{\varsigma}}''}(\rhoargs{}{\taucp{i,j}})\\ &=&\mathrm{o}(\alphavecrestrarg{k})+\kappa^{\vec{\varsigma}}_\alphacp{k+1,1}+\mathrm{dp}_{\vec{\varsigma}}(\alphacp{k+1,1})+\beta\\ &=&\mathrm{o}(\alphavecrestrarg{k+1,1})+\mathrm{dp}_{\vec{\varsigma}}(\taucp{k+1,1})+\beta\\ &=&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{k+1,1}))+\beta \end{eqnarray*} which in the case $k<n$ will be shown below to be equal to $\alpha+\beta$. \\[2mm] We are now going to show the equalities left open in the individual cases. Notice that all cases where $k_0=n+1$ have already been completed.
We therefore assume $k_0\le n$ from now on, whence $\betaucp{n,m_n}_1\ge\rho_n$. In the first step we show that \begin{equation}\label{mealeq} \mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))+\beta=\alpha+\beta. \end{equation} We have to consider three cases, in each of which we use Lemma \ref{tcevalestimlem}. Let ${\vec{\varsigma}}:={\mathrm{ers}}_{n,m_n}(\alphavec)$. \\[2mm] {\bf Case A:} $m_n=1$. Then $\alpha\le\mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))=\alpha+\mathrm{dp}_{{\vec{\varsigma}}^\star}(\taucp{n,1})$ according to part 7 of Lemma \ref{tcevalestimlem}, where ${\vec{\varsigma}}^\star:={\mathrm{rs}}_{n^\star}(\alphavec)$ is equal to ${\mathrm{rs}}_{n,0}(\alphavec)$ and hence agrees with Definition \ref{dpfdefi}. Since $\rho_n=\log((1/{\tau^\prime_n})\cdot\taucp{n,1})+1$, we have \[\mathrm{dp}_{{\vec{\varsigma}}^\star}(\taucp{n,1})=\kappa^{\vec{\varsigma}}_{\log((1/{\tau^\prime_n})\cdot\taucp{n,1})}+\mathrm{dp}_{\vec{\varsigma}}(\log((1/{\tau^\prime_n})\cdot\taucp{n,1})).\] By part b) of Lemma \ref{reltrslem} the assumption $\beta\le\mathrm{dp}_{{\vec{\varsigma}}^\star}(\taucp{n,1})$ would imply $\betaucp{n,1}_1<\log((1/{\tau^\prime_n})\cdot\taucp{n,1})+1$, which is not the case. \\[2mm] {\bf Case B:} $m_n>1$ and $\taucp{n,m_n}<\mu_{\tau^\prime_n}$. This is only possible if $(n,m_n)=(i_0,j_0+1)$. We then have \[\alpha\le\mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))=\alpha+\kappa^{\vec{\varsigma}}_{\rhoargs{}{\taucp{i_0,j_0+1}}}+ \mathrm{dp}_{\vec{\varsigma}}(\rhoargs{}{\taucp{i_0,j_0+1}}).\] Here the assumption $\beta\le\kappa^{\vec{\varsigma}}_{\rhoargs{}{\taucp{i_0,j_0+1}}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{}{\taucp{i_0,j_0+1}})$ would entail the contradiction $\betaucp{i_0,j_0+1}_1<\rho_n$. \\[2mm] {\bf Case C:} Otherwise, i.e.\ $m_n>1$ and $\taucp{n,m_n}=\mu_{\tau^\prime_n}$.
Then we have \[\alpha\le\mathrm{o}(\operatorname{me}(\alphavec))+\mathrm{dp}(\operatorname{me}(\alphavec))=\alpha+\kappa^{\vec{\varsigma}}_{\lambda_{\tau^\prime_n}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_{\tau^\prime_n}),\] and the assumption $\beta\le\kappa^{\vec{\varsigma}}_{\lambda_{\tau^\prime_n}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_{\tau^\prime_n})$ would lead to the contradiction $\betaucp{n,m_n}_1<\lambda_{\tau^\prime_n}+1=\rho_n$.\\[2mm] This concludes the verification of (\ref{mealeq}).\\[2mm] We now have to show that for index pairs $(i,j)\in\mathrm{dom}(\alphavec)-\{(n,m_n)\}$ that are lexicographically greater than or equal to the index pair occurring in the respective case above, we have \begin{equation}\label{mejialeq} \mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,j)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,j)}))+\beta= \mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,j)^+}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,j)^+}))+\beta. \end{equation} This means that regarding the equations to be proven in Cases 0 and 1 we assume $(i_0,j_0+1)\le_\mathrm{\scriptscriptstyle{lex}}(i,j)$, regarding those to be shown in Case 2.2 we assume $(k_0,l_0+2)\le_\mathrm{\scriptscriptstyle{lex}}(i,j)$, and regarding Case 2.3 we assume $(k_0,1)\le_\mathrm{\scriptscriptstyle{lex}}(i,j)$. Let such an index pair $(i,j)$ be given.
We may assume that $\operatorname{me}(\alphavecrestrarg{(i,j)^+})<_\mathrm{\scriptscriptstyle{lex}}\operatorname{me}(\alphavecrestrarg{(i,j)})$, since in the case of equality there would be nothing to show, while $\operatorname{me}(\alphavecrestrarg{(i,j)})<_\mathrm{\scriptscriptstyle{lex}}\operatorname{me}(\alphavecrestrarg{(i,j)^+})$ is not possible: if this were the case we would have $(i,j)=(i_0,j_0+1)$ where $j_0>0$, $\chi^\taucp{i_0,j_0}(\taucp{i_0,j_0+1})=0$, $\rho_{i_0}=\rhoargs{}{\taucp{i_0,j_0+1}}+\taucp{i_0,j_0}$, $(i,j)^+=(i+1,1)$, and $\alphacp{i+1,1}=\rhoargs{}{\taucp{i_0,j_0+1}}+\xi$ for some $\xi\in(0,\taucp{i_0,j_0})$, which by Lemma \ref{tcevalestimlem} would imply that $\beta\le{\mathrm{end}}(\alpha)<\tilde{\tau}_{i_0,j_0}$, hence $\betaucp{i_0,j_0+1}=\betaucp{i_0,m_{i_0}}$ and according to Claim \ref{tcassgnclmone} $\betaucp{i_0,j_0+1}_1<\taucp{i_0,j_0}$. Thus $k_0>i_0$, since $\alphacp{i_0+1,1}+\betaucp{i_0,m_{i_0}}_1<\rho_{i_0}$, which implies Case 2 and the condition $i\ge k_0$, so that $i=i_0$ would not be permitted. We therefore have \[\alphavecrestrarg{(i,j)^+}<_\mathrm{\scriptscriptstyle{lex}}\operatorname{ec}(\alphavecrestrarg{(i,j)}),\] i.e.\ $\alphavecrestrarg{(i,j)^+}$ is not a maximal $1$-step extension of $\alphavecrestrarg{(i,j)}$, and consider the two possibilities for $(i,j)^+$:\\[2mm] {\bf Case I:} $(i,j)^+=(i+1,1)$. We then have $j=m_i$, set ${\vec{\varsigma}}:={\mathrm{ers}}_{i,m_i}(\alphavec)$, and consider three subcases. \\[2mm] {\bf Subcase I.1:} $m_i=1$. Then $\alphacp{i+1,1}<\log((1/{\tau^\prime_i})\cdot\taucp{i,1})=\rho_i\minusp 1$, hence $\alphavecrestrarg{i+1,1}$ does not possess a critical main line index pair.
Setting ${\vec{\varsigma}}_{i}:={\mathrm{rs}}_{i,0}(\alphavec)$ and ${\vec{\varsigma}}_{i+1}:={\mathrm{rs}}_{i+1,0}(\alphavec)$ (cf.\ Definition \ref{dpfdefi}), by part 7(b) of Lemma \ref{tcevalestimlem} we have \begin{eqnarray*} \mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i+1,1)}))&=&\mathrm{o}(\alphavecrestrarg{(i+1,1)})+\mathrm{dp}_{{\vec{\varsigma}}_{i+1}}(\taucp{i+1,1})\\ &=&\mathrm{o}(\alphavecrestrarg{(i,1)})+\kappa^{\vec{\varsigma}}_\alphacp{i+1,1}+\mathrm{dp}_{\vec{\varsigma}}(\taucp{i+1,1})\\ &<&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,1)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,1)}))\\ &=&\mathrm{o}(\alphavecrestrarg{(i,1)})+\mathrm{dp}_{{\vec{\varsigma}}_i}(\taucp{i,1})\\ &=&\mathrm{o}(\alphavecrestrarg{(i,1)})+\kappa^{\vec{\varsigma}}_{\log((1/{\tau^\prime_i})\cdot\taucp{i,1})}+\mathrm{dp}_{\vec{\varsigma}}(\log((1/{\tau^\prime_i})\cdot\taucp{i,1})). \end{eqnarray*} Setting $\delta:=-\alphacp{i+1,1}+\log((1/{\tau^\prime_i})\cdot\taucp{i,1})$, the assumption $\beta\le\kappa^{\vec{\varsigma}}_\delta+\mathrm{dp}_{\vec{\varsigma}}(\delta)$ would yield $\alphacp{i+1,1}+\betaucp{i,1}_1<\rho_i$, which because of $i\ge k_0$ is not the case. Thus equation (\ref{mejialeq}) holds in the case $m_i=1$.\\[2mm] {\bf Subcase I.2:} $(i,m_i)=(i_0,j_0+1)$ where $j_0>0$. Then only Case 1 is possible, and it follows that $\alphacp{i+1,1}\le\rhoargs{}{\taucp{i,m_i}}$, as we ruled out the situation where $\rho_{i_0}=\rhoargs{}{\taucp{i_0,j_0+1}}+\taucp{i_0,j_0}$ and $\alphacp{i+1,1}=\rhoargs{}{\taucp{i_0,j_0+1}}+\xi$ for some $\xi\in(0,\taucp{i_0,j_0})$.
Lemma \ref{tcevalestimlem} supplies us with \begin{eqnarray*} \mathrm{o}(\operatorname{me}(\alphavecrestrarg{i+1,1}))&=&\mathrm{o}(\alphavecrestrarg{(i,m_i)})+\kappa^{\vec{\varsigma}}_\alphacp{i+1,1}+\mathrm{dp}_{\vec{\varsigma}}(\alphacp{i+1,1})\\ &<&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,m_i)})) \end{eqnarray*} and \[\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,m_i)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,m_i)}))= \mathrm{o}(\alphavecrestrarg{(i,m_i)})+\kappa^{\vec{\varsigma}}_{\rhoargs{}{\taucp{i,m_i}}}+ \mathrm{dp}_{\vec{\varsigma}}(\rhoargs{}{\taucp{i,m_i}}).\] We now see that the assumption $\beta\le\kappa^{\vec{\varsigma}}_\delta+\mathrm{dp}_{\vec{\varsigma}}(\delta)$, where $\delta:=-\alphacp{i+1,1}+\rhoargs{}{\taucp{i,m_i}}$, would have the consequence $\alphacp{i+1,1}+\betaucp{i,m_i}_1<\rho_{i}$, which (again because of $i\ge k_0$) is not the case. We therefore have (\ref{mejialeq}) in this special case.\\[2mm] {\bf Subcase I.3:} $m_i>1$ and $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,m_i)$. Then $\alphacp{i+1,1}<\lambda_\taucp{i,m_i-1}=\rho_i\minusp 1$. Lemma \ref{tcevalestimlem} yields \begin{eqnarray*} \mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i+1,1)}))&=&\mathrm{o}(\alphavecrestrarg{(i,m_i)})+\kappa^{\vec{\varsigma}}_\alphacp{i+1,1}+\mathrm{dp}_{\vec{\varsigma}}(\alphacp{i+1,1})\\ &<&\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,m_i)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,m_i)}))\\ &=&\mathrm{o}(\alphavecrestrarg{(i,m_i)})+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{i,m_i-1}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{i,m_i-1}), \end{eqnarray*} and setting $\delta:=-\alphacp{i+1,1}+\lambda_\taucp{i,m_i-1}$ the assumption $\beta\le\kappa^{\vec{\varsigma}}_\delta+\mathrm{dp}_{\vec{\varsigma}}(\delta)$ would again imply $\alphacp{i+1,1}+\betaucp{i,m_i}_1<\rho_i$.
Consequently, equation (\ref{mejialeq}) follows also in this situation.\\[2mm] {\bf Case II:} $(i,j)^+=(i,j+1)$. Then we have $\alphacp{i,j+1}=\mu_\taucp{i,j}$, since $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$. Due to the fact that $\alphavecrestrarg{(i,j+1)}$ is not a maximal $1$-step extension of $\alphavecrestrarg{(i,j)}$, we have $j>1$, $\taucp{i,j}=\mu_\taucp{i,j-1}\in{\mathbb E}\cap(\taucp{i,j-1},\lambda_\taucp{i,j-1})$, $\operatorname{ec}(\alphavecrestrarg{(i,j)})=\alphavecrestrarg{(i,j)}^\frown(\lambda_\taucp{i,j-1})$, and $(i_0,j_0+1)<_\mathrm{\scriptscriptstyle{lex}}(i,j)$. In particular, $\alphavecrestrarg{(i,j+1)}$ does not possess a critical main line index pair. Setting ${\vec{\varsigma}}:={\mathrm{ers}}_{i,j+1}(\alphavec)$ and ${\vec{\varsigma}}^\prime:={\mathrm{ers}}_{i,j}(\alphavec)$, part 7(b) of Lemma \ref{tcevalestimlem} yields \begin{eqnarray*} \mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,j+1)}))&=&\mathrm{o}(\alphavecrestrarg{(i,j+1)})+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{i,j}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{i,j})\\ &=&\mathrm{o}(\alphavecrestrarg{(i,j)})+\nu^{\vec{\varsigma}}_{\mu_\taucp{i,j}}+\kappa^{\vec{\varsigma}}_{\lambda_\taucp{i,j}}+\mathrm{dp}_{\vec{\varsigma}}(\lambda_\taucp{i,j})\\ &=&\mathrm{o}(\alphavecrestrarg{(i,j)})+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\taucp{i,j}).
\end{eqnarray*} Another extensive application of Lemma \ref{tcevalestimlem} provides us with \[\mathrm{o}(\alphavecrestrarg{(i,j)})+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\taucp{i,j})<\mathrm{o}(\operatorname{me}(\alphavecrestrarg{(i,j)}))+\mathrm{dp}(\operatorname{me}(\alphavecrestrarg{(i,j)}))= \mathrm{o}(\alphavecrestrarg{(i,j)})+\kappa^{{\vec{\varsigma}}^\prime}_{\lambda_\taucp{i,j-1}}+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\lambda_\taucp{i,j-1}).\] Now setting $\delta:=-\taucp{i,j}+\lambda_\taucp{i,j-1}$, the assumption $\beta\le\kappa^{{\vec{\varsigma}}^\prime}_\delta+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\delta)$ would imply, by Lemma \ref{reltrslem}, that $\betaucp{i,j}_1\le\delta$ and hence $\taucp{i,j}+\betaucp{i,j}_1\le\lambda_\taucp{i,j-1}$, which is not the case: in Cases 0, 1, and 2.2 we always have $(k_0,l_0)<_\mathrm{\scriptscriptstyle{lex}}(i,j-1)$, while Case 2.3 presupposes (w.r.t.\ Case 2.2) that $\taucp{k_0,l_0+1}+\betaucp{k_0,l_0+1}_1>\lambda_\taucp{k_0,l_0}$, which covers the only possibility where $(k_0,l_0)=(i,j-1)$. This concludes the proof of (\ref{mejialeq}).\\[2mm] From the equations (\ref{mealeq}) and (\ref{mejialeq}) all claimed equalities follow, noticing that only in Subcase 1.3.2 the $\mathrm{dp}$-term is non-zero. This completes the proof of Lemma \ref{tcassgnmntlem}. \mbox{ } $\Box$ \begin{cor}[cf.\ 6.5 of \cite{CWc}]\label{tcorderisocor} ${\mathrm{tc}}$ is a $<$-$<_{{\operatorname{T}}C}$-order isomorphism between ${\mathrm{Ord}}$ and ${\operatorname{T}}C$ with inverse $\mathrm{o}$.
We thus have \[{\mathrm{tc}}(\mathrm{o}(\alphavec))=\alphavec\] for any $\alphavec\in{\operatorname{T}}C$ and \[\alpha<\beta\quad\Leftrightarrow\quad{\mathrm{tc}}(\alpha)<_{{\operatorname{T}}C}{\mathrm{tc}}(\beta)\] for all $\alpha,\beta\in{\mathrm{Ord}}$.\mbox{ } $\Box$ \end{cor} \begin{cor}[corrected 6.6 of \cite{CWc}]\label{alpldpcor} Let $\alpha\in{\mathrm{Ord}}$ and $\alphavec:={\mathrm{tc}}(\alpha)$ with associated chain ${\vec{\tau}}$, where $\alphavec=(\alphaevec,\ldots,\alphanvec)$ and $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$. Then we have \[{\mathrm{tc}}(\alpha+\mathrm{dp}(\alphavec))=\left\{\begin{array}{l@{\quad}l} \alphavec_{\restriction_{i,j+1}}[\alphacp{i,j+1}+1] & \mbox{ if }(i,j)=\operatorname{cml}(\alphavec)\mbox{ or } (i,j+1)=(n,m_n)\:\&\:\taucp{i,j+1}<\mu_\taucp{i,j}\\[2mm] \operatorname{me}(\alphavec)&\mbox{otherwise.} \end{array}\right.\] Let $\beta\in{\mathrm{Ord}}$. Then ${\mathrm{tc}}(\beta)$ is a proper extension of ${\mathrm{tc}}(\alpha)$ if and only if \[\beta\in\left\{\begin{array}{l@{\quad}l} \left(\alpha,\alpha+\mathrm{dp}(\alphavec)\right)&\mbox{ if }\operatorname{cml}(\alphavec)\mbox{ exists or }m_n>1\:\&\:\taucp{n,m_n}<\mu_\taucp{n,m_n-1}\\[2mm] \left(\alpha,\alpha+\mathrm{dp}(\alphavec)\right]&\mbox{ otherwise.} \end{array}\right.\] \end{cor} {\bf Proof.} The corollary follows directly from Definition \ref{dpfdefi} of $\mathrm{dp}$, Lemma \ref{tcevalestimlem}, and Corollary \ref{tcorderisocor}. \mbox{ } $\Box$ \subsection{Closed sets of tracking chains}\label{closedsetssubsec} Here we introduce the notion of \emph{closed set of tracking chains}. This was first done in \cite{W18} in order not just to locate an ordinal $\alpha$ within the core of ${\cal R}_2$, but to collect the tracking chains of ordinals needed to specify a pattern of minimal cardinality, the isominimal realization of which contains and therefore denotes $\alpha$.
The corresponding result for ${\cal R}_1^+$ was established in \cite{W07c} and completed in Sections 5 and 6 of \cite{CWa}. The generalization of closedness introduced here will be useful when verifying $<_3$-connections elsewhere, e.g.\ in \cite{W}, where this topic would have been too technical for the intended audience. However, in this article this subsection on closedness is not used and can be skipped at first reading. We are now going to generalize the notion of closed sets of tracking chains that was introduced in Section 3 of \cite{W18}. Closed sets are easily seen to be closed under the operation of maximal extension ($\operatorname{me}$) introduced in Definition \ref{maxextdefi}. We will need closedness to find all parameters from $\mathrm{Im}(\upsilon)$ involved in (tracking chains of) elements of ${\cal R}_2$. These play a key role in handling all (finitely many) ``global'' $<_2$-predecessors needed to locate ordinals in ${\cal R}_2$. As we will see in the next section, ordinals of the form $\upsilon_{\lambda+m}$, where $\lambda+m>0$, $\lambda\in\mathrm{Lim}_0$, and $m\in{\mathbb N}\setminus\{1\}$, have arbitrarily large $<_2$-successors, namely all ordinals of the form $\upsilon_{\lambda+m}+\upsilon_{\lambda+m\minusp 1}\cdot(1+\xi)$, $\xi\in{\mathrm{Ord}}$. For illustration, consider the easy example of the ordinal $\upsilon_\omega\cdot\upsilon_{17}$, the greatest $<_2$-predecessor of which is $\upsilon_\omega$, while its $\le_1$-reach is ${\operatorname{lh}}(\upsilon_\omega\cdot\upsilon_{17})=\upsilon_\omega\cdot\upsilon_{17}+\upsilon_{17}$, which in turn has the greatest $<_2$-predecessor $\upsilon_{18}$ (note that in this example, instead of $17,18$, any pair $k,k+1$ of natural numbers with $k>1$ would do).
Another instructive example would be to consider the ordinal $\varepsilon_{\upsilon_\omega+\upsilon_{17}+1}$, where closure under $\bar{\cdot}$ (see Section 8 of \cite{W07a} and Section 5 of \cite{CWa}) becomes essential, which holds for closed sets of tracking chains, cf.\ Lemma 3.19 of \cite{W18}. The term decomposition of components of tracking chains in a closed set $M$ of tracking chains via the operations of additive decomposition, logarithm, $\lambda$-, and $\bar{\cdot}$-operator exposes all bases of greatest $<_2$-predecessors of elements in $\mathrm{o}[M]$; cf.\ also Lemma 3.20 of \cite{W18}, which works in a more ambitious context in order to enable base minimization, cf.\ Definition 3.26 of \cite{W18}. \begin{defi}\label{upsseqdefi} Let ${\vec{\tau}}\in{\operatorname{T}}S$. We call ${\vec{\tau}}$ an {\bf \boldmath$\upsilon$-sequence} if it is of the form either ${\vec{\tau}}=(\upsilon_\lambda)$ where $\lambda\in\mathrm{Lim}$ or ${\vec{\tau}}=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+m})$ where $\lambda\in\mathrm{Lim}_0$ and $m\in(0,\omega)$. A tracking chain $\alphavec\in{\operatorname{T}}C$ is called an {\bf \boldmath$\upsilon$-sequence} if it is of the form $({\vec{\tau}})$ where ${\vec{\tau}}\in{\operatorname{T}}S$ is an $\upsilon$-sequence. \end{defi} \begin{defi}[cf.\ 3.7, 3.16 of \cite{W18}]\label{convexprincipalchaindefi} Let $\alphavec=(\alphavec_1,\ldots,\alphavec_n)\in{\operatorname{T}}C$, $\alphavec_i=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$, $1\le i\le n$, with associated chain ${\vec{\tau}}$. \begin{enumerate} \item\label{convexpart} $\alphavec$ is called {\bf convex} if every $\nu$-index of $\alphavec$ is maximal, i.e.\ given by the $\mu$-operator. \item\label{principalchainpart} If $\alphavec$ satisfies $m_n>1$ and $\alphacp{n,m_n}=\mu_\tau$, where $\tau:=\taucp{n,m_n-1}$, then $\alphavec$ is called a {\bf principal chain to base $\tau$}, and $\tau$ is called the {\bf base of $\alphavec$}.
If $\alphavec\in M$, where $M$ is some set of tracking chains, then we say that $\alphavec$ is a {\bf principal chain in $M$} and that $\tau$ is a {\bf base in $M$}. \end{enumerate} \end{defi} \begin{defi}[cf.\ 3.1, 3.2, 3.3, and 3.21 of \cite{W18}]\label{closedsetsdefi} Let $M\subseteq_\mathrm{fin}{\operatorname{T}}C$. $M$ is {\bf closed} if and only if $M$ \begin{enumerate} \item[1.] is {\bf closed under initial chains:} if $\alphavec\in M$ and $(i,j)\in\mathrm{dom}(\alphavec)$ then $\alphavec_{\restriction_{(i,j)}}\in M$, \item[2.] is {\bf $\nu$-index closed:} if $\alphavec\in M$, $m_n>1$, $\alphacp{n,m_n}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k$ then \begin{enumerate} \item[2.1.] $\alphavec[\xi_1+\ldots+\xi_l]\in M$ for $1\le l\le k$ and \item[2.2.] $\alphavec[\mu_{{\tau^\prime}}]\in M$, unless this is a $\upsilon$-sequence, \end{enumerate} \item[3.] {\bf unfolds minor $\le_2$-components:} if $\alphavec\in M$, $m_n>1$, and $\tau<\mu_{\tau^\prime}$ then: \begin{enumerate} \item[3.1.] ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},\mu_\tau)\in M$ in the case $\tau\in{\mathbb E}^{>{\tau^\prime}}$, and \item[3.2.] otherwise $\alphavec^\frown(\varrho^{\tau^\prime}_\tau)\in M$, provided that $\varrho^{\tau^\prime}_\tau>0$, \end{enumerate} \item[4.] is {\bf $\kappa$-index closed:} if $\alphavec\in M$, $m_n=1$, and $\alphacp{n,1}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k$, then: \begin{enumerate} \item[4.1.] if $m_{n-1}>1$ and $\xi_1=\taucp{n-1,m_{n-1}}\in{\mathbb E}^{>\taucp{n-1,m_{n-1}-1}}$ then ${\alphavec_{\restriction_{n-2}}}^\frown(\alphacp{n-1,1},\ldots,\alphacp{n-1,m_{n-1}},\mu_{\xi_1})\in M$, else ${\alphavec_{\restriction_{n-1}}}^\frown(\xi_1)\in M$, and \item[4.2.] ${\alphavec_{\restriction_{n-1}}}^\frown(\xi_1+\ldots+\xi_l)\in M$ for $l=2,\ldots,k$, \end{enumerate} \item[5.]
{\bf maximizes $\operatorname{me}$-$\mu$-chains:} if $\alphavec\in M$ and $\tau\in{\mathbb E}^{>{\tau^\prime}}$, then: \begin{enumerate} \item[5.1.] if $m_n=1$ then ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\mu_\tau)\in M$, unless this is a $\upsilon$-sequence, and \item[5.2.] if $m_n>1$ and $\tau=\mu_{\tau^\prime}=\lambda_{\tau^\prime}$ then ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},\mu_\tau)\in M$, unless this is a $\upsilon$-sequence, \end{enumerate} \item[6.] {\bf unfolds $\le_1$-components:} for $\alphavec\in M$, if $m_n=1$ and $\tau\not\in{\mathbb E}^{\ge{\tau^\prime}}\cup\{1\}$ (i.e.\ $\tau=\taucp{n,m_n}\not\in{\mathbb E}_1$, ${\tau^\prime}={\tau_n^\star}$), let \[\log((1/{\tau^\prime})\cdot\tau)=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k,\] if otherwise $m_n>1$ and $\tau=\mu_{\tau^\prime}$ such that $\tau<\lambda_{\tau^\prime}$ in the case $\tau\in{\mathbb E}^{>{\tau^\prime}}$, let \[\lambda_{\tau^\prime}=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_k.\] Set $\xi:=\xi_1+\ldots+\xi_k$, unless $\xi>0$ and $\alphavec^\frown(\xi_1+\ldots+\xi_k)\not\in{\operatorname{T}}C$ (due to condition 2 of Definition \ref{trackingchaindefi}), in which case we set $\xi:=\xi_1+\ldots+\xi_{k-1}$. Suppose that $\xi>0$. Let $\alphavecpl$ denote the vector $\alphavec^\frown(\xi)$ if this is a tracking chain (condition 2 of Definition \ref{kapparegdefi}), or otherwise the vector ${\alphavec_{\restriction_{n-1}}}^\frown(\alphacp{n,1},\ldots,\alphacp{n,m_n},\mu_\tau)$. Then the closure of $\{\alphavecpl\}$ under clauses 4 and 5 is contained in $M$. \item[7.] {\bf supports bases:} if $\betavec$ is a principal chain in $M$ to base $\tau$ such that $\bar{\tau}\in({\tau^\prime},\tau)$ then $\betavec^\frown(\bar{\tau})\in M$.
\end{enumerate} \end{defi} Note that in clause 4.1, if $m_{n-1}=1$ and $\taucp{n-1,1}\in{\mathbb E}^{>\tau^\star_{n-1}}$, we have $\rho_{n-1}=\taucp{n-1,1}+1$, and hence $\alphacp{n,1}<\taucp{n-1,1}$ by condition 2 of Definition \ref{kapparegdefi}, so that the situation $\xi_1=\taucp{n-1,1}$ does not occur. If the conditions stated in clause 4.1 hold, the preference of $\nu$-indices over $\kappa$-indices applies, and the chain ${\alphavec_{\restriction_{n-2}}}^\frown({\alphavec_{n-1}}^\frown{\mu_{\xi_1}})$, which cannot be a $\upsilon$-sequence, is taken instead of either ${\alphavec_{\restriction_{n-1}}}^\frown(\xi_1)$, which is not a tracking chain, or ${\alphavec_{\restriction_{n-2}}}^\frown({\alphavec_{n-1}}^\frown{1})$, which in this context would be redundant. \begin{rmk} Due to the exclusion of $\upsilon$-sequences from closure in clauses 2 and 5 above it is easy to see that closure of a set $M\subseteq_\mathrm{fin}{\operatorname{T}}C$ under clauses 1 -- 7 results in a finite set of tracking chains. This is due to decreasing $\htarg{}$- and $\operatorname{l}$-measures of the terms involved, cf.\ Definition 3.26 of \cite{W07a} and Definition \ref{Ttauvec}. Closedness under clauses 1 -- 6 only results in $M$ being a {\bf spanning} set of tracking chains, first introduced in Section 5 of \cite{W17}. \end{rmk} \section{The Structure \boldmath${\cal R}_2$\unboldmath}\label{structuresec} We are now prepared to generalize Theorem 7.9 and Corollary 7.13 of \cite{CWc} to all ordinal numbers. Theorem 7.9 of \cite{CWc} provides the $\le_i$-predecessors ($i=1,2$) of ordinals below $1^\infty=\upsilon_1$, in particular the greatest $<_i$-predecessor of an ordinal in case such exists, while Corollary 7.13 of \cite{CWc} characterizes the $\le_i$-successors of ordinals below $1^\infty$. The generalization carried out in this article consists of descriptions of $<_i$-pre- and $\le_i$-successorship within all of ${\cal R}_2$.
For this reason we may say that we \emph{display} the entire structure ${\cal R}_2$, as claimed in the abstract. For an in-depth discussion of the initial segment of ${\cal R}_2$ below $1^\infty$ in arithmetical terms, as secured by Theorem 7.9 and Corollary 7.13 of \cite{CWc}, the reader is referred to Subsection 2.3 of \cite{W18}. There we called this arithmetical characterization ${\operatorname{C}}_2$, and in \cite{W17} we showed that it is an elementary recursive structure. In order to proceed toward generalization of the arithmetical analysis established in \cite{CWc}, recall the notion of relativized $\le_i$-minimality for $i\in\{1,2\}$: $\alpha$ is $\beta$-$\le_i$-minimal if and only if there does not exist any $\gamma\in(\beta,\alpha)$ such that $\gamma<_i\alpha$. Hence, $0$-$\le_i$-minimality is equivalent to $\le_i$-minimality. As in Definition 7.7 of \cite{CWc} we denote the greatest $<_i$-predecessor of an ordinal $\alpha$ by $\operatorname{pred}_i(\alpha)$ if it exists and set $\operatorname{pred}_i(\alpha):=0$ otherwise. Note that the latter case can occur for two reasons: either $\alpha$ is $\le_i$-minimal or the order type of its $<_i$-predecessors is a limit ordinal. $\operatorname{preds}_i(\alpha)$ denotes the set of all $<_i$-predecessors of $\alpha$, $\operatorname{Succ}_i(\alpha)$ denotes the class of all $\beta$ such that $\alpha\le_i\beta$, and ${\operatorname{lh}}_i(\alpha)$ denotes the maximum of $\operatorname{Succ}_i(\alpha)$ if that exists and $\infty$ otherwise; ${\operatorname{lh}}:={\operatorname{lh}}_1$, where ${\operatorname{lh}}$ stands for \emph{length}. Note that ${\operatorname{lh}}_i(\alpha)$ is not defined to be the maximum $\beta$ such that $\alpha\le_i\alpha+\beta$ but rather to be the maximum $\beta$ such that $\alpha\le_i\beta$ (if such an ordinal exists).
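The bookkeeping conventions just introduced can be illustrated on a finite fragment of data. The following is a minimal sketch, assuming the relation $<_i$ is given as an explicit finite set of pairs and small natural numbers stand in for ordinals; the names \texttt{preds}, \texttt{pred}, and \texttt{lh} are illustrative and not part of the paper's notation.

```python
# Sketch of the conventions pred_i / preds_i / lh_i on a finite fragment of
# the relation <_i, encoded as a set of pairs (a, b) meaning a <_i b.
# Small natural numbers stand in for ordinals; names are illustrative only.

def preds(rel, alpha):
    """The set of <_i-predecessors of alpha recorded in the fragment."""
    return {a for (a, b) in rel if b == alpha}

def pred(rel, alpha):
    """The greatest <_i-predecessor of alpha if one exists, else 0.
    (The paper also sets pred_i(alpha) := 0 when the order type of the
    predecessors is a limit, a case that cannot occur in a finite fragment.)"""
    p = preds(rel, alpha)
    return max(p) if p else 0

def lh(rel, alpha):
    """The maximum beta with alpha <=_i beta inside the fragment; the
    reflexivity of <=_i is accounted for by including alpha itself."""
    return max({b for (a, b) in rel if a == alpha} | {alpha})
```

In this toy encoding, `pred(rel, alpha) == 0` conflates $\le_i$-minimality with the limit-order-type case exactly as the convention in the text does.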
Recall Proposition \ref{letwocriterion} and Lemmas \ref{loalpllem}, \ref{ktwoinflochainlem}, and \ref{letwoupwlem} for basic but central properties of relations $\le_1$ and $\le_2$ in ${\cal R}_2$. For the reader's convenience we also cite the notion of \emph{covering}, which is the natural notion of embedding for (pure) patterns, and which plays a crucial role in the proof of Theorem \ref{maintheo}. \begin{defi}[7.8 of \cite{CWc}]\label{coveringdef} Given substructures $X$ and $Y$ of ${\cal R}_2$, a mapping $h:X\hookrightarrow Y$ is a \emph{covering of $X$ into $Y$}, if \begin{enumerate} \item $h$ is an injection of $X$ into $Y$ that is strictly increasing with respect to $\le$, and \item $h$ maintains $\le_i$-connections for $i=1,2$, i.e.\ $\forall \alpha,\beta\in X\, (\alpha\le_i\beta\Rightarrow h(\alpha)\le_i h(\beta))$. \end{enumerate} We call $h$ a \emph{covering of $X$} if it is a covering from $X$ into ${\cal R}_2$. We call $Y$ a \emph{cover} of $X$ if there is a covering of $X$ with image $Y$.\index{cover, covering} \end{defi} As mentioned before, the following main theorem describes the structure ${\cal R}_2$ completely in terms of $\le_i$-predecessorship, $i=1,2$. As compared to Theorem 7.9 of \cite{CWc}, which only describes the initial segment $\upsilon_1$ of the structure ${\cal R}_2$ in this way, new cases arise in relation to the ordinals in $\mathrm{Im}(\upsilon)$. The proof of Theorem \ref{maintheo}, of which Theorem \ref{maxchaintheo} from the introduction is an immediate consequence, is a modifying and generalizing rewrite of the proof of Theorem 7.9 of \cite{CWc} with several corrections and notational adjustments.
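Since coverings of finite substructures are checked repeatedly in the proof below, the two defining conditions can be made concrete on finite data. The following is a minimal sketch, assuming the substructures are given as explicit relation tables of pairs; the names \texttt{is\_covering}, \texttt{le1}, \texttt{le2} are illustrative and not the paper's notation.

```python
# Sketch of the covering criterion (in the style of Definition 7.8):
# h must be (1) strictly increasing w.r.t. < (hence injective) and
# (2) maintain <=_i-connections for i = 1, 2.
# Relations are finite tables of pairs (a, b) meaning a <=_i b, assumed
# reflexive; small natural numbers stand in for ordinals.
from itertools import combinations

def is_covering(h, X, le1, le2):
    """h: dict mapping each element of the finite set X to its image."""
    # (1) strict <-monotonicity on X; this already forces injectivity
    if any(not h[a] < h[b] for a, b in combinations(sorted(X), 2)):
        return False
    # (2) a <=_i b  must imply  h(a) <=_i h(b), for both relations
    return all((h[a], h[b]) in le
               for le in (le1, le2)
               for (a, b) in le if a in X and b in X)
```

A map that permutes the order of images fails condition (1) even when every individual image lies in the target structure, which is exactly how the incompressibility arguments via the sets $X$ and $Z$ rule out unwanted covers.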
We keep the proof structure and case numbering comparable to those chosen in the proof of Theorem 7.9 of \cite{CWc}, however with a more explicit numbering of subcases and some preference to deal with cases involving translational isomorphism before cases that in general require base transformation as introduced in Subsection \ref{arithmeticsubsec}. The Special Case in Subcase 1.2 is new, as are Subcases 1.1.3 and 1.2.3, due to the extended claim, while other parts of the proof smoothly generalize. A correction of a part of the proof of Theorem 7.9 of \cite{CWc} is indicated. For further details see the \emph{proof map} at the beginning of the proof of Theorem \ref{maintheo}. Recall the definition of tracking chain and $[\cdot]$-notation, Definition \ref{trackingchaindefi}, maximal extension $\operatorname{me}$, Definition \ref{tcextensiondefi}, the assignment of tracking chains to ordinals ${\mathrm{tc}}$ in Definition \ref{tcassignmentdefi}, and of evaluation of (initial segments of) tracking chains, $\ordcp{i,j}(\alphavec)$, as well as the evaluation at indices $\alphacp{i,j}$ of $\alphavec$ and at indices $\taucp{i,j}$ of the associated chain ${\vec{\tau}}$, $\alphaticp{i,j}$ and $\tilde{\tau}^{i,j}$, respectively, see Definition \ref{trchevaldefi}. Recall the notation for \emph{units} ${\tau_i^\star}$ introduced in Definition \ref{unitsdefi}. \begin{theo}[cf.\ 7.9 of \cite{CWc}]\label{maintheo} Let $\alpha\in{\mathrm{Ord}}$ and ${\mathrm{tc}}(\alpha)=:\alphavec$, where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$, with associated chain ${\vec{\tau}}$ and segmentation parameters $(\lambda,t):=\upsilonseg(\alphavec)$ and $p,s_l,(\lambda_l,t_l)$ for $l=1,\ldots,p$ as in Definition \ref{segmentationdefi}.
\begin{enumerate} \item[a)] We have \[\alpha\mbox{ is $\upsilon_\lambda$-$\le_1$-minimal} \quad\Leftrightarrow\quad (n,m_n)=(1,1)\] and the greatest $<_1$-predecessor of $\alpha$ is \[\operatorname{pred}_1(\alpha)=\left\{\begin{array}{l@{\quad}l} \upsilon_\lambda & \mbox{if }(n,m_n)=(1,1)\mbox{ and }\upsilon_\lambda\in(0,\alphacp{1,1})\mbox{ (hence $\lambda\in\mathrm{Lim}$)}\\[2mm] \ordcp{n-1,m_{n-1}}(\alphavec) & \mbox{if }m_n=1\mbox{ and } n>1\\[2mm] \mathrm{o}(\alphavec[\xi])& \mbox{if } m_n>1\mbox{, }\alphacp{n,m_n}=\xi+1, \mbox{ and }\chi^\taucp{n,m_n-1}(\xi)=0\\[2mm] \mathrm{o}\left(\operatorname{me}\left(\alphavec[\xi]\right)\right) & \mbox{if } m_n>1\mbox{, } \alphacp{n,m_n}=\xi+1, \mbox{ and }\chi^\taucp{n,m_n-1}(\xi)=1\\[2mm] 0 & \mbox{otherwise (a greatest $<_1$-predecessor does not exist).} \end{array}\right.\] The situation where the order type of $\operatorname{preds}_1(\alpha)$, the set of $<_1$-predecessors of $\alpha$, is a limit ordinal is characterized by the following two cases: \[\operatorname{preds}_1(\alpha)=\left\{\begin{array}{ll} \bigcup_{\xi\in(0,\lambda)}\;\;\,\operatorname{preds}_1\left(\upsilon_\xi\right) & \mbox{ if }\alpha=\upsilon_\lambda>0\\[2mm] \bigcup_{\xi<\alphacp{n,m_n}}\operatorname{preds}_1\left(\mathrm{o}(\alphavec[\xi])\right) & \mbox{ if }m_n>1\mbox{ and }\alphacp{n,m_n}\in\mathrm{Lim}.
\end{array}\right.\] \item[b)] We have \[\alpha\mbox{ is $\le_2$-minimal} \quad\Leftrightarrow\quad m_n\le 2\mbox{ and }{\tau_n^\star}=1,\] and in terms of $\operatorname{pred}_2$ to denote the greatest $<_2$-predecessor we have \[\operatorname{pred}_2(\alpha)=\left\{\begin{array}{l@{\quad}l} \ordcp{n,m_n-1}(\alphavec) & \mbox{ if }m_n>2\mbox{, otherwise:}\\[2mm] \ordcp{i_0,j_0+1}(\alphavec) & \mbox{ if } n^\star=:(i_0,j_0)\in\mathrm{dom}(\alphavec)\\[2mm] \upsilon_\lambda & \mbox{ if } n^\star=(1,0)\mbox{ and }{\tau_n^\star}=\upsilon_\lambda\in(0,\alpha)\mbox{ (hence $\lambda\in\mathrm{Lim}$)}\\[2mm] \upsilon_{\lambda_j} & \mbox{ if } n^\star=(s_j,0)\mbox{ and }{\tau_n^\star}=\upsilon_{\lambda_j}\mbox{ for some }j\in\{1,\ldots,p\}\mbox{ where }\lambda_j\in\mathrm{Lim}\\[2mm] \upsilon_{\lambda_j+t_j+1} & \mbox{ if } n^\star=(s_j,0)\mbox{ and }{\tau_n^\star}=\upsilon_{\lambda_j+t_j}\mbox{ for some }j\in\{1,\ldots,p\} \mbox{ where }t_j>0\\[2mm] 0 & \mbox{ otherwise (a greatest $<_2$-predecessor does not exist).} \end{array}\right.\] The order type of the set of $<_2$-predecessors of $\alpha$ is a limit ordinal if and only if $\alpha=\upsilon_\lambda>0$: \[\alpha={\sup}^+\{\beta\mid\beta<_2\alpha\} \quad\Leftrightarrow\quad \alpha=\upsilon_\lambda>0,\] and if this is the case, we have \[\operatorname{preds}_2(\alpha)=\left\{\upsilon_{\zeta+k}\mid\zeta+k\in(0,\lambda)\mbox{ where }\zeta\in\mathrm{Lim}_0\mbox{ and }k\in{\mathbb N}\setminus \{1\}\right\}.\] \end{enumerate} \end{theo} {\bf Proof.} The proof is by induction on $\alpha$. This means that, according to the i.h., the relations $\le_1$ and $\le_2$ look exactly as claimed by the theorem on the set of ordinals $\alpha=\{\beta\mid\beta<\alpha\}$; this is the reason why the theorem is formulated in terms of $\le_i$-predecessors only.
\medskip \emph{\underline{Proof map.}} Before getting into the proof technically and in detail, we provide an overview to facilitate better orientation. Subcase 1.1 establishes the successor step of the proof, as its condition, namely $m_n=1$ and $\alphacp{n,1}=\xi+1$ for some $\xi$, characterizes the situation where $\alpha$ is a successor ordinal $\beta+1$. It contains Claim \ref{relleominclaim}, which implies $\delta$-$\le_1$-minimality of $\alpha$ as claimed in part a) of the theorem, where $\delta$ is as specified in the proof of Case 1 (the case which generally pertains to the situation $m_n=1$) and turns out to be equal to $\operatorname{pred}_1(\alpha)$ as claimed. It is shown that part b) of the theorem, which deals with $\le_2$-predecessors of $\alpha$, is trivial for successor ordinals $\alpha$. In the case $\delta>0$ it is shown using the criterion provided by Proposition \ref{letwocriterion} that $\delta<_1\alpha$. In Subcase 1.2 ($m_n=1$ and $\alphacp{n,1}\in\mathrm{Lim}$) on the other hand, it is easy to see that part a) of the theorem immediately follows from the i.h.\ for continuity reasons, as a limit of $\Sigma_i$-superstructures is again a $\Sigma_i$-superstructure. Subcase 1.2, however, covers the situation where (successor-) $<_2$-connections in ${\cal R}_2$ need to be verified as claimed, and Subcase 1.2.1.2 particularly pertains to the situation where genuinely new $<_2$-relations arise in ${\cal R}_2$ (along increasing $\nu$-indices in the appropriate setting of relativization). An application of Claim \ref{relleominclaim} enables us to show that $\alpha$ does not have any $<_2$-predecessor greater than $\gamma$, where $\gamma$ is defined to be either the greatest $<_2$-predecessor of $\alpha$ as claimed in the theorem in case such is claimed to exist at all, or otherwise $\gamma:=0$.
Provided that $\gamma>0$, the relation $\gamma<_2\alpha$ is verified using the criterion given by Proposition \ref{letwocriterion}, which establishes that $\operatorname{pred}_2(\alpha)=\gamma$ as claimed. Case 2, which contains Claim \ref{relletwominclaim}, covers the situation $m_n>1$ and most importantly needs to verify that in Subcase 2.1 ($m_n>1$ and $\alphacp{n,m_n}=\xi+1$ for some $\xi$) we actually have \[\alpha^\prime=\mathrm{o}(\alphavec[\xi])\not<_2\mathrm{o}(\alphavec)=\alpha.\] Subcase 2.1.1, where $\operatorname{pred}_1(\alpha)=\alpha^\prime$, discusses the situation where the $\le_2$-component arising at $\alpha^\prime$ does not fall (non-trivially) back onto the mainline, which means that, unless $\xi$ is a successor ordinal and hence $\alphavec[\xi]=\operatorname{me}(\alphavec[\xi])$, we have \[\mathrm{o}(\operatorname{me}(\alphavec[\xi]))\not<_1\alpha,\] and the condition for this scenario is $\chi^\tau(\xi)=0$. Subcase 2.1.2 is the more involved situation using Claim \ref{relletwominclaim} where \[\alpha^\prime<_2\mathrm{o}(\operatorname{me}(\alphavec[\xi]))=\operatorname{pred}_1(\alpha)<_1\alpha\] (the condition for this situation is that $\chi^\tau(\xi)=1$). Subcase 2.2, where $\alphacp{n,m_n}$ is a limit ordinal, finally follows immediately from the i.h.\ for continuity reasons. Further case distinction in the proof identifies situations where argumentation relies on the application of base transformation (see Subsection \ref{arithmeticsubsec}), in particular Subcases 1.1.2.2 and 1.2.2.2.2, where an important correction of the proof of the corresponding Theorem 7.9 of \cite{CWc} takes place. The argumentation in Subcases 1.1.1 and 1.2.2.1 is similar, namely by translational isomorphism as familiar from ${\cal R}_1$, as well as in Subcases 1.1.2.1 and 1.2.2.2.1 (also exploiting translation of isomorphic ordinal intervals). The more involved Subcases 1.1.2.2 and 1.2.2.2.2 are handled similarly using base transformation.
Base transformation also occurs in the proofs of Claims \ref{relleominclaim} (see Case II there) and \ref{relletwominclaim} (not mentioned explicitly, as the proof of Claim \ref{relleominclaim} is quite similar to the proof of Claim \ref{relletwominclaim}, in which we focus on a situation (Subcase I.2) that does not occur in the proof of Claim \ref{relleominclaim}). Claims \ref{relleominclaim} and \ref{relletwominclaim} are essential to confirm that certain $\le_i$-relations do not hold, as claimed in the theorem and mentioned above. Finite sets $X$ and $Z$ are specified so that there does not exist any copy $\tilde{Z}$ of $Z$ such that $X\cup\tilde{Z}\cong X\cup Z$ and $\tilde{Z}\subseteq\max(Z)$. The role of the sets $X$ is to force $\max(X)<\min(\tilde{Z})$, but $X$ also contains all existing greatest $<_2$-predecessors below $\max(X)$ of elements of $Z$. One can think of the sets $Z$ as sets that are \emph{incompressible} in the context or under the constraints provided by $X$. The additional Subcases 1.1.3 and 1.2.3 as well as the Special Case at the beginning of Subcase 1.2 cover new situations involving ordinals from $\mathrm{Im}(\upsilon)$ due to the generalization of the theorem to all ordinals as compared to Theorem 7.9 of \cite{CWc}, which covered the initial segment $\upsilon_1$ only. \medskip \emph{\underline{Beginning of the formal proof.}} In the case $\alpha=0$, equivalently $\alphavec=((0))$, there is nothing to show, so let us assume that $\alpha>0$, whence $\alphacp{n,m_n}>0$. Defining \[{\vec{\varsigma}}:={\mathrm{ers}}_{n,m_n}(\alphavec)\] in order to access the setting of relativized connectivity components in which $\alpha$ is located, see Definitions \ref{charseqdefi} and \ref{trchevaldefi}, we distinguish between cases concerning $m_n$ and whether $\alphacp{n,m_n}$ is a limit or a successor ordinal.\\[2mm] {\bf Case 1:} $m_n=1$.
We define \[\delta:=\left\{\begin{array}{ll} \upsilon_\lambda & \mbox{ if }n=1\\ \ordcp{n-1,m_{n-1}}(\alphavec) & \mbox{ if }n>1 \end{array}\right.\] and consider cases regarding $\alphacp{n,1}$. \\[2mm] {\bf Subcase 1.1:} $\alphacp{n,1}$ is a successor ordinal, say $\alphacp{n,1}=\xi+1$. Thus $\taucp{n,1}=1$, ${\tau_n^\star}=1$, $\alpha$ is a successor ordinal, say $\alpha=\beta+1$, and clearly $\le_2$-minimal since we have Lemmas \ref{ktwoinflochainlem} and \ref{letwoupwlem}, according to which any $<_2$-predecessor would be the supremum of an infinite $<_1$-chain, and finite patterns, such as for instance $<_1$-chains, below such a $<_2$-predecessor would recur cofinally below $\alpha$. According to Definitions \ref{kappanuprincipals}, \ref{kappadpdefi}, and \ref{trchevaldefi} we have \[\beta=\left\{\begin{array}{ll} \kappa_\xi+\mathrm{dp}(\xi) & \mbox{ if }n=1\\[2mm] \delta+\kappa^{{\vec{\varsigma}}}_\xi+\mathrm{dp}_{{\vec{\varsigma}}}(\xi) & \mbox{ if }n>1, \end{array}\right.\] since $\xi\ge\upsilon_\lambda$ if $n=1$. Note that the tracking chain of any ordinal in the interval $[\delta,\beta]$ has the initial chain $\alphavecrestrarg{n-1,m_{n-1}}$, see Corollary \ref{alpldpcor}. In the case $n=1$ we have to show that $\alpha$ is $\upsilon_\lambda$-$\le_1$-minimal. This will be the special case $\delta=\upsilon_\lambda$.
Generally, for $n\ge 1$ we now show that $\alpha$ is $\delta$-$\le_1$-minimal, which follows from the following Claim \ref{relleominclaim}, since the existence of any $\gamma\in(\delta,\alpha)$ such that $\gamma<_1\alpha$ would allow us to take the sets $X$ and $Z$ from Claim \ref{relleominclaim} and to $\le_1$-reflect the set $Z_2:=Z\cap[\gamma,\alpha)$ down to a set $\tilde{Z}_2\subseteq\gamma\le\beta$ such that, setting $Z_1:=Z\cap\gamma$, we would have $X, Z_1<\tilde{Z}_2$ and $X\cup Z_1\cup\tilde{Z}_2\cong X\cup Z_1\cup Z_2$, so that $\tilde{Z}:=Z_1\cup\tilde{Z}_2$ would satisfy $X<\tilde{Z}\subseteq\beta$ and $X\cup\tilde{Z}\cong X\cup Z$, contradicting Claim \ref{relleominclaim}. \begin{claim}\label{relleominclaim} There exists a finite set $Z\subseteq(\delta,\alpha)$ such that there does not exist any cover $X\cup\tilde{Z}$ of $X\cup Z$ with $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\beta$, where $X$ is the finite set that consists of $\delta$ and all existing greatest $<_2$-predecessors $\gamma$ of elements in $Z$ such that $\gamma\le\delta$. \end{claim} {\bf Proof.} In order to prove the claim, let us first consider the case $\xi=\upsilon_\lambda$ if $n=1$ or $\xi=0$ if $n>1$. Then $\delta=\beta$ and $\alpha$ is clearly $\delta$-$\le_1$-minimal. We trivially choose $X:=\{\delta\}$ and $Z:=\emptyset$. Now let us assume that $\xi=_{\mathrm{\scriptscriptstyle{ANF}}}\xi_1+\ldots+\xi_r>0$ such that $\xi>\upsilon_\lambda$ if $n=1$.
Since $\alphavec\in{\operatorname{T}}C$, we then have $\alphavec[\xi]\in{\operatorname{T}}C$ if and only if condition 2 of Definition \ref{kapparegdefi} holds, and accordingly set \[\gammavec:=\left\{\begin{array}{l@{\quad}l} \alphavecrestrarg{n-2}^\frown({\alphavec_{n-1}}^\frown\mu_{\taucp{n-1,m_{n-1}}}) &\mbox{if }n>1\mbox{ and } \xi=\taucp{n-1,m_{n-1}}\in{\mathbb E}^{>\taucppr{n-1}}\\ \alphavec[\xi] & \mbox{otherwise.} \end{array}\right.\] Let ${\mathrm{tc}}(\beta)=:\betavec$, where $\betavec_i=(\betacp{i,1},\ldots,\betacp{i,k_i})$ for $i=1,\ldots,l$, which according to part 7(b) of Lemma \ref{tcevalestimlem} is equal to $\operatorname{me}(\gammavec)$, since, due to the fact that $\alphavec\in{\operatorname{T}}C$, $\gammavec$ (and hence also $\betavec$) does not possess a critical main line index pair. Let ${\vec{\sigma}}$ be the chain associated with $\betavec$ and set $k_0:=0$. The i.h.\ yields $\delta<\gamma:=\mathrm{o}(\gammavec)\le_1\beta$, $\delta<_1\gamma$ if $\delta>0$, and we clearly have $k_l=1$ by the choice of $\gammavec$ and the definition of $\operatorname{me}$. Hence there exists a $\le_\mathrm{\scriptscriptstyle{lex}}$-minimal index pair $(e,1)\in\mathrm{dom}(\betavec)$ such that both $n\le e\le l$ and $\betacp{e,1}\not\in{\mathbb E}^{>{\sigma_e^\star}}$. Let \[\eta:=\left\{\begin{array}{ll} \upsilon_\lambda & \mbox{ if }e=1\\ \ordcp{e-1,k_{e-1}}(\betavec) & \mbox{ if }e>1 \end{array}\right.\] Notice that due to the minimality of $e$ the case $k_{e-1}=1$ can only occur if $e=n>1$, $m_{n-1}=1$, $\gammavec=\alphavec[\xi]$, and hence $\delta=\eta$.
Setting $\betavecpr:=\betavecrestrarg{(e,1)}$, $\betapr:=\mathrm{o}(\betavecpr)$, and ${\vec{\varsigma}}^\prime:={\mathrm{ers}}_{e,1}(\betavec)$, in general we have $\delta\le\eta$, \[\gamma\le\betapr=\left\{\begin{array}{ll} \kappa^{{\vec{\varsigma}}^\prime}_{\betacp{e,1}} & \mbox{ if }e=1\\[2mm] \eta+\kappa^{{\vec{\varsigma}}^\prime}_{\betacp{e,1}} & \mbox{ if }e>1, \end{array}\right.\] and $\betapr+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\betacp{e,1})=\beta$. Note that according to Lemma \ref{evallem} we have $\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\betacp{e,1})=\mathrm{dp}_{{\mathrm{rs}}_{e^\star}(\betavec)}(\sigmacp{e,1})$. We now consider cases regarding $\betacp{e,1}$ in order to define in each case a finite set $Z_\eta\subseteq(\eta,\alpha)$ such that there does not exist any cover $X_\eta\cup\tilde{Z}_\eta$ of $X_\eta\cup Z_\eta$ with $X_\eta<\tilde{Z}_\eta$ and $X_\eta\cup\tilde{Z}_\eta\subseteq\beta$, where $X_\eta$ is the finite set that consists of $\eta$ and all existing greatest $<_2$-predecessors less than or equal to $\eta$ of elements in $Z_\eta$. {\small Case A below specifies the situation where $\beta$ captures a (next) successor-$<_1$-successor of $\eta$ (simple example: $\alpha=\upsilon_\lambda+2$). Case B handles the occurrence of a successor-$<_2$-successor whose greatest $\le_2$-predecessor is determined by ${\sigma_e^\star}$ via the i.h.\ (simple example: $\alpha=\varepsilon_0\cdot(\omega+1)+1$, $\betacp{e,1}=\varepsilon_0$ where $e=2$. Note that in this example we have ${\mathrm{tc}}(\alpha)=((\varepsilon_0+1))$ and ${\mathrm{tc}}(\beta)=((\varepsilon_0,\omega),(\varepsilon_0))$). Case C is where $\beta$ captures the $\betacp{e,1}=_{\mathrm{NF}}\zeta+\sigmacp{e,1}$-th $\eta$-$\le_1$-minimal component ($\zeta>0$), that is, a branching of $\eta$-$\le_1$-components occurs (simple example: $\alpha=\omega\cdot2+2$).
Case D is the situation where the least $\eta$-$\le_1$-component is reached that itself $<_1$-connects to $\betacp{e+1,1}$-many components, namely the $\betacp{e,1}$-th component, that is, nesting of $\eta$-$\le_1$-components occurs (simple example: $\alpha=\omega^\omega+\omega+2$).} \\[2mm] {\bf Case A:} $\sigmacp{e,1}=1$. Then $l=e$, and by the i.h.\ applied to $\betapr=\beta$, which is of the form $\beta=\beta^\circ+1$, there are $X^\prime\subseteq_\mathrm{fin}\eta+1$ and $Z^\prime\subseteq_\mathrm{fin}(\eta,\beta)$ according to the claim, with the property that there does not exist any cover $X^\prime\cup\tilde{Z}^\prime$ of $X^\prime\cup Z^\prime$ such that $X^\prime<\tilde{Z}^\prime$ and $X^\prime\cup\tilde{Z}^\prime\subseteq\beta^\circ$. Let \[X_\eta:=X^\prime\quad\mbox{ and }\quad Z_\eta:=Z^\prime\cup\{\beta\}.\] Clearly, if there were a set $\tilde{Z}_\eta\subseteq(\eta,\beta)$ such that $X_\eta\cup\tilde{Z}_\eta$ is a cover of $X_\eta\cup Z_\eta$, then $X^\prime\cup(\tilde{Z}_\eta\cap\max(\tilde{Z}_\eta))$ would be a cover of $X^\prime\cup Z^\prime$ which is contained in $\beta^\circ$.\\[2mm] {\bf Case B:} $\betacp{e,1}={\sigma_e^\star}\in{\mathbb E}$. Then $\betavecpr$ is maximal, implying that $l=e$ and $\betapr=\beta$. Note that by monotonicity and continuity, Corollary \ref{kappanuhzcor}, \[\beta=\sup\set{\mathrm{o}(\betavecpr[\zeta])}{0<\zeta<{\sigma_e^\star}}.\] By the i.h.\ we see that $\beta$ is a successor-$<_2$-successor of its greatest $<_2$-predecessor $\operatorname{pred}_2(\beta)$. Accordingly, \[X_\eta:=\{\operatorname{pred}_2(\beta),\eta\}\quad\mbox{ and }\quad Z_\eta:=\{\beta\}\] has the requested property.\\[2mm] {\bf Case C:} $\betacp{e,1}=_{\mathrm{NF}}\zeta+\sigmacp{e,1}$ where $\zeta,\sigmacp{e,1}>1$.
Since $\zeta+1, \sigmacp{e,1}+1<\betacp{e,1}$ we can apply the i.h.\ to $\beta_\zeta:=\mathrm{o}(\betavecpr[\zeta+1])$ and $\beta_\sigma:=\mathrm{o}(\betavecpr[\sigmacp{e,1}+1])$, obtaining sets $X_1,Z_1$ and $X_2,Z_2$ according to the claim, respectively. We then set \[X_\eta:=X_1\cup X_2\quad\mbox{ and }\quad Z_\eta:=Z_1\cup\left(\beta_\zeta+\left(-(\eta+1)+Z_2\right)\right).\] Then $X_\eta$ and $Z_\eta$ have the desired property due to the fact that \[\beta_\sigma\quad\cong\quad\eta+1\cup[\beta_\zeta,\alpha),\] which in turn follows from the i.h. Clearly, we exploit the i.h.\ regarding $\beta_\zeta$ and $\beta_\sigma$ in order to see that a hypothetical cover of $X_\eta\cup Z_\eta$ contradicting the claim would imply the existence of a cover of either $X_1\cup Z_1$ or $X_2\cup Z_2$ contradicting the i.h.\\[2mm] {\bf Case D:} Otherwise. Then ${\sigma_e^\star}<\sigmacp{e,1}=\betacp{e,1}\not\in{\mathbb E}\cup\{1\}$, and we have $k_e=1$, $(e+1,1)\in\mathrm{dom}(\betavec)$, and \[0<\betacp{e+1,1}=\lg((1/{\sigma_e^\star})\cdot\sigmacp{e,1})<\sigmacp{e,1}.\] Note that $\betapr$ is a supremum of $\eta$-$\le_1$-minimal ordinals $\beta_\nu$ (exchanging the index $\betacp{e,1}$ with $\nu$ in the definition of $\betapr$ above) where ${\sigma_e^\star}\le\nu<\betacp{e,1}$ and $\lg((1/{\sigma_e^\star})\cdot\nu)<\betacp{e+1,1}$, that is, $\betacp{e,1}$ is the least index of an $\eta$-$\le_1$-relativized component that $\le_1$-connects to $\betacp{e+1,1}$-many components. The constraint ${\sigma_e^\star}\le\nu$ guarantees the same greatest $\le_2$-predecessors connecting to the $\nu$-th and $\betacp{e,1}$-th components. By the i.h.\ applied to $\beta^\circ:=\mathrm{o}(\betavecpr[\betacp{e+1,1}+1])$ we obtain sets $X^\prime$ and $Z^\prime\subseteq(\eta,\beta^\circ)$ according to the claim.
Define \[X_\eta:=X^\prime\cup\{\operatorname{pred}_2(\betapr)\}\setminus\{0\}\quad\mbox{ and }\quad Z_\eta:=\{\betapr\}\cup\left(\betapr+(-\eta+Z^\prime)\right).\] Arguing toward contradiction, let us assume there were a set $\tilde{Z}_\eta\subseteq(\eta,\beta)$ with $X_\eta<\tilde{Z}_\eta$ such that $X_\eta\cup\tilde{Z}_\eta\subseteq\beta$ is a cover of $X_\eta\cup Z_\eta$. Since by the i.h.\ $\betapr\le_1\beta$ and thus $\betapr\le_1 Z_\eta$, hence also $\mu:=\min(\tilde{Z}_\eta)\le_1\tilde{Z}_\eta$, we find cofinally many copies of $\tilde{Z}_\eta$ below $\betapr$. We may therefore assume that $\tilde{Z}_\eta\subseteq(\eta,\betapr)$ and moreover that for some $\nu\in(0,\betacp{e,1})$ such that $\nu\ge{\sigma_e^\star}$ and $\lg((1/{\sigma_e^\star})\cdot\nu)<\betacp{e+1,1}$ (clearly satisfying $\betavecpr[\nu]\in\operatorname{TC}$) \[\tilde{Z}^-_\eta:=\tilde{Z}_\eta\setminus\{\mu\}\subseteq\left(\beta_\nu,\beta_{\nu+1}\right),\] where $\beta_\nu:=\mathrm{o}(\betavecpr[\nu])$ and $\beta_{\nu+1}:=\mathrm{o}(\betavecpr[\nu+1])$. Setting \[\tilde{Z}^\prime:=\eta+(-\beta_\nu+\tilde{Z}^-_\eta)\] and using that due to the i.h.\ we have \[\eta+1\cup\left(\beta_\nu,\beta_{\nu+1}\right)\cong\eta+(-\beta_\nu+\beta_{\nu+1}),\] we obtain a cover $X^\prime\cup\tilde{Z}^\prime$ of $X^\prime\cup Z^\prime$ with $X^\prime<\tilde{Z}^\prime$ and $X^\prime\cup\tilde{Z}^\prime\subseteq\mathrm{o}(\betavecpr[\betacp{e+1,1}])$, which contradicts the i.h.\\[2mm] Now, in the case $\delta=\eta$ we are done, choosing $X:=X_\eta$ and $Z:=Z_\eta$. Let us therefore assume that $\delta<\eta$.
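{\small As a sketch of how the claim works in the simplest instance of Case A mentioned above, namely $\alpha=\upsilon_\lambda+2$: here $\delta=\eta=\upsilon_\lambda$ and $\beta=\beta^\circ+1$ with $\beta^\circ=\upsilon_\lambda$. Since $(\eta,\beta)=\emptyset$, the i.h.\ leaves only $X^\prime=\{\upsilon_\lambda\}$ and $Z^\prime=\emptyset$, whence $X_\eta=\{\upsilon_\lambda\}$ and $Z_\eta=\{\upsilon_\lambda+1\}$. Any cover $X_\eta\cup\tilde{Z}_\eta$ with $X_\eta<\tilde{Z}_\eta$ and $X_\eta\cup\tilde{Z}_\eta\subseteq\beta$ would require $\emptyset\not=\tilde{Z}_\eta\subseteq(\upsilon_\lambda,\upsilon_\lambda+1)=\emptyset$, which is impossible, in accordance with the claim.}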
We claim that for every index pair $(i,j)\in\{(0,0)\}\cup\mathrm{dom}(\betavec)$ with $(n-1,m_{n-1})\le_\mathrm{\scriptscriptstyle{lex}}(i,j)<_\mathrm{\scriptscriptstyle{lex}}(e,1)$, where $m_0:=0$, setting $\eta_{0,0}:=\upsilon_\lambda$ and ${\eta_{i,j}}:=\ordcp{i,j}(\betavec)$ for $(i,j)\in\mathrm{dom}(\betavec)$, there is $Z_{i,j}\subseteq_\mathrm{fin}({\eta_{i,j}},\alpha)$ such that there does not exist any cover $X_{i,j}\cup\tilde{Z}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\beta$, where $X_{i,j}$ consists of ${\eta_{i,j}}$ and all existing greatest $<_2$-predecessors less than or equal to ${\eta_{i,j}}$ of elements of $Z_{i,j}$. This is shown by induction on the finite number of $1$-step extensions from $\betavecrestrarg{(i,j)}$ to $\betavecpr$. The initial step where $(i,j)=(e-1,k_{e-1})$ and ${\eta_{i,j}}=\eta$ has been shown above. Now assume $(i,j)<_\mathrm{\scriptscriptstyle{lex}}(e-1,k_{e-1})$ and let $(u,v):=(i,j)^+$. Let $X_{u,v}$ and $Z_{u,v}\subseteq({\eta_{u,v}},\alpha)$ be according to the i.h. The i.h.\ provides us with knowledge of the $<_i$-predecessors of ${\eta_{u,v}}$ ($i=1,2$), which in turn stands in $\le_1$-relation with every element of $Z_{u,v}$. We consider cases regarding $(u,v)$. {\small Returning to our simple example for Case B above, where $\alpha=\varepsilon_0\cdot(\omega+1)+2$ and $e=2$, first Case II applies with $(i,j)=(1,1)$ and $\betacp{u,v}=\betacp{1,2}=\omega$, and then Case I applies with $(i,j)=(0,0)$ and $\betacp{u,v}=\betacp{1,1}=\varepsilon_0$. The resulting sets are $X=X_{0,0}=\{0\}$ and $Z=Z_{0,0}=\{\varepsilon_0\cdot\omega,\varepsilon_0\cdot(\omega+1)\}$, where $0$ is contained in $X$ only for technical reasons, as $X=\emptyset$ would of course suffice.
Instructive variants of this example regarding Case II are $\alpha=\varepsilon_1\cdot(\omega+1)+2$, where $\overline{\varepsilon_1}=\varepsilon_0$, or $\alpha=\varepsilon_{\Gamma_0+1}\cdot(\omega+1)+2$, where $\overline{\varepsilon_{\Gamma_0+1}}=\Gamma_0$.} \\[2mm] {\bf Case I:} $(u,v)=(i+1,1)$ where $i\ge 0$. Then we have $\betacp{u,v}=\sigmacp{u,v}\in{\mathbb E}^{>\sigma^\star_u}$ (by the minimality of $e$) and $(u,v)^+=(i+1,2)$ with $\betacp{i+1,2}=\mu_{\sigmacp{u,v}}$. We define \[Z_{i,j}:=Z_{u,v}\] and observe that, setting $X^\prime:=X_{u,v}\setminus\{{\eta_{u,v}}\}$, we have $X_{i,j}=X^\prime\cup\{{\eta_{i,j}}\}$, since for any greatest $<_2$-predecessor $\nu$ of an element in $Z_{i,j}$ we have $\nu\le{\eta_{i,j}}$. Assume there were a cover $X_{i,j}\cup\tilde{Z}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\beta$. We have ${\eta_{u,v}}<_1 Z_{u,v}=Z_{i,j}$, and by the i.h.\ we may assume that $\tilde{Z}_{i,j}\subseteq({\eta_{i,j}},{\eta_{u,v}})$, since if necessary we may $\le_1$-reflect the elements that are greater than or equal to ${\eta_{u,v}}$ down below ${\eta_{u,v}}$, into the interval $({\eta_{i,j}},{\eta_{u,v}})$. The i.h.\ shows that we have the following isomorphism \[{\eta_{u,v}}\quad\cong\quad{\eta_{i,j}}+1\cup({\eta_{u,v}},\mathrm{o}(\betavecrestrarg{i}^\frown(\betacp{u,v},1))),\] which shows that defining $\tilde{Z}^\prime_{i,j}:={\eta_{u,v}}+(-{\eta_{i,j}}+\tilde{Z}_{i,j})$ we obtain another cover $X_{i,j}\cup\tilde{Z}^\prime_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with the assumed properties. We now claim that $X_{u,v}\cup\tilde{Z}^\prime_{i,j}$ is a cover of $X_{u,v}\cup Z_{u,v}$ with $X_{u,v}<\tilde{Z}^\prime_{i,j}$ and $X_{u,v}\cup\tilde{Z}^\prime_{i,j}\subseteq\beta$, contradicting the i.h.
Indeed, $X^\prime\cup\tilde{Z}^\prime_{i,j}$ is a cover of $X^\prime\cup Z_{u,v}$, and we have ${\eta_{u,v}}<_1 Z_{u,v},\tilde{Z}^\prime_{i,j}$ and ${\eta_{u,v}}\not\le_2\nu$ for any $\nu\in Z_{u,v}\cup\tilde{Z}^\prime_{i,j}$.\\[2mm] {\bf Case II:} $(u,v)=(i,j+1)$. Then $(i,j)\in\mathrm{dom}(\betavec)$, and letting $\sigma:=\sigmacp{i,j}$ and $\sigma^\prime:=\sigmacppr{i,j}$ we have, recalling Definition \ref{barop}, $\sigma^\prime\le{\bar{\sigma}}$, since $\sigma^\prime<\sigma$ and, according to part 4 of Lemma \ref{trsbasicpropslem}, tracking sequences are subsequences of localizations, and $\betacp{i,j+1}=\mu_\sigma$. The i.h.\ applied to $\beta_{\bar{\sigma}}:=\mathrm{o}(\betavecrestrarg{(i,j)}^\frown({\bar{\sigma}}+1))$ yields sets $X_{\bar{\sigma}}$ and $Z_{\bar{\sigma}}\subseteq({\eta_{i,j}},\beta_{\bar{\sigma}})$ according to the claim. Setting $\beta_\sigma:=\mathrm{o}(\betavecrestrarg{(u,v)}^\frown(\sigma))$ we now define \[Z_{i,j}:=\{{\eta_{u,v}}\}\cup({\eta_{u,v}}+(-{\eta_{i,j}}+Z_{\bar{\sigma}}))\cup\{\beta_\sigma\}\cup Z_{u,v}\] and assume that there were a cover $X_{i,j}\cup\tilde{Z}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\beta$. Notice that the possibly redundant element $\beta_\sigma$ is the least $<_2$-successor of ${\eta_{u,v}}$ and is provided explicitly for a practical reason, while $Z_{u,v}$ must contain at least one $<_2$-successor of ${\eta_{u,v}}$, which, however, we do not keep track of here. Thus, the image $\mu:=\min(\tilde{Z}_{i,j})$ of ${\eta_{u,v}}$ must have a $<_2$-successor and therefore by the i.h.\ have a tracking chain ending with a limit $\nu$-index.
Setting ${\vec{\varsigma}}^\prime:={\mathrm{ers}}_{i,j}(\betavec)$ and $\tilde{\sigma}:=\kappa^{{\vec{\varsigma}}^\prime}_\sigma$ we have ${\mathrm{tc}}({\eta_{i,j}}+\tilde{\sigma})=\betavecrestrarg{u,v}[1]<_1\beta$, and since $\mu\le_1\tilde{Z}_{i,j}$, the set $\tilde{Z}_{i,j}$ is contained in one component enumerated by $\kappa^{{\vec{\varsigma}}^\prime}$ starting from ${\eta_{i,j}}$, so that, as we can see via $<_1$-downward reflection, the assumption can be fortified to assuming \[\tilde{Z}_{i,j}\subseteq[{\eta_{i,j}}+\kappa^{{\vec{\varsigma}}^\prime}_\zeta,{\eta_{i,j}}+\kappa^{{\vec{\varsigma}}^\prime}_{\zeta+1})=:I\] for the \emph{least} $\zeta$, which, using the i.h.\ and recalling that we incorporated a translation of the set $Z_{\bar{\sigma}}$ into $Z_{i,j}$, can easily be seen to satisfy $\zeta\in{\mathbb E}\cap({\bar{\sigma}},\sigma)$ and (with the aid of Lemma \ref{tcevalestimlem}) \[\mu<_1\mathrm{o}(\operatorname{me}(\betavecrestrarg{(i,j)}^\frown(\zeta)))={\eta_{i,j}}+\kappa^{{\vec{\varsigma}}^\prime}_{\zeta}+\mathrm{dp}_{{\vec{\varsigma}}^\prime}(\zeta)=\max(\tilde{Z}_{i,j}).\] The minimality of $\zeta$ moreover allows us to assume that $\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu))\le_2\mu$ for some index $\nu\le\mu_\zeta$, for the following reasons: In case of $\mu<\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\mu_\zeta))$ there is a least $\nu>0$ such that $\mu<_1\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu+1))$, and by the i.h.\ we have (making use of Lemma \ref{cmlmaxextcor}) $\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu))\le_2\operatorname{pred}_1(\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu+1)))$.
If on the other hand $\mu\ge\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\mu_\zeta))$, the assumption $\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\mu_\zeta))\not\le_2\mu$ would imply, using the i.h.\ regarding $\le_2$-predecessors of $\mu$, that there is a least $q>i$ such that $\ordcp{q,1}(\operatorname{me}(\betavecrestrarg{(i,j)}^\frown(\zeta)))<_1\mu$ with a corresponding $\kappa$-index $\rho$ at $(q,1)$ such that ${\mathrm{end}}(\rho)<\zeta$, contradicting the minimality of $\zeta$. We may furthermore strengthen the assumption $\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu))\le_2\mu$ for some index $\nu\le\mu_\zeta$ to actual equality \[\mu=\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta,\nu)),\] since it is easy to check that this still results in a cover of $X_{i,j}\cup Z_{i,j}$ with the assumed properties. Since $\zeta\in({\bar{\sigma}},\sigma)$, setting $\varphi:=\pi_{\zeta,\sigma}^{-1}$ we have $\varphi(\lambda_\zeta)<\lambda_\sigma$ (cf.\ part 1 of Lemma \ref{finelocbasicpropslem}) and $\varphi(\mu_\zeta)\le\mu_\sigma$ by Lemma \ref{munoncofinlem}. The vectors in the $\kappa^{{\vec{\varsigma}}^\prime}$-segment ${\mathrm{tc}}[I]$ of $\operatorname{TC}$ have a form \[{\vec{\iota}}=\betavecrestrarg{(i,j)}^\frown({\vec{\zeta}},{\vec{\xi}}_1,\ldots,{\vec{\xi}}_g),\] where ${\vec{\zeta}}=(\zeta,\zeta_1,\ldots,\zeta_h)$ with $g, h\ge 0$, cf.\ again Corollary \ref{alpldpcor}. Let \[{\vec{\zeta}}^\prime:= \left\{\begin{array}{l@{\quad}l} (\betacp{i,1},\ldots,\betacp{i,j},1) & \mbox{ if } h=0\\ (\betacp{i,1},\ldots,\betacp{i,j},1+\varphi(\zeta_1),\varphi(\zeta_2),\dots,\varphi(\zeta_h)) & \mbox{ otherwise.} \end{array}\right.\] Let $g_0\in\{1,\ldots,g\}$ be minimal such that ${\mathrm{end}}(\xi_{g_0,1})<\zeta$ if that exists, and $g_0=g+1$ otherwise.
If $g_0\le g$ let ${\vec{\xi}}_{g_0}=(\xi_{g_0,1},\ldots,\xi_{g_0,k})$ and define ${\vec{\xi}}^\prime:=(\varphi(\xi_{g_0,1}),\xi_{g_0,2},\ldots,\xi_{g_0,k})$. We can now define the base transformation of ${\vec{\iota}}$ by \[t({\vec{\iota}}):=\betavecrestrarg{i-1}^\frown\left({\vec{\zeta}}^\prime,\varphi({\vec{\xi}}_1),\ldots, \varphi({\vec{\xi}}_{g_0-1}),{\vec{\xi}}^\prime,{\vec{\xi}}_{g_0+1},\ldots,{\vec{\xi}}_g\right).\] In order to clarify the definition, note that $t({\vec{\iota}})=\betavecrestrarg{i-1}^\frown\left({\vec{\zeta}}^\prime\right)$ in case of $g=0$. The part $\left({\vec{\xi}}^\prime,{\vec{\xi}}_{g_0+1},\ldots,{\vec{\xi}}_g\right)$, which is empty in case of $g_0=g+1$ and is equal to $({\vec{\xi}}^\prime)$ if $g_0=g$, refers to the addition of a parameter below $\mathrm{o}(\betavecrestrarg{(i,j)}^\frown(\zeta))$, which is the reason why the relevant indices are not subject to base transformation. It is easy to see that $t({\vec{\iota}})\in\operatorname{TC}$ and therefore \[t: {\mathrm{tc}}[I]\to\operatorname{TC},\mbox{ with }\mathrm{o}[\mathrm{Im}(t)]\subseteq[{\eta_{i,j}}+\tilde{\sigma},\beta).\] Using $t$ and applying the i.h.\ in combination with the commutativity of $\varphi$ with all operators acting on the indices, as shown in Lemma \ref{latallem} and Subsection \ref{ordopsec}, we obtain \[{\eta_{i,j}}+1\cup I\quad\cong\quad{\eta_{i,j}}+1\cup\mathrm{o}[\mathrm{Im}(t)],\] since thanks to $\sigma^\prime<\zeta<\sigma$ it is easy to see that ${\eta_{i,j}}+\kappa^{{\vec{\varsigma}}^\prime}_{\zeta}$ and ${\eta_{i,j}}+\tilde{\sigma}$ have the same greatest $<_2$-predecessor (which then is less than or equal to ${\eta_{i,j}}$) unless both are $\le_2$-minimal. The set $\tilde{\tilde{Z}}_{i,j}:=\mathrm{o}\circ t\circ{\mathrm{tc}}[\tilde{Z}_{i,j}]$ therefore gives rise to another cover of $X_{i,j}\cup Z_{i,j}$ with the assumed properties.
We have \[{\tilde{\mu}}:=\min(\tilde{\tilde{Z}}_{i,j})=\mathrm{o}(\betavecrestrarg{(u,v)}[\varphi(\nu)]),\] corresponding to $\mu$. In the case $\varphi(\nu)<\mu_\sigma=\betacp{u,v}$ we may first assume (using iterated $\le_1$-downward reflection if necessary) that the set $\tilde{\tilde{Z}}_{i,j}$ is contained in the interval \[[{\tilde{\mu}},\mathrm{o}(\betavecrestrarg{(u,v)}[\varphi(\nu)+1]))=:J,\] but then we may even assume that $\tilde{\tilde{Z}}_{i,j}\subseteq[{\eta_{u,v}},\beta)$ and ${\tilde{\mu}}={\eta_{u,v}}$, since otherwise, as seen directly from the i.h., we could exploit the translation isomorphism \[{\eta_{i,j}}+1\cup J\quad\cong\quad{\eta_{i,j}}+1\cup({\eta_{u,v}}+(-{\tilde{\mu}}+J)),\] which shifts $J$ into the interval $[{\eta_{u,v}},\beta)$. We have now transformed the originally assumed cover $X_{i,j}\cup\tilde{Z}_{i,j}$ into a cover $X_{i,j}\cup\tilde{\tilde{Z}}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ which fixes ${\eta_{u,v}}=\min(\tilde{\tilde{Z}}_{i,j})$ and still has the assumed property $X_{i,j}\cup\tilde{\tilde{Z}}_{i,j}\subseteq\beta$. Defining $\tilde{Z}_{u,v}$ to be the subset corresponding to $Z_{u,v}$ in $\tilde{\tilde{Z}}_{i,j}$, we obtain a cover $X_{u,v}\cup\tilde{Z}_{u,v}$ of $X_{u,v}\cup Z_{u,v}$ that satisfies $X_{u,v}<\tilde{Z}_{u,v}$ and $X_{u,v}\cup\tilde{Z}_{u,v}\subseteq\beta$, contradiction. This concludes the proof of Claim \ref{relleominclaim}.\mbox{ } $\Box$ Clearly, in the case $\delta=0$ (that is, $n=1$ and $\lambda=0$) we have verified that $\alpha$ is $\le_1$-minimal and are done. Assuming now that in the case $n=1$ we have $\lambda>0$ and hence $\delta=\upsilon_\lambda>0$, we show next that $\delta<_1\alpha$ as claimed in part a), using the criterion provided in Proposition \ref{letwocriterion}. Let finite sets $X\subseteq\delta$ and $Y\subseteq[\delta,\alpha)$ be given. Without loss of generality, we may assume that $\delta\in Y$.
We are going to define a set $\tilde{Y}$ such that $X<\tilde{Y}<\delta$ and $X\cup\tilde{Y}\cong X\cup Y$, distinguishing between two subcases, the second of which will require base transformation in its second part, 1.1.2.2. {\small A simple example for Subcase 1.1.1 below is $\alpha=\omega+1$, ${\mathrm{tc}}(\alpha)=((\omega),(1))$; another easy but instructive example is $\alpha=\varepsilon_0\cdot\omega^2+\varepsilon_0\cdot(\omega+1)+1$, ${\mathrm{tc}}(\alpha)=((\varepsilon_0\cdot\omega),(\varepsilon_0+1))$. An instructive example for Subcase 1.1.2.2 is $\alpha=\varepsilon_\omega\cdot(\omega+1)+1$, ${\mathrm{tc}}(\alpha)=((\varepsilon_\omega,\omega),(\varepsilon_\omega+1))$, and for the easier (non-critical) Subcase 1.1.2.1 the example $\alpha=\varepsilon_\omega\cdot\omega+\varepsilon_0\cdot(\omega+1)+1$, ${\mathrm{tc}}(\alpha)=((\varepsilon_\omega,\omega),(\varepsilon_0+1))$ illustrates the difference between these two scenarios in Subcase 1.1.2. } \\[2mm] {\bf Subcase 1.1.1:} $n>1$ and $m_{n-1}=1$. Since \[\alphacp{n,1}<\rho_{n-1}=\lg((1/{\tau_{n-1}^\star})\cdot\taucp{n-1,1})+1,\] we see that $\alphacp{n-1,1}$ is a limit of ordinals $\eta<\alphacp{n-1,1}$ such that $\lg((1/{\tau_{n-1}^\star})\cdot{\mathrm{end}}(\eta))\ge\xi$. Now choose such an index $\eta$ large enough so that $\eta>\alphacp{n-1,1}\minusp\taucp{n-1,1}$, ${\tau_{n-1}^\star}\le{\mathrm{end}}(\eta)<\taucp{n-1,1}$, and $X<\mathrm{o}(\alphavecrestrarg{n-1,1}[\eta])=:\gamma$. Notice that by the i.h.\ $\gamma$ and $\delta$ have the same $<_i$-predecessors, $i=1,2$. We will define a translation mapping $t$ in terms of tracking chains that results in an isomorphic copy of the interval $[\delta,\beta]$ starting from $\gamma$.
The tracking chain of an ordinal $\zeta\in[\delta,\beta]$ has a form ${\vec{\iota}}:=\alphavecrestrarg{(n-1,1)}^\frown{\vec{\zeta}}$, where ${\vec{\zeta}}=({\vec{\zeta}}_1,\ldots,{\vec{\zeta}}_g)$, $g\ge0$, and ${\vec{\zeta}}_i=(\zeta_{i,1},\ldots,\zeta_{i,w_i})$ for $1\le i\le g$. Let \[{\vec{\zeta}}^\prime:=\left\{\begin{array}{l@{\quad}l} ((\eta,1),{\vec{\zeta}}_2,\ldots,{\vec{\zeta}}_g)& \mbox{ if } g>0\:\&\:\zeta_{1,1}={\mathrm{end}}(\eta)\in{\mathbb E}^{>{\tau_{n-1}^\star}}\:\&\: w_1=1\\ ((\eta,1+\zeta_{1,2},\zeta_{1,3},\ldots,\zeta_{1,w_1}),{\vec{\zeta}}_2,\ldots,{\vec{\zeta}}_g)& \mbox{ if }g>0\:\&\:\zeta_{1,1}={\mathrm{end}}(\eta)\in{\mathbb E}^{>{\tau_{n-1}^\star}}\:\&\: w_1>1\\ ((\eta),{\vec{\zeta}}_1,\ldots,{\vec{\zeta}}_g)& \mbox{ otherwise,} \end{array}\right.\] where the first two cases take care of condition 2 of Definition \ref{kapparegdefi}, since the situation ${\mathrm{end}}(\eta)\in{\mathbb E}^{>{\tau_{n-1}^\star}}$ can actually occur, and define \[t({\vec{\iota}}):=\alphavecrestrarg{(n-2,m_{n-2})}^\frown{\vec{\zeta}}^\prime.\] The mapping $t$ gives rise to the translation mapping \[\mathrm{o}\circ t\circ{\mathrm{tc}}:[\delta,\beta]\to[\gamma,\gamma+\kappa^{\vec{\varsigma}}_\xi+\mathrm{dp}_{\vec{\varsigma}}(\xi)],\] and by the i.h.\ we have \[[0,\gamma+\kappa^{\vec{\varsigma}}_\xi+\mathrm{dp}_{\vec{\varsigma}}(\xi)]\quad\cong\quad[0,\gamma)\cup[\delta,\beta].\] This shows that in order to obtain $X\cup\tilde{Y}\cong X\cup Y$ we may choose \[\tilde{Y}:=\gamma+(-\delta+Y).\] {\bf Subcase 1.1.2:} $n>1$ and $m_{n-1}>1$. Let $\tau:=\taucp{n-1,m_{n-1}}$, $\sigma:={\tau^\prime}=\taucp{n-1,m_{n-1}-1}$, $\sigma^\prime:=\taucppr{n-1,m_{n-1}-1}$, and let $\alphacp{n-1,m_{n-1}}=_{\mathrm{NF}}\eta+\tau$ with $\eta=0$ in case of an additive principal number.
If $\alphacp{n-1,m_{n-1}}\in\mathrm{Lim}$, let $\alphapr$ be a successor ordinal in $(\eta,\eta+\tau)$, large enough to satisfy $\alphastar:=\mathrm{o}(\alphastarvec)>X$ where $\alphastarvec:=\alphavecrestrarg{n-1}[\alphapr]\in\operatorname{TC}$; otherwise let $\alphapr:=\alphacp{n-1,m_{n-1}}\minusp1$ and set $\alphastarvec:=\alphavecrestrarg{n-1}[\alphapr]$, which is equal to $\alphavecrestrarg{n-1,m_{n-1}-1}$ if $\alphapr=0$, according to the $[\cdot]$-notation introduced after Definition \ref{trackingchaindefi}. Notice that we have $\rho_{n-1}\ge\sigma$ and $\xi<\lambda_\sigma$. We consider the following subcases:\\[2mm] {\bf 1.1.2.1:} $\xi<\sigma$. Here we can argue comfortably as in the treatment of Subcase 1.1.1; however, in the special case where $\chi^\sigma(\alphapr)=1$ consider $\gammavec:=\operatorname{me}(\alphastarvec)$. Using Lemma \ref{cmlmaxextcor} and part 7(c) of Lemma \ref{tcevalestimlem} we know that $\operatorname{ec}(\gammavec)$ exists and has an extending index of the form $\sigma\cdot(\zeta+1)$ for some $\zeta$, and that according to part 2 of Lemma \ref{cmlmaxextcor} the maximal extension of $\alphastarvec$ to $\gammavec$ does not add epsilon bases (in the sense of Definition \ref{basesdefi}) between $\sigma^\prime$ and $\sigma$. In the cases where $\chi^\sigma(\alphapr)=0$ we set $\gammavec:=\alphastarvec$.
Clearly, as under the current assumption we have $\xi<\sigma\in{\mathbb E}^{>\sigma^\prime}$, the ordinal $\sigma$ is a limit of ordinals $\eta$ such that $\lg((1/\sigma^\prime)\cdot{\mathrm{end}}(\eta))=\xi+1$, which guarantees that ${\mathrm{end}}(\eta)>\sigma^\prime$, and $\eta$ can be chosen large enough so that, setting \[\nu:=\left\{\begin{array}{l@{\quad}l} \sigma\cdot\zeta+\eta&\mbox{ if }\chi^\sigma(\alphapr)=1\\ \rhoargs{\sigma}{\alphapr}+\eta&\mbox{ if }\alphapr\in\mathrm{Lim}\:\&\:\chi^\sigma(\alphapr)=0\\ \eta&\mbox{ otherwise,} \end{array}\right.\] we obtain $X<\mathrm{o}(\gammavec^\frown(\nu))=:\tilde{\delta}$. Observe that by the i.h.\ $\tilde{\delta}$ and $\delta$ then have the same $<_2$-predecessors and the same $<_1$-predecessors below $\tilde{\delta}$. The i.h.\ shows that \[\tilde{\delta}+\kappa^{{\vec{\varsigma}}}_\xi+\mathrm{dp}_{{\vec{\varsigma}}}(\xi)+1\quad\cong\quad\tilde{\delta}\cup[\delta,\beta],\] whence choosing \[\tilde{Y}:=\tilde{\delta}+(-\delta+Y)\] satisfies our needs.\\[2mm] {\bf 1.1.2.2:} $\xi\ge\sigma$. Then, as $\xi<\lambda_\sigma$, we have $\lambda_\sigma>\sigma$, which implies $\sigma\in\mathrm{Lim}({\mathbb E})$, and $\rho_{n-1}>\xi+1>\sigma$, which entails $\alphacp{n-1,m_{n-1}}\in\mathrm{Lim}$, hence $\tau>1$ and $\alphapr$ is a successor ordinal. According to Lemma \ref{cofinlem}, $\sigma$ is a limit of $\rho\in{\mathbb E}$ with $\varphi(\lambda^{\sigma^\prime}_\rho)\ge\xi$, where $\varphi:=\pi_{\rho,\sigma}^{-1}$. The additional requirement $\rho>{\bar{\sigma}}$ yields the bounds $\varphi(\lambda_\rho)<\lambda_\sigma$ (cf.\ part 1 of Lemma \ref{finelocbasicpropslem}) and $\varphi(\mu_\rho)\le\mu_\sigma$ (by Lemma \ref{munoncofinlem}).
Note that for any $y\in Y$ the tracking chain ${\mathrm{tc}}(y)$ is an extension of ${\mathrm{tc}}(\delta)$, see Lemma \ref{tcorderisolem} and Corollary \ref{alpldpcor}, and is of a form \[{\mathrm{tc}}(y)=\alphavecrestrarg{n-2}^\frown(\alphacp{n-1,1},\ldots,\alphacp{n-1,m_{n-1}},\zeta^y_{0,1},\ldots,\zeta^y_{0,k_0(y)})^\frown{\vec{\zeta}}^y,\] where $k_0(y)\ge 0$, ${\vec{\zeta}}^y=({\vec{\zeta}}^y_1,\ldots,{\vec{\zeta}}^y_{r(y)})$, $r(y)\ge 0$, and ${\vec{\zeta}}^y_u=(\zeta^y_{u,1},\ldots,\zeta^y_{u,k_u(y)})$ with $k_u(y)\ge 1$ for $u=1,\ldots,r(y)$. Notice that $k_0(y)>0$ implies that $\tau\in{\mathbb E}^{>\sigma}$ and $\xi\ge\tau$. We now define $r_0(y)\in\{1,\ldots,r(y)\}$ to be minimal such that ${\mathrm{end}}(\zeta^y_{r_0(y),1})<\sigma$ if that exists, and $r_0(y):=r(y)+1$ otherwise. For convenience let $\zeta^y_{r(y)+1,1}:=0$. Using Lemma \ref{cofinlem} we may choose an epsilon number $\rho\in({\bar{\sigma}},\sigma)$ satisfying $\tau,\xi\in\operatorname{T}^\sigma_\rho$ and $\lambda_\rho\ge\pi(\xi)$, where $\pi:=\pi_{\rho,\sigma}$, large enough so that \[\zeta^y_{r_0(y),1}, \zeta^y_{u,v}\in\operatorname{T}^\sigma_\rho\] for every $y\in Y$, every $u\in[0,r_0(y))$, and every $v\in\{1,\ldots,k_u(y)\}$. We set \[\vec{\tilde{\delta}}:=\alphastarvec^\frown(\rho,\pi(\tau)), \quad \tilde{\delta}:=\mathrm{o}(\vec{\tilde{\delta}}),\] and easily verify using the i.h.\ that $\delta$ and $\tilde{\delta}$ have the same $<_2$-predecessors in ${\mathrm{Ord}}$ and the same $<_1$-predecessors in $X$. By commutativity with $\pi$ the ordinal $\tilde{\delta}$ has at least $\pi(\xi)$-many immediate $\le_1$-successors, i.e.\ $\rho_n(\vec{\tilde{\delta}})\ge\pi(\xi)$.
Setting $\tilde{\vec{\varsigma}}:={\mathrm{ers}}_{n,2}(\vec{\tilde{\delta}})$, let \[\tilde{\beta}:=\tilde{\delta}+\kappa^{\tilde{\vec{\varsigma}}}_{\pi(\xi)}+\mathrm{dp}_{\tilde{\vec{\varsigma}}}(\pi(\xi)).\] Using $\varphi$ we define the embedding \[t:\tilde{\beta}+1\hookrightarrow\beta+1\] that fixes ordinals $\le\alphastar$ and performs base transformation from $\rho$ to $\sigma$, thereby mapping $\tilde{\delta}$ to $\delta$ and $\tilde{\beta}$ to $\beta$, as follows. For any tracking chain \[{\vec{\zeta}}=\alphastarvec^\frown({\vec{\zeta}}_1,\ldots,{\vec{\zeta}}_r)\] of an ordinal in the interval $(\alphastar,\tilde{\beta}]$, write ${\vec{\zeta}}_u=(\zeta_{u,1},\ldots,\zeta_{u,k_u})$ and let $r_0\in\{1,\ldots,r\}$ be minimal such that ${\mathrm{end}}(\zeta_{r_0,1})<\rho$ if that exists and $r_0:=r+1$ otherwise. Then apply $\varphi$ to every $\zeta_{u,v}$ such that $u<r_0$ and $v\le k_u$ as well as to $\zeta_{r_0,1}$ unless $r_0=r+1$. By the i.h.\ $t$ establishes an isomorphism between $\tilde{\beta}+1$ and $\mathrm{Im}(t)$, and by our choice of $\rho$ we have $Y\subseteq\mathrm{Im}(t)$, so that defining \[\tilde{Y}:=t^{-1}[Y]\] we obtain the desired copy of $Y$.\\[2mm] {\bf Subcase 1.1.3:} $n=1$ and $\lambda>0$. We then have $0<\delta=\upsilon_\lambda<\xi+1=\alphacp{1,1}$, and it is easy to see that we can choose an ordinal $\nu<\lambda$ large enough so that $X\subseteq\upsilon_\nu$, all parameters occurring in (the tracking chains of) the elements of $Y$ are contained in $\upsilon_\nu$, and all existing greatest $<_2$-predecessors of elements in $Y$ are less than $\upsilon_\nu$. We may then apply straightforward base transformation $\pi_{\upsilon_\nu,\upsilon_\lambda}$ to produce the desired copy $\tilde{Y}$.\\[2mm] \noindent {\bf Subcase 1.2:} $\alphacp{n,1}\in\mathrm{Lim}$.\\[2mm] {\bf Special case:} $\alpha=\upsilon_\lambda>0$.
According to the i.h.\ $\alpha$ is the supremum of the infinite $<_1$-chain of ordinals $\upsilon_\xi$ where $\xi\in(0,\lambda)$ and of the infinite $<_2$-chain of ordinals $\upsilon_\xi$ where $\xi\in(1,\lambda)$ is not the successor of any limit ordinal. This shows the claims for $\alpha$ in parts a) and b).\\[2mm] {\bf Remaining cases:} $\alpha>\upsilon_\lambda$. By monotonicity and continuity in conjunction with the i.h.\ it follows that $\alpha$ is the supremum of ordinals that are either $\le_1$-minimal as claimed for $\alpha$ or have the same greatest $<_1$-predecessor as claimed for $\alpha$, whence, unless $\delta=0$, $\delta$ is the greatest $<_1$-predecessor of $\alpha$. This shows part a). We now turn to the proof of part b). If $\delta=0$, i.e.\ $\alpha$ is $\le_1$-minimal, we are done. We therefore assume that $\delta>0$ from now on. Let $\gamma$ be the ordinal claimed to be equal to $\operatorname{pred}_2(\alpha)$. We first show that $\alpha$ is $\gamma$-$\le_2$-minimal, meaning that we show $\le_2$-minimality in case of $\gamma=0$. Arguing towards contradiction, let us assume that there exists $\gammastar$ such that $\gamma<\gammastar<_2\alpha$. Then clearly $\gammastar\le_2\delta$, and due to Lemma \ref{ktwoinflochainlem} we know that $m_{n-1}>1$ in case of $\gammastar=\delta$ and $n>1$. Applying the i.h.\ to $\delta$ we see that $\le_2$-predecessors of $\delta$ either \begin{enumerate} \item have a tracking chain of the form $\alphavec_{\restriction_{i,j+1}}$ where $(i,j)\in\mathrm{dom}(\alphavec)$, $j<m_i$, and $i<n$, or \item are of the form $\upsilon_{\lambdapr}$ where $\lambdapr\in\mathrm{Lim}$, $\lambdapr\le\lambda$, or \item are of the form $\upsilon_{\lambdapr+{t^\prime}}$ where $\lambdapr\in\mathrm{Lim}_0$ and ${t^\prime}\in(1,\omega)$ such that $\lambdapr+{t^\prime}<\lambda$. \end{enumerate} It follows that $\gammastar$ must be of one of the forms just described.
We define \[{\tau^\star}:=\left\{\begin{array}{rl} \taucp{i,j} & \mbox{ if }\gammastar \mbox{ is of the form 1}\\[2mm] \upsilon_{\lambdapr} & \mbox{ if }\gammastar \mbox{ is of the form 2}\\[2mm] \upsilon_{\lambdapr+{t^\prime}\minusp 1} & \mbox{ if }\gammastar \mbox{ is of the form 3,} \end{array}\right.\] then ${\tau^\star}$ is the basis of $\gammastar$, and by the assumption $\gamma<\gammastar$ we must have $\taucp{n,1}\in[{\tau_n^\star},{\tau^\star})$. Let $\zeta$ be defined by $\ordcp{i,j}(\alphavec)$ if $\gammastar$ is of the form 1, by $\upsilon_\nu$ for the least $\nu\in(1,\lambdapr)$ such that $\taucp{n,1}<\upsilon_\nu$ if $\gammastar$ is of the form 2, and by $\upsilon_{\lambdapr+{t^\prime}\minusp 1}$ if $\gammastar$ is of the form 3. In case of $\taucp{n,1}<\alphacp{n,1}$ let $\eta$ be such that $\alphacp{n,1}=_{\mathrm{NF}}\eta+\taucp{n,1}$, otherwise set $\eta:=0$, and define \[\beta:=\left\{\begin{array}{rl} \kappa^{\vec{\varsigma}}_\eta+\mathrm{dp}_{\vec{\varsigma}}(\eta) & \mbox{ if }n=1\\[2mm] \delta+\kappa^{\vec{\varsigma}}_\eta+\mathrm{dp}_{\vec{\varsigma}}(\eta) & \mbox{ otherwise,} \end{array}\right.\] so that $\beta+\tilde{\tau}_{n,1}=\alpha$. Set \[{\vec{\varsigma}}^\star:={\mathrm{rs}}_{n^\star}(\alphavec),\] and note that $\kappa^{{\vec{\varsigma}}^\star}_{\taucp{n,1}}=\tilde{\tau}_{n,1}$ according to part 1 of Lemma \ref{evallem}. Applying Claim \ref{relleominclaim} of the i.h.\ to the ordinal $\zeta+\kappa^{{\vec{\varsigma}}^\star}_{\taucp{n,1}+1}$ we obtain finite sets $X\subseteq\zeta+1$ and $Z\subseteq(\zeta,\zeta+\kappa^{{\vec{\varsigma}}^\star}_{\taucp{n,1}+1})$ such that there is no cover $X\cup\tilde{Z}$ of $X\cup Z$ with $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\zeta+\tilde{\tau}_{n,1}+\mathrm{dp}_{{\vec{\varsigma}}^\star}(\taucp{n,1})$. According to the i.h., there are copies $\tilde{Z}_{\gammastar}$ of $Z$ cofinally below $\gammastar$ such that $X\cup Z\cong X\cup\tilde{Z}_{\gammastar}$.
By Lemma \ref{letwoupwlem} and our assumption $\gammastar<_2\alpha$ we now obtain copies $\tilde{Z}_\alpha$ of $Z$ cofinally below $\alpha$ (and hence above $\beta$) such that $X\cup Z\cong X\cup\tilde{Z}_\alpha$. The i.h.\ reassures us of the isomorphism \[\zeta+1+\tilde{\tau}_{n,1}\quad\cong\quad\zeta+1\cup(\beta,\alpha).\] This provides us, however, with a copy $\tilde{Z}\subseteq(\zeta,\zeta+\tilde{\tau}_{n,1})$ of $Z$ such that $X\cup Z\cong X\cup\tilde{Z}$, contradicting our choice of $X$ and $Z$, whence $\gammastar<_2\alpha$ is impossible. The ordinal $\alpha$ is therefore $\gamma$-$\le_2$-minimal. We have to show that $\operatorname{pred}_2(\alpha)=\gamma$, where $\gamma$ is the ordinal according to the claim in part b). From now on let us assume that ${\tau_n^\star}>1$ and set $(i,j):=n^\star$, since ${\tau_n^\star}=1$ immediately entails $\gamma=0=\operatorname{pred}_2(\alpha)$. Having shown that $\alpha$ is $\gamma$-$\le_2$-minimal, the next step is to verify that $\gamma<_2\alpha$. In the situation ${\tau_n^\star}<\taucp{n,1}$ the ordinal $\alpha$ is a limit of $<_2$-successors of $\gamma$ (the greatest $<_2$-predecessor of which is $\gamma$). This follows from the i.h., noticing that $\alphacp{n,1}$ is a limit of indices which are successor multiples of ${\tau_n^\star}$. It therefore remains to consider the situation \[{\tau_n^\star}=\taucp{n,1}.\] Here we show $\gamma<_2\alpha$ using Proposition \ref{letwocriterion}. To this end let $X\subseteq_\mathrm{fin}\gamma$ and $Y\subseteq_\mathrm{fin}[\gamma,\alpha)$ be given. Without loss of generality we may assume that $\gamma\in Y$.
Set $\tau:={\tau_n^\star}$ and \[(i_0,j_0):=\left\{\begin{array}{ll} (i,j+1) & \mbox{ if }(i,j)=n^\star\in\mathrm{dom}(\alphavec)\\[2mm] (1,0) & \mbox{ otherwise.} \end{array}\right.\] We now check whether there is a $<_\mathrm{\scriptscriptstyle{lex}}$-maximal index pair $(k,l)>_\mathrm{\scriptscriptstyle{lex}}(i_0,j_0)$ after which $\alphavec$ continues with a sub-maximal index: let $(k,l)$ be the $<_\mathrm{\scriptscriptstyle{lex}}$-maximal index pair in $\mathrm{dom}(\alphavec)$ such that $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(k,l)<_\mathrm{\scriptscriptstyle{lex}}(n,1)$ and $\alphacp{k+1,1}<\rho_k\minusp1$ in case of $(k,l)^+=(k+1,1)$, i.e.\ $l=m_k$, whereas $\taucp{k,l}<\rho_k(\alphavecrestrarg{(k,l)})\minusp1$ in case of $(k,l)^+=(k,l+1)$, i.e.\ $l<m_k$, if such $(k,l)$ exists, and $(k,l):=(i_0,j_0)$ otherwise. We observe that \begin{enumerate} \item $\alphacp{u,v+1}=\mu_\taucp{u,v}$ whenever $(u,v), (u,v+1)\in\mathrm{dom}(\alphavec)$ and $(k,l)<_\mathrm{\scriptscriptstyle{lex}}(u,v+1)$, and \item $\alphavec=\operatorname{me}(\alphavecrestrarg{(k,l)^+}),$ \end{enumerate} which is seen as follows. Assuming the existence of a lexicographically maximal $(u,v+1)$ violating property 1 of $(k,l)$, we can neither have $\chi^\taucp{u,v}(\taucp{u,v+1})=0$, as this would be in conflict with the maximality of $(k,l)$, nor can we have $\chi^\taucp{u,v}(\taucp{u,v+1})=1$, since $\alphavec=\operatorname{me}(\alphavecrestrarg{u,v+1})$ according to the definitions of $(k,l)$ and $\operatorname{me}$, while ${\tau_n^\star}=\taucp{n,1}<\taucp{u,v}$ by assumption and the definition of ${\tau_n^\star}$, which is in conflict with Lemma \ref{cmlmaxextcor}. Thus properties 1 and 2 follow hand in hand. In case of $\taucp{k,l}<\alphacp{k,l}$ let $\eta$ be such that $\alphacp{k,l}=_{\mathrm{NF}}\eta+\taucp{k,l}$, otherwise $\eta:=0$.
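{\small As a schematic illustration of this additive normal form decomposition, with hypothetical values not taken from the construction at hand: for $\alphacp{k,l}=\omega^2+\omega\cdot 3$ and $\taucp{k,l}=\omega$ we obtain $\eta=\omega^2+\omega\cdot 2$, since $(\omega^2+\omega\cdot 2)+\omega=\omega^2+\omega\cdot 3$ and this sum is in normal form.}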
We set \[\beta:=\ordcp{k,l}(\alphavec)\quad\mbox{ and }\quad\vec{\varsigma}':={\mathrm{ers}}_{k,l}(\alphavec).\] For the reader's convenience we are going to discuss the following cases in full detail. Subcase 1.2.1.2 below treats the situation in which a genuinely larger $\le_2$-connectivity component arises. Subcase 1.2.2.2.2 is a correction of the corresponding subcase in \cite{CWc}. Subcase 1.2.3 is new due to the extended claim of the theorem. \medskip {\small Simple examples involve the least $<_2$-pair $\varepsilon_0\cdot\omega<_2\varepsilon_0\cdot(\omega+1)$ in the example ${\mathrm{tc}}(\varepsilon_0\cdot(\omega+1))=((\varepsilon_0,\omega),(\varepsilon_0))$ for Subcase 1.2.1.2, and ${\mathrm{tc}}(\varphi_{2,0}\cdot(\omega^2+1))=((\varphi_{2,0},\omega^2),(\varphi_{2,0}))$ for Subcase 1.2.1.1; further instructive examples for these subcases are ${\mathrm{tc}}(\Gamma_0^2+\Gamma_0)=((\Gamma_0,\Gamma_0),(\Gamma_0))$ (Subcase 1.2.1.1) and ${\mathrm{tc}}(\Gamma_0^2\cdot(\omega+1)+\Gamma_0)=((\Gamma_0,\Gamma_0\cdot\omega),(\Gamma_0^2),(\Gamma_0))$ (Subcase 1.2.1.2) for the pattern characterizing $\Gamma_0$, namely $\Gamma_0^2\cdot\omega<_2\Gamma_0^2\cdot(\omega+1)+\Gamma_0$ with the inner chain $\Gamma_0^2\cdot\omega<_2\Gamma_0^2\cdot(\omega+1)<_1\Gamma_0^2\cdot(\omega+1)+\Gamma_0$. Note that ${\mathrm{tc}}(\Gamma_0^2\cdot2+\Gamma_0)=((\Gamma_0,\Gamma_0+1))$ and not $((\Gamma_0,\Gamma_0),(\Gamma_0^2),(\Gamma_0))$, which is not a tracking chain, since $\Gamma_0^2\cdot(2k+1)<_2\Gamma_0^2\cdot(2k+2)$, but $\Gamma_0^2\cdot(2k+1)\not<_2\Gamma_0^2\cdot(2k+2)+\Gamma_0$ for all $k<\omega$. These latter examples also illustrate Case 2.1, in particular Subcase 2.1.2 with the application of Claim \ref{relletwominclaim} for ${\mathrm{tc}}(\Gamma_0^2\cdot(2k+2)+\Gamma_0)=((\Gamma_0,\Gamma_0\cdot(k+1)+1))$.
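As a sanity check of the evaluation in the first of these examples, using only that $\mathrm{o}$ inverts ${\mathrm{tc}}$ together with left distributivity of ordinal multiplication: \[\mathrm{o}(((\varepsilon_0,\omega),(\varepsilon_0)))=\varepsilon_0\cdot(\omega+1)=\varepsilon_0\cdot\omega+\varepsilon_0.\]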
An example for Subcase 1.2.2.1 is $\alpha=\tau^\omega+\tau^2\cdot\omega+\tau$, ${\mathrm{tc}}(\alpha)=((\tau,\tau^\omega),(\tau^2\cdot\omega),(\tau))$, where $\tau:=\varthetanod(\Omega^2\cdot\omega)=\varthetanod(\varthetae(\varthetae(0)+1))$ and $\beta=\tau^\omega+\tau^2\cdot\omega$. Note that ${\operatorname{lh}}(\tau)={\operatorname{lh}}(\beta)=\beta+\tau+1$. An example for Subcase 1.2.2.2.1 is $\alpha=\beta+\tau$ where $\tau={\operatorname{BH}}=\varthetanod(\varthetae(\vartheta_2(0)))$, $\sigma=\varepsilon_{{\operatorname{BH}}+1}$, $\beta=\sigma\cdot\omega$, and ${\mathrm{tc}}(\alpha)=((\tau,\sigma,\omega),(\tau))$. Note that ${\operatorname{lh}}(\tau)=\beta+\sigma=\sigma\cdot(\omega+1)$, completing the least $<_2$-chain of three ordinals. An easy example for Subcase 1.2.2.2.2 is $\alpha=\beta+\sigma\cdot\tau+\tau$ where $\tau:=|\mathrm{ID}_2|=\varthetanod(\varthetae(\vartheta_2(\vartheta_3(0))))$, $\sigma={\operatorname{BH}}_{\tau+1}=\varthetat(\varthetae(\vartheta_2(0)))$, and $\beta=\varepsilon_{\sigma+1}$, so that ${\mathrm{tc}}(\alpha)=((\tau,\sigma,\beta),(\sigma\cdot\tau),(\tau))$. Note that ${\operatorname{lh}}(\tau)=\beta\cdot(\omega+1)$, completing the least $<_2$-chain of four ordinals. The simplest example for Subcase 1.2.3.1 is $\upsilon_2<_2\alpha=\upsilon_\omega+\upsilon_1$, ${\mathrm{tc}}(\alpha)=((\upsilon_\omega+\upsilon_1))$, and for Subcase 1.2.3.2 it is $\upsilon_\omega<_2\alpha=\upsilon_\omega\cdot2$, ${\mathrm{tc}}(\alpha)=((\upsilon_\omega\cdot2))$. } \\[2mm] {\bf Subcase 1.2.1:} $(k,l)=(i_0,j_0)=(i,j+1)$ where $(i,j)=n^\star\in\mathrm{dom}(\alphavec)$. In this case we have $\beta=\gamma$. Let $\varrho:=\alphacp{i+1,1}$ if $(i,j+1)^+=(i+1,1)$ and $\varrho:=\taucp{i,j+1}$ (which then is an epsilon number greater than $\tau$) otherwise.
Lemma \ref{chiinvlem} allows us to conclude $\chi^\tau(\varrho)=1$, since maximal extension takes us to a successor multiple of $\tau$ at $(n,1)$, and as verified by Lemma \ref{tcevalestimlem} we have \[\alpha=\gamma+\kappa^{\vec{\varsigma}'}_\varrho+\mathrm{dp}_{\vec{\varsigma}'}(\varrho).\] Let $\lambdapr\in\mathrm{Lim}_0$ and $q<\omega$ be such that ${\mathrm{logend}}(\alphacp{i,j+1})=\lambdapr+q$, whence by definition \[\alphacp{i,j+1}=\eta+\omega^{\lambdapr+q}\quad\mbox{ and }\quad\rhoargs{\tau}{\alphacp{i,j+1}}=\tau\cdot(\lambdapr+q\minusp\chi^\tau(\lambdapr)).\] It follows from $\chi^\tau(\varrho)=1$ and $\varrho\le\rhoargs{\tau}{\alphacp{i,j+1}}$ that $\varrho$ must have the form \[\varrho=\tau\cdot\xi\mbox{ for some }\xi\in(0,\lambdapr+q\minusp\chi^\tau(\lambdapr)],\] where $\lambdapr+q>0$.\\[2mm] {\bf 1.2.1.1:} $\varrho<\rhoargs{\tau}{\alphacp{i,j+1}}$. In this case we are going to check that $\alphacp{i,j+1}$ is a supremum of indices $\eta+\nu$ such that $\varrho\le\rhoargs{\tau}{\eta+\nu}$ and $\chi^\tau(\nu)=0$. Indeed, inspecting all cases we have \[\alphacp{i,j+1}=\sup\{\eta+\nu\mid\nu\in E\}\] where \[E:=\left\{\begin{array}{l@{\quad}l} \{\omega^{\zeta+k}\mid k\in(0,\omega),\:\zeta\in\mathrm{Lim}_0\cap\lambdapr,\mbox{ and }\zeta+k\minusp\chi^\tau(\zeta)\ge\xi\}&\mbox{ if }q=0\\[2mm] \{\omega^\lambdapr\cdot r+\omega^{\zeta+k}\mid k,r\in(0,\omega),\:\zeta\in\mathrm{Lim}_0\cap\lambdapr,\mbox{ and }\zeta+k\minusp\chi^\tau(\zeta)\ge\xi\}&\mbox{ if } \chi^\tau(\lambdapr)=1\:\&\: q=1\\[2mm] \{\omega^{\lambdapr+q-1}\cdot r\mid r\in(0,\omega)\}&\mbox{ if }\chi^\tau(\lambdapr)=0\:\&\: q>0\mbox{ or }\\ &\quad\;\chi^\tau(\lambdapr)=1\:\&\: q>1.
\end{array}\right.\] According to the definition we have \[\rhoargs{\tau}{\alphacp{i,j+1}}=\left\{\begin{array}{l@{\quad}l} \tau\cdot\lambdapr&\mbox{ if either }q=0\mbox{ or }\chi^\tau(\lambdapr)=1\:\&\: q=1\mbox{ (note: $\lambdapr>\xi$)}\\[2mm] \tau\cdot(\lambdapr+q)&\mbox{ if }\chi^\tau(\lambdapr)=0\:\&\: q>0\\[2mm] \tau\cdot(\lambdapr+q-1)&\mbox{ if }\chi^\tau(\lambdapr)=1\:\&\: q>1, \end{array}\right.\] and obtain for $\nu\in E$ in the respective cases of the definition of $E$ \[\rhoargs{\tau}{\eta+\nu}=\left\{\begin{array}{l@{\quad}l} \tau\cdot(\zeta+k\minusp\chi^\tau(\zeta))&\mbox{ if either }q=0\mbox{ or }\chi^\tau(\lambdapr)=1\:\&\: q=1\mbox{ (note: $\lambdapr>\xi$)}\\[2mm] \tau\cdot(\lambdapr+q-1)&\mbox{ if }\chi^\tau(\lambdapr)=0\:\&\: q>0\\[2mm] \tau\cdot(\lambdapr+q-2)&\mbox{ if }\chi^\tau(\lambdapr)=1\:\&\: q>1. \end{array}\right.\] Now it is easy to see that $\rhoargs{\tau}{\eta+\nu}\ge\varrho$, since $\varrho=\tau\cdot\xi<\rhoargs{\tau}{\alphacp{i,j+1}}$ according to the assumption of this subcase. By the i.h.\ we have \[\gamma_\nu:=\mathrm{o}(\alphavecrestrarg{(i,j+1)}[\eta+\nu])<_2\alpha_\nu:=\gamma_\nu+\kappa^{\vec{\varsigma}'}_\varrho+\mathrm{dp}_{\vec{\varsigma}'}(\varrho)\] and \begin{equation}\label{congalnu} \alpha_\nu\quad\cong\quad\gamma_\nu\cup[\gamma,\alpha) \end{equation} for the $\nu$ specified above. Choose $\nu$ from $E$ large enough so that $X\subseteq\gamma_\nu$ and let $Y_\nu\subseteq[\gamma_\nu,\alpha_\nu)$ be the isomorphic copy of $Y$ according to isomorphism (\ref{congalnu}). By the i.h.\ we obtain a copy $\tilde{Y}\subseteq\gamma_\nu$ according to the criterion given by Proposition \ref{letwocriterion}. Let $\tilde{Y}^+$ with $\tilde{Y}\subseteq \tilde{Y}^+\subseteq\gamma$ be given, and set $U:=X\cup(\tilde{Y}^+\cap\gamma_\nu)$ and $V:=\tilde{Y}^+\setminus\gamma_\nu$.
Since by the i.h.\ clearly $\gamma_\nu<_1\gamma$, we obtain a copy $\tilde{V}$ such that $U<\tilde{V}\subseteq\gamma_\nu$ and $U\cup\tilde{V}\cong U\cup V$. Setting $\tilde{Y}^+_\nu:=(\tilde{Y}^+\cap\gamma_\nu)\cup\tilde{V}$, hence $\tilde{Y}\subseteq\tilde{Y}^+_\nu\subseteq\gamma_\nu$, the criterion yields an appropriate extension $Y^+_\nu\subseteq\alpha_\nu$ such that $X\cup\tilde{Y}^+_\nu\cong X\cup Y^+_\nu$ extends $X\cup\tilde{Y}\cong X\cup Y_\nu$. Now let $Y^+$ be the isomorphic copy of $Y^+_\nu$ according to (\ref{congalnu}). This provides us with the extension of $Y$ according to $\tilde{Y}^+$ as required by Proposition \ref{letwocriterion}. \\[2mm] {\bf 1.2.1.2:} $\varrho=\rhoargs{\tau}{\alphacp{i,j+1}}$. Recall that we have $\chi^\tau(\varrho)=1$; this implies $(i,j+1)^+=(i+1,1)$ by condition 2 of Definition \ref{trackingchaindefi} and Lemma \ref{cmlmaxextcor}, which also shows that here the case $q=0$ does not occur, since otherwise it would follow, invoking Lemma \ref{chiinvlem}, that $\chi^\tau(\alphacp{i,j+1})=1$, whence $\operatorname{cml}(\alpha)=(i,j)$ and $\alphavec\not\in\mathrm{TC}$. We now have \[\alphacp{i,j+1}=\sup\set{\eta+\omega^{\lambdapr+q-1}\cdot r}{r\in(0,\omega)},\] and \begin{eqnarray*} \rhoargs{\tau}{\alphacp{i,j+1}} &=&\left\{\begin{array}{l@{\quad}l} \tau\cdot(\lambdapr+q) &\mbox{ if }\chi^\tau(\lambdapr)=0\\[2mm] \tau\cdot(\lambdapr+q-1)&\mbox{ if }\chi^\tau(\lambdapr)=1 \end{array}\right.\\[2mm] &=&\left\{\begin{array}{l@{\quad}l} \rhoargs{\tau}{\eta+\omega^\lambdapr\cdot r}&\mbox{ if }\chi^\tau(\lambdapr)=1\mbox{ and }q=1\\[2mm] \rhoargs{\tau}{\eta+\omega^{\lambdapr+q-1}\cdot r}+\tau&\mbox{ otherwise} \end{array}\right.\\[2mm] &=&\varrho. \end{eqnarray*} Let $r\in(0,\omega)$ be large enough so that, setting $\nu:=\omega^{\lambdapr+q-1}\cdot r$ and $\gamma_\nu:=\mathrm{o}(\alphavecrestrarg{(i,j+1)}[\eta+\nu])$, we obtain $X\subseteq\gamma_\nu$.
Setting $\alpha_\nu:=\mathrm{o}(\alphavecrestrarg{(i,j+1)}[\eta+\nu+1])$, by Lemma \ref{tcevalestimlem} we obtain \[\alpha_\nu=\left\{\begin{array}{l@{\quad}l} \gamma_\nu+\kappa^{\vec{\varsigma}'}_{\varrho}+\mathrm{dp}_{\vec{\varsigma}'}(\varrho)&\mbox{ if }\chi^\tau(\lambdapr)=1\mbox{ and }q=1\\[2mm] \gamma_\nu+\kappa^{\vec{\varsigma}'}_{\rhoargs{\tau}{\eta+\nu}}+\mathrm{dp}_{\vec{\varsigma}'}(\rhoargs{\tau}{\eta+\nu})+\tilde{\tau}= \gamma_\nu+\kappa^{\vec{\varsigma}'}_{\varrho}&\mbox{ otherwise.} \end{array}\right.\] Now the i.h.\ yields \begin{equation}\label{sndcongalnu} \alpha_\nu\quad\cong\quad\gamma_\nu\cup[\gamma,\alpha), \end{equation} and we choose $\tilde{Y}$ to be the isomorphic copy of $Y$ under this isomorphism. Let $\tilde{Y}^+$ with $\tilde{Y}\subseteq\tilde{Y}^+\subseteq\gamma$ be given. Let $U:=X\cup(\tilde{Y}^+\cap\alpha_\nu)$ and $V:=\tilde{Y}^+\setminus\alpha_\nu$. Since by the i.h.\ we have $\alpha_\nu<_1\gamma$, there exists $\tilde{V}$ with $U<\tilde{V}<\alpha_\nu$ and $U\cup\tilde{V}\cong U\cup V$. Now let $Y^+$ be the copy of $(\tilde{Y}^+\cap\alpha_\nu)\cup\tilde{V}$ under (\ref{sndcongalnu}). This choice satisfies the requirements of Proposition \ref{letwocriterion}. \\[2mm] {\bf Subcase 1.2.2:} $(i_0,j_0)<_\mathrm{\scriptscriptstyle{lex}}(k,l)$. We argue similarly as in Subcases 1.1.1 and 1.1.2 above. \\[2mm] {\bf 1.2.2.1:} $l=1$. This subcase corresponds to Subcase 1.1.1. Here we can only have $(k,l)^+=(k+1,1)$ and $\alphacp{k+1,1}<\rho_k\minusp1=\log((1/{\tau_k^\star})\cdot\taucp{k,1})$, due to the maximality and property 1 of $(k,l)$. We see that $\alphacp{k,1}$ is a limit of ordinals $\eta+\nu<\alphacp{k,1}$ such that ${\tau_k^\star}<{\mathrm{end}}(\nu)<\taucp{k,1}$ and $\log((1/{\tau_k^\star})\cdot{\mathrm{end}}(\nu))\ge\alphacp{k+1,1}$, and choosing $\nu$ large enough we may assume that $Y\cap\beta\subseteq\mathrm{o}(\alphavecrestrarg{(k,1)}[\eta+\nu])=:\beta_\nu$.
Using the i.h.\ and setting $\alpha_\nu:=\beta_\nu+\kappa^{\vec{\varsigma}'}_{\alphacp{k+1,1}}+\mathrm{dp}_{\vec{\varsigma}'}(\alphacp{k+1,1})$ we now obtain the isomorphism \begin{equation}\label{thrdcongalnu} \alpha_\nu\quad\cong\quad\beta_\nu\cup[\beta,\alpha) \end{equation} via a mapping of the corresponding tracking chains defined similarly as in Subcase 1.1.1. In fact, since $\gamma<_2\alpha_\nu$ by the i.h., proving that $\gamma<_2\alpha$ shows that this isomorphism extends to the suprema, that is, mapping $\alpha_\nu$ to $\alpha$. Exploiting (\ref{thrdcongalnu}) and using that the criterion holds for $\gamma,\alpha_\nu$, we can now straightforwardly show that the criterion holds for $\gamma,\alpha$. \\[2mm] {\bf 1.2.2.2:} $l>1$. Here we proceed in parallel with Subcase 1.1.2. Let $\xi:=\alphacp{k+1,1}$ in case of $(k,l)^+=(k+1,1)$ and $\xi:=\taucp{k,l}$ otherwise, whence according to property 2 of $(k,l)$ and Lemma \ref{tcevalestimlem} \[\alpha=\beta+\kappa^{\vec{\varsigma}'}_\xi+\mathrm{dp}_{\vec{\varsigma}'}(\xi).\] Let further $\sigma:=\taucp{k,l-1}$ and $\sigma^\prime:=\taucppr{k,l-1}$. In the case $\alphacp{k,l}\in\mathrm{Lim}$ let $\alphapr\in(\eta,\alphacp{k,l})$ be a successor ordinal large enough so that, setting $\alphastar:=\mathrm{o}(\alphastarvec)$ where $\alphastarvec:=\alphavecrestrarg{(k,l)}[\alphapr]$, we have \[Y\cap[\alphastar,\beta)=\emptyset;\] otherwise let $\alphapr:=\alphacp{k,l}\minusp1$ and define $\alphastar$ and $\alphastarvec$ as above. Notice that we have $\rho_{k}(\alphavecrestrarg{(k,l)})\ge\sigma$ and $\xi<\lambda_\sigma$. We consider two cases regarding $\xi$.\\[2mm] {\bf 1.2.2.2.1:} $\xi<\sigma$. In the special case where $\chi^\sigma(\alphapr)=1$ consider $\alphavecpr:=\operatorname{me}(\alphastarvec)$.
Using Lemma \ref{cmlmaxextcor} and part 7(c) of Lemma \ref{tcevalestimlem} we know that $\operatorname{ec}(\alphavecpr)$ exists and is of the form $\sigma\cdot(\zeta+1)$ for some $\zeta$, as well as that the maximal extension of $\alphastarvec$ to $\alphavecpr$ does not add epsilon bases between $\sigma^\prime$ and $\sigma$. In the cases where $\chi^\sigma(\alphapr)=0$ we set $\alphavecpr:=\alphastarvec$. Clearly, $\sigma$ is a limit of ordinals $\rho$ such that $\log((1/\sigma^\prime)\cdot{\mathrm{end}}(\rho))=\xi+1$, which guarantees that ${\mathrm{end}}(\rho)>\sigma^\prime$, and $\rho$ can be chosen large enough so that, setting \[\nu:=\left\{\begin{array}{l@{\quad}l} \sigma\cdot\zeta+\rho&\mbox{ if }\chi^\sigma(\alphapr)=1\\[2mm] \rhoargs{\sigma}{\alphapr}+\rho&\mbox{ if }\alphapr\in\mathrm{Lim}\:\&\:\chi^\sigma(\alphapr)=0\\[2mm] \rho&\mbox{ otherwise,} \end{array}\right.\] we obtain, setting $\beta_\nu:=\mathrm{o}(\alphavecpr^\frown(\nu))$, $Y\cap[\beta_\nu,\beta)=\emptyset$. Observe that by the i.h.\ $\beta_\nu$ and $\beta$ then have the same $<_2$-predecessors and the same $<_1$-predecessors below $\beta_\nu$. The i.h.\ shows that \[\alpha_\nu:=\beta_\nu+\kappa^{\vec{\varsigma}'}_\xi+\mathrm{dp}_{\vec{\varsigma}'}(\xi)\quad\cong\quad\beta_\nu\cup[\beta,\alpha)\quad\mbox{ and }\quad\gamma<_2\alpha_\nu,\] which we can exploit to show that the criterion given by Proposition \ref{letwocriterion} holds for $\gamma,\alpha$ from its validity for $\gamma,\alpha_\nu$, implying that the above isomorphism extends to mapping $\alpha_\nu$ to $\alpha$.\\[2mm] {\bf 1.2.2.2.2:} $\xi\ge\sigma$. Then we consequently have $\alphacp{k,l}\in\mathrm{Lim}$, hence $\taucp{k,l}>1$, $\alphapr$ is a successor ordinal, $\sigma<\lambda_\sigma$, and thus $\sigma\in\mathrm{Lim}({\mathbb E})$. We proceed as in Subcase 1.1.2.2 in order to choose an epsilon number $\rho\in({\bar{\sigma}},\sigma)$ suitable for base transformation.
Clearly, $\taucp{k,l}$ takes the role of the ordinal $\tau$ in Subcase 1.1.2.2, and the role of $\delta$ there is taken here by $\beta$. Consequently, $\deltati$ there will become $\beta_\rho$ here, as defined later. Parameters from $Y$ are treated as follows. Note that for any $y\in Y^{>\alphastar}$ the tracking chain ${\mathrm{tc}}(y)$ is an extension of ${\mathrm{tc}}(\beta)$ and is of the form \[{\mathrm{tc}}(y)=\alphavecrestrarg{k-1}^\frown(\alphacp{k,1},\ldots,\alphacp{k,l},\zeta^y_{0,1},\ldots,\zeta^y_{0,k_0(y)})^\frown\vec{\zeta}^y\] where $k_0(y)\ge 0$, $\vec{\zeta}^y=(\vec{\zeta}^y_1,\ldots,\vec{\zeta}^y_{r(y)})$, $r(y)\ge 0$, and $\vec{\zeta}^y_u=(\zeta^y_{u,1},\ldots,\zeta^y_{u,k_u(y)})$ with $k_u(y)\ge 1$ for $u=1,\ldots,r(y)$. Notice that $k_0(y)>0$ implies that $\taucp{k,l}\in{\mathbb E}^{>\sigma}$ and $\xi\ge\taucp{k,l}$. We now define $r_0(y)\in\{1,\ldots,r(y)\}$ to be minimal such that ${\mathrm{end}}(\zeta^y_{r_0(y),1})<\sigma$ if such an index exists, and $r_0(y):=r(y)+1$ otherwise. For convenience let $\zeta^y_{r(y)+1,1}:=0$. Using Lemma \ref{cofinlem} we may choose an epsilon number $\rho\in({\bar{\sigma}},\sigma)$ satisfying $\taucp{k,l},\xi\in\mathrm{T}^{\sigma,\rho}$ and $\lambda_\rho\ge\pi(\xi)$, where $\pi:=\pi_{\rho,\sigma}$, large enough so that \[\zeta^y_{r_0(y),1}, \zeta^y_{u,v}\in\mathrm{T}^{\sigma,\rho}\] for every $y\in Y^{>\alphastar}$, every $u\in[0,r_0(y))$, and every $v\in\{1,\ldots,k_u(y)\}$. We may now map $\beta$ to $\beta_\rho:=\mathrm{o}(\betavec_\rho)$ where \[\betavec_\rho:=\alphastarvec^\frown(\rho,\pi(\taucp{k,l})),\] easily verifying by the i.h.\ that $\beta$ and $\beta_\rho$ have the same $<_2$-predecessors in ${\mathrm{Ord}}$ and the same $<_1$-predecessors in $X\cup(Y\cap\beta_\rho)$.
Setting $\tilde{\vec{\varsigma}}:={\mathrm{ers}}_{n,2}(\betavec_\rho)$, define \[\alpha_\rho:=\beta_\rho+\kappa^{\tilde{\vec{\varsigma}}}_{\pi(\xi)}+\mathrm{dp}_{\tilde{\vec{\varsigma}}}(\pi(\xi)).\] In the same way as in Subcase 1.1.2.2 we can now define the embedding \[t: \alpha_\rho\hookrightarrow\alpha\] which fixes ordinals $\le\alphastar$, so that by the i.h.\ \[\alpha_\rho\cong\mathrm{Im}(t)\quad\mbox{ and }\quad\gamma<_2\alpha_\rho.\] By our choice of $\alphastar$ we have $X\cup(Y\cap\beta)\subseteq\alphastar$, and by our choice of $\rho$ we have $Y\subseteq\mathrm{Im}(t)$, hence $t^{-1}$ copies $Y^{>\alphastar}$ into $[\beta_\rho,\alpha_\rho)$, and applying the mapping $t$ we can now derive the validity of the criterion given by Proposition \ref{letwocriterion} for $\gamma,\alpha$ from its validity for $\gamma,\alpha_\rho$, which implies that the above isomorphism extends to mapping $\alpha_\rho$ to $\alpha$. \\[2mm] {\bf Subcase 1.2.3:} $(k,l)=(1,0)$. According to our assumptions, in particular $(i_0,j_0)=(1,0)$, $\taucp{n,1}={\tau_n^\star}>1$, and since the Special Case $\alpha=\upsilon_\lambda$ has been considered already, we have $(k,l)^+=(1,1)$, $\lambda\in\mathrm{Lim}$, $\alphacp{1,1}\in(\upsilon_\lambda,\upsilon_{\lambda+1})$, $\alphavec=\operatorname{me}(\alphavecrestrarg{1,1})$ according to property 2 of $(k,l)$, and \[\gamma=\left\{\begin{array}{ll} \upsilon_\lambda & \mbox{ if }{\tau_n^\star}=\upsilon_\lambda\\ \upsilon_{\lambda_j} & \mbox{ if }{\tau_n^\star}=\upsilon_{\lambda_j}\mbox{ for some }j\in\{1,\ldots,p\}\mbox{ where }\lambda_j\in\mathrm{Lim}\\ \upsilon_{\lambda_j+t_j+1} & \mbox{ if }{\tau_n^\star}=\upsilon_{\lambda_j+t_j}\mbox{ for some }j\in\{1,\ldots,p\}\mbox{ where }\lambda_j\in\mathrm{Lim}_0\mbox{ and }t_j>0. \end{array}\right.\] {\bf 1.2.3.1:} $\gamma<\upsilon_\lambda$. Here we argue as in Subcase 1.2.2, as $\alphacp{1,1}$ is clearly a submaximal index.
Cofinally in $\lambda$ we find $\rho$ such that $Y\cap[\upsilon_\rho,\upsilon_\lambda)=\emptyset$ and all parameters below $\upsilon_\lambda$ from components of tracking chains of elements of $Y\setminus\upsilon_\lambda$ and of $\alpha=\mathrm{o}(\operatorname{me}(\alphavec_{\restriction_{1,1}}))$ are contained in $\upsilon_\rho$. Applying the base transformation $\pi:=\pi_{\upsilon_\rho,\upsilon_\lambda}$ to the elements of $Y\setminus\upsilon_\lambda$ then results in an isomorphic copy of $X\cup Y$ below $\alpha_\rho:=\pi(\alpha)$, which itself satisfies $\gamma<_2\alpha_\rho$ by the i.h., so that again we can derive the validity of the criterion for $\gamma<_2\alpha$ and $X\cup Y$ from its validity for $\gamma<_2\alpha_\rho$ using the embedding from $\alpha_\rho$ into $\upsilon_\rho\cup[\upsilon_\lambda,\alpha)$ via the inverted base transformation $\pi^{-1}$.\\[2mm] {\bf 1.2.3.2:} $\gamma=\upsilon_\lambda$. Setting $\xi:=\alphacp{1,1}$ we have \[\alpha=\kappa_\xi+\mathrm{dp}(\xi)\mbox{, }\quad\quad\alphavec=\operatorname{me}(((\xi)))\mbox{, }\quad\mbox{ and }\quad\chi^{\upsilon_\lambda}(\xi)=1,\] invoking again Lemma \ref{chiinvlem}. We again choose a sufficiently large $\rho<\lambda$, where now $\rho=\lambdapr+{t^\prime}+2$ for suitable $\lambdapr\in\mathrm{Lim}_0$ and ${t^\prime}<\omega$, such that $X<\upsilon_\rho$ and all parameters below $\upsilon_\lambda$ of $\alpha$ (equivalently, of $\xi$) and of all components of tracking chains of the elements of $Y$ are contained in $\upsilon_\rho$. Setting $\pi:=\pi_{\upsilon_\rho,\upsilon_\lambda}$ and $\vec{\zeta}:=(\upsilon_{\lambdapr+1},\ldots,\upsilon_{\lambdapr+{t^\prime}+2})$, notice that by Lemma \ref{rhobasetrafolem} we have $\chi^{\upsilon_\rho}(\pi(\xi))=1$, and choose $\nu\in{\mathbb P}\cap\upsilon_{\rho+1}$ such that \[\rho_1(\vec{\zeta}^\frown\nu)\minusp 1=\pi(\xi),\] which is done as follows.
Let $\xi=\upsilon_\lambda\cdot(\zeta+l)$ where $\zeta\in\mathrm{Lim}_0$ and $l<\omega$. Note that if $l=0$, we must have $\zeta=(1/\upsilon_\lambda)\cdot\xi\in\mathrm{Lim}$ since $\xi>\upsilon_\lambda$, and $\chi^{\upsilon_\lambda}(\zeta)=\chi^{\upsilon_\lambda}(\xi)=1$ according to Lemma \ref{chiinvlem}. Also recall that base transformation commutes with $\omega$-exponentiation (Lemma \ref{simplebarlem}) and with the $\varrho$-operator (Lemma \ref{rhobasetrafolem}), which is useful to keep in mind during the following calculation. Set \[k:=\left\{\begin{array}{ll} 0&\mbox{ if }l=0\\ l-1+\chi^{\upsilon_\lambda}(\zeta)&\mbox{ if } l>0 \end{array}\right.\quad\mbox{ and }\quad\nu:=\omega^{\pi(\zeta)+k}.\] Now, if $\chi^{\upsilon_\rho}(\nu)=1$, it follows that $k=0$ and $\chi^{\upsilon_\rho}(\pi(\zeta))=1$, hence $\chi^{\upsilon_\lambda}(\zeta)=1$ and $l=0$, whence $\rho_1(\vec{\zeta}^\frown\nu)\minusp 1=\rhoargs{\upsilon_\rho}{\nu}=\upsilon_\rho\cdot\pi(\zeta)=\pi(\xi)$. And if $\chi^{\upsilon_\rho}(\nu)=0$, we have $\rho_1(\vec{\zeta}^\frown\nu)\minusp 1=\rhoargs{\upsilon_\rho}{\nu}+\upsilon_\rho$ and $k\minusp\chi^{\upsilon_\rho}(\pi(\zeta))+1=l$, since if $k=0$ we must have $l>0$ as $\chi^{\upsilon_\lambda}(\xi)=1$. Thus $\rhoargs{\upsilon_\rho}{\nu}+\upsilon_\rho=\upsilon_\rho\cdot(\pi(\zeta)+l)=\pi(\xi)$. Setting $\mu:=\nu^{\vec{\zeta}}_\nu$, we now obtain our master copy $\tilde{Y}$ of $Y$ by \[\tilde{Y}:=\mu+(-\upsilon_\rho+\pi[Y]).\] Now, let a finite set $\tilde{Y}^+$ such that $\tilde{Y}\subseteq\tilde{Y}^+\subseteq\upsilon_\lambda$ be given.
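{\small To make the preceding case distinction concrete, consider a hypothetical instance with $l=2$ and $\chi^{\upsilon_\lambda}(\zeta)=0$, so that $k=1$ and $\nu=\omega^{\pi(\zeta)+1}$. Assuming we are in the case $\chi^{\upsilon_\rho}(\nu)=0$, and using that $\chi$ is preserved under base transformation (Lemma \ref{rhobasetrafolem}), we obtain $k\minusp\chi^{\upsilon_\rho}(\pi(\zeta))+1=1+1=2=l$ and hence \[\rho_1(\vec{\zeta}^\frown\nu)\minusp 1=\rhoargs{\upsilon_\rho}{\nu}+\upsilon_\rho=\upsilon_\rho\cdot(\pi(\zeta)+2)=\pi(\xi),\] as required.}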
If necessary, let $\tilde{Z}$ be a copy of $\tilde{Y}^+\setminus\mu^+$, where $\mu^+:=\nu^{\vec{\zeta}}_{\nu+1}$, below $\mu^+$ such that for $\tilde{Y}^\prime:=(\tilde{Y}^+\cap\mu^+)\cup\tilde{Z}$ we have \[X\cup\tilde{Y}^+\cong X\cup\tilde{Y}^\prime,\] and set \[Y^+:=(\tilde{Y}^\prime\cap\mu)\cup\pi^{-1}[\upsilon_\rho+(-\mu+\tilde{Y}^\prime)].\] It is now easy to see that the isomorphism of $X\cup\tilde{Y}$ and $X\cup Y$ extends to an isomorphism of $X\cup\tilde{Y}^+$ and $X\cup Y^+$. \\[2mm] \noindent {\bf Case 2:} $m_n>1$. We now discuss the situation where the $<_\mathrm{\scriptscriptstyle{lex}}$-maximal index of the tracking chain of $\alpha$ is a $\nu$-index. {\small Simple examples for the subcases of Case 2 discussed below are ${\mathrm{tc}}(\varepsilon_0\cdot2)=((\varepsilon_0,1))$ (Subcase 2.1.1.1.1) and ${\mathrm{tc}}(\varepsilon_{{\operatorname{BH}}+1}\cdot2)=(({\operatorname{BH}},\varepsilon_{{\operatorname{BH}}+1},1))$ (Subcase 2.1.1.1.2), where ${\operatorname{BH}}$ stands for the Bachmann-Howard ordinal and $\varepsilon_{{\operatorname{BH}}+1}$ for the least epsilon number greater than the Bachmann-Howard ordinal, as well as ${\mathrm{tc}}(\varepsilon_0\cdot3)=((\varepsilon_0,2))$ (Subcase 2.1.1.2) and ${\mathrm{tc}}(\varphi_{2,0}\cdot(\omega+2))=((\varphi_{2,0},\omega+1))$ (Subcase 2.1.1.3). Examples for Subcase 2.1.2 were already given in the paragraph preceding Subcase 1.2.1, and a simple example for the remaining Subcase 2.2 is ${\mathrm{tc}}(\varepsilon_0\cdot\omega)=((\varepsilon_0,\omega))$. Note that the characterizing pattern for ${\operatorname{BH}}$, namely ${\operatorname{BH}}<_2\varepsilon_{{\operatorname{BH}}+1}<_2\varepsilon_{{\operatorname{BH}}+1}\cdot(\omega+1)$, is the least example of a $<_2$-chain of three ordinals. } \\[2mm] {\bf Subcase 2.1:} $\alphacp{n,m_n}$ is a successor ordinal, say $\alphacp{n,m_n}=\xi+1$. Let $\tau:=\taucp{n,m_n-1}$ and $\alphapr:=\mathrm{o}(\alphavec[\xi])$.
We consider cases for $\chi^\tau(\xi)$: \\[2mm] {\bf Subcase 2.1.1:} $\chi^\tau(\xi)=0$. In order to verify part a) we have to show that $\operatorname{pred}_1(\alpha)=\alphapr$. By monotonicity and continuity we have \[\alpha=\sup\set{\mathrm{o}(\alphavec[\xi]^\frown(\rhoargs{\tau}{\xi}+\eta))}{\eta\in(0,\tau)},\] which by the i.h.\ is a proper supremum over ordinals the greatest $<_1$-predecessor of which is $\alphapr$. \\[2mm] We now proceed to prove part b) and consider cases regarding $\xi$.\\[2mm] {\bf 2.1.1.1:} $\xi=0$. By part a) $\alphapr=\ordcp{n,m_n-1}(\alphavec)$ is the greatest $<_1$-predecessor of $\alpha$.\\[2mm] {\bf 2.1.1.1.1:} $m_n=2$. By the i.h., $\alphapr$ is either $\le_1$-minimal or has a greatest $<_1$-predecessor, whence it does not have any $<_2$-successor, and thus $\alphapr\not<_2\alpha$ as claimed. Clearly, any $<_2$-predecessor of $\alpha$ then must be a $<_2$-predecessor of $\alphapr$ as well. If $\operatorname{pred}_2(\alphapr)>0$ then using the i.h.\ $\alpha$ is seen to be a proper supremum of $<_2$-successors of $\operatorname{pred}_2(\alphapr)$ like $\alphapr$ itself, hence $\operatorname{pred}_2(\alpha)=\operatorname{pred}_2(\alphapr)$, as claimed.\\[2mm] {\bf 2.1.1.1.2:} $m_n>2$. Then $\alphapr<_2\alpha$, as according to the i.h.\ $\alpha$ then is the supremum of $<_2$-successors of $\alphapr$, hence $\operatorname{pred}_2(\alpha)=\alphapr$, as claimed.\\[2mm] {\bf 2.1.1.2:} $\xi$ is a successor ordinal. Then by the i.h.\ $\alphapr$ has a greatest $<_1$-predecessor, so $\alphapr$ does not have any $<_2$-successor, in particular $\alphapr\not<_2\alpha$.
In the special case $m_n=2\:\&\:{\tau_n^\star}=1$ the $\le_2$-minimality then follows from the $\le_2$-minimality of $\alphapr$, while in the remaining cases $\alpha$ is easily seen to be the supremum of ordinals with the same greatest $<_2$-predecessor as claimed for $\alphapr$.\\[2mm] {\bf 2.1.1.3:} $\xi\in\mathrm{Lim}$. As $\alphapr$ is its greatest $<_1$-predecessor, $\alpha$ is $\alphapr$-$\le_2$-minimal, and showing that $\alphapr\not<_2\alpha$ will imply the claim as above. Arguing toward contradiction, let us assume that $\alphapr<_2\alpha$. Let $X$ and $Z\subseteq(\alphapr,\alphapr+\kappa^{\vec{\varsigma}}_{\rhoargs{\tau}{\xi}+1})$ be sets according to Claim \ref{relleominclaim} for which there does not exist any cover $X\cup\tilde{Z}$ such that $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\alphapr+\kappa^{\vec{\varsigma}}_{\rhoargs{\tau}{\xi}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{\tau}{\xi})$. We set \[X^\prime:=X\setminus\{\alphapr\}\quad\mbox{ and }\quad Z^\prime:=\{\alphapr\}\cup Z.\] By part 2 of Lemma \ref{letwoupwlem} we obtain cofinally many copies $\tilde{Z}^\prime$ below $\alphapr$ such that $X^\prime<\tilde{Z}^\prime$ and $X^\prime\cup\tilde{Z}^\prime\cong X^\prime\cup Z^\prime$ with the property that $\mathrm{o}(\alphavec[1])\le\alphatipr:=\min\tilde{Z}^\prime<_1\alphapr$. Let $\nu\in(0,\xi)$ be such that $\mathrm{o}(\alphavec[\nu])\le\alphatipr<\mathrm{o}(\alphavec[\nu+1])$. Choosing $\tilde{Z}^\prime$ accordingly we may assume that $X^\prime<\mathrm{o}(\alphavec[\nu])$ and ${\mathrm{logend}}(\nu)<{\mathrm{logend}}(\xi)$, hence $\rhoargs{\tau}{\nu}\le\rhoargs{\tau}{\xi}$. Notice that if $\mathrm{o}(\alphavec[\nu])<\alphatipr$, the i.h.\ yields $\chi^\tau(\nu)=1$ and $\alphatipr\le\operatorname{pred}_1(\mathrm{o}(\alphavec[\nu+1]))=\mathrm{o}(\operatorname{me}(\alphavec[\nu]))$, whence $\mathrm{o}(\alphavec[\nu])<_2\alphatipr$.
We may therefore assume that $\alphatipr=\mathrm{o}(\alphavec[\nu])$, since changing $\alphatipr$ to $\mathrm{o}(\alphavec[\nu])$ would still result in a cover of $X^\prime\cup Z^\prime$. Because $\mathrm{o}(\alphavec[\nu+1])<_1\alphapr$ by the i.h., we may further assume that $\tilde{Z}^\prime\subseteq\mathrm{o}(\alphavec[\nu+1])$. Noticing that in the case $\rhoargs{\tau}{\nu}=\rhoargs{\tau}{\xi}$ we must have $\chi^\tau(\nu)=1$ and by the i.h.\ $\mathrm{o}(\alphavec[\nu])+\kappa^{\vec{\varsigma}}_{\rhoargs{\tau}{\nu}}<_1\alphapr$, we finally may assume that $X^\prime\cup\tilde{Z}^\prime\subseteq\mathrm{o}(\alphavec[\nu]^\frown(\zeta))$ for some $\zeta<\rhoargs{\tau}{\xi}$ with $\min\tilde{Z}^\prime=\mathrm{o}(\alphavec[\nu])$, so that $X^\prime\cup\tilde{Z}^\prime$ is a cover of $X^\prime\cup Z^\prime$. Since by the i.h.\ \[\mathrm{o}(\alphavec[\nu]^\frown(\zeta))\quad\cong\quad\mathrm{o}(\alphavec[\nu])\cup[\alphapr,\mathrm{o}(\alphavec[\xi]^\frown(\zeta))),\] setting \[\tilde{Z}:=(\alphapr+(-\mathrm{o}(\alphavec[\nu])+\tilde{Z}^\prime))\setminus\{\alphapr\}\] results in a cover $X\cup\tilde{Z}$ of $X\cup Z$ with $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\alphapr+\kappa^{\vec{\varsigma}}_{\rhoargs{\tau}{\xi}}+\mathrm{dp}_{\vec{\varsigma}}(\rhoargs{\tau}{\xi})$. Contradiction. \\[2mm] {\bf Subcase 2.1.2:} $\chi^\tau(\xi)=1$. Part a) claims that, setting $\deltavec:=\operatorname{me}(\alphavec[\xi])$, we have $\operatorname{pred}_1(\alpha)=\mathrm{o}(\deltavec)=:\delta$. It is easy to see that the extending index of $\operatorname{ec}(\deltavec)$ is of the form $\tau\cdot(\eta+1)$ for some $\eta$, as shown explicitly in part 7(c) of Lemma \ref{tcevalestimlem}. Notice that $\operatorname{cml}(\deltavec)=(n,m_n-1)$. By monotonicity and continuity we then have \[\alpha=\sup\set{\mathrm{o}(\deltavec^\frown(\tau\cdot\eta+\zeta))}{\zeta\in(0,\tau)},\] which by the i.h.\ is a proper supremum over ordinals the greatest $<_1$-predecessor of which is $\delta$.
\\[2mm] As to part b), we first show that $\alpha$ is $\alphapr$-$\le_2$-minimal, arguing similarly as in the proof of (relativized) $\le_2$-minimality in Subcase 1.2, but providing the argument explicitly again for the reader's convenience. We will then prove $\alphapr\not<_2\alpha$, which as above implies the claim. Let $\deltavec=(\deltavec_1,\ldots,\deltavec_r)$ where $\deltavec_i=(\deltacp{i,1},\ldots,\deltacp{i,k_i})$ for $1\le i\le r$, with associated chain ${\vec{\sigma}}$. Then $r\ge n$, $k_n\ge m_n$, $\deltacp{n,m_n}=\xi$, and $\deltacp{i,j}=\alphacp{i,j}$ for all $(i,j)\in\mathrm{dom}(\deltavec)$ such that $(i,j)<_\mathrm{\scriptscriptstyle{lex}}(n,m_n)$. Recall that we have $\operatorname{pred}_1(\alpha)=\mathrm{o}(\deltavec)=\delta$, i.e.\ $\delta$ is the greatest $<_1$-predecessor of $\alpha$, according to part a). For any $\gamma<_2\alpha$ we therefore must have $\gamma\le\delta$. According to the i.h.\ and with the aid of Lemma \ref{cmlmaxextcor}, the maximal $<_2$-chain from $\alphapr$ to $\delta$ consists of ordinals whose tracking chains are initial chains of $\deltavec$ that extend ${\mathrm{tc}}(\alphapr)=\alphavec[\xi]$; in particular $(n,m_n)<_\mathrm{\scriptscriptstyle{lex}}(r,k_r)$ and $\alphapr<_2\delta$, and as verified by part 7(c) of Lemma \ref{tcevalestimlem} we have, setting $\vec{\varsigma}_\delta:={\mathrm{ers}}_{r,k_r}(\deltavec)$, \[\alpha=\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot(\eta+1)}=\beta+\tilde{\tau},\mbox{ where } \beta:=\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot\eta}+\mathrm{dp}_{\vec{\varsigma}_\delta}(\tau\cdot\eta)\mbox{ and }\tilde{\tau}=\kappa^{\vec{\varsigma}_\delta}_\tau.\] Now, arguing toward a contradiction, let us assume that there exists a greatest $<_2$-predecessor $\gamma$ of $\alpha$ such that $\alphapr<_2\gamma\le_2\delta$, so that by the i.h.\ $\gammavec:={\mathrm{tc}}(\gamma)$ is an initial chain of $\deltavec$, say $\gamma=\ordcp{i,j+1}(\deltavec)$
for some $(i,j+1)\in\mathrm{dom}(\deltavec)$ with $j>0$ and $(n,m_n)<_\mathrm{\scriptscriptstyle{lex}}(i,j+1)$. We then have $\sigmacp{i,j}>\tau$ by Lemma \ref{cmlmaxextcor} and set $\theta:=\ordcp{i,j}(\deltavec)$. According to Claim \ref{relleominclaim} of the i.h.\ for $\mathrm{o}({\deltavec_{\restriction_{i,j}}}^\frown(\tau+1))=\theta+\tilde{\tau}+1$, there exist finite sets $X\subseteq\theta+1$ and $Z\subseteq(\theta,\theta+\tilde{\tau}+1)$ such that there does not exist any cover $X\cup\tilde{Z}$ of $X\cup Z$ with $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\theta+\tilde{\tau}$. By the i.h.\ we know that for every $\nu\in(0,\deltacp{i,j+1})$, setting $\delta_\nu:=\mathrm{o}(\deltavecrestrarg{(i,j+1)}[\nu])$, we have \[\theta+1+\tilde{\sigma}_{i,j}\quad\cong\quad\theta+1\cup(\delta_\nu,\delta_\nu+\tilde{\sigma}_{i,j}).\] Since $\deltacp{i,j+1}=\mu_\sigmacp{i,j}\in{\mathbb P}$ and $\tilde{\tau}<\tilde{\sigma}_{i,j}$ (due to the monotonicity of an appropriately relativized $\kappa$-function, say, $\kappa^{\vec{\varsigma}_\delta}$), we directly see that below $\gamma$ there are cofinally many copies $\tilde{Z}_\gamma$ of $Z$ such that $X\cup Z\cong X\cup\tilde{Z}_\gamma$. By part 1 of Lemma \ref{letwoupwlem} and our assumption $\gamma<_2\alpha$ we now obtain copies $\tilde{Z}_\alpha$ of $Z$ cofinally below $\alpha$ (and hence above $\beta$) such that $X\cup Z\cong X\cup\tilde{Z}_\alpha$. The i.h.\ ensures the isomorphism \[\theta+1+\tilde{\tau}\quad\cong\quad\theta+1\cup(\beta,\alpha),\] noting that (by the i.h.) the ordinals of the interval $(\beta,\alpha)$ cannot have any $<_2$-predecessors in $(\theta,\beta]$ and that the tracking chains of the ordinals in $(\theta,\theta+\tilde{\tau})\cup(\beta,\alpha)$ have the proper initial chain $\deltavecrestrarg{(i,j)}$.
This provides us, however, with a copy $\tilde{Z}\subseteq(\theta,\theta+\tilde{\tau})$ of $Z$ such that $X\cup Z\cong X\cup\tilde{Z}$, contradicting our choice of $X$ and $Z$; hence $\gamma<_2\alpha$ is impossible. Therefore $\alpha$ is $\alphapr$-$\le_2$-minimal. \\[2mm] We now show that $\alphapr\not<_2\alpha$. In order to reach a contradiction, let us assume to the contrary that $\alphapr<_2\alpha$. Under this assumption we can prove the following variant of Claim \ref{relleominclaim}: \begin{claim}\label{relletwominclaim} Assuming $\alphapr<_2\alpha$, there exist finite sets $X$ and $Z\subseteq(\alphapr,\alpha]$, where $X$ consists of $\alphapr$ and all existing greatest $<_2$-predecessors $\gamma$ of elements of $Z$ that satisfy $\gamma\le\alphapr$, such that there does not exist any cover $X\cup\tilde{Z}$ of $X\cup Z$ with $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\alpha$. \end{claim} {\bf Proof.} The proof of the above claim both builds upon Claim \ref{relleominclaim} and is similar to its proof, but for the reader's convenience we give it in detail, with an emphasis on a situation that is not particularly difficult but did not occur in the proof of Claim \ref{relleominclaim}, cf.\ Subcase I.2. We are going to show that for every index pair $(i,j)\in\mathrm{dom}(\deltavec)$ such that $(n,m_n)\le_\mathrm{\scriptscriptstyle{lex}}(i,j)\le_\mathrm{\scriptscriptstyle{lex}}(r,k_r)$, setting ${\eta_{i,j}}:=\ordcp{i,j}(\deltavec)$, there exists a finite set $Z_{i,j}\subseteq({\eta_{i,j}},\alpha]$ such that for $X_{i,j}$ consisting of ${\eta_{i,j}}$ and all existing greatest $<_2$-predecessors below ${\eta_{i,j}}$ of elements of $Z_{i,j}$, there does not exist any cover $X_{i,j}\cup\tilde{Z}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\alpha$.
We proceed by induction on the finite number of 1-step extensions from $\deltavecrestrarg{(i,j)}$ to $\deltavec$. The initial step is $(i,j)=(r,k_r)$, hence ${\eta_{i,j}}=\delta$. Recalling that $\alpha=\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot(\eta+1)}$, we can apply Claim \ref{relleominclaim} of the i.h.\ to $\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot\eta+1}$ to obtain sets $X^\prime$ and $Z^\prime\subseteq(\delta,\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot\eta+1})$ such that there does not exist any cover $X^\prime\cup\tilde{Z}'$ of $X^\prime\cup Z^\prime$ with $X^\prime<\tilde{Z}'$ and $X^\prime\cup\tilde{Z}'\subseteq\delta+\kappa^{\vec{\varsigma}_\delta}_{\tau\cdot\eta}+\mathrm{dp}_{\vec{\varsigma}_\delta}(\tau\cdot\eta)=:\deltapr$. Defining \[Z_{i,j}:=Z^\prime\cup\{\alpha\}\] and noticing that by our assumption we have $\alphapr<_2\alpha$ and that by the i.h.\ there do not exist any $<_2$-successors of $\alphapr$ in the interval $(\deltapr,\alpha)$, it is easy to check that $Z_{i,j}$ has the required property. Let us now assume that $(i,j)<_\mathrm{\scriptscriptstyle{lex}}(r,k_r)$. We set $(u,v):=(i,j)^+$, $\vec{\varsigma}':={\mathrm{ers}}_{i,j}(\deltavec)$, and consider two cases.\\[2mm] {\bf Case I:} $(u,v)=(i+1,1)$. By Lemma \ref{cmlmaxextcor} we have $\sigmacp{u,v}>\tau\in{\mathbb E}$ and hence $\sigma^\star_u\ge\tau$. Notice that the case $\sigmacp{u,v}=\sigma^\star_u$ cannot occur, since then $\operatorname{ec}(\deltavecrestrarg{(u,v)})$ would not exist. We discuss the remaining possibilities for $\deltacp{u,v}$:\\[2mm] {\bf Subcase I.1:} $\deltacp{u,v}\in{\mathbb E}^{>\sigma^\star_u}$. We then argue as in the corresponding Case I in the proof of Claim \ref{relleominclaim}.
We therefore define \[Z_{i,j}:=Z_{u,v}.\] That this choice is adequate is shown as in the proof of Claim \ref{relleominclaim}.\\[2mm] {\bf Subcase I.2:} Otherwise. In case of $\deltacp{u,v}>\sigmacp{u,v}$ let $\zeta$ be such that $\deltacp{u,v}=_{\mathrm{NF}}\zeta+\sigmacp{u,v}$, otherwise set $\zeta:=0$. If $\zeta>0$, let $X_\zeta$ and $Z_\zeta\subseteq({\eta_{i,j}},{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\zeta+1})$ be the sets according to Claim \ref{relleominclaim} of the i.h., so that there does not exist any cover $X_\zeta\cup\tilde{Z}_\zeta$ of $X_\zeta\cup Z_\zeta$ with $X_\zeta<\tilde{Z}_\zeta$ and $X_\zeta\cup\tilde{Z}_\zeta\subseteq{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_\zeta+\mathrm{dp}_{\vec{\varsigma}'}(\zeta)$; otherwise set $X_\zeta:=\emptyset=:Z_\zeta$. We now define \[X_{i,j}:=\{{\eta_{i,j}}\}\cup X_\zeta\cup(X_{u,v}\setminus\{{\eta_{u,v}}\})\quad\mbox{ and }\quad Z_{i,j}:=Z_\zeta\cup\{{\eta_{u,v}}\}\cup Z_{u,v}.\] In order to show that this choice of $Z_{i,j}$ satisfies the claim, let us assume to the contrary the existence of a set $\tilde{Z}_{i,j}$ such that $X_{i,j}\cup\tilde{Z}_{i,j}$ is a cover of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\alpha$. Let $Z^\prime:=\{{\eta_{u,v}}\}\cup Z_{u,v}$ and let $\tilde{Z}'$ be the subset of $\tilde{Z}_{i,j}$ corresponding to $Z^\prime$. Due to the property of $Z_\zeta$ in the case $\zeta>0$ we have \[\tilde{Z}'\subseteq[{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\zeta+1},\alpha),\] and since ${\eta_{u,v}}<_1\alpha$ there are cofinally many copies \[\tilde{Z}'\subseteq[{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\zeta+1},{\eta_{u,v}})\] below ${\eta_{u,v}}$ keeping the same $<_2$-predecessors.
The ordinal $\mu:=\min\tilde{Z}'$ corresponds to ${\eta_{u,v}}$ in $Z_{i,j}$, and since $\mu\le_1\tilde{Z}'$ we see that there exists $\nu\in(\zeta,\deltacp{u,v})$ such that, setting $\eta_\nu:={\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\nu}$ and $\eta_{\nu+1}:={\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\nu+1}$, we have \[\tilde{Z}'\setminus\{\mu\}\subseteq(\eta_\nu,\eta_{\nu+1}),\] which again we may assume to satisfy $\nu\ge\sigma^\star_u$ and ${\mathrm{logend}}((1/\sigma^\star_u)\cdot\nu)<{\mathrm{logend}}((1/\sigma^\star_u)\cdot\sigmacp{u,v})$. By the i.h.\ we have \[\eta_{\nu+1}\quad\cong\quad\eta_\nu\cup[{\eta_{u,v}},{\eta_{u,v}}+(-\eta_\nu+\eta_{\nu+1})),\] since ${\eta_{u,v}}$ and $\eta_\nu$ have the same $<_2$-predecessors. Exploiting this isomorphism and noticing that $X_{u,v}\setminus\{{\eta_{u,v}}\}\subseteq X_{i,j}$, we obtain a copy $\tilde{Z}_{u,v}$ of $\tilde{Z}'\setminus\{\mu\}$ such that $X_{u,v}\cup\tilde{Z}_{u,v}$ is a cover of $X_{u,v}\cup Z_{u,v}$ with $X_{u,v}<\tilde{Z}_{u,v}$ and $X_{u,v}\cup\tilde{Z}_{u,v}\subseteq\alpha$. Contradiction.\\[2mm] {\bf Case II:} $(u,v)=(i,j+1)$. Setting $\sigma:=\sigmacp{i,j}$, we then have $\deltacp{i,j+1}=\mu_\sigma$ and proceed as in the corresponding case in the proof of Claim \ref{relleominclaim}. Applying Claim \ref{relleominclaim} of the i.h.\ to $\mathrm{o}(\deltavecrestrarg{(i,j)}^\frown({\bar{\sigma}}+1))$ yields a set $Z_{\bar{\sigma}}\subseteq({\eta_{i,j}},{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{{\bar{\sigma}}+1})$ such that there does not exist a cover $X_{i,j}\cup\tilde{Z}_{\bar{\sigma}}$ of $X_{i,j}\cup Z_{\bar{\sigma}}$ with $X_{i,j}<\tilde{Z}_{\bar{\sigma}}$ and $X_{i,j}\cup\tilde{Z}_{\bar{\sigma}}\subseteq{\eta_{i,j}}+\kappa^{\vec{\varsigma}'}_{\bar{\sigma}}+\mathrm{dp}_{\vec{\varsigma}'}({\bar{\sigma}})$.
We now define \[Z_{i,j}:=\{{\eta_{u,v}}\}\cup({\eta_{u,v}}+(-{\eta_{i,j}}+Z_{\bar{\sigma}}))\cup\{\mathrm{o}(\deltavecrestrarg{(u,v)}^\frown(\sigma))\}\cup Z_{u,v}.\] In order to show that $Z_{i,j}$ has the desired property, we assume that there were a cover $X_{i,j}\cup\tilde{Z}_{i,j}$ of $X_{i,j}\cup Z_{i,j}$ with $X_{i,j}<\tilde{Z}_{i,j}$ and $X_{i,j}\cup\tilde{Z}_{i,j}\subseteq\alpha$ and then argue as in the corresponding Case II in the proof of Claim \ref{relleominclaim} in order to drive the assumption into a contradiction.\\[2mm] The final instance $(i,j)=(n,m_n)$ establishes Claim \ref{relletwominclaim}.\mbox{ } $\Box$ We can now derive a contradiction similarly as in the previous subcase. Let $X,Z$ be as in the above claim. Without loss of generality we may assume that $\operatorname{pred}_1(\alpha)=\delta\in Z$. We set \[X^\prime:=X\setminus\{\alphapr\}\quad\mbox{ and }\quad Z^\prime:=\{\alphapr\}\cup(Z\setminus\{\alpha\}).\] By part 2 of Lemma \ref{letwoupwlem} we obtain cofinally many copies $\tilde{Z}'$ below $\alphapr$ such that $X^\prime<\tilde{Z}'$ and $X^\prime\cup\tilde{Z}'\cong X^\prime\cup Z^\prime$, with the property that all $\le_1$-connections to $\alpha$ are maintained. Let \[\alphatipr:=\min\tilde{Z}'\] and notice that $\alphatipr<_1\alpha$ and $\alphatipr\le_2\gammati$ for all $\gammati\in\tilde{Z}'_0$, where $\tilde{Z}'_0$ is defined as the subset of $\tilde{Z}'$ that consists of the copies of all $\gamma\in Z^\prime$ such that $\gamma<_1\alpha$. Let $\nu\in(0,\xi)$ be such that $\mathrm{o}(\alphavec[\nu])\le\alphatipr<\mathrm{o}(\alphavec[\nu+1])$. Choosing $\tilde{Z}'$ accordingly, we may assume that both $X^\prime<\mathrm{o}(\alphavec[\nu])$ and ${\mathrm{logend}}(\nu)<{\mathrm{logend}}(\xi)$.
Notice that by the i.h.\ $\chi^\tau(\nu)=1$, hence $\rhoargs{\tau}{\nu}=\tau\cdot{\mathrm{logend}}(\nu)<\tau\cdot{\mathrm{logend}}(\xi)=\rhoargs{\tau}{\xi}$ (as $\nu={\nu^\prime}\cdot\omega$ for some ${\nu^\prime}$ would imply $\chi^\tau(\nu)=0$, similarly for $\xi$), and $\alphatipr\le_1\operatorname{pred}_1(\mathrm{o}(\alphavec[\nu+1]))=\operatorname{me}(\mathrm{o}(\alphavec[\nu]))$, whence $\mathrm{o}(\alphavec[\nu])\le_2\alphatipr$. We may therefore assume that $\alphatipr=\mathrm{o}(\alphavec[\nu])$, since such a replacement would still result in a cover of $X^\prime\cup Z^\prime$. Because $\mathrm{o}(\alphavec[\nu+1])<_1\alphapr$ by the i.h., we may further assume that $\tilde{Z}'\subseteq\mathrm{o}(\alphavec[\nu+1])$, as elements of $\tilde{Z}'_0$ are not affected. Since $\chi^\tau(\nu)=\chi^\tau(\xi)=1$ we have $\nu\cdot\omega<\xi$, and setting \[\alphastar:=\mathrm{o}(\alphavec[\nu\cdot\omega])+\kappa^{{\vec{\varsigma}}}_{\rhoargs{\tau}{\nu\cdot\omega}}+\mathrm{dp}_{{\vec{\varsigma}}}(\rhoargs{\tau}{\nu\cdot\omega}),\] we can use the isomorphism \[\mathrm{o}(\alphavec[\nu+1])\quad\cong\quad\mathrm{o}(\alphavec[\nu])\cup[\mathrm{o}(\alphavec[\nu\cdot\omega]),\alphastar),\] which is established by the i.h., in order to shift $\tilde{Z}'$ by the translation \[\tilde{Z}^\star:=\mathrm{o}(\alphavec[\nu\cdot\omega])+(-\mathrm{o}(\alphavec[\nu])+\tilde{Z}').\] This results in the cover $X^\prime\cup\tilde{Z}^\star$ of $X^\prime\cup Z^\prime$.
By the i.h.\ we know that \[\mathrm{o}(\alphavec[\nu\cdot\omega])<_2\alphastar=\mathrm{o}(\alphavec[\nu\cdot\omega])+(-\mathrm{o}(\alphavec[\nu])+\mathrm{o}(\alphavec[\nu+1]))\] and that for all $\gammati\in\tilde{Z}'_0$ the corresponding element in $\tilde{Z}^\star$ satisfies \[\mathrm{o}(\alphavec[\nu\cdot\omega])+(-\mathrm{o}(\alphavec[\nu])+\gammati)<_1\alphastar.\] Since $\rhoargs{\tau}{\nu\cdot\omega}=\tau\cdot{\mathrm{logend}}(\nu)<\rhoargs{\tau}{\xi}$, setting $\alphati:=\alphapr+\kappa^{{\vec{\varsigma}}}_{\rhoargs{\tau}{\nu\cdot\omega}}+\mathrm{dp}_{{\vec{\varsigma}}}(\rhoargs{\tau}{\nu\cdot\omega})$, we may finally exploit the isomorphism \[\alphastar+1\quad\cong\quad\mathrm{o}(\alphavec[\nu\cdot\omega])\cup[\alphapr,\alphati],\] so that setting \[\tilde{Z}:=(\alphapr+(-\mathrm{o}(\alphavec[\nu\cdot\omega])+(\tilde{Z}^\star\cup\{\alphastar\})))\setminus\{\alphapr\}\] we obtain the cover $X\cup\tilde{Z}$ of $X\cup Z$, which satisfies $X<\tilde{Z}$ and $X\cup\tilde{Z}\subseteq\alpha$. Contradiction. \\[2mm] {\bf Subcase 2.2:} $\alphacp{n,m_n}\in\mathrm{Lim}$. \\[2mm] Part a) follows from the i.h.\ by monotonicity and continuity, according to which \[\alpha=\sup\set{\mathrm{o}(\alphavec[\xi])}{\xi\in(0,\alphacp{n,m_n})}.\] In order to see part b), we simply observe that according to part a) and the i.h.\ $(\mathrm{o}(\alphavec[\xi]))_{\xi<\alphacp{n,m_n}}$ is a $<_1$-chain of ordinals that are either $\le_2$-minimal as claimed for $\alpha$ or have the same greatest $<_2$-predecessor as claimed for $\alpha$. \mbox{ } $\Box$ The following Corollary \ref{maincor} applies Theorem \ref{maintheo} in order to completely describe the structure ${\cal R}_2$ in terms of $\le_i$-successorship, $i=1,2$.
As a preparation we need the following definition of the \emph{greatest branching point} of a tracking chain, denoted by $\operatorname{gbo}(\alphavec)$, which is crucial in calculating ${\operatorname{lh}}(\alpha)$, i.e.\ the maximum $\beta$ such that $\alpha\le_1\beta$, if such an ordinal exists. Recall that we write ${\operatorname{lh}}_2(\alpha)$ for the maximum $\beta$ such that $\alpha\le_2\beta$, if such an ordinal exists, and $\operatorname{Succ}_i(\alpha)$ for the class $\{\beta\mid\alpha\le_i\beta\}$, $i=1,2$. Recall also Definition \ref{charseqdefi} of \emph{reference sequence}, ${\mathrm{rs}}_{i,j}(\alphavec)$, and \emph{evaluation reference sequence}, ${\mathrm{ers}}_{i,j}(\alphavec)$. \begin{defi}[7.12 of \cite{CWc}]\label{gbodefi} Let $\alphavec\in{\operatorname{TC}}$ where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$ and set \index{$\alphavecst$} \[\alphavecst:=\left\{\begin{array}{l@{\quad}l} \alphavec & \mbox{if }m_n=1\\ \alphavec[\mu_\taucp{n,m_n-1}] &\mbox{otherwise.} \end{array}\right.\] We define the (index pair of the) {\bf greatest branching point of} \index{index pair!greatest branch-off index pair} $\alphavec$, $\operatorname{gbo}(\alphavec)$\index{$\operatorname{gbo}$}, by \[\operatorname{gbo}(\alphavec):=\left\{\begin{array}{l@{\quad}l} \operatorname{gbo}(\alphavecrestrarg{(i,j+1)}) & \mbox{if $(i,j):=\operatorname{cml}(\alphavecst)$ exists}\\[2mm] (n,m_n) & \mbox{otherwise.} \end{array}\right.\] \end{defi} \begin{cor}[cf.\ 7.13 of \cite{CWc}]\label{maincor} Let $\alpha\in{\mathrm{Ord}}$ with ${\mathrm{tc}}(\alpha)=\alphavec$ where $\alphaivec=(\alphacp{i,1},\ldots,\alphacp{i,m_i})$ for $1\le i\le n$, with associated chain ${\vec{\tau}}$ and segmentation parameters $(\lambda,t):=\upsilonseg(\alphavec)$ and $p,s_l,(\lambda_l,t_l)$ for $l=1,\ldots,p$ as in Definition \ref{segmentationdefi}.\\[2mm] {\bf Case 1:} $\alpha\in\mathrm{Im}(\upsilon)$. \begin{enumerate} \item[{\bf 1.1:}] $\alpha=0$.
Then ${\operatorname{lh}}_2(\alpha)={\operatorname{lh}}(\alpha)=\alpha$. \item[{\bf 1.2:}] $\alpha=\upsilon_{\lambda+1}$. Then ${\operatorname{lh}}_2(\alpha)=\alpha$ and ${\operatorname{lh}}(\alpha)=\infty$. \item[{\bf 1.3:}] $\alpha=\upsilon_\lambda>0$. Then ${\operatorname{lh}}_2(\alpha)={\operatorname{lh}}(\alpha)=\infty$ and \[\operatorname{Succ}_2(\alpha)=\{\upsilon_\lambda\cdot(1+\xi)\mid\xi\in{\mathrm{Ord}}\}.\] \item[{\bf 1.4:}] $\alpha=\upsilon_{\lambda+t^\prime+1}$, $t^\prime\in(0,\omega)$. Then ${\operatorname{lh}}_2(\alpha)={\operatorname{lh}}(\alpha)=\infty$ and \[\operatorname{Succ}_2(\alpha)=\{\alpha+\upsilon_{\lambda+t^\prime}\cdot\xi\mid\xi\in{\mathrm{Ord}}\}.\] \end{enumerate} {\bf Case 2:} $\alpha\not\in\mathrm{Im}(\upsilon)$. \begin{enumerate} \item[a)] We first consider $\le_2$-successors of $\alpha$.\\[2mm] {\bf Subcase 2.1:} $m_n=1$. Then \[\operatorname{Succ}_2(\alpha)=\{\alpha\}\quad\mbox{ and }\quad{\operatorname{lh}}_2(\alpha)=\alpha.\] {\bf Subcase 2.2:} $m_n>1$.
Set $(i_0,j_0):=(n,m_n-1)$, $\tau:=\taucp{i_0,j_0}$, $\tilde{\tau}:=\tilde{\tau}_{i_0,j_0}$, ${\vec{\varsigma}}:={\mathrm{rs}}_{i_0,j_0}(\alphavec)$, $\varrho:=\rhoargs{\tau}{\taucp{n,m_n}}$, and let $\nu,\xi$ be such that \[\kappa^{{\vec{\varsigma}}}_\varrho+\mathrm{dp}_{{\vec{\varsigma}}}(\varrho)=\tilde{\tau}\cdot\nu+\xi\mbox{ and }\xi<\tilde{\tau}.\] Writing $\eta_{\operatorname{max}}:=\nu\minusp\chi^\tau(\taucp{n,m_n})$, we then have \[\operatorname{Succ}_2(\alpha)=\set{\alpha+\tilde{\tau}\cdot\eta}{\eta\le\eta_{\operatorname{max}}}\quad\mbox{ and }\quad{\operatorname{lh}}_2(\alpha)=\alpha+\tilde{\tau}\cdot\eta_{\operatorname{max}}.\] \item[b)] Writing $(n_0,m_0):=\operatorname{gbo}(\alphavec)$, $m:=m_0\minusp 2+1$, and setting as in Definition \ref{trchevaldefi} ${\vec{\varsigma}}:={\mathrm{ers}}_{n_0,m}(\alphavec)$, we have \[{\operatorname{lh}}(\alpha)=\left\{\begin{array}{ll} \ordcp{n_0,m}(\alphavec)+\mathrm{dp}_{\vec{\varsigma}}(\taucp{n_0,m}) & \mbox{ if }\ordcp{n_0,m}(\alphavec)\not\in\mathrm{Im}(\upsilon)\\[2mm] \infty & \mbox{ otherwise.} \end{array}\right.\] \end{enumerate} \end{cor} {\bf Proof.} In Case 1 the claims regarding ${\operatorname{lh}}$ follow directly from Theorem \ref{maintheo}. The claim regarding ${\operatorname{lh}}_2$ in Subcases 1.2 and 2.1 follows from Lemma \ref{ktwoinflochainlem}, since according to Theorem \ref{maintheo} cofinal $<_1$-chains do not exist. We now consider the situations in Subcases 1.3, 1.4, and 2.2. If $\beta$ is a $<_2$-successor of $\alpha$, then according to the theorem either $\alphavec\subseteq\betavec:={\mathrm{tc}}(\beta)$, where $m_n>1$, or $\alpha=\upsilon_{\lambda+t}$, where $\lambda\in\mathrm{Lim}_0$, $t\in{\mathbb N}\setminus\{1\}$, and $\lambda+t>0$. In the situation of Subcase 1.3 let $\tau:=\upsilon_\lambda=:\tilde{\tau}$, in Subcase 1.4 let $\tau:=\upsilon_{\lambda+{t^\prime}}=:\tilde{\tau}$, and in Subcase 2.2 let $\tau$ and $\tilde{\tau}$ be as defined there.
Note that $\le_i$-successorship is closed under limits, $i=1,2$, so it is sufficient to consider successor-$<_2$-successors $\beta$ of $\alpha$, which are immediate $<_2$-successors, as by the theorem non-immediate $<_2$-successors cannot be successor-$<_2$-successors. Suppose therefore that $\betavec=(\betavec_1,\ldots,\betavec_r)$, where $\betavec_i=(\betacp{i,1},\ldots,\betacp{i,k_i})$ for $i=1,\ldots,r$, is a successor-$<_2$-successor of $\alpha$, whence by the theorem $\operatorname{pred}_2(\beta)=\alpha$, $k_r=1$, and $\tau=\sigma^\star_r=\betacp{r,1}>1$, where ${\vec{\sigma}}$ is the chain associated with $\betavec$. Clearly, the converse of this latter implication holds as well. Therefore, if $\beta$ is a successor-$<_2$-successor of $\alpha$, then it must be of the form $\alpha+\tilde{\tau}\cdot(\eta+1)$ for some $\eta$. In Subcase 2.2 we see that such $\eta$ must be bounded as claimed, since according to the theorem $\betavec$ is then a proper extension of $\alphavec$, whence with the aid of Corollary \ref{alpldpcor} $\beta\le\alpha+\kappa^{{\vec{\varsigma}}}_\varrho+\mathrm{dp}_{{\vec{\varsigma}}}(\varrho)$, with strict inequality if $\chi^\tau(\taucp{n,m_n})=1$. Note that any ordinal greater than $\alpha$ and bounded in this way has a tracking chain that properly extends $\alphavec$, cf.\ Corollary \ref{alpldpcor}. Having seen that all $<_2$-successors of $\alpha$ are of the claimed form, we now assume $\beta$ to be of the form $\alpha+\tilde{\tau}\cdot(\eta+1)$ for some $\eta$, bounded as stated in the situation of Subcase 2.2. Let again ${\mathrm{tc}}(\beta)=:\betavec=(\betavec_1,\ldots,\betavec_r)$ where $\betavec_i=(\betacp{i,1},\ldots,\betacp{i,k_i})$ for $i=1,\ldots,r$, with associated chain ${\vec{\sigma}}$. According to our assumption we have ${\mathrm{end}}(\tilde{\sigma}_{r,k_r})=\tilde{\tau}$.
We first consider the situation of Subcase 2.2, where $\alphavec\subseteq\betavec$ as we have seen above. Assuming that $\sigmacp{r,k_r}=1$, which implies $k_r>1$, we would have $\tilde{\sigma}_{r,k_r-1}=\tilde{\tau}$, hence by part 3 of Lemma \ref{evallem} $(r,k_r-1)=(n,m_n-1)$ and thus $\alphavec=\betavec$, which is not the case. We therefore have $\sigmacp{r,k_r}>1$ and hence $\tilde{\sigma}_{r,k_r}=\tilde{\tau}$, which entails \[{\mathrm{ts}}(\tilde{\sigma}_{r,k_r})={\mathrm{ts}}(\tilde{\tau})\] due to Lemma \ref{trsvallem}, and by part 3 of Lemma \ref{evallem} it follows that neither $k_r>1$ nor $k_r=1\:\&\:\sigmacp{r,1}\in{\mathbb E}^{>{\sigma_r^\star}}$, since otherwise $(r,k_r)=(n,m_n-1)$, which is not the case. By parts 1 and 2 of Lemma \ref{evallem} we have $\tilde{\sigma}_{r,1}=\kappa^{\vec{\varsigma}'}_{\sigmacp{r,1}}$ (where $\vec{\varsigma}':={\mathrm{rs}}_{r^\star}(\betavec)$) and ${\mathrm{ts}}(\tilde{\tau})={\mathrm{rs}}_{i_0,j_0}(\alphavec)={\vec{\varsigma}}\in{\cal RS}$ (where by definition $(i_0,j_0)=(n,m_n-1)$), the membership relation according to part 2 of Lemma \ref{rslem}. Thus $k_r=1$, and since the assumption ${\sigma_r^\star}<\sigmacp{r,1}$, where as we already know $\sigmacp{r,1}\not\in{\mathbb E}$, would imply ${\mathrm{ts}}(\tilde{\sigma}_{r,1})\not\in{\cal RS}$, as this tracking sequence could not be a strictly increasing sequence of epsilon numbers, we obtain $\sigmacp{r,1}={\sigma_r^\star}$, so that ${\mathrm{ts}}(\tilde{\sigma}_{r,1})=\vec{\varsigma}'$ and hence ${\sigma_r^\star}=\tau$; moreover $r^\star=(i_0,j_0)$ again by part 3 of Lemma \ref{evallem}, whence by the theorem $\alpha<_2\beta$. Next, we consider the situation of Subcase 1.3, where $\alpha=\upsilon_\lambda>0$ and $\beta=\upsilon_\lambda\cdot(1+\eta+1)$ for some $\eta$.
Then we have ${\mathrm{end}}(\tilde{\sigma}_{r,k_r})=\upsilon_\lambda$, which implies $k_r=1$, as $\upsilon_\lambda$ cannot be (the last summand of) an element of the image of any $\nu$-function; hence $\tilde{\sigma}_{r,1}=\sigmacp{r,1}=\upsilon_\lambda$, as $\kappa$-functions map additive principal numbers to additive principal numbers, and thus ${\sigma_r^\star}=\upsilon_\lambda$. The theorem now yields $\operatorname{pred}_2(\beta)=\alpha$. Finally, in Subcase 1.4, we have $\alpha=\upsilon_{\lambda+{t^\prime}+1}$ and $\tau=\upsilon_{\lambda+{t^\prime}}=\tilde{\tau}$. Then we have ${\mathrm{end}}(\tilde{\sigma}_{r,k_r})=\tau<\alpha$, which again implies that $k_r=1$, as $\tau$ can only be (the last summand of) an element of the image of a $\nu$-function that occurs in tracking chains of ordinals below $\alpha$, namely if it is either equal to $\nu^\gammavecpr_{\tau}=\nu^\gammavec_0$, where $\gammavecpr=(\upsilon_{\lambda+1},\ldots,\upsilon_{\lambda+{t^\prime}-1})$ and $\gammavec=\gammavecpr^\frown\tau$, or (the last summand of) an element of the image of $\nu^\gammavec$, cf.\ Corollaries \ref{kdpnuestimcor} and \ref{nuimagecor}. We again conclude that $\tilde{\sigma}_{r,1}=\sigmacp{r,1}=\tau={\sigma_r^\star}$, and hence by the theorem $\operatorname{pred}_2(\beta)=\alpha$. It remains to show part b) of the Corollary. We argue as in the corresponding proof of Corollary 7.13 of \cite{CWc}. Let $\alphavecpr:=(\alphavecrestrarg{(n_0,m_0)})^\star$, using the ${}^\star$-notation from Definition \ref{gbodefi}, according to which the vector $\alphavecpr$ does not possess a critical main line index pair.
We set \[\alphaplus:=\mathrm{o}(\operatorname{me}(\alphavecpr)).\] If $\ordcp{n_0,m}(\alphavec)\in\mathrm{Im}(\upsilon)$, we have $\alphaplus=\mathrm{o}(\alphavecpr)\in\mathrm{Im}(\upsilon)$; otherwise, inspecting definitions as was done in part 7(b) of Lemma \ref{tcevalestimlem}, we obtain \[\alphaplus=\ordcp{n_0,m}(\alphavec)+\mathrm{dp}_{\vec{\varsigma}}(\taucp{n_0,m}).\] We first show that \begin{equation}\label{leomeclaim} \alpha\le_1\alphaplus. \end{equation} In the case $m_0=1$ we have $(n_0,m_0)=(n,m_n)$, and the claim follows directly from Theorem \ref{maintheo}. Now assume that $m_0>1$. By Theorem \ref{maintheo} we have $\alpha\le_1\mathrm{o}(\alphavecst)\le_1\mathrm{o}(\operatorname{me}(\alphavecst))$. If $\operatorname{cml}(\alphavecst)$ does not exist, that is $\alphavecpr=\alphavecst$, we are done with showing (\ref{leomeclaim}). Otherwise let $\operatorname{cml}(\alphavecst)=:(i_1,j_1)$ and let $l_0$ be maximal such that for all $l\in(0,l_0)$ $\operatorname{cml}((\alphavecrestrarg{(i_l,j_l+1)})^\star)=:(i_{l+1},j_{l+1})$ exists. Clearly, the sequence of index pairs we obtain in this way is $<_\mathrm{\scriptscriptstyle{lex}}$-decreasing, and by Definition \ref{gbodefi} $(i_{l_0},j_{l_0}+1)=(n_0,m_0)$. Theorem \ref{maintheo} yields the chain of inequalities \[\alpha\le_1\mathrm{o}(\alphavecst)\le_1\mathrm{o}((\alphavecrestrarg{(i_1,j_1+1)})^\star)\le_1\ldots\le_1\mathrm{o}((\alphavecrestrarg{(i_{l_0},j_{l_0}+1)})^\star)=\mathrm{o}(\alphavecpr)\le_1\alphaplus.\] In the case $\ordcp{n_0,m}(\alphavec)\in\mathrm{Im}(\upsilon)$ the claim follows directly from the theorem, so let us assume otherwise. We claim that \begin{equation}\label{predoneclaim} \operatorname{pred}_1(\alphaplus+1)<\mathrm{o}(\alphavecpr). \end{equation} To this end, note that ${\mathrm{tc}}(\alphaplus+1)$ must be of the form $\alphavecrestrarg{i}^\frown(\alphacp{i+1,1}+1)$ where $i\le n_0$.
By Theorem \ref{maintheo}, $\alphaplus+1$ is either $\upsilon_\lambda$-$\le_1$-minimal or its greatest $<_1$-predecessor is $\mathrm{o}(\alphacp{i-1,m_{i-1}})$. Hence (\ref{predoneclaim}) follows, which implies that $\alpha\not\le_1\alphaplus+1$. We thus have ${\operatorname{lh}}(\alpha)=\ordcp{n_0,m}(\alphavec)+\mathrm{dp}_{\vec{\varsigma}}(\taucp{n_0,m})$. \mbox{ } $\Box$ \end{document}
\begin{document} \begin{frontmatter} \selectlanguage{english} \title{Maximal solutions of equation $\Delta u=u^q$ in arbitrary domains} \author[authorlabel1]{Moshe Marcus}, \ead{[email protected]} \author[authorlabel2]{Laurent V\'eron} \ead{[email protected]} \address[authorlabel1]{Department of Mathematics, Technion\\ Haifa 32000, ISRAEL} \address[authorlabel2]{Laboratoire de Math\'ematiques et Physique Th\'eorique, Facult\'e des Sciences\\ Parc de Grandmont, 37200 Tours, FRANCE} \begin{abstract} We prove bilateral capacitary estimates for the maximal solution $U_F$ of $-\Delta u+u^q=0$ in the complement of an arbitrary closed set $F\subset\mathbb R^N$, involving the Bessel capacity $C_{2,q'}$, for $q$ in the supercritical range $q\geq q_{c}:=N/(N-2)$. We derive a pointwise necessary and sufficient condition, via a Wiener type criterion, in order that $U_F(x)\to\infty$ as $x\to y$ for given $y\in\partial F$. Finally we prove a general uniqueness result for large solutions. {\it To cite this article: M. Marcus, L. V\'eron, C. R. Acad. Sci. Paris, Ser. I XXX (2007).} \vskip 0.5\baselineskip \noindent{\bf R\'esum\'e} \vskip 0.5\baselineskip \noindent {\bf Maximal solutions of $\Delta u=u^q$ in an arbitrary domain.} We prove a bilateral capacitary estimate for the maximal solution $U_{F}$ of $-\Delta u+u^q=0$ in an arbitrary domain of $\mathbb R^N$, involving the Bessel capacity $C_{2,q'}$, in the supercritical case $q\geq q_{c}:=N/(N-2)$. By means of a Wiener type criterion, we deduce a necessary and sufficient condition for this maximal solution to tend to infinity at a given boundary point of the domain. Finally we prove a general uniqueness result for large solutions. {\it To cite this article: M. Marcus, L. V\'eron, C. R. Acad. Sci. Paris, Ser.
I XXX (2006).} \end{abstract} \end{frontmatter} \section*{Abridged version} \setcounter {equation}{0} Let $F$ be a nonempty compact subset of $\BBR^N$ with connected complement $F^c$, and let $q>1$. It is well known that there exists a maximal solution $U_{F}$ of \begin{equation}\label{Fq-eq} -\Gd u+u^q=0, \end{equation} in $F^c=\BBR^N\setminus F$. Moreover, $U_{F}=0$ if and only if $C_{2,q'}\left(F\right)=0$, where $q'=q/(q-1)$ and $C_{2,q'}$ denotes the Bessel capacity in dimension $N$ \cite{BP}. If $1<q<q_{c}:=N/(N-2)$, every point has positive capacity and the maximal solution is a large solution \cite{Ve}, that is, it satisfies \begin{equation}\label{expl} \lim_{F^c\ni x\to y}U_{F}(x)=\infty, \end{equation} for every $y\in \partial F^c$, and relation (\ref{expl}) holds uniformly with respect to $y$. Moreover, $U_{F}$ is the unique large solution provided $\partial F^c\subset \partial \overline{F^c}$. In the supercritical case $q\geq q_{c}$ the situation is much more involved, since isolated singularities are removable and there is a large variety of solutions. For $q=2$, $N\geq 3$, Dhersin and Le Gall \cite{DL} obtained, by probabilistic methods, sharp estimates of $U_{F}$ in terms of the capacity $C_{2,2}$. Their estimates yield a necessary and sufficient condition, expressed by a Wiener type criterion, for $U_{F}$ to satisfy (\ref{expl}) at a point $y\in \partial F^c$. Labutin \cite{La} partially extended the results of \cite{DL} to the case $q\geq q_{c}$. More precisely, he proved that $U_{F}$ is a large solution if and only if the Wiener criterion of \cite{DL}, with $C_{2,2}$ replaced by $C_{2,q'}$, holds at {\it every} point of $\partial F^c$; however, he did not obtain the pointwise estimate (\ref{expl}).
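Underlying the notion of large solution in (\ref{expl}) is the classical Keller--Osserman barrier: in the one-dimensional model $u''=u^q$, the explicit profile $u(t)=c\,t^{-2/(q-1)}$ with $c^{q-1}=2(q+1)/(q-1)^2$ solves the equation on $(0,\infty)$ and blows up at the boundary point $t=0$. This barrier is standard background rather than a result of the note; the sketch below verifies it symbolically with sympy (the helper names `barrier` and `residual` are ours).

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def barrier(q):
    """Explicit blow-up profile u(t) = c * t^(-2/(q-1)) for u'' = u^q on (0, oo),
    with c^(q-1) = 2(q+1)/(q-1)^2 (classical Keller-Osserman barrier)."""
    c = (2 * sp.Rational(q + 1) / (q - 1)**2) ** sp.Rational(1, q - 1)
    return c * t ** sp.Rational(-2, q - 1)

def residual(q):
    """u'' - u^q; identically zero when barrier(q) solves the equation."""
    u = barrier(q)
    return sp.simplify(sp.diff(u, t, 2) - u**q)

# The residual vanishes, e.g. in the case q = 2 treated by Dhersin-Le Gall,
# and the profile explodes at the boundary point t = 0.
print(residual(2), residual(3), sp.limit(barrier(3), t, 0, '+'))
```

The exponents match because $-2/(q-1)-2=-2q/(q-1)$, and the coefficient condition on $c$ is exactly $(2/(q-1))\,((q+1)/(q-1))=c^{q-1}$.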
Les estimations de Labutin sont optimales si $q>q_{c}$, mais pas si $q=q_{c}$. Dans cette note nous \'etendons les r\'esultats de \cite{DL} par des m\'ethodes purement analytiques. Si $F$ est un sous-ensemble ferm\'e non vide de $\BBR^N$, $x\in\BBR^N$ et $m\in\BBZ$ nous notons $$T_{m}(x)=\left\{y\in\BBR^N:2^{-m-1}\leq\abs{x-y}\leq 2^{-m}\right\}$$ $$F_{m}(x)=F\cap T_{m}(x)\text { et } F^*_{m}(x)=F\cap \bar B_{2^{-m}}(x).$$ On d\'efinit le {\it potentiel $C_{2,q'}$-capacitaire} $W_{F}$ de $F$ par \begin{equation}\label{potcap} W_{F}(x)=\sum_{-\infty}^\infty 2^\frac{2m}{q-1}C_{{2,q'}}(2^mF_{m}(x)). \end{equation} \noindent {\bf Th\'eor\`eme 1.} {\it Il existe une constante $c=c(N,q)>0$ telle que} \begin{equation}\label{potcap1} cW_{F}(x)\leq U_{F}(x)\leq \frac{1}{c}W_{F}(x)\quad \forall x\in F^c. \end{equation} Pour $q>q_{c}$ cette estimation est la m\^eme que celle de Labutin. Notre d\'emonstration est inspir\'ee de la sienne tout en faisant intervenir des arguments nouveaux qui simplifient notoirement sa d\'emarche. En utilisant la d\'efinition de la capacit\'e de Bessel on d\'emontre alors que la fonction $W_F$ est semi-continue sup\'erieurement dans $\overline {F^c}$. On en d\'eduit \noindent {\bf Th\'eor\`eme 2.} {\it Pour tout point $y\in\partial F^c$, \begin{equation}\label{potcap2} \lim_{F^c\ni x\to y}U_{F}(x)=\infty\Longleftrightarrow W_{F}(y)=\infty. \end{equation} Par suite $U_{F}$ est une grande solution si et seulement si $W_{F}(y)=\infty$ pour tout $y\in\partial F^c$.} Il est facile de v\'erifier que si $W_{F}(y)=\infty$, alors $y$ est un point \'epais de $F$, au sens de la topologie fine $\frak T_{q}$ associ\'ee \`a la capacit\'e $C_{{2,q'}}$.
En utilisant la propri\'et\'e de Kellogg \cite{AH} que v\'erifie la capacit\'e $C_{{2,q'}}$, on en d\'eduit que la solution maximale $U_{F}$ est une {\it presque grande solution} dans le sens suivant: la relation (\ref{expl}) a lieu sauf peut-\^etre sur un ensemble de $\partial F^c$ de capacit\'e $C_{2,q'}$ nulle. Il est classique que l'\'equation $-\Gd u+\abs u^{q-1}u=\mu$ admet une unique solution, not\'ee $u_{\mu}$, pour tout $\mu\in W^{-2,q'}(\BBR^N)$ \cite{BP}. On a alors le r\'esultat suivant \noindent {\bf Th\'eor\`eme 3.} {\it Pour tout sous-ensemble ferm\'e $F\subset\BBR^N$, \begin{equation}\label{potcap3} U_{F}=\sup\{u_{\mu}:\mu\in W^{-2,q'}(\BBR^N),\mu(F^c)=0\}. \end{equation} Par suite $U_{F}$ est $\sigma$-mod\'er\'ee, c'est \`a dire qu'il existe une suite croissante $\{\mu_{n}\}\subset W^{-2,q'}(\BBR^N)$ telle que $\mu_{n}(F^c)=0$ et $u_{\mu_n}\uparrow U_{F}$.} Cet \'enonc\'e est l'analogue dans le cas du probl\`eme elliptique int\'erieur de r\'esultats similaires concernant les probl\`emes elliptiques au bord \cite{MV3} et parabolique \cite{MV4}. Enfin, nous avons le r\'esultat d'unicit\'e suivant o\`u nous d\'esignons par $\tilde E$ la fermeture de $E\subset\BBR^N$ pour la topologie $\frak T_{q}$. \noindent {\bf Th\'eor\`eme 4.} {\it Pour tout ouvert non vide $D\subset\BBR^N$, posons $F=D^c$ et $F_{0}=\left(\tilde D\right)^c$ (c'est \`a dire que $F_{0}$ est l'int\'erieur de $F$ pour la topologie $\frak T_{q}$).
Si $C_{{2,q'}}(F\setminus \tilde F_{0})=0$, alors il existe au plus une grande solution de (\ref{Fq-eq}) dans $D$.} \selectlanguage{english} \section{Introduction} \label{sec1} \setcounter{equation}{0} In this note we study positive solutions of the equation \begin{equation}\label{eq} -\Gd u+u^q=0, \end{equation} in $\BBR^N\setminus F$, $N\geq 3$, where $F$ is a non-empty compact set with $F^c$ connected and $q>1$. More precisely, we shall study the behavior of the maximal solution of this problem, which we denote by $U_F$. The existence of the maximal solution is guaranteed by the Keller-Osserman estimates (see \cite{MV2} for a discussion of large solutions and the references therein). It is known \cite{BP} that, if $C_{2,q'}\left(F\right)=0$ then $U_F=0$. If $u$ is a solution of \eqref{eq} in $D=\BBR^N\setminus F$ and $u$ blows up at every point of $\partial D$ we say that $u$ is a {\em large solution} in $D$. Obviously a large solution exists in $D$ if and only if $U_F$ is a large solution. Our aim is: (a) to provide a necessary and sufficient condition for the blow up of $U_F$ at an arbitrary point $y\in F$ and (b) to obtain a general uniqueness result for large solutions. In the subcritical case, i.e. $1<q<q_c:=N/(N-2)$, these problems are well understood.
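The role of the critical exponent can be seen from an explicit radial computation (a standard verification, included here only as an illustration; it is not taken from the works cited above). Seeking a strongly singular solution of the form $u(x)=c\abs{x}^{-\alpha}$ in $\BBR^N\setminus\{0\}$:

```latex
% Radial Laplacian: for r=\abs{x}, \Gd(r^{-\alpha})=\alpha(\alpha+2-N)\,r^{-\alpha-2}.
% Matching powers in -\Gd u+u^q=0 forces \alpha q=\alpha+2, i.e. \alpha=2/(q-1), and
\[
u_s(x)=c_q\,\abs{x}^{-\frac{2}{q-1}},\qquad
c_q^{\,q-1}=\frac{2}{q-1}\left(\frac{2}{q-1}+2-N\right),
\]
% which requires \frac{2}{q-1}+2-N>0, i.e. q<q_c=N/(N-2). Thus the strong point
% singularity exists exactly in the subcritical range, while for q\geq q_c no such
% solution exists, consistent with the removability of point singularities.
```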
In this case $C_{2,q'}\left(F\right)>0$ for any non-empty set $F$ and it is classical that positive solutions may have isolated point singularities of two types: weak and strong. This easily implies that the maximal solution $U_F$ is always a large solution in $D$. In addition it is proved in \cite{Ve} that the large solution is unique if one assumes $\partial F^c\subset \partial \overline{F^c}^c$. In the supercritical case, i.e. $q\geq q_{c}$, the situation is much more complicated. In this case point singularities are removable and there exists a large variety of singular solutions. Sharp estimates for $U_F$ were obtained by Dhersin and Le Gall \cite{DL} in the case $q=2$, $N\geq 3$. These estimates were expressed in terms of the Bessel capacity $C_{2,2}$ and were used to provide a Wiener type criterion for the pointwise blow up of $U_F$, i.e., for $y\in F$, \begin{equation}\label{pt blowup} \lim_{F^c\ni x\to y} U_F(x)= \infty \iff \text{the Wiener type criterion is satisfied at } y. \end{equation} These results were obtained by probabilistic tools; hence the restriction to $q=2$. Labutin \cite{La} succeeded in partially extending the results of \cite{DL} to $q\geq q_c$. Specifically, he proved that $U_F$ is a large solution if and only if the Wiener criterion of \cite{DL}, with $C_{2,2}$ replaced by $C_{2,q'}$, is satisfied \textit{at every point of $F$}. The pointwise blow up was not established. Labutin's result was obtained by analytic techniques. As in \cite{DL}, the proof is based on upper and lower estimates for $U_F$, in terms of the capacity $C_{2,q'}$. Labutin's estimates are sharp for $q>q_c$ but not for $q= q_c$. Conditions for uniqueness of large solutions, for arbitrary $q>1$, can be found in \cite{Ve} and \cite{MV2}. In the present paper we obtain a full extension of the results of \cite{DL} to $q \geq q_c$, $N\geq 3$.
Further we establish the following rather surprising fact: for any non-empty closed set $F\subsetneq \BBR^N$, the maximal solution $U_F$ is an ``almost large'' solution in $D$ in the following sense: \eqref{pt blowup} holds at all points of $F$ with the possible exception of a set of $C_{2,q'}$-capacity zero. (Of course if $y$ is an interior point of $F$, \eqref{pt blowup} holds vacuously.) Finally we provide a capacitary sufficient condition for the uniqueness of large solutions. \section{Statement of main results} Throughout the remainder of the note we assume that $q\geq q_{c}$. We start with some notation. For any set $A\subset \BBR^N$ we denote by $\gr_A$ the distance function, $\gr_A(x)=\operatorname{dist}(x,A)$ for every $x\in \BBR^N.$ If $F$ is a closed set and $x\in \BBR^N$ we denote \begin{equation}\label{Fm} \begin{aligned} &T_m(x)=\set{y\in \BBR^N:2^{-(m+1)}\leq \abs{y-x}\leq 2^{-m}},\\ &F_m(x)=F\cap T_m(x), \quad F^*_m(x)=F\cap \bar B_{2^{-m}}(x).\end{aligned} \end{equation} As usual $C_{\alpha,p}$ denotes the Bessel capacity in $\BBR^N$. Note that if $\alpha=2$ and $p=q'=q/(q-1)$ then, for $q\geq N/(N-2)$, $\alpha p\leq N$. Put \begin{equation}\label{WF0} W_F(x)= \sum_{-\infty}^{\infty} 2^{\frac{2m}{q-1}}C_{2,q'}\left(2^{m}F_m(x)\right). \end{equation} $W_F$ is called the {\em $C_{2,q'}$-capacitary potential} of $F$. Observe that $2^{m}F^*_m(x)\subset B_1(x)$ and that, for every $x\in F^c$, there exists a minimal integer $M(x)$ such that $F_m(x)=\emptyset$ for $M(x)<m$.
Therefore \begin{equation}\label{WFa} W_F(x)= \sum_{-\infty}^{M(x)} 2^{\frac{2m}{q-1}}C_{2,q'}\left(2^{m}F_m(x)\right)<\infty \quad \forall x\in F^c. \end{equation} It is known that there exists a constant $C$ depending only on $q,N$ such that \begin{equation}\label{WF2} W_F(x)\leq W^*_F(x):=\sum_{m(x)}^{\infty} 2^{\frac{2m}{q-1}}C_{2,q'}\left(2^{m}F^{*}_m(x)\right)\leq C W_F(x) \end{equation} for every $x\in F^c$, see e.g. \cite{MV3}. In the following results $F$ denotes a proper closed subset of $\BBR^N$. The first theorem describes the capacitary estimates for the maximal solution. \bth{capest} The maximal solution $U_F$ satisfies the inequalities \begin{equation}\label{capest} \rec{c}W_F(x)\leq U_F(x)\leq cW_F(x)\quad \forall x\in F^c. \end{equation} \end{sub} For $q>q_c$ these estimates are equivalent to those obtained by Labutin \cite{La}. Our proof is inspired by the proof of \cite{La}, but employs some new arguments which lead to a sharp estimate in the border case $q=q_c$ as well. Using the previous theorem we establish: \bth{large1} For every point $y\in F$, \begin{equation}\label{Wiener} \lim_{F^c\ni x\to y} U_F(x)= \infty \iff W_F(y)=\infty. \end{equation} Consequently\xspace $U_F$ is a large solution in $F^c$ if and only if $W_F(y)=\infty$ for every $ y\in F.$ \end{sub} \bth{q-ae} For any closed set $F\subsetneq\BBR^N$, the maximal solution $U_F$ is an almost large solution in $D=F^c$ (see the definition of this term in the introduction). \end{sub} It is known \cite{BP} that if $\mu\in W^{-2,q}_+(\BBR^N)$ there exists a unique solution of the equation $ -\Gd u+u^q=\mu \txt{in} \BBR^N. $ This solution will be denoted by $u_\mu$. \bth{U=V} For any closed set $F\subsetneq\BBR^N$, \begin{equation}\label{mod-uF} U_F=\sup\set{u_\mu:\mu\in W^{-2,q}_+(\BBR^N),\; \mu(F^c)=0}.
\end{equation} Thus $U_F$ is $\sigma$-moderate\xspace, i.e., there exists an increasing sequence\xspace $\set{\mu_n}\subset W^{-2,q}_+(\BBR^N)$ such that $\mu_n(F^c)=0$ and $u_{\mu_n}\uparrow U_F$. \end{sub} For the next result we need the concept of the $C_{2,q'}$-fine topology (in $\BBR^N$), which we shall denote by $\frak T_{q}$. For its definition and basic properties see \cite[Ch. 6]{AH}. The closure of a set $E$ in the topology $\frak T_{q}$ will be denoted by $\tilde E$. The following uniqueness result holds. \bth{unique} Let $D\subset \BBR^N$ be a non-empty, bounded open set. Put $F=D^c$ and $F_0=(\tilde D)^c$ so that $F_0$ is the $\frak T_{q}$-interior of $F$. If $C_{2,q'}\left(F\setminus\tilde F_0\right)=0$ then there exists at most one large solution in $D$. \end{sub} \section{Sketch of proofs.} \noindent{\em On the proof of \rth{capest}.}\hskip 2mm The proof of this theorem is an adaptation of the proof of the capacitary estimates for boundary value problems in \cite{MV3}. A central element of the proof in that paper is the mapping $\BBP:W^{-2/q,q}_+(\partial\Omega)\to L^q(\Omega;\gr_{\partial\Omega})$ given by $\BBP(\mu)=\int_{\partial\Omega}P(x,y)\,d\mu(y)$ where $P$ is the Poisson kernel in $\Omega$. In the proof of the present result the same role is played by the Green operator acting on bounded measures in $\BBR^N$.\\[2mm] {\em On the proof of \rth{large1}.}\hskip 2mm Denote \begin{equation}\label{am(y)} a_m(x)=C_{2,q'}\left(2^{m}F_m(x)\right), \quad a^*_m(x)=C_{2,q'}\left(2^{m}F^*_m(x)\right). \end{equation} First we show that $W_F(y)=\infty$ implies that $\lim_{D\ni x\to y}U_F(x)=\infty$. Let $x\in F^c$ and let $\gl$ be an integer such that $2^{-\gl}\leq |x-y|\leq 2^{-\gl+1}$. Obviously $\gl\leq M(x)$.
For $m\leq \gl$: $$a^*_m(y)\leq C_{2,q'}\left(2\left(2^{m-1}F^*_{m-1}(x)\right)\right)\leq c\,a^*_{m-1}(x).$$ Therefore $$\sum_1^{\gl} 2^{\frac{2m}{q-1}}a^*_m(y)\leq c\sum_1^{\gl} 2^{\frac{2m}{q-1}}a^*_{m-1}(x) \leq c\sum_0^{\gl}2^{\frac{2m}{q-1}}a^*_{m}(x)\leq W^*_F(x).$$ As $x\to y$, $\gl \to\infty$ and the left hand side tends to $\infty$. The reverse implication is a consequence of the following property of $W_F$. \blemma{lcs} The function $y\mapsto W_F(y)$ is upper semi-continuous on $\overline{F^c}$. In addition, if $W_F(y)<\infty$ then $\liminf_{x\to y}W_F(x)<\infty$. \end{sub} \noindent For proving this result, we use the fact that, for any $y\in\overline{F^c}$, and any $m\in\BBZ$, $$C_{2,q'}\left(2^{m}F_m(y)\right)= \inf\left\{\norm{\gz}^{q'}_{W^{2,q'}}:\gz\in C^\infty_0(\BBR^N),\ \gz\geq 0,\ \gz\geq 1\text { in a neighborhood of } 2^mF_m(y)\right\}. $$ Thus, if $\gz\geq 1$ in a neighborhood of $2^mF_m(y)$ then, for $\abs {x-y}$ small enough, $\gz\geq 1$ in a neighborhood of $2^mF_m(x)$. This implies $$\lim_{\ge\to 0}\sup_{x\in\overline{F^c}\cap B_\ge(y)} C_{2,q'}\left(2^{m}F_m(x)\right)=\limsup_{\overline{F^c}\ni x\to y}C_{2,q'}\left(2^{m}F_m(x)\right)\leq C_{2,q'}\left(2^{m}F_m(y)\right), $$ which yields the first assertion. The second assertion is proved by an argument involving the quasi-additivity of capacity.\\[2mm] {\em On the proof of \rth{q-ae}.}\hskip 2mm It is not difficult to verify that, if $x$ is a thick point of $F$ in the topology $\frak T_{q}$ (or $\frak T_{q}$-thick point), then $W_F(x)=\infty$. (For the definition of a thick point in a fine topology and the properties stated below see \cite[Ch. 6]{AH}.) The set of $\frak T_{q}$-thick points of $F$ is denoted by $b_q(F)$ and it is known that, if $F$ is $\frak T_{q}$-closed then $b_q(F)\subset F$ and $C_{2,q'}\left(F\setminus b_q(F)\right)=0$ (this is called the Kellogg property). Of course any set closed in the Euclidean topology is $\frak T_{q}$-closed.
Therefore, by \rth{capest}, $U_F$ blows up $C_{2,q'}$-a.e. on $\partial D$.\\[2mm] {\em On the proof of \rth{U=V}.}\hskip 2mm Let us denote the right hand side of \eqref{mod-uF} by $V_F$. Obviously $V_F\leq U_F$ and the proof of \rth{capest} actually shows that $\rec{c}W_F(x)\leq V_F(x).$ Therefore $U_F\leq C V_F$ where $C$ is a constant depending only on $N,q$. By an argument introduced in \cite{MV1} this implies that $U_F=V_F$.\\[2mm] {\em On the proof of \rth{unique}.}\hskip 2mm The proof is based on the following: \blemma{UF=VF0} If $C_{2,q'}\left(F\setminus\tilde F_0\right)=0$ then $$U_F=\sup\set{u_\mu:\mu\in W^{-2,q}_+(\BBR^N),\; \operatorname{supp}\mu\subset F_0}.$$ \end{sub} The proof of the lemma involves subtle properties of the $C_{2,q'}$-fine topology. \par The lemma implies that for every $x\in D$ there exists $\mu\in W^{-2,q}_+(\BBR^N)$ such that $\operatorname{supp}\mu$ is a compact subset of $F_0$ and $U_F(x)\leq Cu_\mu(x)$. Suppose that $u$ is a large solution in $D$. Since $u_\mu$ is bounded on $\partial D$ it follows that $u_\mu<u$. Thus $U_F\leq Cu$. By the argument of \cite{MV1} mentioned before, this implies that $u=U_F$. \vskip 2mm \noindent{\bf Acknowledgment.} Both authors were partially sponsored by an EC grant through the RTN Program ``Front-Singularities'', HPRN-CT-2002-00274, and by the French-Israeli cooperation program through grant No. 3-1352. The first author (MM) also wishes to acknowledge the support of the Israel Science Foundation through grant No. 145-05. \begin{thebibliography}{99} \bibitem{AH} Adams D. R. and Hedberg L. I., Function spaces and potential theory, Grundlehren Math. Wissen. {\bf 314}, Springer (1996). \bibitem{BP} Baras P. and Pierre M., {\em Singularit\'es \'eliminables pour des \'equations semi-lin\'eaires}, Ann. Inst. Fourier (Grenoble) {\bf 34}, 185-206 (1984). \bibitem{DL} Dhersin J.-S. and Le Gall J.-F., {\em Wiener's test for super-Brownian motion and the Brownian snake}, Probab.
Theory Related Fields {\bf 108}, 103-129 (1997). \bibitem{La} Labutin D. A., {\em Wiener regularity for large solutions of nonlinear equations}, Ark. Mat. {\bf 41}, 307-339 (2003). \bibitem{MV1} Marcus M. and V\'eron L., {\em The boundary trace of positive solutions of semilinear elliptic equations: the subcritical case}, Arch. Rat. Mech. Anal. {\bf 144}, 201-231 (1998). \bibitem{MV2} Marcus M. and V\'eron L., {\em Existence and uniqueness results for large solutions of general nonlinear elliptic equations. Dedicated to Philippe B\'enilan}, J. Evol. Equ. {\bf 3}, 637-652 (2003). \bibitem{MV3} Marcus M. and V\'{e}ron L., {\em Capacitary estimates of positive solutions of semilinear elliptic equations with absorption}, J. European Math. Soc. {\bf 6}, 483-527 (2004). \bibitem{MV4} Marcus M. and V\'{e}ron L., {\em Capacitary representation of positive solutions of semilinear parabolic equations}, C. R. Acad. Sci. Paris Ser. I {\bf 342}, 655-660 (2006). \bibitem{Ve} V\'{e}ron L., {\em Generalized boundary values problems for nonlinear elliptic equations}, Elec. J. Diff. Equ., Conf. {\bf 06}, 313-342 (2001). \end{thebibliography} \end{document}
\begin{document} \title{Evolution without evolution, and without ambiguities} \begin{abstract} \noindent In quantum theory it is possible to explain time, and dynamics, in terms of {\sl entanglement}. This is the {\sl timeless} approach to time, which assumes that the universe is in a {\sl stationary} state, where two non-interacting subsystems, the \qq{clock} and the \qq{rest}, are entangled. As a consequence, by choosing a suitable observable of the clock, the {\sl relative state} of the rest of the universe evolves unitarily with respect to the variable labelling the clock observable's eigenstates, which is then interpreted as time. This model for an \qq{evolution without evolution} (Page and Wootters, 1983), albeit elegant, has never been developed further, because it was criticised for generating severe ambiguities in the dynamics of the rest of the universe. In this paper we show that there are no such ambiguities; we also update the model, making it amenable to possible new applications. \end{abstract} \section{Introduction} All dynamical laws are affected by a deep problem, \cite{BAR1}. They are formulated in terms of an extrinsic parameter time, which is not itself an element of the dynamics and hence is left unexplained. One powerful way of addressing this problem is the \qq{timeless approach} to time. Its logic is elegant: both dynamics and time should emerge from more fundamental elements, chosen so that the dynamics satisfies certain criteria \cite{MITHWE}. For example, when applied to Newtonian physics \cite{BAR1, BAR2}, this approach leads to relational dynamics, where one selects a system as a reference clock, with a particular clock-variable, so as to ensure that Newton's laws hold when that observable is regarded as time (the so-called \qq{ephemeris time}). This picture, however, still requires motion to be assumed as primitive, thus leaving the appearance of dynamics itself unexplained.
The same problem as in classical physics arises in quantum theory: time appears as an extrinsic parameter in the equations of motion. In quantum theory there is also a deeper problem - a major obstacle for quantum gravity \cite{ZEH}: time is not a quantum observable, and yet quantum observables depend on it. What precisely is its status, and how can it be reduced to more fundamental elements? Once again, the timeless approach provides an elegant way out: the Page and Wootters (PW) model, \cite{PAWO}. By analogy with classical physics, that approach aims at selecting a clock and an observable of the clock, so that the Schr\"odinger (or Heisenberg) equation holds on the rest of the universe, with respect to the variable $t$ labelling the eigenvalues of that observable. But since observables in quantum theory are operators, the implementation of this approach turns out to be rather different from its classical counterpart - with some advantages, but also, as we are about to recall, various problems. The advantage is that, unlike in the classical scenario, in quantum theory motion does not have to be assumed as primitive: one assumes that the whole universe is in a {\sl stationary state} - i.e., it is an eigenstate of its Hamiltonian. Time and dynamics then emerge in a subsystem of the universe that is entangled with some suitably chosen clock, endowed with an appropriate observable that we shall call {\sl clock observable}. It is important to notice that this is {\sl not} a time operator, but simply an observable (such as, say, a component of the angular momentum) of the system chosen as a clock. Specifically, by supposing that the Hamiltonian is sufficiently local, it is always possible to regard the universe as consisting of two {\sl non-interacting} subsystems, which we shall call \qq{the clock} and \qq{the rest}. 
A clock-observable $T$, conjugate to the clock's Hamiltonian, defines a basis of eigenvectors $\ket{t}\;:\;T\ket{t}=t\ket{t}$ (the hands of the clock), where $t$ is a real-valued label. Since $T$ does not commute with the total Hamiltonian of the universe, the overall static state of the universe must be a {\sl superposition} (or mixture, \cite{VED}) of different eigenstates of the clock observable $T$: as a result, a Schr\"odinger equation can be written for the relative state (in the Everett sense \cite{EVE}) of the rest of the universe (relative to $t$) whose parameter time is nothing but the label $t$ of the states of the clock. (An equivalent construction can be carried out in the Heisenberg picture \cite{PAWO}.) As we said, nothing in this construction relies on defining a time operator. Thus, quantum theory provides the means to solve the problem of time via its most profound properties: having pairs of non-commuting observables (in this case, the Hamiltonian of the universe, and the clock observable $T$); and permitting entanglement between subsystems of the universe. Unlike in classical dynamics, there is no need to assume any underlying motion: both time {\sl and} motion are explained in terms of motionless entanglement contained in the state of the universe. This elegant model leading to an \qq{evolution without evolution} \cite{PAWO} has promising features, such as its compatibility with quantum gravity \cite{XXX} and its operational nature, which bodes well for experimental techniques involving quantum clocks \cite{NIST}. Yet, it has never been developed beyond the toy-model stage. This is because it is affected by a number of problems, which, though superficially technical, have been regarded as invalidating the whole approach as a contribution to fundamental physics. For example, Kucha\v{r} pointed out problems about the possibility of constructing two-time propagators in this model, \cite{KU} -- these have been thoroughly addressed in \cite{LOY}.
There are also conceptual problems, because the model seems to have serious ambiguities that do not arise in relational classical dynamics. Specifically, as pointed out by Albrecht and Iglesias, there seems to be a \qq{clock ambiguity}: there are several non-equivalent choices of the clock \cite{ALB}, which appear to produce an ambiguity in the laws of physics for the rest of the universe: different choices of the clock lead to different Hamiltonians, each corresponding to radically different dynamics in the rest of the universe. So, it would seem that the logic of the timeless approach cannot be applied as directly as in classical physics, because it does not lead to a unique Schr\"odinger equation for the rest of the universe. In this paper we show that the clock ambiguities in fact do not arise. To see why they do not arise, one must appeal to the necessary properties for a subsystem to be a good clock -- in particular, that it must be weakly interacting with the rest. We also update the PW model, clarifying what constraints the state of the universe must satisfy in order for the model to be realistic, and how it accommodates an unambiguous notion of the flow of time. As a result of this update, the model becomes applicable to a number of open problems, including potential new applications. \section{Evolution without evolution} We shall now review the PW approach, by expressing explicitly what conditions must hold for it to be applicable -- namely: {\bf Timelessness}. The first condition is that the Universe is \qq{timeless}, i.e., it is in an eigenstate $\ket{\psi}\in \cal{H}$ of its Hamiltonian $H$, which can always be chosen so that \begin{equation} H\ket{\psi}=0\;.\label{tot} \end{equation} This constraint is compatible with existing approaches to quantum gravity - e.g. the Wheeler-DeWitt equation in a closed universe \cite{WEDE}, but we regard it as the first of a set of sufficient conditions for a timeless approach to time in quantum theory.
Note also that this assumption is compatible with observation, as argued in \cite{PAWO}, because it is impossible empirically to distinguish the situation where \eqref{tot} holds from that where the universe's state is not stationary, because the phases appearing in the state $\ket{\psi}$ are unobservable. {\bf Good clocks are possible}. The second sufficient condition is that the Hamiltonian includes at least one good clock -- by which we mean a system with a large set of distinguishable states, which interacts only weakly with the rest of the universe; in the ideal case, it should not interact at all.\footnote{That a perfect clock must not interact with anything else is not in contradiction with the fact that for actual clocks synchronisation must occur - indeed the latter, since it requires interactions, is always carried out when the clock is {\sl not} being used as a clock.} So, the Hamiltonian must be such that there exists a tensor-product structure (TPS) $\cal{H}\sim \cal{H}_C\otimes \cal{H}_R$, where the first subsystem represents the clock and the second the rest of the universe, \cite{PER,PAWO}, such that this crucial {\sl non-interacting property} holds: $$H=H_C\otimes{\mathfrak I}+{\mathfrak{I}}\otimes H_R$$ where $\mathfrak{I}$ denotes the unit operator on each subspace. In classical physics, the \qq{measurement of time} is always performed relative to some dynamical variable (e.g. a pointer on a clock dial). In quantum theory, a similar logic is valid \cite{PER}. For the ideal clock, the observable to choose as indicator is the {\sl conjugate} observable $T_C$ to the clock Hamiltonian, $[{H}_C,T_C]=i$, with $T_C\ket{t}=t\ket{t}$, where the values $t$ form a continuum and represent the values to be read on the hands of the clock. Once more, note that $T_C$ is {\sl not} a time-operator. It is an observable of the clock subsystem.
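A minimal finite-dimensional illustration of such a clock observable, and of the shift generated by $H_C$, may help (our own toy construction, not taken from \cite{PAWO}):

```latex
% On an n-dimensional clock space, let H_C\ket{\epsilon_m}=m\,\ket{\epsilon_m}, m=0,\dots,n-1,
% and define the \qq{hands} as the discrete Fourier transform of the energy basis:
\[
\ket{t_k}=\frac{1}{\sqrt n}\sum_{m=0}^{n-1}e^{-2\pi i mk/n}\,\ket{\epsilon_m},
\qquad T_C=\sum_{k=0}^{n-1}t_k\,\ket{t_k}\bra{t_k},\quad t_k=\frac{2\pi k}{n}.
\]
% A direct check gives e^{-i(2\pi/n)H_C}\ket{t_k}=\ket{t_{k+1\ (\mathrm{mod}\ n)}}:
% the clock Hamiltonian steps the hands forward, and for large n this pair
% approximates the ideal conjugate pair acting on a continuum of clock readings.
```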
That clocks are possible in reality means that the behaviour of the ideal clock can be approximated to an arbitrarily high accuracy: as pointed out in \cite{DEU}, the ideal clock can be approximated by systems with an observable $T$ that has a discrete spectrum of $n$ values $t_n$, where there is no limit to how well the sequence of $t_n$ can approximate the real line. In this paper we shall confine our attention to the ideal case, for simplicity of exposition. {\bf Entanglement.} The third sufficient condition for the PW construction to hold is that the clock and the rest of the universe are {\sl entangled}: as will become clear in a moment, this is the feature that allows the appearance of dynamical evolution on the rest to be recovered out of no evolution at all at the level of the universe. Formally, this means that the state of the universe $\ket{\psi}$ must have this form: \begin{equation} \ket{\psi}=\sum_t\alpha_t\ket{t}\ket{\phi_t}\;\label{dec} \end{equation} for some appropriate $\ket{\phi_t}$ defined on the rest, with two or more of the $\alpha_t$ being different from zero. In practice, as we shall see, for this to produce a realistic dynamics, $\alpha_t\neq 0$ for a sufficiently large number of $t$'s. This is because all that happens in the rest is given once and for all in the state $\ket{\psi}$. By taking one of the clock eigenstates $\ket{0}$ as the initial time, whereby $\ket{t}=\exp{(-i H_C t)}\ket{0}$, the story of the rest of the universe is a sequence of events encoded in the various $\ket{\phi_0}, \ket{\phi_1}, ...,\ket{\phi_t}$. Note that the rest and the clock must {\sl not} be in an eigenstate of their local Hamiltonians, otherwise the dynamics is trivial. In the basis $\ket{\epsilon_n}\ket{E_n}$ defined by the local Hamiltonians $H_C$ and $H_R$, the universe state is therefore $\ket{\psi}=\sum_{m,n}\psi_{m,n} \ket{\epsilon_m}\ket{E_n}$, where $\psi_{m,n}=0$ unless $\epsilon_m+E_n=0$, so that $H\ket{\psi}=0$. An elementary example will clarify this point.
Consider a universe made of two qubits only, with Hamiltonian $H=\sigma_z\otimes{\mathfrak I}+{\mathfrak{I}}\otimes \sigma_z$, where $\sigma_z$ represents the $z$-component of the vector of Pauli operators $\left(\sigma_x, \sigma_y, \sigma_z\right)$, $[\sigma_i, \sigma_j]=2i\epsilon_{ijk}\sigma_k$ and $\sigma_i\sigma_j+\sigma_j\sigma_i=2\delta_{ij}{\mathfrak I}$. The clock observable can be $\sigma_x$, so that in the clock basis $\ket{+},\ket{-}$ the state of the universe can be written as $\ket{\psi}=\frac{1}{\sqrt{2}}\left (\ket{+-}+\ket{-+}\right)$. As required, the Hamiltonian of the clock generates the shift on the two clock \qq{hands}: $\exp(-i\sigma_z\frac{\pi}{2})\ket{+}=-i\ket{-}$, i.e., $\ket{-}$ up to a global phase. In the energy basis (the basis of eigenvectors of $\sigma_z$) the state of the universe is $\ket{\psi}=\frac{1}{\sqrt{2}}\left (\ket{01}+\ket{10}\right)$. Therefore for this construction to be compatible with a realistic dynamics there must be a high degree of degeneracy in the Hamiltonian $0$-eigenspace. If the above conditions are satisfied, the evolution without evolution can be reconstructed as follows. The state of the rest of the universe when the clock reads $t$ is the Everett relative state, \cite{EVE}, defined as: \begin{equation} \rho_t=\frac{{\rm Tr_c}\{P_t^{(c)}\rho\}}{{\rm Tr}\{P_t^{(c)}\rho\}}=\ket{\phi_t}\bra{\phi_t}. \end{equation} Note that the projector in the definition of relative state has nothing to do with measurement and does not require one to be performed on the clock: rather, the relative states are a 1-parameter family $\ket{\phi_t}$ of states, labelled by $t$, each describing the state of the rest with respect to the clock \qq{given that} the latter is in the state $\ket{t}$.
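Explicitly, writing $\ket{\phi_t}=\bra{t}\psi\rangle$ for the partial inner product on the clock factor, the dynamics of the relative state follows from a two-line computation (a sketch, using only the conventions already introduced):

```latex
% Since \ket{t}=e^{-iH_Ct}\ket{0}, one has \partial_t\bra{t}=i\bra{t}H_C, hence
\[
i\frac{\partial}{\partial t}\ket{\phi_t}
=-\bra{t}\left(H_C\otimes{\mathfrak I}\right)\ket{\psi}
=\bra{t}\left({\mathfrak I}\otimes H_R\right)\ket{\psi}
=H_R\ket{\phi_t},
\]
% where the middle equality uses the constraint (H_C\otimes{\mathfrak I}+{\mathfrak I}\otimes H_R)\ket{\psi}=0,
% i.e. \eqref{tot} with the non-interacting form of H. The norm \langle\phi_t|\phi_t\rangle
% is then t-independent, and \rho_t=\ket{\phi_t}\bra{\phi_t} obeys \partial_t\rho_t=i[\rho_t,H_R].
```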
By using the constraint $\eqref{tot}$, the special, {\sl non-interacting} form of $H$, and the fact that $[H_C,T_C]=i$, one obtains that the relative state of the rest evolves according to the Schr\"odinger equation with respect to the parameter $t$: \begin{equation} \frac{\partial \rho_t}{\partial t}= i[\rho_t, H_R]\;.\label{SC} \end{equation} Thus, the logic that \qq{time can be said to exist if there is a description of the physical world such that the Schr\"odinger (or Heisenberg) equation holds on a subsystem of the universe} seems to be applicable to quantum theory. The parameter $t$ is to be interpreted as time, and the evolution of the rest of the universe has been recovered out of no evolution at all. Assuming that the eigenstates of the clock have the form $\ket{t}=\exp(-iH_Ct)\ket{0}$ may seem too strong a constraint: together with the fact that the clock and the rest are entangled, that constraint directly implies that the evolution on the rest has to have the same exponential form leading to the Schr\"odinger equation. However, the main point of the PW approach is to show that there exists at least one such choice. This is a rather remarkable property of unitary quantum theory, as it implies that it is consistent with there being the appearance of dynamics in a subpart of the universe, even when the whole universe is at rest. Note also that this construction is compatible with a time-dependent Hamiltonian arising on a {\sl subsystem} of the rest, just like in ordinary quantum mechanics. The time-dependent Hamiltonian for the subsystem only is an approximate description, generated by the interactions (encoded in the time-independent Hamiltonian $H_R$) between the subsystem and the environment, in the approximation where the environment can be treated semi-classically (see, e.g. \cite{JAB}). \section{There is no ambiguity} A problem seems to arise in the PW logic.
Quantum theory provides infinitely many inequivalent ways of partitioning the total Hilbert space of the universe into a tensor-product structure (TPS); as a consequence, there would seem to be several choices of the clock by which unitary evolution can arise on the rest of the universe. If true, this would mean that given the same overall state $\ket{\psi}$ describing the universe, the PW approach leads to completely different dynamics on the rest of the universe. This is the so-called \qq{clock ambiguity}, \cite{ALB}. We are about to show that this ambiguity does not in fact arise: having fixed the total Hamiltonian and the overall state $\ket{\psi}$ of the universe, if there is \textit{one} tensor-product structure - i.e., one partition of the universe into a good clock and the rest - leading to a unitary evolution generated by a {\sl time-independent} Hamiltonian for the relative state, then it must be unique. The crucial property will be that the clock is not just any subsystem of the universe: in the ideal case, it must be a {\sl non-interacting} one. Let us first summarise the clock ambiguity problem. By choosing a suitable orthonormal basis $|{k}\rangle$ in the overall Hilbert space ${\cal H}$, one can write: $$\ket{\psi} = \sum_k \alpha_k |{k}\rangle\;,$$ where each $\ket{k}$ corresponds to a product state $\ket{t}_C\ket{\phi_t}_R$ in a given tensor-product structure ${\cal H}\sim {\cal H_C}\otimes {\cal H_R}$. The clock ambiguity is expressed as follows: consider a different state of the universe, such as $$|\tilde {\psi}\rangle = \sum_k \beta_k |{k}\rangle\;.$$ There is of course a unitary operator $W$ such that $|\tilde {\psi}\rangle = W|{\psi}\rangle$. Hence \begin{equation}|{\psi}\rangle =\sum_k \beta_k W^{\dagger}|{k}\rangle =\sum_k \beta_k |\tilde{k}\rangle\label{AMB}\end{equation} where we have defined $\ket{\tilde k}=W^{\dagger}|{k}\rangle$.
Now, it is possible to choose a {\sl different} bi-partite tensor-product structure whereby $|\tilde{k}\rangle = |t\rangle_{\tilde C}|\phi_t\rangle_{\tilde R}$. The clock ambiguity is that there are countless such choices, and each would seem to give rise to a very different description of the evolution of the rest. In one, the rest would appear to evolve through the sequence of relative states $\ket{\phi_0}_R, \ket{\phi_1}_R, \dots, \ket{\phi_t}_R$; in the other, it would go through the sequence of {\sl different} relative states $\ket{\phi_0}_{\tilde R}, \ket{\phi_1}_{\tilde R}, \dots, \ket{\phi_t}_{\tilde R}$. In fact, the clock ambiguity does not arise, because the PW model has additional constraints. In short, a clock is a special subsystem of the universe, which, in the ideal case, must not interact with the rest. So, let us assume that there exists a tensor-product structure $\cal{H}\sim \cal{H}_C\otimes \cal{H}_R$ where the clock and the rest are non-interacting: $H=H_C\otimes{\mathfrak I}+{\mathfrak{I}}\otimes H_R$ -- whereby, applying the PW argument, the relative state $\ket{\phi_t}_R$ of the rest evolves according to a unitary evolution generated by $\exp{(-iH_Rt)}$. Formally, a tensor-product structure is a unitary mapping $U$ whose elements $U_{a,b}^{k}$ have the property that, for any state $\ket{k}\in{\cal H}$ and some basis states $\ket{a}_C\ket{b}_R$: $$ |{k}\rangle = \sum_{a,b}U_{a,b}^{k}|a\rangle_C|b\rangle_R.$$ Two tensor-product structures are {\sl equivalent} if and only if their elements $U_{i,j}^{k}$, $\tilde U_{a,b}^{k}$ are related by {\sl local} unitaries $P$, $Q$: $$ U_{i,j}^{k}= \sum_{a,b}P_{i}^{a}Q_{j}^{b}\tilde U_{a,b}^{k}\;.$$ Hence, the case where the new TPS is equivalent to the original one corresponds to $W=P\otimes Q$ in \eqref{AMB}, i.e., to choosing a different clock observable $P^{\dagger}T_CP$ from the optimal one $T_C$ (the conjugate observable to $H_C$).
Therefore this case need not concern us any further, as it simply consists of choosing a poorer clock. The case where the new TPS is not equivalent requires a little more explanation. In this case, the unitary $W$ in equation \eqref{AMB} has the form $W=\exp\{-i\left(W_C+W_R+W_{CR}\right)\}$, for some Hermitian operators $W_C$, $W_R$, $W_{CR}$, where $W_{CR}$ acts as an interaction term between the two subsystems $C$ and $R$ of the original TPS. For two qubits, the most general form is: $$W_{CR}=\sum_{\alpha,\beta\in\{x,y,z\}}w_{\alpha,\beta}\;\sigma_{\alpha}\otimes\sigma_{\beta}\;$$ for real coefficients $w_{\alpha, \beta}$. The cases where $[H,W]=0$ or $[H,W_{CR}]=0$ also need not concern us any further, because in both cases $W$ would have a trivial, local action on $\ket{\psi}$. The remaining case can be addressed as follows. $H$ is the sum of two non-interacting terms for $C$ and $R$ in the tensor-product structure defined by $U_{i,j}^{k}$. Therefore, in {\sl any} tensor-product structure $\tilde U_{a,b}^{k}$ obtained via $W$ acting on the TPS defined by $U_{i,j}^{k}$, $H$ will have an interaction term between the new clock $\tilde C$ and the new rest $\tilde R$: \begin{equation} H=H_{\tilde C}\otimes{\mathfrak{I}}+{\mathfrak{I}}\otimes H_{\tilde R}+V_{\tilde C}\otimes V_{\tilde R}\;, \end{equation} because the transformation to the new TPS is generated by a non-local unitary transformation. As a consequence, in the new, non-equivalent, tensor-product structure, the evolution of the relative state as a function of the labels of the eigenstates of the clock observable $T_{\tilde C}$ will \textit{not} be a unitary evolution generated by a \textit{time-independent} Hamiltonian.
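The claim that a non-local change of TPS necessarily turns the non-interacting $H$ into an interacting one can be illustrated numerically for two qubits. The sketch below (numpy assumed; the choice of $W$ and of the rotation angles is illustrative, not prescribed by the text) expands the transformed Hamiltonian in the Pauli basis and inspects the $\sigma_\alpha\otimes\sigma_\beta$ coefficients $w_{\alpha,\beta}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
paulis = [sx, sy, sz]

H = np.kron(sz, I2) + np.kron(I2, sz)   # non-interacting clock + rest

def interaction_weights(Hp):
    """Coefficients w_ab of sigma_a (x) sigma_b in the Pauli expansion of Hp."""
    return np.array([[np.trace(np.kron(a, b) @ Hp).real / 4
                      for b in paulis] for a in paulis])

assert np.allclose(interaction_weights(H), 0)   # no C-R interaction originally

# Non-local TPS change: W = exp(-i theta sx(x)sx) = cos(theta) I - i sin(theta) sx(x)sx
theta = 0.4
W = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * np.kron(sx, sx)
Hp = W @ H @ W.conj().T
assert not np.allclose(interaction_weights(Hp), 0)  # interaction term appears

# Local TPS change W = P (x) Q leaves H non-interacting
P = np.cos(0.3) * I2 - 1j * np.sin(0.3) * sx    # exp(-i 0.3 sx)
Q = np.cos(0.7) * I2 - 1j * np.sin(0.7) * sy    # exp(-i 0.7 sy)
Hl = np.kron(P, Q) @ H @ np.kron(P, Q).conj().T
assert np.allclose(interaction_weights(Hl), 0)
```

The nonzero $w_{\alpha,\beta}$ produced by the non-local $W$ play exactly the role of the $V_{\tilde C}\otimes V_{\tilde R}$ term above, while a local $P\otimes Q$ (the equivalent-TPS case) leaves all of them zero.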
As pointed out in \cite{PAGE}, it will have the form: \begin{equation} \frac{\partial \rho_t}{\partial t}= i[\rho_t, H_{\tilde R}]\;+ {\rm terms\; depending\; on\; t}.\label{rot} \end{equation} Hence, given $H$ and $\ket{\psi}$, if there is {\sl one} tensor-product structure in which the clock is ideal (no interactions) {\sl and} a Schr\"odinger-type unitary evolution (generated by the time-independent Hamiltonian $H_R$) arises on the relative state of the rest with respect to the labels $t$, then the TPS must be unique. In all other non-equivalent tensor-product structures, although it is possible to write the overall state as $|{\psi}\rangle =\sum_t \beta_t \ket{t}_{\tilde C}\ket{\phi_t}_{\tilde R}$, it must be that $\ket{\phi_t}_{\tilde R}\neq \exp(-iH_{\tilde R}t)\ket{\phi_0}_{\tilde R}$, because eq.~\eqref{rot} holds instead of eq.~\eqref{SC} -- due to the interaction terms between the clock $\tilde C$ and the rest $\tilde R$. Thus, there is no clock ambiguity, as promised. We conclude that in unitary quantum theory it is not ambiguous to apply the same logic as in the classical time-less approaches: the clock and the clock observable are to be chosen so that a Schr\"odinger-type dynamics arises on the rest of the universe, generated by a time-independent Hamiltonian. There can be only one such choice, for a given total Hamiltonian $H$ and a given total state of the universe. {\bf The appearance of the flow of time.} It is worth pointing out that there is no flow of time in the PW picture. The PW approach shows that the Schr\"odinger equation generated by $H_R$ holds for the rest of the universe with respect to the labels $\{t\}$ of the eigenstates of a particular clock observable $T_C$, conjugate to $H_C$, with $\ket{t}=\exp(-iH_Ct)\ket{0}$. But the {\sl flow of time} has to emerge as a result of there being subsystems of the rest of the universe that can perform measurements and store their results, thus constructing a {\sl history}.
More specifically, let us consider a model where the rest is partitioned into three subparts: the \qq{observer} (which for simplicity we assume to be made of a memory only), the \qq{observed}, and a sequence of ancillas. As mentioned in section 3, by treating the ancillas semiclassically, it is possible to describe the observed and the observer as undergoing an effective evolution generated by a time-dependent Hamiltonian, in turn generated by the interactions with the ancillas as prescribed by $H_R$ -- where time here is the label $t$ of the eigenstates of the clock. Let us suppose that this effective evolution corresponds to a sequence of gates occurring on the observed and to the observer performing measurements on the observed to record what has happened. Specifically, suppose that the observer's memory starts in a blank state $\ket{b}^{\otimes N}$ at $t=0$ and the observed is in a state $\ket{A_1}$, where $\ket{A_i}$, for $i=1,\dots,N$, is an eigenstate of some observable $A$. Suppose that the observer and the observed evolve under the effective time-dependent Hamiltonian as follows: at time $t=1$ the observer measures the observable $A$, so that the joint state changes to $\ket{\text{Saw}A_1}\ket{b}^{\otimes N-1}\ket{A_1}$; then a local permutation happens on the observed, so that the state changes to $\ket{\text{Saw}A_1}\ket{b}^{\otimes N-1}\ket{A_2}$; then the observer measures $A$ again, so that the state is now $\ket{\text{Saw}A_1}\ket{\text{Saw}A_2}\ket{b}^{\otimes N-2}\ket{A_2}$; and so on, until the observer ends up recording a sequence of events $A_1, A_2, \dots, A_N$ as prescribed by $H_R$. All these events, here described as sequential, are encoded statically in the overall state of the universe $\ket{\psi}$. The beauty of the PW approach is that it is fully internally consistent.
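Since the measure-then-permute protocol above involves only eigenstates of $A$, it can be tracked with purely symbolic bookkeeping. The following toy simulation (a sketch; the labels are mine, not the paper's notation) runs the protocol and checks that the full history ends up written in the memory:

```python
# Symbolic simulation of the record-building protocol: states stay product
# states throughout, so labels (not amplitudes) suffice to track them.
N = 5
memory = ["b"] * N            # observer's blank memory cells |b>^(x N)
observed = "A1"               # observed starts in eigenstate |A_1>

for step in range(N):
    # the observer measures A and records the outcome in the next blank cell
    memory[step] = "Saw" + observed
    # a local permutation then advances the observed: A_i -> A_{i+1}
    if step < N - 1:
        observed = "A" + str(step + 2)

# the whole history is now encoded statically in the final (product) state
assert memory == ["SawA1", "SawA2", "SawA3", "SawA4", "SawA5"]
assert observed == "A5"
```

The final `memory` list is the static record referred to in the text: every \qq{event} is present at once in the terminal state, with no flow of time anywhere in the description.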
The observer cannot empirically tell the difference between a situation where the sequence of events it observes is really generated by the Hamiltonian $H_R$ on the rest and all the other possibilities, which include cases where the universe can be manipulated by some external entity. This is a general feature of any other picture where constructing a history of events is possible in the sense above, e.g.\ static pictures such as the Block Universe in general relativity. Imagine, for example, that an entity existing outside of the PW universe were able to decide which element in the PW wavefunction constitutes the \qq{now}, as defined with respect to some external coordinate time -- a sort of \qq{meta-time} existing outside of the PW universe. The entity is able to point at any one element $t_n$, declaring it the \qq{now}. Then (again in the meta-time picture) it could point at another element $t_m$ as the next \qq{now}. And so on. This corresponds to picking a different generator of the clock states than $\ket{t}=\exp(-iH_Ct)\ket{0}$, and hence to a different dynamical law from the $H_R$-generated Schr\"odinger equation. The entity can choose any order it likes for the labels $t$, not necessarily the one corresponding to the sequence of events outlined above. What if the entity decides to point first at a state which in the original labelling appears \qq{later}, and then at an \qq{earlier} one? This may seem to make time flow backwards even from the point of view of the observer. But that is not the case. The observer, from his own perspective, does not notice anything different, as his state only contains information about the \qq{previous times}. For example, the observer could never see any gaps if the entity decides to jump, say, from time $t=1$ to time $t=100$, because in the observer state corresponding to $t=100$ there will be recollections of all events $A_2,\dots,A_{100}$, from $t=2$ up to $t=100$.
As far as the observer is concerned, he perceives himself as coming to time $t=100$ directly from $t=99$. Therefore any experiment made by the observer would lead him to conclude that what he observes is consistent with the same Schr\"odinger equation as that corresponding to there being no jumps at all (in the meta-time), generated by the Hamiltonian $H_R$. In other words, in the PW approach, just like in any other dynamical theory, the existence of a meta-time is completely irrelevant from the observer's perspective, as it would not have any empirical consequences. {\bf The arrow of time.} In the PW picture there is no arrow of time, just like in unitary quantum theory: the PW approach gives rise to a time-reversal symmetric dynamical law. Thus, the arrow of time has to be imposed by a separate postulate, requiring that under that dynamical law a given monotone always increases (or decreases). Namely, suppose again that the rest consists of two subsystems, \qq{the observer} and \qq{the observed}. For simplicity, let us approximate the \qq{observer} as simply consisting of a memory needed to keep track of what happens to the observed. The arrow of time can now be specified by the increase in entanglement between the observer and the observed, by selecting a measure of entanglement and requiring that entanglement never decreases on the rest under the evolution generated by $H_R$. In other words, early times correspond to no entanglement between the observer and the observed, and later times to more and more entanglement (as the observer learns more and more about the system). Since the relative states of the rest are pure states, when considering a bi-partition there is a unique measure, given by the relative entropy \cite{VE}. For a discussion of some explicit models, see \cite{PAGE}.
The only ambiguities that might arise in this context are due to: 1) the possibility of picking different partitions into subsystems; and 2) the possibility of having a partition into $n$ subsystems, with $n\geq 3$, in which case there is no unique measure. However, these ambiguities are the same as those related to which coarse-graining to adopt in the usual statistical-mechanical picture. Hence this is no more problematic than other coarse-graining approaches to irreversibility in statistical mechanics. \section{Conclusions} We have shown that the PW timeless approach to time in quantum theory has no ambiguities, thus vindicating it as a viable proposal for the emergence of time in a time-less universe described by unitary quantum theory. The non-interacting property of the clock is crucial to establish that result: a good clock is not just a system with a large set of orthogonal states, such as a good memory; it must be non-interacting while it is used as a clock. We have also updated the model so that it becomes possible to apply it to more general theories, including possible successors of quantum theory. One possible development is to investigate under what conditions the PW logic could apply to theories other than quantum theory (e.g., generalised probabilistic theories \cite{BAR1} or constructor theory's super-information theories \cite{MA}). The challenge there is to understand what relative states would be, as well as in what form the clock ambiguity might appear. Another interesting application could be to recast this model in terms of pseudo-density matrices, \cite{PSEUDO}, where time and space are treated in a unified framework. Finally, it is worth speculating about how the PW approach might provide observable consequences, when combined with cosmological models.
For example, in the context of an expanding universe, one might use as clock observable the radius of the universe, whereby $\ket{\psi}=\sum_n\alpha_n\ket{t_n}\ket{\phi_n}\;$, where $t_n$ is the radius of the universe. In such models, \cite{REF}, $\langle t_n|t_m\rangle\sim \exp(-\gamma(t_n-t_m))$, where $\gamma$ is a parameter that can be fixed according to the particular cosmological model -- which means that different states of the clock become more and more distinguishable the more they are separated in \qq{time}. When applied in this scenario, the PW construction would lead to the conclusion that the relative state of the rest of the universe is no longer pure, but mixed -- this is because the operators $P_t=\ket{t}\bra{t}$ involved in constructing the relative state are no longer orthogonal projectors. This fact might have observable consequences even at the present epoch, depending on which particular cosmological model one chooses, provided that the accuracy in measuring time is high enough. We leave these investigations to future work. \end{document}
\begin{document} \title{Entropic relations for retrodicted quantum measurements} \author{Adri\'{a}n A. Budini} \affiliation{Consejo Nacional de Investigaciones Cient\'{\i}ficas y T\'{e}cnicas (CONICET), Centro At\'{o}mico Bariloche, Avenida E. Bustillo Km 9.5, (8400) Bariloche, Argentina, and Universidad Tecnol\'{o}gica Nacional (UTN-FRBA), Fanny Newbery 111, (8400) Bariloche, Argentina} \date{\today} \begin{abstract} Given an arbitrary measurement over a system of interest, the outcome of a posterior measurement can be used to improve the statistical estimation of the system state after the former measurement. Here, we perform an informational-entropic study of this kind of (Bayesian) retrodicted quantum measurement, formulated in the context of quantum state smoothing. We show that the (average) entropy of the system state after the retrodicted measurement (smoothed state) is bounded from below and above by the entropies of the first measurement when performed in selective and non-selective standard predictive ways, respectively. For bipartite systems the same property is also valid for each subsystem. Their mutual information, in the case of a former single projective measurement, is also bounded in a similar way. The corresponding inequalities provide a kind of retrodicted extension of the Holevo bound for quantum communication channels. These results quantify how much information gain is obtained through retrodicted quantum measurements in quantum state smoothing. While an entropic reduction is always granted, in bipartite systems mutual information may be degraded. Relevant physical examples confirm these features. \end{abstract} \pacs{03.65.Ta, 03.65.Hk, 03.65.Wj} \maketitle \section{Introduction} \textit{Prediction} and \textit{retrodiction} are different and alternative ways of handling information.
Respectively, information in the past or in the future is taken into account to make a \textit{probabilistic} (Bayesian) statement about a system of interest. In physics, most theoretical frameworks are formulated in a predictive way. The measurement process in quantum mechanics is clearly predictive. The corresponding information changes are well known. Non-selective \textit{projective} measurements never decrease von Neumann entropy \cite{nielsen}. Furthermore, the entropy $\mathcal{S}[\rho ]\equiv -\mathrm{Tr}[\rho \ln \rho ]$ after a measurement performed in a \textit{non-selective} way is always greater than the (average) entropy of the same measurement performed in a \textit{selective} way \cite{breuerbook}, that is, $\mathcal{S}[\sum_{k}p(k)\rho _{k}]\geq \sum_{k}p(k)\mathcal{S}[\rho _{k}],$ where $\rho _{k}$ and $p(k)$ are respectively the system state and probability associated to each outcome $k.$ Their difference is bounded by the Shannon entropy $\mathcal{H}[k]\equiv -\sum_{k}p(k)\ln [p(k)]$ of the outcome probabilities $\{p(k)\},$ $ \mathcal{H}[k]\geq \mathcal{S}[\sum_{k}p(k)\rho _{k}]-\sum_{k}p(k)\mathcal{S}[\rho _{k}].$ These statements follow straightforwardly from the Klein inequality and the concavity of von Neumann entropy \cite{nielsen,breuerbook}. Much less is known when the quantum measurement process is performed in a retrodictive way. In quantum mechanics, retrodiction was introduced to criticize the apparent time asymmetry of the measurement process \cite{aharonov,vaidman}. Pre- and post-selected measurement ensembles (initial and final states are known) are considered. Questions about intermediate states are characterized through a (retrodictive) Bayesian analysis and the standard Born rule.
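The two predictive entropy relations just quoted are easy to verify numerically. The sketch below (assuming numpy; the qubit state and the $\sigma_x$-basis projective measurement are arbitrary illustrative choices) checks both the concavity inequality and the Shannon bound on the gap:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S[rho] = -Tr[rho ln rho] (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# An arbitrary qubit state and a projective measurement in the sx basis
rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
projs = [np.outer(v, v.conj()) for v in (plus, minus)]

p = [np.trace(P @ rho).real for P in projs]
post = [P @ rho @ P / np.trace(P @ rho).real for P in projs]  # selective states
rho_ns = sum(pk * rk for pk, rk in zip(p, post))              # non-selective state

S_ns = vn_entropy(rho_ns)
S_sel = sum(pk * vn_entropy(rk) for pk, rk in zip(p, post))
H_out = -sum(pk * np.log(pk) for pk in p)                     # Shannon entropy

assert S_ns >= S_sel - 1e-12           # concavity: non-selective >= avg selective
assert H_out >= S_ns - S_sel - 1e-12   # Shannon bound on the gap
```

For projective measurements the selective post-measurement states are pure, so the selective average entropy vanishes and the two bounds reduce to $\mathcal{H}[k]\geq\mathcal{S}[\rho_{\rm ns}]\geq 0$.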
Retrodiction also arises in the related formalisms of past quantum states \cite{molmer} and quantum state smoothing \cite{wiseman,tsang}, which can be considered as quantum extensions of classical (Bayesian inference) smoothing techniques \cite{jaz,recipes}. Information in both the past and the future of an open quantum system continuously monitored in time \cite{milburn,carmichaelbook} is available. Hence, the system information is described through a pair of operators, the \textit{past quantum state}, consisting of the system density matrix and an effect operator that takes into account the future information \cite{molmer}. These objects allow one to estimate the outcome probabilities of an intermediate (retrodicted) quantum measurement process taking into account both past and future information. This scheme was studied and applied in a wide class of dynamics and physical arrangements \cite{tsanPRA,meschede,murch,haroche,xu,tan,naghi,huard,decay}. The system state (single density matrix) that takes into account both past and future information is called the \textit{quantum smoothed state} \cite{wiseman,retro}. While it is generally argued that extra (future) information improves the estimation of a past (retrodicted) measurement, in contrast with predictive measurements, a rigorous quantification of this informational benefit is lacking. Hence, the goal of this paper is to perform an informational-entropic study of retrodictive quantum measurements. We find upper and lower bounds for the (average) entropy of the retrodicted state (quantum smoothed state). They are given by the entropies of the same measurement without retrodiction, performed in non-selective and selective ways respectively. The same kind of relation is obtained for each part of a bipartite system.
Their mutual information satisfies similar inequalities whose explicit form (in the case of projective retrodictive measurements) leads to a kind of retrodicted extension of the Holevo bound for quantum communication channels \cite{nielsen}. These features are exemplified with a qubit subjected to strong-weak retrodicted measurements \cite{murch} and a hybrid quantum-classical optical-like arrangement \cite{molmer}. The developed results provide a rigorous characterization of the information changes achieved through retrodicted quantum measurements. The analysis is performed in the context of past quantum states and quantum state smoothing \cite{molmer,wiseman,tsang}. We remark that retrodicted measurements were also introduced in alternative ways \cite{barnett,pegg}. Some similarities and differences become clear through the present study. The paper is outlined as follows. In Sec. II we present the general structure of retrodicted measurements and quantum state smoothing. In Sec. III the general entropic relations are obtained. The case of bipartite systems is also characterized through their mutual information. In Sec. IV we study the case of a projective measurement performed over a subsystem of a bipartite arrangement. Retrodicted-like Holevo bounds are derived. Examples are worked out in Sec. V. In Sec. VI we provide the conclusions. Calculational details that support the main results are presented in the Appendices. \section{Retrodicted quantum measurements} Here we present the basic scheme (see Fig. 1) corresponding to a retrodicted quantum measurement. It recovers the past quantum state formalism \cite{molmer} and also allows us to define a quantum smoothed state \cite{wiseman,retro}. A quantum system is characterized by its density matrix $\rho _{I}.$ This object depends on the previous history of the system.
In a first step, it is subjected to an arbitrary measurement process \cite{nielsen,breuerbook} defined by the set of measurement operators $\{\Omega _{m}\},$ which fulfills $\sum\nolimits_{m}\Omega _{m}^{\dagger }\Omega _{m}=\mathrm{I,}$ where $\mathrm{I}$ is the identity matrix in the system Hilbert space. The system states $\{\rho _{m}\}$\ associated to each outcome, and the probability $\{p(m)\}$\ of their occurrence, respectively are \begin{equation} \rho _{m}=\frac{\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }}{\mathrm{Tr} [\Omega _{m}^{\dagger }\Omega _{m}\rho _{I}]},\ \ \ \ \ p(m)=\mathrm{Tr} [\Omega _{m}^{\dagger }\Omega _{m}\rho _{I}], \label{RhoEme} \end{equation} where $\mathrm{Tr}[\bullet ]$ is the trace operation. After the first measurement, the system evolves with its own (reversible or irreversible) completely positive dynamics \cite{nielsen,breuerbook} and then is subjected to a second arbitrary measurement process. It is defined by a set of operators $\{M_{y}\},$ which satisfy $\sum\nolimits_{y}M_{y}^{ \dagger }M_{y}=\mathrm{I.}$ In the following analysis the system dynamics is disregarded, or equivalently, it can be taken into account through a redefinition of the set of operators $\{M_{y}\}.$ The second measurement implies the state transformation $\rho _{m}\rightarrow M_{y}\rho _{m}M_{y}^{\dagger }/\mathrm{Tr}[M_{y}^{\dagger }M_{y}\rho _{m}].$ The (conditional) probability $p(y|m)$ of outcome $y$ given that the first one was $m$ reads \begin{equation} p(y|m)=\mathrm{Tr}[M_{y}^{\dagger }M_{y}\rho _{m}]=\frac{\mathrm{Tr}[\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }M_{y}^{\dagger }M_{y}]}{\mathrm{Tr} [\Omega _{m}^{\dagger }\Omega _{m}\rho _{I}]}. \label{P_y_cond_m} \end{equation} An essential ingredient for defining a retrodicted measurement is to ask about the inverse conditional probability $p(m|y),$ that is, the probability of $m$ given the (posterior) outcome $y.$ This object follows from Bayes rule. 
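Before deriving $p(m|y)$ in closed form, the predictive ingredients defined so far can be checked numerically. In this sketch (assuming numpy; the qubit state, the $z$-basis first measurement, and the $\pi/8$-rotated second basis are my illustrative choices, not prescribed by the text), Eqs. (\ref{RhoEme}) and (\ref{P_y_cond_m}) are evaluated and their normalizations verified:

```python
import numpy as np

# Illustrative qubit example: first measurement in the z basis, second in a
# basis rotated by pi/8 (both choices are assumptions of this sketch).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
a = np.pi / 8
v0 = np.array([np.cos(a), np.sin(a)], dtype=complex)
v1 = np.array([-np.sin(a), np.cos(a)], dtype=complex)
Omega = [np.outer(v, v.conj()) for v in (ket0, ket1)]   # {Omega_m}
M = [np.outer(v, v.conj()) for v in (v0, v1)]           # {M_y}

rho_I = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # arbitrary initial state

# Eq. (RhoEme): post-measurement states and outcome probabilities
p_m = [np.trace(Om.conj().T @ Om @ rho_I).real for Om in Omega]
rho_m = [Om @ rho_I @ Om.conj().T / pm for Om, pm in zip(Omega, p_m)]

# Eq. (P_y_cond_m): conditional probability of y given m
p_y_given_m = [[np.trace(My.conj().T @ My @ rm).real for My in M] for rm in rho_m]

assert abs(sum(p_m) - 1) < 1e-12              # completeness of {Omega_m}
for row in p_y_given_m:
    assert abs(sum(row) - 1) < 1e-12          # completeness of {M_y}
for rm in rho_m:
    assert abs(np.trace(rm).real - 1) < 1e-12  # post-measurement states normalized
```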
Given that the joint probability $p(y,m)$ for the measurement events $m$ and $y$ satisfies $p(y,m)=p(y|m)p(m),$ it reads \begin{equation} p(y,m)=\mathrm{Tr}[\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }M_{y}^{\dagger }M_{y}]. \label{conjunta} \end{equation} Now, by using that $p(y,m)=p(m|y)p(y),$ where \begin{equation} p(y)=\sum_{m}p(y,m), \label{py} \end{equation} is the probability of outcome $y,$ we obtain \begin{equation} p(m|y)=\frac{\mathrm{Tr}[\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }M_{y}^{\dagger }M_{y}]}{\sum_{m^{\prime }}\mathrm{Tr}[\Omega _{m^{\prime }}\rho _{I}\Omega _{m^{\prime }}^{\dagger }M_{y}^{\dagger }M_{y}]}. \label{Pmy} \end{equation} This retrodicted probability relies on Bayes' rule and standard quantum measurement theory. It arises in pre- and post-selected ensembles (here defined by $\rho _{I}$ and the outcome $y$) \cite{aharonov,vaidman} and also in the past quantum state formalism (see supplemental material in Ref. \cite{molmer}). In fact, $p(m|y)$ can be written in terms of the past quantum state $\Xi \equiv (\rho ,E)$ where the density and effect operators are $\rho =\rho _{I}$ and $E=M_{y}^{\dagger }M_{y}$ respectively. \begin{figure} \caption{Scheme of retrodicted measurements. The system is subjected to two successive measurement processes defined by the operators $\{\Omega _{m}\}$ and $\{M_{y}\}$.} \end{figure} \subsection*{Retrodicted-Quantum smoothed state} The previous analysis does not associate or define a system state for the retrodicted probability $p(m|y).$ This assignation depends on extra assumptions. Similarly to Ref. \cite{molmer}, we assume that the result of the first measurement is hidden from us, that is, the first measurement is a non-selective one \cite{nielsen,breuerbook}. Hence, the system state after the first measurement, $\rho _{I}\rightarrow \rho _{\Omega },$ is \begin{equation} \rho _{\Omega }=\sum_{m}\rho _{m}\ p(m)=\sum_{m}\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }.
\label{Rho_Omega} \end{equation} The retrodicted or \textit{smoothed quantum state} $\rho _{y}$ \cite{wiseman,retro} is here defined as the estimation of the system state after the first non-selective measurement \textit{given} that we know the outcome (labeled by $y$) of the second (selective) measurement. Therefore, we write \begin{equation} \rho _{y}\equiv \sum_{m}\rho _{m}\ p(m|y)=\sum_{m}w(m,y)\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }. \label{Rho_y} \end{equation} Here, $w(m,y)\equiv p(m|y)/p(m)=p(y,m)/[p(y)p(m)],$ which from Eqs. (\ref{RhoEme}) and (\ref{Pmy}) explicitly reads \begin{equation} w(m,y)=\frac{\mathrm{Tr}[\Omega _{m}\rho _{I}\Omega _{m}^{\dagger }M_{y}^{\dagger }M_{y}]}{\mathrm{Tr}[\Omega _{m}^{\dagger }\Omega _{m}\rho _{I}]\sum_{m^{\prime }}\mathrm{Tr}[\Omega _{m^{\prime }}\rho _{I}\Omega _{m^{\prime }}^{\dagger }M_{y}^{\dagger }M_{y}]}. \end{equation} We remark that the smoothed state $\rho _{y}$ depends on (is conditioned on) the result of the second measurement. In contrast to the case of pre- and post-selected measurements \cite{aharonov,vaidman}, where $y$ is fixed, here no selection is imposed on the second measurement result. Therefore, we can define an \textit{average smoothed state} $\rho _{M}\equiv \sum_{y}\rho _{y}\ p(y),$ which corresponds to the system state after averaging $\rho _{y}$ over the outcomes $y.$ Using that $p(y)=\sum_{m^{\prime }}\mathrm{Tr}[\Omega _{m^{\prime }}\rho _{I}\Omega _{m^{\prime }}^{\dagger }M_{y}^{\dagger }M_{y}]$ [see Eq. (\ref{conjunta})] and that $\sum_{y}M_{y}^{\dagger }M_{y}=\mathrm{I},$ it follows that \begin{equation} \rho _{M}\equiv \sum_{y}\rho _{y}\ p(y)=\sum_{m}\rho _{m}\ p(m)=\rho _{\Omega }. \label{Iguales} \end{equation} Thus, the average smoothed state $\rho _{M}$ recovers the state $\rho _{\Omega }$ corresponding to the state after the first non-selective measurement. A similar property was found in the quantum-classical arrangements studied in Ref. \cite{retro}.
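The smoothed-state construction and the identity of Eq. (\ref{Iguales}) can be verified on a concrete qubit example. In this self-contained sketch (numpy assumed; the $z$-basis first measurement, the $\pi/8$-rotated second basis, and the initial state are illustrative choices of mine), the retrodicted probabilities genuinely differ from the predictive ones, yet averaging over $y$ recovers $\rho_\Omega$:

```python
import numpy as np

# Illustrative qubit setup: z-basis first measurement, pi/8-rotated second basis
ket0 = np.array([1, 0], dtype=complex); ket1 = np.array([0, 1], dtype=complex)
a = np.pi / 8
v0 = np.array([np.cos(a), np.sin(a)], dtype=complex)
v1 = np.array([-np.sin(a), np.cos(a)], dtype=complex)
Omega = [np.outer(v, v.conj()) for v in (ket0, ket1)]
M = [np.outer(v, v.conj()) for v in (v0, v1)]
rho_I = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

p_m = [np.trace(Om @ rho_I @ Om.conj().T).real for Om in Omega]
rho_m = [Om @ rho_I @ Om.conj().T / pm for Om, pm in zip(Omega, p_m)]
# joint p(y,m), Eq. (conjunta); for projectors, M_y^dag M_y = M_y
p_joint = [[np.trace(Om @ rho_I @ Om.conj().T @ My).real for My in M] for Om in Omega]
p_y = [sum(p_joint[m][y] for m in range(2)) for y in range(2)]          # Eq. (py)
p_m_given_y = [[p_joint[m][y] / p_y[y] for y in range(2)] for m in range(2)]  # Eq. (Pmy)

# smoothed states, Eq. (Rho_y), and the non-selective state, Eq. (Rho_Omega)
rho_y = [sum(rho_m[m] * p_m_given_y[m][y] for m in range(2)) for y in range(2)]
rho_Omega = sum(pm * rm for pm, rm in zip(p_m, rho_m))

# averaging the smoothed state over y recovers rho_Omega, Eq. (Iguales)
rho_M = sum(py * ry for py, ry in zip(p_y, rho_y))
assert np.allclose(rho_M, rho_Omega)
# retrodiction is non-trivial here: p(m|y) differs from p(m)
assert abs(p_m_given_y[0][0] - p_m[0]) > 1e-3
```

The rotated second basis is chosen deliberately so that the two outcomes are correlated; with a mutually unbiased second basis one would get $p(m|y)=p(m)$ and the smoothing would be trivial.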
The analysis of retrodicted quantum measurements performed in Refs. \cite{barnett,pegg} also relies on quantum measurement theory and Bayes' rule. Nevertheless, the assumptions are different from the previous ones. After the second measurement, the states $\rho _{m}$ are not known. Hence, the state after the first measurement [Eq. (\ref{Rho_Omega})] is taken as a state of maximal entropy, $\rho _{\Omega }\simeq \mathrm{I,}$ while $\rho _{y}$ [Eq. (\ref{Rho_y})] loses its meaning. Hence, the following results do not apply straightforwardly to those models. \section{Entropic relations} The retrodicted quantum measurement scheme described previously consists of two successive measurements, a non-selective and a selective one. Now, the relevant question is how much information gain is obtained from the retrodicted (smoothed) state $\rho _{y}$ [Eq.~(\ref{Rho_y})]. As usual, as an information measure we consider the von Neumann entropy $\mathcal{S}[\rho ]=-\mathrm{Tr}[\rho \ln \rho ].$ In general, one is interested in establishing upper and lower bounds for $\mathcal{S}[\rho _{y}],$ and in determining how they are related with, for example, the entropies $\mathcal{S}[\rho _{\Omega }]$ or $\mathcal{S}[\rho _{m}].$ Given the arbitrariness of the two measurement processes and the random nature of the outcome $y,$ it is not possible to establish any general relation between the entropies $\mathcal{S}[\rho _{y}],$ $\mathcal{S}[\rho _{\Omega }],$ and $\mathcal{S}[\rho _{m}].$ Any relation is in fact possible. Therefore, similarly to the case of standard measurement processes \cite{nielsen,breuerbook}, any entropy relation must be established by considering averages over the possible measurement outcomes.
By using the concavity of the von Neumann entropy, $\mathcal{S}[\sum_{k}p(k)\rho _{k}]\geq \sum_{k}p(k)\mathcal{S}[\rho _{k}]$ \cite{nielsen} (with equality if and only if all states $\rho _{k}$ are the same), in Appendix A we derive the following entropy relation \begin{equation} \mathcal{S}[\rho _{\Omega }]\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}]\geq \sum_{m}p(m)\mathcal{S}[\rho _{m}]. \label{Central} \end{equation} This is one of the central results of this paper. It demonstrates that the (average) entropy of the system after the retrodicted measurement, $\sum_{y}p(y)\mathcal{S}[\rho _{y}],$ is bounded from above and below by its associated non-selective, $\mathcal{S}[\rho _{\Omega }],$ and (average) selective, $\sum_{m}p(m)\mathcal{S}[\rho _{m}],$ measurement entropies. In other words, the retrodictive measurement is more informative than the first non-selective measurement, but less informative than a selective resolution of the same measurement process. In Eq. (\ref{Central}), the lower bound is achieved when all states $\{\rho _{m}\}$ are the same, or alternatively when $p(m|y)=\delta _{my},$ that is, when both measurement results are completely correlated, $p(y,m)=\delta _{ym}p(m)=\delta _{my}p(y)$ in Eq.~(\ref{conjunta}). On the other hand, the upper bound is attained when all states $\{\rho _{y}\}$ are the same. This last condition occurs when all states $\{\rho _{m}\}$ are identical, or alternatively when $p(m|y)=p(m),$ that is, when both measurement results, $\{m\}$ and $\{y\},$ are statistically independent, $p(y,m)=p(y)p(m)$ in Eq.~(\ref{conjunta}) (see Appendix A). Interestingly, it is also possible to bound the difference between the terms appearing in Eq. (\ref{Central}).
By using the upper bound $\sum_{k}p(k)\mathcal{S}[\rho _{k}]+\mathcal{H}[k]\geq \mathcal{S}[\sum_{k}p(k)\rho _{k}]$ \cite{nielsen}, where $\mathcal{H}[k]=-\sum_{k}p(k)\ln [p(k)]$ is the Shannon entropy of a probability distribution $\{p(k)\},$ in Appendix A we obtain
\begin{equation}
\mathcal{H}[y]\geq \mathcal{S}[\rho _{\Omega }]-\sum_{y}p(y)\mathcal{S}[\rho _{y}]\geq 0,  \label{HyUpperBound}
\end{equation}
while at the other extreme it holds that
\begin{equation}
\mathcal{H}[m]\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}]\geq 0.  \label{LowerBound}
\end{equation}
In this way, the Shannon entropies $\mathcal{H}[y]$ and $\mathcal{H}[m]$ (associated with the two sets of measurement outcomes) bound the differences between the entropy of the retrodicted measurement and those of its associated non-selective and selective measurements. Conditions under which the upper bounds of Eqs. (\ref{HyUpperBound}) and (\ref{LowerBound}) are achieved are also provided in Appendix A.

\subsection{Bipartite systems}

In many physical arrangements where the retrodicted measurement scheme was studied, the system of interest is a bipartite one. Thus, a relevant question is to determine whether the previous entropy inequality [Eq. (\ref{Central})] remains valid for each subsystem. Denoting by $A$ and $B$ each subsystem, their states follow from the partial traces $\rho ^{a}=\mathrm{Tr}_{b}[\rho ^{ab}]$ and $\rho ^{b}=\mathrm{Tr}_{a}[\rho ^{ab}],$ where $\rho ^{ab}$ is an arbitrary bipartite state. Under the replacements $\rho _{m}\rightarrow \rho _{m}^{a/b},$ $\rho _{y}\rightarrow \rho _{y}^{a/b},$ $\rho _{\Omega }\rightarrow \rho _{\Omega }^{a/b},$ from the demonstrations of Appendix A it is simple to realize that the inequalities Eqs.~(\ref{Central}), (\ref{HyUpperBound}), and (\ref{LowerBound}) remain valid for each subsystem. This result holds independently of the kind of (bipartite) measurements performed.
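The inequalities (\ref{Central}), (\ref{HyUpperBound}), and (\ref{LowerBound}) can be checked numerically for randomly generated measurement operators. The following Python sketch is illustrative only: the helper functions, the dimension, the number of outcomes, and the random seed are our own choices, not part of the scheme itself.

```python
import numpy as np

rng = np.random.default_rng(7)

def vn_entropy(rho, eps=1e-12):
    """Von Neumann entropy S[rho] = -Tr[rho ln rho] (natural logarithm)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > eps]
    return float(-np.sum(ev * np.log(ev)))

def shannon(p, eps=1e-12):
    """Shannon entropy H = -sum_k p(k) ln p(k)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > eps]
    return float(-np.sum(p * np.log(p)))

def rand_state(d):
    """Random full-rank density matrix of dimension d."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def rand_measurement(d, n):
    """n random measurement (Kraus) operators K_k with sum_k K_k^† K_k = I."""
    ks = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(n)]
    s = sum(k.conj().T @ k for k in ks)
    w, v = np.linalg.eigh(s)
    s_inv_sqrt = v @ np.diag(w ** -0.5) @ v.conj().T
    return [k @ s_inv_sqrt for k in ks]

d, nm, ny = 3, 4, 3
rho_I = rand_state(d)
Omega = rand_measurement(d, nm)   # first measurement, outcomes m
M = rand_measurement(d, ny)       # second measurement, outcomes y

p_m = np.array([np.trace(O.conj().T @ O @ rho_I).real for O in Omega])
rho_m = [O @ rho_I @ O.conj().T / pm for O, pm in zip(Omega, p_m)]
rho_Omega = sum(pm * rm for pm, rm in zip(p_m, rho_m))  # non-selective state

# joint p(y, m) = Tr[M_y Omega_m rho Omega_m^† M_y^†] and retrodicted p(m|y)
p_ym = np.array([[np.trace(My @ (pm * rm) @ My.conj().T).real
                  for pm, rm in zip(p_m, rho_m)] for My in M])
p_y = p_ym.sum(axis=1)
rho_y = [sum(p_ym[iy, im] / p_y[iy] * rho_m[im] for im in range(nm))
         for iy in range(ny)]  # smoothed states

S_ns = vn_entropy(rho_Omega)
S_retro = sum(py * vn_entropy(ry) for py, ry in zip(p_y, rho_y))
S_sel = sum(pm * vn_entropy(rm) for pm, rm in zip(p_m, rho_m))

print(S_ns >= S_retro >= S_sel)                  # entropy sandwich
print(shannon(p_y) >= S_ns - S_retro >= 0)       # Shannon upper bound on the gap
print(shannon(p_m) >= S_retro - S_sel >= 0)      # Shannon upper bound on the gap
```

Rerunning with other seeds, dimensions, or outcome numbers leaves the three checks unchanged, as expected for inequalities that hold for arbitrary measurement pairs.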
\subsection{Mutual information}

Another important aspect that can be studied when considering bipartite systems is the change in the mutual information between the subsystems. For a bipartite state $\rho ^{ab},$ the mutual information $\mathcal{I}[\rho ^{ab}]$ is defined as $\mathcal{I}[\rho ^{ab}]\equiv \mathcal{S}[\rho ^{a}]+\mathcal{S}[\rho ^{b}]-\mathcal{S}[\rho ^{ab}].$ As demonstrated in Appendix B, bounds for this object can be derived by using the strong subadditivity property of the von Neumann entropy, $\mathcal{S}[\rho _{abc}]+\mathcal{S}[\rho _{a}]\leq \mathcal{S}[\rho _{ab}]+\mathcal{S}[\rho _{ac}].$ Thus, as is usual for quantum information results \cite{nielsen}, the demonstrations rely on introducing an extra ancilla system. In Appendix B we demonstrate that
\begin{equation}
\mathcal{S}[\rho _{\Omega }^{ab}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]\geq \mathcal{I}[\rho _{\Omega }^{ab}]-\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}].  \label{Mutual_2}
\end{equation}
Therefore, the difference between the mutual information corresponding to the non-selective measurement, $\mathcal{I}[\rho _{\Omega }^{ab}],$ and the average mutual information corresponding to the retrodicted one, $\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}],$ is bounded from above by the positive quantity $\mathcal{S}[\rho _{\Omega }^{ab}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]$ [see Eq. (\ref{HyUpperBound})]. On the other hand, based on the strong subadditivity condition, it is also possible to show that
\begin{eqnarray}
&&\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{ab}]\geq  \notag \\
&&\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}].
\label{Mutual_1}
\end{eqnarray}
This inequality, similar to the previous one, gives an upper bound for the difference between the (average) mutual information corresponding to the retrodicted measurement, $\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}],$ and that of its (non-retrodicted) selective resolution, $\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}].$ From Eq. (\ref{LowerBound}) it follows that the upper bound $\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{ab}]$ is a positive quantity.

General conditions under which the previous bounds [Eqs. (\ref{Mutual_2}) and (\ref{Mutual_1})] become equalities are left open \cite{equality}. On the other hand, notice that only upper bounds were found.

\section{Projective measurements in bipartite systems}

The previous results are general and apply independently of the nature of the measurement processes (Fig.~1). Here, we consider an arbitrary bipartite system where a first \textit{projective measurement} is performed on subsystem $B,$ while the posterior one, performed on subsystem $A,$ remains arbitrary. Hence, the operators $\{\Omega _{m}\}$ that define the first measurement are written as
\begin{equation}
\Omega _{m}=\mathrm{I}_{a}\otimes |m\rangle \langle m|.
\end{equation}
Here, $\mathrm{I}_{a}$ is the identity matrix in the Hilbert space of subsystem $A$ while $\{|m\rangle \}$ is a complete orthonormal basis of the Hilbert space of $B.$ The second measurement is defined by the set of operators $\{M_{y}\},$ which act on the Hilbert space of $A.$

The bipartite state associated to each outcome $\{m\}$ reads [Eq.~(\ref{RhoEme})]
\begin{equation}
\rho _{m}^{ab}=\frac{\langle m|\rho _{I}|m\rangle }{\mathrm{Tr}_{a}[\langle m|\rho _{I}|m\rangle ]}\otimes |m\rangle \langle m|\equiv \rho _{m}^{a}\otimes |m\rangle \langle m|.
\label{RhoAB_m}
\end{equation}
The state after the non-selective measurement is [Eq.~(\ref{Rho_Omega})]
\begin{equation}
\rho _{\Omega }^{ab}=\sum_{m}p(m)\rho _{m}^{a}\otimes |m\rangle \langle m|,  \label{entropiasss}
\end{equation}
while the retrodictive smoothed state becomes [Eq.~(\ref{Rho_y})]
\begin{equation}
\rho _{y}^{ab}=\sum_{m}p(m|y)\rho _{m}^{a}\otimes |m\rangle \langle m|.  \label{entropias}
\end{equation}
From Eqs. (\ref{RhoAB_m}) to (\ref{entropias}) it is possible to demonstrate (Appendix C) that the inequalities (\ref{Central}) and the bounds defined by Eqs. (\ref{HyUpperBound}) and (\ref{LowerBound}) are explicitly satisfied by the bipartite states. Similar expressions are valid for each subsystem.

\subsection{Mutual information}

The changes in the mutual information at the different measurement stages are upper bounded by Eqs. (\ref{Mutual_2}) and (\ref{Mutual_1}). Given the projective character of the first measurement, here it is also possible to find a lower bound on these informational changes. From Eqs. (\ref{Mutual_2}) and Eqs. (\ref{RhoAB_m}) to (\ref{entropias}), in Appendix C we obtain
\begin{subequations}
\label{positive2}
\begin{eqnarray}
\mathcal{H}\left[ m:y\right]  &\geq &\mathcal{I}[\rho _{\Omega }^{ab}]-\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}] \\
&=&\mathcal{S}[\rho _{\Omega }^{a}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]\geq 0,
\end{eqnarray}
where $\mathcal{H}[m:y]=\mathcal{H}[m]+\mathcal{H}[y]-\mathcal{H}[m,y]$ is the classical mutual information between the outcomes of both measurements, $\{m\}$ and $\{y\}.$ The lower bound in the previous expression tells us that the (average) mutual information associated with the retrodicted measurement, $\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}],$ is smaller than that corresponding to the non-selective measurement, $\mathcal{I}[\rho _{\Omega }^{ab}].$ Hence, in contrast to the entropy behavior, here the retrodicted measurement leads to a degradation of the mutual information between the subsystems. Similarly to Eq.~(\ref{positive2}), it is possible to obtain (Appendix C)
\end{subequations}
\begin{subequations}
\label{positive1}
\begin{eqnarray}
\mathcal{H}[m|y] &\geq &\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}] \\
&=&\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}]\geq 0,
\end{eqnarray}
where, as before, $\mathcal{H}[m|y]$ is the conditional entropy of outcomes $\{m\}$ given outcomes $\{y\}.$ Thus, while the mutual information associated with the retrodictive measurement decreases with respect to that of the non-selective measurement, it is bounded from below by the mutual information of its selective resolution, $\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}].$

\subsection{Retrodicted-like Holevo bound}

Interestingly, Eqs. (\ref{positive2}) and (\ref{positive1}) can be read as a retrodicted version of the well-known Holevo bound for quantum communication channels \cite{nielsen}. The standard Holevo bound arises in the following context.
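Before turning to that context, the bounds (\ref{positive2}) and (\ref{positive1}), including the equality between their mutual-information and entropy forms, can be checked numerically. The sketch below is an illustration under our own choices (dimensions, seed, and a random second measurement on $A$); the last line also checks the standard Holevo bound discussed next.

```python
import numpy as np

rng = np.random.default_rng(5)
da, db = 2, 3   # subsystem A (measured second), subsystem B (projective first)

def vn(rho, eps=1e-12):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > eps]
    return float(-np.sum(ev * np.log(ev)))

def ptr_b(rho):  # rho^a = Tr_b[rho^ab]
    return np.trace(rho.reshape(da, db, da, db), axis1=1, axis2=3)

def ptr_a(rho):  # rho^b = Tr_a[rho^ab]
    return np.trace(rho.reshape(da, db, da, db), axis1=0, axis2=2)

def minfo(rho):  # I[rho^ab] = S[rho^a] + S[rho^b] - S[rho^ab]
    return vn(ptr_b(rho)) + vn(ptr_a(rho)) - vn(rho)

def H(p):        # Shannon entropy (natural logarithm)
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

g = rng.normal(size=(da*db, da*db)) + 1j*rng.normal(size=(da*db, da*db))
rho_I = g @ g.conj().T
rho_I /= np.trace(rho_I).real

# first measurement: Omega_m = I_a ⊗ |m><m| (projective on B)
Omega = [np.kron(np.eye(da), np.diag(np.eye(db)[m])) for m in range(db)]
# second measurement: two random operators on A, normalized so sum M^† M = I
ks = [rng.normal(size=(da, da)) + 1j*rng.normal(size=(da, da)) for _ in range(2)]
s = sum(K.conj().T @ K for K in ks)
w, v = np.linalg.eigh(s)
M = [np.kron(K @ v @ np.diag(w**-0.5) @ v.conj().T, np.eye(db)) for K in ks]

p_m = np.array([np.trace(O @ rho_I @ O).real for O in Omega])
rho_m = [O @ rho_I @ O / pm for O, pm in zip(Omega, p_m)]
rho_Om = sum(pm*r for pm, r in zip(p_m, rho_m))
p_ym = np.array([[np.trace(My @ (pm*r) @ My.conj().T).real
                  for pm, r in zip(p_m, rho_m)] for My in M])
p_y = p_ym.sum(axis=1)
rho_y = [sum(p_ym[i, j]/p_y[i]*rho_m[j] for j in range(db)) for i in range(len(M))]

H_my = H(p_m) + H(p_y) - H(p_ym)   # classical mutual information H[m:y]
H_m_y = H(p_ym) - H(p_y)           # conditional entropy H[m|y]
dI2 = minfo(rho_Om) - sum(py*minfo(r) for py, r in zip(p_y, rho_y))
dS2 = vn(ptr_b(rho_Om)) - sum(py*vn(ptr_b(r)) for py, r in zip(p_y, rho_y))
dI1 = sum(py*minfo(r) for py, r in zip(p_y, rho_y)) \
      - sum(pm*minfo(r) for pm, r in zip(p_m, rho_m))
dS1 = sum(py*vn(ptr_b(r)) for py, r in zip(p_y, rho_y)) \
      - sum(pm*vn(ptr_b(r)) for pm, r in zip(p_m, rho_m))

print(H_my >= dI2 >= 0, abs(dI2 - dS2) < 1e-9)   # checks Eq. (positive2)
print(H_m_y >= dI1 >= 0, abs(dI1 - dS1) < 1e-9)  # checks Eq. (positive1)
print(minfo(rho_Om) >= H_my)                      # standard Holevo bound
```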
A sender prepares a quantum alphabet $\{\rho _{m}^{a}\}$ with probabilities $\{p(m)\}.$ A receiver performs a measurement characterized by the operators $\{M_{y}\}$ on the sent letter (state), which gives the result $y.$ The Holevo bound states that for any measurement the receiver may perform, it holds that \cite{nielsen}
\end{subequations}
\begin{equation}
\mathcal{H}[m:y]\leq \mathcal{S}\left[ \sum\nolimits_{m}p(m)\rho _{m}^{a}\right] -\sum\nolimits_{m}p(m)\mathcal{S}[\rho _{m}^{a}].  \label{Holevo}
\end{equation}
Hence, the accessible channel information (measured by the mutual information $\mathcal{H}[m:y]$ between the preparation and the measurement outcomes) is upper bounded by $\chi \equiv \mathcal{S}\left[ \sum\nolimits_{m}p(m)\rho _{m}^{a}\right] -\sum\nolimits_{m}p(m)\mathcal{S}[\rho _{m}^{a}].$

In the retrodicted measurement scheme (Fig. 1), the preparation $\{\rho _{m}^{a}\}$ with probabilities $\{p(m)\}$ can be associated with the first non-selective measurement, while the receiver measurement corresponds to the second one. With this interpretation at hand, we notice that Eq. (\ref{positive2}), rewritten as
\begin{equation}
\mathcal{H}\left[ m:y\right] \geq \mathcal{S}\left[ \sum\nolimits_{y}p(y)\rho _{y}^{a}\right] -\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}],  \label{retroHolevo}
\end{equation}
can be read as a \textit{retrodicted-like Holevo bound}. While the Holevo bound (\ref{Holevo}) gives an upper bound for the accessible information, the retrodicted bound [Eq. (\ref{positive2})] gives a lower bound for $\mathcal{H}[m:y].$ Interestingly, it is written in terms of the (retrodicted) quantum smoothed states $\{\rho _{y}^{a}\}.$ A complementary expression follows straightforwardly from Eq. (\ref{positive1}).

\section{Examples}

Different experimental realizations of the retrodicted scheme of Fig. 1 are performed with open quantum systems \textit{continuously monitored in time}.
Hence, their description relies on the formalism of stochastic wave vectors \cite{milburn,carmichaelbook}. The results developed in the previous sections can be extended to this context. Nevertheless, for simplicity, we consider examples where only two measurements are performed (Fig. 1). The chosen measurement operators capture the main features of different experimental realizations \cite{murch,molmer}.

\subsection{Weak and strong retrodicted measurements of a qubit}

First, we consider a qubit system (two-level system) that starts in an arbitrary state $\rho _{I},$ which is written as
\begin{equation}
\rho _{I}=\frac{1}{2}(\mathrm{I}+\mathbf{r}_{I}\cdot \mathbf{\sigma }).  \label{RhoInitial}
\end{equation}
Here, $\mathrm{I}$ is the identity matrix while $\mathbf{\sigma }=(\sigma _{x},\sigma _{y},\sigma _{z})$ is the vector of Pauli matrices. The Bloch vector \cite{nielsen,breuerbook} is defined as $\mathbf{r}_{I}=r_{I}\mathbf{n},$ where its modulus satisfies $0\leq r_{I}=|\mathbf{r}_{I}|\leq 1$ and $\mathbf{n}=(n_{x},n_{y},n_{z})=(\sin (\theta _{I})\cos (\phi _{I}),\sin (\theta _{I})\sin (\phi _{I}),\cos (\theta _{I})).$ Thus, $\rho _{I}=\rho _{I}(r_{I},\theta _{I},\phi _{I}).$

Similarly to Ref. \cite{murch}, the first measurement operator is given by $(m\rightarrow V)$
\begin{equation}
\Omega _{V}=(2\pi a^{2})^{-1/4}\exp \left[ -\frac{(V-\sigma _{z})^{2}}{4a^{2}}\right] ,  \label{Omega}
\end{equation}
where $a>0$ is a real free parameter and $V\in (-\infty ,+\infty )$ defines the outcomes of the first measurement. Consistently, $\int_{-\infty }^{+\infty }dV\Omega _{V}^{\dag }\Omega _{V}=\mathrm{I}.$ In the experiment analyzed in \cite{murch}, the second measurement can be related to an effect operator that takes into account the future stochastic evolution.
Instead, here we consider an arbitrary qubit projective measurement $(y=\pm )$ performed along an arbitrary direction defined by the angles $(\theta ,\phi ).$ Hence,
\begin{equation}
M_{\pm }=|n_{\pm }\rangle \langle n_{\pm }|,  \label{Projector}
\end{equation}
$[M_{\pm }=M_{\pm }(\theta ,\phi )]$ where the state vectors $|n_{\pm }\rangle $ are
\begin{subequations}
\begin{eqnarray}
|n_{+}\rangle &=&\cos \Big{(}\frac{\theta }{2}\Big{)}|+\rangle +\sin \Big{(}\frac{\theta }{2}\Big{)}e^{+i\phi }|-\rangle , \\
|n_{-}\rangle &=&-\sin \Big{(}\frac{\theta }{2}\Big{)}|+\rangle +\cos \Big{(}\frac{\theta }{2}\Big{)}e^{+i\phi }|-\rangle .
\end{eqnarray}
Here, $|\pm \rangle $ are the eigenstates of $\sigma _{z}.$ Notice that $\langle n_{+}|n_{-}\rangle =0,$ which guarantees $M_{+}^{\dag }M_{+}+M_{-}^{\dag }M_{-}=\mathrm{I}.$

The previous definitions completely set the retrodicted measurement scheme of Fig. 1. It depends on the free parameters $(r_{I},\theta _{I},\phi _{I},a,\theta ,\phi ).$ Our results guarantee that the inequality (\ref{Central}) is fulfilled independently of their values.

The states associated to a selective resolution of the first measurement, $\rho _{V}=\Omega _{V}\rho _{I}\Omega _{V}^{\dagger }/\mathrm{Tr}[\Omega _{V}^{\dagger }\Omega _{V}\rho _{I}]$ [Eq.~(\ref{RhoEme})], can be calculated exactly from the following expression
\end{subequations}
\begin{equation}
\tilde{\rho}_{V}=\sqrt{\frac{1}{2\pi a^{2}}}\left(
\begin{array}{cc}
\langle +|\rho _{I}|+\rangle e^{-\frac{(V-1)^{2}}{2a^{2}}} & \langle +|\rho _{I}|-\rangle e^{-\frac{(V^{2}+1)}{2a^{2}}} \\
\langle -|\rho _{I}|+\rangle e^{-\frac{(V^{2}+1)}{2a^{2}}} & \langle -|\rho _{I}|-\rangle e^{-\frac{(V+1)^{2}}{2a^{2}}}
\end{array}
\right) ,  \label{RhoVUn}
\end{equation}
where $\tilde{\rho}_{V}\equiv \Omega _{V}\rho _{I}\Omega _{V}^{\dagger }.$ From this result it is possible to demonstrate that
\begin{equation}
\lim_{a\rightarrow 0}\rho _{V}=|\pm \rangle \langle \pm |,\ \ \ (V\gtrless 0),\ \ \ \ \ \ \ \lim_{a\rightarrow \infty }\rho _{V}=\rho _{I}.
\label{RhoVlimits}
\end{equation}
Thus, in the limit $a\rightarrow 0,$ the operators $\{\Omega _{V}\}$ perform a \textit{strong} projective measurement in the basis of eigenstates of $\sigma _{z}.$ On the other hand, in the limit $a\rightarrow \infty ,$ a \textit{weak} measurement \cite{weak} is performed, $\rho _{V}=\rho _{I}.$

From Eq. (\ref{RhoVUn}), after a straightforward calculation, the state $\rho _{\Omega }=\int_{-\infty }^{\infty }dV\ \Omega _{V}\rho _{I}\Omega _{V}^{\dagger }$ [Eq. (\ref{Rho_Omega})] can be written as
\begin{equation}
\rho _{\Omega }=\left(
\begin{array}{cc}
\langle +|\rho _{I}|+\rangle & \langle +|\rho _{I}|-\rangle e^{-\frac{1}{2a^{2}}} \\
\langle -|\rho _{I}|+\rangle e^{-\frac{1}{2a^{2}}} & \langle -|\rho _{I}|-\rangle
\end{array}
\right) .
\end{equation}
This expression also reflects the strong and weak character of the (here non-selective) measurement as a function of the parameter $a.$ In fact, when $a\rightarrow 0$ a diagonal matrix follows, while $\lim_{a\rightarrow \infty }\rho _{\Omega }=\rho _{I}.$

From Eq. (\ref{RhoVUn}) it is also simple to obtain the probability density $p(V)=\mathrm{Tr}[\Omega _{V}^{\dagger }\Omega _{V}\rho _{I}]=\mathrm{Tr}[\tilde{\rho}_{V}]$ [Eq.~(\ref{RhoEme})], which is defined by a superposition of two shifted Gaussian distributions weighted by the initial populations $\langle \pm |\rho _{I}|\pm \rangle .$ In general, the joint probability $p(\pm ,V)=\mathrm{Tr}[\Omega _{V}\rho _{I}\Omega _{V}^{\dagger }M_{\pm }^{\dagger }M_{\pm }]$ [Eq.~(\ref{conjunta})] reads
\begin{eqnarray}
p(\pm ,V) &=&+\sqrt{\frac{1}{2\pi a^{2}}}e^{-\frac{(V\mp 1)^{2}}{2a^{2}}}\cos \Big{(}\frac{\theta }{2}\Big{)}^{2}\langle \pm |\rho _{I}|\pm \rangle  \notag \\
&&+\sqrt{\frac{1}{2\pi a^{2}}}e^{-\frac{(V\pm 1)^{2}}{2a^{2}}}\sin \Big{(}\frac{\theta }{2}\Big{)}^{2}\langle \mp |\rho _{I}|\mp \rangle \\
&&\pm \sqrt{\frac{1}{2\pi a^{2}}}e^{-\frac{V^{2}+1}{2a^{2}}}\frac{\sin (\theta )}{2}[e^{+i\phi }\langle +|\rho _{I}|-\rangle +c.c.].
\notag
\end{eqnarray}
From this expression one obtains the retrodicted probabilities $p(V|\pm )=\mathrm{Tr}[\Omega _{V}^{\dagger }\Omega _{V}\rho _{I}M_{\pm }]/p(\pm )$ [Eq.~(\ref{Pmy})], as well as the probabilities $p(\pm )=\int_{-\infty }^{+\infty }dV\mathrm{Tr}[\Omega _{V}\rho _{I}\Omega _{V}^{\dagger }M_{\pm }^{\dagger }M_{\pm }]$ associated to the second measurement outcomes [Eq. (\ref{py})]. On the other hand, the integral that defines the retrodicted smoothed states [Eq.~(\ref{Rho_y})],
\begin{equation}
\rho _{\pm }=\int_{-\infty }^{\infty }dV\ p(V|\pm )\frac{\Omega _{V}\rho _{I}\Omega _{V}^{\dagger }}{\mathrm{Tr}[\Omega _{V}^{\dagger }\Omega _{V}\rho _{I}]},  \label{smoothed}
\end{equation}
must be performed numerically.
\begin{figure}
\caption{Average entropies corresponding to the qubit retrodicted measurement scheme defined by Eqs. (\protect\ref{Omega}) and (\protect\ref{Projector}).}
\end{figure}
In Fig. 2, for a particular initial condition, we show the entropies associated to the non-selective and selective measurements, $\mathcal{S}[\rho _{\Omega }]$ and $\int_{-\infty }^{+\infty }dVp(V)\mathcal{S}[\rho _{V}]$ respectively, as well as the average entropy of the retrodicted smoothed state, $\sum_{y=\pm }p(y)\mathcal{S}[\rho _{y}].$ Consistently, we observe that, independently of the parameter $a$ and the angles $(\theta ,\phi )$ that define the first and second measurements respectively [Eqs. (\ref{Omega}) and (\ref{Projector})], the inequalities (\ref{Central}) are fulfilled. In the limit $a\rightarrow \infty $ (weak measurement) all entropies converge to the same value, which is given by the entropy of the initial state (gray line). In fact, in this limit all states $\rho _{V}$ are the same [Eq. (\ref{RhoVlimits})], a property that guarantees the equality of all (average) entropies in Eq. (\ref{Central}).
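The closed form of $\rho _{\Omega }$ can be checked by direct numerical integration over $V$, using that $\Omega _{V}$ [Eq. (\ref{Omega})] is diagonal in the $\sigma _{z}$ eigenbasis with entries $e^{-(V\mp 1)^{2}/4a^{2}}$. The sketch below is illustrative: the grid and the values of $a$ and of the Bloch parameters are our own choices.

```python
import numpy as np

a = 0.7
V = np.linspace(-12, 12, 4001)
dV = V[1] - V[0]
norm = (2 * np.pi * a**2) ** -0.25
Kp = norm * np.exp(-(V - 1)**2 / (4 * a**2))   # eigenvalue on |+>
Km = norm * np.exp(-(V + 1)**2 / (4 * a**2))   # eigenvalue on |->

# arbitrary initial qubit state rho_I(r, theta_I, phi_I)
r, thI, phI = 0.8, 1.1, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
rho_I = 0.5 * (np.eye(2) + r * (np.sin(thI)*np.cos(phI)*sx
                                + np.sin(thI)*np.sin(phI)*sy + np.cos(thI)*sz))

# completeness: int dV Omega_V^† Omega_V = I (each diagonal entry integrates to 1)
print(np.sum(Kp**2) * dV, np.sum(Km**2) * dV)

# rho_Omega = int dV Omega_V rho_I Omega_V^†, by direct summation over the grid
rho_Om = np.zeros((2, 2), dtype=complex)
for kp, km in zip(Kp, Km):
    Om = np.diag([kp, km]).astype(complex)
    rho_Om += dV * (Om @ rho_I @ Om.conj())

# closed form: populations unchanged, coherences damped by exp[-1/(2 a^2)]
damp = np.exp(-1 / (2 * a**2))
rho_Om_exact = rho_I * np.array([[1, damp], [damp, 1]])
print(np.max(np.abs(rho_Om - rho_Om_exact)))  # small discretization error
```

Decreasing $a$ toward zero drives the off-diagonal damping factor to zero (strong measurement), while large $a$ leaves $\rho _{I}$ essentially unchanged (weak measurement), matching the limits quoted above.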
In the limit $a\rightarrow 0$ the first measurement corresponds to a strong projective one in the basis $\{|\pm \rangle \}$ of eigenstates of $\sigma _{z}.$ When $\theta =0,$ for arbitrary $\phi ,$ the second projective measurement is performed in the same basis $\{|\pm \rangle \}.$ Thus, both measurement outcomes are \textit{completely correlated}, which leads to the equality of the average entropies of the selective and retrodicted measurements. On the other hand, for $\theta =\pi /2,$ $\phi =\pi ,$ the second measurement is performed in the basis of eigenstates of $\sigma _{x}.$ In this case, both measurement outcomes are \textit{statistically independent} (see Appendix A), which leads to the equality of the average entropies of the non-selective and retrodicted measurements.

While the previous properties are strictly fulfilled for $a=0,$ in Fig. 2 they remain approximately valid for $0\leq a\leq a_{s}\approx 0.4.$ Thus, \textit{from an entropic point of view}, in that interval the first measurement may be considered as a projective one. In fact, in all curves of Fig. 2, the value of the plateau regime around the origin can be estimated by taking into account two successive projective measurements, the first one in the $z$-direction and the second one in the direction defined by the angles $(\theta ,\phi ).$

\subsection*{Post-selected expectation values and entropies}

Under post-selection \cite{murch}, the measurement defined by the operator (\ref{Omega}) leads to the so-called weak values \cite{weak}. Here, this feature is analyzed from an entropic point of view. From the retrodicted measurement scheme it is possible to define the averages
\begin{equation}
\langle V_{\Omega }\rangle \equiv \int_{-\infty }^{+\infty }dVVp(V),\ \ \ \ \ \ \ \langle V_{\pm }\rangle \equiv \int_{-\infty }^{+\infty }dVVp(V|\pm ).
\label{weakDef}
\end{equation}
Here, $\langle V_{\Omega }\rangle $ gives the (unconditional) average of (the random variable) $V$ associated to the first measurement. On the other hand, $\langle V_{\pm }\rangle $ is the (conditional) average of $V$ given that the second measurement outcome is $y=\pm .$ In agreement with Eq. (\ref{Iguales}), they fulfill the relation $\langle V_{\Omega }\rangle =p(+)\langle V_{+}\rangle +p(-)\langle V_{-}\rangle .$ Furthermore, from Eq.~(\ref{RhoVUn}) it follows that $\langle V_{\Omega }\rangle =\langle +|\rho _{I}|+\rangle -\langle -|\rho _{I}|-\rangle =\mathrm{Tr}[\rho _{I}\sigma _{z}]=r_{I}\cos (\theta _{I}).$ Consistently, \textit{anomalous weak values} are defined by the condition $|\langle V_{\pm }\rangle |>1.$

In Fig. 3(a) and (b) we show the behavior of $\langle V_{\pm }\rangle $ as a function of the parameter $a.$ As expected, by increasing the parameter $a$ (weak measurement limit) the anomalous property $|\langle V_{\pm }\rangle |>1$ may develop. Furthermore, we find that this feature is absent for $0\leq a\leq a_{s}\approx 0.4,$ which corresponds to the interval where, from an \textit{entropic point of view}, the first measurement can be approximated by a strong projective one (plateaus in Fig. 2).

Similarly to expectation values, one can define the \textit{conditional entropies} $\mathcal{S}[\rho _{\pm }],$ which correspond to the entropies of each post-selected smoothed state, Eq. (\ref{smoothed}). For the same parameter values, these objects are shown in Fig. 3(c) and (d). We find that $\mathcal{S}[\rho _{\pm }]$ do not fulfill the (average) bounds (\ref{Central}). In addition, we deduce that this feature cannot be related to the anomalous property of the weak expectation values. In fact, in general, any relation may occur, that is, normal or anomalous weak values may develop while the corresponding conditional entropies may or may not be bounded by the constraints (\ref{Central}).
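The averages (\ref{weakDef}) can be evaluated numerically from the operator definitions alone. The sketch below is an illustration with our own parameter values (not those of Fig. 3); for simplicity both azimuthal angles are set to zero, so all matrices are real.

```python
import numpy as np

a = 2.0                      # relatively weak first measurement
r, thI = 0.9, 2.0            # initial state rho_I(r, theta_I, phi_I = 0)
th = 2.5                     # second measurement direction (phi = 0)

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
rho_I = 0.5 * (np.eye(2) + r * (np.sin(thI)*sx + np.cos(thI)*sz))

n_p = np.array([np.cos(th/2), np.sin(th/2)])
n_m = np.array([-np.sin(th/2), np.cos(th/2)])
M = [np.outer(n_p, n_p), np.outer(n_m, n_m)]   # M_± = |n_±><n_±|

V = np.linspace(-25, 25, 8001)
dV = V[1] - V[0]
norm = (2*np.pi*a**2) ** -0.25
Kp = norm * np.exp(-(V - 1)**2 / (4*a**2))
Km = norm * np.exp(-(V + 1)**2 / (4*a**2))

# joint densities p(±, V) = Tr[Omega_V rho_I Omega_V^† M_±]
p_joint = np.zeros((2, V.size))
for i, (kp, km) in enumerate(zip(Kp, Km)):
    Om = np.diag([kp, km])
    t = Om @ rho_I @ Om
    for j, Mj in enumerate(M):
        p_joint[j, i] = np.trace(t @ Mj).real

p_pm = p_joint.sum(axis=1) * dV                    # p(+), p(-)
V_pm = (p_joint * V).sum(axis=1) * dV / p_pm       # conditional <V_±>
V_un = (p_joint * V).sum() * dV                    # unconditional <V_Omega>

print(V_pm)                                        # may exceed 1 in modulus
print(abs(V_un - (p_pm * V_pm).sum()))             # decomposition of <V_Omega>
print(abs(V_un - r*np.cos(thI)))                   # <V_Omega> = r_I cos(theta_I)
```

Sweeping $a$ from small to large values with this code reproduces the qualitative behavior described above: the conditional averages stay within $[-1,1]$ for small $a$ and may become anomalous in the weak-measurement regime.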
\subsection{Retrodiction in a bipartite quantum-classical optical-like hybrid system}

Retrodiction was studied in different physical arrangements whose effective dynamics can be described through a quantum system $(A)$ coupled to unobservable stochastic classical degrees of freedom $(B)$ \cite{tsang}. The quantum system is continuously monitored in time. For optical systems, the fluorescence signal is observed via photon- or homodyne-detection processes \cite{wiseman,retro}. In Ref. \cite{molmer}, the state of the (two-state) classical system randomly modulates the coherent (fluorescent intensity) system dynamics. In general, one may also consider situations where the classical subsystem modulates any of the characteristic parameters of the quantum evolution \cite{sms}. These hybrid dynamics can also be studied from the present perspective, that is, through the entropic inequality Eq. (\ref{Central}) and the mutual information inequalities Eqs. (\ref{positive2}) and (\ref{positive1}).

We consider a hybrid quantum-classical system whose initial bipartite state is
\begin{equation}
\rho _{I}^{ab}=\sum_{\mu }q_{\mu }\rho _{\mu }\otimes |c_{\mu }\rangle \langle c_{\mu }|.  \label{InitialBiparto}
\end{equation}
Here, $\{\rho _{\mu }\}$ are different states $(\mathrm{Tr}_{a}[\rho _{\mu }]=1)$ of a quantum two-level system $A,$ while the projectors $\{|c_{\mu }\rangle \langle c_{\mu }|\}$ represent different (countable) macrostates of the classical system $B.$ Their statistical weights satisfy $\sum_{\mu }q_{\mu }=1.$ The states $\{\rho _{\mu }\}$ are written as
\begin{equation}
\rho _{\mu }=\frac{1}{2}(\mathrm{I}+\mathbf{r}_{\mu }\cdot \mathbf{\sigma }),  \label{initialA}
\end{equation}
where, similarly to Eq. (\ref{RhoInitial}), $\{\mathbf{r}_{\mu }\}$ are Bloch vectors. Hence, $\rho _{\mu }=\rho _{\mu }(r_{\mu },\theta _{\mu },\phi _{\mu }).$
\begin{figure}
\caption{(a)-(b) Unconditional and conditional expectation values [Eq.
(\protect\ref{weakDef})]. (c)-(d) Conditional entropies of the post-selected smoothed states.}
\end{figure}
The first (projective) measurement is defined by the operators $(m\rightarrow \mu )$
\begin{equation}
\Omega _{\mu }=|c_{\mu }\rangle \langle c_{\mu }|,  \label{macro}
\end{equation}
which are associated to each classical macrostate. The operators of the second measurement are $(y=\pm )$
\begin{equation}
M_{+}=|-\rangle \langle +|,\ \ \ \ \ \ \ \ \ M_{-}=|-\rangle \langle -|,  \label{photon}
\end{equation}
where, as before, $|\pm \rangle $ are the eigenstates of $\sigma _{z},$ and $M_{+}^{\dag }M_{+}+M_{-}^{\dag }M_{-}=\mathrm{I}.$ This generalized measurement \cite{nielsen} can straightforwardly be read as a photon-detection process. In fact, $M_{+}$ and $M_{-}$ can be associated to the presence and absence of a transition $|+\rangle \rightsquigarrow |-\rangle ,$ that is, a photon-detection event. The previous definitions completely set the retrodicted measurement scheme of Fig.~1.

The states of the bipartite system after a measurement performed with the operators $\{\Omega _{\mu }\},$ in selective and non-selective ways, respectively read [Eqs. (\ref{RhoEme}) and (\ref{Rho_Omega})]
\begin{equation}
\rho _{\mu }^{ab}=\rho _{\mu }\otimes |c_{\mu }\rangle \langle c_{\mu }|,\ \ \ \ \ \ \ \ \ \ \rho _{\Omega }^{ab}=\rho _{I}^{ab}.  \label{ClasicoSelectivoNon}
\end{equation}
The first expression tells us that $\rho _{\mu }$ is the state of $A$ \textit{given} that $B$ is in the macrostate $\mu .$ Similarly to the experimental situations quoted previously, the second equality represents the inaccessibility of the classical degrees of freedom.

Using that $M_{+}^{\dag }M_{+}=|+\rangle \langle +|$ and $M_{-}^{\dag }M_{-}=|-\rangle \langle -|,$ the joint probabilities Eq. (\ref{conjunta}) $[p(y,m)\rightarrow p(\pm ,\mu )]$ read
\begin{equation}
p(\pm ,\mu )=q_{\mu }\langle \pm |\rho _{\mu }|\pm \rangle =q_{\mu }\frac{1}{2}[1\pm r_{\mu }\cos (\theta _{\mu })].
\end{equation}
This expression in turn allows us to calculate the retrodicted probabilities $\{p(\mu |\pm )\}$ [Eq. (\ref{Pmy})] and $p(\pm )$ [Eq. (\ref{py})]. The retrodicted smoothed state reads [Eq. (\ref{Rho_y})]
\begin{equation}
\rho _{\pm }^{ab}=\sum_{\mu }p(\mu |\pm )\rho _{\mu }\otimes |c_{\mu }\rangle \langle c_{\mu }|.
\end{equation}
Notice that, in contrast with projective measurements in arbitrary bipartite systems [Eq. (\ref{entropias})], here the smoothed state only differs from the initial condition [Eq. (\ref{InitialBiparto})] by the replacement $q_{\mu }\rightarrow p(\mu |\pm ).$ A similar result was found in Ref. \cite{retro}.

In order to exemplify the problem we consider a two-state classical system, $\mu =1,2.$ Therefore, the free parameters are $(r_{1},\theta _{1},\phi _{1})$ and $(r_{2},\theta _{2},\phi _{2})$ for the initial states $\{\rho _{\mu }\}_{\mu =1,2},$ while an extra parameter $q$ gives their weights in the initial bipartite state (\ref{InitialBiparto}), $q_{1}=q$ and $q_{2}=(1-q).$ Explicit expressions for the entropies and mutual information can be read from Appendix C.

In Fig. 4(a), for a set of particular initial conditions, we plot the entropy of the quantum subsystem $A$ as a function of the weight $q.$ Consistently, as demonstrated in Sec.~III, the inequalities (\ref{Central}) are fulfilled by the entropies of the subsystem. In Fig. 4(b) we show the dependence of the (average) mutual information for the non-selective, retrodicted, and selective measurement schemes. In agreement with Eqs. (\ref{positive2}) and (\ref{positive1}), we observe that, while the retrodicted scheme implies an entropic benefit for each subsystem, the retrodicted measurement decreases their mutual information when compared with the non-selective measurement. The difference between these objects is measured by the retrodicted-like Holevo bound (\ref{retroHolevo}).
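The hybrid example admits a compact numerical implementation, since all states are classical-quantum mixtures. The sketch below uses our own illustrative Bloch parameters and weight $q$ (not those of Fig. 4); for such mixtures the mutual information reduces to a Holevo-type quantity, $\mathcal{I}=\mathcal{S}[\rho^{a}]-\sum_{\mu }w_{\mu }\mathcal{S}[\rho _{\mu }]$.

```python
import numpy as np

def vn(rho, eps=1e-12):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > eps]
    return float(-np.sum(ev * np.log(ev)))

def bloch(rm, th, ph):
    """Qubit state rho_mu(r_mu, theta_mu, phi_mu) from its Bloch vector."""
    n = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0]).astype(complex)
    return 0.5*(np.eye(2) + rm*(n[0]*sx + n[1]*sy + n[2]*sz))

q = 0.3
qs = np.array([q, 1 - q])
rhos = [bloch(0.9, 0.4, 0.0), bloch(0.7, 2.3, 1.0)]

# joint p(±, mu) = q_mu <±| rho_mu |±>
pops = np.array([[rho[0, 0].real, rho[1, 1].real] for rho in rhos])
p_joint = (qs[:, None] * pops).T        # rows: y = ±; columns: mu
p_pm = p_joint.sum(axis=1)              # p(+), p(-)
p_mu_given = p_joint / p_pm[:, None]    # retrodicted p(mu|±)

# subsystem-A states: non-selective, smoothed, and selective
rho_A_Om = sum(qm*r for qm, r in zip(qs, rhos))
rho_A_y = [sum(p_mu_given[i, m]*rhos[m] for m in range(2)) for i in range(2)]
S_Om = vn(rho_A_Om)
S_retro = sum(p_pm[i]*vn(rho_A_y[i]) for i in range(2))
S_sel = sum(qs[m]*vn(rhos[m]) for m in range(2))
print(S_Om >= S_retro >= S_sel)   # Eq. (Central) for subsystem A

# mutual informations of the classical-quantum states
I_Om = S_Om - S_sel
I_retro = sum(p_pm[i]*(vn(rho_A_y[i])
              - sum(p_mu_given[i, m]*vn(rhos[m]) for m in range(2)))
              for i in range(2))
print(I_Om >= I_retro >= 0)       # degradation of the mutual information
```

Varying $q$ over $[0,1]$ with this code traces out the entropy and mutual-information curves discussed in connection with Fig. 4; the selective mutual information vanishes identically since each $\rho _{\mu }^{ab}$ is a product state.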
On the other hand, the average mutual information for the selective measurement vanishes [see Eq. (\ref{ClasicoSelectivoNon})]. The main features shown in Fig. 4 remain valid for arbitrary initial conditions.
\begin{figure}
\caption{(a) Entropy of the quantum subsystem for the measurement scheme defined by Eqs. (\protect\ref{macro}) and (\protect\ref{photon}). (b) Average mutual information for the same scheme.}
\end{figure}

\section{Summary and Conclusions}

We performed an informational-entropic study of retrodicted quantum measurements (Fig.~1). Given that a non-selective measurement was performed over a system of interest, a second successive measurement is used for improving the estimation of the possible outcomes of the former one. From the quantum expressions for the outcome probabilities, Bayes rule allows one to obtain the corresponding retrodicted probabilities, Eq.~(\ref{Pmy}). The system state after the retrodicted measurement (smoothed state) results from a sum of the system transformations associated with each measurement outcome, weighted by the retrodicted probabilities, Eq. (\ref{Rho_y}).

Based on the concavity of the von Neumann entropy we proved that, on average, the entropy of the smoothed state is bounded from above by the entropy associated with the first non-selective measurement and from below by the entropy corresponding to its selective resolution, Eq. (\ref{Central}). This central result quantifies how much information gain may be obtained from the retrodicted measurement scheme. For bipartite systems it was shown that, independently of the nature of the measurements, the same property is valid for the entropy of each subsystem. In addition, based on the strong subadditivity of the von Neumann entropy, upper bounds for the mutual information changes were also established, Eqs. (\ref{Mutual_2}) and (\ref{Mutual_1}).

We specified the previous results for a bipartite system where the measurements are performed successively over each single subsystem, the former being projective.
The retrodicted measurement diminishes the entropy of each subsystem. Nevertheless, their (average) mutual information is diminished with respect to that of the non-selective measurement, Eq. (\ref{positive2}). The diminished (average) mutual information remains bounded from below by the (average) mutual information of the selective resolution of the first measurement, Eq. (\ref{positive1}). These inequalities in turn lead to a kind of retrodicted Holevo inequality that bounds the (classical) mutual information [Eq. (\ref{retroHolevo})] between the two sets of measurement outcomes.

As explicit examples we worked out the case of a qubit subjected to weak and strong retrodicted measurements. All theoretical results are confirmed by the model. In addition, we find that anomalous weak values arise when, from an entropic point of view, the first measurement cannot be approximated by a strong projective one. On the other hand, we considered a bipartite quantum-classical optical-like hybrid system. Degradation of the mutual information under the retrodicted measurement scheme was explicitly confirmed.

The developed results quantify the information changes that follow from a retrodicted measurement. While the entropy of the system of interest is always diminished, implying an information advantage, in bipartite systems the mutual information may be degraded. These results provide a solid basis for studying other informational measures that may be of interest in physical arrangements where retrodicted measurements are implemented.

\section*{Acknowledgments}

This paper was supported by Consejo Nacional de Investigaciones Cient\'{\i}ficas y T\'{e}cnicas (CONICET), Argentina.

\appendix

\section{Demonstration of entropy inequalities}

The entropy inequalities for the retrodicted measurement scheme can be derived as follows.
They rely on the concavity of the von Neumann entropy \cite{nielsen}, $\mathcal{S}[\sum_{k}p(k)\rho _{k}]\geq \sum_{k}p(k)\mathcal{S}[\rho _{k}],$ with equality if and only if all states $\rho _{k}$ for which $p(k)>0$ are identical. Starting from $\mathcal{S}[\sum_{m}p(m)\rho _{m}]$ and using Eq. (\ref{Iguales}), one arrives at the following inequalities
\begin{subequations}
\begin{eqnarray}
\mathcal{S}\Big{[}\sum_{m}p(m)\rho _{m}\Big{]}\!\! &=&\!\!\mathcal{S}\Big{[}\sum_{y}p(y)\rho _{y}\Big{]}\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}],\ \ \ \ \ \ \   \label{Uno} \\
\! &=&\!\sum_{y}p(y)\mathcal{S}\Big{[}\sum_{m}p(m|y)\rho _{m}\Big{]},\ \  \\
\! &\geq &\!\sum_{y}p(y)\sum_{m}p(m|y)\mathcal{S}[\rho _{m}],  \label{Dos} \\
\! &=&\!\sum_{y}\sum_{m}p(m,y)\mathcal{S}[\rho _{m}], \\
\! &=&\!\sum_{m}p(m)\mathcal{S}[\rho _{m}],
\end{eqnarray}
\end{subequations}
where we have used that $p(m)=\sum_{y}p(m,y)=\sum_{y}p(m|y)p(y).$ Taking into account the first and last lines, it follows that
\begin{equation}
\mathcal{S}\Big{[}\sum_{m}p(m)\rho _{m}\Big{]}\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}]\geq \sum_{m}p(m)\mathcal{S}[\rho _{m}],  \label{EntropyAgain}
\end{equation}
which recovers the entropy inequalities (\ref{Central}).

Given that \textit{equality} in the concavity entropy inequality holds if and only if all states with nonvanishing weight are the same, from Eq.~(\ref{Uno}) we deduce that the upper bound is achieved when all states $\{\rho _{y}\}$ are the same. This condition occurs when all states $\{\rho _{m}\}$ are identical, or alternatively when $p(m|y)=p(m)$ [see definition (\ref{Rho_y})]. Hence, the joint probability Eq.~(\ref{conjunta}) satisfies $p(y,m)=p(y)p(m).$ This condition implies that both measurement results, $\{m\}$ and $\{y\},$ are statistically independent.
This property is fulfilled by projective measurements $\Omega_{m}=|m\rangle \langle m|$ and $M_{y}=|y\rangle \langle y|,$ where the bases $\{|m\rangle\}$ and $\{|y\rangle\}$ are such that $|\langle m|y\rangle|^{2}$ is independent of $m$ \cite{nota1}. Similarly, from Eq.~(\ref{Dos}) we deduce that the lower bound in Eq.~(\ref{EntropyAgain}) is achieved when all states $\{\rho_{m}\}$ are the same, or alternatively when $p(m|y)=\delta_{my}.$ Hence, the joint probability Eq.~(\ref{conjunta}) satisfies $p(y,m)=\delta_{my}p(y)=\delta_{ym}p(m),$ that is, both measurement results, $\{m\}$ and $\{y\},$ are completely correlated. From Eq.~(\ref{conjunta}), we deduce that this condition is fulfilled by projective measurements $\Omega_{m}=|m\rangle \langle m|$ and $M_{y}=|y\rangle \langle y|,$ where the bases $\{|m\rangle\}$ and $\{|y\rangle\}$ are the same, $|\langle m|y\rangle|^{2}=\delta_{my}.$ We notice that statistical independence and complete correlation between both measurement outcomes also give the equality conditions for the entropies of the measurement probabilities $\{p(m)\}$ and their retrodicted version $\{p(m|y)\}.$ They satisfy the classical inequality \cite{nielsen}
\begin{equation}
\mathcal{H}[m]\geq \mathcal{H}[m|y]\geq 0,
\end{equation}
where $\mathcal{H}[m]=-\sum_{m}p(m)\ln[p(m)]$ is the Shannon entropy of the outcomes $\{m\}$ and $\mathcal{H}[m|y]=-\sum_{y}p(y)\sum_{m}p(m|y)\ln[p(m|y)]$ is the conditional Shannon entropy of outcomes $\{m\}$ given outcomes $\{y\}.$ In fact, $\mathcal{H}[m]=\mathcal{H}[m|y]$ when $p(y,m)=p(y)p(m)$ \cite{nielsen}.
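As an illustration, the classical bound $\mathcal{H}[m]\geq \mathcal{H}[m|y]\geq 0$ can be verified directly from an arbitrary joint distribution $p(m,y)$; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
p_my = rng.random((4, 5))
p_my /= p_my.sum()               # arbitrary joint distribution p(m, y)
p_m = p_my.sum(axis=1)           # marginal p(m)
p_y = p_my.sum(axis=0)           # marginal p(y)

# H[m] = -sum_m p(m) ln p(m)
H_m = -np.sum(p_m * np.log(p_m))
# H[m|y] = -sum_y p(y) sum_m p(m|y) ln p(m|y) = -sum_{m,y} p(m,y) ln p(m|y)
p_m_given_y = p_my / p_y                       # columns are p(m|y)
H_m_given_y = -np.sum(p_my * np.log(p_m_given_y))
assert H_m >= H_m_given_y >= 0.0               # H[m] >= H[m|y] >= 0
```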
On the other hand, the lower bound $\mathcal{H}[m|y]=0$ occurs when $\{m\}$ is a deterministic function of $\{y\}$ \cite{nielsen}, which here corresponds to $p(y,m)=\delta_{my}p(y)=\delta_{ym}p(m).$ By using the upper bound \cite{nielsen} $\sum_{k}p(k)\mathcal{S}[\rho_{k}]+\mathcal{H}[k]\geq \mathcal{S}[\sum_{k}p(k)\rho_{k}],$ with equality if and only if all states $\rho_{k}$ have support on orthogonal subspaces, where $\mathcal{H}[k]=-\sum_{k}p(k)\ln[p(k)],$ under the replacement $k\rightarrow y$ it follows that
\begin{equation}
\mathcal{H}[y]\geq \mathcal{S}\Big{[}\sum_{y}p(y)\rho _{y}\Big{]}-\sum_{y}p(y)\mathcal{S}[\rho _{y}].
\end{equation}
Taking into account that $\sum_{y}p(y)\rho_{y}=\sum_{m}p(m)\rho_{m}$ [Eq.~(\ref{Iguales})], the previous expression recovers Eq.~(\ref{HyUpperBound}). This upper bound is achieved when all states $\{\rho_{y}\}$ have support on orthogonal subspaces. On the other hand, taking $k\rightarrow m$ the upper entropy bound becomes
\begin{subequations}
\begin{eqnarray}
\mathcal{H}[m] &\geq &\mathcal{S}\Big{[}\sum_{m}p(m)\rho _{m}\Big{]}-\sum_{m}p(m)\mathcal{S}[\rho _{m}],\ \ \ \ \\
&\geq &\sum_{y}p(y)\mathcal{S}[\rho _{y}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}],
\end{eqnarray}
where the last inequality is guaranteed by Eq.~(\ref{EntropyAgain}), which in turn recovers Eq.~(\ref{LowerBound}). This upper bound is achieved when all states $\{\rho_{m}\}$ have support on orthogonal subspaces and $p(m|y)=p(m).$ \section{Demonstration of mutual information inequalities} Here we demonstrate the inequalities that bound the changes in the mutual information of a bipartite arrangement consisting of subsystems $A$ and $B,$ Eqs.~(\ref{Mutual_2}) and (\ref{Mutual_1}). The demonstrations rely on the strong subadditivity property of the von Neumann entropy \cite{nielsen}. Hence, an extra ancilla system $C$ is introduced.
\end{subequations}
\textit{First inequality}: The tripartite arrangement is described by the state
\begin{equation}
\rho ^{abc}=\sum_{m,y}p(m,y)\rho _{m}^{ab}\otimes |y\rangle \langle y|,
\end{equation}
where $p(m,y)$ is an arbitrary joint probability of $m$ and $y.$ Hence, $\sum_{m}p(m,y)=p(y),$ and $\sum_{y}p(m,y)=p(m).$ The set $\{\rho_{m}^{ab}\}$ are states in the $AB$ Hilbert space, $\mathrm{Tr}_{ab}[\rho_{m}^{ab}]=1,$ while $\{|y\rangle\}$ is an orthonormal basis of the Hilbert space of $C.$ The marginal states of $AB$ and $C,$ $\rho^{ab}$ and $\rho^{c}$ respectively, read
\begin{equation}
\rho ^{ab}=\sum_{m}p(m)\rho _{m}^{ab},\ \ \ \ \ \ \ \rho ^{c}=\sum_{y}p(y)|y\rangle \langle y|,
\end{equation}
where $\rho^{ab}$ by partial trace defines the states of $A$ and $B,$ $\rho^{a}=\mathrm{Tr}_{b}[\rho^{ab}]$ and $\rho^{b}=\mathrm{Tr}_{a}[\rho^{ab}]$ respectively. The entropy of the tripartite state $\rho^{abc},$ by using that $p(m,y)=p(m|y)p(y),$ can be written as
\begin{equation}
\mathcal{S}[\rho ^{abc}]=\mathcal{H}[y]+\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}], \label{Stripartita}
\end{equation}
where
\begin{equation}
\rho _{y}^{ab}=\sum_{m}p(m|y)\rho _{m}^{ab},
\end{equation}
and $\mathcal{H}[y]$ is the classical Shannon entropy of the distribution $\{p(y)\},$ $\mathcal{H}[y]=-\sum_{y}p(y)\ln[p(y)].$ Similarly, the entropies $\mathcal{S}[\rho^{ac}]$ and $\mathcal{S}[\rho^{bc}]$ follow from Eq.~(\ref{Stripartita}) under the replacements $\rho_{y}^{ab}\rightarrow \rho_{y}^{a}=\mathrm{Tr}_{b}[\rho_{y}^{ab}]$ and $\rho_{y}^{ab}\rightarrow \rho_{y}^{b}=\mathrm{Tr}_{a}[\rho_{y}^{ab}]$ respectively. Using the strong subadditivity condition $\mathcal{S}[\rho ^{abc}]+\mathcal{S}[\rho ^{a}]\leq \mathcal{S}[\rho ^{ab}]+\mathcal{S}[\rho ^{ac}]$ \cite{nielsen}, it follows that
\begin{equation}
\mathcal{S}[\rho ^{a}]-\mathcal{S}[\rho ^{ab}]\leq \sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}].
\end{equation}
Interchanging the indices $a\leftrightarrow b,$ the previous inequality becomes
\begin{equation}
\mathcal{S}[\rho ^{b}]-\mathcal{S}[\rho ^{ab}]\leq \sum_{y}p(y)\mathcal{S}[\rho _{y}^{b}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}].
\end{equation}
Adding the previous two expressions leads to
\begin{equation}
\mathcal{I}[\rho ^{ab}]-\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}]\leq \mathcal{S}[\rho ^{ab}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}],
\end{equation}
which recovers Eq.~(\ref{Mutual_2}), where the mutual information of a bipartite state is $\mathcal{I}[\rho^{ab}]=\mathcal{S}[\rho^{a}]+\mathcal{S}[\rho^{b}]-\mathcal{S}[\rho^{ab}].$ \textit{Second inequality}: In this case the tripartite arrangement is described by the state
\begin{equation}
\rho _{y}^{abc}=\sum_{m}p(m|y)\rho _{m}^{ab}\otimes |m\rangle \langle m|. \label{tripartitaY}
\end{equation}
This state depends parametrically on $y.$ Here $p(m|y)$ is an arbitrary conditional probability of $m$ given $y,$ $\sum_{m}p(m|y)=1.$ The set $\{\rho_{m}^{ab}\}$ are states in the Hilbert space of the bipartite system $AB,$ $\mathrm{Tr}_{ab}[\rho_{m}^{ab}]=1,$ while here $\{|m\rangle\}$ is an orthonormal basis of the Hilbert space of $C.$ The marginal states of $AB$ and $C,$ $\rho_{y}^{ab}$ and $\rho_{y}^{c}$ respectively, read
\begin{equation}
\rho _{y}^{ab}=\sum_{m}p(m|y)\rho _{m}^{ab},\ \ \ \ \ \ \rho _{y}^{c}=\sum_{m}p(m|y)|m\rangle \langle m|.
\end{equation}
The states of $A$ and $B$ read $\rho_{y}^{a}=\mathrm{Tr}_{b}[\rho_{y}^{ab}]$ and $\rho_{y}^{b}=\mathrm{Tr}_{a}[\rho_{y}^{ab}]$ respectively. A straightforward calculation leads to
\begin{equation}
\mathcal{S}[\rho _{y}^{abc}]=\mathcal{H}[m]|_{y}+\sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{ab}], \label{StripartitaY}
\end{equation}
where
\begin{equation}
\mathcal{H}[m]|_{y}\equiv -\sum_{m}p(m|y)\ln [p(m|y)].
\end{equation}
The entropies $\mathcal{S}[\rho _{y}^{ac}]$ and $\mathcal{S}[\rho _{y}^{bc}]$ follow from Eq.
(\ref{StripartitaY}) under the replacements $\rho_{m}^{ab}\rightarrow \rho_{m}^{a}$ and $\rho_{m}^{ab}\rightarrow \rho_{m}^{b}$ respectively. Using the strong subadditivity condition \cite{nielsen} $\mathcal{S}[\rho^{abc}]+\mathcal{S}[\rho^{a}]\leq \mathcal{S}[\rho^{ab}]+\mathcal{S}[\rho^{ac}],$ with $\rho^{abc}\rightarrow \rho_{y}^{abc}$ [Eq.~(\ref{tripartitaY})], jointly with Eq.~(\ref{StripartitaY}), leads to
\begin{equation}
\mathcal{S}[\rho _{y}^{a}]-\mathcal{S}[\rho _{y}^{ab}]\leq \sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{a}]-\sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{ab}].
\end{equation}
Interchanging $a\leftrightarrow b$ in the strong subadditivity condition, the previous equation becomes
\begin{equation}
\mathcal{S}[\rho _{y}^{b}]-\mathcal{S}[\rho _{y}^{ab}]\leq \sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{b}]-\sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{ab}].
\end{equation}
By adding the previous two inequalities, it follows that
\begin{equation}
\mathcal{I}[\rho _{y}^{ab}]-\sum_{m}p(m|y)\mathcal{I}[\rho _{m}^{ab}]\leq \mathcal{S}[\rho _{y}^{ab}]-\sum_{m}p(m|y)\mathcal{S}[\rho _{m}^{ab}].
\end{equation}
Applying $\sum_{y}p(y)$ to each contribution in the previous inequality, and using that $\sum_{y}p(m|y)p(y)=p(m),$ leads to
\begin{eqnarray}
&&\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}] \notag \\
&\leq &\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{ab}],
\end{eqnarray}
which recovers Eq.~(\ref{Mutual_1}). \section{Bipartite projective measurements} Here, we apply the main results of Sec.~II to the case of bipartite projective measurements presented in Sec.~III. \subsection{Entropy inequalities} From Eq.~(\ref{entropiasss}), a straightforward calculation gives
\begin{equation}
\mathcal{S}[\rho _{\Omega }^{ab}]=\mathcal{H}[m]+\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}]. \label{SabProj}
\end{equation}
From Eq.
(\ref{entropias}), the average entropy of the smoothed state reads
\begin{equation}
\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]=\mathcal{H}[m|y]+\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}], \label{SabSmooth}
\end{equation}
where $\mathcal{H}[m|y]=-\sum_{y}p(y)\sum_{m}p(m|y)\ln[p(m|y)]=\mathcal{H}[m,y]-\mathcal{H}[y]$ is the conditional entropy of outcomes $\{m\}$ given outcomes $\{y\}.$ From Eq.~(\ref{RhoAB_m}), the average entropy corresponding to the selective resolution of the non-selective measurement is
\begin{equation}
\sum_{m}p(m)\mathcal{S}[\rho _{m}^{ab}]=\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}]. \label{SabSelective}
\end{equation}
Using that $0\leq \mathcal{H}[m|y]\leq \mathcal{H}[m]$ \cite{nielsen}, it follows that the entropy inequalities (\ref{Central}) are fulfilled by the bipartite system. From Eqs.~(\ref{SabProj}) and (\ref{SabSmooth}), jointly with the inequality (\ref{HyUpperBound}), it follows that
\begin{equation}
\mathcal{S}[\rho _{\Omega }^{ab}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]=\mathcal{H}[m:y]\leq \mathcal{H}[y], \label{UpperProyBound}
\end{equation}
where $\mathcal{H}[m:y]=\mathcal{H}[m]-\mathcal{H}[m|y]=\mathcal{H}[m]+\mathcal{H}[y]-\mathcal{H}[m,y]$ is the classical mutual information between the outcomes of both measurements, $\{m\}$ and $\{y\}.$ The demonstration of the inequality $\mathcal{H}[m:y]\leq \mathcal{H}[y]$ can be found in \cite{nielsen}. In addition, the inequality (\ref{LowerBound}), from Eqs.~(\ref{SabSmooth}) and (\ref{SabSelective}), reads
\begin{equation}
\sum_{y}p(y)\mathcal{S}[\rho _{y}^{ab}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{ab}]=\mathcal{H}[m|y]\leq \mathcal{H}[m]. \label{LowerProyBound}
\end{equation}
The demonstration of the inequality $0\leq \mathcal{H}[m|y]\leq \mathcal{H}[m]$ can also be found in \cite{nielsen}. The previous two equations demonstrate that the general inequalities (\ref{HyUpperBound}) and (\ref{LowerBound}) are in fact fulfilled.
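Equation (\ref{SabProj}) expresses that $\rho_{\Omega}^{ab}$ is block diagonal in the basis $\{|m\rangle\},$ so its entropy splits into a classical part $\mathcal{H}[m]$ plus the average entropy of the blocks. A minimal numerical sketch of the identity $\mathcal{S}[\sum_{m}p(m)\rho_{m}^{a}\otimes|m\rangle\langle m|]=\mathcal{H}[m]+\sum_{m}p(m)\mathcal{S}[\rho_{m}^{a}]$ (included only as an illustration; the probabilities and states are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def vn_entropy(rho):
    # Von Neumann entropy with natural logarithm
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def random_mixed_state(d=2):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

n = 3
p = rng.random(n); p /= p.sum()                 # probabilities p(m)
rho_a = [random_mixed_state() for _ in range(n)]

# rho_ab = sum_m p(m) rho_m^a (x) |m><m|, with {|m>} orthonormal
rho_ab = np.zeros((2 * n, 2 * n), dtype=complex)
for m in range(n):
    proj = np.zeros((n, n)); proj[m, m] = 1.0   # |m><m|
    rho_ab += p[m] * np.kron(rho_a[m], proj)

lhs = vn_entropy(rho_ab)
rhs = -np.sum(p * np.log(p)) + sum(p[m] * vn_entropy(rho_a[m]) for m in range(n))
assert abs(lhs - rhs) < 1e-9
```

The block-diagonal structure makes the eigenvalues of $\rho_{\Omega}^{ab}$ equal to $p(m)$ times the eigenvalues of each $\rho_{m}^{a}$, from which the identity follows.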
In the previous expressions the probabilities read $p(m)=\mathrm{Tr}_{a}[\langle m|\rho _{I}|m\rangle ]$ [Eq.~(\ref{RhoEme})]. Furthermore, $p(y|m)=\mathrm{Tr}_{a}[M_{y}^{\dagger }M_{y}\rho _{m}^{a}]$ [Eq.~(\ref{P_y_cond_m})], $p(y,m)=\mathrm{Tr}_{a}[\langle m|\rho _{I}|m\rangle M_{y}^{\dagger }M_{y}]$ [Eq.~(\ref{conjunta})], while the retrodicted probability $p(m|y)$ [Eq.~(\ref{Pmy})] reads $p(m|y)=\mathrm{Tr}_{a}[\langle m|\rho _{I}|m\rangle M_{y}^{\dagger }M_{y}]/p(y),$ where $p(y)=\sum_{m}\mathrm{Tr}_{a}[\langle m|\rho _{I}|m\rangle M_{y}^{\dagger }M_{y}]$ [Eq.~(\ref{py})]. \textit{Subsystems}: The previous results can also be specified for subsystems $A$ and $B.$ From Eqs.~(\ref{entropiasss}) and (\ref{entropias}), it follows that $\rho _{\Omega }^{b}=\sum_{m}p(m)|m\rangle \langle m|,$ and $\rho _{y}^{b}=\sum_{m}p(m|y)|m\rangle \langle m|.$ Furthermore, $\rho _{m}^{b}=|m\rangle \langle m|.$ Inequality (\ref{Central}), specified for subsystem $B,$ becomes $\mathcal{S}[\rho _{\Omega }^{b}]=\mathcal{H}[m]\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}^{b}]=\mathcal{H}[m|y]\geq \sum_{m}p(m)\mathcal{S}[\rho _{m}^{b}]=0,$ because $\mathcal{S}[\rho _{m}^{b}]=0.$ Hence, $\mathcal{H}[m]\geq \mathcal{H}[m|y]\geq 0,$ which is a well-known inequality valid for Shannon entropies \cite{nielsen}. Instead, for subsystem $A,$ Eq.~(\ref{Central}) leads to the non-trivial relation
\begin{equation}
\mathcal{S}[\rho _{\Omega }^{a}]\geq \sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]\geq \sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}], \label{EntropyA}
\end{equation}
where $\rho _{\Omega }^{a}=\sum_{m}p(m)\rho _{m}^{a}$ and $\rho _{y}^{a}=\sum_{m}p(m|y)\rho _{m}^{a}$ [see Eqs.~(\ref{entropiasss}) and (\ref{entropias})]. This inequality tells us that even though the first measurement is performed on subsystem $B,$ an information gain is also guaranteed for subsystem $A.$ The inequalities (\ref{HyUpperBound}) and (\ref{LowerBound}) can also be specified for each subsystem.
For subsystem $A$ they become $\mathcal{S}[\rho _{\Omega }^{a}]-\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]\leq \mathcal{H}[y],$ and $\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}]\leq \mathcal{H}[m].$ For subsystem $B$ they lead to the same classical entropic relations found previously. \subsection{Mutual information inequalities} The mutual information under the different measurement schemes is characterized by Eqs.~(\ref{Mutual_2}) and (\ref{Mutual_1}). Each term appearing in these inequalities is explicitly calculated below. From the entropy expressions (\ref{SabProj}), (\ref{SabSmooth}), and (\ref{SabSelective}), the mutual information associated with the different measurement stages reads
\begin{equation}
\mathcal{I}[\rho _{\Omega }^{ab}]=\mathcal{S}[\rho _{\Omega }^{a}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}],
\end{equation}
while
\begin{equation}
\sum_{y}p(y)\mathcal{I}[\rho _{y}^{ab}]=\sum_{y}p(y)\mathcal{S}[\rho _{y}^{a}]-\sum_{m}p(m)\mathcal{S}[\rho _{m}^{a}].
\end{equation}
The difference of the previous two equations leads to the lower bound of Eq.~(\ref{positive2}). On the other hand, from Eq.~(\ref{RhoAB_m}) it follows that $\sum_{m}p(m)\mathcal{I}[\rho _{m}^{ab}]=0,$ which in turn leads to the lower bound of Eq.~(\ref{positive1}). The upper bounds of Eqs.~(\ref{positive2}) and (\ref{positive1}) follow from the general inequalities (\ref{Mutual_2}) and (\ref{Mutual_1}) written in terms of Eqs.~(\ref{UpperProyBound}) and (\ref{LowerProyBound}) respectively. \end{document}
\begin{document} \title{Efficient Projection onto the Perfect Phylogeny Model} \begin{abstract} Several algorithms build on the \emph{perfect phylogeny model} to infer evolutionary trees. This problem is particularly hard when evolutionary trees are inferred from the fraction of genomes that have mutations in different positions, across different samples. Existing algorithms might do extensive searches over the space of possible trees. At the center of these algorithms is a projection problem that assigns a fitness cost to phylogenetic trees. In order to perform a wide search over the space of the trees, it is critical to solve this projection problem fast. In this paper, we use Moreau's decomposition for proximal operators, and a tree reduction scheme, to develop a new algorithm to compute this projection. Our algorithm terminates with an exact solution in a finite number of steps, and is extremely fast. In particular, it can search over all evolutionary trees with fewer than $11$ nodes (more than $2$ billion trees), a size relevant for several biological problems, in about $2$ hours. \end{abstract} \section{Introduction} The perfect phylogeny model (PPM) \cite{hudson1983properties,kimura1969number} is used in biology to study evolving populations. It assumes that the same position in the genome never mutates twice, hence mutations only accumulate. Consider a population of organisms evolving under the PPM. The evolution process can be described by a labeled rooted tree, $T = (r,\mathcal{V},\mathcal{E})$, where $r$ is the root, i.e., the oldest common ancestor, the nodes $\mathcal{V}$ are the mutants, and the edges $\mathcal{E}$ are mutations acquired between older and younger mutants. Since each position in the genome only mutates once, we can associate with each node $v \neq r$ a unique mutated position, the mutation associated to the ancestral edge of $v$. By convention, let us associate with the root $r$ a null mutation that is shared by all mutants in $T$.
This allows us to refer to each node $v\in \mathcal{V}$ as both a mutation in a position in the genome (the mutation associated to the ancestral edge of $v$), and a mutant (the mutant with the fewest mutations that has a mutation $v$). Hence, without loss of generality, $\mathcal{V} = \{1,\dots,q\}$, $\mathcal{E} = \{2,\dots,q\}$, where $q$ is the length of the genome, and $r = 1$ refers to both the oldest common ancestor and the null mutation shared by all. One very important use of the PPM is to infer how mutants of a common ancestor evolve \cite{el2015reconstruction,el2016multi,jiao2014inferring,malikic2015clonality, popic2015fast,satas-raphael17}. A common type of data used for this purpose is the frequency with which different positions in the genome mutate across multiple samples, obtained, e.g., from whole-genome or targeted deep sequencing~\cite{schuh2012monitoring}. Consider a sample $s$, one of $p$ samples, obtained at a given stage of the evolution process. This sample has many mutants, some with the same genome, some with different genomes. Let $F \in \mathbb{R}^{q\times p}$ be such that $F_{v,s}$ is the fraction of genomes in $s$ with a mutation in position $v$ in the genome. Let $M \in \mathbb{R}^{q\times p}$ be such that $M_{v,s}$ is the fraction of mutant $v$ in $s$. By definition, the columns of $M$ must sum to $1$. Let $U \in \{0,1\}^{q\times q}$ be such that $U_{v,v'} = 1$, if and only if mutant $v$ is an ancestor of mutant $v'$, or if $v=v'$. We denote the set of all possible $U$ matrices, $M$ matrices and labeled rooted trees $T$, by $\mathcal{U}$, $\mathcal{M}$ and $\mathcal{T}$, respectively. See Figure \ref{fig:ppm_illustration} for an illustration. The PPM implies
\begin{equation}\label{eq:PPM_model}
F = U M.
\end{equation}
Our work contributes to the problem of inferring clonal evolution from mutation frequencies: \emph{How do we infer $M$ and $U$ from $F$?} Note that finding $U$ is the same as finding $T$ (see Lemma \ref{th:bijection_U_T}).
\begin{figure}
\caption{ \small Black lines are genomes. Red circles indicate mutations. g$i$ is the mutant with the fewest mutations that has position $i$ mutated. Mutation $1$, the mutation in the null position, $i = 1$, is shared by all mutants. $g1$ is the organism before mutant evolution starts. In sample $s=3$, $2/10$ of the mutants are type $g2$, hence $M_{2,3} = 2/10$.}
\label{fig:ppm_illustration}
\end{figure}
Although model \eqref{eq:PPM_model} is simple, simultaneously inferring $M$ and $U$ from $F$ can be hard \cite{el2015reconstruction}. One popular inference approach is the following optimization problem over $U$, $M$ and $F$,
\begin{align} \label{eq:projection_onto_PPM_with_tree} & \hspace{0.3cm}\min_{U \in \mathcal{U}} \mathcal{C}(U), \\ & \hspace{1.0cm}\mathcal{C}(U) = \min_{M,F \in \mathbb{R}^{q\times p}} \|\hat{F} - F\| \text{ subject to } F = UM, M \geq 0, M^\top \mathbf{1} = \mathbf{1},\label{eq:projection_onto_PPM} \end{align}
where $\| \cdot \|$ is the Frobenius norm, and $\hat{F} \in \mathbb{R}^{q \times p}$ contains the measured fractions of mutations per position in each sample, which are known and fixed. In a nutshell, we want to project our measurement $\hat{F}$ onto the space of valid PPM models. Problem \eqref{eq:projection_onto_PPM_with_tree} is a hard mixed integer-continuous optimization problem. To approximately solve it, we might find a finite subset $\{U_i\} \subset \mathcal{U}$ that corresponds to a ``heuristically good'' subset of trees, $\{T_i\}\subset \mathcal{T}$, and, for each fixed matrix $U_i$, solve \eqref{eq:projection_onto_PPM}, which is a convex optimization problem. We can then return $T_x$, where $x \in {\argmin_i \mathcal{C}(U_i)}$.
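As a baseline for intuition, for a fixed $U$ the inner problem \eqref{eq:projection_onto_PPM} (with $p=1$) can be solved by projected gradient descent, using the standard sort-based Euclidean projection onto the probability simplex. The following sketch is only an illustration of this generic baseline, not the algorithm proposed in this paper:

```python
import numpy as np

def project_to_simplex(v):
    # Exact Euclidean projection onto {x : x >= 0, sum(x) = 1}, sort-based
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def project_onto_ppm(U, f_hat, iters=5000):
    """Baseline projected-gradient solver for min ||f_hat - U m|| over the simplex."""
    q = U.shape[0]
    m = np.full(q, 1.0 / q)
    step = 1.0 / np.linalg.norm(U, 2) ** 2       # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        m = project_to_simplex(m - step * U.T @ (U @ m - f_hat))
    return m

# Chain tree 1 -> 2 -> 3: U is upper-triangular ones; data generated noiselessly,
# so the minimizer recovers the true mutant fractions.
U = np.triu(np.ones((3, 3)))
m_true = np.array([0.5, 0.3, 0.2])
m = project_onto_ppm(U, U @ m_true)
assert np.allclose(m, m_true, atol=1e-4)
```

Iterative baselines of this kind need tuning and convergence checks, which is precisely what the exact algorithm developed below avoids.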
Fortunately, in many biological applications, e.g., \cite{el2015reconstruction,el2016multi,jiao2014inferring,malikic2015clonality,popic2015fast,satas-raphael17}, the reconstructed evolutionary tree involves a very small number of mutated positions, e.g., $q \leq 11$. In practice, a position $v$ might be an \emph{effective} position that is a cluster of multiple real positions in the genome. For a small $q$, we can compute $\mathcal{C}(U)$ for many trees, and hence approximate $M$, $U$, and get uncertainty measures for these estimates. This is important, since data is generally scarce and noisy. {\bf{Contributions:}} (i) we propose a new algorithm to compute $\mathcal{C}(U)$ exactly in $\mathcal{O}(q^2 p)$ steps, the first non-iterative algorithm to compute $\mathcal{C}(U)$; (ii) we compare its performance against state-of-the-art iterative algorithms, and observe a much faster convergence. In particular, our algorithm scales much faster than $\mathcal{O}(q^2 p)$ in practice; (iii) we implement our algorithm on a GPU, and show that it computes the cost of all (more than $2$ billion) trees with $\leq 11$ nodes, in $\leq 2.5$ hours. \section{Related work} A problem related to ours, but somewhat different, is that of inferring a phylogenetic tree from single-cell whole-genome sequencing data. Given all the mutations in a set of mutants, the problem is to arrange the mutants in a phylogenetic tree, \cite{fernandez2001perfect,gusfield1991efficient}. Mathematically, this corresponds to inferring $T$ from partial or corrupted observation of $U$. If the PPM is assumed, and all the mutations of all the mutants are correctly observed, this problem can be solved in linear time, e.g., \cite{ding2006linear}. In general, this problem is equivalent to finding a minimum cost Steiner tree on a hypercube, whose nodes and edges represent mutants and mutations respectively, a problem known to be hard~\cite{garey2002computers}. 
We mention a few works on clonality inference, based on the PPM, that try to infer both $U$ and $M$ from $\hat{F}$. No previous work solves problem \eqref{eq:projection_onto_PPM_with_tree} exactly in general, even for trees of size $q \leq 11$. Using our fast projection algorithm, we can solve \eqref{eq:projection_onto_PPM_with_tree} exactly by searching over all trees, if $q \leq 11$. Ref.~\cite{el2015reconstruction} (\emph{AncesTree}) reduces the space of possible trees $\mathcal{T}$ to subtrees of a heuristically constructed DAG. The authors use the element-wise $1$-norm in \eqref{eq:projection_onto_PPM} and, after introducing more variables to linearize the product $UM$, reduce this search to solving a MILP, which they try to solve via branch and bound. Ref. \cite{malikic2015clonality} (\emph{CITUP}) searches the space of all \emph{unlabeled} trees, and, for each unlabeled tree, tries to solve an MIQP, again using branch and bound techniques, which finds a labeling for the unlabeled tree, and simultaneously minimizes the distance $\|\hat{F}-F\|$. Refs. \cite{jiao2014inferring} and \cite{deshwar2015phylowgs} (\emph{PhyloSub/PhyloWGS}), use a stochastic model to sample trees that are likely to explain the data. Their model is based on \cite{ghahramani2010tree}, which generates hierarchical clusterings of objects, and from which lineage trees can be formed. A score is then computed for these trees, and the highest scoring trees are returned. Procedure \eqref{eq:projection_onto_PPM_with_tree} can be justified as MLE if we assume the stochastic model $\hat{F} = F + \mathcal{N}(0,I\sigma^2)$, where $F$, $U$ and $M$ satisfy the PPM model, and $\mathcal{N}(0,I\sigma^2)$ represents additive, component-wise, Gaussian measurement noise, with zero mean and covariance $I\sigma^2$. 
Alternative stochastic models can be assumed, e.g., as $M - U^{-1} \hat{F} = \mathcal{N}(0,I\sigma^2)$, where $M$ is non-negative and its columns must sum to one, and $\mathcal{N}(0,I\sigma^2)$ is as described before. For this model, and for each matrix $U$, the cost $\mathcal{C}(U)$ is a projection of $U^{-1} \hat{F}$ onto the probability simplex $M \geq 0, M^\top \mathbf{1} = \mathbf{1}$. Several fast algorithms are known for this problem, e.g., \cite{condat2016fast,duchi2008efficient,gong2011efficient,liu2009efficient,michelot1986finite} and references therein. In a $pq$-dimensional space, the exact projection onto the simplex can be done in $\mathcal{O}(qp)$ steps. Our algorithm is the first to solve \eqref{eq:projection_onto_PPM} exactly in a finite number of steps. We can also use iterative methods to solve \eqref{eq:projection_onto_PPM}. One advantage of our algorithm is that it has no tuning parameters, and requires no effort to check for convergence for a given accuracy. Since iterative algorithms can converge very fast, we numerically compare the speed of our algorithm with different implementations of the Alternating Direction Method of Multipliers (ADMM) \cite{boyd2011distributed}, which, if properly tuned, has a convergence rate that equals the fastest convergence rate among all first order methods~\cite{francca2016explicit} under some convexity assumptions, and is known to produce good solutions for several other kinds of problems, even for non-convex ones \cite{hao2016testing,francca2017distributed,laurenceinvFBA2018,mathysparta,zoran2014shape,bento2015proximal,bento2013message}. \section{Main results}\label{sec:main_results} We now state our main results, and explain the ideas behind their proofs. Detailed proofs can be found in the Appendix. Our algorithm computes $\mathcal{C}(U)$ and minimizers of \eqref{eq:projection_onto_PPM}, resp. $M^*$ and $F^*$, by solving an equivalent problem. 
Without loss of generality, we assume that $p = 1$, since, by squaring the objective in \eqref{eq:projection_onto_PPM}, it decomposes into $p$ independent problems. Sometimes we denote $\mathcal{C}(U)$ by $\mathcal{C}(T)$, since given $U$, we can specify $T$, and vice-versa. Let $\bar{i}$ be the closest ancestor of $i$ in $T= (r,\mathcal{V},\mathcal{E})$. Let $\Delta i$ be the set of all the ancestors of $i$ in $T$, plus $i$. Let $\partial i$ be the set of children of $i$ in $T$. \begin{theorem}[Equivalent formulation] \label{th:dual_prob} Problem \eqref{eq:projection_onto_PPM} can be solved by solving \begin{align} & \min_{t\in \mathbb{R}} \;\; t + \mathcal{L}(t),\label{eq:dual_higher}\\[-0.4cm] & \hspace{1.46cm}\mathcal{L}(t) = \min_{Z\in\mathbb{R}^q} \frac{1}{2}\sum_{i \in \mathcal{V}} (Z_i - Z_{\bar{i}})^2 \text{ subject to } Z_i \leq t - N_i \;,\forall i\in \mathcal{V} \label{eq:dual}, \end{align} where $N_i= \sum_{j \in \Delta i} \hat{F}_j$, and, by convention, $Z_{\bar{i}} = 0$ for $i = r$. In particular, if $t^*$ minimizes \eqref{eq:dual_higher}, $Z^*$ minimizes \eqref{eq:dual} for $t=t^*$, and $M^*,F^*$ minimize \eqref{eq:projection_onto_PPM}, then \begin{equation}\label{eq:Z_star_M_star_relation} M^*_i = -Z^*_i + Z^*_{\bar{i}} + \sum_{r \in \partial i} (Z^*_r - Z^*_{\bar{r}}) \text{ and } F^*_i = -Z^*_i + Z^*_{\bar{i}}, \forall i\in \mathcal{V}. \end{equation} Furthermore, $t^*$, $M^*$, $F^*$ and $Z^*$ are unique. \end{theorem} Theorem \ref{th:dual_prob} comes from a dual form of \eqref{eq:projection_onto_PPM}, which we build using Moreau's decomposition \cite{moreau1962decomposition}. \subsection{Useful observations}\label{sec:useful_obs} Let $Z^*(t)$ be the unique minimizer of \eqref{eq:dual} for some $t$. The main ideas behind our algorithm depend on a few simple properties of the paths $\{Z^*(t)\}$ and $\{\mathcal{L}'(t)\}$, the derivative of $\mathcal{L}(t)$ with respect to $t$. 
Note that $\mathcal{L}$ is also a function of $N$, as defined in Theorem \ref{th:dual_prob}, which depends on the input data $\hat{F}$. \begin{lemma}\label{th:convexity_of_L_dual} $\mathcal{L}(t)$ is a convex function of $t$ and $N$. Furthermore, $\mathcal{L}(t)$ is continuous in $t$ and $N$, and $\mathcal{L}'(t)$ is non-decreasing with $t$. \end{lemma} \begin{lemma}\label{th:continuity_of_Z_t} $Z^*(t)$ is continuous as a function of $t$ and $N$. $Z^*(t^*)$ is continuous as a function of $N$. \end{lemma} Let $ \mathcal{B}(t) = \{i: Z^*(t)_i = t - N_i\}, $ i.e., the set of components of the solution at the boundary of \eqref{eq:dual}. Variables in $\mathcal{B}$ are called \emph{fixed}, and we call other variables \emph{free}. Free (resp. fixed) nodes are nodes corresponding to free (resp. fixed) variables. \begin{lemma}\label{th:B_piece_wise_constant} $\mathcal{B}(t)$ is piecewise constant in $t$. \end{lemma} Consider dividing the tree $T=(r,\mathcal{V},\mathcal{E})$ into subtrees, each with at least one free node, using $\mathcal{B}(t)$ as separation points. See Figure \ref{fig:tree_and_variables} in Appendix \ref{sec:app:further_illustrations} for an illustration. Each $i\in \mathcal{B}(t)$ belongs to at most $\text{degree}(i)$ different subtrees, where $\text{degree}(i)$ is the degree of node $i$, and each $i \in \mathcal{V} \backslash \mathcal{B}(t)$ belongs exactly to one subtree. Let $T_1,\dots, T_k$ be the set of resulting (rooted, labeled) trees. Let $T_w = (r_w,\mathcal{V}_w,\mathcal{E}_w)$, where the root $r_w$ is the closest node in $T_w$ to $r$. We call $\{T_w\}$ the subtrees \emph{induced by} $\mathcal{B}(t)$. We define $\mathcal{B}_w(t) = \mathcal{B}(t) \cap \mathcal{V}_w$, and, when it does not create ambiguity, we drop the index $t$ in $\mathcal{B}_w(t)$. Note that different $\mathcal{B}_w(t)$'s might have elements in common. Also note that, by construction, if $i \in \mathcal{B}_w$, then $i$ must be a leaf of $T_w$, or the root of $T_w$. 
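The quantities $N_i = \sum_{j \in \Delta i} \hat{F}_j$ of Theorem \ref{th:dual_prob} can be accumulated in one root-to-leaf pass over $T$. A minimal sketch, assuming a parent-pointer representation with nodes indexed in topological order (a choice of representation for illustration, not fixed by the text):

```python
def ancestor_sums(parent, f_hat):
    """N[i] = sum of f_hat over node i and all its ancestors.
    Assumes parent[r] == r for the root and parent[i] < i otherwise."""
    N = [0.0] * len(parent)
    for i in range(len(parent)):
        N[i] = f_hat[i] + (N[parent[i]] if parent[i] != i else 0.0)
    return N

# Chain 0 -> 1 -> 2 plus a second child 3 of node 0.
assert ancestor_sums([0, 0, 1, 0], [1.0, 2.0, 3.0, 4.0]) == [1.0, 3.0, 6.0, 5.0]
```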
\begin{definition}\label{def:sub_problem_Tw_Bw} The $(T_w,\mathcal{B}_w)$-problem is the optimization problem over $|\mathcal{V}_w \backslash \mathcal{B}(t)|$ variables \begin{align}\label{eq:simpler_sub_problem} &\min_{\{Z_j : j \in \mathcal{V}_w \backslash \mathcal{B}(t)\}} \;(1/2)\sum_{j \in \mathcal{V}_w } (Z_j - Z_{\bar{j}})^2, \end{align} where $\bar{j}$ is the parent of $j$ in $T_w$, $Z_{\bar{j}} = 0$ if ${j} = r_w$, and $Z_j =Z^*(t)_j= t - N_j$ if $j \in \mathcal{B}_w(t)$. \end{definition} \begin{lemma}\label{eq:decomposition_of_problem} Problem \eqref{eq:dual} decomposes into $k$ independent problems. In particular, the minimizers $\{Z^*(t)_j : j \in \mathcal{V}_w \backslash \mathcal{B}(t) \}$ are determined as the solution~of the $(T_w,\mathcal{B}_w)$-problem. If $j \in \mathcal{V}_w$, then $Z^*(t)_j = c_1 t + c_2$ , where $c_1$ and $c_2$ depend on $j$ but not on $t$, and $0 \leq c_1 \leq 1$. \end{lemma} \begin{lemma}\label{th:Z_and_Lprime_are_piece_wise_linear} $Z^*(t)$ and $\mathcal{L}'(t)$ are piecewise linear and continuous in $t$. Furthermore, $Z^*(t)$ and $\mathcal{L}'(t)$ change linear segments if and only if $\mathcal{B}(t)$ changes. \end{lemma} \begin{lemma}\label{eq:B_always_grows} If $t \leq t'$, then $\mathcal{B}(t') \subseteq \mathcal{B}(t)$. In particular, $\mathcal{B}(t)$ changes at most $q$ times with $t$. \end{lemma} \begin{lemma}\label{th:Z_and_Lprime_finite_break_points} $Z^*(t)$ and $\mathcal{L}'(t)$ have less than $q+1$ different linear segments. \end{lemma} \subsection{The Algorithm}\label{sec:main_alg} In a nutshell, our algorithm computes the solution path $\{Z^*(t)\}_{t \in \mathbb{R}}$ and the derivative $\{\mathcal{L}'(t)\}_{t \in \mathbb{R}}$. From these paths, it finds the unique $t^*$, at which \begin{equation}\label{eq:condition_for_t} {\rm d}(t + \mathcal{L}(t))/{{\rm d}t} =0\rvert_{t = t^*} \Leftrightarrow \mathcal{L}'(t^*) = -1. 
\end{equation}
It then evaluates the path $Z^*(t)$ at $t =t^*$, and uses this value, along with \eqref{eq:Z_star_M_star_relation}, to find $M^*$ and $F^*$, the unique minimizers of \eqref{eq:projection_onto_PPM}. Finally, we compute $\mathcal{C}(T) = \|\hat{F} - F^*\|$. We know that $\{Z^*(t)\}$ and $\{\mathcal{L}'(t)\}$ are continuous piecewise linear, with a finite number of different linear segments (Lemmas \ref{th:Z_and_Lprime_are_piece_wise_linear}, \ref{eq:B_always_grows} and \ref{th:Z_and_Lprime_finite_break_points}). Hence, to describe $\{Z^*(t)\}$ and $\{\mathcal{L}'(t)\}$, we only need to evaluate them at the \emph{critical values}, $t_1>t_2 > \dots >t_k$, at which $Z^*(t)$ and $\mathcal{L}'(t)$ change linear segments. We will later use Lemma \ref{th:Z_and_Lprime_are_piece_wise_linear} as a criterion to find the critical values. Namely, $\{t_i\}$ are the values of $t$ at which, as $t$ decreases, new variables become fixed, and $\mathcal{B}(t)$ changes. Note that variables never become free once fixed, by Lemma \ref{eq:B_always_grows}, which also implies that $k \leq q$. The values $\{Z^*(t_i)\}$ and $\{\mathcal{L}'(t_i)\}$ are computed sequentially as follows. If $t$ is very large, the constraint in \eqref{eq:dual} is not active, and $Z^*(t) = \mathcal{L}(t) = \mathcal{L}'(t)=0$. Lemma \ref{th:Z_and_Lprime_are_piece_wise_linear} tells us that, as we decrease $t$, the first critical value is the largest $t$ for which this constraint becomes active, and at which $\mathcal{B}(t)$ changes for the first time. Hence, if $i= 1$, we have $t_i = \max_s\{N_s\}$, $Z^*(t_i) = \mathcal{L}'(t_i) = 0$, and $\mathcal{B}(t_i) = \arg\max_s\{N_s\}$. Once we have $t_i$, we compute the rates $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$ from $\mathcal{B}(t_i)$ and $T$, as explained in Section \ref{sec:computing_rates}. Since the paths are piecewise linear, derivatives are not defined at critical points.
Hence, here, and throughout this section, these derivatives are taken from the left, i.e., $Z'^*(t_i) = \lim_{t \uparrow t_i} (Z^*(t_i) - Z^*(t))/(t_i - t)$ and $\mathcal{L}''(t_i) = \lim_{t \uparrow t_i} (\mathcal{L}'(t_i) - \mathcal{L}'(t))/(t_i - t)$. Since $Z'^*(t)$ and $\mathcal{L}''(t)$ are constant for $t\in (t_{i+1}, t_{i}]$, for such $t$ we have \begin{align} \label{eq:liner_relation_between_critical_points} Z^*(t) = Z^*(t_i) + (t - t_i)Z'^*(t_i), \quad \mathcal{L}'(t) = \mathcal{L}'(t_i) + (t - t_i)\mathcal{L}''(t_i), \end{align} and the next critical value, $t_{i+1}$, is the largest $t < t_i$ for which new variables become fixed and $\mathcal{B}(t)$ changes. The value $t_{i+1}$ is found by solving for $t<t_i$ in \begin{equation}\label{eq:formula_for_next_t} Z^*(t)_r = Z^*(t_i)_r + (t - t_i)Z'^*(t_i)_r = t - N_r, \end{equation} and keeping the largest solution among all $r \notin \mathcal{B}(t_i)$. Once $t_{i+1}$ is computed, we update $\mathcal{B}$ with the new variables that became fixed, and we obtain $Z^*(t_{i+1})$ and $\mathcal{L}'(t_{i+1})$ from \eqref{eq:liner_relation_between_critical_points}. The process then repeats. By Lemma \ref{th:convexity_of_L_dual}, $\mathcal{L}'$ never increases. Hence, we stop this process (a) as soon as $\mathcal{L}'(t_i) < -1$, or (b) when all the variables are in $\mathcal{B}$, and thus there are no more critical values to compute. In case (a), let $t_k$ be the last critical value with $\mathcal{L}'(t_k) > -1$; in case (b), let $t_k$ be the last computed critical value. We use $t_k$ and \eqref{eq:liner_relation_between_critical_points} to compute $t^*$, at which $\mathcal{L}'(t^*) = -1$, as well as $Z^*(t^*)$. From $Z^*(t^*)$ we then compute $M^*$, $F^*$, and $\mathcal{C}(U) = \|\hat{F} - F^*\|$. The algorithm is shown compactly in Alg. \ref{alg:projection}. Its inputs are $\hat{F}$ and $T$, represented, e.g., using a linked-nodes data structure. Its outputs are the minimizers of \eqref{eq:projection_onto_PPM}.
It makes use of a procedure \emph{ComputeRates}, which we will explain later. This procedure terminates in $\mathcal{O}(q)$ steps and uses $\mathcal{O}(q)$ memory. Line \ref{alg:line:next_critical_value} comes from solving \eqref{eq:formula_for_next_t} for $t$. In line \ref{alg:line:return}, the symbols $M^*(Z^*, T)$ and $F^*(Z^*, T)$ remind us that $M^*$ and $F^*$ are computed from $Z^*$ and $T$ using \eqref{eq:Z_star_M_star_relation}. The correctness of Alg. \ref{alg:projection} follows from the Lemmas in Section \ref{sec:useful_obs}, and the explanation above. In particular, since there are at most $q+1$ different linear regimes, the bound $q$ in the for-loop does not prevent us from finding any critical value. Its time complexity is $\mathcal{O}(q^2)$, since each line completes in $\mathcal{O}(q)$ steps, and is executed at most $q$ times. \begin{theorem}[Complexity]\label{th:main_alg_complexity} Algorithm \ref{alg:projection} finishes in $\mathcal{O}(q^2)$ steps, and requires $\mathcal{O}(q)$ memory. \end{theorem} \begin{theorem}[Correctness]\label{th:alg_correctness} Algorithm \ref{alg:projection} outputs the solution to \eqref{eq:projection_onto_PPM}. \end{theorem} \begin{algorithm}[h!] 
\caption{Projection onto the PPM (input: $T$ and $\hat{F}$; output: $M^*$ and $F^*$)} \label{alg:projection} \begin{algorithmic}[1] \State $N_i = \sum_{j \in \Delta i} \hat{F}_j$, for all $i\in \mathcal{V}$ \label{alg:line:compute_N_from_F} \Comment{This takes $\mathcal{O}(q)$ steps using a DFS, see proof of Theorem \ref{th:main_alg_complexity}} \State $i = 1$, $t_i = \max_r\{N_r\}$, $\mathcal{B}(t_i) = \arg \max_r\{N_r\}$, $Z^*(t_i) = {\bf 0}$, $\mathcal{L}'(t_i) = {0}$.\label{alg:line:init} \Comment{Initialize} \For{$i = 1$ to $q$} \label{alg:line:forloop} \State $(Z'^*(t_i),\mathcal{L}''(t_i)) = \text{ComputeRates}(\mathcal{B}(t_i),T)$ \Comment{Update rates of change}\label{alg:line:main_alg_compute_rates} \State $P = \{P_r : P_r = \frac{N_r + Z^*(t_i)_r - t_i Z'^*(t_i)_r}{1 - Z'^*(t_i)_r} \text{ if } r \notin \mathcal{B}(t_i), P_r < t_i, \text{ and } P_r = -\infty \text{ otherwise}\}$\label{alg:line:next_critical_value} \State $t_{i+1} = \max_r P_r$ \Comment{Update next critical value from \eqref{eq:liner_relation_between_critical_points}} \State $\mathcal{B}(t_{i+1}) = \mathcal{B}(t_{i}) \cup \arg \max_r P_r$ \label{alg:line:main_alg_new_fixed_nodes} \Comment{Update list of fixed variables} \State $Z^*(t_{i+1}) = Z^*(t_{i}) + (t_{i+1} - t_i) Z'^*(t_i)$ \Comment{Update solution path} \State $\mathcal{L}'(t_{i+1}) = \mathcal{L}'(t_{i}) + (t_{i+1} - t_i) \mathcal{L}''(t_i)$ \Comment{Update objective's derivative} \State {\bf if }$\mathcal{L}'(t_{i+1}) < -1 $ {\bf then break} \Comment{If we have already passed $t^*$, exit the for-loop} \EndFor \State $t^* = t_{i} - \frac{1 + \mathcal{L}'(t_i)}{\mathcal{L}''(t_i)}$\label{alg:line:find_tstar} \Comment{Find solution to \eqref{eq:condition_for_t}} \State $Z^* = Z^*(t_i) + (t^* - t_i) Z'^*(t_i)$\label{alg:line:find_Zstar} \Comment{Find minimizers of \eqref{eq:dual} for $t = t^*$} \State \textbf{return}
$M^*(Z^*,T)$, $F^*(Z^*,T)$ \label{alg:line:return} \Comment{Return solution to \eqref{eq:projection_onto_PPM} using \eqref{eq:Z_star_M_star_relation}, which takes $\mathcal{O}(q)$ steps} \end{algorithmic} \end{algorithm} \subsection{Computing the rates}\label{sec:computing_rates} We now explain how the procedure \emph{ComputeRates} works. Recall that it takes as input the tree $T$ and the set $\mathcal{B}(t_i)$, and it outputs the derivatives $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$. A simple calculation shows that if we compute $Z'^*(t_i)$, then computing $\mathcal{L}''(t_i)$ is easy. \begin{lemma} \label{th:computing_ddL_from_dZ} $\mathcal{L}''(t_i)$ can be computed from $Z'^*(t_i)$ in $\mathcal{O}(q)$ steps and with $\mathcal{O}(1)$ memory as \begin{equation} \label{eq:computing_ddL_from_dZ} \mathcal{L}''(t_i) = \sum_{j \in \mathcal{V}} (Z'^*(t_i)_j - Z'^*(t_i)_{\bar{j}})^2, \end{equation} where $\bar{j}$ is the closest ancestor to $j$ in $T$. \end{lemma} We note that if $j \in \mathcal{B}(t_i)$, then, by definition, $Z'^*(t_i)_j= 1$. Assume now that $j \in \mathcal{V} \backslash \mathcal{B}(t_i)$. Lemma \ref{eq:decomposition_of_problem} implies we can find $Z'^*(t_i)_j$ by solving the $(T_w = (r_w,\mathcal{V}_w,\mathcal{E}_w),\mathcal{B}_w)$-problem as a function of $t$, where $w$ is such that $j \in \mathcal{V}_w$. In a nutshell, \emph{ComputeRates} is a recursive procedure to solve all the $(T_w,\mathcal{B}_w)$-problems as an explicit function~of~$t$. It suffices to explain how \emph{ComputeRates} solves one particular $(T_w,\mathcal{B}_w)$-problem explicitly. To simplify notation, in the rest of this section, we refer to $T_w$ and $\mathcal{B}_w$ as $T$ and $\mathcal{B}$. Recall that, by the definition of $T=T_w$ and $\mathcal{B}=\mathcal{B}_w$, if $i \in \mathcal{B}$, then $i$ must be a leaf of $T$, or the root of $T$.
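To make the $(T_w,\mathcal{B}_w)$-problem concrete, the following sketch solves a toy instance numerically. It is an illustration only (in Python, not the authors' C implementation), with a hand-picked four-node tree and values of $N$: the free variables are found from the first-order optimality conditions, and the result illustrates the claim of the decomposition lemma that each $Z^*(t)_j$ is affine in $t$ with slope $c_1 \in [0,1]$.

```python
import numpy as np

# Toy (T_w, B_w)-problem: root 1 -> 2 -> leaves {3, 4}; the leaves are fixed,
# i.e. Z_3 = t - N_3 and Z_4 = t - N_4, while Z_1 and Z_2 are free.
N3, N4 = 1.0, 2.0

def solve_subproblem(t):
    """Minimize (1/2)[Z1^2 + (Z2-Z1)^2 + (Z3-Z2)^2 + (Z4-Z2)^2] over (Z1, Z2)."""
    Z3, Z4 = t - N3, t - N4
    # First-order conditions: 2*Z1 - Z2 = 0 and -Z1 + 3*Z2 = Z3 + Z4.
    A = np.array([[2.0, -1.0], [-1.0, 3.0]])
    b = np.array([0.0, Z3 + Z4])
    return np.linalg.solve(A, b)

# The solution is affine in t: Z*(t) = c1*t + c2.
c2 = solve_subproblem(0.0)
c1 = solve_subproblem(1.0) - c2
assert np.allclose(solve_subproblem(5.0), c1 * 5.0 + c2)  # affine in t
assert np.all((0 <= c1) & (c1 <= 1))                      # slopes lie in [0, 1]
print("rates c1 =", c1)
```

For this instance the slopes work out to $c_1 = (0.4,\, 0.8)$ for $(Z_1, Z_2)$; these are exactly the rates $Z'^*$ that \emph{ComputeRates} must produce.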
\begin{definition} \label{def:sub_problem_T_B_A_B_G} Consider a rooted tree $T = (r,\mathcal{V},\mathcal{E})$, a set $\mathcal{B}\subseteq\mathcal{V}$, and variables $\{Z_j:j\in \mathcal{V}\}$ such that, if $j\in \mathcal{B}$, then $Z_j = \alpha_j t + \beta_j$ for some $\alpha$ and $\beta$. We define the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem as \begin{align}\label{eq:general_sub_problem} \min_{\{Z_j: j \in \mathcal{V} \backslash \mathcal{B}\}} \frac{1}{2} \sum_{j \in \mathcal{V}} \gamma_{j} (Z_j - Z_{\bar{j}})^2, \end{align} where $\gamma > 0$, $\bar{j}$ is the closest ancestor to $j$ in $T$, and $Z_{\bar{j}} = 0$ if ${j}=r$. \end{definition} We refer to the solution of the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem as $\{Z^*_j: j \in \mathcal{V} \backslash \mathcal{B}\}$, which uniquely minimizes \eqref{eq:general_sub_problem}. Note that \eqref{eq:general_sub_problem} is unconstrained and its solution, $Z^*$, is a linear function of $t$. Furthermore, the $(T_w,\mathcal{B}_w)$-problem is the same as the $(T_w,\mathcal{B}_w,{\bf 1},-N,{\bf 1})$-problem, which is what we actually solve. We now state three useful lemmas that help us solve any $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem efficiently. \begin{lemma}[Pruning]\label{th:pruning} Consider the solution $Z^*$ of the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem. Let $j\in\mathcal{V} \backslash \mathcal{B}$ be a leaf. Then $Z^*_j = Z^*_{\bar{j}}$. Furthermore, consider the $(\tilde{T},\mathcal{B},\alpha,\beta,\gamma)$-problem, where $\tilde{T} = (\tilde{r},\tilde{\mathcal{V}},\tilde{\mathcal{E}})$ is equal to $T$ with node $j$ pruned, and let its solution be $\tilde{Z}^*$. We have that $Z^*_i = \tilde{Z}^*_i$, for all $i \in \tilde{\mathcal{V}}$. \end{lemma} \begin{lemma}[Star problem]\label{th:star_problem} Let $T$ be a star such that node $1$ is the center node, node $2$ is the root, and nodes $3,\dots, r$ are leaves. Let $\mathcal{B} = \{2,\dots, r\}$. 
Let $Z^*_1 \in \mathbb{R}$ be the solution of the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem. Then, \begin{equation} \label{eq:star_solution} Z^*_1 = \left( \frac{\gamma_1 \alpha_2+ \sum^r_{i=3} \gamma_i \alpha_i}{\gamma_1 + \sum^r_{i=3} \gamma_i}\right) t + \left( \frac{\gamma_1 \beta_2+ \sum^r_{i=3} \gamma_i \beta_i}{\gamma_1 + \sum^r_{i=3} \gamma_i}\right). \end{equation} In particular, to find the rate at which $Z^*_1$ changes with $t$, we only need to know $\alpha$ and $\gamma$, not $\beta$. \end{lemma} \begin{lemma}[Reduction]\label{th:tree_reduction} Consider the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem such that $j,\bar{j} \in \mathcal{V} \backslash \mathcal{B}$, and such that $j$ has all its children $1,\dots,r \in \mathcal{B}$. Let $Z^*$ be its solution. Consider the $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$-problem, where $\tilde{T} = (\tilde{r},\tilde{\mathcal{V}},\tilde{\mathcal{E}})$ is equal to $T$ with nodes $1,\dots,r$ removed, and $\tilde{\mathcal{B}} = (\mathcal{B} \backslash \{1,\dots,r\}) \cup \{j\}$. Let ${\tilde{Z}^*}$ be its solution. If $(\tilde{\alpha}_i,\tilde{\beta}_i,\tilde{\gamma}_i)=(\alpha_i,\beta_i,\gamma_i)$ for all $i \in \mathcal{B} \backslash \{1,\ldots,r\}$, and $\tilde{\alpha}_j$, $\tilde{\beta}_j$ and $\tilde{\gamma}_j$ satisfy \begin{align} &\tilde{\alpha}_j = \frac{\sum^r_{i=1} \gamma_i \alpha_i}{\sum^r_{i=1} \gamma_i}, \;\; \tilde{\beta}_j = \frac{\sum^r_{i=1} \gamma_i \beta_i} {\sum^r_{i=1}\gamma_i}, \;\; \tilde{\gamma}_j = \left((\gamma_j)^{-1} + \left(\sum^r_{i=1} \gamma_i\right)^{-1}\right)^{-1}\label{eq:reduction_equations}, \end{align} then $Z^*_i = {\tilde{Z}^*}_i$ \,for all $i \in \mathcal{V} \backslash \{j\}$.
\end{lemma} Lemma \ref{th:star_problem} and Lemma \ref{th:tree_reduction} allow us to recursively solve any $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem, and obtain for it an explicit solution of the form $Z^*(t) = c_1 t + c_2$, where $c_1$ and $c_2$ do not depend on $t$. Assume that $T$ has already been pruned, by repeatedly invoking Lemma \ref{th:pruning}, so that, if $i$ is a leaf, then $i \in \mathcal{B}$. See Figure \ref{fig:recursion}-(left). First, we find some node $j \in \mathcal{V} \backslash \mathcal{B}$ such that all of its children are in $\mathcal{B}$. If $\bar{j} \in \mathcal{B}$, then $\bar{j}$ must be the root, and the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem must be a star problem as in Lemma \ref{th:star_problem}, which we can use to solve it explicitly. Alternatively, if $\bar{j} \in \mathcal{V} \backslash \mathcal{B}$, then we invoke Lemma \ref{th:tree_reduction}, and reduce the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem to a strictly smaller $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$-problem, which we solve recursively. Once the $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$-problem is solved, we have an explicit expression $Z^*_i(t) = {c_1}_i t + {c_2}_i$ for all $i \in \mathcal{V} \backslash \{j\}$, and, in particular, an explicit expression $Z^*_{\bar{j}}(t) = {c_1}_{\bar{j}} t + {c_2}_{\bar{j}}$. The only free variable of the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem left to determine is $Z^*_j(t)$.
To compute $Z^*_j(t)$, we apply Lemma \ref{th:star_problem} to the $({\dbtilde{T}},{\dbtilde{\mathcal{B}}},{\dbtilde{\alpha}},{\dbtilde{\beta}},{\dbtilde{\gamma}})$-problem, where ${\dbtilde{T}}$ is a star around $j$, ${\dbtilde{\gamma}}$ are the components of $\gamma$ corresponding to nodes that are neighbors of $j$, ${\dbtilde{\alpha}}$ and ${\dbtilde{\beta}}$ are such that $Z^*_i(t) = {\dbtilde{\alpha}}_i t + {\dbtilde{\beta}}_i$ for all $i$ that are neighbors of $j$, and for which $Z^*_i(t)$ is already known, and ${\dbtilde{\mathcal{B}}}$ contains all the neighbors of $j$. See Figure \ref{fig:recursion}-(right). The algorithm is compactly described in Alg. \ref{alg:compute_rates_rec}. It differs slightly from the description above for computational efficiency. Instead of computing $Z^*(t) = c_1t+c_2$, we keep track only of $c_1$, the rates, and we do so only for the variables in $\mathcal{V} \backslash \mathcal{B}$. The algorithm assumes that the input $T$ has been pruned. The inputs $T$, $\mathcal{B}$, $\alpha$, $\beta$ and $\gamma$ are passed by reference. They are modified inside the algorithm but, once \emph{ComputeRatesRec} finishes, they keep their initial values. Throughout the execution of the algorithm, $T = (r,\mathcal{V},\mathcal{E})$ encodes (a) a doubly-linked list where each node points to its children and its parent, which we call $T.a$, and (b) a doubly-linked list of all the nodes in $\mathcal{V} \backslash \mathcal{B}$ for which all the children are in $\mathcal{B}$, which we call $T.b$. In the proof of Theorem \ref{th:complexity_of_rec_reduce}, we show how this representation of $T$ can be kept updated with little computational effort. The input $Y$, also passed by reference, starts as an uninitialized array of size $q$, in which we store the rates $\{Z'^*_i\}$. At the end, we read $Z'^*$ from $Y$. \begin{algorithm}[h!]
\caption{ComputeRatesRec (input: $T=(r,\mathcal{V},\mathcal{E}), \mathcal{B},\alpha,\beta,\gamma,Y$)} \label{alg:compute_rates_rec} \begin{algorithmic}[1] \State Let $j$ be some node in $\mathcal{V} \backslash \mathcal{B}$ whose children are in $\mathcal{B}$\label{alg:line:rec:1} \Comment{We read $j$ from $T.b$ in $\mathcal{O}(1)$ steps} \If{$\bar{j} \in \mathcal{B}$}\label{alg:line:rec:2} \State Set $Y_j$ using \eqref{eq:star_solution} in Lemma \ref{th:star_problem} \label{alg:line:rec:3} \Comment{If $\bar{j} \in \mathcal{B}$, then the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem is star-shaped} \Else \State Modify $(T,\mathcal{B},\alpha,\beta,\gamma)$ to match $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$ defined by Lemma \ref{th:tree_reduction} for $j$ in line \ref{alg:line:rec:1} \label{alg:line:rec:5} \State $\text{ComputeRatesRec}(T,\mathcal{B},\alpha,\beta,\gamma,Y)$\label{alg:line:rec:6} \Comment{Sets $Y_i= Z'^*_i$ for all ${i \in \mathcal{V} \backslash \mathcal{B}}$; $Y_j$ is not yet defined} \State Restore $(T,\mathcal{B},\alpha,\beta,\gamma)$ to its original value before line \ref{alg:line:rec:5} was executed \label{alg:line:rec:7} \State Compute $Y_j$ from \eqref{eq:star_solution}, using for $\alpha,\beta,\gamma$ in \eqref{eq:star_solution} the values ${\dbtilde{\alpha}},{\dbtilde{\beta}},{\dbtilde{\gamma}}$, where ${\dbtilde{\gamma}}$ are the components of $\gamma$ corresponding to nodes that are neighbors of $j$ in $T$, and ${\dbtilde{\alpha}}$ and ${\dbtilde{\beta}}$ are such that $Z^*_i = {\dbtilde{\alpha}}_i t + {\dbtilde{\beta}}_i$ for all $i$ that are neighbors of $j$ in $T$, and for which $Z^*_i$ is already known\label{alg:line:rec:8} \EndIf \end{algorithmic} \end{algorithm} Let $q$ be the number of nodes of the tree $T$ that is the input at the zeroth level of the recursion.
\begin{theorem}\label{th:complexity_of_rec_reduce} Algorithm \ref{alg:compute_rates_rec} correctly computes $Z'^*$ for the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem, and it can be implemented to finish in $\mathcal{O}(q)$ steps, and to use $\mathcal{O}(q)$ memory. \end{theorem} The correctness of Algorithm \ref{alg:compute_rates_rec} follows from Lemmas \ref{th:pruning}-\ref{th:tree_reduction}, and the explanation above. Its complexity is bounded by the total time spent on the two lines that actually compute rates during the whole recursion, lines \ref{alg:line:rec:3} and \ref{alg:line:rec:8}. All the other lines only transform the input problem into a more computable form. Lines \ref{alg:line:rec:3} and \ref{alg:line:rec:8} solve a star-shaped problem with at most $\text{degree}(j)$ variables, which, by inspecting \eqref{eq:star_solution}, we know can be done in $\mathcal{O}(\text{degree}(j))$ steps. Since $j$ never takes the same value twice, the overall complexity is bounded by $\mathcal{O}(\sum_{j \in \mathcal{V}} \text{degree}(j)) = \mathcal{O}(|\mathcal{E}|)= \mathcal{O}(q)$. The $\mathcal{O}(q)$ bound on memory is possible because all the variables that occupy significant memory are passed by reference, and are modified in place during the whole recursive procedure. The following lemma shows how the recursive procedure that solves a $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem can be used to compute the rates of change of $Z^*(t)$ of a $(T,\mathcal{B})$-problem. Its proof follows from the observation that the rate of change of the solution with $t$ in \eqref{eq:star_solution} in Lemma \ref{th:star_problem} only depends on $\alpha$ and $\gamma$, and that the reduction equations \eqref{eq:reduction_equations} in Lemma \ref{th:tree_reduction} never make $\tilde{\alpha}$ or $\tilde{\gamma}$ depend on $\beta$.
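As a sanity check on the reduction equations \eqref{eq:reduction_equations}, the following sketch (illustrative Python, not the authors' C implementation; the tree, weights, and coefficients are arbitrary choices) eliminates two fixed children into their free parent and verifies that the minimizer of the remaining free variable is unchanged.

```python
import numpy as np

# Original problem: root r (free) -> j (free) -> fixed children 1, 2 with
# Z_i = alpha_i * t + beta_i.  Edge weights: gr, gj, g1, g2.
t = 2.0
gr, gj, g1, g2 = 1.0, 2.0, 3.0, 4.0
a1, a2, b1, b2 = 0.5, 1.0, 0.3, -0.7
Z1, Z2 = a1 * t + b1, a2 * t + b2

# Minimize (1/2)[gr*Zr^2 + gj*(Zj-Zr)^2 + g1*(Z1-Zj)^2 + g2*(Z2-Zj)^2] over (Zr, Zj)
# via its first-order conditions.
A = np.array([[gr + gj, -gj], [-gj, gj + g1 + g2]])
b = np.array([0.0, g1 * Z1 + g2 * Z2])
Zr_full, Zj_full = np.linalg.solve(A, b)

# Reduced problem (Reduction lemma): children absorbed into j, which becomes fixed.
a_j = (g1 * a1 + g2 * a2) / (g1 + g2)
b_j = (g1 * b1 + g2 * b2) / (g1 + g2)
g_j = 1.0 / (1.0 / gj + 1.0 / (g1 + g2))   # harmonic combination of edge weights
Zj_tilde = a_j * t + b_j
# Minimize (1/2)[gr*Zr^2 + g_j*(Zj_tilde - Zr)^2] over Zr alone.
Zr_reduced = g_j * Zj_tilde / (gr + g_j)

assert np.isclose(Zr_full, Zr_reduced)  # the free minimizer is unchanged
```

Note how the reduced weight $\tilde{\gamma}_j$ combines the edge to the parent and the aggregated edges to the children harmonically, exactly as resistances in series.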
\begin{lemma}[Rates only]\label{th:rates_only} Let $Z^*(t)$ be the solution of the $(T,\mathcal{B})$-problem, and let $\tilde{Z}^*(t)$ be the solution of the $(T,\mathcal{B},{\bf 1},{\bf 0},{\bf 1})$-problem. Then, $Z^*(t) = c_1 t + c_2$, and $\tilde{Z}^*(t) = c_1 t$ for some $c_1$ and $c_2$. \end{lemma} \begin{figure} \caption{\small Red squares represent fixed nodes, and black circles free nodes. (Left) By repeatedly invoking Lemma \ref{th:pruning}, all free leaves are pruned from $T$. (Right) The star-shaped problem around $j$ used to compute $Z^*_j(t)$.\label{fig:recursion}} \end{figure} We finally present the full algorithm to compute $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$ from $T$ and $\mathcal{B}(t_i)$. \begin{algorithm}[h!] \caption{ComputeRates (input: $T$ and $\mathcal{B}(t_i)$; output: $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$)} \label{alg:compute_rates} \begin{algorithmic}[1] \State $Z'^*(t_i)_j = 1$ for all $j \in \mathcal{B}(t_i)$ \For{each $(T_w,\mathcal{B}_w)$-problem induced by $\mathcal{B}(t_i)$} \State Set $\tilde{T}_w$ to be $T_w$ pruned of all leaf nodes not in $\mathcal{B}_w$, by repeatedly invoking Lemma \ref{th:pruning} \State $ \text{ComputeRatesRec}(\tilde{T}_w,\mathcal{B}_w,{\bf 1},{\bf 0},{\bf 1},\tilde{Z'^*})$\label{alg:line:rates:4} \State $Z'^*(t_i)_j = \tilde{Z'^*}_j$ for all $j \in \mathcal{V}_w \backslash \mathcal{B}_w$ \label{alg:line:rates:5} \EndFor \State Compute $\mathcal{L}''(t_i)$ from $Z'^*(t_i)$ using Lemma \ref{th:computing_ddL_from_dZ} \State \textbf{return} $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$ \end{algorithmic} \end{algorithm} The following theorem follows almost directly from Theorem \ref{th:complexity_of_rec_reduce}. \begin{theorem}\label{th:compute_rates_main_theorem} Alg. \ref{alg:compute_rates} correctly computes $Z'^*(t_i)$ and $\mathcal{L}''(t_i)$ in $\mathcal{O}(q)$ steps, and uses $\mathcal{O}(q)$ memory. \end{theorem} \section{Reducing computation time in practice} \label{sec:improvements} Our numerical results are obtained with an improved version of Algorithm \ref{alg:projection}.
We now explain the main idea behind this improved algorithm. The bulk of the complexity of Alg. \ref{alg:projection} comes from line \ref{alg:line:main_alg_compute_rates}, i.e., computing the rates $\{Z'^*(t_i)_j\}_{j \in \mathcal{V} \backslash \mathcal{B}(t_i)}$ from $\mathcal{B}(t_i)$ and $T$. For a fixed $j \in \mathcal{V} \backslash \mathcal{B}(t_i)$, and by Lemma \ref{eq:decomposition_of_problem}, the rate $Z'^*(t_i)_j$ depends only on one particular $(T_w=(r_w,\mathcal{V}_w,\mathcal{E}_w),\mathcal{B}_w)$-problem induced by $\mathcal{B}(t_i)$. If exactly this same problem is induced by both $\mathcal{B}(t_i)$ and $\mathcal{B}(t_{i+1})$, which happens if the new nodes that become fixed in line \ref{alg:line:main_alg_new_fixed_nodes} of round $i$ of Algorithm \ref{alg:projection} are not in $\mathcal{V}_w \backslash \mathcal{B}_w$, then we can save computation time in round $i+1$ by not recomputing any rates for $j \in \mathcal{V}_w \backslash \mathcal{B}_w$, and using for $Z'^*(t_{i+1})_j$ the value $Z'^*(t_{i})_j$. Furthermore, if only a few $\{Z'^*_j\}$ change from round $i$ to round $i+1$, then we can also save computation time in computing $\mathcal{L}''$ from $Z'^*$ by subtracting from the sum on the right-hand side of equation \eqref{eq:computing_ddL_from_dZ} the terms that depend on the previous, now changed, rates, and adding the new terms that depend on the new rates. Finally, if the rate $Z'^*_j$ does not change, then the value of $t < t_i$ at which $Z^*_j(t)$ might intersect $t - N_j$, and become fixed, given by $P_j$ in line \ref{alg:line:next_critical_value}, also does not change. (Note that this is not obvious from the formula for $P_r$ in line \ref{alg:line:next_critical_value}.)
If not all of the $\{P_r\}$ change from round $i$ to round $i+1$, we can also save computation time in computing the maximum, and the maximizers, in line \ref{alg:line:main_alg_new_fixed_nodes} by storing $P$ in a maximum binary heap, and executing lines \ref{alg:line:next_critical_value} and \ref{alg:line:main_alg_new_fixed_nodes} by extracting all the maximal values from the top of the heap. Each time any $P_r$ changes, the heap needs to be updated. \section{Numerical results}\label{sec:num_res} Our algorithm to solve \eqref{eq:projection_onto_PPM} exactly in a finite number of steps is of interest in itself. Still, it is interesting to compare it with other algorithms. In particular, we compare the convergence rate of our algorithm with that of two popular methods that solve \eqref{eq:projection_onto_PPM} iteratively: the Alternating Direction Method of Multipliers (ADMM) and Projected Gradient Descent (PGD). We apply ADMM and PGD to both the primal formulation \eqref{eq:projection_onto_PPM} and the dual formulation \eqref{eq:dual_higher}. We implemented all the algorithms in C, and derived closed-form updates for ADMM and PGD, see Appendix \ref{app:sec:details_of_ADMM_and_PDG}. We ran all algorithms on a single core of an Intel Core i5 2.5 GHz processor. Figure \ref{fig:run_time_diff_alg}-(left) compares the different algorithms for a random Galton–Watson input tree truncated to have $q=1000$ nodes, with the number of children of each node chosen uniformly within a fixed range, and for a random input $\hat{F}\in\mathbb{R}^q$ with entries chosen i.i.d. from a normal distribution. We observed the same behavior for all the random instances we tested. We gave ADMM and PGD an advantage by optimally tuning them for each individual problem instance tested. In contrast, our algorithm requires no tuning, which is a clear advantage. At each iteration, the error is measured as $\max_j \{|M_j - M^*_j|\}$.
Our algorithm is about $74\times$ faster than its closest competitor (PGD-primal) at $10^{-3}$ accuracy. In Figure \ref{fig:run_time_diff_alg}-(right), we show the average run time of our algorithm versus the problem size, for random inputs of the same form. The scaling of our algorithm is (almost) linear, and much better than our theoretical bound of $\mathcal{O}(q^2p)$ (here $p=1$). \begin{figure} \caption{\small (Left) Time that the different algorithms take to solve our problem for trees with $1000$ nodes. (Right) Average run time of our algorithm for problems of different sizes. For each size, each point is averaged over $500$ random problem instances.} \label{fig:run_time_diff_alg} \end{figure} Finally, we use our algorithm to exactly solve \eqref{eq:projection_onto_PPM_with_tree} by computing $\mathcal{C}(U)$ for all trees and a given input $\hat{F}$. Exactly solving \eqref{eq:projection_onto_PPM_with_tree} is very important for biology, since several relevant phylogenetic tree inference problems deal with trees of small sizes. We use an NVIDIA Quadro P5000 GPU to compute the cost of all possible trees with $q$ nodes in parallel, and return the tree with the smallest cost. Basically, we assign to each GPU virtual thread a unique tree, using Prüfer sequences \cite{prufer1918neuer}, and then have each thread compute the cost for its tree. For $q= 10$, we compute the cost of all $100$ million trees in about $8$ minutes, and for $q = 11$, we compute the cost of all $2.5$ billion trees in slightly less than $2.5$ hours. Code to solve \eqref{eq:projection_onto_PPM} using Alg. \ref{alg:projection}, with the improvements of Section \ref{sec:improvements}, can be found in \cite{gitcode}. More results using our algorithm can be found in Appendix \ref{sec:app:more_results}.
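The exhaustive search enumerates labeled trees via Prüfer sequences: every sequence of length $q-2$ over $\{1,\dots,q\}$ decodes to a unique labeled tree on $q$ nodes, so iterating over sequences covers all $q^{q-2}$ trees exactly once. The following sketch (illustrative Python; the actual engine assigns one sequence per GPU thread) shows the standard decoding and checks Cayley's count for $q=5$.

```python
import heapq
from itertools import product

def prufer_to_edges(seq, n):
    """Decode a Prufer sequence over nodes 1..n into the n-1 edges of its tree."""
    degree = [1] * (n + 1)
    for v in seq:
        degree[v] += 1
    leaves = [i for i in range(1, n + 1) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)              # smallest current leaf
        edges.append((min(leaf, v), max(leaf, v)))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((min(u, w), max(u, w)))
    return edges

# The sequence (2, 3) on 4 nodes decodes to the path 1-2-3-4.
assert prufer_to_edges([2, 3], 4) == [(1, 2), (2, 3), (3, 4)]

# Cayley's formula: the number of distinct labeled trees on q nodes is q^(q-2).
q = 5
trees = {frozenset(prufer_to_edges(list(s), q))
         for s in product(range(1, q + 1), repeat=q - 2)}
assert len(trees) == q ** (q - 2)
```

Because decoding is independent per sequence, the search is embarrassingly parallel, which is what makes the GPU enumeration over all trees practical for small $q$.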
\section{Conclusions and future work} We propose a new direct algorithm that, for a given tree, computes how close a matrix of mutation frequencies per position is to satisfying the perfect phylogeny model. Our algorithm is faster than state-of-the-art iterative methods for the same problem, even when they are optimally tuned. We use the proposed algorithm to build a GPU-based phylogenetic tree inference engine for trees of biologically relevant sizes. Unlike existing algorithms, which only heuristically search a small part of the space of possible trees, our algorithm performs a complete search over all trees relatively fast. It is an open problem to find direct algorithms that provably solve our problem in linear time on average, or even for a worst-case input. {\bf Acknowledgement: } This work was partially funded by NIH/1U01AI124302, NSF/IIS-1741129, and an NVIDIA hardware grant. \appendix {\bf \Large Appendix for ``Efficient Projection onto the Perfect Phylogeny Model''} \section{Further illustrations}\label{sec:app:further_illustrations} \begin{figure} \caption{\small Four subtrees of $T$ induced by $\mathcal{B}$.\label{fig:tree_and_variables}} \end{figure} \section{Proof of Theorem \ref{th:dual_prob} in Section \ref{sec:main_results}} \label{sec:app:proof_of_th:dual_prob} We prove Theorem \ref{th:dual_prob} by first proving the following very similar theorem. \begin{theorem} \label{th:compact_dual_prob} Problem \eqref{eq:projection_onto_PPM} can be solved by solving \begin{align} & \min_{t} t + \mathcal{L}(t),\label{eq:compact_dual_higher}\\[-0.4cm] & \hspace{1.2cm}\mathcal{L}(t) = \min_{Z\in\mathbb{R}^q} \frac{1}{2}\|(U^\top)^{-1} Z \|^2 \text{ subject to } Z + N \leq t \mathbf{1} \label{eq:compact_dual}, \end{align} where $N = U^\top \hat{F}$.
In particular, if $t^*$ minimizes \eqref{eq:compact_dual_higher}, $Z^*$ minimizes \eqref{eq:compact_dual} for $t=t^*$, and $M^*,F^*$ minimize \eqref{eq:projection_onto_PPM}, then \begin{equation}\label{eq:compact_Z_star_M_star_relation} M^* = - U^{-1}(U^{-1})^\top Z^*, \quad F^* = -(U^{-1})^\top Z^*. \end{equation} Furthermore, $t^*$, $M^*$, $F^*$ and $Z^*$ are unique. \end{theorem} \begin{proof}[Proof of Theorem \ref{th:compact_dual_prob}] Problem \eqref{eq:projection_onto_PPM} depends on the tree $T$ through the matrix of ancestors, $U$. To see how Theorem \ref{th:compact_dual_prob} implies Theorem \ref{th:dual_prob}, it is convenient to make this dependency more explicit. Any tree in $\mathcal{T}$ can be represented through a binary matrix $T$, where $T_{ij}=1$ if and only if node $i$ is the closest ancestor of node $j$. Henceforth, let $\mathcal{T}$ denote the set of all such binary matrices. We need the following lemma, which we prove later in this section of the appendix. \begin{lemma}\label{th:bijection_U_T} Consider an evolutionary tree and its matrices $T \in \mathcal{T}$ and $U \in \mathcal{U}$. We have \begin{equation}\label{eq:U_and_T_relation} U = (I - T)^{-1}. \end{equation} \end{lemma} Eq.~\eqref{eq:U_and_T_relation} implies that $((U^{-1})^\top Z)_i = (Z - T^\top Z)_i = Z_i - Z_{\bar{i}}$, and that $(U^{-1}(U^{-1})^\top Z)_i = Z_i - Z_{\bar{i}} - \sum_{r \in \partial i} (Z_r - Z_{\bar{r}})$, where $\partial i$ denotes the children of $i$ in $T$, and $\bar{i}$ represents the closest ancestor of $i$ in $T$. We assume by convention that $Z_{\bar{i}} = 0$ when $i = r$ is the root of $T$. Furthermore, the definition of $U$ implies that $N_i = (U^\top \hat{F})_i = \sum_{j \in \Delta i} \hat{F}_j$, where $\Delta i$ denotes the ancestors of $i$.
Thus, \begin{align} \label{eq:mechanical_interp} \mathcal{L}(t) = & \min_{Z\in\mathbb{R}^q} \frac{1}{2}\sum_{i \in \mathcal{V}} (Z_i - Z_{\bar{i}})^2 \text{ subject to } Z_i \leq t - \sum_{j \in \Delta i} \hat{F}_j, \;\forall i\in \mathcal{V}, \end{align} and, by \eqref{eq:compact_Z_star_M_star_relation}, \begin{align} \label{eq:fast_computation_of_M_and_F_from_Z} M^*_i = -Z^*_i + Z^*_{\bar{i}} + \sum_{r \in \partial i} (Z^*_r - Z^*_{\bar{r}}) \quad \text{and} \quad F^*_i = -Z^*_i + Z^*_{\bar{i}}, \quad \forall i\in \mathcal{V}. \end{align} \end{proof} \begin{proof}[Proof of Theorem \ref{th:dual_prob}] Our proof is based on Moreau's decomposition \cite{parikh-boyd}. Before we proceed with the proof, let us introduce a few concepts. Given a convex, closed and proper function $g:\mathbb{R}^q \to \mathbb{R} \cup \{+\infty\}$, we define its proximal operator as the map $G: \mathbb{R}^q \to \mathbb{R}^q$ such that \begin{equation}\label{eq:def_PO_g} G(n) = \arg \min_{x\in \mathbb{R}^q} g(x) + \frac{1}{2}\|x - n\|^2, \end{equation} where in our case $\|\cdot\|$ is the Euclidean norm. We define the Fenchel dual of $g$ as \begin{equation} g^*(x) = \sup_{s\in\mathbb{R}^q} \{x^\top s - g(s)\}, \end{equation} and we denote the proximal operator of $g^*$ by $G^*$. Note that $G^*$ can be computed from definition \eqref{eq:def_PO_g} by replacing $g$ with $g^*$. Moreau's decomposition identity states that \begin{equation} G(n) + G^*(n) = n. \end{equation} We can now start the proof. Consider the following indicator function \begin{equation}\label{eq:def_of_g_for_our_prob} g(\tilde{M}) = \begin{cases} 0, \qquad \text{if } (U^{-1}\tilde{M}) \geq 0 \ \text{and} \ \mathbf{1}^{\text{T}} (U^{-1}\tilde{M}) = 1, \\ +\infty, ~~\ \text{otherwise}, \end{cases} \end{equation} where $\tilde{M} \in \mathbb{R}^q$, and consider its associated proximal operator $G$. Solving problem \eqref{eq:projection_onto_PPM}, i.e., finding the minimizer $M^*$, is equivalent to evaluating $U^{-1} G(\hat{F})$.
Using Moreau's decomposition, we have \begin{equation} M^* = U^{-1} G(\hat{F}) = U^{-1}\hat{F} - U^{-1}G^*(\hat{F}). \end{equation} We will show that $G^*(\hat{F}) = \hat{F} + (U^{-1})^\top Z^*$, where $Z^*$ is a minimizer of \eqref{eq:dual}, which proves \eqref{eq:Z_star_M_star_relation} and essentially completes the proof. To compute $G^*$, we first need to compute \begin{align} g^*(Y) &= \sup_{\tilde{M}} \{Y^\top \tilde{M} - g(\tilde{M})\}\\ &=\max_{\tilde{M}} Y^\top \tilde{M}\label{eq:g_start_middle}\\ & \hspace{0.5cm} \text{ subject to }~~U^{-1} \tilde{M} \geq 0, \mathbf{1}^\top(U^{-1} \tilde{M})=1\nonumber. \end{align} Making the change of variable $M = U^{-1} \tilde{M}$, the maximum in problem \eqref{eq:g_start_middle} can be re-written as \begin{align} &\max_{{M}} (U^\top Y)^\top M \label{eq:g_start_middle_2}\\ & \hspace{0.cm} \text{ subject to }~~{M} \geq 0, \mathbf{1}^\top {M}=1\nonumber. \end{align} The maximum in \eqref{eq:g_start_middle_2} is achieved by setting all components of $M$ to zero except the one corresponding to the largest component of the vector $U^\top Y$, which we set to one. Therefore, we have \begin{equation} g^*(Y) = \max_i \; (U^\top Y)_i. \end{equation} Now we can write \begin{align}\label{eq:appendix_towards_uniqueness} G^*(\hat{F}) &= \arg \min_{Y\in \mathbb{R}^q}\; g^*(Y) + \frac{1}{2}\|Y - \hat{F}\|^2\\ &=\arg \min_{Y\in \mathbb{R}^q,t\in\mathbb{R}} \;t + \frac{1}{2}\|Y - \hat{F}\|^2\\ & \hspace{1.3cm} \text{ subject to }~~U^\top Y \leq t\mathbf{1}\nonumber. \end{align} Making the change of variable $Z = U^\top (Y -\hat{F})$, we can write $G^*(\hat{F})$ as \begin{align} G^*(\hat{F}) &=\hat{F} + (U^{-1})^\top Z^*, \text{ where } \\ (Z^*, t^*) &= \arg \min_{Z\in \mathbb{R}^q,t\in\mathbb{R}} \;t + \frac{1}{2}\|(U^{-1})^\top Z\|^2\\ & \hspace{1cm} \text{ subject to }~~Z + U^\top \hat{F} \leq t\mathbf{1}\nonumber.
\end{align} To see that $M^*$ and $F^*$ are unique, notice that problem \eqref{eq:projection_onto_PPM} is a projection onto a convex polytope, which always has a unique minimizer. Moreau's decomposition implies that $G^*(\hat{F})$ is unique; hence the minimizer $Y^*$ of \eqref{eq:appendix_towards_uniqueness} is unique. Thus, $Z^* = U^\top(Y^* - \hat{F})$ and $t^* = g^*(Y^*)$ are also unique. \end{proof} \begin{proof}[Proof of Lemma \ref{th:bijection_U_T}] We assume that the tree has $q$ nodes. The matrix $T$ is such that $T_{v,v'}=1$ if and only if $v$ is the closest ancestor of $v'$. Because of this, the $v$th column of $T^k$ has a one in row $v'$ if and only if $v'$ is an ancestor of $v$ separated by $k$ generations. Thus, the $v$th column of $I + T + T^2 +\dots+ T^{q-1}$ contains a one in all the rows $v'$ such that $v'$ is an ancestor of $v$, or such that $v=v'$. But this is the definition of the matrix $U$ associated to the tree $T$. Since no two mutants can be separated by more than $q-1$ generations, $T^k = 0$ for all $k \geq q$. It follows that $$U = I + T + T^2 +\dots +T^{q-1} = \sum^\infty_{i=0} T^i = (I-T)^{-1}.$$ \end{proof} \section{Proof of useful observations in Section \ref{sec:useful_obs}} \label{sec:app:proof_of_useful_obser} \begin{proof}[Proof of Lemma \ref{th:convexity_of_L_dual}] The proof follows from the following generic fact, which we prove first. Let $g(W) = \min_{Z\in \mathbb{R}^q} f(Z,W)$. If $f$ is jointly convex in $(Z,W)$, then $g$ is convex. Indeed, let $\alpha\in[0,1]$ and $\alpha' = 1 - \alpha$. We get $\alpha g( W_1) + \alpha'g(W_2) = \min_{Z_1,Z_2} \alpha f(Z_1,W_1) + \alpha' f(Z_2,W_2) \geq \min_{Z_1,Z_2} f(\alpha Z_1 + \alpha' Z_2,\alpha W_1 + \alpha' W_2) = g(\alpha W_1 + \alpha' W_2)$.
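The identity $U = (I-T)^{-1}$ established in the proof of Lemma \ref{th:bijection_U_T} is easy to verify numerically. The sketch below uses a hypothetical 4-node tree (node $0$ is the root), builds $T$ from parent pointers, and checks both the nilpotency of $T$ and the geometric-series identity.

```python
import numpy as np

# hypothetical tree on q = 4 nodes: parent(1) = 0, parent(2) = 0, parent(3) = 2
q = 4
parent = {1: 0, 2: 0, 3: 2}
T = np.zeros((q, q))
for child, par in parent.items():
    T[par, child] = 1.0  # T[v, v'] = 1 iff v is the closest ancestor of v'

# no two nodes are separated by more than q - 1 generations, so T^q = 0
assert np.allclose(np.linalg.matrix_power(T, q), 0)

# U = I + T + ... + T^{q-1} = (I - T)^{-1}
U = sum(np.linalg.matrix_power(T, k) for k in range(q))
assert np.allclose(U, np.linalg.inv(np.eye(q) - T))

# column v of U flags v and all of its ancestors: node 3 has ancestors 2 and 0
assert U[:, 3].tolist() == [1.0, 0.0, 1.0, 1.0]
```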
To apply this result to our problem, let $f_1(Z)$ be the objective of \eqref{eq:mechanical_interp} and let $f_2(Z,t,N)$ be a function (on the extended reals) such that $f_2 = 0$ if $(Z,t,N)$ satisfies the constraints in \eqref{eq:mechanical_interp} and $+\infty$ otherwise. Now notice that $\mathcal{L}(t) = \min_{Z} f_1(Z) + f_2(Z,t,N)$, where $f_1 + f_2$ is convex in $(Z,t,N)$, since both $f_1$ and $f_2$ are convex in $(Z,t,N)$. Convexity implies that $\mathcal{L}$ is continuous in $N$ and $t$. It also implies that $\mathcal{L}'(t)$ is non-decreasing~in~$t$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:continuity_of_Z_t}] {Continuity of $Z^*(t)$: } The objective function in \eqref{eq:mechanical_interp} is strictly convex as a function of $Z$, with a unique minimizer at $Z_i = 0, \forall i$. Due to strict convexity, if the objective takes values in a small interval, then $Z$ must lie inside some small ball. Since we know, by the remark following Lemma \ref{th:convexity_of_L_dual}, that $\mathcal{L}$ is continuous as a function of $t$, if $t$ and $t'$ are close, then $\mathcal{L}(t)$ and $\mathcal{L}(t')$ must be close. Strict convexity then implies that $Z^*(t)$ and $Z^*(t')$ must be close. The same argument can be used to prove continuity with respect to $N$. {Continuity of $Z^*(t^*)$: } Recall that $Z^*(t^*) = Z^*$, the solution of \eqref{eq:dual}. $Z^*$ is a continuous function of $M^*$, the solution of \eqref{eq:projection_onto_PPM}, and thus is fully determined by $U$ and $\hat{F}$. Since $\hat{F} = (U^\top)^{-1} N$, $\hat{F}$ is a continuous function of $N$, and it suffices to prove that $M^*$ is continuous in $\hat{F}$. Problem \eqref{eq:projection_onto_PPM} finds the projection of $\hat{F}$ onto a convex polytope. Let $F^*$ be this projection. Since $F^*$ changes continuously with $\hat{F}$, $M^* = U^{-1} F^*$ also changes continuously with $\hat{F}$.
\end{proof} \begin{proof}[Proof of Lemma \ref{th:B_piece_wise_constant}] Since $Z^*(t)$ is continuous, if $Z^*(t)_i \neq t - N_i$ then $Z^*(t')_i \neq t' - N_i$ for $t'$ in some neighborhood of $t$. \end{proof} \begin{proof}[Proof of Lemma \ref{eq:decomposition_of_problem}] First, note that, by definition of $\mathcal{B}(t)$, we know the value of all variables in $\mathcal{B}(t)$. Hence, the unknowns in problem \eqref{eq:mechanical_interp} are the variables in $\mathcal{V} \backslash \mathcal{B}(t)$, which can be partitioned into disjoint sets $\{\mathcal{V}_i \backslash \mathcal{B}(t)\}^k_{i=1}$. Second, notice that for each term in the objective of \eqref{eq:mechanical_interp} that involves unknown variables, there is some subtree $T_i$ that contains both of its variables. It follows that, given $\mathcal{B}(t)$, problem \eqref{eq:mechanical_interp} breaks into $k$ independent problems, the $i$th problem having as unknowns only the variables in $\mathcal{V}_i \backslash \mathcal{B}(t)$ and involving all terms in the objective where either $j$ or $\bar{j}$ is in $\mathcal{V}_i \backslash \mathcal{B}(t)$. Obviously, if $j \in \mathcal{V}_w \cap \mathcal{B}(t)$, then, by definition, $Z^*(t)_j = c_1 t + c_2$, with $c_1 = 1$. To find the behavior of $Z^*(t)_j$ for $j \in \mathcal{V}_w \backslash \mathcal{B}(t)$, we need to solve \eqref{eq:simpler_sub_problem}. To solve \eqref{eq:simpler_sub_problem}, notice that the first-order optimality conditions for problem \eqref{eq:mechanical_interp} imply that, if $j \in \mathcal{V} \backslash \mathcal{B}(t)$, then \begin{equation} Z_j = \frac{1}{|\partial j|} \sum_{r \in \partial j } Z_r, \end{equation} where $\partial j$ denotes the neighbors of node $j$.
We can further write \begin{align} Z_j &= \frac{1}{|\partial j|} \sum_{r \in \partial j \cap \mathcal{B}(t)} Z_r + \frac{1}{|\partial j|} \sum_{r \in \partial j \backslash \mathcal{B}(t)} Z_r \nonumber\\ &= \frac{1}{|\partial j|} \sum_{r \in \partial j \cap \mathcal{B}(t)} (t - N_r) + \frac{1}{|\partial j|} \sum_{r \in \partial j \backslash \mathcal{B}(t)} Z_r\label{eq:sol_for_Z_j}. \end{align} It follows that $Z_j = c_1t + c_2$, for some $c_1$ and $c_2$ that depend on $T$, $N$ and $\mathcal{B}$. If we solve for $Z_j$ by recursively applying \eqref{eq:sol_for_Z_j}, it is immediate to see that $c_1 \geq 0$. To see that $c_1\leq 1$, we study how $Z_j$, defined by \eqref{eq:sol_for_Z_j}, depends on $t$ algebraically. To do so, we treat $t$ as a variable. The study of this algebraic dependency in the proof should not be confused with $t$ being fixed in the statement of the theorem. Define $\rho = |\partial j \cap \mathcal{B}(t)|/|\partial j|$, and notice that \begin{align} \max_j\{ Z_j\} \leq \rho t + (1-\rho) \max_j \{Z_j\} + C, \end{align} in which $C$ is some constant. Recursively applying the above inequality we get \begin{equation} \max_j\{ Z_j\} \leq t + C', \end{equation} in which $C'$ is some constant. This shows that no $Z_j$ can grow with $t$ faster than $t$ itself, and hence $c_1 \leq 1$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:Z_and_Lprime_are_piece_wise_linear}] Lemma \ref{eq:decomposition_of_problem} implies that, for any $j$, $Z^*_j(t)$ depends linearly on $t$. The particular linear dependency depends on $\mathcal{B}(t)$, which is piecewise constant by Lemma \ref{th:B_piece_wise_constant}. Therefore, $Z^*_j(t)$ is a continuous piecewise linear function of $t$. This in turn implies that $\mathcal{L}'(t)$ is a continuous piecewise linear function of $t$, since it is the derivative of the continuous piecewise quadratic $\mathcal{L}(t) = (1/2)\sum_{i \in \mathcal{V}} (Z^*(t)_i - Z^*(t)_{\bar{i}})^2$.
Finally, since the particular linear dependency of $Z^*$ depends on $\mathcal{B}(t)$, it follows that $Z^*(t)$ and $\mathcal{L}'(t)$ change linear segment if and only if $\mathcal{B}(t)$ changes. \end{proof} \begin{proof}[Proof of Lemma \ref{eq:B_always_grows}] Let us assume that there exists $t < t'$ for which $\mathcal{B}(t) \subset \mathcal{B}(t')$. We can assume without loss of generality that $t$ is sufficiently close to $t'$ that $\mathcal{B}(s)$ is constant for $s\in [t,t')$. Let $j$ be such that $j\in \mathcal{B}(t')$ but $j\notin \mathcal{B}(t)$. This means that $Z^*_j(s) < s - N_j$ for all $s\in [t,t')$ and that $Z^*_j(t') = t' - N_j$. Since, by Lemma \ref{eq:decomposition_of_problem}, $Z^*_j(s) = c_1 s + c_2$ for some constants $c_1$ and $c_2$, the only way that $Z^*_j(s)$ can intersect $s - N_j$ at $s = t'$ is for $c_1 > 1$, which is a contradiction. Since $\mathcal{B}(t)$ thus only grows as $t$ decreases, and given that the largest that $\mathcal{B}(t)$ can be is $\{1,\dots,q\}$, it follows that $\mathcal{B}(t)$ can only take $q+1$ different configurations: one configuration per size of $\mathcal{B}(t)$, from $q$~to~$0$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:Z_and_Lprime_finite_break_points}] Lemma \ref{eq:B_always_grows} implies that $\mathcal{B}(t)$ changes at most $q$ times. Lemma \ref{th:Z_and_Lprime_are_piece_wise_linear} then implies that $Z^*(t)$ and $\mathcal{L}'(t)$ have at most $q+1$ different linear segments. \end{proof} \section{Proofs of the properties of the algorithm in Section \ref{sec:main_alg}} \begin{proof}[Proof of Theorem \ref{th:main_alg_complexity}] {\bf{Run-time:}} Recall that $Z^*, Z'^* \in \mathbb{R}^q$ and that $\mathcal{L}' \in \mathbb{R}$. Line \ref{alg:line:compute_N_from_F} is done in $\mathcal{O}(q)$ steps by doing a DFS on $T$. Here, we assume that $T$ is represented as a linked list.
Specifically, starting from the root, we keep a variable $x$ where we accumulate the values of $\hat{F}_j$ visited from the root to the current node being explored in $T$ as we move down the tree. As we move up the tree, we subtract values of the nodes $\hat{F}_j$ from $x$. Then, at each node $i$ visited by the DFS, we can read from $x$ the value $N_i$. Line \ref{alg:line:init} takes $\mathcal{O}(q)$ steps to finish. The procedure ComputeRates takes $\mathcal{O}(q)$ steps to finish, which we prove in Theorem \ref{th:compute_rates_main_theorem}. All of the other lines inside the for-loop are manipulations that take at most $\mathcal{O}(q)$ steps. Lines \ref{alg:line:find_Zstar} and \ref{alg:line:find_tstar} take $\mathcal{O}(q)$ steps. From \eqref{eq:Z_star_M_star_relation}, the complexity to compute $F^*$ is $\mathcal{O}(q)$, and the complexity to compute $M^*$ is $\mathcal{O}(\sum_{i \in \mathcal{V}} |\partial i|) = \mathcal{O}(|\mathcal{E}|) = \mathcal{O}(q)$. {\bf{Memory: }} The DFS in line \ref{alg:line:compute_N_from_F} only requires $\mathcal{O}(q)$ memory. Throughout the algorithm, we only need to keep the two most recent values of $t_i$, $\mathcal{B}(t_i)$, $Z^*(t_i)$, $Z'^*(t_i)$, $\mathcal{L}'(t_i)$ and $\mathcal{L}''(t_i)$. This takes $\mathcal{O}(q)$ memory. The procedure ComputeRates takes $\mathcal{O}(q)$ memory, which we prove in Theorem \ref{th:compute_rates_main_theorem}. \end{proof} \begin{proof}[Proof of Theorem \ref{th:alg_correctness}] The proof of Theorem \ref{th:alg_correctness} amounts to checking that, at every step of Algorithm \ref{alg:projection}, the quantities computed, e.g., the paths $\{Z^*(t)\}$ and $\{\mathcal{L}'(t)\}$, are correct. Lemmas \ref{th:Z_and_Lprime_are_piece_wise_linear} and \ref{th:Z_and_Lprime_finite_break_points} prove that $Z^*(t)$ and $\mathcal{L}'(t)$ are piecewise linear and continuous with at most $q$ changes in linear segment. 
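The DFS accumulation just described can be sketched as follows. This is an illustrative transcription, not the paper's implementation; it assumes $N = U^\top \hat{F}$, i.e., that $N_i$ sums $\hat{F}$ along the root-to-$i$ path, and uses a hypothetical children-list representation of $T$.

```python
import numpy as np

# hypothetical O(q) DFS computing N = U^T F_hat, i.e., N_i is the sum of
# F_hat over the path from the root to node i, via a running accumulator x
def compute_N(children, F_hat, root=0):
    N = np.zeros(len(F_hat))
    stack = [(root, F_hat[root])]  # (node, accumulated sum from the root)
    while stack:
        v, x = stack.pop()
        N[v] = x
        for c in children.get(v, []):
            stack.append((c, x + F_hat[c]))
    return N

# small tree: children(0) = {1, 2}, children(2) = {3}
children = {0: [1, 2], 2: [3]}
F_hat = np.array([1.0, 2.0, 3.0, 4.0])
N = compute_N(children, F_hat)

# compare against the dense computation N = U^T F_hat with U = (I - T)^{-1}
T = np.zeros((4, 4))
T[0, 1] = T[0, 2] = T[2, 3] = 1.0
U = np.linalg.inv(np.eye(4) - T)
assert np.allclose(N, U.T @ F_hat)
```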
Hence, the paths $\{Z^*(t)\}$ and $\{\mathcal{L}'(t)\}$ are fully specified by their values at $\{t_i\}^k_{i=1}$, with $k \leq q$. Lemma \ref{th:Z_and_Lprime_are_piece_wise_linear} proves that these critical values are determined as the instants at which $\mathcal{B}(t)$ changes. Furthermore, Lemma \ref{eq:B_always_grows} proves that, as $t$ decreases, variables are only added to $\mathcal{B}(t)$. Hence, to find $\{t_i\}$ and $\{\mathcal{B}(t_i)\}$, we only need to find the times and components at which, as $t$ decreases, $Z^*(t)_r$ goes from $Z^*(t)_r < t - N_r$ to $Z^*(t)_r = t - N_r$. Also, since $\mathcal{B}$ can have at most $q$ variables, the fact that the for-loop in line \ref{alg:line:forloop} is bounded to the range $1$ to $q$ does not prevent the algorithm from finding any critical value. Theorem \ref{th:compute_rates_main_theorem} tells us that we can compute $Z'^*(t_i)$ from $\mathcal{B}(t_i)$ and $T$. Since we have already proved that the path $\{Z^*(t)\}$ is piecewise linear and continuous, we can compute $t_{i+1}$, and the variables that become fixed, by solving \eqref{eq:formula_for_next_t} for $t$ for each $r \notin \mathcal{B}(t_i)$, choosing for $t_{i+1}$ the largest such $t$, and choosing as the new fixed variables, i.e., $\mathcal{B}(t_{i+1})-\mathcal{B}(t_{i})$, the components $r$ for which the solution of \eqref{eq:formula_for_next_t} is $t_{i+1}$. Since we have already proved that $Z^*(t)$ and $\mathcal{L}'(t)$ are piecewise linear and continuous, we can compute $Z^*(t_{i+1})$ and $\mathcal{L}'(t_{i+1})$ from $Z^*(t_{i})$, $\mathcal{L}'(t_{i})$, $Z'^*(t_{i})$ and $\mathcal{L}''(t_{i})$ using \eqref{eq:liner_relation_between_critical_points}. Lemma \ref{th:convexity_of_L_dual} proves that $\mathcal{L}'(t)$ is non-decreasing in $t$, and Theorem \ref{th:dual_prob} proves that $t^*$ is unique. Hence, as $t$ decreases, there is a single $t$ at which $\mathcal{L}'(t)$ goes from $>-1$ to $< -1$.
Since we have already proved that we correctly, and sequentially, compute $\mathcal{L}'(t_i)$ and $\mathcal{L}''(t_i)$, and that $\mathcal{L}'(t)$ is piecewise linear and continuous, we can stop computing critical values whenever we can determine that $\mathcal{L}'(t) = \mathcal{L}'(t_k) + (t-t_k) \mathcal{L}''(t_k)$ will cross the value $-1$, where $t_k$ is the latest computed critical value. This is the case when $\mathcal{L}'(t_k) > -1$ and $\mathcal{L}'(t_{k+1}) < -1$, or when $\mathcal{L}'(t_k) > -1$ and $t_k$ is the last possible critical value, which happens when $|\mathcal{B}(t_k)| = q$. From this last critical value, $t_k$, we can then find $t^*$ and $Z^*$ by solving $-1 = \mathcal{L}'(t_k) + (t^*-t_k) \mathcal{L}''(t_k)$ and $Z^* = Z^*(t_k) + (t^* - t_k) Z'^*(t_k)$. Finally, once we have $Z^*$, we can use \eqref{eq:Z_star_M_star_relation} in Theorem \ref{th:dual_prob} to find $M^*$ and $F^*$. \end{proof} \section{Proofs for computing the rates in Section \ref{sec:computing_rates}} \label{sec:app:proofs_for_rates} \begin{proof}[Proof of Lemma \ref{th:computing_ddL_from_dZ}] Let $t \in (t_{i+1},t_{i})$. We have \begin{align} \mathcal{L}'(t) &= \frac{{\rm d} }{{\rm d} t} \frac{1}{2}\sum_{j \in \mathcal{V}} (Z^*(t)_j - Z^*(t)_{\bar{j}})^2 \nonumber\\ &= \sum_{j \in \mathcal{V}} (Z^*(t)_j - Z^*(t)_{\bar{j}})({Z^*}'(t)_j - {Z^*}'(t)_{\bar{j}}). \end{align} Taking another derivative, and recalling that $Z''^*(t) = 0$ for $t \in (t_{i+1},t_{i})$, we get \begin{align} \mathcal{L}''(t) = \sum_{j \in \mathcal{V}} ({Z^*}'(t)_j - {Z^*}'(t)_{\bar{j}})^2, \end{align} and the lemma follows by taking the limit $t \uparrow t_i$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:pruning}] The $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem is unconstrained and convex, hence we can solve it by taking derivatives of the objective with respect to the free variables, and setting them to zero. Let us call the objective function $F(Z)$.
If $j \in \mathcal{V} \backslash \mathcal{B}$ is a leaf, then $\frac{{\rm d} F}{{\rm d} Z_j} = 0$ implies that $Z^*_j = Z^*_{\bar{j}}$. We now prove the second part of the lemma. Let $\tilde{F}(Z)$ be the objective of the modified problem. Clearly, $\frac{{\rm d} F}{{\rm d} Z_i} = \frac{{\rm d} \tilde{F}}{{\rm d} Z_i}$ for all $i \in \tilde{T} \backslash \bar{j}$. Let $C$ be the children of $\bar{j}$ in $T$ and $\tilde{C}$ be the children of $\bar{j}$ in $\tilde{T}$. We have $\tilde{C} = C \backslash j$. Furthermore, $\frac{{\rm d} \tilde{F}}{{\rm d} Z_j} = 0$ is equivalent to $\gamma_j (Z_j - Z_{\bar{j}}) + \sum_{s \in \tilde{C}} \gamma_s (Z_j - Z_s) = 0$, and $\frac{{\rm d} F}{{\rm d} Z_j} = 0$ is equivalent to $\gamma_j (Z_j - Z_{\bar{j}}) + \sum_{s \in C} \gamma_s (Z_j - Z_s) = 0$. However, we have already proved that the optimal solution for the original problem has $Z^*_j= Z^*_{\bar{j}}$. Hence, this condition can be replaced in $\frac{{\rm d} F}{{\rm d} Z_j}$, which becomes $\gamma_j (Z_j - Z_{\bar{j}}) + \sum_{s \in \tilde{C}} \gamma_s (Z_j - Z_s) = 0$. Therefore, the two problems have the same optimality conditions, which implies that $Z^*_i = \tilde{Z}^*_i$, for all $i \in \tilde{\mathcal{V}}$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:star_problem}] The proof follows directly from the first order optimality conditions, a linear equation that we solve for $Z^*_1$. \end{proof} \begin{proof}[Proof of Lemma \ref{th:tree_reduction}] The first order optimality conditions for both problems are a system of linear equations, one equation per free node in each problem. All the equations associated to the ancestral nodes of $j$ are the same for both problems. 
The equation associated to variable $j$ in the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem is \begin{equation} \gamma_j (Z_{\bar{j}} - Z_j) + \sum^r_{i=1} \gamma_i (Z_i - Z_j) = 0, \end{equation} which implies that \begin{equation}\label{eq:Z_j_in_equiv_reduction} Z_j = \frac{ \gamma_j Z_{\bar{j}} + \sum^r_{i=1} \gamma_i Z_i }{ \gamma_j + \sum^r_{i=1} \gamma_i }. \end{equation} The equation associated to the variable $\bar{j}$ in the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem is \begin{equation} \label{eq:Z_j_bar_in_equiv_reduction} F(Z,\alpha,\beta,\gamma) + \gamma_j (Z_j - Z_{\bar{j}}) = 0, \end{equation} where $F(Z)$ is a linear function of $Z$ determined by the tree structure and parameters associated to the ancestral edges and nodes of $\bar{j}$. The equation associated to the variable $\bar{j}$ in the $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$-problem is \begin{equation}\label{eq:Z_j_bar_in_equiv_reduction_smaller_problem} F(\tilde{Z},\tilde{\alpha},\tilde{\beta},\tilde{\gamma}) + \tilde{\gamma}_j (\tilde{\alpha}_j t + \tilde{\beta}_j - \tilde{Z}_{\bar{j}}) = 0, \end{equation} for the same function $F$ as in \eqref{eq:Z_j_bar_in_equiv_reduction}. Note that the components of $\tilde{\alpha}$, $\tilde{\beta}$ and $\tilde{\gamma}$ associated to the ancestral edges and nodes of $\bar{j}$ are the same as in $\alpha$, $\beta$ and $\gamma$. Hence, $F(\tilde{Z},\tilde{\alpha},\tilde{\beta},\tilde{\gamma}) = F(\tilde{Z},\alpha,\beta,\gamma)$. By replacing \eqref{eq:Z_j_in_equiv_reduction} into \eqref{eq:Z_j_bar_in_equiv_reduction}, one can easily check the following. Equations \eqref{eq:Z_j_bar_in_equiv_reduction} and \eqref{eq:Z_j_bar_in_equiv_reduction_smaller_problem}, as linear equations on $Z$ and $\tilde{Z}$ respectively, have the same coefficients if \eqref{eq:reduction_equations} holds. 
Hence, if \eqref{eq:reduction_equations} holds, the solution to the linear system associated to the optimality conditions in both problems gives the same optimal value for all variables ancestral to, and including, $\bar{j}$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:complexity_of_rec_reduce}] Although $T$ changes during the execution of the algorithm, in the proof we let $T = (r,\mathcal{V},\mathcal{E})$ be the tree passed to the algorithm at the zeroth level of the recursion. Recall that $|\mathcal{V}|= q$ and $|\mathcal{E}| = q-1$. {\bf{Correctness: }} The correctness of the algorithm follows directly from Lemmas \ref{th:pruning}, \ref{th:star_problem}, and \ref{th:tree_reduction} and the explanation following these lemmas. {\bf{Run-time: }} It is convenient to think of the complexity of the algorithm by assuming that it is running on a machine with a single instruction pointer that jumps from line to line in Algorithm \ref{alg:compute_rates_rec}. With this in mind, for example, the recursive call in line \ref{alg:line:rec:6} simply makes the instruction pointer jump from line \ref{alg:line:rec:6} to line \ref{alg:line:rec:1}. The run-time of the algorithm is bounded by the sum of the time spent in each line in Algorithm \ref{alg:compute_rates_rec}, throughout its entire execution. Each basic step costs one unit of time. Each node in $\mathcal{V}$ is chosen as $j$ at most once throughout the entire execution of the algorithm. Hence, line \ref{alg:line:rec:1} is executed at most $q$ times, and thus any line is executed at most $q$ times, at most once for each possible choice of $j$. Assuming that we keep $T.b$ updated, line \ref{alg:line:rec:1} can be executed in $\mathcal{O}(1)$ time, by reading the first element of the linked list $T.b$. Lines \ref{alg:line:rec:2} and \ref{alg:line:rec:6} also take $\mathcal{O}(1)$ time.
Here, we are thinking of the cost of line \ref{alg:line:rec:6} as simply the cost of making the instruction pointer jump from line \ref{alg:line:rec:1} to line \ref{alg:line:rec:6}, not the cost of fully completing the call to \emph{ComputeRatesRec} on the modified problem. The modification made to the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem by lines \ref{alg:line:rec:5} and \ref{alg:line:rec:7} is related to the addition, or removal, of at most $\text{degree}(j)$ nodes, where $\text{degree}(j)$ is the degree of $j$ in $T$. Hence, they can be executed in $\mathcal{O}(\text{degree}(j))$ steps. Finally, lines \ref{alg:line:rec:3} and \ref{alg:line:rec:8} require solving a star-shaped problem with $\mathcal{O}(\text{degree}(j))$ variables, and thus take $\mathcal{O}(\text{degree}(j))$ steps, which can be observed by inspecting \eqref{eq:star_solution}. Therefore, the run-time of the algorithm is bounded by $\mathcal{O}(\sum_j \text{degree}(j)) = \mathcal{O}(q)$. To see that it is not expensive to keep $T$ updated, notice that, if $T$ changes, then either $T.b$ loses $j$ (line \ref{alg:line:rec:5}) or has $j$ reinserted (line \ref{alg:line:rec:7}), both of which can be done in $\mathcal{O}(1)$ steps. Hence, we can keep $T.b$ updated with only $\mathcal{O}(1)$ effort each time we run line \ref{alg:line:rec:5} and line \ref{alg:line:rec:7}. Throughout the execution of the algorithm, the tree $T$ either shrinks by losing nodes that are children of the same parent (line \ref{alg:line:rec:5}), or grows by regaining nodes that are all siblings (line \ref{alg:line:rec:7}). Hence, the linked list $T.a$ can be kept updated with only $\mathcal{O}(1)$ effort each time we run line \ref{alg:line:rec:5} and line \ref{alg:line:rec:7}. Across the whole execution of the algorithm, $T.a$ and $T.b$ can be kept updated with $\mathcal{O}(\sum_j \text{degree}(j)) = \mathcal{O}(q)$ effort.
{\bf{Memory: }} All the variables with a size that depends on $q$ are passed by reference in each call of \emph{ComputeRatesRec}, namely, $Y$, $T$, $\mathcal{B}$, $\alpha$, $\beta$ and $\gamma$. Hence, we only need to allocate memory for them once, at the zeroth level of the recursion. All these variables take $\mathcal{O}(q)$ memory to store. \end{proof} \begin{proof}[Proof of Lemma \ref{th:rates_only}] From Definition \ref{def:sub_problem_T_B_A_B_G}, we know that the $(T,\mathcal{B},{\bf 1},-N,{\bf 1})$-problem and the $(T,\mathcal{B})$-problem are the same. Hence, it is enough to prove that the solutions of (i) any $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem and of (ii) the $(T,\mathcal{B},\alpha,0,\gamma)$-problem change at the same rate as a function of $t$. We have already seen that the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem can be solved by recursively invoking Lemma \ref{th:tree_reduction} until we arrive at problems that are small enough to be solved via Lemma \ref{th:star_problem}. We now make two observations. First, while recursing, Lemma \ref{th:tree_reduction} always transforms a $(\tilde{T},\tilde{\mathcal{B}},\tilde{\alpha},\tilde{\beta},\tilde{\gamma})$-problem into a smaller $(\tilde{\tilde{T}},\tilde{\tilde{\mathcal{B}}},\tilde{\tilde{\alpha}},\tilde{\tilde{\beta}},\tilde{\tilde{\gamma}})$-problem where, by \eqref{eq:reduction_equations}, $\tilde{\tilde{\gamma}}$ and $\tilde{\tilde{\alpha}}$ only depend on $\tilde{\alpha}$ and $\tilde{\gamma}$ but not on $\tilde{\beta}$. Second, while recursing, each time Lemma \ref{th:star_problem} is invoked to compute an explicit value for some component of the solution, via solving some star-shaped $(\tilde{\tilde{T}},\tilde{\tilde{\mathcal{B}}},\tilde{\tilde{\alpha}},\tilde{\tilde{\beta}},\tilde{\tilde{\gamma}})$-problem, the rate of change of this component with $t$ is a function of $\tilde{\tilde{\alpha}}$ and $\tilde{\tilde{\gamma}}$ only. We can see this from \eqref{eq:star_solution}.
Hence, the rate of change with $t$ of the solution of the $(T,\mathcal{B},\alpha,\beta,\gamma)$-problem does not depend on $\beta$, so we can assume $\beta = 0$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:compute_rates_main_theorem}] {\bf{Correctness: }} The correctness of Algorithm \ref{alg:compute_rates} follows from the correctness of Algorithm \ref{alg:compute_rates_rec}. {\bf{Run-time and memory: }} We can prune each $T_w$ in $\mathcal{O}(|T_w|)$ steps and $\mathcal{O}(1)$ memory using DFS. In particular, once we reach a leaf of $T_w$ that is free, i.e., not in $\mathcal{B}_w$, and as the DFS travels back up the tree, we can prune from $T_w$ all the nodes that are free. By Theorem \ref{th:complexity_of_rec_reduce}, the number of steps and memory needed to completely finish line \ref{alg:line:rates:4} is $\mathcal{O}(|T_w|)$. The same is true to complete line \ref{alg:line:rates:5}. Hence, the number of steps and memory required to execute the for-loop is $\mathcal{O}(\sum_w |T_w|)=\mathcal{O}(|T|)=\mathcal{O}(q)$. Finally, by Lemma \ref{th:computing_ddL_from_dZ}, $\mathcal{L}''$ can be computed from $Z'^*$ in $\mathcal{O}(q)$ steps using $\mathcal{O}(1)$ memory. \end{proof} \section{Details of the ADMM and the PGD algorithms in Section \ref{sec:num_res}} \label{app:sec:details_of_ADMM_and_PDG} Here we explain the details of our implementations of the Alternating Direction Method of Multipliers (ADMM) and the Projected Gradient Descent (PGD) methods, applied to our problem. \subsection{ADMM} \subsubsection{ADMM for the primal problem} We start by putting our initial optimization problem (\ref{eq:projection_onto_PPM}) into the following equivalent form: \begin{equation} \min_{M \in \mathbb{R}^{q}} \{f(M) = \frac{1}{2} \|F - U M \|^2\} + g(M), \end{equation} where $g(M)$ is the indicator function imposing the constraints on $M$: \begin{equation} g(M) := \begin{cases} 0, \qquad M \geq 0, M^\top \mathbf{1} = 1, \\ +\infty, \ ~~\text{otherwise}.
\end{cases} \end{equation} In this formulation, our target function is a sum of two terms. We now proceed with the standard ADMM procedure, utilizing the splitting $f$, $g$. Our ADMM scheme iterates on the variables $M, M_1, M_2, u_1, u_2 \in \mathbb{R}^q$. $M_1$ and $M_2$ are primal variables, $M$ is a consensus variable, and $u_1$ and $u_2$ are dual variables. It has tuning parameters $\alpha,\rho \in \mathbb{R}$. First, we evaluate the proximal map associated with the first term \begin{equation} M_1 \gets \argmin_{S \in \mathbb{R}^{q}} \frac{1}{2} \|F - U S \|^2 + \frac{\rho}{2} \|S - M + u_1 \|^2, \end{equation} where $S$ is a dummy variable. This map can be evaluated in closed form, \begin{equation} M_1 = (\rho I + U^\top U)^{-1} (\rho M - \rho u_1 + U^\top F). \end{equation} Second, we evaluate the proximal map associated with the second term \begin{equation} M_2 \gets \argmin_{S \in \mathbb{R}^{q}} g(S) + \frac{\rho}{2} \|S - M + u_2 \|^2, \end{equation} where $S$ is again a dummy variable. This map is precisely the projection onto the simplex, which has been extensively studied in the literature; there are many fast algorithms that solve this problem exactly. We implemented the algorithm proposed in \cite{condat2016fast}. Lastly, we perform the rest of the standard ADMM updates: \begin{equation} \begin{aligned} M &\gets \frac{1}{2}(M_1 + u_1 + M_2 + u_2), \\ u_1 &\gets u_1 + \alpha (M_1 - M), \\ u_2 &\gets u_2 + \alpha (M_2 - M). \\ \end{aligned} \end{equation} We repeat the above steps until a satisfactory precision is reached, and read off the final solution from the variable $M$. \subsubsection{ADMM for the dual problem} We now apply ADMM to the dual problem (\ref{eq:dual_higher}).
We start by incorporating the constraints into the target function to rewrite (\ref{eq:dual_higher}) as \begin{equation} \min_{Z, t} \{f(t) = t\} + \{h(Z) = \frac{1}{2} \|(U^\top)^{-1} Z\|^2 \} + g(t, Z), \end{equation} where \begin{equation} g(t, Z) := \begin{cases} 0, \qquad t \mathbf{1} - Z \geq N, \\ +\infty, \ ~~\text{otherwise}, \end{cases} \end{equation} is the indicator function imposing the constraints on $t, Z$. ADMM now splits the problem into three parts, each associated with one of the functions $f$, $g$ and $h$. Our ADMM scheme iterates on the variables $Z, X_Z, X_{gZ}, u_Z, u_{gZ} \in \mathbb{R}^q$, and $t, X_t, X_{gt}, u_t, u_{gt} \in \mathbb{R}$. The variables $X_Z, X_{gZ},X_t, X_{gt}$ are primal variables, $t,Z$ are consensus variables, and $u_Z, u_{gZ},u_t, u_{gt}$ are dual variables. It has tuning parameters $\alpha,\rho \in \mathbb{R}$. First, we evaluate the proximal map for the first term \begin{equation} X_Z \gets \argmin_{S \in \mathbb{R}^q} \frac{1}{2} \|(U^\top)^{-1} S\|^2 + \frac{\rho}{2} \|S - Z + u_Z\|^2, \end{equation} where $S$ is a dummy variable. This map can be evaluated in closed form: \begin{equation} X_Z = (\rho I + U^{-1} (U^{-1})^{\top})^{-1} \rho (Z - u_Z). \end{equation} Next, we evaluate the proximal map for the second term \begin{equation} X_t \gets \argmin_{S \in \mathbb{R}} S + \frac{\rho}{2} (S - t + u_t)^2, \end{equation} where $S$ is a dummy variable. Again, this can be solved straightforwardly: \begin{equation} X_t = \frac{\rho t - \rho u_t - 1}{\rho}. \end{equation} We then evaluate the proximal map for the third term, which involves the constraints \begin{align} & (X_{gZ}, X_{gt}) \gets \argmin_{S \in \mathbb{R}^q, S_t \in \mathbb{R}} g(S, S_t) + \frac{\rho}{2} \|(S, S_t)\nonumber\\ & - (Z - u_{gZ}, t - u_{gt})\|^2, \end{align} where $S, S_t$ are dummy variables. This problem is a projection onto the polyhedron defined by the constraints, $t \mathbf{1} - Z \geq N$, in $\mathbb{R}^{q+1}$.
We developed an algorithm that solves this problem exactly in $\mathcal{O}(q \log q)$ steps. This is discussed in Section \ref{projection-onto-polyhedron}. What is left are the remaining standard ADMM updates: \begin{equation} \begin{aligned} & Z \gets \frac{1}{2}(X_Z + u_Z + X_{gZ} + u_{gZ}), \\ & u_Z \gets u_Z + \alpha (X_Z - Z), \\ & u_{gZ} \gets u_{gZ} + \alpha (X_{gZ} - Z), \\ & t \gets \frac{1}{2}(X_t + u_t + X_{gt} + u_{gt}), \\ & u_t \gets u_t + \alpha (X_t - t), \\ & u_{gt} \gets u_{gt} + \alpha (X_{gt} - t). \\ \end{aligned} \end{equation} We repeat the above steps until a satisfactory precision is reached, and read off the final solution from the variables $t$ and $Z$. \subsection{PGD} \subsubsection{PGD for the primal problem} Implementing PGD is rather straightforward. For the initial problem (\ref{eq:projection_onto_PPM}), we simply perform the following update repeatedly until a satisfactory precision is reached: \begin{equation} M \gets \text{Proj-onto-Simplex}(M + \alpha U^\top (F - U M)), \end{equation} where Proj-onto-Simplex() refers to the projection onto the simplex, for which we implemented the algorithm proposed in \cite{condat2016fast}, and $\alpha \in \mathbb{R}$ is the step size, a tuning parameter. \subsubsection{PGD for the dual problem} For the dual problem (\ref{eq:dual_higher}), the updates we need are \begin{equation} \begin{split} Z &\gets Z - \alpha U^{-1}(U^{-1})^\top Z, \\ t &\gets t - \alpha, \\ (Z, t) &\gets \text{Proj-onto-Polyhedron}((Z, t)), \end{split} \end{equation} where Proj-onto-Polyhedron() refers to the projection onto the polyhedron defined by $t \mathbf{1} - Z \geq N$ in $\mathbb{R}^{q+1}$, while $\alpha \in \mathbb{R}$ is the step size. This is explained in detail in Section \ref{projection-onto-polyhedron}. Again, we perform these updates repeatedly until a satisfactory precision is reached, and tune the parameters to achieve the best possible performance.
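A minimal sketch of the primal PGD iteration is given below. For the simplex projection we substitute a standard sort-and-threshold routine rather than the method of \cite{condat2016fast}, and the chain-shaped $U$ and the data are hypothetical; the assertions only check feasibility and monotone decrease of the objective.

```python
import numpy as np

def proj_simplex(v):
    # Euclidean projection onto {x >= 0, 1^T x = 1} by sorting (O(q log q))
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / ks > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

q = 6
# hypothetical chain tree: parent(i) = i - 1, so T is the superdiagonal
T = np.diag(np.ones(q - 1), 1)
U = np.linalg.inv(np.eye(q) - T)
rng = np.random.default_rng(0)
F = rng.normal(size=q)

obj = lambda M: 0.5 * np.linalg.norm(F - U @ M) ** 2
M = np.full(q, 1.0 / q)                  # feasible starting point
alpha = 1.0 / np.linalg.norm(U, 2) ** 2  # step size 1/L, with L = ||U||_2^2
start = obj(M)
for _ in range(1000):
    M = proj_simplex(M + alpha * U.T @ (F - U @ M))

assert np.isclose(M.sum(), 1.0) and (M >= -1e-12).all()  # M stays on the simplex
assert obj(M) <= start + 1e-12                           # objective decreased
```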
\subsection{Projection onto the polyhedron $t \mathbf{1} - Z \geq N$} \label{projection-onto-polyhedron} We would like to solve the following optimization problem: \begin{gather} \label{proj_poly} \argmin_{Z \in \mathbb{R}^q, t \in \mathbb{R}} \frac{1}{2} \|(Z, t) - (A, B)\|^2, \\ \text{subject to }~~t \mathbf{1} - Z \geq N, \end{gather} which is the projection of $(A, B)$ onto the polyhedron $t \mathbf{1} - Z \geq N$ in $\mathbb{R}^{q+1}$. The Lagrangian of this optimization problem is \begin{equation} \mathcal{L} = \frac{1}{2} \|(Z, t) - (A,B)\|^2 + \lambda^\top (Z + N - t\mathbf{1}), \end{equation} where $\lambda \in \mathbb{R}^q$ is the Lagrange multiplier. We solve problem \eqref{proj_poly} by solving the dual problem $\max_{\lambda \geq 0} \min_{Z,t}\mathcal{L}$. We first carry out the minimization over the variables $Z$ and $t$, which admits the closed-form solutions \begin{equation} \label{proj-poly-var} Z^* = A - \lambda, \qquad t^* = B + \mathbf{1}^\top \lambda. \end{equation} Using these expressions, we can rewrite the Lagrangian as \begin{equation} \mathcal{L} = -\frac{1}{2}\lambda^\top (I + \mathbf{1}\mathbf{1}^\top) \lambda + R^\top \lambda, \end{equation} where $R = A + N - B \mathbf{1}$. Our goal now becomes solving the following optimization problem: \begin{gather} \label{proj_dual} \argmin \frac{1}{2} \lambda^\top (I + \mathbf{1}\mathbf{1}^\top) \lambda - R^\top \lambda, \\ \text{subject to }~~ \lambda \geq 0. \end{gather} The KKT conditions for (\ref{proj_dual}) are \begin{equation} \lambda_i + \mathbf{1}^\top \lambda - R_i - s_i = 0, \ \ \lambda_i \geq 0, \ \ s_i \geq 0, \ \ \lambda_i s_i = 0, \qquad i = 1,\dots,q, \end{equation} where the $s_i$ are the Lagrange multipliers associated with the constraint $\lambda \geq 0$. We first sort the vector $R$, maintaining a map $f: \{1, 2, \dots, q\} \rightarrow \{1, 2, \dots, q\}$ that sends the sorted indices back to the original indices of $R$. Call the sorted vector $\tilde{R}$.
Then, from the above KKT conditions, it is straightforward to derive the following expression for $\lambda_i$: \begin{equation} \label{lambda_i} \lambda_i = \begin{cases} \tilde{R}_i - \mathbf{1}^\top \lambda, & i \geq \tau, \\ 0, & i < \tau, \end{cases} \qquad i = 1, 2, \dots, q, \end{equation} where \begin{equation} \label{eq:def_of_tau} \tau = \min \{i \ | \ \tilde{R}_i - \mathbf{1}^\top \lambda \geq 0\}. \end{equation} It follows that \begin{equation} \label{1_lambda} \begin{aligned} \mathbf{1}^\top \lambda = \sum_{i=\tau}^{q} (\tilde{R}_i - \mathbf{1}^\top \lambda) = \frac{1}{2 + q - \tau} \sum_{i=\tau}^{q} \tilde{R}_i, \end{aligned} \end{equation} and hence \begin{equation} c(\tau):= \tilde{R}_\tau - {\bf 1}^\top \lambda = \tilde{R}_\tau - \frac{1}{2 + q - \tau} \sum_{j=\tau}^{q} \tilde{R}_j. \end{equation} According to \eqref{eq:def_of_tau}, to find $\tau$ we only need to find the smallest value of $i$ that makes $c(i)$ nonnegative, that is, $\tau = \min \{ i \ | \ c(i) \geq 0 \}$. Therefore, by sorting the components of $R$ from small to large, and checking $c(i)$ for each component, from large $i$ to small $i$, we can obtain the desired index $\tau$. Combining equations (\ref{lambda_i}) and (\ref{1_lambda}) with $\tau$, we find a solution that equals $\lambda^*$, the solution to problem (\ref{proj_dual}), up to a permutation of its components. We then use the index map $f$ to undo the permutation introduced by sorting $R$. Finally, plugging $\lambda^*$ back into equation (\ref{proj-poly-var}) yields the desired solution to our problem (\ref{proj_poly}). The whole projection procedure runs in $\mathcal{O}(q \log q)$ time, the dominant cost being the sort of $R$. \section{More results using our algorithm}\label{sec:app:more_results} In this section, we use our fast projection algorithm to infer phylogenetic trees from mutation-frequency data. The idea is simple.
We scan all possible trees and, for each tree $T$, we project $\hat{F}$ onto the PPM for this $T$ using our fast projection algorithm. This gives us a projected $F$ and an $M$ such that $F = UM$, the columns of $M$ are in the probability simplex, and $\|\hat{F} - F\|$ is small. We then return the tree whose projection yields the smallest $\|\hat{F} - F\|$. Since all of these projections can be done in parallel, we assign the projections for different subsets of the set of all possible trees to different GPU cores. Since we are performing an exhaustive search over all possible trees, we can only infer small trees. As such, when dealing with real-size data, similar to several existing tools, we first cluster the rows of $\hat{F}$, and produce an ``effective'' $\hat{F}$ with a small number of rows. We infer a tree on this reduced input. Each node in our tree is thus associated with multiple mutated positions in the genome, and multiple mutants, depending on the clustering. We cluster the rows of $\hat{F}$ using $k$-means, as in \cite{malikic2015clonality}, and decide on the number of clusters, and hence the tree size, based on the same BIC procedure as in \cite{malikic2015clonality}. It is possible that other pre-clustering and tree-size-selection strategies yield better results. We call the resulting tool EXACT. We note that it is not our goal to show that the PPM is adequate for extracting phylogenetic trees from data. This adequacy, and its limits, are well documented in well-cited biology papers. Indeed, several papers provide open-source tools based on the PPM, and show their tools' good performance on data containing the frequencies of mutation per position in different samples, $\hat{F}$ in our paper.
A few such tools are PhyloSub \cite{jiao2014inferring}, AncesTree \cite{el2015reconstruction}, CITUP \cite{malikic2015clonality}, PhyloWGS \cite{deshwar2015phylowgs}, Canopy \cite{jiang2016assessing}, SPRUCE \cite{el2016inferring}, rec-BTP \cite{hajirasouliha2014combinatorial}, and LICHeE \cite{popic2015fast}. These papers also discuss the limitations of the PPM for inferring evolutionary trees, and others propose extensions to the PPM to capture more complex phenomena, see e.g., \cite{bonizzoni2014and}. It is important to further distinguish the focus of our paper from that of the papers cited above. In this paper, we start from the fact that the PPM is already being used to infer trees from $\hat{F}$, with substantiated success. However, all of the existing methods are heuristics, leaving room for improvement. We identify one subproblem that, if solved very fast, allows us to do exact PPM-based tree inference for problems of relevant biological sizes. It is this subproblem, the projection problem in Eq. (3), that is our focus. We introduce the first non-iterative algorithm to solve this projection problem, and show that it is 74$\times$ faster than several optimally-tuned iterative methods. We are also the first to show that a full-exact-enumeration approach to inferring $U$ and $M$ from $\hat{F}$ is possible, in our case, using a GPU and our algorithm to compute and compare the cost of all the possible trees that might explain the data $\hat{F}$. EXACT often outperforms the above tools, none of which does exact inference. Our paper is not about EXACT, whose development challenges and significance for biology go beyond solving our projection problem, and which is the focus of our future work. Despite this difference in purpose, in this section we compare the performance of inferring trees by a full exact search over the space of all possible PPM models with the performance of a few existing algorithms.
In Figure \ref{fig:app:more_results}, we compare EXACT, PhyloWGS, CITUP and AncesTree on recovering the correct ancestry relations on biological datasets also used by [3]. A total of $30$ different datasets \cite{ancestreedata}, i.e., $\hat{F}$, were tested. We use the default parameters in all of the algorithms tested. \begin{figure*} \caption{\small Comparison of different phylogenetic tree inference algorithms.} \label{fig:app:more_results} \end{figure*} In each test, and for every pair of mutations $i$ and $j$, we use the tree output by each tool to determine if (a) $i$ is an ancestor of $j$ or $j$ is an ancestor of $i$, (b) $i$ and $j$ are in the same node, (c) either $i$ or $j$ is missing in the tree, or, otherwise, (d) $i$ and $j$ are \emph{incomparable}. We give these four possible ancestral relations the following names: \emph{ancestral}, \emph{clustered}, \emph{missing}, and \emph{incomparable}. A random guess correctly identifies $25$\% of the ancestral categories, on average. If the fraction of misidentified relations is $0$, the output tree equals the ground-truth tree. All methods do better than random guessing. For example, in Figure \ref{fig:app:example_of_reconstruction}, according to EXACT, mutation $63$, at the root, is an ancestor of mutation $57$, at node $3$. However, according to the ground truth, in Figure \ref{fig:app:ground_truth_tree}, they belong to the same node. So, as far as comparing $63$ with $57$ goes, EXACT makes a mistake. As another example, according to EXACT, mutations $91$ and $55$ are incomparable, while according to the ground truth, $91$ is a descendant of $55$. Hence, as far as comparing $91$ with $55$ goes, EXACT makes another mistake. The fraction of errors, per ancestral relation error type, that each of these tools makes is: EXACT = $\{23\%,10\%,0\%,13\%\}$; PhyloWGS = $\{3\%,2\%,0\%,1\%\}$; AncesTree = $\{54\%,16\%,95\%,25\% \}$; CITUP = $\{27\%,13\%,0\%,21\%\}$.
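The four-way classification above can be sketched as follows (an illustrative sketch only; the \texttt{parent}/\texttt{node\_of} encodings are our own simplification, not the output format of any of the compared tools):

```python
def make_relation_classifier(parent, node_of):
    """Classify a pair of mutations (i, j) against a tree as
    'ancestral', 'clustered', 'missing', or 'incomparable'.
    `parent` maps each non-root node to its parent; `node_of` maps each
    mutation to its node (mutations absent from the tree are not in the dict)."""
    def ancestors(n):
        seen = set()
        while n in parent:
            n = parent[n]
            seen.add(n)
        return seen
    def relation(i, j):
        ni, nj = node_of.get(i), node_of.get(j)
        if ni is None or nj is None:
            return "missing"
        if ni == nj:
            return "clustered"
        if ni in ancestors(nj) or nj in ancestors(ni):
            return "ancestral"
        return "incomparable"
    return relation
```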
In our experiments, EXACT performs, on average, better than the other three methods. PhyloWGS performs close to EXACT; however, it has a much longer run time. Although AncesTree does fairly well in terms of accuracy, we observe that it often returns trees with the same topology, a star-shaped tree. The other methods produce trees whose topology seems to be more strongly linked to the input data. Finally, AncesTree's inferred tree does not cover all of the existing mutations. This behaviour is expected, as, by construction, AncesTree tries to find the largest tree that can be explained with the PPM. See Figure \ref{fig:app:example_of_reconstruction} and Figure \ref{fig:app:ground_truth_tree} for an example of the output produced by the different algorithms, and the corresponding ground truth. \begin{figure*} \caption{\small Tree reconstructed by different algorithms for the first file in the folder \cite{ancestreedata}.} \label{fig:app:example_of_reconstruction} \end{figure*} \begin{figure} \caption{\small Ground truth tree for the input file that generated Figure \ref{fig:app:example_of_reconstruction}.} \label{fig:app:ground_truth_tree} \end{figure} We end this section by discussing a few extra properties that distinguish an approach like EXACT from the existing tools. Because our algorithm's speed allows a complete enumeration of all of the trees, EXACT has two unique properties. First, EXACT can exactly solve \begin{equation}\label{eq:app:gen_likelihood} \min_{U \in \mathcal{U}} \mathcal{J}(\mathcal{C}(U)) + \mathcal{Q}(U), \end{equation} where $U$ encodes ancestral relations, $\mathcal{C}(U)$ is the fitness cost as defined in our paper, $\mathcal{J}$ is an arbitrary, fast-to-compute, 1D scaling function, and $\mathcal{Q}(U)$ is an arbitrary, fast-to-compute, tree-topology penalty function. No other tool has this flexibility. Second, EXACT can find the $k$ trees with the smallest objective value in \eqref{eq:app:gen_likelihood}.
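A minimal sketch of this exhaustive scoring, assuming a generic cost oracle (all names here are illustrative; in EXACT, \texttt{cost} would be the projection residual $\|\hat{F} - F\|$ computed by our algorithm):

```python
import heapq

def k_best_trees(trees, cost, J=lambda c: c, Q=lambda T: 0.0, k=3):
    # Score every candidate tree with J(C(U)) + Q(U) and keep the k best.
    # `trees` is any iterable of candidates; `cost` returns the fitness
    # cost C(U); the enumeration index breaks ties deterministically.
    scored = ((J(cost(T)) + Q(T), idx, T) for idx, T in enumerate(trees))
    return [T for _, _, T in heapq.nsmallest(k, scored)]
```

Because every candidate is scored, swapping in a different $\mathcal{J}$ or $\mathcal{Q}$, or asking for the $k$ best trees instead of the single best, changes nothing in the enumeration itself.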
A few existing tools can output multiple trees, but only when these all have the same ``heuristically-optimal'' objective value. This feature is very important because, given that the input data is noisy and the number of samples is often small, it allows one, e.g., to give a confidence score for the ancestry relations in the output tree. Furthermore, experiments show that the ground-truth tree can often be found among these $k$ best trees. Hence, using other biological principles, the ground-truth tree can often be identified from this set. Outputting only ``heuristically-optimal'' trees precludes this. \end{document}
\begin{document} \begin{abstract} We study the higher gradient integrability of distributional solutions $u$ to the equation $\operatorname{div}(\sigma \nabla u) = 0$ in dimension two, in the case when the essential range of $\sigma$ consists of only two elliptic matrices, i.e., $\sigma\in\{\sigma_1, \sigma_2\}$ a.e. in $\Omega$. In \cite{npp}, for every pair of elliptic matrices $\sigma_1$ and $\sigma_2$, exponents $p_{\sigma_1,\sigma_2}\in(2,+\infty)$ and $q_{\sigma_1,\sigma_2}\in (1,2)$ have been characterised so that if $u\in W^{1,q_{\sigma_1,\sigma_2}}(\Omega)$ is a solution to the elliptic equation then $\nabla u\in L^{p_{\sigma_1,\sigma_2}}_{\rm weak}(\Omega)$, and the optimality of the upper exponent $p_{\sigma_1,\sigma_2}$ has been proved. In this paper we complement the above result by proving the optimality of the lower exponent $q_{\sigma_1,\sigma_2}$. Precisely, we show that for every arbitrarily small $\delta$, one can find a particular microgeometry, i.e., an arrangement of the sets $\sigma^{-1}(\sigma_1)$ and $\sigma^{-1}(\sigma_2)$, for which there exists a solution $u$ to the corresponding elliptic equation such that $\nabla u \in L^{q_{\sigma_1,\sigma_2}-\delta}$, but $\nabla u \notin L^{q_{\sigma_1,\sigma_2}}$. The existence of such optimal microgeometries is achieved by convex integration methods, adapting the geometric constructions provided in \cite{afs} for the isotropic case to the present setting. \vskip .3truecm \noindent Keywords: Beltrami equation, elliptic equations, gradient integrability. \vskip.1truecm \noindent 2000 Mathematics Subject Classification: 30C62, 35B27.
\end{abstract} \maketitle \tableofcontents \section{Introduction}\label{introduction} \noindent Let $\Omega \subset \mathbb{R}^2$ be a bounded open domain and let $\sigma \in L^{\infty} (\Omega; \mathbb{R}^{2 \times 2})$ be uniformly elliptic, i.e., $$ \sigma \xi \cdot \xi \geq \lambda |\xi|^{2} \text{ for every } \xi \in \mathbb{R}^2 \text{ and for a.e. } x \in \Omega, $$ for some $\lambda >0$. We study the gradient integrability of distributional solutions $u\in W^{1,1}(\Omega)$ to \begin{equation} \label{the pde} \operatorname{div} (\sigma(x) \nabla u (x)) = 0 \quad \text{ in }\,\, \Omega, \end{equation} in the case when the essential range of $\sigma$ consists of only two matrices, say $\sigma_1$ and $\sigma_2$. It is well known from Astala's work \cite{a} that there exist exponents $q$ and $p$, with $1<q<2<p$, such that if $u\in W^{1,q}(\Omega ; \mathbb{R})$ is a solution to \eqref{the pde}, then $\nabla u\in L^{p}_{\rm weak}(\Omega ;\mathbb{R}^2)$. In \cite{npp} the optimal exponents $p$ and $q$ have been characterised for every pair of elliptic matrices $\sigma_1$ and $\sigma_2$. Denoting by $p_{\sigma_1,\sigma_2}$ and $q_{\sigma_1,\sigma_2}$ such exponents, whose precise formulas are recalled in Section \ref{formule}, we summarise the result of \cite{npp} in the following theorem. \begin{theorem}\cite[Theorem 1.4 and Proposition 4.2]{npp}\label{upper-optimality} Let $\sigma_1, \sigma_2 \in \mathbb{R}^{2 \times 2}$ be elliptic.
\begin{itemize} \item[i)] If $\sigma\in L^{\infty}(\Omega;\{\sigma_1,\sigma_2\})$ and $u\in W^{1,q_{\sigma_1,\sigma_2}}(\Omega)$ solves \eqref{the pde}, then $\nabla u\in L^{p_{\sigma_1,\sigma_2}}_{\rm weak}(\Omega;\mathbb{R}^2)$.\\[-2mm] \item[ii)] There exist $\bar\sigma\in L^{\infty}(\Omega;\{\sigma_1,\sigma_2\})$ and a weak solution $\bar{u}\in W^{1,2}(\Omega)$ to \eqref{the pde} with $\sigma=\bar\sigma$, satisfying affine boundary conditions and such that $\nabla \bar{u}\notin L^{p_{\sigma_1,\sigma_2}}(\Omega ;\mathbb{R}^2)$. \end{itemize} \end{theorem} Theorem \ref{upper-optimality} establishes the optimality of the upper exponent $p_{\sigma_1,\sigma_2}$. The objective of this paper is to complement this result by proving the optimality of the lower exponent $q_{\sigma_1,\sigma_2}$. As shown in \cite{npp} (and recalled in Section \ref{formule}), there is no loss of generality in assuming that \begin{equation}\label{speciali} \sigma_1 = \operatorname{diag} (1/K,1/S_1),\quad \sigma_2 = \operatorname{diag}(K,S_2), \end{equation} with \begin{equation} \label{ellipt real} K>1 \qquad \text{and} \qquad \frac{1}{K} \leq S_j \leq K \, , \quad j=1,2 \,. \end{equation} Thus it suffices to show optimality for this class of coefficients, for which the exponents $p_{\sigma_1,\sigma_2}$ and $q_{\sigma_1,\sigma_2}$ read as \begin{equation}\label{formula-pq} q_{\sigma_1,\sigma_2}= \frac{2K}{K+1}, \quad p_{\sigma_1,\sigma_2}= \frac{2K}{K-1}. \end{equation} Our main result is the following. \begin{theorem} \label{main theorem} Let $\sigma_1,\sigma_2 $ be defined by \eqref{speciali} for some $K>1$ and $S_1, S_2 \in [1/K , K]$.
There exist coefficients $\sigma_n \in L^{\infty}(\Omega,\{ \sigma_1, \sigma_2 \})$, exponents $p_n \in \left[1,\frac{2K}{K+1} \right]$, and functions $u_n \in W^{1,1} (\Omega;\mathbb{R})$ such that \begin{align} \label{pde3} &\begin{cases} \operatorname{div} (\sigma_n (x) \nabla u_n (x)) = 0 & \text{ in } \quad \Omega \,,\\ u_n (x) = x_1 & \text{ on } \quad \partial \Omega \,, \end{cases}\\ &\nabla u_n \in L^{p_n}_{\rm weak}(\Omega;\mathbb{R}^2), \quad p_n \to \frac{2K}{K+1}, \\ &\nabla u_n \notin L^{\frac{2K}{K+1}}(\Omega;\mathbb{R}^2). \end{align} In particular $u_n \in W^{1,q} (\Omega;\mathbb{R})$ for every $q < p_n$, but $\int_{\Omega} {|\nabla u_n|}^{\frac{2K}{K+1}} \, dx= \infty$. \end{theorem} Theorem \ref{main theorem} was proved in \cite{afs} in the case of isotropic coefficients, namely for $\sigma_1=\frac{1}{K} I$ and $\sigma_2= KI$. We follow the method developed in \cite{afs}, which relies on convex integration and provides an explicit construction of the sequence $u_n$. The adaptation of this method to the present context is definitely non-trivial due to the anisotropy of the coefficients. \section{Connection with the Beltrami equation and explicit formulas for the optimal exponents} \label{formule} For the reader's convenience, we recall in this section how to reduce to the case \eqref{speciali} starting from any pair $\sigma_1,\sigma_2 $. We also give the explicit formulas for $p_{\sigma_1,\sigma_2}$ and $q_{\sigma_1,\sigma_2}$. It is well known that a solution $u\in W^{1,q}_{loc}$, $q\geq 1$, to the elliptic equation \eqref{the pde} can be regarded as the real part of a complex map $f:\Omega\to\mathbb{C}$ which is a $W^{1,q}_{loc}$ solution to a {\it Beltrami equation}.
Precisely, if $v$ is such that \begin{equation}\label{stream} J^T \nabla v = \sigma \nabla u, \quad J := \left(\begin{array}{cc} 0&-1\\ 1&0 \end{array}\right), \end{equation} then $f:=u+iv$ solves the equation \begin{equation}\label{beltrami} f_{\bar{z}}=\mu \, f_{z}+ \nu \, \overline{ f_{z}}\quad \text{a.e. in }\Omega\,, \end{equation} where the so-called complex dilatations $\mu$ and $\nu$, both belonging to $L^{\infty}(\Omega;\mathbb C)$, are given by \begin{equation}\label{mu-nu(sigma)} \mu=\frac{\sigma_{22}-\sigma_{11}-i(\sigma_{12}+\sigma_{21})}{1+\operatorname{tr} \sigma+\det\sigma}\,, \quad \nu=\frac{1-\det\sigma+i(\sigma_{12}-\sigma_{21})}{1+\operatorname{tr} \sigma+\det\sigma}\,, \end{equation} \noindent and satisfy the ellipticity condition \begin{equation}\label{ellipticity-munu} \| |\mu|+|\nu| \|_{L^\infty}< 1 \,. \end{equation} The ellipticity condition \eqref{ellipticity-munu} is often expressed in a different form. Indeed, it implies that there exists $0\leq k<1$ such that $\| |\mu|+|\nu| \|_{L^\infty}\leq k< 1$, or equivalently that \begin{equation}\label{ellipticity-munu-K} \| |\mu|+|\nu| \|_{L^\infty}\leq \frac{K-1}{K+1} \,, \end{equation} for some $K>1$. Let us recall that weak solutions to \eqref{beltrami}, \eqref{ellipticity-munu-K} are called $K$-quasiregular mappings. Furthermore, we can express $\sigma$ as a function of $\mu, \, \nu$ by inverting the algebraic system \eqref{mu-nu(sigma)}: \begin{equation}\label{sigma(mu-nu)} \sigma = \left( \begin{array}{ll} \frac{|1-\mu|^2-|\nu|^2}{|1+\nu|^2-|\mu|^2} & \frac{2\Im(\nu-\mu)}{|1+\nu|^2-|\mu|^2}\\ \, & \, \\ \frac{-2\Im(\nu+\mu)}{|1+\nu|^2-|\mu|^2} & \frac{|1+\mu|^2-|\nu|^2}{|1+\nu|^2-|\mu|^2} \end{array} \right)\,.
\end{equation} Conversely, if $f$ solves \eqref{beltrami} with $\mu,\nu \in L^{\infty}(\Omega,\mathbb C)$ satisfying \eqref{ellipticity-munu}, then its real part is a solution to the elliptic equation \eqref{the pde} with $\sigma$ defined by \eqref{sigma(mu-nu)}. Notice that $\nabla f$ and $\nabla u$ enjoy the same integrability properties. Assume now that $\sigma:\Omega\to\{\sigma_1,\sigma_2\}$ is a two-phase elliptic coefficient and $f$ is a solution to \eqref{beltrami}-\eqref{mu-nu(sigma)}. Abusing notation, we identify $\Omega$ with a subset of $\mathbb{R}^2$ and $f=u+iv$ with the real mapping $f=(u,v):\Omega\to \mathbb{R}^2$. Then, as shown in \cite{npp}, one can find matrices $A,B\in SL_{\rm sym}(2)$ (with $ SL_{\rm sym}(2)$ denoting the set of symmetric matrices with determinant equal to one) depending only on $\sigma_1$ and $\sigma_2$, such that, setting \begin{equation}\label{trasformatedet} \tilde f(x):= A^{-1} f(Bx), \end{equation} one has that the function $\tilde f$ solves the new Beltrami equation \begin{equation*} \tilde f_{\bar{z}}=\tilde\mu \, \tilde f_{z}+ \tilde\nu \, \overline{ \tilde f_{z}} \quad \hbox{a.e. in } B(\Omega), \end{equation*} and the corresponding $\tilde\sigma: B(\Omega)\to \{\tilde\sigma_1,\tilde\sigma_2\}$ defined by \eqref{sigma(mu-nu)} is of the form \eqref{speciali}: \begin{equation*} \tilde\sigma_1 = \operatorname{diag} (1/K,1/S_1),\quad \tilde\sigma_2 = \operatorname{diag}(K,S_2), \quad K>1, \quad S_1,S_2 \in [1/K ,K]\,. \end{equation*} The results in \cite{a} and \cite{pv} imply that if $\tilde f\in W^{1,q}$, with $q\geq \frac{2K}{K+1}$, then $\nabla\tilde f\in L^{ \frac{2K}{K-1}}_{\rm weak}$; in particular, $\tilde f\in W^{1,p}$ for each $p< \frac{2K}{K-1}$. Clearly $\nabla\tilde f$ enjoys the same integrability properties as $\nabla f$ and $\nabla u$.
Finally, we recall the formula for $K$ which yields the optimal exponents. Denote by $d_1$ and $d_2$ the determinants of the symmetric parts of $\sigma_1$ and $\sigma_2 $ respectively, $$ d_i:=\det\Big(\frac{\sigma_i + \sigma_i^T}{2}\Big) \,, \quad i=1,2 \,, $$ and by $(\sigma_i)_{jk}$ the $jk$-entry of $\sigma_i$. Set \begin{align*} m: &= \frac{1}{\sqrt{d_1 d_2}}\left[(\sigma_2)_{11} (\sigma_1)_{22} + (\sigma_1)_{11} (\sigma_2)_{22} - \frac{1}{2} \Big((\sigma_2)_{12}+(\sigma_2)_{21}\Big) \Big((\sigma_1)_{12}+(\sigma_1)_{21}\Big) \right] \,, \\ n: &= \frac{1}{\sqrt{d_1 d_2}} \left[ \det\sigma_1 + \det\sigma_2 -\frac{1}{2} \Big((\sigma_1)_{21} - (\sigma_1)_{12}\Big) \Big((\sigma_2)_{21} - (\sigma_2)_{12}\Big) \right] \,. \end{align*} Then \begin{equation}\label{formula-K} K= \left(\frac{m + \sqrt{m^2 - 4}}{2} \right)^{\frac{1}{2}}\left( \frac{n + \sqrt{n^2 -4}}{2}\right)^{\frac{1}{2}}. \end{equation} Thus, for any pair of elliptic matrices $\sigma_1,\sigma_2 \in\mathbb{R}^{2 \times 2}$, the explicit formulas for the optimal exponents $p_{\sigma_1,\sigma_2}$ and $q_{\sigma_1,\sigma_2}$ are obtained by plugging \eqref{formula-K} into \eqref{formula-pq}. \section{Preliminaries} \subsection{Conformal coordinates} For every real matrix $A \in \mathbb{R}^{2 \times 2}$, \[ A= \left(\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{matrix} \right), \] we write $A=(a_+, a_-)$, where $a_+, a_- \in \mathbb{C}$ denote its conformal coordinates. By identifying any vector $v=(x,y) \in \mathbb{R}^2$ with the complex number $v=x + i y $, conformal coordinates are defined by the identity \begin{equation} \label{conf coord} Av = a_+ v + a_- \overline{v} \,. \end{equation} Here $\overline{v}$ denotes complex conjugation.
From \eqref{conf coord} we have the relations \begin{equation} \label{real to conf} a_+ = \frac{a_{11}+a_{22}}{2} + i \, \frac{a_{21}-a_{12}}{2}\,, \qquad \qquad a_- = \frac{a_{11}-a_{22}}{2} + i \, \frac{a_{21}+a_{12}}{2}\,, \end{equation} and, conversely, \begin{equation} \label{conf to real} \begin{aligned} a_{11} & = \Re a_+ + \Re a_- \,, \qquad \qquad a_{12} = - \Im a_+ + \Im a_- \,, \\ a_{21} & = \Im a_+ + \Im a_- \,, \qquad \qquad a_{22} = \Re a_+ - \Re a_- \,. \end{aligned} \end{equation} Here $\Re z$ and $ \Im z$ denote the real and imaginary parts of $z \in \mathbb{C}$ respectively. We recall that \begin{equation} \label{multiplication} AB= (a_+ b_+ + a_- \overline{b}_-, a_+ b_- + a_- \overline{b}_+)\,, \end{equation} and $\operatorname{tr} A = 2 \Re a_+$. Moreover \begin{equation} \label{formulae} \begin{gathered} \det (A) = |a_+|^2 - |a_-|^2\,, \\ |A|^2 = 2 |a_+|^2 + 2 |a_-|^2 \,, \\ \|A\| = |a_+| + |a_-|\,, \end{gathered} \end{equation} where $|A|$ and $\|A\|$ denote the Hilbert-Schmidt and the operator norm, respectively. We also define the second complex dilatation of the map $A$ as \begin{equation} \label{second dilatation} \mu_A := \frac{a_-}{\overline{a}_+} \,, \end{equation} and the distortion \begin{equation} \label{distortion} K(A) := \left| \frac{1 + |\mu_A|}{1-|\mu_A|} \right|= \frac{\|A\|^2}{|\det (A)|} \,. \end{equation} The last two quantities measure how far $A$ is from being conformal. Following the notation introduced in \cite{afs}, we define \begin{equation} \label{conformal set} E_{\Delta} := \{A= (a,\mu \, \overline{a}) \, \colon \, a \in \mathbb{C} , \, \mu \in \Delta \} \end{equation} for a set $\Delta \subset \mathbb{C} \cup \{ \infty \}$; namely, $E_{\Delta}$ is the set of matrices whose second complex dilatation belongs to $\Delta$.
In particular $E_0$ and $E_{\infty}$ denote the sets of conformal and anti-conformal matrices respectively. From \eqref{multiplication} we have that $E_{\Delta}$ is invariant under precomposition by conformal matrices, that is, \begin{equation} \label{conformal invariance} E_{\Delta} =E_{\Delta} A \qquad \text{for every} \qquad A \in E_0 \smallsetminus \{0\} \,. \end{equation} \subsection{Convex integration tools} We denote by $\mathcal{M}(\mathbb{R}^{2 \times 2})$ the set of signed Radon measures on $\mathbb{R}^{2 \times 2}$ having finite mass. By Riesz's representation theorem we can identify $\mathcal{M}(\mathbb{R}^{2 \times 2})$ with the dual of the space $C_0 (\mathbb{R}^{2 \times 2})$. Given $\nu \in \mathcal{M}(\mathbb{R}^{2 \times 2})$ we define its \textit{barycenter} as \[ \overline{\nu} := \int_{\mathbb{R}^{2 \times 2}} A \, d\nu(A) \,. \] We say that a map $f \in C(\overline{\Omega}; \mathbb{R}^2)$ is \textit{piecewise affine} if there exists a countable family of pairwise disjoint open subsets $\Omega_i \subset \Omega$ with $|\partial \Omega_i|=0$ and \[ \Big| \Omega \smallsetminus \bigcup_{i=1}^{\infty} \Omega_i \Big| =0 \,, \] such that $f$ is affine on each $\Omega_i$. Two matrices $A , B \in \mathbb{R}^{2 \times 2}$ such that $\operatorname{rank} (B-A)=1$ are said to be \textit{rank-one connected}, and the measure $\lambda \delta_A + (1- \lambda) \delta_B \in \mathcal{M}(\mathbb{R}^{2 \times 2})$ with $\lambda \in [0,1]$ is called a \textit{laminate of first order}.
\begin{definition} The family of \textit{laminates of finite order} $\mathcal{L}(\mathbb{R}^{2 \times 2})$ is the smallest family of probability measures in $\mathcal{M}(\mathbb{R}^{2 \times 2})$ satisfying the following conditions: \begin{enumerate}[\indent (i)] \item $\delta_A \in \mathcal{L}(\mathbb{R}^{2 \times 2})$ for every $A \in \mathbb{R}^{2 \times 2} \,$; \item assume that $\sum_{i=1}^N \lambda_i \delta_{A_i} \in \mathcal{L}(\mathbb{R}^{2 \times 2})$ and $A_1=\lambda B + (1-\lambda)C$ with $\lambda \in [0,1]$ and $\operatorname{rank}(B-C)=1$. Then the probability measure \[ \lambda_1 (\lambda \delta_B + (1-\lambda ) \delta_C) + \sum_{i=2}^N \lambda_i \delta_{A_i} \] is also contained in $\mathcal{L}(\mathbb{R}^{2 \times 2})$. \end{enumerate} \end{definition} The process of obtaining new measures via (ii) is called \textit{splitting}. The following proposition provides a fundamental tool to solve differential inclusions by means of convex integration (see e.g. \cite[Proposition 2.3]{afs} for a proof). \begin{proposition} \label{gradienti} Let $\nu = \sum_{i=1}^N \alpha_i \delta_{A_i} \in \mathcal{L}(\mathbb{R}^{2 \times 2})$ be a laminate of finite order with barycenter $\overline{\nu}=A$, that is, $A= \sum_{i=1}^N \alpha_i A_i$ with $\sum_{i=1}^N \alpha_i=1$. Let $\Omega \subset \mathbb{R}^2$ be a bounded open set, $\alpha\in (0,1)$ and $0<\delta < \min |A_i-A_j|/2$. Then there exists a piecewise affine Lipschitz map $f \colon \Omega \to \mathbb{R}^2$ such that \begin{enumerate}[\indent (i)] \item $f(x)=Ax$ on $\partial \Omega$, \item $\| f-A \|_{C^{\alpha}(\overline{\Omega})} < \delta$ , \item $| \{ x \in \Omega \, \colon \, |\nabla f (x) - A_i| < \delta \} | = \alpha_i |\Omega|$, \item $\operatorname{dist} (\nabla f (x), \operatorname{supp} \nu) < \delta\,$ a.e. in $\Omega$. \end{enumerate} \end{proposition} \subsection{Weak $L^p$ spaces} We recall the definition of the weak $L^{p}$ spaces.
Let $f \colon \Omega \to \mathbb{R}^2$ be a Lebesgue measurable function. Define the distribution function of $f$ as \[ \lambda_f \colon (0,\infty) \to [0,\infty] \quad \text{with} \quad \lambda_f (t) := | \{ x \in \Omega \, \colon \, |f(x)|>t \} | \,. \] For $1\leq p<\infty$ the following formula holds: \begin{equation} \label{cavalieri} \int_{\Omega} |f(x)|^p \, dx = p \int_0^\infty t^{p-1} \lambda_f (t) \, dt \,. \end{equation} Define the quantity \[ [f]_p :={ \left( \sup_{t>0} \, t^p \lambda_f (t) \right) }^{1/p} \] and the weak $L^p$ space as \[ L^p_{\rm weak} (\Omega; \mathbb{R}^2) := \left\{ f \colon \Omega \to \mathbb{R}^2 \, \colon \, f \, \text{ measurable}, \, [f]_p <\infty \right\} \,. \] $L^p_{\rm weak}$ is a topological vector space, and by Chebyshev's inequality we have $[f]_p \leq \|f\|_{L^p}$. In particular this implies $L^p \subset L^p_{\rm weak}$. \section{Proof of Theorem \ref{main theorem}} For the rest of this paper, $\sigma_1$ and $\sigma_2$ are as in \eqref{speciali}-\eqref{ellipt real}. We start by rewriting \eqref{the pde} as a differential inclusion. To this end, define the sets \begin{equation} \label{real targets} T_1 := \left\{ \left( \begin{matrix} x & -y \\ S_1^{-1}\, y & K^{-1}\,x \end{matrix} \right) \, \colon \, x,y \in \mathbb{R} \right\} \,, \qquad T_2 := \left\{ \left( \begin{matrix} x & -y \\ S_2 \, y & K \, x \end{matrix} \right) \, \colon \, x,y \in \mathbb{R} \right\} \,. \end{equation} Let $\sigma \in L^{\infty}(\Omega;\{\sigma_1,\sigma_2\})$. It is easy to check (see for example \cite[Lemma 3.2]{afs}) that $u$ solves \eqref{the pde} if and only if $f$ solves the differential inclusion \begin{equation} \label{differential inclusion} \nabla f (x) \in T_1 \cup T_2 \quad \text{a.e.
in } \,\, \Omega \,, \end{equation} where $f:=(u,v)$ and $v$ is the stream function of $u$, which is defined, up to an additive constant, by \eqref{stream}. In order to solve the differential inclusion \eqref{differential inclusion}, it is convenient to use \eqref{real to conf} and write our target sets in conformal coordinates: \begin{equation} \label{conf targets} T_1 = \left\{ \left( a, d_1 (\overline{a}) \right) \, \colon \, a \in \mathbb{C} \right\}, \qquad T_2 = \left\{ \left(a, -d_2 (\overline{a}) \right) \, \colon \, a \in \mathbb{C} \right\} , \end{equation} where the operators $d_j \colon \mathbb{C} \to \mathbb{C}$ are defined as \begin{equation} \label{k s} d_j (a) := k \, \Re a + i \, s_j \, \Im a \,, \quad \text{with} \quad k:= \frac{K-1}{K+1} \quad \text{and} \quad s_j := \frac{S_j-1}{S_j+1} \,. \end{equation} Conditions \eqref{ellipt real} imply \begin{equation} \label{ellipt conf} 0<k<1 \quad \text{and} \quad -k \leq s_j \leq k \quad \text{for} \quad j=1,2 \,. \end{equation} Introduce the quantities \begin{gather} s:=\frac{s_1+s_2}{2} = \frac{S_1 S_2 -1}{(1+S_1)(1+S_2)} \label{little s} \\ S := \frac{1+s}{1-s} = \frac{S_1 + S_2 + 2 S_1 S_2}{2 + S_1 + S_2} \label{def S} \,. \end{gather} By \eqref{ellipt conf} we have \begin{equation} \label{average bounds} -k \leq s \leq k \qquad \text{and} \qquad \frac{1}{K} \leq S \leq K \,. \end{equation} We distinguish three cases. \paragraph{\indent \textit{1. Case $s > 0$} (corresponding to $S>1$).} We study this case in Section \ref{sec:convex}, where we generalise the methods used in \cite[Section 3.2]{afs}. Observe that this case includes the one studied in \cite{afs}.
Indeed, for $s=k$ one has that $s_1=s_2=k$ and the target sets \eqref{conf targets} become \[ T_1 = E_k= \left\{ \left( a, k \conj{a} \right) \, \colon \, a \in \mathbb{C} \right\}, \qquad T_2 = E_{-k}= \left\{ \left(a, -k \conj{a} \right) \, \colon \, a \in \mathbb{C} \right\} , \] where $E_{\pm k}$ are defined in \eqref{conformal set}. We remark that, in this particular case, the construction provided in Section \ref{sec:convex} coincides with the one given in \cite[Section 3.2]{afs}. \paragraph{\indent \textit{2. Case $s < 0$} (corresponding to $S<1$).} This case can be reduced to the previous one. Indeed, if we introduce $\hat{s}_j:= - s_j$, $\hat{s}:=(\hat{s}_1+ \hat{s}_2 )/2 > 0$ and the operators $\hat{d}_j (a) := k \, \Re a + i \, \hat{s}_j \, \Im a$, then the target sets \eqref{conf targets} read as \[ T_1 = \{ ( a, \hat{d}_1 (a) ) \, \colon \, a \in \mathbb{C} \}, \qquad T_2 = \{ (a, -\hat{d}_2 (a) ) \, \colon \, a \in \mathbb{C} \} . \] This is the same as the previous case, since the absence of the conjugation does not affect the geometric properties relevant to the constructions of Section \ref{sec:convex}. We notice that this case includes $s=-k$, for which the target sets become \[ T_1 = \left\{ \left( a, k a \right) \, \colon \, a \in \mathbb{C} \right\} \,, \qquad T_2 = \left\{ \left(a, -k a \right) \, \colon \, a \in \mathbb{C} \right\} . \] We remark that in this case \eqref{differential inclusion} coincides with the classical Beltrami equation (see also \cite[Remark 3.21]{afs}). \paragraph{\indent \textit{3. Case $s = 0$} (corresponding to $s_1=-s_2$, $S_1=1/S_2$).} This is a degenerate case, in the sense that the constructions provided in Section \ref{sec:convex} for $s>0$ are not well defined. Nonetheless, Theorem \ref{main theorem} still holds true.
In fact, as already pointed out in \cite[Section A.3]{npp}, by an affine change of variables, the existence of a solution can be deduced from \cite[Lemma 4.1, Theorem 4.14]{afs}, where the authors prove the optimality of the lower critical exponent $\frac{2K}{K+1}$ for the solution of a system in non-divergence form. We remark that in this case Theorem \ref{main theorem} actually holds in the stronger sense of exact solutions, namely, there exists a solution $u \in W^{1,1} (\Omega;\mathbb{R})$ to \eqref{pde3} such that \[ \nabla u \in L^{\frac{2K}{K+1}}_{\rm weak} (\Omega; \mathbb{R}^2) \,, \quad \nabla u \notin L^{\frac{2K}{K+1}}(\Omega;\mathbb{R}^2) \,. \] \section{The case $s>0$} \label{sec:convex} In the present section we prove Theorem \ref{main theorem} under the hypothesis that the average $s$ is positive, namely that \begin{equation} \label{case2} \begin{aligned} & 0<k<1 \,\, \text{ and } \,\, -s_2< s_1 \leq s_2\,, \,\, \text{ with } \,\, 0< s_2 \leq k\,, \text{ or } \\ & 0<k<1 \,\, \text{ and } \,\, -s_1< s_2 \leq s_1\,, \,\, \text{ with } \,\, 0< s_1 \leq k\,. \end{aligned} \end{equation} From \eqref{case2}, recalling definitions \eqref{k s}, \eqref{little s}, \eqref{def S}, we have \begin{gather} 0<s \leq k \, , \qquad 1 < S \leq K \,, \label{average} \\ 1/S_2 < S_1 \leq S_2 \,, \quad 1< S_2 \leq K \,,\quad \text{ or } \quad 1/S_1 < S_2 \leq S_1 \,, \quad 1< S_1 \leq K\,. \label{K bounds} \end{gather} In order to prove Theorem \ref{main theorem}, we will solve the differential inclusion \eqref{differential inclusion} by adapting the convex integration program developed in \cite[Section 3.2]{afs} to the present context.
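The algebraic identities \eqref{little s}-\eqref{def S} and the bounds \eqref{average} can be double-checked numerically. The following short script (an illustrative sanity check, not part of the proof; all variable names are ours) samples pairs $(S_1,S_2)$ satisfying one branch of \eqref{case2}-\eqref{K bounds} and verifies the closed-form expressions for $s$ and $S$:

```python
from random import uniform, seed

seed(0)
K = 4.0
k = (K - 1) / (K + 1)

for _ in range(1000):
    # sample S_2 in (1, K] and S_1 in (1/S_2, S_2]: one branch of (K bounds)
    S2 = uniform(1.0 + 1e-6, K)
    S1 = uniform(1.0 / S2 + 1e-6, S2)
    # s_j = (S_j - 1)/(S_j + 1) as in (k s)
    s1, s2 = (S1 - 1) / (S1 + 1), (S2 - 1) / (S2 + 1)
    s = (s1 + s2) / 2
    # identity (little s):  s = (S1*S2 - 1) / ((1+S1)(1+S2))
    assert abs(s - (S1 * S2 - 1) / ((1 + S1) * (1 + S2))) < 1e-12
    # identity (def S):  S = (1+s)/(1-s) = (S1 + S2 + 2*S1*S2) / (2 + S1 + S2)
    S = (1 + s) / (1 - s)
    assert abs(S - (S1 + S2 + 2 * S1 * S2) / (2 + S1 + S2)) < 1e-10
    # bounds (average):  0 < s <= k  and  1 < S <= K
    assert 0 < s <= k and 1 < S <= K
```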
As already pointed out in the Introduction, the anisotropy of the coefficients $\sigma_1,\sigma_2$ poses some technical difficulties in the construction of the so-called staircase laminate, which is needed to obtain the desired approximate solutions. In fact, the anisotropy of $\sigma_1,\sigma_2$ translates into the lack of conformal invariance (in the sense of \eqref{conformal invariance}) of the target sets \eqref{conf targets}, while the constructions provided in \cite{afs} rely heavily on the conformal invariance of the target set $E_{\{-k,k\}}$. We point out that the lack of conformal invariance was a source of difficulty in \cite{npp} as well, for the proof of the optimality of the upper exponent. This section is organised as follows. In Section \ref{sec:rank one} we establish some geometrical properties of rank-one lines in $\mathbb{R}^{2 \times 2}$, which will be used in Section \ref{sec:weak} for the construction of the staircase laminate. For every sufficiently small $\delta>0$, this laminate allows us to define (in Proposition \ref{prop:grad}) a \textit{piecewise affine} map $f$ that solves the differential inclusion \eqref{differential inclusion} up to an arbitrarily small $L^{\infty}$ error. Moreover, $f$ will have the desired integrability properties (see \eqref{crescita}), that is, \[ \nabla f \in L^p_{\rm weak}(\Omega; \mathbb{R}^{2 \times 2}) \,, \quad p \in \left( \frac{2K}{K+1}-\delta ,\frac{2K}{K+1} \right] \,, \quad \nabla f \notin L^{\frac{2K}{K+1}}(\Omega;\mathbb{R}^{2 \times 2})\,. \] Finally, in Theorem \ref{thm finale}, we remove the $L^{\infty}$ error introduced in Proposition \ref{prop:grad} by means of a standard argument (see, e.g., \cite[Theorem A.2]{npp}). Throughout this section, $c_K >1$ will denote various constants depending on $K,S_1$ and $S_2$, whose precise value may change from place to place.
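The borderline integrability $\nabla f \in L^{\frac{2K}{K+1}}_{\rm weak} \smallsetminus L^{\frac{2K}{K+1}}$ can be illustrated on a one-dimensional model: on $\Omega=(0,1)$ the function $f(x)=x^{-1/p}$ has distribution function $\lambda_f(t)=\min(1,t^{-p})$, hence $[f]_p=1$ while $\int_\Omega |f|^p\,dx=\infty$, so $f \in L^p_{\rm weak}\smallsetminus L^p$. The script below (a hypothetical numerical check of ours, not part of the argument) verifies this and Cavalieri's formula \eqref{cavalieri} for a truncation of $f$:

```python
import math

K = 3.0
p = 2 * K / (K + 1)          # critical exponent 2K/(K+1); equals 1.5 for K = 3

# Model function on Omega = (0,1):  f(x) = x**(-1/p),
# whose distribution function is lambda_f(t) = min(1, t**(-p)).
def lam_f(t):
    return min(1.0, t ** (-p))

# [f]_p**p = sup_t t**p * lambda_f(t) = 1, so f lies in weak L^p ...
sup_tp_lam = max((0.05 * i) ** p * lam_f(0.05 * i) for i in range(1, 1000))
assert abs(sup_tp_lam - 1.0) < 1e-9

# ... but f is not in L^p: int_eps^1 f(x)**p dx = -log(eps) blows up.
assert -math.log(1e-12) > 25

# Cavalieri's formula for the truncation g = min(f, M), whose distribution
# function is lam_f(t) for t < M and 0 for t >= M:
M = 10.0
lhs = 1.0 + p * math.log(M)   # exact value of int_0^1 g(x)**p dx
# right-hand side p * int_0^M t**(p-1) * lambda_g(t) dt by the midpoint rule
N = 200000
h = M / N
rhs = sum(p * ((j + 0.5) * h) ** (p - 1) * lam_f((j + 0.5) * h) * h
          for j in range(N))
assert abs(lhs - rhs) < 1e-3
```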
The complex conjugation is denoted by $J:=(0,1)$ in conformal coordinates, i.e., $Jz= \conj{z}$ for $z \in \mathbb{C}$. Moreover, $R_\theta:=(e^{i\theta},0) \in SO(2)$ denotes the counterclockwise rotation of angle $\theta \in (-\pi,\pi]$. Define the argument function \[ \arg z := \theta \,, \quad \text{where} \quad z=|z|e^{i \theta} \,, \quad \text{with} \quad \theta \in (-\pi,\pi] \,. \] Abusing notation, we write $\arg R_\theta=\theta$. For $A=(a,b) \in \mathbb{R}^{2 \times 2} \setminus \{0\}$ we set \begin{equation} \label{theta A} \theta_A :=- \arg (b-d_1 (\conj{a})) \,. \end{equation} \subsection{Properties of rank-one lines} \label{sec:rank one} In this section we establish some geometrical properties of rank-one lines in $\mathbb{R}^{2 \times 2}$. Lemmas \ref{lemma1}, \ref{lemma2} are generalisations of \cite[Lemmas 3.14, 3.15]{afs} to our target sets \eqref{conf targets}. In Lemmas \ref{lemma4}, \ref{lemma5} we study certain rank-one lines connecting $T$ to $E_\infty$, which will be used in Section \ref{sec:weak} to construct the staircase laminate. \begin{lemma} \label{lemma: proprieta T} Let $Q \in T_j$ with $j \in \{1,2\}$ and $T_j$ as in \eqref{conf targets}. Then \begin{gather} \det Q > 0 \quad \text{ for } \quad Q \neq 0 \, , \label{positive det} \\ \va{s_j} \leq \va{\mu_Q} \leq k \,, \label{defomation2}\\ \max\{S_j,1/S_j \} \leq K(Q) \leq K\,. \label{distortion2} \end{gather} \end{lemma} \begin{proof} Let $Q=(q,d_1 (\conj{q})) \in T_1$. By \eqref{ellipt conf} we have $|s_1| |q| \leq | d_1(q) | \leq k |q|$, which readily implies \eqref{defomation2} and \[ (1-k^2) \va{q}^2 \leq \det (Q) \leq (1-s_1^2) \va{q}^2 \,. \] The last inequality implies \eqref{positive det}. Finally, $K(Q)$ is increasing with respect to $|\mu_{Q}|\in(0,1)$, therefore \eqref{distortion2} follows from \eqref{defomation2}. The proof is analogous if $Q \in T_2$.
\end{proof} \begin{lemma}\label{lemma1} Let $A,B \in \mathbb{R}^{2 \times 2}$ with $\det B \neq 0$ and $\det (B-A)=0$, then \begin{equation} \va{B} \leq \sqrt{2} \, K(B) \va{A} . \end{equation} In particular, if $A \in \mathbb{R}^{2 \times 2}$ and $Q \in T_j$, $j\in\{1,2\}$, are such that $\det(A - Q) =0$, then \[ \dist (A,T_j) \leq \va{A-Q} \leq (1 + \sqrt{2} K) \dist (A,T_j) \,. \] \end{lemma} \begin{proof} The first part of the statement is proved exactly as in \cite[Lemma 3.14]{afs}. For the second part, one can easily adapt the proof of \cite[Lemma 3.14]{afs} to the present context, taking into account \eqref{positive det} and \eqref{distortion2}. For the reader's convenience we recall the argument. Let $A \in \mathbb{R}^{2 \times 2}$, $Q \in T_1$ and let $Q_0 \in T_1$ be such that $\dist (A,T_1)=|A-Q_0|$. By \eqref{positive det}, we can apply the first part of the lemma to $A-Q_0$ and $Q-Q_0$ to get \[ |Q-Q_0| \leq \sqrt{2} K(Q-Q_0) |A-Q_0| \leq \sqrt{2} K |A-Q_0| \,, \] where the last inequality follows from \eqref{distortion2}, since $Q-Q_0 \in T_1$. Therefore \[ |A-Q| \leq |A-Q_0| + |Q-Q_0| \leq (1+ \sqrt{2} K) |A-Q_0| = (1+ \sqrt{2} K) \dist(A,T_1) \,. \] The proof for $T_2$ is analogous. \end{proof} \begin{lemma}\label{lemma2} Every $A= (a,b) \in \mathbb{R}^{2 \times 2} \smallsetminus \{0\}$ lies on a rank-one segment connecting $T_1$ and $E_{\infty}$. Precisely, there exist matrices $Q \in T_1 \smallsetminus \{ 0\}$ and $P \in E_{\infty} \smallsetminus \{ 0\}$, with $\det (P-Q)=0$, such that $A \in [Q,P]$. We have $P=tJR_{\theta_A}$ for some $t>0$ and $\theta_A$ as in \eqref{theta A}. Moreover, there exists a constant $c_K >1$, depending only on $K,S_1,S_2$, such that \begin{equation} \label{estimates2} \frac{1}{c_K} \va{A} \leq \va{P-Q}, \va{P}, \va{Q} \leq c_K \va{A} \,.
\end{equation} \end{lemma} \begin{proof} The proof can be deduced straightforwardly from that of \cite[Lemma 3.15]{afs}. We decompose any $A=(a,b)$ as \[ A= (a,d_1(\conj{a})) + \frac{1}{t} (0, tb- t d_1 (\conj{a})) = Q + \frac{1}{t} P_t\,, \] with $Q \in T_1$ and $P_t \in E_{\infty}$. The matrices $Q$ and $P_t$ are rank-one connected if and only if $\va{a}=\va{d_1 (\conj{a}) + t (b - d_1 (\conj{a})) }$. Since $\det Q > 0$ for $Q \neq 0$, it is easy to see that there exists only one $t_0>0$ such that the last identity is satisfied. We then set $\rho:=1+1/t_0$ so that \[ A=\frac{1}{\rho} (\rho \, Q) + \frac{1}{t_0 \rho} (\rho \, P_{t_0}) \,. \] The latter is the desired decomposition, since $\rho \, Q \in T_1$, $\rho P_{t_0} \in E_{\infty}$ are rank-one connected, $\rho>0$ and $\rho^{-1} + (t_0 \rho)^{-1}=1$. Also notice that $\rho P_{t_0} = \rho t_0 |b-d_1(\conj{a})| J R_{\theta_A}$, as stated. Finally, let us prove \eqref{estimates2}. Remark that \[ \dist (A,T_1) + \dist (A,E_\infty) \leq |A-P| + |A-Q| = |P-Q| \,. \] By the linear independence of $T_1$ and $E_{\infty}$, we get \[ \frac{1}{c_K} |A| \leq |P-Q| \,. \] Using Lemma \ref{lemma1}, \eqref{positive det} and \eqref{distortion2} we obtain \[ |P| \leq c_K |A| , \quad |Q| \leq c_K |A|, \quad |Q| \leq c_K |P| , \quad |P| \leq c_K |Q|. \] By the triangle inequality, \[ |P-Q| \leq |P| + |Q| \leq (1+c_K) \min ( |P|,|Q| ), \] and \eqref{estimates2} follows. \end{proof} We now turn our attention to the study of rank-one connections between the target set $T$ and $E_{\infty}$. \begin{lemma} \label{lemma4} Let $R=(r,0)$ with $\va{r}=1$ and $a \in \mathbb{C} \smallsetminus \{0\}$.
For $j \in \{1,2\}$ define \begin{gather} Q_1 (a) :=\lambda_1 (a,d_1 (\conj{a})) \in T_1 \,, \quad Q_2 (a) := \lambda_2 (-a,d_2 (\conj{a})) \in T_2 \,, \notag \\ \lambda_j (a) := \frac{1}{ \sqrt{B_j^2 (a) + A_j (a)} + B_j (a)} \,, \label{lambda} \\ \begin{cases} A_j (a) := \det (a,d_j (a)) = \va{a}^2 - \va{d_j (a)}^2 \,, \\ B_j (a) := \Re \, (\conj{r} \, d_j (a)) \,. \end{cases} \label{coefficienti} \end{gather} Then $\lambda_j >0$, $A_j >0$ and $\det(Q_j-JR)=0$. Moreover, there exists a constant $c_K>1$ depending only on $K,S_1,S_2$ such that \begin{equation} \label{stima Q} \frac{1}{c_K} \leq \va{Q_j (a)} \leq c_K \,, \end{equation} for every $a \in \mathbb{C} \smallsetminus \{0\}$ and $R \in SO(2)$. \end{lemma} \begin{proof} Condition $\det(Q_j-JR)=0$ is equivalent to $|\lambda_j a |= | \lambda_j d_j (\conj{a}) - \conj{r} |$, that is, \begin{equation} \label{secondo grado} A_j (a) \lambda_j^2 + 2 B_j (a) \lambda_j - 1 = 0 \end{equation} with $A_j,B_j$ defined by \eqref{coefficienti}. Notice that $A_j >0$ by \eqref{positive det}. Therefore $\lambda_j$ defined in \eqref{lambda} solves \eqref{secondo grado} and satisfies $\lambda_j >0$. We will now prove \eqref{stima Q}. Since $a \neq 0$, we can write $a= t \omega$ for some $t>0$ and $\omega \in \mathbb{C}$, with $\va{\omega}=1$. We have $A_j(a)= t^2 A_j (\omega)$ and $B_j(a)= t B_j (\omega)$, so that $\lambda_j(a) = \lambda_j (\omega)/t$. Hence \begin{equation} \label{eq Qi} Q_1 (a) = \lambda_1 (\omega) (\omega,d_1(\conj{\omega})) \,, \quad Q_2 (a) = \lambda_2 (\omega) (-\omega,d_2(\conj{\omega}))\,.
\end{equation} Since $\lambda_j$ is continuous and positive in $(\mathbb{C} \smallsetminus \{0\}) \times SO(2)$, \eqref{stima Q} follows from \eqref{eq Qi}. \end{proof} \textbf{Notation.} Let $\theta \in (-\pi,\pi]$. For $R_\theta=(e^{i\theta},0) \in SO(2)$, define $x:= \cos \theta, y:= \sin \theta$ and \begin{equation} \label{scelta di a} a(R_\theta):= \frac{x}{k} + i \, \frac{y}{s} \,, \end{equation} where $s$ is defined in \eqref{little s}. Identifying $SO(2)$ with the interval $(-\pi,\pi]$, for $j=1,2$, we introduce the function \begin{equation} \label{def lambda i} \lambda_j \colon (-\pi,\pi] \to (0,+\infty) \qquad \text{defined by} \qquad \lambda_j(R_\theta):=\lambda_j (a(R_\theta)) \end{equation} with $\lambda_j (a(R_\theta))$ as in \eqref{lambda}. Furthermore, for $n \in \mathbb{N}$ set \begin{equation} \label{L h} \begin{gathered} M_j (R_\theta) := \frac{\lambda_j }{\displaystyle \frac{\lambda_1 + \lambda_2}{2} - \lambda_1 \lambda_2} \,, \quad l(R_\theta):= \frac{M_1 + M_2}{2} -1 \,, \quad m := \min_{ \theta \in (-\pi,\pi]} \frac{ M_2}{2-M_2} \\ L(R_\theta):= \frac{1+l}{1-l} \, , \quad \beta_n (R_\theta) := 1 - \frac{1+l}{n} \,, \quad p (R_{\theta}):= \frac{2L}{L+1} \,. \end{gathered} \end{equation} \begin{lemma} \label{lemma5} For $j =1,2$, the functions \begin{gather*} \lambda_j \colon (-\pi,\pi] \to \left[ \frac{s}{1+s_j},\frac{k}{1+k} \right] \,, \qquad l \colon (-\pi ,\pi] \to [s,k] \,, \\ L \colon (-\pi,\pi] \to \left[ S,K \right] \,, \qquad p \colon (-\pi,\pi] \to \left[ \frac{2S}{S+1},\frac{2K}{K+1} \right] \,, \end{gather*} are even and surjective, and their periodic extensions are $C^1$.
Furthermore, they are strictly decreasing in $(0,\pi/2)$ and strictly increasing in $(\pi/2,\pi)$, with maxima at $\theta=0,\pi$ and minimum at $\theta=\pi/2$. Finally, \begin{gather} 0<M_j <2 \, , \qquad m>0 \label{stima M} \,, \\ \prod_{j=1}^n \beta_j (R_{\theta}) = \frac{1}{n^{p(R_\theta)}} + O\left(\frac{1}{n}\right) \,, \label{asintotica produttoria} \end{gather} where $O(1/n) \to 0$ as $n \to \infty$ uniformly for $\theta \in (-\pi,\pi]$. \end{lemma} \begin{proof} Let us consider $\lambda_j$ first. By definitions \eqref{coefficienti}, \eqref{scelta di a} and by recalling that $x^2+y^2 = 1$, we may regard $A_j,B_j$ and $\lambda_j$ as functions of $x \in [-1,1]$. In particular, \begin{equation} \label{coefficienti2} A_j (x) = \left( \frac{1-k^2}{k^2} - \frac{1-s_j^2}{s^2} \right) x^2 + \frac{1-s_j^2}{s^2} \, , \quad B_j (x) = \left( 1 - \frac{s_j}{s} \right) x^2 +\frac{s_j}{s} \,. \end{equation} By symmetry we can restrict to $x \in [0,1]$. We distinguish three cases. \textit{1. Case $s_1=s_2$.} Since $s_1=s_2=s$, from \eqref{coefficienti2} we compute \[ \lambda_1 (x) =\lambda_2(x) = { \left( 1 + \sqrt{ \left( \frac{1}{k^2} - \frac{1}{s^2} \right) x^2 + \frac{1}{s^2} \, } \right) }^{-1} \,. \] By \eqref{case2}, \eqref{average} this is a strictly increasing function in $[0,1]$, and the rest of the statement for $\lambda_j$ readily follows. \textit{2. Case $s_1<s_2$.} By \eqref{case2} we have \begin{equation} \label{second case} -s_2<s_1<s \qquad \text{and} \qquad 0<s<s_2 \,.
\end{equation} Relations \eqref{coefficienti2} and \eqref{second case} imply that \begin{align} A_j' (0)=0 \,, \quad A_j' (x) < 0 \,, \quad \text{ for } \quad x \in (0,1] \,, \label{der Aj} \\ B_1' (0)=0 \,, \quad B_1' (x) > 0 \,, \quad \text{ for } \quad x \in (0,1] \,,\label{der B1} \\ B_2' (0)=0 \,, \quad B_2' (x) < 0 \,, \quad \text{ for } \quad x \in (0,1] \,.\label{der B2} \end{align} We claim that \begin{equation} \label{claim1} \lambda_j' (0)=0 \,, \quad \lambda_j' (x) > 0 \,, \quad \text{ for } \quad x \in (0,1] \,. \end{equation} Before proving \eqref{claim1}, notice that $\lambda_j(0)= \displaystyle\frac{s}{1+s_j}$ and $\lambda_j(1)=\displaystyle \frac{k}{1+k}$, therefore the surjectivity of $\lambda_j$ will follow from \eqref{claim1}. Let us now prove \eqref{claim1}. For $j=2$, condition \eqref{claim1} is an immediate consequence of the definition of $\lambda_2$ and \eqref{der Aj}, \eqref{der B2}. For $j=1$ we have \begin{equation} \label{der lambda1} \lambda_1'(x) = -\lambda_1^2 \left( \frac{A_1'+2 B_1 B_1'}{2 \sqrt{B_1^2 + A_1}} + B_1 ' \right) \end{equation} and we immediately see that $\lambda_1'(0)=0$ by \eqref{der Aj} and \eqref{der B1}. Assume now that $x \in (0,1]$. By \eqref{der B1} and \eqref{der lambda1}, the claim \eqref{claim1} is equivalent to \[ {A_1'}^2 + 4 A_1' B_1 B_1' - 4 A_1 {B_1'}^2 > 0 \,, \quad \text{ for } \quad x \in (0,1] \,.
\] After simplifications, the above inequality is equivalent to \begin{equation} \label{cond f} \frac{4 f(s_1,s_2)}{k^4 {(s_1+s_2)}^4} \, x^2 >0 \,, \quad \text{ for } \quad x \in (0,1]\,, \end{equation} where $f(s_1,s_2)=a b c d$, with \begin{align*} a = -2k + (1+k) s_1 + (1-k) s_2 \,, \qquad b = 2k + (1+k) s_1 + (1-k) s_2 \,,\\ c = -2k - (1-k) s_1 - (1+k) s_2 \,, \qquad d = 2k - (1-k) s_1 - (1+k) s_2 \,. \end{align*} We have that $a,c <0$ since $s_1<s_2$, and $b,d>0$ since $s_1>-s_2$. Hence \eqref{cond f} follows. \textit{3. Case $s_2<s_1$.} In particular, we have \begin{equation} \label{third case} -s_1<s_2<s \qquad \text{and} \qquad 0<s<s_1 \,. \end{equation} This is similar to the previous case. Indeed, \eqref{der Aj} is still true, but for $B_j$ we have \begin{align} B_1' (0)=0 \,, \quad B_1' (x) < 0 \,, \quad \text{ for } \quad x \in (0,1] \,, \label{der B1 due} \\ B_2' (0)=0 \,, \quad B_2' (x) > 0 \,, \quad \text{ for } \quad x \in (0,1] \,. \label{der B2 due} \end{align} This implies \eqref{claim1} with $j=1$. Similarly to the previous case, we can see that \eqref{claim1} for $j=2$ is equivalent to \begin{equation} \label{cond f due} \frac{4 f(s_2,s_1)}{k^4 {(s_1+s_2)}^4} \, x^2 >0 \,, \quad \text{ for } \quad x \in (0,1] \,. \end{equation} Notice that $f$ is symmetric, therefore \eqref{cond f due} is a consequence of \eqref{cond f}. We now turn our attention to the function $l$. Notice that \begin{equation} \label{l identity} l= \frac{1}{1-H} - 1 \,, \quad \text{where} \quad H:=\frac{2 \lambda_1 \lambda_2}{ \lambda_1 + \lambda_2} = 2 \, {\left( \frac{1}{\lambda_1} + \frac{1}{\lambda_2} \right)}^{-1} \end{equation} is the harmonic mean of $\lambda_1$ and $\lambda_2$.
Therefore $H$ is differentiable and even. By direct computation we have \[ H'=2 \, \frac{\lambda_1' \lambda_2^2+ \lambda_1^2 \lambda_2'}{ {(\lambda_1+\lambda_2)}^2 } \,. \] Since $\lambda_j >0$, by \eqref{claim1} we have \begin{equation} \label{derivata H} H' (0)=0 \,, \quad H' (x) > 0 \,, \quad \text{ for } \quad x \in (0,1] \,. \end{equation} Moreover, $H(0)=\displaystyle\frac{s}{1+s}$ and $H(1)=\displaystyle\frac{k}{1+k}$. Then from \eqref{l identity} we deduce $l(0)=s$, $l(1)=k$ and the rest of the statement for $l$. The statements for $L$ and $p$ follow directly from the properties of $l$ and from the fact that $t \mapsto \displaystyle \frac{1+t}{1-t}$ and $t \mapsto \displaystyle \frac{2t}{t+1}$ are $C^1$ and strictly increasing for $0<t<1$ and $t>1$, respectively. Next we prove \eqref{stima M}. By \eqref{case2} and the properties of $\lambda_j$, we have in particular \begin{equation} \label{stima lambda bis} 0< \lambda_j < \frac{1}{2} \,, \quad 0<H < \frac{1}{2} \,, \end{equation} where $H$ is defined in \eqref{l identity}. Since $\lambda_j>0$, the inequality $M_j>0$ is equivalent to $H<1$, which holds by \eqref{stima lambda bis}. The inequality $M_2<2$ is instead equivalent to $\lambda_1 (1-2 \lambda_2)>0$, which is again true by \eqref{stima lambda bis}. The case $M_1<2$ is similar. Finally, $m>0$ follows from $0<M_2<2$ and the continuity of $\lambda_j$. Finally, we prove \eqref{asintotica produttoria}. By definition we have $1+l= \displaystyle \frac{2 L}{L+1}= p$.
By taking the logarithm of $\prod_{j=1}^n \beta_j (R_\theta)$, we see that there exists a constant $c>0$, depending only on $K,S_1,S_2$, such that \begin{equation} \label{prod beta} \va{ \log \left( \prod_{j=1}^n \beta_j (R_{\theta}) \right) + p(R_\theta) \log n }< c \,, \quad \text{for every} \quad \theta \in (-\pi,\pi] \,. \end{equation} Estimate \eqref{prod beta} is uniform because $\beta_j$ and $p$ are $\pi$-periodic and uniformly continuous. \end{proof} \subsection{Weak staircase laminate} \label{sec:weak} We are now ready to construct a staircase laminate in the same fashion as \cite[Lemma 3.17]{afs}. The steps of our staircase will be the sets \[ \mathcal{S}_n := n J SO(2) = \left\{ (0,n e^{i \theta}) \, \colon \, \theta \in (-\pi,\pi] \right\} \,, \quad n \geq 1 \,. \] For $0< \delta < \pi/2$ we introduce the sets \[ E_{\infty}^\delta := \{ (0,z) \in E_\infty \, \colon \, |\arg z|< \delta \} \,, \qquad \mathcal{S}_n^\delta := \mathcal{S}_n \cap E_\infty^\delta \,. \] \begin{lemma}\label{lemma3} Let $0<\delta<\pi/4$ and $0<\rho<\min \{m, \frac{1}{2}\}$, with $m>0$ defined in \eqref{L h}. There exists a constant $c_K >1$, depending only on $K,S_1, S_2$, such that for every $A=(a,b) \in \mathbb{R}^{2 \times 2}$ satisfying \begin{equation} \label{dist} \dist(A,\mathcal{S}_n) < \rho \,, \end{equation} there exists a laminate of third order $\nu_A$ such that: \begin{enumerate}[(i)] \item $\conj{\nu}_A=A$, \item $\spt \nu_A \subset T \cup \mathcal{S}_{n +1}\,,$ \item $\spt \nu_A \subset \{ \xi \in \mathbb{R}^{2 \times 2} \, \colon \, c_K^{-1} n < \va{\xi} < c_K \, n \} \,, $ \item $\spt \nu_A \cap \mathcal{S}_{n+1} = \{ (n+1) J R \}$, with $R=R_{\theta_A}$ as in \eqref{theta A}.
\end{enumerate} Moreover, \begin{equation} \label{growth} \left( 1 - c_K \, \frac{\rho}{n} \right) \beta_{n} (R) \leq \nu_A (\mathcal{S}_{n +1}) \leq \left( 1 + c_K \, \frac{\rho}{n} \right) \beta_{n+2} (R) \,, \end{equation} where $\beta_n$ is defined in \eqref{L h}. If in addition $n \geq 2$ and \begin{equation} \label{dist angolo} \dist (A , \mathcal{S}_n^\delta) < \rho \,, \end{equation} then \begin{equation} |\arg{R}|=|\theta_A | < \delta + \rho \,. \label{arg R bis} \end{equation} In particular, $\spt \nu_{A} \subset T \cup \mathcal{S}_{n+1}^{\delta+\rho}$. \end{lemma} \begin{figure}[t!] \centering \def\svgwidth{8.5cm} \input{staircase.pdf_tex} \caption{Weak staircase laminate.} \label{fig:staircase} \end{figure} \begin{proof} Let us start by defining $\nu_A$. From Lemma \ref{lemma2} there exist $c_K >1$ and nonzero matrices $Q \in T_1$, $P \in E_{\infty}$, such that $\det(P-Q)=0$, \begin{gather} A=\mu_1 Q + (1-\mu_1) P \,, \quad \text{for some } \quad \mu_1 \in [0,1]\,, \label{split} \\ \frac{1}{c_K} \va{A} \leq \va{P-Q}, \va{P}, \va{Q} \leq c_K \va{A} \,. \label{estimates2 bis} \end{gather} Moreover, $P=tJR$ with $R=R_{\theta_A}=(r,0)$ as in \eqref{theta A} and $t>0$. We now estimate $t$. By \eqref{dist}, there exists $\tilde{R} \in SO(2)$ such that $|A-n J \tilde{R} |< \rho$. Applying Lemma \ref{lemma1} to $A-n J \tilde{R}$ and $P - n J \tilde{R}$ yields \begin{equation} \label{stima segmento} |P- n J \tilde{R}|< \sqrt{2} \rho \,, \end{equation} since $P- n J \tilde{R} \in E_{\infty}$. Hence from \eqref{stima segmento} we get \begin{equation} \label{stima t} \va{t - n} < \rho \,, \end{equation} since $|JR|=|J \tilde{R}|= \sqrt{2}$.
We also have \begin{equation}\label{mu1} \mu_1 = \frac{\va{A-Q}}{\va{P-Q}} \geq 1 - \frac{\va{P-A}}{\va{P-Q}}\geq 1 - c_K \frac{\rho}{n} \,, \end{equation} since $\va{P-A}<3 \rho$ and $\va{P-Q} > n/c_K$, by \eqref{dist angolo}, \eqref{estimates2 bis}, \eqref{stima segmento}. Next we split $P$ in order to ``climb'' one step of the staircase (see Figure \ref{fig:staircase}). Define $x:=\cos \theta_A$, $y:=\sin \theta_A$ and \[ a:= \frac{x}{k} + i \, \frac{y}{s} \,, \] as in \eqref{scelta di a}. Moreover, set \[ Q_1 := \lambda_1 (a,d_1 (\conj{a})) \,, \quad Q_2 := \lambda_2 (-a,d_2 (\conj{a})) \,. \] Here $\lambda_1,\lambda_2$ are chosen as in \eqref{lambda}, so that $Q_j \in T_j$ and, by Lemma \ref{lemma4}, $\det(Q_j - JR)=0$. Furthermore, set \begin{equation} \label{comb convessa} \left\{ \begin{aligned} & \mu_2 := \frac{ M_2 - (t - n) M_2 }{2 n + M_2 + (t - n)(2 - M_2)} \,, \\ & \mu_3 := \frac{M_1 - (t-n) M_1}{2 (n + 1)} \,, \end{aligned} \right. \end{equation} with $M_j$ as in \eqref{L h}. With the above choices we have \begin{equation} \label{staircase} \left\{ \begin{aligned} & t J R = \mu_2 t Q_1 + (1-\mu_2) \tilde{P} \,, \\ & \tilde{P}= \mu_3 (n+1) Q_2 + (1-\mu_3) (n+1) JR \,, \end{aligned} \right. \end{equation} and $\mu_2,\mu_3 \in [0,1]$ by \eqref{stima M}. In order to check \eqref{staircase}, we solve the first equation for $\tilde{P}$ to get \begin{equation} \label{equiv} \gamma_2 t JR + (1-\gamma_2) t Q_1 = \gamma_3 (n + 1) Q_2 + (1-\gamma_3)(n + 1)JR \,, \end{equation} with $\mu_2 = 1 - 1/\gamma_2$ and $\mu_3 = \gamma_3$.
Equating the first conformal coordinate of both sides of \eqref{equiv} yields \begin{equation} \label{gamma2} \gamma_2 = 1 + \gamma_3 \, \frac{n + 1}{t} \frac{\lambda_2}{\lambda_1} \,. \end{equation} Substituting \eqref{gamma2} in the second component of \eqref{equiv} gives us \begin{equation} \label{gamma3} \gamma_3 \left( \lambda_1 + \lambda_2 - \lambda_1 \lambda_2 \left( d_1 (a) + d_2(a) \right) \, r^{-1} \right) = \frac{1 - (t-n)}{n + 1 } \, \lambda_1 \,. \end{equation} By \eqref{scelta di a}, $d_1(a)+d_2(a)=2 r $ and equation \eqref{gamma3} yields \begin{equation} \label{gamma3 bis} \gamma_3 = \frac{1 - (t-n)}{n + 1 } \, \frac{\lambda_1}{ \lambda_1+\lambda_2-2 \lambda_1 \lambda_2} = \frac{1 - (t-n)}{2(n + 1) } \, M_1 \,. \end{equation} Equations \eqref{gamma2} and \eqref{gamma3 bis} give us \eqref{comb convessa}. Therefore, by \eqref{split} and \eqref{staircase}, the measure \[ \nu_A := \mu_1 \delta_Q + (1-\mu_1) \left( \mu_2 \delta_{t Q_1} + (1-\mu_2) \left( \mu_3 \delta_{(n+1) Q_2} + (1-\mu_3) \delta_{(n+1)JR} \right) \right) \] defines a laminate of third order with barycenter $A$, supported in $T_1 \cup T_2 \cup \mathcal{S}_{n +1}$ and such that $\spt \nu_A \cap \mathcal{S}_{n+1}= \{(n+1)JR\}$ with $R=R_{\theta_A}$. Moreover, \[ \spt \nu_A \subset \{ \xi \in \mathbb{R}^{2 \times 2} \, \colon \, c_K^{-1} n < \va{\xi} < c_K \, n \} \,, \] since $c_K^{-1} n < |Q|<c_K n$ by \eqref{dist}, \eqref{estimates2 bis} and $$ c_K^{-1} n <|t Q_1|, |(n+1)Q_2|<c_K n $$ by \eqref{stima t}, \eqref{stima Q}.
Next we prove \eqref{growth} by estimating \begin{equation} \label{to show} \nu_A ( \mathcal{S}_{n +1}) = \mu_1 (1-\mu_2)(1-\mu_3) \,. \end{equation} Notice that $\nu_A ( \mathcal{S}_{n +1})$ depends on $R$. For small $\rho$, we have \[ \mu_2 = \displaystyle\frac{M_2}{2 n} + \rho \, O \left(\frac{1}{n} \right)\,, \quad \mu_3=\displaystyle \frac{M_1}{2 n} + \rho \, O \left(\frac{1}{n}\right) \,, \] so that \[ (1-\mu_2) (1-\mu_3)= 1 - \frac{M_1+M_2}{2 n} + \rho \, O \left( \frac{1}{n^2} \right) = 1 - \frac{1+l}{n} + \rho \, O \left( \frac{1}{n^2} \right)\,, \] with $l$ as in \eqref{L h}. Although this gives the correct asymptotic behaviour, we will need to estimate \eqref{to show} for every $n \in \mathbb{N}$. By direct calculation, \[ (1-\mu_2)(1-\mu_3) = \frac{n + (t- n)}{n + 1} \, \frac{2 n + 2 - M_1 + (t-n)M_1}{2 n + M_2 + (t-n) (2-M_2)} \,, \] so that \begin{equation} \label{to show 2} (1-\mu_2)(1-\mu_3) = \left( 1 + \frac{t-n}{n} \right) \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{ 2 l \, (1- (t-n) ) }{2 n + M_2 + (t-n) (2-M_2)} \right)\,. \end{equation} Let us bound \eqref{to show 2} from above.
Recall that $t-n < \rho <1$ and $2-M_2 >0$ by \eqref{stima M}, so the denominator of the third factor in \eqref{to show 2} is bounded from above by $2 (n+1)$ and \begin{equation} \label{upper1} \begin{aligned} (1-\mu_2)(1-\mu_3) & \leq \left( 1 + \frac{\rho}{n} \right) \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{l}{n + 1} + l \, \frac{\rho}{n+1} \right) \\ & \leq \left( 1 + c_K \, \frac{\rho}{n} \right) \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{l}{n + 1} \right) \,, \end{aligned} \end{equation} where $c_K >1$ is such that \[ l \, \frac{\rho}{n + 1} \left( 1+ \frac{\rho}{n} \right) \leq (c_K -1) \, \frac{\rho}{n} \left( 1 - \frac{l}{n + 1} \right) \,. \] Moreover, \begin{equation} \label{upper2} \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{l}{n + 1} \right) = 1 - \frac{1+l}{n+1} + \frac{l}{{(n + 1)}^2} \leq 1 - \frac{1+l}{n+2} = \beta_{n+2} (R) \,. \end{equation} The upper bound in \eqref{growth} follows from \eqref{upper1} and \eqref{upper2}. Let us now bound \eqref{to show 2} from below. We can estimate from below the denominator in the third factor of \eqref{to show 2} with $2n$, since $t - n > - \rho$ by \eqref{stima t} and the assumption that $\rho < m$ with $m$ as in \eqref{L h}. Therefore \begin{equation} \label{lower1} \begin{aligned} (1-\mu_2)(1-\mu_3) & \geq \left( 1 - \frac{\rho}{n} \right) \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{l}{n} - l \, \frac{\rho}{n} \right) \\ & \geq \left( 1 - c_K \, \frac{\rho}{n} \right) \left( 1 - \frac{1}{n + 1} \right) \left( 1 - \frac{l }{n } \right)\,, \end{aligned} \end{equation} if we choose $c_K >1$ such that \[ \left( 1- \frac{\rho}{n} \right) \, l \leq (c_K -1) \left( 1 - \frac{l}{n } \right) \,.
\] Finally \begin{equation} \label{lower2} \left( 1 - \varphirac{1}{n +1} \right)\left( 1 - \varphirac{l}{n } \right) \geq 1 - \varphirac{1+l}{n} = \beta_n (R) \,. \varepsilonnd{equation} The lower bound in \varepsilonqref{growth} follows from \varepsilonqref{lower1} and \varepsilonqref{lower2}. Finally, the last part of the statement follows from a simple geometrical argument, recalling that $\arg R =\theta_A= - \arg(b-d_1 (\conj{a}))$ and using hypothesis \varepsilonqref{dist angolo}. \varepsilonnd{proof} \begin{remark} \label{iteration} By iteratively applying Lemma \ref{lemma3}, one can obtain, for every $R_\theta \in SO(2)$, a sequence of laminates of finite order $\etau_n \in \mathcal{L}(\matrici)$ that satisfies $\conj{\etau}_n=JR_\theta$, $\spt \etau_n \subset T_1 \cup T_2 \cup \mathcal{S}_{n +1}$, and \begin{equation} \label{explosion} \lim_{n \to \infty} \int_{\mathbb{R}^{2 \times 2}} {|\lambda |}^{p (R_\theta)} \, d \etau_n (\lambda) = \infty \,, \varepsilonnd{equation} where $p(R_\theta) \in \left[ \varphirac{2S}{S+1}, \varphirac{2K}{K+1} \right]$ is the function defined in \varepsilonqref{L h}. Indeed, setting $A=J R_\theta$ and iterating the construction of Lemma \ref{lemma3} yields $\etau_n \in \mathcal{L}(\matrici)$ such that $\conj{\etau}_n=JR_\theta$ and $\spt \etau_n \subset T_1 \cup T_2 \cup \mathcal{S}_{n +1}$. Notice that $\etau_n$ contains the term $\prod_{j=1}^n (1-\mu_2^j)(1-\mu_3^j) \delta_{(n+1) J R_{\theta}}$, with $\mu_2^j,\mu_3^j$ as defined in \varepsilonqref{comb convessa}. Therefore, using \varepsilonqref{asintotica produttoria} and \varepsilonqref{growth} (with $\rho=0$), we obtain \begin{equation} \label{explosion2} \prod_{j=1}^n (1-\mu_2^j)(1-\mu_3^j) \approx \prod_{j=1}^n \beta_j (R_\theta) \approx \varphirac{1}{n^{p(R_\theta)}} \varepsilonnd{equation} which implies \varepsilonqref{explosion}.
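The decay rate in \varepsilonqref{explosion2} can also be observed numerically. The following sketch (ours, with an illustrative value of $l$, and taking $p=1+l$ as suggested by $\beta_j = 1-(1+l)/j$) rescales the partial products by $n^{p}$ and checks that they stabilise:

```python
import math

l = 0.5        # illustrative value; in the text l depends on R_theta via (L h)
p = 1 + l      # exponent suggested by beta_j = 1 - (1+l)/j

def scaled_product(n):
    """prod_{j=2}^{n} (1 - (1+l)/j), rescaled by n^p."""
    prod = 1.0
    for j in range(2, n + 1):
        prod *= 1 - (1 + l) / j
    return prod * n**p

# the rescaled partial products approach a positive constant,
# so the product itself decays like n^{-p}
r1, r2 = scaled_product(2000), scaled_product(4000)
assert abs(r1 - r2) / abs(r2) < 1e-2
```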
\varepsilonnd{remark} \begin{remark} \label{rmk:differenza} In the isotropic case $S=K$, the laminate $\etau_A$ provided by Lemma \ref{lemma3} coincides with the one in \cite[Lemma 3.16]{afs}. In particular, the growth condition \varepsilonqref{growth} is independent of the initial point $A$, and it reads as \[ \left( 1 - c_K \, \varphirac{\rho}{n} \right) \beta_n (I) \leq \etau_A (\mathcal{S}_{n +1}) \leq \left( 1 + c_K \, \varphirac{\rho}{n} \right) \beta_{n+2} (I) \,, \quad \beta_n (I)=1- \varphirac{1+k}{n} \,. \] Moreover, by Remark \ref{iteration}, for every $R_\theta \in SO(2)$, $J R_\theta$ is the center of mass of a sequence of laminates of finite order such that \varepsilonqref{explosion} holds with $p(R_\theta) \varepsilonquiv \varphirac{2K}{K+1}$, which gives the desired growth rate. In contrast, in the anisotropic case $1<S<K$, the growth rate of the laminates explicitly depends on the argument of the barycenter $J R_\theta$. The desired growth rate corresponds to $\theta = 0$, that is, the center of mass has to be $J$. In constructing approximate solutions with the desired integrability properties, it is then crucial to be able to select rotations whose angle lies in an arbitrarily small neighbourhood of $\theta = 0$. \varepsilonnd{remark} We now proceed to show the existence of a \textit{piecewise affine} map $f$ that solves the differential inclusion \varepsilonqref{differential inclusion} up to an arbitrarily small $L^{\infty}$ error. Such a map will have the integrability properties given by \varepsilonqref{crescita}. \begin{proposition} \label{prop:grad} Let $\Omegaega \subset \mathbb{R}^2$ be an open bounded domain. Let $K>1$, $\alpha \in (0,1)$, $\varepsilon>0$, $0<\delta_0 < \varphirac{2K}{K+1} - \varphirac{2S}{S+1}$, $\gamma > 0$.
There exist a constant $c_{K,\delta_0} > 1 $, depending only on $K,S_1,S_2,\delta_0$, and a \textit{piecewise affine} map $f \in W^{1,1} (\Omegaega;\mathbb{R}^2) \cap C^{\alpha} (\overline{\Omegaega};\mathbb{R}^2)$, such that \begin{enumerate}[\indent(i)] \item $f (x)= J x$ on $\partial \Omegaega$, \item $[f - J x]_{C^{\alpha} (\overline{\Omegaega})} < \varepsilon$, \item $\dist (\etaabla f (x), T ) < \gamma$ a.e. in $\Omegaega$. \varepsilonnd{enumerate} Moreover \begin{equation} \label{crescita} \varphirac{1}{c_{K,\delta_0}} t^{-\varphirac{2K}{K+1}} < \varphirac{ | \{ x \in \Omegaega \, \colon \, |\etaabla f (x)|>t \} | }{|\Omegaega|} < c_{K,\delta_0} \, t^{-p} \,, \varepsilonnd{equation} where $p \in \left(\varphirac{2K}{K+1}- \delta_0,\varphirac{2K}{K+1} \right]$. That is, $\etaabla f \in L^{p}_{\rm weak} (\Omegaega;\mathbb{R}^{2 \times 2})$ and $\etaabla f \etaotin L^{\varphirac{2K}{K+1}} (\Omegaega;\mathbb{R}^{2 \times 2})$. In particular $f \in W^{1,q} (\Omegaega;\mathbb{R}^2)$ for every $q < p$, but $\int_{\Omegaega} \va{\etaabla f (x)}^{\varphirac{2K}{K+1}} \, dx = \infty$. \varepsilonnd{proposition} \begin{proof} By Lemma \ref{lemma5} the function $p \colon (-\pi,\pi] \to \left[ \varphirac{2S}{S+1},\varphirac{2K}{K+1} \right]$ is uniformly continuous. Let $\alpha \colon [0,\infty] \to [0,\infty]$ be its modulus of continuity. Fix $0<\delta<\pi/4$ such that \begin{equation} \label{delta massimo} \alpha(\delta) < \delta_0 \,. \varepsilonnd{equation} Let $\{\rho_n\}$ be a strictly decreasing positive sequence satisfying \begin{equation}\label{rho 1} \rho_1 < \varphirac{1}{4} \min \{ m,c_K^{-1}, \dist(\mathcal{S}_1,T),\gamma \} \,, \quad \rho_n <\varphirac{\delta}{4} \, 2^{-n} \,, \varepsilonnd{equation} where $m>0$ and $c_K >1$ are the constants from Lemma \ref{lemma3}. Define $\{\delta_n\}$ as \begin{equation} \label{delta n} \delta_1 := 0 \, \quad \text{and} \quad \delta_n := \sum_{j=1}^{n-1} \rho_j \,\, \text{ for } \, n \geq 2 \,.
\varepsilonnd{equation} In particular, from \varepsilonqref{rho 1} and \varepsilonqref{delta n} it follows that \begin{equation} \label{bound delta n} \delta_n < \varphirac{\delta}{2} \,, \quad \text{ for every } \,\, n \in \mathbb{N}\,. \varepsilonnd{equation} \paragraph{\textbf{Step 1.}} Similarly to the proof of \cite[Proposition 3.17]{afs}, by repeatedly combining Lemma \ref{lemma3} and Proposition \ref{gradienti}, we will prove the following statement: \paragraph{\textit{Claim.}} There exist sequences of piecewise constant functions $\tau_n \colon \Omegaega \to (0 , \infty)$ and piecewise affine Lipschitz mappings $f_n \colon \Omegaega \to \mathbb{R}^2$, such that \begin{enumerate}[(a)] \item \label{prop a} $f_n(x)=Jx$ on $\partial \Omegaega$, \item \label{prop b}$[f_n - Jx]_{C^{\alpha} (\overline{\Omegaega})} < (1-2^{-n}) \varepsilon$, \item \label{prop c}$\dist(\etaabla f_n(x), T \cup \mathcal{S}_n^{\delta_n}) < \tau_n (x)$ a.e. in $\Omegaega$, \item \label{prop d}$\tau_n (x) = \rho_n$ in $\Omegaega_n$, \varepsilonnd{enumerate} where \[ \Omegaega_n := \{ x \in \Omegaega \, \colon \, \dist(\etaabla f_n(x), T) \geq \rho_n \} \,. \] Moreover \begin{equation}\label{stima omega n} \prod_{j=1}^{n-1} \left( 1 - c_K \varphirac{\rho_j}{j} \right) \beta_j (R_0) \leq \varphirac{|\Omegaega_n|}{|\Omegaega|} \leq \prod_{j=1}^{n-1} \left( 1 + c_K \varphirac{\rho_j}{j} \right) \beta_{j+2} (R_{\delta}) \,. \varepsilonnd{equation} \paragraph{\textit{Proof of the claim.}} We proceed by induction. Set $f_1(x):=Jx$ and $\tau_1 (x) := \rho_1$ for every $x \in \Omegaega$. Since $J \in \mathcal{S}_1^0$, $f_1$ satisfies \ref{prop a}-\ref{prop c}. Also, $\rho_1 < \dist (T, \mathcal{S}_1)/4$ by \varepsilonqref{rho 1}, so $\Omegaega_1 = \Omegaega$ and \ref{prop d}, \varepsilonqref{stima omega n} follow. Assume now that $f_n$ and $\tau_n$ satisfy the inductive hypothesis. We will first define $f_{n+1}$ by modifying $f_n$ on the set $\Omegaega_n$.
Since $f_n$ is piecewise affine we have a decomposition of $\Omegaega_n$ into pairwise disjoint open subsets $\Omegaega_{n,i}$ such that \begin{equation} \label{unione disgiunta} \va{ \Omegaega_n \smallsetminus \bigcup_{i=1}^{\infty} \Omegaega_{n,i} } = 0 \,, \varepsilonnd{equation} with $f_n (x) = A_i x + b_i$ in $\Omegaega_{n,i}$, for some $A_i \in \mathbb{R}^{2 \times 2}$ and $b_i \in \mathbb{R}^2$. Moreover \begin{equation} \label{distance} \dist (A_i, \mathcal{S}_n^{\delta_n}) < \rho_n \varepsilonnd{equation} by (c) and \ref{prop d}. Since \varepsilonqref{distance} and \varepsilonqref{rho 1} hold, we can invoke Lemma \ref{lemma3} to obtain a laminate $\etau_{A_i}$ and a rotation $R^i=R_{\theta_{A_i}}$ satisfying, in particular, $\conj{\etau}_{A_i}=A_{i}$, \begin{gather} |\arg R^i|=|\theta_{A_i}| < \delta_{n+1} \label{arg R} \,, \\ \spt \etau_{A_i} \subset T \cup \mathcal{S}_{n+1}^{\delta_{n+1}} \,, \label{supp n+1 bis} \varepsilonnd{gather} since $\delta_{n+1}= \delta_n + \rho_n$ by \varepsilonqref{delta n}. By applying Proposition \ref{gradienti} to $\etau_{A_i}$ and by taking into account \varepsilonqref{supp n+1 bis}, we obtain a piecewise affine Lipschitz mapping $g_i \colon \Omegaega_{n,i} \to \mathbb{R}^2$, such that \begin{enumerate}[resume*] \item \label{prop e} $g_i(x)=A_i x + b_i $ on $\partial \Omegaega_{n,i}$, \item \label{prop f}$[g_i - f_n]_{C^{\alpha}(\overline{\Omegaega_{n,i}})} < 2^{-(n+1+i)} \varepsilon$, \item \label{prop g}$c_K^{-1} n < | \etaabla g_i (x) | < c_K n$ a.e. in $\Omegaega_{n,i}$, \item \label{prop h}$\dist(\etaabla g_i(x), T\cup \mathcal{S}_{n+1}^{\delta_{n+1}} )< \rho_{n+1} $ a.e. in $\Omegaega_{n,i}$. 
\varepsilonnd{enumerate} Moreover \begin{equation} \label{stima induttiva} \left( 1 - c_K \varphirac{\rho_n}{n} \right) \beta_n (R^i) \leq \varphirac{|\omega_{n,i}|}{|\Omegaega_{n,i}|} \leq \left( 1 + c_K \varphirac{\rho_n}{n} \right) \beta_{n+2} (R^i) \,, \varepsilonnd{equation} with \[ \omega_{n,i}:= \left\{ x \in \Omegaega_{n,i} \, \colon \, \dist( \etaabla g_i (x), \mathcal{S}_{n+1}^{\delta_{n+1}} )< \rho_{n+1} \right\} \,. \] Set \begin{equation*} f_{n+1}(x) := \begin{cases} f_n (x) & \text{if } x \in \Omegaega \smallsetminus \Omegaega_n \,, \\ g_i(x) & \text{if } x \in \Omegaega_{n,i} \,. \varepsilonnd{cases} \varepsilonnd{equation*} Since $\Omegaega_{n+1}$ is well defined, we can also introduce \[ \tau_{n+1} (x) := \begin{cases} \tau_{n}(x) & \text{for } x \in \Omegaega \smallsetminus \Omegaega_{n+1}\,, \\ \rho_{n+1} & \text{for } x \in \Omegaega_{n+1} \,, \varepsilonnd{cases} \] so that \ref{prop d} holds. From \ref{prop e} we have $f_{n+1}(x)=Jx$ on $\partial \Omegaega$. From \ref{prop f} we get $[f_{n+1} -f_n]_{C^{\alpha}(\overline{\Omegaega})}< 2^{-(n+1)}\varepsilon$ so that \ref{prop b} follows. \ref{prop c} is a direct consequence of \ref{prop d}, \ref{prop h}, and the fact that $\rho_n$ is strictly decreasing. Finally, let us prove \varepsilonqref{stima omega n}. First notice that the sets $\omega_{n,i}$ are pairwise disjoint. By \varepsilonqref{rho 1}, in particular we have $\rho_{n+1} < \dist (T, \mathcal{S}_1)/4$, so that \begin{equation} \label{unione disgiunta 2} \va{ \Omegaega_{n+1} \smallsetminus \bigcup_{i=1}^{\infty} \omega_{n,i} } = 0 \,. \varepsilonnd{equation} By \varepsilonqref{arg R} and \varepsilonqref{bound delta n} we have $|\arg R^i |< \delta$. Then by the properties of $\beta_n$ (see Lemma \ref{lemma5}), \begin{equation} \label{stima beta} \beta_{n}(R^i) \geq \beta_{n}(R_0) \quad \text{and} \quad \beta_{n+2}(R^i) \leq \beta_{n+2}(R_{\delta}) \,.
\varepsilonnd{equation} Using \varepsilonqref{stima beta}, \varepsilonqref{unione disgiunta}, \varepsilonqref{unione disgiunta 2} in \varepsilonqref{stima induttiva} yields \[ |\Omegaega_n| \left( 1 - c_K \varphirac{\rho_n}{n} \right) \beta_n (R_0) \leq|\Omegaega_{n+1}| \leq |\Omegaega_n| \left( 1 + c_K \varphirac{\rho_n}{n} \right) \beta_{n+2} (R_\delta) \,, \] and \varepsilonqref{stima omega n} follows. \paragraph{\textbf{Step 2.}} Notice that on $\Omegaega \smallsetminus \Omegaega_n$ we have that $\etaabla f_{n+1} = \etaabla f_n $ almost everywhere, so $\Omegaega_{n+1} \subset \Omegaega_n$. Therefore $\{f_n\}$ is obtained by modification on a nested sequence of open sets, satisfying \[ \prod_{j=1}^{n-1} \left( 1 - c_K \varphirac{\rho_j}{j} \right) \beta_j (R_0) \leq \varphirac{|\Omegaega_n|}{|\Omegaega|} \leq \prod_{j=1}^{n-1} \left( 1 + c_K \varphirac{\rho_j}{j} \right) \beta_{j+2} (R_{\delta}) \,. \] By \varepsilonqref{rho 1} we have $\rho_n < \min \{ 2^{-n} \, \delta, c_K^{-1}\} /4$, so that \[ \prod_{j=1}^{\infty} \left( 1 - c_K \varphirac{\rho_j}{j} \right) = c_1 \,, \quad \prod_{j=1}^{\infty} \left( 1 + c_K \varphirac{\rho_j}{j} \right) = c_2 \,, \] with $0<c_1<c_2< \infty$, depending only on $K,S_1,S_2,\delta$ (and hence on $\delta_0$, by \varepsilonqref{delta massimo}). Moreover, from Lemma \ref{lemma5}, \[ \prod_{j=1}^n \beta_j (R_\theta)= n^{-p (R_\theta)} +O \left( \varphirac{1}{n} \right)\,, \quad \text{ uniformly in } \quad (-\pi,\pi] \,. \] Therefore, there exists a constant $c_{K,\delta_0} >1$ depending only on $K,S_1,S_2,\delta_0$, such that \begin{equation} \label{stima omega n 2} \varphirac{1}{c_{K,\delta_0} }\, n^{-\varphirac{2K}{K+1}} \leq |\Omegaega_n| \leq c_{K,\delta_0} \, n^{-p_{\delta_0}} \,, \varepsilonnd{equation} since $p(R_0) = \displaystyle \varphirac{2K}{K+1}$. Here $p_{\delta_0}:=p(R_\delta)$.
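The last step of the proof turns the measure decay \varepsilonqref{stima omega n 2} into an integrability statement via the layer-cake formula $\int |\etaabla f|^q\,dx = q\int_0^\infty t^{q-1}\,|\{|\etaabla f|>t\}|\,dt$. Here is a sketch of ours with illustrative exponents (not the actual $p_{\delta_0}$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
p = sp.Rational(3, 2)  # illustrative exponent standing in for p_{delta_0}

def tail(q):
    # layer-cake tail integral with |{ |grad f| > t }| ~ t^(-p) for t >= 1
    return sp.integrate(q * t**(q - 1) * t**(-p), (t, 1, sp.oo))

assert tail(sp.Rational(6, 5)) == 4   # q < p: the tail is finite (weak-L^p gives L^q)
assert tail(p) == sp.oo               # q = p: the tail diverges
```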
Notice that, by \varepsilonqref{delta massimo}, $p_{\delta_0} \in \left(\varphirac{2K}{K+1}- \delta_0,\varphirac{2K}{K+1}\right]$, since $p$ is strictly decreasing in $[0,\pi/2]$. From \varepsilonqref{stima omega n 2}, in particular we deduce $|\Omegaega_n| \to 0$. Therefore $f_n \to f$ almost everywhere in $\Omegaega$, with $f$ piecewise affine. Furthermore $f$ satisfies (i)-(iii) by construction. We are left to estimate the distribution function of $\etaabla f$. By \ref{prop g} we have that \[ |\etaabla f(x)|> \varphirac{n}{c_{K,\delta_0}} \quad \text{in} \quad \Omegaega_n \qquad \text{and} \qquad |\etaabla f(x)|< c_{K,\delta_0} \, n \quad \text{in} \quad \Omegaega \smallsetminus \Omegaega_n \,. \] For a fixed $t > c_{K,\delta_0}$, let $n_1 :=[c_{K,\delta_0} t]$ and $n_2:=[c_{K,\delta_0}^{-1}t]$, where $[\cdot]$ denotes the integer part function. Therefore \[ \Omegaega_{n_1+1} \subset \{ x \in \Omegaega \, \colon \, |\etaabla f (x)|>t \} \subset \Omegaega_{n_2} \] and \varepsilonqref{crescita} follows from \varepsilonqref{stima omega n 2}, with $p=p_{\delta_0}$. Lastly, \varepsilonqref{crescita} implies that $\etaabla f_n$ is uniformly bounded in $L^1$, so that $f \in W^{1,1}(\Omegaega;\mathbb{R}^2)$ by dominated convergence. \varepsilonnd{proof} We remark that the constant $c_{K,\delta_0}$ in \varepsilonqref{crescita} is monotonically increasing as a function of $\delta_0$, that is, $c_{K,\delta_1} \leq c_{K,\delta_2}$ if $\delta_1 \leq \delta_2$. We now proceed with the construction of exact solutions to \varepsilonqref{differential inclusion}. We will follow a standard argument (see, e.g., \cite[Remark 6.3]{f}, \cite[Theorem A.2]{npp}). \begin{theorem} \label{thm finale} Let $\sigma_1,\sigma_2 $ be defined by \varepsilonqref{speciali} for some $K,S_1, S_2$ as in \varepsilonqref{K bounds} and $S$ as in \varepsilonqref{def S}.
There exist coefficients $\sigma_n \in L^{\infty}(\Omegaega;\{ \sigma_1, \sigma_2 \})$, exponents $p_n \in \left[\varphirac{2S}{S+1},\varphirac{2K}{K+1} \right]$, functions $u_n \in W^{1,1} (\Omegaega;\mathbb{R})$, such that \begin{align} \label{pde2} &\begin{cases} \Div (\sigma_n (x) \etaabla u_n (x)) = 0 & \text{ in } \quad \Omegaega \,, \\ u_n (x) = x_1 & \text{ on } \quad \partial \Omegaega \,, \varepsilonnd{cases}\\ &\etaabla u_n \in L^{p_n}_{\rm weak}(\Omegaega;\mathbb{R}^2), \quad p_n \to \varphirac{2K}{K+1}, \label{thesis1}\\ &\etaabla u_n \etaotin L^{\varphirac{2K}{K+1}}(\Omegaega;\mathbb{R}^2). \label{thesis2} \varepsilonnd{align} In particular $u_n \in W^{1,q} (\Omegaega;\mathbb{R})$ for every $q < p_n$, but $\int_{\Omegaega} {|\etaabla u_n|}^{\varphirac{2K}{K+1}} \, dx= \infty$. \varepsilonnd{theorem} \begin{proof} By Proposition \ref{prop:grad} there exist sequences $f_n \in W^{1,1} (\Omegaega;\mathbb{R}^2) \cap C^{\alpha} (\overline{\Omegaega};\mathbb{R}^2)$, $\gamma_n \searrow 0$, $p_n \in \left[\varphirac{2S}{S+1},\varphirac{2K}{K+1} \right]$, such that, $f_n (x)=J x$ on $\partial \Omegaega$, \begin{gather} \dist (\etaabla f_n (x) , T_1 \cup T_2 ) < \gamma_n \quad \text{a.e. in} \quad \Omegaega \,, \label{thm:inclusion} \\ \etaabla f_n \in L^{p_n}_{\rm weak} (\Omegaega;\mathbb{R}^{2 \times 2}) \,, \quad p_n \to \varphirac{2K}{K+1} \,, \quad \etaabla f_n \etaotin L^{\varphirac{2K}{K+1}}(\Omegaega;\mathbb{R}^{2 \times 2}) \,. \label{thm:reg} \varepsilonnd{gather} In euclidean coordinates, condition \varepsilonqref{thm:inclusion} implies that \begin{equation} \label{thm:inclusion 2} \left( \begin{matrix} \etaabla f_n^1 (x) \\ \etaabla f_n^2 (x) \\ \varepsilonnd{matrix} \right) = \left( \begin{matrix} E_n (x) \\ \rot \sigma_n (x) E_n (x) \\ \varepsilonnd{matrix} \right) + \left( \begin{matrix} a_n (x) \\ b_n (x) \\ \varepsilonnd{matrix} \right) \quad \text{a.e. 
in} \quad \Omegaega \varepsilonnd{equation} with $f_n=(f_n^1,f_n^2)$, $\sigma_n := \sigma_1\chi_{\{\etaabla f_n \in T_1\}} +\sigma_2\chi_{\{\etaabla f_n \in T_2\}}$, $E_n \colon \Omegaega \to \mathbb{R}^2$, $\rot=\left( \begin{matrix} 0 & -1 \\ 1 & 0 \\ \varepsilonnd{matrix} \right)$ and \begin{equation} \label{coefficienti a zero} a_n , b_n \to 0 \qquad \text{in} \qquad L^{\infty}(\Omegaega;\mathbb{R}^2) \,. \varepsilonnd{equation} The boundary condition $f_n = J x$ reads $f^1_n = x_1$ and $f_n^2=-x_2$. We set $u_n := f_n^1 + v_n$, where $v_n \in H^1_0 (\Omegaega,\mathbb{R})$ is the unique solution to \[ \Div (\sigma_n \etaabla v) = - \Div (\sigma_n a_n - \rot^T b_n) \,. \] Notice that $v_n$ is uniformly bounded in $H^1$ by \varepsilonqref{coefficienti a zero}. Since \varepsilonqref{thm:inclusion 2} holds, one readily checks that $\Div (\sigma_n \etaabla u_n)= \Div (\rot^T \etaabla f_n^2)=0$, so that $u_n$ is a solution of \varepsilonqref{pde2}. Finally, the regularity properties \varepsilonqref{thesis1} and \varepsilonqref{thesis2} follow from the definition of $u_n$ and the fact that $v_n \in H_0^1(\Omegaega;\mathbb{R})$ and $f_n^1$ satisfies \varepsilonqref{thm:reg} with $1<p_n<2$. \varepsilonnd{proof} \etaocite{*} \varepsilonnd{document}
\rhogin{document} \title{On Thurston's core entropy algorithm} \rhogin{center} Dedicated to the memory of Tan Lei \varepsilonnd{center} \rhogin{abstract} The core entropy of polynomials, recently introduced by W. Thurston, is a dynamical invariant extending topological entropy for real maps to complex polynomials, thus providing a new tool to study the parameter space of polynomials. The basis is a combinatorial algorithm for computing the core entropy, given by Thurston without supplying a proof. In this paper, we will describe his algorithm and prove its validity. \noindent{\bf Keywords and phrases}: core entropy, Hubbard tree, critical portrait, polynomial, complex dynamics. \noindent{\bf AMS(2010) Subject Classification}: 37B40, 37F10, 37F20. \varepsilonnd{abstract} \section{Introduction} \subsection{Background of the paper} During the last year of his life, William P. Thurston developed a theory of degree-$d$ invariant laminations, a tool that he hoped would lead to what he called a ``qualitative picture of (the dynamics of) degree $d$ polynomials''. Thurston discussed his research on this topic in his seminar at Cornell University and was in the process of writing an article, but he passed away before completing the manuscript. Several people have set out to write supplementary material to the manuscript, based on what they learned from him throughout his seminar and email exchanges with him. The outcome is the manuscript \cite{TG}. The present article is part of the author's Ph.D. thesis. The initial purpose, as suggested by my supervisor Tan Lei, was to fill in the empty section ``Hausdorff dimension and growth rate'' of W. Thurston's manuscript, as part of the supplementary material in \cite{TG}. However, as the research developed, the scope of the work largely exceeded what we had expected.
In the end, we have decided to put a complete treatment of the quadratic case in \cite{TG}, leaving the general case to a series of independent articles. The present article is the first of these, treating the general setting. We will prove that Thurston's ingenious entropy algorithm on critical portraits gives the core entropy of complex polynomials. In forthcoming articles, we will show that there are many different combinatorial models that encode the core entropy. Most models were suggested by W. Thurston. As we have shown (in \cite{TG}) in the quadratic case, there are a dozen such models, including the Hausdorff dimension of various objects in various spaces, and the growth rate of various dynamical systems. \subsection{An introduction to this article} Recall that given a continuous map $f$ acting on a compact set $X$, its \varepsilonmph{topological entropy} $h(X,f)$ is a quantity that measures the complexity growth of the induced dynamical system. It is essentially defined as the growth rate of the number of itineraries under iteration (see \cite{AKM}). The \varepsilonmph{core entropy} of complex polynomials, to be explained below, was introduced by W. Thurston around 2011 in order to develop a ``qualitative picture'' of the parameter space of degree $d$ polynomials. In the quadratic case, a rich variety of results about the parameter space is known; see for example \cite{DH, L}. But in the higher degree case ($d\gammaeq 3$), our overall understanding has remained sketchy and unsatisfying. Let $d\gammaeq2$ be an integer, and $f$ a complex polynomial of degree $d$. A point $c\in\mbox{$\mathbb C$}$ is called a \varepsilonmph{critical point} of $f$ if $f'(c)=0$. The \varepsilonmph{critical set} $ {\rm crit}(f)$ is defined to be $${\rm crit}(f)=\{c\in\mbox{$\mathbb C$}\mid f'(c)=0\},$$ and the \varepsilonmph{postcritical set} ${\rm post}(f)$ is defined to be \rhogin{equation*} {\rm post}(f)=\overline{\{f^n(c) : {c\in{\rm crit}(f)}, {n\gammae 1} \}}.
\varepsilonnd{equation*} In many cases, there exists an $f$-forward invariant, finite topological tree that contains ${\rm post}(f)$, called the {\it Hubbard tree} of $f$, which captures all essential dynamics of the polynomial. In particular, this tree exists if $f$ is \varepsilonmph{postcritically-finite}, i.e., $\#{\rm post}(f)<\infty$ (see Section {\rm Re}f{hubbard-tree}). \rhogin{definition}[core entropy] The \varepsilonmph{core entropy} of $f$, denoted by $h(f)$, is defined as the topological entropy of the restriction of $f$ to its Hubbard tree, when the tree exists. \varepsilonnd{definition} From this definition, we see that the core entropy extends the topological entropy of real polynomials to complex polynomials, where the invariant real segment in the real polynomial case is replaced by an invariant tree, known as the Hubbard tree. Hence, the core entropy yields a new way to study the parameter space of complex polynomials. A fundamental tool in this direction is an effective algorithm allowing for the computation of the core entropy. Let $f$ be a postcritically-finite polynomial of degree $d\gammaeq2$ and let $H_f$ denote its Hubbard tree. The simplest way to compute the entropy of $f:H_f\to H_f$ is to write the {\it incidence matrix} for the {\it Markov map} $f$ acting on $H_f$, and take the logarithm of its leading eigenvalue (see Section {\rm Re}f{entropy-result}). However, this method requires knowledge of the topology of $H_f$, and is thus difficult to realize on a computer. To avoid knowing the topology of the Hubbard tree and the action of $f$ on it, W. Thurston developed a purely combinatorial algorithm (without supplying a proof) using the combinatorial data of {\it critical portraits}. The concept of critical portraits and this entropy algorithm will be exhaustively explained in Sections {\rm Re}f{section-critical-portrait} and {\rm Re}f{algorithm2} respectively.
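To make the incidence-matrix computation concrete, here is a minimal numeric sketch of ours (not Thurston's critical-portrait algorithm) for the Chebyshev polynomial $f(z)=z^2-2$, whose Hubbard tree is the segment $[-2,2]$; each of the two edges $[-2,0]$ and $[0,2]$ maps once over both, so the core entropy is $\lambdaog 2$:

```python
import numpy as np

# Incidence matrix for f(z) = z^2 - 2 on its Hubbard tree [-2, 2],
# with edges e1 = [-2, 0] and e2 = [0, 2]: f maps each edge once over both.
D = np.array([[1.0, 1.0],
              [1.0, 1.0]])

rho = max(abs(np.linalg.eigvals(D)))  # Perron-Frobenius leading eigenvalue
core_entropy = np.log(rho)            # = log 2 for the Chebyshev map

assert abs(rho - 2.0) < 1e-12
assert abs(core_entropy - np.log(2.0)) < 1e-12
```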
Roughly speaking, a postcritically-finite polynomial $f$ induces a finite collection of finite subsets of the unit circle \[{\mathbb T}h:=\mathbf{{\mathbb T}heta}_f=\{{\mathbb T}heta_1,\lambdadots, {\mathbb T}heta_s\},\] called a \varepsilonmph{weak critical marking} of $f$ (see Definition {\rm Re}f{weak-critical-marking}). The algorithm takes $\mathbf{{\mathbb T}heta}$ as input, constructs a non-negative matrix $A$ (bypassing $f$ and $H_f$), and provides as output its Perron-Frobenius leading eigenvalue $ \rho(\mathbf{{\mathbb T}heta})$. Using this algorithm, Thurston drew a picture of the core entropy of quadratic polynomials as a function of the external angle $\theta$ (see Figure {\rm Re}f{Thurston-plot}). \rhogin{figure}[htbp]\centering \includegraphics[width=11cm]{Thurston-entropy-plot.png} \caption{The core entropy of quadratic polynomials, drawn by Thurston}\lambdaambdabel{Thurston-plot} \varepsilonnd{figure} The validity of Thurston's algorithm in the quadratic case was proven by Y. Gao-L. Tan (\cite{TG}) and W. Jung (\cite{Jung}). Based on this algorithm, Tiozzo proved the {\bf continuity conjecture} of Thurston \cite{Ti} (Dudko and Schleicher \cite{DS} give an alternative proof of the conjecture without using this algorithm): Given $\theta\in \mathbb Q/\mbox{$\mathbb Z$}$, the parameter ray of angle $\theta$ determines a unique postcritically-finite quadratic polynomial $f_{c_\theta}=z^2+c_\theta$. Let $h(\theta)$ denote the core entropy of $f_{c_\theta}$. {\noindent\bf Theorem} (Thurston, Dudko-Schleicher, Tiozzo).
{\it The entropy function $h:\mathbb Q/\mbox{$\mathbb Z$}\to\mbox{$\mathbb R$}$ extends to a continuous function $\mbox{$\mathbb R$}/\mbox{$\mathbb Z$}\to \mbox{$\mathbb R$}$.} Analogously, in order to clarify the connectedness locus (the generalization of the Mandelbrot set) for degree $d$ ($d\gammaeq3$) polynomials from the point of view of the core entropy, for example the continuity conjecture in the higher degree case, one should first verify the validity of Thurston's entropy algorithm in the general case. The purpose of this paper is to prove this point. We establish the following main theorem. This is the first step towards Thurston's program. \rhogin{theorem}\lambdaambdabel{Thurston-algorithm} Let $f$ be a postcritically-finite polynomial, and $\mathbf{{\mathbb T}heta}$ a weak critical marking of $f$. Let $\rho({\mathbb T}h)$ be the output of Thurston's entropy algorithm. Then $\lambdaog\rho(\mathbf{{\mathbb T}heta})$ equals the core entropy of $f$, i.e., $\lambdaog\rho(\mathbf{{\mathbb T}heta})=h(H_f,f)$. \varepsilonnd{theorem} The organization of the manuscript is as follows. In Sections 2 and 3, we recall some preliminary definitions and results about topological entropy and Hubbard trees that will be used below. In Section 4, we give a detailed description of the weak critical markings of postcritically-finite polynomials and their induced partitions in the dynamical plane. We then introduce Thurston's entropy algorithm in Section 5, and prove the main theorem (Theorem {\rm Re}f{Thurston-algorithm}) in Section 6. {\noindent \bf Acknowledgement.} I wish to express my sincere appreciation to L. Tan for leading me to this interesting topic, and for offering constant encouragement and very useful suggestions during the writing of the paper. Without her help, the paper would not have appeared. I also would like to thank R. H. Henrik for the comments on a preliminary version and J. S. Zeng for the very useful discussion and suggestions. Some pictures are provided by J. S.
Zeng. The author is partially supported by NSFC grant no. 11501383. \section{Basic results about topological entropy}\lambdaambdabel{entropy-result} We will not use the general definition of the topological entropy in the paper (see \cite{AKM}). Instead, we summarize some basic results about the topological entropy that will be applied below. Let $f:X\to X$ be a continuous map. We denote by $h(X,f)$ the topological entropy of $f$ on $X$. The following three propositions can be found in \cite{Do}. \rhogin{proposition}\lambdaambdabel{Do2} If $X=X_1\cup X_2$, with $X_1$ and $X_2$ compact, $f(X_1)\subset X_1$ and $f(X_2)\subset X_2$, then $h(X,f)=\sup\bigl(h(X_1,f),h(X_2,f)\bigr)$. \varepsilonnd{proposition} \rhogin{proposition}\lambdaambdabel{Do3} Let $Z$ be a closed subset of $X$ such that $f(Z)\subset Z$. Suppose that for any $x\in X$, the distance of $f^n(x)$ to $Z$ tends to $0$, uniformly on any compact set in $X-Z$. Then $h(X,f)=h(Z,f)$. \varepsilonnd{proposition} \rhogin{proposition}\lambdaambdabel{Do4} Assume that $\pi$ is a surjective semi-conjugacy $$\rhogin{array}{rcl}Y &\xirightarrow[]{\ q\ } &Y\\ \pi\Big\deltaownarrow &&\Big\deltaownarrow \pi \\ X & \xirightarrow[]{\ f\ } & X .\varepsilonnd{array}$$ Then $h(X,f)\lambdaeq h(Y,q)$. Furthermore, if $\deltas \sup_{x\in X}\#\pi^{-1}(x)< \infty$ then $ h(X,f)=h(Y,q)$.\varepsilonnd{proposition} In this paper, we mainly use the topological entropy for real maps of dimension one. A (finite topological) \varepsilonmph{graph} $G$ is a compact Hausdorff space which contains a finite non-empty set $V_G$ (the set of vertices), such that every connected component of $G\setminus V_G$ is homeomorphic to an open interval of the real line. Since any graph can be embedded in $\mbox{$\mathbb R$}^3$ (see \cite{Moi}), we will consider each graph endowed with the topology induced by the topology of $\mbox{$\mathbb R$}^3$. \rhogin{definition}[monotone map]\lambdaambdabel{monotone} Let $X,Y$ be topological spaces and $\phi:X\to Y$ be continuous.
Then $\phi$ is said to be \varepsilonmph{monotone} if $\phi^{-1}(y)$ is connected for every $y\in Y$. \varepsilonnd{definition} The following fact will be repeatedly used in the paper. Refer to e.g.\ \cite[A.13]{BM} for a proof. \rhogin{proposition}\lambdaambdabel{tree-map} Let $X$ be a topological space, and $\phi:[0,1]\to X$ a monotone map. Then the image $\phi([0,1])$ is either a point or an \varepsilonmph{arc}, i.e., a homeomorphic image of a closed interval of the real line. \varepsilonnd{proposition} \rhogin{definition}[Markov graph map]\lambdaambdabel{def:markov} Let $G$ be a finite graph with vertex set $V_G$. A continuous map $f:G\to G$ is called \varepsilonmph{Markov} if there is a finite subset $A$ of $G$ containing $V_G$ such that $f(A)\subset V_G$ and $f$ is monotone on each component of $G\setminus A$. \varepsilonnd{definition} Let $f:G\to G$ be a Markov graph map. By the definition, an edge of $G$ is mapped either to a vertex of $G$ or to the union of several edges of $G$. Enumerate the edges of $G$ by $e_i$, $i=1,\cdots,k$. We then obtain an \varepsilonmph{incidence matrix} $D_{(G,f)}=(a_{ij})_{k\times k}$ of $(G,f)$ such that $a_{ij}=\varepsilonll$ if $f(e_i)$ covers $e_j$ precisely $\varepsilonll$ times. Note that choosing different enumerations of the edges gives rise to conjugate incidence matrices, so in particular, the eigenvalues are independent of the choices. Denote by $\rho$ the largest non-negative eigenvalue of $D_{(G,f)}$. By the Perron-Frobenius theorem such an eigenvalue exists and equals the growth rate of $\|D_{(G,f)}^n\|$ for any matrix norm. The following result is classical (see \cite{AM,MS}): \rhogin{proposition}\lambdaambdabel{entropy-formula} The topological entropy $h(G,f)$ is equal to $0$ if $D_{(G,f)}$ is nilpotent, i.e., all eigenvalues of $D_{(G,f)}$ are zero; and equal to $\lambdaog\rho$ otherwise.
\varepsilonnd{proposition} A special and important type of graph is the (topological) \varepsilonmph{tree}, which is a connected graph without cycles. A point $p$ of a tree $T$ is called an \varepsilonmph{endpoint} if $T {\setminus} \{p\}$ is connected, and called a \varepsilonmph{branch point} if $T {\setminus} \{p\}$ has at least $3$ connected components. For any two points $p,q\in T$, there is a unique arc in $T$ joining $p$ and $q$. We denote this arc by $[p,q]_T$. \section{Postcritically-finite polynomials and their Hubbard trees} \subsection{Postcritically-finite polynomials} \lambdaambdabel{hubbard-tree} Let $f$ be a postcritically-finite polynomial, i.e., such that each of its critical points has a finite (and hence periodic or preperiodic) orbit under the action of $f$. By classical results of Fatou, Julia, Douady and Hubbard, the filled Julia set $K_f=\{z\in \mbox{$\mathbb C$}\mid f^n(z)\not\to \infty\}$ is compact, connected, locally connected and locally arc-connected. These properties also hold for the Julia set $J_f:=\partial K_f$. The Fatou set $F_f:=\overlineerline{\mbox{$\mathbb C$}} {\setminus} J_f$ consists of one unbounded component $U(\infty)$ which is equal to the basin of attraction of $\infty$, together with at most countably many bounded components constituting the interior of $K_f$. Each of the sets $K_f, J_f, F_f$ and $U(\infty)$ is fully invariant under $f$; each Fatou component is (pre)periodic (by Sullivan's non-wandering domain theorem, or by the hyperbolicity of the map); and each periodic Fatou component cycle contains at least one critical point of $f$ (including $\infty$).
As a consequence, for $f$ a postcritically-finite polynomial, there is a system of Riemann mappings $$\Big\{\phi_U: \mathbb D\to U\,\Big|\, U \thetaxt{ Fatou component}\Big\}$$ each of which extends to a continuous map on the closure $\overlineerline{\mathbb D}$, so that the following diagram commutes for all $U$: \rhogin{equation*} \rhogin{tikzpicture} \matrix[row sep=0.8cm,column sep=2.4cm] { \node (Gammai) {$ \overlineerline \mathbb D $}; & \node (Gamma) {$\overlineerline \mathbb D$}; \\ \node (S2i) {$\overlineerline U$}; & \node (S2) {$\overlineerline{f(U)}$,}; \\ }; \deltaraw[->] (Gamma) to node[auto=left,cdlabel] {\phi_{f(U)}} (S2); \deltaraw[->] (S2i) to node[auto=right,cdlabel] {f} (S2); \deltaraw[->] (Gammai) to node[auto=left,cdlabel] {\thetaxt{ power map }z^{d_{\tiny\mbox{$U$}}}} (Gamma); \deltaraw[->] (Gammai) to node[auto=right,cdlabel] {\phi_U} (S2i); \varepsilonnd{tikzpicture} \varepsilonnd{equation*} where $d_U$ denotes the degree of $f$ on $U$. The image $\phi_U(0)$ is called the \varepsilonmph{center} of the Fatou component $U$. It is easy to see that any center is mapped to a periodic critical point by some iterate of $f$. On every periodic Fatou component $U$, including $U(\infty)$, the map $\phi_U$ realizes a conjugacy between a power map and the first return map on $U$. The images in $U$ under $\phi_U$ of \thetaxtsc{closed}, \varepsilonmph{resp.} \thetaxtsc{open}, radial lines in $\overline{\mathbb D}$ are, by definition, \varepsilonmph{internal rays} of $U$ if $U$ is bounded, \varepsilonmph{resp.} \varepsilonmph{external rays} if $U=U(\infty)$. Since a power map sends a radial line to a radial line, the polynomial $f$ sends an internal/external ray to an internal/external ray. If $U$ is a bounded Fatou component, then $\phi_U:\overlineerline \mathbb D\to \overlineerline U$ is a homeomorphism and thus every boundary point of $U$ receives exactly one internal ray from $U$.
This is in general not true for $U(\infty)$, where several external rays may land at a common boundary point. For any $\theta\in \mbox{$\mathbb R$}/\mbox{$\mathbb Z$}$, we use ${\mathcal R}_f(\theta)$ or simply ${\mathcal R}(\theta)$ to denote the image by $\phi_{U(\infty)}$ of the radial ray $\{re^{2\pi i \theta}\mid 0<r<1\}$ and will call it the \varepsilonmph{external ray of angle $\theta$}. We also use $\gamma(\theta)=\phi_{U(\infty)}(e^{2\pi i \theta})$ to denote the \varepsilonmph{landing point} of the ray ${\mathcal R}(\theta)$. If ${\mathcal R}(\theta)$ lands at the boundary of a Fatou component $U$, then there is a unique internal ray of $U$ that joins the center of $U$ and the landing point of ${\mathcal R}(\theta)$. We denote this internal ray by $r_{U}(\theta)$, and call the ray \rhogin{equation}\lambdaambdabel{extended-ray} {\mathcal E}_U(\theta):=r_U(\theta)\cup {\mathcal R}(\theta) \varepsilonnd{equation} the \varepsilonmph{extended ray of angle $\theta$ at $U$}. \rhogin{definition}[supporting rays]\lambdaambdabel{support-ray} We say that an external ray ${\mathcal R}(\theta)$ \varepsilonmph{supports} a bounded Fatou component $U$ if \rhogin{enumerate} \item the ray lands at a boundary point $q$ of $U$, and \item there is a sector based at $q$ delimited by ${\mathcal R}(\theta)$ and the internal ray of $U$ landing at $q$ such that the sector does not contain other external rays landing at $q$. \varepsilonnd{enumerate} \varepsilonnd{definition} \subsection{The Hubbard trees of postcritically-finite polynomials} The material of this part comes from \cite[Chapter 2]{DH} and \cite[Chapter I]{Poi2}. Let $f$ be a postcritically-finite polynomial. Then any pair of points in the closure of a bounded Fatou component can be joined in a unique way by a Jordan arc consisting of (at most two) segments of internal rays. We call such arcs \varepsilonmph{regulated} (following Douady and Hubbard).
Since $K_f$ is arc-connected, given two points $z_1, z_2\in K_f$, there is an arc $\gammaamma: [0,1]\to K_f$ such that $\gammaamma(0)=z_1$ and $\gammaamma(1)=z_2$. In general, we will not distinguish between the map $\gamma$ and its image. It is proved in \cite{DH} that such arcs can be chosen in a unique way so that the intersection with the closure of any Fatou component is regulated. We still call such arcs regulated and denote them by $[z_1,z_2]$. We say that a subset $X\subset K_f$ is \varepsilonmph{allowably connected} if for every $z_1,z_2\in X$ we have $[z_1,z_2]\subset X$. We define the \varepsilonmph{regulated hull} of a subset $X$ of $ K_f$ to be the minimal closed allowably connected subset of $K_f$ containing $X$. \rhogin{proposition}[{\cite[Proposition 2.7]{DH}}] Let $f$ be a postcritically-finite polynomial. For finitely many points $z_1,\lambdadots,z_n\in K_f$, their regulated hull is a finite tree with endpoints in $\{z_1,\lambdadots,z_n\}$. \varepsilonnd{proposition} The \varepsilonmph{Hubbard tree} of $f$, denoted by $H_f$, is defined to be the regulated hull in $K_f$ of the finite set ${\rm post}(f)$. Its vertex set $V_{H_f}$ is defined to be the union of ${\rm post}(f)$, ${\rm crit}(f)\cap H_f$ and the branch points of $H_f$. The following is a well-known result (see \cite[Section I.1]{Poi2}). \rhogin{proposition}\lambdaambdabel{Hubbard-tree} Any postcritically-finite polynomial $f$ maps each edge of $H_f$ homeomorphically onto a union of edges of $H_f$. Consequently, $f:H_f\to H_f$ is a Markov map. \varepsilonnd{proposition} Using Proposition {\rm Re}f{entropy-formula}, we immediately get the following result. \rhogin{corollary} The topological entropy of $f$ acting on $H_f$ equals the logarithm of the spectral radius of the incidence matrix $D_{(H_f,f)}$, i.e., $h(H_f,f)=\lambdaog\rho(D_{(H_f,f)})$.
\varepsilonnd{corollary} \section{Weak critical markings and the induced partitions} \subsection{Weak critical markings of postcritically-finite polynomials}\lambdaambdabel{section-critical-portrait} For each postcritically-finite polynomial, we will define in this section a collection of combinatorial data from the rays landing at its critical points and on the critical Fatou components, called a \varepsilonmph{weak critical marking} of the polynomial. This concept generalizes the well-known notion of critical markings of postcritically-finite polynomials (see \cite[Section I.2.1]{Poi1}), first introduced in \cite{BFH} to classify strictly preperiodic polynomials as dynamical systems, and then extended by Poirier \cite{Poi1} to the general case (including periodic critical points). Let $f$ be a postcritically-finite polynomial of degree $d$. We first define ${\mathbb T}heta(U)$ as follows for each \varepsilonmph{critical Fatou component} $U$, i.e., a Fatou component containing a critical point. Denote $\deltaeltalta_U=\thetaxt{deg}(f|_U)$. \rhogin{itemize} \item In the case that $U$ is a periodic Fatou component, let \[U\mapsto f(U)\mapsto\cdots\mapsto f^n(U)=U \] be a critical Fatou cycle of period $n$. We will construct the associated set ${\mathbb T}heta(U')$ for every critical Fatou component $U'$ in this cycle simultaneously. Let $z\in\partial U$ be a \varepsilonmph{root} of $U$, i.e., a periodic point whose period divides $n$. Such a $z$ must exist because one can choose it as the landing point of an internal ray of $U$ fixed by $f^n$. Note that this choice naturally determines a root $f^k(z)$ for each Fatou component $f^k(U)$ with $k\in\{0,\lambdadots,n-1\}$, which is called the \varepsilonmph{preferred root} of $f^k(U)$. Let $U'$ be a critical Fatou component in the cycle and $z'$ its preferred root. Consider a supporting ray ${\mathcal R}(\theta)$ for this component $U'$ at $z'$.
We define ${\mathbb T}heta(U',z',\theta)$ to be the set of arguments of the $\deltaeltalta_{U'}$ supporting rays for the component $U'$ that are inverse images of $f({\mathcal R}(\theta))$. There are finitely many such sets ${\mathbb T}heta(U',z',\theta)$, according to the different choices of the roots $z$ of $U$ and the arguments $\theta$. We pick one of them and simply denote it by ${\mathbb T}heta(U')$. \item In the case that $U$ is a strictly preperiodic Fatou component, let $n$ be the minimal integer such that $f^n(U)$ is a critical Fatou component. Let $z\in \partial U$ be a point which is mapped by $f^n$ to the point $\gamma(\varepsilonta)$, the landing point of ${\mathcal R}(\varepsilonta)$, with $\varepsilonta\in {\mathbb T}heta(f^n(U))$. Consider a supporting ray ${\mathcal R}(\theta)$ for the component $U$ at $z$, and define ${\mathbb T}heta(U,z,\theta)$ to be the set of arguments of the $\deltaeltalta_{U}$ supporting rays for $U$ that are inverse images of $f({\mathcal R}(\theta))$. There are finitely many such sets ${\mathbb T}heta(U,z,\theta)$, according to the different choices of such $z\in \partial U$ and the arguments $\theta$. We pick one of them and simply denote it by ${\mathbb T}heta(U)$. \varepsilonnd{itemize} \rhogin{definition}\lambdaambdabel{weak-critical-marking} Let $f$ be a postcritically-finite polynomial, with $U_1,\lambdadots,U_n$ the pairwise distinct critical Fatou components.
A finite collection of finite subsets of the unit circle \rhogin{equation}\lambdaambdabel{eq:weak-critical-marking} \mathbf{{\mathbb T}heta}=\mathbf{{\mathbb T}heta}_f:=\{{\mathbb T}heta_1(c_1),\lambdadots,{\mathbb T}heta_m(c_m);{\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)\} \varepsilonnd{equation} is called a \varepsilonmph{weak critical marking} of $f$ if \rhogin{enumerate} \item each ${\mathbb T}heta(U_k)$ is defined as above for $k\in\{1,\lambdadots,n\}$; \item the set $\{c_1,\lambdadots, c_m\}$ (the points $c_j$ are not necessarily pairwise distinct) equals the set of critical points of $f$ in $J_f$; \item each ${\mathbb T}heta_j(c_j)$ consists of at least two angles such that the external rays with these angles land at $c_j$ and are mapped by $f$ to a single ray; \item the convex hulls in the closed unit disk of ${\mathbb T}heta_1(c_1),\lambdadots,{\mathbb T}heta_m(c_m)$ are pairwise disjoint; \item for each critical point $c\in J_f$, \[{\rm deg}(f|_c)-1=\sum_{c_j=c}\big(\#{\mathbb T}heta_j(c_j)-1\big). \] \varepsilonnd{enumerate} \varepsilonnd{definition} \rhogin{remark} \rhogin{enumerate} \item Any postcritically-finite polynomial $f$ has a weak critical marking. For example, let $c_1,\lambdadots,c_m$ be the pairwise distinct critical points of $f$ in $J_f$, and $U_1,\lambdadots,U_n$ the pairwise distinct critical Fatou components. We first construct ${\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)$ as above. Pick a collection of angles $\{\theta_1,\lambdadots,\theta_m\}$ such that the ray ${\mathcal R}(\theta_j)$ lands at $f(c_j)$ for each $j\in\{1,\lambdadots,m\}$. We then define each ${\mathbb T}heta_j(c_j)$ to be the set of arguments of the rays in $f^{-1}({\mathcal R}(\theta_j))$ that land at $c_j$. It is easy to check that the sets ${\mathbb T}heta_1(c_1),\lambdadots,{\mathbb T}heta_m(c_m)$ so defined satisfy conditions (2)--(5) in Definition {\rm Re}f{weak-critical-marking}.
Thus, together with the chosen ${\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)$, one gets a weak critical marking $${\mathbb T}h=\{{\mathbb T}heta_1(c_1),\lambdadots,{\mathbb T}heta_m(c_m);{\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)\}.$$ Note that the weak critical markings of a postcritically-finite polynomial are not unique, but there are only finitely many of them. \item Let ${\mathbb T}h$ be a weak critical marking of a postcritically-finite polynomial $f$ of the form \varepsilonqref{eq:weak-critical-marking}. When the points $c_1,\lambdadots,c_m$ are pairwise distinct, the weak critical marking ${\mathbb T}h$ is further called a \varepsilonmph{critical marking} of $f$. In this case, the cardinality of each ${\mathbb T}heta_j(c_j)$ equals ${\rm deg}(f|_{c_j})$. We remark that our definition of critical markings is less restrictive than that of Poirier in \cite{Poi1}. \varepsilonnd{enumerate} \varepsilonnd{remark} As an example, consider the postcritically-finite polynomial $f_c(z)=z^3+c$ with $c\approx 0.22036+1.18612 i$. The critical value $c$ receives two rays with arguments $11/72$ and $17/72$. Then $${\mathbb T}h:=\lambdaambdarge\{\ {\mathbb T}heta_1(0):=\lambdaeft\{11/216,83/216\right\},{\mathbb T}heta_2(0):=\lambdaeft\{89/216,161/216\right\}\ \lambdaambdarge\}$$ is a weak critical marking, but not a critical marking, of $f_c$, and \[{\mathbb T}h:=\lambdaambdarge\{\ {\mathbb T}heta_1(0):=\{11/216,83/216,155/216\}\ \lambdaambdarge\}\] is a critical marking of $f_c$ (see Figure {\rm Re}f{portrait}).
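The two markings displayed above can be checked against Definition {\rm Re}f{weak-critical-marking} by exact rational arithmetic: under the angle-tripling map $\theta\mapsto 3\theta \pmod 1$, each set ${\mathbb T}heta_j(0)$ must be mapped to a single angle (the argument of a ray landing at the critical value $c$), and the total criticality must be $d-1=2$. The following Python sketch is only an illustration of this bookkeeping:

```python
from fractions import Fraction

d = 3

def tau(t):
    """Angle map t -> d*t on R/Z."""
    return (d * t) % 1

# the weak critical marking and the critical marking quoted above
weak = [[Fraction(11, 216), Fraction(83, 216)],
        [Fraction(89, 216), Fraction(161, 216)]]
crit = [[Fraction(11, 216), Fraction(83, 216), Fraction(155, 216)]]

for marking in (weak, crit):
    # each set is mapped by tau to a single angle (condition (3))
    assert all(len({tau(t) for t in S}) == 1 for S in marking)
    # total criticality at the critical point 0 is deg(f|_0) - 1 = 2
    assert sum(len(S) - 1 for S in marking) == d - 1

# the image angles are exactly the arguments of the rays landing at c
assert {tau(t) for S in weak for t in S} == {Fraction(11, 72), Fraction(17, 72)}
```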
\rhogin{figure} \rhogin{tikzpicture} \node at (0,0) {\includegraphics[width=6.5cm]{portrait.jpg}}; \node at (2.25,3){\footnotesize{$\frac{11}{72}$}}; \node at (0.25,3){\footnotesize{$\frac{17}{72}$}}; \node at (3.5,1.25){\footnotesize{$\frac{17}{216}$}}; \node at (3.5,0.25){\footnotesize{$\frac{11}{216}$}}; \node at (-3.5,2.5){\footnotesize{$\frac{83}{216}$}}; \node at(-3.5,1.5){\footnotesize{$\frac{89}{216}$}}; \node at(-1,-2.5){\footnotesize{$\frac{155}{216}$}}; \node at(0.5,-2.5){\footnotesize{$\frac{161}{216}$}}; \varepsilonnd{tikzpicture} \caption{The Julia set of $f_c(z)=z^3+0.22036+1.18612 i$.}\lambdaambdabel{portrait} \varepsilonnd{figure} Set ${\mathbb T}:=\mbox{$\mathbb R$}/\mbox{$\mathbb Z$}$ and let $\tau:{\mathbb T}\to {\mathbb T}$ be the map defined by $\tau(\theta)=d\theta ({\rm mod}~\mbox{$\mathbb Z$})$. In the following, we will use the notation listed in the table below: \rhogin{center} \rhogin{tabular}{|c|c|c|} \hline &&\\[-3pt] \smallsetminusall{$\mathbf{{\mathbb T}heta}_{\mathcal F}=\{{\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)\}$} & \smallsetminusall{$\thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal F})=\cup_{k=1}^{n}{\mathbb T}heta(U_k)$} & \smallsetminusall{$\thetaxt{post}(\mathbf{{\mathbb T}heta}_{\mathcal F})=\cup_{i\gammaeq1}\tau^i(\thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal F}))$}\\ &&\\[-3pt] \hline &&\\[-3pt] \smallsetminusall{$\mathbf{{\mathbb T}heta}_{\mathcal J}=\{{\mathbb T}heta_1(c_1),\lambdadots,{\mathbb T}heta_m(c_m)\}$} & \smallsetminusall{$\thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal J})=\cup_{j=1}^{m}{\mathbb T}heta_j(c_j)$} &\smallsetminusall{$\thetaxt{post}(\mathbf{{\mathbb T}heta}_{\mathcal J})=\cup_{i\gammaeq1}\tau^i(\thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal J}))$}\\&&\\[-3pt]\hline &&\\[-3pt] \smallsetminusall{$\mathbf{{\mathbb T}heta}=\mathbf{{\mathbb T}heta}_{\mathcal F}\cup \mathbf{{\mathbb T}heta}_{\mathcal J}$}&\smallsetminusall{$\thetaxt{crit}(\mathbf{{\mathbb T}heta})=\thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal F})\cup \thetaxt{crit}(\mathbf{{\mathbb T}heta}_{\mathcal J})$}& \smallsetminusall{$\thetaxt{post}(\mathbf{{\mathbb T}heta})=\thetaxt{post}(\mathbf{{\mathbb T}heta}_{\mathcal F})\cup \thetaxt{post}(\mathbf{{\mathbb T}heta}_{\mathcal J})$}\\&&\\[-3pt]\hline \varepsilonnd{tabular} \varepsilonnd{center} By the construction, we immediately get the basic properties of ${\mathbb T}h$ (see also \cite[Section I.3]{Poi1}). \rhogin{proposition}\lambdaambdabel{define-portrait} Enumerate the elements of ${\mathbb T}h$ as ${\mathbb T}heta_1,\lambdadots,{\mathbb T}heta_{m+n}$. Then: \rhogin{enumerate} \item each $\tau({\mathbb T}heta_i),i\in\{1,\lambdadots,m+n\}$, is a singleton; \item for any $i\not=j\in\{1,\lambdadots,m+n\}$, the convex hulls $hull({\mathbb T}heta_i)$ and $hull({\mathbb T}heta_j)$ of ${\mathbb T}heta_i,{\mathbb T}heta_j$ in the closed unit disk intersect in at most one point of ${\mathbb T}$; \item each $\#{\mathbb T}heta_i\gammaeq 2$, and $\sum_{i=1}^{m+n}(\#{\mathbb T}heta_i-1)=d-1$. \varepsilonnd{enumerate} \varepsilonnd{proposition} In the construction of ${\mathbb T}heta(U)$, we obtain some internal rays of $U$ associated with ${\mathbb T}heta(U)$. These internal rays will play an important role in the following discussion. Recall that $\gamma(\theta)$ denotes the landing point of ${\mathcal R}(\theta)$, and that $r_U(\theta)$ denotes the internal ray of $U$ landing at $\gamma(\theta)$ if $\gamma(\theta)\in\partial U$. \rhogin{definition}[critical/postcritical internal rays]\lambdaambdabel{critical-internal-ray} An internal ray is called a \varepsilonmph{critical internal ray} (relative to ${\mathbb T}h$) if it can be represented as $r_U(\theta)$ for a critical Fatou component $U$ and $\theta\in{\mathbb T}heta(U)$; and called a \varepsilonmph{postcritical internal ray} (relative to ${\mathbb T}h$) if it is an iterated image by $f$ of a critical internal ray.
\varepsilonnd{definition} By the construction of the weak critical marking ${\mathbb T}h$, we have the following result. \rhogin{lemma}\lambdaambdabel{basic-property} \rhogin{enumerate} \item For each $\theta\in{\rm crit}({\mathbb T}h_{\mathcal F})$ (resp.\ $\theta\in{\rm post}({\mathbb T}h_{\mathcal F})$), there is a critical (resp.\ postcritical) Fatou component $U$ such that $r_U(\theta)$ is a critical (resp.\ postcritical) internal ray. \item If $r_U(\theta)$ is a critical/postcritical internal ray, then $f(r_U(\theta))=r_{f(U)}(\tau(\theta))$ is a postcritical internal ray. \item Each critical/postcritical internal ray is eventually periodic under iteration of $f$. \item The closure of any critical Fatou component $U$ contains no critical/postcritical internal rays other than the rays $r_U(\theta)$ with $\theta\in{\mathbb T}heta(U)$. \item The closure of each Fatou component contains at most one periodic critical/postcritical internal ray. \varepsilonnd{enumerate} \varepsilonnd{lemma} \subsection{The critical portrait}\lambdaambdabel{critical} The combinatorial information given in Proposition {\rm Re}f{define-portrait} for weak critical markings of postcritically-finite polynomials may be presented in an abstract way, thus giving rise to the concept of \varepsilonmph{critical portrait}. Denote by $\mathbb D$ the unit disk. By abuse of notation, we will identify any point of $\partial\mathbb D$ with its argument in ${\mathbb T}$; all angles on the circle are considered mod $1$. A \varepsilonmph{leaf} is either one point in ${\mathbb T}$ (trivial) or the closure in $\overlineerline \mathbb D$ of a hyperbolic chord (non-trivial). For $x,y\in {\mathbb T}$, we use $\overline{xy}$ to denote the leaf joining $e^{2\pi i x}$ and $e^{2\pi i y}$. For any set $S\subset {\mathbb T}$, we denote by $hull(S)$ the hyperbolic convex hull in $\overline{\mathbb D}$ of $S$.
\rhogin{definition}[critical portrait]\lambdaambdabel{formal-critical-portrati} A degree $d$ \varepsilonmph{critical portrait} is a finite collection of finite subsets of the unit circle $ \mathbf{{\mathbb T}heta}=\{ {\mathbb T}heta_1, \cdots, {\mathbb T}heta_s\}$ such that properties (1), (2) and (3) in Proposition {\rm Re}f{define-portrait} simultaneously hold. \varepsilonnd{definition} \rhogin{figure} \rhogin{center} \includegraphics[scale=0.32]{major5.pdf} \quad \includegraphics[scale=0.45]{Degree5majors-b.pdf} \caption{Two critical portraits of degree $5$}\lambdaambdabel{critical-portrait-5} \varepsilonnd{center} \varepsilonnd{figure} Thus, any weak critical marking of a postcritically-finite polynomial is a critical portrait. By identifying each ${\mathbb T}heta_i\in {\mathbb T}h$ with its convex hull, a critical portrait can equivalently be considered as a collection of pairwise disjoint leaves and polygons in $\mathbb D$, the vertices of each of which are identified under $z\mapsto z^d$, with total criticality $d-1$ (see Figure {\rm Re}f{critical-portrait-5}). \rhogin{lemma}\lambdaambdabel{equivalent} If $\mathbf{{\mathbb T}heta}=\{{\mathbb T}heta_1,\lambdadots,{\mathbb T}heta_s\}$ is a critical portrait of degree $d$, then $$\mathbb D\setminus\big(\cup_{i=1}^s hull({\mathbb T}heta_i)\big)$$ has $d$ connected components, each of which meets the unit circle in arcs of total length $1/d$. \varepsilonnd{lemma} \rhogin{proof} The set $hull({\mathbb T}heta_1)$ chops $\mathbb D$ into $\#{\mathbb T}heta_1$ regions. One of them contains $hull({\mathbb T}heta_2)$, and is chopped by $hull({\mathbb T}heta_2)$ into $\#{\mathbb T}heta_2$ regions. As a result, $hull({\mathbb T}heta_1)$ and $hull({\mathbb T}heta_2)$ together chop $\mathbb D$ into $\#{\mathbb T}heta_1 + (\#{\mathbb T}heta_2 - 1)$ regions.
If we chop along $hull({\mathbb T}heta_3), \lambdadots, hull({\mathbb T}heta_s)$ consecutively, at the end we get that the union of the convex hulls of the ${\mathbb T}heta_i$'s chops $\mathbb D$ into $\#{\mathbb T}heta_1 + \sum_{i = 2}^s (\#{\mathbb T}heta_i - 1)$ regions, and condition (3) says this number is exactly $d$. On the other hand, each of these $d$ regions touches the boundary circle in arcs whose total length is a multiple of $1/d$, since each ${\mathbb T}heta_i$ decomposes ${\mathbb T}$ into arcs whose lengths are multiples of $1/d$ (all elements of ${\mathbb T}heta_i$ have the same image under $\tau$). Now we have $d$ regions, each meeting the circle in arcs of total length a positive multiple of $1/d$; since these total lengths sum up to $1$, each must be exactly $1/d$. \varepsilonnd{proof} Let $\mathbf{{\mathbb T}heta}=\{{\mathbb T}heta_1,\lambdadots,{\mathbb T}heta_s\}$ be a critical portrait of degree $d\gammaeq2$. \rhogin{definition}[unlinked equivalence on ${\mathbb T}$] Two points $x,y\in{\mathbb T}\setminus\cup_{i=1}^s{\mathbb T}heta_i$ are called \varepsilonmph{unlinked on ${\mathbb T}$} relative to $\mathbf{{\mathbb T}heta}$ if they belong to a common component of ${\mathbb T}\setminus {\mathbb T}heta_i$ for all ${\mathbb T}heta_i\in \mathbf{{\mathbb T}heta}$. \varepsilonnd{definition} This unlinked relation is easily checked to be an equivalence relation. Together with Lemma {\rm Re}f{equivalent}, we immediately deduce the following result. \rhogin{proposition}\lambdaambdabel{common} \rhogin{enumerate} \item Two points $x,y\in{\mathbb T}\setminus\cup_{i=1}^s{\mathbb T}heta_i$ are unlinked on ${\mathbb T}$ if and only if they belong to a common complementary component of $ \overline{\mathbb D}\setminus\big(\cup_{i=1}^s hull( {\mathbb T}heta_i)\big)$. \item There are $d$ unlinked equivalence classes on ${\mathbb T}$, denoted by $I_1,\lambdadots, I_d$.
For each $I_k$ and any distinct angles $x,y\in \overline{I_k}$, $\tau(x)=\tau(y)$ if and only if there exist ${\mathbb T}heta_{i_1},\lambdadots,{\mathbb T}heta_{i_t}\in{\mathbb T}h$ with $t\gammaeq1$ such that $x\in{\mathbb T}heta_{i_1},y\in {\mathbb T}heta_{i_t}$ and ${\mathbb T}heta_{i_j}\cap{\mathbb T}heta_{i_{j+1}}\not=\varepsilonmptyset$ for all $j=1,\lambdadots,t-1$. \varepsilonnd{enumerate} \varepsilonnd{proposition} \subsection{Partitions in the dynamical plane induced by weak critical markings}\lambdaambdabel{partition-in-f} In this part, we fix a weak critical marking \[\mathbf{{\mathbb T}heta}=\{{\mathbb T}heta_1(c_1),\lambdadots, {\mathbb T}heta_m(c_m);\ {\mathbb T}heta(U_1),\lambdadots,{\mathbb T}heta(U_n)\}\triangleq\{{\mathbb T}heta_1,\lambdadots,{\mathbb T}heta_{m+n}\}\] of a postcritically-finite polynomial $f$ of degree $d$. As shown in the last section, it induces a partition $I_1,\lambdadots, I_d$ of the unit circle. We will introduce in this section a corresponding partition of the dynamical plane of $f$; see also \cite[Section II.2]{Poi1} and \cite[Section 5]{Z}.
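The circle partition $I_1,\lambdadots,I_d$ can be computed explicitly for a given portrait. The Python sketch below uses the degree-$3$ portrait ${\mathbb T}heta(U_A)=\{1/4,7/12\}$, ${\mathbb T}heta(U_B)=\{3/4,1/12\}$ of Figure {\rm Re}f{partition} (an illustrative choice); it labels each arc of the common refinement by the components of ${\mathbb T}\setminus{\mathbb T}heta_i$ containing it, and confirms Lemma {\rm Re}f{equivalent} and Proposition {\rm Re}f{common}.(2): there are exactly $d$ unlinked classes, each of total length $1/d$:

```python
from fractions import Fraction

d = 3
Thetas = [[Fraction(1, 4), Fraction(7, 12)], [Fraction(3, 4), Fraction(1, 12)]]

def label(x, Theta):
    """Index of the component of T \\ Theta containing the angle x."""
    s = sorted(Theta)
    return sum(1 for t in s if t <= x) % len(s)

# arcs of the common refinement: consecutive cut points, cyclically
cuts = sorted({t for Th in Thetas for t in Th})
classes = {}
for a, b in zip(cuts, cuts[1:] + [cuts[0] + 1]):
    mid = ((a + b) / 2) % 1              # a sample angle inside the arc
    key = tuple(label(mid, Th) for Th in Thetas)
    classes[key] = classes.get(key, Fraction(0)) + (b - a)

# d unlinked equivalence classes, each of total length 1/d
assert len(classes) == d
assert all(length == Fraction(1, d) for length in classes.values())
```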
Firstly, for each $c_j$ with $j\in\{1,\lambdadots,m\}$ and each critical Fatou component $U$ of $f$, we define the sets ${\mathcal R}_j(c_j)$ and ${\mathcal R}(U)$ in the dynamical plane corresponding to ${\mathbb T}heta_j(c_j)$ and ${\mathbb T}heta(U)$ respectively, such that ${\mathcal R}_j(c_j)$ is the union of the external rays with angles in ${\mathbb T}heta_j(c_j)$ and the point $c_j$, i.e., \[{\mathcal R}_j(c_j):=\big(\cup_{\theta\in{\mathbb T}heta_j(c_j)}{\mathcal R}(\theta)\big)\cup\{c_j\},\] and ${\mathcal R}(U)$ is the union of the extended rays at $U$ with angles in ${\mathbb T}heta(U)$, i.e., \[{\mathcal R}(U):=\cup_{\theta\in{\mathbb T}heta(U)}{\mathcal E}_U(\theta).\] According to the construction of ${\mathbb T}heta_j(c_j)$ and ${\mathbb T}heta(U)$ in Section {\rm Re}f{section-critical-portrait}, we get the following properties of ${\mathcal R}_j(c_j)$ and ${\mathcal R}(U)$. \rhogin{lemma}\lambdaambdabel{property-R(c)} \rhogin{enumerate} \item Each of ${\mathcal R}_j(c_j)$ and ${\mathcal R}(U)$ is star-like, with a critical point at its center. Moreover, ${\mathcal R}_j(c_j)\cap K_f=\{c_j\}$ and ${\mathcal R}(U)\cap K_f$ consists of critical internal rays in $U$. \item The intersection of ${\mathcal R}_j(c_j)$ and ${\mathcal R}_k(c_k)$ with $j\not=k\in\{1,\lambdadots,m\}$ is equal to $\{c_j\}$ if $c_j=c_k$, and is empty otherwise. \item If ${\mathcal R}_j(c_j)\cap {\mathcal R}(U)\not=\varepsilonmptyset$, then $c_j\in\partial U$, and the intersection is either $\{c_j\}$ or the union of $\{c_j\}$ and an external ray ${\mathcal R}(\theta)$ landing at $c_j$. The latter case happens if and only if ${\mathbb T}heta_j(c_j)\cap {\mathbb T}heta(U)=\{\theta\}$. \item If ${\mathcal R}(U)\cap {\mathcal R}(U')\not=\varepsilonmptyset$ for distinct critical Fatou components $U,U'$, then the intersection is either a point $\{p\}:=\partial U\cap \partial U'$, or the union of $\{p\}$ and an external ray ${\mathcal R}(\theta)$ landing at $p$.
The latter case happens if and only if ${\mathbb T}heta(U)\cap{\mathbb T}heta(U')=\{\theta\}$. \varepsilonnd{enumerate} \varepsilonnd{lemma} For each ${\mathbb T}heta\in {\mathbb T}h$, we simply denote \rhogin{equation}\lambdaambdabel{eq:1} {\mathcal R}({\mathbb T}heta):=\lambdaeft\{ \rhogin{array}{ll} {\mathcal R}_j(c_j), & \hbox{if ${\mathbb T}heta={\mathbb T}heta_j(c_j)$ with $j\in\{1,\lambdadots,m\}$;} \\[5pt] {\mathcal R}(U), & \hbox{if ${\mathbb T}heta={\mathbb T}heta(U)$ with $U$ a critical Fatou component.} \varepsilonnd{array} \right. \varepsilonnd{equation} \rhogin{definition}[unlinked equivalence in the $f$-plane]\lambdaambdabel{unlink-f} We say that two points $z_1,z_2$ of $\mbox{$\mathbb C$}\setminus \bigcup_{i=1}^{m+n}{\mathcal R}({\mathbb T}heta_i)$ are \varepsilonmph{unlinked equivalent in the $f$-plane} if they belong to a common connected component of $\mbox{$\mathbb C$}\setminus {\mathcal R}({\mathbb T}heta_i)$ for every ${\mathbb T}heta_i\in{\mathbb T}h$. \varepsilonnd{definition} Looking at the circle at infinity we immediately derive that two external rays ${\mathcal R}(\theta)$ and ${\mathcal R}(\varepsilonta)$ are in a common unlinked equivalence class in the $f$-plane if and only if $\theta$ and $\varepsilonta$ are in a common unlinked equivalence class on ${\mathbb T}$. This provides a canonical correspondence between the unlinked equivalence classes on ${\mathbb T}$ and the ones in the $f$-plane. We then denote as $$ V_1,\lambdadots, V_d$$ the unlinked equivalence classes in the $f$-plane with $ V_k$ corresponding to $I_k, k\in\{1,\lambdadots,d\}$ (see Figure {\rm Re}f{partition}). By Lemma {\rm Re}f{property-R(c)} and Definition {\rm Re}f{unlink-f}, the boundary of each $V_k$ consists of internal/external rays, and $\partial V_k\cap K_f$ consists of either critical points in $J_f$ or critical internal rays. 
\rhogin{figure} \rhogin{tikzpicture} \node at (-4,0) {\includegraphics[width=6cm]{partition-0.pdf}}; \node at (3.5,0) {{\rm Re}sizebox{8cm}{4.5cm}{\includegraphics{partition-1.pdf}}}; \node at (3.1,2){\footnotesize{${\mathcal R}(\frac{1}{4})$}}; \node at (7.2,2){\footnotesize{${\mathcal R}(\frac{1}{12})$}}; \node at (0.85,-2){\footnotesize{${\mathcal R}(\frac{7}{12})$}}; \node at (3.85,-2){\footnotesize{${\mathcal R}(\frac{3}{4})$}}; \node at (2,0){\footnotesize{$U_A$}}; \node at(5,0){\footnotesize{$U_B$}}; \node at(0,1.25){\footnotesize{$V_1$}}; \node at(2.5,-2){\footnotesize{$V_2$}}; \node at(4.5,2){\footnotesize{$V_2$}}; \node at(6.5,-1.5){\footnotesize{$V_3$}}; \node at(-5.5,1){\footnotesize{$I_1$}}; \node at (-4,0){\footnotesize{$I_2$}}; \node at (-2.5,-1){\footnotesize{$I_3$}}; \varepsilonnd{tikzpicture} \caption{This example comes from \cite[Section I.4.4]{Poi1}. The cubic polynomial $f(z)=z^3-\frac{3}{2}z$ has a critical portrait ${\mathbb T}h=\big\{{\mathbb T}heta(U_A)=\{\frac{1}{4},\frac{7}{12}\},{\mathbb T}heta(U_B)=\{\frac{3}{4},\frac{1}{12}\}\big\}$. It induces a partition on the unit circle (the left figure), and the corresponding partition in the $f$-plane.}\lambdaambdabel{partition} \varepsilonnd{figure} \rhogin{lemma}\lambdaambdabel{injective-f} Fix any unlinked equivalence class $ V$ in the $f$-plane and denote its closure by $\overline{ V}$. Then: \rhogin{enumerate} \item For two distinct points $z_1,z_2\in \overline{ V}\cap K_f$, the regulated arc $[z_1,z_2]\subset \overline{ V}$. \item The polynomial $f$ maps $ V$ bijectively onto $\mbox{$\mathbb C$}\setminus f(\partial V)$. \item If $f(z)=f(w)$ for $z\not=w\in \overline{ V}\cap K_f$, then the regulated arc $[z,w]$ belongs to $\partial V$. \item For any $z,w\in \overline{ V}\cap K_f$, the restriction of $f$ on $[z,w]\setminus\partial V$ is injective, and its image is contained in a regulated arc of $f([z,w])$. 
\varepsilonnd{enumerate} \varepsilonnd{lemma} \rhogin{proof} \rhogin{enumerate} \item For each $i\in\{1,\lambdadots,m+n\}$, denote by $Y_i$ the connected component of $\mbox{$\mathbb C$}\setminus{\mathcal R}({\mathbb T}heta_i)$ which contains $V$. By Definition {\rm Re}f{unlink-f} we get $V=\cap_{i=1}^{m+n} Y_i$. By Lemma {\rm Re}f{property-R(c)}, we further get $\overline{V}=\cap_{i=1}^{m+n} \overline{Y_i}$. It follows that $z_1,z_2$ belong to each $\overline{Y_i}$. Since the intersection of $\partial Y_i$ and $K_f$ is either a point in $J_f$ or the union of two internal rays of a Fatou component, the regulated arc $[z_1,z_2]$ is contained in every $\overline{Y_i}$. Consequently, we have $[z_1,z_2]\subset \cap_{i=1}^{m+n}\overline{Y_i}=\overline{V}$. \item Note that the boundary of $V$ consists of external/internal rays. It follows that the image $f(\partial V)$ is the union of finitely many external or internal rays. Let $I$ denote the unlinked equivalence class on ${\mathbb T}$ that corresponds to $V$. From the correspondence between $I$ and $V$, and Proposition {\rm Re}f{common}.(2), we get that $f$ maps $V\cap U(\infty)$ bijectively onto the region $U(\infty)\setminus f(\partial V)$. We claim that $f(V)\subset \mbox{$\mathbb C$}\setminus f(\partial V)$. Suppose to the contrary that there is $z\in V\cap K_f$ with $f(z)\in f(\partial V)$. If $z\in U$ for a Fatou component $U$, denote by $c$ the center of $U$. Since $\partial V\cap K_f$ consists of internal rays and $z\in V$, if $z=c$ then the entire $U$ is contained in $V$; otherwise, there is a unique internal ray of $U$ passing through $z$, which is contained in $V\cup\{c\}$. In either case, we can pick an internal ray $r_z$ of $U$ with $r_z\subset V\cup\{c\}$. So, replacing $z$ by the landing point of $r_z$ if necessary, we may assume $z\in J_f$. In this case, there is an external ray contained in $f(\partial V)$ landing at $f(z)$, and its lift based at $z$ is also an external ray. As $z\in V$, all external rays landing at $z$ belong to $V\cap U(\infty)$.
This contradicts the fact that $f(V\cap U(\infty))=U(\infty)\setminus f(\partial V)$, and the claim is proven. Now, we fix a point $w_W\in U(\infty)$ in each component $W$ of $\mbox{$\mathbb C$}\setminus f(\partial V)$ and denote by $z_W$ the unique preimage of $w_W$ in $V\cap U(\infty)$. Given any connected component $W$ of $\mbox{$\mathbb C$}\setminus f(\partial V)$, and any point $w\in W$, we pick an arc $\gamma_w\subset W$ joining $w$ and $w_W$. We assert that $\gamma_w$ has a unique lift in $V$ starting from $z_W$. Firstly, let $\Gamma$ denote the component of $f^{-1}(\gamma_w)$ that contains $z_W$. Then $\Gamma\subset V$ since $z_W\in V$ and $\gamma_w \cap f(\partial V)=\varepsilonmptyset$. Note also that $V$ contains no critical points of $f$, so $f:\Gamma\to\gamma_w$ is a covering, and hence a homeomorphism. We denote by $h(w)$ the endpoint of this lift. Since $W$ is simply-connected, the point $h(w)$ is independent of the choice of $\gamma_w$. We then get a map $h:\mbox{$\mathbb C$}\setminus f(\partial V)\to V$. This map is easily checked to be the inverse of $f: V\to \mbox{$\mathbb C$}\setminus f(\partial V)$. \item Let $z,w$ be two distinct points in $\overline{V}\cap K_f$ with $f(z)=f(w)$. We first claim that both $z$ and $w$ belong to $\partial V$. Otherwise we can find two points $z'$ and $w'$ in $V$ near $z$ and $w$, respectively, such that $f(z')=f(w')$, and this contradicts assertion (2). Assume now that $z,w\in \partial V$ and let $[z,w]$ be the regulated arc joining $z,w$. By point (1), $[z,w]\subset \overline{V}$. We just need to show that no points of $[z,w]$ are in $V$. On the contrary, suppose that $x\in V\cap [z,w]$. Note that the image of $[z,w]$ is a tree, so the fact $f(z)=f(w)$ implies that each point in $f([z,w])$ has at least two preimages in $[z,w]$ by $f$. Then there is a point $y\in[z,w]$ distinct from $x$ such that $f(x)=f(y)$. This contradicts the claim above. \item The former assertion is a direct consequence of (3).
Note that each connected component of $[z,w]\cap \partial V$ is either a single point or a closed segment. We denote by $$A_1:=[z_1,w_1],\ldots,A_m:=[z_m,w_m],\quad m\geq0,$$ the closures of the connected components of $[z,w]\setminus\partial V$, labelled so that $A_i$ separates $A_{i-1}$ and $A_{i+1}$ for each $i\in [2,m-1]$. Here $m=0$ represents the case that $[z,w]\subset \partial V$. Obviously, if $m\geq1$, each $A_i$ is a closed arc. For each $i\in[1,m-1]$, we denote by $B_i$ the connected component of $[z,w]\cap\partial V$ between $A_i$ and $A_{i+1}$ (see the left part of Figure \ref{map}). \begin{figure} \begin{center} \includegraphics[width=15cm]{map.pdf} \caption{The decomposition of $[z,w]$ and its image in (4)}\label{map} \end{center} \end{figure} For each $i\in\{1,\ldots,m\}$, set $E_i:=f(A_i)$. According to (3), we get that $E_i=[f(z_i),f(w_i)]$ and that their interiors are pairwise disjoint. We also set $F_i:=f(B_i)$, $i\in\{1,\ldots,m-1\}$. Clearly, each $F_i$ is a tree or a point containing $f(w_i)$ and $f(z_{i+1})$ (see the right part of Figure \ref{map}). We claim that, except for these two points, the set $F_i$ is disjoint from $\bigcup_{j=1}^{m}E_j$. To see this, let $p\in B_i$ with $f(p)\in E_j$. Since $E_j=f(A_j)$, there exists a point $q\in A_j$ such that $f(q)=f(p)$. By (3) and the discussion above, the point $q$ also belongs to $B_i$. It implies that either $j=i$ and $q=w_i$, or $j=i+1$ and $q=z_{i+1}$. The claim is then proven. Consequently, $P_j:=[f(w_j),f(z_{j+1})]$ is an arc joining $E_j$ and $E_{j+1}$ for each $j\in\{1,\ldots,m-1\}$, with $P_j\subset F_j$ and $P_j\cap \big(\cup_{i=1}^{m}E_i\big)=\{f(w_j),f(z_{j+1})\}$. Therefore $$L:=\big(\cup_{i=1}^mE_i\big)\cup\big(\cup_{j=1}^{m-1}P_j\big)$$ is a regulated arc contained in $f([z,w])$ (see Figure \ref{map}).
\end{enumerate} \end{proof} \section{The core entropy and Thurston's entropy algorithm}\label{section-algorithm} The purpose of this section is to define the notions in the following diagram and provide the background for proving Theorem \ref{Thurston-algorithm}. \[ \begin{tikzpicture} \matrix[row sep=0.8cm,column sep=4cm] { \node (Gammai) {$ \text{critical portrait }\mathbf{\Theta}$}; & \node (Gamma) {$\log\rho(\mathbf{\Theta})$}; \\ \node (S2i) {p.c.f.\ polynomial $f$}; & \node (S2) {$h(H_f, f)$.}; \\ }; \draw[double equal sign distance] (Gamma) to node[auto=left,cdlabel] {\text{ equality by Thm.\ \ref{Thurston-algorithm}} } (S2); \draw[->] (S2i) to node[auto=right,cdlabel] {\text{the core entropy of $f$}} (S2); \draw[->] (Gammai) to node[auto=left,cdlabel] {\text{Thurston's entropy algorithm}} (Gamma); \draw[->] (S2i) to node[auto=right,cdlabel] {} (Gammai); \end{tikzpicture} \] \subsection{Separation sets} In order to introduce Thurston's entropy algorithm, we need to describe the position of two points of $\mathbb{T}$ relative to a critical portrait. Let $\mathbf{\Theta}=\{\Theta_1,\ldots,\Theta_s\}$ be a critical portrait. Given two angles $x,y\in \mathbb{T}$ (not necessarily distinct) and an element $\Theta\in\mathbf{\Theta}$, we say that the leaf $\overline{xy}$ \emph{crosses} $hull(\Theta)$ if $x,y\not\in \Theta$ and $\overline{xy}\cap hull(\Theta)\not=\emptyset$. In this case, the angles $x, y$ are said to be \emph{separated} by $\Theta$.
\begin{definition}[separation set]\label{separation-set} Given an ordered pair of angles $x,y$ in $\mathbb{T}$, we say that its \emph{separation set} (relative to $\mathbf{\Theta}$) is $(k_1, \dots, k_p)$ if the leaf $\overline{xy}$ successively crosses $hull(\Theta_{k_1}),\ldots, hull(\Theta_{k_p})$ from $x$ to $y$, where $\Theta_{k_1},\ldots,\Theta_{k_p}\in \mathbf{\Theta}$, and no other elements of $\mathbf{\Theta}$ separate $x,y$. The angles $x$ and $y$ are called \emph{non-separated} by $\mathbf{\Theta}$ if their separation set is empty. \end{definition} \begin{lemma}\label{separation} Two angles $x,y\in\mathbb{T}$ are non-separated by $\mathbf{\Theta}$ if and only if there exist an unlinked equivalence class $I$ on $\mathbb{T}$ and two elements $\Theta_i,\Theta_j\in\mathbf{\Theta}$ intersecting $\overline{I}$ such that $x,y\in I\cup \Theta_i\cup \Theta_j$. \end{lemma} \begin{proof} It follows directly from the definition of non-separation. \end{proof} By this lemma, it is easy to see that \begin{corollary}\label{non-separated} If $x,y\in \mathbb{T}$ have the separation set $(k_1,\ldots,k_p)\not=\emptyset$, then, setting $\theta_0:=x$, $\theta_{p+1}:=y$ and choosing any angle $\theta_i\in\Theta_{k_i}$ for $i\in\{1,\ldots,p\}$, each pair of angles $\theta_i,\theta_{i+1}$ with $i\in\{0,\ldots,p\}$ is non-separated by $\mathbf{\Theta}$. \end{corollary} \subsection{Thurston's entropy algorithm}\label{algorithm2} A critical portrait is said to be \emph{rational} if each of its elements contains only rational angles. For instance, any weak critical marking of a postcritically-finite polynomial is rational.
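The crossing test underlying Definition \ref{separation-set} is easy to make concrete. The following minimal sketch (ours, not from the text; angles are represented as exact rationals in $[0,1)$) encodes the observation that a leaf $\overline{xy}$ crosses $hull(\Theta)$ exactly when $x,y\notin\Theta$ and $\Theta$ meets both open arcs that $x$ and $y$ cut out of $\mathbb{T}$:

```python
from fractions import Fraction as F

def crosses(x, y, theta):
    """True if the leaf joining angles x, y (in R/Z) crosses hull(theta):
    x, y lie outside theta, and theta meets both open arcs cut out by x, y."""
    if x in theta or y in theta:
        return False
    lo, hi = min(x, y), max(x, y)
    return any(lo < t < hi for t in theta) and any(t < lo or t > hi for t in theta)

def separated_by(x, y, portrait):
    """Indices of the portrait elements separating x from y.
    (Listed in portrait order, which need not coincide with the
    successive crossing order used in the definition.)"""
    return [k for k, theta in enumerate(portrait) if crosses(x, y, theta)]
```

For the rational critical portrait $\{\{0,1/3\},\{7/15,4/5\}\}$ of the example in the next subsection, `separated_by` reports that $1/5$ and $3/5$ are separated by both elements, while $0$ and $2/5$ are non-separated.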
Throughout this subsection, we fix a rational critical portrait \[\mathbf{\Theta}=\{\Theta_1,\ldots,\Theta_s\}.\] Recall that $\text{crit}(\mathbf{\Theta})=\cup_{i=1}^s\Theta_i$ and $\text{post}(\mathbf{\Theta})=\cup_{n\geq1}\tau^n(\text{crit}(\mathbf{\Theta}))$. We define a set ${\mathcal S}={\mathcal S}_{\mathbf{\Theta}}$, consisting of all unordered pairs $\{x,y\}$ with $x\not=y\in \text{post}(\mathbf{\Theta})$ if $\#{\rm post}(\mathbf{\Theta})\geq 2$, and consisting of only $\{x,x\}$ if ${\rm post}(\mathbf{\Theta})=\{x\}$. Note that in the latter case $x$ is fixed by $\tau$. Since $\mathbf{\Theta}$ is rational, the sets $\text{post}(\mathbf{\Theta})$ and ${\mathcal S}$ are finite and non-empty. The following is the procedure of Thurston's entropy algorithm applied to the rational critical portrait $\mathbf{\Theta}$. \begin{enumerate} \item Let $\Sigma$ be the abstract linear space over $\mathbb{R}$ generated by the elements of ${\mathcal S}$. \item Define a linear map $\mathcal{A}: \Sigma\longrightarrow \Sigma$ such that for any basis vector $\{x,y\}\in {\mathcal S}$, \begin{enumerate} \item $\mathcal{A}(\{x,y\})=\{\tau(x),\tau(y)\}$ if $x,y$ are non-separated by $\mathbf{\Theta}$; and \item $\mathcal{A}(\{x,y\})=\sum\limits_{i=0}^{p}\mathcal{A}(\{\theta_i, \theta_{i+1}\})$, where $\theta_0:=x$, $\theta_{p+1}:=y$ and $\theta_i\in\Theta_{k_i}\in\mathbf{\Theta}$, if the ordered pair $(x,y)$ has the separation set $(k_1,\ldots,k_p)\not=\emptyset$. \end{enumerate} \item Denote by $A=A_{\mathbf{\Theta}}$ the matrix of $\mathcal{A}$ in the basis ${\mathcal S}$. It is a non-negative matrix. Compute its leading non-negative eigenvalue $\rho(\mathbf{\Theta})$ (such an eigenvalue exists by the Perron-Frobenius theorem).
\end{enumerate} \begin{definition}\label{alto} {\rm (Thurston's entropy algorithm)} Take a rational critical portrait $\mathbf{\Theta}$ as input and $\log \rho(\mathbf{\Theta})$ as output. (It is easy to see that $A$ is not nilpotent, therefore $\rho(\mathbf{\Theta})\ge 1$.)\end{definition} \begin{remark} \begin{enumerate} \item By Corollary \ref{non-separated}, each ${\mathcal A}(\{\theta_i,\theta_{i+1}\})$ in (2).(b) has been defined in (2).(a). \item By Lemma \ref{separation} and the definition in (2).(a), each ${\mathcal A}(\{\theta_i,\theta_{i+1}\})$, and hence ${\mathcal A}(\{x,y\})$, in (2).(b) is independent of the choice of $\theta_i\in\Theta_{k_i}$. \end{enumerate} \end{remark} \begin{example} Here and in all the examples $d=3$. Let $\mathbf{\Theta}=\{\ \{0,1/3\},\{7/15,4/5\}\ \}$. Then the set ${\rm post}(\mathbf{\Theta})=\{0,1/5,2/5,3/5,4/5\}$ \begin{center} \includegraphics[scale=0.58]{algorithm.pdf} \end{center} gives rise to an abstract linear space $\Sigma$ with basis $${\mathcal S}= \Bigl\{\{0,\frac{1}{5}\},\{0,\frac{2}{5}\},\{0,\frac{3}{5}\},\{0,\frac{4}{5}\},\{\frac{1}{5},\frac{2}{5}\}, \{\frac{1}{5},\frac{3}{5}\},{\rule[-2ex]{0ex}{4.5ex}}\{\frac{1}{5},\frac{4}{5}\},\{\frac{2}{5},\frac{3}{5}\},\{\frac{2}{5},\frac{4}{5}\},\{\frac{3}{5},\frac{4}{5}\}\Bigr\}.$$ The linear map ${\mathcal A}$ acts on the basis vectors as follows: $$ \begin{array}{l} \tiny{\Big\{0,\dfrac{1}{5}\Big\}\rightarrow\Big\{0,\dfrac{3}{5}\Big\},\quad \Big\{0,\dfrac{2}{5}\Big\}\rightarrow\Big\{0,\dfrac{1}{5}\Big\}, \quad \Big\{0,\dfrac{3}{5}\Big\}\rightarrow\Big\{0,\dfrac{2}{5}\Big\}+\Big\{\dfrac{2}{5},\dfrac{4}{5}\Big\}, \quad \Big\{0,\dfrac{4}{5}\Big\}\rightarrow\Big\{0,\dfrac{2}{5}\Big\}},\\[15pt] \tiny{ \Big\{\dfrac{1}{5},\dfrac{2}{5}\Big\}\rightarrow\Big\{0,\dfrac{3}{5}\Big\}+\Big\{0,\dfrac{1}{5}\Big\}, \quad
\Big\{\dfrac{1}{5},\dfrac{3}{5}\Big\}\rightarrow\Big\{0,\dfrac{3}{5}\Big\}+\Big\{0,\dfrac{2}{5}\Big\}+\Big\{\dfrac{2}{5},\dfrac{4}{5}\Big\}, \quad \Big\{\dfrac{1}{5},\dfrac{4}{5}\Big\}\rightarrow\Big\{0,\dfrac{3}{5}\Big\}+\Big\{0,\dfrac{2}{5}\Big\}},\\[15pt] \tiny{\Big\{\dfrac{2}{5},\dfrac{3}{5}\Big\}\rightarrow\Big\{\dfrac{1}{5},\dfrac{2}{5}\Big\}+\Big\{\dfrac{2}{5},\dfrac{4}{5}\Big\},\quad \Big\{\dfrac{2}{5},\dfrac{4}{5}\Big\}\rightarrow\Big\{\dfrac{1}{5},\dfrac{2}{5}\Big\},\quad \Big\{\dfrac{3}{5},\dfrac{4}{5}\Big\}\rightarrow\Big\{\dfrac{2}{5},\dfrac{4}{5}\Big\}}. \end{array}$$ We compute $\rho(\mathbf{\Theta})\approx 1.3953$, so the output of the algorithm is $\log\rho(\mathbf{\Theta})\approx 0.3331$. \end{example} \subsection{Relating Thurston's entropy algorithm to polynomials}\label{T-A} In this part, we give an intuitive feeling for the relation between the output of the algorithm above and the core entropy of postcritically-finite polynomials, leaving the detailed proof to the next section. Let $f$ be a postcritically-finite polynomial of degree $d\geq2$ and $\mathbf{\Theta}$ a weak critical marking of $f$ as constructed in Section \ref{section-critical-portrait}. It is known that the core entropy of $f$, i.e., the topological entropy of $f$ on $H_f$, equals $\log \rho$, where $\rho$ is the leading eigenvalue of the incidence matrix $D_{(H_f,f)}$ associated with the Markov partition of $H_f$ by its edges. If one instead looks at the arcs of $H_f$ between pairs of postcritical points rather than at the edges, the action of $f$ on $H_f$ induces another incidence matrix $A_f$, which turns out to have the same leading eigenvalue $\rho$. The advantage of this approach lies in the fact that each postcritical point of $f$ corresponds to an angle in ${\rm post}(\mathbf{\Theta})$, so that any arc in $H_f$ between postcritical points can be combinatorially represented by an angle pair (not necessarily unique).
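The example above can be checked end to end by machine. The sketch below is ours, not part of the paper: it rebuilds ${\rm post}(\mathbf{\Theta})$, the basis ${\mathcal S}$ and the matrix $A_{\mathbf{\Theta}}$ from the portrait, then estimates the leading eigenvalue by power iteration. It simplifies two points that happen to be harmless here: separating elements are enumerated in portrait order (which, in this example, agrees with the successive crossing order), and one representative angle per element is used, which is legitimate by the remark above.

```python
from fractions import Fraction as F
from itertools import combinations

d = 3
portrait = [frozenset({F(0), F(1, 3)}), frozenset({F(7, 15), F(4, 5)})]

def tau(x):
    # angle d-tupling on the circle R/Z
    return (d * x) % 1

def crosses(x, y, theta):
    # the leaf xy crosses hull(theta): x, y lie outside theta and theta
    # meets both open arcs of the circle cut out by x and y
    if x in theta or y in theta:
        return False
    lo, hi = min(x, y), max(x, y)
    return any(lo < t < hi for t in theta) and any(t < lo or t > hi for t in theta)

# post(Theta): union of the forward tau-orbits of the critical angles
post = set()
orbit = {tau(t) for th in portrait for t in th}
while not orbit <= post:
    post |= orbit
    orbit = {tau(x) for x in orbit}

pairs = [frozenset(p) for p in combinations(sorted(post), 2)]
idx = {p: k for k, p in enumerate(pairs)}

def image(x, y):
    # expand A({x,y}) into basis pairs, following steps (2).(a)-(b);
    # in this example no image ever degenerates to a single angle
    seps = [th for th in portrait if crosses(x, y, th)]
    if not seps:
        return [frozenset({tau(x), tau(y)})]
    pts = [x] + [min(th) for th in seps] + [y]  # one representative per element
    return [q for a, b in zip(pts, pts[1:]) for q in image(a, b)]

A = [[0] * len(pairs) for _ in pairs]
for p in pairs:
    x, y = sorted(p)
    for q in image(x, y):
        A[idx[q]][idx[p]] += 1

def spectral_radius(M, iters=300):
    # power iteration; sufficient here, as the leading eigenvalue is simple
    v = [1.0] * len(M)
    r = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in M]
        r = max(w)
        v = [x / r for x in w]
    return r
```

The reconstructed action agrees with the ten mappings listed above, and the leading eigenvalue comes out as $\rho(\mathbf{\Theta})\approx 1.3953$, i.e.\ $\log\rho(\mathbf{\Theta})\approx 0.3331$.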
Thus, intuitively, the action of $f$ on these arcs induces a linear map on the space generated by the angle pairs with angles in ${\rm post}(\mathbf{\Theta})$, which is the map ${\mathcal A}$ in the algorithm. Note that the matrix $A_{\mathbf{\Theta}}$ in the algorithm is in general larger than $A_f$, because one postcritical point of $f$ usually corresponds to several angles in ${\rm post}(\mathbf{\Theta})$. What we need to do in this paper is to show that these two matrices have the same leading eigenvalue. In the quadratic case, a complete proof of Theorem \ref{Thurston-algorithm} can be found in \cite[Theorem~13.9]{TG}. The idea of the proof is the following: \begin{enumerate} \item Construct a topological graph $G$ and a Markov action $L:G\to G$ such that the incidence matrix $D_{(G,L)}$ is exactly $A_{\theta}$ (the matrix in the algorithm). \item Construct a continuous, finite-to-one, and surjective semi-conjugacy $\Phi$ from $L:G\to G$ to $f:H_f\to H_f$, and then apply Proposition \ref{Do4}. \end{enumerate} We may thus conclude that \[\log\rho(A_{\theta})=\log\rho(D_{(G,L)})\overset{\text{Prop.~\ref{entropy-formula}}}{=}h(G,L)\overset{\text{Prop.~\ref{Do4}}}{=}h(H_f,f).\] In the higher degree case, the basic idea is similar, but there are several extra difficulties to overcome. The main problem is that a semi-conjugacy from $L:G\to G$ to $f:H_f\to H_f$ is much more difficult to construct, as we will see in the next section. The specific reason will be explained in the rest of this section. Let $f$ be a postcritically-finite polynomial with a weak critical marking $\mathbf{\Theta}=\{\Theta_1,\ldots,\Theta_{m+n}\}$. Recall the definition of ${\rm post}(\mathbf{\Theta})$ given in the table of Section \ref{section-critical-portrait}. Let $G$ be a topological graph with vertex set $V_G:=\text{post}(\mathbf{\Theta})$ and edge set $E_{G}:=\big\{e(x,y)\mid x\not=y\in V_{G}\big\}$.
In other words, $G$ is an (undirected) topological \emph{complete graph} with vertex set $V_G$. Note that when ${\rm post}(\mathbf{\Theta})=\{x\}$ the graph $G$ consists of only one vertex $x$. In this case we exceptionally regard $x$ as the (trivial) edge of $G$, to achieve consistency in the following statement. Mimicking the action of the linear map ${\mathcal A}$ in the algorithm, we define a Markov map $L:G\rightarrow G$ as follows. For simplicity, for any $x\in V_G$, we set $e(x,x):=x$, a vertex of $G$. Let $e(x,y)$ be an edge of $G$. If $x,y$ are non-separated by $\mathbf{\Theta}$, let $L$ map $e(x,y)$ monotonously onto the edge or vertex $e(\tau(x),\tau(y))$ of $G$, with $x$ mapped to $\tau(x)$ and $y$ mapped to $\tau(y)$. If $x,y$ have the separation set $(k_1,\ldots,k_p)\not=\emptyset$, then subdivide the edge $e(x,y)$ into $p+1$ non-trivial arcs $\delta(z_i,z_{i+1})$, $i\in[0,p]$, with $z_0:=x$ and $z_{p+1}:=y$, and let $L$ map each arc $\delta(z_i,z_{i+1})$ monotonously onto $e(\tau(\theta_i),\tau(\theta_{i+1}))$, with $z_i$ mapped to $\tau(\theta_i)$ and $z_{i+1}$ mapped to $\tau(\theta_{i+1})$, where $\theta_0:=x$, $\theta_{p+1}:=y$ and $\theta_i\in \Theta_{k_i}$ for each $i\in[1,p]$. It is easily seen that the edges of $G$ are in one-to-one correspondence with the pairs in ${\mathcal S}$ under the correspondence $e(x,y)\to \{x,y\}$. From the definition of $L$, the incidence matrix of $(G,L)$ is exactly the matrix $A_{\mathbf{\Theta}}$ in the algorithm. It then follows from Propositions \ref{Do3} and \ref{entropy-formula} that \begin{proposition}\label{equal-0} The topological entropy $h(G,L)$ equals $\log\rho(\mathbf{\Theta})$. \end{proposition} So we only need to prove $h(G,L)=h(H_f,f)$. Motivated by Proposition \ref{Do4}, we try to construct a continuous, finite-to-one, and surjective semi-conjugacy from $L:G\to G$ to $f:H_f\to H_f$. {\it A priori}, the following approach seems feasible.
Divide the vertices of $G$ into two parts: \begin{equation}\label{V-G-F} V_{G,{\mathcal F}}:= \text{post}(\mathbf{\Theta}_{\mathcal F})\quad\text{and}\quad V_{G,{\mathcal J}}:= \text{post}(\mathbf{\Theta}_{\mathcal J}), \end{equation} where the definitions of $\text{post}(\mathbf{\Theta}_{\mathcal F})$ and $\text{post}(\mathbf{\Theta}_{\mathcal J})$ are given in the table of Section \ref{section-critical-portrait}. We remark that both $V_{G,{\mathcal F}}$ and $V_{G,{\mathcal J}}$ are finite, since the map $f$ is postcritically finite. For each vertex $x\in V_{G,{\mathcal F}}$, the ray ${\mathcal R}(x)$ supports a Fatou component, denoted by $U_x$, whereas for each vertex $y\in V_{G,{\mathcal J}}$, the ray ${\mathcal R}(y)$ lands at a postcritical point. We are then prompted to first define a map $\chi:V_{G}\to {\rm post}(f)$ such that $\chi(x)$ is the center of $U_x$ if $x\in V_{G,{\mathcal F}}$ and $\chi(x)=\gamma(x)$ if $x\in V_{G,{\mathcal J}}$, and then to extend $\chi$ continuously to a map from $G$ to $H_f$, also denoted by $\chi$, such that $\chi$ maps each edge $e(x,y)$ monotonously onto the regulated arc $[\chi(x),\chi(y)]\subset H_f$. This construction of $\chi$ indeed works in the quadratic case, so the idea of the proof shown above can be realized directly. However, in the higher degree case, it may happen that the ray ${\mathcal R}(x)$ with $x\in V_{G,{\mathcal F}}$ supports two Fatou components simultaneously, or that an angle $y$ belongs to $V_{G,{\mathcal F}}\cap V_{G,{\mathcal J}}$ (see Example \ref{no-well}). This means that such a $\chi$ is not always well defined, so that the construction of the projection from $G$ to $H_f$ described above need not work. This is the key point that causes the difficulty of proving Theorem \ref{Thurston-algorithm} in the higher degree case. \begin{example}\label{no-well} The first example comes from \cite[Section I.4.4]{Poi1}.
Consider the cubic polynomial $f(z)=z^3-\frac{3}{2}z$. It is easy to see that its critical points are $\pm\frac{\sqrt{2}}{2}$ and that they are interchanged by $f$ (see the left part of Figure \ref{intersect}). We may choose a critical portrait of $f$ as \[\mathbf{\Theta}=\Bigl\{\ \Theta(U_A)=\{7/12,1/4\},\ \Theta(U_B)=\{11/12,1/4\}\ \Bigr\}.\] The ray ${\mathcal R}(1/4)$ supports the Fatou components $U_A$ and $U_B$ simultaneously. We next consider the cubic polynomial $f(z)=z^3+\frac{3}{2}z^2$. One of its critical points, $0$, is fixed, and the other critical point, $-1$, is mapped to a repelling fixed point $1/2$ (see the right part of Figure \ref{intersect}). We may choose a critical portrait of $f$ as \[\mathbf{\Theta}=\Bigl\{\ \Theta(U)=\{0,1/3\};\ \Theta(-1)=\{1/3,2/3\}\ \Bigr\}.\] In this case, the angle $1/3$ belongs to both $\Theta(U)$ and $\Theta(-1)$. \begin{figure} \begin{tikzpicture} \node at (-4,0) {\includegraphics[width=8.5cm]{left.pdf}}; \node at (3.5,0) {\includegraphics[width=7cm]{right.pdf}}; \node at (-3.6,1.75){\footnotesize{${\mathcal R}(\frac{1}{4})$}}; \node at (-6.85,-1.75){\footnotesize{${\mathcal R}(\frac{7}{12})$}}; \node at (-1.15,-1.75){\footnotesize{${\mathcal R}(\frac{11}{12})$}}; \node at (-5.5,0){\footnotesize{$U_A$}}; \node at (-2.5,0){\footnotesize{$U_B$}}; \node at (7.5,0){\footnotesize{${\mathcal R}(0)$}}; \node at (2.5,2.2){\footnotesize{${\mathcal R}(\frac{1}{3})$}}; \node at (2.5,-2.2){\footnotesize{${\mathcal R}(\frac{2}{3})$}}; \node at (2.95,0){\footnotesize{$-1$}}; \node at (4.5,0){\footnotesize{$U$}}; \end{tikzpicture} \caption{Badly mixed cases of critical portraits}\label{intersect} \end{figure} \end{example} \section{The proof of Theorem \ref{Thurston-algorithm}}\label{proof} By Proposition \ref{equal-0}, we just need to verify the equality $h(G,L)=h(H_f,f)$.
The idea is as follows (using the notation of Section \ref{section-algorithm}). To solve the problem that $\chi:G\to H_f$ is not well defined, we reduce the Hubbard tree $H_f$ to another finite tree $T$ by collapsing critical and postcritical internal rays, so that $\phi\circ\chi:G\to T$ is well defined, where $\phi$ denotes the quotient map from $H_f$ to $T$. Meanwhile, the Markov map $f: H_f\to H_f$ descends to a Markov map $g:T\to T$ satisfying $\phi\circ f=g\circ \phi$. Then, to establish $h(H_f,f)=h(G,L)$, on the one hand we prove that $h(H_f,f)=h(T,g)$, which is based on an analysis of the incidence matrices of $(H_f,f)$ and $(T,g)$; on the other hand we verify that $h(T,g)=h(G,L)$, for which we basically use Proposition \ref{Do4}. Following this strategy, our proof is divided into several steps. Throughout this section, we fix a postcritically-finite polynomial $f$ and a weak critical marking of $f$ \[\mathbf{\Theta}=\Bigl\{\Theta_1(c_1),\ldots,\Theta_m(c_m);\Theta(U_1),\ldots,\Theta(U_n)\Bigr\}\triangleq\{\Theta_1,\ldots,\Theta_{m+n}\}.\] \subsection{The construction of the quotient map $\phi$}\label{1} Recall the definition of critical/postcritical internal rays given in Section \ref{section-critical-portrait}. By collapsing the critical and postcritical internal rays, we reduce the Hubbard tree $H_f$ to a tree $T$. Precisely, the relation $\sim$ is defined on $\mathbb{C}$ by declaring $z\sim w$ if and only if $z=w$ or $z$ and $w$ are contained in a path constituted by critical or postcritical internal rays. It is clear that $\sim$ is an equivalence relation on $\mathbb{C}$. We denote by $\phi:\mathbb{C}\to\mathbb{C}/_\sim$ the quotient map. \begin{lemma}\label{quotient} The quotient space $\mathbb{C}/_\sim$ is homeomorphic to $\mathbb{C}$.
\end{lemma} \begin{proof} Note that each $\sim$-class with more than one point is a finite tree, so the relation $\sim$ clearly satisfies the following four properties: \begin{enumerate} \item there are at least two distinct equivalence classes; \item it is closed as a subset of $\widehat{\mathbb{C}}\times \widehat{\mathbb{C}}$ equipped with the product topology; \item each equivalence class is a compact connected set; \item the complementary component of each equivalence class is connected. \end{enumerate} Moore's Theorem (\cite{M}) says that if an equivalence relation on $\widehat{\mathbb{C}}$ has these four properties, then the quotient space of $\widehat{\mathbb{C}}$ modulo this equivalence relation is homeomorphic to $\widehat{\mathbb{C}}$. Applying the theorem to the one-point compactification of $\mathbb{C}$, we get that $\mathbb{C}/_\sim$ is homeomorphic to $\mathbb{C}$. \end{proof} For simplicity, we assume that $\mathbb{C}/_\sim=\mathbb{C}$, and that $\phi$ is chosen to be the identity outside a small neighborhood of the filled-in Julia set $K_f$. \begin{definition}[essential $\sim$-equivalence classes]\label{essential} A $\sim$-equivalence class $\xi$ is called \emph{essential} if it is either \emph{non-trivial}, meaning that it contains more than one point, or a point in ${\rm crit}(f)\cup {\rm post}(f)$. \end{definition} Thus $\phi(z)=\phi(w)$ with $z\not=w\in\mathbb{C}$ if and only if $z,w$ belong to a common essential $\sim$-class. Note also that each essential $\sim$-equivalence class is either a finite tree or a point; in the latter case, it is a critical or postcritical point in the Julia set. We now define $T:=\phi(H_f)$.
Since each non-trivial $\sim$-equivalence class $\xi$ is a regulated tree in $K_f$, then, for any regulated tree $H\subset K_f$, we have $[z,w]\subset \xi\cap H$ whenever $z,w\in \xi\cap H$. The intersection of each $\xi$ with $H$ is hence either empty, one point, or a sub-tree of $H$. This implies the proposition below. \begin{proposition}\label{finite-tree} The restriction of $\phi$ to any regulated tree within $K_f$ is monotone. In particular, $\phi:H_f\to T$ is monotone and $T$ is a finite tree in $\mathbb{C}$. \end{proposition} The vertex set of $T$ is defined to be $V_T=\phi(V_{H_f})$. To characterize the edge set of $T$, we define a subset $E^{\text{col}}_{H_f}$ of the edge set of $H_f$ by \[E_{H_f}^{\text{col}}=\{e\in E_{H_f}\mid e\text{ is contained in an essential $\sim$-class}\}.\] Then the edge set of $T$ is equal to $\{\phi(e)\mid e\in E_{H_f}\setminus E_{H_f}^{\text{col}}\}$. \begin{lemma}\label{star-like} There is a universal constant $N>0$ such that, for any essential $\sim$-class $\xi$ and every integer $n>N$, the set $f^n(\xi)$ is either a periodic point in $J_f$ or a star-like tree consisting of periodic internal rays. \end{lemma} \begin{proof} Let $\xi$ be any essential $\sim$-class. If $\xi$ is a point, it is a critical or postcritical point in the Julia set, and hence $f^n(\xi)$ is a periodic point in $J_f$ for sufficiently large $n$. We now assume that $\xi$ is not a point. It is then the union of several critical or postcritical internal rays. Note that such internal rays are eventually periodic under iteration of $f$ (Lemma \ref{basic-property}.(3)), so for any sufficiently large $n$, the set $f^n(\xi)$ is a tree consisting of periodic postcritical internal rays. By Lemma \ref{basic-property}.(5), distinct internal rays in $f^n(\xi)$ lie in different Fatou components. It implies that $f^n(\xi)$ is a star-like tree.
\end{proof} \subsection{The construction of a Markov map $g:T\to T$}\label{2} Clearly, the equivalence relation $\sim$ defined in Section \ref{1} is $f$-invariant, i.e., $x\sim y\Rightarrow f(x)\sim f(y)$. The polynomial $f:\mathbb{C}\to\mathbb{C}$ therefore descends through $\phi$ to a map $f/_\sim:\mathbb{C}\to \mathbb{C}$. Set $g:=f/_\sim$. The following commutative diagram holds: \begin{equation}\label{commutative0}\begin{array}{rcl}\mathbb{C} &\xrightarrow[]{\ f\ } &\mathbb{C} \\ \phi\Big\downarrow &&\Big\downarrow \phi \\ \mathbb{C} & \xrightarrow[]{\ g\ } & \mathbb{C}.\end{array} \end{equation} Note that $g\circ\phi=\phi\circ f$ is continuous, hence $g$ is continuous by the universal property of the quotient topology. We call ${\rm crit}(g):=\phi({\rm crit}(f))$ the \emph{critical set} of $g$, and ${\rm post}(g):=\phi({\rm post}(f))$ the \emph{postcritical set} of $g$. For any $\theta\in \mathbb{T}$, as $\phi$ is injective on the unbounded Fatou component, it maps the external ray ${\mathcal R}_f(\theta)$ to a simple curve, denoted by ${\mathcal R}_g(\theta)$ and called the \emph{external ray of $g$ with angle $\theta$}. Similarly, we denote by $\gamma_g(\theta)$ the landing point of ${\mathcal R}_g(\theta)$. Clearly, we have $\gamma_g(\theta)=\phi(\gamma_f(\theta))$ for every $\theta\in \mathbb{T}$. Via the semi-conjugacy $\phi$, the map $g$ inherits many properties of the map $f$. \begin{proposition}\label{property-g} \begin{enumerate} \item ${\rm post}(g)=\cup_{n\geq1}g^n({\rm crit}(g))$, $g(V_T)\subset V_T$ and ${\rm post}(g)\subset V_T$. \item For each $\theta\in \mathbb{T}$, we have $g({\mathcal R}_g(\theta))={\mathcal R}_g(\tau(\theta))$ and $g(\gamma_g(\theta))=\gamma_g(\tau(\theta))$.
\item For each $\theta\in {\rm crit}(\mathbf{\Theta})$, \emph{resp.} ${\rm post}(\mathbf{\Theta})$, the point $\gamma_g(\theta)$ belongs to ${\rm crit}(g)$, \emph{resp.} ${\rm post}(g)$. \item The tree $T$ is $g$-invariant and the map $g:T\to T$ is Markov. \end{enumerate} \end{proposition} \begin{proof} {(1), (2).} One just needs to notice that ${\rm crit}(g)$, ${\rm post}(g)$ and $V_T$ are the images of ${\rm crit}(f)$, ${\rm post}(f)$ and $V_{H_f}$ under $\phi$. Completing the argument is then straightforward using the commutative diagram (\ref{commutative0}). {\noindent (3).} For any $\theta\in {\rm crit}(\mathbf{\Theta})$, \emph{resp.} ${\rm post}(\mathbf{\Theta})$: if $\theta\in {\rm crit}(\mathbf{\Theta}_{\mathcal F})$, \emph{resp.} ${\rm post}(\mathbf{\Theta}_{\mathcal F})$, then by Lemma \ref{basic-property}.(1) there exists a Fatou component $U$ such that a critical internal ray, \emph{resp.} postcritical internal ray, $r_{U}(\theta)$ joins $\gamma_f(\theta)$ and a critical point, \emph{resp.} postcritical point, of $f$ (the center of $U$); and if $\theta\in {\rm crit}(\mathbf{\Theta}_{\mathcal J})$, \emph{resp.} ${\rm post}(\mathbf{\Theta}_{\mathcal J})$, then $\gamma_f(\theta)\in{\rm crit}(f)$, \emph{resp.} ${\rm post}(f)$. In both cases $\phi$ maps $\gamma_f(\theta)$ to a point of ${\rm crit}(g)$, \emph{resp.} ${\rm post}(g)$. {\noindent (4).} From the commutative diagram (\ref{commutative0}), we get $g(T)=\phi\circ f(H_f)\subset\phi(H_f)=T$. By Proposition \ref{Hubbard-tree}, the tree $H_f$ can be broken into a system of arcs $\Delta_{H_f}$ such that the restriction of $f$ to each arc of $\Delta_{H_f}$ is a homeomorphism onto an edge of $H_f$.
Projecting by $\phi$, the set of arcs $$\Delta_T:=\{\phi(\delta)\mid \delta\in\Delta_{H_f}\text{ and } \delta\nsubseteq\text{ any essential $\sim$-class}\}$$ forms a decomposition of $T$, and by the commutative diagram (\ref{commutative0}), the map $g$ maps each arc of $\Delta_T$ either to a vertex of $T$ or monotonously onto an edge of $T$. Hence $g:T\to T$ is Markov. \end{proof} \subsection{The entropies $h(H_f,f)$ and $h(T,g)$ are equal} We have shown that $f:H_f\to H_f$ and $g:T\to T$ are both Markov maps, so, by Proposition \ref{entropy-formula}, it is enough to show that the spectral radii of the incidence matrices $D_{(H_f,f)}$ and $D_{(T,g)}$ are equal. To do this, we need the lemma below. Recall that $E_{H_f}^{\text{col}}=\{e\in E_{H_f}\mid e\text{ is contained in an essential $\sim$-class}\}$. By a \emph{cycle} in $E_{H_f}^{\text{col}}$ we mean a subset of $E_{H_f}^{\text{col}}$ of the form \[O=\{e_0=e_n,\ e_1=f(e_0),\ldots,\ e_n=f(e_{n-1})\}.\] \begin{lemma}\label{f-invariant} All edges in $E_{H_f}^{\text{col}}$ are attracted by cycles, i.e., for each $e\in E_{H_f}^{\text{col}}$, the sequence of iterates $f^n(e)$ ($n\geq0$) eventually falls into the union of the cycles in $E_{H_f}^{\text{col}}$. \end{lemma} \begin{proof} Let $e\in E_{H_f}^{\text{col}}$. It is by definition the union of several critical or postcritical internal rays, and so is $f(e)$. It follows that $f(e)$ is a union of edges in $E_{H_f}^{\text{col}}$. Notice that each internal ray in $e$ contains a critical or postcritical point, which belongs to $V_{H_f}$; hence the edge $e$, and thus also $f(e)$, is constituted by one or two critical/postcritical internal rays. It follows that $f(e)$ is either still an edge in $E_{H_f}^{\text{col}}$ or the union of two edges in $E_{H_f}^{\text{col}}$ which are both postcritical internal rays.
Repeating this process, we encounter the following two cases: \begin{enumerate} \item All $f^n(e)$ are edges. In this case $e$ is eventually periodic, because $H_f$ is a finite tree. \item For some $k\geq1$, the set $f^k(e)$ is the union of two edges in $E_{H_f}^{\text{col}}$, which are both postcritical internal rays. Since every postcritical internal ray is eventually periodic, the iterates of $e$ finally fall into the union of the cycles in $E_{H_f}^{\text{col}}$. \end{enumerate} The lemma is then proven. \end{proof} \begin{proposition}\label{equal} The entropies $h(H_f,f)$ and $h(T,g)$ are equal. \end{proposition} \begin{proof} Set $$Y:=\bigcup\ \{e\mid e\in E_{H_f}^{\text{col}}\}.$$ Then $f(Y)\subset Y$ by Lemma \ref{f-invariant}, and $f:Y\to Y$ is a Markov map. Denoting the incidence matrix of $(Y,f)$ by $M$, it follows from Proposition \ref{entropy-formula} that $h(Y,f)=\log\rho(M)$. Arranging the edges of $H_f$ in the order $E_{H_f}^{\text{col}},\ E_{H_f}\setminus E_{H_f}^{\text{col}}$, the incidence matrix of $(H_f,f)$ takes the form \[D_{(H_f,f)}=\left(\begin{array}{cc} M&\star\\ \mathbf{0}&B \end{array}\right),\] where $\mathbf{0}$ denotes a zero matrix. Note that the matrix $B$ is exactly the incidence matrix of $(T,g)$, so it is enough to prove $\rho(M)=1$, or, equivalently, $h(Y,f)=0$. We denote by $O_1,\ldots, O_k$ the cycles in $E_{H_f}^{\text{col}}$, and set $O=\bigcup_{i=1}^k O_i$. Clearly $f(O)\subset O$ and $f:O\to O$ is a Markov map. Using Lemma \ref{f-invariant} and Propositions \ref{Do3} and \ref{entropy-formula}, we have \[h(Y,f)=h(O,f)=\log\rho(D_{(O,f)}).\] It suffices to show that $\rho(D_{(O,f)})=1$. Group the edges of $H_f$ in $O$ in the order $O_1,\ldots, O_k$.
Then the incidence matrix $D_{(O,f)}$ takes the form \[\left(\begin{array}{ccc} M_1&&\\ &\ddots&\\ &&M_k \end{array}\right). \] Fix $j\in\{1,\ldots,k\}$. Denote the cycle of edges in $O_j$ by $e_0\mapsto e_1\mapsto\cdots\mapsto e_n=e_0$. If we further arrange the edges in $O_j$ in the order $e_0,\ e_1,\ \ldots,\ e_n$, the matrix $M_{j}$ takes the form \[\left(\begin{array}{cccc} 0&&&1\\ 1&0&&\\ &\ddots&\ddots&\\ &&1&0\\ \end{array}\right). \] Clearly, $\rho(M_{j})=1$. \end{proof}

\subsection{The partition induced by $\Th$ in the $g$-plane}\label{partition-in-g} In Section \ref{partition-in-f} we gave the partition in the $f$-plane induced by $\Th$. Using the projection $\phi$, we can transfer the partition in the $f$-plane to the $g$-plane. For any $\Theta\in\Th$, since $\phi$ collapses each critical and postcritical internal ray, then, by the construction of ${\mathcal R}_f(\Theta)$ given in \eqref{eq:1}, all external rays in the $g$-plane with arguments in $\Theta$ land at a common critical point \begin{equation}\label{eq:critical} c_\Theta:=\phi({\mathcal R}_f(\Theta)\cap K_f) \end{equation} of $g$, and \begin{equation}\label{intersection} {\mathcal R}_g(\Theta):=\phi({\mathcal R}_f(\Theta))=\big(\cup_{\theta\in\Theta}{\mathcal R}_g(\theta)\big)\cup\{c_\Theta\}. \end{equation} Similarly, we can also define the unlinked equivalence relation in the $g$-plane.

\begin{definition}[unlink equivalence in the $g$-plane] Two points $z,w\in \mathbb{C}\setminus\cup_{i=1}^{m+n}{\mathcal R}_g(\Theta_i)$ are said to be \emph{unlink equivalent in the $g$-plane} if they belong to a common component of $\mathbb{C}\setminus {\mathcal R}_g(\Theta_i)$ for every $\Theta_i\in\Th$.
\end{definition}

Note that the intersection of each ${\mathcal R}_f(\Theta_i)$ with $K_f$ is either a critical point or the union of critical internal rays, so the following result is straightforward.

\begin{proposition}\label{partition-g} Let $V_1,\ldots,V_d$ be the unlinked equivalence classes in the $f$-plane. Set $W_i:=\phi(V_i),\ i=1,\ldots,d$. Then each $W_i$ is an unlinked equivalence class in the $g$-plane, and we have $\overline{W_i}=\phi(\overline{V_i})$ and $\partial W_i=\phi(\partial V_i)$. \end{proposition}

With these preparations, we can prove the key lemma below. Note that for any angle $x\in V_G$, by (1) and (3) of Proposition \ref{property-g}, the landing point $\gamma_g(x)$ of the ray ${\mathcal R}_g(x)$ belongs to $T$.

\begin{lemma}\label{injective-g} Let $x,y$ be two distinct points in $V_G$.
\begin{enumerate}
\item If $x,y$ are separated by $\Theta\in \Th$, then $[\gamma_g(x),\gamma_g(y)]_T$ contains the critical point $c_\Theta$ of $g$ given in \eqref{eq:critical}.
\item If $x,y$ are non-separated by $\Th$, then $\gamma_g(x)$ and $\gamma_g(y)$ belong to the closure of an unlinked equivalence class in the $g$-plane, and $g$ maps $[\gamma_g(x),\gamma_g(y)]_T$ monotonously onto $[\gamma_g(\tau(x)),\gamma_g(\tau(y))]_T$.
\end{enumerate}
\end{lemma}

\begin{proof}
\begin{enumerate}
\item By Proposition \ref{finite-tree}, the map $\phi$ is monotone when restricted to $[\gamma_f(x),\gamma_f(y)]$, so $\phi\big([\gamma_f(x),\gamma_f(y)]\big)=[\gamma_g(x),\gamma_g(y)]_T$ by Proposition \ref{tree-map}. Since $c_\Theta=\phi({\mathcal R}_f(\Theta)\cap K_f)$, it remains to show that ${\mathcal R}_f(\Theta)\cap[\gamma_f(x),\gamma_f(y)]\not=\emptyset$.
By looking at the circle at infinity, the rays ${\mathcal R}_f(x)$ and ${\mathcal R}_f(y)$ lie in different components of $\mathbb{C}\setminus{\mathcal R}_f(\Theta)$. As ${\mathcal R}_f(\Theta)$ is connected, the intersection of ${\mathcal R}_f(\Theta)$ with $[\gamma_f(x),\gamma_f(y)]$ is non-empty.
\item By Lemma \ref{separation}, there exist an unlinked equivalence class $I_k$ on $\mathbb{T}$ and two elements $\Theta_i,\Theta_j\in \Th$ intersecting $\overline{I_k}$ such that $x,y\in \overline{I_k}\cup\Theta_i\cup\Theta_j$. Correspondingly, in the dynamical plane of $f$, the points $\gamma_f(x)$ and $\gamma_f(y)$ belong to $\overline{V_k}\cup{\mathcal R}_f(\Theta_i)\cup{\mathcal R}_f(\Theta_j)$. Note that $\phi({\mathcal R}_f(\Theta_i)\cap K_f)$ and $\phi({\mathcal R}_f(\Theta_j)\cap K_f)$ are points contained in $\phi(\overline{V_k})$, so $\gamma_g(x)$ and $\gamma_g(y)$ belong to $\overline{W_k}=\phi(\overline{V_k})$.

To prove the latter part of (2), note that $g(\gamma_g(\theta))=\gamma_g(\tau(\theta))$ for any $\theta\in \mathbb{T}$ (Proposition \ref{property-g}), so, using Proposition \ref{tree-map}, we just need to prove that $g|_{[\gamma_g(x),\gamma_g(y)]_T}$ is monotone. Let $W$ be an unlinked equivalence class in the $g$-plane with $\gamma_g(x),\gamma_g(y)\in\overline{W}$. Since $\gamma_g(x),\gamma_g(y)$ also belong to $T$, there exist two points $z,w\in \overline{V}\cap H_f$ such that $\phi(z)=\gamma_g(x)$ and $\phi(w)=\gamma_g(y)$, where $V$ is the unlinked equivalence class in the $f$-plane with $\phi(V)=W$. As $\phi|_{[z,w]}$ is monotone (Proposition \ref{finite-tree}), then, by Proposition \ref{tree-map}, we have $\phi([z,w])=[\gamma_g(x),\gamma_g(y)]_T$. We apply the formula $\phi\circ f=g\circ \phi$ on $[z,w]$, and use the notations of the proof of Lemma \ref{injective-f}.(4).
Recall that $A_1,\ldots,A_m$ denote the closures of the connected components of $[z,w]\setminus\partial V$, ordered so that each $A_i$ separates $A_{i-1}$ and $A_{i+1}$, and that $E_i=f(A_i)$ for $i=1,\ldots,m$. Note first that $[\gamma_g(x),\gamma_g(y)]_T=\phi([z,w])=\phi(\bigcup_{i=1}^m A_i)$ and \begin{equation}\label{eq*} g([\gamma_g(x),\gamma_g(y)]_T)=g\circ\phi(\cup_{i=1}^m A_i)=\phi\circ f(\cup_{i=1}^m A_i)=\phi(\cup_{i=1}^m E_i). \end{equation} Let $y'$ be any point in $g([\gamma_g(x),\gamma_g(y)]_T)$ (written $y'$ to avoid confusion with the angle $y$). We will show that $(g|_{[\gamma_g(x),\gamma_g(y)]_T})^{-1}(y')$ is connected. Note that the fiber $Y:=\phi^{-1}(y')$ is either a point or a regulated tree in $K_f$. Denote by $L$ the regulated arc containing all $E_i$, $i=1,\ldots,m$, constructed in the proof of Lemma \ref{injective-f}.(4). Then \begin{equation}\label{eq4} Y\cap \big(\cup_{i=1}^m E_i\big)=(Y\cap L)\cap \big(\cup_{i=1}^m E_i\big)=R\cap \big(\cup_{i=1}^m E_i\big), \end{equation} where $R:=Y\cap L$ is a segment in $L$. There is thus a segment $R'$ in $[z,w]$ such that $f(R'\cap \big(\cup_{i=1}^m A_i\big))=R\cap \big(\cup_{i=1}^m E_i\big)$. Since $f$ is injective on $\bigcup_{i=1}^m A_i$, we have \begin{equation}\label{eq5} f^{-1}(R\cap \big(\cup_{i=1}^m E_i\big))\cap \big(\cup_{i=1}^m A_i\big)=R'\cap \big(\cup_{i=1}^m A_i\big). \end{equation} By (\ref{eq*}), (\ref{eq4}) and (\ref{eq5}), the set $\phi(R'\cap \bigcup_{i=1}^m A_i)$ is exactly the fiber $$(g|_{[\gamma_g(x),\gamma_g(y)]_T})^{-1}(y')=g^{-1}(y')\cap [\gamma_g(x),\gamma_g(y)]_T.$$ By the construction of the $A_i$, it is either a point or a closed segment. Thus $g|_{[\gamma_g(x),\gamma_g(y)]_T}$ is monotone, as we wished to show.
\end{enumerate}
\end{proof}

\subsection{The construction of a projection $\Phi:G\to T$}\label{6} Recall that $G$ is a topological complete graph with the vertex set ${\rm post}(\Th)$.
Proposition \ref{property-g}.(3) says that each external ray in the $g$-plane with argument in $V_G$ lands at a postcritical point of $g$. We then define a map $\Phi:V_G\to {\rm post}(g)$ by $\Phi(x)=\gamma_g(x)$ for $x\in V_G$, where $\gamma_g(x)$ denotes the landing point of ${\mathcal R}_g(x)$.

\begin{lemma}\label{property-phi} The map $\Phi$ is surjective and satisfies $g\circ\Phi=\Phi\circ\tau$. \end{lemma}

\begin{proof} By the construction of $\Th$, for each postcritical point $z$, there exists an angle $x\in V_G$ such that either $\gamma_f(x)=z$ or a postcritical internal ray $r_{U}(x)$ joins $z$ and $\gamma_f(x)$. In both cases, we have $\phi(z)=\phi(\gamma_f(x))=\gamma_g(x)=\Phi(x)$. Since ${\rm post}(g)=\phi({\rm post}(f))$, the map $\Phi$ is surjective. By Proposition \ref{property-g}.(2), we get $g\circ\Phi(x)=g(\gamma_g(x))=\gamma_g(\tau(x))=\Phi\circ\tau(x)$ for each $x\in V_G$. \end{proof}

We continuously extend $\Phi$ to a map, also denoted by $\Phi$, from $G$ to $T$ such that $\Phi$ maps each edge $e(x,y)$ of $G$ monotonously onto $[\Phi(x),\Phi(y)]_T$. It may happen that $\Phi(x)=\Phi(y)$, in which case $[\Phi(x),\Phi(y)]_T$ reduces to a point. We thus obtain a projection from $G$ to $T$.

\begin{proposition}\label{surjective} The projection $\Phi:G\to T$ is surjective. \end{proposition}

\begin{proof} The Hubbard tree is by definition the regulated hull of the postcritical points, so the endpoints of $H_f$ belong to ${\rm post}(f)$. Since $\phi$ maps the endpoints of $H_f$ onto those of $T$, the endpoints of $T$ belong to ${\rm post}(g)$. Since $G$ is a complete graph, the arc $[p,q]_T$ is contained in $\Phi(G)$ for any endpoints $p,q$ of $T$.
It remains to invoke the fact that each edge of $T$ is contained in a regulated arc $[p,q]$ with $p,q$ two endpoints of $T$. \end{proof}

In this construction, the projection $\Phi$ is only required to be monotone on each edge of $G$, so it is not necessarily a semi-conjugacy from $L:G\to G$ to $g:T\to T$. One may ask whether additional conditions can be imposed on the extension so that $\Phi$ becomes a semi-conjugacy. The answer might be yes, but we will not pursue this. One reason is that to turn $\Phi$ into a semi-conjugacy, we would need to carefully analyse where $g$ fails to be injective on each edge of $T$ and correspondingly modify $L$ piecewise on each edge of $G$, which is tedious. The other, crucial, reason is that even if we modified $\Phi$ into a semi-conjugacy, it would generally not be finite-to-one, because we can only require $\Phi$ to be monotone, not a homeomorphism, on each edge of $G$, so that Proposition \ref{Do3} is not available. Instead, we will suitably modify $\Phi$ on each edge of $G$ so that the relation $g\circ \Phi=\Phi\circ L$ is satisfied in a weaker sense (see Lemma \ref{commutative2} below).

Recall the definition of $L$ in Section \ref{T-A}. If we specify that a separation set of the form $(k_1,\ldots,k_0)$ is empty, then the action of $L$ on an edge $e(x,y)\in E_G$ can be uniformly expressed as follows. Let the ordered pair $x,y$ have separation set $(k_1,\ldots,k_p)$, $p\geq0$, i.e., the leaf $\overline{xy}$ successively crosses $hull(\Theta_{k_1}),\ldots,hull(\Theta_{k_p})$ from $x$ to $y$, and no other elements of $\Th$ separate $x$ and $y$. Subdivide the edge $e(x,y)$ into $p+1$ arcs $\delta(z_i,z_{i+1})$, $i\in[0,p]$, with $z_0:=x$ and $z_{p+1}:=y$, such that these arcs are non-trivial if $x\not=y$.
Then let $L$ map each arc $\delta(z_i,z_{i+1})$ monotonously onto $e(\tau(\theta_i),\tau(\theta_{i+1}))$, where $\theta_0:=x$, $\theta_{p+1}:=y$ and $\theta_i\in \Theta_{k_i}$ for each $i\in[1,p]$.

\begin{definition}[subdivision arcs of $G$]\label{subdivision} For any edge $e(x,y)$ of $G$ with $x\not=y$, a non-trivial arc $\delta(z_i,z_{i+1})\subset e(x,y)$ as described above is called a subdivision arc of $G$. \end{definition}

For a subdivision arc of $G$, we see from the action of $L$ that it is mapped either monotonously onto an edge of $G$ or onto a vertex of $G$.

\begin{lemma}\label{commutative2} We can modify the projection $\Phi$ on each subdivision arc $\delta$ of $G$ such that $\Phi|_\delta$ is either injective or a constant map, and the following equation holds: $$g\circ \Phi(\delta)=\Phi \circ L(\delta).$$ \end{lemma}

\begin{proof} Let $e(x,y)$ be any edge of $G$, and let the ordered pair $x,y$ have separation set $(k_1,\ldots,k_p)$ with $p\geq0$. Then $e(x,y)$ contains $p+1$ subdivision arcs $\delta(z_i,z_{i+1})$, $i=0,\ldots, p$, with $z_0:=x$ and $z_{p+1}:=y$. We set $\theta_0:=x$, $\theta_{p+1}:=y$, and pick an angle $\theta_i$ in each $\Theta_{k_i}\in \Th$. By Lemma \ref{injective-g}.(1), the arc $[\Phi(x),\Phi(y)]_T=[\gamma_g(x),\gamma_g(y)]_T$ (possibly reduced to a point) successively passes through the points \[c_{\Theta_{k_i}}=\phi({\mathcal R}_f(\Theta_{k_i})\cap K_f)=\gamma_g(\theta_i)=\Phi(\theta_i),\quad i=1,\ldots,p.\] It follows that $[\Phi(x),\Phi(y)]_T$ also contains $p+1$ successive subdivision sets $[\Phi(\theta_i),\Phi(\theta_{i+1})]_T$, $i=0,\ldots, p$, each of which is either an arc or a point.
We now modify $\Phi:e(x,y)\to [\Phi(x),\Phi(y)]_T$ on each subdivision arc of $G$ in $e(x,y)$ such that
\begin{enumerate}
\item $\Phi(\delta(z_i,z_{i+1}))=[\Phi(\theta_i),\Phi(\theta_{i+1})]_T$ with $\Phi(z_i)=\Phi(\theta_i)$ and $\Phi(z_{i+1})=\Phi(\theta_{i+1})$, for each $i=0,\ldots,p$;
\item if $\Phi(\theta_i)\not=\Phi(\theta_{i+1})$, the map $\Phi:\delta(z_i,z_{i+1})\to [\Phi(\theta_i),\Phi(\theta_{i+1})]_T$ is a homeomorphism.
\end{enumerate}
By Corollary \ref{non-separated} and Lemma \ref{injective-g}.(2), we get that $$g[\Phi(\theta_i),\Phi(\theta_{i+1})]_T=g[\gamma_g(\theta_i),\gamma_g(\theta_{i+1})]_T=[\gamma_g(\tau(\theta_i)),\gamma_g(\tau(\theta_{i+1}))]_T= [\Phi(\tau(\theta_i)),\Phi(\tau(\theta_{i+1}))]_T.$$ Therefore, after this modification, we have \begin{eqnarray*} g\circ\Phi(\delta(z_i,z_{i+1}))&=&g[\Phi(\theta_i),\Phi(\theta_{i+1})]_T=[\Phi(\tau(\theta_i)),\Phi(\tau(\theta_{i+1}))]_T\\ &=&\Phi\big(e(\tau(\theta_i),\tau(\theta_{i+1}))\big)=\Phi\circ L(\delta(z_i,z_{i+1})), \end{eqnarray*} which completes the proof. \end{proof}

\subsection{The construction of a projection $\Psi:\Gamma\to T$}\label{7} To resolve the problem that $\Phi:G\to T$ is generally not finite-to-one, we construct in this part a quotient graph $\Gamma$ of $G$ and a finite-to-one map $\Psi:\Gamma\to T$.

\begin{definition}[collapsing subdivision arc] A subdivision arc of $G$ is called a \emph{collapsing subdivision arc} if it is mapped by $\Phi$ to a point.
\end{definition}

Intuitively, by collapsing each collapsing subdivision arc to a point, we obtain a quotient graph $\Gamma$, and the projection $\Phi:G\to T$ descends to a projection $\Psi:\Gamma\to T$ which is injective on each edge of $\Gamma$. We explain these facts in the following. We define a relation $\simeq$ on $G$ such that $p\simeq q$ if and only if either $p=q$ or $p$ and $q$ are contained in a path constituted by collapsing subdivision arcs of $G$. This relation is obviously an equivalence relation, so we denote by $\Gamma:=G/_\simeq$ the quotient space and by $\wp:G\to \Gamma$ the quotient map.

\begin{proposition}\label{quotient-graph} The topological space $\Gamma$ is a topological graph, and the Markov map $L:G\to G$ descends by $\wp$ to a Markov map $Q:\Gamma\to \Gamma$.
\begin{equation}\label{commutative4}\begin{array}{rcl}G &\xrightarrow[]{\ L\ } &G \\ \wp\Big\downarrow &&\Big\downarrow \wp \\ \Gamma & \xrightarrow[]{\ Q\ } & \Gamma .\end{array} \end{equation}
\end{proposition}

\begin{proof} We define the vertex set of $\Gamma$ as $V_{\Gamma}:=\wp(V_G)$ and its edge set as $E_{\Gamma}:=\{\wp(\widetilde{e})\mid \widetilde{e}\in E_G\setminus E_G^{\text{col}}\}$, where $E_G^{\text{col}}$ denotes the set of edges of $G$ which are constituted by collapsing subdivision arcs. It is not difficult to check that the topological space $\Gamma$ with vertex set $V_{\Gamma}$ and edge set $E_{\Gamma}$ satisfies the properties of a topological graph (see Section \ref{entropy-result}). Let $\widetilde{\delta}$ be any collapsing subdivision arc of $G$. By Lemma \ref{commutative2}, the set $\Phi\big(L(\widetilde{\delta})\big)=g\big(\Phi(\widetilde{\delta})\big)$ is a singleton. Then $L(\widetilde{\delta})$ is either a point or the union of some collapsing subdivision arcs.
This means that the equivalence relation $\simeq$ is $L$-invariant. The map $L:G\to G$ hence descends to a continuous self-map of $\Gamma$, which we denote by $Q$. Clearly, the set of arcs \begin{equation}\label{eq:subdivision-arc} \{\text{$\wp(\widetilde{\delta})$: $\widetilde{\delta}$ is a subdivision arc, but not a collapsing subdivision arc, of $G$}\} \end{equation} forms a system of subdivision arcs of $\Gamma$. By the formula $\wp\circ L=Q\circ \wp$ on $G$, the restriction of $Q$ to such a subdivision arc is either monotone onto an edge of $\Gamma$ or a constant map. Hence $Q:\Gamma\to \Gamma$ is a Markov map. \end{proof}

\begin{proposition}\label{semi-conjugacy} There exists a surjective and finite-to-one map $\Psi:\Gamma\to T$ such that $\Psi\circ \wp=\Phi$ pointwise on $G$.
\begin{equation}\label{commutative-3} \xymatrix@R=0.5cm{ G \ar[dd]_{\Psi} \ar[dr]^{\wp} \\ & \Gamma \ar[dl]_{\Psi} \\ T } \end{equation}
\end{proposition}

\begin{proof} For any $p\in \Gamma$, we define $\Psi(p)=\Phi\circ\wp^{-1}(p)$. Note that $\wp^{-1}(p)$ is either a point or a connected set consisting of collapsing subdivision arcs. Since $\Phi$ maps each collapsing subdivision arc to a point, the set $\Phi\big(\wp^{-1}(p)\big)$ is always a singleton, so $\Psi$ is well defined. The surjectivity of $\Psi$ follows directly from the surjectivity of $\Phi$ given in Proposition \ref{surjective}. To prove that $\Psi$ is finite-to-one, it is enough to show that its restriction to each edge of $\Gamma$ is injective. Let $e\in E_\Gamma$, let $p,q\in e$ and suppose $\Psi(p)=\Psi(q)$. By the definition of $E_\Gamma$, we can pick an edge $\widetilde{e}$ of $G$ in $E_G\setminus E_G^{\text{col}}$ such that $\wp(\widetilde{e})=e$.
Denote by $\tilde{p}$ and $\tilde{q}$ the preimages of $p$ and $q$ on $\widetilde{e}$ under $\wp$, respectively. We then have $\Phi(\tilde{p})=\Phi(\tilde{q})$. According to the modified construction of $\Phi$ (Lemma \ref{commutative2}), it follows that $\tilde{p}$ and $\tilde{q}$ belong to a path constituted by collapsing subdivision arcs of $G$. Hence $p=\wp(\tilde{p})=\wp(\tilde{q})=q$, so $\Psi|_e$ is injective. \end{proof}

\subsection{The entropies $h(\Gamma,Q)$ and $h(T,g)$ are equal}\label{8} By Proposition \ref{semi-conjugacy}, the map $\Psi:\Gamma\to T$ is surjective and finite-to-one. To prove $h(\Gamma,Q)=h(T,g)$, we only need to redefine $Q$ on each subdivision arc of $\Gamma$ so that $\Psi$ is a semi-conjugacy from $Q:\Gamma\to\Gamma$ to $g:T\to T$, and then apply Proposition \ref{Do4}.

\begin{proposition}\label{equal-1} The topological entropies $h(\Gamma,Q)$ and $h(T,g)$ are equal. \end{proposition}

\begin{proof} In \eqref{eq:subdivision-arc} we obtained a system of subdivision arcs of $\Gamma$. Let $\delta=\wp(\widetilde{\delta})$ be one of them. By the definition of the relation $\simeq$, the map $\wp:\widetilde{\delta}\to\delta$ is a homeomorphism. By Lemma \ref{commutative2} and the commutative diagrams (\ref{commutative4}) and (\ref{commutative-3}), we have $$g\circ\Psi(\delta)=g\circ\Phi(\widetilde{\delta})=\Phi\circ L(\widetilde{\delta})=\Psi\circ Q(\delta).$$ If $Q(\delta)$ is a singleton, the formula $g\circ\Psi=\Psi\circ Q$ automatically holds pointwise on $\delta$. Otherwise, $Q(\delta)$ is an edge of $\Gamma$, and the maps $\Psi|_{\delta}$ and $\Psi|_{Q(\delta)}$ are both homeomorphisms onto their images.
We then redefine $Q$ on $\delta$ by lifting $g:\Psi(\delta)\to \Psi(Q(\delta))$ along these two homeomorphisms, i.e., by setting $Q:=\Psi^{-1}\circ g\circ\Psi$ on $\delta$. After this modification of $Q$ on each subdivision arc of $\Gamma$, we obtain the formula $g\circ\Psi=\Psi\circ Q$ pointwise on $\Gamma$. By Proposition \ref{entropy-formula}, the topological entropy $h(\Gamma,Q)$ is independent of the precise choices of $Q$ as a monotone map on the subdivision arcs of $\Gamma$. Then, using Proposition \ref{Do4}, we have $h(\Gamma,Q)=h(T,g)$. \end{proof}

\subsection{The entropies $h(G,L)$ and $h(\Gamma,Q)$ are equal}\label{9} To complete the proof of Theorem \ref{Thurston-algorithm}, it remains to show that $h(\Gamma,Q)=h(G,L)$. The idea of the proof is similar to that of Proposition \ref{equal}. Recall that $E_G^{\text{col}}=\{e\in E_G\mid \Phi(e)\text{ is a singleton}\}$.

\begin{proposition}\label{collapsing-edge} For each $e\in E_G^{\rm col}$, the set $L(e)$ is either one point or the union of edges in $E_G^{\rm col}$. \end{proposition}

\begin{proof} Let $e\in E_G^{\text{col}}$ and let $\delta\subset e$ be a subdivision arc of $G$. Then $\Phi(\delta)$ is a singleton. By Lemma \ref{commutative2}, the set $\Phi\big(L(\delta)\big)=g\big(\Phi(\delta)\big)$ is a singleton. It follows that $L(\delta)$ is either a point or an edge of $G$ belonging to $E_G^{\text{col}}$. \end{proof}

With this proposition, if we arrange the edges of $G$ in the order $E_G^{\text{col}},\ E_G\setminus E_G^{\text{col}}$, the incidence matrix of $(G,L)$ takes the form \[D_{(G,L)}=\left(\begin{array}{cc} X&\star\\ \mathbf{0}&C \end{array}\right), \] where $\mathbf{0}$ is a zero matrix.
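The linear-algebra step used repeatedly in this section — for a block upper-triangular matrix the spectrum is the union of the spectra of the diagonal blocks, so the spectral radius is their maximum, while a cyclic permutation block contributes spectral radius $1$ — can be checked numerically. The sketch below (assuming NumPy; the matrices are made-up stand-ins, not actual incidence data) illustrates it:

```python
import numpy as np

def spectral_radius(A):
    """Largest absolute value among the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

# X: cyclic permutation matrix of a single 3-cycle of collapsed
# edges (its eigenvalues are cube roots of unity, so rho(X) = 1).
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
# C: a small non-negative matrix standing in for the second block.
C = np.array([[1, 1],
              [1, 0]])
# S: an arbitrary "star" block.
S = np.array([[1, 0],
              [0, 1],
              [1, 1]])

# Block upper-triangular assembly, as in the displayed matrix above.
D = np.block([[X, S],
              [np.zeros((2, 3)), C]])

# spectrum(D) = spectrum(X) union spectrum(C), hence
# rho(D) = max(rho(X), rho(C)); here rho(X) = 1 <= rho(C).
assert np.isclose(spectral_radius(X), 1.0)
assert np.isclose(spectral_radius(D), spectral_radius(C))
```

In entropy terms: since the cyclic block has spectral radius $1$, the logarithm of the spectral radius of the full matrix equals that of the lower block whenever the latter is at least $1$.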
Note that the matrix $C$ is exactly the incidence matrix of $(\Gamma,Q)$, so it is enough to prove $\rho(X)=1$. From the modification of $\Phi$ in Lemma \ref{commutative2}, we know that an edge $e(x,y)$ belongs to $E_G^{\text{col}}$ if and only if $\Phi(x)=\Phi(y)$, or equivalently, $\gamma_g(x)=\gamma_g(y)$. By the definition of $\phi$, this happens if and only if $\gamma_f(x)$ and $\gamma_f(y)$ are contained in an essential $\sim$-class (see Definition \ref{essential}). From now on to the end, all arguments take place in the $f$-plane, so we omit the subscript $f$ in all quantities for simplicity.

Let $K$ be a connected subset of $K_f$. An edge $e(x,y)$ of $G$ is said to \emph{correspond} to $K$ if $\gamma(x),\gamma(y)\in K$. With this notion, any edge in $E_G^{\text{col}}$ corresponds to an essential $\sim$-class.

\begin{lemma}\label{correspond} Let $e=e(x,y)\in E_G^{\text{col}}$ correspond to a connected subset $K$ of an essential $\sim$-class. Then all edges of $G$ contained in $L(e)$ correspond to $f(K)$. \end{lemma}

\begin{proof} Let the ordered pair $x,y$ have separation set $(k_1,\ldots,k_p)$ with $p\geq0$, i.e., the leaf $\overline{xy}$ successively crosses $hull(\Theta_{k_1}),\ldots, hull(\Theta_{k_p})$ from $x$ to $y$ with $\Theta_{k_1},\ldots,\Theta_{k_p}\in \Th$, and denote the subdivision arcs of $G$ contained in $e(x,y)$ by $\delta_i$, $i=0,\ldots, p$, with $x\in\delta_0$ and $y\in\delta_{p}$. Since $K$ is allowable connected, $[\gamma(x),\gamma(y)]\subset K$. In the proof of Lemma \ref{injective-g}, we have shown that $[\gamma(x),\gamma(y)]$ intersects ${\mathcal R}(\Theta_{k_i})$ for all $i\in\{1,\ldots,p\}$.
If $\Theta_{k_i}=\Theta_j(c_j)$ for some $\Theta_j(c_j)\in\Th$, then $[\gamma(x),\gamma(y)]\cap {\mathcal R}(\Theta_{k_i})=\{c_j\}$ and $\gamma(\theta_i)=c_j\in [\gamma(x),\gamma(y)]$ for any given $\theta_i\in \Theta_{k_i}$. If $\Theta_{k_i}=\Theta(U)$ for a critical Fatou component $U$, the intersection of $[\gamma(x),\gamma(y)]$ with $U$ consists of two internal rays of $U$, which are critical/postcritical internal rays (since $[\gamma(x),\gamma(y)]$ lies in an essential $\sim$-class). By Lemma \ref{basic-property}.(4), these two internal rays must be contained in ${\mathcal R}(\Theta_{k_i})$. We can then pick an angle $\theta_i\in\Theta_{k_i}$ satisfying $\gamma(\theta_i)\in[\gamma(x),\gamma(y)]$.

Thus we obtain a sequence of angles $\theta_1,\ldots,\theta_p$ such that $\theta_i\in\Theta_{k_i}$ and $\gamma(\theta_i)\in [\gamma(x),\gamma(y)]\subset K$ for each $i\in\{1,\ldots,p\}$. Setting $\theta_0:=x$ and $\theta_{p+1}:=y$, we have \begin{equation}\label{eq:2} \gamma(\tau(\theta_0)),\ldots,\gamma(\tau(\theta_{p+1}))\in f(K). \end{equation} According to the definition of $L$, the edges of $G$ contained in $L(e(x,y))$ are the non-trivial arcs among $$L(\delta_0)=e(\tau(\theta_0),\tau(\theta_1)),\ldots,L(\delta_p)=e(\tau(\theta_p),\tau(\theta_{p+1})),$$ which all correspond to $f(K)$ by \eqref{eq:2}. \end{proof}

The following lemma is quite similar to Lemma \ref{f-invariant}, in both its statement and its role.

\begin{lemma}\label{eventually-period} All edges in $E_{G}^{\rm col}$ are attracted by cycles, i.e., for each $e\in E_{G}^{\rm col}$, its iterates under $L$ eventually fall into the union of cycles in $E_{G}^{\rm col}$. \end{lemma}

\begin{proof} Let $e\in E_G^{\text{col}}$ be any edge corresponding to an essential $\sim$-class $\xi$.
By repeated use of Lemma \ref{correspond}, for all $n\geq1$, each edge of $G$ contained in $L^n(e)$ corresponds to $f^n(\xi)$. By Lemma \ref{star-like}, there exists an $n_0>0$ such that for any $n\geq n_0$, the set $f^n(\xi)$ is either a periodic point in $J_f$ or a star-like tree consisting of periodic internal rays. Hence $f^n(\xi)\cap J_f$ is a periodic point in $J_f$, denoted by $z_n$. We just need to prove that for any $n\geq n_0$, each edge of $G$ contained in $L^n(e)$ is periodic under the iteration of $L$.

Let $n\geq n_0$. Denote by $A_n$ the set of external angles associated with $z_n$. We claim that each pair of angles in $A_n$ is non-separated by $\Th$. Suppose, on the contrary, that $x,y\in A_n$ are separated by some $\Theta\in\Th$. Then, in the dynamical plane, the rays ${\mathcal R}(x),{\mathcal R}(y)$ are separated by ${\mathcal R}(\Theta)$. It follows that $z_n\in{\mathcal R}(\Theta)$. Since $z_n$ is periodic, the set $\Theta$ equals $\Theta(U)$ for a critical Fatou component $U$, and one of the angles in $\Theta(U)$, say $\theta$, belongs to $A_n$. This contradicts the fact that ${\mathcal R}(\theta)$ supports the Fatou component $U$ at $z_n$.

Let $e(x,y)$ be any edge of $G$ contained in $L^n(e)$. We know from the first paragraph that $\gamma(x)$ and $\gamma(y)$ belong to $f^n(\xi)\cap J_f=\{z_n\}$. By the claim above, we have $L(e(x,y))=e(\tau(x),\tau(y))$, which is also an edge of $G$ and corresponds to $f^{n+1}(\xi)$. Since this argument holds for all sufficiently large $n$, and the angles in $A_n$ are periodic (because $z_n$ is periodic), the edge $e(x,y)$ must be periodic under the iteration of $L$. \end{proof}

\begin{proposition}\label{equal-2} The topological entropies satisfy $h(\Gamma,Q)=h(G,L)$.
\end{proposition}

\begin{proof} Lemma \ref{eventually-period} plays the same role in this proof as Lemma \ref{f-invariant} does in the proof of Proposition \ref{equal}. Using a similar argument, replacing $H_f$, $T$ and $f$ in the proof of Proposition \ref{equal} with $G$, $\Gamma$ and $L$ respectively, we obtain the equation $h(G,L)=h(\Gamma,Q)$. The details are omitted. \end{proof}

\begin{proof}[\bf Proof of Theorem \ref{Thurston-algorithm}] It follows directly from Propositions \ref{equal-0}, \ref{equal}, \ref{equal-1} and \ref{equal-2}. \end{proof}

\begin{thebibliography}{FF}
\bibitem{AKM} R. L. Adler, A. G. Konheim and M. H. McAndrew, \emph{Topological entropy}, Trans. Amer. Math. Soc. 114 (1965), 309--319.
\bibitem{AM} L. Alsed\`{a}, M. Misiurewicz, \emph{Semiconjugacy to a map of a constant slope}, arXiv:1410.1718.
\bibitem{BFH} B. Bielefeld, Y. Fisher, J. Hubbard, \emph{The classification of critically preperiodic polynomials as dynamical systems}, J. Amer. Math. Soc. 5 (1992), 721--762.
\bibitem{BM} M. Bonk, D. Meyer, \emph{Expanding Thurston maps}, \url{http://users.jyu.fi/~danmeyer/files/draft.pdf}.
\bibitem{Do} A. Douady, \emph{Topological entropy of unimodal maps: monotonicity for quadratic polynomials}, in Real and Complex Dynamical Systems, NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci. 464, 65--87, Kluwer, Dordrecht, 1995.
\bibitem{DH} A. Douady, J. H. Hubbard, \emph{Exploring the Mandelbrot set: The Orsay Notes}, \url{http://www.math.cornell.edu/~hubbard/OrsayEnglish.pdf}.
\bibitem{Fu} H. Furstenberg, \emph{Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation}, Math. Systems Theory 1 (1967), 1--49.
\bibitem{DS} D. Dudko, D. Schleicher, \emph{Core entropy of quadratic polynomials}, arXiv:1412.9760v1.
\bibitem{Gao} Y. Gao, \emph{Dynatomic periodic curve and core entropy for polynomials}, Thesis, Angers University (2013).
\bibitem{Jung} W. Jung, \emph{Core entropy and biaccessibility of quadratic polynomials}, arXiv:1401.4792.
\bibitem{L} M. Lyubich, \emph{Dynamics of quadratic polynomials, I--II}, Acta Math. 178 (1997), 185--297.
\bibitem{MT} J. Milnor, W. Thurston, \emph{On iterated maps of the interval}, in Dynamical Systems (College Park, MD, 1986--87), Lecture Notes in Math., vol. 1342, pp. 465--563, Springer, Berlin (1988).
\bibitem{MS} M. Misiurewicz, W. Szlenk, \emph{Entropy of piecewise monotone mappings}, Studia Math. 67 (1980), 45--63.
\bibitem{Moi} E. Moise, \emph{Geometric Topology in Dimensions 2 and 3}, Graduate Texts in Mathematics, Vol. 47, Springer-Verlag, New York, 1977.
\bibitem{M} R. L. Moore, \emph{Concerning upper semicontinuous collections of compacta}, Trans. Amer. Math. Soc. 27 (1925), 416--426.
\bibitem{Poi1} A. Poirier, \emph{On postcritically finite polynomials, Part One: Critical portraits}, arXiv:math/9305207v1.
\bibitem{Poi2} A. Poirier, \emph{On postcritically finite polynomials, Part Two: Hubbard trees}, arXiv:math/9307235v1.
\bibitem{Ti0} G. Tiozzo, \emph{Topological entropy of quadratic polynomials and dimension of sections of the Mandelbrot set}, Adv. Math. 273 (2015), 651--715.
\bibitem{Ti} G. Tiozzo, \emph{Continuity of core entropy of quadratic polynomials}, to appear in Invent. Math.
\bibitem{T} W. Thurston, \emph{Polynomial dynamics from combinatorics to topology}, pp. 1--109 in Complex Dynamics: Families and Friends, ed. D. Schleicher, A K Peters, Wellesley, MA, 2009.
\bibitem{TG} W. Thurston, H. Baik, Y. Gao, J. Hubbard, K. Lindsey, L. Tan, D. Thurston, \emph{Degree-$d$ invariant laminations}, preprint.
\bibitem{Z} J. S. Zeng, \emph{Criterion for rays landing together}, arXiv:1503.05931.
\end{thebibliography} \noindent Yan Gao, \\ Mathematical School of Sichuan University, Chengdu 610064, P. R. China. \\ Email: [email protected] \end{document}
\begin{document} \ninept \title{A Fully Convolutional Neural Network for Speech Enhancement} \begin{abstract} In hearing aids, the presence of babble noise greatly degrades the intelligibility of human speech. However, removing the babble without creating artifacts in human speech is a challenging task in a low SNR environment. Here, we sought to solve the problem by finding a `mapping' between noisy speech spectra and clean speech spectra via supervised learning. Specifically, we propose using fully convolutional neural networks, which consist of fewer parameters than fully connected networks. The proposed network, Redundant Convolutional Encoder-Decoder (R-CED), demonstrates that a convolutional network can be 12 times smaller than a recurrent network and yet achieve better performance, which shows its applicability for an embedded system: the hearing aid. \end{abstract} \begin{keywords} Speech Enhancement, Speech Denoising, Babble Noise, Fully Convolutional Neural Network, Convolutional Encoder-Decoder Network, Redundant Convolutional Encoder-Decoder Network \end{keywords} \section{Introduction} \label{sec:intro} Denoising speech signals has been a long-standing problem. Decades of work have produced feasible solutions that estimate the noise model and use it to recover noise-subtracted speech \cite{boll1979suppression, lim1979enhancement, ephraim1984speech, scalart1996speech, ephraim1995signal}. Nonetheless, estimating the model for babble noise, which is encountered when a crowd of people is talking, is still a challenging task, and the presence of babble noise greatly degrades the intelligibility of human speech. When babble noise dominates over speech, the aforementioned methods will often fail to find the correct noise model \cite{krishnamurthy2009babble}. If so, the noise subtraction will introduce distortion into the speech, which creates discomfort for the users of hearing aids \cite{mccormack2013people}.
Here, instead of explicitly modeling the babble noise, we focus on learning a `mapping' between noisy speech spectra and clean speech spectra, inspired by recent works on speech enhancement using neural networks \cite{han2014learning, xu2015regression, xia2013speech, osako2015complex}. However, the model size of neural networks easily exceeds several hundreds of megabytes, limiting their applicability for an embedded system. On the other hand, Convolutional Neural Networks (CNNs) typically consist of fewer parameters than FNNs and RNNs due to their weight-sharing property. CNNs have already proved their efficacy at extracting features in speech recognition \cite{abdel2014convolutional, amodei2015deep} and at eliminating noise in images \cite{mao2016image, he2015deep}. But to our knowledge, CNNs have not been tested in speech enhancement. In this paper, we attempted to find a `memory efficient' denoising algorithm for babble noise that creates minimal artifacts and that can be implemented in an embedded device: the hearing aid. Through experiments, we demonstrated that a CNN can perform better than Feedforward Neural Networks (FNN) or Recurrent Neural Networks (RNN) with a much smaller network size. A new network architecture, Redundant Convolutional Encoder-Decoder (R-CED), is proposed, which extracts redundant representations of a noisy spectrum at the encoder and maps them back to a clean spectrum at the decoder. This can be viewed as mapping the spectrum to a higher dimension (cf. the kernel method), and projecting the features back to a lower dimension. The paper is organized as follows. In section \ref{sec:map}, a formal definition of the problem is stated. In section \ref{sec:conv}, the fully convolutional network architectures are presented, including the proposed R-CED network. In section \ref{sec:exp}, the experimental methods are provided. In section \ref{sec:config}, descriptions of the experiments and the corresponding network configurations are provided.
In section \ref{sec:result}, the results are discussed, and in section \ref{sec:con}, we end with the conclusions of this study. \begin{figure} \caption{Speech Enhancement Using a CNN} \label{fig:map} \end{figure} \begin{figure*} \caption{Modified Convolutional Encoder-Decoder Network (CED)} \label{fig:CED} \caption{Proposed Redundant CED (R-CED)} \label{fig:RCED} \end{figure*} \section{Problem Statement} \label{sec:map} Given a segment of noisy spectra $\{{\bf x}_t\}_{t=1}^T$ and clean spectra $\{{\bf y}_t\}_{t=1}^T$, our aim is to learn a mapping $f$ which generates a segment of `denoised' spectra $\{f({\bf x}_t)\}_{t=1}^T$ that approximates the clean spectra in the $\ell_2$ norm, i.e. \begin{align} \min \sum_{t=1}^T ||{\bf y}_t - f({\bf x}_t)||_2^2. \label{eq:obj_rnn} \end{align} Specifically, we formulate $f$ using a neural network (see \figref{fig:map}). If $f$ is a recurrent-type network, the temporal behavior of the input spectra is already addressed by the network, and hence objective \eqref{eq:obj_rnn} suffices. On the other hand, for a convolutional-type network, the past $n_T$ noisy spectra $\left\{ {\bf x}_i \right\}_{i=t-n_T+1}^t$ are considered to denoise the current spectrum, i.e. \begin{align} \min \sum_{t=1}^T ||{\bf y}_t - f({\bf x}_{t-n_T+1}, \cdots, {\bf x}_t)||_2^2. \end{align} We set $n_T = 8$. Hence, the input to the network corresponds to about 100ms of a speech segment, whereas the output spectrum of the network is of duration 32ms (see \figref{fig:CED}, \figref{fig:RCED}). \section{Convolutional Network Architectures} \label{sec:conv} \subsection{Convolutional Encoder-Decoder Network (CED)} The Convolutional Encoder-Decoder (CED) network proposed in \cite{vincent2010stacked} consists of symmetric encoding layers and decoding layers (see \figref{fig:CED}; each block represents a feature). The encoder consists of repetitions of a convolution, batch-normalization \cite{ioffe2015batch}, max-pooling, and a ReLU \cite{nair2010rectified} activation layer.
The decoder consists of repetitions of a convolution, batch-normalization, and an up-sampling layer. Typically, CED compresses the features along the encoder, and then reconstructs the features along the decoder. In our problem, the original Softmax layer at the last layer is replaced by a convolution layer, which makes CED a fully convolutional network. \subsection{Redundant CED Network (R-CED)} Here, we propose an alternative convolutional network architecture, namely the Redundant Convolutional Encoder-Decoder (R-CED) network. R-CED consists of repetitions of a convolution, batch-normalization, and a ReLU activation layer (see \figref{fig:RCED}; each block represents a feature). No pooling layer is present, and thus no upsampling layer is required. In contrast to CED, R-CED encodes the features into a higher dimension along the encoder and achieves compression along the decoder. The number of filters is kept symmetric: at the encoder, the number of filters is gradually increased, and at the decoder, the number of filters is gradually decreased. The last layer is a convolution layer, which makes R-CED a fully convolutional network. {\bf Cascaded R-CED Network (CR-CED):} The Cascaded Redundant Convolutional Encoder-Decoder (CR-CED) network is a variation of the R-CED network. It consists of repetitions of R-CED networks. Compared to an R-CED of the same network size (i.e. the same number of parameters), CR-CED achieves better performance with less convergence time. \subsection{Bypass Connections} For CED, R-CED, and CR-CED, bypass connections are added to the network to facilitate optimization in the training phase and improve performance. Between two different bypass schemes --- skip connections as in \cite{mao2016image} and residual connections as in \cite{he2015deep} --- we chose the skip connections of \cite{mao2016image}, which are more suitable for a symmetric encoder-decoder design.
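The additive bypass idea can be sketched in a few lines of NumPy (our own illustration, not the authors' code; function names are ours, and batch normalization is omitted for brevity): a `same'-padded 1-D convolution keeps the feature length fixed, so the input can simply be added back to the output two layers later.

```python
import numpy as np

def conv1d_same(x, w):
    """1-D convolution of feature vector x with filter w, using 'same'
    zero padding so the output has the same length as x."""
    pad = len(w) // 2
    xp = np.pad(x, (pad, len(w) - 1 - pad))
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def conv_relu(x, w):
    """One convolution + ReLU layer (batch normalization omitted)."""
    return np.maximum(conv1d_same(x, w), 0.0)

def block_with_skip(x, w1, w2):
    """Two conv layers with an additive bypass: the block input is added
    back to the block output, i.e. a skip connection every other layer."""
    return conv_relu(conv_relu(x, w1), w2) + x
```

Matching shapes are what make the addition possible; in the actual networks the bypass is likewise drawn between feature maps of identical size.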
Bypass connections are illustrated in \figref{fig:CED} and \figref{fig:RCED} as an `addition' operation symbol with an arrow. Bypass connections are added every other layer. \subsection{1-Dim Convolution Operation for Convolution Layers} At all convolution layers throughout the paper, convolution was performed in only one direction (see \figref{fig:1dimConvSpec}). In \figref{fig:1dimConvSpec}, the input ($3 \times 3$ white matrix) and the filter ($2 \times 3$ blue matrix) have the same dimension along the time axis, and convolution is performed along the frequency axis. We found this more efficient than 2-dim convolution (see \figref{fig:2dimConv}) for our input spectra ($129 \times 8$). \begin{figure} \caption{1-d Convolution} \label{fig:1dimConvSpec} \caption{2-d Convolution} \label{fig:2dimConv} \end{figure} \begin{figure*} \caption{Noisy Spectrogram} \label{fig:noisy_mag} \caption{Clean Spectrogram} \label{fig:clean_mag} \caption{Denoised Spectrogram} \label{fig:denoised_mag} \end{figure*} \begin{table*}[h!] \begin{tabular}{|c|c|c|c|} \hline & {\bf Layer Configuration }& {\bf Number of Filters }& {\bf Filter Width}\\ \hline \multirow{1}{*}{ {\bf CED, CED /w Bypass } } &Encoder: (Conv, BN, ReLU, Pool) $\times$ 5, & 12-16-20-24-32- & 13-11-9-7-5-\\ (11 Conv) &Decoder: (Conv, BN, ReLU, Upsample) $\times$ 5, Conv. & 24-20-16-12-8-1 & 7-9-11-13-8-129\\ \hline \multirow{1}{*}{{\bf R-CED, R-CED /w Bypass}} & \multirow{2}{*}{(Conv, ReLU, BN) $\times$ 9, Conv.} & 12-16-20-24-32- & 13-11-9-7-7-\\ (10 Conv)& & 24-20-16-12-1 & 7-9-11-13-129\\ \hline \multirow{1}{*}{{\bf R-CED, R-CED /w Bypass}} & \multirow{2}{*}{(Conv, ReLU, BN) $\times$ 15, Conv.} & 10-12-14-15-19-21-23-25- & 11-7-5-5-5-5-7-11- \\ (16 Conv)& & 23-21-19-15-14-12-10-1 & 7-5-5-5-5-7-11-129\\ \hline {\bf CR-CED (16 Conv)} & (Conv, ReLU, BN) $\times$ 15, Conv. & (18-30-8) $\times$ 5, 1 & (9-5-9) $\times$ 5, 129 \\ \hline \end{tabular} \caption{Network Configurations for CNNs: CED vs. R-CED vs.
CR-CED} \label{table:config_cnn} \end{table*} \section{Experimental Methods} \label{sec:exp} \subsection{Preprocessing} \hspace{0.4cm} {\bf Dataset:} The experiment was conducted on the TIMIT database \cite{garofolo1993darpa}, and 27 different types of noise clips were collected from a freely available online resource \cite{akkermans2011freesound}. The noise clips are mostly babble, but include other types of noise such as instrumental sounds. The data in both the training set (4620 utterances) and the testing set (200 utterances) were mixed with one of the 27 noise clips at 0dB SNR. After all feature transformation steps were completed, 20\% of the training features were assigned as the validation set. {\bf Feature Transformation:} The audio signals were down-sampled to 8kHz, and the silent frames were removed from the signal. The spectral vectors were computed using a 256-point Short Time Fourier Transform (32ms Hamming window) with a window shift of 64 points (8ms). The frequency resolution was 31.25 Hz (= 4kHz/128) per frequency bin. The 256-point STFT magnitude vectors were reduced to 129 points by removing the symmetric half. For FNN/RNN, the input feature consisted of a noisy STFT magnitude vector (size: 129$\times$1, duration: 32ms). For CNN, the input feature consisted of 8 consecutive noisy STFT magnitude vectors (size: $129\times8$, duration: 100ms). Both input features were standardized to have zero mean and unit variance. {\bf Phase Aware Scaling:} To avoid extreme differences (more than 45 degrees) between the noisy and the clean phase, the clean spectral magnitude was encoded similarly to \cite{mowlaee2013iterative}: \[ {\bf s}_{\text{phase aware}} = {\bf s}_{\text{clean}}\cos(\theta_{\text{clean}} - \theta_{\text{noisy}}). \] The spectral phase itself was not used in the training phase. At reconstruction, the noisy spectral phase was used instead to perform the inverse STFT and recover human speech.
Because the human ear is not sensitive to phase differences smaller than 45 degrees, the resulting distortion was negligible: through `phase aware scaling', the phase mismatch is kept smaller than 45 degrees. For all networks, the output feature consisted of a `phase aware' magnitude vector (size: 129$\times$1, duration: 32ms), standardized to have zero mean and unit variance. \subsection{Optimization} Fully connected and convolution layer weights were initialized as in \cite{glorot2010understanding}, and recurrent layer weights were initialized as in \cite{le2015simple}. Fully connected and recurrent layer weights were pretrained from networks of smaller depth with the same number of nodes. Convolution layers were trained from scratch, with the aid of a batch normalization layer \cite{ioffe2015batch} added after each convolution layer\footnote{We note that adding BN layers to FNN and RNN improved neither convergence nor performance in this experiment.}. All networks were trained using back propagation with gradient descent optimization using Adam \cite{Adam} with a mini-batch size of 64. The learning rate started from $lr = 0.0015$ with $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 1.0e^{-8}$. When the validation loss did not decrease for more than 4 epochs, the learning rate was decreased to $lr/2, lr/3, lr/4$, subsequently. The training was repeated once more for FNN and RNN with $\ell_2$ regularization ($\lambda=10^{-5}$), which slightly improved their performance. \subsection{Evaluation Metric} The Signal to Distortion Ratio (SDR) \cite{vincent2006performance} was used to measure the amount of $\ell_2$ error present between clean and denoised speech: \[ SDR := 10 \log_{10} \frac{ || {\bf y } ||^2 } { || f({\bf x} ) - {\bf y} || ^2 }. \] SDR is inversely associated with the objective function presented in \eqref{eq:obj_rnn}.
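The phase-aware target and the SDR metric described above can both be written out in a few lines of NumPy (a minimal sketch of our own; function and variable names are ours, not from the paper):

```python
import numpy as np

def phase_aware_target(clean_mag, clean_phase, noisy_phase):
    """Scale the clean magnitude by the cosine of the phase mismatch:
    s_phase_aware = s_clean * cos(theta_clean - theta_noisy)."""
    return clean_mag * np.cos(clean_phase - noisy_phase)

def sdr_db(clean, denoised):
    """Signal to Distortion Ratio in dB: clean-signal energy over the
    energy of the l2 error between the denoised and clean signals."""
    return 10.0 * np.log10(np.sum(clean ** 2)
                           / np.sum((denoised - clean) ** 2))
```

Note how a larger $\ell_2$ error shrinks the ratio inside the logarithm, which is the inverse relationship between SDR and objective \eqref{eq:obj_rnn} mentioned above.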
In addition, Short-Time Objective Intelligibility (STOI) \cite{taal2010short} and Perceptual Evaluation of Speech Quality (PESQ) \cite{rix2001perceptual} --- both of which assume that human perception has a short-term memory, and hence measure the error nonlinearly over the time of interest --- were used to measure the subjective quality of listening. \section{Experimental Setup} \label{sec:config} \subsection{Test 1: FNN vs. RNN vs. CNN} The first experiment compared CNN with FNN and RNN to demonstrate how feasible it is to use a CNN for speech enhancement. The network configurations (e.g. number of nodes, number of layers) that yielded the best performance for each network are summarized in \tref{table:config_nn}. The best FNN and RNN architectures have 4 fully connected (FC) layers, whereas the CNN has 16 convolutional layers. \begin{table}[b!] \begin{tabular}{|c|c|c|} \hline & \# of Layers & \# of Nodes \\ \hline \multirow{2}{*}{\bf FNN} & (FC, ReLU) $\times$ 3, & \multirow{2}{*}{1024-1024-1024}\\ & FC&\\ \hline \multirow{2}{*}{\bf RNN} & (Recurrent, ReLU) $\times$ 3, & \multirow{2}{*}{256-256-256}\\ & FC &\\ \hline \multirow{1}{*}{\bf CNN} & (Conv, ReLU, BN) $\times$ 15,& 10-12-14-15-19-21-23-25\\ (R-CED)& Conv &23-21-19-15-14-12-10-1\\ \hline \end{tabular} \caption{Network Config. for FNN vs. RNN vs. CNN} \label{table:config_nn} \end{table} \subsection{Test 2: CED vs. R-CED} In the second experiment, R-CED was compared to CED. For a fair comparison, the total number of parameters was fixed to 33,000 (roughly 132KB of memory), while the depth of the network was fixed to 10 convolution layers. The filter width per layer is determined such that i) the symmetric encoder-decoder structure is maintained, ii) the number of parameters is gradually increased and then decreased, and iii) the `frequency coverage' is equal for both networks. Here, `frequency coverage' refers to how many nearby frequency bins at the input are used to reconstruct a single frequency bin at the output.
We made sure that both networks utilize the same number of frequency bins to reconstruct a single frequency bin. The configurations for Test 2 are summarized in the top two rows of \tref{table:config_cnn}. \subsection{Test 3: Finding the Best R-CED Performance} In the third experiment, we tested how far the performance can be improved using the R-CED network. R-CED and CR-CED networks with skip connections of various network sizes and depths are compared. The network sizes (numbers of parameters) considered are 33K (132KB of memory) and 100K (400KB of memory). The network depths considered are 10, 16, and 20 convolution layers. The bottom 3 rows in \tref{table:config_cnn} summarize the network configurations for Test 3. \section{Results} \label{sec:result} \subsection{Test 1: FNN vs. RNN vs. CNN} \figref{fig:perf_dnn} illustrates the denoising performance of FNN, RNN and CNN (left), and the corresponding network sizes (= the number of parameters, right). All networks exhibited similar performance on both the subjective (STOI, PESQ) and objective (SDR) quality measures. On the other hand, the model size of the CNN was about 68 times smaller than that of the FNN and about 12 times smaller than that of the RNN. We note that the FNN and RNN were optimized to have the smallest network architectures. This experiment validates that a CNN requires far fewer parameters per layer due to its weight-sharing property, and yet can achieve similar or better performance compared to FNN and RNN. The 33,000 parameters of the CNN occupy roughly 132KB of memory, which can be implemented in an embedded system. Refer to \figref{fig:noisy_mag}, \figref{fig:clean_mag}, and \figref{fig:denoised_mag} for examples of the noisy spectrogram, clean spectrogram, and denoised spectrogram from the CNN, respectively. \subsection{Test 2: CED vs. R-CED} The denoising performance of CED and R-CED is shown in \figref{fig:perf} with the first 4 bars.
The R-CED with skip connections showed the best performance, whereas the CED without skip connections showed the worst performance. Regardless of the presence of skip connections, R-CED yielded better results than CED. The effect of the skip connection was prominent in CED (5.96 to 7.92). This implies that the decoder by itself could not reconstruct the `lost' information compressed at the encoder, unless the `lost' information was provided by the skip connections. In addition, the resulting speech from CED sounded artificial and mechanical. This confirms that the decoder could not reconstruct what is necessary for the audio to sound like human speech. On the other hand, the effect of the skip connection was not as notable in R-CED (8.07 to 8.19). This is because R-CED expands rather than compresses the input at the encoder, which can be viewed as mapping a spectrum to a higher dimension (cf. the kernel method). By generating redundant representations of the important features at the encoder, and removing unwanted features at the decoder, the speech quality was effectively enhanced. \subsection{Test 3: Finding the Best R-CED Network} The denoising performance of R-CED networks of various sizes and depths is presented in \figref{fig:perf} with the last five bars. A few interesting observations are: i) the network size was the dominant factor associated with network performance, ii) the network depth was secondary, and iii) CR-CED with skip connections yielded the best performance when the other conditions were kept the same (16 convolution layers, 33K parameters). \definecolor{color1}{RGB}{179,205,227} \definecolor{color2}{RGB}{140,150,198} \definecolor{color3}{RGB}{136,86,167} \definecolor{color4}{RGB}{129,15,124} \pgfplotscreateplotcyclelist{param-colors}{ {color1}, {color2}, {color3}, } \begin{figure} \caption{A Comparison of Denoising Performance and the Network Size for FNN vs. RNN vs.
CNN} \label{fig:perf_dnn} \end{figure} \definecolor{L10}{RGB}{203,201,226} \definecolor{L16}{RGB}{117,107,177} \definecolor{L20}{RGB}{84,39,143} \begin{figure} \caption{A Comparison of Denoising Performance and the Model Size for Different CNN Architectures} \label{fig:perf} \end{figure} \section{Conclusion} \label{sec:con} In this paper, we aimed to find a memory-efficient denoising method that can be implemented in an embedded system. Inspired by the past success of FNN and RNN, we hypothesized that a CNN can effectively denoise speech with a smaller network size owing to its weight-sharing property. We set up an experiment to denoise human speech corrupted by babble noise, which is a major discomfort to the users of hearing aids. Through experiments, we demonstrated that a CNN can yield similar or better performance with far fewer model parameters compared to FNN and RNN. Also, we proposed a new fully convolutional network architecture, R-CED, and showed its efficacy in speech enhancement. We observed that the success of R-CED is associated with the increasing dimension of the feature space along the encoder and the decreasing dimension along the decoder. We expect that R-CED can also be applied to other interesting domains. Our future work will include pruning the R-CED network to minimize the operation count of the convolutions. \end{document}
\begin{document} \title{\bf {\Lie}} \begin{abstract} The aim of this paper is to consider the relation between {\ensuremath{\mathsf{Lie}}}-isoclinism and isomorphism of two pairs of Leibniz algebras. We show that, unlike the absolute case for finite dimensional Lie algebras, these concepts are not identical, even if the pairs of Leibniz algebras are {\ensuremath{\mathsf{Lie}}}-stem. Moreover, throughout the paper we provide some conditions under which {\ensuremath{\mathsf{Lie}}}-isoclinism and isomorphism of {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras coincide. In order to obtain this equality, the concept of a factor set is studied as well.\\ \noindent {\it Keywords:} Leibniz algebras, {\ensuremath{\mathsf{Lie}}}-isoclinic pairs, {\ensuremath{\mathsf{Lie}}}-stem pairs, factor sets.\\ \noindent {\it Mathematics Subject Classification} 2010: 17A32, 18B99. \end{abstract} \section{Introduction} \label{intro} Isoclinism in group theory, an equivalence relation on groups which generalizes isomorphism, was first introduced by Hall \cite{RefH} for the purpose of classifying finite $p$-groups of small order. This concept was studied by several authors, including Tappe \cite{RefT} and Weichsel \cite{RefW}. In 1994, Moneyhun \cite{RefM} extended this concept to Lie algebras, producing a partition of the class of all Lie algebras into equivalence classes. Using this equivalence relation, she showed that every isoclinism family of Lie algebras contains at least one stem Lie algebra. Also, she proved that the concepts of isoclinism and isomorphism between Lie algebras of the same finite dimension are identical. The isoclinism of a pair of Lie algebras was studied by Moghaddam et al. \cite{RefMP} in $2009$. They generalised the first result of Moneyhun to pairs of Lie algebras. In addition, they showed that two pairs of finite dimensional stem Lie algebras are isoclinic if and only if they are isomorphic.
A treatment similar to the above, for central extensions of Lie algebras (in \cite{RefMSR}), presents, under some conditions, the same result. In the last decades, a prominent research line consists in the extension of properties from Lie algebras to Leibniz algebras, which are non-anti-commutative versions of Lie algebras \cite{RefL,RefLP}. In more detail, a vector space $ \mathfrak{q} $ equipped with a bilinear map $ [-, -] : \mathfrak{q} \times \mathfrak{q}\rightarrow \mathfrak{q}$ is called a Leibniz algebra if it satisfies the Leibniz identity: \[[x, [y, z]]=[[x, y], z] - [[x, z], y],~~~~x, y, z\in \mathfrak{q}.\] The investigations on Leibniz algebra theory show that some results of the theory of Lie algebras can be extended to the Leibniz setting. It is of interest to know whether the above mentioned works, in particular the equivalence between isoclinism and isomorphism in the presence of finite dimension \cite{RefMP,RefM}, are still true for Leibniz algebras. The main goal of this paper is to answer this question; for that we focus on the relative framework, that is, the context relative to the Liezation functor, as we explain below. The papers \cite{RefBC,RefCKh} initiated a study of properties of Leibniz algebras relative to the Liezation functor, which assigns to a Leibniz algebra $\mathfrak{q}$ the Lie algebra $\mathfrak{q}_{\ensuremath{\mathsf{Lie}}} = \mathfrak{q}/\langle \{[x,x]: x \in \mathfrak{q} \} \rangle$, as opposed to the absolute ones, corresponding to the abelianization functor. The origin of this point of view comes from the general theory of central extensions relative to a chosen subcategory of a base category introduced in \cite{RefJK} and considered in the context of semi-abelian categories relative to a Birkhoff subcategory in \cite{RefJMT}.
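For orientation, we note (a standard observation, not one of the results of this paper) how the Leibniz identity generalises the Jacobi identity: when the bracket is antisymmetric, $-[[x,z],y]=[y,[x,z]]$, so the Leibniz identity becomes
\[
[x,[y,z]]=[[x,y],z]+[y,[x,z]],
\]
which is a rearrangement of the Jacobi identity. Hence every Lie algebra is a Leibniz algebra, and the Liezation $\mathfrak{q}_{\ensuremath{\mathsf{Lie}}}$ is the universal Lie algebra quotient of the Leibniz algebra $\mathfrak{q}$.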
Continuing with this study, in the first section we introduce the concept of \textit{{\ensuremath{\mathsf{Lie}}}-isoclinism} for pairs of Leibniz algebras, which is an equivalence relation. In analogy with the case of pairs of Lie algebras \cite{RefMP}, we prove that every {\ensuremath{\mathsf{Lie}}}-isoclinism family of Leibniz algebras contains at least one {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebra, which is of the smallest dimension in the family, and we give some further results about this concept as well. In section $ 3 $, we use a function, named a factor set, which arises from non-abelian extensions of Leibniz algebras. Note that this function (without the name factor set) was given by Liu et al. \cite{RefLSW} to classify non-abelian extensions of Leibniz algebras by the second non-abelian cohomology of Leibniz algebras. Finally, in section $4$, we show that two {\ensuremath{\mathsf{Lie}}}-isoclinic ({\ensuremath{\mathsf{Lie}}}-stem) pairs of Leibniz algebras of the same finite dimension need not be isomorphic, and we provide some relevant counterexamples. Moreover, by using the concept of factor set, we present as our main result some conditions under which {\ensuremath{\mathsf{Lie}}}-isoclinism and isomorphism coincide for finite dimensional {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras. Throughout, all Leibniz algebras are considered over a fixed field $ \mathbb{K} $, unless otherwise stated. Our basic assumptions are the following. \begin{definition} Let $ \mathfrak{m} $ be a two-sided ideal of the Leibniz algebra $ \mathfrak{q} $; then $ (\mathfrak{m}, \mathfrak{q}) $ is said to be a \textit{pair} of Leibniz algebras.
\end{definition} \begin{definition} The \textit{{\ensuremath{\mathsf{Lie}}}-commutator} and the \textit{{\ensuremath{\mathsf{Lie}}}-center} of the pair $ (\mathfrak{m}, \mathfrak{q}) $ are the two-sided ideals of $\mathfrak{q}$ contained in $\mathfrak{m}$ given by \begin{center} $[\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=\langle\{ [m, q]+[q, m]\mid m\in\mathfrak{m}, q\in\mathfrak{q} \}\rangle $\\ $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})=\lbrace m\in \mathfrak{m} \mid [m,q]+[q, m]=0~{\rm for~all}~q\in \mathfrak{q}\rbrace = Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{q}) \cap \mathfrak{m}$. \end{center} \end{definition} \begin{remark} When $\mathfrak{m}=\mathfrak{q}$, then $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{q}, \mathfrak{q})$ coincides with the {\ensuremath{\mathsf{Lie}}}-center of $\mathfrak{q}$ given in \cite{RefCKh}. \end{remark} \section{{\ensuremath{\mathsf{Lie}}}-isoclinism of pairs of Leibniz algebras} \label{sec:1} We begin with the following definition, which is the corresponding relative version of the isoclinism of pairs of Lie algebras given in \cite{RefMP} (the absolute case for Lie algebras).
\begin{definition} \label{isoclinism1} The pairs of Leibniz algebras $ (\mathfrak{m}_i, \mathfrak{q}_i), i=1,2$, are said to be {\ensuremath{\mathsf{Lie}}}-isoclinic if there exist isomorphisms $\alpha:\mathfrak{q_1}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1}) \longrightarrow \mathfrak{q_2}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2})$ with $ \alpha (\mathfrak{m_1}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1}))= \mathfrak{m_2}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2}) $ and $ \beta: [\mathfrak{m_1}, \mathfrak{q_1}]_{\ensuremath{\mathsf{Lie}}}\longrightarrow[\mathfrak{m_2}, \mathfrak{q_2}]_{\ensuremath{\mathsf{Lie}}} $ such that the following diagram is commutative: \begin{equation} \label{isoclinism} \begin{CD} \frac{\mathfrak{m_1}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1})}\times \frac{\mathfrak{q_1}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1})}@>{C_1}>>[\mathfrak{m_1}, \mathfrak{q_1}]_{\ensuremath{\mathsf{Lie}}}\\ @VV{{\alpha_{|}}\times \alpha}V @VV{\beta}V\\ \frac{\mathfrak{m_2}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2})}\times \frac{\mathfrak{q_2}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2})}@>{C_2}>>[\mathfrak{m_2}, \mathfrak{q_2}]_{\ensuremath{\mathsf{Lie}}}, \end{CD} \end{equation} where $ C_i(\bar{m_i},\bar{q_i})=[m_i,q_i]+[q_i, m_i] $, for all ${\bar{m_i}}\in \frac{\mathfrak{m}_i}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_i, \mathfrak{q}_i)}$ and ${\bar{q_i}}\in \frac{\mathfrak{q}_i}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_i, \mathfrak{q}_i)}, i=1,2$. In this case, the pair $(\alpha, \beta)$ is called a {\ensuremath{\mathsf{Lie}}}-isoclinism between $(\mathfrak{m_1}, \mathfrak{q_1})$ and $ (\mathfrak{m_2}, \mathfrak{q_2}) $, and we write $ (\mathfrak{m_1}, \mathfrak{q_1}) \sim (\mathfrak{m_2}, \mathfrak{q_2})$.
\end{definition} The following proposition provides an equivalent condition for {\ensuremath{\mathsf{Lie}}}-isoclinism between two pairs of Leibniz algebras. \begin{proposition} Let $ {\pi}_i: \mathfrak{q}_i \twoheadrightarrow \frac{\mathfrak{q}_i}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_i, \mathfrak{q}_i)}, i= 1, 2$, be the canonical surjective homomorphisms, and let $\alpha:\mathfrak{q_1}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1}) \longrightarrow \mathfrak{q_2}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2})$ with $ \alpha (\mathfrak{m_1}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_1}, \mathfrak{q_1}))= \mathfrak{m_2}/Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m_2}, \mathfrak{q_2}) $ and $ \beta: [\mathfrak{m_1}, \mathfrak{q_1}]_{\ensuremath{\mathsf{Lie}}}\longrightarrow[\mathfrak{m_2}, \mathfrak{q_2}]_{\ensuremath{\mathsf{Lie}}} $ be isomorphisms. The pair $(\alpha, \beta)$ is a {\ensuremath{\mathsf{Lie}}}-isoclinism between $(\mathfrak{m_1}, \mathfrak{q_1})$ and $ (\mathfrak{m_2}, \mathfrak{q_2}) $ if and only if $\beta([m_1,q_1]+[q_1, m_1])=[m_2,q_2]+[q_2, m_2]$, whenever $ m_2\in \mathfrak{m_2}, q_2\in \mathfrak{q_2} $ satisfy $\alpha(\pi_1(q_1))=\pi_2(q_2)$ and $\alpha(\pi_1(m_1))=\pi_2(m_2)$. \end{proposition} \begin{proof} Direct checking. \end{proof} \begin{remark} When $\mathfrak{m}_i = \mathfrak{q}_i, i = 1, 2$, we recover the notion of {\ensuremath{\mathsf{Lie}}}-isoclinism of Leibniz algebras given in \cite{RefBC}. \end{remark} An immediate consequence of Definition \ref{isoclinism1} is the following \begin{corollary}\label{corollary1} Let the pairs of Leibniz algebras $( \mathfrak{m}, \mathfrak{q})$ and $( \mathfrak{n}, \mathfrak{p})$ be {\ensuremath{\mathsf{Lie}}}-isoclinic. Then $\mathfrak{m}$ and $\mathfrak{n}$ are {\ensuremath{\mathsf{Lie}}}-isoclinic.
\end{corollary} The following lemma yields information about the {\ensuremath{\mathsf{Lie}}}-isoclinism between a pair of Leibniz algebras and its quotient pair by a two-sided ideal. \begin{lemma}\label{lemma1} Let $ (\mathfrak{m}, \mathfrak{q})$ be a pair of Leibniz algebras and $ \mathfrak{n} $ a two-sided ideal of $ \mathfrak{q} $ contained in $ \mathfrak{m} $. Then $ (\frac{\mathfrak{m}}{\mathfrak{n}}, \frac{\mathfrak{q}}{\mathfrak{n}}) \sim (\frac{\mathfrak{m}}{\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}}, \frac{\mathfrak{q}}{\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}})$. In particular, $\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=0 $ if and only if $ (\frac{\mathfrak{m}}{\mathfrak{n}}, \frac{\mathfrak{q}}{\mathfrak{n}}) \sim (\mathfrak{m}, \mathfrak{q}) $. \end{lemma} \begin{proof} We set $ \overline{\mathfrak{q}}=\frac{\mathfrak{q}}{\mathfrak{n}}, \overline{\mathfrak{m}}=\frac{\mathfrak{m}}{\mathfrak{n}}, \widetilde{\mathfrak{q}}=\frac{\mathfrak{q}}{\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}}$ and $\widetilde{\mathfrak{m}}=\frac{\mathfrak{m}}{\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}}$.
It is easy to check that the map $\gamma : \widetilde{\mathfrak{q}} \to \overline{\mathfrak{q}}$ given by $\gamma(\widetilde{q}) = \gamma(q + (\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}})) = \overline{q} = q + \mathfrak{n}$ is a surjective homomorphism such that $\gamma\left( Z_{\ensuremath{\mathsf{Lie}}}( \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}}) \right) = Z_{\ensuremath{\mathsf{Lie}}}(\overline{\mathfrak{m}}, \overline{\mathfrak{q}})$; hence it induces a surjective homomorphism $\alpha: \dfrac{ \widetilde{\mathfrak{q}}}{Z_{\ensuremath{\mathsf{Lie}}}( \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}})}\twoheadrightarrow \dfrac{\overline{\mathfrak{q}}}{Z_{\ensuremath{\mathsf{Lie}}}(\overline{\mathfrak{m}}, \overline{\mathfrak{q}})}$, given by $\alpha ( \widetilde{q}+Z_{\ensuremath{\mathsf{Lie}}}( \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}}))=\overline{q}+Z_{\ensuremath{\mathsf{Lie}}}(\overline{\mathfrak{m}}, \overline{\mathfrak{q}})$. Moreover $\alpha$ is injective, because $\alpha(\widetilde{q}+Z_{\ensuremath{\mathsf{Lie}}}( \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}}))=0$ implies that $\overline{q} \in Z_{\ensuremath{\mathsf{Lie}}}(\overline{\mathfrak{m}}, \overline{\mathfrak{q}})$, that is, $\overline{q} = \gamma(\widetilde{q})$ with $\widetilde{q} \in {Z_{\ensuremath{\mathsf{Lie}}}( \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}})}$. Consequently, $\alpha$ is an isomorphism. On the other hand, the restriction of $\gamma$ provides the surjective homomorphism $\beta:[ \widetilde{\mathfrak{m}}, \widetilde{\mathfrak{q}}]_{\ensuremath{\mathsf{Lie}}} \twoheadrightarrow [\overline{\mathfrak{m}}, \overline{\mathfrak{q}}]_{\ensuremath{\mathsf{Lie}}},$ given by $\beta([ \widetilde{m}, \widetilde{q}]+[\widetilde{q}, \widetilde{m}])=[\overline{m},\overline{q}]+[\overline{q},\overline{m}]$. Moreover it is easy to check that $\beta$ is injective.
Now the commutativity of diagram (\ref{isoclinism}) is obvious. For the second statement, if $\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=0 $ then $ \widetilde{\mathfrak{m}}\cong \mathfrak{m} $ and $\widetilde{\mathfrak{q}}\cong \mathfrak{q}$, and so $(\frac{\mathfrak{m}}{\mathfrak{n}}, \frac{\mathfrak{q}}{\mathfrak{n}}) \sim (\mathfrak{m}, \mathfrak{q}) $. Conversely, the isomorphism $\beta : [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} \to [\overline{\mathfrak{m}}, \overline{\mathfrak{q}}]_{\ensuremath{\mathsf{Lie}}}$ is actually induced by the canonical projection $\pi : \mathfrak{q} \twoheadrightarrow \frac{\mathfrak{q}}{\mathfrak{n}}$ with ${\sf Ker}(\beta) = \mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}$, so the injectivity of $\beta$ forces $\mathfrak{n} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=0$. \end{proof} \begin{definition} The pair of Leibniz algebras $(\mathfrak{m}, \mathfrak{q})$ is said to be a \textit{{\ensuremath{\mathsf{Lie}}}-stem pair} of Leibniz algebras when $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\subseteq [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} $. \end{definition} \begin{example} An example of a {\ensuremath{\mathsf{Lie}}}-stem pair is provided by the three-dimensional Leibniz algebra $\mathfrak{q}$ with basis $\{a_1,a_2,a_3\}$ and bracket operation given by $[a_1,a_3]=a_1, [a_2,a_3]=a_2$ (class 3 {\it a)} in \cite{RefCIL}), and the two-sided ideal $\mathfrak{m} = span \{a_1\}$. Obviously $0 =Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m},\mathfrak{q})\subseteq [\mathfrak{m},\mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} = span \{a_1\}$. \end{example} \begin{proposition}\label{proposition2} The pair of Leibniz algebras $(\mathfrak{m}, \mathfrak{q})$ is a {\ensuremath{\mathsf{Lie}}}-stem pair if and only if the unique two-sided ideal $\mathfrak{s}$ of $\mathfrak{q}$ such that $\mathfrak{s} \subseteq \mathfrak{m}$ and $\mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0$ is the trivial one.
\end{proposition} \begin{proof} Let $ \mathfrak{s} $ be a two-sided ideal of $ \mathfrak{q}$ such that $\mathfrak{s}\subseteq \mathfrak{m}$ and $ \mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0$; then $ \mathfrak{s}\subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ and so $ \mathfrak{s}=\mathfrak{s} \cap Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\subseteq \mathfrak{s} \cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=0$.\\ Conversely, assume on the contrary that $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\nsubseteq [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}$; then there exists $z\in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\backslash [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}, z \neq 0$. Let $ \mathfrak{s}$ be the two-sided ideal of $\mathfrak{q}$ spanned by $z$, which is contained in $\mathfrak{m}$ and satisfies $ \mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0 $. Hence, by the assumption, $ \mathfrak{s}=0$, which contradicts $z \neq 0$. \end{proof} \vspace*{-.23cm} Plainly, {\ensuremath{\mathsf{Lie}}}-isoclinism between pairs of Leibniz algebras is an equivalence relation (see for instance \cite{RefBC}). By a {\ensuremath{\mathsf{Lie}}}-isoclinic family we mean an equivalence class of pairs of Leibniz algebras under this relation. The following Theorem ensures the existence of a {\ensuremath{\mathsf{Lie}}}-stem pair in every {\ensuremath{\mathsf{Lie}}}-isoclinic family of pairs of Leibniz algebras. \begin{theorem}\label{theorem1} Every {\ensuremath{\mathsf{Lie}}}-isoclinic family $\mathcal{C}$ of pairs of Leibniz algebras contains at least one {\ensuremath{\mathsf{Lie}}}-stem pair of Leibniz algebras.
\end{theorem} \begin{proof} Let $ (\mathfrak{m}, \mathfrak{q}) $ be an arbitrary pair of Leibniz algebras in $ \mathcal{C} $ and $ \mathcal{A}=\lbrace\mathfrak{s} \mid \mathfrak{s}\trianglelefteq \mathfrak{q}, \mathfrak{s}\subseteq \mathfrak{m}, \mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0 \rbrace $. The set $ \mathcal{A}$ is non-empty because it contains at least the zero ideal. We order $\mathcal{A}$ partially by inclusion and, by Zorn's lemma, we can find a maximal two-sided ideal $ \mathfrak{s}$ in $\mathcal{A}$. Since $ \mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0 $, it follows from Lemma \ref{lemma1} that $(\mathfrak{m/s}, \mathfrak{q/s})\in \mathcal{C}$. Now, suppose that $ \mathfrak{h/s} $ is a two-sided ideal of $ \mathfrak{q/s}$ contained in $ \mathfrak{m/s} $ such that $ \mathfrak{h/s}\cap [\mathfrak{m/s}, \mathfrak{q/s}]_{\ensuremath{\mathsf{Lie}}} =0$. Note that such an $\mathfrak{h/s}$ always exists; for instance $\mathfrak{h}= \mathfrak{s}$. Then $ \mathfrak{h}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\subseteq \mathfrak{s}$, so $ \mathfrak{h}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\subseteq \mathfrak{s}\cap [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}} =0$ and so $ \mathfrak{h}\in \mathcal{A} $. Moreover, $ \mathfrak{s}\subseteq \mathfrak{h} $, so by the maximality of $ \mathfrak{s} $, it follows that $\mathfrak{h}=\mathfrak{s} $ and then $ \mathfrak{h/s}=0 $. Therefore, by virtue of Proposition \ref{proposition2}, $(\mathfrak{m/s}, \mathfrak{q/s}) $ is a {\ensuremath{\mathsf{Lie}}}-stem pair of Leibniz algebras, as required. \end{proof} One of the main results in this paper is the following \begin{theorem}\label{theorem2} Let $\mathcal{C} $ be a {\ensuremath{\mathsf{Lie}}}-isoclinic family of finite dimensional pairs of Leibniz algebras and $ (\mathfrak{n}, \mathfrak{p})\in \mathcal{C}$.
Then $ (\mathfrak{n}, \mathfrak{p})$ is a {\ensuremath{\mathsf{Lie}}}-stem pair if and only if ${\rm dim}(\mathfrak{p})= {\rm min} \lbrace {\rm dim}(\mathfrak{q}) \mid (\mathfrak{m}, \mathfrak{q})\in \mathcal{C}\rbrace$. \end{theorem} \begin{proof} Let $ (\mathfrak{m}, \mathfrak{q})$ and $(\mathfrak{n}, \mathfrak{p})$ be arbitrary pairs in $\mathcal{C}$ and assume that $(\mathfrak{n}, \mathfrak{p})$ is a {\ensuremath{\mathsf{Lie}}}-stem pair. Then we have \begin{align*} \frac{[\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}}{[\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\cap Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}&\cong \frac{[\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\\ &\cong \left[\frac{\mathfrak{m}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})},\frac{\mathfrak{q}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\right]_{\ensuremath{\mathsf{Lie}}}\\ &\cong \left[\frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})},\frac{\mathfrak{p}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \right]_{\ensuremath{\mathsf{Lie}}}\\ &\cong \frac{[\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}, \end{align*} and $[\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\cong [\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}$. Therefore ${\rm dim}\left(Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\right)= {\rm dim} \left([\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\cap Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\right)\leq {\rm dim} \left( Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\right)$.
On the other hand $\frac{\mathfrak{p}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \cong \frac{\mathfrak{q}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}$. Therefore ${\rm dim} \left( \mathfrak{p} \right) \leq {\rm dim} \left( \mathfrak{q}\right)$. Conversely, let $(\mathfrak{n}, \mathfrak{p})$ be in the family $ \mathcal{C} $ such that $\mathfrak{p} $ has the minimum dimension. Owing to Theorem \ref{theorem1} there is a two-sided ideal $\mathfrak{t}$ of $\mathfrak{p}$ contained in $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ such that $(\mathfrak{n}, \mathfrak{p})\sim (\frac{\mathfrak{n}}{\mathfrak{t}}, \frac{\mathfrak{p}}{\mathfrak{t}})$ and $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})= \left([\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}} \cap Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}) \right)\oplus \mathfrak{t}$. But $\mathfrak{p}$ has minimum dimension, which implies that $\mathfrak{t}=0$; therefore $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\subseteq [\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}$ and this completes the proof. \end{proof} The above Theorem provides the following interesting consequence, which will be used in the sequel and in our main result in the last section. \begin{corollary} \label{corollary2} If $(\mathfrak{m}, \mathfrak{q})$ and $(\mathfrak{n}, \mathfrak{p})$ are two {\ensuremath{\mathsf{Lie}}}-isoclinic {\ensuremath{\mathsf{Lie}}}-stem pairs of Leibniz algebras, then $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$. \end{corollary} \begin{proof} Let $(\mathfrak{m}, \mathfrak{q})$ and $(\mathfrak{n}, \mathfrak{p})$ be two {\ensuremath{\mathsf{Lie}}}-isoclinic pairs of Leibniz algebras.
In view of the proof of Theorem \ref{theorem2} and the isomorphism $\beta: [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\longrightarrow[\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}$, we have the following commutative diagram with exact rows: $$\begin{CD} 0@>>> Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})@>>> [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}@>{\pi_1}>> \frac{ [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}@>>>0\\ && @VV{\beta_{|}}V @V{\wr}V{\beta}V @V{\wr}V{\alpha}V\\ ~~ 0@>>> Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})@>>>[\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}@>{\pi_2}>> \frac{[\mathfrak{n}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} @>>>0, \end{CD}$$\\ where $\beta \left( Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}) \right) \subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$, since for all $x\in [\mathfrak{m}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}$, $\alpha \left(x+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\right) = \beta (x) + Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$. Hence, for $x \in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$, $0 = \alpha \left( \pi_1(x) \right) = \beta(x) + Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$, so $\beta(x) \in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$. Now the Snake Lemma \cite{RefBB} yields that $\beta_{|}$ is a surjective homomorphism, and so $\beta ( Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}))= Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$. Moreover, the left-hand square is a pull-back diagram, hence $\beta_{|}$ is a monomorphism. Therefore $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$.
\end{proof} \section{Factor sets of a pair of Leibniz algebras} \label{sec:2} Chevalley and Eilenberg defined factor sets for Lie algebras in \cite{RefCE}. In this section we recall factor sets for Leibniz algebras from \cite{RefLSW} and analyze their interplay with the concepts relative to the Liezation functor for a pair of Leibniz algebras. Let $0\longrightarrow \mathfrak{m}\xrightarrow {\subseteq}\mathfrak{q}\xrightarrow{\pi}\mathfrak{q^*} \longrightarrow 0$ be a non-abelian extension of Leibniz algebras \cite[Definition 2.5]{RefLSW}. We choose a splitting $\tau: \mathfrak{q^*}\longrightarrow \mathfrak{q}$, that is, a linear map such that $ \pi \circ \tau = {\sf Id}_{\mathfrak{q^*}}$. For each $x\in \mathfrak{q^*}$ we have two linear maps $L_x, R_x: \mathfrak{m}\longrightarrow \mathfrak{m}$ given by $L_x(m)= [m, \tau(x)]$ and $R_x(m)=[\tau(x), m]$. Associated to any pair of elements $ x, y\in \mathfrak{q^*} $, there is an element $f(x, y)\in \mathfrak{m}$ given by $$f(x, y) = [\tau(x), \tau(y)]-\tau ([x, y]).$$ The bilinear map $f$ is called the {\it factor set} corresponding to the splitting $\tau$. For any $ x, y, z\in \mathfrak{q^*}$, the following identities concerning factor sets hold: \begin{align*} [[\tau(x), \tau(y)], \tau(z)]&=[f(x, y), \tau(z)]+[\tau([x,y]), \tau(z)]\\ &=L_z(f(x,y))+f([x,y], z)+\tau([[x, y], z]), \end{align*} \begin{align*} [[\tau(x), \tau(z)], \tau(y)]&=[f(x, z), \tau(y)]+[\tau([x,z]), \tau(y)]\\ &=L_y(f(x,z))+f([x,z], y)+\tau([[x, z], y]), \end{align*} \begin{align*} [\tau(x), [\tau(y), \tau(z)]]&=[\tau(x), f(y, z)]+[\tau(x), \tau([y, z])]\\ &=R_x(f(y, z))+f(x, [y, z])+\tau([x, [y, z]]).
\end{align*} From the above identities and the Leibniz identity, the following equation is immediately derived: \begin{equation} \label{factor set id} f([x,y], z)-f([x,z], y)-f(x, [y, z])+ L_z(f(x,y))- L_y(f(x,z))-R_x(f(y, z))=0. \end{equation} Given a splitting $\tau$, we define on the $\mathbb{K}$-vector space $\mathfrak{m} \oplus \mathfrak{q^*}$ the bracket operation \begin{equation*} [(m_1, x_1), (m_2, x_2)] = ([m_1, m_2]+R_{x_1}(m_2)+L_{x_2}(m_1)+f(x_1, x_2), [x_1, x_2]). \end{equation*} A routine computation, keeping in mind equation (\ref{factor set id}), shows that $(\mathfrak{m} \oplus \mathfrak{q^*}, [-,-])$ is a Leibniz algebra, which will be denoted $ \mathfrak{m}\times_f \mathfrak{q^*}$. Note that $\tau$ is a homomorphism if and only if $f(x, y) = 0$ for all $x, y\in \mathfrak{q^*}$; that is, $f$ measures the failure of $\tau$ to be a homomorphism. If the exact sequence $0\longrightarrow \mathfrak{m}\xrightarrow {i}\mathfrak{q}\xrightarrow{\psi}\mathfrak{q^*} \longrightarrow 0$ splits by a homomorphism $\tau : \mathfrak{q^*} \to \mathfrak{q}$, then $\mathfrak{m}$ is endowed with an action of $\mathfrak{q^*}$ given by $[x,m]=[\tau(x), i(m)]_{\mathfrak{q}}, [m,x]=[i(m), \tau(x)]_{\mathfrak{q}}, m \in \mathfrak{m}, x \in \mathfrak{q^*}$. Hence the semi-direct product $\mathfrak{m} \rtimes \mathfrak{q^*}$ can be constructed, which gives rise to the split extension $0\longrightarrow \mathfrak{m}\xrightarrow{} \mathfrak{m} \rtimes \mathfrak{q^*} \xrightarrow{pr}\mathfrak{q^*} \longrightarrow 0$ \cite{RefLP}. \begin{definition} We say that two pairs of Leibniz algebras $(\mathfrak{m}, \mathfrak{q})$ and $(\mathfrak{n}, \mathfrak{p})$ are isomorphic if there exists an isomorphism $ \varphi: \mathfrak{q}\rightarrow \mathfrak{p} $ such that $ \varphi_{|{\mathfrak{m}}}:\mathfrak{m}\cong \mathfrak{n} $. \end{definition} \begin{lemma}\label{lemma2} Let $(\mathfrak{m}, \mathfrak{q})$ be a pair of Leibniz algebras.
Then there exists a factor set $f:\dfrac{\mathfrak{q}}{\mathfrak{m}}\times \dfrac{\mathfrak{q}}{\mathfrak{m}}\longrightarrow \mathfrak{m}$ such that $\left( {\sf Ker}(\pi), \mathfrak{m}\times_f \dfrac{\mathfrak{q}}{\mathfrak{m}} \right) \cong \left( \mathfrak{m}, \mathfrak{q} \right)$, where $\pi : \mathfrak{m}\times_f \dfrac{\mathfrak{q}}{\mathfrak{m}} \twoheadrightarrow \dfrac{\mathfrak{q}}{\mathfrak{m}}$ is the canonical projection. \end{lemma} \begin{proof} Let $ \mathfrak{h} $ be a vector space complement of $ \mathfrak{m} $ in $ \mathfrak{q} $ and let $ \rho :\dfrac{\mathfrak{q}}{\mathfrak{m}}\longrightarrow \mathfrak{q} $ be the linear map given by $ \rho(\bar{x})=h$, where $ x=h+m $ with $h\in \mathfrak{h}$ and $m\in \mathfrak{m}$; it is a splitting of the extension $0 \to \mathfrak{m} \to \mathfrak{q} \overset{pr} \to \dfrac{\mathfrak{q}}{\mathfrak{m}} \to 0$. Now, we define $ f:\dfrac{\mathfrak{q}}{\mathfrak{m}}\times \dfrac{\mathfrak{q}}{\mathfrak{m}}\longrightarrow \mathfrak{m} $ by $ f(\bar{x}, \bar{y})=[\rho(\bar{x}), \rho(\bar{y})] -\rho([\bar{x}, \bar{y}])$, which is well-defined and takes values in $\mathfrak{m}$ since $pr([\rho(\bar{x}), \rho(\bar{y})])=[\bar{x}, \bar{y}]= pr (\rho([\bar{x}, \bar{y}])) $. Hence $ f $ is a factor set and $\mathfrak{m}\times_f \dfrac{\mathfrak{q}}{\mathfrak{m}} $ is isomorphic to $\mathfrak{q}$ via $(m, \bar{x})\longmapsto m+\rho (\bar{x})$. Also, it is easy to check that this isomorphism restricts to ${\sf Ker}(\pi) \cong \mathfrak{m}$, and this completes the proof.
\end{proof} For the pair of Leibniz algebras $ ( \mathfrak{m}, \mathfrak{q}) $ we consider the extension \[0\longrightarrow {Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\xrightarrow {\subseteq}\mathfrak{m}\xrightarrow{\pi}{\mathfrak{m}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} \longrightarrow 0 \] and the factor set $ f $ corresponding to the splitting $ \tau:{\mathfrak{m}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\longrightarrow \mathfrak{m} $ given by Lemma \ref{lemma2}. Moreover $ {Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\times_f {\mathfrak{m}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} $ is a Leibniz algebra isomorphic to $\mathfrak{m}$. We henceforth assume that ${\mathfrak{m}}_f:=Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}) \times_f {\mathfrak{m}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}$ is given as just described. It is easy to see that the following map is an isomorphism: \begin{equation*} \kappa: Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}=\lbrace (z, 0) \mid z\in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\rbrace, ~~~~ z\longmapsto (z, 0). \end{equation*} This notation and the previous Lemma give rise to the following \begin{proposition} \label{proposition3} Let $ (\mathfrak{m}, \mathfrak{q}) $ and $ (\mathfrak{n}, \mathfrak{p}) $ be two {\ensuremath{\mathsf{Lie}}}-isoclinic {\ensuremath{\mathsf{Lie}}}-stem pairs of Leibniz algebras.
Then there exists a factor set $ f :{\mathfrak{m}}/{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}\times {\mathfrak{m}}/{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}\to Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ such that $$ \mathfrak{n}\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\times_f \dfrac{\mathfrak{m}}{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}={\mathfrak{m}}_f.$$ \end{proposition} \begin{proof} Let the pair $(\alpha, \beta) $ be the {\ensuremath{\mathsf{Lie}}}-isoclinism between $(\mathfrak{m}, \mathfrak{q})$ and $ (\mathfrak{n}, \mathfrak{p}) $. By Corollary \ref{corollary2} we have the isomorphism $ \beta : {Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} \cong {Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} $, and owing to Lemma \ref{lemma2} there exists a factor set $g:{\mathfrak{n}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}\times {\mathfrak{n}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}$ $\longrightarrow {Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}$ such that $ {\mathfrak{n}} \cong {\mathfrak{n}}_g$. Hence we define the map $ f : {\mathfrak{m}}/{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}\times {\mathfrak{m}}/{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ given by $ f(\bar{m_1}, \bar {m_2})=\beta^{-1}\big(g( \alpha \times \alpha(\bar{m_1}, \bar{m_2}))\big)$, for all $ \bar{m_i}\in {\mathfrak{m}}/{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}, i= 1, 2 $, which is $\beta^{-1}$ of the factor set corresponding to $\rho_2 \circ \alpha$, where $\rho_2$ is the splitting of $\pi_2 : \mathfrak{n} \twoheadrightarrow{\mathfrak{n}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}$.
One readily sees that the mapping \begin{equation*} \theta : Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\times_f \frac{\mathfrak{m}}{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\times_g \frac{\mathfrak{n}}{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}} \end{equation*} defined by $ \theta (z_1, \bar{m_1})=(\beta (z_1), \alpha (\bar{m_1})), z_1 \in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}), \bar{m_1} \in \frac{\mathfrak{m}}{{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}}$, is an isomorphism, as required. \end{proof} We close this section with the following Lemma and Proposition, which are of interest in their own right. \begin{lemma} \label{lemma3} Let $ f $ and $ g $ be two factor sets on the pairs of Leibniz algebras $ (\mathfrak{m}, \mathfrak{q}) $ and $ (\mathfrak{n}, \mathfrak{p}) $, respectively. If $ \eta: \mathfrak{m}_f\longrightarrow \mathfrak{n}_g$ is an isomorphism such that $ \eta(Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f})=Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}$, then $ \eta $ induces isomorphisms $ \eta_{1}: \mathfrak{m}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow \mathfrak{n}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}) $ and $ \eta_{2}:Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}) $.
\end{lemma} \begin{proof} We consider the following commutative diagram: $$\begin{CD} 0@>>> Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}@>>>\mathfrak{m}_f@>{\pi_{1}}>> \dfrac{\mathfrak{m}_f}{Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}}@>>>0\\ && @VV{\eta_{\mid}}V@VV{\eta}V @VV{\bar{\eta}}V\\ ~~ 0@>>> Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}@>>>\mathfrak{n}_g@>{\pi_{2}}>> \dfrac{ \mathfrak{n}_g}{Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}} @>>>0, \end{CD}$$\\ where $\bar{\eta}\left((z, \bar{m})+Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}\right)=\eta (z, \bar{m})+Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}$. Now we define $ \eta_1: \mathfrak{m}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow \mathfrak{n}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ by the rule $ \bar{\eta}((0, \bar{m})+Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f})=\eta(0, \bar{m})+Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}=(0, \eta_1(\bar{m}))+Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g} $, for all $ \bar{m}\in \mathfrak{m}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}) $, and $\eta_{2}:Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ by the rule $\eta(z,0)=(\eta_2(z), 0)$, for any $z \in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$. It is easy to check that $\eta_{1} $ and $ \eta_{2} $ are isomorphisms. \end{proof} \begin{proposition}\label{proposition4} Let $f$ and $g$ be two factor sets on the pairs of Leibniz algebras $(\mathfrak{m}, \mathfrak{q})$ and $(\mathfrak{n}, \mathfrak{p})$, respectively. Let $\eta $, $ \eta_{1}$ and $\eta_{2} $ be as in Lemma \ref{lemma3}.
Then there exists a linear map $ d: \mathfrak{n}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ such that \begin{equation*} \label{formula} \begin{aligned}[] &\eta_2 {\big([z_1, z_2]+[\mu(\bar{{m}_1}), z_2]+[z_1, \mu(\bar{{m}_2})]+f(\bar{{m}_1}, \bar{{m}_2})\big)}+d(\eta_1([\bar{{m}_1}, \bar{{m}_2}]))=\\ &[\eta_2 (z_1)+d(\eta_1(\bar{{m}_1})), \eta_2 (z_2)+d(\eta_1(\bar{{m}_2}))]+[\eta_2 (z_1)+d(\eta_1(\bar{{m}_1})), \nu(\eta_1{(\bar{{m}_2})}) ]+ \\ &[\nu(\eta_1{(\bar{{m}_1})}), \eta_2 (z_2)+d(\eta_1(\bar{{m}_2}))]+g(\eta_{1}(\bar{m_1}), \eta_{1}(\bar{m_2})), \end{aligned} \end{equation*} for all $(z_1, \bar{m_1}), (z_2, \bar{m_2})\in \mathfrak{m}_f$, where $\mu$ and $\nu$ are the splittings of $\pi_1: \mathfrak{m} \twoheadrightarrow {\mathfrak{m}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}$ and $\pi_2: \mathfrak{n} \twoheadrightarrow {\mathfrak{n}}/{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}$ associated to $f$ and $g$, respectively. \end{proposition} \begin{proof} By Lemma \ref{lemma3}, for all $\bar{m}\in \mathfrak{m}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ there exists $z_{\eta_1(\bar{m})}\in Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g} $ such that $ \eta (0, \bar{m})=(0, \eta_{1}(\bar{m}))+(z_{\eta_1(\bar{m})},0) $. We define the linear map $ d:\frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ by $d(\eta_1(\bar{m}))=z_{\eta_1(\bar{m})}$. So we have $\eta(z, \bar{m})=\eta(z,0)+\eta(0, \bar{m}) =(\eta_{2}(z), 0) +(0, \eta_{1}(\bar{m})) +(d(\eta_1({\bar{m}})),0)$.
Applying this equality, for all $ (z_1, \bar{m_1}), (z_2, \bar{m_2})\in \mathfrak{m}_f $, one gets \begin{align*} \eta([(z_1, \bar{{m}_1}), (z_2, \bar{{m}_2})])=&\eta ([z_1, z_2]+[\mu(\bar{{m}_1}), z_2]+[z_1, \mu(\bar{{m}_2})]+f(\bar{{m}_1}, \bar{{m}_2}), [\bar{{m}_1}, \bar{{m}_2}])\\ =& \left( \eta_2 {\left([z_1, z_2]+[\mu(\bar{{m}_1}), z_2]+[z_1, \mu(\bar{{m}_2})]+f(\bar{{m}_1}, \bar{{m}_2})\right)} \right.\\ +& \left. d(\eta_1([\bar{{m}_1}, \bar{{m}_2}])), {{\eta}_1}([\bar{{m}_1}, \bar{{m}_2}])\right). \end{align*} On the other hand, \begin{align*} \eta([(z_1, \bar{{m}_1}), (z_2, \bar{{m}_2})])=& [\eta(z_1, \bar{{m}_1}), \eta(z_2, \bar{{m}_2})]\\ = &[(\eta_2 (z_1)+d(\eta_1(\bar{{m}_1})), \eta_1{(\bar{{m}_1})}), (\eta_2 (z_2)+d(\eta_1(\bar{{m}_2})), \eta_1{(\bar{{m}_2})})]\\ = & \left( [\eta_2 (z_1)+d(\eta_1(\bar{{m}_1})), \eta_2 (z_2)+d(\eta_1(\bar{{m}_2}))] \right.\\ + & [\eta_2 (z_1)+d(\eta_1(\bar{{m}_1})), \nu(\eta_1{(\bar{{m}_2})})] \\ + & [\nu(\eta_1{(\bar{{m}_1})}), \eta_2 (z_2)+d(\eta_1(\bar{{m}_2}))]\\ + & \left. g(\eta_{1}(\bar{m_1}), \eta_{1}(\bar{m_2})), [\eta_{1}(\bar{m_1}), \eta_{1}(\bar{m_2})] \right). \end{align*} Now the statement follows from the equality of the first components of both computations. \end{proof} Under the assumptions of Propositions \ref{proposition3} and \ref{proposition4}, we have the following isomorphisms between pairs of Leibniz algebras: \[(Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}), \mathfrak{m}_f) \cong (Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}), \mathfrak{n}_g) \cong (Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}), \mathfrak{n}). \] \section{{\ensuremath{\mathsf{Lie}}}-isoclinism and isomorphism between pairs of Leibniz algebras} \label{sec:3} If two pairs of Leibniz algebras are isomorphic, it is easy to check that they are {\ensuremath{\mathsf{Lie}}}-isoclinic.
But in this section we show that the converse is not necessarily valid for finite dimensional ({\ensuremath{\mathsf{Lie}}}-stem) Leibniz algebras, whereas {\ensuremath{\mathsf{Lie}}}-isoclinism and isomorphism coincide for finite dimensional (stem) Lie algebras \cite{RefM} and for pairs of (stem) Lie algebras \cite{RefMP}. Nevertheless, we provide some conditions under which these concepts coincide for finite dimensional {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras. \begin{example} Let $\mathfrak{q}=span \lbrace a_{1}, a_{2}, a_{3}\rbrace $, with non-zero multiplication $ [a_{1}, a_{3}]= a_{1} $ (it belongs to the class 2 (d)), where $ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{q})=span \lbrace a_{2 }\rbrace $ and $ [\mathfrak{q}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=span \lbrace a_{1}\rbrace$, and let $\mathfrak{p}= span \lbrace g_{1}, g_{2}, g_{3}\rbrace$, with non-zero multiplications $[g_{1}, g_{3}]=g_{1}, [g_{2}, g_{3}]=g_{2}$ and $[g_{3}, g_{2}]= -g_{2}$ (it belongs to the class 2 (e) with $\alpha=1$), where $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{p})=span \lbrace g_{2 }\rbrace$ and $[\mathfrak{p}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}=span \lbrace g_{1}\rbrace$. Now we define isomorphisms $\omega: \frac{\mathfrak{q}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{q})}\longrightarrow \frac{\mathfrak{p}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{p})}$ given by $\omega ({a_{i}})={g_{i}},~ i=1, 3$, and $ \tau: [\mathfrak{q}, \mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\longrightarrow [\mathfrak{p}, \mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}$ given by $\tau (a_{1})=g_{1}$. One easily verifies that $ \mathfrak{p} $ and $ \mathfrak{q}$ are {\ensuremath{\mathsf{Lie}}}-isoclinic, although they belong to different classes of the classification and hence are not isomorphic. \end{example} Even with additional conditions, such as requiring the Leibniz algebras to be {\ensuremath{\mathsf{Lie}}}-stem, the converse is still not true.
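As an illustrative aside (not part of the formal development), the bracket tables in the example above are small enough to verify mechanically. The following Python sketch, with helper names of our own choosing, checks on basis elements (i) the Leibniz identity $[[x,y],z]=[[x,z],y]+[x,[y,z]]$ for both algebras and (ii) that the Lie-commutators $(x,y)\mapsto [x,y]+[y,x]$ agree under $\omega$ and $\tau$, as {\ensuremath{\mathsf{Lie}}}-isoclinism requires.

```python
# Sketch: encode the two three-dimensional Leibniz algebras of the example
# above by structure constants and verify the claims mechanically.
from itertools import product

DIM = 3

def basis(i):
    """The i-th standard basis vector as a coordinate tuple."""
    return tuple(1 if k == i else 0 for k in range(DIM))

def bracket(table, x, y):
    """Bilinear extension of the basis bracket given by `table`."""
    out = [0] * DIM
    for i, j in product(range(DIM), repeat=2):
        c = x[i] * y[j]
        if c:
            for k, s in table.get((i, j), {}).items():
                out[k] += c * s
    return tuple(out)

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

# q = span{a1, a2, a3} with [a1, a3] = a1            (indices 0, 1, 2)
Q = {(0, 2): {0: 1}}
# p = span{g1, g2, g3} with [g1, g3] = g1, [g2, g3] = g2, [g3, g2] = -g2
P = {(0, 2): {0: 1}, (1, 2): {1: 1}, (2, 1): {1: -1}}

def leibniz_ok(table):
    """Check [[x,y],z] = [[x,z],y] + [x,[y,z]] on all basis triples."""
    for i, j, k in product(range(DIM), repeat=3):
        x, y, z = basis(i), basis(j), basis(k)
        lhs = bracket(table, bracket(table, x, y), z)
        rhs = add(bracket(table, bracket(table, x, z), y),
                  bracket(table, x, bracket(table, y, z)))
        if lhs != rhs:
            return False
    return True

def lie_comm(table, x, y):
    return add(bracket(table, x, y), bracket(table, y, x))

def square_commutes():
    # omega(a_i) = g_i and tau(a1) = g1 act as the identity on coordinates,
    # so the isoclinism square commutes iff the Lie-commutators coincide.
    return all(lie_comm(Q, basis(i), basis(j)) == lie_comm(P, basis(i), basis(j))
               for i, j in product(range(DIM), repeat=2))

print(leibniz_ok(Q), leibniz_ok(P), square_commutes())  # expect: True True True
```

The same brute-force check applies verbatim to the four- and five-dimensional examples below, after enlarging `DIM` and the structure-constant tables.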
In the following, we exhibit two {\ensuremath{\mathsf{Lie}}}-stem (pairs of) Leibniz algebras of the same finite dimension which are not isomorphic, since they belong to different classes in \cite[Theorem 4.2.6]{RefD} and \cite[Proposition 3.11]{RefCaKu}, but which we check to be {\ensuremath{\mathsf{Lie}}}-isoclinic. \begin{example}\ \label{example3} \begin{enumerate} \item[{\it a)}] Consider the following five-dimensional non-isomorphic {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras given in \cite[Theorem 4.2.6]{RefD}: $\mathcal{A}_1=span \lbrace a_{1}, a_{2}, a_{3}, a_4, a_5\rbrace $, with non-zero multiplications $[a_{1}, a_{1}]= a_{3}$, $[a_{2}, a_{1}]$ $=a_{4}$ and $[a_{1}, a_{3}]= a_{5}$, in which $Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_1)=span \lbrace a_{4}, a_5\rbrace $ is included in $[\mathcal{A}_1, \mathcal{A}_1]_{\ensuremath{\mathsf{Lie}}}$ $=span \lbrace a_{3}, a_{4}, a_5 \rbrace $, and $\mathcal{A}_7 = span \lbrace g_{1}, g_{2}, g_{3}, g_4, g_5 \rbrace$, with non-zero multiplications $[g_{1}, g_1]=g_{3}$, $[g_{1}, g_2]$ $=g_{4}$, $[g_2, g_1]= g_5$ and $[g_1, g_3]= g_5$, in which $Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_7)=span \lbrace g_{4 }, g_5\rbrace$ is included in $ [\mathcal{A}_7, \mathcal{A}_7]_{\ensuremath{\mathsf{Lie}}}$ $=span \lbrace g_{3}, g_4, g_5\rbrace $. We define $\omega: \frac{\mathcal{A}_1}{Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_1)}\longrightarrow \frac{\mathcal{A}_7}{Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_7)}$ by $ \omega (\bar{a_{1}})=\bar{g_{1}},~ \omega (\bar{a_{2}})=\bar{g_{2}},~ \omega (\bar{a_{3}})=\bar{g_{3}}, ~ \omega (\bar{a_{4}})=\bar{0}=\bar{g_{4}}, ~ \omega (\bar{a_{5}})=\bar{0}=\bar{g_{5}}$ and $\tau: [\mathcal{A}_1, \mathcal{A}_1]_{\ensuremath{\mathsf{Lie}}}\longrightarrow [\mathcal{A}_7, \mathcal{A}_7]_{\ensuremath{\mathsf{Lie}}}$ by $\tau (a_{3})=g_{3}, \tau (a_{4})=g_{4}+g_5, \tau (a_{5})=g_{5}$.
Now it is easy to check that $\omega$ and $\tau$ are isomorphisms making diagram (\ref{isoclinism}) commutative; hence $\mathcal{A}_1\sim \mathcal{A}_7$. Note that the definition of $\tau$ reproduces the isomorphism given by Corollary \ref{corollary2}, namely $ \tau_{\mid}: Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_1)\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathcal{A}_7)$. \item[{\it b)}] Consider the following four-dimensional non-isomorphic {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras given in \cite{RefCaKu}: $\mathfrak{q}=span \lbrace e_{1}, e_{2}, e_{3}, e_4 \rbrace$, with non-zero multiplications $[e_{1}, e_4]= e_{1}$, $[e_{2}, e_4]$ $=e_{2}$ and $[e_4, e_4]= e_{3}$ (class ${\mathcal L}_{26}(\mu_2)$, with $\mu_2=1$, in \cite[Proposition 3.11]{RefCaKu}). Take the two-sided ideal $\mathfrak{m} = span \{e_1,e_2,e_3\}$ of $\mathfrak{q}$. Then $(\mathfrak{m},\mathfrak{q})$ is a {\ensuremath{\mathsf{Lie}}}-stem pair since $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m},\mathfrak{q})=span \lbrace e_{3}\rbrace$ and $[\mathfrak{m},\mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}=span \lbrace e_{1}, e_{2}, e_3 \rbrace$, and $\mathfrak{p}=span \lbrace a_{1}, a_{2}, a_{3}, a_4 \rbrace$, with non-zero multiplications $[a_{1}, a_4]= a_{2}$, $[a_{3}, a_4]$ $=a_{3}$ and $[a_4, a_4]= a_{1}$ (class ${\mathcal L}_{40}$ in \cite[Proposition 3.11]{RefCaKu}). Take the two-sided ideal $\mathfrak{n} = span \{a_1,a_2,a_3\}$ of $\mathfrak{p}$. Then $(\mathfrak{n},\mathfrak{p})$ is a {\ensuremath{\mathsf{Lie}}}-stem pair since $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n},\mathfrak{p})=span \lbrace a_{2}\rbrace$ and $[\mathfrak{n},\mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}=span \lbrace a_{1}, a_{2}, a_3 \rbrace$.
Now we define the isomorphisms $\omega: \frac{\mathfrak{q}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m},\mathfrak{q})}\longrightarrow \frac{\mathfrak{p}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n},\mathfrak{p})}$ by $\omega (\bar{e_{1}})=\bar{a_{1}},~ \omega (\bar{e_{2}})=\bar{a_{3}},~ \omega (\bar{e_4})= \bar{a_4} $ and $ \tau: [\mathfrak{m},\mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}\longrightarrow [\mathfrak{n},\mathfrak{p}]_{\ensuremath{\mathsf{Lie}}}$ by $ \tau (e_{1})=a_{2}, \tau (e_{2})=a_{3}, \tau (e_{3})=a_{1}$. Now it is easy to check that $\omega$ and $\tau$ are isomorphisms making diagram (\ref{isoclinism}) commutative; hence $(\mathfrak{m},\mathfrak{q})$ and $(\mathfrak{n},\mathfrak{p})$ are {\ensuremath{\mathsf{Lie}}}-isoclinic. \end{enumerate} \end{example} The aim of the rest of the paper is to find conditions under which {\ensuremath{\mathsf{Lie}}}-isoclinism between two {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras implies their isomorphism. \begin{lemma}\label{lemma4} For any {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebra $\mathfrak{m}$, $Z(\mathfrak{m}) = Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m})$. \end{lemma} \begin{proof} Since for every $y\in \mathfrak{m}$ and $ z\in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}) $ we have $[y, z]=0$, it follows that $[z, y]=[y, z]=0$, and therefore $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}) \subseteq Z(\mathfrak{m})$. A direct checking shows that $Z(\mathfrak{m}) \subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m})$. \end{proof} \begin{theorem}\label{theorem3} Let $( \mathfrak{m}, \mathfrak{q})$ and $( \mathfrak{n}, \mathfrak{p})$ be two {\ensuremath{\mathsf{Lie}}}-isoclinic pairs of finite dimensional complex Leibniz algebras such that: \begin{enumerate} \item[a)] $\mathfrak{m}$ and $\mathfrak{n}$ are {\ensuremath{\mathsf{Lie}}}-stem Leibniz algebras.
\item[b)] For all elements $ {m_1}, {m_2}\in {\mathfrak{m}}$ there exists $ \varepsilon_{12}\in \mathbb{C}$ such that $[{m_1}, {m_2}]={\varepsilon_{12}} [{m_2}, {m_1}]$. \end{enumerate} Then $\mathfrak{m}$ and $\mathfrak{n}$ are isomorphic. \end{theorem} \begin{proof} First of all, we claim that $( \mathfrak{m}, \mathfrak{q})$ and $( \mathfrak{n}, \mathfrak{p})$ are {\ensuremath{\mathsf{Lie}}}-stem pairs. Indeed, $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}) \subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}) \subseteq [\mathfrak{m},\mathfrak{m}]_{\ensuremath{\mathsf{Lie}}}\subseteq [\mathfrak{m},\mathfrak{q}]_{\ensuremath{\mathsf{Lie}}}$. Owing to Proposition \frak ref{proposition3}, there exist two factor sets $f: \frak frac{\mathfrak{m}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} \times \frak frac{\mathfrak{m}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} \to Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ and $g: \frak frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \times \frak frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \to Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ such that $\mathfrak{n} \cong \mathfrak{m}_f$ and $\mathfrak{m} \cong \mathfrak{n}_g$. 
Let $(\omega, \tau)$ be the {\ensuremath{\mathsf{Lie}}}-isoclinism between $\mathfrak{m}_f$ and $\mathfrak{n}_g$ provided by Corollary \ref{corollary1}; then the following diagram is commutative: \begin{equation*} \begin{CD} \frac{\mathfrak{m}_f}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f)}\times \frac{\mathfrak{m}_f}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f)}@>{C_1}>>[\mathfrak{m}_f, \mathfrak{m}_f]_{\ensuremath{\mathsf{Lie}}}\\ @VV{{\omega}\times \omega}V @VV{\tau}V\\ \frac{\mathfrak{n}_g}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}_g)}\times \frac{\mathfrak{n}_g}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}_g)}@>{C_2}>>[\mathfrak{n}_g, \mathfrak{n}_g]_{\ensuremath{\mathsf{Lie}}}. \end{CD} \end{equation*} We know that $$\omega ((z, \bar{m})+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f))= \omega ((0, \bar{m}) +Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f))=(0, \omega_1(\bar{m}))+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}_g),$$ where $\omega_1: \mathfrak{m}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow \mathfrak{n}/ Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ is an isomorphism.
On the other hand, from the following diagram $$\begin{CD} Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n})\subseteq [\mathfrak{n}, \mathfrak{n}]_{\ensuremath{\mathsf{Lie}}}\cong [\mathfrak{m}_f, \mathfrak{m}_f]_{\ensuremath{\mathsf{Lie}}}\\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad @VV{\tau}V\\ \quad Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\subseteq Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m})\subseteq [\mathfrak{m}, \mathfrak{m}]_{\ensuremath{\mathsf{Lie}}}\cong [\mathfrak{n}_g, \mathfrak{n}_g]_{\ensuremath{\mathsf{Lie}}}\\ \end{CD}$$ and keeping in mind that $Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$ is a two-sided ideal of $[\mathfrak{m}, \mathfrak{m}]_{\ensuremath{\mathsf{Lie}}}$, from the proof of Theorem \ref{theorem2}, Corollary \ref{corollary2} and the following commutative diagram \[ \xymatrix{ 0 \ar[r] & Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})\cong Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g} \ar[r] \ar@{-->}[d]^{\tau_{\mid}} & [\mathfrak{n}, \mathfrak{n}]_{\ensuremath{\mathsf{Lie}}} \ar[r] \ar[d]_{\wr}^{\tau}& \frac{[\mathfrak{n}, \mathfrak{n}]_{\ensuremath{\mathsf{Lie}}}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \ar[r] \ar[d]_{\wr}^{\bar{\tau}} & 0\\ 0 \ar[r] & Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\cong Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f} \ar[r] & [\mathfrak{m}, \mathfrak{m}]_{\ensuremath{\mathsf{Lie}}} \ar[r] & \frac{ [\mathfrak{m}, \mathfrak{m}]_{\ensuremath{\mathsf{Lie}}}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})}\ar[r] & 0, } \] we conclude that $\tau_{\mid}:Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g} \cong Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}$.
Now we define $\tau_{2}:Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ by $\tau (z,0)=(\tau_{2} (z), 0)$, for all $z\in Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})$. Using Lemma \ref{lemma4}, for all $ (z_i, \bar{m_i})\in \mathfrak{m}_f $, $ i=1, 2 $, \[ \begin{array}{l} \tau \left( \left[(z_1, \bar{m_1}),(z_2, \bar{m_2})\right] + \left[(z_2, \bar{m_2}),(z_1, \bar{m_1})\right] \right) =\\ \tau\Big( \left( [z_1, z_2]+ [z_1, \mu (\bar{m_2})]+ [\mu(\bar{m_1}), z_2] + f(\bar{m_1}, \bar{m_2}), [\bar{m_1}, \bar{m_2}]\right)+\\ \left( [z_2, z_1]+ [z_2, \mu (\bar{m_1})]+ [\mu(\bar{m_2}), z_1] + f(\bar{m_2}, \bar{m_1}), [\bar{m_2}, \bar{m_1}]\right)\Big)=\\ \left( \tau_2( f(\bar{m_1}, \bar{m_2})+f(\bar{m_2}, \bar{m_1})), 0\right)+ \tau \left(0, [\bar{m_1}, \bar{m_2}]+[\bar{m_2}, \bar{m_1}]\right), \end{array} \] where $\mu$ is as in Proposition \ref{proposition4}.
On the other hand, \[ \begin{array}{l} \tau \left( \left[(z_1, \bar{m_1}),(z_2, \bar{m_2})\right]+\left[(z_2, \bar{m_2}),(z_1, \bar{m_1})\right] \right)= \\ C_2(\omega ((z_1, \bar{m_1})+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f)), \omega((z_2, \bar{m_2})+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}_f)))=\\ C_2((0, \omega_1 (\bar{m_1}))+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}_g), (0, \omega_1 (\bar{m_2}))+Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}_g))=\\ \Big( g(\omega_1(\bar{m_1}), \omega_1(\bar{m_2}))+g(\omega_1(\bar{m_2}), \omega_1(\bar{m_1})), [\omega_1(\bar{m_1}), \omega_1(\bar{m_2})]+[\omega_1(\bar{m_2}), \omega_1(\bar{m_1})]\Big).\\ \end{array} \] Let $d\left(\omega_1 ([\bar{m_1}, \bar{m_2}]+[\bar{m_2}, \bar{m_1}]) \right)$ be the first component of $\tau \left(0, [\bar{m_1}, \bar{m_2}]+[\bar{m_2}, \bar{m_1}]\right)$, where $ d: \left[\frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})}, \frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \right]_{\ensuremath{\mathsf{Lie}}}\longrightarrow Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})$ is a linear map. Now the isomorphism $ \tau $ yields: \[ \begin{array}{rl} \tau_2 \left( f(\bar{m_1}, \bar{m_2})+f(\bar{m_2}, \bar{m_1})\right) + d\left(\omega_1 ([\bar{m_1}, \bar{m_2}]+[\bar{m_2}, \bar{m_1}])\right)&=\\ g(\omega_1(\bar{m_1}), \omega_1(\bar{m_2}))+g(\omega_1(\bar{m_2}), \omega_1(\bar{m_1})).& \end{array} \] Applying assumption {\it b)} and Lemma \ref{lemma4}, we conclude that \begin{equation}\label{best} \tau_2 \left( f(\bar{m_1}, \bar{m_2})\right) + d\left(\omega_1 ([\bar{m_1}, \bar{m_2}])\right)= g(\omega_1(\bar{m_1}), \omega_1(\bar{m_2})). \end{equation} Now we define $\lambda:\mathfrak{m}_f\longrightarrow \mathfrak{n}_g$ by $\lambda(z,\bar{m})=(\tau_2 (z)+d(\omega_1(\bar{m})), {\omega}_1(\bar{m}))$.
By Lemma \ref{lemma4}, for all $ (z_i, \bar{m_i})\in \mathfrak{m}_f $, $ i=1, 2 $, we get \begin{align*} \lambda([(z_1, \bar{{m}_1}), (z_2, \bar{{m}_2})])&= \Big(\tau_2 {\big([z_1, z_2]+[\mu(\bar{{m}_1}), z_2]+[z_1, \mu(\bar{{m}_2})]+f(\bar{{m}_1}, \bar{{m}_2})\big)}\\ &+ d(\omega_1([\bar{{m}_1}, \bar{{m}_2}])), {{\omega}_1}([\bar{{m}_1}, \bar{{m}_2}])\Big)\\ & = \left( \tau_2 {\big( f(\bar{{m}_1}, \bar{{m}_2})\big)}+d(\omega_1([\bar{{m}_1}, \bar{{m}_2}])), \omega_1([\bar{{m}_1}, \bar{{m}_2}]) \right). \end{align*} On the other hand, \begin{align*} [\lambda(z_1, \bar{{m}_1}), \lambda(z_2, \bar{{m}_2})] &= \Big([\tau_2 (z_1)+d(\omega_1(\bar{{m}_1})), \tau_2 (z_2)+d(\omega_1(\bar{{m}_2}))]\\ & +[\tau_2 (z_1)+d(\omega_1(\bar{{m}_1})), \nu(\omega_1{(\bar{{m}_2})})]\\ &+[\nu(\omega_1{(\bar{{m}_1})}), \tau_2 (z_2)+d(\omega_1(\bar{{m}_2}))]\\ &+g(\omega_{1}(\bar{m_1}), \omega_{1}(\bar{m_2})), [\omega_{1}(\bar{m_1}), \omega_{1}(\bar{m_2})]\Big)\\ & = \left( g(\omega_{1}(\bar{m_1}), \omega_{1}(\bar{m_2})),[\omega_{1}(\bar{m_1}), \omega_{1}(\bar{m_2})] \right), \end{align*} where $\nu$ is as in Proposition \ref{proposition4}. It now follows from equality (\ref{best}) that $ \lambda $ is a homomorphism.
The following diagram \[ \xymatrix{ 0 \ar[r] & Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p}) \ar[r] \ar[d]^{{\tau_2}^{-1}} & \mathfrak{m}_f\cong \mathfrak{n} \ar@{-->}[d]^{\lambda} \ar[r]^{\pi_1~~~~} & \dfrac{\mathfrak{m}_f}{Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{n}_g}}\cong\frac{\mathfrak{n}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{n}, \mathfrak{p})} \ar[r] \ar[d]^{{\omega_1}^{-1}} & 0 \\ 0 \ar[r] & Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}\cong Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q}) \ar[r] & \mathfrak{n}_g\cong \mathfrak{m} \ar[r]^{\pi_2 ~~~~} & \dfrac{\mathfrak{n}_g}{Z_{\ensuremath{\mathsf{Lie}}}^{\mathfrak{m}_f}}\cong\frac{\mathfrak{m}}{Z_{\ensuremath{\mathsf{Lie}}}(\mathfrak{m}, \mathfrak{q})} \ar[r] & 0 } \] is commutative, where $\omega_1$ and $\tau_2$ are the above isomorphisms. Now by the Short Five Lemma \cite{RefBB}, it follows that $\lambda$ is an isomorphism. \end{proof} The above theorem still holds for finite dimensional stem pairs of Lie algebras if we drop assumptions {\it a)} and {\it b)}; see \cite{RefMP} for more details. \begin{example} An example of a non-Lie Leibniz algebra satisfying condition {\it b)} of Theorem \ref{theorem3} is the two-dimensional Leibniz algebra with basis $\{a_1, a_2 \}$ and the bracket operation given by $[a_2, a_2] = \lambda a_1, \lambda \in \mathbb{C} \setminus \mathbb{C}^2$ \cite{RefCu}. Here the parameter is $\varepsilon = 1$. It should be mentioned that, in Example \ref{example3} ({\it a}), the Leibniz algebra $\mathcal{A}_7$ does not satisfy condition {\it b)} of Theorem \ref{theorem3}. Note that the parameter $\varepsilon$ in condition {\it b)} of Theorem \ref{theorem3} need not be unique; that is, every pair of elements $\bar{m}_i, \bar{m}_j$ has a parameter $\varepsilon_{ij}$ such that $[\bar{m}_i, \bar{m}_j ]=\varepsilon_{ij} [\bar{m}_j, \bar{m}_i]$.
An example of a Leibniz algebra satisfying this condition is the four-dimensional Leibniz algebra with basis $\{a_1, a_2, a_3, a_4\}$ with bracket operation $[a_1,a_1]=a_3, [a_2,a_4]=- [a_4,a_2]=a_2; [a_4, a_4]=- 2 a_2$ (class ${\mathcal L}_{16}$ in \cite[Proposition 3.10]{RefCaKu}). Here the parameters are $\varepsilon_{11}=1, \varepsilon_{24}=-1, \varepsilon_{44}=1$. \end{example} In Lie algebra theory, it is well-known that isoclinism between two finite dimensional stem Lie algebras implies isomorphism. For a deeper discussion, we refer the reader to \cite{RefM}. In the following we provide some conditions under which an analogous result holds. \begin{corollary} Let $ \mathfrak{q}$ and $\mathfrak{p}$ be {\ensuremath{\mathsf{Lie}}}-isoclinic {\ensuremath{\mathsf{Lie}}}-stem finite dimensional complex Leibniz algebras such that for all elements $ {x_1}, {x_2}\in {\mathfrak{q}}$ there exists $ \varepsilon_{12}\in \mathbb{C} $ with $ [{x_1}, {x_2}]={\varepsilon_{12}} [{x_2}, {x_1}]$. Then $ \mathfrak{q}$ and $ \mathfrak{p}$ are isomorphic. \end{corollary} \end{document}
\begin{document} \title[When is a Specht ideal Cohen--Macaulay?]{When is a Specht ideal Cohen--Macaulay?} \author{Kohji Yanagawa} \address{Department of Mathematics, Kansai University, Suita, Osaka 564-8680, Japan} \email{[email protected]} \thanks{The author is partially supported by JSPS Grant-in-Aid for Scientific Research (C) 16K05114.} \maketitle \begin{abstract} For a partition $\lambda$ of $n$, let $I^{\rm Sp}_\lambda$ be the ideal of $R=K[x_1, \ldots, x_n]$ generated by all Specht polynomials of shape $\lambda$. We show that if $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay then $\lambda$ is of the form either $(a, 1, \ldots, 1)$, $(a,b)$, or $(a,a,1)$. We also prove that the converse is true in the $\operatorname{char}(K)=0$ case. To show the latter statement, the radicalness of these ideals and a result of Etingof et al. are crucial. We also remark that $R/I^{\rm Sp}_{(n-3,3)}$ is {\it not} Cohen--Macaulay if and only if $\operatorname{char}(K)=2$. \end{abstract} \section{Introduction} Let $n$ be a positive integer. A {\it partition} of $n$ is a sequence $\lambda = (\lambda_1, \ldots, \lambda_l)$ of integers with $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_l \ge 1$ and $\sum_{i=1}^l \lambda_i =n$. A partition $\lambda$ is often represented by its {\it Young diagram}. A {\it (Young) tableau} of shape $\lambda$ is a bijection from $[n]:=\{1,2, \ldots, n\}$ to the set of boxes in the Young diagram of $\lambda$. For example, the following is a tableau of shape $(4,2,1)$. \begin{equation}\label{ex of T} \begin{ytableau} 3 & 5 & 1& 7 \\ 6 & 2 \\ 4 \\ \end{ytableau} \end{equation} We say a tableau $T$ is {\it standard} if all columns (resp. rows) are increasing from top to bottom (resp. from left to right). Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring over a field $K$, $\lambda$ a partition of $n$, and $T$ a Young tableau of shape $\lambda$.
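The standardness condition just stated is a simple row/column monotonicity check. The following Python helper (ours, purely illustrative) tests it on the tableau of shape $(4,2,1)$ displayed above, which is not standard, and on a standard filling of the same shape.

```python
def is_standard(rows):
    """Check whether a tableau is standard.

    rows: list of rows (top to bottom), each a list of entries (left to right).
    Standard means every row increases left to right and every column
    increases top to bottom."""
    for r in rows:
        if any(r[i] >= r[i + 1] for i in range(len(r) - 1)):
            return False
    for j in range(len(rows[0])):
        col = [r[j] for r in rows if j < len(r)]
        if any(col[i] >= col[i + 1] for i in range(len(col) - 1)):
            return False
    return True

print(is_standard([[3, 5, 1, 7], [6, 2], [4]]))  # the tableau above -> False
print(is_standard([[1, 3, 5, 7], [2, 6], [4]]))  # a standard filling -> True
```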
Now let $f_T(j)$ denote the difference product of the variables whose subscripts belong to the $j$-th column of $T$. More precisely, if the $j$-th column of $T$ consists of $j_1, j_2, \ldots, j_m$ in the order from top to bottom, then $$f_T (j) = \prod_{1 \le s < t \le m} (x_{j_s}-x_{j_t})$$ (if the $j$-th column has only one box, then we set $f_T(j)=1$). Finally, we set $$f_T := \prod_{j=1}^{\lambda_1} f_T(j),$$ and call it the {\it Specht polynomial} of $T$. For example, if $T$ is the tableau given in \eqref{ex of T}, then $f_T=(x_3-x_6)(x_3-x_4)(x_6-x_4)(x_5-x_2).$ If $\lambda$ is a partition of $n$, the symmetric group $S_n$ acts on the vector subspace $U_\lambda$ of $R$ spanned by $$\{ \, f_T \mid \text{$T$ is a Young tableau of shape $\lambda$ } \}.$$ (The literature constructs $U_\lambda$ using suitable equivalence classes of Young tableaux, not polynomials in $R$. For example, \cite{Sa} uses the notion called {\it Young polytabloids}. Of course, such a construction gives the same modules as ours up to isomorphism.) An $S_n$-module of this form is called a {\it Specht module}, and is very important in the theory of symmetric groups. In fact, if $\operatorname{char}(K)=0$, they give a complete list of irreducible representations of $S_n$ if we consider all possible partitions $\lambda$ of $n$. Specht modules occasionally appear in the study of commutative algebra. For example, \cite{LefPro} decomposes the artinian ring $R/(x_1^d, \ldots, x_n^d)$ into a direct sum of Specht modules under the natural $S_n$-action. However, the present paper concerns the {\it ideal} $$I^{\rm Sp}_\lambda := (U_\lambda) =(\, f_T \mid \text{$T$ is a Young tableau of shape $\lambda$ })$$ of the polynomial ring $R$ itself (throughout the paper, we exclude the trivial partition $(n)$ of $n$, while some results make sense if we put $I^{\rm Sp}_{(n)} =R$). We focus on the following problem. \begin{prob}\label{main prob} When is $R/I^{\rm Sp}_\lambda$ Cohen--Macaulay?
\end{prob} For many $\lambda$, $R/I^{\rm Sp}_\lambda$ is even non-pure (i.e., its minimal primes have different dimensions). Moreover, for some $\lambda$, the Cohen--Macaulay property of $R/I^{\rm Sp}_\lambda$ depends on $\operatorname{char}(K)$. For example, $R/I^{\rm Sp}_{(n-3,3)}$ is Cohen--Macaulay if and only if $\operatorname{char}(K) \ne 2$ (Theorem~\ref{3-ji shiki}). For $\lambda=(\lambda_1, \ldots, \lambda_l)$, it is not difficult to see that $\operatorname{ht}(I^{\rm Sp}_\lambda) = \lambda_1$ (see Proposition~\ref{lambda_1+1}). Moreover, in Proposition~\ref{CM ISP}, we see that if $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay then one of the following conditions is satisfied. \begin{itemize} \item[(1)] $\lambda =(n-d, 1, \ldots, 1)$, \item[(2)] $\lambda =(n-d,d)$, \item[(3)] $\lambda =(a,a,1)$. \end{itemize} If $\lambda =(n-d, 1, \ldots, 1)$, then $I^{\rm Sp}_\lambda$ is generated by all maximal minors of a $(d+1) \times n$ Vandermonde matrix, and Junzo Watanabe and the author (\cite{WY}) showed that $R/I^{\rm Sp}_\lambda$ is reduced and Cohen--Macaulay in this case. So it remains to consider the cases (2) and (3). In these cases, we have $$\sqrt{I^{\rm Sp}_\lambda}= \bigcap_{\substack{F \subset [n] \\ \# F = \lambda_1+1}} (x_i-x_j \mid i, j \in F).$$ Etingof et al. \!\!\cite{EGL} studied such ideals, and their result states that $R/\sqrt{I^{\rm Sp}_\lambda}$ is Cohen--Macaulay if $\operatorname{char}(K)=0$ (in the cases (2) and (3)). On the other hand, in Theorems~\ref{ISp_{(n-k,k)} is radical} and \ref{ISp_{(a,a,1)} is radical}, we show that $R/I^{\rm Sp}_{(n-d,d)}$ and $R/I^{\rm Sp}_{(a,a,1)}$ are reduced. Hence they are Cohen--Macaulay if $\operatorname{char}(K)=0$ (by virtue of a result of \cite{EGL}). Summing up, in the characteristic 0 case, $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay if and only if one of the above conditions (1), (2) or (3) is satisfied.
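A byproduct of the radicalness results, discussed in the next paragraph, is that the minimal number of generators of $I^{\rm Sp}_{(n,n)}$ is the Catalan number $C_n=\frac{1}{2n+1}\binom{2n+1}{n}$. That closed form is easy to check numerically; this small Python snippet (ours, not part of the paper) compares it with the more familiar expression $\binom{2n}{n}/(n+1)$ for small $n$.

```python
from math import comb

def catalan(n):
    # C_n = binom(2n+1, n) / (2n+1), the closed form used in the paper
    return comb(2 * n + 1, n) // (2 * n + 1)

# cross-check against the more common form binom(2n, n)/(n+1)
assert all(catalan(n) == comb(2 * n, n) // (n + 1) for n in range(1, 20))
print([catalan(n) for n in range(1, 7)])  # -> [1, 2, 5, 14, 42, 132]
```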
Another application of the radicalness of $I^{\rm Sp}_{(n-d,d)}$ is a new characterization of Catalan numbers. More precisely, in the polynomial ring $K[x_1, \ldots, x_{2n}]$, consider the ideal $$I:= \bigcap_{\substack{F \subset [2n] \\ \# F = n+1}} (x_i-x_j \mid i, j \in F).$$ Since $I= \sqrt{I^{\rm Sp}_{(n,n)}}= I^{\rm Sp}_{(n,n)}$, we have $\mu(I) =C_n$, where $C_n$ is the $n$-th Catalan number $\displaystyle \frac{1}{2n+1}\binom{2n+1}{n}$. \section{Minimal primes and a necessary condition for the CM-ness} Let $R=K[x_1, \ldots, x_n]$ be a polynomial ring over a field $K$. We assume that $K$ is algebraically closed for simplicity. However, for all results of the present paper, this assumption can be easily dropped. For an ideal $I \subset R$, set $V(I):=\{ {\mathfrak p} \mid {\mathfrak p} \in \operatorname{Spec} R, \ {\mathfrak p} \supset I\}$ as usual. For $\mathbf{a}=(a_1, \ldots, a_n) \in K^n$, let ${\mathfrak m}_{\mathbf a}$ denote the maximal ideal $(x_1 -a_1, x_2 - a_2, \ldots, x_n - a_n)$ of $R$. By abuse of notation, we just write $\mathbf{a} \in V(I)$ to mean ${\mathfrak m}_{\mathbf a} \in V(I)$. Note that $\mathbf{a} \in V(I)$, if and only if $\mathbf{a}$ belongs to the algebraic subset of ${\mathbb A}^n$ defined by $I$, if and only if $f(\mathbf{a})=0$ for all $f \in I$. For the definition of the Specht ideal $I^{\rm Sp}_\lambda$, see the previous section. Let us begin to study the Cohen--Macaulay property of $R/I^{\rm Sp}_\lambda$. If $\lambda_2=1$, then $I^{\rm Sp}_\lambda$ is the determinantal ideal of a Vandermonde-like matrix, and Watanabe and the author (\cite{WY}) showed that $R/I^{\rm Sp}_\lambda$ is reduced and Cohen--Macaulay in this case. So we mainly treat the case $\lambda_2 > 1$ in this paper. \medskip The following fact immediately follows from the definition.
\begin{lem}\label{0 point} For $\mathbf{a} =(a_1, \ldots, a_n) \in K^n$, $\mathbf{a} \in V(I^{\rm Sp}_\lambda)$ if and only if $\mathbf{a}$ satisfies the following condition: \noindent$(*)$ For any tableau $T$ of shape $\lambda$, there exist two distinct integers $i, j \in [n]$ such that $a_i=a_j$ and $i,j$ appear in the same column of $T$. \end{lem} Let $\Pi = \{ F_1, \ldots, F_m \}$ be a partition of $[n] := \{1,2, \ldots, n\}$, that is, $[n]=\coprod_{i=1}^m F_i$ and $F_i \ne \emptyset$ for all $i$. We call the ideal $$P_\Pi := ( \, x_i -x_j \mid \text{$i,j \in F_k$ for $k=1,2, \ldots, m$} \, ) \subset R$$ the {\it partition ideal} of $\Pi$. Clearly, this is a prime ideal with $R/P_\Pi \cong K[X_1, \ldots, X_m]$. Hence we have $\dim R/P_\Pi = m$ and $\operatorname{ht}(P_\Pi)=n-m$. It is easy to see that if an ideal $I \subset R$ is generated by elements of the form $x_i -x_j$ then $I=P_\Pi$ for some partition $\Pi$ of $[n]$. \begin{lem}\label{partition} A minimal prime of $I^{\rm Sp}_\lambda$ is the partition ideal $P_\Pi$ of some $\Pi$. \end{lem} \begin{proof} Since $V((f_T))$ for a tableau $T$ is a union of hyperplanes of the form $V(x_i-x_j)$ for some $i \ne j$, the irreducible components of $V(I^{\rm Sp}_\lambda)$ are intersections of these hyperplanes, that is, linear spaces of the form $V(P_\Pi)$ for some $\Pi$. \end{proof} For a subset $F \subset [n]$ with $\#F \ge 2$, consider the prime ideal $$P_F = ( \, x_i -x_j \mid i,j \in F \, )$$ of $R$. Clearly, $P_F$ is a special case of partition ideals, and we have $\operatorname{ht}(P_F) = \# F -1$. \begin{prop}\label{lambda_1+1} Let $\lambda=(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$. For a subset $F \subset [n]$ with $\#F = \lambda_1+1$, $P_F$ is a minimal prime of $I^{\rm Sp}_\lambda$. Moreover, we have $\operatorname{ht}(I^{\rm Sp}_\lambda) = \lambda_1$.
\end{prop} \begin{proof} First, we will show that $P_F \supset I^{\rm Sp}_{\lambda}$ for $F \subset [n]$ with $\# F = \lambda_1+1$. It suffices to show that any $\mathbf{a} \in V(P_F)$ satisfies $\mathbf{a} \in V(I^{\rm Sp}_\lambda)$. However, if $\mathbf{a} \in V(P_F)$, then $a_i =a_j$ for all $i, j \in F$, and it satisfies the condition $(*)$ of Lemma~\ref{0 point} by the pigeonhole principle. Next, consider a partition $\Pi=\{ F_1, \ldots, F_d \}$ of $[n]$. Set $c_i= \# F_i$ for each $i$, and we may assume that $c_1 \ge c_2 \ge \cdots$. We will show that if $\operatorname{ht}(P_\Pi) < \lambda_1$ (equivalently, $d > n-\lambda_1 = \lambda_2 +\cdots + \lambda_l$) then $P_\Pi \not \supset I^{\rm Sp}_\lambda$. We can take $\mathbf{a}=(a_1, \ldots, a_n) \in V(P_\Pi)$ so that $a_i =a_j$ if and only if $i,j \in F_k$ for some $k$. To show $P_\Pi \not \supset I^{\rm Sp}_\lambda$, it suffices to prove that $\mathbf{a} \not \in V(I^{\rm Sp}_\lambda)$, equivalently, that $\mathbf{a}$ does not satisfy the condition $(*)$ of Lemma~\ref{0 point}. This is also equivalent to saying that we can assign one of $d$ colors to each box in the Young diagram of $\lambda$ so that the number of boxes painted in the $i$-th color is exactly $c_i$, and any two boxes in the same column have different colors. The existence of such a coloring is illustrated in the following way (here, integers represent colors of boxes). First, we paint the boxes in the second line to the last line as follows. In other words, we paint these boxes just like counting them in the ``western letter-writing'' order.
$$ \ytableausetup {mathmode, boxsize=3.2em} \begin{ytableau} {}& & \none[\cdots]& & & & & \none[\cdots] \\ 1 & 2 & \none[\cdots] & \lambda_2 -2 & \lambda_2-1 & \lambda_2 \\ \lambda_2+1 & \lambda_2+2 & \none[\cdots] & \scriptstyle \lambda_2+\lambda_3-1 & \scriptstyle \lambda_2+\lambda_3 \\ \scriptstyle \lambda_2+\lambda_3+1 & \scriptstyle \lambda_2+\lambda_3+2 & \none[\cdots] \\ \none[\vdots] & \none[\vdots] \end{ytableau} $$ Note that we have used $s:= \lambda_2 + \cdots + \lambda_l$ colors to paint the boxes in the second line to the last line, and that $s = n-\lambda_1 <d$. Finally, we paint the boxes in the first line as follows. $$ \ytableausetup {mathmode, boxsize=2.0em} \begin{ytableau} \scriptstyle s+1 & \scriptstyle s+2 & \none[\cdots] &d&1&1& \none[\cdots] &1 & 2 & 2 & \none[\cdots] & 2 & 3&\none[\cdots] \end{ytableau} $$ After the position $ \ytableausetup {mathmode, boxsize=1.3em} \begin{ytableau} 1&1& \none[\cdots] \end{ytableau} $, each color $i$ appears $c_i-1$ times. Now it is easy to see that this coloring satisfies the expected condition. By Lemma~\ref{partition}, the claim we have just shown implies that $\operatorname{ht}(I^{\rm Sp}_\lambda) \ge \lambda_1$. On the other hand, we know that $P_F \supset I^{\rm Sp}_{\lambda}$ for $F \subset [n]$ with $\# F = \lambda_1+1$. Since $\operatorname{ht}(P_F) =\lambda_1$, $P_F$ is a minimal prime of $I^{\rm Sp}_\lambda$, and $\operatorname{ht}(I^{\rm Sp}_\lambda)=\lambda_1$. \end{proof} For an integer $k$ with $2 \le k \le n$, set $$I_{n,k} := \bigcap_{\substack{F \subset [n], \\ \#F =k}} P_F.$$ Clearly, $\operatorname{ht}(I_{n,k}) =k-1$. \begin{prop}\label{minimal primes} Let $\lambda=(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$. We always have $$\sqrt{I^{\rm Sp}_\lambda} \subset I_{n, \lambda_1 +1},$$ and the equality holds if and only if $\lambda_{l-1}=\lambda_1$. \end{prop} \begin{proof} The former assertion immediately follows from Proposition~\ref{lambda_1+1}.
To prove the latter assertion, assume that $\lambda_{l-1} < \lambda_1$. Set $m := \max\{ i \mid \lambda_i = \lambda_1 \}$. Note that $m < l-1$ now. Take $\mathbf{a} \in K^n$ so that $a_i=a_j$ if and only if $(k-1) \lambda_1 + 1 \le i,j \le k \lambda_1$ for some $1 \le k \le m$, or $m \lambda_1 +1 \le i,j \le m \lambda_1 + \lambda_{m+1}+1$. Then it is easy to see that $\mathbf{a}$ satisfies the condition $(*)$ of Lemma~\ref{0 point}, and hence $\mathbf{a} \in V(I^{\rm Sp}_\lambda)$. On the other hand, since there is no $F \subset [n]$ with $\# F = \lambda_1+1$ such that $a_i =a_j$ for all $i,j \in F$, we have $\mathbf{a} \not \in V( I_{n, \lambda_1 +1})$. This means that if we set $$F_1 :=\{ 1,2, \ldots, \lambda_1 \}, \quad F_2 :=\{ \lambda_1 +1, \lambda_1+ 2, \ldots, 2\lambda_1 \}, \ldots,$$ $$ F_m :=\{ (m-1)\lambda_1 +1, \ldots, m\lambda_1 \}, \, F_{m+1} :=\{ \, m\lambda_1 +1, m\lambda_1+ 2, \ldots, m\lambda_1 +\lambda_{m+1}+1 \, \}, $$ and $$P:=(x_i-x_j \mid i,j \in F_k \ \text{for $k=1, \ldots, m+1$}),$$ then we have $P \in V(I^{\rm Sp}_\lambda)$ but $P \not \in V(I_{n, \lambda_1+1})$. Hence $\sqrt{I^{\rm Sp}_\lambda} \ne I_{n, \lambda_1 +1}.$ Next, assume that $\lambda_{l-1}=\lambda_1$. To show $\sqrt{I^{\rm Sp}_\lambda} = I_{n, \lambda_1 +1}$ (equivalently, $\sqrt{I^{\rm Sp}_\lambda} \supset I_{n, \lambda_1 +1}$ now), it suffices to show that $\mathbf{a} \not \in V(I_{n, \lambda_1+1})$ implies $\mathbf{a} \not \in V(I^{\rm Sp}_\lambda)$. Set $t:=\# \{ a_1, \ldots, a_n \}$. By symmetry, we may assume that the indices are ``sorted'' as follows; that is, there are integers $0=m_0 < m_1 < \cdots < m_t=n$ such that $a_i =a_j$ if and only if $m_{k-1} +1 \le i,j \le m_{k}$ for some $1 \le k \le t$. By the assumption, the cardinality of each block (i.e., $m_{k}-m_{k-1}$ for each $k$) is less than or equal to $\lambda_1$.
Here, considering the following tableau $$ \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} 1 & 2 & 3& \none[\dots] & \scriptstyle \lambda_1 \\ \scriptstyle \lambda_1+1 & \scriptstyle \lambda_1+2 & \scriptstyle \lambda_1+3 & \none[\dots] & \scriptstyle 2 \lambda_1 \\ \scriptstyle 2\lambda_1+1 & \scriptstyle 2\lambda_1+2 & \scriptstyle 2\lambda_1+3 & \none[\dots] & \scriptstyle 3 \lambda_1 \\ \none[\vdots] & \none[\vdots] & \none[\vdots] & \none & \none[\vdots]\\ \end{ytableau} $$ of shape $\lambda$, we see that $\mathbf{a} \not \in V(I^{\rm Sp}_\lambda)$. In fact, for any distinct $i,j$ in the same column of the above tableau, we have $a_i \ne a_j$. So $\mathbf{a}$ does not satisfy the condition $(*)$, and hence $\mathbf{a} \not \in V(I^{\rm Sp}_\lambda)$. \end{proof} \begin{cor}\label{ISp non pure} Let $\lambda=(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$. Then $\sqrt{I^{\rm Sp}_\lambda}$ is pure dimensional (i.e., all minimal primes of $I^{\rm Sp}_\lambda$ have the same height) if and only if $\lambda_{l-1}=\lambda_1$ or $\lambda_2 = 1$. \end{cor} \begin{proof} If $\lambda_{l-1}=\lambda_1$, then the assertion immediately follows from Proposition~\ref{minimal primes}. If $\lambda_2 =1$, then $I^{\rm Sp}_\lambda$ is of Vandermonde type, and $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay as shown in \cite{WY}. So $\sqrt{I^{\rm Sp}_\lambda}$ (actually, $I^{\rm Sp}_\lambda$ itself) is pure dimensional. Conversely, if $\lambda_{l-1} <\lambda_1$, then the prime ideal $P$ introduced in the proof of Proposition~\ref{minimal primes} is a minimal prime of $I^{\rm Sp}_\lambda$. In fact, for a prime ideal $P' \subsetneq P$, some point $\mathbf{a} \in V(P')$ fails an equation $a_i =a_j$ that every point in $V(P)$ satisfies. Then it is easy to see that $\mathbf{a}$ does not satisfy the condition $(*)$ of Lemma~\ref{0 point}, and hence $\mathbf{a} \not \in V(I^{\rm Sp}_\lambda)$. This means that $P' \not \supset I^{\rm Sp}_\lambda$.
On the other hand, we have ${\sf op}eratorname{ht}(P)=m(\lambda_1-1)+\lambda_{m+1}$. It is easy to see that $\lambda_2 > 1$ implies ${\sf op}eratorname{ht}(P) > \lambda_1 \, (= {\sf op}eratorname{ht}(I^{\rm Sp}_\lambda))$, and $I^{\rm Sp}_\lambda$ is not pure. \end{proof} Motivated by the study of rational Cherednik algebras, Etingof, Gorsky and Losev \cite{EGL} proved the following result. \begin{thm}[{\cite[Proposition~3.11]{EGL}, see also \cite[Theorem~2.1]{BCES}}]\label{EGL} Suppose that ${\sf op}eratorname{char}(K)=0$. Then $R/I_{n,k}$ is Cohen--Macaulay if and only if $k=2$ or $2k >n$. \end{thm} The `only if' part of the above theorem can be slightly improved as follows. \begin{prop}\label{nonCM} If $k \ge 3$ and $2k \le n$, then $R/I_{n,k}$ does not satisfy Serre's condition $(S_2)$. \end{prop} \begin{proof} Set $F=\{ 1,2, \ldots, k\}$ and $F'=\{k+1, k+2, \ldots, 2k\}$. Then $P_F=(x_1-x_2, \ldots, x_1-x_k)$ and $P_{F'} =(x_{k+1}-x_{k+2}, \ldots, x_{k+1}-x_{2k})$ are minimal primes of $I_{n,k}$. Consider the prime ideal $P:= P_F +P_{F'}$. It is easy to see that any minimal prime of $I_{n,k}$ other than $P_F$ and $P_{F'}$ is not contained in $P$, that is, ${\sf op}eratorname{Min}_{R_P}( (R/I_{n,k})_P) =\{ (P_F)_P, (P_{F'})_P \}$. By the Mayer–Vietoris sequence $$0 \longrightarrow (R/I_{n,k})_P \longrightarrow (R/P_F)_P {\sf op}lus (R/P_{F'})_P \longrightarrow (R/P)_P \longrightarrow 0$$ we see that ${\sf op}eratorname{depth} (R/I_{n,k})_P=1$. On the other hand, $$\dim (R/I_{n,k})_P = {\sf op}eratorname{ht} (P) -{\sf op}eratorname{ht} (I_{n,k}) = 2(k-1)-(k-1)=k-1 \ge 2.$$ Hence $R/I_{n,k}$ does not satisfy Serre's condition $(S_2)$. \end{proof} \begin{prop}\label{CM ISP} Let $(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$. If $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay, then one of the following conditions is satisfied. \begin{itemize} \item[(1)] $\lambda_2=1$, \item[(2)] $l=2$, \item[(3)] $l=3$, $\lambda_1=\lambda_2$ and $\lambda_3=1$.
\end{itemize} \end{prop} \begin{proof} If neither $\lambda_2 =1$ nor $\lambda_{l-1} = \lambda_1$ then $R/I^{\rm Sp}_\lambda$ is not Cohen--Macaulay by Corollary~\ref{ISp non pure}. So we may assume that $\lambda_{l-1} = \lambda_1$ (note that this condition is clearly satisfied if $l=2$). In this case, we have $\sqrt{I^{\rm Sp}_\lambda} = I_{n, \lambda_1 +1}$ by Proposition~\ref{minimal primes}. If none of the conditions (1)--(3) is satisfied, then $\lambda_1 +1 \ge 3$ and $n \ge 2(\lambda_1 +1)$, and we can show that $R/I^{\rm Sp}_\lambda$ does not satisfy $(S_2)$ by an argument similar to the proof of Proposition~\ref{nonCM}. In fact, in this situation, there is a prime ideal $P \supset I^{\rm Sp}_\lambda$ of $R$ such that the local ring $A:=(R/I^{\rm Sp}_\lambda)_P$ satisfies $\dim A \ge 2$, and the ideal $(0) \subset A$ has a primary decomposition $(0)= {{\frak m}athfrak q} \cap {{\frak m}athfrak q}'$ such that $\sqrt{{{\frak m}athfrak q} + {{\frak m}athfrak q}' }$ is the maximal ideal of $A$. It means that ${\sf op}eratorname{depth} A =1$, while $\dim A \ge 2$. \end{proof} In the case (1) of Proposition~\ref{CM ISP}, $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay for arbitrary $K$ by \cite{WY}. So it remains to consider the cases (2) and (3). If $I^{\rm Sp}_\lambda$ is radical, we can use Theorem~\ref{EGL} in the case ${\sf op}eratorname{char}(K)=0$. So the next problem is very natural. \begin{conj}\label{radical?} The Specht ideal $I^{\rm Sp}_\lambda$ is always a radical ideal. \end{conj} In the present paper, we will prove this conjecture in two important cases. We treat the case $\lambda=(n-d,d)$ in this section, and the case $\lambda=(a,a,1)$ in the next section. Clearly, the former (resp. latter) case corresponds to the condition (2) (resp. (3)) of Proposition~\ref{CM ISP}. The following observation is useful in our proof. {\frak m}edskip If we set $y_i := x_i -x_n$ for $1 \le i \le n-1$, then we have $R=K[x_1, \ldots, x_n]=K[y_1, \ldots, y_{n-1}, x_n]$.
Since $x_i-x_j =y_i-y_j$ for $1 \le i,j <n$, the Specht polynomial $f_T$ is a polynomial in $y_1, \ldots, y_{n-1}$ for all $T$. Hence $x_n$ is an $R/I^{\rm Sp}_\lambda$-regular element. Let $\lambda =(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$ with $\lambda_{l-1}=\lambda_1$, and ${\frak m}u$ the partition of $n-1$ given by $$ {\frak m}u = \begin{cases} (\lambda_1, \ldots, \lambda_{l-1}, \lambda_l-1) & \text{(if $\lambda_l \ge 2$)}\\ (\lambda_1, \ldots, \lambda_{l-1}) & \text{(if $\lambda_l = 1$).} \end{cases} $$ \begin{lem}\label{radical of frJ} Let $\lambda$ and ${\frak m}u$ be as above, $I^{\rm Sp}_{\frak m}u \subset S:=K[x_1, \ldots, x_{n-1}]$ the Specht ideal of ${\frak m}u$, and $I_{{\langle} m {\rangle}} \subset S$ the ideal generated by all degree $m$ squarefree monomials. Let $\varphi: R \to S \, (\cong R/(x_n))$ be the natural surjection. Then we have $$\sqrt{\varphi(I^{\rm Sp}_\lambda)} = \varphi(I_{n, \lambda_1+1}) = \sqrt{I^{\rm Sp}_{\frak m}u} \cap I_{{\langle}n-\lambda_1{\rangle}}.$$ \end{lem} \begin{proof} By Proposition~\ref{minimal primes}, we have $\sqrt{I^{\rm Sp}_\lambda} = I_{n, \lambda_1 +1}$. Since ${\frak m}u$ satisfies the condition of Proposition~\ref{minimal primes} again, if we set $\overline{P}_F =(x_i -x_j {\frak m}id i, j \in F)\subset S$ for $F \subset [n-1]$, then $$\sqrt{I^{\rm Sp}_{\frak m}u} = \bigcap_{\substack{F \subset [n-1], \\ \#F =\lambda_1+1}} \overline{P}_F.$$ If $n \not \in F$ (i.e., $F \subset [n-1]$), then we have $\varphi(P_F) = \overline{P}_F$. If $n \in F$, then $\varphi(P_F)= (x_i {\frak m}id i < n, i \in F)$. For $F, F' \subset [n]$, consider $$0 \longrightarrow R/(P_F \cap P_{F'}) \longrightarrow R/P_F {\sf op}lus R/P_{F'} \longrightarrow R/(P_F + P_{F'}) \longrightarrow 0.$$ Since $P_F + P_{F'}$ is generated by polynomials in the $y_i \, (= x_i -x_n)$ for $1 \le i \le n-1$, $x_n$ is a non-zero divisor on $R/(P_F + P_{F'})$.
So applying $-\otimes_R S$, we have the exact sequence $$0 \longrightarrow S/\varphi(P_F \cap P_{F'}) \longrightarrow S/\varphi(P_F) {\sf op}lus S/\varphi(P_{F'}) \longrightarrow S/\varphi(P_F + P_{F'}) \longrightarrow 0$$ by \cite[Proposition~1.1.4]{BH}. It means that $\varphi(P_F \cap P_{F'}) = \varphi(P_F) \cap \varphi(P_{F'})$. Repeating this argument, we have $$\varphi(I_{n,\lambda_1 +1}) = \bigcap_{\substack{F \subset [n], \\ \#F =\lambda_1 +1}} \varphi(P_F),$$ and hence $\varphi(I_{n,\lambda_1 +1})$ is radical. Now we have \begin{eqnarray*} \sqrt{\varphi(I^{\rm Sp}_\lambda)} = \sqrt{\varphi(I_{n,\lambda_1 +1})} &=& \varphi(I_{n,\lambda_1 +1}) \\ &=& \bigcap_{\substack{F \subset [n], \\ \#F =\lambda_1 +1}} \varphi(P_F)\\ &=& \Biggl( \bigcap_{\substack{F \subset [n], \, n \not \in F \\ \#F =\lambda_1 +1}} \varphi(P_F) \Biggr) \cap \Biggl( \bigcap_{\substack{F \subset [n], \, n \in F \\ \#F =\lambda_1 +1}} \varphi(P_F) \Biggr) \\ &=& \Biggl( \bigcap_{\substack{F \subset [n-1], \\ \#F =\lambda_1 +1}} \overline{P}_F \Biggr) \cap \Biggl( \bigcap_{\substack{F \subset [n-1], \\ \#F =\lambda_1}} (x_i {\frak m}id i \in F) \Biggr) \\ &=& \sqrt{I^{\rm Sp}_{\frak m}u} \cap I_{{\langle}n-\lambda_1 {\rangle}}. \end{eqnarray*} \end{proof} \section{The radicalness of $I^{\rm Sp}_{(n-d,d)}$} In this section, we will prove Conjecture~\ref{radical?} in the case $\lambda=(n-d,d)$. \begin{thm}\label{ISp_{(n-k,k)} is radical} The Specht ideal $I^{\rm Sp}_{(n-d,d)}$ is radical. \end{thm} We prove this theorem by induction on $d$ (since $R/I^{\rm Sp}_{(n-1,1)} \cong K[X]$, the assertion is clear if $d=1$). For a polynomial $f \in S=K[x_1, \ldots, x_{n-1}]$, let ${\sf op}eratorname{supp}(f)$ be the set of squarefree monomials in $S$ which divide some nonzero term of $f$. \begin{ex} ${\sf op}eratorname{supp}(x_1x_2^3-3(x_2x_3)^2)=\{1, x_1, x_2, x_3, x_1x_2, x_2x_3 \}$.
\end{ex} Let $$ T = \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} i_1 & i_2 & \cdots &i_d & i_{d+1} & \cdots & i_{n-d} \\ j_1 & j_2 & \cdots & j_d \end{ytableau} $$ be a tableau of shape $\lambda$. The Specht polynomial $f_T$ does not depend on the order of the 1st to the $d$-th columns, nor on the order of the $(d+1)$-st to the $(n-d)$-th columns. Moreover, if we swap $i_k$ and $j_k$ for some $1 \le k \le d$, then the Specht polynomial becomes $-f_T$. On the set of tableaux of shape $\lambda$, we consider the equivalence relation modulo these permutations. Then $T \equiv T'$ if and only if $f_T =\pm f_{T'}$. Let ${\sf op}eratorname{Tab}(\lambda)$ denote the set of the equivalence classes. However, we sometimes identify an equivalence class $[T] \in {\sf op}eratorname{Tab}(\lambda)$ with its representative $T$. For example, we often write $T \in {\sf op}eratorname{Tab}(\lambda)$. When we consider $f_T$ for $[T] \in {\sf op}eratorname{Tab}(\lambda)$, we assume that $i_k < j_k$ for all $1 \le k \le d$ unless otherwise specified. Clearly, $$I^{\rm Sp}_\lambda = (\, f_T {\frak m}id T \in {\sf op}eratorname{Tab}(\lambda) \, ).$$ Let ${\sf op}eratorname{StTab}(\lambda)$ denote the set of standard tableaux of shape $\lambda$. Note that an equivalence class $[T] \in {\sf op}eratorname{Tab}(\lambda)$ contains {\it at most} one standard tableau. If $[T]$ contains a standard tableau, we say that $[T]$ is standard. \begin{lem}\label{varphi} Recall that $S=K[x_1, \ldots, x_{n-1}]$. Consider the partitions $\lambda = (n-d, d)$ and ${\frak m}u = (n-d, d-1)$.
For the natural surjection $\varphi: R \to S$, we have $$\varphi(I^{\rm Sp}_\lambda ) = ( \, x_i f_T {\frak m}id T \in {\sf op}eratorname{Tab}({\frak m}u), x_i \not \in {\sf op}eratorname{supp}(f_T) \, ) \subset S.$$ \end{lem} For notational simplicity, we set $${\frak J} := ( \, x_i f_T {\frak m}id T \in {\sf op}eratorname{Tab}({\frak m}u), x_i \not \in {\sf op}eratorname{supp}(f_T) \, ).$$ \begin{proof} It is well-known that $$I^{\rm Sp}_\lambda = ( \, f_{T'} {\frak m}id T' \in {\sf op}eratorname{StTab}(\lambda) \, ).$$ However, we use the inverse order $n \prec n-1 \prec \cdots \prec 1 $ here. Hence a standard tableau of shape $\lambda$ is of the form $$ T' = \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} n & i_2 & i_3 & \cdots & i_d & \cdots & i_{n-d} \\ j_1 & j_2 & j_3 & \cdots & j_d \end{ytableau} $$ and we have $$\varphi(f_{T'}) = -x_{j_1} \prod_{k=2}^d (x_{i_k}-x_{j_k}).$$ For the tableau $$ T= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} i_2 & i_3 & \cdots &i_d & \cdots & i_{n-d} & j_1\\ j_2 & j_3 & \cdots & j_d \end{ytableau} $$ of shape ${\frak m}u$, we have $\varphi(f_{T'}) = -x_{j_1} f_T$ and $x_{j_1} \not \in {\sf op}eratorname{supp}(f_T)$. Hence we have $\varphi(I^{\rm Sp}_\lambda) \subset {\frak J}$. The converse inclusion follows from a similar argument. \end{proof} The following fact (and its local analog) must be well-known, but we give a proof for the reader's convenience. \begin{lem}\label{radical reduction} Let $A=\bigoplus_{i \in {{\frak m}athbb N}} A_i $ be a noetherian graded ring, and $a \in A$ a homogeneous non-zero divisor of positive degree. If $A/aA$ is reduced, then so is $A$. \end{lem} \begin{proof} Since $aA$ is a radical ideal, there are prime ideals $P_1, \ldots, P_m$ of $A$ such that $aA =\bigcap_{i=1}^m P_i$. Since $a$ is a non-zero divisor, we have ${\sf op}eratorname{ht}(P_i) \ge 1$ for all $i$. So, for each $i$, we can take a minimal prime $Q_i$ of $A$ contained in $P_i$.
Take a homogeneous element $x \in \bigcap_{i=1}^m Q_i$. Since $\bigcap_{i=1}^m Q_i \subset \bigcap_{i=1}^m P_i =aA$, there is a homogeneous element $x_1 \in A$ such that $x=a x_1$. Since $a$ is a non-zero divisor, we have $a \not \in Q_i$ for all $i$, and it means that $x_1 \in Q_i$ for all $i$, and hence $x_1 \in \bigcap_{i=1}^m Q_i$. Applying the above argument to $x_1$, we can find $x_2 \in A$ such that $x_1 = a x_2$, that is, $x= a^2 x_2$. Repeating this argument, we have $x \in \bigcap_{i=1}^\infty a^i A = (0)$, and it implies that $\bigcap_{i=1}^m Q_i =(0)$. Since the $Q_i$'s are prime ideals, $(0)$ is a radical ideal. \end{proof} For partitions $\lambda=(n-d,d)$ and ${\frak m}u=(n-d,d-1)$, we assume the induction hypothesis that $I^{\rm Sp}_{\frak m}u$ is radical. By Lemma \ref{radical of frJ}, we have $\sqrt{{\frak J}} = I^{\rm Sp}_{\frak m}u \cap I_{{\langle}d {\rangle}}$. If ${\frak J} =\varphi(I^{\rm Sp}_\lambda)$ is radical, then so is $I^{\rm Sp}_\lambda$ itself by Lemma~\ref{radical reduction}. So it suffices to show that ${\frak J} \supset I^{\rm Sp}_{\frak m}u \cap I_{{\langle}d{\rangle}}. $ \begin{lem}\label{reduction} Let $\lambda$ and ${\frak m}u$ be as above. Assume that $I^{\rm Sp}_{\frak m}u$ is a radical ideal. Then $I^{\rm Sp}_\lambda$ is a radical ideal, if the following condition is satisfied. \begin{itemize} \item[$(*)$] If $\phi = x^{\frak m}athbf a(\sum_{T \in {\sf op}eratorname{Tab}({\frak m}u)} c_T f_T) \in I_{{\langle}d{\rangle}}$ for some squarefree monomial $x^{\frak m}athbf a \in S=K[x_1, \ldots, x_{n-1}]$ and $c_T \in K$, we have $\phi \in {\frak J}$. \end{itemize} \end{lem} \begin{proof} First, note that the assumption that $x^{\frak m}athbf a$ is squarefree can be easily dropped.
In fact, for any ${{\frak m}athbf b} \in {{\frak m}athbb N}^{n-1}$ and $h \in I^{\rm Sp}_{\frak m}u$, $x^{{\frak m}athbf b} h \in I_{{\langle}d{\rangle}}$ implies $(\prod_{b_i > 0} x_i )\cdot h \in I_{{\langle}d{\rangle}}$, and $(\prod_{b_i > 0} x_i )\cdot h \in {\frak J}$ implies $x^{{\frak m}athbf b} h \in {\frak J}$. By the remark just before the lemma, $I^{\rm Sp}_\lambda$ is a radical ideal, if the condition \begin{itemize} \item[$(**)$] If $\psi = \sum_{T \in {\sf op}eratorname{Tab}({\frak m}u)} g_T f_T \in I_{{\langle}d{\rangle}}$ for some polynomials $g_T \in S$, then $\psi \in {\frak J}$. \end{itemize} is satisfied. So it suffices to show that $(*)$ implies $(**)$. Assume that $\psi = \sum_{T \in {\sf op}eratorname{Tab}({\frak m}u)} g_T f_T \in I_{{\langle}d{\rangle}}$. Take ${\frak m}athbf a \in {{\frak m}athbb N}^{n-1}$, and let $c_T x^{\frak m}athbf a$ be the degree ${\frak m}athbf a$ term of $g_T$ (of course, $c_T$ can be 0). Now we want to show that $$\psi_{\frak m}athbf a:= x^{\frak m}athbf a \sum_{T \in {\sf op}eratorname{Tab}({\frak m}u)} c_T f_T$$ is contained in $I_{{\langle}d{\rangle}}$. By contradiction, we assume that the degree ${{\frak m}athbf b}$ term of $\psi_{\frak m}athbf a$ is not contained in $I_{{\langle}d{\rangle}}$. Since $x^{{\frak m}athbf b} \not \in I_{{\langle}d{\rangle}}$, and all terms of $f_T$ are squarefree and have degree $d-1$, we have $x^{\frak m}athbf a = x^{{\frak m}athbf b}/(\prod_{b_i > 0}x_i)$. Hence the degree ${{\frak m}athbf b}$ term of $g_Tf_T$ equals that of $c_T x^{\frak m}athbf a f_T$. Therefore, the degree ${{\frak m}athbf b}$ term of $\psi_{\frak m}athbf a$ coincides with that of $\psi \in I_{{\langle}d{\rangle}}$. This is a contradiction, and hence we have $\psi_{\frak m}athbf a \in I_{{\langle}d{\rangle}}$. Since $\psi_{\frak m}athbf a$ can play the role of $\phi$ in the condition $(*)$, the condition $(*)$ implies $\psi_{\frak m}athbf a \in {\frak J}$.
Hence $\psi = \sum_{{\frak m}athbf a \in {{\frak m}athbb N}^{n-1}} \psi_{\frak m}athbf a \in {\frak J}$, and $(*)$ implies $(**)$. \end{proof} \begin{lem}\label{support} With the same notation as Lemma~\ref{reduction}, if $x^{\frak m}athbf a \not \in {\sf op}eratorname{supp}(f_T)$ for a tableau $T$ of shape ${\frak m}u$, then we have $x^{\frak m}athbf a f_T \in {\frak J}.$ \end{lem} \begin{proof} We may assume that if $x_i$ divides $x^{\frak m}athbf a$ then $x_i$ belongs to ${\sf op}eratorname{supp}(f_T)$. Then, by the shape of the Specht polynomial $f_T$, there are distinct $i,j \in [n-1]$ such that $x_ix_j \, | \, x^{\frak m}athbf a$ but $x_i x_j \not \in {\sf op}eratorname{supp}(f_T)$. Now $i$ and $j$ are in the same column in $T$, and we may assume that $T$ is of the form $$ \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} i & i_2 & \cdots &i_{d-1} & k & \none[\cdots]\\ j & j_2 & \cdots & j_{d-1} \\ \end{ytableau} $$ (note that $n-d > d-1$ now). Consider two other tableaux of shape ${\frak m}u$ as follows: $$ T_1 =\ytableausetup {mathmode, boxsize=2em} \begin{ytableau} i & i_2 & \cdots &i_{d-1} & j & \none[\cdots]\\ k & j_2 & \cdots & j_{d-1} \\ \end{ytableau} \quad \text{and} \quad T_2 =\ytableausetup {mathmode, boxsize=2em} \begin{ytableau} k & i_2 & \cdots &i_{d-1} & i & \none[\cdots]\\ j & j_2 & \cdots & j_{d-1} \\ \end{ytableau} $$ Then we have $f_T =f_{T_1} +f_{T_2}$. Hence $$x_ix_j f_T = x_ix_j (f_{T_1} +f_{T_2}) = x_i (x_j f_{T_1}) + x_j (x_i f_{T_2}).$$ Since $x_j \not\in {\sf op}eratorname{supp}(f_{T_1})$, we have $x_j f_{T_1} \in {\frak J}$. Similarly, $x_i f_{T_2} \in {\frak J}$. Hence $x_ix_j f_T \in {\frak J}$, which implies that $x^{\frak m}athbf a f_T \in {\frak J}$. \end{proof} In the condition $(*)$ of Lemma~\ref{reduction}, we may assume that $c_T \ne 0$ implies $x^{\frak m}athbf a \in {\sf op}eratorname{supp}(f_T)$ by Lemma~\ref{support}, and $x^{\frak m}athbf a = x_1x_2\cdots x_k$ by symmetry. Hence we have the following.
\begin{cor}\label{reduction2} Let $\lambda$ and ${\frak m}u$ be as above. Assume that $I^{\rm Sp}_{\frak m}u$ is a radical ideal. Then $I^{\rm Sp}_\lambda$ is a radical ideal, if the following condition is satisfied. \begin{itemize} \item[$(*\!*\!*)$] For the squarefree monomial $x^{\frak m}athbf a = x_1x_2\cdots x_k \in S$ with $1 \le k \le d-1$, set $X:= \{\, T \in {\sf op}eratorname{Tab}({\frak m}u) {\frak m}id x^{\frak m}athbf a \in {\sf op}eratorname{supp}(f_T) \, \}$. If $\phi = x^{\frak m}athbf a(\sum_{T \in X} c_T f_T) \in I_{{\langle}d{\rangle}}$ for some $c_T \in K$, we have $\phi \in {\frak J}.$ \end{itemize} \end{cor} In the sequel, $X$ means the set defined in $(*\!*\!*)$. An element of $X$ has the following ``normal form'' \begin{equation}\label{T} T= \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} 1 & 2 & \cdots & k & i_{k+1} & \cdots & i_{d-1} & i_d& i_{d+1} & \cdots & i_{n-d} \\ j_1 & j_2 & \cdots &j_k & j_{k+1} & \cdots &j_{d-1} \end{ytableau}, \end{equation} where $i_{k+1} < i_{k+2} < \cdots < i_{d-1}$, $i_d < i_{d+1} < \cdots < i_{n-d}$ and $i_l < j_l$ for all $k < l < d$. \begin{lem}\label{permutation} With the same notation as Corollary~\ref{reduction2}, let $T \in X$ be a tableau of the form \eqref{T}. For any permutation $\sigma$ on $\{i_d, \ldots, i_{n-d}, j_1, \ldots, j_k \}$, we have $x^{\frak m}athbf a(f_T - f_{\sigma T}) \in {\frak J}$. Here $\sigma T$ is the Young tableau of shape ${\frak m}u$ given by replacing each entry $i$ of $T$ by $\sigma(i)$. \end{lem} \begin{proof} It is easy to see that $\sigma$ is a product of transpositions of the form $\tau= (i_a, j_b)$ for $d \le a \le n-d$ and $1 \le b \le k$. So it suffices to show that $x^{\frak m}athbf a(f_T - f_{\tau T}) \in {\frak J}$. By symmetry, we may assume that $\tau = (i_d, j_1)$.
We have $$ \tau T= \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} 1 & 2 & \cdots & k & i_{k+1} & \cdots & i_{d-1} & j_1 & i_{d+1}&\cdots & i_{n-d} \\ i_d & j_2 & \cdots &j_k & j_{k+1} & \cdots &j_{d-1} \end{ytableau} $$ Set $$ T'= \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} i_d & 2 & \cdots & k & i_{k+1} & \cdots & i_{d-1} & 1 & i_{d+1} & \cdots & i_{n-d} \\ j_1 & j_2 & \cdots &j_k & j_{k+1} & \cdots &j_{d-1} \end{ytableau} $$ Then $x^{\frak m}athbf a(f_T -f_{\tau T}) = x^{\frak m}athbf a f_{T'} \in {\frak J},$ since $x_1 \not \in {\sf op}eratorname{supp}(f_{T'}).$ \end{proof} For the tableau $T$ of \eqref{T}, set $$h_T := \prod_{l=k+1}^{d-1}(x_{i_l}-x_{j_l}).$$ \begin{lem}\label{relation of h_T} With the above notation, if $\phi = x^{\frak m}athbf a(\sum_{T \in X} c_T f_T) \in I_{{\langle}d{\rangle}}$ for some $c_T \in K$, then we have $$\sum_{T \in X} c_T h_T =0.$$ \end{lem} \begin{proof} Since $x^{\frak m}athbf a(f_T -x^{{\frak m}athbf a}h_T) \in I_{{\langle}d{\rangle}}$ for each $T \in X$, we have $$\phi - x^{2{\frak m}athbf a} \sum_{T \in X} c_T h_T = x^{\frak m}athbf a \sum_{T \in X} c_T (f_T -x^{{\frak m}athbf a}h_T )\in I_{{\langle}d{\rangle}},$$ and hence $ x^{2{\frak m}athbf a} (\sum_{T \in X} c_T h_T) \in I_{{\langle}d{\rangle}}$. On the other hand, any nonzero term of $x^{2{\frak m}athbf a} (\sum_{T \in X} c_T h_T)$ does not belong to $I_{{\langle}d{\rangle}}$, since its squarefree part has degree at most $d-1$. Hence we have $\sum_{T \in X} c_T h_T =0$.
\end{proof} {\frak m}edskip For the tableau $T$ of \eqref{T}, consider the tableau \begin{equation} \widehat{T}= \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} i_{k+1} & \cdots & i_{d-1} & i_d& i_{d+1} & \cdots & i_{n-d} \\ j_{k+1} & \cdots &j_{d-1} \end{ytableau} \end{equation} of shape $(n-d-k, d-1-k)$, and the tableau \begin{equation} \widetilde{T}= \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} i_{k+1} & \cdots & i_{d-1} & i_d& i_{d+1} & \cdots & i_{n-d} & j_1 & j_2 & \cdots &j_k \\ j_{k+1} & \cdots &j_{d-1} \end{ytableau} \end{equation} of shape $(n-d, d-1-k)$. Clearly, $h_T = f_{\widehat{T}} = f_{\widetilde{T}}$. We say $T \in X$ is {\it quasi $h$-standard} (resp. {\it $h$-standard}), if $\widehat{T}$ (resp. $\widetilde{T}$) is a standard tableau. Here we regard $\widehat{T}$ and $\widetilde{T}$ as the tableaux with the letter set $[n] \setminus ([k] \cup \{j_1, \ldots, j_k\})$ and $[n] \setminus [k]$ respectively. Set $$Y:=\{ T \in X {\frak m}id \text{$T$ is quasi $h$-standard} \}$$ and $$Z:=\{ T \in X {\frak m}id \text{$T$ is $h$-standard} \}.$$ \begin{lem}\label{quasi h-standard} If $T \in X$, there are $T_1, \ldots, T_m \in Y$ and $c_1, \ldots, c_m \in K$ such that $$f_T = \sum_{l=1}^m c_l f_{T_l}.$$ Moreover, if $T$ is of the form \eqref{T}, then we may assume that each $T_l$ is of the form $$ \ytableausetup {mathmode, boxsize=2.3em} \begin{ytableau} 1 & 2 & \cdots & k & i_{k+1}' & \cdots & i_{d-1}' & i_d'& i_{d+1}' & \cdots & i_{n-d}' \\ j_1 & j_2 & \cdots &j_k & j_{k+1}' & \cdots &j_{d-1}' \end{ytableau} $$ with $(i_d, i_{d+1}, \ldots, i_{n-d} )\le (i_d', i_{d+1}', \ldots, i_{n-d}' )$ in the coordinate-wise order. \end{lem} \begin{proof} It suffices to apply the standard argument of Specht module theory (see, for example, \cite[\S 2.6]{Sa}) to the tableaux of the form $\widehat{T}$. For the reader's convenience, we will sketch the outline.
Note that the tableau $T$ in \eqref{T} belongs to $Y$ if and only if $j_{k+1} < j_{k+2} < \cdots < j_{d-1}$ and $i_{d-1} < i_d$. First, assume that $j_{l-1} > j_l$ for some $k+2 \le l \le d-1$, and $l$ is the minimal number with this property (if there is no such $l$, we move directly to the operation in the next paragraph). Note that $i_{l-1} < i_l < j_l < j_{l-1}$ now. Consider the following two tableaux $$ T_a= \begin{ytableau} \cdots & i_{l-1} & i_l & \cdots & \cdots \\ \cdots & j_l & j_{l-1} & \cdots \end{ytableau} \qquad \text{and} \qquad T_b= \begin{ytableau} \cdots & i_{l-1} & j_l & \cdots & \cdots \\ \cdots & i_l & j_{l-1} & \cdots \end{ytableau}. $$ More precisely, we have to apply a suitable column permutation to $T_b$ so that the first row is increasing from left to right. Except for the three boxes containing $j_{l-1}$, $i_l$ and $j_l$, the tableaux $T_a$ and $T_b$ are the same as $T$ (modulo the column permutation stated above). Since $f_T = f_{T_a} -f_{T_b}$, we replace $f_T$ by $f_{T_a}-f_{T_b}$. If $T_a \not \in Y$ or $T_b \not \in Y$, we apply the above operation to them. Repeating this procedure, we can reduce to the case where $j_{k+1} < j_{k+2} < \cdots < j_{d-1}$ in \eqref{T}. In the above situation, $i_{d-1} < i_d$ implies $T \in Y$. So we assume that $i_{d-1} > i_d$. Note that $i_d < i_{d-1} < j_{d-1}$ now. Consider the following two tableaux $$ T_c= \begin{ytableau} \cdots & i_d & i_{d-1} & \cdots\\ \cdots & j_{d-1} \end{ytableau} \qquad \text{and} \qquad T_d= \begin{ytableau} \cdots & i_d & j_{d-1} & \cdots\\ \cdots & i_{d-1} \end{ytableau} $$ (more precisely, we have to apply a suitable column permutation to each tableau so that the first row is increasing from left to right). Since $f_T = f_{T_c} -f_{T_d}$, we replace $f_T$ by $f_{T_c}-f_{T_d}$. However, the above column permutations of $T_c$ and $T_d$ might violate the inequalities $j_{k+1} < j_{k+2} < \cdots < j_{d-1}$.
If this is the case, we apply (and repeat, if necessary) ``$T_a$ and $T_b$ operations''. After that, we go back to ``$T_c$ and $T_d$ operations''. Repeating this procedure, we can get the expected representation $f_T = \sum_{l=1}^m c_l f_{T_l}$. The last assertion of the lemma is clear, since $i_d < i_{d-1}$ in $T_c$ and $i_d < j_{d-1}$ in $T_d$ (this fact also guarantees the termination of the above procedure). \end{proof} \noindent{\it The proof of Theorem~\ref{ISp_{(n-k,k)} is radical}.} Since $I^{\rm Sp}_{\frak m}u$ is a radical ideal by the induction hypothesis, we can use Corollary~\ref{reduction2}, and it suffices to verify the condition $(*\!*\!*)$. For a given $\phi = x^{\frak m}athbf a(\sum_{T \in X} c_T f_T) \in I_{{\langle}d{\rangle}}$, we apply the following algorithm. {\frak m}edskip \noindent{\bf Operation 1.} Using Lemma~\ref{quasi h-standard}, we re-write $\phi$ as $\phi= x^{\frak m}athbf a(\sum_{T \in Y} c_T' f_T) $. {\frak m}edskip Note that $$\sum_{T \in Y} c_T' f_{\widetilde{T}} = \sum_{T \in Y} c_T' h_T= 0$$ by Lemma~\ref{relation of h_T}. If $c'_T \ne 0$ implies $T \in Z$ (equivalently, $\phi= x^{\frak m}athbf a(\sum_{T \in Z} c_T' f_T) $), then we have $\sum_{T \in Z} c_T' f_{\widetilde{T}} = 0$. Since $\{ f_{\widetilde{T}} {\frak m}id T \in Z \}$ is linearly independent as is well known in Specht module theory (see, for example, \cite[Theorem~2.5.2]{Sa}), we have $c'_T =0$ for all $T \in Z$ and hence $\phi=0 \in {\frak J}$. If $c_T' \ne 0$ for some $T \in Y \setminus Z$, we go to the next operation. {\frak m}edskip \noindent{\bf Operation 2.} For the tableau $T$ of the form \eqref{T}, we have a permutation $\sigma_T$ on $\{i_d, \ldots, i_{n-d}, j_1, \ldots, j_k \}$ such that $$\sigma_T(i_d) < \sigma_T(i_{d+1}) < \cdots < \sigma_T(i_{n-d}) < \sigma_T(j_1) < \sigma_T(j_2) < \cdots < \sigma_T(j_k).$$ (Note that $T \in Z$ if and only if $\sigma_T ={\rm Id}$.)
Set $$\phi' = x^{\frak m}athbf a\sum_{T \in Y} c_T' f_{\sigma_T T}.$$ Note that $$\phi -\phi' = x^{\frak m}athbf a\sum_{T \in Y}c_T'(f_T - f_{\sigma_T T} ) \in {\frak J}$$ by Lemma~\ref{permutation}. Hence it suffices to show that $\phi' \in {\frak J}$, and we replace $\phi$ by $\phi'$. However, $\sigma_T T \not \in Y$ (i.e., $i_{d-1} > i_d$) might happen, so we will go back to Operation~1, and then move to Operation~2. We repeat this procedure until we get a form $\phi= x^{\frak m}athbf a(\sum_{T \in Z} c_T' f_T)$. Operation~1 does not change the sequence $(j_1, \ldots, j_k)$, and Operation~2 raises it with respect to the coordinate-wise order. It means that this algorithm eventually stops, and we get the expected expression $\phi= x^{\frak m}athbf a(\sum_{T \in Z} c_T' f_T)$. Then $\phi=0$, as we have shown above. It means that the original $\phi$ can be reduced to 0 modulo ${\frak J}$, that is, the original $\phi$ belongs to ${\frak J}$. So the condition $(*\!*\!*)$ holds. \qed {\frak m}edskip Combining Theorem~\ref{ISp_{(n-k,k)} is radical} with Theorem~\ref{EGL}, we have the following. \begin{cor}\label{ISp_{(n-k,k)} is CM} If ${\sf op}eratorname{char}(K)=0$, the ring $R/I^{\rm Sp}_{(n-d,d)}$ is Cohen--Macaulay. \end{cor} For $n \in {{\frak m}athbb N}$, the $n$-th {\it Catalan number} $C_n$ is given by $$C_n = \frac{1}{2n+1}\binom{2n+1}{n},$$ and $\{C_n\}_{n \in {{\frak m}athbb N}}$ is one of the most important combinatorial sequences. Exercise A8 of the monograph \cite{St} gives 17 algebraic interpretations of Catalan numbers (e.g., in terms of the ring of upper triangular matrices, $SL(2,{{\frak m}athbb C})$, the toric variety associated with the $n$-dimensional cube, $\ldots$) besides more than 200 purely combinatorial interpretations. It is well-known that the number of standard tableaux of shape $(n,n)$ (or equivalently, of shape $(n,n-1)$) is $C_n$. See, for example, \cite[Exercise 168]{St} (this fact is counted as a combinatorial interpretation in this monograph).
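These two descriptions of $C_n$ are easy to check numerically. The following Python sketch (an aside for the reader, not part of the proofs; the helper names are ours) verifies for small $n$ that the formula above agrees with the classical expression $\frac{1}{n+1}\binom{2n}{n}$, and that $C_n$ counts the standard tableaux of shape $(n,n)$, enumerated here as ballot sequences (entry $t$ of $[2n]$ goes to row 1 or row 2, and every prefix must have at least as many row-1 entries as row-2 entries):

```python
from math import comb
from itertools import product

def catalan(n):
    # Formula used in the text: C_n = (1/(2n+1)) * binom(2n+1, n).
    return comb(2 * n + 1, n) // (2 * n + 1)

def count_syt_two_rows(n):
    # Standard Young tableaux of shape (n, n) correspond to ballot
    # sequences: each of 1..2n is placed in row 1 (+1) or row 2 (-1),
    # and every prefix must have a nonnegative partial sum.
    count = 0
    for signs in product((1, -1), repeat=2 * n):
        if sum(signs) != 0:  # both rows must end up with n entries
            continue
        partial, ok = 0, True
        for s in signs:
            partial += s
            if partial < 0:
                ok = False
                break
        if ok:
            count += 1
    return count

for n in range(1, 7):
    assert catalan(n) == comb(2 * n, n) // (n + 1)  # classical form
    assert count_syt_two_rows(n) == catalan(n)      # tableau count

print([catalan(n) for n in range(1, 7)])  # [1, 2, 5, 14, 42, 132]
```

Brute-force enumeration is used instead of the hook length formula so that the bijection with two-row tableaux is transparent; it is only feasible for small $n$.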
Theorem~\ref{ISp_{(n-k,k)} is radical} gives yet another algebraic interpretation of the Catalan numbers. \begin{cor} In the polynomial ring $K[x_1, \ldots, x_{2n}]$, the number of minimal generators of the ideal $$I_{2n, n+1}= \bigcap_{\substack{F \subset [2n] \\ \# F = n+1}} (x_i-x_j {\frak m}id i, j \in F)$$ is the $n$-th Catalan number $C_n$. Similarly, the ideal $I_{2n-1, n+1} \subset K[x_1, \ldots, x_{2n-1}]$ is also generated by $C_n$ elements. \end{cor} \begin{proof} Since $I_{2n, n+1} =I^{\rm Sp}_{(n,n)}$ and $I_{2n-1, n+1} =I^{\rm Sp}_{(n,n-1)}$, the assertion follows from the above mentioned characterization of the Catalan numbers. \end{proof} \section{The radicalness of $I^{\rm Sp}_{(a,a,1)}$} In this section, we assume that $n= 2a+1$ for some $a \in {{\frak m}athbb N}$, and set $\lambda=(a,a,1)$ and ${\frak m}u =(a,a)$. For a Young tableau \begin{equation}\label{(a,a,1)} T = \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} i_1 & i_2 & \cdots &i_a \\ j_1 & j_2 & \cdots & j_a \end{ytableau} \end{equation} of shape ${\frak m}u$, set $$P(T):=\{ \, (i_k, j_k ) {\frak m}id 1\le k \le a \, \}. $$ \begin{lem}\label{varphi2} Recall that $S=K[x_1, \ldots, x_{n-1}]$, and $\varphi: R \to S$ is the natural surjection. With the above notation, we have $$\varphi(I^{\rm Sp}_\lambda ) = ( \, x_i x_j f_T {\frak m}id T \in {\sf op}eratorname{Tab}({\frak m}u), (i, j) \in P(T) \, ) \subset S.$$ \end{lem} For notational simplicity, we set $${\frak J} := ( \, x_i x_j f_T {\frak m}id T \in {\sf op}eratorname{Tab}({\frak m}u), (i, j) \in P(T) \, ).$$ \begin{proof} The proof is parallel to that of Lemma~\ref{varphi}. 
Note that $$I^{\rm Sp}_\lambda = ( \, f_{T'} {\frak m}id T' \in {\sf op}eratorname{StTab}(\lambda) \, ).$$ Here we use the inverse order $n \prec n-1 \prec \cdots \prec 1 $, and hence a standard tableau of shape $\lambda$ is of the form $$ T' = \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} n & i_2 & i_3 & \cdots &i_a \\ i_1 & j_2 & j_3 & \cdots & j_a \\ j_1 \end{ytableau} $$ and we have $$\varphi(f_{T'}) = x_{i_1}x_{j_1} \prod_{k=1}^{a} (x_{i_k}-x_{j_k})= x_{i_1}x_{j_1}f_T,$$ where $T$ is the tableau in \eqref{(a,a,1)}. It is easy to see that $\varphi(f_{T'}) \in {\frak J}$, and hence $\varphi(I^{\rm Sp}_\lambda) \subset {\frak J}$. The converse inclusion is easy. \end{proof} \begin{thm}\label{ISp_{(a,a,1)} is radical} The Specht ideal $I^{\rm Sp}_{(a,a,1)}$ is radical. \end{thm} \begin{proof} This can be proved by an argument similar to that of the previous section. Let $I_{{\langle}a+1{\rangle}} \subset S$ be the ideal generated by all squarefree monomials of degree $a+1$. With the above notation, we have $$\sqrt{\varphi(I^{\rm Sp}_\lambda)} = \sqrt{I^{\rm Sp}_{\frak m}u} \cap I_{{\langle}a+1 {\rangle}}=I^{\rm Sp}_{\frak m}u \cap I_{{\langle}a+1 {\rangle}},$$ where the first (resp. second) equality follows from Lemma~\ref{radical of frJ} (resp. Theorem~\ref{ISp_{(n-k,k)} is radical}). By Lemma~\ref{radical reduction}, it suffices to show that ${\frak J} \, (= \varphi(I^{\rm Sp}_\lambda))$ is a radical ideal, equivalently, ${\frak J} \supset I^{\rm Sp}_{\frak m}u \cap I_{{\langle}a+1 {\rangle}}$. By an argument similar to Lemma~\ref{reduction}, it suffices to show that \begin{itemize} \item[$(\star)$] If $\psi = x^{\frak m}athbf a(\sum_{T \in {\sf op}eratorname{Tab}({\frak m}u)} c_T f_T) \in I_{{\langle} a+1{\rangle}}$ for some squarefree monomial $x^{\frak m}athbf a \in S=K[x_1, \ldots, x_{n-1}]$ and $c_T \in K$, we have $\psi \in {\frak J}$. \end{itemize} By symmetry, we may assume that $x^{\frak m}athbf a =x_1x_2\cdots x_k$.
We can rewrite $\psi$ as $\psi = x^{\frak m}athbf a(\sum_{T \in {\sf op}eratorname{StTab}({\frak m}u)} c_T' f_T)$. Moreover, we can replace $\psi$ by $$ \psi': = x^{\frak m}athbf a(\sum_{T \in W} c_T' f_T), $$ where $W$ is the subset of ${\sf op}eratorname{StTab}({\frak m}u)$ consisting of $T$ of the form \begin{equation}\label{(a,a)} T= \ytableausetup {mathmode, boxsize=2em} \begin{ytableau} 1 & 2 & \cdots &k & i_{k+1} & \cdots &i_a \\ j_1& j_2 & \cdots &j_k & j_{k+1} & \cdots &j_a \\ \end{ytableau}. \end{equation} In fact, if $T \in {\sf op}eratorname{StTab}({\frak m}u) \setminus W$, then it is clear that $x^{\frak m}athbf a f_T \in {\frak J}$. For the tableau $T$ in \eqref{(a,a)}, set $$h_T := \prod_{l=k+1}^a(x_{i_l}-x_{j_l}),$$ and consider the tableau $$\overline{T}:=\begin{ytableau} j_a & j_{a-1} & \cdots &j_{k+1} & j_k & j_{k-1} & \cdots & j_1 \\ i_a & i_{a-1} & \cdots & i_{k+1} \end{ytableau} $$ of shape $(a, a-k)$. Clearly, $h_T = (-1)^{a-k} f_{\overline{T}}$. Since $T$ is standard, $\overline{T}$ is standard with respect to the inverse order $n \prec n-1 \prec \cdots \prec k+1$ and the letter set $\{k+1, \ldots, n-1, n \}$. Hence $\{ h_T {\frak m}id T \in W\}$ is linearly independent. On the other hand, by an argument similar to Lemma~\ref{relation of h_T}, we have $$\sum_{T \in W} c_T' h_T =0.$$ This implies that $c_T'=0$ for all $T \in W$, that is, $\psi'=0$. Hence $\psi \in {\frak J}$, and we get the expected statement $(\star)$. \end{proof} Combining Theorem~\ref{ISp_{(a,a,1)} is radical} with Theorem~\ref{EGL}, we have the following. \begin{cor}\label{ISp_{(a,a,1)} is CM} If ${\sf op}eratorname{char}(K)=0$, the ring $R/I^{\rm Sp}_{(a,a,1)}$ is Cohen--Macaulay. \end{cor} By \cite{WY}, Corollaries~\ref{ISp_{(n-k,k)} is CM} and \ref{ISp_{(a,a,1)} is CM}, we have the following. \begin{cor}\label{char(K)=0 main} Assume that ${\sf op}eratorname{char}(K)=0$.
Then $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay if and only if $\lambda$ has one of the following forms. \begin{itemize} \item[(1)] $\lambda =(n-d, 1, \ldots, 1)$, \item[(2)] $\lambda =(n-d,d)$, \item[(3)] $\lambda =(a,a,1)$. \end{itemize} \end{cor} \section{Characteristic free approach to the low dimensional cases} \begin{prop}\label{I_{n,n-1} is CM} For any $K$, $R/I^{\rm Sp}_{(n-2,2)}$ is a Cohen--Macaulay ring with Hilbert series $$\operatorname{Hilb}(R/I^{\rm Sp}_{(n-2,2)},t)=\frac{1+(n-2)t+t^2}{(1-t)^2}.$$ \end{prop} \begin{proof} As in the previous section, set $S:= K[x_1, \ldots, x_{n-1}]$, and let $\varphi: R\to S$ be the natural surjection. Since $x_n$ is a non-zero divisor of $R/I^{\rm Sp}_{(n-2,2)}$, and $R/(I^{\rm Sp}_{(n-2,2)}+(x_n)) \cong S/\varphi(I^{\rm Sp}_{(n-2,2)})$, it suffices to show that $S/\varphi(I^{\rm Sp}_{(n-2,2)})$ is Cohen--Macaulay. In the proof of Theorem~\ref{ISp_{(n-k,k)} is radical}, we have shown that $\varphi(I^{\rm Sp}_{(n-d, d)})$ is radical. Since $\varphi(I^{\rm Sp}_{(n-2, 2)})$ and $I^{\rm Sp}_{(n-2,1)}$ are radical, we have $$\varphi(I^{\rm Sp}_{(n-2,2)})= I^{\rm Sp}_{(n-2,1)} \cap I_{{\langle}2{\rangle}}$$ by Lemma~\ref{radical of frJ}, where $I^{\rm Sp}_{(n-2,1)}$ is a Specht ideal in $S$ and $I_{{\langle}2 {\rangle}} \subset S$ is the ideal generated by all squarefree monomials of degree 2. Consider the short exact sequence \begin{equation}\label{MV1} 0 \longrightarrow S/\varphi(I^{\rm Sp}_{(n-2,2)}) \longrightarrow S/I^{\rm Sp}_{(n-2,1)} \oplus S/I_{{\langle}2{\rangle}} \longrightarrow S/(I^{\rm Sp}_{(n-2,1)} + I_{{\langle}2{\rangle}}) \longrightarrow 0. \end{equation} It is an elementary fact of Stanley--Reisner ring theory that $S/I_{{\langle}2{\rangle}}$ is a 1-dimensional Cohen--Macaulay ring. Moreover, we have $S/I^{\rm Sp}_{(n-2,1)} \cong K[X]$ and $S/(I^{\rm Sp}_{(n-2,1)} + I_{{\langle}2{\rangle}}) \cong K[X]/(X^2)$. Hence $S/\varphi(I^{\rm Sp}_{(n-2,2)})$ is Cohen--Macaulay by \eqref{MV1}.
Next we will compute the Hilbert series. By basic techniques of Stanley--Reisner ring theory (see, for example, \cite[pp.212--213]{BH}), we have $$\operatorname{Hilb}(S/I_{{\langle}2{\rangle}},t)= \frac{1+(n-2)t}{(1-t)}.$$ Hence, by \eqref{MV1}, \begin{eqnarray*} \operatorname{Hilb}(S/\varphi(I^{\rm Sp}_{(n-2,2)}),t) &=& \operatorname{Hilb}(S/I^{\rm Sp}_{(n-2,1)},t)+\operatorname{Hilb}(S/I_{{\langle}2{\rangle}},t)\\ && \qquad - \operatorname{Hilb}(S/(I^{\rm Sp}_{(n-2,1)} + I_{{\langle}2{\rangle}}),t)\\ &=& \frac{1}{(1-t)} + \frac{1+(n-2)t}{(1-t)} -(1+t)\\ &=& \frac{1+(n-2)t+t^2}{(1-t)}. \end{eqnarray*} Since $x_n$ is a non-zero divisor of $R/I^{\rm Sp}_{(n-2,2)}$, the Hilbert series of $R/I^{\rm Sp}_{(n-2,2)}$ has the expected form. \end{proof} \begin{prop} For all $n \ge 4$, $R/I^{\rm Sp}_{(n-2,2)}$ is Gorenstein. \end{prop} \begin{proof} By Proposition~\ref{I_{n,n-1} is CM}, $A:=R/I^{\rm Sp}_{(n-2,2)}$ is a Cohen--Macaulay ring admitting the canonical module $\omega_A$, whose Hilbert series is also $$\operatorname{Hilb}(\omega_A,t)=\frac{1+(n-2)t+t^2}{(1-t)^2}.$$ In particular, $\dim_K (\omega_A)_0 =1$. Take $0 \ne a \in (\omega_A)_0$. In the sequel, for $F \subset [n]$, $\overline{P}_F$ denotes the prime ideal of $A=R/I^{\rm Sp}_{(n-2,2)}$ given by the image of $P_F \subset R$. Since $$\operatorname{Ass}_A \omega_A = \operatorname{Ass}_A A =\{ \overline{P}_F \mid F \subset [n], \, \#F =n-1 \}, $$ there is some $F' \subset [n]$ such that $\# F'=n-1$ and $\overline{P}_{F'} \supset \operatorname{Ann}_A(a)$. On the other hand, the symmetric group $S_n$ also acts on $\omega_A$. Since $a \in (\omega_A)_0 \cong K$, $a$ is stable under the $S_n$-action up to scalar multiplication. Hence $S_n$ also acts on $\operatorname{Ann}_A(a)$, and we have $\overline{P}_F \supset \operatorname{Ann}_A(a)$ for all $F \subset [n]$ with $\# F=n-1$.
Since $\bigcap_{\# F=n-1} P_F =I_{n,n-1}=I^{\rm Sp}_{(n-2,2)}$, we have $\operatorname{Ann}_A(a) \subset \bigcap_{\# F=n-1} \overline{P}_F=(0)$, and hence $A \cong Aa$. Since $Aa$ and $\omega_A$ have the same Hilbert function, $\omega_A=Aa$, and $A$ is Gorenstein. \end{proof} \begin{thm}\label{3-ji shiki} For $n \ge 5$, $R/I_{n,n-2}$ is Cohen--Macaulay if and only if $\operatorname{char}(K) \ne 2$. Hence, for $n \ge 6$, $R/I^{\rm Sp}_{(n-3,3)}$ is Cohen--Macaulay if and only if $\operatorname{char}(K) \ne 2$. The same is true for $R/I^{\rm Sp}_{(2,2,1)}$. \end{thm} \begin{proof} By virtue of Theorems~\ref{ISp_{(n-k,k)} is radical} and \ref{ISp_{(a,a,1)} is radical}, the second and third statements follow from the first, so it suffices to show the first. As above, let $S:=K[x_1, \ldots, x_{n-1}]$ be the polynomial ring and $\varphi: R \to S$ the natural surjection. Note that $I_{5, 3} =I^{\rm Sp}_{(2,2,1)}$ and $I_{n,n-2} =I^{\rm Sp}_{(n-3,3)}$ for $n \ge 6$. Set $\mu :=(n-3,2)$. Then $I^{\rm Sp}_\mu \subset S$ is radical by Theorem~\ref{ISp_{(n-k,k)} is radical}, and we have $$\varphi(I_{n,n-2}) = I^{\rm Sp}_\mu \cap I_{{\langle}3{\rangle}}$$ by Lemma~\ref{radical of frJ}. We consider the exact sequence \begin{equation}\label{MV3} 0 \longrightarrow S/\varphi(I_{n,n-2}) \longrightarrow S/I^{\rm Sp}_\mu \oplus S/I_{{\langle}3{\rangle}} \longrightarrow S/(I^{\rm Sp}_\mu + I_{{\langle}3{\rangle}} ) \longrightarrow 0. \end{equation} Since $I^{\rm Sp}_\mu$ and $I_{{\langle}3{\rangle}}$ have no associated prime in common, $A:=S/(I^{\rm Sp}_\mu + I_{{\langle}3{\rangle}} )$ has dimension 1. Since $S/I^{\rm Sp}_\mu$ and $S/I_{{\langle}3{\rangle}}$ are 2-dimensional Cohen--Macaulay rings, $R/I_{n,n-2}$ is Cohen--Macaulay if and only if so is $S/\varphi(I_{n,n-2})$, if and only if so is $A$. So it suffices to show that $A$ is Cohen--Macaulay if and only if $\operatorname{char}(K) \ne 2$.
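The reductions carried out in the next step rest on explicit polynomial manipulations, which are easy to sanity-check numerically. The following sketch (entirely ours, not part of the proof; the function names are made up) verifies the identity used below for $x_i^2x_j - x_i^2x_k$, and the fact, used later in the proof, that the Specht polynomials $(x_i-x_l)(x_j-x_k) \in I^{\rm Sp}_\mu$ vanish whenever all but one variable are set to a common value:

```python
import itertools
import random

def check_monomial_identity(trials=200, n=7):
    # x_i^2 x_j - x_i^2 x_k = x_i (x_i - x_l)(x_j - x_k) + x_i x_l x_j - x_i x_l x_k
    # for distinct i, j, k, l: checked at random integer points
    for _ in range(trials):
        x = [random.randint(-10, 10) for _ in range(n)]
        i, j, k, l = random.sample(range(n), 4)
        lhs = x[i] ** 2 * x[j] - x[i] ** 2 * x[k]
        rhs = (x[i] * (x[i] - x[l]) * (x[j] - x[k])
               + x[i] * x[l] * x[j] - x[i] * x[l] * x[k])
        if lhs != rhs:
            return False
    return True

def check_specht_vanishing(n=7, values=range(-5, 6)):
    # (x_i - x_l)(x_j - x_k) vanishes when x_1 = a and x_i = 1 for i != 1,
    # since the variable x_1 can occur in at most one of the two factors;
    # every S-linear combination of these products then vanishes there too.
    for a in values:
        x = [a] + [1] * (n - 1)
        for i, j, k, l in itertools.permutations(range(n), 4):
            if (x[i] - x[l]) * (x[j] - x[k]) != 0:
                return False
    return True
```

Since both checks are identities of polynomials with integer coefficients, verifying them on sufficiently many integer points is convincing evidence, though of course the proof in the text does not depend on it.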
Now let us analyze the structure of $A$. For distinct $i,j,k \in [n-1]$, we have $x_ix_jx_k \in I_{{\langle}3{\rangle}} \subset I^{\rm Sp}_\mu + I_{{\langle}3{\rangle}}$. Since $(x_i -x_l)(x_j-x_k) \in I^{\rm Sp}_\mu$ for distinct $i,j,k,l \in [n-1]$, we have $$x_i^2x_j -x_i^2x_k = x_i(x_i -x_l)(x_j-x_k) +x_ix_lx_j-x_ix_lx_k \in I^{\rm Sp}_\mu + I_{{\langle}3{\rangle}}.$$ Similarly, considering $x_ix_j(x_i -x_l)(x_j-x_k)$, we have $x_i^2x_j^2 \in I^{\rm Sp}_\mu+I_{{\langle}3{\rangle}}$. So non-zero monomials of $A_m$ for $m \ge 2$ are of the form $\overline{x_i^m}$ or $\overline{x_i^{m-1}x_j}$. Moreover, $\overline{x_i^{m-1}x_j}=\overline{x_i^{m-1}x_k}$, if $i \ne j, k$. Hence, for $m \ge 3$, the homogeneous component $A_m$ is spanned by the $2(n-1)$ elements $$\overline{x_1^m}, \ \ldots, \overline{x_{n-1}^m}, \ \overline{x_1^{m-1}x_2}, \ \overline{x_2^{m-1}x_3}, \ \ldots, \ \overline{x_{n-2}^{m-1}x_{n-1}}, \ \overline{x_1x_{n-1}^{m-1}}.$$ For simplicity, let $$\alpha_1^{(m)}, \ldots, \alpha_{n-1}^{(m)}, \beta_1^{(m)}, \beta_2^{(m)}, \ldots , \beta_{n-1}^{(m)}$$ denote these $2(n-1)$ elements. To show that $\alpha_1^{(m)}, \ldots, \alpha_{n-1}^{(m)}, \beta_1^{(m)}, \ldots, \beta_{n-1}^{(m)}$ are linearly independent, assume that $$c_1 \alpha_1^{(m)} + \cdots + c_{n-1}\alpha_{n-1}^{(m)} + d_1 \beta_1^{(m)} + d_2 \beta_2^{(m)} + \cdots + d_{n-1} \beta_{n-1}^{(m)}=0$$ for $c_1, \ldots, c_{n-1}, d_1, \ldots, d_{n-1} \in K$. Then we have $$ c_1 x_1^m + \cdots + c_{n-1}x_{n-1}^m + d_1 x_1^{m-1}x_2+ \cdots + d_{n-1}x_1x_{n-1}^{m-1} \in I^{\rm Sp}_\mu + I_{{\langle}3{\rangle}}. $$ Hence there is a degree $m$ element $f \in I_{{\langle}3{\rangle}}$ such that \begin{equation}\label{linear relation} c_1 x_1^m + \cdots + c_{n-1}x_{n-1}^m + d_1 x_1^{m-1}x_2+ \cdots + d_{n-1}x_1x_{n-1}^{m-1} +f \in I^{\rm Sp}_\mu.
\end{equation} For any $a \in K$, if we put $x_1 =a$ and $x_i =1$ for all $i \ne 1$, then the left side of \eqref{linear relation} becomes 0, since every element of $I^{\rm Sp}_\mu$ vanishes under this substitution. Hence, for any $a \in K$, $x_1 =a$ is a root of the equation \begin{equation}\label{m-1 ji houteisiki} c_1 x_1^m + d_1 x_1^{m-1} + (\text{lower degree terms})=0. \end{equation} Since $\# K =\infty$, the left side of \eqref{m-1 ji houteisiki} is the zero polynomial, and $c_1 =d_1 =0$. Similarly, we have $c_i=d_i=0$ for all $i$. This means that $\alpha_1^{(m)}, \ldots, \alpha_{n-1}^{(m)}, \beta_1^{(m)}, \ldots, \beta_{n-1}^{(m)}$ are linearly independent. Hence they form a basis of $A_m$; in particular, we have $\dim_K A_m=2(n-1)$ for all $m \ge 3$. \medskip \noindent \underline{When $\operatorname{char}(K) \ne 2$:} We will show that $e_1 = x_1 + x_2 + \cdots +x_{n-1}$ is $A$-regular. Since $e_1$ is clearly $(S/I^{\rm Sp}_\mu)$-regular, and $I_{{\langle}3{\rangle}}$ is generated by degree 3 elements, we have $e_1 y \ne 0$ for all $0 \ne y \in A_m$ with $m=0,1$. Since we have $$e_1 \alpha_i^{(m)} = \alpha_i^{(m+1)}+(n-2)\beta_i^{(m+1)}, \qquad e_1 \beta_i^{(m)} = \beta_i^{(m+1)}$$ for each $1 \le i \le n-1$, it is easy to see that $e_1 y \ne 0$ for all $0 \ne y \in A_m$ with $m \ge 3$. So it remains to show the case $ 0 \ne y \in A_2$. Consider the $K$-linear map $$f: A_2 \ni y \longmapsto e_1 y \in A_3.$$ For distinct $i, j \in [n-1]$, we have $$f(\overline{x_i^2})= \alpha_i^{(3)}+ (n-2)\beta_i^{(3)}, \qquad f(\overline{x_ix_j})=\beta_i^{(3)} + \beta_j^{(3)}.$$ Hence, for distinct $i, j, k \in [n-1]$, we have $$f(-\overline{x_jx_k} + \overline{x_kx_i} + \overline{x_i x_j})= -\beta_j^{(3)} -\beta_k^{(3)}+\beta_k^{(3)} + \beta_i^{(3)} +\beta_i^{(3)} + \beta_j^{(3)}= 2\beta_i^{(3)}.$$ Since $\operatorname{char}(K) \ne 2$, it follows that $\beta_i^{(3)} \in \operatorname{Im} f$, and hence $\alpha_i^{(3)} \in \operatorname{Im} f$.
So $f$ is surjective. On the other hand, we have $A_2=(S/I^{\rm Sp}_\mu)_2$, and the Hilbert series of $S/I^{\rm Sp}_\mu$ is given in Proposition~\ref{I_{n,n-1} is CM} (of course, we should replace $n$ by $n-1$ there), so we have $$\dim A_2= \dim (S/I^{\rm Sp}_\mu)_2 =2n-2 =\dim A_3.$$ It follows that the surjective map $f$ is also injective, and hence $e_1 y \ne 0$ for all $0 \ne y \in A_2$. Summing up, $e_1$ is $A$-regular. Since $\dim A=1$, $A$ is Cohen--Macaulay. \medskip \noindent \underline{When $\operatorname{char}(K) = 2$:} Set $y:=\overline{x_1x_2}+\overline{x_2x_3}+\overline{x_3x_1} \in A_2$. Clearly, $y \ne 0$. For $i \ge 4$, we have $x_iy=0$. On the other hand, for $i=1,2,3$, we now also have $x_iy= 2\beta_i^{(3)}=0$. Hence $y$ is a non-zero socle element, and $\operatorname{depth} A=0$. This means that $A$ is {\it not} Cohen--Macaulay. \end{proof} \begin{rem} {\it Macaulay2} shows that the Betti diagram of $R/I^{\rm Sp}_{(3,3)} \, (=R/I_{6,4})$ is \begin{verbatim}
       total: 1 5 9 5
           0: 1 . . .
           1: . . . .
           2: . 5 . .
           3: . . 9 5
\end{verbatim} if $\operatorname{char}(K) =0$ (actually, if $\operatorname{char}(K) \ne 2$), and \begin{verbatim}
       total: 1 5 9 6 1
           0: 1 . . . .
           1: . . . . .
           2: . 5 . . .
           3: . . 9 5 1
           4: . . . 1 .
\end{verbatim} if $\operatorname{char}(K) = 2$. \end{rem} Computer experiments suggest the following conjectures. We must admit, however, that computations with Specht ideals are very heavy, so our experimental evidence is limited. \begin{conj} Let $\lambda=(\lambda_1, \ldots, \lambda_l)$ be a partition of $n$ satisfying condition (2) or (3) of Proposition~\ref{CM ISP}. Then $R/I^{\rm Sp}_\lambda$ is Cohen--Macaulay if and only if $\operatorname{char}(K)=0$ or $\operatorname{char}(K) \ge n-\lambda_1$. \end{conj} \begin{conj} If $\operatorname{char}(K)=0$, then $R/I^{\rm Sp}_{(a,a,1)}$ has an $(a+2)$-linear resolution. \end{conj} \end{document}
\begin{document} \begin{abstract} In \cite{Haglund-Remmel-Wilson-2018} Haglund, Remmel and Wilson introduced their \emph{Delta conjectures}, which give two different combinatorial interpretations of the symmetric function $\Delta'_{e_{n-k-1}} e_n$, in terms of rise-decorated or valley-decorated labelled Dyck paths respectively. While the rise version has recently been proved \cites{DAdderio-Mellit-Compositional-Delta-2020,Blasiak-Haiman-Morse-Pun-Seeling-Extended-Delta-2021}, not much is known about the valley version. In this work we prove the Schr\"oder case of the valley Delta conjecture, the Schr\"oder case of its square version \cite{Iraci-VandenWyngaerd-Valley-Square-2021}, and the Catalan case of its extended version \cite{Qiu-Wilson-2020}. Furthermore, assuming the symmetry of (a refinement of) the combinatorial side of the extended valley Delta conjecture, we deduce also the Catalan case of its square version \cite{Iraci-VandenWyngaerd-Valley-Square-2021}. \end{abstract} \title{Some consequences of the valley Delta conjectures} \section{Introduction} In \cite{Haglund-Remmel-Wilson-2018} the authors introduced their \emph{Delta conjectures}, which give two different combinatorial interpretations of the symmetric function $\Delta'_{e_{n-k-1}} e_n$, in terms of rise-decorated or valley-decorated labelled Dyck paths respectively. More precisely, \begin{equation} \label{eq:DeltaConjs} \Delta'_{e_{n-k-1}} e_n=\sum_{\pi\in \mathsf{LD}(0,n)^{* k}}q^{\mathsf{dinv}(\pi)}t^{\mathsf{area}(\pi)}=\sum_{\pi\in \mathsf{LD}(0,n)^{\bullet k}}q^{\mathsf{dinv}(\pi)}t^{\mathsf{area}(\pi)} \end{equation} (see Sections~\ref{sec:CombDef}~and~\ref{sec:SF} for the missing definitions). This symmetric function is of particular interest as it conjecturally gives the Frobenius characteristic of the so-called \emph{super diagonal coinvariants} \cite{Zabrocki-Delta-Module-2019}.
The rise version has been extensively studied \cites{DAdderio-Iraci-VandenWyngaerd-Delta-t0-2020, DAdderio-Iraci-VandenWyngaerd-TheBible-2019, DAdderio-Iraci-VandenWyngaerd-GenDeltaSchroeder-2019, Garsia-Haglund-Remmel-Yoo-2019, Romero-Deltaq1-2017} before being finally proved: the compositional refinement introduced in \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021} is proved in \cite{DAdderio-Mellit-Compositional-Delta-2020}, and the extended version is proved in \cite{Blasiak-Haiman-Morse-Pun-Seeling-Extended-Delta-2021}. On the other hand, the valley version has received significantly less attention, mainly due to technical difficulties. Before recalling what is known about the valley Delta conjecture to this day, we want to point out the intrinsic interest of this version of the conjecture: indeed, in \cite{Haglund-Sergel-2021} a conjectural basis of the super diagonal coinvariants of \cite{Zabrocki-Delta-Module-2019} is provided that would explain the Hilbert series predicted by the valley Delta conjecture (not the rise Delta!). To date, extended \cite{Qiu-Wilson-2020} and square \cite{Iraci-VandenWyngaerd-Valley-Square-2021} versions of the valley Delta conjecture have been formulated. In \cites{Iraci-VandenWyngaerd-Valley-Square-2021,Iraci-VandenWyngaerd-pushing-2021} it is proved that the original valley Delta conjecture implies these other versions, a surprising fact that has no analogue for the rise Delta conjectures: indeed, the rise version of the square Delta conjecture \cite{DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019} is still open. About the valley Delta conjecture itself, almost nothing is known: for example, it is not even clear that the combinatorial side (the rightmost sum in \eqref{eq:DeltaConjs}) is a symmetric function. In this work we make a first step in this direction, by proving the so-called Schr\"oder case of the valley versions of the Delta conjecture, i.e.\ the scalar product $\langle-,e_{n-d}h_d\rangle$.
The strategy, similar to the one used in \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}, also allows us to prove the Schr\"oder case of the valley Delta square conjecture of \cite{Iraci-VandenWyngaerd-Valley-Square-2021}. Using results from \cite{Iraci-VandenWyngaerd-pushing-2021}, we are also able to deduce the Catalan case (i.e.\ the scalar product $\langle-,e_{n}\rangle$) of the extended valley Delta conjecture. Finally, assuming the symmetry of (a refinement of) the combinatorial side of the extended valley Delta conjecture, we deduce also the Catalan case of its square version \cite{Iraci-VandenWyngaerd-Valley-Square-2021}. The paper is organized in the following way: in Section~2 and Section~3 we introduce respectively the combinatorial objects and the symmetric function tools needed in the rest of the paper. In Section~4 we prove our main results, providing in particular a recursion leading to the proof of the Schr\"oder cases of the valley Delta and the valley Delta square conjectures. In Section~5 we show how the Schr\"oder case of the valley Delta conjecture implies the Catalan case of the extended valley Delta conjecture, and in Section~6, assuming the symmetry of (a refinement of) the combinatorial side of the extended valley Delta conjecture, we deduce the Catalan case of its square version. \section{Combinatorial definitions}\label{sec:CombDef} We recall the relevant combinatorial definitions (see also \cites{Haglund-Remmel-Wilson-2018, Qiu-Wilson-2020, Iraci-VandenWyngaerd-Valley-Square-2021}). \begin{definition} A \emph{square path} of size $n$ is a lattice path going from $(0,0)$ to $(n,n)$ consisting of east and north unit steps, always ending with an east step. The set of such paths is denoted by $\mathsf{SQ}(n)$. The \emph{shift} of a square path is the maximum value $s$ such that the path intersects the line $y=x-s$ in at least one point.
We refer to the line $y=x+i$ as the \emph{$i$-th diagonal} of the path, to the line $x=y$ (the $0$-th diagonal) as the \emph{main diagonal}, and to the line $y=x-s$, where $s$ is the shift of the square path, as the \emph{base diagonal}. A vertical step whose starting point lies on the $i$-th diagonal is said to be at \emph{height} $i$. A \emph{Dyck path} is a square path whose shift is $0$. The set of Dyck paths is denoted by $\mathsf{D}(n)$. Of course $\mathsf{D}(n) \subseteq \mathsf{SQ}(n)$. \end{definition} For example, the path (ignoring the circled numbers) in Figure~\ref{fig:labelled-square-path} has shift $3$. \begin{figure} \caption{Example of an element in $\mathsf{LSQ}(m,n)$.} \label{fig:labelled-square-path} \end{figure} \begin{definition} Let $\pi$ be a square path of size $n$. We define its \emph{area word} to be the sequence of integers $a(\pi) = (a_1(\pi), a_2(\pi), \cdots, a_n(\pi))$ such that the $i$-th vertical step of the path starts from the diagonal $y=x+a_i(\pi)$. For example, the path in Figure~\ref{fig:labelled-square-path} has area word $(0, \, -\!3, \, -\!3, \, -\!2, \, -\!2, \, -\!1, \, 0, \, 0)$. \end{definition} \begin{definition} A \emph{partial labelling} of a square path $\pi$ of size $n$ is an element $w \in \mathbb N^n$ such that \begin{itemize} \item if $a_i(\pi) > a_{i-1}(\pi)$, then $w_i > w_{i-1}$, \item $a_1(\pi) = 0 \implies w_1 > 0$, \item there exists an index $i$ such that $a_i(\pi) = - \mathsf{shift}(\pi)$ and $w_i(\pi) > 0$, \end{itemize} i.e.\ if we label the $i$-th vertical step of $\pi$ with $w_i$, then the labels appearing in each column of $\pi$ are strictly increasing from bottom to top, with the additional restrictions that, if the path starts north, then the first label cannot be a $0$, and that there is at least one positive label lying on the base diagonal. We omit the word \emph{partial} if the labelling consists of strictly positive labels only.
\end{definition} \begin{definition} A \emph{(partially) labelled square path} (resp.\ \emph{Dyck path}) is a pair $(\pi, w)$ where $\pi$ is a square path (resp. Dyck path) and $w$ is a (partial) labelling of $\pi$. We denote by $\mathsf{LSQ}(m,n)$ (resp. $\mathsf{LD}(m,n)$) the set of labelled square paths (resp. Dyck paths) of size $m+n$ with exactly $n$ positive labels, and thus exactly $m$ labels equal to $0$. See Figure~\ref{fig:labelled-square-path} for an example. \end{definition} The following definitions will be useful later on. \begin{definition} Let $w$ be a labelling of a square path of size $n$. We define \[ x^w \coloneqq \left. \prod_{i=1}^{n} x_{w_i} \right\rvert_{x_0 = 1}.\] \end{definition} The fact that we set $x_0 = 1$ explains the use of the expression \emph{partially labelled}, as the labels equal to $0$ do not contribute to the monomial. Sometimes, with an abuse of notation, we will write $\pi$ as a shorthand for a labelled path $(\pi, w)$. In that case, we use the notation \[x^\pi \coloneqq x^w.\] Now we want to extend our sets by introducing some decorations. \begin{definition} \label{def:valley} The \emph{contractible valleys} of a labelled square path $\pi$ of size $n$ are the indices $1 \leq i \leq n$ such that one of the following holds: \begin{itemize} \item $i = 1$ and either $a_1(\pi) < -1$, or $a_1(\pi) = -1$ and $w_1 > 0$, \item $i > 1$ and $a_i(\pi) < a_{i-1}(\pi)$, \item $i > 1$, $a_i(\pi) = a_{i-1}(\pi)$, and $w_i > w_{i-1}$. \end{itemize} We define \[ v(\pi, w) \coloneqq \{1 \leq i \leq n \mid i \text{ is a contractible valley} \}, \] corresponding to the set of vertical steps that are directly preceded by a horizontal step and such that, if we were to remove that horizontal step and move it after the vertical step, we would still get a square path with a valid labelling.
In particular, if the vertical step is in the first row and it is attached to a $0$ label, then we require that it is preceded by at least two horizontal steps (as otherwise by removing it we get a path starting north with a $0$ label in the first row). \end{definition} \begin{definition} \label{def:rise} The \emph{rises} of a (labelled) square path $\pi$ of size $n$ are the indices in \[ r(\pi) \coloneqq \{2 \leq i \leq n \mid a_i(\pi) > a_{i-1}(\pi)\}, \] i.e.\ the vertical steps that are directly preceded by another vertical step. \end{definition} \begin{definition} A \emph{valley-decorated (partially) labelled square path} is a triple $(\pi, w, dv)$ where $(\pi, w)$ is a (partially) labelled square path and $dv \subseteq v(\pi, w)$. A \emph{rise-decorated (partially) labelled square path} is a triple $(\pi, w, dr)$ where $(\pi, w)$ is a (partially) labelled square path and $dr \subseteq r(\pi)$. \end{definition} Again, we will often write $\pi$ as a shorthand for the corresponding triple $(\pi, w, dv)$ or $(\pi, w, dr)$. We denote by $\mathsf{LSQ}(m,n)^{\bullet k}$ (resp. $\mathsf{LSQ}(m,n)^{\ast k}$) the set of partially labelled valley-decorated (resp. rise-decorated) square paths of size $m+n$ with $n$ positive labels and $k$ decorated contractible valleys (resp. decorated rises). We denote by $\mathsf{LD}(m,n)^{\bullet k}$ (resp. $\mathsf{LD}(m,n)^{\ast k}$) the corresponding subset of Dyck paths. We also define $\mathsf{LSQ}'(m,n)^{\bullet k}$ as the set of paths in $\mathsf{LSQ}(m,n)^{\bullet k}$ such that there exists an index $i$ such that $a_i(\pi) = - \mathsf{shift}(\pi)$ with $i \not \in dv$ and $w_i(\pi) > 0$, i.e.\ there is at least one positive label lying on the base diagonal that is not a decorated valley. See Figure~\ref{fig:decorated-square-paths} for examples. Notice that, because of the restrictions we have on the labelling and the decorations, the only path with $n=0$ is the empty path, for which $m=0$ and $k=0$. 
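For readers who prefer code, the bullet conditions of Definitions~\ref{def:valley} and~\ref{def:rise} can be transcribed directly. The following small sketch (our own illustration with made-up toy examples, not taken from the paper) computes $v(\pi, w)$ and $r(\pi)$ from the area word and the labelling:

```python
# Sketch of the contractible-valley and rise conditions, phrased in terms of
# the area word a and the labelling w (both 1-indexed, as in the text).

def contractible_valleys(a, w):
    v = set()
    for i in range(1, len(a) + 1):
        if i == 1:
            # i = 1 is contractible if a_1 < -1, or a_1 = -1 with w_1 > 0
            if a[0] < -1 or (a[0] == -1 and w[0] > 0):
                v.add(i)
        elif a[i - 1] < a[i - 2] or (a[i - 1] == a[i - 2] and w[i - 1] > w[i - 2]):
            v.add(i)
    return v

def rises(a):
    # rises: vertical steps directly preceded by another vertical step
    return {i for i in range(2, len(a) + 1) if a[i - 1] > a[i - 2]}

# Hand-checked toy example: a Dyck path with area word (0, 1, 0, 0) and
# labelling (1, 2, 1, 2).  Step 3 falls to a lower diagonal and step 4 stays
# on the same diagonal with a larger label, so v = {3, 4}; the only rise is 2.
```

Note that this transcribes only the inequalities in the definition; the validity of the labelling itself is assumed, not checked.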
\begin{figure} \caption{Examples of decorated labelled square paths.} \label{fig:decorated-square-paths} \end{figure} We also recall the two relevant statistics on these sets (see \cite{Iraci-VandenWyngaerd-Valley-Square-2021}) that reduce to the ones defined in \cite{Loehr-Warrington-square-2007} when $m=0$ and $k=0$. \begin{definition} \label{def:area} Let $(\pi, w, dr) \in \mathsf{LSQ}(m,n)^{\ast k}$ and $s$ be its shift. We define \[ \mathsf{area}(\pi, w, dr) \coloneqq \sum_{i \not \in dr} (a_i(\pi) + s), \] i.e.\ the number of whole squares between the path and the base diagonal that are not in rows containing a decorated rise. For $(\pi, w, dv) \in \mathsf{LSQ}(m,n)^{\bullet k}$ we set $\mathsf{area}(\pi, w, dv) \coloneqq \mathsf{area}(\pi, w, \varnothing)$, where $(\pi, w, \varnothing) \in \mathsf{LSQ}(m,n)^{\ast 0}$. \end{definition} For example, the paths in Figure~\ref{fig:decorated-square-paths} have area $13$ (left) and $10$ (right). Notice that the area does not depend on the labelling. \begin{definition} \label{def:dinv} Let $(\pi, w, dv) \in \mathsf{LSQ}(m,n)^{\bullet k}$. For $1 \leq i < j \leq n$, the pair $(i,j)$ is a \emph{diagonal inversion} if \begin{itemize} \item either $a_i(\pi) = a_j(\pi)$ and $w_i < w_j$ (\emph{primary inversion}), \item or $a_i(\pi) = a_j(\pi) + 1$ and $w_i > w_j$ (\emph{secondary inversion}), \end{itemize} where $w_i$ denotes the $i$-th letter of $w$, i.e.\ the label of the vertical step in the $i$-th row. Then we define \begin{align*} \mathsf{dinv}(\pi,w,dv) & \coloneqq \# \{ 1 \leq i < j \leq n \mid (i,j) \text{ is an inversion and } i \not \in dv \}\\ & + \#\{1 \leq i \leq n \mid a_i(\pi) < 0 \text{ and } w_i > 0 \} - \# dv. \end{align*} For $(\pi, w, dr) \in \mathsf{LSQ}(m,n)^{\ast k}$ we set $\mathsf{dinv}(\pi, w, dr) \coloneqq \mathsf{dinv}(\pi, w, \varnothing)$, where $(\pi, w, \varnothing) \in \mathsf{LSQ}(m,n)^{\bullet 0}$.
\end{definition} We refer to the middle term, counting the non-zero labels below the main diagonal, as \emph{bonus} or \emph{tertiary dinv}. For example, the path in Figure~\ref{fig:decorated-square-paths} (left) has dinv equal to $4$: $2$ primary inversions in which the leftmost label is not a decorated valley, i.e.\ $(1,7)$ and $(1,8)$; $1$ secondary inversion in which the leftmost label is not a decorated valley, i.e.\ $(1,6)$; $3$ bonus dinv, coming from the rows $3$, $4$, and $6$; $2$ decorated contractible valleys. It is easy to check that if $j \in dv$ then either there exists some diagonal inversion $(i,j)$ or $a_j(\pi) < 0$, and so the dinv is always non-negative (see \cite{Iraci-VandenWyngaerd-Valley-Square-2021}*{Proposition~1}). \emph{To lighten the notation we will usually refer to a labelled path $(\pi, w, dr)$ or $(\pi, w, dv)$ simply by $\pi$, so for example we will write $\pi \in \mathsf{LSQ}(m,n)^{\ast k}$ and $\mathsf{dinv}(\pi)$.} Let $\pi$ be any labelled path defined above, with shift $s$. We define its \emph{reading word} as the sequence of labels read starting from the ones in the base diagonal ($y=x-s$) going bottom to top; next the ones in the diagonal $y=x-s+1$ bottom to top; then the ones in the diagonal $y=x-s+2$, and so on. For example, the path in Figure~\ref{fig:decorated-square-paths} (left) has reading word $02401234$. Let us consider the paths in $\mathsf{LSQ}(m,n)^{\bullet k}$ whose reading word is a shuffle of $m$ $0$'s, the string $1, 2, \cdots, n-d$, and the string $n, n-1, \cdots, n-d+1$. Notice that, given this restriction and the information about the position of the zero labels, all the information we need to keep track of the labelling is the position of the $d$ biggest labels, which will end up labelling \emph{peaks}, i.e.\ vertical steps followed by a horizontal step. Hence we can record the $d$ biggest labels as decorated peaks and forget about the positive labels.
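As an illustration (our own, with a toy example rather than the path in the figure), the area and dinv statistics of Definitions~\ref{def:area} and~\ref{def:dinv} can be computed mechanically from the area word, the labelling and the decorations:

```python
# Sketch of the area and dinv statistics (1-indexed, as in the text).
# dv is the set of decorated valleys, dr the set of decorated rises.

def area(a, dr=frozenset()):
    s = max(0, -min(a))  # the shift of the path
    return sum(a[i - 1] + s for i in range(1, len(a) + 1) if i not in dr)

def dinv(a, w, dv=frozenset()):
    n = len(a)
    inv = 0
    for i in range(1, n + 1):
        if i in dv:          # inversions (i, j) with decorated i do not count
            continue
        for j in range(i + 1, n + 1):
            primary = a[i - 1] == a[j - 1] and w[i - 1] < w[j - 1]
            secondary = a[i - 1] == a[j - 1] + 1 and w[i - 1] > w[j - 1]
            if primary or secondary:
                inv += 1
    bonus = sum(1 for i in range(n) if a[i] < 0 and w[i] > 0)  # tertiary dinv
    return inv + bonus - len(dv)

# Toy example: a Dyck path with area word (0, 1, 1, 0) and labels (1, 2, 3, 2):
# primary inversions (1,4) and (2,3), secondary inversion (3,4), no bonus and
# no decorations, so dinv = 3; the undecorated area is 0 + 1 + 1 + 0 = 2.
```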
We can thus identify our set with the set $\mathsf{SQ}(m,n)^{\bullet k, \circ d}$ of square paths with $m$ vertical steps labelled by a zero, $n$ non-labelled vertical steps, $k$ decorated contractible valleys and $d$ decorated peaks, where a valley is contractible if it is contractible in the corresponding labelled path, and the statistics are inherited from the labelled path as well. Similarly, we define $\mathsf{SQ}'(m,n)^{\bullet k, \circ d} \subseteq \mathsf{SQ}(m,n)^{\bullet k, \circ d}$ to be the subset coming from $\mathsf{LSQ}'(m,n)^{\bullet k}\subseteq \mathsf{LSQ}(m,n)^{\bullet k}$, $\mathsf{SQ}(m,n)^{\ast k, \circ d}$ the one coming from rise-decorated paths, and $\mathsf{D}(m,n)^{\bullet k, \circ d}$ and $\mathsf{D}(m,n)^{\ast k, \circ d}$ for the Dyck counterparts. Finally, we sometimes omit writing $m$ or $k$ when they are equal to $0$, e.g. \[\mathsf{SQ}'(n)^{\bullet k, \circ d}=\mathsf{SQ}'(0,n)^{\bullet k, \circ d}\quad\text{ and }\quad \mathsf{LSQ}(n) =\mathsf{LSQ}(0,n)^{\ast 0}. \] \section{Symmetric functions}\label{sec:SF} For all the undefined notations and the unproven identities, we refer to \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Section~1}, where definitions, proofs and/or references can be found. We denote by $\Lambda$ the graded algebra of symmetric functions with coefficients in $\mathbb{Q}(q,t)$, and by $\langle\, , \rangle$ the \emph{Hall scalar product} on $\Lambda$, defined by declaring that the Schur functions form an orthonormal basis. The standard bases of the symmetric functions that will appear in our calculations are the monomial $\{m_\lambda\}_{\lambda}$, complete $\{h_{\lambda}\}_{\lambda}$, elementary $\{e_{\lambda}\}_{\lambda}$, power $\{p_{\lambda}\}_{\lambda}$ and Schur $\{s_{\lambda}\}_{\lambda}$ bases. 
For a partition $\mu \vdash n$, we denote by \[ \widetilde{H}_\mu \coloneqq \widetilde{H}_\mu[X] = \widetilde{H}_\mu[X; q,t] = \sum_{\lambda \vdash n} \widetilde{K}_{\lambda, \mu}(q,t) s_{\lambda} \] the \emph{(modified) Macdonald polynomials}, where \[ \widetilde{K}_{\lambda, \mu}(q,t) \coloneqq K_{\lambda, \mu}(q,1/t) t^{n(\mu)} \] are the \emph{(modified) $q,t$-Kostka coefficients} (see \cite{Haglund-Book-2008}*{Chapter~2} for more details). Macdonald polynomials form a basis of the ring of symmetric functions $\Lambda$. This is a modification of the basis introduced by Macdonald \cite{Macdonald-Book-1995}. If we identify the partition $\mu$ with its Ferrers diagram, i.e.\ with the collection of cells $\{(i,j)\mid 1\leq i\leq \mu_j, 1\leq j\leq \ell(\mu)\}$, then for each cell $c\in \mu$ we refer to the \emph{arm}, \emph{leg}, \emph{co-arm} and \emph{co-leg} (denoted respectively as $a_\mu(c), l_\mu(c), a'_\mu(c), l'_\mu(c)$) as the number of cells in $\mu$ that are strictly to the right of, above, to the left of and below $c$ in $\mu$, respectively (see Figure~\ref{fig:limbs}). \begin{figure} \caption{Limbs and co-limbs of a cell in a partition.} \label{fig:limbs} \end{figure} Let $M \coloneqq (1-q)(1-t)$. For every partition $\mu$, we define the following constants: \begin{align*} B_{\mu} & \coloneqq B_{\mu}(q,t) = \sum_{c \in \mu} q^{a_{\mu}'(c)} t^{l_{\mu}'(c)},\\ \Pi_{\mu} & \coloneqq \Pi_{\mu}(q,t) = \prod_{c \in \mu / (1)} (1-q^{a_{\mu}'(c)} t^{l_{\mu}'(c)}),\\ w_{\mu} &\coloneqq w_{\mu}(q,t)=\prod_{c\in \mu}(q^{a_{\mu}(c)}-t^{l_{\mu}(c)+1})(t^{l_{\mu}(c)}-q^{a_{\mu}(c)+1}). \end{align*} We will make extensive use of the \emph{plethystic notation} (cf. \cite{Haglund-Book-2008}*{Chapter~1, page 19}). We will also use the standard shorthand $f^\ast = f \left[\frac{X}{M}\right]$.
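Concretely, the co-arm and co-leg of a cell are immediate to read off from the Ferrers diagram, and with them the constants above; here is a brief computational sketch (ours, not from the paper) for $B_\mu$ and $\Pi_\mu$:

```python
import math

def cells(mu):
    # cells (i, j) with 1 <= i <= mu_j; co-arm a' = i - 1 (cells strictly to
    # the left), co-leg l' = j - 1 (cells strictly below)
    return [(i, j) for j, part in enumerate(mu, start=1)
                   for i in range(1, part + 1)]

def B(mu, q, t):
    # B_mu(q, t) = sum over cells of q^{a'(c)} t^{l'(c)}
    return sum(q ** (i - 1) * t ** (j - 1) for i, j in cells(mu))

def Pi(mu, q, t):
    # Pi_mu(q, t): product over all cells except the corner cell (1, 1)
    return math.prod(1 - q ** (i - 1) * t ** (j - 1)
                     for i, j in cells(mu) if (i, j) != (1, 1))

# For mu = (2, 1): B_mu = 1 + q + t and Pi_mu = (1 - q)(1 - t).
```

Evaluating at numeric $q$ and $t$ (for instance $q=2$, $t=3$, giving $B_{(2,1)}=6$ and $\Pi_{(2,1)}=2$) is a quick way to cross-check hand computations.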
We define the \emph{star scalar product} by setting, for every $f,g\in \Lambda$, \[ \langle f,g\rangle_\ast\coloneqq \langle f[X],\omega g[MX]\rangle, \] where $\omega$ is the involution of $\Lambda$ sending $e_\lambda$ to $h_\lambda$ for every partition $\lambda$. It is well known that for any two partitions $\mu,\nu$ we have \[ \langle\widetilde{H}_\mu,\widetilde{H}_\nu\rangle_\ast=\delta_{\mu,\nu}w_\mu. \] We also need several linear operators on $\Lambda$. \begin{definition}[\protect{\cite{Bergeron-Garsia-ScienceFiction-1999}*{3.11}}] \label{def:nabla} We define the linear operator $\nabla \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \nabla \widetilde{H}_\mu = e_{\lvert \mu \rvert}[B_\mu] \widetilde{H}_\mu=q^{n(\mu')}t^{n(\mu)} \widetilde{H}_\mu. \] \end{definition} \begin{definition} \label{def:pi} We define the linear operator $\mathbf{\Pi} \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \mathbf{\Pi} \widetilde{H}_\mu = \Pi_\mu \widetilde{H}_\mu \] where we conventionally set $\Pi_{\varnothing} \coloneqq 1$. \end{definition} \begin{definition} \label{def:delta} For $f \in \Lambda$, we define the linear operators $\Delta_f, \Delta'_f \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \Delta_f \widetilde{H}_\mu = f[B_\mu] \widetilde{H}_\mu, \qquad \qquad \Delta'_f \widetilde{H}_\mu = f[B_\mu-1] \widetilde{H}_\mu. \] \end{definition} Observe that on the vector space of symmetric functions homogeneous of degree $n$, denoted by $\Lambda^{(n)}$, the operator $\nabla$ equals $\Delta_{e_n}$. Notice also that $\nabla$, $\Delta_f$ and $\mathbf{\Pi}$ are all self-adjoint with respect to the star scalar product.
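The second equality in Definition~\ref{def:nabla} holds because $e_{|\mu|}$ evaluated plethystically on the sum of $|\mu|$ monomials $B_\mu$ is simply their product. This is easy to check numerically; the following is our own sketch, writing $n(\mu) = \sum_{i \geq 1} (i-1)\mu_i$:

```python
def n_stat(mu):
    # n(mu) = sum_{i >= 1} (i - 1) * mu_i
    return sum(i * part for i, part in enumerate(mu))

def conjugate(mu):
    # conjugate (transposed) partition
    return tuple(sum(1 for part in mu if part >= i) for i in range(1, mu[0] + 1))

def check_nabla_eigenvalue(mu, q, t):
    # e_{|mu|}[B_mu] is the product of the |mu| monomials q^{a'} t^{l'};
    # this should equal q^{n(mu')} t^{n(mu)}
    prod = 1
    for j, part in enumerate(mu):
        for i in range(part):
            prod *= q ** i * t ** j
    return prod == q ** n_stat(conjugate(mu)) * t ** n_stat(mu)
```

For instance, for $\mu = (3,1)$ the monomials are $1, q, q^2, t$, whose product is $q^3 t = q^{n(\mu')} t^{n(\mu)}$ since $n(\mu') = 3$ and $n(\mu) = 1$.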
\begin{definition}[\protect{\cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}*{(28)}}] \label{def:theta} For any symmetric function $f \in \Lambda^{(n)}$ we define the \emph{Theta operators} on $\Lambda$ in the following way: for every $F \in \Lambda^{(m)}$ we set \begin{equation*} \Theta_f F \coloneqq \left\{\begin{array}{ll} 0 & \text{if } n \geq 1 \text{ and } m=0 \\ f \cdot F & \text{if } n=0 \text{ and } m=0 \\ \mathbf{\Pi} (f \left[\frac{X}{M}\right] \cdot \mathbf{\Pi}^{-1} F) & \text{otherwise} \end{array} \right. , \end{equation*} and we extend the definition to any $f, F \in \Lambda$ by linearity. \end{definition} It is clear that $\Theta_f$ is linear, and moreover, if $f$ is homogeneous of degree $k$, then so is $\Theta_f$, i.e. \[\Theta_f \Lambda^{(n)} \subseteq \Lambda^{(n+k)} \qquad \text{ for } f \in \Lambda^{(k)}. \] It is convenient to introduce the so-called $q$-notation. In general, a $q$-analogue of an expression is a generalisation involving a parameter $q$ that reduces to the original one as $q \rightarrow 1$. \begin{definition} For a natural number $n \in \mathbb{N}$, we define its $q$-analogue as \[ [n]_q \coloneqq \frac{1-q^n}{1-q} = 1 + q + q^2 + \dots + q^{n-1}. \] \end{definition} Given this definition, one can define the $q$-factorial and the $q$-binomial as follows. \begin{definition} We define \[ [n]_q! \coloneqq \prod_{k=1}^{n} [k]_q \quad \text{and} \quad \qbinom{n}{k}_q \coloneqq \frac{[n]_q!}{[k]_q![n-k]_q!}. \] \end{definition} \begin{definition} For $x$ any variable and $n \in \mathbb{N} \cup \{ \infty \}$, we define the \emph{$q$-Pochhammer symbol} as \[ (x;q)_n \coloneqq \prod_{k=0}^{n-1} (1-xq^k) = (1-x) (1-xq) (1-xq^2) \cdots (1-xq^{n-1}). \] \end{definition} We can now introduce yet another family of symmetric functions.
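Before doing so, we note that the $q$-analogues just defined are easy to experiment with on a computer. The following short sketch (helper names such as `q_int` and `q_binom` are ours, not from the literature) represents polynomials in $q$ as coefficient lists and computes, e.g., $\qbinom{4}{2}_q = 1 + q + 2q^2 + q^3 + q^4$:

```python
def poly_mul(a, b):
    # product of two polynomials in q, given as coefficient lists
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def q_int(n):
    # [n]_q = 1 + q + ... + q^(n-1)
    return [1] * n if n > 0 else [0]

def q_factorial(n):
    # [n]_q! = [1]_q [2]_q ... [n]_q
    p = [1]
    for k in range(1, n + 1):
        p = poly_mul(p, q_int(k))
    return p

def q_binom(n, k):
    # Gaussian binomial via the Pascal-type recursion
    # [n, k]_q = [n-1, k-1]_q + q^k [n-1, k]_q
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = q_binom(n - 1, k - 1)
    b = [0] * k + q_binom(n - 1, k)
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

def q_pochhammer_qq(n):
    # (q; q)_n = (1 - q)(1 - q^2) ... (1 - q^n)
    p = [1]
    for k in range(1, n + 1):
        p = poly_mul(p, [1] + [0] * (k - 1) + [-1])  # factor 1 - q^k
    return p

print(q_binom(4, 2))  # [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4
```

Setting $q = 1$ (i.e.\ summing the coefficients) recovers the ordinary integers, factorials and binomials, which is a convenient sanity check.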
\begin{definition} \label{def:Enk} For $0 \leq k \leq n$, we define the symmetric function $E_{n,k}$ \cite{Garsia-Haglund-qtCatalan-2002} by the expansion \[ e_n \left[ X \frac{1-z}{1-q} \right] = \sum_{k=0}^n \frac{(z;q)_k}{(q;q)_k} E_{n,k}. \] \end{definition} Notice that $E_{n,0} = \delta_{n,0}$. Setting $z=q^j$ we get \[ e_n \left[ X \frac{1-q^j}{1-q} \right] = \sum_{k=0}^n \frac{(q^j;q)_k}{(q;q)_k} E_{n,k} = \sum_{k=0}^n \qbinom{k+j-1}{k}_q E_{n,k} \] and in particular, for $j=1$, we get \begin{equation} \label{eq:Enken} e_n = E_{n,0} + E_{n,1} + E_{n,2} + \cdots + E_{n,n}, \end{equation} so these symmetric functions split $e_n$, in some sense. The Theta operators will be useful to restate the Delta conjectures in a new fashion, thanks to the following results. \begin{theorem}[\protect{\cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}*{Theorem~3.1}}] \label{thm:theta-en} \[ \Theta_{e_k} \nabla e_{n-k} = \Delta'_{e_{n-k-1}} e_n \] \end{theorem} We will also need the following identity. \begin{theorem}[\cite{DAdderio-Romero-Theta-Identities-2020}*{Corollary~9.2}] \label{thm:sf-identity} Given $m,n,k,r \in \mathbb{N}$, we have \begin{align*} h_m^\perp \Theta_{e_k} \nabla E_{n-k,r} = \sum_{p=0}^m t^{m-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \Delta_{h_{m-p}} \Theta_{e_{k-i}} \nabla E_{n-m-(k-i), r-p+i} \end{align*} where $h_m^\perp$ is the adjoint of the multiplication by $h_m$ with respect to the Hall scalar product. \end{theorem} We applied the change of variables $j \mapsto m, m \mapsto k, p \mapsto n-k-r, k \mapsto r, s \mapsto p, r \mapsto p-i$ in \cite{DAdderio-Romero-Theta-Identities-2020}*{Corollary~9.2} in order to make it easier to interpret combinatorially and more consistent with the notation used in \cite{Iraci-VandenWyngaerd-pushing-2021}.
\section{The Schr\"oder case of the valley version of the Delta conjectures} First of all, we state the extended valley Delta conjecture and the extended valley Delta square conjecture. \begin{conjecture}[Extended valley Delta conjecture \cite{Qiu-Wilson-2020}] \label{conj:valleyDelta} \begin{equation} \label{eq:valleyDelta} \Delta_{h_m}\Theta_{e_k}\nabla e_{n-k} =\sum_{\pi\in \mathsf{LD}(m,n)^{\bullet k}}q^{\mathsf{dinv}(\pi)}t^{\mathsf{area}(\pi)}x^{\pi}. \end{equation} \end{conjecture} \begin{conjecture}[Extended valley Delta square conjecture \cite{Iraci-VandenWyngaerd-Valley-Square-2021}] \begin{equation} \label{eq:valleyDeltasquare} \Delta_{h_m}\Theta_{e_k}\nabla \omega (p_{n-k})=\sum_{\pi\in \mathsf{LSQ}'(m,n)^{\bullet k}}q^{\mathsf{dinv}(\pi)}t^{\mathsf{area}(\pi)}x^{\pi}. \end{equation} \end{conjecture} \begin{remark} \label{rmk:SFcombside} It should be noticed that in general the combinatorial sides of \eqref{eq:valleyDelta} and \eqref{eq:valleyDeltasquare} are not even known to be symmetric functions. Hence these conjectures include the statement that those combinatorial sums are indeed symmetric functions. \end{remark} The main result we want to prove is the so-called \emph{Schr\"oder case} of the valley version of the Delta conjecture and the Delta square conjecture. In other words, we want to show that the identities hold if we take the scalar product with $e_{n-d}h_d$.
On the combinatorial side, assuming that those sums are symmetric functions (cf.\ Remark~\ref{rmk:SFcombside}), the theory of \emph{shuffles} (cf.\ \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Section~3.3}) tells us that taking the scalar product with $e_{n-d}h_d$ in \eqref{eq:valleyDelta} and \eqref{eq:valleyDeltasquare} gives (cf.\ end of Section~\ref{sec:CombDef}) \[\sum_{\pi \in \mathsf{D}(m,n)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \quad \text{ and } \quad \sum_{\pi \in \mathsf{SQ}'(m,n)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}\] respectively. \begin{theorem} \label{thm:valley-schroeder} \[ \langle \Theta_{e_k} \nabla e_{n-k}, e_{n-d} h_d \rangle = \sum_{\pi \in \mathsf{D}(n)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \] \end{theorem} \begin{theorem} \label{thm:valley-square-schroeder} \[ \langle \Theta_{e_k} \nabla \omega(p_{n-k}), e_{n-d} h_d \rangle = \sum_{\pi \in \mathsf{SQ}'(n)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \] \end{theorem} In order to prove these results, we proceed as follows. First, we recall (\eqref{eq:Enken} and \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}*{(26)}) that \[ e_{n-k} = \sum_{r=1}^{n-k} E_{n-k,r} \qquad \text{ and } \qquad \omega(p_{n-k}) = \sum_{r=1}^{n-k} \frac{[n-k]_q}{[r]_q} E_{n-k,r} \] when $n-k > 0$, and $e_0 = \omega(p_0) = 1$ when $n-k=0$. Then, we find an algebraic recursion satisfied by the polynomials $\langle \Theta_{e_k} \nabla E_{n-k,r}, e_{n-d} h_d \rangle$, which we use to derive a similar one satisfied by the polynomials $\langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k,r}, e_{n-d} h_d \rangle$. 
Next, we prove that the $q,t$-enumerators of the sets \[ \mathsf{D}(n \backslash r)^{\bullet k, \circ d} \coloneqq \{ \pi \in \mathsf{D}(n)^{\bullet k, \circ d} \mid \text{$r$ non-decorated vertical steps of $\pi$ touch the main diagonal} \} \] and \[ \mathsf{SQ}'(n \backslash r)^{\bullet k, \circ d} \coloneqq \{ \pi \in \mathsf{SQ}'(n)^{\bullet k, \circ d} \mid \text{$r$ non-decorated vertical steps of $\pi$ touch the base diagonal} \} \] satisfy the same recursions as the aforementioned polynomials with the same initial conditions, thus proving the equality. Finally, we take the sum over $r$ of the identities we get, completing the proof of Theorem~\ref{thm:valley-schroeder} and Theorem~\ref{thm:valley-square-schroeder}. \subsection{Algebraic recursions} We begin by proving the following lemma. \begin{lemma} \[ \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle = \langle \Delta_{h_k} \Delta_{e_{n-k-d}} E_{n-k, r}, e_{n-k} \rangle \] \end{lemma} \begin{proof} By \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}*{Lemma~6.1}, we have \[ \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle = \langle \Delta_{h_k} \nabla E_{n-k, r}, e_{n-k-d} h_d \rangle \] and by repeatedly applying the well-known identity $\langle \Delta_{e_a} f, h_n \rangle = \langle f, e_a h_{n-a} \rangle$ (see \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Lemma~4.1}), we have \begin{align*} \langle \Delta_{h_k} \nabla E_{n-k, r}, e_{n-k-d} h_d \rangle & = \langle \Delta_{e_{n-k-d}} \Delta_{h_k} \nabla E_{n-k, r}, h_{n-k} \rangle \\ & = \langle \nabla \Delta_{h_k} \Delta_{e_{n-k-d}} E_{n-k, r}, h_{n-k} \rangle \\ & = \langle \Delta_{h_k} \Delta_{e_{n-k-d}} E_{n-k, r}, e_{n-k} \rangle \end{align*} as desired.
\end{proof} Now, the polynomial $\langle \Delta_{h_k} \Delta_{e_{n-k-d}} E_{n-k, r}, e_{n-k} \rangle$ coincides with the expression $F_{n,r}^{(d,k)}$ in \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Section~4.3, (4.77)}, so by \cite{DAdderio-Iraci-VandenWyngaerd-TheBible-2019}*{Theorem~4.18}, up to some simple rewriting, we have the following. \begin{proposition} \label{prop:preliminary-alg-recursion} The expressions $\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle$ satisfy the recursion \begin{align*} \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle & = \sum_{j=0}^k t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d q^{\binom{v}{2}} \qbinom{r}{v}_q \qbinom{r+j-1}{j}_q \qbinom{v+j+s-1}{s}_q \\ & \quad \times \langle \Theta_{e_{k-j}} \nabla E_{n-k-r, s}, e_{(n-r-j)-(d-v)} h_{d-v} \rangle \end{align*} with initial conditions $\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle = \delta_{r,0} \delta_{k,0} \delta_{d,0}$ for $n = 0$. \end{proposition} Notice that this implies that $\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle$ is actually a polynomial in $\mathbb{N}[q,t]$. We want to rewrite it slightly, for which we need the following lemma.
\begin{lemma} \label{lem:chu-vandermonde} \[ q^{\binom{v}{2}} \qbinom{r}{v}_q \qbinom{r+j-1}{j}_q = \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r}{u+v}_q \qbinom{v+j-1}{j-u}_q \] \end{lemma} \begin{proof} We have \begin{align*} q^{\binom{v}{2}} \qbinom{r}{v}_q \qbinom{r+j-1}{j}_q & = q^{\binom{v}{2}} \qbinom{r}{v}_q \sum_{u=0}^{r-v} q^{u(u+v-1)} \qbinom{r-v}{u}_q \qbinom{v+j-1}{u+v-1}_q \\ & = q^{\binom{v}{2}} \qbinom{r}{v}_q \sum_{u=0}^{r-v} q^{u(u-1)} \qbinom{r-v}{u}_q q^{uv} \qbinom{v+j-1}{j-u}_q \\ & = \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{r}{v}_q \qbinom{r-v}{u}_q q^{\binom{u}{2} + uv + \binom{v}{2}} \qbinom{v+j-1}{j-u}_q \\ & = \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r}{u+v}_q \qbinom{v+j-1}{j-u}_q \end{align*} where in the first equality we used the well-known $q$-Chu-Vandermonde identity \cite{Andrews-Book-Partitions}*{(3.3.10)}, and the others are simple algebraic manipulations. \end{proof} Combining Proposition~\ref{prop:preliminary-alg-recursion} and Lemma~\ref{lem:chu-vandermonde}, we get the following. \begin{theorem} \label{thm:alg-recursion} The expressions $\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle$ satisfy the recursion \begin{align*} \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle & = \sum_{j=0}^k t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r}{u+v}_q \qbinom{v+j-1}{j-u}_q \\ & \quad \times \qbinom{v+j+s-1}{s}_q \langle \Theta_{e_{k-j}} \nabla E_{n-k-r, s}, e_{(n-r-j)-(d-v)} h_{d-v} \rangle \end{align*} with initial conditions $\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle = \delta_{r,0} \delta_{k,0} \delta_{d,0}$ for $n = 0$. \end{theorem} Using Theorem~\ref{thm:alg-recursion}, we can get a similar recursion for the other family of polynomials.
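Lemma~\ref{lem:chu-vandermonde} can also be checked numerically for small parameters. The following sketch (helper names are ours; polynomials in $q$ are represented as coefficient lists, with the empty-product convention $\qbinom{n}{0}_q = 1$ for every $n$) verifies the identity for all $r \leq 5$, $0 \leq v \leq r$, $j \leq 4$:

```python
from math import comb

def pmul(a, b):
    # product of two polynomials in q, given as coefficient lists
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def padd(a, b):
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

def trim(p):
    # drop trailing zero coefficients so polynomials compare reliably
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def qbin(n, k):
    # Gaussian binomial via [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q;
    # [n,0]_q = 1 (empty product), and 0 when k < 0 or n < k
    if k == 0:
        return [1]
    if k < 0 or n < k:
        return [0]
    return padd(qbin(n - 1, k - 1), [0] * k + qbin(n - 1, k))

def lhs(r, v, j):
    # q^binom(v,2) [r,v]_q [r+j-1,j]_q
    return trim([0] * comb(v, 2) + pmul(qbin(r, v), qbin(r + j - 1, j)))

def rhs(r, v, j):
    # sum over u of q^(binom(u,2)+binom(u+v,2)) [u+v,u]_q [r,u+v]_q [v+j-1,j-u]_q
    total = [0]
    for u in range(r - v + 1):
        term = pmul(qbin(u + v, u), pmul(qbin(r, u + v), qbin(v + j - 1, j - u)))
        total = padd(total, [0] * (comb(u, 2) + comb(u + v, 2)) + term)
    return trim(total)

for r in range(6):
    for v in range(r + 1):
        for j in range(5):
            assert lhs(r, v, j) == rhs(r, v, j)
print("lemma verified for all r <= 5, v <= r, j <= 4")
```

Raising the upper bounds extends the check; the convention $\qbinom{-1}{0}_q = 1$ only matters in the degenerate cases where $r = 0$ or $v = j = 0$.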
\begin{theorem} \label{thm:alg-recursion-square} The expressions $\langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n-d} h_d \rangle$ satisfy the recursion \begin{align*} \langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, & e_{n-d} h_d \rangle = \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle \\ & + \sum_{j=0}^k q^r t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r-1}{u+v-1}_q \qbinom{v+j}{j-u}_q \\ & \quad \times \qbinom{v+j+s-1}{s-1}_q \langle \Theta_{e_{k-j}} \nabla \frac{[n-k-r]_q}{[s]_q} E_{n-k-r, s}, e_{(n-r-j)-(d-v)} h_{d-v} \rangle \end{align*} with initial conditions $\langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n-d} h_d \rangle = \delta_{r,0} \delta_{k,0} \delta_{d,0}$ for $n = 0$. \end{theorem} \begin{proof} We need to be careful with the initial conditions, as the factor $\frac{[n-k]_q}{[r]_q}$ becomes $\frac{[0]_q}{[0]_q}$ when $n-k=r=0$. However, the property we actually need is \[ \omega(p_{n-k}) = \sum_{r=1}^{n-k} \frac{[n-k]_q}{[r]_q} E_{n-k,r}, \] which only holds for $n-k>0$. For $n-k = 0$ we have $\omega(p_0) = 1$, so we set $\frac{[n-k]_q}{[r]_q} E_{n-k,r} = 1$ whenever $n-k=r=0$; with this convention the initial conditions are satisfied.
Using Theorem~\ref{thm:alg-recursion}, we have \begin{align*} \langle \Theta_{e_k} \nabla & \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n-d} h_d \rangle = \frac{[n-k]_q}{[r]_q} \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle\\ & =\left( 1 + q^r \frac{[n-k-r]_q}{[r]_q} \right)\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle\\ & =\langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle \\ & \quad + \sum_{j=0}^k q^r t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r}{u+v}_q \qbinom{v+j-1}{j-u}_q \\ & \quad \quad \times \qbinom{v+j+s-1}{s}_q \frac{[s]_q}{[r]_q} \langle \Theta_{e_{k-j}} \nabla \frac{[n-k-r]_q}{[s]_q} E_{n-k-r, s}, e_{(n-r-j)-(d-v)} h_{d-v} \rangle \\ & = \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle \\ & \quad + \sum_{j=0}^k q^r t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r-1}{u+v-1}_q \qbinom{v+j}{j-u}_q \\ & \quad \quad \times \qbinom{v+j+s-1}{s-1}_q \langle \Theta_{e_{k-j}} \nabla \frac{[n-k-r]_q}{[s]_q} E_{n-k-r, s}, e_{(n-r-j)-(d-v)} h_{d-v} \rangle \end{align*} as desired. \end{proof} Once again notice that this implies that $\langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n-d} h_d \rangle$ is also a polynomial in $\mathbb{N}[q,t]$.
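The first step of the computation above rests on an elementary splitting of $q$-analogues, which we record here for convenience: for $a, b \in \mathbb{N}$ one has $[a+b]_q = [a]_q + q^a [b]_q$, and taking $a = r$, $b = n-k-r$ gives the factorisation used in the proof:

```latex
[a+b]_q = \frac{1-q^{a+b}}{1-q}
        = \frac{(1-q^{a}) + q^{a}(1-q^{b})}{1-q}
        = [a]_q + q^{a}\,[b]_q,
\]
\[
\frac{[n-k]_q}{[r]_q}
= \frac{[r]_q + q^{r}\,[n-k-r]_q}{[r]_q}
= 1 + q^{r}\,\frac{[n-k-r]_q}{[r]_q}.
```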
\subsection{Combinatorial recursions} Let \[ \mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d} \coloneqq \sum_{\pi \in \mathsf{D}(n \backslash r)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \] and \[ \mathsf{SQ}'_{q,t}(n \backslash r)^{\bullet k, \circ d} \coloneqq \sum_{\pi \in \mathsf{SQ}'(n \backslash r)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \] be the $q,t$-enumerators of our sets of lattice paths. We have the following results. \begin{theorem} \label{thm:combinatorial-recursion} The polynomials $\mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d}$ satisfy the recursion \begin{align*} \mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d} = \sum_{j=0}^k t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} & q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r}{u+v}_q \qbinom{v+j-1}{j-u}_q \\ & \quad \times \qbinom{v+j+s-1}{s}_q \mathsf{D}_{q,t}(n-r-j \backslash s)^{\bullet k-j, \circ d-(r-v)} \end{align*} with initial conditions $\mathsf{D}_{q,t}(0 \backslash r)^{\bullet k, \circ d} = \delta_{r,0} \delta_{k,0} \delta_{d,0}$. \end{theorem} \begin{proof} The initial conditions are trivial, as the only Dyck path of size $0$ has $0$ steps on the main diagonal, $0$ decorated valleys, and $0$ decorated peaks. We give an overview of the combinatorial interpretations of all the variables appearing in this formula. We say that a vertical step of a path is \emph{at height $i$} if there are $i$ whole cells in its row between it and the main diagonal. \begin{itemize} \item $r$ is the number of vertical steps at height $0$ that are not decorated valleys. \item $j$ is the number of vertical steps at height $0$ that are decorated valleys. \item $r-v$ is the number of decorated peaks at height $0$. \item $u$ is the number of decorated peaks at height $0$ that are also decorated valleys. \item $u+v$ is the number of vertical steps at height $0$ without any kind of decoration.
\item $s$ is the number of vertical steps at height $1$ that are not decorated valleys. \end{itemize} The recursive step consists of removing all the steps that touch the main diagonal. There are $r+j$ vertical steps that touch the main diagonal, of which $j$ are decorated valleys and $r-v$ are decorated peaks (which are not necessarily disjoint), so after the recursive step we end up with a path in $\mathsf{D}(n-r-j \backslash s)^{\bullet k-j, \circ d-(r-v)}$. Let us look at what happens to the statistics of the path. The area goes down by the size (i.e.\ $n$) minus the number of vertical steps at height $0$ (i.e.\ $r+j$). This explains the term $t^{n-r-j}$. The factor $q^{\binom{u+v}{2}}$ takes into account the primary dinv among all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks. The factor $q^{\binom{u}{2}} \qbinom{u+v}{v}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are both decorated valleys and decorated peaks ($u$ of them), and the expression is explained by the fact that the latter cannot be consecutive (a decorated peak on the main diagonal cannot be followed by a decorated valley). The factor $\qbinom{r}{u+v}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are decorated peaks but not decorated valleys ($r-u-v$ of them), which can be interlaced in any possible way. 
The factor $\qbinom{v+j-1}{j-u}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are decorated valleys but not decorated peaks ($j-u$ of them), considering that the first of those must be non-decorated (we cannot start the path with a decorated valley). Finally, the factor $\qbinom{v+j+s-1}{s}_q$ takes into account the secondary dinv between all the vertical steps at height $1$ that are not decorated valleys ($s$ of them), and all the vertical steps at height $0$ that are not decorated peaks ($v+j$ of them), considering that the first of those must belong to the latter set (we need a vertical step that is not a decorated peak on the main diagonal to go up to the diagonal $y=x+1$). Summing over all the possible values of $j,s,v$, and $u$, we obtain the stated recursion. \end{proof} \begin{theorem} \label{thm:combinatorial-recursion-square} The polynomials $\mathsf{SQ}'_{q,t}(n \backslash r)^{\bullet k, \circ d}$ satisfy the recursion \begin{align*} \mathsf{SQ}'_{q,t}(n \backslash r)^{\bullet k, \circ d} & = \mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d} \\ & \quad + \sum_{j=0}^k q^r t^{n-r-j} \sum_{s=0}^{n-r} \sum_{v=0}^d \sum_{u=0}^{r-v} q^{\binom{u}{2}} \qbinom{u+v}{u}_q q^{\binom{u+v}{2}} \qbinom{r-1}{u+v-1}_q \qbinom{v+j}{j-u}_q \\ & \quad \quad \times \qbinom{v+j+s-1}{s-1}_q \mathsf{SQ}'_{q,t}(n-r-j \backslash s)^{\bullet k-j, \circ d-(r-v)} \end{align*} with initial conditions $\mathsf{SQ}'_{q,t}(0 \backslash r)^{\bullet k, \circ d} = \delta_{r,0} \delta_{k,0} \delta_{d,0}$.
\end{theorem} \begin{proof} The initial conditions are trivial, as the only square path of size $0$ has $0$ steps on the main diagonal, $0$ decorated valleys, and $0$ decorated peaks. We give an overview of the combinatorial interpretations of all the variables appearing in this formula. We say that a vertical step of a path is \emph{at height $i$} if there are $i$ whole cells in its row between it and the base diagonal. \begin{itemize} \item $r$ is the number of vertical steps at height $0$ that are not decorated valleys. \item $j$ is the number of vertical steps at height $0$ that are decorated valleys. \item $r-v$ is the number of decorated peaks at height $0$. \item $u$ is the number of decorated peaks at height $0$ that are also decorated valleys. \item $u+v$ is the number of vertical steps at height $0$ without any kind of decoration. \item $s$ is the number of vertical steps at height $1$ that are not decorated valleys. \end{itemize} The recursive step consists of removing all the steps that touch the base diagonal. We should distinguish whether we start with a Dyck path or a square path: if we start with a Dyck path, then the recursive step is the same as in Theorem~\ref{thm:combinatorial-recursion}, which corresponds to the first summand; if we start with a square path that is not a Dyck path, we get the second summand. Since the case of a Dyck path is already dealt with, we only describe the recursion for square paths that are not Dyck paths. There are $r+j$ vertical steps that touch the base diagonal, of which $j$ are decorated valleys and $r-v$ are decorated peaks (which are not necessarily disjoint), so after the recursive step we end up with a path in $\mathsf{SQ}'(n-r-j \backslash s)^{\bullet k-j, \circ d-(r-v)}$. Let us look at what happens to the statistics of the path. The area goes down by the size (i.e.\ $n$) minus the number of vertical steps at height $0$ (i.e.\ $r+j$). This explains the term $t^{n-r-j}$.
The factor $q^{\binom{u+v}{2}}$ takes into account the primary dinv among all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks. The factor $q^{\binom{u}{2}} \qbinom{u+v}{v}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are both decorated valleys and decorated peaks ($u$ of them), and the expression is explained by the fact that the latter cannot be consecutive (a decorated peak on the main diagonal cannot be followed by a decorated valley). The factor $\qbinom{r-1}{u+v-1}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are decorated peaks but not decorated valleys ($r-u-v$ of them), and the last of these must be a step without decorations: a decorated peak cannot be followed by a decorated valley, and the last vertical step at height $0$ cannot be a decorated peak, since the shift of the path is positive and it has to finish above the main diagonal. This explains the $r-1$ and the $u+v-1$. The factor $\qbinom{v+j}{j-u}_q$ takes into account the primary dinv between all the vertical steps at height $0$ that are neither decorated valleys nor decorated peaks ($u+v$ of them), and all the vertical steps at height $0$ that are decorated valleys but not decorated peaks ($j-u$ of them); unlike the previous case, now the interlacing can be arbitrary.
Finally, the factor $\qbinom{v+j+s-1}{s-1}_q$ takes into account the secondary dinv between all the vertical steps at height $1$ that are not decorated valleys ($s$ of them), and all the vertical steps at height $0$ that are not decorated peaks ($v+j$ of them), and since the shift of the path is positive and it has to finish above the main diagonal, the last vertical step at height $1$ must occur after the last vertical step at height $0$. Summing over all the possible values of $j,s,v$, and $u$, we obtain the stated recursion. \end{proof} \subsection{The main theorems} We are now ready to prove Theorem~\ref{thm:valley-schroeder} and Theorem~\ref{thm:valley-square-schroeder}. \begin{theorem} \begin{equation} \label{eq:SchroValleyEnk} \langle \Theta_{e_k} \nabla E_{n-k, r}, e_{n-d} h_d \rangle = \sum_{\pi \in \mathsf{D}(n \backslash r)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}. \end{equation} \end{theorem} \begin{proof} It follows immediately by combining Theorem~\ref{thm:alg-recursion} and Theorem~\ref{thm:combinatorial-recursion}. \end{proof} \begin{theorem} \begin{equation} \label{eq:SchroSquareValleyEnk} \langle \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n-d} h_d \rangle = \sum_{\pi \in \mathsf{SQ}'(n \backslash r)^{\bullet k, \circ d}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}. \end{equation} \end{theorem} \begin{proof} Use \eqref{eq:SchroValleyEnk} together with Theorem~\ref{thm:alg-recursion-square} and Theorem~\ref{thm:combinatorial-recursion-square}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:valley-schroeder} and Theorem~\ref{thm:valley-square-schroeder}] Take the sum over $r$ of both \eqref{eq:SchroValleyEnk} and \eqref{eq:SchroSquareValleyEnk}. \end{proof} \section{The Catalan case of the extended valley Delta conjecture} In \cite{Iraci-VandenWyngaerd-pushing-2021}, the authors show that the valley version of the Delta conjecture implies the extended version of the same conjecture.
The argument requires the conjecture to hold in full generality in order to work, but we can still recycle it to derive the Catalan case of the extended conjecture from the Schr\"oder case of the original one. We need the following result, suggested by a combinatorial argument by the second author and Vanden Wyngaerd, and then proved by the first author and Romero. \begin{theorem}[{\cite{DAdderio-Romero-Theta-Identities-2020}*{Corollary~9.2}}] Given $d,n,k,r \in \mathbb{N}$, we have \begin{align*} h_d^\perp & \Theta_{e_k} \nabla E_{n-k,r} = \sum_{p=0}^d t^{d-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \Delta_{h_{d-p}} \Theta_{e_{k-i}} \nabla E_{n-d-(k-i), r-p+i}. \end{align*} \end{theorem} Taking the scalar product with $e_{n-d}$, we get the following. \begin{proposition} \label{prop:sf-identity} Given $d,n,k,r \in \mathbb{N}$, we have \begin{align} \label{eq:RecSF} \langle \Theta_{e_k} \nabla E_{n-k,r}, e_{n-d} h_d \rangle & = \sum_{p=0}^d t^{d-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \\ \notag & \quad \times \langle \Delta_{h_{d-p}} \Theta_{e_{k-i}} \nabla E_{n-d-(k-i), r-p+i}, e_{n-d} \rangle. \end{align} \end{proposition} We want to prove that the $q,t$-enumerators of the corresponding sets satisfy the same relations. \begin{theorem} \label{thm:comb-identity} Given $d,n,k,r \in \mathbb{N}$, we have \begin{align} \label{eq:RecComb}\mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d} = \sum_{p=0}^d t^{d-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \mathsf{D}_{q,t}(d-p,n-d \backslash r-p+i)^{\bullet k-i}. \end{align} \end{theorem} \begin{proof} Using the same idea as in \cite{Iraci-VandenWyngaerd-pushing-2021}*{Theorem~5.1}, we delete the decorated peaks on the main diagonal, and apply the \emph{pushing algorithm} to the remaining decorated peaks, that is, we swap the horizontal and the vertical step they are composed of, so that they become valleys.
If the peak is also a decorated valley, it becomes a decorated valley. If $p$ is the number of decorated peaks on the main diagonal, and $i$ the number of those that are also decorated valleys, then the loss of area caused by the pushing contributes a factor $t^{d-p}$, removing the decorated peaks from the main diagonal contributes a factor \[ q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q, \] and we are left with a path in $\mathsf{D}(d-p,n-d \backslash r-p+i)^{\bullet k-i}$: see \cite{Iraci-VandenWyngaerd-pushing-2021}*{Theorem~5.1} for more details. The claim follows. \end{proof} Combining the two statements, we get the following. \begin{proposition} Given $d,n,k,r \in \mathbb{N}$, we have \begin{equation} \label{eq:SchroExtValleyEnk} \langle \Delta_{h_d} \Theta_{e_k} \nabla E_{n-k,r}, e_{n} \rangle = \sum_{\pi \in \mathsf{D}(d, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}. \end{equation} \end{proposition} \begin{proof} We can rewrite \eqref{eq:RecSF} as \begin{align*} t^d \langle \Delta_{h_{d}} & \Theta_{e_{k}} \nabla E_{n-d-k, r}, e_{n-d} \rangle = \langle \Theta_{e_k} \nabla E_{n-k,r}, e_{n-d} h_d \rangle\\ & - \sum_{p=1}^d t^{d-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \langle \Delta_{h_{d-p}} \Theta_{e_{k-i}} \nabla E_{n-d-(k-i), r-p+i}, e_{n-d} \rangle \end{align*} and \eqref{eq:RecComb} as \begin{align*} t^d \mathsf{D}_{q,t}(d,n-d \backslash r)^{\bullet k} & = \mathsf{D}_{q,t}(n \backslash r)^{\bullet k, \circ d} \\ & \quad - \sum_{p=1}^d t^{d-p} \sum_{i=0}^p q^{\binom{i}{2}} \qbinom{r-p+i}{i}_q \qbinom{r}{p-i}_q \mathsf{D}_{q,t}(d-p,n-d \backslash r-p+i)^{\bullet k-i}. \end{align*} Using \eqref{eq:SchroValleyEnk} and induction on $d$ (the base case $d=0$ is simply \eqref{eq:SchroValleyEnk}) we see that the right-hand sides are equal, hence so are the left-hand sides. This completes the proof.
\end{proof} Taking the sum over $r$ in \eqref{eq:SchroExtValleyEnk}, we get the desired result. \begin{theorem} Given $d,n,k \in \mathbb{N}$, we have \[ \langle \Delta_{h_d} \Theta_{e_k} \nabla e_{n-k}, e_{n} \rangle = \sum_{\pi \in \mathsf{D}(d, n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}. \] \end{theorem} \section{The Catalan case of the extended valley square conjecture} Next, we want to prove the analogous statement for the square version. In this case, we need to assume that the combinatorial side of (a refinement of) Conjecture~\ref{conj:valleyDelta} is a symmetric function (cf.\ Remark~\ref{rmk:SFcombside}). Indeed, the argument in \cite{Iraci-VandenWyngaerd-Valley-Square-2021} can be recycled to show that, if the Catalan case of the valley version of the extended Delta conjecture holds, then the same case of the square version of the same conjecture also holds. We need the following result. \begin{proposition}[{\cite{Iraci-VandenWyngaerd-Valley-Square-2021}*{Corollary~3}}] \label{cor:square-to-dyck} Let \begin{equation} \label{eq:LD_comb} \mathsf{LD}_{q,t;x}(d, n \backslash r)^{\bullet k} = \sum_{\pi \in \mathsf{LD}(d, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi \end{equation} and \begin{equation} \label{eq:LSQ_comb} \mathsf{LSQ}'_{q,t;x}(d, n \backslash r)^{\bullet k} = \sum_{\pi \in \mathsf{LSQ}'(d, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \end{equation} Then \[ \mathsf{LSQ}'_{q,t;x}(d, n \backslash r)^{\bullet k} = \frac{[n-k]_q}{[r]_q} \mathsf{LD}_{q,t;x}(d, n \backslash r)^{\bullet k}.
\] \end{proposition} Now, assuming that \eqref{eq:LD_comb} (and hence also \eqref{eq:LSQ_comb}) is a symmetric function, again by the theory of shuffles, taking the scalar product with $e_{n}$ isolates the subset of paths whose reading word is $1, 2, \dots, n$; these are in statistic-preserving bijection with unlabelled paths by simply removing the positive labels (there is a unique way to put them back once the reading word is fixed). The following theorem is now immediate. \begin{theorem} Given $d,n,k,r \in \mathbb{N}$, if \eqref{eq:LD_comb} is a symmetric function, then \[ \langle \Delta_{h_d} \Theta_{e_k} \nabla \omega(p_{n-k}), e_{n} \rangle = \sum_{\pi \in \mathsf{SQ}'(d, n)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}. \] \end{theorem} \begin{proof} Using \eqref{eq:SchroExtValleyEnk} and Proposition~\ref{cor:square-to-dyck}, we have \begin{align*} \langle \Delta_{h_d} \Theta_{e_k} \nabla \frac{[n-k]_q}{[r]_q} E_{n-k, r}, e_{n} \rangle & = \frac{[n-k]_q}{[r]_q} \sum_{\pi \in \mathsf{D}(d, n\backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} \\ & = \left\langle \frac{[n-k]_q}{[r]_q} \mathsf{LD}_{q,t;x}(d, n \backslash r)^{\bullet k}, e_n \right\rangle \\ & = \langle \mathsf{LSQ}'_{q,t;x}(d, n \backslash r)^{\bullet k}, e_n \rangle \\ & = \sum_{\pi \in \mathsf{SQ}'(d, n \backslash r)^{\bullet k}} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}, \end{align*} so by taking the sum over $r$ the claim follows. \end{proof} \section{Future directions} In \cite{DAdderio-Iraci-VandenWyngaerd-GenDeltaSchroeder-2019}, the authors gave an algebraic recursion for the (conjectural) $q,t$-enumerators of the Schr\"oder case of the extended valley Delta conjecture. Hence it would be enough to find a combinatorial recursion that matches the algebraic one to prove that case as well; however, we were not able to do so.
In \cite{DAdderio-Iraci-newdinv-2019}, a combinatorial recursion for the ``ehh'' case of the shuffle theorem is given, and it would be interesting to extend that recursion to the valley-decorated version of the combinatorial objects, which in turn would lead to a proof of the Schr\"oder case of the extended valley Delta conjecture via the pushing algorithm in \cite{Iraci-VandenWyngaerd-Valley-Square-2021}; once again, our attempts were unfruitful. Finally, as the classical recursion for the $q,t$-Catalan \cite{Garsia-Haglund-qtCatalan-2002} is an iterated version of the compositional one \cite{Haglund-Morse-Zabrocki-2012}, and both these recursions extend to the rise version of the Delta conjecture \cites{Zabrocki-4Catalan-2016, DAdderio-Mellit-Compositional-Delta-2020}, it might be the case that the same phenomenon occurs here, and that there is a compositional refinement of the recursion for $d=0$ that might lead to a full proof of the conjecture; at the moment we do not know what such a refinement should look like. \end{document}
\begin{document} \title{Crossed modules \\ and the homotopy 2-type of a free loop space} \author{Ronald Brown\thanks{School of Computer Science, University of Bangor, LL57 1UT, Wales}} \maketitle \begin{center} Bangor University Maths Preprint 10.01 \end{center} \begin{abstract} The question was asked by Niranjan Ramachandran: how to describe the fundamental groupoid of $LX$, the free loop space of a space $X$? We show how this depends on the homotopy 2-type of $X$ by assuming $X$ to be the classifying space of a crossed module over a group, and then describe completely a crossed module over a groupoid determining the homotopy 2-type of $LX$; that is, we describe crossed modules representing the 2-type of each component of $LX$. The method requires detailed information on the monoidal closed structure on the category of crossed complexes.\footnote{MSClass: 18D15, 55Q05, 55Q52; Keywords: free loop space, crossed module, crossed complex, closed category, classifying space, higher homotopies.} \end{abstract} \section{Introduction} It is well known that for a connected $CW$-complex $X$ with fundamental group $G$ the set of components of the free loop space $LX$ of $X$ is in bijection with the set of conjugacy classes of the group $G$, and that the fundamental groups of $LX$ fit into a family of exact sequences derived from the fibration $LX \to X$ obtained by evaluation at the base point. Our aim is to describe the homotopy 2-type of $LX$, the free loop space on $X$, when $X$ is a connected $CW$-complex, in terms of the 2-type of $X$. Weak homotopy 2-types are described by crossed modules (over groupoids), defined in \cite{BH81:algcub} as follows.
A {\it crossed module} $\mathcal M$ is a morphism $\delta: M \to P$ of groupoids which is the identity on objects such that $M$ is just a disjoint union of groups $M(x), x \in P_0$, together with an action of $P$ on $M$ written $(m,p) \mapsto m^p$, $m \in M(x), p:x \to y$ with $m^p \in M(y)$ satisfying the usual rules for an action. We find it convenient to use (non-commutative) additive notation for composition, so if $p:x \to y, q:y \to z$ then $p+q: x \to z$, and $(m+n)^p= m^p+n^p, (m^p)^q=m^{p+q}, m^0=m$. Further we have the two crossed module rules for all $p \in P, m,n \in M$: \begin{enumerate}[CM1)] \item $\delta(m^p)= -p+ \delta m +p$; \item $-n+m+n=m^{\delta n}$; \end{enumerate} whenever defined. This is a {\it crossed module of groups} if $P_0$ is a singleton. A crossed module $\mathcal M$ as above has a simplicial nerve $K=N^\Delta \mathcal M$ which in low dimensions is described as follows: \begin{itemize} \item $K_0=P_0$; \item $K_1=P$; \item $K_2$ consists of quadruples $\sigma=(m;c,a,b)$ where $m \in M, a,b,c \in P$ and $\delta m= -c+a+b$ is well defined; \item $K_3$ consists of quadruples $(\sigma_0,\sigma_1,\sigma_2, \sigma_3)$ where $\sigma_i \in K_2$ and the $\sigma_i$ make up the faces of a 3-simplex, as shown in the following diagrams: \end{itemize} $$\def\labelstyle{\textstyle} \vcenter{\xymatrix@R=3pc@C=3.5pc {& 3& \\ & 2 \ar[u] |(0.4)f \ar@{}[r]|(0.35){m_0}\ar@{}[l]|(0.35){m_1}& \\ 0\ar [rr] |a \ar [ur]|{c} \ar@/^0.6pc/ [uur]^{d} &\ar@{}[u]|(0.4){m_3}& 1 \ar @/_0.6pc/[uul]_{e} \ar [ul] |b }}\hspace{6em} \vcenter{\xymatrix@C=1.8pc@R=2.3pc{&\ar @{}[d]|(0.55){m_2}3& \\ 0 \ar@/^0.5pc/ [ur] ^{d} \ar [rr] |a & & 1 \ar @/_0.5pc/ [lu] _e\\ \ar@{}[r]^{\delta m_2=-d+a+e}&}}$$ providing we have the rules \begin{alignat*}{2} \delta m_0&= -e+b+f,\quad & \delta m_1&= -d+c+f,\\ \delta m_2&= -d+a+e, & \delta m_3&= -c+a+b, \end{alignat*} together with the rule \begin{equation*} (m_3)^f-m_0 -m_2+m_1=0.
\end{equation*} You may like to verify that these rules are consistent. A crossed module is the dimension 2 case of a {\it crossed complex}, the definition of which in the single vertex case goes back to Blakers in \cite{BL48}, there called a `group system', and in the many vertex case is in \cite{BH81:algcub}. The definition of the nerve of a crossed complex $C$ in the one vertex case is also in \cite{BL48}, and in the general case is in \cite{As78,BH91}. An alternative description of $K$ is that $K_n$ consists of the crossed complex morphisms $\Pi \Delta^n_* \to \mathcal M$ where $\Pi \Delta^n_* $ is the fundamental crossed complex of the $n$-simplex, with its skeletal filtration, and $\mathcal M$ is also considered as a crossed complex trivial in dimensions $>2$. This shows the analogy with the Dold-Kan theorem for chain complexes and simplicial abelian groups, \cite{Do58}. We thus define the {\it classifying space $B \mathcal M$ of $\mathcal M$} to be the geometric realisation $|N^\Delta \mathcal M|$, a special case of the definition in \cite{BH91}. It follows that an $a \in P(x)$ for some $x \in P_0$ determines a $1$-simplex in $X=B \mathcal M$ which is a loop and so a map $a': S^1 \to B \mathcal M$, i.e.\ $a' \in LX$. The chief properties of $X=B\mathcal M$ are that $\pi_0(X) \cong \pi_0(P)$ and for each $x \in P_0$ $$\pi_i(X,x) \cong \begin{cases} \operatorname{Cok}(\delta:M(x) \to P(x)) & \text{ if } i=1, \\ \operatorname{Ker}(\delta :M(x) \to P(x)) & \text{ if } i=2, \\ 0 & \text{ if } i >2. \end{cases}$$ Further if $Y$ is a $CW$-complex, then there is a crossed module $\mathcal M$ and a map $ Y \to B \mathcal M$ inducing isomorphisms of $\pi_0, \pi_1, \pi_2$. For an exposition of some basic facts on crossed modules and crossed complexes in relation to homotopy theory, see for example \cite{Brown-grenoble}.
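The consistency check suggested above can be carried out mechanically. The following sketch (our illustration, not part of the paper) applies $\delta$ to $(m_3)^f-m_0-m_2+m_1$, transliterating the additive notation multiplicatively ($x+y \mapsto xy$, $-x \mapsto x^{-1}$, and $\delta(m^f)=-f+\delta m+f$), and confirms that the resulting word reduces to the identity in the free group on $a,\dots,f$, as it must for the closing rule to be compatible with the boundaries.

```python
# Check that delta((m_3)^f - m_0 - m_2 + m_1) freely reduces to the identity,
# given delta m_0 = -e+b+f, delta m_1 = -d+c+f, delta m_2 = -d+a+e,
# delta m_3 = -c+a+b.
from sympy.combinatorics.free_groups import free_group

F, a, b, c, d, e, f = free_group("a, b, c, d, e, f")

w = f**-1 * (c**-1 * a * b) * f   # delta((m_3)^f) = -f + delta(m_3) + f
w = w * (f**-1 * b**-1 * e)       # -delta(m_0) = -f - b + e
w = w * (e**-1 * a**-1 * d)       # -delta(m_2) = -e - a + d
w = w * (d**-1 * c * f)           # delta(m_1)  = -d + c + f

assert w == F.identity            # the word cancels completely
```

So the combination $(m_3)^f-m_0-m_2+m_1$ lies in $\operatorname{Ker}\delta$, which is what the rule requires.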
There are other versions of the classifying space, for example the cubical version given in \cite{BHS}, and one for crossed modules of groups using the equivalence of these with groupoid objects in groups, see for example \cite{Lod82,baez-stev-class-2-groups}. However the latter have not been shown to lead to the homotopy classification Theorem \ref{thm:homclass} below. Our main result is: \begin{thm}\label{thm:combined} Let $\mathcal M$ be the crossed module of groups $\delta: M \to P$ and let $X=B\mathcal M$ be the classifying space of $\mathcal M$. Then the components of $LX$, the free loop space on $X$, are determined by equivalence classes of elements $a \in P$ where $a,b$ are equivalent if and only if there are elements $m \in M, p \in P $ such that $$b= p + a + \delta m -p. $$ Further the homotopy $2$-type of a component of $LX$ given by $a \in P$ is determined by the crossed module of groups $L\mathcal M [a]=(\delta_a: M \to P(a))$ where \begin{enumerate}[\rm (i)] \item $P(a)$ is the group of elements $(m,p)\in M \times P$ such that $\delta m= [a,p]$, where $[a,p]=-a-p+a+p$, with composition $(n,q)+(m,p)= (m+n^p,q+p)$; \item $\delta_a(m)= ( -m^a + m,\delta m)$, for $m \in M$; \item the action of $P(a)$ on $M$ is given by $n^{(m,p)}= n^p$ for $n \in M, (m,p) \in P(a)$. \end{enumerate} In particular $\pi_1(LX,a)$ is isomorphic to $\operatorname{Cok} \delta_a$, and $\pi_2(LX,a) \cong \pi_2(X,*)^{\bar{a}}$, the elements of $\pi_2(X,*)$ fixed under the action of $\bar{a}$, the class of $a$ in $G=\pi_1(X,*)$. \end{thm} We give a detailed proof that $L \mathcal M[a]$ is a crossed module in Appendix \ref{app:proof}.
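As a finite sanity check of (i)--(iii) (our addition, not part of the paper), one can take the inclusion crossed module $\delta: A_3 \to S_3$, with action by conjugation, and verify by brute force that $P(a)$ is a group and $\delta_a$ a morphism for every $a \in S_3$. The paper's additive $m+n$ is transliterated as `m * n` using sympy's left-to-right composition of permutations, so $m^p=-p+m+p$ becomes `p**-1 * m * p`.

```python
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

S3 = list(SymmetricGroup(3).generate())     # the group P
A3 = list(AlternatingGroup(3).generate())   # the normal subgroup M; delta = inclusion
e = Permutation(2)                          # identity on {0, 1, 2}

conj = lambda m, p: p**-1 * m * p           # the action m^p = -p + m + p
comm = lambda a, p: a**-1 * p**-1 * a * p   # [a, p] = -a - p + a + p
op = lambda nq, mp: (mp[0] * conj(nq[0], mp[1]), nq[1] * mp[1])  # (n,q) + (m,p)
inv = lambda mp: (conj(mp[0], mp[1]**-1)**-1, mp[1]**-1)         # -(m,p)

checked = 0
for a in S3:
    Pa = [(m, p) for m in A3 for p in S3 if m == comm(a, p)]
    da = lambda m: (conj(m, a)**-1 * m, m)  # delta_a(m) = (-m^a + m, delta m)
    assert (e, e) in Pa                     # identity of P(a)
    for x in Pa:
        assert op(inv(x), x) == (e, e)      # inverses
        for y in Pa:
            assert op(y, x) in Pa           # closure
        for n in A3:                        # CM1: -(m,p) + delta_a(n) + (m,p) = delta_a(n^p)
            assert op(op(inv(x), da(n)), x) == da(conj(n, x[1]))
    for m in A3:
        assert da(m) in Pa                  # delta_a lands in P(a)
        for n in A3:
            assert op(da(n), da(m)) == da(n * m)  # delta_a is a morphism
    checked += 1
```

All checks pass for each of the six choices of $a$; of course this does not replace the proof in the appendix, but it exercises every formula in the theorem on a concrete crossed module.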
\begin{rem} The composition in (i) can be seen geometrically in the following diagram: \begin{equation} \def\labelstyle{\textstyle} \vcenter{\xymatrix@M=0pt@=3pc{\ar [d] _q \ar [r]^a \ar@{} [dr]|n & \ar [d]^q \\ \ar [r] |a \ar [d]_p \ar@{}[dr] |m & \ar [d]^p \\ \ar [r]_a& }}\quad = \quad \vcenter{\xymatrix@M=0pt@C=4pc@R=3pc{\ar[d]_{q+p}\ar[r] ^a \ar@{}[dr]|{m+n^p}& \ar [d]^{q+p} \\ \ar [r]_a& }} \qquad \xdirects{2}{1}\tag*{$\Box$}\end{equation} \end{rem} The following examples are due to C.D. Wensley. \begin{example} $\delta=0: M \to P$, so that $M$ is a $P$-module. Then $P(a)$ is the set of $(m,p)$ such that $[a,p]=0$, i.e.\ $p \in C_a(P)$, and so is $M \rtimes C_a(P)$. ($P=G$ the fundamental group, as $\delta=0$.) But $\delta_a(m)=(-m^a+m,0)$. So $\pi_1(L\mathcal M,a) =(M/[a,M]) \rtimes C_a(P)$. $\Box$ \end{example} \begin{example} If $a \in Z(P)$, the centre of $P$, then $[a,p]=0$ for all $p$. (For example, $P$ might be abelian.) Hence $P(a)= \pi \rtimes P$. Then $\pi_1(L\mathcal M,a) = (\pi \rtimes P)/ \{(-m^a+m,\delta m)\mid m \in M\}.$ It is not clear to me that even in this case the exact sequence splits. $\Box$ \end{example} It is also possible to give a less explicit description of $\pi_1(LX,a)$ as part of an exact sequence: \begin{thm}\label{thm:exact} Under the circumstances of Theorem \ref{thm:combined}, if we set $\pi = \operatorname{Ker} \delta = \pi_2(X), G= \operatorname{Cok} \delta= \pi_1(X)$, with the standard module action of $G$ on $\pi$, then the fundamental group $\pi_1(LX,a)$ in the component given by $a \in P$ is part of an exact sequence: \begin{equation} 0 \to \pi^{\bar{a}} \to \pi \to \pi/\{\bar{a}\} \to \pi_1(LX, a) \to C_{\bar{a}}(G) \to 1 \end{equation} where: $\pi/\{\alpha\}$ denotes $\pi$ with the action of $\alpha$ killed; and $C_\alpha(G)$ denotes the centraliser of the element $\alpha \in G$.
\end{thm} The proof of Theorem \ref{thm:combined}, which will be given in Section 2, is essentially an exercise in the use of the following classification theorem \cite[Theorem A]{BH91}: \begin{thm}\label{thm:homclass} Let $Y$ be a $CW$-complex with its skeletal filtration $Y_*$ and let $C$ be a crossed complex, with its classifying space written $BC$. Then there is a natural weak homotopy equivalence $$ B(\mathsf{CRS}(\Pi Y_*,C)) \to (BC)^Y. $$ \end{thm} In the statement of this theorem we use the internal hom $\mathsf{CRS}(-,-)$ in the category $\mathsf{Crs}$ of crossed complexes: this internal hom is described explicitly in \cite{BH87}, in order to set up the exponential law $$\mathsf{Crs}(A \otimes B,C) \cong \mathsf{Crs}(A,\mathsf{CRS}(B,C))$$ for crossed complexes $A,B,C$, i.e.\ to give a monoidal closed structure on the category $\mathsf{Crs}$. Note that $\mathsf{CRS}(B,C)_0= \mathsf{Crs}(B,C)$, $\mathsf{CRS}(B,C)_1$ gives the homotopies of morphisms, and $\mathsf{CRS}(B,C)_n$ for $n \geqslant 2$ gives the higher homotopies. \section{Proofs} We deduce Theorem \ref{thm:combined} from the following Theorem. \begin{thm}\label{thm:mainthm} Let $X=B\mathcal M$, where $\mathcal M$ is the crossed module of groups $\delta: M \to P$.
Then the homotopy $2$-type of $LX$, the free loop space of $X$, is described by the crossed module over groupoids $L \mathcal M$ where \begin{enumerate}[\rm (i)] \item $(L \mathcal M)_0 = P$; \item $(L \mathcal M)_1= M \times P \times P$ with source and target given by $$s(m,p,a)= p+a+\delta m -p, \quad t(m,p,a)= a$$ for $a,p\in P, m \in M$; \item the composition of such triples is given by $$ (n,q,b)+(m,p,a) =( m+n^p,q+p,a)$$ which of course is defined under the condition that \begin{align*} b&=p+a+\delta m -p \end{align*} or, equivalently, $ b^p= a +\delta m $; \item if $a \in P$ then $(L\mathcal M)_2(a)$ consists of pairs $(m,a)$ for all $m \in M$, with addition and boundary $$(m,a)+(n,a)=(m+n,a), \qquad \delta (m,a)=(-m^a +m,\delta m, a);$$ \item the action of $(L\mathcal M)_1$ on $(L\mathcal M)_2$ is given by: $(n,b)^{(m,p,a)}$ is defined if and only if $b^p=a+\delta m $ and then its value is $(n^p,a)$. \end{enumerate} \end{thm} \begin{proof} In Theorem \ref{thm:homclass} we set $Y=S^1$ with its standard cell structure $e^0 \cup e^1$, and can write $\Pi Y_* \cong \mathbb{K}(\mathbb{Z},1)$ where the latter is the crossed complex with a base point $z_0$ and a free generator $z$ in dimension $1$, and otherwise trivial. Thus morphisms of crossed complexes from $\mathbb{K}(\mathbb{Z},1)$, and homotopies and higher homotopies of such morphisms, are completely determined by their values on $z_0$ and on $z$. A crossed module over a group or groupoid is also regarded as a crossed complex trivial in dimensions $> 2$. All the formulae required to prove Theorem \ref{thm:mainthm} follow from those for the internal hom $\mathsf{CRS}$ on the category $\mathsf{Crs}$ given in \cite[Proposition 3.14]{BH87} or \cite[\S 7.1.vii, \S 9.3]{BHS}. We set $L\mathcal M= \mathsf{CRS}(\mathbb{K}(\mathbb{Z},1),\mathcal M)$.
Since $\mathbb{K}(\mathbb{Z},1)$ is a free crossed complex with one generator $z$ in dimension 1, the elements $a \in P$ are bijective with the morphisms $f:\mathbb{K}(\mathbb{Z},1) \to \mathcal M$, and we write this bijection as $a \mapsto \hat{a}$, where $a=\hat{a}(z)$. Also the homotopies and higher homotopies from $\mathbb{K}(\mathbb{Z},1) \to \mathcal M$ are determined by their values on $z$ and on the element $z_0$ of $\mathbb{K}(\mathbb{Z},1)$ in dimension 0. Thus a 1-homotopy $(h,\hat{a}):\hat{b}\simeq \hat{a}$ is such that $h$ lifts dimension by 1, and is given by elements $p=h(z_0)\in P, m=h(z) \in M$; so $(h,\hat{a})$ is given by a triple $(m,p,a)$. The condition that this triple gives a homotopy $\hat{b} \simeq \hat{a}$ translates to $$ b=p+a+\delta m-p$$ or, equivalently, $ a+\delta m= b^p$. It follows easily that $\hat{b},\hat{a}$ belong to the same component of $L\mathcal M$ if and only if $b,a$ give conjugate elements in the quotient group $\pi_1({\mathcal M})$. (The use of such general homotopies was initiated in \cite{W49:CHII}.) The composition of such homotopies $\hat{c} \simeq \hat{b}\simeq \hat{a}$ is given by: $$(n,q,b) + (m,p,a)= (m+n^p,q+p,a)$$ which of course is defined if and only if $$b^p=a+\delta m.$$ A 2-homotopy $(H,\hat{a})$ of $\hat{a}$ is such that $H$ lifts dimension by 2 and so is given by an element $H(z_0) \in M$. There are rules giving the composition, actions, and boundaries of such 1- and 2-homotopies. In particular the action of a 1-homotopy $(h,f^+):f^- \simeq f^+$ on a 2-homotopy $(H,f^-)$ gives a 2-homotopy $(H^h,f^+)$ where $H^h(c)=H(c)^{h(tc)}$. Here we take $c=z_0$ so that we obtain the action $(n,b)^{(m,p,a)}=n^p$. All these formulae follow from those given in \cite[Proposition 3.14]{BH87} or \cite[\S 9.3]{BHS}. A 2-homotopy $(H,\hat{a})$ is given by $a=\hat{a}(z)$ and $m=H(z_0)\in M$. We then have to work out $\delta_2(H)$.
We find that \begin{align*} \delta_2(H)(x)&= \begin{cases} \delta H(z_0) & \text{ if } x=z_0,\\ -H(sz)^{\hat{a}(z)} + H(tz) + \delta H(z) & \text{ if } x=z, \end{cases}\\ & = \begin{cases} \delta m & \text{ if } x=z_0,\\ -m^a +m & \text{ if } x=z. \end{cases} \end{align*} This completes the proof of Theorem \ref{thm:mainthm}. \end{proof} The proof of Theorem \ref{thm:combined} now follows by restricting the crossed module of groupoids given in Theorem \ref{thm:mainthm} to $L\mathcal M(a)$, the crossed module of groups over the object $a \in (L\mathcal M)_0=P$. Then we have an isomorphism $\theta: L\mathcal M(a) \to L\mathcal M[a]$ given by $\theta_0(a)=*, \; \theta_1(m,p,a)=(m,p), \; \theta_2(m,a)=m$. For the next result we need the notion of fibration of crossed modules of groupoids which is a special case of fibrations of crossed complexes as defined in \cite{How79} and applied in \cite{brown-homclass}. \begin{thm}\label{thm:fibration} In the situation of Theorem \ref{thm:mainthm}, there is a fibration $L\mathcal M \to \mathcal M$ of crossed modules of groupoids. Hence if $$\pi= \pi_2(X)\cong \operatorname{Ker} \delta,\quad G = \pi_1(X)\cong \operatorname{Cok} \delta$$ then for each $a \in P$ there is an exact sequence \begin{equation} 0 \to \pi^{\bar{a}} \to \pi \to \pi/\{\bar{a}\} \to \pi_1(LX, a') \to C_{\bar{a}}(G) \to 1 \end{equation} where: $\bar{a}$ denotes the image of $a'$ in $G$; $\pi/\{\alpha\}$ denotes $\pi$ with the action of $\alpha$ killed; and $C_\alpha(G)$ denotes the centraliser of the element $\alpha \in G$. \end{thm} \begin{proof} We define the fibration $\psi: L \mathcal M \to \mathcal M$ by the inclusion $i:\{z_0\} \to \mathbb{K}(\mathbb{Z},1)$ and the identification $\mathsf{CRS}(\{z_0\}, \mathcal M) \cong \mathcal M$, where here $\{z_0\}$ denotes also the trivial crossed complex on the point $z_0$. Then $\psi$ is a fibration since $i$ is a cofibration, see \cite{BG89}.
The exact description of $\psi$ in terms given earlier is that \begin{align*} \psi_0 (a)&= *, \quad a \in P,\\ \psi_1(m,p,a)&= p, \quad (m,p,a) \in M \times P\times P, \\ \psi_2(n,a) &= n, \quad (n,a) \in M \times P. \end{align*} To say that $\psi$ is a fibration of crossed modules over groupoids is to say that: (i) it is a morphism; (ii) $(\psi_1, \psi_0)$ is a fibration of groupoids, \cite{B70,anderson-fibrations}; and (iii) $\psi_2$ is piecewise surjective. Let $\mathcal F$ denote the fibre of $\psi$. Then $$\mathcal F_0=P, \quad \mathcal F_1=\{0\}\times M \times P,\quad \mathcal F_2= \{0\}\times P.$$ The exact sequence of the fibration for a given base point $a \in \mathcal F_0=P$ is \begin{multline*} 0 \to \pi_2(\mathcal F,a) \to \pi_2(L \mathcal M, a) \to \pi_2(\mathcal M,*) \labto{\, \partial \,} \pi_1(\mathcal F,a) \\ \to \pi_1(L \mathcal M, a) \to \pi_1(\mathcal M,*) \labto{\, \partial\, } \pi_0(\mathcal F) \to \pi_0 (L\mathcal M) \to *. \end{multline*} Under the obvious identifications, this leads to the exact sequence of Theorem \ref{thm:exact}. \end{proof} \begin{rem} These results and methods should be related to the description in \cite[\S 6]{B-87} of the homotopy type of the function space $(BG)^Y$ where $G$ is an abstract group and $Y$ is a $CW$-complex, and which gives a result of Gottlieb in \cite{Got}. $\Box$ \end{rem} \begin{rem} Here is a methodological point. The category $\mathsf{Crs}$ of crossed complexes is equivalent to that of $\infty$-groupoids, as in \cite{BH81:inf}, where these $\infty$-groupoids are now commonly called `strict globular $\omega$-groupoids'.
However the internal hom in the latter category is bound to be more complicated than that for crossed complexes, because the cell structure of the standard $n$-globe, $n >1,$ $$E^n = e^0_\pm \cup e^1_\pm \cup \cdots \cup e^{n-1}_\pm \cup e^n$$ is more complicated than the standard cell structure, for which $$E^n=e^0 \cup e^{n-1}\cup e^n, \quad n>1.$$ Also we obtain a precise answer using filtered spaces and strict structures, whereas the current fashion is to go for weak structures as yielding more homotopy $n$-types for $n>2$. In fact many results on crossed complexes are obtained using cubical methods. $\Box$ \end{rem} \section*{Appendix: Verification of crossed module rules} \label{app:proof} We now verify the crossed module rules for the structure $$L\mathcal M[a]= (M \labto{\delta _a} P(a))$$ defined in Theorem \ref{thm:combined} from a crossed module of groups $\mathcal M= (M \labto{\delta } P)$ and $a\in P$ as follows: \begin{align*}P(a)&=\{(m,p)\in M \times P\mid \delta m= [a,p]=-a-p+a+p\};\\ \delta _a m& = (-m^a+m, \delta m);\\ (n,q)+(m,p)&=(m+n^p,q+p);\\ n^{(m,\,p)}&=n^p. \end{align*} \begin{prop}If $\delta:M \to P$ is a crossed module of groups, and $a \in P$, then $L \mathcal M[a]$ as defined above is also a crossed module of groups. \end{prop} \noindent {\bf Proof} It is easy to check that $\delta (-m^a+m)= [a,\delta m]$, so that $\delta_a(m) \in P(a)$. We next show that $\delta _a$ is a morphism: \begin{align*} \delta_a(n) + \delta_a(m)&=(-n^a+n,\delta n)+ (-m^a+m,\delta m)\\ &= (-m^a +m +(-n^a+n)^{\delta m}, \delta n + \delta m) \\ &= (-m^a -n^a +n +m, \delta n + \delta m) \\ &= \delta _a (n+m).\\ \intertext{Now we verify the first crossed module rule.
Let $(m,p) \in P(a), n \in M$:} -(m,p) + \delta _a n + (m,p)&= (-m^{-p},-p) +(-n^a+n,\delta n)+ (m,p) \\ &= (-n^a +n +(-m^{-p})^{\delta n}, -p +\delta n)+(m,p) \\ &= (-n^a -m^{-p} + n,-p + \delta n)+ (m,p) \\ &= (m+(-n^a-m^{-p}+n)^p,-p +\delta n +p)\\ &= (m-n^{a+p}-m+n^p,\delta (n^p))\\ &= (-n^{a+p-\delta m} +n^p ,\delta (n^p))\\ &= (-n^{p+a} +n^p,\delta (n^p))\tag*{since $\delta m= [a,p]$}\\ &= \delta_a (n^p).\\ \intertext{Now we verify the second crossed module rule:} m^{\delta _a n}&= m^{(-n^a+n,\,\delta n)}\\ &= m^{\delta n}\\ &= -n+m+n. \tag*{$\Box$} \end{align*} In effect, this illustrates that verifying the crossed complex rules for the internal hom $\mathsf{CRS}(C,D)$ is possible but tedious, and that it is easier to say that it follows from the general construction in terms of $\omega$-groupoids and the equivalence of categories, as in \cite{BH87}. On the other hand, this direct proof `proves', in the old sense of `tests', the general theory. \end{document}
\begin{document} \title{Backward Error Analysis for Perturbation Methods\thanks{We would like to thank Pei Yu, Robert Moir and Julia Jankowski for their various contributions to this paper. We are also indebted to NSERC, Western University, as well as Galima Hassan for the key logistic support they provided.}} \begin{abstract} We demonstrate via several examples how the backward error viewpoint can be used in the analysis of solutions obtained by perturbation methods. We show that this viewpoint is quite general and offers several important advantages. Perhaps the most important is that backward error analysis can be used to demonstrate the validity of the solution, however obtained and by whichever method. This includes a nontrivial safeguard against slips, blunders, or bugs in the original computation. We also demonstrate its utility in deciding when to truncate an asymptotic series, improving on the well-known rule of thumb indicating truncation just prior to the smallest term. We also give an example of elimination of \textsl{spurious} secular terms even when genuine secularity is present in the equation. We give short expositions of several well-known perturbation methods together with computer implementations (as scripts that can be modified). We also give a generic backward error based method that is equivalent to iteration (but we believe useful as an organizational viewpoint) for regular perturbation. \end{abstract} \section{Introduction} As the title suggests, the main idea of this paper is to use backward error analysis (BEA) to assess and interpret solutions obtained by perturbation methods. The idea will seem natural, perhaps even obvious, to those who are familiar with the way in which backward error analysis has seen its scope increase dramatically since the pioneering work of Wilkinson in the 60s, e.g., \cite{Wilkinson(1963),Wilkinson(1965)}.
From early results in numerical linear algebraic problems and computer arithmetic, it has become a general method fruitfully applied to problems involving root finding, interpolation, numerical differentiation, quadrature, and the numerical solutions of ODEs, BVPs, DDEs, and PDEs; see, e.g., \cite{CorlessFillion(2013),Deuflhard(2003),Higham(1996)}. This is hardly a surprise when one considers that BEA offers several interesting advantages over a purely forward-error approach. BEA is often used in conjunction with perturbation methods. Not only is it the case that many algorithms' backward error analyses rely on perturbation methods, but the backward error is related to the forward error by a coefficient of sensitivity known as the condition number, which is itself a kind of sensitivity to perturbation. In this paper, we examine an apparently new idea, namely, that perturbation methods themselves can also be interpreted within the backward error analysis framework. Our examples will have a classical feel, but the analysis and interpretation is what differs, and we will make general remarks about the benefits of this mode of analysis and interpretation. However, due to the breadth of the literature in perturbation theory, we cannot determine with certainty the extent to which applying backward error analysis to perturbation methods is new. Still, none of the works we know, apart from \cite{Boyd(2014)}, \cite{Corless(1993)b}, and \cite{Corless(2014)}, even mention the possibility of using BEA to explain or measure the success of a perturbation computation. Among the books we have consulted, only \cite[p.~251 \& p.~289]{Boyd(2014)} mentions the residual by name, but does not use it systematically. At the very least, therefore, the idea of using BEA in relation to perturbation methods might benefit from a wider discussion.
\section{The basic method from the BEA point of view} \label{genframe} The basic idea of BEA is increasingly well-known in the context of numerical methods. The slogan \textsl{a good numerical method gives the exact solution to a nearby problem} very nearly sums up the whole perspective. Any number of more formal definitions and discussions exist---we like the one given in \cite[chap.~1]{CorlessFillion(2013)}, as one might suppose is natural, but one could hardly do better than go straight to the source and consult, e.g., \cite{Wilkinson(1963),Wilkinson(1965),Wilkinson(1971),Wilkinson(1984)}. More recently \cite{Grcar(2011)} has offered a good historical perspective. In what follows we give a brief formal presentation and then give detailed analyses by examples in subsequent sections. Problems can generally be represented as maps from an input space $\mathcal{I}$ to an output space $\mathcal{O}$. If we have a problem $\varphi:\mathcal{I}\to\mathcal{O}$ and wish to find $y=\varphi(x)$ for some putative input $x\in\mathcal{I}$, lack of tractability might instead lead you to engineer a simpler problem $\hat{\varphi}$ from which you would compute $\hat{y}=\hat{\varphi}(x)$. Then $\hat{y}-y$ is the \textsl{forward error} and, provided it is small enough for your application, you can treat $\hat{y}$ as an approximation in the sense that $\hat{y}\approx \varphi(x)$. In BEA, instead of focusing on the forward error, we try to find an $\hat{x}$ such that $\hat{y}=\varphi(\hat{x})$ by considering the \textsl{backward error} $\Delta x=\hat{x}-x$, i.e., we try to find for which set of data our approximation method $\hat{\varphi}$ has exactly solved our reference problem $\varphi$.
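As a minimal numerical illustration of this reverse-engineering step (ours, not one of the paper's examples), take $\phi(x,y)=y^2-x$ with data $x=2$ and the approximation $\hat{y}=1.414$ to $\sqrt{2}$:

```python
import math

x = 2.0
yhat = 1.414            # an approximation of sqrt(2)
r = yhat**2 - x         # residual phi(x, yhat), about -6.04e-4
xhat = x + r            # reverse-engineered data: xhat = yhat**2

# yhat is the *exact* solution of y^2 - xhat = 0, so the backward error
# in the data is Delta x = xhat - x = r, while the forward error is:
forward_error = yhat - math.sqrt(x)
```

If the data $x$ is only trusted to three decimal places, then $\hat{y}$ has exactly solved a problem indistinguishable from the one posed, which is the sense in which the residual certifies the approximation.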
The general picture can be represented by the following commutative diagram: \begin{center} \begin{tikzpicture} \draw (0,0) node (x) {$x$}; \draw (2,0) node (y) {$y$}; \draw (0,-2) node (xhat) {$\hat{x}$}; \draw (2,-2) node (yhat) {$\hat{y}$}; \draw (x) edge[->] node[above] {$\varphi$} (y); \draw (xhat) edge[->,dashed] node[below] {$\varphi$} (yhat); \draw (x) edge[->,dashed] node[left] {$+\Delta x$} (xhat); \draw (y) edge[->] node[right] {$+\Delta y$} (yhat); \draw (x) edge[->] node[above right] {$\hat{\varphi}$} (yhat); \end{tikzpicture} \end{center} We can see that, whenever $x$ itself has many components, different backward error analyses will be possible since we will have the option of reflecting the forward error back into different selections of the components. It is often the case that the map $\varphi$ can be defined as the solution to $\phi(x,y)=0$ for some operator $\phi$, i.e., as having the form \begin{align} x\xrightarrow{\varphi} \left\{ y\mid \phi(x,y)=0\right\}\>. \end{align} In this case, there will in particular be a simple and useful backward error resulting from computing the residual $r=\phi(x,\hat{y})$. Trivially, $\hat{y}$ then exactly solves the reverse-engineered problem $\hat{\varphi}$ given by $\hat{\phi}(x,y)=\phi(x,y)-r=0$. Thus, when the residual can be used as a backward error, this directly computes a reverse-engineered problem that our method has solved exactly. We are then in the fortunate position of having both a problem and its solution, and the challenge then consists in determining how similar the reference problem $\varphi$ and the modified problem $\hat{\varphi}$ are, \textsl{and whether or not the modified problem is a good model for the phenomenon being studied}. \paragraph{Regular perturbation BEA-style} Now let us introduce a \textsl{general framework for perturbation methods} that relies on the general framework for BEA introduced above.
Perturbation methods are so numerous and varied, and the problems tackled are from so many areas, that it seems a general scheme of solution would necessarily be so abstract as to be difficult to use in any particular case. Actually, the following framework covers many methods. For simplicity of exposition, we will introduce it using the simple gauge functions $1,\varepsilon,\varepsilon^2,\ldots$, but note that extension to other gauges (such as Puiseux series, $\varepsilon^n\ln^m\varepsilon$, etc.) is usually straightforward, as we will show in the examples. To begin with, let \begin{align} F(x,u;\varepsilon)=0 \label{operatoreq} \end{align} be the operator equation we are attempting to solve for the unknown $u$. The dependence of $F$ on the scalar parameter $\varepsilon$ and on any data $x$ is assumed but henceforth not written explicitly. In the case of a simple power series perturbation, we will take the $m$th order approximation to $u$ to be given by the \textsl{finite} sum \begin{align} z_m = \sum_{k=0}^m \varepsilon^ku_k\>. \end{align} The operator $F$ is assumed to be Fr\'echet differentiable. For convenience we assume slightly more, namely, that for any $u$ and $v$ in a suitable region, there exists a linear invertible operator $F_1(v)$ such that \begin{align} F(u) = F(v) + F_1(v)(u-v) + O\left(\|u-v\|^2\right)\>. \end{align} Here, $\|\cdot\|$ denotes any convenient norm. We denote the \textsl{residual} of $z_m$ by \begin{align} \Delta_m := F(z_m)\>, \end{align} \emph{i.e.}, $\Delta_m$ results from evaluating $F$ at $z_m$ instead of evaluating it at the reference solution $u$ as in equation \eqref{operatoreq}.
If $\|\Delta_m\|$ is small, we say we have solved a ``nearby'' problem, namely, the reverse-engineered problem for the unknown $u$ defined by \begin{align} F(u)-F(z_m) = 0\>, \end{align} which is exactly solved by $u=z_m$. Of course this is trivial. It is \textsl{not} trivial in consequences if $\|\Delta_m\|$ is small compared to data errors or modelling errors in the operator $F$. We will exemplify this point more concretely later. We now suppose that we have somehow found $z_0=u_0$, a solution with a residual whose size is such that \begin{align} \|\Delta_0\|=\|F(u_0)\| = O(\varepsilon)\qquad \textrm{as} \qquad \varepsilon\to0\>. \end{align} Finding this $u_0$ is part of the art of perturbation; much of the rest is mechanical. Suppose now inductively that we have found $z_n$ with residual of size \[ \|\Delta_n\| = O\left(\varepsilon^{n+1}\right) \quad\textrm{ as }\quad \varepsilon\to0\>. \] Consider $F(z_{n+1})$ which, by definition, is just $F(z_n+\varepsilon^{n+1}u_{n+1})$. We wish to choose the term $u_{n+1}$ in such a way that $z_{n+1}$ has residual of size $\|\Delta_{n+1}\|=O(\varepsilon^{n+2})$ as $\varepsilon\to0$. Using the Fr\'echet derivative of the residual of $z_{n+1}$ at $z_n$, we see that \begin{align} \Delta_{n+1} &= F(z_n+\varepsilon^{n+1}u_{n+1})= F(z_n)+F_1(z_n)\varepsilon^{n+1}u_{n+1}+O\left(\varepsilon^{2n+2}\right)\>. \label{resseries1} \end{align} By linearity of the Fr\'echet derivative, we also obtain $F_1(z_n) = F_1(z_0)+O(\varepsilon)= [\varepsilon^0]F_1(z_0)+O(\varepsilon)$. Here, $[\varepsilon^k]G$ refers to the coefficient of $\varepsilon^k$ in the expansion of $G$.
Let \begin{align} A=[\ensuremath{\varepsilon}^0]F_1(z_0)\>, \end{align} that is, the zeroth order term in $F_1(z_0)$. Thus, we reach the following expansion of $\Delta_{n+1}$: \begin{align} \Delta_{n+1} = F(z_n) + A\ensuremath{\varepsilon}^{n+1}u_{n+1}+O\left(\ensuremath{\varepsilon}^{n+2}\right)\>.\label{eqDnp1} \end{align} Note that, in equation \eqref{resseries1}, one could keep $F_1(z_n)$ rather than simplifying to $A$, and thereby compute not just $u_{n+1}$ but, just as in Newton's method, double the number of correct terms. In practice, however, this is often too expensive \cite[chap.~6]{Geddes(1992)b}, and so we will in general use this simplification. As noted, we only need $F_1(z_0)$ accurate to $O(\ensuremath{\varepsilon})$, so in place of $F_1(z_0)$ in equation \eqref{eqDnp1} we use $A$. As a result of the above expansion of $\Delta_{n+1}$, we now see that to make $\Delta_{n+1} = O\left(\ensuremath{\varepsilon}^{n+2}\right)$, we must have $F(z_n)+A\ensuremath{\varepsilon}^{n+1}u_{n+1}=O(\ensuremath{\varepsilon}^{n+2})$, in which case \begin{align} A u_{n+1} +\frac{F(z_n)}{\ensuremath{\varepsilon}^{n+1}} = Au_{n+1} +\frac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}} =O(\ensuremath{\varepsilon})\>. \end{align} Since by hypothesis $\Delta_n=F(z_n)=O(\ensuremath{\varepsilon}^{n+1})$, we know that $\sfrac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}}=O(1)$. In other words, to find $u_{n+1}$ we solve the linear operator equation \begin{align*} A u_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>, \end{align*} where, again, $[\ensuremath{\varepsilon}^{n+1}]$ is the coefficient of the $(n+1)$th power of $\ensuremath{\varepsilon}$ in the series expansion of $\Delta_n$. Note that by the inductive hypothesis the right hand side has norm $O(1)$ as $\ensuremath{\varepsilon}\to0$.
Then $\|\Delta_{n+1}\| = O(\ensuremath{\varepsilon}^{n+2})$ as desired, so $u_{n+1}$ is indeed the coefficient we were seeking. We thus need $A=[\ensuremath{\varepsilon}^0]F_1(z_0)$ to be invertible. If $A$ is invertible, the problem is regular; if not, the problem is singular, and essentially requires reformulation.\footnote{We remark that it is a sufficient but not necessary condition for regular expansion to be able to find our initial point $u_0$ and to have invertible $A=F_1(u_0;0)$. A regular perturbation problem can be defined in many ways, not just in the way we have done, with invertible $A$. For example, \cite[Sec 7.2]{Bender(1978)} essentially uses continuity in $\ensuremath{\varepsilon}$ as $\ensuremath{\varepsilon}\to0$ to characterize it. Another characterization is that for regular perturbation problems infinite perturbation series are convergent for some non-zero radius of convergence. } We shall see examples of both. This general scheme can be compared to that of, say, \cite{Bellman(1972)}. Essential similarities can be seen. In Bellman's treatment, however, the residual is used implicitly but is neither named nor noted; instead, the equation defining $u_{n+1}$ is derived by postulating an infinite expansion \begin{align} u=u_0+\ensuremath{\varepsilon} u_1+\ensuremath{\varepsilon}^2u_2+\cdots\>. \end{align} By taking the coefficient of $\ensuremath{\varepsilon}^{n+1}$ in the expansion of $\Delta_n$ we are implicitly doing the same work, but we will see advantages of this point of view. Also, note that in the frequent case of more general asymptotic sequences, namely Puiseux series or generalized approximations containing logarithmic terms, we can make the appropriate changes in a straightforward manner, as we will show below. \section{Algebraic equations} We begin by applying the regular method from section \ref{genframe} to algebraic equations.
We begin with a simple scalar equation and gradually increase the difficulty, thereby demonstrating the flexibility of the backward error point of view. \subsection{Regular perturbation}\label{RegularPert} In this section, after applying the method from section \ref{genframe} to a scalar equation, we use the same method to solve a $2\times2$ system; higher dimensional systems can be solved similarly. We give some computer algebra implementations (scripts that the reader may modify) of the basic method. Finally, in this section, we give an alternative method based on the Davidenko equation that is simpler to use in Maple. \subsubsection{Scalar equations} Let us consider a simple example similar to many used in textbooks for classical perturbation analysis. Suppose we wish to find a real root of \begin{align} x^5 -x-1=0 \label{refprobalgeq} \end{align} and, since the Abel-Ruffini theorem---which says that in general there are no solutions in radicals to equations of degree 5 or more---suggests it is unlikely that we can find an elementary expression for the solution of this \textsl{particular} equation of degree 5, we introduce a parameter which we call $\ensuremath{\varepsilon}$, and moreover which we suppose to be small. That is, we embed our problem in a parametrized family of similar problems. If we decide to introduce $\ensuremath{\varepsilon}$ in the degree-1 term, so that \begin{align} u^5-\ensuremath{\varepsilon} u-1=0\>, \label{pertalgeq} \end{align} we will see that we have a so-called regular perturbation problem. To begin with, we wish to find a $z_0$ such that $\Delta_0=F(z_0) = z_0^5-\ensuremath{\varepsilon} z_0-1=O(\ensuremath{\varepsilon})$. Quite clearly, this can happen only if $z_0^5-1=0$. Ignoring the complex roots in this example, we take $z_0=1$.
To continue the solution process, we now suppose that we have found \begin{align} z_n = \sum_{k=0}^n u_k\ensuremath{\varepsilon}^k \end{align} such that $\Delta_n=F(z_n) = z_n^5-\ensuremath{\varepsilon} z_n-1=O(\ensuremath{\varepsilon}^{n+1})$ and we wish to use our iterative procedure. We need the Fr\'echet derivative of $F$, which in this case is just \begin{align} F_1(u) &= 5u^4-\ensuremath{\varepsilon}\>, \end{align} because \begin{align} F(u) = u^5-\ensuremath{\varepsilon} u-1 &= v^5-\ensuremath{\varepsilon} v-1 + F_1(v)(u-v)+O\left((u-v)^2\right)\>. \end{align} Hence, $A=5z_0^4=5$, which is invertible. As a result our iteration is $Au_{n+1}=-[\ensuremath{\varepsilon}^{n+1}]\Delta_n$, i.e., \begin{align} 5u_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>. \end{align} Carrying out a few steps we have \begin{align} \Delta_0 = F(z_0) = F(1) = 1-\ensuremath{\varepsilon}-1 = -\ensuremath{\varepsilon} \end{align} so \begin{align} 5\cdot u_1 = -[\ensuremath{\varepsilon}]\Delta_0 = -[\ensuremath{\varepsilon}](-\ensuremath{\varepsilon}) = 1\>. \end{align} Thus, $u_1=\sfrac{1}{5}$. Therefore, $z_1=1+\sfrac{\ensuremath{\varepsilon}}{5}$ and \begin{align} \Delta_1 &= \left(1+\frac{\ensuremath{\varepsilon}}{5}\right)^5 -\ensuremath{\varepsilon}\left(1+\frac{\ensuremath{\varepsilon}}{5}\right)-1\\ & = \left(1+5\frac{\ensuremath{\varepsilon}}{5}+10\frac{\ensuremath{\varepsilon}^2}{25}+O\left(\ensuremath{\varepsilon}^3\right)\right) - \ensuremath{\varepsilon}-\frac{\ensuremath{\varepsilon}^2}{5}-1\\ &=\left(\frac{2}{5}-\frac{1}{5}\right)\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right) = \frac{1}{5}\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right)\>. \end{align} Then we find that $5u_2=-[\ensuremath{\varepsilon}^2]\Delta_1=-\sfrac{1}{5}$ and thus $u_2=-\sfrac{1}{25}$.
So, $u=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}+O(\ensuremath{\varepsilon}^3)$. Finding more terms by this method is clearly possible, although tedium might be expected at higher orders. Luckily, computers and programs are nowadays widely available that can solve such problems without much human effort; but before we demonstrate that, let's compute the residual of our computed solution so far: \[ z_2 = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\>. \] Then $\Delta_2 = z_2^5-\ensuremath{\varepsilon} z_2-1$ is \begin{align} \Delta_2 &= \left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)^5-\ensuremath{\varepsilon}\left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)-1 \nonumber \\ & = -\frac{1}{25}\ensuremath{\varepsilon}^3 - \frac{3}{125}\ensuremath{\varepsilon}^4+\frac{11}{3125}\ensuremath{\varepsilon}^5 +\frac{3}{3125}\ensuremath{\varepsilon}^6 -\frac{2}{15625}\ensuremath{\varepsilon}^7 \nonumber\\ &\qquad -\frac{1}{78125}\ensuremath{\varepsilon}^8 +\frac{1}{390625}\ensuremath{\varepsilon}^9 -\frac{1}{9765625}\ensuremath{\varepsilon}^{10}\>. \end{align} We note the following. First, $z_2$ exactly solves the modified equation \begin{align} x^5-\ensuremath{\varepsilon} x-1\enskip +\frac{1}{25}\ensuremath{\varepsilon}^3 + \frac{3}{125}\ensuremath{\varepsilon}^4-\ldots + \frac{1}{9765625}\ensuremath{\varepsilon}^{10}=0 \label{starred} \end{align} which is $O(\ensuremath{\varepsilon}^3)$ different to the original. Second, the complete residual was computed rationally: there is no error in saying that $z_2=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}$ solves equation \eqref{starred} exactly.
Third, if $\ensuremath{\varepsilon}=1$ then $z_2=1+\sfrac{1}{5}-\sfrac{1}{25}=1.16$ exactly (or $1\sfrac{4}{25}$ if you prefer), and the residual is then $(\sfrac{29}{25})^5-\sfrac{29}{25}-1\doteq -0.059658$, showing that $1.16$ is the exact root of an equation about 6\% different to the original. Something simple but importantly different to the usual treatment of perturbation methods has happened here. We have assessed the quality of the solution in an explicit fashion without concern for convergence issues or for the exact solution to $x^5-x-1=0$, which we term the reference problem. We use this term because its solution will be the reference solution. We can't call it the ``exact'' solution because $z_2$ is \textsl{also} an ``exact'' solution, namely to equation~\eqref{starred}. Every numerical analyst and applied mathematician knows that this isn't the whole story---we need some evaluation or estimate of the effects of such perturbations of the problem. One effect is the difference between $z_2$ and $x$, the reference solution, and this is what people focus on. We believe this focus is sometimes excessive. There are other possible views. For instance, one may ask whether the backward error is physically reasonable. As an example, if $\ensuremath{\varepsilon}=1$ and $z_2=1.16$ then $z_2$ exactly solves $y^5-y-a=0$ where $a\neq 1$ but rather $a\doteq 0.9403$. If the original equation was really $u^5-u-\alpha=0$ where $\alpha=1\pm 5\%$ we might be inclined to accept $z_2=1.16$ because, for all we know, we might have the true solution (even though we're outside the $\pm 5\%$ range, we're only just outside; and how confident are we in the $\pm5\%$, after all?). \subsubsection{Simple computer algebra solution} The following Maple script can be used to solve this or similar problems $f(u;\ensuremath{\varepsilon})=0$. Other computer algebra systems can also be used.
\lstinputlisting{RegularScalar} That code is a straightforward implementation of the general scheme presented in subsection \ref{genframe}. Its results, translated into \LaTeX\ and cleaned up a bit, are that \begin{align} z = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2+\frac{1}{125}\ensuremath{\varepsilon}^3 \end{align} and that the residual of this solution is \begin{align} \Delta = \frac{21}{3125}\ensuremath{\varepsilon}^5+O\left( \ensuremath{\varepsilon}^6 \right) \>. \end{align} With $N=3$, we get an extra order of accuracy as the next term in the series is zero, but this result is serendipitous. \subsubsection{Systems of algebraic equations}\label{systems} Regular perturbation for systems of equations using the framework from section \ref{genframe} is straightforward. We include an example to show some computer algebra and for completeness. Consider the following two equations in two unknowns: \begin{align} f_1(v_1,v_2) &=v_1^2+v_2^2 -1-\ensuremath{\varepsilon} v_1v_2 = 0\\ f_2(v_1,v_2) &= 25v_1v_2-12+2\ensuremath{\varepsilon} v_1 =0 \end{align} When $\ensuremath{\varepsilon}=0$ these equations determine the intersections of a hyperbola with the unit circle. There are four such intersections: $(\sfrac{3}{5},\sfrac{4}{5}), (\sfrac{4}{5},\sfrac{3}{5}), (-\sfrac{3}{5},-\sfrac{4}{5})$ and $(-\sfrac{4}{5},-\sfrac{3}{5})$. The Jacobian matrix (which gives us the Fr\'echet derivative in the case of algebraic equations) is \begin{align} F_1(v) = \begin{bmatrix} \frac{\partial f_1}{\partial v_1} & \frac{\partial f_1}{\partial v_2} \\[.25cm] \frac{\partial f_2}{\partial v_1} & \frac{\partial f_2}{\partial v_2} \end{bmatrix} = \begin{bmatrix} 2v_1 & 2v_2 \\ 25 v_2 & 25 v_1\end{bmatrix} + O(\ensuremath{\varepsilon})\>.
\end{align} Taking for instance $u_0=[\sfrac{3}{5},\sfrac{4}{5}]^T$ we have \begin{align} A= F_1(u_0) = \begin{bmatrix} \sfrac{6}{5} & \sfrac{8}{5} \\ 20 & 15\end{bmatrix}\>. \end{align} Since $\det A=-14\neq 0$, $A$ is invertible and indeed \begin{align} A^{-1} = \begin{bmatrix} -\sfrac{15}{14} & \sfrac{4}{35} \\ \sfrac{10}{7} & -\sfrac{3}{35} \end{bmatrix}\>. \end{align} The residual of the zeroth order solution is \begin{align} \Delta_0 = F\left(\frac{3}{5},\frac{4}{5}\right) = \ensuremath{\varepsilon}\begin{bmatrix}-\sfrac{12}{25} \\ \sfrac{6}{5} \end{bmatrix}\>, \end{align} so $-[\ensuremath{\varepsilon}]\Delta_0 = [\sfrac{12}{25},-\sfrac{6}{5}]^T$. Therefore \begin{align} u_1 = \begin{bmatrix} u_{11} \\ u_{12}\end{bmatrix} = A^{-1}\begin{bmatrix}\sfrac{12}{25} \\ -\sfrac{6}{5}\end{bmatrix} = \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix} \end{align} and $z_1=u_0+\ensuremath{\varepsilon} u_1$ is our improved solution: \begin{align} z_1 = \begin{bmatrix}\sfrac{3}{5} \\ \sfrac{4}{5} \end{bmatrix} + \ensuremath{\varepsilon} \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix}\>. \end{align} To guard against slips, blunders, and bugs (some of those calculations were done by hand, and some were done in Sage on an Android phone) we compute \begin{align} \Delta_1 = F(z_1) = \ensuremath{\varepsilon}^2\begin{bmatrix}\sfrac{6702}{6125} \\ -\sfrac{17328}{1225}\end{bmatrix} + O\left(\ensuremath{\varepsilon}^3\right)\>. \end{align} That computation was done in Maple, completely independently.
Initially it came out $O(\ensuremath{\varepsilon})$ indicating that something was not right; tracking the error down we found a typo in the Maple data entry ($183$ was entered instead of $138$). Correcting that typo we find $\Delta_1=O(\ensuremath{\varepsilon}^2)$ as it should be. Here is the corrected Maple code: \lstinputlisting{ResidualSystem} Just as for the scalar case, this process can be systematized and we give one way to do so in Maple, below. The code is not as pretty as the scalar case is, and one has to explicitly ``map'' the series function and the extraction of coefficients onto matrices and vectors, but this demonstrates feasibility. \lstinputlisting{RegularSystem.tex} This code computes $z_3$ correctly and gives a residual of $O(\ensuremath{\varepsilon}^4)$. From the backward error point of view, this code finds the intersection of curves that differ from the specified ones by terms of $O(\ensuremath{\varepsilon}^4)$. In the next section, we show a way to use a built-in feature of Maple to do the same thing with less human labour. \subsubsection{The Davidenko equation} Maple has a built-in facility for solving differential equations in series that (at the time of writing) is superior to its built-in facility for solving algebraic equations in series, because the latter can only handle scalar equations. This may change in the future, but it may not because there is the following simple workaround. To solve \begin{align} F(u;\ensuremath{\varepsilon})=0 \end{align} for a function $u(\ensuremath{\varepsilon})$ expressed as a series, simply differentiate to get \begin{align} D_1(F)(u,\ensuremath{\varepsilon})\frac{du}{d\ensuremath{\varepsilon}} + D_2(F)(u,\ensuremath{\varepsilon})=0\>. \end{align} Boyd \cite{Boyd(2014)} calls this the Davidenko equation. If we solve this in Taylor series with the initial condition $u(0)=u_0$, we have our perturbation series.
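The Davidenko idea is easy to test in any system with exact rational arithmetic. As a sanity check (ours, in Python; the computations in this section use Maple), the following sketch verifies that the series $z_3=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}+\sfrac{\ensuremath{\varepsilon}^3}{125}$ found earlier satisfies the Davidenko equation $(5u^4-\ensuremath{\varepsilon})\,\sfrac{du}{d\ensuremath{\varepsilon}}-u=0$ for $F(u;\ensuremath{\varepsilon})=u^5-\ensuremath{\varepsilon} u-1$ through the computed order.

```python
from fractions import Fraction as Fr

# Series are coefficient lists: p[k] is the coefficient of eps^k.
def mul(a, b, N=16):
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

z = [Fr(1), Fr(1, 5), Fr(-1, 25), Fr(1, 125)]         # z3 from the scalar example
dz = [(k + 1) * z[k + 1] for k in range(len(z) - 1)]  # dz/d(eps)

z2 = mul(z, z)
z4 = mul(z2, z2)
lhs = mul([5 * c for c in z4], dz)   # D1(F)(z) * z' = (5 z^4) * z' ...
for k in range(len(dz)):             # ... minus eps * z'
    lhs[k + 1] -= dz[k]
for k in range(len(z)):              # plus D2(F)(z) = -z
    lhs[k] -= z[k]

# coefficients of eps^0 .. eps^3 vanish identically; the first nonzero
# coefficient is that of eps^4, the derivative of Delta_3 = (21/3125) eps^5 + ...
```

The first nonzero coefficient, $\sfrac{21}{625}\,\ensuremath{\varepsilon}^4$, is exactly the derivative of the residual $\Delta=\sfrac{21}{3125}\,\ensuremath{\varepsilon}^5+O(\ensuremath{\varepsilon}^6)$ reported above, as it must be.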
Notice that what we were calling $A=[\ensuremath{\varepsilon}^0]F_1(u_0)$ occurs here as $D_1(F)(u_0,0)$ and this needs to be nonsingular to be solved as an ordinary differential equation; if $\mathrm{rank}(D_1(F)(u_0,0))<n$ then this is in fact a nontrivial differential algebraic equation that Maple may still be able to solve using advanced techniques (see, e.g., \cite{Avrachenkov(2013)}). Let us just show a simple case here: \lstinputlisting{RegularDavidenko} This generates (to the specified value of the order, namely, \verb|Order=4|) the solution \begin{align} x(\ensuremath{\varepsilon}) &=\frac{3}{5}-\frac{114}{175}\ensuremath{\varepsilon}+\frac{119577}{42875}\ensuremath{\varepsilon}^2-\frac{43543632}{2100875}\ensuremath{\varepsilon}^3\\ y(\ensuremath{\varepsilon}) &=\frac{4}{5}+\frac{138}{175}\ensuremath{\varepsilon}-\frac{119004}{42875}\ensuremath{\varepsilon}^2+\frac{43245168}{2100875}\ensuremath{\varepsilon}^3\>, \end{align} whose residual is $O(\ensuremath{\varepsilon}^4)$. Internally, Maple uses its own algorithms, which occasionally get improved as algorithmic knowledge advances. \subsection{Puiseux series}\label{Puiseux} Puiseux series are simply Taylor series or Laurent series with fractional powers. A standard example is \begin{align} \sin\sqrt{x} = x^{\sfrac{1}{2}} - \frac{1}{3!}x^{\sfrac{3}{2}} + \frac{1}{5!}x^{\sfrac{5}{2}}-\cdots \end{align} A simple change of variable (e.g.\ $t=\sqrt{x}$ so $x=t^2$) is enough to convert to Taylor series. Once the appropriate power $n$ is known for $\ensuremath{\varepsilon}=\mu^n$, perturbation by Puiseux expansion reduces to computations similar to those we've seen already.
For instance, had we chosen to embed $u^5-u-1$ in the family $u^5-\ensuremath{\varepsilon}(u+1)$ (which is in a sense conjugate to the family of the last section), then because the equation becomes $u^5=0$ when $\ensuremath{\varepsilon}=0$ we see that we have a five-fold root to perturb, and we thus suspect we will need Puiseux series. For scalar equations, there are built-in facilities in Maple for Puiseux series, which gives yet another way in Maple to solve scalar algebraic equations perturbatively. One can use the \texttt{RootOf} construct to do so as follows: \lstinputlisting{Puiseux} This yields \begin{align} z = \alpha\ensuremath{\varepsilon}^{\sfrac{1}{5}}+\frac{1}{5}\alpha^2\ensuremath{\varepsilon}^{\sfrac{2}{5}} -\frac{1}{25}\alpha^3\ensuremath{\varepsilon}^{\sfrac{3}{5}} +\frac{1}{125}\alpha^4\ensuremath{\varepsilon}^{\sfrac{4}{5}} - \frac{21}{15625}\alpha \ensuremath{\varepsilon}^{\sfrac{6}{5}} \>. \end{align} This series describes all five roots at once (one for each choice of $\alpha$), accurately for small $\ensuremath{\varepsilon}$. Note that the command \begin{lstlisting} alias(alpha = RootOf(u^5-1,u)) \end{lstlisting} is a way to tell Maple that $\alpha$ represents a fixed fifth root of unity. Exactly which fixed root can be deferred till later. Working instead with the default value for the environment variable \texttt{Order}, namely \texttt{Order := 6}, gets us a longer series for $z$ containing terms up to $\ensuremath{\varepsilon}^{\sfrac{29}{5}}$ but not $\ensuremath{\varepsilon}^{\sfrac{30}{5}}=\ensuremath{\varepsilon}^6$. Putting the resulting $z_6$ back into $f(u)$ we get a residual \begin{align} \Delta_6 = f(z_6) = \frac{23927804441356816}{14551915228366851806640625}\ensuremath{\varepsilon}^7 + O(\ensuremath{\varepsilon}^8) \end{align} Thus we expect that for small $\ensuremath{\varepsilon}$ the residual will be quite small.
For instance, with $\ensuremath{\varepsilon}=1$ the exact residual is, for $\alpha=1$, $\Delta_6=1.2\cdot 10^{-9}$. This tells us that this approximation ought to get us quite accurate roots, and indeed it does. We conclude this discussion with two remarks. The first is that by a discriminant analysis as we describe in section \ref{SingPert}, we find that the nearest singularity is at $\ensuremath{\varepsilon}=\sfrac{3125}{256}$, and so we expect this series to actually converge for $\ensuremath{\varepsilon}=1$. Again, this fact was not used in our analysis above. Secondly, we could have used the \verb|series/RootOf| technique to do both the regular perturbation in subsection \ref{RegularPert} and the singular one we will do in subsection \ref{SingPert}. The Maple commands are quite similar: \begin{lstlisting} series(RootOf(u^5-e*u-1,u),e); \end{lstlisting} and \begin{lstlisting} series(RootOf(e*u^5-u-1,u),e); \end{lstlisting} However, in both cases only the real root is expanded. Some ``Maple art'' (that one of us more readily characterizes as black magic) can be used to complete the computation, but the previous approaches (both the loop and the Davidenko equation) are easier to generalize. Making the \texttt{dsolve/series} code for the Davidenko equation work in the case of Puiseux series requires a preliminary scaling. \subsection{Singular perturbation}\label{SingPert} Suppose that instead of embedding $u^5-u-1=0$ in the regular family we used in the previous section, we had used $\ensuremath{\varepsilon} u^5-u-1=0$. If we run our previous Maple programs, we find that the zeroth order solution is unique, and $z_0=-1$. The Fr\'echet derivative is $-1$ to $O(\ensuremath{\varepsilon})$, and so $u_{n+1} = [\ensuremath{\varepsilon}^{n+1}]\Delta_n$ for all $n\geq 0$.
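This iteration is trivial to mechanize. The following sketch (ours, in Python with exact rational arithmetic; the worksheets in this section use Maple) carries it out:

```python
from fractions import Fraction as Fr
from math import comb

# Iterate u_{n+1} = [e^{n+1}] Delta_n for e*u^5 - u - 1 = 0 about z0 = -1
# (here A = -1).  Series are coefficient lists: z[k] multiplies e^k.
N = 8

def mul(a, b):
    c = [Fr(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

z = [Fr(0)] * N
z[0] = Fr(-1)
for n in range(N - 1):
    z2 = mul(z, z)
    z5 = mul(mul(z2, z2), z)
    delta = [-c for c in z]        # -z
    delta[0] -= 1                  # -1
    for k in range(N - 1):
        delta[k + 1] += z5[k]      # + e*z^5 (multiplying by e shifts up one power)
    z[n + 1] = delta[n + 1]        # A = -1, so u_{n+1} = [e^{n+1}] Delta_n

# coefficients: -1, -1, -5, -35, -285, -2530, -23751, -231880,
# whose magnitudes are binomial(5k+1, k)/(5k+1)
```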
We find, for instance, \begin{align} z_7 = -1-\ensuremath{\varepsilon} -5\ensuremath{\varepsilon}^2 - 35\ensuremath{\varepsilon}^3 -285\ensuremath{\varepsilon}^4 -2530\ensuremath{\varepsilon}^5 -23751\ensuremath{\varepsilon}^6 -231880\ensuremath{\varepsilon}^7 \end{align} which has residual $\Delta_7 = O(\ensuremath{\varepsilon}^8)$ but with a larger integer as the constant hidden in that $O$ symbol. For $\ensuremath{\varepsilon}=0.2$, the value of $z_7$ becomes \begin{align}z_7\doteq -7.4337280\end{align} while $\Delta_7=-4533.64404$, which is not small at all. Thus we have no evidence this perturbation solution is any good: we have the exact solution to $0.2u^5-u-1=-4533.64404$ or $0.2u^5-u+4532.64404=0$, probably not what was intended (and if it was, it would be a colossal fluke). Note that we do not need to know a reference value of a root of $0.2u^5-u-1$ to determine this. Trying a smaller $\ensuremath{\varepsilon}$, we find that if $\ensuremath{\varepsilon}=0.05$ we have $z_7\doteq -1.07$ and $\Delta_7\doteq -1.2\cdot 10^{-4}$. This means $z_7$ is an exact root of $0.05u^5-u-0.99988$; which may very well be what we want. The following remark is not really germane to the method but it's interesting. Taking the discriminant with respect to $u$, i.e., the resultant of $f$ and $\sfrac{\partial f}{\partial u}$, we find $\mathrm{discrim}(f) = \ensuremath{\varepsilon}^3(3125\ensuremath{\varepsilon}-256)$. Thus $f$ will have multiple roots if $\ensuremath{\varepsilon}=0$ (there are 4 multiple roots at infinity) or if $\ensuremath{\varepsilon} = \sfrac{256}{3125}=0.08192$. Thus our perturbation expansion can be expected to diverge\footnote{A separate analysis leads to the identification of $u_k = \frac{1}{5k+1}\binom{5k+1}{k}$ (via \cite{OEIS}).
The ratio test confirms that the series converges for $|\ensuremath{\varepsilon}|<\sfrac{256}{3125}$, and diverges for $|\ensuremath{\varepsilon}| > \sfrac{256}{3125}$.} for $\ensuremath{\varepsilon}\geq 0.08192$. What happens to $z_7$ if $\ensuremath{\varepsilon}=\sfrac{256}{3125}$? $z_7\doteq -1.1698$ and $\Delta_7=-9.65\cdot10^{-3}$, so we have an exact solution for $\sfrac{256}{3125}\,u^5-u-0.99035$; this is not bad. The reference double root is $-1.25$, about $0.1$ away, although this fact was not used in the previous discussion. But this computation, valid as it is, only found one root out of five, and then only for sufficiently small $\ensuremath{\varepsilon}$. We now turn to the roots that go to infinity as $\ensuremath{\varepsilon}\to0$. Preliminary investigation similar to that of subsection \ref{Puiseux} shows that it is convenient to replace $\ensuremath{\varepsilon}$ by $\mu^4$. Many singular perturbation problems, including this one, can be turned into regular ones by rescaling. Putting $u=\sfrac{y}{\mu}$, we get \begin{align} \mu^4\left(\frac{y}{\mu}\right)^5-\frac{y}{\mu}-1=0\>, \end{align} which reduces to \begin{align} y^5-y-\mu=0\>. \end{align} This is now regular in $\mu$. At zeroth order the equation is $y(y^4-1)=0$, and the root $y=0$ just recovers the regular series previously obtained; so we let $\alpha$ be a root of $y^4-1$, i.e., $\alpha\in\{1,-1,i,-i\}$.
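As a quick independent check (ours, in Python) of the rescaled expansion at the real root $\alpha=1$, the same residual iteration applies, now with $A=5\alpha^4-1=4$:

```python
from fractions import Fraction as Fr

# Regular iteration for y^5 - y - m = 0 about y0 = alpha = 1,
# where A = 5*alpha^4 - 1 = 4.  Series in m: y[k] multiplies m^k.
N = 6

def mul(a, b):
    c = [Fr(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

y = [Fr(0)] * N
y[0] = Fr(1)
for n in range(N - 1):
    y2 = mul(y, y)
    y5 = mul(mul(y2, y2), y)
    delta = [a - b for a, b in zip(y5, y)]   # y^5 - y
    delta[1] -= 1                            # - m
    y[n + 1] = -delta[n + 1] / 4             # solve A*u_{n+1} = -[m^{n+1}]Delta_n

# coefficients: 1, 1/4, -5/32, 5/32, -385/2048, 1/4
```

For the complex choices of $\alpha$ the same loop can be run in complex floating-point arithmetic.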
A very similar Maple program (to either of the previous two) gives \begin{align} y_5= \alpha +\frac{1}{4}\mu - \frac{5}{32}\alpha^3\mu^2 +\frac{5}{32}\alpha^2\mu^3-\frac{385}{2048}\alpha\mu^4 + \frac{1}{4}\mu^5 \end{align} so our approximate solution is $\sfrac{y_5}{\mu}$ or \begin{align} z_5 = \frac{\alpha}{\mu}+\frac{1}{4}-\frac{5}{32}\alpha^3\mu+\frac{5}{32}\alpha^2\mu^2-\frac{385}{2048}\alpha\mu^3+\frac{1}{4}\mu^4 \end{align} which has residual \textsl{in the original equation} \begin{align} \Delta_5 = \mu^4 z_5^5 -z_5-1= \frac{23205}{16384}\alpha^3\mu^5 - \frac{21255}{65536}\alpha^2\mu^6 +O(\mu^7)\>.\label{residorigeq} \end{align} That is, $z_5$ exactly solves $\mu^4u^5-u-1-\sfrac{23205}{16384}\>\alpha^3\mu^5=O(\mu^6)$ instead of the one we had wanted to solve. This differs from the original by $O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$, and for small enough $\ensuremath{\varepsilon}$ this may suffice. \paragraph{Optimal backward error} Interestingly enough, we can do better. The residual is only one kind of backward error. Taking the lead from the Oettli-Prager theorem \cite[chap.~6]{CorlessFillion(2013)}, we look for equations of the form \begin{align} \left(\mu^4 +\sum_{j=10}^{15} a_j\mu^j\right) u^5 - u -1 \end{align} for which $z_5$ is a better solution yet. Simply equating coefficients of the residual \begin{align} \tilde{\Delta}_5 = \left(\mu^4+\sum_{j=10}^{15}a_j\mu^j\right)z_5^5-z_5-1 \end{align} to zero, we find \begin{align} \left(\mu^4 - \frac{23205}{16384}\alpha^2\mu^{10}+ \frac{2145}{1024}\alpha\mu^{11}\right)z_5^5 -z_5-1 = \frac{12165535425}{1073741824}\alpha\mu^{11}+O(\mu^{12}) \end{align} and thus $z_5$ solves an equation that is $O(\mu^{10})=O(\ensuremath{\varepsilon}^{\sfrac{5}{2}})$ close to the original, not just an equation \eqref{residorigeq} that is $O(\mu^5)=O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$ close.
This is a superior explanation of the quality of $z_5$. This was obtained with the following Maple code: \lstinputlisting{SingularPertOettli} Computing to higher orders (see the worksheet) gives e.g.\ that $z_8$ is the exact solution to an equation that differs by $O(\mu^{13})$ from the original, or better than $O(\ensuremath{\varepsilon}^3)$. This is despite the fact that the basic residual $\Delta_8=O(\ensuremath{\varepsilon}^{9/4})$, only slightly better than $O(\ensuremath{\varepsilon}^2)$. We will see other examples of improved backward error over residual for singularly-perturbed problems. In retrospect it's not so surprising, or shouldn't have been: singular problems are sensitive to changes in the leading term, and so it takes less effort to match a given solution. \subsection{Perturbing all roots at once} The preceding analysis found a nearby equation for each root independently; this might suffice, but there are circumstances in which it might not. Perhaps we want a ``nearby'' equation satisfied by all roots at once. Sadly this is more difficult, and in general may not be possible. But it is possible for the example we've considered and we demonstrate how the backward error is used in such a case. Let \begin{align} \zeta_1 &= z_5(1)= \frac{1}{\mu}+\frac{1}{4} -\frac{5}{32}\mu+\frac{5}{32}\mu^2-\frac{385}{2048}\mu^3+\frac{1}{4}\mu^4\\ \zeta_2 &= z_5(-1) = -\frac{1}{\mu}+\frac{1}{4}+\frac{5}{32}\mu +\frac{5}{32}\mu^2+\frac{385}{2048}\mu^3 + \frac{1}{4}\mu^4\\ \zeta_3 &= z_5(i) = \frac{i}{\mu}+\frac{1}{4} + \frac{5i}{32}\mu -\frac{5}{32}\mu^2-\frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\ \zeta_4 &= z_5(-i) = -\frac{i}{\mu}+\frac{1}{4} - \frac{5i}{32}\mu -\frac{5}{32}\mu^2+ \frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\ \zeta_5 &= -1-\mu^4-5\mu^8 \>, \end{align} where $\zeta_5$ is the truncated regular series found at the start of subsection \ref{SingPert}. Now put \begin{align} \tilde{p}(x)=\mu^4(x-\zeta_1)(x-\zeta_2)(x-\zeta_3)(x-\zeta_4)(x-\zeta_5) \end{align} and expand it.
The result, by Maple, is \begin{multline} \mu^4x^5-5\mu^{12}x^4 + \left(\frac{23205}{16384}\mu^8+\frac{45}{8}\mu^{12}\right)x^3 -\left(\frac{5435}{32768}\mu^8+\frac{195697915}{33554432}\mu^{12}\right)x^2 \\ + \left( \frac{2575665}{2097152}\mu^8+\frac{5696429035}{1073741824}\mu^{12}-1 \right)x + \frac{8453745}{2097152}\mu^8 -\frac{5355037365}{1073741824}\mu^{12}-1 \end{multline} which equals \begin{align} \ensuremath{\varepsilon} x^5 -x-1-5\ensuremath{\varepsilon}^3 x^4 + \left(\frac{23205}{16384}\ensuremath{\varepsilon}^2+\frac{45}{8}\ensuremath{\varepsilon}^3\right)x^3 - \left(\frac{5435}{32768}\ensuremath{\varepsilon}^2+\cdots\right)x^2+O(\ensuremath{\varepsilon}^2) \end{align} This equation is remarkably close to the original, although all of the coefficients have changed. The backward error is $O(\mu^8)$, i.e., $O(\ensuremath{\varepsilon}^2)$. Thus for algebraic equations it's possible to talk about simultaneous backward error. \subsection{A hyperasymptotic example} In \cite[sect.~15.3, pp.~285-288]{Boyd(2014)}, Boyd takes up the perturbation series expansion of the root near $-1$ of \begin{align} f(x,\ensuremath{\varepsilon})=1+x+\ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right) = 0\>, \end{align} a problem he took from \cite[p.~22]{Holmes(1995)}. After computing the desired expansion using a two-variable technique, Boyd then sketches an alternative approach suggested by one of us (based on \cite{Corless(1996)}), namely to use the Lambert $W$ function. Unfortunately, there are a number of sign errors in Boyd's equation (15.28). We take the opportunity here to offer a correction, together with a residual-based analysis that confirms the validity of the correction.
First, the erroneous formula: Boyd has \begin{align} z_0 = \frac{W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})\ensuremath{\varepsilon}-1}{\ensuremath{\varepsilon}} \end{align} and $x_0=-\ensuremath{\varepsilon} z_0$, so allegedly $x_0=1-\ensuremath{\varepsilon} W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})$. This can't be right: as $\ensuremath{\varepsilon}\to0^+$, $e^{\sfrac{1}{\ensuremath{\varepsilon}}}\to\infty$ and the argument to $W$ is negative and large; but $W$ is real only if its argument is between $-e^{-1}$ and $0$, if it's negative at all. We claim that the correct formula is \begin{align} x_0 = -1-\ensuremath{\varepsilon} W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}) \label{star} \end{align} which shows that the errors in Boyd's equation (15.28) amount to simple sign slips. Indeed, Boyd's derivation is correct up to the last step; rather than fill in the algebraic details of the derivation of formula~\eqref{star}, we here verify that it works by computing the residual: \begin{align} \Delta_0 = 1+x_0 + \ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right). \end{align} For notational simplicity, we will omit the argument to the Lambert $W$ function and just write $W$ for $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})$. Then, note that $\mathrm{sech}(\sfrac{x_0}{\ensuremath{\varepsilon}}) = \mathrm{sech}(\sfrac{(1+\ensuremath{\varepsilon} W)}{\ensuremath{\varepsilon}})$ since $\mathrm{sech}$ is even, and that \begin{align} \mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle e^{\sfrac{x_0}{\ensuremath{\varepsilon}}}+e^{-\sfrac{x_0}{\ensuremath{\varepsilon}}}} = \frac{2}{\displaystyle e^{(\sfrac{1}{\ensuremath{\varepsilon}}) +W}+e^{-\sfrac{1}{\ensuremath{\varepsilon}}-W}}\>.
\ensuremath{\varepsilon}nd{align} Now, by definition, \begin{align} We^W = 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}} \ensuremath{\varepsilon}nd{align} and thus we obtain \begin{align} e^W = \frac{2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}}{W} \qquad \textrm{and} \qquad e^{-W} = \frac{We^{\sfrac{1}{\ensuremath{\varepsilon}}}}{2}\>. \ensuremath{\varepsilon}nd{align} It follows that \begin{align} \mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle \sfrac{2}{W}+\sfrac{W}{2}} = \frac{W}{\displaystyle 1+\sfrac{W^2}{4}}\>, \label{sechW} \ensuremath{\varepsilon}nd{align} and hence the residual is \begin{align} \Delta_0 &= 1+(-1-\ensuremath{\varepsilon} W)+\ensuremath{\varepsilon} \frac{W}{\displaystyle 1+\sfrac{W^2}{4}} = \frac{\displaystyle -\ensuremath{\varepsilon} W(1+\sfrac{W^2}{4}) + \ensuremath{\varepsilon} W}{\displaystyle 1+\sfrac{W^2}{4} } \\ &= \frac{\displaystyle -\sfrac{\ensuremath{\varepsilon} W^3}{4}}{\displaystyle 1+\sfrac{W^2}{4}} = \frac{-\ensuremath{\varepsilon} W^3}{4+ W^2} \nonumber \>. \ensuremath{\varepsilon}nd{align} Now $W= W(2e^{-1/\ensuremath{\varepsilon}})$ and as $\ensuremath{\varepsilon}\to 0^+$, $2e^{-1/\ensuremath{\varepsilon}}\to 0$ rapidly; since the Taylor series for $W(z)$ starts as $W(z)= z-z^2+\frac{3}{2}z^3+\ldots$, we have that $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})\sim 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}$ and therefore \begin{align} \Delta_0 = -\ensuremath{\varepsilon} 2e^{-\sfrac{3}{\ensuremath{\varepsilon}}}+O(e^{-\sfrac{5}{\ensuremath{\varepsilon}}})\>. \ensuremath{\varepsilon}nd{align} We see that this residual is very small indeed. But we can say even more. Boyd leaves us the exercise of computing higher order terms; here is our solution to the exercise. A Newton correction would give us \begin{align} x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} \ensuremath{\varepsilon}nd{align} and we have already computed $f(x_0)=\Delta_0$. What is $f'(x_0)$? 
Since $f(x) = 1+x+\ensuremath{\varepsilon}\mathrm{sech}(\sfrac{x}{\ensuremath{\varepsilon}})$, this derivative is \begin{align} f'(x) = 1-\mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\>. \end{align} Simplifying similarly to equation \eqref{sechW}, but remembering that $\mathrm{tanh}$ is odd, we obtain \begin{align} \mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{e^{\sfrac{x_0}{\ensuremath{\varepsilon}}} - e^{-\sfrac{x_0}{\ensuremath{\varepsilon}}}}{e^{\sfrac{x_0}{\ensuremath{\varepsilon}}}+e^{-\sfrac{x_0}{\ensuremath{\varepsilon}}}} = \frac{\sfrac{W}{2}-\sfrac{2}{W}}{\sfrac{W}{2}+\sfrac{2}{W}} = -\frac{4-W^2}{4+W^2}\>. \end{align} Thus \begin{align} f'(x_0) &= 1-\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = 1+ \frac{\displaystyle W(1-\sfrac{W^2}{4})}{\displaystyle (1+\sfrac{W^2}{4})^2}\>. \end{align} It follows that \begin{align} x_1 &= x_0 - \frac{\Delta_0}{f'(x_0)} = -1-\ensuremath{\varepsilon} W+\frac{\displaystyle\sfrac{\ensuremath{\varepsilon} W^3}{4+W^2}}{\displaystyle 1+ \frac{W(1-\sfrac{W^2}{4})}{(1+\sfrac{W^2}{4})^2}} \\ &= -1 -\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon} W^3(4+W^2)}{16+16W+8W^2-4W^3+W^4}\\ &= -1-\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon}}{4}W^3-\frac{\ensuremath{\varepsilon}}{4}W^4+\frac{3}{16}\ensuremath{\varepsilon} W^5-\frac{11}{64}\ensuremath{\varepsilon} W^7+O(W^8) \end{align} (the coefficient of $W^6$ vanishes). Finally, the residual of $x_1$ is \begin{align} \Delta_1 = 4\ensuremath{\varepsilon} e^{-\sfrac{7}{\ensuremath{\varepsilon}}}+O(\ensuremath{\varepsilon} e^{-\sfrac{8}{\ensuremath{\varepsilon}}})\>. \label{Newtresid} \end{align} We thus see an example of the use of $f'(x_0)$ instead of just $A$, as discussed in section \ref{genframe}, to approximately double the number of correct terms in the approximation.
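The computation is easy to confirm numerically. Here is a quick sanity check of our own in Python (the little Newton iteration for $W$ merely stands in for a library Lambert $W$, such as Maple's \texttt{LambertW}):

```python
import math

def lambert_w(z):
    # Newton's iteration for the principal branch; for the tiny positive
    # argument 2*exp(-1/eps), w = z is already an excellent starting guess.
    w = z
    for _ in range(50):
        ew = math.exp(w)
        step = (w*ew - z)/(ew*(w + 1.0))
        w -= step
        if abs(step) <= 1e-16*(1.0 + abs(w)):
            break
    return w

eps = 0.25
W = lambert_w(2.0*math.exp(-1.0/eps))
x0 = -1.0 - eps*W

def f(x):
    return 1.0 + x + eps/math.cosh(x/eps)   # sech = 1/cosh

Delta0 = f(x0)
print(Delta0, -eps*W**3/(4.0 + W**2), -2.0*eps*math.exp(-3.0/eps))

# One Newton step from x0; note that f'(x0) = 1 - sech*tanh exceeds 1 here,
# since tanh(x0/eps) is negative.
fp = 1.0 - (1.0/math.cosh(x0/eps))*math.tanh(x0/eps)
x1 = x0 - Delta0/fp
print(f(x1))   # far smaller than Delta0: roughly its square
```

The first line of output shows the computed residual agreeing with the exact formula $-\ensuremath{\varepsilon} W^3/(4+W^2)$ and with its leading asymptotics; the second shows the dramatic improvement after one Newton correction.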
This analysis can be implemented in Maple as follows: \lstinputlisting{Hyperasymptotic} Note that we had to use the MultiSeries package \cite{Salvy(2010)} to expand the series in equation \ensuremath{\varepsilon}qref{Newtresid}, for understanding how accurate $z_2$ was. $z_2$ is slightly more lacunary than the two-variable expansion in \cite{Boyd(2014)}, because we have a zero coefficient for $W^2$. \section{Divergent Asymptotic Series} Before we begin, a note about the section title: some authors give the impression that the word ``asymptotic'' is used \textsl{only} for divergent series, and so the title might seem redundant. But the proper definition of an asymptotic series can include convergent series (see, e.g., \cite{Bruijn(1981)}), as it means that the relevant limit is not as the number of terms $N$ goes to infinity, but rather as the variable in question (be it \ensuremath{\varepsilon}, or $x$, or whatever) approaches a distinguished point (be it 0, or infinity, or whatever). In this sense, an asymptotic series might diverge as $N$ goes to infinity, or it might converge, but typically we don't care. We concentrate in this section on divergent asymptotic series. Beginning students are often confused when they learn the usual ``rule of thumb'' for optimal accuracy when using divergent asymptotic series, namely to truncate the series \textsl{before} adding in the smallest (magnitude) term. This rule is usually motivated by an analogy with \textsl{convergent} alternating series, where the error is less than the magnitude of the first term neglected. But why should this work (if it does) for divergent series? The answer we present in this section isn't as clear-cut as we would like, but nonetheless we find it explanatory. Perhaps you and your students will, too. The basis for the answer is that one can measure the residual $\Delta$ that arises on truncating the series at, say, $M$ terms, and choose $M$ to minimize the residual. 
Since the forward error is bounded by the condition number times the size of the residual, by minimizing $\|\Delta\|$ one minimizes a bound on the forward error. It often turns out that this method gives the same $M$ as the rule of thumb, though not always. An example may clarify this. We use the large-$x$ asymptotics of $J_0(x)$, the zeroth-order Bessel function of the first kind. In \cite[section 10.17(i)]{NIST:DLMF}, we find the following asymptotic series, which is attributed to Hankel: \begin{align} J_0(x) = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left( A(x)\cos\left(x-\frac{\pi}{4}\right)-B(x)\sin\left(x-\frac{\pi}{4}\right)\right) \end{align} where \begin{align} A(x) = \sum_{k\geq 0} \frac{a_{2k}}{x^{2k}} \qquad \textrm{and}\qquad B(x) = \sum_{k\geq 0} \frac{a_{2k+1}}{x^{2k+1}} \label{twoseries} \end{align} and where \begin{align} a_0 &= 1 \nonumber\\ a_k &= \frac{(-1)^{\lceil k/2\rceil} }{k!\, 8^k}\prod_{j=1}^k (2j-1)^2\>, \end{align} so that the signs repeat in pairs. For the first few $a_k$s, we get \begin{align} a_0=1, a_1 = -\frac{1}{8}, a_2= -\frac{9}{128}, a_3 = \frac{75}{1024}\>, \end{align} and so on. The ratio test immediately shows the two series \eqref{twoseries} diverge for all finite~$x$. Luckily, we always have to truncate anyway, and if we do, the forward error can be made arbitrarily small by taking $x$ large enough. Because the Bessel functions are so well-studied, we have alternative methods for computation, for instance \begin{align} J_0(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin\theta)\,d\theta \end{align} which, given $x$, can be evaluated numerically (although it's ill-conditioned in a relative sense near any zero of $J_0(x)$). So we can directly compute the forward error. But let's pretend that we can't. We have the asymptotic series, and not much more.
Of course we have to have a defining equation---Bessel's differential equation \begin{align} x^2y''+xy'+x^2y=0 \end{align} with the appropriate normalizations at $\infty$. We look at \begin{align} y_{N,M} = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left(A_N(x)\cos\left(x-\frac{\pi}{4}\right)-B_M(x)\sin\left(x-\frac{\pi}{4}\right)\right) \end{align} where \begin{align} A_N(x) = \sum_{k=0}^N \frac{a_{2k}}{x^{2k}}\qquad \textrm{and}\qquad B_M(x) = \sum_{k=0}^M \frac{a_{2k+1}}{x^{2k+1}}\>. \end{align} Inspection shows that there are only two cases that matter: when we end on an even term $a_{2k}$ or on an odd term $a_{2k+1}$; the first term omitted is then odd or even, respectively. A little work shows that the residual \begin{align} \Delta = x^2y''_{N,M} + x y'_{N,M} + x^2y_{N,M} \end{align} is just \begin{align} \frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{\displaystyle x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\sfrac{\pi}{4})\\ \sin(x-\sfrac{\pi}{4})\end{array}\right\} \end{align} if the final term \textsl{kept}, odd or even, is $a_k$. If even, then multiply by $\cos(x-\sfrac{\pi}{4})$; if odd, then $\sin(x-\sfrac{\pi}{4})$. Let's pause a moment. The algebra to show this is a bit finicky but not hard (the equation is, after all, linear). This end result is an extremely simple (and exact!) formula for $\Delta$. The finite series $y_{N,M}$ is then the exact solution to \begin{align} x^2y''+xy'+x^2y &= \Delta\\ &= \frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\frac{\pi}{4})\\ \sin(x-\frac{\pi}{4})\end{array}\right\} \end{align} and, provided $x$ is large enough, this is only a small perturbation of Bessel's equation. In many modelling situations, such a small perturbation may be of direct physical significance, and we'd be done.
Here, though, Bessel's equation typically arises as an intermediate step, after separation of variables, say. Hence one might be interested in the forward error. By the theory of Green's functions, we may express this as \begin{align} J_0(x) - y_{N,M}(x) = \int_x^\infty K(x,\xi)\Delta(\xi)d\xi \ensuremath{\varepsilon}nd{align} for a suitable kernel $K(x,\xi)$. The obvious conclusion is that if $\Delta$ is small then so will $J_0(x)-y_{N,M}(x)$; but $K(x,\xi)$ will have some effect, possibly amplifying the effects of $\Delta$, or perhaps even damping its effects. Hence, the connection is indirect. To have an error in $\Delta$ of at most $\ensuremath{\varepsilon}$, we must have \begin{align} \left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}}\leq\ensuremath{\varepsilon} \ensuremath{\varepsilon}nd{align} (remember, $x>0$). This will happen only if \begin{align} x\geq \left(\left(k+\frac{1}{2}\right)^2 \frac{|a_k|}{\ensuremath{\varepsilon}}\right)^{2/(2k+1)} \ensuremath{\varepsilon}nd{align} and this, for fixed $k$, goes to $\infty$ as $\ensuremath{\varepsilon}\to0$. Alternatively, we may ask which $k$, for a fixed $x$, minimizes \begin{align} \left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}} \ensuremath{\varepsilon}nd{align} and this answers the truncation question in a rational way. In this particular case, minimizing $\|\Delta\|$ doesn't necessarily minimize the forward error (although, it's close). For $x=2.3$, for instance, the sequence $(k+\sfrac{1}{2})^2|a_k|x^{-k-\sfrac{1}{2}}$ is (no $\sqrt{\sfrac{2}{\pi}}$) \begin{align} \begin{array}{ccccccc} k & 0 & 1 & 2 & 3 & 4 & 5\\ A_k & 0.165 & 0.081 & 0.055 & 0.049 & 0.054 & 0.070 \ensuremath{\varepsilon}nd{array} \ensuremath{\varepsilon}nd{align} The clear winner seems to be $k=3$. 
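The little table is easy to reproduce; here is a Python sketch of our own (only the magnitudes $|a_k|$ matter here, so the signs are dropped):

```python
import math

x = 2.3

def abs_a(k):
    # |a_k| = prod_{j=1}^{k} (2j-1)^2 / (k! 8^k)
    p = 1.0
    for j in range(1, k + 1):
        p *= (2*j - 1)**2
    return p / (math.factorial(k) * 8.0**k)

# the residual magnitudes (k+1/2)^2 |a_k| x^(-k-1/2), without sqrt(2/pi)
A = [(k + 0.5)**2 * abs_a(k) / x**(k + 0.5) for k in range(6)]
for k, t in enumerate(A):
    print(k, round(t, 3))
```

The printed values match the table, and the minimum is indeed at $k=3$.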
This suggests that for $x=2.3$, the best series to take is \begin{align} y_3 = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}} \left( \left(1-\frac{9}{128x^2}\right)\cos\left(x-\frac{\pi}{4}\right) + \left(\frac{1}{8x}-\frac{75}{1024x^3}\right)\sin\left(x-\frac{\pi}{4}\right)\right)\>. \end{align} This gives $5.454\cdot 10^{-2}$ for $x=2.3$. But the cosine versus sine plays a role here: $\cos(2.3-\sfrac{\pi}{4})\doteq 0.056$ while $\sin(2.3-\sfrac{\pi}{4})\doteq0.998$, so we should have included this. When we do, the estimates for $\Delta_0$, $\Delta_2$, and $\Delta_4$ are all significantly reduced---and this changes our selection, and makes $k=4$ the right choice; $\Delta_6>\Delta_4$ as well (either way). But the influence of the integral is mollifying. Comparing to a better answer (computed via the integral formula), $0.0555398$, we see that the error is about $8.8\cdot 10^{-4}$ whereas $((4+\sfrac{1}{2})^2a_4/2.3^{4+\sfrac{1}{2}})\cos(2.3-\sfrac{\pi}{4})$ is $3.06\cdot 10^{-3}$; hence the residual somewhat overestimates the error. How does the rule of thumb do? The first term that is neglected here is $(\sfrac{1}{x})^{\sfrac{1}{2}}a_5x^{-5}\sin(x-\sfrac{\pi}{4})$ which is $\sim2.3\cdot 10^{-3}$ apart from the $(\sfrac{2}{\pi})^{\sfrac{1}{2}}=0.797$ factor, so about $1.86\cdot10^{-3}$. The \textsl{next} term is, however, $(\sfrac{2}{\pi x})^{\sfrac{1}{2}}a_6x^{-6}\cos(x-\sfrac{\pi}{4})\doteq -1.14\cdot 10^{-4}$, which is smaller yet, suggesting that we should keep the $a_5$ term. But we shouldn't. Stopping with $a_4$ gives a better answer, just as the residual suggests that it should. We emphasize that this is only a slightly more rational rule of thumb, because minimizing $\|\Delta\|$ only minimizes a bound on the forward error, not the forward error itself. Still, we have not seen this discussed in the literature before. A final comment is that the defining equation, and its scale, also define the scale for what counts as a ``small'' residual.
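These comparisons can be replayed in a few lines. The Python sketch below (our own check; the numbers are those quoted in the text) computes the ``better answer'' by applying Simpson's rule to the integral formula, and then the truncation that stops with $a_4$ (cosine series through $a_4$, sine series through $a_3$):

```python
import math

x = 2.3

# J_0(x) from the integral formula, by composite Simpson's rule.
n, h = 200, math.pi/200
f = lambda th: math.cos(x*math.sin(th))
J0 = (h/3.0)*(f(0.0) + f(math.pi)
              + sum((4 if i % 2 else 2)*f(i*h) for i in range(1, n)))/math.pi

# Stopping with a_4: A through a_4 (note a_4 = +11025/98304), B through a_3.
amp = math.sqrt(2.0/(math.pi*x))
c = 1.0 - 9.0/(128.0*x**2) + 11025.0/(98304.0*x**4)
s = 1.0/(8.0*x) - 75.0/(1024.0*x**3)
y = amp*(c*math.cos(x - math.pi/4) + s*math.sin(x - math.pi/4))
print(J0, y, J0 - y)   # the error is near 8.8e-4, as claimed
```
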
So, a justification for the ``rule of thumb'' would be as follows. In our general scheme, \begin{align} Au_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n \end{align} and thus, loosely speaking, \begin{align} u_{n+1} \sim -A^{-1}\Delta_n + O(\ensuremath{\varepsilon}^{n+1})\>. \end{align} Thus, if we stop when $u_{n+1}$ is smallest, this would tend to happen at the same integer $n$ at which $\Delta_n$ is smallest. This isn't always going to be true. For instance, suppose $A$ is a matrix with largest singular value $\sigma_1$ and smallest $\sigma_N>0$, with associated singular vectors $\hat{u}_k$ and $\hat{v}_k$, so that \begin{align} A\hat{v}_k = \sigma_k\hat{u}_k\>. \end{align} Then, if $u_{n+1}$ is like $\hat{v}_1$ then $\Delta_n$ will be like $\sigma_1\hat{u}_1$, which can be substantially larger; contrariwise, if $u_{n+1}$ is like $\hat{v}_N$ then $A\hat{v}_N=\sigma_N\hat{u}_N$ and $\Delta_n$ can be substantially smaller. The point is that the directions of $\Delta_n$ can change between steps in the perturbation expansion; we thus expect correlation but not identity. \section{Initial-Value problems} BEA has successfully been applied to the \textsl{numerical} solution of differential equations for a long time now. Examples include the works of Enright since the 1980s, e.g., \cite{Enright(1989)b,Enright(1989)a}, and indeed the Lanczos $\tau$-method is yet older~\cite{Lanczos(1988)}. It was pointed out in \cite{Corless(1992)} and \cite{Corless(1993)b} that BEA could be used for perturbation and other series solutions of differential equations as well. We here display several examples illustrating this fact. We use regular expansion, matched asymptotic expansions, the renormalization group method, and the method of multiple scales.
\subsection{Duffing's Equation} This proposed way of interpreting solutions obtained by perturbation methods has interesting advantages for the analysis of series solutions to differential equations. Consider for example an unforced weakly nonlinear Duffing oscillator, which we take from \cite{Bender(1978)}: \begin{align} y''+y+\varepsilon y^3=0 \label{Duffing} \ensuremath{\varepsilon}nd{align} with initial conditions $y(0)=1$ and $y'(0)=0$. As usual, we assume that $0<\varepsilon\ll 1$. Our discussion of this example does not provide a new method of solving this problem, but instead it improves the interpretation of the quality of solutions obtained by various methods. \subsubsection{Regular expansion} The classical perturbation analysis supposes that the solution to this equation can be written as the power series \begin{align} y(t) = y_0(t) + y_1(t)\ensuremath{\varepsilon} + y_2(t)\ensuremath{\varepsilon}^2+y_3(t)\ensuremath{\varepsilon}^3+\cdots\>. \ensuremath{\varepsilon}nd{align} Substituting this series in equation \ensuremath{\varepsilon}qref{Duffing} and solving the equations obtained by equating to zero the coefficients of powers of $\ensuremath{\varepsilon}$ in the residual, we find $y_0(t)$ and $y_1(t)$ and we thus have the solution \begin{align} z_1(t)= \cos( t) +\ensuremath{\varepsilon} \left( \frac{1}{32}\cos(3t) -\frac{1}{32}\cos( t) -\frac{3}{8}t\sin( t) \right)\>. \label{classical1st} \ensuremath{\varepsilon}nd{align} The difficulty with this solution is typically characterized in one of two ways. Physically, the secular term $t\sin t$ shows that our simple perturbative method has failed since the energy conservation prohibits unbounded solutions. Mathematically, the secular term $t\sin t$ shows that our method has failed since the periodicity of the solution contradicts the existence of secular terms. Both these characterizations are correct, but require foreknowledge of what is physically meaningful or of whether the solutions are bounded. 
In contrast, interpreting \eqref{classical1st} from the backward error viewpoint is much simpler. To compute the residual, we simply substitute $z_1$ in equation \eqref{Duffing}, that is, the residual is defined by \begin{align} \Delta_1(t) = z_1'' + z_1 + \ensuremath{\varepsilon} z_1^3\>. \end{align} For the first-order solution of equation \eqref{classical1st}, the residual is \begin{multline} \Delta_1(t) = \Big( -\tfrac {3}{64}\cos( t) +\tfrac{3}{128}\cos( 5t) +\tfrac{3}{128}\cos( 3t) - \tfrac{9}{32}t\sin(t)\\ -\tfrac{9}{32}t\sin( 3t)\Big) \ensuremath{\varepsilon}^2+O( \ensuremath{\varepsilon}^3) \>. \label{ClassicalRes1st} \end{multline} $\Delta_1(t)$ is exactly computable. We don't print it all here because it's too ugly, but in figure \ref{ClassicalDuffingRes}, we see that the complete residual grows rapidly. \begin{figure} \centering \includegraphics[width=.55\textwidth]{ClassicalDuffingRes.png} \caption{Absolute residual for the first-order classical perturbative solution of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.} \label{ClassicalDuffingRes} \end{figure} This is due to the secular term $-\tfrac{9}{32}t(\sin(t)+\sin(3t))$ of equation \eqref{ClassicalRes1st}. Thus we come to the conclusion that the secular term contained in the first-order solution obtained in equation \eqref{classical1st} invalidates it, but this time we do not need to know in advance what to physically expect or to prove that the solution is bounded. This is a slight but sometimes useful gain in simplicity.\footnote{In addition, this method makes it easy to find mistakes of various kinds. For instance, a typo in the 1978 edition of \cite{Bender(1978)} was uncovered by computing the residual.
That typo does not seem to be in the later editions, so it's likely that the authors found and fixed it themselves.} A simple Maple code makes it possible to easily obtain higher-order solutions: \lstinputlisting{DuffingClassical} Experiments with this code suggest the conjecture that $\Delta_n=O(t^n\ensuremath{\varepsilon}^{n+1})$. For this to be small, we must have $\ensuremath{\varepsilon} t=o(1)$ or $t<O(\sfrac{1}{\ensuremath{\varepsilon}})$. \subsubsection{Lindstedt's method} The failure to obtain an accurate solution on unbounded time intervals by means of the classical perturbation method suggests that another method that eliminates the secular terms will be preferable. A natural choice is Lindstedt's method, which rescales the time variable $t$ in order to cancel the secular terms. The idea is that if we use a rescaling $\tau=\omega t$ of the time variable and choose $\omega$ wisely, the secular terms from the classical perturbation method will cancel each other out.\footnote{Interpret this as: we choose $\omega$ to keep the residual small over as long a time-interval as possible.} Applying this transformation, equation \eqref{Duffing} becomes \begin{align} \omega^2 y''(\tau)+y(\tau)+\ensuremath{\varepsilon} y^3(\tau)=0 \qquad y(0)=1,\enskip y'(0)=0\>.\label{DuffingTau} \end{align} In addition to writing the solution as a truncated series \begin{align} z_1(\tau) = y_0(\tau)+y_1(\tau)\ensuremath{\varepsilon} \label{ytau} \end{align} we expand the scaling factor as a truncated power series in \ensuremath{\varepsilon}: \begin{align} \omega=1+\omega_1\ensuremath{\varepsilon}\>.
\label{omeg} \end{align} Substituting \eqref{ytau} and \eqref{omeg} back in equation \eqref{DuffingTau} to obtain the residual, and setting the terms of the residual to zero in sequence, we find the equations \begin{align} y_0'' + y_0 =0\>, \end{align} so that $y_0=\cos(\tau)$, and \begin{align} y_1'' + y_1 = -y_0^3 - 2\omega_1 y_0'' \end{align} subject to the same initial conditions, $y_0(0)=1$, $y'_0(0)=0$, $y_1(0)=0$, and $y_1'(0)=0$. By solving this last equation, we find \begin{align} y_1(\tau) =-\frac{1}{32}\cos(\tau) +\frac{1}{32}\cos(3\tau) -\frac{3}{8}\tau \sin(\tau)+\omega_1\tau\sin(\tau)\>. \end{align} So, we only need to choose $\omega_1=\sfrac{3}{8}$ to cancel out the secular terms containing $\tau\sin(\tau)$. Finally, we simply write the solution $z_1(t)$ by taking the first two terms of $y(\tau)$ and plugging in $\tau=(1+\sfrac{3\ensuremath{\varepsilon}}{8})t$: \begin{align} z_1(t) = \cos \tau +\ensuremath{\varepsilon} \left( \frac{1}{32}\cos 3\tau -\frac{1}{32}\cos \tau \right) \end{align} This truncated power series can be substituted back in the left-hand side of equation \eqref{Duffing} to obtain an expression for the residual: \begin{align} \Delta_1(t) = -\left( \frac{21}{128}\cos \left( t \right) -\frac {3}{128}\cos \left( 5t \right) +\frac {3}{16}\cos \left( 3t \right) \right) \ensuremath{\varepsilon}^2+O \left( \ensuremath{\varepsilon}^3\right) \end{align} See figure \ref{FstLindstedt}.
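The uniformity claim can be checked directly. The Python sketch below (our own check) rebuilds the secular-term-free first-order Lindstedt solution $z_1(t)=\cos\tau+\sfrac{\ensuremath{\varepsilon}}{32}(\cos 3\tau-\cos\tau)$ with $\tau=(1+\sfrac{3\ensuremath{\varepsilon}}{8})t$, evaluates the residual exactly, and confirms that its maximum is $O(\ensuremath{\varepsilon}^2)$ with no secular growth:

```python
import math

def residual(t, eps):
    # z1 and its exact second derivative (chain rule through tau = w*t)
    w = 1.0 + 3.0*eps/8.0
    tau = w*t
    z = math.cos(tau) + (eps/32.0)*(math.cos(3.0*tau) - math.cos(tau))
    zdd = w*w*(-math.cos(tau) + (eps/32.0)*(math.cos(tau) - 9.0*math.cos(3.0*tau)))
    return zdd + z + eps*z**3

def max_residual(eps, t0, t1):
    n = int((t1 - t0)/0.02)
    return max(abs(residual(t0 + 0.02*i, eps)) for i in range(n + 1))

m1 = max_residual(0.1, 0.0, 500.0)      # eps = 0.1
m2 = max_residual(0.05, 0.0, 500.0)     # eps halved: max should quarter
late = max_residual(0.1, 500.0, 1000.0) # same size later: no secular growth
print(m1, m2, m1/m2, late/m1)
```
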
\begin{figure} \centering \subfigure[First-Order\label{FstLindstedt}]{\includegraphics[width=.48\textwidth]{FstLindstedt.png}} \subfigure[Second-Order\label{SndLindstedt}]{\includegraphics[width=.48\textwidth]{SndLindstedt.png}} \caption{Absolute residual for the Lindstedt solutions of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.} \end{figure} We then do the same with the second term $\omega_2$. The following Maple code has been tested up to order $12$: \lstinputlisting{DuffingLindstedt} The significance of this is as follows: the normal presentation of the method first requires a proof (an independent proof) that the reference solution is bounded and that therefore the secular term $\ensuremath{\varepsilon} t \sin t$ in the classical solution is spurious. \textsl{But} the residual analysis needs no such proof. It says directly that the classical solution solves not \begin{align} f(t,y,y',y'')=0 \end{align} nor $f+\Delta f=0$ for uniformly small $\Delta$, but rather that its residual \textsl{departs} from 0 and is \textsl{not} uniformly small, whereas the residual for the Lindstedt solution \textsl{is} uniformly small. \subsection{Morrison's counterexample} In \cite[pp.~192-193]{Omalley(2014)}, we find a discussion of the equation \begin{align} y''+y+\ensuremath{\varepsilon}(y')^3+3\ensuremath{\varepsilon}^2(y')=0\>. \end{align} O'Malley attributed the equation to \cite{Morrison(1966)}. The equation is supposed to illustrate a difficulty with the (very popular and effective) method of multiple scales. We give a relatively full treatment here because a residual-based approach shows that the method of multiple scales, applied somewhat artfully, can be quite successful, and moreover we can demonstrate \textsl{a posteriori} that the method was successful.
The solution sketched in \cite{Omalley(2014)} uses the complex exponential format, which one of us used to good effect in his PhD, but in this case the real trigonometric form leads to slightly simpler formulae. We are very much indebted to our colleague, Professor Pei Yu at Western, for his careful solution, which we follow and analyze here.\footnote{We had asked him to solve this problem using one of his many computer algebra programs; instead, he presented us with an elegant handwritten solution.} The first thing to note is that we will use three time scales, $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2 t$, because the DE contains an $\ensuremath{\varepsilon}^2$ term, which will prove to be important. Then the multiple scales formalism gives \begin{align} \frac{d}{dt} = \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial}{\partial T_1} + \ensuremath{\varepsilon}^2 \frac{\partial}{\partial T_2} \label{msformalism} \end{align} This formalism gives most students some pause, at first: replace an ordinary derivative by a sum of partial derivatives using the chain rule? What could this mean? But soon the student, emboldened by success on simple problems, gets used to the idea and eventually the conceptual headaches are forgotten.\footnote{This can be made to make sense, after the fact. We imagine $F(T_0,T_1,T_2)$ describing the problem, and $\sfrac{d}{dt}=\sfrac{\partial F}{\partial T_0}\sfrac{\partial T_0}{\partial t} + \sfrac{\partial F}{\partial T_1}\sfrac{\partial T_1}{\partial t} + \sfrac{\partial F}{\partial T_2}\sfrac{\partial T_2}{\partial t}$, which gives $\sfrac{d}{dt}=\sfrac{\partial F}{\partial T_0}+\ensuremath{\varepsilon} \sfrac{\partial F}{\partial T_1} + \ensuremath{\varepsilon}^2 \sfrac{\partial F}{\partial T_2}$ since $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2t$.} But sometimes they return, as with this example.
To proceed, we take \begin{align} y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2+O(\ensuremath{\varepsilon}^3) \end{align} and equate to zero like powers of $\ensuremath{\varepsilon}$ in the residual. The expansion of $\sfrac{d^2 y}{dt^2}$ is straightforward: \begin{multline} \left( \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon}\frac{\partial}{\partial T_1}+\ensuremath{\varepsilon}^2\frac{\partial}{\partial T_2}\right)^2(y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2) =\\ \frac{\partial^2 y_0}{\partial T_0^2} + \ensuremath{\varepsilon}\left(\frac{\partial^2 y_1}{\partial T_0^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_1}\right) +\ensuremath{\varepsilon}^2\left(\frac{\partial^2 y_2}{\partial T_0^2}+2\frac{\partial^2 y_1}{\partial T_0\partial T_1}+\frac{\partial^2 y_0}{\partial T_1^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_2}\right) \end{multline} For completeness we include the other necessary terms, even though this construction may be familiar to the reader. We have \begin{multline} \ensuremath{\varepsilon}\left(\frac{dy}{dt}\right)^3 = \ensuremath{\varepsilon}\left( \left(\frac{\partial}{\partial T_0}+\ensuremath{\varepsilon}\frac{\partial}{\partial T_1}\right)(y_0+\ensuremath{\varepsilon} y_1)\right)^3\\ = \ensuremath{\varepsilon}\left(\frac{\partial y_0}{\partial T_0}\right)^3 + 3\ensuremath{\varepsilon}^2\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right)+\cdots\>, \end{multline} and $y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2$ is straightforward, and also \begin{align} 3\ensuremath{\varepsilon}^2\left( \left(\frac{\partial}{\partial T_0}+\cdots\right)(y_0+\cdots)\right) = 3\ensuremath{\varepsilon}^2\frac{\partial y_0}{\partial T_0}+\cdots \end{align} is at this order likewise straightforward.
At $O(\ensuremath{\varepsilon}^0)$ the residual is \begin{align} \frac{\partial^2 y_0}{\partial T_0^2}+y_0=0 \end{align} and without loss of generality we take as solution \begin{align} y_0 = a(T_1,T_2)\cos(T_0+\varphi(T_1,T_2)) \end{align} by shifting the origin to a local maximum when $T_0=0$. For notational simplicity put $\theta=T_0+\varphi(T_1,T_2)$. At $O(\ensuremath{\varepsilon}^1)$ the equation is \begin{align} \frac{\partial^2 y_1}{\partial T_0^2} + y_1 = -\left(\frac{\partial y_0}{\partial T_0}\right)^3 - 2\frac{\partial^2 y_0}{\partial T_0\partial T_1} \end{align} where the first term on the right comes from the $\ensuremath{\varepsilon}\dot{y}^3$ term whilst the second comes from the multiple scales formalism. Using $\sin^3\theta=\sfrac{3}{4}\sin\theta-\sfrac{1}{4}\sin 3\theta$, this gives \begin{align} \frac{\partial^2 y_1}{\partial T_0^2}+y_1 = \left(2\frac{\partial a}{\partial T_1}+\frac{3}{4} a^3\right)\sin\theta + 2a\frac{\partial \varphi}{\partial T_1}\cos\theta - \frac{a^3}{4}\sin 3\theta \end{align} and to suppress the resonance that would generate secular terms we put \begin{align} \frac{\partial a}{\partial T_1} = -\frac{3}{8}a^3 \qquad\textrm{and}\qquad \frac{\partial\varphi}{\partial T_1}=0\>. \label{525} \end{align} Then $y_1 = \frac{a^3}{32}\sin 3\theta$ solves this equation and has $y_1(0)=0$, which does not disturb the initial condition $y_0(0)=a_0$, although since $\sfrac{dy_1}{dT_0}=\sfrac{3a^3}{32}\cos3\theta$ the derivative of $y_0+\ensuremath{\varepsilon} y_1$ will differ by $O(\ensuremath{\varepsilon})$ from zero at $T_0=0$. This does not matter and we may adjust this by choice of initial conditions for $\varphi$, later.
The $O(\ensuremath{\varepsilon}^2)$ term is somewhat finicky, being \begin{multline} \frac{\partial^2 y_2}{\partial T_0^2}+y_2 = -2\frac{\partial^2 y_0}{\partial T_0\partial T_2} -2\frac{\partial^2 y_1}{\partial T_0\partial T_1} \\ -3\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right) - \frac{\partial^2 y_0}{\partial T_1^2}-3\frac{\partial y_0}{\partial T_0} \end{multline} where the last term came from $3(\dot{y})\ensuremath{\varepsilon}^2$. Proceeding as before, and using $\partial\varphi/\partial T_1=0$ and $\sfrac{\partial a}{\partial T_1}=-\sfrac{3}{8}\>a^3$ as well as some other trigonometric identities, we find the right-hand side can be written as \begin{align} \left(2\frac{\partial a}{\partial T_2}+3a\right)\sin\theta+\left(2a\frac{\partial\varphi}{\partial T_2}-\frac{9}{128}a^5\right)\cos\theta-\frac{27}{128}a^5\cos3\theta+\frac{9}{128}a^5\cos5\theta\>. \end{align} Again setting the coefficients of $\sin\theta$ and $\cos\theta$ to zero to prevent resonance we have \begin{align} \frac{\partial a}{\partial T_2}=-\frac{3}{2}a \label{528} \end{align} and \begin{align} \frac{\partial \varphi}{\partial T_2} = \frac{9}{256}a^4\qquad (a\neq0). \end{align} This leaves \begin{align} y_2= \frac{27}{1024}a^5\cos3\theta - \frac{3 a^5}{1024}\cos5\theta \end{align} again setting the homogeneous part to zero.
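The harmonic balance in this last step can be checked mechanically; a small Python verification of our own (with $a=1$) confirms that $y_2''+y_2$ reproduces the non-resonant part $-\sfrac{27}{128}\,a^5\cos3\theta+\sfrac{9}{128}\,a^5\cos5\theta$ of the right-hand side:

```python
import math

# y2 and its exact second derivative in theta, at a = 1
y2   = lambda th: (27.0/1024.0)*math.cos(3.0*th) - (3.0/1024.0)*math.cos(5.0*th)
y2pp = lambda th: -(243.0/1024.0)*math.cos(3.0*th) + (75.0/1024.0)*math.cos(5.0*th)

# the non-resonant part of the right-hand side
rhs = lambda th: -(27.0/128.0)*math.cos(3.0*th) + (9.0/128.0)*math.cos(5.0*th)

err = max(abs(y2pp(th) + y2(th) - rhs(th)) for th in [0.1*i for i in range(200)])
print(err)   # zero up to rounding
```
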
Now comes a bit of multiple scales magic: instead of solving equations \ensuremath{\varepsilon}qref{525} and \ensuremath{\varepsilon}qref{528} in sequence, as would be usual, we write \begin{align} \frac{da}{dt} &= \frac{\partial a}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial a}{\partial T_1} + \ensuremath{\varepsilon}^2\frac{\partial a}{\partial T_2} = 0 + \ensuremath{\varepsilon}\left(-\frac{3}{8}a^3\right) + \ensuremath{\varepsilon}^2\left(-\frac{3}{2}a\right) \nonumber \\ &= -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})\>. \label{magic} \ensuremath{\varepsilon}nd{align} Using $a=2R$ this is equation (6.50) in \cite{Omalley(2014)}. Similarly \begin{align} \frac{d\varphi}{dt} &= \ensuremath{\varepsilon} \frac{\partial\varphi}{\partial T_1}+\ensuremath{\varepsilon}^2 \frac{\partial\varphi}{\partial T_2} = 0+\ensuremath{\varepsilon}^2 \frac{9}{256} a^4 \label{moremagic} \ensuremath{\varepsilon}nd{align} and once $a$ has been identified, $\varphi$ can be found by quadrature. Solving \ensuremath{\varepsilon}qref{magic} and \ensuremath{\varepsilon}qref{moremagic} by Maple, \begin{align} a = \frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\displaystyle \sqrt{\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+\frac{a_0^2}{4}(e^{3\ensuremath{\varepsilon}^2 t}-1)}} = 2\frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\sqrt{u}} \ensuremath{\varepsilon}nd{align} and \begin{align} \varphi = -\frac{3}{16}\ensuremath{\varepsilon}^2\ln u + \frac{9}{16}\ensuremath{\varepsilon}^4 t-\frac{3}{16}\frac{\ensuremath{\varepsilon}^2a_0^2}{u} \ensuremath{\varepsilon}nd{align} where $u=4\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+a_0^2(e^{3\ensuremath{\varepsilon}^2 t}-1)$. 
The residual is (again by Maple) \begin{align} \footnotesize \ensuremath{\varepsilon}^3\left( \frac{9}{16}a_0^3\cos3t+a_0^7\left( -\frac{351}{4096}\sin t - \frac{9}{512} \sin 7t+\frac{333}{4096}\sin 3t + \frac{459}{4096}\sin 5t\right)\right)+O(\ensuremath{\varepsilon}^4) \end{align} and there is no secularity visible in this term. It is important to note that the construction of the equation \eqref{magic} for $a(t)$ required both $\sfrac{\partial a}{\partial T_1}$ and $\sfrac{\partial a}{\partial T_2}$. Either one alone gives misleading or inconsistent answers. While it may be obvious to an expert that both terms must be used at once, the situation is somewhat unusual and a novice or casual user of perturbation methods may well wish reassurance. (We did!) Computing (and plotting) the residual $\Delta=\ddot{z}+z+\ensuremath{\varepsilon}(\dot{z})^3+3\ensuremath{\varepsilon}^2\dot{z}$ does just that (see figure \ref{YuResidual}). \begin{figure} \centering \includegraphics[width=.55\textwidth]{YuResidual.png} \caption{The residual $|\Delta_3|$ divided by $\ensuremath{\varepsilon}^3a$, with $\ensuremath{\varepsilon}=0.1$, where $a=O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$, on $0\leq t\leq \sfrac{10\ln(10)}{\ensuremath{\varepsilon}^2}$ (at which point $a=10^{-15}$). We see that $|\sfrac{\Delta_3}{\ensuremath{\varepsilon}^3a}|<1$ on this entire interval.} \label{YuResidual} \end{figure} It is simple to verify that, say, for $\ensuremath{\varepsilon}=1/100$, $|\Delta|<\ensuremath{\varepsilon}^3a$ on $0<t<10^5\pi$. Notice that $a\sim O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ and $e^{-\sfrac{3}{2}\cdot 10^{-4}\cdot 10^5\cdot\pi}=e^{-15\pi} \doteq 3\times10^{-21}$ by the end of this range. The method of multiple scales has thus produced $z$, the exact solution of an equation uniformly and relatively close to the original equation.
In trigonometric form, \begin{multline} z = a\cos(t+\varphi)+\ensuremath{\varepsilon}\frac{a^3}{32}\cos(3(t+\varphi)) \\ + \ensuremath{\varepsilon}^2\left(\frac{27}{1024}a^5\cos(3(t+\varphi)) -\frac{3}{1024}a^5\cos(5(t+\varphi)) \right) \label{zeqn} \end{multline} and $a$ and $\varphi$ are as in equations \eqref{magic} and \eqref{moremagic}. Note that $\varphi$ asymptotically approaches zero. Note that the trigonometric solution we have demonstrated here to be correct, which was derived for us by our colleague Pei Yu, appears to differ from that given in \cite{Omalley(2014)}, which is \begin{align} y= Ae^{it} + \ensuremath{\varepsilon} Be^{3it} + \ensuremath{\varepsilon}^2 Ce^{5it}+\cdots \end{align} where (with $\tau=\ensuremath{\varepsilon} t$) \begin{align} C\sim \frac{3}{64}A^5+\cdots \qquad \textrm{and}\qquad B\sim -\frac{A^3}{8}\left(i+\frac{45}{8}\ensuremath{\varepsilon}|A|^2+\cdots\right) \end{align} and, if $A=Re^{i\varphi}$, \begin{align} \frac{dR}{d\tau} = -\frac{3}{2}(R^3+\ensuremath{\varepsilon} R+\cdots) \qquad \textrm{and}\qquad \frac{d\varphi}{d\tau} = -\frac{3}{2}R^2 \left(1+\frac{3\ensuremath{\varepsilon}}{8}R^2+\cdots\right)\>. \end{align} Of course with the trigonometric form $y=a\cos(t+\varphi)$, the equivalent complex form is \begin{align} y &= a \left( \frac{e^{it+i\varphi}+ e^{-it-i\varphi}}{2}\right) = \frac{a}{2}e^{i\varphi}e^{it}+c.c. \end{align} and so $R=\sfrac{a}{2}$. As expected, equation (6.50) in \cite{Omalley(2014)} becomes \begin{align} \frac{d}{d\tau}\left(\frac{a}{2}\right) = -\frac{3}{2}\,\frac{a}{2}\left(\frac{a^2}{4}+\ensuremath{\varepsilon}\right) \end{align} or, alternatively, \begin{align} \frac{da}{dt} = -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon}) \end{align} which agrees with that computed for us by Pei Yu.
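The bookkeeping between the two normalizations is easy to get wrong, so a two-function identity check (Python; the function names are ours) is reassuring: with $R=\sfrac{a}{2}$, the equation $\sfrac{dR}{d\tau}=-\sfrac{3}{2}(R^3+\ensuremath{\varepsilon} R)$ is the same equation as $\sfrac{da}{d\tau}=-\sfrac{3}{8}a(a^2+4\ensuremath{\varepsilon})$.

```python
def da_dtau(a, eps):
    # amplitude equation in the a = 2R normalization (slow time tau = eps*t)
    return -0.375*a*(a**2 + 4*eps)

def dR_dtau(R, eps):
    # O'Malley's (6.50)-style form in the R normalization
    return -1.5*(R**3 + eps*R)
```

Since $a=2R$, $\sfrac{da}{d\tau}$ must equal $2\,\sfrac{dR}{d\tau}$ evaluated at $R=\sfrac{a}{2}$, identically in $a$ and $\ensuremath{\varepsilon}$.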
However, O'Malley's equation (6.48) gives \begin{align} C\cdot e^{i\cdot 5t} &= \frac{3}{64}A^5 e^{i5t} = \frac{3}{64}R^5e^{i5\theta} = \frac{3}{2048}a^5 e^{i5\theta}\>, \end{align} so that \begin{align} Ce^{i5t}+c.c. = \frac{3}{1024}a^5\cos5\theta\>, \end{align} whereas Pei Yu has $-\sfrac{3}{1024}$. As demonstrated by the residual in figure \ref{YuResidual}, Pei Yu is correct. Well, sign errors are trivial enough. More differences occur for $B$, however. The $-\sfrac{A^3}{8} \> ie^{3it}$ term becomes $\sfrac{a^3}{32}\>\cos 3\theta$, as expected, but $-\sfrac{45}{64}A^3\cdot |A|^2e^{3it}+c.c.$ becomes $-\sfrac{45}{32}\sfrac{a^5}{32}\>\cos3\theta = -\sfrac{45}{1024}\>a^5\cos3\theta$, not $\sfrac{27}{1024}\>a^5\cos3\theta$. Thus we believe there has been an arithmetic error in \cite{Omalley(2014)}. This is also present in \cite{Omalley(2010)}. Similarly, we believe the $\sfrac{d\varphi}{dt}$ equation there is wrong. Arithmetic errors in perturbation solutions are, obviously, a constant hazard even for experts. We do not point out this error (or the other errors highlighted in this paper) in a spirit of glee---goodness knows we've made our own share. No, the reason we do so is to emphasize the value of a separate, independent check using the residual. Because we have done so here, we are certain that equation \eqref{zeqn} is correct: it produces a residual that is uniformly $O(\ensuremath{\varepsilon}^3)$ for bounded time, and which is $O(\ensuremath{\varepsilon}^{9/2}e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ as $t\to \infty$. (We do not know why there is extra accuracy for large times.) Finally, we remark that the difficulty this example presents for the method of multiple scales is that equation \eqref{magic} cannot itself be solved by perturbation methods (or, at least, we couldn't do it).
One has to use all three terms at once; the fact that this works is amply demonstrated afterwards. Indeed the whole multiple scales procedure based on equation \eqref{msformalism} is really very strange when you think about it, but it can be justified afterwards. It really doesn't matter how we find equation \eqref{zeqn}. Once we have done so, verifying that it is the exact solution of a small perturbation of the original equation is quite straightforward. The implementation is described in the following Maple code: \lstinputlisting{Morrison} \subsection{The lengthening pendulum} As an interesting example with a genuine secular term, \cite{Boas(1966)} discusses the lengthening pendulum. There, Boas solves the linearized equation exactly in terms of Bessel functions. We use the model here as an example of a perturbation solution in a physical context. The original Lagrangian leads to \begin{align} \frac{d}{dt} \left(m\ell^2\frac{d\theta}{dt}\right)+mg\ell\sin\theta =0 \end{align} (having already neglected any system damping). The length of the pendulum at time $t$ is modelled as $\ell =\ell_0+vt$, and implicitly $v$ is small compared to the oscillatory speed $\sfrac{d\theta}{dt}$ (else why would it be a pendulum at all?). The presence of $\sin\theta$ makes this a nonlinear problem; when $v=0$ there is an analytic solution using elliptic functions \cite[chap.~4]{Lawden(2013)}. We \textsl{could} do a perturbation solution about that analytic solution; indeed there is computer algebra code to do so automatically \cite{Rand(2012)}. For the purpose of this illustration, however, we make the same small-amplitude linearization that Boas did and replace $\sin\theta$ by $\theta$.
Dividing the resulting equation by $\ell_0$, putting $\ensuremath{\varepsilon}=\sfrac{v}{\ell_0\omega}$ with $\omega=\sqrt{\sfrac{g}{\ell_0}}$ and rescaling time to $\tau=\omega t$, we get \begin{align} (1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta=0\>. \end{align} This supposes, of course, that the pin holding the base of the pendulum is held perfectly still (and is frictionless besides). Computing a regular perturbation approximation \begin{align} z_{\textrm{reg}} = \sum_{k=0}^N \theta_k(\tau)\ensuremath{\varepsilon}^k \end{align} is straightforward, for any reasonable $N$, by using computer algebra. For instance, with $N=1$ we have \begin{align} z_{\textrm{reg}} = \cos\tau + \ensuremath{\varepsilon}\left(\frac{3}{4}\sin\tau+\frac{\tau^2}{4}\sin\tau-\frac{3}{4}\tau\cos\tau\right)\>. \end{align} This has residual \begin{align} \Delta_{\textrm{reg}} &= (1+\ensuremath{\varepsilon}\tau)z''_{\textrm{reg}}+2\ensuremath{\varepsilon} z'_{\textrm{reg}}+z_{\textrm{reg}}\\ &= -\frac{\ensuremath{\varepsilon}^2}{4}\left(\tau^3\sin\tau-9\tau^2\cos\tau-15\tau\sin\tau\right) \end{align} also computed straightforwardly with computer algebra. By experiment with various $N$ we find that the residuals are always of $O(\ensuremath{\varepsilon}^{N+1})$ but contain powers of $\tau$ as high as $\tau^{2N+1}$.
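This claimed residual is easy to verify without computer algebra as well. Since $z_{\textrm{reg}}$ is linear in $\ensuremath{\varepsilon}$, the $O(\ensuremath{\varepsilon}^2)$ expression above is in fact the residual exactly, and a short numeric sketch (Python, finite differences, our own function names) confirms it:

```python
import math

def z_reg(tau, eps):
    # N = 1 regular perturbation solution of the linearized lengthening pendulum
    return math.cos(tau) + eps*(0.75*math.sin(tau)
                                + tau**2/4*math.sin(tau)
                                - 0.75*tau*math.cos(tau))

def residual_reg(tau, eps, h=1e-4):
    # (1 + eps*tau) z'' + 2 eps z' + z, derivatives by central differences
    z0, zp, zm = z_reg(tau, eps), z_reg(tau + h, eps), z_reg(tau - h, eps)
    z1 = (zp - zm)/(2*h)
    z2 = (zp - 2*z0 + zm)/h**2
    return (1 + eps*tau)*z2 + 2*eps*z1 + z0

def residual_formula(tau, eps):
    # the closed form found by computer algebra
    return -eps**2/4*(tau**3*math.sin(tau) - 9*tau**2*math.cos(tau)
                      - 15*tau*math.sin(tau))
```

The two agree to finite-difference accuracy at arbitrary sample points, which is exactly the kind of independent check advocated throughout this paper.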
This naturally raises the question of just when this can be considered ``small.'' We thus have the \textsl{exact} solution of \begin{align} (1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta = \Delta_{\textrm{reg}}(\tau)=O(\ensuremath{\varepsilon}^{N+1}\tau^{2N+1}) \end{align} and it seems clear that if $\ensuremath{\varepsilon}^{N+1}\tau^{2N+1}$ is to be considered small it should at least be smaller than $\ensuremath{\varepsilon}\tau$, which appears on the left-hand side of the equation. [$\sfrac{d^2\theta}{d\tau^2}$ is $-\cos\tau$ to leading order, so this is periodically $O(1)$.] This means $\ensuremath{\varepsilon}^N\tau^{2N}$ should be smaller than $1$, which forces $\tau\leq T$ where $T=O(\ensuremath{\varepsilon}^{-q})$ with $q\leq\frac{1}{2}$. That is, this regular perturbation solution is valid only on a limited range of $\tau$, namely $\tau=O(\ensuremath{\varepsilon}^{-\sfrac{1}{2}})$. Of course, the original equation contains a term $\ensuremath{\varepsilon}\tau$, and this itself is small only if $\tau\leq T_{\max}$ with $T_{\max}=O(\ensuremath{\varepsilon}^{-1+\delta})$ for $\delta>0$. Notice that we have discovered this limitation of the regular perturbation solution without reference to the `exact' Bessel function solution of this linearized equation. Notice also that $\Delta_{\textrm{reg}}$ can be interpreted as a small forcing term; a vibration of the pin holding the pendulum, say. Knowing that such physical vibrations, perhaps caused by trucks driving past the laboratory holding the pendulum, are bounded in size by a certain amount can help to decide what $N$ to take, and over which $\tau$-interval the resulting solution is valid. Of course, one might be interested in the forward error $\theta-z_{\textrm{reg}}$; but then one should be interested in the forward errors caused by neglecting physical vibrations (e.g.
of trucks passing by) and the same theory---what a numerical analyst calls a condition number---can be used for both. But before we pursue that farther, let us first try to improve the perturbation solution. We use the method of multiple scales or, equivalently but more easily in this case, the renormalization group method \cite{Kirkinis(2012)}. For a linear problem the latter consists of taking the regular perturbation solution, replacing $\cos\tau$ by $\sfrac{(e^{i\tau}+e^{-i\tau})}{2}$ and $\sin\tau$ by $\sfrac{(e^{i\tau}-e^{-i\tau})}{2i}$, and gathering up the result and writing it as $\sfrac{1}{2}\>A(\tau;\ensuremath{\varepsilon})e^{i\tau}+\sfrac{1}{2}\>\bar{A}(\tau;\ensuremath{\varepsilon})e^{-i\tau}$. One then writes $A(\tau;\ensuremath{\varepsilon}) = e^{L(\tau;\ensuremath{\varepsilon})}+O(\ensuremath{\varepsilon}^{N+1})$ (that is, one takes the logarithm of the $\ensuremath{\varepsilon}$-series for $A(\tau;\ensuremath{\varepsilon})=A_0(\tau)+\ensuremath{\varepsilon} A_1(\tau)+\cdots+\ensuremath{\varepsilon}^NA_N(\tau)+O(\ensuremath{\varepsilon}^{N+1})$, a straightforward exercise, especially in a computer algebra system); rewriting $\sfrac{1}{2}\>e^{L(\tau;\ensuremath{\varepsilon})+i\tau}+$ c.c. in real trigonometric form again (if one likes) gives an excellent result. If $N=1$, we get \begin{align} \tilde{z}_{\textrm{renorm}}=e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\frac{3}{4}\ensuremath{\varepsilon}+\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right) \end{align} which contains an irrelevant phase shift $\frac{3}{4}\ensuremath{\varepsilon}$, which we remove here as a distraction to get \begin{align} z_{\textrm{renorm}} = e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)\>.
\end{align} This has residual \begin{align} \Delta_{\textrm{renorm}} &= (1+\ensuremath{\varepsilon}\tau)\frac{d^2z_{\textrm{renorm}}}{d\tau^2} +2\ensuremath{\varepsilon}\frac{dz_{\textrm{renorm}}}{d\tau}+z_{\textrm{renorm}} \nonumber\\ &= \ensuremath{\varepsilon}^2e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau} \left( \left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\cos\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)+\frac{9}{4}\tau\sin\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)\right)+O(\ensuremath{\varepsilon}^3\tau^3e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau}) \>. \end{align} By inspection, we see that this is superior in several ways to the residual from the regular perturbation method. First, it contains the damping term $e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}$ just as the computed solution does; this residual will be small compared even to the decaying solution. Second, at order $N$ it contains only $\tau^{N+1}$ as its highest power of $\tau$, not $\tau^{2N+1}$. This will be small compared to $\ensuremath{\varepsilon}\tau$ for times $\tau< T$ with $T=O(\ensuremath{\varepsilon}^{-1+\delta})$ for \emph{any} $\delta>0$; that is, this perturbation solution will provide a good solution so long as its fundamental assumption, namely that the $\ensuremath{\varepsilon}\tau$ term in the original equation can be considered `small', holds. Note that again the quality of this perturbation solution has been judged without reference to the exact solution, and quite independently of whatever assumptions are usually made to argue for multiple scales solutions (such as boundedness of $\theta$) or the renormalization group method. Thus, we conclude that the renormalization group method gives a superior solution in this case, and this judgement was made possible by computing the residual. We have used the following Maple implementation: \lstinputlisting{LengtheningPendulum} See figure \ref{pendulum}.
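Because $z_{\textrm{renorm}}$ is elementary, its derivatives can be written down exactly, so this residual can be spot-checked numerically without any computer algebra at all. The sketch below (Python; function names are ours) also serves as an independent check on the sign of the $\tau\sin$ term in the leading form, which comes out positive, $+\sfrac{9}{4}\tau\sin$:

```python
import math

def residual_renorm(tau, eps):
    # exact derivatives of z = e^L cos(phi), with L = -(3/4) eps tau
    # and phi = tau - eps tau^2 / 4, substituted into (1+eps*tau) z'' + 2 eps z' + z
    L, Lp = -0.75*eps*tau, -0.75*eps
    phi, php, phpp = tau - 0.25*eps*tau**2, 1 - 0.5*eps*tau, -0.5*eps
    e, c, s = math.exp(L), math.cos(phi), math.sin(phi)
    z = e*c
    z1 = e*(Lp*c - php*s)
    z2 = e*((Lp**2 - php**2)*c - (2*Lp*php + phpp)*s)
    return (1 + eps*tau)*z2 + 2*eps*z1 + z

def leading_term(tau, eps):
    # leading-order form of the residual, eps^2 e^{-3 eps tau/4} ( ... )
    phi = tau - 0.25*eps*tau**2
    return eps**2*math.exp(-0.75*eps*tau)*((0.75*tau**2 - 15/16)*math.cos(phi)
                                           + 2.25*tau*math.sin(phi))
```

For small $\ensuremath{\varepsilon}$ the exact residual and the leading form agree to $O(\ensuremath{\varepsilon}^3\tau^3)$, as claimed.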
\begin{figure} \includegraphics[width=.45\textwidth]{LengtheningSols.png}\quad \includegraphics[width=.45\textwidth]{RenormRes.png} \caption{On the left, solutions to the lengthening pendulum equation (the renormalized solution is the solid line). On the right, residual of the renormalized solution, which is orders of magnitude smaller than that of the regular expansion.} \label{pendulum} \end{figure} Note that this renormalized residual contains terms of the form $(\ensuremath{\varepsilon}\tau)^k e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon} \tau}$. No matter what order we compute to, these have maxima $O(1)$ when $\tau=O(\sfrac{1}{\ensuremath{\varepsilon}})$, but as noted previously the fundamental assumption of perturbation has been violated by that large a $\tau$. \paragraph{Optimal backward error again} Now, one further refinement is possible. We may look for an $O(\ensuremath{\varepsilon}^2)$ perturbation of the lengthening of the pendulum that explains part of this computed residual! That is, we look for $p(\tau)$, say, so that \begin{align} \Delta_2 := (1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2 p(\tau)) z_{\textrm{renorm}}'' + 2(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2 p'(\tau))z_{\textrm{renorm}}'+z_{\textrm{renorm}} \label{renormeqs} \end{align} has only \textsl{smaller} terms in it than $\Delta_{\textrm{renorm}}$. Note the correlated changes, $\ensuremath{\varepsilon}^2 p(\tau)$ and $\ensuremath{\varepsilon}^2 p'(\tau)$. At this point, we don't know if this is possible or useful, but it's a good thing to try. In numerical analysis terms, we are trying to find a structured backward error for this computed solution. The procedure for identifying $p(\tau)$ in equation \eqref{renormeqs} is straightforward.
We put $p(\tau)=a_0+a_1\tau+a_2\tau^2$ with unknown coefficients, compute $\Delta_2$, and try to choose $a_0$, $a_1$, and $a_2$ to make as many coefficients of powers of $\ensuremath{\varepsilon}$ in $\Delta_2$ vanish as we can. When we do this, we find that \begin{align} p = -\frac{15}{16}+\frac{3}{4}\tau^2 \end{align} makes \begin{align} \Delta_{\textrm{mod}} &= \left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right)z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)\right) z_{\textrm{renorm}}' + z_{\textrm{renorm}}\\ &= \ensuremath{\varepsilon}^2e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\left(-\frac{3}{4}\tau\sin\left(\tau-\sfrac{1}{4}\>\ensuremath{\varepsilon} \tau^2\right)\right) + O(\ensuremath{\varepsilon}^3\tau^3 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})\>. \end{align} This is $O(\ensuremath{\varepsilon}^2\tau e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$ instead of $O(\ensuremath{\varepsilon}^2\tau^2 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$, and therefore smaller. This \textsl{interprets} the largest term of the original residual, the $O(\ensuremath{\varepsilon}^2\tau^2)$ term, as a perturbation in the lengthening of the pendulum. The gain is one of interpretation; the solution is the same, but the equation it solves exactly is slightly different. For $O(\ensuremath{\varepsilon}^N\tau^N)$ solutions the modifications will probably be similar.
Now, if $z\doteq\cos\tau$ then $z'\doteq-\sin\tau$; so if we include a damping term \begin{align} \left( +\ensuremath{\varepsilon}^2\cdot\frac{3}{8}\cdot\tau \theta' \right) \end{align} in the model, we have \begin{align} \left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right) z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}-\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)+\ensuremath{\varepsilon}^2\frac{3}{8}\tau\right)z_{\textrm{renorm}}' + z_{\textrm{renorm}} \nonumber\\ = O\left(\ensuremath{\varepsilon}^3\tau^3e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\right) \end{align} and \textsl{all} of the leading terms of the residual have been ``explained'' in the physical context. If the damping term had been negative, we might have rejected it; having it increase with time also isn't very physical (although one might imagine heating effects or some such). \subsection{Vanishing lag delay DE} For another example we consider an expansion that ``everybody knows'' can be problematic. We take the DDE \begin{align} \dot{y}(t)+ay(t-\ensuremath{\varepsilon})+b y(t)=0 \end{align} from \cite[p.~52]{Bellman(1972)} as a simple instance. Expanding $y(t-\ensuremath{\varepsilon})=y(t)-\dot{y}(t)\ensuremath{\varepsilon}+O(\ensuremath{\varepsilon}^2)$ we get \begin{align} (1-a\ensuremath{\varepsilon})\dot{y}(t) + (b+a)y(t)=0 \end{align} by ignoring $O(\ensuremath{\varepsilon}^2)$ terms, with solution \begin{align} z(t) = \exp\left(-\frac{b+a}{1-a\ensuremath{\varepsilon}}t\right)u_0 \end{align} if a simple initial condition $u(0)=u_0$ is given.
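Because $z(t)$ is a pure exponential, its residual in the original DDE is just $z(t)$ times a constant, which can be evaluated directly. A minimal check (Python; symbol names ours) confirms that this constant is $O(\ensuremath{\varepsilon}^2)$, and that halving $\ensuremath{\varepsilon}$ quarters it:

```python
import math

def residual_ratio(a, b, eps):
    # z(t) = exp(-lam t) u0 solves the truncated ODE exactly; substituting it
    # into the DDE  z' + a z(t - eps) + b z(t)  gives z(t) times this constant
    lam = (a + b)/(1 - a*eps)
    return -lam + a*math.exp(lam*eps) + b
```

Expanding in $\ensuremath{\varepsilon}$ shows the constant is $\sfrac{a}{2}(a+b)^2\ensuremath{\varepsilon}^2+O(\ensuremath{\varepsilon}^3)$, so the residual is uniformly $O(\ensuremath{\varepsilon}^2)z(t)$ for all $t$.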
Direct computation of the residual shows \begin{align} \Delta &= \dot{z} + az(t-\ensuremath{\varepsilon})+bz(t)\\ &= O(\ensuremath{\varepsilon}^2)z(t) \end{align} uniformly for all $t$; in other words, our computed solution $z(t)$ exactly solves \begin{align} \dot{y} + ay(t-\ensuremath{\varepsilon}) + (b+O(\ensuremath{\varepsilon}^2))y(t)=0 \end{align} which is an equation of the same type as the original, with only $O(\ensuremath{\varepsilon}^2)$ perturbed coefficients. The initial history for the DDE should be prescribed on $-\ensuremath{\varepsilon}\leq t<0$ as well as the initial condition, and that's an issue, but often that history is an issue anyway. So, in this case, contrary to the usual vague folklore that Taylor series expansion in the vanishing lag ``can lead to difficulties'', we have a successful solution and we know that it's successful. We now need to assess the sensitivity of the problem to small changes in $b$, but we all know that has to be done anyway, even if we often ignore it. Another example of Bellman's on the same page, $\ddot{y}(t)+ay(t-\ensuremath{\varepsilon})=0$, can be treated in the same manner. Bellman cautions there that seemingly similar approaches can lead to singular perturbation problems, which can indeed lead to difficulties, but even there a residual/backward error analysis can help to navigate those difficulties. \subsection{Artificial viscosity in a nonlinear wave equation} Suppose we are trying to understand a particular numerical solution, by the method of lines, of \begin{align} u_t + uu_x = 0 \label{waveeq} \end{align} with initial condition $u(0,x)=e^{i\pi x}$ on $-1\leq x\leq 1$ and periodic boundary conditions. Suppose that we use the method of modified equations (see, for example, \cite{Griffiths(1986)}, \cite{Warming(1974)}, or \cite[chap~12]{CorlessFillion(2013)}) to find a perturbed equation that the numerical solution more nearly solves.
Suppose also that we analyze the same numerical method applied to the divergence form \begin{align} u_t + \frac{1}{2}(u^2)_x=0\>. \label{waveeq2} \end{align} Finally, suppose that the method in question uses backward differences $f'(x) = \sfrac{(f(x)-f(x-2\ensuremath{\varepsilon}))}{2\ensuremath{\varepsilon}}$ (the factor 2 is for convenience) on an equally-spaced $x$-grid, so $\Delta x=-2\ensuremath{\varepsilon}$. The method of modified equations gives \begin{align} u_t + uu_x -\ensuremath{\varepsilon}(uu_{xx})+O(\ensuremath{\varepsilon}^2)=0 \end{align} for equation \eqref{waveeq} and \begin{align} u_t+uu_x -\ensuremath{\varepsilon} (u_x^2 + uu_{xx})+O(\ensuremath{\varepsilon}^2) = 0 \end{align} for equation \eqref{waveeq2}. The outer solution to each of these equations is just the reference solution to both equations \eqref{waveeq} and \eqref{waveeq2}, namely, \begin{align} u = \frac{1}{i\pi t} W(i\pi t e^{i\pi x}) \end{align} where $W(z)$ is the principal branch of the Lambert $W$ function, which satisfies $W(z) e^{W(z)}=z$. See \cite{Corless(1996)} for more on the Lambert $W$ function. That $u$ is the solution for this initial condition was first noticed in \cite{weideman(2003)}. The residuals of these outer solutions are just $-\ensuremath{\varepsilon} uu_{xx}$ and $-\ensuremath{\varepsilon}(u_x^2+uu_{xx})$ respectively. Simplifying, and again suppressing the argument of $W$ for tidiness, we find that \begin{align} -\ensuremath{\varepsilon} uu_{xx} = -\frac{\ensuremath{\varepsilon} W^2}{t^2(1+W)^3} \end{align} and \begin{align} -\ensuremath{\varepsilon}(u_x^2 + uu_{xx}) = -\frac{\ensuremath{\varepsilon} W^2(2+W)}{t^2(1+W)^3} \end{align} where $W$ is short for $W(i\pi t e^{i\pi x})$.
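The Lambert $W$ form of the outer solution is itself easy to verify numerically. The sketch below (Python; function names ours) uses a basic Newton iteration for the principal branch, adequate for the moderate $|z|$ arising well away from the branch point $-\sfrac{1}{e}$, and checks both the initial condition and the PDE by finite differences:

```python
import cmath

def lambertW(z, tol=1e-14):
    # principal branch by Newton iteration on w e^w - z = 0;
    # fine for moderate |z| away from the branch point -1/e
    w = z if abs(z) < 1 else cmath.log(z)
    for _ in range(100):
        e = cmath.exp(w)
        dw = (w*e - z)/(e*(w + 1))
        w -= dw
        if abs(dw) < tol:
            break
    return w

def u(t, x):
    # outer solution u = W(i pi t e^{i pi x}) / (i pi t)
    s = 1j*cmath.pi*t
    return lambertW(s*cmath.exp(1j*cmath.pi*x))/s

def pde_defect(t, x, h=1e-5):
    # u_t + u u_x by central differences; should vanish for the outer solution
    ut = (u(t + h, x) - u(t - h, x))/(2*h)
    ux = (u(t, x + h) - u(t, x - h))/(2*h)
    return ut + u(t, x)*ux
```

As $t\to 0^+$ this recovers $u(0,x)=e^{i\pi x}$, and the defect in $u_t+uu_x$ is at finite-difference roundoff away from the singularity.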
We see that if $x=\sfrac{1}{2}$ and $t=\sfrac{1}{(\pi e)}$, both of these are singular: \begin{align} -\ensuremath{\varepsilon} uu_{xx} \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2 e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}}+O\left(\frac{1}{et\pi-1}\right)\right) \end{align} and \begin{align} -\ensuremath{\varepsilon} (u^2_x +uu_{xx}) \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}} + O\left(\frac{1}{\sqrt{et\pi-1}}\right)\right)\>. \end{align} We see that the outer solution makes the residual very large near $x=\sfrac{1}{2}$ as $t\to \sfrac{1}{(\pi e)}^-$, suggesting that the solution of the modified equation---and thus the numerical solution---will depart from the outer solution. Both the original form and the divergence form are predicted to have similar behaviour, and this is confirmed by numerical experiments. We remark that using forward differences instead just changes the sign of $\ensuremath{\varepsilon}$, and given the similarity of $\ensuremath{\varepsilon} uu_{xx}$ to $\ensuremath{\varepsilon} u_{xx}$, we intuit that this will blow up rather quickly, like the backward heat equation, because the exact solution to Burgers' equation $u_t+uu_x=\ensuremath{\varepsilon} u_{xx}$ involves a change of variable to the heat equation \cite[pp.~352-353]{Kevorkian(2013)}. We remark also that this use of the residual is a bit perverse: we here substitute the reference solution into an approximate (reverse-engineered) equation. Some authors do use `residual' or even `defect' in this sense, e.g., \cite{Chiba(2009)}. It only fits our usage because the reference solution to the original equation is just the outer solution of the perturbation problem of interest here. Finally, we can interpolate the numerical solution using a trigonometric interpolant in $x$, taken as a tensor product with the interpolant in $t$ provided by the numerical solver (e.g., \texttt{ode15s} in Matlab).
We can then compute the residual $\Delta(t,x) = z_t+zz_x$ in the original equation and we find that, away from the singularity, it is $O(\ensuremath{\varepsilon})$. If we compute the residual in the modified equation \begin{align} \Delta_1(t,x)=z_t+zz_x-\ensuremath{\varepsilon} zz_{xx} \end{align} we find that, away from the singularity, it is $O(\ensuremath{\varepsilon}^2)$. This is a more traditional use of the residual in a numerical computation, and is done without knowledge of any reference solution. The analogous use we are making for perturbation methods can be understood from this numerical perspective. \section{Concluding Remarks} Decades ago, van Dyke had already made the point that, in perturbation theory, ``[t]he possibilities are too diverse to be subject to rules'' \cite[p.~31]{vanDyke(1964)}. Van Dyke was talking about the useful freedom to choose expansion variables artfully, but the same might be said for perturbation methods generally. This paper has attempted (in the face of that observation) to lift a known technique, namely the residual as a backward error, out of numerical analysis and apply it to perturbation theory. The approach is surprisingly useful and clarifies several issues, namely \begin{itemize} \item BEA allows one to directly use approximations taken from divergent series in an optimal fashion without appealing to ``rules of thumb'' such as stopping before including the smallest term. \item BEA allows the justification of removing spurious secular terms, even when true secular terms are present. \item Not least, residual computation and \emph{a posteriori} BEA make detection of slips, blunders, and bugs all but certain, as illustrated in our examples. \item Finally, BEA interprets the computed solution $z$ as the exact solution to just as good a model.
\end{itemize} In this paper we have used BEA to demonstrate the validity of solutions obtained by the iterative method, by Lindstedt's method, by the method of multiple scales, by the renormalization group method, and by matched asymptotic expansions. We have also successfully used the residual and BEA in many problems not shown here: eigenvalue problems from \cite{Nayfeh(2011)}; an example from \cite{vanDyke(1964)} using the method of strained coordinates; and many more. The examples here have largely been for algebraic equations and for ODEs, but the method was used to good effect in \cite{Corless(2014)} for a PDE system describing heat transfer between concentric cylinders, with a high-order perturbation series in the Rayleigh number. Aside from the amount of computational work required, there is no theoretical obstacle to using the technique for other PDE; indeed the residual of a computed solution $z$ (a perturbation solution, in this paper) to an operator equation $\varphi(y;x)=0$ is usually computable: $\Delta = \varphi(z;x)$, and its size (in our case, the leading term in the expansion in the gauge functions) is easily assessed. It is remarkable to us that the notion, while present here and there in the literature, is not used more to justify the validity of perturbation series. We end with a caution. Of course, BEA is not a panacea. There are problems for which it is not possible. For instance, there may be hidden constraints, something like solvability conditions, that play a crucial role and where the residual tells you nothing. A residual can even be zero; if there are multiple solutions, one needs a way to get the right one. There are things that can go wrong with this backward error approach. First, the final residual computation might not be independent enough from the computation of $z$, and might repeat the same error.
An example is if one correctly solves \begin{align} \ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3+3\ensuremath{\varepsilon}^2\dot{y}=0 \end{align} and verifies that the residual is small, while \textsl{intending} to solve \begin{align} \ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3-3\ensuremath{\varepsilon}^2\dot{y}=0\>, \end{align} i.e., getting the wrong sign on the $\dot{y}$ term both times. Another thing that can go wrong is to have an error in your independent check but not your solution. This happened to us with 183 instead of 138 in subsection \ref{systems}; the discrepancy alerted us that there \textsl{was} a problem, so this at least was noticeable. A third thing that can go wrong is that you verify the residual is small but forget to check the boundary conditions. A fourth thing that can go wrong is that the residual may be small in an absolute sense but still larger than important terms in the equation---the residual may need to be smaller than you expect, in order to get good qualitative results. A fifth thing is that the residual may be small but of the `wrong character', i.e., be unphysical. Perhaps the method has introduced the equivalent of negative damping, for instance. This point can be very subtle. A final point is that a good solution needs not just a small backward error, but also information about the sensitivity (or robustness) of the model to physical perturbations. We have not discussed computation of sensitivity, but we emphasize that even if $\Delta\equiv 0$, you still have to do it, because real situations have real perturbations. Nonetheless, we hope that we have convinced you that BEA can be helpful. \end{document}
\begin{document} \title{Quantum Speed Limits for Observables} \author{Brij Mohan\(^{}\)}\email{[email protected]} \affiliation{\(\)Harish-Chandra Research Institute,\\ A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj 211019, Uttar Pradesh, India } \author{Arun Kumar Pati\(^{}\)}\email{[email protected]} \affiliation{\(\)Harish-Chandra Research Institute,\\ A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj 211019, Uttar Pradesh, India } \begin{abstract} In the Schr{\"o}dinger picture, the state of a quantum system evolves in time and the quantum speed limit describes how fast the state of a quantum system evolves from an initial state to a final state. However, in the Heisenberg picture the observable evolves in time instead of the state vector. Therefore, it is natural to ask how fast an observable evolves in time. This can impose a fundamental bound on the evolution time of the expectation value of quantum mechanical observables. We obtain the quantum speed limit time bound for observables for closed systems, open quantum systems and arbitrary dynamics. Furthermore, we discuss various applications of these bounds. Our results can have several applications ranging from setting the speed limit for operator growth, correlation growth, quantum thermal machines, quantum control and many-body physics. \end{abstract} \maketitle \section{Introduction} Time is one of the fundamental notions in the physical world, and it plays a significant role in almost every existing physical theory. Yet, understanding time has been a challenging task and often it is treated like a parameter. Even though time is not an operator, there is a geometric uncertainty relation between time and energy fluctuation which imposes an inherent limitation on how fast a quantum system can evolve in time. This was first discovered in an attempt to operationalize the time-energy uncertainty relation.
This concept is now known as the quantum speed limit (QSL)~\cite{Mandelstam1945, Anandan1990}. Although the question of how fast a quantum system evolves in time was addressed in Ref.~\cite{Mandelstam1945}, the notion of the speed of transportation of a state vector was formally defined using the Fubini-Study metric in Ref.~\cite{Anandan1990} and using the Riemannian metric in Ref.~\cite{akp91}. Subsequently, an alternative speed limit for quantum state evolution was proved involving the average energy above the ground state of the Hamiltonian~\cite{Margolus1998}. The QSL determines the minimal evolution time of a quantum system. It depends entirely on intrinsic quantities of the evolving quantum system, such as the shortest path connecting the initial and final states of the quantum system and the uncertainty in the Hamiltonian. The QSL bounds were first investigated for the unitary dynamics of pure states~\cite{Mandelstam1945,Anandan1990, Margolus1998,Levitin2009, Gislason1956, Eberly1973, Bauer1978, Bhattacharyya1983, Leubner1985, Vaidman1992, Uhlmann1992, Uffink1993, Pfeifer1995, Horesh1998, AKPati1999, Soderholm1999, Andrecut2004, Gray2005, Luo2005, Zielinski2006, Andrews2007, Yurtsever2010, Fu2010, Zwierz2012, Poggi2013,Ness}. Later, the QSL was studied for the case of unitary dynamics of mixed states~\cite{Kupferman2008, Jones2010, Chau2010, S.Deffner2013, Fung2014, Andersson2014, D.Mondal2016, Mondal2016, S.Deffner2017, Campaioli2018}, unitary dynamics of multipartite systems~\cite{Giovannetti2004, Batle2005, Borras2006, Zander2007} and for more general dynamics \cite{Deffner2013, Campo2013, Taddei2013, Fung2013, Pires2016, S.Deffner2020,Jing2016,Pintos2021,Mohan,Hamazaki}. The study of QSLs is significant for the theoretical understanding of quantum dynamics and is relevant to the development of quantum technologies and devices.
The QSL has applications in several fields, such as quantum computation~\cite{AGN12}, quantum thermodynamics~\cite{Mukhopadhyay2018,Funo2019}, quantum control theory~\cite{Caneva2009,Campbell2017}, quantum metrology~\cite{Campbell2018}, and so on. In quantum physics, during the arbitrary dynamical evolution of a quantum system, we often encounter the situation where the time-evolved state becomes perfectly distinguishable from (orthogonal to) the initial state, while at the same time the expectation value of a given observable does not change or changes at a slower rate. For instance, let us consider a closed quantum system with internal Hamiltonian $\sigma_{z}$ whose initial state $|+\rangle$ evolves to its orthogonal state $|-\rangle$ (up to a phase) under an external Hamiltonian $\sigma_{y}$, where $|+\rangle$ and $|-\rangle$ are the eigenstates of $\sigma_{x}$. In this scenario, the initial state and final state of the system are distinguishable. However, both the initial and final states are energetically indistinguishable, so the evolution time for the average energy of the quantum system is zero. Similarly, if a given quantum system interacts with a pure dephasing environment (dephasing in $\sigma_{z}$), then its state evolves to a decohered state, yet the expectation value of the energy does not change in this process. The above discussion suggests that observables of a system can have different quantum speed limit bounds. In the Schr\"{o}dinger picture, the state vector evolves in time, while in the Heisenberg picture, the observable of the quantum system evolves, and both of these formalisms are equivalent. In quantum mechanics, there is also the interaction picture, where both the state and the observables can change in time. This has important applications in quantum field theory and many-body physics. In this paper, we will use the Heisenberg picture for most of our discussions.
The natural question that then arises is: how fast does an observable evolve in time? Specifically, we will answer the question of how to obtain a lower bound on the evolution time of a quantum observable and thereby define the quantum speed limit for observables. The seemingly technical difference between the Schr{\"o}dinger and Heisenberg pictures becomes rather important in the context of many-body physics. Here, one often cannot describe a state analytically due to its immense computational complexity. However, it is possible to compute expectation values of local observables in an efficient manner. Thus, it is desirable to bound how fast the expectation value corresponding to an observable changes in time in the Heisenberg picture. Another motivation for studying the observable speed limit is its application to understanding operator growth in many-body physics. In the context of complex systems, one of the pressing questions is to understand the universal operator growth hypothesis~\cite{Perales2008}. The observables of a system, which may be represented as operators, tend to grow over time, i.e., they become more complicated as the system evolves in time. If we start with a simple many-body operator at some initial time, then, because of the interaction Hamiltonian, the operator may become complex at a later time. The quantum speed limit for an observable can answer a fundamental question: how fast can a many-body operator become complex? It is important to know the rate of operator growth, and we believe that the quantum speed limit for observables can throw new light on this question. In complex quantum systems and many-body physics, the bound on the commutator of two operators, one being the time-evolved version of an operator with support on some region and the other an operator with support on some other region, plays a major role in the derivation of the Lieb-Robinson bound~\cite{Lieb,Hastings2010}.
The latter proves that the speed of propagation of perturbations in a quantum system is bounded. Physically, this implies that for small times only small amounts of information can propagate from one region of a many-body system to another. While the Lieb-Robinson bound leads to a speed limit for information in quantum systems, the quantum speed limit for observables can answer the question of how fast the commutator changes for observables belonging to two distant regions. This is also important in physical contexts where the underlying dynamics is highly chaotic \cite{gorin}. The growth of the commutator between two operators as a function of their separation in time has been used to quantify the rate of growth of chaos and information scrambling \cite{hay,sus,stan}. Therefore, the quantum speed limit for the commutator can answer the question of how fast a localized perturbation spreads in time in a quantum many-body system. Since the time scale over which scrambling of information occurs is distinct from the relaxation time of a physical system, the observable quantum speed limit for the two-time commutator can play an important role in estimating the scrambling time in complex quantum systems. To answer these fundamental questions, we formally introduce the notion of the quantum speed limit for observables. It is characterized as the maximal evolution speed of the expectation value of a given observable of the quantum system during arbitrary dynamical evolution, which can be unitary or non-unitary. It sets a bound on the minimum evolution time required for a quantum system to evolve between different expectation values of a given observable. We do this for both closed and open quantum dynamics. We illustrate our main results with the ergotropy rate of a quantum battery and the rate of change of probability, which also yields the standard QSL for the state of the system.
Moreover, we also compute the QSL for the two-time correlation of an observable, which is a central quantity in the theory of quantum transport and complex quantum systems. We also apply our bound to obtain a quantum speed limit for the commutator of two observables belonging to two distant regions in a many-body system. Our result can be as important as the Lieb-Robinson bound. \section{Observable Quantum Speed Limit (OQSL)} Here, we show how to obtain a QSL for observables. This answers the question of how fast the expectation value of a given observable changes in time, instead of the state of the quantum system. The observable QSL is defined as the maximum rate of evolution of the expectation value of an observable of a given quantum system during dynamical evolution. It establishes a limit on the minimum evolution time necessary to evolve between different expectation values of a given observable of the quantum system. Here, we will derive the observable QSL for closed and open system dynamics. \subsection{Unitary Dynamics} In this section, we will derive a bound for an observable undergoing unitary dynamics in the Heisenberg picture. Let us consider a closed quantum system whose initial state is $|\psi\rangle \in \cal{H}$ and whose dynamical evolution is dictated by the unitary $U(t)=e^{-iHt/\hbar}$, where $H$ is the time-independent driving Hamiltonian of the quantum system. Here, we want to determine how fast a quantum observable $O$ of the quantum system evolves in time and obtain a lower bound on its minimal evolution time. We know that the Heisenberg equation of motion governs the time evolution of an observable, and it is given by \begin{equation}\label{HB:eqn} i \hbar \frac{d O(t)}{dt}= [O(t), H], \end{equation} where $H$ is the Hamiltonian of the system, $O(t) = U^{\dagger}(t)O(0)U(t)$ and $U^{\dagger}(t)U(t) = I$. Now, we take the average of Eq.~\eqref{HB:eqn} in the state $\ket{\psi}$ and then its absolute value.
On using the Heisenberg-Robertson uncertainty relation $\Delta A \Delta B \geq \frac{1}{2}|\langle[A,B]\rangle|$, where $A$ and $B$ are two incompatible observables~\cite{Robertson1929}, we obtain the following inequality \begin{equation}\label{rate:1} \left| \frac{d\langle O(t)\rangle}{dt}\right|= \frac{1}{\hbar}| \langle[O(t), H]\rangle| \leq \frac{2 \Delta O(t) \Delta H }{\hbar}, \end{equation} where $\Delta O(t) = \sqrt{\langle O(t)^{2} \rangle - \langle O(t)\rangle^2}$ and $\Delta H = \sqrt{\langle H^{2} \rangle - \langle H \rangle^2 }$. The inequality~\eqref{rate:1} is an upper bound on the rate of change of the expectation value of a given observable of the quantum system evolving under unitary dynamics. After integrating the above inequality with respect to time, we obtain the desired bound \begin{equation}\label{Bound:1} T \geq \frac{\hbar}{2\Delta H}\int_{0}^{T}\frac{|d\langle O(t)\rangle|}{ \Delta O(t)}, \end{equation} where we call the quantity on the right-hand side, $T^{O}_{QSL}=\frac{\hbar}{2\Delta H}\int_{0}^{T}\frac{|d\langle O(t)\rangle|}{ \Delta O(t)}$, the quantum speed limit time for the observable (OQSL), i.e., $T\geq T^{O}_{QSL}$. If an observable $O$ satisfies the condition $O^{2}=I$, i.e., for a self-inverse observable, the above inequality can be expressed as \begin{equation}\label{Bound:12} T \geq \frac{\hbar}{2\Delta H}|\arcsin{\langle O(T) \rangle}-\arcsin{\langle O(0) \rangle}|. \end{equation} Here, we illustrate the tightness of the above speed limit for the observable with a simple example. Consider a qubit in a pure state $|\psi\rangle = \alpha|0\rangle + \sqrt{1-\alpha^2}|1\rangle$ $(0 \leq \alpha \leq 1)$ which does not evolve in time. The Hamiltonian of the system is given by $H = \hat{m}\cdot\vec{\sigma}$, where $\hat{m}$ is a unit vector.
In the Heisenberg picture, the observable evolves in time, and we would like to evaluate the minimum evolution time of the expectation value of the observable $O(0)=\hat{n}\cdot\vec{\sigma}$, where $\hat{n}$ is a unit vector, in the state $|\psi\rangle$. We can calculate the following quantities: $\Delta H=1$, $\langle O(0) \rangle=1$ and $\langle O(T) \rangle=-1$, where we assume $\hat{n}= (1,0,0)$, $\hat{m}=(0,0,1)$, $\alpha=\frac{1}{\sqrt{2}}$ and $T = \frac{\pi}{2}$. Using Eq.~\eqref{Bound:12}, we obtain $T_{QSL}^{O}= \frac{\pi}{2} =T$. Hence, the bound given by Eq.~\eqref{Bound:12} is indeed tight. This simple example illustrates the usefulness of the OQSL. The bound given in \eqref{Bound:1} may be thought of as the analog of the MT bound for observables. \textbf{\textit{Quantum Speed Limit for States:}} Here, we discuss how the QSL for observables is connected to the standard QSL for the state of a quantum system. We will show that the standard speed limit for the state may be viewed as a special case of the observable speed limit when the observable is chosen to be a projector onto the initial state. Let us consider a quantum system prepared in the initial state $|\psi\rangle=\sum_{i}a_{i}|i\rangle$. If we choose our observable to be a projector, i.e., $O(0)=P$, then the probability of finding the quantum system in the state $|i\rangle$ at $t=0$ is $p(0)=|a_{i}|^2$ (if we measure the projector $P=|i\rangle \langle i|$). Here, we want to obtain the speed limit for the projector for a unitarily evolving quantum system, i.e., how fast the probability of finding the quantum system in the state $|i\rangle$ changes in time.
From \eqref{Bound:1}, we then obtain the following inequality \begin{equation*} T \geq \frac{\hbar}{2\Delta H}\int_{0}^{T}\frac{|d \langle P(t) \rangle|}{ \sqrt{\langle P(t) \rangle (1-\langle P(t) \rangle)} }, \end{equation*} where $P(t)= U^{\dagger}(t)P(0)U(t)$ and $\langle P(t) \rangle = p(t)$ is the probability of finding the quantum system in the state $|i\rangle$ at a later time. Performing the integral yields \begin{equation} T \geq \frac{\hbar}{\Delta H}|\arcsin{ (\sqrt{p(T)} )}-\arcsin{ (\sqrt{p(0)} )}|. \end{equation} If $p(0) = 1$, i.e., $|\psi\rangle=|i\rangle$, then the above inequality yields the well-known Mandelstam-Tamm bound of the QSL for state evolution \begin{equation}\label{SQSL} T \geq \frac{\hbar}{\Delta H}\arccos{ (\sqrt{p(T)} )}. \end{equation} This is the usual QSL obtained by Mandelstam-Tamm~\cite{Mandelstam1945} and Anandan-Aharonov~\cite{Anandan1990}. Thus, the observable QSL also leads to the standard QSL for the state change. In this sense, our approach also unifies the existing QSLs. Note that this is an expression for the survival probability, which is related to fidelity decay, an important quantity in quantum chaos~\cite{Herrera2014}. Since the bound \eqref{Bound:1} is relatively hard to compute, one can derive an alternative bound for an arbitrary initial state, pure or mixed, which is easier to compute (see Appendix \ref{app:B}): \begin{equation}\label{Bound:2} T \geq \frac{\hbar}{2\sqrt{{\rm tr}(\rho^2)}} \frac{|\langle O(T) \rangle - \langle O(0)\rangle|}{\norm{O(0)H}_{\rm hs}}, \end{equation} where ${\rm tr}(\rho^2)$ is the purity of the initial state, $\norm{A}_{\rm hs}=\sqrt{{\rm tr}{(A^{\dagger}A)}}$ is the Hilbert-Schmidt norm of an operator $A$, and the right-hand side of the last equation is defined as $T^{O}_{QSL}$. The bound \eqref{Bound:2} suggests that if the initial state of the system is mixed, then the observable evolves more slowly (the OQSL depends on the initial state).
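The projector special case can also be checked numerically. The sketch below (our own illustration, not from the paper; it assumes $H=\sigma_x$, $|\psi\rangle=|0\rangle$ and $\hbar=1$) computes the survival probability in the Heisenberg picture and the Mandelstam-Tamm bound, which is saturated for this Rabi-type evolution when $T\leq\pi/2$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)          # |psi> = |0>
P = np.outer(psi, psi.conj())                  # projector onto the initial state
H, T = sx, 1.0                                  # hbar = 1

U = expm(-1j * H * T)
# survival probability p(T) = <psi| P(T) |psi> with P(T) = U^dag P U
pT = (psi.conj() @ (U.conj().T @ P @ U) @ psi).real
dH = np.sqrt((psi.conj() @ H @ H @ psi).real - (psi.conj() @ H @ psi).real ** 2)

# Mandelstam-Tamm bound recovered from the observable QSL
T_mt = np.arccos(np.clip(np.sqrt(pT), 0, 1)) / dH
print(pT, T_mt)    # p(T) = cos^2(1) ~ 0.2919, and T_mt = 1.0 = T (tight)
```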
However, the bound \eqref{Bound:2} may not always be as tight as the bound \eqref{Bound:1}. The above result can have an interesting application in physical systems where one tries to estimate approximately conserved quantities. We know that if $O$ is the generator of a symmetry operation that acts on the physical system and it commutes with the Hamiltonian, then it is conserved. Suppose that $O$ does not commute with the Hamiltonian. Then, we know that the observable will not be conserved. However, \eqref{Bound:2} suggests how much $ \langle O(T) \rangle$ can differ from $ \langle O(0) \rangle$ over a time interval $T$. For a pure state, this difference is upper bounded by $\frac{2T}{\hbar} \norm{O(0)H}_{\rm hs}$. Next, we obtain another QSL for observables. Let us consider a quantum system with a pure state $\rho=|\psi\rangle \langle \psi|$. The time evolution of the expectation value of any system observable $O$ is given by \begin{equation} \langle O(t) \rangle={\rm tr}[U^{\dagger}(t)O(0)U(t)\rho]. \end{equation} To find the rate of change of the expectation value of the observable, we differentiate the above equation with respect to time. Taking the absolute value of the rate equation and applying the triangle inequality and the H\"older inequality~\cite{Rogers1888,Holder1889,Bhatia1997}, we obtain the following inequality \begin{equation}\label{rate:2} \left|\frac{ d\langle O(t) \rangle}{dt}\right| \leq\frac{2}{\hbar}\norm{HO(t)}_{\rm op}. \end{equation} The inequality~\eqref{rate:2} provides an upper bound on the rate of change of the expectation value of a given observable of the quantum system evolving under unitary dynamics.
Since the operator norm, the Hilbert-Schmidt norm and the trace norm of an operator satisfy the inequality $\norm{A}_{\rm op} \leq \norm{A}_{\rm hs} \leq \norm{A}_{\rm tr}$, and the operator norm is unitarily invariant, $\norm {U^{\dagger}AU}_{\rm op} = \norm {A}_{\rm op}$, we can obtain the following bound (see Appendix \ref{app:C}) \begin{equation}\label{Bound:3} T \geq \frac{\hbar}{2} \frac{|\langle O(T) \rangle - \langle O(0)\rangle| }{\min\{\norm{O(0)H}_{\rm op},\norm{O(0)H}_{\rm tr}\}}, \end{equation} where $T^{O}_{QSL}= \frac{\hbar}{2} \frac{|\langle O(T) \rangle - \langle O(0)\rangle| }{\min\{\norm{O(0)H}_{\rm op},\norm{O(0)H}_{\rm tr}\}}$ is the quantum speed limit time for the observable (OQSL). One should consider the maximum of \eqref{Bound:1}, \eqref{Bound:2} and \eqref{Bound:3} to obtain the tightest bound. For unitary evolution, the bounds \eqref{Bound:1}, \eqref{Bound:2} and \eqref{Bound:3} determine how fast the expectation value of an observable of the quantum system changes in time. If a given observable's initial and final expectation values coincide under the unitary evolution, then the OQSL is zero. Since the minimal evolution time for state evolution need not be zero in this scenario, this is a major difference between the OQSL bounds and the standard QSL bounds (MT and ML bounds) for unitary dynamics. \subsection{Arbitrary Dynamics} In general, for an arbitrary observable $O$ one can obtain the following inequality using the triangle inequality for the absolute value and the H\"{o}lder inequality (see Appendix \ref{app:D}) \begin{equation} |\langle O \rangle_{\rho} - \langle O \rangle_{\sigma}|\leq 2\norm{O}_{\rm op}l(\rho,\sigma), \end{equation} where $l(\rho,\sigma) = \frac{1}{2} {\rm tr }\lvert \rho -\sigma \rvert$ is the trace distance between the states $\rho$ and $\sigma$.
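The norm ordering and the resulting bound can be sanity-checked on a randomly generated instance. The snippet below (an illustrative sketch of ours, assuming $\hbar=1$; the seed and dimensions are arbitrary) verifies both the chain $\norm{A}_{\rm op}\leq\norm{A}_{\rm hs}\leq\norm{A}_{\rm tr}$ and the norm-based speed limit:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, T = 2, 1.5
H, O0 = rand_herm(d), rand_herm(d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

U = expm(-1j * H * T)
OT = U.conj().T @ O0 @ U
dO = abs((psi.conj() @ OT @ psi).real - (psi.conj() @ O0 @ psi).real)

M = O0 @ H
op_n = np.linalg.norm(M, 2)                        # operator (spectral) norm
hs_n = np.linalg.norm(M, 'fro')                    # Hilbert-Schmidt norm
tr_n = np.sum(np.linalg.svd(M, compute_uv=False))  # trace (nuclear) norm
assert op_n <= hs_n + 1e-12 <= tr_n + 1e-12        # ||.||_op <= ||.||_hs <= ||.||_tr

T_qsl = 0.5 * dO / min(op_n, tr_n)                 # norm-based OQSL, hbar = 1
print(T_qsl <= T)                                  # the bound holds: True
```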
With the help of the above inequality, we can define a distance that captures the change in the expectation value of an observable during arbitrary dynamical evolution (in general, evolution governed by the master equation $\dot{O}(t)=L_{t}^{\dagger}(O(t))$, where ${L}^{\dagger}_{t}$ is the adjoint of the Liouvillian super-operator, and which can be unitary or non-unitary). It is given by \begin{equation}\label{OD} \cal{D}(O(t),O(0)) = \frac{|{\rm tr}[(O(t) - O(0))\rho]|}{2\norm{O(0)}_{\rm op}}, \end{equation} where $\norm{O(0)}_{\rm op}$ is a re-scaling factor, because the spectral gap of the observable can be arbitrarily large. Using Eq.~\eqref{OD}, we can obtain the desired QSL bound on the evolution time of the expectation value of an observable for arbitrary dynamics: \begin{equation}\label{arbitrary} T \geq \frac{|\langle O(T) \rangle - \langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho ^2)}\Lambda_{T}}, \end{equation} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt{\norm{{L_{t}^{\dagger}({O}(t))}}_{\rm hs}}$ is the evolution speed of the observable $O$ and $T^{O}_{QSL}=\frac{|\langle O(T) \rangle - \langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho ^2)}\Lambda_{T}}$ is the quantum speed limit time for the observable (OQSL). Details of the derivation are provided in Appendix \ref{app:E}. For arbitrary dynamics, the bound \eqref{arbitrary} determines how fast the expectation value of an observable of the quantum system changes in time. The derived bound \eqref{arbitrary} implies that the OQSL depends on the purity of the initial state of the quantum system as well as on the evolution speed of the observable. \subsection{Lindblad Dynamics} Let us consider a quantum system (S) that is interacting with its environment (E).
The total Hilbert space of the combined system is $\cal{H}_{S}$ $\otimes$ $\cal{H}_{E}$, and we assume the initial state of the combined system is represented by the separable density matrix $\rho_{SE}(0)$ $=$ $\rho$ $\otimes$ $\sigma$, where the quantum system's initial state $\rho$ can be pure or mixed and $\sigma$ is the state of the environment. The Lindbladian ($\mathcal{L}$) governs the reduced dynamics of the quantum system (S). Here, we aim to determine how fast the expectation value of a given observable $O$ of the reduced quantum system (S) evolves in time and to obtain a lower bound on its minimal evolution time. Let the quantum system have an internal Hamiltonian $H_{S}$. If the system interacts with its surroundings, then the dynamics of a given observable of the quantum system is governed by the Lindblad master equation in the Heisenberg picture. Hence, the expectation value of the observable $O$ belonging to the system follows the dynamics \begin{equation} \langle O(t)\rangle ={\rm tr}(O(0)\Phi_{t}(\rho)) = {\rm tr}(\Phi^{\dagger}_{t}(O(0))\rho), \end{equation} where $\Phi_{t}$ is the dynamical map, $O(t)= \Phi^{\dagger}_{t}(O(0))=e^{\mathcal{L}^{\dagger}t}O(0)$ and ${\mathcal{L}^{\dagger}}$ is the adjoint of the Lindbladian. The time evolution of an observable $O$ is given by the following equation \begin{equation}\label{Lind} \frac{d O(t)}{dt} = {\mathcal{L}^{\dagger}}[O(t)], \end{equation} where $\mathcal{L}^{\dagger}[O(t)]=\frac{i}{\hbar}[H_{S},O(t)] + D[O(t)]$, $D[O(t)]=\sum_{k}\gamma_{k}(t)(L_{k}^{\dagger}O(t)L_{k}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},O(t)\})$ and the $L_{k}$'s are the jump operators of the system. Let us take the average of Eq.~\eqref{Lind} in the state $\rho$ and then its absolute value.
By applying the Cauchy-Schwarz inequality, we obtain the following inequality \begin{equation}\label{rate:4} \left|\frac{d\langle O(t) \rangle}{dt}\right| \leq \sqrt{{\rm tr}(\rho^{2})}\norm{{\mathcal{L}^{\dagger}}[O(t)]}_{\rm hs}, \end{equation} where $\norm{A}_{\rm hs} = \sqrt{{\rm tr}(A^{\dagger}A)}$ is the Hilbert-Schmidt norm of an operator $A$. The inequality~\eqref{rate:4} is an upper bound on the rate of change of the expectation value of a given observable of the quantum system evolving under Lindblad dynamics. After integrating the above inequality, we obtain the following bound \begin{equation}\label{Bound:open1} T \geq \frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}, \end{equation} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt\norm{{\mathcal{L}^{\dagger}}[O(t)]}_{\rm hs}$ is the evolution speed of the observable $O$ and $T^{O}_{QSL}= \frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}$ is the quantum speed limit time for the observable (OQSL). For Lindblad dynamics, the bound given in \eqref{Bound:open1} determines how fast the expectation value of an observable of the quantum system changes in time. The obtained bound \eqref{Bound:open1} shows that the OQSL depends on the purity of the initial state of the evolving quantum system. Note that the OQSL time is zero if the expectation value of a given observable does not change during the dynamics. \textbf{\textit{Comparison between QSL and OQSL for pure dephasing dynamics:}} Let us consider the QSL bound for open quantum systems governed by a Lindblad quantum master equation.
For Markovian dynamics of an open quantum system expressed via a Lindbladian ($\mathcal{L}$), the following lower bound on the evolution time needed for a quantum system to evolve from an initial state $\rho_0$ to a final state $\rho_T$ was given in Ref.~\cite{Campo2013}: \begin{equation}\label{qslbound:1} T \geq \frac{|\cos{\theta}-1|{\rm tr}[\rho_0^2]}{\sqrt{{\rm tr}[\mathcal{L}^\dagger(\rho_0)]^2}}, \end{equation} where $\theta=\cos^{-1}(\frac{{\rm tr}[\rho_0 \rho_T]}{{\rm tr}[\rho_0^2]})$ is expressed in terms of the relative purity between the initial and the final state. Let us consider a two-level quantum system with the ground state $|1\rangle\langle 1|$ and the excited state $|0\rangle\langle 0|$ interacting with a dephasing bath. The corresponding dephasing Lindblad or jump operator of the system is given by $L_{0} = \sqrt{\frac{\gamma}{2}} \sigma_{z}$, where $\sigma_{z}$ is the Pauli-$Z$ operator and $\gamma$ is a real parameter denoting the strength of dephasing. The Lindblad master equation~\cite{Lindblad1976} governs the time evolution of the two-level quantum system, and it is given by \begin{equation} \frac{{ d} \rho_t}{{ d} t} = \cal{L}(\rho_t)= \frac{\gamma}{2}(\sigma_{z}\rho_t\sigma_{z}-\rho_t). \end{equation} If the quantum system is initially in the state $\rho_{0}=|+\rangle \langle +|$, where $|+\rangle = {\frac{1}{\sqrt{2}}}(|0\rangle + |1\rangle)$, then the solution of the Lindblad equation is given by \begin{align} \rho_t=\frac{1}{2} (|0\rangle \langle 0| + |1\rangle \langle 1| + e^{-\gamma t} (|1\rangle \langle 0|+ |0\rangle \langle 1|)).
\end{align} If the given observable is $O(0) = \sigma_{x}$, then the solution of~\eqref{Lind} for the dephasing dynamics is given by \begin{equation} O(t)= e^{- \gamma t} \sigma_{x}. \end{equation} To estimate the bounds \eqref{Bound:open1} and \eqref{qslbound:1}, we require the following quantities: \begin{equation} {\rm tr}[\rho_0^2]=1, \end{equation} \begin{equation} \cos{\theta} =\frac{1+e^{-\gamma t}}{2}, \end{equation} \begin{equation} {\rm tr}[\mathcal{L}^\dagger(\rho_0)]^2=\frac{\gamma ^2}{2}, \end{equation} \begin{equation} \langle O(0) \rangle = {\rm tr}[\sigma_{x}\rho_0]=1, \end{equation} \begin{equation} \langle O(t) \rangle = {\rm tr}[e^{-\gamma t}\sigma_{x}\rho_0]=e^{-\gamma t}, \end{equation} \begin{equation} \norm{{\mathcal{L}^{\dagger}}[O(t)]}_{\rm hs}=\sqrt{2}\gamma e^{- \gamma t}. \end{equation} \begin{figure} \centering \includegraphics[width=9cm]{figure1.pdf} \caption{Here we depict $T_{QSL/OQSL}$ vs $T$ for pure dephasing dynamics with $\gamma=1$.} \label{fig} \end{figure} In Fig.~\ref{fig}, we plot $T_{QSL/OQSL}$ vs $T \in [0, \frac{\pi}{2}]$ for pure dephasing dynamics with $\gamma=1$. Fig.~\ref{fig} shows that our OQSL bound \eqref{Bound:open1} is tighter than the QSL bound \eqref{qslbound:1} for the pure dephasing process. Both bounds (our bound \eqref{Bound:open1} and the bound \eqref{qslbound:1}) are obtained by employing the Cauchy-Schwarz inequality. Therefore, one might expect both bounds to be equally tight, but this is not the case. It turns out that the bound \eqref{qslbound:1} is loose. This happens because, while deriving the bound \eqref{qslbound:1} in Ref.~\cite{Campo2013}, the authors use an additional inequality along with the Cauchy-Schwarz inequality, namely ${\rm tr}(\rho_t^2) \leq 1$ (see Eq.~(7) of Ref.~\cite{Campo2013}). They did this to obtain a time-independent bound on the rate of change of the purity.
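Since all the quantities above have closed forms, the comparison shown in the figure can be reproduced directly. The following sketch (ours, not from the paper; it assumes $\hbar=1$ and evaluates both bounds at $\gamma=1$, $T=1$) uses the closed-form time average of $\norm{\mathcal{L}^{\dagger}[O(t)]}_{\rm hs}$:

```python
import numpy as np

gamma, T = 1.0, 1.0   # dephasing strength and evolution time (hbar = 1)

# <O(t)> = e^{-gamma t} for O(0) = sigma_x and rho_0 = |+><+| (purity 1)
dO = abs(np.exp(-gamma * T) - 1.0)

# OQSL: ||L^dag[O(t)]||_hs = sqrt(2) gamma e^{-gamma t}, so the time average is
# Lambda_T = sqrt(2) (1 - e^{-gamma T}) / T
Lam = np.sqrt(2) * (1 - np.exp(-gamma * T)) / T
T_oqsl = dO / Lam                                   # simplifies to T / sqrt(2)

# State QSL of Ref. [Campo2013], using tr[L(rho_0)^2] = gamma^2 / 2
cos_th = (1 + np.exp(-gamma * T)) / 2
T_qsl = abs(cos_th - 1) / np.sqrt(gamma**2 / 2)     # = (1 - e^{-gamma T}) / (sqrt(2) gamma)

print(T_oqsl, T_qsl)   # ~0.7071 vs ~0.4470: the observable bound is tighter
```

Note that the observable bound simplifies to $T/\sqrt{2}$ for all $T$, while the state bound saturates at $1/(\sqrt{2}\gamma)$, which is why the gap in the figure widens with $T$.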
\subsection{Dynamical Map} We can also express the QSL for the observable using the Kraus operator evolution. Let a given quantum system have initial state $\rho$, and let its dynamical evolution be governed by a CPTP map ($\mathcal{E}$) described by a set of Kraus operators $\{K_{i}(t)\}$ with $\sum_{i}K^{\dagger}_{i}(t)K_{i}(t)=\mathcal{I}_{S}$. The dynamics of the observable $O$ in the Heisenberg picture is described by \begin{equation}\label{OE} O(t) =\sum_{i}K^{\dagger}_{i}(t)O(0)K_{i}(t). \end{equation} Using Eq.~\eqref{OE}, we obtain the QSL for the observable as \begin{equation}\label{Bound:open2} T \geq \frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{2\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}, \end{equation} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt\norm{\sum_{i}{K}^{\dagger}_{i}(t)O(0)\dot{K}_{i}(t)}_{\rm hs}$ is the evolution speed of the observable $O$ and $T^{O}_{QSL}= \frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{2\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}$ is the quantum speed limit time for the observable (OQSL). Details of the derivation are provided in Appendix \ref{app:F}. For dynamics described by a dynamical map, the bound \eqref{Bound:open2} determines how fast the expectation value of an observable of the quantum system changes in time. According to the obtained bound \eqref{Bound:open2}, the OQSL depends on the purity of the initial state of the evolving quantum system and on the speed of the observable evolution. \subsection{State independent QSL for observables} The observable QSL bounds proved in the previous sections are state-dependent. One may be curious to know whether we can prove some state-independent bounds, i.e., whether we can derive bounds given merely in terms of properties of the observables themselves. Here, we make an attempt to formulate a bound without optimising over states.
To derive a state-independent speed limit for observables, consider the Hilbert-Schmidt inner product for observables. The Hilbert-Schmidt inner product of two observables $O(0)$ and $O(t)$ is defined as \begin{equation} \langle O(0), O(t) \rangle = {\rm tr}[O(0)O(t)], \end{equation} where $O(t)=e^{{L}^{\dagger}t}O(0)$ (${{L}^{\dagger}}$ is the adjoint of the Liouvillian super-operator). Differentiating the above equation with respect to time, we obtain \begin{equation} \frac{d}{dt}\langle O(0), O(t) \rangle = {\rm tr}[ O(0)\dot{O}(t) ] ={\rm tr}[O(0)L^{\dagger}({O}(t))]. \end{equation} Let us take the absolute value of the above equation. Then, by applying the Cauchy-Schwarz inequality, we obtain the following inequality \begin{equation} \left|\frac{d}{dt}\langle O(0), O(t) \rangle\right| \leq \norm{O(0)}_{\rm hs}\norm{L^{\dagger}(O(t))}_{\rm hs}. \end{equation} After integrating the above inequality, we obtain the following bound \begin{equation}\label{StateINQSL} T \geq \frac{ |\langle O(0), O(T) \rangle - \langle O(0), O(0) \rangle|}{ \norm{O(0)}_{\rm hs}\Lambda_{T}}, \end{equation} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt\norm{{{L}^{\dagger}}[O(t)]}_{\rm hs}$ is the evolution speed of the observable $O$ and $T^{O}_{QSL}= \frac{ |\langle O(0), O(T) \rangle - \langle O(0), O(0) \rangle|}{ \norm{O(0)}_{\rm hs}\Lambda_{T}}$ is the quantum speed limit time for the observable (OQSL). The bound \eqref{StateINQSL} is independent of the state of the quantum system and is applicable to arbitrary dynamics, unitary or non-unitary. In the future, it will be worth exploring whether it is possible to obtain such bounds using a different approach, for example one involving the separation between the extreme eigenvalues of the operators or optimisation over possible initial states.
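For unitary dynamics the adjoint generator is $L^{\dagger}(O)=\frac{i}{\hbar}[H,O]$, so the state-independent bound can be checked without ever specifying a state. The snippet below (our own sketch; it assumes a qubit with $H=\sigma_z$, $O(0)=\sigma_x$ and $\hbar=1$, and approximates the time average $\Lambda_T$ on a uniform grid):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H, O0, T = sz, sx, 1.2                             # hbar = 1

def O_t(t):
    U = expm(-1j * H * t)
    return U.conj().T @ O0 @ U

def hs(A):
    return np.linalg.norm(A, 'fro')                # Hilbert-Schmidt norm

# Evolution speed Lambda_T = (1/T) int_0^T || i[H, O(t)] ||_hs dt
ts = np.linspace(0, T, 401)
speeds = [hs(1j * (H @ O_t(t) - O_t(t) @ H)) for t in ts]
Lam = np.mean(speeds)       # grid average ~ time average (here it is constant)

num = abs(np.trace(O0 @ O_t(T)) - np.trace(O0 @ O0))
T_si = num / (hs(O0) * Lam)
print(T_si <= T + 1e-6)     # the state-independent bound holds: True
```

For this example $\Lambda_T = 2\sqrt{2}$ exactly, since $\norm{i[H,O(t)]}_{\rm hs}$ is constant in time.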
A state-independent bound has its own merit, as it can be understood as representing a best-case scenario in which the time required to modify the expectation value of a given observable is the shortest possible when optimizing over all possible states. These kinds of bounds may find applications in the context of quantum metrology, where we optimize over the probe states in order to obtain the ``fastest change'' of the state from the point of view of certain parameters. \section{Applications} In this section, we illustrate the usefulness of the observable QSL for quantum batteries, the growth of the two-time correlation function and the connection to the Lieb-Robinson bound. \subsection{Quantum batteries} A {\it quantum battery} is a microscopic energy storage device introduced by R. Alicki and M. Fannes \cite{Alicki2013}. Several theoretical works have been done to strengthen this novel idea of the quantum battery and enhance its non-classical features \cite{Binder2015,Campaioli2017,Ferraro2018,F.Campaioli2018,Julia2020,Andolina2019,G.Andolina2019,Andolina2018,Le2018,S.Ghosh2020,Luis2020,Crescente2020,S.Gherardini2020,A.C.Santos2019,Mohan2021}. Quantum batteries can outperform classical batteries because of several quantum advantages. Here, our main aim is to obtain a minimal unitary charging time of the quantum battery using the observable QSL. A quantum battery consists of many quantum systems with several degrees of freedom in which we can deposit work or from which we can extract work. Let us consider a battery with Hamiltonian $H_{B}$ that is charged by a field $H_{C}$. The total Hamiltonian of the quantum battery is described by \begin{equation} H_{T}= H_{B} + H_{C}. \end{equation} The amount of energy extractable from the quantum system by unitary operations is termed the ergotropy of the quantum battery \cite{F.Campaioli2018}.
It is given by \begin{equation}\label{ergotropy} \varepsilon(t) = \langle \psi(t)|H_{B}| \psi(t) \rangle - \langle \psi(0)|H_{B}| \psi(0) \rangle, \end{equation} where $|\psi(0)\rangle$ and $|\psi(t)\rangle$ are the initial and final states of the quantum battery during charging. Note that the above expression holds in the Schr\"odinger picture. In the Heisenberg picture the ergotropy can be rewritten as \begin{equation*} \varepsilon(t) = \langle \psi(0)|(H_{B}(t)-H_{B}(0))| \psi(0) \rangle, \end{equation*} where $H_{B}(t)$ = $ e^{iHt/\hbar} H_{B}(0)e ^{-iHt/\hbar}$ and $H_{B}(0)$ = $H_{B}$. The rate of change of the ergotropy of the quantum battery during the charging process can be obtained by differentiating the above equation with respect to time, which gives \begin{equation*} \frac{ d \varepsilon(t)}{dt} =\frac{d}{dt}\langle \psi(0)|H_{B}(t)| \psi(0) \rangle. \end{equation*} Using our bound we can write the QSL for ergotropy as \begin{equation}\label{CT1} T \geq \frac{\hbar}{2\Delta H_{T}}\int_{0}^{T}\frac{|d\langle H_{B}(t)-H_{B}(0) \rangle|}{ \Delta H_{B}(t)}, \end{equation} where $T$ is the charging time period of the quantum battery. An alternative unified bound can be obtained by using bounds \eqref{Bound:2} and \eqref{Bound:3}, \begin{equation}\label{CT2} T \geq \frac{\hbar}{2} \frac{|\langle H_{B}(T) \rangle - \langle H_{B}(0)\rangle| }{\min\{\norm{\bullet}_{\rm op},\norm{\bullet}_{\rm hs},\norm{\bullet}_{\rm tr}\}}, \end{equation} where $\bullet$ stands for ${H_{B}(0)H_{T}}$ and the operator norm, the Hilbert-Schmidt norm and the trace norm of an operator satisfy the inequality $\norm{A}_{\rm op} \leq \norm{A}_{\rm hs} \leq \norm{A}_{\rm tr} $.
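The unified bound \eqref{CT2} can be checked numerically. The following Python sketch does so for a qubit battery with $H_{B}=\sigma_z$ charged by $H_{C}=\sigma_x$ (illustrative assumed choices, $\hbar=1$), starting from the empty state $|1\rangle$:

```python
import numpy as np

# Assumed qubit-battery example (hbar = 1), not taken from the paper
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

HB, HC = sz, sx                  # battery and charging Hamiltonians
HT = HB + HC                     # total Hamiltonian
E, V = np.linalg.eigh(HT)

psi0 = np.array([0.0, 1.0], dtype=complex)   # start in the empty state |1>
T = 0.5                                      # chosen charging time
U = V @ np.diag(np.exp(-1j * E * T)) @ V.conj().T
psiT = U @ psi0

mean = lambda A, psi: (psi.conj() @ A @ psi).real
delta = abs(mean(HB, psiT) - mean(HB, psi0))  # |<H_B(T)> - <H_B(0)>|

M = HB @ HT                      # the operator denoted by the bullet: H_B(0) H_T
norms = [np.linalg.norm(M, 2),                        # operator (spectral) norm
         np.linalg.norm(M, 'fro'),                    # Hilbert-Schmidt norm
         np.sum(np.linalg.svd(M, compute_uv=False))]  # trace (nuclear) norm
T_qsl = 0.5 * delta / min(norms)
assert T_qsl <= T                # the charging time obeys the bound (CT2)
```

Here $\sigma_z H_T = I + i\sigma_y$ has both singular values equal to $\sqrt{2}$, so the minimum in the denominator is the operator norm $\sqrt{2}$.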
While previously obtained bounds~\cite{Campaioli2017,F.Campaioli2018} on the charging time of a quantum battery are based on the distinguishability of the initial and final state vectors of the battery, the bounds presented in this section are based on the difference between the initial and final ergotropy of the battery. The bounds obtained here can outperform the previously obtained ones, especially when the battery has degenerate energy levels. For example, consider a qubit quantum battery with Hamiltonian $H_{B}=\sigma_{z}$. Suppose the battery is initially in the state $|\phi^{+}\rangle= a|0\rangle +b |1\rangle$ (which has nonzero ergotropy) and, by applying some charging field $H_{C}(t)$, reaches the final state $|\phi^{-}\rangle= a|0\rangle -b |1\rangle$. In this process we neither extracted work from nor stored work in the quantum battery, because the initial and final states have the same ergotropy according to Eq \eqref{ergotropy}. Note that if we calculate the charging time according to the standard QSL \eqref{SQSL} or the bounds presented in~\cite{Campaioli2017,F.Campaioli2018}, we obtain a nonzero minimal charging time, but according to our bounds \eqref{CT1} and \eqref{CT2} the minimal charging time is zero. This happens because the standard QSL and the bounds of~\cite{Campaioli2017,F.Campaioli2018} are based on the notion of state distinguishability, while our bounds \eqref{CT1} and \eqref{CT2} depend on the change in the ergotropy. Therefore, our bounds \eqref{CT1} and \eqref{CT2} yield the correct minimal charging time. \subsection{Transport properties} A crucial quantity in the theory of quantum transport in many-body physics is the two-time correlation function of an observable. This section aims to obtain a speed limit for the two-time correlation between an observable and its time-evolved counterpart.
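Before the derivation, here is a numeric sanity check (illustrative assumed choices, $\hbar=1$) of the closed-dynamics speed limit on the two-time correlation $C(t)=\langle A(t)A(0)\rangle-\langle A(t)\rangle\langle A(0)\rangle$ derived in this section, for a single qubit in a fixed pure state:

```python
import numpy as np

# Assumed single-qubit example (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H, A0 = sx, sz                                   # assumed Hamiltonian and observable
th = np.pi / 8
psi = np.array([np.cos(th), np.sin(th)], dtype=complex)  # fixed pure state
E, V = np.linalg.eigh(H)

def A(t):
    """Heisenberg evolution A(t) = U^dag(t) A(0) U(t)."""
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U.conj().T @ A0 @ U

def C(t):
    """Two-time correlation C(t) = <A(t)A(0)> - <A(t)><A(0)> (complex in general)."""
    ex = lambda M: psi.conj() @ M @ psi
    return ex(A(t) @ A0) - ex(A(t)) * ex(A0)

T = 1.0
ts = np.linspace(0.0, T, 2001)
# time-averaged operator norm of [H, A(t)] (trapezoid rule)
comm_norms = np.array([np.linalg.norm(H @ A(t) - A(t) @ H, 2) for t in ts])
avg = np.sum((comm_norms[:-1] + comm_norms[1:]) / 2) * (ts[1] - ts[0]) / T

T_qsl = 0.5 * abs(C(T) - C(0.0)) / (np.linalg.norm(A0, 2) * avg)
assert T_qsl <= T                                # bound on the correlation growth
```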
For an arbitrary pure quantum state $\rho$, we can define the two-time correlation function between the observables $A(t)$ and $A(0)$ as \begin{equation} C(t) = \langle A(t)A(0) \rangle - \langle A(t) \rangle \langle A(0) \rangle. \end{equation} For closed dynamics $A(t)$ = $U^{\dagger}(t)A(0)U(t)$ ($U(t)=e^{-iHt/\hbar}$), while for open dynamics $A(t)=e^{\mathcal{L}^{\dagger}t}A(0)$ (${\mathcal{L}^{\dagger}}$ is the adjoint of the Lindbladian). We can derive the following speed limit bound on the two-time correlation function for closed dynamics \begin{equation}\label{QSLTTU} T \geq \frac{\hbar}{2} \frac{ |C(T) - C(0)|}{ \norm{ A(0)}_{\rm op} \frac{1}{T}\int_{0}^{T}dt\norm{[H,A(t)] }_{\rm op}}. \end{equation} Similarly, we can derive the following speed limit bound on the two-time correlation function for open dynamics \begin{equation}\label{QSLTTO} T \geq \frac{\hbar}{2} \frac{ |C(T) - C(0)|}{ \norm{ A(0)}_{\rm op} \frac{1}{T}\int_{0}^{T}dt\norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}}. \end{equation} Details of the derivations of bounds \eqref{QSLTTU} and \eqref{QSLTTO} are provided in Appendices \ref{app:G} and \ref{app:H}. \subsection{Relation to Lieb-Robinson bound} The Lieb-Robinson bound~\cite{Lieb,Hastings2010} provides a speed limit for the propagation of information about a perturbation. It gives an upper bound on the operator norm of the commutator of $A(t)$ and $B$, where $A$ and $B$ are spatially separated operators of a many-body quantum system. This bound implies that even non-relativistic quantum dynamics has a locality structure analogous to the finiteness of the speed of light in relativistic theory. This section aims to derive a distinct speed limit bound for the commutator of $A(t)$ and $B$, i.e., for how fast the commutator changes in the Heisenberg picture.
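The derivative inequality underlying the commutator speed limit, $\left|\frac{d}{dt}\langle[B(0),A(t)]\rangle\right|\leq\frac{2}{\hbar}\norm{B(0)}_{\rm op}\norm{[H,A(t)]}_{\rm op}$, can be checked numerically. The following Python sketch does so for an assumed two-qubit example with spatially separated single-site observables coupled by $H=\sigma_x\otimes\sigma_x$ ($\hbar=1$):

```python
import numpy as np

# Assumed two-qubit example (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(sx, sx)                    # two-site coupling Hamiltonian
B = np.kron(sz, I2)                    # observable on site 1
A0 = np.kron(I2, sz)                   # observable on site 2
psi = np.zeros(4, dtype=complex); psi[0] = 1.0   # product state |00>
E, V = np.linalg.eigh(H)

def A(t):
    """Heisenberg evolution A(t) = U^dag(t) A(0) U(t)."""
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U.conj().T @ A0 @ U

comm = lambda X, Y: X @ Y - Y @ X
# spatially separated operators commute initially: [B(0), A(0)] = 0
assert np.allclose(comm(B, A(0.0)), 0)

for t in np.linspace(0.0, 1.0, 11):
    Adot = 1j * comm(H, A(t))                    # dA/dt = (i/hbar)[H, A(t)]
    lhs = abs(psi.conj() @ comm(B, Adot) @ psi)  # |d<O(t)>/dt|
    rhs = 2 * np.linalg.norm(B, 2) * np.linalg.norm(comm(H, A(t)), 2)
    assert lhs <= rhs + 1e-9                     # derivative inequality holds
```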
The commutator of two observables in two different regions of a many-body system is defined as \begin{equation} O(t) = [B(0), A(t)]. \end{equation} The average of the commutator in the state $\rho$ is given by $ \langle{O(t)}\rangle= {\rm tr}(O(t)\rho)$, where $\rho$ is a pure state of the given quantum system. Here, we want to obtain the speed limit bound for the commutator for both closed and open system dynamics. For closed dynamics $A(t)$ = $U^{\dagger}(t)A(0)U(t)$ ($U(t)=e^{-iHt/\hbar}$) and for open dynamics $A(t)=e^{\mathcal{L}^{\dagger}t}A(0)$ (${\mathcal{L}^{\dagger}}$ is the adjoint of the Lindbladian). We can derive the following speed limit bound on the commutator for closed dynamics \begin{equation}\label{CQSLU} T \geq \frac{\hbar}{2} \frac{ |\langle{O(T)}\rangle|}{ \norm{B(0)}_{\rm op}\frac{1}{T}\int_{0}^{T}dt\norm{[H,A(t)]}_{\rm op}}. \end{equation} Similarly, we can derive the following speed limit bound on the commutator for open dynamics \begin{equation}\label{CQSLO} T \geq \frac{\hbar}{2} \frac{ |\langle{O(T)}\rangle|}{ \norm{B(0)}_{\rm op}\frac{1}{T}\int_{0}^{T}dt\norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}}. \end{equation} Details of the derivations of bounds \eqref{CQSLU} and \eqref{CQSLO} are provided in Appendices \ref{app:I} and \ref{app:J}. Note that our bounds are state-dependent while the Lieb-Robinson bound is state-independent. Also, to prove the Lieb-Robinson bound one needs bounded interactions such as those encountered in quantum spin systems, whereas the quantum speed limit for the commutator does not require any assumption about the underlying Hamiltonian. \section{Conclusions} The standard quantum speed limit for the evolution of a state plays an important role in quantum theory, quantum information, quantum control and quantum thermodynamics. However, if we describe the quantum dynamics in the Heisenberg picture, then we cannot use the QSL for the state evolution.
We need to define the evolution speed of an observable for a quantum system in the Heisenberg picture, which has not been addressed before. In this paper, we have derived quantum speed limits for general observables for unitary and Lindbladian dynamics as well as for completely positive dynamics for the first time. Along with this, we have presented several possible applications of these bounds, such as to the quantum battery, probability dynamics, the growth of the two-point correlation function, the time development of the commutator and its connection to the Lieb-Robinson bound. A salient outcome of our approach is that the standard QSL for the state can be viewed as a special case of the QSL for observables. In the future, we hope that these bounds will find useful applications in quantum metrology, quantum control, detection of non-Markovianity, quantum thermodynamics, charging and discharging of quantum batteries and many other areas. \section{ACKNOWLEDGMENTS} The research of BM was partly supported by the INFOSYS scholarship. We thank Kavan Modi for useful discussions and comments. \appendix \section{Derivation of Eq \eqref{Bound:2}}\label{app:B} To obtain the alternate OQSL given in Eq \eqref{Bound:2}, let the state of a quantum system be described by a density operator $\rho$ (not necessarily pure). The time evolution of the expectation value of any system observable $O$ is given as \begin{equation} \langle O(t) \rangle= {\rm tr}[U^{\dagger}(t)O(0)U(t)\rho]. \end{equation} After differentiating the above equation with respect to time, we obtain \begin{equation*} \begin{split} \frac{ d\langle O(t) \rangle}{dt}= {\rm tr}[\dot{U}^{\dagger}(t)O(0)U(t)\rho] + {\rm tr}[U^{\dagger}(t)O(0)\dot{U}(t)\rho].
\end{split} \end{equation*} Let us take the absolute value of the above equation and use the triangle inequality $|A+B|\leq|A|+|B|$ to obtain \begin{equation*} \begin{split} \left|\frac{ d\langle O(t) \rangle}{dt}\right| &\leq\lvert {\rm tr}[\dot{U}^{\dagger}(t)O(0)U(t)\rho]\rvert \\ &+ \lvert {\rm tr}[U^{\dagger}(t)O(0)\dot{U}(t)\rho]\rvert. \end{split} \end{equation*} Now, using the Cauchy-Schwarz inequality $|{\rm tr}(AB)|$ $\leq$ $\sqrt{{\rm tr}(A^{\dagger}A){\rm tr}(B^{\dagger}B)}$, we can obtain the following inequality \begin{equation*} \left|\frac{ d\langle O(t) \rangle}{dt} \right| \leq 2\sqrt{{\rm tr}[\dot{U}^{\dagger}(t)O^{2}(0)\dot{U}(t)]{\rm tr}({\rho}^2)}. \end{equation*} The above inequality can be further simplified as \begin{equation*} \left| \frac{ d\langle O(t) \rangle}{dt}\right|\leq \frac{2}{\hbar}\sqrt{{\rm tr}(\rho^2)} \norm {O(0)H}_{\rm hs}, \end{equation*} where $\norm{A}_{\rm hs}=\sqrt{{\rm tr}(A^{\dagger}A)}$ is the Hilbert-Schmidt norm of an operator $A$. After integrating with respect to time, we obtain the following bound \begin{equation} T \geq \frac{\hbar}{2\sqrt{{\rm tr}(\rho^2)}} \frac{|\langle O(T) \rangle - \langle O(0)\rangle|}{\norm{O(0)H}_{\rm hs}}. \end{equation} If an observable satisfies $O^{2}$ = $I$, then in the pure state case the above bound can be expressed as \begin{equation} T \geq \frac{\hbar}{2 \norm{H}_{\rm hs} } |\langle O(T) \rangle - \langle O(0)\rangle|. \end{equation} This completes the proof of Eq \eqref{Bound:2}. \section{Derivation of Eq \eqref{Bound:3}}\label{app:C} To derive the bound given in Eq \eqref{Bound:3}, let us assume that the quantum system is in a pure state $\rho$. The time evolution of the expectation value of any system observable $O$ is given as \begin{equation} \langle O(t) \rangle={\rm tr}[U^{\dagger}(t)O(0)U(t)\rho].
\end{equation} To find the rate of change of the expectation value of the observable, we differentiate the above equation with respect to time, which gives \begin{equation*} \begin{split} \frac{ d\langle O(t) \rangle}{dt} = {\rm tr}[\dot{U}^{\dagger}(t)O(0)U(t)\rho] + {\rm tr}[U^{\dagger}(t)O(0)\dot{U}(t)\rho]. \end{split} \end{equation*} Let us take the absolute value of the above equation. Then by applying the triangle inequality $|A+B|\leq|A|+|B|$, we can obtain the following inequality \begin{equation} \begin{split} \left|\frac{ d\langle O(t) \rangle}{dt}\right| \leq\frac{1}{\hbar}(\lvert {\rm tr}[HO(t)\rho]\rvert + \lvert {\rm tr}[O(t)H\rho]\rvert). \end{split} \end{equation} Next, we use the H\"older inequality $|{\rm tr}(AB)|$ $\leq$ $\norm{A}_{\rm p}\norm{B}_{\rm q}$, where $p, q \in [1,\infty]$ are such that $\frac{1}{p}+\frac{1}{q}=1$ \cite{Rogers1888,Holder1889,Bhatia1997}. This leads to the following inequality \begin{equation*} \begin{split} \left|\frac{ d\langle O(t) \rangle}{dt}\right| \leq\frac{2}{\hbar}\norm{HO(t)}_{\rm op}. \end{split} \end{equation*} The operator norm, the Hilbert-Schmidt norm and the trace norm of an operator satisfy the inequality $\norm{A}_{\rm op} \leq \norm{A}_{\rm hs} \leq \norm{A}_{\rm tr} $, and the operator norm is unitarily invariant, $\norm {U^{\dagger}AU}_{\rm op}$ = $\norm {A}_{\rm op}$, so we can express the above inequality as \begin{equation*} \begin{split} \left|\frac{ d\langle O(t) \rangle}{dt}\right| \leq \frac{2}{\hbar}\norm{HO(0)}_{\rm tr}. \end{split} \end{equation*} After integrating the above inequality, we obtain the desired bound \begin{equation*} T \geq \frac{\hbar}{2} \frac{|{ \langle O(T) \rangle -\langle O(0)\rangle}|}{\hspace{1mm}\norm{HO(0)}_{\rm tr}}.
\end{equation*} In general, we can write the above bound as \begin{equation} T \geq \frac{\hbar}{2} \frac{|\langle O(T) \rangle - \langle O(0)\rangle| }{\min\{\norm{O(0)H}_{\rm op},\norm{O(0)H}_{\rm tr}\}}. \end{equation} This completes the proof of Eq \eqref{Bound:3}. \section{Trace distance Bounds on Observable difference}\label{app:D} The trace distance quantifies how close two density operators are. A similar question can be asked about the expectation values of an observable. Note that \begin{equation*} \begin{split} |\langle O \rangle_{\rho} - \langle O \rangle_{\sigma}|& \equiv |{\rm tr}(\rho - \sigma)O| \leq {\rm tr}|(\rho - \sigma)O|= \norm{(\rho - \sigma)O}_{\rm tr} \\ & \stackrel{\text{H\"{o}lder}}{\leq} \norm{(\rho - \sigma)}_{\rm tr}\norm{O}_{\rm op}= 2\norm{O}_{\rm op} l(\rho,\sigma). \end{split} \end{equation*} The above inequalities are obtained by using the triangle inequality for the absolute value and the H\"{o}lder inequality \begin{equation*} \norm{AB}_{\rm tr}\leq \norm{A}_{\rm p}\norm{B}_{\rm q}, \quad \frac{1}{p}+\frac{1}{q}=1, \end{equation*} with $p=1$ and $q=\infty$; note that $\norm{X}_{\rm \infty}=\norm{X}_{\rm op}$ is the maximal absolute value of the eigenvalues of $X$ when $X$ is Hermitian ($p = 1$ corresponds to the trace norm). \section{OQSL for arbitrary dynamics}\label{app:E} By using the Cauchy-Schwarz inequality $|{\rm tr}(AB)|$ $\leq$ $\sqrt{{\rm tr}(A^{\dagger}A){\rm tr}(B^{\dagger}B)}$ in Eq \eqref{OD}, we can obtain the following inequality \begin{equation*} \cal{D} \leq \frac{\sqrt{{\rm tr}(\rho ^2)}}{2\norm{O(0)}_{\rm op}}\sqrt{{\rm tr}[\{O(t) - O(0)\}^{\dagger}\{O(t) - O(0)\}]}. \end{equation*} The above inequality can be written in the form \begin{equation}\label{ineq} \cal{D} \leq \cal{D'} = \frac{\sqrt{{\rm tr}(\rho ^2)}}{2\norm{O(0)}_{\rm op}}\norm{\{O(t) - O(0)\}}_{\rm hs}.
\end{equation} The rate of change of the distance $\cal D'$ can be obtained by differentiating the above equation with respect to time. Thus, we obtain \begin{equation*} \cal{\dot{D'}} =\frac{\sqrt{{\rm tr}(\rho ^2)}}{2\norm{O(0)}_{\rm op}}\frac{{\rm tr}[\dot{O}(t)\{O(t) - O(0)\}+\{O(t) - O(0)\}\dot{O}(t)]}{2\norm{O(t) - O(0)}_{\rm hs}}. \end{equation*} The above expression can be further simplified as \begin{equation*} \cal{\dot{D'}} = \frac{\sqrt{{\rm tr}(\rho ^2)}}{2\norm{O(0)}_{\rm op}}\frac{{\rm tr}[\dot{O}(t)\{O(t) - O(0)\}]}{\norm{O(t) - O(0)}_{\rm hs}}. \end{equation*} If we take the absolute value of $\cal{\dot{D'}} $ and again apply the Cauchy-Schwarz inequality $|{\rm tr}(AB)|$ $\leq$ $\sqrt{{\rm tr}(A^{\dagger}A){\rm tr}(B^{\dagger}B)}$, then we obtain the following inequality \begin{equation*} |\cal{\dot{D'}} | \leq \frac{\sqrt{{\rm tr}(\rho ^2)}}{\hspace{2mm}2\norm{O}_{\rm op}}\norm{\dot{O}(t)}_{\rm hs}=\frac{\sqrt{{\rm tr}(\rho ^2)}}{\hspace{2mm}2\norm{O}_{\rm op}}\norm{L_{t}^{\dagger}({O}(t))}_{\rm hs}. \end{equation*} After integrating the above inequality, we obtain \begin{equation*} \cal{{D'}}(O(T),O(0)) \leq \frac{\sqrt{{\rm tr}(\rho ^2)}}{\hspace{2mm}2\norm{O}_{\rm op}}T\Lambda_{T}, \end{equation*} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt{\norm{{L_{t}^{\dagger}({O}(t))}}_{\rm hs}}$ is the evolution speed of the observable $O$. Using Eq~\eqref{ineq}, we obtain \begin{equation*} \cal{{D}}(O(T),O(0)) \leq \frac{\sqrt{{\rm tr}(\rho ^2)}}{\hspace{2mm}2\norm{O}_{\rm op}}T\Lambda_{T}. \end{equation*} Finally, we obtain the desired bound on the evolution time of the expectation value of an observable as \begin{equation} T \geq \frac{|\langle O(T) \rangle - \langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho ^2)}\Lambda_{T}}, \end{equation} where $T^{O}_{QSL}=\frac{|\langle O(T) \rangle - \langle O(0) \rangle|}{\sqrt{{\rm tr}(\rho ^2)}\Lambda_{T}}$.
This completes the proof of Eq \eqref{arbitrary}. \section{OQSL for Dynamical Map}\label{app:F} Suppose the given quantum system has initial state $\rho$, and let its evolution be governed by a CPTP map $\mathcal{E}$ described by a set of Kraus operators $\{K_{i}(t)\}$ with $\sum_{i}K^{\dagger}_{i}(t)K_{i}(t)=\mathcal{I}_{S}$. The dynamics of an observable in the Heisenberg picture is described as \begin{equation} O(t) =\sum_{i}K^{\dagger}_{i}(t)O(0)K_{i}(t). \end{equation} The time evolution of the expectation value of the observable $O$ is given by \begin{equation*} \langle O(t) \rangle =\sum_{i} {\rm tr}[K^{\dagger}_{i}(t)O(0)K_{i}(t)\rho]. \end{equation*} The rate of change of the expectation value of the observable $O$ can be obtained by differentiating the above equation with respect to time, which gives \begin{equation*} \begin{split} \frac{ d\langle O(t) \rangle}{dt} = &\sum_{i}({\rm tr}[\dot{K}^{\dagger}_{i}(t)O(0){K}_{i}(t)\rho]\\ &+ {\rm tr}[{K}^{\dagger}_{i}(t)O(0)\dot{K}_{i}(t)\rho]). \end{split} \end{equation*} Let us take its absolute value; then we can apply the triangle inequality $|A+B|\leq|A|+|B|$ and the Cauchy-Schwarz inequality $|{\rm tr}(AB)|$ $\leq$ $\sqrt{{\rm tr}(A^{\dagger}A){\rm tr}(B^{\dagger}B)}$. We thus obtain the following inequality \begin{equation*} \begin{split} & \left|\frac{ d\langle O(t) \rangle}{dt}\right|\\ & \leq \sum_{i}(\sqrt{{\rm tr}[\dot{K}^{\dagger}_{i}(t)O(0){K}_{i}(t)K^{\dagger}_{i}(t)O(0) \dot{K}_{i}(t)]{\rm tr}(\rho^2)} \\ &+ \sqrt{{\rm tr}[{K}^{\dagger}_{i}(t)O(0)\dot{K}_{i}(t)\dot{K}^{\dagger}_{i}(t)O(0){K}_{i}(t)]{\rm tr}(\rho^2)} ).
\end{split} \end{equation*} The simplified form of the above inequality is \begin{equation*} \begin{split} \left|\frac{ d\langle O(t) \rangle}{dt}\right| \leq 2\sqrt{{\rm tr}(\rho^2)} \sum_{i}\norm{{{K}^{\dagger}_{i}(t)O(0)\dot{K}_{i}(t)}}_{\rm hs}, \end{split} \end{equation*} where $\norm{A}_{\rm hs}$ = $\sqrt{{\rm tr}(A^{\dagger}A)}$ is the Hilbert-Schmidt norm of an operator $A$. After integrating the above inequality, we obtain the following bound \begin{equation} T \geq \frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{2\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}, \end{equation} where $\Lambda_{T}=\frac{1}{T}\int_{0}^{T}dt\sum_{i}\norm{{K}^{\dagger}_{i}(t)O(0)\dot{K}_{i}(t)}_{\rm hs}$ is the evolution speed of the observable $O$ and $T^{O}_{QSL}$= $\frac{|\langle O(T) \rangle-\langle O(0) \rangle|}{2\sqrt{{\rm tr}(\rho^2)}\Lambda_{T}}$, as given in Eq \eqref{Bound:open2}. For open dynamics, the bounds \eqref{Bound:open1} and \eqref{Bound:open2} determine how fast the expectation value of an observable of the quantum system changes in time. This completes the proof of Eq \eqref{Bound:open2}. \section{QSL of two Point Function (Unitary Case)}\label{app:G} Let us consider the two-point correlation function, defined as \begin{equation} C(t) = \langle A(t)A(0) \rangle - \langle A(t) \rangle \langle A(0) \rangle, \end{equation} where $A(t)$ = $U^{\dagger}(t)A(0)U(t)$ and $U^{\dagger}(t)U(t)$ = $I$. After differentiating the above equation with respect to time, we obtain \begin{equation} \frac{d C(t)}{dt} = \langle \dot{A}(t)A(0) \rangle - \langle \dot{A}(t) \rangle \langle A(0) \rangle. \end{equation} Let us take the absolute value of the above equation; using the triangle inequality $|A+B|\leq|A|+|B|$, we obtain \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq |\langle \dot{A}(t)A(0) \rangle| + |\langle \dot{A}(t) \rangle ||\langle A(0) \rangle|.
\end{equation} Now, using the fact that $|{\rm tr}(A\rho)|$ $\leq$ $\norm{A}_{\rm op}$ (where $\rho$ is a pure state), we can obtain the following inequality \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \norm{\dot{A}(t)A(0)}_{\rm op} + \norm{ \dot{A}(t) }_{\rm op}\norm{ A(0)}_{\rm op}. \end{equation} By submultiplicativity of the operator norm, the above inequality can be expressed as \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \norm{\dot{A}(t)}_{\rm op}\norm{A(0)}_{\rm op} + \norm{ \dot{A}(t) }_{\rm op}\norm{ A(0)}_{\rm op}. \end{equation} This leads to \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq 2\norm{ A(0)}_{\rm op}\norm{ \dot{A}(t) }_{\rm op}. \end{equation} Therefore, we have \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \frac{2}{\hbar}\norm{ A(0)}_{\rm op}\norm{[H,A(t)] }_{\rm op}. \end{equation} After integrating, we obtain the following bound \begin{equation} T \geq \frac{\hbar}{2} \frac{ |C(T) - C(0)|}{ \norm{ A(0)}_{\rm op} \frac{1}{T}\int_{0}^{T}dt\norm{[H,A(t)] }_{\rm op}}. \end{equation} This completes the proof of Eq \eqref{QSLTTU}. \section{QSL of two Point Function (Open system Case)}\label{app:H} Let us consider the two-point function, defined as \begin{equation} C(t) = \langle A(t)A(0) \rangle - \langle A(t) \rangle \langle A(0) \rangle, \end{equation} where $A(t)=e^{\mathcal{L}^{\dagger}t}A(0)$ and ${\mathcal{L}^{\dagger}}$ is the adjoint of the Lindbladian. After differentiating the above equation with respect to time, we obtain \begin{equation} \frac{d C(t)}{dt} = \langle \dot{A}(t)A(0) \rangle - \langle \dot{A}(t) \rangle \langle A(0) \rangle. \end{equation} Let us take the absolute value of the above equation; using the triangle inequality $|A+B|\leq|A|+|B|$, we obtain \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq |\langle \dot{A}(t)A(0) \rangle| + |\langle \dot{A}(t) \rangle ||\langle A(0) \rangle|.
\end{equation} Now, using the fact that $|{\rm tr}(A\rho)|$ $\leq$ $\norm{A}_{\rm op}$ (where $\rho$ is a pure state), we can obtain the following inequality \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \norm{\dot{A}(t)A(0)}_{\rm op} + \norm{ \dot{A}(t) }_{\rm op}\norm{ A(0)}_{\rm op}. \end{equation} The above inequality leads to \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \norm{\dot{A}(t)}_{\rm op}\norm{A(0)}_{\rm op} + \norm{ \dot{A}(t) }_{\rm op}\norm{ A(0)}_{\rm op}. \end{equation} This can be written as \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq 2\norm{ A(0)}_{\rm op}\norm{ \dot{A}(t) }_{\rm op}. \end{equation} Therefore, we have \begin{equation} \left|\frac{d C(t)}{dt}\right| \leq \frac{2}{\hbar}\norm{ A(0)}_{\rm op}\norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}. \end{equation} After integrating, we obtain the following bound \begin{equation} T \geq \frac{\hbar}{2} \frac{ |C(T) - C(0)|}{ \norm{ A(0)}_{\rm op} \frac{1}{T}\int_{0}^{T}dt\norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}}. \end{equation} This completes the proof of Eq \eqref{QSLTTO}. \section{QSL bound for commutators (Unitary Case)}\label{app:I} The commutator of operators in two different regions of a many-body system is defined as \begin{equation} \langle{O(t)}\rangle = \langle[B(0), A(t)]\rangle, \end{equation} where $A(t)$ = $U^{\dagger}(t)A(0)U(t)$ and $U^{\dagger}(t)U(t)$ = $I$. After differentiating the above equation with respect to time, we obtain \begin{equation} \frac{d \langle{O(t)}\rangle}{dt} = \langle[B(0), \dot{A}(t)]\rangle. \end{equation} Let us take the absolute value of the above equation \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| = |\langle[B(0), \dot{A}(t)]\rangle|.
\end{equation} Now, by using $|{\rm tr}(A\rho)|$ $\leq$ $\norm{A}_{\rm op}$ (where $\rho$ is a pure state), we can obtain the following inequality \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq \norm{[B(0), \dot{A}(t)]}_{\rm op}. \end{equation} Let us use the inequality $ \norm{[O_{1}, O_{2}]}_{\rm op}\leq 2 \norm{O_{1}}_{\rm op} \norm{ O_{2}}_{\rm op}$; then we obtain \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq 2\norm{B(0)}_{\rm op}\norm{ \dot{A}(t)}_{\rm op}. \end{equation} This can be written as \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq \frac{2}{\hbar}\norm{B(0)}_{\rm op}\norm{ [H,A(t)]}_{\rm op}. \end{equation} After integrating, and noting that $\langle O(0)\rangle=0$ because the spatially separated operators $A(0)$ and $B(0)$ commute, we obtain the following bound \begin{equation} T \geq \frac{\hbar}{2} \frac{ |\langle{O(T)}\rangle|}{ \norm{B(0)}_{\rm op}\frac{1}{T}\int_{0}^{T}dt\norm{[H,A(t)]}_{\rm op}}. \end{equation} This completes the proof of Eq \eqref{CQSLU}. \section{QSL bound for commutators (Open system Case)}\label{app:J} The commutator of operators in two different regions of a many-body system is defined as \begin{equation} \langle{O(t)}\rangle = \langle[B(0), A(t)]\rangle, \end{equation} where $A(t)=e^{\mathcal{L}^{\dagger}t}A(0)$ and ${\mathcal{L}^{\dagger}}$ is the adjoint of the Lindbladian. After differentiating the above equation with respect to time, we obtain \begin{equation} \frac{d \langle{O(t)}\rangle}{dt} = \langle[B(0), \dot{A}(t)]\rangle. \end{equation} Let us take the absolute value of the above equation \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| = |\langle[B(0), \dot{A}(t)]\rangle|. \end{equation} Now, by using $|{\rm tr}(A\rho)|$ $\leq$ $\norm{A}_{\rm op}$ (where $\rho$ is a pure state), we can obtain the following inequality \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq \norm{[B(0), \dot{A}(t)]}_{\rm op}.
\end{equation} Let us use the inequality $ \norm{[O_{1}, O_{2}]}_{\rm op}\leq 2 \norm{O_{1}}_{\rm op} \norm{ O_{2}}_{\rm op}$; then we obtain \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq 2\norm{B(0)}_{\rm op}\norm{ \dot{A}(t)}_{\rm op}. \end{equation} This can be written as \begin{equation} \left|\frac{d \langle{O(t)}\rangle}{dt}\right| \leq \frac{2}{\hbar}\norm{B(0)}_{\rm op} \norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}. \end{equation} After integrating, and noting that $\langle O(0)\rangle=0$ because the spatially separated operators $A(0)$ and $B(0)$ commute, we obtain the following bound \begin{equation} T \geq \frac{\hbar}{2} \frac{ |\langle{O(T)}\rangle|}{ \norm{B(0)}_{\rm op}\frac{1}{T}\int_{0}^{T}dt\norm{{\mathcal{L}^{\dagger}}[A(t)]}_{\rm op}}. \end{equation} This completes the proof of Eq \eqref{CQSLO}. \end{document}
\begin{document} \title{Rational curves on complete intersections in positive characteristic} \begin{abstract} We study properties of rational curves on complete intersections in positive characteristic. It has long been known that in characteristic 0, smooth Calabi-Yau and general type varieties are not uniruled. In positive characteristic, however, there are well-known counterexamples to this statement. We will show that nevertheless, a \emph{general} Calabi-Yau or general type complete intersection in projective space is not uniruled. We will also show that the space of complete intersections of degree $(d_1, \cdots, d_k)$ containing a rational curve has codimension at least $\sum_{i=1}^k d_i - 2n + 2$ in the moduli space of all complete intersections of given multidegree and dimension. \end{abstract} \section{Introduction} In characteristic zero, there is to some extent a dichotomy in the behavior of rational curves on smooth complete intersections in projective space. If the complete intersection is Fano, then it is rationally connected, i.e., there is a rational curve connecting any two points. On the other hand, if the complete intersection is Calabi-Yau or of general type, then it is not even uniruled, i.e., there is no rational curve passing through a very general point. In positive characteristic, the notions of rational connectedness and uniruledness become more complicated. While there are still notions of rational connectedness or uniruledness as above, we can alternatively require a variety to be \emph{separably} rationally connected or uniruled, which essentially means that there are rational curves on the variety which have many infinitesimal deformations. It is still true that all Fano varieties are rationally connected, whereas Calabi-Yau and general type varieties are never separably uniruled. More recently, it has been shown that the general Fano complete intersection is even separably rationally connected \cite{CZ}. 
However, in positive characteristic there are general type varieties that are uniruled. Shioda constructed examples of smooth hypersurfaces of arbitrarily large degree which are unirational, so in particular, rationally connected and uniruled \cite{S}. Liedtke shows that supersingular K3 surfaces are unirational \cite{L}, and also gives a construction of some families of uniruled surfaces of general type \cite{L2}. In this paper, we will show that despite the existence of these pathological examples, we still have the following result. \begin{theorem} \label{nonuniruled} Let $X$ be a general Calabi-Yau or general type complete intersection in projective space. Then $X$ is not uniruled. \end{theorem} Christian Liedtke has kindly pointed out to us that one can use isocrystal methods to prove this result for surfaces. Using Theorem \ref{nonuniruled} together with the methods of \cite{RiedlYang}, we can also obtain more quantitative results about uniruled hypersurfaces and hypersurfaces containing rational curves. For instance, we show the following. \begin{theorem} For $d \geq 2n-1$, a very general hypersurface will contain no rational curves, and moreover, the locus of hypersurfaces that contain rational curves will have codimension at least $d-2n+2$. \end{theorem} This extends results of Ein to positive characteristic \cite{ein}. In the sequel, all Chow groups will have rational coefficients. We will work over a field $k$ of positive characteristic $p$ and fix a prime $\ell \neq p$. We would like to thank Arend Bayer, Izzet Coskun, Johan de Jong, Lawrence Ein, H\'el\`ene Esnault, J\'anos Koll\'ar, Christian Liedtke, and Jason Starr for numerous helpful conversations. Both authors were partially supported by an NSF RTG grant during this research.
\section{Preliminaries on notions of uniruledness and rational connectedness} There are a number of inequivalent notions of rational connectedness and uniruledness in positive characteristic, so in this section we will give the precise definitions we will use and some basic facts about them. Details can be found in \cite{K}. \begin{definition} A variety $X$ is \emph{rationally chain connected} if there are varieties $U$ and $Y$, a proper flat morphism $g:U \to Y$ of relative dimension one whose geometric fibers have all components rational, and a map $u:U \to X$ such that the natural map \[ u \times u: U \times_Y U \to X \times X\] is dominant. $X$ is \emph{rationally connected} if we can choose $U$, $Y$, $g$, and $u$ as above with the additional property that the geometric fibers of $g$ are irreducible. $X$ is \emph{separably rationally connected} if there is a variety $Y$ and a morphism $u:Y \times \mathbb{P}^1 \to X$ such that the natural map \[ Y \times \mathbb{P}^1 \times \mathbb{P}^1 \to X \times X \] is dominant and separable. \end{definition} \begin{definition} A variety $X$ is \emph{uniruled} if there is a variety $Y$ and a dominant map $u:Y \times \mathbb{P}^1 \to X$. It is separably uniruled if we can find a separable such $u$. \end{definition} If the base field is algebraically closed, these properties and their negations are preserved under extension to another algebraically closed ground field. We will say that a variety is geometrically uniruled if its base change to the algebraic closure of the ground field is uniruled. If the ground field is uncountable, there are alternate characterizations of some of these near-rationality properties which can be easier to use. For instance, a variety is rationally connected if and only if two very general points can be connected by an irreducible rational curve. It is rationally chain connected if two very general points can be connected by a chain of rational curves. 
It is uniruled if a rational curve can be found passing through a very general point. \section{Non-uniruledness of general complete intersections} In this section, we prove that a general complete intersection with effective canonical bundle is not uniruled. The proof rests on an analysis of the coniveau filtration on the middle cohomology of $X$. On the one hand, a result of Katz shows that the coniveau type must be $0$. On the other hand, we show that for a uniruled variety of dimension $n$, the coniveau of $H^n$ is strictly positive. This allows us to conclude that a general such complete intersection is not uniruled. \subsection{Coniveau filtration} We begin by recalling the definition of the coniveau filtration on cohomology and some of its properties, following \cite{SGA 7 II}, expos\'e XX. \begin{definition} \label{corrConiveau} For $X$ a smooth proper variety, \begin{equation*} N^j H^i(X_{{\overline{k}}},\mathbb{Q}_\ell)=\sum _{(d,T,Z)}\operatorname{image}\left(\operatorname{cl}(Z)_*:H^{i-2j}(T_{{\overline{k}}},\mathbb{Q}_\ell) \to H^i(X_{{\overline{k}}},\mathbb{Q}_\ell)\right) \end{equation*} where $d$ runs over all nonnegative integers, $T$ runs over all smooth proper connected varieties of dimension $d$, $Z$ runs over all algebraic cycles in $T \times X$ of codimension $j+d$, and $\operatorname{cl}(Z)_*$ is the map on cohomology induced by the correspondence $Z$, i.e., \[\operatorname{cl}(Z)_*(\alpha)=\pi_{2*}([Z] \cup \pi_1^*(\alpha)).\] \end{definition} We will say that the \emph{coniveau type} of $H^i(X_{{\overline{k}}},\mathbb{Q}_\ell)$ is the greatest integer $j$ such that $H^i(X_{{\overline{k}}},\mathbb{Q}_\ell)=N^j H^i(X_{{\overline{k}}},\mathbb{Q}_\ell)$. \begin{lemma} \label{functorialConiveau} Let $Z \in A^{\dim X+k}(X \times Y)$. The induced map \[ Z_*:H^i(X_{\overline{k}},\mathbb{Q}_\ell) \to H^{i+2k}(Y_{\overline{k}},\mathbb{Q}_\ell) \] sends $N^j H^i(X_{\overline{k}},\mathbb{Q}_\ell)$ to $N^{j+k}H^{i+2k}(Y_{\overline{k}},\mathbb{Q}_\ell)$.
\end{lemma} \begin{proof} We know that $N^j H^i(X_{\overline{k}},\mathbb{Q}_\ell)$ is spanned by classes $\alpha$ for which there is a smooth projective variety $T$ of dimension $d$, a cycle $\Gamma \in A^{d+j}(T \times X),$ and a class $\beta \in H^{i-2j}(T_{\overline{k}},\mathbb{Q}_\ell)$ such that $\Gamma_* \beta=\alpha$. It therefore suffices to prove the result for such classes. But we know that $Z_* \alpha=(Z \circ \Gamma)_* \beta$, and $Z \circ \Gamma \in A^{d+k+j}(T \times Y)$, so it follows from the definition of the coniveau filtration that \[Z_* \alpha \in N^{j+k}H^{i+2k}(Y_{\overline{k}},\mathbb{Q}_\ell) ~. \] \end{proof} Given a smooth complete intersection of dimension $n$ in projective space, the Lefschetz hyperplane theorem shows that its cohomology agrees with that of projective space in all degrees except $n$. The following key result gives us information about the coniveau of this interesting cohomology group. \begin{theorem}[Katz, \cite{SGA 7 II}, expos\'e XXI]\label{complete-intersection-coniveau} Let $X$ be a general Calabi-Yau or general type $n$-dimensional complete intersection in projective space. Then the $n$th cohomology of $X$ has coniveau type 0. \end{theorem} We also have a result which will allow us to give explicit examples of such complete intersections. \begin{proposition}[\cite{SGA 7 II}, expos\'es XIX, XXI, and XXII] \label{coniveau-congruence} Let $k=\mathbb{F}_q$ be a finite field. If all the higher $\ell$-adic cohomology groups of $X$ have positive coniveau type, then \[ |X(k)| \equiv 1 \pmod q ~. \] \end{proposition} \subsection{Coniveau of uniruled varieties} This subsection is devoted to showing that a uniruled variety has positive coniveau type in its middle cohomology. The proof, which is inspired by Esnault \cite{E}, has three steps. First, we show that $\operatorname{CH}_0$ of a uniruled variety comes from $\operatorname{CH}_0$ of a proper subvariety.
Second, we recall a result of Bloch and Srinivas showing that such varieties admit a decomposition of the diagonal of a certain form. Finally, we use this decomposition of the diagonal and follow Bloch--Srinivas and Voisin to show that the middle cohomology of varieties with such a decomposition of the diagonal must have positive coniveau type. \begin{proposition} Let $X/k$ be a smooth projective variety with $k$ uncountable and algebraically closed. Suppose that $X$ is uniruled. Then there is a proper closed subvariety $Z \subset X$ such that the natural map $\operatorname{CH}_0(Z) \to \operatorname{CH}_0(X)$ is surjective, or equivalently, there is a nonempty open subset $V \subset X$ such that $\operatorname{CH}_0(V)=0$. \end{proposition} \begin{proof} Let $Z$ be an ample divisor on $X$. Let $p \in X$ be arbitrary. We show that the $0$-cycle $p$ is rationally equivalent to a cycle supported on $Z$, which suffices to prove the claim. Since $X$ is uniruled, there is a rational curve $C$ in $X$ through $p$. Since $Z$ is ample, $C$ will meet $Z$. Moving $p$ along $C$, we see that $p$ is equivalent to a cycle supported on $Z$. \end{proof} In fact, as Jason Starr pointed out to us, one can obtain a stronger result. Namely, using Nadel's trick and the MRCC fibration as described by Debarre \cite{De}, one can prove that any uniruled variety of Picard rank 1 must also be rationally connected. By working in families, it is possible to prove this even for complete intersection surfaces, whose Picard numbers are not necessarily 1. We now recall the decomposition of the diagonal, Proposition 1 of \cite{BS}. \begin{proposition}\label{decompdiagonal} Let $X$ be a smooth projective variety of dimension $n$ and $Z \subset X$ a closed subvariety such that $\operatorname{CH}_0(Z) \to \operatorname{CH}_0(X)$ is surjective.
Then there is a positive integer $N$, a divisor $D$, and cycles $\Gamma_1$, $\Gamma_2 \in \operatorname{CH}_n(X \times X)$ such that $\Gamma_1$ is supported on $Z \times X$, $\Gamma_2$ is supported on $X \times D$ and \[ N[\Delta]=\Gamma_1+\Gamma_2.\] \end{proposition} Having a decomposition of the diagonal has strong implications for the coniveau type of $H^i(X_{\overline{k}},\mathbb{Q}_\ell)$. The following proposition is essentially Theorem 3.16 of \cite{V}, and the proof is very similar, with some minor adjustments needed for it to work in positive characteristic. \begin{proposition} \label{coniveauchow} Let $X$ be a smooth projective variety of dimension $n$. Let $Z \subset X$ be a closed subvariety of dimension $i$ such that $\operatorname{CH}_0(Z) \to \operatorname{CH}_0(X)$ is surjective. Then for $m>i$, $H^m(X_{{{\overline{k}}}},\mathbb{Q}_\ell)$ has strictly positive coniveau type. \end{proposition} Before we give the proof, we will also need the following lemma. \begin{lemma} \label{hardLefschetz} Let $Z$ be a smooth projective variety of dimension $n$. Then for $m=n+i$ with $i \geq 0$, $H^m(Z_{\overline{k}},\mathbb{Q}_\ell)$ has coniveau type at least $i$. \end{lemma} \begin{proof} Let $L$ be the class of an ample divisor on $Z$. By the hard Lefschetz theorem for $\ell$-adic cohomology (see \cite{D} or \cite{M}), $\cdot \cup L^i:H^{n-i}(Z_{{\overline{k}}},\mathbb{Q}_\ell) \to H^{n+i}(Z_{{\overline{k}}},\mathbb{Q}_\ell)$ is an isomorphism. But this map is induced by a correspondence, namely $\Delta_{*}(L^i)$, so its image lies in $N^iH^{n+i}(Z_{\overline{k}},\mathbb{Q}_\ell)$ by Definition \ref{corrConiveau}. \end{proof} We can now give the proof of Proposition \ref{coniveauchow}. \begin{proof} We use the decomposition of the diagonal $N[\Delta]=\Gamma_1+\Gamma_2$ of Proposition \ref{decompdiagonal}, with $\Gamma_1$ supported on $Z \times X$ and $\Gamma_2$ supported on $X \times D$. Note that the diagonal correspondence induces the identity map on cohomology.
It therefore suffices to show that the images of $(\Gamma_1)_*$ and $(\Gamma_2)_*$ lie inside the coniveau 1 part of $H^m(X_{\overline{k}},\mathbb{Q}_\ell)$. We begin with $(\Gamma_{2})_*$. Recall that an alteration of a variety $V$ is a generically finite, proper map $\tilde{V} \to V$ with $\tilde{V}$ smooth. Any variety over any field admits an alteration \cite{DJ}. Take $\tilde{D}$ an alteration of $D$, with $f:\tilde{D} \to X$ the natural map. Then $(\Gamma_{2})_*$ factors through $f_*$, since $\Gamma_{2}$ is the pushforward of a rational cycle $\tilde{\Gamma}_2$ on $X \times \tilde{D}$, and $(\Gamma_{2})_* = f_* \circ (\tilde{\Gamma}_{2})_*$. The image of $f_*$ is contained in $N^1 H^m(X_{\overline{k}},\mathbb{Q}_\ell)$ by Definition \ref{corrConiveau}, because $f$ is generically finite onto a proper subvariety of $X$, so $f_*$ raises cohomological degree by $2$ and hence lands in the coniveau $1$ part. We now show the result for $(\Gamma_1)_*$. Let $\tilde{Z}$ be an alteration of $Z$ and $g:\tilde{Z} \to X$ the natural map. As before, we know there is a cycle $\tilde{\Gamma}_1$ on $\tilde{Z} \times X$ which pushes forward to $\Gamma_1$ under $g \times \id$. We have $(\Gamma_{1})_*=(\tilde{\Gamma}_{1})_* \circ g^*$. It suffices to show that the image of $g^*$ is contained in $N^1H^{m}(\tilde{Z}_{\overline{k}}, \mathbb{Q}_\ell)$, since the coniveau filtration is preserved under pushforward by Lemma \ref{functorialConiveau}. By hypothesis, $m$ is greater than the dimension of $\tilde{Z}$, so by Lemma \ref{hardLefschetz}, all of $H^m(\tilde{Z}_{\overline{k}},\mathbb{Q}_\ell)$ has positive coniveau type. \end{proof} By Theorem \ref{complete-intersection-coniveau}, we know that the middle cohomology of a general non-Fano complete intersection has coniveau type 0, but this is impossible for uniruled varieties, so we can deduce the following theorem. \begin{theorem}\label{notUniruledProp} Let $X$ be a smooth projective variety such that its middle cohomology has coniveau type 0. Then $X$ is not uniruled.
In particular, a general non-Fano complete intersection is not uniruled. \end{theorem} Using Proposition \ref{coniveau-congruence} to relate coniveau to point-counting, we deduce the following corollary. \begin{corollary} \label{pointCountUniruled} Let $X$ be a smooth complete intersection in projective space over a finite field $k=\mathbb{F}_q$. If \[|X(k)| \not\equiv 1 \pmod q \] then $X$ is not geometrically uniruled. \end{corollary} \subsection{Fermat hypersurfaces} Some of the earliest examples of unirational hypersurfaces were Fermat hypersurfaces. Throughout this subsection, suppose that \[ d \not \equiv 0 \pmod p. \] Shioda and Katsura showed in \cite{SK} that for odd $n \geq 3$ and $d \geq 4$, the Fermat hypersurface of degree $d$ in $\mathbb{P}^n$ is unirational if there is an integer $\nu$ such that \[p^\nu \equiv -1 \pmod d.\] If $n=3$, then the converse also holds. By using Corollary \ref{pointCountUniruled}, we can give examples of non-uniruled Fermat hypersurfaces. Given a hypersurface over a finite field, we can use the Fulton trace formula \cite{F} to count the number of points modulo $p$. More precisely, we have the following corollary of the trace formula, which is Proposition 5.15 of \cite{M2}. \begin{proposition} If $X=V(F)$ is a hypersurface in $\mathbb{P}^n$ of degree $n+1$ over $\mathbb{F}_p$, then \[|X(\mathbb{F}_p)| \not\equiv 1 \pmod p \] if and only if the coefficient of $(x_0\cdots x_n)^{p-1}$ in $F^{p-1}$ is nonzero. \end{proposition} A straightforward calculation then implies that if $d=n+1$ and \[ p \equiv 1 \pmod d \] then this coefficient is nonzero: for the Fermat polynomial $F = x_0^d + \cdots + x_n^d$, the multinomial theorem shows that the coefficient of $(x_0\cdots x_n)^{p-1}$ in $F^{p-1}$ is $\binom{p-1}{m,\dots,m}$ with $m = (p-1)/d$, since each $x_i$ must appear with exponent $dm = p-1$; this multinomial coefficient is a product of binomial coefficients whose entries are smaller than $p$, hence it is nonzero modulo $p$. In particular, these Fermat hypersurfaces are not geometrically uniruled. \section{Non-existence of rational curves} In characteristic $0$, Ein \cite{ein} proves that a very general hypersurface in $\mathbb{P}^n$ of degree $d$ with $d \geq 2n-1$ contains no rational curves. Voisin \cite{voisin} slightly strengthens this, proving that a very general hypersurface of degree $d \geq 2n-2$ contains no rational curves.
Their techniques become more complicated in characteristic $p$. In particular, they deal with positivity of certain vector bundles under various finite covers, and in characteristic $p$, it is not immediately clear how to show that these behave well under inseparable base change. We present an alternate proof of Ein's result using strictly characteristic $p$ techniques, which works for all complete intersections. We do not know of a proof of Voisin's result in positive characteristic. Consider the universal complete intersection of multidegree $\overline{d}=(d_1,\ldots,d_i)$ in $\mathbb{P}^n$, $\mathcal{U}_{n,\overline{d}} = \{(p,X) \mid p \in X,\,X \in \mathcal{H}_{\overline{d}}\}$, where $\mathcal{H}_{\overline{d}}$ is the Hilbert scheme of complete intersections in $\mathbb{P}^n$ of multidegree $\overline{d}$, and let $R_{n,\overline{d}} \subset \mathcal{U}_{n,\overline{d}}$ be the space of pairs $(p,X)$ such that there is a rational curve in $X$ passing through $p$. $\mathcal{U}_{n,\overline{d}}$ is a projective variety, and $R_{n,\overline{d}}$ will be a countable union of projective varieties. Our main result is the following. \begin{theorem} \label{hyperbolicityThm} If $R_{n,\overline{d}}$ is a proper subset of $\mathcal{U}_{n,\overline{d}}$, then for any $c \geq 0$, every component of $R_{n-c,\overline{d}}$ has codimension at least $c+1$ in $\mathcal{U}_{n-c,\overline{d}}$. \end{theorem} Combined with Theorem \ref{notUniruledProp}, this gives \begin{corollary} The space of multidegree $\overline{d}$ complete intersections in $\mathbb{P}^n$ containing a rational curve has codimension at least $\sum d_i-2n+2$. The space of uniruled hypersurfaces has codimension at least $\sum d_i-n$.
\end{corollary} \begin{proof} By Theorem \ref{notUniruledProp}, we see that $R_{\sum d_i-1,\overline{d}}$ has codimension at least $1$ in $\mathcal{U}_{\sum d_i-1,\overline{d}}$, so it follows from Theorem \ref{hyperbolicityThm} that for $n<\sum d_i-1$, $R_{n,\overline{d}}$ has codimension at least $\sum d_i-n$ in $\mathcal{U}_{n,\overline{d}}$. The fibers of the map from $R_{n,\overline{d}}$ to the space of all complete intersections containing a rational curve are at least one-dimensional, so we can conclude that the codimension of the space of complete intersections containing a rational curve in the space of all complete intersections of multidegree $\overline{d}$ in $\mathbb{P}^n$ is at least $\sum d_i-2n+2$. The fibers of the map from $R_{n,\overline{d}}$ to the space of hypersurfaces have dimension $n-1$ when considering uniruled hypersurfaces, so we conclude that the space of uniruled hypersurfaces has codimension at least $\sum d_i-n$. \end{proof} The techniques of the proof of Theorem \ref{hyperbolicityThm} are similar to those in \cite{RiedlYang}, but we reproduce them here because the statement that we are proving is slightly different and because we want to emphasize the independence of characteristic. We first need an elementary result on Grassmannians that we quote from \cite{RiedlYang}. \begin{proposition} \label{GrassProp} Let $m \leq n$. Let $B \subset \mathbb{G}(m,n)$ be irreducible of codimension at least $\epsilon \geq 1$. Let $C \subset \mathbb{G}(m-1,n)$ be a nonempty subvariety satisfying the following condition: for all $c \in C$, if $b \in \mathbb{G}(m,n)$ contains $c$, then $b \in B$. Then it follows that the codimension of $C$ in $\mathbb{G}(m-1,n)$ is at least $\epsilon + 1$. \end{proposition} \begin{proof}[Proof of Theorem \ref{hyperbolicityThm}] Let $(p,X) \in R_{n-c,\overline{d}}$ be a general point of one of the components. We wish to show that the codimension of any component of $R_{n-c,\overline{d}}$ passing through $(p,X)$ is at least $c+1$.
In order to show this, we will find a subvariety $\mathcal{F} \subset \mathcal{U}_{n-c,\overline{d}}$ such that $\operatorname{codim}(\mathcal{F} \cap R_{n-c,\overline{d}} \subset \mathcal{F}) \geq c+1$. Let $(p,Y) \in \mathcal{U}_{n,\overline{d}}$ be very general, so that there are no rational curves in $Y$ through $p$. Let $(p,Z) \in \mathcal{U}_{M,\overline{d}}$ be a pair such that $(p,Y)$ and $(p,X)$ are both parameterized linear sections of $(p,Z)$, where $M$ is some sufficiently large integer. Let $\mathcal{F}_m$ be the closure of the space of parameterized linear sections of $(p,Z)$ in $\mathcal{U}_{m,\overline{d}}$. By hypothesis, \[ \operatorname{codim} (\mathcal{F}_n \cap R_{n,\overline{d}} \subset \mathcal{F}_n) \geq 1, \] so it follows by Proposition \ref{GrassProp} that \[ \operatorname{codim} (\mathcal{F}_{n-c} \cap R_{n-c,\overline{d}} \subset \mathcal{F}_{n-c}) \geq c+1, \] which concludes the proof. \end{proof} \end{document}
\begin{document} \title{On the confluence of lambda-calculus with conditional rewriting} \begin{abstract} The confluence of untyped $\la$-calculus with {\em unconditional} rewriting is now well understood. In this paper, we investigate the confluence of $\la$-calculus with {\em conditional} rewriting and provide general results in two directions. First, when conditional rules are algebraic. This extends results of M\"uller and Dougherty for unconditional rewriting. Two cases are considered, depending on whether $\b$-reduction is allowed in the evaluation of conditions or not. Moreover, Dougherty's result is improved from the assumption of strongly normalizing $\b$-reduction to weakly normalizing $\b$-reduction. We also provide examples showing that outside these conditions, modularity of confluence is difficult to achieve. Second, we go beyond the algebraic framework and obtain new confluence results using a restricted notion of orthogonality that takes advantage of the conditional part of rewrite rules. \end{abstract} \tableofcontents \section{Introduction} Rewriting~\cite{dj90book} and $\la$-calculus~\cite{barendregt84book} are two universal computation models which are both used, with their own advantages, in programming language design and implementation, as well as for the foundation of logical frameworks and proof assistants. Among other things, $\la$-calculus makes it possible to manipulate abstractions and higher-order variables, while rewriting is traditionally well suited for defining functions over data types and for dealing with equality. Starting from Klop's work on higher-order rewriting and because of their complementarity, many frameworks have been designed with a view to integrating these two formalisms. This integration has been handled either by enriching first-order rewriting with higher-order capabilities, by adding algebraic features to $\la$-calculus or, more recently, by a uniform integration of both paradigms.
In the first case, we find the works on combinatory reduction systems~\cite{kor93tcs} and other higher-order rewriting systems~\cite{wolfram93phd,nipkow91lics}, each of them subsumed by van Oostrom and van Raamsdonk's axiomatization of HORSs~\cite{or94lfcs}, and by the very expressive framework of CCERSs~\cite{gkk05ptc}. The second case concerns the more atomic combination of $\la$-calculus with term rewriting~\cite{jo91lics,blanqui05mscs}, and the last category the rewriting calculus~\cite{cirstea01igpal,bckl03popl}. Despite this strong interest in the combination of both concepts, few works have considered {\em conditional} higher-order rewriting together with $\la$-calculus. This is of particular interest for both computation and deduction. Indeed, conditional rewriting is very convenient when programming with rewrite rules, and its combination with higher-order features provides a flexible setting for combining algebraic and functional programming. This is also of great use in proof assistants based on the Curry-Howard-de Bruijn isomorphism where, as emphasized in {\em deduction modulo}~\cite{dhk03jar,blanqui05mscs}, rewriting capabilities for defining functions and proving equalities automatically are clearly of great interest when making large proof developments. Furthermore, while many confluence proofs often rely on termination and local confluence, in some cases confluence may be necessary for proving termination (e.g.\ with type-level rewriting or strong elimination~\cite{blanqui05mscs}). It is therefore of crucial interest to also have criteria for the preservation of confluence when combining conditional rewriting and $\b$-reduction without assuming the termination of the combined relation. In particular, assuming the termination of just one of the two relations is already of interest.
The earliest work on preservation of confluence when combining typed $\la$-calculus and first-order rewriting concerns the simple type discipline~\cite{breazu88lics}, and the result has been extended to polymorphic $\la$-calculus in~\cite{bg94ic}. Concerning untyped $\la$-calculus, the result was shown in~\cite{muller92ipl} for left-linear rewriting. It is extended as a modularity result for higher-order rewriting in~\cite{or94lfcs}. In~\cite{dougherty92ic}, it is shown that left-linearity is not necessary, provided that the terms considered are strongly $\b$-normalizable and are well-formed with respect to the declared arity of symbols, a property that we call here {\em arity compliance}. Higher-order conditional rewriting is studied in~\cite{avenhaus94ccl}, and the confluence result relies on joinability of critical pairs, hence on termination of the combined rewrite relation. An approach closer to ours is taken with a form of conditional $\la$-calculus in~\cite{takahashi93tlca}, and with CCERSs in~\cite{gkk05ptc}. In both cases, confluence relies on a form of conditional orthogonality. However, in these works, conditions are abstract predicates on terms, and confluence is achieved by assuming that the satisfaction of these predicates is preserved by reduction. These results do not directly apply in our case, since proving that the satisfaction of conditions is preserved by reduction is actually the most difficult task for confluence, and this requires precise knowledge of the shape of the conditions. These systems are related to those presented in Section~\ref{sec-exclus}. Our results can rather be seen as a form of modularity property. Concerning confluence of unconditional term rewriting, the early work of~\cite{toyama87jacm} has been extended to the higher-order case in~\cite{or94lfcs}.
In the case of conditional rewriting, while modularity properties have been investigated in the pure first-order setting (e.g.~\cite{middeldorp91ctrs,gramlich96tcs}), to the best of our knowledge there was up to now no result on the preservation of confluence for the {\em combination with $\b$-reduction}. In this paper, we study the confluence property of the combination of $\b$-reduction with a confluent conditional rewrite system. This of course relies on a clear understanding of the conditional rewrite relation under use and, as usual, the way matching is performed and conditions are checked is crucial. We always consider left-hand sides without abstractions. Thus, rewriting does not require higher-order pattern matching but relies only on syntactic matching. We begin in Section~\ref{sec-calc} by presenting our notations and some basic facts about $\la$-calculus and conditional rewriting. We start from $\la$-calculus and discuss, via Böhm's theorem, the need to enrich its syntax with symbols defined by rewrite rules. We then present the different kinds of conditional rewriting considered in this paper. We are interested in {\em join} conditional rewriting: the conditions of rewrite rules are evaluated by testing the {\em joinability} of terms. Given a conditional rewrite system, we consider two conditional rewrite relations, depending on whether $\b$-reduction is allowed in the evaluation of conditions or not. The case where $\b$-reduction is allowed in the conditions is termed {\em $\b$-conditional} rewriting. We also discuss the particular case of {\em normal} rewriting, i.e.\ when one side of the conditions is made of terms in normal form. We then give two examples of conditional rewrite systems. The first one recalls the use of conditional rewriting in the study of $\la$-calculus with surjective pairing~\cite{vrijer89lics}. The second one is a term manipulation system inspired by a program of~\cite{huet86notes}.
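To fix ideas before the formal definitions, here is a small illustrative join conditional rule of our own devising (the symbols $\mathsf{cons}$ and $\mathsf{true}$ are assumed constructors here, and are not part of the systems studied later): a rule in the style of \[ \filter \esp p \esp (\mathsf{cons} \esp x \esp l) \quad\a\quad \mathsf{cons} \esp x \esp (\filter \esp p \esp l) \quad\Leftarrow\quad p \esp x \downarrow \mathsf{true} \] may be applied to an instance of its left-hand side only if the corresponding instance of $p \esp x$ and the term $\mathsf{true}$ have a common reduct. In $\b$-conditional rewriting, $\b$-steps may additionally be used when reducing $p \esp x$, e.g.\ when $p$ is instantiated by an abstraction.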
We conclude this section by some basic material on confluence. In Section~\ref{sec-over-gen} we state precisely the known results from which this paper starts and give a short overview of our results. The general goal of this paper is to give sufficient conditions for the confluence of $\b$-reduction with $\b$-conditional rewriting (i.e.\ with $\b$-steps allowed in the evaluation of conditions). Our main objective is the {\em preservation} of confluence, that is, given a conditional rewrite system, to obtain the confluence of $\b$-conditional rewriting combined with $\b$-reduction assuming the confluence of conditional rewriting. Our approach is to generalize known results on the combination of $\b$-reduction with unconditional rewriting. We present in Section~\ref{sec-rew} the two different cases we start with: M{\"u}ller's result~\cite{muller92ipl} for left-linear rewriting, and Dougherty's result~\cite{dougherty92ic} for algebraic rewriting on strongly $\b$-normalizing terms respecting some arity conditions (called {\em arity compliance}). In each case, we will first consider $\b$-reduction with conditional rewriting (when $\b$-reduction is not allowed in the evaluation of conditions) and then extend these results to $\b$-conditional rewriting. However, Example~\ref{ex-Bconfl} shows that for $\b$-conditional rewriting, we cannot go beyond algebraic rewriting with arity conditions. In order to handle rewrite rules which can contain active variables and abstractions in right-hand sides or in conditions, we build on orthogonal conditional rewriting. Known results on the confluence of orthogonal normal algebraic conditional rewriting are discussed in Section~\ref{sec-orth-cond}. We conclude this section with an informal overview of our results. They are summarized in Figure~\ref{fig-oversimple}, page~\pageref{fig-oversimple}. The last three sections contain the technical contributions of the paper.
We begin in Section~\ref{sec-Aconfl} by extending M{\"u}ller's and Dougherty's results to conditional rewriting combined with $\b$-reduction. M{\"u}ller's result~\cite{muller92ipl} assumes the left-linearity of rewrite rules. Of course, with conditional rewriting, non-linearity can be simulated by linear systems. Extending the result of M{\"u}ller~\cite{muller92ipl}, we prove in Section~\ref{sec-Aconfl-sc} that the confluence of conditional rewriting combined with $\b$-reduction follows from the confluence of conditional rewriting when conditional rules are applicative, left-linear and semi-closed, which means that the conditions of rules cannot test for equality of open terms. In Section~\ref{sec-Aconfl-wn} we adapt Dougherty's method~\cite{dougherty92ic} to conditional rewriting and extend it to show that for a large set of {\em weakly} $\b$-normalizing terms, the left-linearity and semi-closure hypotheses can be dropped provided that rules are algebraic and terms are arity compliant. We then turn in Section~\ref{sec-Bconfl} to the confluence of $\b$-conditional rewriting combined with $\b$-reduction. We show in Example~\ref{ex-Bconfl} that confluence is in general not preserved with non-algebraic rules. When rules are algebraic, we show that arity compliance is a sufficient condition to deduce the confluence of $\b$-conditional rewriting combined with $\b$-reduction from the confluence of conditional rewriting alone. This is done first for left-linear semi-closed systems in Section~\ref{sec-Bconfl-sc}, a restriction that we then show to be superfluous when considering only {\em weakly} $\b$-normalizing terms (Section~\ref{sec-Bconfl-wn}). The case of non-algebraic rules is handled in Section~\ref{sec-exclus}. Such rules can contain active variables and abstractions in right-hand sides or in conditions (but still not in left-hand sides).
In this case, the confluence of $\b$-conditional rewriting combined with $\b$-reduction does not follow anymore from the confluence of conditional rewriting. We show that confluence holds under a syntactic condition, called {\em orthonormality}, ensuring that if two rules overlap at a non-variable position, then their conditions cannot both be satisfied. An orthonormal system is therefore an orthogonal system whose orthogonality follows from the confluence of the rewrite relation (recall that with conditional rewriting, critical pairs carry conditions; hence orthogonality depends on the rewrite relation, since it depends on the satisfiability of these conditions). This paper is an extended version of~\cite{bkr06fossacs}. We assume familiarity with $\la$-calculus~\cite{barendregt84book} and conditional rewriting~\cite{do90tcs,ohlebusch02book}. We recall the main notions in the next section. \begin{figure} \caption{Overview of the results. Algebraic and applicative terms are defined in Def.~\ref{def-terms}.} \label{fig-oversimple} \end{figure} \section{Lambda-calculus and conditional rewriting} \label{sec-calc} In this section we present the tools used in this paper and recall some well-known facts. \subsection{Terms and rewrite relations} We consider $\la$-terms with curried function symbols. Among them we distinguish {\em applicative terms}, which do not contain abstractions, and {\em algebraic terms}, which are applicative terms with no variable in active position. \begin{definition}[Terms] \label{def-terms} Let $\Si$ be a set of function symbols and $\Vte$ be a set of variables. \begin{enumerate} \item The set $\Te(\Si)$ of {\em $\la$-terms} is defined by the grammar \[ t,u \in \Te(\Si) \quad::=\quad x \gs \la x.t \gs t \ptesp u \gs \sff ~, \] where $x \in \Vte$ and $\sff \in \Si$. We denote by $\Te$ the set $\Te(\emptyset)$ of {\em pure} $\la$-terms. \item The set of {\em applicative terms} is defined by the grammar \[ t,u \quad::=\quad x \gs t \ptesp u \gs \sff ~.
\] \item The set of {\em algebraic terms} is defined by the grammar \[ t \quad::=\quad x \gs \sff \ptesp t_1 \dots t_n ~. \] \end{enumerate} \end{definition} \noindent As usual, $\la$-terms are considered equal modulo $\alpha$-conversion. We denote by $\FV(t)$ the set of variables occurring free in the term $t$. A term is {\em closed} if it has no free variables and {\em open} otherwise; it is {\em linear} if each of its free variables occurs at most once. Given $h \in \Vte \cup \Si$, we write $h \ptesp \vt$ for $h \ptesp t_1 \dots t_n$ and let $|\vt| \esp\deq\esp n$. Similarly, we write $\la \vx.t$ for $\la x_1.\dots \la x_n.t$. \begin{example} Intuitively, an algebraic term is a curried first-order term with no arity constraint on symbols. For instance the terms $\filter$ and $\filter \ptesp p \ptesp x \ptesp l$ are algebraic, as well as $\filter \ptesp p \ptesp x \ptesp l \ptesp y \ptesp z$. An applicative term is like an algebraic term except that it may also contain variables in head position, such as $x \ptesp \filter$. The $\la$-term $\la x.x$ is not applicative (and thus not algebraic). \end{example} Many proofs in this paper proceed by induction on the structure of $\la$-terms. However, it is often not convenient to reason directly on their syntax as given by the productions of $\Te(\Si)$. For instance, knowing that a term $t$ is an application, say $t = u \esp v$, gives little information on its behavior: we do not know whether $u$ is an abstraction, in which case $t$ is a $\b$-redex, or whether it is an algebraic term, in which case $t$ may be the instantiated left-hand side of a rewrite rule. It is therefore useful to have an induction principle on $\la$-terms which makes more information about their structure apparent. This is provided by the following well-known lemma, due to Wadsworth~\cite{wadsworth71phd}.
\begin{lemma}[\cite{wadsworth71phd}] \label{lem-wadsworth} Any $\la$-term $t \in \Te(\Si)$ can be uniquely written in one of the following forms: \begin{align} \tag{a} \la x_1.\dots \la x_m.v\, &a_1 \dots a_n \\ \tag{b} \text{or} \quad \la x_1.\dots \la x_m.(\la y.b) &a_0 \, a_1 \dots a_n \end{align} where $n,m \geq 0$ and $v \in \Vte \cup \Si$. \end{lemma} A {\em substitution} is a map $\s : \Vte \a \Te(\Si)$ of finite domain. We denote by $t\s$ the capture-avoiding application of the substitution $\s$ to the term $t$. If $\s$ is the substitution which maps $x_i$ to $u_i$ for all $i \in \{1,\dots,n\}$, then we may write $t\wthdots{x_1}{u_1}{x_n}{u_n}$ instead of $t\s$. \begin{definition}[Rewrite Relations] A rewrite relation is a binary relation $\a$ on $\Te(\Si)$ closed under the following rules, where $\s$ is a substitution: \[ (\abs)~ \dfrac{t \esp\a\esp u} {\la x.t \esp\a\esp \la x.u} \qquad (\appl)~ \dfrac{t \esp\a\esp u} {t \ptesp v \esp\a\esp u \ptesp v} \qquad (\appr)~ \dfrac{t \esp\a\esp u} {v \ptesp t \esp\a\esp v \ptesp u} \qquad (\subst)~ \dfrac{t \esp\a\esp u} {t\s \esp\a\esp u\s} \] \end{definition} \noindent We denote by $\a^+$ the transitive closure of $\a$, by $\a^\re$ its reflexive closure, by $\a^*$ its reflexive and transitive closure, by $\al$ its inverse and by $\alr$ its reflexive, symmetric and transitive closure. We write $t \ad u$ if there exists $v$ such that $t \a^* v \al^* u$, and $t \a^k u$ if $t \a^* u$ in at most $k$ steps. Given two rewrite relations $\a_{A}$ and $\a_{B}$, we let $\a_{A \cup B} \ptesp\mbin{\deq}\ptesp \a_A \cup \a_B$. We say that a term $t$ is an {\em $A$-normal form} if there is no $u$ such that $t \a_A u$. We let $\SN_A$, {\em the set of strongly $A$-normalizing terms}, be the set of terms on which the relation $\a_A$ is well-founded, and we let $\WN_A$, {\em the set of weakly $A$-normalizing terms}, be the set of terms which rewrite to an $A$-normal form.
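As a standard illustration of the difference between these two sets (anticipating the $\b$-reduction defined in the next subsection): with $\omega \deq \la x.x \esp x$, the term $(\la x.y) \esp (\omega \esp \omega)$ is weakly but not strongly $\b$-normalizing, since it reduces in one step to the normal form $y$, but contracting the redex $\omega \esp \omega$ yields an infinite reduction. Hence $\SN_\b \subsetneq \WN_\b$ on $\Te(\Si)$.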
Rewrite relations $\a$ satisfy the following property: for all $t,u,v \in \Te(\Si)$, \[ \text{if}\qquad t \a u \qquad\text{then}\qquad v\wth{x}{t} \esp\a^*\esp v\wth{x}{u} ~. \] In the following, we will often use a stronger property: for all $t,u,v \in \Te(\Si)$, \[ \text{if}\qquad t \a u \qquad\text{then}\qquad v\wth{x}{t} \esp\a\esp v\wth{x}{u} ~. \] This is in general false with rewrite relations, but this holds with {\em parallel} rewrite relations. \begin{definition}[Parallel Rewrite Relations] \label{def-par-rew} A {\em parallel rewrite relation} is a rewrite relation $\rpr$ closed under the rules \[ (\pvar)~ \dfrac{} {x \esp\rpr\esp x} \qquad\qquad (\psymb)~ \dfrac{} {\sff \esp\rpr\esp \sff} \qquad\qquad (\papp)~ \dfrac{t_1 \esp\rpr\esp u_1 \qquad t_2 \esp\rpr\esp u_2} {t_1 \ptesp t_2 \esp\rpr\esp u_1 \ptesp u_2} \] \end{definition} \noindent Note that given a parallel rewrite relation $\rpr$, we have $\la x.t \rpr \la x.u$ if $t \rpr u$, since by definition parallel rewrite relations are rewrite relations. Given a rewrite relation $\a$ and two substitutions $\s$ and $\s'$, we write $\s \a \s'$ if $\s$ and $\s'$ have the same domain and $\s(x) \a \s'(x)$ for all $x \in \dom(\s)$. \begin{proposition} \label{prop-par-rew} If $\rpr$ is a parallel rewrite relation on $\Te(\Si)$ then $\s \rpr \s'$ implies $v\s \rpr v\s'$. \end{proposition} \begin{proof} By induction on $v$. \begin{description} \item[$v \in \Vte \cup \Si$.] If $v = x \in \dom(\s)$ then $v\s = \s(x) \rpr \s'(x) = v\s'$. Otherwise, $v\s = v \rpr v = v\s'$ thanks to the rules $(\pvar)$ and $(\psymb)$. \item[$v = v_1 \ptesp v_2$.] By induction hypothesis we have $v_i\s \rpr v_i\s'$ for all $i \in \{1,2\}$, and we conclude by the rule $(\papp)$. \item[$v = \la x.v_1$.] By induction hypothesis. \qedhere \end{description} \end{proof} \noindent In particular, if $\dom(\s) \cap \FV(v) = \emptyset$ then $v \rpr v$: parallel rewrite relations are reflexive. 
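A concrete instance of Definition~\ref{def-par-rew} is the parallel $\b$-step that contracts simultaneously all the $\b$-redexes present in a term. The following minimal sketch uses a tuple encoding of terms that is ours, not the paper's; substitution is naive (no capture handling), so the examples use terms where no capture can occur.

```python
# One maximal parallel beta-step: contract simultaneously all beta-redexes
# visible in the input term. Terms are nested tuples ('var', x) / ('sym', f) /
# ('app', t, u) / ('lam', x, t); substitution is naive (no capture handling).

def subst1(t, x, u):
    """Naive substitution t[x := u] (assumes no variable capture)."""
    if t[0] == 'var': return u if t[1] == x else t
    if t[0] == 'sym': return t
    if t[0] == 'app': return ('app', subst1(t[1], x, u), subst1(t[2], x, u))
    return t if t[1] == x else ('lam', t[1], subst1(t[2], x, u))

def par_beta(t):
    """The maximal parallel beta-step out of t."""
    if t[0] in ('var', 'sym'): return t
    if t[0] == 'lam': return ('lam', t[1], par_beta(t[2]))
    if t[1][0] == 'lam':                    # (lam x. b) a : contract it, using
        return subst1(par_beta(t[1][2]), t[1][1], par_beta(t[2]))
    return ('app', par_beta(t[1]), par_beta(t[2]))
```

Taking $v = z \ptesp z$ and a $\b$-step $t \rpr u$, one checks that $v\wth{z}{t}$ reduces to $v\wth{z}{u}$ in a single parallel step, as in Proposition~\ref{prop-par-rew}.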
\subsection{Lambda-calculus} The $\la$-calculus is characterized by $\b$-reduction. This is the smallest rewrite relation $\a_\b$ on $\Te(\Si)$ such that \[ (\la x.t)u \quad\a_\b\quad t\wth{x}{u} ~. \] In order to understand our motivations for studying the combination of $\la$-calculus with (conditional) rewriting, let us recall some facts about {\em pure} $\la$-calculus. It is well-known that integers can be coded within pure $\la$-calculus. An example of such coding is that of {\em Church's numerals}. The $\Zero$ and $\Succ$ functions are represented by the following terms: \[ \Zero \quad\deq\quad \la x.\la f.x \qquad\text{and}\qquad \Succ \quad\deq\quad \la n.\la x.\la f.f \esp (n \esp x \esp f) ~. \] We can code {\em iteration} with the term $\Iter \esp x \ptesp y \ptesp z \esp\deq\esp z \ptesp x \ptesp y$, and for all $n,u,v \in \Te$ we have \[ \begin{array}{l !{\quad=\quad} l !{\quad\a^2_\b\quad} l} \Iter \esp u \esp v \esp \Zero & (\la xf.x) \esp u \esp v & u \\ \Iter \esp u \esp v \esp (\Succ \esp n) & (\la xf.f \esp (n \esp x \esp f)) \esp u \esp v & v \esp (n \esp u \esp v) \quad=\quad v \esp (\Iter \esp u \esp v \esp n) ~. \end{array} \] However, {\em recursion} cannot be implemented in constant time (see for instance~\cite{parigot89csl}): there is no term $\Rec$ and no $k \in \N$ such that for all $u,v,n \in \Te$, \[ \Rec \esp u \esp v \esp \Zero \quad\a^k_\b\quad u \qquad\text{and}\qquad \Rec \esp u \esp v \esp (\Succ \esp n) \quad\a^k_\b\quad v \esp (\Rec \esp u \esp v \esp n) \esp n ~. \] In particular, there is no coding of the predecessor function for Church's numerals in constant time. This suggests that practical uses of the $\la$-calculus may require extensions of $\b$-reduction. At this point it is interesting to recall B{\"o}hm's theorem. It states that any proper extension of $\b\eta$-conversion on the set of weakly $\b$-normalizing pure $\la$-terms is inconsistent.
\begin{theorem}[B\"ohm~\cite{bohm68}] Let $\a_\e$ be the smallest rewrite relation on $\Te$ such that $\la x.t \ptesp x \a_\e t$ if $x \notin \FV(t)$. If $\simeq$ is an equivalence relation on $\Te$ which is stable by contexts, contains $\alr_{\b\e}$ and such that $\simeq \setminus \alr_{\b\e}$ contains a pair of weakly $\b$-normalizing terms, then for all $t,u \in \Te$ we have $t \simeq u$. \end{theorem} This theorem suggests to find extensions of $\b$-reduction operating on extensions of the set of pure $\la$-terms $\Te$. A possibility, that we consider in this paper, is to work with {\em function symbols} $\sff \in \Si$ defined by {\em rewrite rules}. \subsection{Conditional rewriting} In this paper, we are interested in {\em conditional} rewriting. The following example introduces the main ideas. Consider lists built from the empty list $\nil$ and the constructor $\cons$. We use the symbols $\true$ and $\false$ to represent the boolean values "true" and "false". We would like to define, via rewriting, a function $\filter$ such that \begin{itemize} \item $\filter \ptesp p \ptesp \nil$ rewrites to $\nil$, \item $\filter \ptesp p \ptesp (\cons \ptesp t \ptesp ts)$ rewrites to $\cons \ptesp t \ptesp (\filter \ptesp p \ptesp ts)$ if $p \ptesp t$ rewrites to $\true$, and \item $\filter \ptesp p \ptesp (\cons \ptesp t \ptesp ts)$ rewrites to $\filter \ptesp p \ptesp ts$ if $p \ptesp t$ rewrites to $\false$. 
\end{itemize} This specification can be written using {\em conditional rewrite rules} ($\sgt$ reads {\em implies}): \begin{equation} \begin{aligned} \label{eqn-filter} \begin{array}{l c l c l c l} & & & & \filter \esp p \esp \nil & \at & \nil \\ p \esp x & = & \true & \sgt & \filter \esp p \esp (\cons \esp x \esp xs) & \at & \cons \esp x \esp (\filter \esp p \esp xs) \\ p \esp x & = & \false & \sgt & \filter \esp p \esp (\cons \esp x \esp xs) & \at & \filter \esp p \esp xs \end{array} \end{aligned} \end{equation} If we try to define a rewrite relation $\a$ that corresponds to our specification, we get that \begin{equation} \filter \esp p \esp (\cons \esp t \esp ts) \quad\a\quad \cons \esp t \esp (\filter \esp p \esp ts) \qquad\text{if}\qquad p \esp t ~\a^*~ \true ~. \end{equation} In other words, to define $\a$ in the step \[ \filter \esp p \esp (\cons \esp t \esp ts) \quad\a\quad \cons \esp t \esp (\filter \esp p \esp ts) ~, \] we need to test whether $p \ptesp t \a^* \true$, hence to use the relation $\a$. This circularity can be broken by using an inductive definition of conditional rewriting: the relation $\a$ is stratified into relations $(\a_i)_{i \in \N}$. The correctness of the definition is ensured by Tarski's fixpoint theorem, which can be applied because, when replacing the symbol $=$ by $\a^*$ in (\ref{eqn-filter}), the obtained formula is {\em positive} in $\a$ (it is in fact a Horn clause). We now turn to formal definitions. \begin{definition}[Conditional Rewrite Rules] \label{def-cond-rules} A conditional rewrite rule is an expression of the form \[ d_1 = c_1 \esp\land\esp \dots \esp\land\esp d_n = c_n \esp\sgt\esp l \dbesp\at\dbesp r \] where $d_1,\dots,d_n,c_1,\dots,c_n,l,r \in \Te(\Si)$ and \begin{enumerate} \item every variable of $\vd,\vc,r$ occurs also in $l$, \item $l$ is an algebraic term which is not a variable.
\end{enumerate} In conditional rewrite rules, we distinguish \begin{itemize} \item the left-hand side $l$ and the right-hand side $r$; \item the conditions $d_1 = c_1 \esp \land \dots \land \esp d_n = c_n$. \end{itemize} A rule $d_1 = c_1 \land \dots \land d_n = c_n \sgt l \at r$ is {\em unconditional} if $n = 0$. It is {\em left-linear} if $l$ is linear. \end{definition} Since left-hand sides are algebraic terms, rewriting is performed using syntactic first-order matching. Note that the conditions of rewrite rules are not symmetric: the condition $d = c$ is not the same as $c = d$. Given a set $\R$ of conditional rewrite rules, different conditional rewrite relations can be defined, depending on the evaluation of the conditions: by conversion, by joinability or by reduction. This leads respectively to semi-equational, join and oriented conditional rewriting. In this paper, we focus on join conditional rewriting, and call it simply {\em conditional rewriting}. We also consider the case of join conditional rewriting with $\beta$-reduction allowed in the evaluation of conditions, and call it {\em $\beta$-conditional rewriting}. \begin{definition}[Conditional Rewriting] \label{def-cond-rew} Let $\R$ be a set of conditional rewrite rules. \begin{itemize} \item The {\em conditional rewrite relation} $\a_{\R}$ is defined as \[ \a_{\R} \quad\deq\quad \bigcup_{i \in \N} \a_{\R_i} ~, \] where $\a_{\R_0} \ \deq\ \emptyset$ and for all $i \in \N$, $\a_{\R_{i+1}}$ is the smallest rewrite relation such that for every rule $\vd = \vc \sgt l \at_\R r$ and every substitution $\s$, \[ \text{if}\qquad \vd\s \quad\ad_{\R_{i}}\quad \vc\s \qquad\text{then}\qquad l\s \quad\a_{\R_{i+1}}\quad r\s ~.
\] \item The {\em $\b$-conditional rewrite relation} $\a_{\R(\b)}$ is defined as \[ \a_{\R(\b)} \quad\deq\quad \bigcup_{i \in \N} \a_{\R(\b)_i} ~, \] where $\a_{\R(\b)_0} \ \deq\ \emptyset$ and for all $i \in \N$, $\a_{\R(\b)_{i+1}}$ is the smallest rewrite relation such that for every rule $\vd = \vc \sgt l \at_\R r$ and every substitution $\s$, \[ \text{if}\qquad \vd\s \quad\ad_{\b \cup \R(\b)_{i}}\quad \vc\s \qquad\text{then}\qquad l\s \quad\a_{\R(\b)_{i+1}}\quad r\s ~. \] \end{itemize} \end{definition} \noindent Hence, with conditional rewriting $\a_\R$, $\b$-reduction is not allowed in the evaluation of conditions, while it is allowed with $\b$-conditional rewriting $\a_{\R(\b)}$. Note that $\a_{\R} \sle \a_{\R(\b)}$. The converse is false, as shown by the following example. \begin{example} Consider the rule \[ p\esp x \ =\ {\true} \quad \sgt\quad {\filter}\esp p \esp (\cons \esp x \esp l) \quad \at\quad \cons \esp x \esp ({\filter}\esp p \esp l) \] taken from the conditional rewrite system~(\ref{eqn-filter}), and assume the rule $\id \esp x \at x$. With conditional rewriting we have \[ \filter\esp \id \esp (\cons \esp \true \esp ts) \quad\a_{\R}\quad \cons \esp \true \esp (\filter \esp \id \esp ts) \qquad\text{since}\qquad \id \esp \true \quad\a_{\R}\quad \true ~. \] With $\b$-conditional rewriting we also have \[ \filter\esp (\la x. x) \esp (\cons\esp \true \esp ts) \dbesp\a_{\R(\b)}\dbesp \cons \esp \true \esp (\filter \esp (\la x.x) \esp ts) \quad~\text{since}~\quad (\la x. x) \esp \true \dbesp\a_{\b}\dbesp \true ~, \] but the term $\filter \esp (\la x.x) \esp (\cons \esp \true \esp ts)$ is an $\a_{\R}$-normal form. \end{example} An interesting particular case of join conditional rewriting is {\em normal} rewriting. \begin{definition}[Normal Conditional Rewriting] \label{def-norm-rew} Let $\R$ be a conditional rewrite system.
If for every rule $\vd = \vc \sgt l \at_\R r$, the conditions $\vc$ are closed terms in $\a_{\R}$-normal form, then we say that $\a_{\R}$ is a {\em normal conditional rewrite relation}. \end{definition} \noindent In general, for a given conditional system the normal forms w.r.t.\ join and semi-equational rewriting are not the same (this is a by-product of the fact that semi-equational orthogonal rewriting is confluent, while join orthogonal rewriting is not~\cite{bk86jcss,ohlebusch02book}, see also Theorem~\ref{thm-orth-cond}). The notion of normal conditional rewriting presented in Definition~\ref{def-norm-rew} is thus specific to join conditional rewriting (it is easy to see that it coincides with normality for oriented rewriting). An important point with conditional rewriting is the possible undecidability of a rewriting step. This impacts the effectiveness of the notion of normal conditional rewriting. \begin{remark}[Decidability] \label{rem-dec} One-step conditional rewrite relations are in general {\em not} decidable. Consider a rule $\vd = \vc \sgt l \at r$. Because of the recursive definition of $\a_\R$, to know whether $l\s \a_\R r\s$, we need to reduce the terms $\vd\s$ and $\vc\s$. This is in general undecidable, even for terminating systems~\cite{kaplan84tcs} (see also~\cite{ohlebusch02book}). These facts have consequences on normal rewriting. Given a set of conditional rules, to determine whether it can generate a normal relation, we have to check that a part of the conditions is in normal form. This is in general undecidable, even when the rewrite relation terminates. We therefore focus on join rewriting because it seems to be a more broadly applicable theory than normal rewriting, even if the implementation of conditional rewriting is easier when we {\em already know} that the conditional rewrite relation is normal. \end{remark} Our results on the preservation of confluence impose restrictions on rewrite rules.
Some of them concern the terms which can appear in different parts of the rules. This motivates the following definition. Recall from Definition~\ref{def-cond-rules} that left-hand sides are always assumed to be algebraic. \begin{definition}[Applicative and Algebraic Conditional Rewrite Rules] A conditional rewrite rule $\vd = \vc \sgt l \at r$ is \begin{itemize} \item {\em right-applicative} if $r$ is an applicative term, \item {\em applicative} if it is right-applicative and if moreover the terms $\vd,\vc$ are applicative, \item {\em right-algebraic} if $r$ is an algebraic term, \item {\em algebraic} if it is right-algebraic and if moreover the terms $\vd,\vc$ are algebraic. \end{itemize} A rewrite system $\R$ is {\em right-applicative} (resp. {\em applicative}, {\em right-algebraic}, {\em algebraic}) if all its rules are right-applicative (resp. applicative, right-algebraic, algebraic). \end{definition} In the conditional rewrite system~(\ref{eqn-filter}), the first rule $\filter \esp p \esp \nil \ \at\ \nil$ is algebraic. The two other rules are right-algebraic. They both use the term $p \esp x$ in their conditions, where $p$ is a variable. This term is applicative but not algebraic. \subsection{Examples} \label{sec-ex} We now give some examples of conditional rewrite systems. \subsubsection{Coherence of lambda-calculus with surjective pairing} \label{sec-ex-sp} We begin by recalling one use of conditional rewriting in the study of $\la$-calculus with surjective pairing. We use $\pair \esp t_1 \esp t_2$ to denote the pairing of $t_1$ and $t_2$. The rewrite rules for binary products are the following: \[ \fst \esp (\pair \esp x_1 \esp x_2) \quad\at_\pi\quad x_1 \qquad\qquad \snd \esp (\pair \esp x_1 \esp x_2) \quad\at_\pi\quad x_2 ~. \] It is well-known that the combination of these rules with $\b$-reduction is confluent (see Theorem~\ref{thm-muller}, proved in~\cite{muller92ipl}). This follows from the {\em left-linearity} of the rewrite rules. 
However, when we add the rule for surjective pairing \[ \pair \esp (\fst \esp x) \esp (\snd \esp x) \quad\at_{\SP}\quad x \] then the combination of the resulting rewrite relation with $\b$-reduction is not confluent~\cite{klop80phd}. Note that the rule $\at_\SP$ is not left-linear: the variable $x$ appears twice in the left-hand side. However, the corresponding conversion is coherent: there are two terms that are not convertible. This was first shown using semantic methods~\cite{scott75scandiv}. In~\cite{vrijer89lics}, de Vrijer uses semi-equational $\b$-conditional rewriting to give a syntactic proof of the coherence of $\b$-reduction combined with surjective pairing. His rules are those of $\at_\pi$ plus \[ \snd \esp x \esp=\esp y \esp\sgt\esp \pair \esp (\fst \esp x) \esp y \dbesp\at_{lr}\dbesp x \qquad\qquad \fst \esp x \esp=\esp y \esp\sgt\esp \pair \esp y \esp (\snd \esp x) \dbesp\at_{lr}\dbesp x ~. \] The resulting relation is confluent modulo an equivalence relation, and this allows de Vrijer to show that $\la$-calculus plus pairs and surjective pairing is a conservative extension of the pure $\la$-calculus: for any two {\em pure} $\la$-terms $t,u \in \Te$, \[ t \alr_\b u \qquad\text{if and only if}\qquad t \alr_{\b \cup \pi \cup \SP} u ~. \] \subsubsection{A term manipulation system} \label{sec-ex-cond-term} Our main example is an adaptation of a \textsc{CAML} program of~\cite{huet86notes}. It defines functions that replace, in a term, the subterm at a given occurrence by another term. Terms are represented by trees whose nodes contain a label and the list of their successor nodes. This system must be read having in mind the combination of $\la$-calculus with (join) $\b$-conditional rewriting. We begin with some basic functions on lists.
\[ \begin{array}{c !{\qquad} c} \begin{array}{l !{\esp \at \esp} l} {\car}\esp (\cons \esp x \esp l) & x \\ {\car}\esp \nil & {\err} \end{array} & \begin{array}{l !{\esp \at \esp} l} {\cdr}\esp (\cons \esp x \esp l) & l \\ {\cdr}\esp \nil & {\err} \end{array} \\ \\ \begin{array}{l !{\esp \at \esp} l} {\get}\esp l\esp \zero & {\car}\esp l \\ {\get}\esp l\esp (\suc \esp n) & {\get}\esp ({\cdr}\esp l)\esp n \end{array} & \begin{array}{l !{\esp \at \esp} l} {\length}\esp \nil & \zero \\ {\length}\esp (\cons \esp x \esp l)& \suc \esp (\length \esp l) \end{array} \\ \\ \multicolumn{2}{c} { \begin{array}{l c l c l c l} & & & & {\filter}\esp p \esp \nil & \at & \nil \\ p\esp x & = & {\true} & \sgt & {\filter}\esp p \esp (\cons \esp x \esp l) & \at & \cons \esp x \esp ({\filter}\esp p \esp l) \\ p\esp x & = & {\false} & \sgt & {\filter}\esp p \esp (\cons \esp x \esp l) & \at & {\filter}\esp p \esp l \end{array} } \end{array} \] Let us define $\apply$ such that $\apply \ptesp f \ptesp n \ptesp l$ applies $f$ to the $n$th element of $l$. It uses $\appaux$ as an auxiliary function: \[ \begin{array}{l c l c l c l} > ({\length}\esp l)\esp n & = & {\true} & \sgt & {\apply}\esp f\esp n\esp l & \at & {\appaux}\esp f\esp n\esp l \\ > ({\length}\esp l)\esp n & = & {\false} & \sgt & {\apply}\esp f\esp n\esp l & \at & {\err} \\ \\ & & & & {\appaux}\esp f\esp \zero \esp l & \at & \cons \esp (f\esp ({\car}\esp l)) \esp ({\cdr}\esp l) \\ & & & & {\appaux}\esp f\esp (\suc \esp n)\esp l & \at & \cons \esp ({\car}\esp l) \esp ({\appaux}\esp f \esp n\esp ({\cdr}\esp l)) \end{array} \] We represent first-order terms by trees with nodes $\node \ptesp y \ptesp l$ where $y$ is intended to be a label and $l$ the list of sons. Positions are lists of integers and $\occ \ptesp u \ptesp t$ tests if $u$ is an occurrence of $t$. 
We define it as follows: \[ \begin{array}{l c l c l c l} & & & & {\occ}\esp \nil\esp t & \at & \true \\ > ({\length}\esp l)\esp x & = & {\false} & \sgt & {\occ}\esp (\cons \esp x \esp o)\esp ({\node}\esp y\esp l) & \at & \false \\ > ({\length}\esp l)\esp x &= & {\true} & \sgt & {\occ}\esp (\cons \esp x \esp o)\esp ({\node}\esp y\esp l) & \at & \occ\esp o\esp ({\get}\esp l\esp x) \end{array} \] Finally, $\replace \ptesp t \ptesp o \ptesp s$ replaces by $s$ the subterm of $t$ at occurrence $o$. \[ \begin{array}{c} \begin{array}{l c l c l c l} {\occ}\esp o \esp t & = & \true & \sgt & {\replace}\esp t \esp o \esp s & \at & {\rep}\esp t \esp o \esp s \\ {\occ}\esp o \esp t & = & {\false} & \sgt & {\replace}\esp t \esp o \esp s & \at & {\err} \\ \end{array} \\ \\ \begin{array}{l c l} {\rep}\esp t\esp \nil \esp s & \at & s \\ {\rep}\esp ({\node}\esp y\esp l) \esp (\cons \esp x \esp o)\esp s & \at & {\node}\esp y \esp ({\apply}\esp (\la z.{\rep}\esp z \esp o\esp s)\esp x \esp l) \end{array} \end{array} \] The system {\sf Tree}, which consists of the rules defining $\car$, $\cdr$, $\get$, $\length$ and $\occ$, is algebraic. The rules of $\apply$ and $\appaux$ are right-applicative, and those for $\filter$ contain in their conditions the variable $p$ in active position. This definition of $\rep$ involves a $\la$-abstraction in a right-hand side. In Section~\ref{sec-exclus}, we prove confluence of the relation $\a_{\b \cup \R(\b)}$ induced by the whole system. \subsection{Confluence} \label{sec-confl} The main property of rewrite relations studied in this paper is {\em confluence}. The confluence of a relation $\a$ which has at least two distinct normal forms entails the coherence of the conversion $\alr$. Moreover, it allows terms to be evaluated in a modular way: the choice of the subterm to be evaluated first has no impact on the result of the evaluation. In this section we recall some well-known facts about confluence which will be useful in the following.
A sufficient condition for confluence is the {\em diamond property}. \begin{definition}[Confluence] A rewrite relation $\a$ is {\em confluent} if ${\al^* \a^*} \sle {\a^* \al^*}$ and has the {\em diamond property} if ${\al\a} \sle {\a\al}$. \comment{ Let $\a$ be a rewrite relation on $\Te(\Si)$. \begin{enumerate} \item We say that $\a$ is {\em confluent} if for all $t,u,v \in \Te(\Si)$ such that $u \al^* t \a^* v$, there exists $w \in \Te(\Si)$ such that $u \a^* w \al^* v$. \item We say that $\a$ has the {\em diamond property} if for all $t,u,v \in \Te(\Si)$ such that $u \al t \a v$, there exists $w \in \Te(\Si)$ such that $u \a w \al v$. \end{enumerate} } In diagrammatic form: \[ \begin{array}{c !{\qquad\qquad} c} \xymatrix@R=\xyS@C=\xyS{ & \cdot \ar@{->}[dr]^{}_{*} \ar@{->}[dl]_{}^{*} & \\ \cdot \ar@{..>}[dr]_{}^{*} & & \cdot \ar@{..>}[dl]^{}_{*} \\ & \cdot & \\ } & \xymatrix@R=\xyS@C=\xyS{ & \cdot \ar@{->}[dr]^{} \ar@{->}[dl]_{} & \\ \cdot \ar@{..>}[dr]_{} & & \cdot \ar@{..>}[dl]^{} \\ & \cdot & \\ } \\ \text{Confluence} & \text{Diamond property} \end{array} \] \end{definition} \noindent The stratification of conditional rewrite relations leads to fine-grained notions of confluence. \begin{definition}[Stratified Confluences] \label{def-strat-confl} Assume that $(\a_i)_{i \in \N}$ are rewrite relations and let ${\a} \ptesp\deq\ptesp {\bigcup_{i \in \N} \a_i}$. We say that $\a$ is {\em level confluent} if for all $i \geq 0$ we have ${\al^*_{i} \a^*_{i}} \sle {\a^*_{i} \al^*_{i}}$; and {\em shallow confluent} if for all $i,j \geq 0$ we have ${\al^*_{i} \a^*_{j}} \sle {\a^*_{j} \al^*_{i}}$. 
In diagrammatic form: \[ \begin{array}{c !{\qquad\qquad} c} \xymatrix@R=\xyS@C=\xyS{ & \cdot \ar@{->}[dr]^{i}_{*} \ar@{->}[dl]_{i}^{*} & \\ \cdot \ar@{..>}[dr]_{i}^{*} & & \cdot \ar@{..>}[dl]^{i}_{*} \\ & \cdot & \\ } & \xymatrix@R=\xyS@C=\xyS{ & \cdot \ar@{->}[dr]^{i}_{*} \ar@{->}[dl]_{j}^{*} & \\ \cdot \ar@{..>}[dr]_{i}^{*} & & \cdot \ar@{..>}[dl]^{j}_{*} \\ & \cdot & \\ } \\ \text{Level Confluence} & \text{Shallow Confluence} \end{array} \] \end{definition} \noindent Note that shallow confluence implies level confluence, which in turn implies confluence. For instance, in Section~\ref{sec-exclus} we show that $\a_{\b \cup \R(\b)}$ is shallow confluent for some conditional rewrite systems $\R$ called orthonormal. This entails their confluence. \paragraph{Combinations of rewrite relations.} Since we are interested in the confluence of the combination of two rewrite relations (conditional rewriting and $\la$-calculus), we will use the following notions. \begin{definition}[Commutation] A rewrite relation $\a_A$ {\em commutes} with a rewrite relation $\a_B$ if ${\al^*_A \a^*_B} \sle {\a^*_B \al^*_A}$. \end{definition} \noindent The Hindley-Rosen Lemma is a simple but important tool to prove the confluence of the combination of two rewrite relations. \begin{lemma}[Hindley-Rosen] \label{lem-hr} If $\a_A$ and $\a_B$ are two confluent rewrite relations that commute then $\a_{A \cup B}$ is confluent. \end{lemma} \noindent The next simple lemma is useful to prove the commutation of two relations. \begin{lemma} \label{lem-commut} Let $\a_A$ and $\a_B$ be two rewrite relations such that for all $t,u,v \in \Te(\Si)$, if $u \al_A t \a_B v$ then there is a term $w$ such that $u \a^*_B w \al_A v$. Then $\a_A$ commutes with $\a_B$. \end{lemma} \begin{proof} We show (i) by induction on $\a^*_B$ and deduce (ii) by induction on $\a^*_A$.
\[ \begin{array}{c !{\qquad} c} \xymatrix@R=\xyTS@C=\xyS{ \cdot \ar@{->}[rr]^{B}_{*} \ar@{->}[dd]_{A} & & \cdot \ar@{..>}[dd]^{A} \\ \\ \cdot \ar@{..>}[rr]_{B}^{*} & & \cdot \\ } & \xymatrix@R=\xyTS@C=\xyS{ \cdot \ar@{->}[rr]^{B}_{*} \ar@{->}[dd]_{A}^{*} & & \cdot \ar@{..>}[dd]^{A}_{*} \\ \\ \cdot \ar@{..>}[rr]_{B}^{*} & & \cdot \\ } \\ \text{(i)} & \text{(ii)} \end{array} \] \end{proof} \section{Confluence: from unconditional to conditional rewriting} \label{sec-over-gen} In this section we state precisely the known results from which this paper starts and give a short overview of our results. In Section~\ref{sec-rew} we review the results on the combination of $\la$-calculus with {\em unconditional} rewriting that we extend to conditional and $\b$-conditional rewriting in Section~\ref{sec-Aconfl} and Section~\ref{sec-Bconfl} respectively. In Section~\ref{sec-orth-cond} we recall a result on the confluence of orthogonal normal rewrite relations, that we generalize to orthonormal $\b$-conditional rewriting in Section~\ref{sec-exclus}. We then give a short overview of our results in Section~\ref{sec-overview}. \subsection{Confluence of beta-reduction with unconditional rewriting} \label{sec-rew} Our results of Section~\ref{sec-Aconfl} and~\ref{sec-Bconfl} on the preservation of confluence for the combination of $\la$-calculus with (left-algebraic) {\em conditional} rewriting are extensions of similar results on the combination of $\la$-calculus with (left-algebraic) {\em unconditional} rewriting. We concentrate on two cases, both untyped, that we review in this section: \begin{itemize} \item In Section~\ref{sec-muller} we discuss M{\"u}ller's result~\cite{muller92ipl} (stated in Theorem~\ref{thm-muller}) on left-linear rewriting. \item In Section~\ref{sec-dough} we discuss Dougherty's result~\cite{dougherty92ic} (stated in Theorem~\ref{thm-dough}) on strongly $\b$-normalizing terms with arity conditions.
\end{itemize} \subsubsection{Left-linear rewriting} \label{sec-muller} Using the example of surjective pairing~\cite{klop80phd}, we have recalled in Section~\ref{sec-ex-sp} that the combination of a confluent non left-linear rewrite system with $\b$-reduction may not be confluent. An example has also been presented in~\cite{bm87popl}, which can be seen as an adaptation of an example due to Huet~\cite{huet80jacm} concerning first-order rewriting. \begin{example}[\cite{bm87popl}] \label{ex-minus} Consider the confluent rewrite system \[ \minus \esp x \esp x \quad\at_\minus\quad \zero \qquad\qquad \minus \esp (\suc \esp x) \esp x \quad\at_\minus\quad (\suc \esp \zero) ~, \] and let \[ Y_\suc \quad\deq\quad (\la x. \suc \esp (x \esp x)) \esp (\la x. \suc \esp (x \esp x)) ~. \] Since $Y_\suc \a_\b \suc \ptesp Y_\suc$, we have the following unjoinable peak: \[ \xymatrix@C=\xyS@R=\xyS{ & \minus \esp Y_\suc \esp Y_\suc \ar@{>}[dl] \ar@{>}[dr] \\ \minus \esp (\suc \esp Y_\suc) \esp Y_\suc \ar@{>}[d] & & \zero \\ \suc \esp \zero } \] \end{example} \begin{remark}[Interpretation with B{\"o}hm trees] A simple interpretation of this system is to see $Y_\suc$ as representing the ``infinite integer'' $\infty$, and the term $\minus \ptesp Y_\suc \ptesp Y_\suc$ as representing the undefined operation $\infty - \infty$. This interpretation can be made concrete by using B{\"o}hm trees~\cite{barendregt84book}. The B{\"o}hm tree of the term $Y_\suc$ is the infinite term \[ \begin{array}{c} \suc \\ \vline \\ \suc \\ \vline \\ \vdots \end{array} \] Intuitively, Ex.~\ref{ex-minus} can be seen as an instance of the fact that confluence of non left-linear systems is not preserved when we extend the term algebra (in this case by infinite terms). \end{remark} As shown in~\cite{muller92ipl}, confluence is preserved when rewriting is left-linear. The original result concerns only algebraic systems, but can easily be extended to unconditional rewrite systems with arbitrary right-hand sides.
\begin{theorem}[\cite{muller92ipl}] \label{thm-muller} If $\R$ is a left-linear {\em unconditional} rewrite system such that $\a_{\R}$ is confluent then $\a_{\b \cup \R}$ is confluent. \end{theorem} \noindent This result has been generalized to the case of Higher-Order Rewrite Systems~\cite{or94lfcs}. \subsubsection{Strongly beta-normalizing terms} \label{sec-dough} To handle non left-linear systems, as seen in Example~\ref{ex-minus} we have to forbid infinite terms. This is possible for example by focusing on algebraic rewriting on typed terms. Confluence is preserved for the combination of algebraic rewriting with simply typed $\la$-calculus~\cite{breazu88lics}. This result has then been extended to the polymorphic $\la$-calculus~\cite{bg89icalp,bg94ic,okada89issac}. A question arises from these results: besides strong normalization of $\b$-reduction, what is the role of typing in the preservation of confluence? This is studied in~\cite{dougherty92ic}, which shows that for algebraic rewriting, terms must satisfy some {\em arity conditions}. \begin{example} \label{ex-dough} Consider the rewrite system \[ \id \esp x \quad\at_\id\quad x ~, \] and let $\W_\suc \esp\deq\esp \la x.\suc \esp (x \esp x)$. The term \[ t \quad\deq\quad \minus \esp (\id \esp \W_\suc \esp \W_\suc) \esp (\id \esp \W_\suc \esp \W_\suc) \] is in $\b$-normal form, hence strongly $\b$-normalizing. Moreover, we can check that the rewrite relation $\a_{\minus \cup \id}$ is confluent. However, $\a_{\b \cup \minus \cup \id}$ is not confluent since $t$ rewrites to the unjoinable peak of Example~\ref{ex-minus}: \[ \minus \ptesp (\id \esp \W_\suc \esp \W_\suc) \ptesp (\id \esp \W_\suc \esp \W_\suc) \quad\a^2_\id\quad \minus \esp Y_\suc \esp Y_\suc ~. \] \end{example} \noindent The problem is that reducing $\id$ in the $\b$-normal form $\id \ptesp \W_\suc \ptesp \W_\suc$ leads to a term which is no longer in $\b$-normal form: rewriting does not preserve $\b$-normal forms.
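This failure of $\b$-normal form preservation can be checked mechanically: the term $t$ of Example~\ref{ex-dough} is $\b$-normal, yet contracting its two $\id$-redexes (the two $\a_\id$-steps, done here in one parallel pass) yields $\minus \ptesp (\W_\suc \ptesp \W_\suc) \ptesp (\W_\suc \ptesp \W_\suc)$, which contains a $\b$-redex. A minimal sketch, with a tuple encoding of terms that is ours, not the paper's:

```python
# Checking that id-rewriting does not preserve beta-normal forms on
# t = minus (id W W) (id W W), where W = lam x. suc (x x). Terms are nested
# tuples ('var', x) / ('sym', f) / ('app', t, u) / ('lam', x, t).

def app(*ts):                          # left-nested application h t1 ... tn
    t = ts[0]
    for u in ts[1:]:
        t = ('app', t, u)
    return t

W = ('lam', 'x', app(('sym', 'suc'), app(('var', 'x'), ('var', 'x'))))
idWW = app(('sym', 'id'), W, W)
t = app(('sym', 'minus'), idWW, idWW)

def has_beta_redex(t):
    """Does t contain a subterm (lam x. b) a ?"""
    if t[0] in ('var', 'sym'): return False
    if t[0] == 'lam': return has_beta_redex(t[2])
    return t[1][0] == 'lam' or has_beta_redex(t[1]) or has_beta_redex(t[2])

def id_step(t):
    """Contract, in parallel, the redexes of the rule  id x -> x."""
    if t[0] in ('var', 'sym'): return t
    if t[0] == 'lam': return ('lam', t[1], id_step(t[2]))
    if t[1] == ('sym', 'id'): return t[2]          # id u -> u
    return ('app', id_step(t[1]), id_step(t[2]))
```

Here `id_step(t)` produces `minus (W W) (W W)`, i.e.\ $\minus \esp Y_\suc \esp Y_\suc$, on which `has_beta_redex` succeeds although it fails on `t` itself.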
The approach taken in~\cite{dougherty92ic} is to find {\em arity conditions} on terms for rewriting to preserve $\b$-normal forms. Consider a symbol $\sff \in \Si$ such that for all $\sff \vl \at_\R r$, we have $|\vl| \leq n$. Then we discard terms of the form $\sff \vt$ with $|\vt| > n$. For example, the term $\id \esp \W_\suc \esp \W_\suc$ is not allowed since $\id$ takes two arguments whereas its rewrite rule takes only one. \begin{definition}[Applicative Arity] \label{def-arity} An {\em arity} is a function $\ap : \Si \a \N$. \begin{enumerate}[(i)] \item \label{def-arity-term} A term $t$ {\em respects $\ap$} if it contains no subterm $\sff \vt$ with $|\vt| > \ap(\sff)$. \item \label{def-arity-rule} A rewrite system $\R$ respects $\ap$ if for all $\sff \vl \at_\R r$, $\sff \vl$ and $r$ respect $\ap$ and moreover $|\vl| = \ap(\sff)$. \end{enumerate} \end{definition} However, respect of an arity is not preserved by $\b$-reduction. For instance, with $\ap(\id) = 1$ the term $(\la x.x \ptesp \W_\suc \ptesp \W_\suc) \ptesp \id$ respects $\ap$ but it $\b$-reduces to $\id \ptesp \W_\suc \ptesp \W_\suc$, which does not respect $\ap$. In order to work with terms which respect an arity $\ap$ and whose $\b\R$-reducts respect $\ap$ too, it is convenient to consider sets of terms which respect $\ap$ and are stable by reduction. This motivates the following definition. \begin{definition}[$(\R,\ap)$-Stable Terms] \label{def-rstable} Given an arity $\ap$ and a rewrite system $\R$, a set of terms $S$ is $(\R,\ap)$-stable if \begin{enumerate}[(i)] \item for all $t \in S$, $t$ respects $\ap$, \item for all $t \in S$, if $t \a_{\b \cup \R} u$ then $u \in S$, \item for all $t \in S$, if $u$ is a subterm of $t$ then $u \in S$. \end{enumerate} \end{definition} Dougherty~\cite{dougherty92ic} obtains the preservation of confluence on $(\R,\ap)$-stable sets of strongly $\b$-normalizing terms.
\begin{theorem}[\cite{dougherty92ic}] \label{thm-dough} If $\R$ is an algebraic confluent {\em unconditional} rewrite system that respects an arity $\ap$, then $\a_{\b \cup \R}$ is confluent on every $(\R,\ap)$-stable set $S \sle \SN_{\b}$. \end{theorem} \begin{remark} \label{rem-alg-rhs} To get the preservation of $\b$-normal forms by rewriting, it is necessary to restrict to algebraic right-hand sides since, in contrast with algebraic terms, substituting a variable in an applicative term may produce a $\b$-redex. For instance $(x \ptesp z)\wth{x}{\la y.y} = (\la y.y) \ptesp z$. \end{remark} \subsection{Orthogonal conditional rewriting} \label{sec-orth-cond} Orthogonality is a sufficient condition for the confluence of some kinds of conditional rewriting. In this section we recall some known results about the confluence of algebraic orthogonal conditional rewrite systems. They were initially formulated in the framework of first-order conditional rewriting, of which algebraic rewriting is an instance. The main result is the shallow confluence of orthogonal normal conditional rewriting. We generalize it in Section~\ref{sec-exclus} to orthonormal $\b$-conditional rewriting. For {\em unconditional} rewriting, orthogonality is a simple syntactic criterion: it entails the confluence of left-linear systems with no critical pairs~\cite{huet80jacm}. With conditional rewriting, things get more complicated since the notion of critical pairs has to take into account the conditions of rewrite rules. \begin{definition}[Conditional Critical Pairs] \label{def-cond-cp} Let $\R$ be a set of conditional rules and suppose that $\r_1 : \vd = \vc \sgt l\at r$ and $\r_2 : \vd' = \vc' \sgt l' \at r'$ are two renamings of rules in $\R$ such that they have no variable in common.
If $p$ is a non-variable occurrence of $l$ and $\s$ is a most general unifier of $l|_p$ and $l'$, then \[ \vd\s \esp=\esp \vc\s \quad\land\quad \vd'\s \esp=\esp \vc'\s \quad\sgt\quad \left(l[p \gets r']\s ~,~ r\s \right) \] is a {\em conditional critical pair} of $\R$. If $\r_1$ and $\r_2$ are renamings of the same rule, we assume that $p$ is not the root position of $l$. A critical pair of the form $\vd = \vc \sgt (s,s)$ is called {\em trivial}. \end{definition} The important point is that in a conditional critical pair $\vd = \vc \sgt (s,t)$, it is possible that there is no substitution $\s$ such that $\vd\s = \vc\s$. Thus, critical pairs can be {\em feasible} or {\em unfeasible}. Depending on the kind of conditional rewriting considered (with or without $\b$-reduction in the evaluation of conditions), the satisfaction of conditions is evaluated differently. Therefore, we consider two notions of feasibility. \begin{definition}[Feasibility of Conditional Critical Pairs] A critical pair $\vd = \vc \sgt (s,t)$ of a conditional system $\R$ is \begin{itemize} \item {\em feasible} if there is a substitution $\s$ such that $\vd\s \ad_{\R} \vc\s$; \item {\em $\b$-feasible} if there is a substitution $\s$ such that $\vd\s \ad_{\b \cup \R(\b)} \vc\s$. \end{itemize} A critical pair which is not feasible (resp. $\b$-feasible) is said to be {\em unfeasible} (resp. {\em $\b$-unfeasible}). \end{definition} The easiest way to prove the unfeasibility of critical pairs is often to use confluence. We come back to this question in Section~\ref{sec-exclus}. Both notions of feasibility induce a notion of orthogonality. \begin{definition}[Orthogonality] A set $\R$ of left-linear conditional rewrite rules is \begin{itemize} \item {\em orthogonal} (resp. {\em $\b$-orthogonal}) if all its critical pairs are unfeasible (resp. {\em $\b$-unfeasible}); \item {\em weakly orthogonal} (resp. {\em weakly $\b$-orthogonal}) if all its critical pairs are either trivial or unfeasible (resp.
{\em $\b$-unfeasible}). \end{itemize} \end{definition} Hence, to test the orthogonality of a conditional system, we have to evaluate the conditions of its critical pairs. According to Remark~\ref{rem-dec}, this is in general undecidable. It is well-known that for normal (and semi-equational) rewriting, weak orthogonality implies confluence. This is in general {\em not} the case for join conditional rewriting, as shown in~\cite{bk86jcss}. \begin{theorem}[\cite{bk86jcss,ohlebusch02book}] \label{thm-orth-cond} Let $\R$ be a conditional rewrite system. If $\R$ is weakly orthogonal, and moreover is a normal system, then $\a_{\R}$ is shallow confluent. \end{theorem} \subsection{Overview of the results} \label{sec-overview} The goal of this paper is to give sufficient conditions for the confluence of $\b$-reduction with $\b$-conditional rewriting (i.e.\ with $\b$-steps allowed in the evaluation of conditions). More precisely, we seek to obtain results on the {\em preservation} of confluence, that is, to get the confluence of $\a_{\b \cup \R(\b)}$ assuming the confluence of $\a_\R$. Our approach is to generalize the results summarized in Section~\ref{sec-rew} on the combination of $\b$-reduction with unconditional rewriting. We thus consider two different cases: \begin{itemize} \item First, the extension of M{\"u}ller's result~\cite{muller92ipl}, when $\b$-reduction is not restricted (we thus need to assume left-linearity, and to extend this notion to conditional rewriting). \item Second, the extension of Dougherty's result~\cite{dougherty92ic}, when we restrict to $\b$-normalizing terms (we thus need some arity conditions on terms). In fact, we improve~\cite{dougherty92ic} from strongly $\b$-normalizing terms to weakly $\b$-normalizing terms. \end{itemize} In each case, we proceed in two steps. We first consider in Section~\ref{sec-Aconfl} the case of $\b$-reduction with conditional rewriting $\a_{\R}$ (when $\b$-reduction is not allowed in the evaluation of conditions).
We then extend these results to $\b$-conditional rewriting $\a_{\R(\b)}$ in Section~\ref{sec-Bconfl}. As discussed at the beginning of Section~\ref{sec-Bconfl} (see Example~\ref{ex-Bconfl}), for the extension of both~\cite{muller92ipl} and~\cite{dougherty92ic} to $\b$-conditional rewriting, rewrite rules must be algebraic and respect arity conditions. In contrast, the extension of~\cite{muller92ipl} to the simpler case of conditional rewriting holds without these restrictions. This motivates the definition of criteria for the confluence of $\b$-reduction with $\b$-conditional rewriting when rules need not be algebraic nor respect an arity (recall from Definition~\ref{def-cond-rules} that left-hand sides are always algebraic in this paper). We propose such a criterion in Section~\ref{sec-exclus}, which defines {\em orthonormal} conditional rewriting, an extension of orthogonal rewriting. We show the {\em shallow} confluence of $\b$-reduction with $\b$-conditional rewriting for these systems, hence extending Theorem~\ref{thm-orth-cond}. Our results are summarized in Figure~\ref{fig-oversimple} page~\pageref{fig-oversimple}. \section{Confluence of beta-reduction with conditional rewriting} \label{sec-Aconfl} We now turn to conditional rewriting. In this section we focus on the combination of join conditional rewriting $\a_\R$ with $\b$-reduction: we do not allow the use of $\b$-reduction in the evaluation of conditions. The important point of left-linearity is to prevent {\em unconditional} rewriting from comparing arbitrary terms. It forbids in particular comparisons of infinite terms such as $Y_\suc$. But with conditional rewriting, rewrite rules can make this comparison in their conditions while being left-linear. Hence, starting from Example~\ref{ex-minus}, we can define a left-linear conditional rewrite system which makes the commutation of rewriting with $\b$-reduction fail.
\begin{example} \label{ex-minus-cond} Consider the conditional system \[ x = y \esp\sgt\esp \minus \esp x \esp y \esp\at\esp \zero \qquad\qquad x = (\suc \esp y) \esp\sgt\esp \minus \esp x \esp y \esp\at\esp (\suc \esp \zero) ~. \] This system is left-linear, but the conditions can test the equality of open terms. The join conditional rewrite relation issued from this system forms with $\a_\b$ the following unjoinable peak: \[ \xymatrix@R=\xyS@C=\xyTS{ & & \minus \esp Y_{\suc} \esp Y_{\suc} \ar@{>}[dl] \ar@{>}[dr] & (Y_\suc \esp\ad\esp Y_\suc) \\ ((\suc \esp Y_\suc) \esp\ad\esp (\suc \esp Y_\suc)) & \minus \esp (\suc \esp Y_{\suc}) \esp Y_{\suc} \ar@{>}[d] & & \zero \\ & (\suc \esp \zero) } \] \end{example} As for unconditional rewriting in Section~\ref{sec-rew}, we consider two ways to overcome the problem: to restrict rewriting (Section~\ref{sec-Aconfl-sc}) or to restrict $\b$-reduction (Section~\ref{sec-Aconfl-wn}). \subsection{Confluence for left-linear semi-closed systems} \label{sec-Aconfl-sc} In this section we are interested in the extension of Theorem~\ref{thm-muller} (\cite{muller92ipl}) to conditional rewriting: we want sufficient conditions on rewrite rules for the preservation of confluence on {\em all untyped terms of $\Te(\Si)$}. As seen in Example~\ref{ex-minus-cond}, for conditional rewriting we have to extend the notion of left-linearity in order to forbid comparison of open terms in the conditions of rewrite rules. To this end we restrict to {\em semi-closed} conditional rewrite rules. \begin{definition}[Semi-Closed Conditional Rewrite Rules] \label{def-sc-cond} A conditional rewrite system $\R$ is {\em semi-closed} if for all rules \[ d_1 \esp=\esp c_1 \esp\land\esp \dots \esp\land\esp d_n \esp=\esp c_n \esp\sgt\esp l \dbesp\at_\R\dbesp r ~, \] the terms $c_1,\dots,c_n$ are applicative and closed. \end{definition} For example, the system {\sf Tree} of Section~\ref{sec-ex} is left-linear and semi-closed. 
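The semi-closure condition is easy to test mechanically. The sketch below is our own illustration (again not the paper's formal syntax): strings are variables or symbols, \texttt{('lam', x, b)} are abstractions, \texttt{('app', t, u)} curried applications, and a conditional rule is a triple \texttt{(conditions, lhs, rhs)} where \texttt{conditions} is a list of pairs $(d, c)$. The signature \texttt{SIG} is an assumed example signature.

```python
SIG = {'minus', 'suc', 'zero'}  # assumed example signature

def is_applicative(t):
    """Applicative terms contain no abstraction."""
    if isinstance(t, str):
        return True
    if t[0] == 'lam':
        return False
    return is_applicative(t[1]) and is_applicative(t[2])

def is_closed(t):
    """A closed applicative term mentions symbols of SIG only."""
    if isinstance(t, str):
        return t in SIG
    if t[0] == 'lam':
        return False
    return is_closed(t[1]) and is_closed(t[2])

def semi_closed(rules):
    """Definition (Semi-Closed): every condition right-hand side c
    must be applicative and closed."""
    return all(is_applicative(c) and is_closed(c)
               for conds, lhs, rhs in rules
               for (d, c) in conds)

# First rule of the example above: x = y  =>  minus x y -> zero.
# Its condition compares x with the OPEN term y: not semi-closed.
minus_rules = [([('x', 'y')],
                ('app', ('app', 'minus', 'x'), 'y'),
                'zero')]
# A variant whose condition right-hand side is closed and applicative.
ok_rules = [([('x', ('app', 'suc', 'zero'))],
             ('app', ('app', 'minus', 'x'), 'y'),
             'zero')]
print(semi_closed(minus_rules))  # False
print(semi_closed(ok_rules))     # True
```

Only the right-hand sides $c_i$ of conditions are constrained; the $d_i$ may still contain the variables of the left-hand side.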
In a semi-closed rule $\vd = \vc \sgt l \at r$, since $\vc$ are closed terms, it is tempting to normalize them and obtain a normal rewrite relation, but as noted in Remark~\ref{rem-dec}, results on join rewriting seem more easily applicable. We show that the confluence of $\a_\R$ implies the confluence of $\a_{\b\cup\R}$ for semi-closed left-linear right-applicative systems (Theorem~\ref{thm-Aconfl-sc}). Using Hindley-Rosen lemma (Lemma~\ref{lem-hr}), this follows from the commutation of conditional rewriting with $\b$-reduction. As in~\cite{muller92ipl}, we obtain this commutation as a consequence of the commutation of conditional rewriting with a relation $\rpr_\b$ of {\em parallel} $\b$-reduction (see Definition~\ref{def-par-rew}). This is shown in Lemma~\ref{lem-commut-cond}, which relies on Property~\ref{prop-par-rew} ($\s \rpr_\b \s'$ implies $t\s \rpr_\b t\s'$). This property holds for parallel rewrite relations but fails with $\a_\b$. The parallel $\b$-reduction $\rpr_\b$ we use is Tait and Martin-L{\"o}f's relation~\cite{barendregt84book,takahashi95ic}. It is defined as follows. \begin{definition}[Parallel $\b$-Reduction] We let $\rpr_\b$ be the smallest parallel rewrite relation closed under the rule \[ (\pBeta)~ \dfrac{t_1 ~\rpr_\b~ u_1 \qquad\qquad t_2 ~\rpr_\b~ u_2} {(\la x.t_1)t_2 ~\rpr_\b~ u_1\wth{x}{u_2}} \] \end{definition} We will use some well-known properties of $\rpr_\b$. If $\s \rpr_\b \s'$ then $s\s \rpr_\b s\s'$; this is the one-step reduction of parallel redexes. We can also simulate $\b$-reduction: $\a_\b \sle \rpr_\b \sle \a^*_\b$. And third, $\rpr_\b$ enjoys the diamond property: $\rpl_\b \rpr_\b \sle \rpr_\b \rpl_\b$. The relation $\rpr_\b$ is stronger than the one used in~\cite{muller92ipl}: it can reduce in one step nested $\b$-redexes, while the relation of~\cite{muller92ipl} is simply the smallest parallel rewrite relation containing $\b$-reduction (i.e.\ the {\em parallel closure} of $\a_\b$). 
The diamond property (which holds for $\rpr_\b$) fails for the parallel closure of $\b$-reduction precisely because it cannot reduce nested $\b$-redexes in one step. The parallel closure of $\a_\b$ would have been sufficient to obtain Lemma~\ref{lem-commut-cond}, but we use the nested relation $\rpr_\b$ because we rely on the diamond property in Section~\ref{sec-Bconfl-sc}. Nested parallelizations (corresponding to complete developments) are already used in~\cite{or94lfcs} for their confluence proof of HORSs. However, our method inherits more from~\cite{muller92ipl} than from~\cite{or94lfcs}, as we use complete developments of $\a_\b$ only, whereas complete developments of $\a_\b$ {\em and} of $\a_{\R}$ are used for the modularity result of~\cite{or94lfcs}. The left-linearity assumption is used in the proof of Lemma~\ref{lem-commut-cond} via the following property of linear algebraic terms. \begin{proposition} \label{prop-subst-lin-rhd} Let $t$ be an algebraic linear term and $\s$ be a substitution such that $t\s \rpr_\b u$. There is a substitution $\s'$ such that $u = t\s'$ with $\s \rhd_\b \s'$ and $\s'(x) = \s(x)$ for all $x \notin \FV(t)$. \end{proposition} \begin{proof} By induction on $t$. \begin{description} \item[$t = x \in \Vte$.] In this case $t\s = \s(x)$. Take $\s'$ such that $\s'(x) = u$ and $\s'(y) = \s(y)$ for all $y \neq x$. \item[$t = \sff \in \Si$.] In this case $t\s = t = u$, hence $\s' = \s$ fits (recall that $\rpr_\b$ is reflexive). \item[$t = t_1 t_2$.] Since $t$ is algebraic, $t_1\s \esp t_2\s$ is not a $\b$-redex. It follows that $u$ is of the form $u_1 \esp u_2$ with $(t_1\s,t_2\s) \rpr_\b (u_1,u_2)$. By induction hypothesis, there are two substitutions $\s'_1$ and $\s'_2$ such that for each $i \in \{1,2\}$ we have $\s \rhd_\b \s'_i$, $u_i = t_i\s'_i$, and $\s'_i(x) = \s(x)$ for all $x \notin \FV(t_i)$. 
Since $t$ is linear, $\FV(t_1) \cap \FV(t_2) = \emptyset$, hence with $\s' \deq \s'_1 \uplus \s'_2$, we have $u = u_1 u_2 = t_1\s' \esp t_2\s' = t\s'$, $\s \rhd_\b \s'$ and $\s(y) = \s'(y)$ for all $y \notin \FV(t)$. \qedhere \end{description} \end{proof} We are now ready to prove the commutation of $\a_{\R}$ and $\rpr_\b$. In fact we prove a slightly stronger statement, which can be termed the ``level commutation'' of $\a_{\R}$ and $\rpr_\b$. \begin{lemma}[Commutation of $\a_{\R_i}$ with $\rpr_\b$] \label{lem-commut-cond} If $\R$ is a conditional rewrite system which is semi-closed, left-linear and right-applicative, then $\rpr_\b$ commutes with $\a_{\R_i}$ for all $i \in \N$: \[ \xymatrix@R=\xyS@C=\xyL{ \cdot \ar@{->}[r]^{\R_{i}}_{*} \ar@{->}[d]_{\rpr_\b}^{*} & \cdot \ar@{.>}[d]^{\rpr_\b}_{*} \\ \cdot \ar@{.>}[r]_{\R_{i}}^{*} & \cdot } \] \end{lemma} \begin{proof} We reason by induction on $i \in \N$. The base case $i = 0$ is trivial. Let $i \geq 0$ and assume that $\a_{\R_i}$ commutes with $\rpr_\b$. We show that $\a_{\R_{i+1}}$ commutes with $\rpr_\b$. We begin by showing that for all $t,u,v \in \Te(\Si)$, if $u \rpl_\b t \a_{\R_{i+1}} v$ then there is a term $w$ such that $u \a^*_{\R_{i+1}} w \rpl_\b v$. In diagrammatic form: \begin{equation} \label{eq-lem-commut-cond-base} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\R_{i+1}} \ar@{->}[d]_{\rpr_\b} & v \ar@{.>}[d]^{\rpr_\b} \\ u \ar@{.>}[r]_{\R_{i+1}}^{*} & w } \end{aligned} \end{equation} We deduce from~(\ref{eq-lem-commut-cond-base}) that $\a_{\R_{i+1}}$ commutes with $\rpr_\b$ by applying Lemma~\ref{lem-commut}. We now show~(\ref{eq-lem-commut-cond-base}) by induction on $t$. If both reductions $t \rpr_\b u$ and $t \a_{\R_{i+1}} v$ occur in a proper subterm of $t$ then we conclude by induction hypothesis. Otherwise there are two cases. \begin{enumerate} \item $t = (\la x.t_1)t_2$ with $u = u_1\wth{x}{u_2}$ and $v = (\la x.v_1)v_2$ where $(u_1,u_2) \rpl_\b (t_1,t_2) \a_{\R_{i+1}} (v_1,v_2)$.
By induction hypothesis, there are terms $w_1$ and $w_2$ such that $u_i \a^*_{\R_{i+1}} w_i \rpl_\b v_i$. We deduce that $u_1\wth{x}{u_2} \a^*_{\R_{i+1}} w_1\wth{x}{w_2}$ and that $(\la x.v_1)v_2 \rpr_\b w_1\wth{x}{w_2}$. \item $t$ is the $\R$-redex contracted in the step $t \a_{\R_{i+1}} v$. In this case, there is a conditional rule $\vd = \vc \sgt l \at_\R r$ and a substitution $\s$ s.t.\ $t = l\s$ and $v = r\s$. Moreover, there are terms $\vv$ such that $\vd\s \a^*_{\R_{i}} \vv \al^*_{\R_{i}} \vc\s$. Since $\R$ is semi-closed, the terms $\vc$ are closed and applicative, hence $\vc\s = \vc$ and the terms $\vv$ are applicative since $\R$ is right-applicative. Since $l$ is left-linear and algebraic, we deduce from Proposition~\ref{prop-subst-lin-rhd} that there is a substitution $\s'$ such that $\s \rhd_\b \s'$ and $u=l\s'$. It follows from Proposition~\ref{prop-par-rew} that $r\s \rhd_\b r\s'$ and $\vd\s \rhd_\b \vd\s'$. To conclude that $w \deq r\s'$ fits, it remains to show that $l\s' \a_{\R_{i+1}} r\s'$, that is, $\vd\s' \ad_{\R_{i}} \vc$. We have $\vd\s' \rpl_\b \vd\s \a^*_{\R_{i}} \vv$, hence by induction hypothesis there are $\vw$ s.t.\ $\vd\s' \a^*_{\R_{i}} \vw \rpl^*_{\b} \vv$. It follows that $\vd\s' \a^*_{\R_{i}} \vv$, the terms $\vv$ being applicative and hence in $\b$-normal form. We thus have $\vd\s' \ad_{\R_{i}} \vc$, and conclude that $l\s' \a_{\R_{i+1}} r\s'$. \qedhere \end{enumerate} \end{proof} \noindent A direct application of Hindley-Rosen's Lemma (Lem.~\ref{lem-hr}) then yields the preservation of confluence. \begin{theorem}[Confluence of $\a_{\b\cup\R}$] \label{thm-Aconfl-sc} Let $\R$ be a semi-closed left-linear right-applicative system. If $\a_\R$ is confluent then so is $\a_{\b\cup\R}$. \end{theorem} \paragraph{Comparison with M{\"u}ller's work.} Our main result on the confluence of $\b$-reduction with conditional rewriting for left-linear semi-closed systems (Theorem~\ref{thm-Aconfl-sc}) is not a true extension of Theorem~\ref{thm-muller}.
Indeed, Theorem~\ref{thm-muller} applies to unconditional systems with arbitrary right-hand sides, while Theorem~\ref{thm-Aconfl-sc} requires right-hand sides to be applicative. The problem is that Lemma~\ref{lem-commut-cond} may fail with non-applicative right-hand sides. Consider the system: \[ \sfh \esp\at\esp \la x.x \qquad\qquad x = \sfh\ptesp \sfa \esp\sgt\esp \sfg\ptesp x \esp\at\esp \sfa \] We have \( \sfg\ptesp \sfa \esp\al_\b\esp \sfg\ptesp ((\la x.x) \sfa) \esp\a_\R\esp \sfa \). But since the term $(\la x.x)\sfa$ is an $\R$-normal form distinct from $\sfa$, the condition $x \ad_{\R} \sfh \ptesp \sfa$ is not satisfied with $x$ instantiated by $\sfa$, and $\sfg\ptesp \sfa$ is an $\R$-normal form. We can extend Theorem~\ref{thm-muller} to normal conditional rewriting, i.e.\ to systems $\R$ such that in all rules $\vd = \vc \sgt l \at_\R r$, the closed algebraic terms $\vc$ are required to be in normal form w.r.t. $\a_\R$ (recall from Remark~\ref{rem-dec} that this is undecidable). The proof follows exactly the same scheme as for Theorem~\ref{thm-Aconfl-sc}. The only difference lies in the commutation of $\a_{\R_i}$ with $\a_\b$, for which Lemma~\ref{lem-commut-cond} does not apply. \begin{theorem}[Extension of~\cite{muller92ipl}] \label{thm-ext-mull} Let $\R$ be a left-linear semi-closed system such that $\a_\R$ is a normal conditional rewrite relation. If $\a_\R$ is confluent then so is $\a_{\b\cup\R}$. \end{theorem} \begin{proof} As in Theorem~\ref{thm-Aconfl-sc}, the proof relies on the commutation of $\a_{\R_i}$ with $\a_\b$. Since right-hand sides need not be applicative, we cannot use Lemma~\ref{lem-commut-cond}. However, the commutation of $\a_{\R_i}$ with $\rpr_\b$ is proved using the same general reasoning, except for the following point. Assume that $\a_{\R_i}$ commutes with $\rpr_\b$ and that for a rule $\vd = \vc \sgt l \at_\R r$ and substitutions $\s$ and $\s'$ we have $l\s' \rpl_\b l\s \a_{\R_{i+1}} r\s$.
As $\a_{\R}$ is normal, we have $\vd\s \a^*_{\R_i} \vc$, and by induction hypothesis there are $\vc'$ such that $\vd\s' \a^*_{\R_i} \vc' \al^*_\b \vc$. Since $\R$ is semi-closed, the terms $\vc$ are algebraic, hence in $\b$-normal form. It follows that $\vd\s' \a^*_{\R_i} \vc$, hence $l\s' \a_{\R_{i+1}} r\s'$. \end{proof} \subsection{Confluence on weakly beta-normalizing terms} \label{sec-Aconfl-wn} We now turn to the problem of dropping the left-linearity and semi-closure conditions. We generalize Theorem~\ref{thm-dough}~\cite{dougherty92ic} in two ways. First, we adapt it to conditional rewriting. Second, we use weakly $\b$-normalizing terms whose $\b$-normal forms respect an arity, whereas Dougherty uses sets of strongly normalizing arity-compliant terms closed under reduction. As seen in Example~\ref{ex-minus-cond}, fixpoint combinators make the commutation of $\a^*_\b$ and $\a^*_{\R}$ fail when rewriting involves equality tests between open terms. When using weakly $\b$-normalizing terms, we can project rewriting on $\b$-normal forms ($\bnf$), thus eliminating fixpoints whenever they are not significant for the reduction. Hence, we seek to obtain \begin{equation} \begin{aligned} \label{diag-projA-bnf} \xymatrix@R=\xyS@C=\xyL{ s \ar@{->}[r]^{\b \cup \R}_{*} \ar@{.>}[d]_{\b}^{*} & t \ar@{.>}[d]^{\b}_{*} \\ \bnf(s) \ar@{.>}[r]_{\R}^{*} & \bnf(t) } \end{aligned} \end{equation} We rely on the following: \begin{enumerate} \item \label{enum-Aconfl-wn-alg-rhs} First, $\b$-normal forms should be stable by rewriting. By Remark~\ref{rem-alg-rhs} we must assume that right-hand sides are algebraic, and as seen in Example~\ref{ex-dough}, we must use the notion of applicative arity (Definition~\ref{def-arity}). \item We need normalizing $\b$-derivations to commute with rewriting. This follows from using the leftmost-outermost strategy of $\la$-calculus. \item Finally, we assume that conditions are algebraic.
Since left-hand sides and right-hand sides are algebraic (by Definition~\ref{def-cond-rules} and item~\ref{enum-Aconfl-wn-alg-rhs} respectively), this entails that for all rules $\vd = \vc \sgt l \at_\R r$ and all substitutions $\s$, we have $\bnf(\vd\s) = \vd \ptesp \bnf(\s)$, $\bnf(\vc\s) = \vc \ptesp \bnf(\s)$, $\bnf(l\s) = l \ptesp \bnf(\s)$, $\bnf(r\s) = r \ptesp \bnf(\s)$. \end{enumerate} We now have to extend to conditional rewriting the condition of arity on rewrite rules stated in Definition~\ref{def-arity}.(\ref{def-arity-rule}). \begin{definition}[Applicative Arity for Conditional Rules] \label{def-arity-cond} A rule $\vd=\vc\sgt l\at_\R r$ {\em respects} an arity $\ap : \Si \a \N$ if the terms $\vd,\vc$ respect $\ap$ and the unconditional rule $l \at r$ respects $\ap$. \end{definition} As seen in Example~\ref{ex-dough}, terms and rewrite systems respecting an arity prevent collapsing rules from creating $\b$-redexes. However, we do not assume that every term at hand respects an arity. If a term has a $\b$-normal form, the leftmost-outermost strategy for $\b$-reduction never evaluates non-normalizing subterms. It follows that such subterms may not respect any arity without disturbing the projection on $\b$-normal forms. Therefore it is sufficient to require that terms have a $\b$-normal form that respects an arity. \begin{definition} \label{def-an} Given an arity $\ap : \Si \a \N$, we let $\AN$ be the set of terms having a $\b$-normal form, and whose $\b$-normal form respects $\ap$. \end{definition} The proof goes through essentially thanks to two points: the well-foundedness of the leftmost-outermost strategy for $\a_\b$ on weakly $\b$-normalizing terms~\cite{barendregt84book}; and the fact that this strategy can be described by means of {\em head} $\b$-reductions, that are easily shown to commute with (parallel) conditional rewriting. We use a well-founded relation containing head $\b$-reductions. 
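The role played here by the leftmost-outermost strategy, namely that it never evaluates subterms whose normalization is not needed, so that a term with a non-normalizing subterm can still be weakly $\b$-normalizing, can be illustrated with a small fuel-bounded reducer. The following Python sketch is our own illustration over an ad-hoc encoding (strings are variables or symbols, \texttt{('lam', x, b)} abstractions, \texttt{('app', t, u)} applications); the fuel bound is an assumption, since weak normalization is undecidable.

```python
def subst(t, x, s):
    """Substitute s for x in t (no capture arises in the examples below)."""
    if isinstance(t, str):
        return s if t == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def lo_step(t):
    """One leftmost-outermost beta-step, or None if t is beta-normal."""
    if isinstance(t, str):
        return None
    if t[0] == 'lam':
        b = lo_step(t[2])
        return None if b is None else ('lam', t[1], b)
    f, a = t[1], t[2]
    if isinstance(f, tuple) and f[0] == 'lam':   # outermost redex first
        return subst(f[2], f[1], a)
    r = lo_step(f)
    if r is not None:
        return ('app', r, a)
    r = lo_step(a)
    return None if r is None else ('app', f, r)

def bnf(t, fuel=100):
    """Beta-normal form, or None if not reached within `fuel` steps."""
    for _ in range(fuel):
        u = lo_step(t)
        if u is None:
            return t
        t = u
    return None

omega = ('app', ('lam', 'x', ('app', 'x', 'x')),
                ('lam', 'x', ('app', 'x', 'x')))
print(bnf(('app', ('lam', 'x', 'z'), omega)))  # 'z': Omega is discarded
print(bnf(omega))                              # None: no normal form
```

An innermost strategy would loop on $(\la x.z)\,\Omega$, whereas leftmost-outermost discards the non-normalizing argument immediately.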
Recall that by Lemma~\ref{lem-wadsworth}, any $\la$-term can be written either \begin{align} \tag{a} \la \vx .v\, &a_0 \, a_1 \dots a_n \\ \tag{b} \text{or} \quad \la \vx .(\la y.b) &a_0 \, a_1 \dots a_n \end{align} where $v \in \Vte \cup \Si$. We denote {\em head} $\b$-reductions by $\a_h$. They consist of head $\b$-steps: \[ \la \vx.(\la y.b) a_0 \, a_1 \dots a_n ~\a_h~ \la \vx.b\wth{y}{a_0} a_1 \dots a_n ~. \] We use the relation $\succ$ defined as: \begin{align} \tag{a} \la \vx.v \, a_0 \, a_1 \dots a_n &\succ a_i \\ \tag{b} \la \vx.(\la y.b) a_0 \, a_1 \dots a_n &\succ \la \vx.b\wth{y}{a_0} \, a_1 \dots a_n \end{align} where $v \in \Vte \cup \Si$ and $0 \leq i \leq n$ and $n > 0$. Note that in the case (a), $a_i$ can have free variables among $\vx$, hence it can also be a subterm of a term $\alpha$-equivalent to $\la \vx.v \va$; for instance $\la x.{\sf f}x \succ y$ for all $y \in \Vte$. Recall that $\WN_\b$ is the set of weakly $\b$-normalizing terms. \begin{lemma} \label{lem-succ} If $s \in \WN_\b$ and $s \succ t$ then $t \in \WN_\b$. Moreover, $\succ$ is well-founded on $\WN_\b$. \end{lemma} \begin{proof} For the first part, let $s \in \WN_\b$ and $s \succ t$. If $s$ is of the form (b), the first step of the leftmost-outermost derivation normalizing $s$ is $t$. Hence $t \in \WN_\b$. Otherwise, if $t$ has no $\b$-normal form, then $s$ has no $\b$-normal form. For the second part, we write $\#(s)$ for the number of $\a_h$-steps in the leftmost-outermost derivation starting from $s$ and $|s|$ for the size of $s$. We show that if $s \succ t$, then $(\#(s),|s|) >_{lex} (\#(t),|t|)$. If $s$ is of the form (b), by the first point $t \in \WN_\b$. Since $s \a_h t$, we have $\#(s) > \#(t)$. Otherwise, the leftmost-outermost strategy starting from $s$ reduces each $a_i$ by leftmost-outermost reductions. Hence $\#(s) \ge \#(t)$. But in this case, $t$ is a proper subterm of $s$, hence $|s| > |t|$. 
\end{proof} It follows that we can reason by well-founded induction on $\succ$. For all $i \geq 0$, we use a {\em nested} parallelization of $\a_{\R_i}$. It corresponds to the one used in~\cite{or94lfcs}, which can be seen as a generalization of Tait and Martin-L{\"o}f's parallel relation. As with $\rpr_\b$ and $\a_{\b}$, in the orthogonal case a complete development of $\a_{\R_i}$ can be simulated by a {\em one-step} $\rpr_{\R_i}$-reduction. This relation is also an adaptation to conditional rewriting of the parallelization used in~\cite{dougherty92ic}. \begin{definition}[Conditional Nested Parallel Relations] \label{def-walkA} For all $i \geq 0$, let $\rpr_{\R_i}$ be the smallest parallel rewrite relation closed under the rule \[ (\pRule)~ \dfrac{\vd = \vc \sgt l \at_\R r \qquad l\s \a_{\R_i} r\s \qquad \s \rpr_{\R_i} \t} { l\s \rpr_{\R_i} r\t } \] \end{definition} Recall that $l\s \a_{\R_i} r\s$ is ensured by $\vd\s \ad_{\R_{i-1}} \vc\s$. These relations enjoy some nice properties: \begin{proposition} \label{prop-tgtA} For all $i \geq 0$, \begin{enumerate} \item \label{subset-tgtA} $\a_{\R_i} ~\sle~ \rpr_{\R_i} ~\sle~ \a_{\R_i}^*$; \item \label{tgtA-subst} ${s \rpr_{\R_i} t} ~\imp~ {u\wth{x}{s} \rpr_{\R_i} u\wth{x}{t}}$; \item \label{tgtA-subst2} \( {\lbrack s \rpr_{\R_i} t ~\&~ u \rpr_{\R_i} v \rbrack} ~\imp~ {u\wth{x}{s} \rpr_{\R_i} v\wth{x}{t}} \). \end{enumerate} \end{proposition} \begin{proof} The first point is shown by induction on the definition of $\rpr_{\R_i}$; the second follows from Proposition~\ref{prop-par-rew} and the fact that $\rpr_{\R_i}$ is a parallel rewrite relation. For the last one, we use an induction on the derivation of $u \rpr_{\R_i} v$. If $u$ is $v$, the result is trivial. If $u \rpr_{\R_i} v$ was obtained by $(\papp)$ or $(\abs)$, the result follows from the induction hypothesis. Otherwise, $u \rpr_{\R_i} v$ is obtained by $\pRule$.
That is, there is a rule $\vd = \vc \sgt l \at_\R r$ such that $u = l\s$, $v = r\t$, $\s \rpr_{\R_i} \t$ and $l\s \a_{\R_i} r\s$. Since $\a_{\R_i}$ is a rewrite relation, we have $l\s\wth{x}{s} \a_{\R_i} r\s\wth{x}{s}$. By induction hypothesis, we have $\s\wth{x}{s} \rpr_{\R_i} \t\wth{x}{t}$. Therefore $l\s\wth{x}{s} \rpr_{\R_i} r\t\wth{x}{t}$. \end{proof} We now turn to the {\em one step} commutation of $\rpr_{\R_i}$ and $\a_h$. This is a direct consequence of the third point of Proposition~\ref{prop-tgtA}. Commutation of $\a_h$ with (unconditional) rewriting has already been observed in~\cite{bfg97jfp}. \begin{lemma} \label{lem-commut-h} Let $i \geq 0$. If $u \al_h s \rpr_{\R_i} t$ then there exists $v$ such that $u \rpr_{\R_i} v \al_h t$~: \begin{equation*} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ s \ar@{>}[r]^{\rpr_{\R_i}} \ar@{>}[d]_{h} & t \ar@{..>}[d]^{h} \\ u \ar@{..>}[r]_{\rpr_{\R_i}} & v } \end{aligned} \end{equation*} \end{lemma} \begin{proof} Assume that $u \al_h s = \la \vx. (\la y.a_0)a_1 \dots a_p$ and $s \rpr_{\R_i} t$. Because rules have non-variable algebraic left-hand sides, $t= \la \vx. (\la y.b_0)b_1 \dots b_p$ where $a_k \rpr_{\R_i} b_k$ for all $k \in \{0,\dots,p \}$. On the other hand, $u = \la \vx. a_0\wth{y}{a_1}a_2 \dots a_p$. It follows from Proposition~\ref{prop-tgtA}.\ref{tgtA-subst2} that $a_0\wth{y}{a_1} \rpr_{\R_i} b_0\wth{y}{b_1}$ (in {\em one} step). Hence we have $u \rpr_{\R_i} \la \vx. b_0\wth{y}{b_1}b_2 \dots b_p \al_h t$. \end{proof} The main lemma is the projection of rewriting on $\b$-normal forms, that is, the commutation of diagram (\ref{diag-projA-bnf}). \begin{lemma} \label{lem-projA-bnf} Let $\ap : \Si \a \N$ be an arity and $\R$ be an algebraic conditional rewrite system which respects $\ap$. For all $i \in \N$, if $t \in \AN$ and $t\a_{\b\cup\R_i}^* u$, then $u\in \AN$ and $\bnf(t)\a_{\R_i}^* \bnf(u)$. \end{lemma} \begin{proof} We reason by induction on $i \in \N$. The base case $i = 0$ is trivial.
We assume that the property holds for $i \geq 0$ and show it for $i+1$. The proof is in two steps. \begin{enumerate} \item \label{lem-projA-bnf-base} We begin by showing that for all $t \in \AN$ we have \begin{equation} \begin{aligned} \label{diag-projAbnf-par} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\rpr_{\R_{i+1}}} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ \bnf(t) \ar@{.>}[r]_{\rpr_{\R_{i+1}}} & \bnf(u) } \end{aligned} \end{equation} We reason by induction on $\succ$ using Lemma~\ref{lem-wadsworth}. \begin{description} \item[$t = \la \vx.x t_1 \dots t_n$.] In this case, $\bnf(t) = \la \vx.x \bnf(t_1) \dots \bnf(t_n)$ and $u = \la \vx. x u_1 \dots u_n$ with $t_k \rpr_{\R_{i+1}} u_k$ for all $k \in \{1,\dots,n\}$. As $t \succ t_k$, for all $k \in \{1,\dots,n\}$, by induction hypothesis on $\succ$ we have $u_k \in \AN$ and $\bnf(t_k) \rpr_{\R_{i+1}} \bnf(u_k)$. Since $\bnf(u) = \la \vx.x \bnf(u_1) \dots \bnf(u_n)$, we have $\bnf(u) \in \AN$ and $\bnf(t) \rpr_{\R_{i+1}} \bnf(u)$. \item[$t = \la \vx.f t_1 \dots t_n$.] If no rule is reduced at the head of $t$, the result follows from induction hypothesis on $\succ$. Otherwise, there is a rule $\vd = \vc \sgt l \at r$ such that $t = \la \vx.l\s\va$ and $u = \la \vx.r\t\vb$ with $l\s \rpr_{\R_{i+1}} r\t$ and $\vd\s \ad_{\R_{i}} \vc\s$. Since $l$ is algebraic, $\bnf(t)$ is of the form $\la \vx. l\s' \va'$ where $\s' = \bnf(\s)$ and $\va' = \bnf(\va)$. Since $\bnf(t)$ respects $\ap$, $\va' = \emptyset$, hence $\va = \emptyset$ and $t = \la \vx . l\s$. Therefore, because $l\s \rpr_{\R_{i+1}} r\t$, we have $\vb = \emptyset$ and $u = \la \vx.r\t$. It remains to show that $u \in \AN$ and that $\bnf(t) = \la \vx.l\s' \rpr_{\R_{i+1}} \bnf(u)$. Because $l$ is algebraic, its variables are $\prec^+ l$. We can then apply induction hypothesis on $\s \rpr_{\R_{i+1}} \t$. It follows that $\t$ has a $\b$-normal form $\t'$, which respects $\ap$ and moreover such that $\s' \rpr_{\R_{i+1}} \t'$. 
Since $r$ is algebraic, $\la \vx.r\t'$ is the $\b$-normal form of $u$ (which respects $\ap$). Hence it remains to show that $l\s' \rpr_{\R_{i+1}} r\t'$. Because $\s' \rpr_{\R_{i+1}} \t'$, it suffices to prove that $l\s' \a_{\R_{i+1}} r\s'$. Thus, we are done if we show that $\vd\s' \ad_{\R_{i}} \vc\s'$. Since $\vd$ and $\vc$ are algebraic, $\bnf(\vd\s) = \vd\s'$ and $\bnf(\vc\s) = \vc\s'$. Now, since $\vd$ is algebraic and respects $\ap$, and since $\s'$ respects $\ap$, it follows that $\vd\s'$ respects $\ap$. The same holds for $\vc\s'$. Hence we conclude by applying the induction hypothesis on $i$ to $\vd\s \ad_{\R_{i}} \vc\s$. \item[$t = \la \vx.(\la y.v)w t_1 \dots t_n$.] In this case, we head $\b$-normalize $t$ and obtain a term $t' \in \AN$. Using the commutation of $\rpr_{\R_{i+1}}$ and $\a_h$, we obtain a term $u'$ such that $t' \rpr_{\R_{i+1}} u'$ and $u \a^*_h u'$. Since $t \succ^+ t'$, we can reason as in the preceding cases. \end{description} \item We now show that $t \a^*_{\b \cup \R_{i+1}} u$ implies $\bnf(t) \a^*_{\R_{i+1}} \bnf(u)$. We reason by induction on $t \a^*_{\b \cup \R_{i+1}} u$, using Proposition~\ref{prop-tgtA}.\ref{subset-tgtA}. The base case $t = u$ is trivial. Assume that $t \a_{\b\cup\R_{i+1}} v \a^*_{\b\cup\R_{i+1}} u$. By induction hypothesis we have $\bnf(v) \a^*_{\R_{i+1}} \bnf(u)$. There are two cases. If $t \a_\b v$, then $\bnf(t) = \bnf(v)$ and we are done. Otherwise we have $t \a_{\R_{i+1}} v$, hence $\bnf(t) \a^*_{\R_{i+1}} \bnf(v) \a^*_{\R_{i+1}} \bnf(u)$ by point~\ref{lem-projA-bnf-base}. \qedhere \end{enumerate} \end{proof} Preservation of confluence is a direct consequence of the projection on $\b$-normal forms. \begin{theorem} \label{thm-Aconfl-bnf} Let $\ap : \Si \a \N$ be an arity and $\R$ be an algebraic conditional rewrite system which respects $\ap$. If $\a_\R$ is confluent on $\AN$, then $\a_{\b\cup\R}$ is confluent on $\AN$. \end{theorem} \begin{proof} Let $s,t,u$ be such that $s \in \AN$ and $u ~\al^*_{\b \cup \R}~ s ~\a^*_{\b \cup \R}~ t$.
By two applications of Lemma~\ref{lem-projA-bnf} we get that $\bnf(u) ~\al^*_\R~ \bnf(s) ~\a^*_\R~ \bnf(t)$, with moreover $\bnf(s)$, $\bnf(t)$ and $\bnf(u) \in \AN$. Hence we conclude by $\a_\R$-confluence on $\AN$. In diagrammatic form, \[ \xymatrix{ & s \ar@{->}[dr]^{\b \cup \R}_{*} \ar@{->}[dl]_{\b \cup \R}^{*} \ar@{..>}[d]_{\b}^{*} \\ u \ar@{..>}[d]_{\b}^{*} & \bnf(s) \ar@{..>}[dr]_{\R}^{*} \ar@{..>}[dl]^{\R}_{*} & t \ar@{..>}[d]^{\b}_{*} \\ \bnf(u) \ar@{..>}[dr]^{\R}_{*} & & \bnf(t) \ar@{..>}[dl]_{\R}^{*} \\ & v & \\ } \] \end{proof} \comment{ In that paper, Dougherty proves, in the non-conditional case, that left-linearity is not needed when considering so called $\R$-stable sets of terms. A set of terms is $\R$-stable if it is made of $\b$-strongly normalizing arity compliant terms and is closed under $\a_\b$, $\a_\R$ and subterm. Then, $\a_{\b\cup\R}$ is confluent on any $\R$-stable set of terms whenever $\a_\R$ is confluent and $\R$ is first-order. We extend this result in three ways. First, we adapt it to conditional rewriting. The main difference is that we need a more subtle induction relation. Second, we do not limit ourselves to curried versions of first-order rewrite rules. We allow nested symbols in rules to be applied to less arguments that their arity. Third, and that is the main point, we show that confluence is preserved out of terms having an arity compliant $\b$-normal form. The amelioration is two-fold. First we do not need to assume that {\em every} $\b$-reduction starting from {\em any} $\a_{\b\cup\R}$-reduct of a term terminates. We simply need that {\em there exists} a terminating $\b$-reduction starting from the term. Second, we do not need to assume that {\em every} $\a_{\b\cup\R}$-reduct of a term is arity compliant but only that its $\b$-normal form is. 
} \section{Using beta-reduction in the evaluation of conditions} \label{sec-Bconfl} In this section, we focus on the combination of $\b$-reduction with the {\em join $\b$-conditional} rewrite relation $\a_{\R(\b)}$ issued from a conditional rewrite system $\R$ (see Definition~\ref{def-cond-rew}). We give sufficient conditions on $\R$ to deduce the confluence of $\a_{\b\cup\R(\b)}$ from the confluence of $\a_{\R}$. We achieve this by exhibiting two different criteria ensuring that derivations combining $\b$-reduction and $\b$-conditional rewriting can be projected, via $\b$-reductions, to derivations made of conditional rewriting only (hence without using $\b$-reduction in the evaluation of conditions): \begin{equation} \begin{aligned} \label{diag-BsleA} \xymatrix@R=\xyS@C=\xyL{ s \ar@{->}[r]^{\b \cup \R(\b)}_{*} \ar@{.>}[d]_{\b}^{*} & t \ar@{.>}[d]^{\b}_{*} \\ s' \ar@{.>}[r]_{\R}^{*} & t' } \end{aligned} \end{equation} It is easy to see that property~(\ref{diag-BsleA}) combined with the confluence of $\a_{\b\cup\R}$ entails the confluence of $\a_{\b\cup\R(\b)}$. We can actually prove property~(\ref{diag-BsleA}) on some subsets of $\Te(\Si)$ only. This motivates the assumptions of the following proposition. \begin{proposition} \label{prop-Bconfl} Let $\R$ be a conditional rewrite system and $S \sle \Te(\Si)$ be a set of terms closed under $\a_{\b \cup \R(\b)}$. Assume that $\a_{\b\cup\R}$ is confluent on $S$. If property~(\ref{diag-BsleA}) is satisfied for all $s,t \in S$ then $\a_{\b\cup\R(\b)}$ is confluent on $S$. \end{proposition} \begin{proof} Let $t \in S$ and $u,v$ be such that \[ u \quad\al^*_{\b \cup \R(\b)}\quad t \quad\a^*_{\b \cup \R(\b)}\quad v ~. \] Note that $u,v \in S$ since $S$ is closed under $\a_{\b\cup\R(\b)}$. By property~(\ref{diag-BsleA}) applied twice and by confluence of $\a_{\b \cup \R}$ on $S$, there is $w$ such that $u \a^*_{\b \cup \R} w \al^*_{\b \cup \R} v$. We conclude by the fact that $\a_{\R} \sle \a_{\R(\b)}$.
In diagrammatic form, \begin{equation*} \begin{aligned} \xymatrix@C=\xyS@R=\xyS{ & u \ar@{..>}[dl]_{\b}^{*} & & t \ar@{>}[ll]_{\b \cup \R(\b)}^{*} \ar@{>}[rr]^{\b \cup \R(\b)}_{*} \ar@{..>}[dr]_{\b}^{*} \ar@{..>}[dl]^{\b}_{*} & & v \ar@{..>}[dr]^{\b}_{*} & \\ \cdot \ar@{..>}[drrr]_{\b \cup \R}^{*} & & \cdot \ar@{..>}[ll]_{\R}^{*} & & \cdot \ar@{..>}[rr]^{\R}_{*} & & \cdot \ar@{..>}[dlll]^{\b \cup \R}_{*} \\ & & & w & & & } \end{aligned} \end{equation*} \end{proof} \noindent Our two different criteria to obtain~(\ref{diag-BsleA}) are the extensions to $\b$-conditional rewriting of the two criteria studied for conditional rewriting in Section~\ref{sec-Aconfl}. \begin{itemize} \item The first one concerns left-linear (and semi-closed) rewriting, with no termination assumption on $\b$-reduction. \item The second one concerns arity-preserving algebraic rewriting, with a weak-normalization assumption on $\b$-reduction. \end{itemize} In the left-linear and semi-closed case, allowing $\b$-reduction in the evaluation of conditions forces us to put stronger assumptions on $\R$ than for conditional rewriting in Section~\ref{sec-Aconfl-sc}: rewrite rules need to be algebraic and to respect an arity. Recall that these assumptions were already made in Section~\ref{sec-Aconfl-wn} when considering possibly non left-linear rewriting on weakly $\b$-normalizing terms. The following example presents rules which either are not algebraic or do not respect the arity prescribed by left-hand sides. With these rules property~(\ref{diag-BsleA}) fails and $\a_{\b\cup\R(\b)}$ is not confluent whereas $\a_{\R}$ and $\a_{\b\cup\R}$ are confluent. \begin{example} \label{ex-Bconfl} With the conditional rewrite systems~(\ref{eq-ex1}), (\ref{eq-ex2}), (\ref{eq-ex3}) and (\ref{eq-ex4}) below, \begin{enumerate} \item the relations $\a_\R$ and $\a_{\b\cup\R}$ are confluent, \item property~(\ref{diag-BsleA}) is not satisfied and the relation $\a_{\b\cup\R(\b)}$ is not confluent.
\end{enumerate} \begin{align} \label{eq-ex1} {\sfg} \esp x \esp y &\dbesp\at\dbesp x \esp y & {\sfg} \esp x \esp \sfc \esp=\esp {\sfd} \esp\sgt\esp \sff \esp x &\dbesp\at\dbesp \sfa \esp x & \sff \esp x &\dbesp\at\dbesp \sfb \esp x \\ \label{eq-ex2} & & x \esp \sfc \esp=\esp {\sfd} \esp\sgt\esp {\sff} \esp x &\dbesp\at\dbesp \sfa \esp x & \sff \esp x &\dbesp\at\dbesp \sfb \esp x \\ \label{eq-ex3} & & {\id} \esp x \esp {\sfc} \esp=\esp {\sfd} \esp\sgt\esp {\sff} \esp x &\dbesp\at\dbesp \sfa \esp x & \sff \esp x &\dbesp\at\dbesp \sfb \esp x \\ \label{eq-ex4} {\sfh} \esp x \esp y &\dbesp\at\dbesp {\id} \esp x \esp y & {\sfh} \esp x \esp \sfc \esp=\esp \sfd \esp\sgt\esp \sff \esp x &\dbesp\at\dbesp \sfa \esp x & \sff \esp x &\dbesp\at\dbesp \sfb \esp x \end{align} where $\id$ is defined by $\id \esp x \at x$. \end{example} \begin{proof}\item \begin{enumerate} \item Since the symbol $\sfd$ is not defined, these systems lead to normal conditional rewrite relations. Since they are left-linear and semi-closed, we can apply Theorem~\ref{thm-ext-mull}, and deduce the confluence of $\a_{\b \cup \R}$ from the confluence of $\a_{\R}$. Since they are left-linear systems, if their critical pairs are unfeasible, we can obtain the confluence of $\a_{\R}$ by Theorem~\ref{thm-orth-cond}. Each system has a unique conditional rule and a unique critical pair, issued from the root superposition of this rule with $\sff \ptesp x \esp\at\esp \sfb \ptesp x$. In each case, the number of occurrences of the symbol $\sfc$ in a term is preserved by $\a_{\R}$. Moreover, for each instantiation of the conditional rule, the instantiated left-hand side of the condition contains at least one occurrence of $\sfc$. It follows that it cannot reduce to $\sfd$, and that the critical pair is unfeasible. Therefore, we obtain the confluence of $\a_{\R}$ by Theorem~\ref{thm-orth-cond} and we deduce the confluence of $\a_{\b \cup \R}$ thanks to Theorem~\ref{thm-ext-mull}. 
\item In each case, the step $\sff \esp \la x.\sfd \a_{\R(\b)} \sfa \esp \la x.\sfd$ is not in $\a^*_{\b} \a^*_{\R} \al^*_\b$ and the following peak is unjoinable \[ \sfa \esp \la x.\sfd \quad\al_{\R(\b)}\quad \sff \esp \la x.\sfd \quad\a_{\R(\b)}\quad \sfb \esp \la x.\sfd ~. \] \qedhere \end{enumerate} \end{proof} \noindent Note that systems~(\ref{eq-ex1}) and~(\ref{eq-ex2}) contain respectively a right-hand side and a condition which are not algebraic, and that systems~(\ref{eq-ex3}) and~(\ref{eq-ex4}) contain respectively a right-hand side and a condition that do not respect the arity of $\id$ imposed by the rewrite rule $\id \ptesp x \at x$. Note also that~(\ref{diag-BsleA}) is reminiscent of a property required on the substitution calculus used in~\cite{or94lfcs}. This would require seeing $\a_\b$ as the substitution calculus. But this does not fit in our framework, in particular because we consider $\a_\b$ and rewriting at the same level. Moreover, the substitution calculus used in~\cite{or94lfcs} is required to be complete (i.e. strongly normalizing and confluent), which is not the case here for $\a_\b$. \paragraph{Outline.} We begin in Section~\ref{sec-Bconfl-sc} with the extension of Theorem~\ref{thm-Aconfl-sc} to $\b$-conditional rewriting for left-linear and semi-closed systems. In this case, preservation of confluence only holds on terms respecting an arity (namely {\em conditionally $(\R,\ap)$-stable terms}, see Definition~\ref{def-rstable-cond}). This is an extra hypothesis compared to the results of Section~\ref{sec-Aconfl-sc}. Then, in Section~\ref{sec-Bconfl-wn} we consider the case of Theorem~\ref{thm-Aconfl-bnf}. It directly extends to $\b$-conditional rewriting. In both cases, we assume that rules are algebraic and respect an arity. In each case our assumptions ensure that the results of Section~\ref{sec-Aconfl} apply, hence that $\a_{\b\cup\R}$ is confluent whenever $\a_\R$ is confluent.
Hence, using Proposition~\ref{prop-Bconfl} we deduce the confluence of $\a_{\b \cup \R(\b)}$ from the confluence of $\a_\R$ and property~(\ref{diag-BsleA}). \paragraph{Remarks.} In~\cite{bkr06fossacs}, we have shown~(\ref{diag-BsleA}) by using a stratification of $\a_{\R(\b)}$ in which, instead of having $\a_{\R(\b)_0} = \emptyset$ as in Definition~\ref{def-cond-rew}, we had $\a_{\R(\b)_0} = \a_\b$ (it is easy to show that these two base cases induce the same relation $\a_{\R(\b)}$). We proceed here in a slightly different and more general way. We show~(\ref{diag-BsleA}) with $\a_{\R(\b)_0} = \emptyset$ and use the following intermediate property: for all $i \in \N$, \begin{equation} \begin{aligned} \label{diag-BsleA-strat} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\b \cup \R(\b)_i}_{*} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ t' \ar@{.>}[r]_{\R_i}^{*} & u' } \end{aligned} \end{equation} \subsection{Confluence for left-linear semi-closed systems} \label{sec-Bconfl-sc} This section is devoted to the proof of (\ref{diag-BsleA-strat}) for left-linear semi-closed systems. Using Proposition~\ref{prop-Bconfl} and Theorem~\ref{thm-Aconfl-sc}, we then easily deduce the confluence of $\a_{\b \cup \R(\b)}$ when $\a_\R$ is confluent. We postpone the proof of~(\ref{diag-BsleA-strat}) until Lemma~\ref{lem-BtoA}, Section~\ref{sec-Bconfl-sc-confl}. The material used in the proof is presented and motivated in Section~\ref{sec-Bconfl-sc-prelim} below. \subsubsection{Preliminaries} \label{sec-Bconfl-sc-prelim} The proof of~(\ref{diag-BsleA-strat}) involves some intermediate lemmas and the extension of $(\R,\ap)$-stable sets of terms to conditional rewriting. In order to motivate them, we sketch some steps of the proof. Property~(\ref{diag-BsleA-strat}) is proved by induction on $i \in \N$. Assuming the property for $i \in \N$, we discuss it for $i+1$. We reason by induction on the length of the derivation $t \a^*_{\b \cup \R(\b)_{i+1}} u$. 
We present the ingredients used in the different steps of this induction. \paragraph{The base case.} In the base case, we have $t \a_{\b \cup \R(\b)_{i+1}} u$ in one step. The case of $t \a_\b u$ is trivial: take $t' \deq u' \deq u$. The case of $t \a_{\R(\b)_{i+1}} u$ is more involved. We show \begin{equation} \label{diag-BsleA-strat-base-rew} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\R(\b)_{i+1}} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ t' \ar@{.>}[r]_{\R_{i+1}} & u' } \end{aligned} \end{equation} Consider a rule $\vd = \vc \sgt l \at_\R r$ and a substitution $\s$. Recall from Example~\ref{ex-Bconfl} that the rule must be algebraic and respect an arity. Hence every $\b$-redex occurring in $\vd\s$ or $\vc\s$ also occurs in $l\s$. Then, property~(\ref{diag-BsleA-strat-base-rew}) means that there is a $\b$-reduction starting from $l\s$ that reduces these redexes and produces a substitution $\s'$ such that \[ l\s \quad\a^*_\b\quad l\s' \quad\a_{\R_{i+1}}\quad r\s' \quad\al^*_\b\quad r\s ~. \] In other words, if the conditions are satisfied with $\s$ and $\a_{\b\cup\R(\b)_i}$ (i.e.\ $\vd\s\ad_{\b\cup\R(\b)_i} \vc$, recall that $\vc$ are closed terms since $\R$ is semi-closed), then they are satisfied with $\s'$ and $\a_{\R_i}$ (i.e.\ $\vd\s'\ad_{\R_i} \vc$). Let us look at this more precisely. Assume that $\vd\s\ad_{\b\cup\R(\b)_i} \vc$. Hence there are terms $\vv$ such that \[ \vd\s \quad\a^*_{\b\cup\R(\b)_i}\quad \vv \quad\al^*_{\b\cup\R(\b)_i}\quad \vc ~. \] By induction hypothesis on $i$, we get terms $\vw$ and $\vv'$ such that \[ \xymatrix@R=\xyS@C=\xyL{ \vd\s \ar@{->}[r]^{\b \cup \R(\b)_i}_{*} \ar@{.>}[d]_{\b}^{*} & \vv \ar@{.>}[d]^{\b}_{*} & \vc \ar@{->}[l]_{\b\cup \R(\b)_i}^{*} \\ \vw \ar@{.>}[r]_{\R_i}^{*} & \vv' } \] In order to conclude, we need a substitution $\s'$ such that $\s \a^*_{\b} \s'$ and $\vw \a^*_{\b} \vd\s'$. Using the algebraicity of $\vd$, this follows from Proposition~\ref{prop-beta-alg}, which is stated and proved below.
The remainder of the proof uses the commutation of $\a_\b$ and $\a_{\R_i}$ (Lemma~\ref{lem-commut-cond}) and relies on the semi-closure and the right-applicativity of $\R$ (which follows from its algebraicity). See the proof of Lemma~\ref{lem-BtoA} in Section~\ref{sec-Bconfl-sc-confl} for details. We need to show that if $t$ is an algebraic term such that $t\s \a^*_{\b} v$, then there is a substitution $\s'$ such that $\s \a^*_\b \s'$ and $v \a^*_\b t\s'$. This is provided by the following two technical propositions. The first one is a generalization of the diamond property of $\rpr_\b$. It is a direct consequence of Lemma 3.2 in~\cite{takahashi95ic}. \begin{proposition} \label{prop-diamond} Let $n \geq 0$ and assume that $s,s_1,\dots, s_n$ are terms such that $s \rpr_\b s_i$ for all $i \in \{1,\dots,n\}$. Then there is a term $s'$ such that $s \rpr_\b s'$ and $s_i \rpr_\b s'$ for all $i \in \{1,\dots,n\}$. \end{proposition} We deduce the following property. The proof of case~\ref{prop-beta-alg-star} uses the diamond property of $\rpr_\b$. \begin{proposition} \label{prop-beta-alg} Let $t_1,\dots,t_n$ be algebraic terms and let $\s$ be a substitution. \begin{enumerate} \item \label{prop-beta-alg-base} If $t_i\s \rpr_\b u_i$ for all $i \in \{1,\dots,n\}$, then there is a substitution $\s'$ such that $\s \rpr_\b \s'$ and $u_i \rpr_\b t_i\s'$ for all $i \in \{1,\dots,n\}$. \item \label{prop-beta-alg-star} If $t_i\s \a^*_\b u_i$ for all $i \in \{1,\dots,n\}$, then there is a substitution $\s'$ such that $\s \a^*_\b \s'$ and $u_i \a^*_\b t_i\s'$ for all $i \in \{1,\dots,n\}$. \end{enumerate} \end{proposition} Note that the terms $t_1,\dots,t_n$ need not be linear. \begin{proof}\item \begin{enumerate} \item Since $t_i$ is algebraic, every occurrence of a $\b$-redex in $t_i\s$ is of the form $p.d$ where $p$ is an occurrence of a variable $x$ in $t_i$.
Since $\rpr_\b$ is reflexive, for each $i \in \{1,\dots,n\}$, each $x \in \FV(t_i)$ and each $p \in \Occ(x,t_i)$, there is a term $s_{(i,x,p)}$ such that \[ t_i\s|_p \quad=\quad \s(x) \quad\rpr_\b\quad s_{(i,x,p)} \] and for all $i \in \{1,\dots,n\}$, \[ u_i \quad=\quad t_i[p \gets s_{(i,x,p)} \esp\tq\esp x \in \FV(t_i) \quad\land\quad p \in \Occ(x,t_i)] ~. \] By Proposition~\ref{prop-diamond}, for all $x \in \FV(t_1,\dots,t_n)$, there is a term $v_x$ such that $\s(x) \rpr_\b v_x$ and $s_{(i,x,p)} \rpr_\b v_x$ for all $i \in \{1,\dots,n\}$ and all $p \in \Occ(x,t_i)$. Therefore, for all $i \in \{1,\dots,n\}$ we have \[ u_i \quad\rpr_\b\quad t_i[p \gets v_x \esp\tq\esp x \in \FV(t_i) \quad\land\quad p \in \Occ(x,t_i)] ~. \] Let $\s'$ be the substitution of same domain as $\s$ such that $\s'(x) = v_x$ for all $x \in \FV(t_1,\dots,t_n)$ and $\s'(x) = \s(x)$ for all $x \notin \FV(t_1,\dots,t_n)$. Then we have $\s \rpr_\b \s'$ and $u_i \rpr_\b t_i\s'$ for all $i \in \{1,\dots,n\}$. \item By induction on $k \in \N$, we show that if $t_i\s \rpr^k_\b u_i$ for all $i\in \{1,\dots,n\}$, then there is $\s'$ such that $\s \rpr^*_\b \s'$ and $u_i \rpr^*_\b t_i\s'$ for all $i \in \{1,\dots,n\}$. The base case $t_i\s \rpr^0_\b u_i$ for all $i \in \{1,\dots,n\}$ is trivial. For the induction case, there are $u'_1,\dots,u'_n$ such that ${t_i\s} \esp\rpr^k_\b\esp {u'_i} \esp\rpr_\b\esp {u_i}$ for all $i \in \{1,\dots,n\}$. Then, by~\ref{prop-beta-alg-base}, there is $\s'$ such that $\s \rpr_\b \s'$ and $u'_i \rpr_\b t_i\s'$ for all $i \in \{1,\dots,n\}$. Since $\rpr_\b$ satisfies the diamond property (Proposition~\ref{prop-diamond}), for all $i \in \{1,\dots,n\}$ there is $u''_i$ such that ${t_i\s'} \esp\rpr^k_\b\esp {u''_i} \esp\rpl_\b\esp {u_i}$, and by induction hypothesis on $k$, there is $\s''$ such that $\s' \rpr^*_\b \s''$ and $u''_i \rpr^*_\b t_i\s''$ for all $i \in \{1,\dots,n\}$. We deduce that $u_i \rpr_\b^* t_i\s''$ for all $i \in \{1,\dots,n\}$. 
In diagrammatic form: \[ \xymatrix@R=\xyS@C=\xyL{ \vt\s \ar@{->}[r]^{\rpr_\b} & \vu' \ar@{->}[r]^{\rpr_\b}_{k} \ar@{.>}[d]_{\rpr_\b} & \vu \ar@{.>}[d]^{\rpr_\b} \\ & \vt\s' \ar@{.>}[r]_{\rpr_\b}^{k} & {\vu''} \ar@{.>}[d]^{\rpr_\b}_{*} \\ & & \vt{\s''} } \] \qedhere \end{enumerate} \end{proof} \paragraph{The induction case.} In the induction case, we have $t \a^*_{\b \cup \R(\b)_{i+1}} u$ in more than one step. Hence, this derivation can be written as $t \a_{\b \cup \R(\b)_{i+1}} v \a^*_{\b \cup \R(\b)_{i+1}} u$ for some $v$. If $t \a_\b v$, then we easily conclude by induction hypothesis on $v \a^*_{\b \cup \R(\b)_{i+1}} u$. Otherwise, we have $t \a_{\R(\b)_{i+1}} v$ and things get more involved. Using the induction hypothesis on $v \a^*_{\b \cup \R(\b)_{i+1}} u$ and the discussion of the above paragraph for $t \a_{\R(\b)_{i+1}} v$, we arrive at the following situation: \[ \xymatrix@R=\xyS@C=\xyS{ && t \ar@{->}[rr]^{\R(\b)_{i+1}} \ar@{.>}[dl]_{\b}^{*} && v \ar@{->}[rr]^{\b \cup \R(\b)_{i+1}}_{*} \ar@{.>}[dl]^{\b}_{*} \ar@{.>}[dr]_{\b}^{*} && u \ar@{.>}[dr]^{\b}_{*} \\ & t' \ar@{.>}[rr]_{\R_{i+1}} && v'' && v' \ar@{.>}[rr]_{\R_{i+1}}^{*} && u' } \] Using the confluence of $\a_\b$ and the commutation of $\a_\b$ with $\a_{\R_i}$ (Lemma~\ref{lem-commut-cond}), we get \[ \xymatrix@R=\xyS@C=\xyS{ && t \ar@{->}[rr]^{\R(\b)_{i+1}} \ar@{.>}[dl]_{\b}^{*} && v \ar@{->}[rr]^{\b \cup \R(\b)_{i+1}}_{*} \ar@{.>}[dl]^{\b}_{*} \ar@{.>}[dr]_{\b}^{*} && u \ar@{.>}[dr]^{\b}_{*} \\ & t' \ar@{.>}[rr]_{\R_{i+1}} && v'' \ar@{.>}[dr]^{\b}_{*} && v' \ar@{.>}[rr]_{\R_{i+1}}^{*} \ar@{.>}[dl]_{\b}^{*} && u' \ar@{.>}[dl]^{\b}_{*} \\ &&&& v''' \ar@{.>}[rr]_{\R_{i+1}}^{*} && u'' } \] In order to conclude we use the following property: for all $i \in \N$, \begin{equation} \begin{aligned} \label{diag-AsleA-strat} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\b \cup \R_i}_{*} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ t' \ar@{.>}[r]_{\R_i}^{*} & u' } \end{aligned} \end{equation} The intricate case 
of property~(\ref{diag-AsleA-strat}) is when there is an $\R_i$-step followed by a $\b$-step: \[ t \quad\a_{\R_i}\quad v \quad\a_\b\quad u ~. \] In this case, we have to make sure that the step $t \a_{\R_i} v$ did not create the $\b$-redex contracted in $v \a_\b u$. As seen in Section~\ref{sec-rew}, this follows from arity assumptions on terms. We therefore use terms whose arity is compatible with that of the rewrite system. We need this property to be preserved by $\a_{\b \cup \R(\b)}$, but also by the conditions of rewrite rules: given a semi-closed rule $\vd = \vc \sgt l \at_\R r$ and a substitution $\s$, if $l\s$ respects $\ap:\Si \a \N$, then the terms $r\s,\vd\s$ should also respect $\ap$. This is the case when $r,\vd$ are algebraic and respect $\ap$. Moreover, in the following we have to make sure that every term at hand satisfies these properties. In particular, if we have a rule $\vd = \vc \sgt l \at r$ and a substitution $\s$ such that $l\s$ and all its reducts respect an arity $\ap$, this has to be the case for $\vd\s$ too (the case of $\vc$ follows from semi-closure). Hence, we consider sets of terms which are stable under the rewrite relation $\a_{\o{\R}}$ issued from the rewrite system \[ \begin{array}{r !{\quad} c !{\quad} l @{\tq} l} \at_{\o\R} &\deq& \{(l,d_i) & d_1 = c_1 \land \ldots \land d_n = c_n \sgt l \at_\R r \quad\land\quad i \in \{1,\dots,n\}\} \\ &\cup& \{(l,r) & d_1 = c_1 \land \ldots \land d_n = c_n \sgt l \at_\R r\} ~. \end{array} \] This motivates the following definition, which extends $(\R,\ap)$-stability (Definition~\ref{def-rstable}) to conditional rewriting. \begin{definition}[Conditionally $(\R,\ap)$-Stable Terms] \label{def-rstable-cond} Let $\ap : \Si \a \N$ be an arity and $\R$ be a conditional rewrite system. A set of terms $S$ is {\em conditionally $(\R,\ap)$-stable} if it is $(\o{\R},\ap)$-stable. \end{definition} We now show~(\ref{diag-AsleA-strat}). The proof of this property occupies Proposition~\ref{prop-post} and Lemma~\ref{lem-post-star}.
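To illustrate Definition~\ref{def-rstable-cond}, consider the conditional rule $\sfg \esp x \esp \sfc = \sfd \sgt \sff \esp x \at_\R \sfa \esp x$ of system~(\ref{eq-ex1}). It contributes to $\at_{\o\R}$ both the left-hand side of its condition and its right-hand side:
\[
\sff \esp x \quad\at_{\o\R}\quad \sfg \esp x \esp \sfc
\qquad\text{and}\qquad
\sff \esp x \quad\at_{\o\R}\quad \sfa \esp x ~,
\]
so that a conditionally $(\R,\ap)$-stable set is, in particular, stable under reduction of instances of $\sff \esp x$ to the left-hand side $\sfg \esp x \esp \sfc$ of the condition, and not only to the right-hand side $\sfa \esp x$.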
Note that we prove Proposition~\ref{prop-post} for systems whose conditions need not be algebraic. However, this property may fail in presence of right-hand sides which either are not algebraic or do not respect the arity prescribed by the left-hand sides. Note also that we work on conditionally $(\R,\ap)$-stable terms. \begin{proposition} \label{prop-post} Let $\R$ be a left-linear semi-closed system which is right-algebraic and respects $\ap: \Si \a \N$, and let $S \sle \Te(\Si)$ be conditionally $(\R,\ap)$-stable. For all $i \in \N$ and all $t,u,v \in S$, if $t \a_{\R_i} u \rpr_\b v$, then there are $t'$ and $v'$ such that $t \rpr_\b t' \a^*_{\R_i} v' \rpl_\b v$~: \begin{equation*} \begin{aligned} \xymatrix@R=\xyS@C=\xyS{ t \ar@{>}[r]^{\R_i} \ar@{..>}[d]_{\rpr_\b} & u \ar@{>}[r]^{\rpr_\b} & v \ar@{..>}[d]^{\rpr_\b} \\ t' \ar@{..>}[rr]_{\R_i}^{*} & & v' } \end{aligned} \end{equation*} \end{proposition} \begin{proof} The base case $i = 0$ is trivial, and we assume $i > 0$. We reason by induction on $t$ using Lemma~\ref{lem-wadsworth}. \begin{description} \item[$t = \la \vx.x \esp t_1 \dots t_n$.] In this case, $u = \la \vx.x u_1 \dots u_n$ with $(t_1,\dots,t_n) \a_{\R_i} (u_1,\dots,u_n)$. Moreover, $v = \la \vx.x \esp v_1 \dots v_n$ with $(u_1,\dots,u_n) \rpr_\b (v_1,\dots,v_n)$. By induction hypothesis, there are $(t'_1,\dots,t'_n)$ and $(v'_1,\dots,v'_n)$ such that \[ (t_1,\dots,t_n) \quad\rpr_\b\quad (t'_1,\dots,t'_n) \quad\a^*_{\R_i}\quad (v'_1,\dots,v'_n) \quad\rpl_\b\quad (v_1,\dots,v_n) ~. \] It follows that \[ \la \vx.x \esp t_1 \dots t_n \quad\rpr_\b\quad \la \vx.x \esp t'_1 \dots t'_n \quad\a^*_{\R_i}\quad \la \vx.x \esp v'_1 \dots v'_n \quad\rpl_\b\quad \la \vx.x \esp v_1 \dots v_n ~. \] \item[$t = \la \vx.\sff t_{1} \dots t_{n}$.] 
If $u = \la \vx.\sff u_{1} \dots u_{n}$ with \[ (t_1,\dots,t_{n}) \quad\a_{\R_i}\quad (u_1,\dots,u_{n}) ~, \] then $v = \la \vx.\sff v_{1} \dots v_{n}$ with $(u_1,\dots,u_{n}) \rpr_\b (v_1,\dots,v_{n})$ and we reason as in the previous case. Otherwise, there is a rule $\vd = \vc \sgt l \at_\R r$, a substitution $\s$ and $k \in \{1,\dots,n\}$ such that $t = \la \vx.l\s t_{k+1} \dots t_{n}$ and $u = \la \vx.r\s t_{k+1} \dots t_{n}$. As $t$ and $\R$ respect $\ap$, we have $n = k$, hence $t = \la \vx.l\s$, $u = \la \vx.r\s$ and $v = \la \vx.w$ with $r\s \rpr_\b w$. Since $r$ is algebraic, by Proposition~\ref{prop-beta-alg}.\ref{prop-beta-alg-base} there is $\s'$ such that $\s \rpr_\b \s'$ and $w \rpr_\b r\s'$. As $l$ is linear, by Proposition~\ref{prop-subst-lin-rhd} we have $l\s \rpr_\b l\s'$. It remains to show that $l\s' \a_{\R_i} r\s'$. Since $l\s \a_{\R_i} r\s$, there are terms $\vv$ such that $\vd\s \a^*_{\R_{i-1}} \vv \al^*_{\R_{i-1}} \vc$. Since $\vd\s \a^*_\b \vd\s'$, by Lemma~\ref{lem-commut-cond}, we obtain terms $\vv'$ such that \[ \vd\s \quad\a^*_\b\quad \vd\s' \quad\a^*_{\R_{i-1}}\quad \vv' \quad\al^*_{\b}\quad \vv \quad\al^*_{\R_{i-1}}\quad \vc ~. \] Since terms $\vc$ are applicative and closed, they are algebraic, and since $\R$ is right-algebraic, terms $\vv$ are also algebraic, hence in $\b$-normal form. It follows that $\vv = \vv'$, hence that $\vd\s' \ad_{\R_{i-1}} \vc$, and we deduce that $l\s' \a_{\R_i} r\s'$. \item[$t = \la \vx.(\la x.t_0)t_1 \dots t_n ~(n \geq 1)$.] Then $u$ is of the form $u = \la \vx.(\la x.u_0)u_1 \dots u_n$ and we have $(t_0,\dots,t_n) \a_{\R_i} (u_0,\dots,u_n)$. If $v = \la \vx.(\la x.v_0)v_1 \dots v_n$ with \[ (u_0,\dots,u_n) \quad\rpr_\b\quad (v_0,\dots,v_n) ~, \] then we conclude by induction hypothesis, as in the first case. Otherwise, $v = \la \vx.v_0\wth{x}{v_1} v_2 \dots v_n$ with $(u_0,\dots,u_n) \rpr_\b (v_0,\dots,v_n)$. 
By induction hypothesis, we have \[ (t_0,\dots,t_n) \quad\rpr_\b\quad (t'_0,\dots,t'_n) \quad\a^*_{\R_i}\quad (v'_0,\dots,v'_n) \quad\rpl_\b\quad (v_0,\dots,v_n) ~. \] It follows that by using $(\pBeta)$, $(\papp)$ we have \[ \begin{array}{c c c} \la \vx.(\la x.t_0)t_1 \dots t_n && \la \vx.v_0\wth{x}{v_1} v_2 \dots v_n \\ \triangledown_\b &&\triangledown_\b \\ \la \vx.t'_0\wth{x}{t'_1} t'_2 \dots t'_n & \quad\a^*_{\R_i}\quad & \la \vx.v'_0\wth{x}{v'_1} v'_2 \dots v'_n ~. \end{array} \] \qedhere \end{description} \end{proof} \begin{lemma} \label{lem-post-star} Let $\R$ be a semi-closed left-linear right-algebraic system which respects $\ap : \Si \a \N$, and let $S \sle \Te(\Si)$ be conditionally $(\R,\ap)$-stable. For all $s,t \in S$, if $s \a^*_{\b \cup {\R_i}} t$ then there are $s'$, $t'$ such that $s \a^*_\b s' \a^*_{\R_i} t' \al^*_\b t$~: \begin{equation*} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ s \ar@{->}[r]^{\b \cup {\R_i}}_{*} \ar@{.>}[d]_{\b}^{*} & t \ar@{.>}[d]^{\b}_{*} \\ s' \ar@{.>}[r]_{{\R_i}}^{*} & t' } \end{aligned} \end{equation*} \end{lemma} \comment{ \begin{proof} The proof is in four steps. We begin (1) to show that $\a_{\R_i} \rpr_\b \sle \rpr_\b \a^*_{{\R_i}} \rpl_\b$, reasoning by cases on the step $\rpr_\b$. This inclusion relies on an important fact of algebraic terms: if $s$ is algebraic and $s\s\rpr_\b v$ then $v \rpr_\b s\s'$ with $\s \rpr^*_\b \s'$. From (1), it follows that (2) $\a_{\R_i}^*\rpr_\b\sle \rpr_\b\a_{\R_i}^*\rpl_\b^*$, by induction on the number of $\a_{\R_i}$-steps. Then (3), we obtain $\a_{\R_i}^*\rpr_\b^*\sle \rpr_\b^*\a_{\R_i}^*\rpl_\b^*$ using an induction on the number of $\rpr_\b$-steps and the diamond property of $\rpr_\b$. Finally (4), we deduce that $(\rpr_\b\cup\a_{\R_i})^*\sle \rpr_\b^*\a_{\R_i}^*\rpl_\b^*$ by induction on the length of $(\rpr_\b\cup\a_{\R_i})^*$. \end{proof}} \begin{proof} The proof is in three steps. 
\begin{enumerate} \item \label{lem-post-star-postpone} We show $\a_{\R_i}^*\rpr_\b\sle \rpr_\b\a_{\R_i}^*\rpl_\b^*$ by induction on the number of ${\R_i}$-steps. Assume that $s\a_{\R_i}^* t'\a_{\R_i} t\rpr_\b u$. By Proposition~\ref{prop-post}, there are $v$ and $v'$ such that $t'\rpr_\b v\a_{\R_i}^* v'\rpl_\b u$. By induction hypothesis, there are $s'$ and $s''$ such that $s\rpr_\b s'\a_{\R_i}^* s''\rpl_\b^* v$. Then, by Lemma~\ref{lem-commut-cond}, there is $t''$ such that $s''\a_{\R_i}^* t''\rpl_\b^* v'$. Thus, $s\rpr_\b s'\a_{\R_i}^* t''\rpl_\b^* u$. \item \label{lem-post-star-postpone-star} We show $\a_{\R_i}^*\rpr_\b^*\sle \rpr_\b^*\a_{\R_i}^*\rpl_\b^*$ by induction on the number of $\rpr_\b$-steps. Assume that $s\a_{\R_i}^* t\rpr_\b u'\rpr_\b^* u$. By~\ref{lem-post-star-postpone}, there are $s'$ and $t'$ such that $s\rpr_\b s'\a_{\R_i}^* t'\rpl_\b^* u'$. By the diamond property of $\rpr_\b$, there is $v$ such that $t'\rpr_\b^* v\rpl_\b^* u$, where $t'\rpr_\b^* v$ is no longer than $u'\rpr_\b^* u$. Hence, by induction hypothesis, there are $s''$ and $t''$ such that $s'\rpr_\b^* s''\a_{\R_i}^* t''\rpl_\b^* v$. Therefore, $s\rpr_\b^* s''\a_{\R_i}^* t''\rpl_\b^* u$. \item We prove $(\rpr_\b\cup\a_{\R_i})^*\sle \rpr_\b^*\a_{\R_i}^*\rpl_\b^*$ by induction on the length of $(\rpr_\b\cup\a_{\R_i})^*$. Assume that $s ~(\rpr_\b\cup\a_{\R_i})~ t ~(\rpr_\b\cup\a_{\R_i})^*~ u$. There are two cases. First, $s\rpr_\b t$. This case follows directly from the induction hypothesis. Second, $s\a_{\R_i} t$. By induction hypothesis, there are $t'$ and $u'$ such that $t\rpr_\b^* t'\a_{\R_i}^* u'\rpl_\b^* u$. By~\ref{lem-post-star-postpone-star}, there are $s'$ and $t''$ such that $s\rpr_\b^* s'\a_{\R_i}^* t''\rpl_\b^* t'$. Finally, by Lemma~\ref{lem-commut-cond}, there is $u''$ such that $t''\a_{\R_i}^* u'' \rpl_\b^* u'$. Hence, $s\rpr_\b^* s'\a_{\R_i}^* u''\rpl_\b^* u$. \end{enumerate} We conclude by the fact that $\rpr_\b^* = \a_\b^*$.
\end{proof} \paragraph{Remark.} Note that $\b$-reduction is the only way to obtain a term not respecting $\ap$ from a term respecting it. For instance, with $\ap(\id) = 1$ the term $(\la x.x\esp y\esp y)\id$ respects $\ap$ whereas $\id \esp y \esp y$ does not respect $\ap$. \begin{proposition} Let $\R$ be an algebraic conditional rewrite system and $t \in \Te(\Si)$ that both respect $\ap : \Si \a \N$. If $t \a_{\R(\b)} u$ then $u$ respects $\ap$. \end{proposition} \begin{proof} We reason by induction on $t$, using Lemma~\ref{lem-wadsworth}. The only case which does not directly follow from the induction hypothesis is when $t = \la \vx.\sff t_{1} \dots t_{n}$ and there is a rule $\vd = \vc \sgt l \at_\R r$, a substitution $\s$ and $k \in \{1,\dots,n\}$ such that $t = \la \vx.l\s t_{k+1} \dots t_{n}$. Since $t$ and $\R$ respect $\ap$, we have $k = n$. Hence $u = \la \vx.r\s$ and $u$ respects $\ap$ since $r$ is an algebraic term that respects $\ap$. \end{proof} \subsubsection{Confluence of beta-reduction with beta-conditional rewriting} \label{sec-Bconfl-sc-confl} We now have all we need to show property~(\ref{diag-BsleA-strat}). As seen in Example~\ref{ex-Bconfl}, rules have to be algebraic and arity compliant. We reason by induction on $i \in \N$. \begin{lemma} \label{lem-BtoA} Let $\R$ be a semi-closed left-linear algebraic system which respects $\ap : \Si \a \N$, and let $S \sle \Te(\Si)$ be conditionally $(\R,\ap)$-stable. For all $t,u \in S$, if $t \a^*_{\b \cup {\R(\b)_i}} u$ then there are $t'$, $u'$ such that $t \a^*_\b t' \a^*_{\R_i} u' \al^*_\b u$~: \begin{equation}\label{eq-lem-BtoA} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\b \cup \R(\b)_i}_{*} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ t' \ar@{.>}[r]_{\R_i}^{*} & u' } \end{aligned} \end{equation} \end{lemma} \begin{proof} We show~(\ref{eq-lem-BtoA}) by induction on $i \in \N$. The base case $i = 0$ is trivial. We assume that the property holds for $i \geq 0$ and show it for $i+1$. 
The proof is in two steps. \begin{enumerate} \item \label{lem-BtoA-base} We begin by showing that diagram~(\ref{eq-lem-BtoA-base}) commutes: \begin{equation}\label{eq-lem-BtoA-base} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\R(\b)_{i+1}} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ t' \ar@{.>}[r]_{\R_{i+1}} & u' } \end{aligned} \end{equation} We reason by induction on $t$, using Lemma~\ref{lem-wadsworth}. The only case that does not directly follow from the induction hypothesis is when $t = \la \vx.\sff t_1 \dots t_n$ and there is a rule $\vd = \vc \sgt l \at_\R r$, a substitution $\s$ and $k \in \{1,\dots,n\}$ such that $t = \la \vx.l\s t_{k+1} \dots t_{n}$ and $u = \la \vx.r\s t_{k+1} \dots t_{n}$ with $l\s \a_{\R(\b)_{i+1}} r\s$. Since $t$ and $\R$ respect $\ap$, we have $k = n$, hence $u = \la \vx.r\s$. To deduce~(\ref{eq-lem-BtoA-base}), it remains to show that there is a substitution $\s'$ such that \[ l\s \quad\a^*_\b\quad l\s' \quad\a_{\R_{i+1}}\quad r\s' \quad\al^*_\b\quad r\s ~. \] Since $l\s \a_{\R(\b)_{i+1}} r\s$, there are terms $\vv$ such that $\vd\s \a^*_{\b \cup \R(\b)_i} \vv \al^*_{\b \cup \R(\b)_i} \vc$. By induction hypothesis on $i$, there are terms $\vw$ and $\vv'$ such that \[ \vd\s \quad\a^*_\b\quad \vw \quad\a^*_{\R_i}\quad \vv' \quad\al^*_\b\quad \vv ~. \] By Proposition~\ref{prop-beta-alg}.\ref{prop-beta-alg-star}, as terms $\vd$ are algebraic there is a substitution $\s'$ such that $\s \a^*_\b \s'$ and $\vw \a^*_\b \vd\s'$. By Lemma~\ref{lem-commut-cond} (commutation of $\a_{\R_i}$ with $\a_\b$), we obtain terms $\vv'$ such that $\vd\s' \a^*_{\R_i} \vv' \al^*_\b \vv$. It follows that \[ \vd\s \quad\a^*_\b\quad \vd\s' \quad\a^*_{\R_i}\quad \vv' \quad\al^*_\b\quad \vv \quad\al^*_{\b \cup \R(\b)_i}\quad \vc ~. \] Since terms $\vc$ are algebraic and $\R$ is right-applicative, every reduct of $\vc$ by $\a_{\R(\b)}$ is $\b$-normal. 
We thus have $\vv' = \vv$ and by induction hypothesis on $i$ we deduce that $\vc \a^*_{\R_i} \vv$. It follows that $\vd\s' \ad_{\R_i} \vc$, hence $l\s' \a_{\R_{i+1}} r\s'$. We have $l\s \a^*_\b l\s'$ and $r\s \a^*_\b r\s'$ since $\s \a^*_\b \s'$, hence \[ t \quad\a^*_\b\quad \la \vx.l\s' \quad\a_{\R_{i+1}}\quad \la \vx.r\s' \quad\al^*_\b\quad u ~. \] \item We now show~(\ref{eq-lem-BtoA}) by induction on the length of $t \a^*_{\b \cup \R(\b)_{i+1}} u$. Assume that \[ t\a_{\b\cup\R(\b)_{i+1}} v\a_{\b\cup\R(\b)_{i+1}}^* u ~. \] By induction hypothesis, there are $v'$ and $u'$ such that $v \a_\b^* v' \a_{\R_{i+1}}^* u' \al_\b^* u$, and there are two cases. If $t \a_\b v$, then we are done since $t \a_\b^* v'$. Otherwise, we have $t\a_{\R(\b)_{i+1}} v$. From~\ref{lem-BtoA-base}, there are $t'$ and $v''$ such that \[ t\a_\b^* t' \a_{\R_{i+1}}^* v'' \al_\b^* v ~. \] Now, by confluence of $\a_\b$, there is $v'''$ such that $v''\a_\b^* v''' \al_\b^* v'$. Commutation of $\a_\b$ and $\a_{\R_{i+1}}$ (Lemma~\ref{lem-commut-cond}) applied to $v''' \al^*_\b v' \a^*_{\R_{i+1}} u'$ gives us a term $u''$ such that $v''' \a_{\R_{i+1}}^* u''\al_\b^* u'$. We thus have $t' \a_{\b \cup \R_{i+1}}^* u''$ and by Lemma~\ref{lem-post-star} there are $t''$ and $u'''$ such that $t' \a_\b^* t'' \a_{\R_{i+1}}^* u''' \al_\b^* u''$. Therefore, $t\a_\b^*\a_{\R_{i+1}}^*\al_\b^* u$. In diagrammatic form, \[ \xymatrix@R=\xyS@C=\xyS{ && t \ar@{->}[rr]^{\R(\b)_{i+1}} \ar@{.>}[dl]_{\b}^{*} && v \ar@{->}[rr]^{\b \cup \R(\b)_{i+1}}_{*} \ar@{.>}[dl]^{\b}_{*} \ar@{.>}[dr]_{\b}^{*} && u \ar@{.>}[dr]^{\b}_{*} \\ & t' \ar@{.>}[rr]_{\R_{i+1}} \ar@{.>}[dl]_{\b}^{*} && v'' \ar@{.>}[dr]^{\b}_{*} && v' \ar@{.>}[rr]_{\R_{i+1}}^{*} \ar@{.>}[dl]_{\b}^{*} && u' \ar@{.>}[dl]^{\b}_{*} \\ t'' \ar@{.>}[drrrrr]_{\R_{i+1}}^{*} &&&& v''' \ar@{.>}[rr]^{\R_{i+1}}_{*} && u'' \ar@{.>}[dl]^{\b}_{*} \\ &&&&& u''' } \] \qedhere \end{enumerate} \end{proof} We easily deduce~(\ref{diag-BsleA}) from~(\ref{eq-lem-BtoA}).
We get the confluence of $\a_{\b\cup\R}$ using Theorem~\ref{thm-Aconfl-sc}. By Proposition~\ref{prop-Bconfl}, the confluence of $\a_{\b \cup \R(\b)}$ follows from the confluence of $\a_{\R}$ on conditionally $(\R,\ap)$-stable sets of terms. \begin{theorem} \label{thm-Bconfl-sc} Let $\R$ be a semi-closed left-linear algebraic system which respects $\ap : \Si \a \N$. Then, on any conditionally $(\R,\ap)$-stable set of terms, if $\a_{\R}$ is confluent then so is $\a_{\b \cup \R(\b)}$. \end{theorem} \begin{proof} Since $\R$ is semi-closed, left-linear and right-applicative, confluence of $\a_{\b \cup \R}$ follows from confluence of $\a_{\R}$ by Theorem~\ref{thm-Aconfl-sc}. We then conclude by Lemma~\ref{lem-BtoA} and Proposition~\ref{prop-Bconfl}, since conditionally $(\R,\ap)$-stable sets of terms are closed under $\a_{\b\cup\R(\b)}$. \end{proof} \subsection{Confluence on weakly beta-normalizing terms} \label{sec-Bconfl-wn} In this section, we extend to $\a_{\R(\b)}$ the results of Section~\ref{sec-Aconfl-wn}. The main point is to obtain the lemma corresponding to Lemma~\ref{lem-projA-bnf}. Moreover, as in Section~\ref{sec-Bconfl-sc}, for all $i \in \N$ we project $\a_{\R(\b)_i}$ onto $\a_{\R_i}$. We thus want to obtain the following property, which implies~(\ref{diag-BsleA}): \begin{equation} \begin{aligned} \label{diag-projB-bnf} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\b \cup \R(\b)_i}_{*} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ \bnf(t) \ar@{.>}[r]_{\R_i}^{*} & \bnf(u) } \end{aligned} \end{equation} We use the same tools as in Section~\ref{sec-Aconfl-wn}. We consider weakly $\b$-normalizing terms whose $\b$-normal form respects the arity specified by rewrite rules, and we reason by induction on $\succ$. We also assume that rewrite rules are algebraic. We denote by $\rpr_{\R(\b)}$ the nested parallelization of join $\b$-conditional rewriting, defined similarly to Definition~\ref{def-walkA}.
It satisfies Proposition~\ref{prop-tgtA} and Lemma~\ref{lem-commut-h}. We now show~(\ref{diag-projB-bnf}) using exactly the same method as for showing~(\ref{diag-projA-bnf}) in Lemma~\ref{lem-projA-bnf}. \begin{lemma} \label{lem-projB-bnf} Let $\ap : \Si \a \N$ be an arity and $\R$ be an algebraic conditional rewrite system which respects $\ap$. For all $i \in \N$, if $t \in \AN$ and $t \a_{\b\cup\R(\b)_{i}}^* u$, then $u\in \AN$ and $\bnf(t) \a_{\R_{i}}^* \bnf(u)$. \end{lemma} \begin{proof} We reason exactly as in the proof of Lemma~\ref{lem-projA-bnf}. We prove the property by induction on $i \in \N$. In the induction case we show that for all $t \in \AN$, \begin{equation} \begin{aligned} \label{diag-projBbnf-par} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\rpr_{\R(\b)_{i+1}}} \ar@{.>}[d]_{\b}^{*} & u \ar@{.>}[d]^{\b}_{*} \\ \bnf(t) \ar@{.>}[r]_{\rpr_{\R_{i+1}}} & \bnf(u) } \end{aligned} \end{equation} We reason by induction on $\succ$ using Lemma~\ref{lem-wadsworth}. The only difference from the proof of Lemma~\ref{lem-projA-bnf} is the case where $t = \la \vx.f t_1 \dots t_n$ and there is a rule $\vd = \vc \sgt l \at r$ and substitutions $\s$ and $\t$ such that $t = \la \vx.l\s\va$ and $u = \la \vx.r\t\vb$ with $l\s \rpr_{\R(\b)_{i+1}} r\t$ and $\vd\s \ad_{\b \cup \R(\b)_{i}} \vc\s$. For exactly the same reasons as in Lemma~\ref{lem-projA-bnf}, we have $\va = \vb = \emptyset$, $t = \la \vx . l\s$ and $u = \la \vx.r\t$. Moreover, $\bnf(t) = \la \vx. l \s'$ and $\bnf(u) = \la \vx.r\t'$ with $\s' \deq \bnf(\s)$ and $\t' \deq \bnf(\t)$, and by induction hypothesis on $\succ$ we have $\s' \rpr_{\R_{i+1}} \t'$. It remains to show that $l\s' \rpr_{\R_{i+1}} r\t'$. Because $\s' \rpr_{\R_{i+1}} \t'$, it suffices to prove that $l\s' \a_{\R_{i+1}} r\s'$. Thus, we are done if we show that $\vd\s' \ad_{\R_{i}} \vc\s'$. Since $\vd$ and $\vc$ are algebraic, $\bnf(\vd\s) = \vd\s'$ and $\bnf(\vc\s) = \vc\s'$.
Now, since $\vd$ is algebraic and respects $\ap$, and since $\s'$ respects $\ap$, it follows that $\vd\s'$ respects~$\ap$. The same holds for $\vc\s'$. Hence we conclude by applying the induction hypothesis on $i$ to $\vd\s \ad_{\b \cup \R(\b)_{i}} \vc\s$. \end{proof} We deduce the preservation of confluence. \begin{theorem} \label{thm-Bconfl-bnf} Let $\ap : \Si \a \N$ be an arity and $\R$ be an algebraic conditional rewrite system which respects $\ap$. If $\a_\R$ is confluent on $\AN$, then $\a_{\b\cup\R(\b)}$ is confluent on $\AN$. \end{theorem} \begin{proof} We can reason as described at the beginning of this section, using Proposition~\ref{prop-Bconfl}, Theorem~\ref{thm-Aconfl-bnf} and Lemma~\ref{lem-projB-bnf}. A direct proof is also possible, reasoning as for Theorem~\ref{thm-Aconfl-bnf}. \end{proof} \section{Orthonormal systems} \label{sec-exclus} In this section, we give a criterion ensuring the confluence of $\a_{\b\cup\R(\b)}$ when conditions and right-hand sides possibly contain abstractions and active variables. This criterion stems from peculiarities of orthogonality in the presence of conditional rewriting. As remarked in Section~\ref{sec-orth-cond}, a conditional critical pair can be {\em feasible} or not. In~\cite{ohlebusch02book}, it is observed that results on the confluence of semi-equational and normal orthogonal conditional systems could be extended to systems that have no feasible critical pair. But the results obtained this way are not directly applicable, since proving unfeasibility of critical pairs may require confluence. An example of such a situation is the following rewrite system.
\begin{example} \label{ex-orthonormal} Consider the following two rules, taken from the system presented in Section~\ref{sec-ex-cond-term}: \[ \begin{array}{l c l c l c l} > ({\length}\esp l)\esp x & = & {\false} & ~\sgt~ & {\occ}\esp (\cons \esp x \esp o)\esp ({\node}\esp y\esp l) & ~\at~ & {\false} \\ > ({\length}\esp l)\esp x & = & {\true} & ~\sgt~ & {\occ}\esp (\cons \esp x \esp o)\esp ({\node}\esp y\esp l) & ~\at~ & {\occ}\esp o\esp ({\get}\esp l\esp x) \end{array} \] The only conditional critical pair between them is \[ > (\length\esp l)\esp x \esp=\esp \true \quad\land\quad > (\length\esp l)\esp x \esp=\esp \false \quad\sgt\quad (\false ~,~ \occ \esp o\esp (\get\esp l \esp x)) \] The condition of this pair cannot be satisfied by a confluent relation. Hence, if $\a_{\b \cup \R(\b)_i}$ is confluent then we can reason as in Lemma~\ref{lem-commut-cond} and obtain the confluence of $\a_{\b \cup \R(\b)_{i+1}}$. \end{example} In this section, we define a class of systems, called {\em orthonormal}, that allows us to generalize this reasoning. As in Example~\ref{ex-orthonormal}, confluence can be shown in a stratified way: the confluence of $\a_{\b \cup \R(\b)_i}$ implies the unfeasibility of critical pairs w.r.t.\ $\a_{\b \cup \R(\b)_i}$, which in turn entails the confluence of the next stratum $\a_{\b \cup \R(\b)_{i+1}}$. We thus obtain the level confluence of $\a_{\b \cup \R(\b)}$. Rules of orthonormal systems can have $\la$-terms in their right-hand sides and conditions. Moreover, no arity assumption is made. Hence, orthonormality ensures the confluence of $\b$-conditional rewriting combined with $\b$-reduction when we cannot deduce it from the confluence of conditional rewriting (see Section~\ref{sec-Bconfl}). Systems similar to orthonormal systems have already been studied in the first-order case~\cite{gm87ctrs,kw97rta}. It is worth relating orthonormal systems to approaches to conditional rewriting in which conditions are arbitrary predicates on terms.
For first-order conditional rewriting this approach has been taken in~\cite{bk86jcss}. It has been applied to $\la$-calculus~\cite{takahashi93tlca}, and this is the way conditional rewrite rules are handled in the very expressive framework of CCERSs~\cite{gkk05ptc}. Neither of these approaches can directly handle Example~\ref{ex-orthonormal}. In each case, confluence is proved under the assumption that the predicates used in conditions are stable under reduction, while proving this property in the case of Example~\ref{ex-orthonormal} requires confluence. A symbol $\sff \in \Si$ is {\em defined} if it is the head of the left-hand side of a rule. \begin{definition}[Orthonormal Systems] \label{def-orth} A conditional rewrite system $\R$ is {\em orthonormal} if \begin{enumerate} \item\label{def-orth-ll} it is left-linear; \item\label{def-orth-norm} in every rule $\vd=\vc\sgt l\at_\R r$, the terms in $\vc$ are closed $\b$-normal forms not containing defined symbols; \item\label{def-orth-orth} for every critical pair \[ d_1 \esp=\esp c_1 \quad\land\quad\dots\quad\land\quad d_n \esp=\esp c_n \quad\sgt\quad (s , t) \] there exist distinct $i,j \in \{1,\dots,n\}$ such that $d_i=d_j$ and $c_i\neq c_j$. \end{enumerate} \end{definition} \noindent Condition~\ref{def-orth-norm} is a simple syntactic and decidable way to ensure that orthonormal systems are normal (recall from Remark~\ref{rem-dec} that normality is in general undecidable). As explained in Example~\ref{ex-orthonormal}, assuming the confluence of $\a_{\b \cup \R(\b)_i}$, condition~\ref{def-orth-orth} implies the unfeasibility of critical pairs w.r.t.\ $\a_{\b \cup \R(\b)_i}$, hence the confluence of the next stratum $\a_{\b \cup \R(\b)_{i+1}}$. This entails the {\em level} confluence of $\a_{\b \cup \R(\b)}$. We actually prove in Theorem~\ref{thm:lcr:beta:cond} the {\em shallow} confluence of $\a_{\b \cup \R(\b)}$, which is a stronger property (see Definition~\ref{def-strat-confl}).
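Condition~\ref{def-orth-orth} is a purely syntactic check on the conditions of each critical pair. As an illustration, the following Python sketch (ours; representing terms as plain strings is an assumption made only for this illustration) detects the required pair of contradictory conditions in the critical pair of Example~\ref{ex-orthonormal}:

```python
def unfeasibility_witness(conditions):
    """Condition 3 of orthonormality, on one critical pair: among the
    conditions d_k = c_k, two must share the same left member d while
    having distinct right members c (closed normal forms, hence
    unjoinable by any confluent relation)."""
    return any(d1 == d2 and c1 != c2
               for k, (d1, c1) in enumerate(conditions)
               for (d2, c2) in conditions[k + 1:])

# The critical pair of Example `ex-orthonormal`: both conditions evaluate
# the same term `> (length l) x`, once against `true`, once against `false`.
cp = [("> (length l) x", "true"), ("> (length l) x", "false")]
assert unfeasibility_witness(cp)
```

Since the two right members are distinct normal forms, such a pair of conditions can never both be satisfied by a confluent relation, which is exactly the unfeasibility argument used in the proofs below.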
Theorem~\ref{thm:lcr:beta:cond} is thus an extension of the shallow confluence of orthogonal first-order normal conditional rewriting (Theorem~\ref{thm-orth-cond}) to orthonormal $\b$-conditional rewriting. The most important point w.r.t.\ the results of Section~\ref{sec-Bconfl} is that orthonormal systems need not respect an arity nor be algebraic. \begin{example} The system presented in Section~\ref{sec-ex-cond-term} is orthonormal. \end{example} We now show that $\a_{\b\cup\R(\b)}$ is shallow confluent when $\R$ is orthonormal. This result is stated and proved in Theorem~\ref{thm:lcr:beta:cond} below. We use several intermediate lemmas; the parallel moves property is established in Lemmas~\ref{lem:par:moves:beta:cond} and~\ref{lem:scr:parallel:beta:cond}. We begin by showing that the confluence of $\a_{\b \cup \R(\b)_i}$ implies the commutation of $\a^*_\b$ and $\a^*_{\R(\b)_{i+1}}$. \begin{lemma} \label{lem-commut-orth} Let $\R$ be an orthonormal system. For all $i \in \N$, if $\a_{\b\cup\R(\b)_i}$ is confluent then $\a_{\R(\b)_{i+1}}$ commutes with $\a_\b$: \begin{equation*} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ \cdot \ar@{>}[r]^{\R(\b)_{i+1}}_{*} \ar@{>}[d]_{\b}^{*} & \cdot \ar@{..>}[d]^{\b}_{*} \\ \cdot \ar@{..>}[r]_{\R(\b)_{i+1}}^{*} & \cdot } \end{aligned} \end{equation*} \end{lemma} \begin{proof} We reason as in Lemma~\ref{lem-commut-cond}. We show property~(\ref{eq-lem-commut-orth-base}) below and then deduce the commutation of $\a_{\R(\b)_{i+1}}$ and $\a_\b$ using Lemma~\ref{lem-commut} and the fact that $\a^*_{\b} \,=\, \rpr^*_\b$. \begin{equation} \label{eq-lem-commut-orth-base} \begin{aligned} \xymatrix@R=\xyS@C=\xyL{ t \ar@{->}[r]^{\R(\b)_{i+1}} \ar@{->}[d]_{\rpr_\b} & v \ar@{.>}[d]^{\rpr_\b} \\ u \ar@{.>}[r]_{\R(\b)_{i+1}}^{*} & w } \end{aligned} \end{equation} The only difference from the proof of Lemma~\ref{lem-commut-cond} is when $t \a_{\R(\b)_{i+1}} v$ by contracting a rooted redex.
In this case, there is a rule $\vd = \vc \sgt l \at_\R r$ and a substitution $\s$ such that $t = l\s$ and $v = r\s$. We show that there is a term $w$ such that $u\a_{\R(\b)_{i+1}}^* w\rpl_\b r\s$. As $l$ is a non-variable linear algebraic term, there is a substitution $\s'$ such that $\s\rpr_\b \s'$ and $l\s\rpr_\b l\s'=u$. Therefore we have $r\s\rpr_\b r\s'$. It remains to show that $l\s'\a_{\R(\b)_{i+1}} r\s'$. Recall that $\vd\s\a_{\b\cup\R(\b)_i}^* \vc$. By assumption (confluence of $\a_{\b\cup \R(\b)_i}$), since $\vd\s \a^*_\b \vd\s'$ there are $\vv$ such that $\vd\s'\a_{\b\cup\R(\b)_i}^* \vv\al_{\b\cup \R(\b)_i}^* \vc$. But $\vc$ are $\b \cup \R(\b)$-normal forms, hence $\vv=\vc$. We conclude that $l\s'\a_{\R(\b)_{i+1}} r\s'\rpl_\b r\s$. \end{proof} We follow the usual scheme of proofs of confluence of orthogonal conditional rewrite systems~\cite{ohlebusch02book}. For all $i \in \N$, we denote by $\a_{\parallel\R(\b)_i}$ the smallest parallel rewrite relation containing $\a_{\R(\b)_i}$ (see Definition~\ref{def-par-rew}). Hence, $\a_{\parallel\R(\b)_i}$ is strictly included in the nested parallel relation $\rpr_{\R(\b)_i}$ used in Section~\ref{sec-Bconfl-wn} (Definition~\ref{def-walkA}). The main property is the commutation of $\a_{\parallel\R(\b)_i}$ and $\a_{\parallel\R(\b)_j}$ for all $i,j \in \N$, which corresponds to the usual parallel moves property. Let $<_{\mul}$ be the multiset extension of the usual ordering on natural numbers. In our case, the parallel moves property is: \begin{description} \item[Parallel Moves.] Given $i,j \in \N$, if $\a_{\b \cup \R(\b)_n}$ commutes with $\a_{\b \cup \R(\b)_m}$ for all $n,m$ such that $\{n,m\} <_{\mul} \{i,j\}$, then $\a_{\parallel\R(\b)_i}$ commutes with $\a_{\parallel\R(\b)_j}$. \end{description} The proof is decomposed into Lemma~\ref{lem:par:moves:beta:cond} and Lemma~\ref{lem:scr:parallel:beta:cond}.
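All the multisets compared in the following proofs are unordered pairs of naturals, and for multisets of equal size the multiset extension of a total order coincides with the lexicographic comparison of the elements sorted in decreasing order. So $<_{\mul}$ can be computed as in this small Python sketch (ours, restricted to pairs; the function name is our own):

```python
def mul_less(p, q):
    """Multiset extension of < on naturals, restricted to equal-size
    multisets (here: unordered pairs represented as tuples), where it
    coincides with the lexicographic order on the elements sorted in
    decreasing order."""
    return sorted(p, reverse=True) < sorted(q, reverse=True)

# The instances used in the parallel-moves argument, e.g. for i, j = 2, 3:
i, j = 2, 3
h = max(i, j) - 1
assert mul_less((h, h), (i, j))          # {h, h} <_mul {i, j}
assert mul_less((i - 1, j), (i, j))      # {i - 1, j} <_mul {i, j}
assert mul_less((i - 1, i - 1), (i, j))  # {i - 1, i - 1} <_mul {i, j}
```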
In Lemma~\ref{lem:par:moves:beta:cond}, assuming the commutation of $\a_{\b\cup \R(\b)_n}$ and $\a_{\b\cup \R(\b)_m}$ for all $n,m$ such that $\{n,m\} <_\mul \{i,j\}$, we consider, for the commutation of $\a_{\parallel\R(\b)_i}$ and $\a_{\parallel\R(\b)_j}$, the particular case of a rooted $\R(\b)_{i}$-reduction. \begin{lemma} \label{lem:par:moves:beta:cond} Let $\R$ be an orthonormal system and $i,j\geq 0$. Assume that $\a_{\b \cup \R(\b)_n}$ commutes with $\a_{\b \cup \R(\b)_m}$ for all $n$, $m$ such that $\{n,m\} <_{\mul} \{i,j\}$. Then for all rules $\vd = \vc \sgt l \at_\R r$, we have \[ \xymatrix@R=\xyS@C=\xyL{ l\s \ar@{->}[r]^{\R(\b)_i} \ar@{->}[d]_{\parallel\R(\b)_j} & r\s \ar@{..>}[d]^{\parallel\R(\b)_j} \\ u \ar@{..>}[r]_{\R(\b)_i}^{\re} & v } \] \end{lemma} \begin{proof} The result holds if $i=0$ since $\a_{\R(\b)_0}=\emptyset$. If $j = 0$, then $u = l\s$ and take $v=r\s$. Assume that $i,j >0$ and write $q_1,\dots,q_n$ for the (disjoint) occurrences in $l\s$ of the redexes contracted in $l\s \a_{\parallel\R(\b)_j} u$. Therefore, for all $k$, $1 \leq k \leq n$, there exists a rule $\rho_k: \vd_k = \vc_k \sgt l_k \at_\R r_k$ and a substitution $\t_k$ such that $l\s|_{q_k} =l_k\t_k$. Thus, $ u= l\s[r_1\t_1]_{q_1}\dots[r_n\t_n]_{q_n}$. It is possible to rename variables and assume that $\rho$, $\rho_1, \dots, \rho_n$ have disjoint variables. Therefore, we can take $\s = \t_1 = \cdots = \t_n$. Assume that there is a non-variable superposition, i.e.\ that a $q_k$ is a {\em non-variable} occurrence in~$l$. Hence the rules $\rho$ and $\rho_k$ form an instance of a critical pair $ \vd'\mu = \vc'\sgt ({l[r_k]_{q_k}}\mu,r\mu) $ and there exists a substitution $\mu'$ such that $\s = \mu \mu'$. By definition of orthonormal systems, $|\vd'\mu| \geq 2$ and there is $m\neq p$ such that $c'_m \neq c'_p$ and $d'_m\mu = d'_p\mu$. Let us write $h$ for $\max(i,j) - 1$.
As $d'_m\mu = d'_p\mu$ we have $d'_m\s = d'_p\s$ and it follows that \[ c'_m \quad\al_{\b \cup \R(\b)_h}^*\quad d'_m\s \quad=\quad d'_p\s \quad\a_{\b \cup \R(\b)_h}^*\quad c'_p ~. \] But $\{h,h \} <_{\mul} \{i,j \}$ and by assumption $\a_{\b \cup \R(\b)_h}$ is confluent. Therefore we must have $c'_m \ad_{\b \cup \R(\b)_h} c'_p $. But this is impossible since $c'_m$ and $c'_p$ are distinct normal forms. Hence, the conditions of $\rho$ and $\rho_k$ cannot both be satisfied by $\s$ and $\a_{\b \cup \R(\b)_{h}}$ and it follows that there is no non-variable superposition. Therefore, each $q_k$ is of the form $u_k.v_k$ where $l|_{u_k}$ is a variable $x_k$. Let $\s'$ be such that $\s'(x_k) = \s(x_k)[r_k\s]_{v_k}$ and $\s'(y) = \s(y)$ if $y \neq x_k$ for all $1 \leq k \leq n$. Then, $l\s \a_{\parallel\R(\b)_j} l\s'$ and by linearity of $l$, $u=l\s'$. Furthermore, $r\s \a_{\parallel\R(\b)_j} r\s'$. We now show that $l\s' \a_{\R(\b)_i} r\s'$. We have $\vd\s \a_{\b \cup \R(\b)_{i-1}}^* \vc$ and $\vd\s \a_{\R(\b)_j}^* \vd\s'$. As $i,j > 0$, we have $\{i-1,j\} <_{\mul} \{i,j\}$. Therefore, by assumption $\a_{\b \cup \R(\b)_{i-1}}$ and $\a_{\b \cup \R(\b)_{j}}$ commute and there exist terms $\vc'$ such that \[ \vd\s' \quad\a_{\b \cup \R(\b)_{i-1}}^*\quad \vc' \quad\al_{\b \cup \R(\b)_{j}}^*\quad \vc ~. \] As terms $\vc$ are $\a_{\b\cup \R(\b)}$-normal forms, we have $\vc' = \vc$ and it follows that $l\s' \a_{\R(\b)_i} r\s'$. \end{proof} Now, in Lemma~\ref{lem:scr:parallel:beta:cond} we show that the commutation of $\a_{\parallel\R(\b)_i}$ and $\a_{\parallel\R(\b)_j}$ is ensured by the two particular cases of rooted $\R(\b)_i$-reduction and $\R(\b)_j$-reduction. \begin{lemma} \label{lem:scr:parallel:beta:cond} Let $\R$ be an orthonormal system and $i,j\geq 0$. Property (i) below holds if and only if for all rules $\vd=\vc\sgt l\at_\R r$, properties (ii) and (iii) hold.
\[ \begin{array}{c !{\quad} c !{\quad} c} \xymatrix@R=\xyS@C=\xyL{ s \ar@{->}[r]^{\parallel\R(\b)_i} \ar@{->}[d]_{\parallel\R(\b)_j} & t \ar@{..>}[d]^{\parallel\R(\b)_j} \\ u \ar@{..>}[r]_{\parallel\R(\b)_i} & v } & \xymatrix@R=\xyS@C=\xyL{ l\s \ar@{->}[r]^{\R(\b)_i} \ar@{->}[d]_{\parallel\R(\b)_j} & r\s \ar@{..>}[d]^{\parallel\R(\b)_j} \\ u \ar@{..>}[r]_{\R(\b)_i}^{\re} & v } & \xymatrix@R=\xyS@C=\xyL{ l\s \ar@{->}[r]^{\R(\b)_j} \ar@{->}[d]_{\parallel\R(\b)_i} & r\s \ar@{..>}[d]^{\parallel\R(\b)_i} \\ u \ar@{..>}[r]_{\R(\b)_j}^{\re} & v } \\ \text{(i)} & \text{(ii)} & \text{(iii)} \end{array} \] \end{lemma} \begin{proof} The ``only if'' statement is trivial. For the ``if'' case, let $s,t,u$ be three terms such that $ u \al_{\parallel\R(\b)_j} s \a_{\parallel\R(\b)_i} t $. If $s$ is $t$ (resp. $u$), then take $v = u$ (resp. $v = t$). Otherwise, we reason by induction on the structure of $s$. If there is a rooted reduction, we conclude by properties (ii) and (iii). Now assume that both reductions are nested. In this case $s$ cannot be a symbol $\sff \in \Si$ nor a variable. If $s$ is an abstraction, we conclude by induction hypothesis. Otherwise $s$ is an application $s_1 s_2$, and by assumption $u = u_1 u_2$ and $t = t_1 t_2$ with $u_k \al_{\parallel\R(\b)_j} s_k \a_{\parallel\R(\b)_i} t_k$. In this case also we conclude by induction hypothesis. \end{proof} Now, an induction on $<_{\mul}$ provides the commutation of $\a_{\b\cup\R(\b)_i}$ and $\a_{\b\cup\R(\b)_j}$ for all $i,j\geq 0$, i.e.\ the shallow confluence of $\a_{\b \cup \R(\b)}$. \begin{theorem} \label{thm:lcr:beta:cond} If $\R$ is an orthonormal system, then $\a_{\b\cup\R(\b)}$ is shallow confluent. \end{theorem} \begin{proof} We reason by induction on unordered pairs $\{i,j\}$ seen as multisets and compared with the well-founded relation $<_{\mul}$. We show the commutation of $\a_{\b\cup\R(\b)_i}$ and $\a_{\b\cup\R(\b)_j}$ for all $i,j\geq 0$. 
The least unordered pair $\{i,j\}$ (considered as a multiset) with respect to $<_{\mul}$ is $\{0,0\}$. As $\a_{\b\cup\R(\b)_0}=\a_\b$ by definition, this case holds by confluence of $\b$. Now, assume that $i>0$ and that the commutation of $\a_{\b\cup\R(\b)_n}$ and $\a_{\b\cup\R(\b)_m}$ holds for all $n,m$ with $\{n,m\} <_{\mul} \{i,0\}$. As $\{i-1,i-1\} <_{\mul} \{i,0\}$, $\a_{\b\cup\R(\b)_{i-1}}$ is confluent and the commutation of $\a_{\b\cup\R(\b)_i}$ with $\a_{\b\cup\R(\b)_0}$ ($=\a_\b$) follows from Lemma~\ref{lem-commut-orth}. The remaining case is when $i,j>0$. Using the induction hypothesis, from Lemma~\ref{lem:par:moves:beta:cond} and Lemma~\ref{lem:scr:parallel:beta:cond}, we obtain the commutation of $\a_{\parallel\R(\b)_i}$ and $\a_{\parallel\R(\b)_j}$, which in turn implies the commutation of $\a^*_{\R(\b)_i}$ and $\a^*_{\R(\b)_j}$. Now, as $\{i-1,i-1\} <_{\mul} \{i,j\}$, by Lemma~\ref{lem-commut-orth}, $\a_\b$ and $\a_{\R(\b)_i}$ commute. This way, we also obtain the commutation of $\a_\b$ and $\a_{\R(\b)_j}$. Then, the commutation of $\a^*_{\b \cup \R(\b)_i}$ and $\a^*_{\b\cup\R(\b)_j}$ easily follows. \end{proof} \begin{example} The relation $\a_{\b\cup\R(\b)}$ induced by the system presented in Section~\ref{sec-ex-cond-term} is shallow confluent and thus confluent. \end{example} \section{Conclusion} Our results are summarized in Figure~\ref{fig-oversimple} on page~\pageref{fig-oversimple}. We provide detailed conditions to ensure modularity of confluence when combining $\b$-reduction and conditional rewriting, either when the evaluation of conditions uses $\b$-reduction or when it does not. This has useful applications on the high-level specification side and for enriching the conversion used in logical frameworks or proof assistants, while still preserving the confluence property. These results lead us to the following remarks and further research points.
The results obtained in Sections~\ref{sec-Aconfl} and~\ref{sec-Bconfl} for join conditional rewrite systems extend to the case of oriented systems (hence to normal systems) and to the case of level-confluent semi-equational systems. For semi-equational systems, the proofs follow the same scheme, provided that level-confluence of $\a_\R$ is assumed. However, it would be interesting to know if this restriction can be dropped. Problems arising from non-left-linear rewriting are directly transposed to left-linear conditional rewriting. The semi-closure condition is sufficient to avoid this, and it seems to provide, for conditional rewriting, the counterpart of left-linearity for unconditional rewriting. However, two remarks have to be made about this restriction. First, it would be interesting to know if it is a necessary condition and, besides, to characterize a class of non-semi-closed systems that can be translated into equivalent semi-closed ones. Second, semi-closed terminating join systems behave like normal systems. But normal systems can easily be translated into equivalent non-conditional systems. Moreover, such a translation preserves good properties such as left-linearity and non-ambiguity. As many practical uses of rewriting rely on terminating systems, semi-closed join systems may in practice be essentially an intuitive way to design rewrite systems that can then be efficiently implemented by non-conditional rewriting. A wider interesting perspective would be to extend the results to CCERSs~\cite{gkk05ptc}. \end{document}
\begin{document} \begin{frontmatter} \title{Flips in Edge-Labelled Pseudo-Triangulations\tnoteref{mytitlenote}} \tnotetext[mytitlenote]{The results in this paper appeared in CCCG as an extended abstract, and are part of the second author's PhD thesis. This work was partially supported by NSERC.} \author{Prosenjit Bose} \ead{[email protected]} \author{Sander Verdonschot} \ead{[email protected]} \address{School of Computer Science, Carleton University, Ottawa} \begin{abstract} We show that $O(n^2)$ exchanging flips suffice to transform any edge-labelled pointed pseudo-triangulation into any other with the same set of labels. By using insertion, deletion and exchanging flips, we can transform any edge-labelled pseudo-triangulation into any other with $O(n \log c + h \log h)$ flips, where $c$ is the number of convex layers and $h$ is the number of points on the convex hull. \end{abstract} \begin{keyword} edge flip, diagonal flip, pseudo-triangulation, edge label \end{keyword} \end{frontmatter} \section{Introduction} A \emph{pseudo-triangle} is a simple polygon with three convex interior angles, called \emph{corners}, that are connected by reflex chains. Given a set $P$ of $n$ points in the plane, a \emph{pseu\-do-tri\-an\-gu\-la\-tion} of $P$ is a subdivision of its convex hull into pseudo-triangles, using all points of $P$ as vertices (see~Figure~\ref{fig:el-pt}). A pseu\-do-tri\-an\-gu\-la\-tion is \emph{pointed} if all vertices are incident to a reflex angle in some face (including the outer face; see Figure~\ref{fig:el-pointed-pt} for an example). Pseu\-do-tri\-an\-gu\-la\-tions find applications in areas such as kinetic data structures~\cite{kirkpatrick2002kinetic} and rigidity theory~\cite{streinu2005pseudo}. More information on pseu\-do-tri\-an\-gu\-la\-tions can be found in a survey by Rote, Santos, and Streinu~\cite{rote2007pseudotriangulations}. \begin{figure} \caption{(a) A pseu\-do-tri\-an\-gu\-la\-tion with two non-pointed vertices. 
(b) A pointed pseu\-do-tri\-an\-gu\-la\-tion.} \label{fig:el-pt} \label{fig:el-pointed-pt} \end{figure} Since a regular triangle is also a pseudo-triangle, pseu\-do-tri\-an\-gu\-la\-tions generalize triangulations (subdivisions of the convex hull into triangles). In a triangulation, a flip is a local transformation that removes one edge, leaving an empty quadrilateral, and inserts the other diagonal of that quadrilateral. Note that this is only allowed if the quadrilateral is convex; otherwise the resulting graph would be non-plane. Lawson~\cite{lawson1972transforming} showed that any triangulation with $n$ vertices can be transformed into any other with $O(n^2)$ flips, and Hurtado, Noy, and Urrutia~\cite{hurtado1999flipping} gave a matching $\Omega(n^2)$ lower bound. \begin{figure} \caption{A flip in a pseudo-quadrilateral.} \label{fig:el-flip} \end{figure} Pointed pseu\-do-tri\-an\-gu\-la\-tions support a similar type of flip, but before we can introduce it, we need to generalize the concept of pseudo-triangles to \emph{pseudo-$k$-gons}: weakly simple polygons\footnote{A weakly simple polygon is a plane graph with a bounded face that is incident to all edges.} with $k$ convex interior angles. A diagonal of a pseudo-$k$-gon is called a \emph{bitangent} if the pseudo-$k$-gon remains pointed after insertion of the diagonal. In a pointed pseu\-do-tri\-an\-gu\-la\-tion, \emph{flipping} an edge removes the edge, leaving a pseudo-quadrilateral, and inserts the unique other bitangent of the pseudo-quadrilateral (see Figure~\ref{fig:el-flip}). In contrast with triangulations, all internal edges of a pointed pseu\-do-tri\-an\-gu\-la\-tion are flippable. Bereg~\cite{bereg2004transforming} showed that $O(n \log n)$ flips suffice to transform any pointed pseu\-do-tri\-an\-gu\-la\-tion into any other.
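The convexity condition that governs flippability in a triangulation reduces to four orientation tests. A minimal Python sketch (our own illustration; the point representation and function names are assumptions, not taken from the paper):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o): positive for a left turn at o
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def can_flip(a, b, c, d):
    """The diagonal (a, c), shared by triangles abc and acd, is flippable
    iff the quadrilateral a, b, c, d is strictly convex, i.e. all four
    turns along its boundary have the same orientation."""
    quad = [a, b, c, d]
    turns = [cross(quad[k], quad[(k + 1) % 4], quad[(k + 2) % 4])
             for k in range(4)]
    return all(t > 0 for t in turns) or all(t < 0 for t in turns)

assert can_flip((0, 0), (1, 0), (1, 1), (0, 1))      # convex quadrilateral
assert not can_flip((0, 0), (2, 1), (4, 0), (2, 4))  # reflex vertex at (2, 1)
```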
Aichholzer~et al.\xspace~\cite{aichholzer2003pseudotriangulations} showed that the same result holds for all pseu\-do-tri\-an\-gu\-la\-tions (including triangulations) if we allow two more types of flips: \emph{insertion} and \emph{deletion} flips. As the name implies, these either insert or delete one edge, provided that the result is still a pseu\-do-tri\-an\-gu\-la\-tion. To disambiguate, they call the other flips \emph{exchanging} flips. In a later paper, this bound was refined to $O(n \log c)$~\cite{aichholzer2006transforming}, where $c$ is the number of convex layers of the point set. There is recent interest in \emph{edge-labelled} triangulations: triangulations where each edge has a unique label, and flips reassign the label of the flipped edge to the new edge. Araujo-Pardo~et al.\xspace~\cite{araujo2015colorful} studied the flip graph of edge-labelled triangulations of a convex polygon, proving that it is still connected and in fact covers the regular flip graph. They also fully characterized its automorphism group. Independently, Bose~et al.\xspace~\cite{bose2013flipping} showed that it has diameter $\Theta(n \log n)$. Bose~et al.\xspace also considered edge-labelled versions of other types of triangulations. In many cases, the flip graph is disconnected. For example, it is easy to create a triangulation on a set of points with an edge that can never be flipped. Two edge-labelled triangulations in which such an edge has different labels can therefore never be transformed into each other with flips. On the other hand, Bose~et al.\xspace showed that the flip graph of edge-labelled combinatorial triangulations is connected and has diameter $\Theta(n \log n)$. 
In this paper, we investigate flips in \emph{edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tions}: pseu\-do-tri\-an\-gu\-la\-tions where each internal edge has a unique label in $\{1, \ldots, 3n - 3 - 2h\}$, where $h$ is the number of vertices on the convex hull ($3n - 3 - 2h$ is the number of internal edges in a triangulation). In the case of an exchanging flip, the new edge receives the label of the old edge. For a deletion flip, the edge and its label are removed, and for an insertion flip, the new edge receives an unused label from the set of all possible labels. In contrast with the possibly disconnected flip graph of edge-labelled geometric triangulations, we show that $O(n^2)$ exchanging flips suffice to transform any edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tion into any other with the same set of labels. By using all three types of flips -- insertion, deletion and exchanging -- we can transform any edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tion into any other with $O(n \log c + h \log h)$ flips, where $c$ is the number of convex layers of the point set, and $h$ is the number of vertices on the convex hull. In both settings, we have an $\Omega(n \log n)$ lower bound, making the second result tight. \section{Pointed pseu\-do-tri\-an\-gu\-la\-tions} \label{sec:el-pointed} In this section, we show that every edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tion can be transformed into any other with the same set of labels by $O(n^2)$ exchanging flips. We do this by showing how to transform a given edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tion into a \emph{canonical} one. The result then follows by the reversibility of flips. \begin{figure} \caption{A left-shelling pseu\-do-tri\-an\-gu\-la\-tion.} \label{fig:el-left-shelling} \end{figure} Before we can start the proof, we need a few more definitions. 
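The size $3n - 3 - 2h$ of the label set follows from Euler's formula: a triangulation of $n$ points with $h$ of them on the convex hull has $3n - 3 - h$ edges, of which the $h$ hull edges are not internal. A small sanity check in Python (ours):

```python
def internal_edges(n, h):
    """Number of internal (non-hull) edges in a triangulation of n points,
    h of which lie on the convex hull: (3n - 3 - h) total edges minus the
    h hull edges."""
    return 3 * n - 3 - 2 * h

assert internal_edges(3, 3) == 0  # a single triangle has no internal edge
assert internal_edges(5, 4) == 4  # a square plus its centre: 4 internal edges
```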
Given a set of points in the plane, let $v_0$ be the point with the lowest $y$-coordinate, and let $v_1, \ldots, v_{n-1}$ be the other points in clockwise order around $v_0$. The \emph{left-shelling} pseu\-do-tri\-an\-gu\-la\-tion is the union of the convex hulls of $v_0, \ldots, v_i$, for all $2 \leq i \leq n - 1$ (see Figure~\ref{fig:el-left-shelling}). Thus, every vertex after $v_1$ is associated with two edges: a \emph{bottom} edge connecting it to $v_0$ and a \emph{top} edge that is tangent to the convex hull of the earlier vertices. The \emph{right-shelling} pseu\-do-tri\-an\-gu\-la\-tion is similar, with the vertices added in counter-clockwise order instead. As the canonical pseu\-do-tri\-an\-gu\-la\-tion, we use the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, with the bottom edges labelled in clockwise order around $v_0$, followed by the internal top edges in the same order (based on their associated vertex). Since we can transform any pointed pseu\-do-tri\-an\-gu\-la\-tion into the left-shelling pseu\-do-tri\-an\-gu\-la\-tion with $O(n \log n)$ flips~\cite{bereg2004transforming}, the main part of the proof lies in reordering the labels of a left-shelling pseu\-do-tri\-an\-gu\-la\-tion. We use two tools for this, called a \emph{sweep} and a \emph{shuffle}, that are each implemented by a sequence of flips. A sweep interchanges the labels of some internal top edges with their respective bottom edges, while a shuffle permutes the labels on all bottom edges. \begin{lemma} \label{lem:el-sort-left-shelling} We can transform any left-shelling pseu\-do-tri\-an\-gu\-la\-tion into the canonical one with $O(1)$ shuffle and sweep operations. \end{lemma} \begin{proof} In the canonical pseu\-do-tri\-an\-gu\-la\-tion, we call the labels assigned to bottom edges \emph{low}, and the labels assigned to top edges \emph{high}. In the first step, we use a shuffle to line up every bottom edge that has a high label with a top edge that has a low label.
Then we exchange these pairs of labels with a sweep. Now all bottom edges have low labels and all top edges have high labels, so all that is left is to sort the labels. We can sort the low labels with a second shuffle. To sort the high labels, we sweep them to the bottom edges, shuffle to sort them there, then sweep them back. \end{proof} The remainder of this section describes how to perform a sweep and a shuffle with flips. \begin{lemma} \label{lem:el-degree-2-swap} We can interchange the labels of the edges incident to an internal vertex $v$ of degree two with three exchanging flips. \end{lemma} \begin{proof} Consider what happens when we remove $v$. Deleting one of its edges leaves a pseudo-quadrilateral. Removing the second edge then either merges two corners into one, or removes one corner, leaving a pseudo-triangle $T$. There are three bitangents that connect $v$ to $T$, each corresponding to the geodesic between $v$ and a corner of $T$. Any choice of two of these bitangents results in a pointed pseu\-do-tri\-an\-gu\-la\-tion. When one of them is flipped, the only new edge that can be inserted so that the result is still a pointed pseu\-do-tri\-an\-gu\-la\-tion is the bitangent that was not there before the flip. Thus, we can interchange the labels with three flips (see Figure~\ref{fig:el-degree-2-swap}). \end{proof} \begin{figure} \caption{Interchanging the labels of the edges incident to a vertex of degree two.} \label{fig:el-degree-2-swap} \end{figure} \begin{lemma}[Sweep] \label{lem:el-sweep} In the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, we can interchange the labels of any number of internal top edges and their corresponding bottom edges with $O(n)$ exchanging flips. \end{lemma} \begin{proof} Let $S$ be the set of vertices whose internal top edge should have its label swapped with the corresponding bottom edge. Consider a ray $L$ from $v_0$ that starts at the positive $x$-axis and sweeps through the point set to the negative $x$-axis. 
We will maintain the following invariant: the graph induced by the vertices to the left of $L$ is their left-shelling pseu\-do-tri\-an\-gu\-la\-tion and the graph induced by the vertices to the right of $L$ is their right-shelling pseu\-do-tri\-an\-gu\-la\-tion (both groups include $v_0$). Furthermore, the labels of the top edges of the vertices in $S$ to the right of $L$ have been interchanged with their respective bottom edges. This invariant is satisfied at the start. Suppose that $L$ is about to pass a vertex $v_k$. If $v_k$ is on the convex hull, its top edge is not internal and no action is required for the invariant to hold after passing $v_k$. So assume that $v_k$ is not on the convex hull and consider its incident edges. It is currently part of the left-shelling pseu\-do-tri\-an\-gu\-la\-tion of points to the left of $L$, where it is the last vertex. Thus, $v_k$ is connected to $v_0$ and to one vertex to its left. It is not connected to any vertex to its right, since there are $2n - 3$ edges in total, and the left- and right-shelling pseu\-do-tri\-an\-gu\-la\-tions to each side of $L$ contribute $2(k + 1) - 3 + 2(n - k) - 3 = 2n - 4$ edges. So the only edge that crosses $L$ is an edge of the convex hull. Therefore $v_k$ has degree two, which means that we can use Lemma~\ref{lem:el-degree-2-swap} to swap the labels of its top and bottom edges with three flips if $v_k \in S$. Furthermore, the sides of the pseudo-triangle that would remain if we removed $v_k$ form part of the convex hull of the points to either side of $L$. Thus, flipping the top edge of $v_k$ results in the tangent from $v_k$ to the convex hull of the points to the right of $L$ -- exactly the edge needed to add $v_k$ to their right-shelling pseu\-do-tri\-an\-gu\-la\-tion. Therefore we only need $O(1)$ flips to maintain the invariant when passing $v_k$. At the end, we have constructed the right-shelling pseu\-do-tri\-an\-gu\-la\-tion and swapped the desired edges.
An analogous transformation without any swapping can transform the graph back into the left-shelling pseu\-do-tri\-an\-gu\-la\-tion with $O(n)$ flips in total. \end{proof} \begin{figure} \caption{A pseudo-pentagon with four bitangents. It is impossible to swap the two diagonals without flipping an edge of the pseudo-pentagon, as they just flip back and forth between the solid bitangents and the dotted ones, regardless of the position of the other diagonal.} \label{fig:el-four-bitangents} \end{figure} \begin{figure} \caption{Interchanging the labels of two bitangents of a pseudo-pen\-tagon with five bitangents. An edge in the pentagon corresponds to a geodesic between two corners of the pseudo-pentagon.} \label{fig:el-pentagon-swap} \end{figure} \begin{lemma} \label{lem:el-consecutive-swap} In the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, we can interchange the labels of two consecutive bottom edges with $O(1)$ exchanging flips. \end{lemma} \begin{proof} When we remove the two consecutive bottom edges (say $a$ and $b$), we are left with a pseudo-pentagon $X$. A pseudo-pentagon can have up to five bitangents, as each bitangent corresponds to a geodesic between two corners. If $X$ has exactly five bitangents, this correspondence is a bijection. This implies that the bitangents of $X$ can be swapped just like diagonals of a convex pentagon (see Figure~\ref{fig:el-pentagon-swap}). On the other hand, if $X$ has only four bitangents, it is impossible to swap $a$ and $b$ without flipping an edge of $X$ (see~Figure~\ref{fig:el-four-bitangents}). Fortunately, we can always transform $X$ into a pseudo-pentagon with five bitangents. If the pseudo-triangle to the right of $b$ is a triangle, $X$ already has five bitangents (see~Lemma~\ref{lem:el-five-bitangents-a} in Section~\ref{sec:el-deferred}). 
Otherwise, the top endpoint of $b$ is an internal vertex of degree two and we can flip its top edge to obtain a new pseudo-pentagon that does have five bitangents (see~Lemma~\ref{lem:el-five-bitangents-b} in Section~\ref{sec:el-deferred}). After swapping the labels of $a$ and $b$, we can flip this top edge back. Thus, in either case we can interchange the labels of $a$ and $b$ with $O(1)$ flips. \end{proof} We can use Lemma~\ref{lem:el-consecutive-swap} to reorder the labels of the bottom edges with insertion or bubble sort, as these algorithms only swap adjacent values. \begin{corollary}[Shuffle] \label{cor:el-shuffle} In the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, we can reorder the labels of all bottom edges with $O(n^2)$ exchanging flips. \end{corollary} Combining this with Lemmas~\ref{lem:el-sort-left-shelling} and \ref{lem:el-sweep}, and the fact that we can transform any pointed pseu\-do-tri\-an\-gu\-la\-tion into the left-shelling one with $O(n \log n)$ flips~\cite{bereg2004transforming}, gives the main result. \begin{theorem} \label{thm:el-upper-bound} We can transform any edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tion with $n$ vertices into any other with $O(n^2)$ exchanging flips. \end{theorem} The following lower bound follows from the $\Omega(n \log n)$ lower bound on the flip distance between edge-la\-belled triangulations of a convex polygon~\cite{bose2013flipping}. \begin{theorem} \label{thm:el-lower-bound} There are pairs of edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tions with $n$ vertices that require $\Omega(n \log n)$ exchanging flips to transform one into the other. \end{theorem} \begin{proof} When all vertices are in convex position, each vertex is pointed in the outer face. As such, any triangulation of these vertices is, in fact, a pointed pseudo-triangulation. 
Additionally, there is no difference between the standard definition of a flip in a triangulation of a convex polygon and the flip in that same setting, when interpreted as a pointed pseudo-triangulation. Thus, the lower bound of $\Omega(n \log n)$ flips to transform between two edge-labelled triangulations of a convex polygon applies to pointed pseudo-triangulations as well. \end{proof} \subsection{Deferred proofs} \label{sec:el-deferred} This section contains a few technical lemmas that were omitted from the previous section. \begin{figure} \caption{(a) A corner of a pseudo-triangle and an edge such that the entire pseudo-triangle on the other side of the edge lies inside the corner's wedge. (b) If $a$ can see a point past $x$, then the geodesic does not contain $x$.} \label{fig:el-opposite-flip} \label{fig:el-shorter-path} \end{figure} \begin{lemma} \label{lem:el-opposite-flip} Let $a$ be a corner of a pseudo-triangle with neighbours $x$ and $y$, and let $e$ be an edge on the chain opposite $a$. If all vertices of the other pseudo-triangle containing $e$ lie in the wedge formed by extending the edges $ax$ and $ay$ into half-lines (see Figure~\ref{fig:el-opposite-flip}), then flipping $e$ will result in an edge incident on $a$. \end{lemma} \begin{proof} Let $T$ be the pseudo-triangle on the other side of $e$, and let $b$ be the corner of $T$ opposite $e$. Then flipping $e$ inserts the geodesic between $a$ and $b$. This geodesic must intersect $e$ in a point $s$ and then follow the shortest path from $s$ to $a$. If $s$ lies strictly inside the wedge, nothing can block $as$, thus the new edge will contain $as$ and be incident on $a$. Now, if all of $e$ lies strictly inside the wedge, our result follows. But suppose that $e$ has $x$ as an endpoint and the geodesic between $a$ and $b$ intersects $e$ in $x$. 
As $a$ can see $x$ and all of $T$ lies inside the wedge, there is an $\varepsilon > 0$ such that $a$ can see the point $p$ on the boundary of $T$ at distance $\varepsilon$ from $x$ (see Figure~\ref{fig:el-shorter-path}). The line segment $ap$ intersects the geodesic at a point $s'$. By the triangle inequality, $s'a$ is shorter than following the geodesic from $s'$ via $x$ to $a$. But then this would give a shorter path between $a$ and $b$, by following the geodesic to $s'$ and then cutting directly to $a$. As the geodesic is the shortest path by definition, this is impossible. Thus, the geodesic cannot intersect $e$ at $x$ and the new edge must be incident to $a$. \end{proof} \begin{lemma} \label{lem:el-five-bitangents-a} Let $a$ and $b$ be two consecutive internal bottom edges in the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, such that the pseudo-triangle to the right of $b$ is a triangle. Then the pseudo-pentagon $X$ formed by removing $a$ and $b$ has five bitangents. \end{lemma} \begin{proof} Let $c_0, \ldots, c_4$ be the corners of $X$ in counter-clockwise order around the boundary. By Lemma~\ref{lem:el-opposite-flip}, flipping $b$ results in an edge $b'$ that intersects $b$ and is incident on $c_1$. This edge is part of the geodesic between $c_1$ and $c_3$, and as such it is tangent to the convex chain $v_0, v_a, \ldots, c_3$, where $v_a$ is the top endpoint of $a$ ($v_a$ could be $c_3$). Therefore it is also the tangent from $c_1$ to the convex hull of $\{v_0, \ldots, v_a\}$. This means that the newly created pseudo-triangle with $c_1$ as corner and $a$ on the opposite pseudo-edge also meets the conditions of Lemma~\ref{lem:el-opposite-flip}. Thus, flipping $a$ results in another edge, $a'$, also incident on $c_1$. As $b$ separates $c_1$ from all vertices in $\{v_0, \ldots, v_a\}$, $a'$ must also intersect $b$. This gives us four bitangents, of which two are incident on $v_0$ ($a$ and $b$), and two on $c_1$ ($a'$ and $b'$).
Finally, flipping $a$ before flipping $b$ results in a bitangent that is not incident on $v_0$ (as $v_0$ is a corner and cannot be on the new geodesic), nor on $c_1$ (as $b$ separates $a$ from $c_1$). Thus, $X$ has five bitangents. \end{proof} \begin{lemma} \label{lem:el-five-bitangents-b} Let $a$ and $b$ be two consecutive internal bottom edges in the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, such that the pseudo-triangle to the right of $b$ is not a triangle. Then the pseudo-pentagon $X$ formed by flipping the corresponding top edge of $b$ and removing $a$ and $b$ has five bitangents. \end{lemma} \begin{proof} Let $v_a$ and $v_b$ be the top endpoints of $a$ and $b$. By Lemma~\ref{lem:el-opposite-flip} and since $b$ had degree two, flipping the top edge of $b$ results in the edge $v_bc_1$. We get three bitangents for free: $a$, $b$, and $b'$ -- the old top edge of $b$ and the result of flipping $b$. $X$ consists of a reflex chain $C$ that is part of the convex hull of the points to the left of $a$, followed by three successive tangents to $C$, $v_a$, or $v_b$. Since $C$ lies completely to the left of $a$, it cannot significantly alter any of the geodesics or bitangents inside the polygon, so we can reduce it to a single edge. Now, $X$ consists either of a triangle with two internal vertices, or a convex quadrilateral with one internal vertex. If $X$ is a triangle with two internal vertices, the internal vertices are $v_a$ and $v_b$. Let its exterior vertices be $v_0$, $x$, and $y$. Then there are seven possible bitangents: $a = v_0v_a$, $b = v_0v_b$, $xv_a$, $xv_b$, $yv_a$, $yv_b$, and $v_av_b$. We know that $xv_a$ and $yv_b$ are edges, so there are five possible bitangents left. As all vertices involved are either corners or have degree one in $X$, the only condition for an edge to be a bitangent is that it does not cross the boundary of $X$. Since the exterior boundary is a triangle, this reduces to it not crossing $xv_a$ and $yv_b$. 
Two line segments incident to the same vertex cannot cross. Thus, $xv_b$, $yv_a$, and $v_av_b$ cannot cross $xv_a$ and $yv_b$, and $X$ has five bitangents. If $X$'s convex hull has four vertices, the internal vertex is $v_b$ (otherwise the pseudo-triangle to the right of $b$ would be a triangle). Let its exterior vertices be $v_0$, $x$, $v_a$, and $y$. Then there are six possible bitangents: $a = v_0v_a$, $b = v_0v_b$, $xy$, $xv_b$, $yv_b$, and $v_av_b$, of which one ($yv_b$) is an edge of $X$. Since $a$ and $b$ are guaranteed to be bitangents, and $xy$, $xv_b$, and $v_av_b$ all share an endpoint with $yv_b$, the arguments from the previous case apply and we again have five bitangents. \end{proof} \section{General pseu\-do-tri\-an\-gu\-la\-tions} \label{sec:el-general-pts} In this section, we extend our results for edge-la\-belled pointed pseu\-do-tri\-an\-gu\-la\-tions to all edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tions. Since not all pseu\-do-tri\-an\-gu\-la\-tions have the same number of edges, we need to allow flips that change the number of edges. In particular, we allow a single edge to be deleted or inserted, provided that the result is still a pseu\-do-tri\-an\-gu\-la\-tion. Since we are dealing with edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tions, we need to determine what happens to the edge labels. It is useful to first review the properties we would like these flips to have. First, a flip should be a local operation -- it should affect only one edge. Second, a labelled edge should be flippable if and only if the edge is flippable in the unlabelled setting. This allows us to re-use the existing results on flips in pseu\-do-tri\-an\-gu\-la\-tions. Third, flips should be reversible. Like most proofs about flips, our proof in the previous section crucially relies on the reversibility of flips. With these properties in mind, the edge-deletion flip is rather straightforward -- the labelled edge is removed, and other edges are not affected. 
Since the edge-insertion flip needs to be the inverse of this, it should insert the edge and assign it a \emph{free label} -- an unused label in $\{1, \ldots, 3n - 3 - 2h\}$, where $h$ is the number of vertices on the convex hull ($3n - 3 - 2h$ is the number of internal edges in a triangulation). With the definitions out of the way, we can turn our attention to the number of flips required to transform any edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tion into any other. In this section, we show that by using insertion and deletion flips, we can shuffle (permute the labels on bottom edges) with $O(n + h \log h)$ flips. Combined with the unlabelled bound of $O(n \log c)$ flips by Aichholzer~et al.\xspace~\cite{aichholzer2006transforming}, where $c$ is the number of convex layers of the point set, this brings the total number of flips down to $O(n \log c + h \log h)$. When the points are in convex position ($h = n$), all pseudo-triangulations are full triangulations, and the $O(n \log n)$ bound by Bose~et al.\xspace~\cite{bose2013flipping} in this setting implies our bound. Therefore, we assume for the remainder of this section that the points are not in convex position ($h < n$). As before, we first build a collection of simple tools that help prove the main result. \begin{lemma} \label{lem:el-degree-2-free} With $O(1)$ flips, we can interchange the label of an edge incident to an internal vertex of degree two with a free label. \end{lemma} \begin{proof} Let $v$ be a vertex of degree two and let $e$ be an edge incident to $v$. Since $v$ has degree two, its removal leaves an empty pseudo-triangle $T$. There are three bitangents that connect $v$ to $T$, one for each corner. Thus, we can insert the third bitangent $f$ with the desired free label, making $v$ non-pointed (see~Figure~\ref{fig:el-degree-2-free}). Flipping $e$ now removes it and frees its label. Finally, flipping $f$ moves it into $e$'s starting position, completing the exchange.
\end{proof} \begin{figure} \caption{Interchanging the label of an edge incident to a vertex of degree two with a free label.} \label{fig:el-degree-2-free} \end{figure} This implies that, using an arbitrary free label as placeholder, we can swap any two edges incident to internal degree-two vertices -- no matter where they are in the pseu\-do-tri\-an\-gu\-la\-tion. \begin{corollary} \label{lem:el-two-degree-2-swap} We can interchange the labels of two edges, each incident to some internal vertex of degree two, with $O(1)$ flips. \end{corollary} Recall that during a sweep (Lemma~\ref{lem:el-sweep}), each internal vertex has degree two at some point. Since the number of free labels for a pointed pseu\-do-tri\-an\-gu\-la\-tion is equal to the number of internal vertices, this means that we can use Lemma~\ref{lem:el-degree-2-free} to swap every label on a bottom edge incident to an internal vertex with a free label by performing a single sweep. Afterwards, a second sweep can replace these labels on the bottom edges in any desired order. Thus, permuting the labels on bottom edges incident to internal vertices can be done with $O(n)$ flips. Therefore, the difficulty in permuting the labels on all bottom edges lies in bottom edges that are not incident to an internal vertex, that is, chords of the convex hull. If there are few such chords, a similar strategy (free them all and replace them in the desired order) might work. Unfortunately, the number of free labels can be far less than the number of chords. We now consider operations on maximal groups of consecutive chords, which we call \emph{fans}. As the vertices of a fan are in convex position, fans behave in many ways like triangulations of a convex polygon, which can be rearranged with $O(n \log n)$ flips~\cite{bose2013flipping}. The problem now becomes getting the right set of labels on the edges of a fan. 
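The two-sweep relabelling of bottom edges incident to internal vertices, described above, can be modelled abstractly. In the following sketch (Python; all names are illustrative), lists stand in for those bottom edges and for the free-label pool, and each swap stands in for the $O(1)$-flip exchange of Lemma~\ref{lem:el-degree-2-free}, available while a vertex has degree two during a sweep:

```python
def resort_bottom_labels(bottom, free, desired):
    """Permute the labels on bottom edges incident to internal vertices.

    `bottom` and `free` model the labels on those bottom edges and the
    free-label pool (one free label per internal vertex); `desired` is
    the target permutation of the original bottom labels.
    """
    assert len(bottom) == len(free)
    # First sweep: park every bottom label in the free pool.
    for i in range(len(bottom)):
        bottom[i], free[i] = free[i], bottom[i]
    # Second sweep: write the parked labels back in the desired order.
    position = {label: j for j, label in enumerate(free)}
    for i, want in enumerate(desired):
        j = position[want]
        bottom[i], free[j] = free[j], bottom[i]
    return bottom
```

With $n$ internal vertices this performs $2n$ exchanges, matching the $O(n)$ flip bound claimed above.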
Consider the internal vertices directly to the left ($\ensuremath{v_{\textsc{l}}}$) and right ($\ensuremath{v_{\textsc{r}}}$) of a fan $F$, supposing both exist. Vertex $\ensuremath{v_{\textsc{l}}}$ has degree two and forms part of the reflex chain of the first pseudo-triangle to the left of $F$. Thus, flipping $\ensuremath{v_{\textsc{l}}}$'s top edge connects it to the leftmost vertex of $F$ (excluding $v_0$). Vertex $\ensuremath{v_{\textsc{r}}}$ is already connected to the rightmost vertex of $F$, so we just ensure that it has degree two. To do this, we flip all incident edges from vertices further to the right, from the bottom to the top. Now the diagonals of $F$ form a triangulation of a convex polygon whose boundary consists of $v_0$, $\ensuremath{v_{\textsc{l}}}$, the top endpoints of the chords, and $\ensuremath{v_{\textsc{r}}}$ (see~Figure~\ref{fig:el-indexed-fan}). It is possible that there is no internal vertex to one side of $F$. In that case, there is only one vertex on that side of $F$, which is part of the convex hull, and we can simply use that vertex in place of $\ensuremath{v_{\textsc{l}}}$ or $\ensuremath{v_{\textsc{r}}}$ without flipping any of its edges. Since there is at least one internal vertex by assumption, either $\ensuremath{v_{\textsc{l}}}$ or $\ensuremath{v_{\textsc{r}}}$ is an internal vertex. This vertex is called the \emph{index} of $F$. If a vertex is the index of two fans, it is called a \emph{shared index}. \begin{figure} \caption{(a) An indexed fan. (b) Shifting the indexed edge ($\ensuremath{v_{\textsc{l}}}$ is the index).} \label{fig:el-indexed-fan} \label{fig:el-increase-index} \end{figure} A triangulated fan is called an \emph{indexed fan} if there is one edge incident to the index, the \emph{indexed edge}, and the remaining edges are incident to one of the neighbours of the index on the boundary. Initially, all diagonals of $F$ are incident to $v_0$, so we transform it into an indexed fan by flipping the diagonal of $F$ closest to the index.
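The indexed-fan bookkeeping just described can be captured by a small model. This sketch (Python; the class and field names are our own, and the assumption that a shift costs two flips reflects the constant-flip shift operation of this section) tracks which diagonal currently carries the indexed edge:

```python
class IndexedFan:
    """Toy model of an indexed fan: `labels` lists the labels of the
    fan's diagonals in order, `pos` marks the indexed edge, and `flips`
    accumulates the flips spent on shifts (two per shift in this model)."""

    def __init__(self, labels):
        self.labels = list(labels)
        self.pos = 0      # indexed edge starts at the diagonal nearest the index
        self.flips = 0

    def shift(self, step=1):
        """Move the indexed edge to an adjacent diagonal (O(1) flips)."""
        assert 0 <= self.pos + step < len(self.labels)
        self.pos += step
        self.flips += 2

    def indexed_label(self):
        return self.labels[self.pos]
```

Since flips are reversible, `shift(-1)` models shifting the indexed edge back the other way.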
Next, we investigate several operations on indexed fans that help us move labels between fans. \begin{lemma}[Shift] In an indexed fan, we can shift the indexed edge to the next diagonal with $O(1)$ flips. \end{lemma} \begin{proof} Suppose that $\ensuremath{v_{\textsc{l}}}$ is the index (the proof for $\ensuremath{v_{\textsc{r}}}$ is analogous). Let $e$ be the current indexed edge, and $f$ be the leftmost diagonal incident to $v_0$. Then flipping $f$ followed by $e$ makes $f$ the only edge incident to the index and $e$ incident to the neighbour of the index (see~Figure~\ref{fig:el-increase-index}). Since flips are reversible, we can shift the index the other way too. \end{proof} \begin{figure} \caption{Changing which side a shared index indexes.} \label{fig:el-swap-shared-index} \end{figure} \begin{lemma} We can switch which fan a shared index currently indexes with $O(1)$ flips. \end{lemma} \begin{proof} Flipping the current indexed edge ``parks'' it by connecting it to the two neighbours of the index, and reduces the degree of the index to two (see~Figure~\ref{fig:el-swap-shared-index}). Now, flipping the top edge of the index connects it to the other fan, where we parked the previously indexed edge. Flipping that edge connects it to the index again. \end{proof} \begin{lemma} \label{lem:el-deg-3-to-2} In a pointed pseu\-do-tri\-an\-gu\-la\-tion, we can always decrease the degree of a vertex $v$ of degree three by flipping one of the edges incident to its reflex angle. \end{lemma} \begin{proof} Consider the geodesic from $v$ to the opposite corner $c$ of the pseudo-triangle $v$ is pointed in. The line supporting the first segment of this geodesic at $v$ splits the edges incident to $v$ into two groups. As there are three edges, one of these groups must contain multiple edges. Flipping the edge in the group with multiple edges that is incident to the reflex angle of $v$ results in a geodesic to $c$.
If this geodesic passed through $v$, it would insert the missing edges along the geodesic from $v$ to $c$ (otherwise we could find a shorter path). But inserting this geodesic would make $v$ non-pointed. Thus, $v$ cannot be on this geodesic. Therefore the new edge is not incident to $v$ and the flip reduces the degree of $v$. \end{proof} Since the index always has degree three, this allows us to extend the results from Lemmas~\ref{lem:el-degree-2-free} and \ref{lem:el-degree-2-swap} regarding vertices of degree two to indexed edges. \begin{corollary} \label{cor:el-index-free} In an indexed fan, we can interchange the label of the indexed edge with a free label in $O(1)$ flips. \end{corollary} \begin{corollary} \label{cor:el-index-swap} Given two indexed fans, we can interchange the labels of the two indexed edges with $O(1)$ flips. \end{corollary} Now we have enough tools to shuffle the bottom edges. \begin{lemma}[Shuffle] \label{lem:el-shuffle-non-pointed} In the left-shelling pseu\-do-tri\-an\-gu\-la\-tion, we can reorder the labels of all bottom edges with $O(n + h \log h)$ flips, where $h$ is the number of vertices on the convex hull. \end{lemma} \begin{proof} In the initial pseu\-do-tri\-an\-gu\-la\-tion, let $B$ and $\ensuremath{\mathcal{F}}$ be the sets of labels on bottom edges and free labels, respectively. Let $F_i$ be the set of labels on the $i$-th fan (in some fixed order), and let $\ensuremath{\overline{F}}$ be the set of labels on non-fan bottom edges. Let $F_i'$ and $\ensuremath{\overline{F}}'$ be these same sets in the target pseu\-do-tri\-an\-gu\-la\-tion. As we are only rearranging the bottom labels, we have that $B = F_1 \cup \ldots \cup F_k \cup \ensuremath{\overline{F}} = F_1' \cup \ldots \cup F_k' \cup \ensuremath{\overline{F}}'$, where $k$ is the number of fans. We say that a label $\ell$ \emph{belongs to} fan $i$ if $\ell \in F_i'$. At a high level, the reordering proceeds in four stages. 
In stage one, we free all labels in $\ensuremath{\overline{F}}$. In stage two, we place each label from $B \setminus \ensuremath{\overline{F}}'$ in the fan it belongs to, leaving the labels in $\ensuremath{\overline{F}}'$ free. Then, in stage three, we correct the order of the labels within each fan. Finally, we place the labels in $\ensuremath{\overline{F}}'$ correctly. Since each internal vertex contributes exactly one top edge, one bottom edge, and one free label, we have that $|\ensuremath{\overline{F}}| = |\ensuremath{\mathcal{F}}|$. To free all labels in $\ensuremath{\overline{F}}$, we perform a sweep (see~Lemma~\ref{lem:el-sweep}). As every internal vertex has degree two at some point during the sweep, we can exchange the label on its bottom edge with a free label at that point, using Lemma~\ref{lem:el-degree-2-free}. This requires $O(n)$ flips. The labels in $\ensuremath{\mathcal{F}}$ remain on the bottom edges incident to internal vertices throughout stage two and three, as placeholders. To begin stage two, we index all fans with $O(n)$ flips and shift these indices to the first `foreign' edge: the first edge whose label does not belong to the current fan. If no such edge exists, we can ignore this fan for the remainder of stage two, as it already has the right set of labels. Now suppose that there is a fan $F_i$ whose indexed edge $e$ is foreign: $\ell_e \notin F_i'$. Then either $\ell_e \in F_j'$ for some $j \neq i$, or $\ell_e \in \ensuremath{\overline{F}}'$. In the first case, we exchange $\ell_e$ with the label on the indexed edge of $F_j$, and shift the index of $F_j$ to the next foreign edge. In the second case, we exchange $\ell_e$ with a free label in $B \setminus \ensuremath{\overline{F}}'$. If this label belongs to $F_i$, we shift its index to the next foreign edge. In either case, we increased the number of correctly placed labels by at least one. 
Thus $n - 1$ repetitions suffice to place all labels in the fan they belong to, wrapping up stage two. Since we perform a linear number of swaps and shifts, and each takes a constant number of flips, the total number of flips required for stage two is $O(n)$. For stage three, we note that each indexed fan corresponds to a triangulation of a convex polygon. As such, we can rearrange the labelled diagonals of a fan $F_i$ into their desired final position with $O(|F_i| \log |F_i|)$ flips~\cite{bose2013flipping}. Thus, if we let $h$ be the number of vertices on the convex hull, the total number of flips for this step is bounded by \[ \sum_i O(|F_i| \log |F_i|)~~\leq~~\sum_i O(|F_i| \log h)~~=~~O(h \log h). \] For stage four, we first return to a left-shelling pseu\-do-tri\-an\-gu\-la\-tion by un-indexing each fan, using $O(n)$ flips. After stage two, the labels in $\ensuremath{\overline{F}}'$ are all free, so all that is left is to place these on the correct bottom edges, which we can do with a final sweep. Thus, we can reorder all bottom labels with $O(n + h \log h)$ flips. \end{proof} This leads to the following bound. \begin{theorem} We can transform any edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tion with $n$ vertices into any other with $O(n \log c + h \log h)$ flips, where $c$ is the number of convex layers and $h$ is the number of vertices on the convex hull. \end{theorem} \begin{proof} Using the technique by Aichholzer~et al.\xspace~\cite{aichholzer2006transforming}, we first transform the pseu\-do-tri\-an\-gu\-la\-tion into the left-shelling pseu\-do-tri\-an\-gu\-la\-tion $T$ with $O(n \log c)$ flips. Our canonical pseu\-do-tri\-an\-gu\-la\-tion contains the labels $\{1, \ldots, 2n - h - 3\}$, but it is possible for $T$ to contain a different set of labels. Since all labels are drawn from $\{1, \ldots, 3n - 2h - 3\}$, at most $n - h$ labels differ. This is exactly the number of internal vertices.
Thus, we can use $O(n + h \log h)$ flips to shuffle (Lemma~\ref{lem:el-shuffle-non-pointed}) all non-canonical labels on fan edges to bottom edges incident to an internal vertex. Once there, we use a sweep (Lemma~\ref{lem:el-sweep}) to ensure that every internal vertex has degree two at some point, at which time we replace its incident non-canonical labels with canonical ones with a constant number of flips (Lemma~\ref{lem:el-degree-2-free}). Once our left-shelling pseu\-do-tri\-an\-gu\-la\-tion has the correct set of labels, we use a constant number of shuffles and sweeps to sort the labels (Lemma~\ref{lem:el-sort-left-shelling}). Since we can shuffle and sweep with $O(n + h \log h)$ and $O(n)$ flips, respectively, the total number of flips reduces to $O(n \log c + n + h \log h) = O(n \log c + h \log h)$. \end{proof} The correspondence between triangulations of a convex polygon and pseu\-do-tri\-an\-gu\-la\-tions, proven in Theorem~\ref{thm:el-lower-bound}, also gives us the following lower bound, as no insertion or deletion flips are possible in this setting. \begin{theorem} \label{thm:el-non-pointed-lower-bound} There are pairs of edge-la\-belled pseu\-do-tri\-an\-gu\-la\-tions with $n$ vertices such that any sequence of flips that transforms one into the other has length $\Omega(n \log n)$. \end{theorem} \section*{Acknowledgments} We would like to thank Anna Lubiw and Vinayak Pathak for helpful discussions. \end{document}
\begin{document} \title{Qudit lattice surgery} \begin{abstract} We observe that lattice surgery, a model of fault-tolerant qubit computation, generalises straightforwardly to arbitrary finite-dimensional qudits. The generalised model is based on the group algebras $\mathbb{C}\mathbb{Z}_d$ for $d \geq 2$. It still requires magic state injection for universal quantum computation. We relate the model to the ZX-calculus, a diagrammatic language based on Hopf-Frobenius algebras. \end{abstract} \section{Introduction} Topological quantum computing is a theoretical paradigm of large-scale quantum error correction in which important data is encoded in non-local features of a vast entangled state. So long as the physical errors on the overall system stay below some threshold value, the data is protected. The archetypal example is the $\mathbb{C}\mathbb{Z}_2$ surface code \cite{DKLP,BK1}, a system which requires only nearest-neighbour connectivity between qubits and has a high threshold against errors \cite{WFH}. The key practical feature of the surface code, as opposed to the earlier toric code \cite{Kit1}, is that it may be embedded on the plane with boundaries, and does not require exotic homology to encode data. Lattice surgery was developed by Horsman et al. \cite{HFDM} as a method of computation using the surface code. It is conceptually simple and flexible, and believed to be efficient in its consumption of resources such as qubits and time \cite{FG,Lit1} compared to other methods such as defect braiding \cite{FMMC}. Lattice surgery starts with patches of surface code, then employs `splits' and `merges' on these, which act non-unitarily on logical states. In fact, merges are described by completely positive trace preserving (CPTP) maps, and cannot be performed deterministically in general. Both features make the model cumbersome to describe using the circuit model. 
Interestingly, computation using lattice surgery closely mirrors the Hopf algebra structure of $\mathbb{C} \mathbb{Z}_2$, the group algebra of $\mathbb{Z}_2$ over the field $\mathbb{C}$. Coincidentally, this is one of the building blocks of the ZX-calculus, a formal graphical language for reasoning about quantum computation \cite{CD}. It represents quantum processes using ZX-diagrams, which may then be rewritten using the axioms of the calculus. The initial presentation of the ZX-calculus applied only to qubits, and can be summarised algebraically as $\mathbb{C} \mathbb{Z}_2$ and $\mathbb{C}(\mathbb{Z}_2)$ sitting on the same vector space, plus the inherent Fourier transform and a so-called phase group. De Beaudrap and Horsman \cite{BH} noticed this relationship between lattice surgery and qubit ZX-calculus, and leveraged it to develop novel lattice surgery procedures. Other techniques using the same idea have also been developed, e.g. for efficient compilation of magic state distillation circuits \cite{GF} and reasoning about implementing deterministic programs despite non-deterministic merges \cite{BDHP}. The ZX-calculus has since been generalised to qudits using $\mathbb{C} \mathbb{Z}_d$, for $d\in \mathbb{N}$ \cite{W1}. We observe that lattice surgery may similarly be generalised. The procedure is algebraically very simple, with the most advanced technology required being the Fourier transform. We give a concrete description of the computational model, although for brevity we elide some of the details such as the Pauli frame. We then leverage the qudit ZX-calculus to describe transformations on the logical data. We use this description to give a series of qudit lattice surgery procedures, and show that the model still requires magic state injection for universality. 
\section{The $\mathbb{C}\mathbb{Z}_d$ surface code} In this section we introduce the surface code for qudits; readers familiar with Kitaev models may wish to skip to Section~\ref{sec:lattice_surgery}. Throughout, we let $\mathbb{Z}_d$ be the cyclic group with $d$ elements labelled by the integers $0,\dots,d-1$, with addition as group multiplication. We assume $d\geq 2$, as the $d=1$ case is trivial. In the interest of brevity, proofs are kept short or relegated to the appendices. For more explicit and thorough treatments at a higher level of generality see e.g. \cite{Kit1,Bom,Cow}. Throughout, we occasionally ignore normalisation (typically factors of $d$ or $\frac{1}{d}$) when convenient. \begin{definition} Let $\mathbb{C}\mathbb{Z}_d$ be the group Hopf algebra with basis states $|i\rangle$ for $i \in \mathbb{Z}_d$. $\mathbb{C}\mathbb{Z}_d$ has multiplication given by the linear extension of the group multiplication, so $|i\rangle\otimes |j\rangle \mapsto |i+j\rangle$, and the unit $|0\rangle$. It has comultiplication given by $|i\rangle\mapsto |i\rangle\otimes |i\rangle$, and the counit $|i\rangle\mapsto 1 \in \mathbb{C}$. It has the normalised integral element $\Lambda_{\mathbb{C}\mathbb{Z}_d} = \frac{1}{d}\sum_i |i\rangle$, and the antipode is given by the group inverse. $\mathbb{C}\mathbb{Z}_d$ is commutative and cocommutative. \end{definition} \begin{definition}Let $\mathbb{C}(\mathbb{Z}_d)$ be the function Hopf algebra with basis states $|\delta_i\rangle$ for $i\in \mathbb{Z}_d$. $\mathbb{C}(\mathbb{Z}_d)$ is the dual algebra to $\mathbb{C}\mathbb{Z}_d$. $\mathbb{C}(\mathbb{Z}_d)$ has multiplication $|\delta_i\rangle \otimes |\delta_j\rangle \mapsto \delta_{i,j}|\delta_i\rangle$ and the unit $\sum_i |\delta_i\rangle$. It has comultiplication $|\delta_i\rangle \mapsto \sum_{h\in \mathbb{Z}_d} |\delta_h\rangle\otimes|\delta_{i-h}\rangle$ and counit $|\delta_i\rangle\mapsto \delta_{i,0}$.
It has the normalised integral element $\Lambda_{\mathbb{C}(\mathbb{Z}_d)} = |\delta_0\rangle$, and the antipode is again induced by the group inverse. $\mathbb{C}(\mathbb{Z}_d)$ is commutative and cocommutative. \end{definition} \begin{lemma}\label{lem:fourier} The algebras are related by the Fourier isomorphism, so $\mathbb{C}(\mathbb{Z}_d)\cong \mathbb{C}\mathbb{Z}_d$ as Hopf algebras. In particular this isomorphism has maps \begin{equation}\label{Zisom} |j\rangle \mapsto \sum_k q^{jk}|\delta_k\rangle,\quad |\delta_j\rangle\mapsto \frac{1}{d} \sum_k q^{-jk}|k\rangle,\end{equation} where $q = e^{2\pi i/d}$ is a primitive $d$th root of unity. \end{lemma} \begin{definition}\label{def:lattice_acts} Now let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation. The corresponding Hilbert space $\mathcal{H}$ will be a tensor product of vector spaces with one copy of $\mathbb{C}\mathbb{Z}_d$ at each arrow in $E$, with basis denoted by $\{|i\rangle\}_{i\in \mathbb{Z}_d}$ as before. Next, for each vertex $v \in V$ and each face $p \in P$ we define actions of $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$, which act on the vector spaces around the vertex or around the face, and trivially elsewhere, according to \[\tikzfig{vertex_action}\] and \[\tikzfig{face_action}\] for $|l\rangle \in \mathbb{C}\mathbb{Z}_d$ and $|\delta_j\rangle\in \mathbb{C}(\mathbb{Z}_d)$. \end{definition} Here $|l\rangle\triangleright_v$ subtracts in the case of arrows pointing towards the vertex, and $|\delta_j\rangle\triangleright_p$ has $c,d$ entering negatively in the $\delta$-function because these are contra to a {\em clockwise} flow around the face in our conventions. The vertex actions are built from four-fold copies of the operators $X$ and $X^\dagger$, where $X^l|i\rangle=|i+l\rangle$.
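The Fourier isomorphism above is just the discrete convolution theorem, and can be sanity-checked numerically. The following is a minimal sketch in Python with numpy (an illustration only; the variable names are ours and not part of the formalism), verifying that the map $|j\rangle \mapsto \sum_k q^{jk}|\delta_k\rangle$ turns the convolution product of $\mathbb{C}\mathbb{Z}_d$ into the pointwise product of $\mathbb{C}(\mathbb{Z}_d)$, and that the stated inverse really inverts it.

```python
import numpy as np

d = 5
q = np.exp(2j * np.pi / d)

# Fourier map F: CZ_d -> C(Z_d), with F[k, j] = q^{jk}, i.e. |j> -> sum_k q^{jk}|delta_k>
F = np.array([[q**(j * k) for j in range(d)] for k in range(d)])

def conv(a, b):
    """Multiplication of CZ_d: |i> x |j> -> |i+j>, i.e. cyclic convolution."""
    c = np.zeros(d, dtype=complex)
    for i in range(d):
        for j in range(d):
            c[(i + j) % d] += a[i] * b[j]
    return c

# Two random elements of CZ_d
rng = np.random.default_rng(0)
a = rng.standard_normal(d) + 1j * rng.standard_normal(d)
b = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# Algebra homomorphism: F(a conv b) equals the pointwise product F(a).F(b) in C(Z_d)
assert np.allclose(F @ conv(a, b), (F @ a) * (F @ b))

# Units match: F|0> = sum_k |delta_k>, the unit of C(Z_d)
assert np.allclose(F @ np.eye(d)[0], np.ones(d))

# Inverse map |delta_j> -> (1/d) sum_k q^{-jk}|k>, i.e. F^dagger / d
Finv = np.conj(F).T / d
assert np.allclose(Finv @ F, np.eye(d))
```

The check passes for any $d$; only the root of unity $q$ changes.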
Consider the face actions of elements $\sum_j q^{mj}|\delta_j\rangle$, i.e. the Fourier transformed basis of $\mathbb{C}(\mathbb{Z}_d)$; these face actions are made up of $Z$ and $Z^\dagger$, where $Z^m|i\rangle=q^{mi}|i\rangle$, and $Z$, $X$ obey $ZX=qXZ$. Stabilisers on the lattice are given by measurements of the $X\otimes X\otimes X^\dagger\otimes X^\dagger$ and $Z\otimes Z\otimes Z^\dagger\otimes Z^\dagger$ operators on vertices and faces respectively; that is, for the vertices we non-deterministically perform one of the $d$ projectors $P_v(j) = \frac{1}{d}\sum_k q^{jk}|k\rangle\triangleright_v$ for $j\in \mathbb{Z}_d$, according to each of the $d$ measurement outcomes. Similarly for faces, we perform one of the $d$ projectors $P_p(j) =|\delta_j\rangle\triangleright_p$. In practice, this requires additional `syndrome' qudits at each vertex and face; we give explicit circuits for these in Appendix~\ref{app:circs}. At each round of measurement, we measure all of the stabilisers on the whole lattice. Physically, we may also say that the system is in the ground-state subspace of a certain Hamiltonian: \[H=-\Big(\sum_v A(v) + \sum_p B(p)\Big)+{\rm const.}\] where \[ A(v)=P_v(0)=\Lambda\triangleright_v=\frac{1}{d}\sum_i |i\rangle\triangleright_v,\quad B(p)=P_p(0)=\Lambda^*\triangleright_p=|\delta_0\rangle\triangleright_p.\] It is easy to see that \[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\] where $v,v'$ are different vertices, and $p,p'$ are different faces. When the measurements at a vertex $v$ and face $p$ yield the projectors $A(v)$ and $B(p)$ we say that no errors were detected at these locations, and we are locally in the vacuum. If we obtain the projectors $A(v)$ and $B(p)$ everywhere then we are in the global vacuum space $\mathcal{H}_{vac}$.
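On a single qudit, the operators just introduced are small enough to check directly. The sketch below (Python with numpy, our own illustration rather than part of the model) verifies the Weyl relation $ZX = qXZ$, that $X$ and $Z$ have order $d$, and that the single-qudit analogues of the projectors $P_v(j)$, written here with the $\frac{1}{d}$ normalisation made explicit, form a complete orthogonal family, as a measurement requires.

```python
import numpy as np

d = 3
q = np.exp(2j * np.pi / d)

# Generalised Pauli operators on one qudit: X|i> = |i+1>, Z|i> = q^i |i>
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag([q**i for i in range(d)])

# Weyl commutation relation, and X, Z have order d
assert np.allclose(Z @ X, q * (X @ Z))
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))

# Single-qudit analogue of the vertex projectors: P(j) = (1/d) sum_k q^{jk} X^k
P = [sum(q**(j * k) * np.linalg.matrix_power(X, k) for k in range(d)) / d
     for j in range(d)]

for j in range(d):
    assert np.allclose(P[j] @ P[j], P[j])                   # idempotent
    for jp in range(j + 1, d):
        assert np.allclose(P[j] @ P[jp], np.zeros((d, d)))  # mutually orthogonal
assert np.allclose(sum(P), np.eye(d))                       # outcomes resolve the identity
```

The face projectors $P_p(j)$ behave identically with $Z$ in place of $X$, since the two are exchanged by the Fourier transform.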
One can check that a state ${|{\rm vac}\rangle} \in \mathcal{H}_{vac}$ obeys \[|l\rangle\triangleright_v{|{\rm vac}\rangle} = A(v){|{\rm vac}\rangle}= \sum_j q^{mj}|\delta_j\rangle\triangleright_p{|{\rm vac}\rangle} = B(p){|{\rm vac}\rangle}={|{\rm vac}\rangle}\] for all $l, m\in \mathbb{Z}_d, v\in V, p\in P$. \begin{definition}\label{def:log_states} We can always write down at least two vacuum states\footnote{In certain cases, such as when the lattice is embedded onto a sphere, these states coincide.}, which we shall call: \[|0\rangle_L := \prod_{v}A(v)\bigotimes_E |0\rangle\] and \[|\delta_0\rangle_L := \prod_{p}B(p)\bigotimes_E \sum_i |i\rangle.\] \end{definition} Computationally, the vacuum space is also the \textit{logical space}, the subspace in which we store data; the subscript ${}_L$ refers to this logical space, and $|0\rangle_L$, $|\delta_0\rangle_L$ are canonical logical states. If measurements yield other projectors $P_v(j)$ or $P_p(j)$ then we have detected an error; in physics jargon we have detected the presence of an electric or magnetic particle. One important feature of the code is that if we receive the measurement outcome $P_v(j)$, say, at a vertex then there will be another vertex at which we detect $P_v(-j)$ instead. This is because all operators on the lattice come in the form of \textit{string operators}. String operators come in two types: $X$ and $Z$. \begin{definition} An $X$-type string operator ${}_xF^i_\xi$ acts on the lattice as \[\tikzfig{x_string_operator}\] where $\xi$ is a string that passes between faces, and for each crossed edge we apply either $X^i$ or $(X^i)^\dagger$ depending on the orientation, as shown. \end{definition} The $X$-type string operators satisfy $({}_xF^i_{\xi})^\dagger = {}_xF^{-i}_{\xi}$.
Additionally we have \[{}_xF^i_{\xi}\circ{}_xF^j_{\xi} = {}_xF^{i+j}_{\xi}\] and, given concatenated strings $\xi, \xi'$, \[{}_xF^i_{\xi'\circ\xi} = {}_xF^i_{\xi'}\circ{}_xF^i_{\xi}\] where one can see the multiplication and comultiplication of $\mathbb{C} \mathbb{Z}_d$; more generally they obey the same Hopf laws as $\mathbb{C} \mathbb{Z}_d$. The other axioms are easy to check. The $X$-type string operators make magnetic quasiparticles `appear' at the faces at which a string ends. In particular, given an initial vacuum state ${|{\rm vac}\rangle}$, we can check that \[P_{p_0}(i){}_xF^i_\xi{|{\rm vac}\rangle} = P_{p_1}(-i){}_xF^i_\xi{|{\rm vac}\rangle} = {}_xF^i_\xi{|{\rm vac}\rangle}\] where $p_0,p_1$ are the start and endpoints of the string, so we will detect errors at these locations upon measurement. However, the string operators leave the system in the vacuum at the intermediate faces of the string, as we have \[B(p){}_xF^i_\xi = {}_xF^i_\xi B(p)\] for any $p \neq p_0, p_1$. As a consequence, we may think of string operators as equivalent up to a sort of discrete framed isotopy. \begin{definition} A $Z$-type string operator ${}_zF^{\delta_j}_\xi$ acts on the lattice as \[\tikzfig{z_string_operator}\] by passing between vertices. For each crossed edge we include a term in the $\delta$-function, as shown. Observe that $\sum_j q^{mj} {}_zF^{\delta_j}_\xi$ applies $Z^m$ or $(Z^m)^\dagger$ at each edge, and that this is the Fourier transformed basis of the $Z$-type string operators. \end{definition} The $Z$-type string operators satisfy $({}_zF^{\delta_i}_\xi)^\dagger = {}_zF^{\delta_i}_\xi$. Additionally we have \[{}_zF^{\delta_i}_\xi\circ {}_zF^{\delta_j}_\xi = \delta_{i,j}\ {}_zF^{\delta_j}_\xi\] and \[{}_zF^{\delta_i}_{\xi'\circ\xi} = \sum_{h}{}_zF^{\delta_h}_{\xi'}\circ{}_zF^{\delta_{i-h}}_{\xi},\] so $Z$-type string operators obey the same Hopf laws as $\mathbb{C}(\mathbb{Z}_d)$.
The $Z$-type string operators generate electric quasiparticles at the vertices at which a string ends. We have \[P_{v_0}(i)\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\rangle}=P_{v_1}(-i)\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\rangle}=\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\rangle}.\] As a result, we refer to this basis of the $Z$-type string operators as the `quasiparticle basis'. They leave the system in the vacuum at the intermediate vertices of the string: \[A(v)\sum_j q^{ij}{}_zF^{\delta_j}_\xi = \sum_j q^{ij}{}_zF^{\delta_j}_\xi A(v)\] for any $v\neq v_0, v_1$. In the quasiparticle basis we have \[\sum_j q^{ij}{}_zF^{\delta_j}_\xi\circ\sum_j q^{kj}{}_zF^{\delta_j}_\xi = \sum_j q^{(i+k)j}{}_zF^{\delta_j}_\xi\] and \[\sum_j q^{ij}{}_zF^{\delta_j}_{\xi'\circ\xi} = \sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}\circ \sum_k q^{ik}{}_zF^{\delta_k}_{\xi},\] i.e. the same algebraic rules as the $X$-type string operators, and as $\mathbb{C} \mathbb{Z}_d$. \begin{lemma}\cite{Kit1} String operators which form a closed loop on a lattice segment which is locally vacuum either act as the identity or are physically impossible, i.e. they take the system to 0. \end{lemma} \proof First, assume the string passes between faces, so we have an $X$-type string operator. In this case, we may tile the loop with squares on the dual lattice (that is, the dual in the graph-theoretic sense). Then one can check that the closed string operator acts as a product over the tiles of $|l\rangle\triangleright_v$ actions. As $|l\rangle\triangleright_v{|{\rm vac}\rangle} = {|{\rm vac}\rangle}$ the state is left unchanged. If the string passes between vertices, tile the loop with squares. Consider the $Z$-type string operators in the quasiparticle basis. Then the closed string operator acts as a product of $\sum_j q^{mj}|\delta_j\rangle\triangleright_p$ actions.
As $\sum_j q^{mj}|\delta_j\rangle\triangleright_p{|{\rm vac}\rangle} = {|{\rm vac}\rangle}$ the state is left unchanged. In the original basis, the product of $|\delta_h\rangle\triangleright_p$ actions acts as the identity if $h=0$; otherwise it takes the system to 0. \endproof We are now ready to define a \textit{patch}. \begin{definition} A patch is a rectangular segment of lattice bordered by two rough and two smooth boundaries, like so: \[\tikzfig{patch}\] where rough boundaries are at the top and bottom, while smooth boundaries are at the left and right.\footnote{One can of course define patches with other combinations of boundaries, which are useful for specific kinds of circuits \cite{Lit1}, but this is a convenient definition for our purposes.} \end{definition} There are assumed to be no parts of the lattice beyond the patch; these are all of the edges in the lattice. The stabilisers on the boundaries are the same as in the bulk, with the exceptions that (a) stabilisers obviously exclude the edges which are not present, and (b) there are no stabilisers for single edges. So, in particular, there are no vertex measurements which include only the single top and bottom edges; likewise, there are no face measurements which include only the single left and right edges.\footnote{More generally, boundary conditions are defined by a subgroup $K\subseteq\mathbb{Z}_d$. This leads to a rich algebraic theory \cite{BSW}. In the present case, the subgroups $K$ associated to rough and smooth boundaries are $K=\{0\}$ and $K=\mathbb{Z}_d$ respectively.} \begin{lemma} Let the system be in a vacuum state. All $X$-type string operators which extend between the left and right boundaries, for example in the manner below \[\tikzfig{patch_X_string}\] leave the system in vacuum, but do not generally act as the identity.
\end{lemma} \proof There are no face stabilisers for the single edges at the ends, and at all other faces $B(p)$ commutes with the string operators. However, while the string can be smoothly deformed up and down the sides of the patch while leaving the operation on the vacuum invariant, it cannot be expressed as a product of vertex or face operators, and explicit checks on small (but nontrivial) examples show that ${}_xF^i_{\xi}$ does not act as the identity unless $i=0$. \endproof In fact, we have a stronger property: all operators which act as a product of $X$ operations on edges and leave the system in vacuum may be expressed as a linear combination of $X$-type string operators extending between left and right, so the $d$ different $X$-type string operators ${}_xF^i_{\xi}$ form an orthonormal basis for the algebra of such operators. We have a similar result for the $Z$-type string operators in the quasiparticle basis, which extend between the top and bottom boundaries. These properties motivate the following: \begin{lemma} A patch as defined above with underlying group algebra $\mathbb{C} \mathbb{Z}_d$ has ${\rm dim}(\mathcal{H}_{vac}) = d$ and two canonical bases, $\{|i\rangle_L\}_{i\in\mathbb{Z}_d}$ and $\{|\delta_i\rangle_L\}_{i\in \mathbb{Z}_d}$. \end{lemma} \proof The states in $\mathcal{H}_{vac}$, and hence the logical space of the code, are uniquely characterised by the algebra of operators upon them. Given a reference state $|{\rm ref}\rangle$ in the vacuum, if there is another vacuum state $|\psi\rangle$ there must be some linear map which transforms $|{\rm ref}\rangle$ into $|\psi\rangle$. Thus $\{{}_xF^i_{\xi}\}_{i\in\mathbb{Z}_d}$ automatically gives an orthonormal basis for $\mathcal{H}_{vac}$. Let us call $|0\rangle_L$ the reference state from Def~\ref{def:log_states}. Then $|i\rangle_L := {}_xF^i_{\xi}|0\rangle_L$, where $\xi$ is any string extending from the left boundary to the right.
We may call ${}_xF^i_{\xi}$ a logical $X^i$ gate, i.e. $X^i_L$. As with $\mathbb{C}\mathbb{Z}_d$ itself, we have a Fourier basis for the patch's logical space. To begin with, we have $|\delta_0\rangle_L = \sum_i {}_xF^i_{\xi}|0\rangle_L$. Then, we define further logical states in the Fourier basis by $|\delta_i\rangle_L = \sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}|\delta_0\rangle_L$, where now the string $\xi'$ extends from top to bottom, and we claim that $|\delta_i\rangle_L = \sum_k q^{-ik}|k\rangle_L$. We check on a small example that this is consistent with Lemma~\ref{lem:fourier} in Appendix~\ref{app:fourier_patch}, and assert that it holds generally. As a result we call $\sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}$ a logical $Z^i$ gate, $Z^i_L$. \endproof Note that the logical space is independent of the size of the lattice, and depends only on the topology. The lattice size is relevant only for the probability of correcting errors. \section{Lattice surgery}\label{sec:lattice_surgery} If we have two patches with logical spaces $(\mathcal{H}_{vac})_1$ and $(\mathcal{H}_{vac})_2$ which are disjoint in space then we evidently have a combined logical space $\mathcal{H}_{vac} = (\mathcal{H}_{vac})_1 \otimes (\mathcal{H}_{vac})_2$. We may start with one patch and `split' it to convert it into two patches. \subsection{Splits} To perform a smooth split, take a patch and measure out a string of intermediate qudits from top to bottom in the $\{|\delta_i\rangle\}$ basis, like so: \[\tikzfig{split1}\] Regardless of the measurement results we get, we now have two disjoint patches next to each other. We can see the effect on the logical state by considering an $X$-type string operator which had been extending across a string $\xi$ from left to right on the original patch. Previously it had been ${}_xF^i_{\xi}$, say.
Now, let $\xi = \xi''\circ\xi'$, where $\xi'$ extends across the left patch after the split and $\xi''$ extends across the right one. Then ${}_xF^i_{\xi} = {}_xF^i_{\xi'}\circ{}_xF^i_{\xi''}$; our $X^i_L$ gate on the original logical space is taken to $X^i_L\otimes X^i_L$ on $(\mathcal{H}_{vac})_1 \otimes (\mathcal{H}_{vac})_2$. It is easy to see that this then gives the map: \[\Delta_s : |i\rangle_L\mapsto |i\rangle_L\otimes |i\rangle_L\] for $i\in\mathbb{Z}_d$. This is the same regardless of the measurement outcomes on the intermediate qudits we measured out. To perform a rough split, take a patch and measure out a string of qudits from left to right in the $\{|i\rangle\}$ basis. A similar analysis to before, but for $Z^i_L$ gates, shows that we have \[\Delta_r : |\delta_i\rangle_L\mapsto |\delta_i\rangle_L\otimes |\delta_i\rangle_L.\] \begin{remark}\rm We now note a subtlety: for both smooth and rough splits we induce a copy in the relevant bases, that is the comultiplication of $\mathbb{C}\mathbb{Z}_d$, rather than the comultiplication of $\mathbb{C}(\mathbb{Z}_d)$ for the rough splits. This is because we are placing both algebras on the same object, using the non-natural isomorphism $V\cong V^*$ for vector spaces $V$. Thus if we take the rough split map in the other basis we get \[\Delta_r : |i\rangle_L\mapsto \sum_h |h\rangle_L \otimes |i-h\rangle_L.\] This follows directly from Lemma~\ref{lem:fourier}. The fact that both algebras are placed on the same object allows us to relate the model to the ZX-calculus in Section~\ref{sec:zx}. \end{remark} \subsection{Merges} To perform a smooth merge, we do the reverse operation.
Start with two disjoint patches: \[\tikzfig{split2}\] and then initialise between them a string of intermediate qudits, each in the $\sum_i|i\rangle$ state, like so: \[\tikzfig{merge}\] Then measure the stabilisers at all sites on the now merged lattice. Assuming no errors have occurred, all the stabilisers are automatically satisfied everywhere except for the measurements which include the new edges. These measurements realise a measurement of $Z_L\otimes Z_L$ on the logical space $(\mathcal{H}_{vac})_1 \otimes (\mathcal{H}_{vac})_2$. We prove this in Appendix~\ref{app:merge}. With merges, the resultant logical state after merging is also dependent on the measurement outcomes. Depending on which `frame' we choose, we can have two different sets of possible maps from the smooth merge; see \cite{BH} for the easier qubit case. Here we choose to adopt the Pauli frame of the second patch. In the Fourier basis we thus have the Kraus operators: \[\nabla_s: \{|\delta_i\rangle_L\otimes|\delta_j\rangle_L\mapsto q^{in}|\delta_{i+j}\rangle_L\}_{n \in \{0,\dots,d-1\}}\] where $q^{in}$ is a factor introduced by the $Z_L\otimes Z_L$ measurement; we have $n \in \{0,\dots,d-1\}$ for the $d$ different possible measurement outcomes. If we only consider the $n=0$ case for a moment, one can see that this is the correct map using the $Z_L$ logical operators: \[\sum_j q^{ij}{}_zF^{\delta_j}_\xi\circ\sum_j q^{kj}{}_zF^{\delta_j}_\xi = \sum_j q^{(i+k)j}{}_zF^{\delta_j}_\xi\] from earlier, where $\xi$ extends from bottom to top on both original patches. Then when we merge the patches, we get the combined string operator.
In the other basis of logical states, the smooth merge gives: \[\nabla_s: \{|i\rangle_L\otimes|j\rangle_L\mapsto \delta_{i+n,j}|i+n\rangle_L\}_{n\in \{0,\dots,d-1\}}.\] \begin{remark}\rm It is common in categorical quantum mechanics to consider the so-called multiplicative fragment of quantum mechanics. In this fragment, we may post-select rather than just make measurements according to the traditional postulates. As such, there is a choice of post-selection such that $n=0$ and we acquire the multiplication of $\mathbb{C}\mathbb{Z}_d$ or $\mathbb{C}(\mathbb{Z}_d)$, depending on the basis. While physically we cannot post-select, this is a useful toy model in which algebraic notions may be more conveniently related to quantum mechanical processes. \end{remark} Considering the same convention of frame, a rough merge gives: \[\nabla_r: \{|i\rangle_L\otimes|j\rangle_L\mapsto q^{in}|i+j\rangle_L\}_{n\in \{0,\dots,d-1\}}\] by a similar argument, this time performing a measurement of $X_L\otimes X_L$ to merge patches at the top and bottom. \subsection{Units and deletion} While we are on the subject of measurements, we can delete a patch by measuring out every qudit associated to its lattice in the $Z$-basis. If we do so, we obtain the maps \[{\epsilon}_r: \{|i\rangle_L\mapsto \delta_{n,i}\}_{n\in \{0,\dots,d-1\}}.\] In the $n=0$ outcome this is precisely the counit of $\mathbb{C}(\mathbb{Z}_d)$. We check this in Appendix~\ref{app:counit}. If we instead measure out each qudit in the $X$-basis we get \[{\epsilon}_s: \{|i\rangle_L\mapsto q^{in}\}_{n\in \{0,\dots,d-1\}},\] where we see the counit of $\mathbb{C}\mathbb{Z}_d$. One can also clearly construct the units of $\mathbb{C}(\mathbb{Z}_d)$ and $\mathbb{C}\mathbb{Z}_d$, being $\eta_s: \sum_i|i\rangle_L$ and $\eta_r: |0\rangle_L$ respectively. The last remaining pieces of the puzzle are the antipode and Fourier transform on the logical space.
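Working directly on the logical spaces (forgetting the lattice), the split and merge maps are small enough to write down and check as matrices. The sketch below, in Python with numpy and with our own illustrative names, verifies that the $n=0$ merges undo the corresponding splits up to a dropped normalisation, that the smooth merge acts as $|\delta_i\rangle\otimes|\delta_j\rangle\mapsto|\delta_{i+j}\rangle$ in the Fourier basis, and that the smooth-merge Kraus operators form a complete measurement.

```python
import numpy as np

d = 4
q = np.exp(2j * np.pi / d)
E = np.eye(d)

def ket(i):
    """Computational basis state |i> of one logical qudit."""
    return E[i % d]

def delta(a):
    """Fourier-basis state |delta_a> = sum_k q^{-ak}|k> (unnormalised)."""
    return sum(q**(-a * k) * ket(k) for k in range(d))

# Smooth split: copy in the |i> basis
split_s = sum(np.outer(np.kron(ket(i), ket(i)), ket(i)) for i in range(d))
# Rough split in the |i> basis: |i> -> sum_h |h> x |i-h>
split_r = sum(np.outer(np.kron(ket(h), ket(i - h)), ket(i))
              for i in range(d) for h in range(d))

def merge_s(n):
    """Smooth merge, Kraus operator for outcome n: |i>|j> -> delta_{i+n,j}|i+n>."""
    return sum(np.outer(ket(i + n), np.kron(ket(i), ket(i + n))) for i in range(d))

def merge_r(n):
    """Rough merge, outcome n: |i>|j> -> q^{in}|i+j>."""
    return sum(q**(i * n) * np.outer(ket(i + j), np.kron(ket(i), ket(j)))
               for i in range(d) for j in range(d))

# For the n = 0 outcomes, merging undoes splitting; the rough pair picks up
# a factor of d, one of the normalisations dropped in the text
assert np.allclose(merge_s(0) @ split_s, np.eye(d))
assert np.allclose(merge_r(0) @ split_r, d * np.eye(d))

# Smooth merge multiplies in the Fourier basis: |delta_1>|delta_2> -> |delta_3>
assert np.allclose(merge_s(0) @ np.kron(delta(1), delta(2)), delta(3))

# The smooth-merge Kraus operators give a complete measurement
total = sum(np.conj(merge_s(n)).T @ merge_s(n) for n in range(d))
assert np.allclose(total, np.eye(d * d))
```

The same checks go through with the roles of the two bases exchanged, as the Fourier transform dictates.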
\subsection{Antipode} First we demonstrate how to map between the $|i\rangle_L$ and $|\delta_i\rangle_L$ states. If we apply a Fourier transform $H = \sum_{j,k}q^{-jk}|k\rangle\langle j|$ to a qudit in the state $|0\rangle$ we have $H|0\rangle = \sum_i |i\rangle$.\footnote{The $H$ stands for Hadamard, which is what the qubit Fourier transform is commonly called. The qudit Fourier transform is not a Hadamard matrix in general.} As $HX = Z^\dagger H$ (and $XH = HZ$) all $A(v)$ projectors are translated to $B(p)$ projectors by rotating the lattice to exchange vertices with faces \[\tikzfig{vertex_rotate}\] such that the $X, X^\dagger$ match up with $Z^\dagger, Z$ appropriately when considering the clockwise conventions from Def~\ref{def:lattice_acts}. This is just a conceptual rotation, and there does not need to be any \textit{physical} rotation in space. Thus we have \[H_L |0\rangle_L=(\bigotimes_E H) \prod_{v}A(v)\bigotimes_E |0\rangle = \prod_pB(p)\bigotimes_E \sum_i |i\rangle = |\delta_0\rangle_L\] where $H_L = \bigotimes_E H$ is the logical Fourier transform, and the lattice has been mapped: \[\tikzfig{rotate_patch}\] $H_L$ also takes $X$-type string operators to $Z$-type string operators in the quasiparticle basis but with a sign change, and thus we have \[H_L|i\rangle_L = H_LX^i_L|0\rangle_L=Z^{-i}_LH_L|0\rangle_L = \sum_{k}q^{-ik}|k\rangle_L=|\delta_i\rangle_L\] so it is genuinely a Fourier transform. Applying it twice gives \[H_LH_L|i\rangle_L = \sum_{k,l}q^{-ik}q^{-kl}|l\rangle_L = \sum_l \delta_{l,-i}|l\rangle_L = |-i\rangle_L\] where the lattice is now as though the whole patch has been rotated in space by $\pi$, by the same argument as before. This is evidently the \textit{logical antipode}, $S_L = H_LH_L$. This completes the set of fault-tolerant operations we may perform with $\mathbb{C}\mathbb{Z}_d$ lattice surgery.
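The single-qudit relations underlying this argument can be verified directly. The sketch below (Python with numpy, an illustration with our own names; $H$ is normalised by $1/\sqrt{d}$ so that it is unitary, a factor the text drops) checks that $HX = Z^\dagger H$, $XH = HZ$, and that applying the Fourier transform twice yields the negation permutation, i.e. the antipode.

```python
import numpy as np

d = 5
q = np.exp(2j * np.pi / d)

X = np.roll(np.eye(d), 1, axis=0)        # X|i> = |i+1>
Z = np.diag([q**i for i in range(d)])     # Z|i> = q^i |i>

# Qudit Fourier transform H = (1/sqrt d) sum_{j,k} q^{-jk} |k><j|
H = np.array([[q**(-j * k) for j in range(d)] for k in range(d)]) / np.sqrt(d)

assert np.allclose(np.conj(H).T @ H, np.eye(d))   # unitary
assert np.allclose(H @ X, np.conj(Z).T @ H)       # HX = Z^dagger H
assert np.allclose(X @ H, H @ Z)                  # XH = HZ

# Applying H twice gives the antipode S|i> = |-i>, the negation permutation
S = np.array([[1.0 if (i + j) % d == 0 else 0.0 for j in range(d)]
              for i in range(d)])
assert np.allclose(H @ H, S)
assert np.allclose(np.linalg.matrix_power(H, 4), np.eye(d))  # and S^2 = id
```

With the normalisation included, $H^2 = S$ exactly and $H^4 = \mathrm{id}$, mirroring the statement $S_L = H_LH_L$ above.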
One can create other states in a non-error-corrected manner and then perform state distillation to acquire the correct state with high probability, but this is beyond the scope of the paper and very similar to e.g. \cite{FSG}. \section{The ZX-calculus}\label{sec:zx} The ZX-calculus is based on Hopf-Frobenius algebras sitting on the same object. It imports ideas from monoidal category theory to justify its graphical formalism \cite{Sel}. See \cite{HV} for an introduction from the categorical point of view. Calculations may be performed by transforming diagrams into one another, and the calculus may be thought of as a tensor network theory equipped with rewriting rules. Here we present the syntax and semantics of ZX-diagrams for $\mathbb{C}\mathbb{Z}_d$. We are unconcerned with either universality or completeness \cite{Back}, and give only the necessary generators for our purposes; moreover, we adopt a slightly simplified convention. First, we have generators: \[\tikzfig{units}\] for elements, where the small red and green nodes are called `spiders', and diagrams flow from bottom to top.\footnote{Red and green are dark and light shades in greyscale.} The labels associated to a spider are called phases. Then we have the multiplication maps, \[\tikzfig{merge_spiders}\] comultiplication, \[\tikzfig{comult_spiders}\] maps to $\mathbb{C}$, \[\tikzfig{counits}\] and Fourier transform\footnote{The Hadamard symbol here makes it look like it is vertically reversible, i.e. $H^\dagger = H$, but it is not; this is just a notational flaw.} plus antipode: \[\tikzfig{hadamard}\] Now, these generators obey all the normal Hopf rules: associativity of multiplication and comultiplication, the unit and counit laws, and the bialgebra and antipode laws, but that is not all.
The ZX-calculus makes use of an old result by Pareigis \cite{Par}, which states that all finite-dimensional Hopf algebras on vector spaces automatically give two Frobenius structures, which in the present case correspond to the red and green spiders above. In this case, they are in fact so-called $\dagger$-special commutative Frobenius algebras ($\dagger$-SCFAs) \cite{CPV}. Such algebras have a normal form, such that any connected set of green or red spiders may be combined into a single green or red spider respectively, summing the phases \cite{CD}. This is called the \textit{spider theorem}. As an easy example, observe that we can define the $X^a$ gate in the ZX-calculus as: \[\tikzfig{X_gate_spider}\] and similarly for a $Z^b$ gate, \[\tikzfig{Z_gate_spider}.\] The Fourier transform then `changes colour' between green and red spiders. We show these axioms in Appendix~\ref{app:zx_axioms}. For a detailed exposition of the qudit ZX-calculus in greater generality see \cite{W1}. Now, one can immediately see that the generators are automatically (by virtue of the $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ structures) in bijection with the lattice surgery operations described previously. The bijection between this fragment of the ZX-calculus and lattice surgery was spotted by de Beaudrap and Horsman in the qubit case \cite{BH}; however, their presentation emphasises the Frobenius structures. The algebraic explanation for the lattice surgery properties is all in the Hopf structure: in summary, it is because the string operators are Hopf-like.\footnote{We formalise such operators as module maps in \cite{Cow}.} The Frobenius structures are still useful diagrammatic reasoning tools because of the spider theorem, and also because the two interacting Frobenius algebras correspond to the rough (red spider) and smooth (green spider) operations.
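The interaction of the two spiders can be illustrated concretely on the underlying linear maps. The sketch below (Python with numpy; the tensors and names are our own illustration of the semantics, not the diagrammatic syntax itself) represents the green comultiplication as the copy map and the red multiplication as addition, checks the bialgebra law relating them, and shows that composing a copy on one qudit with an addition onto another yields the controlled-$X$ map $|i\rangle|j\rangle\mapsto|i\rangle|i+j\rangle$.

```python
import numpy as np

d = 3
E = np.eye(d)

def ket(i):
    return E[i % d]

# Green spider piece: copy comultiplication in the computational basis
copy = sum(np.outer(np.kron(ket(i), ket(i)), ket(i)) for i in range(d))
# Red spider piece: addition multiplication |i>|j> -> |i+j>
add = sum(np.outer(ket(i + j), np.kron(ket(i), ket(j)))
          for i in range(d) for j in range(d))

# Bialgebra law: copy(a + b) = (add x add)(1 x swap x 1)(copy(a) x copy(b))
swap = sum(np.outer(np.kron(ket(j), ket(i)), np.kron(ket(i), ket(j)))
           for i in range(d) for j in range(d))
lhs = copy @ add
rhs = np.kron(add, add) @ np.kron(np.kron(E, swap), E) @ np.kron(copy, copy)
assert np.allclose(lhs, rhs)

# Copy on the first qudit, then add onto the second: the controlled-X gate
CX = sum(np.outer(np.kron(ket(i), ket(i + j)), np.kron(ket(i), ket(j)))
         for i in range(d) for j in range(d))
circuit = np.kron(E, add) @ np.kron(copy, E)
assert np.allclose(circuit, CX)
```

This composition is the linear-map shadow of a smooth split followed by a rough merge, with the $n=0$ outcomes assumed.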
There is a convenient 3-dimensional visualisation for this using `logical blocks', which we defer to Appendix~\ref{app:block}. There we also include Table~\ref{tbl:lat_oper}, which is a dictionary between lattice operations, ZX-diagrams and linear maps. \subsection{Gate synthesis}\label{sec:synth} Using the ZX-calculus we can thus design logical protocols in a straightforward manner. We have already implicitly shown a state injection protocol, being the spider merges for the $X^a$ and $Z^b$ gates above, but we can go further. A common gate in the circuit model is the controlled-$X$ ($CX$) gate. In qudit quantum computing this is defined as the map \[CX: |i\rangle \otimes |j\rangle \mapsto |i\rangle \otimes |i+j\rangle\] which in the ZX-calculus we might represent as, say, \[\tikzfig{cnot_spiders}.\] In the first diagram we perform a smooth split followed by a rough merge; in the second we do the opposite. In the third and fourth we first generate a maximally entangled state and then perform a smooth and rough merge on either side. The antipodes are necessary because of a minor complication with duals in the qudit ZX-calculus. Rewrites using the calculus show that these are equal, and conversions into linear maps do indeed yield the $CX$. We check this in Appendix~\ref{app:cx}. Note that we implicitly assumed the $n=0$ measurement outcomes for the merges, but we assert that in this case the protocol works deterministically by applying corrections. This is a generalisation of protocols specified in \cite{BH}, and the correction arguments are identical. We can also easily see that the lattice surgery operations are not universal, even with the addition of logical $X_L$ and $Z_L$ gates using string operators. All phases have integer values, and so we cannot even achieve all single-qudit gates in the 2nd level of the Clifford hierarchy fault-tolerantly.
For example, we cannot construct a $\sqrt{X}_L$ gate with the operations listed here. With this limitation in mind, in Appendix~\ref{app:generalisations} we discuss the prospects for expanding the scope of the model to other group algebras and to Hopf algebras more generally. \section{Conclusion} We have shown that lattice surgery is straightforward to generalise to qudits, assuming an underlying abelian group structure. The resultant diagrammatics which can be used to describe computation are elegant, concise and powerful. We currently do not know how this generalises further, nor what the connections are to quantum field theories. We aim to tackle these issues in future work. \begin{thebibliography}{99} \bibitem{DKLP}E. Dennis, A. Kitaev, A. Landahl and J. Preskill, Topological quantum memory, J. Math. Phys. 43, 4452-4505 (2002) \bibitem{BK1}S. Bravyi and A. Kitaev, Quantum codes on a lattice with boundary, arXiv:quant-ph/9811052 [quant-ph] \bibitem{Kit1}A. Kitaev, Fault-tolerant quantum computation by anyons, Annals Phys. 303 (2003) 2-30 \bibitem{HFDM}D. Horsman, A. G. Fowler, S. Devitt and R. Van Meter, Surface code quantum computing by lattice surgery, New J. Phys. 14 (2012) 123011 \bibitem{WFH}D. S. Wang, A. G. Fowler and L. C. L. Hollenberg, Quantum computing with nearest neighbor interactions and error rates over $1\%$, Phys. Rev. A (2011) 83:020302(R) \bibitem{FG}A. G. Fowler and C. Gidney, Low overhead quantum computation using lattice surgery, arXiv:1808.06709 [quant-ph] \bibitem{Lit1}D. Litinski, A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery, Quantum 3, 128 (2019) \bibitem{FMMC}A. G. Fowler, M. Mariantoni, J. M. Martinis, A. N. Cleland, Surface codes: Towards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012) \bibitem{CD}B. Coecke and R. Duncan, Interacting Quantum Observables: Categorical Algebra and Diagrammatics, New J. Phys. 13 (2011) \bibitem{BH}N. de Beaudrap and D.
Horsman, The ZX calculus is a language for surface code lattice surgery, Quantum 4, 218 (2020) \bibitem{GF}C. Gidney and A. G. Fowler, Efficient magic state factories with a catalyzed $|CCZ\rangle$ to $2|T\rangle$ transformation, Quantum 3, 135 (2019) \bibitem{BDHP}N. de Beaudrap, R. Duncan, D. Horsman and S. Perdrix, Pauli Fusion: a Computational Model to Realise Quantum Transformations from ZX Terms, EPTCS 318 (2020) pp. 85-105 \bibitem{W1}Q. Wang, Qufinite ZX-calculus: a unified framework of qudit ZX-calculi, arXiv:2104.06429 [quant-ph] \bibitem{Reut}D. Reutter, Frobenius algebras, Hopf algebras and 3-categories, Hopf Algebras in Kitaev's Quantum Double Models: Mathematical Connections from Gauge Theory to Topological Quantum Computing and Categorical Quantum Mechanics \bibitem{Bom}H. Bombin and M. A. Martin-Delgado, Family of non-Abelian Kitaev models on a lattice: Topological condensation and confinement, Phys. Rev. B 78 (2008) 115421 \bibitem{Cow}A. Cowtan and S. Majid, Quantum double aspects of surface code models, J. Math. Phys. 63, 042202 (2022) \bibitem{Got}D. Gottesman, Stabilizer Codes and Quantum Error Correction, Caltech Ph.D. Thesis (1997) \bibitem{BSW}S. Beigi, P. Shor and D. Whalen, The quantum double model with boundary: condensations and symmetries, Comm. Math. Phys. 306 (2011) 663--694 \bibitem{FSG}A. G. Fowler, A. M. Stephens and Peter Groszkowski, High-threshold universal quantum computation on the surface code, Phys. Rev. A 80 (2009) 052312 \bibitem{Sel}P. Selinger, A survey of graphical languages for monoidal categories, Springer Lecture Notes in Physics 813, pp. 289-355, 2011 \bibitem{Back}M. Backens, Completeness and the ZX-calculus, arXiv:1602.08954 [quant-ph] \bibitem{Par}B. Pareigis, When Hopf algebras are Frobenius algebras, J. Alg., Volume 18, Issue 4 (1971) 588-596 \bibitem{CPV}B. Coecke, D. Pavlovic and J. Vicary, A new description of orthogonal bases, Mathematical Structures in Computer Science (2012) \bibitem{HV}C. Heunen and J. 
Vicary, Categories for Quantum Theory: An Introduction, Oxford University Press (2019) DOI:10.1093/oso/9780198739623.001.0001 \bibitem{Logic}H. Bombin, C. Dawson, R. V. Mishmash, N. Nickerson, F. Pastawski, S. Roberts, Logical blocks for fault-tolerant topological quantum computation, arXiv:2112.12160 [quant-ph] \bibitem{PS2}P. Schauenburg, Computing Higher Frobenius-Schur Indicators in Fusion Categories Constructed from Inclusions of Finite Groups, Pacific J. Math. 280 (2016) 177--201 \bibitem{EGNO}P. Etingof, S. Gelaki, D. Nikshych and V. Ostrik, Tensor Categories, Mathematical Surveys and Monographs 205 (2010) \bibitem{Cow2}A. Cowtan and S. Majid, On the algebraic structure of boundaries in Hopf double models, in preparation \bibitem{Meu}C. Meusburger, Kitaev lattice models as a Hopf algebra gauge theory, Commun. Math. Phys. 353 (2017) 413--468 \bibitem{Chen}P. Chen, S. Cui and B. Yan, Ribbon operators in the generalized Kitaev quantum double model based on Hopf algebras, arXiv:2105.08202 [cond-mat.str-el] \bibitem{Os}V. Ostrik, Module categories, weak Hopf algebras and modular invariants, Transform. Groups 8 (2003) 177--206 \bibitem{Kir}B. Balsam and A. Kirillov, Jr., Kitaev's lattice model and Turaev-Viro TQFTs, arXiv:1206.2308 [math.QA] \bibitem{KS}A. Kapustin and N. Saulina, Topological boundary conditions in abelian Chern-Simons theory, Nucl. Phys. B 845:393-435 (2011) \bibitem{FS}J. Fuchs and C. Schweigert, Category theory for conformal boundary conditions, Fields Institute Communications 39 (2003) 25--71 \bibitem{Lauda}A. D. Lauda, Frobenius algebras and planar open string topological field theories, arXiv:math/0508349 [math.QA] \bibitem{Kock}J. Kock, Frobenius Algebras and 2-D Topological Quantum Field Theories (London Mathematical Society Student Texts), Cambridge: Cambridge University Press (2003) doi:10.1017/CBO9780511615443 \bibitem{Ma:book}S. Majid, {\em Foundations of Quantum Group Theory}, Cambridge University Press, (1995); paperback ed.
(2000) \end{thebibliography}

\appendix

\section{Circuits for measuring stabilisers}\label{app:circs}

Given the face \[\tikzfig{face_to_be_measured}\] we can perform a face measurement using the circuit \[\tikzfig{Z_stab_measurement}\] i.e. a measurement of the $Z\otimes Z\otimes Z^\dagger\otimes Z^\dagger$ operator. The $CX$ gates act as $|i\rangle\otimes |j\rangle\mapsto |i\rangle\otimes|i+j\rangle$, and the yellow boxes are Fourier transforms $H$. Note that $H^2 : |i\rangle\mapsto |-i\rangle$. Hence one can calculate that this circuit is the map \[|i\rangle\otimes|j\rangle\otimes|k\rangle\otimes|l\rangle\mapsto \delta_a(i+j-k-l)|i\rangle\otimes|j\rangle\otimes|k\rangle\otimes|l\rangle \] for some $a\in \mathbb{Z}_d$. For a vertex \[\tikzfig{vertex_to_be_measured}\] we have \[\tikzfig{X_stab_measurement}\] measuring the $X\otimes X\otimes X^\dagger\otimes X^\dagger$ operator. The $CX$ gates also act as $|\delta_i\rangle\otimes|\delta_j\rangle\mapsto |\delta_{i-j}\rangle\otimes|\delta_j\rangle$, motivating the exchanged control and target and the application of $H^2$ to the other qudits. Then we see that this circuit is the map \[|\delta_i\rangle\otimes|\delta_j\rangle\otimes|\delta_k\rangle\otimes|\delta_l\rangle\mapsto \delta_b(i+j-k-l)|\delta_i\rangle\otimes|\delta_j\rangle\otimes|\delta_k\rangle\otimes|\delta_l\rangle\] for some $b\in \mathbb{Z}_d$.
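The stated action of the face measurement can be corroborated numerically (a small sketch with illustrative assumptions: $d=3$, and the identification of the outcome $a$ with the eigenvalue-$q^a$ eigenspace of the measured operator): projecting onto each eigenspace of $Z\otimes Z\otimes Z^\dagger\otimes Z^\dagger$ selects exactly the basis states with $i+j-k-l \equiv a \bmod d$.

```python
import numpy as np

d = 3                       # illustrative qudit dimension
w = np.exp(2j * np.pi / d)  # primitive d-th root of unity q

Z = np.diag([w ** i for i in range(d)])
Zd = Z.conj().T             # Z dagger

# The face operator Z (x) Z (x) Z† (x) Z†
S = np.kron(np.kron(Z, Z), np.kron(Zd, Zd))

for a in range(d):
    # Projector onto the eigenvalue w^a eigenspace of S
    P = sum(w ** (-a * m) * np.linalg.matrix_power(S, m) for m in range(d)) / d
    # S is diagonal, so P is too; its diagonal should be the indicator
    # delta_a(i + j - k - l) on the computational basis
    for idx in range(d ** 4):
        i, j, k, l = (idx // d**3) % d, (idx // d**2) % d, (idx // d) % d, idx % d
        want = 1.0 if (i + j - k - l) % d == a else 0.0
        assert abs(P[idx, idx] - want) < 1e-9
```

The vertex measurement works identically after a Fourier transform on every qudit.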
\section{Fourier basis for patches}\label{app:fourier_patch}

Consider the small patch \[\tikzfig{small_patch}\] Now, $|i\rangle_L$ is the following state: \[\tikzfig{patch_0}\] where we have taken $|0\rangle_L$ and applied an $X$-type string from left to right. Now, consider $|\delta_0\rangle_L$: \[\tikzfig{patch_plus}\] where we performed a change of variables $g\mapsto -g$, $h\mapsto -h$. Now, $\delta_0(d+a-c-f-g+h)$ holds iff $d+a-g=i$ and $-f-c+h=-i$ for some $i\in \mathbb{Z}_d$. Thus we have $|\delta_0\rangle_L = \sum_i|i\rangle_L$. If we then apply a $Z$-type string operator from top to bottom in the quasiparticle basis, we see that $|\delta_j\rangle_L = \sum_iq^{-ij}|i\rangle_L$. One could then show that the bases are consistent under Fourier transform for all sizes of patch by induction, using the above as the base case.

\section{Proof of lattice merges}\label{app:merge}

We demonstrate the smooth merge on a small patch, but it is easy to see that the same method applies to arbitrarily large patches. We begin with two patches, in the $|\delta_g\rangle_L$ and $|\delta_h\rangle_L$ states respectively. \[\tikzfig{patch_delta_0}\] Then we initialise two new edges between them, each in the $|\delta_0\rangle$ state. \[\tikzfig{patch_delta_0_together}\] where we have exaggerated the length of the new edges for emphasis. Now if we apply stabiliser measurements at all points, we see that the only relevant ones are the face measurements including the new edges (the vertex measurements will still yield $A(v)$ unless a physical error has appeared there). The relevant measurements give us \[\delta_s(c-w-k);\quad \delta_r(k+i+d-c-j-x+w-l);\quad \delta_t(-d+l+x)\] for each new face, where $r,s,t\in \mathbb{Z}_d$.
By substitution this gives \[\delta_r(k+i+d-c-j-x+w-l) = \delta_r(-t-s+i-j) = \delta_{r+t+s}(i-j) = \delta_n(i-j) = \delta_i(n+j)\] where $n$ is the group product of $r,t,s$ in $\mathbb{Z}_d$. Computationally, $n$ is the important \textit{measurement outcome} of the merge. Plugging back into the patches we have \[\tikzfig{merge_outcome_patch}\] In the positive outcome case, i.e. when $s=r=t=0$, it is immediate that we have $|\delta_{g+h}\rangle_L$ on the combined patch. Otherwise, we can `fix' the internal additions of $s, t, n$ to the edges with string operators, or alternatively accommodate them into the Pauli frame in the same manner as described in e.g. \cite{BH}. Then we are left with $q^{ng}|\delta_{g+h}\rangle_L$, as stated. The Fourier transformed version of the above explains the rough merges as well, so we do not describe it explicitly.

\section{Proof of lattice counits}\label{app:counit}

We now show a `smooth counit' on a patch with state $|\delta_j\rangle_L$: \[\tikzfig{delta_j_patch}\] Measure out all edges in the $Z$ basis, giving \[\sum_{a,b,c,d,i}q^{ij}\delta_r(a)\delta_s(a-c)\delta_t(c)\delta_u(i+b-a)\delta_v(b-d)\delta_w(i+d-c)\delta_x(-b)\delta_y(-d)\] for some $r,\ldots,y\in \mathbb{Z}_d$. Then we observe that $\delta_u(i+b-a)=\delta_i(a-b-u)=\delta_i(n)$ for $n=a-b-u$, and by performing some other substitutions we arrive at \[q^{nj}\delta_v(y-x)\delta_w(n-y-t)\delta_s(n-u-x-t)\] Importantly, the only factor here which depends on the input state is $q^{nj}$. All the $\delta$-functions are merely conditions regarding which measurement outcomes are possible due to the lattice geometry. These will always be satisfied by our measurements, thus we have just \[|\delta_j\rangle_L\mapsto q^{nj}\] for $n\in\mathbb{Z}_d$, which in the other basis is $|i\rangle_L\mapsto \delta_{n,i}$ as stated.
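The $n=0$ case of the merge rule can also be confirmed directly at the level of abstract linear maps (a minimal numerical sketch, assuming the Fourier convention $|\delta_g\rangle = \sum_i q^{-ig}|i\rangle$ from Appendix~\ref{app:fourier_patch}): the smooth merge $|i\rangle\otimes|j\rangle\mapsto\delta_{i,j}|i\rangle$ sends $|\delta_g\rangle\otimes|\delta_h\rangle$ to $|\delta_{g+h}\rangle$.

```python
import numpy as np

d = 4  # illustrative qudit dimension
q = np.exp(2j * np.pi / d)

def delta_ket(g):
    # |delta_g> = sum_i q^{-ig} |i>, the convention of the Fourier basis appendix
    return np.array([q ** (-(i * g)) for i in range(d)])

# Smooth merge with the n = 0 outcome: |i> (x) |j> -> delta_{i,j} |i>
merge = np.zeros((d, d * d))
for i in range(d):
    merge[i, i * d + i] = 1.0

# In the Fourier basis the merge is addition of labels
for g in range(d):
    for h in range(d):
        out = merge @ np.kron(delta_ket(g), delta_ket(h))
        assert np.allclose(out, delta_ket(g + h))
```

The nonzero-outcome phases $q^{ng}$ depend on the Pauli-frame conventions, so we only check the deterministic case here.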
The rough counit follows similarly.

\section{Qudit ZX-calculus axioms}\label{app:zx_axioms}

We show some relevant axioms for the fragment of qudit ZX-calculus which interests us. These simply coincide with the rules from Hopf and Frobenius structures, along with the Fourier transform. We ignore the more general phase group \cite{W1}, and also leave out non-zero scalars. First, we define a spider \[\tikzfig{spider_theorem}\] which is well-defined due to associativity and specialty of the underlying Frobenius structure. The spider is also invariant under exchange of input wires with each other, and the same for outputs, as the Frobenius algebra is (co)commutative. A phaseless spider with 1 input and 1 output is the identity: \[\tikzfig{phaseless_spider}\] Now, we can define duality morphisms on the object $\mathbb{C}\mathbb{Z}_d$, which we call a `cup' and similarly a `cap': \[\tikzfig{cup}\] which correspond to: \[\tikzfig{cup_rules}\] for the cup, and the vertically flipped version for the cap. The antipodes included here are responsible for the antipodes in the $CX$ gate in Section~\ref{sec:synth}. Then we have the Fourier exchange rule: \[\tikzfig{fourier_exchange}\] which encodes Lemma~\ref{lem:fourier} graphically. Then we have the bialgebra rules \[\tikzfig{bialgebra_rules}\] and rules pertaining to the antipode: \[\tikzfig{antipode_axioms}\] This is far from an exhaustive set of rules.

\section{The logical block depiction}\label{app:block}

The lattice at a given time is drawn with a red line for a smooth boundary and green for a rough boundary: \[\includegraphics[width=0.35\textwidth]{colour_patch}\] where the surface is shaded blue for clarity. A block extending upwards represents the transformation over time. For example: \[\includegraphics[width=0.1\textwidth]{id_block}\] We call this the `logical block' depiction, following similar work in \cite{Logic}.
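One of the bialgebra rules of Appendix~\ref{app:zx_axioms} can be verified concretely on the linear maps realised by lattice surgery (a hypothetical verification sketch, with $d=3$ chosen for illustration): the smooth split (copying) and the rough merge (addition) satisfy the usual bialgebra compatibility of $\mathbb{C}\mathbb{Z}_d$.

```python
import numpy as np

d = 3
I = np.eye(d)

# Rough merge: |i> (x) |j> -> |i + j mod d>
merge = np.zeros((d, d * d))
for i in range(d):
    for j in range(d):
        merge[(i + j) % d, i * d + j] = 1.0

# Smooth split (copy): |i> -> |i> (x) |i>
split = np.zeros((d * d, d))
for i in range(d):
    split[i * d + i, i] = 1.0

# Swap of two qudits
swap = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        swap[b * d + a, a * d + b] = 1.0

# Bialgebra rule: split o merge = (merge (x) merge) o (id (x) swap (x) id) o (split (x) split)
lhs = split @ merge
rhs = np.kron(merge, merge) @ np.kron(np.kron(I, swap), I) @ np.kron(split, split)
assert np.array_equal(lhs, rhs)
```

This is an exact equality of matrices, with no scalar discrepancy, matching the convention of dropping non-zero scalars in the calculus.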
Table~\ref{tbl:lat_oper} is an explicit dictionary between lattice surgery operations, qudit ZX-calculus and linear maps in the multiplicative fragment, i.e. the $n=0$ measurement outcomes. We choose to use the multiplicative fragment to highlight the visual connection between the columns. We see that red and green spiders correspond to rough and smooth operations respectively. \begin{table} \centering \begin{tabular}{ | m{3cm} | c | m{2cm} | m{3cm} | } \hline Lattice operation & Logical block & ZX-diagram & Linear map\\ \hline smooth unit & \begin{minipage}{.05\textwidth} \includegraphics[width=\linewidth]{smooth_unit_block} \end{minipage} & \[\tikzfig{smooth_unit_ZX}\] & \[\sum_i|i\rangle\] \\ \hline smooth split & \begin{minipage}{.1\textwidth} \includegraphics[width=\linewidth]{smooth_split_block} \end{minipage} & \[\tikzfig{smooth_split_ZX}\] & \[|i\rangle\mapsto |i\rangle\otimes|i\rangle\] \\ \hline smooth merge & \begin{minipage}{.1\textwidth} \includegraphics[width=\linewidth]{smooth_merge_block} \end{minipage} & \[\tikzfig{smooth_merge_ZX}\] & \[|i\rangle\otimes|j\rangle\mapsto \delta_{i,j}|i\rangle\] \\ \hline smooth counit & \begin{minipage}{.05\textwidth} \includegraphics[width=\linewidth]{smooth_counit_block} \end{minipage} & \[\tikzfig{smooth_counit_ZX}\] & \[|i\rangle\mapsto 1\] \\ \hline rough unit & \begin{minipage}{.05\textwidth} \includegraphics[width=\linewidth]{rough_unit_block} \end{minipage} & \[\tikzfig{rough_unit_ZX}\] & \[|0\rangle\] \\ \hline rough split & \begin{minipage}{.1\textwidth} \includegraphics[width=\linewidth]{rough_split_block} \end{minipage} & \[\tikzfig{rough_split_ZX}\] & \[|i\rangle\mapsto \sum_h|h\rangle\otimes |i-h\rangle\] \\ \hline rough merge & \begin{minipage}{.1\textwidth} \includegraphics[width=\linewidth]{rough_merge_block} \end{minipage} & \[\tikzfig{rough_merge_ZX}\] &
\[|i\rangle\otimes|j\rangle\mapsto |i+j\rangle\] \\ \hline rough counit & \begin{minipage}{.05\textwidth} \includegraphics[width=\linewidth]{rough_counit_block} \end{minipage} & \[\tikzfig{rough_counit_ZX}\] & \[|i\rangle\mapsto\delta_{i,0}\] \\ \hline rotation & \begin{minipage}{.05\textwidth} \includegraphics[width=\linewidth]{fourier_block} \end{minipage} & \[\tikzfig{fourier_spider}\] & \[|i\rangle\mapsto\sum_jq^{-ij}|j\rangle\] \\ \hline \end{tabular} \caption{Dictionary of lattice surgery operations in the multiplicative fragment.}\label{tbl:lat_oper} \end{table}

We have no new results or proofs in this section, but we would like to discuss the diagrams of logical blocks. These sorts of diagrams for lattice surgery have been used in an engineering setting to compile quantum circuits to lattice surgery \cite{GF,Logic}. To go from the cubes shown there to the tubes which we show here, we merely relax the discretisation of space and time somewhat to expose the relationship with algebra. This relationship is relevant because such diagrams have appeared in a seemingly quite different context. It is well known that the category of `2-dimensional thick tangles', \textbf{2Thick}, is monoidally equivalent to the category \textbf{Frob} freely generated by a noncommutative Frobenius algebra \cite{Lauda}. This should be unsurprising to those familiar with the notion of a `pair of pants' algebra. We say that \textbf{2Thick} is a \textit{presentation} of \textbf{Frob}. Similarly, the symmetric monoidal category \textbf{2Cob} of (diffeomorphism classes of) 2-dimensional cobordisms between (disjoint unions of) circles is a presentation of \textbf{ComFrob}, the category freely generated by a commutative Frobenius algebra \cite{Kock}. This fact is important for topological quantum field theories (TQFTs).
One can define an $n$-dimensional TQFT as a symmetric monoidal functor $\textbf{nCob}\rightarrow \textbf{Vect}$, where \textbf{Vect} is the category of finite-dimensional vector spaces. The key point is that the functor takes (diffeomorphism classes of) manifolds as inputs and outputs linear maps between vector spaces, which are by definition manifold invariants. One can see that 2D TQFTs are in bijection with commutative Frobenius algebras in \textbf{Vect}. In \cite{Reut}, Reutter gives a slightly different monoidal category, which we will call \textbf{2Block}. It has as objects disjoint unions of squares, with the same shading of sides as those in the logical block diagrams above. The morphisms are classes of surfaces between the squares, such that the borders between the surfaces match up with the edges of the squares at the source and target objects, and the surface colours are consistent with those of the squares' sides. While the morphisms are obviously quotiented by equivalence of surfaces up to border-preserving diffeomorphism, Reutter quotients by `saddle-invertibility' as well, which is not a rule one can acquire through topological moves alone, as it involves the closing and opening of holes. Reutter conjectures that $\textbf{2Block}\simeq \textbf{uHopf}$, where \textbf{uHopf} is the category freely generated by a unimodular Hopf algebra.\footnote{In \textbf{Vect}, unimodularity is typically defined using integrals \cite{Ma:book}. In this more abstract setting it is defined by some axioms on dualities.} While we do not know enough about topology or geometry to prove (or disprove) this conjecture, we suspect one route is to consider Morse functions and classify the diffeomorphism classes near critical points. This is similar to one proof of $\textbf{2Cob}\simeq \textbf{ComFrob}$ \cite{Kock}.
For the reader's convenience, we now reproduce a handful of the equivalences under topological deformation which motivate this conjecture. We have the axioms of a Frobenius algebra, \[\includegraphics[width=0.3\linewidth]{unit_mult_block}\] \[\includegraphics[width=0.3\linewidth]{associativity_block}\] \[\includegraphics[width=0.3\linewidth]{frob_block}\] and the same for red faces. These are just widened versions of the diagrams in \textbf{2Thick}. Then one can see the interpretation of a unimodular Hopf algebra as two interacting Frobenius algebras. We start with two Frobenius algebras and glue them together in such a way that they give the bialgebra and antipode axioms. The main bialgebra rule is \[\includegraphics[width=0.3\linewidth]{bialgebra_block}\] where we require saddle invertibility to close up a hole in the middle. This is also required for showing that the comultiplication is a unit map and so on. Given all of these deformations and those involving the antipode, which is a twist by $\pi$, one can see that they define a functor $\textbf{uHopf}\rightarrow\textbf{2Block}$; the hard part is proving that this is an equivalence. Now, Reutter also draws a comparison with representation theory and tensor category theory. It is striking that, given the unimodular Hopf algebra $\mathbb{C}\mathbb{Z}_d$, we can create a logical space on a patch isomorphic to the vector space of $\mathbb{C}\mathbb{Z}_d$ itself, and the logical operations precisely coincide with the linear maps defined by the algebra. We conjecture that lattice surgery is the `computational implementation' of this presentation of unimodular Hopf algebras, in the same way that the logical space of the Kitaev model on a closed orientable manifold $\mathcal{M}$ is isomorphic to the vector space $F(\mathcal{M})$ in the image of a Dijkgraaf-Witten theory $F : \textbf{2Cob}\rightarrow \textbf{Vect}$ when given the same manifold $\mathcal{M}$ \cite[Thm~3.2]{Cow}.
It remains to be seen whether this extends further than just abelian group algebras.

\section{Logical $CX$ gate}\label{app:cx}

Here we check the correctness of the $CX$ gate implementations from Section~\ref{sec:synth}. First, observe that the diagram: \[\tikzfig{left_cnot}\] yields the linear map: \[|i\rangle\otimes|j\rangle\mapsto |i\rangle\otimes |i\rangle \otimes |j\rangle \mapsto |i\rangle\otimes |i+j\rangle\] where we have considered the diagram piecemeal from bottom to top, indicated by the dashed lines. Then we can perform a sequence of rewrites between all four diagrams, labelled below: \[\tikzfig{rewrite_cnot1}\] where at each stage we have either used the spider rule, inserted duals, or swapped between duals and spiders; see Appendix~\ref{app:zx_axioms}.

\section{Generalisations and Hopf algebras}\label{app:generalisations}

While we have shown that lattice surgery works for qudits of arbitrary dimension, we emphasise that the algebraic structures involved are very simple so far. The lattice model in the bulk can be generalised significantly: first, one can replace $\mathbb{C}\mathbb{Z}_d$ with another finite abelian group algebra. As all finite abelian groups decompose into direct sums of cyclic groups, this case follows immediately from the work herein and is uninteresting. At the second level up, we can replace it with an arbitrary finite group algebra $\mathbb{C} G$. At this level several assumptions break down: \begin{itemize} \item $\mathbb{C} G$ still has a dual function algebra $\mathbb{C}(G)$, but the Fourier transform no longer coincides with Pontryagin duality, and the two algebras will no longer be isomorphic in general. One can still define a Fourier transform in the sense that it translates between convolution and multiplication, but in this case the Fourier transform is the Peter-Weyl isomorphism, i.e.
a bimodule isomorphism between $\mathbb{C} G$ and a direct sum of matrix algebras labelled by the irreps of $G$. \item The $\mathbb{C} G$ lattice model can no longer be described using string operators, and these must be promoted to ribbon operators \cite{Kit1}. This is because the lattice model is based on the Drinfeld double $D(G) = \mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$, where the associated action is conjugation. In the abelian case conjugation acts trivially and so we have $D(\mathbb{Z}_d) = \mathbb{C}(\mathbb{Z}_d) \otimes \mathbb{C}\mathbb{Z}_d$: the double splits into independent algebras, which give the $X$-type and $Z$-type string operators respectively. \item There are still canonical choices of rough and smooth boundary, labelled by the subgroups $K = \{e\}$ and $K = G$ respectively. Similarly, we still have well-defined measurements, using representations of $\mathbb{C} G$ and $\mathbb{C}(G)$ for vertices and faces. However, the algebra of ribbon operators which are undetectable at the boundary, and hence the logical operations on a patch, becomes significantly more complicated; see \cite{PS2} for the underlying module theory. Preliminary calculations indicate that they are labelled by conjugacy classes (i.e. irreps) of $G$, and it is not even obvious that ${\rm dim}(\mathcal{H}_{vac}) = |G|$ as in the abelian case. This is quite an obstruction to calculating the logical maps corresponding to lattice surgery operations. \end{itemize} Of course, the Kitaev model can be generalised much further still. The third level would be arbitrary finite-dimensional Hopf $\mathbb{C}^*$-algebras. At this level even the calculations in the bulk are tricky, and many features were only recently resolved \cite{Cow,Meu,Chen}. Understanding lattice surgery in these models seems a formidable task. We aim to at least make some progress on this in upcoming work \cite{Cow2}.
The fourth (and highest) level is the maximal generality: weak Hopf $\mathbb{C}^*$-algebras, which are in bijection (up to an equivalence) with so-called \textit{unitary fusion categories} \cite{EGNO}. Even at this extreme generality, there are glimpses of hope. There are two canonical choices of boundaries, given by the trivial (rough) and regular (smooth) module categories \cite{Os}, and we speculate that calculating some basic features like ${\rm dim}(\mathcal{H}_{vac})$ of a patch could be done using techniques from topological quantum field theory (TQFT). At this level of generality, the connections with TQFT become more tantalising. The parallels between topological quantum computing in the bulk and TQFTs are well known, see e.g. \cite{Kir}, but lattice surgery introduces discontinuous deformations in the manner of geometric surgery. While boundaries of TQFTs are well studied \cite{KS,FS}, we do not know whether TQFT theorists study the relation between geometric surgery on manifolds and linear algebra in the same manner as they do for, say, diffeomorphism classes of cobordisms. \end{document}
\begin{document} \begin{abstract} We investigate the number of straight lines contained in a K3 quartic surface \(X\) defined over an algebraically closed field of characteristic~\(3\). We prove that if \(X\) contains \(112\) lines, then \(X\) is projectively equivalent to the Fermat quartic surface; otherwise, \(X\) contains at most \(67\) lines. We improve this bound to \(58\) if \(X\) contains a star (ie four distinct lines intersecting at a smooth point of \(X\)). Explicit equations of three \(1\)-dimensional families of smooth quartic surfaces with \(58\) lines, and of a quartic surface with \(8\) singular points and \(48\) lines are provided. \end{abstract} \title{Lines on K3 quartic surfaces in characteristic 3} \section{Introduction} A quartic surface in \(\IP^3\) whose singularities are isolated rational double points is called a \emph{K3 quartic surface}. Together with \cite{veniani-char2} and \cite{veniani1}, this paper forms a trilogy dedicated to the study of the number of straight lines lying on a K3 quartic surface \(X\), denoted by \(|{\Fn X}|\). The field \(\IK\) over which \(X\) is defined is always supposed to be algebraically closed. The aim of this paper is to investigate the case \(\Char \IK = 3\). \subsection{Background} Unlike smooth cubic surfaces, which always contain 27 lines, a dimension count shows that a general smooth quartic surface does not contain any line at all. One is therefore led to study the maximal number of lines that a quartic surface can contain. Historically, the main focus has been on smooth complex quartic surfaces, which are an example of algebraic K3 surfaces. In 1882 F. Schur~\cite{schur} discovered a surface with 64 lines that now carries his name, given by the following equation: \[ x^4 - xy^3 = w^4 - wz^3. \] The fact that \(|{\Fn X}| \leq 64\) for a smooth complex quartic surface \(X\) was proven by B. Segre in 1943~\cite{segre}. 
Around 70 years later, though, Rams and Schütt~\cite{rams.schuett:64.lines} discovered a flaw in Segre's argument (further investigated in a spin-off paper \cite{rams.schuett:quartics.lines.2nd.kind}) and fixed his proof, extending Segre's theorem to the case \(\Char \IK \neq 2,3\). Approximately at the same time, Degtyarev, Itenberg and Sertöz, spurred by a remark of Barth~\cite{barth}, gave another solution with an algebraic approach. Their work resulted in a complete classification of large configurations of lines on smooth complex quartic surfaces up to projective equivalence~\cite{Degtyarev.Itenberg.Sertoz}. There are \(8\) possible configurations with more than \(52\) lines, corresponding to \(10\) distinct surfaces. A list of explicit equations for these surfaces, initiated by Schur \cite{schur}, Rams and Schütt~\cite{rams.schuett:64.lines}, Degtyarev, Itenberg and Sertöz~\cite{Degtyarev.Itenberg.Sertoz}, and Shimada and Shioda~\cite{shimada-shioda}, was completed by the author \cite{veniani:symmetries.equations}. If \(\Char \IK \neq 2,3\), the inequality \(|{\Fn X}|\leq 64\) holds more generally for a (possibly non-smooth) K3 quartic surface \(X\) \cite{veniani1}. The bound of \(64\) lines is not known to be sharp for non-smooth K3 quartic surfaces. To the author's knowledge, the record for the number of lines contained in a non-smooth K3 quartic surface in the case \(\Char \IK = 0\) (respectively \(\Char \IK > 3\)) is attained by a surface with \(52\) lines (respectively by its reduction modulo \(5\) with \(56\) lines), whose equation is published in this paper for the first time (see \autoref{ex:52.lines.2.sing.pts}). If \(\Char \IK = 2\), a sharp bound is known for both smooth and non-smooth K3 surfaces. More precisely, if \(X\) is smooth, then \(|{\Fn X}| \leq 60\). The bound was found by Degtyarev~\cite{degtyarev:lines.supersingular} and a surface attaining the bound was found by Rams and Schütt~\cite{rams.schuett:at.most.64.lines.char.2}. 
More generally, if \(X\) is a K3 quartic surface, \(|{\Fn X}| \leq 68\) and if equality holds, then \(X\) is not(!) smooth and projectively equivalent to a member of a \(1\)-dimensional family discovered by Rams and Schütt \cite{veniani-char2}. \subsection{Principal results} The Fermat quartic surface, which is defined by \begin{equation} \label{eq:fermat} x^4+y^4+w^4+z^4 = 0, \end{equation} contains \(48\) lines over \(\IC\). Its reduction modulo \(3\) is a smooth, supersingular (ie of Picard rank~\(22\)) surface and contains \(112\) lines. It is the smooth quartic surface with the highest number of lines in characteristic~\(3\), a result proven independently by Rams and Schütt \cite{RS-char3} and Degtyarev \cite{degtyarev:lines.supersingular}. The main result of this paper is the following generalization. \begin{theorem}[see \autoref{subsec:proof.thm:char3}] \label{thm:char3} Assume \(\Char \IK = 3\). If a K3 quartic surface \(X\) is not projectively equivalent to the Fermat surface, then \(|{\Fn X}| \leq 67\). \end{theorem} We refine \autoref{thm:char3} in the case that \(X\) contains four distinct lines meeting in a smooth point, a configuration which we call a \emph{star}. This configuration is particularly relevant in characteristic~\(3\). For instance, each line in the Fermat surface is contained in \(10\) stars, and if two lines on the Fermat surface intersect, then they belong to the same star. \begin{addendum}[see \autoref{prop:star}] \label{add:char3} Assume \(\Char \IK = 3\). If a K3 quartic surface \(X\) is not projectively equivalent to the Fermat surface and contains a star, then \(|{\Fn X}| \leq 58\). \end{addendum} For smooth surfaces more precise results were found by Degtyarev~\cite{degtyarev:lines.supersingular}, distinguishing between supersingular and non-supersingular surfaces. A smooth supersingular quartic surface can contain \(112\), \(58\) or at most \(52\) lines (\cite[Theorem 1.2]{degtyarev:lines.supersingular}). 
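The count of \(112\) lines on the Fermat quartic in characteristic \(3\) lends itself to a direct machine check (a brute-force sketch, extraneous to the methods of this paper; it uses without proof the standard fact that in characteristic \(3\) the Fermat quartic is the Hermitian surface, all of whose lines are defined over \(\mathbb{F}_9\), modelled here as \(\mathbb{F}_3[t]/(t^2+1)\)):

```python
from itertools import product, combinations

# Elements of F_9 encoded as pairs (a, b) meaning a + b*t, with t^2 = -1 = 2
F9 = [(a, b) for a in range(3) for b in range(3)]
ZERO, ONE = (0, 0), (1, 0)

def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + 2 * b * d) % 3, (a * d + b * c) % 3)

def inv(x):  # x^{-1} = x^7, since the multiplicative group has order 8
    r = ONE
    for _ in range(7):
        r = mul(r, x)
    return r

def pow4(x):
    x2 = mul(x, x)
    return mul(x2, x2)

def normalize(pt):  # scale so the first nonzero coordinate is 1
    for c in pt:
        if c != ZERO:
            s = inv(c)
            return tuple(mul(s, c2) for c2 in pt)
    return None

# Points of P^3(F_9) on the Fermat quartic x^4 + y^4 + z^4 + w^4 = 0
points = set()
for pt in product(F9, repeat=4):
    npt = normalize(pt)
    if npt is not None:
        s = ZERO
        for c in npt:
            s = add(s, pow4(c))
        if s == ZERO:
            points.add(npt)

# The line through p, q consists of q and p + mu*q for mu in F_9
def line(p, q):
    pts = {normalize(q)}
    for mu in F9:
        pts.add(normalize(tuple(add(p[i], mul(mu, q[i])) for i in range(4))))
    return frozenset(pts)

lines = set()
for p, q in combinations(sorted(points), 2):
    L = line(p, q)
    if L <= points:
        lines.add(L)

print(len(points), len(lines))  # expected: 280 points and 112 lines
```

Each of the \(112\) lines is detected \(\binom{10}{2}=45\) times and deduplicated as a set of its \(10\) rational points.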
There exist three different possible configurations of \(58\) lines. In \autoref{ex:58-1st-conf}, \autoref{ex:58-2nd-conf} and \autoref{ex:58-3rd-conf}, explicit equations of the three \(1\)-dimensional families of smooth quartic surfaces containing these configurations are provided. All of them contain a star. The first two families were discovered independently by Degtyarev (\cite[Proposition~8.13 and Proposition~8.14]{degtyarev:lines.supersingular}). To the author's knowledge, the third one is new. \autoref{add:char3} and the known results about smooth surfaces provide evidence for the following conjecture. \begin{conjecture} Assume \(\Char \IK = 3\). If a K3 quartic surface \(X\) is not projectively equivalent to the Fermat surface, then \(|{\Fn X}| \leq 58\). \end{conjecture} Neither the bound of \(67\) lines in \autoref{thm:char3} nor the bound of \(58\) lines in \autoref{add:char3} is known to be sharp for non-smooth K3 quartic surfaces. In \autoref{ex:shimada-shioda} we show the existence of a non-smooth K3 quartic surface with \(48\) lines in characteristic~\(3\), the highest number known to the author. Finally, in \autoref{ex:52.lines.2.sing.pts} we exhibit a non-smooth K3 quartic surface with \(52\) lines in characteristic~\(0\) and \(56\) lines in characteristic~\(5\). \subsection{Contents of the paper} The main technique used in this paper is the study of the genus~\(1\) fibration induced by a line on a K3 quartic surface. A genus \(1\) fibration can be elliptic or quasi-elliptic, the latter appearing only in characteristics \(2\) and \(3\) by a theorem of Tate~\cite{tate}. Heuristically, quasi-elliptic fibrations can carry a higher number of fiber components than elliptic fibrations, thus allowing for a higher number of lines on \(X\). This technique is explained in \autoref{sec:K3-quartic}, where the notation is also fixed.
The technical core of the paper is contained in \autoref{sec:char3-valency}, where several bounds on the possible valencies of a line (see \autoref{def:valency}) are given. The proofs of the main results are then carried out in~\autoref{sec:char3-proof}. Finally, various examples of K3 quartic surfaces with many lines are discussed in \autoref{sec:char3-examples}. \subsection*{Acknowledgments} I am particularly indebted to Sławomir Rams and Matthias Schütt for suggesting the problem of studying the number of lines on K3 quartic surfaces. I also wish to thank Fabio Bernasconi, Alex Degtyarev and Víctor González-Alonso for many stimulating discussions. \section{Preliminaries} \label{sec:K3-quartic} In this section we fix our notation and recall some general facts about K3 quartic surfaces (see \cite[§2]{veniani-char2} for more details). \subsection{Lines and singular points} Starting from here and throughout the paper, \(X\) denotes a K3 quartic surface defined over a field \(\IK\) of characteristic \(p \geq 0\), \(\Sing(X)\) is the set of singular points on \(X\) (which can be of type~\(\bA_n\), \(\bD_n\) or \(\bE_n\)), \(\rho : Z \rightarrow X\) is the minimal resolution of \(X\) (\(Z\) is a K3 surface), \(H\) is a hyperplane divisor in the linear system defined by \(\rho^*(\mathcal O_{X}(1))\), \(\ell\) is a line on \(X\), \(L\) is the strict transform of \(\ell\) on~\(Z\) and \(|{\Fn X}|\) is the number of lines lying on \(X\). The pencil of planes \(\{\Pi_t\}_{t \in \IP^1}\) containing \(\ell\) induces a genus~1 fibration \(\pi: Z \rightarrow \IP^1\). A line \(\ell\) is \emph{elliptic} (respectively \emph{quasi-elliptic}) if it induces an elliptic (respectively \emph{quasi-elliptic}) fibration. For \(t \in \IP^1\), the \emph{residual cubic} \(c_t\) is the union of the irreducible components of \(X \cap \Pi_t\) different from \(\ell\). A fiber \(F_t\) of \(\pi\) is the pullback through \(\rho\) of the residual cubic \(c_t\) contained in \(X \cap \Pi_t\). 
A general fiber of \(\pi\) is denoted by \(F\). The restriction of \(\pi\) to \(L\) is again denoted by \(\pi\). The \emph{degree} of \(\ell\) is the degree of the morphism \(\pi: L \rightarrow \IP^1\). (If \(\pi: L \rightarrow \IP^1\) is constant, we say that \(\ell\) has degree \(0\).) The line \(\ell\) is \emph{separable} (respectively \emph{inseparable}) if \(\pi:L\rightarrow \IP^1\) is separable (respectively inseparable). The \emph{singularity} of \(\ell\) is the cardinality of \(\Sing(X) \cap \ell\). A point \(P\) on a separable line \(\ell\) is a point \emph{of ramification \(n_l\)} if the corresponding point on \(L\) has ramification index \(n\) and \(\length(\Omega_{L/\IP^1}) = l\). If \(\Char \IK\) does not divide \(n\), then \(l = n-1\) and can be omitted, whereas if \(\Char \IK\) divides \(n\), then \(l \geq n\). We say that \(\ell\) has ramification \((n_l)^r(n'_{l'})^{r'}\ldots\) if it has \(r\) points of ramification \(n_l\) and so on. \begin{lemma}[{\cite[Proposition 1.5]{veniani1}}] \label{lem:degree.singularity} If \(\ell \subset X\) has degree~\(d\) and singularity \(s\), then \(d \leq 3 - s\), and \(d = 3\) if and only if \(s = 0\). \end{lemma} \begin{lemma}[{\cite[Lemma 2.7]{veniani1}}] \label{lemma:linesthroughsingularpoint} If \(P \in \Sing(X)\), then there are at most \(8\) lines on \(X\) passing through \(P\). \qed \end{lemma} \subsection{Valency} Let \(\Pi \subset \IP^3\) be a plane such that \(X \cap \Pi\) splits into four lines \(\ell_1,\ldots,\ell_4\) (not necessarily distinct). If a line \(\ell' \subset X\) not lying on \(\Pi\) meets two or more distinct lines \(\ell_i\), then their point of intersection must be a singular point of the surface. 
It follows that \(|{\Fn X}|\) is bounded by \begin{equation} \label{eq:FnX} \begin{split} |{\Fn X}| &\leq \#\{\text{lines in \(\Pi\)}\} \\ &\quad + \#\{\text{lines not in \(\Pi\) going through \(\Sing(X) \cap \Pi\)} \} \\ &\quad + \sum_{i = 1}^4\#\{\text{lines not in \(\Pi\) meeting \(\ell_i\) in a smooth point}\}. \end{split} \end{equation} \begin{definition} \label{def:valency} The \emph{valency} of \(\ell\), denoted by \(v(\ell)\), is the number of lines \(\ell' \subset X\), \(\ell' \neq \ell\), such that \(\ell' \cap \ell \notin \Sing(X)\). The \emph{local valency} of \(\ell\) at \(t \in \IP^1\), denoted by \(v_t(\ell)\), is the number of lines \(\ell'\subset c_t\), \(\ell'\neq \ell\), such that \(\ell' \cap \ell \notin \Sing(X)\). \end{definition} It holds that \begin{equation} \label{eq:v=sum-vF} v(\ell) = \sum_{t \in \IP^1} v_t(\ell). \end{equation} A line \(\ell\) is said to be \emph{of type~\((p,q)\)} if there exist exactly \(p\) points \(t \in \IP^1\) such that \(c_t\) splits into three (not necessarily distinct) lines and exactly \(q\) points \(t \in \IP^1\) such that \(c_t\) splits into a line and an irreducible conic. It follows from \eqref{eq:v=sum-vF} that if \(\ell\) has type~\((p,q)\) and degree~\(d\), then \begin{equation} \label{eq:vl} v(\ell) \leq d p + q. \end{equation} \subsection{Lines of the first and second kind} Let \(x_0,x_1,x_2,x_3\) be the coordinates on \(\IP^3\). We denote by \(\IK[x_0,\ldots,x_3]_{(d)}\) the space of forms (ie homogeneous polynomials) of degree~\(d\). Up to projective equivalence, we can suppose that the line \(\ell\) is given by \(x_0 = x_1 = 0\), so that the quartic \(X\) is defined by \begin{equation} \label{eq:surfaceX} X: \sum_{i_0 + i_1 + i_2 + i_3 = 4} a_{i_0 i_1 i_2 i_3} x_{0}^{i_0} x_{1}^{i_1} x_{2}^{i_2} x_{3}^{i_3} = 0, \quad a_{0 0 i_2 i_3} = 0 \text{ for all \(i_2\), \(i_3\)}, \end{equation} where \(i_0,\ldots,i_3\) are non-negative integers.
The planes containing \(\ell\) are parametrized by \(\Pi_t: x_0 = t x_1\), where \(t = \infty\) denotes the plane \(x_1 = 0\). The residual cubic \(c_t\) is defined by the equation of \(\Pi_t\) and the equation \(g \in \IK[t][x_1,x_2,x_3]_{(3)}\) obtained by substituting \(x_0\) with \(t x_1\) in \eqref{eq:surfaceX} and factoring out \(x_1\). An explicit computation shows that there exist two forms \(\alpha,\beta \in \IK[x_2,x_3]_{(\deg(\ell))}\) such that \(\ell \cap c_t\) consists of the points \([0:0:x_2:x_3]\) satisfying \begin{equation} \label{eq:penciloncurve} g(t;0,x_2,x_3) = t \alpha(x_2, x_3) + \beta(x_2, x_3) = 0. \end{equation} An \emph{inflection point} of a (possibly reducible) cubic curve \(c \subset \IP^2\) is a point of \(c\) which is a zero of the hessian of the cubic. If \(c\) contains a line as a component, then all points of the line are inflection points of \(c\). The following lemma can also be checked by an explicit computation. \begin{lemma} \label{lemma:line&tangentconic} If \(c \subset \IP^2\) is a reducible cubic that is the union of an irreducible conic and a line \(m\), then the locus of inflection points of \(c\) is exactly~\(m\). \qed \end{lemma} If the surface \(X\) is defined as in \eqref{eq:surfaceX}, then the hessian of the equation \(g\) defining the residual cubic \(c_t\), restricted to the line \(\ell\), is given by \begin{equation} \label{eq:hessian} h = \det \left( \frac{\partial^2 g}{\partial x_i \partial x_j} \right)_{1 \leq i,j \leq 3} \bigg\rvert_{x_1 = 0} \in \IK[t][x_2,x_3]_{(3)}, \end{equation} which is a polynomial of degree \(5\) in \(t\), with forms of degree \(3\) in \((x_2,x_3)\) as coefficients (if \(\Char \IK = 2\), this definition has to be slightly modified, see~\cite{RS-char3}). The resultant \(R(\ell)\) with respect to the variable \(t\) of the polynomials \eqref{eq:penciloncurve} and \eqref{eq:hessian} is called the \emph{resultant} of the line \(\ell\).
We say that a line \(\ell\) of positive degree is a line of the \emph{second kind} if its resultant is identically equal to zero. Otherwise, we say that \(\ell\) is a line of the \emph{first kind}. A root \([ x_2: x_3]\) of \(R(\ell)\) corresponds to a point \(P = [0:0: x_2: x_3]\) on \(\ell\). If \(P\) is a smooth surface point, then it is an inflection point for the residual cubic passing through it. \begin{proposition}[{\cite[Proposition 2.18]{veniani-char2}}] \label{prop:1stkind} If \(\ell \subset X\) is a line of the first kind of degree~\(d\), then \(v(\ell) \leq 3 + 5\,d\). \qed \end{proposition} \subsection{Triangle-free surfaces} \label{subsec:triangle-free.K3} We follow here the nomenclature of \cite[§5]{veniani1}. A Dynkin diagram (resp. extended Dynkin diagram) is also called an \emph{elliptic} graph (resp. \emph{parabolic} graph). The \emph{line graph} \(\Gamma(X)\) of \(X\) is the dual graph of the strict transforms of its lines on \(Z\). A \emph{triangle} on \(X\) is the union of three lines intersecting pairwise in smooth points of \(X\). We say that \(X\) is \emph{triangle-free} if it contains no triangles. \begin{lemma}[{\cite[Lemma 2.22]{veniani-char2}}] \label{lem:elliptic-triangle-free} If \(X\) is triangle-free and \(\ell \subset X\) is elliptic, then \(v(\ell) \leq 12\). \qed \end{lemma} \begin{lemma}[{\cite[Lemma 2.23]{veniani-char2}}] \label{lem:triangle-conf} A triangle on \(X\) is contained in a plane~\(\Pi \subset \IP^3\) such that the intersection of \(\Pi\) and \(X\) is one of the configurations pictured in \autoref{fig:triangle-conf}. \qed \end{lemma} \begin{figure} \caption{Possible configurations of lines on a plane with a triangle. Singular points are marked white. 
In configurations \(\mathcal D_0\) and \(\mathcal E_0\) the singular points might coincide.} \label{fig:triangle-conf} \end{figure} \subsection{Elliptic fibrations} For further details on genus 1 fibrations in arbitrary characteristic, we refer to \cite{bom-mum,cossec-dolgachev,rud-sha2,schuett.shioda:elliptic.surfaces}. From now on, we suppose that \(\Char \IK = 3\). We adopt Kodaira's notation for the possible types of a fiber \(F\) appearing in a genus \(1\) fibration. We denote by \(e(F)\) and \(\delta(F)\) respectively the Euler--Poincaré characteristic and the wild ramification index of \(F\) in characteristic~\(3\). The possible values of \(e(F)\) and \(\delta(F)\) according to the type of \(F\) are displayed in \autoref{tab:fibers}. The following formula holds for an elliptic fibration on a K3 surface: \begin{equation} \label{eq:euler.elliptic} \sum_{t \in \IP^1} (e(F_t) + \delta(F_t)) = 24. \end{equation} \begin{table}[b] \centering \caption{Euler--Poincaré characteristic and possible wild ramification indices of a fiber \(F\) in a genus 1 fibration.} \label{tab:fibers} \begin{tabular}{r|cccccccc} \toprule type of \(F\) & \(\I_n\) & \(\II\) & \(\III\) & \(\IV\) & \(\I^*_n\) & \(\IV^*\) & \(\III^*\) & \(\II^*\) \\ \midrule \(e(F)\) & \(n\) & \(2\) & \(3\) & \(4\) & \(n+2\) & \(8\) & \(9\) & \(10\) \\ \(\delta(F)\) & \(0\) & \(\geq 1\) & \(0\) & \(\geq 1\) & \(0\) & \(\geq 1\) & \(0\) & \(\geq 1\) \\ \bottomrule \end{tabular} \end{table} \subsection{Quasi-elliptic fibrations and cuspidal lines} A fiber \(F\) of a quasi-elliptic fibration can only have one of the following types: \(\II,\,\IV,\,\IV^*,\,\II^*\). A K3 surface with a quasi-elliptic fibration is necessarily supersingular, and the following formula holds: \begin{equation} \label{eq:euler.quasi-elliptic} \sum_{t \in \IP^1} (e(F_t) - 2) = 20.
\end{equation} If \(iv\), \(iv^*\) and \(ii^*\) denote the numbers of reducible fibers of the respective type, then \eqref{eq:euler.quasi-elliptic} can also be written \begin{equation} \label{eq:char3-euler} iv + 3\,iv^* + 4\,ii^* = 10. \end{equation} \begin{figure} \caption{Cuspidal curve (white vertex) intersecting a reducible fiber of a quasi-elliptic fibration. Multiple white dots represent different possibilities.} \label{fig:cuspidal.curve} \end{figure} The closure of the locus of cusps is a smooth curve~\(K\), called the \emph{cuspidal curve}, such that \(K\cdot F = 3\). The restriction of the fibration to \(K\) is an inseparable morphism of degree \(3\). The cuspidal curve meets a reducible fiber in one of the ways pictured in \autoref{fig:cuspidal.curve}. In particular, the way \(K\) intersects \(F\) is uniquely determined unless \(F\) is of type~\(\II^*\). Note that \(K\) intersects a fiber of type~\(\IV\) at the intersection point of the three components. A line \(\ell \subset X\) is said to be \emph{cuspidal} if it is quasi-elliptic and the cuspidal curve \(K\) of the induced fibration coincides with the strict transform \(L\) of~\(\ell\). A cuspidal line is necessarily inseparable. \section{Bounds on the valency} \label{sec:char3-valency} From now on we assume that \(X\) is a K3 quartic surface defined over a field \(\IK\) of characteristic~3. This section is devoted to the study of the possible valencies of a line \(\ell \subset X\) (\autoref{def:valency}). \subsection{Separable elliptic lines} The results which we will prove presently are summarized in \autoref{tab:char3-elliptic}.
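Throughout this section, equation \eqref{eq:char3-euler} restricts a quasi-elliptic fibration to finitely many distributions of reducible fiber types; this bookkeeping enters, for instance, the proofs of \autoref{lemma:qe-deg3} and \autoref{prop:qe-deg3}. The eight solutions can be enumerated by a short script (a minimal sketch; the variable names \texttt{iv4s} and \texttt{ii2s} are ad hoc):

```python
# Possible numbers of reducible fibers (iv, iv*, ii*) of types IV, IV*, II*
# in a quasi-elliptic fibration on a K3 surface in characteristic 3:
# the non-negative solutions of iv + 3*iv* + 4*ii* = 10.
solutions = [(10 - 3 * iv4s - 4 * ii2s, iv4s, ii2s)
             for ii2s in range(3)   # 4 * ii2s <= 10
             for iv4s in range(4)   # 3 * iv4s <= 10
             if 10 - 3 * iv4s - 4 * ii2s >= 0]
for s in solutions:
    print(s)
```

In particular, such a fibration has at most \(10\) reducible fibers, attained only when all of them are of type~\(\IV\).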
\begin{table}[b] \caption{Bounds for the valency of a separable elliptic line.} \centering \label{tab:char3-elliptic} \begin{tabular}{cccl} \toprule kind & degree & singularity & valency \\ \midrule \multirow{3}{*}{first kind} & \(3\) & 0 & \(\leq 18\) (sharp bound) \\ & \(2\) & 1 & \(\leq 13\) \\ & \(1\) & 2 or 1 & \(\leq 8\) \\ \midrule \multirow{4}{*}{second kind} & \(3\) & 0 & \(\leq 21\) (sharp bound) \\ & \(2\) & 1 & \(\leq 14\) (sharp bound) \\ & \(1\) & 2 & \(\leq 9\) \\ & \(1\) & 1 & \(\leq 11\) \\ \midrule -- & \(0\) & 3, 2 or 1 & \(\leq 2\) (sharp bound) \\ \bottomrule \end{tabular} \end{table} \begin{lemma} \label{lemma:2ndkind-ram2} Let \(\ell \subset X\) be a separable line of the second kind and \(P \in \ell\) a smooth point of ramification~\(2\). Then, either the corresponding fiber is of type~\(\II\) with a cusp in \(P\), or the corresponding residual cubic splits into a double line and a simple line. \end{lemma} \proof Note that only lines of degree \(3\) and \(2\) can have a point \(P\) of ramification~\(2\). We choose coordinates so that \(P\) is given by \([0:0:0:1]\). This means that \[ a_{0103} = 0 \quad \text{and} \quad a_{0112} = 0. \] Since \(P\) has ramification index~\(2\) and is a smooth point of the surface, we can normalize \[ a_{0121} = 1 \quad \text{and} \quad a_{1003} = 1. \] Since \(\ell\) is of the second kind, the following relations must be satisfied: \[ a_{0202} = 0, \quad a_{0301} = a_{0211}^2 \quad \text{and} \quad a_{0310} = a_{0211}a_{0220}. \] This means that the residual cubic in \(x_0 = 0\) corresponding to \(P\) is given by \[ \left(a_{0211} x_{1} - x_{2}\right)^2 x_{3} + f_3(x_1,x_2), \] where \(f_3\) is a form of degree 3. Looking at the explicit formula for \(f_3\) (which we do not write here), we see that either this cubic is irreducible and gives rise to a fiber of type~\(\II\), or the polynomial \(g = a_{0211} x_1 - x_2\) divides \(f_3\); in the latter case, a direct computation shows that \(g^2\) also divides \(f_3\).
\endproof \begin{lemma} \label{lemma:2ndkind-ram3_4} Let \(\ell \subset X\) be a separable line of the second kind and \(P \in \ell\) a point of ramification \(3_4\). Then, either the corresponding fiber is of type~\(\II\) with a cusp in \(P\), or the corresponding residual cubic splits into three lines (not necessarily distinct) intersecting in~\(P\). \end{lemma} \proof Note that \(\ell\) necessarily has degree~\(3\). We choose coordinates so that \(P\) is given by \([0:0:0:1]\) and the fiber corresponds to the plane \(\Pi_0: x_0 = 0\). This means that \[ a_{0103} = 0, \quad a_{0112} = 0 \quad \text{and} \quad a_{0121} = 0. \] A calculation with local parameters shows that \(\length(\Omega_{L/\IP^1}) = 4\) if and only if \(a_{1012} = 0\). Moreover, the following three coefficients must be different from 0: \(a_{0130}\), \(a_{1003}\) and \(a_{1021}\); the first two because otherwise there would be singular points on \(\ell\) (implying that the degree of \(\ell\) is less than \(3\)), the third because otherwise \(\ell\) would be inseparable. We can normalize them to \(1\) by rescaling coordinates. Two necessary conditions for the line \(\ell\) to be of the second kind are \[ a_{0202} = 0 \quad \text{and} \quad a_{0211} = 0. \] Hence, the residual cubic in \(\Pi_0\) is given by \[ a_{0301} x_{1}^{2} x_{3} + x_{2}^{3} + x_{1}f_2(x_1,x_2). \] Either the fiber is irreducible and has a cusp in \(P\) (\(a_{0301} \neq 0\)), or it splits into three concurrent lines (\(a_{0301} = 0\)). \endproof \begin{remark} \label{rmk:e=v} Suppose that \(\ell\) is an elliptic line of degree \(3\). From~\autoref{tab:fibers} we observe \[ v_t(\ell) \leq e(F_t) \leq e(F_t) + \delta(F_t), \] since a reducible fiber has \(e(F_t) \geq 2\) and a fiber with at least three components has \(e(F_t) \geq 3\).
From equations~\eqref{eq:v=sum-vF} and \eqref{eq:euler.elliptic} we infer that \begin{equation} \label{eq:euler-valency} v(\ell) = \sum_{t\in\IP^1} v_t(\ell) \leq \sum_{t\in\IP^1} (e(F_t) + \delta(F_t)) = 24. \end{equation} If \(v_t(\ell) = e(F_t)\), then \(F_t\) is of type~\(\I_3\). Hence, if for some subset \(S \subset \IP^1\) one has \[ \sum_{s\in S} e(F_s) = \sum_{s \in S} v_s(\ell) = N, \] for some \(N \geq 0\), then all fibers \(F_s\), \(s \in S\), must be of type~\(\I_3\) and, in particular, \(N\) must be divisible by~3. \end{remark} An application of the Riemann--Hurwitz formula yields the following lemma. \begin{lemma} If \(\ell \subset X\) is a separable line of degree~\(3\), then \(\ell\) has ramification \(2_1^4\), \(2_1 3_3\) or \(3_4\). \qed \end{lemma} \begin{proposition} \label{prop:lines2ndkind} If \(\ell \subset X\) is a separable elliptic line of the second kind of degree~\(3\), then the valency of \(\ell\) is bounded according to \autoref{tab:lines2ndkind}. \end{proposition} \begin{table}[t] \caption{Bounds for the valency of a separable elliptic line of the second kind of degree 3.} \label{tab:lines2ndkind} \centering \begin{tabular}{cl} \toprule ramification & valency \\ \midrule \(2_1^4\) & \(\leq 12\) \\ \(2_1 3_3\) & \(\leq 21\) (sharp bound) \\ \(3_4\) & \(\leq 21\) (sharp bound) \\ \bottomrule \end{tabular} \end{table} \proof Suppose first that \(\ell\) has a point \(P\) of ramification index \(2\). According to \autoref{lemma:2ndkind-ram2}, the corresponding fiber \(F_P\) is either of type~\(\II\), so that \(e(F_P) + \delta(F_P) \geq 2+1 = 3\) (\autoref{tab:fibers}) and \(v_P(\ell) = 0\), or it contains a double component, so that \(e(F_P) \geq 6\) and \(v_P(\ell) = 2\); in either case, the difference \(e(F_P) + \delta(F_P) - v_P(\ell)\) is at least 3.
Therefore, if there are 4 points of ramification 2, then by formula~\eqref{eq:euler-valency} \(v(\ell)\) is not greater than \(24 - 4\cdot 3 = 12\), while if there is just one, \(v(\ell)\) is not greater than \(24 - 3 = 21\). Suppose now that \(\ell\) has no point of ramification index \(2\), ie \(\ell\) has ramification~\(3_4\). If the ramified fiber \(F_0\) is of type~\(\II\), then there can be at most \(24-3 = 21\) lines meeting \(\ell\). Otherwise, \(F_0\) splits into three concurrent lines by \autoref{lemma:2ndkind-ram3_4}. Then, \(e(F_0) + \delta_0 \geq 5\) (type~\(\IV\) has wild ramification, too), which means that the contribution to \(v(\ell)\) of the other fibers is not greater than \(24-5 = 19\). Nonetheless, by \autoref{rmk:e=v} this contribution cannot be exactly \(19\), since \(19\) is not divisible by \(3\). Hence, again, we can have at most \(3 +18 = 21\) lines intersecting~\(\ell\). \endproof \begin{example} The following surface contains a separable elliptic line (\(x_0 = x_1 = 0\)) of ramification \(2_1 3_3\) with valency \(21\): \[ x_{0}^{4} + x_{0}^{2} x_{1} x_{2} - x_{1}^{3} x_{2} + x_{0} x_{1} x_{2}^{2} + x_{1} x_{2}^{3} + x_{0}^{2} x_{1} x_{3} + x_{1}^{2} x_{3}^{2} + x_{0} x_{2} x_{3}^{2} + x_{0} x_{3}^{3} = 0. \] \end{example} \begin{example} Let \(i\) be a square root of \(-1\). The following surface contains a separable elliptic line \(x_0 = x_1 = 0\) of ramification \(3_4\) with valency \(21\): \begin{multline*} i x_{0}^{3} x_{1} + i x_{1}^{3} x_{2} + i x_{1} x_{2}^{3} - i x_{0}^{3} x_{3} + i x_{0} x_{1} x_{2} x_{3} + i x_{0} x_{3}^{3} \\ = x_{0}^{2} x_{1} x_{2} + x_{1}^{2} x_{2}^{2} + x_{0} x_{2}^{2} x_{3} - x_{0}^{2} x_{3}^{2}. \end{multline*} \end{example} \begin{proposition} \label{prop:e-deg2} If \(\ell \subset X\) is an elliptic line of degree~\(2\), then, \(v(\ell) \leq 14\). \end{proposition} \proof By \autoref{prop:1stkind}, we can assume that \(\ell\) is of the second kind. 
Since \(\ell\) has degree \(2\), it must have singularity \(1\) (\autoref{lem:degree.singularity}). Let \(P\) be the singular point on~\(\ell\). The morphism \(\pi: L \rightarrow \IP^1\), being of degree 2, is separable and has two points of ramification index 2. At least one of the points of ramification must be different from \(P\); let us call it \(Q\). By \autoref{lemma:2ndkind-ram2}, either the fiber corresponding to \(Q\) is of type~\(\II\) or the residual cubic splits into a double line and a simple line. Suppose the fiber \(F_Q\) is of type~\(\II\). If \(\ell\) is of type~\((p,q)\), then \(3\,p + 2\,q \leq 24 - 3 = 21\). Applying formula~\eqref{eq:vl}, we have \(v(\ell) \leq 2\,p + q \leq 14\). If the residual cubic corresponding to \(F_Q\) splits into a double line and a simple line, then it contributes 1 to the valency and at least 6 to the Euler--Poincaré characteristic. Applying formula~\eqref{eq:vl} again yields \(v(\ell) \leq 13\). \endproof \begin{example} The following surface contains an elliptic line \(\ell:x_0 = x_1 = 0\) of degree 2 with valency 14, thus attaining the bound in \autoref{prop:e-deg2}. The surface contains one point \(P = [0:0:0:1]\) of type~\(\bA_1\). The line \(\ell\) has 7 fibers of type~\(\I_3\) and one ramified fiber of type~\(\II\). The other ramified fiber corresponds to the plane \(x_0 = 0\) and is smooth. \[ x_{0}^{4} + x_{0}^{2} x_{1} x_{2} - x_{1}^{3} x_{2} + x_{0} x_{1} x_{2}^{2} + x_{1} x_{2}^{3} + x_{1}^{2} x_{3}^{2} + x_{0} x_{2} x_{3}^{2} = 0. \] \end{example} \begin{proposition} \label{prop:e-deg1} Let \(\ell \subset X\) be an elliptic line of degree \(1\). Then, \(v(\ell) \leq 9\) if \(\ell\) has singularity \(2\), and \(v(\ell) \leq 11\) if \(\ell\) has singularity \(1\). \end{proposition} \proof The proof carries over word for word from the characteristic 0 case (see \cite[Propositions~2.13 and 2.14]{veniani1}).
\endproof \subsection{Quasi-elliptic lines of degree 3} \autoref{tab:char3-qe} summarizes the known bounds for the valency of a quasi-elliptic line, which we are about to prove. \begin{table}[t] \centering \caption{Bounds for the valency of a quasi-elliptic line.} \label{tab:char3-qe} \begin{tabular}{cll} \toprule degree & & valency \\ \midrule \multirow{2}{*}{\(3\)} & cuspidal & \(\leq 30\) (sharp bound) \\ & not cuspidal & \(\leq 21\) (sharp bound) \\ \(2\) & & \(\leq 14\) (sharp bound) \\ \(1\) & & \(\leq 10\) \\ \(0\) & & \(\leq 2\) \\ \bottomrule \end{tabular} \end{table} \begin{lemma} \label{lemma:inseparable=>cuspidal} If \(\ell \subset X\) is an inseparable line with \(v(\ell)>12\), then \(\ell\) is cuspidal (in particular, quasi-elliptic). \end{lemma} \proof Up to coordinate change, we can suppose that the residual cubic contained in \(x_0 = t x_1\) intersects the line \(\ell:x_0 = x_1 = 0\) in \([0:0:0:1]\) for \(t = 0\) and in \([0:0:1:0]\) for \(t = \infty\). This means that the following coefficients vanish: \[ a_{0103},\,a_{0112},\, a_{0121}; \, a_{1012},\,a_{1021},\, a_{1030}. \] Moreover, \(a_{1003}\) and \(a_{0130}\) must be different from \(0\), and can be normalized to \(1\) and \(-1\), respectively, by rescaling coordinates. Up to a Frobenius change of parameter \(t = s^3\), we can explicitly write the intersection point \(P_s\) of the residual cubic with \(\ell\), which is given by \[ P_s = [0:0:s:1]. \] If a residual cubic \(c_s\) is reducible, then all components must pass through \(P_s\); in particular, \(P_s\) must be a singular point of \(c_s\). One can see explicitly that \(P_s\) is a singular point of \(c_s\) if and only if \(s\) is a root of the following degree \(8\) polynomial: \begin{equation} \label{eq:char3-phi} \begin{split} g(s) & = a_{2020} s^{8} + a_{2011} s^{7} + a_{2002} s^{6} + a_{1120} s^{5} \\ & + a_{1111} s^{4} + a_{1102} s^{3} + a_{0220} s^{2} + a_{0211} s + a_{0202}. 
\end{split} \end{equation} Furthermore, it can be checked by a local computation that if \(c_s\) splits into three (not necessarily distinct) lines, then \(s\) is a double root of \(g\). This implies that the valency of \(\ell\) is not greater than \(3\cdot 8 /2 = 12\), unless the polynomial \(g\) vanishes identically; but \(g \equiv 0\) implies that all points \(P_s\) are singular for \(c_s\), ie the line \(\ell\) is cuspidal. \endproof \begin{corollary} \label{cor:family-C} If \(\ell\subset X\) is a cuspidal line, then \(X\) is projectively equivalent to a member of the family \(\mathcal C\) defined by \[ \mathcal C\colon x_{0}x_{3}^3 - x_{1}x_{2}^3 + x_{2}q_3(x_{0},x_{1}) + x_{3}q_3'(x_{0},x_{1}) + q_4(x_{0},x_{1}), \] where \(q_3\), \(q_3'\) and \(q_4\) are forms of degree \(3\), \(3\) and \(4\), respectively. \end{corollary} \proof The family can be found by imposing that the polynomial \(g\) in the proof of \autoref{lemma:inseparable=>cuspidal} vanishes identically. \endproof \begin{corollary} \label{cor:cuspidal-reducible-fiber} If \(\ell \subset X\) is a cuspidal line, then a residual cubic corresponding to a reducible fiber of \(\ell\) is either the union of three distinct concurrent lines, or a triple line. \end{corollary} \proof The intersection of a residual cubic with \(\ell\) is always a single point. A residual cubic of \(\ell\) cannot be the union of a line and an irreducible conic, because the line and the conic would result in a fiber of type~\(\I_n\) (the conic has to be tangent to \(\ell\)), which is impossible in a quasi-elliptic fibration. Therefore, a residual cubic relative to a reducible fiber must split into three (not necessarily distinct) lines. If at least two of them coincide, the plane on which they lie contains at least one singular point \(P\) of the surface (which is not on \(\ell\), since \(\ell\) has degree \(3\)).
An explicit inspection of this configuration in the family \(\mathcal C\) (for instance, supposing up to change of coordinates that \(P\) is given by \([0:1:0:0]\)) shows that the residual cubic degenerates to a triple line. \endproof \begin{lemma} \label{lemma:qe-deg3} If \(\ell \subset X\) is a quasi-elliptic line of degree \(3\), then \(v(\ell) \leq 30\). \end{lemma} \proof The fibration induced by the line \(\ell\) has at most 10 reducible fibers, each of which can contribute at most 3 to its valency. \endproof \begin{remark} The bound of \autoref{lemma:qe-deg3} is sharp. If a K3 quartic surface \(X\) is smooth, then the valency of a quasi-elliptic line of degree \(3\) on \(X\) is automatically \(30\), because the fibration induced by \(\ell\) can only have \(10\) reducible fibers of type~\(\IV\), whose residual cubics are the union of three concurrent lines. Notably, this happens for all 112 lines of the Fermat surface. \end{remark} \begin{figure} \caption{Possible residual cubics corresponding to a fiber of type~\(\IV\).} \label{fig:cubics-type-IV} \end{figure} \begin{lemma} \label{lemma:conf-IV} Let \(\ell \subset X\) be any line of degree~\(3\). A fiber of type~\(\IV\) of the fibration induced by \(\ell\) corresponds to one of the residual cubics pictured in~\autoref{fig:cubics-type-IV}, with the only restriction that \(\ell\) cannot pass through a singular point. \end{lemma} \proof A fiber of type~\(\IV\) contains three simple components; hence, the corresponding residual cubic can also have only simple components. Since it cannot contain cycles, it must be one of the configurations in \autoref{fig:cubics-type-IV}: a cuspidal cubic, the union of a conic and a line meeting tangentially in one point, or three distinct lines meeting in one point. The remaining components must come from the resolution of the singular points on the surface. The types of the singular points can be immediately deduced from the respective Dynkin diagrams.
\endproof \begin{figure} \caption{Possible residual cubics corresponding to a fiber of type~\(\IV^*\).} \label{fig:cubics-type-IV*} \end{figure} \begin{lemma} \label{lemma:conf-IV*} Let \(\ell \subset X\) be any line of degree~\(3\). A fiber of type~\(\IV^*\) of the fibration induced by \(\ell\) corresponds to one of the residual cubics pictured in \autoref{fig:cubics-type-IV*}, with the only restriction that \(\ell\) cannot pass through a singular point. \end{lemma} \proof Besides the residual cubics with only simple components described in the previous lemma, we can also have multiple components, namely a double line and a simple line, or a triple line. In the former case, the strict transforms of the lines can intersect (if their intersection point is smooth) or not (if their intersection point is singular), giving rise to two different configurations, which we distinguish by the letters \(a\) and \(b\). In the latter case, there is no ambiguity, since a fiber of type~\(\IV^*\) contains only one triple component. \endproof We denote by \(iv_0\), \(iv_1\), \ldots{} the numbers of fibers of type~\(\IV_0\), \(\IV_1\), and so on. Note that the subscript indicates the local valency of the fiber. \begin{lemma} \label{lem:char3-d=3-degK} If \(\ell \subset X\) is a separable quasi-elliptic line of degree \(3\), then the degree of its cuspidal curve is at least \(3\) and at most \(7\). \end{lemma} \proof Let \(k\) denote the degree of the cuspidal curve \(K\). Writing \(H = F + L\), one gets \(k = K\cdot H = 3 + K \cdot L\). The cuspidal curve \(K\) and the line \(L\) are distinct because \(\ell\) is separable. The curve \(K\) can meet \(L\) only in points of ramification; moreover, a local computation shows that if \(K\) is tangent to \(L\), then ramification \(3_4\) occurs, and that higher order tangency cannot happen. We thus obtain the following bounds according to the ramification type of \(\ell\): \(K\cdot L \leq 4\) for type~\(2_1^4\), \(K\cdot L \leq 2\) for type~\(2_1 3_3\), and \(K\cdot L \leq 2\) for type~\(3_4\).
\endproof \begin{proposition} \label{prop:qe-deg3} If \(\ell \subset X\) is a separable quasi-elliptic line of degree \(3\), then \(v(\ell) \leq 21\). \end{proposition} \proof A fiber of type~\(\II^*\) can have local valency at most \(2\), because it contains only one simple component, whereas three distinct lines would give rise to three distinct simple components. Hence, recalling equation \eqref{eq:char3-euler}, \begin{align*} v(\ell) & \leq 3\,iv + 3\,iv^* + 2\,ii^* \\ & = 3\,(10-3\,iv^*-4\,ii^*) + 3\,iv^* + 2\,ii^* \\ & = 30 - 6\,iv^* - 10\,ii^*. \end{align*} In particular, if \(ii^* > 0\), then \(v(\ell) \leq 20\), so we can suppose that \(\ell\) has no \(\II^*\)-fibers. Similarly, we can suppose that \(\ell\) has at most one fiber of type~\(\IV^*\). If \(\ell\) has no \(\IV^*\)-fiber, then it must have \(10\) fibers of type~\(\IV\). Using the classification of \autoref{lemma:conf-IV}, we list the possible configurations with \(v(\ell)>21\) (16 cases) in \autoref{tab:char3-qe-deg3-v>16}.
\begin{table}[t] \centering \caption{Fibration types and valencies for a separable quasi-elliptic line \(\ell\) of degree 3 with \(v(\ell) > 21\).} \label{tab:char3-qe-deg3-v>16} \label{tab:qe-deg3} \begin{tabular}{cccccc} \toprule case & \(iv^*\) & \(iv_3\) & \(iv_1\) & \(iv_0\) & valency \\ \midrule 1 & -- & 10 & \(0\) & 0 & \(30\) \\ 2 & -- & 9 & \(1\) & 0 & \(28\) \\ 3 & -- & 9 & \(0\) & 1 & \(27\) \\ 4 & -- & 8 & \(2\) & 0 & \(26\) \\ 5 & -- & 8 & \(1\) & 1 & \(25\) \\ 6 & -- & 8 & \(0\) & 2 & \(24\) \\ 7 & -- & 7 & \(3\) & 0 & \(24\) \\ 8 & -- & 7 & \(2\) & 1 & \(23\) \\ 9 & -- & 7 & \(1\) & 2 & \(22\) \\ 10 & -- & 6 & \(4\) & 0 & \(22\) \\ 11 & \(iv^*_3\) & 7 & \(0\) & 0 & \(24\) \\ 12 & \(iv^*_3\) & 6 & \(1\) & 0 & \(22\) \\ 13 & \(iv^*_{2a}\) & 7 & \(0\) & 0 & \(23\) \\ 14 & \(iv^*_{2b}\) & 7 & \(0\) & 0 & \(23\) \\ 15 & \(iv^*_{1a}\) & 7 & \(0\) & 0 & \(22\) \\ 16 & \(iv^*_{1t}\) & 7 & \(0\) & 0 & \(22\) \\ \bottomrule \end{tabular} \end{table} For each case, we consider the lattice generated by \(L\), a general fiber \(F\), the fiber components of the reducible fibers and the cuspidal curve \(K\) (which must be different from \(L\), since \(\ell\) is separable). All intersection numbers are uniquely determined (\(L\cdot F = 3\) because \(\ell\) has degree~\(3\)), except for \[ K \cdot L = K \cdot (H-F) = k - 3, \] but \(k\) can only take the values \(3,\ldots,7\) on account of \autoref{lem:char3-d=3-degK}. We check that this lattice has rank greater than \(22\) in all cases, except for case 6 with \(k=3\) (ie \(K \cdot L = 0\)). This case, however, does not occur. In fact, suppose that \(\ell\) is as in case 6 with \(K\cdot L = 0\); in particular, \(\ell\) has no ramified fibers with multiple components and, since \(v(\ell) = 24\), \(\ell\) is of the second kind.
It follows that if \(\ell\) has a point of ramification \(2\), then by \autoref{lemma:2ndkind-ram2}, the ramified fiber must be a cusp, ie \(K\) intersects \(L\), so \(K\cdot L >0\); if \(\ell\) has ramification \(3_4\), then by \autoref{lemma:2ndkind-ram3_4} the ramified fiber must be either a cusp or the union of three distinct lines; in both cases, \(K\cdot L >0\). \endproof \begin{example} The following surface contains a separable quasi-elliptic line \(\ell: x_0 = x_1 = 0\) of degree~\(3\) with valency~21, thus attaining the bound of \autoref{prop:qe-deg3}: \[ X: x_{1}^{4} + x_{0}^{2} x_{2}^{2} - x_{1}^{2} x_{2}^{2} - x_{1} x_{2}^{3} + x_{0} x_{2}^{2} x_{3} + x_{0} x_{3}^{3} = 0. \] It contains only one singular point \([1:0:0:0]\) of type~\(\bE_6\). \end{example} \subsection{Quasi-elliptic lines of lower degree} \begin{proposition} \label{prop:qe-deg2} If \(\ell \subset X\) is a quasi-elliptic line of degree~\(2\), then \(v(\ell) \leq 14\). \end{proposition} \proof By \autoref{prop:1stkind}, we can assume that \(\ell\) is of the second kind. Let \(P\) be the singular point on \(\ell\) and suppose \(t \in \IP^1\) is such that \(v_t(\ell) > 0\). The cubic \(c = c_t\) is reducible, because it contains at least one line. Suppose that \(c\) splits into a line \(m\) and an irreducible conic \(q\) (which must be tangent to each other because fibers of type~\(\I_n\) are not admitted in a quasi-elliptic fibration). Since \(v_t(\ell) >0\), the line \(m\) meets \(\ell\) in a smooth point; hence, the three configurations in \autoref{fig:quasi-elliptic.d=2.cubics} may arise. \begin{figure} \caption{Possible residual cubics appearing in the proof of \autoref{prop:qe-deg2}} \label{fig:quasi-elliptic.d=2.cubics} \end{figure} The third configuration gives rise to a fiber of type~\(\III\), but this type does not appear in a quasi-elliptic fibration. (In the other configurations, the point of intersection of \(m\) with the residual conic is singular.)
In the first configuration, if \(q \cap \ell = \{P,Q\}\), then \(Q\) is not an inflection point of the residual cubic by \autoref{lemma:line&tangentconic}, contradicting the fact that \(\ell\) is of the second kind. The second configuration can be ruled out by an explicit parametrization (in a line of the second kind, either the point \(P\) is a ramification point, or the cubic passing twice through \(P\) is singular at \(P\)). Thus, all three configurations are impossible, so \(c\) must split into three (not necessarily distinct) lines, at least one of which must pass through \(P\). Since there can be at most 8 lines through a singular point (\autoref{lemma:linesthroughsingularpoint}), there can be at most 7 such reducible fibers, each of them contributing at most 2 to the valency of \(\ell\), whence \(v(\ell) \leq 14\). \endproof \begin{example} The bound given by \autoref{prop:qe-deg2} is sharp. In fact, the following quartic surface contains a quasi-elliptic line \(\ell: x_0 = x_1 = 0\) of degree \(2\) and valency \(14\): \[ X : x_{0}^{4} + x_{0}^{3} x_{1} + x_{0} x_{1}^{3} + x_{1} x_{2}^{3} + x_{0} x_{1} x_{3}^{2} + x_{1}^{2} x_{3}^{2} + x_{0} x_{2} x_{3}^{2} = 0. \] The quartic contains two singular points, \(P = [0:0:0:1]\) of type~\(\bA_1\) and \(Q = [-1:1:1:0]\) of type~\(\bE_6\). The fibration induced by the line \(\ell\) has 7 fibers of type~\(\IV\) and one fiber of type~\(\IV^*\) corresponding to the plane containing~\(Q\). \end{example} \begin{lemma} If \(\ell \subset X\) is a quasi-elliptic line of degree \(1\), then \(v(\ell) \leq 10\). \end{lemma} \proof The fibration induced by the line \(\ell\) has at most 10 reducible fibers, each of which contributes at most 1 to its valency.
\endproof \section{Proof of \autoref{thm:char3}} \label{sec:char3-proof} In order to prove the main theorem, we divide our arguments according to whether the K3 quartic surface contains a star (\autoref{subsec:star.case}) or more generally a triangle (\autoref{subsec:triangle.case}), or contains no triangles at all (\autoref{subsec:triangle-free.case}). Throughout the section, \(X\) denotes a K3 quartic surface defined over a field of characteristic \(3\). \subsection{Star case} \label{subsec:star.case} We will start by analyzing quartic surfaces containing a star, which, we recall, is the union of four distinct lines meeting in a smooth point. Since the four lines in a star are necessarily coplanar, a star is the same as a configuration~\(\mathcal C_0\) (see \autoref{fig:triangle-conf}). The lines necessarily have degree \(3\) because there are no singular points on the plane containing them. We will first need a series of lemmas. In all of them, we will parametrize the surface \(X\) as in \eqref{eq:surfaceX} in such a way that the star is contained in the plane \(x_0 = 0\) and the lines meet at \([0:0:0:1]\), ie setting the following coefficients equal to \(0\): \[ a_{0301},\,a_{0211},\,a_{0121},\,a_{0202},\,a_{0112},\,a_{0103}. \] If necessary, we will parametrize a second line \(\ell'\) in the star as \(x_0 = x_2 = 0\), by further assuming \(a_{0400} = 0\). \begin{lemma} \label{lemma:star-1st-kind} If \(\ell \subset X\) is a line of the first kind in a star, then \(v(\ell) \leq 15\). \end{lemma} \proof It can be checked by an explicit computation that the resultant of \(\ell\) has a root of order \(6\) at the center of the star; this implies that there are at most \(18-6 = 12\) lines meeting \(\ell\) not contained in the star. \endproof \begin{lemma} \label{lemma:star-2_1 3_3} If \(\ell \subset X\) is a separable line of ramification \(2_1 3_3\) contained in a star, then it is of the first kind.
\end{lemma} \proof By a change of coordinates, we can assume that the point of ramification index 2 is \([0:0:1:0]\), and that ramification occurs at \(x_1 = 0\). Imposing that \(\ell\) is of the second kind leads to a contradiction (\(\ell\) cannot be separable). \endproof \begin{lemma} \label{lemma:star-3_4} If three lines in a star contained in \(X\) are separable and at least two of them have ramification \(3_4\), then the third one also has ramification \(3_4\). \end{lemma} \proof Besides \(\ell:x_0 = x_1 = 0\) and \(\ell':x_0 = x_2 = 0\), we can suppose without loss of generality that a third line is given by \(\ell'':x_0 = x_1 + x_2 = 0\), setting \(a_{0220} = a_{0130}+a_{0310}\). The conditions for \(\ell\), \(\ell'\) or \(\ell''\) to be of ramification~\(3_4\) are \(a_{1012} = 0\), \(a_{1102} = 0\) and \(a_{1012} = a_{1102}\), respectively. Any two of them imply the third one. \endproof \begin{lemma} \label{lemma:star-no-3x3_4} If three lines in a star contained in \(X\) are separable, then at most two of them can be of the second kind. \end{lemma} \proof We parametrize \(\ell\), \(\ell'\) and \(\ell''\) as in the previous lemma. Imposing that all three of them are of the second kind leads to a contradiction (at least one of them must be inseparable). \endproof \begin{lemma} \label{lemma:star-1st-kind+3_4} Let \(\ell, \ell' \subset X\) be two lines in a star. If \(\ell\) is a separable line of the second kind, and \(\ell'\) is a line of the first kind of ramification~\(3_4\), then \(v(\ell')\leq 12\). \end{lemma} \proof This can be checked again by an explicit computation of the resultant of \(\ell'\), which now has a root of order \(9\) at the center of the star. \endproof \begin{lemma} \label{lemma:star-not-cuspidal+cuspidal} Let \(\ell, \ell' \subset X\) be two lines in a star. If \(\ell\) is a cuspidal line, and \(\ell'\) is not cuspidal, then \(v(\ell')\leq 12\). \end{lemma} \proof We parametrize \(\ell: x_0 = x_1 = 0\) as in \autoref{cor:family-C}.
By virtue of \autoref{lemma:inseparable=>cuspidal}, we can suppose that \(\ell':x_0 = x_2 = 0\) is separable. An explicit computation shows that \(\ell'\) cannot be of the second kind, and that its resultant has a root of order \(9\) at \(x_2 = 0\). \endproof \begin{lemma} \label{lemma:star-not-cuspidal+2xcuspidal} Let \(\ell, \ell', \ell'' \subset X\) be three lines in a star. If \(\ell\) and \(\ell'\) are cuspidal, and \(\ell''\) is not cuspidal, then \(v(\ell'') = 3\). \end{lemma} \proof We parametrize \(\ell:x_0 = x_1 = 0\) and \(\ell':x_0 = x_2 = 0\) as in \autoref{cor:family-C}, ie we suppose that \(X\) is given by the family \(\mathcal C\) where the following coefficients are set to zero: \[ a_{0400},\,a_{0301};\,a_{1201},\,a_{1300};\,a_{2200},\,a_{2101},\,a_{1210}. \] By a further rescaling we put \(a_{0310} = 1\) and we consider \(\ell'': x_0 = x_1-x_2 = 0\). The line~\(\ell''\) is inseparable and we can compute its polynomial \(g\) as in formula \eqref{eq:char3-phi} in the proof of \autoref{lemma:inseparable=>cuspidal} (by parametrizing the pencil with \(x_0 = s^3(x_1-x_2)\)), which turns out to be \[ g(s) = a_{2110} s^8. \] This means that \(\ell''\) has only one singular fiber, at \(s = 0\) (namely a fiber of type~\(\IV\) with the maximum possible index of wild ramification), unless \(a_{2110} = 0\) and \(g \equiv 0\), in which case \(\ell''\) is cuspidal. \endproof \begin{lemma} \label{lemma:cuspidal=>two-stars} If \(\ell \subset X\) is a cuspidal line contained in fewer than two stars, then \(v(\ell) \leq 6\). \end{lemma} \proof On account of \autoref{cor:cuspidal-reducible-fiber}, the number of stars in which \(\ell\) is contained is exactly equal to the number of fibers of type~\(\IV\) in its fibration; moreover, \(v_t(\ell) = 1\) if \(t \in \IP^1\) corresponds to a fiber of type~\(\IV^*\) or \(\II^*\), yielding \[ v(\ell) = 3\,iv + iv^* + ii^*.
\] Recalling formula \eqref{eq:char3-euler}, we deduce that if \(iv < 2\) then \(v(\ell) \leq 6\). \endproof \begin{proposition} \label{prop:star} If \(X\) contains a star and is not projectively equivalent to the Fermat surface, then \(|{\Fn X}| \leq 58\). \end{proposition} \proof If \(\ell_1,\,\ell_2,\,\ell_3,\,\ell_4\) are the lines contained in the star, it follows from \eqref{eq:FnX} that \[ |{\Fn X}| \leq 4 + \sum_{i = 1}^4 (v(\ell_i)-3) = \sum_{i=1}^4 v(\ell_i) -8. \] Suppose first that none of the lines \(\ell_i\) is cuspidal. If \(v(\ell_i) \leq 15\) for \(i=1,2,3,4\), then \[ |{\Fn X}| \leq 4\cdot 15-8 = 52. \] If \(v(\ell_1)>15\), then by \autoref{lemma:star-1st-kind} and \autoref{lemma:inseparable=>cuspidal}, \(\ell_1\) must be separable of the second kind; hence \(v(\ell_1) \leq 21\); if \(v(\ell_i)\leq 15\) for \(i=2,3,4\), then \[ |{\Fn X}| \leq (21+3\cdot 15)-8 = 58. \] If \(v(\ell_1)>15\) and \(v(\ell_2)>15\), then both \(\ell_1\) and \(\ell_2\) are separable lines of the second kind. On account of \autoref{lemma:star-2_1 3_3}, they both have ramification \(3_4\). We claim that neither \(v(\ell_3)\) nor \(v(\ell_4)\) is greater than \(12\). In fact, if \(\ell_3\) is separable, then by \autoref{lemma:star-3_4} and \autoref{lemma:star-no-3x3_4} it must be of the first kind and have ramification \(3_4\), which in turn implies that \(v(\ell_3)\leq 12\), because of \autoref{lemma:star-1st-kind+3_4}; if \(\ell_3\) is inseparable, then \(v(\ell_3) \leq 12\) by \autoref{lemma:inseparable=>cuspidal}. The same applies to \(\ell_4\). Hence, we conclude that \[ |{\Fn X}|\leq (2\cdot 21 + 2\cdot 12) -8 = 58. \] Assume now that exactly one of the lines, say \(\ell_1\), is cuspidal, so that \(v(\ell_1) \leq 30\). On account of \autoref{lemma:star-not-cuspidal+cuspidal} we have \[|{\Fn X}| \leq (30 + 3\cdot 12) -8 = 58.\] Suppose then that both \(\ell_1\) and \(\ell_2\) are cuspidal.
If \(\ell_3\) and \(\ell_4\) are not cuspidal, then by \autoref{lemma:star-not-cuspidal+2xcuspidal} \[|{\Fn X}| \leq (2\cdot 30 + 2\cdot 3) - 8 = 58.\] Finally, suppose that \(\ell_1\), \(\ell_2\) and \(\ell_3\) are cuspidal. By a local computation it can be seen that \(\ell_4\) is also necessarily cuspidal. Thanks to the bound of \autoref{lemma:cuspidal=>two-stars}, we can suppose that at least two lines, say \(\ell_1\) and \(\ell_2\), are part of another star. Pick two lines \(\ell'_1\) and \(\ell'_2\) which intersect each other (necessarily in a smooth point), each lying in another star containing \(\ell_1\) and \(\ell_2\), respectively. Perform a change of coordinates so that \(\ell_1\), \(\ell_2\), \(\ell'_1\) and \(\ell'_2\) are given respectively by \(x_0 = x_1 = 0\), \(x_0 = x_2 = 0\), \(x_1 = x_3 = 0\) and \(x_2 = x_3 = 0\). Imposing that \(\ell_1\), \(\ell_2\) and \(\ell_3\) are cuspidal lines, the resulting surface is projectively equivalent to the Fermat surface. \endproof \subsection{Triangle case} \label{subsec:triangle.case} In this section we study the case in which \(X\) admits a triangle. The three lines forming the triangle are necessarily coplanar, and we denote by \(\Pi\) the plane on which they lie. The plane \(\Pi\) also intersects \(X\) in a fourth line, which might coincide with one of the first three. \begin{proposition} \label{prop:triangle-mult} If \(X\) admits a configuration \(\mathcal D_0\) or \(\mathcal E_0\), then \(|{\Fn X}| \leq 60\). \end{proposition} \proof Let \(\ell_0\) be the double line in the plane \(\Pi\) containing one of the two configurations, and let \(\ell_1\) and \(\ell_2\) be the two simple lines. Lines meeting \(\ell_0\) other than \(\ell_1\) and \(\ell_2\) must pass through the singular points; hence, by \autoref{lemma:linesthroughsingularpoint} there can be at most \(3\cdot(8-1) = 21\) of them. Note that \(\ell_1\) and \(\ell_2\) cannot be cuspidal because of \autoref{cor:cuspidal-reducible-fiber}.
In the fibrations induced by \(\ell_1\) and \(\ell_2\), the plane \(\Pi\) corresponds to a fiber with a multiple component, hence with Euler--Poincaré characteristic at least 6; therefore, if \(\ell_1\) and \(\ell_2\) are both elliptic, there can be at most \(18\) more lines meeting them, yielding by \eqref{eq:FnX} \[ |{\Fn X}| \leq 3\cdot(8-1) + (18 + 18) + 3 = 60. \] Suppose that \(\ell_1\) is quasi-elliptic. The plane \(\Pi\) corresponds to a fiber of type~\(\IV^*\) or~\(\II^*\); hence, there can be only one singular point on \(\ell_0\): in fact, by inspection of the Dynkin diagrams, a component of multiplicity \(2\) in these fiber types meets at most \(2\) other components (and one of them is the strict transform of \(\ell_2\)). Since \(\ell_1\) and \(\ell_2\) are not cuspidal, they have valency at most \(21\). It follows from \eqref{eq:FnX} that \[ |{\Fn X}| \leq (8-1) + 2\cdot(21-2) + 3 = 48. \qedhere \] \endproof \begin{lemma} \label{lemma:A0A1} Let \(\ell, \ell' \subset X\) be two lines of degree \(3\) in configuration \(\cA_0\) or \(\cA_1\). If \(v(\ell)>18\), then \(v(\ell')\leq 18\). \end{lemma} \proof Let \(\Pi\) be the plane containing \(\ell\) and \(\ell'\). Both lines are separable, since otherwise the respective residual cubics would intersect them in one point. We suppose that \(v(\ell')>18\) as well and look for a contradiction. Since both lines have valency greater than \(18\), they must be lines of the second kind with ramification \(2_1 3_3\) or \(3_4\). In particular, they must have a point of ramification \(3\) (let us call it \(P\in\ell\) and \(P'\in\ell'\)), which does not lie on \(\Pi\). Up to a change of coordinates, we can assume the following: \begin{enumerate} \item \(\Pi\) is the plane \(x_0 = 0\); \item \(\ell\) and \(\ell'\) are given respectively by \(x_0 = x_1 = 0\) and \(x_0 = x_2 = 0\); \item \(P\) is given by \([0:0:1:0]\) and \(P'\) by \([0:1:0:0]\); \item ramification in \(P\) (resp.
\(P'\)) occurs in \(x_1 = 0\) (resp. \(x_2 = 0\)). \end{enumerate} This amounts to setting the following coefficients equal to \(0\): \[ a_{0400},\,a_{0301},\, a_{0202},\, a_{0103};\, a_{1030},\, a_{1021},\, a_{1012};\, a_{1300},\, a_{1201},\, a_{1102}. \] Furthermore, \(a_{0112} \neq 0\), since the two residual lines in \(\Pi\) do not contain \([0:0:0:1]\), the intersection point of \(\ell\) and \(\ell'\); we set \(a_{0112} = 1\) after rescaling one variable. Two necessary conditions for \(\ell\) and \(\ell'\) to be lines of the second kind are \[ a_{0310} = a_{0211}^2 \quad \text{and} \quad a_{0130} = a_{0121}^2. \] This means that the residual conic in \(\Pi: x_0 = 0\) is given explicitly by \begin{equation} \label{eq:Q0} a_{0211}^{2} x_{1}^{2} + a_{0121}^{2} x_{2}^{2} + a_{0220} x_{1} x_{2} + a_{0211} x_{1} x_{3} + a_{0121} x_{2} x_{3} + x_{3}^{2} = 0. \end{equation} This conic splits into two lines by hypothesis; hence, it has a singular point. Computing the derivatives, one finds that the following condition must be satisfied: \[ a_{0220} = -a_{0121} a_{0211}. \] Substituting into \eqref{eq:Q0}, one finds that the conic degenerates to a double line: \[ (a_{0211} x_{1} + a_{0121} x_{2} - x_{3})^2 = 0; \] thus, we have neither configuration \(\cA_0\) nor \(\cA_1\). \endproof \begin{proposition} \label{prop:triangle-not-star} If \(X\) contains a triangle but not a star, then \(|{\Fn X}| \leq 67\). \end{proposition} \proof The proof is a case-by-case analysis of the configurations given by \autoref{lem:triangle-conf}, except configurations \(\mathcal C\) (a star, treated in \autoref{prop:star}), \(\mathcal D_0\) and \(\mathcal E_0\) (treated in \autoref{prop:triangle-mult}). We recall the bounds on the valency of \autoref{sec:char3-valency} and the fact that there are at most \(8\) lines through a singular point. Note that all lines in a configuration of type~\(\cA\) are elliptic, since the configuration corresponds to a fiber of type~\(\I_n\).
Suppose that \(X\) contains configuration \(\cA_0\). If one of the four lines, say \(\ell\), has valency \(v(\ell) > 18\), then the valency of each of the other three lines is at most \(18\) by \autoref{lemma:A0A1}. It follows from \eqref{eq:FnX} that \[ |{\Fn X}| \leq 4 + (21-3) + 3\cdot (18-3) = 67. \] Suppose that \(X\) contains configuration \(\cA_1\). Let \(\ell_1\) and \(\ell_2\) be the lines through the singular point, and \(\ell_3\) and \(\ell_4\) the other two lines. We know that \(v(\ell_i) \leq 14\), \(i=1,2\), whereas \autoref{lemma:A0A1} applies to \(\ell_3\) and \(\ell_4\), yielding \(v(\ell_3) + v(\ell_4) \leq 18+21\). It follows from \eqref{eq:FnX} that \[ |{\Fn X}| \leq 4 + (8-2) + 2\cdot (14-2) + (18-3) + (21-3) = 67. \] Suppose that \(X\) contains configuration \(\cA_2\). Let \(\ell_1\) be the line of singularity~\(2\). According to \autoref{prop:e-deg1}, \(v(\ell_1) \leq 9\). Therefore, \[ |{\Fn X}| \leq 4 + 2\cdot (8-2) + (9-1) + 2\cdot (14-2) + (21-3) = 66. \] If \(X\) contains configuration \(\cA_3\), then \[ |{\Fn X}| \leq 4 + 3\cdot (8-2) + 2 + 3\cdot (14-2) = 60. \] If \(X\) contains a configuration of type~\(\cB\), then the three lines meeting at the same (smooth) point must be of the first kind by \autoref{lemma:2ndkind-ram2}. It follows from \eqref{eq:FnX} that \[ |{\Fn X}| \leq \begin{cases} 67 & \text{if \(X\) contains configuration \(\cB_0\)}, \\ 63 & \text{if \(X\) contains configuration \(\cB_1\)}, \\ 61 & \text{if \(X\) contains configuration \(\cB_2\)}, \\ 57 & \text{if \(X\) contains configuration \(\cB_3\)}. \end{cases} \] Thus, in all cases \(|{\Fn X}| \leq 67\). \endproof \subsection{Triangle-free case} \label{subsec:triangle-free.case} In this section we employ the notation and the ideas of \autoref{subsec:triangle-free.K3} and \cite[§5]{veniani1}. The following result is analogous to \cite[Proposition 5.5]{veniani1}.
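Before turning to the triangle-free case, we note that the closed-form case bounds in the proof of \autoref{prop:triangle-not-star} can be replayed mechanically. The sketch below (ours, not part of the paper) re-evaluates the four \(\cA\)-type estimates from the stated valency bounds and the count of lines through a singular point.

```python
# Re-evaluate the A-type case bounds from the proof of the proposition
# above (a sketch; the per-line valency bounds and the "at most 8 lines
# through a singular point" count are taken from the text).
bounds = {
    "A0": 4 + (21 - 3) + 3 * (18 - 3),                        # one line of valency <= 21, three of valency <= 18
    "A1": 4 + (8 - 2) + 2 * (14 - 2) + (18 - 3) + (21 - 3),   # two lines through the singular point
    "A2": 4 + 2 * (8 - 2) + (9 - 1) + 2 * (14 - 2) + (21 - 3),
    "A3": 4 + 3 * (8 - 2) + 2 + 3 * (14 - 2),
}
assert bounds == {"A0": 67, "A1": 67, "A2": 66, "A3": 60}
assert max(bounds.values()) <= 67
```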
\begin{proposition} \label{prop:char3-alex} If \(X\) is triangle-free and \(D \subset \Gamma(X)\) is parabolic, then \[ |{\Fn X}| \leq v(D) + 24. \] \end{proposition} \proof The subgraph \(D\) induces a genus 1 fibration (\cite[§3, Theorem 1]{PSS}), which can be elliptic or quasi-elliptic. The vertices in \(D \cup (\Gamma \smallsetminus \Span D)\) are fiber components of this fibration. If the fibration is elliptic, there cannot be more than 24 components, on account of the Euler--Poincaré characteristic. If the fibration is quasi-elliptic, we obtain from formula~\eqref{eq:char3-euler} that \begin{equation} \label{eq:char3-euler-2} iv^* + ii^* \leq 3. \end{equation} A fiber of type~\(\IV\) can contain at most 2 lines, since there are no triangles. Hence, from \eqref{eq:char3-euler} and \eqref{eq:char3-euler-2} we deduce \begin{align*} |{\Fn X}|= |\Gamma(X)| &\leq v(D) + 2\,iv + 7\,iv^* + 9\,ii^* \\ & = v(D) + 20 + iv^* + ii^* \\ &\leq v(D) + 23. \qedhere \end{align*} \endproof \begin{lemma} \label{lem:char3-triangle-free} If \(X\) is triangle-free, then \(v(\ell) \leq 12\) for any line \(\ell \subset X\). \end{lemma} \proof Thanks to \autoref{lem:elliptic-triangle-free}, we can suppose that \(\ell\) is quasi-elliptic. We claim that for all \(t \in \IP^1\) it holds that \begin{equation} \label{eq:triangle-free2} 2 \, v_t(\ell) \leq e(F_t) -2. \end{equation} Indeed, this is true if the residual cubic \(c_t\) is irreducible (\(v_t(\ell) = 0\) and \(e(F_t) \geq 2\)), or if \(c_t\) is the union of a line and an irreducible conic (\(v_t(\ell) \leq 1\) and \(e(F_t) \geq 4\) because \(F_t\) is reducible). If \(c_t\) splits into three lines (not necessarily distinct), then these lines meet in one singular point because \(X\) is triangle-free and fibers of type~\(\I_n\) do not appear in a quasi-elliptic fibration. Hence, \(F_t\) is of type~\(\IV^*\) or \(\II^*\) and \eqref{eq:triangle-free2} holds because \(v_t(\ell)\leq 3\) and \(e(F_t) \geq 8\).
From \eqref{eq:v=sum-vF}, \eqref{eq:euler.quasi-elliptic} and \eqref{eq:triangle-free2} it follows that \[ 2\, v(\ell) = 2\,\sum_t v_t(\ell) \leq \sum_t (e(F_t) - 2) = 24. \qedhere \] \endproof \begin{proposition} \label{prop:char3-triangle-free} If \(X\) is triangle-free, then \(|{\Fn X}| \leq 64\). \end{proposition} \proof We can adapt the proof in \cite[Proposition~5.9]{veniani1}. Indeed, the proof of \cite[Proposition~5.8]{veniani1} is also valid in characteristic~\(3\), since we merely employ the analogous result of \autoref{prop:char3-alex} and the fact that the Picard lattice of \(X\) has rank at most \(22\). Therefore, if \(X\) contains no quadrangle, then \(|{\Fn X}| \leq 54\). If \(X\) contains a quadrangle, then we can apply \autoref{lem:char3-triangle-free} to the \(\mathbf{\tilde A}_3\)-subgraph corresponding to the quadrangle, finding \(|{\Fn X}| \leq 4 \cdot (12-2) +24 = 64\). \endproof \subsection{Proof of \autoref{thm:char3}} \label{subsec:proof.thm:char3} The case in which \(X\) contains a star is treated in \autoref{prop:star}, the case in which \(X\) contains no stars, but contains a triangle is treated in \autoref{prop:triangle-not-star}, and the case in which \(X\) contains no triangles is treated in \autoref{prop:char3-triangle-free}, so the proof is now complete. \qed \section{Examples} \label{sec:char3-examples} In this last section, we present examples of K3 quartic surfaces with many lines defined over a field of characteristic~\(3\) (with the exception of \autoref{ex:52.lines.2.sing.pts}). Most of the examples were found during the proof of \autoref{add:char3}. \autoref{ex:58-3rd-conf} and \autoref{ex:52.lines.2.sing.pts} were found starting from the configuration of lines communicated to the author by Degtyarev, using methods similar to the ones employed in \cite{veniani:symmetries.equations}. 
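The quasi-elliptic bookkeeping used in the triangle-free case can likewise be checked by brute force. The sketch below (ours, not part of the paper; variable names are ours) enumerates the nonnegative solutions of \(iv + 3\,iv^* + 4\,ii^* = 10\) from \eqref{eq:char3-euler} and confirms the two bounds used in the proof of \autoref{prop:char3-alex}.

```python
# Enumerate fibration types of a quasi-elliptic fibration in
# characteristic 3: nonnegative solutions of iv + 3*iv* + 4*ii* = 10
# (equation (eq:char3-euler) in the text).
sols = [(iv, ivs, iis)
        for ivs in range(4) for iis in range(3)
        for iv in [10 - 3 * ivs - 4 * iis] if iv >= 0]

# (eq:char3-euler-2): iv* + ii* <= 3 in every case.
assert all(ivs + iis <= 3 for _, ivs, iis in sols)

# Triangle-free count: at most 2 lines per IV-fiber, 7 components in a
# IV*-fiber, 9 in a II*-fiber, hence at most 23 extra vertices.
assert max(2 * iv + 7 * ivs + 9 * iis for iv, ivs, iis in sols) == 23
```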
\begin{example} \label{ex:58-1st-conf} A general member of the \(1\)-dimensional family defined by \[ x_{1}^{3} x_{2} - x_{1} x_{2}^{3} + x_{0}^{3} x_{3} - x_{0} x_{3}^{3} = a x_{0}^{2} x_{1} x_{2} \] is smooth and contains \(58\) lines. More precisely, for \(a = 0\) we obtain a surface which is projectively equivalent over \(\IF_9\) to the Fermat surface and thus contains 112 lines. If \(a \neq 0, \infty\), the surface contains a star (in \(x_0 = 0\)) formed by two cuspidal lines (of valency~30) and two elliptic lines with no other singular fibers than the star itself (hence, of valency~3). The remaining \(54\) lines are of type~\((p,q)= (1,9)\). The surface contains exactly 19 stars. \end{example} \begin{example} \label{ex:58-2nd-conf} A general member of the \(1\)-dimensional family defined by \[ x_{1}^{3} x_{2} - x_{1} x_{2}^{3} + x_{0}^{3} x_{3} - x_{0} x_{3}^{3} = a x_{0} x_{1} {\left(a x_{0} x_{2} + a x_{1} x_{3} + x_{1} x_{2} + x_{0} x_{3}\right)} \] is smooth and contains exactly \(58\) lines. More precisely, as long as \(a \neq 0,\, 1,\,-1,\, \infty\), the surface contains one cuspidal line (given by \(x_0 = x_1 = 0\)) which intersects 12 lines of type~\((4,0)\), and 18 lines of type~\((1,9)\); the remaining 27 lines are of type~\((4,6)\) (for instance, \(x_2 = x_3 = 0\)). The surface contains exactly 10 stars. For \(a = 0\) we find again a model of the Fermat surface, whereas for \(a = \pm 1\) the surface contains 20 lines and a triple point. For \(a = \infty\) we obtain the union of two planes and a quadric surface. All surfaces of the family are endowed with the symmetries \([x_0:x_1:x_2:x_3] \mapsto [x_1:x_0:x_3:x_2]\) and \([x_0:x_1:x_2:x_3] \mapsto [x_0:x_1:-x_2:-x_3]\). 
\end{example} \begin{example} \label{ex:58-3rd-conf} A general member of the \(1\)-dimensional family defined by \begin{multline*} {\left(a^{3} + a^{2} + a + 1\right)} \left( x_{1}^{3} x_{2} + x_{1} x_{2}^{3} - x_{0}^{3} x_{3} - x_{0} x_{3}^{3} \right) = \\ {\left(a - 1\right)} \left( x_{0}^{2} x_{1} x_{2} - x_{0}^{2} x_{3}^{2} + x_{1} x_{2} x_{3}^{2} \right) + {\left(a + 1\right)} \left( x_{1}^{2} x_{2}^{2} - x_{0} x_{1}^{2} x_{3} - x_{0} x_{2}^{2} x_{3} \right) \\ + {\left(a^{2} - 1\right)} {\left(x_{1} x_{2} + x_{0} x_{3}\right)} {\left(x_{0} + x_{3}\right)} {\left(x_{1} + x_{2}\right)} - {\left(a^{2} + 1\right)} x_{0} x_{1} x_{2} x_{3} \end{multline*} is smooth and contains exactly \(58\) lines. More precisely, if \(a \neq 0,1,-1,\infty\) and \(a^2 \neq -1\), then the surface contains exactly one star in the plane \begin{equation} \label{eq:char3-3rd-family-star} x_0 + x_3 = x_1 + x_2. \end{equation} The star is formed by two lines of type~\((7,0)\) and two lines of type~\((1,9)\), whose equations can be explicitly written after a change of parameter \(a = d/(d^2+1)\). Each line of type~\((7,0)\) meets 18 lines of type~\((3,6)\), and each line of type~\((1,9)\) meets 9 lines of type~\((4,6)\). All lines are elliptic. If \(a = 0,\,1\) or \(-1\) the surface is the union of a double plane and a quadric surface. If \(a = \infty\) the surface is projectively equivalent to the Fermat surface. If \(a^2 = -1\), then the surface contains \(9\) points of type~\(\bA_1\) and \(40\) lines. The star in the plane \eqref{eq:char3-3rd-family-star} is formed by two elliptic lines of type~\((4,0)\) and two quasi-elliptic lines of type~\((1,9)\). Each line of type~\((4,0)\) intersects \(9\) lines of singularity~\(2\) and valency~\(6\), while each line of type~\((1,9)\) intersects \(9\) lines of singularity~\(1\) and valency~\(9\). 
All surfaces of the family are endowed with the symmetries \([x_0:x_1:x_2:x_3] \mapsto [x_1:x_0:x_2:x_3]\) and \([x_0:x_1:x_2:x_3] \mapsto [x_0:x_1:x_3:x_2]\). \end{example} \begin{example} \label{ex:shimada-shioda} The reduction modulo 3 of Shimada--Shioda's surface \(X_{56}\) (see \cite{shimada-shioda}) can be written \[ \Psi(x_0,x_1,x_2,x_3) = \Psi(-x_1,x_0,-x_3,x_2) \] where \[ \Psi(w,x,y,z) = w z {\left(w^{2} + w x + x^{2} + y^{2} + y z + z^{2}\right)}. \] It contains \(48\) lines and \(8\) singular points of type~\(\bA_1\), namely \([0:1:0:t]\), \([1:0:t:0]\), \([s:1:1:s]\) and \([s:1:-1:-s]\), where \(t^2 + 1 = 0\) and \(s^2-s-1 = 0\). The plane \(\Pi_0\colon x_0 = 0\) contains a configuration \(\cB_2\) as in \autoref{fig:triangle-conf}. The line \(x_0 = x_1 = 0\) has singularity \(0\), type~\((4,2)\) and valency \(14\). The line \(x_0 = x_2 = 0\) is quasi-elliptic and has singularity \(2\), type~\((4,6)\) and valency~\(4\). The other two lines in \(\Pi_0\) have singularity \(1\), type~\((5,1)\) and valency~\(11\). There are \(8\) lines in total through each of the two points \([0:1:0:t]\). To the author's knowledge, this example holds the current record for the number of lines on a non-smooth K3 quartic surface in characteristic~\(3\). \end{example} \begin{example} \label{ex:52.lines.2.sing.pts} The complex surface defined by \begin{multline*} x_{0}^2 x_{1} x_{2} - x_{1}^3 x_{2} - 2 x_{1}^2 x_{2}^2 - x_{1} x_{2}^3 + x_{0}^3 x_{3} + x_{0} x_{1}^2 x_{3} \\ + 2 x_{0} x_{1} x_{2} x_{3} + x_{0} x_{2}^2 x_{3} + 2 x_{0}^2 x_{3}^2 + x_{1} x_{2} x_{3}^2 + x_{0} x_{3}^3 = 0 \end{multline*} contains \(52\) lines and \(2\) singular points of type~\(\bA_1\), namely \(P_0 = [0:1:-1:0]\) and \(P_1 = [1:0:0:-1]\). The surface admits the symmetries \([x_0:x_1:x_2:x_3] \mapsto [x_3:x_1:x_2:x_0]\) and \([x_0:x_1:x_2:x_3] \mapsto [x_0:x_2:x_1:x_3]\). The plane \(\Pi_0\colon x_0 = 0\) contains a configuration \(\cA_1\) as in \autoref{fig:triangle-conf}.
Let \(\ell_1,\ell_2\) be the lines in \(\Pi_0\) of singularity \(0\) and let \(\ell_3,\ell_4\) be the lines in \(\Pi_0\) of singularity \(1\). The lines \(\ell_1,\ell_2\) are of type~\((2,8)\) and have valency~\(14\), while \(\ell_3,\ell_4\) are of type~\((6,1)\) and have valency~\(12\). There are \(8\) lines in total through \(P_0\). The reduction modulo \(2\) of this surface is not a K3 quartic surface because it is singular along the line defined by \(x_0 = x_1 + x_2 + x_3 = 0\). The reduction modulo \(3\) contains \(36\) lines and \(10\) singular points of type~\(\bA_1\), namely \([0:1:-1:0]\), \([1:0:0:-1]\), \([t:1:t+1:t]\), \([t:1:1:1-t]\), \([s:1:1-s:s]\) and \([s:1:1:-1-s]\), where \(t^2-t-1 = 0\) and \(s^2 + s -1 = 0\). The lines \(\ell_1,\ell_2\) are of type~\((2,4)\) and have valency \(10\), while \(\ell_3,\ell_4\) are of type~\((5,0)\) and have valency \(9\). There are \(6\) lines in total through \(P_0\). The reduction modulo \(5\) contains \(56\) lines and \(4\) singular points of type~\(\bA_1\), namely \([0:1:-1:0]\), \([1:0:0:-1]\) and \([t:1:1:t]\), where \(t^2 = 3\). The lines \(\ell_1,\ell_2\) are of type~\((4,4)\) and have valency~\(16\), while \(\ell_3,\ell_4\) are of type~\((6,1)\) and have valency \(12\). There are \(8\) lines in total through \(P_0\). To the author's knowledge, this example holds the current record for the number of lines on a non-smooth K3 quartic surface both in characteristic \(0\) and in characteristic \(p > 3\). \end{example} \end{document}
\begin{document} \title{Bell inequality violation with two remote atomic qubits} \date{\today } \author{D. N. Matsukevich} \altaffiliation[Electronic address: ]{[email protected]} \author{P. Maunz} \author{D. L. Moehring} \altaffiliation[Present address: ] {Max-Planck-Institut f\"{u}r Quantenoptik, Hans-Kopfermann-Str. 1, D-85748 Garching, Germany} \author{S. Olmschenk} \author{C. Monroe} \affiliation{Department of Physics and Joint Quantum Institute, University of Maryland, College Park, Maryland, 20742} \pacs{03.65.Ud, 03.67.Mn, 37.10.Ty, 42.50.Xa} \begin{abstract} We observe violation of a Bell inequality between the quantum states of two remote Yb$^+$ ions separated by a distance of about one meter with the detection loophole closed. The heralded entanglement of two ions is established via interference and joint detection of two emitted photons, whose polarization is entangled with each ion. The entanglement of remote qubits is also characterized by full quantum state tomography. \end{abstract} \maketitle In 1964 Bell showed that in all local realistic theories, correlations between the outcomes of measurements in different parts of a physical system satisfy a certain class of inequalities \cite{bell}. Furthermore, he found that some predictions of quantum mechanics violate these inequalities. Starting with the first experimental tests of Bell inequalities with photons \cite{freedman,aspect,weihs}, violation of a Bell inequality has been observed in a wide range of systems including protons \cite{bellproton}, K-mesons \cite{bellkaon}, ions \cite{wineland}, neutrons \cite{bellneutron}, B-mesons \cite{bellmeson}, heterogeneous atom-photon systems \cite{david2,mats2} and atomic ensembles \cite{mats,chou}. Demonstration of the violation of a Bell inequality has also become a routine technique to verify the presence of entanglement and check the security of a quantum communication link \cite{gisin}. 
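To make the quantitative content of such a test concrete, the following sketch (an editorial illustration, not taken from this Letter) evaluates the standard CHSH combination for a maximally entangled two-qubit state, whose correlation function is $E(a,b) = -\cos(a-b)$; local realistic theories bound $|S|$ by $2$, while quantum mechanics reaches $2\sqrt{2}$ at the standard measurement angles.

```python
import math

# CHSH combination S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for a
# maximally entangled two-qubit (singlet-like) state, with correlation
# E(a, b) = -cos(a - b).  Local realism bounds |S| by 2.
def E(a, b):
    return -math.cos(a - b)

# standard angles maximizing the quantum value
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
assert abs(S - 2 * math.sqrt(2)) < 1e-12  # Tsirelson bound, ~2.828 > 2
```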
In order to exclude all local realistic theories, a rigorous experimental test of a Bell inequality must satisfy two conditions. First, the measurement time has to be sufficiently short such that no information traveling at the speed of light can propagate from one qubit to another during the measurement (locality loophole). Second, the efficiency of the quantum state detection has to be high enough such that it is impossible to mimic a Bell inequality violation by selective choice of the successful measurement events (detection loophole). Since photons can propagate over a long distance and be detected fast, the locality loophole was first closed in a photon system \cite{aspect,weihs}. On the other hand, high detection efficiency and deterministic preparation of an entangled state of trapped ions has closed the detection loophole in a system of two trapped ions separated by $\sim 3\: \mu$m \cite{wineland}. Although several experimental schemes for a loophole-free Bell inequality test have been proposed \cite{kwiat94,huelga,fry,garcia,simon}, no experiment to date has closed both loopholes simultaneously. One of these proposals (by Simon and Irvine \cite{simon}) combines the advantages of photons and trapped ions. This protocol starts by preparing two spatially separated ions, each entangled with its emitted photon. These photons are then sent to an intermediate location where a partial Bell state analysis is performed. Successful detection of an entangled state of two photons unambiguously heralds the preparation of an entangled state of the two ions. The Bell inequality violation is then verified by local rotation and detection of the ion qubits \cite{wineland}. In this Letter, we report an important step towards implementation of this protocol with the observation of a Bell inequality violation using two $^{171}\rm{Yb}^+$ ions separated by about $1$ meter. 
In contrast to our previous work where the photonic qubit was encoded in the frequencies of a photon \cite{david1,davidjosa}, here we use the polarization degree of freedom for the photonic qubit and two nearly degenerate states of the atom to encode the atomic qubit. This allows for measurement of both atomic and photonic qubits in arbitrary bases and for characterization of the generated ion-photon and ion-ion entangled states by quantum state tomography, resulting in an ion-ion entanglement fidelity of 81\%. Together with the high efficiency of detecting the quantum state of an ion, such a fidelity makes it possible to observe a Bell inequality violation between two distant particles with the detection loophole closed. With an even larger separation between the ions or faster detection of the atomic qubit, the technique demonstrated here may ultimately allow for a loophole-free Bell inequality test \cite{simon,weinfurter}. The experimental setup is shown in Fig.~\ref{fig:setup}. A single $^{171}$Yb$^{+}$ ion is stored in each of two rf-Paul traps located in independent vacuum chambers. The ions are placed in a magnetic field of 4.6 G parallel to the direction of the quantization axis. A 700 ns light pulse at 369.5 nm resonant with the ${^{2}S_{1/2},F = 1} \rightarrow {^{2}P_{1/2},F = 1}$ transition optically pumps the ions in both traps to the $^{2}S_{1/2}, F = 0, m_F=0$ $(^{2}S_{1/2}|0, 0\rangle)$ state. The $^{2} P_{1/2}$ state has a probability of $\simeq 0.005$ to decay to the metastable $^{2} D_{3/2}$ state. To prevent population trapping in this state, the ion is illuminated with 935.2 nm light resonant with the $^{2}D_{3/2} \rightarrow {^3D[3/2]_{1/2}}$ transition \cite{steve}. After optical pumping, a $\simeq 2$~ps light pulse from a frequency doubled, mode-locked, Ti-sapphire laser, polarized linearly along the direction of the magnetic field, transfers the population to the excited $^{2}P_{1/2}|1,0\rangle$ state with near unit efficiency. 
Since the duration of the excitation pulse is much shorter than the excited state lifetime ($\simeq 8$ ns), at most one 369.5 nm photon can be emitted by the ion \cite{peter}. The duration of each optical pumping and excitation cycle is 1.4~$\mu s$. After 107 cycles, the ions are Doppler cooled for 40~$\mu s$. The average overall excitation rate is 0.52~MHz. When viewed along the quantization axis, the $^{2}P_{1/2}|1,0\rangle$ state either decays to the $^2S_{1/2}| 1,1 \rangle$ state emitting a left circular ($\sigma^{-}$) polarized photon or to the $^2S_{1/2}| 1, -1 \rangle$ state emitting a right circular ($\sigma^{+}$) polarized photon. According to the dipole radiation pattern, $\pi$ polarized photons emitted due to decay to the $^2S_{1/2}| 0,0 \rangle$ state cannot propagate along the quantization axis direction. Therefore, when the photon is emitted along the quantization axis, the state of each ion is entangled with the polarization state of its emitted photon: \begin{equation} \Psi = \frac{1}{\sqrt 2} ( | 1, 1 \rangle | \sigma^{-} \rangle - | 1, -1 \rangle | \sigma^{+} \rangle). \end{equation} For each ion, the emitted photons are collected by an imaging lens (numerical aperture 0.23) and sent through a $\lambda / 4$ wave plate to convert $\sigma^{+}$ or $\sigma^{-}$ circular polarization to linear horizontal ($H$) or vertical ($V$) polarization, respectively. The state of each ion-photon system, given that the photon passes the quarter-wave plate, can be written as \begin{equation} \Psi = \frac{1}{\sqrt 2} ( | 1, 1 \rangle | V \rangle - i | 1, -1 \rangle | H \rangle). \end{equation} \begin{figure} \caption{\label{fig:setup} Experimental setup.} \end{figure} The photons from each ion are coupled to a single-mode fiber to facilitate mode-matching on a nonpolarizing 50/50 beamsplitter. Photons at the output ports of the beamsplitter are detected with photomultiplier tubes (PMTs) (see Fig.~\ref{fig:setup}(b)).
The contrast of interference between the two modes is $97\%$, the quantum efficiency of each PMT is about 15\%, and the count rate due to the dark counts and background light leakage is about 3 Hz. The arrival times of the photoelectric pulses from the PMTs are recorded by a time-to-digital converter. Coincidence detection of two photons interrupts an experimental time sequence and triggers a sequence of microwave pulses to perform rotation of the ion qubits, followed by state detection of the ions using standard fluorescence techniques \cite{steve}. Given perfect mode-matching of the input single photon wavepackets on the beamsplitter, detection of a photon at each output port of the beamsplitter corresponds to a successful measurement of photons in the state \cite{mandel} \begin{equation} \Psi_{ph} = \frac{1}{\sqrt{2}}(| H \rangle | V \rangle - | V \rangle | H \rangle). \end{equation} This projects ions ($a$) and ($b$) onto the entangled state \cite{simon,davidjosa} \begin{equation} \Psi_{ion} = \frac{1}{\sqrt{2}}(| 1,1 \rangle_a | 1, -1 \rangle_b - |1, -1 \rangle_a | 1, 1 \rangle_b ). \label{eq:ion_entangle} \end{equation} Due to the Zeeman splitting of the ground states of the ion, the emitted $\sigma^{+}$ and $\sigma^{-}$ polarized photons have a frequency difference of about 13 MHz. Nevertheless, it is still possible to obtain the entangled state of Eq.~(\ref{eq:ion_entangle}) if the photons with the same polarization also have the same frequency. Quantum state tomography of the atomic qubits requires the ability to detect the state of each ion in an arbitrary basis. For this we first apply a resonant microwave $\pi$ pulse to transfer the population from $^{2}S_{1/2}| 1, -1 \rangle$ to the $^{2}S_{1/2}| 0, 0 \rangle$ state.
Next, a second microwave pulse resonant with the $^{2}S_{1/2}|0, 0\rangle \leftrightarrow {^{2}S_{1/2}}| 1, 1 \rangle$ transition, with a controlled duration and phase relative to the first pulse, is applied to perform a qubit rotation (see Fig.~2). As a result, the state of the ion is transformed as follows: \begin{eqnarray} \cos (\theta_i / 2) | 1, 1 \rangle + \sin (\theta_i / 2) e^{i \phi} |1, -1 \rangle & \rightarrow & |1, 1\rangle \nonumber \\ - \sin (\theta_i / 2) e^{- i \phi} | 1, 1 \rangle + \cos (\theta_i / 2) |1, -1 \rangle & \rightarrow & |0, 0\rangle. \end{eqnarray} Here $\theta_i$ is proportional to the duration of the second microwave pulse, and $\phi$ is the relative phase between the first and second pulses \cite{david2,david1,blinov}. \begin{figure} \caption{The ion-photon entanglement scheme and the sequence of microwave pulses for ion qubit manipulations.} \end{figure} The fluctuations of the magnetic field at the position of the ion can change the phase that each ion acquires before state detection. To keep the magnitude of the magnetic field constant, the experimental sequence is interrupted about 20 times a second to perform a Ramsey experiment. The ions in each trap are first optically pumped to the $| 0, 0 \rangle$ ground state. Two microwave $\pi/2$ pulses resonant with the $|0, 0 \rangle \leftrightarrow | 1, -1 \rangle$ transition separated by 200 $\mu s$ are applied and then the state of each ion is detected. The probability to find the ion in the $|1,-1\rangle$ state is continuously monitored and the current in the bias coil is adjusted to keep the magnetic field magnitude constant. We estimate that fluctuations of the magnetic field do not exceed 1 mG over the several days of the experiment. Following microwave rotations, the state of the Yb$^{+}$ ion is detected. The 369.5~nm light resonant with the ${^2}S_{1/2}, F = 1 \rightarrow {^2}P_{1/2}, F = 0$ transition impinges on the ion and the fluorescence is detected with a PMT.
Ideally, if an ion is in the $F = 1$ state, it scatters this light. On the other hand, if the ion is in the $F = 0$ state, it remains dark, allowing the quantum state of the atomic qubit to be distinguished with an efficiency of about 98\% \cite{steve}. It is important to note that unlike single photon detection, every attempt to detect the state of an ion gives a result. The efficiency quoted here is the probability that this result is correct. To verify that the emitted photon is indeed entangled with the ion, we temporarily add an additional half-wave or quarter-wave plate and a polarizer (see Fig.~\ref{fig:setup}a). In this case the ion manipulation and detection sequence is triggered on the detection of a single photon. \begin{figure} \caption{\label{fig:ion_photon_state} Results of quantum state tomography of the ion-photon system.} \end{figure} Quantum state tomography is performed for full characterization of the state of the ion-photon system \cite{weinfurter,wilk}. We chose to measure both the ion and the photon states in the $\{\sigma_j \ \sigma_k; j,k = x,y,z \}$ bases. Each measurement is integrated for 100 seconds with an average rate of about 25 ion-photon entanglement events per second. The state tomography algorithm follows the maximum likelihood estimation technique described in \cite{kwiat}, with the result shown in Fig.~\ref{fig:ion_photon_state}. From the reconstructed density matrix, we calculate the entanglement fidelity $F_{ip} = 0.925 \pm 0.003$, concurrence $C_{ip} = 0.861 \pm 0.006$ and entanglement of formation $E_{Fip} = 0.805 \pm 0.008$. We have also measured a Bell inequality parameter $S$ for our ion-photon system \cite{david2}. The result of the measurement ($S = 2.54 \pm 0.02 > 2$) clearly violates the Clauser-Horne-Shimony-Holt version of the Bell inequality \cite{chsh}, described below. This high measured entanglement fidelity between a single ion and a single photon allows us to establish entanglement between two remote ions in violation of a Bell inequality.
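The heralding step described by Eqs.~(2)--(4) can be checked with a short numerical sketch. This is an illustrative calculation, not part of the experiment: it assumes \texttt{numpy} is available, and the basis ordering (ion $\{|1,1\rangle, |1,-1\rangle\}$, photon $\{|V\rangle, |H\rangle\}$) is a convention of the sketch.

```python
import numpy as np
from functools import reduce

kron = lambda *vs: reduce(np.kron, vs)
e0, e1 = np.eye(2)  # ion: |1,1> = e0, |1,-1> = e1; photon: |V> = e0, |H> = e1

# Eq. (2): state of each ion-photon pair behind the quarter-wave plate.
psi_ip = (kron(e0, e0) - 1j * kron(e1, e1)) / np.sqrt(2)

# Both pairs, reordered from (ion_a, ph_a, ion_b, ph_b) to (ions, photons).
joint = np.kron(psi_ip, psi_ip).reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(16)

# Eq. (3): photonic state heralded by the joint detection, (|HV> - |VH>)/sqrt(2).
psi_ph = (kron(e1, e0) - kron(e0, e1)) / np.sqrt(2)

# Project the photonic factors onto psi_ph and renormalize the ion part.
ion_state = joint.reshape(4, 4) @ psi_ph.conj()
p_herald = np.vdot(ion_state, ion_state).real
ion_state = ion_state / np.sqrt(p_herald)

# Eq. (4): the resulting ion-ion singlet (the sketch agrees up to a global phase).
psi_ion = (kron(e0, e1) - kron(e1, e0)) / np.sqrt(2)
fidelity = abs(np.vdot(psi_ion, ion_state)) ** 2
print(p_herald, fidelity)  # heralding probability ~0.25, fidelity ~1.0
```

For the ideal states, the projection succeeds with probability $1/4$ and leaves the two ions exactly in the singlet state of Eq.~(4).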
With two ions simultaneously excited, the photoelectric pulses from the PMTs on both output ports of a beamsplitter arriving within a $\pm 25$ ns coincidence window indicate a successful entanglement event. Following the second photoelectric pulse from the PMTs, the states of both ions are rotated and detected as described above. To verify the Bell inequality violation, we keep the phase $\phi$ for both ions at $0$ and vary $\theta$. Following Clauser-Horne-Shimony-Holt (CHSH) \cite{chsh}, we calculate the correlation function $E(\theta_a, \theta_b)$ given by \begin{equation} E(\theta_a, \theta_b) = p(\theta_a, \theta_b) + p(\theta_a^{\bot}, \theta_b^{\bot}) - p(\theta_a^{\bot}, \theta_b) - p(\theta_a, \theta_b^{\bot}), \end{equation} where $p(\theta_a, \theta_b)$ is the probability to find ion ($a$) in the state $ \cos (\theta_a / 2) |1,1\rangle + \sin (\theta_a / 2) |1,-1\rangle $ and ion ($b$) in the state $ \cos (\theta_b / 2) |1,1\rangle + \sin (\theta_b / 2) |1,-1\rangle $, with $\theta_{a,b}^{\bot} = \theta_{a,b} + \pi$. The CHSH version of a Bell inequality states that for all local realistic theories \begin{equation} S =| E(\theta_a, \theta_b) + E(\theta'_a, \theta_b) | + | E(\theta_a, \theta'_b) - E(\theta'_a, \theta'_b) | \leq 2. \end{equation} The result of the Bell inequality measurement is given in Table~\ref{tab:ion_ion}, based on 2276 coincidence events. On average, we observed 1 entanglement event every 39~s. The result $S = 2.22 \pm 0.07$ represents a Bell inequality violation by more than three standard deviations. Since every heralded entanglement event is followed by the measurement of the qubit states, the Bell inequality violation is observed with the detection loophole closed. \begin{figure} \caption{\label{fig:ion_ion} Results of quantum state tomography of the two-ion state.} \end{figure} \begin{table} \caption{\label{tab:ion_ion} Measured correlation function $E(\theta_a, \theta_b)$ and CHSH parameter $S$ for the ion-ion state.
Errors are based on the statistics of the photon counting events.} \begin{ruledtabular} \begin{tabular}{ccc} $\theta_a$ & $\theta_b$ & $E(\theta_a, \theta_b)$ \\ \hline $\pi/2$ & $\pi/4$ & $-0.518 \pm 0.036$ \\ $\pi/2$ & $3 \pi /4$ & $-0.546 \pm 0.034$ \\ 0 & $\pi/4$ & $-0.581 \pm 0.034$ \\ 0 & $3 \pi /4$ & $0.573 \pm 0.035$ \\ & & $S=2.22 \pm 0.07$ \\ \end{tabular} \end{ruledtabular} \end{table} We also performed state tomography for the entangled state of the two ions. As in the ion-photon case, ion measurements were performed in the $\{\sigma_j \ \sigma_k; j,k = x,y,z \}$ bases, with the result shown in Fig.~\ref{fig:ion_ion}. From this density matrix we estimate the entanglement fidelity $F_{ii} = 0.813 \pm 0.015$, concurrence $C_{ii} = 0.64 \pm 0.03$, and entanglement of formation $E_{Fii} = 0.52 \pm 0.04$. The ion-photon entanglement fidelity is mainly limited by the detection efficiency of the ion state (3\% decrease of fidelity), a fluctuating ambient magnetic field that causes ion dephasing (2\%), imperfect compensation of the stress-induced birefringence in the viewports of our vacuum chambers and imperfections in polarization control for light propagating through the fibers (1\%), decay of the ion to the $m_F = 0$ state due to the nonzero solid angle of photon collection (1\%), and PMT dark counts ($ < 0.5 \%$). In addition, the finite interference contrast of the interferometer reduces the two-ion entanglement fidelity by $(9 \pm 3)\%$ compared to the value $F_{ii}^{ideal} = 86\%$ expected for ideal interference between the photons, calculated from the reconstructed ion-photon state. The 13-fold improvement in the entanglement generation rate compared to our previous experiment \cite{david1} is due to a different excitation scheme that allows the transfer of all the population to the excited state and a different direction for photon collection with respect to the applied magnetic field that does not require polarization filtering of the collected photon.
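As a quick consistency check (a sketch using only the standard library), the CHSH parameter can be recomputed from the four measured correlation functions in Table~\ref{tab:ion_ion}, taking $\theta_a \in \{\pi/2, 0\}$ and $\theta_b \in \{\pi/4, 3\pi/4\}$:

```python
from math import pi

# Measured correlation functions E(theta_a, theta_b) from Table I.
E = {
    (pi / 2, pi / 4): -0.518,
    (pi / 2, 3 * pi / 4): -0.546,
    (0.0, pi / 4): -0.581,
    (0.0, 3 * pi / 4): 0.573,
}

# S = |E(a,b) + E(a',b)| + |E(a,b') - E(a',b')|
# with a = pi/2, a' = 0, b = pi/4, b' = 3*pi/4.
S = abs(E[(pi / 2, pi / 4)] + E[(0.0, pi / 4)]) \
    + abs(E[(pi / 2, 3 * pi / 4)] - E[(0.0, 3 * pi / 4)])
print(round(S, 2))  # 2.22 > 2
```

The central value reproduces the quoted $S = 2.22$; the quoted uncertainty $\pm 0.07$ comes from the photon counting statistics and is not propagated here.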
Here we have successfully extended the separation between entangled particles by more than 5 orders of magnitude compared to the previous Bell inequality test performed with the detection loophole closed \cite{wineland}. However, a much larger separation between the ions or a shorter detection time is necessary for a loophole-free test of a Bell inequality. For example, a realistic 50 $\mu$s detection time would require a 15 km separation between the ions \cite{david2}. This may be difficult, since absorption of 369.5 nm light in the optical fiber is relatively large ($\simeq 0.2$ dB/m). Therefore, frequency conversion \cite{tanzilli} or free-space light transmission may be an alternative solution. Entanglement over larger distances could also be generated using the quantum repeater protocol \cite{briegel}. Remote entanglement of atomic qubits is also an important step towards the implementation of scalable quantum information processing and scalable quantum communication networks. This work is supported by the NSA and the IARPA under Army Research Office contract, and the NSF Physics at the Information Frontier Program. \end{document}
\begin{document} \title{Generating $k$-independent variables in constant time} \author{ Tobias Christiani\\ \small \texttt{[email protected]}\\ \small IT University of Copenhagen \and Rasmus Pagh\\ \small \texttt{[email protected]}\\ \small IT University of Copenhagen \thanks{The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. [614331].} } \renewcommand\footnotemark{} \date{ } \maketitle \begin{abstract} The generation of pseudorandom elements over finite fields is fundamental to the time, space and randomness complexity of randomized algorithms and data structures. We consider the problem of generating $k$-independent random values over a finite field $\mathbb{F}$ in a word RAM model equipped with constant time addition and multiplication in $\mathbb{F}$, and present the first nontrivial construction of a generator that outputs each value in \emph{constant time}, not dependent on~$k$. Our generator has period length $|\mathbb{F}|\poly \log k$ and uses $k \poly(\log k) \log |\mathbb{F}|$ bits of space, which is optimal up to a $\poly \log k$ factor. We are able to bypass Siegel's lower bound on the time-space tradeoff for \mbox{$k$-independent} functions by a restriction to sequential evaluation. \end{abstract} \section{Introduction} Pseudorandom generators transform a short random seed into a longer output sequence. The output sequence has the property that it is indistinguishable from a truly random sequence by algorithms with limited computational resources. Pseudorandom generators can be classified according to the algorithms (distinguishers) that they are able to fool. An algorithm from a class of algorithms that is fooled by a generator can have its randomness replaced by the output of the generator, while maintaining the performance guarantees from the analysis based on the assumption of full randomness. 
When truly random bits are costly to generate or supplying them in advance requires too much space, a pseudorandom generator can reduce the time, space and randomness complexity of an algorithm. This paper presents an explicit construction of a pseudorandom generator that outputs a $k$-independent sequence of values in \emph{constant time} per value, not dependent on $k$, on a word RAM~\cite{hagerup1998}. The generator works over an arbitrary finite field that allows constant time addition and multiplication over $\mathbb{F}$ on the word RAM. Previously, the most efficient methods for generating $k$-independent sequences were either based on multipoint evaluation of degree $k-1$ polynomials, or on direct evaluation of constant time hash functions. Multipoint evaluation has a time complexity of $O(\log^{2} k \log \log k)$ field operations per value while hash functions with constant evaluation time use excessive space for non-constant $k$ by Siegel's lower bound~\cite{siegel2004}. We are able to get the best of both worlds: constant time generation and near-optimal seed length and space usage. \paragraph{Significance.} In the analysis of randomized algorithms and in the hashing literature in particular, $k$-independence has been the dominant framework for limited randomness. Sums of $k$-independent variables have their $j$th moment identical to fully random variables for $j \leq k$ which preserves many properties of full randomness. For output length $n$, $\Theta(\log n)$-independence yields Chernoff-Hoeffding bounds~\cite{schmidt1995} and random graph properties~\cite{alon2008}, while $\Theta(\poly \log n)$-independence suffices to fool $AC^{0}$ circuits~\cite{braverman2010}. Our generator is particularly well suited for randomized algorithms with time complexity $O(n)$ that use a sequence of $k$-independent variables of length $n$, for non-constant $k$. 
For such algorithms, the generation of $k$-independent variables in constant time by evaluating a hash function over its domain requires space $O(n^{\epsilon})$ for some constant $\epsilon > 0$. In contrast, our generator uses space $O(k \poly \log k)$ to support constant time generation. Algorithms for randomized load balancing, such as the simple process of randomly throwing $n$ balls into $n$ bins, fit the above description and present an application of our generator. Using the bounds by Schmidt et al.~\mbox{\cite[Theorem 2]{schmidt1995}}, it is easy to show that $\Theta(\log n / \log \log n)$-independence suffices to obtain a maximal load of any bin of $O(\log n / \log \log n)$ with high probability. This guarantee on the maximal load is asymptotically the same as under full randomness. Using our generator, we can allocate each ball in constant time using space $O(\log n \poly \log \log n)$, compared to the $\Omega(n^{\epsilon})$ space lower bound for hashing-based approaches to generating $k$-independence. In Section \ref{sec:loadbalancing} we show how our generator improves upon existing solutions to a dynamic load balancing problem. The generation of pseudorandomness for Monte Carlo experiments presents another application. Limited independence between Monte Carlo experiments can be shown to yield Chernoff-like bounds on the deviation of an estimator from its expected value. Consider a randomized algorithm~$\mathcal{A}(Y)$ that takes $m$ random elements from $\mathbb{F}$ encoded as a string $Y$ and returns a value in the interval $[0,1]$. Let $\mu_{\mathcal{A}} > 0$ denote the expectation of the value returned by~$\mathcal{A}(Y)$ under the assumption that $Y$ encodes a truly random input. Define the estimator \begin{equation} \hat{\mu}_{\mathcal{A}} = \frac{1}{t}\sum_{i=1}^{t}\mathcal{A}(Y_{i}). \end{equation} Due to a result by Schmidt et al.
\cite[Theorem 5]{schmidt1995}, for every choice of constants $\epsilon, \alpha > 0$, it suffices that $Y_{1}, \dots, Y_{t}$ encodes a sequence of $\Theta(m \log t)$-independent variables over $\mathbb{F}$ to yield the following high probability bound on the deviation of $\hat{\mu}_{\mathcal{A}}$ from $\mu_{\mathcal{A}}$: \begin{equation} \Pr[|\hat{\mu}_{\mathcal{A}} - \mu_{\mathcal{A}}| \geq \epsilon\mu_{\mathcal{A}}] \leq O(t^{-\alpha}). \end{equation} We hope that our generator can be a useful tool to replace heuristic methods for generating pseudorandomness in applications where theoretical guarantees are important. In order to demonstrate the practicality of our techniques, we present experimental results on a variant of our generator in Section \ref{sec:experiments}. Our experiments show that $k$-independent values can be generated nearly as fast as output from heuristic pseudorandom generators, even for large $k$. \paragraph{Methods.} Our construction is a surprisingly simple combination of bipartite unique neighbor expanders with multipoint polynomial evaluation. The basic, probabilistic construction of our generator proceeds in two steps: First we use multipoint evaluation to fill a table with \mbox{$\Theta(k)$-independent} values from a finite field, using an average of $\poly \log k$ operations per table entry. Next we apply a bipartite unique neighbor expander with constant outdegree and with right side nodes corresponding to entries in the table and a left side that is $\poly \log k$ times larger than the right side. For each node in the left side of the expander we generate a $k$-independent value by returning the sum of its neighboring table entries. Our main result stated in Theorem~\ref{thm:explicit} uses the same idea, but instead of relying on a single randomly constructed expander graph, we employ a cascade of explicit constant-degree expanders and show that this is sufficient for constant time generation.
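The two-step construction sketched above can be illustrated in a few lines. This is a toy sketch, not the explicit construction of the main theorem: the table is filled by plain Horner evaluation of a random polynomial rather than by fast multipoint evaluation, and a randomly sampled constant-outdegree bipartite graph stands in for the unique neighbor expander, so the output is only guaranteed $k$-independent when the sampled graph happens to be a good expander. All parameter values below are illustrative choices.

```python
import random

p = (1 << 61) - 1      # the prime field F_p
k = 64                 # target independence
d = 3                  # constant left outdegree of the bipartite graph
R = 2 * k              # table size: Theta(k) right-side nodes
L = 16 * R             # left side, poly log k times larger in the paper

rng = random.SystemRandom()

# Step 1: fill the table with Theta(k)-independent field elements by
# evaluating a random degree-(2k-1) polynomial (the paper instead uses
# fast multipoint evaluation to make this cheap per entry).
coeffs = [rng.randrange(p) for _ in range(2 * k)]

def poly(x):
    acc = 0
    for c in coeffs:   # Horner's rule
        acc = (acc * x + c) % p
    return acc

table = [poly(x) for x in range(R)]

# Step 2: the value for left node i is the sum of its d neighboring
# table entries -- a constant amount of work per emitted value.
graph = [[rng.randrange(R) for _ in range(d)] for _ in range(L)]

def emit(i):
    return sum(table[j] for j in graph[i]) % p

stream = [emit(i) for i in range(L)]
```

In the paper the graph is not left to chance: Theorem~\ref{thm:explicit} replaces the random graph by a cascade of explicit constant-degree expanders.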
\paragraph{Relation to the literature.} Though the necessary ingredients have been known for around 10 years, we believe that a constant time generator has evaded discovery by residing in a blind spot between the fields of hashing and pseudorandom generators. The construction of constant time $k$-independent \emph{hash functions} has proven to be a difficult task, and a fundamental result by Siegel~\cite{siegel2004} established a time-space tradeoff that requires hashing-based generators with sequence length $n$ to use $O(n^{\epsilon})$ space for some constant $\epsilon > 0$. On the other hand, from the point of view of pseudorandom generators, a generator of $k$-independent variables, for non-constant~$k$, cannot be used as an efficient method of derandomization: A lower bound by Chor et al.~\cite{chor1985} shows that the sample space of such generators must be superpolynomial in their output length. Consequently, research shifted towards generators that produce other types of outputs such as biased sequences or almost $k$-independent variables \cite{alon1992, naor1993, goldreich2010}. It is relevant to ask whether there already exist constructions of constant time pseudorandom generators on the word RAM that can be used instead of generators that output $k$-independent variables. For example, Nisan's pseudorandom generator~\cite{nisan1992} uses constant time to generate a pseudorandom word and has remarkably strong properties: Every algorithm running in $\textsc{space}(s)$ that uses $n$ random words can have its random input replaced by the output of a constant time generator with seed length $O(s \log n)$. The probability that the outcome of the algorithm differs when using pseudorandomness as opposed to statistical randomness decreases exponentially with the seed length. In spite of this strong result, there are many natural applications where the restrictions of Nisan's model mean that we cannot use his generator directly to replace the use of a $k$-generator.
An example is the analysis that uses a union bound over all subsets of $k$ words of a randomly generated structure described by $n$ words. Algorithms shown to be derandomized by Nisan's generator are restricted to one-way access to the output of the generator. Therefore the output of Nisan's generator can not be used to derandomize an algorithm that tests for the events of the union bound without using excessive space. In this case, \mbox{$k$-independence} can directly replace the use of full randomness without changing the analysis. \subsection{Our contribution} We present three improved constructions of \emph{$k$-generators}, formally defined in Section~\ref{sec:preliminaries}, that are able to generate a sequence of $k$-independent values over a finite field $\mathbb{F}$. Our results are stated in a word RAM model equipped with constant time addition and multiplication in $\mathbb{F}$. Our main result is a fully explicit generator: \begin{theorem}\label{thm:explicit} For every finite field $\mathbb{F}$ with constant time arithmetic there exists a data structure that for every choice of $k \leq |\mathbb{F}| /\! \poly \log |\mathbb{F}|$ is an explicit constant time $k$-generator. The generator has range $\mathbb{F}$, period $|\mathbb{F}|\poly \log k$, and seed length, space usage and initialization time $k \poly \log k$. \end{theorem} We further investigate how the space usage and seed length may be reduced by employing a probabilistic construction that has a certain probability of error: \begin{theorem} \label{thm:existence} For every finite field $\mathbb{F}$ with constant time arithmetic and every choice of positive constants $\varepsilon$, $\delta$ there exists a data structure that for every choice of $k = O(|\mathbb{F}|)$ is a constant time $k$-generator with failure probability $\delta$, range~$\mathbb{F}$, period $|\mathbb{F}|$, seed length $O(k)$, space usage $O(k \log^{2+\varepsilon}k)$, and initialization time $O(k \poly \log k)$. 
\end{theorem} Finally, we improve existing $k$-generators with optimal space complexity: \begin{theorem} \label{thm:fastmultipoint} For every finite field $\mathbb{F}$ that supports computing the discrete Fourier transform of length $k$ in $O(k \log k)$ operations, there exists a data structure that, for every choice of $k \leq |\mathbb{F}|$ and given a primitive element $\omega$, is an explicit $O(\log k)$~time $k$-generator with range $\mathbb{F}$, period $|\mathbb{F}|$, seed length $k$, space usage $O(k)$, and initialization time $O(k \log k)$. \end{theorem} Table~\ref{tab:results} summarizes our results along with previous methods of generating sequences of $k$-independent values over $\mathbb{F}$. All the methods output sequences that have a length of at least~$|\mathbb{F}|$. \begin{table}[htpb] \centering \small \begin{tabular}{lllll} \toprule {\bf Construction} & {\bf Time} & {\bf Space} & {\bf Seed length} & {\bf Comment} \\ \midrule Polynomials \cite{joffe1974,wegman1981} & $O(k)$ & $O(k)$ & $k$ & \\ Multipoint \cite{gathen2013} & $O(\log^2 k \log\log k)$ & $O(k \log k)$ & $k$ & \\ Multipoint \cite{bostan2005} & $O(\log k \log\log k)$ & $O(k)$ & $k$ & Requires $\omega$. \\ Siegel \cite{siegel2004} & $O(1)$ & $O(|\mathbb{F}|^{\varepsilon})$ & $O(k)$ & Probabilistic. \\ Theorem \ref{thm:explicit} & $O(1)$ & $k \poly \log k$ & $k \poly \log k$ & Explicit. \\ Theorem \ref{thm:existence} & $O(1)$ & $O(k \log^{2 + \varepsilon}k)$ & $O(k)$ & Probabilistic. \\ Theorem \ref{thm:fastmultipoint} & $O(\log k)$ & $O(k)$ & $k$ & Requires $\omega_{k}$, FFT. \\ \bottomrule \end{tabular} \caption{ Overview of generators that produce a $k$-independent sequence over a finite field $\mathbb{F}$. We use $\varepsilon$ to denote an arbitrary positive constant and $\omega$ and $\omega_{k}$ to denote, respectively, a primitive element and a $k$-th root of unity of $\mathbb{F}$. 
The unit for space and seed length is the number of elements of $\mathbb{F}$ that need to be stored, i.e., a factor $\log_2 |\mathbb{F}|$ from the number of bits. Probabilistic constructions rely on random generation of objects for which no explicit construction is known, and may fail with some probability. } \label{tab:results} \end{table} \paragraph{Overview of paper} In Section \ref{sec:preliminaries} we define \mbox{$k$-generators} and related concepts and review results that lead up to our main results. Section \ref{sec:explicit} presents the details of our explicit construction of constant time generators. In Section \ref{sec:probabilistic} we apply the same techniques with a probabilistic expander construction to obtain generators with improved space and randomness complexity. Section \ref{sec:faster} presents an algorithm for evaluating a polynomial over all elements of $\mathbb{F}$ that improves existing generators with optimal space. Section \ref{sec:wordRAM} shows how arithmetic over $\mathbb{F}_{p}$ can be implemented in constant time on a standard word RAM with integer multiplication and also reviews algorithms and the state of hardware support for $\mathbb{F}_{2^{w}}$. Section \ref{sec:loadbalancing} applies our generator to improve the time-space tradeoff of previous solutions to a load balancing problem. Section \ref{sec:experiments} presents experimental results on the generation time of different $k$-generators for a range of values of $k$. \section{Preliminaries} \label{sec:preliminaries} We begin by defining two fundamental concepts: \begin{definition} A sequence $(X_{1}, X_{2}, \dots, X_{n})$ of $n$ random variables with finite range $R$ is an \emph{$(n,k)$-sequence} if the variables at every set of $k$ positions in the sequence are independent and uniformly distributed over $R$. 
\end{definition} \begin{definition} A family of functions ${\mathcal{F} \subseteq \{f \mid f \colon U \to R \}}$ is \emph{$k$-independent} if for every set of $k$ distinct inputs $x_{1}, x_{2}, \dots, x_{k}$ it holds that $f(x_{1}), f(x_{2}), \dots, f(x_{k})$ are independent and uniformly distributed over $R$ when $f$ is selected uniformly at random from $\mathcal{F}$. We say that a function $f$ selected uniformly at random from $\mathcal{F}$ is a \emph{$k$-independent function}. \end{definition} We now give a formal definition of the generator data structure. \begin{definition} A \emph{$k$-generator} with range $R$, period $n$ and failure probability $\delta$ is a data structure with the following properties: \begin{itemize} \item[--] It supports an initialization operation that takes a random seed $s$ as input. \item[--] After initialization it supports an \texttt{emit()} operation that returns a value from $R$. \item[--] There exists a set $B$ such that $\Pr[s \in B] \leq \delta$ and conditioned on $s \not\in B$ the sequence $(X_{1}, X_{2}, \dots, X_{n})$ of values returned by \texttt{emit()} is an $(n,k)$-sequence. \end{itemize} A $k$-generator is \emph{explicit} if the initialization and emit operations have time complexity $\poly k$ and the probability of failure is zero. We refer to a $k$-generator as a constant time $k$-generator if the \texttt{emit()} operation has time complexity $O(1)$, independent of $k$. \end{definition} A $k$-generator differs from a data structure for representing a $k$-independent hash function by only allowing sequential access to the underlying $(n,k)$-sequence. It is this restriction on generators that allows us to obtain a better time-space tradeoff for the problem of generating $k$-independent variables than is possible by using a $k$-independent hash function directly as a generator.
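To make the interface concrete, the following Python sketch instantiates a $k$-generator with the classic polynomial construction reviewed in Section \ref{sec:hashing}: the seed is the $k$ coefficients of a degree $< k$ polynomial over $\mathbb{F}_p$, and \texttt{emit()} evaluates it at successive points by Horner's rule. The class name and the $O(k)$-time \texttt{emit()} are illustrative only; this is not one of the constant time generators of this paper.

```python
# A minimal sketch of the k-generator interface, instantiated with the
# classic polynomial construction: the seed is the k coefficients of a
# random degree < k polynomial over F_p, and emit() evaluates it at the
# next point by Horner's rule.  Note that emit() costs O(k) field
# operations per value; it illustrates the interface, not the constant
# time generators of this paper.

class PolynomialKGenerator:
    def __init__(self, p, seed):
        """p: a prime field size; seed: list of k coefficients in F_p."""
        self.p = p
        self.coeffs = list(seed)        # a_0, a_1, ..., a_{k-1}
        self.x = 0                      # next evaluation point

    def emit(self):
        # Horner: (((a_{k-1} x + a_{k-2}) x + ...) x + a_0) mod p
        acc = 0
        for a in reversed(self.coeffs):
            acc = (acc * self.x + a) % self.p
        self.x = (self.x + 1) % self.p  # period p
        return acc

g = PolynomialKGenerator(5, [3, 2])     # h(x) = 3 + 2x over F_5
out = [g.emit() for _ in range(5)]      # h(0), ..., h(4)
assert out == [3, 0, 2, 4, 1]
```

With a uniformly random seed of length $k = 2$, any two output positions are independent and uniform, matching the $(n,k)$-sequence definition above.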
We are interested in the following parameters of $k$-generators: seed length, period, probability of failure, space needed by the data structure, the time complexity of the initialization operation and the time complexity of a single \texttt{emit()} operation. \paragraph{Model of computation.} Our results are stated in the word RAM model of computation with word length ${w = \Theta(\log|\mathbb{F}|)}$ bits. In addition to the standard bit manipulation and integer arithmetic instructions, we also assume the ability to perform arithmetic operations $(+, -, \times)$ over $\mathbb{F}$ in constant time. In the context of our results that use abelian groups $(A, +)$ we assume that an element of $A$ can be stored in a constant number of words and that addition can be performed in constant time. Let $\mathbb{F}_{q}$ denote a field of cardinality $q = p^{z}$ for $p$ prime and $z$ a positive integer. Constant time arithmetic in $\mathbb{F}_{p}$ is supported on a standard word RAM with integer multiplication \cite{granlund1994}. Section \ref{sec:wordRAM} presents additional details about the algorithms required to implement finite field arithmetic over $\mathbb{F}_p$ and $\mathbb{F}_{2^{w}}$ and how they relate to a standard word RAM with integer multiplication. \subsection{$k$-independent functions from the literature} \label{sec:hashing} We now review the literature on $k$-independent functions and how they can be used to construct $k$-generators. We distinguish between a $k$-independent function $f : U \to R$ and a $k$-independent hash function by letting the latter refer to a data structure that after initialization supports random access to the $(n,k)$-sequence defined by evaluating $f$ over~$U$. There exists an extensive literature that focuses on how to construct $k$-independent hash functions that offer a favorable tradeoff between representation space and evaluation time \cite{dietzfelbinger2012}. 
We note that a family of $k$-independent hash functions can be used to construct a $k$-generator by setting the seed to a random function in the family. \paragraph{Constant time $k$-independent hash functions.} A fundamental cell probe lower bound by Siegel \cite{siegel2004} shows that a data structure to support constant time evaluation of $f$ on every input in $U$ cannot use less than $\Omega(|U|^{\epsilon})$ space for some constant $\epsilon > 0$. This bound holds even for amortized constant evaluation time over functions in the family and elements in the domain. From Siegel's lower bound, it is clear that we cannot use $k$-independent hash functions directly to obtain a constant time $k$-generator that uses only $O(k \poly \log k)$ words of space. Known constructions of $k$-independent hash functions with constant evaluation time are based on expander graphs. Siegel \cite{siegel2004} gave a probabilistic construction of a family of \mbox{$k$-independent} hash functions in the word RAM model based on an iterated product of bipartite expander graphs. Thorup \cite{thorup2013} showed that a simple tabulation hash function with high probability yields the type of expander graphs required by Siegel's construction. Unfortunately, only randomized constructions of the expanders required by these hash functions are known, introducing a positive probability of error in \mbox{$k$-generators} based on them. \paragraph{Polynomials.} Here we briefly review the classic construction of $k$-independent functions based on polynomials over finite fields. \begin{lemma}[Joffe \cite{joffe1974}, Carter and Wegman \cite{wegman1981}] \label{lem:kpoly} For every choice of finite field $\mathbb{F}$ and every $k \leq |\mathbb{F}|$, let $\mathcal{H}_{k} \subset \mathbb{F}[X]$ be the family of polynomials of degree at most $k-1$ over $\mathbb{F}$. ${\mathcal{H}_{k} \subset \{ f \mid f \colon \mathbb{F} \to \mathbb{F} \}}$ is a family of $k$-independent functions.
\end{lemma} An advantage of using families of polynomials as hash functions is that they use near optimal randomness, allow any choice of $k \leq |\mathbb{F}|$, and have no probability of failure. It can also be noted that in the case where $k = O(\log |\mathbb{F}|)$ and we are restricted to linear space $O(k)$, polynomial hash functions evaluated using Horner's scheme are optimal \mbox{$k$-independent} hash functions \cite{larsen2012, siegel2004}. Using slightly more space and for sufficiently large $k$, a data structure by Kedlaya and Umans \cite{kedlaya2008} supports evaluation of a polynomial of degree $k$ over $\mathbb{F}$. The space usage and preprocessing time of their data structure is $k^{1 + \epsilon}\log^{1 + o(1)}|\mathbb{F}|$ for constant $\epsilon > 0$. After preprocessing a polynomial $f$, the data structure can evaluate $f$ at an arbitrary point of $\mathbb{F}$ using time $\poly(\log k)\log^{1 + o(1)}|\mathbb{F}|$. \paragraph{Multipoint evaluation.} Using algorithms for multipoint evaluation of polynomials we are able to obtain a \mbox{$k$-generator} with $\poly \log k$ generation time and space usage that is linear in $k$. Multipoint evaluation of a polynomial~${f \in \mathbb{F}[X]}$ of degree at most $k-1$ at $k$ arbitrary points of~$\mathbb{F}$ has a time complexity of $O(k \log^{2} k \log \log k)$ in the word RAM model that supports field operations~\mbox{\cite[Corollary 10.8]{gathen2013}}. Bostan and Schost \cite{bostan2005} mention an algorithm for multipoint evaluation of $f$ over a geometric progression of $k$ elements with running time $O(k \log k \log \log k)$. In order to use this method to construct a $k$-generator with period $|\mathbb{F}|$ it is necessary to know a primitive element $\omega$ of $\mathbb{F}_{q}$ so we can perform multipoint evaluation over $\mathbb{F}^{*} = \{\omega^{0}, \omega^{1}, \dots, \omega^{q-2} \}$.
Given the prime factorization of $q - 1$ there exists a Las Vegas algorithm for finding $\omega$ with expected running time $O(\log^{4} q)$~\mbox{\cite[Chapter 11]{shoup2009}}. In the following lemma we summarize the properties of $k$-generators based on multipoint evaluation of polynomials over finite fields. \begin{lemma}[{Gathen and Gerhard \cite[Corollary 10.8]{gathen2013}, Bostan and Schost \cite{bostan2005}}] \label{lem:multipoint} For every finite field $\mathbb{F}$ there exists for every $k \leq |\mathbb{F}|$ and every bijection $\pi : [|\mathbb{F}|] \to \mathbb{F}$ an explicit $k$-generator with period $|\mathbb{F}|$ and seed length $k$. The space required by the generator and the initialization and generation time depend on the choice of $\pi$ and multipoint evaluation algorithm. \begin{itemize} \item[--] For an arbitrary choice of $\pi$ there exists a $k$-generator with generation time $O(\log^{2} k \log \log k)$, initialization time $O(k \log^{2} k \log \log k)$ and space usage $O(k \log k)$. \item[--] Given a primitive element $\omega$ of $\mathbb{F}$ and the bijection $\pi(i) = \omega^{i}$ there exists a generator with generation time $O(\log k \log \log k)$, initialization time $O(k \log k \log \log k)$ and space usage $O(k)$. \end{itemize} \end{lemma} \paragraph{Space lower bounds.} Since randomness can be viewed as a resource like time and space, we are naturally interested in generators that can output long $k$-independent sequences using as few random bits as possible. Families of \mbox{$k$-independent functions} $f : U \rightarrow R$ with $U = R$ and $k \leq |U|$ will trivially have to use at least $k \log |U|$ random bits --- a bound matched by polynomial hash functions. We are often interested in generators with $|U| \gg |R|$, for example if we wish to use a generator for randomized load balancing in the heavily loaded case.
A lower bound by Chor et al.~\cite{chor1985} shows that even in this case the minimal seed length required for $k$-independence is $\Omega(k \log |U|)$ for every $|R| \leq |U|$. \subsection{Expander graphs} All graphs in this paper are bipartite with $cm$ vertices on the left side, $m$ vertices on the right side and left outdegree $d$. Graphs are specified by their edge function $\Gamma : [cm] \times [d] \to [m]$ where the notation $[n]$ is used to denote the set $\{0,1,\dots,n-1\}$. Let $S$ be a subset of left side vertices. For convenience we use $\Gamma(S)$ to denote the neighbors of $S$. \begin{definition} The bipartite graph $\Gamma : [cm] \times [d] \to [m]$ is \mbox{\emph{$(c,m,d,k)$-unique}} ($k$-unique) if for every $S \subseteq [cm]$ with $|S| \leq k$ there exists $y \in \Gamma(S)$ such that $y$ has a unique neighbor in $S$. An expander graph is \emph{explicit} if it has a deterministic description and $\Gamma$ is computable in time polynomial in $\log cm + \log d$. \end{definition} The performance of our generator constructions is directly tied to the parameters of such expanders. In particular, we would like explicit expanders that simultaneously have a low outdegree $d$, are highly unbalanced and are $k$-unique for $k$ as close to $m$ as possible. A direct application of a result by Capalbo et al. \cite[Theorem 7.1]{capalbo2002} together with an equivalence relation between different types of expander graphs from Ta-Shma et al. \cite[Theorem 8.1]{tashma2007} yields explicit constructions of unbalanced unique neighbor expanders.\footnote{We state the results here without the restriction from \cite{capalbo2002} that $c$ and $m$ are powers of two. We do this to simplify notation and it only affects constant factors in our results.} \begin{lemma}[Capalbo et al.
{\cite[Theorem 7.1]{capalbo2002}}] \label{lem:explicit} For every choice of $c$ and $m$ there exists a $(c,m,d,k)$-unique expander with $d = \poly \log c$ and $k = \Omega(m/d)$. For constant $c$ the expander is explicit. \end{lemma} We note the following simple technique for constructing a larger $k$-unique expander from a smaller $k$-unique expander. \begin{lemma} \label{lem:stacking} Let $\Gamma$ be a $(c,m,d,k)$-unique expander with $cm \times m$ adjacency matrix $\mat{M}$. For any positive integer $b$ define $\Gamma^{(b)}$ as the bipartite graph with block diagonal adjacency matrix $\mat{M}^{(b)} = \diag(\mat{M}, \dots, \mat{M})$ with $b$ blocks in the diagonal. Then $\Gamma^{(b)}$ is a $(c, bm, d, k)$-unique expander. \end{lemma} \paragraph{From expanders to independence.} By associating each right vertex in a $(c,m,d,k)$-unique expander with a position in an $(m,dk)$-sequence over an abelian group $(A,+)$, we can generate a $(cm,k)$-sequence over $A$. This approach was pioneered by Siegel and has been used in different constructions of families of $k$-independent hash functions~\cite{siegel2004, thorup2013}. \begin{lemma}[Siegel {\cite[Lemma 2.6, Corollary 2.11]{siegel2004}}] \label{lem:expanderhashing} Let $\Gamma : [cm] \times [d] \to [m]$ be a $k$-unique expander and let $h : [m] \rightarrow A$ be a $dk$-independent function with range an abelian group. Let $g : [cm] \rightarrow A$ be defined as \begin{equation} g(x) = \sum_{y \in \Gamma(\{x\})}h(y). \end{equation} Then $g$ is a $k$-independent function. \end{lemma} \section{Explicit constant time generators} \label{sec:explicit} In this section we show how to obtain a constant time \mbox{$k$-generator} by combining an explicit $\poly k$-generator with a cascading composition of unbalanced unique neighbor expanders.
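The basic expander-to-independence step of Lemma \ref{lem:expanderhashing}, which this composition iterates, can be sketched in a few lines of Python. The tiny $6 \times 4$ graph and the fixed table for $h$ below are illustrative stand-ins; they are not an actual $k$-unique expander or a $dk$-independent function.

```python
# Sketch of the expander-to-independence step: given the neighbor lists
# of a bipartite graph Gamma and a table for a dk-independent function h
# into an abelian group (here Z/7Z under addition), define g(x) as the
# sum of h over the neighbors of the left vertex x.  The 6-by-4 graph
# and the fixed h-table are toy stand-ins, not a k-unique expander.

M = 7                                        # the abelian group Z/7Z

gamma = {0: [0, 1], 1: [1, 2], 2: [2, 3],    # left vertex -> neighbors;
         3: [3, 0], 4: [0, 2], 5: [1, 3]}    # left degree d = 2, m = 4

h = [1, 2, 3, 4]                             # values of h on the right side

def g(x):
    return sum(h[y] for y in gamma[x]) % M

values = [g(x) for x in range(6)]            # g over the left side [cm]
assert values == [3, 5, 0, 5, 4, 6]
```

The point of $k$-uniqueness is that for any small left subset, some summand $h(y)$ appears exactly once, so the corresponding outputs inherit uniformity and independence from $h$.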
Our technique works by generating a small number of highly independent elements in an abelian group and then successively applying constant degree expanders to produce a greater number of less independent elements. We continue this process up until the point where the final number of elements is large enough to match the cost of generating the smaller batch of highly independent elements. The generator has two components. The first component is an explicit $m$-generator $g_{0} : [n] \to A$ with period $n$ and range an abelian group $A$. The second component is an explicit sequence $\left(\Gamma_{i}\right)^{t}_{i = 1}$ of unbalanced unique neighbor expanders. The expanders are constructed such that the left side of the $i$th expander matches the right side of the $(i+1)$th expander. By Lemma \ref{lem:explicit}, for every choice of imbalance $c$, target independence $k$ and length of the expander sequence $t$ there exists a sequence of expanders with the property that \begin{equation} \Gamma_{i} \text{ is } (c, c^{i-1}m, d, d^{t-i}k)\text{-unique}, \label{eq:expandersequence} \end{equation} for $m = O(d^{t}k)$ and $d = \poly \log c$. For constant $c$ each expander in the sequence is explicit. We now combine the explicit $m$-generator $g_{0}$ and the sequence of expanders $\left(\Gamma_{i}\right)^{t}_{i = 1}$ to define the $k$-independent function $g_{t}$. Let $b = n/m$ and assume for simplicity that $m$ divides $n$. For each $\Gamma_{i}$ we use the technique from Lemma \ref{lem:stacking} to construct a $(c, c^{i-1}n, d, d^{t-i}k)$-unique expander $\Gamma_{i}^{(b)}$. Let $x_{i}$ denote a number in $[c^{i}n]$ corresponding to a vertex in the left side of $\Gamma_{i}^{(b)}$. We are now ready to give a recursive definition of $g_{i} : [c^{i}n] \to A$. \begin{equation} g_{i}(x_{i}) = \sum \limits_{x_{i-1} \in \Gamma_{i}^{(b)}(\{x_{i}\})} g_{i-1}(x_{i-1}), \quad 1 \leq i \leq t.
\label{eq:gexplicit} \end{equation} \begin{lemma} $g_{i}$ is $d^{t-i}k$-independent. \end{lemma} \begin{proof} We proceed by induction on $i$. By definition, $g_{0} : [n] \to A$ is $d^{t}k$-independent. Assume by induction that $g_{i} : [c^{i}n] \to A$ is $d^{t-i}k$-independent. By definition $\Gamma_{i+1}^{(b)}$ is a $(c, c^{i}n, d, d^{t-(i+1)}k)$-unique expander. Applying Lemma \ref{lem:expanderhashing} we have that $g_{i+1} : [c^{i+1}n] \to A$ is $d^{t-(i+1)}k$-independent. \end{proof} We will now show that $g_{t}$ supports fast sequential evaluation and prove that we can use $g_{t}$ to construct an explicit constant time $k$-generator from any explicit \mbox{$m$-generator}, for an appropriate choice of $m$. Divide the domain of each $g_{i}$ evenly into $b = n/m$ batches of size $c^{i}m$ corresponding to each block of the adjacency matrix of $\Gamma_{i}$ used to construct $\Gamma_{i}^{(b)}$ and index the batches by $j \in [b]$. In order to evaluate $g_{i+1}$ over batch number $j$ it suffices to know $\Gamma_{i+1}$ and the values of $g_{i}$ over batch number $j$. Fast sequential evaluation of $g_{t}$ is achieved in the following steps. First we tabulate the sequence of expanders $\left(\Gamma_{i}\right)^{t}_{i = 1}$ such that $\Gamma_{i}(\{x_{i}\})$ can be read in $d$ operations. Secondly, to evaluate $g_{t}$ over batch $j$, we begin by tabulating the output of $g_{0}$ over batch $j$ and then successively apply our tabulated expanders to produce tables for the output of $g_{1}, g_{2}, \dots, g_{t}$ over batch $j$. Given tables for the sequence of expanders and assuming that the generator underlying $g_{0}$ has been initialized, we now consider the average number of operations used per output when performing batch-evaluation of $g_{t}$. The number of values output is $c^{t}m$. The cost of emitting $m$ values from $g_{0}$ is by definition at most $\poly(m)$.
The cost of producing tables for the output of $g_{1}, g_{2}, \dots, g_{t}$ for the current batch is given by $\sum_{i=1}^{t}dc^{i}m = O(dc^{t}m)$ for $c > 1$. The average number of operations used per output when performing batch-evaluation of $g_{t}$ is therefore bounded from above by \begin{equation} \frac{O(dc^{t}m) + \poly m}{c^{t}m} = O(d) + \frac{\poly m}{c^{t}}. \label{eq:averagetime} \end{equation} The following lemma states that we can obtain a constant time $k$-generator from every explicit $m$-generator by setting $t = O(\log k)$ and choosing $c$ to be an appropriately large constant. \begin{lemma} \label{lem:general} Let $A$ be an abelian group with constant time addition. Suppose there exists an explicit $m$-generator with range $A$, period $n$ and space usage $\poly m$. Then there exists a positive constant $\epsilon$ such that for every $k \leq m^{\epsilon}$ there exists an explicit constant time $k$-generator with range $A$, period $n$, and seed length, space usage and initialization time $\poly k$. \end{lemma} \begin{proof} The sequence of expanders $\left(\Gamma_{i}\right)^{t}_{i = 1}$ with the properties given in \eqref{eq:expandersequence} exists for $m = O(d^{t}k)$ and $d = \poly \log c$ and is explicit for $c$ constant. By inserting $m = O(d^{t}k)$ into equation \eqref{eq:averagetime} it can be seen that the average number of operations is constant for $c = O(1)$ and $t = O(\log k)$ with constants that depend on the parameters of the $m$-generator. The $k$-generator is initialized by initializing the $m$-generator, finding and tabulating the sequence of expanders and producing the first batch of values, all of which can be done in $\poly k$ time and space. After initialization, each call to \texttt{emit()} will return a value from the current batch and use a constant number of operations for the task of preparing the next batch of outputs.
\end{proof} We now show our main theorem about explicit constant time $k$-generators over finite fields. The construction uses an explicit $m$-generator based on multipoint evaluation. Combined with the approach of Lemma \ref{lem:general} this yields a near-optimal time-space tradeoff for $k$-generation. \begin{restate}[Repeated] For every finite field $\mathbb{F}$ with constant time arithmetic there exists a data structure that for every choice of $k \leq |\mathbb{F}| /\! \poly \log |\mathbb{F}|$ is an explicit constant time $k$-generator. The generator has range $\mathbb{F}$, period $|\mathbb{F}|\poly \log k$, and seed length, space usage and initialization time $k \poly \log k$. \end{restate} \begin{proof} Fix the choice of finite field $\mathbb{F}$. By Lemma~\ref{lem:multipoint} there exists an explicit $m$-generator in $\mathbb{F}$ for $m \leq |\mathbb{F}|$ with period $|\mathbb{F}|$ that uses time $O(m \log^{3} m)$ to emit $m$ values. Fix some constant $c > 1$ and let $\left(\Gamma_{i}\right)^{t}_{i = 1}$ denote an explicit sequence of constant degree expanders with the properties given by \eqref{eq:expandersequence}. The average number of operations per $k$-independent value output by $g_{t}$ when performing batch evaluation is given by \begin{equation} \frac{O(dc^{t}m) + O(m \log^{3} m)}{c^{t}m} = O(d) + \frac{O(\log^{3} d^{t}k)}{c^{t}}. \label{eq:averagetimefield} \end{equation} Setting $t = O(\log \log k)$ and following the approach of Lemma \ref{lem:general} we obtain a $k$-generator with the stated properties. \end{proof} Based on the discussion in a paper by Capalbo \cite{capalbo2005} that introduces unbalanced unique neighbor expanders for concrete values of $c$ and $d$, it appears likely that the constants hidden in Theorem \ref{thm:explicit} for the current best explicit constructions make our explicit generators unsuited for practical use, since $c$ is close to $1$ when $d$ is reasonably small.
The next section explores how randomly generated unique neighbor expanders can be used to show stronger existence results and yield $k$-generators with tractable constants. \section{Constant time generators with optimal seed length} \label{sec:probabilistic} Randomly constructed expanders of the type used in this paper have stronger properties than known explicit constructions, and can be generated with an overwhelming probability of success. There is no known efficient algorithm for verifying whether a given graph is a unique neighbor expander. Therefore randomly generated expanders cannot be used to replace explicit constructions without some probability of failure. In this section we apply the probabilistic method to show the existence of $k$-generators with better performance characteristics than those based on known explicit constructions of expanders. We are able to show the existence of constant time generators with optimal seed length that use $O(k\log^{2+\varepsilon}k)$ words of space for any constant $\varepsilon > 0$. Furthermore, such generators can be constructed for any choice of constant failure probability $\delta > 0$. The generators we consider in this section use only a single expander graph but are otherwise identical to the generators described in Section \ref{sec:explicit}. Using a single expander graph suffices for constant time generation because the probabilistic constructions are powerful enough to support an imbalance of $c = \poly \log k$ while maintaining constant degree. This imbalance is enough to amortize the cost of multipoint evaluation in a single expansion step, as opposed to the sequence of explicit expanders employed in Theorem \ref{thm:explicit}. Our arguments are a straightforward application of the probabilistic method, but we include them for completeness and because we are interested in somewhat nonstandard parameters. We consider the following randomized construction of a $(c,m,d,k)$-unique expander $\Gamma$.
For each vertex $x$ in $[cm]$, we add an edge between $x$ and each distinct node among $d$ nodes selected uniformly at random from $[m]$. By a standard argument, the graph can only fail to be a unique neighbor expander if there exists a subset $S$ of left hand side vertices with $|S| \leq k$ such that $|\Gamma(S)| \leq \lfloor d|S|/2 \rfloor$ \cite[Lemma 2.8]{siegel2004}. In the following we assume that $kd \leq m$. \begin{align} &\Pr[\Gamma \text{ is not a unique neighbor expander}] \notag \\ &\leq \Pr[\exists S \subseteq [cm], |S| \leq k : |\Gamma(S)| \leq \lfloor d|S|/2 \rfloor] \notag \\ &\leq \sum_{\substack{S \subseteq [cm]\\ |S| \leq k}} \Pr[|\Gamma(S)| \leq \lfloor d|S|/2 \rfloor] \notag \\ &\leq \sum_{i = 1}^{k} \binom{cm}{i} \binom{m}{\lfloor id/2 \rfloor} \left(\frac{\lfloor id/2 \rfloor}{m}\right)^{id} \notag \\ &\leq \sum_{i=1}^{k} \left(\frac{cme}{i}\right)^{i} \left(\frac{me}{id/2}\right)^{id/2} \left(\frac{id/2}{m}\right)^{id} \notag \\ &= \sum_{i=1}^{k} \left( cm e^{1 + d/2} \left(\frac{(d/2)i^{1 - 1/(d/2)}}{m}\right)^{d/2} \right)^{i} \label{eq:probexpander} \end{align} If the expression in the outer parentheses in \eqref{eq:probexpander} can be bounded from above by $1/2$ for $i = 1,2,\dots,k$, then the expander exists. We also note that the randomized expander construction can be performed using $dk$-independent variables without changing the result in~\eqref{eq:probexpander}. Let $\gamma > 1$ be a number that may depend on $k$ and let $\delta$ denote an upper bound on the probability that the randomized construction fails. By setting $m = O(dk\gamma)$ we are able to obtain the following expression for the relation between $\delta$, the imbalance $c$ and the left outdegree bound $d$. \begin{equation} \delta = e\frac{cd}{\gamma^{d/2 - 1}} \label{eq:expanderparameters} \end{equation} Equation \eqref{eq:expanderparameters} reveals tradeoffs for the parameters of the randomly constructed $k$-unique expander graphs.
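Since no efficient verification algorithm is known, a sampled graph can at least be sanity-checked by brute force on tiny instances, enumerating all small left subsets. The following Python sketch (with hand-built toy graphs, not sampled ones) shows such a check; it is exponential in $k$ and purely illustrative.

```python
from itertools import combinations
from collections import Counter

# Brute-force check of the k-uniqueness property: every left subset S
# with |S| <= k must have some right vertex with exactly one neighbor
# in S.  Exponential in k; only usable to sanity-check tiny graphs.

def is_k_unique(gamma, num_left, k):
    """gamma: dict mapping a left vertex to its list of right neighbors."""
    for size in range(1, k + 1):
        for S in combinations(range(num_left), size):
            degree = Counter(y for x in S for y in gamma[x])
            if 1 not in degree.values():
                return False             # S has no unique neighbor
    return True

# Each left vertex keeping a private right neighbor gives k-uniqueness:
good = {x: [x] for x in range(4)}
assert is_k_unique(good, 4, 4)
# Two left vertices with identical neighborhoods destroy 2-uniqueness:
bad = {0: [0], 1: [0], 2: [2], 3: [3]}
assert is_k_unique(bad, 4, 1) and not is_k_unique(bad, 4, 2)
```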
For example, increasing $\gamma$ makes it possible to make the graph more unbalanced while maintaining the same upper bound on the probability of failure $\delta$. The increased imbalance comes at the cost of an increase in $m$, the size of the right side of the graph. Similarly it can be seen how increasing $d$ can be used to reduce the probability of error. Setting the parameters to minimize the space occupied by the expander while maintaining constant outdegree and by extension constant generation time, we obtain Theorem~\ref{thm:existence}. \begin{restate}[Repeated] For every finite field $\mathbb{F}$ with constant time arithmetic and every choice of positive constants $\varepsilon$, $\delta$ there exists a data structure that for every choice of $k = O(|\mathbb{F}|)$ is a constant time $k$-generator with failure probability $\delta$, range~$\mathbb{F}$, period $|\mathbb{F}|$, seed length $O(k)$, space usage $O(k \log^{2+\varepsilon}k)$, and initialization time $O(k \poly \log k)$. \end{restate} \begin{proof} Let $\tilde{\varepsilon} < \varepsilon$ be a constant and set $\gamma = \log^{\tilde{\varepsilon}}k$. Choosing $d$ to be a sufficiently large constant (dependent on $\tilde{\varepsilon}$), equation \eqref{eq:expanderparameters} shows that for every $\delta > 0$ there exists a $(c, m, d, k)$-unique expander $\Gamma$ with $c = \Omega(\log^{2+\varepsilon}k)$ and $m = O(k\gamma)$. Using multipoint evaluation, the right side vertices of $\Gamma$ can be associated with $\Theta(k)$-independent variables over $\mathbb{F}$ using $O(k \log^{2 + \varepsilon}k)$ operations. By the properties of $\Gamma$ and applying Lemma \ref{lem:expanderhashing} we are able to generate batches of $k$-independent variables of size $\Omega(k\log^{2+\varepsilon}k)$ using $O(k \log^{2+\varepsilon}k)$ operations. The seed length of $O(k)$ holds by the observation that randomized construction of the expander only requires $O(k)$-independence.
The $O(k \poly \log k)$ initialization time is obtained by using multipoint evaluation to construct a table for $\Gamma$. \end{proof} \section{Faster multipoint evaluation for $k$-generators} \label{sec:faster} This section presents an improved generator based directly on multipoint evaluation of a polynomial hash function $h \in \mathcal{H}_{k}$ over a finite field. For our purpose of generating an $(n,k)$-sequence from $h$, we are free to choose the order of elements of $\mathbb{F}$ in which to evaluate $h$. We present an algorithm for the systematic evaluation of $h$ over disjoint size $k$ subsets of $\mathbb{F}$ using Fast Fourier Transform (FFT) algorithms. Our technique yields a $k$-generator over $\mathbb{F}$ with generation time $O(\log k)$, and space usage and seed length that is optimal up to constant factors. The algorithm depends upon the structure of $\mathbb{F}$, similarly to other FFT algorithms over finite fields \cite{bhattacharya2004}. The nonzero elements of $\mathbb{F}$ form a multiplicative cyclic group $\mathbb{F}^{*}$ of order $q-1$. The multiplicative group has a primitive element $\omega$ which generates $\mathbb{F}^{*}$. \begin{equation} \mathbb{F}^{*} = \{ \omega^{0}, \omega^{1}, \omega^{2}, \dots, \omega^{q-2} \}. \end{equation} For $k$ that divides $q-1$, we can construct a multiplicative subgroup $S_{k,0}^{*}$ of order $k$ with $\omega_{k} = \omega^{(q-1)/k}$ as the generating element. $S_{k,0}^{*}$ contains $k$ distinct elements of $\mathbb{F}$. Define for $j = 0,1, \dots, (q-1)/k - 1$, \begin{equation} S_{k,j}^{*} = \omega^{j}S_{k,0}^{*} = \{ \omega^{j}\omega_{k}^{0}, \omega^{j}\omega_{k}^{1}, \dots, \omega^{j}\omega_{k}^{k-1} \}. \end{equation} Viewed as subsets of $\mathbb{F}^{*}$ the sets $S_{k,j}^{*}$ form an exact cover of $\mathbb{F}^{*}$. We now consider how to evaluate a degree $k-1$ polynomial $h(x) \in \mathbb{F}[X]$ at the points of $S_{k,j}^{*}$.
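The evaluation reduces to twisting the coefficients by powers of $\omega^{j}$ and applying a DFT with root $\omega_{k}$, as made precise below. As a toy correctness check, take $\mathbb{F}_{17}$ (so $q - 1 = 16$) with primitive element $\omega = 3$ and $k = 4$, giving $\omega_{k} = 3^{4} = 13$; the Python sketch uses a naive $O(k^{2})$ DFT in place of an FFT and arbitrary example coefficients.

```python
# Toy check of twist-and-DFT evaluation of h over the coset
# S*_{k,j} = omega^j * {omega_k^0, ..., omega_k^{k-1}} in F_17.
# omega = 3 is primitive, k = 4, omega_k = 3^((17-1)/4) = 13 has order 4.
# A naive O(k^2) DFT stands in for the O(k log k) FFT; the coefficient
# vector a is an arbitrary example.

p, omega, k = 17, 3, 4
omega_k = pow(omega, (p - 1) // k, p)    # k-th root of unity: 13

def dft(a):                              # naive length-k DFT over F_p
    return [sum(a[i] * pow(omega_k, t * i, p) for i in range(k)) % p
            for t in range(k)]

def eval_coset(a, j):                    # h at omega^j * omega_k^t, t in [k]
    wj = pow(omega, j, p)
    twisted = [a[i] * pow(wj, i, p) % p for i in range(k)]
    return dft(twisted)

def eval_direct(a, x):                   # reference evaluation by Horner
    acc = 0
    for c in reversed(a):
        acc = (acc * x + c) % p
    return acc

a = [2, 7, 1, 5]                         # coefficients a_0, ..., a_{k-1}
for j in range(4):
    points = [pow(omega, j, p) * pow(omega_k, t, p) % p for t in range(k)]
    assert eval_coset(a, j) == [eval_direct(a, x) for x in points]
```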
The polynomial takes the form \begin{equation} h(x) = a_{0}x^{0} + a_{1}x^{1} + \dots + a_{k-1}x^{k-1}. \end{equation} Rewriting the polynomial evaluation over $S_{k,j}^{*}$ in matrix notation: \begin{equation} \colvec{ h(\omega^{j}\omega_{k}^{0}) \\ h(\omega^{j}\omega_{k}^{1}) \\ h(\omega^{j}\omega_{k}^{2}) \\ \vdots \\ h(\omega^{j}\omega_{k}^{k-1})} = \colvec{ \omega_{k}^{0\cdot0} & \omega_{k}^{0\cdot1} & \dots & \omega_{k}^{0\cdot(k-1)} \\ \omega_{k}^{1\cdot0} & \omega_{k}^{1\cdot1} & \dots & \omega_{k}^{1\cdot(k-1)} \\ \omega_{k}^{2\cdot0} & \omega_{k}^{2\cdot1} & \dots & \omega_{k}^{2\cdot(k-1)} \\ \vdots & \vdots & & \vdots \\ \omega_{k}^{(k-1)\cdot0} & \omega_{k}^{(k-1)\cdot1} & \dots & \omega_{k}^{(k-1)\cdot(k-1)} } \colvec{\omega^{j \cdot 0}a_{0} \\ \omega^{j \cdot 1}a_{1} \\ \omega^{j \cdot 2}a_{2} \\ \vdots \\ \omega^{j \cdot (k-1)}a_{k-1}} \label{eq:polymatrix} \end{equation} We assume that the coefficients of $h$ and $\omega^{j}$ are given and consider algorithms for efficient evaluation of the matrix-vector product. The coefficients $\tilde{a}_{j,i} = \omega^{j \cdot i}a_{i}$ for $i = 0,1,\dots,k-1$ can be found in $O(k)$ operations and define a polynomial $\tilde{h}_{j}(x) = \sum_{i=0}^{k-1}\tilde{a}_{j,i}x^{i}$. Evaluating $\tilde{h}_{0}(x)$ over $S_{k,0}^{*}$ corresponds to computing the Discrete Fourier Transform over a finite field. \begin{restate}[Repeated] For every finite field $\mathbb{F}$ that supports computing the discrete Fourier transform of length $k$ in~$O(k \log k)$ operations, there exists a data structure that, for every choice of $k \leq |\mathbb{F}|$ and given a primitive element $\omega$, is an explicit $O(\log k)$~time $k$-generator with range $\mathbb{F}$, period~$|\mathbb{F}|$, seed length $k$, space usage $O(k)$, and initialization time $O(k \log k)$. \end{restate} \begin{proof} Evaluation of $\tilde{h}_{j}(x)$ over $S_{k,j}^{*}$ takes $O(k \log k)$ operations by assumption.
For every batch $j$ starting at $j = 0$, the value of $\omega^{j}$ is stored and used to compute the coefficients of $\tilde{h}_{j+1}(x)$ using $O(k)$ operations. \end{proof} We now discuss the validity of the assumption that we are able to compute the DFT over a finite field in $O(k \log k)$ operations. Assume that $k \mid (q - 1)$ and that $\omega_{k}$ is known. If $k$ is highly composite there exist Fast Fourier Transforms for computing \eqref{eq:polymatrix} in $O(k \log k)$ field operations~\cite{duhamel1990}. If $k$ is not highly composite there exists an algorithm for computing the DFT in equation \eqref{eq:polymatrix} in $O(kz \log kz)$ operations for fields of cardinality $q = p^{z}$ in our model of computation~\cite{preparata1977}. For $q = p^{O(1)}$ this reduces to the desired $O(k \log k)$ operations. \section{Finite field arithmetic on the word RAM} \label{sec:wordRAM} Throughout the paper we have used as our model of computation a modified word RAM with constant time arithmetic $(+,-, \times)$ over a finite field $\mathbb{F}$. In this section we show how our model relates to the more standard \emph{multiplication model}, defined as a word RAM with constant time arithmetic $(+, -, \times)$ over the integers $[2^{w}]$ for $w$-bit words \cite{hagerup1998}. Arithmetic over $\mathbb{F}_{p}$ for prime $p$ is integer arithmetic modulo $p$. We now argue that arithmetic operations over $\mathbb{F}_{p}$ can be performed in $O(1)$ operations in the multiplication model. Every integer $x$ can be written in the form $x = qp + r$ for non-negative integers $q, r$ with $r < p$. Assume that $x$ can be represented in a constant number of words. The problem of computing $r = x \bmod p$ can be solved by an integer division and $O(1)$ operations in the multiplication model due to the identity $r = x - \lfloor x/p \rfloor p$.
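The idea of replacing the division by a multiplication with a precomputed constant can be illustrated with Barrett-style reduction; the sketch below is an illustration of division-free reduction in this spirit, not the exact algorithm of \cite{granlund1994}, which precomputes similar constants for invariant divisors.

```python
# Division-free computation of x mod p in the spirit of Barrett
# reduction: for a fixed p < 2^W, precompute mu = floor(2^(2W) / p)
# once; then q = (x * mu) >> 2W underestimates floor(x / p) by at most
# one for any x < 2^(2W), so r = x - q*p needs at most one correcting
# subtraction.  Illustrative only; not the Granlund--Montgomery method.

W = 32

def barrett_constant(p):
    return (1 << (2 * W)) // p           # mu, computed once per modulus

def mod_p(x, p, mu):
    q = (x * mu) >> (2 * W)              # approximate quotient floor(x/p)
    r = x - q * p
    if r >= p:                           # at most one correction needed
        r -= p
    return r

p = 2_147_483_647                        # the word-sized prime 2^31 - 1
mu = barrett_constant(p)
assert all(mod_p(x, p, mu) == x % p
           for x in [0, 1, p - 1, p, p + 1,
                     12345678901234567, (1 << 64) - 1])
```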
An algorithm by Granlund and Montgomery \cite{granlund1994} computes $\lfloor x/p \rfloor$ for any constant $p$ using $O(1)$ operations in the multiplication model, which gives the desired result. Another finite field of interest is $\mathbb{F}_{2^{w}}$ due to the correspondence between field elements and bit vectors of length $w$. We will argue that a word RAM model that supports constant time multiplication over $\mathbb{F}_{2^{w}}$ is not unrealistic given current hardware. Addition in $\mathbb{F}_{2^{w}}$ has direct support in standard CPU instruction sets through the XOR operation. A multiplication of two elements $x$ and $y$ in $\mathbb{F}_{2^{w}}$ can be viewed as a two-step process. First, we perform a carryless multiplication $z = x \cdot y$ of the representation of $x$ and $y$ as polynomials in $\mathbb{F}_{2}[X]$. Second, we use a modular reduction to bring the product $x \cdot y$ back into $\mathbb{F}_{2^{w}}$, similarly to modular arithmetic over $\mathbb{F}_{p}$. Recently, hardware manufacturers have included partial support for multiplication in $\mathbb{F}_{2^{w}}$ with the CLMUL instruction for carryless multiplication \cite{gueron2014}. The modular reduction step is performed by dividing $x \cdot y$ by an irreducible polynomial $g$ and returning the remainder. Irreducible polynomials $g$ that can be represented as sparse binary vectors with constant weight result in a constant time algorithm for modular reduction, as presented by Gueron and Kounavis \cite{gueron2014}. We briefly introduce the computation underlying the algorithm to show that its complexity depends on the number of {\tt 1}s in the binary representation of $g$. Let $L^{w}$ and $M^{w}$ be functions that return the $w$ least, respectively most, significant bits of their argument as represented in $\mathbb{F}_{2^{2w}}$.
The complexity of Gueron and Kounavis' algorithm for modular reduction of $z = x \cdot y$ is determined by the complexity of evaluating the expression \begin{equation} L^{w}(L^{w}(g)\cdot M^{w}(M^{w}(z) \cdot g)). \label{eq:clmul} \end{equation} Evaluating $L^{w}$ and $M^{w}$ is standard bit manipulation. For $g$ of constant weight, the carryless multiplications denoted by $\cdot$ in equation \eqref{eq:clmul} can be implemented as a constant number of bit shifts and XORs. For every $w \leq 10000$ an irreducible trinomial or pentanomial ($g$ of weight at most 5) has been found~\cite{seroussi1998}. Together with the hardware support for convolutions this allows us to implement fast multiplication over fields of practical interest. \section{A load balancing application} \label{sec:loadbalancing} We next consider how our new generator yields stronger guarantees for load balancing. Our setting is motivated by applications such as splitting a set of tasks of unknown duration among a set of $m$ machines, in order to keep the load as balanced as possible. Once a task is assigned to a machine, it cannot be reassigned, i.e., we do not allow \emph{migration}. For simplicity we consider the \emph{unweighted} case where we strive to keep the \emph{number} of tasks on each machine low, and we assume that $m$ divides $|\mathbb{F}|$ for some field $\mathbb{F}$ with constant time operations on a word RAM. Suppose that each machine has capacity (e.g.~enough memory) to handle $b$ tasks at once, and that we are given a sequence of $t$ tasks $T_1,\dots,T_t$, where we identify each task with its duration (an interval in $\mathbb{R}$). Now let $k = mb$ and suppose that we use our constant time $k$-generator to determine for each $i=1,\dots,t$ which machine should handle $T_i$. (We emphasize that this is done without knowledge of $T_i$, and without coordination with the machines.)
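The assignment step can be sketched as follows (our own illustration: `gen` is a stand-in for the $k$-generator, here simulated by a fixed random table, and a field element is mapped to a machine by reducing modulo $m$, which preserves uniformity because $m$ divides $|\mathbb{F}|$):

```python
import random

M = 8                 # number of machines; we assume M divides the field size
FIELD_SIZE = 2**16    # hypothetical field size, divisible by M

random.seed(0)
# stand-in for the k-generator's output stream (illustration only)
table = [random.randrange(FIELD_SIZE) for _ in range(1024)]

def gen(i):
    """Hypothetical interface: the i-th field element output by the generator."""
    return table[i % len(table)]

def assign(num_tasks, m):
    """Machine index for each task, chosen without inspecting the task itself."""
    return [gen(i) % m for i in range(num_tasks)]

machines = assign(100, M)
assert len(machines) == 100 and all(0 <= q < M for q in machines)
```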
Compared to using a fully random choice this has the advantage of requiring only $k\poly\log k$ words of random bits, which in turn may make the algorithm faster if random number generation is a bottleneck. Yet, we are able to get essentially the same guarantee on load balancing as in the fully random case. To see this, let $L(x) = \{ i \; | \; x\in T_i\}$ be the set of tasks active at time $x$, and let $L_q(x)$ be the subset of $L(x)$ assigned to machine $q$ using our generator. We have: \begin{lemma}\label{lem:error} For $\varepsilon > 0$, if $|L(x)| (1+\varepsilon) < mb$ then $\Pr[\max_q |L_q(x)| > b] < m \exp(-\varepsilon^2 b / 3)$. \end{lemma} \begin{proof} Since $|L(x)| < mb = k$ we have that the assignment of tasks in $L(x)$ to machines is uniformly random and independent. This means that the number of tasks assigned to each machine follows a binomial distribution with mean at most $b/(1+\varepsilon)$, and we can apply a Chernoff bound of $\exp(-\varepsilon^2 b / 3)$ on the probability that more than $b$ tasks are assigned to a particular machine. A union bound over all $m$ machines yields the result. \end{proof} Lemma~\ref{lem:error} allows us to give a strong guarantee on the probability of exceeding the capacity $b$ of a machine at any time, assuming that the average load is bounded by $b/(1+\varepsilon)$. In particular, let $S\subseteq\mathbb{R}$ be a set of size at most $2t$ such that every workload $L(y)$ is equal to $L(x)$ for some $x\in S$. The existence of $S$ is guaranteed since the $t$ tasks are intervals, and they have at most $2t$ end points. This means that $$\sup_{x\in\mathbb{R}} \max_q |L_q(x)| = \max_{x\in S} \max_q |L_q(x)|,$$ so a union bound over $x\in S$ gives $$\Pr[\sup_{x\in\mathbb{R}} \max_q |L_q(x)| > b] < 2tm \exp(-\varepsilon^2 b / 3) \enspace .$$ For constant $\varepsilon$ and whenever $b = \omega(\log k)$ and $tm = 2^{o(b)}$ we get an error probability that is exponentially small in $b$.
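To get a feel for the numbers, the following sketch evaluates the bound $2tm \exp(-\varepsilon^2 b / 3)$ for some illustrative (hypothetical) parameter choices:

```python
import math

def failure_bound(t, m, b, eps):
    """The union bound 2*t*m*exp(-eps^2 * b / 3) derived above."""
    return 2 * t * m * math.exp(-eps ** 2 * b / 3)

# hypothetical parameters: a million interval tasks, 100 machines,
# per-machine capacity b = 2000, slack factor eps = 0.2
p = failure_bound(t=10 ** 6, m=100, b=2000, eps=0.2)
assert p < 1e-3  # overflow at any point in time is very unlikely
```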
Such a strong error guarantee cannot be achieved with known constant time hashing methods~\cite{siegel2004,pagh2008,dietzfelbinger2003,thorup2013} in reasonable space, since they all have an error probability that decreases polynomially with space usage. Even if explicit constructions for the expanders needed in Siegel's hash functions were found, the resulting space usage would be polynomially higher than with our $k$-generator. \section{Experiments} \label{sec:experiments} This section contains experimental results of an implementation of a $k$-generator over $\mathbb{F}_{2^{64}}$. There are two main components to the generator: an algorithm for filling a table of size $m$ with $dk$-independent variables and a bipartite unbalanced expander graph. For the first component, we use an implementation of Gao-Mateer's additive FFT \cite[Algorithm 2]{gao2010}. Utilizing the Gao-Mateer algorithm we can generate a batch of $k$ elements of an $(|\mathbb{F}|, k)$-sequence using space $O(k)$ and $O(k \log^{2} k)$ operations on a word RAM that supports arithmetic over $\mathbb{F}$. The additive complexity of the FFT algorithm is $O(k \log^{2} k)$ while the multiplicative complexity is $O(k \log k)$. Addition in $\mathbb{F}_{2^{64}}$ is implemented as an XOR-operation on 64-bit words. Multiplication is implemented using the PCLMUL instruction along with the techniques for modular reduction by Gueron and Kounavis \cite{gueron2014} outlined in Section \ref{sec:wordRAM}. For the second component we introduce a slightly different type of expander graph that only works in the special case of fields of characteristic two. Let $\mathbb{F}_{2^{w}}$ be a field of characteristic two and let $\mat{M}$ be a $cm \times m$ adjacency matrix of a graph $\Gamma$ where each entry of $\mat{M}$ is viewed as an element of $\mathbb{F}_{2^{w}}$.
By a similar argument to the one used in Lemma \ref{lem:expanderhashing} the linear system $\mat{M}\vect{x}$ defines a $(cm, k)$-sequence if $\vect{x}$ is a vector of \mbox{$dk$-independent} variables over $\mathbb{F}_{2^{w}}$ and $\mat{M}$ has row rank at least $k$. We consider randomized constructions of $\mat{M}$ over $\mathbb{F}_{2}$ with at most $d$ \texttt{1}s in each row and row rank at least $k$. It is easy to see that a matrix $\mat{M}$ over $\mathbb{F}_{2}$ with these properties also defines a matrix with the same properties over $\mathbb{F}_{2^{w}}$. Since $k$-uniqueness of $\Gamma$ implies that $\mat{M}$ has row rank $k$, but not the other way around, we are able to obtain better performance characteristics of generators over $\mathbb{F}_{2^{w}}$ by focusing on randomized constructions of $\mat{M}$. The matrix $\mat{M}$ is constructed in the following way. Independently, for each $i \in [cm]$ sample $d$ integers uniformly with replacement from $[m]$ and define the $i$th row of $\mat{M}$ as the vector constructed by taking the zero vector and adding~\texttt{1}s in the $d$ positions sampled for row $i$. Observe that if $\mat{M}$ does not have row rank at least $k$ then some non-empty subset of at most $k$ rows of $\mat{M}$ sum to the zero vector. In order for a non-empty set of vectors over $\mathbb{F}_{2}^{m}$ to sum to the zero vector, the bit-parity must be even in each of the $m$ positions of the sum. The sum of any $i$ rows of $\mat{M}$ corresponds to a balls and bins process that distributes $id$ balls into $m$ bins, independently and uniformly at random. Let $id$ be an even number. Then there are $(id - 1)!!$ ways of ordering the balls into pairs and the probability that the outcome is equal to any particular pairing is $(1/m)^{id/2}$. This yields the following upper bound on the probability that a subset of $i$ rows sums to zero: \begin{equation} \beta_{pair}(i, d, m) = (id-1)!!\! \left(\frac{1}{m}\right)^{id/2}.
\label{eq:libound} \end{equation} A comparison between this bound and the bound for $k$-uniqueness from equation \eqref{eq:probexpander} shows that, for each term in the sum, the multiplicative factor applied to the binomial coefficient $\binom{cm}{i}$ is exponentially smaller in $id$ for the bound in \eqref{eq:libound}. The pair-based approach which yields the bound $\beta_{pair}$ overestimates the probability of failure on subsets of size $i$, increasingly so as $id$ grows large compared to $m$. We therefore introduce a different bound based on the Poisson approximation to the binomial distribution: the number of balls in each of the $m$ positions can approximately be modelled as independent Poisson distributed variables \cite[Ch. 5.4]{mitzenmacher2005}. The probability that the parity is even in each of the $m$ positions in a sum of $i$ rows is bounded by \begin{equation} \beta_{poisson}(i, d, m) = e\sqrt{id}\left(\frac{1+e^{-2\frac{id}{m}}}{2}\right)^{m}, \label{eq:poibound} \end{equation} where we use the same approach as Mitzenmacher et al.~\cite{mitzenmacher2014}. For any given subset of rows of $\mat{M}$, we are free to choose between the two bounds. The probability that a randomly constructed matrix $\mat{M}$ fails to have rank at least $k$ can be bounded from above using a union bound over subsets of rows of $\mat{M}$. \begin{equation} \delta \leq \sum_{i=1}^{k}\binom{cm}{i}\min(\beta_{pair}(i, d, m), \beta_{poisson}(i, d, m)). \label{eq:combinedbound} \end{equation} We now consider the generation time of our implementation. Let $FFT_{dk}$ denote the time taken by the FFT algorithm to generate a $dk$-independent value and let $RA_{d,m}$ denote the time it takes to perform $d$ random accesses in a table of size $m$. The time taken to generate a value by the implementation of our generator is then given by \begin{equation} T = \frac{FFT_{dk}}{c} + RA_{d,m}.
\label{eq:generationtime} \end{equation} In our experiments, the choice of parameters for the expander graphs was based on a search for the fastest generation time over every combination of imbalance $c \in \{16, 32, 64 \}$ and outdegree $d \in \{ 4, 8, 16 \}$. Given choices of $d$, $c$ and independence $k$, the size $m$ of the right side of the expander was increased until existence could be guaranteed by the bound in \eqref{eq:combinedbound}. The generator in the experiments had the restriction that $m \leq 2^{26}$ and we have measured $RA_{d,m}$ assuming that the expander is read sequentially from RAM. The experiments were run on a machine with an Intel Core i5-4570 processor with 6MB cache and 8GB of RAM. Table \ref{tab:experimentalresults} shows the generation time in nanoseconds per 64-bit output using Horner's scheme, Gao-Mateer's FFT and the implementation of our generator (FFT + Expander). For the implementation of the generator, we also show the parameters of the randomly generated expander that yielded the fastest generation time among expanders in the search space. The generation time is approximately linear in $k$ for Horner's scheme and logarithmic in $k$ for the FFT, as predicted by theory. The FFT is faster than using Horner's scheme already at $k = 64$ and orders of magnitude faster for large $k$. Using our implementation of Gao-Mateer's FFT algorithm we are able to evaluate a polynomial of degree $2^{20}-1$ in $2^{20}$ points in less than a second. The same task takes over an hour when using Horner's scheme, even with both algorithms using the same underlying implementation of algebraic operations in the field. For small values of $k$, our generator is an order of magnitude faster than the FFT and comes close to the performance of the 64-bit C++11 implementation of the popular Mersenne Twister. Our generator uses 25 nanoseconds to output a 1024-independent value. This is equivalent to an output of over 300MB/s.
The Mersenne Twister uses around 4 nanoseconds to generate a 64-bit value. In practice, the memory hierarchy appears to be the primary obstacle to maintaining a constant generation time as $k$ increases. Our generator reads the expander graphs sequentially and performs random lookups into the table of $dk$-independent values. As $k$ grows large, the table can no longer fit into cache and for large imbalance $c$, the expander can no longer be stored in main memory. Searching a wider range of expander parameters could easily yield a faster generation time, potentially at the cost of a larger imbalance $c$ or higher probability of failure $\delta$. \begin{table}[htpb] \centering \begin{tabular}{rrr|rrrrr} \toprule \multirow{2}{*}{$k$} & \multicolumn{1}{c}{Horner} & \multicolumn{1}{c|}{FFT} & \multicolumn{5}{c}{FFT + Expander} \\ & \multicolumn{1}{c}{ns} & \multicolumn{1}{c|}{ns} & \multicolumn{1}{c}{$c$} & \multicolumn{1}{c}{$m$} & \multicolumn{1}{c}{$d$} & \multicolumn{1}{c}{$\delta$} & \multicolumn{1}{c}{ns} \\ \midrule $2^{5}$ & 177 & 243 & 64 & $2^{13}$ & 8 & $10^{-7}$ & 15 \\ $2^{6}$ & 361 & 294 & 64 & $2^{14}$ & 8 & $10^{-8}$ & 16 \\ $2^{7}$ & 730 & 338 & 64 & $2^{15}$ & 8 & $10^{-9}$ & 19 \\ $2^{8}$ & 1470 & 375 & 64 & $2^{16}$ & 8 & $10^{-10}$ & 23 \\ $2^{9}$ & 2950 & 412 & 64 & $2^{17}$ & 8 & $10^{-11}$ & 24 \\ $2^{10}$ & 5902 & 449 & 64 & $2^{18}$ & 8 & $10^{-12}$ & 25 \\ $2^{11}$ & 11808 & 487 & 32 & $2^{18}$ & 8 & $10^{-12}$ & 35 \\ $2^{12}$ & 23627 & 523 & 64 & $2^{18}$ & 16 & $10^{-29}$ & 43 \\ $2^{13}$ & 47183 & 561 & 32 & $2^{18}$ & 16 & $10^{-29}$ & 54 \\ $2^{14}$ & 94429 & 599 & 64 & $2^{22}$ & 8 & $10^{-15}$ & 68 \\ $2^{15}$ & 188258 & 638 & 64 & $2^{23}$ & 8 & $10^{-16}$ & 69 \\ $2^{16}$ & 376143 & 678 & 64 & $2^{24}$ & 8 & $10^{-17}$ & 77 \\ $2^{17}$ & 751781 & 719 & 64 & $2^{25}$ & 8 & $10^{-18}$ & 85 \\ $2^{18}$ & 1505016 & 765 & 64 & $2^{26}$ & 8 & $10^{-19}$ & 93 \\ $2^{19}$ & 3015969 & 808 & 32 & $2^{26}$ & 8 & $10^{-19}$ & 110 \\ $2^{20}$ & 
6082313 & 864 & 64 & $2^{26}$ & 16 & $10^{-46}$ & 175 \\ \bottomrule \end{tabular} \caption{Generation time in nanoseconds per 64-bit value using Horner's scheme, Gao-Mateer's FFT and an implementation of our constant-time generator} \label{tab:experimentalresults} \end{table} \section*{Acknowledgment} We are grateful to Martin Dietzfelbinger who gave feedback on an early version of the paper, allowing us to significantly enhance the presentation. \end{document}
\begin{document} \begin{abstract} We study the following reconstruction problem for colorings. Given a countable set $X$ (finite or infinite), a coloring on $X$ is a function $\varphi: [X]^{2}\to \{0,1\}$, where $[X]^{2}$ is the collection of all 2-element subsets of $X$. A set $H\subseteq X$ is homogeneous for $\varphi$ when $\varphi$ is constant on $[H]^2$. Let $\h(\varphi)$ be the collection of all homogeneous sets for $\varphi$. The coloring $1-\varphi$ is called the complement of $\varphi$. We say that $\varphi$ is {\em reconstructible} up to complementation from its homogeneous sets, if for any coloring $\psi$ on $X$ such that $\h(\varphi)=\h(\psi)$ we have that either $\psi=\varphi$ or $\psi=1-\varphi$. We present several conditions for reconstructibility and non-reconstructibility. For $X$ an infinite countable set, we show that there is a Borel way of recovering a coloring from its homogeneous sets. \end{abstract} \subjclass[2010]{Primary 05D10, 03E15; Secondary 05C15} \keywords{Graph reconstruction, coloring of pairs, maximal homogeneous sets, Borel selectors.} \title{Reconstruction of a coloring from its homogeneous sets} \section{Introduction} In this paper we study the following reconstruction problem for colorings. Given a countable set $X$ (finite or infinite), a coloring on $X$ is a function $\varphi: [X]^{2}\to \{0,1\}$, where $[X]^{2}$ is the collection of 2-element subsets of $X$. Let $\h(\varphi)$ be the homogeneous sets for $\varphi$; that is, the collection of $H\subseteq X$ such that $\varphi$ is constant on $[H]^2$. Clearly, $\h(\varphi)=\h(1-\varphi)$. We say that $\varphi$ is {\em reconstructible} up to complementation from its homogeneous sets, if for any coloring $\psi$ on $X$ such that $\h(\varphi)=\h(\psi)$ we have that either $\psi=\varphi$ or $\psi=1-\varphi$. In the terminology of graphs, we are talking about graphs that can be reconstructed (up to complementation) from the collection of their cliques and independent sets.
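For finite $X$ these notions are easy to experiment with; the following Python sketch (our own illustration, not part of the paper) computes the homogeneous triples of a coloring and checks the basic identity $\h(\varphi)=\h(1-\varphi)$:

```python
from itertools import combinations

def hom_triples(n, phi):
    """3-element homogeneous sets of X = {0,...,n-1}, where phi maps a
    frozenset pair to a color in {0, 1}."""
    return [T for T in combinations(range(n), 3)
            if len({phi[frozenset(p)] for p in combinations(T, 2)}) == 1]

n = 5
# example coloring: phi({x, y}) = (x + y) mod 2
phi = {frozenset(p): sum(p) % 2 for p in combinations(range(n), 2)}
complement = {e: 1 - c for e, c in phi.items()}

assert (0, 2, 4) in hom_triples(n, phi)  # every pair sum is even
assert hom_triples(n, phi) == hom_triples(n, complement)
```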
This type of reconstruction problem was considered a long time ago in \cite{Cameron1993} for finite graphs but apparently was not pursued any further. A somewhat similar problem was addressed in \cite{Pouzetetal2013,Kaddour2019,Pouzetetal2011}. They analyzed a variant of the well known graph reconstruction conjecture (see \cite{bondy1991}), and studied conditions under which a pair of graphs with the same homogeneous sets are isomorphic up to complementation. In this paper we study conditions under which a pair of graphs with the same homogeneous sets are equal up to complementation. An example of a reconstructible coloring is given by the random graph. We extract from this example a general method for showing reconstructibility which is quite useful. If for every $F\subseteq X$ with $|F|=4$ there is $Y\supseteq F$ such that the restriction of $\varphi$ to $[Y]^2$ is reconstructible, then $\varphi$ is reconstructible. In particular, if a coloring $\varphi$ on $\N$ has infinitely many reconstructible initial segments, then $\varphi$ itself is reconstructible. The first example that we found of a non-reconstructible coloring is given by a partition of $\N$ into two infinite sets. We associate to this partition a coloring $\varphi$ where $\varphi(\{x,y\})=1$ if and only if both $x$ and $y$ belong to the same part of the partition. This example satisfies a very simple criterion for non-reconstructibility: If there is a pair $\{x,y\}$ (an edge) such that $\varphi(\{x,z\})=1-\varphi(\{y,z\})$ for all $z\notin \{x,y\}$, then $\varphi$ is non-reconstructible. The converse is not true. Such pairs (edges) will be called {\em critical}. We give a characterization of colorings that admit a critical pair. In the example mentioned above of a coloring associated to a partition of $\N$ into two parts, the collection of its homogeneous sets has exactly two maximal elements with respect to inclusion.
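The criterion and the partition example above can be verified by brute force on a small instance. In the Python sketch below (our own illustration), $\varphi$ is the coloring associated to the partition of $\{0,\dots,5\}$ into even and odd numbers; the pair $\{0,1\}$ is critical, and flipping the color of that single edge yields a non-trivial reconstruction:

```python
from itertools import combinations

def hom_sets(n, phi):
    """All homogeneous subsets (of size >= 3) of {0,...,n-1}."""
    out = set()
    for k in range(3, n + 1):
        for S in combinations(range(n), k):
            if len({phi[frozenset(p)] for p in combinations(S, 2)}) == 1:
                out.add(S)
    return out

n = 6
# coloring of the partition into evens and odds: color 1 iff same part
phi = {frozenset(p): int(p[0] % 2 == p[1] % 2)
       for p in combinations(range(n), 2)}

# the two maximal homogeneous sets are the two parts of the partition
assert hom_sets(n, phi) == {(0, 2, 4), (1, 3, 5)}

# {0, 1} is a critical pair: phi({0,z}) = 1 - phi({1,z}) for all other z
assert all(phi[frozenset((0, z))] == 1 - phi[frozenset((1, z))]
           for z in range(2, n))

# flipping the critical edge gives a non-trivial reconstruction
psi = dict(phi)
psi[frozenset((0, 1))] = 1 - psi[frozenset((0, 1))]
comp = {e: 1 - c for e, c in phi.items()}
assert hom_sets(n, psi) == hom_sets(n, phi)
assert psi != phi and psi != comp
```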
Motivated by that, we present some results relating the structure of the family of maximal homogeneous sets to the reconstruction problem. In the last section of the paper we study the reconstruction problem from a descriptive set theoretic point of view. For instance, the collection of reconstructible colorings on $\N$ is a dense $G_\delta$ subset of the space of colorings $\{0,1\}^{[\N]^2}$, that is, from the Baire category point of view, almost every coloring is reconstructible. We can regard $\h(\varphi)$ as a closed subset of the Cantor space $\{0,1\}^{\N}$ (which will be denoted, as usual, by $2^{\N}$), thus as an element of the hyperspace $K(2^{\N})$, which is a Polish space endowed with the usual Vietoris topology. We show that there is a Borel way to recover a coloring from its homogeneous sets. More precisely, there is a Borel map $f:K(2^{\N})\to \{0,1\}^{[\N]^2}$ such that $f(\h(\varphi))$ is a reconstruction of $\varphi$, i.e., $\h(f(\h(\varphi)))=\h(\varphi)$. To finish this introduction we comment on our original motivation. A collection $\mathcal{H}$ of subsets of $\N$ is {\em tall}, if for every infinite set $A\subseteq \N$, there is an infinite set $B\in \mathcal{H}$ such that $B\subseteq A$. Ramsey's Theorem says that $\h(\varphi)$ is tall for every coloring $\varphi$ on $\N$. Some tall families admit a Borel selector, that is, a Borel map such that given an infinite set $A$, the map selects an infinite subset of $A$ belonging to the tall family (\cite{GrebikUzca2018}). The collection $\h(\varphi)$ is an important example of a tall family admitting a Borel selector (\cite{GrebikUzca2018,HMTU2017}). It is an open problem to find a characterization of those tall Borel families that admit a Borel selector. A closely related question is to characterize when a tall Borel family $\mathcal{H}$ admits a coloring $\varphi$ such that $\h(\varphi)\subseteq \mathcal{H}$.
In other words, when is it possible to extract from such a tall family $\mathcal{H}$ a coloring $\varphi$ such that $\h(\varphi)\subseteq \mathcal{H}$? These considerations lead naturally to a Borel reconstruction problem: Suppose $\mathcal{H}=\h(\varphi)$, can we recover from $\mathcal{H}$, in a Borel way, a coloring $\psi$ such that $\h(\psi)=\mathcal{H}$? In the last section of the paper we show that the answer is positive. \section{Preliminaries} We will use standard notation from set theory. Throughout the article, $X$ will denote a countable (finite or infinite) set. Given $k\in\N$, we will denote by $[X]^{k}$ the collection of all subsets of $X$ of size $k$, by $[X]^{<k}$ the subsets of $X$ of size strictly less than $k$, and by $[X]^{\leq k}$ the union $[X]^{k}\cup[X]^{<k}$. The collection of all finite subsets of $X$ will be denoted by $[X]^{<\omega}$. $X^{<\omega}$ denotes the collection of all finite sequences of elements of $X$ and $X^{\leq n}$ the collection of sequences of length at most $n$ of elements of $X$. A {\em coloring} on a set $X$ is any mapping $\varphi:[X]^2\to \{0,1\}$. Whenever it is clear from the context, we identify 2 with $\{0,1\}$. For instance, the collection of all colorings $\{0,1\}^{[X]^2}$ will be denoted by $2^{[X]^2}$. Given a coloring $\varphi$ on $X$ and $Y\subseteq X$, we will denote by $\varphi|_Y$ the restriction of $\varphi$ to $[Y]^2$. We say that $\psi$ extends $\varphi$, and write $\varphi\subseteq \psi$, whenever $\varphi$ is a coloring on $Y$, $\psi$ is a coloring on $X$, $Y\subseteq X$ and $\psi|_Y=\varphi$. We write $\varphi\subset \psi$ when $\varphi\subseteq \psi$ and $\varphi\neq \psi$. For $X$ infinite, the family of colorings $2^{[X]^2}$ will be seen as a topological space with the usual product topology, which makes it homeomorphic to $2^X$.
A partition of $X$ is a collection $(A_i)_{i\in I}$ of non-empty subsets of $X$ such that $I\subseteq \N$, $X=\bigcup_{i\in I}A_i$, and $A_i\cap A_j=\emptyset$ for every $i\neq j$ in $I$. Given a partition $X=\bigcup_{i\in I}A_i$ of $X$, we let {\em the coloring associated to the partition} be the mapping $\varphi:[X]^2\to 2$ defined by $\varphi(\{x,y\})=1$ if and only if $x,y\in A_i$ for some $i\in I$. Given a linear ordering $(X,<)$ and $\{n,m\}\in [X]^2$, we denote by $\{n,m\}_<$ the fact that $n<m$. If $e=\{r_n\}_n$ is an enumeration of $\Q$, the Sierpi\'{n}ski coloring $\varphi_e:[\N]^2\to 2$, associated to $e$, is defined by $\varphi_e(\{n,m\}_<)=1$ if and only if $r_n<r_m$. The {\em random graph} $R=\langle \N, E\rangle$ (see \cite{Cameron1997}) has the following extension property. Given two finite disjoint subsets $A, B$ of $\N$, there is $n\in \N$ such that $\{x,n\}\in E$ for all $x\in A$ and $\{y,n\}\not\in E$ for all $y\in B$. This makes $R$ universal in the following sense. Given a graph $\langle{\N,G}\rangle$, there is a subset $X\subseteq\N$ such that $\langle{\N,G}\rangle$ and $\langle {X, E|_X}\rangle$ are isomorphic. Given a coloring $\varphi:[X]^2\to 2$, we say that $H\subseteq X$ is {\em $i$-homogeneous (for $\varphi$)} if $\varphi([H]^2)=\{i\}$ for $i\in\{0,1\}$. This notion is clearly trivial if $|H|=2$, so we assume that a homogeneous set has at least 3 elements. Denote by $\h(\varphi)$ the set of homogeneous sets for $\varphi$; that is, $$ \h(\varphi)=\{H\subseteq X:\varphi\text{ is constant on }[H]^2\}. $$ \begin{prop} \label{triangulos} Let $\varphi$ and $\psi$ be two colorings on a set $X$. Then $\h(\varphi)=\h(\psi)$ if and only if $\h(\varphi)\cap [X]^{3}=\h(\psi)\cap [X]^{3}$. \end{prop} \begin{proof} Suppose $\h(\varphi)\cap [X]^{3}=\h(\psi)\cap [X]^{3}$ and let $H$ be a homogeneous set for $\varphi$.
Let $\{x,y\},\{w,z\}$ be two different pairs in $[H]^2$. If they share an element, they are contained in a common triple of $H$, which is $\psi$-homogeneous by hypothesis, so $\psi(\{x,y\})=\psi(\{w,z\})$. Otherwise, $\{x,y,w\}$ and $\{y,w,z\}$ are $\psi$-homogeneous by hypothesis, so $\psi(\{x,y\})=\psi(\{y,w\})=\psi(\{w,z\})$. Thus $\psi$ is constant on $[H]^2$, and $H$ is $\psi$-homogeneous. \end{proof} It is clear that if $\varphi$ is the coloring associated to the partition $X=\bigcup_{i\in I}A_i$, then $\h(\varphi)=\{H:H\subseteq A_i\text{ for some }i\in I\}\cup\{H:|H\cap A_i|\leq 1\text{ for every }i\in I\}$. On the other hand, if $\varphi_e$ is the Sierpi\'{n}ski coloring associated to an enumeration $e=\{r_n\}_n$ of $\Q$, then $H\in\h(\varphi_e)$ if and only if $H$ is monotone with respect to $e$; that is, if either $r_n<r_m$ for every $n<m$ in $H$, or $r_n\geq r_m$ for every $n<m$ in $H$. In general, it is well known that if $|X|\geq 6$, there is a homogeneous set of size 3. Furthermore, we recall that Ramsey's Theorem states that every coloring $\varphi:[X]^2\to 2$ on an infinite set $X$ has an infinite homogeneous set. A coloring $\varphi:[X]^2\to 2$ is said to be {\em reconstructible} (up to complementation) from its homogeneous sets if given a coloring $\psi:[X]^2\to 2$ such that $\h(\varphi)=\h(\psi)$, we have that either $\varphi=\psi$ or $\varphi=1-\psi$. Let $\mathcal{R}$ be the collection of all reconstructible colorings, and let $\neg \mathcal{R}$ be its complement. We will call a coloring {\em non-reconstructible} if it belongs to $\neg\mathcal{R}$. Since $\h(\varphi)=\h(1-\varphi)$, we have that $\varphi\in\mathcal{R}$ if and only if $1-\varphi\in\mathcal{R}$. Finally, given $\varphi,\psi\in 2^{[X]^2}$, we say that $\psi$ is a {\em reconstruction} of $\varphi$, if $\h(\psi)=\h(\varphi)$ and we say it is a non-trivial reconstruction if in addition $\psi\neq\varphi$ and $\psi\neq 1-\varphi$. \section{Reconstructible colorings} The aim of this section is to present some sufficient conditions for the reconstructibility of a coloring. On the one hand, we shall see that in order to determine if a coloring belongs to $\mathcal{R}$, it is enough to ensure that some finite restrictions do.
On the other hand, we will introduce properties $E_0$ and $E_1$, and we will see that any coloring with either of these properties is in $\mathcal{R}$. \subsection{Finitistic conditions for reconstructibility} Our first result is a very useful criterion for reconstructibility. \begin{prop} \label{4suffices} Let $\varphi$ be a coloring on $X$. If for every $F\in [X]^{\leq 4}$ there is $Y\subseteq X$ such that $F\subseteq Y$ and $\varphi|_Y\in \mathcal{R}$, then $\varphi\in \mathcal{R}$. \end{prop} \begin{proof} Let $\psi$ be a coloring on $X$ such that $\h(\varphi)=\h(\psi)$. Suppose that for every $F\in [X]^{\leq 4}$ there is $Y\subseteq X$ such that $F\subseteq Y$ and $\varphi|_Y\in \mathcal{R}$. If $\varphi$ and $\psi$ disagree on every pair, then $\psi=1-\varphi$ and we are done, so suppose that there are $x,y\in X$ such that $\varphi(\{x,y\})=\psi(\{x,y\})$. We will show that $\varphi=\psi$. Let $w,z\in X$ with $\{x,y\}\neq\{z,w\}$. By hypothesis, there is $Y\subseteq X$ such that $\{x,y,w,z\}\subseteq Y$ and $\varphi|_Y\in \mathcal{R}$. We have $\h(\varphi|_Y)=\h(\psi|_Y)$, $\varphi|_Y\in \mathcal{R}$ and $\varphi(\{x,y\})=\psi(\{x,y\})$, therefore $\varphi|_Y=\psi|_Y$. In particular, $\varphi(\{w,z\})=\psi(\{w,z\})$ and we are done. \end{proof} There are colorings $\varphi\in \mathcal{R}$ such that $\varphi|_F\not\in\mathcal{R}$ for some $F$ with $|F|\leq 4$ (see Example \ref{4sufficesB}). \begin{coro} Let $\varphi$ be a coloring on $\N$. Suppose that for infinitely many $n$, $\varphi |_{\{0, \cdots, n\}}\in \mathcal{R}$. Then $\varphi\in \mathcal{R}$. \end{coro} The previous result naturally suggests the following problem. \begin{question} \label{segmentos-UR} Let $\varphi$ be a reconstructible coloring on $\N$ and $F\subseteq\N$ be a finite set. Is there a finite set $G\supseteq F$ such that $\varphi |_G\in \mathcal{R}$? \end{question} Proposition \ref{4suffices} stresses the importance of knowing examples of colorings on finite sets belonging to $\mathcal{R}$. Our first example is trivial but we include it for future reference.
\begin{ex} \label{constant} Any constant coloring belongs to $\mathcal{R}$. \end{ex} The next result provides a general method to extend any coloring on a finite set to a reconstructible one. It will be used several times in the sequel. \begin{prop} \label{propertyE} Let $\varphi_0$ be any coloring of the pairs of $F=\{x,y,w,z\}$. Let $a$ and $b$ be two elements not in $F$. The coloring $\varphi$ on $F\cup \{a,b\}$ extending $\varphi_0$ as in Figure \ref{figpropertyE} is reconstructible (where the colors between the elements of $F$ are not drawn). \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=0.7] \filldraw[black] (0,0) circle (1.5pt) node[anchor=east] {\scriptsize$x$}; \filldraw[black] (2,4) circle (1.5pt) node[anchor=east] {\scriptsize$a$}; \filldraw[black] (4,4) circle (1.5pt) node[anchor=west] {\scriptsize$b$}; \filldraw[black] (2,0) circle (1.5pt) node[anchor=east] {\scriptsize$y$}; \filldraw[black] (4,0) circle (1.5pt) node[anchor=east] {\scriptsize$z$}; \filldraw[black] (6,0) circle (1.5pt) node[anchor=west] {\scriptsize$w$}; \draw[thick] (0,0) -- (2,4); \draw[thick] (0,0) -- (4,4); \draw[thick] (2,4) -- (4,4); \draw[thick] (2,0) -- (2,4); \draw[thick] (2,0) -- (4,4); \draw[thick] (2,4) -- (4,0); \draw[thick] (4,4) -- (4,0); \draw[thick] (4,4) -- (6,0); \draw[thick] (2,4) -- (6,0); \end{tikzpicture} \captionof{figure}{Partial drawing of $\varphi$} \label{figpropertyE} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=0.7] \filldraw[black] (10,0) circle (1.5pt) node[anchor=east] {\scriptsize$x$}; \filldraw[black] (12,4) circle (1.5pt) node[anchor=east] {\scriptsize$a$}; \filldraw[black] (14,4) circle (1.5pt) node[anchor=west] {\scriptsize$b$}; \filldraw[black] (12,0) circle (1.5pt) node[anchor=east] {\scriptsize$y$}; \filldraw[black] (14,0) circle (1.5pt) node[anchor=east] {\scriptsize$z$}; \filldraw[black] (16,0) circle (1.5pt) node[anchor=west] {\scriptsize$w$}; \draw[mygray,thick] (10,0) -- (12,4); 
\draw[mygray,thick] (10,0) -- (14,4); \draw[mygray,thick] (12,4) -- (14,4); \draw[mygray,thick] (12,0) -- (12,4); \draw[mygray,thick] (12,0) -- (14,4); \draw[mygray,thick] (12,4) -- (14,0); \draw[mygray,thick] (14,4) -- (14,0); \draw[mygray,thick] (14,4) -- (16,0); \draw[mygray,thick] (12,4) -- (16,0); \end{tikzpicture} \captionof{figure}{Partial drawing of $\psi$} \label{figpropertyEpsi} \end{minipage} \end{prop} \begin{proof} Let $X=F\cup \{a,b\}$ and $\psi$ be a coloring of $[X]^2$ such that $\h(\varphi)=\h(\psi)$. Suppose there is $\{u,v\}\in [X]^2$ such that $\varphi(\{u,v\})=1-\psi(\{u,v\})$. We will show that $\varphi=1-\psi$. We will assume that $u=x$ and $v=y$. A completely analogous argument works for the other cases. Notice that $\{a,b,x\},\{a,b,y\}, \{a,b,z\}, \{a,b,w\}\in\h(\varphi)=\h(\psi)$. Let $i=\varphi(\{x,a\})$. Thus $\psi$ looks as depicted in Figure \ref{figpropertyEpsi}. Again the colors between elements of $F$ are not drawn. We consider two cases: {\em Case 1:} Suppose $\varphi(\{x,y\})=i$. It follows that $\{x,y,a\}\in \h(\varphi)=\h(\psi)$, and therefore $\psi(\{a,z\})=\psi(\{a,b\})=\psi(\{a,w\})=\psi(\{a,y\})=\psi(\{x,y\})=1-i$. Now notice that $\varphi(\{z,w\})=\psi(\{z,w\})$ would imply $\{a,z,w\}\in\h(\varphi)\triangle\h(\psi)$ which is a contradiction. It follows that $\varphi(\{z,w\})=1-\psi(\{z,w\})$. {\em Case 2:} Suppose $\varphi(\{x,y\})=1-i$. Then $\{x,y,a\}\notin\h(\varphi)=\h(\psi)$. But, $\psi(\{a,x\})=\psi(\{a,b\})=\psi(\{a,y\})$, thus $\psi(\{a,x\})\neq\psi(\{x,y\})=i$ and therefore $\psi(\{a,w\})=\psi(\{a,z\})=\break\psi(\{a,x\})=1-i$. Then, we argue as in the previous case to see that $\varphi(\{z,w\})\neq\psi(\{z,w\})$. In either case, we have that $\varphi=1-\psi$. \end{proof} From the previous result we get the following more general fact. \begin{prop} \label{extensionUR} Let $\varphi$ be a coloring on $X$ and $a,b\not\in X$. 
Then, there is a coloring $\psi$ on $X\cup\{a,b\}$ such that $\varphi\subset \psi$ and $\psi\in \mathcal{R}$. \end{prop} \begin{proof} Define $\psi$ on $X\cup\{a,b\}$ by $\psi(\{a,b\})=\psi(\{a,x\})=\psi(\{b,x\})=1$ for all $x\in X$, and $\varphi\subset \psi$. From Proposition \ref{propertyE} we get that $\psi$ satisfies the hypothesis of Proposition \ref{4suffices}, hence $\psi\in \mathcal{R}$.\end{proof} As an application of Proposition \ref{4suffices} we have the following result about a coloring on binary sequences. \begin{prop} \label{binarytree} The coloring associated to the extension ordering on binary sequences is reconstructible. \end{prop} \begin{proof} Let $\varphi$ be the coloring associated to the extension ordering on $2^{<\omega}$, i.e., $\varphi(\{x,y\})=1$ if and only if $y$ is an extension of $x$. We first show the result for the restriction of $\varphi$ to $X=2^{\leq 3}$. This coloring looks as depicted in Figure \ref{figbinarytree}, where only some $1$-edges are drawn.
\hspace*{-1cm} \begin{center} \begin{tikzpicture}[scale=0.8] \filldraw[black] (0,0) circle (1.5pt) node[anchor=north] {\scriptsize$a$}; \filldraw[black] (-3.5,1) circle (1.5pt) node[anchor=north] {\scriptsize$b$}; \filldraw[black] (3.5,1) circle (1.5pt) node[anchor=north] {\scriptsize$c$}; \filldraw[black] (-4,2) circle (1.5pt) node[anchor=east] {\scriptsize$d$}; \filldraw[black] (-2,2) circle (1.5pt) node[anchor=west] {\scriptsize$e$}; \filldraw[black] (2,2) circle (1.5pt) node[anchor=east] {\scriptsize$f$}; \filldraw[black] (4.5,2) circle (1.5pt) node[anchor=west] {\scriptsize$g$}; \filldraw[black] (-3.5,3) circle (1.5pt) node[anchor=south] {\scriptsize$h$}; \filldraw[black] (-4.5,3) circle (1.5pt) node[anchor=south] {\scriptsize$i$}; \filldraw[black] (-2.5,3) circle (1.5pt) node[anchor=south] {\scriptsize$j$}; \filldraw[black] (-0.5,3) circle (1.5pt) node[anchor=south] {\scriptsize$k$}; \filldraw[black] (1,3) circle (1.5pt) node[anchor=south] {\scriptsize$l$}; \filldraw[black] (3,3) circle (1.5pt) node[anchor=south] {\scriptsize$m$}; \filldraw[black] (4,3) circle (1.5pt) node[anchor=south] {\scriptsize$n$}; \filldraw[black] (5,3) circle (1.5pt) node[anchor=south] {\scriptsize$o$}; \draw[thick] (0,0) -- (-3.5,1); \draw[thick] (-3.5,1) -- (-4,2); \draw[thick] (-3.5,1) -- (-2,2); \draw[thick] (-4,2) -- (-4.5,3); \draw[thick] (-4,2) -- (-3.5,3); \draw[thick] (-2,2) -- (-2.5,3); \draw[thick] (-2,2) -- (-0.5,3); \draw[thick] (0,0) -- (3.5,1); \draw[thick] (3.5,1) -- (2,2); \draw[thick] (3.5,1) -- (4.5,2); \draw[thick] (2,2) -- (1,3); \draw[thick] (2,2) -- (3,3); \draw[thick] (4.5,2) -- (4,3); \draw[thick] (4.5,2) -- (5,3); \end{tikzpicture} \captionof{figure}{Partial drawing of $\varphi$} \label{figbinarytree} \end{center} We show that $\varphi\in \mathcal{R}$. Let $\psi$ be a coloring of $[X]^2$ such that $\h(\varphi)=\h(\psi)$. Notice that every branch and every antichain is homogeneous (for both colorings). 
Suppose that $\varphi(\{x,y\})=1-\psi(\{x,y\})$ for some $\{x,y\}\in[X]^2$. We need to show that $\varphi=1-\psi$. We consider the case $x=d$ and $y=b$, the other cases are similar. Then all branches starting on $i$, $h$, $j$ or $k$ are of color $1$ for $\varphi$ and of color $0$ for $\psi$. Since $\{i,h,d\}$ is not homogeneous, then $\psi(\{i,h\})=1$. Therefore $\{i,h,j,k,l,m,n,o\}$ is $1$-homogeneous for $\psi$. Since $\{l,m,f\}$ is not homogeneous, then $\psi(\{l,f\})= 0$ or $\psi(\{m,f\})=0$. In either case, we get that all branches starting from $l$, $m$, $n$ or $o$ are all $0$-homogeneous for $\psi$. As before, we conclude that $\{d,e,f,g\}$ and $\{b,c\}$ are 1-homogeneous for $\psi$. This shows that $\varphi=1-\psi$. Now we finish the proof of the proposition. To see that $\varphi\in \mathcal{R}$, we use Proposition \ref{4suffices}. Let $F\subset 2^{<\omega}$ be a set with at most 4 elements. It is easy to verify that $\langle F,\varphi|_F\rangle$ is isomorphic (as a graph) to a subset of $\langle X,\varphi|_ X\rangle$. From the result above, $\varphi|_X\in \mathcal{R}$ and we are done. \end{proof} The following examples will be needed later in the paper. \begin{ex} \label{particion} Let $X=\{0,1,2,3,4,5\}$ and consider the partition of $X$ given by $\{0,1,2\}$, $\{3,4\}$ and $\{5\}$. Let $\varphi$ be the coloring associated to this partition. It is depicted in Figure \ref{figpartition}, where we only draw the pairs with color 1, i.e. those $\{x,y\}$ which are a subset of a part of the partition. \begin{center} \begin{tikzpicture}[scale=1.6] \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (1,0) -- (1,1); \draw[thick] (0,0) .. controls (-0.5,1.5) and (-0.5,0.5) .. 
(0,2); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,0) circle (1pt) node[anchor=east] {\scriptsize $3$}; \filldraw[black] (1,1) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $5$}; \end{tikzpicture} \captionof{figure}{Coloring $\varphi$} \label{figpartition} \end{center} We claim that $\varphi|_Y\in \mathcal{R}$ for every $Y\subseteq X$. We show it for $Y=X$; the rest is similar. Let $\psi$ be a coloring on $X$ such that $\h(\varphi)=\h(\psi)$. Notice that $\{0,4,5\}$, $\{0,3,5\}$ and $\{2,4,5\}$ are $\varphi$-homogeneous and $\{0,1,3\}$ and $\{3,4,5\}$ are not $\varphi$-homogeneous. Since $\h(\varphi)=\h(\psi)$, $\psi(\{0,1\})=\psi(\{3,4\})=1-\psi(\{0,3\})$. Thus, $\psi$ is either $\varphi$ or $1-\varphi$. \end{ex} \begin{prop} \label{particion2} Let $\varphi$ be a coloring on a set $F=\{a,b,c,d,e\}$, and let $G=\{x,y,z\}$ be disjoint from $F$. Let $X=F\cup G$, and $\psi$ be the extension of $\varphi$ to $X$ as depicted in Figure \ref{figpartition2}, where we only draw the pairs $\{u,v\}$ of color 1 with $u\in F$ and $v\in G$. Then $\psi\in \mathcal{R}$.
\begin{center} \begin{tikzpicture}[scale=1.6] \draw[thick] (-1,0) -- (0,1); \draw[thick] (0,0) -- (0,1); \draw[thick] (1,0) -- (1,1); \draw[thick] (2,0) -- (2,1); \filldraw[black] (-1,0) circle (1pt) node[anchor=north] {\scriptsize$a$}; \filldraw[black] (0,0) circle (1pt) node[anchor=north] {\scriptsize$b$}; \filldraw[black] (0,1) circle (1pt) node[anchor=south] {\scriptsize$x$}; \filldraw[black] (1,0) circle (1pt) node[anchor=north] {\scriptsize$c$}; \filldraw[black] (1,1) circle (1pt) node[anchor=south] {\scriptsize$y$}; \filldraw[black] (2,0) circle (1pt) node[anchor=north] {\scriptsize$d$}; \filldraw[black] (2,1) circle (1pt) node[anchor=south] {\scriptsize $z$}; \filldraw[black] (3,0) circle (1pt) node[anchor=north] {\scriptsize$e$}; \end{tikzpicture} \captionof{figure}{Partial drawing of $\psi$.} \label{figpartition2} \end{center} \end{prop} \begin{proof} Let $\rho$ be a coloring on $X$ such that $\h(\psi)=\h(\rho)$. We assume without loss of generality that $\rho(\{x,a\})=\psi(\{x,a\})=1$, and we prove that $\rho=\psi$. Using the same kind of arguments as in Example \ref{particion}, it is easy to verify that $\rho(\{u,v\})=\psi(\{u,v\})$ for every $u\in F$ and $v\in G$, and also for $u,v\in G$. So, it remains to show that $\rho$ also extends $\varphi$. Indeed, given $u,v\in F$, there is $w\in \{x,y,z\}$ such that $\psi(\{w,u\})=\psi(\{w,v\})=0$. Thus, $\rho(\{w,u\})=\rho(\{w,v\})=0$. We need to show that $\psi(\{u,v\})=\rho(\{u,v\})$. Suppose otherwise, that $\psi(\{u,v\})\neq\rho(\{u,v\})$. Then, $\{u,v,w\}\in \h(\rho)\triangle \h(\psi)$, a contradiction. \end{proof} \begin{ex} \label{recmax} The coloring $\varphi$ on $\{0,1,2,3,4,5\}$ depicted in Figure \ref{figrecmax} is reconstructible. \begin{center} \begin{tikzpicture}[scale=1.2] \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (0,0) .. controls (-0.5,1.5) and (-0.5,0.5) ..
(0,2); \draw[thick] (2,0) -- (2,1); \draw[gray] (0,0) -- (2,0); \draw[thick] (0,2) -- (1,-1); \draw[gray] (0,2) -- (2,1); \draw[gray] (0,2) -- (2,0); \draw[gray] (0,0) -- (2,1); \draw[gray] (0,1) -- (2,1); \draw[gray] (0,1) -- (2,0); \draw[thick] (0,0) -- (1,-1); \draw[thick] (0,1) -- (1,-1); \draw[thick] (2,1) -- (1,-1); \draw[thick] (2,0) -- (1,-1); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $5$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $4$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=north] {\scriptsize $0$}; \end{tikzpicture} \captionof{figure}{Coloring $\varphi$} \label{figrecmax} \end{center} Let $\psi$ be a reconstruction of $\varphi$, i.e. $\h(\varphi)=\h(\psi)$. Notice that any homogeneous set is a subset of either $H_1=\{0,1,2,3\}$ or $H_2=\{0,4,5\}$. It is easy to check that if $\psi$ gives the same color to $H_1$ and $H_2$, then $\psi=\varphi$ or $\psi=1-\varphi$. So, suppose $\psi$ gives $H_1$ and $H_2$ the colors black and gray, respectively. Then one has to consider two completely analogous cases depending on whether $\psi(\{1,4\})$ is black or gray. Suppose it is black. Since $\psi(\{1,2\})$ is black and $\{1,2,4\}$ is not homogeneous, $\psi(\{2,4\})$ is gray. Since $\{2,4,5\}$ is not homogeneous, $\psi(\{2,5\})$ is black. Analogously, one concludes that $\psi(\{1,5\})$ is gray. Since $\{3,4,5\}$ is not homogeneous, $\psi(\{3,5\})$ must be black. On the other hand, since $\{2,3,5\}$ is not homogeneous, $\psi(\{3,5\})$ must be gray. A contradiction. \end{ex} \subsection{Properties $E_0$ and $E_1$} Now we introduce a property for a coloring stronger than being in $\mathcal{R}$. It was motivated by the extension property of the random graph.
Given $i\in\{0,1\}$, we say that a coloring $\varphi:[\N]^2\longrightarrow 2$ has the {\em property $E_i$} if for every finite set $F\subset \N$ there is $z\in\N\setminus F$ such that $\varphi(\{z,x\})=i$ for every $x\in F$. It is clear that if $\varphi$ has the property $E_0$ then $1-\varphi$ has the property $E_1$. So, for our reconstruction problem, it suffices to consider just one of $E_0$ and $E_1$. The random graph clearly has the property $E_i$, for $i\in\{0,1\}$. Another example is the following. \begin{prop} \label{general-Sierpinski} Let $R\subseteq \N\times \N$ be a strict linear ordering on $\N$. Let $\varphi_R$ be defined by $\varphi_R (\{n,m\}_<)=1$ if and only if $(n,m)\in R$. If $\langle \N, R\rangle$ does not have a maximal (resp. a minimal) element, then $\varphi_R$ has the property $E_1$ (resp. $E_0$). \end{prop} \begin{proof} Suppose $\langle \N, R\rangle$ does not have a maximal element. Let $F\subseteq \N$ be a finite set. Then, there is $z\in \N$ such that $(x,z)\in R$ for all $x\in F$. Thus, $\varphi_R(\{z,x\})=1$ for all $x\in F$. \end{proof} \begin{prop} \label{EiUR} Every coloring with property $E_i$, $i\in\{0,1\}$, belongs to $\mathcal{R}$. \end{prop} \begin{proof} We will use Proposition \ref{4suffices}. Let $\varphi$ be a coloring with the property $E_i$. Let $F=\{x,y,z,w\}$ be a subset of $\N$. By the property $E_i$, there is $a\in\N\setminus \{x,y,z,w\}$ such that \[ \varphi(\{a,x\})=\varphi(\{a,y\})=\varphi(\{a,z\})=\varphi(\{a,w\})=i \] and there is $b\in\N\setminus \{x,y,z,w,a\}$ such that \[ \varphi(\{b,x\})=\varphi(\{b,y\})=\varphi(\{b,z\})=\varphi(\{b,w\})=\varphi(\{b,a\})=i. \] Let $X=F\cup\{a,b\}$. Now observe that $\langle X,\varphi|_X\rangle$ is isomorphic to the graph in Proposition \ref{propertyE}. Thus, $\varphi|_X\in \mathcal{R}$ and we are done. \end{proof} We will now see that any coloring with the property $E_i$ provides infinitely many reconstructible colorings obtained by making finite changes to the original one.
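Several of the finite verifications in this section (Example \ref{particion}, Example \ref{recmax}, the graph of Proposition \ref{propertyE}) amount to checking that every coloring of a small set with the same homogeneous sets equals $\varphi$ or $1-\varphi$. Such checks can be automated; the following sketch (illustrative code of ours, not part of the original argument; all function names are our own) verifies the claim of Example \ref{particion} by exhausting all $2^{15}$ colorings of a 6-element set:

```python
from itertools import combinations, product

def homogeneous_triples(points, phi):
    """Monochromatic 3-element subsets; comparing these families is equivalent
    to comparing h(phi) and h(psi), since any larger homogeneous set is
    determined by its homogeneous triples."""
    return frozenset(
        frozenset(t) for t in combinations(points, 3)
        if len({phi[frozenset(p)] for p in combinations(t, 2)}) == 1
    )

def is_reconstructible(points, phi):
    """True iff every coloring with the same homogeneous sets is phi or 1-phi."""
    points = list(points)
    pairs = [frozenset(p) for p in combinations(points, 2)]
    target = homogeneous_triples(points, phi)
    flipped = {e: 1 - c for e, c in phi.items()}
    for bits in product((0, 1), repeat=len(pairs)):
        psi = dict(zip(pairs, bits))
        if psi not in (phi, flipped) and homogeneous_triples(points, psi) == target:
            return False
    return True

# Coloring associated to the partition {0,1,2}, {3,4}, {5} of Example "particion"
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
phi = {frozenset(p): int(part[p[0]] == part[p[1]])
       for p in combinations(range(6), 2)}
print(is_reconstructible(range(6), phi))  # True
```

The same checker run on the coloring of a partition into only two parts returns `False`, in line with Example \ref{nonrecovered}.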
Let $\varphi\in 2^{[\N]^2}$ and $a\subset [\N]^2$ be a finite set. Let $\varphi_a\in 2^{[\N]^2}$ be defined by $\varphi_a^{-1}(1)=a\triangle \varphi^{-1}(1)$. In other words, $\varphi_a(\{x,y\})=\varphi(\{x,y\})$ if $\{x,y\}\notin a$; and $\varphi_a(\{x,y\})=1-\varphi(\{x,y\})$ if $\{x,y\}\in a$, for every $\{x,y\}\in[\N]^2$. Such colorings are the {\em finite changes} of $\varphi$. \begin{prop} \label{finitechangesEi} Let $\varphi$ be a coloring on $\N$ and $a\subset [\N]^2$ a finite set. If $\varphi$ has the property $E_i$, for $i\in\{0,1\}$, then $\varphi_a$ has the property $E_i$. \end{prop} \begin{proof} Let us fix $i\in\{0,1\}$ and assume that $\varphi$ has property $E_i$. Let $F\subset\N$ be a finite set, and consider $G=F\cup \{w: \{w,u\}\in a \;\text{for some $u\in \N$}\}$. By the property $E_i$ of $\varphi$, there is $z\in\N\setminus G$ such that $\varphi(\{z,x\})=i$ for every $x\in G$. Given $x\in F$, we have $\{z,x\}\notin a$ since $z\not\in G$. Thus, $\varphi_a(\{z,x\})=\varphi(\{z,x\})=i$.\end{proof} \begin{coro}\label{finitechanges} The finite changes of the following colorings are reconstructible: \begin{enumerate}[label=(\roman*)] \item Constant colorings on $\N$. \item The random graph. \item Sierpi\'{n}ski's coloring. \item $\varphi_R$, for $R\subseteq \N\times \N$ a linear ordering on $\N$ without maximal or minimal element. \end{enumerate} \end{coro} \section{Non-reconstructible colorings}\label{nonrecoverable} In this section we analyze non-reconstructible colorings. We start by showing a condition that implies non-reconstructibility and which is used in almost all the examples presented. We also show that any coloring can be extended to a non-reconstructible one (Proposition \ref{extentioninnotR}). \begin{defi} For a coloring $\varphi$ on $X$ and $x, y\in X$, we say that a pair $\{x,y\}$ is {\em critical for $\varphi$} if $\varphi(\{x,z\})=1-\varphi(\{y,z\})$ for all $z\in X\setminus\{x,y\}$.
\end{defi} The following simple observation gives a very useful criterion to show non-reconstructibility. \begin{prop} \label{criterio-no-UR} Let $\varphi$ be a coloring on $X$ with $|X|\geq 3$. If $\varphi$ has a critical pair, then $\varphi$ is non-reconstructible. \end{prop} \proof Let $\{x,y\}$ be a critical pair for $\varphi$. Define $\psi:[X]^2\longrightarrow 2$ by $\psi(\{w,z\})=\varphi(\{w,z\})$ if $\{w,z\}\neq\{x,y\}$ and $\psi(\{x,y\})=1-\varphi(\{x,y\})$. Notice that $\{x,y\}\not\subseteq H$ for any $H\in \h(\varphi)\cup \h(\psi)$. Therefore $\h(\varphi)=\h(\psi)$ and $\psi$ witnesses that $\varphi\not\in\mathcal{R}$. \endproof The condition of being a critical pair is stronger than just requiring that the pair is not contained in a homogeneous set. For instance, let $\varphi$ be a constant coloring over $\N$ and consider the finite change $\varphi_a$ of $\varphi$ with $a=\big\{\{0,1\}\big\}$. Then $\varphi_a$ is reconstructible and $\{0,1\}$ is not contained in any $\varphi_a$-homogeneous set. Now we give our first example of a non-reconstructible coloring, which is the prototype of such colorings. \begin{ex}\label{nonrecovered} Consider a partition of $\N$ into two infinite sets; for instance, let $A_0$ be the set of even numbers and $A_1$ be the set of odd numbers. Let $\varphi$ be the coloring associated to this partition, i.e., $\varphi(\{x,y\})=1$ if and only if $\{x,y\}\subseteq A_i$ for some $i$. Then, the pair $\{0,1\}$ is critical for $\varphi$, thus by Proposition \ref{criterio-no-UR}, $\varphi\in\neg\mathcal{R}$. Moreover, given any nonempty set $B\subseteq \N$, consider the coloring $\varphi_B:[\N]^2\longrightarrow 2$ given by $\varphi_B(\{x,y\})=\varphi(\{x,y\})$ if $\{x,y\}\neq\{2n, 2n+1\}$ for any $n\in B$; and $\varphi_B(\{2n, 2n+1\})=1$ for all $n\in B$. Then, $\varphi$ and $\varphi_B$ have the same homogeneous sets.
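Restricted to an initial segment of $\N$, both claims of the example are easy to confirm by machine. The following sketch (our own illustrative code, limited to $\{0,\dots,5\}$; function names are ours) checks that $\{0,1\}$ is critical for the even/odd coloring and that flipping this critical pair leaves the homogeneous sets unchanged:

```python
from itertools import combinations

def is_critical(points, phi, x, y):
    """{x,y} is critical for phi if phi({x,z}) = 1 - phi({y,z}) for all other z."""
    return all(phi[frozenset((x, z))] == 1 - phi[frozenset((y, z))]
               for z in points if z != x and z != y)

def homogeneous_triples(points, phi):
    """Monochromatic 3-element subsets, which determine h(phi)."""
    return {frozenset(t) for t in combinations(points, 3)
            if len({phi[frozenset(p)] for p in combinations(t, 2)}) == 1}

points = range(6)
# Even/odd partition: color 1 iff both elements have the same parity
phi = {frozenset(p): int(p[0] % 2 == p[1] % 2) for p in combinations(points, 2)}
print(is_critical(points, phi, 0, 1))  # True

# Flipping a critical pair changes the coloring but not its homogeneous sets
psi = dict(phi)
psi[frozenset((0, 1))] = 1 - psi[frozenset((0, 1))]
print(homogeneous_triples(points, psi) == homogeneous_triples(points, phi))  # True
```

This is exactly the mechanism behind Proposition \ref{criterio-no-UR}: flipping a critical pair produces a reconstruction different from both $\varphi$ and $1-\varphi$.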
Below we depict the coloring $\varphi$ (see Figure \ref{fignonrec}) and a non-trivial reconstruction $\psi$ (see Figure \ref{fignonrec2}) of it for the case of a partition of the set $\{0,1,2,3,4,5\}$. \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.6] \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (1,0) -- (1,1); \draw[thick] (1,1) -- (1,2); \draw[thick] (0,0) .. controls (-0.5,1.5) and (-0.5,0.5) .. (0,2); \draw[thick] (1,0) .. controls (1.5,1.5) and (1.5,0.5) .. (1,2); \draw[gray] (0,0) -- (1,0); \draw[gray] (0,0) -- (1,1); \draw[gray] (0,0) -- (1,2); \draw[gray] (0,1) -- (1,0); \draw[gray] (0,1) -- (1,1); \draw[gray] (0,1) -- (1,2); \draw[gray] (0,2) -- (1,0); \draw[gray] (0,2) -- (1,1); \draw[gray] (0,2) -- (1,2); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (1,0) circle (1pt) node[anchor=west] {\scriptsize $1$}; \filldraw[black] (1,1) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (1,2) circle (1pt) node[anchor=west] {\scriptsize $5$}; \end{tikzpicture} \captionof{figure}{Coloring $\varphi$} \label{fignonrec} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.6] \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (1,0) -- (1,1); \draw[thick] (1,1) -- (1,2); \draw[thick] (0,0) -- (1,0); \draw[thick] (0,0) .. controls (-0.5,1.5) and (-0.5,0.5) .. (0,2); \draw[thick] (1,0) .. controls (1.5,1.5) and (1.5,0.5) .. 
(1,2); \draw[gray] (0,0) -- (1,1); \draw[gray] (0,0) -- (1,2); \draw[gray] (0,1) -- (1,0); \draw[gray] (0,1) -- (1,1); \draw[gray] (0,1) -- (1,2); \draw[gray] (0,2) -- (1,0); \draw[gray] (0,2) -- (1,1); \draw[gray] (0,2) -- (1,2); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (1,0) circle (1pt) node[anchor=west] {\scriptsize $1$}; \filldraw[black] (1,1) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (1,2) circle (1pt) node[anchor=west] {\scriptsize $5$}; \end{tikzpicture} \captionof{figure}{Coloring $\psi$} \label{fignonrec2} \end{minipage} \end{ex} We present below a characterization of colorings admitting a critical pair. For that purpose, we introduce a function on colorings. Let $X$ be a set with $3\leq |X|\leq \aleph_0$. For each coloring $\varphi\not\in\mathcal{R}$ on $X$, let \[ r(\varphi)=\min\{ |\{\{x,y\}\in [X]^2: \varphi(\{x,y\})\neq \psi(\{x,y\})\}|:\psi\in 2^{[X]^2},\; \h(\psi)=\h(\varphi), \psi\neq\varphi, \psi\neq 1-\varphi\}. \] For convenience, let $r(\varphi)=0$ if $\varphi\in\mathcal{R}$. Notice $0\leq r(\varphi)\leq \aleph_0$. \begin{lema} \label{rnoes2} Let $\varphi$ be a coloring on a countable set with at least 3 elements. \begin{itemize} \item[(i)] If $\varphi$ has a critical pair, then $r(\varphi)=1$. \item[(ii)] $r(\varphi)\neq 2$ for every $\varphi$. \end{itemize} \end{lema} \begin{proof} (i) By Proposition \ref{criterio-no-UR}, $\varphi\not\in \mathcal{R}$ and the result follows from the proof of Proposition \ref{criterio-no-UR}. (ii) If $\varphi\in \mathcal{R}$, then $r(\varphi)=0$. Let $\varphi\in\neg \mathcal{R}$, and suppose $r(\varphi)=2$ to get a contradiction. Let $\psi$ be a reconstruction of $\varphi$ such that \begin{equation} \label{d2} |\{\{x,y\}: \varphi(\{x,y\})\neq \psi(\{x,y\})\}|=2. 
\end{equation} Let $\{x,y\}$ be such that $\varphi(\{x,y\})\neq \psi(\{x,y\})$. By (i), $\varphi$ does not have a critical pair. Since $\{x,y\}$ is not critical for $\varphi$, there is $z\not\in\{x,y\}$ such that $\varphi(\{x,z\})=\varphi(\{y,z\})$. We claim that $\{x,y,z\}\not\in \h(\varphi)$. Suppose not and let $i$ be its $\varphi$-color. Then, $\{x,y,z\}$ would be a $\psi$-homogeneous set of color $1-i$, which contradicts (\ref{d2}). Thus \begin{equation} \label{uno} \varphi(\{y,z\})=\varphi(\{x,z\})=1-\varphi(\{x,y\})=\psi(\{x,y\}). \end{equation} Since $\{x,y,z\}$ is not $\psi$-homogeneous, we assume, without loss of generality, that \begin{equation} \psi(\{x,z\})=1-\psi(\{x,y\}). \end{equation} Notice that $\varphi(\{x,z\})\neq \psi(\{x,z\})$ and, by (\ref{d2}), $\varphi$ and $\psi$ agree on any pair different from $\{x,z\}$ and $\{x,y\}$. Thus \begin{equation} \psi(\{y,z\})=\varphi(\{y,z\}). \end{equation} Since $\{x,z\}$ is not critical for $\varphi$, there is $w\not\in\{x,z\}$ such that $\varphi(\{x,w\})=\varphi(\{z,w\})$. From (\ref{uno}), $w\neq y$. By (\ref{d2}), $\varphi(\{x,w\})=\psi(\{x,w\})=\psi(\{z,w\})$. It is easy to verify that $\{x,w,z\}\in\h(\varphi)\triangle\h(\psi)$, a contradiction.
Since $\{x,y\}$ is not critical for $\varphi$, there is $z\not\in \{x,y\}$ such that $\varphi(\{x,z\})=\varphi(\{y,z\})$. Since $\varphi|_{X\setminus\{x\}}=\psi |_{X\setminus\{x\}}$, $\psi(\{y,z\})=\varphi(\{y,z\})$. Hence, $\{x,y,z\}\not\in\h(\varphi)=\h(\psi)$, otherwise $\psi(\{x,y\})=\psi(\{y,z\})=\varphi(\{y,z\})=\varphi(\{x,y\})$, a contradiction. Then, $\varphi(\{x,z\})\neq\psi(\{x,z\})$. By Lemma \ref{rnoes2}, $r(\varphi)\geq 3$, thus there is $\{u,w\}\in [X]^2$ with $\{u,w\}$ different from $\{x,y\}, \{x,z\}$ such that $\varphi(\{u,w\})\neq\psi(\{u,w\})$. As $\varphi|_{X\setminus\{x\}}=\psi |_{X\setminus\{x\}}$, we assume that $u=x$, i.e. $\varphi(\{x,w\})\neq\psi(\{x,w\})$. There are two cases to be considered: (a) Suppose $\varphi(\{x,w\})= \varphi(\{x,y\})$. Then, $\{x,y,w\}\in \h(\varphi)\triangle\h(\psi)$. (b) Suppose $\varphi(\{x,w\})=1- \varphi(\{x,y\})$. Then, $\{x,w,z\}\in \h(\varphi)\triangle\h(\psi)$. In both cases we get a contradiction with $\h(\varphi)=\h(\psi)$. \end{proof} We know very little about the function $r$. \begin{question} Is there a coloring $\varphi$ such that $r(\varphi)=\aleph_0$? \end{question} There are non-reconstructible colorings without a critical pair, as we show next. However, we do not know a method to construct colorings in $\neg\mathcal{R}$ without critical pairs. \begin{ex} \label{sin-pareja-critica} The colorings $\varphi$ and $\varphi'$ depicted below (Figures \ref{fignoncritical1} and \ref{fignoncritical2}) are non-reconstructible and do not have a critical pair. Colorings $\psi$ and $\psi'$ (Figures \ref{fignoncritical1b} and \ref{fignoncritical2b}) are, respectively, a non-trivial reconstruction of $\varphi$ and $\varphi'$. 
\begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.2] \draw[thick] (0,0) -- (0,1); \draw[thick] (2,0) -- (2,1); \draw[thick] (0,0) -- (2,0); \draw[thick] (0,1) -- (1,2); \draw[thick] (1,2) -- (2,1); \draw[mygray,thick] (0,0) -- (1,2); \draw[mygray,thick] (0,0) -- (2,1); \draw[mygray,thick] (0,1) -- (2,1); \draw[mygray,thick] (0,1) -- (2,0); \draw[mygray,thick] (1,2) -- (2,0); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,2) circle (1pt) node[anchor=south] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $5$}; \end{tikzpicture} \captionof{figure}{$\varphi$} \label{fignoncritical1} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.2] \draw[mygray,thick] (0,0) -- (0,1); \draw[mygray,thick] (2,0) -- (2,1); \draw[thick] (0,0) -- (2,0); \draw[thick] (0,1) -- (1,2); \draw[mygray,thick] (1,2) -- (2,1); \draw[mygray,thick] (0,0) -- (1,2); \draw[thick] (0,0) -- (2,1); \draw[thick] (0,1) -- (2,1); \draw[mygray,thick] (0,1) -- (2,0); \draw[thick] (1,2) -- (2,0); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,2) circle (1pt) node[anchor=south] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $5$}; \end{tikzpicture} \captionof{figure}{$\psi$} \label{fignoncritical1b} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.2] \draw[thick] (0,0) -- (0,1); \draw[thick] (2,0) -- (2,1); \draw[thick] (0,0) -- (2,0); \draw[mygray,thick] (0,1) -- (1,2); \draw[thick] (1,2) -- (2,1); \draw[mygray,thick] (0,0) -- (1,2); \draw[mygray,thick] (0,0) -- (2,1);
\draw[thick] (0,1) -- (2,1); \draw[mygray,thick] (0,1) -- (2,0); \draw[mygray,thick] (1,2) -- (2,0); \draw[thick] (0,0) -- (1,-1); \draw[thick] (1,2) -- (1,-1); \draw[mygray,thick] (0,1) -- (1,-1); \draw[mygray,thick] (2,1) -- (1,-1); \draw[mygray,thick] (2,0) -- (1,-1); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,2) circle (1pt) node[anchor=south] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $5$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=north] {\scriptsize $6$}; \end{tikzpicture} \captionof{figure}{$\varphi'$} \label{fignoncritical2} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1.2] \draw[thick] (0,0) -- (0,1); \draw[thick] (2,0) -- (2,1); \draw[thick] (0,0) -- (2,0); \draw[mygray,thick] (0,1) -- (1,2); \draw[mygray,thick] (1,2) -- (2,1); \draw[thick] (0,0) -- (1,2); \draw[mygray,thick] (0,0) -- (2,1); \draw[thick] (0,1) -- (2,1); \draw[mygray,thick] (0,1) -- (2,0); \draw[mygray,thick] (1,2) -- (2,0); \draw[mygray,thick] (0,0) -- (1,-1); \draw[thick] (1,2) -- (1,-1); \draw[mygray,thick] (0,1) -- (1,-1); \draw[thick] (2,1) -- (1,-1); \draw[mygray,thick] (2,0) -- (1,-1); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,2) circle (1pt) node[anchor=south] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $5$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=north] {\scriptsize $6$}; \end{tikzpicture} \captionof{figure}{$\psi'$} \label{fignoncritical2b} \end{minipage} \end{ex} We have seen that the finite changes of some reconstructible colorings remain reconstructible 
(see Proposition \ref{finitechangesEi}). The following generalization of Example \ref{nonrecovered} shows an analogous fact for some non-reconstructible colorings. \begin{prop}\label{Two_parts_unrecoverable} Let $\varphi$ be the coloring associated to a partition of $\N$ into two parts. Then, \begin{enumerate} \item[(i)] $\varphi_a\in\neg \mathcal{R}$, for every finite set $a\subset[\N]^2$. \item[(ii)] For every nonempty set $I\subseteq\N$, there is $\varphi_I\in\neg \mathcal{R}\setminus\{\varphi,1-\varphi\}$. \end{enumerate} \end{prop} \begin{proof} Let $\N=A\cup B$ be a partition of $\N$, and $\varphi:[\N]^2\longrightarrow 2$ be the coloring associated to the partition. \begin{itemize} \item[(i)] Consider $a\subset[\N]^2$ a nonempty finite set. Let $m=\max(\bigcup a)$, $p\in A$ with $p>m$ and $q\in B$ with $q>m$. Notice that $\{p,z\},\{q,z\}\notin a$ for every $z\in\N\setminus\{p,q\}$. Thus, $\varphi_a(\{p,z\})=\varphi(\{p,z\})$ and $\varphi_a(\{q,z\})=\varphi(\{q,z\})$ for every $z\in\N\setminus\{p,q\}$. Then, $\{p,q\}$ is critical for $\varphi_a$, thus by Proposition \ref{criterio-no-UR}, $\varphi_a\in\neg\mathcal{R}$. \item[(ii)] Let $A=\{a_i:i\in\N\}$ and $B=\{b_i:i\in\N\}$ be enumerations of $A$ and $B$, and consider $\emptyset\neq I\subseteq\N$. Define $\varphi_I:[\N]^2\longrightarrow 2$ by $\varphi_I(\{x,y\})=\varphi(\{x,y\})$ if $\{x,y\}\neq\{a_n,b_n\}$ for any $n\in I$; and $\varphi_I(\{a_n,b_n\})=1$ for every $n\in I$. Then, for $n\in I$, $\{a_n,b_{n}\}$ is critical for $\varphi_I$ and we are done by Proposition \ref{criterio-no-UR}. \end{itemize} \end{proof} We have seen in Proposition \ref{extensionUR} that any coloring can be extended to a coloring belonging to $\mathcal{R}$. Our next result shows that it can also be extended to a coloring in $\neg\mathcal{R}$. \begin{prop}\label{extentioninnotR} Let $\varphi$ be a coloring on $X$ and $a\not\in X$. 
There is a coloring $\psi$ on $X\cup\{a\}$ such that $\varphi\subset\psi$ and $\psi\in\neg \mathcal{R}$. \end{prop} \begin{proof} Fix $x_0\in X$ and $a\notin X$. Let $\psi$ be the coloring on $X\cup\{a\}$ defined by $\psi(\{a,x_0\})=1$, $\psi(\{a,x\})=1-\varphi(\{x_0,x\})$ for $x\in X\setminus\{x_0\}$, and $\psi|_X=\varphi$. Then, $\{a, x_0\}$ is critical for $\psi$. Hence, $\psi\in\neg\mathcal{R}$ by Proposition \ref{criterio-no-UR}. \end{proof} \section{Colorings associated to partitions of $\N$ into more than two parts} In this section we will show that, in contrast with Proposition \ref{Two_parts_unrecoverable}, the coloring associated to any partition of $\N$ into at least three parts belongs to $\mathcal{R}$ (Theorem \ref{colorpartition}). Furthermore, we will provide conditions on the partition so that the finite changes of the corresponding coloring are also in $\mathcal{R}$ (Proposition \ref{infinitepartition} and Proposition \ref{3infiniteparts}). \begin{teo} \label{colorpartition} Let $(A_i)_{i\in I}$ be a partition of $\N$ with $|I|\geq 3$. Then, the coloring associated to the partition belongs to $\mathcal{R}$. \end{teo} \begin{proof} Let $(A_i)_{i\in I}$ be a partition as in the hypothesis and let $\varphi$ be its associated coloring. We will use Proposition \ref{4suffices} to show that $\varphi\in \mathcal{R}$. Let $F\subseteq \N$ be a set with 4 elements. There are two cases to be considered. If $F$ is homogeneous, then $\varphi|_F\in \mathcal{R}$. Otherwise, there are $i,j,k\in I$ such that $F\subseteq A_i\cup A_j\cup A_k$, $|F\cap A_i|\leq 3$, $|F\cap A_j|\leq 2$ and $|F\cap A_k|\leq 1$. Thus, there is $Y$ with $F\subseteq Y\subset A_i\cup A_j\cup A_k$ such that $\varphi|_Y$ is (isomorphic to) the coloring in Example \ref{particion} and hence $\varphi|_Y\in \mathcal{R}$. \end{proof} The following example shows that Proposition \ref{4suffices} cannot be strengthened in the following sense.
It can happen that a coloring $\varphi$ is reconstructible but there is $F\subseteq X$ with $|F|\leq 4$ and $\varphi|_F\not\in \mathcal{R}$. \begin{ex} \label{4sufficesB} Let $\N=A\cup B\cup C$ be a partition of $\N$ into infinite sets, and $\varphi$ be the coloring associated to the partition. By Theorem \ref{colorpartition}, $\varphi\in \mathcal{R}$. However, if $x,y\in A$, $z,w\in B$ and $F=\{x,y,z,w\}$, then $\varphi|_F\not\in\mathcal{R}$ as $\{x,z\}$ is critical for $\varphi|_F$. \end{ex} In the following we deal with the finite changes of the coloring associated to a partition of $\N$. We show that whether a finite change of such a coloring is in $\mathcal{R}$ depends on the type of partition. \begin{prop}\label{infinitepartition} Let $(A_k)_{k}$ be an infinite partition of $\N$. Then, every finite change of the coloring associated to the partition belongs to $\mathcal{R}$. \end{prop} \begin{proof} Let $a\subset[\N]^2$ be a finite set and $\varphi$ be the coloring on $\N$ associated to the partition $(A_k)_{k}$. We claim that $\varphi_a$ has the property $E_0$ and thus it is in $\mathcal{R}$, by Proposition \ref{EiUR}. Let $F\subseteq \N$ be a finite set. Let $k$ be such that $(F\cup\{x,y\})\cap A_k=\emptyset$ for all $\{x,y\}\in a$. Pick $z\in A_k$. Then $\varphi_a(\{z,w\})=\varphi(\{z,w\})=0$ for all $w\in F$. \end{proof} \begin{prop} \label{3infiniteparts} Let $(A_i)_{i<k}$ be a finite partition of $\N$, where $k>2$, and at least three $A_i$ are infinite. Then, every finite change of the coloring associated to the partition is in $\mathcal{R}$. \end{prop} \begin{proof} Let $a\subset[\N]^2$ be a finite set and $\varphi$ be the coloring associated to the partition $(A_i)_{i<k}$. We will use Proposition \ref{4suffices} to show that $\varphi_a\in \mathcal{R}$. Let $F\subseteq \N$ be a set with 4 elements and $G=\bigcup a$. We consider two cases: (1) There are $i, j$ such that $A_i$ and $A_j$ are infinite and $F\cap (A_i\cup A_j)=\emptyset$.
Let $u\in A_i$ and $v\in A_j$ be such that $u,v\notin G$. Put $Y=F\cup \{u, v\}$. Since $\varphi_a(\{u,v\})=\varphi_a(\{u,x\})=\varphi_a(\{v,x\})=0$ for all $x\in F$, $\langle Y,\varphi_a|_Y\rangle$ is (isomorphic to) the coloring in Proposition \ref{propertyE} and thus $\varphi_a|_Y\in \mathcal{R}$. (2) Otherwise, at most one of the infinite pieces of the partition is disjoint from $F$. Let $i, j, l$ be such that $A_i$, $A_j$ and $A_l$ are infinite, and let $x\in A_i$, $y\in A_j$ and $z\in A_l$ be such that $\{x,y,z\}\cap (F\cup G)=\emptyset$. Let $Y=F\cup\{x,y,z\}$. By an argument analogous to that used in Proposition \ref{particion2} it follows that $\varphi_a|_Y\in\mathcal{R}$. \end{proof} The following example shows that Proposition \ref{3infiniteparts} is optimal in the sense that we cannot ensure reconstructibility of all finite changes of the coloring associated to finite partitions of $\N$. It is interesting, since it shows that reconstructibility is a somewhat unstable property. \begin{ex}\label{A_0finite} There is a partition $\N=A_0\cup A_1\cup A_2$ of $\N$, with $|A_0|=2$, such that some finite changes of the coloring associated to it are in $\mathcal{R}$ and some are in $\neg\mathcal{R}$. Let $A_0=\{0,1\}$, $A_1=\{2n+1:n>0\}$, $A_2=\{2n:n>0\}$, $\varphi$ be the coloring on $\N$ associated to the partition $(A_i)_{i<3}$, and $a=\big\{\{0,4\},\{1,5\},\{4,5\}\big\}$. Notice that $\{4,5\}$ is critical for $\varphi_a$, thus $\varphi_a\in\neg\mathcal{R}$ by Proposition \ref{criterio-no-UR}.
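This criticality can also be confirmed by a quick computation on an initial segment of $\N$. The sketch below (ours, purely illustrative, with hypothetical helper names) assumes the characterization of a critical pair as a pair whose two points receive different colors from every other point, and checks it for $\{4,5\}$ on the first fifty natural numbers.

```python
# Illustration only (assumed characterization: {x, y} is critical when
# every other point z satisfies phi({x, z}) != phi({y, z})).

def part(n):
    # the partition A_0 = {0, 1}, A_1 = {2n+1 : n > 0}, A_2 = {2n : n > 0}
    if n < 2:
        return 0
    return 1 if n % 2 == 1 else 2

def phi(x, y):
    # coloring associated to the partition: 1 inside a piece, 0 across pieces
    return 1 if part(x) == part(y) else 0

a = {frozenset(p) for p in ({0, 4}, {1, 5}, {4, 5})}

def phi_a(x, y):
    # finite change of phi: flip exactly the pairs in a
    v = phi(x, y)
    return 1 - v if frozenset({x, y}) in a else v

print(all(phi_a(4, z) != phi_a(5, z) for z in range(50) if z not in (4, 5)))
# prints True: every other point sees 4 and 5 with different colors
```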
\begin{minipage}{.5\textwidth} \centering \begin{tikzpicture} \draw[thick] (0,0) -- (0,1); \draw[thick] (1,0) -- (1,1); \draw[thick] (2,0) -- (2,1); \draw[thick] (1,1) -- (2,1); \draw[thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,4); \draw[thick] (2,1) -- (2,4); \draw[mygray, thick] (0,1) -- (1,1); \draw[mygray, thick] (0,1) -- (1,0); \draw[mygray, thick] (0,0) -- (1,0); \draw[mygray, thick] (1,0) -- (2,0); \draw[mygray, thick] (1,0) -- (2,1); \draw[mygray, thick] (1,1) -- (2,0); \draw[mygray, thick] (1,2) -- (2,2); \draw[mygray, thick] (1,3) -- (2,3); \draw[mygray, thick] (1,4) -- (2,4); \draw[mygray, thick] (1,1) -- (2,2); \draw[mygray, thick] (1,2) -- (2,1); \draw[mygray, thick] (1,3) -- (2,2); \draw[mygray, thick] (1,2) -- (2,3); \draw[mygray, thick] (1,3) -- (2,4); \draw[mygray, thick] (1,4) -- (2,3); \draw[mygray, thick] (0,0) .. controls (.5,-0.5) and (1.5,-0.5) .. (2,0); \draw[thick] (0,1) .. controls (.5,1.5) and (1.5,1.5) .. (2,1); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (1,0) circle (1pt) node[anchor=north] {\scriptsize $2$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $5$}; \filldraw[black] (2,2) circle (1pt) node[anchor=west] {\scriptsize $7$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $6$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $8$}; \filldraw[black] (2,3) circle (1pt) node[anchor=west] {\scriptsize $9$}; \filldraw[black] (1,4) circle (1pt) node[anchor=east] {\scriptsize $10$}; \filldraw[black] (2,4) circle (1pt) node[anchor=west] {\scriptsize $11$}; \node (b) at (1.5,4.4) {$\vdots$}; \node (b) at (0,-1) {$A_0$}; \node (b) at (1,-1) {$A_2$}; \node (b) at (2,-1) {$A_1$}; \end{tikzpicture} 
\captionof{figure}{Graph of $\varphi_a$} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture} \draw[thick] (0,0) -- (0,1); \draw[thick] (1,0) -- (1,1); \draw[thick] (2,0) -- (2,1); \draw[mygray, thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,3); \draw[thick] (2,1) -- (2,3); \draw[mygray, thick] (0,1) -- (1,1); \draw[mygray, thick] (0,1) -- (1,0); \draw[mygray, thick] (0,0) -- (1,0); \draw[mygray, thick] (1,0) -- (2,0); \draw[mygray, thick] (1,0) -- (2,1); \draw[mygray, thick] (1,1) -- (2,0); \draw[mygray, thick] (1,2) -- (2,2); \draw[mygray, thick] (1,3) -- (2,3); \draw[mygray, thick] (1,1) -- (2,2); \draw[mygray, thick] (1,2) -- (2,1); \draw[mygray, thick] (1,3) -- (2,2); \draw[mygray, thick] (1,2) -- (2,3); \draw[thick] (1,1) -- (2,1); \draw[mygray, thick] (0,0) .. controls (.5,-0.5) and (1.5,-0.5) .. (2,0); \draw[mygray, thick] (0,1) .. controls (.5,1.5) and (1.5,1.5) .. (2,1); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $1$}; \filldraw[black] (1,0) circle (1pt) node[anchor=north] {\scriptsize $2$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $4$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $5$}; \filldraw[black] (2,2) circle (1pt) node[anchor=west] {\scriptsize $7$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $6$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $8$}; \filldraw[black] (2,3) circle (1pt) node[anchor=west] {\scriptsize $9$}; \node (b) at (1.5,3.3) {$\vdots$}; \end{tikzpicture} \captionof{figure}{Graph of $\varphi_b$} \end{minipage} On the other hand, notice that $\varphi_\emptyset=\varphi\in \mathcal{R}$ by Theorem \ref{colorpartition}. A non-trivial finite change of $\varphi$ which belongs to $\mathcal{R}$ is $\varphi_b$ for $b=\big\{\{4,5\}\big\}$. 
To see this, we argue as in the proof of Proposition \ref{3infiniteparts}. Let $F=\{0,1,2,3,4,5\}$. It is easy to verify that $\varphi_b|_F\in \mathcal{R}$. \end{ex} \section{Maximal homogeneous sets}\label{maximalsection} In this section we explore the reconstructibility of a coloring by looking at its maximal homogeneous sets. Let us start by observing the obvious: for any coloring of $\N$ there are maximal homogeneous sets (by Zorn's Lemma). For any cardinal $1\leq \kappa\leq \aleph_0$ or $\kappa=2^{\aleph_0}$ there is a coloring on $\N$ with exactly $\kappa$ maximal homogeneous sets. In fact, a constant coloring has $\N$ as the unique maximal homogeneous set. Let $\varphi$ be the coloring associated to a partition of $\N$ into 2 infinite pieces. Then $\varphi$ has 2 maximal homogeneous sets. For $3\leq \kappa<\aleph_0$, we leave it to the reader to check that a simple finite change of $\varphi$ produces a coloring with exactly $\kappa$ maximal homogeneous sets. The coloring associated to a partition of $\N$ into 3 infinite pieces has countably many maximal homogeneous sets. Finally, the coloring associated to a partition of $\N$ into infinitely many infinite pieces has $2^{\aleph_0}$ maximal homogeneous sets. We do not know in general how this relates to the reconstructibility of the colorings. However, we present some results when $\kappa\leq 2$. \begin{lema} \label{externos} Let $\varphi$ be a coloring on $\N$. We have: \begin{itemize} \item[(i)] Any homogeneous set is contained in a maximal homogeneous set. \item[(ii)] Let $A=\{x\in \N: \;x\in H \; \text{for some maximal $H\in\h(\varphi)$}\}$. Then $|\N\setminus A|\leq 2$. \item[(iii)] If $H$ is a maximal homogeneous set of color $i$ and $x\nin A$, then $\{y\in H:\; \varphi(\{x,y\})=i\}$ has at most one element. \end{itemize} \end{lema} \proof (i) This is a well-known fact that follows easily from Zorn's lemma.
(ii) Towards a contradiction, suppose there are distinct $v,w,z\nin A$. By (i), $\{v,w,z\}$ is not homogeneous; interchanging the colors if necessary, we may assume that $\varphi(\{v,z\})=0$ and $\varphi(\{v,w\})=\varphi(\{w,z\})=1$. By Ramsey's theorem there is an infinite homogeneous set $H$. One has to consider whether $H$ is of color 0 or 1. Both cases are treated analogously. (a) Suppose $H$ is of color 0. Let $x_1\in H$. Since $v,z\nin A$, $\{x_1, v,z\}$ is not homogeneous; as $\varphi(\{v,z\})=0$, we may assume w.l.o.g. that $\varphi(\{v,x_1\})=1$. As $\{x_1, v,w\}$ is not homogeneous, $\varphi(\{w,x_1\})=0$. Let $x_2\in H$ be different from $x_1$. Since $\{x_1, x_2,w\}$ is not homogeneous, $\varphi(\{w,x_2\})=1$. Analogously, we conclude that $\varphi(\{v,x_2\})=\varphi(\{x_2,z\})=0$. Therefore $\{v,z,x_2\}$ is a 0-homogeneous set, which contradicts that $z\nin A$. (b) Suppose $H$ is of color 1. Let $x_1\in H$. Since $v,z\nin A$, $\{x_1, v,z\}$ is not homogeneous; as before, we may assume w.l.o.g. that $\varphi(\{z,x_1\})=1$. As $\{x_1, z,w\}$ is not homogeneous, $\varphi(\{w,x_1\})=0$. Let $x_2\in H$ be different from $x_1$. Since $\{x_1, x_2,z\}$ is not homogeneous, $\varphi(\{z,x_2\})=0$. Analogously, we conclude that $\varphi(\{v,x_2\})=1$ and $\varphi(\{w,x_2\})=0$. Let $x_3\in H\setminus\{x_1,x_2\}$. Then $\varphi(\{z,x_3\})=0$ and $\varphi(\{v,x_3\})=1$. Therefore $\{v,x_2,x_3\}$ is a 1-homogeneous set, which contradicts that $v\nin A$. (iii) Suppose there are distinct $y,z\in H$ such that $\varphi(\{x,y\})=\varphi(\{x,z\})=i$. We have that $\varphi(\{y,z\})=i$ since $H$ is of color $i$. Then $\{x,y,z\}$ is homogeneous, which contradicts that $x\nin A$. \endproof We extend the definition of the finite changes $\varphi_a$ of a coloring as follows. For each coloring $\varphi$ on $\N$ and $A\subseteq [\N]^2$, let $\varphi_A$ be given by $\varphi_A(s)=\varphi(s)$ if $s\nin A$ and $\varphi_A(s)=1-\varphi(s)$ if $s\in A$. The next proposition characterizes the colorings with exactly one maximal homogeneous set. \begin{prop} Let $\varphi$ be a non-constant coloring on $\N$.
Then, $\h(\varphi)$ has exactly one maximal element if and only if one of the following holds for some constant coloring $\psi$ on $\N$. \begin{itemize} \item[(i)] $\varphi=\psi_{A_1}$ where $A_1=\{\{x_0,y\}: \;\text{$y\in \N\setminus \{x_0\}$}\}$ for some $x_0\in \N$. \item[(ii)] $\varphi=\psi_{A_2}$ where $A_2=\{\{x_0,y\}: \;\text{$y\in \N\setminus \{x_0, x_1\}$}\}$ for some $x_0, x_1\in \N$. \item[(iii)] $\varphi=\psi_{A_3}$ where $A_3=\{\{x_0,y\}: \;\text{$y\in \N\setminus \{x_0,x_1,x_2\}$}\}\cup\{\{x_2,y\}: \;\text{$y\in \N\setminus \{x_0,x_2\}$}\} $ for some $x_0, x_1, x_2\in \N$. \item[(iv)] $\varphi=\psi_{A_4}$ where $A_4=\{\{x_0,y\}: \;\text{$y\in \N\setminus \{x_0, x_2\}$}\}\cup\{\{x_2,y\}: \;\text{$y\in \N\setminus \{x_0,x_2\}$}\} $ for some $x_0, x_2\in \N$. \end{itemize} In all cases, $\{x_0,x_1\}$ is a critical pair for $\varphi$ and therefore such colorings are non-reconstructible. \end{prop} \begin{proof} The pictures below describe each case. It is clear that each of the colorings $\psi_{A_i}$ has exactly one maximal homogeneous set. Conversely, let $M$ be the unique maximal homogeneous set of $\varphi$, say of color $i$, and fix $x_1\in M$. Let $A=\{x\in \N: \;x\in H \; \text{for some maximal $H\in\h(\varphi)$}\}$; by the uniqueness of $M$ we have $A=M$. Notice that $\N\setminus A\neq\emptyset$, since otherwise $M=\N$ and $\varphi$ would be constant. By Lemma \ref{externos}, $|\N\setminus A|\leq 2$. We consider two cases: (1) $\N\setminus A=\{x_0\}$. We have two subcases depending on the color of $\{x_0,x_1\}$. If $\varphi(\{x_0,x_1\})=1-i$, then $\varphi(\{x_0,x\})=1-i$ for all $x\in M$, by the uniqueness of $M$, and hence $\varphi=\psi_{A_1}$. Analogously, if $\varphi(\{x_0,x_1\})=i$, then $\varphi=\psi_{A_2}$. (2) $\N\setminus A=\{x_0,x_2\}$. As in case (1) we have that if $\varphi(\{x_0,x_1\})=i$, then $\varphi=\psi_{A_3}$. And, if $\varphi(\{x_0,x_1\})=1-i$, then $\varphi=\psi_{A_4}$.
\begin{minipage}{.25\textwidth} \centering \begin{tikzpicture} \draw[thick] (1,0) -- (1,1); \draw[thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,4); \draw[mygray, thick] (0,0) -- (1,1); \draw[mygray, thick] (0,0) -- (1,2); \draw[mygray, thick] (0,0) -- (1,0); \draw[mygray, thick] (0,0) -- (1,3); \draw[mygray, thick] (0,0) -- (1,4); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $x_0$}; \filldraw[black] (1,0) circle (1pt) node[anchor=north] {\scriptsize $x_1$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,4) circle (1pt) node[anchor=east] {\scriptsize $$}; \node (b) at (1,4.4) {$\vdots$}; \end{tikzpicture} \captionof{figure}{$\scriptstyle\psi_{A_1}$} \end{minipage} \begin{minipage}{.25\textwidth} \centering \begin{tikzpicture} \draw[thick] (1,0) -- (1,1); \draw[thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,4); \draw[mygray, thick] (0,0) -- (1,1); \draw[mygray, thick] (0,0) -- (1,2); \draw[thick] (0,0) -- (1,0); \draw[mygray, thick] (0,0) -- (1,3); \draw[mygray, thick] (0,0) -- (1,4); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $x_0$}; \filldraw[black] (1,0) circle (1pt) node[anchor=west] {\scriptsize $x_1$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,4) circle (1pt) node[anchor=east] {\scriptsize $$}; \node (b) at (1,4.4) {$\vdots$}; \end{tikzpicture} \captionof{figure}{$\scriptstyle\psi_{A_2}$} \end{minipage} \begin{minipage}{.25\textwidth} \centering \begin{tikzpicture} \draw[thick] (1,0) -- (1,1); \draw[thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,4); \draw[thick] (0,0) -- (0,1); \draw[mygray, thick] (0,0) -- (1,1); 
\draw[mygray, thick] (0,0) -- (1,2); \draw[mygray, thick] (0,0) -- (1,3); \draw[mygray, thick] (0,0) -- (1,4); \draw[thick] (0,0) -- (1,0); \draw[mygray, thick] (0,1) -- (1,0); \draw[mygray, thick] (0,1) -- (1,1); \draw[mygray, thick] (0,1) -- (1,2); \draw[mygray, thick] (0,1) -- (1,3); \draw[mygray, thick] (0,1) -- (1,4); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $x_0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $x_2$}; \filldraw[black] (1,0) circle (1pt) node[anchor=west] {\scriptsize $x_1$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,4) circle (1pt) node[anchor=east] {\scriptsize $$}; \node (b) at (1,4.4) {$\vdots$}; \end{tikzpicture} \captionof{figure}{$\scriptstyle\psi_{A_3}$} \end{minipage} \begin{minipage}{.25\textwidth} \centering \begin{tikzpicture} \draw[thick] (1,0) -- (1,1); \draw[thick] (0,0) -- (1,1); \draw[thick] (1,1) -- (1,4); \draw[thick] (0,0) -- (0,1); \draw[mygray, thick] (0,0) -- (1,1); \draw[mygray, thick] (0,0) -- (1,2); \draw[mygray, thick] (0,0) -- (1,3); \draw[mygray, thick] (0,0) -- (1,4); \draw[mygray,thick] (0,0) -- (1,0); \draw[mygray, thick] (0,1) -- (1,0); \draw[mygray, thick] (0,1) -- (1,1); \draw[mygray, thick] (0,1) -- (1,2); \draw[mygray, thick] (0,1) -- (1,3); \draw[mygray, thick] (0,1) -- (1,4); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $x_0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $x_2$}; \filldraw[black] (1,0) circle (1pt) node[anchor=west] {\scriptsize $x_1$}; \filldraw[black] (1,1) circle (1pt) node (a) at (0.8,1.15) {\scriptsize $$}; \filldraw[black] (1,2) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,3) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (1,4) circle (1pt) 
node[anchor=east] {\scriptsize $$}; \node (b) at (1,4.4) {$\vdots$}; \end{tikzpicture} \captionof{figure}{$\scriptstyle\psi_{A_4}$} \end{minipage} \end{proof} Now we analyze colorings with exactly two maximal homogeneous sets. The prototype is the coloring associated to a partition of $\N$ into two parts. We present the analysis according to the cardinality of $\N\setminus (H_1\cup H_2)$ where $H_1$ and $H_2$ are the maximal homogeneous sets. \begin{lema} \label{samecolor} Let $\varphi$ be a coloring on $\N$ such that $\h(\varphi)$ has exactly two maximal elements $H_1$ and $H_2$. Then $\varphi([H_1]^2)=\varphi([H_2]^2)$. \end{lema} \begin{proof} Assume towards a contradiction that $\varphi([H_1]^2)=\{1\}$ and $\varphi([H_2]^2)=\{0\}$. In particular, $|H_1\cap H_2|\leq 1$. By Ramsey's Theorem, $\h(\varphi)$ contains an infinite set, thus we can assume that $H_1$ is infinite. Then, $|H_1\setminus H_2|=\aleph_0$ and $|H_2\setminus H_1|\geq 2$. For every $x\nin H_2$, let $$ L_x=\{y\in H_2: \varphi(\{x,y\})=0\}. $$ We claim that $|L_x|\leq 1$. Otherwise, $|\{x\}\cup L_x|\geq 3$ and thus it is a 0-homogeneous set. Therefore, $\{x\}\cup L_x\subseteq H_2$, a contradiction. Analogously, letting $M_y=\{x\in H_1: \varphi(\{y,x\})=1\}$ for any $y\nin H_1$, we have that $|M_y|\leq 1$. Let us fix $y\in H_2\setminus H_1$. Since $|M_y|\leq 1$, there are $p,q\in H_1\setminus H_2$ such that $\varphi(\{y,p\})=\varphi(\{y,q\})=0$. Thus, $L_p=L_q=\{y\}$. Since $|H_1\cap H_2|\leq 1$ and $|H_2|\geq 3$, fix $z\in H_2\setminus (H_1\cup\{y\})$. Thus, $z\notin L_p\cup L_q$. That is, $\varphi(\{z,p\})=\varphi(\{z,q\})=1$, and therefore $\{p,q,z\}\in\h(\varphi)$. By Lemma \ref{externos}(i), $\{p,q,z\}$ is contained in either $H_1$ or $H_2$, which is impossible. \end{proof} \begin{prop} Let $\varphi$ be a coloring on $\N$ with exactly two maximal homogeneous sets $H_1$ and $H_2$ such that $\N=H_1\cup H_2$. Then, $\varphi$ is reconstructible if and only if $H_1\cap H_2\neq \emptyset$.
\end{prop} \proof Clearly, $\h(\varphi)$ has exactly two maximal elements: $H_1$ and $H_2$. By Lemma \ref{samecolor}, we assume that $H_1$ and $H_2$ are homogeneous of color 1. Suppose $H_1\cap H_2\neq\emptyset$. Let $x_0\in H_1\cap H_2$ and $y\in H_1\setminus H_2$, $z\in H_2\setminus H_1$. Notice that $\{x_0,y,z\}$ is not homogeneous, otherwise $\{x_0,y,z\}\subseteq H_1$ or $\{x_0,y,z\}\subseteq H_2$, which is impossible. Thus $\varphi(\{y,z\})=0$. Then, this coloring looks similar to the one depicted in Figure \ref{fig2max}. The points below $x_0$ are the elements in $H_1\cap H_2$. \begin{center} \begin{tikzpicture}[scale=1.2] \node (b) at (0,2.5) {$\vdots$}; \node (b) at (2,2.5) {$\vdots$}; \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (2,0) -- (2,1); \draw[mygray,thick] (0,0) -- (2,0); \draw[mygray,thick] (0,0) -- (2,2); \draw[mygray,thick] (0,0) -- (2,1); \draw[thick] (2,1) -- (2,2); \draw[mygray,thick] (0,2) -- (2,2); \draw[mygray,thick] (0,1) -- (2,2); \draw[thick] (0,2) -- (1,-1); \draw[mygray,thick] (0,2) -- (2,1); \draw[mygray,thick] (0,2) -- (2,0); \draw[mygray,thick] (0,0) -- (2,1); \draw[mygray,thick] (0,1) -- (2,1); \draw[mygray,thick] (0,1) -- (2,0); \draw[thick] (0,0) -- (1,-1); \draw[thick] (0,1) -- (1,-1); \draw[thick] (2,1) -- (1,-1); \draw[thick] (2,0) -- (1,-1); \draw[thick] (1,-1) -- (2,2); \draw[thick] (1,-1) -- (1,-2); \draw[thick] (1,-2) -- (1,-3); \node (b) at (1,-3.5) {$\vdots$}; \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $y$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $z$}; \filldraw[black] (2,2) circle (1pt) node[anchor=west] {\scriptsize $$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=west] {\scriptsize $x_0$}; \filldraw[black] (1,-2) circle
(1pt) node[anchor=west] {\scriptsize $$}; \filldraw[black] (1,-3) circle (1pt) node[anchor=west] {\scriptsize $$}; \end{tikzpicture} \captionof{figure}{} \label{fig2max} \end{center} To see that $\varphi$ is reconstructible we use Proposition \ref{4suffices}. Let $F\subset\N$ be of size 4. If $F$ is not contained in a homogeneous set, then there is a set $Y$ of size 5 such that $F\subset Y$ and $\varphi|_Y$ is isomorphic to the coloring given in Example \ref{recmax}. Then, $\varphi$ is reconstructible by Proposition \ref{4suffices}. Conversely, suppose now that $H_1\cap H_2=\emptyset$. Let $x\in H_1$. Then $\{y\in H_2:\; \varphi(\{x,y\})=1\}$ has at most one point. If for some $x\in H_1$, there is $y\in H_2$ such that $\varphi(\{x,y\})=1$, then $\{x,y\}$ is a critical pair. Otherwise, if $\varphi(\{x,y\})=0$ for all $x\in H_1$ and all $y\in H_2$, then $\{x,y\}$ is critical for any $x\in H_1$ and $y\in H_2$. In any case, $\varphi$ is non-reconstructible by Proposition \ref{criterio-no-UR}. \endproof Now we treat the case where $|\N\setminus (H_1\cup H_2)|=1$. Before presenting a general result, we give an example illustrating this case. \begin{ex}\label{exAnonempty} Define $\varphi:[\N]^2\longrightarrow 2$ by $$ \varphi(\{n,m\})=\begin{cases} 0, & \text{if $n=0$ and $m>1$;}\\ 0, & \text{if $n=1$ and $m>0$ is even;}\\ 1, & \text{if $n=0$ and $m=1$;}\\ 1, & \text{if $n=1$ and $m>0$ is odd;}\\ 1, & \text{otherwise.} \end{cases} $$ See Figure \ref{fig1externo}. It is not difficult to see that the only maximal homogeneous sets are $H_1=\N\setminus\{0,1\}$ and $H_2=\{2n+1:n\in \N\}$. \begin{center} \begin{tikzpicture}[scale=1] \draw[thick] (0,0) -- (0,5); \draw[thick] (2,1.5) -- (3,1.5); \draw[thick] (0,1) -- (2,1.5); \draw[thick] (0,3) -- (2,1.5); \draw[thick] (0,5) -- (2,1.5); \draw[mygray, thick] (0,0) -- (2,1.5); \draw[mygray, thick] (0,2) -- (2,1.5); \draw[mygray, thick] (0,4) -- (2,1.5); \draw[mygray, thick] (0,0) .. controls (2,.5) and (2.5,1) ..
(3,1.5); \draw[mygray, thick] (0,5) .. controls (1,4.5) and (2,4) .. (3,1.5); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $3$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (0,3) circle (1pt) node[anchor=east] {\scriptsize $5$}; \filldraw[black] (0,4) circle (1pt) node[anchor=east] {\scriptsize $6$}; \filldraw[black] (0,5) circle (1pt) node[anchor=east] {\scriptsize $7$}; \filldraw[black] (2,1.5) circle (1pt) node[anchor=north] {\scriptsize $1$}; \filldraw[black] (3,1.5) circle (1pt) node[anchor=north] {\scriptsize $0$}; \node (b) at (0,5.3) {$\vdots$}; \end{tikzpicture} \captionof{figure}{Partial drawing of $\varphi$} \label{fig1externo} \end{center} \end{ex} \begin{prop} Let $\varphi$ be a coloring on $\N$ with exactly two maximal homogeneous sets $H_1$ and $H_2$ and such that $\N\setminus(H_1\cup H_2)=\{z\}$ for some $z$. Then, $\varphi$ has a critical pair and therefore is non-reconstructible. \end{prop} \proof By Lemma \ref{samecolor}, we assume that $H_1$ and $H_2$ are homogeneous of color 1. We have to consider several cases. (i) Suppose there are $x_0\in H_1$ and $y_0\in H_2$ such that $\varphi(\{z,x_0\})=\varphi(\{z,y_0\})=1$. We claim that $\{x_0, z\}$ is a critical pair for $\varphi$. By a simple argument (as in the proof of Lemma \ref{externos}) we have that such $x_0$ and $y_0$ are unique. Thus \begin{equation}\label{2max} \varphi(\{z,x\})=\varphi(\{z,y\})=0\;\; \text{for all $x\in H_1\setminus\{x_0\}$ and $y\in H_2\setminus\{y_0\}$}. \end{equation} We claim that $(H_1\cup H_2)\setminus\{x_0,y_0\}$ is homogeneous. In fact, if $x\in H_1\setminus\{x_0\}$ and $y\in H_2\setminus\{y_0\}$, then $\{z,x,y\}$ is not homogeneous. By \eqref{2max}, $\varphi(\{x,y\})=1$ and from this the claim follows. By the maximality of $H_1$ and $H_2$, we can assume w.l.o.g. that $H_2\setminus\{y_0\}\subseteq H_1$.
Notice that $\N=(H_1\setminus\{x_0\})\cup\{x_0,y_0, z\}$. Let $p\nin \{x_0,z\}$. If $p\neq y_0$, by \eqref{2max}, $\varphi(\{z, p\})=0$ and $\varphi(\{x_0,p\})=1$ as $p\in H_1$. On the other hand, $\varphi(\{y_0,z\})=\varphi(\{z,x_0\})=1$ and $\{x_0, y_0, z\}$ is not homogeneous. Thus $\varphi(\{x_0, y_0\})=0$. This shows that $\{x_0, z\}$ is a critical pair. (ii) Suppose there is $x_0\in H_1$ such that $\varphi(\{z,x_0\})=1$ and for all $y\in H_2$, $\varphi(\{z,y\})=0$. By a similar argument as before one can show that $H_1\setminus\{x_0\}\subseteq H_2$. Let $y_0\in H_2\setminus H_1$. Then we consider two subcases. If $\varphi(\{x_0, y_0\})=1$, then $\{x_0,y_0\}$ is a critical pair. And, if $\varphi(\{x_0, y_0\})=0$, then $\{z,y_0\}$ is a critical pair. By the symmetry of the problem, we are left with the case where $\varphi(\{z,x\})=\varphi(\{z,y\})=0$ for all $x\in H_1$ and all $y\in H_2$. This implies that $H_1\cup H_2$ is homogeneous, which is impossible by the maximality of $H_1$ and $H_2$. In fact, given $x\in H_1$ and $y\in H_2$ different, we have that $\{x,y,z\}$ is not homogeneous, thus $\varphi(\{x,y\})=1$. \endproof We do not have a general result about colorings such that $\N\setminus(H_1\cup H_2)$ has two elements. We just present an example which seems interesting as it is non-reconstructible but does not have a critical pair. \begin{ex}The coloring $\varphi$ (see Figure \ref{6.7}) has two maximal homogeneous sets, is non-reconstructible and has no critical pair.
\begin{minipage}{.5\textwidth}\centering \begin{tikzpicture}[scale=1] \node (b) at (0,2.5) {$\vdots$}; \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (0,0) -- (-1,-1); \draw[thick] (-1,-1) -- (-1,-2); \draw[mygray,thick] (-1,-2) -- (1,-1); \draw[mygray,thick] (-1,-1) -- (1,-1); \draw[thick] (-1,-2) -- (1,-2); \draw[mygray,thick] (-1,-2) -- (0,0); \draw[mygray,thick] (0,0) -- (1,-2); \draw[mygray,thick] (-1,-1) -- (1,-2); \draw[mygray,thick] (-1,-2) -- (0,1); \draw[mygray,thick] (-1,-2) -- (0,2); \draw[mygray,thick] (1,-2) -- (0,1); \draw[mygray,thick] (1,-2) -- (0,2); \draw[thick] (-1,-1).. controls (-0.7,0.5).. (0,1); \draw[thick] (-1,-1) .. controls (-0.7,1.5).. (0,2); \draw[thick] (1,-1) .. controls (0.7,0.5).. (0,1); \draw[thick] (1,-1).. controls (0.7,1.5).. (0,2); \draw[thick] (0,0) -- (1,-1); \draw[thick] (1,-1) -- (1,-2); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $5$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $6$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (-1,-1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,-2) circle (1pt) node[anchor=west] {\scriptsize $1$}; \filldraw[black] (-1,-2) circle (1pt) node[anchor=east] {\scriptsize $0$}; \end{tikzpicture} \captionof{figure}{$\varphi$} \label{6.7} \end{minipage} \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture}[scale=1] \node (b) at (0,2.5) {$\vdots$}; \draw[thick] (0,0) -- (0,1); \draw[thick] (0,1) -- (0,2); \draw[thick] (0,0) -- (-1,-1); \draw[mygray,thick] (-1,-1) -- (-1,-2); \draw[thick] (-1,-2) -- (1,-1); \draw[mygray,thick] (-1,-1) -- (1,-1); \draw[thick] (-1,-2) -- (1,-2); \draw[mygray,thick] (-1,-2) -- (0,0); \draw[mygray,thick] (0,0) -- (1,-2); \draw[thick] (-1,-1) -- (1,-2); \draw[mygray,thick] (-1,-2) -- (0,1); \draw[mygray,thick] (-1,-2) -- (0,2); \draw[mygray,thick] 
(1,-2) -- (0,1); \draw[mygray,thick] (1,-2) -- (0,2); \draw[thick] (-1,-1).. controls (-0.7,0.5).. (0,1); \draw[thick] (-1,-1) .. controls (-0.7,1.5).. (0,2); \draw[thick] (1,-1) .. controls (0.7,0.5).. (0,1); \draw[thick] (1,-1).. controls (0.7,1.5).. (0,2); \draw[thick] (0,0) -- (1,-1); \draw[mygray,thick] (1,-1) -- (1,-2); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $4$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $5$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $6$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=west] {\scriptsize $3$}; \filldraw[black] (-1,-1) circle (1pt) node[anchor=east] {\scriptsize $2$}; \filldraw[black] (1,-2) circle (1pt) node[anchor=west] {\scriptsize $1$}; \filldraw[black] (-1,-2) circle (1pt) node[anchor=east] {\scriptsize $0$}; \node (b) at (0,-2.5) {$\scriptstyle\psi$}; \end{tikzpicture} \captionof{figure}{$\psi$} \label{6.7b} \end{minipage} Then, $H_1=\{2,4,5,6,\cdots\}$ and $H_2=\{3,4,5,6,\cdots\}$ are the only maximal homogeneous sets and $\N\setminus (H_1\cup H_2)=\{0,1\}$. There is no critical pair for $\varphi$ but $\psi$ (see Figure \ref{6.7b}) is a non-trivial reconstruction of $\varphi$. \end{ex} We finish this section with an example of a non-reconstructible coloring with three pairwise disjoint infinite maximal homogeneous sets. In contrast, we have proved in Theorem \ref{colorpartition} that the coloring associated to any partition of $\N$ into three parts belongs to $\mathcal{R}$. Thus, knowing that $\h(\varphi)$ has at least three infinite maximal pairwise disjoint elements does not guarantee that $\varphi\in\mathcal{R}$. \begin{ex} \label{3max-no-UR} Let $A=\{a_i:i\in\N\}$, $B=\{b_i:i\in\N\}$ and $C=\{c_i:i\in\N\}$ be pairwise disjoint subsets of $\N$ such that $\N=A\cup B\cup C$.
Define $\varphi:\Nt\longrightarrow 2$ by $\varphi(\{a_i,a_j\})=\varphi(\{b_i,b_j\})=\varphi(\{c_i,c_j\})=1$ for every $i\neq j$; $\varphi(\{a_0,b_0\})=\varphi(\{a_0,c_{2i}\})=\varphi(\{b_0,c_{2i+1}\})=1$ for every $i\in\N$; and $\varphi(\{n,m\})=0$ for any other $\{n,m\}\in\Nt$. See Figure \ref{6.8}. \begin{center} \begin{tikzpicture}[thick, scale=0.9] \draw[mygray, thick] (0,0) -- (2,1); \draw[mygray, thick] (0,1) -- (2,2); \draw[mygray, thick] (0,2) -- (2,1); \draw[mygray, thick] (0,1) -- (2,0); \draw[mygray, thick] (1,-1) -- (2,0); \draw[mygray, thick] (0,0) -- (1,-2); \draw[mygray, thick] (0,2) -- (2,2); \draw[mygray, thick] (0,1) -- (2,1); \draw[mygray, thick] (0,3) -- (2,2); \draw[mygray, thick] (0,2) -- (2,3); \draw[mygray, thick] (0,3) -- (2,3); \draw[thick] (0,0) -- (0,3); \draw[thick] (1,-1) -- (1,-4); \draw[thick] (2,3) -- (2,0); \draw[thick] (0,0) -- (2,0); \draw[thick] (0,0) -- (1,-1); \draw[thick] (2,0) -- (1,-2); \draw[mygray, thick] (2,0) .. controls (1.5,-2) .. (1,-3); \draw[mygray, thick] (0,0) .. controls (0.4,-3) .. (1,-4); \draw[thick] (0,0) .. controls (0.5,-2) .. (1,-3); \draw[thick] (2,0) .. controls (1.6,-3) .. 
(1,-4); \filldraw[black] (0,0) circle (1pt) node[anchor=east] {\scriptsize $a_0$}; \filldraw[black] (0,1) circle (1pt) node[anchor=east] {\scriptsize $a_1$}; \filldraw[black] (0,2) circle (1pt) node[anchor=east] {\scriptsize $a_2$}; \filldraw[black] (0,3) circle (1pt) node[anchor=east] {\scriptsize $a_3$}; \filldraw[black] (2,0) circle (1pt) node[anchor=west] {\scriptsize $b_0$}; \filldraw[black] (2,1) circle (1pt) node[anchor=west] {\scriptsize $b_1$}; \filldraw[black] (2,2) circle (1pt) node[anchor=west] {\scriptsize $b_2$}; \filldraw[black] (2,3) circle (1pt) node[anchor=west] {\scriptsize $b_3$}; \filldraw[black] (1,-1) circle (1pt) node[anchor=west] {\scriptsize $c_0$}; \filldraw[black] (1,-2) circle (1pt) node[anchor=west] {\scriptsize $c_1$}; \filldraw[black] (1,-3) circle (1pt) node[anchor=west] {\scriptsize $c_2$}; \filldraw[black] (1,-4) circle (1pt) node[anchor=west] {\scriptsize $c_3$}; \node (b) at (1,3.3) {$\vdots$}; \node (b) at (1,-4.3) {$\vdots$}; \end{tikzpicture} \captionof{figure}{Partial drawing of $\varphi$} \label{6.8} \end{center} Notice that $A,B$ and $C$ are maximal elements of $\h(\varphi)$. Moreover, $\{a_0,b_0\}$ is critical for $\varphi$, thus $\varphi\in\neg\mathcal{R}$, by Proposition \ref{criterio-no-UR}. \end{ex} \section{Complexity of the reconstruction problem.} This section is devoted to analyzing the reconstruction problem from the descriptive set theoretic point of view. We show that the problem of recovering a coloring from the collection of its homogeneous sets can be done in a Borel way. The space of colorings $2^{[\N]^2}$ is endowed with the product topology which has as basic open sets $\{\varphi\in 2^{[\N]^2}: \; \varphi_0\subseteq \varphi\}$ for $\varphi_0$ a coloring on a finite subset of $\N$. Let $K(2^{\N})$ be the hyperspace of compact subsets of $2^{\N}$ with the Vietoris topology (see, for instance, \cite[4F]{Kechris94}).
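Before setting up the topology, the finite core of this recovery problem can be seen concretely. The following brute-force sketch (ours, purely illustrative) takes the coloring associated to a partition of a 6-point set into parts of sizes 3, 2 and 1 (our finite stand-in for the coloring of Example \ref{particion}) and confirms that its family of homogeneous triples determines the coloring up to interchanging the two colors, in the spirit of Proposition \ref{triangulos}.

```python
from itertools import combinations, product

points = range(6)
pairs = list(combinations(points, 2))        # the 15 edges of the 6-point set
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}  # parts of sizes 3, 2, 1

# coloring associated to the partition: 1 inside a part, 0 across parts
phi = {e: 1 if part[e[0]] == part[e[1]] else 0 for e in pairs}

def homogeneous_triples(col):
    # triples all three of whose edges receive the same color
    return {t for t in combinations(points, 3)
            if len({col[e] for e in combinations(t, 2)}) == 1}

target = homogeneous_triples(phi)

# brute force over all 2^15 colorings of the 6-point set
matches = [dict(zip(pairs, bits))
           for bits in product((0, 1), repeat=len(pairs))
           if homogeneous_triples(dict(zip(pairs, bits))) == target]

flip = {e: 1 - v for e, v in phi.items()}
print(len(matches) == 2 and phi in matches and flip in matches)
# prints True: only phi and 1 - phi realize these homogeneous triples
```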
A subbasis for $K(2^{\N})$ consists of the sets $V^+=\{L\in K(2^{\N}): \; L\subset V\}$ and $V^-=\{L\in K(2^{\N}): \; L\cap V\neq \emptyset\}$, for $V\subseteq 2^{\N}$ open. As $2^{\N}$ is zero dimensional, we can also assume that $V$ is clopen. We recall that a subset of a topological space is $G_\delta$ (respectively, $F_\sigma$), if it is a countable intersection of open sets (respectively, a countable union of closed sets). Under the usual identification of a subset of $\N$ with its characteristic function, $\h(\varphi)$ is not topologically closed in $2^{\N}$. Notice, however, that if $A\in cl(\h(\varphi))\setminus\h(\varphi)$, then $|A|\leq 2$. Thus $cl(\h(\varphi))= cl(\h(\psi))$ iff $\h(\varphi)=\h(\psi)$. Since we want to analyze the reconstruction problem from a topological point of view, it is better to work with a closed set instead of $\h(\varphi)$. However, it is more convenient to use a closed set larger than the closure. Let \[ \ch(\varphi)=\h(\varphi)\cup\{A\subset \N: |A|\leq 2\}. \] Notice that $cl(\h(\varphi))\subseteq \ch(\varphi)$ and $\ch(\varphi)$ is closed. Also, $\ch(\varphi)= \ch(\psi)$ iff $\h(\varphi)=\h(\psi)$. The main result of this section is to show that there is a Borel function $g:K(2^{\N})\to 2^{[\N]^2}$ such that \[ \ch(g(\ch(\varphi)))=\ch(\varphi) \] for all $\varphi\in 2^{[\N]^2}$. So $g(\ch(\varphi))$ is a coloring recovered from $\ch(\varphi)$, but notice that $g(\ch(\varphi))$ might be neither $\varphi$ nor $1-\varphi$ when $\varphi\not\in \mathcal{R}$. \begin{prop} \label{RecobrableBorel} The collection of all colorings on $\N$ belonging to $\mathcal{R}$ is a dense $G_\delta$ subset of $2^{[\N]^2}$. \end{prop} \begin{proof} Recall that $\h(\varphi)=\h(\psi)$ if and only if $\h(\varphi)\cap [\N]^{3}=\h(\psi)\cap [\N]^{3}$ (see Proposition \ref{triangulos}). The following set is closed: \[ E=\{(\varphi,\psi)\in 2^{[\N]^2}\times 2^{[\N]^2}: \; \h(\varphi)=\h(\psi)\}.
\] In fact, $(\varphi,\psi)\not\in E$ if and only if there is $H\in [\N]^{3}$ such that either $H\in \h(\varphi)\setminus \h(\psi)$ or $H\in \h(\psi)\setminus \h(\varphi)$. For every finite set $H$, it is straightforward to verify that $ \{\varphi\in 2^{[\N]^2}: \; H\in \h(\varphi)\}$ is clopen. Thus the complement of $E$ is open. Now consider the relation $\varphi\approx \psi$ if either $\varphi=\psi$ or $\psi= 1-\varphi$. Clearly $\approx$ is a closed subset of $2^{[\N]^2}\times 2^{[\N]^2}$. Finally we have \[ \varphi\not\in \mathcal{R} \;\Leftrightarrow \;\; \exists \psi\in 2^{[\N]^2}\; ( (\varphi,\psi)\in E \wedge (\varphi\not\approx\psi)). \] Thus, the collection of colorings that are not reconstructible is the projection of a $K_\sigma$ set (i.e. a countable union of compact sets) and thus it is also $K_\sigma$. Lastly, from the definition of the product topology on $2^{[\N]^2}$, given a coloring $\varphi$, the collection of all its finite changes, i.e. $\{\varphi_a:\; \mbox{$a\subseteq [\N]^2$ is finite}\}$, is a dense subset of $2^{[\N]^2}$. Thus, by Corollary \ref{finitechanges}, $\mathcal{R}$ is dense. \end{proof} \begin{prop} \label{funcionhom} The function $\ch:2^{[\N]^2}\to K(2^{\N})$ given by $\varphi\mapsto \ch(\varphi)$ is Borel. \end{prop} \begin{proof} Let $V$ be a clopen subset of $2^{\N}$. We need to show that $C(V^+)=\{\varphi\in 2^{\Nt}:\; \ch(\varphi)\in V^+\}$ and $C(V^-)=\{\varphi\in 2^{\Nt}:\; \ch(\varphi)\in V^-\}$ are Borel. First, if $[\N]^{\leq 2}\not\subseteq V$, then $C(V^+)=\emptyset$ and there is nothing to show. So we assume that $[\N]^{\leq 2}\subseteq V$. In this case, $\ch(\varphi)\subseteq V$ is equivalent to $\h(\varphi)\subseteq V$. Then, we have that \begin{equation} \label{vmas} \ch(\varphi)\subseteq V \Leftrightarrow (\forall H\in [\N]^{<\omega})(H\in \h(\varphi)\rightarrow H\in V). \end{equation} Notice that $\{\varphi\in 2^{\Nt}:\; H\in \h(\varphi)\}$ is clopen for every finite $H\subset \N$.
From this and \eqref{vmas} we conclude that $C(V^+)$ is $G_\delta$. For $C(V^-)$ notice that if $V\cap [\N]^{\leq 2}\neq \emptyset$, then $C(V^-)=2^{\Nt}$ and there is nothing to show. Since $[\N]^{\leq 2}$ is a closed nowhere dense subset of $2^{\N}$, we can assume w.l.o.g. that $V\cap [\N]^{\leq 2}=\emptyset$. In this case, we have $\ch(\varphi)\cap V\neq\emptyset$ if and only if $\h(\varphi)\cap V\neq\emptyset$. As before, this happens exactly when $\h(\varphi)\cap [\N]^{<\omega}\cap V\neq\emptyset$. Thus $C(V^-)$ is open. \end{proof} Let $HOM=\{\ch(\varphi):\;\varphi\in 2^{[\N]^2}\}$. \begin{prop} \label{homgdelta} $HOM$ is $G_\delta$ in $K(2^{\N})$ and thus it is Polish. \end{prop} \begin{proof} Let $\mathcal{P}_n$ denote the collection of subsets of $n+1$ of size at least 3. Let $L\in K(2^{\N})$. We claim \begin{eqnarray} \label{gdelta} L\in HOM\;\;\Leftrightarrow \;\; [\N]^{\leq 2}\subseteq L\; \&\; (\forall n\geq 2)\,(\exists\varphi\in 2^{[n+1]^2})\, ( L\cap \mathcal{P}_n=\h(\varphi)). \end{eqnarray} In fact, suppose $\varphi$ is a coloring on $\N$ and $L=\ch(\varphi)$. Then, $\varphi|_{n+1}$ satisfies the right hand side of (\ref{gdelta}). For the other direction, let $L\in K(2^{\N})$ be such that $[\N]^{\leq 2}\subseteq L$ and consider the following set \[ T_L=\{\varphi\in 2^{[n+1]^2}: \; L\cap \mathcal{P}_n=\h(\varphi),\; n\in \N\}. \] Then, $T_L$ is a finitely branching tree. If $L$ satisfies the right hand side of (\ref{gdelta}), then $T_L$ is infinite; thus, by K\"onig's lemma, it has a branch $\varphi$ which is clearly a coloring on $\N$. Then, $L$ and $\h(\varphi)$ contain the same finite sets of size at least 3, thus $L=\ch(\varphi)$. To see that $HOM$ is $G_\delta$ we observe the following. The collection of all $L\in K(2^{\N})$ such that $ [\N]^{\leq 2}\subseteq L$ is closed. On the other hand, if $B\subseteq A\subset 2^{\N}$ with $A$ finite, then $\{L\in K(2^{\N}):\; L\cap A=B\}$ is $G_\delta$.
The reason is that the relation ``$x\in L$'' is closed in $2^{\N}\times K(2^{\N})$ and we have \[ L\cap A=B\;\Leftrightarrow \; \forall x\in A\; (x\in L\leftrightarrow x\in B). \] The last claim is the classical fact that a subset of a Polish space (i.e. a completely metrizable separable space) is itself Polish if and only if it is $G_\delta$ (see \cite[3.11]{Kechris94}). \end{proof} \begin{teo} There is a Borel map $g: HOM\to 2^{[\N]^2}$ such that, for all $L\in HOM$, \[ \ch(g(L))= L. \] \end{teo} \begin{proof} Consider the following relation: \[ A=\{(L, \varphi)\in HOM\times 2^{[\N]^2}:\; L=\ch(\varphi)\}. \] Since the function $\ch$ is Borel (see Proposition \ref{funcionhom}), $A$ is Borel. We claim that all vertical sections of $A$ are closed (and hence compact). In fact, let $E$ be the equivalence relation on $2^{[\N]^2}$ given by $\varphi E \psi$ if $\h(\varphi)=\h(\psi)$. We have seen in the proof of Proposition \ref{RecobrableBorel} that $E$ is closed. Recall that $\hom(\varphi)=\hom(\psi)$ iff $\ch(\varphi)=\ch(\psi)$. Thus, for every $L\in HOM$, if $L=\ch(\varphi)$, then $A_L$ is the $E$-equivalence class of $\varphi$, which is closed, as promised. Since $HOM$ is Polish (by Proposition \ref{homgdelta}), we can use a classical uniformization theorem to define $g$. We know that any Borel relation on a Polish space with $K_\sigma$ sections has a Borel uniformization (see \cite[18.18]{Kechris94}). Thus, there is a Borel map $g:HOM\to 2^{[\N]^2}$ such that $ \ch(g(L))= L$.\end{proof} \noindent{\bf Acknowledgment:} We thank the referees for all their comments and suggestions, which helped to improve the presentation of the results. \end{document}
\begin{document} \title[Classification of contact foliations]{Homotopy classification of contact foliations\\ on open contact manifolds} \author[M. Datta]{Mahuya Datta} \address{Statistics and Mathematics Unit, Indian Statistical Institute\\ 203, B.T. Road, Calcutta 700108, India.\\ e-mail:[email protected]\\ } \author[S. Mukherjee]{Sauvik Mukherjee} \address{Statistics and Mathematics Unit, Indian Statistical Institute\\ 203, B.T. Road, Calcutta 700108, India.\\ e-mail:[email protected]} \keywords{Contact manifolds, Contact foliations, $h$-principle} \thanks{2010 Mathematics Subject Classification: 53C12, 53D99, 57R17, 57R30, 57R32} \begin{abstract}We give a homotopy classification of foliations on open contact manifolds whose leaves are contact submanifolds of the ambient space. The result is an extension of Haefliger's classification of foliations on open manifolds. On the way to the main theorem we prove a result on equidimensional isocontact immersions on open contact manifolds. \end{abstract} \maketitle \section{Introduction} In \cite{haefliger}, Haefliger gave a homotopy classification of foliations on open manifolds. A foliation on a manifold $M$ can be thought of as a partition of the manifold into injectively immersed submanifolds, called leaves. The simplest type of foliations on a manifold $M$ are obtained from submersions on it, in which case the level sets of the submersions are the leaves of the foliations. More generally, if a smooth map $f:M\to N$ is transversal to a given foliation $\mathcal F_N$ on $N$ then the inverse image of $\mathcal F_N$ under $f$, denoted by $f^{-1}\mathcal F_N$, is a foliation on $M$; it is a standard fact that the codimension of $f^{-1}\mathcal F_N$ is the same as the codimension of $\mathcal F_N$. Using a result of Phillips on homotopy classification of maps transversal to foliations (\cite{phillips1}), Haefliger obtained a classification of foliations on open manifolds.
Later on, this result was extended to all manifolds by Thurston. In this article we shall study foliations on an open manifold $M$ in the presence of a contact form and extend the result of Haefliger. Let $(M,\alpha)$ be a contact manifold with contact form $\alpha$. Then $\ker\alpha$ is a codimension 1 subbundle of the tangent bundle $TM$ and the restriction of $d\alpha$ to $\ker\alpha$ is a symplectic structure on the bundle. A foliation $\mathcal F$ on $M$ will be called a \emph{contact foliation on $M$ subordinate to} $\alpha$ (or simply a \emph{contact foliation} on $(M,\alpha)$) if the leaves of $\mathcal F$ are contact submanifolds of $M$. The tangent distribution $T\mathcal F$ of a contact foliation is transversal to the contact subbundle $\ker\alpha$; moreover, the intersection of $T\mathcal F$ with $\ker\alpha$ is a symplectic subbundle of $\ker\alpha$ with respect to the symplectic structure $d'\alpha=d\alpha|_{\ker\alpha}$. Suppose that $N$ is an arbitrary manifold with a foliation $\mathcal F_N$ of codimension $2q$ which is strictly less than $\dim M$. We shall denote the normal bundle $TN/T\mathcal F_N$ of $\mathcal F_N$ by $\nu\mathcal F_N$ and $\pi:TN\to\nu\mathcal F_N$ will denote the canonical projection map. Let $Tr_\alpha(M,\mathcal F_N)$ be the space of all maps $f:M\rightarrow N$ which are transversal to $\mathcal F_N$ and such that $f^{-1}\mathcal F_N$, the inverse image of the foliation $\mathcal F_N$, is a contact foliation on $M$.
Let $\mathcal E_\alpha(TM,\nu\mathcal F_N)$ be the space of all vector bundle morphisms $F:TM\rightarrow TN$ such that \begin{enumerate} \item $\pi\circ F:TM\to \nu(\mathcal F_N)$ is an epimorphism, \item $\ker(\pi\circ F)$ is transverse to the contact distribution $\ker\alpha$ and \item $\ker(\pi\circ F)\cap\ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$.\end{enumerate} With the $C^{\infty}$-compact open topology on $Tr_\alpha(M,\mathcal F_N)$ and the $C^{0}$-compact open topology on $\mathcal{E}_\alpha(TM,\nu\mathcal F_N)$ we obtain the following result. \begin{theorem}\label{T:contact-transverse} Let $(M,\alpha)$ be an open contact manifold and $(N,\mathcal F_N)$ be any foliated manifold. Suppose that the codimension of $\mathcal F_N$ is even and is strictly less than the dimension of $M$. Then \[\pi\circ d:Tr_\alpha(M,\mathcal F_N)\to\mathcal{E}_\alpha(TM,\nu\mathcal F_N)\] is a weak homotopy equivalence.\end{theorem} \noindent Theorem~\ref{T:contact-transverse} may be viewed as an extension of Phillips' Transversality Theorem (\cite{phillips1}) in the contact setting. Using this result we can obtain a homotopy classification of contact foliations on $(M,\alpha)$ following Haefliger (\cite{haefliger}). To state the result, let $\Gamma_q$ be the groupoid of germs of local diffeomorphisms of $\mathbb R^q$ and $B\Gamma_q$ be the classifying space of $\Gamma_q$-structures with the universal $\Gamma_q$-structure $\Omega_q$. The homotopy classes of $\Gamma_q$-structures on $M$ are in one-to-one correspondence with the homotopy classes of continuous maps $M\to B\Gamma_q$ (see \cite{haefliger1}). Any $\Gamma_q$-structure on $M$ can be obtained as the pullback $f^*\Omega_q$ by a continuous map $f:M\to B\Gamma_q$. Theorem~\ref{T:contact-transverse} leads to the following classification of contact foliations on open contact manifolds. \begin{theorem} Let $(M,\alpha)$ be an open contact manifold.
The integral homotopy classes of codimension $2q$ contact foliations on $M$ subordinate to $\alpha$ are in one-to-one correspondence with the `integrable homotopy' classes of bundle epimorphisms $(F,f):TM\to \nu\Omega_{2q}$ for which $\ker F\cap \ker\alpha$ is a symplectic subbundle of $\ker\alpha$.\label{MT1}\end{theorem} \noindent Theorem~\ref{T:contact-transverse} will follow from a general $h$-principle type result (see Theorem~\ref{CT} stated below) by observing that $Tr_\alpha(M,\mathcal F_N)$ is the solution space of some open relation which is invariant under the action of local contactomorphisms. \begin{theorem} \label{CT} Let $(M,\alpha)$ be an open contact manifold and $\mathcal{R}\subset J^{r}(M,N)$ be an open relation invariant under the action of the pseudogroup of local contactomorphisms of $(M,\alpha)$. Then the parametric $h$-principle holds for $\mathcal{R}$. \end{theorem} \noindent A symplectic analogue of Theorem~\ref{CT} was proved in \cite{datta-rabiul} using a result of Ginzburg (\cite{ginzburg}). Ginzburg demonstrated some weaker form of stability for symplectic forms on open manifolds, though Moser's stability is known to be false on such manifolds. Here we prove a contact analogue of Ginzburg's result which can be stated as follows. \begin{theorem} Let $\xi_{t}$, $t\in[0,1]$, be a continuous family of contact structures defined by the contact forms $\alpha_t$ on a compact manifold $M$ with boundary. Let $(N,\tilde{\xi}=\ker\eta)$ be a contact manifold without boundary. Then every isocontact immersion $f_0:(M,\xi_0)\to (N,\tilde{\xi})$ admits a regular homotopy $\{f_t\}$ such that $f_t:(M,\xi_t)\to (N,\tilde{\xi})$ is an isocontact immersion for all $t\in[0,1]$.
In addition, if $M$ contains a compact submanifold $V_{0}$ in its interior and $\xi_{t}=\xi_{0}$ on $Op\,(V_{0})$ then $f_{t}$ can be chosen to be a constant homotopy on $Op\,(V_{0})$.\label{T:equidimensional_contact immersion} \end{theorem} As a corollary to Theorem~\ref{T:equidimensional_contact immersion} we show that every open contact manifold admits a regular homotopy of contact immersions $\varphi_t$, $t\in [0,1]$, such that $\varphi_0=id_M$ and $\varphi_1$ takes $M$ into an arbitrary neighbourhood of a core of $M$. This has a very important role to play in the proof of Theorem~\ref{CT}. The paper is organised as follows. We recall preliminaries of contact manifolds in Section 2. Theorem~\ref{T:equidimensional_contact immersion} is proved in Section 3. In Section 4, we prove Theorem~\ref{CT} after briefly reviewing the language of $h$-principle and a few major results which are necessary for our purpose. We prove Theorem~\ref{T:contact-transverse} and Theorem~\ref{MT1} in Sections 5 and 7 respectively. In the final section we give an example of a contact foliation on some open subsets of odd-dimensional spheres. We include the relevant background of $\Gamma_q$-structures and their relation to foliations in Section 7. \section{Preliminaries of contact manifolds} In this section we review basic definitions and results related to contact manifolds. \begin{definition}{\em Let $M$ be a $(2n+1)$-dimensional manifold. A 1-form $\alpha$ on $M$ is said to be a \emph{contact form} if $\alpha \wedge (d\alpha)^n$ is nowhere vanishing. }\label{contact_form} \end{definition} If $\alpha$ is a contact form then \[d'\alpha=d\alpha|_{\ker\alpha}\index{$d'\alpha$}\] is a symplectic structure on the hyperplane distribution $\ker\alpha$.
Also, there is a global vector field $R_\alpha$ on $M$ defined by the relations \begin{equation}\alpha(R_\alpha)=1,\ \ \ i_{R_\alpha}d\alpha=0,\label{reeb} \index{Reeb vector field} \end{equation} where $i_X$ denotes the interior multiplication by the vector field $X$. Thus, $TM$ has the following decomposition: \begin{equation}TM=\ker\alpha \oplus \ker\,d\alpha,\label{decomposition}\end{equation} where $\ker\alpha$ is a symplectic vector bundle and $\ker\,d\alpha$ is the 1-dimensional subbundle generated by $R_\alpha$. The vector field $R_\alpha$ is called the \emph{Reeb vector field} of the contact form $\alpha$. A hyperplane distribution $\xi$ on $M$ is said to be a \emph{contact structure} on $M$ if $\xi$ is locally defined as the kernel of a (local) contact form $\alpha$. Observe that the local contact form in this case is defined uniquely up to multiplication by a nowhere vanishing function $f$. Moreover, $d(f\alpha)|_\xi=f\, d\alpha|_\xi$ and hence every contact structure is associated with a conformal symplectic structure. If $\alpha$ is a contact form then the distribution $\ker\alpha$ will be called the \emph{contact distribution of} $\alpha$. \begin{example}\label{ex:contact}\end{example} \begin{enumerate}\item Every odd dimensional Euclidean space $\mathbb R^{2n+1}$ has a canonical contact form given by $\alpha=dz+\sum_{i=1}^nx_i\,dy_i$, where $(x_1,\dots,x_n,y_1,\dots,y_n,z)$ is the canonical coordinate system on $\mathbb R^{2n+1}$. \item Every even dimensional Euclidean space $\mathbb R^{2n}$ has a canonical 1-form $\lambda=\sum_{i=1}^n(x_idy_i-y_idx_i)$ which is called the Liouville form of $\mathbb R^{2n}$, where $(x_1,\dots,x_n$, $y_1,\dots,y_n)$ is the canonical coordinate system on $\mathbb R^{2n}$. The restriction of $\lambda$ to the unit sphere in $\mathbb R^{2n}$ defines a contact form.
\end{enumerate} A contact form $\alpha$ also defines a canonical isomorphism $\phi:TM\to T^*M$ between the tangent and the cotangent bundles of $M$ given by \begin{equation}\phi(X)=i_X d\alpha+\alpha(X)\alpha, \text{ for } X\in TM.\label{tgt_cotgt}\end{equation} It is easy to see that the Reeb vector field $R_\alpha$ corresponds to the 1-form $\alpha$ under $\phi$. \begin{definition} {\em Let $(N,\xi)$ be a contact manifold. A monomorphism $F:TM\to (TN,\xi)$ is called \textit{contact} if $F$ is transversal to $\xi$ and $F^{-1}(\xi)$ is a contact structure on $M$. A smooth map $f:M\to (N,\xi)$ is called \textit{contact} if its differential $df$ is contact. If $M$ is also a contact manifold with a contact structure $\xi_0$, then a monomorphism $F:TM\to TN$ is said to be \textit{isocontact} if $\xi_0=F^{-1}\xi$ and $F:\xi_0\to\xi$ is conformal symplectic with respect to the conformal symplectic structures on $\xi_0$ and $\xi$. A smooth map $f:M\to N$ is said to be \textit{isocontact} if $df$ is isocontact. A diffeomorphism $f:(M,\xi)\to (N,\xi')$ is said to be a \emph{contactomorphism} \index{contactomorphism} if $f$ is isocontact. \index{isocontact map}} \end{definition} If $\xi=\ker\alpha$ for a globally defined 1-form $\alpha$ on $N$, then $f$ is contact if $f^*\alpha$ is a contact form on $M$. Furthermore, if $\xi_0=\ker\alpha_0$ then $f$ is isocontact if $f^*\alpha=\varphi \alpha_0$ for some nowhere vanishing function $\varphi:M\to\mathbb R$.
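To illustrate the last criterion, here is a standard worked example (added for the reader; it does not appear in the source text): a linear rescaling of the standard contact space is isocontact.

```latex
% Worked example (illustration): an isocontact diffeomorphism of
% (\mathbb R^{2n+1},\alpha) with the canonical form \alpha = dz + \sum_i x_i\,dy_i.
% For a fixed \lambda\neq 0, let f_\lambda(x,y,z)=(\lambda x, y, \lambda z). Then
\[
 f_\lambda^*\alpha
   \;=\; d(\lambda z)+\sum_{i=1}^n (\lambda x_i)\,dy_i
   \;=\; \lambda\Bigl(dz+\sum_{i=1}^n x_i\,dy_i\Bigr)
   \;=\; \lambda\,\alpha,
\]
% so f_\lambda^*\alpha=\varphi\,\alpha with the nowhere vanishing constant
% function \varphi\equiv\lambda; by the criterion above, f_\lambda is isocontact.
```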
\begin{definition}{\em A vector field $X$ on a contact manifold $(M,\alpha)$ is called a \emph{contact vector field} if it satisfies the relation $\mathcal L_X\alpha=f\alpha$ for some smooth function $f$ on $M$, where $\mathcal L_X$ denotes the Lie derivation operator with respect to $X$.}\label{D:contact_vector_field} \index{contact vector field}\end{definition} Every smooth function $H$ on a contact manifold $(M,\alpha)$ gives a contact vector field $X_H=X_0+\bar{X}_H$ defined as follows: \begin{equation}X_0=HR_\alpha \ \ \ \text{ and }\ \ \ \bar{X}_H\in \Gamma(\xi) \text{ such that }i_{\bar{X}_H}d\alpha|_\xi = -dH|_\xi,\label{contact_hamiltonian} \index{contact hamiltonian}\end{equation} where $\xi=\ker\alpha$; equivalently, \begin{equation}\alpha(X_H)=H\ \ \text{ and }\ \ i_{X_H}d\alpha=-dH+dH(R_{\alpha})\alpha.\label{contact_hamiltonian1} \end{equation} The vector field $X_H$ is called the \emph{contact Hamiltonian vector field} of $H$. If $\phi_t$ is a local flow of a contact vector field $X$, then \[\frac{d}{dt}\phi_t^*\alpha = \phi_t^*(i_X d\alpha+d(\alpha(X)))=\phi_t^*(f\alpha)=(f\circ\phi_t)\phi_t^*\alpha.\] Therefore, $\phi_t^*\alpha=\lambda_t\alpha$, where $\lambda_t=e^{\int_0^t f\circ\phi_s \,ds}$. Thus the flow of a contact vector field preserves the contact structure. \begin{theorem}(Gray's Stability Theorem (\cite{gray})) \label{T:gray stability} If $\xi_t$, $t\in \mathbb I$, is a smooth family of contact structures on a closed manifold $M$, then there exists an isotopy $\psi_t$, $t\in \mathbb I$, of $M$ such that \[\psi_t:(M,\xi_0)\to (M,\xi_t) \] is isocontact for all $t\in \mathbb I$. \end{theorem} \begin{remark}{\em Gray's stability theorem is not valid on non-closed manifolds.
We shall see an extension of Theorem~\ref{T:gray stability} for such manifolds in Theorem~\ref{T:equidimensional_contact immersion}, which is one of the main results of this article.} \end{remark} We end this section with the definition of a contact submanifold. \begin{definition} {\em A submanifold $N$ of a contact manifold $(M,\xi)$ is said to be a \emph{contact submanifold} if the inclusion map $i:N\to M$ is a contact map.}\label{contact_submanifold} \index{contact submanifold}\end{definition} \begin{lemma} A submanifold $N$ of a contact manifold $(M,\xi=\ker\alpha)$ is a contact submanifold if and only if $TN$ is transversal to $\xi|_N$ and $TN\cap\xi|_N$ is a symplectic subbundle of $(\xi,d'\alpha)$.\label{L:contact_submanifold}\end{lemma} \section{Equidimensional contact immersions} We begin with a simple observation. \begin{observation}{\em Let $(M,\alpha)$ be a contact manifold. The product manifold $M\times\mathbb R^2$ has a canonical contact form given by $\tilde{\alpha}=\alpha- y\,dx$, where $(x,y)$ are the coordinate functions on $\mathbb R^2$. We shall denote the contact structure associated with $\tilde{\alpha}$ by $\tilde{\xi}$. Now suppose that $H:M\times\mathbb R\to \mathbb R$ is a smooth function which vanishes on some open set $U$. Define $\bar{H}:M\times\mathbb R\to M\times\mathbb R^2$ by $\bar{H}(u,t)=(u,t,H(u,t))$ for all $(u,t)\in M\times\mathbb R$. Since $\bar{H}(u,t)=(u,t,0)$ for all $(u,t)\in U$, the image of $d\bar{H}_{(u,t)}$ is $T_uM\times\mathbb R\times\{0\}$. On the other hand, $\tilde{\xi}_{(u,t,0)}=\xi_u\times\mathbb R^2$. Therefore, $\bar{H}$ is transversal to $\tilde{\xi}$ on $U$. }\end{observation} \begin{proposition} Let $M$ be a contact manifold with contact form $\alpha$.
Suppose that $H$ is a smooth real-valued function on $M\times(-\varepsilon,\varepsilon)$ with compact support such that its graph $\Gamma$ in $M\times\mathbb R^2$ is transversal to the kernel of $\tilde{\alpha}=\alpha-y\,dx$. Then there is a diffeomorphism $\Psi:M\times(-\varepsilon,\varepsilon)\to \Gamma$ which pulls back $\tilde{\alpha}|_{\Gamma}$ onto $h(\alpha\oplus 0)$, where $h$ is a nowhere-vanishing smooth real-valued function on $M\times\mathbb R$.\label{characteristic} \end{proposition} \begin{proof} Since the graph $\Gamma$ of $H$ is transversal to $\tilde{\xi}$, the restriction of $\tilde{\alpha}$ to $\Gamma$ is a nowhere vanishing 1-form on it. Define a function $\bar{H}:M\times(-\varepsilon,\varepsilon)\to M\times\mathbb R^2$ by $\bar{H}(u,t)=(u,t,H(u,t))$. The map $\bar{H}$ defines a diffeomorphism of $M\times(-\varepsilon,\varepsilon)$ onto $\Gamma$, which pulls back the form $\tilde{\alpha}|_{\Gamma}$ onto $\alpha-H\,dt$. It is therefore enough to obtain a diffeomorphism $F:M\times(-\varepsilon,\varepsilon)\to M\times(-\varepsilon,\varepsilon)$ which pulls back the 1-form $\alpha-H\,dt$ onto a multiple of $\alpha\oplus 0$. For each $t$, define a smooth function $H^t$ on $M$ by $H^t(u)=H(u,t)$ for all $u\in M$. Let $X_{H^t}$ denote the contact Hamiltonian vector field on $M$ associated with $H^t$. Consider the vector field $\bar{X}$ on $M\times\mathbb R$ as follows: \[\bar{X}(u,t)=(X_{H^t}(u),1),\ \ (u,t)\in M\times(-\varepsilon,\varepsilon).\] Let $\{\bar{\phi}_s\}$ denote a local flow of $\bar{X}$ on $M\times\mathbb R$. Then writing $\bar{\phi}_s(u,t)$ as \begin{center}$\bar{\phi}_s(u,t)=(\phi_s(u,t),s+t)$ for all $u\in M$ and $s,t\in \mathbb R$,\end{center} we get the following relation: \[\frac{d\phi_s}{ds}(u,t)=X_{t+s}(\phi_s(u,t)), \] where $X_t$ stands for the vector field $X_{H^t}$ for all $t$.
In particular, we have \begin{equation}\frac{d\phi_t}{dt}(u,0)=X_t(\phi_t(u,0)).\label{flow_eqn}\end{equation} Define a level preserving map $F:M\times(-\varepsilon,\varepsilon)\to M\times(-\varepsilon,\varepsilon)$ by \[F(u,t)=\bar{\phi}_t(u,0)=(\phi_t(u,0),t).\] Since the support of $H$ is contained in $K\times (-\varepsilon,\varepsilon)$ for some compact set $K$, the flow $\bar{\phi}_s$ starting at $(u,0)$ remains within $M\times (-\varepsilon,\varepsilon)$ for $s\in (-\varepsilon,\varepsilon)$. Note that \begin{center}$dF(\frac{\partial}{\partial t})=\frac{\partial}{\partial t}\bar{\phi}_t(u,0)=\bar{X}(\bar{\phi}_t(u,0))=\bar{X}(\phi_t(u,0),t)=(X_{H^t}(\phi_t(u,0)),1).$\end{center} This implies that \begin{center}$\begin{array}{rcl}F^*(\alpha\oplus 0) (\frac{\partial}{\partial t}|_{(u,t)}) & = & (\alpha\oplus 0)(dF(\frac{\partial}{\partial t}|_{(u,t)}))\\ & = & \alpha(X_{H^t}(\phi_t(u,0)))\\ & = &H^t(\phi_t(u,0))\ \ \ \ \ \text{by equation }(\ref{contact_hamiltonian1}) \\ & = & H(\bar{\phi}_t(u,0))=H(F(u,t)) \end{array}$\end{center} Also, \begin{center}$F^*(H\,dt)(\frac{\partial}{\partial t})=(H\circ F)\,dt(dF(\frac{\partial}{\partial t}))=H\circ F$\end{center} Hence, \begin{equation}F^*(\alpha-Hdt)(\frac{\partial}{\partial t})=0.\label{eq:F1}\end{equation} On the other hand, \begin{equation}F^*(\alpha-H\,dt)|_{M\times\{t\}}=F^*\alpha|_{M\times\{t\}}=\psi_t^*\alpha,\label{eq:F2}\end{equation} where $\psi_t(u)=\phi_t(u,0)$, $\psi_0(u)=u$. Thus, $\{\psi_t\}$ are the integral curves of the time dependent vector field $\{X_t\}$ on $M$ (see (\ref{flow_eqn})), and we get \[\begin{array}{rcl}\frac{d}{dt}\psi_t^*\alpha & = & \psi^*_t(i_{X_t}d\alpha+d(i_{X_t}\alpha))\\ & = & \psi^*_t(dH^t(R_\alpha)\alpha-dH^t+dH^t) \ \ \text{by equation }(\ref{contact_hamiltonian1})\\ & = & \psi^*_t(dH^t(R_\alpha)\alpha)\\ & = & \theta(t)\psi_t^*\alpha,\end{array}\] where $\theta(t)=\psi_t^*(dH^t(R_\alpha))$.
Hence $\psi_t^*\alpha=e^{\int_0^t\theta(s)ds}\psi_0^*\alpha=e^{\int_0^t\theta(s)ds}\alpha$. We conclude from equations (\ref{eq:F1}) and (\ref{eq:F2}) that $F^*(\alpha-H\,dt)=e^{\int_0^t\theta(s)ds}\alpha$. Finally, take $\Psi=\bar{H}\circ F$ which has the desired properties. \end{proof} \begin{remark}{\em If there exists an open subset $\tilde{U}$ of $M$ such that $H$ vanishes on $\tilde{U}\times (-\varepsilon,\varepsilon)$ then the contact Hamiltonian vector fields $X_t$ defined above are identically zero on $\tilde{U}$ for all $t\in(-\varepsilon,\varepsilon)$. Since $\psi_t=\phi_t(\,\cdot\,,0)$ are the integral curves of the time dependent vector fields $X_t=X_{H^t}$, $0\leq t\leq 1$, we must have $\psi_t(u)=u$ for all $u\in \tilde{U}$. Therefore, $F(u,t)=(u,t)$ and hence $\Psi(u,t)=(u,t,0)$ for all $u\in\tilde{U}$ and all $t\in(-\varepsilon,\varepsilon)$.}\label{R:characteristic}\end{remark} \begin{remark}{\em If $\Gamma$ is a codimension 1 submanifold of a contact manifold $(N,\tilde{\alpha})$ such that the tangent planes of $\Gamma$ are transversal to $\tilde{\xi}=\ker\tilde{\alpha}$ then there is a codimension 1 distribution $D$ on $\Gamma$ given by the intersection of $\ker \tilde{\alpha}|_\Gamma$ and $T\Gamma$. Since $D=\ker\tilde{\alpha}|_\Gamma\cap T\Gamma$ is an odd dimensional distribution, $d\tilde{\alpha}|_D$ has a 1-dimensional kernel. If $\Gamma$ is locally defined by a function $\Phi$ then $d\Phi_x$ does not vanish identically on $\ker\tilde{\alpha}_x$, for $\ker d\Phi_x$ is transversal to $\ker\tilde{\alpha}_x$. Thus there is a unique non-zero vector $Y_x$ in $\ker\tilde{\alpha}_x$ satisfying the relation $i_{Y_x}d\tilde{\alpha}_x=d\Phi_x$ on $\ker\tilde{\alpha}_x$. Clearly, $Y_x$ is tangent to $\Gamma$ at $x$ and it is defined uniquely only up to multiplication by a non-zero real number (as $\Phi$ is not unique). However, the 1-dimensional distribution on $\Gamma$ defined by $Y$ is uniquely defined by the contact form $\tilde{\alpha}$.
The integral curves of $Y$ are called \emph{characteristics} of $\Gamma$ (\cite{arnold}). It can be verified that the diffeomorphism $\Psi$ in the proof of the above proposition maps the lines in $M\times\mathbb R$ onto the characteristics on $\Gamma$.} \end{remark} The following lemma will reduce Theorem~\ref{T:equidimensional_contact immersion} to the special case in which the contact forms $\alpha_t$ are piecewise primitive. A non-parametric form of this lemma was proved in \cite{eliashberg}. \begin{lemma} \label{EM2} Let $\alpha_{t}$, $t\in[0,1]$, be a continuous family of contact forms on a compact manifold $M$, possibly with non-empty boundary. Then for each $t\in [0,1]$, there exists a sequence of primitive 1-forms $\beta_t^l=r_t^l\,ds_t^l$, $l=1,\dots,N$, such that \begin{enumerate} \item $\alpha_t=\alpha_0+\sum_1^N \beta_t^l$ for all $t\in [0,1]$, \item for each $j=0,\dots,N$ the form $\alpha^{(j)}_t=\alpha_{0}+\sum_{1}^{j}\beta_t^{l}$ is contact, \item for each $j=1,\dots,N$ the functions $r_t^j$ and $s_t^j$ are compactly supported within a coordinate neighbourhood. \end{enumerate} Furthermore, the forms $\beta_t^l$ depend continuously on $t$. If $\alpha_t=\alpha_0$ on $Op\,V_0$, where $V_0$ is a compact subset contained in the interior of $M$, then the functions $r^l_t$ and $s^l_t$ can be chosen to be equal to zero on an open neighbourhood of $V_0$. \end{lemma} \begin{proof} If $M$ is compact with boundary, then we embed it in a bigger manifold $\tilde{M}$ of the same dimension; in fact, we may assume that $\tilde{M}$ is obtained from $M$ by attaching a collar along the boundary of $M$. Using the compactness of $M$, one can cover $M$ by finitely many coordinate neighbourhoods $U^i$, $i=1,2,\dots,L$. Choose a partition of unity $\{\rho^i\}$ subordinate to $\{U^i\}$. \begin{enumerate}\item Since $M$ is compact, the set of all contact forms on $M$ is an open subspace of $\Omega^1(M)$ in the weak topology.
Hence, there exists a $\delta>0$ such that $\alpha_t+s(\alpha_{t'}-\alpha_t)$ is contact for all $s\in[0,1]$, whenever $|t-t'|<\delta$. \item Choose an integer $n$ such that $1/n<\delta$. Define for each $t$ a finite sequence of contact forms, namely $\alpha^j_t$, interpolating between $\alpha_0$ and $\alpha_t$ as follows: \[\alpha^j_t=\alpha_{[nt]/n}+\sum_{i=1}^j\rho^i(\alpha_t-\alpha_{[nt]/n}),\] where $[x]$ denotes the largest integer which is less than or equal to $x$ and $j$ takes values $1,2,\dots,L$. In particular, for $k/n\leq t\leq (k+1)/n$, we have \[\alpha^j_t=\alpha_{k/n}+\sum_{i=1}^j\rho^i(\alpha_t-\alpha_{k/n}),\] and $\alpha^L_t=\alpha_t$ for all $t$. \item Let $\{x_j^i:j=1,\dots,m\}$ denote the coordinate functions on $U^i$, where $m$ is the dimension of $M$. There exists a unique set of smooth functions $y_{t,k}^{ij}$ defined on $U^i$ satisfying the following relation: \[\alpha_t-\alpha_{k/n}=\sum_{j=1}^m y_{t,k}^{ij} dx^i_j\ \ \text{on } U^i \text{ for }k/n\leq t\leq (k+1)/n.\] Further, note that $y_{t,k}^{ij}$ depends continuously on the parameter $t$ and $y_{t,k}^{ij}=0$ when $t=k/n$, $k=0,1,\dots,n$. \item Let $\sigma^i$ be a smooth function such that $\sigma^i\equiv 1$ on a neighbourhood of $\text{supp\,}\rho^i$ and $\text{supp}\,\sigma^i\subset U^i$. Define functions $r^{ij}_{t,k}$ and $s^{ij}$, $j=1,\dots,m$, as follows: \[ r_{t,k}^{ij}=\rho^i y_{t,k}^{ij}, \ \ \ s^{ij}=\sigma^i x^i_j.\] These functions are compactly supported with supports contained in $U^i$. It is easy to see that $r^{ij}_{t,k}=0$ when $t=k/n$ and \[\rho^i(\alpha_t-\alpha_{k/n})=\sum_{j=1}^m r_{t,k}^{ij}\,ds^{ij} \ \text{ for }\ t\in [k/n,(k+1)/n].\] \end{enumerate} It follows from the above discussion that $\alpha_t-\alpha_{k/n}$ can be expressed as a sum of primitive forms which depend continuously on $t$ in the interval $[k/n,(k+1)/n]$. We can now complete the proof by a finite induction argument.
Suppose that $(\alpha_{t}-\alpha_0)=\sum_l\alpha_{t,k}^l$ for $t\in [0,k/n]$, where each $\alpha_{t,k}^l$ is a primitive 1-form. Define \[\begin{array}{rcl}\tilde{\alpha}_{t,k}^l& = & \left\{ \begin{array}{cl} \alpha_{t,k}^l & \text{if } t\in [0,k/n]\\ \alpha_{k/n,k}^l & \text{if } t\in [k/n,(k+1)/n]\end{array}\right.\end{array}\] Further define for $j=1,\dots,N$, $i=1,\dots,L$, \[\begin{array}{rcl}\beta_{t,k}^{ij}& = & \left\{ \begin{array}{cl} 0 & \text{if } t\in [0,k/n]\\ r^{ij}_{t,k}\,ds^{ij} & \text{if } t\in [k/n,(k+1)/n]\end{array}\right.\end{array}\] Finally note that for $t\in [0,(k+1)/n]$, we can write $\alpha_t-\alpha_0$ as the sum of all the above primitive forms. Indeed, if $k/n\leq t<(k+1)/n$, then \begin{eqnarray*}\alpha_t-\alpha_0 & = & (\alpha_t-\alpha_{k/n})+(\alpha_{k/n}-\alpha_0)\\ & = & \sum_{i=1}^L\sum_{j=1}^m r^{ij}_{t,k}\,ds^{ij}+\sum_l \alpha^l_{k/n,k}\\ & = & \sum_{i,j}\beta^{ij}_{t,k}+\sum_l \tilde{\alpha}^l_{t,k}.\end{eqnarray*} The same relation holds for $0\leq t\leq k/n$, since $\beta^{ij}_{t,k}$ vanish for all such $t$. This proves the first part of the lemma. Now suppose that $\alpha_t=\alpha_0$ on an open neighbourhood $U$ of $V_0$. Choose two compact neighbourhoods of $V_0$, namely $K_0$ and $K_1$, such that $K_0\subset \text{Int\,}K_1$ and $K_1\subset U$. Since $M\setminus\text{Int\,}K_1$ is compact we can cover it by finitely many coordinate neighbourhoods $U^i$, $i=1,2,\dots,L$, such that $(\bigcup_{i=1}^L U^i)\cap K_0=\emptyset$. Proceeding as above we get a decomposition of $\alpha_t$ on $\bigcup_{i=1}^L U^i$ into primitive 1-forms $r^l_t\,ds^l_t$. Observe that $\{U^i:i=1,\dots,L\}\cup\{U\}$ is an open covering of $M$ in this case. The functions $r^l_t$ and $s^l_t$ can be extended to all of $M$ without disturbing their supports. Hence, the functions $r^l_t$ and $s^l_t$ vanish on $K_0$. This completes the proof of the lemma.
\end{proof} \emph{Proof of Theorem~\ref{T:equidimensional_contact immersion}}. In view of Lemma~\ref{EM2}, it is enough to prove the theorem for a family of contact forms $\alpha_t$, $t\in [0,1]$, satisfying \[\alpha_{t}=\alpha_{0}+r_t\,ds_t \label{primitive_contact_forms}\] for some smooth real valued functions $r_t,s_t$ which are (compactly) supported in an open set $U$ of $M$. We shall first show that $f_{0}:(M,\xi_{0})\rightarrow (N,\tilde{\xi})$ can be homotoped to an immersion $f_{1}:M\to N$ such that $f_{1}^{*}\tilde{\xi}=\xi_{1}$; this is a non-parametric version of the stated result. For simplicity of notation we write $(r,s)$ for $(r_1,s_1)$ and define a smooth embedding $\varphi:U\to U\times\mathbb R^2$ by \[\varphi(u)=(u,s(u),-r(u)) \text{ \ for \ }u\in U.\] Since $r,s$ are compactly supported, $\varphi(u)=(u,0,0)$ for all $u\in Op\,(\partial U)$, and there exist positive constants $\varepsilon_1$ and $\varepsilon_2$ such that $\text{Im}\,\varphi$ is contained in $U\times I_{\varepsilon_1}\times I_{\varepsilon_2}$, where $I_\varepsilon$ denotes the open interval $(-\varepsilon,\varepsilon)$ for $\varepsilon>0$. Clearly, $\varphi^*(\alpha_0-y\,dx)=\alpha_0+r\,ds$ and so \begin{equation}\varphi:(U,\xi_{1})\rightarrow (U\times \mathbb R^2,\ker(\alpha_{0}- y\,dx)) \label{eq:equidimensional_1}\end{equation} is an isocontact embedding. The image of $\varphi$ is the graph of a smooth function $k=(s,-r):U\rightarrow I_{\varepsilon_{1}}\times I_{\varepsilon_{2}}$ which is compactly supported, with support contained in the interior of $U$. Now let $\pi:U\times I_{\varepsilon_{1}}\times I_{\varepsilon_{2}}\rightarrow U\times I_{\varepsilon_{1}}$ be the projection onto the first two coordinates, and note that $\pi(\varphi(U))$ is the graph of $s$ and hence a submanifold of $U\times I_{\varepsilon_1}$. Since $\text{Im}\,\varphi$ is the graph of $k$, $\pi|_{\text{Im\,}\varphi}$ is an embedding onto the set $\pi(\varphi(U))$.
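The identity $\varphi^*(\alpha_0-y\,dx)=\alpha_0+r\,ds$ asserted above is a direct computation: writing $(x,y)$ for the coordinates on the $\mathbb R^2$ factor, we have $x\circ\varphi=s$ and $y\circ\varphi=-r$, so that \[\varphi^*(\alpha_0-y\,dx)=\alpha_0-(y\circ\varphi)\,d(x\circ\varphi)=\alpha_0+r\,ds=\alpha_1.\]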
Now observe that Im\,$\varphi$ can also be viewed as the graph of a smooth function, namely $h:\pi(\varphi(U))\rightarrow I_{\varepsilon_{2}}$ defined by $h(u,s(u))=-r(u)$. It is easy to see that $h$ is compactly supported. \begin{center} \begin{picture}(300,140)(-100,5)\setlength{\unitlength}{1cm} \linethickness{.075mm} \multiput(-1,1.5)(6,0){2} {\line(0,1){3}} \multiput(-.25,2)(4.5,0){2} {\line(0,1){2}} \multiput(-1,1.5)(0,3){2} {\line(1,0){6}} \multiput(-.25,2)(0,2){2} {\line(1,0){4.5}} \put(1.7,1){$U\times I_{\varepsilon_{1}}$} \put(1.2,2.7){\small{$U$}} \put(2,3.4){\small{$\pi(\varphi(U))$}} \multiput(-.9,1.6)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.7)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.8)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,1.9)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.7,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.5,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.3,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.3,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.5,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.7,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(4.9,4.0)(0,-.1){21}{\line(1,0){.05}} \multiput(-.9,4.1)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.2)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.3)(.2,0){30}{\line(1,0){.05}} \multiput(-.9,4.4)(.2,0){30}{\line(1,0){.05}} \multiput(-1,3)(.3,0){20}{\line(1,0){.1}} \multiput(-1,3)(5.1,0){2}{\line(1,0){.9}} \qbezier(-.1,3)(1,3.1)(1.5,3.8) \qbezier(1.5,3.8)(1.7,4)(1.9,3.7) \qbezier(1.9,3.7)(2.1,3)(2.2,2.3) \qbezier(2.2,2.3)(2.4,2)(2.6,2.2) \qbezier(2.6,2.2)(3.2,2.9)(4.3,3) \end{picture}\end{center} In the above figure, the bigger rectangle represents the set $U\times I_{\varepsilon_{1}}$ and the central dotted line represents $U\times 0$. The curve within the rectangle stands for the domain of $h$, which is also the graph of $s$.
We can now extend $h$ to a compactly supported function $H:U\times I_{\varepsilon_{1}}\rightarrow I_{\varepsilon_{2}}$ (see \cite{whitney}) which vanishes on the shaded region and whose graph is transversal to $\ker(\alpha_0- y\,dx)$. Indeed, since $\varphi$ is an isocontact embedding, it is transversal to $\ker(\alpha_0-y\,dx)$, and hence the graph of $H$ is transversal to $\ker(\alpha_0-y\,dx)$ on an open neighbourhood of $\pi(\varphi(U))$ for any extension $H$ of $h$. Since transversality is a generic property, we can assume (possibly after a small perturbation) that the graph of $H$ is transversal to $\ker(\alpha_0- y\,dx)$ everywhere. Let $\Gamma$ be the graph of $H$; then the image of $\varphi$ is contained in $\Gamma$. By Lemma~\ref{characteristic} there exists a diffeomorphism $\Phi:\Gamma\to U\times I_{\varepsilon_1}$ with the property that \begin{equation}\Phi^*(\ker(\alpha_{0}\oplus 0))=\ker((\alpha_{0}- y\,dx)|_\Gamma).\label{eq:equidimensional_2}\end{equation} Next we use $f_0$ to define an immersion $F_{0}:U\times \mathbb R\rightarrow N\times \mathbb{R}$ as follows: \begin{center} $F_0(u,x)=(f_0(u),x)$ for all $u\in U$ and $x\in \mathbb R$.\end{center} It is straightforward to see that \begin{itemize} \item $F_{0}(u,0)\in N \times 0$ for all $u\in U$, and \item $F_0^*(\eta \oplus 0)$ is a multiple of $\alpha_{0}\oplus 0$ by a nowhere vanishing function on $U\times\mathbb R$. \end{itemize} Therefore, the following composition is defined: \[U\stackrel{\varphi}{\longrightarrow} \Gamma\stackrel{\Phi}{\longrightarrow} U\times I_{\varepsilon_1} \stackrel{F_0}{\longrightarrow} N\times\mathbb{R} \stackrel{\pi_{N}}{\longrightarrow} N, \] where $\pi_{N}:N\times \mathbb{R}\rightarrow N$ is the projection onto $N$.
Observe that $\pi_{N}^*\eta=\eta \oplus 0$, and therefore it follows from equations (\ref{eq:equidimensional_1}) and (\ref{eq:equidimensional_2}) that the composition map $f_1=\pi_{N} F_{0}\Phi \varphi:(U,\xi_1)\rightarrow (N,\tilde{\xi})$ is isocontact. Such a map is necessarily an immersion. Let $K=\text{supp\,}r\cup\text{supp\,}s$. Take a compact set $K_1$ in $U$ such that $K\subset \text{Int\,}K_1$, and let $\tilde{U}=U\setminus K_1$. If $u\in \tilde{U}$ then $\varphi(u)=(u,0,0)$. This gives $h(u,0)=0$ for all $u\in\tilde{U}$. We can choose $H$ such that $H(u,t)=0$ for all $(u,t)\in\tilde{U}\times I_{\varepsilon_1}$. Then, by Remark~\ref{R:characteristic}, $\Phi(u,0,0)=(u,0)$ for all $u\in\tilde{U}$. Consequently, \[f_1(u)=\pi_{N} F_{0}\Phi \varphi(u)=\pi_{N} F_{0}(u,0)=\pi_N(f_0(u),0)=f_0(u) \ \text{for all } u\in\tilde{U}.\] In other words, $f_1$ coincides with $f_0$ outside an open neighbourhood of $K$. Now, if we have a continuous family of contact forms $\alpha_t$ as in equation (\ref{primitive_contact_forms}), then define \[\varphi_t(u)=(u,s_t(u),-r_t(u)) \text{ \ for \ }u\in U.\] Since the family $\varphi_t$ depends continuously on $t$ and each $r_t,s_t$ is compactly supported in $U$, there exist positive constants $\varepsilon_1$ and $\varepsilon_2$ such that $\varphi_t(U)\subset U\times I_{\varepsilon_{1}}\times I_{\varepsilon_{2}}$ for all $t\in [0,1]$. Proceeding exactly as before, we get a continuous family of smooth functions $H_t$ such that their graphs $\Gamma_t$ are transversal to $\ker(\alpha_0-y\,dx)$. Applying Lemma~\ref{characteristic} we then get a continuous family of diffeomorphisms $\Phi_t:\Gamma_t\to U\times I_{\varepsilon_1}$ which pull back $\ker(\alpha_{0}\oplus 0)$ to $\ker((\alpha_{0}- y\,dx)|_{\Gamma_t})$. The desired homotopy $f_t$ is then defined by $f_t=\pi_{N} F_{0}\Phi_t \varphi_t$.
This completes the proof of the theorem.\qed \begin{remark}{\em A symplectic version of the above result was proved by Ginzburg in \cite{ginzburg}.}\end{remark} Every open manifold admits a Morse function $f$ without local maxima. The codimension of the Morse complex of such a function is, therefore, strictly positive (\cite{milnor_morse},\cite{milnor}). The gradient flow of $f$ brings the manifold into an arbitrarily small neighbourhood of the Morse complex. In fact, one can get a polyhedron $K\subset M$ with codim\,$K>0$, and an isotopy $\phi_t:M\to M$, $t\in[0,1]$, such that $K$ remains pointwise fixed and $\phi_1$ takes $M$ into an arbitrarily small neighbourhood $U$ of $K$. The polyhedron $K$ is called a \emph{core} of $M$. We shall now deduce from the above theorem the existence of isocontact immersions of an open manifold $M$ into itself which compress $M$ into arbitrarily small neighbourhoods of its core. \begin{corollary} \label{CO} Let $(M,\xi=\ker\alpha)$ be an open contact manifold and let $K$ be a core of it. Then for a given neighbourhood $U$ of $K$ in $M$ there exists a homotopy of isocontact immersions $f_{t}:(M,\xi)\rightarrow (M,\xi)$, $t\in[0,1]$, such that $f_{0}=id_{M}$ and $f_{1}(M)\subset U$. \end{corollary} \begin{proof}Since $K$ is a core of $M$, there is an isotopy $g_t$ such that $g_0=id_M$ and $g_1(M)\subset U$. Using $g_t$, we can express $M$ as $M=\bigcup_{0}^{\infty}V_{i}$, where $V_{0}$ is a compact neighbourhood of $K$ in $U$ and $V_{i+1}$ is diffeomorphic to $V_i\cup (\partial V_{i}\times [0,1])$, so that $\bar{V_i}\subset \text{Int}\,(V_{i+1})$ and $V_{i+1}$ deformation retracts onto $V_{i}$. If $M$ is a manifold with boundary then this sequence is finite.
We shall inductively construct homotopies of immersions $f^{i}_{t}:M\rightarrow M$ with the following properties: \begin{enumerate} \item $f^i_0=id_M$ \item $f^i_1(M)\subset U$ \item $f^i_t=f^{i-1}_t$ on $V_{i-1}$ \item $(f^i_t)^*\xi=\xi$ on $V_{i}$. \end{enumerate} To start the induction, let $\xi_t=g_t^*\xi$; this is a family of contact structures on $M$ defined by the contact forms $g_t^*\alpha$, and we construct $f^{0}_{t}$ by the argument below, setting $V_{-1}=\emptyset$. Assuming the existence of $f^{i}_{t}$, let $\xi_{t}=(f^{i}_{t})^{*}(\xi)$ (so that $\xi_0=\xi$), and consider the 2-parameter family of contact structures defined by $\eta_{t,s}=\xi_{t(1-s)}$. Then for all $t,s\in\mathbb I$, we have \[\eta_{t,0}=\xi_t,\ \ \eta_{t,1}=\xi_0=\xi\ \text{ and }\ \eta_{0,s}=\xi.\] The parametric version of Theorem~\ref{T:equidimensional_contact immersion} gives a homotopy of immersions $\tilde{f}_{t,s}:V_{i+2}\rightarrow M$, $(t,s)\in \mathbb I\times\mathbb I$, satisfying the following conditions: \begin{enumerate} \item $\tilde{f}_{t,0},\tilde{f}_{0,s}:V_{i+2}\hookrightarrow M$ are the inclusion maps \item $(\tilde{f}_{t,s})^*\xi_t=\eta_{t,s}$; in particular, $(\tilde{f}_{t,1})^*\xi_t=\xi$ \item $\tilde{f}_{t,s}=id$ on $V_i$, since $\eta_{t,s}=\xi_0$ on $V_i$. \end{enumerate} We now extend the homotopy $\{\tilde{f}_{t,s}|_{V_{i+1}}\}$ to all of $M$ as immersions such that $\tilde{f}_{0,s}=id_M$ for all $s$. By an abuse of notation, we denote the extended homotopy by the same symbol. Define the next level homotopy as follows: \[f^{i+1}_{t}=f^{i}_{t}\circ \tilde{f}_{t,1} \ \text{ for }\ t\in [0,1].\] This completes the induction step, since $(f^{i+1}_t)^*(\xi)=(\tilde{f}_{t,1})^*\xi_t=\xi$ on $V_{i+2}$ for all $t$, and $f^{i+1}_t|_{V_{i}}=f^i_t|_{V_{i}}$. Having constructed the family of homotopies $\{f^i_t\}$ as above, we set $f_t=\lim_{i\to\infty}f^i_t$, which is the desired homotopy of isocontact immersions.
\end{proof} \section{An $h$-principle for open relations on open contact manifolds} We begin with a brief exposition of the theory of the $h$-principle. For further details we refer to \cite{gromov_pdr}. Suppose that $M$ and $N$ are smooth manifolds. Let $J^{r}(M,N)$ be the space of $r$-jets of germs of local maps from $M$ to $N$ (\cite{golubitsky}). The canonical map $p^{(r)}:J^r(M,N)\to M$ which takes a jet $j^r_f(x)$ onto its base point $x$ is a fibration. We shall refer to $J^r(M,N)$ as the $r$-jet bundle over $M$. A continuous map $\sigma:M\to J^r(M,N)$ is said to be a \emph{section} of the jet bundle $p^{(r)}:J^r(M,N)\to M$ if $p^{(r)}\circ \sigma=id_M$. A section of $p^{(r)}$ which is the $r$-jet of some map $f:M\to N$ is called a \emph{holonomic} section of the jet bundle. A subset $\mathcal{R} \subset J^{r}(M,N)$ of the $r$-jet space is called a \emph{partial differential relation of order} $r$ (or simply a \emph{relation}). If $\mathcal{R}$ is an open subset of the jet space then we call it an \emph{open relation}. A $C^r$ map $f:M\rightarrow N$ is said to be a \emph{solution} of $\mathcal{R}$ if the image of its $r$-jet extension $j^r_f:M\rightarrow J^r(M,N)$ lies in $\mathcal{R}$. We denote by $\Gamma(\mathcal{R})$ the space of sections of the bundle $J^r(M,N)\rightarrow M$ having images in $\mathcal{R}$. The space of $C^\infty$ solutions of $\mathcal{R}$ is denoted by $Sol(\mathcal{R})$. If $Sol(\mathcal R)$ and $\Gamma(\mathcal R)$ are endowed with the $C^{\infty}$-compact open topology and the $C^0$-compact open topology respectively, then the $r$-jet map \[j^r:Sol(\mathcal R)\to \Gamma(\mathcal R)\] taking $f\in Sol(\mathcal R)$ onto the holonomic section $j^r_f$ is continuous and clearly one to one. Therefore, we can identify $Sol(\mathcal R)$ with the space of holonomic sections of $\mathcal R$.
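To fix ideas: for $r=1$, a $1$-jet at $x$ is determined by its base point, its target and a first derivative, which gives the standard identification \[J^1(M,N)\cong\{(x,y,F)\,:\,x\in M,\ y\in N,\ F\in\operatorname{Hom}(T_xM,T_yN)\},\] and the holonomic sections of $J^1(M,N)$ are exactly those of the form $x\mapsto (x,f(x),df_x)$ for a $C^1$ map $f:M\to N$. First order relations will be described below in terms of such triples $(x,y,F)$.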
\begin{definition}{\em A differential relation $\mathcal{R}$ is said to satisfy the \emph{$h$-principle} if every element $\sigma_{0} \in \Gamma(\mathcal{R})$ admits a homotopy $\sigma_{t}\in \Gamma(\mathcal{R})$ such that $\sigma_{1}$ is holonomic. The relation $\mathcal R$ satisfies the \emph{parametric $h$-principle} \index{parametric $h$-principle} if the $r$-jet map $j^{r}:Sol(\mathcal{R})\rightarrow \Gamma(\mathcal{R})$ is a weak homotopy equivalence. }\index{$h$-principle}\end{definition} We shall often speak of the (parametric) $h$-principle for certain function spaces without referring to the relations of which they are the solution spaces. \begin{remark}{\em The space $\Gamma(\mathcal R)$ is referred to as the space of formal solutions of $\mathcal R$. Finding a formal solution is a purely algebraic-topological problem, which can be addressed with obstruction theory. Finding a solution of $\mathcal R$ is, on the other hand, a differential-topological problem. Thus, the $h$-principle reduces a differential-topological problem to a problem in algebraic topology.}\end{remark} Next we define the notion of a local $h$-principle near a polyhedron. \begin{definition}{\em Let $K$ be a subset of $M$. We shall say that a relation $\mathcal R$ satisfies the \emph{$h$-principle near $K$} if, given an open set $U$ containing $K$ and a section $F:U\to \mathcal{R}|_U$, there exists an open set $\tilde{U}\subset U$ containing $K$ such that $F|_{\tilde{U}}$ is homotopic to a holonomic section $\tilde{F}:\tilde{U}\to \mathcal{R}$ in $\Gamma(\mathcal R|_{\tilde U})$.}\end{definition} This $h$-principle will also be referred to as an $h$-principle on $Op\,K$. Here, if $K$ is a subset of $M$, then $Op\,K$ denotes an unspecified open set in $M$ containing $K$. The set $C^k(Op\,K,N)$ will denote the set of all $C^k$ functions which are defined on some open neighbourhood of $K$.
\begin{definition}{\em A function $F:Z\to C^k({Op\,K},N)$ defined on a topological space $Z$ will be called `\emph{continuous}' if there exists an open set $U$ containing $K$ such that each $F(z)$ has an extension $\tilde{F}(z)$ defined on $U$ and $z\mapsto \tilde{F}(z)$ is continuous with respect to the $C^k$-compact open topology on the function space. A relation $\mathcal R$ is said to satisfy the \emph{parametric $h$-principle near} $K$ if $j^r:Sol(\mathcal R|_{Op\,K})\to \Gamma(\mathcal R|_{Op\,K})$ is a weak homotopy equivalence.}\end{definition} Let $\text{Diff}(M)$ be the pseudogroup of local diffeomorphisms of $M$ \cite{geiges}. There is a natural (contravariant) action of $\text{Diff}(M)$ on $J^{r}(M,N)$ given by $\sigma.\alpha:=j^{r}_{f\circ \sigma}(x)$, where $\sigma$ is a local diffeomorphism of $M$ defined near $x\in M$ and $f$ is a representative of the $r$-jet $\alpha$ at $\sigma(x)$. Let $\mathcal D$ be a sub-pseudogroup of $\text{Diff}(M)$. A differential relation $\mathcal{R}$ is said to be \emph{$\mathcal{D}$-invariant} if the following condition is satisfied: \begin{center} For every $\alpha\in \mathcal R$ and $\sigma\in \mathcal D$, the element $\sigma.\alpha$ belongs to $\mathcal R$ provided it is defined.\end{center} We shall denote the element $\sigma.\alpha$ also by $\sigma^*\alpha$. The following result, due to Gromov, is the first general result in the theory of the $h$-principle. \begin{theorem} Every open, Diff$(M)$-invariant relation $\mathcal R$ on an open manifold $M$ satisfies the parametric $h$-principle.\label{open-invariant} \end{theorem} The $h$-principle here is established in two steps: first one proves the local $h$-principle near the core $K$ of $M$, and then one lifts the $h$-principle to $M$ by a contracting diffeotopy.
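As a classical illustration of this theorem, consider the immersion relation: for $\dim M\leq \dim N$, let \[\mathcal R_{\mathrm{imm}}=\{(x,y,F)\in J^1(M,N)\ :\ F:T_xM\to T_yN \text{ is a monomorphism}\}.\] Injectivity of a linear map is an open condition, and it is preserved under precomposition with the derivative of a local diffeomorphism of $M$; hence $\mathcal R_{\mathrm{imm}}$ is open and Diff$(M)$-invariant. Theorem~\ref{open-invariant} therefore recovers the $h$-principle for immersions of open manifolds: every bundle monomorphism $TM\to TN$ is homotopic, through monomorphisms, to the derivative of an immersion $M\to N$ (\cite{gromov_pdr}).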
If a relation is invariant under the action of a smaller pseudogroup of diffeomorphisms, say $\mathcal D$, then the $h$-principle can still hold provided $\mathcal D$ has some additional properties. \begin{definition} {\em (\cite{gromov_pdr}) Let $M_{0}$ be a submanifold of $M$ of positive codimension and let $\mathcal{D}$ be a pseudogroup of local diffeomorphisms of $M$. We say that $M_0$ is \emph{sharply movable} by $\mathcal{D}$ if, given any hypersurface $S$ in an open set $U$ in $M_0$ and any $\varepsilon>0$, there exist an isotopy $\delta_{t}$, $t\in\mathbb I$, in $\mathcal{D}$ and a positive real number $r$ such that the following conditions hold: \begin{enumerate}\item[$(i)$] $\delta_{0}|_U=id_{U}$, \item[$(ii)$] $\delta_{t}$ fixes all points outside the $\varepsilon$-neighbourhood of $S$, \item[$(iii)$] $dist(\delta_{1}(x),M_{0})\geq r$ for all $x\in S$,\end{enumerate} where $dist$ denotes the distance with respect to any fixed metric on $M$.}\label{D:sharp_diffeotopy}\index{sharp diffeotopy}\end{definition} Such a diffeotopy $\delta_t$ will be referred to as a \emph{sharply moving diffeotopy}. A pseudogroup $\mathcal D$ is said to have the \emph{sharply moving property} if every submanifold $M_0$ of positive codimension is sharply movable by $\mathcal D$. We end this section with the following result due to Gromov (\cite{gromov_pdr}). \begin{theorem} \label{T:gromov-invariant} Let $\mathcal{R}\subset J^r(M,N)$ be an open relation which is invariant under the action of a pseudogroup $\mathcal{D}$. If $\mathcal{D}$ sharply moves a submanifold $M_{0}$ in $M$ of positive codimension, then the parametric $h$-principle holds for $\mathcal{R}$ on $Op\,(M_{0})$. \end{theorem} We shall now prove Theorem~\ref{CT} by an application of the above theorem.\\ \emph{Proof of Theorem~\ref{CT}}. Let $\mathcal{D}$ denote the pseudogroup of contact diffeomorphisms of $M$.
We shall first show that $\mathcal{D}$ has the sharply moving property (see Definition~\ref{D:sharp_diffeotopy}). Let $M_0$ be a submanifold of $M$ of positive codimension. Take a closed hypersurface $S$ in $M_0$ and an open set $U\subset M$ containing $S$. We take a vector field $X$ along $S$ which is transversal to $M_{0}$, and let $H:M\rightarrow \mathbb{R}$ be a function such that \[\alpha(X)=H,\ \ \ \ i_X d\alpha|_\xi=-dH|_\xi \ \ \text{at points of } S\] (see equation~\ref{contact_hamiltonian1}). The contact Hamiltonian vector field $X_H$ is then transversal to $M_0$ at points of $S$. Since transversality is a stable property and $U$ is small, we can assume that $X_{H}$ is transversal to $M_0$ on $U$. Now consider the initial value problem \[\frac{d}{dt}\delta_{t}(x)=X_{H}(\delta_t(x)), \ \ \delta_0(x)=x.\] The solution to this problem exists for small time $t$, say for $t\in [0,\bar{\varepsilon}]$, for all $x$ lying in some small enough neighbourhood of $S$. Moreover, since $X_H$ is transversal to $M_0$, there exists a positive real number $\varepsilon$ such that the integral curves $\delta_t(x)$, $x\in S$, do not meet $M_0$ during the time interval $(0,\varepsilon)$. Let \[S_{\varepsilon}=\cup_{t\in[0,\varepsilon/2]}\delta_{t}(S).\] Take a smooth function $\varphi$ which is identically equal to 1 on a small neighbourhood of $S_{\varepsilon}$ and whose support is contained in $\cup_{t\in[0,\varepsilon)}\delta_t(S)$. We then consider the initial value problem with $X_{H}$ replaced by $X_{\varphi H}$. Since $X_{\varphi H}$ is compactly supported, its flow, say $\bar{\delta}_{t}$, is defined for all time $t$. Because of the choice of $\varphi$, the integral curves $\bar{\delta}_t(x_0)$, $x_0\in M_0$, cannot return to $M_0$ for $t>0$.
Hence, we have the following: \begin{itemize} \item $\bar{\delta}_0|_U=id_U$, \item $\bar{\delta}_t=id$ outside a small neighbourhood of $S_{\varepsilon}$, \item $dist(\bar{\delta}_1(x),M_0)\geq r$ for all $x\in S$ and for some $r>0$. \end{itemize} This proves that $\mathcal{D}$ sharply moves any submanifold of $M$ of positive codimension. Since $M$ is open, it has a core $K$, which is of positive codimension. Since the relation $\mathcal R$ is open and invariant under the action of $\mathcal D$, we can apply Theorem~\ref{T:gromov-invariant} to conclude that $\mathcal R$ satisfies the parametric $h$-principle near $K$. We shall now lift the $h$-principle from $Op\,K$ to all of $M$ by appealing to Corollary~\ref{CO}. By the local $h$-principle near $K$, an arbitrary section $F_0$ of $\mathcal R$ admits a homotopy $F_{t}$ in $\Gamma(\mathcal{R}|_U)$ such that $F_1$ is holonomic on $U$, where $U$ is an open neighbourhood of $K$ in $M$. Let $f_{t}$ denote the underlying maps of the sections $F_t$, obtained by composing $F_t$ with the natural projection $J^{r}(M,N)\rightarrow N$ onto the target manifold. By Corollary~\ref{CO} we get a homotopy of isocontact immersions $g_{t}:(M,\xi)\rightarrow (M,\xi)$ satisfying $g_{0}=id_{M}$ and $g_{1}(M)\subset U$, where $\xi=\ker\alpha$. The concatenation of the homotopies $g_t^*(F_0)$ and $g_1^*(F_t)$ gives the desired homotopy in $\Gamma(\mathcal R)$ between $F_0$ and the holonomic section $g_1^*(F_1)$. This proves that $\mathcal R$ satisfies the ordinary $h$-principle. To prove the parametric $h$-principle, take a parametrized section $F_z\in \Gamma(\mathcal{R})$, $z\in \mathbb D^n$, such that $F_z$ is holonomic for all $z\in \mathbb S^{n-1}$. This means that there is a family of smooth maps $f_z\in Sol(\mathcal R)$, parametrized by $z\in \mathbb S^{n-1}$, such that $F_z=j^r_{f_z}$.
We shall homotope the parametrized family $F_z$ to a family of holonomic sections in $\mathcal R$ such that the homotopy remains constant on $\mathbb S^{n-1}$. By the parametric $h$-principle near $K$, there exist an open neighbourhood $U$ of $K$ and a homotopy $H:\mathbb D^n\times\mathbb I\to \Gamma(\mathcal R|_U)$ such that $H^0_z=F_z$ and $H_z^1$ is holonomic for all $z\in \mathbb D^n$; furthermore, $H_z^t=j^r_{f_z}$ on $U$ for all $z\in \mathbb S^{n-1}$. Let $\delta:[0,1/2]\to [0,1]$ be the linear homeomorphism such that $\delta(0)=0$ and $\delta(1/2)=1$, and define \[\mu(z) = \delta(\|z\|)z/\|z\| \ \text{ for }\ 0<\|z\|\leq 1/2, \qquad \mu(0)=0.\] First deform $F_z$ to $\widetilde{F}_z$, where \[\widetilde{F}_z = \left\{ \begin{array}{ll} F_{\mu(z)} & \text{if } \|z\|\leq 1/2\\ F_{z/\|z\|} & \text{if } 1/2\leq\|z\|\leq 1\end{array}\right.\] Let $\bar{\delta}:[1/2,1]\to [0,1]$ be the linear homeomorphism such that $\bar{\delta}(1/2)=1$ and $\bar{\delta}(1)=0$. Define a homotopy $\widetilde{F}^s_z$ of $\widetilde{F}_z$ as follows: \[\widetilde{F}_z^s = \left\{ \begin{array}{ll} g_s^*(F_{\mu(z)}) & \text{if }\|z\|\leq 1/2\\ g^*_{s\bar{\delta}(\|z\|)}(F_{z/\|z\|}) & \text{if } 1/2\leq\|z\|\leq 1\end{array}\right.\] Note that \[\widetilde{F}^1_z = \left\{ \begin{array}{ll} g_1^*(F_{\mu(z)}) & \text{if }\|z\|\leq 1/2\\ g^*_{\bar{\delta}(\|z\|)}(F_{z/\|z\|}) & \text{if } 1/2\leq\|z\|\leq 1\end{array}\right.\] Finally we consider a parametrized homotopy given as follows: \[\widetilde{H}^s_z = \left\{ \begin{array}{ll} g_1^*(H^s_{\mu(z)}) & \text{if }\|z\|\leq 1/2\\ g^*_{\bar{\delta}(\|z\|)}(F_{z/\|z\|}) & \text{if } 1/2\leq\|z\|\leq 1\end{array}\right.\] Note that $\widetilde{H}^1_z$ is holonomic for all $z\in\mathbb D^n$ and $\widetilde{H}^s_z=j^r_{f_z}$ for all $z\in \mathbb S^{n-1}$.
The concatenation of the three homotopies now gives a homotopy between the parametrized sections $F_z$ and $\widetilde{H}^1_z$ relative to $\mathbb S^{n-1}$. This proves the parametric $h$-principle for $\mathcal R$. \qed\\ \section{Transversality Theorem on open contact manifolds} Throughout this section, $M$ is a contact manifold with a given contact form $\alpha$, and $N$ is a manifold with a smooth foliation $\mathcal F_N$ of even codimension. \begin{definition} {\em A foliation $\mathcal F$ on $M$ will be called a \emph{contact foliation subordinate to} $\alpha$, or a \emph{contact foliation on} $(M,\alpha)$, if the leaves of $\mathcal F$ are contact submanifolds of $(M,\alpha)$.\label{subordinate_contact_foliation}} \end{definition} Recall that a leaf $L$ of an arbitrary foliation on $M$ admits an injective immersion $i_L:L\to M$. We shall say that $L$ is a contact submanifold of $(M,\alpha)$ if the pullback form $i_L^*\alpha$ is a contact form on $L$. \begin{remark}{\em In view of Lemma~\ref{L:contact_submanifold}, $\mathcal F$ is a contact foliation on $(M,\alpha)$ if and only if $T\mathcal F$ is transversal to the contact distribution $\ker\alpha$ and $T\mathcal F\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$.}\label{R:tangent_contact_foliation}\end{remark} Let $Tr_\alpha(M,\mathcal F_N)$ and $\mathcal E_\alpha(TM,\nu\mathcal F_N)$ be as in Section 1. We define a first order differential relation $\mathcal R$ consisting of all 1-jets represented by triples $(x,y,F)$, where $x\in M$, $y\in N$ and $F:T_{x}M\rightarrow T_{y}N$ is a linear map such that \begin{enumerate}\item $\pi\circ F:T_xM\to \nu(\mathcal F_N)_y$ is an epimorphism, \item $\ker(\pi\circ F)\cap \ker\alpha_x$ is a symplectic subspace of $(\ker\alpha_x,d'\alpha_x)$. \end{enumerate} Then it is easy to see that the space of sections of $\mathcal R$ can be identified with $\mathcal E_\alpha(TM,\nu(\mathcal F_N))$.
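As a consistency check on these conditions, note the dimension count behind them (with $\dim M=2m+1$ and $\operatorname{codim}\mathcal F_N=2q$, as in the proof of Lemma~\ref{OR} below): if $\pi\circ F$ is an epimorphism, then \[\dim\ker(\pi\circ F)=\dim M-2q=2(m-q)+1,\] so condition (2) asks that this odd-dimensional subspace meet $\ker\alpha_x$ in a symplectic subspace, i.e.\ that it be an almost contact subspace of $(T_xM,\alpha_x,d\alpha_x)$ in the sense introduced below.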
\begin{observation} {\em The solution space of $\mathcal R$ is the same as $Tr_\alpha(M,\mathcal F_N)$. To see this, it is sufficient to note (see Definition~\ref{contact_submanifold}) that the following two statements are equivalent: \begin{enumerate} \item[(S1)] $f:M\to N$ is transversal to $\mathcal F_N$ and the leaves of the inverse foliation $f^*\mathcal F_N$ are (immersed) contact submanifolds of $M$. \item[(S2)] $\pi\circ df$ is an epimorphism and $\ker (\pi\circ df)\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$. \end{enumerate}} Hence Theorem~\ref{T:contact-transverse} states that the relation $\mathcal R$ satisfies the parametric $h$-principle. \label{P:solution space}\end{observation} We will now show that the relation $\mathcal R$ is open and invariant under the action of local contactomorphisms. \begin{lemma} \label{OR} The relation $\mathcal{R}$ defined above is an open relation. \end{lemma} \begin{proof} Let $V$ be a $(2m+1)$-dimensional vector space with a (linear) 1-form $\theta$ and a 2-form $\tau$ on it such that $\theta \wedge \tau^{m}\neq 0$. We shall call $(\theta,\tau)$ an almost contact structure on $V$. Note that the restriction of $\tau$ to $\ker\theta$ is then non-degenerate. A subspace $K$ of $V$ will be called an almost contact subspace if the restrictions of $\theta$ and $\tau$ to $K$ define an almost contact structure on $K$; in this case, $K$ must be transversal to $\ker\theta$, and $K\cap \ker\theta$ is a symplectic subspace of $\ker\theta$. Let $W$ be a vector space of even dimension and $Z$ a subspace of $W$ of codimension $2q$. Denote by $L_Z^\pitchfork(V,W)$ the set of all linear maps $L:V\to W$ which are transversal to $Z$. This is clearly an open subset of the space of all linear maps from $V$ to $W$.
Define a subset $\mathcal L$ of $L_Z^\pitchfork(V,W)$ by \[\mathcal L=\{L\in L_Z^\pitchfork(V,W)\,|\, \ker(\pi\circ L) \text{ is an almost contact subspace of }V\},\] where $\pi:W\to W/Z$ is the quotient map. We shall prove that $\mathcal L$ is an open subset of $L_Z^\pitchfork(V,W)$. Consider the map \[E:L_Z^\pitchfork(V,W)\rightarrow Gr_{2(m-q)+1}(V),\qquad L\mapsto \ker (\pi\circ L).\] Let $\mathcal U_c$ denote the subset of $Gr_{2(m-q)+1}(V)$ consisting of all almost contact subspaces $K$ of $V$. Observe that $\mathcal L=E^{-1}(\mathcal U_c)$. We shall now prove that \begin{itemize}\item $E$ is a continuous map, and \item $\mathcal U_c$ is an open subset of $Gr_{2(m-q)+1}(V)$. \end{itemize} To prove that $E$ is continuous, take $L_0\in L_Z^\pitchfork(V,W)$ and let $K_0=\ker (\pi\circ L_0)$. Consider the subbasic open set $U_{K_0}$ consisting of all subspaces $Y$ of $V$ such that the canonical projection $p:K_0\oplus K_0^\perp\to K_0$ maps $Y$ isomorphically onto $K_0$. The inverse image of $U_{K_0}$ under $E$ consists of all $L\in L_Z^\pitchfork(V,W)$ such that $p|_{\ker (\pi\circ L)}:\ker (\pi\circ L)\to K_0$ is onto. It may be seen easily that if $L\in L_Z^\pitchfork(V,W)$ then \begin{eqnarray*} p \text{ maps } \ker (\pi\circ L) \text{ onto }K_0 & \Leftrightarrow & \ker (\pi\circ L)\cap K_0^\perp=\{0\} \\ & \Leftrightarrow & \pi\circ L|_{K_0^\perp}:K_0^\perp\to W/Z \text{ is an isomorphism}.\end{eqnarray*} Now, the set of all $L$ such that $\pi\circ L|_{K_0^\perp}$ is an isomorphism is open. Hence $E^{-1}(U_{K_0})$ is open, and therefore $E$ is continuous. To prove the openness of $\mathcal U_c$, take $K_0\in\mathcal U_c$. Recall that the subbasic open set $U_{K_0}$ containing $K_0$ can be identified with the space $L(K_0,K_0^\perp)$, where $K_0^\perp$ denotes the orthogonal complement of $K_0$ with respect to some inner product on $V$ (\cite{milnor_stasheff}).
Let $\Theta$ denote the following composition of continuous maps: \[\begin{array}{rcccl}U_{K_0}\cong L(K_0,K_0^{\perp}) & \stackrel{\Phi}{\longrightarrow} & L(K_0,V)& \stackrel{\Psi}{\longrightarrow} & \Lambda^{2(m-q)+1}(K_0^*)\cong\mathbb R,\end{array}\] where $\Phi(L)=I+L$ and $\Psi(L)=L^*(\theta\wedge\tau^{m-q})$. It may be noted that if $K\in U_{K_0}$ corresponds to $T\in L(K_0,V)$, then the image of $T$ is $K$. Hence it follows that \[{\mathcal U}_c\cap U_{K_0}=(\Psi\circ\Phi)^{-1}(\mathbb R\setminus 0),\] which proves that ${\mathcal U}_c\cap U_{K_0}$ is open. Since every $K_0\in\mathcal U_c$ thus has an open neighbourhood $U_{K_0}$ in the Grassmannian meeting $\mathcal U_c$ in an open set, this proves the openness of $\mathcal U_c$. Thus $\mathcal L$ is an open subset. We now show that $\mathcal R$ is an open relation. First note that each tangent space $T_xM$ carries the almost contact structure $(\alpha_x,d\alpha_x)$. Let $U$ be a trivializing neighbourhood for the tangent bundle $TM$. We can choose a trivializing neighbourhood $\tilde{U}$ for the tangent bundle $TN$ such that $T\mathcal F_N|_{\tilde U}$ corresponds to $\tilde{U}\times Z$ for some subspace $Z$ of $\mathbb R^{2n}$ of codimension $2q$. This implies that $\mathcal R\cap J^1(U,\tilde{U})$ corresponds to $U\times\tilde{U}\times\mathcal L$. Since the sets $J^1(U,\tilde{U})$ form a basis for the topology of the jet space, this completes the proof of the lemma. \end{proof} \begin{lemma} \label{IV} $\mathcal{R}$ is invariant under the action of the pseudogroup of local contactomorphisms of $(M,\alpha)$. \end{lemma} \begin{proof} Let $\delta$ be a local diffeomorphism on an open neighbourhood of $x\in M$ such that $\delta^*\alpha=\lambda\alpha$, where $\lambda$ is a nowhere vanishing function on $Op\, x$. This implies that $d\delta_x(\xi_x)=\xi_{\delta(x)}$ and that $d\delta_x$ preserves the conformal symplectic structure determined by $d\alpha$ on $\xi$.
If $f$ is a local solution of $\mathcal R$ at $\delta(x)$, then \[d\delta_x(\ker d(f\circ\delta)_x\cap \xi_x)=\ker df_{\delta(x)}\cap\xi_{\delta(x)}.\] Hence $f\circ\delta$ is also a local solution of $\mathcal R$ at $x$. Since $\mathcal R$ is open, every representative function of a jet in $\mathcal R$ is a local solution of $\mathcal R$. Thus local contactomorphisms act on $\mathcal R$ by $\delta.j^1_f(\delta(x)) = j^1_{f\circ\delta}(x)$. \end{proof} \emph{Proof of Theorem~\ref{T:contact-transverse}}: In view of Theorem~\ref{CT} and Lemmas~\ref{OR} and \ref{IV}, the relation $\mathcal R$ satisfies the parametric $h$-principle. This completes the proof by Observation~\ref{P:solution space}.\qed\\ \begin{definition} \emph{A smooth submersion $f:(M,\alpha)\to N$ is called a \emph{contact submersion} if the level sets of $f$ are contact submanifolds of $M$.} \end{definition} We shall denote the space of contact submersions $(M,\alpha)\to N$ by $\mathcal C_\alpha(M,N)$. The space of epimorphisms $F:TM\to TN$ for which $\ker F\cap \ker\alpha$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$ will be denoted by $\mathcal E_\alpha(TM,TN)$. Taking $\mathcal F_N$ to be the zero-dimensional foliation on $N$ in Theorem~\ref{T:contact-transverse} we get the following result. \begin{corollary} Let $(M,\alpha)$ be an open contact manifold. The derivative map \[d:\mathcal C_\alpha(M,N)\to \mathcal E_\alpha(TM,TN)\] is a weak homotopy equivalence.\label{T:contact_submersion} \end{corollary} \begin{remark}{\em Suppose that $F_0\in \mathcal E_\alpha(TM,TN)$ and $D$ is the kernel of $F_0$. Then $(D,\alpha|_D,d\alpha|_D)$ is an almost contact distribution. Since $M$ is an open manifold, the bundle epimorphism $F_0:TM\to TN$ can be homotoped (in the space of bundle epimorphisms) to the derivative of a submersion $f:M\to N$ (\cite{phillips}).
Hence the distribution $\ker F_0$ is homotopic to an integrable distribution, namely the one given by the submersion $f$. It then follows from a result proved in \cite{datta_mukherjee} that $(D,\alpha|_D,d\alpha|_D)$ is homotopic to the distribution associated to a contact foliation $\mathcal F$ on $M$. Theorem~\ref{T:contact-transverse} further implies that it is possible to get a foliation $\mathcal F$ which is subordinate to $\alpha$ and is defined by a submersion.}\end{remark} \section{Foliations and $\Gamma_q$-structures} \subsection{$\Gamma$-structures \label{classifying space}} We first review some basic facts about $\Gamma$-structures for a topological groupoid $\Gamma$, following \cite{haefliger}. We also recall the connection between foliations on manifolds and $\Gamma_q$-structures, where $\Gamma_q$ is the groupoid of germs of local diffeomorphisms of $\mathbb R^q$\index{$\Gamma_q$}. For preliminaries on topological groupoids we refer to \cite{moerdijk}. \begin{definition}\label{GS}{\em Let $X$ be a topological space with an open covering $\mathcal{U}=\{U_i\}_{i\in I}$ and let $\Gamma$ be a topological groupoid over a space $B$. A 1-cocycle on $X$ over $\mathcal U$ with values in $\Gamma$ is a collection of continuous maps \[\gamma_{ij}:U_i \cap U_j\rightarrow \Gamma\] such that \[\gamma_{ik}(x)=\gamma_{ij}(x)\gamma_{jk}(x),\ \text{ for all }\ x\in U_i \cap U_j \cap U_k. \] This condition implies that $\gamma_{ii}$ takes values in the space of units of $\Gamma$, which can be identified with $B$ via the unit map $1:B\to \Gamma$.
We call two 1-cocycles $(\{U_i\}_{i\in I},\gamma_{ij})$ and $(\{\tilde{U}_k\}_{k\in K},\tilde{\gamma}_{kl})$ equivalent if for each $i\in I$ and $k\in K$, there are continuous maps \[\delta_{ik}:U_i \cap \tilde{U}_k\rightarrow \Gamma\] such that \[\delta_{ik}(x)\tilde{\gamma}_{kl}(x)=\delta_{il}(x)\ \text{for}\ x\in U_i \cap \tilde{U}_k \cap \tilde{U}_l,\] \[\gamma_{ji}(x)\delta_{ik}(x)=\delta_{jk}(x)\ \text{for}\ x\in U_i \cap U_j \cap \tilde{U}_k.\] An equivalence class of 1-cocycles is called a $\Gamma$-\emph{structure}\index{$\Gamma$-structure}. These structures have also been referred to as Haefliger structures in later literature.} \end{definition} For a continuous map $f:Y\rightarrow X$ and a $\Gamma$-structure $\Sigma=(\{U_i\}_{i\in I},\gamma_{ij})$ on $X$, the \emph{pullback $\Gamma$-structure} $f^*\Sigma$ is defined by the covering $\{f^{-1}U_i\}_{i \in I}$ together with the cocycles $\gamma_{ij}\circ f$. If $f,g:Y\to X$ are homotopic maps and $\Sigma$ is a $\Gamma$-structure on $X$, then the pull-back structures $f^*\Sigma$ and $g^*\Sigma$ need not be the same; however, they are homotopic in the following sense. \begin{definition}{\em Two $\Gamma$-structures $\Sigma_0$ and $\Sigma_1$ on a topological space $X$ are called \emph{homotopic} if there exists a $\Gamma$-structure $\Sigma$ on $X\times I$ such that $i_0^*\Sigma=\Sigma_0$ and $i_1^*\Sigma=\Sigma_1$, where $i_0:X\to X\times I$ and $i_1:X\to X\times I$ are the canonical injections defined by $i_t(x)=(x,t)$ for $t=0,1$.}\end{definition} \begin{definition}{\em Let $\Gamma$ be a topological groupoid with space of units $B$, source map $\mathbf{s}$ and target map $\mathbf{t}$. Consider the infinite sequences \[(t_0,x_0,t_1,x_1,...)\] with $t_i \in [0,1],\ x_i \in \Gamma$, such that all but finitely many $t_i$'s are zero and $\mathbf{t}(x_i)=\mathbf{t}(x_j)$ for all $i,j$.
Two such sequences \[(t_0,x_0,t_1,x_1,...)\] and \[(t'_0,x'_0,t'_1,x'_1,...)\] are called equivalent if $t_i=t'_i$ for all $i$ and $x_i=x'_i$ for all $i$ with $t_i\neq 0$. Denote the set of all equivalence classes by $E\Gamma$. The topology on $E\Gamma$ is defined to be the weakest topology for which the following maps are continuous: \[t_i:E\Gamma \rightarrow [0,1]\ \text{ given by }\ (t_0,x_0,t_1,x_1,...)\mapsto t_i, \] \[x_i: t_i^{-1}(0,1] \rightarrow \Gamma \ \text{ given by }\ (t_0,x_0,t_1,x_1,...)\mapsto x_i.\] There is also a `$\Gamma$-action' on $E\Gamma$, defined as follows: two elements $(t_0,x_0,t_1,x_1,...)$ and $(t'_0,x'_0,t'_1,x'_1,...)$ in $E\Gamma$ are said to be $\Gamma$-equivalent if $t_i=t'_i$ for all $i$, and if there exists a $\gamma\in \Gamma$ such that $x_i=\gamma x'_i$ for all $i$ with $t_i\neq 0$. The set of equivalence classes with the quotient topology is called the \emph{classifying space of} $\Gamma$, and is denoted by $B\Gamma$\index{$B\Gamma$}.} \end{definition} Let $p: E\Gamma \rightarrow B\Gamma$ denote the quotient map. The maps $t_i:E\Gamma \rightarrow [0,1]$ project down to maps $u_i:B\Gamma \rightarrow [0,1]$ such that $u_i \circ p=t_i$. The classifying space $B\Gamma$ carries a natural $\Gamma$-structure $\Omega=(\{V_i\}_{i\in I},\gamma_{ij})$, where $V_i=u_i^{-1}(0,1]$ and $\gamma_{ij}:V_i \cap V_j \rightarrow \Gamma$ is given by \[(t_0,x_0,t_1,x_1,...)\mapsto x_i x_j^{-1}.\] We shall refer to this $\Gamma$-structure as the \emph{universal $\Gamma$-structure}\index{universal $\Gamma$-structure}. For any two topological groupoids $\Gamma_1,\Gamma_2$ and any groupoid homomorphism $f:\Gamma_1\rightarrow \Gamma_2$ there exists a continuous map \[Bf:B\Gamma_1\rightarrow B\Gamma_2,\] defined by the functorial construction. \begin{definition}{\em (Numerable $\Gamma$-structure) Let $X$ be a topological space.
An open covering $\mathcal{U}=\{U_i\}_{i\in I}$ of $X$ is called \emph{numerable} if it admits a locally finite partition of unity $\{u_i\}_{i\in I}$ such that $u_i^{-1}(0,1]\subset U_i$. If a $\Gamma$-structure can be represented by a 1-cocycle whose covering is numerable, then the $\Gamma$-structure is called \emph{numerable}. } \end{definition} It can be shown that every $\Gamma$-structure on a paracompact space is numerable. \begin{definition}{\em Let $X$ be a topological space. Two numerable $\Gamma$-structures are called \emph{numerably homotopic} if there exists a homotopy of numerable $\Gamma$-structures joining them.} \end{definition} Haefliger proved that the homotopy classes of numerable $\Gamma$-structures on a topological space $X$ are in one-to-one correspondence with the homotopy classes of continuous maps $X\to B\Gamma$. \begin{theorem}(\cite{haefliger1}) \label{CMT} Let $\Gamma$ be a topological groupoid and $\Omega$ be the universal $\Gamma$-structure on $B\Gamma$. Then \begin{enumerate} \item $\Omega$ is numerable. \item If $\Sigma$ is a numerable $\Gamma$-structure on a topological space $X$, then there exists a continuous map $f:X\rightarrow B\Gamma$ such that $f^*\Omega$ is homotopic to $\Sigma$. \item If $f_0,f_1:X\rightarrow B\Gamma$ are two continuous maps, then $f_0^*\Omega$ is numerably homotopic to $f_1^*\Omega$ if and only if $f_0$ is homotopic to $f_1$. \end{enumerate} \end{theorem} \subsection{$\Gamma_q$-structures and their normal bundles} We now specialise to the groupoid $\Gamma_q$ of germs of local diffeomorphisms of $\mathbb{R}^{q}$. The source map $\mathbf s:\Gamma_q\to \mathbb R^q$ and the target map $\mathbf t:\Gamma_q\to \mathbb R^q$ are defined as follows: if $\phi\in\Gamma_q$ represents a germ at $x$, then \[{\mathbf s}(\phi)=x\ \ \text{ and }\ \ {\mathbf t}(\phi)=\phi(x).\] The units of $\Gamma_q$ consist of the germs of the identity map at points of $\mathbb R^q$.
$\Gamma_q$ is topologised as follows: for a local diffeomorphism $f:U\rightarrow f(U)$, where $U$ is an open set in $\mathbb{R}^q$, define $U(f)$ as the set of germs of $f$ at the points of $U$. The collection of all such $U(f)$ forms a basis of a topology on $\Gamma_q$ which makes it a topological groupoid. The derivative map gives a groupoid homomorphism \[\bar{d}:\Gamma_q \rightarrow GL_q(\mathbb{R})\] which takes the germ of a local diffeomorphism $\phi$ of $\mathbb R^q$ at $x$ to $d\phi_x$. Thus, to each $\Gamma_q$-structure $\omega$ on a topological space $M$ there is an associated (isomorphism class of a) $q$-dimensional vector bundle $\nu(\omega)$ over $M$, called the \emph{normal bundle of} $\omega$. In fact, if $\omega$ is defined by the cocycles $\gamma_{ij}$, then the cocycles $\bar{d}\circ \gamma_{ij}$ define the vector bundle $\nu(\omega)$. Moreover, two equivalent cocycles in $\Gamma_q$ have isomorphic normal bundles. Thus the normal bundle of a $\Gamma_q$-structure is the isomorphism class of the normal bundle of any representative cocycle. If two $\Gamma_q$-structures $\Sigma_0$ and $\Sigma_1$ are homotopic, then there exists a $\Gamma_q$-structure $\Sigma$ on $X\times I$ such that $i_0^*\Sigma=\Sigma_0$ and $i_1^*\Sigma=\Sigma_1$, where $i_0:X\to X\times \{0\}\hookrightarrow X\times I$ and $i_1:X\to X\times \{1\}\hookrightarrow X\times I$ are the canonical injective maps. Then \[\nu(\Sigma_0)=\nu(i_0^*\Sigma)\cong i_0^*\nu(\Sigma)\cong i_1^*\nu(\Sigma)\cong \nu(i_1^*\Sigma)=\nu(\Sigma_1),\] where the middle isomorphism holds because $i_0$ and $i_1$ are homotopic. Hence, normal bundles of homotopic $\Gamma_q$-structures are isomorphic. In particular, we have a vector bundle $\nu\Omega_q$ on $B\Gamma_q$ associated with the universal $\Gamma_q$-structure $\Omega_q$ \index{$\Omega_q$} on $B\Gamma_q$.
\begin{proposition}If a continuous map $f:X\to B\Gamma_q$ classifies a $\Gamma_q$-structure $\omega$ on a topological space $X$, then $Bd\circ f$ classifies the vector bundle $\nu(\omega)$. In particular, $\nu\Omega_q\cong Bd^*EGL_q(\mathbb R)$ and hence $\nu(\omega)\cong f^*\nu\Omega_q$. \end{proposition} \subsection{$\Gamma_q$-structures vs. foliations} If a foliation $\mathcal F$ on a manifold $M$ is represented by the Haefliger data $\{U_i,s_i,h_{ij}\}$, then we can define a $\Gamma_q$-structure on $M$ by $\{U_i,g_{ij}\}$, where \[g_{ij}(x) = \text{ the germ of } h_{ij} \text{ at } s_i(x) \text{ for }x\in U_i\cap U_j.\] In particular, $g_{ii}(x)$ is the germ of the identity map of $\mathbb R^q$ at $s_i(x)$ and hence $g_{ii}$ takes values in the units of $\Gamma_q$. If we identify the units of $\Gamma_q$ with $\mathbb R^q$, then $g_{ii}$ may be identified with $s_i$ for all $i$. Thus, one arrives at a $\Gamma_q$-structure $\omega_{\mathcal F}$ represented by 1-cocycles $(U_i,g_{ij})$ such that \[g_{ii}:U_i\rightarrow \mathbb{R}^q\subset \Gamma_q\] are submersions for all $i$. The functions $\tau_{ij}:U_i\cap U_j\to GL_q(\mathbb R)$, defined by $\tau_{ij}(x)=(\bar{d}\circ g_{ij})(x)$ for $x\in U_i\cap U_j$, define the normal bundle of $\omega_{\mathcal F}$. Furthermore, since $\tau_{ij}(x)=dh_{ij}(s_i(x))$, $\nu(\omega_{\mathcal F})$ is isomorphic to the quotient bundle $\nu(\mathcal F)$. Thus a foliation on a manifold $M$ defines a $\Gamma_q$-structure whose normal bundle is embedded in $TM$. As we have noted above, foliations do not behave well under the pullback operation, unless the maps are transversal to the foliations. However, in view of the relation between foliations and $\Gamma_q$-structures, it follows that the inverse image of a foliation under any map gives a $\Gamma_q$-structure. The following result, due to Haefliger, says that any $\Gamma_q$-structure is of this type.
\begin{theorem}(\cite{haefliger1}) \label{HL} Let $\Sigma$ be a $\Gamma_{q}$-structure on a manifold $M$. Then there exists a manifold $N$, a closed embedding $s:M \hookrightarrow N$ and a $\Gamma_{q}$-foliation $\mathcal{F}_N$ on $N$ such that $s^*(\mathcal{F}_N)=\Sigma$ and $s$ is a cofibration. \end{theorem} Using the above theorem and the transversality result due to Phillips \cite{phillips1}, Haefliger gave the following classification of foliations on open manifolds. \begin{theorem} The integrable homotopy classes of foliations on an open manifold are in one-to-one correspondence with the homotopy classes of epimorphisms $TM\to \nu\Omega_q$. \end{theorem} \section{Classification of contact foliations} Throughout this section $M$ is a contact manifold with a contact form $\alpha$. As before, $\xi$ will denote the associated contact structure $\ker\alpha$ and $d'\alpha=d\alpha|_{\xi}$. Let $Fol_\alpha^{2q}(M)$ denote the space of contact foliations on $M$ of codimension $2q$ subordinate to $\alpha$ (Definition~\ref{subordinate_contact_foliation}). Let $\mathcal E_{\alpha}(TM,\nu\Omega_{2q})$ be the space of all vector bundle epimorphisms $F:TM\to \nu \Omega_{2q}$ such that $\ker F$ is transversal to $\ker\alpha$ and $\ker\alpha\cap \ker F$ is a symplectic subbundle of $(\ker\alpha,d'\alpha)$. If $\mathcal F\in Fol^{2q}(M)$ and $f:M\to B\Gamma_{2q}$ is a classifying map of $\mathcal F$, then $f^*\Omega_{2q}= \mathcal F$ as $\Gamma_{2q}$-structures.
We can define a vector bundle epimorphism $TM\to \nu\Omega_{2q}$ by the following diagram (see \cite{haefliger1}) \begin{equation} \xymatrix@=2pc@R=2pc{ TM \ar@{->}[r]^-{\pi_{\mathcal F}}\ar@{->}[rd] & \nu \mathcal{F}\cong f^*(\nu \Omega_{2q}) \ar@{->}[r]^-{\bar{f}}\ar@{->}[d] & \nu \Omega_{2q} \ar@{->}[d]\\ & M \ar@{->}[r]_-{f} & B\Gamma_{2q} }\label{F:H(foliation)} \end{equation} where $\pi_{\mathcal F}:TM\to \nu(\mathcal F)$ is the projection map onto the normal bundle and $(\bar{f},f)$ represents a pull-back diagram. Note that the kernel of this morphism is $T\mathcal F$ and therefore, if $\mathcal F\in Fol^{2q}_\alpha(M)$, then $\bar{f}\circ \pi_{\mathcal F} \in \mathcal E_\alpha(TM,\nu\Omega_{2q})$ (see Remark~\ref{R:tangent_contact_foliation}). However, the morphism $\bar{f}\circ \pi_{\mathcal F}$ is defined uniquely only up to homotopy. Thus, there is a function \[H'_\alpha:Fol^{2q}_\alpha(M)\to \pi_0(\mathcal E_\alpha(TM,\nu\Omega_{2q})).\] \begin{definition} {\em Two contact foliations $\mathcal F_0$ and $\mathcal F_1$ on $(M,\alpha)$ are said to be \emph{integrably homotopic relative to $\alpha$} if there exists a foliation $\tilde{\mathcal F}$ on $(M\times\mathbb I,\alpha\oplus 0)$ such that the following conditions are satisfied: \begin{enumerate} \item $\tilde{\mathcal F}$ is transversal to the trivial foliation of $M\times\mathbb I$ by the leaves $M\times\{t\}$, $t\in \mathbb I$; \item the foliation $\mathcal F_t$ on $M$ induced by the canonical injective map $i_t:M\to M\times\mathbb I$ (given by $x\mapsto (x,t)$) is a contact foliation subordinate to $\alpha$ for each $t\in\mathbb I$; \item the induced foliations on $M\times\{0\}$ and $M\times\{1\}$ coincide with $\mathcal F_0$ and $\mathcal F_1$ respectively.\end{enumerate} Here $\alpha\oplus 0$ denotes the pull-back of $\alpha$ by the projection map $p_1:M\times\mathbb I\to M$.}\end{definition} Let $\pi_0(Fol^{2q}_{\alpha}(M))$
denote the space of integrable homotopy classes of contact foliations on $(M,\alpha)$. Define \[H_\alpha:\pi_0(Fol^{2q}_\alpha(M))\to \pi_0(\mathcal E_\alpha(TM,\nu\Omega_{2q}))\] by $H_{\alpha}([\mathcal{F}])=H_\alpha'(\mathcal F)$, where $[\mathcal F]$ denotes the integrable homotopy class of $\mathcal F$ relative to $\alpha$. To see that $H_\alpha$ is well-defined, let $\tilde{\mathcal F}$ be an integrable homotopy relative to $\alpha$ between two contact foliations $\mathcal F_0$ and $\mathcal F_1$. If $F:M\times[0,1]\to B\Gamma_{2q}$ is a classifying map of $\tilde{\mathcal F}$ then we have a diagram similar to (\ref{F:H(foliation)}) given as follows: \[ \xymatrix@=2pc@R=2pc{ T(M\times[0,1]) \ar@{->}[r]^-{\bar{\pi}}\ar@{->}[rd] & \nu \tilde{\mathcal F} \ar@{->}[r]^-{\bar{F}}\ar@{->}[d] & \nu \Omega_{2q} \ar@{->}[d]\\ & M\times [0,1] \ar@{->}[r]_-{F} & B\Gamma_{2q} } \] Let $i_t:M\to M\times\{t\}\hookrightarrow M\times[0,1]$ denote the canonical injective map of $M$ into $M\times\{t\}$ and let $f_t:M\to B\Gamma_{2q}$ be defined as $f_t(x)=F(x,t)$ for $(x,t)\in M\times[0,1]$. Since $\bar{F}\circ \bar{\pi}\circ di_t:TM\to \nu(\Omega_{2q})$ represents the homotopy class $H'_\alpha(f_t^*\Omega_{2q})$, we conclude that $H'_\alpha(\mathcal F_0)= H'_\alpha(\mathcal F_1)$. This proves that $H_\alpha$ is well-defined. We now state the main result of this article. \begin{theorem} \label{haefliger_contact} If $M$ is open then $H_\alpha:\pi_0(Fol^{2q}_{\alpha}(M)) \longrightarrow \pi_0(\mathcal E_{\alpha}(TM,\nu\Omega_{2q}))$ is bijective. \end{theorem} We first prove a lemma. \begin{lemma}Let $N$ be a smooth manifold with a foliation $\mathcal F_N$ of codimension $2q$.
If $g:N\to B\Gamma_{2q}$ classifies $\mathcal F_N$ then we have the following commutative diagram: \begin{equation} \xymatrix@=2pc@R=2pc{ \pi_0(Tr_{\alpha}(M,\mathcal{F}_N))\ar@{->}[r]^-{P}\ar@{->}[d]_-{\cong}^-{\pi_0(\pi \circ d)} & \pi_0(Fol^{2q}_{\alpha}(M))\ar@{->}[d]^-{H_{\alpha}}\\ \pi_0(\mathcal E_{\alpha}(TM,\nu \mathcal{F}_N))\ar@{->}[r]_{G_*} & \pi_0(\mathcal E_{\alpha}(TM,\nu\Omega_{2q})) }\label{Figure:Haefliger} \end{equation} where the left vertical arrow is the isomorphism defined by Theorem~\ref{T:contact-transverse}, $P$ is induced by the map which takes $f\in Tr_\alpha(M,\mathcal F_N)$ to the inverse foliation $f^*\mathcal F_N$, and $G_*$ is induced by the bundle homomorphism $G:\nu\mathcal F_N\to \nu\Omega_{2q}$ covering $g$.\label{L:haefliger} \end{lemma} \begin{proof} We shall first show that the horizontal arrows in (\ref{Figure:Haefliger}) are well defined. If $f\in Tr_{\alpha}(M,\mathcal{F}_N)$ then the inverse foliation $f^*\mathcal F_N$ belongs to $Fol^{2q}_\alpha(M)$. Furthermore, if $f_t$ is a homotopy in $Tr_{\alpha}(M,\mathcal{F}_N)$, then the map $F:M\times\mathbb I\to N$ defined by $F(x,t)=f_t(x)$ is clearly transversal to $\mathcal F_N$ and so $\tilde{\mathcal F}=F^*\mathcal F_N$ is a foliation on $M\times\mathbb I$. The restriction of $\tilde{\mathcal F}$ to $M\times\{t\}$ is the same as the foliation $f^*_t(\mathcal F_N)$, which is a contact foliation subordinate to $\alpha$. Hence, we get a map \[\pi_0(Tr_{\alpha}(M,\mathcal{F}_N))\stackrel{P}\longrightarrow \pi_0(Fol^{2q}_{\alpha}(M))\] defined by \[[f]\longmapsto [f^*\mathcal{F}_N],\] where $[f^*\mathcal{F}_N]$ denotes the integrable homotopy class of the foliation $f^*\mathcal{F}_N$. On the other hand, since $g:N\to B\Gamma_{2q}$ classifies the foliation $\mathcal F_N$, there is a vector bundle homomorphism $G:\nu\mathcal F_N\to \nu\Omega_{2q}$ covering $g$.
This induces a map \[G_*: \pi_0(\mathcal E_\alpha(TM,\nu(\mathcal F_N)))\to \pi_0(\mathcal E_\alpha(TM,\nu\Omega_{2q}))\] which takes the class $[F]$ of $F\in \mathcal E_\alpha(TM,\nu(\mathcal F_N))$ to $[G\circ F]$. We now prove the commutativity of (\ref{Figure:Haefliger}). Note that if $f\in Tr_{\alpha}(M,\mathcal{F}_N)$ then $g\circ f:M\to B\Gamma_{2q}$ classifies the foliation $f^*\mathcal F_N$. Let $\widetilde{df}:\nu(f^*\mathcal F_N)\to \nu(\mathcal F_N)$ be the unique map making the following diagram commutative: \[ \xymatrix@=2pc@R=2pc{ TM\ar@{->}[r]^-{df}\ar@{->}[d]_-{\pi_M} & TN\ar@{->}[d]^-{\pi_N}\\ \nu (f^*\mathcal{F}_N)\ar@{->}[r]_{\widetilde{df}} & \nu(\mathcal F_N) } \] where $\pi_M:TM\to\nu(f^*\mathcal F_N)$ is the projection map onto the normal bundle of $f^*\mathcal F_N$. Observe that $G\circ\widetilde{df}:\nu(f^*\mathcal F_N)\to \nu(\Omega_{2q})$ covers the map $g\circ f$ and $(G\circ \widetilde{df},g\circ f)$ is a pullback diagram. Therefore, we have \[H_\alpha([f^*\mathcal F_N])=[(G\circ\widetilde{df})\circ \pi_M]=[G\circ(\pi_N\circ df)].\] This proves the commutativity of (\ref{Figure:Haefliger}).\end{proof} {\em Proof of Theorem~\ref{haefliger_contact}}. The proof closely parallels that of Haefliger's classification theorem. The main idea is to reduce the classification to Theorem~\ref{T:contact-transverse} by using Theorem~\ref{HL} and Lemma~\ref{L:haefliger}. We refer to \cite{francis} for a detailed proof of Haefliger's theorem.\qed\\ \begin{theorem}Let $(M,\alpha)$ be an open contact manifold and let $\tau:M\to BU(n)$ be a map classifying the symplectic vector bundle $\xi=\ker\alpha$.
Then there is a bijection between the elements of $\pi_0(\mathcal E_{\alpha}(TM,\nu\Omega))$ and the homotopy classes of triples $(f,f_0,f_1)$, where $f_0:M\to BU(q)$, $f_1:M\to BU(n-q)$ and $f:M\to B\Gamma_{2q}$, such that \begin{enumerate}\item $(f_0,f_1)$ is homotopic to $\tau$ in $BU(n)$ and \item $Bd\circ f$ is homotopic to $Bi\circ f_0$ in $BGL_{2q}$.\end{enumerate} In other words, the following diagrams are homotopy commutative:\\ \[\begin{array}{ccc} \xymatrix@=2pc@R=2pc{ & &\ \ B\Gamma_{2q}\ar@{->}[d]^{Bd}\\ M \ar@{->}[r]_-{f_0}\ar@{-->}[urr]^{f} & BU(q)\ar@{->}[r]_{Bi} & BGL_{2q} } & \hspace{1cm}& \xymatrix@=2pc@R=2pc{ &\ \ BU(q)\times BU(n-q)\ar@{->}[d]^{\oplus}\\ M \ar@{->}[r]_-{\tau}\ar@{-->}[ur]^{(f_0,f_1)}& BU(n) }\end{array}\] \end{theorem} \begin{proof} An element $(F,f)\in \mathcal E_{\alpha}(TM,\nu\Omega)$ defines a (symplectic) splitting of the bundle $\xi$ as \[\xi \cong (\ker F\cap \xi)\oplus (\ker F\cap \xi)^{d'\alpha},\] since $\ker F\cap \xi$ is a symplectic subbundle of $\xi$. Let $F'$ denote the restriction of $F$ to $(\ker F\cap \xi)^{d'\alpha}$. It is easy to see that $(F',f):(\ker F\cap \xi)^{d'\alpha}\to \nu(\Omega)$ is a vector bundle map which is a fibrewise isomorphism. If $f_0:M\to BU(q)$ and $f_1:M\to BU(n-q)$ are continuous maps classifying the vector bundles $(\ker F\cap \xi)^{d'\alpha}$ and $\ker F\cap \xi$ respectively, then the classifying map $\tau$ of $\xi$ must be homotopic to $(f_0,f_1):M\to BU(q)\times BU(n-q)$ in $BU(n)$. (Recall that the isomorphism classes of symplectic vector bundles are classified by homotopy classes of continuous maps into $BU$ \cite{husemoller}.) Furthermore, note that $(\ker F\cap \xi)^{d'\alpha}\cong f^*(\nu\Omega)=f^*(Bd^*EGL_{2q}(\mathbb R))$; therefore, $Bd\circ f$ is homotopic to $Bi\circ f_0$ in $BGL_{2q}$.
Conversely, take a triple $(f,f_0,f_1)$ such that \[Bd\circ f\sim Bi\circ f_0 \text{ and } (f_0,f_1)\sim \tau.\] Then $\xi$ has a symplectic splitting given by $f_0^*EU(q)\oplus f_1^*EU(n-q)$. Further, since $Bd\circ f\sim Bi\circ f_0$, we have $f_0^*EU(q)\cong f^*\nu(\Omega)$. Hence there is an epimorphism $F:\xi\stackrel{p_2}{\longrightarrow} f_0^*EU(q) \cong f^*\nu(\Omega)$ whose kernel $f_1^*EU(n-q)$ is a symplectic subbundle of $\xi$. Finally, $F$ can be extended to an element of $\mathcal E_\alpha(TM,\nu\Omega)$ by defining it to be zero on $R_\alpha$.\end{proof} \begin{definition}{\em Let $N$ be a contact submanifold of $(M,\alpha)$ such that $T_xN$ is transversal to $\xi_x$ for all $x\in N$. Then $TN\cap \xi|_N$ is a symplectic subbundle of $\xi$. The symplectic complement of $TN\cap \xi|_N$ with respect to $d'\alpha$ will be called \emph{the normal bundle of the contact submanifold $N$}.} \end{definition} The following result is a direct consequence of the above classification theorem. \begin{corollary} Let $B$ be a symplectic subbundle of $\xi$ with a classifying map $g:M\to BU(q)$. The integrable homotopy classes of contact foliations on $M$ with normal bundles isomorphic to $B$ are in one-to-one correspondence with the homotopy classes of lifts of $Bi\circ g$ to $B\Gamma_{2q}$. \end{corollary} We end this article with an example showing that a contact foliation on a contact manifold need not be transversally symplectic, even if its normal bundle is a symplectic vector bundle. \begin{definition}{\em (\cite{haefliger1}) A codimension-$2q$ foliation $\mathcal F$ on a manifold $M$ is said to be \emph{transversally symplectic} if $\mathcal F$ can be represented by Haefliger cocycles which take values in the groupoid of local symplectomorphisms of $(\mathbb R^{2q},\omega_0)$.} \end{definition} Thus the normal bundle of a transversally symplectic foliation has a symplectic structure.
It can be shown that if $\mathcal F$ is transversally symplectic then there exists a closed 2-form $\omega$ on $M$ such that $\omega^q$ is nowhere vanishing and $\ker\omega=T\mathcal F$. \begin{example} {\em Let us consider a closed almost-symplectic manifold $V^{2n}$ which is not symplectic (e.g., we may take $V$ to be $\mathbb S^6$) and let $\omega_V$ be a non-degenerate 2-form on $V$ defining the almost symplectic structure. Set $M=V\times\mathbb{R}^3$ and let $\mathcal{F}$ be the foliation on $M$ defined by the fibres of the projection map $\pi:M\to V$; thus the leaves are $\{x\}\times\mathbb{R}^3,\ x\in V$. Consider the standard contact form $\alpha=dz+x\, dy$ on the Euclidean space $\mathbb R^3$ and let $\tilde{\alpha}$ denote the pull-back of $\alpha$ by the projection map $p_2:M\to\mathbb R^3$. The 2-form $\beta=\omega_V\oplus d\alpha$ on $M$ is of maximal rank and it is easy to see that $\beta$ restricted to $\ker\tilde{\alpha}$ is non-degenerate. Therefore $(\tilde{\alpha},\beta)$ is an almost contact structure on $M$. Moreover, $\tilde{\alpha}\wedge \beta|_{T\mathcal{F}}$ is nowhere vanishing. We claim that there exists a contact form $\eta$ on $M$ whose restrictions to the leaves of $\mathcal F$ are contact. Recall that there exists a surjective map \[(T^*M)^{(1)}\stackrel{D}{\rightarrow}\wedge^1T^*M \oplus \wedge^2T^*M\] such that $D\circ j^1(\theta)=(\theta,d\theta)$ for any 1-form $\theta$ on $M$. Let \[r:\wedge^1T^*M \oplus \wedge^2T^*M\rightarrow \wedge^1T^*\mathcal{F} \oplus \wedge^2T^*\mathcal{F}\] be the restriction map defined by the pull-back of forms. Let $A\subset \Gamma(\wedge^1T^*M \oplus \wedge^2T^*M)$ be the set of all pairs $(\eta,\Omega)$ such that $\eta \wedge \Omega^{n+1}$ is nowhere vanishing, and let $B\subset \Gamma(\wedge^1T^*\mathcal{F} \oplus\wedge^2T^*\mathcal{F})$ be the set of all pairs $(\eta,\Omega)$ such that $\eta\wedge\Omega$ is nowhere vanishing on $T\mathcal{F}$.
Now set $\mathcal{R}\subset (T^*M)^{(1)}$ as \[\mathcal{R}=D^{-1}(A)\cap (r\circ D)^{-1}(B).\] Since both $A$ and $B$ are open, so is $\mathcal{R}$. Now, if we consider the fibration $M\stackrel{\pi}{\rightarrow}V$, then it is easy to see that the diffeotopies of $M$ preserving the fibers of $\pi$ sharply move $V\times 0$, and $\mathcal{R}$ is invariant under the action of such diffeotopies. So by Theorem~\ref{T:gromov-invariant} there exists a contact form $\eta$ on $Op(V\times 0)=V\times\mathbb{D}^3_{\varepsilon}$ for some $\varepsilon>0$, such that $\eta$ restricted to each leaf of the foliation $\mathcal F$ is also contact. Now take a diffeomorphism $g:\mathbb{R}^3\rightarrow \mathbb{D}^3_{\varepsilon}$. Then $\eta'=(id_V\times g)^*\eta$ is a contact form on $M$. Further, $\mathcal{F}$ is a contact foliation relative to $\eta'$ since $id_V\times g$ is foliation preserving. But $\mathcal{F}$ cannot be transversally symplectic, for otherwise there would exist a closed 2-form $\beta$ whose restriction to $\nu \mathcal{F}=\pi^*(TV)$ would be non-degenerate. This would imply that $V$ is a symplectic manifold, contradicting our hypothesis.} \end{example} \section{Examples of contact foliations on contact manifolds} The odd-dimensional spheres $\mathbb S^{2n+1}$, $n\geq 1$, are examples of contact manifolds as described in Example~\ref{ex:contact}. We shall show that the open submanifolds of $\mathbb S^{2N-1}$ obtained by deleting a lower-dimensional sphere from it admit contact foliations. We shall first interpret Corollary~\ref{T:contact_submersion} in terms of certain $2n$-frames on $M$, when the target manifold is a Euclidean space. Recall from Section 2 that the tangent bundle $TM$ of a contact manifold $(M,\alpha)$ splits as $\ker\alpha\oplus\ker \,d\alpha$. Let $P:TM\to\ker\alpha$ be the projection morphism onto $\ker\alpha$ relative to this splitting.
We shall denote the projection of a vector field $X$ on $M$ under $P$ by $\bar{X}$. For any smooth function $h:M\to \mathbb R$, $X_h$ will denote the contact Hamiltonian vector field defined as in the preliminaries (see equation (\ref{contact_hamiltonian1})). \begin{lemma} Let $(M,\alpha)$ be a contact manifold and $f:M\to \mathbb R^{2n}$ be a submersion with coordinate functions $f_1,f_2,\dots,f_{2n}$. Then the following statements are equivalent: \begin{enumerate}\item[(C1)] $f$ is a contact submersion. \item[(C2)] The restriction of $d\alpha$ to the bundle spanned by $X_{f_1},\dots,X_{f_{2n}}$ defines a symplectic structure. \item[(C3)] The vector fields $\bar{X}_{f_1},\dots,\bar{X}_{f_{2n}}$ span a symplectic subbundle of $(\xi,d'\alpha)$. \end{enumerate}\end{lemma} \begin{proof} If $f:(M,\alpha)\to\mathbb R^{2n}$ is a contact submersion then the following relation holds pointwise: \begin{equation}\ker df\cap \ker\alpha=\langle \bar{X}_{f_1},...,\bar{X}_{f_{2n}}\rangle^{\perp_{d'\alpha}},\end{equation} where the right hand side denotes the symplectic complement of the subbundle spanned by $\bar{X}_{f_1},...,\bar{X}_{f_{2n}}$ with respect to $d'\alpha$. Indeed, for any $v\in \ker\alpha$, \[ d'\alpha(\bar{X}_{f_i},v)=-df_i(v),\ \ \text{ for all }i=1,...,2n. \] Therefore, $v\in\ker\alpha\cap\ker df$ if and only if $d'\alpha(\bar{X}_{f_i},v)=0$ for all $i=1,\dots,2n$, that is, $v\in \langle \bar{X}_{f_1},...,\bar{X}_{f_{2n}}\rangle^{\perp_{d'\alpha}}$. Thus, the equivalence of (C1) and (C3) is a consequence of the equivalence between (S1) and (S2). The equivalence of (C2) and (C3) follows from the relation $d\alpha(X,Y)=d\alpha(\bar{X},\bar{Y})$, where $X,Y$ are any two vector fields on $M$.
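The relation $d'\alpha(\bar{X}_{f_i},v)=-df_i(v)$ can be checked from the defining equations of the contact Hamiltonian vector field. Assuming the conventions $\alpha(X_h)=h$ and $i_{X_h}d\alpha=-dh+dh(R_\alpha)\,\alpha$ (with the opposite sign convention the computation is identical up to an overall sign), we get, for $v\in\ker\alpha$,
\[d\alpha(\bar{X}_h,v)=d\alpha(X_h-h\,R_\alpha,v)=d\alpha(X_h,v)=-dh(v)+dh(R_\alpha)\,\alpha(v)=-dh(v),\]
using $\bar{X}_h=X_h-\alpha(X_h)R_\alpha$, $i_{R_\alpha}d\alpha=0$ and $\alpha(v)=0$.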
\end{proof} An ordered set of vectors $e_{1}(x),...,e_{2n}(x)$ in $\xi_x$ will be called a \emph{symplectic $2n$-frame} \index{symplectic $2n$-frame} in $\xi_x$ if the subspace spanned by these vectors is a symplectic subspace of $\xi_x$ with respect to the symplectic form $d'\alpha_x$. Let $T_{2n}\xi$ be the bundle of symplectic $2n$-frames in $\xi$ and let $\Gamma(T_{2n}\xi)$ denote the space of sections of $T_{2n}\xi$ with the $C^{0}$ compact-open topology. For any smooth submersion $f:(M,\alpha)\rightarrow \mathbb{R}^{2n}$, define the \emph{contact gradient} of $f$ by \[\Xi f(x)=(\bar{X}_{f_{1}}(x),...,\bar{X}_{f_{2n}}(x)),\] where $f_{i}$, $i=1,2,\dots,2n$, are the coordinate functions of $f$. If $f$ is a contact submersion then $\bar{X}_{f_{1}}(x),...,\bar{X}_{f_{2n}}(x)$ span a symplectic subspace of $\xi_x$ for all $x\in M$, and hence $\Xi f$ becomes a section of $T_{2n}\xi$. \begin{theorem} \label{ED} Let $(M^{2m+1},\alpha)$ be an open contact manifold. Then the contact gradient map $\Xi:\mathcal{C}_\alpha(M,\mathbb{R}^{2n})\rightarrow \Gamma(T_{2n}\xi)$ is a weak homotopy equivalence. \end{theorem} \begin{proof} As $T\mathbb{R}^{2n}$ is a trivial vector bundle, the map \[i_{*}:\mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\rightarrow \mathcal{E}_\alpha(TM,T\mathbb{R}^{2n})\] induced by the inclusion $i:0 \hookrightarrow \mathbb{R}^{2n}$ is a homotopy equivalence, where $\mathbb{R}^{2n}$ is regarded as the vector bundle over $0\in \mathbb{R}^{2n}$. The homotopy inverse $c$ is given by the following diagram.
For any $F\in \mathcal E_\alpha(TM,T\mathbb R^{2n})$, $c(F)$ is defined as $p_2\circ F$: \[\begin{array}{ccccc} TM & \stackrel{F}{\longrightarrow} & T\mathbb{R}^{2n}=\mathbb{R}^{2n}\times \mathbb{R}^{2n} & \stackrel{p_2}{\longrightarrow} & \mathbb{R}^{2n}\\ \downarrow & & \downarrow & & \downarrow\\ M & \longrightarrow & \mathbb{R}^{2n} & \longrightarrow & 0 \end{array}\] where $p_2$ is the projection onto the second factor. Since $d'\alpha$ is non-degenerate, the contraction of $d'\alpha$ with a vector $X\in\ker\alpha$ defines an isomorphism \[\phi:\ker\alpha \rightarrow (\ker\alpha)^*.\] We define a map $\sigma:\oplus_{i=1}^{2n}T^*M\to \oplus_{i=1}^{2n}\xi$ by \[\sigma(G_1,\dots,G_{2n})=-(\phi^{-1}(\bar{G}_1),\dots,\phi^{-1}(\bar{G}_{2n})),\] where $\bar{G}_i=G_i|_{\ker\alpha}$. Then, noting that \[\ker(G_1,\dots,G_{2n})\cap \ker\alpha=\langle\phi^{-1}(\bar{G}_1),\dots,\phi^{-1}(\bar{G}_{2n})\rangle^{\perp_{d'\alpha}},\] we get a map $\tilde{\sigma}$ by restricting $\sigma$ to $\mathcal E(TM,\mathbb R^{2n})$: \[\tilde{\sigma}:{\mathcal E}(TM,\mathbb{R}^{2n})\longrightarrow \Gamma(M,T_{2n}\xi).\] Moreover, the contact gradient map $\Xi$ factors as $\Xi= \tilde{\sigma} \circ c \circ d$: \begin{equation}\mathcal{C}_\alpha(M,\mathbb{R}^{2n})\stackrel{d}\rightarrow \mathcal{E}_\alpha(TM,T\mathbb{R}^{2n})\stackrel{c}\rightarrow \mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\stackrel{\tilde{\sigma}}\rightarrow \Gamma(T_{2n}\xi).\end{equation} To see this, take any $f:M\to \mathbb R^{2n}$. Then $c(df)=(df_{1},\dots,df_{2n})$, and hence \[ \tilde{\sigma} c (df)=(-\phi^{-1}(df_1|_\xi),\dots,-\phi^{-1}(df_{2n}|_\xi)) = (\bar{X}_{f_1},\dots,\bar{X}_{f_{2n}})=\Xi(f),\] which gives $\tilde{\sigma} \circ c \circ d(f)=\Xi f$. We claim that $\tilde{\sigma}: \mathcal{E}_\alpha(TM,\mathbb{R}^{2n})\to \Gamma(T_{2n}\xi)$ is a homotopy equivalence.
To prove this, we define a map $\tau: \oplus_{i=1}^{2n}\xi \to \oplus_{i=1}^{2n} T^*M$ by the formula \[\tau(X_1,\dots,X_{2n})=(i_{X_1}d\alpha,\dots,i_{X_{2n}} d\alpha),\] which induces a map $\tilde{\tau}: \Gamma(T_{2n}\xi) \to \mathcal{E}(TM,\mathbb{R}^{2n})$. It is easy to verify that $\tilde{\sigma} \circ \tilde{\tau}=id$. In order to show that $\tilde{\tau}\circ\tilde{\sigma}$ is homotopic to the identity, take any $G\in \mathcal E_\alpha(TM,\mathbb R^{2n})$ and let $\widehat{G}=(\tau\circ \sigma)(G)$. Then $\widehat{G}$ equals $G$ on $\ker\alpha$. Define a homotopy between $G$ and $\widehat{G}$ by $G_t=(1-t)G+t\widehat{G}$. Then $G_t=G$ on $\ker\alpha$ and hence $\ker G_t\cap \ker\alpha=\ker G\cap \ker\alpha$. This also implies that each $G_t$ is an epimorphism. Thus, the homotopy $G_t$ lies in $\mathcal E_\alpha(TM,\mathbb R^{2n})$. This shows that $\tilde{\tau}\circ \tilde{\sigma}$ is homotopic to the identity map. This completes the proof of the theorem, since $d:\mathcal{C}(M,\mathbb{R}^{2n}) \rightarrow \mathcal{E}(TM,T\mathbb{R}^{2n})$ is a weak homotopy equivalence (Theorem~\ref{T:contact-transverse}) and $c$, $\tilde{\sigma}$ are homotopy equivalences.\end{proof} \begin{example} {\em Let $\mathbb{S}^{2N-1}$ denote the $(2N-1)$-sphere in $\mathbb R^{2N}$, \[\mathbb{S}^{2N-1}=\{(z_{1},\dots,z_{2N})\in \mathbb{R}^{2N}: \sum_{1}^{2N}|z_{i}|^{2}=1\}.\] This is a standard example of a contact manifold, where the contact form $\eta$ is induced from the 1-form $\sum_{i=1}^{N} (x_i\,dy_i-y_i\,dx_i)$ on $\mathbb R^{2N}$. For $N>K$, we consider the open manifold $\mathcal S_{N,K}$ obtained from $\mathbb{S}^{2N-1}$ by deleting a $(2K-1)$-sphere: \begin{center}$\mathcal{S}_{N,K}=\mathbb S^{2N-1}\setminus \mathbb{S}^{2K-1}$,\end{center} where \[\mathbb{S}^{2K-1}=\{(z_{1},\dots,z_{2K},0,\dots,0)\in \mathbb{R}^{2N}: \sum_{1}^{2K}|z_{i}|^{2}=1\}.\] Then $\mathcal{S}_{N,K}$ is a contact submanifold of $\mathbb S^{2N-1}$.
Let $\xi$ denote the contact structure associated to the contact form $\eta$ on $\mathcal S_{N,K}$. Since $\xi\to \mathcal S_{N,K}$ is a symplectic vector bundle, we can choose a complex structure $J$ on $\xi$ such that $d'\eta$ is $J$-invariant. Thus, $(\xi,J)$ becomes a complex vector bundle of rank $N-1$. We define a homotopy $F_t:\mathcal S_{N,K}\to \mathcal S_{N,K}$, $t\in [0,1]$, as follows: for $(x,y)\in (\mathbb{R}^{2K}\times \mathbb{R}^{2(N-K)})\cap \mathcal{S}_{N,K}$, \[F_t(x,y)=\frac{(1-t)(x,y)+t(0,y/\|y \|)}{\|(1-t)(x,y)+t(0,y/\| y \|) \|}.\] This is well defined since $y\neq 0$. It is easy to see that $F_0=id$, that $F_1$ maps $\mathcal S_{N,K}$ onto $\mathbb{S}^{2(N-K)-1}$, and that the homotopy fixes $\mathbb{S}^{2(N-K)-1}$ pointwise. Define $r:\mathcal S_{N,K}\rightarrow (\{0\}\times \mathbb{R}^{2(N-K)})\cap \mathcal S_{N,K}\simeq \mathbb{S}^{2(N-K)-1}$ by \[r(x,y)= (0,y/\|y\|), \ \ \ (x,y)\in(\mathbb R^{2K}\times\mathbb R^{2(N-K)})\cap \mathcal{S}_{N,K}.\] Then $F_1$ factors as $F_1=i\circ r$, where $i$ is the inclusion map, and we have the following diagram: \[ \begin{array}{lcccl} r^*(i^*\xi)&\longrightarrow&i^*\xi&\longrightarrow&\xi\\ \downarrow&&\downarrow&&\downarrow\\ \mathcal{S}_{N,K}&\stackrel{r}{\longrightarrow}&\mathbb{S}^{2(N-K)-1}&\stackrel{i}{\longrightarrow}&\mathcal{S}_{N,K}\end{array}\] Hence, $\xi=F_0^*\xi\cong F_1^*\xi=r^*(\xi|_{\mathbb S^{2(N-K)-1}})$ as complex vector bundles. Since $\xi$ is a (complex) vector bundle of rank $N-1$, $\xi|_{\mathbb S^{2(N-K)-1}}$ has a decomposition of the following form (\cite{husemoller}): \[\xi|_{\mathbb S^{2(N-K)-1}}\cong \tau^{N-K-1}\oplus \theta^K,\] where $\theta^K$ is a trivial complex vector bundle of rank $K$ and $\tau^{N-K-1}$ is a complementary subbundle. Hence $\xi$ must also have a trivial direct summand $\theta$ of rank $K$.
Moreover, $\theta$ will be a symplectic subbundle of $\xi$, since the complex structure $J$ is compatible with the symplectic structure $d'\eta$ on $\xi$. Thus, $\mathcal S_{N,K}$ admits a symplectic $2K$-frame spanning $\theta$. Hence, by Theorem~\ref{ED}, there exist contact submersions of $\mathcal S_{N,K}$ into $\mathbb R^{2K}$. Consequently, $\mathcal S_{N,K}$ admits contact foliations of codimension $2K$ for each $K<N$. }\end{example} \end{document}
\begin{document} \begin{frontmatter} \title{Low-rank Riemannian eigensolver for high-dimensional Hamiltonians} \author[add1,fn1]{Maxim Rakhuba\corref{cor1}} \ead{[email protected]} \fntext[fn1]{Work by MR on this project was performed while he was a junior research scientist at Skolkovo Institute of Science and Technology.} \cortext[cor1]{Corresponding author} \author[add4,add2]{Alexander Novikov} \ead{[email protected]} \author[add3,add4]{Ivan Oseledets} \ead{[email protected]} \address[add1]{Seminar for Applied Mathematics, ETH Zurich, R\"amistrasse 101, 8092 Zurich, Switzerland} \address[add4]{Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences, 119333 Moscow, Russia} \address[add2]{National Research University Higher School of Economics, 101000 Moscow, Russia} \address[add3]{Skolkovo Institute of Science and Technology, Skolkovo Innovation Center, 143026 Moscow, Russia} \begin{abstract} Problems such as the computation of spectra of spin chains and of vibrational spectra of molecules can be written as \emph{high-dimensional eigenvalue problems}, i.e., problems in which the eigenvector can be naturally represented as a multidimensional tensor. Tensor methods have proven to be an efficient tool for approximating solutions of high-dimensional eigenvalue problems; however, their performance deteriorates quickly as the number of eigenstates to be computed increases. We address this issue by designing a new algorithm, motivated by the ideas of \emph{Riemannian optimization} (optimization on smooth manifolds), for the approximation of multiple eigenstates in the \emph{tensor-train format}, which is also known as the matrix product state representation. The proposed algorithm is implemented in TensorFlow, which allows for both CPU and GPU parallelization.
\end{abstract} \end{frontmatter} \section{Introduction} \label{sec:intro} The paper aims at the approximate computation of the $b$ lowest eigenvalues~$E^{(i)}$ and the corresponding eigenvectors ${\bf x}^{(i)}$ of a high-dimensional Hamiltonian ${\bf H}\in\mathbb{R}^{n^d\times n^d}$: \begin{equation}\label{eq:main} {\bf H} {\bf x}^{(i)} = E^{(i)}{\bf x}^{(i)}, \quad i = 1,\dots, b. \end{equation} Such problems arise in various applications in solid state physics and quantum chemistry, including, but not restricted to, the computation of spectra of spin chains ($n = 2$) and of vibrational spectra of molecules (usually $n<20$). High-dimensional problems are known to be computationally challenging due to the \emph{curse of dimensionality}, i.e. the exponential growth of the number of parameters with respect to the dimensionality~$d$ of the problem. For example, storing a single eigenvector for a spin chain with $d=50$ spins requires approximately 10~Pb of computer memory, which is far beyond the RAM available on modern supercomputers. One way to deal with the curse of dimensionality is to utilize tensor decompositions of the vectors ${\bf x}^{(i)}$, $i=1,\dots,b$, represented as multidimensional arrays ${\mathcal X}^{(i)}$ of size $n\times \dots \times n$, also known as tensors. Tensor decompositions have long proven useful in both the solid state physics and molecular dynamics communities, where different kinds of tensor decompositions have been used. Recently, tensor methods have also attracted attention in the numerical analysis community, where new ideas such as the cross approximation method~\cite{ost-tucker-2008,ot-ttcross-2010,lars-htcross-2013} have been developed.
In this work we introduce a new method for solving problem~\eqref{eq:main} using the~\emph{tensor-train (TT) format}~\cite{osel-tt-2011}, also known as the matrix product state (MPS) representation, which has long been used in quantum information theory and solid state physics to approximate certain wavefunctions \cite{white-dmrg-1992, ostlund-dmrg-1995} (the DMRG method); see the review \cite{schollwock-2011} for more details. One of the peculiarities of the proposed method is that it utilizes ideas of optimization on smooth manifolds (Riemannian optimization). This is possible because the set of tensors of a fixed TT-rank\footnote{The TT-rank controls the number of parameters in the decomposition (Sec.~\ref{sec:tt}).} forms a smooth nonlinear manifold~\cite{holtz-manifolds-fixed-rank-2011}. For small TT-ranks the manifold is low-dimensional and hence all computations are inexpensive. For finding a single eigenpair $(E^{(1)}, {\bf x}^{(1)})$, Riemannian optimization naturally avoids the rank growth of the method (Sec.~\ref{sec:riemtt}). This is essential for fast computations, as the complexity of tensor methods usually depends strongly on the ranks. However, for finding several eigenvalues the generalization is not straightforward and leads to a significant increase in complexity. Alternatively, one can compute eigenpairs one by one using Riemannian optimization, i.e. using deflation techniques, but such an approach is known to converge slowly if the spectrum is clustered~\cite{arbenz-lectures-2012}. To avoid this we propose a modification of the method (Sec.~\ref{sec:proposed}) that, on the one hand, keeps the benefits of efficient single-eigenstate computation using the Riemannian approach and, on the other hand, inherits the faster convergence properties of block methods. The idea of the proposed method is as follows.
We suggest projecting all the eigenvectors at each iteration onto a single, specifically chosen tangent space\footnote{In a nutshell, a tangent space is a linearization of the manifold at a given point (Sec.~\ref{sec:riem_lopcg}).}. However, there is no reason to believe that all eigenvectors can be approximated using the tangent space of a single eigenvector. Thus, in our algorithm, we always treat the projection as a correction to the current iterate. Overall, this leads to a small but non-standard optimization procedure for the coefficients of the iterative process (Sec.~\ref{sec:coef}). The main contributions of this paper are: \begin{itemize} \item We develop a low-rank Riemannian alternating projection (LRRAP) concept for block iterative methods and apply it to the locally optimal block preconditioned conjugate gradient (LOBPCG) method. \item We implement the proposed LRRAP LOBPCG solver using the TensorFlow\footnote{If you are unfamiliar with TensorFlow, see~\ref{sec:app-t3f-implementation} for an introduction.} library, which allows for parallelization on both CPUs and GPUs. \item We accurately calculate vibrational spectra of acetonitrile (CH$_3$CN) and ethylene oxide (C$_2$H$_4$O) molecules as well as spectra of one-dimensional spin chains. The comparison with state-of-the-art methods in these domains indicates that we obtain comparable accuracy with a significant acceleration (up to 20 times) for large~$b$, thanks to the design of the method and the GPU support. \end{itemize} \section{Riemannian optimization on TT manifolds} \label{sec:riemtt} In this section we provide a brief description of the TT-decomposition and of the essentials of Riemannian optimization, using the computation of a single eigenvalue by the LOBPCG method as an example. In Section~\ref{sec:proposed} the approach described here will be generalized to the computation of several eigenvalues.
\subsection{TT representation}\label{sec:tt} Recall that we consider problem~\eqref{eq:main} and represent each eigenvector of size~$n^d$ as an $n\times \dots \times n$ tensor. This allows for compression using tensor decompositions of multidimensional arrays, in particular the TT decomposition. For a tensor ${\mathcal X} = \{{\mathcal X}_{i_1,\dots,i_d}\}_{i_1,\dots,i_d=1}^{n}\in \mathbb{R}^{n\times \dots \times n}$ the TT decomposition reads \begin{equation}\label{eq:tt-repr} {\mathcal X}_{i_1,\dots,i_d} = G_{1}(i_1)\, G_{2}(i_2) \dots G_d (i_d), \end{equation} where $G_{k}(i_k)$ are $r_{k-1}\times r_k$ matrices, $k=2,\dots,d-1$. For the product of matrices to be a number, we require that $G_{1}(i_1)$ be $1\times r_1$ row matrices and $G_{d}(i_d)$ be $r_{d-1}\times 1$ column matrices, which means $r_0=r_d = 1$. For simplicity we force $r_1=\dots = r_{d-1} = r$ and call $r$ the \emph{TT-rank}. Using the same value of $r_k$ for all $k=1,\dots,d-1$ implies that for some modes $r_k$ may be overestimated. Note that storing the TT representation of the array ${\mathcal X}$ requires $\mathcal{O}(dnr^2)$ elements, compared with the $n^d$ elements of the original array. The TT representation of the Hamiltonian\footnote{ The matrix ${\bf H}\in\mathbb{R}^{n^d\times n^d}$ can also be naturally considered as a multidimensional array $\mathcal{H}$ of dimension $2d$. The TT decomposition of $\mathcal{H}$ reads $\mathcal{H}_{i_1,\dots,i_d,j_1,\dots,j_d} = H_{1}(i_1,j_1)\, H_{2}(i_2,j_2) \dots H_d (i_d, j_d)$, where $i_1,\dots,i_d$ represent the row indexing of ${\bf H}$, while $j_1,\dots,j_d$ represent its column indexing, $H_{k}(i_k,j_k)$ are $R_{k-1}\times R_k$ matrices, $k=2,\dots,d-1$, and $R_0=R_d = 1$. } is defined by analogy. We denote the maximum TT-rank of the Hamiltonian by $R$. It rarely happens that a tensor can be represented exactly with a small TT-rank~$r$. Therefore, to keep the ranks small, the tensor is approximated to some accuracy by another tensor with a small TT-rank.
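As a toy illustration of~\eqref{eq:tt-repr} (a NumPy sketch for checking the definition, not the TensorFlow implementation used in the paper), an entry of a TT tensor is a chain of $d$ small matrix products over the cores, while materializing the full tensor costs $n^d$ memory and is feasible only for tiny $d$:

```python
import numpy as np

def tt_entry(cores, idx):
    # X_{i1,...,id} = G_1(i_1) G_2(i_2) ... G_d(i_d);
    # cores[k] has shape (r_{k-1}, n, r_k) with r_0 = r_d = 1.
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]  # running 1 x r_k row vector
    return v[0, 0]

def tt_full(cores):
    # Contract all cores into the full n x ... x n tensor (small d only!).
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T[0, ..., 0]  # drop the dummy r_0 = r_d = 1 axes
```

Storing the cores takes $\mathcal{O}(dnr^2)$ numbers, in line with the count given above.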
In the numerical experiments considered below we expect exponential decay of the introduced error with respect to~$r$. From now on we say that a vector ${\bf x}$ of length $n^d$ has TT-rank $r$, meaning that, reshaped into an $n\times\dots\times n$ multidimensional array ${\mathcal X}$ with ${\bf x}=\mathrm{vec}({\mathcal X})$, it can be represented with TT-rank equal to $r$. \subsection{Rayleigh quotient minimization using Riemannian optimization}\label{sec:riem_lopcg} Consider the problem of finding the smallest eigenvalue~$E^{(1)}$ (assumed to be simple) and the corresponding eigenvector~${\bf x}^{(1)}$. Suppose we are given the a priori knowledge that ${\bf x}^{(1)}$ can be accurately approximated by a TT representation of a small TT-rank $r$, i.e. it lies on the manifold of tensors of fixed TT-rank $r$: \[ \mathcal{M}_r \equiv \{{\bf x}\in \mathbb{R}^{n^d} \, |\, \text{TT-rank}({\bf x}) = r \}. \] To obtain the eigenpair $(E^{(1)},{\bf x}^{(1)})$ we pose the Rayleigh quotient $\mathrm{RQ}({\bf x})$ minimization problem on the low-parametric manifold $\mathcal{M}_r$ instead of the full space $\mathbb{R}^{n^d}$: \begin{equation}\label{tfeig:eq:rqlr} \min_{{\bf x} \in \mathcal{M}_r}\, \mathrm{RQ}({\bf x}), \quad \mathrm{RQ}({\bf x})\equiv \frac{\left< {\bf x}, {\bf H} {\bf x} \right>}{\left< {\bf x}, {\bf x} \right>}, \end{equation} where we assume that ${\bf H}$ is symmetric. Since $\mathcal{M}_r$ forms a smooth manifold~\cite{holtz-manifolds-fixed-rank-2011}, one can utilize so-called Riemannian optimization methods, i.e. optimization on smooth manifolds. One of the key concepts of Riemannian optimization is the \emph{tangent space}, which consists of all tangent vectors to $\mathcal{M}_r$ at a given~${\bf x}$~\cite{robsal-diffgeom-2011}. We denote the tangent space of $\mathcal{M}_r$ at ${\bf x}$ by $T_{\bf x} \mathcal{M}_r$. The tangent space can be viewed as the linearization of the manifold at the given point ${\bf x}$.
It has the same dimension as the manifold~\cite{robsal-diffgeom-2011} and, assuming that $r$ is small, it allows one to locally replace the manifold with a low-dimensional linear space. With the tangent space at hand, we can discuss the simplest optimization method on $\mathcal{M}_r$ --- the Riemannian gradient descent method, which consists of several steps and is illustrated in Fig.~\ref{fig:riem}. Given the current iterate ${\bf x}_k$, the first step is to calculate the Riemannian gradient, which in this case is the projection of the standard Euclidean gradient: $\mathsf{P}_{{\bf x}_k} \nabla \mathrm{RQ} ({\bf x}_{k})$, where $\mathsf{P}_{{\bf x}_k}$ denotes the orthogonal projection onto $T_{{\bf x}_k} \mathcal{M}_r$. This is simply the steepest descent direction of the Rayleigh quotient at ${\bf x}_k$, restricted to the tangent space~$T_{{\bf x}_k} \mathcal{M}_r$. The second step is, given the search direction from $T_{{\bf x}_k} \mathcal{M}_r$, to map it back to the manifold to obtain the next iterate ${\bf x}_{k+1}$. This is done by a smooth mapping $\widetilde{\mathrm{Retr}}_r({\bf x},\cdot):T_{\bf x} \mathcal{M}_r\to \mathcal{M}_r$ called a \emph{retraction}. Note that in the case of the fixed-rank manifold $\mathcal{M}_r$ we use the retraction satisfying $\widetilde{\mathrm{Retr}}_r({\bf x},\xi) \equiv \mathrm{Retr}_r({\bf x} + \xi)$~\cite{ao-retract-2014} and call it truncation. \begin{figure}[t] \begin{center} \begin{tikzpicture} \node at (0,0) {\includegraphics[width=100mm]{manifold.pdf}}; \node at (-1,-2.2) {{ $\mathcal{M}_r$}}; \node at (-1.9,1.8) {{ $T_{{\bf x}_k} \mathcal{M}_r$}}; \node at (-0.15,3.2) {{ $-\nabla \mathrm{RQ} ({\bf x}_k)$}}; \node [rotate=-7] at (0.5,1.17) {{\small $-\mathsf{P}_{{\bf x}_k}\negthickspace \nabla \mathrm{RQ} ({\bf x}_k)$}}; \node at (2.2,-1.3) {{ ${\bf x}_{k+1}$}}; \node at (-1.3,0.9) {{ ${\bf x}_{k}$}}; \end{tikzpicture} \end{center} \caption{Illustration of the Riemannian gradient descent method.
$\mathcal{M}_r$ denotes the smooth manifold of vectors of fixed TT-rank and $T_{{\bf x}_k} \mathcal{M}_r$ its tangent space at ${\bf x}_{k}$. The gradient is projected onto the tangent space, ${\bf x}_k$ is moved in the direction of the projected gradient, and the result is retracted to the manifold: ${\bf x}_{k+1} = \mathrm{Retr}_r \left({\bf x}_k - \mathsf{P}_{{\bf x}_k} \nabla\mathrm{RQ} ({\bf x}_k) \right)$. } \label{fig:riem} \end{figure} As with the original gradient descent method, the convergence of such a method can be slow, and preconditioning has to be used. There are different ways to account for a preconditioner in the Riemannian version of the iteration. We use the idea considered in \cite{ksv-manprec-2016}, where the preconditioner acts on the gradient and the result is projected afterwards: \begin{equation}\label{eq:riem_pinvit} \begin{split} {\bf x}_{k+1} = \mathrm{Retr}_r \left({\bf x}_{k} - \,\mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla\mathrm{RQ} ({\bf x}_k) \right), \\ \end{split} \end{equation} which for eigenvalue computations can be viewed as a Riemannian generalization\footnote{The search direction~$\mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla\mathrm{RQ} ({\bf x}_k)$ is usually additionally multiplied by a constant $\tau_k$ found from the line search procedure $\mathrm{RQ}({\bf x}_k-\tau_k \mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla\mathrm{RQ} ({\bf x}_k))\to\min_{\tau_k}$. This ensures convergence in the presence of the nonlinear mapping $\mathrm{Retr}_r$.} of the preconditioned inverse iteration (PINVIT).
Indeed, one iteration of the classical PINVIT reads \[ {\bf x}_{k+1} = {\bf x}_{k} - \mathbf{B}^{-1} \mathbf{r}_k, \] where $\mathbf{r}_k = {\bf H} {\bf x}_k - \mathrm{RQ} ({\bf x}_k)\, {\bf x}_k$ denotes the residual, which is proportional to the gradient of the Rayleigh quotient: \[ \nabla \mathrm{RQ} ({\bf x}) = \frac{2}{\left<{\bf x}, {\bf x} \right>} \left({\bf H} {\bf x} - \mathrm{RQ} ({\bf x})\, {\bf x} \right). \] Note that, to avoid growth of $\|{\bf x}_{k}\|$, an additional normalization ${\bf x}_{k}:={\bf x}_{k}/\|{\bf x}_{k}\|$ is performed after each iteration to ensure $\left<{\bf x}_k, {\bf x}_k \right>=1$. We could have restricted ourselves to PINVIT~\eqref{eq:riem_pinvit}, but to get faster convergence we utilize a superior method --- locally optimal preconditioned conjugate gradients (LOPCG) and its block version (LOBPCG) for calculating several eigenvalues (see Sec.~\ref{sec:proposed}). To our knowledge, a Riemannian version of LOPCG has not been considered in the literature, so we provide it here. According to the classical LOPCG, the search direction $\mathbf{p}_k$ is a linear combination of the preconditioned gradient and $\mathbf{p}_{k-1}$. In the Riemannian setting, instead of the preconditioned gradient $\mathbf{B}^{-1} \nabla\mathrm{RQ} ({\bf x}_k)$ we consider its projected analog~$\mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla\mathrm{RQ} ({\bf x}_k)\in T_{{\bf x}_k} \mathcal{M}_r$. However, the problem is that~$\mathbf{p}_{k-1}\not\in T_{{\bf x}_k} \mathcal{M}_r$, so similarly to~\cite{v-matcompl-2013} we use another important concept from differential geometry --- vector transport, which is a mapping from $T_{{\bf x}_{k-1}}\mathcal{M}_r$ to $T_{{\bf x}_{k}}\mathcal{M}_r$ satisfying certain properties~\cite{absil}.
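For illustration, the classical (non-Riemannian) PINVIT recursion above can be sketched densely in a few lines of NumPy; this is a toy version without any TT structure, and the diagonal preconditioner and the test matrix are illustrative assumptions:

```python
import numpy as np

def pinvit(H, B_inv, x0, iters=300):
    # Classical PINVIT:
    #   x_{k+1} = x_k - B^{-1} (H x_k - RQ(x_k) x_k),
    # followed by normalization so that <x, x> = 1.
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        rq = x @ H @ x               # Rayleigh quotient, since ||x|| = 1
        residual = H @ x - rq * x    # proportional to grad RQ(x)
        x = x - B_inv @ residual
        x = x / np.linalg.norm(x)
    return x @ H @ x, x
```

With a nearly diagonal symmetric test matrix and the Jacobi preconditioner $\mathbf{B}=\mathrm{diag}({\bf H})$, the returned Rayleigh quotient approaches the smallest eigenvalue.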
Since $\mathcal{M}_r$ is an embedded submanifold of $\mathbb{R}^{n^d}$, the orthogonal projection from~$T_{{\bf x}_{k-1}}\mathcal{M}_r$ to $T_{{\bf x}_{k}}\mathcal{M}_r$ can be used as a vector transport~\cite{absil}, so that \begin{equation}\label{eq:pk1} \mathbf{p}_{k+1} = c_1 \,\mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla \mathrm{RQ} ({\bf x}_k) + c_2 \,\mathsf{P}_{{\bf x}_k} \mathbf{p}_{k}, \end{equation} and \begin{equation}\label{eq:lopcg_riem} {\bf x}_{k+1} = \mathrm{Retr}_r \left( {\bf x}_k + \mathbf{p}_{k+1} \right), \end{equation} where $c_1, c_2$ are constants found from \begin{equation}\label{eq:c1c2} (c_1,c_2) = \argmin_{(\zeta_1,\zeta_2)} \mathrm{RQ} ({\bf x}_k + \zeta_1 \,\mathsf{P}_{{\bf x}_k} \mathbf{B}^{-1} \nabla \mathrm{RQ} ({\bf x}_k) + \zeta_2 \,\mathsf{P}_{{\bf x}_k} \mathbf{p}_{k}). \end{equation} Note that we have omitted $\mathrm{Retr}_r$ in the optimization procedure, which allows us to solve~\eqref{eq:c1c2} exactly. Although in numerical computations we have never encountered an increase of the functional with $\mathrm{Retr}_r$ omitted, an additional line-search procedure could be introduced to ensure convergence. \subsection[Complexity reduction for computations on Riemannian manifold]{Complexity reduction for computations on $\mathcal{M}_r$} One of the key benefits of Riemannian optimization on $\mathcal{M}_r$ is that it allows one to avoid rank growth. This is particularly important in low-rank tensor computations due to the strong rank dependence of tensor methods, which is sometimes called the curse of the rank. The crucial property that allows for a significant speed-up of computations in the Riemannian approach is that any vector from a tangent space of $\mathcal{M}_r$ has TT-ranks at most $2r$. Since the tangent space is a linear space, a linear combination of any number of vectors from one tangent space also has rank at most $2r$.
As a result, $\mathbf{p}_{k+1}$ in \eqref{eq:pk1} has rank at most $2r$, and once $\mathbf{p}_{k+1}$ is computed, the evaluation of~\eqref{eq:lopcg_riem} is inexpensive. By contrast, if we omit $\mathsf{P}_{{\bf x}_k}$ in~\eqref{eq:lopcg_riem}, i.e. no Riemannian optimization is used, the complexity increases since matrix-vector multiplications considerably increase TT-ranks. Moreover, one can show that finding $c_1$ and $c_2$ in~\eqref{eq:c1c2} requires computing scalar products of TT-tensors. The calculation of the scalar product of two vectors belonging to the same tangent space is cheaper than that of two general TT-tensors of the same rank. We provide details on the implementation of Riemannian optimization for TT-manifolds in Sec.~\ref{sec:impl}, after a general description of the proposed method. \section{The proposed method} \label{sec:proposed} The main goal of the paper is the problem of finding several eigenvalues and the corresponding eigenvectors. For convenience we rewrite~\eqref{eq:main} in the block form \[ {\bf H} \mathbf{X} = \mathbf{X}\mathbf{\Lambda}, \quad \mathbf{X} = [{\bf x}^{(1)},\dots,{\bf x}^{(b)}], \quad \mathbf{\Lambda} = \mathrm{diag}(E^{(1)},\dots,E^{(b)}), \] additionally assuming $E^{(1)}\leq E^{(2)} \leq \dots \leq E^{(b)}\not=E^{(b+1)}$. Then the problem under the TT-rank constraint on ${\bf x}^{(\alpha)}$, $\alpha=1,\dots,b$, can be reformulated as trace minimization: \begin{equation} \label{eq:lobpcg_riem_problem_formulation} \begin{aligned} & \underset{\mathbf{X}\in \mathbb{R}^{n^d\times b}}{\text{minimize}} & & \Tr(\mathbf{X}^\intercal {\bf H} \mathbf{X}) \\ & \text{subject to} & & \mathbf{X}^\intercal \mathbf{X} = \mathbf{I}_b\\ & & & {\bf x}^{(i)} \in \mathcal{M}_{r}, \; i = 1, \ldots, b, \end{aligned} \end{equation} where $\Tr(\cdot)$ denotes the trace of a matrix.
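Without the TT-rank constraint, \eqref{eq:lobpcg_riem_problem_formulation} is the standard block Rayleigh quotient minimization, whose minimum over orthonormal $\mathbf{X}$ equals the sum of the $b$ smallest eigenvalues (the Ky Fan principle). A dense NumPy sanity check of this fact, at toy sizes and for illustration only:

```python
import numpy as np

def block_trace(H, X):
    # Block Rayleigh quotient Tr(X^T H X) for X with orthonormal columns.
    return np.trace(X.T @ H @ X)

rng = np.random.default_rng(2)
n, b = 12, 3
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # symmetric test matrix

w, V = np.linalg.eigh(H)               # eigenvalues in ascending order
optimal = block_trace(H, V[:, :b])     # eigenvectors of the b smallest
# Any other orthonormal b-frame gives a trace that is at least as large.
Q, _ = np.linalg.qr(rng.standard_normal((n, b)))
```

The proposed method keeps this objective but restricts each column to $\mathcal{M}_r$, which is what makes the problem nonconvex in the TT parameters.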
\subsection{Speeding up computation of the smallest eigenvalue} As a starting point for generalizing~\eqref{eq:lopcg_riem} to the block case, assume first that we aim at finding only the smallest eigenvalue. The iteration~\eqref{eq:lopcg_riem} can be improved by performing additional subspace acceleration using more vectors in the tangent space of $\mathcal{M}_r$ at ${\bf x}_{k}^{(1)}$. In particular, we suggest using the LOBPCG method and projecting all arising vectors onto the tangent space\footnote{Note that $\mathsf{P}_{{\bf x}_k^{(1)}} {\bf x}_k^{(1)} = {\bf x}_k^{(1)}$.} at ${\bf x}_k^{(1)}$: \begin{equation} \label{eq:p1_lobpcg_riem} \begin{split} & \mathbf{R}_k = {\bf H} \mathbf{X}_k - \mathbf{X}_k\mathbf{\Lambda}_k, \quad \mathbf{\Lambda}_k = \mathrm{diag}\left(\mathrm{RQ}\left({\bf x}_k^{(1)}\right),\dots, \mathrm{RQ}\left({\bf x}_k^{(b)}\right)\right), \\ & \mathbf{P}_{k+1} = \mathsf{P}_{{\bf x}^{(1)}_k} \mathbf{B}^{-1} \mathbf{R}_k \mathbf{C}_2 + \mathsf{P}_{{\bf x}^{(1)}_k} \mathbf{P}_{k}\mathbf{C}_3,\\ & \mathbf{X}_{k+1} =\mathrm{Retr}_r \left(\mathsf{P}_{{\bf x}_k^{(1)}} \mathbf{X}_k \mathbf{C}_1 + \mathbf{P}_{k+1} \right), \end{split} \end{equation} where the truncation operator $\mathrm{Retr}_r$ is applied independently to each column of the matrix and $\mathbf{C}_i\in\mathbb{R}^{b \times b}$, $i=1,2,3$, are matrices of coefficients to be found from trace minimization.
In particular, introducing the notation \[ \mathbf{V}_k = \mathsf{P}_{{\bf x}_k^{(1)}}\begin{bmatrix}\mathbf{X}_{k} &\mathbf{R}_k &\mathbf{P}_{k}\end{bmatrix}\in \mathbb{R}^{n^d\times 3b}, \quad \mathbf{C} = \begin{bmatrix} \mathbf{C}_1 \\ \mathbf{C}_2 \\ \mathbf{C}_3 \end{bmatrix} \in \mathbb{R}^{3 b \times b}, \] we have \begin{equation} \label{eq:naive_coef_problem_lobpcg} \begin{aligned} & \underset{\mathbf{C}}{\text{minimize}} & & \Tr(\mathbf{C}^\intercal(\mathbf{V}_k^\intercal {\bf H} \mathbf{V}_k)\mathbf{C}), \\ & \text{subject to} & & \mathbf{C}^\intercal(\mathbf{V}_k^\intercal\mathbf{V}_k)\mathbf{C} = \mathbf{I}_b, \end{aligned} \end{equation} which reduces to the classical generalized eigenvalue problem of finding the $b$ smallest eigenvalues $\theta_1,\dots,\theta_b$ and corresponding eigenvectors of the matrix pencil $\mathbf{V}_k^\intercal {\bf H} \mathbf{V}_k - \theta\, \mathbf{V}_k^\intercal\mathbf{V}_k$ of size $3b\times 3b$. Since we project onto a single tangent space, there is no rank growth even for large $b$, and hence the application of~$\mathrm{Retr}_r$ is inexpensive. Moreover, we expect that to achieve a given accuracy of $E^{(1)}$, the number of iterations for~\eqref{eq:p1_lobpcg_riem} is smaller than for~\eqref{eq:lopcg_riem}, thanks to the additional subspace acceleration. \subsection{Riemannian alternating projection method} Similarly to~\eqref{eq:p1_lobpcg_riem}, all column vectors of $[\mathbf{X}_{k},\mathbf{B}^{-1} \mathbf{R}_k,\mathbf{P}_{k}]$ can be projected onto the tangent space $T_{{\bf x}^{(t_k)}_k}\mathcal{M}_r$ at ${\bf x}^{(t_k)}_k$ for some integer $1\leq t_k\leq b$, not necessarily equal to $1$. However, there is no evidence that all eigenvectors except for the $t_k$-th one can be accurately approximated using $T_{{\bf x}^{(t_k)}_k}\mathcal{M}_r$.
Therefore, we propose to use $T_{{\bf x}^{(t_k)}_k}\mathcal{M}_r$ to search for a \emph{correction} to the already found approximations $\mathbf{X}_k$ as follows: \begin{equation} \label{eq:lobpcg_riem} \begin{split} & \mathbf{R}_k = {\bf H} \mathbf{X}_k - \mathbf{X}_k\mathbf{\Lambda}_k, \\ & \mathbf{P}_{k+1} = \mathsf{P}_{{\bf x}^{(t_k)}_k} \mathbf{B}^{-1} \mathbf{R}_k \mathbf{C}_2 + \mathsf{P}_{{\bf x}^{(t_k)}_k} \mathbf{P}_{k}\mathbf{C}_3,\\ & \mathbf{X}_{k+1} =\mathrm{Retr}_r \left( \textcolor{blue}{\mathbf{X}_k\,\text{diag}(\mathbf{c})} + \mathsf{P}_{{\bf x}^{(t_k)}_k} \mathbf{X}_k \mathbf{C}_1 + \mathbf{P}_{k+1} \right), \end{split} \end{equation} where $\mathbf{c}\in\mathbb{R}^{b}$ and $\text{diag}(\mathbf{c})$ denotes the diagonal $b\times b$ matrix with the vector $\mathbf{c}$ on the diagonal. By forcing $\mathbf{X}_k$ to be multiplied by a diagonal matrix $\text{diag}(\mathbf{c})$ instead of a general square matrix, we reduce the computational cost of the method. Indeed, in this case we avoid calculating linear combinations of the columns of $\mathbf{X}_k$, which do not belong to the same tangent space.
Note that we may also write \[ \mathbf{X}_{k+1} =\mathbf{R}etr_r \left( \mathbf{X}_k\, \text{diag}(\mathbf{c}) + \mathbf{V} \mathbf{C} \right), \] where \[ \mathbf{V}_k = \mathsf{P}_{{\bf x}^{(t_k)}_k}\left[ \mathbf{X}_{k}, \mathbf{B}^{-1} \mathbf{R}_k, \mathbf{P}_{k}\right], \] and the matrices of coefficients $$ \mathbf{C} = \begin{bmatrix} \mathbf{C}_1 \\ \mathbf{C}_2 \\ \mathbf{C}_3 \lambdad{bmatrix},\quad \mathbf{C}_j\in\mathbb{R}^{b\times b}, \quad j=1,2,3 $$ and $\mathbf{c}\in\mathbb{R}^{b}$ are to be found from the trace minimization problem \begin{equation} \label{eq:coefficients-problem0} \begin{aligned} & \underset{\mathbf{c},\, \mathbf{C}}{\text{minimize}} & & \Tr\left[(\mathbf{X}_k\, \text{diag}(\mathbf{c}) + \mathbf{V}_k \mathbf{C})^\intercal {\bf H} (\mathbf{X}_k\, \text{diag}(\mathbf{c}) + \mathbf{V}_k \mathbf{C}) \right]\\ & \text{subject to} & & (\mathbf{X}_k\, \text{diag}(\mathbf{c}) + \mathbf{V}_k \mathbf{C})^\intercal (\mathbf{X}_k \, \text{diag}(\mathbf{c}) + \mathbf{V}_k \mathbf{C}) = \mathbf{I}_{b} \lambdad{aligned} \lambdad{equation} or equivalently \begin{equation} \label{eq:coefficients-problem} \begin{aligned} & \underset{\mathbf{c},\, \mathbf{C}}{\text{minimize}} & & \Tr\left( \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \lambdad{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}_k^\intercal {\bf H} \mathbf{X}_k & \mathbf{X}_k^\intercal {\bf H} \mathbf{V}_k \\ \mathbf{V}_k^\intercal {\bf H} \mathbf{X}_k & \mathbf{V}_k^\intercal {\bf H} \mathbf{V}_k \lambdad{bmatrix} \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \lambdad{bmatrix} \right)\\ & \text{subject to} & & \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \lambdad{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}_k^\intercal \mathbf{X}_k & \mathbf{X}_k^\intercal \mathbf{V}_k \\ \mathbf{V}_k^\intercal \mathbf{X}_k & \mathbf{V}_k^\intercal \mathbf{V}_k \lambdad{bmatrix} \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \lambdad{bmatrix} = \mathbf{I}_{b} 
\end{aligned} \end{equation} which, because of the diagonal constraint, does not boil down to a standard generalized eigenvalue problem. Since the description of its solution is rather technical, we postpone it to Section~\ref{sec:coef}. The convergence of the proposed method depends on the choice of the integer sequence $\{t_1,t_2,\dots, t_n, \dots\}$. We call the strategy that chooses the tangent space on each iteration the \emph{tangent space schedule}. Note that we could have found all eigenvalues one by one, i.e. $t_1=\dots = t_{k_1^{\delta}}=1$, then $t_{k_1^{\delta}+1}=\dots = t_{k_1^{\delta} + k_2^{\delta}}=2$ and so on, where $k_\alpha^{\delta}$ is the number of iterations for $\mathbf{R}Q({\bf x}^{(\alpha)})$ to achieve accuracy $\delta$. This strategy, however, requires a large number of iterations when the number of eigenvalues~$b$ is large. Therefore, we utilize strategies that do not require every tangent space to be chosen at least once. We found that although the random choice (discrete uniform distribution) of $t_i$, $1\leq t_i \leq b$, already ensures convergence in most of the cases, the strategy \begin{equation} t_k = \argmax_{i} \left|\frac{ \mathbf{R}Q ({\bf x}_{k-1}^{(i)}) - \mathbf{R}Q ({\bf x}_k^{(i)})}{\mathbf{R}Q ({\bf x}_k^{(i)})}\right| \label{eq:schedule} \end{equation} in which we choose the eigenvalue with the currently slowest convergence, yields more reliable results for larger numbers of eigenvalues. Before running the adaptive strategy for $t_k$, we choose $t_1=\dots = t_{k_0}=1$ until the convergence criterion for $\mathbf{R}Q({\bf x}^{(1)})$ is fulfilled. This corresponds to~\eqref{eq:p1_lobpcg_riem} instead of~\eqref{eq:lobpcg_riem} and hence speeds up computations. The implementation details of the iteration~\eqref{eq:lobpcg_riem} will be given in the next section, and the algorithm for the computation of the coefficients will be described later in Section~\ref{sec:coef}.
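The ``argmax'' schedule~\eqref{eq:schedule} is straightforward to implement; a minimal sketch (the function name is ours):

```python
import numpy as np

def choose_tangent_index(rq_prev, rq_curr):
    """Schedule (eq:schedule): pick the index whose Rayleigh quotient changed
    the most, relative to its current value, over the last iteration -- i.e.
    the eigenvalue that is currently converging the slowest."""
    rq_prev, rq_curr = np.asarray(rq_prev), np.asarray(rq_curr)
    rel_change = np.abs((rq_prev - rq_curr) / rq_curr)
    return int(np.argmax(rel_change))
```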
\section{Implementation of tensor operations} \label{sec:impl} In this section we provide a brief description of the implementation details for Riemannian optimization on $\mathcal{M}_r$ with complexity estimates, and summarize the algorithm. \subsection{Representation of tangent space vectors} We start with the description of vectors from tangent spaces, as they are the main objects we are working with. Let ${\bf x}$ be given by its TT decomposition~\eqref{eq:tt-repr}. It is known~\cite{osel-tt-2011} that by a sequence of QR decompositions of the cores, \eqref{eq:tt-repr} can be represented both as \begin{equation}\label{eq:uqr} {\mathcal X}_{i_1\dots i_d} = U_1(i_1) U_2 (i_2) U_3 (i_3) \dots U_d (i_d), \end{equation} where the cores $\mathcal{U}_{k}=\{U_k(i_k)\}_{i_k=1}^n\in\mathbb{R}^{r_{k-1}\times n\times r_{k}}$, $k=1,\dots,d-1$, reshaped into matrices $\mathscr{M}^{\mathsf{L}}_k(\mathcal{U}_{k})$ of size $r_{k-1} n\times r_{k}$, have orthonormal columns, $r_1=\dots = r_{d-1}=r$ and $r_0=r_d=1$. Here $\mathscr{M}_k^{\mathsf{L}}:\mathbb{R}^{r_{k-1}\times n\times r_{k}}\to \mathbb{R}^{r_{k-1} n\times r_{k}}$ denotes the matricization operator that maps the first two indices of the considered three-dimensional array into a single one. Similarly, we may have \begin{equation}\label{eq:vqr} {\mathcal X}_{i_1\dots i_d} = V_1(i_1) V_2 (i_2) V_3 (i_3) \dots V_d (i_d), \end{equation} where the cores $\mathcal{V}_{k}=\{V_k(i_k)\}_{i_k=1}^n\in\mathbb{R}^{r_{k-1}\times n\times r_{k}}$, $k=2,\dots,d$, reshaped into matrices $\mathscr{M}^{\mathsf{R}}(\mathcal{V}_{k})$ of size $r_{k-1} \times nr_{k}$, have orthonormal rows. Representations~\eqref{eq:uqr} and~\eqref{eq:vqr} are called the left- and right-orthogonalization of the TT-representation~\eqref{eq:tt-repr}, respectively; each can be performed with $\mathcal{O}(dnr^3)$ complexity. Using these notations, one possible way to parametrise the tangent space $T_{{\bf x}}\mathcal{M}_r$ is as follows.
Any $\xi = \mathrm{vec}(\Xi)\in T_{\bf x} \mathcal{M}_r$ can be represented as \begin{equation}\label{eq:tangent_param} \begin{split} \Xi_{i_1\dots i_d} = &\delta G_1(i_1) V_2 (i_2) V_3 (i_3) \dots V_d (i_d)\ + \\ &U_1(i_1) \delta G_2 (i_2) V_3 (i_3) \dots V_d (i_d)\ + \dots + \\ &U_1(i_1) U_2 (i_2) U_3 (i_3) \dots \delta G_d (i_d), \end{split} \end{equation} where the cores $\delta\mathcal{G}_{k}=\{\delta G_k(i_k)\}_{i_k=1}^n\in\mathbb{R}^{r_{k-1}\times n\times r_{k}}$ additionally satisfy the following gauge conditions to ensure uniqueness\footnote{To see that the tangent space is overparametrized by~\eqref{eq:tangent_param}, note that the number of parameters in all the $\delta G_k(i_k)$ is $(d-2)nr^2 + 2nr$, while the dimension of the manifold~$\mathcal{M}_r$ and hence of the tangent space is smaller: $\mathrm{dim}(T_{\bf x} \mathcal{M}_r)=(d-2)nr^2 + 2nr - (d-1)r^2$~\cite{holtz-manifolds-fixed-rank-2011}.} of the representation: \begin{equation}\label{eq:gauge} \left(\mathscr{M}^{\mathsf{L}}\left(\delta\mathcal{G}_{k} \right)\right)^\intercal \mathscr{M}^{\mathsf{L}}\left(\mathcal{U}_{k} \right) = \mathbf{0}, \quad k=1,\dots,d-1. \end{equation} To show that any vector from the tangent space has TT-rank not greater than $2r$, we note that for~\eqref{eq:tangent_param} the following explicit formula holds: \[ \Xi_{i_1\dots i_d} = S_1(i_1) S_2(i_2) \dots S_d (i_d), \] where for $k=2,\dots,d-1$ \[ S_1(i_1) = \begin{bmatrix} \delta G_1(i_1) & U_1(i_1) \end{bmatrix}, \, S_k(i_k) = \begin{bmatrix} V_k(i_k) & \\ \delta G_k(i_k) & U_k (i_k) \end{bmatrix}, \, S_d(i_d) = \begin{bmatrix} V_d(i_d) \\ \delta G_d(i_d) \end{bmatrix}, \] which can be verified by direct multiplication of all $S_k (i_k)$.
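This block-core identity is purely algebraic and needs no gauge or orthogonality conditions, so it can be checked numerically on random cores (a numpy sketch; all names are ours):

```python
import numpy as np
import itertools

rng = np.random.default_rng(0)
d, n, r = 4, 3, 2
shapes = [(1 if k == 0 else r, n, 1 if k == d - 1 else r) for k in range(d)]
U  = [rng.standard_normal(s) for s in shapes]   # stand-ins for the cores U_k
V  = [rng.standard_normal(s) for s in shapes]   # stand-ins for the cores V_k
dG = [rng.standard_normal(s) for s in shapes]   # stand-ins for delta G_k

def entry(idx, cores):
    """Evaluate a TT element: product of the core slices along a multi-index."""
    m = np.eye(1)
    for k, i in enumerate(idx):
        m = m @ cores[k][:, i, :]
    return m[0, 0]

def block_cores():
    """Assemble S_1 = [dG_1, U_1], S_k = [[V_k, 0], [dG_k, U_k]], S_d = [V_d; dG_d]."""
    S = [np.concatenate([dG[0], U[0]], axis=2)]
    for k in range(1, d - 1):
        top = np.concatenate([V[k], np.zeros_like(V[k])], axis=2)
        bot = np.concatenate([dG[k], U[k]], axis=2)
        S.append(np.concatenate([top, bot], axis=0))
    S.append(np.concatenate([V[d - 1], dG[d - 1]], axis=0))
    return S

S = block_cores()
max_err = max(
    abs(sum(entry(idx, U[:k] + [dG[k]] + V[k + 1:]) for k in range(d))
        - entry(idx, S))
    for idx in itertools.product(range(n), repeat=d)
)
```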
From~\eqref{eq:tangent_param} it is also easy to see that if $\xi^{(1)}, \xi^{(2)}\in T_{\bf x} \mathcal{M}_r$ are given by $\delta G^{(1)}_k(i_k)$ and $\delta G^{(2)}_k(i_k)$, $k=1,\dots,d$, respectively, then their linear combination $(\alpha\xi^{(1)} + \beta\xi^{(2)})\in T_{\bf x} \mathcal{M}_r$ is given by $(\alpha\, \delta G^{(1)}_k(i_k) + \beta\, \delta G^{(2)}_k(i_k))$, $\alpha,\beta\in\mathbb{R}$. \subsection{Projection to a tangent space}\label{sec:projection} Representation~\eqref{eq:tangent_param} allows one to obtain explicit formulas for the $\delta G_k(i_k)$ of $\xi = \mathsf{P}_{T_{\bf x} \mathcal{M}_r}{\bf z}$ without inversions of possibly ill-conditioned matrices~\cite{ksv-manprec-2016}: \[ \begin{split} &\mathrm{vec}(\delta\mathcal{G}_k)= \left(\mathbf{I}_{r_k}\otimes (\mathbf{I}_{nr_{k-1}} - \mathscr{M}^{\mathsf{L}}\left(\mathcal{U}_{k} \right) \mathscr{M}^{\mathsf{L}}\left(\mathcal{U}_{k} \right)^\intercal)\right) (\mathbf{X}_{>k}^\intercal \otimes I_n \otimes \mathbf{X}_{<k}^\intercal)\, {\bf z},\\ &\mathrm{vec}(\delta\mathcal{G}_d)= (I_n \otimes \mathbf{X}_{<d}^\intercal)\, {\bf z}, \end{split} \] where $k=1,\dots, d-1$ and \[ \begin{split} &\mathbf{X}_{<k} = [U_1(i_1)\dots U_{k-1}(i_{k-1})]\in \mathbb{R}^{n^{k-1} \times r}, \\ &\mathbf{X}_{>k} = [V_{k+1}(i_{k+1})\dots V_{d}(i_d)]\in \mathbb{R}^{n^{d-k} \times r}. \end{split} \] The matrices $\mathbf{X}_{<k}$, $\mathbf{X}_{>k}$ are never formed explicitly and are used here only for ease of notation. The complexity of projecting a vector~${\bf z}$ given by its TT decomposition with $\text{TT-rank}({\bf z})= r_{\bf z}$ is $\mathcal{O}(dnr r_{\bf z}^2)$. One of the most time-consuming operations arising in the algorithm is the projection to a tangent space of a matrix-vector product. Suppose that both ${\bf H}$ and ${\bf y}$ are given in the TT format with TT-ranks $R$ and $r_{\bf y}$.
Then ${\bf z} = {\bf H}{\bf y}$ can also be represented in the TT format with the TT-rank bounded from above by $r_{\bf y} R$~\cite{osel-tt-2011}, since \[ \begin{split} &{\mathcal Z}_{i_1,\dots,i_d} = Z_1(i_1)\dots Z_d(i_d), \\ &Z_k(i_k) = \sum_{j_k=1}^n H_k(i_k, j_k) \otimes G_k(j_k) \in \mathbb{R}^{R_{k-1}(r_{\bf y})_{k-1}\times R_{k}(r_{\bf y})_{k}}. \end{split} \] The calculation of $Z_k(i_k)$ can be done with $\mathcal{O}(n^2 R^2 r_{\bf y}^2)$ complexity. Once $Z_k(i_k)$, $k=1,\dots,d$, are calculated, the complexity of finding the TT representation of $\mathsf{P}_{T_{\bf x} \mathcal{M}_r} {\bf z}$ is $\mathcal{O}(dnr r_{\bf z}^2) = \mathcal{O}(dnr r_{\bf y}^2 R^2)$ as $r_{\bf z}= r_{\bf y} R$. This complexity can be additionally reduced down to $\mathcal{O}(dn^2 r r_{\bf y} R^2)$ if we take care of the Kronecker-product structure of $Z_k(i_k)$. Moreover, since in this case the $Z_k(i_k)$ are not computed explicitly, the storage requirements are also smaller. \subsection{Computation of inner products} \label{sec:inner} The next thing that arises in the method, in particular in~\eqref{eq:coefficients-problem}, is the computation of the Gram matrices $\mathbf{V}^\intercal \mathbf{V}$, $\mathbf{V}^\intercal {\bf H} \mathbf{V}$, ${\bf X}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf H} \mathbf{X}$; as we will see in Sec.~\ref{sec:coef}, the solution to~\eqref{eq:coefficients-problem} involves only $\mathrm{diag}({\bf X}^\intercal{\bf H} {\bf X})$ instead of the full ${\bf X}^\intercal{\bf H} {\bf X}$. Here $\mathbf{V}$ consists of columns that belong to a single tangent space $T_{\bf x} \mathcal{M}_r$, while the columns of ${\bf X}$ are general tensors of TT-rank $r$. Let us first discuss the computation of $\mathbf{V}^\intercal \mathbf{V}$.
Let ${\bf v}^{(\alpha)}$ be the $\alpha$-th column of~$\mathbf{V}$, $\alpha=1,\dots,3b$, given by the cores $\delta G_k (i_k) \equiv \delta G_k^{(\alpha)} (i_k)$ from~\eqref{eq:tangent_param}. Then, thanks to the gauge conditions~\eqref{eq:gauge}, the left orthogonality of $\mathcal{U}_k$ and the right orthogonality of $\mathcal{V}_k$, we have \begin{equation}\label{eq:inner_tangent} \left< {\bf v}^{(\alpha)}, {\bf v}^{(\beta)} \right> = \sum_{k=1}^d \left<\delta\mathcal{G}_k^{(\alpha)}, \delta\mathcal{G}_k^{(\beta)} \right>_F, \end{equation} where $\left<\mathcal{A}, \mathcal{B} \right>_F \equiv \mathrm{vec}(\mathcal{A})^\intercal \, \mathrm{vec}(\mathcal{B})$ is the Frobenius inner product. Therefore, the complexity of computing $\mathbf{V}^\intercal \mathbf{V}$ is $\mathcal{O}(b^2 dnr^2)$. The computation of~$\mathbf{V}^\intercal {\bf H} \mathbf{V}$ is done using the following trick. Note that all columns of $\mathbf{V}$ belong to a single tangent space $T_{\bf x} \mathcal{M}_r$. Hence, \begin{equation}\label{eq:trick_proj} \mathbf{V}^\intercal {\bf H} \mathbf{V} = (\mathsf{P}_{T_{\bf x} \mathcal{M}_r}\mathbf{V})^\intercal {\bf H}\mathbf{V} = \mathbf{V}^\intercal (\mathsf{P}_{T_{\bf x} \mathcal{M}_r}{\bf H} \mathbf{V}). \end{equation} As a result, we first compute $\mathsf{P}_{T_{\bf x} \mathcal{M}_r}{\bf H} \mathbf{V}$ using the procedure described in Section~\ref{sec:projection}, and then compute inner products of vectors from the same tangent space as in~\eqref{eq:inner_tangent}. Thus, the complexity of finding $\mathbf{V}^\intercal {\bf H} \mathbf{V}$ is $\mathcal{O}(b dn^2 r^2 R^2 + b^2 dn r^2 )$. Similarly to~\eqref{eq:trick_proj} we have $\mathbf{V}^\intercal {\bf H} \mathbf{X} = \mathbf{V}^\intercal (\mathsf{P}_{T_{\bf x} \mathcal{M}_r}{\bf H} \mathbf{X})$ and $\mathbf{V}^\intercal \mathbf{X} = \mathbf{V}^\intercal (\mathsf{P}_{T_{\bf x} \mathcal{M}_r} \mathbf{X})$.
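The Kronecker structure of the cores $Z_k(i_k)$ of ${\bf z}={\bf H}{\bf y}$, used in the projected matrix-vector products above, can be verified directly on small random data (a numpy sketch; all helper names are ours):

```python
import numpy as np

def matvec_cores(Hcores, ycores):
    """Z_k(i_k) = sum_j H_k(i_k, j) (x) G_k(j); Hcores[k] has shape
    (R0, n, n, R1) with (row, column) mode indices, ycores[k] has (r0, n, r1)."""
    Z = []
    for Hc, Gc in zip(Hcores, ycores):
        R0, n, _, R1 = Hc.shape
        r0, _, r1 = Gc.shape
        Z.append(np.einsum('aijb,cjd->acibd', Hc, Gc).reshape(R0 * r0, n, R1 * r1))
    return Z

def tt_vec_full(cores):
    """Contract TT cores of a vector into the full (flattened) tensor."""
    x = cores[0]
    for c in cores[1:]:
        x = np.einsum('...a,aib->...ib', x, c)
    return x.reshape(-1)

def tt_mat_full(cores):
    """Contract TT-matrix cores into the full n^d x n^d matrix."""
    m = cores[0]
    for c in cores[1:]:
        m = np.einsum('...a,aijb->...ijb', m, c)
    m = m.reshape(m.shape[1:-1])
    d, n = len(cores), m.shape[0]
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    return m.transpose(perm).reshape(n ** d, n ** d)
```

By the mixed-product property of the Kronecker product, contracting the cores returned by `matvec_cores` reproduces exactly the full matrix-vector product.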
The computation of ${\bf X}^\intercal {\bf X}$ is a standard procedure, which consists of $b^2$ inner products of TT tensors. Since the complexity of the inner product of two tensors of TT-rank $r$ is $\mathcal{O}(dnr^3)$~\cite{osel-tt-2011}, computing all $b^2$ inner products requires~$\mathcal{O}(b^2 dnr^3)$ floating point operations. Finally, the computation of ${{\bf x}^{(\alpha)}}^\intercal {\bf H} {{\bf x}^{(\alpha)}}$, $\alpha=1,\dots,b$, arising in $\mathrm{diag}({\bf X}^\intercal{\bf H} {\bf X})$ costs $\mathcal{O}(nd R r^2 (r+nR))$ per vector, so that the computation of $\mathrm{diag}({\bf X}^\intercal{\bf H} {\bf X})$ has complexity $\mathcal{O}(b nd R r^2 (r+nR))$. \subsection{Retraction computation} The final thing that remains to be computed before proceeding to the next iteration, once the coefficients $\mathbf{c}, \mathbf{C}$ and $\mathbf{V}_k$ have been found, is the retraction of the obtained vectors $\mathbf{X}_{k}\,\mathrm{diag}(\mathbf{c}) + \mathbf{V}_k \mathbf{C}$ to the manifold $\mathcal{M}_r$. All columns of $\mathbf{V}_k$ are from the same tangent space at ${\bf x}_k^{(t_k)}$, so the correction $(\mathbf{V}_k \mathbf{C})[:,i]$ to each ${\bf x}_k^{(i)}$, $i=1,\dots,b$, is of rank at most $2r$. Therefore, ${\bf x}_k^{(i)} \mathbf{c}[i] + (\mathbf{V}_k \mathbf{C})[:,i]$ is of rank at most $2r$ for $i= t_k$ and $3r$ otherwise. The retraction is done by the TT-SVD algorithm~\cite{osel-tt-2011}, which consists of a sequence of QR and SVD decompositions applied to the unfolded tensor cores and has $\mathcal{O}(dnr^3)$ complexity. Note that the retraction is properly defined only for $i= t_k$, but we formally apply it to the other vectors as well, as the coefficients $\mathbf{c},\mathbf{C}$ were obtained to ensure a descent direction for all vectors. The descent direction is, however, chosen without regard to the retraction.
This could be accounted for by introducing an approximate line search, but numerical experiments showed that this is in general redundant and that $\mathbf{c},\mathbf{C}$ already provide a good enough approximation. \subsection{The algorithm description} Let us now discuss the iteration~\eqref{eq:lobpcg_riem} step-by-step. For simplicity, let us denote $\mathsf{P}_k \equiv \mathsf{P}_{{\bf x}^{(t_k)}_k}$. First, assuming we are given $\mathbf{P}_{k}$, the calculation of $\mathsf{P}_k \mathbf{P}_{k}$ is done with $\mathcal{O}(b dnr^3)$ complexity. To calculate the term $\mathsf{P}_k\mathbf{B}^{-1} \mathbf{R}_k$ we split it into two parts: \[ \mathsf{P}_k\mathbf{B}^{-1} \mathbf{R}_k = \mathsf{P}_k\mathbf{B}^{-1}{\bf H} \mathbf{X}_k -\mathsf{P}_k \mathbf{B}^{-1}\mathbf{X}_k\mathbf{\Lambda}_k. \] The term $\mathsf{P}_k \mathbf{B}^{-1}\mathbf{X}_k$ is a projected matrix-vector product, which is calculated as described in Section~\ref{sec:projection} and costs $\mathcal{O}(b dn^2 r^2 R^2)$ operations. The term $\mathsf{P}_k\mathbf{B}^{-1}{\bf H} \mathbf{X}_k$ is more difficult to compute, as it involves two sequential matrix-vector products. To deal with this term efficiently, we use the trick from~\cite{ksv-manprec-2016} and assume that \begin{equation}\label{eq:assump_prec} \mathbf{B}^{-1} = \mathbf{B}_1 + \dots + \mathbf{B}_{\rho_{\mathbf{B}}}, \end{equation} where $\mathbf{B}_i$, $i=1,\dots,\rho_{\mathbf{B}}$, are of TT-rank $1$. This trick helps thanks to the fact that the multiplication of a TT-matrix of TT-rank 1 by a general TT-matrix does not change the TT-rank of the latter. The assumption~\eqref{eq:assump_prec} holds for the preconditioner we use for the calculation of vibrational spectra of molecules, while for the spin chain computations no preconditioner is used.
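The rank-preservation property behind the splitting~\eqref{eq:assump_prec} -- multiplying by a TT-matrix of TT-rank $1$ does not enlarge the TT-ranks -- is visible directly at the level of cores (a numpy sketch on random data; all names are ours):

```python
import numpy as np

def mat_mat_cores(Bcores, Hcores):
    """Cores of the product B H of two TT-matrices: the ranks multiply, so if
    every core of B has unit ranks (shape (1, n, n, 1)) the product keeps the
    TT-ranks of H."""
    out = []
    for Bc, Hc in zip(Bcores, Hcores):
        b0, n, _, b1 = Bc.shape
        R0, _, _, R1 = Hc.shape
        c = np.einsum('aijb,cjkd->acikbd', Bc, Hc).reshape(b0 * R0, n, n, b1 * R1)
        out.append(c)
    return out

def tt_mat_full(cores):
    """Contract TT-matrix cores into the full n^d x n^d matrix (for checking)."""
    m = cores[0]
    for c in cores[1:]:
        m = np.einsum('...a,aijb->...ijb', m, c)
    m = m.reshape(m.shape[1:-1])
    d, n = len(cores), m.shape[0]
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    return m.transpose(perm).reshape(n ** d, n ** d)
```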
Thus, \begin{equation}\label{eq:trick_prec} \mathsf{P}_k\mathbf{B}^{-1}{\bf H} \mathbf{X}_k = \mathsf{P}_k\mathbf{B}_1{\bf H} \mathbf{X}_k + \dots + \mathsf{P}_k\mathbf{B}_{\rho_{\mathbf{B}}}{\bf H} \mathbf{X}_k \end{equation} and hence the complexity of computing~$\mathsf{P}_k\mathbf{B}^{-1}{\bf H} \mathbf{X}_k$ is $\mathcal{O}(b dn^2 r^2 R^2\rho_{\mathbf{B}})$. The truncation operation costs $\mathcal{O}(b dnr^3)$, which is negligible compared to the other operations. The overall algorithm is summarized in Algorithm~\ref{alg:riemannian_lobpcg}. \begin{algorithm}[t] \begin{algorithmic}[1] \Require TT-matrix ${\bf H}$, initial guess ${\bf X}_1 = [{\bf x}^{(1)}_1 \dots {\bf x}^{(b)}_1]$, where the ${\bf x}^{(i)}_1$ are TT-tensors, TT-rank $r$, convergence tolerance $\varepsilon$ \Ensure ${\bf X}_k = [{\bf x}^{(1)}_k \dots {\bf x}^{(b)}_k]$ \State Initialize $\mathbf{P}_1 = \mathbf{0}$ \State Set $t_1 = 1$ \For{$k=1, 2, \dots$ until converged} \If{$\|\mathsf{P}_{{\bf x}^{(1)}_k} \left({\bf H} {\bf x}_k^{(1)} -\mathbf{R}Q ({\bf x}_k^{(1)}) {\bf x}_k^{(1)} \right) \| > \varepsilon $} \State Set $t_k = 1$ \Else \State Choose $t_k$ randomly or according to~\eqref{eq:schedule} \EndIf \State Calculate $\mathsf{P}_k {\bf X}_k$ and $\mathsf{P}_k \mathbf{P}_k$ \Comment{$\mathsf{P}_k \equiv \mathsf{P}_{{\bf x}_k^{(t_k)}}$} \State Calculate $\mathbf{R}Q ({\bf x}_k^{(i)})$, $i=1,\dots,b$ \State Calculate $\mathsf{P}_k\mathbf{B}^{-1} \mathbf{R}_k$ using \eqref{eq:assump_prec} and \eqref{eq:trick_prec} \State Set $\mathbf{V}_k = \mathsf{P}_k \left [ \mathbf{X}_k, \mathbf{B}^{-1} \mathbf{R}_k, \mathbf{P}_k \right ]$ and calculate $\mathbf{V}_k^\intercal \mathbf{V}_k$, $\mathbf{V}_k^\intercal {\bf H} \mathbf{V}_k$, ${\bf X}_k^\intercal {\bf X}_k$, $\mathbf{V}_k^\intercal {\bf X}_k$, $\mathbf{V}_k^\intercal {\bf H} \mathbf{X}_k$, $\mathrm{diag}({\bf X}_k^\intercal{\bf H} {\bf X}_k)$ as described in
Sec.~\ref{sec:inner} \For {$i=1, \ldots, b$} \State ${\bf r}_k^{(i)} = {\bf H} {\bf x}_k^{(i)} - \mathbf{R}Q ({\bf x}_k^{(i)}) {\bf x}_k^{(i)}$ \Comment{the columns of $\mathbf{R}_k$} \EndFor \State $\mathbf{c}, \mathbf{C}$ = \texttt{find\_coefficients}($\mathbf{V}_k^\intercal \mathbf{V}_k$, $\mathbf{V}_k^\intercal {\bf H} \mathbf{V}_k$, ${\bf X}_k^\intercal {\bf X}_k$, $\mathbf{V}_k^\intercal {\bf X}_k$, $\mathbf{V}_k^\intercal {\bf H} \mathbf{X}_k$, $\mathrm{diag}({\bf X}_k^\intercal{\bf H} {\bf X}_k)$), see Alg.~\ref{alg:heuristic} \State $\mathbf{P}_{k+1} = [\mathsf{P}_k \mathbf{B}^{-1} \mathbf{R}_k, \mathsf{P}_k \mathbf{P}_k]\, \mathbf{C} [b:3b, :]$ \State Calculate ${\bf X}_{k+1} = \mathbf{X}_k\, \text{diag}(\mathbf{c}) + \mathsf{P}_k \mathbf{X}_k \mathbf{C}_1 + \mathbf{P}_{k+1}$ \State Calculate ${\bf X}_{k+1} \coloneqq \mathbf{R}etr_r ({\bf X}_{k+1})$ \EndFor \caption{Low-Rank Riemannian Alternating Projection LOBPCG. }\label{alg:riemannian_lobpcg} \end{algorithmic} \end{algorithm} \section{Trace minimization problem for coefficients} \label{sec:coef} Let us rewrite the problem~\eqref{eq:coefficients-problem}, omitting the index $k$ for simplicity: \begin{equation} \label{eq:coefficients-problem1} \begin{aligned} & \underset{\mathbf{c},\, \mathbf{C}}{\text{minimize}} & & \Tr\left( \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}^\intercal {\bf H} \mathbf{X} & \mathbf{X}^\intercal {\bf H} \mathbf{V} \\ \mathbf{V}^\intercal {\bf H} \mathbf{X} & \mathbf{V}^\intercal {\bf H} \mathbf{V} \end{bmatrix} \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix} \right)\\ & \text{subject to} & & \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}^\intercal \mathbf{X} & \mathbf{X}^\intercal \mathbf{V} \\ \mathbf{V}^\intercal \mathbf{X} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix} \begin{bmatrix}
\mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix} = \mathbf{I}_{b} \end{aligned} \end{equation} Unfortunately, this problem cannot be reduced to a generalized eigenvalue problem, which could be solved by means of reliable and optimized software packages. Therefore, to solve it we propose an iterative method. It is derived by formally applying the Lagrange multiplier method to the minimization problem. The Lagrange function of~\eqref{eq:coefficients-problem1} reads \[ \begin{split} \mathcal{L}(\mathbf{c}, \mathbf{C}, \mathcal{M}ult) = &\frac 12 \Tr\left( \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}^\intercal {\bf H} \mathbf{X} & \mathbf{X}^\intercal {\bf H} \mathbf{V} \\ \mathbf{V}^\intercal {\bf H} \mathbf{X} & \mathbf{V}^\intercal {\bf H} \mathbf{V} \end{bmatrix} \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix} \right)- \\ &\frac 12 \Tr \left[\mathcal{M}ult\left( \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix}^\intercal \begin{bmatrix} \mathbf{X}^\intercal \mathbf{X} & \mathbf{X}^\intercal \mathbf{V} \\ \mathbf{V}^\intercal \mathbf{X} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix} \begin{bmatrix} \mathrm{diag}(\mathbf{c}) \\ \mathbf{C} \end{bmatrix} - \mathbf{I}_{b} \right) \right]. \end{split} \] Thanks to the symmetry of the constraint matrix, we can assume $\mathcal{M}ult^\intercal = \mathcal{M}ult$ without loss of generality.
Then the gradient of the Lagrangian reads \[ \begin{split} &\nabla_{\mathbf{c}}\, \mathcal{L} = \text{diag}(\X^\intercal \mH \X) \mathbf{c} + \text{diag} ((\X^\intercal \mH \basis) \mathbf{C} - (\X^\intercal \basis) \mathbf{C} \mathcal{M}ult) \mathbf{1} - (\mathcal{M}ult \odot \X^\intercal \X) \mathbf{c}, \\ &\nabla_{\mathbf{C}}\, \mathcal{L} = (\basis^\intercal \mH \basis) \mathbf{C} + (\basis^\intercal \mH \X) \text{diag} (\mathbf{c}) - (\basis^\intercal \X) \text{diag}(\mathbf{c}) \mathcal{M}ult - (\basis^\intercal \basis) \mathbf{C} \mathcal{M}ult, \end{split} \] where $\text{diag}(\mathbf{A})$ denotes the diagonal matrix with the same diagonal as $\mathbf{A}$, $\mathbf{1}$ is the vector of all ones of the corresponding size, and $\mathbf{A}\odot\mathbf{B}$ is the elementwise product of matrices $\mathbf{A}$, $\mathbf{B}$ of the same size. Thus, the critical points of the Lagrangian can be found from \begin{equation}\label{eq:lagrange_matrix} \begin{split} &\text{diag}(\X^\intercal \mH \X) \mathbf{c} + \text{diag} ((\X^\intercal \mH \basis) \mathbf{C}) \mathbf{1} = \text{diag} ( (\X^\intercal \basis) \mathbf{C} \mathcal{M}ult) \mathbf{1} + (\mathcal{M}ult \odot \X^\intercal \X) \mathbf{c}, \\ & (\basis^\intercal \mH \basis) \mathbf{C} + (\basis^\intercal \mH \X) \text{diag} (\mathbf{c}) = (\basis^\intercal \X) \text{diag}(\mathbf{c}) \mathcal{M}ult + (\basis^\intercal \basis) \mathbf{C} \mathcal{M}ult, \\ & (\mathbf{X}\, \text{diag}(\mathbf{c}) + \mathbf{V} \mathbf{C})^\intercal (\mathbf{X}\, \text{diag}(\mathbf{c}) + \mathbf{V} \mathbf{C}) = \mathbf{I}_{b}. \end{split} \end{equation} We emphasize that the latter system does not depend on the whole matrix $\mathbf{X}^\intercal {\bf H} \mathbf{X}$, which is present in~\eqref{eq:coefficients-problem1}, but only on $\mathrm{diag}(\mathbf{X}^\intercal {\bf H} \mathbf{X})$. This significantly reduces the complexity of the computations.
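Since the Lagrangian is quadratic in $(\mathbf{c},\mathbf{C})$, the gradient expressions above can be verified exactly (up to roundoff) against central finite differences on random data; a numpy sketch (the multiplier $\mathcal{M}ult$ is the symmetric matrix \texttt{M} below; all other names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, b, m = 8, 3, 9                                   # m plays the role of 3b
H = rng.standard_normal((n, n)); H = H + H.T        # symmetric "Hamiltonian"
X = rng.standard_normal((n, b))
V = rng.standard_normal((n, m))
M = rng.standard_normal((b, b)); M = M + M.T        # symmetric multiplier

def lagrangian(c, C):
    Y = X @ np.diag(c) + V @ C
    return 0.5 * np.trace(Y.T @ H @ Y) - 0.5 * np.trace(M @ (Y.T @ Y - np.eye(b)))

def grad_c(c, C):
    # diag(X'HX) c + diag((X'HV)C - (X'V)C M) 1 - (M o X'X) c
    return (np.diag(X.T @ H @ X) * c
            + np.diag(X.T @ H @ V @ C - X.T @ V @ C @ M)
            - (M * (X.T @ X)) @ c)

def grad_C(c, C):
    # (V'HV)C + (V'HX)diag(c) - (V'X)diag(c) M - (V'V)C M
    return (V.T @ H @ V @ C + V.T @ H @ X @ np.diag(c)
            - V.T @ X @ np.diag(c) @ M - V.T @ V @ C @ M)

def num_grad(f, A, eps=1e-6):
    """Central finite differences; exact for quadratic f up to roundoff."""
    g = np.zeros_like(A)
    it = np.nditer(A, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        Ap, Am = A.copy(), A.copy()
        Ap[i] += eps; Am[i] -= eps
        g[i] = (f(Ap) - f(Am)) / (2 * eps)
        it.iternext()
    return g

c0, C0 = rng.standard_normal(b), rng.standard_normal((m, b))
err_c = np.max(np.abs(grad_c(c0, C0) - num_grad(lambda c: lagrangian(c, C0), c0)))
err_C = np.max(np.abs(grad_C(c0, C0) - num_grad(lambda C: lagrangian(c0, C), C0)))
```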
Equations~\eqref{eq:lagrange_matrix} can be rewritten in the following form: \begin{equation} \label{lagrange_detailed} \begin{split} \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf H} {\bf x}^{(\alpha)} & ({\bf x}^{(\alpha)})^\intercal {\bf H} \mathbf{V} \\ \mathbf{V}^\intercal {\bf H} {\bf x}^{(\alpha)} & \mathbf{V}^\intercal {\bf H}\mathbf{V} \end{bmatrix} \begin{bmatrix} \zeta_\alpha \\ \mathbf{c}_\alpha \end{bmatrix} = \lambda_{\alpha\alpha} \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf x}^{(\alpha)} & ({\bf x}^{(\alpha)})^\intercal \mathbf{V} \\ \mathbf{V}^\intercal {\bf x}^{(\alpha)} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix} \begin{bmatrix} \zeta_\alpha \\ \mathbf{c}_\alpha \end{bmatrix} & \\ + \sum_{\substack{\beta =1, \\ \beta \not = \alpha}}^{b} \lambda_{\alpha\beta} \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf x}^{(\beta)} & ({\bf x}^{(\alpha)})^\intercal \mathbf{V} \\ \mathbf{V}^\intercal {\bf x}^{(\beta)} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix} \begin{bmatrix} \zeta_\beta \\ \mathbf{c}_\beta \end{bmatrix} & \\ \begin{bmatrix} \zeta_\alpha & \mathbf{c}_\alpha^\intercal \end{bmatrix} \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf x}^{(\beta)} & ({\bf x}^{(\alpha)})^\intercal \mathbf{V} \\ \mathbf{V}^\intercal {\bf x}^{(\beta)} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix} \begin{bmatrix} \zeta_\beta \\ \mathbf{c}_\beta \end{bmatrix} = \delta_{\alpha\beta} , \quad \alpha,\beta=1,\dots,b, \qquad\quad & \end{split} \end{equation} where $\delta_{\alpha\beta}$ is the Kronecker delta and $\mathbf{c} = \begin{bmatrix} \zeta_1 & \dots & \zeta_b \end{bmatrix}^\intercal$.
For convenience we introduce the following notation: \begin{equation}\label{eq:coef_notation} \begin{split} \mathbf{A}_{\alpha} =& \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf H} {\bf x}^{(\alpha)} & ({\bf x}^{(\alpha)})^\intercal {\bf H} \mathbf{V} \\ \mathbf{V}^\intercal {\bf H} {\bf x}^{(\alpha)} & \mathbf{V}^\intercal {\bf H}\mathbf{V} \end{bmatrix}, \\ \mathbf{G}_{\alpha\beta} =& \begin{bmatrix} ({\bf x}^{(\alpha)})^\intercal {\bf x}^{(\beta)} & ({\bf x}^{(\alpha)})^\intercal \mathbf{V} \\ \mathbf{V}^\intercal {\bf x}^{(\beta)} & \mathbf{V}^\intercal \mathbf{V} \end{bmatrix}, \\ \mathbf{s}_{\alpha} =& \begin{bmatrix} \zeta_\alpha \\ \mathbf{c}_\alpha \end{bmatrix}. \end{split} \end{equation} Using this notation, \eqref{lagrange_detailed} reads \begin{equation} \label{lagrange_short} \begin{split} &\mathbf{A}_{\alpha} \mathbf{s}_{\alpha} = \lambda_{\alpha\alpha} \mathbf{G}_{\alpha\alpha}\mathbf{s}_{\alpha} + \sum_{\substack{\beta =1, \\ \beta \not = \alpha}}^{b} \lambda_{\alpha\beta} \mathbf{G}_{\alpha\beta}\mathbf{s}_{\beta}, \\ &\mathbf{s}_{\alpha}^\intercal \mathbf{G}_{\alpha\beta}\mathbf{s}_{\beta} = \delta_{\alpha\beta}, \quad \alpha,\beta = 1,\dots, b.
\end{split} \end{equation} To solve~\eqref{lagrange_short} we propose the following iterative process: \begin{equation} \label{eq:lagrange_iterative} \begin{split} &\mathbf{A}_{\alpha} \mathbf{s}_{\alpha}^{(k+1)} = \lambda_{\alpha\alpha}^{(k+1)} \mathbf{G}_{\alpha\alpha}\,\mathbf{s}_{\alpha}^{(k+1)} + \sum_{\beta < \alpha} \lambda_{\alpha\beta}^{(k+1)} \mathbf{G}_{\alpha\beta}\,\mathbf{s}_{\beta}^{(k+1)} + \sum_{\beta > \alpha} \lambda_{\alpha\beta}^{(k)} \mathbf{G}_{\alpha\beta}\,\mathbf{s}_{\beta}^{(k)}, \\ &\left(\mathbf{s}_{\alpha}^{(k+1)}\right)^\intercal \mathbf{G}_{\alpha\beta}\, \mathbf{s}_{\beta}^{(k+1)} = \delta_{\alpha\beta}, \quad \alpha \geq \beta, \\ &\left(\mathbf{s}_{\alpha}^{(k+1)}\right)^\intercal \mathbf{G}_{\alpha\beta}\, \mathbf{s}_{\beta}^{(k)} = 0, \quad \alpha < \beta. \end{split} \end{equation} To reduce~\eqref{eq:lagrange_iterative} to a sequence of generalized eigenvalue problems, let us start with $\alpha=1$ and proceed to $\alpha = b$. For $\alpha = 2,\dots, b -1$ define \[ \begin{split} &{\bf S}_{\alpha}^{(k)} = \begin{bmatrix} \mathbf{G}_{\alpha,1}\,\mathbf{s}_{1}^{(k+1)} & \dots & \mathbf{G}_{\alpha,\alpha-1}\,\mathbf{s}_{\alpha-1}^{(k+1)} & \mathbf{G}_{\alpha,\alpha+1}\,\mathbf{s}_{\alpha+1}^{(k)} & \dots & \mathbf{G}_{\alpha,b}\,\mathbf{s}_{b}^{(k)} \end{bmatrix},\\ &{\bf S}_{1}^{(k)} = \begin{bmatrix} \mathbf{G}_{1,2}\,\mathbf{s}_{2}^{(k)} & \dots & \mathbf{G}_{1,b}\,\mathbf{s}_{b}^{(k)} \end{bmatrix},\\ &{\bf S}_{b}^{(k)} = \begin{bmatrix} \mathbf{G}_{b,1}\,\mathbf{s}_{1}^{(k+1)} & \dots & \mathbf{G}_{b,b -1}\,\mathbf{s}_{b-1}^{(k+1)} \end{bmatrix}, \end{split} \] with null spaces \[ \mathcal{N}^{(k)}_\alpha = \mathrm{Null}\left({{\bf S}_{\alpha}^{(k)}}^\intercal\right). \] We also introduce the matrix $\mathbf{Q}_\alpha$ of size $(3b + 1)\times \mathrm{dim}(\mathcal{N}^{(k)}_\alpha)$ whose columns form an orthonormal basis of $\mathcal{N}^{(k)}_\alpha$.
Accounting for the fact that $\mathbf{s}_{\alpha}^{(k+1)}\in \mathcal{N}^{(k)}_\alpha$, multiplying the first equation of~\eqref{eq:lagrange_iterative} by $\mathbf{Q}_\alpha^\intercal$ and introducing $\mathbf{z}_{\alpha}^{(k+1)}$ such that $\mathbf{s}_{\alpha}^{(k+1)} = \mathbf{Q}_\alpha \mathbf{z}_{\alpha}^{(k+1)}$, we arrive at the sequence of generalized eigenvalue problems \begin{equation}\label{eq:eigh_mult} \begin{split} &\left(\mathbf{Q}_\alpha^\intercal\mathbf{A}_{\alpha} \mathbf{Q}_\alpha\right) \mathbf{z}_{\alpha}^{(k+1)} = \lambda_{\alpha\alpha}^{(k+1)} \left(\mathbf{Q}_\alpha^\intercal\mathbf{G}_{\alpha\alpha} \mathbf{Q}_\alpha\right) \mathbf{z}_{\alpha}^{(k+1)}, \\ &\left( \mathbf{z}_{\alpha}^{(k+1)} \right)^\intercal \left(\mathbf{Q}_\alpha^\intercal\mathbf{G}_{\alpha\alpha} \mathbf{Q}_\alpha\right) \mathbf{z}_{\alpha}^{(k+1)} = 1, \end{split} \end{equation} in which we search for the smallest eigenvalue~$\lambda_{\alpha\alpha}^{(k+1)}$ and the corresponding eigenvector~$\mathbf{z}_{\alpha}^{(k+1)}$. The matrix $\mathbf{Q}_\alpha$ is calculated using the SVD ${\bf S}_{\alpha}^{(k)}=U\Sigma V^\intercal$: we take the last $b + \#\{\text{zero singular values}\}$ columns of $U$. The algorithm described above is summarized in Algorithm~\ref{alg:heuristic}. Given the matrices $\mathbf{V}^\intercal \mathbf{V}$, $\mathbf{V}^\intercal {\bf H} \mathbf{V}$, ${\bf X}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf H} \mathbf{X}$, $\mathrm{diag}({\bf X}^\intercal{\bf H} {\bf X})$, the overall complexity of finding the coefficients is $\mathcal{O}(b^4)$. This is due to the fact that we calculate a full eigendecomposition, which costs $\mathcal{O}(b^3)$, for each $\alpha=1,\dots,b$, even though we only need one eigenvalue and one eigenvector for each $\alpha$. This does not significantly increase the complexity of the overall algorithm (including the tensor operations) for moderate $b$ up to $100$.
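A single step of this reduction -- building an orthonormal null-space basis $\mathbf{Q}_\alpha$ from an SVD and solving the projected generalized eigenvalue problem~\eqref{eq:eigh_mult} -- can be sketched in numpy as follows (random data; we reduce the generalized problem to a standard symmetric one via a Cholesky factorization, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, b = 10, 3                        # m plays the role of 3b + 1
A = rng.standard_normal((m, m)); A = A + A.T
G = rng.standard_normal((m, m)); G = G @ G.T + m * np.eye(m)   # SPD Gram matrix
S = rng.standard_normal((m, b - 1))                            # columns G_{ab} s_b

# orthonormal basis Q of Null(S^T): left singular vectors beyond rank(S)
u, sv, _ = np.linalg.svd(S, full_matrices=True)
Q = u[:, (sv > 1e-12).sum():]

# projected problem (Q^T A Q) z = lam (Q^T G Q) z, via Cholesky of Q^T G Q
Ar, Gr = Q.T @ A @ Q, Q.T @ G @ Q
L = np.linalg.cholesky(Gr)
Linv = np.linalg.inv(L)
w, Y = np.linalg.eigh(Linv @ Ar @ Linv.T)
z = Linv.T @ Y[:, 0]                # eigenvector of the smallest eigenvalue
s = Q @ z                           # back to the full space; satisfies S^T s = 0
```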
Arguably, the $\mathcal{O}(b^4)$ complexity may be reduced down to $\mathcal{O}(b^3)$ using low-rank updates of matrix decompositions. We, however, do not consider this here, as it does not significantly influence the overall performance of the method in the considered range of $b$ and requires a lot of technical details to be presented. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{\texttt{find\_coefficients} function from Algorithm~\ref{alg:riemannian_lobpcg}}\label{alg:heuristic} \Require Matrices $\mathbf{V}^\intercal \mathbf{V}$, $\mathbf{V}^\intercal {\bf H} \mathbf{V}$, ${\bf X}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf X}$, $\mathbf{V}^\intercal {\bf H} \mathbf{X}$, $\mathrm{diag}({\bf X}^\intercal{\bf H} {\bf X})$, initial guesses $\mathbf{s}_{\beta}^{(1)}\in\mathbb{R}^{3b + 1}$, $\beta=2,\dots,b$, maximum number of iterations $K$ \Ensure $\mathbf{c}, \mathbf{C}$ that approximate the solution to~\eqref{eq:coefficients-problem1} \State Assemble $\mathbf{A}_{\alpha}$, $\mathbf{G}_{\alpha\beta}$, $\alpha,\beta=1,\dots,b$ according to~\eqref{eq:coef_notation} \State ${\bf S}_{1}^{(1)}=\begin{bmatrix} \mathbf{G}_{1,2}\,\mathbf{s}_{2}^{(1)} & \dots & \mathbf{G}_{1,b}\,\mathbf{s}_{b}^{(1)} \end{bmatrix}$ \For{$k=1, 2, \ldots, K$} \For{$\alpha=1, 2, \ldots, b$} \State Compute the SVD: ${\bf S}_{\alpha}^{(k)}=U\,\mathrm{diag}(\mathbf{\sigma})\, V^\intercal$ \State Set $\mathbf{Q}_\alpha \coloneqq U[:,2b+1-\#\mathrm{zeros}({\bf \sigma}):3b+1]$ \State Compute $\mathbf{Q}_\alpha^\intercal\mathbf{A}_{\alpha} \mathbf{Q}_\alpha$, $\mathbf{Q}_\alpha^\intercal\mathbf{G}_{\alpha\alpha} \mathbf{Q}_\alpha$ \State Find $\mathbf{z}_{\alpha}^{(k+1)}$ from~\eqref{eq:eigh_mult} \State Compute $\mathbf{s}_{\alpha}^{(k+1)} = \mathbf{Q}_\alpha \mathbf{z}_{\alpha}^{(k+1)}$ \If{$\alpha\not= b$} \State Calculate ${\bf S}_{\alpha+1}^{(k)}=\begin{bmatrix} \mathbf{G}_{\alpha+1,\beta}\, \mathbf{s}_{\beta}^{(k+1)}
\end{bmatrix}_{\beta=1, \beta\not =\alpha+1}^b$ \EndIf \EndFor \State ${\bf S}_{1}^{(k+1)}=\begin{bmatrix} \mathbf{G}_{1,2}\,\mathbf{s}_{2}^{(k+1)} & \dots & \mathbf{G}_{1,b}\,\mathbf{s}_{b}^{(k+1)} \end{bmatrix}$ \EndFor \end{algorithmic} \end{algorithm} \section{Numerical experiments} In this section, we numerically assess the proposed method. For all experiments we use an NVIDIA DGX-1 station with 8 V100 GPUs (only one of which is used at a time) and an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz with 80 logical cores. \subsection{Molecule vibrational spectra} \label{sec:num_mol} One of the applications we consider is the computation of vibrational spectra of molecules. In particular, we consider Hamiltonians given as \begin{equation}\label{eq:schroed} \mathcal{H} = -\frac{1}{2}\sum_{i=1}^d \omega_i \frac{\partial^2}{\partial q_i^2}+ V(q_1,\dots, q_d), \end{equation} where $V$ denotes the potential energy surface (PES). The proposed method is applicable if the Hamiltonian can be represented in the TT-format with small TT-ranks, which holds, e.g., for sum-of-product PES. Therefore, we consider a PES given in the polynomial form as is used in~\cite{avila2011using134}: \[ \begin{split} V(q_1,\dots,q_{d}) = \frac{1}{2} \sum_{i=1}^{d} \omega_i q_i^2 + \frac{1}{6} \sum_{i=1}^{d} \sum_{j=1}^{d} \sum_{k=1}^{d} \phi_{ijk}^{(3)} q_i q_j q_k\\ + \frac{1}{24} \sum_{i=1}^{d} \sum_{j=1}^{d} \sum_{k=1}^{d} \sum_{l=1}^{d} \phi_{ijkl}^{(4)} q_i q_j q_k q_l. \end{split} \] To discretize the problem we use the discrete variable representation (DVR) scheme on a tensor product of Hermite meshes~\cite{baye-dvr-1986}. The complexity of assembling the discretized PES $V$ is~\cite{ro-mp-2016} $$\mathcal{O}\left(\left(\mathrm{nnz} \left(\phi_{ijk}^{(3)}\right) + \mathrm{nnz} \left(\phi_{ijkl}^{(4)}\right) \right) d n^2 R^3 \right)$$ and is negligibly small compared with the cost of one iteration of the iterative process.
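As a toy illustration of the structure of~\eqref{eq:schroed}, the harmonic part of a single mode has the well-known spectrum $\omega(k+1/2)$, which is already recovered by a plain finite-difference discretization (a sketch only; this is \emph{not} the Hermite-DVR actually used in the experiments, and the grid parameters are ours):

```python
import numpy as np

omega = 1.0
N, Lbox = 400, 10.0
q = np.linspace(-Lbox, Lbox, N)
h = q[1] - q[0]

# -1/2 * omega * d^2/dq^2 via central differences (Dirichlet boundaries),
# plus the harmonic potential 1/2 * omega * q^2 on the diagonal
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
H1d = -0.5 * omega * D2 + np.diag(0.5 * omega * q**2)

levels = np.linalg.eigvalsh(H1d)[:3]   # approximate omega * (k + 1/2)
```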
As a preconditioner we use an approximate inverse of the harmonic part of~\eqref{eq:schroed}, computed using~\cite{khor-prec-2009} together with formula~\eqref{eq:trick_prec} for efficient computations. We calculate vibrational spectra of two molecules: acetonitrile (CH$_3$CN) and ethylene oxide (C$_2$H$_4$O), for which $d=12$ and $d=15$, respectively. The potential energy surfaces for these molecules were kindly provided by the group of Prof. Tucker Carrington. Table~\ref{tab:hamranks} contains the TT-ranks of the Hamiltonians of these two molecules, with the modes ordered by ascending $\omega_{i}$, $i=1,\dots,d$; the mode sizes are chosen according to~\cite{lc-rrbpm-2014} for CH$_3$CN and set to $n=15$ for all modes of C$_2$H$_4$O. The initial guess is obtained from the harmonic part of the Hamiltonian, for which the solution can be derived analytically. \setlength{\tabcolsep}{4pt} \begin{table}[t!] \caption{TT-ranks of the Hamiltonian with $\varepsilon=10^{-12}$ truncation tolerance.} \label{tab:hamranks} \begin{center} \begin{tabular}{lccccccccccccccc} \toprule CH$_3$CN & $R_1$ & $R_2$ & $R_3$ & $R_4$ & $R_5$ & $R_6$ & $R_7$ & $R_8$ & $R_9$ & $R_{10}$ & $R_{11}$ \\ & 5& 9& 14&21&25&26&24&18&15&8&5\\ \midrule C$_2$H$_4$O & $R_1$ & $R_2$ & $R_3$ & $R_4$ & $R_5$ & $R_6$ & $R_7$ & $R_8$ & $R_9$ & $R_{10}$ & $R_{11}$ & $R_{12}$ & $R_{13}$ & $R_{14}$ \\ & 5& 11& 17& 21& 23& 25& 27 & 28 & 25 & 23 & 21 & 16 & 11 & 5\\ \bottomrule \end{tabular} \end{center} \end{table} Figure~\ref{fig:schedule_ch3cn} shows the convergence of each eigenvalue when running the proposed method with different tangent space schedules. The following scenarios are compared. In the first one, the tangent space of the first eigenvector is fixed for all iterations, i.e. $t_1=\dots=t_K=1$. In this case there is no need to find corrections using~\eqref{eq:coefficients-problem}, and hence computations are faster for large $b$.
In this case we observe that although the first ten eigenvalues converge within approximately $25$ iterations, most of the eigenvalues do not converge to the desired accuracy even after $100$ iterations. This behaviour is in accordance with the fact that not all eigenvectors can necessarily be approximated using only the tangent space of the first eigenvector. In the optimal scenario, after every iteration we choose the tangent space that yields the smallest value of the functional in~\eqref{eq:coefficients-problem}. This is done by checking the tangent spaces of all the eigenvectors, which is impractical and is therefore provided only for comparison purposes. Not surprisingly, this scenario yields the fastest convergence. Finally, in the figure we also provide two practically interesting cases: the tangent space schedule according to~\eqref{eq:schedule}, which we call ``argmax'' here, and a random choice of tangent space after each iteration. The ``argmax'' strategy aims at speeding up the convergence of the eigenvalues with the slowest convergence rate, thus certifying convergence of the larger eigenvalues. By contrast, the random choice of tangent spaces provides faster convergence of the smallest eigenvalues, while the error of the larger eigenvalues fluctuates. In the following numerical experiments we choose the ``argmax'' strategy as the more reliable one and combine it with the use of the first tangent space for the first $20$ iterations. \begin{figure}[t] \begin{center} \includegraphics[width=120mm]{schedule_ch3cn.pdf} \caption{CH$_3$CN molecule, $r=10$.
Convergence plots of $40$ eigenvalues for four different tangent space schedule scenarios: the first tangent space chosen for all iterations, the optimal choice of schedule (not practical), the proposed scheduling by~\eqref{eq:schedule}, and a random choice of tangent space.} \label{fig:schedule_ch3cn} \end{center} \end{figure} In Figure~\ref{fig:time_vs_b_ch3cn} we report the computational time of one iteration as a function of the number of computed eigenvalues $b$ for the acetonitrile molecule CH$_3$CN and $r=25$. This figure illustrates that the tensor part of the computations described in Section~\ref{sec:impl} scales effectively linearly in the given range of $b$. When $b$ approaches $100$, the problem of finding the coefficients (Alg.~\ref{alg:heuristic}) starts to dominate. We plan to improve the complexity of this part of the method in future work. \begin{figure}[t] \centering \subfloat[]{\label{fig:time_vs_b_ch3cn}{\includegraphics[width=0.45\textwidth]{time_vs_b_ch3cn_cap.pdf}}} \subfloat[]{\label{fig:time_vs_b_spins}{\includegraphics[width=0.45\textwidth]{time_spins_cap.pdf}}} \caption{GPU energy spectra calculation for the acetonitrile molecule CH$_3$CN (a) and for a spin lattice with $d=40$ spins (b). Time per iteration vs. $b$, the number of computed eigenvectors; the TT-rank in both cases is $25$. The plot illustrates the linear scaling of the tensor operation complexity with respect to $b$. For large $b$, the problem of finding the coefficients (Alg.~\ref{alg:heuristic}) starts to dominate.} \label{fig:time_vs_b} \end{figure} Table~\ref{tab:errs_mols} reports the accuracy of the energy levels obtained by different methods for the CH$_3$CN and C$_2$H$_4$O molecules.
We measure the mean absolute error (MAE) of the energy levels: \begin{equation}\label{eq:mae} \mathrm{MAE} = \frac{1}{b} \sum_{i=1}^{b} \left|\tilde\epsilon^{(i)} - \epsilon^{(i)}_\mathrm{ref}\right|, \end{equation} where $\tilde\epsilon^{(i)}$, $i=1,\dots,b$ are the calculated energy levels and $\epsilon^{(i)}_\mathrm{ref}$ are accurate energy levels calculated in~\cite{ro-mp-2016} for CH$_3$CN and in~\cite{intertwined2017thomas} for C$_2$H$_4$O. For comparison purposes we also provide the timings of the methods. The timings of LRRAP LOBPCG and MP LOBPCG are measured on the same machine, whereas the timings of the hierarchical rank-reduced block power method (H-RRBPM) are taken from~\cite{tc-hrrbpm-2015}. We do not provide timings of H-RRBPM and HI-RRBPM for C$_2$H$_4$O, since the calculations in~\cite{intertwined2017thomas} were done for a different $b$ ($b=200$), compared with $b=35$ for LRRAP LOBPCG and MP LOBPCG. Table~\ref{tab:errs_mols} illustrates that the method is capable of producing accurate results with speedups of up to 20 times on GPU compared with CPU. We note that for larger molecules an additional speedup can be obtained by using several GPUs simultaneously. In contrast to the MP LOBPCG method~\cite{ro-mp-2016}, the proposed method produces comparably accurate results faster, both on CPUs and on GPUs. The reason is that for the considered examples the cost of each iteration of the proposed method is considerably smaller than that of an iteration of MP LOBPCG, even though LRRAP LOBPCG requires more iterations. Moreover, MP LOBPCG introduces truncation errors in operations such as linear combinations of vectors and matrix-vector products, which eventually lowers the accuracy of the result. In the LRRAP approach, by contrast, the corrections at each iteration belong to a single tangent space, so no rank growth occurs; moreover, before the retraction all tensor calculations are done in machine precision.
Table~\ref{tab:errs_mols} also illustrates that LRRAP LOBPCG with $r=25$ is more accurate than the most accurate basis (basis-3, or ``b3'' for short) considered in~\cite{tc-hrrbpm-2015}. Acceleration on GPUs yields an additional gain in time over the H-RRBPM method. We note, however, that the recently proposed HI-RRBPM~\cite{intertwined2017thomas} and its improved version~\cite{intertwined2018thomas} are superior to H-RRBPM. We also note that the accuracy of the eigenvalues can be further improved using the manifold-projected simultaneous inverse iteration (MP SII) proposed for the TT-format in~\cite{ro-mp-2016}. This strategy was used to correct eigenvalues obtained with MP LOBPCG and small~$r$. \setlength{\tabcolsep}{4pt} \begin{table}[t!] \caption{Mean absolute error (MAE)~\eqref{eq:mae} and computation times of different methods. Reference energies $\epsilon^{(i)}_\mathrm{ref}$ are taken from~\cite{ro-mp-2016} ($r=40$) for CH$_3$CN and from~\cite{intertwined2017thomas} (setting D) for C$_2$H$_4$O. $b=84$ energy levels are calculated for CH$_3$CN and $b=35$ energy levels for C$_2$H$_4$O. Note that the timings of the H-RRBPM method are taken from~\cite{tc-hrrbpm-2015}, so those computations were performed on a different machine.
The MAE for H-RRBPM and HI-RRBPM is measured over the first $b$ eigenvalues.} \label{tab:errs_mols} \begin{center} \begin{tabular}{c|cc|cc|ccc|cccc} \toprule & \multicolumn{2}{c|}{\small\makecell{LRRAP \\ LOBPCG \\ (proposed)}} & \multicolumn{2}{c|}{\small\makecell{MP \\ LOBPCG \\ \cite{ro-mp-2016}}} & \multicolumn{3}{c|}{\small\makecell{H-RRBPM\\ \cite{tc-hrrbpm-2015}}} & \multicolumn{3}{c}{\small\makecell{HI-RRBPM \\ \cite{intertwined2017thomas}}} \\ \midrule \cellcolor{black!10}{CH$_3$CN} & {\small $r\shorteq 15$} & {\small $r\shorteq 25$} & {\small $r\shorteq 15$} & {\small $r\shorteq 25$} & b1 & b2 & b3 & \\ \midrule {\small MAE,\,cm$^{-1}$} & $0.4$ & $0.05$ & $0.5$ & $0.07$ & $2.6$ & $0.9$ & $0.2$ \\ {\small GPU time} & $51$ s & $114$ s & & & & & & & \\ {\small CPU time} & 12\,{\small min} & 26\,{\small min} & $27$\,{\small min} & $45$\,{\small min} & $44$ s & $11$\,{\small min} & $3.2$\,{\small h} \\ \midrule \cellcolor{black!10}{C$_2$H$_4$O} & $r\shorteq 25$ & $r\shorteq 35$ & $r\shorteq 25$ & $r\shorteq 35$ & \multicolumn{3}{c|}{E {\small (H-RRBPM, \cite{intertwined2017thomas})}} & A & B & C \\ \midrule {\small MAE,\,cm$^{-1}$} & $1.5$ & $0.3$ & $1.7$ & $0.4$ & \multicolumn{3}{c|}{0.78} & $1.0$ & $0.6$ & $0.2$ \\ {\small GPU time} & 92 s & 129 s & & & & & & & \\ {\small CPU time} & $25$\,{\small min} & $41$\,{\small min} & $32$\,{\small min} & $64$\,{\small min} & & & & & \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Spin chains} \label{sec:task_spins} As a second application, we consider the Heisenberg model for one-dimensional spin lattices.
The goal is to compute the minimal energy levels of the Hamiltonian \begin{equation}\label{eq:ham_spin} {\bf H} = \sum_{i=1}^{d-1} \left( \mathbf{S}_x^{(i)} \mathbf{S}_x^{(i+1)} + \mathbf{S}_y^{(i)}\mathbf{S}_y^{(i+1)} + \mathbf{S}_z^{(i)} \mathbf{S}_z^{(i+1)} \right), \end{equation} with \[ \mathbf{S}_\alpha^{(i)} = \mathbf{I}\otimes \dots \otimes \mathbf{I} \otimes \mathbf{S}_\alpha \otimes \mathbf{I} \otimes \dots \otimes \mathbf{I}, \quad \alpha = x,y,z, \] where $\mathbf{I}$ is the $2\times 2$ identity matrix and $\mathbf{S}_\alpha$, $\alpha = x,y,z$, are the spin-$\frac{1}{2}$ operators (Pauli matrices scaled by $1/2$) \[ \mathbf{S}_x = \frac{1}{2} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \quad \mathbf{S}_y = \frac{\mathrm{i}}{2} \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad \mathbf{S}_z = \frac{1}{2} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. \] It can be verified numerically that the TT-ranks of the Hamiltonian~\eqref{eq:ham_spin} are bounded from above by $5$. \begin{figure}[t] \begin{center} \includegraphics[width=120mm]{schedule_spins.pdf} \caption{Heisenberg model, $b=30$, $r=25$, $d=40$. Convergence plots of $30$ eigenvalues for four different tangent space schedule scenarios: the first tangent space chosen for all iterations, the optimal choice of schedule (not practical), the proposed scheduling by~\eqref{eq:schedule}, and a random choice of tangent space.} \label{fig:schedule_spins} \end{center} \end{figure} Similarly to Sec.~\ref{sec:num_mol}, in Figure~\ref{fig:schedule_spins} we plot the convergence of each eigenvalue for different choices of tangent space schedules: using only the tangent space of the first eigenvector, the optimal choice of schedule, the ``argmax'' strategy~\eqref{eq:schedule}, and a random choice of tangent spaces. The convergence behaviour is similar to that of the vibrational spectra computation (see Sec.~\ref{sec:num_mol}).
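For small $d$ the Hamiltonian~\eqref{eq:ham_spin} can also be assembled densely and diagonalized directly, which makes a convenient sanity check for a TT-based solver. The sketch below (not part of the method; it uses the standard labelling of the spin operators, which only permutes the equivalent terms of the symmetric sum) reproduces the known two-site spectrum, a singlet at $-3/4$ and a triplet at $1/4$:

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices scaled by 1/2), standard labelling
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def heisenberg_dense(d):
    """Dense 2^d x 2^d Heisenberg Hamiltonian of Eq. (ham_spin)."""
    H = np.zeros((2**d, 2**d), dtype=complex)
    for i in range(d - 1):          # nearest-neighbour bonds
        for S in (Sx, Sy, Sz):      # S_a^{(i)} S_a^{(i+1)} for a = x, y, z
            term = np.eye(1, dtype=complex)
            for k in range(d):
                term = np.kron(term, S if k in (i, i + 1) else I2)
            H += term
    return H

# d = 2: eigenvalues are the singlet -3/4 and the threefold-degenerate 1/4
evals = np.linalg.eigvalsh(heisenberg_dense(2))
```

The dense construction costs $\mathcal{O}(4^d)$ memory and is only feasible for $d \lesssim 14$; this is exactly the regime where reference values for the TT solver can be obtained.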
The only difference we observe is that here all eigenvectors can be well approximated in the tangent space of the first eigenvector. This allows us to run most of the iterations in a single tangent space; after that, only a few iterations of the ``argmax'' strategy are needed to increase the accuracy, which leads to a significant complexity reduction. Note that, in contrast to the computation of vibrational spectra, no preconditioner is used for~\eqref{eq:ham_spin}. \setlength{\tabcolsep}{4pt} \begin{table}[t!] \caption{Mean absolute error (MAE)~\eqref{eq:mae} and computation times of different methods for open spin chains. Reference energies $\epsilon^{(i)}_\mathrm{ref}$ are calculated using \texttt{eigb}~\cite{dkos-eigb-2014} with $\delta=10^{-5}$, where $\delta$ denotes the truncation error for both \texttt{eigb} and ALPS.} \label{tab:errs_spins} \begin{center} \begin{tabular}{c|ccc|cc|cccccc} \toprule & \multicolumn{3}{c|}{\small\makecell{LRRAP \\ LOBPCG \\ (proposed)}} & \multicolumn{2}{c|}{\small\makecell{eigb \\ \cite{dkos-eigb-2014}}} & \multicolumn{2}{c}{\small\makecell{ALPS \\ \cite{alps2011bauer}}} \\ \midrule \cellcolor{black!10}{$d=40$, $b=5$} & {\small $r \shorteq 20$} & {\small $r \shorteq 35$} & {\small $r \shorteq 45$}& {\small $\delta \shorteq 10^{-2}$} & {\small $\delta \shorteq 10^{-3}$} & {\small $\delta \shorteq 10^{-4}$} & {\small $\delta \shorteq 10^{-5}$} \\ \midrule {\small MAE} & 1.0e-4 & 1.2e-5 & 2.2e-6 & 2.2e-4 & 2.4e-6 & 1.0e-2 & 1.2e-3 \\ {\small GPU time} & 9\,s & 25\,s & 44\,s & & & & \\ {\small CPU time} & 50\,s & 145\,s & 251\,s & 6\,s & 26\,s & 35\,s & 44\,s\\ \midrule \cellcolor{black!10}{$d=40$, $b=35$} & {\small $r \shorteq 20$} & {\small $r \shorteq 35$} & {\small $r \shorteq 45$}& {\small $\delta \shorteq 10^{-2}$} & {\small $\delta \shorteq 10^{-3}$} & {\small $\delta \shorteq 10^{-4}$} & {\small $\delta \shorteq 10^{-5}$} \\ \midrule {\small MAE} & 7.7e-3 & 1.3e-4 & 5.1e-6 & 1.3e-4 & 3.0e-6 & 1.4e-2 & 1.6e-3 \\ {\small GPU time} & 28\,s
& 51\,s & 85\,s & & & \\ {\small CPU time} & 5.6\,{\small min} & 12\,{\small min} & 21\,{\small min} & 18\,{\small min} &31\,{\small min} & 24\,{\small min} & 41\,{\small min} \\ \bottomrule \end{tabular} \end{center} \end{table} In Figure~\ref{fig:time_vs_b_spins} the time of one iteration on GPU is reported for $r=25$ and $d=40$. In contrast to the computation of molecular vibrational energy levels (Fig.~\ref{fig:time_vs_b_ch3cn}), for $b=84$ the time spent on tensor operations is smaller than the time needed to find the coefficients. This is due to the fact that the TT-ranks of the molecular Hamiltonians (Tab.~\ref{tab:hamranks}) are larger than those of the considered spin lattices, where the TT-rank is bounded by $5$. We compare the proposed method with two open source packages. The first one is the \texttt{eigb} method~\cite{dkos-eigb-2014} available in the~\texttt{ttpy}\footnote{\url{https://github.com/oseledets/ttpy}} library. The second one is the DMRG-based algorithm implemented in ALPS~\cite{alps2011bauer} (algorithms and libraries for physical simulations), which can also compute more than one eigenstate. We present the results of the comparison in Table~\ref{tab:errs_spins}. We observe that for small $b$ ($b=5$) we are not able to outperform either~\texttt{eigb} or ALPS at comparable accuracies. For larger $b$ ($b=35$) the proposed method is faster on CPU, and the GPU provides an additional acceleration of up to $15$ times. We note that our method can handle larger $b$ (up to $100$) without problems, while~\texttt{eigb} struggles in this range: due to the use of the block TT-format, the rank in~\texttt{eigb} grows rapidly with $b$, so much larger rank values (more than $1000$ for $b>40$~\cite{dkos-eigb-2014}) are required. \section{Related work} The computation of energy levels of multidimensional Hamiltonians using low-rank tensor approximations has been considered in several communities.
In solid state physics, the computation of energy levels of one-dimensional spin lattices using tensor decompositions has been known for a long time. For this kind of application, the \emph{matrix product state} (MPS) representation~\cite{fannes-mps-1992,klumper-mps-1993} is used to approximate eigenfunctions. MPS is known to be equivalent to the Tensor-Train format~\cite{osel-tt-2011} used in the current paper. The classical algorithm for approximating the ground state of a one-dimensional spin lattice in the MPS representation is the \emph{density matrix renormalization group} (DMRG)~\cite{white-dmrg-1992, ostlund-dmrg-1995}; see the review~\cite{schollwock-2011} for details. Although MPS and DMRG are widely used in solid state physics, they were for a long time unknown in the numerical analysis community. The calculation of several eigenvalues using two-site DMRG was first considered in the papers of S.~White~\cite{white-dmrg-1992,white-dmrg-1993}. Alternatively, one can use the \emph{numerical renormalization group} (NRG)~\cite{wilson-nrg-1975} and its improved version, the \emph{variational NRG}~\cite{pizorn-eigb-2012}. In these methods the index corresponding to the number of an eigenvalue is present only in the last site (core) of the MPS representation. The \texttt{eigb} algorithm~\cite{dkos-eigb-2014}, which is an extension of~\cite{khos-dmrg-2010}, alternately assigns the eigenvalue index to different sites (cores) of the representation, which allows for rank adaptation. By contrast, in the proposed method we consider every eigenvector separately, which implies that the eigenvectors do not share sites (cores) of the decomposition. This leads to smaller rank values of the eigenvectors. Moreover, thanks to the Riemannian optimization approach, no rank growth occurs during the iterations. The calculation of energy levels using tensor decompositions has also been considered for the computation of molecular vibrational spectra.
The CP-decomposition~\cite{kolda-review-2009} (canonical decomposition) of eigenvectors of vibrational problems was considered in~\cite{lc-rrbpm-2014}, where the rank-reduced block power method (RRBPM) was used. Its hierarchical version (H-RRBPM) was later proposed in~\cite{tc-hrrbpm-2015}. The H-RRBPM was improved in~\cite{intertwined2017thomas} and later in~\cite{intertwined2018thomas} (HI-RRBPM, where ``I'' stands for ``intertwined''). With HI-RRBPM, vibrational energy levels of molecules with a dozen or more atoms, such as uracil and naphthalene~\cite{intertwined2018thomas}, were accurately calculated. We also note that tensor decompositions have been used in quantum dynamics, in particular in the multiconfiguration time-dependent Hartree (MCTDH) method~\cite{meyer-book-2009} as well as in its multilayer version~\cite{wt-mlmctdh-2003} (ML-MCTDH), which is similar to the Hierarchical Tucker representation~\cite{gras-hsvd-2010}. Finally, a number of methods were developed independently in the mathematical community. Tensor versions of the power and inverse power methods (inverse iteration) were considered in~\cite{beylkin-2002,garcke-mregr-2009,khst-eigen-2012}. For tensor versions of preconditioned eigensolvers, such as the preconditioned inverse iteration (PINVIT) and LOBPCG methods, see~\cite{mach-innereig-2013,ro-hf-2016,ro-crossconv-2015,lebedeva-tensornd-2011,tobler-htdmrg-2011}. A generalization of the Jacobi-Davidson method was considered in~\cite{ro-jd-2018}. Solvers based on alternating optimization procedures, such as ALS~\cite{holtz-ALS-DMRG-2012} or AMEn~\cite{ds-amen-2014}, were proposed in~\cite{ds-dmrgamen-2015,kressner-evamen-2014}. \section{Conclusion and future work} We propose an eigensolver for high-dimensional eigenvalue problems in the TT-format (MPS). The ability of the solver to efficiently calculate up to $100$ eigenstates is ensured by the use of Riemannian optimization, which naturally avoids rank growth.
The solver is implemented in TensorFlow, allowing both CPU and GPU parallelization. In the considered numerical examples the GPU version of the solver yields a 15--20 times acceleration compared with the CPU version. At each iteration of the solver, a small but nonstandard optimization problem arises for finding the coefficients of the iterative process. The method proposed to address this problem solves it with complexity comparable to that of the tensor operations for up to $b=100$ eigenvalues. For $b>100$ there is still room for improvement, which we plan to address in future work. \appendix \section{TensorFlow implementation overview} \label{sec:app-t3f-implementation} In this section we provide a brief introduction to TensorFlow (which we use to simplify GPU support) and details on how we implement the tensor operations. We implemented the proposed method in Python, relying on two libraries: TensorFlow and T3F. TensorFlow~\cite{tf-2015} is a library written by Google for Machine Learning research (e.g., fast prototyping) and production alike. Its focus is ease of prototyping, GPU support, good parallelization abilities, and automatic differentiation\footnote{Given a computer program which defines a differentiable function, automatic differentiation generates another program which can compute the (exact) gradient of the function in time at most 4 times slower than executing the original function. (In this paper we do not use automatic differentiation.)}. TensorFlow provides a library of linear-algebra (and some other) functions which abstract away the hardware details. For example, calling the matrix multiplication function \texttt{tensorflow.matmul(A, B)} multiplies the two matrices on a CPU using all the available threads. Running the same code on a computer with a GPU executes the matrix multiplication on the GPU.
When executing a sequence of TensorFlow operations, TensorFlow makes sure that the data is not copied to and from GPU memory between the operations. For example, running
\begin{verbatim}
A = numpy.random.normal(size=(100, 100))
matmul = tensorflow.matmul(A, A)
print(tensorflow.reduce_sum(matmul))
\end{verbatim}
will generate a random matrix using Numpy on the CPU, then copy it to the GPU memory, perform the matrix multiplication on the GPU, then compute the sum of all elements of the resulting matrix on the GPU, and only then copy the result back to the CPU memory for printing. Moreover, TensorFlow makes it easy to process pieces of data on multiple GPUs and then combine the results on a single master GPU. Another important TensorFlow feature is vectorization. Almost all functions support working with \emph{batches} of objects, i.e. executing the same operation on a set of different arrays. For example, applying \texttt{tensorflow.matmul(A, B)} to two arrays of shapes $100 \times 3 \times 4$ and $100 \times 4 \times 5$ returns an array \texttt{C} of shape $100 \times 3 \times 5$ such that $C^{(i)} = A^{(i)} B^{(i)}$, $i = 1, \ldots, 100$, and all the matrices are multiplied in parallel. This is especially important on GPUs: because of the massive parallel resources of GPUs, running 100 small matrix multiplications sequentially (in a for loop) is almost 100 times slower than multiplying them all in parallel in a single vectorized operation. The second library used in this paper is Tensor Train on TensorFlow (T3F)~\cite{novikov2018tensor}, which is written on top of TensorFlow and provides many primitives for working with the Tensor Train decomposition. To speed up the computations in the proposed method, we represent the current approximation to the eigenvectors $\mathbf{X} = \{{\bf x}^{(1)}, \ldots, {\bf x}^{(b)}\}$ as a \emph{batch} of $b$ TT-vectors, letting T3F and TensorFlow vectorize all the operations w.r.t. the number of TT-vectors.
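The batched-matmul semantics described above can be reproduced with NumPy's \texttt{matmul}, which broadcasts over leading batch dimensions in the same way (a small illustration, not part of the solver):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3, 4))  # batch of 100 matrices, 3x4 each
B = rng.standard_normal((100, 4, 5))  # batch of 100 matrices, 4x5 each

# One vectorized call: C[i] = A[i] @ B[i] for every i in the batch
C = np.matmul(A, B)

# Equivalent (slow) for-loop version, multiplying pair by pair
C_loop = np.stack([A[i] @ B[i] for i in range(100)])

assert C.shape == (100, 3, 5)
assert np.allclose(C, C_loop)
```

On a GPU the difference between the two variants is far more pronounced than on a CPU, for the reasons given in the text.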
We also treat the basis $\mathbf{V}$ at each iteration as a batch of TT-vectors. The T3F library supports batch processing: for example, executing \texttt{t3f.bilinear\_form(A, $\mathbf{X}$, $\mathbf{X}$)} finds the value of $({\bf x}^{(i)})^\intercal A {\bf x}^{(i)}$ for each $i = 1, \ldots, b$ in parallel on a GPU. When dealing with large problems, we also use the multi-GPU feature of TensorFlow to use all the GPUs available on a single computer. Let us consider a detailed example of adding two TT-tensors. Given two tensors $\tens{A},\tens{B}\in \mathbb{R}^{n \times \ldots \times n}$ represented in the TT-format $$ \tensel{A}_{i_1 \ldots i_d} = G^\tens{A}_1(i_1) G^\tens{A}_2(i_2) \ldots G^\tens{A}_d(i_d), $$ $$ \tensel{B}_{i_1 \ldots i_d} = G^\tens{B}_1(i_1) G^\tens{B}_2(i_2) \ldots G^\tens{B}_d(i_d), $$ where for any $i_k = 1, \ldots, n$ the matrix $G^\tens{A}_k(i_k)$ is of size $r \times r$ for $k=2, \ldots, d-1$, $G^\tens{A}_1(i_1)$ is of size $1 \times r$, and $G^\tens{A}_d(i_d)$ is of size $r \times 1$. The goal is to find the TT-cores $\{G^\tens{C}_k(i_k)\}_{k=1}^d$ of the tensor $\tens{C} = \tens{A} + \tens{B}$. The expression for these TT-cores is given by~\cite{osel-tt-2011}: \begin{equation} \begin{aligned} \label{eq:add_tt_cores} G^\tens{C}_k(i_k) &= \begin{bmatrix} G^\tens{A}_{k}(i_{k}) & 0\\ 0 & G^\tens{B}_{k}(i_{k}) \end{bmatrix}, ~~k = 2, \ldots, d-1,\\ G^\tens{C}_1(i_1) &= \begin{bmatrix} G^\tens{A}_1(i_1) & G^\tens{B}_1(i_1) \end{bmatrix}, \quad G^\tens{C}_d(i_d) = \begin{bmatrix} G^\tens{A}_d(i_d)\\ G^\tens{B}_d(i_d) \end{bmatrix}. \end{aligned} \end{equation} Note that we also want to support batch processing, e.g. adding 100 pairs of TT-tensors, $\tens{C}^{(i)} = \tens{A}^{(i)} + \tens{B}^{(i)}$, $i = 1, \ldots, 100$. The resulting program is similar to any implementation of the summation of TT-tensors, with two exceptions: the support of batch processing and the use of TensorFlow for all elementary operations to enable GPU support.
T3F represents a batch of $d$-dimensional TT-tensors as a list of $d$ arrays, with the $k$-th array holding the $k$-th TT-cores of all the tensors in the batch. The shape of the $k$-th array ($k = 2, \ldots, d-1$) is $b \times r \times n \times r$, where $b$ is the batch size; the shape of the first array ($k=1$) is $b \times 1 \times n \times r$, and the shape of the last array ($k = d$) is $b \times r \times n \times 1$. \begin{algorithm} \begin{algorithmic}[1] \Require Arrays representing the TT-cores of the batches of tensors $\{\tens{A}^{(i)}\}_{i=1}^b$ and $\{\tens{B}^{(i)}\}_{i=1}^b$ \Ensure Array representing the TT-cores of the batch $\{\tens{C}^{(i)}\}_{i=1}^b = \{\tens{A}^{(i)} + \tens{B}^{(i)}\}_{i=1}^b$ \State Concatenate $G^\tens{A}_1$ and $G^\tens{B}_1$ (of shape $b \times 1 \times n \times r$) along the 4-th axis to form the array $G^\tens{C}_1$ of shape $b \times 1 \times n \times 2 r$ \For{$k=2, \ldots, d-1$} \State Create an array of zeros of size $b \times r \times n \times r$ \State Concatenate $G^\tens{A}_k$ with the array of zeros along the $4$-th axis into an array $U$ of shape $b \times r \times n \times 2r$ \State Concatenate the array of zeros with $G^\tens{B}_k$ along the $4$-th axis into an array $D$ of shape $b \times r \times n \times 2r$ \State Concatenate $U$ and $D$ along the $2$-nd axis into $G^\tens{C}_k$ of shape $b \times 2r \times n \times 2r$ \EndFor \State Concatenate $G^\tens{A}_d$ and $G^\tens{B}_d$ (of shape $b \times r \times n \times 1$) along the 2-nd axis to form the array $G^\tens{C}_d$ of shape $b \times 2r \times n \times 1$ \caption{Implementation of adding two batches of TT-tensors in T3F (\texttt{t3f.add(A, B)}). }\label{alg:t3f_add} \end{algorithmic} \end{algorithm} To add two TT-tensors, T3F calls TensorFlow functions to create arrays of appropriate sizes filled with zeros and to concatenate the TT-cores of the tensors $\tens{A}$ and $\tens{B}$ with each other and with zeros (see Alg.~\ref{alg:t3f_add}).
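A plain-NumPy sketch of the same batched addition, with cores laid out as in the T3F convention above (the helper \texttt{tt\_full}, which reconstructs a full tensor, is our own and is used here only for verification; it is not a T3F primitive):

```python
import numpy as np

def tt_batch_add(cores_a, cores_b):
    """Batched TT addition following Eq. (add_tt_cores).

    cores_a, cores_b: lists of d arrays; core k has shape
    (batch, r_left, n, r_right), with r_left = 1 for k = 0
    and r_right = 1 for k = d - 1.
    """
    d = len(cores_a)
    out = []
    for k, (ga, gb) in enumerate(zip(cores_a, cores_b)):
        if k == 0:
            # first core: [G^A_1  G^B_1], concat along the right rank axis
            out.append(np.concatenate([ga, gb], axis=3))
        elif k == d - 1:
            # last core: stack vertically, concat along the left rank axis
            out.append(np.concatenate([ga, gb], axis=1))
        else:
            # middle cores: block-diagonal stacking, padded with zeros
            b, ra, n, ra2 = ga.shape
            _, rb, _, rb2 = gb.shape
            upper = np.concatenate([ga, np.zeros((b, ra, n, rb2))], axis=3)
            lower = np.concatenate([np.zeros((b, rb, n, ra2)), gb], axis=3)
            out.append(np.concatenate([upper, lower], axis=1))
    return out

def tt_full(cores, i):
    """Reconstruct the full tensor of the i-th batch element (verification only)."""
    res = cores[0][i]  # shape (1, n, r)
    for k in range(1, len(cores)):
        # contract the shared TT-rank index
        res = np.tensordot(res, cores[k][i], axes=([-1], [0]))
    return res.squeeze(axis=(0, -1))
```

As in the algorithm above, no arithmetic on the core entries is performed; addition of TT-tensors is pure memory movement, which is why T3F implements it with concatenations alone.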
Note that a user of T3F can ignore these implementation details and simply call \texttt{t3f.add(A, B)}. \end{document}
\begin{document} \title{Double resonance for one-sided superlinear or singular nonlinearities} \begin{abstract} We deal with the problem of existence of periodic solutions for the scalar differential equation $x''+ f(t,x)=0$ when the {\em asymmetric} nonlinearity satisfies a {\em one-sided} superlinear growth at infinity. The nonlinearity is asked to be {\em next to} resonance, and a Landesman-Lazer type of condition will be introduced in order to obtain a positive answer. Moreover, we also provide the corresponding result for equations with a singularity and asymptotically linear growth at infinity, showing a further application to radially symmetric systems. \end{abstract} \section{Introduction}\label{intro} In this paper we are going to study different types of scalar second order differential equations. We are interested in nonlinearities which, roughly speaking, are {\em next to resonance}. We will provide different existence results for periodic solutions, extending some previous theorems well-known in the literature, which treat the case of {\em nonresonant} nonlinearities. We will first focus our attention on nonlinearities defined on the whole real line; in particular, we will start by looking for periodic solutions of the scalar differential equation \begin{equation}\label{nleq} x''+ f(t,x)=0 \,, \end{equation} where $f: \R \times \R \to \R$ is a continuous function which is $T$-periodic in the first variable. Then, in Section~\ref{secrad}, we will show how such a result can be adapted to the case of nonlinearities with a singularity. Finally, we will provide a further application to radially symmetric systems.
\medbreak It is well-known from classical results~\cite{Dancer,Fucik,MawhinSurvey} that the asymmetric oscillator $$ x''+\mu x^+ -\nu x^- =0 $$ has nontrivial solutions if the couple $(\mu,\nu)$ belongs to the so-called Dancer-Fucik spectrum $$ \Sigma= \bigcup_{j\in \N} \mathscr C_j\,, $$ where $$ \mathscr C_0 = \left\{ (\mu,\nu) \in \R^2 \,:\, \mu\geq0, \nu\geq0 \text{ such that } \mu\,\nu=0\right\} $$ and $$ \mathscr C_j = \left\{ (\mu,\nu) \in \R^2 \,:\, \mu\geq0, \nu\geq0 \text{ such that } \frac{\pi}{T} \left(\frac{1}{\sqrt{\mu}} + \frac{1}{\sqrt{\nu}}\right)= \frac 1j \right\}. $$ In particular, it consists of the two positive semi-axes $\mathscr C_0$ and of an infinite number of curves $\mathscr C_j$ having a vertical asymptote $\mu=\mu_j$ and a horizontal one $\nu=\nu_j$, with $\mu_j=\nu_j=(j\pi/T)^2$. The study of {\em asymmetric nonlinearities} $f$ satisfying $$ \nu_\downarrow \leq \liminf_{x\to-\infty} \frac{f(t,x)}{x} \leq \limsup_{x\to-\infty} \frac{f(t,x)}{x}\leq \nu_\uparrow\,, $$ $$ \mu_\downarrow \leq \liminf_{x\to+\infty} \frac{f(t,x)}{x} \leq \limsup_{x\to+\infty} \frac{f(t,x)}{x}\leq \mu_\uparrow\,, $$ for some suitable constants in $[0,+\infty]$, providing the existence of periodic solutions to equation~\eqref{nleq}, has a wide literature (see e.g.~\cite{CMZ,Dancer,DI,Fabry,FH,FS1,Fucik} or the survey~\cite{MawhinSurvey} and the references therein).
The existence is strictly related to the position of the rectangle $W =[\mu_\downarrow,\mu_\uparrow]\times[\nu_\downarrow,\nu_\uparrow]$ with respect to the set $\Sigma$: \begin{itemize} \item if $W\cap \Sigma = \varnothing$ ({\em non-resonance}) we have the existence of at least one periodic solution (cf.~\cite{CMZ,DI,FH}), \item if $W$ is bounded and $W\cap \Sigma =\{(\mu_\downarrow,\nu_\downarrow)\}$ or $W\cap \Sigma =\{(\mu_\uparrow,\nu_\uparrow)\}$ ({\em simple resonance}) the existence of a periodic solution can be obtained by the introduction of a Landesman-Lazer type of condition (cf.~\cite{Fabry}) or of an Ahmad-Lazer-Paul type of condition (cf.~\cite{J-ALP}), \item if $W$ is bounded and $W\cap \Sigma =\{(\mu_\downarrow,\nu_\downarrow),(\mu_\uparrow,\nu_\uparrow)\}$ ({\em double resonance}) the existence of a periodic solution can be obtained by the introduction of a {\em double} Landesman-Lazer type of condition (cf.~\cite{Fabry,FondaPHH,FG,FM})\,. \end{itemize} In this paper we will present an existence result of periodic solutions for the {\em double resonance} case in which $W$ is unbounded. Such a situation admits three possible interpretations: nonlinearities with one-sided superlinear growth, nonlinearities with a singularity, and scalar equations with impacts. In this paper we will present the first two situations; the last one has recently been considered by the author in \cite{S10}. We are going to treat nonlinearities satisfying the following asymptotic asymmetric behavior. \begin{itemize} \item[{\bf (A)}] {\sl Assume \begin{equation}\label{as-} \lim_{x\to - \infty} \frac{f(t,x)}{x} = + \infty\, \end{equation} and that there exist a constant $c>0$ and an integer $N>0$ such that \begin{equation}\label{as+} \mu_N x - c \leq f(t,x) \leq \mu_{N+1} x + c\,, \end{equation} for every $x>0$ and every $t\in[0,T]$, where $\mu_j=(j\pi/T)^2$. } \end{itemize} \noindent Notice that the specular case can be considered as well.
The case of a nonlinearity satisfying a {\em nonresonant} one-sided superlinear growth was studied e.g. in~\cite{FH}, but to the best of our knowledge an existence result for nonlinearities satisfying a {\em double} resonance condition has not been provided yet. \medbreak We are now ready to state the first of the main results of this paper. We address the reader to Section~\ref{secrad} for corresponding theorems related to scalar equations with a singularity and to Section~\ref{final} for some applications to radially symmetric systems. \begin{theorem}\label{main} Assume {\bf(A)} and the Landesman-Lazer conditions \begin{equation}\label{LLcond1} \int_0^T \liminf_{x\to+\infty} (f(t,x)-\mu_{N} x ) \phi_N(t+\tau) \, dt > 0\,, \end{equation} \begin{equation}\label{LLcond2} \int_0^T \limsup_{x\to+\infty} (f(t,x)-\mu_{N+1} x ) \phi_{N+1}(t+\tau) \, dt < 0\,, \end{equation} for every $\tau\in[0,T]$, where $\phi_j$ is defined as $$ \phi_j (t)= \begin{cases} \sin (\sqrt{\mu_j} t) & t\in[0, T/j]\\ 0 & t\in[T/j, T] \,, \end{cases} $$ extended by periodicity. Then, equation $x''+f(t,x)=0$ has at least one $T$-periodic solution. \end{theorem} It is possible to relax the Landesman-Lazer conditions~\eqref{LLcond1} and~\eqref{LLcond2} in the previous theorem, requiring that the nonlinearity $f$ satisfies the following additional hypothesis. \begin{itemize} \item[{\bf (H)}] {\sl For every $\tau\in[0,T]$ and for every $\zeta>0$, consider the set $\mathcal I(\tau,\zeta)=[\tau-\zeta,\tau+\zeta]$ and the functions $$ f_{1,\zeta,\tau}(x) = \min_{t\in\mathcal I(\tau,\zeta)} f(t,x) \qquad f_{2,\zeta,\tau}(x) = \max_{t\in\mathcal I(\tau,\zeta)} f(t,x) $$ with their primitives $F_{i,\zeta,\tau}(x)=\int_0^x f_{i,\zeta,\tau}(\xi) \,d\xi$. We assume that $$ \lim_{\zeta \to 0} \left( \lim_{x\to-\infty} \frac{F_{2,\zeta,\tau}(x)}{F_{1,\zeta,\tau}(x)} \right) = 1 $$ uniformly in $\tau\in[0,T]$. } \end{itemize} The variant of Theorem~\ref{main} is stated as follows.
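As a toy sanity check of condition~\eqref{LLcond1} (the model nonlinearity $f(t,x)=\mu_N x+1$ for $x>0$ and the values $T=2\pi$, $N=2$ are our own choices, not taken from the paper): here $\liminf_{x\to+\infty}(f(t,x)-\mu_N x)=1$, so the condition reduces to $\int_0^T\phi_N(t+\tau)\,dt=2/\sqrt{\mu_N}>0$ for every $\tau$. A minimal numerical sketch:

```python
import numpy as np

T = 2 * np.pi
N = 2
mu = lambda j: (j * np.pi / T) ** 2

def phi(j, t):
    """phi_j of the theorem: one arch of sin(sqrt(mu_j) t) on [0, T/j],
    zero on [T/j, T], extended T-periodically."""
    s = np.mod(t, T)
    return np.where(s <= T / j, np.sin(np.sqrt(mu(j)) * s), 0.0)

def trap(y, x):
    """Plain trapezoidal rule (avoids the np.trapz / np.trapezoid rename)."""
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

t = np.linspace(0.0, T, 20001)
# with f(t,x) = mu_N x + 1 for x > 0, the Landesman-Lazer integral becomes
# int_0^T phi_N(t + tau) dt = 2/sqrt(mu_N) = 2, independently of tau
ll_integrals = [trap(phi(N, t + tau), t) for tau in (0.0, 1.0, 2.5)]
```

The value is independent of the shift $\tau$, since the integral runs over a full period of $\phi_N$.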
\begin{theorem}\label{main2} Assume {\bf(A)}, {\bf(H)} and the Landesman-Lazer conditions~\eqref{LLcond1} and~\eqref{LLcond2} where $\phi_j$ is defined as $$ \phi_j (t)= \left| \sin (\sqrt{\mu_j} t) \right| \,. $$ Then, equation $x''+f(t,x)=0$ has at least one $T$-periodic solution. \end{theorem} \medbreak Let us briefly explain the main differences between the two types of Landesman-Lazer conditions adopted in the previous theorems. The one involved in Theorem~\ref{main} is stronger than the one introduced in Theorem~\ref{main2}: indeed, it is easy to verify that the first implies the second. Hence, we can replace the stronger Landesman-Lazer conditions of Theorem~\ref{main} with the weaker ones by introducing the additional assumption {\bf(H)}. Roughly speaking, it requires that the superlinear behavior of the nonlinearity at $-\infty$ is an {\em infinity of the same order} as $t$ varies, as explained in the following example, where we show some nonlinearities satisfying (or not) such a hypothesis. \begin{example}\label{ex1} Suppose that there exists a function $h:\R\to \R$ satisfying $$ \lim\limits_{x\to-\infty} \frac{h(x)}{x} = +\infty\,, $$ such that $$ 0< \liminf_{x\to-\infty} \frac {f(t,x)}{h(x)} \leq \limsup_{x\to-\infty} \frac {f(t,x)}{h(x)} < +\infty \,. $$ Then, {\bf (H)} holds. As a particular situation, we can consider a nonlinearity $f$ which can be split (when $x<0$) as $f(t,x)=q(t)h(x)+p(t,x)$ with $q(t)>0$ and $\lim\limits_{x\to-\infty} \frac{p(t,x)}{h(x)}=0$ uniformly in $t$. As a direct example, {\bf (H)} holds for nonlinearities $f$ not depending on $t$ when $x<0$, or nonlinearities such as $f(t,x) = (1+\sin^2(t)) x^5 + x^3$, or $f(t,x)= x^3 + \sin^2(t) x^2$. Otherwise, for example, if $f(t,x) = x^3 + \sin^2(t) x^5$ when $x<0$, then $f$ does not satisfy {\bf (H)}.
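These claims can be probed numerically. In the following sketch (the grid sizes, the window $\zeta=0.1$ around $\tau=0$ and the sample point $x=-50$ are our own choices) the ratio $F_{2,\zeta,\tau}(x)/F_{1,\zeta,\tau}(x)$ stays close to $1$ for the first nonlinearity above and far from $1$ for the last one:

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

def F_ratio(f, tau, zeta, x_far, n_t=201, n_x=20001):
    """Approximate F_{2,zeta,tau}(x_far) / F_{1,zeta,tau}(x_far), where
    f_{1/2} are the min/max of f(t, .) over [tau - zeta, tau + zeta]
    and F_i(x) = int_0^x f_i(xi) dxi."""
    t = np.linspace(tau - zeta, tau + zeta, n_t)
    xi = np.linspace(x_far, 0.0, n_x)
    vals = f(t[:, None], xi[None, :])
    F1 = -trap(vals.min(axis=0), xi)     # F_1(x_far) = -int_{x_far}^0 f_1
    F2 = -trap(vals.max(axis=0), xi)
    return F2 / F1

good = lambda t, x: (1 + np.sin(t) ** 2) * x ** 5 + x ** 3   # satisfies (H)
bad  = lambda t, x: x ** 3 + np.sin(t) ** 2 * x ** 5         # fails (H)

# near tau = 0 the window contains points where sin^2(t) vanishes, so for
# `bad` the dominant x^5 term disappears from f_2 but not from f_1, and
# the ratio stays far below 1 however large |x| is taken
r_good = F_ratio(good, tau=0.0, zeta=0.1, x_far=-50.0)
r_bad = F_ratio(bad, tau=0.0, zeta=0.1, x_far=-50.0)
```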
\end{example} \section{Proof of Theorems~\ref{main} and~\ref{main2}}\label{secproof} By degree theoretic arguments, the proof consists in finding a common {\em a~priori bound} for all the $T$-periodic solutions of the differential equations \begin{equation}\label{homotopy} x'' + g_\lambda(t,x) = 0\,, \end{equation} where $\lambda\in[0,1]$ and $$ g_\lambda(t,x) = \lambda f(t,x) + (1-\lambda)h(t,x) \,, $$ with $$ h(t,x)= \begin{cases} f(t,x) & x<-1\\ \mu x + x[\,\mu x - f(t,x)] & -1\leq x\leq 0\\ \mu x & x>0 \end{cases} $$ defining $\mu=(\mu_N+\mu_{N+1})/2$. In particular we will find an $R_{good}>0$ such that every $T$-periodic solution of~\eqref{homotopy} satisfies $x(t)^2+ x'(t)^2 < R_{good}^2$ for every $t$. In~1993, Fabry and Habets proved in~\cite{FH} that there exists at least one $T$-periodic solution to~\eqref{homotopy} with $\lambda=0$, by the use of degree arguments. In particular, they found a similar a priori bound $R_{FH}$ for all the solutions of $$ x''+ \tilde \lambda h(t,x) + (1-\tilde\lambda) (\mu x^+ - \nu x^-) =0 \,, $$ with $\tilde\lambda\in[0,1]$ and $(\mu,\nu)\notin\Sigma$, thus solving the case of nonresonant nonlinearities. Hence, simply asking $R_{good}>R_{FH}$ and using Leray-Schauder degree theory, the proof of Theorems~\ref{main} and~\ref{main2} easily follows. In Section~\ref{prel}, we will provide some preliminary lemmas which make use of some phase-plane techniques; then in Section~\ref{priori} we will prove the existence of the common a priori bound. \subsection{Some preliminary lemmas}\label{prel} In this section we present some estimates on the behavior of the solutions to~\eqref{homotopy} obtained by the use of some phase-plane techniques. We will not present all the proofs: some of them are left to the reader as mere computational exercises, with references to other papers for comparison. Moreover, some of the statements are well known in the literature.
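As a quick consistency check of the gluing in the definition of $h$ (the sample $f$ and the values $\mu_N=1$, $\mu_{N+1}=9/4$ below are our own choices), one can verify numerically that $h$ is continuous at the matching points $x=-1$ and $x=0$:

```python
mu_N, mu_N1 = 1.0, 2.25            # sample values of mu_N, mu_{N+1}
mu = (mu_N + mu_N1) / 2.0

def f(t, x):
    # a sample nonlinearity, superlinear at -infinity (our choice)
    return mu_N * x + (x ** 3 if x < 0 else 0.0)

def h(t, x):
    # the deformation used in the homotopy: f below x = -1,
    # the linear map mu*x above x = 0, interpolation in between
    if x < -1:
        return f(t, x)
    if x <= 0:
        return mu * x + x * (mu * x - f(t, x))
    return mu * x

eps = 1e-8
jumps = [abs(h(t, -1 - eps) - h(t, -1 + eps)) + abs(h(t, -eps) - h(t, eps))
         for t in (0.0, 1.0)]
```

Indeed, at $x=-1$ the middle branch evaluates to $f(t,-1)$, and at $x=0$ both branches vanish.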
\medbreak Let us set $$ f_1(x)=\min_{t\in[0,T]} f(t,x) \quad \text{and}\quad f_2(x)=\max_{t\in[0,T]} f(t,x) \,; $$ then, by~\eqref{as-}, there exists $d<0$ such that $$ f_1(x) \leq f(t,x) \leq f_2(x) < 0 $$ for every $x<d$, with $\lim_{x\to-\infty} f_2(x)/x=+\infty$. Define the primitives $ F_i (x)= \int_d^x f_i(\xi) d\xi$. Notice that $F_1\geq F_2$ are decreasing functions when $x<d$. \medbreak For every solution $x$ of equation~\eqref{homotopy} the pair $(x,y)=(x,x')$ is a solution of the planar system \begin{equation}\label{planar} \begin{cases} x'=y\\ -y'= g_\lambda(t,x) \,. \end{cases} \end{equation} \noindent We will say that \begin{equation}\label{Rlarge} (x,y) \text{ is $R$-large, if } x^2(t) + y^2(t) >R^2 \text{ for every } t\in[0,T]\,, \end{equation} where $(x,y)$ is a solution of~\eqref{planar}. We will also consider the parametrization of such solutions in polar coordinates $$ \begin{cases} x(t) = \rho(t) \cos \theta(t)\\ y(t) = \rho(t) \sin \theta(t) \end{cases} $$ where the angular velocity and the radial velocity of $0$-large solutions are given by $$ - \theta'(t) = \frac{y^2(t)+x(t)g_\lambda(t,x(t))}{x^2(t)+y^2(t)}\,, $$ $$ \rho'(t) = \frac{y(t)[x(t)-g_\lambda(t,x(t))]}{\sqrt{x^2(t)+y^2(t)}}\,. $$ An easy computation gives us the following. \begin{remark}\label{rotate+} There exists $R_0>0$ such that every $R_0$-large solution of~\eqref{planar} rotates clockwise (i.e. $- \theta'(t)>0$). \end{remark} In what follows the constant $R_0$ will be enlarged in order to obtain some additional properties on $R_0$-large solutions.
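The angular and radial velocity formulas above can be verified symbolically; the following sketch (our own check, using SymPy, with $g$ standing for $g_\lambda(t,x(t))$) imposes the planar system and solves for $\rho'$ and $\theta'$:

```python
import sympy as sp

t = sp.symbols('t')
rho = sp.Function('rho')(t)
theta = sp.Function('theta')(t)
g = sp.Function('g')(t)              # shorthand for g_lambda(t, x(t))

x = rho * sp.cos(theta)
y = rho * sp.sin(theta)

# impose the planar system x' = y, y' = -g and solve for rho', theta'
sol = sp.solve([sp.Eq(sp.diff(x, t), y), sp.Eq(sp.diff(y, t), -g)],
               [sp.diff(rho, t), sp.diff(theta, t)], dict=True)[0]

# formulas from the text (recall that sqrt(x^2 + y^2) = rho here)
claimed_minus_theta = (y ** 2 + x * g) / (x ** 2 + y ** 2)
claimed_rho = y * (x - g) / rho

ok_theta = sp.simplify(-sol[sp.diff(theta, t)] - claimed_minus_theta) == 0
ok_rho = sp.simplify(sol[sp.diff(rho, t)] - claimed_rho) == 0
```

This only checks the algebra of the change of variables; the sign condition defining clockwise rotation is the content of Remark~\ref{rotate+}.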
Consider an $R_0$-large solution $(x,y)$; then there exist some instants $t_i$ (see Figure~\ref{figlapt}) such that $$ \begin{array}{l} x(t_1)=d, \quad y(t_1)>0\,,\\ x(t_2)=0, \quad y(t_2)>0\,,\\ x(t_3)>0, \quad y(t_3)=0\,,\\ x(t_4)=0, \quad y(t_4)<0\,,\\ x(t_5)=d, \quad y(t_5)<0\,,\\ x(t_6)<0, \quad y(t_6)=0\,,\\ x(t_7)=d, \quad y(t_7)>0 \,,\\ x(t_8)=0, \quad y(t_8)>0 \,.\\ \end{array} $$ \begin{figure} \caption{An $R_0$-large solution and the instants $t_1,\ldots,t_8$.} \label{figlapt} \end{figure} The following lemma can be obtained easily (see e.g.~\cite{FM,FS1} for details). \begin{lemma}\label{nonreslemma} For every $\ee>0$, it is possible to find $R_\ee>R_0$, such that every $R_\ee$-large solution of~\eqref{planar} satisfies $$ t_5 - t_1\in\left( \frac{\pi}{\sqrt{\mu_{N+1}}} - \ee \,,\, \frac{\pi}{\sqrt{\mu_{N}}} + \ee \right) =\left( \frac{T}{N+1} - \ee \,,\, \frac{T}{N} + \ee \right) $$ and $t_7-t_5<\ee\,.$ \end{lemma} Hence, choosing $\ee$ sufficiently small, we easily obtain the next one. \begin{lemma}\label{Ngiri} It is possible to find $R_0$ such that every $R_0$-large $T$-periodic solution of~\eqref{planar} performs exactly $N$ or $N+1$ rotations around the origin in the phase-plane. \end{lemma} With a similar reasoning, we can prove the following. \begin{lemma}\label{halflap} For every $\ee>0$, it is possible to find $R_\ee>R_0$, such that every $R_\ee$-large solution of~\eqref{planar} satisfies $$ \frac{T}{N+1} - \ee < t_4 - t_2 < \frac{T}{N} + \ee \text{ and } t_8-t_4 < \ee \,. $$ \end{lemma} \medbreak The following lemma gives us some information on the dynamics when $x>d$. \begin{lemma}\label{lapontheright} It is possible to find $R_0$ large enough to have the existence of some positive constants $\omega_0$ and $\ell_0$ such that every $R_0$-large solution to~\eqref{planar}, when written in polar coordinates, satisfies $$ -\vartheta'(t) > \omega_0 \quad\text{and}\quad |\rho'(t)| \leq \ell_0 \rho(t) $$ when $x(t)>d$.
Consequently, for $\kappa=\ell_0/\omega_0$, $$ \left|\frac{d\rho}{d(-\theta)}\right| < \kappa \rho \,. $$ \end{lemma} We leave the proof to the reader as an exercise. We refer to~\cite{FS1,FS7} for details. \begin{remark}\label{r1} A direct consequence of the previous lemma is that $$ e^{-\kappa\pi/2} \, x(t_3) \leq |y(t_j)| \leq e^{\kappa\pi/2} \, x(t_3)\,, \text{ with } j=2,4\,, $$ if $(x,y)$ is $R_0$-large\,. \end{remark} \medbreak Let us now focus our attention on the dynamics when $x<d$. We are going to prove that there exists a function $\mathcal T$ such that $y(t_7)< \mathcal T(y(t_5))$, thus permitting us to find a second function $\mathcal L$ such that $y(t_8) \leq \mathcal L(y(t_2))$. We will consequently have a control on the behavior of solutions escaping from the origin. We start by defining the energy functions $$ H_i(x,y) = \frac 12 y^2 + F_i (x)\,, \quad i=1,2\,. $$ Then, we have \begin{equation}\label{en<1} \frac {d}{dt} H_1(x(t),y(t))<0 \,\,\text{if } y(t)>0\,, \quad \frac {d}{dt} H_1(x(t),y(t))>0 \,\,\text{if } y(t)<0\,, \end{equation} and \begin{equation}\label{en<2} \frac {d}{dt} H_2(x(t),y(t))<0 \,\,\text{if } y(t)<0\,, \quad \frac {d}{dt} H_2(x(t),y(t))>0 \,\,\text{if } y(t)>0\,. \end{equation} \begin{figure} \caption{(a) The level subsets $\chi_1$ and $\chi_2$ of the energy functions $H_1$ and $H_2$ control the behavior of the solution $(x,y)$ of system~\eqref{planar}. (b) The {\em guiding curves} obtained by glueing together level curves of the energy functions.} \label{figen} \end{figure} These functions give us a control on the behavior of the solutions, see Figure~\ref{figen}a. In particular, we have that $$ -\sqrt{2F_1(x(t_6))} < y(t_5) < -\sqrt{2F_2(x(t_6))}\,, $$ $$ \sqrt{2F_2(x(t_6))} < y(t_7) < \sqrt{2F_1(x(t_6))}\,, $$ thus giving us $y(t_7)< \mathcal T(y(t_5))$, where $$ \mathcal T(\upsilon) = \sqrt{2 F_1\left( F_2^{-1} (\upsilon^2/2) \right)} \geq \upsilon\,. $$ By the estimate in Lemma~\ref{lapontheright}, we have $\sqrt{y(t_5)^2+d^2}<e^{\kappa\pi + a} \,y(t_2)$ and $y(t_8)< e^a \sqrt{y(t_7)^2+d^2}$, where $a=\kappa\arcsin (d/R_0)$.
Summing up, we obtain \begin{equation}\label{elle} y(t_8) \leq \mathcal L(y(t_2)) \text{ with } \mathcal L(\upsilon)= e^a \sqrt{ \mathcal T^2 \left( \sqrt{ e^{2(\kappa\pi + a)} \, \upsilon^2 -d^2} \right) + d^2 }\,. \end{equation} The same estimate can be obtained by glueing together some {\em guiding curves} in the plane $(x,y)$, following an idea introduced in~\cite{FS1} and developed in~\cite{FS3,FS7,S5} in different situations. Figure~\ref{figen}b illustrates this idea. \medbreak Moreover, by~\eqref{as-}, we can suppose $R_0$ sufficiently large to have \begin{equation}\label{inout} 2F_i(-r)-r^2 >2F_i(x)-x^2 \quad \text{for every } r\geq R_0\,, \; x\in(-r,d) \text{ and } i=1,2 \,. \end{equation} In particular, once fixed $r\geq R_0$, we have $$ H_i(x,y)<H_i(-r,0)\,, \quad i=1,2\,, $$ for every $(x,y)$ satisfying $x^2+y^2<r^2$ and $x<d$. In other words, if $R_0$ is sufficiently large, the level subsets $\chi_i$ of the energy functions $H_i$ passing through the point $(-R_0,0)$ do not enter the open ball of radius $R_0$, see Figure~\ref{figelastic}b. By~\eqref{as-}, we can also suppose that \begin{equation}\label{en1} 2F_2(-R_0)>R_0^2 e^{2(\kappa\pi+a)}\,. \end{equation} We can now state the following lemma. \begin{lemma}\label{elastic} There exists $\mathcal R(R_0)>R_0$ such that every $T$-periodic solution of~\eqref{planar} such that $x^2(t_0)+y^2(t_0)>\mathcal R(R_0)$ at a certain time $t_0$ is an $R_0$-large solution. \end{lemma} \proof Set $\mathcal R(R_0)=\mathcal L^{N+2}(\hat y)$ with $\hat y =e^a \sqrt{2F_1(-R_0)+d^2}$. Argue by contradiction and suppose the existence of a $T$-periodic solution $(x,y)$ of~\eqref{planar} such that, for some instants $\tau_0$ and $\tau_1$ with $\tau_0<\tau_1\leq \tau_0+T$, it satisfies $x^2(\tau_0)+y^2(\tau_0)=R_0^2$, $x^2(t)+y^2(t)>R_0^2$ for $t\in(\tau_0,\tau_1)$ and $x^2(\tau_1)+y^2(\tau_1)>\mathcal R(R_0)$. By Remark~\ref{rotate+}, let $\tau_2>\tau_0$ be the smallest instant such that $x(\tau_2)=0$ and $y(\tau_2)>0$.
We prove now that $y(\tau_2) <\hat y$. First of all, we cannot have $x(\tau_0)>0$, nor $x(\tau_0)>d$ with $y(\tau_0)<0$: the solution would re-enter the ball of radius $R_0$ too early (see Figure~\ref{figelastic}a). In fact, using Lemma~\ref{lapontheright}, we can find an instant $\tau_3\in(\tau_0,\tau_2)$, with $x(\tau_3)=d$ and $-e^{\kappa\pi+a} R_0<y(\tau_3)<0$. Then, by~\eqref{en1} and the estimates in~\eqref{en<1} and~\eqref{en<2}, we obtain the contradiction: the solution re-enters the ball guided by the level curve of $H_2(x,y)=H_2(d,-e^{\kappa\pi+a} R_0)$, denoted by $\chi_0$ in Figure~\ref{figelastic}a. The possibility of having $x(\tau_0)\leq d$ with $y(\tau_0)<0$ is avoided by the {\em guiding} level curve $H_2(x,y)=H_2(-R_0,0)$, denoted by $\chi_2$ in Figure~\ref{figelastic}b. So, there remains the case $x(\tau_0)\leq 0$ with $y(\tau_0)\geq 0$. In this situation the {\em guiding} level curve $H_1(x,y)=H_1(-R_0,0)$, denoted by $\chi_1$ in Figure~\ref{figelastic}b, controls the solution when $x<d$, and then by the estimate in Lemma~\ref{lapontheright} we obtain $y(\tau_2) <\hat y$. Now, in the interval $[\tau_2,\tau_1]$ the solution performs a certain number of complete rotations around the origin, which is less than $N+2$ thanks to Lemma~\ref{Ngiri}. Hence, by~\eqref{elle}, $y(\tau_1) <\mathcal L^{N+2}(y(\tau_2)) < \mathcal L^{N+2}(\hat y) = \mathcal R(R_0)$, thus giving a contradiction. \cvd \begin{figure} \caption{(a) If $x(\tau_0)>0$, or $x(\tau_0)>d$ with $y(\tau_0)<0$, then the solution re-enters the ball of radius $R_0$ before exiting the bigger ball of radius $\mathcal R(R_0)$, excluding this situation. (b) In the other cases, the level curves $\chi_1$ and $\chi_2$, respectively of the energy functions $H_1$ and $H_2$, drive the solutions, permitting us to find the desired estimate $y(\tau_2) <\hat y$.} \label{figelastic} \end{figure} \medbreak Some easy consequences of the previous lemma are the following.
\begin{remark}\label{valgono} If a $T$-periodic solution $x$ of~\eqref{homotopy} satisfies $\|x\|_\infty> \mathcal R(R_0)$, then Lemmas~\ref{Ngiri},~\ref{lapontheright} and Remark~\ref{r1} hold for the solution $(x,y)=(x,x')$ of~\eqref{planar}. \end{remark} \begin{remark}\label{tonorm} Suppose that we have a sequence $x_n$ of $T$-periodic solutions of~\eqref{planar} such that $\lim_n \max_{[0,T]} (x_n^2(t)+y_n^2(t)) = +\infty$; then $\lim_n \|x_n\|_\infty = +\infty$. \end{remark} The proof of this last remark easily follows by noticing that Lemma~\ref{elastic} holds similarly for every $R>R_0$. \medbreak Repeating some of the arguments contained in the proof of Lemma~\ref{elastic}, we can see that $x(t_6)>\mathcal M(x(t_3))$, where $\mathcal M(\upsilon) = F_2^{-1} \left( \frac12 (e^{\kappa \pi + 2a} \, \upsilon^2-d^2)\right)$. In particular $$ \lim_{r\to+\infty} \frac{\mathcal M(r)}{r} = 0\,. $$ As an immediate consequence, we have the following lemma. \begin{lemma}\label{mintozero} Suppose that we have a sequence $x_n$ of $T$-periodic solutions to~\eqref{homotopy} with $\lim_n \|x_n\|_\infty = +\infty$; then $$ \lim_{n} \frac{\min_{[0,T]} x_n}{\| x_n\|_\infty} = 0 \,. $$ \end{lemma} \subsection{The a priori bound}\label{priori} The proof of Theorems~\ref{main} and~\ref{main2} is given essentially by the validity of the following proposition. \begin{proposition}\label{apb} There exists $R_{good}$ sufficiently large, such that every $T$-periodic solution of~\eqref{planar} satisfies $x^2(t)+y^2(t)<R_{good}^2$ for every $t\in[0,T]$. \end{proposition} \proof We argue by contradiction and suppose that there exists a sequence of $T$-periodic solutions $(x_n,y_n)$ of~\eqref{planar}, with $\lambda=\lambda_n$, such that $\max_{[0,T]} (x^2_n(t)+y^2_n(t)) > n^2$. By Remark~\ref{tonorm}, we immediately have $\lim_n \|x_n\|_\infty = +\infty$. Let us denote by $\bar t_n$ the point of maximum of $x_n$, i.e. such that $x_n(\bar t_n)=\|x_n\|_\infty$.
We can assume, by Lemma~\ref{elastic}, all these solutions to be $R_0$-large. In particular, by Lemma~\ref{Ngiri}, all the solutions must perform $N$ or $N+1$ rotations around the origin. Consider the sequence of normalized functions $$ v_n = \frac{x_n}{\|x_n\|_\infty} \,, $$ which are solutions to \begin{equation}\label{approx} v_n'' + \frac{g_{\lambda_n}(t,x_n(t))}{\|x_n\|_\infty}=0\,. \end{equation} We have, by Lemma~\ref{mintozero}, $v_n\leq 1=v_n(\bar t_n)$ and $\lim_n \min v_n = 0$. Clearly, $v_n'=y_n/\|x_n\|_\infty$ and, by Remark~\ref{r1}, $\|v_n'\|_\infty \leq e^{\kappa\pi/2}$. For this reason, up to a subsequence, $v_n$ converges {\em weakly} in $H^1$ and uniformly to a $T$-periodic non-negative function $v$, with $\|v\|_\infty =1$. Moreover, we can assume that $\lambda_n \to \bar \lambda$ and that all the solutions $v_n$ perform in the phase-plane the same number $K \in\{N,N+1\}$ of rotations around the origin. \medbreak We can find some instants $t^n_r$ and $s^n_r$ such that the solutions $v_n$, in the phase-plane $(x,y)$, cross the positive $y$ semi-axis at $t^n_r$ and the negative $y$ semi-axis at $s^n_r$, i.e. \begin{equation}\label{ts} t_1^n < s_1^n < t_2^n < s_2^n < \cdots < t_{K}^n < s_{K}^n < t_{K+1}^n = t_1^n + T \,, \end{equation} such that, for every $r\in\{1,\ldots,K\}$, $$ x_n(t) >0 \text{ for every } t\in(t_r^n, s_r^n)\,, $$ $$ x_n(t) <0 \text{ for every } t\in(s_r^n, t_{r+1}^n)\,. $$ Up to subsequences, we can assume that $t_r^n \to \check\xi_r$ and $s_r^n \to \hat\xi_r$ with $$ \check\xi_1 \leq \hat\xi_1 \leq \check\xi_2 \leq \hat\xi_2 \leq \cdots \leq\check\xi_{K} \leq \hat\xi_{K} \leq \check\xi_{K+1} = \check\xi_1 +T\,. $$ By Lemma~\ref{halflap}, we have $\lim_n t_{r+1}^n - s_r^n =0$, so that $\hat\xi _r =\check\xi_{r+1}$. Let us simply denote $\xi_r = \hat\xi_r = \check\xi_{r+1}$. Clearly, $v(\xi_r)=0$. By the estimate in Lemma~\ref{halflap}, we can conclude that necessarily $\xi_{r+1} - \xi_{r} = T/K$: indeed, each of the $K$ gaps $\xi_{r+1}-\xi_r$ belongs to $[T/(N+1)\,,\,T/N]$ and they sum up to $T$.
Since $\|v\|_\infty=1$, there exists an index $r$ such that $v$ is positive at some point of the interval $J_r=(\xi_r,\xi_{r+1})$. Let us state the following claims, which, for the reader's convenience, will be proved in Section~\ref{newsec}. We emphasize that the proof of these claims is a crucial part of the proof of Theorems~\ref{main} and~\ref{main2}. \begin{claim}\label{maxint} Suppose that $v$ is positive in at least one instant of an interval $J_r=(\xi_r,\xi_{r+1})$; then $v(t)>0$ for every $t\in J_{r}$. \end{claim} \begin{claim}\label{alwayspos} If {\bf(H)} holds, then we have $v>0$ in the interval $J_r=(\xi_r,\xi_{r+1})$, for every index $r$. Moreover, the right and left derivatives at $\xi_r$ exist, with $-v'(\xi_r^-)=v'(\xi_r^+)$. \end{claim} \medbreak We will now prove that $v$ solves $v'' + \mu_{K} v =0$ for almost every $t$. By the use of some functions with compact support in $J_r$, we can prove (see~\cite{FGsing} for details) that $v\in H^2_{loc}(J_r)\cap C^1(J_r)$ is a {\em weak} solution of $v'' + p(t) v =0$ in any interval $J_r$, where $p(t)$ is such that $\mu_N\leq p(t) \leq \mu_{N+1}$. We need to show that $p(t) = \mu_{K}$ for almost every $t\in J_r$. Consider one of the intervals $J_r$ in which $v$ remains positive (Claim~\ref{maxint} guarantees that $v$ remains positive in the whole interval $J_r$). For the reader's convenience, we will simply denote the endpoints of $J_r$ by $\alpha$ and $\beta$, i.e. we set $(\alpha,\beta)=J_r$. We have $\beta-\alpha=T/K$.
Introducing the modified polar coordinates $$ \begin{cases} v(t) = \frac{1}{\sqrt{\mu_{K}}} \, \tilde\rho(t) \cos( \tilde\vartheta(t))\\ v'(t) = \tilde\rho(t) \sin( \tilde\vartheta(t)) \end{cases} $$ we obtain, integrating $-\tilde\vartheta'$ on $[\alpha,\beta]$, if $K=N$ $$ \pi = \sqrt{\mu_N} \int_\alpha^\beta \frac{p(t)v(t)^2 + v'(t)^2}{\mu_{N}v(t)^2 + v'(t)^2} \,dt \geq \sqrt{\mu_{N}} \, \frac{T}{N} = \pi $$ and if $K=N+1$ $$ \pi = \sqrt{\mu_{N+1}} \int_\alpha^\beta \frac{p(t)v(t)^2 + v'(t)^2}{\mu_{N+1}v(t)^2 + v'(t)^2} \,dt \leq \sqrt{\mu_{N+1}} \, \frac{T}{N+1} = \pi\,, $$ thus giving us, in both cases, $p(t)=\mu_{K}$ for almost every $t\in[\alpha,\beta]$. In particular, for every $t\in J_r$, if {\bf (H)} holds \begin{equation}\label{thisformH1} v(t) = \sin \big(\sqrt{\mu_{K}} (t- \xi_r)\big)\,, \end{equation} thanks to Claim~\ref{alwayspos}, while, if it does not hold, we only have \begin{equation}\label{thisform} v(t) = c_r \sin \big(\sqrt{\mu_{K}} (t- \xi_r)\big)\,, \end{equation} with $c_r\in[0,1]$, and at least one of them is equal to $1$, since $\|v\|_\infty=1$. Moreover, we have necessarily $\lambda_n \to \bar\lambda=1$. \medbreak We still consider the interval $(\alpha,\beta)=J_r$ for a certain index $r$. The function $v$ is a solution of the Dirichlet problem: $$ \begin{cases} v'' + \mu_{K} \, v =0\\ v(\alpha)=0, \quad v(\beta)=0\,. \end{cases} $$ Denote by $\pscal \cdot\cdot$ and $\| \cdot \|_2$, respectively, the scalar product and the norm in $L^2(\alpha,\beta)$. Call $\phi_K$ the solution of the previous Dirichlet problem with $\| \phi_K\|_2=1$ and introduce the projections of $x_n$ and $v_n$ on the eigenspace generated by $\phi_K$: $$ x_n^0 = \pscal{x_n}{\phi_K} \phi_K \quad \aand \quad v_n^0 = \pscal{v_n}{\phi_K} \phi_K \,. $$ Since $v=\|v\|_2 \phi_{K}$, we have $v_n^0 \to v$ uniformly in $[\alpha,\beta]$ and $v_n^0\geq 0$, for $n$ sufficiently large.
Multiplying equation~\eqref{homotopy} by $v_n^0$ and integrating on the interval $[\alpha,\beta]$ we obtain $$ \begin{array}{l} \ds \int_\alpha^\beta g_{\lambda_n}(t,x_n(t)) v_n^0(t) \,dt = -\int_\alpha^\beta (x_n^0)''(t) v^0_n(t)\,dt\\ [3mm] \ds\hspace{10mm}= - \int_\alpha^\beta x_n^0(t) (v_n^0)''(t) \,dt = \int_\alpha^\beta \mu_{K} x_n^0(t) v_n^0(t) \,dt \\ [3mm] \ds\hspace{20mm}= \int_\alpha^\beta \mu_{K} x_n(t) v_n^0(t) \,dt \,. \end{array} $$ Defining $r_n(t,x)= g_{\lambda_n}(t,x) - \mu_{K} x$ we have $$ \int_{\alpha}^{\beta} r_n(t,x_n(t)) v_n^0(t) \,dt = 0 $$ and, applying Fatou's lemma, $$ \int_{\alpha}^{\beta} \limsup_{n\to\infty} r_n(t,x_n(t)) v_n^0(t) \,dt \geq 0 \geq \int_{\alpha}^{\beta} \liminf_{n\to\infty} r_n(t,x_n(t)) v_n^0(t) \,dt \,. $$ It is easy to see that for every $s_0\in(\alpha,\beta)$ it is possible to find $\bar n(s_0)$ such that $x_n(s_0)>0$ for every $n>\bar n(s_0)$. So, pointwise, for $n$ large enough, $r_n(t,x_n(t))=\lambda_n f(t,x_n(t)) + (1-\lambda_n) \mu x_n(t)-\mu_{K} x_n(t)$. Hence, since $v_n^0 \to v$ and $\lambda_n \to 1$, we have, if $K=N$, $$ \int_{\alpha}^{\beta} \liminf_{x\to+\infty} [f(t,x)-\mu_{N} x] v(t) \,dt \leq 0 $$ and, if $K=N+1$, $$ \int_{\alpha}^{\beta} \limsup_{x\to+\infty} [f(t,x)-\mu_{N+1} x] v(t) \,dt \geq 0\,. $$ The previous estimates contradict the hypotheses in~\eqref{LLcond1} if $K=N$ or in~\eqref{LLcond2} if $K=N+1$. Notice that, by Claim~\ref{alwayspos}, if {\bf(H)} holds then this reasoning can be repeated for every interval $J_r$, thus obtaining the contradiction, since $v$ is as in~\eqref{thisformH1} and not merely as in~\eqref{thisform}. \cvd \subsection{Proof of Claims~\ref{maxint} and~\ref{alwayspos}}\label{newsec} In this section we prove Claims~\ref{maxint} and~\ref{alwayspos}. We have preferred to postpone their proofs because the arguments we will use are totally independent of the rest of the proof of Proposition~\ref{apb}.
This section is inspired by some recent results obtained by the second author in~\cite{S10} for impact systems at resonance (see also~\cite{FS3}). The functions $v_n=x_n/\|x_n\|_\infty$ solve equation~\eqref{approx}, which we rewrite in the simpler form $$ v_n'' + h_n(t,v_n) =0\,, $$ where, for every $n$, \begin{equation}\label{hgood} |h_n(t,v)|\leq d (v + 1) \quad\text{ for every } t\in[0,T] \text{ and } v\geq 0 \,, \end{equation} for a suitable constant $d>0$. \begin{remark}\label{C1conv} Let $[a,b]\subset J_r$ be such that $v$ is positive in $[a,b]$. Then the sequence $v_n$ $C^1$-converges to $v$ in $[a,b]$. \end{remark} \proof We have already seen that $(v_n)_n$ is bounded in $C^1$, and by~\eqref{hgood}, as an immediate consequence, we get $|v_n''(t)|\leq |h_n(t,v_n)|\leq 2d$ for every $t\in[a,b]$. So, being $v_n$ bounded in $C^2$ in such an interval, by the Ascoli-Arzel\`a theorem we have that $v_n$ $C^1$-converges to $v$ in $[a,b]$. \cvd \medbreak We can now prove the first of the two claims. \medbreak \noindent\textit{Proof of Claim~\ref{maxint}.} Let $\bar s\in J_{r}$ be the point of maximum of $v$ restricted to the interval $J_r$, with $v(\bar s)=\bar v$. Suppose that $v$ vanishes at $s_0\in J_{r}$, and let $U_0$ be a closed neighborhood of $s_0$ contained in $J_{r}$. We assume without loss of generality that $U_0\subset(\bar s,\xi_{r+1})$; the case $U_0\subset(\xi_{r},\bar s)$ follows similarly. Notice that $v_n'(t)<-e^{-\kappa\pi/2}\bar v/2<0$ in $U_0$ as a consequence of Lemma~\ref{lapontheright} (cf. Remark~\ref{r1}), so that Remark~\ref{C1conv} forces $v$ to be negative on a right neighborhood of $s_0$, thus giving us a contradiction. \cvd \medbreak The following lemma gives us the estimates on the left and right derivatives when $v$ vanishes.
\begin{lemma}\label{der} Suppose that $v$ is positive in the interval $J_r=(\xi_r,\xi_{r+1})$; then the following limits exist: $$ v'(\xi_r^+)=\lim_{t\to\xi_r^+} v'(t)>0 \quad \text{and} \quad v'(\xi_{r+1}^-)=\lim_{t\to\xi_{r+1}^-} v'(t)<0\,. $$ \end{lemma} \proof We will prove only the existence of the second limit; the other case follows similarly. In the interval $J_r$ the function $v$ has a positive maximum, thus we can assume that $\max_{J_r} v > M$ and $\max_{J_r} v_n > M$, for a suitable constant $M\in(0,1)$ and for $n$ large enough. Using Remark~\ref{r1}, we obtain that $-v_n'(s_{r+1}^n)\in [M/c,c]$, where $c=e^{\kappa\pi/2}$. So, up to a subsequence, we can assume that $\lim_n -v_n'(s_{r+1}^n) = \bar y>0$. We now prove that $\lim_{t\to\xi_{r+1}^-} -v'(t)= \bar y$. Fix $\epsilon>0$ and $s\in(0,\epsilon)$. For every $n$ sufficiently large, the following inequalities hold: $$ \begin{array}{l} s_{r+1}^n > \xi_{r+1} -s > s_{r+1}^n - 2\epsilon \,, \\[1mm] |v_n'(\xi_{r+1}-s) - v'(\xi_{r+1}-s) | <\epsilon\,, \\[1mm] |v_n'(s_{r+1}^n)+\bar y|<\epsilon\,. \end{array} $$ Moreover, by~\eqref{hgood}, for every $\delta>0$, $$ |v_n'(s_{r+1}^n) - v_n'(s_{r+1}^n-\delta)| < 2d\delta\,, $$ thus giving us $$ |v'(\xi_{r+1}-s)+\bar y| < (4d+2) \epsilon\,. $$ The previous inequality holds for every $\epsilon>0$ and $s\in(0,\epsilon)$. The lemma is thus proved. \cvd \medbreak In what follows we study how the validity of hypothesis {\bf (H)} gives more information on the function $v$. \medbreak \begin{lemma}\label{goodreturn} Assume {\bf (H)}; then for every index $r$, $$ \lim_n \frac{-v_n'(s^n_r)}{v_n'(t^n_{r+1})} = \lim_n \frac{-x_n'(s^n_r)}{x_n'(t^n_{r+1})} = 1\,. $$ \end{lemma} \proof Fix $r$ and define the interval $\mathcal I_n=(s^n_r,t^n_{r+1})$, whose length tends to zero as $n$ becomes large.
Using the notation introduced in Figure~\ref{figlapt}, by the estimates in Lemma~\ref{lapontheright} we can obtain $$ a^-(y_n(t_4))\leq \sqrt{y_n(t_5)^2+d^2} \leq a^+(y_n(t_4)) \,, $$ where $a^-(\upsilon)=e^{-a(\upsilon)} \upsilon$ and $a^+(\upsilon)=e^{a(\upsilon)} \upsilon$, with $a(\upsilon)=\kappa \arcsin(d/\upsilon)$. Moreover, by the same argument which gave us~\eqref{elle}, we have $$ \mathcal T_{2,1}^{\mathcal I_n}(y_n(t_5)) \leq y_n(t_7) \leq \mathcal T_{1,2}^{\mathcal I_n}(y_n(t_5)) \,, $$ where $$ \mathcal T_{i,j}^{\mathcal I_n}(\alpha) = \sqrt{2 F_i^{\mathcal I_n}\left( (F_j^{\mathcal I_n})^{-1} (\alpha^2/2) \right)} \,, $$ with $F_i^{\mathcal I_n}=F_{i,\zeta_n,\tau_n}$ defined as in {\bf (H)}, being $\mathcal I(\tau_n,\zeta_n)=\mathcal I_n$. Then, again by Lemma~\ref{lapontheright}, we have $$ a^-\left(\sqrt{y_n(t_7)^2+d^2}\right)\leq y_n(t_8) \leq a^+\left(\sqrt{y_n(t_7)^2+d^2}\right) \,. $$ Notice that $\lim_{\upsilon\to \infty} a^\pm(\upsilon)/\upsilon=1$, and by {\bf (H)} we also have $$ \lim_n \lim_{\alpha\to \infty} \frac{\mathcal T_{i,j}^{\mathcal I_n}(\alpha)}{\alpha}=1\,. $$ Hence, the desired estimate follows. \cvd \medbreak The previous estimate is the main ingredient we need in order to prove the following lemma. \begin{lemma}\label{pos} Assume {\bf(H)}. Suppose that $v$ is positive at a certain $t_0\in J_r=(\xi_{r},\xi_{r+1})$; then $\xi_r$ and $\xi_{r+1}$ are isolated zeros. Hence, by Claim~\ref{maxint}, as an immediate consequence $v$ is positive in every interval $J_r$. \end{lemma} \proof We just prove that $\xi_{r+1}$ is an isolated zero. By the argument presented in the proof of Lemma~\ref{der}, if the left derivative is $v'(\xi_{r+1}^-)=-\eta<0$, then we can assume $-v_n'(s_{r+1}^n)>\eta/2$ for $n$ large enough. Suppose by contradiction that there exists $\ee_0\in(0,\eta/8d)$, with $d$ as in~\eqref{hgood}, such that $v(\xi_{r+1}+\ee_0)=0$.
For every $n$ large enough we have $|t_{r+2}^n-\xi_{r+1}|<\ee_0/4$ and, by Lemma~\ref{goodreturn}, $v_n'(t_{r+2}^n)>\eta/2$. Being $|v_n''|\leq 2d$ when $v_n$ is positive, we can show that if $s<\eta/4d$ then $v_n(t_{r+2}^n+s)>s\, \eta/4$. By construction $\xi_{r+1}+\ee_0=t_{r+2}^n+s_0$ for a certain $s_0\in(\ee_0/2,\eta/4d)$, so that we obtain $v_n(\xi_{r+1}+\ee_0)=v_n(t_{r+2}^n+s_0)>\eta\ee_0/8$ for every $n$ large enough, thus contradicting $v_n\to v$. \cvd \medbreak We can now prove the remaining claim. \medbreak \textit{Proof of Claim~\ref{alwayspos}.} The first part of the statement is given by Lem\-ma~\ref{pos}. The estimate on the derivatives easily follows by Lemmas~\ref{der} and~\ref{goodreturn}, remembering that in the proof of Lemma~\ref{der} we have shown that $\lim_n v_n'(s^n_r) = v'(\xi_r^-)$ and $\lim_n v_n'(t^n_{r+1}) = v'(\xi_r^+)$. \cvd \medbreak \section{Nonlinearities with a singularity and\\ radially symmetric systems}\label{secrad} In this section we provide an existence result for periodic solutions to scalar differential equations with a singularity, in the spirit of Theorems~\ref{main} and~\ref{main2}. In particular we consider the differential equation \begin{equation}\label{nleq2} x''+ f(t,x)=0 \,, \end{equation} where $f: \R \times (0,+\infty) \to \R$ is a continuous function, $T$-periodic in the first variable. The nonlinearity $f$ presents a {\em strong} singularity at $x=0$, in the following sense. \begin{itemize} \item[{\bf(${\rm A}^0$)}]{\sl There exist $\delta>0$ and two continuous functions $f_1, f_2:(0,\delta) \to \R$ such that $$ f_1(x) < f(t,x) < f_2(x) < 0, \quad \forevery t\in\R \aand x\in(0,\delta) \,, $$ satisfying $$ \lim_{x\to0^+} f_i(x) = -\infty \quad \aand \quad \int_0^{\delta} f_i(\xi)\,d\xi = -\infty\,, \quad i=1,2\,. $$ } \end{itemize} We assume that the nonlinearity $f$ has an asymptotically linear growth at infinity, as follows.
\begin{itemize} \item[{\bf(${\rm A}^\infty$)}]{\sl There exist a constant $c>0$ and an integer $N>0$ such that $$ \mu_N x - c \leq f(t,x) \leq \mu_{N+1} x + c\,, $$ for every $x>1$ and every $t\in[0,T]$. } \end{itemize} The analogue of Theorem~\ref{main} can be formulated for the differential equation~\eqref{nleq2} in this way. \begin{theorem}\label{mainweak2} Assume {\bf($A^0$)} and {\bf($A^\infty$)} and the Landesman-Lazer conditions \begin{equation}\label{LLcond1a2} \int_0^T \liminf_{x\to+\infty} (f(t,x)-\mu_{N} x ) \phi_N(t+\tau) \, dt > 0 \,, \end{equation} \begin{equation}\label{LLcond2a2} \int_0^T \limsup_{x\to+\infty} (f(t,x)-\mu_{N+1} x ) \phi_{N+1}(t+\tau) \, dt < 0 \,, \end{equation} for every $\tau\in[0,T]$, where $\phi_j$ is defined as $$ \phi_j(t) = \begin{cases} \sin (\sqrt{\mu_j} t) & t\in[0, T/j]\\ 0 & t\in[T/j, T] \end{cases} $$ extended by periodicity. Then, equation~\eqref{nleq2} has at least one $T$-periodic solution. \end{theorem} As in the previous section, we can introduce an additional assumption on the behavior of $f$ near zero, in order to obtain a different version of the previous theorem. \begin{itemize} \item[{\bf ($\widetilde{\rm H}$)}] {\sl For every $\tau\in[0,T]$ and for every $\zeta>0$, consider the set $\mathcal I(\tau,\zeta)=[\tau-\zeta,\tau+\zeta]$ and the functions $$ f_{1,\tau,\zeta}(x) = \min_{t\in \mathcal I(\tau,\zeta)} f(t,x) \qquad f_{2,\tau,\zeta}(x) = \max_{t\in \mathcal I(\tau,\zeta)} f(t,x) $$ with their primitives $F_{i,\tau,\zeta}(x)=\int_\delta^x f_{i,\tau,\zeta}(\xi) \,d\xi$. We assume that $$ \lim_{\zeta \to 0} \left( \lim_{x\to 0} \frac{F_{2,\tau,\zeta}(x)}{F_{1,\tau,\zeta}(x)} \right) = 1 $$ uniformly in $\tau\in[0,T]$. } \end{itemize} Hence, the analogue of Theorem~\ref{main2} is the following.
\begin{theorem}\label{mainstrong2} Assume {\bf($A^0$)}, {\bf($A^\infty$)} and {\bf ($\widetilde{\rm H}$)}, and the Landesman-Lazer conditions~\eqref{LLcond1a2} and~\eqref{LLcond2a2}, where $\phi_j$ is now defined as $$ \phi_j (t)= \left| \sin (\sqrt{\mu_j} t) \right| \,. $$ Then, equation~\eqref{nleq2} has at least one $T$-periodic solution. \end{theorem} Let us exhibit some nonlinearities satisfying (or violating) hypothesis {\bf ($\widetilde{\rm H}$)}; cf. Example~\ref{ex1}. \begin{example}\label{ex2} Suppose that there exists a function $h:(0,+\infty)\to \R$ satisfying $$ \lim_{x\to0^+} h(x) = -\infty\,, $$ such that $$ 0< \liminf_{x\to 0} \frac {f(t,x)}{h(x)} \leq \limsup_{x\to 0} \frac {f(t,x)}{h(x)} < +\infty \,. $$ Then, {\bf ($\widetilde{\rm H}$)} holds. As a particular case, suppose that $f$ can be split (for $0<x<1$) as $f(t,x)=q(t)h(x)+p(t,x)$ with $q(t)>0$ and $\lim\limits_{x\to 0} \frac{p(t,x)}{h(x)}=0$ uniformly in $t$. In particular we can consider nonlinearities not depending on $t$ when $0<x<1$, or nonlinearities such as $f(t,x) = -(1+\sin^2(t)) x^{-5} - x^{-3}$, or $f(t,x)= -x^{-3} - \sin^2(t) x^{-2}$. Otherwise, if for example $f(t,x) = -x^{-3} - \sin^2(t) x^{-5}$ when $0<x<1$, then $f$ does not satisfy {\bf ($\widetilde{\rm H}$)}. \end{example} The previous theorems can be viewed as the generalization to nonlinearities {\em near resonance} of the result provided by del Pino, Man\'asevich and Montero in \cite{dPMM}. Recently, an existence result relying on Lazer-Leach conditions has been proved by Wang in \cite{WWW}; we also recall the result obtained by Fonda and Garrione in \cite{FGsing}, where the authors provide a Landesman-Lazer condition {\sl on one side}, roughly speaking with respect to the smaller eigenvalue. In particular the previous theorems can be viewed as an answer to~\cite[Remark~2.5]{FGsing}.
\subsection{Proof of Theorems~\ref{mainweak2} and~\ref{mainstrong2}}\label{pf2} The proof of Theorems~\ref{mainweak2} and~\ref{mainstrong2} follows step by step the proof of Theorems~\ref{main} and~\ref{main2}, with some suitable adjustments. Hence, we will provide only a sketch. We refer to~\cite{FTonNA} for detailed computations in this setting. Let us underline that, up to a rescaling of the $x$ variable, it is not restrictive to assume $\delta=1$ in {\bf ($A^0$)}. In \cite{FTonNA}, Fonda and Toader provide an {\em a priori bound} for solutions of equation~\eqref{nleq2} when the nonlinearity satisfies ($A^0$) and the {\em nonresonance} condition $$ \mu_N < \mu_\downarrow \leq \liminf_{x\to+\infty} \frac{f(t,x)}{x} \leq \limsup_{x\to+\infty} \frac{f(t,x)}{x}\leq \mu_\uparrow < \mu_{N+1} \,. $$ As a particular case, this applies to the nonlinearity $$ h(t,x) = \begin{cases} f(t,x) & x<1/2\\ (2x-1) \mu x + (2-2x) f(t,x) & 1/2\leq x\leq 1\\ \mu x & x> 1 \end{cases} $$ with $\mu=(\mu_N+\mu_{N+1})/2$. Arguing as in Section~\ref{secproof}, we can introduce a family of differential equations \begin{equation}\label{homotopy2} x'' + g_\lambda(t,x) = 0\,, \end{equation} as in~\eqref{homotopy}, and, by standard arguments in degree theory, the proof is easily obtained once we find an {\em a priori bound} for the solutions of~\eqref{homotopy2}. Arguing as in Section~\ref{prel}, we consider the corresponding system \begin{equation}\label{planar2} \begin{cases} x'=y\\ -y'= g_\lambda(t,x) \,, \end{cases} \end{equation} which is now defined for $(x,y)\in(0,+\infty)\times \R$. We consider the function $$ \mathcal N(x,y) = \frac{1}{x^2} + x^2 + y^2\,, $$ so that, as in~\eqref{Rlarge}, we say that \begin{equation}\label{Nlarge} (x,y) \text{ is $\mathcal N_0$-large, if } \mathcal N(x,y)>\mathcal N_0 \text{ for every } t\in[0,T]\,.
\end{equation} All the results contained in Section~\ref{prel} (suitably adjusted) can be reformulated by studying the phase portrait when $0<x<1$ and when $x>1$. We list some of them for the reader's convenience. \begin{lemma}\label{fullsingrem} There exists $\mathcal N_0$ sufficiently large such that every $\mathcal N_0$-large solution of~\eqref{planar2} rotates clockwise around the point $(1,0)$ performing exactly $N$ or $N+1$ rotations. \end{lemma} \begin{lemma}\label{bbb} For every $\ee>0$ there exists $\mathcal N_\ee$ such that every $\mathcal N_\ee$-large solution $(x,y)$ of~\eqref{planar2}, performing a complete rotation around the point $(1,0)$ in the interval $[t_0,t_2]$, satisfies $$ t_1-t_0 \in \left(\frac{T}{N+1}-\ee,\frac TN+\ee\right ) \aand t_2-t_1 <\ee\,, $$ for a certain $t_1\in(t_0,t_2)$, where $x>1$ in the interval $(t_0,t_1)$ and $0<x<1$ in $(t_1,t_2)$. \end{lemma} We refer to \cite{FTonNA} for the detailed computations leading to the previous lemmas. We simply underline that the dynamics when $0<x<1$ (respectively, when $x>1$) recalls the dynamics of the {\em one-sided superlinear} scalar equation previously studied when $x<0$ (resp.\ when $x>0$). By constructing suitable guiding functions we can prove the following estimates. Notice that guiding functions were adopted also in \cite{FTonNA}, exploiting a {\em general method} presented by Fonda and the author in \cite{FS1}. \begin{lemma}\label{aaa} There exists $\mathscr N(\mathcal N_0) > \mathcal N_0$ such that every $T$-periodic solution of~\eqref{planar2} satisfying $\mathcal N(x(t_0),y(t_0)) > \mathscr N(\mathcal N_0)$ at a certain time $t_0$ is a $\mathcal N_0$-large solution. \end{lemma} \begin{lemma}\label{aaaa} Let $(x_n,y_n)$ be a sequence of $T$-periodic solutions of~\eqref{planar2} such that $\lim_n \max_{[0,T]} \mathcal N(x_n(t),y_n(t)) = +\infty$. Then $\lim_n \|x_n\|_\infty = +\infty$.
\end{lemma} \medbreak The four preceding lemmas are the main ingredients to obtain the desired a priori bound, which is given by the next statement. \begin{proposition}\label{apb2} There exists $\mathcal N_{good}$ sufficiently large, such that every $T$-periodic solution of~\eqref{planar2} satisfies $\mathcal N(x(t),y(t)) < \mathcal N_{good}$ for every $t\in[0,T]$. \end{proposition} The proof follows the one of Proposition~\ref{apb}: we assume the existence of a sequence of solutions {\em arbitrarily large} in the sense of~\eqref{Nlarge} and we introduce the normalized sequence $v_n=x_n/\|x_n\|_\infty$, converging to a certain non-negative function $v$. We can introduce the instants $t^n_r$ and $s^n_r$ as in~\eqref{ts}, requiring now that $x_n(t) > 1$ for every $t\in(t^n_r,s^n_r)$ and $0<x_n(t) < 1$ for every $t\in(s^n_r, t^n_{r+1})$. Similarly, using Lemma~\ref{bbb}, we can obtain the sequence of instants $\xi_r$ such that $v(\xi_r)=0$, since $v_n(t^n_r)=v_n(s^n_r)=1/\|x_n\|_\infty \to 0$ as $n\to \infty$. So, whenever we need to consider an interval where $v$ is positive, we can assume the index $n$ sufficiently large to have $x_n>1$, and argue as in Section~\ref{priori}. The analogues of the results in Section~\ref{newsec} follow similarly. \section{Final remarks}\label{final} We now present an application of Theorems~\ref{mainweak2} and~\ref{mainstrong2} to radially symmetric systems, exploiting a general technique introduced in \cite{FTonNA,FTonProc} by Fonda and Toader. We consider the differential equation \begin{equation}\label{eqrad} {\bf x}'' + f(t,|{\bf x}|) \frac{{\bf x}}{|{\bf x} |}=0\,, \end{equation} where ${\bf x}\in \R^d$ and $f:\R\times(0,+\infty)\to\R$ is a continuous function, $T$-periodic in the first variable.
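Note that the force field in~\eqref{eqrad} is central, so the classical computation of central-force dynamics applies. Writing a planar solution as ${\bf x}(t)=\rho(t)\,(\cos\vartheta(t),\sin\vartheta(t))$ and projecting~\eqref{eqrad} on the radial and tangential directions, we find
$$
\rho''-\rho\,\vartheta'^2+f(t,\rho)=0\,,\qquad
2\rho'\vartheta'+\rho\,\vartheta''=\frac{1}{\rho}\,\big(\rho^2\vartheta'\big)'=0\,,
$$
so that $L=\rho^2\vartheta'$ is constant along the motion, and the centrifugal term becomes $\rho\,\vartheta'^2=L^2/\rho^3$.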
By the radial symmetry of the equation, every solution of~\eqref{eqrad} is contained in a plane, so we can pass to polar coordinates and consider solutions to the following system \begin{equation}\label{sist1} \begin{cases} \ds \rho'' - \frac{L^2}{\rho^3}+f(t,\rho)=0 \qquad \rho>0 \\[2mm] \rho^2 \vartheta' = L \,, \end{cases} \end{equation} where $L\in \R$ is the angular momentum. We are interested in the existence of periodic solutions performing a certain number $\nu$ of revolutions around the origin in the time $kT$, being $T$-periodic in the $\rho$ variable, i.e.\ such that \begin{equation}\label{percond} \begin{array}{l} \ds \rho(t+T)=\rho(t)\,,\\ \ds \vartheta(t+kT)=\vartheta(t)+2\pi\nu\,. \end{array} \end{equation} Applying the Fonda-Toader {\em general principle for rotating solutions} (cf. \cite[Theorem~2]{FTonProc}), we obtain as a corollary the following theorem, extending to nonlinearities {\em near resonance} the previous result provided in \cite[Theorem~2]{FTonNA} by the same authors. \begin{theorem}\label{mainrad} If the nonlinearity $f$ in~\eqref{eqrad} satisfies the hypotheses of Theorem~\ref{mainweak2} (or Theorem~\ref{mainstrong2}), then for every integer $\nu$ there exists an integer $k_\nu$ such that, for every integer $k\geq k_\nu$, equation~\eqref{eqrad} has a $kT$-periodic solution ${\bf x}_{k,\nu}$ which makes exactly $\nu$ revolutions around the origin in the period~$kT$. In particular the corresponding solution of system~\eqref{sist1} satisfies the periodicity conditions~\eqref{percond}. Moreover there exists a constant $R$, independent of the choice of $\nu$, such that $1/R<|{\bf x}_{k,\nu}(t)|<R$ for every $t$ and, if $L_{k,\nu}$ denotes the angular momentum of the solution ${\bf x}_{k,\nu}$, then $\lim_{k\to\infty} L_{k,\nu} = 0$.
\end{theorem} \newcommand\mybib[8]{{\bibitem{#1} {\sc #2}, {\em #3}, {#4}~{\bf #5} ({#6}), {#7}--{#8}.}} \newcommand\mybibb[4]{{\bibitem{#1} {\sc #2}, {\em #3}, {#4}.}} \bigbreak \begin{tabular}{l} Andrea Sfecci\\ Universit\`a Politecnica delle Marche\\ Dipartimento di Ingegneria Industriale e Scienze Matematiche\\ Via Brecce Bianche 12\\ 60131 Ancona\\ Italy\\ e-mail: [email protected] \end{tabular} \medbreak \noindent Mathematics Subject Classification: 34C25 \medbreak \noindent Keywords: periodic solutions, resonance, superlinear growth, Landesman-Lazer condition, singularity, radially symmetric systems. \end{document}
\begin{document} \title{ Heteroclinic Cycles of a Symmetric May-Leonard Competition Model with Seasonal Succession \thanks{This research is partially supported by NSF of China No.11871231 and Fujian Province University Key Laboratory of Computational Science, School of Mathematical Sciences, Huaqiao University, Quanzhou, China} \date{\empty} \author{ {Xizhuang Xie$^{a,b}$, Lin Niu$^b$\thanks{The corresponding author, \emph {E-mail address}: [email protected]} } \\ \\[3pt] { \small $^{a}$School of Mathematical Sciences, Huaqiao University}\\ {\small Quanzhou, Fujian, 362021, P. R. China}\\ {\small $^{b}$School of Mathematical Sciences}\\ { \small University of Science and Technology of China}\\ { \small Hefei, Anhui, 230026, P. R. China}\\ } }\maketitle \begin{abstract} In this paper, we are concerned with the stability of heteroclinic cycles of the symmetric May-Leonard competition model with seasonal succession. Sufficient conditions for stability of heteroclinic cycles are obtained. Meanwhile, we present the explicit expression of the carrying simplex in a special case. By taking $\varphi$ as the switching strategy parameter for a given example, the bifurcation of stability of the heteroclinic cycle is investigated. We also find that there are rich dynamics (including nontrivial periodic solutions, invariant closed curves and heteroclinic cycles) for such a new system via numerical simulation. Biologically, the result is interesting as a caricature of the complexities that seasonal succession can introduce into the classical May-Leonard competitive model.
\par \textbf{Keywords}: May-Leonard competitive model, Seasonal succession, Heteroclinic Cycles, Carrying simplex, Bifurcation \par \textbf{AMS Subject Classification (2020)}: 34C12, 34C25, 34D23, 37C65, 92D25 \end{abstract} \section{Introduction} Lotka-Volterra competition models are systems of differential equations describing two or more species that compete for the same resources or habitat. There have been extensive studies and applications of Lotka-Volterra competition models (see \cite{C1980,SX2012,JN22017,K1974,M2002,M2007,HL1995,YX2013}). In particular, May and Leonard \cite{ML1975} focused on the case that all three competing species have the same intrinsic growth rates and compete in the ``rock-scissors-paper'' manner, which is modeled by the following system \begin{equation} \begin{cases} \label{ML} \frac{dx_1}{dt}=x_1(1-x_1-\alpha x_2-\beta x_3),\\ \frac{dx_2}{dt}=x_2(1-\beta x_1-x_2-\alpha x_3),\\ \frac{dx_3}{dt}=x_3(1-\alpha x_1-\beta x_2-x_3),\\ (x_1(0),x_2(0),x_3(0))=x^0 \in \mathbb{R}_+^{3}, \end{cases} \end{equation} where $0<\alpha<1<\beta$ and $\alpha+\beta>2$. They found numerically that system (\ref{ML}) exhibits a general class of solutions with non-periodic oscillations of bounded amplitude but ever-increasing cycle time; asymptotically, the system cycles from being composed almost wholly of population 1, to almost wholly 2, to almost wholly 3, back to almost wholly 1, etc. The rigorous proof is due to Schuster, Sigmund and Wolf \cite{SSW1979}. This interesting cyclical fluctuation phenomenon later became known as the ``May-Leonard phenomenon''. In \cite{CHW1998}, Chi, Hsu and Wu obtained a complete classification of the global dynamics of the asymmetric May-Leonard competition model, and exhibited the May-Leonard phenomenon under certain conditions.
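To visualize the May-Leonard phenomenon recalled above, the following minimal numerical sketch integrates system (\ref{ML}) with a plain fourth-order Runge-Kutta scheme. The sample values $\alpha=0.8$, $\beta=1.3$ (so that $0<\alpha<1<\beta$ and $\alpha+\beta>2$), the initial point and the step size are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed parameters) of the symmetric May-Leonard system (ML).

def ml_field(x, alpha=0.8, beta=1.3):
    """Right-hand side of the symmetric May-Leonard system (ML)."""
    x1, x2, x3 = x
    return (x1 * (1 - x1 - alpha * x2 - beta * x3),
            x2 * (1 - beta * x1 - x2 - alpha * x3),
            x3 * (1 - alpha * x1 - beta * x2 - x3))

def rk4_step(x, h):
    """One classical fourth-order Runge-Kutta step for (ML)."""
    k1 = ml_field(x)
    k2 = ml_field(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = ml_field(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
    k4 = ml_field(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h * (a + 2 * b + 2 * c + d) / 6
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# The coexistence equilibrium E = (1,1,1)/(1+alpha+beta) annihilates the field.
E = tuple(1 / (1 + 0.8 + 1.3) for _ in range(3))
assert all(abs(v) < 1e-12 for v in ml_field(E))

# An orbit starting off the diagonal stays positive and bounded while
# oscillating between states dominated by each of the three species in turn.
x = (0.5, 0.6, 0.4)
for _ in range(20000):       # integrate up to t = 200 with step h = 0.01
    x = rk4_step(x, 0.01)
    assert all(0 < xi < 2 for xi in x)
```

Storing and plotting the three components of this orbit against time should display the cyclic dominance of the three populations described by May and Leonard.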
For discrete-time cases, Roeger and Allen \cite{RA2004} studied two discrete-time models applicable to three competing plant species and proved that dynamics similar to those of system (\ref{ML}) also occur. Recently, Jiang et al. investigated three-dimensional Kolmogorov competitive systems admitting a carrying simplex, and showed that these systems have heteroclinic cycles (see \cite{JLD2014,JL2016,JL2017,JN22017}), which implies that the May-Leonard phenomenon occurs and greatly enriches the existing results on heteroclinic cycles for competitive systems. Furthermore, Jiang, Niu and Wang \cite{JNW2016} analyzed the effects of heteroclinic cycles and the interplay of heteroclinic attractors or repellers on the boundary of the carrying simplices for three-dimensional competitive maps. For more results on heteroclinic cycles, we refer to \cite{FS1991,KB1995,KB2004,OWY2008,HB2013,GJNY2018,GJNY2020} and references therein. \par On the other hand, alternation of seasons is a very common phenomenon in nature. The alternations may be fast (e.g., daily light change, Litchman and Klausmeier \cite{LK2001}), intermediate (e.g., tidal cycles and annual seasons, DeAngelis et al.\ \cite{D2009}), or slow (e.g., El Ni\~{n}o events). Due to the seasonal alternation, populations experience a periodic external environment involving temperature, rainfall, humidity and wind. One impressive example occurs in the phytoplankton and zooplankton of temperate lakes, where the species grow during the warmer months and die off or form resting stages in the winter. Such a phenomenon is called seasonal succession in Sommer et al.\ \cite{SUZLD1986}. In \cite{HT1995}, Hu and Tessier's study on competitive interactions of two Daphnia species in Gull Lake suggested that the strength of interspecific and intraspecific competition varied with season. This seasonal shift in the nature of competitive interactions provides an explanation for seasonal succession in this Daphnia assemblage.
\par In fact, seasonal forcing is a major cause of nonequilibrium dynamics. It has been a fascinating subject for ecologists and mathematicians to explore the modeling of nonequilibrium dynamics by means of seasonal succession. As early as 1974, Koch \cite{K1974} considered a two-species competition model in a seasonal environment, where good and bad seasons alternate. In the good season, from spring to autumn, the species follow the Lotka-Volterra competitive interaction, while in the bad season, from autumn through winter to spring, the species are reduced by constant factors due to adverse conditions. This research showed that although a chance fluctuation may affect the relative growth of the two species in a particular year, leading to a change in the input ratio of the populations the following spring, under fairly general circumstances this affects the growth cycle of the two species in just the right way to return the system towards its original cycle, so as to permit stable coexistence. Litchman and Klausmeier \cite{LK2001} analyzed a competition model of two species for a single nutrient under fluctuating light with seasonal succession in the chemostat. Later, Klausmeier \cite{K2010} described a novel approach to modeling seasonally forced food webs (called successional state dynamics, SSD). The approach treats succession as a series of state transitions driven by both the internal dynamics of species interactions and external forcing. It is applicable to communities where species dynamics are fast relative to the external forcing, such as plankton and other microbes, diseases, and some insect communities. By applying this approach to the classical Rosenzweig-MacArthur predator-prey model, he found numerically that the forced dynamics are more complicated, showing multiannual cycles or apparent chaos.
In Steiner et al.\ \cite{SSH2009}, the SSD modeling approach was then employed to generate analytical predictions of the effects of altered seasonality on species persistence and the timing of community state transitions, which highlighted the utility of the SSD modeling approach as a framework for predicting the effects of altered seasonality on the structure and dynamics of multitrophic communities. On the theoretical side, Hsu and Zhao \cite{SX2012} first studied the global dynamics of a two-species Lotka-Volterra competition model with seasonal succession and obtained a complete classification of the global dynamics via the theory of monotone dynamical systems. Xiao \cite{X2016} discussed the effect of seasonal harvesting on the survival of the population for a class of population models with seasonal constant-yield harvesting. In recent years, the SSD modeling approach has attracted considerable attention in theoretical ecology, see \cite{YX2013,TXZ2017,BBR2017,HHS2018} and references therein. Although a seasonal succession model looks like a periodic system, its coefficients are discontinuous and periodic in time $t$, so many known results on periodic systems with continuous periodic coefficients are not applicable to these models (e.g.\ Cushing \cite{C1980}). Besides, it is still challenging to understand the mechanisms of seasonal succession and their impacts on the dynamics of communities and ecosystems.
\par Motivated by the SSD modeling approach in Klausmeier \cite{K2010}, we propose a three-dimensional May-Leonard competition model with seasonal succession as follows: \begin{equation} \begin{cases} \label{3ML} \frac{dx_i}{dt}=-\lambda_i x_i,\qquad \quad \qquad \ m \omega \leq t\leq m \omega+(1-\varphi)\omega, \quad i=1,2,3, \\ \frac{dx_1}{dt}=x_1(1-x_1-\alpha x_2-\beta x_3), \ m \omega+(1-\varphi)\omega \leq t\leq (m+1)\omega,\\ \frac{dx_2}{dt}=x_2(1-\beta x_1-x_2-\alpha x_3), \ m \omega+(1-\varphi)\omega \leq t\leq (m+1)\omega,\\ \frac{dx_3}{dt}=x_3(1-\alpha x_1-\beta x_2- x_3), \ \ m \omega+(1-\varphi)\omega \leq t\leq (m+1)\omega,\\ \big(x_1(0),x_2(0),x_3(0)\big)=x^0 \in \mathbb{R}^3_+, \quad \qquad \ m=0,1,2,\dots\\ \end{cases} \end{equation} where $m\in \mathbb{Z}_+$, $\varphi \in [0,1]$, and $\omega$, $\alpha$, $\beta$, $\lambda_i \ (i=1,2,3)$ are all positive constants. \par Clearly, if $\varphi=0$, then system (\ref{3ML}) becomes the following decoupled system \begin{equation} \label{LLL} \begin{cases} \frac{dx_i}{dt}=-\lambda_i x_i, \qquad \quad i=1,2,3,\\ \big(x_1(0),x_2(0),x_3(0)\big)=x^0 \in \mathbb{R}^3_+,\\ \end{cases} \end{equation} while, if $\varphi=1$, system (\ref{3ML}) reduces to the symmetric May-Leonard competition model (\ref{ML}). \par System (\ref{3ML}) is a time-periodic system in a seasonal environment: the overall period is $\omega$, and $\varphi$ represents the switching proportion of the period between the two subsystems (\ref{LLL}) and (\ref{ML}). Biologically, $\varphi$ describes the proportion of the period in the good season, in which the species follow system (\ref{ML}), while $(1-\varphi)$ stands for the proportion of the period in the bad season, in which the species die exponentially according to system (\ref{LLL}). \par It is not difficult to see that system (\ref{3ML}) admits a unique nonnegative global solution $x(t,x^0)$ on $[0,+\infty)$ for any $x^0 \in \mathbb{R}^3_+$.
Since system (\ref{3ML}) is $\omega$-periodic, we only consider the Poincar\'e map $S$ on $\mathbb{R}^3_+$, that is, $S(x^0)=x(\omega,x^0)$ for any $ x^0 \in \mathbb{R}_+^3$. In view of system (\ref{LLL}), let us first define a linear map $L$ by $$L(x_1,x_2,x_3)=(e^{-\lambda_1(1-\varphi)\omega}x_1,e^{-\lambda_2(1-\varphi)\omega}x_2,e^{-\lambda_3(1-\varphi)\omega}x_3),\ \forall \ (x_1,x_2,x_3)\in \mathbb{R}^3_+.$$ We also let $\{Q_t\}_{t\geq 0}$ represent the solution flow associated with the symmetric May-Leonard competition system (\ref{ML}). Then, we have $$S(x^0)=Q_{\varphi\omega}(Lx^0), \qquad \forall x^0\in \mathbb{R}^3_+, \qquad {\rm{i.e.,}} \qquad S=Q_{\varphi\omega}\circ L.$$ Hence, it suffices to investigate the dynamics of the discrete-time system $\{S^n\}_{n\geq 0}$. However, compared to the concrete discrete-time competitive maps discussed in \cite{OWY2008,JL2016,JL2017,JN22017,GJNY2018,GJNY2020}, there is no explicit expression of the Poincar\'e map $S$ for system (\ref{3ML}). This makes the study of the dynamics of system (\ref{3ML}) much more difficult and complicated. \par The main aim of this paper is to establish a stability criterion for heteroclinic cycles of system (\ref{3ML}), as well as its application to the given system (\ref{3ML2}). More precisely, it is proved that system (\ref{3ML}) admits a heteroclinic cycle $\Gamma$ connecting the three axial fixed points $R_i \ (i=1,2,3)$ via the theory of the carrying simplex (see Theorem \ref{thm31}). Based on this, sufficient conditions for stability of the heteroclinic cycle are obtained (see Theorem \ref{thm32}). In particular, when $\lambda_1=\lambda_2=\lambda_3$ and $\alpha+\beta \neq 2$, we show that system (\ref{3ML}) inherits the stability properties of the heteroclinic cycle of the classical May-Leonard system (\ref{ML}). Meanwhile, we also present the explicit expression of the carrying simplex in the case that $\alpha+\beta= 2$ (see Corollary \ref{thm33}).
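Although $S$ has no closed form, the composition $S=Q_{\varphi\omega}\circ L$ just described can be sketched numerically: the bad-season map $L$ is explicit, while the good-season flow $Q_{\varphi\omega}$ is approximated by Runge-Kutta integration of system (\ref{ML}). All parameter values below ($\alpha=0.8$, $\beta=1.3$, $\lambda_i=0.1$, $\omega=10$, $\varphi=0.8$, giving $h_i=7.8>0$) are illustrative assumptions, not taken from the paper.

```python
# Rough numerical sketch (assumed parameters) of the Poincare map S of (3ML).
import math

ALPHA, BETA = 0.8, 1.3      # competition coefficients with alpha < 1 < beta
LAM = (0.1, 0.1, 0.1)       # death rates lambda_i in the bad season
OMEGA, PHI = 10.0, 0.8      # period omega and good-season proportion phi

def field(x):
    """Right-hand side of the May-Leonard system (ML)."""
    x1, x2, x3 = x
    return (x1 * (1 - x1 - ALPHA * x2 - BETA * x3),
            x2 * (1 - BETA * x1 - x2 - ALPHA * x3),
            x3 * (1 - ALPHA * x1 - BETA * x2 - x3))

def flow(x, t, n=2000):
    """Approximate the May-Leonard flow Q_t(x) with n RK4 steps."""
    h = t / n
    for _ in range(n):
        k1 = field(x)
        k2 = field(tuple(a + 0.5 * h * b for a, b in zip(x, k1)))
        k3 = field(tuple(a + 0.5 * h * b for a, b in zip(x, k2)))
        k4 = field(tuple(a + h * b for a, b in zip(x, k3)))
        x = tuple(a + h * (p + 2 * q + 2 * r + s) / 6
                  for a, p, q, r, s in zip(x, k1, k2, k3, k4))
    return x

def L(x):
    """Exact solution of the bad-season system (LLL) over time (1-phi)*omega."""
    return tuple(xi * math.exp(-l * (1 - PHI) * OMEGA) for xi, l in zip(x, LAM))

def S(x):
    """Poincare map of system (3ML): good-season flow after bad-season decay."""
    return flow(L(x), PHI * OMEGA)

# O = (0,0,0) is a fixed point of S, and an axial point is mapped into the
# same axis, reflecting the invariance of the coordinate axes.
assert S((0.0, 0.0, 0.0)) == (0.0, 0.0, 0.0)
y = S((0.5, 0.0, 0.0))
assert y[1] == 0.0 and y[2] == 0.0 and y[0] > 0.0
```

Iterating `S` from a positive initial point produces (an approximation of) the orbit of the discrete-time system $\{S^n\}_{n\geq 0}$ discussed above.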
For the given system (\ref{3ML2}), the critical value for the existence of the heteroclinic cycle is determined (see Theorem \ref{thm41}). By applying the stability criterion for system (\ref{3ML}) to the given system (\ref{3ML2}), we obtain the stability branches of the heteroclinic cycle for such a system (see Theorem \ref{thm42}). Moreover, the branch graph of stability is demonstrated in Figure 3. By numerical simulation, we find that there are rich dynamics for the system as $\varphi$ varies. To be more explicit, when $\varphi$ varies in $[0,1]$, system (\ref{3ML2}) enjoys different dynamic features: the orbit emanating from the initial value $x^0=(0.5,0.6,0.4)$ for the Poincar\'e map $\tilde{S}$ tends to the heteroclinic cycle $\tilde{\Gamma}$ (see Figure 4) $\rightarrow$ invariant closed curves (see Figure 6 and Figure 7) $\rightarrow$ the positive fixed point (see Figures 8-12) $\rightarrow$ the interior fixed point of the coordinate plane $\{x_2=0\}$ (see Figure 13) $\rightarrow$ the interior fixed point of the $x_1$-axis (see Figure 14) $\rightarrow$ the interior fixed point of the coordinate plane $\{x_3=0\}$ (see Figure 15) $\rightarrow$ the interior fixed point of the $x_2$-axis (see Figure 16) $\rightarrow$ the trivial fixed point $O$ (see Figure 17). From these numerical examples, one can see that the dynamic features and patterns generated by system (\ref{3ML2}) are much richer, and differ from those of the concrete discrete-time competitive maps analyzed in \cite{GJNY2018,GJNY2020,JL2016,JL2017}. \par The paper is organized as follows. In Section 2, we introduce some notations and relevant definitions. Section 3 is devoted to analyzing the existence and stability of heteroclinic cycles of system (\ref{3ML}). We investigate the stability branches of the heteroclinic cycle for the given system (\ref{3ML2}) and present the branch graph of stability in Section 4. Furthermore, we provide some numerical simulations to exhibit the rich dynamics of system (\ref{3ML2}).
This paper ends with a discussion in Section 5. \section{Notations and Definitions}\label{sec2} \par Throughout this paper, let $\mathbb{Z}_+$ denote the set of nonnegative integers. We use $\mathbb{R}^n_+$ to denote the nonnegative cone $\mathbb{R}^n_+:=\{x\in \mathbb{R}^n:x_i\geq 0, \ i=1,\cdots,n\}$. Let $X\subseteq \mathbb{R}^n_+$ and $S:X \rightarrow X$ be a continuous map. The orbit of a state $x$ for $S$ is $\gamma(x)=\{S^n(x)$, $n\in \mathbb{Z}_+ \}$. A fixed point of $S$ is a point $x\in X$ such that $S(x)=x$. A point $y\in X$ is called a $k$-periodic point of $S$ if there exists some positive integer $k>1$ such that $S^k(y)=y$ and $S^m(y)\neq y$ for every positive integer $m<k$. The $k$-periodic orbit of the $k$-periodic point $y$, $\gamma(y)=\{y$, $S(y)$, $S^2(y)$, $...$, $S^{k-1}(y)\}$, is often called a periodic orbit for short. The $\omega$-{\it limit set} of $x$ is defined by $\omega(x) =\{y\in \mathbb{R}_+^n: S^{n_k}x\rightarrow y \ (k\rightarrow\infty)\ {\rm for \ some \ sequence} \ n_k\rightarrow +\infty \ {\rm in} \ \mathbb{Z}_+\}$. If the orbit of a state $x$ for $S^{-1}$ exists, then the $\alpha$-{\it limit set} of $x$ is defined by $\alpha(x) =\{y\in \mathbb{R}_+^n: S^{-n_k}x\rightarrow y \ (k\rightarrow\infty)\ {\rm for \ some \ sequence} \ n_k\rightarrow +\infty \ {\rm in} \ \mathbb{Z}_+\}$. \par A set $V \subseteq X$ is called positively invariant under $S$ if $SV \subseteq V$, and invariant if $SV=V$.
We say that a nonempty compact positively invariant subset $V$ repels for $S$ if there exists a neighborhood $U$ of $V$ such that, given any $x \in X \setminus V$, there is a $k_0(x)$ such that $S^kx \in U^c$ (the complement of $U$) for $k\geq k_0(x)$; $V$ is said to be an attractor for $S$ if there exists a neighborhood $U$ of $V$ such that $\lim\limits_{k \rightarrow +\infty }d(S^kx,V)=0$ uniformly in $x\in U$, where $d(\cdot,\cdot)$ is the metric for $X$; $V$ is referred to as a global attractor for $S$ if, for any bounded set $U\subseteq X$, $\lim\limits_{k \rightarrow +\infty} d(S^kx,V)=0$ uniformly in $x\in U$. Note that if the orbit $\gamma(x)$ of $x$ has compact closure, then $\omega(x)$ is nonempty, compact and invariant. If $S$ is a differentiable map, we write $DS(x)$ for the Jacobian matrix of $S$ at the point $x$, and the spectral radius of $DS(x)$ is denoted by $r(DS(x))$. For an $n\times n$ matrix $A$, we write $A\geq 0$ iff $A$ is a nonnegative matrix (i.e., all the entries are nonnegative) and $A\gg 0$ iff $A$ is a positive matrix (i.e., all the entries are positive). \par A map $S$ is said to be \textit{competitive} in $\mathbb{R}_+^n$ if, whenever $x,y\in \mathbb{R}_+^n$ and $S(x)<S(y)$, one has $x<y$. Furthermore, $S$ is called \textit{strongly competitive} in $\mathbb{R}_+^n$ if $S(x)<S(y)$ implies $x\ll y$. \par A \textit{carrying simplex} for the periodic map $S$ is a subset $\Sigma\subset \mathbb{R}_+^n\backslash \{0\}$ with the following properties (see \cite[Theorem 11]{YJ2002}): \\$\rm(P1)$ $\Sigma$ is compact, invariant and unordered; \\$\rm(P2)$ $\Sigma$ is homeomorphic via radial projection to the $(n-1)$-dimensional standard probability simplex $\Delta^{n-1}:=\{x\in \mathbb{R}_+^n : \sum_i x_i=1\}$; \\$\rm(P3)$ $\forall\ x\in \mathbb{R}_+^n\backslash \{0\}$, there exists some $y\in \Sigma$ such that $\lim\limits_{n\rightarrow \infty}|S^nx-S^ny|=0$.
\section{Heteroclinic cycle} For system (\ref{3ML}), it is clear that $O:=(0,0,0)$ is a trivial fixed point of the Poincar\'e map $S$. Denote $h_i:=(\varphi-\lambda_i(1-\varphi))\omega$, $i=1,2,3$. By Hsu and Zhao \cite[Lemma 2.1]{SX2012}, if $h_i>0$ $(i=1,2,3)$, then $S$ has three axial fixed points, written as $R_1:=(x_1^*,0,0)$, $R_2:=(0,x_2^*,0)$ and $R_3:=(0,0,x_3^*)$. Besides, there are three planar fixed points for $S$ in the three coordinate planes under appropriate conditions (see \cite[Theorem 2.2-2.4]{SX2012} or \cite[Theorem 3.6]{NWX2021}). First, we provide the existence result of the carrying simplex for system (\ref{3ML}). \begin{lemma}{\rm (The existence of the carrying simplex)} \label{lem20} Assume that $h_i>0, i=1,2,3$. Let $S:\mathbb{R}_+^3\rightarrow \mathbb{R}_+^3$ be the Poincar\'e map induced by system (\ref{3ML}). Then, system (\ref{3ML}) admits a two-dimensional carrying simplex $\Sigma$ which attracts every nontrivial orbit in $\mathbb{R}_+^3$. \end{lemma} \begin{proof} The statement is a result of Niu et al. \cite[Theorem 2.3]{NWX2021} for $n=3$, $b_i=a_{ii}=1 \ ( i=1,2,3)$, $a_{12}=a_{23}=a_{31}=\alpha$ and $a_{32}=a_{21}=a_{13}=\beta$, where $b_i, a_{ij}, i,j=1,2,3$ are defined in \cite{NWX2021}. \end{proof} \par By the stability analysis of equilibria, we have \begin{lemma} \label{lem21} {\rm (Stability of the fixed point $O$)} Given system (\ref{3ML}), the following statements hold. \begin{enumerate}[\rm(i)] \item If $h_i<0$ $(i=1,2,3)$, then $O$ is an asymptotically stable fixed point of $S$. \item If at least one of the three values $h_1,h_2$ and $h_3$ is positive, then $O$ is an unstable fixed point of $S$. In particular, if $h_i>0$ $ (i=1,2,3)$, then $O$ is a hyperbolic repeller.
\end{enumerate} \end{lemma} \begin{proof} Let $f(x)=(f_1,f_2,f_3)^T$, where $$\left\{ \begin{array}{ll} \displaystyle f_1=x_1(1-x_1-\alpha x_2-\beta x_3),\\ \displaystyle f_2=x_2(1-\beta x_1-x_2-\alpha x_3),\\ \displaystyle f_3=x_3(1-\alpha x_1-\beta x_2-x_3).\\ \end{array}\right. $$ Then, $Df(x)=$ {\small $$ \left ( \begin{matrix} 1-2x_1-\alpha x_2-\beta x_3 & -\alpha x_1 & -\beta x_1 \\ -\beta x_2 & 1-2x_2-\beta x_1-\alpha x_3 & -\alpha x_2\\ -\alpha x_3 & -\beta x_3 & 1-2x_3-\alpha x_1-\beta x_2 \end{matrix} \right ). $$} \\For simplicity, we denote $u(t,x):=Q_t(x)$ and $V(t,x):=D_xu(t,x)=D_xQ_t(x)$, then $S(x)=Q_{\varphi\omega}(Lx)=u(\varphi\omega,Lx)$, and hence, \begin{align*} DS(x) &=D(Q_{\varphi\omega}(Lx))\cdot D(Lx)\\ &=V(\varphi\omega,Lx)\cdot D(Lx)\\ &=V(\varphi\omega,Lx)\cdot {\rm diag} \big({e^{-\lambda_1(1-\varphi)\omega}},{e^{-\lambda_2(1-\varphi)\omega}},{e^{-\lambda_3(1-\varphi)\omega}} \big). \end{align*} Note that $V(t,x)$ satisfies $$\frac{dV(t)}{dt}=Df(u(t,x))V(t),\quad V(0)=I.$$ Taking $O=(0,0,0)$, we have $u(t,LO)=(0,0,0)$, then $Df(u(t,LO))= {\rm diag} (1,1,1),$ and hence, $$V(\varphi\omega,LO)= {\rm diag} \Big(e^{\int_0^{\varphi\omega }1dt},e^{\int_0^{\varphi\omega }1dt},e^{\int_0^{\varphi\omega} 1dt} \Big).$$ By the expression of $S$, it follows that $$DS(O)= \left ( \begin{matrix} e^{(\varphi-\lambda_1(1-\varphi))\omega} & 0 & 0 \\ 0 & e^{(\varphi-\lambda_2(1-\varphi))\omega} & 0 \\ 0 & 0 & e^{(\varphi-\lambda_3(1-\varphi))\omega} \end{matrix} \right ). $$ Consequently, the matrix $DS(O)$ has three positive eigenvalues $\mu_1,\mu_2$ and $\mu_3$ given by $$\mu_1=e^{h_1}, \qquad \mu_2=e^{h_2} \quad {\rm and} \quad \mu_3=e^{h_3}.$$ Clearly, if $h_i<0 $ $(i=1,2,3)$, then $\mu_i<1$, and hence $O$ is an asymptotically stable fixed point of $S$; if $h_i>0$, then $\mu_i>1,i=1,2,3$, which implies that $O$ is a hyperbolic repeller. We have proved the lemma. 
\end{proof} \begin{lemma} \label{lem22} {\rm (Stability of the axial fixed point $R_1$)} Given system (\ref{3ML}), suppose that $h_i>0$, $i=1,2,3$. Then \begin{enumerate} \item[(i)] If $h_2<\beta h_1$, $h_3<\alpha h_1$, then $R_1$ is an asymptotically stable fixed point of $S$. \item[(ii)] If $h_2>\beta h_1$, $h_3>\alpha h_1$, then $R_1$ is a saddle point with a two-dimensional unstable manifold. \item[(iii)] If $(h_2-\beta h_1)(h_3-\alpha h_1)<0$, then $R_1$ is a saddle point with a one-dimensional unstable manifold. \end{enumerate} \end{lemma} \begin{proof} Let $u(t,LR_1):=(u_1^*(t),0,0)$. By the proof of Lemma \ref{lem21}, it is easy to see that $$Df(u(t,LR_1))= \left ( \begin{matrix} 1-2u_1^*(t) & -\alpha u_1^*(t) & -\beta u_1^*(t) \\ 0 & 1-\beta u_1^*(t) & 0 \\ 0 & 0 & 1-\alpha u_1^*(t) \end{matrix} \right ). $$ Then, $$V(\varphi\omega,LR_1)= \left ( \begin{matrix} e^{\int_0^{\varphi\omega}1-2u_1^*(t)dt} & * & *\\ 0 & e^{\int_0^{\varphi\omega}1-\beta u_1^*(t)dt} & * \\ 0 & 0 & e^{\int_0^{\varphi\omega}1-\alpha u_1^*(t)dt} \end{matrix} \right ),$$ where $*$ represents unspecified algebraic expressions. Observe that $u_1^*(t)$ satisfies $$\frac{du_1^*(t)}{dt}=u_1^*(t)(1-u_1^*(t)).$$ Integrating the above equation for $t$ from $0$ to $\varphi\omega$, we have $$\int_0^{\varphi\omega}u_1^*(t)dt=(\varphi-\lambda_1(1-\varphi))\omega,$$ and then, $DS(R_1)=$ $$ \left ( \begin{matrix} e^{-(\varphi-\lambda_1(1-\varphi))\omega} & * & *\\ 0 & e^{[(\varphi-\lambda_2(1-\varphi))-\beta (\varphi-\lambda_1(1-\varphi))]\omega} & * \\ 0 & 0 & e^{[(\varphi-\lambda_3(1-\varphi))-\alpha(\varphi-\lambda_1(1-\varphi))]\omega} \end{matrix} \right ).$$ Thus, the matrix $DS(R_1)$ has three positive eigenvalues $\mu_1,\mu_2$ and $\mu_3$ given by $$\mu_1=e^{-h_1}<1, \qquad \mu_2=e^{h_2-\beta h_1} \quad {\rm and} \quad \mu_3=e^{h_3-\alpha h_1},$$ which implies that the statements (i)-(iii) are valid. The proof is completed.
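Let us also justify the integral identity used above. Writing the logistic equation in the form $u_1^*(t)=1-\frac{d}{dt}\ln u_1^*(t)$, and observing that $u_1^*(0)=e^{-\lambda_1(1-\varphi)\omega}x_1^*$ while $u_1^*(\varphi\omega)=x_1^*$ (because $R_1$ is a fixed point of $S$), we obtain
$$\int_0^{\varphi\omega}u_1^*(t)\,dt=\varphi\omega-\ln\frac{u_1^*(\varphi\omega)}{u_1^*(0)}=\varphi\omega-\lambda_1(1-\varphi)\omega\,.$$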
\end{proof} \par By symmetric arguments, we also have \begin{lemma} \label{lem23} (Stability of the axial fixed point $R_2$) Given system (\ref{3ML}), suppose that $h_i>0, i=1,2,3$, then \begin{enumerate} \item[(i)] If $h_1<\alpha h_2$, $h_3<\beta h_2$, then $R_2$ is an asymptotically stable fixed point of $S$. \item[(ii)] If $h_1>\alpha h_2$, $h_3>\beta h_2$, then $R_2$ is a saddle point with two-dimensional unstable manifolds. \item[(iii)] If $(h_1-\alpha h_2)(h_3-\beta h_2)<0$, then $R_2$ is a saddle point with one-dimensional unstable manifolds. \end{enumerate} \end{lemma} \begin{lemma} \label{lem24} (Stability of the axial fixed point $R_3$) Given system (\ref{3ML}), suppose that $h_i>0, i=1,2,3$, then \begin{enumerate} \item[(i)] If $h_1<\beta h_3$, $h_2<\alpha h_3$, then $R_3$ is an asymptotically stable fixed point of $S$. \item[(ii)] If $h_1>\beta h_3$, $h_2>\alpha h_3$, then $R_3$ is a saddle point with two-dimensional unstable manifolds. \item[(iii)] If $(h_1-\beta h_3)(h_2-\alpha h_3)<0$, then $R_3$ is a saddle point with one-dimensional unstable manifolds. \end{enumerate} \end{lemma} \par For the sake of convenience, we introduce the following notation: \\\textbf{$(\tilde{H})$} \quad \qquad \qquad $h_2<\beta h_1, h_1>\alpha h_2;\ h_3<\beta h_2, h_2>\alpha h_3;\ h_1<\beta h_3, h_3>\alpha h_1.$ \\Moreover, we write $\vartheta$ as $$\vartheta:=(h_1-\beta h_3)(h_2-\beta h_1)(h_3-\beta h_2)+(h_1-\alpha h_2)(h_2-\alpha h_3)(h_3-\alpha h_1).$$ \par By the theory of the carrying simplex, we have \begin{theorem} \label{thm31} {\rm (The existence of the heteroclinic cycle)} Suppose that the condition $(\tilde{H})$ holds and $h_i>0, i=1,2,3$, then system (\ref{3ML}) admits a heteroclinic cycle $\Gamma$ ($=\partial \Sigma$) connecting the three axial fixed points $R_1, R_2$ and $R_3$ , where $\partial \Sigma$ stands for the boundary of the carrying simplex $\Sigma$ (see Figure 1). 
\end{theorem} \begin{proof} By Lemma \ref{lem20}, system (\ref{3ML}) admits a two-dimensional carrying simplex $\Sigma$. Furthermore, the intersection of $\Sigma$ with the three coordinate planes is composed of three line segments connecting any two of $R_i,i=1,2,3$. These three line segments are written as $\Gamma_{12},\Gamma_{23}$ and $\Gamma_{13}$, respectively. By the invariance of $\Sigma$, one can obtain that $\Gamma_{12},\Gamma_{23}$ and $\Gamma_{13}$ are invariant under the map $S$. In view of $h_2<\beta h_1$ and $h_1>\alpha h_2$, it follows from Niu et al. \cite[Theorem 3.6(ii)]{NWX2021} that there are no interior fixed points on $\Gamma_{12}$. Since the restriction of $S$ to the segment $\Gamma_{12}$ is a monotone one-dimensional map, all orbits on $\Gamma_{12}$ have one fixed point as the $\alpha$-limit set and the other as the $\omega$-limit set. Meanwhile, Niu et al. \cite[Theorem 3.6(ii)]{NWX2021} ensures that all orbits on $\Gamma_{12}$ take $R_2$ as the $\alpha$-limit set and $R_1$ as the $\omega$-limit set under the condition $(\tilde{H})$. By the cyclic symmetry, we also obtain that $\Gamma_{23}$ and $\Gamma_{13}$ enjoy properties similar to those of $\Gamma_{12}$ under the condition $(\tilde{H})$. Define $\Gamma=\Gamma_{12}\cup \Gamma_{13} \cup \Gamma_{23}$, then $\Gamma= \partial \Sigma$. Furthermore, it is obvious that $\Gamma$ is a heteroclinic cycle connecting the three axial fixed points $R_1,R_2$ and $R_3$, which forms the boundary $\partial \Sigma$ of $\Sigma$ (see Figure 1). We have thus proved the theorem.
\end{proof} \begin{figure} \caption{\footnotesize $\Gamma$ (Clockwise direction)} \caption{\footnotesize $\Gamma$ (Counter-clockwise direction)} \end{figure} \begin{remark} \label{rmk31} By modifying the proof of Theorem \ref{thm31}, it can easily be shown that if the condition $(\tilde{H})$ is revised as $$h_2>\beta h_1, h_1<\alpha h_2;\ h_3>\beta h_2, h_2<\alpha h_3;\ h_1>\beta h_3, h_3<\alpha h_1,$$ then system (\ref{3ML}) also admits a heteroclinic cycle connecting the three axial fixed points $R_1,R_2$ and $R_3$, yet the arrows are reversed (see Figure 2). \end{remark} \begin{theorem} {\rm (The stability of the heteroclinic cycle)} \label{thm32} Given system (\ref{3ML}), suppose that the condition $(\tilde{H})$ holds and $h_i>0,i=1,2,3$, then \begin{enumerate} \item [(i)]If $\vartheta >0$, then the heteroclinic cycle $\Gamma$ repels. \item [(ii)]If $\vartheta <0$, then the heteroclinic cycle $\Gamma$ attracts. \end{enumerate} \end{theorem} \begin{proof} By the proofs of Lemmas \ref{lem22}-\ref{lem24}, one can see that the eigenvalues of the matrices $DS(R_1)$ and $DS(R_3)$ along the $x_2$-axis direction are $e^{h_2-\beta h_1}$ and $e^{h_2-\alpha h_3}$, respectively; the eigenvalues of the matrices $DS(R_1)$ and $DS(R_2)$ along the $x_3$-axis direction are $e^{h_3-\alpha h_1}$ and $e^{h_3-\beta h_2}$, respectively; the eigenvalues of the matrices $DS(R_2)$ and $DS(R_3)$ along the $x_1$-axis direction are $e^{h_1-\alpha h_2}$ and $e^{h_1-\beta h_3}$, respectively.
A simple calculation yields $$\ln e^{h_1-\beta h_3}\cdot \ln e^{h_2-\beta h_1}\cdot \ln e^{h_3-\beta h_2}+ \ln e^{h_1-\alpha h_2}\cdot \ln e^{h_2-\alpha h_3} \cdot \ln e^{h_3-\alpha h_1} $$ $$=(h_1-\beta h_3)(h_2-\beta h_1)(h_3-\beta h_2)+(h_1-\alpha h_2)(h_2-\alpha h_3)(h_3-\alpha h_1)=\vartheta.$$ Since system (\ref{3ML}) is a time-periodic Kolmogorov system and the Poincar$\rm \acute{e}$ map $S$ is a $C^1$-diffeomorphism onto its image (see the proof of Theorem 2.3 in Niu et al. \cite{NWX2021}), we can rewrite $S$ as: $$S(x_1,x_2,x_3)=(x_1G_1(x),x_2G_2(x),x_3G_3(x)), \quad x\in \mathbb{R}_+^3,$$ where $G_i(x)=$ $\left\{ \begin{array}{ll} \displaystyle \frac{S_i(x)}{x_i}, \quad x_i \neq 0, \\ \displaystyle \frac{\partial S_i}{\partial x_i}(x),\quad x_i=0.\\ \end{array}\right.$ Then $S$ can be regarded as a Kolmogorov competitive map. According to Theorem 3 in Jiang, Niu and Wang \cite{JNW2016}, we can obtain that if $\vartheta>0$, then the heteroclinic cycle $\Gamma$ repels; if $\vartheta<0$, then the heteroclinic cycle $\Gamma$ attracts. Thus, we have completed the proof. \end{proof} \par In particular, when $\lambda_1=\lambda_2=\lambda_3$, we have \begin{corollary} \label{thm33} Given system (\ref{3ML}), suppose that the condition $(\tilde{H})$ holds, $h_i>0 \ (i=1,2,3)$, $\lambda_1=\lambda_2=\lambda_3=\lambda$ and $0<\alpha<1<\beta$, then \begin{enumerate} \item [(i)] If $\alpha+\beta<2$, then the heteroclinic cycle $\Gamma$ repels. \item [(ii)] If $\alpha+\beta>2$, then the heteroclinic cycle $\Gamma$ attracts. \item [(iii)] If $\alpha+\beta=2$, then the carrying simplex of system (\ref{3ML}) is $$\Sigma=\{(x_1,x_2,x_3)\in \mathbb{R}_+^3: x_1+x_2+x_3=\frac{1-e^{p-q}}{1-e^{-q}}\},$$ where $p=\lambda(1-\varphi)\omega, \ q=\varphi\omega$. Furthermore, $P:=\frac{1-e^{p-q}}{3(1-e^{-q})}(1,1,1)$ is a positive fixed point of $S$.
\end{enumerate} \end{corollary} \begin{proof} Since $\lambda_1=\lambda_2=\lambda_3$, it follows that $h_1=h_2=h_3$, and then, \begin{align*} \vartheta &=h_1^3\big((1-\beta)^3+(1-\alpha)^3\big)\\ &=h_1^3\Big(\big(2-(\alpha+\beta)\big)\big( (1-\beta)^2 +(\beta-1)(1-\alpha)+(1-\alpha)^2 \big)\Big). \end{align*} Noticing that $0<\alpha<1<\beta,$ one can obtain that $\big( (1-\beta)^2 +(\beta-1)(1-\alpha)+(1-\alpha)^2 \big)>0.$ By $h_i>0\ (i=1,2,3)$, it is clear that if $\alpha+\beta<2$, then $\vartheta >0$; if $\alpha+\beta>2$, then $\vartheta <0$. This implies that the statements (i) and (ii) are true due to Theorem \ref{thm32}. \par Finally, it remains to show that the statement (iii) holds. For this purpose, we firstly consider the one-dimensional case of system (\ref{3ML}), that is, \begin{equation} \label{3ML-1} \begin{cases} \frac{dx}{dt}=-\lambda x,\qquad \quad \; m \omega \leq t\leq m \omega+(1-\varphi) \omega,\\ \frac{dx}{dt}=x(1-x),\quad m \omega+(1-\varphi) \omega \leq t\leq (m+1)\omega,\\ x(0)=x_0 \in \mathbb{R}_+. \end{cases} \end{equation} Solving the equation $\frac{dx}{dt}=x(1-x)$ with the initial value $x^0 \in \mathbb{R}_+$ yields \begin{equation} \label{3ML-2} x(t,x^0)=\frac{x^0}{x^0+(1-x^0)e^{-t}}, \qquad \forall \ t\geq 0,\ x^0 \in \mathbb{R}_+. \end{equation} Define the Poincar$\rm \acute{e}$ map of system (\ref{3ML-1}), $$\bar{S}(x_0)=x(\omega,x_0), \ \forall \ x_0 \in \mathbb{R}_+.$$ By Hsu and Zhao\cite[Lemma 2.1]{SX2012}, $\bar{S}$ has a unique positive fixed point $x^*$ as $\varphi-\lambda(1-\varphi)>0$. Based on the expression (\ref{3ML-2}) and a straightforward computation, it then follows that \begin{equation} \label{3ML-3} x^*=\frac{1-e^{\lambda(1-\varphi)\omega-\varphi\omega}}{1-e^{-\varphi\omega}}=\frac{1-e^{p-q}}{1-e^{-q}}. \end{equation} \par Since $h_i>0\ (i=1,2,3)$, the Poincar$\rm \acute{e}$ map $S$ generated by system (\ref{3ML}) has three axial fixed points $R_1, R_2$ and $R_3$. 
Noting that $\lambda_1=\lambda_2=\lambda_3=\lambda$ and using (\ref{3ML-3}), it follows that $$R_1=(\frac{1-e^{p-q}}{1-e^{-q}},0,0),\ \ R_2=(0,\frac{1-e^{p-q}}{1-e^{-q}},0),\ \ R_3=(0,0,\frac{1-e^{p-q}}{1-e^{-q}}).$$ Then, the intersection of $\mathbb{R}_+^3$ with the plane containing the three axial fixed points $R_1$, $R_2$ and $R_3$ is $$\pi:=\{(x_1,x_2,x_3)\in \mathbb{R}_+^3: x_1+x_2+x_3=\frac{1-e^{p-q}}{1-e^{-q}}\}.$$ For any $\bar{x}=(a,b,c)\in \pi$, we have $a+b+c=\frac{1-e^{p-q}}{1-e^{-q}},$ and $$L\bar{x}=(e^{-\lambda(1-\varphi)\omega}a,e^{-\lambda(1-\varphi)\omega}b,e^{-\lambda(1-\varphi)\omega}c) =(e^{-p}a,e^{-p}b,e^{-p}c).$$ Let $Q_t(x):=\big(x_1(t,x),x_2(t,x),x_3(t,x)\big)$ represent the solution flow associated with system (\ref{ML}), then $$Q_t(L\bar{x})=\big(x_1(t,L\bar{x}),x_2(t,L\bar{x}),x_3(t,L\bar{x})\big),$$ and hence, $$S(\bar{x})=Q_{\varphi\omega}(L\bar{x})=\big(x_1(\varphi\omega,L\bar{x}),x_2(\varphi\omega,L\bar{x}),x_3(\varphi\omega,L\bar{x})\big).$$ \par Next, we will show that $S(\bar{x})\in \pi$.
For system (\ref{ML}), we first denote $$ Y(t)=x_1(t)+x_2(t)+x_3(t),$$ and then, since $\alpha+\beta=2$, $$\frac{dY(t)}{dt}=Y(t)(1-Y(t)).$$ Solving the above equation with the initial value $Y(0)\in \mathbb{R}_+$ gives $$Y(t)=\frac{Y(0)}{Y(0)+(1-Y(0))e^{-t}}, \qquad \quad \forall \ t\geq 0,\ Y(0)\in \mathbb{R}_+.$$ After an easy calculation, we have $$Y(0)=x_1(0,L\bar{x})+x_2(0,L\bar{x})+x_3(0,L\bar{x})=e^{-p}a+e^{-p}b+e^{-p}c=e^{-p}(a+b+c).$$ Taking $t=\varphi\omega$, it follows that $$x_1(\varphi\omega, L\bar{x})+x_2(\varphi\omega, L\bar{x})+x_3(\varphi\omega, L\bar{x}) =Y(\varphi\omega)=\frac{Y(0)}{Y(0)+(1-Y(0))e^{-\varphi\omega}}.$$ Then, $$Y(\varphi\omega)=\frac{e^{-p}(a+b+c)}{e^{-p}(a+b+c)+e^{-q}-e^{-p-q}(a+b+c)}.$$ Substituting $a+b+c=\frac{1-e^{p-q}}{1-e^{-q}}$ into the above equation yields $$x_1(\varphi\omega, L\bar{x})+x_2(\varphi\omega, L\bar{x})+x_3(\varphi\omega, L\bar{x})=Y(\varphi\omega)=\frac{1-e^{p-q}}{1-e^{-q}},$$ which implies that $S(\bar{x})\in \pi$. By the arbitrariness of $\bar{x}$, one can see that $\pi$ is invariant under the map $S$. Using the invariance of the carrying simplex $\Sigma$, we further obtain that $\pi$ is the carrying simplex $\Sigma$ of system (\ref{3ML}), that is, $$\Sigma=\pi=\{(x_1,x_2,x_3)\in \mathbb{R}_+^3: x_1+x_2+x_3=\frac{1-e^{p-q}}{1-e^{-q}}\}.$$ \par In particular, when $\varphi=1$, the carrying simplex $\Sigma'$ of the classical May-Leonard competition model (\ref{ML}) is $$\Sigma'=\{(x_1,x_2,x_3)\in \mathbb{R}_+^3: x_1+x_2+x_3=1\}.$$ Under the condition that $0<\alpha<1<\beta$ and $\alpha+\beta=2$, system (\ref{ML}) admits a unique positive equilibrium $P':=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$. Then the ray connecting $O$ and $P'$ is $$l=\{(\frac{1}{3}t,\frac{1}{3}t,\frac{1}{3}t)| t\geq 0 \}.$$ Next, we prove that $l$ is invariant under system (\ref{ML}). Clearly, $Q_t(O)=O$, where $\{Q_t\}_{t\geq 0}$ stands for the solution flow associated with system (\ref{ML}).
For any $x_0 \in l\setminus \{O\}$, there exists $\xi>0$ such that $x_0=(\frac{1}{3}\xi,\frac{1}{3}\xi,\frac{1}{3}\xi).$ Let $\phi(t)$ be the unique solution of the following system \begin{equation} \nonumber \begin{cases} \frac{d\phi(t)}{dt}=\phi(t)(1-\phi(t)),\\ \phi(0)=\xi. \end{cases} \end{equation} It can easily be verified that $(\frac{1}{3}\phi(t),\frac{1}{3}\phi(t),\frac{1}{3}\phi(t))$ is the unique solution of system (\ref{ML}) with initial value $x_0=(\frac{1}{3}\xi,\frac{1}{3}\xi,\frac{1}{3}\xi)$. Based on the fact that $(\frac{1}{3}\phi(t),\frac{1}{3}\phi(t),\frac{1}{3}\phi(t))\in l\setminus \{O\}$, it follows that $l$ is invariant under system (\ref{ML}). \par On the other hand, there is a unique intersection point $P=\frac{1-e^{p-q}}{3(1-e^{-q})}(1,1,1)$ between $l$ and the carrying simplex $\Sigma$ of system (\ref{3ML}), that is, $l \cap \Sigma =\{P\}.$ Recalling that $L(x_1,x_2,x_3)=e^{-p}(x_1,x_2,x_3)$, we obtain $LP \in l$. Since $l$ is invariant under the solution flow $\{ Q_t \}_{t\geq 0}$ of system (\ref{ML}), we have $Q_t(LP)\in l$. Taking $t=\varphi\omega,$ it follows that $S(P)=Q_{\varphi \omega}(LP) \in l.$ By the invariance of the carrying simplex, we also get $S(P)\in \Sigma$, and then, $S(P)\in l \cap \Sigma$. Consequently, $S(P)=P$, which implies that $P$ is a positive fixed point of $S$. We have completed the proof. \end{proof} \begin{remark} From Corollary \ref{thm33}, one can see that system (\ref{3ML}) inherits stability properties of the heteroclinic cycle of the classical May-Leonard competition model (\ref{ML}) in the case that $\lambda_1=\lambda_2=\lambda_3$ and $\alpha+\beta \neq 2$. This means that the stability of the heteroclinic cycle has not been changed by the introduction of seasonal succession. \end{remark} \section{Bifurcation and Numerical simulation} There are five parameters $\varphi, \omega, \lambda_i \ (i=1,2,3)$ related to the seasonal succession.
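As a numerical sanity check of Corollary \ref{thm33}(iii) (ours, not part of the proof; the parameter values below are illustrative assumptions with $\alpha+\beta=2$ and $\lambda_1=\lambda_2=\lambda_3$), the following Python sketch implements the Poincar\'e map $S$ with a classical RK4 integrator and verifies numerically that the plane $\pi$ is invariant under $S$ and that $P$ is a positive fixed point of $S$:

```python
import math

# Illustrative parameters (assumptions): alpha + beta = 2, equal lambdas
alpha, beta = 0.7, 1.3
lam, phi, omega = 0.3, 0.6, 10.0
p, q = lam * (1 - phi) * omega, phi * omega
sigma = (1 - math.exp(p - q)) / (1 - math.exp(-q))  # height of the plane pi

def f(x):
    # good-season May-Leonard vector field
    x1, x2, x3 = x
    return [x1 * (1 - x1 - alpha * x2 - beta * x3),
            x2 * (1 - beta * x1 - x2 - alpha * x3),
            x3 * (1 - alpha * x1 - beta * x2 - x3)]

def rk4(x, T, n=4000):
    # classical fourth-order Runge-Kutta integration of dx/dt = f(x)
    h = T / n
    for _ in range(n):
        k1 = f(x)
        k2 = f([x[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = f([x[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = f([x[i] + h * k3[i] for i in range(3)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
    return x

def S(x):
    # Poincare map: exponential decay over the bad season, then the flow
    return rk4([math.exp(-p) * xi for xi in x], q)

# a point of the plane pi is mapped back into pi ...
x = [0.5 * sigma, 0.3 * sigma, 0.2 * sigma]
assert abs(sum(S(x)) - sigma) < 1e-6
# ... and P = (sigma/3)(1,1,1) is a positive fixed point of S
P = [sigma / 3] * 3
assert all(abs(S(P)[i] - sigma / 3) < 1e-6 for i in range(3))
```

The check works because, when $\alpha+\beta=2$, the sum $Y=x_1+x_2+x_3$ itself obeys the logistic equation during the good season, which is exactly the mechanism used in the proof above.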
By Theorems \ref{thm31}-\ref{thm32}, it is not difficult to see that the existence and stability of the heteroclinic cycle do not depend on the period $\omega$. Hsu and Zhao \cite[Lemma 2.1]{SX2012} states that if $h_i\leq 0, i=1,2,3$ (i.e., $\lambda_i \geq \frac{\varphi}{1-\varphi}$), then there exist no axial fixed points for system (\ref{3ML}), which implies that there are no heteroclinic cycles connecting three axial fixed points. Here, we select $\varphi$ as the switching strategy parameter for a given example to analyze the stability of the heteroclinic cycle. By numerical simulation, rich dynamics (including nontrivial periodic solutions, invariant closed curves and heteroclinic cycles) for such a system are exhibited. \par For system (\ref{ML}), when taking $\alpha=0.4, \beta=1.61$, it follows from May and Leonard \cite{ML1975} (or Chi, Hsu and Wu \cite{CHW1998}) that the heteroclinic cycle of the system is attracting. In what follows, for the same parameter values $\alpha=0.4$ and $\beta=1.61$, we also take $\lambda_1=0.2, \lambda_2=0.1, \lambda_3=0.4,$ $\omega=10$ and initial values $x_1(0)=0.5, x_2(0)=0.6, x_3(0)=0.4$, and then, the concrete form is given by \begin{equation} \begin{cases} \label{3ML2} \frac{dx_1}{dt}=-0.2 x_1,\qquad \quad \qquad \qquad \qquad \ \ 10m \leq t\leq 10m+10(1-\varphi), \\ \frac{dx_1}{dt}=x_1(1-x_1-0.4 x_2-1.61 x_3), \ \ 10m +10(1-\varphi) \leq t\leq 10(m+1),\\ \frac{dx_2}{dt}=-0.1 x_2,\qquad \quad \qquad \qquad \qquad\ \ 10m \leq t\leq 10m+10(1-\varphi), \\ \frac{dx_2}{dt}=x_2(1-1.61 x_1-x_2-0.4 x_3),\ \ 10m+10(1-\varphi) \leq t\leq 10(m+1),\\ \frac{dx_3}{dt}=-0.4 x_3,\qquad \quad \qquad \qquad \qquad\ \ 10m \leq t\leq 10m+10(1-\varphi), \\ \frac{dx_3}{dt}=x_3(1-0.4 x_1-1.61 x_2- x_3), \ \ 10m +10(1-\varphi) \leq t\leq 10(m+1),\\ \big(0.5,0.6,0.4\big)=x^0 \in \mathbb{R}^3_+, \quad \qquad \qquad \qquad \ m=0,1,2\dots,\\ \end{cases} \end{equation} where $\varphi \in [0,1]$.
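Since each coordinate axis is invariant and the good-season dynamics on it reduces to the logistic equation, the restriction of the Poincar\'e map of system (\ref{3ML2}) to the $x_1$-axis has a closed form built from (\ref{3ML-2}). The following Python sketch (a numerical check of ours, not part of the analysis; $\varphi=0.73$ is one of the values used in the simulations below) iterates this one-dimensional map and recovers the axial fixed point predicted by the analogue of formula (\ref{3ML-3}) with $p$ replaced by $\lambda_1(1-\varphi)\omega$:

```python
import math

# system (3ML2): lambda_1 = 0.2, omega = 10; take phi = 0.73
lam1, omega, phi = 0.2, 10.0, 0.73
p1 = lam1 * (1 - phi) * omega   # decay exponent over the bad season
q = phi * omega                 # length of the good season

def S1(x):
    # Poincare map restricted to the x1-axis: decay by e^{-p1},
    # then the closed-form logistic flow (3ML-2) for time q
    y = math.exp(-p1) * x
    return y / (y + (1 - y) * math.exp(-q))

x = 0.5                          # any positive starting value
for _ in range(200):
    x = S1(x)                    # converges since h_1 = q - p1 > 0

x_star = (1 - math.exp(p1 - q)) / (1 - math.exp(-q))  # analogue of (3ML-3)
assert abs(x - x_star) < 1e-10
```

The iteration converges geometrically with ratio $e^{-h_1}$, in agreement with the eigenvalue $\mu_1=e^{-h_1}$ computed in Lemma \ref{lem22}.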
Recall that $h_i>0 \ (i=1,2,3)$ implies that system (\ref{3ML2}) has three axial fixed points, one on each coordinate axis, written as $\tilde{R}_i \ (i=1,2,3)$. Then, we have \begin{theorem} \label{thm41} Let $\tilde{S}$ be the Poincar$\acute{e}$ map induced by system (\ref{3ML2}), then \begin{enumerate} \item [(i)] If $\varphi \in [\frac{4.44}{10.54},1]$, then system (\ref{3ML2}) admits a heteroclinic cycle $\tilde{\Gamma}$ connecting three axial fixed points $\tilde{R}_i \ (i=1,2,3)$. \item [(ii)] If $\varphi \in [0,\frac{4.44}{10.54})$, then system (\ref{3ML2}) does not admit a heteroclinic cycle connecting three axial fixed points $\tilde{R}_i \ (i=1,2,3)$. \end{enumerate} \end{theorem} \begin{proof} Noticing that $h_i=(\varphi-\lambda_i(1-\varphi))\omega \ (i=1,2,3)$, an easy calculation yields $$h_1=12\varphi-2, \ h_2=11\varphi-1 \ \ {\rm and} \ \ h_3=14\varphi-4.$$ In addition, $$h_2-\beta h_1=2.22-8.32\varphi,\ h_1-\alpha h_2=7.6\varphi-1.6,\ h_3-\beta h_2=-3.71\varphi-2.39<0$$ and $$h_2-\alpha h_3=5.4\varphi+0.6>0,\ h_1-\beta h_3=-10.54\varphi+4.44, \ h_3-\alpha h_1=9.2\varphi-3.2.$$ It then easily follows that \begin{enumerate} \item [(1)] $\varphi>\frac{2}{7}$ if and only if $h_i>0 \ (i=1,2,3)$; \item [(2)] $\varphi>\frac{4.44}{10.54}$ if and only if the condition $(\tilde{H})$ holds. \end{enumerate} This implies that if $\varphi>\frac{4.44}{10.54}$, then system (\ref{3ML2}) satisfies $(\tilde{H})$ and $h_i>0,i=1,2,3$. Meanwhile, system (\ref{3ML2}) has three axial fixed points $\tilde{R}_i \ (i=1,2,3)$. By Theorem \ref{thm31}, we can see that if $\varphi>\frac{4.44}{10.54}$, then system (\ref{3ML2}) admits a heteroclinic cycle $\tilde{\Gamma}$ connecting three axial fixed points $\tilde{R}_i \ (i=1,2,3)$. \par When $\varphi=\frac{4.44}{10.54}$, one can calculate that $$h_2-\beta h_1<0,\ h_1-\alpha h_2>0,\ h_3-\beta h_2<0, h_2-\alpha h_3>0 ,\ h_1=\beta h_3, \ h_3-\alpha h_1>0$$ and $h_i>0 \ (i=1,2,3)$.
Compared with the condition $(\tilde{H})$, we only consider the case that $h_1=\beta h_3, \ h_3-\alpha h_1>0$. Since $h_1=\beta h_3$, it follows from Hsu and Zhao \cite[Lemma 2.5(i)]{SX2012} that there are no interior fixed points on the coordinate plane $\{x_2=0\}$. By Lemma \ref{lem20}, let $\tilde{\Sigma}$ denote the carrying simplex of system (\ref{3ML2}) and $\tilde{\Gamma}_{13}$ denote the intersection of $\tilde{\Sigma}$ with the coordinate plane $\{x_2=0\}$. Due to $h_3-\alpha h_1>0$ and Lemma \ref{lem22} (or see Niu et al. \cite[Lemma 3.2]{NWX2021}), we can see that $\tilde{R}_1$ repels along $\tilde{\Gamma}_{13}$. Since the restriction of $\tilde{S}$ to the segment $\tilde{\Gamma}_{13}$ is a monotone one-dimensional map, all orbits on $\tilde{\Gamma}_{13}$ have one fixed point as the $\alpha$-limit set and the other as the $\omega$-limit set; hence, all orbits on $\tilde{\Gamma}_{13}$ take $\tilde{R}_1$ as the $\alpha$-limit set and $\tilde{R}_3$ as the $\omega$-limit set. By the cyclic symmetry, it is not difficult to obtain that system (\ref{3ML2}) admits a heteroclinic cycle $\tilde{\Gamma}$ connecting three axial fixed points $\tilde{R}_i \ (i=1,2,3)$. \par On the other hand, when $\varphi \in [0,\frac{2}{7}]$, it follows from Hsu and Zhao \cite[Lemma 2.1]{SX2012} that there are no interior fixed points on the $x_3$-axis, which entails that no heteroclinic cycle connecting the three axial fixed points exists in this case.
When $\varphi \in (\frac{2}{7},\frac{3.2}{9.2}]$, we have $$h_2-\beta h_1=2.22-8.32\varphi<0, \ h_1-\alpha h_2=7.6\varphi-1.6>0,\ h_3-\beta h_2<0,$$ and $$ h_2-\alpha h_3>0,\ h_1-\beta h_3=-10.54\varphi+4.44>0, \ h_3-\alpha h_1=9.2\varphi-3.2 \leq 0 ;$$ when $\varphi \in (\frac{3.2}{9.2},\frac{4.44}{10.54})$, we also have $$ h_2-\beta h_1=2.22-8.32\varphi<0, \ h_1-\alpha h_2=7.6\varphi-1.6>0, \ h_3-\beta h_2<0,$$ and $$ h_2-\alpha h_3>0,\ h_1-\beta h_3=-10.54\varphi+4.44> 0, \ h_3-\alpha h_1=9.2\varphi-3.2 > 0.$$ For these two cases, by using the proof of Theorem \ref{thm31} and Lemma \ref{lem22}-\ref{lem24}, it is not difficult to see that the boundary of the carrying simplex $\tilde{\Sigma}$ of system (\ref{3ML2}) cannot form a heteroclinic cycle connecting three axial fixed points. Consequently, when $\varphi \in [0,\frac{4.44}{10.54})$, there does not exist a heteroclinic cycle connecting three axial fixed points $\tilde{R}_i \ (i=1,2,3)$ for system (\ref{3ML2}). The proof is completed. \end{proof} Notice that $$\vartheta=(7.6\varphi-1.6)(5.4\varphi+0.6)(9.2\varphi-3.2)-(8.32\varphi-2.22)(3.71\varphi+2.39)(10.54\varphi-4.44).$$ Solving the algebraic equation $\vartheta=0$, one finds a unique positive root $\varphi_0 \approx 0.6995$ in the interval $[\frac{4.44}{10.54},1]$. Moreover, when $\varphi \in [\frac{4.44}{10.54},1]$, we have \begin{theorem} \label{thm42} Given system (\ref{3ML2}), then \begin{enumerate} \item [(i)] If $\varphi \in (\varphi_0,1]$, then $\tilde{\Gamma}$ attracts; \item [(ii)] If $\varphi \in [\frac{4.44}{10.54},\varphi_0)$, then $\tilde{\Gamma}$ repels, \end{enumerate} where $\varphi_0$ is the positive root of the algebraic equation $\vartheta=0$. The bifurcation graph of $\varphi$-$\vartheta$ in $\varphi \in [\frac{4.44}{10.54},1]$ is demonstrated in Figure 3. \end{theorem} \begin{proof} For the algebraic equation $\vartheta=0$, an easy calculation gives \begin{enumerate} \item [(1)] If $\varphi \in (\varphi_0,1]$, then $\vartheta<0$.
\item [(2)] If $\varphi \in [\frac{4.44}{10.54},\varphi_0)$, then $\vartheta>0$. \end{enumerate} By Theorem \ref{thm32}, it follows that the statements (i) and (ii) are valid, which implies that $\varphi_0$ is the critical value of stability of the heteroclinic cycle $\tilde{\Gamma}$. The proof is completed. \end{proof} Considering that $\frac{4.44}{10.54}\approx 0.4213$ and $\varphi_0 \approx 0.6995$, the bifurcation graph of $\varphi$-$\vartheta$ given by Theorem \ref{thm42} is shown below (see Figure 3). \begin{figure} \caption{$\varphi$-$\vartheta$ bifurcation graph, $\varphi \in [\frac{4.44}{10.54},1]$} \end{figure} \par According to the above analysis, we take different values of $\varphi$ to observe the dynamics of system (\ref{3ML2}) via numerical simulation. Firstly, taking $\varphi=0.73$, one can see that the orbit emanating from initial value $x^0=(0.5,0.6,0.4)$ for the Poincar$\acute{\rm e}$ map $\tilde{S}$ generated by system (\ref{3ML2}) tends to the heteroclinic cycle $\tilde{\Gamma}$ from Figure 4. The corresponding solution flow is indicated in Figure 5, which shows that system (\ref{3ML2}) possesses the May-Leonard phenomenon. \begin{figure} \caption{\footnotesize Tend to $\tilde{\Gamma}$} \caption{\footnotesize Tend to $\tilde{\Gamma}$} \end{figure} \par When taking $\varphi=0.631$ and $\varphi=0.629$, one can see that the orbit emanating from initial value $x^0=(0.5,0.6,0.4)$ for $\tilde{S}$ tends to invariant closed curves from Figure 6 and Figure 7. Meanwhile, we also find that the invariant closed curve shrinks as $\varphi$ decreases. \begin{figure} \caption{\scriptsize Tend to invariant closed curve} \caption{\scriptsize Tend to invariant closed curve} \end{figure} \par When taking $\varphi=0.61$, $\varphi=0.59$, $\varphi=0.58$ and $\varphi=0.52$, one can see that the invariant closed curve for $\tilde{S}$ has degenerated into a positive fixed point from Figures 8-12.
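The critical value $\varphi_0$ in Theorem \ref{thm42} can be recomputed independently; the Python sketch below (ours, for verification only) applies bisection to the expanded expression of $\vartheta$ displayed before Theorem \ref{thm42}:

```python
def theta(phi):
    # expanded expression of vartheta for system (3ML2)
    return ((7.6 * phi - 1.6) * (5.4 * phi + 0.6) * (9.2 * phi - 3.2)
            - (8.32 * phi - 2.22) * (3.71 * phi + 2.39) * (10.54 * phi - 4.44))

# theta changes sign on [4.44/10.54, 1]: bisection locates the unique root
a, b = 4.44 / 10.54, 1.0
assert theta(a) > 0 and theta(b) < 0
for _ in range(60):
    m = 0.5 * (a + b)
    if theta(m) > 0:
        a = m
    else:
        b = m
phi0 = 0.5 * (a + b)
assert abs(phi0 - 0.6995) < 5e-4   # agrees with the reported value
```

This also confirms the sign pattern used in the proof: $\vartheta>0$ on $[\frac{4.44}{10.54},\varphi_0)$ and $\vartheta<0$ on $(\varphi_0,1]$.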
Furthermore, the patterns of convergence to positive fixed points are rich and varied. \begin{figure} \caption{\scriptsize Tend to the positive fixed point} \caption{\scriptsize Tend to the positive fixed point} \end{figure} \begin{figure} \caption{\scriptsize Tend to the positive fixed point} \caption{\scriptsize Tend to the positive fixed point} \end{figure} \par By Theorem \ref{thm41}, when $\varphi \in [0,\frac{4.44}{10.54})$, system (\ref{3ML2}) does not admit the heteroclinic cycle. Using numerical simulation, we find that the orbit emanating from initial value $x^0=(0.5,0.6,0.4)$ for $\tilde{S}$ tends to different fixed points as $\varphi$ decreases gradually. More precisely, when taking $\varphi=0.39$, one can see that the orbit for $\tilde{S}$ tends to a positive fixed point from Figure 13; when taking $\varphi=0.36$, one can see that the orbit for $\tilde{S}$ tends to the interior fixed point of coordinate plane $\{x_2=0\}$ from Figure 14; when taking $\varphi=0.34$, one can see that the orbit for $\tilde{S}$ tends to the interior fixed point of $x_1$-axis from Figure 15; when taking $\varphi=0.22$, one can see that the orbit for $\tilde{S}$ tends to the interior fixed point of coordinate plane $\{x_3=0\}$ from Figure 16; when taking $\varphi=0.1$, one can see that the orbit for $\tilde{S}$ tends to the interior fixed point of $x_2$-axis from Figure 17; when taking $\varphi=0.09$, one can see that the orbit for $\tilde{S}$ tends to the trivial fixed point $O$ from Figure 18. \begin{figure} \caption{\scriptsize Tend to the positive fixed point} \caption{\scriptsize Tend to the interior fixed point of $\{x_2=0\}$} \end{figure} \par In conclusion, compared to the classical May-Leonard competition model (\ref{ML}), system (\ref{3ML2}) has much richer dynamics as $\varphi$ varies.
To be more explicit, when $\varphi$ varies in $[0,1]$, system (\ref{3ML2}) enjoys different dynamic features: the orbit emanating from initial value $x^0=(0.5,0.6,0.4)$ for the Poincar$\acute{\rm e}$ map $\tilde{S}$ tends to the heteroclinic cycle $\tilde{\Gamma}$ (see Figure 4) $\rightarrow$ invariant closed curves (see Figures 6 and 7) $\rightarrow$ positive fixed points (see Figures 8-13) $\rightarrow$ the interior fixed point of the coordinate plane $\{x_2=0\}$ (see Figure 14) $\rightarrow$ the interior fixed point of the $x_1$-axis (see Figure 15) $\rightarrow$ the interior fixed point of the coordinate plane $\{x_3=0\}$ (see Figure 16) $\rightarrow$ the interior fixed point of the $x_2$-axis (see Figure 17) $\rightarrow$ the trivial fixed point $O$ (see Figure 18). \begin{figure} \caption{\scriptsize Tend to the interior fixed point of $x_1$-axis} \caption{\scriptsize Tend to the interior fixed point of $\{x_3=0\}$} \end{figure} \begin{figure} \caption{\scriptsize Tend to the interior fixed point of $x_2$-axis} \caption{\scriptsize Tend to the trivial fixed point $O$} \end{figure} \section{Discussion} In this paper, we focus on a symmetric May-Leonard competition model with seasonal succession. By virtue of the carrying simplex, we obtain that system (\ref{3ML}) admits a heteroclinic cycle connecting three axial fixed points $R_i, i=1,2,3$, which is the boundary of the carrying simplex. By the stability theory of heteroclinic cycles for discrete Kolmogorov competitive maps proposed by Jiang et al. \cite{JNW2016}, the stability criterion of the heteroclinic cycle for system (\ref{3ML}) is established. In particular, when $\lambda_1=\lambda_2=\lambda_3$ and $\alpha+\beta \neq 2$, it is shown that system (\ref{3ML}) inherits stability properties of the heteroclinic cycle of the classical May-Leonard competition model (\ref{ML}), which implies that the stability of the heteroclinic cycle has not been changed by the introduction of seasonal succession.
Besides, we also present the explicit expression of the carrying simplex, which shows that the carrying simplex is flat in this special case. Applying the stability criterion to a given system (\ref{3ML2}), we obtain the stability bifurcation of the heteroclinic cycle for such a system. By numerical simulation, we not only illustrate the effectiveness of our theoretical results, but also exhibit rich dynamics (including nontrivial periodic solutions, invariant closed curves and heteroclinic cycles) for the system. Compared to concrete discrete-time competitive maps discussed in \cite{GJNY2018,GJNY2020,JL2016,JL2017,JN22017}, the dynamic patterns converging to invariant closed curves and positive fixed points for system (\ref{3ML2}) are much richer. We conjecture that these interesting and beautiful dynamic patterns are generated by the switching between the two subsystems. \par In addition, a large number of numerical simulations strongly suggest that if system (\ref{3ML}) has one positive periodic solution, then the positive periodic solution is unique. The estimate for the Floquet multipliers of the positive periodic solution and the complete classification for the global dynamics of system (\ref{3ML}) are both challenging problems for us. We will leave them for future research. \end{document}
\begin{document} \title{Enhanced Lifespan of Smooth Solutions of a Burgers-Hilbert Equation\thanks{Submitted: September 29, 2011.}} \begin{abstract} We consider an initial value problem for a quadratically nonlinear inviscid Burgers-Hilbert equation that models the motion of vorticity discontinuities. We use a normal form transformation, which is implemented by means of a near-identity coordinate change of the independent spatial variable, to prove the existence of small, smooth solutions over cubically nonlinear time-scales. For vorticity discontinuities, this result means that there is a cubically nonlinear time-scale before the onset of filamentation. \end{abstract} \begin{keywords} Normal form transformations, nonlinear waves, inviscid Burgers equation, vorticity discontinuities. \end{keywords} \begin{AMS} 37L65, 76B47. \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{HUNTER AND IFRIM}{BURGERS-HILBERT EQUATION} \section{Introduction} We consider the following initial value problem for an inviscid Burgers-Hilbert equation for $u(t,x; \epsilon)$: \begin{align} \label{bheq} \begin{split} &u_t+ \epsilon u u_x = \mathbf{H}\left[ u\right] ,\\ &u(0,x;\epsilon) = u_0(x). \end{split} \end{align} In \eq{bheq}, $\mathbf{H}$ is the spatial Hilbert transform, $\epsilon$ is a small parameter, and $u_0$ is given smooth initial data. This Burgers-Hilbert equation is a model equation for nonlinear waves with constant frequency \cite{biello}, and it provides an effective equation for the motion of a vorticity discontinuity in a two-dimensional flow of an inviscid, incompressible fluid \cite{biello, marsden}. Moreover, as shown in \cite{biello}, even though \eq{bheq} is quadratically nonlinear it provides a formal asymptotic approximation for the small-amplitude motion of a planar vorticity discontinuity located at $y = \epsilon u(t,x;\epsilon)$ over cubically nonlinear time-scales.
We assume for simplicity that $x\in \mathbb{R}$, in which case the Hilbert transform is given by \[ \mathbf{H}[u](t,x;\epsilon) = \mathrm{p.v.} \frac{1}{\pi} \int\frac{u(t,y;\epsilon)}{x-y}\, dy. \] We will show that smooth solutions of \eq{bheq} exist for times of the order $\epsilon^{-2}$ as $\epsilon \to 0$. Explicitly, if $H^s(\mathbb{R})$ denotes the standard Sobolev space of functions with $s$ weak $L^2$-derivatives, we prove the following result: \begin{theorem} \label{th:main} Suppose that $u_0 \in H^2(\mathbb{R})$. There are constants $k >0$ and $\epsilon_{0} > 0$, depending only on $\Vert u_{0}\Vert _{H^2}$, such that for every $\epsilon$ with $|\epsilon| \leq \epsilon _{0}$ there exists a solution \[ u \in C\left( I^\epsilon;{H}^{2}\left( \mathbb{R}\right) \right) \cap C^1 \left(I^\epsilon;{H}^{1}\left(\mathbb{R}\right) \right) \] of (\ref{bheq}) defined on the time-interval $I^\epsilon=\left[-{k}/{\epsilon^2},{k}/{\epsilon^2}\right]$. \end{theorem} The cubically nonlinear $O(\epsilon^{-2})$ lifespan of smooth solutions for the Burgers-Hilbert equation is longer than the quadratically nonlinear $O(\epsilon^{-1})$ lifespan for the inviscid Burgers equation $u_t + \epsilon uu_x=0$. The explanation of this enhanced lifespan is that the quadratically nonlinear term of the order $\epsilon$ in \eq{bheq} is nonresonant for the linearized equation. To see this, note that the solution of the linearized equation $u_t = \mathbf{H}[u]$ is given by $u = e^{t\mathbf{H}} u_0$, or \[ u(t,x) = u_0(x) \cos t + h_0(x) \sin t,\qquad h_0 = \mathbf{H}[u_0], \] as may be verified by use of the identity $\operatorname{\mathbf{H}}^2 = -\operatorname{\mathbf{I}}$. This solution oscillates with frequency one between the initial data and its Hilbert transform, and the effect of the nonlinear forcing term $\epsilon u u_x$ on the linearized equation averages to zero because it contains no Fourier component in time whose frequency is equal to one. 
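These facts about the linearized equation are easy to check numerically. On a $2\pi$-periodic grid (a periodic analogue of the real-line setting considered here; the sketch below is illustrative and not part of the proof), the Hilbert transform acts in Fourier space as multiplication by $-i\,\operatorname{sgn}(k)$:

```python
import numpy as np

def hilbert(u):
    # periodic Hilbert transform: Fourier multiplier -i*sgn(k)
    k = np.fft.fftfreq(len(u))
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(u)))

n = 256
x = 2 * np.pi * np.arange(n) / n
u0 = np.cos(3 * x) + 0.5 * np.sin(7 * x)   # smooth, mean-zero initial data
h0 = hilbert(u0)

# H^2 = -I on mean-zero functions
assert np.allclose(hilbert(h0), -u0, atol=1e-10)

# u(t) = u0 cos t + h0 sin t solves u_t = H[u]:
# u_t = -u0 sin t + h0 cos t, while H[u] = h0 cos t + H[h0] sin t
t = 0.7
u = u0 * np.cos(t) + h0 * np.sin(t)
ut = -u0 * np.sin(t) + h0 * np.cos(t)
assert np.allclose(ut, hilbert(u), atol=1e-10)
```

With this convention $\mathbf{H}[\cos kx]=\sin kx$ for $k>0$, matching the kernel $1/\pi(x-y)$ above.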
Alternatively, one can view the averaging of the nonlinearity as a consequence of the fact that the nonlinear steepening of the profile in one phase of the oscillation is canceled by its expansion in the other phase. This phenomenon is illustrated by numerical results from \cite{biello}, which are reproduced in Figure~\ref{u_sing}. The transition from an $O(\epsilon^{-1})$ lifespan for large $\epsilon$ to an $O(\epsilon^{-2})$ lifespan for small $\epsilon$ is remarkably rapid: once a singularity fails to form over the first oscillation in time, a smooth solution typically persists over many oscillations. \begin{figure} \caption{Logarithm of the singularity formation time $T_s$ for the Burgers-Hilbert equation \eq{bheq}} \label{u_sing} \end{figure} In the context of the motion of a vorticity discontinuity, the formation of a singularity in a solution of \eq{bheq} corresponds to the filamentation of the discontinuity \cite{dritschel}. The result proved here corresponds to an enhanced lifespan before nonlinear `breaking' of the discontinuity leads to the formation of a filament. There are three main difficulties in the proof of Theorem~\ref{th:main}. The first is that the presence of a quadratically nonlinear term in \eq{bheq} means that straightforward energy estimates prove the existence of smooth solutions only on time-scales of the order $\epsilon^{-1}$. Following the idea introduced by Shatah \cite{shatah} in the context of PDEs, and used subsequently by other authors, we remove the quadratically nonlinear term of the order $\epsilon$ by a normal form or near-identity transformation, replacing it by a cubically nonlinear term of the order $\epsilon^2$. The second difficulty is that a standard normal form transformation of the dependent variable, of the type used by Shatah, leads to a loss of spatial derivatives because we are using a lower-order linear term $\mathbf{H}[u]$ to eliminate a higher-order nonlinear term $\epsilon uu_x$.
The third difficulty is that \eq{bheq} is nondispersive and solutions of the linearized equation oscillate but do not decay in time. Thus, we cannot use any dispersive decay in time to control the loss of spatial derivatives. The key idea in this paper that avoids these difficulties is to make a transformation of the independent variable, rather than the dependent variable. We write \begin{equation} h(t,x;\epsilon) = \mathbf{H}[u](t,x;\epsilon) \label{defh} \end{equation} and define \begin{equation} g(t,\xi;\epsilon) = h(t,x;\epsilon),\qquad x = \xi - \epsilon g(t,\xi;\epsilon). \label{nearid_trans} \end{equation} Then, as we will show, the transformed function $g(t,\xi;\epsilon)$ satisfies an integro-differential equation of the form \begin{align} \begin{split} g_{t}(t,\xi;\epsilon) &= \mathrm{p.v.} \frac{1}{\pi} \int\frac{g(t,\tilde{\xi};\epsilon)}{\xi-\tilde{\xi}}\, d\tilde{\xi} \\ &\qquad - \frac{1}{\pi} \epsilon^2 \partial_{\xi} \int (\xi-\tilde{\xi}) g_{\tilde{\xi}}(t,\tilde{\xi};\epsilon) \phi\left(\frac{g(t,\xi;\epsilon)-g(t,\tilde{\xi};\epsilon)}{\xi-\tilde{\xi}}; \epsilon\right)\, d\tilde{\xi} \end{split} \label{g-eq} \end{align} where $\phi(c;\epsilon)$ is a smooth function, given in Lemma~\ref{lemma:geq}. The term of the order $\epsilon$ has been removed from \eq{g-eq}, and the equation has good energy estimates that imply the enhanced lifespan of smooth solutions. The interpretation of the transformation \eq{nearid_trans} is not entirely clear. On taking the Hilbert transform of \eq{bheq} we get $h_t = - u + O(\epsilon)$, so that \[ x_t = -\epsilon g_t = -\epsilon h_t + O(\epsilon^2) = \epsilon u + O(\epsilon^2). \] Thus the transformation $\xi\mapsto x$ in \eq{nearid_trans} agrees up to the order $\epsilon$ with a transformation from characteristic to spatial coordinates for \eq{bheq}. The coordinate $\xi$, however, differs from $x$ even when $t=0$, and the use of characteristic coordinates does not appear to simplify the analysis. 
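For later reference, the map $\xi\mapsto x$ in \eq{nearid_trans} has Jacobian $1-\epsilon g_\xi$, and the chain rule gives $h_x = g_\xi/(1-\epsilon g_\xi)$ and $h_{xx} = g_{\xi\xi}/(1-\epsilon g_\xi)^3$, identities that are used repeatedly below. A quick symbolic check (our own sketch):

```python
import sympy as sp

xi, eps = sp.symbols('xi epsilon')
g = sp.Function('g')(xi)
J = 1 - eps * sp.diff(g, xi)        # Jacobian dx/dxi for x = xi - eps*g(xi)

# chain rule: h(x) = g(xi) gives h_x = g_xi/(1 - eps*g_xi)
hx = sp.diff(g, xi) / J
# differentiating once more in x, i.e. applying (1/J)*d/dxi:
hxx = sp.simplify(sp.diff(hx, xi) / J)

assert sp.simplify(hxx - sp.diff(g, xi, 2) / J**3) == 0
```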
As a partial motivation for \eq{nearid_trans}, we show in Section~\ref{sec:normal_form} that it agrees to leading order in $\epsilon$ with a normal form transformation of the dependent variable that is given in \cite{biello}. We were not able, however, to use the latter normal form transformation to prove Theorem~\ref{th:main} because of the loss of derivatives in the higher-order terms. We consider \eq{bheq} on the real line for simplicity. Equation \eq{bheq} is nondispersive and solutions of the linearized equation oscillate in time. Thus, our proof does not depend on any dispersive decay of the solutions in time, and a similar result would apply to spatially periodic solutions. Theorem~\ref{th:main} is also presumably true in $H^s$ for any $s > 3/2$; we consider $s=2$ to avoid complications associated with the use of fractional derivatives. A proof of singularity formation for \eq{bheq} under certain conditions on $u_0$ and $\epsilon$ is given in \cite{castro}. \section{Proof of the Theorem} \label{sec:proof} In this section, we prove Theorem~\ref{th:main}. It follows from standard energy arguments (\textit{e.g.}\ \cite{kato}) that \eq{bheq} has a unique local $H^2$-solution in a time-interval $J^\epsilon$ depending on the $H^2$-norm of the initial data and $\epsilon$. Moreover, for any $s\ge 2$, the solution remains in $H^s$ if the initial data is in $H^s$ and depends continuously on the initial data in $C(J^\epsilon; H^s)$. Thus, in order to prove Theorem~\ref{th:main} it is sufficient to prove an \textit{a priori} $H^2$-bound for smooth solutions $u \in C^\infty\left(I^\epsilon; H^\infty(\mathbb{R})\right)$ where $H^\infty(\mathbb{R}) = \cap_{s=1}^\infty H^s(\mathbb{R})$. To derive this bound, we first transform the equation to remove the order $\epsilon$ term and then carry out $H^2$-estimates on the transformed equation. The required computations, such as integrations by parts, are justified for these smooth solutions. 
\subsection{Near-identity transformation} Let $h$ denote the Hilbert transform of $u$, as in \eq{defh}. Taking the Hilbert transform of \eq{bheq}, using the identity \[ \mathbf{H}\left[u^2 - h^2\right] = 2 h u \] and the fact that $u= - \mathbf{H}[h]$, we find that $h$ satisfies the equation \begin{equation} \label{htrans} h_{t}+\epsilon \left\lbrace \mathbf{H}\left[hh_{x}\right]-h\mathbf{H}\left[ h_{x}\right] -\mathbf{H}\left[ h\right]h_{x}\right\rbrace =\mathbf{H}\left[ h\right]. \end{equation} We will make the change of variables \eq{nearid_trans} in \eq{htrans}, so first we discuss \eq{nearid_trans}. The map $\xi\mapsto x$ is smoothly invertible if $|\epsilon g_\xi| < 1$, which holds by Sobolev embedding if $\|\epsilon g\|_{H^2}$ is sufficiently small. Specifically, we have the Gagliardo-Nirenberg-Moser inequality \begin{equation} \|g_\xi\|_{L^\infty} \le N \|g\|^{1/4}_{L^2} \|g_{\xi\xi}\|^{3/4}_{L^2}, \label{gag_nir} \end{equation} where we can take, for example, \[ N = \sqrt{\frac{8}{3}}. \] We assume throughout this section that \begin{equation} N\|\epsilon g\|^{1/4}_{L^2} \|\epsilon g_{\xi\xi}\|^{3/4}_{L^2} \le \frac{1}{2}, \label{c2small} \end{equation} which ensures that \begin{equation} \|\epsilon g_\xi\|_{L^\infty} \le \frac{1}{2}. \label{cinftysmall} \end{equation} By the chain rule, \begin{equation} h_{x}=\frac{g_{\xi}}{1-\epsilon g_{\xi}}, \qquad h_{xx}=\frac{g_{\xi\xi}}{(1-\epsilon g_{\xi})^3}. \label{hx} \end{equation} Thus, if \eq{cinftysmall} holds, then \[ \int_{\mathbb{R}} h^2 \, dx = \int_{\mathbb{R}} g^2 \left(1-\epsilon g_\xi\right)\, d\xi = \int_{\mathbb{R}} g^2 \, d\xi, \qquad \int_{\mathbb{R}} h_{xx}^2 \, dx = \int_{\mathbb{R}} \frac{g_{\xi\xi}^2}{\left(1-\epsilon g_\xi\right)^5}\, d\xi. 
\] Hence, since $\mathbf{H}$ is an isometry on $H^s$, \begin{equation} \|u\|_{L^2} = \|g\|_{L^2}, \qquad \left(\frac{2}{3}\right)^{5/2} \|g_{\xi\xi}\|_{L^2} \le \|u_{xx}\|_{L^2} \le 2^{5/2} \|g_{\xi\xi}\|_{L^2}, \label{normest} \end{equation} and $H^2$-estimates for $g$ imply $H^2$-estimates for $u$. Conversely, one can use the contraction mapping theorem on $C_0(\mathbb{R})$, the space of continuous functions that decay to zero at infinity, to show that if $h_0\in C_0^1(\mathbb{R})$ and \begin{equation} \|\epsilon h_{0x}\|_{L^\infty} < 1 \label{hginvert} \end{equation} then there exists a function $g_0(\cdot;\epsilon)\in C_0(\mathbb{R})$ such that \[ h_0\left(\xi - \epsilon g_0(\xi;\epsilon)\right) = g_0(\xi;\epsilon). \] The function $g_0$ is smooth if $h_0$ is smooth, and $\|\epsilon g_{0\xi}\|_{L^\infty} \le 1/2$ if $\|\epsilon h_{0x}\|_{L^\infty} \le 1/3$. Thus, we can obtain initial data for $g$ from the initial data for $h$. From \eq{nearid_trans}, we have \begin{equation*} h_{t}=\frac{g_{t}}{1-\epsilon g_{\xi}}, \qquad \mathbf{H}\left[h\right]= \mathrm{p.v.}\frac{1}{\pi}\int_{\mathbb{R}}\left[ \frac{1-\epsilon\tilde{g}_{\tilde{\xi}}}{\xi-\tilde{\xi}-\epsilon(g-\tilde{g})}\right]\tilde{g} \, d\tilde{\xi} \end{equation*} where we use the notation \[ g=g(t,\xi;\epsilon),\qquad \tilde{g}=g(t,\tilde{\xi};\epsilon). \] Using these expressions, together with \eq{hx}, in (\ref{htrans}) and simplifying the result, we find that $g(t,\xi;\epsilon)$ satisfies the following nonlinear integro-differential equation: \[ g_{t}= \mathrm{p.v.}\frac{1}{\pi}\int_{\mathbb{R}}\frac{\tilde{g}+\epsilon(g-2\tilde{g})\tilde{g}_{\tilde{\xi}} -\epsilon^2 (g-\tilde{g})g_{\xi}\tilde{g}_{\tilde{\xi}}}{\xi-\tilde{\xi}-\epsilon (g-\tilde{g})}\, d\tilde{\xi}. 
\] Subtracting off the leading order term in $\epsilon$ from the integrand, we may write this equation as \begin{equation} \label{f} g_{t}=\mathbf{H}[g] + \frac{1}{\pi}\epsilon^2 \int_{\mathbb{R}}\left(\frac{g-\tilde{g}}{x-\tilde{x}}\right)\left\{ \left(\frac{\tilde{g}}{\xi-\tilde{\xi}}\right)\left[\frac{g-\tilde{g}}{\xi-\tilde{\xi}} - \tilde{g}_{\tilde{\xi}}\right] + \tilde{g}_{\tilde{\xi}}\left[\frac{g-\tilde{g}}{\xi-\tilde{\xi}} - g_\xi\right]\right\}\, d\tilde{\xi} \end{equation} where \[ \mathbf{H}[g](t,\xi;\epsilon) = \mathrm{p.v.}\frac{1}{\pi} \int_{\mathbb{R}} \frac{g(t,\tilde{\xi};\epsilon)}{\xi - \tilde{\xi}} \, d\tilde{\xi} \] denotes the Hilbert transform of $g$ with respect to $\xi$ and \[ x = \xi - \epsilon g(t,\xi;\epsilon),\qquad \tilde{x} = \tilde{\xi} - \epsilon g(t,\tilde{\xi};\epsilon). \] The integral of the order $\epsilon^2$ in \eq{f} is not a principal value integral since the integrand is a smooth function of $(\xi,\tilde{\xi})$. Finally, we observe that this equation can be put in the form \eq{g-eq}. \begin{lemma} \label{lemma:geq} An equivalent form of equation (\ref{f}) is given by \begin{equation} \label{finalfinaleq} g_{t}=\mathbf{H}\left[g \right] -\frac{1}{\pi}\epsilon ^2 \partial_{\xi}\int_{\mathbb{R}}(\xi-\tilde{\xi})\tilde{g}_{\tilde{\xi}}\, \phi \left( \frac{g-\tilde{g}}{\xi-\tilde{\xi}};\epsilon\right) \, d\tilde{\xi}, \end{equation} where \begin{equation} \phi(c;\epsilon)= - \frac{1}{\epsilon^2}\left\lbrace \log \left( 1-\epsilon c\right) + \epsilon c \right\rbrace. \label{defphi} \end{equation} \end{lemma} \begin{proof} First, we check that \eq{finalfinaleq} is well-defined. Abusing notation slightly, we write \begin{equation} c = \frac{g-\tilde{g}}{\xi-\tilde{\xi}}. \label{defc} \end{equation} From \eq{defphi}, \begin{equation} \phi_c(c;\epsilon) = \frac{c}{1-\epsilon c}, \label{defphic} \end{equation} so $|\phi(c;\epsilon)| \le c^2$ when $|\epsilon c| \le 1/2$, which is implied by \eq{cinftysmall}. 
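The derivative formula \eq{defphic}, the quadratic behavior of $\phi$ for small $c$, and the explicit expressions for $\Psi^{\prime\prime}$, $\Psi^{\prime\prime\prime}$ used later in the energy estimates can all be confirmed symbolically; a short sketch (our own check, not part of the proof):

```python
import sympy as sp

c, eps = sp.symbols('c epsilon')
phi = -(sp.log(1 - eps * c) + eps * c) / eps**2

# phi_c = c/(1 - eps*c)
assert sp.simplify(sp.diff(phi, c) - c / (1 - eps * c)) == 0
# phi is quadratic for small c: phi = c^2/2 + eps*c^3/3 + O(c^4)
assert sp.simplify(sp.series(phi, c, 0, 4).removeO() - (c**2 / 2 + eps * c**3 / 3)) == 0

# Later the proof uses Psi = c*phi - 2*Phi with Phi' = phi, so that
# Psi'' = c*phi'' and Psi''' = phi'' + c*phi'''; the explicit formulas follow:
assert sp.simplify(c * sp.diff(phi, c, 2) - c / (1 - eps * c)**2) == 0
assert sp.simplify(sp.diff(phi, c, 2) + c * sp.diff(phi, c, 3)
                   - (1 + eps * c) / (1 - eps * c)**3) == 0
```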
In that case \[ \left|\int_{\mathbb{R}}(\xi-\tilde{\xi})\tilde{g}_{\tilde{\xi}}\, \phi \left(c;\epsilon\right) \, d\tilde{\xi}\right| \le \int_{\mathbb{R}} \left|\left(g-\tilde{g}\right) \tilde{g}_{\tilde{\xi}} c\right|\,d\tilde{\xi}. \] We use $|g-\tilde{g}| \le 2 \|g\|_{L^\infty}$ in the right hand side of this inequality and apply the Cauchy-Schwarz inequality to get \begin{equation} \left|\int_{\mathbb{R}}(\xi-\tilde{\xi})\tilde{g}_{\tilde{\xi}}\, \phi \left(c;\epsilon\right) \, d\tilde{\xi}\right| \le 2 \|g\|_{L^\infty}\|{g}_{\xi}\|_{L^2}\left\|c\right\|_{L^2} \label{tempint} \end{equation} where \[ \left\|c\right\|_{L^2} = \left[\int_{\mathbb{R}} \left(\frac{g-\tilde{g}}{\xi-\tilde{\xi}}\right)^2\, d\tilde{\xi}\right]^{1/2} \] denotes the $L^2$-norm of $c$ with respect to $\tilde{\xi}$, which is a function of $\xi$. Temporarily suppressing the $(t;\epsilon)$-variables and denoting the derivative of $g$ with respect to $\xi$ by $g^\prime(\xi) = g_\xi(\xi)$, we have from the Taylor integral formula that \[ c = \int_0^1 g^\prime\left(\tilde{\xi} + r(\xi - \tilde{\xi})\right)\, dr, \] and the Cauchy-Schwarz inequality implies that \begin{align*} \left\|c\right\|^2_{L^2}(\xi) &= \int_{\mathbb{R}} c^2 \, d\tilde{\xi} \\ &= \int_0^1 \int_0^1 \int_{\mathbb{R}} g^\prime\left(\tilde{\xi} + r(\xi - \tilde{\xi})\right)g^\prime\left(\tilde{\xi} + s(\xi - \tilde{\xi})\right) \, d\tilde{\xi} dr ds \\ & \le \int_0^1 \int_0^1 \left(\int_{\mathbb{R}} g^{\prime2}\left(\tilde{\xi} + r(\xi - \tilde{\xi})\right)\, d\tilde{\xi}\right)^{1/2} \left(\int_{\mathbb{R}} g^{\prime2}\left(\tilde{\xi} + s(\xi - \tilde{\xi})\right) \, d\tilde{\xi}\right)^{1/2} dr ds \\ & \le \left(\int_0^1 \int_0^1 \frac{1}{\sqrt{(1-r)(1-s)}}\, dr ds\right) \left(\int_{\mathbb{R}} g^{\prime2}(\xi)\, d\xi\right) \\ &\le 4 \left(\int_{\mathbb{R}} g^{\prime2}(\xi)\, d\xi\right).
\end{align*} Thus, \begin{equation} \sup_{\xi\in \mathbb{R}}\left(\int_{\mathbb{R}} c^2 \, d\tilde{\xi} \right)^{1/2} \le 2 \left\|g_\xi\right\|_{L^2}. \label{cl2} \end{equation} Using this estimate in \eq{tempint}, we get \[ \sup_{\xi\in \mathbb{R}}\left|\int_{\mathbb{R}}(\xi-\tilde{\xi})\tilde{g}_{\tilde{\xi}}\, \phi \left(c;\epsilon\right) \, d\tilde{\xi}\right| \le 4 \|g\|_{L^\infty} \|{g}_{\xi}\|^2_{L^2}. \] Thus, the $\tilde{\xi}$-integral in \eq{finalfinaleq} converges when $g \in H^1(\mathbb{R})$ and is, in fact, a uniformly bounded function of $\xi$. To verify that \eq{finalfinaleq} agrees with \eq{f}, we take the $\xi$-derivative under the integral in \eq{finalfinaleq}, use \eq{defphic} which implies that \[ \phi_c\left(c;\epsilon\right) = \frac{g-\tilde{g}}{x-\tilde{x}}, \] and integrate by parts in the result. This gives \begin{equation} g_t = \mathbf{H}\left[g \right] + \frac{1}{\pi}\epsilon^2 \int_{\mathbb{R}} \left(\frac{g-\tilde{g}}{x-\tilde{x}}\right) \left[\tilde{g} c_{\tilde{\xi}} - (\xi-\tilde{\xi}) \tilde{g}_{\tilde{\xi}} c_\xi \right] \, d\tilde{\xi}. \label{tempgeq} \end{equation} Using the equations \begin{equation} c_{\tilde{\xi}} = \frac{c - \tilde{g}_{\tilde{\xi}}}{\xi-\tilde{\xi}}, \qquad c_\xi = - \frac{c - g_{\xi}}{\xi-\tilde{\xi}} ,\qquad \label{defcxi} \end{equation} in \eq{tempgeq} and comparing the result with \eq{f} proves the lemma. \end{proof} \subsection{Energy Estimates} Multiplying (\ref{finalfinaleq}) by $g$, integrating the result with respect to $\xi$, and integrating by parts with respect to $\xi$, we find that the right-hand side vanishes by skew-symmetry in $(\xi,\tilde{\xi})$ so that \begin{equation} \frac{d}{dt} \left\|g\right\|_{L^2} = 0.\label{conl2norm} \end{equation} The conservation of $\|g\|_{L^2}$ is consistent with the conservation of $\|u\|_{L^2}$ from \eq{bheq}. Hence, from \eq{normest}, we have $\|g\|_{L^2}= \|g_0\|_{L^2} = \|u_0\|_{L^2}$. 
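The difference-quotient identities \eq{defcxi} used in the proof above can also be verified symbolically; a short sketch (our own check):

```python
import sympy as sp

xi, et = sp.symbols('xi xitilde')
g = sp.Function('g')
c = (g(xi) - g(et)) / (xi - et)     # the difference quotient c

# c_xitilde = (c - g'(xitilde))/(xi - xitilde)
assert sp.simplify(sp.diff(c, et) - (c - sp.diff(g(et), et)) / (xi - et)) == 0
# c_xi = -(c - g'(xi))/(xi - xitilde)
assert sp.simplify(sp.diff(c, xi) + (c - sp.diff(g(xi), xi)) / (xi - et)) == 0
```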
Differentiating (\ref{finalfinaleq}) twice with respect to $\xi$, multiplying the result by $g_{\xi \xi}$, integrating with respect to $\xi$, and integrating by parts with respect to $\xi$, we get \begin{equation} \frac{d}{dt}\int_{\mathbb{R}}g_{\xi \xi}^2\, d\xi=\epsilon^2 I \label{H2eq} \end{equation} where \begin{equation} I = \int_{\mathbb{R}^2} g_{\xi \xi \xi} \partial_{\xi}^2\left[ (\xi -\tilde{\xi})\tilde{g}_{\tilde{\xi}}\phi (c;\epsilon) \right] \, d\xi d\tilde{\xi}. \label{tempI} \end{equation} The following lemma estimates $I$ in terms of the $H^2$-norm of $g$. \begin{lemma} \label{lemma:est} Suppose that $I$ is given by \eq{tempI} where $\phi$ is defined in \eq{defphi}, and $c$ is defined in \eq{defc}. There exists a numerical constant $A>0$ such that \begin{equation} |I| \le A \left\|g_\xi\right\|_{L^2} \|g_{\xi\xi}\|^3_{L^2} \label{Iest} \end{equation} whenever $g\in H^\infty(\mathbb{R})$ satisfies \eq{c2small}. \end{lemma} \begin{proof} We first convert the $\tilde{\xi}$-derivative in the expression \eq{tempI} for $I$ to a $\xi$-derivative. Let \[ \Phi^\prime(c;\epsilon) = \phi(c;\epsilon), \] where a prime on $\Phi$ and related functions denotes a derivative with respect to $c$. It follows from \eq{defcxi} that \begin{align*} (\xi -\tilde{\xi})\tilde{g}_{\tilde{\xi}}\phi(c;\epsilon) &= (\xi -\tilde{\xi})\left[c - (\xi-\tilde{\xi}) c_{\tilde{\xi}}\right]\phi(c;\epsilon) \\ &= (\xi -\tilde{\xi}) c\phi(c;\epsilon) - (\xi-\tilde{\xi})^2 \Phi_{\tilde{\xi}}(c;\epsilon). \end{align*} We use this equation in \eq{tempI} and integrate by parts with respect to $\tilde{\xi}$ in the term involving $\Phi$. Since $g$ is independent of $\tilde{\xi}$, this gives \begin{equation} I =\int_{\mathbb{R}^2} g_{\xi \xi \xi} \partial_{\xi}^2\left[(\xi-\tilde{\xi})\Psi(c;\epsilon)\right] \, d\xi d\tilde{\xi} \label{defI} \end{equation} where \[ \Psi(c;\epsilon)=c\phi(c;\epsilon)-2\Phi(c;\epsilon). 
\] Expanding the derivatives with respect to $\xi$ in \eq{defI}, using \eq{defc} to express $c_{\xi \xi}$ in terms of $g_{\xi\xi}$, and integrating by parts with respect to $\xi$ in the result to remove the third-order derivative of $g$, we find that $I$ can be expressed as \[ I = -\frac{5}{2} I_1 + 3 I_2 - I_3 \] where \begin{align} \begin{split} I_{1}&=\int_{\mathbb{R}^2}\Psi^{\prime\prime} (c;\epsilon) c_{\xi}g^2_{\xi \xi}\, d\xi d\tilde{\xi}, \\ I_{2}&=\int_{\mathbb{R}^2} \Psi^{\prime\prime}(c;\epsilon)c_{\xi}^2 g_{\xi \xi}\, d\xi d\tilde{\xi}, \\ I_{3}&=\int_{\mathbb{R}^2} (\xi-\tilde{\xi})\Psi^{\prime\prime\prime}(c;\epsilon)c_{\xi}^3 g_{\xi \xi}\, d\xi d\tilde{\xi}. \end{split} \label{defIj} \end{align} The functions $\Psi^{\prime\prime}$, $\Psi^{\prime\prime\prime}$ are given explicitly by \[ \Psi^{\prime\prime}(c;\epsilon) = \frac{c}{(1-\epsilon c)^2}, \qquad \Psi^{\prime\prime\prime}(c;\epsilon) = \frac{1 + \epsilon c}{(1-\epsilon c)^3}. \] In particular, if $|\epsilon c| \le 1/2$, which is the case if $g$ satisfies \eq{c2small}, then \begin{equation} \label{m} \vert \Psi^{\prime\prime} (c;\epsilon)\vert \leq 4 |c|, \qquad \vert \Psi^{\prime\prime\prime}(c;\epsilon)\vert \leq 12. \end{equation} We will estimate the terms in \eq{defIj} separately. \emph{Estimating $I_{1}$:} Using (\ref{m}) in \eq{defIj}, we get that \begin{align} \begin{split} \vert I_{1}\vert &\leq 4\int_{\mathbb{R}^2} \left|c c_{\xi} g_{\xi\xi}^2\right| \, d\tilde{\xi} d\xi \\ &\leq 4\sup_{\xi\in \mathbb{R}} \left[\int_{\mathbb{R}} \left|c c_{\xi}\right| \, d\tilde{\xi}\right]\left(\int_{\mathbb{R}}g_{\xi\xi}^2\, d{\xi}\right) \\ &\leq 4 \sup_{\xi\in \mathbb{R}} \left[\left(\int_{\mathbb{R}}c^2\, d\tilde{\xi}\right)^{1/2} \left(\int_{\mathbb{R}}c_\xi^2\, d\tilde{\xi}\right)^{1/2}\right] \left(\int_{\mathbb{R}}g_{\xi\xi}^2\, d\xi\right). 
\end{split} \label{tempI1} \end{align} By a similar argument to the proof of \eq{cl2}, using Taylor's theorem with integral remainder and the Cauchy-Schwarz inequality, we have from \eq{defc} and \eq{defcxi} that \begin{align*} \int_{\mathbb{R}}c_\xi^2\, d\tilde{\xi} &=\int_{\mathbb{R}} \left[\frac{g - \tilde{g} - (\xi-\tilde{\xi}) g_{\xi}}{(\xi-\tilde{\xi})^2}\right]^2\, d\tilde{\xi} \\ &= \int_0^1 \int_0^1 \int_{\mathbb{R}} (1-r)(1-s) g^{\prime\prime}\left(\xi + r(\tilde{\xi}-\xi)\right) g^{\prime\prime}\left(\xi + s(\tilde{\xi}-\xi)\right)\, d\tilde{\xi} dr ds \\ &\le \int_0^1 \int_0^1 (1-r)(1-s) \\ &\qquad \left(\int_{\mathbb{R}} g^{\prime\prime2}\left(\xi + r(\tilde{\xi}-\xi)\right)\, d\tilde{\xi}\right)^{1/2} \left(\int_{\mathbb{R}} g^{\prime\prime2}\left(\xi + s(\tilde{\xi}-\xi)\right)\, d\tilde{\xi}\right)^{1/2} dr ds \\ &\le \left(\int_0^1 \int_0^1 \frac{(1-r)(1-s)}{\sqrt{rs}} \, dr ds\right) \left(\int_{\mathbb{R}} g_{\xi\xi}^2\left(\xi\right)\, d{\xi}\right) \\ &\le \frac{16}{9} \|g_{\xi\xi}\|^2_{L^2}. \end{align*} Thus, \begin{equation} \sup_{\xi\in \mathbb{R}} \left(\int_{\mathbb{R}}c_\xi^2\, d\tilde{\xi}\right)^{1/2} \le \frac{4}{3} \|g_{\xi\xi}\|_{L^2}. \label{l2cxi} \end{equation} Using \eq{cl2} and \eq{l2cxi} in \eq{tempI1}, we get that \begin{equation*} \vert I_{1}\vert \leq A_1 \left\|g_\xi\right\|_{L^2} \|g_{\xi\xi}\|^3_{L^2}, \end{equation*} where $A_1 = 32/3$ is a numerical constant. \emph{Estimating $I_{2}$:} Using (\ref{m}) and \eq{l2cxi} in \eq{defIj}, we get that \begin{align} \begin{split} \vert I_{2}\vert &\leq 4\int_{\mathbb{R}^2} \left|c c^2_{\xi} g_{\xi\xi}\right| \, d\tilde{\xi} d\xi \\ &\leq 4\int_{\mathbb{R}} \left(\sup_{\tilde{\xi}\in \mathbb{R}}|c|\right) \left(\int_{\mathbb{R}} c_{\xi}^2\,d\tilde{\xi}\right) \left|g_{\xi\xi}\right| \, d{\xi} \\ &\leq \frac{64}{9} \|g_{\xi\xi}\|^2_{L^2} \int_{\mathbb{R}} \left(\sup_{\tilde{\xi}\in \mathbb{R}}|c|\right) \left|g_{\xi\xi}\right| \, d{\xi}.
\end{split} \label{tempI2} \end{align} Suppressing the $(t;\epsilon)$-variables, we observe from \eq{defc} that \begin{align*} \sup_{\tilde{\xi}\in \mathbb{R}}\vert c\vert &=\sup_{\tilde{\xi}\in \mathbb{R}} \left|\frac{g-\tilde{g}}{\xi-\tilde{\xi}}\right| \\ &= \sup_{\tilde{\xi}\in \mathbb{R}}\left|\frac{1}{\xi-\tilde{\xi}}\int^{\xi}_{\tilde{\xi}} g^\prime(z)\, dz \right| \\ &\le g^\ast_\xi(\xi), \end{align*} where \[ g^\ast_\xi(\xi) = \sup_{\tilde{\xi}\in \mathbb{R}} \frac{1}{|\xi-\tilde{\xi}|} \left|\int_{\tilde{\xi}}^\xi |g^\prime(z)|\, dz\right| \] is the maximal function of $g^\prime=g_\xi$, defined using intervals whose left or right endpoint is $\xi$. Using this inequality and the Cauchy-Schwarz inequality in \eq{tempI2}, we find that \[ \vert I_{2}\vert \leq \frac{64}{9} \|g^\ast_\xi\|_{L^2} \|g_{\xi\xi}\|^3_{L^2}. \] The maximal operator is bounded on $L^2$, so there exists a numerical constant $M$ such that \begin{equation} \|g^\ast_\xi\|_{L^2} \le M\|g_\xi\|_{L^2}. \label{max_con} \end{equation} For example, from \cite{grafakos}, we can take \[ M = 1 + \sqrt{2}. \] It follows that \[ \vert I_{2}\vert \leq A_2 \|g_\xi\|_{L^2} \|g_{\xi\xi}\|^3_{L^2} \] where $A_2 = 64M/9$. \emph{Estimating $I_{3}$:} Using \eq{defcxi} in \eq{defIj}, we can rewrite $I_{3}$ as \begin{equation*} I_{3}=\int_{\mathbb{R}^2}\Psi^{\prime\prime\prime}(c;\epsilon) g_{\xi \xi} (c-g_{\xi})c^2_{\xi}\, d\xi d\tilde{\xi}. \end{equation*} Splitting this integral into two terms, we get $I_3 = I_3^\prime - I_3^{\prime\prime}$ where \[ I_{3}^{\prime} =\int_{\mathbb{R}^2}\Psi^{\prime\prime\prime}(c;\epsilon) cc^2_{\xi} g_{\xi \xi}\, d\xi d\tilde{\xi}, \qquad I_{3}^{\prime\prime} =\int_{\mathbb{R}^2}\Psi^{\prime\prime\prime}(c;\epsilon)c^2_{\xi}g_{\xi}g_{\xi \xi}\, d\xi d\tilde{\xi}.
\] Using \eq{m}, we have \[ |I_{3}^{\prime}| \le 12 \int_{\mathbb{R}^2} | cc^2_{\xi}g_{\xi \xi}|\, d\xi d\tilde{\xi}, \qquad |I_{3}^{\prime\prime}| \le 12\int_{\mathbb{R}^2} |c^2_{\xi}g_{\xi}g_{\xi \xi}|\, d\xi d\tilde{\xi}. \] We estimate $I_{3}^{\prime}$ in exactly the same way as $I_{2}$, which gives \[ \vert I_{3}^{\prime}\vert \le A_3^{\prime} \Vert g_{\xi}\Vert _{L^2}\Vert g_{\xi \xi }\Vert^3_{L^2} \] where $A_3^{\prime} = 64 M/3$. We estimate $I_{3}^{\prime\prime}$ in a similar way to $I_{1}$ as \[ \vert I_{3}^{\prime\prime} \vert \leq 12 \sup_{\xi\in \mathbb{R}}\left[\int_{\mathbb{R}} c^2_{\xi}\, d\tilde{\xi}\right] \left(\int_{\mathbb{R}} |g_{\xi}g_{\xi \xi}|\, d\xi\right), \] which by use of \eq{l2cxi} and the Cauchy-Schwarz inequality gives \[ \vert I_{3}^{\prime\prime}\vert \le A_3^{\prime\prime} \Vert g_{\xi}\Vert _{L^2}\Vert g_{\xi \xi }\Vert^3_{L^2} \] where $A_3^{\prime\prime} = 64/3$. Combining these estimates, we get \eq{Iest} with \begin{equation} A = 48 + \frac{128}{3} M \label{Acon} \end{equation} where $M$ is the maximal-function constant in \eq{max_con}. \end{proof} Using \eq{Iest} in \eq{H2eq}, we find that \[ \frac{d}{dt} \|g_{\xi\xi}\|_{L^2} \le \frac{1}{2}\epsilon^2 A \Vert g_{\xi}\Vert _{L^2}\Vert g_{\xi \xi }\Vert^2_{L^2}. \] Since $\|g_\xi\|_{L^2}^2 \le \|g\|_{L^2} \|g_{\xi\xi}\|_{L^2}$ and $\|g\|_{L^2} = \|g_0\|_{L^2}$ is conserved, we get \begin{equation} \frac{d}{dt} \|g_{\xi\xi}\|_{L^2} \le \frac{1}{2} \epsilon^2 A \Vert g_0\Vert^{1/2}_{L^2}\Vert g_{\xi \xi }\Vert^{5/2}_{L^2} \label{energyest} \end{equation} provided that \eq{c2small} holds. It follows from \eq{energyest} and Gronwall's inequality that if $|\epsilon|\le \epsilon_0$, where $\epsilon_0$ is sufficiently small, then $\|g_{\xi\xi}\|_{L^2}$ remains finite and \eq{c2small} holds in some time-interval $0 \le t \le k/\epsilon^2$, where the constants $\epsilon_0, k > 0$ may be chosen to depend only on $\|u_0\|_{H^2}$.
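The $\epsilon^{-2}$ lifespan can be read off from the comparison ODE for \eq{energyest}. Writing $y = \|g_{\xi\xi}\|_{L^2}$ and $\beta = \frac{1}{2}\epsilon^2 A \|g_0\|^{1/2}_{L^2}$ (our shorthand), the substitution $w = y^{-3/2}$ linearizes the comparison equation $y^\prime = \beta y^{5/2}$, whose solution stays finite up to $t = 2/(3\beta y_0^{3/2}) = O(\epsilon^{-2})$. A symbolic sketch:

```python
import sympy as sp

t, beta, y0 = sp.symbols('t beta y0', positive=True)

# w = y**(-3/2) turns the comparison ODE y' = beta*y**(5/2), y(0) = y0,
# into the linear equation w' = -(3/2)*beta
w = y0**sp.Rational(-3, 2) - sp.Rational(3, 2) * beta * t
assert sp.simplify(sp.diff(w, t) + sp.Rational(3, 2) * beta) == 0

# y = w**(-2/3) blows up only when w = 0, i.e. at t = 2/(3*beta*y0**(3/2));
# since beta = O(eps**2), this blow-up time is O(eps**-2)
T = sp.solve(w, t)[0]
assert sp.simplify(T - 2 / (3 * beta * y0**sp.Rational(3, 2))) == 0
```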
The same estimates hold backward in time, so this completes the proof of Theorem~\ref{th:main}. By solving the differential inequality \eq{energyest} subject to the constraint \eq{c2small}, we can obtain explicit expressions for $\epsilon_0$ and $k$. Let \[ E_0 = \|g_0\|_{L^2}^{1/4} \|g_{0\xi\xi}\|_{L^2}^{3/4}, \] which is comparable to $\|u_0\|_{H^2}$ from \eq{normest}. Then we find that Theorem~\ref{th:main} holds with \[ \epsilon_0 = \frac{1}{2\sqrt{2} N} \frac{1}{E_0},\qquad k = \frac{2}{3A} \frac{1}{E_0^2} \] where $N$ is the constant in \eq{gag_nir} and $A$ is the constant in \eq{Acon}. \section{Normal form transformation} \label{sec:normal_form} In this section, we relate the near-identity transformation of the independent variables used above to a more standard normal form transformation of the dependent variables, of the form introduced by Shatah \cite{shatah}: \[ v = u + B(u,u), \] where $B$ is a bilinear form. We consider the normal form transformation $u\mapsto v$ given in \cite{biello}: \begin{equation} \label{shortnft} v=u+\frac{1}{2}\epsilon |\partial_x| (h^2),\qquad h = \mathbf{H}[u]. \end{equation} Here, $\partial_x$ denotes the derivative with respect to $x$ and $|\partial_x| = \mathbf{H}\partial_x$. Differentiating \eq{shortnft} with respect to $t$, using \eq{bheq} to eliminate $u_t$, and simplifying the result, we find that this transformation removes the nonresonant term of the order $\epsilon$ from the equation and gives \begin{equation} v_{t}+\frac{1}{2}\epsilon^2 |\partial_x|\,\left[h |\partial_x|(u^2) \right] = \mathbf{H}[v]. \label{shortnfteq} \end{equation} The bilinear form $B$ in \eq{shortnft} is not bounded on $H^2$, but one can show that the normal form transformation \eq{shortnft} is invertible on a bounded set in $H^2$ when $\epsilon$ is sufficiently small.
We were not able, however, to obtain $H^2$-estimates for $v$ from \eq{shortnfteq}, because \eq{shortnfteq} contains second-order derivatives, rather than first-order derivatives as in \eq{bheq}, and there is a loss of derivatives in estimating the $H^s$-norm of $v$. In fact, for every power of $\epsilon u$ that one gains through a normal form transformation of the dependent variable, one introduces an additional derivative. The appearance of additional derivatives is a consequence of using a zeroth-order linear term $\mathbf{H}[u]$ to remove a first-order quadratic term $\epsilon uu_x$. By contrast, higher-order linear terms lead to normal form transformations that are easier to analyze. For example, consider the KdV equation \[ u_t + \epsilon uu_x =u_{xxx}. \] Then, assuming we can ignore difficulties associated with low wavenumbers (\textit{e.g.}\ by considering spatially periodic solutions with zero mean), we find that the normal form transformation \[ v = u - \frac{1}{6} \epsilon \left(\partial_x^{-1} u\right)^2 \] leads to the equation \[ v_t - \frac{1}{6}\epsilon^2 u^2 \left(\partial_x^{-1} u\right) = v_{xxx}. \] In this case, the normal form transformation is bounded and it smooths the nonlinear term. To explain the connection between the normal form transformation (\ref{shortnft}) and the near-identity transformation \eq{nearid_trans}, we reformulate \eq{shortnft} in a convenient way. Writing $g = \mathbf{H}[v]$ and taking the Hilbert transform of (\ref{shortnft}), we get the ODE \begin{equation} \label{eq1} g=h-\epsilon hh_{x}. \end{equation} We regard $g(t,x)$ as a given function and use \eq{eq1} to determine the corresponding function $h$. We may write (\ref{eq1}) as \begin{equation*} \frac{h-g}{\epsilon}-hh_{x}=0, \end{equation*} which agrees up to the order $\epsilon$ with an evolution equation in $\epsilon$ for $h(t,x;\epsilon)$: \begin{equation} h_{\epsilon}-hh_{x}=0,\qquad h(t,x;0) = g(t,x). 
\label{heps} \end{equation} By the method of characteristics, the solution of \eq{heps} is \[ h(t, x;\epsilon )=g(t,\xi),\qquad x=\xi-\epsilon g(t,\xi), \] which is the transformation \eq{nearid_trans}. Since \eq{nearid_trans} agrees to the order $\epsilon$ with a normal form transformation that removes the order $\epsilon$ term from \eq{bheq}, this transformation must do so also, as we verified explicitly in Section~\ref{sec:proof}. It is rather remarkable that the normal form transformation (\ref{shortnft}) can be implemented by making a change of spatial coordinate in the equation for $h$, but we do not have a good explanation for why this should be possible. \end{document}
\begin{document} \title {\bf Random $K_k$-removal algorithm \thanks{The work was partially supported by NSFC.}} \author {{\large Fang Tian$^1$\thanks{Corresponding Author:\ [email protected](Email Address).}\quad Zi-Long Liu$^2$\quad Xiang-Feng Pan$^3$}\\ {\small $^1$ Department of Applied Mathematics}\\% {\small Shanghai University of Finance and Economics, Shanghai, 200433, China} \\ {\small\tt [email protected]}\\[1ex] {\small $^2$School of Optical-Electrical and Computer Engineering}\\ {\small University of Shanghai for Science and Technology, Shanghai, 200093, China}\\ {\small\tt [email protected]}\\[1ex] {\small $^3$School of Mathematical Sciences}\\ {\small Anhui University, Hefei, Anhui, 230601, China}\\ {\small\tt [email protected]}} \date{} \maketitle \begin{abstract} One interesting question is how a graph develops from some constrained random graph process, which is a fundamental mechanism in the formation and evolution of dynamic networks. The problem considered here is the random $K_k$-removal algorithm. For a fixed integer $k\geqslant 3$, it starts with a complete graph on $n\rightarrow\infty$ vertices and iteratively removes the edges of a uniformly chosen $K_k$. This algorithm terminates once no $K_k$s remain, and at the same time it generates a linear $k$-uniform hypergraph. For $k=3$, it was shown that the number of edges in the final graph is $n^{3/2+o(1)}$. Fewer results are known for the cases $k\geqslant 4$. In this paper, we establish the exact expected trajectories of various key parameters in the algorithm up to some iteration, which implies that the final number of edges in the algorithm is at most $n^{2-1/(k(k-1)-2)+o(1)}$ for $k\geqslant 4$. We also show that this bound is a natural barrier. \end{abstract} \hskip 10pt{\bf Keywords:}\ random greedy algorithm, $K_k$-free, the critical interval method, dynamic concentration.
\hskip 10pt {\bf Mathematics Subject Classifications:}\ 05D40, 68R10 \section{Introduction} Extremal problems are central research issues in random graph algorithms, which are also fundamental mechanisms in the formation and evolution of dynamic networks. A better understanding of the underlying graph offers us opportunities to study how a graph develops from some constrained random greedy process. Recently, the power of random greedy algorithms was illustrated in~\cite{guo2021} by showing the existence of mathematical objects with better properties. In each case, random greedy algorithms go beyond classical applications of the probabilistic method used in previous work. The problem studied here is the random $K_k$-removal algorithm. Given a fixed integer $k\geqslant 3$, the random $K_k$-removal algorithm for generating a $K_k$-free graph, and at the same time creating a linear $k$-uniform hypergraph, is defined as follows. Start from a complete graph on vertex set $[n]$, denoted by $G(0)$, and let $G(i+1)$ be the graph remaining from $G(i)$ after selecting one $K_k$ uniformly at random out of all $K_k$s in $G(i)$ and deleting all its edges. Let the hitting time $M$ be $M=\min\{i\,:G(i)\ {\mbox{is\ } K_k{\mbox {-free}}}\}$ and let $E(i)$ denote the edge set of $G(i)$; thus $|E(M)|$ is the number of edges in the final $K_k$-free graph. Work on finding the exact value of $|E(M)|$ has evolved over the past 20 years and is a nontrivial task even for $k=3$. Bollob\'{a}s and Erd\H{o}s~\cite{bollobas98} conjectured that with high probability $|E(M)|=n^{3/2+o(1)}$ when $k=3$. It was shown that $|E(M)|=o(n^2)$ by Spencer~\cite{spencer95} and independently by R\"{o}dl and Thoma~\cite{rt96}. Grable~\cite{grable97} improved this bound to $|E(M)|\leqslant n^{7/4+o(1)}$. Bohman et al.~\cite{bohman101} introduced the critical interval method for proving dynamic concentration. In a breakthrough, they~\cite{bohman15} confirmed the conjectured exponent by generalizing the approach of~\cite{bohman101}.
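The algorithm just described is straightforward to simulate by brute force for small $n$ (far from the asymptotic regime, and purely illustrative):

```python
import itertools
import random

def kk_removal(n, k, seed=0):
    # run the random K_k-removal algorithm starting from K_n; return |E(M)|
    rng = random.Random(seed)
    edges = {frozenset(e) for e in itertools.combinations(range(n), 2)}
    while True:
        # all K_k's in the current graph
        kks = [c for c in itertools.combinations(range(n), k)
               if all(frozenset(e) in edges for e in itertools.combinations(c, 2))]
        if not kks:
            return len(edges)          # the process has hit time M
        chosen = rng.choice(kks)       # uniform choice of a K_k
        edges -= {frozenset(e) for e in itertools.combinations(chosen, 2)}

m = kk_removal(12, 3, seed=1)
# each step removes exactly binom(3,2) = 3 edges from the binom(12,2) = 66 of K_12
assert 0 <= m <= 66 and (66 - m) % 3 == 0
```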
Fewer results have directly studied the random $K_k$-removal algorithm when $k\geqslant 4$. Bennett and Bohman~\cite{bennett15} conjectured, as folklore, that $|E(M)|\leqslant n^{2k/(k+1)+o(1)}$ for $k\geqslant 3$ when they investigated the random greedy hypergraph matching algorithm. For $k=3$, this is exactly the conjecture proposed by Bollob\'{a}s and Erd\H{o}s. A different recipe for obtaining a random $K_k$-free graph is the so-called ``$K_k$-free process". In that algorithm, starting with an empty graph, the ${n\choose 2}$ edges are inserted in random order so long as no $K_k$s are formed in the current graph. Despite the high similarity between the two protocols, acquiring the correct exponent of the final number of edges has proved quite challenging for the random $K_k$-removal algorithm~\cite{bohman15}. A pseudo-random heuristic for predicting the evolution of various key parameters plays a central role in the understanding of these algorithms that produce interesting combinatorial objects ~\cite{bennett15,bohman09,bohman15,bohman101,bohman19,pico142,war142}. In this paper, we directly discuss the structure of the random $K_k$-removal algorithm for $k\geqslant 4$. We design an ensemble of appropriate random variables including the number of $K_k$s, using a heuristic assumption to find the trajectories of these variables as the process evolves. Compared with the random $K_3$-removal algorithm, it is challenging to make use of these auxiliary variables to analyze the one-step change of the number of $K_k$s when $k\geqslant 4$ and to give a rigorous proof of their expressions. Finally, we prove the following: \begin{theorem} Given a fixed integer $k\geqslant 4$, consider the random $K_k$-removal algorithm on $n$ vertices. Let $M$ be the number of steps it takes the process to terminate and $E(M)$ be the edge set of the resulting $K_k$-free graph. With high probability, $|E(M)|\leqslant n^{2-1/(k(k-1)-2)+o(1)}$.
\end{theorem} \noindent Though there is a gap between our bound and the conjectured $|E(M)|\leqslant n^{2k/(k+1)+o(1)}$ of~\cite{bennett15}, we will show that our result corresponds to an inherent barrier of the algorithm. The remainder of this paper is organized as follows. In the next section, notation and some lemmas for analyzing the random $K_k$-removal algorithm are presented. In Section 3, we discuss the evolution of the algorithm in detail and estimate the trajectories of the relevant random variables. We formally prove the concentrations in Section 4. \section{Notation and Some Lemmas} Let $(\Omega,\mathcal{F},\mathbb{P})$ be an arbitrary probability space. Note that our probability space is the set of all maximal sequences of edge-disjoint $K_k$s on vertex set $[n]$, with probability measure given by the uniform random choice at each step. Let $(\mathcal{F}_i)_{i\geqslant 0}$ be the filtration given by the evolutionary algorithm. Given a sequence of random variables $X_i$, let $\Delta X=X_{i+1}-X_i$ denote the one-step change of $X_i$; the pair $\{X_i,\mathcal{F}_i\}_{i\geqslant 0}$ is called a submartingale (resp. supermartingale) if $X_i$ is $\mathcal{F}_i$-measurable and $\mathbb{E}[\Delta X|{\mathcal{F}}_i]\geqslant 0$ (resp. $\mathbb{E}[\Delta X|{\mathcal{F}}_i]\leqslant 0$) for all $i\geqslant 0$. An event is said to occur with high probability (w.h.p.\ for short) if the probability that it holds tends to 1 as $n\rightarrow\infty$. Furthermore, for two positive-valued functions $f, g$ of the variable $n$, we write $f\ll g$ to denote $\lim_{n\rightarrow \infty}f(n)/g(n)=0$ and $f\sim g$ to denote $\lim_{n\rightarrow \infty}f(n)/g(n)=1$. Let $a=b\pm c$ be short for $a\in [b-c,b+c]$; set ${S\choose b}=\emptyset$ if $b>|S|$ and ${a\choose b}=0$ if $b>a$. We also use the standard asymptotic notation $o$, $O$, $\Omega$ and $\Theta$. All logarithms are natural, and the floor and ceiling signs are omitted whenever they are not crucial.
Throughout the following sections we assume that $n\rightarrow\infty$. For $2\leqslant m\leqslant k$, $u\in[n]$ and $U_m=\{u_1,\cdots,u_m\}\in \binom{[n]}{m}$, let $N_{u}=N_{u}(i)=\{x\in [n]:xu\in E(i)\}$, $N_{U_m}=N_{U_m}(i)=\cap_{j=1}^{m} N_{u_j}$ and let $\mathcal{K}_m(i)$ be the set of copies of the complete graph $K_m$ in $G(i)$. Our goal is to estimate the number of $K_k$s in $G(i)$, that is $|\mathcal{K}_k(i)|$, which is denoted by $\mathbf{Q}_k(i)$. Define the random variable $\mathbf{R}_{k,U_m}(i)$ to be \begin{align} \mathbf{R}_{k,U_m}(i)=\begin{cases}\bigl|\mathcal{K}_{k-m}(i)\cap {N_{U_m}\choose k-m}\bigr|, \ 2\leqslant m\leqslant k-1;\\{\textbf 1}_{U_k},\ {m=k.}\end{cases} \end{align} For $2\leqslant m\leqslant k-1$, $\mathbf{R}_{k,U_m}(i)$ counts the number of $K_{k-m}$s in $G(i)$ whose vertices all lie in $N_{U_m}$; in particular, $\mathbf{R}_{k,U_{k-1}}(i)=|N_{U_{k-1}}|$ is the codegree of the vertex subset $U_{k-1}$. Here $\textbf{1}_{U_k}$ is the indicator random variable with $\textbf{1}_{U_k}=1$ if the subgraph induced by $U_k$ in $G(i)$ is complete and $\textbf{1}_{U_k}=0$ otherwise. Bennett and Bohman~\cite{bennett15} added further assumptions on codegrees of larger vertex subsets to obtain stronger results on the random greedy hypergraph matching algorithm. Sometimes for shorthand we will suppress $i$. The random variables in (2.1) yield important information about the underlying process. Suppose that the vertex set of the $K_k$ taken at step $i+1$ is denoted by $U_k$. Let $U_m\in {U_k\choose m}$ with $2\leqslant m\leqslant k$ and \begin{align*} \mathbf{Q}_k^{U_m}(i)=\bigl|\{K_k\in \mathcal{K}_k(i)\,|\,K_k\cap U_k=U_m\}\bigr|, \end{align*}namely, $\mathbf{Q}_k^{U_m}(i)$ denotes the number of $K_k$s in $G(i)$ whose vertex set meets $U_k$ exactly in $U_m$. In particular, $\mathbf{Q}_k^{U_k}(i)=1$. Thus, we have \begin{align} \mathbf{Q}_k(i)-\mathbf{Q}_k(i+1) =\sum_{m=2}^{k}\Bigl(\sum_{U_m\in \binom{U_k}{m}}\mathbf{Q}_k^{U_m}(i)\Bigr).
\end{align} Observe that when the subgraph induced by $U_m$ is complete, $\mathbf{R}_{k,U_m}$ in (2.1) counts the extensions of $U_m$ to a copy of $K_k$. By the inclusion-exclusion formula, we have \begin{align} \mathbf{Q}_k^{U_m}(i)&=\mathbf{R}_{k,U_m}+\sum_{T_1\in \binom{U_k\setminus U_m}{1}}(-1)^1\mathbf{R}_{k,U_m\cup T_1}+\cdots+\notag\\ &\quad\quad\sum_{T_{k-m-1}\in \binom{U_k\setminus U_m}{k-m-1}} (-1)^{k-m-1}\mathbf{R}_{k,U_m\cup T_{k-m-1}}+(-1)^{k-m}\mathbf{R}_{k,U_k}. \end{align} Note that \begin{align*} \sum_{U_m\in \binom{U_k}{m}}\Bigl(\sum_{T_j\in \binom{U_k\backslash U_m}{j}} \mathbf{R}_{k,U_m\cup T_j}\Bigr) = \binom{m+j}{m}\sum_{U_{m+j}\in \binom{U_k}{m+j} }\mathbf{R}_{k,U_{m+j}} \end{align*} for $0\leqslant j\leqslant k-m$, because each element $\mathbf{R}_{k,U_{m+j}}$ on the right side is counted $ \binom{m+j}{m}$ times on the left side. Summing the displays (2.3) over all $U_m\in \binom{U_k}{m}$ with $2\leqslant m\leqslant k$ and substituting into equation (2.2), it follows that \begin{align*} &\mathbf{Q}_k(i)-\mathbf{Q}_k(i+1)\\ =&\sum_{U_2\in \binom{U_k}{2}}\mathbf{R}_{k,U_2}+\sum_{U_3\in \binom{U_k}{3}} \biggl[(-1)^1{3\choose 2}+(-1)^0{3\choose 3}\biggr]\mathbf{R}_{k,U_3}+\cdots\\ &\quad\quad+\sum_{U_{k-1}\in{U_k\choose k-1}}\biggl[(-1)^{k-3}{k-1\choose 2} +\cdots+(-1)^0{k-1\choose k-1}\biggr]\mathbf{R}_{k,U_{k-1}}\\ &\quad\quad+\biggl[(-1)^{k-2}{k\choose 2}+\cdots+(-1)^0{k\choose k}\biggr]\mathbf{R}_{k,U_k}. \end{align*} Since $\sum_{j=2}^r(-1)^{r-j}{r\choose j}=(-1)^r(r-1)$ for any given integer $r\geqslant2$ and $\mathbf{R}_{k,U_k}=1$ in (2.1), \begin{align} \mathbf{Q}_k(i)-\mathbf{Q}_k(i+1) &=\sum_{U_2\in{U_k\choose 2}}\mathbf{R}_{k,U_2}-2\sum_{U_3\in{U_k\choose 3}}\mathbf{R}_{k,U_3}+\cdots\notag\\ &\quad\quad+(-1)^{k-1}(k-2)\sum_{U_{k-1}\in{U_k\choose k-1}}\mathbf{R}_{k,U_{k-1}}+(-1)^{k}(k-1).
\end{align} Thus, the expectation $\mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i]$ of $\Delta \mathbf{Q}_k$ is \begin{align} &\mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i]\notag\\ &=-\sum\limits_{U_k\in \mathcal{K}_k(i)} \frac{\sum_{U_2\in{U_k\choose 2}} \mathbf{R}_{k,U_2}+\cdots+(-1)^{k-1}(k-2)\sum_{U_{k-1}\in{U_k\choose k-1}} \mathbf{R}_{k,U_{k-1}}+(-1)^{k}(k-1)}{\mathbf{Q}_k(i)}\notag\\ &=(-1)^{k+1}(k-1)-\frac{1}{\mathbf{Q}_k(i)}\sum_{U_2\in \mathcal{K}_2(i)} \mathbf{R}_{k,U_2}^2+\cdots+\frac{(-1)^k(k-2)}{\mathbf{Q}_k(i)} \sum_{U_{k-1}\in \mathcal{K}_{k-1}(i)}\mathbf{R}_{k,U_{k-1}}^2, \end{align} where the last equality holds because \begin{align*} \sum_{U_k\in \mathcal{K}_k(i)}\sum_{U_m\in \binom{U_k}{m}} \mathbf{R}_{k,U_m}=\sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}^2 \end{align*} for $2\leqslant m\leqslant k-1$ by double counting. We also need the following lemmas to establish dynamic concentrations for the variables $\mathbf{Q}_k(i)$ and $\mathbf{R}_{k,U_m}$ for any $U_m\in \binom{[n]}{m}$ with $2\leqslant m\leqslant k-1$; they were also used in~\cite{bennett15,bohman09,bohman15,bohman101,bohman19,pico142,war142}. \begin{lemma}[Bohman et al.~\cite{bohman15}] Let $a_1,\cdots,a_\ell\in \mathbb{R}$ and let $a\in \mathbb{R}$. Suppose that $|a_i-a|\leqslant \varepsilon$ for all $1\leqslant i\leqslant \ell$; then $ \frac{(\sum_{i=1}^\ell a_i)^2}{\ell} \leqslant \sum_{i=1}^\ell a_i^2\leqslant \frac{(\sum_{i=1}^\ell a_i)^2}{\ell}+4\ell\varepsilon^2.$ \end{lemma} \begin{lemma}[Hoeffding and Azuma~\cite{hoeffding63}] Suppose a sequence of random variables $\{X_i\}_{i\geqslant 0}$ is a supermartingale (resp. submartingale) and $|X_{i}-X_{i-1}|<c_i$; then for any positive integer $\ell$ and any positive real number $a$, $\mathbb{P}\bigl[X_\ell-X_0\geqslant a\bigr]\leqslant\exp\bigl[ \frac{-a^2}{2\sum_{i=1}^{\ell}c_i^2}\bigr].
\bigl({\mbox{resp.}}\ \mathbb{P}\bigl[X_\ell-X_0\leqslant -a\bigr]\leqslant \exp\bigl[ \frac{-a^2}{2\sum_{i=1}^{\ell}c_i^2}\bigr].\bigr)$ \end{lemma} Let $\eta,N>0$ be constants. A sequence of random variables $\{X_i\}_{i\geqslant 0}$ is $(\eta,N)$-bounded if $X_i-\eta\leqslant X_{i+1}\leqslant X_i+N$ for all $i\geqslant 0$. For $(\eta,N)$-bounded supermartingales and submartingales, Bohman~\cite{bohman09} showed the following. \begin{lemma}[Bohman~\cite{bohman09}] Suppose $\{X_i\}_{i\geqslant 0}$ is an $(\eta,N)$-bounded supermartingale (resp. submartingale) with initial value $0$ and $\eta\leqslant \frac{N}{10}$. Then for any positive integer $\ell$ and any positive real number $a$ with $a<\eta \ell$, $\mathbb{P}\bigl[X_\ell\geqslant a\bigr]\leqslant\exp\bigl[-\frac{a^2}{3\ell\eta N}\bigr]. \bigl({\mbox{resp.}}\ \mathbb{P}\bigl[X_\ell\leqslant-a\bigr]\leqslant\exp\bigl[-\frac{a^2}{3\ell\eta N}\bigr].\bigr)$ \end{lemma} Finally, in order to explain why it is possible to further improve our results, the Chernoff-type lemma below from~\cite{cher52} is also required. \begin{lemma}[\cite{cher52}] For $X\sim \Bin(n, p)$ and any $0<\xi\leqslant np$, $\mathbb{P}\bigl[|X - np|>\xi\bigr]<2\exp\left[-\xi^2/\left(3np\right)\right]$. \end{lemma} \section{Estimates on the variables in $G(i)$} In the following, we use some heuristics to anticipate the likely values of the auxiliary random variables throughout the process. We assume that the random $K_k$-removal algorithm produces a graph whose variables are roughly the same as they would be in a random graph $\mathcal{G}(n,p)$ with the same edge density. The classical Erd\H{o}s-R\'{e}nyi random graph $\mathcal{G}(n,p)$ is on vertex set $[n]=\{1,\cdots,n\}$, and any two vertices appear as an edge independently with probability $p$.
In order to describe the expected trajectories of $\mathbf{Q}_k(i)$ and $\mathbf{R}_{k,U_m}$ as smooth functions for any $U_m=\{u_1, u_2,\cdots, u_m\}\in \binom{[n]}{m}$ with $2\leqslant m\leqslant k-1$, we appropriately rescale the number of steps $i$ to be $t=t(i)= \frac{i}{n^2}$ and introduce a notion of edge density as \begin{align} p=p(i,n)=1- \frac{k(k-1)i}{n^2}=1-k(k-1)t. \end{align} Note that $p$ can be viewed either as a continuous function of $t$ or as a function of the discrete variable $i$. We pass between these interpretations without comment. With this notation, we have \begin{align} |E(i)|={n\choose 2}-{k\choose 2}i={n\choose 2}- \frac{1}{2}(1-p)n^2= \frac{1}{2}(n^2p-n), \end{align} so that the number of edges in $G(i)$ at edge density $p$ is approximately equal to that of the Erd\H{o}s-R\'{e}nyi graph $\mathcal{G}(n,p)$, up to a negligible linear term, when $p$ lies in the relevant range. For a fixed integer $k\geqslant 4$, $2\leqslant m\leqslant k-1$ and $U_m\in \binom{[n]}{m}$, under the assumption that $G(i)$ resembles $\mathcal{G}(n,p)$, we anticipate that the trajectories of $\mathbf{Q}_k(i)$ and $\mathbf{R}_{k,U_m}$ are \begin{align*} \mathbf{Q}_k(i)\sim \frac{n^k}{k!}p^{{k\choose 2}} \quad {\rm{and}}\quad \mathbf{R}_{k,U_m} \sim \frac{n^{k-m}}{(k-m)!}p^{{k\choose 2}-{m\choose 2}}, \end{align*} where $ \frac{n^k}{k!}p^{{k\choose 2}}$ counts the expected number of $K_k$s in $\mathcal{G}(n,p)$, and ${n-m\choose k-m}p^{{k\choose 2}-{m\choose 2}} \sim \frac{n^{k-m}}{(k-m)!}p^{{k\choose 2}-{m\choose 2}}$ counts the expected number of $K_{k-m}$s in which every vertex is in $N_{U_m}$.
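The identity in (3.2) is a purely algebraic consequence of (3.1) and holds exactly at every step, not just asymptotically; a small sketch (our own, using exact rational arithmetic to avoid floating-point noise) checks it directly:

```python
from fractions import Fraction
from math import comb

def edge_count_direct(n, k, i):
    # |E(i)| = C(n,2) - C(k,2) * i : each step removes exactly C(k,2) edges.
    return comb(n, 2) - comb(k, 2) * i

def edge_count_density(n, k, i):
    # |E(i)| = (n^2 p - n) / 2 with p = 1 - k(k-1) i / n^2, as in (3.1)-(3.2).
    p = 1 - Fraction(k * (k - 1) * i, n ** 2)
    return (n ** 2 * p - n) / 2

n, k = 100, 5
assert all(edge_count_direct(n, k, i) == edge_count_density(n, k, i)
           for i in range(0, 300, 7))
print("identity (3.2) holds exactly")
```

The two heuristic trajectories, by contrast, are only conjectural approximations and are what Theorem 3.1 makes rigorous.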
Our main theorem is as follows: \begin{theorem} Given a fixed integer $k\geqslant 4$, let $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$; then there exist absolute constants $\mu$, $\gamma_m$ and $\lambda$ such that, with high probability, \begin{align} \mathbf{Q}_k(i)&\leqslant \frac{n^k}{k!}p^{{k\choose 2}}+ \frac{n^{k-1}}{2}p^{{k\choose 2}-4},\\ \mathbf{Q}_k(i)&\geqslant \frac{n^k}{k!}p^{{k\choose 2}}-\sigma^2n^{\alpha}p^{-1}\log^\mu n,\\ \mathbf{R}_{k,U_m}&= \frac{n^{k-m}}{(k-m)!}p^{{k\choose 2}-{m\choose 2}} \pm \sigma n^{\beta_m}\log^{\gamma_m} n \end{align} hold for every $i\leqslant i_0$ with $i_0= \frac{n^2}{k(k-1)}- \frac{\sqrt[3]{2}}{k(k-1)}n^{2- \frac{1}{k(k-1)-2}}\log^{\lambda}n$, where \begin{align} \alpha&=k- \frac{{k\choose 2}+1}{2{k\choose 2}-2},\\ \beta_m&=k- m- \frac{\binom{k}{2}-\binom{m}{2}}{2{k\choose 2}-2}, \end{align} and the error function $\sigma=\sigma(t)$ has initial value $\sigma(0)=1$ and slowly grows as \begin{align} \sigma=\sigma(t)=1- \frac{k(k-1)}{4}\log p(t). \end{align} \end{theorem} Theorem 3.1 is proved in Section 4. It implies that, for these specific choices of constants satisfying (3.6) and (3.7) and for the error function $\sigma$ in (3.8), the random variables stay around their heuristic trajectories up to the stopping time $\tau=i_0$ with high probability. These dynamic concentrations in turn show that the algorithm produces a graph of size at most $|E(i_0)|$ with high probability. We make no attempt to optimize the constants $\mu$, $\lambda$ and $\gamma_m$ in the error terms for $2\leqslant m\leqslant k-1$. Many choices of them can be balanced to satisfy certain inequalities, such as $[{k\choose 2}+1]\lambda>\mu+2$, $[{k\choose 2}- \binom{m}{2}]\lambda>\gamma_m+1$ with $2\leqslant m\leqslant k-1$, and $\gamma_2> \frac{1}{2}$, and any such choice supports our analysis of Theorem 3.1. We do not replace them with their actual values.
This is in the interest of understanding the role these constants play in the calculations. \begin{proof}[Proof of Theorem 1.1] When $p=p_0$, the number of edges is \begin{align*} |E(i_0)|={n \choose 2}-{k\choose 2}i_0\sim \frac{\sqrt[3]{2}}{2}n^{2- \frac{1}{k(k-1)-2}}\log^{\lambda}n. \end{align*} Theorem 1.1 follows directly from Theorem 3.1 and $|E(M)|\leqslant |E(i_0)|$, with room to spare in the power of the logarithmic factor. \end{proof} \begin{remark}The error terms in (3.3)-(3.5) are verified to be of lower order in a straightforward manner below. According to (3.1), define \begin{align} p_0&=p(i_0,n)=1-\frac{k(k-1) i_0}{n^2}=\sqrt[3]{2}n^{- \frac{1}{k(k-1)-2}}\log^{\lambda}n. \end{align} Since $ i\leqslant i_0$ in Theorem 3.1, we have $ p\geqslant p_0$ in (3.9). Note that $\frac{n^k}{k!}p^{{k\choose 2}}\gg\sigma^2n^{\alpha}p^{-1}\log^\mu n$ when $\alpha$ is as in (3.6) and $\lambda$, $\mu$ are chosen appropriately. It follows that $\mathbf{Q}_k(i)= (1+o(1))n^k p^{{k\choose 2}}/k!$ in (3.3) and (3.4). Similarly, all the error terms in (3.5) are negligible compared to their respective main terms. \end{remark} \begin{remark} There is a gap between our bound in Theorem 1.1 and the conjecture $|E(M)|\leqslant n^{2k/(k+1)+o(1)}$ of~\cite{bennett15}. In fact, the term $n^{2-1/(k(k-1)-2)}$ corresponds to a natural barrier in the random $K_k$-removal algorithm. To illustrate this, recall from Theorem 3.1 that $G(i)$ is roughly the same as $\mathcal{G}(n,p)$, while the standard deviations of $\mathbf{R}_{k,U_m}$ for any $U_m\in \binom{[n]}{m}$ with $2\leqslant m\leqslant k-1$ would be as large as their main trajectories when $p$ is around $n^{-1/(k(k-1)-2)}$ (up to logarithmic factors), which means that the control over $\mathbf{R}_{k,U_m}$ for any $U_m\in \binom{[n]}{m}$ is lost. \end{remark} \begin{remark} As stated in Theorem 3.1, we know that $G(i)$ is roughly the same as $\mathcal{G}(n,p)$ for $i\leqslant i_0$.
Thus, when $p$ is around $p_0$ in (3.9), by a union bound, the probability that there exists some $U_m\in \binom{[n]}{m}$ with $2\leqslant m\leqslant k-1$ such that $|\mathbf{R}_{k,U_m}- \frac{n^{k-m}}{(k-m)!}p_0^{{k\choose 2}- \binom{m}{2}} |> \sigma n^{\beta_m}\log^{\gamma_m} n$ is at most \begin{align} &\sum_{U_m\in \binom{[n]}{m},2\leqslant m\leqslant k-1}\mathbb{P}\biggl[\biggl|\mathbf{R}_{k,U_m}- \frac{n^{k-m}}{(k-m)!}p_0^{{k\choose 2}- \binom{m}{2}} \biggr|> \sigma n^{\beta_m}\log^{\gamma_m} n\biggr]\notag\\ &<2 \sum_{m=2}^{k-1}\binom{n}{m}\exp\biggl[-\frac{(k-m)!(\sigma n^{\beta_m}\log^{\gamma_m} n)^2}{3n^{k-m}p_0^{{k\choose 2}- \binom{m}{2}}}\biggr]\notag\\ &=\sum_{m=2}^{k-1}\binom{n}{m}\exp\biggl[-\Theta\biggl(n^{k-m- \frac{{k\choose 2}- \binom{m}{2}}{2{k\choose 2}-2}}\biggr)\biggr] \end{align} by applying Lemma 2.4 with $\xi=\sigma n^{\beta_m}\log^{\gamma_m} n$, where the last equality holds because $\beta_m$ is as in (3.7). Since the summand in (3.10) is increasing in $m$ for fixed $k\geqslant 4$, it suffices to bound the sum by the number of terms times the last term, with $m=k-1$. Thus, we have \begin{align*} &\sum_{m=2}^{k-1}\binom{n}{m}\exp\biggl[-\Theta\biggl(n^{k-m- \frac{{k\choose 2}- \binom{m}{2}}{2{k\choose 2}-2}}\biggr)\biggr]\\ &=O\biggl(n^{k-1}\exp\biggl[-\Theta\biggl(n^{1- \frac{{k\choose 2}- \binom{k-1}{2}}{2{k\choose 2}-2}}\biggr)\biggr]\biggr)=o(1). \end{align*} In fact, a similar phenomenon could be shown even with $\xi=\Theta(n^\theta)$ for $\frac{1}{2}\beta_m <\theta<\beta_m$, whereas our main results in Theorem 3.1 do not reach this range. As in~\cite{bohman15}, in order to prove better bounds on $|E(M)|$, it may be possible to design new random variables whose fluctuations decrease as the process evolves. \end{remark} \section{Proof of Theorem 3.1} We first recall the outline of the critical interval method~\cite{bohman101,bohman15,bennett15} for controlling graph parameters as the process evolves.
Let the stopping time $\tau$ be the minimum of $i_0$ and the smallest index $i$ such that any one of the random variables violates its corresponding trajectory. Let the event $\mathcal{E}_X$ be of the form $X(i)= x(i)\pm e(i)$ for all $i\leqslant i_0$, where $X(i)$ is some random variable, $x(i)$ is the expected trajectory and $e(i)$ is the error term. We show that the event $\{\tau=i_0\}$ holds by means of $\{\tau=i_0\}=\cap_{X\in\mathcal{I}}\mathcal{E}_X$, where $|\mathcal{I}|$ is polynomial in $n$. For each such random variable $X(i)$, we define a critical interval $I_{X}$ for its bound (upper and lower) that has one endpoint at the bound we are trying to maintain and the other slightly closer to the expected trajectory of the random variable. Consider a fixed step $j\leqslant i_0$ such that $X(j)\in I_{X}$. Define the stopping time $\tau_{X,j}$ to be $\tau_{X,j}=\min\{i_0,\max\{j,\tau\},{\mbox{the\ smallest\ }}i\geqslant j\ {\mbox{ such\ that}\ }X(i)\notin I_{X}\}$, which makes it possible to establish the martingale condition and apply the martingale inequality in Lemma 2.2 or Lemma 2.3. We then bound the probability that the designed variable crosses its critical interval during the process, so that a simple application of the union bound over all starting points $j$ shows that the probability of any event in the collection occurring is small, completing the proof. As a supplement, we list some necessary inequalities needed in the proof of Theorem 3.1. By Lemma 2.1, we have \begin{align*} \sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}^2\geqslant \frac{ (\sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m})^2}{ |\mathcal{K}_m(i)|} \end{align*} for $2\leqslant m\leqslant k-1$. Firstly, note that $\sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}={k\choose m}\mathbf{Q}_k(i)$ because each $K_k$ counted on the right side is counted ${k \choose m}$ times on the left side.
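The double-counting identity above can be sanity-checked by brute force on a small random graph; the following sketch (helper names our own, purely illustrative) enumerates cliques directly and compares the two sides:

```python
import random
from itertools import combinations
from math import comb

def check_double_counting(n, k, m, p, seed=0):
    """Check sum over complete m-sets U_m of R_{k,U_m} == C(k,m) * Q_k
    in a G(n,p) sample, by direct clique enumeration."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    is_clique = lambda S: all(frozenset(e) in edges for e in combinations(S, 2))
    Q_k = sum(1 for S in combinations(range(n), k) if is_clique(S))
    total = 0
    for U in combinations(range(n), m):
        if not is_clique(U):
            continue
        # Common neighbourhood N_U, then count K_{k-m}'s inside it.
        common = [v for v in range(n) if v not in U
                  and all(frozenset((v, u)) in edges for u in U)]
        total += sum(1 for T in combinations(common, k - m) if is_clique(T))
    # Each K_k is counted once per m-subset of its vertex set.
    return total == comb(k, m) * Q_k

print(all(check_double_counting(9, 4, m, 0.7, seed=s)
          for m in (2, 3) for s in range(3)))  # prints True
```

The identity is exact for every graph, so the check succeeds regardless of the random seed; the sample graph merely supplies a nontrivial instance.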
Next, note that $|\mathcal{K}_{2}(i)|=|E(i)|\sim \frac{n^2}{2}p$ in (3.2) when $p\geqslant p_0$ in (3.9), and we recursively apply the inequality $|\mathcal{K}_{m}(i)|\leqslant \frac{n}{m}|\mathcal{K}_{m-1}(i)|$ to obtain $|\mathcal{K}_{m}(i)|\leqslant\frac{n^m}{m!}p$ for $2\leqslant m\leqslant k-1$. Thus, we have \begin{align} \sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}^2\geqslant \frac{m!{k\choose m}^2\mathbf{Q}_k^2(i)}{n^mp}. \end{align} Conditioned on the event that the estimates in (3.5) hold for $\mathbf{R}_{k,U_m}$ for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$, we also have upper bounds on $\sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}^2$. For $m=2$, we have $\beta_2=k- \frac{5}{2}$ in (3.7) and $|\mathcal{K}_{2}(i)| \sim \frac{n^2}{2}p$; then by Lemma 2.1, \begin{align} \sum_{U_2\in \mathcal{K}_2(i)}\mathbf{R}_{k,U_2}^2&\leqslant \frac{\bigl(\sum_{U_2\in \mathcal{K}_2(i)}\mathbf{R}_{k,U_2}\bigr)^2}{|\mathcal{K}_2(i)|}+ 4|\mathcal{K}_2(i)|\bigl(\sigma n^{\beta_2}\log^{\gamma_2} n\bigr)^2\notag\\ &\sim \frac{2!{k\choose 2}^2\mathbf{Q}_k^2(i)}{n^2p}+2\sigma^2 n^{2k-3}p\log^{2\gamma_2} n. \end{align} For $3\leqslant m\leqslant k-1$, by the estimates in (3.5) and $|\mathcal{K}_{m}(i)|\leqslant\frac{n^m}{m!}p$, the trivial upper bound is \begin{align} \sum_{U_m\in \mathcal{K}_m(i)}\mathbf{R}_{k,U_m}^2\leqslant \frac{n^mp}{m!}\Bigl(\frac{n^{k-m}}{(k-m)!}p^{{k\choose 2}-{m\choose 2}}+\sigma n^{\beta_m}\log^{\gamma_m} n\Bigr)^2. \end{align} \subsection{Tracking $\mathbf{Q}_k(i)$} For the upper bound of $\mathbf{Q}_k(i)$, we introduce the critical interval \begin{align} I_{\mathbf{Q}_k}^u=\Bigl( \frac{n^k}{k!}p^{k\choose 2}+Bn^{k-1}p^{{k\choose 2}-4}, \frac{n^k}{k!}p^{k\choose 2}+ \frac{n^{k-1}}{2}p^{{k\choose 2}-4}\Bigr), \end{align} where \begin{align} B&= \frac{1}{2}- \frac{1}{2{k\choose 2}}+ \frac{1}{3{k\choose 2}(k-4)!}<\frac{1}{2}. \end{align} Consider a fixed step $j\leqslant i_0$. Suppose $\mathbf{Q}_k(j)\in I_{\mathbf{Q}_k}^u$.
Define \begin{align} \tau_{\mathbf{Q}_k,j}^u=\min\bigl\{i_0,\max\{j,\tau\},{\mbox{the\ smallest\ }}i\geqslant j{\mbox{ such\ that}\ }\mathbf{Q}_k(i)\notin I_{\mathbf{Q}_k}^u\bigr\}. \end{align} Let $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^u$; thus all calculations in this subsection are conditioned on the event that the estimates in (3.5) hold for $\mathbf{R}_{k,U_m}$ for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$. By the equation shown in (2.5), it follows that \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i] &=(-1)^{k+1}(k-1)- \frac{1}{\mathbf{Q}_k(i)}\sum_{U_2\in \mathcal{K}_2(i)} \mathbf{R}_{k,U_2}^2+\cdots+ \frac{(-1)^k(k-2)}{\mathbf{Q}_k(i)}\sum_{U_{k-1}\in \mathcal{K}_{k-1}(i)}\mathbf{R}_{k,U_{k-1}}^2\\ &<(-1)^{k+1}(k-1)- \frac{2{k\choose 2}^2\mathbf{Q}_k(i)}{n^2p}+ \frac{2}{\mathbf{Q}_k(i)}\frac{n^3 p}{3!} \Bigl( \frac{n^{k-3}}{(k-3)!}p^{{k\choose 2}-3}+\sigma n^{\beta_3}\log^{\gamma_3} n\Bigr)^2\\ &\qquad+O\bigl(n^{k-4}p^{{k\choose 2}-11}\bigr), \end{aligned} \end{equation*} where $\sum_{U_2\in \mathcal{K}_2(i)} \mathbf{R}_{k,U_2}^2$ and $\sum_{U_3\in \mathcal{K}_3(i)} \mathbf{R}_{k,U_3}^2$ are bounded via (4.1) and (4.3), and the last term $O(n^{k-4}p^{{k\choose 2}-11})$ comes from $\sum_{U_4\in \mathcal{K}_4(i)}\mathbf{R}_{k,U_4}^2$ in (4.3), which dominates all the remaining terms. Since $\mathbf{Q}_k(i)\in I_{\mathbf{Q}_k}^u$ in (4.4), we further have \begin{align} \mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i] &<(-1)^{k+1}(k-1)-\frac{2{k\choose 2}^2 n^{k-2}}{k!}p^{{k\choose 2}-1}-2{k\choose 2}^2Bn^{k-3}p^{{k\choose 2}-5}\notag\\ &\quad\quad+ \frac{k!n^{k-3}}{3(k-3)!^2}p^{{k\choose 2}-5}+O\bigl(\sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n\bigr), \end{align} where $O(n^{k-4}p^{{k\choose 2}-11})$ is absorbed into $O(\sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n)$ since $\beta_3$ is as in (3.7).
For all $i$ with $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^u$, define the sequence of random variables \begin{align} \mathbf{U}(i)= \mathbf{Q}_k(i)- \frac{n^k}{k!}p^{k\choose 2}- \frac{n^{k-1}}{2}p^{{k\choose 2}-4}. \end{align} \noindent{\bf Claim 4.1:}\ The sequence $\mathbf{U}(j),\mathbf{U}(j+1),\cdots,\mathbf{U}(\tau_{\mathbf{Q}_k,j}^u)$ is a supermartingale and the maximum one-step change $\Delta \mathbf{U}$ is $O(\sigma n^{k- 5/2}\log^{\gamma_2} n)$. \begin{proof}[Proof of Claim 4.1]\ To see this, for $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^u$, by the definition in (4.8), we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{U}|\mathcal{F}_i] &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]- \frac{n^k}{k!}\Bigl[p^{k\choose 2}(i+1)-p^{k\choose 2}(i)\Bigr]- \frac{n^{k-1}}{2}\Bigl[p^{{k\choose 2}-4}(i+1)-p^{{k\choose 2}-4}(i)\Bigr]. \end{aligned} \end{equation*} Note that $p=p(i)=1-k(k-1)t$ and $p(i+1)=1-k(k-1)(t+\frac{1}{n^2})$ in (3.1); then by Taylor expansion, we have \begin{align} \mathbb{E}[\Delta \mathbf{U}|\mathcal{F}_i] &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]- \frac{n^k}{k!}\biggl[-{k\choose 2} \frac{k(k-1)}{n^2}p^{{k\choose 2}-1}+O\Bigl( \frac{1}{n^4}p^{{k\choose 2}-2}\Bigr)\biggr]\notag\\ &\quad\quad- \frac{n^{k-1}}{2}\biggl[-\biggl({k\choose 2}-4\biggr) \frac{k(k-1)}{n^2}p^{{k\choose 2}-5} +O\Bigl(\frac{1}{n^4}p^{{k\choose 2}-6}\Bigr)\biggr]\notag\\ &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]+ \frac{2{k\choose 2}^2 n^{k-2}}{k!}p^{{k\choose 2}-1}+ \biggl[{k\choose 2}-4\biggr]{k\choose 2}n^{k-3}p^{{k\choose 2}-5}\notag\\ &\quad\quad+O\bigl(n^{k-4}p^{{k\choose 2}-2}\bigr), \end{align} where $O(n^{k-5}p^{{k\choose 2}-6})$ is absorbed into $O(n^{k-4}p^{{k\choose 2}-2})$ when $p\geqslant p_0$ in (3.9).
With the help of the inequality in (4.7), we further have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{U}|\mathcal{F}_i] &<(-1)^{k+1}(k-1)-\biggl[2{k\choose 2}^2B-{k\choose 2}^2+4{k\choose 2}- \frac{k!}{3(k-3)!^2}\biggr]n^{k-3}p^{{k\choose 2}-5}\\ &\quad\quad+O\bigl( \sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n\bigr)\\ &<(-1)^{k+1}(k-1)-2{k\choose 2}n^{k-3}p^{{k\choose 2}-5}+ O\bigl( \sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n\bigr), \end{aligned} \end{equation*} where \begin{align*}2{k\choose 2}^2B-{k\choose 2}^2+4{k\choose 2}- \frac{k!}{3(k-3)!^2}=3{k\choose 2} -{k\choose 2} \frac{2}{3(k-3)!}>2{k\choose 2} \end{align*} by the value of $B$ in (4.5), and $O(n^{k-4}p^{{k\choose 2}-2})$ is absorbed into $O(\sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n)$ by the value of $\beta_3$ in (3.7). Note that ${k\choose 2}n^{k-3}p^{{k\choose 2}-5}> O( \sigma n^{\beta_3}p^{-2}\log^{\gamma_3}n)+(-1)^{k+1}(k-1)$ when $p\geqslant p_0$ in (3.9) and $\lambda$, $\gamma_3$ are chosen appropriately; hence $\mathbb{E}[\Delta \mathbf{U}|\mathcal{F}_i]<0$ and the sequence $\mathbf{U}(j),\mathbf{U}(j+1),\cdots,\mathbf{U}(\tau_{\mathbf{Q}_k,j}^u)$ is a supermartingale. Next, we show that the maximum one-step change $\Delta \mathbf{U}$ is $O(\sigma n^{k-5/2}\log^{\gamma_2} n)$.
By (4.8) and the expansion in (4.9), we have \begin{equation*} \begin{aligned} \Delta \mathbf{U}&=\Delta \mathbf{Q}_k+ \frac{2{k\choose 2}^2}{k!}n^{k-2}p^{{k\choose 2}-1}+ \biggl[{k\choose 2}-4\biggr]{k\choose 2}n^{k-3}p^{{k\choose 2}-5} +O\bigl(n^{k-4}p^{{k\choose 2}-2}\bigr).\\ \end{aligned} \end{equation*} Applying the expression for $\Delta \mathbf{Q}_k$ in (2.4) to the above display, together with the estimates on $\textbf{R}_{k,U_m}$ in (3.5) for any $U_m\in {[n]\choose m}$ and the values of $\beta_m$ in (3.7) with $2\leqslant m\leqslant k-1$, we finally have \begin{equation*} \begin{aligned} \Delta \mathbf{U}&\leqslant-{k\choose 2}\Bigl( \frac{n^{k-2}}{(k-2)!}p^{{k\choose 2}-1}- \sigma n^{\beta_2}\log^{\gamma_2} n\Bigr)+2{k\choose 3}\Bigl( \frac{n^{k-3}}{(k-3)!} p^{{k\choose 2}-{3\choose 2}}+\sigma n^{\beta_3}\log^{\gamma_3} n\Bigr)+\cdots\\ &\quad\quad+ \frac{2{k\choose 2}^2}{k!}n^{k-2}p^{{k\choose 2}-1}+ \biggl[{k\choose 2}-4\biggr]{k\choose 2}n^{k-3}p^{{k\choose 2}-5}+O\bigl(n^{k-4}p^{{k\choose 2}-2}\bigr)\\ &=O\bigl(\sigma n^{k-\frac{5}{2}}\log^{\gamma_2} n\bigr). \end{aligned} \end{equation*} The claim follows. \end{proof} Now, apply Lemma 2.2 to the sequence $\mathbf{U}(j),\mathbf{U}(j+1),\cdots, \mathbf{U}(\tau_{\mathbf{Q}_k,j}^u)$. The number of steps in this sequence is $O(n^2p)$ because $|E(i)|\sim \frac{n^2}{2}p$ in (3.2) when $p\geqslant p_0$ in (3.9). Since $\mathbf{Q}_k(j)\in I_{\mathbf{Q}_k}^u$ in (4.4), the initial value satisfies $\mathbf{U}(j)\geqslant -\bigl( \frac{1}{2{k\choose 2}}- \frac{1}{3{k\choose 2}(k-4)!}\bigr)n^{k-1}p^{{k\choose 2}-4}$.
Then, for all $i$ with $j\leqslant i\leqslant \tau_{\mathbf{Q}_k,j}^u$, the probability of a large deviation of $\mathbf{Q}_k(i)$ beginning at step $j$ is at most \begin{equation*} \begin{aligned} &\mathbb{P}\Bigl[\mathbf{Q}_k(i)\geqslant \frac{n^k}{k!}p^{k\choose 2}+ \frac{n^{k-1}}{2}p^{{k\choose 2}-4}\Bigr]\\ &=\mathbb{P}\Bigl[\mathbf{U}(i)\geqslant 0\Bigr]\\ &\leqslant \exp\biggl[-\Omega\biggl( \frac{(n^{k-1}p^{{k\choose 2}-4})^2}{(n^2p) \bigl(\sigma n^{k-5/2}\log^{\gamma_2} n\bigr)^2}\biggr)\biggr]\\ &=\exp\biggl[-\Omega\biggl( \frac{n p^{2{k\choose 2}-9}}{\sigma^2\log^{2\gamma_2}n}\biggr)\biggr]. \end{aligned} \end{equation*} By the union bound, since there are at most $n^2$ possible values of $j$ in (3.1) and $p\geqslant p_0$ in (3.9), we have \begin{equation*} \begin{aligned} n^2\exp\biggl[-\Omega\biggl( \frac{n p^{2{k\choose 2}-9}}{\sigma^2\log^{2\gamma_2}n}\biggr)\biggr]=o(1). \end{aligned} \end{equation*} W.h.p., $\mathbf{Q}_k(i)$ never crosses its critical interval $I_{\mathbf{Q}_k}^u$ in (4.4), and so the upper bound on $\mathbf{Q}_k(i)$ in (3.3) holds. \begin{remark} The proof of the lower bound of $\mathbf{Q}_k(i)$ is similar. We give it in the appendix for reference. \end{remark} \subsection{Tracking $\mathbf{R}_{k,U_m}$ for any $U_m\in{[n]\choose m}$ with $2\leqslant m\leqslant k-1$} We prove the dynamic concentration of $\mathbf{R}_{k,U_m}$ for any $U_m\in{[n]\choose m}$ with $2\leqslant m\leqslant k-1$ in this subsection. Fix one subset $U_{m^*}\in{[n]\choose m^*}$ for some $m^*$ with $2\leqslant m^*\leqslant k-1$. We start with the upper bound of $\mathbf{R}_{k,U_{m^*}}$.
Our critical interval for the upper bound of $\mathbf{R}_{k,U_{m^*}}$ is \begin{align} I_{\mathbf{R}_{k,U_{m^*}}}^u=\Bigl( \frac{n^{k-m^*}}{(k-m^*)!}&p^{{k\choose 2} -{m^*\choose 2}}+(\sigma-1) n^{\beta_{m^*}}\log^{\gamma_{m^*}} n,\notag\\&\qquad \frac{n^{k-m^*}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}}+\sigma n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\Bigr), \end{align} where $\beta_{m^*}=k-m^*- \frac{\binom{k}{2}-\binom{m^*}{2}}{2\binom{k}{2}-2}$ as in (3.7). Consider a fixed step $j\leqslant i_0$. Suppose $\mathbf{R}_{k,U_{m^*}}(j)\in I_{\mathbf{R}_{k,U_{m^*}}}^u$. Define \begin{align} \tau_{\mathbf{R}_{k,U_{m^*}},j}^u=\min\bigl\{i_0,\max\{j,\tau\}, {\mbox{the\ smallest\ }}i\geqslant j\ {\mbox{ such\ that}\ } \mathbf{R}_{k,U_{m^*}}\notin I_{\mathbf{R}_{k,U_{m^*}}}^u\bigr\}. \end{align} Let $j\leqslant i\leqslant\tau_{\mathbf{R}_{k,U_{m^*}},j}^u$; thus all calculations are conditioned on the events that the estimates in (3.3) and (3.4) hold for $\mathbf{Q}_k(i)$, and the estimates in (3.5) hold for $\mathbf{R}_{k,U_m}$ for all $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$ and $U_m\neq U_{m^*}$. Take one $U_{m^*}^c\in \mathcal{K}_{k-m^*}(i)\cap \binom{N_{U_{m^*}}}{k-m^*}$ in $G(i)$ and let $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ be the number of $K_k$s in $G(i)$ such that the removal of the edges of any one of these $K_k$s results in $U_{m^*}^c\notin \mathcal{K}_{k-m^*}(i+1)\cap \binom{N_{U_{m^*}}(i+1)}{k-m^*}$ in $G(i+1)$. Then, we have \begin{align} \mathbb{E}[\Delta \mathbf{R}_{k,{U_{m^*}}}| \mathcal{F}_i] &=-\sum_{{U_{m^*}^c}\in \mathcal{K}_{k-m^*}(i)\cap \binom{N_{U_{m^*}}}{k-m^*}} \frac{\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}}{\mathbf{Q}_k(i)}. \end{align} In order to count $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$, let $H\subseteq U_{m^*}\cup U_{m^*}^c$ and let $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}^{H}$ be the number of $K_k$s counted by $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ that satisfy $K_k\cap (U_{m^*}\cup U_{m^*}^c)=H$. Define $|H|=h$.
To ensure that the removal of the edges of one of these $K_k$s destroys the extension $U_{m^*}^c$ of $U_{m^*}$ in $G(i+1)$, it is necessary that $H\cap U_{m^*}^c\neq \emptyset$ and $2\leqslant h\leqslant k$. Choose $H\in {\cup_{\rho=0}^{h-1}{U_{m^*}\choose \rho}\oplus{U_{m^*}^c\choose h-\rho}}$, where ${U_{m^*}\choose \rho}\oplus{U_{m^*}^c\choose h-\rho}$ denotes the collection of union sets consisting of $\rho$ vertices in $U_{m^*}$ and $h-\rho$ vertices in $U_{m^*}^c$. Hence, $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ is decomposed as \begin{align} \mathbf{Q}_{k,U_{m^*},U_{m^*}^c}=\sum_{h=2}^{k}\sum_{\rho=0}^{h-1} \sum_{H\in {{U_{m^*}\choose \rho}\oplus{U_{m^*}^c\choose h-\rho}}}\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}^{H}. \end{align} Following the inclusion-exclusion counting technique shown in (2.3), we have \begin{align*} \mathbf{Q}_{k,U_{m^*},U_{m^*}^c}^{H} &=\mathbf{1}_H\cdot\mathbf{R}_{k,H}-\sum_{T_1\in{(U_{m^*}\cup U_{m^*}^c) \setminus H\choose 1}}\mathbf{1}_{H\cup T_1}\cdot\mathbf{R}_{k,H\cup T_1}+\cdots\\&\quad\quad+ \sum_{T_{k-h}\in{(U_{m^*}\cup U_{m^*}^c)\setminus H\choose k-h}}(-1)^{k-h}\mathbf{1}_{H\cup T_{k-h}}\cdot\mathbf{R}_{k,H\cup T_{k-h}}\\ &=\sum_{z=0}^{k-h}\sum_{T_z\in{(U_{m^*}\cup U_{m^*}^c)\setminus H\choose z}}(-1)^z\mathbf{1}_{H\cup T_z}\cdot\mathbf{R}_{k,H\cup T_z}, \end{align*} where $\mathbf{1}_{H\cup T_z}$ with $0\leqslant z\leqslant k-h$ is the indicator random variable for the event that the subgraph induced by $H\cup T_z$ in $G(i)$ is complete. Combining with the equation in (4.13), we further have \begin{align*} \mathbf{Q}_{k,U_{m^*},U_{m^*}^c} &=\sum_{h=2}^{k}\sum_{\rho=0}^{h-1}\sum_{z=0}^{k-h}\sum_{H\in {{U_{m^*}\choose \rho} \oplus{U_{m^*}^c\choose h-\rho}}}\sum_{T_z\in{(U_{m^*}\cup U_{m^*}^c) \setminus H\choose z}}(-1)^z\mathbf{1}_{H\cup T_z}\cdot\mathbf{R}_{k,H\cup T_z}.
\end{align*} In the above display, for fixed integers $h$ and $z$, we recount the union $H\cup T_z$ as a subset $H_{h+z}\in {U_{m^*}\choose \zeta} \oplus{U_{m^*}^c\choose h+z-\zeta}$ with $0\leqslant \zeta\leqslant h+z$; each such $H_{h+z}$ arises $[{h+z\choose h}-{\zeta\choose h}]$ times as a union $H\cup T_z$, because $H$ must satisfy $H\cap U_{m^*}^c\neq \emptyset$. This means that \begin{align*} &\sum_{\rho=0}^{h-1}\sum_{H\in {{U_{m^*}\choose \rho} \oplus{U_{m^*}^c\choose h-\rho}}}\sum_{T_z\in{(U_{m^*}\cup U_{m^*}^c) \setminus H\choose z}}(-1)^z\mathbf{1}_{H\cup T_z}\cdot\mathbf{R}_{k,H\cup T_z}\\ &=\sum_{\zeta=0}^{h+z}\sum_{H_{h+z}\in {U_{m^*}\choose \zeta} \oplus{U_{m^*}^c\choose h+z-\zeta}}\biggl[{h+z\choose h}-{\zeta\choose h}\biggr] (-1)^z\mathbf{1}_{H_{h+z}}\cdot\mathbf{R}_{k,H_{h+z}}. \end{align*} It follows that \begin{align} \mathbf{Q}_{k,U_{m^*},U_{m^*}^c} &=\sum_{h=2}^{k}\sum_{z=0}^{k-h}\sum_{\zeta=0}^{h+z}\sum_{H_{h+z}\in {U_{m^*}\choose \zeta} \oplus{U_{m^*}^c\choose h+z-\zeta}}\biggl[{h+z\choose h}-{\zeta\choose h}\biggr] (-1)^z\mathbf{1}_{H_{h+z}}\cdot\mathbf{R}_{k,H_{h+z}}. \end{align} In fact, $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ is the sum of all elements of the upper triangular matrix below $$\small{ \left(\begin{array}{ccc} \sum\limits_{\zeta=0}^{2}\sum\limits_{H_{2}\in {U_{m^*}\choose \zeta}\oplus{U_{m^*}^c\choose 2-\zeta}}(-1)^0[{2\choose 2}-{\zeta\choose 2}]\mathbf{1}_{H_2}\cdot\mathbf{R}_{k,H_{2}}&\cdots&\sum\limits_{\zeta=0}^{k}\sum\limits_{H_{k}\in {U_{m^*}\choose \zeta}\oplus{U_{m^*}^c\choose k-\zeta}}(-1)^{k-2}[{k\choose 2}-{\zeta\choose 2}]\mathbf{1}_{H_k}\cdot\mathbf{R}_{k,H_{k}}\\ \cdots&\cdots &\cdots\\ \sum\limits_{\zeta=0}^{k}\sum\limits_{H_{k}\in {U_{m^*}\choose \zeta}\oplus{U_{m^*}^c\choose k-\zeta}}(-1)^{0}[{k\choose k}-{\zeta\choose k}]\mathbf{1}_{H_k}\cdot\mathbf{R}_{k,H_{k}}&\cdots&0 \end{array}\right)} $$ where the rows correspond to the index $h$ and the columns to the index $z$ in (4.14).
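The multiplicity $[{h+z\choose h}-{\zeta\choose h}]$ of each recounted set can be checked by exhaustion: among the $h$-subsets $H$ of a fixed $(h+z)$-set $W$ with $\zeta$ vertices on the $U_{m^*}$ side, exactly ${\zeta\choose h}$ avoid the $U_{m^*}^c$ side. A quick verification in our own notation (sets $A$, $B$ play $U_{m^*}$, $U_{m^*}^c$):

```python
import itertools
from math import comb

def pairs_count(W, B, h):
    """Number of ways to write W = H ∪ T_z with |H| = h and H ∩ B ≠ ∅
    (T_z = W \\ H is then determined)."""
    return sum(1 for H in itertools.combinations(W, h) if set(H) & B)

A = set(range(10))            # plays U_{m^*}
B = set(range(10, 20))        # plays U_{m^*}^c
for h in range(2, 6):
    for z in range(0, 4):
        for zeta in range(0, h + z + 1):
            # a test set with zeta vertices in A and h+z-zeta vertices in B
            W = set(list(A)[:zeta]) | set(list(B)[:h + z - zeta])
            assert pairs_count(W, B, h) == comb(h + z, h) - comb(zeta, h)
print("multiplicity C(h+z,h) - C(zeta,h) verified")
```

Note that Python's `math.comb` returns $0$ when $\zeta<h$, matching the convention that ${\zeta\choose h}=0$ in that case.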
Summing instead along the anti-diagonals of this matrix, $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ can be rewritten as \begin{align} \mathbf{Q}_{k,U_{m^*},U_{m^*}^c}&=\sum_{h=2}^{k}\sum_{\zeta=0}^{h}\sum_{H_{h}\in {U_{m^*}\choose \zeta}\oplus{U_{m^*}^c\choose h-\zeta}}\sum_{s=2}^{h}(-1)^{h-s}\biggl[{h\choose s}-{\zeta\choose s}\biggr]\mathbf{1}_{H_h}\cdot\mathbf{R}_{k,H_{h}}. \end{align} Note that there is no $\mathbf{R}_{k,U_{m^*}}$ on the right side of (4.15), because $\mathbf{R}_{k,U_{m^*}}$ corresponds to the case $\zeta=h$. Thus, the estimates on $\mathbf{Q}_k(i)$ in (3.3) and (3.4), and the estimates on $\mathbf{R}_{k,U_m}$ in (3.5) for all $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$ and $U_m\neq U_{m^*}$, already suffice for the calculation of $\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}$ in (4.15). Furthermore, according to the expressions of $\mathbf{R}_{k,H_h}$ for $2\leqslant h\leqslant k-1$ in (3.5), the term $\mathbf{R}_{k,H_2}$ dominates the sum on the right side of (4.15); the main term therefore comes from $\zeta=0,1$ and $s=2$. It follows that \begin{align} &\mathbf{Q}_{k,U_{m^*},U_{m^*}^c}\notag\\ &=\biggl[{k-m^*\choose 2}+m^*(k-m^*)\biggr]\Bigl( \frac{n^{k-2}}{(k-2)!}p^{{k\choose 2}-1} - \sigma n^{\beta_2}\log^{\gamma_2} n\Bigr)+O\bigl(n^{k-3}p^{{k\choose 2}-3}\bigr), \end{align} where ${k-m^*\choose 2}$ counts the number of terms $\mathbf{R}_{k,H_{2}}$ with $\zeta=0$ and $s=2$, and $m^*(k-m^*)$ counts the number of terms $\mathbf{R}_{k,H_{2}}$ with $\zeta=1$ and $s=2$.
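The coefficient ${k-m^*\choose 2}+m^*(k-m^*)$ appearing in (4.16) simplifies to ${k\choose 2}-{m^*\choose 2}$, and the exponent $\beta_2$ from (3.7) equals $k-\tfrac{5}{2}$; both facts are used in the next step. They are elementary, but a quick exact-arithmetic check (our own sketch, over a range of illustrative values of $k$ and $m^*$) costs nothing:

```python
from fractions import Fraction
from math import comb

def beta(k, m):
    # beta_m = k - m - (C(k,2) - C(m,2)) / (2 C(k,2) - 2), as in (3.7)
    return k - m - Fraction(comb(k, 2) - comb(m, 2), 2 * comb(k, 2) - 2)

for k in range(4, 12):
    for m in range(2, k):
        # C(k-m,2) + m(k-m) = C(k,2) - C(m,2)
        assert comb(k - m, 2) + m * (k - m) == comb(k, 2) - comb(m, 2)
    # beta_2 = k - 5/2
    assert beta(k, 2) == Fraction(2 * k - 5, 2)
print("coefficient identity and beta_2 = k - 5/2 verified")
```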
Note that ${k-m^*\choose 2}+m^*(k-m^*)={k\choose 2}-{m^*\choose 2}$ and $\beta_2=k- \frac{5}{2}$ by (3.7). Combining the equations in (4.12) and (4.16), and applying the estimates of $\mathbf{Q}_k(i)$ in (3.3), we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}| \mathcal{F}_i] <-\sum_{U_{m^*}^c\in K_{k-m^*}\cap N_{U_{m^*}}} \frac{\bigl[{k\choose 2}- {m^*\choose 2}\bigr]\bigl( \frac{n^{k-2}}{(k-2)!}p^{{k\choose 2}-1}- \sigma n^{k- \frac{5}{2}}\log^{\gamma_2} n\bigr)+O\bigl(n^{k-3}p^{{k\choose 2}-3}\bigr)}{\frac{n^k}{k!}p^{{k\choose 2}}}. \end{aligned} \end{equation*} The number of ways to choose $U_{m^*}^c\in K_{k-m^*}\cap N_{U_{m^*}}$ is $\mathbf{R}_{k,U_{m^*}}$, and $\mathbf{R}_{k,U_{m^*}}\in I_{\mathbf{R}_{k,U_{m^*}}}^u$ by (4.10); it then follows that \begin{equation*} \begin{aligned} &\mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}| \mathcal{F}_i]\\ &<- \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr] \bigl( \frac{n^{k-m^*}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}} +(\sigma-1) n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\bigr) \bigl( \frac{n^{k-2}}{(k-2)!}p^{{k\choose 2}-1}- \sigma n^{k- \frac{5}{2}}\log^{\gamma_2} n\bigr)}{ \frac{n^k}{k!}p^{{k\choose 2}}}\\ &\quad +O\bigl(n^{k-m^*-3}p^{{k\choose 2}-{m^*\choose 2}-3}\bigr).
\end{aligned} \end{equation*} Rearranging the above inequality, we obtain \begin{align} \mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}| \mathcal{F}_i] &<- \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr] k(k-1)n^{k-m^*-2}}{(k-m^*)!} p^{{k\choose 2}-{m^*\choose 2}-1}\notag\\ &\quad\quad+ \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr] k!\sigma n^{k-{m^*}- \frac{5}{2}}\log^{\gamma_2} n}{(k-m^*)!p^{{m^*\choose 2}}}\notag\\ &\quad\quad- \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr]k(k-1)(\sigma-1)} {p}n^{\beta_{m^*}-2}\log^{\gamma_{m^*}} n \notag\\ &\quad\quad+ \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr]k!\sigma(\sigma-1)}{p^{{k\choose 2}}}n^{\beta_{m^*}-\frac{5}{2}} \log^{\gamma_2+\gamma_{m^*}} n\notag\\ &\quad\quad+O\bigl(n^{k-m^*-3}p^{{k\choose 2}-{m^*\choose 2}-3}\bigr). \end{align} For all $i$ with $j\leqslant i\leqslant\tau_{\mathbf{R}_{k,U_{m^*}},j}^u$, define the sequence of random variables \begin{align} \mathbf{Z}_{U_{m^*}}(i)=\mathbf{R}_{k,U_{m^*}}- \frac{n^{k-m^*}}{(k-m^*)!}p^{{k\choose 2}- {m^*\choose 2}}-(\sigma-1) n^{\beta_{m^*}}\log^{\gamma_{m^*}} n. \end{align} In order to prove that $\mathbf{R}_{k,U_{m^*}}$ satisfies the upper bound in (3.5), we prove the following two claims. \vskip 0.3cm \noindent{\bf Claim 4.2:}\ Removing the edges of one $K_k$ in $G(i)$, we have \begin{align*} \mathbf{R}_{k,U_{m^*}}(i)-\mathbf{R}_{k,U_{m^*}}(i+1)= O\bigl(n^{k-m^*-1}p^{\binom{k}{2}-\binom{m^*+1}{2}}\bigr). \end{align*} \begin{proof}[Proof of Claim 4.2] When we remove the edges of one $K_k$ from $G(i)$, recall that $\mathbf{R}_{k,U_{m^*}}(i)$ is the number of $K_{k-m^*}$s in which every vertex is in $N_{U_{m^*}}$; hence $\mathbf{R}_{k,U_{m^*}}(i)-\mathbf{R}_{k,U_{m^*}}(i+1)\geqslant 0$. Suppose the removed $K_k$ contains a vertex $u\in U_{m^*}$ and also some vertex $w\in N_{U_{m^*}}$.
Then the number of $K_{k-m^*-1}$s in which every vertex is in $N_{U_{m^*}\cup \{w\}}$ is at most $\mathbf{R}_{k,U_{m^*}\cup \{w\}}(i)$. By the estimate in (3.5), the proof is complete. \end{proof} \noindent{\bf Claim 4.3:}\ The sequence $-\mathbf{Z}_{U_{m^*}}(j),-\mathbf{Z}_{U_{m^*}}(j+1),\cdots, -\mathbf{Z}_{U_{m^*}}(\tau_{\mathbf{R}_{k,U_{m^*}},j}^u)$ is an $(\eta,N)$-bounded submartingale, where $\eta=\Theta\bigl(n^{k-m^*-2}p^{\binom{k}{2}-\binom{m^*}{2}-1}\bigr)$ and $N=\Theta\bigl(n^{k-m^*-1}p^{\binom{k}{2}-\binom{m^*+1}{2}}\bigr)$ for $2\leqslant m^*\leqslant k-1$. \begin{proof}[Proof of Claim 4.3]\ For all $i$ with $j\leqslant i\leqslant\tau_{\mathbf{R}_{k,U_{m^*}},j}^u$, by the definition in (4.18), we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{Z}_{U_{m^*}}|\mathcal{F}_i]&=\mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}|\mathcal{F}_i]- \frac{n^{k-m^*}}{(k-m^*)!}\Bigl[p^{{k\choose 2}-{m^*\choose 2}}(i+1)-p^{{k\choose 2}-{m^*\choose 2}}(i)\Bigr]\\ &\quad- n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\bigl[\sigma(i+1)-\sigma(i)\bigr].
\end{aligned} \end{equation*} Note that $p=p(i)=1-k(k-1)t$ and $p(i+1)=1-k(k-1)\bigl(t+ \frac{1}{n^2}\bigr)$ by (3.1), while $\sigma(i)=1- \frac{k(k-1)}{4}\log p(i)$ and $\sigma(i+1)=1-\frac{k(k-1)}{4}\log p(i+1)$ by (3.8); then \begin{align} \mathbb{E}[\Delta \mathbf{Z}_{U_{m^*}}|\mathcal{F}_i]&= \mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}|\mathcal{F}_i]- \frac{n^{k-m^*}}{(k-m^*)!}\biggl[-\biggl({k\choose 2}-{m^*\choose 2}\biggr) \frac{k(k-1)}{n^2}p^{{k\choose 2}-{m^*\choose 2}-1}\notag\\ &\quad\quad+O\Bigl( \frac{1}{n^4}p^{{k\choose 2}-{m^*\choose 2}-2}\Bigr)\biggr] -n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\Bigl[ \frac{\sigma'}{n^2}+O\Bigl( \frac{\sigma''}{n^4}\Bigr)\Bigr]\notag\\ &=\mathbb{E}[\Delta \mathbf{R}_{k,U_{m^*}}|\mathcal{F}_i]+\biggl[{k\choose 2}-{m^*\choose 2}\biggr] \frac{k(k-1)n^{k-m^*-2}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}-1}\notag\\ &\quad\quad-\sigma'n^{\beta_{m^*}-2}\log^{\gamma_{m^*}} n+ O\bigl(n^{k-m^*-4}p^{{k\choose 2}-{m^*\choose 2}-2}\bigr), \end{align} where $O(\sigma''n^{\beta_{m^*}-4}\log^{\gamma_{m^*}} n)$ is absorbed into $O(n^{k-m^*-4}p^{{k\choose 2}-{m^*\choose 2}-2})$ because $\sigma''=O(p^{-2})$ by (3.6), $\beta_{m^*}$ is given in (3.7), $p\geqslant p_0$ by (3.9), and the constants $\lambda$ and $\gamma_{m^*}$ are chosen appropriately.
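The first-order step used here, $p^{a}(i+1)-p^{a}(i)=-a\,\frac{k(k-1)}{n^2}p^{a-1}+O\bigl(\frac{1}{n^4}p^{a-2}\bigr)$ with $a={k\choose 2}-{m^*\choose 2}$, follows from a Taylor expansion since $p$ decreases by exactly $k(k-1)/n^2$ per step. A numerical sketch with illustrative values of $k$, $m^*$ and $n$ (our own check, not part of the proof):

```python
from math import comb

# One step increases t by 1/n^2, hence p = 1 - k(k-1)t drops by k(k-1)/n^2.
k, m = 5, 3
a = comb(k, 2) - comb(m, 2)          # exponent C(k,2) - C(m*,2)
n = 10**4
dp = k * (k - 1) / n**2
for p in (0.9, 0.5, 0.2):
    exact = (p - dp)**a - p**a
    first_order = -a * dp * p**(a - 1)
    # The remainder is of second order in dp, i.e. O(n^{-4} p^{a-2}).
    assert abs(exact - first_order) < 10 * a * a * dp**2 * p**(a - 2)
print("first-order Taylor step for p^a verified")
```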
Combining the equations in (4.17) and (4.19), we further have \begin{align} \mathbb{E}[\Delta \mathbf{Z}_{ U_{m^*}}|\mathcal{F}_i] &< \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr]k!\sigma }{(k-m^*)!p^{{m^*\choose 2}}}n^{k-{m^*}- \frac{5}{2}}\log^{\gamma_2} n\notag\\ &\quad\quad - \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr]k(k-1)(\sigma-1)} {p}n^{\beta_{m^*}-2}\log^{\gamma_{m^*}} n\notag\\ &\quad\quad+ \frac{\bigl[{k\choose 2}-{m^*\choose 2}\bigr]k!\sigma(\sigma-1) }{p^{{k\choose 2}}}n^{\beta_{m^*}-\frac{5}{2}}\log^{\gamma_2+\gamma_{m^*}} n\notag\\ &\quad\quad-\sigma'n^{\beta_{m^*}- 2} \log^{\gamma_{m^*}} n+O\bigl(n^{k-m^*-3}p^{{k\choose 2}-{m^*\choose 2}-3}\bigr), \end{align} where $O(n^{k-m^*-4}p^{{k\choose 2}-{m^*\choose 2}-2})$ in (4.19) is absorbed into $O(n^{k-m^*-3}p^{{k\choose 2}-{m^*\choose 2}-3})$ in (4.17). Finally, we have $\mathbb{E}[\Delta \mathbf{Z}_{U_{m^*}}|\mathcal{F}_i]<0$ in (4.20), because the inequalities \begin{equation*} \begin{aligned} & \frac{k!\sigma}{(k-m^*)!p^{{m^*\choose 2}}}n^{k-{m^*}- \frac{5}{2}}\log^{\gamma_2} n < \frac{k(k-1)(\sigma-1)} {2p}n^{\beta_{m^*}-2}\log^{\gamma_{m^*}} n,\\ & \frac{k!\sigma}{p^{{k\choose 2}}}n^{\beta_{m^*}- \frac{5}{2}}\log^{\gamma_2} n < \frac{k(k-1)} {2p}n^{\beta_{m^*}-2},\\ & O\bigl(n^{k-m^*-3}p^{{k\choose 2}-{m^*\choose 2}-3}\bigr)< \sigma' n^{\beta_{m^*}- 2}\log^{\gamma_{m^*}} n \end{aligned} \end{equation*} hold when $\beta_{m^*}$ is given in (3.7), $\sigma'= k^2(k-1)^2/4p$ by (3.8), $p\geqslant p_0$ by (3.9), and $\lambda$ and $\gamma_{m^*}$ are chosen appropriately. We have proved that the sequence $-\mathbf{Z}_{U_{m^*}}(j),-\mathbf{Z}_{U_{m^*}}(j+1),\cdots, -\mathbf{Z}_{U_{m^*}}(\tau_{\mathbf{R}_{k,U_{m^*}},j}^u)$ is a submartingale for any $2\leqslant m^*\leqslant k-1$. In the following, we show that the sequence is $(\eta,N)$-bounded.
By the definition in (4.18) and the calculation in (4.19), we have \begin{equation*} \begin{aligned} &-\mathbf{Z}_{U_{m^*}}(i+1)+\mathbf{Z}_{U_{m^*}}(i)\\ &=\mathbf{R}_{k,U_{m^*}}(i)- \mathbf{R}_{k,U_{m^*}}(i+1)+ \frac{n^{k-m^*}}{(k-m^*)!}\Bigl[p^{{k\choose 2}-{m^*\choose 2}}(i+1)-p^{{k\choose 2}-{m^*\choose 2}}(i)\Bigr]\\ &\quad\quad+n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\bigl[\sigma(i+1)-\sigma(i)\bigr]\\ &=\mathbf{R}_{k,U_{m^*}}(i)-\mathbf{R}_{k,U_{m^*}}(i+1)-\biggl[{k\choose 2}-{m^*\choose 2}\biggr] \frac{k(k-1)n^{k-m^*-2}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}-1}\\ &\quad\quad+\sigma'n^{\beta_{m^*}-2} \log^{\gamma_{m^*}} n+O\bigl(n^{k-m^*-4}p^{{k\choose 2}-{m^*\choose 2}-2}\bigr). \end{aligned} \end{equation*} Since $p\geqslant p_0$ by (3.9) and $\lambda$ and $\gamma_{m^*}$ are chosen appropriately, we have \begin{align*} \biggl[{k\choose 2}-{m^*\choose 2}\biggr] \frac{k(k-1)n^{k-m^*-2}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}-1} >\sigma'n^{\beta_{m^*}-2} \log^{\gamma_{m^*}} n+O\bigl(n^{k-m^*-4}p^{{k\choose 2}-{m^*\choose 2}-2}\bigr). \end{align*} Thus, we take \begin{align*} \eta&=\biggl[{k\choose 2}-{m^*\choose 2}\biggr] \frac{k(k-1)n^{k-m^*-2}}{(k-m^*)!}p^{{k\choose 2}-{m^*\choose 2}-1}\\ &=\Theta\bigl(n^{k-m^*-2}p^{{k\choose 2}-{m^*\choose 2}-1}\bigr). \end{align*} Since $-\mathbf{Z}_{U_{m^*}}(i+1)+\mathbf{Z}_{U_{m^*}}(i)\leqslant \mathbf{R}_{k,U_{m^*}}(i)-\mathbf{R}_{k,U_{m^*}}(i+1)$, applying Claim 4.2, we take \begin{align*} N=\Theta\Bigl(n^{k-m^*-1}p^{\binom{k}{2}-\binom{m^*+1}{2}}\Bigr). \end{align*} This completes the proof of Claim 4.3. \end{proof} The length of the sequence $-\mathbf{Z}_{U_{m^*}}(j),-\mathbf{Z}_{U_{m^*}}(j+1),\cdots, -\mathbf{Z}_{U_{m^*}}(\tau_{\mathbf{R}_{k,U_{m^*}},j}^u)$ is also $O(n^2p)$, which gives $\ell=O(n^2p)$ in Lemma 2.3. Choose $a=n^{\beta_{m^*}}\log^{\gamma_{m^*}}n$; then $a=o(\eta \ell)$.
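Before applying Lemma 2.3, it is worth verifying the exponent arithmetic behind the resulting deviation probability: with $a=n^{\beta_{m^*}}\log^{\gamma_{m^*}}n$, the exponent of $n$ in $n^{2\beta_{m^*}}/(n^{k-m^*}\cdot n^{k-m^*-1})$ simplifies to $\bigl(\binom{m^*}{2}-1\bigr)/\bigl(\binom{k}{2}-1\bigr)$. An exact-arithmetic check over illustrative ranges of $k$ and $m^*$ (our own sketch):

```python
from fractions import Fraction
from math import comb

def beta(k, m):
    # beta_m from (3.7)
    return k - m - Fraction(comb(k, 2) - comb(m, 2), 2 * comb(k, 2) - 2)

# Exponent of n in  n^{2 beta_m} / (n^{k-m} * n^{k-m-1})
for k in range(4, 12):
    for m in range(2, k):
        expo = 2 * beta(k, m) - (k - m) - (k - m - 1)
        assert expo == Fraction(comb(m, 2) - 1, comb(k, 2) - 1)
print("deviation-probability exponent verified")
```

For $m^*=2$ the exponent is $0$, which is why the extra factor $\log^{2\gamma_2}n$ with $\gamma_2>\tfrac12$ is needed to beat the union bound in that case.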
Lemma 2.3 yields that \begin{equation*} \begin{aligned} &\mathbb{P}\Bigl[\mathbf{R}_{k,U_{m^*}}\geqslant \frac{n^{k-m^*}}{(k-m^*)!} p^{{k\choose 2}-{m^*\choose 2}}+\sigma n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\Bigr]\\ &=\mathbb{P}\Bigl[-\mathbf{Z}_{U_{m^*}}(i)\leqslant -n^{\beta_{m^*}}\log^{\gamma_{m^*}} n\Bigr]\\ &\leqslant\exp\biggl[-\Omega\biggl( \frac{n^{2\beta_{m^*}}\log^{2\gamma_{m^*}} n}{n^{k-m^*}\cdot n^{k-m^*-1}}\biggr)\biggr]\\ &=\exp\biggl[-\Omega\biggl(n^{\frac{\binom{m^*}{2}-1}{\binom{k}{2}-1}}\log^{2\gamma_{m^*}} n\biggr)\biggr]. \end{aligned} \end{equation*} By the union bound, since the number of ways to choose $j$, $m^*$ ($2\leqslant m^*\leqslant k-1$) and $U_{m^*}\in {[n]\choose m^*}$ is at most $(k-2)n^{m^*+2}$, we also have \begin{align*} (k-2)n^{m^*+2}\exp\biggl[-\Omega\biggl(n^{\frac{\binom{m^*}{2}-1}{\binom{k}{2}-1}}\log^{2\gamma_{m^*}} n\biggr)\biggr]=o(1), \end{align*} because this is clearly true when $3\leqslant m^*\leqslant k-1$, and we take $\gamma_2> \frac{1}{2}$ for $m^*=2$. In conclusion, w.h.p., none of the $\mathbf{R}_{k,U_{m}}$ for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$ has such a large upward deviation. \begin{remark} The argument for the lower bound of $\mathbf{R}_{k,U_{m}}$ in (3.5) for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$ is the symmetric analogue of the above analysis. \end{remark} \section{Conclusions} For the random $K_k$-removal algorithm, fewer direct results are available when $k\geqslant 4$, because the evolving structures are more complicated to investigate than in the case $k=3$. We establish dynamic concentration of the complete higher codegrees around the expected trajectories derived from their pseudorandom properties. The final size of the random $K_k$-removal algorithm is at most $n^{2-1/(k(k-1)-2)+o(1)}$ for $k\geqslant 4$. Towards improving this result, we observe that the main obstacle is the parameter $\mathbf{R}_{k,U_m}$ for $2\leqslant m\leqslant k-1$.
The control over $\mathbf{R}_{k,U_m}$ is lost when $p$ is around $p_0$, as shown in Remark 3.3, although the probabilities of these extreme events are very low, as shown in Remark 3.4. The behavior of the chosen random variables $\mathbf{R}_{k,U_m}$ for $2\leqslant m\leqslant k-1$ does not allow us to analyze the structures of the process further, and new ideas are certainly needed to track the random $K_k$-removal algorithm. This will be investigated in future work. \section*{Appendix} \noindent{\bf Appendix: Lower bound of $\mathbf{Q}_k(i)$ (for Remark 4.1) } \vskip 0.3cm \noindent For the lower bound of $\mathbf{Q}_k(i)$, we work with the critical interval \begin{equation*} \begin{aligned} I_{\mathbf{Q}_k}^\ell=\Bigl( \frac{n^k}{k!}p^{{k\choose 2}}-\sigma^2n^{\alpha}p^{-1}\log^\mu n, \frac{n^k}{k!}p^{{k\choose 2}}-\sigma(\sigma-1)n^{\alpha}p^{-1}\log^\mu n\Bigr), \end{aligned} \eqno(1) \end{equation*}where $\alpha$ is given in (3.6). Consider a fixed step $j\leqslant i_0$. Similarly, suppose $\mathbf{Q}_k(j)\in I_{\mathbf{Q}_k}^\ell$ and define \begin{equation*} \begin{aligned} \tau_{\mathbf{Q}_k,j}^\ell=\min\bigl\{i_0,\max\{j,\tau\},{\mbox{the\ smallest\ }}i\geqslant j {\mbox{ such\ that}\ }\mathbf{Q}_k(i)\notin I_{\mathbf{Q}_k}^\ell\bigr\}. \end{aligned} \eqno(2) \end{equation*}Let $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^\ell$. All calculations in this subsection are conditioned on the events that the estimates in (3.5) hold for $\mathbf{R}_{k,U_m}$ for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$.
By the equations shown in (2.5), we obtain the estimate on $\mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i]$ in the reverse direction: \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i] &=(-1)^{k+1}(k-1)- \frac{1}{\mathbf{Q}_k(i)}\sum_{U_2\in \mathcal{K}_2(i)}\mathbf{R}_{k,U_2}^2+\cdots+ \frac{(-1)^k(k-2)}{\mathbf{Q}_k(i)}\sum_{U_{k-1}\in \mathcal{K}_{k-1}(i)}\mathbf{R}_{k,U_{k-1}}^2\\ &>(-1)^{k+1}(k-1)- \frac{1}{\mathbf{Q}_k(i)}\Bigl( \frac{2!{k\choose 2}^2\mathbf{Q}_k^2(i)}{n^2p}+2\sigma^2 n^{2k-3}p\log^{2\gamma_2} n\Bigr)\\ &\quad\quad+ \frac{12{k\choose 3}^2\mathbf{Q}_k(i)}{n^3p}+O(n^{k-4}p^{{k\choose 2}-11})\\ &=(-1)^{k+1}(k-1)- \frac{2{k\choose 2}^2\mathbf{Q}_k(i)}{n^2p}+ \frac{12{k\choose 3}^2\mathbf{Q}_k(i)}{n^3p}+ O( n^{k-3}\sigma^2p^{-{k\choose 2}+1}\log^{2\gamma_2} n), \end{aligned} \end{equation*} where $\sum_{U_2\in \mathcal{K}_2(i)}\mathbf{R}_{k,U_2}^2$ and $\sum_{U_3\in \mathcal{K}_3(i)}\mathbf{R}_{k,U_3}^2$ are replaced by the estimates in (4.1) and (4.2), and the term $O(n^{k-4}p^{{k\choose 2}-11})$ comes from $\sum_{U_4\in \mathcal{K}_4(i)}\mathbf{R}_{k,U_4}^2$ in (4.3), which dominates all the remaining terms.
Since $\mathbf{Q}_k(j)\in I_{\mathbf{Q}_k}^\ell$ as shown in (1), we further have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{Q}_k| \mathcal{F}_i] &>(-1)^{k+1}(k-1)- \frac{2{k\choose 2}^2\bigl( \frac{n^k}{k!}p^{{k\choose 2}}-\sigma(\sigma-1)n^{\alpha}p^{-1}\log^\mu n\bigr)}{n^2p}\\ &\quad\quad+ \frac{12{k\choose 3}^2\bigl( \frac{n^k}{k!}p^{{k\choose 2}}-\sigma^2n^{\alpha}p^{-1}\log^\mu n\bigr)}{n^3p}+O\bigl( n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n \bigr)\\ &=(-1)^{k+1}(k-1)- \frac{2{k\choose 2}^2 n^{k-2}}{k!}p^{{k\choose 2}-1}+ \frac{2{k\choose 2}^2\sigma(\sigma-1)n^{\alpha-2}\log^\mu n}{p^{2}}\\ &\quad\quad+ \frac{12{k\choose 3}^2n^{k-3}}{k!}p^{{k\choose 2}-1}+O\bigl( n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n\bigr), \end{aligned} \eqno(3) \end{equation*}where $\alpha$ is given in (3.6), and $O( \sigma^2n^{\alpha-3}p^{-2}\log^\mu n)$ is absorbed into $O(n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n)$. For all $i$ with $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^\ell$, define the sequence of random variables \begin{equation*} \begin{aligned} \mathbf{L}(i)=\mathbf{Q}_k(i)- \frac{n^k}{k!}p^{{k\choose 2}}+\sigma^2n^{\alpha}p^{-1}\log^\mu n. \end{aligned} \eqno(4) \end{equation*} \noindent{\bf Claim A:}\ The sequence $\mathbf{L}(j),\mathbf{L}(j+1),\cdots,\mathbf{L}(\tau_{\mathbf{Q}_k,j}^\ell)$ is a submartingale and the maximum one-step change $\Delta \mathbf{L}$ is $O(\sigma n^{k- 5/2}\log^{\gamma_2}n)$.
\vskip 0.2cm \begin{proof}[Proof of Claim A]\ Similarly, for all $i$ with $j\leqslant i<\tau_{\mathbf{Q}_k,j}^\ell$, by the definition in (4), we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{L}|\mathcal{F}_i] &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]- \frac{n^k}{k!}\Bigl[p^{k\choose 2}(i+1)-p^{k\choose 2}(i)\Bigr] +n^{\alpha}\log^\mu n\Bigl[ \frac{\sigma^2(i+1)}{p(i+1)}-\frac{\sigma^2(i)}{p(i)}\Bigr].\\ \end{aligned} \end{equation*} Note that $p(i)=1-k(k-1)t$ and $p(i+1)=1-k(k-1)(t+ \frac{1}{n^2})$ by (3.1), while $\sigma(i)=1- \frac{k(k-1)}{4}\log p(i)$ and $\sigma(i+1)=1- \frac{k(k-1)}{4}\log p(i+1)$ by (3.8); then by Taylor expansion, we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{L}|\mathcal{F}_i] &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]- \frac{n^k}{k!}\biggl[-{k\choose 2} \frac{k(k-1)}{n^2}p^{{k\choose 2}-1} +O\Bigl( \frac{1}{n^4}p^{{k\choose 2}-2}\Bigr)\biggr]\\ &\quad\quad+n^{\alpha}\log^\mu n\biggl[ \frac{2\sigma\sigma' p-\sigma^2p'}{n^2 p^2} +O\Bigl( \frac{\sigma^2}{n^4p^3}\Bigr)\biggr]\\ &=\mathbb{E}[\Delta \mathbf{Q}_k|\mathcal{F}_i]+ \frac{2{k\choose 2}^2 n^{k-2} }{k!}p^{{k\choose 2}-1}+ \frac{2\sigma\sigma'n^{\alpha-2}\log^\mu n}{p}\\ &\quad\quad+ \frac{k(k-1)\sigma^2n^{\alpha-2}\log^\mu n}{p^2}+O\bigl(n^{k-4} p^{{k\choose 2}-2}\bigr), \end{aligned} \eqno(5) \end{equation*} where $O( n^{\alpha-4}\sigma^2 p^{-3}\log^{\mu} n)$ is absorbed into $O(n^{k-4}p^{{k\choose 2}-2})$ because $\alpha$ is given in (3.6), $p\geqslant p_0$ by (3.9), and $\lambda$ and $\mu$ are chosen appropriately.
Combining the equations in (3) and (5), we have \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{L}|\mathcal{F}_i] &>(-1)^{k+1}(k-1)+ \frac{\bigl[2{k\choose 2}^2+k(k-1)\bigr]\sigma^2n^{\alpha-2}\log^\mu n}{p^{2}}- \frac{2{k\choose 2}^2\sigma n^{\alpha-2}\log^\mu n}{p^{2}}\\ &\quad\quad+ \frac{12{k\choose 3}^2n^{k-3}}{k!}p^{{k\choose 2}-1}+ \frac{2\sigma\sigma'n^{\alpha-2}\log^\mu n}{p}+ O\bigl(n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n\bigr), \end{aligned} \end{equation*} where $O(n^{k-4}p^{{k\choose 2}-2})$ in (5) is absorbed into $O(n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n)$ in (3). We have \begin{align*} 2\sigma\sigma'n^{\alpha-2}\log^\mu np^{-1}= 2{k\choose 2}^2\sigma n^{\alpha-2}p^{-2}\log^\mu n \end{align*} by $\sigma'= \frac{1}{4}k^2(k-1)^2 p^{-1}$ in (3.8). It follows that \begin{equation*} \begin{aligned} \mathbb{E}[\Delta \mathbf{L}|\mathcal{F}_i] &>(-1)^{k+1}(k-1)+ \frac{\bigl[2{k\choose 2}^2+k(k-1)\bigr]\sigma^2n^{\alpha-2}\log^\mu n}{p^{2}}\\ &\quad\quad+ \frac{12{k\choose 3}^2n^{k-3}}{k!}p^{{k\choose 2}-1}+ O\bigl(n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n\bigr). \end{aligned} \eqno(6) \end{equation*} Note that \begin{align*} \Bigl[2{k\choose 2}^2+k(k-1)\Bigr]\sigma^2n^{\alpha-2}p^{-2}\log^\mu n> O(n^{k-3}\sigma^2 p^{-{k\choose 2}+1}\log^{2\gamma_2} n)+(-1)^{k+1}(k-1) \end{align*} when $\alpha$ is given in (3.6), $p\geqslant p_0$ by (3.9), and $\lambda$, $\mu$ and $\gamma_2$ are chosen appropriately. Hence $\mathbb{E}[\Delta \mathbf{L}|\mathcal{F}_i]>0$, and the sequence $\mathbf{L}(j),\mathbf{L}(j+1),\cdots,\mathbf{L}(\tau_{\mathbf{Q}_k,j}^\ell)$ is a submartingale. Next, we show that the maximum one-step change $\Delta \mathbf{L}$ is $O(\sigma n^{k- 5/2}\log^{\gamma_2} n)$.
By the definition in (4) and the calculation in (5), we have \begin{equation*} \begin{aligned} \Delta \mathbf{L}&=\Delta \mathbf{Q}_k+ \frac{2{k\choose 2}^2 n^{k-2} }{k!}p^{{k\choose 2}-1}+ \frac{2\sigma\sigma'n^{\alpha-2}\log^\mu n}{p}+ \frac{k(k-1)\sigma^2n^{\alpha-2}\log^\mu n}{p^2}+O(n^{k-4}p^{{k\choose 2}-2}). \end{aligned} \end{equation*}Applying the equation for $\Delta \mathbf{Q}_k$ in (2.4) and the estimates on $\mathbf{R}_{k,U_m}$ for any $U_m\in {[n]\choose m}$ with $2\leqslant m\leqslant k-1$ in (3.5) to the above display, we obtain \begin{equation*} \begin{aligned} \Delta \mathbf{L}&\leqslant -{k\choose 2}\Bigl( \frac{n^{k-2}}{(k-2)!}p^{{k\choose 2}-1} -\sigma n^{\beta_2}\log^{\gamma_2} n\Bigr) +{k\choose 3}\Bigl( \frac{n^{k-3}}{(k-3)!}p^{{k\choose 2}-{3\choose 2}}+ \sigma n^{\beta_3}\log^{\gamma_3} n\Bigr)+\cdots\\ &\quad\quad+ \frac{2{k\choose 2}^2 n^{k-2}}{k!}p^{{k\choose 2}-1}+ \frac{2\sigma\sigma'n^{\alpha-2}\log^\mu n}{p}+ \frac{k(k-1)\sigma^2n^{\alpha-2}\log^\mu n}{p^2} +O\bigl(n^{k-4}p^{{k\choose 2}-2}\bigr)\\%&\\ &=O(\sigma n^{k- \frac{5}{2}}\log^{\gamma_2} n), \end{aligned} \end{equation*}where $\alpha$ and $\beta_2$ are given in (3.6) and (3.7), and the term $O(\sigma^2n^{\alpha-2}p^{-2}\log^\mu n)$ is absorbed into $O(\sigma n^{k- 5/2}\log^{\gamma_2} n)$ since $p\geqslant p_0$ as shown in (3.9) and $\lambda$, $\mu$ and $\gamma_2$ are chosen appropriately. The number of steps in this sequence is also $O(n^2p)$. Since $\mathbf{Q}_k(j)\in I_{\mathbf{Q}_k}^\ell$ in (1), we have $\mathbf{L}(j)<\sigma n^{\alpha}p^{-1}\log^\mu n$ from (4).
For all $i$ with $j\leqslant i\leqslant\tau_{\mathbf{Q}_k,j}^\ell$, Lemma 2.3 yields that the probability of such a large deviation beginning at step $j$ is at most \begin{equation*} \begin{aligned} &\mathbb{P}\Bigl[\mathbf{Q}_k(i)\leqslant \frac{n^k}{k!}p^{{k\choose 2}}-\sigma^2n^{\alpha}p^{-1}\log^\mu n\Bigr]\\ &=\mathbb{P}\Bigl[\mathbf{L}(i)\leqslant 0\Bigr]\\ &\leqslant \exp\biggl[-\Omega\biggl( \frac{\bigl(\sigma n^{\alpha}p^{-1}\log^\mu n\bigr)^2} {(n^2p)\bigl( \sigma n^{k- 5/2}\log^{\gamma_2}n\bigr)^2}\biggr)\biggr]\\ &=\exp\biggl[-\Omega\biggl(\frac{n^{2\alpha-2k+3}\log^{2\mu} n}{p^3\log^{2\gamma_2} n}\biggr)\biggr]. \end{aligned} \end{equation*} By the union bound, since there are at most $n^2$ possible values of $j$ by (3.1), we have \begin{equation*} n^2\exp\biggl[-\Omega\biggl(\frac{n^{2-\frac{2}{\binom{k}{2}-1}}\log^{2\mu} n}{p^3\log^{2\gamma_2} n}\biggr)\biggr]=o(1), \end{equation*} since $\alpha$ is given in (3.6). W.h.p. $\mathbf{Q}_k(i)$ never crosses its critical interval $I_{\mathbf{Q}_k}^\ell$ in (1), and so the lower bound on $\mathbf{Q}_k(i)$ in (3.4) holds. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Efficient implicit integration for finite-strain viscoplasticity with a nested multiplicative split} \author{A.V. Shutov\corauthref{cor}}, \corauth[cor]{Corresponding author. Tel.: +7-383-333-1746; fax: +7-383-333-1612.} \ead{[email protected]} \address{Lavrentyev Institute of Hydrodynamics\\ Pr. Lavrentyeva 15, 630090 Novosibirsk, Russia, \\ Novosibirsk State University\\ Pirogova 2, 630090 Novosibirsk, Russia} \begin{abstract} An efficient and reliable stress computation algorithm is presented, which is based on implicit integration of the local evolution equations of multiplicative finite-strain plasticity/viscoplasticity. The algorithm is illustrated by an example involving a combined nonlinear isotropic/kinematic hardening; numerous backstress tensors are employed for a better description of the material behavior. The considered material model exhibits the so-called weak invariance under arbitrary isochoric changes of the reference configuration, and the presented algorithm retains this useful property. Even more: the weak invariance serves as a guide in constructing this algorithm. The constraint of inelastic incompressibility is exactly preserved as well. The proposed method is first-order accurate. Concerning the accuracy of the stress computation, the new algorithm is comparable to the Euler Backward method with a subsequent correction of incompressibility (EBMSC) and the classical exponential method (EM). Regarding the computational efficiency, the new algorithm is superior to the EBMSC and EM. Some accuracy tests are presented using parameters of the aluminum alloy 5754-O and the 42CrMo4 steel. FEM solutions of two boundary value problems using MSC.MARC are presented to show the correctness of the numerical implementation. \end{abstract} \begin{keyword} implicit time stepping \sep partitioned Euler Backward \sep multiplicative elasto-plasticity \sep kinematic hardening \sep finite element method. 
\end{keyword} \end{frontmatter} \emph{AMS Subject Classification}: 74C20; 65L80; 74S05. \section{Introduction} In many practical applications, metal forming simulations are implemented to estimate the induced plastic anisotropy, the residual stresses and the magnitude of spring back. It is known that the numerical computations accounting for these effects should employ constitutive models of finite strain plasticity/viscoplasticity with combined nonlinear isotropic/kinematic hardening \cite{LaurentGreeze2009, LaurentGreeze2010, Firat2012}. One such model was proposed by Shutov and Krei\ss ig within the phenomenological context of multiplicative plasticity \cite{ShutovKrVisc}. An important constitutive assumption of this model is the nested multiplicative split of the deformation gradient: Within the first split, which is a classical one (cf. \cite{Bilby, Kroener}), the deformation gradient is multiplicatively decomposed into the elastic and inelastic parts. Within the further splits, which were introduced by Lion in \cite{LionIJP} to account for a nonlinear kinematic hardening, the inelastic part is decomposed into conservative (energetic) and dissipative parts. Although there is a large number of approaches to finite strain elasto-plasticity \cite{RewXiao}, the approach based on the multiplicative split is gaining considerable popularity in the modeling community. This is due to numerous advantages of the approach: Utilizing the multiplicative split, one may build a model which is thermodynamically consistent and objective, and which exhibits neither non-physical energy dissipation in the elastic range nor shear oscillations under simple shear.
Moreover, the multiplicative split is considered to be the most suitable framework for the elastoplastic analysis of single crystals \cite{Lubarda}.\footnote{ Aside from materials with a crystalline structure, the multiplicative split advocated in this paper yields good results even for rubber-like materials \cite{Nedjar} and geomaterials \cite{SimoMeschke}.} Furthermore, the classical small strain models of metal plasticity exhibit the so-called weak invariance of the solution \cite{ShutovWeakInvar}. The use of the multiplicative split allows one to retain this weak invariance property during generalization to finite strains \cite{ShutovWeakInvar}. Apart from the modeling of nonlinear kinematic hardening, extensions of the Shutov and Krei\ss ig model are used in the analysis of nonlinear distortional hardening \cite{ShutovPaKr}, thermoplasticity \cite{ShutovThermal}, ductile damage \cite{ShutovDuctDamage}, and ratchetting \cite{ZHU2013}. Moreover, the model of Shutov and Krei\ss ig was coupled with a microscopic model to account for the evolution of the metal microstructure in \cite{SilberDisloc}. Due to the manifold applications of this model, its numerical treatment becomes an important task. The main goal of the current paper is to report a new efficient and robust stress integration algorithm, which benefits from the special structure of the governing equations. A similar phenomenological model with a nested multiplicative split was presented in \cite{Vladimirov2008} by Vladimirov et al. This model was applied to many practical problems (see, among others, \cite{Vladimirov2009, Brepols2014,FrischkornReese}). Moreover, the idea of the nested multiplicative split was also applied by other scholars to nonlinear kinematic hardening in \cite{DettRes, TsakmakisWulleweit, Hartmann2008, HennanAnand} and to shape memory alloys in \cite{Helm1, Helm3, ReeseChrist, FrischkornReese}.
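The weak invariance property and the nested multiplicative split mentioned above admit a compact numerical illustration. The sketch below uses our own variable names (`F`, `Fi`, `Fii` as stand-ins for the deformation gradient and the two levels of the split) and checks that the elastic-type strain measures are unchanged under an isochoric change of the reference configuration $\mathbf{F}\mapsto\mathbf{F}\mathbf{F}_0$, $\mathbf{F}_i\mapsto\mathbf{F}_i\mathbf{F}_0$, $\mathbf{F}_{ii}\mapsto\mathbf{F}_{ii}\mathbf{F}_0$ with $\det\mathbf{F}_0=1$; it is a conceptual sketch, not the constitutive model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def unimodular(rng):
    """A random 3x3 matrix rescaled to det = 1 (an isochoric transformation)."""
    A = np.eye(3) + 0.2 * rng.normal(size=(3, 3))
    return A / np.cbrt(np.linalg.det(A))

F   = np.eye(3) + 0.2 * rng.normal(size=(3, 3))  # total deformation gradient
Fi  = unimodular(rng)   # inelastic part (det Fi = 1: inelastic incompressibility)
Fii = unimodular(rng)   # dissipative part of Fi from the second (Lion) split

# Strain measures driving the stress and the backstress
Fe  = F @ np.linalg.inv(Fi)       # elastic part of the first split
Fie = Fi @ np.linalg.inv(Fii)     # conservative (energetic) part of Fi
Ce  = Fe.T @ Fe
Cie = Fie.T @ Fie

# Isochoric change of the reference configuration with det F0 = 1
F0 = unimodular(rng)
Fe_new  = (F @ F0) @ np.linalg.inv(Fi @ F0)
Fie_new = (Fi @ F0) @ np.linalg.inv(Fii @ F0)

assert np.allclose(Fe_new.T @ Fe_new, Ce)        # Ce unchanged
assert np.allclose(Fie_new.T @ Fie_new, Cie)     # Cie unchanged
assert np.isclose(np.linalg.det(Fi @ F0), 1.0)   # incompressibility preserved
print("weak invariance under an isochoric reference change verified")
```

The invariance is immediate algebraically, since $\mathbf{F}_0$ cancels in $(\mathbf{F}\mathbf{F}_0)(\mathbf{F}_i\mathbf{F}_0)^{-1}$; the point of the check is that any algorithm formulated in terms of these strain measures inherits the property automatically.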
Due to the similarity of the modeling assumptions, the algorithm proposed in the current study can be applied to these constitutive equations as well. For the implementation in a displacement-based FEM, a numerical algorithm is needed which computes the stresses at each point of Gau\ss \ integration as a function of the local deformation history \cite{SimoHughes}. For the model of Shutov and Krei\ss ig (and its further extensions), the stress computation is based on the integration of the underlying constitutive equations. As usual for elasto-plasticity of metals, the corresponding system of constitutive equations is stiff. Thus, in general, an explicit time integration yields unacceptable results,\footnote{For the discussion of various explicit and semi-explicit integrators, the reader is referred to \cite{Safaei} and references cited therein.} and implicit time stepping is required to obtain a stable integration procedure. In the current study, the preference is given to the Euler Backward discretization of the evolution equations \cite{SimoHughes, SimMieh}. Alternatively, one may consider its modification based on the exponential mapping \cite{WebAnan, MiStei, Simo}. A special class of time-stepping algorithms is obtained by the minimization of the integrated stress power. Such a variational approach is rather general so it can be applied to various models with nonlinear kinematic hardening \cite{Mosler2010}. As a result of the implicit time discretization, a system of nonlinear algebraic equations is obtained. Unfortunately, in general, the closed-form solution for this system is not available. Therefore, a time-consuming local iterative procedure has to be implemented to solve this system of equations \cite{ShutovKrKoo, Vladimirov2008, ReeseChrist, Brepols2014, FrischkornReese}. In some cases, the overall system of algebraic equations can be reduced to the solution of a scalar equation with respect to a single unknown. 
Such ``one-equation integrators'' are already available for many models of elastoplasticity with nonlinear kinematic hardening: An efficient stress algorithm was constructed in \cite{Luhrs} for a model based on the assumption of small elastic strains. In \cite{Hartmann}, another anisotropic plasticity model is analyzed, which is based on the additive decomposition of the linearized Green strain. The authors benefit from the simple structure of the model, which resembles infinitesimal elasto-plasticity. In \cite{Zhu2014}, a problem-adapted algorithm was proposed for a model of hypoelasto-plasticity with the additive decomposition of the strain rate. Next, a structure-exploiting scheme for a model with nonlinear kinematic hardening was recently proposed in \cite{Rothe2015} in the small-strain context. In \cite{Montans2012}, an anisotropic model is considered employing logarithmic (Hencky) strain measures and quadratic stored energy functions. These assumptions simplify the algorithmic treatment, since an implicit integration by exponential mapping can be carried out using small strain procedures. At the same time, less progress has been achieved in the area of multiplicative elasto-plasticity or multiplicative viscoelasticity coupled to hyperelasticity. Due to the nonlinear kinematics, efficient time stepping becomes non-trivial even in the isotropic case. The construction of problem-adapted integrators is still a topic of ongoing research: An explicit update formula was proposed in \cite{ShutovLandgraf2013} for the special case of neo-Hookean free energy. Its generalization to an elasto-plastic material with Yeoh elasticity is discussed in \cite{LandgrShuPAMM}; an application to a temperature-dependent constitutive behavior is presented in \cite{Johlitz}. A semi-implicit scheme was proposed in \cite{HennanAnand} for a model with nonlinear kinematic hardening implementing the Lion approach.
This scheme is constructed assuming that the elastic strains are always coaxial with the plastic strain increment,\footnote{ This coaxiality is discussed after equation (12.11) in \cite{HennanAnand}.} which is a very restrictive assumption in the case of plastic anisotropy. Moreover, in order to simplify the algorithmic treatment, the authors assume in \cite{HennanAnand} that the plastic strain increments are small. Another hybrid explicit/implicit procedure was proposed in \cite{SilbermannPAMM}, which is more robust than a purely explicit procedure. In this paper, the overall system of equations is subdivided (partitioned) into coupled subsystems. For each of the subsystems, \emph{simple closed-form solutions are obtained}. An efficient time stepping procedure for the fully coupled system of algebraic equations is then constructed by a tailored combination of these explicit solutions. As a result, a new algorithm is obtained, which is called the partitioned Euler Backward method (PEBM). Within the PEBM, only a scalar equation with respect to a single unknown inelastic strain increment has to be solved iteratively. In the current paper, the accuracy of the stress computation by the proposed PEBM is tested numerically using material parameters of 5754-O aluminum alloy and 42CrMo4 steel. Concerning the accuracy, the new algorithm is comparable to the Euler Backward method with subsequent correction of incompressibility (EBMSC) and the classical exponential method (EM). The novel PEBM is superior to the conventional EBMSC and EM concerning the computational efficiency: For a model with $N$ backstress tensors, the EBMSC and EM require the solution of $6 N+7$ nonlinear scalar equations at each Gau\ss \ point. At the same time, the PEBM requires the solution of a single scalar equation with respect to one unknown inelastic strain increment. In particular, the introduction of additional backstresses is possible almost without any increase in computational costs.
We close this introduction with a few words regarding notation. Throughout this paper, bold-faced symbols denote first- and second-rank tensors in $\mathbb{R}^3$. A coordinate-free tensor formalism is used in this work; $\mathbf{1}$ stands for the second-rank identity tensor; the deviatoric part of a tensor is denoted by $\mathbf A^{\text{D}} := \mathbf A - \frac{1}{3} \text{tr}(\mathbf A) \mathbf 1$, where $\text{tr}(\mathbf A)$ stands for the trace. The overline $\overline{(\cdot)}$ stands for the unimodular part of a tensor \begin{equation}\label{BarDef} \overline{\mathbf{A}}=(\det \mathbf{A})^{-1/3} \mathbf{A}. \end{equation} In the description of algorithms, the notation $\mathbf{A} \gets \mathbf{B}$ means a computation step where the variable $\mathbf{A}$ obtains the new value $\mathbf{B}$. \section{MATERIAL MODEL} \subsection{System of constitutive equations} First, we recall the viscoplastic material model proposed by Shutov and Krei\ss ig in \cite{ShutovKrVisc}. Here we consider its modification with two Armstrong-Frederick backstresses (cf. \cite{ShutovKuprin}). A generalization of the algorithm to cover an arbitrary number of backstresses is straightforward. For simplicity of the numerical implementation, we adopt the Lagrangian (material) formulation of the constitutive equations.\footnote{An alternative derivation of the model is presented in \cite{Broec}.} The description of finite strain kinematics is based on the original work of Lion \cite{LionIJP}. Along with the well-known right Cauchy-Green tensor ${\mathbf C}$, we introduce tensor-valued internal variables: the inelastic right Cauchy-Green tensor ${\mathbf C}_{\text{i}}$ and two inelastic right Cauchy-Green tensors of substructure: ${\mathbf C}_{\text{1i}}$ and ${\mathbf C}_{\text{2i}}$. In order to capture the isotropic hardening, we will need the inelastic arc length (Odqvist parameter) $s$ and its dissipative part $s_d$.
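The two tensor operations introduced above admit a direct implementation; the following numpy sketch (the helper names \texttt{dev} and \texttt{unimod} are ours) may serve as a reference for the algorithmic steps below.

```python
import numpy as np

def dev(A):
    """Deviatoric part: A^D = A - (tr A / 3) * 1."""
    return A - (np.trace(A) / 3.0) * np.eye(3)

def unimod(A):
    """Unimodular part: (det A)^(-1/3) * A, so that det(unimod(A)) = 1."""
    return np.linalg.det(A) ** (-1.0 / 3.0) * A
```

Both operations are used repeatedly in the time-stepping algorithm presented below.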
For the free energy per unit mass we assume the following additive decomposition \begin{equation}\label{EnergStorage} \psi = \psi_{\text{el}} (\mathbf C {\mathbf C_{\text{i}}}^{-1}) + \psi_{\text{kin 1}}(\mathbf C_{\text{i}} {\mathbf C_{\text{1i}}}^{-1}) + \psi_{\text{kin 2}}(\mathbf C_{\text{i}} {\mathbf C_{\text{2i}}}^{-1}) + \psi_{\text{iso}}(s-s_d). \end{equation} Here, $\psi_{\text{el}}$ stands for the energy storage due to macroscopic elastic deformations, $\psi_{\text{kin 1}}$ and $\psi_{\text{kin 2}}$ represent two different storage mechanisms associated with the kinematic hardening, and $\psi_{\text{iso}}$ is the energy storage related to the isotropic hardening.\footnote{Some additional energy-storage terms, which are not related to any hardening mechanism, can be added to the right-hand side of \eqref{EnergStorage}. Such terms can play an important role in thermoplasticity \cite{ShutovThermal}.} The functions $\psi_{\text{el}}$, $\psi_{\text{kin 1}}$ and $\psi_{\text{kin 2}}$ are assumed to be isotropic. Let $\tilde{\mathbf T}$ and $\tilde{\mathbf X}$ denote, respectively, the 2nd Piola-Kirchhoff stress tensor and the total backstress tensor operating on the reference configuration. The backstress describes the translation of the yield surface in the stress space, and it will be decomposed into two partial backstresses $\tilde{\mathbf X}_{1}$ and $\tilde{\mathbf X}_{2}$. The isotropic expansion of the yield surface is captured by the scalar quantity $R$. The rate-dependent overstress $f$ is introduced to account for the viscous effects. In order to formulate the system of constitutive equations, consider a local initial value problem. The local deformation history ${\mathbf C}(t)$ is assumed to be known in the time interval $t \in [0,T]$.
The corresponding stress response is governed by the following system of ordinary differential and algebraic equations \begin{equation}\label{prob1} \dot{\mathbf C}_{\text{i}} = 2 \frac{\displaystyle \lambda_{\text{i}}}{\displaystyle \mathfrak{F}} \big( \mathbf C \tilde{\mathbf T} - \mathbf C_{\text{i}} \tilde{\mathbf X} \big)^{\text{D}} \mathbf C_{\text{i}}, \quad \mathbf C_{\text{i}}|_{t=0} = \mathbf C_{\text{i}}^0, \ \det \mathbf C_{\text{i}}^0 =1, \ \mathbf C_{\text{i}}^0 = (\mathbf C_{\text{i}}^0)^{\text{T}} > 0, \end{equation} \begin{equation}\label{prob21} \dot{\mathbf C}_{\text{1i}} = 2 \lambda_{\text{i}} \varkappa_1 \big( \mathbf C_{\text{i}} \tilde{\mathbf X}_{1} \big)^{\text{D}} \mathbf C_{\text{1i}}, \quad \mathbf C_{\text{1i}}|_{t=0} = \mathbf C_{\text{1i}}^0, \ \det \mathbf C_{\text{1i}}^0 =1, \ \mathbf C_{\text{1i}}^0 = (\mathbf C_{\text{1i}}^0)^{\text{T}} > 0, \end{equation} \begin{equation}\label{prob22} \dot{\mathbf C}_{\text{2i}} = 2 \lambda_{\text{i}} \varkappa_2 \big( \mathbf C_{\text{i}} \tilde{\mathbf X}_{2} \big)^{\text{D}} \mathbf C_{\text{2i}}, \quad \mathbf C_{\text{2i}}|_{t=0} = \mathbf C_{\text{2i}}^0, \ \det \mathbf C_{\text{2i}}^0 =1, \ \mathbf C_{\text{2i}}^0 = (\mathbf C_{\text{2i}}^0)^{\text{T}} > 0, \end{equation} \begin{equation}\label{prob3} \dot{s}= \sqrt{\frac{\displaystyle 2}{\displaystyle 3}} \lambda_{\text{i}}, \quad \dot{s}_{\text{d}}= \frac{\displaystyle \beta}{\displaystyle \gamma} \dot{s} R, \quad s|_{t=0} = s^0, \ s_{\text{d}}|_{t=0} = s_{\text{d}}^0, \end{equation} \begin{equation}\label{prob4} \tilde{\mathbf T} = 2 \rho_{\scriptscriptstyle \text{R}} \frac{\displaystyle \partial \psi_{\text{el}} (\mathbf C {\mathbf C_{\text{i}}}^{-1})} {\displaystyle \partial \mathbf{C}}\big|_{\mathbf C_{\text{i}} = \text{const}}, \quad \tilde{\mathbf X} = \tilde{\mathbf X}_{1} + \tilde{\mathbf X}_{2}, \end{equation} \begin{equation}\label{prob42} \tilde{\mathbf X}_{1} = 2 \rho_{\scriptscriptstyle \text{R}} \frac{\displaystyle \partial
\psi_{\text{kin 1}}(\mathbf C_{\text{i}} {\mathbf C_{\text{1i}}}^{-1})} {\displaystyle \partial \mathbf C_{\text{i}}}\big|_{\mathbf C_{\text{1i}} = \text{const}}, \quad \tilde{\mathbf X}_{2} = 2 \rho_{\scriptscriptstyle \text{R}} \frac{\displaystyle \partial \psi_{\text{kin 2}}(\mathbf C_{\text{i}} {\mathbf C_{\text{2i}}}^{-1})} {\displaystyle \partial \mathbf C_{\text{i}}}\big|_{\mathbf C_{\text{2i}} = \text{const}}, \end{equation} \begin{equation}\label{prob5} R = \rho_{\scriptscriptstyle \text{R}} \frac{\displaystyle \partial \psi_{\text{iso}}(s-s_d)}{\displaystyle \partial s}, \end{equation} \begin{equation}\label{prob6} \lambda_{\text{i}}= \frac{\displaystyle 1}{\displaystyle \eta}\Big\langle \frac{\displaystyle 1}{\displaystyle f_0} f \Big\rangle^{m}, \quad f= \mathfrak{F}- \sqrt{\frac{2}{3}} \big[ K + R \big], \quad \mathfrak{F}= \sqrt{\text{tr} \big[ \big( \mathbf C \tilde{\mathbf T} - \mathbf C_{\text{i}} \tilde{\mathbf X} \big)^{\text{D}} \big]^2 }. \end{equation} Here, $\rho_{\scriptscriptstyle \text{R}} > 0$, $\varkappa_1 \geq 0$, $\varkappa_2 \geq 0$, $\beta \geq 0$, $\gamma \in \mathbb{R}$, $\eta \geq 0$, $m \geq 1$, $K > 0$ are constant material parameters; the real-valued constitutive functions $\psi_{\text{el}}$, $\psi_{\text{kin 1}}$, $\psi_{\text{kin 2}}$, $\psi_{\text{iso}}$ are assumed to be known. The constant $f_0 = 1$ MPa is adopted to obtain a non-dimensional term in the bracket; thus, $f_0$ is not a material parameter. The symbol $\lambda_{\text{i}}(t)$ stands for the inelastic multiplier; $\mathfrak{F}(t)$ denotes the norm of the driving force; the superposed dot denotes the material time derivative: $\frac{d}{d t} \mathbf A = \dot{\mathbf A}$; $(\cdot)^{\text{D}}$ is the deviatoric part of a second-rank tensor; $\langle x \rangle := \max (x,0)$ is the Macaulay bracket.
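For illustration, the viscous flow rule \eqref{prob6} can be evaluated directly once the stress-like quantities are known; the following numpy sketch (the function name is ours, and the tensor arguments are assumed to be given) computes the norm of the driving force, the overstress, and the inelastic multiplier.

```python
import numpy as np

def dev(A):
    """Deviatoric part: A^D = A - (tr A / 3) * 1."""
    return A - (np.trace(A) / 3.0) * np.eye(3)

def inelastic_multiplier(C, T, Ci, X, R, K, eta, m, f0=1.0):
    """Norm of the driving force, rate-dependent overstress and inelastic
    multiplier according to the Perzyna-type flow rule stated above."""
    drive = dev(C @ T - Ci @ X)                 # (C T~ - C_i X~)^D
    F = np.sqrt(np.trace(drive @ drive))        # norm of the driving force
    f = F - np.sqrt(2.0 / 3.0) * (K + R)        # rate-dependent overstress
    lam = (max(f / f0, 0.0) ** m) / eta         # Macaulay bracket <x> = max(x, 0)
    return F, f, lam
```

The Macaulay bracket guarantees $\lambda_{\text{i}} = 0$ inside the elastic domain ($f \leq 0$).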
For $\psi_{\text{iso}}$ we assume \begin{equation}\label{isotrAssum} \rho_{\scriptscriptstyle \text{R}} \psi_{\text{iso}}(s-s_{\text{d}}) = \frac{\gamma}{2} (s - s_{\text{d}})^2, \end{equation} which yields \begin{equation}\label{isotrAssum2} R = \gamma (s - s_{\text{d}}). \end{equation} Note that this restrictive assumption for $\psi_{\text{iso}}$ is not essential for the efficient time stepping scheme, which will be presented in the following. For the remaining part of the energy storage we adopt potentials of a compressible neo-Hookean material \begin{equation}\label{spec11} \rho_{\scriptscriptstyle \text{R}} \psi_{\text{el}}(\mathbf{C} \mathbf{C}_{\text{i}}^{-1})= \frac{k}{50}\big( (\text{det} \mathbf{C} \mathbf{C}_{\text{i}}^{-1})^{5/2} + (\text{det} \mathbf{C} \mathbf{C}_{\text{i}}^{-1})^{-5/2} -2 \big)+ \frac{\mu}{2} \big( \text{tr} \overline{\mathbf{C} \mathbf{C}_{\text{i}}^{-1}} - 3 \big), \end{equation} \begin{equation}\label{spec12} \rho_{\scriptscriptstyle \text{R}} \psi_{\text{kin 1}}(\mathbf{C}_{\text{i}} \mathbf{C}_{\text{1i}}^{-1})= \frac{c_1}{4}\big( \text{tr} \overline{\mathbf{C}_{\text{i}} \mathbf{C}_{\text{1i}}^{-1}} - 3 \big), \quad \rho_{\scriptscriptstyle \text{R}} \psi_{\text{kin 2}}(\mathbf{C}_{\text{i}} \mathbf{C}_{\text{2i}}^{-1})= \frac{c_2}{4}\big( \text{tr} \overline{\mathbf{C}_{\text{i}} \mathbf{C}_{\text{2i}}^{-1}} - 3 \big), \end{equation} where $k >0$, $\mu > 0$, $c_1 > 0$, and $c_2 > 0$ are material parameters. \textbf{Remark 1.} The volumetric part of the strain energy function \eqref{spec11} was proposed by Hartmann and Neff in \cite{HartmannNeff}. It has an important advantage over many alternatives since it provides a convex function of $\det \mathbf{F}$. Moreover, the energy and the hydrostatic stress component tend to infinity as $\det \mathbf{F} \rightarrow 0$ or $\det \mathbf{F} \rightarrow \infty$. Thus, ansatz \eqref{spec11} predicts a physically reasonable volumetric stress response even for finite elastic volume changes.
Note that the algorithm presented in the current study can be applied irrespective of the specific choice of the volumetric strain energy function. $\Box$ Using \eqref{spec11}, \eqref{spec12} and the incompressibility condition $\text{det} \mathbf C_{\text{i}} =1$, which will be discussed in the following, relations \eqref{prob4} and \eqref{prob42} are rewritten as \begin{equation}\label{trans41} \tilde{\mathbf T} = \frac{\displaystyle k}{\displaystyle 10} \ \big( (\text{det} \mathbf C)^{5/2}-(\text{det} \mathbf C)^{-5/2} \big) \ \mathbf C^{-1} + \mu \ \mathbf C^{-1} (\overline{\mathbf C} \mathbf C_{\text{i}}^{-1})^{\text{D}}, \end{equation} \begin{equation}\label{trans42} \tilde{\mathbf X}_1 = \frac{c_1}{2} \ \mathbf C_{\text{i}}^{-1} (\mathbf C_{\text{i}} \mathbf C_{\text{1i}}^{-1})^{\text{D}}, \quad \tilde{\mathbf X}_2 = \frac{c_2}{2} \ \mathbf C_{\text{i}}^{-1} (\mathbf C_{\text{i}} \mathbf C_{\text{2i}}^{-1})^{\text{D}}. \end{equation} Substituting these results into \eqref{prob1} -- \eqref{prob22}, we obtain the reduced evolution equations \begin{equation}\label{ReducedEvolut} \dot{\mathbf C}_{\text{i}} = 2 \frac{\displaystyle \lambda_{\text{i}}}{\displaystyle \mathfrak{F}} \big( \mu \ (\overline{\mathbf C} \mathbf C_{\text{i}}^{-1})^{\text{D}} - \frac{c_1}{2} \ (\mathbf C_{\text{i}} \mathbf C_{\text{1i}}^{-1})^{\text{D}} - \frac{c_2}{2} \ (\mathbf C_{\text{i}} \mathbf C_{\text{2i}}^{-1})^{\text{D}} \big) \mathbf C_{\text{i}}, \end{equation} \begin{equation}\label{ReducedEvolut2} \dot{\mathbf C}_{\text{1i}} = \lambda_{\text{i}} \ \varkappa_1 \ c_1 \ (\mathbf C_{\text{i}} \mathbf C_{\text{1i}}^{-1})^{\text{D}} \mathbf C_{\text{1i}}, \quad \dot{\mathbf C}_{\text{2i}} = \lambda_{\text{i}} \ \varkappa_2 \ c_2 \ (\mathbf C_{\text{i}} \mathbf C_{\text{2i}}^{-1})^{\text{D}} \mathbf C_{\text{2i}}.
\end{equation} \subsection{Properties of the material model} The material model is thermodynamically consistent.\footnote{ The thermodynamic consistency of the model was proved in \cite{ShutovKrVisc} for the special case of only one backstress. This proof can be easily generalized to cover numerous backstress tensors.} It combines the nonlinear hardening of Armstrong-Frederick type and the isotropic hardening of Voce type. The exact solution of \eqref{prob1} -- \eqref{prob6} exhibits the following geometric property \begin{equation}\label{geopr0} \mathbf{C}_{\text{i}}, \mathbf{C}_{\text{1i}}, \mathbf{C}_{\text{2i}} \in M, \quad \text{where} \ M := \big\{ \mathbf B \in Sym: \text{det} \mathbf B =1 \big\}. \end{equation} Thus, \eqref{prob1} -- \eqref{prob6} is a system of differential and algebraic equations on the manifold $M$. The restriction $\text{det} (\mathbf{C}_{\text{i}}) =1$ corresponds to the incompressibility of the inelastic flow, typically assumed for metals; the conditions $\text{det} (\mathbf{C}_{\text{1i}}) =1$ and $\text{det} (\mathbf{C}_{\text{2i}}) =1$ reflect the incompressibility on the substructural level, which is a modeling assumption. As was shown in \cite{ShutovKrStab}, these incompressibility conditions should be exactly satisfied by the numerical algorithm in order to suppress the accumulation of the integration error. Another important restriction is as follows: For the exact solution, the tensor-valued variables $\mathbf{C}_{\text{i}}$, $\mathbf{C}_{\text{1i}}$ and $\mathbf{C}_{\text{2i}}$ are positive definite. From the physical standpoint, too, these quantities must remain positive definite, since they represent inelastic metric tensors of Cauchy-Green type. One important property of the material model, which will play a crucial role in the current study, is the \emph{weak invariance} under arbitrary isochoric changes of the reference configuration.
Generally speaking, the weak invariance corresponds to a generalized (weak) symmetry of the material. Like any other symmetry property, the weak invariance should be inherited by the discretized constitutive equations. For the general definition of the weak invariance, the reader is referred to \cite{ShutovWeakInvar}. For the concrete material model analyzed in the current study, the weak invariance is formulated as follows. Let $\textbf{F}_0$ be an arbitrary second-rank tensor such that $\text{det}(\textbf{F}_0) = 1$. If the prescribed local loading $\textbf{C}(t), \ t \in [0,T]$ is replaced by a new loading history $\textbf{C}^{\text{new}}(t) := \textbf{F}^{-\text{T}}_0 \ \textbf{C}(t) \ \textbf{F}^{-1}_0$ and the initial conditions are transformed according to \begin{equation*}\label{IniCond} \textbf{C}_{\text{i}}^{\text{new}}|_{t=0} = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{i}}|_{t=0} \ \textbf{F}^{-1}_0, \quad \textbf{C}_{\text{1i}}^{\text{new}}|_{t=0} = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{1i}}|_{t=0} \ \textbf{F}^{-1}_0, \quad \textbf{C}_{\text{2i}}^{\text{new}}|_{t=0} = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{2i}}|_{t=0} \ \textbf{F}^{-1}_0, \end{equation*} then the second Piola-Kirchhoff tensor, predicted by the model, is transformed according to \begin{equation}\label{WeakInvariance_Reference} \widetilde{\mathbf{T}}^{\text{new}} (t) = \mathbf{F}_0 \ \widetilde{\mathbf{T}} (t) \ \mathbf{F}_0^{\text{T}} \quad \text{for} \ t \in [0,T]. \end{equation} Moreover, if the history of the deformation gradient is replaced by $\textbf{F}^{\text{new}}(t) := \textbf{F}(t) \ \textbf{F}^{-1}_0$, then the same Cauchy stresses $\textbf{T}$ will be predicted by the model \begin{equation}\label{WeakInvariance} \textbf{T}^{\text{new}}(t) = \textbf{T}(t) \quad \text{for} \ t \in [0,T]. \end{equation} For this invariance property, it is necessary and sufficient that the internal variables are transformed according to the following transformation rules (cf.
\cite{ShutovPfeIhl}) \begin{equation}\label{WeakInvariance_InterVariables} \textbf{C}_{\text{i}}^{\text{new}}(t) = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{i}}(t) \ \textbf{F}^{-1}_0, \quad \textbf{C}_{\text{1i}}^{\text{new}}(t) = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{1i}}(t) \ \textbf{F}^{-1}_0, \quad \textbf{C}_{\text{2i}}^{\text{new}}(t) = \textbf{F}^{-\text{T}}_0 \ \textbf{C}_{\text{2i}}(t) \ \textbf{F}^{-1}_0. \end{equation} In metals, the plastic flow may exhibit a fluid-like behavior in the sense that there is no preferred configuration. This observation motivates the notion of the weak invariance from the physical standpoint. For a more detailed discussion, the reader is referred to \cite{ShutovWeakInvar}. Some of the commonly used approaches to finite strain plasticity are compatible with the weak invariance requirement, but some are not \cite{ShutovWeakInvar}. The weak invariance brings the advantage that the constitutive equations are independent of the choice of the reference configuration. In particular, this property simplifies the analysis of multi-stage forming operations \cite{ShutovPfeIhl}. In the current study, the weak invariance is utilized as a guide in constructing numerical algorithms. \section{Implicit time stepping} Let us consider a time step $t_n \mapsto t_{n+1}$ with $\Delta t := t_{n+1} - t_n >0$. Within a local setting, we may suppose that ${}^{n+1} \mathbf C := \mathbf C (t_{n+1})$ and ${}^{n} \mathbf C := \mathbf C (t_{n})$ are known and the internal variables are given at $t_n$ by ${}^{n} \mathbf C_{\text{i}}$, ${}^{n} \mathbf C_{\text{1i}}$, ${}^{n} \mathbf C_{\text{2i}}$, ${}^{n} s$, ${}^{n} s_{\text{d}}$. In order to compute the stress tensor ${}^{n+1} \tilde{\mathbf T}$ we need to update the internal variables. For what follows, it is useful to introduce the incremental inelastic strain \begin{equation}\label{ineinc} \xi := \Delta t \ {}^{n+1} \lambda_{\text{i}} \geq 0, \end{equation} which is a non-dimensional quantity.
For most practical applications, we may assume that $\xi \leq 0.2$.\footnote{This upper bound is based on experience (see also \cite{Hartmann}). Since the material response is highly path dependent, one should avoid plastic increments larger than 20\%.} A new implicit numerical method will be presented in this section which is based on some closed-form solutions for elementary decoupled problems. For this new method, the time stepping procedure will be reduced to the solution of a scalar equation with respect to the unknown $\xi$. \subsection{Update of $s$ and $s_\text{d}$ for a given $\xi$} Integrating $\eqref{prob3}_1$ and $\eqref{prob3}_2$ from $t_n$ to $t_{n+1}$ and taking \eqref{isotrAssum2} into account, we obtain the isotropic hardening ${}^{n+1} R$, the arc-length ${}^{n+1} s$, and its dissipative part ${}^{n+1} s_{\text{d}}$ as explicit functions of $\xi$ \begin{equation}\label{updateR} {}^{n+1} R (\xi) = \frac{{}^{\text t} R + \sqrt{2/3} \gamma \xi}{1 + \sqrt{2/3} \beta \xi}, \quad \text{where} \quad {}^{\text t} R := \gamma({}^{n} s - {}^{n} s_{\text{d}}), \end{equation} \begin{equation}\label{updateSSd} {}^{n+1} s (\xi) = {}^{n} s + \sqrt{2/3} \ \xi, \quad {}^{n+1} s_{\text{d}} (\xi) = {}^{n} s_{\text{d}} + \frac{\beta}{\gamma} \ \sqrt{2/3} \ \xi \ {}^{n+1} R (\xi). \end{equation} \textbf{Remark 2.} In the case of constant $\gamma$ and $\beta$, a closed-form solution can be derived for the system \eqref{prob3}, \eqref{isotrAssum2}, which can also be used for the time integration. $\Box$ \subsection{Update of $\textbf{C}_{\text{i}}$ for the known ${}^{n+1}\textbf{C}_{\text{1i}}$, ${}^{n+1}\textbf{C}_{\text{2i}}$, $\xi$} Assume for a moment that ${}^{n+1}\textbf{C}_{\text{1i}}$, ${}^{n+1}\textbf{C}_{\text{2i}}$, and $\xi > 0$ are known. In this subsection, we update $\textbf{C}_{\text{i}}$.
As a first step, we express the norm of the driving force as a function of $\xi$: Combining relations $\eqref{prob6}_1$ and $\eqref{prob6}_2$, we arrive at \begin{equation}\label{F2} \mathfrak{F} = \mathfrak{F}_2(\xi) := f_0 \big(\frac{\eta \xi}{\Delta t}\big)^{1/m} + \sqrt{2/3} \big( K + {}^{n+1}R(\xi) \big). \end{equation} Next, we rewrite the evolution equation \eqref{ReducedEvolut} as follows \begin{equation}\label{ReducedEvolutCi2} \dot{\mathbf C}_{\text{i}} = 2 \frac{\displaystyle \lambda_{\text{i}}}{\displaystyle \mathfrak{F}} \big( \mu \ \overline{\mathbf C} - \frac{c_1}{2} \ \mathbf C_{\text{i}} \mathbf C_{\text{1i}}^{-1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ \mathbf C_{\text{i}} \mathbf C_{\text{2i}}^{-1} \mathbf C_{\text{i}} \big) + \beta \mathbf C_{\text{i}}, \end{equation} where \begin{equation}\label{EqForBeta} \beta = -\frac{2}{3} \frac{\displaystyle \lambda_{\text{i}}}{\displaystyle \mathfrak{F}}\big( \mu \ \text{tr}(\overline{\mathbf C} \mathbf C_{\text{i}}^{-1}) - \frac{c_1}{2} \ \text{tr}(\mathbf C_{\text{i}} \mathbf C_{\text{1i}}^{-1}) - \frac{c_2}{2} \ \text{tr}(\mathbf C_{\text{i}} \mathbf C_{\text{2i}}^{-1})\big) \in \mathbb{R}. \end{equation} Applying the Euler Backward method (EBM) to \eqref{ReducedEvolutCi2} and replacing $\mathfrak{F}$ by $\mathfrak{F}_2(\xi)$, we obtain \begin{equation}\label{ReducedEvolut22} {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C} - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} \big) + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}}, \end{equation} where the scalar $\beta$ is unknown since it depends on the unknown ${}^{n+1}\textbf{C}_{\text{i}}$.
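At this point, all scalar quantities of the update are explicit in $\xi$. The following numpy sketch (the function name is ours) collects the closed-form updates \eqref{updateR}, \eqref{updateSSd} and the driving-force norm \eqref{F2}:

```python
import numpy as np

def scalar_updates(xi, s_n, sd_n, dt, gamma, beta, K, eta, m, f0=1.0):
    """Closed-form scalar updates for a given xi: isotropic hardening R,
    arc length s, its dissipative part s_d, and the driving-force norm F2."""
    sq23 = np.sqrt(2.0 / 3.0)
    R_trial = gamma * (s_n - sd_n)                                # trial value of R
    R = (R_trial + sq23 * gamma * xi) / (1.0 + sq23 * beta * xi)  # isotropic hardening
    s = s_n + sq23 * xi                                           # Odqvist parameter
    sd = sd_n + (beta / gamma) * sq23 * xi * R                    # dissipative part
    F2 = f0 * (eta * xi / dt) ** (1.0 / m) + sq23 * (K + R)       # driving-force norm
    return R, s, sd, F2
```

For $\xi = 0$, the formulas reduce to the values at $t_n$; moreover, the consistency relation $R = \gamma(s - s_{\text{d}})$ holds for any $\xi$, which provides a simple check of the implementation.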
Unfortunately, due to the additive nature of the classical EBM, this method violates the incompressibility restriction $\text{det} (\mathbf C_{\text{i}}) =1$. In order to enforce the exact incompressibility, we may modify the right-hand side of \eqref{ReducedEvolut22} by introducing an additional (correction) term $\varepsilon \mathbf{P}$, thus arriving at \begin{equation}\label{ReducedEvolut22mod} {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C} - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} \big) + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}} + \varepsilon \mathbf{P}. \end{equation} Here, $\varepsilon$ is a suitable scalar and $\mathbf{P}$ is a suitable second-rank tensor.\footnote{Other modifications of the EBM used to enforce the incompressibility were presented in \cite{Helm2, Vladimirov2008}.} The scalar $\varepsilon$ is determined such that $\text{det} ({}^{n+1} \mathbf C_{\text{i}}) =1$. In some publications, the authors suggest the correction of diagonal terms in ${}^{n+1} \mathbf C_{\text{i}}$ to enforce the incompressibility. Such a correction corresponds to $\mathbf{P} =\mathbf{1}$, but this choice of $\mathbf{P}$ violates the weak invariance of the solution (see Appendix A). In the current study we put $\mathbf{P} = {}^{n+1} \mathbf C_{\text{i}}$. Such a choice is \emph{compatible with the weak invariance} (see Appendix A).
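To make the difference between the two choices of $\mathbf{P}$ tangible, the following numpy sketch isolates the correction step (a deliberate simplification: the correction is applied to a given tensor rather than within the coupled equation \eqref{ReducedEvolut22mod}; all names are ours). A correction proportional to the corrected tensor itself amounts to taking the unimodular part and commutes with the transformation rule \eqref{WeakInvariance_InterVariables}; the diagonal correction $\mathbf{P} = \mathbf{1}$ does not.

```python
import numpy as np

def unimod(A):
    """Correction with P proportional to the tensor itself: unimodular part."""
    return np.linalg.det(A) ** (-1.0 / 3.0) * A

def correct_diag(A):
    """Correction with P = 1: find eps with det(A + eps*1) = 1, i.e. the real
    root (closest to zero) of the cubic eps^3 + I1*eps^2 + I2*eps + (I3 - 1)."""
    I1 = np.trace(A)
    I2 = 0.5 * (I1 ** 2 - np.trace(A @ A))
    I3 = np.linalg.det(A)
    roots = np.roots([1.0, I1, I2, I3 - 1.0])
    eps = min((r.real for r in roots if abs(r.imag) < 1e-8), key=abs)
    return A + eps * np.eye(3)

X = np.array([[1.25, 0.10, 0.00],     # a nearly unimodular SPD tensor (det != 1)
              [0.10, 0.85, 0.05],
              [0.00, 0.05, 1.00]])
F0 = np.array([[1.0, 0.3, 0.0],       # isochoric change of reference, det F0 = 1
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
F0inv = np.linalg.inv(F0)
transform = lambda A: F0inv.T @ A @ F0inv   # transformation rule for internal variables

# P ~ C_i: correction commutes with the transformation (weak invariance preserved)
err_Ci = np.linalg.norm(unimod(transform(X)) - transform(unimod(X)))
# P = 1: correction does not commute with the transformation (weak invariance lost)
err_diag = np.linalg.norm(correct_diag(transform(X)) - transform(correct_diag(X)))
```

In this isolated setting, the first discrepancy vanishes to machine precision, while the second does not; this mirrors the statement discussed in Appendix A.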
After some algebraic manipulations we obtain from \eqref{ReducedEvolut22mod} \begin{equation}\label{ReducedEvolut24} z \ {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C} - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} {}^{n+1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} \big), \end{equation} where the scalar unknown $z := 1 - \beta \Delta t - \varepsilon$ is determined from the incompressibility condition $\text{det} ({}^{n+1} \mathbf C_{\text{i}}) =1$. Next, we introduce the following abbreviations \begin{equation}\label{AbbrevPhiC} c:= c_1 + c_2, \quad \mathbf{\Phi} := \frac{\displaystyle c_1}{c} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} + \frac{\displaystyle c_2}{c} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1}. \end{equation} Using these, we rewrite \eqref{ReducedEvolut24} as follows \begin{equation}\label{ReducedEvolut3} z \ {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \mu \ {}^{n+1} \overline{\mathbf C} - \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \ {}^{n+1} \mathbf C_{\text{i}} \ \mathbf{\Phi} \ {}^{n+1} \mathbf C_{\text{i}}. \end{equation} Further, multiplying both sides of \eqref{ReducedEvolut3} with $\mathbf{\Phi}^{1/2}$ from the left and right, we obtain \begin{equation}\label{ReducedEvolut4} z \ \mathbf{\Phi}^{1/2} \ {}^{n+1} \mathbf C_{\text{i}} \ \mathbf{\Phi}^{1/2} = \mathbf{\Phi}^{1/2} \big[ {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \mu \ {}^{n+1} \overline{\mathbf C} \ \big] \mathbf{\Phi}^{1/2} -\frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \ \mathbf{\Phi}^{1/2} \ {}^{n+1} \mathbf C_{\text{i}} \ \mathbf{\Phi} \ {}^{n+1} \mathbf C_{\text{i}} \ \mathbf{\Phi}^{1/2}. 
\end{equation} Now let us introduce one more abbreviation \begin{equation}\label{Abbreviation} \mathbf A : = \mathbf{\Phi}^{1/2} \big[ {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \mu \ {}^{n+1} \overline{\mathbf C} \ \big] \mathbf{\Phi}^{1/2}. \end{equation} The tensor $\mathbf A$ is known. In order to simplify the system of equations, we rewrite it with respect to a new unknown variable \begin{equation}\label{NewUnknown} \mathbf Y := \mathbf{\Phi}^{1/2} \ {}^{n+1} \mathbf C_{\text{i}} \ \mathbf{\Phi}^{1/2}. \end{equation} Substituting relations \eqref{Abbreviation} and \eqref{NewUnknown} into \eqref{ReducedEvolut4}, we obtain a quadratic equation with respect to $\mathbf Y$ \begin{equation}\label{QuadrEquation} z \ \mathbf Y = \mathbf A - \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \mathbf Y^2. \end{equation} Recall that, for physical reasons, the tensor ${}^{n+1} \mathbf C_{\text{i}}$ must be positive definite. Therefore, the physically reasonable $\mathbf Y$ is also positive definite. Thus, the correct solution of \eqref{QuadrEquation} is given by \begin{equation}\label{ReducedEvolut5} \mathbf Y = \frac{\displaystyle \mathfrak{F}_2}{\displaystyle 2 \xi c} \Big[ -z \textbf{1} + \big(z^2 \textbf{1} + 4 \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \textbf{A} \big)^{1/2} \Big]. \end{equation} Unfortunately, this relation is prone to round-off errors if evaluated step-by-step (see Appendix B). Thus, a reliable method should be used to compute the matrix function on the right-hand side of \eqref{ReducedEvolut5}. Using the algebraic identity $(\mathbf X^{1/2} -\mathbf 1) (\mathbf X^{1/2} + \mathbf 1) = \mathbf X - \mathbf 1$, the closed-form solution \eqref{ReducedEvolut5} can be recast in the form \begin{equation}\label{ReducedEvolut5notCont} \mathbf Y = 2 \textbf{A} \Big[ \Big( z^2 \textbf{1} + \frac{4 \xi c}{\mathfrak{F}_2} \textbf{A} \Big)^{1/2} + z \textbf{1} \Big]^{-1}. 
\end{equation} In exact arithmetic, this formula is equivalent to \eqref{ReducedEvolut5}, but it is more advantageous from the numerical standpoint (see Appendix B). Now it remains to identify the unknown parameter $z$. Recall that $z$ can be estimated using the incompressibility condition $\det ({}^{n+1} \mathbf C_{\text{i}}) =1$, which is equivalent to $\det (\textbf{Y}) = \det (\mathbf{\Phi})$. As shown in Appendix C, a simple formula can be obtained using the perturbation method for small $\frac{\displaystyle c \ \xi}{\displaystyle \mathfrak{F}_2}$: \begin{equation}\label{EstimOfz} z = z_0 - \frac{\text{tr} \mathbf A}{3 z_0} \frac{c \ \xi}{\mathfrak{F}_2} + O\Big(\Big(\frac{c \ \xi}{\mathfrak{F}_2}\Big)^2\Big), \quad \text{where} \ z_0 := \Big(\frac{\det \mathbf A}{\det{\mathbf \Phi}}\Big)^{1/3}. \end{equation} Neglecting the terms $O\Big(\Big(\frac{c \ \xi}{\mathfrak{F}_2}\Big)^2\Big)$, a reliable estimate is obtained, which is exact for $c=0$. As will be shown in the following sections, this formula allows us to obtain an accurate and robust local numerical procedure even for finite values of $\frac{\displaystyle c \ \xi}{\displaystyle \mathfrak{F}_2}$. After $z$ is evaluated, $\mathbf{Y}$ is computed using \eqref{ReducedEvolut5notCont} and $\mathbf C_{\text{i}}$ is updated through \begin{equation}\label{UpdateCi} {}^{n+1} \mathbf C^*_{\text{i}} := \mathbf{\Phi}^{-1/2} \ \mathbf Y \ \mathbf{\Phi}^{-1/2}. \end{equation} Note that this solution exactly preserves the weak invariance since it is the solution for system \eqref{ReducedEvolut24}, which itself is weakly invariant. Next, since the variable $z$ is not computed exactly, the incompressibility condition can be violated. Thus, to enforce the incompressibility, a final correction step is needed \begin{equation}\label{UpdateCi_correct} {}^{n+1} \mathbf C_{\text{i}} = \overline{{}^{n+1} \mathbf C^*_{\text{i}}}.
\end{equation} The procedure described in this subsection yields ${}^{n+1} \mathbf C_{\text{i}}$ as a function of ${}^{n+1} \mathbf C_{\text{1i}}$, ${}^{n+1} \mathbf C_{\text{2i}}$, and $\xi$: \begin{equation}\label{UpdateCi2} {}^{n+1} \mathbf C_{\text{i}} = \mathfrak{C}_{\text{i}} ({}^{n+1} \mathbf C_{\text{1i}}, {}^{n+1} \mathbf C_{\text{2i}}, \xi). \end{equation} \subsection{Update of $\textbf{C}_{\text{1i}}$ and $\textbf{C}_{\text{2i}}$ for the known ${}^{n+1} \textbf{C}_{\text{i}}$ and $\xi$} Let us update $\textbf{C}_{\text{1i}}$ using the evolution equation $\eqref{ReducedEvolut2}_1$. Its Euler Backward discretization yields \begin{equation}\label{UpdateCii} {}^{n+1} \mathbf C_{\text{1i}} = {}^{n} \mathbf C_{\text{1i}} + \xi \ \varkappa_1 \ c_1 \ ({}^{n+1} \mathbf C_{\text{i}} \ ({}^{n+1}\mathbf C_{\text{1i}})^{-1})^{\text{D}} \ {}^{n+1} \mathbf C_{\text{1i}}. \end{equation} Next, we rewrite this algebraic equation as follows \begin{equation}\label{UpdateCii2} {}^{n+1} \mathbf C_{\text{1i}} = {}^{n} \mathbf C_{\text{1i}} + \xi \ \varkappa_1 \ c_1 \ {}^{n+1} \mathbf C_{\text{i}} + \tilde{\beta} \ {}^{n+1}\mathbf C_{\text{1i}}, \end{equation} where $\tilde{\beta} \in \mathbb{R}$ is unknown, since it depends on ${}^{n+1}\mathbf C_{\text{1i}}$. In order to enforce the incompressibility condition $\det({}^{n+1}\mathbf C_{\text{1i}})=1$, we modify the right-hand side by introducing an additional term $\tilde{\varepsilon} \ {}^{n+1}\mathbf C_{\text{1i}}$: \begin{equation}\label{UpdateCii3} {}^{n+1} \mathbf C_{\text{1i}} = {}^{n} \mathbf C_{\text{1i}} + \xi \ \varkappa_1 \ c_1 \ {}^{n+1} \mathbf C_{\text{i}} + \tilde{\beta} \ {}^{n+1}\mathbf C_{\text{1i}} + \tilde{\varepsilon} \ {}^{n+1}\mathbf C_{\text{1i}}, \end{equation} where $\tilde{\varepsilon}$ is determined using the incompressibility condition. In analogy to the previous subsection, this additional term is chosen in such a way as not to spoil the weak invariance of the solution.
Rearranging the terms, we arrive at \begin{equation}\label{UpdateCii4} \tilde{z} \ {}^{n+1} \mathbf C_{\text{1i}} = {}^{n} \mathbf C_{\text{1i}} + \xi \ \varkappa_1 \ c_1 \ {}^{n+1} \mathbf C_{\text{i}}, \end{equation} where $\tilde{z}:=1 - \tilde{\beta} - \tilde{\varepsilon}$ is computed such that $\det({}^{n+1}\mathbf C_{\text{1i}})=1$. The solution of this problem takes a very simple form: \begin{equation}\label{UpdateCii5} {}^{n+1} \mathbf C_{\text{1i}} = \overline{{}^{n} \mathbf C_{\text{1i}} + \xi \ \varkappa_1 \ c_1 \ {}^{n+1} \mathbf C_{\text{i}}}. \end{equation} \textbf{Remark 3.} Interestingly, this update formula has exactly the same structure as the explicit update formula for the Maxwell fluid reported in \cite{ShutovLandgraf2013}. This similarity is due to the fact that, in the case of a constant $\xi$, the evolution equation $\eqref{ReducedEvolut2}_1$ coincides with the evolution equation for the Maxwell fluid. In contrast to the derivation in \cite{ShutovLandgraf2013}, the derivation in the current section exploits the notion of the weak invariance as a guideline. $\Box$ In exactly the same way, the following update formula is obtained for ${}^{n+1} \mathbf C_{\text{2i}}$: \begin{equation}\label{UpdateCii6} {}^{n+1} \mathbf C_{\text{2i}} = \overline{{}^{n} \mathbf C_{\text{2i}} + \xi \ \varkappa_2 \ c_2 \ {}^{n+1} \mathbf C_{\text{i}}}. \end{equation} In compact form, update formulas \eqref{UpdateCii5} and \eqref{UpdateCii6} are rewritten as \begin{equation}\label{UpdateCii7} {}^{n+1} \mathbf C_{\text{1i}} = \mathfrak{C}_{\text{1i}} ({}^{n+1} \mathbf C_{\text{i}}, \xi), \quad {}^{n+1} \mathbf C_{\text{2i}} = \mathfrak{C}_{\text{2i}} ({}^{n+1} \mathbf C_{\text{i}}, \xi).
\end{equation} \subsection{Overall procedure: partitioned Euler Backward method (PEBM)} Multiplying $\eqref{prob6}_1$ by $\Delta t$, we arrive at the following incremental consistency condition for finding $\xi$ \begin{equation}\label{Consistency} \xi \eta = \displaystyle \Delta t \Big\langle \frac{\displaystyle 1}{\displaystyle f_0} f \Big\rangle^{m}, \quad \text{where} \ f =\widetilde{f} ({}^{n+1} \mathbf C_{\text{i}}, {}^{n+1} \mathbf C_{\text{1i}}, {}^{n+1} \mathbf C_{\text{2i}}, \xi). \end{equation} Here, the overstress function $\widetilde{f} ({}^{*} \mathbf C_{\text{i}}, {}^{*} \mathbf C_{\text{1i}}, {}^{*} \mathbf C_{\text{2i}}, {}^{*}\xi)$ is determined from $\eqref{prob6}_2$ and $\eqref{prob6}_3$ as follows \begin{equation}\label{Overstress123} \widetilde{f} ({}^{*} \mathbf C_{\text{i}}, {}^{*} \mathbf C_{\text{1i}}, {}^{*} \mathbf C_{\text{2i}}, {}^{*}\xi):= \mathfrak{F}_1 ({}^{*} \mathbf C_{\text{i}}, {}^{*} \mathbf C_{\text{1i}}, {}^{*} \mathbf C_{\text{2i}}) - \sqrt{\frac{2}{3}} \big[ K + {}^{n+1} R ({}^{*}\xi) \big], \end{equation} \begin{equation}\label{Overstress124} \mathfrak{F}_1 ({}^{*} \mathbf C_{\text{i}}, {}^{*} \mathbf C_{\text{1i}}, {}^{*} \mathbf C_{\text{2i}}):= \sqrt{\text{tr} \big[ \big( {}^{n+1} \mathbf C \ {}^{*}\tilde{\mathbf T} - {}^{*}\mathbf C_{\text{i}} \ {}^{*}\tilde{\mathbf X} \big)^{2} \big] }, \end{equation} \begin{equation}\label{Overstress125} {}^{*}\tilde{\mathbf T} ({}^{*} \mathbf C_{\text{i}}) := \mu \ {}^{n+1}\mathbf C^{-1} (\overline{{}^{n+1} \mathbf C} \ {}^{*}\mathbf C_{\text{i}}^{-1})^{\text{D}}, \end{equation} \begin{equation}\label{Overstress126} {}^{*} \tilde{\mathbf X} ({}^{*} \mathbf C_{\text{i}}, {}^{*} \mathbf C_{\text{1i}}, {}^{*} \mathbf C_{\text{2i}}) := \frac{c_1}{2} \ {}^{*}\mathbf C_{\text{i}}^{-1} \ ( {}^{*}\mathbf C_{\text{i}} \ {}^{*}\mathbf C_{\text{1i}}^{-1})^{\text{D}} + \frac{c_2}{2} \ {}^{*}\mathbf C_{\text{i}}^{-1} \ ( {}^{*}\mathbf C_{\text{i}} \ {}^{*}\mathbf C_{\text{2i}}^{-1})^{\text{D}}.
\end{equation} The following predictor-corrector scheme is implemented in this study: \begin{itemize} \item[1] \textbf{Elastic predictor:} Evaluate the trial overstress for frozen internal variables \begin{equation}\label{OverstressTrial} {}^{\text{trial}} f:= \widetilde{f} ({}^{n} \mathbf C_{\text{i}}, {}^{n} \mathbf C_{\text{1i}}, {}^{n} \mathbf C_{\text{2i}}, 0). \end{equation} If ${}^{\text{trial}} f \leq 0$, then the current stress state lies in the elastic domain. Set $\xi =0$, ${}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}}$, ${}^{n+1} \mathbf C_{\text{1i}} = {}^{n} \mathbf C_{\text{1i}}$, ${}^{n+1} \mathbf C_{\text{2i}} = {}^{n} \mathbf C_{\text{2i}}$, ${}^{n+1} s = {}^{n} s$, ${}^{n+1} s_{\text{d}} = {}^{n} s_{\text{d}}$. The time step is thus complete. If ${}^{\text{trial}} f > 0$, then proceed to the plastic corrector step. \item[2] \textbf{Plastic corrector:} The corrector step consists of the following substeps. \begin{itemize} \item[2.1] The initial (rough) estimate of ${}^{n+1} \mathbf{C}_{\text{i}}$ is obtained from ${}^{n} \mathbf{C}_{\text{i}}$ by the same shift (push-forward operation) that brings $\overline{{}^{n} \mathbf{C}}$ to $\overline{{}^{n+1} \mathbf{C}}$. More precisely: \begin{equation}\label{DefineShift} {}^{\text{est}} \mathbf{C}_{\text{i}} \gets \mathbf{F}^{-\text{T}}_{\text{sh}} \ {}^{n} \mathbf{C}_{\text{i}} \ \mathbf{F}^{-1}_{\text{sh}}, \ \text{where} \ \mathbf{F}_{\text{sh}} := (\overline{{}^{n+1} \mathbf{C}^{-1} \ {}^{n} \mathbf{C}})^{1/2}. \end{equation} Next, estimate $\xi >0$ by solving the following scalar equation \begin{equation}\label{RoughXi} {}^{\text{est}}\xi = ({}^{\text{trial}} f - ({}^{\text{est}}\xi \eta/ \Delta t)^{1/m} )/(2 \mu).
\end{equation} \item[2.2] The internal variables ${}^{n+1} \mathbf{C}_{\text{1i}}$ and ${}^{n+1} \mathbf{C}_{\text{2i}}$ are estimated using the explicit update formulas \begin{equation}\label{RoughCii} {}^{\text{est}} \mathbf{C}_{\text{1i}} \gets \mathfrak{C}_{\text{1i}} ({}^{\text{est}} \mathbf C_{\text{i}}, {}^{\text{est}}\xi), \quad {}^{\text{est}} \mathbf{C}_{\text{2i}} \gets \mathfrak{C}_{\text{2i}} ({}^{\text{est}} \mathbf C_{\text{i}}, {}^{\text{est}}\xi). \end{equation} \item[2.3] Resolve the following incremental consistency condition with respect to $\xi$ \begin{equation}\label{Consistency2} \xi \eta = \displaystyle \Delta t \Big\langle \frac{\displaystyle 1}{\displaystyle f_0} \widetilde{f} (\mathfrak{C}_{\text{i}}({}^{\text{est}}\mathbf C_{\text{1i}}, {}^{\text{est}}\mathbf C_{\text{2i}}, \xi), {}^{\text{est}} \mathbf C_{\text{1i}}, {}^{\text{est}} \mathbf C_{\text{2i}}, \xi) \Big\rangle^{m}. \end{equation} In other words, a time step is performed with fixed $\mathbf C_{\text{1i}} \equiv {}^{\text{est}}\mathbf C_{\text{1i}}$ and $\mathbf C_{\text{2i}} \equiv {}^{\text{est}}\mathbf C_{\text{2i}}$. Let $\tilde{\xi}$ be the solution of \eqref{Consistency2}. Using it, compute \begin{equation}\label{Substep22} {}^{\text{est}} \xi \gets \tilde{\xi}, \quad {}^{\text{est}}\mathbf C_{\text{i}} \gets \mathfrak{C}_{\text{i}}({}^{\text{est}}\mathbf C_{\text{1i}}, {}^{\text{est}}\mathbf C_{\text{2i}}, \tilde{\xi}). \end{equation} \item[2.4] For better accuracy, repeat substeps 2.2, 2.3, 2.2, 2.3; then assign \begin{equation}\label{Substep24} \xi \gets {}^{\text{est}} \xi, \quad {}^{n+1} \mathbf C_{\text{i}} \gets {}^{\text{est}} \mathbf C_{\text{i}}, \quad {}^{n+1} \mathbf C_{\text{1i}} \gets {}^{\text{est}} \mathbf C_{\text{1i}}, \quad {}^{n+1} \mathbf C_{\text{2i}} \gets {}^{\text{est}} \mathbf C_{\text{2i}}. \end{equation} Finally, the variables $s$ and $s_{\text{d}}$ are updated by \eqref{updateSSd}.
The plastic corrector step is thus complete. \end{itemize} \end{itemize} \textbf{Remark 4.} In the current algorithm, substeps 2.2 and 2.3 are repeated three times. In the general context of the decoupled Euler Backward method, these repetitions correspond to the so-called relaxation iterations. As an alternative to the proposed algorithm, one may consider a full relaxation, which consists of numerous repetitions of 2.2 and 2.3, carried to convergence. As will be shown in the following, the full relaxation is not necessary and \emph{only three relaxation iterations yield very good results} for different metallic materials. Such accurate results are achieved due to the preliminary substep 2.1, which provides a good approximation of the solution even for finite values of $\xi$. Without the preliminary push-forward \eqref{DefineShift}, the integration procedure would be less accurate. $\Box$ Instead of solving a system of nonlinear algebraic equations with respect to ${}^{n+1} \mathbf C_{\text{i}}$, ${}^{n+1} \mathbf C_{\text{1i}}$, ${}^{n+1} \mathbf C_{\text{2i}}$, and $\xi$, as was done in previous studies, only a scalar consistency equation has to be solved with respect to $\xi$. Namely, equation \eqref{Consistency2} has to be resolved three times. The resulting numerical scheme is first-order accurate\footnote{The reader interested in higher-order integration methods for elasto-plasticity is referred to \cite{HartmannBier2008, Eidel, EidelStumpf} and references cited therein.}; the solution remains bounded even for very large time steps. Moreover, the geometric property \eqref{geopr0} is exactly satisfied, and the positive definiteness of $\mathbf C_{\text{i}}$, $\mathbf C_{\text{1i}}$, and $\mathbf C_{\text{2i}}$ is guaranteed even for very large time steps and strain increments. The push-forward operation \eqref{DefineShift} exactly retains the weak invariance of the solution (see Appendix D).
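To make the elementary closed-form updates above concrete, the following Python/NumPy sketch implements the stable formula \eqref{ReducedEvolut5notCont}, the estimate \eqref{EstimOfz}, and the unimodular correction \eqref{UpdateCi_correct}. It is a minimal illustration under stated assumptions, not the production implementation: $\mathbf{\Phi}$ and the scalar parameters are treated as given inputs, and all function names and test values are hypothetical.

```python
import numpy as np

def spd_sqrt(X):
    """Square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.sqrt(w)) @ V.T

def unimodular(X):
    """Overline operator: rescale X to unit determinant (requires det X > 0)."""
    return np.linalg.det(X) ** (-1.0 / 3.0) * X

def update_Ci(Ci_n, Cbar_np1, Phi, xi, mu, c, F2):
    """Closed-form corrector update of C_i for given xi (a sketch)."""
    I = np.eye(3)
    Phi_h = spd_sqrt(Phi)
    Phi_hi = np.linalg.inv(Phi_h)
    # Known tensor: A = Phi^{1/2} [ C_i^n + 2 (xi/F2) mu Cbar^{n+1} ] Phi^{1/2}
    A = Phi_h @ (Ci_n + 2.0 * xi / F2 * mu * Cbar_np1) @ Phi_h
    # First-order estimate of z: accurate for small c*xi/F2, exact for c = 0
    z0 = (np.linalg.det(A) / np.linalg.det(Phi)) ** (1.0 / 3.0)
    z = z0 - np.trace(A) / (3.0 * z0) * c * xi / F2
    # Numerically stable form: Y = 2 A [ (z^2 I + 4 (xi c / F2) A)^{1/2} + z I ]^{-1}
    Y = 2.0 * A @ np.linalg.inv(spd_sqrt(z * z * I + 4.0 * xi * c / F2 * A) + z * I)
    Y = 0.5 * (Y + Y.T)           # symmetrize against round-off
    Ci_star = Phi_hi @ Y @ Phi_hi
    return unimodular(Ci_star)    # final incompressibility correction
```

Since $z$ is only a first-order estimate, $\det({}^{n+1}\mathbf C^*_{\text{i}})$ deviates slightly from one; the final unimodular rescaling restores the incompressibility exactly.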
For that reason, the numerical solution exhibits the same weak invariance property as the original continuum model. \section{Simulation results} \subsection{Constitutive parameters for real materials} In this section, the novel PEBM is tested using two different sets of constitutive parameters pertaining to two different materials. The accuracy of the stress computation by PEBM will be compared with the Euler Backward method with subsequent correction (EBMSC) and the exponential method (EM). These conventional methods are briefly summarized in Appendix E. \subsubsection{Material parameters for 5754-O aluminum alloy} We adopt the experimental data reported for 5754-O aluminum alloy in \cite{Chaparro2008, LaurentGreeze2009}. Four different shear tests are employed for the parameter identification: three Bauschinger tests with different prestrain levels and a monotonic test. For this material, the rate-dependence is weak \cite{LaurentGreeze2009}; therefore, we switch off the viscosity by setting $\eta =0$. The identified constitutive parameters are summarized in Table \ref{tab1}. The comparison of the theoretical and experimental results is shown in Figure \ref{figFor5754}. As can be seen from the figure, the model with two parameters for the isotropic hardening and four parameters for the kinematic hardening describes the real stress response with good fidelity. \begin{table}[h!]
\caption{Set of constitutive parameters for 5754-O aluminum alloy} \begin{center} \begin{tabular}{| l l l l l |} \hline $k$ [MPa] & $\mu$ [MPa] & $c_1$ [MPa] & $c_2$ [MPa] & $\gamma$ [MPa] \\ \hline 68630 & 26310 & 115.5 & 11500 & 1963 \\ \hline \end{tabular} \\ \begin{tabular}{|l l l l l l|} \hline $K$ [MPa] & $m$ [-] & $\eta$ [$\text{s}$] & $\varkappa_1$ [$\text{MPa}^{-1}$] & $\varkappa_2$ [$\text{MPa}^{-1}$] & $\beta$ [-] \\ \hline 31.5 & 1.0 & 0 & 0.0885 & 0.01676 & 13.33 \\ \hline \end{tabular} \end{center} \label{tab1} \end{table} \begin{figure} \caption{Experimental and theoretical results for the stress response of 5754-O aluminum alloy subjected to simple shear. Experimental data reported in \cite{Chaparro2008, LaurentGreeze2009}. \label{figFor5754}} \end{figure} \subsubsection{Material parameters for 42CrMo4 steel} The experimental data for 42CrMo4 steel were reported in \cite{ShutovKuprin}. For details concerning the experimental setup and the simulation of the torsion test, the reader is referred to \cite{ShutovKuprin}. All tests are performed with a constant shear strain rate $| \dot{\gamma}_{\text{shear}} | = 0.07 \ \text{s}^{-1}$. Two different cyclic torsion tests are adopted in the current study for the identification of material parameters. The identified parameters are summarized in Table \ref{tab2}. As can be seen from Figure \ref{figFor42CrMo4}, the considered model describes the nonlinear stress response upon load reversal with good accuracy. \begin{table}[h!]
\caption{Set of constitutive parameters for 42CrMo4 steel} \begin{center} \begin{tabular}{| l l l l l |} \hline $k$ [MPa] & $\mu$ [MPa] & $c_1$ [MPa] & $c_2$ [MPa] & $\gamma$ [MPa] \\ \hline 135600 & 52000 & 1674 & 22960 & 145 \\ \hline \end{tabular} \\ \begin{tabular}{|l l l l l l|} \hline $K$ [MPa] & $m$ [-] & $\eta$ [$\text{s}$] & $\varkappa_1$ [$\text{MPa}^{-1}$] & $\varkappa_2$ [$\text{MPa}^{-1}$] & $\beta$ [-] \\ \hline 335 & 2.26 & 500 000 & 0.00386 & 0.0043102 & 0.0503 \\ \hline \end{tabular} \end{center} \label{tab2} \end{table} \begin{figure} \caption{Experimental and theoretical results for the stress response of 42CrMo4 steel; two cyclic torsion tests are presented. Experimental data reported in \cite{ShutovKuprin}. \label{figFor42CrMo4}} \end{figure} For all the following tests we switch off the viscous effects: material parameters from Table \ref{tab2} are taken for 42CrMo4 steel with $\eta=0$ and $m=1$. We note that the material parameters which appear in Tables \ref{tab1} and \ref{tab2} correspond to a very fast saturation of nonlinear kinematic hardening with large backstresses in the saturated state. From the numerical standpoint, such a type of stress response is highly unfavorable: much larger computational errors are expected for these parameters than in the case of slow saturation with small backstresses. Thus, the following numerical tests are assumed to be representative for a broad class of metallic materials. \subsection{Accuracy test: non-proportional loading} Consider the time instants $T_0=0$s, $T_1=100$s, $T_2=200$s, $T_3=300$s.
Let us simulate the stress response in the time interval $t \in [T_0,T_3]$ for the following local loading program \begin{equation}\label{loaprog0} \mathbf F (t) = \overline{\mathbf F^{\prime} (t)}, \end{equation} where the auxiliary function $\mathbf F^{\prime}(t)$ is given by \begin{equation*}\label{loaprog} \mathbf F^{\prime} (t) := \begin{cases} (1 - t/T_1) \mathbf F_1 + (t/T_1) \mathbf F_2 \quad \text{if} \ t \in [T_0,T_1] \\ (2 - t/T_1) \mathbf F_2 + (t/T_1-1) \mathbf F_3 \quad \text{if} \ t \in (T_1,T_2] \\ (3 - t/T_1) \mathbf F_3 + (t/T_1-2) \mathbf F_4 \quad \text{if} \ t \in (T_2,T_3] \end{cases}. \end{equation*} Here, $\mathbf F_1$, $\mathbf F_2$, $\mathbf F_3$, and $\mathbf F_4$ are the key points, which are defined in a fixed Cartesian coordinate system through \begin{equation*}\label{loaprog2} \mathbf F_1 :=\mathbf 1, \ \mathbf F_2 := \left( \begin{array}{ccc} 2 & 0 & 0 \\ 0 & \displaystyle \frac{1}{\sqrt2} & 0 \\ 0 & 0 & \displaystyle \frac{1}{\sqrt2} \end{array} \right), \ \mathbf F_3 := \left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \ \mathbf F_4 := \left( \begin{array}{ccc} \displaystyle \frac{1}{\sqrt2} & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \displaystyle \frac{1}{\sqrt2} \end{array} \right). \end{equation*} This program defines a rather arbitrary loading, representative of metal forming applications. Note that this loading program exhibits an abrupt change of the load path at $t=100 \text{s}$ and $t=200 \text{s}$. For the numerical test we impose the following initial conditions, corresponding to an unloaded, isotropic initial state: \begin{equation}\label{Inico} \mathbf C_{\text{i}}|_{t=0} = \mathbf 1, \quad \mathbf C_{\text{1i}}|_{t=0}= \mathbf 1, \quad \mathbf C_{\text{2i}}|_{t=0}= \mathbf 1, \quad s|_{t=0}=0, \quad s_{\text{d}}|_{t=0}=0. \end{equation} The numerical solution obtained with an extremely small time step ($\Delta t = 0.0025 \text{s}$) using the EBMSC will be regarded as the exact solution.
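For illustration, the loading program \eqref{loaprog0} can be sketched in Python/NumPy as follows; the unimodular correction $\overline{(\cdot)}$ is implemented as scaling by $\det(\cdot)^{-1/3}$, and the names are illustrative rather than part of the model:

```python
import numpy as np

T1 = 100.0  # key times: T0 = 0 s, T1 = 100 s, T2 = 200 s, T3 = 300 s

F_keys = [
    np.eye(3),                                                      # F_1
    np.diag([2.0, 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)]),         # F_2
    np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),  # F_3
    np.diag([1.0 / np.sqrt(2.0), 2.0, 1.0 / np.sqrt(2.0)]),         # F_4
]

def unimodular(X):
    """Overline operator: rescale to unit determinant."""
    return np.linalg.det(X) ** (-1.0 / 3.0) * X

def F(t):
    """Piecewise-linear interpolation between the key points, made isochoric."""
    k = min(int(t // T1), 2)   # segment index 0, 1, or 2
    s = t / T1 - k             # local coordinate in [0, 1]
    F_prime = (1.0 - s) * F_keys[k] + s * F_keys[k + 1]
    return unimodular(F_prime)
```

The key points themselves are already unimodular, but the linear interpolants between them are not, which is why the correction $\overline{(\cdot)}$ is applied.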
The numerical error function is computed as the distance between the exact and the numerically computed Cauchy stresses \begin{equation}\label{ErrorDef} \text{Error}(t): = \| \mathbf T^{\text{numerical}}(t) - \mathbf T^{\text{exact}}(t) \|. \end{equation} The error function is plotted for different values of the time step size $\Delta t$; Figures \ref{ErrorForAl} and \ref{ErrorForSt} correspond to material parameters for the 5754-O aluminum alloy and the 42CrMo4 steel, respectively. The numerical tests reveal that the proposed partitioned Euler Backward method (PEBM) and the conventional Euler Backward method with subsequent correction (EBMSC) have a similar integration error even for very large time steps $\Delta t = 10 \ \text{s}$. Both algorithms are first-order accurate and they are \emph{comparable in accuracy}, but the computational effort for the new algorithm is much smaller than for EBMSC. A more detailed discussion of the computational costs will be presented in Section 4.4. \begin{figure} \caption{Dependence of the integration error on the time step size $\Delta t$. Material parameters of 5754-O aluminum alloy are used. \label{ErrorForAl}} \end{figure} \begin{figure} \caption{Dependence of the integration error on the time step size $\Delta t$. Material parameters of 42CrMo4 steel are used. \label{ErrorForSt}} \end{figure} \subsection{Accuracy test: iso-error maps} The iso-error maps are useful for the visualization of the integration error within a single time step \cite{SimoHughes, LeeKim2015}. In this subsection we consider only incompressible deformation paths which can be described by \begin{equation}\label{DefPathIsoError} \mathbf F (t) = F_{11}(t) \ \mathbf e_1 \otimes \mathbf e_1 + (F_{11}(t))^{-1/2} (\mathbf e_2 \otimes \mathbf e_2 + \mathbf e_3 \otimes \mathbf e_3) + F_{12}(t) \ \mathbf e_1 \otimes \mathbf e_2. \end{equation} Just as in the previous subsection, the viscous effects are switched off.
Thus, the stress response is rate independent. For simplicity, the time $t$ is understood here as a non-dimensional monotonic loading parameter. Moreover, we switch off the isotropic hardening as well, by setting $\gamma = 0$ for both materials. To mimic the effect of isotropic hardening, we put $K = 178.8$ MPa for the AA 5754-O and $K = 400$ MPa for the 42CrMo4 steel. First, we consider a 20\% prestrain in uniaxial tension \begin{equation}\label{UniaxTensPrestrain} F_{11}(t) = 1 + 0.2 t, \quad F_{12}(t)=0, \quad t \in [0,1]. \end{equation} This prestrain is simulated by 500 time steps. After that, a single time step is performed to a new state, characterized by $(F_{11},F_{12})$. The numerical solution obtained for this step using EBMSC with 300 subincrements is regarded as the exact solution. Using this exact solution, the error is defined as in the previous subsection (see equation \eqref{ErrorDef}). The iso-error maps for the new algorithm (PEBM), for the Euler Backward method with subsequent correction (EBMSC), and for the exponential map (EM) are presented in Figure \ref{IsoErrorTensionAlu} for the 5754-O aluminum alloy and in Figure \ref{IsoErrorTensionSteel} for the 42CrMo4 steel. As in the previous subsection, the computational methods exhibit a similar numerical error. For all three methods, the error is relatively small in the recent loading direction and large in the opposite direction. The PEBM-error is slightly larger than the EBMSC-error for strain increments opposite to the recent loading direction and slightly smaller for increments in the loading direction. \begin{figure} \caption{Iso-error maps for different integration methods using material parameters of 5754-O aluminum alloy. Prestrain in tension. \label{IsoErrorTensionAlu}} \end{figure} \begin{figure} \caption{Iso-error maps for different integration methods using material parameters of 42CrMo4 steel. Prestrain in tension. \label{IsoErrorTensionSteel}} \end{figure} Next, we consider a combined prestrain with 10\% tension and 10\% shear \begin{equation}\label{UniaxTensPrestrain} F_{11}(t) = 1 + 0.1 t, \quad F_{12}(t)=0.1 t, \quad t \in [0,1]. \end{equation} The iso-error maps for the three different integration algorithms are shown in Figure \ref{IsoErrorTensionAndShearAlu} for the 5754-O aluminum alloy and in Figure \ref{IsoErrorTensionAndShearSteel} for the 42CrMo4 steel. Just as in the case of uniaxial prestrain, all three algorithms are equivalent regarding the accuracy of the stress computation. Again, small errors occur in the recent loading direction along with relatively large errors in the opposite direction. \begin{figure} \caption{Iso-error maps for different integration methods using material parameters of 5754-O aluminum alloy. Combined prestrain in tension and shear. \label{IsoErrorTensionAndShearAlu}} \end{figure} \begin{figure} \caption{Iso-error maps for different integration methods using material parameters of 42CrMo4 steel. Combined prestrain in tension and shear. \label{IsoErrorTensionAndShearSteel}} \end{figure} \subsection{Analysis of computational efforts} For the novel PEBM, the computational effort depends solely on the number of Newton iterations required to resolve the incremental consistency condition \eqref{Consistency2}. For the conventional EBMSC and EM, a system of 19 coupled nonlinear equations has to be resolved at each time step (cf. Appendix E). The simultaneous numerical solution of these equations is problematic for large strain increments. In order to obtain a more reliable procedure, we employ the return mapping approach. First, a numerical procedure is created which computes the unknown ${}^{n+1} \mathbf C_{\text{i}}$, ${}^{n+1} \mathbf C_{\text{1i}}$, and ${}^{n+1} \mathbf C_{\text{2i}}$ as functions of $\xi$. We call this a ``$\xi$-step''.
Each $\xi$-step is based on an iterative process of the Newton--Raphson type, where 18 nonlinear equations have to be solved. Then, the correct value of $\xi$ is found as a solution of the incremental consistency condition \eqref{AppendixE}. Just as for PEBM, this scalar equation is solved using the Newton method. For the novel PEBM, each Newton iteration requires a finite number of operations on $3 \times 3$ matrices (cf. Sections 3.2 and 3.3). These operations include matrix multiplication, computation of the square root, and computation of the inverse. On the other hand, for the conventional EBMSC and EM, each Newton iteration calls the $\xi$-step, which is much more expensive, since an $18 \times 18$ matrix has to be built and inverted several times within the iterative process. Generally, the Newton iteration for the PEBM is approximately two orders of magnitude cheaper than the Newton iteration for the EBMSC or EM. Since the $\xi$-step involves an iterative process, it is (a priori) less reliable than the closed-form solution used in PEBM. For $\xi > 0.1$, the straightforward application of the Newton--Raphson method to the $\xi$-step yields a divergent iteration process. In order to obtain a reliable numerical procedure for EBMSC and EM, the $[0, \xi]$ interval is subincremented, which further increases the computational costs. The setting used in the previous subsection can also be adopted for the analysis of the computational efforts. Toward that end we consider a combined prestrain with 10\% tension and 10\% shear, as described by \eqref{UniaxTensPrestrain}. The prestrain is simulated numerically by 500 time steps. After that, a single time step is performed to a new $(F_{11},F_{12})$-state. The number of Newton iterations required by PEBM, EBMSC, and EM to make this step is plotted in Figure \ref{figIterNumber}.
Recall that for the PEBM the incremental consistency condition has to be solved three times, since three relaxation iterations are implemented in the current study (cf. Remark 4). Within the first relaxation iteration, the initial approximation for the unknown $\xi$ is zero. For the subsequent relaxation iterations, the solution from the previous relaxation iteration serves as the initial approximation. The total (accumulated) number of Newton iterations is plotted for the PEBM. Although PEBM requires more iterations than the conventional methods, the difference in the iteration number is not large. Therefore, PEBM is still much more efficient than EBMSC or EM. In contrast to the conventional EBMSC and EM, \emph{no convergence issues} are observed for the novel PEBM. \begin{figure} \caption{Isolines for the number of Newton iterations required by different methods. \label{figIterNumber}} \end{figure} \subsection{FEM solutions of boundary value problems} The proposed PEBM is implemented into the commercial FEM code MSC.MARC employing the Hypela2 interface. In order to demonstrate the correctness of the numerical implementation, we consider two different bending-dominated problems with finite displacements and rotations. In both problems, the material parameters correspond to the 5754-O aluminum alloy (cf. Table \ref{tab1}). First, a displacement-controlled problem is solved. A cantilever beam is modeled as shown in Figure \ref{BeamMarc} (left). The left end of the beam is clamped; the right end is subjected to a prescribed vertical displacement with zero rotation. A total of 30 elements of type Hex20 with full integration is used. A series of tests is performed with a constant time step size within each test; the corresponding reaction-displacement curves are shown in Figure \ref{BeamMarc} (right). The numerical solution obtained using 720 time steps (with $\Delta t = 1/240$ s) serves as a reference.
As can be seen, even for large time steps ($\Delta t = 1/8$ s, 24 time steps), the conventional EBMSC and the novel PEBM produce almost identical results. Both methods \emph{exhibit quadratic convergence} and they require approximately the same number of iterations. \begin{figure} \caption{FEM solution in MSC.MARC: displacement-controlled loading of a cantilever beam. \label{BeamMarc}} \end{figure} Further, a non-monotonic force-controlled loading is applied to the Cook membrane as shown in Figure \ref{CookMarc}. The mesh contains 213 elements of type Hex20 with full integration. The obtained force-displacement curves are shown in Figure \ref{CookMarc} (right) for different values of the time step size. The numerical solution obtained using 240 time steps (with $\Delta t = 1/80$ s) serves as a reference. Just as in the previous problem, EBMSC and PEBM produce almost identical results, even for large time steps ($\Delta t = 1/5$ s). Again, both methods exhibit quadratic convergence and they require approximately the same number of iterations. \begin{figure} \caption{FEM solution in MSC.MARC: force-controlled loading of Cook's membrane. \label{CookMarc}} \end{figure} Concluding this subsection, the conventional EBMSC and the novel PEBM seem to be equivalent regarding the accuracy and stability of the global FE procedure, even if large strain increments are involved. \section{Discussion and conclusion} A new structure-exploiting numerical scheme is proposed for the implicit time stepping. The analyzed model of finite-strain viscoplasticity is based on the nested multiplicative split combined with hyperelastic relations.
The main ingredients of the new algorithm are: \begin{itemize} \item Elastic predictor --- plastic corrector approach, also known as return mapping; \item Use of new analytical (closed-form) solutions for elementary decoupled problems (see Section 3.2); \item Approximation of the inelastic strain within each time step by a special push-forward operation (see Section 3.3). \end{itemize} The new algorithm exactly preserves the symmetry of the tensor-valued internal variables $\mathbf{C}_{\text{i}}$, $\mathbf{C}_{\text{1i}}$, and $\mathbf{C}_{\text{2i}}$. Moreover, these quantities remain positive definite, which is important, since they represent inelastic metric tensors. The incompressibility restriction is also satisfied, which is necessary to prevent error accumulation. Another important property of the proposed algorithm is the exact retention of the weak invariance under the isochoric change of the reference configuration. \emph{It is impressive that the retention of the weak invariance may serve as a guide in constructing efficient numerical algorithms.} For instance, the update formulas in Sections 3.2 and 3.3 as well as the push-forward operation \eqref{DefineShift} are obtained employing the weak invariance as a hint. The proposed PEBM is first-order accurate; the solution remains bounded even for very large time steps. The performance of the new algorithm is examined using the constitutive parameters of two real materials, which is a tough test because of the large values of $c = c_1 + c_2$. The larger $c$, the stiffer the problem, which makes an accurate numerical integration even more challenging. Moreover, some convergence problems are observed at the Gauss-point level when using the Euler Backward method and the exponential method with large inelastic increments ($\xi > 0.1$) and large values of $c$.
\emph{At the same time, no computational difficulties are encountered within the proposed PEBM, since the problem-causing nonlinear algebraic equations are now solved analytically.} The numerical tests reveal that the proposed PEBM exhibits errors similar to those of the EBMSC. Both methods produce numerical results close to those of the EM. Regarding the computational efficiency, the new local algorithm is superior to the EBMSC and EM. For example, for the model with two back-stress tensors considered in the paper, the EBMSC and EM require the solution of a system of 19 nonlinear equations, whereas the new PEBM allows us to reduce the number of equations from 19 to 1. The algorithm is implemented into MSC.MARC and two boundary-value problems are solved numerically to demonstrate the correctness of the FE implementation. According to the test results, the conventional EBMSC and the novel PEBM are equivalent concerning the accuracy and reliability of the global FE procedure. Although the advocated time-stepping method is demonstrated here using a concrete model of finite-strain viscoplasticity, it can be applied to similar models with the nested multiplicative split, developed for thermoplasticity \cite{ShutovThermal}, shape memory alloys \cite{Helm1, Helm3, ReeseChrist, FrischkornReese}, and ratchetting \cite{Feigen, ZHU2013}. The application of the proposed method to the combined kinematic/distortional hardening with a radial flow rule reported in \cite{ShutovPaKr} is straightforward as well. In conclusion, the presented method leads to enhanced computational efficiency of the local implicit time stepping for anisotropic finite-strain plasticity based on Lion's approach. \section*{Appendix A.
Weak invariance of discrete evolution equations} \subsection*{Violation of the weak invariance for $\mathbf{P} = \mathbf 1$} Let us show that the numerical scheme based on the equation \begin{equation}\label{AppendixA1} {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C} - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} \big) + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}} + \varepsilon \mathbf{1} \end{equation} is not weakly invariant. Toward that end, consider a reference change induced by $\textbf{F}_0$, where $\text{det}(\textbf{F}_0) = 1$. Due to the reference change, the current right Cauchy-Green tensor ${}^{n+1} \textbf{C}$ transforms as follows \begin{equation}\label{AppendixA2} {}^{n+1} \overline{\mathbf C}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \overline{\mathbf C} \ \textbf{F}^{-1}_0. \end{equation} The weak invariance can only take place if the internal variables are transformed according to \begin{equation}\label{AppendixA3} {}^{n} \textbf{C}_{\text{i}}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n} \textbf{C}_{\text{i}} \ \textbf{F}^{-1}_0, \end{equation} \begin{equation}\label{AppendixA4} {}^{n+1} \textbf{C}_{\text{1i}}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \textbf{C}_{\text{1i}} \ \textbf{F}^{-1}_0, \quad {}^{n+1} \textbf{C}_{\text{2i}}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \textbf{C}_{\text{2i}} \ \textbf{F}^{-1}_0. 
\end{equation} Substituting these relations into the original equation \eqref{AppendixA1}, which is formulated with respect to the unknown ${}^{n+1} \textbf{C}_{\text{i}}$, we arrive at \begin{multline}\label{AppendixA5} {}^{n+1} \mathbf C_{\text{i}} = \textbf{F}^{\text{T}}_0 \ {}^{n} \mathbf C_{\text{i}}^{\text{new}} \ \textbf{F}_0 + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ \textbf{F}^{\text{T}}_0 {}^{n+1} \overline{\mathbf C}^{\text{new}} \ \textbf{F}_0 - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0 ({}^{n+1} {\mathbf C}^{\text{new}}_{\text{1i}})^{-1} \ \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} - \\ - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0 ({}^{n+1} {\mathbf C}^{\text{new}}_{\text{2i}})^{-1} \ \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \big) + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}} + \varepsilon \mathbf{1}. \end{multline} Multiplying both sides of \eqref{AppendixA5} by $\textbf{F}^{-\text{T}}_0$ from the left and $\textbf{F}^{-1}_0$ from the right, and introducing $\mathbf Z := \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0$, we obtain \begin{equation}\label{AppendixA6} \mathbf Z = {}^{n} \mathbf C_{\text{i}}^{\text{new}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C}^{\text{new}} - \frac{c_1}{2} \ \mathbf Z \ ({}^{n+1}\mathbf C_{\text{1i}}^{\text{new}})^{-1} \ \mathbf Z - \frac{c_2}{2} \ \mathbf Z \ ({}^{n+1}\mathbf C_{\text{2i}}^{\text{new}})^{-1} \ \mathbf Z \big) + \beta \Delta t \ \mathbf Z + \varepsilon \textbf{F}^{-\text{T}}_0 \textbf{F}^{-1}_0. \end{equation} Next, let ${}^{n+1} \textbf{C}_{\text{i}}^{\text{new}}$ be the solution of the new equation, which is formally obtained from equation \eqref{AppendixA1} by replacing the old quantities by their new counterparts. This equation has almost the same form as \eqref{AppendixA6}. 
The only difference lies in the last term on the right-hand side: $\varepsilon \textbf{1} \neq \varepsilon \textbf{F}^{-\text{T}}_0 \textbf{F}^{-1}_0$. Therefore, these two equations have two different solutions: ${}^{n+1} \textbf{C}_{\text{i}}^{\text{new}} \neq \textbf{Z}$. Recalling the definition of $\textbf{Z}$, we finally have \begin{equation}\label{AppendixA7} {}^{n+1} \textbf{C}_{\text{i}}^{\text{new}} \neq \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0. \end{equation} Thus, the weak invariance is violated. Numerical schemes of type \eqref{AppendixA1} should not be employed for discretization of weakly invariant constitutive equations. \subsection*{Weak invariance for $\mathbf{P} = {}^{n+1} \mathbf C_{\text{i}}$} Now we prove the weak invariance property for the numerical scheme \begin{multline}\label{AppendixA8} {}^{n+1} \mathbf C_{\text{i}} = {}^{n} \mathbf C_{\text{i}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C} - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{1i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ {}^{n+1}\mathbf C_{\text{2i}}^{-1} \ {}^{n+1} \mathbf C_{\text{i}} \big) + \\ + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}} + \varepsilon \ {}^{n+1} \mathbf C_{\text{i}}, \end{multline} which is used to determine ${}^{n+1} \mathbf C_{\text{i}}$. In order to prove this invariance, we need to assume that all the input variables are transformed properly. More precisely, we suppose that \eqref{AppendixA3} and \eqref{AppendixA4} hold true. 
Substituting these relations into \eqref{AppendixA8}, we arrive at \begin{multline}\label{AppendixA9} {}^{n+1} \mathbf C_{\text{i}} = \textbf{F}^{\text{T}}_0 \ {}^{n} \mathbf C_{\text{i}}^{\text{new}} \ \textbf{F}_0 + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ \textbf{F}^{\text{T}}_0 {}^{n+1} \overline{\mathbf C}^{\text{new}} \ \textbf{F}_0 - \frac{c_1}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0 ({}^{n+1} {\mathbf C}^{\text{new}}_{\text{1i}})^{-1} \ \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} - \\ - \frac{c_2}{2} \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0 ({}^{n+1} {\mathbf C}^{\text{new}}_{\text{2i}})^{-1} \ \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \big) + \beta \Delta t \ {}^{n+1} \mathbf C_{\text{i}} + \varepsilon {}^{n+1} \mathbf C_{\text{i}}. \end{multline} Multiplying both sides of \eqref{AppendixA9} by $\textbf{F}^{-\text{T}}_0$ from the left and $\textbf{F}^{-1}_0$ from the right, we obtain the following equation for $\mathbf Z := \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0$ \begin{equation}\label{AppendixA10} \mathbf Z = {}^{n} \mathbf C_{\text{i}}^{\text{new}} + 2 \frac{\displaystyle \xi }{\displaystyle \mathfrak{F}_2} \big( \mu \ {}^{n+1} \overline{\mathbf C}^{\text{new}} - \frac{c_1}{2} \ \mathbf Z \ ({}^{n+1}\mathbf C_{\text{1i}}^{\text{new}})^{-1} \ \mathbf Z - \frac{c_2}{2} \ \mathbf Z \ ({}^{n+1}\mathbf C_{\text{2i}}^{\text{new}})^{-1} \ \mathbf Z \big) + \beta \Delta t \ \mathbf Z + \varepsilon \mathbf Z. \end{equation} This equation exactly coincides with \eqref{AppendixA8}, where the old quantities are replaced by the new ones. Thus, ${}^{n+1} \textbf{C}_{\text{i}}^{\text{new}} = \textbf{Z}$, i.e. \begin{equation}\label{AppendixA11} {}^{n+1} \textbf{C}_{\text{i}}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n+1} \mathbf C_{\text{i}} \ \textbf{F}^{-1}_0, \end{equation} which is exactly the required weak invariance. \section*{Appendix B. 
Analysis of round-off errors} Let us estimate the round-off errors which appear during the step-by-step evaluation of \eqref{ReducedEvolut5}. The square root in \eqref{ReducedEvolut5} can be computed to within machine precision $\epsilon$. For simplicity, assume that this is the only source of errors. Thus, the step-by-step evaluation of \eqref{ReducedEvolut5} yields \begin{equation}\label{Contamination} \mathbf Y \leftarrow \frac{\displaystyle \mathfrak{F}_2}{\displaystyle 2 \xi c} \Big[ -z \textbf{1} + \big(z^2 \textbf{1} + 4 \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \textbf{A} \big)^{1/2} + \boldsymbol{\epsilon} \Big], \quad \text{where} \quad \| \boldsymbol{\epsilon} \| \approx \epsilon. \end{equation} In machine arithmetic we obtain \begin{equation}\label{Contamination0} \mathbf Y^{\text{machine}} \leftarrow \mathbf Y^{\text{exact}} + \frac{\displaystyle \mathfrak{F}_2}{\displaystyle 2 \xi c} \boldsymbol{\epsilon}. \end{equation} In other words, $\mathbf Y^{\text{machine}}$ is contaminated by the error $\frac{\displaystyle \mathfrak{F}_2}{\displaystyle 2 \xi c} \boldsymbol{\epsilon}$, which can become very large for small $2 \frac{ \displaystyle \xi c}{\displaystyle \mathfrak{F}_2}$. Now let us analyze the alternative equation \eqref{ReducedEvolut5notCont}. Its step-by-step evaluation is equivalent to the following chain \begin{equation}\label{Chain0} \mathbf{Y}_0 \leftarrow \big(z^2 \textbf{1} + 4 \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \textbf{A} \big)^{1/2}, \ \mathbf{Y}_1 \leftarrow \mathbf{Y}_0 + z \mathbf{1}, \ \mathbf{Y}_2 \leftarrow \mathbf{Y}^{-1}_1, \ \mathbf{Y} \leftarrow 2 \mathbf{A} \mathbf{Y}_2. \end{equation} Again, the square root is computed with a small error $\boldsymbol{\epsilon}$. 
Thus, we obtain \begin{equation}\label{Contamination2} \mathbf{Y}^{\text{machine}}_0 \leftarrow \mathbf{Y}^{\text{exact}}_0 + \boldsymbol{\epsilon}, \ \mathbf{Y}^{\text{machine}}_1 \leftarrow \mathbf{Y}^{\text{exact}}_1 + \boldsymbol{\epsilon}. \end{equation} Using the Neumann series, it can easily be shown that \begin{equation}\label{Contamination3} (\mathbf{Y}^{\text{exact}}_1 + \boldsymbol{\epsilon})^{-1} = (\mathbf{Y}^{\text{exact}}_1)^{-1} - (\mathbf{Y}^{\text{exact}}_1)^{-1} \ \boldsymbol{\epsilon} \ (\mathbf{Y}^{\text{exact}}_1)^{-1} + O(\epsilon^2). \end{equation} Thus, neglecting the higher-order terms, we have \begin{equation}\label{Contamination4} \mathbf{Y}^{\text{machine}}_2 \leftarrow \mathbf{Y}^{\text{exact}}_2 - \mathbf{Y}^{\text{exact}}_2 \ \boldsymbol{\epsilon} \ \mathbf{Y}^{\text{exact}}_2. \end{equation} Finally, \begin{equation}\label{Contamination5} \mathbf{Y}^{\text{machine}} \leftarrow \mathbf{Y}^{\text{exact}} - 2 \mathbf{A} \mathbf{Y}^{\text{exact}}_2 \ \boldsymbol{\epsilon} \ \mathbf{Y}^{\text{exact}}_2 = \mathbf{Y}^{\text{exact}} - \mathbf{Y}^{\text{exact}} \ \boldsymbol{\epsilon} \ \mathbf{Y}^{\text{exact}}_2. \end{equation} Since $\mathbf{Y}^{\text{exact}}_2$ is bounded, the small error $\boldsymbol{\epsilon}$ \emph{is not multiplied} by a large factor as was the case for \eqref{ReducedEvolut5}. In other words, relation \eqref{ReducedEvolut5notCont} can be evaluated step-by-step. \section*{Appendix C. Estimation of $z$} Let us derive the estimate \eqref{EstimOfz} for the unknown parameter $z$. First, we recall equation \eqref{QuadrEquation} and the incompressibility relation formulated in terms of $\mathbf Y$ \begin{equation}\label{Appendix1} z \ \mathbf Y = \mathbf A - \frac{\displaystyle \xi c}{\displaystyle \mathfrak{F}_2} \mathbf Y^2, \quad \det (\textbf{Y}) = \det (\mathbf{\Phi}). 
\end{equation} In order to rewrite the first relation in a more compact form, we introduce the abbreviation $\xi':=\frac{\displaystyle c \xi}{\displaystyle \mathfrak{F}_2}$ \begin{equation}\label{Appendix2} z \ \mathbf Y = \mathbf A - \xi' \mathbf Y^2. \end{equation} This equation yields $\mathbf Y$ as a function of $z$ and $\xi'$. Let us consider its expansion in Taylor series for small $\xi'$ \begin{equation}\label{Appendix3} \mathbf Y = \tilde{\mathbf Y}(z,\xi') = \frac{1}{z} \mathbf A - \frac{\xi'}{z^3} {\mathbf A}^2 + O(\xi'^2), \quad \mathbf Y|_{\xi' =0} = \frac{1}{z} \mathbf A. \end{equation} Since the parameter $z$ was introduced to enforce the incompressibility, $z$ is estimated using the incompressibility relation $\eqref{Appendix1}_2$, which yields \begin{equation}\label{Appendix4} z = \tilde{z} (\xi'), \quad z_0 := \tilde{z} (0) = \Big(\frac{\det \mathbf A}{\det{\mathbf \Phi}}\Big)^{1/3}. \end{equation} Employing the implicit function theorem, we have \begin{equation}\label{Appendix5} \frac{d \tilde{z} (\xi')}{d \xi'}|_{\xi' =0} = - \frac{\partial \det \tilde{\mathbf Y}(z,\xi')}{\partial \xi'}|_{z=z_0, \xi' =0} \ \Big(\frac{\partial \det \tilde{\mathbf Y}(z,\xi')}{\partial z}|_{z=z_0, \xi' =0}\Big)^{-1}. \end{equation} Next, adopting the Jacobi formula and differentiating $\eqref{Appendix3}_1$, we obtain \begin{equation}\label{Appendix52} \frac{\partial \det \tilde{\mathbf Y}(z,\xi')}{\partial \xi'}|_{z=z_0, \xi' =0} = \det \tilde{\mathbf Y}(z_0,0) \ (\tilde{\mathbf Y}(z_0,0))^{-1} : \Big(-\frac{\mathbf A^2}{z_0^3}\Big), \end{equation} \begin{equation}\label{Appendix53} \frac{\partial \det \tilde{\mathbf Y}(z,\xi')}{\partial z}|_{z=z_0, \xi' =0} = \det \tilde{\mathbf Y}(z_0,0) \ (\tilde{\mathbf Y}(z_0,0))^{-1} : \Big(- \frac{\mathbf A}{z_0^2}\Big). 
\end{equation} Substituting this into \eqref{Appendix5}, after some algebraic computations we arrive at \begin{equation}\label{Appendix6} \frac{d \tilde{z} (\xi')}{d \xi'}|_{\xi' =0} = - \frac{\text{tr} \mathbf A}{3 z_0}, \quad \tilde{z}(\xi') = z_0 - \frac{\text{tr} \mathbf A}{3 z_0} \xi' + O(\xi'^2). \end{equation} Finally, we have \begin{equation}\label{Appendix7} z = z_0 - \frac{\text{tr} \mathbf A}{3 z_0} \frac{c \ \xi}{\mathfrak{F}_2} + O\Big(\Big(\frac{c \ \xi}{\mathfrak{F}_2}\Big)^2\Big). \end{equation} Obviously, this estimate is exact for $c=0$ and finite $\xi$. \section*{Appendix D. Weak invariance of a certain push-forward operation} Let us check that the push-forward operation \eqref{DefineShift} preserves the weak invariance of the solution. Toward that end, consider the quantities ${}^{n+1}\mathbf{C}$, ${}^{n}\mathbf{C}$, ${}^{n} \mathbf{C}_{\text{i}}$ which refer to the original reference configuration (see Figure \ref{AppendixC}). Operation \eqref{DefineShift} can be rewritten in the following way \begin{equation}\label{AppendixC1} {}^{\text{est}} \mathbf{C}_{\text{i}} := \mathbf{F}^{-\text{T}}_{\text{sh}} \ {}^{n}\mathbf{C}_{\text{i}} \ \mathbf{F}^{-1}_{\text{sh}} = (\overline{{}^{n+1} \mathbf{C} \ {}^{n} \mathbf{C}^{-1}})^{1/2} \ {}^{n}\mathbf{C}_{\text{i}} \ (\overline{{}^{n} \mathbf{C}^{-1} \ {}^{n+1} \mathbf{C}})^{1/2}, \end{equation} where the push-forward $\mathbf{F}^{-\text{T}}_{\text{sh}} \ (\cdot) \ \mathbf{F}^{-1}_{\text{sh}}$ brings $\overline{{}^{n}\mathbf{C}}$ to $\overline{{}^{n+1}\mathbf{C}}$. Let $\textbf{F}_0$ be the reference change, such that $\text{det}(\textbf{F}_0) = 1$. 
The quantities with respect to the new reference configuration are computed as follows \begin{equation}\label{AppendixC2} {}^{n+1}\mathbf{C}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n+1}\mathbf{C} \ \textbf{F}^{-1}_0, \quad {}^{n}\mathbf{C}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n}\mathbf{C} \ \textbf{F}^{-1}_0, \quad {}^{n}\mathbf{C}_{\text{i}}^{\text{new}} = \textbf{F}^{-\text{T}}_0 \ {}^{n}\mathbf{C}_{\text{i}} \ \textbf{F}^{-1}_0. \end{equation} Specifying \eqref{AppendixC1} for the quantities on the new reference configuration, we have \begin{equation}\label{AppendixC3} {}^{\text{est}} \mathbf{C}^{\text{new}}_{\text{i}} := (\mathbf{F}_{\text{sh}}^{\text{new}})^{-\text{T}} \ {}^{n}\mathbf{C}^{\text{new}}_{\text{i}} \ (\mathbf{F}^{\text{new}}_{\text{sh}})^{-1} = (\overline{{}^{n+1} \mathbf{C}^{\text{new}} \ ({}^{n} \mathbf{C}^{\text{new}})^{-1}})^{1/2} \ {}^{n}\mathbf{C}^{\text{new}}_{\text{i}} \ (\overline{ ({}^{n}\mathbf{C}^{\text{new}})^{-1} \ {}^{n+1}\mathbf{C}^{\text{new}}})^{1/2}, \end{equation} where the push-forward $(\mathbf{F}_{\text{sh}}^{\text{new}})^{-\text{T}} \ (\cdot) \ (\mathbf{F}^{\text{new}}_{\text{sh}})^{-1}$ brings $\overline{{}^{n}\mathbf{C}^{\text{new}}}$ to $\overline{{}^{n+1}\mathbf{C}^{\text{new}}}$. The situation is summarized in Figure \ref{AppendixC}. \begin{figure} \caption{Invariance of the push-forward operation \eqref{DefineShift}.} \label{AppendixC} \end{figure} Our goal now is to clarify the interrelationship between ${}^{\text{est}}\mathbf{C}^{\text{new}}_{\text{i}}$ and ${}^{\text{est}}\mathbf{C}_{\text{i}}$. 
Substituting \eqref{AppendixC2} into \eqref{AppendixC3} and using the identities $(\textbf{F}^{-\text{T}}_0 \mathbf{A} \textbf{F}^{\text{T}}_0)^{1/2} = \textbf{F}^{-\text{T}}_0 \ \mathbf{A}^{1/2} \ \textbf{F}^{\text{T}}_0$ as well as $(\textbf{F}_0 \mathbf{A} \textbf{F}^{-1}_0)^{1/2} = \textbf{F}_0 \ \mathbf{A}^{1/2} \ \textbf{F}^{-1}_0$, we arrive at \begin{equation}\label{AppendixC4} {}^{\text{est}} \mathbf{C}^{\text{new}}_{\text{i}} = \textbf{F}^{-\text{T}}_0 \ (\overline{{}^{n+1} \mathbf{C} \ {}^{n} \mathbf{C}^{-1}})^{1/2} \ {}^{n}\mathbf{C}_{\text{i}} \ (\overline{{}^{n} \mathbf{C}^{-1} \ {}^{n+1} \mathbf{C}})^{1/2} \ \textbf{F}^{-1}_0 = \textbf{F}^{-\text{T}}_0 \ {}^{\text{est}} \mathbf{C}_{\text{i}} \ \textbf{F}^{-1}_0. \end{equation} This relation proves the weak invariance of the push-forward \eqref{AppendixC1}. Alternatively, one may consider another push-forward operation, defined through \begin{equation}\label{Appendix534} {}^{\text{est}} \mathbf{C}_{\text{i}} \gets \tilde{\mathbf{F}}^{-\text{T}}_{\text{sh}} \ {}^{n} \mathbf{C}_{\text{i}} \ \tilde{\mathbf{F}}^{-1}_{\text{sh}}, \ \text{where} \ \tilde{\mathbf{F}}_{\text{sh}} := (\overline{{}^{n+1} \mathbf{C}})^{-1/2} \ (\overline{{}^{n} \mathbf{C}})^{1/2}. \end{equation} Just as \eqref{DefineShift}, it brings $\overline{{}^{n}\mathbf{C}}$ to $\overline{{}^{n+1}\mathbf{C}}$, but such a transformation is not invariant under the change of the reference configuration: \begin{equation}\label{AppendixD1} {}^{\text{est}} \mathbf{C}^{\text{new}}_{\text{i}} \neq \textbf{F}^{-\text{T}}_0 \ {}^{\text{est}} \mathbf{C}_{\text{i}} \ \textbf{F}^{-1}_0. \end{equation} Thus, operations of type \eqref{Appendix534} should not be used in numerical computations. \section*{Appendix E. 
EBMSC and EM} \subsubsection*{Discretization of differential equations using EBMSC and EM} Let us consider the initial value problem for a system of nonlinear ordinary differential equations \begin{equation*}\label{difur} \dot{\mathbf A} (t) = \mathbf{f} (\mathbf A(t), t) \mathbf A (t), \quad \mathbf A(0)=\mathbf A^0, \quad \det(\mathbf A^0)=1. \end{equation*} Denote by ${}^n \mathbf A$ and ${}^{n+1} \mathbf A$ the numerical solutions at $t_n$ and $t_{n+1}$, respectively. The classical Euler Backward method (EBM) uses the equation with respect to the unknown ${}^{n+1} \mathbf A^{\text{EBM}}$ \cite{SimoHughes, SimMieh}: \begin{equation}\label{Eulcl} {}^{n+1} \mathbf A^{\text{EBM}} = \big[ \mathbf 1 - \Delta t \ \mathbf{f} ({}^{n+1} \mathbf A^{\text{EBM}}, t_{n+1}) \big]^{-1} \ {}^n \mathbf A. \end{equation} If the incompressibility restriction $\det ({}^{n+1} \mathbf A) \equiv 1$ needs to be enforced, a modified method, called the Euler Backward method with subsequent correction of incompressibility (EBMSC), can be considered \cite{ShutovLandgraf2013}: \begin{equation}\label{EBMSC} {}^{n+1} \mathbf A^{\text{EBMSC}} := \overline{{}^{n+1} \mathbf A^{\text{EBM}}}. \end{equation} The exponential method (EM) is based on the equation with respect to ${}^{n+1} \mathbf A^{\text{EM}}$ \cite{WebAnan, MiStei, Simo}: \begin{equation}\label{Expo} {}^{n+1} \mathbf A^{\text{EM}} = \exp\big(\displaystyle \Delta t \ \mathbf{f} ({}^{n+1} \mathbf A^{\text{EM}}, t_{n+1})\big) \ {}^n \mathbf A. \end{equation} In contrast to the classical EBM, the EM exactly preserves the incompressibility. Under some general assumptions, both methods preserve the symmetry of $\mathbf A$ (see \cite{ShutovKrVisc}). \subsubsection*{Application to the model of Shutov and Krei\ss ig} For the material model under consideration, two conventional numerical procedures can be obtained using the EBMSC and EM in the following way. First, we discretize the evolution equations \eqref{prob1} -- \eqref{prob22} using EBMSC or EM. 
Next, the scalar evolution equations \eqref{prob3} are discretized using the Euler Backward method as discussed in Section 3.1. Within a nested procedure, the resulting system of algebraic equations is solved numerically for a fixed $\xi := \Delta t \ {}^{n+1} \lambda_{\text{i}}$ with respect to the unknowns $\mathbf C_{\text{i}}$, $\mathbf C_{\text{1i}}$, $\mathbf C_{\text{2i}}$, $s$, and $s_{\text{d}}$. Using these functional dependencies, the unknown inelastic strain increment $\xi$ is found with the predictor-corrector scheme by solving the consistency condition \begin{equation}\label{AppendixE} \xi \eta = \displaystyle \Delta t \Big\langle \frac{\displaystyle 1}{\displaystyle f_0} f \Big\rangle^{m}, \end{equation} where $f$ is a function of $\xi$. For more details, the reader is referred to \cite{ShutovKrVisc, ShutovKrKoo}. \end{document}
\begin{document} \title{On Killers of Cable Knot Groups} \author{Ederson R. F. Dutra} \address{Mathematisches Seminar, Christian-Albrechts-Universität zu Kiel, Ludewig-Meyn Str.~4, 24098 Kiel, Germany} \email{[email protected]} \thanks{ This work was supported by CAPES - Coordination for the Improvement of Higher Education Personnel-Brazil, program Science Without Borders grant 13522-13-2} \begin{abstract} A killer of a group $G$ is an element that normally generates $G$. We show that the group of a cable knot contains infinitely many killers such that no two lie in the same automorphic orbit. \end{abstract} \maketitle \section{Introduction} Let $G$ be an arbitrary group and $S\subseteq G$. We define the normal closure $\langle\langle S\rangle\rangle_G$ of $S$ as the smallest normal subgroup of $G$ containing $S$; equivalently, $$\langle\langle S\rangle\rangle_{G}= \{\prod_{i=1}^{k}u_is_i^{\varepsilon_i}u_i^{-1} \ | \ u_i\in G, \varepsilon_i=\pm 1, s_i\in S, k\in \mathbb{N}\}.$$ Following \cite{Simon}, we call an element $g\in G$ a \textit{killer} if $\langle\langle g\rangle\rangle_{G}=G$. We say that two killers $g_1,g_2\in G$ are \textit{equivalent} if there exists an automorphism $\phi:G\rightarrow G$ such that $\phi(g_1)=g_2$. Let $\mathfrak{k}$ be a knot in $S^3$ and $V(\mathfrak{k})$ a regular neighborhood of $\mathfrak{k}$. Denote by $$X(\mathfrak{k})= S^3-Int(V(\mathfrak{k})) $$ the knot manifold of $\mathfrak{k}$ and by $G(\mathfrak{k})=\pi_1(X(\mathfrak{k}))$ its group. A \emph{meridian} of $\mathfrak{k}$ is an element of $G(\mathfrak{k})$ which can be represented by a simple closed curve on $ \partial V(\mathfrak{k})$ that is contractible in $V(\mathfrak{k})$ but not contractible in $\partial V(\mathfrak{k})$. Thus a meridian is well defined up to conjugacy and inversion. From a Wirtinger presentation of $G(\mathfrak{k})$ we see that the meridian is a killer. 
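As a minimal illustration of this last fact (the trefoil example below is ours, not part of the paper's argument): in the Wirtinger presentation of the trefoil both generators are meridians and are conjugate to each other, so adjoining the relation $a=1$ kills the whole group:

```latex
G(3_1) = \langle a, b \mid aba = bab \rangle,
\qquad
G(3_1)/\langle\langle a \rangle\rangle_{G(3_1)}
  = \langle b \mid b = b^2 \rangle = 1,
```

since setting $a=1$ turns the relation $aba=bab$ into $b=b^2$, forcing $b=1$.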
In \cite[Theorem 3.11]{Tsau} the author exhibits a knot for which there exists a killer that is not equivalent to the meridian. Silver–Whitten–Williams \cite[Corollary 1.3]{Silver} showed that if $\mathfrak{k}$ is a hyperbolic $2$-bridge knot or a torus knot or a hyperbolic knot with unknotting number one, then its group contains infinitely many pairwise inequivalent killers. In \cite[Conjecture 3.3]{Silver} it is conjectured that the group of any nontrivial knot has infinitely many inequivalent killers, see also \cite[Question 9.26]{Friedl}. In this paper we show the following. \begin{theorem}{\label{T1}} Let $\mathfrak{k}$ be a cable knot about a nontrivial knot $\mathfrak{k}_1$. Then its group contains infinitely many pairwise inequivalent killers. \end{theorem} Moreover, we show that having infinitely many inequivalent killers is preserved under connected sums. As a corollary we show that the group of any nontrivial knot whose exterior is a graph manifold contains infinitely many inequivalent killers. \section{Proof of Theorem 1} Let $m,n$ be coprime integers with $n\geq 2$. The \emph{cable space} $CS({m,n})$ is defined as follows: let $D^2=\{z\in \mathbb{C} \ | \ \Vert z\Vert \leq 1\}$ and let $\rho :D^2\rightarrow D^2$ be the rotation through an angle of $2\pi (m/n)$ about the origin. Choose a disk $\delta\subset Int(D^2)$ such that $\rho^i(\delta)\cap \rho^j(\delta)=\emptyset$ for $1\leq i\neq j\leq n$ and denote by $D^2_n$ the space $$D^2-Int\Big(\bigcup_{i=1}^n\rho^i(\delta)\Big). $$ The rotation $\rho$ induces a homeomorphism $\rho_0:=\rho|_{D^2_n}:D_n^2\rightarrow D_n^2$. We define $CS(m,n)$ as the mapping torus of $\rho_0$, i.e. $$CS(m,n):=D^2_n\times I/(z,0)\sim (\rho_0(z),1).$$ Note that $CS(m,n)$ has the structure of a Seifert fibered space. Each fiber is the image of $\{\rho^i(z)|1\leq i\leq n\} \times I$ under the quotient map, where $z\in D^2_n$. There is exactly one exceptional fiber, namely the image $C_0$ of the arc $0\times I$. 
In order to compute the fundamental group $A$ of $CS(m,n)$, denote the free generators of $\pi_1(D_n^2)$ corresponding to the boundary paths of the removed disks $\rho(\delta),\ldots, \rho^n(\delta)$ by $x_1,\ldots,x_n$ respectively. From the definition of $CS(m,n)$ we see that we can write $A$ as the semi-direct product $F(x_1,\ldots,x_n)\rtimes \mathbb{Z}$, where the action of $\mathbb{Z}=\langle t\rangle$ on $\pi_1(D_n^2)=F(x_1,\ldots,x_n)$ is given by $$tx_it^{-1}=x_{\sigma(i)} \text{ for } 1\leq i\leq n.$$ The element $t$ is represented by the exceptional fiber of $CS(m,n)$ and the permutation $\sigma:\{1,\ldots,n\}\rightarrow \{1,\ldots, n\}$ is given by $i \mapsto i+m\text{ mod } n$. Thus $$ A=\langle x_1,\ldots,x_n, t \ | \ tx_it^{-1}=x_{\sigma(i)} \ \text{for} \ 1\leq i\leq n\rangle.$$ We finally remark that any element $a\in A$ can be uniquely written as $w\cdot t^z$ for some $w\in F(x_1,\ldots, x_n)$ and $z\in \mathbb{Z}$. We next define cable knots. Let $V_0$ be the solid torus $D^2\times I/(z,0)\sim (\rho(z),1) $ and for some $z_0\in Int(D^2)-\{0\}$ let $\mathfrak{k}_0$ be the image of $\{\rho^i(z_0)\hspace{0.5mm}|\hspace{0.5mm}1\leq i\leq n\}\times I$ under the quotient map. Note that $\mathfrak{k}_0$ is a simple closed curve contained in the interior of $V_0$. Let $\mathfrak{k}_1$ be a nontrivial knot in $S^3$ and $V(\mathfrak{k}_1)$ a regular neighborhood of $\mathfrak{k}_1$ in $S^3$. Let further $h:V_0\rightarrow V(\mathfrak{k}_1)$ be a homeomorphism which maps the meridian $\partial D^2\times 1$ of $V_0$ to a meridian of $\mathfrak{k}_1$. The knot $\mathfrak{k}:=h(\mathfrak{k}_0)$ is called an $(m,n)$\emph{-cable knot} about $\mathfrak{k}_1$. Thus the knot manifold $X(\mathfrak{k})$ of an $(m,n)$-cable knot $\mathfrak{k}$ decomposes as $$ X(\mathfrak{k})=CS({m,n})\cup X(\mathfrak{k}_1)$$ with $\partial X(\mathfrak{k}_1)=CS({m,n})\cap X(\mathfrak{k}_1) $ an incompressible torus in $X(\mathfrak{k})$. 
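Since $\gcd(m,n)=1$, the permutation $\sigma$ is an $n$-cycle, and a suitable power of it is the standard cycle $(1 \ 2 \ \cdots \ n)$, a fact used in the proof below. This is easy to check numerically; the following sketch (the helper names are ours, not from the paper) verifies it for a sample coprime pair:

```python
from math import gcd

def sigma(m, n):
    # The permutation i -> i + m (mod n) on {1, ..., n}, stored as a dict.
    return {i: (i - 1 + m) % n + 1 for i in range(1, n + 1)}

def perm_power(p, k, n):
    # k-fold composition of the permutation p with itself.
    q = {i: i for i in range(1, n + 1)}
    for _ in range(k):
        q = {i: p[q[i]] for i in q}
    return q

m, n = 3, 5                      # any coprime pair works
assert gcd(m, n) == 1
p = sigma(m, n)

# sigma is an n-cycle: the orbit of 1 under sigma has length n.
orbit, i = set(), 1
for _ in range(n):
    orbit.add(i)
    i = p[i]
assert len(orbit) == n

# The power s with sigma^s(1) = 2 is the standard cycle (1 2 ... n),
# i.e. sigma^s(i) = i + 1 (mod n).
s = next(k for k in range(1, n) if perm_power(p, k, n)[1] == 2)
std = perm_power(p, s, n)
assert all(std[i] == i % n + 1 for i in range(1, n + 1))
print("s =", s)
```

The check reflects the identity $\sigma^s(i) = i + sm \bmod n$: the exponent $s$ is the multiplicative inverse of $m$ modulo $n$.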
It follows from the Seifert--van Kampen theorem that $$G(\mathfrak{k})=A\ast_{C}B$$ where $B=G(\mathfrak{k}_1)$ and $C=\pi_1 (\partial X(\mathfrak{k}_1))$. Denote by $m_1$ the meridian of $\mathfrak{k}_1$ and note that in $A$ we have $m_1=x_1\cdot \ldots\cdot x_n$. In turn, the meridian $m\in G(\mathfrak{k})$ of $\mathfrak{k}$ is written as $m=x_1\in A$. The proof of Theorem~\ref{T1} is divided into two steps. In Lemma~\ref{L1} we exhibit elements that normally generate the group of the cable knot and next, in Lemma~\ref{L2}, we prove that these killers are indeed inequivalent. Choose $s\in \{1,\ldots,n-1\}$ such that $\sigma^s(1)=2$. Since $\sigma^s(i)=i+sm \text{ mod }n$ it follows that $\sigma^s= (1 \ \ 2 \ \ 3 \ \ldots \ n-1 \ \ n)$. \begin{lemma}\label{L1} Let $\mathfrak{k}$ be an $(m,n)$-cable knot about a nontrivial knot $\mathfrak{k}_1$. Then for each $l\geq 1$ the element $$g_l:=x_{1}^lx_2^{-(l-1)}=x_1^l\cdot (t^sx_1t^{-s})^{-(l-1)}$$ normally generates the group of $\mathfrak{k}$. \end{lemma} \begin{proof} The first step of the proof is to show that the group of the companion knot is contained in $\langle\langle g_l\rangle\rangle_{G(\mathfrak{k})}$. \textbf{Claim 1:} The meridian $m_1=x_1 \cdot\ldots\cdot x_n$ of $\mathfrak{k}_1$ belongs to $\langle\langle g_l\rangle\rangle_{G(\mathfrak{k})}$. Consequently, $B=\langle\langle m_1\rangle\rangle_{B}=\langle\langle m_1\rangle\rangle_{G(\mathfrak{k})}\cap B \subseteq\langle\langle g_l \rangle\rangle_{G(\mathfrak{k})}$. Note that for $0\leq i\leq n-1$ we have $t^{is}g_lt^{-is}=x_{i+1}^lx_{i+2}^{-(l-1)}$, where indices are taken mod $n$. 
Thus \begin{eqnarray}\nonumber x_1\cdot \ldots\cdot x_n &=& x_1^{-(l-1)}(x_1^lx_2\cdot \ldots\cdot x_nx_1^{-(l-1)})x_1^{l-1} \\ \nonumber &=& x_1^{-(l-1)} \Big(\prod_{i=0}^{n-1}x_{i+1}^lx_{i+2}^{-(l-1)}\Big)x_1^{l-1} \\ \nonumber &=& x_1^{-(l-1)}\Big(\prod_{i=0}^{n-1} t^{is}\cdot g_l\cdot t^{-is}\Big)x_1^{l-1}\\ \nonumber &=& \prod_{i=0}^{n-1}\Big(x_1^{-(l-1)} t^{is} \cdot g_l\cdot t^{-is}x_1^{l-1}\Big)\nonumber\\ & = & \prod_{i=0}^{n-1}\Big( (x_1^{-(l-1)}t^{is})\cdot g_l \cdot (x_1^{-(l-1)}t^{is})^{-1}\Big)\nonumber \end{eqnarray} which implies that $m_1\in \langle\langle g_l\rangle\rangle_{G(\mathfrak{k})}$. Thus Claim 1 is proved. From Claim 1 it follows that the peripheral subgroup $C=\pi_1(\partial X(\mathfrak{k}_1))$ of $\mathfrak{k}_1$ is contained in $\langle\langle g_l\rangle\rangle_{G(\mathfrak{k})}$ since $C\subseteq B$ and consequently we have $$G(\mathfrak{k})/\langle\langle g_l\rangle\rangle_{G(\mathfrak{k})} =A\ast_{C}B/\langle\langle g_l\rangle\rangle_{G(\mathfrak{k})} \cong A/\langle\langle g_l , C\rangle\rangle_A. $$ Thus, we need to show that $A/\langle\langle g_l, C\rangle\rangle_A=1$. It is easy to see that $A/\langle\langle C\rangle\rangle_A$ is cyclically generated by $\pi(x_1)$, where $\pi:A\rightarrow A/\langle\langle C\rangle\rangle_A$ is the canonical projection. The result now follows from the fact that $\pi(g_l)=\pi(x_1^l\cdot t^sx_1^{-(l-1)}t^{-s})=\pi(x_1)$. \end{proof} \begin{lemma}\label{L2} If $k\neq l$, then $g_k$ is not equivalent to $g_l$. \end{lemma} \begin{proof} Assume that $\phi:G(\mathfrak{k})\rightarrow G(\mathfrak{k})$ is an automorphism such that $\phi(g_l)=g_k$ and let $f:X(\mathfrak{k})\rightarrow X(\mathfrak{k})$ be a homotopy equivalence inducing $\phi$. 
From \cite[Theorem 14.6]{Johannson} it follows that $f$ can be deformed into $\hat{f}:X(\mathfrak{k})\rightarrow X(\mathfrak{k})$ so that $\hat{f}$ sends $X(\mathfrak{k}_1)$ homeomorphically onto $X(\mathfrak{k}_1)$ and $\hat{f}|_{CS(m,n)}:CS(m,n)\rightarrow CS(m,n)$ is a homotopy equivalence. Thus $\phi(A)$ is conjugate to $A$, that is, $\phi(A)=gAg^{-1}$ for some $g\in G(\mathfrak{k})$. Since $\phi(g_l)=g_k$, it follows that $g_k\in gAg^{-1}$. As $g_k$ is not conjugate (in $A$) to an element of $C$, it follows that $g\in A$ and so $\phi(A)=A$. By \cite[Proposition 28.4]{Johannson}, we may assume that $\hat{f}|_{CS(m,n)}$ is fiber preserving. Since $CS(m,n)$ has exactly one exceptional fiber, which represents $t$, we must have $\phi(t)=at^{\eta }a^{-1}$ for some $a=v\cdot t^{z_1}\in A$ and some $\eta\in \{\pm 1\}$. The automorphism $\phi|_{A}:A\rightarrow A$ induces an automorphism $\phi_{\ast}$ on the factor group $A/\langle t^n\rangle=\langle x_1,t \ | \ t^n=1\rangle=\mathbb{Z}\ast \mathbb{Z}_n$ such that $\phi_{\ast}(t)=at^{\eta}a^{-1}$. It is a standard fact about automorphisms of free products that we must have ${\phi}_{\ast}(x_1)=at^{e_0}x_1^{\varepsilon}t^{e_1}a^{-1}$ for $e_0,e_1\in \mathbb{Z}$ and $\varepsilon \in \{\pm 1\}$. Thus, $$\phi(x_1)=at^{e_0}x_1^{\varepsilon} t^{e_1}a^{-1}t^{dn}=at^{e_0}\cdot x_1^{\varepsilon}t^{e_0+e_1+dn}\cdot t^{-e_0}a^{-1}$$ for some $d\in \mathbb{Z}$. Since $t$ has non-trivial image in $H_1(X(\mathfrak{k}))$ it follows that $e_0+e_1+dn=0$. Consequently, $\phi(x_1)= b \cdot x_1^{\varepsilon}\cdot b^{-1}$, where $b=at^{e_0}=v\cdot t^{z_2}\in A$ and $z_2=z_1+e_0$. 
Hence we obtain \begin{eqnarray} \phi(g_l) & = & \phi(x_1^lx_2^{-(l-1)})\nonumber \\ & = & \phi(x_1^l\cdot t^{s}x_1^{-(l-1)}t^{-s})\nonumber \\ & = & bx_1^{\varepsilon l}b^{-1}\cdot at^{ \eta s}a^{-1}\cdot bx_1^{-\varepsilon(l-1)}b^{-1}\cdot at^{-\eta s}a^{-1}\nonumber \\ & = & vt^{z_2}x_1^{\varepsilon l}t^{-z_2}v^{-1}\cdot vt^{z_1}t^{\eta s}t^{-z_1}v^{-1}\cdot vt^{z_2}x_1^{-\varepsilon (l-1)}t^{-z_2}v^{-1}\cdot vt^{z_1}t^{-\eta s}t^{-z_1}v^{-1} \nonumber\\ & = & vx_i^{\varepsilon l} x_j^{-\varepsilon (l-1) }v^{-1} \nonumber \end{eqnarray} where $i=\sigma^{z_2}(1)$ and $j=\sigma^{z_2+\eta s}(1)$. Note that $i\neq j$ since $\sigma^{s}(1)=2$ and $\sigma^{-s}(1)=n$. Hence, $\phi(g_l)=g_k$ implies that $$ v(x_i^{\varepsilon l}\cdot x_j^{-\varepsilon (l-1) })v^{-1} = x_1^k\cdot x_2^{-(k-1)}$$ in $F(x_1,\ldots, x_n)$. Thus, in the abelianization of $F(x_1,\ldots,x_n)$ we have $$\varepsilon[lx_i+(1-l)x_j]=kx_1+(1-k)x_2$$ which implies that $\{i,j\}=\{1,2\}$. If $(i,j)=(1,2)$, then $\varepsilon l=k$ and so $k=|k|=|\varepsilon l|=l$. If $(i,j)=(2,1)$, then $ \varepsilon l=1-k$ and $\varepsilon(1-l)=k$. Consequently, $\varepsilon=1$ and $l+k=1$ which is impossible since $k,l\geq 1$. \end{proof} \section{Connected sums and killers} In this section we show that having infinitely many inequivalent killers is preserved under connected sums of knots. This fact, Theorem~\ref{T1}, and Corollary 1.3 of \cite{Silver} imply that the group of any knot whose exterior is a graph manifold has infinitely many inequivalent killers. Let $\mathfrak{k}$ be a knot and $\mathfrak{k}_1,\ldots, \mathfrak{k}_n$ its prime factors, that is, $\mathfrak{k}= \mathfrak{k}_1\sharp \ldots \sharp \mathfrak{k}_n$ and each $\mathfrak{k}_i$ is a nontrivial prime knot. Assume that $x\in G(\mathfrak{k}_i)$ is a killer of $G(\mathfrak{k}_i)$. It is well-known that $G(\mathfrak{k}_i)\leq G(\mathfrak{k})$ and $ \langle m\rangle \leq G(\mathfrak{k}_i)$ for all $i$, where $m$ denotes the meridian of $\mathfrak{k}$. 
From this we immediately see that $m\in \langle\langle x\rangle\rangle_{G(\mathfrak{k}_i)} \subseteq \langle\langle x\rangle\rangle_{G(\mathfrak{k})}$ which implies that $G(\mathfrak{k})=\langle \langle m\rangle\rangle_{G(\mathfrak{k})}\subseteq \langle\langle x\rangle\rangle_{G(\mathfrak{k})}$, i.e., $x$ is a killer of $G(\mathfrak{k})$. Now suppose that $x,y\in G(\mathfrak{k}_i)$ are killers of $G(\mathfrak{k}_i)$ and that there exists an automorphism $\phi$ of $G(\mathfrak{k})$ such that $\phi(x)=y$. The automorphism $\phi$ is induced by a homotopy equivalence $f:X(\mathfrak{k})\rightarrow X(\mathfrak{k})$. From \cite[Theorem 14.6]{Johannson} it follows that $f$ can be deformed into $\hat{f}:X(\mathfrak{k})\rightarrow X(\mathfrak{k})$ so that: \begin{enumerate} \item[1.] $\hat{f}{|_V}:V\rightarrow V$ is a homotopy equivalence, where $V=S^1\times(n-\text{punctured disk})$ is the peripheral component of the characteristic submanifold of $X(\mathfrak{k})$. \item[2.] $\hat{f}|_{\overline{X(\mathfrak{k})-V}} :\overline{X(\mathfrak{k})-V}\rightarrow \overline{X(\mathfrak{k})-V}$ is a homeomorphism. \end{enumerate} Note that $\overline{X(\mathfrak{k})-V}=X(\mathfrak{k}_1)\mathop{\dot{\cup}}\ldots \mathop{\dot{\cup}} X(\mathfrak{k}_n)$. Since $\hat{f}|_{\overline{X(\mathfrak{k})-V}}$ is a homeomorphism it follows that $\hat{f}$ sends $X(\mathfrak{k}_i)$ homeomorphically onto $X(\mathfrak{k}_{\tau(i)})$ for some permutation $\tau$ of $\{1,\ldots,n\}$. Consequently, there exists $g^{\prime}\in G(\mathfrak{k})$ such that $$\phi(G(\mathfrak{k}_i))=g^{\prime}G(\mathfrak{k}_{\tau(i)}){g^{\prime}}^{-1}.$$ If $\tau(i)=i$ and $g'\in G(\mathfrak{k}_i)$, then $\phi$ induces an automorphism $\psi:= \phi|_{G(\mathfrak{k}_i)}$ of $G(\mathfrak{k}_i)$ such that $\psi(x)=y$, i.e., $x$ and $y$ are equivalent in $G(\mathfrak{k}_i)$.
If $\tau(i)\neq i$ or $g'\notin G(\mathfrak{k}_{\tau(i)})$, then it is not hard to see that $y$ is conjugate (in $G(\mathfrak{k}_i)$) to an element of $\langle m\rangle$ since $y=\phi(x)\in G(\mathfrak{k}_i)\cap g^{\prime}G(\mathfrak{k}_{\tau(i)}){g^{\prime}}^{-1}$. As $\langle\langle m^k\rangle\rangle\neq G(\mathfrak{k})$ for $|k|\geq 2$ and $y$ normally generates $G(\mathfrak{k})$, we conclude that $y$ is conjugate (in $G(\mathfrak{k}_i)$) to $m^{\pm1}$. The same argument applied to $\phi^{-1}$ shows that $x$ is conjugate (in $G(\mathfrak{k}_i)$) to $m^{\pm1}$. Therefore, if the group of one of the prime factors of $\mathfrak{k}$ has infinitely many inequivalent killers, then so does the group of $\mathfrak{k}$. As a corollary of Theorem~\ref{T1} and the remark made above we obtain the following result. \begin{corollary} If $\mathfrak{k}$ is a knot such that $X(\mathfrak{k})$ is a graph manifold, then $G(\mathfrak{k})$ contains infinitely many pairwise inequivalent killers. \end{corollary} \begin{proof} From \cite{JS} it follows that the only Seifert-fibered manifolds that can be embedded into a knot manifold with incompressible boundary are torus knot complements, composing spaces and cable spaces. Thus, if $X(\mathfrak{k})$ is a graph manifold, then one of the following holds: \begin{enumerate} \item $\mathfrak{k}$ is a torus knot. \item $\mathfrak{k}$ is a cable knot. \item $\mathfrak{k}=\mathfrak{k}_1\sharp \ldots \sharp \mathfrak{k}_n$ where each $\mathfrak{k}_i$ is either a torus knot or a cable knot. \end{enumerate} Now the result follows from Theorem~\ref{T1} and \cite[Corollary 1.3]{Silver}. \end{proof} \end{document}
\begin{document} \global\long\def\eqn#1{\begin{align}#1\end{align}} \global\long\def\vec#1{\overrightarrow{#1}} \global\long\def\ket#1{\left|#1\right\rangle } \global\long\def\bra#1{\left\langle #1\right|} \global\long\def\bkt#1{\left(#1\right)} \global\long\def\sbkt#1{\left[#1\right]} \global\long\def\cbkt#1{\left\{#1\right\}} \global\long\def\abs#1{\left\vert#1\right\vert} \global\long\def\cev#1{\overleftarrow{#1}} \global\long\def\der#1#2{\frac{{d}#1}{{d}#2}} \global\long\def\pard#1#2{\frac{{\partial}#1}{{\partial}#2}} \global\long\def\Re{\mathrm{Re}} \global\long\def\Im{\mathrm{Im}} \global\long\def\d{\mathrm{d}} \global\long\def\dd{\mathcal{D}} \global\long\def\Smif{S_{\mathrm{M, IF}}} \global\long\def\Sint{S_{\mathrm{int}}} \global\long\def\avg#1{\left\langle #1 \right\rangle} \global\long\def\aavg#1{\left\llangle #1 \right\rrangle} \global\long\def\mr#1{\mathrm{#1}} \global\long\def\mb#1{{\mathbf #1}} \global\long\def\mc#1{\mathcal{#1}} \global\long\def\Tr{\mathrm{Tr}} \global\long\def\dbar#1{\Bar{\Bar{#1}}} \global\long\def\nth{$n^{\mathrm{th}}$\,} \global\long\def\mth{$m^{\mathrm{th}}$\,} \global\long\def\non{\nonumber} \newcommand{\orange}[1]{{\color{orange} {#1}}} \newcommand{\cyan}[1]{{\color{cyan} {#1}}} \newcommand{\blue}[1]{{\color{blue} {#1}}} \newcommand{\yellow}[1]{{\color{yellow} {#1}}} \newcommand{\green}[1]{{\color{green} {#1}}} \newcommand{\red}[1]{{\color{red} {#1}}} \global\long\def\todo#1{\orange{{$\bigstar$ \cyan{\bf\sc #1}}$\bigstar$} } \title{Dissipative dynamics of a particle coupled to field via internal degrees of freedom} \author{Kanupriya Sinha} \email{[email protected]} \affiliation{Department of Electrical Engineering, Princeton University, Princeton, New Jersey 08544, USA} \author{Adri\'{a}n Ezequiel Rubio L\'{o}pez} \email{[email protected]} \affiliation{Institute for Quantum Optics and Quantum Information of the
Austrian Academy of Sciences,\\ Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \author{Yi\u{g}it Suba\c{s}\i} \email{[email protected]} \affiliation{Computer, Computational and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA} \begin{abstract} We study the non-equilibrium dissipative dynamics of the center of mass of a particle coupled to a field via its internal degrees of freedom. We model the internal and external degrees of freedom of the particle as quantum harmonic oscillators in 1+1 D, with the internal oscillator coupled to a scalar quantum field at the center of mass position. Such a coupling results in a nonlinear interaction between the three pertinent degrees of freedom -- the center of mass, internal degree of freedom, and the field. It is typically assumed that the internal dynamics is decoupled from that of the center of mass owing to their disparate characteristic time scales. Here we use an influence functional approach that allows one to account for the self-consistent backaction of the different degrees of freedom on each other, including the coupled non-equilibrium dynamics of the internal degree of freedom and the field, and their influence on the dissipation and noise of the center of mass. Considering a weak nonlinear interaction term, we employ a perturbative generating functional approach to derive a second order effective action and a corresponding quantum Langevin equation describing the non-equilibrium dynamics of the center of mass. We analyze the resulting dissipation and noise arising from the field and the internal degree of freedom as a composite environment. Furthermore, we establish a generalized fluctuation-dissipation relation for the late-time dissipation and noise kernels. Our results are pertinent to open quantum systems that possess intermediary degrees of freedom between system and environment, such as in the case of optomechanical interactions. 
\end{abstract} \maketitle \section{Introduction} Dissipative non-equilibrium dynamics of complex quantum systems comprising different degrees of freedom is a subject of significant interest from the perspective of the theory of open quantum systems, as well as emerging quantum information applications and devices \cite{BPBook, Weiss, Clerk20}. While the dissipative dynamics of a reduced quantum system coupled linearly to its environment has been studied extensively, the case of a nonlinear coupling between a system and its environment via an intermediate degree of freedom and its effect on the resulting dynamics is seldom discussed \cite{HPZ93, Maghrebi16, Lampo16}. A commonly employed assumption is that the time-scales associated with the different degrees of freedom are sufficiently disparate so that their dynamics can be treated as being effectively decoupled from each other \cite{Stenholm86, Lan08, Haake}. Such an assumption precludes the possibility of studying the rich interplay between the different degrees of freedom, and its effect on the non-equilibrium dynamics of the system of interest. Moreover, an adiabatic elimination of fast variables does not allow one to see the effects of the non-equilibrium dynamics and quantum fluctuations of the fast degrees of freedom on the system dissipation and noise. A canonical example of such a complex open quantum system is the optomechanical interaction between a neutral polarizable particle and a field, for instance, a moving atom, nanoparticle, or molecule interacting with the electromagnetic (EM) field \cite{Wieman, Yin, Molecules}. As an essential feature, these systems possess internal electronic degrees of freedom that interact with the EM field and are located at the center of mass position represented by the mechanical degree of freedom (MDF).
While the MDF of the particle does not couple to the EM field directly, its internal degrees of freedom (IDF) interact with the EM field at the center of mass position, thus mediating an effective coupling between the MDF and the field. Considering the MDF as the system of interest, with the EM field and the IDF acting as an environment, such an effective interaction leads to dissipation and noise in system dynamics. There is an interesting interplay between the three degrees of freedom (the MDF, the IDF, and the EM field) that takes place in such a scenario as has been previously explored in \cite{KSthesis, MOF1, MOF2, Wang14}. Considering the dynamics from an approach that includes the self-consistent backaction of all the degrees of freedom on each other allows one to examine the role of IDF in the center of mass dynamics. For example, it has been shown that the IDF can facilitate a transfer of correlations between the field and center of mass in the case of optomechanical interactions \cite{KSthesis, MOF2} and affect the center of mass decoherence \cite{Brun16}. On the other hand, it has also been demonstrated that including the quantized center of mass motion can affect the radiation reaction and thermalization of the IDF \cite{AERL19}. We study here a minimal model that captures the interplay between different degrees of freedom in a composite system and the nonequilibrium dissipative center of mass dynamics. Such a model forms the basis for optomechanical interactions between neutral particles and fields, and more generally applies to open quantum systems that couple to an environment via internal degrees of freedom. The remainder of this paper is organized as follows. 
In Section \ref{model} we describe the model under consideration; in Section \ref{derivation} we derive a second order effective action for the center of mass by integrating out the IDF and the field using a perturbative influence functional approach \cite{HPZ93}. In Section \ref{Dynamics} we write an effective Langevin equation for the center of mass, discuss the resulting dynamics and present a generalized fluctuation-dissipation relation. We summarize our findings and present an outlook of the work in Section \ref{Discussion}. \begin{figure} \caption{Schematic representation of a mechanical oscillator of mass $M$ and frequency $\Omega $ coupled to a massless scalar field $\Phi$ via an IDF. The internal oscillator with position coordinate $Q$ is described as a harmonic oscillator of mass $m $ and frequency $\omega_0 $ and couples to the scalar field $\Phi\bkt{x,t}$ at the center of mass position.} \label{Schematic} \end{figure} \section{Model} \label{model} Let us consider a particle with its center of mass described by a mechanical oscillator of mass $M$ and frequency $\Omega$, and its polarization described by an IDF as a harmonic oscillator of mass $m$ and resonance frequency $\omega_0$. The composite system is assumed to be interacting with a quantum field, which we consider to be a scalar field $\Phi(x,t)$ in $1+1$ D.
The total action of the system can be written as \eqn{ S = S_{M}+S_{I}+S_{F}+S_{\mathrm{int}}, \label{TotalAction} } where \eqn{S_M &= \int\mathrm{d}\tau\bkt{ \frac{1}{2}M \dot{X}^2-\frac{1}{2}M\Omega^2 X^2}} refers to the free action for the mechanical oscillator with $\cbkt{X, \dot {X}}$ referring to the center of mass position and velocity, while \eqn{ S_I &= \int\mathrm{d}\tau\bkt{ \frac{1}{2}m \dot{Q}^2-\frac{1}{2}m\omega_0^2 Q^2}} corresponds to the free action for the internal degree of freedom with amplitude $Q$, and \eqn{ S_F &= \int\mathrm{d}\tau\int\mathrm{d}{x} \frac{\epsilon_0}{2}\sbkt{\bkt{\partial_t\Phi(x,\tau)}^2- \bkt{\partial_x\Phi(x,\tau)}^2}} is the free action for the scalar field $\Phi\bkt{x,t}$, $\epsilon_0 $ being the vacuum permittivity. We set $ \hbar = c = 1$ throughout this paper. The correspondence with the EM field is established by identifying the scalar field $\Phi\bkt{{x},t}$ as the vector potential. Considering that the particle interacts with the field via its IDF, at the position determined by the MDF, the interaction action is given as \eqn{ S_{\mathrm{int}} &= \int\mathrm{d}\tau \int \mathrm{d}{x} \lambda Q\Phi({x},\tau)\delta\bkt{{x}-X(\tau)} } with the strength of the interaction defined by $\lambda$. We note that the above interaction action is nonlinear, with the two degrees of freedom and the field interacting together. Such a model, previously referred to as the mirror-oscillator-field model, has been studied in the context of describing optomechanical interactions from a microscopic perspective \cite{MOF1, MOF2, KSthesis}. It provides a self-consistent treatment of the different degrees of freedom involved, in that one does not need to impose the boundary conditions on the field by hand, and the mechanical effects of the field are consistently described upon tracing over the IDF.
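As a consistency check on the model (not part of the paper's derivation), one can freeze the field to an external function of position and verify the Euler-Lagrange equations that follow from $S_M+S_I+S_{\mathrm{int}}$ after carrying out the delta-function integration; the sketch below uses \texttt{sympy}:

```python
import sympy as sp

t = sp.Symbol('t')
M, Om, m, w0, lam = sp.symbols('M Omega m omega_0 lambda', positive=True)
X = sp.Function('X')(t)          # center of mass (MDF)
Q = sp.Function('Q')(t)          # internal degree of freedom (IDF)
Phi = sp.Function('Phi')         # field profile, frozen to a function of position

# Point-particle Lagrangian from S_M + S_I + S_int after the delta integration.
Lag = (sp.Rational(1, 2)*M*X.diff(t)**2 - sp.Rational(1, 2)*M*Om**2*X**2
       + sp.Rational(1, 2)*m*Q.diff(t)**2 - sp.Rational(1, 2)*m*w0**2*Q**2
       + lam*Q*Phi(X))

def euler_lagrange(Lag, q):
    """d/dt (dL/d qdot) - dL/dq, i.e. the left-hand side of the EOM = 0."""
    return sp.diff(sp.diff(Lag, q.diff(t)), t) - sp.diff(Lag, q)

# M Xdd + M Omega^2 X = lambda Q Phi'(X): the MDF feels the field gradient.
eomX = euler_lagrange(Lag, X)
assert sp.simplify(eomX - (M*X.diff(t, 2) + M*Om**2*X - lam*Q*Phi(X).diff(X))) == 0
# m Qdd + m omega_0^2 Q = lambda Phi(X): the IDF is driven by the local field.
eomQ = euler_lagrange(Lag, Q)
assert sp.simplify(eomQ - (m*Q.diff(t, 2) + m*w0**2*Q - lam*Phi(X))) == 0
```

The two equations make explicit that the mechanical force on the MDF enters only through the gradient of the field weighted by the IDF amplitude, in line with the discussion above.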
In typical experimental systems of relevance, such as those of atoms and nanoparticles confined in traps, the center of mass motion is restricted to a region of space smaller than the wavelengths of the field that interact resonantly with the internal degrees. Thus, motivated by practical considerations, we assume that the motion of the MDF is restricted to small displacements about the average equilibrium position $X_0$, such that we can expand the interaction action to first order in displacements from the equilibrium as follows \eqn{ S_\mr{int}&\approx\int\mathrm{d}\tau \sbkt{\lambda Q\Phi(X_0,\tau)+\lambda Q\partial_x\Phi(X_0,\tau) \bkt{X - X_0}}. \label{IntAction}} The first term in the above action represents the linear interaction between the IDF and the field, similar to the $\sim \mb{p}\cdot\mb{A}$ coupling in atom-field interactions. The second term stands for the tripartite interaction between the IDF, the MDF and the field. Thus, as we show in the following, the effective coupling of the center of mass to the field via the IDF leads to its dissipative dynamics. From now on, for simplicity and without loss of generality, we assume $X_0=0$. \subsection{Scalar field as bath of harmonic oscillators} The above separation of the total interaction action (Eq.\eqref{IntAction}) suggests splitting the field into two separate oscillator baths such that one of the baths interacts with the IDF, while the other is coupled to both the internal and mechanical degrees. We thus make an eigenmode decomposition of the field to rewrite it in terms of a bath of oscillators as follows \cite{QBM3} \eqn{ \Phi({x},\tau) = \sum_n\sqrt{ \frac{1}{\epsilon_0{L}}} \sbkt{q_n^{(-)} \cos({\omega}_n{x})+q_n^{(+)} \sin({\omega}_n{x})}, } where $L$ is the mode volume of the field, $\omega_n$ refers to the mode frequency, and $q_n^{(+)}$ and $q_n^{(-)}$ refer to two independent sets of eigenmodes of the field, assuming periodic boundary conditions.
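As a quick symbolic check of this decomposition (a sketch with three modes, keeping the normalization $\sqrt{1/(\epsilon_0 L)}$ explicit), the two terms of Eq.~\eqref{IntAction} at $X_0=0$ pick out disjoint sets of mode amplitudes: $\Phi(0,\tau)$ involves only the $q_n^{(-)}$, while $\partial_x\Phi(0,\tau)$ involves only the $q_n^{(+)}$ weighted by $\omega_n$:

```python
import sympy as sp

x, L, eps0 = sp.symbols('x L epsilon_0', positive=True)
w = sp.symbols('omega_1:4', positive=True)   # three modes for illustration
qm = sp.symbols('qm1:4')                     # q_n^(-) amplitudes
qp = sp.symbols('qp1:4')                     # q_n^(+) amplitudes

norm = sp.sqrt(1 / (eps0 * L))
Phi = sum(norm * (qm[n] * sp.cos(w[n] * x) + qp[n] * sp.sin(w[n] * x))
          for n in range(3))

# At X_0 = 0 the field itself contains only the (-) amplitudes ...
assert sp.simplify(Phi.subs(x, 0) - norm * sum(qm)) == 0
# ... while the gradient coupling contains only the (+) amplitudes,
# each weighted by its mode frequency omega_n.
grad0 = sp.diff(Phi, x).subs(x, 0)
assert sp.simplify(grad0 - norm * sum(w[n] * qp[n] for n in range(3))) == 0
```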
One can thus rewrite the free field action as a sum $S_F = S_F ^{(+)}+S_F ^{(-)} $, where \eqn{ S_F^{(\pm)} = \int \mathrm{d} \tau\sum_n \frac{1}{2} \sbkt{\left.\dot{q}_n^{(\pm)}\right.^2-\omega_n^2\left.{q}_n^{(\pm)}\right.^2} } with $S_F^{(\pm)}$ corresponding to two separate baths of $(+)$ and $(-)$ oscillators. Thus, for $X_0 =0$ the interaction action in Eq.\eqref{IntAction} can be written as a sum of two separate interaction terms $ S_{\mathrm{int}} = S_{\mathrm{int}}^{(-)}+S_{\mathrm{int}}^{(+)} $, with \eqn{ \label{Sint-} S_{\mathrm{int}} ^{(-)}&= \int\mathrm{d}\tau\lambda Q\sum_nq_n^{(-)}\\ \label{Sint+} S_{\mr{int}}^{(+)}&=\int\mathrm{d}\tau\lambda Q\sum_n\omega_nX q_n^{(+)} . } Thus the interaction action $S_\mr{int}^{(-)}$ linearly couples the $(-)$ bath of oscillators of the scalar field to the IDF, leading to the dissipative dynamics of the IDF. The interaction action $S_\mr{int}^{(+)}$ couples the center of mass to the $(+)$ bath of oscillators via the IDF. We note that $S_\mr{int}^{(+)}$ is essentially responsible for all the mechanical effects of the field, such as radiation pressure due to optical fields and vacuum-induced forces \cite{PierreBook, PeterQOBook}. For instance, one can see that eliminating the IDF from the interaction action $S_\mr{int}^{(+)}$ yields an effective nonlinear interaction between the MDF and the field, similar to the intensity-position coupling in optomechanical interactions \cite{KSthesis}. We also remark here that the above separation of the field into two uncorrelated baths is akin to the Einstein-Hopf theorem that establishes the statistical independence of the blackbody radiation field and its derivative \cite{EinHopf1, EinHopf3}. \section{Effective action derivation from perturbative influence functional approach} \label{derivation} The nonlinearity of the interaction action $S_\mr{int} ^{(+)}$ limits the possibility of obtaining an exact analytical solution for the center of mass dynamics.
We therefore follow a perturbative generating functional approach as prescribed in \cite{HPZ93}, assuming that the characteristic nonlinear coupling strength $\bkt{\sim \lambda \omega_n}$ can be treated perturbatively in deriving the resulting dynamics of the center of mass. Let us consider the evolution of the system density matrix $\hat \rho_M $ as follows \eqn{\label{rhomt}&\hat \rho_M\bkt{t}= \mathrm{Tr}_I \mathrm{Tr}_{F^+}\mathrm{Tr}_{F^-}\sbkt{\hat{\mc{U}}(t,0)\bkt{\hat \rho_M(0)\otimes\hat \rho_I(0)\otimes\hat \rho_F^{(+)}(0)\otimes \hat\rho_F^{(-)}(0)}\hat{\mc{U}}^\dagger(t,0)},} where we have assumed that the density operators for the different degrees of freedom are initially uncorrelated with each other, with $\hat \rho_I $ and $\hat \rho_F^{(\pm)}$ referring to the density matrices for the IDF and the $(\pm)$ bath, and $ \hat{\mc{U}}(t,0)$ corresponding to the time evolution operator. We write the density matrices in the coordinate representation as \eqn{\hat{\rho}_M (\tau) = &\int \mathrm{d} X \mathrm{d} X' \rho_M\bkt{X,X';\tau} \ket{X}\bra{X'},\\ \hat{\rho}_I (\tau) = &\int \mathrm{d} Q \mathrm{d} Q' \rho_I\bkt{Q,Q';\tau} \ket{Q}\bra{Q'},\\ \hat{\rho}_{F}^{(\pm)} (\tau) = &\prod_n \int \mathrm{d} q_n ^{(\pm)} \mathrm{d} {q^{(\pm)}_n}' \rho_F^{(\pm)}\bkt{\cbkt{ q_n ^{(\pm)}, {q^{(\pm)}_n}'};\tau} \ket{ q_n^{(\pm)} }\bra{ {q^{(\pm)}_n}'},} so that the system evolution in Eq.\,\eqref{rhomt} can be rewritten as \begin{widetext} \eqn{ \rho_M\bkt{{X}_f,{X}_f';t}=&\prod_{m,n} \int\mathrm{d} Q_f \mathrm{d} q^{(+)}_{nf} \mathrm{d} q^{(-)}_{mf}\int\mathrm{d} {X}_i\mathrm{d} Q_i \mathrm{d} q^{(+)}_{ni} \mathrm{d} q^{(-)}_{mi}\int\mathrm{d} {X}_i'\mathrm{d} Q_i' \mathrm{d} {q^{(+)}_{ni}}' \mathrm{d} {q^{(-)}_{mi}}' \nonumber\\ &\sbkt{\rho_M\bkt{{X}_i,{X}_i';0}\otimes\rho_I\bkt{Q_i,Q_i';0}\otimes\rho_F\bkt{\cbkt{q^{(+)}_{ni},{q^{(+)}_{ni}}'};0}\otimes\rho_F\bkt{\cbkt{q^{(-)}_{mi},{q^{(-)}_{mi}}'};0}\right.\nonumber\\ &\left.\mc{J}\bkt{X_f, Q_f , \cbkt{
{q_{nf}^{(+)}}, {q_{mf}^{(-)}}};t\big\vert X_i, Q_i , \cbkt{ {q_{ni}^{(+)}}, {q_{mi}^{(-)}}};0}\mc{J}^\dagger\bkt{X'_f, Q_f ,\cbkt{ {q_{nf}^{(+)}}, {q_{mf}^{(-)}}};t\big\vert X'_i, Q'_i ,\cbkt{ {q_{ni}^{(+)}}', {q_{mi}^{(-)}}'};0}}, } \end{widetext} where $\cbkt{{X}_i, {X}_i'}$ and $\cbkt{ {X}_f, {X}_f'}$ refer to the initial and final coordinates corresponding to the center of mass variables respectively, $\cbkt{ Q_i,Q_i'; Q_f, Q_f' }$ are those for the internal oscillator, and $\cbkt{ q_{ni}^{(\pm)} , {q_{ni}^{(\pm)}}' ; q_{nf}^{(\pm)}, {q_{nf}^{(\pm)}}'}$ are the initial and final coordinates for the $n^\mr{th}$ oscillator of the $(\pm)$ bath. We assume that the initial states $\rho_F^{(\pm)}\bkt{\cbkt{q^{(\pm)}_{ni},{q^{(\pm)}_{ni}}'};0}$ of the field and $\rho_I\bkt{Q_i,Q_i';0}$ of the IDF are thermal with a temperature $T_F$ and $T_I$ respectively. The forward propagator is defined as $\mc{J}\bkt{X_f, Q_f , \cbkt{ q_{nf}^{(+)}, q_{mf}^{(-)}};t\big\vert X_i, Q_i ,\cbkt{ q_{ni}^{(+)}, q_{mi}^{(-)}};0}\equiv \prod_{m,n} \bra{{X}_f}\bra{Q_f}\bra{q^{(+)}_{nf}}\bra{q^{(-)}_{mf}}\hat{\mc{U}} \bkt{t,0}\ket{q^{(-)}_{mi}}\ket{q^{(+)}_{ni}}\ket{Q_i}\ket{{X}_i}$. 
Thus the first and last terms in the integral refer to the forward and backward propagators that can be expressed in path integral representation to write the evolution as \eqn{ &\rho_M\bkt{{X}_f,{X}_f';t}=\int\mathrm{d} {X}_i\int\mathrm{d} {X}_i'\,\rho_M\bkt{{X}_i,{X}_i';0}\int_{{X}(0) = {X}_i}^{{X}(t) = {X}_f}\mathcal{D} {X}\int_{{X}'(0) = {X}'_i}^{{X}'(t) = X'_f}\mathcal{D} {X}' e^{i\bkt{S_M[{X}] - S_M[{X}']}}\mc{F}\sbkt{{X},{X}'}, \label{rhoMx} } where $\mc{F}\sbkt{{X}, {X}'}$ refers to the influence functional of the field and the internal oscillator acting on the MDF \cite{FeynmanTrick}, which can be written explicitly as \eqn{\label{IFxx'} &\mc{F}\sbkt{{X},{X}'}\equiv\nonumber\\ &\prod_{m,n} \int\mathrm{d} Q_f\mathrm{d} q^{(+)}_{nf} \mathrm{d} q^{(-)}_{mf} \int \mathrm{d} Q_i \mathrm{d} q^{(+)}_{ni} \mathrm{d} q^{(-)}_{mi}\int \mathrm{d} Q_i' \mathrm{d} {q^{(+)}_{ni}}'\mathrm{d} {q^{(-)}_{mi}}'\rho_I\bkt{Q_i,Q_i';0}\otimes\rho_F^{(+)}\bkt{\cbkt{q^{(+)}_{ni},{q^{(+)}_{ni}}'};0}\otimes \rho_F^{(-)}\bkt{\cbkt{q^{(-)}_{mi},{q^{(-)}_{mi}}'};0}\nonumber\\ &\int_{Q(0) = Q_i}^{Q(t) = Q_f}\mathcal{D} Q\int_{Q'(0) = Q'_i}^{Q'(t) = Q_f}\mathcal{D} Q' e^{i\bkt{S_I[Q] - S_I[Q']}} \int_{q^{(+)}_n(0) = q^{(+)}_{ni}}^{q^{(+)}_{n}(t) = q^{(+)}_{nf}}\mathcal{D} q_n^{(+)}\int_{{q^{(+)}_n}'(0) = {q^{(+)}_{ni}}'}^{{q^{(+)}_{n}}'(t) = q^{(+)}_{nf}}\mathcal{D} {q^{(+)}_n}' e^{i\bkt{S_{F}^{(+)}\sbkt{\cbkt{q_n^{(+)}}} - S_{F}^{(+)}\sbkt{\cbkt{{q_n^{(+)}}'}}}} \nonumber\\ &\int_{q^{(-)}_m(0) = q^{(-)}_{mi}}^{q^{(-)}_{m}(t) = q^{(-)}_{mf}}\mathcal{D} q_m^{(-)}\int_{{q^{(-)}_m}'(0) = {q^{(-)}_{mi}}'}^{{q^{(-)}_{m}}'(t) = q^{(-)}_{mf}}\mathcal{D} {q^{(-)}_m}' e^{i\bkt{S_{F}^{(-)}\sbkt{\cbkt{q_m^{(-)}}} - S_{F}^{(-)}\sbkt{\cbkt{{q_m^{(-)}}'}}}}e^{i\bkt{S_{\mr{int}}^{(-)}\sbkt{ Q,\cbkt{q_m^{(-)}}} - S_{\mr{int}}^{(-)}\sbkt{ Q',\cbkt{{q_m^{(-)}}'}}} }\nonumber\\ &e^{i\bkt{S_{\mr{int}}^{(+)}\sbkt{{X}, Q,\cbkt{q_n^{(+)}}} - S_{\mr{int}}^{(+)}\sbkt{{X}', Q',\cbkt{{q_n^{(+)}}'}}}}.
} Thus we have treated here the MDF as the system and the IDF and the field as the bath, with the influence functional capturing the influence of the bath on the evolution of the MDF. We will now evaluate the influence functional in a perturbative manner, by first tracing out the $(-)$ bath modes that couple only to the IDF, and then the $(+) $ bath modes and the IDF that couple to the center of mass. \subsection {Tracing over the $(-)$ bath} Let us define ${\mc{F}^{(-)}\sbkt{Q, Q'}}$ as the influence functional that accounts for the influence of the $(-)$ bath on the IDF as follows \eqn{\label{IF-} \mc{F}^{(-)}\sbkt{ Q, Q'} \equiv \prod_m\int\mathrm{d} q^{(-)}_{mf}\int\mathrm{d} q^{(-)}_{mi}\mathrm{d} {q^{(-)}_{mi}}'\rho_F^{(-)}\bkt{\cbkt{q^{(-)}_{mi},{q^{(-)}_{mi}}'};0}\int_{q^{(-)}_m(0) = q^{(-)}_{mi}}^{q^{(-)}_{m}(t) = q^{(-)}_{mf}}\mathcal{D} q_m^{(-)}\int_{{q^{(-)}_m}'(0) = {q^{(-)}_{mi}}'}^{{q^{(-)}_{m}}'(t) = q^{(-)}_{mf}}\mathcal{D} {q^{(-)}_m}'\nonumber\\e^{i\bkt{S_{F}^{(-)}\sbkt{\cbkt{q_m^{(-)}}} - S_{F}^{(-)}\sbkt{\cbkt{{q_m^{(-)}}'}}}}e^{i\bkt{S_{\mr{int}}^{(-)}\sbkt{ Q,\cbkt{q_m^{(-)}}} - S_{\mr{int}}^{(-)}\sbkt{ Q',\cbkt{{q_m^{(-)}}'}}}}. } Performing the path integrals by considering an initial thermal state of the $(-) $ bath oscillators with temperature $T_F$ corresponding to the temperature of the field, one obtains \cite{CalzettaHu} \eqn{\label{Fexp-} \mc{F}^{(-)}\sbkt{ Q, Q'}= e^{{i}S^{(-)}_{\mr{I, IF}}\sbkt{ Q, Q'}}, } where the influence action due to the $(-)$ bath is given as \eqn{ S^{(-)}_{\mr{I, IF}}\sbkt{ Q, Q'}\equiv-\int_0^t\mathrm{d} \tau_1\int_0^{\tau_1}\mathrm{d}\tau_2 \sbkt{ Q\bkt{\tau_1} - Q'\bkt{\tau_1} }\eta^{(-)}\bkt{\tau_1 - \tau_2} \sbkt{ Q \bkt{\tau_2} + Q' \bkt{\tau_2}}\nonumber\\ +i\int_0^t\mathrm{d} \tau_1\int_0^{\tau_1}\mathrm{d}\tau_2 \sbkt{ Q\bkt{\tau_1} - Q'\bkt{\tau_1} }\nu^{(-)}\bkt{\tau_1 - \tau_2} \sbkt{ Q \bkt{\tau_2} - Q' \bkt{\tau_2}}.
\label{smineff} } The above influence action corresponds to the quantum Brownian motion (QBM) model with a linear system-bath coupling \cite{HPZ92}. The dissipation and noise kernels, $\eta^{(-)}(\tau)$ and $\nu^{(-)}(\tau)$ respectively, are \eqn{ \eta^{(-)}(\tau) =&- \sum_n \frac{\lambda^2 }{2 \omega_n }\sin\bkt{\omega_n \tau}{\Theta(\tau)}\label{DissQBMMinus}\\ \nu^{(-)}(\tau) =& \sum_n \frac{\lambda^2}{2 \omega_n }\coth\bkt{\frac{ \omega_{n}}{2k_BT_F}}\cos\bkt{\omega_n \tau}.\label{NoiseQBMMinus} } The spectral density associated with the $(-)$ bath is thus \eqn{\label{J-} J^{(-)}\bkt{\omega} = \sum_n \frac{\lambda^2 }{2 \omega_n } \delta\bkt{\omega - \omega_n }. } We note from the above that the influence of the $(-)$ bath on the IDF leads to its QBM dynamics, with a sub-Ohmic spectral density, which necessitates a non-Markovian treatment of the resulting dynamics \cite{Weiss}. We now consider the effect of the IDF in turn on the dynamics of the center of mass. Using Eq.\,\eqref{IF-} and \eqref{Fexp-}, we can rewrite the total influence functional in Eq.\,\eqref{IFxx'} as \eqn{\label{Fxx'} \mc{F}\sbkt{{X},{X}'}=\prod_{n} &\int\mathrm{d} Q_f\mathrm{d} q^{(+)}_{nf}\int \mathrm{d} Q_i\mathrm{d} q^{(+)}_{ni} \int \mathrm{d} Q_i' \mathrm{d} {q^{(+)}_{ni}}'\,\rho_I\bkt{Q_i,Q_i';0}\otimes\rho_F^{(+)}\bkt{\cbkt{q^{(+)}_{ni},{q^{(+)}_{ni}}'};0}\nonumber\\ &\int_{Q(0) = Q_i}^{Q(t) = Q_f}\mathcal{D} Q\int_{Q'(0) = Q'_i}^{Q'(t) = Q_f}\mathcal{D} Q' e^{i\bkt{S_I[Q] - S_I[Q']}} \int_{q_n^{(+)}(0) = q_{ni}^{(+)}}^{q_{n}^{(+)}(t) = q_{nf}^{(+)}}\mathcal{D} q_n^{(+)}\int_{{q^{(+)}_n}'(0) = {q^{(+)}_{ni}}'}^{{q^{(+)}_n}'(t) = q^{(+)}_{nf}}\mathcal{D} {q^{(+)}_n}' \nonumber\\ &e^{i\bkt{S_{F}\sbkt{\cbkt{q_n^{(+)}}} - S_{F}\sbkt{\cbkt{{q_n^{(+)}}'}}}} e^{i\bkt{S_{\mr{int}}^{(+)}\sbkt{{X},Q,\cbkt{q_n^{(+)}}} - S^{(+)}_{\mr{int}}\sbkt{{X}', Q',\cbkt{{q_n^{(+)}}'}}}}e^{i S_{\mr{I, IF}}^{(-)}\sbkt{Q,Q'}}, } where the integrations over the IDF and the $(+)$ bath of oscillators
remain. Thus far the above influence functional is exact in that we have not made any additional approximations with regard to the strength of coupling when tracing over the $(-)$ bath. \subsection{Tracing over the $(+)$ bath and the internal degree of freedom}\label{TracingInternalDOF} Next we would like to integrate out the $(+)$ bath and the IDF, which are nonlinearly coupled to the system, using a perturbative generating functional approach. The generating functional is simply the influence functional of the environment where the bath oscillators are linearly coupled to the system. We define the influence action $S_\mr{M,IF}\sbkt{X,X'}$ corresponding to the influence functional $\mc F\sbkt{{X},{X}'}$ in Eq.\eqref{Fxx'} such that $\mc F\sbkt{{X},{X}'}\equiv e^{i S_\mr{M,IF}}$. The influence action up to second order in the coupling strength $\lambda$ can be obtained as (see Appendix \ref{App:seff} for a detailed derivation) \cite{HPZ93}: \eqn{\label{seff} S_{\mathrm{M, IF}}^{(2)}[{X},{X}'] \approx& \avg{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}_0 - \avg{S_{\mathrm{int}}^{(+)}\sbkt{{X}',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}_0\nonumber\\ &+\frac{i}{2}\bkt{ \avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}^2}_0 - \cbkt{\avg{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}_0}^2 }\nonumber\\ & +\frac{i}{2}\bkt{ \avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{{X}',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}^2}_0 - \cbkt{\avg{S_{\mathrm{int}}^{(+)}\sbkt{{X}',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}_0}^2}\nonumber\\ &-i\bkt{ \avg{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta {\bf J}}}S_{\mathrm{int}}^{(+)}\sbkt{{X}',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}_0 - \avg{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}_0\avg{S_{\mathrm{int}}^{(+)}\sbkt{{X}',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}_0 },} where we
have defined $ {\bf J}\equiv \bkt{J,\cbkt{J_n}}$, $ {\bf J'}\equiv \bkt{J',\cbkt{J'_n}}$, and the expectation values $\avg{\mc{O}[{J},{J'}]}_0\equiv\left. \mc{O}[{J},{J'}]\mc G^{(1)}[{J},{J'}]\right\vert_{{ J} = { J'}=0}$. The generating functional $\mc G^{(1)}[{J},{J'}]$ for the $(+) $ bath and the IDF is defined as \eqn{\label{Gentot} \mc{G}^{(1)}[J,J',\cbkt{J_n,J'_n}] \equiv\prod_n \mc{F}^{(1)}_n\sbkt{J_n,J'_n} \mc{F}^{(1)}_I\sbkt{J,J'} } with $\mc{F}^{(1)}_I\sbkt{J,J'}$ and $\mc{F}^{(1)}_n\sbkt{J_n,J'_n}$ as the influence functionals for the internal oscillator and the $n^\mr{th}$ oscillator of the $(+)$ bath respectively, with a linear coupling to the corresponding source current terms $\cbkt{J,J'}$ and $\cbkt{J_n,J'_n}$. An explicit derivation of the generating functional is given in Appendix\,\ref{App:gf}. We can calculate the second order influence action in Eq.\,\eqref{seff} as shown in Appendix\,\ref{Seffavg} to obtain a form similar to that of linear QBM dynamics \cite{HPZ92}: \begin{widetext} \eqn{S_{\mathrm{M, IF}}^{(2)}\sbkt{X,X'} =& -\int_0^t\mathrm{d} t_1\int_0^{t_1}\mathrm{d} t_2 \,\sbkt{X(t_1)-X'(t_1)} \eta^{(2)} (t_1,t_2)\sbkt{X(t_2)+X'(t_2)}\nonumber\\ &+i\int_0^t\mathrm{d} t_1\int_0^{t_1}\mathrm{d} t_2 \, \sbkt{X(t_1)-X'(t_1)} \nu ^{(2)}(t_1,t_2)\sbkt{X(t_2)-X'(t_2)} , \label{EffAct2nd}} \end{widetext} where the dissipation and noise kernels are defined as \eqn{ \eta^{(2)} (t_1,t_2) =&\frac{1}{2}\eta ^{(+)} \bkt{t_1 - t_2} \cbkt{\nu_{GG} \bkt{ t_1,t_2} + \aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}}}+ \frac{1}{4}\nu ^{(+)} \bkt{t_1 - t_2}{g}\bkt{t_1 - t_2} \Theta\bkt{t_1 - t_2},\label{Eta2}\\ \nu^{(2)} (t_1,t_2) =& \frac{1}{2}\nu ^{(+)} \bkt{t_1 - t_2} \cbkt{\nu_{GG} \bkt{ t_1,t_2} + \aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}}}-\frac{1}{4} \eta ^{(+)} \bkt{t_1 - t_2}{g}\bkt{t_1 - t_2} ,\label{Nu2} } with ${G}(t)=g(t)\Theta(t)$ the propagator for the IDF, defined as in Eq.\eqref{Gret}.
The noise correlation of the IDF is given by \eqn{&\nu_{GG}(t_1,t_2)\equiv \int_0^t\mathrm{d}\tau_1\int_0^t\mathrm{d}\tau_2 {G}\bkt{t_1-\tau_1}\nu^{(-)}\bkt{\tau_1-\tau_2}{G}\bkt{t_2-\tau_2}. \label{NuGG} } The noise arising from the dispersion in the initial conditions of the IDF is captured in the term $\sim \aavg{Q_h \bkt{t_1}Q_h \bkt{t_2}}$, where $Q_h\bkt{t}$ is the classical solution to the homogeneous Langevin equation (Eq.\,\eqref{IDFhom}) corresponding to the dissipative dynamics of the IDF in the presence of the $ (-)$ bath. The average $\aavg{\dots}$ is taken over the initial position and momentum distribution of the IDF, as defined in Eq.\,\eqref{aavg}. The dissipation and noise kernels associated with the $(+)$ bath, defined for a bilinear system-bath coupling, are \eqn{\label{ETA+} \eta^{(+)}\bkt{\tau} \equiv& - \frac{1}{2}\sum_n \lambda^2 \omega_n \sin\bkt{\omega_n \tau}{\Theta(\tau)}\\ \label{NU+} \nu^{(+)}\bkt{\tau}\equiv & \frac{1}{2 }\sum_n \lambda^2 \omega_n \coth\bkt{\frac{ \omega_{n}}{2k_BT_F}}\cos\bkt{\omega_n \tau}, } similar to those of the $(-)$ bath (Eq.\,\eqref{DissQBMMinus} and Eq.\,\eqref{NoiseQBMMinus}) with a coupling $\lambda\rightarrow \lambda \omega_n $. The spectral density associated with the $(+) $ bath is thus \eqn{\label{J+} J^{(+)}\bkt{\omega} = \sum_n \frac{\lambda^2 \omega_n}{2}\delta \bkt{\omega - \omega_n}, } corresponding to an Ohmic case \cite{CalzettaHu}. We note that while the two baths $(+)$ and $(-)$ correspond to the same field, due to the gradient coupling of the center of mass to the field the $(+)$ bath has an Ohmic spectral density while the $(-)$ bath is sub-Ohmic (see Eq.\,\eqref{J-}).
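The replacement $\lambda\rightarrow\lambda\omega_n$ relating the two sets of kernels can be checked numerically; the sketch below uses arbitrary toy mode frequencies and temperature (illustrative stand-ins, not fitted to any physical system):

```python
import numpy as np

# Toy discrete bath: arbitrary mode frequencies, lambda = 1, k_B T_F = 2.
omega = np.linspace(0.5, 4.0, 8)
lam, kBT = 1.0, 2.0
tau = np.linspace(0.0, 3.0, 200)       # tau >= 0, so Theta(tau) = 1
w = omega[:, None]                      # broadcast modes over tau

# (-) bath kernels, Eqs. (DissQBMMinus) and (NoiseQBMMinus):
eta_minus = -np.sum(lam**2 / (2 * w) * np.sin(w * tau), axis=0)
nu_minus = np.sum(lam**2 / (2 * w) / np.tanh(w / (2 * kBT)) * np.cos(w * tau), axis=0)

# (+) bath kernels, Eqs. (ETA+) and (NU+):
eta_plus = -np.sum(lam**2 * w / 2 * np.sin(w * tau), axis=0)
nu_plus = np.sum(lam**2 * w / 2 / np.tanh(w / (2 * kBT)) * np.cos(w * tau), axis=0)

# Mode by mode, the (+) kernels are the (-) kernels with lambda -> lambda*omega_n,
# i.e. every term is reweighted by omega_n^2 (Ohmic vs sub-Ohmic weighting).
eta_check = -np.sum(w**2 * lam**2 / (2 * w) * np.sin(w * tau), axis=0)
nu_check = np.sum(w**2 * lam**2 / (2 * w) / np.tanh(w / (2 * kBT)) * np.cos(w * tau), axis=0)
assert np.allclose(eta_plus, eta_check) and np.allclose(nu_plus, nu_check)
```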
One can note a few important features from the influence action (Eq.\,\eqref{EffAct2nd}) and the dissipation and noise kernels therein (Eq.\,\eqref{Eta2} and Eq.\,\eqref{Nu2}): \begin{itemize} \item {The overall structural similarity of the second order influence action to that of linear QBM extends to the structure of the dissipation and noise kernels as well. That is, writing the interaction Hamiltonian as $H \sim \hat X \hat {\mc{B}} $, where $\hat {\mc{B}} $ is the total bath operator (possibly nonlinear), one can write an effective action of the same form as linear QBM, with dissipation and noise kernels given in terms of the expectation values of the commutator and anticommutator of the two-time bath correlation functions $ \eta\bkt{\tau} = \avg{\sbkt{\hat{ \mc{B} }\bkt{\tau }, \hat {\mc{B}} \bkt{0}}}$ and $\nu\bkt{\tau} = \avg{\cbkt{\hat {\mc{B}} \bkt{\tau }, \hat {\mc{B}} \bkt{0}}}$. This is a more general property of bilinear system-bath couplings. The influence functional of Eq.\,\eqref{Fxx'} for $X=0=X'$ reduces to a Gaussian path integral, so it is reasonable that the second order result takes the linear QBM form. As a check, one can further verify that unitarity of the evolution implies $S_{\mathrm{M, IF}}^{(2)}\sbkt{X,X}=0$}. \item{The dissipation kernel contains two parts that can be interpreted as: (1) the dissipation kernel of the $(+)$ bath $(\eta^{(+)}(t_1-t_2))$ combined with the noise from the IDF $(\nu_{GG} \bkt{t_1, t_2}+ \aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}})$, and (2) the linear propagator of the IDF $( G \bkt{t_1 - t_2})$ combined with the noise from the $(+)$ bath $(\nu^{(+)} \bkt{t_1 - t_2})$. Notice that the quantity $\nu_{GG} \bkt{ t_1,t_2} + \aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}}$ stands for the full quantum correlations of the IDF under the influence of the $(-)$ bath alone.
The term $\aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}}$ accounts for the relaxation of the initial conditions' contribution, thus carrying the information about the initial state of the IDF, and is determined only by the dissipation caused by the $(-)$ bath. On the other hand, the term $\nu_{GG} \bkt{ t_1,t_2}$ stands for the fluctuations of the IDF generated by the fluctuations of the $(-)$ bath.} \item{The noise kernel has the combined noise from both the IDF $(\nu_{GG} \bkt{t_1, t_2}+ \aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}})$ and that from the $(+)$ bath $(\nu^{(+)} \bkt{t_1 - t_2})$. In addition, the noise term also contains the two dissipation kernels from the IDF and the $(+)$ bath. } \end{itemize} \section{Langevin equation and fluctuation-dissipation relation} \label{Dynamics} Having determined the influence action for the center of mass, we can now turn to studying its dynamics. This section is devoted to the deduction of an effective Langevin equation of motion for the center of mass and a corresponding generalized fluctuation-dissipation relation in the long-time limit. \subsection{Effective equation of motion for the center of mass} Considering the influence action obtained as a result of tracing out the IDF and the field (Eq.\,\eqref{EffAct2nd}), the total effective action for the center of mass variables up to second order reads: \eqn{ S_{\mathrm{M, eff}}\sbkt{X,X'} = S_{M}\sbkt{X} - S_{M}\sbkt{X'} + S_{\mathrm{M, IF}}^{(2)}\sbkt{X,X'}.} The above action is quadratic and has the same form as in the case of linear QBM \cite{CalzettaHu}. One can thus deduce an equation of motion for the center of mass by extremizing the above action to obtain \eqn{ \ddot{X}\bkt{t} + \Omega^{2}X\bkt{t} + \frac{2}{M} \int_{0}^{t}d\tau ~\eta^{(2)}(t,\tau)~X(\tau)=0. \label{EffEqCOMAvg}} A Langevin equation can be worked out by implementing the Feynman and Vernon procedure for Gaussian path integrals \cite{FeynmanTrick,Behunin1}.
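As an aside, the averaged equation of motion above is a Volterra integro-differential equation and is straightforward to integrate numerically once a kernel is specified on a time grid. A minimal Python sketch (the solver, the time step, and the vanishing-kernel consistency check are our illustrative choices, not part of the derivation):

```python
import numpy as np

def solve_memory_eom(eta, t_max, dt, M=1.0, Omega=1.0, X0=1.0, V0=0.0):
    """Integrate  X'' + Omega^2 X + (2/M) \int_0^t eta(t - s) X(s) ds = 0
    for a time-translation-invariant memory kernel eta(t), using a
    symplectic Euler step and a trapezoidal sum for the memory integral."""
    n = int(t_max / dt)
    t = np.arange(n) * dt
    X, V = np.zeros(n), np.zeros(n)
    X[0], V[0] = X0, V0
    ker = eta(t)                                 # kernel sampled on the grid
    for i in range(n - 1):
        w = ker[i::-1] * X[:i + 1]               # eta(t_i - t_j) X(t_j)
        mem = dt * (w.sum() - 0.5 * (w[0] + w[-1]))   # trapezoid rule
        acc = -Omega**2 * X[i] - (2.0 / M) * mem
        V[i + 1] = V[i] + dt * acc
        X[i + 1] = X[i] + dt * V[i + 1]
    return t, X
```

With a vanishing kernel the solver reduces to the free oscillator, $X(t)\approx\cos(\Omega t)$, which serves as a basic sanity check.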
First, we notice that $i S_{\mathrm{M, IF}}^{(2)}=i {\rm Re}[S_{\mathrm{M, IF}}^{(2)}]-{\rm Im}[ S_{\mathrm{M, IF}}^{(2)}]$, and that both the real and imaginary parts of the effective action are quadratic functionals of $\{X,X'\}$. Thus, ${\rm exp}(-{\rm Im}[S_{\mathrm{M, IF}}^{(2)}])$ is a Gaussian functional, owing to the quadratic nature of the action. Considering that a Gaussian functional can be written in terms of its functional Fourier transform (which is also a Gaussian functional), we can write ${\rm exp}(-{\rm Im}[S_{\mathrm{M, IF}}^{(2)}])$ as an influence functional over a new variable $\xi(t)$ \eqn{ e^{iS_{\mathrm{M, IF}}^{(2)}\sbkt{X,X'}}=\int\mathcal{D}\xi~\mathcal{P}\sbkt{\xi} e^{i\int_{0}^{t}dt_{1}\sbkt{X(t_{1})-X'(t_{1})}\sbkt{\xi(t_{1})-\int_{0}^{t_{1}}dt_{2}\eta^{(2)}(t_{1},t_{2})\sbkt{X(t_{2})+X'(t_{2})}}}, \label{Seff2StochasticInt} } where the distribution of the new functional variable is given by \eqn{ \mathcal{P}\sbkt{\xi}=e^{-\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}\xi(t_{1})\sbkt{4\nu^{(2)}(t_{1},t_{2})}^{-1}\xi(t_{2})}.} It is worth noting that the new variable $\xi\bkt{t}$ replaces the kernel describing the quantum and thermal fluctuations of the composite environment and drives the dynamics as an external stochastic force. To recover the influence action as in Eq.\,\eqref{Seff2StochasticInt}, one needs to integrate over $\xi$ given the functional distribution $\mathcal{P}[\xi]$, which is positive definite since the noise kernel $\nu^{(2)}\bkt{t_1 , t_2}$ is symmetric and positive. Thus, the stochastic variable $\xi\bkt{t}$ acts as a classical fluctuating force, which can be interpreted as a noise with a probability distribution $\mathcal{P}[\xi]$.
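On a discrete time grid, the Gaussian functional $\mathcal{P}[\xi]$ reduces to a multivariate normal with covariance matrix $\avg{\xi_i\xi_j}=4\nu^{(2)}(t_i,t_j)$, so noise realizations can be generated by a Cholesky factorization of that matrix. A generic Python sketch (the function name and the small jitter term are our choices; any symmetric positive covariance can be supplied):

```python
import numpy as np

def sample_noise(cov, n_samples, rng=None):
    """Draw zero-mean Gaussian realizations xi with
    <xi_i xi_j> = cov[i, j] (e.g. cov = 4 nu2 on a time grid)."""
    rng = np.random.default_rng(rng)
    # small diagonal jitter guards against loss of positive definiteness
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(cov)))
    return rng.standard_normal((n_samples, len(cov))) @ L.T
```

The empirical second moment of the samples converges to the supplied covariance, mirroring the moments of $\xi$ in Eq.\,\eqref{ximom}.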
Furthermore, due to the Gaussianity of $\mathcal{P}\sbkt{\xi\bkt{t}}$ the noise is completely characterized by its first and second moments: \eqn{\label{ximom}&\avg{\xi\bkt{t}}_{\xi}=0, \nonumber\\ &\avg{\xi(t_{1})\xi(t_{2})}_{\xi}=4\nu^{(2)}(t_{1},t_{2}),} where $\avg{...}_{\xi}=\int\mathcal{D}\xi\mathcal{P}\sbkt{\xi}(...)$. Now, we can re-define an effective action for the center of mass including this new variable as: \eqn{ \tilde{S}_{\mathrm{M, eff}}\sbkt{X,X',\xi} = S_{M}\sbkt{X} - S_{M}\sbkt{X'} + {\mathrm{Re}}\sbkt{S_{\mathrm{M, IF}}^{(2)}\sbkt{X,X'}} +\int_{0}^{t}dt_{1}\sbkt{X(t_{1})-X'(t_{1})}\xi(t_{1}).} Finally, associated with this new action we can derive an effective equation of motion for the center of mass $(\delta \tilde{S}_{\mathrm{M, eff}}/\delta X_{\Delta})|_{X_{\Delta}=0}=0$, which now gives the Langevin equation: \eqn{ \ddot{X}\bkt{t} + \Omega^{2}X\bkt{t} + \frac{2}{M} \int_{0}^{t}d\tau ~\eta^{(2)}(t,\tau)~X(\tau)=\xi\bkt{t}, \label{EffEqCOMLangevin}} subject to the statistical properties of $\xi\bkt{t}$ in Eq.\,\eqref{ximom}. We can see that averaging the above equation over the stochastic force ($\avg{...}_{\xi}$) reduces it to Eq.\,\eqref{EffEqCOMAvg}, which we can thus interpret as the Langevin equation of motion averaged over the possible noise realizations. As has been shown previously in \cite{CRV}, the correlations of the system observables can be obtained from the solutions of such a Langevin equation. \subsection{Generalized fluctuation-dissipation relation}\label{GFDR} We now analyze the relations between the dissipation and noise kernels of the composite environment. Considering the definitions of the dissipation and noise kernels given in Eqs.\,\eqref{Eta2} and \eqref{Nu2}, respectively, we first take the long-time limit.
In this limit, the dissipation and noise kernels simplify to \eqn{ \eta^{(2)} (t_1,t_2)\longrightarrow& \sbkt{\frac{1}{2}\eta ^{(+)} \bkt{t_1 - t_2} \nu_{GG} \bkt{ t_1,t_2}+\frac{1}{4}g\bkt{t_1 - t_2}\nu ^{(+)} \bkt{t_1 - t_2} }\Theta\bkt{t_1 - t_2},\label{Eta2LongTime}\\ \nu^{(2)} (t_1,t_2) \longrightarrow& \frac{1}{2}\nu ^{(+)} \bkt{t_1 - t_2} \nu_{GG} \bkt{ t_1,t_2}-\frac{1}{4}g\bkt{t_1 - t_2} \eta ^{(+)} \bkt{t_1 - t_2},\label{Nu2LongTime} } where the term $\aavg{Q_h\bkt{ t_1} Q_h\bkt{t_2}}$ associated with the relaxation of the IDF vanishes in the late time limit, and the kernel $\nu_{GG} (t_1, t_2) $ reads: \eqn{&\nu_{GG}(t_1,t_2)\equiv \int_{0}^{\infty}\mathrm{d}\tau_1\int_{0}^{\infty}\mathrm{d}\tau_2 {G}\bkt{t_1-\tau_1}\nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{t_2-\tau_2}. \label{NuGGLongTime} } Notice that this limit holds regardless of the initial state of the IDF. This is in agreement with the notion that the dynamics at late times is determined by the field fluctuations only. It can be shown that the Fourier and Laplace transforms of the dissipation and noise kernels corresponding to the individual baths can be related in terms of the following fluctuation-dissipation relations (see Appendix\,\ref{App:kernels} for details) \eqn{\label{FDRBathPM}\bar \nu^{(\pm)}(\omega)=&-2\coth\left(\frac{\omega}{2k_{B}T_{F}}\right){\rm Im}\sbkt{\bar\eta^{(\pm)}(\omega)} \\ \label{FDRnuGG} \bar \nu_{GG}(\omega)=&\coth\bkt{\frac{\omega}{2k_{B}T_{F}}}{\rm Im}[\bar{G }(\omega)], } where $\bar F(\omega)$ represents the Fourier transform of $F(t)$, defined as $\bar F(\omega)=\int_{-\infty}^{+\infty}{dt} e^{i\omega t} F(t)$. Note that both $\eta^{(\pm)}$ and $G$ are causal functions, as is also true for the dissipation kernel $\eta^{(2)}$.
One can thus rewrite Eqs.\eqref{Eta2LongTime} and \eqref{Nu2LongTime} in terms of the Fourier transformed kernels as follows \eqn{\bar{\eta}^{(2)}(\omega)=&\frac{1}{2}\sbkt{\bar\nu_{GG}\ast\bar\eta^{(+)}}(\omega)+\frac{1}{4}\sbkt{\bar{G}\ast\bar{\nu}^{(+)}}(\omega),\label{Eta2W}\\ \bar\nu^{(2)}(\omega)=&\frac{1}{2}\sbkt{\bar\nu_{GG}\ast\bar\nu^{(+)}}(\omega)+\sbkt{{\rm Im}\bkt{\bar{G}}\ast{\rm Im}\bkt{\bar\eta^{(+)}}}(\omega). \label{Nu2W} } The convolution product is defined for two functions $[A\ast B](\omega)\equiv\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}A(\omega-\omega')B(\omega')$, which satisfies $[A\ast B](\omega)=[B\ast A](\omega)$. Using the fluctuation-dissipation relations for the kernels of the two baths and the IDF, we can write the dissipation kernel for the MDF in terms of those of the IDF and the baths in frequency domain as follows \eqn{ \label{eta2w2}{\rm Im}\sbkt{\bar{\eta}^{(2)}(\omega)}=&\int_{0}^{+\infty}\frac{d\omega'}{2\pi}\frac{1}{2}\left[\frac{1}{2}\cbkt{\bar\nu^{(+)}(\omega-\omega')-\bar \nu^{(+)}(\omega+\omega')}{\rm Im}[\bar{G}(\omega')]+\cbkt{\bar{\nu}_{GG}(\omega-\omega')-\bar{\nu}_{GG}(\omega+\omega')}{\rm Im}\sbkt{\bar \eta^{(+)}(\omega')}\right]. } We note from the above equation that the contributions to the total dissipation come from the three body processes between the MDF, IDF and the $ (+) $ bath, with the propagator of the IDF combining with the noise of the $(+)$ bath and vice versa. Each term in the above equation accounts for the four possible processes involved in the dynamics at second order, wherein one of the thermal baths contributes the initial excitation, while the other bath acts as an extra channel for partial energy exchange. 
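The convolution product defined above has a direct discrete counterpart; the short Python sketch below (grid and normalization are illustrative) implements it and exhibits the commutativity just stated:

```python
import numpy as np

def conv_product(A, B, dw):
    """Discrete analogue of [A * B](w) = \int dw'/(2 pi) A(w - w') B(w')
    for samples of A and B on a common uniform frequency grid of spacing dw."""
    return np.convolve(A, B, mode="same") * dw / (2.0 * np.pi)
```

For equal-length inputs the centered output window is the same for both orderings, so the discrete product inherits $[A\ast B]=[B\ast A]$; a single spike of weight $2\pi/\mathrm{d}\omega$ acts as the identity element.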
Furthermore, the above kernel can be alternatively expressed in a compact way as follows \eqn{ {\rm Im}\sbkt{\bar{\eta}^{(2)}(\omega)}=&\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}\frac{1}{2}\left[\coth\left(\frac{\omega-\omega'}{2k_{B}T_{F}}\right)-\coth\left(\frac{\omega'}{2k_{B}T_{F}}\right)\right]{\rm Im}[\bar {G}(\omega-\omega')]~{\rm Im}\sbkt{\bar \eta^{(+)}(\omega')}. \label{ImEta2DissTimesNoise}} We can similarly express the noise kernel in terms of the dissipation kernels of the IDF and the two baths as follows \eqn{\label{nu2w2}\bar{\nu}^{(2)}(\omega)=&\int_{-\infty}^{+\infty}\frac{d \omega'}{2\pi}\frac{1}{2}\sbkt{1-\coth\bkt{\frac{\omega-\omega'}{2k_{B}T_{F}}}\coth\bkt{\frac{\omega'}{2k_{B}T_{F}}}}{\rm Im}[\bar {G}(\omega-\omega')]~{\rm Im}\sbkt{\bar \eta^{(+)}(\omega')}. } It is not possible to find a simple relation between the above transforms of the dissipation and noise kernels $\{\bar{\eta}^{(2)},\bar{\nu}^{(2)}\}$ because of the presence of the integrals over $\omega'$, which are related to the convolution products. We therefore define new kernels $D^{(2)}(\omega,\omega')$ and $N^{(2)}(\omega,\omega')$ that depend on two frequency variables, accounting for processes where a given frequency-component contribution is modified by other frequencies, such that $\bar\eta^{(2)}(\omega)=\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}D^{(2)}(\omega,\omega')$ and $\bar\nu^{(2)}(\omega)=\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}N^{(2)}(\omega,\omega')$. Thus we can find a relation between these new kernels that reads \eqn{N^{(2)}\bkt{\omega,\omega'}=2\coth\sbkt{\frac{\omega-2\omega'}{2k_{B}T}}{\rm Im}\sbkt{D^{(2)}(\omega,\omega')}, \label{FDRImEta2ReNu2} } or inversely \eqn{{\rm Im}\sbkt{D^{(2)}(\omega,\omega')}=\frac{1}{2}\tanh\sbkt{\frac{\omega-2\omega'}{2k_{B}T}}N^{(2)}\bkt{\omega,\omega'}. \label{FDRImEta2ReNu2bis} } The introduction of an extra variable $\omega'$ accounts for the complexity of the environment in terms of its energy exchange with the system.
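The two relations above are mutual inverses wherever $\omega\neq 2\omega'$, since the thermal factors $2\coth$ and $\tfrac{1}{2}\tanh$ cancel pointwise; a small Python sketch of the pair (the grids and the test kernel are arbitrary placeholders):

```python
import numpy as np

def noise_from_dissipation(ImD, w, wp, kBT=1.0):
    """N2(w, w') = 2 coth((w - 2 w') / (2 kB T)) Im D2(w, w')."""
    arg = (w[:, None] - 2.0 * wp[None, :]) / (2.0 * kBT)
    return 2.0 * ImD / np.tanh(arg)

def dissipation_from_noise(N2, w, wp, kBT=1.0):
    """Inverse relation  Im D2(w, w') = (1/2) tanh(...) N2(w, w')."""
    arg = (w[:, None] - 2.0 * wp[None, :]) / (2.0 * kBT)
    return 0.5 * np.tanh(arg) * N2
```

Applying one map after the other on a grid that avoids $\omega=2\omega'$ returns the input kernel, as the analytic inverse pair requires.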
The nonlinear coupling allows the two parts of the environment (the field and the IDF) to simultaneously exchange energy with the system. From these relations, it becomes simple to prove that \eqn{{\rm Im}\sbkt{\bar \eta^{(2)}(\omega)} =&\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}\frac{1}{2}\tanh\sbkt{\frac{\omega'}{2k_{B}T}}N^{(2)}\bkt{\omega,\frac{\omega-\omega'}{2}}, \label{DissAsIntN2DoubleW} } and \eqn{\bar\nu^{(2)}(\omega) =&\int_{-\infty}^{+\infty}\frac{d\omega'}{2\pi}2\coth\sbkt{\frac{\omega'}{2k_{B}T}}D^{(2)}\bkt{\omega,\frac{\omega-\omega'}{2}}. \label{NoiseAsIntD2DoubleW} } The above equations correspond to a generalized fluctuation-dissipation relation (FDR) for the late-time dissipation and noise kernels. We can physically interpret the results by comparing the present situation to the case of linear QBM. The FDR that holds between the dissipation and noise kernels in QBM can be stated in the frequency domain for a single frequency component $\omega$ in the general form $[\textit{Noise}](\omega)=[\textit{Thermal Factor}](\omega)\times[\textit{Dissipation}](\omega)$, wherein a single frequency connects both sides of the relation. In our case, the nonlinear nature of the coupling precludes such a relation at a single frequency. Physically, such a nonlinear coupling leads to processes wherein an excitation from one of the baths interacts with the system while perturbed by the other bath. We can thus formulate a generalized FDR, Eq.\,\eqref{FDRImEta2ReNu2}, by introducing two frequency variables to account for the nonlinearity. \section{Discussion} \label{Discussion} We have derived the non-equilibrium dissipative center of mass dynamics of a particle interacting with a field via its IDF. The model considered in this paper underlies microscopic optomechanical interactions between neutral particles and fields, and is generally applicable to open quantum systems that possess intermediary quantum degrees of freedom between system and bath.
We show that the field can be separated into two baths, referred to as $(+)$ and $(-)$ baths, with the $(-)$ bath coupled linearly to the IDF, and the $(+)$ bath coupled nonlinearly to both IDF and MDF (see Eq.\,\eqref{Sint-} and Eq.\,\eqref{Sint+}). Such a decomposition allows one to systematically trace over the $(-)$ bath and express its resulting influence on the IDF in terms of an exact second order effective action as given by Eq.\,\eqref{smineff}. As the nonlinear coupling between the MDF, IDF and the $(+)$ bath poses a challenge in terms of writing the exact dynamics of the MDF, we use a perturbative influence functional approach to trace over the IDF and $(+)$ bath to obtain a second order effective action for the MDF. We find that the effective action and the dissipation and noise kernels resulting from the composite environment (Eq.\,\eqref{EffAct2nd}, \eqref{Eta2} and \eqref{Nu2}, respectively) are structurally similar to those of linear QBM dynamics. This can be attributed to the bilinear nature of the coupling between the system variable and the composite bath, which goes as $\sim X\mc{B}$, where the bath operator $\mc{B}\bkt{ \sim Q \sum_n q_n ^{(+)}}$ can be nonlinear. We find a Langevin equation of motion for the MDF, describing its dissipative non-equilibrium dynamics (Eq.\,\eqref{EffEqCOMLangevin}). The dissipation and noise kernels at late time can be related via a generalized FDR as given by Eq.\,\eqref{FDRImEta2ReNu2}. This work highlights three aspects: \begin{enumerate} \item {\textit{Dissipation and noise of the open quantum system in the presence of an IDF} -- It can be seen from the dissipation and noise kernels (Eq.\,\eqref{Eta2} and \eqref{Nu2}) arising from the composite bath of the IDF+field that the total dissipation of the MDF is a combination of the dissipation kernel corresponding to the IDF and the noise kernel corresponding to the $(+)$ bath, and vice versa.
Similarly, the noise kernel involves a combination of the noise of the IDF and that of the $(+)$ bath. Additionally, it also includes a contribution resulting from the combination of the dissipation of the IDF and that of the $(+)$ bath, which is needed to obtain a generalized FDR for the kernels of our composite environment. As discussed in Sec.\,\ref{TracingInternalDOF}, the dissipation and noise kernels are similar in structure to those of linear QBM up to second order in the perturbative expansion of the action. The present approach can be extended to higher orders in a systematic way, thus extending the results of Ref.\,\cite{HPZ93} to complex environments.} \item{ \textit{Coupled non-equilibrium dynamics of different degrees of freedom with self-consistent backaction} -- The equation of motion for the MDF given in Eq.\,\eqref{EffEqCOMLangevin} describes the quantum center of mass motion including the self-consistent backaction of the various degrees of freedom on each other. The approach based on the separation of the field into two uncorrelated baths is akin to the Einstein-Hopf theorem that establishes the statistical independence of the blackbody radiation field and its derivative \cite{EinHopf1, EinHopf3}. The influence of the $(-)$ bath on the IDF is included via the effective action Eq.\,\eqref{smineff}, and the perturbative second order effective action in Eq.\,\eqref{EffAct2nd} includes the influence of the $(+)$ bath and the IDF (affected by the $(-)$ bath). In this way, the quantum field influences the MDF's dynamics in two ways: through its direct interaction as the $(+)$ bath, and through the fluctuations of the IDF as the $(-)$ bath. Furthermore, considering the zero-temperature limit of the dynamics, we find that a mechanical oscillator interacting with the vacuum field exhibits dissipative dynamics as a result of the quantum fluctuations of its composite nonlinear environment.
Such a vacuum-induced noise poses a fundamental constraint on preparing mechanical objects in quantum states \cite{KSYS1, Skatepark}.} \item{\textit{Fluctuation-dissipation relations for a system coupled to a composite environment } -- We find a generalized FDR between the late-time dissipation and noise kernels that has a structure similar to linear QBM (see Eq.\,\eqref{FDRImEta2ReNu2}), albeit involving two frequency variables due to the nonlinearity of the interaction. This allows one to interpret the single-frequency response of the effective kernels for the MDF in terms of contributions from various frequency components of the IDF and field. Our result provides an example of an FDR for an open quantum system with nonlinear dynamics, where linear response theory is no longer valid \cite{HPZ93, JT20}.} \end{enumerate} As a future prospect, our results can be extended to studying the non-equilibrium dynamics in a wide range of physical systems, particularly when the time scales of the different degrees of freedom are no longer disparate. In the context of atom-field interactions, it has been shown, for example, that the internal and external degrees of freedom of atoms in a standing laser wave can exhibit synchronization \cite{Argonov05}. Furthermore, it is possible for the IDF dynamics to be sufficiently slow, as for a dark internal state, to lead to highly correlated internal-external dynamics, as in the case of the velocity-selective coherent population trapping scheme \cite{Aspect88, Aspect89} and long-lived internal states in atomic clock transitions \cite{Boyd}. It has also been experimentally observed that the internal and external degrees of freedom can be efficiently coupled in the presence of colored noise \cite{Machluf10}. On the other hand, when the center of mass dynamics is fast enough to be comparable to the internal dynamics, one requires a careful consideration of the coupled internal-external dynamics.
Such a situation can arise in the case of the dynamical Casimir effect (DCE) \cite{Lin18}, when considering the self-consistent backaction of the field and the IDF on the mirror \cite{Butera, Belen19}. Finally, our results can be extended to analyze the thermalization properties and non-equilibrium dynamics of the center of mass of nanoparticles \cite{Yin, AERL18}. \section{Acknowledgments} We are grateful to Bei-Lok Hu, Esteban A. Calzetta, and Peter W. Milonni for insightful discussions. Y.S. acknowledges support from the Los Alamos National Laboratory ASC Beyond Moore's Law project and the LDRD program. A.E.R.L. wants to thank the AMS for the support. \appendix \section{Generating functional derivation} \label{App:gf} The generating functionals corresponding to the IDF and the $(+)$ bath can be explicitly written as \eqn{\label{gfidf} \mc{F}^{(1)}_I\sbkt{J,J'} = \int\mathrm{d} Q_f\int\mathrm{d} Q_i\int\mathrm{d} Q'_i\,\rho_I\bkt{Q_i,Q'_i;0}\int_{Q(0) = Q_i}^{Q(t) = Q_f}\mathcal{D} Q\int_{Q'(0) = Q'_i}^{Q'(t) = Q_f}\mathcal{D} Q'& e^{i \bkt{S_I[Q]-S_I[Q'] +S_{\mr{I, IF}}^{(-)}[Q,Q']}}\nonumber\\ &e^{i\int_0^t \mathrm{d}\tau \sbkt{J(\tau)Q(\tau)-J'(\tau)Q'(\tau)}} } \eqn{\label{gfqn} \mc{F}^{(1)}_n\sbkt{J_n,J'_n} =&\int\mathrm{d} q_{nf}^{(+)}\int\mathrm{d} q_{ni}^{(+)}\int\mathrm{d} {q_{ni}^{(+)}}'\,\rho_{F,n}^{(+)}\bkt{\cbkt{q_{ni}^{(+)},{q_{ni}^{(+)}}'};0}\nonumber\\ &\int_{q_n^{(+)}(0) = q^{(+)}_{ni}}^{q^{(+)}_{n}(t) = q^{(+)}_{nf}}\mathcal{D} q^{(+)}_n\int_{{q_n^{(+)}}'(0) = {q_{ni}^{(+)}}'}^{{q_n^{(+)}}'(t) = q^{(+)}_{nf}}\mathcal{D} {{q_n}^{(+)}}' e^{i \bkt{S_{F,n}^{(+)}[\cbkt{q_n^{(+)}}]-S_{F,n}^{(+)}[\cbkt{{q_n^{(+)}}'}] }} e^{i\int_0^t \mathrm{d}\tau \sbkt{J_n(\tau)q_n^{(+)}(\tau)-J'_n(\tau){{q_n^{(+)}}'}(\tau)}}, } where $\rho_{F,n}^{(+)}\bkt{q_{ni}^{(+)},{q_{ni}^{(+)}}';0}$ represents the initial state of the $n^\mr{th}$ oscillator of the $(+)$ bath, and $S_{F,n}^{(+)}$ is the corresponding free action.
We can then see from Eq.\,\eqref{IF-} and Eq.\,\eqref{Fexp-} that the generating functional for the $(+)$ bath is simply the influence functional for linear QBM given by \eqn{\label{gf1}\mc{F}^{(1)}_n\sbkt{J_n,J'_n} = e^{i S_{\mr{IF},n}^{(+)}[J_n,J_n']}, } where the influence action $S_{\mr{IF},n} ^{(+)}[J_n,J_n']$ is defined as \eqn{ \label{sif+} S^{(+)}_{\mr{IF},n}\sbkt{ J_n, J_n'}\equiv-\int_0^t\mathrm{d} \tau_1\int_0^{\tau_1}\mathrm{d}\tau_2 \sbkt{ J_n\bkt{\tau_1} - J_n'\bkt{\tau_1} }\eta_n^{(+)}\bkt{\tau_1 - \tau_2} \sbkt{ J_n \bkt{\tau_2} + J_n' \bkt{\tau_2}}\nonumber\\ +i\int_0^t\mathrm{d} \tau_1\int_0^{\tau_1}\mathrm{d}\tau_2 \sbkt{ J_n\bkt{\tau_1} - J_n'\bkt{\tau_1} }\nu_n^{(+)}\bkt{\tau_1 - \tau_2} \sbkt{ J_n \bkt{\tau_2} - J_n' \bkt{\tau_2}}, } with the dissipation and noise kernels \eqn{\label{eta+} \eta_n^{(+)}(\tau) =&- \frac{1 }{2 \omega_n }\sin\bkt{\omega_n \tau}\\ \label{nu+} \nu_n^{(+)}(\tau) =& \frac{1}{2 \omega_n }\coth\bkt{\frac{ \omega_{n}}{2k_BT_F}}\cos\bkt{\omega_n \tau}. } Now, to evaluate the generating functional for the IDF in terms of $\cbkt{J,J'}$, we follow the approach in \cite{CRV}. Consider the effective action pertaining to the IDF in Eq.
\eqref{gfidf} \begin{widetext} \eqn{ S_{I,\mr{eff}}\sbkt{Q,Q',J,J'} =& S_I\sbkt{Q}-S_I\sbkt{Q'}+S_{I,\mr{IF}}^{(-)}\sbkt{Q,Q'}+\int_0^t\mathrm{d} \tau \sbkt{J(\tau)Q(\tau)-J'(\tau)Q'(\tau)}\\ =& \int_0^t \mathrm{d} \tau \sbkt{\frac{1}{2} M\dot Q^2 - \frac{1}{2} M\omega_0^2Q^2 +J Q} - \sbkt{\frac{1}{2} M\dot Q'^2 - \frac{1}{2} M\omega_0^2Q'^2 +J' Q'} \nonumber\\ &- \int _0^t \mathrm{d}\tau_1\int _0^{\tau_1} \mathrm{d}\tau_2 \sbkt{Q(\tau_1) -Q'(\tau_1) }\eta^{(-)}\bkt{\tau_1-\tau_2}\sbkt{Q(\tau_2) +Q'(\tau_2) }\nonumber\\ &+i \int _0^t \mathrm{d}\tau_1\int _0^{\tau_1} \mathrm{d}\tau_2 \sbkt{Q(\tau_1) -Q'(\tau_1) }\nu^{(-)}\bkt{\tau_1-\tau_2}\sbkt{Q(\tau_2) -Q'(\tau_2) }\\ =& \int_0^t \mathrm{d} \tau \sbkt{ \frac{1}{2} M\dot Q_\Sigma(\tau)\dot Q_\Delta(\tau) - \frac{1}{2}M\omega_0^2 Q_\Sigma(\tau)Q_\Delta(\tau) +Q_\Sigma(\tau) J_\Delta(\tau)+ Q_\Delta(\tau) J_\Sigma(\tau)} \nonumber\\ &- \int _0^t \mathrm{d}\tau_1\int _0^{\tau_1} \mathrm{d}\tau_2 Q_\Delta(\tau_1)\eta^{(-)}\bkt{\tau_1-\tau_2}Q_\Sigma(\tau_2)+i \int _0^t \mathrm{d}\tau_1\int _0^{\tau_1} \mathrm{d}\tau_2 Q_\Delta(\tau_1)\nu^{(-)}\bkt{\tau_1-\tau_2}Q_\Delta(\tau_2), } \end{widetext} where we have defined the new coordinates as $Q_\Sigma \equiv Q+Q'$, $Q_\Delta\equiv Q-Q'$, $J_\Sigma\equiv \bkt{J+J'}/2$, and $J_\Delta\equiv \bkt{J-J'}/2$. The dissipation and noise kernels $\eta^{(-)} \bkt{t} $ and $\nu^{(-)}\bkt{t}$ are as defined in Eq.\,\eqref{DissQBMMinus} and Eq.\,\eqref{NoiseQBMMinus}. 
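The algebra behind the $\Sigma/\Delta$ rewriting above is elementary but easy to slip on; a small sympy check of the two identities used (the kinetic term works identically with $Q\rightarrow\dot Q$):

```python
import sympy as sp

# With Q_S = Q + Q', Q_D = Q - Q', J_S = (J + J')/2, J_D = (J - J')/2:
#   Q^2 - Q'^2  = Q_S Q_D
#   J Q - J' Q' = Q_S J_D + Q_D J_S
Q, Qp, J, Jp = sp.symbols("Q Qp J Jp")
QS, QD = Q + Qp, Q - Qp
JS, JD = (J + Jp) / 2, (J - Jp) / 2
assert sp.expand(Q**2 - Qp**2 - QS * QD) == 0
assert sp.expand(J * Q - Jp * Qp - (QS * JD + QD * JS)) == 0
```

Both differences expand to zero, confirming that the source and potential terms recombine exactly as stated.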
Rewriting the generating functional in terms of the relative and center of mass coordinates, using the approach outlined in \cite{CRV}, Eq.\,\eqref{gfidf} can be simplified to obtain \eqn{\label{gfidf2} \mc{F}^{(1)}_I\sbkt{J,J'} =& \aavg{e^{2iJ_\Delta\cdot \,Q_{h}}} e^{-2\bkt{J_\Delta\cdot\, {G}} \cdot {\nu}^{(-)}\cdot\bkt{J_\Delta\cdot \,{G}}^T}e^{2iJ_\Delta\cdot {G}\cdot J_\Sigma}, } where \eqn{\label{aavg} \aavg{\dots} = \int \mathrm{d} Q^i \mathrm{d} \Pi^i W\bkt{Q^i, \Pi^i}( \dots ),} denotes the average over the initial Wigner distribution $W(Q^i, \Pi^i)$. $Q_{ h}$ is the solution to the homogeneous Langevin equation \eqn{\label{IDFhom}{L}\cdot Q_h =0,} with initial conditions given by $\cbkt{Q^i,\Pi^i}$ and the differential operator $L(t,t')=m(\frac{d^{2}}{dt'^{2}}+\omega_{0}^{2})\delta(t-t')+2\eta^{(-)}(t-t')$. The Green's function corresponding to the Langevin operator ${{L}}$ is defined as \eqn{\label{Gret}{ L}\cdot { G} = { \delta},} where $ A\cdot B\equiv \int_0 ^t \mathrm{d} \tau A(\tau ) B(\tau) $. \section{Perturbative effective action derivation} \label{App:seff} Having obtained the total generating functional as defined in Eq.\,\eqref{Gentot}, which contains the influence of the IDF (Eq.\,\eqref{gfidf2}) and the $(+)$ bath (Eq.\,\eqref{sif+}) on the system, we substitute back into the total generating functional to obtain the effective action perturbatively up to second order in the system-bath coupling parameter, in a term by term manner as follows. \eqn{F[X,X'] =& \left.\exp{\cbkt{{i}\bkt{S_{\mathrm{int}}^{(+)} \sbkt{X,\frac{1}{i}\frac{\delta}{\delta J},\cbkt{\frac{1}{i}\frac{\delta}{\delta J_n}}} - S_{\mathrm{int}}^{(+)} \sbkt{X',\frac{-1}{i}\frac{\delta}{\delta J'},\cbkt{\frac{-1}{i}\frac{\delta}{\delta J'_n} }}}}}\mc G^{(1)}[ J, J',\cbkt{J_n, J'_n}]\right\vert_{{\bf J} = {\bf J'}=0}\nonumber\\ &\equiv \exp{ \cbkt{{i} S_{\mathrm{M, IF}} [X,X']}}.
} So far the above expression yields the exact effective action for the given system-bath interaction, since we have not invoked any weak-coupling approximations yet. We will now make a perturbative expansion of the interaction action up to second order in the system-bath coupling strength in the exponent. Let us consider $S_{\mathrm{int}}^{(+)} \sbkt{X,\frac{1}{i}\frac{\delta}{\delta J},\cbkt{\frac{1}{i}\frac{\delta}{\delta J_n}}} \equiv \varepsilon \tilde {S}_{\mr{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta J},\cbkt{\frac{1}{i}\frac{\delta}{\delta J_n}}}$ and $S_{\mathrm{int}}^{(+)} \sbkt{X',-\frac{1}{i}\frac{\delta}{\delta J'},\cbkt{-\frac{1}{i}\frac{\delta}{\delta J'_n}}} \equiv \varepsilon' \tilde {S}_{\mr{int}}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta J'},\cbkt{-\frac{1}{i}\frac{\delta}{\delta J'_n}}}$, where the dimensionless parameters $\varepsilon$ and $\varepsilon'$ characterize the system-bath coupling strength about which we expand the influence action \eqn{S_{\mathrm{M, IF}}^{(2)}[X,X'] \approx \left.S_{\mathrm{M, IF}}[X,X']\right\vert_{\varepsilon= \varepsilon'=0}+ \varepsilon\left.\frac{\delta}{\delta\varepsilon}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0}+ \varepsilon'\left.\frac{\delta}{\delta \varepsilon'}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0}\nonumber\\+\frac{ \varepsilon^2}{2} \left.\frac{\delta^2}{\delta \varepsilon^2}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} +\frac{ \varepsilon'^2}{2} \left.\frac{\delta^2}{\delta \varepsilon'^2}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0}+{ \varepsilon \varepsilon'}\left.\frac{\delta^2}{\delta \varepsilon\delta \varepsilon'}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0}. } Let us consider the above expression term by term: \begin{enumerate} \item{$\left.S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} = 0$; this can be understood as the influence action
corresponding to a non-interacting bath, which trivially vanishes.} \item{$\begin{aligned}[t] \left.\frac{\delta}{\delta \varepsilon}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} &=\left. \tilde {S}_\mr{int}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta{\bf J}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0} \end{aligned} $ } \item{$\begin{aligned}[t] \left.\frac{\delta}{\delta \varepsilon'}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} &= -\left. \tilde {S}_\mr{int}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta{\bf J'}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0} \end{aligned}$} \item{$\begin{aligned}[t] \left.\frac{\delta^2}{\delta \varepsilon^2}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} ={i}\bkt{ \left.\sbkt{\bkt{\tilde{S}_\mr{int}^{(+)}\sbkt{X,\frac{1}{i} \frac{\delta}{\delta {\bf J}}}}^2\mc G^{(1)}[\bf{J},\bf{J'}]}\right\vert_{{\bf J} = {\bf J'}=0}- \sbkt{\left.\tilde{S}_\mr{int}^{(+)}\sbkt{X,\frac{1}{i} \frac{\delta}{\delta {\bf J}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0}}^2} \end{aligned}$} \item{$\begin{aligned}[t] \left.\frac{\delta^2}{\delta \varepsilon'^2}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} = i&\left( \left.\sbkt{\bkt{\tilde{S}_\mr{int}^{(+)}\sbkt{X',-\frac{1}{i} \frac{\delta}{\delta {\bf J'}}}}^2\mc G^{(1)}[\bf{J},\bf{J'}]}\right\vert_{{\bf J} = {\bf J'}=0}\right.\nonumber\\ &\left.- \sbkt{\left.\tilde{S}_\mr{int}^{(+)}\sbkt{X',-\frac{1}{i} \frac{\delta}{\delta {\bf J'}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0}}^2\right) \end{aligned}$ } \item{$\begin{aligned}[t] \left.\frac{\delta^2}{\delta \varepsilon\delta \varepsilon'}S_{\mathrm{M, IF}}[X,X']\right\vert_{ \varepsilon = \varepsilon'=0} =&-i\left( \left.\sbkt{\tilde{S}_\mr{int}^{(+)}\sbkt{X,\frac{1}{i} \frac{\delta}{\delta {\bf J}}}\tilde{S}_\mr{int}^{(+)}\sbkt{X',-\frac{1}{i} \frac{\delta}{\delta {\bf J'}}}\mc
G^{(1)}[\bf{J},\bf{J'}]}\right\vert_{{\bf J} = {\bf J'}=0}\right.\nonumber\\ &\left.- \sbkt{\left.\tilde{S}_\mr{int}^{(+)}\sbkt{X,\frac{1}{i} \frac{\delta}{\delta {\bf J}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0}}\sbkt{\left.\tilde{S}_\mr{int}^{(+)}\sbkt{X',-\frac{1}{i} \frac{\delta}{\delta {\bf J'}}}\mc G^{(1)}[\bf{J},\bf{J'}]\right\vert_{{\bf J} = {\bf J'}=0}}\right) \end{aligned}$ } \end{enumerate} Putting all the terms together, we can rewrite the influence action up to second order in the system-bath coupling as in Eq.\,\eqref{seff}. \section{Calculation of averages} \label{Seffavg} \subsection{First order average} The first order average in the influence action can be calculated as follows \eqn{\avg{S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta { J}},\cbkt{\frac{1}{i}\frac{\delta}{\delta { J_n}}}}}_0 &= \left. S_{\mathrm{int}}^{(+)}\sbkt{{X},\frac{1}{i}\frac{\delta}{\delta { J}},\cbkt{\frac{1}{i}\frac{\delta}{\delta { J_n}}}}\mc G^{(1)}[{J},{J'},\cbkt{J_n,J_n'}]\right\vert_{{\bf J} = {\bf J'}=0}\nonumber\\ & = \left. \int_0^t \mathrm{d} \tau \sum_{n}\lambda \omega_{n}\sbkt{\frac{1}{i}\frac{\delta}{\delta{ J_n(\tau)}}}\sbkt{ \frac{1}{i}\frac{\delta}{\delta{J(\tau)}}} {X}(\tau)\prod_{p} F^{(1)}_p [J_p,J_p']F^{(1)}_I [J,J']\right\vert_{{\bf J} = {\bf J'}=0} } where we have used the generating functional as defined in Eq.\,\eqref{Gentot}, and the interaction action from Eq.\,\eqref{Sint+}. Now one can note that the first order derivative of the influence action, which is quadratic in $ \cbkt{J_n,J_n'} $, brings in a factor that is linear in $\cbkt{J_n,J_n'}$, and thus vanishes when we set $J_n = J_n'=0$.
Thus, we obtain that \eqn{&\avg{S_{\mathrm{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta { J}},\cbkt{\frac{1}{i}\frac{\delta}{\delta { J_n}}}}}_0=\avg{S_{\mathrm{int}}^{(+)}\sbkt{X',\frac{1}{i}\frac{\delta}{\delta { J'}},\cbkt{\frac{1}{i}\frac{\delta}{\delta { J'_n}}}}}_0 =0.} \subsection{Second order averages} \label{AppB2} We calculate the second order terms in the influence action as follows: \begin{enumerate} \item { \begin{widetext} \eqn{\label{seffxx} \avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}^2}_0 =& \bkt{\int_0^t \mathrm{d} t_1 \sum_{n}\lambda k_{n}\sbkt{\frac{1}{i}\frac{\delta}{\delta{ J(t_1)}}}\sbkt{ \frac{1}{i}\frac{\delta}{\delta{J_{n}(t_1)}}} X(t_1)}\nonumber\\ &\bkt{\int_0^t \mathrm{d} t_2 \sum_{m}\lambda k_{m}\sbkt{\frac{1}{i}\frac{\delta}{\delta{ J(t_2)}}}\sbkt{ \frac{1}{i}\frac{\delta}{\delta{J_{m}(t_2)}}} X(t_2)}\left.F^{(1)}_I [J,J']\,\prod_{p}F^{(1)}_p [J_p,J_p']\right\vert_{{\bf J} = {\bf J'}=0}\\ \label{seffxx2} &= \int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2 \sum_{m,n}\lambda^2 k_{m}k_{n} X(t_1) X(t_2)\,\zeta_I\bkt{t_1,t_2}\,\sum_{p}\zeta^{(p)}_{m,n}\bkt{t_1,t_2} } \end{widetext} where we have defined the influence of the (+) bath and the IDF as \eqn{ \label{zetap}\zeta^{(p)}_{m,n}(t_1,t_2)\equiv &\left.\frac{\delta }{\delta J_n(t_1)}\cbkt{\frac{\delta }{\delta J_m(t_2)}\sbkt{F^{(1)}_p [J_p,J'_p]}}\right\vert_{J_p = J'_p =0}\nonumber\\ =& -{i}\delta_{mp}\delta_{np}\sbkt{\eta_p^{(+)} \bkt{t_1-t_2 }\Theta\bkt{t_1-t_2}+\eta_p^{(+)}\bkt{t_2-t_1}\Theta\bkt{t_2-t_1}}\nonumber\\ &-\delta_{mp}\delta_{np}\sbkt{\nu_p^{(+)} \bkt{t_1-t_2}\Theta\bkt{t_1-t_2}+\nu_p^{(+)}\bkt{t_2-t_1}\Theta\bkt{t_2-t_1}}\\ = & \delta_{mp}\delta_{np}\sbkt{-i\eta_p^{(+)} \bkt{t_1-t_2 }\mr{sign}\bkt{t_1-t_2}-\nu_p^{(+)}\bkt{t_1-t_2}}} \eqn{ \label{zetaI} \zeta_I(t_1,t_2)\equiv& \frac{\delta }{\delta J(t_1)}\cbkt{\frac{\delta }{\delta J(t_2)}\sbkt{F^{(1)}_I [J,J']}}\nonumber\\ =& -\frac{1}{2} \sbkt{\int_0^{t}
\mathrm{d} \tau_1 \int_0^t \mathrm{d} \tau_2\bkt{ {G} \bkt{t_1-\tau_1} \nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{t_2 -\tau_2} + {G}\bkt{t_2 - \tau_1}\nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{t_1 - \tau_2}}}\nonumber\\ &-\frac{i}{2} \sbkt{ {G}\bkt{t_1-t_2}+ {G}\bkt{t_2-t_1}} - \aavg{ Q_h\bkt{ t_1} Q_h\bkt{t_2}}. } Let us further define an odd function $ {g}\bkt{t}$ such that $ {G}\bkt{t_1 - t_2} \equiv {g}\bkt{t_1 - t_2}\Theta \bkt{t_1 - t_2}$, and \eqn{\nu_{GG}\bkt{t_1 , t_2}\equiv &\int_0^{t_1} \mathrm{d} \tau_1 \int_0^{t_2} \mathrm{d} \tau_2\sbkt{ {g} \bkt{t_1-\tau_1} \nu^{(-)}\bkt{\tau_1-\tau_2} {g}\bkt{t_2 -\tau_2}}, } such that we can rewrite $\zeta_I \bkt{t_1, t_2}$ as \eqn{ \zeta_I \bkt{t_1, t_2 } = - \nu_{GG}\bkt{t_1 , t_2} - \frac{i }{2} {g} \bkt{t_1 - t_2} \mr{sign} \bkt{t_1 - t_2 }- \aavg{ Q_h\bkt{ t_1} Q_h\bkt{t_2}}. } We have made use of the fact that the function $g\bkt{s} = -g\bkt{-s} $ is odd, and the noise kernels $\nu ^{(\pm) }\bkt{s} = \nu ^{(\pm) }\bkt{-s} $ are even. This allows us to rewrite Eq.\,\eqref{seffxx2} as \eqn{&\avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}^2}_0\nonumber\\ =& \int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2 \sum_p\lambda^2 k_p^2 X(t_1) X(t_2)\sbkt{-i \eta_p ^{(+) } \bkt{t_1 - t_2} \mr{sign}\bkt{t_1 - t_2 } - \nu_p ^{(+) } \bkt{t_1 - t_2 }} \nonumber\\ &\sbkt{- \nu_{GG}'\bkt{t_1 , t_2} - \frac{i }{2} {g} \bkt{t_1 - t_2} \mr{sign} \bkt{t_1 - t_2 }}\\ \label{seffXX} = &\int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2 \sum_p\lambda^2 k_p^2 X(t_1) X(t_2)\sbkt{i \eta_p ^{(+)}\bkt{t_1 - t_2 } \nu_{GG}' \bkt{t_1, t_2}\mr{sign}\bkt{t_1-t_2}+ \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1, t_2} \right.\nonumber\\ &\left. - \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}+ \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2} }, } where $\nu_{GG}' \bkt{ t_1,t_2} \equiv \nu_{GG} \bkt{ t_1,t_2} + \aavg{ Q_h\bkt{ t_1} Q_h\bkt{t_2}}$.
} \item{ \eqn{ \label{seffxpxp}\avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}^2}_0 = \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{m,n} \lambda^2 k_m k_n X'(t_1)X'(t_2)\,\widehat{\zeta}_I\bkt{t_1,t_2}\,\sum_{p}\widehat{\zeta}^{(p)}_{m,n}\bkt{t_1,t_2}, } where \eqn{\label{hatzetap}\widehat{\zeta}^{(p)}_{m,n}(t_1,t_2)\equiv &\left.\frac{\delta }{\delta J'_n(t_1)}\cbkt{\frac{\delta }{\delta J'_m(t_2)}\sbkt{F^{(1)}_p [J_p,J'_p]}}\right\vert_{J_p = J'_p =0}\nonumber\\ =& {i}\delta_{mp}\delta_{np}\sbkt{\eta_p^{(+)} \bkt{t_1-t_2 }\Theta\bkt{t_1-t_2}+\eta_p^{(+)}\bkt{t_2-t_1}\Theta\bkt{t_2-t_1}}\nonumber\\ &-\delta_{mp}\delta_{np}\sbkt{\nu_p^{(+)} \bkt{t_1-t_2}\Theta\bkt{t_1-t_2}+\nu_p^{(+)}\bkt{t_2-t_1}\Theta\bkt{t_2-t_1}} \\ = & \delta_{mp}\delta_{np}\sbkt{i\eta_p^{(+)} \bkt{t_1-t_2 }\mr{sign}\bkt{t_1-t_2}-\nu_p^{(+)}\bkt{t_1-t_2}} } \eqn{ \label{hatzetaI} \widehat{\zeta}_I(t_1,t_2)\equiv& \frac{\delta }{\delta J'(t_1)}\cbkt{\frac{\delta }{\delta J'(t_2)}\sbkt{F^{(1)}_I [J,J']}}\nonumber\\ =& -\frac{1}{2} \sbkt{\int_0^t \mathrm{d} \tau_1 \int_0^t \mathrm{d} \tau_2\bkt{ {G} \bkt{t_1-\tau_1} \nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{\tau_2-t_2} + {G}\bkt{t_2 - \tau_1}\nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{\tau_2-t_1}}}\nonumber\\ &+\frac{i}{2} \sbkt{{G}\bkt{t_1-t_2}+{G}\bkt{t_2-t_1}} - \aavg{ Q_h\bkt{ t_1} Q_h\bkt{t_2}}\\ = & - \nu'_{GG}\bkt{ t_1, t_2 } + \frac{i }{2} {g}\bkt{t_1 - t_2}\mr{sign} \bkt{t_1 - t_2}. } Substituting back in Eq.\,\eqref{seffxpxp}, \eqn{&\avg{\cbkt{S_{\mathrm{int}}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}}^2}_0 \nonumber\\ =& \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{p} \lambda^2k_p^2 X'(t_1)X'(t_2)\sbkt{i\eta_p^{(+)} \bkt{t_1-t_2 }\mr{sign}\bkt{t_1-t_2}-\nu_p^{(+)}\bkt{t_1-t_2}}\nonumber\\ &\sbkt{- \nu_{GG}'\bkt{ t_1, t_2 } + \frac{i }{2} {g}\bkt{t_1 - t_2}\mr{sign} \bkt{t_1 - t_2} }\\ \label{seffXPXP} = & \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{p} \lambda^2k_p^2
X'(t_1)X'(t_2)\sbkt{ -i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2} \mr{sign} \bkt{t_1 - t_2} + \nu_p ^{(+)}\bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2}\right.\nonumber\\ &\left. - \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2}}. } } \item{ \eqn{ \label{seffxxp}\avg{S_{\mathrm{int}}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}S_{\mathrm{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}_0 = \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{m,n} \lambda^2 k_m k_n X(t_1)X'(t_2)\,\widetilde{\zeta}_I\bkt{t_1,t_2}\,\sum_{p} \widetilde{\zeta}^{(p)}_{m,n}\bkt{t_1,t_2} } where \eqn{\label{tildezetap}\widetilde{\zeta}^{(p)}_{m,n}(t_1,t_2)\equiv &\left.\frac{\delta }{\delta J_n(t_1)}\cbkt{\frac{\delta }{\delta J'_m(t_2)}\sbkt{F^{(1)}_p [J_p,J'_p]}}\right\vert_{J_p = J'_p =0}\nonumber\\ =& -i\delta_{mp}\delta_{np}\sbkt{\eta_p^{(+)} \bkt{t_1-t_2 }\Theta\bkt{t_1-t_2}-\eta_p^{(+)} \bkt{t_2-t_1}\Theta\bkt{t_2-t_1}}\nonumber\\ &+\delta_{mp}\delta_{np}\sbkt{\nu_p^{(+)} \bkt{t_1-t_2}\Theta\bkt{t_1-t_2}+\nu_p^{(+)}\bkt{t_2-t_1}\Theta\bkt{t_2-t_1}}\\ = & \delta_{mp}\delta_{np}\sbkt{-i\eta_p^{(+)} \bkt{t_1-t_2 } + \nu_p ^{(+)} \bkt{t_1 - t_2}}, } and \eqn{\label{tildezetaI}\widetilde{\zeta}_I(t_1,t_2)\equiv& \frac{\delta }{\delta J(t_1)}\cbkt{\frac{\delta }{\delta J'(t_2)}\sbkt{F^{(1)}_I [J,J']}}\nonumber\\ =& \frac{1}{2} \sbkt{\int_0^t \mathrm{d} \tau_1 \int_0^t \mathrm{d} \tau_2\bkt{ {G} \bkt{t_1-\tau_1} \nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{\tau_2-t_2} + {G}\bkt{t_2 - \tau_1}\nu^{(-)}\bkt{\tau_1-\tau_2} {G}\bkt{\tau_2-t_1}}}\nonumber\\ &-\frac{i}{2} \sbkt{{G}\bkt{t_1-t_2}-{G}\bkt{t_2-t_1}}+ \aavg{ Q_h\bkt{ t_1} Q_h\bkt{t_2}}\\ = & \nu'_{GG}\bkt{t_1 ,t_2 } - \frac{i}{2} g\bkt{t_1 - t_2} .
} Substituting the above in Eq.\,\eqref{seffxxp}, \eqn{ &\avg{S_{\mathrm{int}}^{(+)}\sbkt{X',-\frac{1}{i}\frac{\delta}{\delta {\bf J'}}}S_{\mathrm{int}}^{(+)}\sbkt{X,\frac{1}{i}\frac{\delta}{\delta {\bf J}}}}_0 \nonumber\\ =& \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{p} \lambda^2k_p^2 X(t_1)X'(t_2)\sbkt{-i \eta_p ^{(+)} \bkt{t_1 - t_2} + \nu_p ^{(+)} \bkt{t_1 - t_2}} \sbkt{\nu_{GG}'\bkt{t_1 ,t_2 } -\frac{i}{2} g\bkt{t_1 - t_2}}\\ \label{seffXXP} = & \int_0^t\mathrm{d} t_1 \int_0^t\mathrm{d} t_2 \sum_{p} \lambda^2k_p^2 X(t_1)X'(t_2)\sbkt{-i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 } + \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\right.\nonumber\\ &\left.-\frac{1}{2} g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}} . } } \end{enumerate} Putting together Eqs.\,\eqref{seffXX}, \eqref{seffXPXP}, and \eqref{seffXXP} in Eq.\,\eqref{seff}, we obtain the second order influence action as \eqn{&S_\mr{M, IF}^{(2)}\sbkt{X,X'}\nonumber\\ = &\frac{i}{2}\sum_p\lambda^2 k_p^2\int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2\sbkt{ X(t_1) X(t_2)\cbkt{i \eta_p ^{(+)}\bkt{t_1 - t_2 } \nu_{GG}' \bkt{t_1, t_2}\mr{sign}\bkt{t_1-t_2}+ \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1, t_2} \right.\right.\nonumber\\ &\left.\left. - \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}+ \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2} }\right.\nonumber\\ &\left.+ X'(t_1)X'(t_2)\cbkt{ -i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2} \mr{sign} \bkt{t_1 - t_2} + \nu_p ^{(+)}\bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2}\right.\right.\nonumber\\ &\left. \left.
- \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2}}\right.\nonumber\\ & \left.-2 X(t_1)X'(t_2)\cbkt{-i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 } + \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\right.\right.\nonumber\\ &\left.\left.- \frac{1}{2} g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}}}} \eqn{ = & \frac{i}{8}\sum_p\lambda^2 k_p^2\int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2\sbkt{ \cbkt{X_\Sigma(t_1) + X_\Delta(t_1) }\cbkt{X_\Sigma(t_2) + X_\Delta(t_2) }\cbkt{i \eta_p ^{(+)}\bkt{t_1 - t_2 } \nu_{GG}' \bkt{t_1, t_2}\mr{sign}\bkt{t_1-t_2} \right.\right.\nonumber\\ &\left.\left. + \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1, t_2}- \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}+ \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2} }\right.\nonumber\\ &\left.+ \cbkt{X_\Sigma(t_1) - X_\Delta(t_1) }\cbkt{X_\Sigma(t_2) - X_\Delta(t_2) }\cbkt{ -i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2} \mr{sign} \bkt{t_1 - t_2} \right.\right.\nonumber\\ &\left. \left. 
+ \nu_p ^{(+)}\bkt{t_1 - t_2} \nu_{GG}' \bkt{t_1 , t_2}- \frac{1}{2} {g}\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2} \nu_p ^{(+)} \bkt{t_1 - t_2}\mr{sign}\bkt{t_1-t_2}}\right.\nonumber\\ & \left.-2 \cbkt{X_\Sigma(t_1) + X_\Delta(t_1) }\cbkt{X_\Sigma(t_2) - X_\Delta(t_2) }\cbkt{-i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 } + \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\right.\right.\nonumber\\ &\left.\left.- \frac{1}{2} g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2}- \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}}}} \eqn{ = & \frac{i}{4}\sum_p\lambda^2 k_p^2\int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2\nonumber\\ &\sbkt{ X_\Sigma(t_1) X_\Sigma(t_2) \cbkt{ -i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }- \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}} \right.\nonumber\\ &\left.+X_\Delta(t_1) X_\Delta(t_2) \cbkt{2\nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }- g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2} -i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\right.\right.\nonumber\\ &\left.\left.-\frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}}\right.\nonumber\\ &\left.+ X_\Sigma(t_1) X_\Delta(t_2) \cbkt{i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\bkt{\mr{sign}\bkt{t_1 - t_2}-1}+ \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}\bkt{ \mr{sign}\bkt{t_1 - t_2}-1}\right.\right.\nonumber\\ &\left.\left. + \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }- \frac{1}{2} g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2} }\right.\nonumber\\ & \left.+ X_\Delta(t_1) X_\Sigma(t_2)\cbkt{i \eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }\bkt{\mr{sign}\bkt{t_1 - t_2}+1}+ \frac{i}{2} {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2}\bkt{ \mr{sign}\bkt{t_1 - t_2}+1}\right.\right.\nonumber\\ &\left.\left. 
- \nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }-\frac{1}{2} g\bkt{t_1 - t_2}\eta_p ^{(+)} \bkt{t_1 - t_2} }}\\ = & \frac{i}{4}\sum_p\lambda^2 k_p^2\int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2\sbkt{X_\Delta(t_1) X_\Delta(t_2) \cbkt{ 2\nu_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 } - g\bkt{t_1 - t_2} \eta_p ^{(+)} \bkt{t_1 - t_2}}\right.\nonumber\\ & \left.+ X_\Delta(t_1) X_\Sigma(t_2)\cbkt{2i\eta_p ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }+ i {g}\bkt{t_1 - t_2}\nu_p ^{(+)} \bkt{t_1 - t_2} }\Theta\bkt{t_1 - t_2}}, } where we have defined $X_\Sigma\bkt{t} \equiv X\bkt{t}+ X'\bkt{t}$ and $X_\Delta\bkt{t} \equiv X\bkt{t}- X'\bkt{t}$, and for the last line we have used the property $\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}f\bkt{t_{1}}f\bkt{t_{2}}F\bkt{t_1 - t_2}H\bkt{t_1,t_2}=0$, which holds provided that $F\bkt{t_2 - t_1}=-F\bkt{t_1 - t_2}$ and $H\bkt{t_2,t_1}=H\bkt{t_1,t_2}$. Using the definitions of the dissipation and noise kernels for the (+) bath as in Eqs.\,\eqref{ETA+} and \eqref{NU+}, \eqn{S_\mr{M, IF}^{(2)}= \int_0^t \mathrm{d} t_1\int_0^t \mathrm{d} t_2&\sbkt{\frac{i}{4} X_\Delta(t_1) X_\Delta(t_2) \cbkt{ 2\nu ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2}- g\bkt{t_1 - t_2} \eta ^{(+)} \bkt{t_1 - t_2} }\right.\nonumber\\ & \left.- \frac{1}{4} X_\Delta(t_1) X_\Sigma(t_2)\cbkt{ {g}\bkt{t_1 - t_2}\nu ^{(+)} \bkt{t_1 - t_2} +2\eta ^{(+)} \bkt{t_1 - t_2} \nu_{GG}'\bkt{t_1 ,t_2 }}\Theta\bkt{t_1 - t_2}},} which reduces to Eq.\,\eqref{EffAct2nd}.
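The vanishing property used in the last step can be checked numerically. The sketch below uses arbitrary illustrative choices of a smooth $f$, an odd $F$, and a symmetric $H$ (these specific functions are assumptions for the demonstration only); the integrand is antisymmetric under $t_1 \leftrightarrow t_2$, so the double integral cancels pairwise:

```python
import numpy as np

# Check: I = int_0^t int_0^t f(t1) f(t2) F(t1-t2) H(t1,t2) dt1 dt2 = 0
# whenever F(-s) = -F(s) (odd) and H(t2,t1) = H(t1,t2) (symmetric),
# since the integrand is then antisymmetric under t1 <-> t2.
t_max, n = 3.0, 400
s = np.linspace(0.0, t_max, n)
h = s[1] - s[0]
t1, t2 = np.meshgrid(s, s, indexing="ij")

f = np.exp(-s) * (1.0 + s**2)              # arbitrary smooth f
F = np.sin(2.0 * (t1 - t2))                # odd in t1 - t2
H = np.cos(t1) * np.cos(t2) + t1 * t2      # symmetric in (t1, t2)

integrand = np.outer(f, f) * F * H
val = integrand.sum() * h * h              # plain Riemann sum
print(abs(val) < 1e-9)  # True
```

Replacing $F$ by an even function makes the integral non-zero in general, which is why only the terms with an odd kernel factor drop out of the influence action.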
\section{Fourier transform of the dissipation and noise kernels} \label{App:kernels} The Fourier transforms of the dissipation and noise kernels $\{\eta^{(\pm)},\nu^{(\pm)}\}$ corresponding to the baths associated with the scalar field are \eqn{ \label{etapmw} \bar\eta^{(\pm)}(\omega)=&i{\rm Im}\sbkt{\bar \eta^{(\pm)}(\omega)}=-i\frac{\pi}{2}\sbkt{ J^{(\pm)}(\omega)- J^{(\pm)}(-\omega)},\\ \label{nupmw} \bar\nu^{(\pm)}(\omega)=&\pi\coth\left(\frac{\omega}{2k_{B}T_{F}}\right)\sbkt{ J^{(\pm)}(\omega)- J^{(\pm)}(-\omega)} . } From these expressions, we immediately obtain a fluctuation-dissipation relation for the $ \bkt{\pm}$ baths as in Eq.\,\eqref{FDRBathPM}. We now consider the kernel $\nu_{GG}(t_1,t_2)$ in the late-time limit (Eq.\,\eqref{NuGGLongTime}) as follows: \eqn{\nu_{GG}(t_1,t_2)=\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\bar{\nu}^{(-)}\bkt{\omega}\int_{0}^{t_{1}}\mathrm{d}\tau_{1}~G \bkt{t_{1}-\tau_{1}}e^{-i\omega\tau_{1}}\int_{0}^{t_{2}}\mathrm{d}\tau_2~G \bkt{t_{2}-\tau_{2}}e^{i\omega\tau_{2}}, \label{nuGGFourierFiniteTime} } where we have written $\nu^{(-)}(t)$ in terms of its Fourier transform. The time integrals on a finite interval are convolutions that can be re-cast in terms of the Laplace transform of $G$ (defined as $\tilde F(z)=\int_{0}^{+\infty}{dt}\, e^{-z t} F(t)$) via its inverse as: \eqn{\int_{0}^{t}\mathrm{d}\tau~G \bkt{t-\tau}e^{\pm i\omega\tau}=\int_{l-i\infty}^{l+i\infty}\frac{dz}{2\pi i}~e^{zt}\frac{\tilde{G }(z)}{(z\mp i\omega)}, } where $l$ has to be larger than the real part of each of the poles of $\tilde{G}$ to ensure causality. Specifically, since $\tilde{G}$ has poles $\{z_{k}\}$ with negative real parts (${\rm Re}(z_{k})<0$), any $l>0$ suffices.
Using Cauchy's theorem, the causal property of the retarded propagator allows us to express the convolution in terms of a sum over the residues at the poles of $\tilde{G}(z)/(z\mp i\omega)$ as: \eqn{\int_{0}^{t}\mathrm{d}\tau~G \bkt{t-\tau}e^{\pm i\omega\tau}=e^{\pm i\omega t}\bar{G}(\mp\omega) +\sum_{k}e^{z_{k} t}\frac{\mathcal{R}_{k}}{(z_{k}\mp i\omega)}, } with $\mathcal{R}_{k}\equiv{\rm Res}[\tilde{G }(z),z_{k}]$ and where we have used the relation between the Fourier and Laplace transforms of causal functions, $\tilde{G}(\pm i\omega)=\bar{G}(\mp \omega)$. Considering that ${\rm Re}(z_{k})<0$, it is easy to see that in the large-time limit $t\rightarrow+\infty$: \eqn{\int_{0}^{t}\mathrm{d}\tau~G \bkt{t-\tau}e^{\pm i\omega\tau}\xrightarrow{t\rightarrow+\infty}e^{\pm i\omega t}\bar{G}(\mp\omega) , } which immediately allows us to obtain the long-time limit of Eq.\,\eqref{nuGGFourierFiniteTime}: \eqn{\nu_{GG}(t_1,t_2)\rightarrow\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}e^{-i\omega(t_{1}-t_{2})}\left|\bar{G}(\omega) \right|^{2}\bar{\nu}^{(-)}\bkt{\omega}, \label{nuGGFourierLongTime} } where we have used that $\bar{G}\bkt{-\omega}=\bar{G}^{*}\bkt{\omega}$, given that $G$ is real. Moreover, we note that the last expression is a function of $t_{1}-t_{2}$ only, which is generally not the case during the course of the evolution. In the late-time limit, we can define the Fourier transform of the kernel as $\bar \nu_{GG}(\omega)\equiv|\bar{G}\bkt{\omega} |^{2}\bar{\nu}^{(-)}\bkt{\omega}$. We note that $\bar\nu_{GG}(\omega)$ is real and even, as expected. Furthermore, considering Eq.\,\eqref{Gret}, the Fourier transform of the retarded propagator reads: \eqn{\bar {G}(\omega) =\frac{1}{m\omega_{0}^{2}-m\omega^{2}+2\bar {\eta}^{(-)}(\omega) }. } Noting that $\bar{\eta}^{(-)}(\omega) $ has both real and imaginary parts, we absorb the real part into a renormalization of the frequency of the center of mass, while the imaginary part accounts for the dissipation.
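The long-time limit above can be illustrated numerically with a concrete damped kernel (an illustrative choice, not the actual system propagator): $G(t) = e^{-\gamma t}\sin(\Omega t)\,\Theta(t)$, whose Laplace transform $\tilde G(z) = \Omega/\bkt{(z+\gamma)^2 + \Omega^2}$ has poles $z_k = -\gamma \pm i\Omega$ with ${\rm Re}(z_k)<0$:

```python
import numpy as np

# Illustrative retarded kernel G(t) = exp(-gamma*t) * sin(Omega*t) * Theta(t),
# with Laplace transform Gtilde(z) = Omega / ((z + gamma)**2 + Omega**2);
# its poles z = -gamma +/- i*Omega have negative real part.
gamma, Omega, omega, t = 0.5, 2.0, 1.3, 40.0

tau = np.linspace(0.0, t, 200001)
G = np.exp(-gamma * (t - tau)) * np.sin(Omega * (t - tau))

# Finite-time convolution  int_0^t G(t - tau) e^{i*omega*tau} d(tau)
conv = np.sum(G * np.exp(1j * omega * tau)) * (tau[1] - tau[0])

# Predicted limit e^{i*omega*t} * Gtilde(i*omega), via Gtilde(+-i*omega) = Gbar(-+omega)
Gtilde = Omega / ((1j * omega + gamma) ** 2 + Omega**2)
limit = np.exp(1j * omega * t) * Gtilde

print(abs(conv - limit) < 1e-3)  # True: residue terms e^{z_k t} have decayed
```

For $t \gg 1/\gamma$ the residue contributions $e^{z_k t}$ are exponentially small, so the finite-time convolution agrees with $e^{i\omega t}\bar G(-\omega)$ to high accuracy.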
After absorbing ${\rm Re}[\bar{\eta}^{(-)}(\omega)]$ in this way, we define the renormalized frequency $\omega_{R}^{2}\equiv\omega_{0}^{2}+2{\rm Re}[\bar{\eta}^{(-)}(\omega)]/m$, so that we can re-cast the Fourier transform of the retarded propagator as: \eqn{\bar{G}(\omega) =\frac{1}{m\omega_{R}^{2}-m\omega^{2}+2i{\rm Im}[\bar{\eta}^{(-)}(\omega) ]}, } from which we obtain that ${\rm Im}[\bar {G}(\omega)]=-2|\bar{G}(\omega)|^{2}{\rm Im}[\bar{\eta}^{(-)}(\omega)]$. Combining this with the fluctuation-dissipation relation for the $(-)$ bath of Eq.\,\eqref{FDRBathPM}, we directly arrive at the fluctuation-dissipation relation between $\bar\nu_{GG}\bkt{\omega}$ and $\bar{G}$ as in Eq.\,\eqref{FDRnuGG}. \end{document}
\begin{document} \title{The geometry of the disk complex} \author{Howard Masur} \address{\hskip-\parindent Department of Mathematics\\ University of Chicago\\ Chicago, Illinois 60637} \email{[email protected]} \author{Saul Schleimer} \address{\hskip-\parindent Department of Mathematics\\ University of Warwick\\ Coventry, CV4 7AL, UK} \email{[email protected]} \thanks{This work is in the public domain.} \date{\today} \begin{abstract} We give a distance estimate for the metric on the disk complex and show that it is Gromov hyperbolic. As another application of our techniques, we find an algorithm which computes the Hempel distance of a Heegaard splitting, up to an error depending only on the genus. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} \label{Sec:Introduction} In this paper we initiate the study of the geometry of the disk complex of a handlebody $V$. The disk complex $\mathcal{D}(V)$ has a natural simplicial inclusion into the curve complex $\mathcal{C}(S)$ of the boundary of the handlebody. Surprisingly, this inclusion is not a quasi-isometric embedding; there are disks which are close in the curve complex yet very far apart in the disk complex. As we will show, any obstruction to joining such disks via a short path is a topologically meaningful subsurface of $S = \partial V$. We call such subsurfaces {\em holes}. A path in the disk complex must travel into and then out of these holes; paths in the curve complex may skip over a hole by using the vertex representing the boundary of the subsurface. We classify the holes: \begin{theorem} \label{Thm:ClassificationHolesDiskComplex} Suppose $V$ is a handlebody. If $X \subset \partial V$ is a hole for the disk complex $\mathcal{D}(V)$ of diameter at least $61$ then: \begin{itemize} \item $X$ is not an annulus. \item If $X$ is compressible then there are disks $D, E$ with boundary contained in $X$ so that the boundaries fill $X$. 
\item If $X$ is incompressible then there is an $I$--bundle $\rho_F \colon T \to F$ so that $T$ is a component of $V {\smallsetminus} \partial_v T$ and $X$ is a component of $\partial_h T$. \end{itemize} \end{theorem} See Theorems~\ref{Thm:Annuli}, \ref{Thm:CompressibleHoles} and \ref{Thm:IncompressibleHoles} for more precise statements. The $I$--bundles appearing in the classification lead us to study the arc complex $\mathcal{A}(F)$ of the base surface $F$. Since the $I$--bundle $T$ may be twisted, the surface $F$ may be non-orientable. Thus, as a necessary warm-up to the difficult case of the disk complex, we also analyze the holes for the curve complex of a non-orientable surface, as well as the holes for the arc complex. \subsection*{Topological application} It is a long-standing open problem to decide, given a Heegaard diagram, whether the underlying splitting surface is reducible. This question has deep connections to the geometry, topology, and algebra of the ambient three-manifold. For example, a resolution of this problem would give new solutions to both the three-sphere recognition problem and the triviality problem for three-manifold groups. The difficulty of deciding reducibility is underlined by its connection to the Poincar\'e conjecture: several approaches to the Poincar\'e conjecture fell at essentially this point. See~\cite{CavicchioliSpaggiari06} for an entrance into the literature. One generalization of deciding reducibility is to find an algorithm that, given a Heegaard diagram, computes the {\em distance} of the Heegaard splitting as defined by Hempel~\cite{Hempel01}. (For example, see~\cite[Section~2]{Birman06}.) The classification of holes for the disk complex leads to a coarse answer to this question. \begin{restate}{Theorem}{Thm:CoarselyComputeDistance} In every genus $g$ there is a constant $K = K(g)$ and an algorithm that, given a Heegaard diagram, computes the distance of the Heegaard splitting with error at most $K$.
\end{restate} In addition to the classification of holes, the algorithm relies on the Gromov hyperbolicity of the curve complex~\cite{MasurMinsky99} and the quasi-convexity of the disk set inside of the curve complex~\cite{MasurMinsky04}. However, the algorithm does not depend on our geometric applications of \refthm{ClassificationHolesDiskComplex}. \subsection*{Geometric application} The hyperbolicity of the curve complex and the classification of holes allow us to prove: \begin{restate}{Theorem}{Thm:DiskComplexHyperbolic} The disk complex is Gromov hyperbolic. \end{restate} Again, as a warm-up to the proof of \refthm{DiskComplexHyperbolic} we prove that $\mathcal{C}(F)$ and $\mathcal{A}(S)$ are hyperbolic in \refcor{NonorientableCurveComplexHyperbolic} and \refthm{ArcComplexHyperbolic}. Note that Bestvina and Fujiwara~\cite{BestvinaFujiwara07} have previously dealt with the curve complex of a non-orientable surface, following Bowditch~\cite{Bowditch06}. These results cannot be deduced from the fact that $\mathcal{D}(V)$, $\mathcal{C}(F)$, and $\mathcal{A}(S)$ can be realized as quasi-convex subsets of $\mathcal{C}(S)$. This is because the curve complex is locally infinite. As a simple example, consider the Cayley graph of $\mathbb{Z}^2$ with the standard generating set. Then the cone $C(\mathbb{Z}^2)$ of height one-half is a Gromov hyperbolic space and $\mathbb{Z}^2$ is a quasi-convex subset. Another instructive example, very much in line with our work, is the usual embedding of the three-valent tree $T_3$ into the Farey tessellation. The proof of \refthm{DiskComplexHyperbolic} requires the {\em distance estimate} \refthm{DiskComplexDistanceEstimate}: the distance in $\mathcal{C}(F)$, $\mathcal{A}(S)$, and $\mathcal{D}(V)$ is coarsely equal to the sum of subsurface projection distances in holes. However, we do not use the hierarchy machine introduced in~\cite{MasurMinsky00}.
This is because hierarchies are too flexible to respect a symmetry, such as the involution giving a non-orientable surface, and at the same time too rigid for the disk complex. For $\mathcal{C}(F)$ we use the highly rigid {Teichm\"uller~} geodesic machine, due to Rafi~\cite{Rafi10}. For $\mathcal{D}(V)$ we use the extremely flexible train track machine, developed by ourselves and Mosher~\cite{MasurEtAl10}. Theorems~\ref{Thm:DiskComplexDistanceEstimate} and \ref{Thm:DiskComplexHyperbolic} are part of a more general framework. Namely, given a combinatorial complex $\mathcal{G}$ we understand its geometry by classifying the holes: the geometric obstructions lying between $\mathcal{G}$ and the curve complex. In Sections~\ref{Sec:Axioms} and \ref{Sec:Partition} we show that any complex $\mathcal{G}$ satisfying certain axioms necessarily satisfies a distance estimate. That hyperbolicity follows from the axioms is proven in \refsec{Hyperbolicity}. Our axioms are stated in terms of a path of markings, a path in the combinatorial complex, and their relationship. For the disk complex the combinatorial paths are surgery sequences of essential disks while the marking paths are provided by train track splitting sequences; both constructions are due to the first author and Minsky~\cite{MasurMinsky04} (\refsec{BackgroundTrainTracks}). The verification of the axioms (\refsec{PathsDisk}) relies on our work with Mosher, analyzing train track splitting sequences in terms of subsurface projections~\cite{MasurEtAl10}. We do not study non-orientable surfaces directly; instead we focus on symmetric multicurves in the double cover. This time marking paths are provided by {Teichm\"uller~} geodesics, using the fact that the symmetric Riemann surfaces form a totally geodesic subset of {Teichm\"uller~} space. The combinatorial path is given by the systole map. We use results of Rafi~\cite{Rafi10} to verify the axioms for the complex of symmetric curves. (See \refsec{PathsNonorientable}.)
\refsec{PathsArc} verifies the axioms for the arc complex again using {Teichm\"uller~} geodesics and the systole map. It is interesting to note that the axioms for the arc complex can also be verified using hierarchies or, indeed, train track splitting sequences. The distance estimates for the marking graph and the pants graph, as given by the first author and Minsky~\cite{MasurMinsky00}, inspired the work here, but do not fit our framework. Indeed, neither the marking graph nor the pants graph are Gromov hyperbolic. It is crucial here that all holes {\em interfere}; this leads to hyperbolicity. When there are non-interfering holes, it is unclear how to partition the marking path to obtain the distance estimate. \subsection*{Acknowledgments} We thank Jason Behrstock, Brian Bowditch, Yair Minsky, Lee Mosher, Hossein Namazi, and Kasra Rafi for many enlightening conversations. We thank Tao Li for pointing out that our original bound inside of \refthm{IncompressibleHoles} of $O(\log g(V))$ could be reduced to a constant. \section{Background on complexes} \label{Sec:BackgroundComplexes} We use $S_{g,b,c}$ to denote the compact connected surface of genus $g$ with $b$ boundary components and $c$ cross-caps. If the surface is orientable we omit the subscript $c$ and write $S_{g,b}$. The {\em complexity} of $S = S_{g, b}$ is $\xi(S) = 3g - 3 + b$. If the surface is closed and orientable we simply write $S_g$. \subsection{Arcs and curves} A simple closed curve $\alpha \subset S$ is {\em essential} if $\alpha$ does not bound a disk in $S$. The curve $\alpha$ is {\em non-peripheral} if $\alpha$ is not isotopic to a component of $\partial S$. A simple arc $\beta \subset S$ is proper if $\beta \cap \partial S = \partial \beta$. An isotopy of $S$ is proper if it preserves the boundary setwise. A proper arc $\beta \subset S$ is {\em essential} if $\beta$ is not properly isotopic into a regular neighborhood of $\partial S$. 
Define $\mathcal{C}(S)$ to be the set of isotopy classes of essential, non-peripheral curves in $S$. Define $\mathcal{A}(S)$ to be the set of proper isotopy classes of essential arcs. When $S = S_{0,2}$ is an annulus define $\mathcal{A}(S)$ to be the set of essential arcs, up to isotopies fixing the boundary pointwise. For any surface define $\mathcal{AC}(S) = \mathcal{A}(S) \cup \mathcal{C}(S)$. For $\alpha, \beta \in \mathcal{AC}(S)$ the geometric intersection number $\iota(\alpha, \beta)$ is the minimum intersection possible between $\alpha$ and any $\beta'$ equivalent to $\beta$. When $S = S_{0,2}$ we do not count intersection points occurring on the boundary. If $\alpha$ and $\beta$ realize their geometric intersection number then $\alpha$ is {\em tight} with respect to $\beta$. If they do not realize their geometric intersection number then we may {\em tighten} $\beta$ until they do. Define $\Delta \subset \mathcal{AC}(S)$ to be a {\em multicurve} if for all $\alpha, \beta \in \Delta$ we have $\iota(\alpha, \beta) = 0$. Following Harvey~\cite{Harvey81} we may impose the structure of a simplicial complex on $\mathcal{AC}(S)$: the simplices are exactly the multicurves. Also, $\mathcal{C}(S)$ and $\mathcal{A}(S)$ naturally span sub-complexes. Note that the curve complexes $\mathcal{C}(S_{1,1})$ and $\mathcal{C}(S_{0,4})$ have no edges. It is useful to alter the definition in these cases. Place edges between all vertices with geometric intersection exactly one if $S = S_{1,1}$ or two if $S = S_{0,4}$. In both cases the result is the Farey graph. Also, with the current definition $\mathcal{C}(S)$ is empty if $S = S_{0,2}$. Thus for the annulus only we set $\mathcal{AC}(S) = \mathcal{C}(S) = \mathcal{A}(S)$.
\begin{definition} \label{Def:Distance} For vertices $\alpha, \beta \in \mathcal{C}(S)$ define the {\em distance} $d_S(\alpha, \beta)$ to be the minimum possible number of edges of a path in the one-skeleton $\mathcal{C}^1(S)$ which starts at $\alpha$ and ends at $\beta$. \end{definition} Note that if $d_S(\alpha, \beta) \geq 3$ then $\alpha$ and $\beta$ {\em fill} the surface $S$. We denote distance in the one-skeleton of $\mathcal{A}(S)$ and of $\mathcal{AC}(S)$ by $d_\mathcal{A}$ and $d_\mathcal{AC}$ respectively. Recall that the geometric intersection number of a pair of curves gives an upper bound for their distance. \begin{lemma} \label{Lem:Hempel} Suppose that $S$ is a compact connected surface which is not an annulus. For any $\alpha, \beta \in \mathcal{C}^0(S)$ with $\iota(\alpha, \beta) > 0$ we have $d_S(\alpha, \beta) \leq 2 \log_2(\iota(\alpha, \beta)) + 2$. \qed \end{lemma} \noindent This form of the inequality, stated for closed orientable surfaces, may be found in~\cite{Hempel01}. A proof in the bounded orientable case is given in~\cite{Schleimer06b}. The non-orientable case is then an exercise. When $S = S_{0,2}$ an induction proves \begin{equation} \label{Eqn:DistanceInAnnulus} d_S(\alpha, \beta) = 1 + \iota(\alpha, \beta) \end{equation} for distinct vertices $\alpha, \beta \in \mathcal{C}(S)$. See~\cite[Equation 2.3]{MasurMinsky00}. \subsection{Subsurfaces} Suppose that $X \subset S$ is a connected compact subsurface. We say $X$ is {\em essential} exactly when all boundary components of $X$ are essential in $S$. We say that $\alpha \in \mathcal{AC}(S)$ {\em cuts} $X$ if all representatives of $\alpha$ intersect $X$. If some representative is disjoint then we say $\alpha$ {\em misses} $X$.
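In the Farey-graph model of $\mathcal{C}(S_{1,1})$, vertices are slopes $p/q$ and $\iota(p/q, r/s) = |ps - qr|$, so the bound of \reflem{Hempel} can be checked by direct search. The following sketch (an illustrative computation, not part of any proof, restricted for finiteness to slopes in $[0,1]$ together with $1/0$) verifies the bound against breadth-first-search distances from $1/0$:

```python
import math
from collections import deque

# Farey-graph model of C(S_{1,1}): vertices are slopes p/q (with 1/0),
# and p/q, r/s span an edge exactly when |ps - qr| = 1.
N = 40  # denominator cut-off, an arbitrary choice for this illustration
verts = [(1, 0)] + [(p, q) for q in range(1, N + 1)
                    for p in range(0, q + 1) if math.gcd(p, q) == 1]

def iota(a, b):
    """Geometric intersection number of two slopes."""
    return abs(a[0] * b[1] - a[1] * b[0])

adj = {v: [w for w in verts if iota(v, w) == 1] for v in verts}

# Breadth-first search from the vertex 1/0.
dist = {(1, 0): 0}
queue = deque([(1, 0)])
while queue:
    v = queue.popleft()
    for w in adj[v]:
        if w not in dist:
            dist[w] = dist[v] + 1
            queue.append(w)

# Check d(1/0, p/q) <= 2 log2(iota) + 2, where iota(1/0, p/q) = q.
for v in verts[1:]:
    i = iota((1, 0), v)
    if i == 1:
        assert dist[v] == 1          # Farey neighbours of 1/0
    else:
        assert dist[v] <= 2 * math.log2(i) + 2
print("Hempel bound verified for all slopes up to denominator", N)
```

Since the search is restricted to a finite subgraph, the computed distances can only overestimate the true Farey-graph distances, so the check is conservative.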
\begin{definition} \label{Def:CleanlyEmbedded} An essential subsurface $X \subset S$ is {\em cleanly embedded} if for all components $\delta \subset \partial X$ we have: $\delta$ is isotopic into $\partial S$ if and only if $\delta$ is equal to a component of $\partial S$. \end{definition} \begin{definition} \label{Def:Overlap} Suppose $X, Y \subset S$ are essential subsurfaces. If $X$ is cleanly embedded in $Y$ then we say that $X$ is {\em nested} in $Y$. If $\partial X$ cuts $Y$ and also $\partial Y$ cuts $X$ then we say that $X$ and $Y$ {\em overlap}. \end{definition} A compact connected surface $S$ is {\em simple} if $\mathcal{AC}(S)$ has finite diameter. \begin{lemma} \label{Lem:SimpleSurfaces} Suppose $S$ is a connected compact surface. The following are equivalent: \begin{itemize} \item $S$ is not simple. \item The diameter of $\mathcal{AC}(S)$ is at least five. \item $S$ admits an ending lamination or $S = S_1$ or $S_{0,2}$. \item $S$ admits a pseudo-Anosov map or $S = S_1$ or $S_{0,2}$. \item $\chi(S) < -1$ or $S = S_{1,1}, S_1, S_{0,2}$. \end{itemize} \end{lemma} \noindent Lemma~4.6 of~\cite{MasurMinsky99} shows that pseudo-Anosov maps have quasi-geodesic orbits, when acting on the associated curve complex. A Dehn twist acting on $\mathcal{C}(S_{0,2})$ has geodesic orbits. Note that \reflem{SimpleSurfaces} is only used in this paper when $\partial S$ is non-empty. The closed case is included for completeness. \begin{proof}[Proof sketch of \reflem{SimpleSurfaces}] If $S$ admits a pseudo-Anosov map then the stable lamination is an ending lamination. If $S$ admits a filling lamination then, by an argument of Kobayashi~\cite{Kobayashi88b}, $\mathcal{AC}(S)$ has infinite diameter. (This argument is also sketched in~\cite{MasurMinsky99}, page 124, after the statement of Proposition~4.6.) If the diameter of $\mathcal{AC}(S)$ is infinite then the diameter is at least five.
To finish, one may check directly that all surfaces with $\chi(S) > -2$, other than $S_{1,1}$, $S_1$ and the annulus, have $\mathcal{AC}(S)$ with diameter at most four. (The difficult cases, $S_{0,1,2}$ and $S_{0,0,3}$, are discussed by Scharlemann~\cite{Scharlemann82}.) Alternatively, all surfaces with $\chi(S) < -1$, and also $S_{1,1}$, admit pseudo-Anosov maps. The orientable cases follow from Thurston's construction~\cite{Thurston88}. Penner's generalization~\cite{Penner88} covers the non-orientable cases. \end{proof} \subsection{Handlebodies and disks} Let $V_g$ denote the {\em handlebody} of genus $g$: the three-manifold obtained by taking a closed regular neighborhood of a polygonal, finite, connected graph in $\mathbb{R}^3$. The genus of the boundary is the {\em genus} of the handlebody. A properly embedded disk $D \subset V$ is {\em essential} if $\partial D \subset \partial V$ is essential. Let $\mathcal{D}(V)$ be the set of essential disks $D \subset V$, up to proper isotopy. A subset $\Delta \subset \mathcal{D}(V)$ is a {\em multidisk} if for every $D, E \in \Delta$ we have $\iota(\partial D, \partial E) = 0$. Following McCullough~\cite{McCullough91} we place a simplicial structure on $\mathcal{D}(V)$ by taking multidisks to be simplices. As with the curve complex, define $d_\mathcal{D}$ to be the distance in the one-skeleton of $\mathcal{D}(V)$. \subsection{Markings} \label{Sec:Markings} A finite subset $\mu \subset \mathcal{AC}(S)$ {\em fills} $S$ if for all $\beta \in \mathcal{C}(S)$ there is some $\alpha \in \mu$ so that $\iota(\alpha, \beta) > 0$. For any pair of finite subsets $\mu, \nu \subset \mathcal{AC}(S)$ we extend the intersection number: \[ \iota(\mu,\nu) = \sum_{\alpha \in \mu, \beta \in \nu} \iota(\alpha, \beta). \] We say that $\mu, \nu$ are {\em $L$--close} if $\iota(\mu, \nu) \leq L$. We say that $\mu$ is a {\em $K$--marking} if $\iota(\mu, \mu) \leq K$.
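For a concrete instance of these definitions, take $S = S_{1,1}$ and let $\alpha, \beta \in \mathcal{C}(S)$ be curves with $\iota(\alpha, \beta) = 1$. The set $\mu = \{\alpha, \beta\}$ fills $S$, since any essential curve in $S_{1,1}$ must meet $\alpha$ or $\beta$. Moreover \[ \iota(\mu, \mu) = \iota(\alpha, \alpha) + 2\iota(\alpha, \beta) + \iota(\beta, \beta) = 2, \] so $\mu$ is a $2$--marking.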
For any $K,L$ we may define $\mathcal{M}_{K,L}(S)$ to be the graph where vertices are filling $K$--markings and edges are given by $L$--closeness. As defined in~\cite{MasurMinsky00} we have: \begin{definition} A {\em complete clean marking} $\mu = \{ \alpha_i \} \cup \{ \beta_i \}$ consists of \begin{itemize} \item A collection of {\em base} curves $\operatorname{base}(\mu) = \{ \alpha_i \}$: a maximal simplex in $\mathcal{C}(S)$. \item A collection of {\em transversal} curves $\{\beta_i\}$: for each $i$ define $X_i = S {\smallsetminus} \bigcup_{j \neq i} \alpha_j$ and take $\beta_i \in \mathcal{C}(X_i)$ to be a Farey neighbor of $\alpha_i$. \end{itemize} \end{definition} \noindent If $\mu$ is a complete clean marking then $\iota(\mu, \mu) \leq 2 \xi(S) + 6 \chi(S)$. As discussed in~\cite{MasurMinsky00} there are two kinds of {\em elementary moves} which connect markings. There is a {\em twist} about a pants curve $\alpha$, replacing its transversal $\beta$ by a new transversal $\beta'$ which is a Farey neighbor of both $\alpha$ and $\beta$. We can {\em flip} by swapping the roles of $\alpha_i$ and $\beta_i$. (In the case of the flip move, some of the other transversals must be {\em cleaned}.) It follows that for any surface $S$ there are choices of $K, L$ so that $\mathcal{M}(S)$ is non-empty and connected. We use $d_\mathcal{M}(\mu, \nu)$ to denote distance in the marking graph. \section{Background on coarse geometry} \label{Sec:BackgroundGeometry} Here we review a few ideas from coarse geometry. See~\cite{Bridson99}, \cite{CDP90}, or \cite{Gromov87} for a fuller discussion. \subsection{Quasi-isometry} Suppose $r, s, A$ are non-negative real numbers, with $A \geq 1$. If $s \leq A \cdot r + A$ then we write $s \mathbin{\leq_{A}} r$. If $s \mathbin{\leq_{A}} r$ and $r \mathbin{\leq_{A}} s$ then we write $s \mathbin{=_A} r$ and call $r$ and $s$ {\em quasi-equal} with constant $A$.
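To illustrate quasi-equality: for all real $r \geq 0$ we have $2r + 1 \mathbin{=_2} r$. Indeed $2r + 1 \leq 2 \cdot r + 2$ gives $2r + 1 \mathbin{\leq_2} r$, while $r \leq 2(2r + 1) + 2$ gives $r \mathbin{\leq_2} 2r + 1$. Thus quasi-equality ignores bounded multiplicative and additive distortion, which is the correct level of precision for the coarse estimates below.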
We also define the {\em cut-off function} $[r]_c$ where $[r]_c = 0$ if $r < c$ and $[r]_c = r$ if $r \geq c$. Suppose that $(\mathcal{X}, d_\mathcal{X})$ and $(\mathcal{Y}, d_\mathcal{Y})$ are metric spaces. A relation $f \colon \mathcal{X} \to \mathcal{Y}$ is an $A$--{\em quasi-isometric embedding} for $A \geq 1$ if, for every $x, y \in \mathcal{X}$, \[ d_\mathcal{X}(x, y) \mathbin{=_A} d_\mathcal{Y}(f(x), f(y)). \] The relation $f$ is a {\em quasi-isometry}, and $\mathcal{X}$ is {\em quasi-isometric} to $\mathcal{Y}$, if $f$ is an $A$--quasi-isometric embedding and the image of $f$ is $A$--{\em dense}: the $A$--neighborhood of the image equals all of $\mathcal{Y}$. \subsection{Geodesics} Fix an interval $[u,v] \subset \mathbb{R}$. A {\em geodesic}, connecting $x$ to $y$ in $\mathcal{X}$, is an isometric embedding $f \colon [u, v] \to \mathcal{X}$ with $f(u) = x$ and $f(v) = y$. Often the exact choice of $f$ is unimportant and all that matters are the endpoints $x$ and $y$. We then denote the image of $f$ by $[x,y] \subset \mathcal{X}$. Fix now intervals $[m,n], [p,q] \subset \mathbb{Z}$. An $A$--quasi-isometric embedding $g \colon [m,n] \to \mathcal{X}$ is called an $A$--{\em quasi-geodesic} in $\mathcal{X}$. A function $g \colon [m,n] \to \mathcal{X}$ is an $A$--{\em unparameterized quasi-geodesic} in $\mathcal{X}$ if \begin{itemize} \item there is an increasing function $\rho \colon [p,q] \to [m,n]$ so that $g \circ \rho \colon [p,q] \to \mathcal{X}$ is an $A$--{\em quasi-geodesic} in $\mathcal{X}$ and \item for all $i \in [p,q-1]$, $\operatorname{diam}_\mathcal{X}\left(g\left[\rho(i), \rho(i+1)\right]\right) \leq A$. \end{itemize} (Compare to the definition of $(K, \delta, s)$--quasi-geodesics found in~\cite{MasurMinsky99}.) A subset $\mathcal{Y} \subset \mathcal{X}$ is $Q$--{\em quasi-convex} if every $\mathcal{X}$--geodesic connecting a pair of points of $\mathcal{Y}$ lies within a $Q$--neighborhood of $\mathcal{Y}$.
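To illustrate the unparameterized quasi-geodesics defined above, suppose that $f \colon [0, n] \to \mathcal{X}$ is a geodesic and $h \colon [0, 2n] \to [0, n]$ is given by $h(i) = \lfloor i/2 \rfloor$. Then $g = f \circ h$ is a $1$--unparameterized quasi-geodesic: taking $\rho \colon [0, n] \to [0, 2n]$ to be $\rho(i) = 2i$, the composition $g \circ \rho = f$ is a geodesic, and $\operatorname{diam}_\mathcal{X}\left(g\left[\rho(i), \rho(i+1)\right]\right) \leq 1$ for all $i$. So an unparameterized quasi-geodesic may pause, but backtracks only a bounded amount.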
\subsection{Hyperbolicity} We now assume that $\mathcal{X}$ is a connected graph with metric induced by giving all edges length one. \begin{definition} \label{Def:GromovHyperbolic} The space $\mathcal{X}$ is $\delta$--{\em hyperbolic} if, for any three points $x, y, z$ in $\mathcal{X}$ and for any geodesics $k = [x, y]$, $g = [y, z]$, $h = [z, x]$, the triangle $ghk$ is $\delta$--{\em slim}: the $\delta$--neighborhood of any two sides contains the third. \end{definition} An important tool for this paper is the following theorem of the first author and Minsky~\cite{MasurMinsky99}: \begin{theorem} \label{Thm:C(S)IsHyperbolic} The curve complex of an orientable surface is Gromov hyperbolic. \qed \end{theorem} For the remainder of this section we assume that $\mathcal{X}$ is a $\delta$--hyperbolic graph, $x, y, z \in \mathcal{X}$ are points, and $k = [x, y], g = [y, z], h = [z, x]$ are geodesics. \begin{definition} \label{Def:ProjectionToGeodesic} We take $\rho_k \colon \mathcal{X} \to k$ to be the {\em closest points relation}: \[ \rho_k(z) = \big\{ w \in k \mathbin{\mid} \mbox{ for all $v \in k$, $d_\mathcal{X}(z, w) \leq d_\mathcal{X}(z, v)$ } \big\}. \] \end{definition} We now list several lemmas useful in the sequel. \begin{lemma} \label{Lem:RightTriangle} There is a point on $g$ within distance $2\delta$ of $\rho_k(z)$. The same holds for $h$. \qed \end{lemma} \begin{lemma} \label{Lem:ProjectionHasBoundedDiameter} The closest points $\rho_k(z)$ have diameter at most $4\delta$. \qed \end{lemma} \begin{lemma} \label{Lem:CenterExists} The diameter of $\rho_g(x) \cup \rho_h(y) \cup \rho_k(z)$ is at most $6\delta$. \qed \end{lemma} \begin{lemma} \label{Lem:MovePoint} Suppose that $z'$ is another point in $\mathcal{X}$ so that $d_\mathcal{X}(z, z') \leq R$.
Then $d_\mathcal{X}(\rho_k(z), \rho_k(z')) \leq R + 6\delta.$ \qed \end{lemma} \begin{lemma} \label{Lem:MoveGeodesic} Suppose that $k'$ is another geodesic in $\mathcal{X}$ so that the endpoints of $k'$ are within distance $R$ of the points $x$ and $y$. Then $d_\mathcal{X}(\rho_k(z), \rho_{k'}(z)) \leq R + 11\delta$. \qed \end{lemma} We now turn to a useful consequence of the Morse stability of quasi-geodesics in hyperbolic spaces. \begin{lemma} \label{Lem:FirstReverse} For every $\delta$ and $A$ there is a constant $C$ with the following property: If $\mathcal{X}$ is $\delta$--hyperbolic and $g \colon [0, N] \to \mathcal{X}$ is an $A$--unparameterized quasi-geodesic then for any $m < n < p$ in $[0, N]$ we have: \[ d_\mathcal{X}(x, y) + d_\mathcal{X}(y, z) < d_\mathcal{X}(x, z) + C \] where $x, y, z = g(m), g(n), g(p)$. \qed \end{lemma} \subsection{A hyperbolicity criterion} \label{Sec:HyperbolicityCriterion} Here we give a hyperbolicity criterion tailored to our setting. We thank Brian Bowditch for both finding an error in our first proof of \refthm{HyperbolicityCriterion} and for informing us of Gilman's work~\cite{Gilman94, Gilman02}. Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Suppose that $\gamma \colon [0, N] \to \mathcal{X}$ is a loop in $\mathcal{X}$ with unit speed. Any pair of points $a, b \in [0, N]$ gives a {\em chord} of $\gamma$. If $a < b$, $N/4 \leq b - a$ and $N/4 \leq a + (N - b)$ then the chord is {\em $1/4$--separated}. The length of the chord is $d_\mathcal{X}(\gamma(a), \gamma(b))$. Following Gilman~\cite[Theorem~B]{Gilman94} we have: \begin{theorem} \label{Thm:Gilman} Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Then $\mathcal{X}$ is Gromov hyperbolic if and only if there is a constant $K$ so that every loop $\gamma \colon [0, N] \to \mathcal{X}$ has a $1/4$--separated chord of length at most $N/7 + K$. \qed \end{theorem} Gilman's proof goes via the subquadratic isoperimetric inequality.
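The criterion behaves as expected on non-hyperbolic spaces. For example, suppose that $\mathcal{X}$ contains isometrically embedded cycles of arbitrarily large length. Traversing a cycle of length $N$ gives a loop $\gamma \colon [0, N] \to \mathcal{X}$; any $1/4$--separated chord, given by $a < b$ with $N/4 \leq b - a \leq 3N/4$, has length $\min(b - a, N - (b - a)) \geq N/4$. Since $N/4 > N/7 + K$ as soon as $N > 28K/3$, no constant $K$ satisfies \refthm{Gilman}, and indeed such a space is not Gromov hyperbolic.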
We now give our criterion, noting that it is closely related to another paper of Gilman~\cite{Gilman02}. \begin{theorem} \label{Thm:HyperbolicityCriterion} Suppose that $\mathcal{X}$ is a graph with all edge-lengths equal to one. Then $\mathcal{X}$ is Gromov hyperbolic if and only if there is a constant $M \geq 0$ and, for all unordered pairs $x, y \in \mathcal{X}^0$, there is a connected subgraph $g_{x, y}$ containing $x$ and $y$ with the following properties: \begin{itemize} \item (Local) If $d_\mathcal{X}(x, y) \leq 1$ then $g_{x,y}$ has diameter at most $M$. \item (Slim triangles) For all $x, y, z \in \mathcal{X}^0$ the subgraph $g_{x,y}$ is contained in an $M$--neighborhood of $g_{y,z} \cup g_{z,x}$. \end{itemize} \end{theorem} \begin{proof} Suppose that $\gamma \colon [0, N] \to \mathcal{X}$ is a loop. If $\epsilon$ is the empty string let $I_\epsilon = [0,N]$. For any binary string $\omega$ let $I_{\omega0}$ and $I_{\omega1}$ be the first and second half of $I_\omega$. Note that if $|\omega| \geq \lceil \log_2 N \rceil$ then $|I_\omega| \leq 1$. Fix a string $\omega$ and let $[a, b] = I_\omega$. Let $g_\omega$ be the subgraph connecting $\gamma(a)$ to $\gamma(b)$. Note that $g_0 = g_1$ because $\gamma(0) = \gamma(N)$. Also, for any binary string $\omega$ the subgraphs $g_{\omega}, g_{\omega 0}, g_{\omega 1}$ form an $M$--slim triangle. If $|\omega| \leq \lceil \log_2 N \rceil$ then every $x \in g_\omega$ has some point $b \in I_\omega$ so that \[ d_\mathcal{X}(x, \gamma(b)) \leq M (\lceil \log_2 N \rceil - |\omega|) + 2M. \] Since $g_0$ is connected there is a point $x \in g_0$ that lies within the $M$--neighborhoods both of $g_{00}$ and of $g_{01}$. Pick some $b \in I_1$ so that $d_\mathcal{X}(x, \gamma(b))$ is bounded as in the previous paragraph. It follows that there is a point $a \in I_0$ so that $a, b$ are $1/4$--separated and so that \[ d_\mathcal{X}(\gamma(a), \gamma(b)) \leq 2M \lceil \log_2 N \rceil + 2M. 
\] Thus there is an additive error $K$ large enough so that $\mathcal{X}$ satisfies the criterion of \refthm{Gilman} and we are done. \end{proof} \section{Natural maps} \label{Sec:Natural} There are several natural maps between the complexes and graphs defined in \refsec{BackgroundComplexes}. Here we review what is known about their geometric properties, and give examples relevant to the rest of the paper. \subsection{Lifting, surgery, and subsurface projection} Suppose that $S$ is not simple. Choose a hyperbolic metric on the interior of $S$ so that all ends have infinite area. Fix a compact essential subsurface $X \subset S$ which is not a peripheral annulus. Let $S^X$ be the cover of $S$ so that $X$ lifts homeomorphically and so that $S^X \mathrel{\cong} {\operatorname{interior}}(X)$. For any $\alpha \in \mathcal{AC}(S)$ let $\alpha^X$ be the full preimage. Since there is a homeomorphism between $X$ and the Gromov compactification of $S^X$, in a small abuse of notation we identify $\mathcal{AC}(X)$ with the arc and curve complex of $S^X$. \begin{definition} \label{Def:CuttingRel} We define the {\em cutting relation} $\kappa_X \colon \mathcal{AC}(S) \to \mathcal{AC}(X)$ as follows: $\alpha' \in \kappa_X(\alpha)$ if and only if $\alpha'$ is an essential non-peripheral component of $\alpha^X$. \end{definition} Note that $\alpha$ cuts $X$ if and only if $\kappa_X(\alpha)$ is non-empty. Now suppose that $S$ is not an annulus. \begin{definition} \label{Def:SurgeryRel} We define the {\em surgery relation} $\sigma_S \colon \mathcal{AC}(S) \to \mathcal{C}(S)$ as follows: $\alpha' \in \sigma_S(\alpha)$ if and only if $\alpha' \in \mathcal{C}(S)$ is a boundary component of a regular neighborhood of $\alpha \cup \partial S$.
\end{definition} With $S$ and $X$ as above: \begin{definition} \label{Def:SubsurfaceProjection} The {\em subsurface projection relation} $\pi_X \colon \mathcal{AC}(S) \to \mathcal{C}(X)$ is defined as follows: If $X$ is not an annulus then define $\pi_X = \sigma_X \circ \kappa_X$. When $X$ is an annulus $\pi_X = \kappa_X$. \end{definition} If $\alpha, \beta \in \mathcal{AC}(S)$ both cut $X$ we write $d_X(\alpha, \beta) = \operatorname{diam}_X(\pi_X(\alpha) \cup \pi_X(\beta))$. This is the {\em subsurface projection distance} between $\alpha$ and $\beta$ in $X$. \begin{lemma} \label{Lem:SubsurfaceProjectionLipschitz} Suppose $\alpha, \beta \in \mathcal{AC}(S)$ are disjoint and cut $X$. Then $\operatorname{diam}_X(\pi_X(\alpha)), d_X(\alpha, \beta) \leq 3$. \qed \end{lemma} See Lemma~2.3 of~\cite{MasurMinsky00} and the remarks in the section Projection Bounds in~\cite{Minsky10}. \begin{corollary} \label{Cor:ProjectionOfPaths} Fix $X \subset S$. Suppose that $\{\beta_i\}_{i=0}^N$ is a path in $\mathcal{AC}(S)$. Suppose that $\beta_i$ cuts $X$ for all $i$. Then $d_X(\beta_0, \beta_N) \leq 3N + 3$. \qed \end{corollary} It is crucial to note that if some vertex of $\{ \beta_i \}$ {\em misses} $X$ then the projection distance $d_X(\beta_0, \beta_N)$ may be arbitrarily large compared to $N$. \refcor{ProjectionOfPaths} can be greatly strengthened when the path is a geodesic~\cite{MasurMinsky00}: \begin{theorem}[Bounded Geodesic Image] \label{Thm:BoundedGeodesicImage} There is a constant $M_0$ with the following property. Fix $X \subset S$. Suppose that $\{\beta_i\}_{i=0}^n$ is a geodesic in $\mathcal{C}(S)$. Suppose that $\beta_i$ cuts $X$ for all $i$. Then $d_X(\beta_0, \beta_n) \leq M_0$. \qed \end{theorem} Here is a converse to \reflem{SubsurfaceProjectionLipschitz}.
\begin{lemma} \label{Lem:BoundedProjectionImpliesBoundedIntersection} For every $a \in \mathbb{N}$ there is a number $b \in \mathbb{N}$ with the following property: for any $\alpha, \beta \in \mathcal{AC}(S)$ if $d_X(\alpha, \beta) \leq a$ for all $X \subset S$ then $\iota(\alpha, \beta) \leq b$. \end{lemma} Corollary~D of~\cite{ChoiRafi07} gives a more precise relation between projection distance and intersection number. \begin{proof}[Proof of \reflem{BoundedProjectionImpliesBoundedIntersection}] We only sketch the contrapositive: Suppose we are given a sequence of curves $\alpha_n, \beta_n$ so that $\iota(\alpha_n, \beta_n)$ tends to infinity. Passing to subsequences and applying elements of the mapping class group we may assume that $\alpha_n = \alpha_0$ for all $n$. Setting $c_n = \iota(\alpha_0, \beta_n)$ and passing to subsequences again we may assume that $\beta_n / c_n$ converges to $\lambda \in \mathcal{PML}(S)$, the projectivization of Thurston's space of measured laminations. Let $Y$ be any connected component of the subsurface filled by $\lambda$, chosen so that $\alpha_0$ cuts $Y$. Note that $\pi_Y(\beta_n)$ converges to $\lambda|_Y$. Again applying Kobayashi's argument~\cite{Kobayashi88b}, the distance $d_Y(\alpha_0, \beta_n)$ tends to infinity. \end{proof} \subsection{Inclusions} We now record a well known fact: \begin{lemma} \label{Lem:C(S)QuasiIsometricToAC(S)} The inclusion $\nu \colon \mathcal{C}(S) \to \mathcal{AC}(S)$ is a quasi-isometry. The surgery map $\sigma_S \colon \mathcal{AC}(S) \to \mathcal{C}(S)$ is a quasi-inverse for $\nu$. \end{lemma} \begin{proof} Fix $\alpha, \beta \in \mathcal{C}(S)$. Since $\nu$ is an inclusion we have $d_\mathcal{AC}(\alpha, \beta) \leq d_S(\alpha, \beta)$. In the other direction, let $\{ \alpha_i \}_{i = 0}^N$ be a geodesic in $\mathcal{AC}(S)$ connecting $\alpha$ to $\beta$. Since every $\alpha_i$ cuts $S$ we apply \refcor{ProjectionOfPaths} and deduce $d_S(\alpha, \beta) \leq 3N + 3$. 
Note that the composition $\sigma_S \circ \nu = \operatorname{Id}|\mathcal{C}(S)$. Also, for any arc $\alpha \in \mathcal{A}(S)$ we have $d_\mathcal{AC}(\alpha, \nu(\sigma_S(\alpha))) = 1$. Finally, $\mathcal{C}(S)$ is $1$--dense in $\mathcal{AC}(S)$, as any arc $\gamma \subset S$ is disjoint from the one or two curves of $\sigma_S(\gamma)$. \end{proof} Brian Bowditch raised the question, at the Newton Institute in August 2003, of the geometric properties of the inclusion $\mathcal{A}(S) \to \mathcal{AC}(S)$. The natural assumption, that this inclusion is again a quasi-isometric embedding, is false. In this paper we will exactly characterize how the inclusion distorts distance. We now move up a dimension. Suppose that $V$ is a handlebody and $S = \partial V$. We may take any disk $D \in \mathcal{D}(V)$ to its boundary $\partial D \in \mathcal{C}(S)$, giving an inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$. It is important to distinguish the disk complex from its image $\nu(\mathcal{D}(V))$; thus we will call the image the {\em disk set}. The first author and Minsky~\cite{MasurMinsky04} have shown: \begin{theorem} \label{Thm:DiskComplexConvex} The disk set is a quasi-convex subset of the curve complex. \qed \end{theorem} It is natural to ask if this map is a quasi-isometric embedding. If so, the hyperbolicity of $\mathcal{D}(V)$ would immediately follow. In fact, the inclusion again badly distorts distance and we investigate exactly how, below. \subsection{Markings and the mapping class group} Once the connectedness of $\mathcal{M}(S)$ is in hand, it is possible to use local finiteness to show that $\mathcal{M}(S)$ is quasi-isometric to the Cayley graph of the mapping class group~\cite{MasurMinsky00}. Using subsurface projections the first author and Minsky~\cite{MasurMinsky00} obtained a {\em distance estimate} for the marking complex and thus for the mapping class group.
\begin{theorem} \label{Thm:MarkingGraphDistanceEstimate} There is a constant ${C_0} = {C_0}(S)$ so that, for any $c \geq {C_0}$, there is a constant $A$ with \[ d_\mathcal{M}(\mu, \mu') \,\, \mathbin{=_A} \,\, \sum [d_X(\mu, \mu')]_c \] independent of the choice of $\mu$ and $\mu'$. Here the sum ranges over all essential, non-peripheral subsurfaces $X \subset S$. \end{theorem} This, and their similar estimate for the pants graph, is a model for the distance estimates given below. Notice that a filling marking $\mu \in \mathcal{M}(S)$ cuts all essential, non-peripheral subsurfaces of $S$. It is not an accident that the sum ranges over the same set. \section{Holes in general and the lower bound on distance} \label{Sec:Holes} Suppose that $S$ is a compact connected surface. In this paper a {\em combinatorial complex} $\mathcal{G}(S)$ will be a complex whose vertices are isotopy classes of certain multicurves in $S$. We will assume throughout that vertices of $\mathcal{G}(S)$ are connected by edges only if there are representatives which are disjoint. This assumption is made only to simplify the proofs --- all arguments work in the case where adjacent vertices are allowed to have uniformly bounded intersection. In all cases $\mathcal{G}$ will be connected. There is a natural map $\nu \colon \mathcal{G} \to \mathcal{AC}(S)$ taking a vertex of $\mathcal{G}$ to the isotopy classes of the components. Examples in the literature include the marking complex~\cite{MasurMinsky00}, the pants complex~\cite{Brock03a} \cite{BehrstockEtAl05}, the Hatcher-Thurston complex~\cite{HatcherThurston80}, the complex of separating curves~\cite{BrendleMargalit04}, the arc complex and the curve complexes themselves. For any combinatorial complex $\mathcal{G}$ defined in this paper {\em other than the curve complex} we will denote distance in the one-skeleton of $\mathcal{G}$ by $d_\mathcal{G}(\cdot,\cdot)$. Distance in $\mathcal{C}(S)$ will always be denoted by $d_S(\cdot, \cdot)$.
\subsection{Holes, defined} Suppose that $S$ is non-simple. Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that $X \subset S$ is a cleanly embedded subsurface. A vertex $\alpha \in \mathcal{G}$ {\em cuts} $X$ if some component of $\alpha$ cuts $X$. \begin{definition} \label{Def:Hole} We say $X \subset S$ is a {\em hole} for $\mathcal{G}$ if every vertex of $\mathcal{G}$ cuts $X$. \end{definition} Almost equivalently, if $X$ is a hole then the subsurface projection $\pi_X \colon \mathcal{G}(S) \to \mathcal{C}(X)$ never takes the empty set as a value. Note that the entire surface $S$ is always a hole, regardless of our choice of $\mathcal{G}$. A boundary parallel annulus cannot be cleanly embedded (unless $S$ is also an annulus), so generally cannot be a hole. A hole $X \subset S$ is {\em strict} if $X$ is not homeomorphic to $S$. We now classify the holes for $\mathcal{A}(S)$. \begin{example} \label{Exa:HolesArcComplex} Suppose that $S = S_{g,b}$ with $b > 0$ and consider the arc complex $\mathcal{A}(S)$. The holes, up to isotopy, are exactly the cleanly embedded surfaces which contain $\partial S$. So, for example, if $S$ is planar then only $S$ is a hole for $\mathcal{A}(S)$. The same holds for $S = S_{1,1}$. In these cases it is an exercise to show that $\mathcal{C}(S)$ and $\mathcal{A}(S)$ are quasi-isometric. In all other cases the arc complex admits infinitely many holes. \end{example} \begin{definition} \label{Def:DiameterOfHole} If $X$ is a hole and if $\pi_X(\mathcal{G}) \subset \mathcal{C}(X)$ has diameter at least $R$ we say that the hole $X$ has {\em diameter} at least $R$. \end{definition} \begin{example} Continuing the example above: Since the mapping class group acts on the arc complex, all non-simple holes for $\mathcal{A}(S)$ have infinite diameter. \end{example} Suppose now that $X, X' \subset S$ are disjoint holes for $\mathcal{G}$.
In the presence of symmetry there can be a relationship between $\pi_X|\mathcal{G}$ and $\pi_{X'}|\mathcal{G}$ as follows: \begin{definition} \label{Def:PairedHoles} Suppose that $X, X'$ are holes for $\mathcal{G}$, both of infinite diameter. Then $X$ and $X'$ are {\em paired} if there is a homeomorphism $\tau \colon X \to X'$ and a constant $L_4$ so that $$ d_{X'}(\pi_{X'}(\gamma), \tau(\pi_X(\gamma))) \leq L_4 $$ for every $\gamma \in \mathcal{G}$. Furthermore, if $Y \subset X$ is a hole then $\tau$ pairs $Y$ with $Y' = \tau(Y)$. Lastly, pairing is required to be symmetric; if $\tau$ pairs $X$ with $X'$ then $\tau^{-1}$ pairs $X'$ with $X$. \end{definition} \begin{definition} \label{Def:Interfere} Two holes $X$ and $Y$ {\em interfere} if either $X \cap Y \neq \emptyset$ or $X$ is paired with $X'$ and $X' \cap Y \neq \emptyset$. \end{definition} Examples arise in the symmetric arc complex and in the discussion of twisted $I$--bundles inside of a handlebody. \subsection{Projection to holes is coarsely Lipschitz} The following lem\-ma is used repeatedly throughout the paper: \begin{lemma} \label{Lem:LipschitzToHoles} Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that $X$ is a hole for $\mathcal{G}$. Then for any $\alpha, \beta \in \mathcal{G}$ we have $$d_X(\alpha, \beta) \leq 3 + 3 \cdot d_\mathcal{G}(\alpha, \beta).$$ The additive error is required only when $\alpha = \beta$. \end{lemma} \begin{proof} This follows directly from Corollary~\ref{Cor:ProjectionOfPaths} and our assumption that vertices of $\mathcal{G}$ connected by an edge represent disjoint multicurves. \end{proof} \subsection{Infinite diameter holes} \label{Sec:InfiniteDiameterHoles} We may now state a first answer to Bowditch's question. \begin{lemma} \label{Lem:InfiniteDiameterHoleImpliesNotQIEmbedded} Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Suppose that there is a strict hole $X \subset S$ having infinite diameter. 
Then $\nu \colon \mathcal{G} \to \mathcal{AC}(S)$ is not a quasi-isometric embedding. \qed \end{lemma} This lemma and \refexa{HolesArcComplex} completely determine when the inclusion of $\mathcal{A}(S)$ into $\mathcal{AC}(S)$ is a quasi-isometric embedding. It quickly becomes clear that the set of holes tightly constrains the intrinsic geometry of a combinatorial complex. \begin{lemma} \label{Lem:DisjointHolesImpliesNotHyperbolic} Suppose that $\mathcal{G}(S)$ is a combinatorial complex invariant under the natural action of $\mathcal{MCG}(S)$. Then every non-simple hole for $\mathcal{G}$ has infinite diameter. Furthermore, if $X, Y \subset S$ are disjoint non-simple holes for $\mathcal{G}$ then there is a quasi-isometric embedding of $\mathbb{Z}^2$ into $\mathcal{G}$. \qed \end{lemma} We will not use Lemmas~\ref{Lem:InfiniteDiameterHoleImpliesNotQIEmbedded} or~\ref{Lem:DisjointHolesImpliesNotHyperbolic} and so omit the proofs. Instead our interest lies in proving the far more powerful distance estimate (Theorems~\ref{Thm:LowerBound} and~\ref{Thm:UpperBound}) for $\mathcal{G}(S)$. \subsection{A lower bound on distance} Here we see that the sum of projection distances in holes gives a lower bound for distance. \begin{theorem} \label{Thm:LowerBound} Fix $S$, a compact connected non-simple surface. Suppose that $\mathcal{G}(S)$ is a combinatorial complex. Then there is a constant ${C_0}$ so that for all $c \geq {C_0}$ there is a constant $A$ satisfying \[ \sum [d_X(\alpha, \beta)]_c \mathbin{\leq_{A}} d_\mathcal{G}(\alpha, \beta). \] Here $\alpha, \beta \in \mathcal{G}$ and the sum is taken over all holes $X$ for the complex $\mathcal{G}$. \qed \end{theorem} The proof follows the proof of Theorems~6.10 and~6.12 of~\cite{MasurMinsky00}, practically word for word.
The only changes necessary are to \begin{itemize} \item replace the sum over {\em all} subsurfaces by the sum over all holes, \item replace Lemma~2.5 of~\cite{MasurMinsky00}, which records how markings differing by an elementary move project to an essential subsurface, by \reflem{LipschitzToHoles} of this paper, which records how $\mathcal{G}$ projects to a hole. \end{itemize} One major goal of this paper is to give criteria sufficient to obtain the reverse inequality (\refthm{UpperBound}). \section{Holes for the non-orientable surface} \label{Sec:HolesNonorientable} Fix $F$ a compact, connected, and non-orientable surface. Let $S$ be the orientation double cover with covering map $\rho_F \colon S \to F$. Let $\tau \colon S \to S$ be the associated involution; so for all $x \in S$, $\rho_F(x) = \rho_F(\tau(x))$. \begin{definition} A multicurve $\gamma \subset \mathcal{AC}(S)$ is {\em symmetric} if $\tau(\gamma) \cap \gamma = \emptyset$ or $\tau(\gamma) = \gamma$. A multicurve $\gamma$ is {\em invariant} if there is a curve or arc $\gamma' \subset F$ so that $\gamma = \rho_F^{-1}(\gamma')$. The same definitions hold for subsurfaces $X \subset S$. \end{definition} \begin{definition} The {\em invariant complex} $\mathcal{C}^\tau(S)$ is the simplicial complex with vertex set being isotopy classes of invariant multicurves. There is a $k$--simplex for every collection of $k+1$ distinct isotopy classes having pairwise disjoint representatives. \end{definition} Notice that $\mathcal{C}^\tau(S)$ is simplicially isomorphic to $\mathcal{C}(F)$. There is also a natural map $\nu \colon \mathcal{C}^\tau(S) \to \mathcal{C}(S)$. We will prove: \begin{lemma} \label{Lem:InvariantQI} $\nu \colon \mathcal{C}^\tau(S) \to \mathcal{C}(S)$ is a quasi-isometric embedding. \end{lemma} It thus follows from the hyperbolicity of $\mathcal{C}(S)$ that: \begin{corollary}[\cite{BestvinaFujiwara07}] \label{Cor:NonorientableCurveComplexHyperbolic} $\mathcal{C}(F)$ is Gromov hyperbolic.
\qed \end{corollary} We begin the proof of \reflem{InvariantQI}: since $\nu$ sends adjacent vertices to adjacent edges we have \begin{equation} \label{Eqn:NonOrientableLowerBound} d_S(\alpha, \beta) \leq d_{\mathcal{C}^\tau}(\alpha, \beta), \end{equation} as long as $\alpha$ and $\beta$ are distinct in $\mathcal{C}^\tau(S)$. In fact, since the surface $S$ itself is a hole for $\mathcal{C}^\tau(S)$ we may deduce a slightly weaker lower bound from \reflem{LipschitzToHoles} or indeed from \refthm{LowerBound}. The other half of the proof of \reflem{InvariantQI} consists of showing that $S$ is the {\em only} hole for $\mathcal{C}^\tau(S)$ with large diameter. After a discussion of Teichm\"uller geodesics we will prove: \begin{restate}{Lemma}{Lem:SymmetricSurfaces} There is a constant $K$ with the following property: Suppose that $\alpha, \beta$ are invariant multicurves in $S$. Suppose that $X \subset S$ is an essential subsurface where $d_X(\alpha, \beta) > K$. Then $X$ is symmetric. \end{restate} From this it follows that: \begin{corollary} \label{Cor:NonorientableHoles} With $K$ as in \reflem{SymmetricSurfaces}: If $X \subset S$ is a hole for $\mathcal{C}^\tau(S)$ with diameter greater than $K$ then $X = S$. \end{corollary} \begin{proof} Suppose that $X \subset S$ is a strict subsurface, cleanly embedded. Suppose that $\operatorname{diam}_X(\mathcal{C}^\tau(S)) > K$. Thus $X$ is symmetric. It follows that $\partial X {\smallsetminus} \partial S$ is also symmetric. Since $\partial X$ does not cut $X$, we deduce that $X$ is not a hole for $\mathcal{C}^\tau(S)$. \end{proof} This corollary, together with the upper bound (\refthm{UpperBound}), proves \reflem{InvariantQI}. \section{Holes for the arc complex} \label{Sec:HolesArcComplex} Here we generalize the definition of the arc complex and classify its holes. \begin{definition} \label{Def:RelativeArcComplex} Suppose that $S$ is a non-simple surface with boundary.
Let $\Delta$ be a non-empty collection of components of $\partial S$. The {\em arc complex} $\mathcal{A}(S, \Delta)$ is the subcomplex of $\mathcal{A}(S)$ spanned by essential arcs $\alpha \subset S$ with $\partial \alpha \subset \Delta$. \end{definition} Note that $\mathcal{A}(S, \partial S)$ and $\mathcal{A}(S)$ are identical. \begin{lemma} \label{Lem:ArcComplexHoles} Suppose $X \subset S$ is cleanly embedded. Then $X$ is a hole for $\mathcal{A}(S, \Delta)$ if and only if $\Delta \subset \partial X$. \qed \end{lemma} This follows directly from the definition of a hole. We now have a straightforward observation: \begin{lemma} \label{Lem:ArcComplexHolesIntersect} If $X, Y \subset S$ are holes for $\mathcal{A}(S, \Delta)$ then $X \cap Y \neq \emptyset$. \qed \end{lemma} The proof follows immediately from \reflem{ArcComplexHoles}. \reflem{DisjointHolesImpliesNotHyperbolic} indicates that \reflem{ArcComplexHolesIntersect} is essential to proving that $\mathcal{A}(S, \Delta)$ is Gromov hyperbolic. In order to prove the upper bound theorem for $\mathcal{A}$ we will use pants decompositions of the surface $S$. In an attempt to avoid complications in the non-orientable case we must carefully lift to the orientation cover. Suppose that $F$ is non-simple, non-orientable, and has non-empty boundary. Let $\rho_F \colon S \to F$ be the orientation double cover and let $\tau \colon S \to S$ be the induced involution. Fix $\Delta' \subset \partial F$ and let $\Delta = \rho_F^{-1}(\Delta')$. \begin{definition} We define $\mathcal{A}^\tau(S, \Delta)$ to be the {\em invariant arc complex}: vertices are invariant multi-arcs and simplices arise from disjointness. \end{definition} Again, $\mathcal{A}^\tau(S, \Delta)$ is simplicially isomorphic to $\mathcal{A}(F, \Delta')$. If $X \cap \tau(X) = \emptyset$ and $\Delta \subset X \cup \tau(X)$ then the subsurfaces $X$ and $\tau(X)$ are paired holes, as in \refdef{PairedHoles}.
Notice as well that all non-simple symmetric holes $X \subset S$ for $\mathcal{A}^\tau(S, \Delta)$ have infinite diameter. Unlike $\mathcal{A}(F, \Delta')$ the complex $\mathcal{A}^\tau(S, \Delta)$ may have disjoint holes. None\-the\-less, we have: \begin{lemma} \label{Lem:ArcComplexHolesInterfere} Any two non-simple holes for $\mathcal{A}^\tau(S, \Delta)$ interfere. \end{lemma} \begin{proof} Suppose that $X, Y$ are holes for the $\tau$--invariant arc complex, $\mathcal{A}^\tau(S, \Delta)$. It follows from \reflem{SymmetricSurfaces} that $X$ is symmetric with $\Delta \subset X \cup \tau(X)$. The same holds for $Y$. Thus $Y$ must cut either $X$ or $\tau(X)$. \end{proof} \section{Background on three-manifolds} \label{Sec:BackgroundThreeManifolds} Before discussing the holes in the disk complex, we record a few facts about handlebodies and $I$--bundles. Fix $M$ a compact connected irreducible three-manifold. Recall that $M$ is {\em irreducible} if every embedded two-sphere in $M$ bounds a three-ball. Recall that if $N$ is a closed submanifold of $M$ then $\operatorname{fr}(N)$, the frontier of $N$ in $M$, is the closure of $\partial N {\smallsetminus} \partial M$. \subsection{Compressions} Suppose that $F$ is a surface embedded in $M$. Then $F$ is {\em compressible} if there is a disk $B$ embedded in $M$ with $B \cap \partial M = \emptyset$, $B \cap F = \partial B$, and $\partial B$ essential in $F$. Any such disk $B$ is called a {\em compression} of $F$. In this situation form a new surface $F'$ as follows: Let $N$ be a closed regular neighborhood of $B$. First remove from $F$ the annulus $N \cap F$. Now form $F'$ by gluing on both disk components of $\partial N {\smallsetminus} F$. We say that $F'$ is obtained by {\em compressing} $F$ along $B$. If no such disk exists we say $F$ is {\em incompressible}. 
\begin{definition} \label{Def:BdyCompression} A properly embedded surface $F$ is {\em boundary compressible} if there is a disk $B$ embedded in $M$ with \begin{itemize} \item ${\operatorname{interior}}(B) \cap \partial M = \emptyset$, \item $\partial B$ is a union of connected arcs $\alpha$ and $\beta$, \item $\alpha \cap \beta = \partial \alpha = \partial \beta$, \item $B \cap F = \alpha$ and $\alpha$ is properly embedded in $F$, \item $B \cap \partial M = \beta$, and \item $\beta$ is essential in $\partial M {\smallsetminus} \partial F$. \end{itemize} \end{definition} A disk, like $B$, with boundary partitioned into two arcs is called a {\em bigon}. Note that this definition of boundary compression is slightly weaker than some found in the literature; the arc $\alpha$ is often required to be essential in $F$. We do not require this additional property because, for us, $F$ will usually be a properly embedded disk in a handlebody. Just as for compressing disks we may {\em boundary compress} $F$ along $B$ to obtain a new surface $F'$: Let $N$ be a closed regular neighborhood of $B$. First remove from $F$ the rectangle $N \cap F$. Now form $F'$ by gluing on both bigon components of $\operatorname{fr}(N) {\smallsetminus} F$. Again, $F'$ is obtained by {\em boundary compressing} $F$ along $B$. Note that the relevant boundary components of $F$ and $F'$ cobound a pair of pants embedded in $\partial M$. If no boundary compression exists then $F$ is {\em boundary incompressible}. \begin{remark} \label{Rem:SurfacesInHandlebodies} Recall that any surface $F$ properly embedded in a handlebody $V_g$, $g \geq 2$, is either compressible or boundary compressible. \end{remark} Suppose now that $F$ is properly embedded in $M$ and $\Gamma$ is a multicurve in $\partial M$. \begin{remark} \label{Rem:IntersectionNumber} Suppose that $F'$ is obtained by a boundary compression of $F$ performed in the complement of $\Gamma$. 
Suppose that $F' = F_1 \cup F_2$ is disconnected and each $F_i$ cuts $\Gamma$. Then $\iota(\partial F_i, \Gamma) < \iota(\partial F, \Gamma)$ for $i = 1, 2$. \end{remark} It is often useful to restrict our attention to boundary compressions meeting a single subsurface of $\partial M$. So suppose that $X \subset \partial M$ is an essential subsurface. Suppose that $\partial F$ is tight with respect to $\partial X$. Suppose $B$ is a boundary compression of $F$. If $B \cap \partial M \subset X$ we say that $F$ is {\em boundary compressible into $X$}. \begin{lemma} \label{Lem:XCompressibleImpliesBdyXCompressible} Suppose that $M$ is irreducible. Fix $X$ a connected essential subsurface of $\partial M$. Let $F \subset M$ be a properly embedded, incompressible surface. Suppose that $\partial X$ and $\partial F$ are tight and that $X$ compresses in $M$. Then either: \begin{itemize} \item $F \cap X = \emptyset$, \item $F$ is boundary compressible into $X$, or \item $F$ is a disk with $\partial F \subset X$. \end{itemize} \end{lemma} \begin{proof} Suppose that $X$ is compressible via a disk $E$. Isotope $E$ to make $\partial E$ tight with respect to $\partial F$. This can be done while maintaining $\partial E \subset X$ because $\partial F$ and $\partial X$ are tight. Since $M$ is irreducible and $F$ is incompressible we may isotope $E$, rel $\partial$, to remove all simple closed curves of $F \cap E$. If $F \cap E$ is non-empty then an outermost bigon of $E$ gives the desired boundary compression lying in $X$. Suppose instead that $F \cap E = \emptyset$ but $F$ does cut $X$. Let $\delta \subset X$ be a simple arc meeting each of $F$ and $E$ in exactly one endpoint. Let $N$ be a closed regular neighborhood of $\delta \cup E$. Note that $\operatorname{fr}(N) {\smallsetminus} F$ has three components. One is a properly embedded disk parallel to $E$ and the other two, $B$ and $B'$, are bigons attached to $F$.
At least one of these, say $B'$, is trivial in the sense that $B' \cap \partial M$ is a trivial arc embedded in $\partial M {\smallsetminus} \partial F$. If $B$ is non-trivial then $B$ provides the desired boundary compression. Suppose that $B$ is also trivial. It follows that $\partial E$ and one component $\gamma \subset \partial F$ cobound an annulus $A \subset X$. So $D = A \cup E$ is a disk with $(D, \partial D) \subset (M, F)$. As $\partial D = \gamma$, $F$ is incompressible, and $M$ is irreducible, we deduce that $F$ is isotopic to $E$. \end{proof} \subsection{Band sums} A {\em band sum} is the inverse operation to boundary compression: Fix a pair of disjoint properly embedded surfaces $F_1, F_2 \subset M$. Let $F' = F_1 \cup F_2$. Fix a simple arc $\delta \subset \partial M$ so that $\delta$ meets each of $F_1$ and $F_2$ in exactly one point of $\partial \delta$. Let $N \subset M$ be a closed regular neighborhood of $\delta$. Form a new surface by adding to $F' {\smallsetminus} N$ the rectangle component of $\operatorname{fr}(N) {\smallsetminus} F'$. The surface $F$ obtained is the result of {\em band summing} $F_1$ to $F_2$ along $\delta$. Note that $F$ has a boundary compression {\em dual} to $\delta$ yielding $F'$: that is, there is a boundary compression $B$ for $F$ so that $\delta \cap B$ is a single point and compressing $F$ along $B$ gives $F'$. \subsection{Handlebodies and I-bundles} Recall that handlebodies are irreducible. Suppose that $F$ is a compact connected surface with at least one boundary component. Let $T$ be the orientation $I$--bundle over $F$. If $F$ is orientable then $T \mathrel{\cong} F \times I$. If $F$ is not orientable then $T$ is the unique $I$--bundle over $F$ with orientable total space. We call $T$ the {\em $I$--bundle} and $F$ the {\em base space}. Let $\rho_F \colon T \to F$ be the associated bundle map. Note that $T$ is homeomorphic to a handlebody.
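For concreteness we record the genus of this handlebody via a standard Euler characteristic computation (this computation is not part of the original discussion): since $T$ deformation retracts to $F$ we have $\chi(T) = \chi(F)$, while a handlebody of genus $g$ has Euler characteristic $1 - g$. Hence
\[
T \mathrel{\cong} V_g, \qquad g = 1 - \chi(F).
\]
For example, if $F$ is orientable of genus $h$ with $b \geq 1$ boundary components then $\chi(F) = 2 - 2h - b$, and so $T \mathrel{\cong} F \times I$ is a handlebody of genus $2h + b - 1$.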
If $A \subset T$ is a union of fibers of the map $\rho_F$ then $A$ is {\em vertical} with respect to $T$. In particular take $\partial_v T = \rho_F^{-1}(\partial F)$ to be the {\em vertical boundary} of $T$. Take $\partial_h T$ to be the union of the boundaries of all of the fibers: this is the {\em horizontal boundary} of $T$. Note that $\partial_h T$ is always incompressible in $T$ while $\partial_v T$ is incompressible in $T$ as long as $F$ is not homeomorphic to a disk. Note that, as $|\partial_v T| \geq 1$, any vertical surface in $T$ can be boundary compressed. However no vertical surface in $T$ may be boundary compressed into $\partial_h T$. We end this section with: \begin{lemma} \label{Lem:BdyIncompImpliesVertical} Suppose that $F$ is a compact, connected surface with $\partial F \neq \emptyset$. Let $\rho_F \colon T \to F$ be the orientation $I$--bundle over $F$. Let $X$ be a component of $\partial_h T$. Let $D \subset T$ be a properly embedded disk. If \begin{itemize} \item $\partial D$ is essential in $\partial T$, \item $\partial D$ and $\partial X$ are tight, and \item $D$ cannot be boundary compressed into $X$ \end{itemize} then $D$ may be properly isotoped to be vertical with respect to $T$. \qed \end{lemma} \section{Holes for the disk complex} \label{Sec:HolesDisk} Here we begin to classify the holes for the disk complex, a more difficult analysis than that of the arc complex. To fix notation let $V$ be a handlebody. Let $S = S_g = \partial V$. Recall that there is a natural inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$. \begin{remark} \label{Rem:ComplementOfHoleIsIncompressible} The notion of a hole $X \subset \partial V$ for $\mathcal{D}(V)$ may be phrased in several different ways: \begin{itemize} \item every essential disk $D \subset V$ cuts the surface $X$, \item $\overline{S {\smallsetminus} X}$ is incompressible in $V$, or \item $X$ is {\em disk-busting} in $V$. 
\end{itemize} \end{remark} The classification of holes $X \subset S$ for $\mathcal{D}(V)$ breaks roughly into three cases: either $X$ is an annulus, is compressible in $V$, or is incompressible in $V$. In each case we obtain a result: \begin{restate}{Theorem}{Thm:Annuli} Suppose $X$ is a hole for $\mathcal{D}(V)$ and $X$ is an annulus. Then the diameter of $X$ is at most $5$. \end{restate} \begin{restate}{Theorem}{Thm:CompressibleHoles} Suppose $X$ is a compressible hole for $\mathcal{D}(V)$ with diameter at least $15$. Then there are a pair of essential disks $D, E \subset V$ so that \begin{itemize} \item $\partial D, \partial E \subset X$ and \item $\partial D$ and $\partial E$ fill $X$. \end{itemize} \end{restate} \begin{restate}{Theorem}{Thm:IncompressibleHoles} Suppose $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Then there is an $I$--bundle $\rho_F \colon T \to F$ embedded in $V$ so that \begin{itemize} \item $\partial_h T \subset S$, \item $X$ is isotopic in $S$ to a component of $\partial_h T$, \item some component of $\partial_v T$ is boundary parallel into $S$, and \item $F$ supports a pseudo-Anosov map. \end{itemize} \end{restate} As a corollary of these theorems we have: \begin{corollary} \label{Cor:LargeImpliesInfiniteDiam} If $X$ is a hole for $\mathcal{D}(V)$ with diameter at least $61$ then $X$ has infinite diameter. \end{corollary} \begin{proof} If $X$ is a hole with diameter at least $61$ then either \refthm{CompressibleHoles} or~\refthm{IncompressibleHoles} applies. If $X$ is compressible then Dehn twists, in opposite directions, about the given disks $D$ and $E$ yield an automorphism $f \colon V \to V$ so that $f|X$ is pseudo-Anosov. This follows from Thurston's construction~\cite{Thurston88}. By \reflem{SimpleSurfaces} the hole $X$ has infinite diameter. If $X$ is incompressible then $X \subset \partial_h T$ where $\rho_F \colon T \to F$ is the given $I$--bundle.
Let $f \colon F \to F$ be the given pseudo-Anosov map. So $g$, the suspension of $f$, gives an automorphism of $V$. Again it follows that the hole $X$ has infinite diameter. \end{proof} Applying \reflem{InfiniteDiameterHoleImpliesNotQIEmbedded} we find another corollary: \begin{theorem} \label{Thm:D(V)NotQuasiIsomEmbeddedInC(S)} If $S = \partial V$ contains a strict hole with diameter at least $61$ then the inclusion $\nu \colon \mathcal{D}(V) \to \mathcal{C}(S)$ is not a quasi-isometric embedding. \qed \end{theorem} \section{Holes for the disk complex -- annuli} \label{Sec:Annuli} The proof of \refthm{Annuli} occupies the rest of this section. This proof shares many features with the proofs of Theorems~\ref{Thm:CompressibleHoles} and~\ref{Thm:IncompressibleHoles}. However, the exceptional definition of $\mathcal{C}(S_{0,2})$ prevents a unified approach. Fix $V$, a handlebody. \begin{theorem} \label{Thm:Annuli} Suppose $X$ is a hole for $\mathcal{D}(V)$ and $X$ is an annulus. Then the diameter of $X$ is at most $5$. \end{theorem} We begin with: \begin{proofclaim} For all $D \in \mathcal{D}(V)$, $|D \cap X| \geq 2$. \end{proofclaim} \begin{proof} Since $X$ is a hole, every disk cuts $X$. Since $X$ is an annulus, let $\alpha$ be a core curve for $X$. If $|D \cap X| = 1$, then we may band sum parallel copies of $D$ along a subarc of $\alpha$. The resulting disk misses $\alpha$, a contradiction. \end{proof} Assume, to obtain a contradiction, that $X$ has diameter at least $6$. Suppose that $D \in \mathcal{D}(V)$ is a disk chosen to minimize $|D \cap X|$. Among all disks $E \in \mathcal{D}(V)$ with $d_X(D, E) \geq 3$ choose one which minimizes $|D \cap E|$. Isotope $D$ and $E$ to make the boundaries tight and also tight with respect to $\partial X$. Tightening triples of curves is not canonical; nonetheless there is a tightening so that $S {\smallsetminus} (\partial D \cup \partial E \cup X)$ contains no triangles. See Figure~\ref{Fig:NoTriangles}.
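The argument below invokes \refeqn{DistanceInAnnulus}, which is stated earlier in the paper and not reproduced in this excerpt; assuming it takes the standard form for annular projections, if $\alpha'$ and $\beta'$ are essential arcs crossing the annular cover $S^X$ with $\alpha' \cap \beta' \neq \emptyset$ then
\[
d_X(\alpha', \beta') = 1 + |\alpha' \cap \beta'|.
\]
In particular, the hypothesis $d_X(D, E) \geq 3$ forces $|\alpha' \cap \beta'| \geq 2$, which is how the estimate is used in the first claim below.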
\begin{figure} \caption{Triangles outside of $X$ (see the left side) can be moved in (see the right side). This decreases the number of points of $D \cap E \cap (S {\smallsetminus} X)$.} \label{Fig:NoTriangles} \end{figure} After this tightening we have: \begin{proofclaim} Every arc of $\partial D \cap X$ meets every arc of $\partial E \cap X$ at least once. \end{proofclaim} \begin{proof} Fix arcs $\alpha \subset D \cap X$ and $\beta \subset E \cap X$. Let $\alpha', \beta'$ be the corresponding arcs in $S^X$, the annular cover of $S$ corresponding to $X$. After the tightening we find that \[ |\alpha \cap \beta| \geq |\alpha' \cap \beta'| - 1. \] Since $d_X(D, E) \geq 3$, \refeqn{DistanceInAnnulus} implies that $|\alpha' \cap \beta'| \geq 2$. Thus $|\alpha \cap \beta| \geq 1$, as desired. \end{proof} \begin{proofclaim} There is an outermost bigon $B \subset E {\smallsetminus} D$ with the following properties: \begin{itemize} \item $\partial B = \alpha \cup \beta$ where $\alpha = B \cap D$ and $\beta = \partial B {\smallsetminus} \alpha \subset \partial E$, \item $\partial \alpha = \partial \beta \subset X$, and \item $|\beta \cap X| = 2$. \end{itemize} Furthermore, $|D \cap X| = 2$. \end{proofclaim} See the lower right of Figure~\ref{Fig:PossibleAlphas} for a picture. \begin{proof} Consider the intersection of $D$ and $E$, thought of as a collection of arcs and curves in $E$. Any simple closed curve component of $D \cap E$ can be removed by an isotopy of $E$, fixed on the boundary. (This follows from the irreducibility of $V$ and an innermost disk argument.) Since we have assumed that $|D \cap E|$ is minimal it follows that there are no simple closed curves in $D \cap E$. So consider any outermost bigon $B \subset E {\smallsetminus} D$. Let $\alpha = B \cap D$. Let $\beta = \partial B {\smallsetminus} \alpha = B \cap \partial V$.
Note that $\beta$ cannot completely contain a component of $E \cap X$ as this would contradict either the fact that $B$ is outermost or the claim that every arc of $E \cap X$ meets some arc of $D \cap X$. Using this observation, Figure~\ref{Fig:PossibleAlphas} lists the possible ways for $B$ to lie inside of $E$. \begin{figure} \caption{The arc $\alpha$ cuts a bigon $B$ off of $E$. The darker parts of $\partial E$ are the arcs of $E \cap X$. Either $\beta$ is disjoint from $X$, $\beta$ is contained in $X$, $\beta$ meets $X$ in a single subarc, or $\beta$ meets $X$ in two subarcs.} \label{Fig:PossibleAlphas} \end{figure} Let $D'$ and $D''$ be the two essential disks obtained by boundary compressing $D$ along the bigon $B$. Suppose $\alpha$ is as shown in one of the first three pictures of Figure~\ref{Fig:PossibleAlphas}. It follows that either $D'$ or $D''$ has, after tightening, smaller intersection with $X$ than $D$ does, a contradiction. We deduce that $\alpha$ is as pictured in the lower right of Figure~\ref{Fig:PossibleAlphas}. Boundary compressing $D$ along $B$ still gives disks $D', D'' \in \mathcal{D}(V)$. As these cannot have smaller intersection with $X$ we deduce that $|D \cap X| \leq 2$ and the claim holds. \end{proof} Using the same notation as in the proof above, let $B$ be an outermost bigon of $E {\smallsetminus} D$. We now study how $\alpha \subset \partial B$ lies inside of $D$. \begin{proofclaim} The arc $\alpha \subset D$ connects distinct components of $D \cap X$. \end{proofclaim} \begin{proof} Suppose not. Then there is a bigon $C \subset D {\smallsetminus} \alpha$ with $\partial C = \alpha \cup \gamma$ and $\gamma \subset \partial D \cap X$. The disk $C \cup B$ is essential and intersects $X$ at most once after tightening, contradicting our first claim.
\end{proof} We finish the proof of Theorem~\ref{Thm:Annuli} by noting that $D \cup B$ is homeomorphic to $\Upsilon \times I$ where $\Upsilon$ is the simplicial tree with three edges and three leaves. We may choose the homeomorphism so that $(D \cup B) \cap X = \Upsilon \times \partial I$. It follows that we may properly isotope $D \cup B$ until $(D \cup B) \cap X$ is a pair of arcs. Recall that $D'$ and $D''$ are the disks obtained by boundary compressing $D$ along $B$. It follows that one of $D'$ or $D''$ (or both) meets $X$ in at most a single arc, contradicting our first claim. \qed \section{Holes for the disk complex -- compressible} \label{Sec:CompressibleHoles} The proof of Theorem~\ref{Thm:CompressibleHoles} occupies the second half of this section. \subsection{Compression sequences of essential disks} \label{Sec:CompressionSequences} Fix a multicurve $\Gamma \subset S = \partial V$. Fix also an essential disk $D \subset V$. Properly isotope $D$ to make $\partial D$ tight with respect to $\Gamma$. If $D \cap \Gamma \neq \emptyset$ we may define: \begin{definition} \label{Def:Sequence} A {\em compression sequence} $\{ \Delta_k \}_{k = 1}^n$ starting at $D$ has $\Delta_1 = \{D\}$ and $\Delta_{k+1}$ is obtained from $\Delta_k$ via a boundary compression, disjoint from $\Gamma$, and tightening. Note that $\Delta_k$ is a collection of exactly $k$ pairwise disjoint disks properly embedded in $V$. We further require, for $k \leq n$, that every disk of $\Delta_k$ meets some component of $\Gamma$. We call a compression sequence {\em maximal} if either \begin{itemize} \item no disk of $\Delta_n$ can be boundary compressed into $S {\smallsetminus} \Gamma$ or \item there is a component $Z \subset S {\smallsetminus} \Gamma$ and a boundary compression of $\Delta_n$ into $S {\smallsetminus} \Gamma$ yielding an essential disk $E$ with $\partial E \subset Z$. \end{itemize} We say that such maximal sequences {\em end essentially} or {\em end in $Z$}, respectively. 
\end{definition} All compression sequences must end, by Remark~\ref{Rem:IntersectionNumber}. Given a maximal sequence we may relate the various disks in the sequence as follows: \begin{definition} \label{Def:DisjointnessPair} Fix $X$, a component of $S {\smallsetminus} \Gamma$. Fix $D_k \in \Delta_k$. A {\em disjointness pair} for $D_k$ is an ordered pair $(\alpha, \beta)$ of essential arcs in $X$ where \begin{itemize} \item $\alpha \subset D_k \cap X$, \item $\beta \subset \Delta_n \cap X$, and \item $d_\mathcal{A}(\alpha, \beta) \leq 1$. \end{itemize} \end{definition} If $\alpha \neq \alpha'$ then the two disjointness pairs $(\alpha, \beta)$ and $(\alpha', \beta)$ are distinct, even if $\alpha$ is properly isotopic to $\alpha'$. A similar remark holds for the second coordinate. The following lemma controls how subsurface projection distance changes in maximal sequences. \begin{lemma} \label{Lem:DisjointnessPairs} Fix a multicurve $\Gamma \subset S$. Suppose that $D$ cuts $\Gamma$ and choose a maximal sequence starting at $D$. Fix any component $X \subset S {\smallsetminus} \Gamma$. Fix any disk $D_k \in \Delta_k$. Then either $D_k \in \Delta_n$ or there are four distinct disjointness pairs $\{ (\alpha_i, \beta_i) \}_{i = 1}^4$ for $D_k$ in $X$ where each of the arcs $\{ \alpha_i \}$ appears as the first coordinate of at most two pairs. \end{lemma} \begin{proof} We induct on $n - k$. If $D_k$ is contained in $\Delta_n$ there is nothing to prove. If $D_k$ is contained in $\Delta_{k+1}$ we are done by induction. Thus we may assume that $D_k$ is the disk of $\Delta_k$ which is boundary compressed at stage $k$. Let $D_{k+1}, D_{k+1}' \in \Delta_{k+1}$ be the two disks obtained after boundary compressing $D_k$ along the bigon $B$. See \reffig{Pants} for a picture of the pair of pants cobounded by $\partial D_k$ and $\partial D_{k+1} \cup \partial D_{k+1}'$. 
\begin{figure} \caption{All arcs connecting $D_k$ to itself or to $D_{k+1} \cup D_{k+1}'$ lie in the pair of pants $P$.} \label{Fig:Pants} \end{figure} Let $\delta$ be a band sum arc dual to $B$ (the dotted arc in \reffig{Pants}). We may assume that $|\Gamma \cap \delta|$ is minimal over all arcs dual to $B$. It follows that the band sum of $D_{k+1}$ with $D_{k+1}'$ along $\delta$ is tight, without any isotopy. (This is where we use the fact that $B$ is a boundary compression {\em in the complement of $\Gamma$}, as opposed to being a general boundary compression of $D_k$ in $V$.) There are now three possibilities: neither, one, or both points of $\partial \delta$ are contained in $X$. First suppose that $X \cap \partial \delta = \emptyset$. Then every arc of $D_{k+1} \cap X$ is parallel to an arc of $D_k \cap X$, and similarly for $D_{k+1}'$. If $D_{k+1}$ and $D_{k+1}'$ are both components of $\Delta_n$ then choose any arcs $\beta, \beta'$ of $D_{k+1} \cap X$ and of $D_{k+1}' \cap X$. Let $\alpha, \alpha'$ be the parallel components of $D_k \cap X$. The four disjointness pairs are then $(\alpha, \beta)$, $(\alpha, \beta')$, $(\alpha', \beta)$, $(\alpha', \beta')$. Suppose instead that $D_{k+1}$ is not a component of $\Delta_n$. Then $D_k$ inherits four disjointness pairs from $D_{k+1}$. Second suppose that exactly one endpoint $x \in \partial \delta$ meets $X$. Let $\gamma \subset D_{k+1}$ be the component of $D_{k+1} \cap X$ containing $x$. Let $X'$ be the component of $X \cap P$ that contains $x$ and let $\alpha, \alpha'$ be the two components of $D_k \cap X'$. Let $\beta$ be any arc of $D_{k+1}' \cap X$. If $D_{k+1} \mathbin{\notin} \Delta_n$ and $\gamma$ is not the first coordinate of one of $D_{k+1}$'s four pairs then $D_k$ inherits disjointness pairs from $D_{k+1}$. If $D_{k+1}' \mathbin{\notin} \Delta_n$ then $D_k$ inherits disjointness pairs from $D_{k+1}'$.
Thus we may assume that both $D_{k+1}$ and $D_{k+1}'$ are in $\Delta_n$ {\em or} that only $D_{k+1}' \in \Delta_n$ while $\gamma$ appears as the first coordinate of a disjointness pair for $D_{k+1}$. In the former case the required disjointness pairs are $(\alpha, \beta)$, $(\alpha', \beta)$, $(\alpha, \gamma)$, and $(\alpha', \gamma)$. In the latter case we do not know if $\gamma$ is allowed to appear as the second coordinate of a pair. However we are given four disjointness pairs for $D_{k+1}$ and are told that $\gamma$ appears as the first coordinate of at most two of these pairs. Hence the other two pairs are inherited by $D_k$. The pairs $(\alpha, \beta)$ and $(\alpha', \beta)$ give the desired conclusion. Third suppose that the endpoints of $\delta$ meet $\gamma \subset D_{k+1}$ and $\gamma' \subset D_{k+1}'$. Let $X'$ be a component of $X \cap P$ containing $\gamma$. Let $\alpha$ and $\alpha'$ be the two arcs of $D_k \cap X'$. Suppose both $D_{k+1}$ and $D_{k+1}'$ lie in $\Delta_n$. Then the desired pairs are $(\alpha, \gamma)$, $(\alpha', \gamma)$, $(\alpha, \gamma')$, and $(\alpha', \gamma')$. If $D_{k+1}' \in \Delta_n$ while $D_{k+1}$ is not then $D_k$ inherits two pairs from $D_{k+1}$. We add to these the pairs $(\alpha, \gamma')$ and $(\alpha', \gamma')$. If neither disk lies in $\Delta_n$ then $D_k$ inherits two pairs from each disk and the proof is complete. \end{proof} Given a disk $D \in \mathcal{D}(V)$ and a hole $X \subset S$ our \reflem{DisjointnessPairs} allows us to adapt $D$ to $X$. \begin{lemma} \label{Lem:DistanceIsUnchangedInSequences} Fix a hole $X \subset S$ for $\mathcal{D}(V)$. For any disk $D \in \mathcal{D}(V)$ there is a disk $D'$ with the following properties: \begin{itemize} \item $\partial X$ and $\partial D'$ are tight. \item If $X$ is incompressible then $D'$ is not boundary compressible into $X$ and $d_\mathcal{A}(D, D') \leq 3$. \item If $X$ is compressible then $\partial D' \subset X$ and $d_\mathcal{AC}(D, D') \leq 3$.
\end{itemize} Here $\mathcal{A} = \mathcal{A}(X)$ and $\mathcal{AC} = \mathcal{AC}(X)$. \end{lemma} \begin{proof} If $\partial D \subset X$ then the lemma is trivial. So assume, by Remark~\ref{Rem:ComplementOfHoleIsIncompressible}, that $D$ cuts $\partial X$. Choose a maximal sequence with respect to $\partial X$ starting at $D$. Suppose that the sequence is non-trivial ($n > 1$). By \reflem{DisjointnessPairs} there is a disk $E \in \Delta_n$ so that $D \cap X$ and $E \cap X$ contain disjoint arcs. If the sequence ends essentially then choose $D' = E$ and the lemma is proved. If the sequence ends in $X$ then there is a boundary compression of $\Delta_n$, disjoint from $\partial X$, yielding the desired disk $D'$ with $\partial D' \subset X$. Since $E \cap D' = \emptyset$ we again obtain the desired bound. Assume now that the sequence is trivial ($n = 1$). Then take $E = D \in \Delta_n$ and the proof is identical to that of the previous paragraph. \end{proof} \begin{remark} \label{Rem:WhyDistanceIsUnchangedInSequences} \reflem{DistanceIsUnchangedInSequences} is unexpected: after all, any pair of curves in $\mathcal{C}(X)$ can be connected by a sequence of band sums. Thus arbitrary band sums can change the subsurface projection to $X$. However, the sequences of band sums arising in \reflem{DistanceIsUnchangedInSequences} are very special. Firstly, they do not cross $\partial X$; secondly, they are ``tree-like'' due to the fact that every arc in $D$ is separating. When $D$ is replaced by a surface with genus then \reflem{DistanceIsUnchangedInSequences} does not hold in general; this is a fundamental observation due to Kobayashi~\cite{Kobayashi88b} (see also~\cite{Hartshorn02}). Namazi points out that even if $D$ is only replaced by a planar surface \reflem{DistanceIsUnchangedInSequences} does not hold in general.
\end{remark} \subsection{Proving the theorem} We now prove: \begin{theorem} \label{Thm:CompressibleHoles} Suppose $X$ is a compressible hole for $\mathcal{D}(V)$ with diameter at least $15$. Then there are a pair of essential disks $D, E \subset V$ so that \begin{itemize} \item $\partial D, \partial E \subset X$ and \item $\partial D$ and $\partial E$ fill $X$. \end{itemize} \end{theorem} \begin{proof} Choose disks $D'$ and $E'$ in $\mathcal{D}(V)$ so that $d_X(D', E') \geq 15$. By \reflem{DistanceIsUnchangedInSequences} there are disks $D$ and $E$ so that $\partial D, \partial E \subset X$, $d_X(D', D) \leq 6$, and $d_X(E', E) \leq 6$. It follows from the triangle inequality that $d_X(D, E) \geq 3$. Since any two curves at distance at least three in $\mathcal{C}(X)$ fill $X$, the proof is complete. \end{proof} \section{Holes for the disk complex -- incompressible} \label{Sec:IncompressibleHoles} This section classifies incompressible holes for the disk complex. \begin{theorem} \label{Thm:IncompressibleHoles} Suppose $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Then there is an $I$--bundle $\rho_F \colon T \to F$ embedded in $V$ so that \begin{itemize} \item $\partial_h T \subset \partial V$, \item $X$ is a component of $\partial_h T$, \item some component of $\partial_v T$ is boundary parallel into $\partial V$, and \item $F$ supports a pseudo-Anosov map. \end{itemize} \end{theorem} Here is a short plan of the proof: We are given $X$, an incompressible hole for $\mathcal{D}(V)$. Following \reflem{DistanceIsUnchangedInSequences} we may assume that $D, E$ are essential disks, without boundary compressions into $X$ or $S {\smallsetminus} X$, with $d_X(D,E) \geq 43$. Examine the intersection pattern of $D$ and $E$ to find two families of rectangles $\mathcal{R}$ and $\mathcal{Q}$. The intersection pattern of these rectangles in $V$ will determine the desired $I$--bundle $T$. The third conclusion of the theorem follows from standard facts about primitive annuli.
The fourth requires another application of \reflem{DistanceIsUnchangedInSequences} as well as \reflem{SimpleSurfaces}. \subsection{Diagonals of polygons} To understand the intersection pattern of $D$ and $E$ we discuss diagonals of polygons. Let $D$ be a $2n$--sided regular polygon. Label the sides of $D$ with the letters $X$ and $Y$ in alternating fashion. Any side labeled $X$ (or $Y$) will be called an {\em $X$ side} (or {\em $Y$ side}). \begin{definition} \label{Def:Diagonal} An arc $\gamma$ properly embedded in $D$ is a {\em diagonal} if the points of $\partial \gamma$ lie in the interiors of distinct sides of $D$. If $\gamma$ and $\gamma'$ are diagonals for $D$ which together meet three different sides then $\gamma$ and $\gamma'$ are {\em non-parallel}. \end{definition} \begin{lemma} \label{Lem:EightDiagonals} Suppose that $\Gamma \subset D$ is a collection of pairwise disjoint non-parallel diagonals. Then there is an $X$ side of $D$ meeting at most eight diagonals of $\Gamma$. \end{lemma} \begin{proof} A counting argument shows that $|\Gamma| \leq 4n - 3$. If every $X$ side meets at least nine non-parallel diagonals then $|\Gamma| \geq \frac{9}{2}n > 4n - 3$, a contradiction. \end{proof} \subsection{Improving disks} \label{Sec:ImprovingDisks} Suppose now that $X$ is an incompressible hole for $\mathcal{D}(V)$ with diameter at least $61$. Note that, by Theorem~\ref{Thm:Annuli}, $X$ is not an annulus. Let $Y = \overline{S {\smallsetminus} X}$. Choose disks $D'$ and $E'$ in $V$ so that $d_X(D', E') \geq 61$. By \reflem{DistanceIsUnchangedInSequences} there are a pair of disks $D$ and $E$ so that both are essential in $V$, cannot be boundary compressed into $X$ or into $Y$, and so that $d_{\mathcal{A}(X)}(D', D) \leq 3$ and $d_{\mathcal{A}(X)}(E', E) \leq 3$. Thus $d_X(D', D) \leq 9$ and $d_X(E', E) \leq 9$ (\reflem{LipschitzToHoles}). By the triangle inequality $d_X(D, E) \geq 61 - 18 = 43$.
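Before proceeding we expand the count in the proof of \reflem{EightDiagonals}, quoting (without re-deriving) the bound $|\Gamma| \leq 4n - 3$ from that proof. Suppose every $X$ side of $D$ meets at least nine diagonals of $\Gamma$. Summing over the $n$ sides labeled $X$ counts at least $9n$ endpoints of diagonals. Each diagonal has exactly two endpoints, hence at most two endpoints on $X$ sides, so
\[
|\Gamma| \;\geq\; \tfrac{9n}{2} \;>\; 4n - 3 \qquad \text{for all } n \geq 1,
\]
contradicting $|\Gamma| \leq 4n - 3$.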
Recall, as well, that $\partial D$ and $\partial E$ are tight with respect to $\partial X$. We may further assume that $\partial D$ and $\partial E$ are tight with respect to each other. Also, minimize the quantities $|X \cap (\partial D \cap \partial E)|$ and $|D \cap E|$ while keeping everything tight. In particular, there are no triangle components of $\partial V {\smallsetminus} (\partial D \cup \partial E \cup \partial X)$. Now consider $D$ and $E$ to be even-sided polygons, with vertices being the points $\partial D \cap \partial X$ and $\partial E \cap \partial X$ respectively. Let $\Gamma = D \cap E$. See Figure~\ref{Fig:RectangleWithBadArcs} for one {\it a\thinspace priori}~possible collection $\Gamma \subset D$. \begin{figure} \caption{In fact, $\Gamma \subset D$ cannot contain simple closed curves or non-diagonals.} \label{Fig:RectangleWithBadArcs} \end{figure} From our assumptions and the irreducibility of $V$ it follows that $\Gamma$ contains no simple closed curves. Suppose now that there is a $\gamma \subset \Gamma$ so that, in $D$, both endpoints of $\gamma$ lie in the same side of $D$. Then there is an outermost such arc, say $\gamma' \subset \Gamma$, cutting a bigon $B$ out of $D$. It follows that $B$ is a boundary compression of $E$ which is disjoint from $\partial X$. But this contradicts the construction of $E$. We deduce that all arcs of $\Gamma$ are diagonals for $D$ and, via a similar argument, for $E$. Let $\alpha \subset D \cap X$ be an $X$ side of $D$ meeting at most eight distinct types of diagonal of $\Gamma$. Choose $\beta \subset E \cap X$ similarly. As $d_X(D, E) \geq 43$ we have that $d_X(\alpha, \beta) \geq 43 - 6 = 37$. Now break each of $\alpha$ and $\beta$ into at most eight subarcs $\{ \alpha_i \}$ and $\{ \beta_j \}$ so that each subarc meets all of the diagonals of a fixed type and only of that type. Let $R_i \subset D$ be the rectangle with upper boundary $\alpha_i$ and containing all of the diagonals meeting $\alpha_i$.
Let $\alpha_i'$ be the lower boundary of $R_i$. Define $Q_j$ and $\beta_j'$ similarly. See \reffig{RectanglesInDisks} for a picture of $R_i$. \begin{figure} \caption{The rectangle $R_i \subset D$ is surrounded by the dotted line. The arc $\alpha_i$ in $\partial D \cap X$ is indicated. In general the arc $\alpha'_i$ may lie in $X$ or in $Y$.} \label{Fig:RectanglesInDisks} \end{figure} Call an arc $\alpha_i$ {\em large} if there is an arc $\beta_j$ so that $|\alpha_i \cap \beta_j| \geq 3$. Define large arcs $\beta_j$ similarly. Let $\Theta$ be the union of all of the large $\alpha_i$ and $\beta_j$. Thus $\Theta$ is a four-valent graph in $X$. Let $\Theta'$ be the union of the corresponding large $\alpha_i'$ and $\beta_j'$. \begin{claim} \label{Clm:ThetaNonEmpty} The graph $\Theta$ is non-empty. \end{claim} \begin{proof} If $\Theta = \emptyset$, then all $\alpha_i$ are small. It follows that $|\alpha \cap \beta| \leq 128$ and thus $d_X(\alpha, \beta) \leq 16$, by \reflem{Hempel}. As $d_X(\alpha, \beta) \geq 37$ this is a contradiction. \end{proof} Let $Z \subset \partial V$ be a small regular neighborhood of $\Theta$ and define $Z'$ similarly. \begin{claim} \label{Clm:ThetaEssential} No component of $\Theta$ or of $\Theta'$ is contained in a disk embedded in $\partial V$. No component of $\Theta$ or of $\Theta'$ is contained in an annulus $A \subset \partial V$ that is peripheral in $X$. \end{claim} \begin{proof} For a contradiction suppose that $W$ is a component of $Z$ contained in a disk. Then there is some pair $\alpha_i, \beta_j$ having a bigon in $\partial V$. This contradicts the tightness of $\partial D$ and $\partial E$. The same holds for $Z'$. Suppose now that some component $W$ is contained in an annulus $A$, peripheral in $X$. Thus $W$ fills $A$. Suppose that $\alpha_i$ and $\beta_j$ are large and contained in $W$.
By the classification of arcs in $A$ we deduce that either $\alpha_i$ and $\beta_j$ form a bigon in $A$ or $\partial X$, $\alpha_i$ and $\beta_j$ form a triangle. Either conclusion gives a contradiction. \end{proof} \begin{claim} \label{Clm:ThetaFills} The graph $\Theta$ fills $X$. \end{claim} \begin{proof} Suppose not. Fix attention on any component $W \subset Z$. Since $\Theta$ does not fill, the previous claim implies that there is a component $\gamma \subset \partial W$ that is essential and non-peripheral in $X$. Note that any large $\alpha_i$ meets $\partial W$ in at most two points, while any small $\alpha_i$ meets $\partial W$ in at most $32$ points. Thus $|\alpha \cap \partial W| \leq 256$ and the same holds for $\beta$. Thus $d_X(\alpha, \beta) \leq 36$ by the triangle inequality. As $d_X(\alpha, \beta) \geq 37$ this is a contradiction. \end{proof} The previous two claims imply: \begin{claim} \label{Clm:ThetaConnected} The graph $\Theta$ is connected. \qed \end{claim} There are now two possibilities: either $\Theta \cap \Theta'$ is empty or not. In the first case set $\Sigma = \Theta$ and in the second set $\Sigma = \Theta \cup \Theta'$. By the claims above, $\Sigma$ is connected and fills $X$. Let $\mathcal{R} = \{ R_i \}$ and $\mathcal{Q} = \{ Q_j \}$ be the collections of large rectangles. \subsection{Building the I-bundle} We are given $\Sigma$, $\mathcal{R}$ and $\mathcal{Q}$ as above. Note that $\mathcal{R} \cup \mathcal{Q}$ is an $I$--bundle and $\Sigma$ is the component of its horizontal boundary meeting $X$. See Figure~\ref{Fig:RandQ} for a simple case. \begin{figure} \caption{$\mathcal{R} \cup \mathcal{Q}$ in a simple case.} \label{Fig:RandQ} \end{figure} Let $T_0$ be a regular neighborhood of $\mathcal{R} \cup \mathcal{Q}$, taken in $V$. Again $T_0$ has the structure of an $I$--bundle. Note that $\partial_h T_0 \subset \partial V$, $\partial_h T_0 \cap X$ is a component of $\partial_h T_0$, and this component fills $X$ due to Claim~\ref{Clm:ThetaFills}.
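For reference, the intersection counts behind Claims~\ref{Clm:ThetaNonEmpty} and \ref{Clm:ThetaFills} unwind as follows; here we assume the form of \reflem{Hempel} bounding subsurface distance by $2\log_2$ of the intersection number plus $2$, with $\gamma \subset \partial W$ the essential non-peripheral component from the proof of Claim~\ref{Clm:ThetaFills}.

```latex
% Eight subarcs on each side, each pair of small subarcs meeting in
% at most two points:
\[
|\alpha \cap \beta| \leq 8 \cdot 8 \cdot 2 = 128
\quad\Longrightarrow\quad
d_X(\alpha, \beta) \leq 2\log_2 128 + 2 = 16.
\]
% Eight subarcs, each meeting the frontier of the component W in at
% most 32 points:
\[
|\alpha \cap \partial W| \leq 8 \cdot 32 = 256
\quad\Longrightarrow\quad
d_X(\alpha, \gamma) \leq 2\log_2 256 + 2 = 18,
\]
% and likewise for beta, so the triangle inequality gives
\[
d_X(\alpha, \beta) \leq 18 + 18 = 36.
\]
```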
We will enlarge $T_0$ to obtain the correct $I$--bundle in $V$. Begin by enumerating all annuli $\{ A_i \} \subset \partial_v T_0$ with the property that some component of $\partial A_i$ is inessential in $\partial V$. Suppose that we have built the $I$--bundle $T_i$ and are now considering the annulus $A = A_i$. Let $\gamma \cup \gamma' = \partial A \subset \partial V$ with $\gamma$ inessential in $\partial V$. Let $B \subset \partial V$ be the disk which $\gamma$ bounds. By induction we assume that no component of $\partial_h T_i$ is contained in a disk embedded in $\partial V$ (the base case holds by Claim~\ref{Clm:ThetaEssential}). It follows that $B \cap T_i = \partial B = \gamma$. Thus $B \cup A$ is isotopic, rel $\gamma'$, to a properly embedded disk $B' \subset V$. As $\gamma'$ lies in $X$ or $Y$, both incompressible, $\gamma'$ must bound a disk $C \subset \partial V$. Note that $C \cap T_i = \partial C = \gamma'$, again using the induction hypothesis. It follows that $B \cup A \cup C$ is an embedded two-sphere in $V$. As $V$ is a handlebody, $V$ is irreducible. Thus $B \cup A \cup C$ bounds a three-ball $U_i$ in $V$. Choose a homeomorphism $U_i \mathrel{\cong} B \times I$ so that $B$ is identified with $B \times \{0\}$, $C$ is identified with $B \times \{1\}$, and $A$ is identified with $\partial B \times I$. We form $T_{i+1} = T_i \cup U_i$ and note that $T_{i+1}$ still has the structure of an $I$--bundle. Recalling that $A = A_i$ we have $\partial_v T_{i+1} = \partial_v T_i {\smallsetminus} A_i$. Also $\partial_h T_{i+1} = \partial_h T_i \cup (B \cup C) \subset \partial V$. It follows that no component of $\partial_h T_{i+1}$ is contained in a disk embedded in $\partial V$. Similarly, $\partial_h T_{i+1} \cap X$ is a component of $\partial_h T_{i+1}$ and this component fills $X$. After dealing with all of the annuli $\{ A_i \}$ in this fashion we are left with an $I$--bundle $T$.
Now all components of $\partial \partial_v T$ are essential in $\partial V$. All of these lying in $X$ are peripheral in $X$. This is because they are disjoint from $\Sigma \subset \partial_h T$, which fills $X$, by induction. It follows that the component of $\partial_h T$ containing $\Sigma$ is isotopic to $X$. This finishes the construction of the promised $I$--bundle $T$ and demonstrates the first two conclusions of \refthm{IncompressibleHoles}. For future use we record: \begin{remark} \label{Rem:CornersAreEssential} Every curve of $\partial \partial_v T = \partial \partial_h T$ is essential in $S = \partial V$. \end{remark} \subsection{A vertical annulus parallel into the boundary} Here we obtain the third conclusion of \refthm{IncompressibleHoles}: at least one component of $\partial_v T$ is boundary parallel in $\partial V$. Fix an $I$--bundle $T$ with the incompressible hole $X$ a component of $\partial_h T$. \begin{claim} \label{Clm:VerticalAnnuliIncompressible} All components of $\partial_v T$ are incompressible in $V$. \end{claim} \begin{proof} Suppose that $A \subset \partial_v T$ is compressible. By Remark~\ref{Rem:CornersAreEssential} we may compress $A$ to obtain a pair of essential disks $B$ and $C$. Note that $\partial B$ is isotopic into the complement of $\partial_h T$. So $\overline{S {\smallsetminus} X}$ is compressible, contradicting Remark~\ref{Rem:ComplementOfHoleIsIncompressible}. \end{proof} \begin{claim} \label{Clm:VerticalAnnulusBdyParallel} Some component of $\partial_v T$ is boundary parallel. \end{claim} \begin{proof} Since $\partial_v T$ is incompressible (Claim~\ref{Clm:VerticalAnnuliIncompressible}), Remark~\ref{Rem:SurfacesInHandlebodies} implies that $\partial_v T$ is boundary compressible in $V$. Let $B$ be a boundary compression for $\partial_v T$. Let $A$ be the component of $\partial_v T$ meeting $B$. Let $\alpha$ denote the arc $A \cap B$. The arc $\alpha$ is either essential or inessential in $A$.
Suppose $\alpha$ is inessential in $A$. Then $\alpha$ cuts a bigon, $C$, out of $A$. Since $B$ was a boundary compression the disk $D = B \cup C$ is essential in $V$. Since $B$ meets $\partial_v T$ in a single arc, either $D \subset T$ or $D \subset \overline{V {\smallsetminus} T}$. The former implies that $\partial_h T$ is compressible and the latter that $X$ is not a hole. Either gives a contradiction. It follows that $\alpha$ is essential in $A$. Now carefully boundary compress $A$: Let $N$ be the closure of a regular neighborhood of $B$, taken in $V {\smallsetminus} A$. Let $A'$ be the closure of $A {\smallsetminus} N$ (so $A'$ is a rectangle). Let $B' \cup B''$ be the closure of $\operatorname{fr}(N) {\smallsetminus} A$. Both $B'$ and $B''$ are bigons, parallel to $B$. Form $D = A' \cup B' \cup B''$: a properly embedded disk in $V$. If $D$ is essential then, as above, either $D \subset T$ or $D \subset \overline{V {\smallsetminus} T}$. Again, either gives a contradiction. It follows that $D$ is inessential in $V$. Thus $D$ cuts a closed three-ball $U$ out of $V$. There are two final cases: either $N \subset U$ or $N \cap U = B' \cup B''$. If $U$ contains $N$ then $U$ contains $A$. Thus $\partial A$ is contained in the disk $U \cap \partial V$. This contradicts Remark~\ref{Rem:CornersAreEssential}. Deduce instead that $W = U \cup N$ is a solid torus with meridional disk $B$. Thus $W$ gives a parallelism between $A$ and the annulus $\partial V \cap \partial W$, as desired. \end{proof} \begin{remark} \label{Rem:DiskBusting} Similar considerations prove that the multicurve $$\{ \partial A \mathbin{\mid} \mbox{$A$ is a boundary parallel component of $\partial_v T$} \}$$ is disk-busting for $V$. \end{remark} \subsection{Finding a pseudo-Anosov map} Here we prove that the base surface $F$ of the $I$--bundle $T$ admits a pseudo-Anosov map. As in Section~\ref{Sec:ImprovingDisks}, pick essential disks $D'$ and $E'$ in $V$ so that $d_X(D', E') \geq 61$. 
\reflem{DistanceIsUnchangedInSequences} provides disks $D$ and $E$ which cannot be boundary compressed into $X$ or into $\overline{S {\smallsetminus} X}$ -- thus $D$ and $E$ cannot be boundary compressed into $\partial_h T$. Also, as above, $d_X(D, E) \geq 61 - 18 = 43$. After isotoping $D$ to minimize intersection with $\partial_v T$ it must be the case that all components of $D \cap \partial_v T$ are essential arcs in $\partial_v T$. By \reflem{BdyIncompImpliesVertical} we conclude that $D$ may be isotoped in $V$ so that $D \cap T$ is vertical in $T$. The same holds for $E$. Choose components $A$ of $D \cap T$ and $B$ of $E \cap T$; each is a vertical rectangle. Since $\operatorname{diam}_X(\pi_X(D)) \leq 3$ (\reflem{SubsurfaceProjectionLipschitz}) we now have $d_X(A, B) \geq 43 - 6 = 37$. We now begin to work in the base surface $F$. Recall that $\rho_F \colon T \to F$ is an $I$--bundle. Take $\alpha = \rho_F(A)$ and $\beta = \rho_F(B)$. Note that the natural map $\mathcal{C}(F) \to \mathcal{C}(X)$, defined by taking a curve to its lift, is distance non-increasing (see Equation~\ref{Eqn:NonOrientableLowerBound}). Thus $d_F(\alpha, \beta) \geq 37$. By \refthm{Annuli} the surface $F$ cannot be an annulus. Thus, by \reflem{SimpleSurfaces} the subsurface $F$ supports a pseudo-Anosov map and we are done. \subsection{Corollaries} We now deal with the possibility of disjoint holes for the disk complex. \begin{lemma} \label{Lem:DiskComplexPairedHoles} Suppose that $X$ is a large incompressible hole for $\mathcal{D}(V)$ supported by the $I$--bundle $\rho_F \colon T \to F$. Let $Y = \partial_h T {\smallsetminus} X$. Let $\tau \colon \partial_h T \to \partial_h T$ be the involution switching the ends of the $I$--fibres. Suppose that $D \in \mathcal{D}(V)$ is an essential disk. \begin{itemize} \item If $F$ is orientable then $d_{\mathcal{A}(F)}(D \cap X, D \cap Y) \leq 6$. \item If $F$ is non-orientable then $d_X(D, \mathcal{C}^\tau(X)) \leq 3$.
\end{itemize} \end{lemma} \begin{proof} By \reflem{DistanceIsUnchangedInSequences} there is a disk $D' \subset V$ which is tight with respect to $\partial_h T$ and which cannot be boundary compressed into $\partial_h T$ (or into the complement). Also, for any component $Z \subset \partial_h T$ we have $d_{\mathcal{A}(Z)}(D, D') \leq 3$. Properly isotope $D'$ to minimize $D' \cap \partial_v T$. Then $D' \cap \partial_v T$ is properly isotopic, in $\partial_v T$, to a collection of vertical arcs. Let $E \subset D' \cap T$ be a component. \reflem{BdyIncompImpliesVertical} implies that $E$ is vertical in $T$, after an isotopy of $D'$ preserving $\partial_h T$ setwise. Since $E$ is vertical, the arcs $E \cap \partial_h T \subset D'$ are $\tau$--invariant. The conclusion follows. \end{proof} Recall \reflem{ArcComplexHolesIntersect}: all holes for the arc complex intersect. This cannot hold for the disk complex. For example if $\rho_F \colon T \to F$ is an $I$--bundle over an orientable surface then take $V = T$ and notice that both components of $\partial_h T$ are holes for $\mathcal{D}(V)$. However, by the first conclusion of \reflem{DiskComplexPairedHoles}, $X$ and $Y$ are paired holes, in the sense of \refdef{PairedHoles}. So, as with the invariant arc complex (\reflem{ArcComplexHolesInterfere}), all holes for the disk complex interfere: \begin{lemma} \label{Lem:DiskComplexHolesInterfere} Suppose that $X, Z \subset \partial V$ are large holes for $\mathcal{D}(V)$. If $X \cap Z = \emptyset$ then there is an $I$--bundle $T \mathrel{\cong} F \times I$ in $V$ so that $\partial_h T = X \cup Y$ and $Y \cap Z \neq \emptyset$. \end{lemma} \begin{proof} Suppose that $X \cap Z = \emptyset$. It follows from Remark~\ref{Rem:ComplementOfHoleIsIncompressible} that both $X$ and $Z$ are incompressible. Let $\rho_F \colon T \to F$ be the $I$--bundle in $V$ with $X \subset \partial_h T$, as provided by \refthm{IncompressibleHoles}. 
We also have a component $A \subset \partial_v T$ so that $A$ is boundary parallel. Let $U$ be the solid torus component of $V {\smallsetminus} A$. Note that $Z$ cannot be contained in $\partial U {\smallsetminus} A$ because $Z$ is not an annulus (\refthm{Annuli}). Let $\alpha = \rho_F(A)$. Choose any essential arc $\delta \subset F$ with both endpoints in $\alpha \subset \partial F$. It follows that $\rho_F^{-1}(\delta)$, together with two meridional disks of $U$, forms an essential disk $D$ in $V$. Let $W = \partial_h T \cup (\partial U {\smallsetminus} A)$ and note that $\partial D \subset W$. If $F$ is non-orientable then $Z \cap W = \emptyset$ and we have a contradiction. Deduce that $F$ is orientable. Now, if $Z$ misses $Y$ then $Z$ misses $W$ and we again have a contradiction. It follows that $Z$ cuts $Y$ and we are done. \end{proof} \section{Axioms for combinatorial complexes} \label{Sec:Axioms} The goal of this section and the next is to prove, inductively, an upper bound on distance in a combinatorial complex $\mathcal{G}(S) = \mathcal{G}$. This section presents our axioms on $\mathcal{G}$: sufficient hypotheses for \refthm{UpperBound}. The axioms, apart from \refax{Holes}, are quite general. \refax{Holes} is necessary to prove hyperbolicity and greatly simplifies the recursive construction in \refsec{Partition}. \begin{theorem} \label{Thm:UpperBound} Fix $S$ a compact connected non-simple surface. Suppose that $\mathcal{G} = \mathcal{G}(S)$ is a combinatorial complex satisfying the axioms of \refsec{Axioms}. Let $X$ be a hole for $\mathcal{G}$ and suppose that $\alpha_X, \beta_X \in \mathcal{G}$ are contained in $X$. For any constant $c > 0$ there is a constant $A$ satisfying: \[ d_\mathcal{G}(\alpha_X, \beta_X) \mathbin{\leq_{A}} \sum [d_Y(\alpha_X, \beta_X)]_c \] where the sum is taken over all holes $Y \subseteq X$ for $\mathcal{G}$. \end{theorem} The proof of the upper bound is more difficult than that of the lower bound, \refthm{LowerBound}.
This is because naturally occurring paths in $\mathcal{G}$ between $\alpha_X$ and $\beta_X$ may waste time in non-holes. The first example of this is the path in $\mathcal{C}(S)$ obtained by taking the short curves along a {Teichm\"uller~} geodesic. The {Teichm\"uller~} geodesic may spend time rearranging the geometry of a subsurface. Then the systole path in the curve complex must be much longer than the curve complex distance between the endpoints. In Sections~\ref{Sec:PathsNonorientable}, \ref{Sec:PathsArc}, \ref{Sec:PathsDisk} we will verify these axioms for the curve complex of a non-orientable surface, the arc complex, and the disk complex. \subsection{The axioms} Suppose that $\mathcal{G} = \mathcal{G}(S)$ is a combinatorial complex. We begin with the axiom required for hyperbolicity. \begin{axiom}[Holes interfere] \label{Ax:Holes} All large holes for $\mathcal{G}$ interfere, as given in \refdef{Interfere}. \end{axiom} Fix vertices $\alpha_X, \beta_X \in \mathcal{G}$, both contained in a hole $X$. We are given $\Lambda = \{ \mu_n \}_{n = 0}^N$, a path of markings in $X$. \begin{axiom}[Marking path] \label{Ax:Marking} We require: \begin{enumerate} \item The support of $\mu_{n+1}$ is contained inside the support of $\mu_n$. \item For any subsurface $Y \subseteq X$, if $\pi_Y(\mu_k) \neq \emptyset$ then for all $n \leq k$ the map $n \mapsto \pi_Y(\mu_n)$ is an unparameterized quasi-geodesic with constants depending only on $\mathcal{G}$. \end{enumerate} \end{axiom} \noindent The second condition is crucial and often technically difficult to obtain. We are given, for every essential subsurface $Y \subset X$, a perhaps empty interval $J_Y \subset [0, N]$ with the following properties. \begin{axiom}[Accessibility] \label{Ax:Access} The interval for $X$ is $J_X = [0, N]$. There is a constant ${B_3}$ so that \begin{enumerate} \item If $m \in J_Y$ then $Y$ is contained in the support of $\mu_m$. \item If $m \in J_Y$ then $\iota(\partial Y, \mu_m) < {B_3}$. 
\item If $[m, n] \cap J_Y = \emptyset$ then $d_Y(\mu_m, \mu_n) < {B_3}$. \end{enumerate} \end{axiom} \noindent There is a combinatorial path $\Gamma = \{ \gamma_i \}_{i = 0}^K \subset \mathcal{G}$ starting with $\alpha_X$, ending with $\beta_X$, and with each $\gamma_i$ contained in $X$. There is a strictly increasing reindexing function $r \colon [0, K] \to [0, N]$ with $r(0) = 0$ and $r(K) = N$. \begin{axiom}[Combinatorial] \label{Ax:Combin} There is a constant ${C_2}$ so that: \begin{itemize} \item $d_Y(\gamma_i, \mu_{r(i)}) < {C_2}$, for every $i \in [0, K]$ and every hole $Y \subset X$, \item $d_\mathcal{G}(\gamma_i, \gamma_{i+1}) < {C_2}$, for every $i \in [0, K - 1]$. \end{itemize} \end{axiom} \begin{axiom}[Replacement] \label{Ax:Replace} There is a constant ${C_4}$ so that: \begin{enumerate} \item If $Y \subset X$ is a hole and $r(i) \in J_Y$ then there is a vertex $\gamma' \in \mathcal{G}$ so that $\gamma'$ is contained in $Y$ and $d_\mathcal{G}(\gamma_i, \gamma') < {C_4}$. \item If $Z \subset X$ is a non-hole and $r(i) \in J_Z$ then there is a vertex $\gamma' \in \mathcal{G}$ so that $d_\mathcal{G}(\gamma_i, \gamma') < {C_4}$ and so that $\gamma'$ is contained in $Z$ or in $X {\smallsetminus} Z$. \end{enumerate} \end{axiom} There is one axiom left: the axiom for straight intervals. This is given in the next subsection. \subsection{Inductive, electric, shortcut and straight intervals} We describe subintervals that arise in the partitioning of $[0, K]$. As discussed carefully in \refsec{Deductions}, we will choose a lower threshold ${L_1}(Y)$ for every essential $Y \subset X$ and a general upper threshold, ${L_2}$. \begin{definition} \label{Def:Inductive} Suppose that $[i, j] \subset [0, K]$ is a subinterval of the combinatorial path.
Then $[i, j]$ is an {\em inductive interval} associated to a hole $Y \subsetneq X$ if \begin{itemize} \item $r([i, j]) \subset J_Y$ (for paired $Y$ we require $r([i, j]) \subset J_Y \cap J_{Y'}$) and \item $d_Y(\gamma_i, \gamma_j) \geq {L_1}(Y)$. \end{itemize} \end{definition} When $X$ is the only relevant hole we have a simpler definition: \begin{definition} \label{Def:Electric} Suppose that $[i, j] \subset [0, K]$ is a subinterval of the combinatorial path. Then $[i, j]$ is an {\em electric interval} if $d_Y(\gamma_i, \gamma_j) < {L_2}$ for all holes $Y \subsetneq X$. \end{definition} Electric intervals will be further partitioned into shortcut and straight intervals. \begin{definition} \label{Def:Shortcut} Suppose that $[p, q] \subset [0, K]$ is a subinterval of the combinatorial path. Then $[p, q]$ is a {\em shortcut} if \begin{itemize} \item $d_Y(\gamma_p, \gamma_q) < {L_2}$ for all holes $Y$, including $X$ itself, and \item there is a non-hole $Z \subset X$ so that $r([p, q]) \subset J_Z$. \end{itemize} \end{definition} \begin{definition} \label{Def:Straight} Suppose that $[p, q] \subset [0, K]$ is a subinterval of the combinatorial path and is contained in an electric interval $[i, j]$. Then $[p, q]$ is a {\em straight interval} if $d_Y(\mu_{r(p)}, \mu_{r(q)}) < {L_2}$ for all non-holes $Y$. \end{definition} Our final axiom is: \begin{axiom}[Straight] \label{Ax:Straight} There is a constant $A$ depending only on $X$ and $\mathcal{G}$ so that for every straight interval $[p, q]$: \[ d_\mathcal{G}(\gamma_p, \gamma_q) \mathbin{\leq_{A}} d_X(\gamma_p, \gamma_q) \] \end{axiom} \subsection{Deductions from the axioms} \label{Sec:Deductions} \refax{Marking} and \reflem{FirstReverse} imply that the reverse triangle inequality holds for projections of marking paths. 
\begin{lemma} \label{Lem:Reverse} There is a constant ${C_1}$ so that \[ d_Y(\mu_m, \mu_n) + d_Y(\mu_n, \mu_p) < d_Y(\mu_m, \mu_p) + {C_1} \] for every essential $Y \subset X$ and for every $m < n < p$ in $[0, N]$. \qed \end{lemma} We record three simple consequences of \refax{Access}. \begin{lemma} \label{Lem:Access} There is a constant ${C_3}$, depending only on ${B_3}$, with the following properties: \begin{itemize} \item[(i)] If $Y$ is strictly nested in $Z$ and $m \in J_Y$ then $d_Z(\partial Y,\mu_m) \leq {C_3}$. \item[(ii)] If $Y$ is strictly nested in $Z$ then for any $m, n \in J_Y$, $d_Z(\mu_m, \mu_n) < {C_3}$. \item[(iii)] If $Y$ and $Z$ overlap then for any $m, n \in J_Y \cap J_Z$ we have $d_Y(\mu_m, \mu_n), d_Z(\mu_m, \mu_n) < {C_3}$. \end{itemize} \end{lemma} \begin{proof} We first prove conclusion (i): Since $Y$ is strictly nested in $Z$ and since $Y$ is contained in the support of $\mu_m$ (part (1) of \refax{Access}), both $\partial Y$ and $\mu_m$ cut $Z$. By \refax{Access}, part (2), we have that $\iota(\partial Y, \mu_m) \leq {B_3}$. It follows that $\iota(\partial Y, \pi_Z(\mu_m)) \leq 2{B_3}$. By \reflem{Hempel} we deduce that $d_Z(\partial Y, \mu_m) \leq 2 \log_2 {B_3} + 4$. We take ${C_3}$ larger than this right hand side. Conclusion (ii) follows from a pair of applications of conclusion (i) and the triangle inequality. For conclusion (iii): As in (ii), to bound $d_Z(\mu_m, \mu_n)$ it suffices to note that $\partial Y$ cuts $Z$ and that $\partial Y$ has bounded intersection with both of $\mu_m, \mu_n$. \end{proof} We now have all of the constants ${C_1}, {C_2}, {C_3}, {C_4}$ in hand. Recall that $L_4$ is the pairing constant of \refdef{PairedHoles} and that $M_0$ is the constant of \refthm{BoundedGeodesicImage}. We must choose a lower threshold ${L_1}(Y)$ for every essential $Y \subset X$. We must also choose the general upper threshold ${L_2}$ and the general lower threshold ${L_0}$.
We require, for all essential $Z, Y$ in $X$, with $\xi(Z) < \xi(Y) \leq \xi(X)$: \begin{gather} \label{Eqn:InductBigger} {L_0} > {C_3} + 2{C_2} + 2L_4 \\ \label{Eqn:UpperBigger} {L_2} > {L_1}(X) + 2L_4 + 6{C_1} + 2{C_2} + 14{C_3} + 10 \\ \label{Eqn:LowerBigger} {L_1}(Y) > M_0 + 2{C_3} + 4{C_2} + 2L_4 +{L_0} \\ \label{Eqn:Order} {L_1}(X) > {L_1}(Z) + 2{C_3} + 4{C_2} + 4L_4 \end{gather} \section{Partition and the upper bound on distance} \label{Sec:Partition} In this section we prove \refthm{UpperBound} by induction on $\xi(X)$. The first stage of the proof is to describe the {\em inductive partition}: we partition the given interval $[0, K]$ into inductive and electric intervals. The inductive partition is closely linked with the hierarchy machine~\cite{MasurMinsky00} and with the notion of antichains introduced in~\cite{RafiSchleimer09}. We next give the {\em electric partition}: each electric interval is divided into straight and shortcut intervals. Note that the electric partition also gives the base case of the induction. We finally bound $d_\mathcal{G}(\alpha_X, \beta_X)$ from above by combining the contributions from the various intervals. \subsection{Inductive partition} We begin by identifying the relevant surfaces for the construction of the partition. We are given a hole $X$ for $\mathcal{G}$ and vertices $\alpha_X, \beta_X \in \mathcal{G}$ contained in $X$. Define \[ B_X = \{ Y \subsetneq X \mathbin{\mid} \mbox{$Y$ is a hole and~} d_Y(\alpha_X, \beta_X) \geq {L_1}(X) \}. \] For any subinterval $[i,j] \subset [0, K]$ define \[ B_X(i, j) = \{ Y \in B_X \mathbin{\mid} d_Y(\gamma_i, \gamma_j) \geq {L_1}(X)\}. \] We now partition $[0, K]$ into inductive and electric intervals. Begin with the partition of one part $\mathcal{P}_X = \{ [0, K] \}$. Recursively $\mathcal{P}_X$ is a partition of $[0, K]$ consisting of intervals which are either inductive, electric, or undetermined. Suppose that $[i, j] \in \mathcal{P}_X$ is undetermined. 
\begin{proofclaim} If $B_X(i,j)$ is empty then $[i,j]$ is electric. \end{proofclaim} \begin{proof} Since $B_X(i, j)$ is empty, every hole $Y \subsetneq X$ has either $d_Y(\gamma_i, \gamma_j) < {L_1}(X)$ or $Y \mathbin{\notin} B_X$. In the former case, as ${L_1}(X) < {L_2}$, we are done. So suppose the latter holds. Now, by the reverse triangle inequality (\reflem{Reverse}), \[ d_Y(\mu_{r(i)}, \mu_{r(j)}) < d_Y(\mu_0, \mu_N) + 2{C_1}. \] Since $r(0) = 0$ and $r(K) = N$ we find: \[ d_Y(\gamma_i, \gamma_j) < d_Y(\alpha_X, \beta_X) + 2{C_1} + 4{C_2}. \] Deduce that \[ d_Y(\gamma_i, \gamma_j) < {L_1}(X) + 2{C_1} + 4{C_2} < {L_2}. \] This completes the proof. \end{proof} Thus if $B_X(i, j)$ is empty then $[i, j] \in \mathcal{P}_X$ is determined to be electric. Proceed on to the next undetermined element. Suppose instead that $B_X(i,j)$ is non-empty. Pick a hole $Y \in B_X(i,j)$ so that $Y$ has maximal $\xi(Y)$ amongst the elements of $B_X(i,j)$. Let $p, q \in [i,j]$ be the first and last indices, respectively, so that $r(p), r(q) \in J_Y$. (If $Y$ is paired with $Y'$ then we take the first and last indices that, after reindexing, lie inside of $J_Y \cap J_{Y'}$.) \begin{proofclaim} The indices $p, q$ are well-defined. \end{proofclaim} \begin{proof} By assumption $d_Y(\gamma_i, \gamma_j) \geq {L_1}(X)$. By \refeqn{InductBigger}, \[ {L_1}(X) > {C_3}+2{C_2}. \] We deduce from \refax{Access} and \refax{Combin} that $J_Y \cap r([i, j])$ is non-empty. Thus, if $Y$ is not paired, the indices $p, q$ are well-defined. Suppose instead that $Y$ is paired with $Y'$. Recall that measurements made in $Y$ and $Y'$ differ by at most the pairing constant $L_4$ given in \refdef{PairedHoles}. By (\ref{Eqn:LowerBigger}), $${L_1}(X) > {C_3} + 2{C_2} + 2L_4.$$ We deduce again from \refax{Access} that $J_{Y'} \cap r([i, j])$ is non-empty. Suppose now, for a contradiction, that $J_Y \cap J_{Y'} \cap r([i, j])$ is empty.
Define $$ h = \max \{ \ell \in [i, j] \mathbin{\mid} r(\ell) \in J_Y \}, \quad k = \min \{ \ell \in [i, j] \mathbin{\mid} r(\ell) \in J_{Y'} \} $$ Without loss of generality we may assume that $h < k$. It follows that $d_{Y'}(\gamma_i, \gamma_h) < {C_3} + 2{C_2}$. Thus $d_Y(\gamma_i, \gamma_h) < {C_3} + 2{C_2} + 2L_4$. Similarly, $d_Y(\gamma_h, \gamma_j) < {C_3} + 2{C_2}$. Deduce $$ d_Y(\gamma_i, \gamma_j) < 2{C_3} + 4{C_2} + 2L_4 < {L_1}(X) ,$$ the last inequality by (\ref{Eqn:LowerBigger}). This is a contradiction to the assumption. \end{proof} \begin{proofclaim} The interval $[p, q]$ is inductive for $Y$. \end{proofclaim} \begin{proof} We must check that $d_Y(\gamma_p, \gamma_q) \geq {L_1}(Y)$. Suppose first that $Y$ is not paired. Then by the definition of $p, q$, (2) of \refax{Access}, and the triangle inequality we have $$ d_Y(\mu_{r(i)}, \mu_{r(j)}) \leq d_Y(\mu_{r(p)}, \mu_{r(q)}) + 2{C_3}. $$ Thus by \refax{Combin}, $$ d_Y(\gamma_i, \gamma_j) \leq d_Y(\gamma_p, \gamma_q) + 2{C_3} + 4{C_2}. $$ Since by (\ref{Eqn:Order}), $$ {L_1}(Y) + 2{C_3} + 4{C_2} < {L_1}(X) \leq d_Y(\gamma_i, \gamma_j) $$ we are done. When $Y$ is paired the proof is similar but we must use the slightly stronger inequality ${L_1}(Y) + 2{C_3} + 4{C_2} + 4L_4 < {L_1}(X)$. \end{proof} Thus, when $B_X(i, j)$ is non-empty we may find a hole $Y$ and indices $p, q$ as above. In this situation, we subdivide the element $[i, j] \in \mathcal{P}_X$ into the elements $[i, p-1]$, $[p, q]$, and $[q+1, j]$. (The first or third intervals, or both, may be empty.) The interval $[p, q] \in \mathcal{P}_X$ is determined to be inductive and associated to $Y$. Proceed on to the next undetermined element. This completes the construction of $\mathcal{P}_X$. As a bit of notation, if $[i, j] \in \mathcal{P}_X$ is associated to $Y \subset X$ we will sometimes write $I_Y = [i, j]$. 
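Summarizing the recursion: every element of the completed partition $\mathcal{P}_X$ is determined, and is of exactly one of the two kinds below. This is only a restatement of Definitions~\ref{Def:Inductive} and \ref{Def:Electric} together with the construction just given, recorded here for ease of reference.

```latex
\begin{itemize}
\item an inductive interval $I_Y = [p, q]$ associated to a hole
      $Y \subsetneq X$, with $r(I_Y) \subset J_Y$ (or
      $r(I_Y) \subset J_Y \cap J_{Y'}$ when $Y$ is paired) and
      $d_Y(\gamma_p, \gamma_q) \geq {L_1}(Y)$; or
\item an electric interval $[i, j]$, with
      $d_Y(\gamma_i, \gamma_j) < {L_2}$ for every hole
      $Y \subsetneq X$.
\end{itemize}
```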
\subsection{Properties of the inductive partition} \begin{lemma} \label{Lem:HolesContained} Suppose that $Y, Z$ are holes and $I_Z$ is an inductive element of $\mathcal{P}_X$ associated to $Z$. Suppose that $r(I_Z) \subset J_Y$ (or $r(I_Z) \subset J_Y \cap J_{Y'}$, if $Y$ is paired). Then \begin{itemize} \item $Z$ is nested in $Y$ or \item $Z$ and $Z'$ are paired and $Z'$ is nested in $Y$. \end{itemize} \end{lemma} \begin{proof} Let $I_Z = [i, j]$. Suppose first that $Y$ is strictly nested in $Z$. Then by (ii) of \reflem{Access}, $d_Z(\mu_{r(i)}, \mu_{r(j)}) < {C_3}$. Then by \refax{Combin} \[ d_Z(\gamma_i, \gamma_j) < {C_3} + 2{C_2} < {L_1}(Z), \] a contradiction. We reach the same contradiction if $Y$ and $Z$ overlap, using (iii) of \reflem{Access}. Now, if $Z$ and $Y$ are disjoint then there are two cases: Suppose first that $Y$ is paired with $Y'$. Since all holes interfere, $Y'$ and $Z$ must meet. In this case we are done, just as in the previous paragraph. Suppose now that $Z$ is paired with $Z'$. Since all holes interfere, $Z'$ and $Y$ must meet. If $Z'$ is nested in $Y$ then we are done. If $Y$ is strictly nested in $Z'$ then, as $r([i, j]) \subset J_Y$, we find as above, by \refax{Combin} and (ii) of \reflem{Access}, that $$d_{Z'}(\gamma_i, \gamma_j) < {C_3} + 2{C_2}$$ and so $d_Z(\gamma_i, \gamma_j) < {C_3} + 2{C_2} + 2L_4 < {L_1}(Z)$, a contradiction. We reach the same contradiction if $Y$ and $Z'$ overlap. \end{proof} \begin{proposition} \label{Prop:AtMostThreeInductives} Suppose $Y \subsetneq X$ is a hole for $\mathcal{G}$. \begin{enumerate} \item If $Y$ is associated to an inductive interval $I_Y \in \mathcal{P}_X$ and $Y$ is paired with $Y'$ then $Y'$ is not associated to any inductive interval in $\mathcal{P}_X$. \item There is at most one inductive interval $I_Y \in \mathcal{P}_X$ associated to $Y$.
\item There are at most two holes $Z$ and $W$, distinct from $Y$ (and from $Y'$, if $Y$ is paired) such that \begin{itemize} \item there are inductive intervals $I_Z = [h, i]$ and $I_W = [j, k]$ and \item $d_Y(\gamma_h, \gamma_i), d_Y(\gamma_j, \gamma_k) \geq {L_0}$. \end{itemize} \end{enumerate} \end{proposition} \begin{remark} \label{Rem:AtMostThreeInductive} It follows that for any hole $Y$ there are at most three inductive intervals in the partition $\mathcal{P}_X$ where $Y$ has projection distance greater than ${L_0}$. \end{remark} \begin{proof}[Proof of \refprop{AtMostThreeInductives}] To prove the first claim: Suppose that $I_Y = [p, q]$ and $I_{Y'} = [p', q']$ with $q < p'$. It follows that $[r(p), r(q')] \subset J_Y \cap J_{Y'}$. If $q + 1 = p'$ then the partition would have chosen a larger inductive interval for one of $Y$ or $Y'$. It must be the case that there is an inductive interval $I_Z \subset [q + 1, p' - 1]$ for some hole $Z$, distinct from $Y$ and $Y'$, with $\xi(Z) \geq \xi(Y)$. However, by \reflem{HolesContained} we find that $Z$ is nested in $Y$ or in $Y'$. It follows that $Z = Y$ or $Z = Y'$, a contradiction. The second statement is essentially similar. Finally suppose that $Z$ and $W$ are the first and last holes, if any, satisfying the hypotheses of the third claim. Since $d_Y(\gamma_h, \gamma_i) \geq {L_0}$ we find by \refax{Combin} that $$d_Y(\mu_{r(h)}, \mu_{r(i)}) \geq {L_0} - 2{C_2}.$$ By (\ref{Eqn:InductBigger}), ${L_0} - 2{C_2} > {C_3}$ so that $$J_Y \cap r(I_Z) \neq \emptyset.$$ If $Y$ is paired then, since by (\ref{Eqn:InductBigger}) we have ${L_0} > {C_3} + 2{C_2} + 2L_4$, we also find that $J_{Y'} \cap r(I_Z) \neq \emptyset$. Symmetrically, $J_Y \cap r(I_W)$ (and $J_{Y'} \cap r(I_W)$) are also non-empty. It follows that the interval between $I_Z$ and $I_W$, after reindexing, is contained in $J_Y$ (and $J_{Y'}$, if $Y$ is paired).
Thus for any inductive interval $I_V = [p, q]$ between $I_Z$ and $I_W$ the associated hole $V$ is nested in $Y$ (or $V'$ is nested in $Y$), by \reflem{HolesContained}. If $V = Y$ or $V = Y'$ there is nothing to prove. Suppose instead that $V$ (or $V'$) is strictly nested in $Y$. It follows that $$d_Y(\gamma_p, \gamma_q) < {C_3} + 2{C_2} < {L_0}.$$ Thus there are no inductive intervals between $I_Z$ and $I_W$ satisfying the hypotheses of the third claim. \end{proof} The following lemma and proposition bound the number of inductive intervals. The discussion here is very similar to (and in fact inspired) the {\em antichains} defined in~\cite[Section 5]{RafiSchleimer09}. Our situation is complicated by the presence of non-holes and interfering holes. \begin{lemma} \label{Lem:PigeonForHoles} Suppose that $X, \alpha_X, \beta_X$ are given, as above. For any $\ell \geq (3 \cdot {L_2})^{\xi(X)}$, if $\{Y_i\}_{i = 1}^\ell$ is a collection of distinct strict sub-holes of $X$ each having $d_{Y_i}(\alpha_X, \beta_X)\geq {L_1}(X)$ then there is a hole $Z \subseteq X$ such that $d_Z(\alpha_X, \beta_X) \geq {L_2} - 1$ and $Z$ contains at least ${L_2}$ of the $Y_i$. Furthermore, for at least ${L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$ of these $Y_i$ we find that $J_{Y_i} \subsetneq J_Z$. (If $Z$ is paired then $J_{Y_i} \subsetneq J_Z \cap J_{Z'}$.) Each of these $Y_i$ is disjoint from a distinct vertex $\eta_i \in [ \pi_Z(\alpha_X), \pi_Z(\beta_X) ]$. \end{lemma} \begin{proof} Let $g_X$ be a geodesic in $\mathcal{C}(X)$ joining $\alpha_X, \beta_X$. By the Bounded Geodesic Image Theorem (\refthm{BoundedGeodesicImage}), since ${L_1}(X) > M_0$, for every $Y_i$ there is a vertex $\omega_i\in g_X$ such that $Y_i\subset X {\smallsetminus} \omega_i$. Thus $d_X(\omega_i,\partial Y_i) \leq 1$. If there are at least ${L_2}$ distinct $\omega_i$, associated to distinct $Y_i$, then $d_X(\alpha_X, \beta_X) \geq {L_2} - 1$. In this situation we take $Z = X$. Since $J_X = [0, N]$ we are done. 
Thus assume there do not exist at least ${L_2}$ distinct $\omega_i$. Then there is some fixed $\omega$ among these $\omega_i$ such that at least $\frac{\ell}{{L_2}} \geq 3(3 \cdot {L_2})^{\xi(X)-1}$ of the $Y_i$ satisfy \[ Y_i \subset (X {\smallsetminus} \omega). \] Thus one component, call it $W$, of $X {\smallsetminus} \omega$ contains at least $(3 \cdot {L_2})^{\xi(X)-1}$ of the $Y_i$. Let $g_W$ be a geodesic in $\mathcal{C}(W)$ joining $\alpha_W = \pi_W(\alpha_X)$ and $\beta_W = \pi_W(\beta_X)$. Notice that $$d_{Y_i}(\alpha_W, \beta_W) \geq d_{Y_i}(\alpha_X, \beta_X) - 8$$ because we are projecting to nested subsurfaces. This follows for example from \reflem{SubsurfaceProjectionLipschitz}. Hence $d_{Y_i}(\alpha_W, \beta_W) \geq {L_1}(W)$. Again apply \refthm{BoundedGeodesicImage}. Since ${L_1}(W) > M_0$, for every remaining $Y_i$ there is a vertex $\eta_i \in g_W$ such that \[ Y_i \subset (W {\smallsetminus} \eta_i). \] If there are at least ${L_2}$ distinct $\eta_i$ then we take $Z = W$. Otherwise we repeat the argument. Since the complexity of each successive subsurface decreases by at least $1$, we must eventually find the desired $Z$ containing at least ${L_2}$ of the $Y_i$, each disjoint from distinct vertices of $g_Z$. So suppose that there are at least ${L_2}$ distinct $\eta_i$ associated to distinct $Y_i$ and we have taken $Z = W$. Now we must find at least ${L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$ of these $Y_i$ where $J_{Y_i} \subsetneq J_Z$. To this end we focus attention on a small subset $\{Y^j\}_{j = 1}^5 \subset \{Y_i\}$. Let $\eta_j$ be the vertex of $g_Z = g_W$ associated to $Y^j$. We choose these $Y^j$ so that \begin{itemize} \item the $\eta_j$ are arranged along $g_Z$ in order of index and \item $d_Z(\eta_j, \eta_{j + 1}) > {C_1} + {C_3} + 2{C_3} + 2$, for $j = 1, 2, 3, 4$. \end{itemize} This is possible by (\ref{Eqn:UpperBigger}) because \[ {L_2} > 4({C_1} + {C_3} + 2{C_3}).
\] Set $J_j = J_{Y^j}$ and pick any indices $m_j \in J_j$. (If $Z$ is paired then $Y^j$ is as well and we pick $m_j \in J_{Y^j} \cap J_{(Y^j)'}$.) We use $\mu(m_j)$ to denote $\mu_{m_j}$. Since $\partial Y^j$ is disjoint from $\eta_j$, \refax{Access} and \reflem{Hempel} imply \begin{equation} \label{Eqn:Eta} d_Z(\mu(m_j), \eta_j) \leq {C_3} + 1. \end{equation} Since the sequence $\pi_Z(\mu_n)$ satisfies the reverse triangle inequality (\reflem{Reverse}), it follows that the $m_j$ appear in $[0, N]$ in order agreeing with their index. The triangle inequality implies that \[ d_Z(\mu(m_1), \mu(m_2)) > {C_3}. \] Thus \refax{Access} implies that $J_Z \cap [m_1, m_2]$ is non-empty. Similarly, $J_Z \cap [m_4, m_5]$ is non-empty. It follows that $[m_2, m_4] \subset J_Z$. (If $Z$ is paired then, after applying the symmetry $\tau$ to $g_Z$, the same argument proves $[m_2, m_4] \subset J_{Z'}$.) Notice that $J_2 \cap J_3 = \emptyset$. For if $m \in J_2 \cap J_3$ then by (\ref{Eqn:Eta}) both $d_Z(\mu_m, \eta_2)$ and $d_Z(\mu_m, \eta_3)$ are bounded by ${C_3} + 1$. It follows that \[ d_Z(\eta_2, \eta_3) < 2{C_3} + 2, \] a contradiction. Similarly $J_3 \cap J_4 = \emptyset$. We deduce that $J_3 \subsetneq [m_2, m_4] \subset J_Z$. (If $Z$ is paired then $J_3 \subset J_Z \cap J_{Z'}$.) Finally, there are at least $$ {L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$$ possible $Y_i$'s which, like $Y^3$, satisfy $J_{Y_i} \subsetneq J_Z$. This completes the proof. \end{proof} Define $$ \mathcal{P}_{\text{ind}} = \{ I \in \mathcal{P}_X \mathbin{\mid} \mbox{ $I$ is inductive}\}. $$ \begin{proposition} \label{Prop:NumberOfInductives} The number of inductive intervals gives a lower bound for the projection distance in $X$: $$ d_X(\alpha_X, \beta_X) \geq \frac{|\mathcal{P}_{\text{ind}}|}{2(3 \cdot {L_2})^{\xi(X) - 1} + 1} - 1. $$ \end{proposition} \begin{proof} Suppose, for a contradiction, that the conclusion fails. Let $g_X$ be a geodesic in $\mathcal{C}(X)$ connecting $\alpha_X$ to $\beta_X$.
Then, as in the proof of \reflem{PigeonForHoles}, there is a vertex $\omega$ of $g_X$ and a component $W \subset X {\smallsetminus} \omega$ where at least $(3 \cdot {L_2})^{\xi(X) - 1}$ of the inductive intervals in $\mathcal{P}_X$ have associated surfaces, $Y_i$, contained in $W$. Since $\xi(X) - 1 \geq \xi(W)$ we may apply \reflem{PigeonForHoles} inside of $W$. So we find a surface $Z \subseteq W \subsetneq X$ so that \begin{itemize} \item $Z$ contains at least ${L_2}$ of the $Y_i$, \item $d_Z(\alpha_X, \beta_X) \geq {L_2}$, and \item there are at least ${L_2} - 4({C_1} + {C_3} + 2{C_3} + 2)$ of the $Y_i$ where $J_{Y_i} \subsetneq J_Z$. \end{itemize} Since $Y_i \subsetneq Z$ and $Y_i$ is a hole, $Z$ is also a hole. Since ${L_2} > {L_1}(X)$ it follows that $Z \in B_X$. Let $\mathcal{Y} = \{ Y_i \}$ be the set of $Y_i$ satisfying the third bullet. Let $Y^1 \in \mathcal{Y}$ and $\eta_1 \in g_Z$ satisfy $\partial Y^1 \cap \eta_1 = \emptyset$, with $\eta_1$ the first such vertex. Choose $Y^2$ and $\eta_2$ similarly, so that $\eta_2$ is the last such. By \reflem{PigeonForHoles} \begin{equation} \label{Eqn:EtaBigger} d_Z(\eta_1, \eta_2) \geq {L_2} - 4({C_1} + {C_3} + 2{C_3} + 2). \end{equation} Let $p = \min I_{Y^1}$ and $q = \max I_{Y^2}$. Note that $[p, q] \subset J_Z$. (If $Z$ is paired with $Z'$ then $[p, q] \subset J_Z \cap J_{Z'}$.) Again by (1) of \refax{Access} and \reflem{Hempel}, \[ d_Z(\mu_{r(p)}, \partial Y^1) < {C_3}. \] It follows that \[ d_Z(\mu_{r(p)}, \eta_1) \leq {C_3} + 1 \] and the same bound applies to $d_Z(\mu_{r(q)}, \eta_2)$. Combined with (\ref{Eqn:EtaBigger}) we find that \[ d_Z(\mu_{r(p)}, \mu_{r(q)}) \geq {L_2} - 4{C_1} - 4{C_3} - 10{C_3} - 10. \] By the reverse triangle inequality (\reflem{Reverse}), for any $p' \leq p$ and $q \leq q'$, \[ d_Z(\mu_{r(p')}, \mu_{r(q')}) \geq {L_2} - 6{C_1} - 4{C_3} - 10{C_3} - 10. \] Finally by \refax{Combin} and the above inequality we have \[ d_Z(\gamma_{p'}, \gamma_{q'}) \geq {L_2} - 6{C_1} - 4{C_3} - 10{C_3} - 10 - 2{C_2}.
\] By (\ref{Eqn:UpperBigger}) the right-hand side is greater than ${L_1}(X) + 2L_4$ so we deduce that $Z \in B_X(p', q')$, for any such $p', q'$. (When $Z$ is paired deduce also that $Z' \in B_X(p', q')$.) Let $I_V$ be the first inductive interval chosen by the procedure with the property that $I_V \cap [p, q] \neq \emptyset$. Note that, since $I_{Y^1}$ and $I_{Y^2}$ will also be chosen, $I_V \subset [p, q]$. Let $p', q'$ be the indices so that $V$ is chosen from $B_X(p', q')$. Thus $p' \leq p$ and $q \leq q'$. However, since $I_V \subset [p, q] \subset J_Z$, \reflem{HolesContained} implies that $V$ is strictly nested in $Z$. (When pairing occurs we may find instead that $V \subset Z'$ or $V' \subset Z$.) Thus $\xi(Z) > \xi(V)$ and we find that $Z$ would be chosen from $B_X(p', q')$, instead of $V$. This is a contradiction. \end{proof} \subsection{Electric partition} The goal of this subsection is to prove: \begin{proposition} \label{Prop:Electric} There is a constant $A$ depending only on $\xi(X)$, so that if $[i,j] \subset [0, K]$ is an electric interval then \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j). \] \end{proposition} We begin by building a partition of $[i,j]$ into straight and shortcut intervals. Define $$ C_X = \{ Y \subsetneq X \mathbin{\mid} \mbox{$Y$ is a non-hole and } d_Y(\mu_{r(i)}, \mu_{r(j)}) \geq {L_1}(X) \}. $$ We also define, for all $[p, q] \subset [i, j]$, $$ C_X(p,q) = \{ Y \in C_X \mathbin{\mid} J_Y \cap [r(p), r(q)] \neq \emptyset \}. $$ Our recursion starts with the one-part partition $\mathcal{P}(i, j) = \{[i, j]\}$. At every stage of the recursion, $\mathcal{P}(i, j)$ is a partition of $[i, j]$ into shortcut, straight, and undetermined intervals. Suppose that $[p, q] \in \mathcal{P}(i, j)$ is undetermined. \begin{proofclaim} If $C_X(p, q)$ is empty then $[p, q]$ is straight. \end{proofclaim} \begin{proof} We show the contrapositive. Suppose that $Y$ is a non-hole with $d_Y(\mu_{r(p)}, \mu_{r(q)}) \geq {L_2}$.
Since ${L_2} > {C_3}$, \refax{Access} implies that $J_Y \cap [r(p), r(q)]$ is non-empty. Also, the reverse triangle inequality (\reflem{Reverse}) gives $$ d_Y(\mu_{r(p)}, \mu_{r(q)}) < d_Y(\mu_{r(i)}, \mu_{r(j)}) + 2{C_1}. $$ Since ${L_2} > {L_1}(X) + 2{C_1}$, we find that $Y \in C_X$. It follows that $Y \in C_X(p,q)$. \end{proof} So when $C_X(p, q)$ is empty the interval $[p, q]$ is determined to be straight. Proceed to the next undetermined element of $\mathcal{P}(i, j)$. Now suppose that $C_X(p, q)$ is non-empty. Then we choose any $Y \in C_X(p,q)$ so that $Y$ has maximal $\xi(Y)$ amongst the elements of $C_X(p,q)$. Notice that, as $Y \in C_X(p, q)$, the intersection $J_Y \cap [r(p), r(q)]$ is non-empty. There are two cases. If $J_Y \cap r([p, q])$ is empty then let $p' \in [p, q]$ be the largest integer so that $r(p') < \min J_Y$. Note that $p'$ is well-defined. Now divide the interval $[p, q]$ into the two undetermined intervals $[p, p']$, $[p' + 1, q]$. In this situation we say $Y$ is associated to a {\em shortcut of length one} and we add the element $[p' + \frac{1}{2}]$ to $\mathcal{P}(i, j)$. Next suppose that $J_Y \cap r([p, q])$ is non-empty. Let $p', q' \in [p,q]$ be the first and last indices, respectively, so that $r(p'), r(q') \in J_Y$. (Note that it is possible to have $p' = q'$.) Partition $[p, q] = [p, p'-1] \cup [p', q'] \cup [q'+1, q]$. The middle part $[p', q']$ is a shortcut associated to $Y$; the first and third parts are undetermined and either may be empty. This completes the recursive construction of the partition. Define $$ \mathcal{P}_{\text{short}} = \{ I \in \mathcal{P}(i, j) \mathbin{\mid} \mbox{$I$ is a shortcut}\} $$ and $$ \mathcal{P}_{\text{str}} = \{ I \in \mathcal{P}(i, j) \mathbin{\mid} \mbox{$I$ is straight}\}. $$ \begin{proposition} \label{Prop:NumberOfShortcuts} With $\mathcal{P}(i, j)$ as defined above, $$ d_X(\gamma_i, \gamma_j) \geq \frac{|\mathcal{P}_{\text{short}}|}{2(3 \cdot {L_2})^{\xi(X) - 1} + 1} - 1.
$$ \end{proposition} \begin{proof} The proof is identical to that of \refprop{NumberOfInductives} with the caveat that in \reflem{PigeonForHoles} we must use the markings $\mu_{r(i)}$ and $\mu_{r(j)}$ instead of the endpoints $\gamma_i$ and $\gamma_j$. \end{proof} Now we ``electrify'' every shortcut interval using \refthm{UpperBound} recursively. \begin{lemma} \label{Lem:Shortcut} There is a constant ${L_3} = {L_3}(X, \mathcal{G})$ so that for every shortcut interval $[p, q]$ we have $d_\mathcal{G}(\gamma_p, \gamma_q) < {L_3}$. \end{lemma} \begin{proof} As $[p,q]$ is a shortcut we are given a non-hole $Z \subset X$ so that $r([p,q]) \subset J_Z$. Let $Y = X {\smallsetminus} Z$. Thus \refax{Replace} gives vertices $\gamma_p', \gamma_q'$ of $\mathcal{G}$ lying in $Y$ or in $Z$, so that $d_\mathcal{G}(\gamma_p, \gamma_p'), d_\mathcal{G}(\gamma_q, \gamma_q') \leq {C_4}$. If one of $\gamma_p', \gamma_q'$ lies in $Y$ while the other lies in $Z$ then \[ d_\mathcal{G}(\gamma_p, \gamma_q) < 2{C_4} + 1. \] If both lie in $Z$ then, as $Z$ is a non-hole, there is a vertex $\delta \in \mathcal{G}(S)$ disjoint from both $\gamma_p'$ and $\gamma_q'$ and we have \[ d_\mathcal{G}(\gamma_p, \gamma_q) < 2{C_4} + 2. \] If both lie in $Y$ then there are two cases. If $Y$ is not a hole for $\mathcal{G}(S)$ then we are done as in the previous case. If $Y$ is a hole then by the definition of shortcut interval, \reflem{LipschitzToHoles}, and the triangle inequality we have \[ d_W(\gamma_p', \gamma_q') < 6 + 6{C_4} + {L_2} \] for all holes $W \subset Y$. Notice that $Y$ is strictly contained in $X$. Thus we may inductively apply \refthm{UpperBound} with $c = 6 + 6{C_4} + {L_2}$. We deduce that all terms on the right-hand side of the distance estimate vanish and thus $d_\mathcal{G}(\gamma_p', \gamma_q')$ is bounded by a constant depending only on $X$ and $\mathcal{G}$. The same then holds for $d_\mathcal{G}(\gamma_p, \gamma_q)$ and we are done.
\end{proof} We are now equipped to give: \begin{proof}[Proof of \refprop{Electric}] Suppose that $\mathcal{P}(i, j)$ is the given partition of the electric interval $[i, j]$ into straight and shortcut subintervals. As a bit of notation, if $[p, q] = I \in \mathcal{P}(i, j)$, we take $d_\mathcal{G}(I) = d_\mathcal{G}(\gamma_p, \gamma_q)$ and $d_X(I) = d_X(\gamma_p, \gamma_q)$. Applying \refax{Combin} we have \begin{align} \label{Eqn:StraightUpperBound} d_\mathcal{G}(\gamma_i, \gamma_j) & \leq \sum_{I \in \mathcal{P}_{\text{str}}} d_\mathcal{G}(I) + \sum_{I \in \mathcal{P}_{\text{short}}} d_\mathcal{G}(I) + {C_2}|\mathcal{P}(i, j)| \end{align} The last term arises from connecting left endpoints of intervals with right endpoints. We must bound the three terms on the right. We begin with the third; recall that $|\mathcal{P}(i, j)| = |\mathcal{P}_{\text{short}}| + |\mathcal{P}_{\text{str}}|$, that $|\mathcal{P}_{\text{str}}| \leq |\mathcal{P}_{\text{short}}| + 1$, and that $|\mathcal{P}_{\text{short}}| \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j)$. The second inequality follows from the construction of the partition while the last is implied by \refprop{NumberOfShortcuts}. Thus the third term of \refeqn{StraightUpperBound} is quasi-bounded above by $d_X(\gamma_i, \gamma_j)$. By \reflem{Shortcut}, the second term of \refeqn{StraightUpperBound} is at most ${L_3}|\mathcal{P}_{\text{short}}|$. Finally, by \refax{Straight}, for all $I \in \mathcal{P}_{\text{str}}$ we have $$ d_\mathcal{G}(I) \mathbin{\leq_{A}} d_X(I). $$ Also, it follows from the reverse triangle inequality (\reflem{Reverse}) that \[ \sum_{I \in \mathcal{P}_{\text{str}}} d_X(I) \leq d_X(\gamma_i, \gamma_j) + (2{C_1} + 2{C_2})|\mathcal{P}_{\text{str}}| + 2{C_2}. \] We deduce that $\sum_{I \in \mathcal{P}_{\text{str}}} d_\mathcal{G}(I)$ is also quasi-bounded above by $d_X(\gamma_i, \gamma_j)$.
Thus for a somewhat larger value of $A$ we find \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} d_X(\gamma_i, \gamma_j). \] This completes the proof. \end{proof} \subsection{The upper bound} We will need: \begin{proposition} \label{Prop:Convert} For any $c > 0$ there is a constant $A$ with the following property. Suppose that $[i, j] = I_Y$ is an inductive interval in $\mathcal{P}_X$. Then we have: \[ d_\mathcal{G}(\gamma_i, \gamma_j) \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma_i, \gamma_j)]_{c} \] where $Z$ ranges over all holes for $\mathcal{G}$ contained in $X$. \end{proposition} \begin{proof} \refax{Replace} gives vertices $\gamma'_i$, $\gamma'_j \in \mathcal{G}$, contained in $Y$, so that $d_\mathcal{G}(\gamma_i, \gamma'_i) \leq {C_4}$ and the same holds for $j$. Since projection to holes is coarsely Lipschitz (\reflem{LipschitzToHoles}) for any hole $Z$ we have $d_Z(\gamma_i, \gamma'_i) \leq 3 + 3{C_3}$. Fix any $c > 0$. Now, since \begin{align*} d_\mathcal{G}(\gamma_i, \gamma_j) & \leq d_\mathcal{G}(\gamma'_i, \gamma'_j) + 2{C_4} \end{align*} to find the required constant $A$ it suffices to bound $d_\mathcal{G}(\gamma'_i, \gamma'_j)$. Let $c' = c + 6{C_3} + 6$. Since $Y \subsetneq X$, induction gives us a constant $A$ so that \begin{align*} d_\mathcal{G}(\gamma'_i, \gamma'_j) & \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma'_i, \gamma'_j)]_{c'} \\ & \leq \sum_Z [d_Z(\gamma_i, \gamma_j) + 6{C_3} + 6]_{c'} \\ & < (6{C_3} + 6)N + \sum_Z [d_Z(\gamma_i, \gamma_j)]_c \end{align*} where $N$ is the number of non-zero terms in the final sum. Also, the sum ranges over sub-holes of $Y$. We may take $A$ somewhat larger to deal with the term $(6{C_3} + 6)N$ and include all holes $Z \subset X$ to find \begin{align*} d_\mathcal{G}(\gamma_i, \gamma_j) & \mathbin{\leq_{A}} \sum_Z [d_Z(\gamma_i, \gamma_j)]_c \end{align*} where the sum is over all holes $Z \subset X$. \end{proof} \subsection{Finishing the proof} Now we may finish the proof of \refthm{UpperBound}.
Fix any constant $c \geq 0$. Suppose that $X$, $\alpha_X$, $\beta_X$ are given as above. Suppose that $\Gamma = \{ \gamma_i \}_{i = 0}^K$ is the given combinatorial path and $\mathcal{P}_X$ is the partition of $[0, K]$ into inductive and electric intervals. So we have: \begin{align} \label{Eqn:UpperBound} d_\mathcal{G}(\alpha_X, \beta_X) & \leq \sum_{I \in \mathcal{P}_{\text{ind}}} d_\mathcal{G}(I) + \sum_{I \in \mathcal{P}_{\text{ele}}} d_\mathcal{G}(I) + {C_2}|\mathcal{P}_X| \end{align} Again, the last term arises from adjacent right and left endpoints of different intervals. We must bound the terms on the right-hand side; begin by noticing that $|\mathcal{P}_X| = |\mathcal{P}_{\text{ind}}| + |\mathcal{P}_{\text{ele}}|$, $|\mathcal{P}_{\text{ele}}| \leq |\mathcal{P}_{\text{ind}}| + 1$ and $|\mathcal{P}_{\text{ind}}| \mathbin{\leq_{A}} d_X(\alpha_X, \beta_X)$. The second inequality follows from the way the partition is constructed and the last follows from \refprop{NumberOfInductives}. Thus the third term of \refeqn{UpperBound} is quasi-bounded above by $d_X(\alpha_X, \beta_X)$. Next consider the second term of \refeqn{UpperBound}: \begin{align*} \sum_{I \in \mathcal{P}_{\text{ele}}} d_\mathcal{G}(I) & \mathbin{\leq_{A}} \sum_{I \in \mathcal{P}_{\text{ele}}} d_X(I) \\ & \leq d_X(\alpha_X, \beta_X) + (2{C_1} + 2{C_2})|\mathcal{P}_{\text{ele}}| + 2{C_2} \end{align*} with the first inequality following from \refprop{Electric} and the second from the reverse triangle inequality (\reflem{Reverse}). Finally we bound the first term of \refeqn{UpperBound}. Let $c' = c + {L_0}$. 
Thus, \begin{align*} \sum_{I \in \mathcal{P}_{\text{ind}}} d_\mathcal{G}(I) & \leq \sum_{I_Y \in \mathcal{P}_{\text{ind}}} \left( A'_Y \left( \sum_{Z \subsetneq Y} [d_Z(I_Y)]_{c'} \right) + A'_Y \right) \\ & \leq A'' \left( \sum_{I \in \mathcal{P}_{\text{ind}}} \sum_{Z \subsetneq X} [d_Z(I)]_{c'} \right) + A'' \cdot |\mathcal{P}_{\text{ind}}| \\ & \leq A'' \left( \sum_{Z \subsetneq X} \sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'} \right) + A'' \cdot |\mathcal{P}_{\text{ind}}| \end{align*} Here $A'_Y$ and the first inequality are given by \refprop{Convert}. Also $A'' = \max \{ A'_Y \mathbin{\mid} Y \subsetneq X \}$. In the last line, each sum of the form $\sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'}$ has at most three terms, by \refrem{AtMostThreeInductive} and the fact that $c' > {L_0}$. For the moment, fix a hole $Z$ and any three elements $I, I', I'' \in \mathcal{P}_{\text{ind}}$. By the reverse triangle inequality (\reflem{Reverse}) we find that \[ d_Z(I) + d_Z(I') + d_Z(I'') < d_Z(\alpha_X, \beta_X) + 6{C_1} + 8{C_2} \] which in turn is less than $d_Z(\alpha_X, \beta_X) + {L_0}$. It follows that \[ [d_Z(I)]_{c'} + [d_Z(I')]_{c'} + [d_Z(I'')]_{c'} < [d_Z(\alpha_X, \beta_X)]_c + {L_0}. \] Thus, \begin{align*} \sum_{Z \subsetneq X} \sum_{I \in \mathcal{P}_{\text{ind}}} [d_Z(I)]_{c'} & \leq {L_0} \cdot N + \sum_{Z \subsetneq X} [d_Z(\alpha_X, \beta_X)]_c \end{align*} where $N$ is the number of non-zero terms in the final sum. Also, the sum ranges over all holes $Z \subsetneq X$. Combining the above inequalities, and increasing $A$ once again, implies that \[ d_\mathcal{G}(\alpha_X, \beta_X) \mathbin{\leq_{A}} \sum_Z [d_Z(\alpha_X, \beta_X)]_c \] where the sum ranges over all holes $Z \subseteq X$. This completes the proof of \refthm{UpperBound}. 
\qed \section{Background on {Teichm\"uller~} space} \label{Sec:BackgroundTeich} Our goal in Sections~\ref{Sec:PathsNonorientable}, \ref{Sec:PathsArc} and~\ref{Sec:PathsDisk} will be to verify the axioms stated in \refsec{Axioms} for the complex of curves of a non-orientable surface, for the arc complex, and for the disk complex. Here we give the necessary background on {Teichm\"uller~} space. Fix now a surface $S = S_{g,n}$ of genus $g$ with $n$ punctures. Two conformal structures on $S$ are equivalent, written $\Sigma \sim \Sigma'$, if there is a conformal map $f \colon \Sigma \to \Sigma'$ which is isotopic to the identity. Let $\mathcal{T} = \mathcal{T}(S)$ be the {\em {Teichm\"uller~} space} of $S$: the set of equivalence classes of conformal structures $\Sigma$ on $S$. Define the {Teichm\"uller~} metric by \[ d_\mathcal{T}(\Sigma,\Sigma') = \inf_f \left\{ \frac{1}{2} \log K(f) \right\} \] where the infimum ranges over all quasiconformal maps $f \colon \Sigma \to \Sigma'$ isotopic to the identity and where $K(f)$ is the maximal dilatation of $f$. Recall that the infimum is realized by a {Teichm\"uller~} map that, in turn, may be defined in terms of a quadratic differential. \subsection{Quadratic differentials} \begin{definition} A {\em quadratic differential} $q(z)\,dz^2$ on $\Sigma$ is an assignment of a holomorphic function to each coordinate chart that is a disk and of a meromorphic function to each chart that is a punctured disk. If $z$ and $\zeta$ are overlapping charts then we require \[ q_z(z) = q_\zeta(\zeta) \left(\frac{d\zeta}{dz}\right)^2 \] in the intersection of the charts. The meromorphic function $q_z(z)$ has at most a simple pole at the puncture $z = 0$. \end{definition} At any point away from the zeroes and poles of $q$ there is a natural coordinate $z = x + iy$ with the property that $q_z \equiv 1$. In this natural coordinate the foliation by lines $y = c$ is called the {\em horizontal foliation}.
The foliation by lines $x = c$ is called the {\em vertical foliation}. Now fix a quadratic differential $q$ on $\Sigma = \Sigma_0$. Let $x, y$ be natural coordinates for $q$. For every $t \in \mathbb{R}$ we obtain a new quadratic differential $q_t$ with coordinates \[ x_t = e^{t} x, \qquad y_t = e^{-t} y. \] Also, $q_t$ determines a conformal structure $\Sigma_t$ on $S$. The map $t \mapsto \Sigma_t$ is the {Teichm\"uller~} geodesic determined by $\Sigma$ and $q$. \subsection{Marking coming from a {Teichm\"uller~} geodesic} \label{Sec:MarkingFromTeich} Suppose that $\Sigma$ is a Riemann surface structure on $S$ and $\sigma$ is the uniformizing hyperbolic metric in the conformal class of $\Sigma$. In a slight abuse of terminology, we call the collection of shortest simple non-peripheral closed geodesics the {\em systoles} of $\sigma$. Fix a constant $\epsilon$ smaller than the Margulis constant. The $\epsilon$--thick part of {Teichm\"uller~} space consists of those Riemann surfaces such that the hyperbolic systole has length at least $\epsilon$. We define $P = P(\sigma)$, a {\em Bers pants decomposition} of $S$, as follows: pick $\alpha_1$, any systole for $\sigma$. Define $\alpha_i$ to be any systole of $\sigma$ restricted to $S {\smallsetminus} (\alpha_1 \cup \ldots \cup \alpha_{i - 1})$. Continue in this fashion until $P$ is a pants decomposition. Note that any curve with length less than the Margulis constant will necessarily be an element of $P$. Suppose that $\Sigma, \Sigma' \in \mathcal{T}(S)$. Suppose that $P, P'$ are Bers pants decompositions with respect to $\Sigma$ and $\Sigma'$. Suppose also that $d_\mathcal{T}(\Sigma, \Sigma') \leq 1$. Then the curves in $P$ have uniformly bounded lengths in $\Sigma'$ and conversely. By the Collar Lemma, the intersection $\iota(P, P')$ is bounded, solely in terms of $\xi(S)$. Suppose now that $\{ \Sigma_t \mathbin{\mid} t \in [-M, M] \}$ is the {Teichm\"uller~} geodesic defined by the quadratic differentials $q_t$. 
Let $\sigma_t$ be the hyperbolic metric uniformizing $\Sigma_t$. Let $P_t = P(\sigma_t)$ be a Bers pants decomposition. We now find transversals in order to complete $P_t$ to a {\em Bers marking} $\nu_t$. Suppose that $P_t = \{ \alpha_i \}$. For each $i$, let $A^i$ be the annular cover of $S$ corresponding to $\alpha_i$. Note that $q_t$ lifts to a singular Euclidean metric $q^i_t$ on $A^i$. Let $\alpha^i$ be a geodesic representative of the core curve of $A^i$ with respect to the metric $q_t^i$. Choose $\gamma_i \in \mathcal{C}(A^i)$ to be any geodesic arc, also with respect to $q^i_t$, that is perpendicular to $\alpha^i$. Let $\beta_i$ be any curve in $S {\smallsetminus} (\{ \alpha_j \}_{j \neq i})$ which meets $\alpha_i$ minimally and so that $d_{A^i}(\beta_i, \gamma_i) \leq 3$. (See the discussion after the proof of Lemma~2.4 in~\cite{MasurMinsky00}.) Doing this for each $i$ gives a complete clean marking $\nu_t = \{ \alpha_i \} \cup \{ \beta_i \}$. We now have: \begin{lemma} \cite[Remark 6.2 and Equation (3)]{Rafi10} There is a constant $B_0 = B_0(S)$ with the following property. For any {Teichm\"uller~} geodesic and for any time $t$, there is a constant $\delta > 0$ so that if $|t - s| \leq \delta$ then \[ \iota(\nu_t, \nu_s) < B_0. \] \end{lemma} Suppose that $\Sigma_t$ and $\Sigma_s$ are surfaces in the $\epsilon$--thick part of $\mathcal{T}(S)$. We take $B_0$ sufficiently large so that if $\iota(\nu_t, \nu_s) \geq B_0$ then $d_\mathcal{T}(\Sigma_t, \Sigma_s) \geq 1$. \subsection{The marking axiom} We construct a sequence of markings $\mu_n$, for $n \in [0, N] \subset \mathbb{N}$, as follows. Take $\mu_0 = \nu_{-M}$. Now suppose that $\mu_n = \nu_t$ is defined. Let $s > t$ be the first time that there is a marking with $\iota(\nu_t, \nu_s) \geq B_0$, if such a time exists. If so, let $\mu_{n+1} = \nu_s$. If no such time exists take $N = n$ and we are done. We now show that $\mu_n = \nu_t$ and $\mu_{n+1} = \nu_s$ have bounded intersection.
By the above lemma there is a marking $\nu_r$ with $t \leq r < s$ and \[ \iota(\nu_r,\nu_s) \leq B_0. \] By construction \[ \iota(\nu_t, \nu_r) < B_0. \] Since intersection number bounds distance in the marking complex, we find, by the triangle inequality, that $\nu_t$ and $\nu_s$ are a bounded distance apart in the marking complex. Conversely, since distance bounds intersection in the marking complex we find that $\iota(\mu_n,\mu_{n+1})$ is bounded. It follows that $d_Y(\mu_n, \mu_{n+1})$ is uniformly bounded, independent of $Y \subset S$ and of $n \in [0, N]$. It now follows from Theorem 6.1 of \cite{Rafi10} that, for any subsurface $Y \subset S$, the sequence $\{ \pi_Y(\mu_n) \} \subset \mathcal{C}(Y)$ is an unparameterized quasi-geodesic. Thus the marking path $\{ \mu_n \}$ satisfies the second requirement of \refax{Marking}. The first requirement is trivial as every $\mu_n$ fills $S$. \subsection{The accessibility axiom} We now turn to \refax{Access}. Since $\mu_n$ fills $S$ for every $n$, the first requirement is a triviality. In Section 5 of \cite{Rafi10} Rafi defines, for every subsurface $Y \subset S$, an {\em interval of isolation} $I_Y$ inside of the parameterizing interval of the {Teichm\"uller~} geodesic. Note that $I_Y$ is defined purely in terms of the geometry of the given quadratic differentials. Further, for all $t \in I_Y$ and for all components $\alpha \subset \partial Y$ the hyperbolic length of $\alpha$ in $\Sigma_t$ is less than the Margulis constant. Furthermore, by Theorem~5.3 of~\cite{Rafi10}, there is a constant ${B_3}$ so that if $[s,t] \cap I_Y = \emptyset$ then \[ d_Y(\nu_s, \nu_t) \leq {B_3}. \] So define $J_Y \subset [0, N]$ to be the subinterval of indices $n$ for which the time corresponding to $\mu_n$ lies in $I_Y$. The third requirement follows. Finally, if $m \in J_Y$ then $\partial Y$ is contained in $\operatorname{base}(\mu_m)$ and thus $\iota(\partial Y, \mu_m) \leq 2 \cdot |\partial Y|$.
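For concreteness, here is one way to make the passage from bounded intersection to bounded projection distance quantitative. This is only a sketch: the constants $B_1$ and $C$ below are hypothetical names recording, respectively, the uniform intersection bound obtained above and the uniform multiplicative distortion of intersection number under subsurface projection; the inequality used is the standard logarithmic intersection bound on curve complex distance (compare \reflem{Hempel}). If $\iota(\mu_n, \mu_{n+1}) \leq B_1$ then, for every essential subsurface $Y \subseteq S$,
\[
d_Y(\mu_n, \mu_{n+1})
\leq 2 + 2 \log_2 \iota\big(\pi_Y(\mu_n), \pi_Y(\mu_{n+1})\big)
\leq 2 + 2 \log_2 (C \cdot B_1),
\]
a bound independent of both $Y$ and $n$, as claimed.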
\subsection{The distance estimate in {Teichm\"uller~} space} We end this section by quoting another result of Rafi: \begin{theorem}\cite[Theorem~2.4]{Rafi10} \label{Thm:TeichDistanceEstimate} Fix a surface $S$ and a constant $\epsilon > 0$. There is a constant ${C_0} = {C_0}(S, \epsilon)$ so that for any $c > {C_0}$ there is a constant $A$ with the following property. Suppose that $\Sigma$ and $\Sigma'$ lie in the $\epsilon$--thick part of $\mathcal{T}(S)$. Then \[ d_\mathcal{T}(\Sigma, \Sigma') \mathbin{=_A} \sum_X [d_X(\mu, \mu')]_c + \sum_\alpha [\log d_\alpha(\mu, \mu')]_c \] where $\mu$ and $\mu'$ are Bers markings on $\Sigma$ and $\Sigma'$, $X \subseteq S$ ranges over non-annular surfaces and $\alpha$ ranges over vertices of $\mathcal{C}(S)$. \qed \end{theorem} \section{Paths for the non-orientable surface} \label{Sec:PathsNonorientable} Fix a compact, connected, non-orientable surface $F$. Let $S$ be the orientation double cover with covering map $\rho_F \colon S \to F$. Let $\tau \colon S \to S$ be the associated involution. Note that $\mathcal{C}(F) = \mathcal{C}^\tau(S)$. Let $\mathcal{C}^\tau(S) \to \mathcal{C}(S)$ be the relation sending a symmetric multicurve to its components. Our goal for this section is to prove \reflem{SymmetricSurfaces}, the classification of holes for $\mathcal{C}(F)$. As remarked above, \reflem{InvariantQI} and \refcor{NonorientableCurveComplexHyperbolic} follow, proving the hyperbolicity of $\mathcal{C}(F)$. \subsection{The marking path} We will use the extreme rigidity of {Teichm\"uller~} geodesics to find $\tau$--invariant marking paths. We first show that $\tau$--invariant Bers pants decompositions exist. \begin{lemma} \label{Lem:TauInvar} Fix a $\tau$--invariant hyperbolic metric $\sigma$. Then there is a Bers pants decomposition $P = P(\sigma)$ which is $\tau$--invariant. \end{lemma} \begin{proof} Let $P_0 = \emptyset$. Suppose that $0 \leq k < \xi(S)$ curves have been chosen to form $P_k$.
By induction we may assume that $P_k$ is $\tau$--invariant. Let $Y$ be a component of $S {\smallsetminus} P_k$ with $\xi(Y) \geq 1$. Note that since $\tau$ is orientation reversing, $\tau$ does not fix any boundary component of $Y$. Pick any systole $\alpha$ for $Y$. \begin{proofclaim} Either $\tau(\alpha) = \alpha$ or $\alpha \cap \tau(\alpha) = \emptyset$. \end{proofclaim} \begin{proof} Suppose not and take $p \in \alpha \cap \tau(\alpha)$. Then $\tau(p) \in \alpha \cap \tau(\alpha)$ as well, and, since $\tau$ has no fixed points, $p \neq \tau(p)$. The points $p$ and $\tau(p)$ divide $\alpha$ into segments $\beta$ and $\gamma$. Since $\tau$ is an isometry, we have \[ \ell_\sigma(\tau(\alpha)) = \ell_\sigma(\alpha) \quad \mbox{and} \quad \ell_\sigma(\tau(\beta)) = \ell_\sigma(\beta). \] Now concatenate to obtain (possibly immersed) loops \[ \beta' = \beta*\tau(\beta) \quad \mbox{and} \quad \gamma' = \gamma*\tau(\gamma). \] If $\beta'$ is null-homotopic then $\alpha \cup \tau(\alpha)$ cuts a monogon or a bigon out of $S$, contradicting our assumption that $\alpha$ was a geodesic. Suppose, by way of contradiction, that $\beta'$ is homotopic to some boundary curve $b \subset \partial Y$. Since $\tau(\beta') = \beta'$, it follows that $\tau(b)$ and $\beta'$ are also homotopic. Thus $b$ and $\tau(b)$ cobound an annulus, implying that $Y$ is an annulus, a contradiction. The same holds for $\gamma'$. Let $\beta''$ and $\gamma''$ be the geodesic representatives of $\beta'$ and $\gamma'$. Since $\beta$ and $\tau(\beta)$ meet transversely, $\beta''$ has length in $\sigma$ strictly smaller than $2\ell_\sigma(\beta)$. Similarly the length of $\gamma''$ is strictly smaller than $2\ell_\sigma(\gamma)$. Suppose that $\beta''$ is shorter than $\gamma''$. It follows that $\beta''$ is strictly shorter than $\alpha$. If $\beta''$ is embedded then this contradicts the assumption that $\alpha$ was shortest.
If $\beta''$ is not embedded then there is an embedded curve $\beta'''$ inside of a regular neighborhood of $\beta''$ which is again essential, non-peripheral, and has geodesic representative shorter than $\beta''$. This is our final contradiction and the claim is proved. \end{proof} Thus, if $\tau(\alpha) = \alpha$ we let $P_{k+1} = P_k \cup \{ \alpha \}$ and we are done. If $\tau(\alpha) \neq \alpha$ then by the above claim $\tau(\alpha) \cap \alpha = \emptyset$. In this case let $P_{k+2} = P_k \cup \{ \alpha, \tau(\alpha) \}$ and \reflem{TauInvar} is proved. \end{proof} Transversals are chosen with respect to a quadratic differential metric. Suppose that $\alpha, \beta \in \mathcal{C}^\tau(S)$. If $\alpha$ and $\beta$ do not fill $S$ then we may replace $S$ by the support of their union. Following Thurston~\cite{Thurston88} there exists a square-tiled quadratic differential $q$ with squares associated to the points of $\alpha \cap \beta$. (See~\cite{Bowditch06} for an analysis of how the square-tiled surface relates to paths in the curve complex.) Let $q_t$ be the image of $q$ under the {Teichm\"uller~} geodesic flow. We have: \begin{lemma} \label{Lem:TauIsom} $\tau^*q_t = q_t$. \end{lemma} \begin{proof} Note that $\tau$ preserves $\alpha$ and also $\beta$. Since $\tau$ permutes the points of $\alpha \cap \beta$ it permutes the rectangles of the singular Euclidean metric $q_t$ while preserving their vertical and horizontal foliations. Thus $\tau$ is an isometry of the metric and the conclusion follows. \end{proof} We now choose the {Teichm\"uller~} geodesic $\{ \Sigma_t \mathbin{\mid} t \in [-M, M] \}$ so that the hyperbolic length of $\alpha$ is less than the Margulis constant in $\sigma_{-M}$ and the same holds for $\beta$ in $\sigma_M$. Also, $\alpha$ is the shortest curve in $\sigma_{-M}$ and similarly for $\beta$ in $\sigma_M$. \begin{lemma} Fix $t$.
There are transversals for $P_t$ which are close to being $q_t$--perpendicular and which are $\tau$--invariant. \end{lemma} \begin{proof} Let $P = P_t$ and fix $\alpha \in P$. Let $X = S {\smallsetminus} (P {\smallsetminus} \alpha)$. There are two cases: either $\tau(X) \cap X = \emptyset$ or $\tau(X) = X$. Suppose the former. Then we choose any transversal $\beta \subset X$ close to being $q_t$--perpendicular and take $\tau(\beta)$ to be the transversal to $\tau(\alpha)$. Suppose now that $\tau(X) = X$. It follows that $X$ is a four-holed sphere. The quotient $X/\tau$ is homeomorphic to a twice-holed $\mathbb{RP}^2$. Therefore there are only four essential non-peripheral curves in $X/\tau$. Two of these are cores of M\"obius bands and the other two are their doubles. The cores meet in a single point. Perforce $\alpha$ is the double cover of one core and we take $\beta$ the double cover of the other. It remains only to show that $\beta$ is close to being $q_t$--perpendicular. Let $S^\alpha$ be the annular cover of $S$ corresponding to $\alpha$ and let $q_t^\alpha$ be the lift of $q_t$ to $S^\alpha$. Let $\perp$ be the set of $q_t^\alpha$--perpendiculars. This is a $\tau$--invariant, diameter-one subset of $\mathcal{C}(S^\alpha)$. If $d_\alpha(\perp, \beta)$ is large then it follows that $d_\alpha(\perp, \tau(\beta))$ is also large. Also, $\tau(\beta)$ twists in the opposite direction from $\beta$. Thus \[ d_\alpha(\beta, \tau(\beta)) - 2d_\alpha(\perp, \beta) = O(1) \] and so $d_\alpha(\beta, \tau(\beta))$ is large, contradicting the fact that $\beta$ is $\tau$--invariant. \end{proof} Thus $\tau$--invariant markings exist; these have bounded intersection with the markings constructed in \refsec{BackgroundTeich}. It follows that the resulting marking path satisfies the marking path and accessibility requirements, Axioms~\ref{Ax:Marking} and~\ref{Ax:Access}.
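To make the role of the opposite twisting directions explicit, the estimate at the end of the proof above can be unpacked as follows. This is only a sketch; the constants absorbed into $O(1)$ are not computed in the text and depend only on standard properties of annular projections.

```latex
% Write D = d_\alpha(\perp, \beta).  Since \tau preserves \perp but
% reverses the direction of twisting about \alpha, the projections of
% \beta and \tau(\beta) lie on opposite sides of \perp in
% \mathcal{C}(S^\alpha).  Hence
\[
d_\alpha(\beta, \tau(\beta))
  \geq d_\alpha(\beta, \perp) + d_\alpha(\perp, \tau(\beta)) - O(1)
  \geq 2D - O(1).
\]
% As \tau(\beta) = \beta, the left-hand side is uniformly bounded,
% and hence so is D.
```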
\subsection{The combinatorial path} As in \refsec{BackgroundTeich} break the interval $[-M, M]$ into short subintervals and produce a sequence of $\tau$--invariant markings $\{ \mu_n \}_{n = 0}^N$. To choose the combinatorial path, pick $\gamma_n \in \operatorname{base}(\mu_n)$ so that $\gamma_n$ is a $\tau$--invariant curve or pair of curves and so that $\gamma_n$ is shortest in $\operatorname{base}(\mu_n)$. We now check the combinatorial path requirements given in \refax{Combin}. Note that $\gamma_0 = \alpha$, $\gamma_N = \beta$; also the reindexing map is the identity. Since \[ \iota(\gamma_n, \mu_{r(n)}) = \iota(\gamma_n, \mu_n) = 2 \] the first requirement is satisfied. Since $\mu_n$ and $\mu_{n+1}$ have bounded intersection, the same holds for $\gamma_n$ and $\gamma_{n+1}$. Projection to $F$, surgery, and \reflem{Hempel} imply that $d_{\mathcal{C}^\tau}(\gamma_n, \gamma_{n+1})$ is uniformly bounded. This verifies \refax{Combin}. \subsection{The classification of holes} We now finish the classification of large holes for $\mathcal{C}^\tau(S)$. Fix $L_0 > 3{C_3} + 2{C_2} + 2{C_1}$. Note that these constants are available because we have verified the axioms that give them. \begin{lemma} \label{Lem:SymmetricSurfaces} Suppose that $\alpha, \beta \in \mathcal{C}^\tau(S)$. Suppose that $X \subset S$ has $d_X(\alpha, \beta) > L_0$. Then $X$ is symmetric. \end{lemma} \begin{proof} Let $(\Sigma_t, q_t)$ be the {Teichm\"uller~} geodesic defined above and let $\sigma_t$ be the uniformizing hyperbolic metric. Since $L_0 > {C_3} + 2{C_2}$ it follows from the accessibility requirement that $J_X = [m, n]$ is non-empty. Now for all $t$ in the interval of isolation $I_X$ \[ \ell_{\sigma_t}(\delta) < \epsilon, \] where $\delta$ is any component of $\partial X$ and $\epsilon$ is the Margulis constant. Let $Y = \tau(X)$. Since $\tau$ is an isometry (\reflem{TauIsom}) and since the interval of isolation is metrically defined we have $I_Y = I_X$ and thus $J_Y = J_X$.
Deduce that $\partial Y$ is also short in $\sigma_t$. This implies that $\partial X \cap \partial Y = \emptyset$. If $X$ and $Y$ overlap then by (iii) of \reflem{Access} we have \[ d_X(\mu_m, \mu_n) < {C_3} \] and so, by the triangle inequality and two applications of (2) of \refax{Access} (each bounding one of $d_X(\mu_0, \mu_m)$ and $d_X(\mu_n, \mu_N)$ by ${C_3}$), we have \[ d_X(\mu_0, \mu_N) < 3{C_3}. \] By the combinatorial axiom it follows that \[ d_X(\alpha, \beta) < 3{C_3} + 2{C_2} \] a contradiction. Deduce that either $X = Y$ or $X \cap Y = \emptyset$ as desired. \end{proof} As noted in \refsec{HolesNonorientable} this shows that the only hole for $\mathcal{C}^\tau(S)$ is $S$ itself. Thus all holes trivially interfere, verifying \refax{Holes}. \subsection{The replacement axiom} We now verify \refax{Replace} for subsurfaces $Y \subset S$ with $d_Y(\alpha, \beta) \geq L_0$. (We may ignore all subsurfaces with smaller projection by taking ${L_1}(Y) > L_0$.) By \reflem{SymmetricSurfaces} the subsurface $Y$ is symmetric. If $Y$ is a hole then $Y = S$ and the first requirement is vacuous. Suppose that $Y$ is not a hole. Suppose that $\gamma_n$ is such that $n \in J_Y$. By construction $\gamma_n \in \operatorname{base}(\mu_n)$. All components of $\partial Y$ are also pants curves in $\mu_n$. It follows that we may take any symmetric curve in $\partial Y$ to be $\gamma'$ and we are done. \subsection{On straight intervals} Lastly we verify \refax{Straight}. Suppose that $[p, q]$ is a straight interval. We must show that $d_{\mathcal{C}^\tau}(\gamma_p, \gamma_q) \mathbin{\leq_{A}} d_S(\gamma_p, \gamma_q)$. Suppose that $\mu_p = \nu_s$ and $\mu_q = \nu_t$; that is, $s$ and $t$ are the times when $\mu_p, \mu_q$ are short markings. Thus $d_X(\mu_p, \mu_q) \leq {L_2}$ for every $X \subsetneq S$. This implies that the {Teichm\"uller~} geodesic, along the straight interval, lies in the thick part of {Teichm\"uller~} space.
Notice that $d_{\mathcal{C}^\tau}(\gamma_p, \gamma_q) \leq {C_2}|p - q|$, since for all $i \in [p, q - 1]$, $d_{\mathcal{C}^\tau}(\gamma_i, \gamma_{i+1}) \leq {C_2}$. So it suffices to bound $|p - q|$. By our choice of $B_0$ and because the {Teichm\"uller~} geodesic lies in the thick part we find that $|p - q| \leq d_\mathcal{T}(\Sigma_s, \Sigma_t)$. Rafi's distance estimate (\refthm{TeichDistanceEstimate}) gives: \[ d_\mathcal{T}(\Sigma_s, \Sigma_t) \mathbin{=_A} d_S(\nu_s, \nu_t). \] Since $\nu_s = \mu_p$, $\nu_t = \mu_q$, and since $\gamma_p \in \operatorname{base}(\mu_p)$ and $\gamma_q \in \operatorname{base}(\mu_q)$, we deduce that \[ d_S(\mu_p, \mu_q) \leq d_S(\gamma_p, \gamma_q) + 4. \] This verifies \refax{Straight}. Thus the distance estimate holds for $\mathcal{C}^\tau(S) = \mathcal{C}(F)$. Since there is only one hole for $\mathcal{C}(F)$ we deduce that the map $\mathcal{C}(F) \to \mathcal{C}(S)$ is a quasi-isometric embedding. As a corollary we have: \begin{theorem} \label{Thm:NonOrientableCCHyperbolic} The curve complex $\mathcal{C}(F)$ is Gromov hyperbolic. \qed \end{theorem} \section{Paths for the arc complex} \label{Sec:PathsArc} Here we verify that our axioms hold for the arc complex $\mathcal{A}(S, \Delta)$. It is worth pointing out that the axioms may be verified using {Teichm\"uller~} geodesics, train track splitting sequences, or resolutions of hierarchies. Here we use the former because it also generalizes to the non-orientable case; this is discussed at the end of this section. First note that \refax{Holes} follows from \reflem{ArcComplexHolesIntersect}. \subsection{The marking path} We are given a pair of arcs $\alpha, \beta \in \mathcal{A}(X, \Delta)$. Recall that $\sigma_S \colon \mathcal{A}(X) \to \mathcal{C}(X)$ is the surgery map, defined in \refdef{SurgeryRel}. Let $\alpha' = \sigma_S(\alpha)$ and define $\beta'$ similarly. Note that $\alpha'$ cuts a pants off of $S$. As usual, we may assume that $\alpha'$ and $\beta'$ fill $X$.
If not we pass to the subsurface they do fill. As in the previous sections let $q$ be the quadratic differential determined by $\alpha'$ and $\beta'$. Exactly as above, fix a marking path $\{ \mu_n \}_{n = 0}^N$. This path satisfies the marking and accessibility axioms (\ref{Ax:Marking}, \ref{Ax:Access}). \subsection{The combinatorial path} Let $Y_n \subset X$ be any component of $X {\smallsetminus} \operatorname{base}(\mu_n)$ meeting $\Delta$. So $Y_n$ is a pair of pants. Let $\gamma_n$ be any essential arc in $Y_n$ with both endpoints in $\Delta$. Since $\alpha' \subset \operatorname{base}(\mu_0)$ and $\beta' \subset \operatorname{base}(\mu_N)$ we may choose $\gamma_0 = \alpha$ and $\gamma_N = \beta$. As in the previous section the reindexing map is the identity. It follows immediately that $\iota(\gamma_n, \mu_n) \leq 4$. This bound, the bound on $\iota(\mu_n, \mu_{n+1})$, and \reflem{BoundedProjectionImpliesBoundedIntersection} imply that $\iota(\gamma_n, \gamma_{n+1})$ is likewise bounded. The usual surgery argument shows that if two arcs have bounded intersection then they have bounded distance. This verifies \refax{Combin}. \subsection{The replacement and the straight axioms} Suppose that $Y \subset X$ is a subsurface and $\gamma_n$ has $n \in J_Y$. Let $\mu_n = \nu_t$; that is $t$ is the time when $\mu_n$ is a short marking. Thus $\partial Y \subset \operatorname{base}(\mu_n)$ and so $\gamma_n \cap \partial Y = \emptyset$. So regardless of the hole-nature of $Y$ we may take $\gamma' = \gamma_n$ and the axiom is verified. \refax{Straight} is verified exactly as in \refsec{PathsNonorientable}. \subsection{Non-orientable surfaces} Suppose that $F$ is non-orientable and $\Delta_F$ is a collection of boundary components. Let $S$ be the orientation double cover and $\tau \colon S \to S$ the involution so that $S/\tau = F$. Let $\Delta$ be the preimage of $\Delta_F$. Then $\mathcal{A}^\tau(S, \Delta)$ is the invariant arc complex. 
Suppose that $\alpha_F, \beta_F$ are vertices in $\mathcal{A}(F, \Delta_F)$. Let $\alpha, \beta$ be their preimages. As above, without loss of generality, we may assume that $\sigma_F(\alpha_F)$ and $\sigma_F(\beta_F)$ fill $F$. Note that $\sigma_F(\alpha_F)$ cuts a surface $X$ off of $F$. The surface $X$ is either a pants or a twice-holed $\mathbb{RP}^2$. When $X$ is a pants we define $\alpha' \subset S$ to be the preimage of $\sigma_F(\alpha_F)$. When $X$ is a twice-holed $\mathbb{RP}^2$ we take $\gamma_F$ to be a core of one of the two M\"obius bands contained in $X$ and we define $\alpha'$ to be the preimage of $\gamma_F \cup \sigma_F(\alpha_F)$. We define $\beta'$ similarly. Notice that $\alpha$ and $\alpha'$ meet in at most four points. We now use $\alpha'$ and $\beta'$ to build a $\tau$--invariant {Teichm\"uller~} geodesic. The construction of the marking and combinatorial paths for $\mathcal{A}^\tau(S, \Delta)$ is unchanged. Notice that we may choose combinatorial vertices because $\operatorname{base}(\mu_n)$ is $\tau$--invariant. There is a small annoyance: when $X$ is a twice-holed $\mathbb{RP}^2$ the first vertex, $\gamma_0$, is disjoint from but not equal to $\alpha$. Strictly speaking, the first and last vertices are $\gamma_0$ and $\gamma_N$; our constants are stated in terms of their subsurface projection distances. However, since $\alpha \cap \gamma_0 = \emptyset$, and the same holds for $\beta$ and $\gamma_N$, their subsurface projection distances are all bounded. \section{Background on train tracks} \label{Sec:BackgroundTrainTracks} Here we give the necessary definitions and theorems regarding train tracks. The standard reference is~\cite{PennerHarer92}. See also~\cite{Mosher03}. We follow closely the discussion found in~\cite{MasurEtAl10}. \subsection{On tracks} A {\em generic train track} $\tau \subset S$ is a smooth, embedded trivalent graph. As usual we call the vertices {\em switches} and the edges {\em branches}.
At every switch the tangents of the three branches agree. Also, there are exactly two {\em incoming} branches and one {\em outgoing} branch at each switch. See Figure~\ref{Fig:TrainTrackModel} for the local model of a switch. \begin{figure} \caption{The local model of a train track.} \label{Fig:TrainTrackModel} \end{figure} Let $\mathcal{B}(\tau)$ be the set of branches. A transverse measure on $\tau$ is a function $w \colon \mathcal{B}(\tau) \to \mathbb{R}_{\geq 0}$ satisfying the switch conditions: at every switch the sum of the incoming measures equals the outgoing measure. Let $P(\tau)$ be the projectivization of the cone of transverse measures. Let $V(\tau)$ be the vertices of $P(\tau)$. As discussed in the references, each vertex measure gives a simple closed curve carried by $\tau$. For every track $\tau$ we refer to $V(\tau)$ as the marking corresponding to $\tau$ (see \refsec{Markings}). Note that there are only finitely many tracks up to the action of the mapping class group. It follows that $\iota(V(\tau))$ is uniformly bounded, depending only on the topological type of $S$. If $\tau$ and $\sigma$ are train tracks, and $Y \subset S$ is an essential surface, then define \[ d_Y(\tau, \sigma) = d_Y(V(\tau), V(\sigma)). \] We also adopt the notation $\pi_Y(\tau) = \pi_Y(V(\tau))$. A train track $\sigma$ is obtained from $\tau$ by {\em sliding} if $\sigma$ and $\tau$ are related as in Figure~\ref{Fig:Slide}. We say that a train track $\sigma$ is obtained from $\tau$ by {\em splitting} if $\sigma$ and $\tau$ are related as in Figure~\ref{Fig:Split}.
\begin{figure} \caption{All slides take place in a small regular neighborhood of the affected branch.} \label{Fig:Slide} \end{figure} \begin{figure} \caption{There are three kinds of splitting: right, left, and central.} \label{Fig:Split} \end{figure} Again, since the number of tracks is bounded (up to the action of the mapping class group), if $\sigma$ is obtained from $\tau$ by either a slide or a split we find that $\iota(V(\tau), V(\sigma))$ is uniformly bounded. \subsection{The marking path} We will use sequences of train tracks to define our marking path. \begin{definition} \label{Def:SplittingSequence} A {\em sliding and splitting sequence} is a collection $\{ \tau_n \}_{n = 0}^N$ of train tracks so that $\tau_{n+1}$ is obtained from $\tau_n$ by a slide or a split. \end{definition} The sequence $\{ \tau_n \}$ gives a sequence of markings via the map $\tau_n \mapsto V_n = V(\tau_n)$. Note that the support of $V_{n+1}$ is contained within the support of $V_n$ because every vertex of $\tau_{n+1}$ is carried by $\tau_n$. Theorem 5.5 of~\cite{MasurEtAl10} verifies the remaining half of \refax{Marking}. \begin{theorem} \label{Thm:TrainTrackUnparamGeodesic} Fix a surface $S$. There is a constant $A$ with the following property. Suppose that $\{ \tau_n \}_{n=0}^N$ is a sliding and splitting sequence in $S$ of birecurrent tracks. Suppose that $Y \subset S$ is an essential surface. Then the map $n \mapsto \pi_Y(\tau_n)$, as parameterized by splittings, is an $A$--unparameterized quasi-geodesic. \qed \end{theorem} Note that, when $Y = S$, \refthm{TrainTrackUnparamGeodesic} is essentially due to the first author and Minsky; see Theorem~1.3 of~\cite{MasurMinsky04}. In Section 5.2 of~\cite{MasurEtAl10}, for every sliding and splitting sequence $\{ \tau_n \}_{n = 0}^N$ and for any essential subsurface $X \subsetneq S$ an accessible interval $I_X \subset [0,N]$ is defined. \refax{Access} is now verified by Theorem 5.3 of~\cite{MasurEtAl10}.
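The switch conditions above are purely combinatorial, so they can be illustrated with a small sketch. The data structures here are invented for illustration and are not from the paper: each switch records its two incoming branches and its one outgoing branch, and a candidate weight function is a transverse measure exactly when the incoming weights sum to the outgoing weight at every switch.

```python
# Toy combinatorial model of the switch conditions on a generic train
# track.  Branches are labelled by strings; a switch is a triple
# (incoming_1, incoming_2, outgoing).  The smooth structure is ignored;
# only the linear switch conditions are checked.

def satisfies_switch_conditions(switches, w):
    """Return True when w(in1) + w(in2) == w(out) at every switch."""
    return all(w[a] + w[b] == w[c] for (a, b, c) in switches)

# Two switches sharing three branches: x and y merge into z at one
# switch, and z splits back into x and y at the other.
switches = [("x", "y", "z"), ("x", "y", "z")]

print(satisfies_switch_conditions(switches, {"x": 1, "y": 2, "z": 3}))  # True
print(satisfies_switch_conditions(switches, {"x": 1, "y": 1, "z": 3}))  # False
```

Admissible weight vectors form a cone cut out by these linear equations, which is why projectivizing gives the polyhedron $P(\tau)$ with finitely many vertices $V(\tau)$.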
\subsection{Quasi-geodesics in the marking graph} We will also need Theorem~6.1 from~\cite{MasurEtAl10}. (See~\cite{Hamenstadt09} for closely related work.) \begin{theorem} \label{Thm:TrainTrackQuasi} Fix a surface $S$. There is a constant $A$ with the following property. Suppose that $\{ \tau_n \}_{n=0}^N$ is a sliding and splitting sequence of birecurrent tracks, injective on slide subsequences, where $V_N$ fills $S$. Then $\{ V(\tau_n) \}$ is an $A$--quasi-geodesic in the marking graph. \qed \end{theorem} \section{Paths for the disk complex} \label{Sec:PathsDisk} Suppose that $V = V_g$ is a genus $g$ handlebody. The goal of this section is to verify the axioms of \refsec{Axioms} for the disk complex $\mathcal{D}(V)$ and so complete the proof of the distance estimate. \begin{theorem} \label{Thm:DiskComplexDistanceEstimate} There is a constant ${C_0} = {C_0}(V)$ so that, for any $c \geq {C_0}$ there is a constant $A$ with \[ d_\mathcal{D}(D, E) \mathbin{=_A} \sum [d_X(D, E)]_c \] independent of the choice of $D$ and $E$. Here the sum ranges over the set of holes $X \subset \partial V$ for the disk complex. \end{theorem} \subsection{Holes} The fact that all large holes interfere is recorded above as \reflem{DiskComplexHolesInterfere}. This verifies \refax{Holes}. \subsection{The combinatorial path} Suppose that $D, E \in \mathcal{D}(V)$ are disks contained in a compressible hole $X \subset S = \partial V$. As usual we may assume that $D$ and $E$ fill $X$. Recall that $V(\tau)$ is the set of vertices for the track $\tau \subset X$. We now appeal to a result of the first author and Minsky, found in~\cite{MasurMinsky04}. 
\begin{theorem} \label{Thm:DiskSurgerySequence} There exists a surgery sequence of disks $\{ D_i \}_{i=0}^K$, a sliding and splitting sequence of birecurrent tracks $\{ \tau_n \}_{n=0}^N$, and a reindexing function $r \colon [0, K] \to [0, N]$ so that \begin{itemize} \item $D_0 = D$, \item $E \in V_N$, \item $D_i \cap D_{i+1} = \emptyset$ for all $i$, and \item $\iota(\partial D_i, V_{r(i)})$ is uniformly bounded for all $i$. \end{itemize} \qed \end{theorem} \begin{remark} For the details of the proof we refer to~\cite{MasurMinsky04}. Note that the double-wave curve replacements of that paper are not needed here; as $X$ is a hole, no curve of $\partial X$ compresses in $V$. It follows that consecutive disks in the surgery sequence are disjoint (as opposed to meeting at most four times). Also, in the terminology of~\cite{MasurEtAl10}, the disk $D_i$ is a {\em wide dual} for the track $\tau_{r(i)}$. Finally, recurrence of $\tau_n$ follows because $E$ is fully carried by $\tau_N$. Transverse recurrence follows because $D$ is fully dual to $\tau_0$. \end{remark} Thus $V_n$ will be our marking path and $D_i$ will be our combinatorial path. The requirements of \refax{Combin} are now verified by \refthm{DiskSurgerySequence}. \subsection{The replacement axiom} We turn to \refax{Replace}. Suppose that $Y \subset X$ is an essential subsurface and $D_i$ has $r(i) \in J_Y$. Let $n = r(i)$. From \refthm{DiskSurgerySequence} we have that $\iota(\partial D_i, V_n)$ is uniformly bounded. By \refax{Access} we have $Y \subset \operatorname{supp}(V_n)$ and $\iota(\partial Y, V_n)$ is bounded. It follows that there is a constant $K$ depending only on $\xi(S)$ so that \[ \iota(\partial D_i, \partial Y) < K. \] Isotope $D_i$ to have minimal intersection with $\partial Y$.
As in \refsec{CompressionSequences} boundary compress $D_i$ as much as possible into the components of $X {\smallsetminus} \partial Y$ to obtain a disk $D'$ so that either \begin{itemize} \item $D'$ cannot be boundary compressed any more into $X {\smallsetminus} \partial Y$ or \item $D'$ is disjoint from $\partial Y$. \end{itemize} We may arrange matters so that every boundary compression reduces the intersection with $\partial Y$ by at least a factor of two. Since each boundary compression replaces a disk by a disjoint one, moving distance at most one in $\mathcal{D}(V)$, we find: \[ d_\mathcal{D}(D_i, D') \leq \log_2(K). \] Suppose now that $Y$ is a compressible hole. By \reflem{XCompressibleImpliesBdyXCompressible} we find that $\partial D' \subset Y$ and we are done. Suppose now that $Y$ is an incompressible hole. Since $Y$ is large there is an $I$-bundle $T \to F$, contained in the handlebody $V$, so that $Y$ is a component of $\partial_h T$. Isotope $D'$ to minimize intersection with $\partial_v T$. Let $\Delta$ be the union of components of $\partial_v T$ which are contained in $\partial V$. Let $\Gamma = \partial_v T {\smallsetminus} \Delta$. Notice that all intersections $D' \cap \Gamma$ are essential arcs in $\Gamma$: simple closed curves are ruled out by minimal intersection and inessential arcs are ruled out by the fact that $D'$ cannot be boundary compressed in the complement of $\partial Y$. Let $D''$ be an outermost component of $D' {\smallsetminus} \Gamma$. Then \reflem{BdyIncompImpliesVertical} implies that $D''$ is isotopic in $T$ to a vertical disk. If $D'' = D'$ then we may replace $D_i$ by the arc $\rho_F(D')$. The inductive argument now occurs inside of the arc complex $\mathcal{A}(F, \rho_F(\Delta))$. Suppose that $D'' \neq D'$. Let $A \subset \Gamma$ be the vertical annulus meeting $D''$. Let $N$ be a regular neighborhood of $D'' \cup A$, taken in $T$. Then the frontier of $N$ in $T$ is again a vertical disk, call it $D'''$. Note that $\iota(D''', D') < K - 1$. Finally, replace $D_i$ by the arc $\rho_F(D''')$. Suppose now that $Y$ is not a hole.
Then some component of $S {\smallsetminus} Y$ is compressible. Applying \reflem{XCompressibleImpliesBdyXCompressible} again, we find that either $D'$ lies in $Z = X {\smallsetminus} Y$ or in $Y$. This completes the verification of \refax{Replace}. \subsection{Straight intervals} We end by checking \refax{Straight}. Suppose that $[p, q] \subset [0, K]$ is a straight interval. Recall that $d_Y(\mu_{r(p)}, \mu_{r(q)}) < {L_2}$ for all strict subsurfaces $Y \subset X$. We must check that $d_\mathcal{D}(D_p, D_q) \mathbin{\leq_{A}} d_X(D_p, D_q)$. Since $d_\mathcal{D}(D_p, D_q) \leq {C_2} |p - q|$ it is enough to bound $|p - q|$. Note that $|p - q| \leq |r(p) - r(q)|$ because the reindexing map is increasing. Now, $|r(p) - r(q)| \mathbin{\leq_{A}} d_{\mathcal{M}(X)}(\mu_{r(p)}, \mu_{r(q)})$ because the sequence $\{ \mu_n \}$ is a quasi-geodesic in $\mathcal{M}(X)$ (\refthm{TrainTrackQuasi}). Increasing $A$ as needed and applying \refthm{MarkingGraphDistanceEstimate} we have \[ d_\mathcal{M}(\mu_{r(p)}, \mu_{r(q)}) \mathbin{\leq_{A}} \sum_Y [d_Y(\mu_{r(p)}, \mu_{r(q)})]_{L_2} \] and, since every term with $Y$ a strict subsurface of $X$ vanishes under the cut-off, the right hand side equals $[d_X(\mu_{r(p)}, \mu_{r(q)})]_{L_2} \leq d_X(\mu_{r(p)}, \mu_{r(q)})$, which in turn is less than $d_X(D_p, D_q) + 2{C_2}$. This completes our discussion of \refax{Straight} and finishes the proof of \refthm{DiskComplexDistanceEstimate}. \section{Hyperbolicity} \label{Sec:Hyperbolicity} The ideas in this section are related to the notion of ``time-ordered domains'' and to the hierarchy machine of~\cite{MasurMinsky00} (see also Chapters~4 and~5 of Behrstock's thesis~\cite{Behrstock04}). As remarked above, we cannot use those tools directly as the hierarchy machine is too rigid to deal with the disk complex. \subsection{Hyperbolicity} We prove: \begin{theorem} \label{Thm:GIsHyperbolic} Fix $\mathcal{G} = \mathcal{G}(S)$, a combinatorial complex. Suppose that $\mathcal{G}$ satisfies the axioms of \refsec{Axioms}. Then $\mathcal{G}$ is Gromov hyperbolic.
\end{theorem} As corollaries we have \begin{theorem} \label{Thm:ArcComplexHyperbolic} The arc complex is Gromov hyperbolic. \qed \end{theorem} \begin{theorem} \label{Thm:DiskComplexHyperbolic} The disk complex is Gromov hyperbolic. \qed \end{theorem} In fact, \refthm{GIsHyperbolic} follows quickly from: \begin{theorem} \label{Thm:GoodPathsGiveSlimTriangles} Fix $\mathcal{G}$, a combinatorial complex. Suppose that $\mathcal{G}$ satisfies the axioms of \refsec{Axioms}. Then for all $A \geq 1$ there exists $\delta \geq 0$ with the following property: Suppose that $T \subset \mathcal{G}$ is a triangle of paths where the projection of any side of $T$ into any hole is an $A$--unparameterized quasi-geodesic. Then $T$ is $\delta$--slim. \end{theorem} \begin{proof}[Proof of \refthm{GIsHyperbolic}] As laid out in \refsec{Partition} there is a uniform constant $A$ so that for any pair $\alpha, \beta \in \mathcal{G}$ there is a recursively constructed path $\mathcal{P} = \{\gamma_i\} \subset \mathcal{G}$ so that \begin{itemize} \item for any hole $X$ for $\mathcal{G}$, the projection $\pi_X(\mathcal{P})$ is an $A$--un\-pa\-ram\-e\-ter\-ized quasi-geodesic and \item $|\mathcal{P}| \mathbin{=_A} d_\mathcal{G}(\alpha, \beta)$. \end{itemize} So if $\alpha \cap \beta = \emptyset$ then $|\mathcal{P}|$ is uniformly short. Also, by \refthm{GoodPathsGiveSlimTriangles}, triangles made of such paths are uniformly slim. Thus, by \refthm{HyperbolicityCriterion}, $\mathcal{G}$ is Gromov hyperbolic. \end{proof} The rest of this section is devoted to proving \refthm{GoodPathsGiveSlimTriangles}. \subsection{Index in a hole} For the following definitions, we assume that $\alpha$ and $\beta$ are fixed vertices of $\mathcal{G}$.
For any hole $X$ and for any geodesic $k \subset \mathcal{C}(X)$ connecting a point of $\pi_X(\alpha)$ to a point of $\pi_X(\beta)$ we also define $\rho_k \colon \mathcal{G} \to k$ to be the relation $\pi_X|\mathcal{G} \colon \mathcal{G} \to \mathcal{C}(X)$ followed by taking closest points in $k$. Since the diameter of $\rho_k(\gamma)$ is uniformly bounded, we may simplify our formulas by treating $\rho_k$ as a function. Define $\operatorname{index}_X \colon \mathcal{G} \to \mathbb{N}$ to be the {\em index} in $X$: \[ \operatorname{index}_X(\sigma) = d_X(\alpha, \rho_k(\sigma)). \] \begin{remark} \label{Rem:ChoiceOfGeodesic} Suppose that $k'$ is a different geodesic connecting $\pi_X(\alpha)$ to $\pi_X(\beta)$ and $\operatorname{index}'_X$ is defined with respect to $k'$. Then \[ |\operatorname{index}_X(\sigma) - \operatorname{index}'_X(\sigma)| \leq 17\delta + 4 \] by \reflem{MovePoint} and \reflem{MoveGeodesic}. After permitting a small additive error, the index depends only on $\alpha, \beta, \sigma$ and not on the choice of geodesic $k$. \end{remark} \subsection{Back and sidetracking} Fix $\sigma, \tau \in \mathcal{G}$. We say $\sigma$ {\em precedes} $\tau$ {\em by at least} $K$ in $X$ if \[ \operatorname{index}_X(\sigma) + K \leq \operatorname{index}_X(\tau). \] We say $\sigma$ precedes $\tau$ {\em by at most} $K$ if the inequality is reversed. If $\sigma$ precedes $\tau$ then we say $\tau$ {\em succeeds} $\sigma$. Now take $\mathcal{P} = \{\sigma_i\}$ to be a path in $\mathcal{G}$ connecting $\alpha$ to $\beta$. Recall that we have made the simplifying assumption that $\sigma_i$ and $\sigma_{i+1}$ are disjoint. We formalize a pair of properties enjoyed by unparameterized quasi-geodesics. The path $\mathcal{P}$ {\em backtracks} at most $K$ if for every hole $X$ and all indices $i < j$ we find that $\sigma_j$ precedes $\sigma_i$ by at most $K$.
The path $\mathcal{P}$ {\em sidetracks} at most $K$ if for every hole $X$ and every index $i$ we find that \[ d_X(\sigma_i, \rho_k(\sigma_i)) \leq K, \] for some geodesic $k$ connecting a point of $\pi_X(\alpha)$ to a point of $\pi_X(\beta)$. \begin{remark} \label{Rem:ChoiceOfGeodesicTwo} As in Remark~\ref{Rem:ChoiceOfGeodesic}, allowing a small additive error makes irrelevant the choice of geodesic in the definition of sidetracking. We note that, if $\mathcal{P}$ has bounded sidetracking, one may freely use in calculation whichever of $\sigma_i$ or $\rho_k(\sigma_i)$ is more convenient. \end{remark} \subsection{Projection control} We say domains $X, Y \subset S$ {\em overlap} if $\partial X$ cuts $Y$ and $\partial Y$ cuts $X$. The following lemma, due to Behrstock~\cite[4.2.1]{Behrstock04}, is closely related to the notion of {\em time ordered} domains~\cite{MasurMinsky00}. An elementary proof is given in~\cite[Lemma~2.5]{Mangahas10}. \begin{lemma} \label{Lem:Zugzwang} There is a constant ${M_1} = {M_1}(S)$ with the following property. Suppose that $X, Y$ are overlapping non-simple domains. If $\gamma \in \mathcal{AC}(S)$ cuts both $X$ and $Y$ then either $d_X(\gamma, \partial Y) < {M_1}$ or $d_Y(\partial X, \gamma) < {M_1}$. \qed \end{lemma} We also require a more specialized version for the case where $X$ and $Y$ are nested. \begin{lemma} \label{Lem:ZugzwangTwo} There is a constant ${M_2} = {M_2}(S)$ with the following property. Suppose that $X \subset Y$ are nested non-simple domains. Fix $\alpha, \beta, \gamma \in \mathcal{AC}(S)$ that cut $X$. Fix $k = [\alpha', \beta'] \subset \mathcal{C}(Y)$, a geodesic connecting a point of $\pi_Y(\alpha)$ to a point of $\pi_Y(\beta)$. Assume that $d_X(\alpha, \beta) \geq M_0$, the constant given by \refthm{BoundedGeodesicImage}.
If $d_X(\alpha, \gamma) \geq {M_2}$ then $$\operatorname{index}_Y(\partial X) - 4 \leq \operatorname{index}_Y(\gamma).$$ Symmetrically, we have $$\operatorname{index}_Y(\gamma) \leq \operatorname{index}_Y(\partial X) + 4$$ if $d_X(\gamma, \beta) \geq {M_2}$. \qed \end{lemma} \subsection{Finding the midpoint of a side} Fix $A \geq 1$. Let $\mathcal{P}, \mathcal{Q}, \mathcal{R}$ be the sides of a triangle in $\mathcal{G}$ with vertices at $\alpha, \beta, \gamma$. We assume that each of $\mathcal{P}$, $\mathcal{Q}$, and $\mathcal{R}$ are $A$--unparameterized quasi-geodesics when projected to any hole. Recall that $M_0 = M_0(S)$, ${M_1} = {M_1}(S)$, and ${M_2} = {M_2}(S)$ are constants depending only on the topology of $S$. We may assume that if $T \subset S$ is an essential subsurface, then $M_0(S) > M_0(T)$. Now choose ${K_1} \geq \max \{ M_0, 4{M_1}, {M_2}, 8 \} + 8\delta$ sufficiently large so that any $A$--unparameterized quasi-geodesic in any hole backtracks and sidetracks at most ${K_1}$. \begin{claim} \label{Clm:PrecedesSucceedsExclusive} If $\sigma_i$ precedes $\gamma$ in $X$ and $\sigma_j$ succeeds $\gamma$ in $Y$, both by at least $2{K_1}$, then $i < j$. \end{claim} \begin{proof} To begin, as $X$ and $Y$ are holes and all holes interfere, we need not consider the possibility that $X \cap Y = \emptyset$. If $X = Y$ we immediately deduce that $$\operatorname{index}_X(\sigma_i) + 2{K_1} \leq \operatorname{index}_X(\gamma) \leq \operatorname{index}_X(\sigma_j) - 2{K_1}.$$ Thus $\operatorname{index}_X(\sigma_i) + 4{K_1} \leq \operatorname{index}_X(\sigma_j)$. Since $\mathcal{P}$ backtracks at most ${K_1}$ we have $i < j$, as desired. Suppose instead that $X \subset Y$. Since $\sigma_i$ precedes $\gamma$ in $X$ we immediately find $d_X(\alpha, \beta) \geq 2{K_1} \geq M_0$ and $d_X(\alpha, \gamma) \geq 2{K_1} - 2\delta \geq {M_2}$. Apply \reflem{ZugzwangTwo} to deduce $\operatorname{index}_Y(\partial X) - 4 \leq \operatorname{index}_Y(\gamma)$.
Since $\sigma_j$ succeeds $\gamma$ in $Y$ it follows that $\operatorname{index}_Y(\partial X) - 4 + 2{K_1} \leq \operatorname{index}_Y(\sigma_j)$. Again using the fact that $\sigma_i$ precedes $\gamma$ in $X$ we have that $d_X(\sigma_i, \beta) \geq {M_2}$. We deduce from \reflem{ZugzwangTwo} that $\operatorname{index}_Y(\sigma_i) \leq \operatorname{index}_Y(\partial X) + 4$. Thus $$\operatorname{index}_Y(\sigma_i) - 8 + 2{K_1} \leq \operatorname{index}_Y(\sigma_j).$$ Since $\mathcal{P}$ backtracks at most ${K_1}$ in $Y$ we again deduce that $i < j$. The case where $Y \subset X$ is similar. Suppose now that $X$ and $Y$ overlap. Applying \reflem{Zugzwang} and breaking symmetry, we may assume that $d_X(\gamma, \partial Y) < {M_1}$. Since $\sigma_i$ precedes $\gamma$ we have $\operatorname{index}_X(\gamma) \geq 2{K_1}$. \reflem{MovePoint} now implies that $\operatorname{index}_X(\partial Y) \geq 2{K_1} - {M_1} - 6\delta$. Thus, \[ d_X(\alpha, \partial Y) \geq 2{K_1} - {M_1} - 8\delta \geq {M_1} \] where the first inequality follows from \reflem{RightTriangle}. Applying \reflem{Zugzwang} again, we find that $d_Y(\alpha, \partial X) < {M_1}$. Now, since $\sigma_j$ succeeds $\gamma$ in $Y$, we deduce that $\operatorname{index}_Y(\sigma_j) \geq 2{K_1}$. So \reflem{RightTriangle} implies that $d_Y(\alpha, \sigma_j) \geq 2{K_1} - 2\delta$. The triangle inequality now gives \[ d_Y(\partial X, \sigma_j) \geq 2{K_1} - {M_1} - 2\delta \geq {M_1}. \] Applying \reflem{Zugzwang} one last time, we find that $d_X(\partial Y, \sigma_j) < {M_1}$. Thus $d_X(\gamma, \sigma_j) \leq 2{M_1}$. Finally, \reflem{MovePoint} implies that the difference in index (in $X$) between $\sigma_i$ and $\sigma_j$ is at least $2{K_1} - 2{M_1} - 6\delta$. Since this is greater than the backtracking constant, ${K_1}$, it follows that $i < j$. \end{proof} Let $\sigma_\alpha \in \mathcal{P}$ be the {\em last} vertex of $\mathcal{P}$ preceding $\gamma$ by at least $2{K_1}$ in some hole.
If no such vertex of $\mathcal{P}$ exists then take $\sigma_\alpha = \alpha$. \begin{claim} \label{Clm:LastNearCenter} For every hole $X$ and geodesic $h$ connecting $\pi_X(\alpha)$ to $\pi_X(\beta)$: \[ d_X(\sigma_\alpha, \rho_h(\gamma)) \leq 3K_1 + 6\delta + 3. \] \end{claim} \begin{proof} Since $\sigma_i$ and $\sigma_{i+1}$ are disjoint we have $d_X(\sigma_i, \sigma_{i+1}) \leq 3$ and so \reflem{MovePoint} implies that \[ |\operatorname{index}_X(\sigma_{i+1}) - \operatorname{index}_X(\sigma_i)| \leq 6\delta + 3. \] Since $\mathcal{P}$ is a path connecting $\alpha$ to $\beta$ the image $\rho_h(\mathcal{P})$ is $6\delta + 3$--dense in $h$. Thus, if $\operatorname{index}_X(\sigma_\alpha) + 2K_1 + 6\delta + 3 < \operatorname{index}_X(\gamma)$ then we have a contradiction to the definition of $\sigma_\alpha$. On the other hand, if $\operatorname{index}_X(\sigma_\alpha) \geq \operatorname{index}_X(\gamma) + 2K_1$ then $\sigma_\alpha$ precedes and succeeds $\gamma$ in $X$. This directly contradicts \refclm{PrecedesSucceedsExclusive}. We deduce that the difference in index between $\sigma_\alpha$ and $\gamma$ in $X$ is at most $2K_1 + 6\delta + 3$. Finally, as $\mathcal{P}$ sidetracks by at most $K_1$ we have \[ d_X(\sigma_\alpha, \rho_h(\gamma)) \leq 3K_1 + 6\delta + 3 \] as desired. \end{proof} We define $\sigma_\beta$ to be the first $\sigma_i$ to succeed $\gamma$ by at least $2K_1$ --- if no such vertex of $\mathcal{P}$ exists take $\sigma_\beta = \beta$. If $\alpha = \beta$ then $\sigma_\alpha = \sigma_\beta$. Otherwise, from Claim~\ref{Clm:PrecedesSucceedsExclusive}, we immediately deduce that $\sigma_\alpha$ comes before $\sigma_\beta$ in $\mathcal{P}$. A symmetric version of Claim~\ref{Clm:LastNearCenter} applies to $\sigma_\beta$: for every hole $X$ \[ d_X(\rho_h(\gamma), \sigma_\beta) \leq 3K_1 + 6\delta + 3.
\] \subsection{Another side of the triangle} Recall now that we are also given a path $\mathcal{R} = \{ \tau_i \}$ connecting $\alpha$ to $\gamma$ in $\mathcal{G}$. As before, $\mathcal{R}$ has bounded back and sidetracking. Thus we again find vertices $\tau_\alpha$ and $\tau_\gamma$, the last and first, respectively, to precede and succeed $\beta$ by at least $2K_1$. Again, this is defined in terms of the closest points projection of $\beta$ to a geodesic of the form $h = [\pi_X(\alpha), \pi_X(\gamma)]$. By Claim~\ref{Clm:LastNearCenter}, for every hole $X$, $\tau_\alpha$ and $\tau_\gamma$ are close to $\rho_h(\beta)$. By \reflem{CenterExists}, if $k = [\pi_X(\alpha), \pi_X(\beta)]$, then $d_X(\rho_k(\gamma), \rho_h(\beta)) \leq 6\delta$. We deduce: \begin{claim} \label{Clm:BodySmall} $d_X(\sigma_\alpha, \tau_\alpha) \leq 6K_1 + 18\delta + 6$. \qed \end{claim} This claim and \refclm{LastNearCenter} imply that the body of the triangle $\mathcal{P}\mathcal{Q}\mathcal{R}$ is bounded in size. We now show that the legs are narrow. \begin{claim} \label{Clm:NearbyVertex} There is a constant $N_2 = N_2(S)$ with the following property. For every $\sigma_i \leq \sigma_\alpha$ in $\mathcal{P}$ there is a $\tau_j \leq \tau_\alpha$ in $\mathcal{R}$ so that $$d_X(\sigma_i, \tau_j) \leq N_2$$ for every hole $X$. \end{claim} \begin{proof} We only sketch the proof, as the details are similar to our previous discussion. Fix $\sigma_i \leq \sigma_\alpha$. Suppose first that no vertex of $\mathcal{R}$ precedes $\sigma_i$ by at least $2K_1$ in any hole. So fix a hole $X$ and geodesics $k = [\pi_X(\alpha), \pi_X(\beta)]$ and $h = [\pi_X(\alpha), \pi_X(\gamma)]$. Then $\rho_h(\sigma_i)$ is within distance $2K_1$ of $\pi_X(\alpha)$.
Appealing to Claim~\ref{Clm:BodySmall}, bounded sidetracking, and hyperbolicity of $\mathcal{C}(X)$ we find that the initial segments \[ [\pi_X(\alpha), \rho_k(\sigma_\alpha)], \quad [\pi_X(\alpha), \rho_h(\tau_\alpha)] \] of $k$ and $h$ respectively must fellow travel. Because of bounded backtracking along $\mathcal{P}$, $\rho_k(\sigma_i)$ lies on, or at least near, this initial segment of $k$. Thus by \reflem{MoveGeodesic} $\rho_h(\sigma_i)$ is close to $\rho_k(\sigma_i)$ which in turn is close to $\pi_X(\sigma_i)$, because $\mathcal{P}$ has bounded sidetracking. In short, $d_X(\alpha, \sigma_i)$ is bounded for all holes $X$. Thus we may take $\tau_j = \tau_0 = \alpha$ and we are done. Now suppose that some vertex of $\mathcal{R}$ precedes $\sigma_i$ by at least $2K_1$ in some hole $X$. Take $\tau_j$ to be the last such vertex in $\mathcal{R}$. Following the proof of Claim~\ref{Clm:PrecedesSucceedsExclusive} shows that $\tau_j$ comes before $\tau_\alpha$ in $\mathcal{R}$. The argument now required to bound $d_X(\sigma_i, \tau_j)$ is essentially identical to the proof of Claim~\ref{Clm:LastNearCenter}. \end{proof} By the distance estimate, we find that there is a uniform neighborhood of $[\sigma_0, \sigma_\alpha] \subset \mathcal{P}$, taken in $\mathcal{G}$, which contains $[\tau_0, \tau_\alpha] \subset \mathcal{R}$. The slimness of $\mathcal{P}\mathcal{Q}\mathcal{R}$ follows directly. This completes the proof of Theorem~\ref{Thm:GoodPathsGiveSlimTriangles}. \qed \section{Coarsely computing Hempel distance} \label{Sec:HempelDistance} We now turn to our topological application. Recall that a {\em Heegaard splitting} is a triple $(S, V, W)$ consisting of a surface and two handlebodies where $V \cap W = \partial V = \partial W = S$. Hempel~\cite{Hempel01} defines the quantity \[ d_S(V, W) = \min \big\{ d_S(D,E) \mathbin{\mid} D \in \mathcal{D}(V), E \in \mathcal{D}(W) \big\} \] and calls it the {\em distance} of the splitting.
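As a purely schematic aside (not part of the argument), the distance of a splitting is simply a minimum over pairs of disks from the two disk sets. With a hypothetical oracle standing in for curve-complex distance, and disks represented abstractly, the definition reads:

```python
# Toy sketch of d_S(V, W) = min { d_S(D, E) : D in D(V), E in D(W) }.
# `dS` is a hypothetical oracle for distance in the curve complex C(S);
# the disk sets are given as finite lists of (boundaries of) disks.

def splitting_distance(disks_V, disks_W, dS):
    return min(dS(D, E) for D in disks_V for E in disks_W)

# Toy usage: "curves" as integers, dS as distance on a line.
d = splitting_distance([0, 3], [5, 9], dS=lambda a, b: abs(a - b))
```

Of course, the whole difficulty addressed below is that the disk sets are infinite and $d_S$ is itself hard to compute; the point of Theorem~\ref{Thm:CoarselyComputeDistance} is that a coarse version of this minimum is nonetheless algorithmic.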
Note that a splitting can be completely determined by giving a pair of cut systems: simplices $\mathbb{D} \subset \mathcal{D}(V)$, $\mathbb{E} \subset \mathcal{D}(W)$ where the corresponding disks cut the containing handlebody into a single three-ball. The triple $(S, \mathbb{D}, \mathbb{E})$ is a {\em Heegaard diagram}. The goal of this section is to prove: \begin{theorem} \label{Thm:CoarselyComputeDistance} There is a constant $R_1 = R_1(S)$ and an algorithm that, given a Heegaard diagram $(S, \mathbb{D}, \mathbb{E})$, computes a number $N$ so that $$|d_S(V, W) - N| \leq R_1.$$ \end{theorem} \noindent Let $\rho_V \colon \mathcal{C}(S) \to \mathcal{D}(V)$ be the closest points relation: \[ \rho_V(\alpha) = \big\{ D \in \mathcal{D}(V) \mathbin{\mid} \mbox{ for all $E \in \mathcal{D}(V)$, $d_S(\alpha, D) \leq d_S(\alpha, E)$ } \big\}. \] \noindent \refthm{CoarselyComputeDistance} follows from: \begin{theorem} \label{Thm:CoarselyComputeProjection} There is a constant $R_0 = R_0(V)$ and an algorithm that, given an essential curve $\alpha \subset S$ and a cut system $\mathbb{D} \subset \mathcal{D}(V)$, finds a disk $C \in \mathcal{D}(V)$ so that \[ d_S(C, \rho_V(\alpha)) \leq R_0. \] \end{theorem} \begin{proof}[Proof of \refthm{CoarselyComputeDistance}] Suppose that $(S, \mathbb{D}, \mathbb{E})$ is a Heegaard diagram. Using Theorem~\ref{Thm:CoarselyComputeProjection} we find a disk $D$ within distance $R_0$ of $\rho_V(\mathbb{E})$. Again using Theorem~\ref{Thm:CoarselyComputeProjection} we find a disk $E$ within distance $R_0$ of $\rho_W(D)$. Notice that $E$ is defined using $D$ and not the cut system $\mathbb{D}$. Since computing distance between fixed vertices in the curve complex is algorithmic~\cite{Leasure02, Shackleton04} we may compute $d_S(D, E)$. By the hyperbolicity of $\mathcal{C}(S)$ (\refthm{C(S)IsHyperbolic}) and by the quasi-convexity of the disk set (\refthm{DiskComplexConvex}) this is the desired estimate. 
\end{proof} Very briefly, the algorithm asked for in \refthm{CoarselyComputeProjection} searches an $R_2$--neighborhood in $\mathcal{M}(S)$ about a splitting sequence from $\mathbb{D}$ to $\alpha$. Here are the details. \begin{algorithm} \label{Alg:Projection} We are given $\alpha \in \mathcal{C}(S)$ and a cut system $\mathbb{D} \subset \mathcal{D}(V)$. Build a train track $\tau$ in $S = \partial V$ as follows: make $\mathbb{D}$ and $\alpha$ tight. Place one switch on every disk $D \in \mathbb{D}$. Homotope all intersections of $\alpha$ with $D$ to run through the switch. Collapse bigons of $\alpha$ inside of $S {\smallsetminus} \mathbb{D}$ to create the branches. Now make $\tau$ a generic track by combing away from $\mathbb{D}$~\cite[Proposition~1.4.1]{PennerHarer92}. Note that $\alpha$ is carried by $\tau$ and so gives a transverse measure $w$. Build a splitting sequence of measured tracks $\{ \tau_n \}_{n = 0}^N$ where $\tau_0 = \tau$, $\tau_N = \alpha$, and $\tau_{n+1}$ is obtained by splitting the largest switch of $\tau_n$ (as determined by the measure imposed by $\alpha$). Let $\mu_n = V(\tau_n)$ be the vertices of $\tau_n$. For each filling marking $\mu_n$ list all markings in the ball $B(\mu_n, R_2) \subset \mathcal{M}(S)$, where $R_2$ is given by \reflem{DetectingNearbyDisks} below. (If $\mu_0$ does not fill $S$ then output $\mathbb{D}$ and halt.) For every marking $\nu$ so produced we use Whitehead's algorithm (see \reflem{Whitehead}) to try and find a disk meeting some curve $\gamma \in \nu$ at most twice. For every disk $C$ found compute $d_S(\alpha, C)$~\cite{Leasure02, Shackleton04}. Finally, output any disk which minimizes this distance, among all disks considered, and halt. 
\end{algorithm} We use the following form of Whitehead's algorithm~\cite{Berge08}: \begin{lemma} \label{Lem:Whitehead} There is an algorithm that, given a cut system $\mathbb{D} \subset V$ and a curve $\gamma \subset S$, outputs a disk $C \subset V$ so that $\iota(\gamma, \partial C) = \min \{ \iota(\gamma, \partial E) \mathbin{\mid} E \in \mathcal{D}(V) \}$. \qed \end{lemma} We now discuss the constant $R_2$. We begin by noticing that the track $\tau_n$ is transversely recurrent because $\alpha$ is fully carried and $\mathbb{D}$ is fully dual. Thus by \refthm{TrainTrackUnparamGeodesic} and by Morse stability, for any essential $Y \subset S$ there is a stability constant $\operatorname{Stab}$ for the path $\pi_Y(\mu_n)$. Let $\delta$ be the hyperbolicity constant for $\mathcal{C}(S)$ (\refthm{C(S)IsHyperbolic}) and let $Q$ be the quasi-convexity constant for $\mathcal{D}(V) \subset \mathcal{C}(S)$ (\refthm{DiskComplexConvex}). Since $\iota(\mathbb{D}, \mu_0)$ is bounded we will, at the cost of an additive error, identify their images in $\mathcal{C}(S)$. Now, for every $n$ pick some $E_n \in \rho_V(\mu_n)$. \begin{lemma} \label{Lem:DetectingNearbyDisks} There is a constant $R_2$ with the following property. Suppose that $n < m$, $d_S(\mu_n, E_n), d_S(\mu_m, E_m) \leq \operatorname{Stab} + \delta + Q$, and $d_S(\mu_n, \mu_m) \geq 2(\operatorname{Stab} + \delta + Q) + 5$. Then there is a marking $\nu \in B(\mu_n, R_2)$ and a curve $\gamma \in \nu$ so that either: \begin{itemize} \item $\gamma$ bounds a disk in $V$, \item $\gamma \subset \partial Z$, where $Z$ is a non-hole, or \item $\gamma \subset \partial Z$, where $Z$ is a large hole. \end{itemize} \end{lemma} \begin{proof}[Proof of \reflem{DetectingNearbyDisks}] Choose points $\sigma, \sigma'$ in the thick part of $\mathcal{T}(S)$ so that all curves of $\mu_n$ have bounded length in $\sigma$ and so that $E_n$ has length less than the Margulis constant in $\sigma'$.
As in \refsec{BackgroundTeich} there is a Teichm\"uller geodesic and associated markings $\{ \nu_k \}_{k = 0}^K$ so that $d_\mathcal{M}(\nu_0, \mu_n)$ is bounded and $E_n \in \operatorname{base}(\nu_K)$. We say a hole $X \subset S$ is {\em small} if $\operatorname{diam}_X(\mathcal{D}(V)) < 61$. \begin{proofclaim} There is a constant $R_2'$ so that for any small hole $X$ we have $d_X(\mu_n, \nu_K) < R_2'$. \end{proofclaim} \begin{proof} If $d_X(\mu_n, \nu_K) \leq M_0$ then we are done. If the distance is greater than $M_0$ then \refthm{BoundedGeodesicImage} gives a vertex of the $\mathcal{C}(S)$--geodesic connecting $\mu_n$ to $E_n$ with distance at most one from $\partial X$. It follows from the triangle inequality that every vertex of the $\mathcal{C}(S)$--geodesic connecting $\mu_m$ to $E_m$ cuts $X$. Another application of \refthm{BoundedGeodesicImage} gives \[ d_X(\mu_m, E_m) < M_0. \] Since $X$ is small $d_X(E_m, \mathbb{D}), d_X(E_n, \mathbb{D}) \leq 60$. Since $\iota(\nu_K, E_n) = 2$ the distance $d_X(\nu_K, E_n)$ is bounded. Finally, because $p \mapsto \pi_X(\mu_p)$ is an $A$--unparameterized quasi-geodesic in $\mathcal{C}(X)$ it follows that $d_X(\mathbb{D}, \mu_n)$ is also bounded and the claim is proved. \end{proof} Now consider all strict subsurfaces $Y$ so that \[ d_Y(\mu_n, \nu_K) \geq R_2'. \] None of these are small holes, by the claim above. If there are no such surfaces then \refthm{MarkingGraphDistanceEstimate} bounds $d_\mathcal{M}(\mu_n, \nu_K)$: taking the cutoff constant larger than \[ \max \{ R_2', C_0, \operatorname{Stab} + \delta + Q \} \] ensures that all terms on the right-hand side vanish. In this case the additive error in \refthm{MarkingGraphDistanceEstimate} is the desired constant $R_2$ and the lemma is proved. If there are such surfaces then choose one, say $Z$, that minimizes $\ell = \min J_Z$. Thus $d_Y(\mu_n, \nu_\ell) < C_3$ for all strict non-holes and all strict large holes $Y$.
Since $d_S(\mu_n, E_n) \leq \operatorname{Stab} + \delta + Q$ and $\{ \nu_k \}$ is an unparameterized quasi-geodesic \cite[Theorem~6.1]{Rafi10} we find that $d_S(\mu_n, \nu_\ell)$ is uniformly bounded. The claim above bounds distances in small holes. As before we find a sufficiently large cutoff so that all terms on the right-hand side of \refthm{MarkingGraphDistanceEstimate} vanish. Again the additive error of \refthm{MarkingGraphDistanceEstimate} provides the constant $R_2$. Since $\partial Z \subset \operatorname{base}(\nu_\ell)$ the lemma is proved. \end{proof} To prove the correctness of \refalg{Projection} it suffices to show that the disk produced is close to $\rho_V(\alpha)$. Let $m$ be the largest index so that for all $n \leq m$ we have \[ d_S(\mu_n, E_n) \leq \operatorname{Stab} + \delta + Q. \] It follows that $\mu_{m+1}$ lies within distance $\operatorname{Stab} + \delta$ of the geodesic $[\alpha, \rho_V(\alpha)]$. Recall that $d_S(\mu_n, \mu_{n+1}) \leq C_1$ for any value of $n$. A shortcut argument shows that \[ d_S(\mu_m, \rho_V(\alpha)) \leq 2C_1 + 3\operatorname{Stab} + 3\delta + Q. \] Let $n \leq m$ be the largest index so that \[ 2(\operatorname{Stab} + \delta + Q) + 5 \leq d_S(\mu_n, \mu_m). \] If no such $n$ exists then take $n = 0$. Now, \reflem{DetectingNearbyDisks} implies that there is a disk $C$ with $d_S(C, \mu_n) \leq 4R_2$ and this disk is found during the running of \refalg{Projection}. It follows from the above inequalities that \[ d_S(C, \alpha) \leq 4R_2 + 5\operatorname{Stab} + 5\delta + 3Q + 5 + 2C_1 + d_S(\alpha, \rho_V(\alpha)). \] So the disk $C'$, output by the algorithm, is at least this close to $\alpha$ in $\mathcal{C}(S)$. Examining the triangle with vertices $\alpha, \rho_V(\alpha), C'$ and using a final short-cut argument gives \[ d_S(C', \rho_V(\alpha)) \leq 4R_2 + 5\operatorname{Stab} + 9\delta + 5Q + 5 + 2C_1. \] This completes the proof of \refthm{CoarselyComputeProjection}.
\qed \end{document}
\begin{document} \newcommand{\Vol}[1]{\mathrm{Vol}\left(#1\right)} \newcommand{\B}[4]{B_{\left(#1,#2\right)}\left(#3,#4\right)} \newcommand{\sI}[2]{\mathcal{I}\left(#1,#2 \right)} \newcommand{\Det}[1]{\det_{#1\times #1}} \newcommand{\varsigmat}{\widetilde{\varsigma}} \newcommand{\dil}[2]{#1^{\left(#2\right)}} \newcommand{\dilp}[2]{#1^{\left(#2\right)_{p}}} \newcommand{\Span}[1]{\mathrm{span}\left\{ #1 \right\}} \newcommand{\dspan}[1]{\dim \Span{#1}} \newcommand{\ad}[1]{\mathrm{ad}\left( #1 \right)} \newcommand{\LtOpN}[1]{\left\|#1\right\|_{L^2\rightarrow L^2}} \newcommand{\LpOpN}[2]{\left\|#2\right\|_{L^{#1}\rightarrow L^{#1}}} \newcommand{\LpN}[2]{\left\|#2\right\|_{L^{#1}}} \newcommand{\denum}[2]{#1_{#2}} \newcommand{\diam}[1]{ {\mathrm{diam}}\left\{#1\right\} } \newcommand{\LplqOpN}[3]{\left\| #3 \right\|_{L^{#1}\left(\ell^{#2}\left({\mathbb{N}}^{\nu} \right) \right)\rightarrow L^{#1}\left( \ell^{#2 }\left( {\mathbb{N}}^{\nu} \right) \right) }} \newcommand{\LplqN}[3]{\left\| #3 \right\|_{L^{#1}\left(\ell^{#2 }\left({\mathbb{N}}^{\nu} \right) \right) }} \newcommand{\Lpp}[2]{L^{#1}\left(#2\right)} \newcommand{\Lppn}[3]{\left\|#3\right\|_{\Lpp{#1}{#2}}} \newcommand{\ip}[2]{\left< #1, #2 \right>} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{conj}[thm]{Conjecture} \theoremstyle{remark} \newtheorem{rmk}[thm]{Remark} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \theoremstyle{remark} \newtheorem{example}[thm]{Example} \numberwithin{equation}{section} \title{Multi-parameter singular Radon transforms II: the $L^p$ theory} \author{Elias M.
Stein\footnote{Partially supported by NSF DMS-0901040.} and Brian Street\footnote{Partially supported by NSF DMS-0802587.}} \date{} \maketitle \begin{abstract} The purpose of this paper is to study the $L^p$ boundedness of operators of the form \[ f\mapsto \psi(x) \int f(\gamma_t(x))K(t)\: dt, \] where $\gamma_t(x)$ is a $C^\infty$ function defined on a neighborhood of the origin in $(t,x)\in {\mathbb{R}}^N\times {\mathbb{R}}^n$, satisfying $\gamma_0(x)\equiv x$, $\psi$ is a $C^\infty$ cutoff function supported on a small neighborhood of $0\in {\mathbb{R}}^n$, and $K$ is a ``multi-parameter singular kernel'' supported on a small neighborhood of $0\in {\mathbb{R}}^N$. We also study associated maximal operators. The goal is, given an appropriate class of kernels $K$, to give conditions on $\gamma$ such that every operator of the above form is bounded on $L^p$ ($1<p<\infty$). The case when $K$ is a Calder\'on-Zygmund kernel was studied by Christ, Nagel, Stein, and Wainger; we generalize their work to the case when $K$ is (for instance) given by a ``product kernel.'' Even when $K$ is a Calder\'on-Zygmund kernel, our methods yield some new results. This is the second paper in a three-part series. The first paper deals with the case $p=2$, while the third paper deals with the special case when $\gamma$ is real analytic. \end{abstract} \section{Introduction} The goal of this paper is to prove the $L^p$ boundedness of (a special case of) the multi-parameter singular Radon transforms introduced in \cite{StreetMultiParameterSingRadonLt}.
We consider operators of the form \begin{equation}\label{EqnIntroOp} T\left( f\right) \left( x\right) = \psi\left( x\right) \int f\left( \gamma_t\left( x\right) \right) K\left( t\right) \: dt, \end{equation} where $\psi$ is a $C_0^\infty$ cutoff function (supported near, say, $0\in {\mathbb{R}}^n$), $\gamma_t\left( x\right) =\gamma\left( t,x\right)$ is a $C^\infty$ function defined on a neighborhood of the origin in ${\mathbb{R}}^N\times {\mathbb{R}}^n$ satisfying $\gamma_0\left( x\right) \equiv x$, and $K$ is a ``multi-parameter'' distribution kernel, supported near $0$ in ${\mathbb{R}}^N$. For instance, one could take $K$ to be a ``product kernel'' supported near $0$.\footnote{Our main theorem applies to classes of kernels other than product kernels.} To define this notion, suppose we have decomposed ${\mathbb{R}}^N={\mathbb{R}}^{N_1}\times \cdots\times {\mathbb{R}}^{N_\nu}$, and write $t=\left( t_1,\ldots, t_{\nu}\right)\in {\mathbb{R}}^{N_1}\times \cdots\times {\mathbb{R}}^{N_\nu}$. A product kernel satisfies \begin{equation*} \left|\partial_{t_1}^{\alpha_1}\cdots \partial_{t_\nu}^{\alpha_\nu} K\left( t\right)\right|\lesssim \left|t_1\right|^{-N_1-\left|\alpha_1\right|}\cdots \left|t_{\nu}\right|^{-N_{\nu}-\left|\alpha_\nu\right|}, \end{equation*} along with certain ``cancellation conditions.''\footnote{The simplest example of a product kernel is given by $K\left(t_1,\ldots,t_\nu\right)=K_1\left(t_1\right)\otimes\cdots\otimes K_\nu\left(t_\nu\right)$, where $K_1,\ldots, K_\nu$ are standard Calder\'on-Zygmund kernels. That is, $K_j$ satisfies $\left|\partial_{t_j}^{\alpha_j} K_j\left(t_j\right)\right|\lesssim \left|t_j\right|^{-N_j-\left|\alpha_j\right|}$, again along with certain ``cancellation conditions.'' When $\nu=1$, the class of product kernels is precisely the class of Calder\'on-Zygmund kernels. See Section 16 of \cite{StreetMultiParameterSingRadonLt} for the statement of the cancellation conditions.
We do not make them precise in this paper, since we will be working with more general kernels $K$.} The goal is to develop conditions on $\gamma$ such that $T$ is bounded on $L^p$ ($1<p<\infty$). In addition, we will prove (under the same conditions on $\gamma$) $L^p$ boundedness ($1<p\leq \infty$) for the corresponding maximal operator, \begin{equation*} \mathcal{M} f\left( x\right) = \psi\left(x\right) \sup_{0<\delta_1,\ldots,\delta_\nu\leq a}\frac{1}{\delta_1^{N_1} \delta_2^{N_2}\cdots\delta_\nu^{N_\nu} } \int_{\left|t_\mu\right|\leq \delta_\mu} \left|f\left( \gamma_t\left( x\right)\right)\right|\: dt_1\cdots dt_\nu, \end{equation*} where $a>0$ is some small number depending on $\gamma$. In fact, the $L^p$ boundedness of $\mathcal{M}$ will be a step towards proving the $L^p$ boundedness of $T$. This paper is the second in a three-part series. The first paper in the series \cite{StreetMultiParameterSingRadonLt} dealt with the $L^2$ theory, and applied to a larger class of kernels than the results in this paper do (see Section \ref{SectionKernels} for a discussion). However, all of the examples we have in mind (and in particular all of the examples discussed in \cite{StreetMultiParameterSingRadonLt}) do fall under the theory discussed in this paper. An $L^2$ result from \cite{StreetMultiParameterSingRadonLt} will serve as one of the main technical lemmas of this paper. The third paper in the series \cite{StreetMultiParameterSingRadonAnal} deals with the special case when $\gamma$ is assumed to be real-analytic. In this case many of the assumptions from this paper take a much simpler form, and some of our results can be improved. See \cite{SteinStreetMultiparameterSingularRadonTransformsAnnounce} for an overview of the series. For a more detailed introduction to the operators defined in this paper, we refer the reader to \cite{StreetMultiParameterSingRadonLt}, which discusses special motivating cases, and gives a number of examples.
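To fix ideas, the truncated averages behind $\mathcal{M}$ can be simulated numerically in a toy case. The sketch below is ours, not from the paper: it assumes $\nu = 2$, $N_1 = N_2 = 1$, and the illustrative curve $\gamma_t(x) = x + t_1 + t_2$, and it normalizes by the full measure of the box, which differs from the normalization above only by the harmless constant $2^\nu$.

```python
# Toy numerical illustration (all names hypothetical) of the truncated averages
# behind the maximal operator M, with nu = 2, N_1 = N_2 = 1, and the sample
# curve gamma_t(x) = x + t_1 + t_2.

def average(f, x, d1, d2, steps=50):
    """Midpoint-rule approximation of the average of |f(gamma_t(x))| over
    the box |t_1| <= d1, |t_2| <= d2, normalized by the box's full measure."""
    h1, h2 = 2.0 * d1 / steps, 2.0 * d2 / steps
    total = 0.0
    for i in range(steps):
        for j in range(steps):
            t1 = -d1 + (i + 0.5) * h1
            t2 = -d2 + (j + 0.5) * h2
            total += abs(f(x + t1 + t2)) * h1 * h2
    return total / (4.0 * d1 * d2)

def maximal(f, x, delta_grid):
    """Approximate the sup over (delta_1, delta_2) by a max over a finite grid."""
    return max(average(f, x, d1, d2) for d1 in delta_grid for d2 in delta_grid)
```

With this normalization the averages of a function bounded by $1$ are themselves bounded by $1$; the analytic content of the theorems, of course, is the $L^p$ bound, not this trivial pointwise one.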
\begin{rmk} The results in this paper generalize the $L^p$ boundedness results from the work (in the single-parameter setting, when $K$ is a Calder\'on-Zygmund kernel) of Christ, Nagel, Stein, and Wainger \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}. In fact, as discussed in \cite{StreetMultiParameterSingRadonLt}, the results in this paper generalize the $L^p$ boundedness results in \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms} even if one considers only the single parameter setting.\footnote{For instance, in the single parameter setting if $K\left(t\right)$ is a Calder\'on-Zygmund kernel supported near $t=0$, $\gamma_t\left(x\right)$ is a real analytic function in both variables, defined on a neighborhood of $\left(0,0\right)\in {\mathbb{R}}^N\times {\mathbb{R}}^n$ satisfying $\gamma_0\left(x\right)\equiv x$, and $\psi\left(x\right)\in C_0^\infty$ is supported near $x=0$, then it follows from the results in this paper that the operator $T$ given by \eqref{EqnIntroOp} is bounded on $L^p$ ($1<p<\infty$) under no additional hypotheses--see \cite{StreetMultiParameterSingRadonAnal} for the full details on this. This result is beyond the methods of \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}.} One major difficulty that this paper deals with, which did not appear in \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}, is that (unlike the single-parameter setting) there is no relevant Calder\'on-Zygmund theory to fall back on. At least not {\it a priori}. See Remark \ref{RmkNoCZTheory} and Section \ref{SectionSingularIntegrals} for further details. What can be used instead is the idea of controlling operators by square functions; a procedure that has appeared a number of times before. See, for instance, \cite{FeffermanSteinSingularIntegralsOnProductSpaces,NagelSteinOnTheProductTheoryOfSingularIntegrals}. 
\end{rmk} \subsection{Informal statement of the main results} In this section, we informally state the special case of our main results when $K\left(t_1,\ldots, t_\nu\right)$ is a product kernel relative to the decomposition ${\mathbb{R}}^N={\mathbb{R}}^{N_1}\times \cdots\times {\mathbb{R}}^{N_\nu}$ (see the introduction for this notion). We suppose we are given a $C^\infty$ function, $\gamma\left(t,x\right)=\gamma_t\left(x\right)\in {\mathbb{R}}^n$ defined on a small neighborhood of the origin in $\left(t,x\right)\in {\mathbb{R}}^N\times {\mathbb{R}}^n$, satisfying $\gamma_0\left(x\right)\equiv x$. For $t$ sufficiently small, $\gamma_t$ (thought of as a function in the $x$-variable) is a diffeomorphism onto its image. Thus, it makes sense to write $\gamma_t^{-1}$, the inverse mapping. We define the vector field \begin{equation*} W\left(t,x\right)= \frac{d}{d\epsilon}\bigg|_{\epsilon=1} \gamma_{\epsilon t}\circ \gamma_t^{-1} \left(x\right)\in T_x {\mathbb{R}}^n. \end{equation*} Note that $W\left(0,x\right)\equiv 0$. For a collection of vector fields $\mathcal{V}$, let $\mathcal{D}\left(\mathcal{V}\right)$ denote the involutive distribution generated by $\mathcal{V}$. I.e., the smallest $C^\infty$ module containing $\mathcal{V}$ and such that if $X,Y\in \mathcal{D}\left(\mathcal{V}\right)$ then $\left[X,Y\right]\in \mathcal{D}\left(\mathcal{V}\right)$. For a multi-index $\alpha\in {\mathbb{N}}^N$, write $\alpha=\left(\alpha_1,\ldots, \alpha_\nu\right)$, with $\alpha_\mu\in {\mathbb{N}}^{N_\mu}$. Decompose $W$ into a Taylor series in the $t$ variable, \begin{equation*} W\left(t,x\right)\sim \sum_{\alpha\ne 0} t^{\alpha}X_\alpha. \end{equation*} We call $\alpha=\left(\alpha_1,\ldots, \alpha_\nu\right)\in {\mathbb{N}}^N$ a pure power if $\alpha_\mu\ne 0$ for precisely one $\mu$. Otherwise, we call it a non-pure power.
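The pure/non-pure dichotomy is purely combinatorial: split $\alpha$ into its blocks $\alpha_1, \ldots, \alpha_\nu$ and count how many blocks are nonzero. As a quick illustrative sketch (the helper names are ours, not from the text):

```python
# Split alpha in N^N into blocks along N = N_1 + ... + N_nu, and test whether
# alpha is a "pure power", i.e. alpha_mu != 0 for precisely one mu.

def split_multiindex(alpha, block_sizes):
    """Return the blocks (alpha_1, ..., alpha_nu) of alpha."""
    blocks, pos = [], 0
    for n_mu in block_sizes:
        blocks.append(tuple(alpha[pos:pos + n_mu]))
        pos += n_mu
    return blocks

def is_pure_power(alpha, block_sizes):
    nonzero_blocks = sum(1 for b in split_multiindex(alpha, block_sizes) if any(b))
    return nonzero_blocks == 1
```

For instance, with ${\mathbb{R}}^3 = {\mathbb{R}}^2 \times {\mathbb{R}}^1$ (block sizes $(2,1)$), the index $(1,1,0)$ is a pure power while $(1,0,2)$ is not; note that $\alpha = 0$ is excluded from the Taylor sum above and is not a pure power.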
We assume that the following conditions hold ``uniformly'' for $\delta=\left(\delta_1,\ldots, \delta_\nu\right)\in \left(0,1\right]^\nu$, though we defer making this notion of uniform precise to Section \ref{SectionCurves}. \begin{itemize} \item For every $\delta\in \left(0,1\right]^\nu$, \begin{equation*} \mathcal{D}_{\delta}:=\mathcal{D}\left(\left\{\delta_1^{\left|\alpha_1\right|}\cdots\delta_\nu^{\left|\alpha_\nu\right|}X_{\alpha_1,\ldots,\alpha_\nu} : \left(\alpha_1,\ldots,\alpha_\nu\right)\text{ is a pure power}\right\}\right) \end{equation*} is finitely generated as a $C^\infty$ module, uniformly in $\delta$. \item For every $\delta\in \left(0,1\right]^\nu$, \begin{equation}\label{EqnIntroWInsD} W\left(\delta_1 t_1,\ldots, \delta_\nu t_\nu\right)\in \mathcal{D}_\delta, \end{equation} uniformly in $\delta$. \end{itemize} \begin{rmk} If it were not for the ``uniform'' aspect of the above assumptions, they would be independent of $\delta$. Thus it is the uniform part, which we have not made precise, that is the heart of the above assumptions. \end{rmk} Our main theorems are: \begin{thm} Under the above assumptions (which are made precise in Section \ref{SectionCurves}), the operator given by \begin{equation*} f\mapsto \psi\left(x\right) \int f\left(\gamma_t\left(x\right)\right) K\left(t\right)\: dt, \end{equation*} is bounded on $L^p$ ($1<p<\infty$), for every product kernel $K\left(t_1,\ldots, t_\nu\right)$ with sufficiently small support, provided $\psi$ has sufficiently small support. \end{thm} \begin{thm} Under the same assumptions, the maximal operator given by \begin{equation*} f\mapsto \psi\left(x\right) \sup_{0<\delta_1,\ldots, \delta_\nu \ll 1} \int_{\left|t\right|<1} \left| f \left(\gamma_{\delta_1 t_1,\ldots, \delta_\nu t_\nu}\left(x\right) \right) \right|\: dt \end{equation*} is bounded on $L^p$ ($1<p\leq \infty$).
\end{thm} The precise statement of our main result (where more general kernels are considered) can be found in Theorems \ref{ThmMainThmFirstPass}, \ref{ThmMainThmSecondPass}, and \ref{ThmMainMaxThm}. \begin{rmk} In \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}, the main conditions were stated in terms of slightly different vector fields. Namely, it was shown that $\gamma$ could be written in the form \begin{equation*} \gamma_t\left(x\right)\sim \exp\left(\sum_{\alpha\ne 0} t^{\alpha} \widehat{X}_\alpha\right)x, \end{equation*} where $\widehat{X}_\alpha$ are $C^\infty$ vector fields and the above notation means $\gamma_t\left(x\right) = \exp\left(\sum_{0<\left|\alpha\right|<L} t^{\alpha} \widehat{X}_\alpha\right)x+O\left(\left|t\right|^L\right)$, $\forall L$. One can easily solve for the $\widehat{X}_\alpha$ in terms of the $X_\alpha$ and {\it vice versa}. In fact, one can equivalently state our theorems by replacing $X_\alpha$ with $\widehat{X}_\alpha$, throughout. Nevertheless, we still need to introduce the vector field $W$, in order to state \eqref{EqnIntroWInsD}. The connection between the $\widehat{X}_\alpha$ and the $X_\alpha$ is spelled out in more detail in Section 8 of \cite{StreetMultiParameterSingRadonAnal}. See also Section 9 of \cite{StreetMultiParameterSingRadonLt}. \end{rmk} \begin{rmk} For several examples of $\gamma$ which satisfy our hypotheses and several examples which do not, the reader is referred to Section 17 of \cite{StreetMultiParameterSingRadonLt}. \end{rmk} \subsection{Notation} Given a vector $v=\left(v_1,\ldots,v_n\right)\in {\mathbb{R}}^n$, we write $\left|v\right|$ for $\left( \sum_j \left|v_j\right|^2 \right)^{\frac{1}{2}}$, and we write $\left|v\right|_\infty$ for $\sup_j \left|v_j\right|$. $B^{n}\left( \eta\right)$ will denote the ball of radius $\eta>0$ in the $\left|\cdot\right|$ norm. For two numbers $a,b\in {\mathbb{R}}$ we write $a\vee b$ for the maximum of $a$ and $b$ and $a\wedge b$ for the minimum.
If instead, $a=\left( a_1,\ldots, a_n\right), b=\left( b_1,\ldots, b_n\right)\in {\mathbb{R}}^n$, we write $a\vee b$ (respectively, $a\rightedge b$) for $\left(a_1\vee b_1,\ldots, a_n \vee b_n \right)$ (respectively, $\left(a_1\rightedge b_1,\ldots, a_n \rightedge b_n\right)$). For a vectors $\delta=\left( \delta_1,\ldots, \delta_\nu\right), d=\left( d_1,\ldots, d_\nu\right)\in {\mathbb{R}}^\nu$, we define $\delta^d$ by the standard multi-index notation. I.e., $\delta^d=\prod_{\mu=1}^{\nu} \delta_\mu^{d_\mu}$. Also we will write $2^d = \left(2^{d_1},\ldots,2^{d_\nu}\right)$. Given a, possibly arbitrary, set $U\subseteq {\mathbb{R}}^n$ and a continuous function $f$ defined on a neighborhood of $U$, we write $${\mathbb{C}}jN{f}{j}{U} = \sum_{\left|\alpha\right|\leq j}\sup_{x\in U} \left|\partial_x^\alpha f\left( x\right)\right|,$$ and if we state that ${\mathbb{C}}jN{f}{j}{U}$ is finite, we mean that the partial derivatives up to order $j$ of $f$ exist on $U$, are continuous, and the above norm is finite. If $f$ is replaced by a vector field $Y=\sum_k a_k\left( x\right)\partial_{x_k}$, then we write, $${\mathbb{C}}jN{Y}{j}{U} = \sum_{k}{\mathbb{C}}jN{a_k}{j}{U}.$$ Given a matrix $A$, we write $\left\| A\right\|$ for the usual operator norm. Given two integers $1\leq m\leq n$, we let $\sI{m}{n}$ denote the set of all lists of integers $\left( i_1,\ldots, i_m\right)$ such that $$1\leq i_1<i_2<\cdots<i_m\leq n.$$ Furthermore, suppose $A$ is an $n\times q$ matrix, and suppose $1\leq n_0\leq n\rightedge q$. For $I\in \sI{n_0}{n}$, $J\in \sI{n_0}{q}$ we let $A_{I,J}$ denote the $n_0\times n_0$ matrix given by taking the rows from $A$ which are listed in $I$ and the columns from $A$ which are listed in $J$. We define $$\Det{n_0} A = \left( \det A_{I,J} \right)_{\substack{I\in \sI{n_0}{n}\\J\in \sI{n_0}{q}}},$$ so that, in particular, $\Det{n_0} A$ is a {\it vector} (it will not be important to us in which order the coordinates are arranged). 
$\Det{n_0} A$ comes up when one changes variables. Indeed, suppose $\Phi$ is a $C^1$ diffeomorphism from an open subset $U\subset {\mathbb{R}}^{n_0}$ mapping to an $n_0$ dimensional submanifold of ${\mathbb{R}}^n$, where this submanifold is given the induced Lebesgue measure $dx$. Then, we have $$\int_{\Phi\left( U\right)} f\left( x\right) \: dx = \int_U f\left( \Phi\left( t\right)\right) \left|\Det{n_0} d\Phi\left( t\right)\right|\: dt.$$ If $A=\left( A_1,\ldots, A_q\right)$ is a list of, possibly non-commuting, operators, we will use ordered multi-index notation to define $A^\alpha$, where $\alpha$ is a list of numbers $1,\ldots, q$. $\left|\alpha\right|$ will denote the length of the list. For instance, if $\alpha=\left( 1,4,4,2,1\right)$, then $\left| \alpha\right|=5$ and $A^\alpha = A_1A_4A_4A_2A_1$. Thus, if $A_1,\ldots A_q$ are vector fields, then $A^\alpha$ is an $\left|\alpha\right|$ order partial differential operator. If $f:{\mathbb{R}}^{n}\rightarrow {\mathbb{R}}^m$ is a map, then we write, \begin{equation*} d f\left( x\right) \left( \frac{\partial}{\partial x_j}\right), \end{equation*} for the differential of $f$ at the point $x$ applied to the vector field $\frac{\partial}{\partial x_j}$. If $f$ is a function of two variables, $f\left( t,x\right):{\mathbb{R}}^N\times {\mathbb{R}}^n\rightarrow {\mathbb{R}}^m$, and we wish to view $d f$ as a linear transformation acting on the vector space spanned by $\frac{\partial}{\partial_{t_j}}$ ($1\leq j\leq N$), then we instead write, \begin{equation*} \frac{\partial f}{\partial t} \left( t,x\right) \end{equation*} to denote this linear transformation. Hence, it makes sense to write, \begin{equation*} \det_{n_0\times n_0} \frac{\partial f}{\partial t} \left( t,x\right), \end{equation*} where $n_0\leq m\rightedge N$. If $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$, we write $\psi_1\prec \psi_2$ to denote that $\psi_2\equiv 1$ on a neighborhood of the support of $\psi_1$. 
We will have occasion to use vector valued functions. We denote by $L^p\left( \ell^q\left( {\mathbb{N}}^{\nu} \right)\right)$ the set of sequences of measurable functions $\left\{f_j\right\}_{j\in{\mathbb{N}}^\nu}$ such that, $$\LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu} \left|f_j \right|^q \right)^{1/q}}<\infty.$$ Finally, we will devote a good deal of notation to multi-parameter Carnot-Carath\'eodory geometry. See Sections \ref{SectionCCGeom} and \ref{SectionCCGeomII}. \section{Kernels}\label{SectionKernels} In this section, we will discuss the classes of kernels $K\left( t\right)$ for which we will study operators of the form \eqref{EqnIntroOp}. The kernels which we study will be supported in $B^N\left( a\right)=\left\{x\in {\mathbb{R}}^N: \left|x\right|<a\right\}$, where $a>0$ is some small number to be chosen later (depending on $\gamma$). Fix $\nu\in {\mathbb{N}}$, we will be studying $\nu$ parameter operators. Fix $1\leq \mu_0\leq \nu$, and define, \begin{equation*} \mathcal{A}_{\mu_0} = \left\{\delta=\left( \delta_1,\ldots, \delta_\nu\right)\in \left[0,1\right]^\nu: \delta_{\mu_0}\leq \delta_{\mu_0+1}\leq \cdots\leq \delta_{\nu}\right\}. \end{equation*} The class of kernels we define will depend on $\mu_0$ (by depending on $\mathcal{A}_{\mu_0}$). In \cite{StreetMultiParameterSingRadonLt}, the class of kernels depended on a subset $\mathcal{A}\subseteq\left[0,1\right]^\nu$. In that paper, one could use any subset $\mathcal{A}$ such that if $\delta_1,\delta_2\in \mathcal{A}$, then $\delta_1\vee \delta_2\in \mathcal{A}$ (where $\delta_1\vee \delta_2$ is given by taking the coordinatewise maximum of $\delta_1$ and $\delta_2$). In this paper, we restrict our attention to $\mathcal{A}$ of the form $\mathcal{A}_{\mu_0}$ for some $\mu_0$.\footnote{Actually, this is not the most general form of $\mathcal{A}$ that can be handled by our methods. 
See Section \ref{SectionMoreKernels} for further details.} This is the only difference between the setting in \cite{StreetMultiParameterSingRadonLt} and the setting in this paper. Notice that $\mathcal{A}_{\nu}=\left[0,1\right]^\nu$ and $\mathcal{A}_{1}=\left\{\delta_1\leq\delta_2\leq\cdots\leq \delta_\nu\right\}$; these make up the principal examples we are interested in. We suppose we are given $\nu$-parameter dilations on ${\mathbb{R}}^N$. That is, we are given $e=\left( e_1,\ldots, e_N\right)$, with each $0\ne e_j= \left( e_j^1,\ldots, e_j^{\nu}\right)\in \left[0,\infty\right)^\nu$. For $\delta\in \left[0,\infty\right)^\nu$ and $t=\left( t_1,\ldots, t_N\right)\in {\mathbb{R}}^N$, we define,\footnote{$\delta^{e_j}$ is defined by standard multi-index notation: $\delta^{e_j}= \prod_{\mu} \delta_\mu^{e_j^\mu}$.} \begin{equation}\label{EqnDefndeltat} \delta t = \left( \delta^{e_1}t_1,\ldots, \delta^{e_{N}}t_N \right), \end{equation} thereby obtaining $\nu$-parameter dilations on ${\mathbb{R}}^N$. For each $\mu$, $1\leq \mu\leq \nu$, let $t_\mu$ denote those coordinates $t_j$ of $t=\left(t_1,\ldots, t_N \right)\in {\mathbb{R}}^N$ such that $e_j^\mu\ne 0$. For $j=\left(j_1,\ldots, j_\nu\right)\in {\mathbb{Z}}^\nu$, define $2^j=\left(2^{j_1},\ldots, 2^{j_\nu}\right)$. The class of distributions we will define depends on $N$, $a$, $e$, $\mu_0$, and $\nu$. Define, \begin{equation*} -\log_2 \sA_{\mu_0}=\left\{j\in {\mathbb{N}}^\nu : 2^{-j}\in \mathcal{A}_{\mu_0}\right\} = \left\{j\in {\mathbb{N}}^\nu : j_{\mu_0}\geq j_{\mu_0+1}\geq \cdots \geq j_{\nu}\right\}. \end{equation*} Given a function $\varsigma$ on ${\mathbb{R}}^N$, and $j\in {\mathbb{N}}^\nu$, define, \begin{equation*} \dil{\varsigma}{2^j}\left( t\right) = 2^{j\cdot e_1+\cdots + j\cdot e_N} \varsigma\left( 2^j t\right). \end{equation*} Note that $\dil{\varsigma}{2^j}$ is defined in such a way that, \begin{equation*} \int \dil{\varsigma}{2^j} \left( t\right) \: dt = \int \varsigma\left( t\right) \: dt.
\end{equation*} \begin{defn}\label{DefnsK} We define $\mathcal{K}=\mathcal{K}\left( N,e,a,\mu_0,\nu\right)$ to be the set of all distributions, $K$, of the form \begin{equation}\label{EqnSumDefK} K=\sum_{j\in -\log_2 \sA_{\mu_0}} \dil{\varsigma_j}{2^j}, \end{equation} where $\left\{\varsigma_j\right\}_{j\in-\log_2 \sA_{\mu_0}}\subset C_0^\infty\left( B^N\left( a\right) \right)$ is a bounded set, satisfying \begin{equation*} \int \varsigma_j\left( t\right) \: dt_\mu =0, \end{equation*} unless $j_\mu=0$, or both $\mu>\mu_0$ and $j_{\mu}=j_{\mu-1}$. It was shown in \cite{StreetMultiParameterSingRadonLt} that any sum of the form \eqref{EqnSumDefK} converges in the sense of distributions. \end{defn} See \cite{StreetMultiParameterSingRadonLt} for a more in-depth discussion of the class $\mathcal{K}$. \begin{rmk} If each $e_j$ is non-zero in only one component, kernels in $\mathcal{K}\left(N,e,a,\nu,\nu\right)$ are known as ``product kernels,'' and kernels in $\mathcal{K}\left(N,e,a,1,\nu\right)$ are known as ``flag kernels.'' The classes we study here are more general: we do not insist that each $e_j$ be nonzero in only one component, and we allow for any $1\leq\mu_0\leq \nu$. For an illustrative example where the $e_j$ are not non-zero in only one component, see Section 17.8 of \cite{StreetMultiParameterSingRadonLt}. For more information on product and flag kernels see, e.g., \cite{NagelRicciSteinSingularIntegralsWithFlagKernels}. \end{rmk} \section{Multi-parameter Carnot-Carath\'eodory geometry}\label{SectionCCGeom} At the heart of the definition of the class of $\gamma$ which we will study lies multi-parameter Carnot-Carath\'eodory geometry. Thus, before we can even define the class of $\gamma$, it is necessary to review the relevant definitions of multi-parameter Carnot-Carath\'eodory balls. We defer the theorems we will use to deal with these balls to Section \ref{SectionCCGeomII}.
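\begin{rmk} As a point of reference, the following standard single-parameter example (included here only for illustration) already exhibits the anisotropy that is characteristic of Carnot-Carath\'eodory balls. Take $X_1=\partial_{x_1}$ and $X_2=x_1\partial_{x_2}$ on ${\mathbb{R}}^2$, each with formal degree $1$ (so $\nu=1$). Since $\left[X_1,X_2\right]=\partial_{x_2}$, the Carnot-Carath\'eodory ball of radius $\delta$ centered at the origin is comparable to the anisotropic box \begin{equation*} \left\{x\in {\mathbb{R}}^2 : \left|x_1\right|\lesssim \delta,\ \left|x_2\right|\lesssim \delta^2\right\}, \end{equation*} and not to a Euclidean ball. The multi-parameter balls defined below display the same phenomenon, with a separate radius in each of the $\nu$ parameters. \end{rmk}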
Our main reference for Carnot-Carath\'eodory geometry is \cite{StreetMultiParameterCCBalls}, and we refer the reader there for a more detailed discussion. Let $\Omega\subseteq {\mathbb{R}}^n$ be a fixed open set, and suppose $X_1,\ldots, X_q$ are $C^\infty$ vector fields on $\Omega$. We define the Carnot-Carath\'eodory ball of unit radius, centered at $x_0\in \Omega$, with respect to the list $X$ by \begin{equation*} \begin{split} B_X\left( x_0\right):=\bigg\{y\in \Omega \:\bigg|\: &\exists \gamma:\left[0,1\right]\rightarrow \Omega, \gamma\left( 0\right) =x_0, \gamma\left( 1\right) =y, \\ &\gamma'\left( t\right) = \sum_{j=1}^q a_j\left( t\right) X_j\left( \gamma\left( t\right)\right), a_j\in L^\infty\left( \left[0,1\right]\right), \\ &\Lppn{\infty}{\left[0,1\right]}{\left(\sum_{1\leq j\leq q} \left|a_j\right|^2\right)^{\frac{1}{2}}}<1\bigg\}. \end{split} \end{equation*} Now that we have the definition of balls with unit radius, we may define (multi-parameter) balls of any radius merely by scaling the vector fields. To do so, we assign to each vector field, $X_j$, a (multi-parameter) formal degree $0\ne d_j=\left(d_j^1,\ldots, d_j^\nu \right)\in \left[0,\infty\right)^\nu$. For $\delta=\left(\delta_1,\ldots, \delta_\nu \right)\in \left[0,\infty\right)^\nu$, we define the list of vector fields $\delta X$ to be the list $\left( \delta^{d_1} X_1,\ldots, \delta^{d_q}X_q\right)$. Here, $\delta^{d_j}$ is defined by the standard multi-index notation: $\delta^{d_j} =\prod_{\mu=1}^\nu \delta_\mu^{d_j^\mu}$. We define the ball of radius $\delta$ centered at $x_0\in \Omega$ by $$\B{X}{d}{x_0}{\delta} := B_{\delta X}\left( x_0\right). $$ At times, it will be convenient to assume that the ball $\B{X}{d}{x_0}{\delta}$ lies ``inside'' of $\Omega$. To this end, we make the following definition. 
\begin{defn} Given $x_0\in \Omega$ and $\Omega'\subseteq \Omega$, we say the list of vector fields $X$ satisfies $\mathcal{C}\left( x_0,\Omega'\right)$ if for every $a=\left( a_1,\ldots, a_q\right) \in \left( L^{\infty}\left( \left[0,1\right]\right)\right)^q$, with $$\Lppn{\infty}{\left[0,1\right]}{\left|a\right|}=\Lppn{\infty}{\left[0,1\right]}{\left(\sum_{j=1}^q \left|a_j\right|^2\right)^{\frac{1}{2}}}<1,$$ there exists a solution $\gamma:\left[0,1\right]\rightarrow \Omega'$ to the ODE \begin{equation*} \gamma'\left( t\right) =\sum_{j=1}^q a_j\left( t\right) X_j\left( \gamma\left( t\right)\right), \leftuad \gamma\left( 0\right) =x_0. \end{equation*} Note that, by Gronwall's inequality, when this solution exists, it is unique. Similarly, we say $\left( X,d\right)$ satisfies $\mathcal{C}\left( x_0,\delta,\Omega'\right)$ if $\delta X$ satisfies $\mathcal{C}\left(x_0,\Omega'\right)$. \end{defn} One of the main points of \cite{StreetMultiParameterCCBalls} was to provide a detailed study of the balls $\B{X}{d}{x_0}{\delta}$, under appropriate conditions on the list $\left( X,d\right)$. To do this, we first need to pick a subset $\mathcal{A}\subseteq \left[0,1\right]^\nu$, and a compact set $K_0\Subset \Omega$. We will (essentially) be restricting our attention to those balls $\B{X}{d}{x_0}{\delta}$ such that $x_0\in K_0$ and $\delta\in \mathcal{A}$. One should think of $\mathcal{A}=\mathcal{A}_{\mu_0}$ for some $\mu_0$, as that case will be the primary one used in this paper. \begin{defn}\label{DefnsD} We say $\left( X,d\right)$ satisfies $\mathcal{D}\left(K_0,\mathcal{A}\right)$ if the following holds: \begin{itemize} \item There exist $\Omega'$ with $K_0\Subset \Omega'\Subset \Omega$ and $\xi>0$ such that for every $\delta\in \mathcal{A}$ and $x\in K_0$, $\left( X,d\right)$ satisfies $\mathcal{C}\left( x, \xi\delta, \Omega'\right)$.
\item For every $\delta\in \mathcal{A}$ and $x\in K_0$, we assume \begin{equation*} \left[\delta^{d_i} X_i, \delta^{d_j} X_j\right] = \sum_k c_{i,j}^{k,x,\delta} \delta^{d_k} X_k, \text{ on } \B{X}{d}{x}{\xi\delta}. \end{equation*} \item For every ordered multi-index $\alpha$ we assume\footnote{We write ${\mathbb{C}}jN{f}{0}{U}=\sup_{x\in U} \left|f\left(x\right)\right|$, and if we say the norm is finite, we mean (in addition) that $f$ is continuous on $U$.} \begin{equation*} \sup_{\substack{\delta\in \mathcal{A}\\ x\in K_0 } } {\mathbb{C}}jN{\left(\delta X\right)^\alpha c_{i,j}^{k,x,\delta}}{0}{\B{X}{d}{x}{\xi\delta}}<\infty. \end{equation*} \end{itemize} If we wish to be explicit about $\Omega'$ and $\xi$, we write $\mathcal{D}\left( K_0,\mathcal{A}, \Omega', \xi\right)$. \end{defn} It is under condition $\mathcal{D}\left( K_0, \mathcal{A}\right)$ that the balls $\B{X}{d}{x}{\delta}$ were studied in \cite{StreetMultiParameterCCBalls}. We refer the reader to Section \ref{SectionCCGeomII} for an overview of the theorems from \cite{StreetMultiParameterCCBalls} that we shall use. In what follows, we will not be directly given a list of vector fields with formal degrees satisfying $\mathcal{D}\left( K_0, \mathcal{A}\right)$, \begin{equation*} \left( X_1,d_1\right), \ldots, \left( X_q,d_q\right), \end{equation*} but, rather, we will be given a list of $C^\infty$ vector fields with formal degrees which we will assume to ``generate'' such a list. To understand this, let $\left( X_1,d_1\right),\ldots, \left( X_r,d_r\right)$ be $C^\infty$ vector fields with associated formal degrees $0\ne d_j\in \left[0,\infty\right)^\nu$. For a list $L=\left( l_1,\ldots, l_m\right)$ where $1\leq l_j\leq r$, we define, \begin{equation*} \begin{split} X_L &= \ad{X_{l_1}}\ad{X_{l_2}}\cdots \ad{X_{l_{m-1}}} X_{l_m},\\ d_L &= d_{l_1}+d_{l_2}+\cdots +d_{l_m}.
\end{split} \end{equation*} We define $\mathcal{S}=\left\{\left( X_L,d_L\right) : L \text{ is any such list}\right\}.$ \begin{defn}\label{DefnGeneratesAFiniteList} We say $\mathcal{S}$ is {\it finitely generated} or that $\left( X_1,d_1\right), \ldots, \left( X_r,d_r\right)$ {\it generates a finite list} if there exists a finite subset $\mathcal{F}\subseteq \mathcal{S}$ such that $\mathcal{F}$ satisfies $\mathcal{D}\left( K_0,\mathcal{A}\right)$\footnote{Here, we are thinking of $K_0$ and $\mathcal{A}$ as fixed.} and \begin{equation*} \left( X_j, d_j\right)\in \mathcal{F}, \leftuad 1\leq j\leq r. \end{equation*} If we enumerate the vector fields in $\mathcal{F}$, \begin{equation*} \mathcal{F}=\left\{\left(X_1,d_1\right),\ldots, \left( X_q,d_q\right) \right\}, \end{equation*} we say that $\left( X_1,d_1\right),\ldots, \left( X_r,d_r\right)$ {\it generates the finite list} $\left( X_1,d_1\right) ,\ldots, \left( X_q,d_q\right)$. Note that, if $\mathcal{S}$ is finitely generated, $\left( X_1,d_1\right),\ldots, \left( X_r,d_r\right)$ could generate many different finite lists. However, if we let $\left( X,d\right)$ and $\left( X',d'\right)$ be two different such lists, then either choice will work for our purposes. In fact, it is shown in \cite{StreetMultiParameterSingRadonLt} that $\left( X,d\right)$ and $\left(X',d'\right)$ are {\it equivalent} in a sense that is made precise and discussed at length in that paper. It follows that, in every place we use these notions, it will not make a difference which finite list we use. Thus, we will unambiguously say ``$\left( X_1,d_1\right) ,\ldots, \left(X_r,d_r\right)$ generates the finite list $\left( X_1,d_1\right) ,\ldots, \left( X_q,d_q\right)$,'' to mean that $\left( X_1,d_1\right), \ldots, \left(X_r,d_r\right)$ generates a finite list and $\left( X_1,d_1\right),\ldots, \left( X_q,d_q\right)$ can be any such list.
\end{defn} \section{Surfaces}\label{SectionCurves} In this section, we define the class of $\gamma$ for which we will study operators of the form \eqref{EqnIntroOp}. This is nothing but a reprise of the definitions in \cite{StreetMultiParameterSingRadonLt}, and we refer the reader there for more details. We assume we are given an open subset $\Omega\subseteq {\mathbb{R}}^n$, a fixed $\mu_0$, $1\leq \mu_0\leq \nu$, and dilations $e$ as in Section \ref{SectionKernels}. \begin{defn} Given a multi-index $\alpha\in {\mathbb{N}}^N$, we define $$\deg\left(\alpha\right)=\sum_{j=1}^N \alpha_j e_j\in \left[0,\infty\right)^\nu.$$ \end{defn} Let $K_0\Subset\Omega'\Subset \Omega''\Subset \Omega$ be subsets of $\Omega$ with $K_0$ compact and $\Omega'$ and $\Omega''$ open but relatively compact in $\Omega$. Our goal in this section is to define a class of $C^\infty$ functions $$\gamma\left( t,x\right) : B^N\left(\rho\right) \times \Omega''\rightarrow \Omega$$ such that $\gamma\left( 0,x\right)=x$. Here $\rho>0$ is a small number. This class of functions will depend on $\mu_0$, $N$, $e$, and $\Omega$ (nominally, the class will also depend on $K_0$, $\Omega'$, and $\Omega''$, but this will not be an essential point). This class will be such that if $\psi$ is a $C_0^\infty$ function supported in the interior of $K_0$, then there is an $a>0$ sufficiently small such that the operator given by \eqref{EqnIntroOp} is bounded on $L^p$ ($1<p<\infty$) for every $K\in \mathcal{K}\left( N,e,a,\mu_0,\nu\right)$. Note that, by possibly shrinking $\rho>0$, we may assume that, for each $t\in B^N\left( \rho\right)$, $\gamma\left( t, \cdot\right)\big|_{\Omega'}$ is a diffeomorphism (in the $x$-variable) onto its image. From now on we will assume this. Also, as in the introduction, we will write $\gamma_t\left( x\right)= \gamma\left( t,x\right)$. Unlike the work in \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}, we separate the condition on $\gamma_t$ into two aspects.
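\begin{rmk} A classical model example to keep in mind (included here only for illustration; it is single-parameter, with $\nu=1$, $N=1$, $n=2$, and $e_1=1$) is translation along the parabola, \begin{equation*} \gamma_t\left(x_1,x_2\right) = \left(x_1+t, x_2+t^2\right). \end{equation*} With $W\left(t,x\right)=\frac{d}{d\epsilon}\big|_{\epsilon=1}\gamma_{\epsilon t}\circ \gamma_t^{-1}\left(x\right)$ as in Definition \ref{DefnControlCurve} below, we have $\gamma_t^{-1}\left(y_1,y_2\right)=\left(y_1-t,y_2-t^2\right)$, and a direct computation gives \begin{equation*} W\left(t,x\right) = t\,\partial_{x_1}+2t^2\,\partial_{x_2}, \end{equation*} so that the Taylor coefficient vector fields are $\partial_{x_1}$, of degree $1$, and $2\partial_{x_2}$, of degree $2$. These commute and span the tangent space at every point, and both of the conditions described below are readily verified in this case. \end{rmk}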
For the first, suppose we are given a list of $C^\infty$ vector fields on $\Omega'$, $X_1,\ldots, X_q$, with associated $\nu$-parameter formal degrees, $d_1,\ldots, d_q$, satisfying $\mathcal{D}\left( K_0, \mathcal{A}_{\mu_0}, \Omega',\xi\right)$ for some $\xi>0$ (we will see later where these vector fields will come from). We denote the list $\left(X_1,d_1\right),\ldots,\left(X_q,d_q\right)$ by $\left(X,d\right)$. \begin{defn}\label{DefnControl} Suppose we are given a $C^\infty$ vector field on $\Omega'$, depending smoothly on $t\in B^N\left( \rho\right)$, $W\left( t,x\right)\in T_x\Omega'$. We say $\left( X,d\right)$ {\it controls} $W\left( t,x\right)$ if there exist $\rho_1\leq \rho$ and $\tau_1\leq \xi$ such that for every $x_0\in K_0$, $\delta\in \mathcal{A}_{\mu_0}$ there exist functions $c_l^{x_0,\delta}$ on $B^N\left( \rho_1\right)\times \B{X}{d}{x_0}{\tau_1\delta}$ satisfying \begin{itemize} \item $W\left( \delta t,x\right) = \sum_{l=1}^q c_l^{x_0,\delta}\left( t,x\right) \delta^{d_l} X_l\left( x\right)$ on $B^N\left( \rho_1\right)\times \B{X}{d}{x_0}{\tau_1\delta}$, where $\delta t$ is defined as in \eqref{EqnDefndeltat}. \item $\sup_{\substack{x_0\in K_0\\\delta\in \mathcal{A}_{\mu_0} } }\sum_{\left|\alpha\right|+\left|\beta\right|\leq m} {\mathbb{C}}jN{\left(\delta X\right)^{\alpha} \partial_t^\beta c_l^{x_0,\delta}}{0}{B^N\left( \rho_1\right)\times \B{X}{d}{x_0}{\tau_1\delta}}<\infty,$ for every $m$. \end{itemize} \end{defn} \begin{defn}\label{DefnControlCurve} We say $\left( X,d\right)$ {\it controls} $\gamma_t\left( x\right)$ if $\left( X,d\right)$ controls $W$ where \begin{equation*} W\left(t,x\right)=\frac{d}{d\epsilon}\bigg|_{\epsilon=1} \gamma_{\epsilon t}\circ \gamma_t^{-1} \left( x\right). \end{equation*} Here, $\epsilon\left( t_1,\ldots, t_N\right) = \left( \epsilon t_1,\ldots, \epsilon t_N\right)$, and so $\epsilon$ is unrelated to the dilations $e$.
\end{defn} Part of our assumption will be that a particular family of vector fields $\left( X,d\right)$ controls $\gamma_t$. Where these vector fields come from constitutes the other part of our assumption on $\gamma$. Let $W$ be as in Definition \ref{DefnControlCurve}. Let $X_{\alpha}\left( x\right)$ be the Taylor coefficients of $W$ when the Taylor series is taken in the $t$ variable: \begin{equation}\label{EqnDefnXjalpha} W\left( t,x\right) \sim \sum_{\left|\alpha\right|>0} t^\alpha X_{\alpha}\left( x\right), \end{equation} so that $X_{\alpha}$ is a $C^\infty$ vector field on $\Omega'$. Our assumption on $\gamma_t$ is that if we take the set of vector fields with formal degrees: \begin{equation}\label{EqnCurvesDefnS} \mathcal{S}=\left\{\left(X_{\alpha},\deg\left(\alpha\right) \right):\deg\left( \alpha\right)\text{ is non-zero in only one component}\right\}, \end{equation} then there is a finite subset $\mathcal{F}\subseteq \mathcal{S}$ such that $\mathcal{F}$ generates a finite list $\left( X,d\right) = \left( X_1,d_1\right),\ldots, \left(X_q,d_q\right)$ and this finite list controls $\gamma_t$. \begin{rmk} The list of vector fields $\left( X,d\right)$ depends on a few choices we have made in the above: it depends on the chosen finite subset $\mathcal{F}$ and it depends on the chosen list generated by $\mathcal{F}$. However, neither of these choices affects $\left( X,d\right)$ in an essential way. This is discussed in detail in \cite{StreetMultiParameterSingRadonLt}. \end{rmk} \section{Statement of Results} Fix $\Omega\subseteq {\mathbb{R}}^n$ open, and $K_0\Subset \Omega'\Subset\Omega''\Subset\Omega$ with $K_0$ compact (with nonempty interior) and $\Omega'$ and $\Omega''$ open but relatively compact in $\Omega$. Let \begin{equation*} \gamma\left( t,x\right):B^N\left( \rho\right)\times \Omega''\rightarrow \Omega \end{equation*} be a $C^\infty$ function such that $\gamma\left( 0,x\right) \equiv x$. Here, $\rho>0$ is a small number. 
Fix $\nu\in {\mathbb{N}}$ positive, and $\mu_0$, $1\leq \mu_0\leq \nu$. Furthermore, let $e=\left( e_1,\ldots, e_N\right)$ be given, with $0\ne e_j\in \left[0,\infty\right)^\nu$. {\bf We suppose $\gamma$ satisfies the assumptions of Section \ref{SectionCurves} with this $K_0$, $\mu_0$, and $e$.} \begin{thm}\label{ThmMainThmFirstPass} For every $\psi\in C_0^\infty\left( {\mathbb{R}}^n\right)$ supported in the interior of $K_0$, there exists $a>0$ such that for every $K\in \mathcal{K}\left( N,e,a,\mu_0,\nu\right)$ the operator \begin{equation*} f\mapsto \psi\left( x\right) \int f\left( \gamma_t\left( x\right) \right)K\left( t\right) \: dt \end{equation*} extends to a bounded operator $L^p\left( {\mathbb{R}}^n\right)\rightarrow L^p\left( {\mathbb{R}}^n\right)$, for every $1<p<\infty$. \end{thm} Actually, as shown in \cite{StreetMultiParameterSingRadonLt}, Theorem \ref{ThmMainThmFirstPass} follows directly from the following, slightly more general, theorem. \begin{thm}\label{ThmMainThmSecondPass} There exists $a>0$ such that for every $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$ supported on the interior of $K_0$, every $K\in \mathcal{K}\left( N,e,a,\mu_0,\nu\right)$ and every $C^\infty$ function \begin{equation*} \kappa\left( t,x\right):B^N\left( a\right)\times \Omega''\rightarrow {\mathbb{C}} \end{equation*} the operator \begin{equation*} T f\left( x\right)= \psi_1\left( x\right) \int f\left( \gamma_t\left( x\right) \right) \psi_2\left( \gamma_t\left( x\right)\right) \kappa\left( t,x\right) K\left( t\right) \: dt \end{equation*} extends to a bounded operator $L^p\left( {\mathbb{R}}^n\right) \rightarrow L^p\left( {\mathbb{R}}^n\right)$ for every $1<p<\infty$. \end{thm} \begin{rmk} We focus on the more general Theorem \ref{ThmMainThmSecondPass}. 
The importance of the form of the operator in Theorem \ref{ThmMainThmSecondPass} is that the class of operators is closed under taking $L^2$ adjoints, which is not true of the class of operators in Theorem \ref{ThmMainThmFirstPass}. See Section 12.3 of \cite{StreetMultiParameterSingRadonLt} for the proof of this. \end{rmk} We also study maximal functions. Define, for $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$, supported on the interior of $K_0$ with $\psi_1\geq 0$, \begin{equation*} \mathcal{M} f\left( x\right) = \sup_{\delta\in \mathcal{A}_{\mu_0}} \psi_1\left( x\right)\int_{\left|t\right|<a} \left|f\left( \gamma_{\delta t}\left( x\right) \right)\psi_2\left( \gamma_{\delta t}\left( x\right)\right)\right|\: dt, \end{equation*} where $a>0$ is a fixed, small real number (this should be considered as the same $a$ as in Theorem \ref{ThmMainThmSecondPass}). Then, we have, \begin{thm}\label{ThmMainMaxThm} Under the same assumptions as Theorem \ref{ThmMainThmSecondPass}, \begin{equation*} \LpN{p}{\mathcal{M} f}\lesssim \LpN{p}{f}, \end{equation*} for every $1<p\leq \infty$. \end{thm} Most of the paper will be devoted to exhibiting the proofs of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm} in the case when $\mu_0=\nu$. That is, when $\mathcal{A}_{\mu_0}=\left[0,1\right]^\nu$. This case contains all the main ideas, but allows for simpler notation. We then describe the modifications needed to attack the more general situation in Section \ref{SectionMoreKernels}. \begin{rmk}\label{RmkNoCZTheory} A major difficulty in the proofs of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm} is that there is no appropriate {\it a priori} multi-parameter theory analogous to the standard theory of Calder\'on-Zygmund singular integrals to fall back on.
Indeed, in the single parameter ($\nu=1$) case, one can ``smooth out'' the operators in question enough to apply the standard Calder\'on-Zygmund theory and obtain some $L^p$ estimates (see Section 18 of \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms}). To get around this issue, we will use square function techniques that will allow us to introduce the Calder\'on-Zygmund theory in a more roundabout manner. Once Theorem \ref{ThmMainThmSecondPass} is proved, we will actually obtain, {\it a posteriori,} a prototype for some aspects of a multi-parameter Calder\'on-Zygmund type theory. See Section \ref{SectionSingularIntegrals}. \end{rmk} \begin{rmk}\label{RmkABetterMaxForRealAnal} While the maximal results in this paper are new, the class of $\gamma$ we consider was more motivated by Theorem \ref{ThmMainThmSecondPass} than by Theorem \ref{ThmMainMaxThm}. Indeed, we will see in Section \ref{SectionMaximalComment} that there are choices of $\gamma$ where the $L^p$ boundedness of the singular integral fails, but the $L^p$ boundedness of the maximal function holds. We have not attempted to state Theorem \ref{ThmMainMaxThm} in such a way as to include these choices. This deficiency will be partially rectified in \cite{StreetMultiParameterSingRadonAnal}. Indeed, in \cite{StreetMultiParameterSingRadonAnal}, we will obtain the following result. Let $\gamma_t\left(x\right):{\mathbb{R}}^N_0\times {\mathbb{R}}^n_0\rightarrow {\mathbb{R}}^n$ be a {\bf real analytic function} defined on a neighborhood of $\left(0,0\right)\in {\mathbb{R}}^N\times {\mathbb{R}}^n$, and satisfying $\gamma_0\left(x\right)\equiv x$; assuming {\it no additional hypotheses}. Define a maximal operator by, \begin{equation*} \mathcal{M}t f\left(x\right) = \sup_{\delta=\left(\delta_1,\ldots, \delta_N\right)\in \left[0,1\right]^N}\psi\left(x\right) \int_{\left|t\right|\leq a} \left|f\left(\gamma_{\delta_1t_1,\ldots, \delta_N t_N}\left(x\right)\right)\right|\: dt.
\end{equation*} Then, $\mathcal{M}t$ is bounded on $L^p$ ($1<p\leq \infty$), provided $\psi$ is supported on a sufficiently small neighborhood of $0$, and $a$ is sufficiently small. Note that this result is clearly not a special case of Theorem \ref{ThmMainMaxThm}. See, also, \cite{SteinStreetMultiparameterSingularRadonTransformsAnnounce} for a further discussion of this point. \end{rmk} \begin{rmk} The hypotheses we put on $\gamma$ depend on the choice of the dilations $e$ and on $\mu_0$. In particular, if all the $e_j$ are nonzero in only one component, the hypotheses still depend on $\mu_0$. In fact, our hypotheses are weaker when $K$ is a ``flag kernel'' ($\mu_0=1$ and all $e_j$ are nonzero in only one component) than when $K$ is a ``product kernel'' ($\mu_0=\nu$ and all $e_j$ are nonzero in only one component). This is discussed in detail, with examples, in Sections 17.5 and 17.6 of \cite{StreetMultiParameterSingRadonLt}. \end{rmk} \section{Basic Notation} Throughout the paper, for $v=\left(v_1,\ldots, v_n\right)\in {\mathbb{R}}^n$, we write $\left|v\right|$ for $\left( \sum_j \left|v_j\right|^2 \right)^{\frac{1}{2}}$, and we write $\left|v\right|_\infty$ for $\sup_j \left|v_j\right|$. $B^{n}\left( \eta\right)$ will denote the ball of radius $\eta>0$ in the $\left|\cdot\right|$ norm. For two numbers $a,b\in {\mathbb{R}}$ we write $a\vee b$ for the maximum of $a$ and $b$ and $a\rightedge b$ for the minimum. If instead, $a=\left( a_1,\ldots, a_n\right), b=\left( b_1,\ldots, b_n\right)\in {\mathbb{R}}^n$, we write $a\vee b$ (respectively, $a\rightedge b$) for $\left(a_1\vee b_1,\ldots, a_n \vee b_n \right)$ (respectively, $\left(a_1\rightedge b_1,\ldots, a_n \rightedge b_n\right)$). For vectors $\delta=\left( \delta_1,\ldots, \delta_\nu\right), d=\left( d_1,\ldots, d_\nu\right)\in {\mathbb{R}}^\nu$, we define $\delta^d$ by the standard multi-index notation. I.e., $\delta^d=\prod_{\mu=1}^{\nu} \delta_\mu^{d_\mu}$.
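For example, if $\nu=2$, $\delta=\left(\frac{1}{2},\frac{1}{4}\right)$, and $d=\left(2,1\right)$, then \begin{equation*} \delta^d = \delta_1^{d_1}\delta_2^{d_2} = \left(\frac{1}{2}\right)^{2}\cdot \frac{1}{4} = \frac{1}{16}. \end{equation*}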
Also we will write $2^d = \left(2^{d_1},\ldots,2^{d_\nu}\right)$. Given a, possibly arbitrary, set $U\subseteq {\mathbb{R}}^n$ and a continuous function $f$ defined on a neighborhood of $U$, we write $${\mathbb{C}}jN{f}{j}{U} = \sum_{\left|\alpha\right|\leq j}\sup_{x\in U} \left|\partial_x^\alpha f\left( x\right)\right|,$$ and if we state that ${\mathbb{C}}jN{f}{j}{U}$ is finite, we mean that the partial derivatives up to order $j$ of $f$ exist on $U$, are continuous, and the above norm is finite. If $f$ is replaced by a vector field $Y=\sum_k a_k\left( x\right)\partial_{x_k}$, then we write, $${\mathbb{C}}jN{Y}{j}{U} = \sum_{k}{\mathbb{C}}jN{a_k}{j}{U}.$$ Given a matrix $A$, we write $\left\| A\right\|$ for the usual operator norm. Given two integers $1\leq m\leq n$, we let $\sI{m}{n}$ denote the set of all lists of integers $\left( i_1,\ldots, i_m\right)$ such that $$1\leq i_1<i_2<\cdots<i_m\leq n.$$ Furthermore, suppose $A$ is an $n\times q$ matrix, and suppose $1\leq n_0\leq n\rightedge q$. For $I\in \sI{n_0}{n}$, $J\in \sI{n_0}{q}$ we let $A_{I,J}$ denote the $n_0\times n_0$ matrix given by taking the rows from $A$ which are listed in $I$ and the columns from $A$ which are listed in $J$. We define $$\Det{n_0} A = \left( \det A_{I,J} \right)_{\substack{I\in \sI{n_0}{n}\\J\in \sI{n_0}{q}}},$$ so that, in particular, $\Det{n_0} A$ is a {\it vector} (it will not be important to us in which order the coordinates are arranged). $\Det{n_0} A$ comes up when one changes variables. Indeed, suppose $\Phi$ is a $C^1$ diffeomorphism from an open subset $U\subset {\mathbb{R}}^{n_0}$ mapping to an $n_0$ dimensional submanifold of ${\mathbb{R}}^n$, where this submanifold is given the induced Lebesgue measure $dx$. 
Then, we have $$\int_{\Phi\left( U\right)} f\left( x\right) \: dx = \int_U f\left( \Phi\left( t\right)\right) \left|\Det{n_0} d\Phi\left( t\right)\right|\: dt.$$ If $A=\left( A_1,\ldots, A_q\right)$ is a list of, possibly non-commuting, operators, we will use ordered multi-index notation to define $A^\alpha$, where $\alpha$ is a list of numbers $1,\ldots, q$. $\left|\alpha\right|$ will denote the length of the list. For instance, if $\alpha=\left( 1,4,4,2,1\right)$, then $\left| \alpha\right|=5$ and $A^\alpha = A_1A_4A_4A_2A_1$. Thus, if $A_1,\ldots, A_q$ are vector fields, then $A^\alpha$ is a partial differential operator of order $\left|\alpha\right|$. If $f:{\mathbb{R}}^{n}\rightarrow {\mathbb{R}}^m$ is a map, then we write, \begin{equation*} d f\left( x\right) \left( \frac{\partial}{\partial x_j}\right), \end{equation*} for the differential of $f$ at the point $x$ applied to the vector field $\frac{\partial}{\partial x_j}$. If $f$ is a function of two variables, $f\left( t,x\right):{\mathbb{R}}^N\times {\mathbb{R}}^n\rightarrow {\mathbb{R}}^m$, and we wish to view $d f$ as a linear transformation acting on the vector space spanned by $\frac{\partial}{\partial t_j}$ ($1\leq j\leq N$), then we instead write, \begin{equation*} \frac{\partial f}{\partial t} \left( t,x\right) \end{equation*} to denote this linear transformation. Hence, it makes sense to write, \begin{equation*} \det_{n_0\times n_0} \frac{\partial f}{\partial t} \left( t,x\right), \end{equation*} where $n_0\leq m\wedge N$. If $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$, we write $\psi_1\prec \psi_2$ to denote that $\psi_2\equiv 1$ on a neighborhood of the support of $\psi_1$. We will have occasion to use vector-valued functions.
We denote by $L^p\left( \ell^q\left( {\mathbb{N}}^{\nu} \right)\right)$ the set of sequences of measurable functions $\left\{f_j\right\}_{j\in{\mathbb{N}}^\nu}$ such that, $$\LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu} \left|f_j \right|^q \right)^{1/q}}<\infty.$$ Finally, we will devote a good deal of notation to multi-parameter Carnot-Carath\'eodory geometry. See Sections \ref{SectionCCGeom} and \ref{SectionCCGeomII}. \section{Multi-parameter Carnot-Carath\'eodory Geometry Revisited}\label{SectionCCGeomII} In this section, we present the results that allow us to deal with Carnot-Carath\'eodory geometry. The results we outline here are contained in Section 4 of \cite{StreetMultiParameterCCBalls}. The heart of this theory is the ability to ``rescale'' vector fields. This rescaling is obtained by conjugating by a particular diffeomorphism, which will be denoted by $\Phi$ in what follows. Before we can enter into details, we must explain the connection between multi-parameter balls and single-parameter balls. We assume we are given $C^\infty$ vector fields $X_1,\ldots, X_q$ with associated formal degrees $0\ne d_1,\ldots,d_q\in \left[0,\infty\right)^\nu$. Here, $\nu\in {\mathbb{N}}$ is the number of parameters. Given the multi-parameter degrees, we obtain corresponding single-parameter degrees, which we denote by $\sum d$ and which are defined by $\left( \sum d\right)_j:=\sum_{\mu=1}^\nu d_j^\mu=\left|d_j\right|_1$. Let $\delta\in \left[0,\infty\right)^\nu$, and suppose we wish to study the ball \begin{equation*} \B{X}{d}{x_0}{\delta}. \end{equation*} Decompose $\delta=\delta_0\delta_1$ where $\delta_0\in \left[0,\infty\right)$ and $\delta_1\in\left[0,\infty\right)^\nu$ (of course this decomposition is not unique). Then, directly from the definition, we obtain: \begin{equation*} \B{X}{d}{x_0}{\delta}=\B{\delta_1 X}{\sum d}{x_0}{\delta_0}=\B{\delta X}{\sum d}{x_0}{1}.
\end{equation*} Thus, studying a ball of radius $\delta$ corresponding to $\left( X,d\right)$ is the same as studying a ball of radius $1$ corresponding to $\left( \delta X, \sum d\right)$. For this reason, taking $K_0$ as in Section \ref{SectionCCGeom} and assuming $\left( X,d\right)$ satisfies $\mathcal{D}\left( K_0,\left[0,1\right]^\nu\right)$, we will fix $x_0\in K_0$ and $\delta\in \left[0,1\right]^\nu$ and study balls of radius $\approx 1$ centered at $x_0$ corresponding to the vector fields with single-parameter formal degrees $\left( \delta X, \sum d\right)$. In what follows, it will be important that all of the implicit constants are independent of $x_0\in K_0$ and $\delta\in \left[0,1\right]^\nu$. We now turn to stating a theorem about a list of $C^\infty$ vector fields $Z_1,\ldots, Z_q$ defined on an open set $\Omega\subseteq {\mathbb{R}}^n$, with associated single-parameter formal degrees $\tilde{d}_1,\ldots, \tilde{d}_q\in \left(0,\infty\right)$. The special case we are interested in is the case when $\left( Z,\tilde{d}\right) = \left( \delta X, \sum d\right)$; i.e., when $Z_j=\delta^{d_j} X_j$ and $\tilde{d}_j=\left|d_j\right|_1$. Fix $x_0\in \Omega$ and $1\geq \xi>0$.\footnote{In our primary example, one takes $x_0\in K_0$ and $\xi$ as in $\mathcal{D}\left( K_0,\left[0,1\right]^\nu\right)$.} Let $n_0=\dspan{Z_1\left( x_0\right), \ldots, Z_q\left( x_0\right)}$. For $J=\left( j_1,\ldots, j_{n_0}\right)\in \sI{n_0}{q}$, let $Z_J$ denote the list of vector fields $Z_{j_1},\ldots, Z_{j_{n_0}}$. Fix $J_0\in \sI{n_0}{q}$ such that \begin{equation*} \left| \det_{n_0\times n_0} Z_{J_0}\left( x_0\right)\right|_\infty = \left|\det_{n_0\times n_0} Z\left( x_0\right) \right|_\infty, \end{equation*} where we have identified $Z\left( x_0\right)$ with the $n\times q$ matrix whose columns are given by $Z_1\left( x_0\right),\ldots, Z_q\left( x_0\right)$ and similarly for $Z_{J_0}\left( x_0\right)$.
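As a simple illustration of the reduction from multi-parameter to single-parameter balls described above (the example is purely illustrative and plays no role in the proofs), take $n=q=\nu=2$, $X_1=\partial_{x_1}$, $X_2=\partial_{x_2}$, $d_1=\left( 1,0\right)$, $d_2=\left( 0,1\right)$. Then $\delta X=\left( \delta_1\partial_{x_1},\delta_2\partial_{x_2}\right)$, $\left( \sum d\right)_1=\left( \sum d\right)_2=1$, and
\begin{equation*}
\B{X}{d}{x_0}{\delta} = \left( x_0^1-\delta_1, x_0^1+\delta_1\right)\times \left( x_0^2-\delta_2, x_0^2+\delta_2\right) = \B{\delta X}{\textstyle\sum d}{x_0}{1};
\end{equation*}
the two-parameter box of side lengths $2\delta_1\times 2\delta_2$ is exactly the unit single-parameter ball for the rescaled vector fields.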
We assume $\left(Z,\tilde{d}\right)$ satisfies $\mathcal{C}\left( x_0,\xi,\Omega\right)$. In addition, we suppose that there are functions $c_{i,j}^k$ on $\B{Z}{\tilde{d}}{x_0}{\xi}$ such that \begin{equation*} \left[Z_i,Z_j\right]=\sum c_{i,j}^k Z_k, \text{ on }\B{Z}{\tilde{d}}{x_0}{\xi}. \end{equation*} We assume that: \begin{itemize} \item $\CjN{Z_j}{m}{\B{Z}{\tilde{d}}{x_0}{\xi}}<\infty$ for every $m$. \item $\sum_{\left|\alpha\right|\leq m} \CjN{ Z^\alpha c_{i,j}^k}{0}{\B{Z}{\tilde{d}}{x_0}{\xi}}<\infty,$ for every $m$ and every $i,j,k$. \end{itemize} We say that $C$ is an $m$-admissible constant if $C$ can be chosen to depend only on upper bounds for the above two quantities (for that particular choice of $m$), $m$, upper and lower bounds for $\tilde{d}_1,\ldots, \tilde{d}_{q}$, an upper bound for $n$ and $q$, and a lower bound for $\xi$. Note that, in our primary example $\left( Z,\tilde{d}\right)=\left( \delta X, \sum d\right)$, $m$-admissible constants can be chosen independent of $x_0\in K_0$ and $\delta\in \left[0,1\right]^\nu$. We write $A\lesssim_m B$ if $A\leq C B$, where $C$ is an $m$-admissible constant, and we write $A\approx_m B$ if $A\lesssim_m B$ and $B\lesssim_m A$. Finally, we say $\tau=\tau\left( \kappa\right)$ is an $m$-admissible constant if $\tau$ can be chosen to depend on all the parameters an $m$-admissible constant may depend on, and $\tau$ may also depend on $\kappa$. \begin{rmk} Under the above assumptions, the classical Frobenius theorem applies to show that (near $x_0$) there is a submanifold passing through $x_0$, of dimension $n_0$, whose tangent space is spanned by $Z_1,\ldots, Z_q$. The balls $\B{Z}{\tilde{d}}{x_0}{\delta}$ are open subsets of this submanifold, and we use the notation $\Vol{\B{Z}{\tilde{d}}{x_0}{\delta}}$ to denote the volume of this ball in the sense of the induced Lebesgue measure on this submanifold. See \cite{StreetMultiParameterCCBalls} for more details.
\end{rmk} \begin{thm}\label{ThmMainCCThm} There exist $2$-admissible constants $\eta_1,\xi_1>0$ such that if the map $\Phi:B^{n_0}\left( \eta_1\right)\rightarrow \B{Z}{\tilde{d}}{x_0}{\xi}$ is defined by \begin{equation*} \Phi\left( u\right) = e^{u\cdot Z_{J_0}} x_0, \end{equation*} we have \begin{itemize} \item $\Phi:B^{n_0}\left( \eta_1\right) \rightarrow \B{Z}{\tilde{d}}{x_0}{\xi}$ is injective. \item $\B{Z}{\tilde{d}}{x_0}{\xi_1}\subseteq \Phi\left( B^{n_0}\left( \eta_1\right)\right)$. \item For all $u\in B^{n_0}\left( \eta_1\right)$, $\left|\det_{n_0\times n_0} d\Phi\left( u\right)\right|\approx_2 \left|\det_{n_0\times n_0} Z\left( x_0\right)\right|$. \item $\Vol{\B{Z}{\tilde{d}}{x_0}{\xi_1}}\approx_2 \left|\det_{n_0\times n_0} Z\left( x_0\right)\right|$. \end{itemize} Furthermore, if we let $Y_j$ be the pullback of $Z_j$ under the map $\Phi$, then we have, for $m\geq 0$, \begin{equation}\label{EqnRescaledSmooth} \CjN{Y_j}{m}{B^{n_0}\left( \eta_1\right)} \lesssim_{m\vee 2} 1, \end{equation} \begin{equation}\label{EqnRescaledCjN} \CjN{f}{m}{B^{n_0}\left(\eta_1\right)} \approx_{\left(m-1\right)\vee 2} \sum_{\left|\alpha\right|\leq m} \CjN{Y^\alpha f}{0}{B^{n_0}\left( \eta_1\right)}. \end{equation} Finally, \begin{equation}\label{EqnRescaledSpan} \left|\det_{n_0\times n_0} Y_{J_0}\left( u\right)\right|\approx_{2} 1,\quad \forall u\in B^{n_0}\left( \eta_1\right). \end{equation} \end{thm} Note that, in light of \eqref{EqnRescaledSmooth} and \eqref{EqnRescaledSpan}, pulling back by the map $\Phi$ allows us to rescale the vector fields $Z$ in such a way that the rescaled vector fields, $Y$, are smooth and span the tangent space (uniformly in any relevant parameters). We will also need the following technical result. \begin{prop}\label{PropExtraCCStuff} Suppose $\xi_2,\eta_2>0$ are given.
Then there exist $2$-admissible constants $\eta'=\eta'\left( \xi_2\right)>0$, $\xi'=\xi'\left( \eta_2\right)>0$ such that, \begin{equation*} \Phi\left( B^{n_0}\left( \eta'\right)\right) \subseteq \B{Z}{\tilde{d}}{x_0}{\xi_2}, \end{equation*} \begin{equation*} \B{Z}{\tilde{d}}{x_0}{\xi'}\subseteq \Phi\left( B^{n_0}\left( \eta_2\right)\right). \end{equation*} \end{prop} \begin{proof} The existence of $\eta'$ can be seen by applying Theorem \ref{ThmMainCCThm} with $\xi$ replaced by $\xi\wedge \xi_2$. The existence of $\xi'$ can be shown by combining the proof of Proposition 3.21 of \cite{StreetMultiParameterCCBalls} with the proof of Proposition 4.16 of \cite{StreetMultiParameterCCBalls}. \end{proof} \begin{rmk} With a slight abuse of notation, when we say $m$-admissible constant, where $m<2$, we will take that to mean a $2$-admissible constant. Using this new notation, the $\vee$ in \eqref{EqnRescaledSmooth} and \eqref{EqnRescaledCjN} may be removed. \end{rmk} \begin{rmk} It is not hard to see that the single-parameter formal degrees $\tilde{d}$ do not play an essential role in the above (see Remark 3.3 of \cite{StreetMultiParameterCCBalls}). In fact, one could state Theorem \ref{ThmMainCCThm}, taking all the formal degrees $\tilde{d}_j=1$, and that would be sufficient for our purposes. Moreover, in every place we use the single-parameter formal degrees $\tilde{d}$, they are inessential. We have stated the result as above, though, to allow us to transfer seamlessly between the vector fields $\left( X,d\right)$ and $\left( Z,\tilde{d}\right)$, without any hand-waving about the formal degrees. \end{rmk} \section{Some special single-parameter operators}\label{SectionCZOps} In this section, we describe a certain single-parameter (i.e., $\nu=1$) special case of our main theorem. This special case will be easy to obtain using the theory described in Section \ref{SectionCCGeomII}, along with the classical Calder\'on-Zygmund theory of singular integrals.
We will then, in Section \ref{SectionSquareFunc}, use the operators developed in this section to create an appropriate Littlewood-Paley theory adapted to the more general operators of this paper. We suppose we are given $K_0\Subset \Omega'\Subset \Omega''\Subset \Omega$ as in Section \ref{SectionCCGeom} and a list of $C^\infty$ vector fields $X_1,\ldots, X_q$ on $\Omega'$ with {\it single}-parameter formal degrees $d_1,\ldots, d_q\in \left(0,\infty\right)$. We assume that there exists a $\xi>0$ such that $\left( X,d\right)$ satisfies $\mathcal{D}\left( K_0, \left[0,1\right],\Omega', \xi\right)$. \begin{rmk}\label{RmkSingleParamHomogType} In the case above, for $\delta\leq \frac{\xi_1}{2}$,\footnote{Here, $\xi_1$ is as in Theorem \ref{ThmMainCCThm}.} we have, using the notation and results from Theorem \ref{ThmMainCCThm}, \begin{equation}\label{EqnSingleParamHomogType} \begin{split} \Vol{\B{X}{d}{x_0}{2\delta}}&\approx \left|\det_{n_0\times n_0} \left(2\delta\right) X\left( x_0\right)\right| \\ &\approx \left|\det_{n_0\times n_0}\delta X\left( x_0\right)\right|\\ &\approx \Vol{\B{X}{d}{x_0}{\delta}}, \end{split} \end{equation} where we have used our usual notation that $\delta X$ denotes the matrix whose columns are given by $\delta^{d_1} X_1,\ldots, \delta^{d_q}X_q$. \eqref{EqnSingleParamHomogType} is the fundamental estimate involved in showing that the balls $\B{X}{d}{\cdot}{\cdot}$ form a space of homogeneous type. Thus, if $X_1,\ldots, X_q$ spanned the tangent space (i.e., if $n_0=n$), then the above Carnot-Carath\'eodory balls would be open subsets of ${\mathbb{R}}^n$ and would endow $K_0$ with the structure of a space of homogeneous type. However, we are interested in the case when $X_1,\ldots, X_q$ do not necessarily span the tangent space.
In this case, the classical Frobenius theorem applies to show that $X_1,\ldots, X_q$ foliate $K_0$ into leaves,\footnote{The involutive distribution generated by $X_1,\ldots, X_q$ is finitely generated as a $C^\infty$ module in light of condition $\mathcal{D}$.} and each leaf is a space of homogeneous type. Using the coordinate charts ($\Phi$) developed in Section \ref{SectionCCGeomII}, we will be able to exploit this fact in what follows. This idea was also used in Section 6.2 of \cite{StreetMultiParameterCCBalls}. \end{rmk} We consider the function, \begin{equation*} \gamma\left( t,x\right):B^q\left( \rho\right)\times \Omega''\rightarrow \Omega, \end{equation*} given by, \begin{equation*} \gamma_{\left( t_1,\ldots, t_q\right)}\left( x\right) = e^{t_1X_1+\cdots+ t_q X_q}x. \end{equation*} We define dilations on ${\mathbb{R}}^q$ by, for $\delta>0$, \begin{equation*} \delta \left( t_1,\ldots, t_q\right) = \left( \delta^{d_1} t_1,\ldots, \delta^{d_q} t_q\right), \end{equation*} and we define for $\varsigma: {\mathbb{R}}^q\rightarrow {\mathbb{C}}$, and $j\geq 0$, \begin{equation*} \dil{\varsigma}{2^j} \left( t\right) = 2^{j\left( d_1+\cdots+ d_q\right)} \varsigma\left( 2^j t\right). \end{equation*} Fix a small number $a>0$. Let $\left\{\varsigma_j\right\}_{j\in {\mathbb{N}}}\subset C_0^\infty \left( B^q\left( a\right)\right)$ be a bounded subset satisfying, \begin{equation*} \int \varsigma_j \left( t\right) \: dt=0, \text{ if } j>0, \end{equation*} where we are including $0\in {\mathbb{N}}$. Let $K$ be the distribution defined by, \begin{equation*} K\left( t\right) =\sum_{j\in {\mathbb{N}}} \dil{\varsigma_j}{2^j}\left( t\right), \end{equation*} where the sum is taken in the sense of distributions. I.e., $K\in \mathcal{K}\left( q,e,a,1,1\right)$, where we are taking $e=\left( d_1,\ldots, d_q\right)\in \left(0,\infty\right)^q$.
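For orientation, note that the dilations are normalized to be mass-preserving: changing variables $s=2^j t$ (i.e., $s_k=2^{jd_k}t_k$), so that $ds = 2^{j\left( d_1+\cdots+d_q\right)}\: dt$, gives
\begin{equation*}
\int \dil{\varsigma_j}{2^j}\left( t\right)\: dt = \int 2^{j\left( d_1+\cdots+d_q\right)}\varsigma_j\left( 2^j t\right)\: dt = \int \varsigma_j\left( s\right)\: ds = 0, \quad j>0.
\end{equation*}
In particular, the summands $\dil{\varsigma_j}{2^j}$ all have comparable $L^1$ norms, so it is the cancellation condition on the $\varsigma_j$, rather than absolute convergence, that makes the sum defining $K$ converge in the sense of distributions.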
Let $\kappa:B^q\left( a\right) \times \Omega''\rightarrow {\mathbb{C}}$ be a $C^\infty$ function and let $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$ be supported on the interior of $K_0$. Define the operator, \begin{equation*} Tf\left( x\right) =\psi_1\left( x\right) \int f\left( \gamma_t\left( x\right) \right) \psi_2\left( \gamma_t\left( x\right)\right) \kappa\left( t,x\right) K\left( t\right) \: dt. \end{equation*} \begin{thm}\label{ThmSingleParamSingInt} There is an $a>0$ such that $T$ (as defined above) is bounded on $L^p$, $1<p<\infty$. \end{thm} In addition, we will need a maximal theorem. Take $\psi_1,\psi_2\in C_0^\infty\left( {\mathbb{R}}^n\right)$ supported on the interior of $K_0$ with $\psi_1\geq 0$, and define \begin{equation*} \mathcal{M} f\left( x\right) = \sup_{\delta\in \left(0,1\right]} \psi_1\left( x\right)\int_{\left|t\right|<a} \left| f\left( \gamma_{\delta t}\left( x\right)\right) \psi_2\left( \gamma_{\delta t}\left( x\right)\right)\right|\: dt. \end{equation*} Then, we have, \begin{thm}\label{ThmSingleParamMax} For $a>0$ sufficiently small, $\LpN{p}{\mathcal{M} f} \lesssim \LpN{p}{f}$, for $1<p\leq\infty$. \end{thm} Note that Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} are special cases of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm}, respectively (this is proved in Section 17.1 of \cite{StreetMultiParameterSingRadonLt}). However, we will see that Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} can be proven by reduction to the classical Calder\'on-Zygmund theory. We will then use these theorems to develop an appropriate Littlewood-Paley theory, with which to prove Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm}. We separate the proofs of Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} into two cases.
In the first case, we prove the results under the assumption that $X_1,\ldots, X_q$ span the tangent space at each point of $K_0$; this is covered in Section \ref{SectionCZSpan}. Then, we use the results in Section \ref{SectionCZSpan} to prove the more general case when $X_1,\ldots, X_q$ do not necessarily span the tangent space at each point; this is covered in Section \ref{SectionCZNoSpan}. \subsection{When the vector fields span}\label{SectionCZSpan} In this section, we prove Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} under the additional assumption that $\inf_{x\in K_0} \left|\det_{n\times n} X\left( x\right) \right|\gtrsim 1$. We will then be able to apply this special case to each leaf, in order to obtain the more general statement of Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax}. \begin{rmk} In this particular case, Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} (and the methods used in this section) are already well understood. In fact, as we will see, when the vector fields span, the operators in Theorem \ref{ThmSingleParamSingInt} are just Calder\'on-Zygmund singular integrals corresponding to the space of homogeneous type given by the balls $\B{X}{d}{x}{\delta}$. The maximal function $\mathcal{M}$ is comparable to the usual maximal function associated to this space of homogeneous type. See \cite{NagelRosaySteinWaingerEstimatesForTheBergmanAndSzegoKernels,KoenigOnMaximalSobolevAndHolderEstimatesForTheTangentialCR} for similar methods. One could also use the methods of \cite{ChristNagelSteinWaingerSingularAndMaximalRadonTransforms} to prove the results in this section, but those methods are much stronger than are needed for this simple special case. In any case, we do not know of a reference that has these results in the exact form we need them, and so we include the short proof here.
\end{rmk} We focus now on the proof of Theorem \ref{ThmSingleParamSingInt} and will explain the proof of Theorem \ref{ThmSingleParamMax} at the end of the section. Thus, let $T$ be as in Theorem \ref{ThmSingleParamSingInt}. We already know from the theory in \cite{StreetMultiParameterSingRadonLt} that \begin{equation*} \LpOpN{2}{T}\lesssim 1. \end{equation*} Our goal is to show that $T$ is a Calder\'on-Zygmund singular integral operator, and it will follow that $T$ is bounded on $L^p$ for $1<p\leq 2$. Since the class of operators discussed in Theorem \ref{ThmSingleParamSingInt} is closed under taking adjoints, it will follow that $T$ is bounded on $L^p$ for $1<p<\infty$. \begin{rmk} Actually, it is not hard to see that, instead of using the $L^2$ theory in \cite{StreetMultiParameterSingRadonLt}, we could apply the $T\left( b\right)$ theorem to obtain the $L^p$ boundedness of $T$. We leave this approach to the interested reader. \end{rmk} Let $\rho\left( x,y\right)$ be the Carnot-Carath\'eodory metric corresponding to the vector fields with formal degrees $\left(X_1,d_1\right),\ldots, \left(X_q,d_q\right)$. That is, \begin{equation*} \rho\left( x,y\right) = \inf \left\{\delta>0 : y\in \B{X}{d}{x}{\delta}\right\}. \end{equation*} Let $\widetilde{K}\left( x,y\right)$ denote the Schwartz kernel of $T$. We wish to show that \begin{equation}\label{EqnToShowSingInt} \int_{\B{X}{d}{y_1}{2\delta}^{c}} \left| \widetilde{K}\left(x,y_1 \right)-\widetilde{K}\left( x,y_2\right) \right|\: dx\lesssim 1, \text{ if }y_2\in \B{X}{d}{y_1}{\delta}, \end{equation} and the $L^p$ boundedness ($1<p\leq 2$) of $T$ will follow from the classical theory of Calder\'on-Zygmund singular integrals (see, e.g., Theorem 3 on page 19 of \cite{SteinHarmonicAnalysis}). This uses the fact that the balls $\B{X}{d}{\cdot}{\cdot}$ form a space of homogeneous type, as discussed in Remark \ref{RmkSingleParamHomogType}. We now turn to proving \eqref{EqnToShowSingInt}.
As is well known, it suffices to prove the inequality, \begin{equation}\label{EqnToShowSmoothSingInt} \left|X_x^{\alpha} X_y^{\beta} \widetilde{K}\left( x,y\right)\right|\lesssim \frac{\rho\left( x,y\right)^{-\deg\left(\alpha\right)-\deg\left(\beta\right)} }{\Vol{\B{X}{d}{x}{\rho\left( x,y\right)} }}, \end{equation} where $X_x$ denotes the list of vector fields $\left( X_1,\ldots, X_q\right)$ thought of as partial differential operators in the $x$ variable, $\alpha$ denotes an ordered multi-index, and \begin{equation*} \deg\left( \alpha\right) = \sum_{j=1}^q k_j d_j, \end{equation*} where $k_j$ is the number of times $j$ appears in the ordered multi-index $\alpha$. Similarly for $X_y$ and $\beta$. Actually, it would suffice to prove \eqref{EqnToShowSmoothSingInt} in the special case $\left|\alpha\right|=0$, $\left|\beta\right|=1$, but this is no simpler to prove. For $j\in {\mathbb{N}}$, let $T_j$ be the operator given by \begin{equation*} T_j f\left( x\right) = \psi_1\left( x\right) \int f\left( \gamma_t\left( x\right) \right) \psi_2\left( \gamma_t\left( x\right) \right) \kappa\left( t,x\right) \dil{\varsigma_j}{2^j}\left( t\right) \: dt. \end{equation*} Let $\widetilde{K}_j\left( x,y\right)$ be the Schwartz kernel of $T_j$. Thus, $\widetilde{K} = \sum_{j\in {\mathbb{N}}} \widetilde{K}_j$. To prove \eqref{EqnToShowSmoothSingInt}, it suffices to show that there is an $a>0$ (independent of $j$) such that, when $\widetilde{K}_j$ is defined as above, we have, \begin{itemize} \item $\widetilde{K}_j\left(x,y\right)$ is supported on $\left\{\left( x,y\right): \rho\left( x,y\right) \leq \xi_1 2^{-j}\right\}$, where $\xi_1$ is a constant, independent of $j$, and \item $\left|\left(2^{-j}X_x \right)^\alpha \left(2^{-j} X_y \right)^{\beta} \widetilde{K}_j\left( x,y\right)\right|\lesssim \frac{1}{\Vol{\B{X}{d}{x}{2^{-j}}}}$, where, as usual, $\delta X$ denotes the list of vector fields $\delta^{d_1} X_1,\ldots, \delta^{d_q} X_q$.
\end{itemize} To prove the above we apply Theorem \ref{ThmMainCCThm} to the list of vector fields $\left( 2^{-j}X,d\right)$ with $x_0\in K_0$. Note that all of the assumptions in that section hold uniformly for $x_0\in K_0$ and $j\in {\mathbb{N}}$. Thus we obtain $\eta_1,\xi_1>0$ and, for each $x_0\in K_0$ and $j\in {\mathbb{N}}$, a map, \begin{equation*} \Phi_{j,x_0}:B^{n}\left( \eta_1\right) \rightarrow \B{X}{d}{x_0}{\xi 2^{-j}}, \end{equation*} as in Theorem \ref{ThmMainCCThm}. To prove the claim about the support of $\widetilde{K}_j$ it suffices to show that for $x_0\in K_0$ and $\left|t\right|\leq a$, we have, \begin{equation}\label{EqnSingIntToShowSupport} e^{t_1 2^{-jd_1} X_1+\cdots+ t_q 2^{-jd_q} X_q}x_0\in \B{X}{d}{x_0}{\xi_1 2^{-j}}. \end{equation} Let $Y_1,\ldots, Y_q$ denote the pullbacks of $2^{-jd_1}X_1,\ldots, 2^{-jd_q}X_q$ via $\Phi_{j,x_0}$. Pulling \eqref{EqnSingIntToShowSupport} back via $\Phi_{j,x_0}$, it suffices to show, \begin{equation*} e^{t_1 Y_1+\cdots+t_q Y_q} 0 \in \B{Y}{d}{0}{\xi_1}. \end{equation*} Take $\eta'>0$ so small that $B^{n}\left( \eta'\right) \subseteq \B{Y}{d}{0}{\xi_1}$. It is easy to see that this is possible, since $\xi_1\gtrsim 1$ and $\inf_{u\in B^{n}\left( \eta_1\right) }\left|\det_{n\times n} Y\left( u\right)\right|\gtrsim 1$. Since $Y_1,\ldots, Y_q\in C^\infty$ uniformly in $j$ and $x_0$ (see Theorem \ref{ThmMainCCThm}), it follows that for $\left|t\right|\leq a$, with $a>0$ sufficiently small, we have, \begin{equation*} e^{t_1Y_1+\cdots+ t_qY_q}0\in B^{n}\left( \eta'\right) \subseteq \B{Y}{d}{0}{\xi_1}, \end{equation*} which completes the proof of the support of $\widetilde{K}_j$.
Since $\left|\det d\Phi_{j,x_0}\left( u\right)\right|\approx \Vol{\B{X}{d}{x_0}{2^{-j}}}$ for $u\in B^{n}\left( \eta_1\right)$ (and in light of the support of $\widetilde{K}_j$), to prove the differential inequalities on $\widetilde{K}_j$, it suffices to show, \begin{equation*} \left|\left(\det d\Phi_{j,x_0}\left( v\right) \right) Y_u^\alpha Y_v^\beta \widetilde{K}_j\left(\Phi_{j,x_0}\left( u\right), \Phi_{j,x_0}\left( v\right) \right)\right|\lesssim 1, \end{equation*} where $u,v\in B^{n}\left( \eta_1\right)$. Using that $Y_1,\ldots, Y_q$ and $\Phi_{j,x_0}$ are $C^\infty$ uniformly in any relevant parameters, it suffices to show that for all multi-indices $\alpha$ and $\beta$ (no longer ordered), \begin{equation*} \left|\partial_u^{\alpha} \partial_v^{\beta} \left( \widetilde{K}_j\left(\Phi_{j,x_0}\left( u\right), \Phi_{j,x_0}\left( v\right) \right) \det d\Phi_{j,x_0}\left( v\right)\right) \right|\lesssim 1; \end{equation*} that is, that $ \widetilde{K}_j\left(\Phi_{j,x_0}\left( u\right), \Phi_{j,x_0}\left( v\right) \right) \det d\Phi_{j,x_0}\left( v\right)$ is $C^\infty$ uniformly in any relevant parameters. Let $\Phi^{\#}$ denote the map $\Phi^{\#} g = g\circ \Phi$. Then, $ \widetilde{K}_j\left(\Phi_{j,x_0}\left( u\right), \Phi_{j,x_0}\left( v\right) \right) \det d\Phi_{j,x_0}\left( v\right)$ is the Schwartz kernel of the map \begin{equation*} \widetilde{T}_j = \Phi_{j,x_0}^{\#} T_j \left(\Phi_{j,x_0}^{\#}\right)^{-1}. \end{equation*} It is easy to see that, \begin{equation*} \widetilde{T}_j g\left( u\right) = \psi_1\left( \Phi_{j,x_0}\left( u\right) \right) \int g\left( \widetilde{\gamma}_t\left( u\right) \right) \psi_2\left( \widetilde{\gamma}_t\left( u\right)\right) \kappa\left( 2^{-j} t, \Phi_{j,x_0}\left( u\right) \right) \varsigma_j\left( t\right) \: dt, \end{equation*} where $\widetilde{\gamma}_t\left( u\right) = e^{t_1Y_1+\cdots + t_qY_q} u$.
From Theorem \ref{ThmMainCCThm}, we have that, \begin{equation*} \left|\det Y_{J_1}\left( 0\right) \right|\gtrsim 1, \end{equation*} for some $J_1\in \sI{n}{q}$. Without loss of generality, by reordering the coordinates, we may assume $J_1=\left( 1,\ldots, n\right)$. Recall that $\varsigma_j$ is supported in $B^q\left( a\right)$. For each $u$ and $t_{n+1},\ldots, t_q$ fixed, define the map, \begin{equation*} \Psi_{u,t_{n+1},\ldots, t_q}\left( t_1,\ldots, t_n\right)= e^{t_1 Y_1+\cdots +t_q Y_q}u. \end{equation*} Using the $C^\infty$ bounds for $Y_1,\ldots, Y_q$, we have that for $\left|t\right|\leq a$ (with $a>0$ sufficiently small), \begin{equation*} \left|\det d \Psi_{u,t_{n+1},\ldots,t_q}\left( t_1,\ldots, t_n\right) \right|\approx 1. \end{equation*} Applying the change of variables $v=\Psi_{u,t_{n+1},\ldots,t_q}\left( t_1,\ldots, t_n\right)$, it is immediate that the Schwartz kernel of $\widetilde{T}_j$ is $C^\infty$ uniformly in any relevant parameters. This completes the proof of Theorem \ref{ThmSingleParamSingInt} in the case when $X_1,\ldots, X_q$ span the tangent space. The proof of Theorem \ref{ThmSingleParamMax}, in this case, is merely a simpler reprise of the above. Indeed, the standard Calder\'on-Zygmund theory shows that the maximal function, \begin{equation*} \widetilde{\mathcal{M}} f\left( x\right) = \sup_{\delta\in \left(0,1\right]} \psi_1\left( x\right)\frac{1}{\Vol{\B{X}{d}{x}{\delta}} }\int_{y\in \B{X}{d}{x}{\delta}} \left|f\left( y\right) \psi_2\left( y\right)\right|\: dy, \end{equation*} is bounded on $L^p$ ($1<p\leq \infty$). Hence, we need only show the pointwise bound, \begin{equation}\label{EqnToShowSpanMax} \mathcal{M} f\left( x\right) \lesssim \widetilde{\mathcal{M}} f\left( x\right).
\end{equation} Let \begin{equation*} A_\delta f\left( x\right) = \psi_1\left( x\right) \int_{\left|t\right|\leq a} f\left( \gamma_{\delta t}\left( x\right)\right) \left|\psi_2\left( \gamma_{\delta t}\left( x\right)\right) \right|\: dt, \end{equation*} so that $\mathcal{M} f\left( x\right) = \sup_{\delta\in \left(0,1\right]} A_\delta \left|f\right|\left( x\right)$. To show \eqref{EqnToShowSpanMax}, it suffices to show, for $a>0$ sufficiently small, independent of $\delta$, \begin{itemize} \item If $\widetilde{K}_\delta\left( x,y\right)$ is the Schwartz kernel of $A_\delta$, then $\widetilde{K}_\delta\left( x,y\right)$ is supported on $\left( x,y\right)$ such that $y\in \B{X}{d}{x}{\delta}$. \item $\left|\widetilde{K}_\delta\left( x,y\right)\right|\lesssim \frac{1}{\Vol{\B{X}{d}{x}{\delta} } }$. \end{itemize} This follows just as above. \subsection{When the vector fields do not span}\label{SectionCZNoSpan} In this section, we complete the proof of Theorems \ref{ThmSingleParamSingInt} and \ref{ThmSingleParamMax} by proving the case when $X_1,\ldots, X_q$ do not span the tangent space. The idea, as outlined in Remark \ref{RmkSingleParamHomogType}, is to use the fact that the involutive distribution generated by $X_1,\ldots, X_q$ is finitely generated as a $C^\infty$ module. In fact, in light of $\mathcal{D}\left( K_0, \left[0,1\right], \Omega, \xi\right)$, $X_1,\ldots, X_q$ are generators of this distribution (as a $C^\infty$ module). Because of this, the classical Frobenius theorem applies to foliate the ambient space into leaves, with $X_1,\ldots, X_q$ spanning the tangent space to each leaf. The goal is to apply the theory of Section \ref{SectionCZSpan} to each leaf. We will be able to do this by utilizing the coordinate charts on each leaf given to us by Theorem \ref{ThmMainCCThm}. Let $n_0\left( x\right) = \dim\Span{X_1\left( x\right),\ldots, X_q\left( x\right)}$.
Then, there exist $\eta_1, \xi_1>0$ such that for each $x\in K_0$, we obtain a map $$\Phi_{x}:B^{n_0 \left(x\right)}\left( \eta_1\right)\rightarrow \B{X}{d}{x}{\xi},$$ as in Theorem \ref{ThmMainCCThm}, by applying Theorem \ref{ThmMainCCThm} to the vector fields $\left( Z,\tilde{d}\right) =\left( X,d\right)$. Let $K_2\Subset K_0$ be the support of $\psi_1$, and let $K_1$ be such that $K_2\Subset K_1\Subset K_0$. Here $A \Subset B$ denotes that $A$ is a relatively compact subset of the interior of $B$. For a function $f$ defined on $\Omega$, $\delta\leq \xi$, and $x\in K_0$, let \begin{equation*} A_{\B{X}{d}{\cdot}{\delta}} f\left( x\right) = \frac{1}{\Vol{\B{X}{d}{x}{\delta}} } \int_{\B{X}{d}{x}{\delta}} f\left( y\right) \: dy, \end{equation*} where $\Vol{\B{X}{d}{x}{\delta}}$ denotes the induced Lebesgue measure of $\B{X}{d}{x}{\delta}$ on the leaf in which $x$ lies. We restate Proposition 6.17 of \cite{StreetMultiParameterCCBalls}. \begin{prop}[Proposition 6.17 of \cite{StreetMultiParameterCCBalls}]\label{PropIntOfAvgs} There exists a constant $\xi_0>0$, $\xi_0<\xi$, such that for every $\xi'\leq \xi_0$ and every measurable function $f$ with $f\geq 0$, we have \begin{equation*} \int_{K_2} f\left( x\right) \: dx\lesssim \int_{K_1} A_{\B{X}{d}{\cdot}{\xi'}} f\left( x\right) \: dx \lesssim \int_{K_0} f\left( x\right) \: dx, \end{equation*} where the implicit constants may depend on a lower bound for $\xi'$. \end{prop} We will prove, \begin{prop}\label{PropLpAvgBound} Let $\xi'=\frac{\xi_1}{4}\wedge \frac{\xi_0}{2}$; then we have the pointwise bound for $1<p<\infty$, $x\in K_0$, \begin{equation*} A_{\B{X}{d}{\cdot}{\xi'}} \left| T f\right|^p \left( x\right) \lesssim A_{\B{X}{d}{\cdot}{2\xi'}} \left|f\right|^p\left( x\right), \end{equation*} where the implicit constant may depend on $p$, and we have taken $a>0$ sufficiently small, in the definition of $T$.
\end{prop} Before we prove Proposition \ref{PropLpAvgBound}, let us first see how it yields Theorem \ref{ThmSingleParamSingInt}. \begin{proof}[Proof of Theorem \ref{ThmSingleParamSingInt} given Propositions \ref{PropIntOfAvgs} and \ref{PropLpAvgBound}] Letting $\xi'$ be as in Proposition \ref{PropLpAvgBound}, we have, \begin{equation*} \begin{split} \LpN{p}{Tf}^p &= \int_{K_2} \left|T f\left( x\right)\right|^p \: dx\\ &\lesssim \int_{K_1} A_{\B{X}{d}{\cdot}{\xi'}} \left| T f\right|^p \left( x\right)\: dx\\ &\lesssim \int_{K_1} A_{\B{X}{d}{\cdot}{2\xi'}} \left|f\right|^p\left( x\right)\: dx\\ &\lesssim \int_{K_0} \left|f\left( x\right)\right|^p \: dx\\ &\lesssim \LpN{p}{f}^p, \end{split} \end{equation*} completing the proof. \end{proof} We now turn to the proof of Proposition \ref{PropLpAvgBound}. It suffices to show that for each $x_0\in K_0$, we have, \begin{equation*} A_{\B{X}{d}{\cdot}{\xi'}}\left|Tf\right|^p \left( \Phi_{x_0}\left( 0\right)\right) \lesssim A_{\B{X}{d}{\cdot}{2\xi'}}\left|f\right|^p\left( \Phi_{x_0}\left( 0\right)\right), \end{equation*} since $\Phi_{x_0}\left( 0\right) = x_0$. Fix $x_0$ and let $Y_1,\ldots, Y_q$ be the pullbacks of $X_1,\ldots, X_q$ via the map $\Phi_{x_0}$ to $B^{n_0\left( x_0\right)}\left( \eta_1\right)$. We have, \begin{lemma}\label{LemmaCZNospan} For $f\geq 0$ a measurable function defined on $\Omega$, \begin{equation*} A_{\B{X}{d}{\cdot}{\xi'}} f \left( x_0\right) \approx \int_{\B{Y}{d}{0}{\xi'}} f\circ \Phi_{x_0} \left( u\right)\: du. \end{equation*} \end{lemma} \begin{proof} We apply a change of variables $x=\Phi_{x_0}\left( u\right)$, using that $\Phi_{x_0}\left( \B{Y}{d}{0}{\xi'}\right) = \B{X}{d}{x_0}{\xi'}$ and that $\left|\det_{n_0\left( x_0\right)\times n_0\left( x_0\right)} d\Phi_{x_0}\left( u\right)\right|\approx \Vol{\B{X}{d}{x_0}{\xi}}$. See (B.2) of \cite{StreetMultiParameterCCBalls} for details on this sort of change of variables.
It follows that, \begin{equation*} \begin{split} \int_{\B{Y}{d}{0}{\xi'}} f\left( \Phi_{x_0}\left( u\right)\right) \: du &\approx \frac{1}{\Vol{\B{X}{d}{x_0}{\xi'}}} \int_{\B{X}{d}{x_0}{\xi'}} f\left( x\right) \: dx\\ & =A_{\B{X}{d}{\cdot}{\xi'}} f\left( x_0\right), \end{split} \end{equation*} completing the proof. \end{proof} \begin{proof}[Completion of the proof of Proposition \ref{PropLpAvgBound}] In light of Lemma \ref{LemmaCZNospan}, it suffices to show \begin{equation*} \int_{\B{Y}{d}{0}{\xi'}} \left|\left[\Phi_{x_0}^{\#} T \left(\Phi_{x_0}^{\#}\right)^{-1}\right] \Phi_{x_0}^{\#} f \left( u\right)\right|^p\: du \lesssim \int_{\B{Y}{d}{0}{2\xi'}} \left| \Phi_{x_0}^{\#} f\left( u\right)\right|^p\: du. \end{equation*} This will follow from, \begin{equation}\label{EqnCZNospanToShowPulledBack} \left\| \Phi_{x_0}^{\#} T \left(\Phi_{x_0}^{\#}\right)^{-1} \right\|_{L^p\left(\B{Y}{d}{0}{2\xi'} \right)\rightarrow L^p\left(\B{Y}{d}{0}{\xi'} \right)}\lesssim 1, \end{equation} for $1<p<\infty$, with the implicit constant independent of $x_0$. To prove \eqref{EqnCZNospanToShowPulledBack}, we apply the theory in Section \ref{SectionCZSpan} to the operator $ \Phi_{x_0}^{\#} T \left(\Phi_{x_0}^{\#}\right)^{-1} $. Note that, \begin{equation*} \Phi_{x_0}^{\#} T \left(\Phi_{x_0}^{\#}\right)^{-1} g \left( u\right) = \psi_1\left( \Phi_{x_0}\left( u\right)\right) \int g\left( \widetilde{\gamma}_t\left( u\right) \right) \psi_2\left( \widetilde{\gamma}_t\left( u\right)\right) \kappa\left( t,\Phi_{x_0}\left( u\right)\right) K\left( t\right) \: dt, \end{equation*} where $\widetilde{\gamma}_t\left( u\right) = e^{t_1Y_1+\cdots+t_qY_q}u$. We have, from Theorem \ref{ThmMainCCThm}, that $\left|\det_{n_0\left( x_0\right) \times n_0\left( x_0\right)} Y\left( u\right)\right|\approx 1,$ for $u\in B^{n_0\left( x_0\right)}\left( \eta_1\right)$. That is, $Y_1,\ldots, Y_q$ span the tangent space (uniformly in $x_0$).
It is easy to see that the methods in Section \ref{SectionCZSpan} apply to the operator $\Phi_{x_0}^{\#} T \left(\Phi_{x_0}^{\#}\right)^{-1}$ uniformly in $x_0$, establishing \eqref{EqnCZNospanToShowPulledBack} and completing the proof of Proposition \ref{PropLpAvgBound}. \end{proof} The proof of Theorem \ref{ThmSingleParamMax} follows by a simpler reprise of the above. See also Section 6.2 of \cite{StreetMultiParameterCCBalls}. \section{Auxiliary operators}\label{SectionAuxOps} In this section, we introduce a number of operators, which will be useful in the proof of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm}. Before we begin, we pick four $C_0^\infty$ cut-off functions $\denum{\psi}{0},\denum{\psi}{-1},\denum{\psi}{-2},\denum{\psi}{-3}\geq 0$, supported on the interior of $K_0$ with \begin{equation*} \psi_1,\psi_2\prec \denum{\psi}{0}\prec \denum{\psi}{-1} \prec \denum{\psi}{-2}\prec \denum{\psi}{-3}. \end{equation*} In the statement of Theorem \ref{ThmMainThmSecondPass}, we took $K\in \mathcal{K}\left( N,e,a,\nu,\nu\right)$ (recall, we are first presenting the proof in the case $\mu_0=\nu$, and in Section \ref{SectionMoreKernels} will present the necessary modifications to treat general $\mu_0$). Thus, \begin{equation*} K\left( t\right) = \sum_{j\in {\mathbb{N}}^\nu} \dil{\varsigma_j}{2^j}\left( t\right), \end{equation*} where $\left\{\varsigma_j\right\}\subseteq C_0^\infty \left( B^N\left( a\right)\right)$ is a bounded set and the $\varsigma_j$ satisfy certain cancellation conditions (see Section \ref{SectionKernels} for details). Hence, there is a corresponding decomposition of $T$. We define, for $j\in {\mathbb{N}}^\nu$, \begin{equation*} T_j f\left( x\right) = \psi_1\left( x\right) \int f\left( \gamma_t\left( x\right)\right) \psi_2\left( \gamma_t\left( x\right)\right) \kappa\left( t,x\right) \dil{\varsigma_j}{2^j}\left( t\right) \: dt. \end{equation*} We have, \begin{equation*} \sum_{j\in {\mathbb{N}}^\nu} T_j = T. 
\end{equation*} We now turn to the operators which we will use to construct our Littlewood-Paley theory. For each $\mu$, $1\leq \mu\leq \nu$, we obtain a list of vector fields with single-parameter formal degrees $\left( X^\mu, d^\mu\right)$, by letting $X^\mu_1,\ldots, X^\mu_{q_\mu}$ be those vector fields $X_j$ such that $d_j$ is non-zero in only the $\mu$th component. We then assign the formal degree to be $d_j^\mu$ (i.e., the value of the non-zero component). Using this definition, \begin{equation*} \left( \delta_\mu X^\mu, d^\mu\right) = \left( \widehat{\delta} X, \sum d\right), \end{equation*} where $\widehat{\delta}$ is $\delta_\mu$ in the $\mu$th component and $0$ in all other components, and we have suppressed the vector fields that are equal to $0$. As a consequence, $\left( X^\mu, d^\mu\right)$ satisfies $\mathcal{D}\left( K_0, \left[0,1\right], \Omega', \xi\right)$, since $\left( X,d\right)$ satisfies $\mathcal{D}\left( K_0, \left[0,1\right]^\nu, \Omega', \xi\right)$. We define (single-parameter) dilations on ${\mathbb{R}}^{q_\mu}$ by, \begin{equation}\label{EqnRqmuDil} \delta \left( t_1,\ldots, t_{q_\mu}\right) = \left(\delta^{d_1^\mu}t_1, \ldots,\delta^{d_{q_\mu}^\mu} t_{q_\mu} \right). \end{equation} Let $\phi_\mu\in C_0^\infty\left( B^{q_\mu}\left( a\right) \right)$ be such that $\int \phi_\mu =1$, and assume $\phi_\mu\geq 0$. Define, \begin{equation*} \phi_{\mu,j} = \begin{cases} \phi_\mu & \text{if }j=0,\\ \dil{\phi_\mu}{2}-\phi_\mu & \text{if }j>0. \end{cases} \end{equation*} Here, as usual, $\dil{\phi_\mu}{2^{j}}\left( t\right) = 2^{j\left( d_1^\mu+\cdots+d_{q_\mu}^\mu\right)}\phi_\mu\left( 2^j t\right)$. Define, \begin{equation*} \widehat{\gamma}_{\left( t_1,\ldots, t_{q_\mu}\right)}^\mu \left( x\right) = e^{t_1 X_1^\mu + \cdots + t_{q_\mu} X_{q_\mu}^\mu }x. 
\end{equation*} For $j\in {\mathbb{N}}$, define, \begin{equation*} D_j^\mu f\left( x\right) = \denum{\psi}{-3}\left( x\right) \int f\left(\widehat{\gamma}_t^\mu\left( x\right)\right) \denum{\psi}{-3}\left( \widehat{\gamma}_t^\mu\left( x\right) \right) \dil{\phi_{\mu,j}}{2^j}\left( t\right)\: dt; \end{equation*} so that $\sum_{j\in {\mathbb{N}}} D_j^\mu = \denum{\psi}{-3}^2.$ For $j=\left( j_1,\ldots, j_\nu\right) \in {\mathbb{N}}^\nu$, define, \begin{equation}\label{EqnDefnD} D_j = D_{j_1}^1 D_{j_2}^2\cdots D_{j_\nu}^\nu, \end{equation} so that, \begin{equation*} \sum_{j\in {\mathbb{N}}^\nu} D_j = \denum{\psi}{-3}^{2\nu}. \end{equation*} In Section \ref{SectionSquareFunc}, we will use the operators $D_j$ to create an appropriate Littlewood-Paley square function. Now we turn to the operators that will be at the basis of the study of the maximal function. The study of the maximal function will proceed by induction on the number of parameters ($\nu$), with the base case being the trivial case $\nu=0$ (we will explain this in more detail below). We now introduce operators that will facilitate this induction. Let ${\mathbb{N}}inf = {\mathbb{N}}\cup \left\{\infty\right\}$. For a subset $E\subseteq \left\{1,\ldots, \nu\right\}$ and $j=\left( j_1,\ldots, j_\nu\right) \in {\mathbb{N}}^\nu$, define $j_E\in {\mathbb{N}}inf^\nu$ to be equal to $j_\mu$ in those components $\mu\in E$, and equal to $\infty$ in the rest of the components. For $t\in {\mathbb{R}}^N$, we dilate $2^{-j_E} t$ in the usual way, where we identify $2^{-\infty}=0$; thus, $2^{-j_E} t$ is zero in every coordinate $t_j$ such that $e_j^\mu\ne 0$ for some $\mu\in E^{c}$. We may think of these dilations as $\left|E\right|$-parameter dilations acting on a lower dimensional space consisting of those coordinates which are not mapped to $0$ under this dilation. Notice that $j_{\left\{1,\ldots, \nu\right\}}=j$ and $j_{\emptyset} =\left(\infty,\infty,\cdots,\infty \right)$.
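To illustrate the notation $j_E$ on a small hypothetical example (the degrees below are chosen only for illustration), suppose $\nu=2$, $N=3$, and the multi-parameter degrees are $e_1=\left( 1,0\right)$, $e_2=\left( 0,1\right)$, $e_3=\left( 1,2\right)$. For $j=\left( j_1,j_2\right)\in {\mathbb{N}}^2$ and $E=\left\{ 1\right\}$, we have $j_E=\left( j_1,\infty\right)$ and \begin{equation*} 2^{-j_E} t = \left( 2^{-j_1} t_1, 0, 0\right), \end{equation*} since $e_2^2\ne 0$ and $e_3^2\ne 0$; thus $2^{-j_E}$ acts as a one-parameter family of dilations on the single surviving coordinate $t_1$.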
Let $\sigma\in C_0^\infty\left( B^N\left( a\right)\right)$ satisfy $\sigma\geq 0$ and $\sigma\geq 1$ on a neighborhood of $0$. We assume, further, that $\sigma$ is of the form, \begin{equation*} \sigma\left( t_1,\ldots, t_N\right) = \sigma_0\left( t_1\right)\cdots \sigma_0\left( t_N\right), \end{equation*} where $\sigma_0\in C_0^\infty\left( {\mathbb{R}}\right)$ is supported near $0$, is $\geq 0$, and is $\geq 1$ on a neighborhood of $0$. We define, for $j\in {\mathbb{N}}inf^\nu$, \begin{equation*} M_{j} f\left( x\right) = \denum{\psi}{0}\left( x\right) \int f\left(\gamma_{2^{-j}t }\left( x\right) \right) \denum{\psi}{0}\left( \gamma_{2^{-j} t}\left(x\right)\right) \sigma\left( t\right) \: dt. \end{equation*} Notice, \begin{equation}\label{EqnMemptyset} M_{j_{\emptyset}} f\left( x\right) = \denum{\psi}{0}^2\left( x\right) \left[\int \sigma\left( t\right) \: dt \right] f\left( x\right). \end{equation} It is immediate that, \begin{equation}\label{EqnsMBoundDisc} \mathcal{M} f\left( x\right) \lesssim \sup_{j\in {\mathbb{N}}^\nu} M_j \left|f\right|\left( x\right) + \denum{\psi}{0}\left( x\right) \int_{\left|t\right|\leq a} \left|f\left( \gamma_t\left( x\right)\right)\right|\denum{\psi}{0}\left( \gamma_t\left( x\right)\right) \: dt. \end{equation} The second term on the right-hand side of \eqref{EqnsMBoundDisc} is easy to control, and so to prove Theorem \ref{ThmMainMaxThm}, it suffices to prove the following proposition. \begin{prop}\label{PropDiscMaxBound} \begin{equation*} \LpN{p}{\sup_{j\in {\mathbb{N}}^{\nu}} \left|M_j f\right|}\lesssim \LpN{p}{f}, \end{equation*} for $1<p<\infty$. \end{prop} Indeed, to deduce Theorem \ref{ThmMainMaxThm}, merely apply Proposition \ref{PropDiscMaxBound} to $\left|f\right|$ and use \eqref{EqnsMBoundDisc}. The difficulty in Proposition \ref{PropDiscMaxBound} is that, unlike the operators $T_j$, the operators $M_j$ do not have any cancellation to take advantage of.
We now turn to reducing Proposition \ref{PropDiscMaxBound} to an equivalent result where there will be cancellation to take advantage of. We begin by explaining our induction. Given $E\subseteq \q\{1,\ldots, \nu\w\}$, separate $t\in {\mathbb{R}}^N$ into two variables $t=\left( t_1^E, t_2^E\right)$: $t_2^E$ will be those coordinates that are mapped to $0$ under $2^{-j_E}t$, and $t_1^E$ will be the rest of the coordinates. I.e., $t_2^E$ are those coordinates $t_j$ such that $e_j^\mu\ne 0$ for some $\mu\in E^c$. With an abuse of notation, we write $2^{-j_E} t_1^E$ as the $t_1^E$ coordinate of $2^{-j_E} t$, and so $2^{-j_E} t_1^E$ defines $\left|E\right|$-parameter dilations on $t_1^E$. Furthermore, with another abuse of notation, we write $\sigma\left( t\right) = \sigma\left( t_1^E\right)\sigma\left( t_2^E\right)$, where $\sigma\left( t_1^E\right)$ is a product of $\sigma_0\left( t_j\right)$ such that $t_j$ is a coordinate of $t_1^E$, and similarly for $t_2^E$. We may rewrite $M_{j_E}$ as follows, \begin{equation*} M_{j_E} f\left( x\right) = \left[\denum{\psi}{0}\left( x\right) \int f\left( \gamma_{2^{-j_E}t_1^E}\left( x\right) \right) \denum{\psi}{0}\left(\gamma_{2^{-j_E} t_1^E}\left( x\right) \right) \sigma\left( t_1^E\right) \: dt_1^E\right] \left[\int \sigma\left(t_2^E \right)\: dt_2^E\right]. \end{equation*} The term $\int \sigma\left(t_2^E \right)\: dt_2^E$ is a constant. It is easy to see from our assumptions that $\gamma_{2^{-j_E} t_1^E}$ is of the same form as $\gamma_{2^{-j}t}$ with $\nu$ replaced by $\left|E\right|$. I.e., $\gamma_{t_1^E}$ satisfies the hypotheses of Theorem \ref{ThmMainMaxThm} with $\nu$ replaced by $\left|E\right|$. As a consequence, $M_{j_E}$ is a constant times an operator of the same form as $M_j$, with $\nu$ replaced by $\left|E\right|$. We will prove Proposition \ref{PropDiscMaxBound} by induction on $\nu$. 
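As a hypothetical illustration of this splitting (the degrees are chosen only for illustration): if $\nu=2$, $N=3$, with degrees $e_1=\left( 1,0\right)$, $e_2=\left( 0,1\right)$, $e_3=\left( 1,2\right)$, and $E=\left\{ 1\right\}$, then $t_1^E=t_1$, $t_2^E=\left( t_2,t_3\right)$, and \begin{equation*} M_{j_E} f\left( x\right) = \left[ \int \sigma_0\left( t_2\right)\sigma_0\left( t_3\right)\: dt_2\: dt_3\right] \denum{\psi}{0}\left( x\right) \int f\left( \gamma_{\left( 2^{-j_1}t_1,0,0\right)}\left( x\right)\right) \denum{\psi}{0}\left( \gamma_{\left( 2^{-j_1}t_1,0,0\right)}\left( x\right)\right) \sigma_0\left( t_1\right)\: dt_1, \end{equation*} a constant times a one-parameter operator in the single variable $t_1$.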
Due to the above discussion, our inductive hypothesis implies, \begin{equation}\label{EqnInductHypo} \LpN{p}{\sup_{j\in {\mathbb{N}}^{\nu}}\left|M_{j_E} f\right| }\lesssim \LpN{p}{f}, \end{equation} for $E\subsetneq \left\{1,\ldots, \nu\right\}$ and $1<p\leq\infty$. The base case of our induction will correspond to $E=\emptyset$. In light of \eqref{EqnMemptyset}, the base case is trivial. For each $\mu$, $1\leq \mu\leq \nu$, and each $j\in {\mathbb{N}}inf$, define the operator, \begin{equation*} A_j^\mu f\left( x\right) = \denum{\psi}{-1}\left( x\right) \int_{t\in{\mathbb{R}}^{q_\mu}} f\left(\widehat{\gamma}_{2^{-j}t}^\mu\left( x\right) \right) \denum{\psi}{-1}\left(\widehat{\gamma}_{2^{-j}t}^\mu\left( x\right) \right)\sigma\left( t\right) \: dt, \end{equation*} where we have used the dilations on ${\mathbb{R}}^{q_\mu}$ defined in \eqref{EqnRqmuDil} and we have identified $2^{-\infty}=0$; so that $A_\infty^\mu = \left[\int \sigma\left( t\right)\: dt\right]\denum{\psi}{-1}^2$. Here we have abused notation and viewed $\sigma$ as a function on ${\mathbb{R}}^{q_{\mu}}$. By this we mean, $\sigma\left( t_1,\ldots, t_{q_\mu}\right)= \prod_{j=1}^{q_{\mu}} \sigma_0\left( t_j\right)$. Define the maximal operator, \begin{equation*} \mathcal{M}^\mu f\left( x\right) = \sup_{\delta\in \left[0,1\right]} \denum{\psi}{-3}\left( x\right) \int_{\left|t\right|\leq a} \left|f\left(\widehat{\gamma}_{\delta t}^\mu\left( x\right)\right)\right| \denum{\psi}{-3}\left(\widehat{\gamma}_{\delta t}^\mu\left( x\right) \right) \: dt. \end{equation*} Note that Theorem \ref{ThmSingleParamMax} shows that, \begin{equation*} \LpN{p}{\mathcal{M}^\mu f}\lesssim \LpN{p}{f}, \quad 1<p\leq \infty.
\end{equation*} Also it is elementary to verify the pointwise inequality \begin{equation}\label{EqnSupAjBound} \sup_{j\in {\mathbb{N}}inf} \left|A_j^\mu f\left( x\right) \right|\lesssim \mathcal{M}^\mu f\left( x\right), \end{equation} and so we have, \begin{equation*} \LpN{p}{\sup_{j\in {\mathbb{N}}inf} \left|A_j^\mu f\left( x\right)\right|}\lesssim \LpN{p}{f}, \quad 1<p\leq \infty. \end{equation*} For $j=\left(j_1,\ldots, j_\nu \right)\in {\mathbb{N}}inf^\nu$ define \begin{equation}\label{EqnDefnA} A_j = A_{j_1}^1 A_{j_2}^2\cdots A_{j_\nu}^\nu. \end{equation} Notice that \begin{equation*} A_{\left(\infty, \infty,\cdots, \infty \right)} = \left[\int \sigma\left( t\right) \: dt\right]^{\nu} \denum{\psi}{-1}^{2\nu}. \end{equation*} And since $\denum{\psi}{-1} M_j= M_j=M_{j_{\left\{1,\ldots, \nu\right\}}}$, we see that to prove Proposition \ref{PropDiscMaxBound} it suffices to prove, \begin{equation}\label{EqnMaxSTSWithA} \LpN{p}{\sup_{j\in {\mathbb{N}}^\nu}\left|A_{j_\emptyset} M_{j_{\left\{1,\ldots, \nu\right\}}}f\right|}\lesssim \LpN{p}{f},\quad 1<p\leq \infty. \end{equation} For $E\subsetneq \left\{1,\ldots, \nu\right\}$, combining \eqref{EqnInductHypo} and \eqref{EqnSupAjBound}, we see, \begin{equation}\label{EqnInductHypWithA} \LpN{p}{\sup_{j\in {\mathbb{N}}^\nu} \left|A_{j_{E^c}} M_{j_{E}} f \right| }\lesssim \LpN{p}{f},\quad 1<p\leq \infty. \end{equation} For $j\in {\mathbb{N}}^\nu$, define the operator, \begin{equation*} B_j = \sum_{E\subseteq \left\{1,\ldots, \nu\right\}} \left( -1\right)^{\left|E\right|} A_{j_{E^c}} M_{j_E}. \end{equation*} From \eqref{EqnInductHypWithA} we see that to prove \eqref{EqnMaxSTSWithA} (and hence to prove Proposition \ref{PropDiscMaxBound} and Theorem \ref{ThmMainMaxThm}) it suffices to prove, \begin{prop} \begin{equation*} \LpN{p}{\sup_{j\in {\mathbb{N}}^\nu} \left|B_j f\right|}\lesssim \LpN{p}{f}, \end{equation*} for $1<p\leq \infty$.
\end{prop} \section{Preliminary $L^2$ results} In this section we describe some $L^2$ results concerning the operators defined in Section \ref{SectionAuxOps}. These results, along with the results in Section \ref{SectionCZOps}, make up the main technical results on which our theory is based. All of the results in this section will follow from the results in \cite{StreetMultiParameterSingRadonLt} (after some reductions). \begin{thm}\label{ThmL2Thm} For $j_1,\ldots, j_r\in {\mathbb{N}}^\nu$, define, \begin{equation*} \diam{j_1,\ldots, j_r} = \max_{1\leq l,m\leq r} \left|j_l-j_m\right|. \end{equation*} If we take $a>0$ sufficiently small,\footnote{Recall, all of the operators in Section \ref{SectionAuxOps} were defined in terms of some small $a>0$.} then there exists $\epsilon_2>0$ such that, \begin{itemize} \item $\LpOpN{2}{B_{j_1}D_{j_2}}\lesssim 2^{-\epsilon_2\diam{j_1,j_2}}$, \item $\LpOpN{2}{D_{j_1}T_{j_2}D_{j_3}}\lesssim 2^{-\epsilon_2 \diam{j_1,j_2,j_3}}$, \item $\LpOpN{2}{D_{j_1}^{*}D_{j_2}^{*}D_{j_3}D_{j_4}}\lesssim 2^{-\epsilon_2\diam{j_1,j_2,j_3,j_4}}$, \item $\LpOpN{2}{D_{j_1}D_{j_2}D_{j_3}^{*}D_{j_4}^{*}}\lesssim 2^{-\epsilon_2\diam{j_1,j_2,j_3,j_4}}$. \end{itemize} Here, $j_1,j_2,j_3$, and $j_4$ are arbitrary elements of ${\mathbb{N}}^\nu$. \end{thm} The rest of this section is devoted to the proof of Theorem \ref{ThmL2Thm}. We will see that each part of Theorem \ref{ThmL2Thm} follows from an application of the same general result. This result is proved in \cite{StreetMultiParameterSingRadonLt}, and we review its statement in Section \ref{SectionGenL2}. In Section \ref{SectionL2Reduce} we show how to reduce each part of Theorem \ref{ThmL2Thm} to the result in Section \ref{SectionGenL2}. \begin{rmk} Using methods similar to the ones in this section, one can prove $\LpOpN{2}{T_{j_1}^{*}T_{j_2}}, \LpOpN{2}{T_{j_1}T_{j_2}^{*}}\lesssim 2^{-\epsilon \diam{j_1,j_2}}$. This shows, via the Cotlar-Stein lemma, that $T$ is bounded on $L^2$.
This is the proof used in \cite{StreetMultiParameterSingRadonLt}. \end{rmk} \subsection{A general $L^2$ result}\label{SectionGenL2} In this section, we review the main technical result from \cite{StreetMultiParameterSingRadonLt}; this result will imply Theorem \ref{ThmL2Thm}. The setting is as follows. We are given operators $S_1,\ldots, S_L$, and $R_1$, $R_2$, and a real number $\zeta\in \left[0,1\right]$. We will present conditions on these operators such that there exists $\epsilon>0$ with, \begin{equation}\label{EqnToShowGenL2} \LpOpN{2}{S_1\cdots S_L \left( R_1-R_2\right)}\lesssim \zeta^\epsilon. \end{equation} In Section \ref{SectionL2Reduce}, we will show that the assumptions of this section hold uniformly in $j_1,j_2,j_3,j_4$ for the operators in Theorem \ref{ThmL2Thm} (with an appropriate choice of $\zeta$), and Theorem \ref{ThmL2Thm} will follow. The term $R_1-R_2$ is how we make use of the cancellation implicit in the operators in Theorem \ref{ThmL2Thm}. To describe the operators above, suppose we are given $C^\infty$ vector fields $Z_1,\ldots, Z_q$ on $\Omega$ with {\it single}-parameter formal degrees $\tilde{d}_1,\ldots, \tilde{d}_q$. We will be taking $\left( Z,\tilde{d}\right) = \left( \delta X, \sum d\right)$ for some $\delta\in \left[0,1\right]^\nu$, to prove Theorem \ref{ThmL2Thm}. We assume that $\left( Z,\tilde{d}\right)$ satisfies the assumptions of Theorem \ref{ThmMainCCThm} uniformly for $x_0\in K_0$, for some fixed $\xi>0$. We assume that the first $r$ vector fields, $Z_1,\ldots, Z_r$, generate $Z_1,\ldots, Z_q$ in the sense that there is an\footnote{The implicit constants in \eqref{EqnToShowGenL2} may depend on $M_1$ and $\xi$.} $M_1$ such that for every $j$, $r+1\leq j\leq q$, $Z_j$ may be written in the form, \begin{equation*} Z_j = \ad{Z_{l_1}} \ad{Z_{l_2}} \cdots \ad{Z_{l_m}} Z_{l_{m+1}}, \quad 1\leq l_k\leq r, \quad 0\leq m\leq M_1-1.
\end{equation*} \begin{defn} Let $\widehat{\gamma}:B^N\left( \rho\right)\times \Omega''\rightarrow \Omega$ be a $C^\infty$ function, satisfying $\widehat{\gamma}_0\left( x\right) \equiv x$. We say that $\widehat{\gamma}$ is controlled by $\left( Z,\tilde{d}\right)$ at the unit scale if the following holds. Define the vector field $\widehat{W}\left( t,x\right)$ by, \begin{equation*} \widehat{W}\left( t,x\right) = \frac{d}{d\epsilon}\bigg|_{\epsilon=1} \widehat{\gamma}_{\epsilon t}\circ \widehat{\gamma}_{t}^{-1}\left( x\right). \end{equation*} We suppose there exist $\rho_1,\tau_1>0$ such that for every $x_0\in K_0$, \begin{itemize} \item $\widehat{W}\left( t,x\right) = \sum_{l=1}^q c_l\left( t,x\right) Z_l\left( x\right)$, on $\B{Z}{\tilde{d}}{x_0}{\tau_1}$, \item $\sum_{\left|\alpha\right|+\left|\beta\right|\leq m} \left\| Z^\alpha \partial_t^\beta c_l \right\|_{C^0\left( B^N\left(\rho_1\right)\times \B{Z}{\tilde{d}}{x_0}{\tau_1}\right)}<\infty$, for every $m$. \end{itemize} \end{defn} \begin{rmk} Note that the assumption that $\left( X,d\right)$ controls $\gamma$ can be restated as $\left( \delta X, \sum d\right)$ controls $\gamma_{\delta t}$ at the unit scale for every $\delta\in \left[0,1\right]^\nu$, uniformly in $\delta$. \end{rmk} We now turn to defining the operators $S_j$. We assume, for each $j$, we are given a $C^\infty$ function $\widehat{\gamma}_j:B^{N_j}\left(\rho\right)\times \Omega''\rightarrow \Omega$ with $\widehat{\gamma}_j\left(0,x\right)\equiv x$, and that this function is controlled by $\left( Z,\tilde{d}\right)$ at the unit scale. As usual, we restrict our attention to $\rho>0$ small, so that $\widehat{\gamma}_{j,t}^{-1}$ makes sense wherever we use it. We suppose we are given $\psi_{j,1},\psi_{j,2}\in C_0^\infty\left( {\mathbb{R}}^n\right)$ supported on the interior of $K_0$ and $\kappa_j\in C^\infty\left(\overline{B^{N_j}\left( a\right)}\times \overline{\Omega' } \right)$.
Finally, we suppose we are given $\varsigma_j\in C_0^\infty\left( B^{N_j}\left( a\right)\right)$. We define, \begin{equation*} S_j f\left( x\right) = \psi_{j,1}\left( x\right) \int f\left( \widehat{\gamma}_{j,t}\left( x\right)\right) \psi_{j,2}\left(\widehat{\gamma}_{j,t}\left( x\right)\right) \kappa_j\left( t,x\right) \varsigma_j\left( t\right)\: dt. \end{equation*} \begin{defn} If $S_j$ is of the above form, we say $S_j$ is controlled by $\left( Z,\tilde{d}\right)$ at the unit scale. \end{defn} \begin{rmk}\label{RmkAdjointControl} If $S_j$ is controlled by $\left( Z,\tilde{d}\right)$ at the unit scale, then so is $S_j^{*}$. This is shown in \cite{StreetMultiParameterSingRadonLt}. Furthermore, a simple change of variables shows that if $S_j$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$, then $\LpOpN{2}{S_j}\lesssim 1$. \end{rmk} We assume, further, that for each $l$, $1\leq l\leq r$, there is a $j$, $1\leq j\leq L$, and a multi-index $\alpha$ (with $\left|\alpha\right|\leq M_2$ for some\footnote{The implicit constants in \eqref{EqnToShowGenL2} are allowed to depend on $M_2$.} $M_2$), such that, \begin{equation*} Z_l\left( x\right) = \frac{1}{\alpha!} \frac{\partial}{\partial t}^\alpha\bigg|_{t=0} \frac{d}{d\epsilon}\bigg|_{\epsilon=1} \widehat{\gamma}_{j,\epsilon t}\circ \widehat{\gamma}_{j,t}^{-1}\left( x\right). \end{equation*} This concludes our assumptions on $S_1,\ldots, S_L$. We now turn to the operators $R_1$ and $R_2$. It is here where $\zeta$ plays a role. We assume we are given a $C^\infty$ function $\widetilde{\gamma}_{t,s}$ which is controlled by $\left( Z,\tilde{d}\right)$ at the unit scale: \begin{equation*} \widetilde{\gamma}: B^{\widetilde{N}}\left( \rho\right)\times \left[-1,1\right]\times \Omega''\rightarrow \Omega, \quad \widetilde{\gamma}_{0,0}\left( x\right) \equiv x.
\end{equation*} \begin{rmk} Here we are thinking of $\left( t,s\right)$ as playing the role of the $t$ variable in the definition of control. \end{rmk} We suppose we are given $\widetilde{\kappa}\left( t,s,x\right) \in C^{\infty}\left(\overline{B^{\widetilde{N}}\left( a\right)} \times \left[-1,1\right]\times \Omega'' \right)$, $\widetilde{\varsigma}\in L^1\left( B^{\widetilde{N}}\left( a\right)\right)$, and $\widetilde{\psi}_1,\widetilde{\psi}_2\in C^\infty_0\left( {\mathbb{R}}^n\right)$ supported on the interior of $K_0$. We define, for $s\in \left[-1,1\right]$, \begin{equation*} R^{s} f\left(x\right) = \widetilde{\psi}_1\left( x\right) \int f\left( \widetilde{\gamma}_{t,s}\left( x\right)\right) \widetilde{\psi}_2\left( \widetilde{\gamma}_{t,s}\left( x\right)\right) \widetilde{\kappa}\left( t,s,x\right) \widetilde{\varsigma}\left( t\right)\: dt. \end{equation*} We set $R_1=R^{\zeta}$ and $R_2=R^{0}$. \begin{thm}[Theorem 14.5 of \cite{StreetMultiParameterSingRadonLt}]\label{ThmGenL2Thm} In the above setup, if $a>0$ is sufficiently small, we have, \begin{equation*} \LpOpN{2}{S_1\cdots S_L \left( R_1-R_2\right)}\lesssim \zeta^{\epsilon}, \end{equation*} for some $\epsilon>0$. \end{thm} \begin{rmk}\label{RmkUnifAssump} It is important in our applications of Theorem \ref{ThmGenL2Thm} that the various constants can be chosen independent of any relevant parameters. I.e., if all of the hypotheses of this section hold ``uniformly,'' then so does the conclusion of Theorem \ref{ThmGenL2Thm}. Indeed, this is the case, and is discussed further and made precise in \cite{StreetMultiParameterSingRadonLt}. In this paper, we merely say that in our proof of Theorem \ref{ThmL2Thm}, all of our applications of Theorem \ref{ThmGenL2Thm} will satisfy the hypotheses of this section uniformly in the appropriate sense, and we leave the straightforward verification of this fact to the reader.
\end{rmk} \subsection{Reduction to the general $L^2$ result}\label{SectionL2Reduce} This section is devoted to proving Theorem \ref{ThmL2Thm} by applying Theorem \ref{ThmGenL2Thm}. We implicitly choose $a>0$ small enough that Theorem \ref{ThmGenL2Thm} applies. Since the assumptions of Theorem \ref{ThmGenL2Thm} will hold uniformly in $j_1,j_2,j_3,j_4$, we will have that $a>0$, $\epsilon>0$, and the implicit constant in Theorem \ref{ThmGenL2Thm} can all be chosen independent of $j_1,j_2,j_3,j_4\in {\mathbb{N}}^\nu$. See Remark \ref{RmkUnifAssump} and \cite{StreetMultiParameterSingRadonLt} for more details on this. Recall the list of vector fields $\left( X,d\right) = \left( X_1,d_1\right),\ldots, \left( X_q,d_q\right)$ satisfying $\mathcal{D}\left( K_0, \left[0,1\right]^\nu\right)$ defined in Section \ref{SectionCurves}. Our assumptions on $\gamma$ can be restated (by possibly reordering $\left( X_1,d_1\right),\ldots, \left( X_q,d_q\right)$) as follows: there exists $r\leq q$ such that, \begin{enumerate} \item $\left( X,d\right)$ controls $\gamma$. \item For $1\leq l\leq r$, $d_l$ is nonzero in only one component. \item Every $\left( X_j,d_j\right)$, with $r<j\leq q$, can be written as, \begin{equation*} X_j = \ad{X_{l_1}}\ad{X_{l_2}}\cdots \ad{X_{l_m}}X_{l_{m+1}}, \end{equation*} \begin{equation*} d_j= d_{l_1}+d_{l_2}+\cdots+d_{l_{m+1}}, \end{equation*} with $1\leq l_k\leq r$. \item Every $\left( X_l, d_l\right)$, $1\leq l\leq r$, is of the form, \begin{equation}\label{EqnHowTheXjsCome} X_l = \frac{1}{\alpha!}\frac{\partial}{\partial t}^{\alpha}\bigg|_{t=0} \frac{d}{d\epsilon}\bigg|_{\epsilon=1} \gamma_{\epsilon t}\circ \gamma_t^{-1}\left( x\right), \end{equation} \begin{equation}\label{EqnHowThedjsCome} d_l = \deg\left( \alpha\right). \end{equation} \end{enumerate} We now describe how the above assumptions come into play in what follows. Take $\delta\in \left[0,1\right]^\nu$.
Define the list of vector fields with single-parameter formal degrees $\left( Z,\tilde{d}\right)=\left( \delta X,\sum d\right)$. Note that $Z_1,\ldots, Z_r$ generate $Z_1,\ldots, Z_q$, in the sense that every $Z_j$ ($r+1\leq j\leq q$) can be written in the form, \begin{equation*} Z_j = \ad{Z_{l_1}}\ad{Z_{l_2}}\cdots \ad{Z_{l_m}}Z_{l_{m+1}}, \end{equation*} with $1\leq l_k\leq r$ for every $k$. Fix $\delta_1=\left( \delta_1^1,\ldots, \delta_{1}^\nu\right), \delta_2=\left( \delta_2^1,\ldots, \delta_2^\nu\right)\in \left[0,1\right]^\nu$ and assume $\delta_1^\mu\leq \delta_2^\mu$ for every $\mu$. Then, using the fact that $\left( X,d\right)$ controls $\gamma$, we have $\left( \delta_2 X, \sum d\right)$ controls $\gamma_{\delta_1 t}$ at the unit scale, uniformly in $\delta_1,\delta_2$. Furthermore, suppose that $\delta_1^\mu=\delta_2^\mu$ for some fixed $\mu$. Suppose further that for $j_0$ fixed ($j_0\leq r$), $d_{j_0}$ is nonzero in only the $\mu$th coordinate. Define $\widetilde{\gamma}_t = \gamma_{\delta_1 t}$. We then have, \begin{equation*} \delta_2^{d_{j_0}} X_{j_0} = \delta_1^{d_{j_0}} X_{j_0} =\frac{1}{\alpha!}\frac{\partial}{\partial t}^{\alpha}\bigg|_{t=0} \frac{d}{d\epsilon}\bigg|_{\epsilon=1} \widetilde{\gamma}_{\epsilon t}\circ \widetilde{\gamma}_t^{-1}\left( x\right), \end{equation*} for some $\alpha$. We now turn to the proof of Theorem \ref{ThmL2Thm}. We describe, in detail, the proof for $B_j D_k$ as it is the most complicated (here we have replaced $j_1$ with $j$ and $j_2$ with $k$ for notational convenience). We then indicate the modifications necessary to study all of the other operators. Let $\ell=j\wedge k$ (the coordinatewise minimum). Define $\left( Z,\tilde{d}\right) = \left( 2^{-\ell} X,\sum d\right)$, and $\iota_\infty = \left|j-k\right|_\infty$. Our goal is to show that there exists $\epsilon>0$ such that, \begin{equation}\label{EqnToShowBjDk} \LpOpN{2}{B_j D_k}\lesssim 2^{-\epsilon \iota_\infty}.
\end{equation} Note that this is trivial if $\iota_\infty=0$ and so we assume $\iota_\infty>0$ in what follows. We separate the proof into two cases. The first case is when there is a $\mu_1$ ($1\leq \mu_1\leq \nu$) such that $\iota_\infty = k_{\mu_1}-j_{\mu_1}$. In this case, we need only use the cancellation in the operator $D_k$, and so it suffices to prove, \begin{equation}\label{EqnToShowL21} \LpOpN{2}{A_{j_{E^c}} M_{j_{E}} D_{k}}\lesssim 2^{-\epsilon \iota_\infty}, \end{equation} for every $E\subseteq \left\{1,\ldots, \nu\right\}$. To prove \eqref{EqnToShowL21}, it suffices to prove, \begin{equation}\label{EqnToShowL22} \LpOpN{2}{ \left[ D_{k}^{*} M_{j_E}^{*} A_{j_{E^c}}^{*} A_{j_{E^c}} M_{j_E} D_{k} \right]^2 }\lesssim 2^{-\epsilon\iota_\infty}, \end{equation} where we have changed $\epsilon$ (here we use $\LpOpN{2}{S}^4 = \LpOpN{2}{\left( S^{*}S\right)^2}$ with $S=A_{j_{E^c}} M_{j_E} D_k$). Using that $\LpOpN{2}{D_k^{*}}, \LpOpN{2}{M_{j_E}^{*}}, \LpOpN{2}{A_{j_{E^c}}^{*}}\lesssim 1$, to prove \eqref{EqnToShowL22} it suffices to show, \begin{equation}\label{EqnToShowL23} \LpOpN{2}{A_{j_{E^c}} M_{j_E} D_k D_k^{*} M_{j_E}^{*}A_{j_{E^c}}^{*} A_{j_{E^c}} M_{j_E} D_k}\lesssim 2^{-\epsilon \iota_\infty}. \end{equation} We now expand the last $D_k$ in \eqref{EqnToShowL23} into $D_k = D_{k_1}^1 D_{k_2}^2\cdots D_{k_\nu}^\nu$. Using that $\LpOpN{2}{D_{k_\mu}^\mu}\lesssim 1$ for every $\mu$, to prove \eqref{EqnToShowL23} it suffices to show, \begin{equation}\label{EqnToShowL24} \LpOpN{2}{A_{j_{E^c}} M_{j_E} D_k D_k^{*} M_{j_E}^{*}A_{j_{E^c}}^{*} A_{j_{E^c}} M_{j_E} D_{k_1}^1 D_{k_2}^2\cdots D_{k_{\mu_1}}^{\mu_1}}\lesssim 2^{-\epsilon \iota_\infty}. \end{equation} We will prove \eqref{EqnToShowL24} by applying Theorem \ref{ThmGenL2Thm} with, \begin{equation*} S_1\cdots S_L = A_{j_{E^c}} M_{j_E} D_k D_k^{*} M_{j_E}^{*}A_{j_{E^c}}^{*} A_{j_{E^c}} M_{j_E} D_{k_1}^1 D_{k_2}^2\cdots D_{k_{\mu_1-1}}^{\mu_1-1}, \end{equation*} \begin{equation*} R_1=D_{k_{\mu_1}}^{\mu_1}, \quad R_2=0.
\end{equation*} In the above, we are thinking of $D_{k}$ and $A_{j_{E^{c}}}$ as products of terms (see \eqref{EqnDefnD} and \eqref{EqnDefnA}), and assigning an $S_p$ to each term in the product; similarly for the adjoints. First we verify that each $S_p$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$. We will begin by showing that $A_{j_\mu}^\mu$, $A_\infty^\mu$, and $D_{k_\mu}^\mu$ are controlled at the unit scale. We will also show that $M_{j_E}$ is controlled at the unit scale. It will then follow that $D_{k}^{*}$, $M_{j_E}^{*}$, and $A_{j_{E^c}}^{*}$ are all products of operators which are controlled at the unit scale, since if an operator is controlled at the unit scale, so is its adjoint (Remark \ref{RmkAdjointControl}). Consider, \begin{equation*} A_{j_\mu}^{\mu} f\left( x\right) =\denum{\psi}{-1}\left( x\right) \int_{t\in {\mathbb{R}}^{q_\mu}} f\left( \widehat{\gamma}_{2^{-j_\mu} t}^\mu \left( x\right) \right) \denum{\psi}{-1}\left( \widehat{\gamma}_{2^{-j_\mu}t}^\mu\left( x\right)\right) \sigma\left( t\right) \: dt, \end{equation*} and so to show that $A_{j_\mu}^\mu$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$, it suffices to show that $\widehat{\gamma}_{2^{-j_\mu}t}^\mu$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$. However, \begin{equation*} \widehat{\gamma}_{2^{-j_\mu} t}^\mu \left( x\right) = \exp\left( 2^{-j_\mu d_1^\mu} t_1 X_1^\mu + \cdots + 2^{-j_\mu d_{q_\mu}^\mu} t_{q_\mu} X_{q_\mu}^\mu \right)x. \end{equation*} By definition, the list of vector fields $\left( 2^{-j_\mu} X^\mu, d^\mu\right)$ is a sublist of the list of vector fields $\left( 2^{-j} X,\sum d\right)$. Since $j\geq \ell$ coordinatewise, it follows immediately from Lemma 12.18 of \cite{StreetMultiParameterSingRadonLt} that $\widehat{\gamma}_{2^{-j_\mu} t}^{\mu}$ is controlled at the unit scale by $\left( Z,\tilde{d}\right) = \left( 2^{-\ell} X,\sum d\right)$.
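The scaling underlying this application of Lemma 12.18 of \cite{StreetMultiParameterSingRadonLt} can be made explicit. Since $j\geq \ell$ coordinatewise, for each $m$ we may write \begin{equation*} 2^{-j_\mu d_m^\mu} X_m^\mu = 2^{-\left( j_\mu-\ell_\mu\right) d_m^\mu} \left( 2^{-\ell_\mu d_m^\mu} X_m^\mu\right), \end{equation*} and $2^{-\ell_\mu d_m^\mu} X_m^\mu$ appears in the list $\left( Z,\tilde{d}\right) = \left( 2^{-\ell}X, \sum d\right)$, since the multi-parameter degree of $X_m^\mu$ is nonzero in only the $\mu$th component, where it equals $d_m^\mu$. The factors $2^{-\left( j_\mu-\ell_\mu\right) d_m^\mu}\leq 1$ are bounded uniformly in $j$, which is the key point in verifying control at the unit scale uniformly.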
It is trivial that $A_{\infty}^\mu$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$, since $\widehat{\gamma}_t\left( x\right) \equiv x$ is controlled at the unit scale by $\left(Z,\tilde{d}\right)$. The proof that $D_{k_\mu}^\mu$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$ follows just as the proof for $A_{j_\mu}^\mu$. We now turn to $M_{j_E}$. Since, \begin{equation*} M_{j_E} f\left( x\right) = \denum{\psi}{0}\left( x\right) \int f\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \denum{\psi}{0}\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \sigma\left( t\right)\: dt, \end{equation*} we need only show that $\gamma_{2^{-j_E}t}$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$. As discussed at the beginning of this section, $\gamma_{2^{-j}t}$ is controlled by $\left( Z,\tilde{d}\right)$. Since $\gamma_{2^{-j_E}t}$ is the same as $\gamma_{2^{-j}t}$ except with some of the coordinates of $t$ set to $0$, the result follows. This completes the proof that each $S_p$ is controlled at the unit scale by $\left( Z,\tilde{d}\right)$. To complete our discussion of the $S_p$, we need to show that for each $l$, $1\leq l\leq r$, $Z_l$ is of the form, \begin{equation}\label{EqnToShowZAppears} Z_l\left( x\right) = \frac{1}{\alpha!}\frac{\partial}{\partial t}^\alpha\bigg|_{t=0}\frac{d}{d\epsilon}\bigg|_{\epsilon=1} \widetilde{\gamma}_{\epsilon t}\circ \widetilde{\gamma}_t^{-1}\left( x\right), \end{equation} for some $\alpha$, where $\widetilde{\gamma}$ is one of the functions defining the maps $S_1,\ldots, S_L$. Fix $l$, $1\leq l\leq r$. Recall, $Z_l= 2^{-\ell \cdot d_l} X_l$, and $d_l$ is nonzero in precisely one component. Let us suppose that $d_l$ is nonzero in only the $\mu_2$ component. Thus, $Z_l= 2^{-\ell_{\mu_2} d_l^{\mu_2}} X_l$. There are two possibilities. Either $\ell_{\mu_2}= j_{\mu_2}$ or $\ell_{\mu_2}=k_{\mu_2}$. We deal with the second case first. Suppose $\ell_{\mu_2}= k_{\mu_2}$.
Pick $p$ such that $S_p = D_{k_{\mu_2}}^{\mu_2}$. We have, \begin{equation*} D_{k_{\mu_2}}^{\mu_2} f\left( x\right) = \denum{\psi}{-3}\left( x\right) \int f\left( \widetilde{\gamma}_t\left( x\right)\right) \denum{\psi}{-3}\left( \widetilde{\gamma}_t\left( x\right)\right) \phi_{\mu_2,k_{\mu_2}}\left( t\right) \: dt, \end{equation*} where, \begin{equation*} \widetilde{\gamma}_t\left( x\right) = \widehat{\gamma}_{2^{-k_{\mu_2}} t}^{\mu_2}\left( x\right) = \exp\left( 2^{-k_{\mu_2} d_1^{\mu_2}} t_1 X_1^{\mu_2} + \cdots + 2^{-k_{\mu_2} d_{q_{\mu_2}}^{\mu_2}} t_{q_{\mu_2}} X_{q_{\mu_2}}^{\mu_2}\right)x. \end{equation*} By the definition of $\left( X^{\mu_2}, d^{\mu_2}\right)$, $\left( X_l, d_l^{\mu_2}\right)$ appears in the list $\left( X^{\mu_2},d^{\mu_2}\right)$ (this uses the fact that $d_l$ is nonzero in only the $\mu_2$ coordinate). Hence, $Z_l$ is of the form $2^{-\ell_{\mu_2} d_{m}^{\mu_2}} X_m^{\mu_2}=2^{-k_{\mu_2} d_{m}^{\mu_2}} X_m^{\mu_2}$ for some $m$. It follows that, \begin{equation*} Z_l \left( x\right) = \frac{\partial}{\partial t_m}\bigg|_{t=0} \frac{d}{d\epsilon}\bigg|_{\epsilon =1} \widetilde{\gamma}_{\epsilon t}\circ \widetilde{\gamma}_{t}^{-1} \left(x \right), \end{equation*} which completes the proof of \eqref{EqnToShowZAppears} in this case. We now turn to the case when $\ell_{\mu_2}=j_{\mu_2}$. We separate this case into two cases: when $\mu_2\in E$ and when $\mu_2\in E^{c}$. First we deal with the case when $\mu_2\in E^{c}$. In that case, we use $p$ such that $S_p=A_{j_{\mu_2}}^{\mu_2}$, and the proof of \eqref{EqnToShowZAppears} proceeds just as in the case for $D_{k_{\mu_2}}^{\mu_2}$ above. We turn to the case when $\mu_2\in E$. In this case, we use $p$ such that $S_p= M_{j_E}$. We have, \begin{equation*} M_{j_E} f\left( x\right) = \denum{\psi}{0}\left( x\right)\int f\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \denum{\psi}{0}\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \sigma\left(t\right)\: dt. 
\end{equation*} Hence, we take $\widetilde{\gamma}_t= \gamma_{2^{-j_E}t}$. Note, if $\widetilde{\gamma}_t$ were instead taken to be equal to $\gamma_{2^{-j}t}$, then \eqref{EqnToShowZAppears} would follow immediately from \eqref{EqnHowTheXjsCome} and \eqref{EqnHowThedjsCome}. $\gamma_{2^{-j_E}t}$ is just $\gamma_{2^{-j}t}$ with some of the coordinates set to $0$. Thus, to prove \eqref{EqnToShowZAppears}, we need only show that $\partial_t^{\alpha}$ in \eqref{EqnHowTheXjsCome} only involves those coordinates of $2^{-j_E}t$ which are not identically $0$. Let $\alpha$ be the multi-index from \eqref{EqnHowTheXjsCome}. We know that $\deg\left( \alpha\right) = d_l$, which is nonzero in only the $\mu_2$ coordinate. Thus if $t_m$ is a coordinate appearing in $\partial_t^{\alpha}$, then $e_m$ must be nonzero in only the $\mu_2$ coordinate. Since $j_E$ is $\infty$ only in those coordinates $\mu\in E^{c}$, and $\mu_2\in E$, it follows that $2^{-j_E}t$ is not identically $0$ in any of the coordinates appearing in $\partial_t^{\alpha}$. \eqref{EqnToShowZAppears} follows. In conclusion, the operators $S_1,\ldots, S_L$ satisfy all the hypotheses of Theorem \ref{ThmGenL2Thm}. We now turn to $R_1,R_2$. Recall, we are presently considering the case $\iota_\infty = k_{\mu_1}-j_{\mu_1}$.
We have, \begin{equation*} \begin{split} D_{k_{\mu_1}}^{\mu_1} f\left( x\right) &= \denum{\psi}{-3}\left( x\right) \int f\left( \widehat{\gamma}_{2^{-k_{\mu_1}}t}^{\mu_1}\left( x\right)\right) \denum{\psi}{-3}\left(\widehat{\gamma}_{2^{-k_{\mu_1}}t}^{\mu_1}\left( x\right) \right) \phi_{\mu_1,k_{\mu_1}} \left( t\right) \: dt\\ &\quad -\denum{\psi}{-3}\left( x\right) \int f\left( \widehat{\gamma}_{0}^{\mu_1}\left( x\right)\right) \denum{\psi}{-3}\left(\widehat{\gamma}_{0}^{\mu_1}\left( x\right) \right) \phi_{\mu_1,k_{\mu_1}} \left( t\right) \: dt\\ &=:R_1f\left( x\right) - R_2f\left(x\right), \end{split} \end{equation*} where we have used that $\int \phi_{\mu_1,k_{\mu_1}}=0$ (since $k_{\mu_1}>0$, which follows from our assumption that $\iota_\infty>0$) and therefore $R_2=0$. Let $c_0=\min_{1\leq m\leq q_{\mu_1}} d_{m}^{\mu_1}>0$, and define $\zeta = 2^{-c_0\left(k_{\mu_1}-\ell_{\mu_1} \right)}=2^{-c_0\iota_\infty}$. Let, \begin{equation*} \widetilde{\gamma}_{t,s}\left( x\right) = \exp\left( 2^{-\ell_{\mu_1} d_1^{\mu_1}} 2^{-\left(k_{\mu_1}-\ell_{\mu_1}\right) \left(d_1^{\mu_1}-c_0\right) } s t_1 X_1^{\mu_1} + \cdots+2^{-\ell_{\mu_1} d_{q_{\mu_1}}^{\mu_1}} 2^{-\left(k_{\mu_1}-\ell_{\mu_1}\right) \left(d_{q_{\mu_1}}^{\mu_1}-c_0\right) } st_{q_{\mu_1}} X_{q_{\mu_1}}^{\mu_1} \right)x. \end{equation*} It is simple to verify that $\left( Z,\tilde{d}\right)$ controls $\widetilde{\gamma}_{t,s}$ at the unit scale. In particular, this follows from the fact that $\left( Z,\tilde{d}\right)$ controls $\widehat{\gamma}_{2^{-\ell_{\mu_1}} t}^{\mu_1}$ at the unit scale (see Lemma 12.18 of \cite{StreetMultiParameterSingRadonLt}), and $\widetilde{\gamma}_{t,s}$ is just $\widehat{\gamma}_{2^{-\ell_{\mu_1}} t}^{\mu_1}$ with $t_m$ replaced by $2^{-\left(k_{\mu_1}-\ell_{\mu_1}\right) \left(d_m^{\mu_1}-c_0\right) } s t_m$. Note that, \begin{equation*} \widehat{\gamma}_{2^{-k_{\mu_1}}t}^{\mu_1}= \widetilde{\gamma}_{t, \zeta}.
\end{equation*} We therefore have, \begin{equation*} \begin{split} \left( R_1-R_2\right) f\left( x\right) &= \denum{\psi}{-3}\left( x\right) \int f\left( \widetilde{\gamma}_{t,\zeta}\left( x\right)\right) \denum{\psi}{-3}\left(\widetilde{\gamma}_{t,\zeta}\left( x\right) \right) \phi_{\mu_1,k_{\mu_1}} \left( t\right) \: dt\\ &\quad -\denum{\psi}{-3}\left( x\right) \int f\left( \widetilde{\gamma}_{t,0}\left( x\right)\right) \denum{\psi}{-3}\left(\widetilde{\gamma}_{t,0}\left( x\right) \right) \phi_{\mu_1,k_{\mu_1}} \left( t\right) \: dt. \end{split} \end{equation*} This completes the proof that $R_1-R_2$ has the desired form. Theorem \ref{ThmGenL2Thm} applies to show that there exists $\epsilon>0$ such that, \begin{equation*} \LpOpN{2}{S_1\cdots S_L\left( R_1-R_2\right)}\lesssim \zeta^{\epsilon} = 2^{-\epsilon' \iota_\infty}, \end{equation*} for some $\epsilon'>0$. This completes the proof of \eqref{EqnToShowL24} and therefore shows, \begin{equation*} \LpOpN{2}{B_j D_k}\lesssim 2^{-\epsilon'' \left|j-k\right|}, \end{equation*} in this case. We return to deal with the second case: there is a $\mu_1$ such that $\iota_\infty = j_{\mu_1}-k_{\mu_1}$. We wish to show, \begin{equation*} \LpOpN{2}{B_j D_k}\lesssim 2^{-\epsilon\iota_\infty}. \end{equation*} Applying the triangle inequality, it suffices to show for every $E\subseteq \q\{1,\ldots, \nu\w\}\setminus \left\{\mu_1\right\}$, \begin{equation*} \LpOpN{2}{ \left[A_{j_{\left(E\cup \q\{\mu_1\w\}\right)^c}}M_{j_{E\cup \q\{\mu_1\w\}}} - A_{j_{E^c}}M_{j_E} \right]D_k }\lesssim 2^{-\epsilon \iota_\infty}. \end{equation*} Let $O_{j,k,E} =\left[A_{j_{\left(E\cup \q\{\mu_1\w\}\right)^c}}M_{j_{E\cup \q\{\mu_1\w\}}} - A_{j_{E^c}}M_{j_E} \right]D_k $. Thus, we wish to show, \begin{equation*} \LpOpN{2}{O_{j,k,E}^{*} O_{j,k,E}}\lesssim 2^{-\epsilon \iota_\infty}.
\end{equation*} Applying the triangle inequality to the term $O_{j,k,E}^{*}$, we see that it suffices to show for every $F\subseteq \q\{1,\ldots, \nu\w\}$, \begin{equation*} \LpOpN{2}{D_{k}^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} O_{j,k,E}}\lesssim 2^{-\epsilon \iota_\infty}. \end{equation*} Using that $\LpOpN{2}{O_{j,k,E}}\lesssim 1$, we see, \begin{equation*} \begin{split} \LpOpN{2}{D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} O_{j,k,E}}^2 &= \LpOpN{2}{O_{j,k,E}^{*} A_{j_{F^{c}}} M_{j_F} D_k D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} O_{j,k,E} }\\ &\lesssim \LpOpN{2}{ A_{j_{F^{c}}} M_{j_F} D_k D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} O_{j,k,E}}. \end{split} \end{equation*} Thus, it suffices to show, \begin{equation}\label{EqnToShowL2PO} \LpOpN{2}{P_{j,k,F} O_{j,k,E}}\lesssim 2^{-\epsilon \iota_\infty}, \end{equation} where $P_{j,k,F}= A_{j_{F^{c}}} M_{j_F} D_k D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} $. We now use that $\LpOpN{2}{D_k}, \LpOpN{2}{M_{j_E}}, \LpOpN{2}{M_{j_{E\cup \q\{\mu_1\w\}}}}\lesssim 1$, to see, \begin{equation*} \begin{split} \LpOpN{2}{P_{j,k,F}O_{j,k,E}} \lesssim &\LpOpN{2}{P_{j,k,F} \left[ A_{j_{\left(E\cup \q\{\mu_1\w\}\right)^c }}-A_{j_{E^c}}\right] }\\ &+\sum_{G\in \left\{E^{c}, \left(E\cup \q\{\mu_1\w\}\right)^c \right\} } \LpOpN{2}{P_{j,k,F} A_{j_G}\left[M_{j_{E\cup \q\{\mu_1\w\}}}-M_{j_E}\right] }. \end{split} \end{equation*} Thus it suffices to show, for every $F,G\subseteq \q\{1,\ldots, \nu\w\}$ and every $E\subseteq \q\{1,\ldots, \nu\w\}\setminus \left\{\mu_1\right\}$, \begin{equation}\label{EqnToShowL2MCancel} \LpOpN{2}{P_{j,k,F} A_{j_G} \left[M_{j_{E\cup \q\{\mu_1\w\}}}-M_{j_E}\right]}\lesssim 2^{-\epsilon \iota_\infty}, \end{equation} \begin{equation}\label{EqnToShowL2ACancel} \LpOpN{2}{P_{j,k,F}\left[A_{j_{E\cup \q\{\mu_1\w\}}} - A_{j_E}\right]}\lesssim 2^{-\epsilon \iota_\infty}, \end{equation} where we have reversed the roles of $E$ and $E^c$ in \eqref{EqnToShowL2ACancel}. We begin with \eqref{EqnToShowL2ACancel}. Write $j_E=\left( j_E^1,\ldots, j_E^\nu\right) \in \left( {\mathbb{N}}\cup \q\{\infty\w\}\right)^{\nu}$.
Note that, \begin{equation*} A_{j_{E\cup \q\{\mu_1\w\}}}-A_{j_E} = A_{j_E^1}^1 A_{j_E^2}^2\cdots A_{j_{E}^{\mu_1-1}}^{\mu_1-1} \left[ A_{j_{\mu_1}}^{\mu_1} - A_{\infty}^{\mu_1}\right] A_{j_E^{\mu_1+1}}^{\mu_1+1}\cdots A_{j_E^\nu}^\nu. \end{equation*} Using the fact that $\LpOpN{2}{A_{j_E^{\mu}}^{\mu}}\lesssim 1$ for every $\mu$, to prove \eqref{EqnToShowL2ACancel} it suffices to show, \begin{equation}\label{EqnToShowL2ACancel2} \LpOpN{2}{P_{j,k,F} A_{j_E^1}^1 \cdots A_{j_{E}^{\mu_1-1}}^{\mu_1-1} \left[ A_{j_{\mu_1}}^{\mu_1} - A_{\infty}^{\mu_1}\right]}\lesssim 2^{-\epsilon \iota_\infty}. \end{equation} To prove \eqref{EqnToShowL2ACancel2} we will apply Theorem \ref{ThmGenL2Thm} with, \begin{equation*} S_1\cdots S_L = P_{j,k,F} A_{j_E^1}^1 \cdots A_{j_{E}^{\mu_1-1}}^{\mu_1-1} =A_{j_{F^c}} M_{j_F} D_k D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} A_{j_E^1}^1 \cdots A_{j_{E}^{\mu_1-1}}^{\mu_1-1}, \end{equation*} \begin{equation*} R_1 = A_{j_{\mu_1}}^{\mu_1},\leftuad R_2 = A_\infty^{\mu_1}. \end{equation*} As before, we take $\left( Z,\tilde{d}\right) = \left( 2^{-\ell}X, \sum d\right)$. The proof that the term $S_1\cdots S_L$ is of the proper form follows just as before. We, therefore, concern ourselves only with showing that $R_1$ and $R_2$ have the proper form. We have, \begin{equation*} R_1 f\left( x\right) = \denum{\psi}{-1}\left( x\right) \int f\left( \widehat{\gamma}_{2^{-j_\mu}t}^\mu\left( x\right)\right) \denum{\psi}{-1}\left( \widehat{\gamma}_{2^{-j_\mu} t}\left(x\right)\right)\sigma\left( t\right) \: dt, \end{equation*} \begin{equation*} R_2 f\left( x\right) = \denum{\psi}{-1}^2\left( x\right)\left[\int \sigma\right] f\left( x\right)=\denum{\psi}{-1}\left( x\right) \int f\left( \widehat{\gamma}_{0}^\mu\left( x\right)\right) \denum{\psi}{-1}\left( \widehat{\gamma}_{0}\left(x\right)\right)\sigma\left( t\right) \: dt. 
\end{equation*} Setting $c_0=\min_{1\leq m\leq q_{\mu_1}} d_m^{\mu_1}$ and $\zeta=2^{-c_0\iota_\infty}$, the proof that $R_1-R_2$ is of the proper form follows just as in the previous case. Theorem \ref{ThmGenL2Thm} applies to show \eqref{EqnToShowL2ACancel2}, thereby establishing \eqref{EqnToShowL2ACancel}. We turn, finally, to showing \eqref{EqnToShowL2MCancel}. We will apply Theorem \ref{ThmGenL2Thm} with \begin{equation*} S_1\cdots S_L = P_{j,k,F} A_{j_G} = A_{j_{F^c}} M_{j_F} D_k D_k^{*} M_{j_F}^{*} A_{j_{F^c}}^{*} A_{j_G}, \end{equation*} \begin{equation*} R_1 = M_{j_{E\cup \q\{\mu_1\w\}}}, \quad R_2=M_{j_E}. \end{equation*} As before, we take $\left( Z,\tilde{d}\right)= \left( 2^{-\ell} X,\sum d\right)$. That $S_1\cdots S_L$ has the proper form to apply Theorem \ref{ThmGenL2Thm} follows just as before. We therefore only concern ourselves with $R_1$ and $R_2$. Consider, \begin{equation*} M_{j_E} f\left( x\right) = \denum{\psi}{0}\left( x\right) \int f\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \denum{\psi}{0}\left( \gamma_{2^{-j_E}t}\left( x\right)\right) \sigma\left( t\right)\: dt, \end{equation*} with a similar formula for $M_{j_{E\cup \q\{\mu_1\w\}}}$. For $t\in {\mathbb{R}}^N$, separate $t$ into two variables: $t=\left( t_1,t_2\right)$. $t_1$ will consist of those coordinates $t_l$ such that $e_l^{\mu_1}\ne 0$, and $t_2$ will denote the rest of the coordinates. Write $t_1=\left( t_{l_1},\ldots, t_{l_{N_1}}\right)$. Let $i\in \left( {\mathbb{N}}\cup \q\{\infty\w\}\right)^{\nu}$ be given by, \begin{equation*} i_\mu = \begin{cases} j_E^{\mu} & \text{if }\mu\ne \mu_1,\\ k_{\mu_1} & \text{if }\mu=\mu_1. \end{cases} \end{equation*} Notice, \begin{equation*} 2^{-j_{E\cup \q\{\mu_1\w\}}} \left( t_1,t_2\right) = 2^{-i}\left( \left( 2^{\left(k_{\mu_1}-j_{\mu_1}\right) e_{l_1}^{\mu_1} }t_{l_1},\cdots, 2^{\left(k_{\mu_1}-j_{\mu_1}\right) e_{l_{N_1}}^{\mu_1} }t_{l_{N_1}} \right), t_2 \right), \end{equation*} \begin{equation*} 2^{-j_{E}} \left( t_1,t_2\right) = 2^{-i}\left( 0, t_2\right).
\end{equation*} Define $c_0=\min\left\{e_{l_1}^{\mu_1},\cdots, e_{l_{N_1}}^{\mu_1}\right\}>0$, and $\zeta= 2^{-c_0\left(j_{\mu_1}-k_{\mu_1}\right)} = 2^{-c_0\iota_\infty}$. Thus, \begin{equation*} 2^{-j_{E\cup \q\{\mu_1\w\}}} \left( t_1,t_2\right) = 2^{-i}\left( \left( 2^{\left(k_{\mu_1}-j_{\mu_1}\right) \left(e_{l_1}^{\mu_1}-c_0\right) }\zeta t_{l_1},\cdots, 2^{\left(k_{\mu_1}-j_{\mu_1}\right) \left(e_{l_{N_1}}^{\mu_1}-c_0\right) }\zeta t_{l_{N_1}} \right), t_2 \right). \end{equation*} Define, \begin{equation*} \widetilde{\gamma}_{\left( t_1,t_2\right),s}\left( x\right) = \gamma\left( 2^{-i}\left( \left( 2^{\left(k_{\mu_1}-j_{\mu_1}\right) \left(e_{l_1}^{\mu_1}-c_0\right) }s t_{l_1},\cdots, 2^{\left(k_{\mu_1}-j_{\mu_1}\right) \left(e_{l_{N_1}}^{\mu_1}-c_0\right) }s t_{l_{N_1}} \right), t_2 \right) , x\right). \end{equation*} We claim that $\left( Z,\tilde{d}\right)$ controls $\widetilde{\gamma}$ at the unit scale. Indeed, we know $\left(Z,\tilde{d}\right)$ controls $\gamma_{2^{-\ell} t}$ at the unit scale,\footnote{As discussed before, this follows directly from our assumptions on $\gamma$.} and since $i\geq \ell$ coordinatewise, we have that $\left( Z,\tilde{d}\right)$ controls $\gamma_{2^{-i}t}$ at the unit scale. Now the result follows easily from the definition of control, since $\left(k_{\mu_1}-j_{\mu_1}\right)\left(e_{l_m}^{\mu_1}-c_0\right)\leq 0$ for every $m$. Note that, \begin{equation*} R_1 f\left( x\right) = \denum{\psi}{0}\left( x\right) \int f\left( \widetilde{\gamma}_{t,\zeta}\left( x\right)\right) \denum{\psi}{0}\left(\widetilde{\gamma}_{t,\zeta}\left( x\right) \right)\sigma\left( t\right) \: dt, \end{equation*} \begin{equation*} R_2 f\left( x\right) = \denum{\psi}{0}\left( x\right) \int f\left( \widetilde{\gamma}_{t,0}\left( x\right)\right) \denum{\psi}{0}\left(\widetilde{\gamma}_{t,0}\left( x\right) \right)\sigma\left( t\right) \: dt. \end{equation*} Thus, $R_1-R_2$ has the proper form for Theorem \ref{ThmGenL2Thm}.
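For concreteness, note that the substitution $s=\zeta$ recovers the original dilations: for each $m$, $2^{\left(k_{\mu_1}-j_{\mu_1}\right)\left(e_{l_m}^{\mu_1}-c_0\right)}\zeta = 2^{\left(k_{\mu_1}-j_{\mu_1}\right)e_{l_m}^{\mu_1}}$, so that $\widetilde{\gamma}_{t,\zeta}=\gamma_{2^{-j_{E\cup \q\{\mu_1\w\}}}t}$, while $\widetilde{\gamma}_{t,0}=\gamma_{2^{-j_E}t}$.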
Theorem \ref{ThmGenL2Thm} applies to show, \begin{equation*} \LpOpN{2}{S_1\cdots S_L\left( R_1-R_2\right)}\lesssim \zeta^{\epsilon}= 2^{-\epsilon' \iota_\infty}, \end{equation*} which establishes \eqref{EqnToShowL2MCancel} and completes the proof of \eqref{EqnToShowBjDk}. We now make comments on the modifications of the above necessary to deal with the other parts of Theorem \ref{ThmL2Thm}. When considering $D_{j_1}T_{j_2}D_{j_3}$ one takes $\ell=j_1\wedge j_2\wedge j_3$ and $\iota_\infty = \max_{1\leq a,b\leq 3} \left|j_a-j_b\right|_\infty\approx \diam{j_1,j_2,j_3}$. Also, we set $\left( Z,\tilde{d}\right) =\left( 2^{-\ell} X,\sum d\right)$. One proceeds in essentially the same manner as for $B_{j_1}D_{j_2}$. The $D_j$ terms behave just as before. When $T_{j_2}$ appears as an $S_p$ for some $p$, it can be treated just as $M_{j_{\q\{1,\ldots, \nu\w\}}}$ was treated above. When $\iota_\infty=j_2^{\mu_1}-j_1^{\mu_1}$ or $\iota_\infty=j_2^{\mu_1}-j_3^{\mu_1}$, for some $\mu_1$, $T_{j_2}$ must also be used as $R_1-R_2$ in the above argument. In that case, one uses that $\int \varsigma_{j_2}\left( t\right)\, d t_{\mu_1}=0$; setting $R_2=0$, one can write $T_{j_2}=R_1=R_1-R_2$ in a form that works just as $M_{j_{\q\{1,\ldots, \nu\w\}}} - M_{j_{\q\{1,\ldots, \nu\w\}\setminus \left\{\mu_1\right\}}}$ did above. See also \cite{StreetMultiParameterSingRadonLt} for details on this. When considering, instead, $D_{j_1}D_{j_2}D_{j_3}^{*}D_{j_4}^{*}$ or $D_{j_1}^{*} D_{j_2}^{*} D_{j_3} D_{j_4}$, one takes $\ell=j_1\wedge j_2\wedge j_3\wedge j_4$ and $\iota_\infty = \max_{1\leq a,b\leq 4} \left| j_a-j_b \right|_\infty \approx \diam{j_1,j_2,j_3,j_4}$. Also, one takes $\left( Z,\tilde{d}\right) =\left( 2^{-\ell}X,\sum d\right)$. With these choices everything proceeds as above with simple modifications. We leave the details to the interested reader.
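To summarize, we have now established the estimates of Theorem \ref{ThmL2Thm} discussed above: there is an $\epsilon_2>0$ such that
\begin{equation*}
\LpOpN{2}{B_{j_1}D_{j_2}}\lesssim 2^{-\epsilon_2\left|j_1-j_2\right|},\quad \LpOpN{2}{D_{j_1}T_{j_2}D_{j_3}}\lesssim 2^{-\epsilon_2\diam{j_1,j_2,j_3}},
\end{equation*}
\begin{equation*}
\LpOpN{2}{D_{j_1}D_{j_2}D_{j_3}^{*}D_{j_4}^{*}},\ \LpOpN{2}{D_{j_1}^{*}D_{j_2}^{*}D_{j_3}D_{j_4}}\lesssim 2^{-\epsilon_2\diam{j_1,j_2,j_3,j_4}};
\end{equation*}
these almost-orthogonality estimates are the input to the square function and maximal arguments of the following sections.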
\section{Square functions and the reproducing formula}\label{SectionSquareFunc} Using the operators $D_j$ defined in Section \ref{SectionAuxOps} we develop, in this section, a Littlewood-Paley square function and a Calder\'on-type ``reproducing formula'' which will be essential to our proof of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm}. Recall, \begin{equation*} \sum_{j\in {\mathbb{N}}^{\nu}} D_j = \denum{\psi}{-3}^{2\nu}. \end{equation*} For notational convenience, we define $D_j=0$ for $j\in {\mathbb{Z}}^{\nu}\setminus {\mathbb{N}}^{\nu}$. For $M\in {\mathbb{N}}$, define, \begin{equation*} U_M = \sum_{\substack{j\in {\mathbb{N}}^{\nu},\ l\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M}} D_j D_{j+l}, \end{equation*} \begin{equation*} R_M = \sum_{\substack{j\in {\mathbb{N}}^{\nu},\ l\in {\mathbb{Z}}^{\nu}\\ \left|l\right|> M}} D_j D_{j+l}; \end{equation*} so that $U_M+R_M=\denum{\psi}{-3}^{4\nu}$. The main results of this section are: \begin{thm}[A Calder\'on-type ``reproducing formula'']\label{ThmReproduce} Fix $p_0$, $1<p_0<\infty$. There exist $M=M\left( p_0\right)$ and a bounded map $V_M:L^{p_0}\rightarrow L^{p_0}$ such that, \begin{equation*} \denum{\psi}{-2} U_M V_M = \denum{\psi}{-2} = V_M U_M \denum{\psi}{-2}. \end{equation*} \end{thm} \begin{rmk} Strictly speaking, Theorem \ref{ThmReproduce} does not give a reproducing formula (one would need $V_M=I$ for it to be a reproducing formula). However, we will use it in the same way that one often uses the Calder\'on reproducing formula, which is why we have labeled it as such. \end{rmk} \begin{thm}[The Littlewood-Paley square function]\label{ThmSquare} For every $p$, $1<p<\infty$, \begin{equation}\label{EqnfLessSquare} \LpN{p}{\denum{\psi}{-2} f} \lesssim \LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu}\left|D_j\denum{\psi}{-2} f\right|^2\right)^{\frac{1}{2}}}, \end{equation} \begin{equation}\label{EqnSquareLessf} \LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu}\left|D_j f\right|^2\right)^{\frac{1}{2}}}\lesssim \LpN{p}{f}.
\end{equation} \end{thm} The rest of this section is devoted to the proofs of Theorems \ref{ThmReproduce} and \ref{ThmSquare}. We begin with the proof of \eqref{EqnSquareLessf}: \begin{lemma}\label{LemmaSquareLessf} For every $p$, $1<p<\infty$, we have: \begin{equation}\label{EqnLemmaSquareLessf} \LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu}\left|D_j f\right|^2\right)^{\frac{1}{2}}}\leq C_p\LpN{p}{f}. \end{equation} The same result holds with $D_j$ replaced with $D_j^{*}$. \end{lemma} \begin{proof} As is well known, to prove \eqref{EqnLemmaSquareLessf}, it suffices to prove that for every choice of $\nu$ sequences $\left\{\epsilon_{j_1}^1\right\}_{j_1\in {\mathbb{N}}},\ldots, \left\{\epsilon_{j_\nu}^\nu\right\}_{j_\nu\in {\mathbb{N}}}$, taking values $\pm 1$, the operator \begin{equation}\label{EqnToShowIIDDbound} \sum_{j_1,\ldots, j_\nu\in {\mathbb{N}}} \epsilon_{j_1}^1\epsilon_{j_2}^2\cdots \epsilon_{j_\nu}^\nu D_{\left(j_1,\ldots, j_\nu\right)} \end{equation} is bounded on $L^p$, with bound independent of the choice of the sequences. To see why \eqref{EqnToShowIIDDbound} is enough, see, for example, page 267 of \cite{SteinHarmonicAnalysis} and Chapter 4, Section 5 of \cite{SteinSingularIntegrals}. We have, \begin{equation*} \sum_{j_1,\ldots, j_\nu\in {\mathbb{N}}} \epsilon_{j_1}^1\epsilon_{j_2}^2\cdots \epsilon_{j_\nu}^\nu D_{\left(j_1,\ldots, j_\nu\right)} = \left(\sum_{j_1\in {\mathbb{N}}} \epsilon_{j_1}^1 D_{j_1}^1\right)\cdots \left(\sum_{j_\nu\in {\mathbb{N}}} \epsilon_{j_\nu}^{\nu} D_{j_\nu}^{\nu}\right). \end{equation*} For each $\mu$, \begin{equation*} \sum_{j_\mu} \epsilon_{j_\mu}^{\mu} D_{j_\mu}^{\mu} \end{equation*} is of the form covered by Theorem \ref{ThmSingleParamSingInt} and hence bounded on $L^p$ ($1<p<\infty$). It is easy to see that Theorem \ref{ThmSingleParamSingInt} holds uniformly in the choice of the sequence $\left\{\epsilon_{j_{\mu}}^{\mu}\right\}$ taking values $\pm 1$. The result now follows.
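For completeness, we recall the shape of this standard reduction. With $\epsilon_j := \epsilon_{j_1}^1\cdots \epsilon_{j_\nu}^\nu$ for independent random choices of signs, Khintchine's inequality (iterated in the $\nu$ parameters) gives, pointwise in $x$,
\begin{equation*}
\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j f\left( x\right)\right|^2\right)^{\frac{p}{2}} \approx {\mathbb{E}}\left| \sum_{j\in {\mathbb{N}}^{\nu}} \epsilon_j D_j f\left( x\right)\right|^{p},
\end{equation*}
where ${\mathbb{E}}$ denotes the average over the choices of signs; integrating in $x$ and applying the uniform $L^p$ bound for the operators \eqref{EqnToShowIIDDbound} yields \eqref{EqnLemmaSquareLessf}.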
The same proof works with $D_j$ replaced with $D_j^{*}$. \end{proof} To prove Theorem \ref{ThmReproduce}, we first state a preliminary lemma. \begin{lemma}\label{LemmaRMLimit} For $p_0$ fixed, $1<p_0<\infty$, \begin{equation*} \lim_{M\rightarrow \infty} \LpOpN{p_0}{R_M} =0. \end{equation*} \end{lemma} \begin{proof}[Proof of Theorem \ref{ThmReproduce} given Lemma \ref{LemmaRMLimit}] Note that, \begin{equation*} U_M = \denum{\psi}{-3}^{4\nu} - R_M. \end{equation*} Take $\psi\in C_0^\infty$ with $\denum{\psi}{-2}\prec \psi\prec \denum{\psi}{-3}$. Take $M=M\left( p_0\right)$ so large that $\LpOpN{p_0}{R_M \psi}<1$. Define, \begin{equation*} V_M = \sum_{m=0}^\infty \psi\left(R_M \psi\right)^m, \end{equation*} with convergence in the uniform operator topology $L^{p_0}\rightarrow L^{p_0}$. It is direct to verify that $V_M$ satisfies the conclusions of Theorem \ref{ThmReproduce}: since $\denum{\psi}{-3}^{4\nu}\psi=\psi$ and $\denum{\psi}{-2}\psi=\denum{\psi}{-2}$, the Neumann series telescopes against $U_M=\denum{\psi}{-3}^{4\nu}-R_M$ to give $\denum{\psi}{-2} U_M V_M = \denum{\psi}{-2}$, and $V_M U_M \denum{\psi}{-2}=\denum{\psi}{-2}$ follows similarly, using $R_M\denum{\psi}{-2} = R_M \psi \denum{\psi}{-2}$. \end{proof} Lemma \ref{LemmaRMLimit} follows by interpolating the next two lemmas: \begin{lemma}\label{LemmaBadLpRM} For $1<p<\infty$, $M\geq 1$, \begin{equation*} \LpOpN{p}{R_M}\leq C_p M^{\nu}. \end{equation*} \end{lemma} \begin{lemma}\label{LemmaGoodL2RM} We have, \begin{equation*} \LpOpN{2}{R_M}\lesssim 2^{-\epsilon M}, \end{equation*} for some $\epsilon>0$. \end{lemma} \begin{proof}[Proof of Lemma \ref{LemmaBadLpRM}] Since $R_M = \denum{\psi}{-3}^{4\nu}-U_M$, it suffices to prove the result with $U_M$ in place of $R_M$. Let $q$ be dual to $p$. Fix $f\in L^p$ and $g\in L^q$ with $\LpN{q}{g}=1$.
Consider, \begin{equation*} \begin{split} \left|\ip{g}{U_M f}\right| &\leq \sum_{\left|l\right|\leq M} \left|\ip{g}{\sum_{j\in {\mathbb{N}}^\nu} D_j D_{j+l} f}\right|\\ &= \sum_{\left|l\right|\leq M} \left| \sum_{j\in {\mathbb{N}}^\nu} \ip{D_j^{*} g}{ D_{j+l} f}\right|\\ &\leq \sum_{\left|l\right|\leq M} \int \left(\sum_{j\in {\mathbb{N}}^\nu} \left|D_j^{*} g\right|^2\right)^{1/2} \left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_{j+l} f\right|^2\right)^{1/2}\\ &\lesssim M^\nu \LpN{q}{\left(\sum_{j\in{\mathbb{N}}^\nu} \left|D_j^{*}g\right|^2\right)^{1/2} } \LpN{p}{\left(\sum_{j\in {\mathbb{N}}^\nu} \left|D_j f\right|^2\right)^{1/2}}\\ &\lesssim M^{\nu} \LpN{q}{g} \LpN{p}{f}\\ &= M^{\nu} \LpN{p}{f}; \end{split} \end{equation*} where, in the second to last line, we have applied Lemma \ref{LemmaSquareLessf} twice. Taking the supremum over all $g$ with $\LpN{q}{g}=1$ yields the result. \end{proof} \begin{proof}[Proof of Lemma \ref{LemmaGoodL2RM}] We wish to apply the Cotlar-Stein lemma to, \begin{equation*} \sum_{\substack{j\in {\mathbb{N}}^\nu\\ \left|l\right|>M}} D_j D_{j+l}. \end{equation*} Applying Theorem \ref{ThmL2Thm}, we have, \begin{equation*} \LpOpN{2}{D_{j_1} D_{j_1+l_1} D_{j_2}^{*} D_{j_2+l_2}^{*}}, \LpOpN{2}{D_{j_1}^{*} D_{j_1+l_1}^{*} D_{j_2} D_{j_2+l_2}}\lesssim 2^{-\epsilon_2 \diam{j_1,j_1+l_1, j_2,j_2+l_2}}. \end{equation*} The Cotlar-Stein lemma then gives, \begin{equation*} \LpOpN{2}{R_M}\lesssim \sup_{\substack{j_1\in {\mathbb{N}}^{\nu} \\ \left|l_1\right|>M}} \sum_{\substack{ j_2\in {\mathbb{N}}^{\nu} \\ \left|l_2\right|>M}} 2^{-\epsilon_2\diam{j_1,j_1+l_1, j_2, j_2+l_2}/2} \lesssim 2^{-\epsilon M}; \end{equation*} completing the proof. \end{proof} We have now completed the proof of Theorem \ref{ThmReproduce}. We end this section with the completion of the proof of Theorem \ref{ThmSquare}, by proving \eqref{EqnfLessSquare}. \begin{proof}[Proof of \eqref{EqnfLessSquare}] Fix $p_0$, $1<p_0<\infty$, and let $M=M\left( p_0\right)$ be as in Theorem \ref{ThmReproduce}.
Let $q_0$ be dual to $p_0$, so that $V_M^{*}:L^{q_0}\rightarrow L^{q_0}$ is bounded. Let $g\in L^{q_0}$ be such that $\LpN{q_0}{g}=1$. We have for $f\in L^{p_0}$, \begin{equation*} \begin{split} \left|\ip{g}{\denum{\psi}{-2} f}\right| &= \left|\ip{V_M^{*} g}{U_M \denum{\psi}{-2} f}\right|\\ &\leq \sum_{\left|l\right|\leq M} \left| \sum_{j\in {\mathbb{N}}^\nu} \ip{D_j^{*} V_M^{*} g}{ D_{j+l} \denum{\psi}{-2} f}\right|\\ &\leq \sum_{\left|l\right|\leq M} \LpN{q_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j^{*} V_M^{*} g\right|^2\right)^{1/2} } \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|D_{j+l} \denum{\psi}{-2} f\right|^2 \right)^{1/2} }\\ &\lesssim M^\nu \LpN{q_0}{V_M^{*} g} \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j \denum{\psi}{-2} f\right|^2 \right)^{1/2} }\\ &\lesssim \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j \denum{\psi}{-2} f\right|^2 \right)^{1/2} }; \end{split} \end{equation*} where in the second to last line we applied Lemma \ref{LemmaSquareLessf}, and in the last line we used that $M$ is fixed (since $p_0$ is), $V_M^{*}$ is bounded on $L^{q_0}$, and $\LpN{q_0}{g}=1$. Taking the supremum over all $g$ with $\LpN{q_0}{g}=1$ yields the result. \end{proof} \section{The maximal result (Theorem \ref{ThmMainMaxThm})} In this section, we prove Theorem \ref{ThmMainMaxThm}. The proof proceeds by a bootstrapping argument. In fact, there are at least two well-known bootstrapping arguments that can be used to prove results like Theorem \ref{ThmMainMaxThm}. One can be found in \cite{NagelSteinWaingerDifferentiationInLacunaryDirections}, another in Section 4 of \cite{GreenleafSeegerWaingerOnXrayTransformsForRigidLineComplexes}. Either of these arguments will suffice for our purposes. We proceed using the methods of \cite{NagelSteinWaingerDifferentiationInLacunaryDirections}. Let us review a few of the reductions covered in Section \ref{SectionAuxOps}.
First, for $E\subseteq \q\{1,\ldots, \nu\w\}$, define, \begin{equation*} \mathcal{M}_E f\left(x\right) = \sup_{j\in {\mathbb{N}}^\nu} M_{j_E} \left|f\right| \left( x\right). \end{equation*} Then, to prove Theorem \ref{ThmMainMaxThm} it suffices to prove that $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ is bounded on $L^p$ ($1<p\leq \infty$). We proceed by induction on $\nu$. As discussed in Section \ref{SectionAuxOps}, the base case ($\nu=0$) is trivial, and we may assume by our inductive hypothesis that $\mathcal{M}_E$ is bounded on $L^p$ for $E\subsetneq \q\{1,\ldots, \nu\w\}$. Note that $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ is clearly bounded on $L^\infty$, and so our goal is to show that it is bounded on $L^p$, $1<p\leq 2$. The following lemma was proved in Section \ref{SectionAuxOps} (under our inductive hypothesis): \begin{lemma}\label{LemmaEquivBandsM} For each $p$, $1<p<\infty$, \begin{equation*} \LpN{p}{\mathcal{M}_{\q\{1,\ldots, \nu\w\}} f}\lesssim \LpN{p}{f}, \quad \forall f\in L^p, \end{equation*} if and only if \begin{equation*} \LpN{p}{\sup_{j\in {\mathbb{N}}^{\nu}}\left|B_j f\right|}\lesssim\LpN{p}{f}, \quad \forall f\in L^p. \end{equation*} \end{lemma} \begin{rmk} Actually, only the ``if'' direction of Lemma \ref{LemmaEquivBandsM} was shown in Section \ref{SectionAuxOps}. The ``only if'' direction is immediate, and is also not used in what follows. We therefore leave it to the interested reader. \end{rmk} In what follows, $D_j$ for $j\in {\mathbb{Z}}^{\nu}\setminus {\mathbb{N}}^\nu$ is defined to be $0$. For $k\in {\mathbb{Z}}^{\nu}$ define a new operator acting on sequences of measurable functions $\left\{f_j\left(x\right)\right\}_{j\in {\mathbb{N}}^\nu}$ by, \begin{equation*} \mathcal{B}_k \left\{f_j\right\}_{j\in {\mathbb{N}}^{\nu}} = \left\{B_j D_{j+k} f_j\right\}_{j\in {\mathbb{N}}^{\nu}}. \end{equation*} \begin{prop}\label{PropSTSsBk} Fix $p_0$, $1<p_0<\infty$.
If there is $\epsilon>0$ such that \begin{equation*} \LplqOpN{p_0}{2}{\mathcal{B}_k}\lesssim 2^{-\epsilon\left|k\right|}, \end{equation*} then $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ is bounded on $L^{p_0}$. \end{prop} \begin{proof} In light of Lemma \ref{LemmaEquivBandsM} and because \begin{equation*} \LpN{p_0}{\sup_{j\in {\mathbb{N}}^{\nu}} \left|B_j f\right| } \leq \LpN{p_0}{ \left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|B_j f\right|^2\right)^{1/2}}, \end{equation*} it suffices to show \begin{equation*} \LpN{p_0}{ \left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|B_j f\right|^2\right)^{1/2}}\lesssim \LpN{p_0}{f}. \end{equation*} Fix $M=M\left( p_0\right)$ as in Theorem \ref{ThmReproduce}. Note, $B_j= B_j \denum{\psi}{-2} = B_j \denum{\psi}{-2} U_M V_M= B_j U_M V_M$. Let $g=V_M f$. Note that $\LpN{p_0}{g}\lesssim \LpN{p_0}{f}$. Thus, it suffices to show, \begin{equation*} \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|B_j U_M g\right|^2\right)^{1/2} }\lesssim \LpN{p_0}{g}. \end{equation*} Consider, using the triangle inequality, \begin{equation*} \begin{split} \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|B_j U_M g\right|^2\right)^{1/2}} & = \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|\sum_{\substack{j_0\in {\mathbb{N}}^{\nu} \\ \left|l\right|\leq M } } B_j D_{j_0} D_{j_0+l} g\right|^2\right)^{1/2} }\\ & = \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|\sum_{\substack{k\in {\mathbb{Z}}^{\nu} \\ \left|l\right|\leq M } } B_j D_{j+k} D_{j+k+l} g\right|^2\right)^{1/2} }\\ & \leq \sum_{\substack{k\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M } }\LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left| B_j D_{j+k} D_{j+k+l} g\right|^2\right)^{1/2} }\\ & = \sum_{\substack{k\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M } } \LplqN{p_0}{2}{\mathcal{B}_k \left\{D_{j+k+l} g\right\}_{j\in {\mathbb{N}}^{\nu}} }\\ & \lesssim \sum_{\substack{k\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M } } 2^{-\epsilon \left|k\right|} \LplqN{p_0}{2}{\left\{D_j g\right\}_{j\in 
{\mathbb{N}}^{\nu} }}\\ &\lesssim \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu} } \left|D_j g\right|^2\right)^{1/2} }\\ &\lesssim \LpN{p_0}{g}, \end{split} \end{equation*} where, in the last line, we have applied Theorem \ref{ThmSquare}. This completes the proof of the proposition. \end{proof} Define, \begin{equation*} \mathcal{P}=\left\{p\in \left(1,2\right]: \exists \epsilon>0, \LplqOpN{p}{2}{\mathcal{B}_k}\lesssim 2^{-\epsilon \left|k\right|}\right\}. \end{equation*} In light of Proposition \ref{PropSTSsBk}, the $L^p$ boundedness of $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ will follow directly from the following proposition. \begin{prop}\label{PropsPEqual} $\mathcal{P}=\left(1,2\right]$. \end{prop} Proposition \ref{PropsPEqual}, in turn, follows directly from the next lemma. \begin{lemma}\label{LemmasPProp} \begin{itemize} \item $2\in \mathcal{P}$, \item If $q\in \mathcal{P}$, then $\left(\frac{2q}{q+1}, 2\right]\subseteq \mathcal{P}$. \end{itemize} \end{lemma} It is easy to see that any subset of $\left( 1,2\right]$ satisfying the conclusions of Lemma \ref{LemmasPProp} must equal $\left(1,2\right]$: iterating the second conclusion starting from $q=2$ yields $\left( a_n,2\right]\subseteq \mathcal{P}$, where $a_0=2$ and $a_{n+1}=\frac{2a_n}{a_n+1}$, and since $\frac{1}{a_{n+1}}-1=\frac{1}{2}\left( \frac{1}{a_n}-1\right)$, we have $a_n\searrow 1$. The rest of this section is devoted to the proof of Lemma \ref{LemmasPProp}, which then completes the proof of Theorem \ref{ThmMainMaxThm}. \begin{proof}[Proof of Lemma \ref{LemmasPProp}] That $2\in \mathcal{P}$ follows directly from Theorem \ref{ThmL2Thm}. In fact, if $\epsilon_2>0$ is as in Theorem \ref{ThmL2Thm}, we have, \begin{equation}\label{EqnsBkL2Ineq} \LplqOpN{2}{2}{\mathcal{B}_k} \lesssim 2^{-\epsilon_2\left|k\right|}; \end{equation} merely by interchanging the norms. Moreover, using that, \begin{equation*} \LpOpN{1}{B_j D_{j+k}}\lesssim 1, \end{equation*} we have, \begin{equation}\label{EqnsBkL1Ineq} \LplqOpN{1}{1}{\mathcal{B}_k} \lesssim 1; \end{equation} also by interchanging the norms.
Interpolating \eqref{EqnsBkL2Ineq} and \eqref{EqnsBkL1Ineq} shows for $1<p\leq 2$, \begin{equation}\label{EqnsBkLpIneq} \LplqOpN{p}{p}{\mathcal{B}_k} \lesssim 2^{-\epsilon_p \left|k\right|}, \end{equation} where $\epsilon_p=\left( 2-\frac{2}{p}\right)\epsilon_2>0$. Now suppose $q\in \mathcal{P}$. By Proposition \ref{PropSTSsBk} $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ is bounded on $L^q$. We claim, \begin{equation}\label{EqnlinfsB} \LplqOpN{q}{\infty}{\mathcal{B}_k} \lesssim 1. \end{equation} Before we verify \eqref{EqnlinfsB}, recall the maximal functions $\mathcal{M}^{\mu}$ defined in Section \ref{SectionAuxOps}. $\mathcal{M}^{\mu}$ is bounded on $L^p$ ($1<p\leq \infty$), and by \eqref{EqnSupAjBound} and the definition of $A_{j_E}$, we have \begin{equation*} \left|A_{j_E} f\left(x\right)\right| \lesssim \left[\prod_{\mu\in E} \mathcal{M}^\mu \right] f\left( x\right), \end{equation*} where the product is taken in order of increasing $\mu$. A similar result holds for $D_j$ with $E$ replaced by $\q\{1,\ldots, \nu\w\}$. We now turn to verifying \eqref{EqnlinfsB}. \begin{equation*} \begin{split} \LpN{q}{\sup_{j\in {\mathbb{N}}^{\nu}} \left|B_j D_{j+k} f_j\right|} &\leq \sum_{E\subseteq \q\{1,\ldots, \nu\w\}} \LpN{q}{\sup_{j\in {\mathbb{N}}^{\nu}} \left|A_{j_{E^c}} M_{j_E} D_{j+k}f_j\right|}\\ &\lesssim \sum_{E\subseteq \q\{1,\ldots, \nu\w\}} \LpN{q}{\left[\prod_{\mu\in E^c} \mathcal{M}^{\mu}\right]\mathcal{M}_E \left[\prod_{\mu=1}^\nu \mathcal{M}^{\mu}\right] \sup_{j\in {\mathbb{N}}^{\nu}} \left|f_j\right| }\\ &\lesssim \LpN{q}{\sup_{j\in {\mathbb{N}}^{\nu}} \left|f_j\right|}. \end{split} \end{equation*} In the last line, we have used our inductive hypothesis when $E\ne \q\{1,\ldots, \nu\w\}$ and we used that $\mathcal{M}_{\q\{1,\ldots, \nu\w\}}$ is bounded on $L^q$ when $E=\q\{1,\ldots, \nu\w\}$. This completes the verification of \eqref{EqnlinfsB}. 
Interpolating \eqref{EqnlinfsB} with \eqref{EqnsBkLpIneq} as $p\rightarrow 1$ proves that $\left(\frac{2q}{q+1},2\right]\subseteq \mathcal{P}$. Here, we have implicitly used the fact that (by interpolation) if $r\in \mathcal{P}$, then $\left[r,2\right]\subseteq \mathcal{P}$ (since $2\in \mathcal{P}$). \end{proof} \section{Proof of Theorem \ref{ThmMainThmSecondPass}} This section is devoted to proving Theorem \ref{ThmMainThmSecondPass}: $T:L^p\rightarrow L^p$, $1<p<\infty$. Since the class of operators covered in Theorem \ref{ThmMainThmSecondPass} is closed under adjoints (see Section 12.3 of \cite{StreetMultiParameterSingRadonLt}), it suffices to prove the result for $1<p\leq 2$. We decompose $T=\sum_{j\in {\mathbb{N}}^{\nu}} T_j$ as in Section \ref{SectionAuxOps}. In what follows, $T_j$ and $D_j$ for $j\in {\mathbb{Z}}^\nu\setminus {\mathbb{N}}^\nu$ are defined to be $0$. For $k_1,k_2\in {\mathbb{Z}}^\nu$, define a new operator, acting on sequences of measurable functions $\left\{f_j\left( x\right)\right\}_{j\in {\mathbb{N}}^{\nu}}$ by \begin{equation*} \mathcal{T}_{k_1,k_2} \left\{f_j\right\}_{j\in {\mathbb{N}}^\nu} = \left\{D_j T_{j+k_1} D_{j+k_2} f_j\right\}_{j\in {\mathbb{N}}^\nu}. \end{equation*} Theorem \ref{ThmMainThmSecondPass} follows immediately from a combination of the following two propositions. \begin{prop}\label{PropIfsTBound} Fix $p_0$, $1<p_0<\infty$. If there exists $\epsilon>0$ such that \begin{equation*} \LplqOpN{p_0}{2}{\mathcal{T}_{k_1,k_2}}\lesssim 2^{-\epsilon\left(\left|k_1\right|+\left|k_2\right|\right)}, \end{equation*} then $T$ is bounded on $L^{p_0}$. \end{prop} \begin{prop}\label{PropsTBound} For each $p$, $1<p\leq 2$, there is an $\epsilon=\epsilon\left( p\right)>0$ such that, \begin{equation*} \LplqOpN{p}{2}{\mathcal{T}_{k_1,k_2}} \lesssim 2^{-\epsilon\left( \left|k_1\right|+\left|k_2\right| \right) }. \end{equation*} \end{prop} \begin{proof}[Proof of Proposition \ref{PropIfsTBound}] Fix $p_0$, $1<p_0<\infty$. 
Take $M=M\left( p_0\right)$ as in Theorem \ref{ThmReproduce}. We have, $T=T\denum{\psi}{-2} = T \denum{\psi}{-2}U_M V_M= T U_M V_M$. Since $V_M$ is bounded on $L^{p_0}$, it suffices to show $TU_M$ is bounded on $L^{p_0}$. Consider, using Theorem \ref{ThmSquare}, \begin{equation*} \begin{split} \LpN{p_0}{TU_M f} & =\LpN{p_0}{\denum{\psi}{-2}TU_M f}\\ &\approx \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j \denum{\psi}{-2} T U_M f\right|^2 \right)^{1/2}}\\ &= \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j T U_M f\right|^2 \right)^{1/2}}. \end{split} \end{equation*} Thus, to complete the proof, it suffices to show, \begin{equation*} \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j T U_M f\right|^2 \right)^{1/2}}\lesssim \LpN{p_0}{f}. \end{equation*} We have, by the triangle inequality, \begin{equation*} \begin{split} \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j T U_M f\right|^2 \right)^{1/2}} &= \LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|\sum_{\substack{j_1,j_2\in {\mathbb{N}}^{\nu}\\ \left|l\right|\leq M}} D_j T_{j_1} D_{j_2} D_{j_2+l} f\right|^2 \right)^{1/2} }\\ &=\LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left|\sum_{\substack{k_1,k_2\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M}} D_j T_{j+k_1} D_{j+k_2} D_{j+k_2+l} f\right|^2 \right)^{1/2} }\\ &\leq \sum_{\substack{k_1,k_2\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M}}\LpN{p_0}{\left( \sum_{j\in {\mathbb{N}}^{\nu}} \left| D_j T_{j+k_1} D_{j+k_2} D_{j+k_2+l} f\right|^2 \right)^{1/2} }\\ &= \sum_{\substack{k_1,k_2\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M}} \LplqN{p_0}{2}{\mathcal{T}_{k_1,k_2} \left\{D_{j+k_2+l}f\right\}_{j\in {\mathbb{N}}^{\nu}} }\\ &\lesssim \sum_{\substack{k_1,k_2\in {\mathbb{Z}}^{\nu}\\ \left|l\right|\leq M}} 2^{-\epsilon\left( \left|k_1\right|+\left|k_2\right|\right)} \LplqN{p_0}{2}{\left\{D_j f\right\}_{j\in {\mathbb{N}}^{\nu}}}\\ &\lesssim \LpN{p_0}{\left(\sum_{j\in {\mathbb{N}}^{\nu}} \left|D_j f\right|^2 \right)^{1/2} }\\ &\lesssim 
\LpN{p_0}{f}, \end{split} \end{equation*} where, in the last line, we have applied Theorem \ref{ThmSquare}. This completes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{PropsTBound}] We first prove the result for $p=2$. In this case, the result follows immediately from Theorem \ref{ThmL2Thm}. Indeed, if $\epsilon_2>0$ is as in Theorem \ref{ThmL2Thm}, we have, \begin{equation}\label{EqnsTL2} \LplqOpN{2}{2}{\mathcal{T}_{k_1,k_2}}\lesssim 2^{-\frac{\epsilon_2}{2}\left( \left|k_1\right|+\left|k_2\right| \right)}; \end{equation} merely by interchanging the norms. In addition, since, \begin{equation*} \LpOpN{1}{D_jT_{j+k_1}D_{j+k_2}}\lesssim 1, \end{equation*} we have, \begin{equation}\label{EqnsTL1} \LplqOpN{1}{1}{\mathcal{T}_{k_1,k_2}}\lesssim 1; \end{equation} also by interchanging the norms. Interpolating \eqref{EqnsTL2} and \eqref{EqnsTL1} shows that for every $p$, $1<p\leq 2$, \begin{equation}\label{EqnsTLp} \LplqOpN{p}{p}{\mathcal{T}_{k_1,k_2}}\lesssim 2^{-\epsilon_p\left(\left|k_1\right|+\left|k_2\right| \right)}, \end{equation} where $\epsilon_p=\left(1-\frac{1}{p} \right)\epsilon_2>0$. We claim, for every $p$, $1<p<\infty$, \begin{equation}\label{EqnsTlinf} \LplqOpN{p}{\infty}{\mathcal{T}_{k_1,k_2}}\lesssim 1. \end{equation} We use the maximal operators $\mathcal{M}^\mu$ defined in Section \ref{SectionAuxOps} along with the maximal operator $\mathcal{M}$ from Theorem \ref{ThmMainMaxThm}. We have, \begin{equation*} \begin{split} \LpN{p}{\sup_{j} \left|\mathcal{T}_{k_1,k_2} \left\{f_j\right\}_{j\in {\mathbb{N}}^{\nu}}\right|} &= \LpN{p}{\sup_j \left| D_j T_{j+k_1} D_{j+k_2} f_j \right| }\\ &\lesssim \LpN{p}{\left[\prod_{\mu=1}^\nu \mathcal{M}^\mu \right]\mathcal{M} \left[\prod_{\mu=1}^\nu \mathcal{M}^\mu\right]\sup_j \left|f_j\right|}\\ &\lesssim \LpN{p}{\sup_j \left|f_j\right|}. \end{split} \end{equation*} In the last line, we used the $L^p$ boundedness of the various maximal functions. Estimate \eqref{EqnsTlinf} follows. 
Interpolating \eqref{EqnsTLp} and \eqref{EqnsTlinf} yields the result. \end{proof} \section{More general kernels}\label{SectionMoreKernels} In the previous sections, we exhibited the proofs of Theorems \ref{ThmMainThmSecondPass} and \ref{ThmMainMaxThm} in the special case $\mu_0=\nu$. That is, in the case when $\mathcal{A}_{\mu_0}=\left[0,1\right]^\nu$. In this section, we describe the modifications necessary to prove the result for general $\mu_0$. At the end of the section we make some remarks about even more general sets $\mathcal{A}\subseteq \left[0,1\right]^\nu$ for which our methods apply. Fix $\mu_0$, $1\leq \mu_0\leq \nu$. Recall, \begin{equation*} \mathcal{A}_{\mu_0} = \left\{\delta=\left( \delta_1,\ldots, \delta_\nu\right) \in \left[0,1\right]^\nu: \delta_{\mu_0}\leq \delta_{\mu_0+1}\leq \cdots \leq \delta_{\nu}\right\}. \end{equation*} Our decomposition of $T$ now takes the form, \begin{equation*} T= \sum_{j\in -\log_2 \sA} T_j, \end{equation*} where \begin{equation*} -\log_2 \sA = \left\{j\in {\mathbb{N}}^{\nu} : 2^{-j}\in \mathcal{A}_{\mu_0}\right\}=\left\{j\in {\mathbb{N}}^{\nu}: j_{\mu_0}\geq j_{\mu_0+1}\geq \cdots \geq j_{\nu}\right\}; \end{equation*} see Section \ref{SectionKernels} for more details. The proof in this case remains almost exactly the same, provided we make a few different choices when defining the auxiliary operators in Section \ref{SectionAuxOps}. In the case when $\mu_0=\nu$, we defined the vector fields with single parameter formal degrees $\left( X^\mu, d^{\mu}\right)$ so that, \begin{equation}\label{EqnChooseXmu} \left( \delta_\mu X^{\mu}, d^{\mu}\right) = \left( \widehat{\delta} X, \sum d\right), \end{equation} where $\widehat{\delta}\in \left[0,1\right]^\nu$ was $\delta_\mu$ in the $\mu$ coordinate and $0$ in every other coordinate. In the case when $\mu_0<\nu$, this choice no longer works. 
Instead, we choose $\left( X^\mu, d^\mu\right)$ so that \eqref{EqnChooseXmu} holds where $\widehat{\delta}$ is defined to be $\delta_\mu$ in the $\mu'$ coordinate, for every $\mu'\geq \mu$, and defined to be $0$ in the rest of the coordinates. The operators $A_j^{\mu}$ and $D_j^{\mu}$ are defined with this choice of $\left( X^\mu, d^\mu\right)$. For $E\subseteq \q\{1,\ldots, \nu\w\}$, $j_E\in \left({\mathbb{N}}\cup\left\{\infty\right\}\right)^{\nu}$ must be defined differently, so that $2^{-j_E}\in \mathcal{A}$. For $1\leq \mu\leq \mu_0$, we define, \begin{equation*} j_E^{\mu} = \begin{cases} j_{\mu} & \text{if }\mu\in E,\\ \infty & \text{otherwise.} \end{cases} \end{equation*} For $\mu>\mu_0$, we recursively define, \begin{equation*} j_E^{\mu} =\begin{cases} j_\mu & \text{if }\mu\in E,\\ \min \left\{\infty, j_E^{\mu-1}\right\} & \text{otherwise.} \end{cases} \end{equation*} One defines $A_{j_{E}}$, $M_{j_E}$ in the same manner as before, but with this choice of $j_E$. Now the proof goes through just as before. Whenever one uses $B_j$ or $T_j$ one must restrict attention to $j\in -\log_2 \sA$. However, when one considers $D_j$, one allows $j$ to range over $j\in {\mathbb{N}}^{\nu}$. In fact, the above methods work for more general $\mathcal{A}\subseteq \left[0,1\right]^\nu$. For instance, just by changing $e$ and $a$, studying the operators associated to \begin{equation*} \mathcal{A}=\left\{\delta=\left( \delta_1,\ldots, \delta_\nu\right)\in \left[0,1\right]^\nu : \delta_{\mu_0}^{b_{\mu_0}}\leq C_{\mu_0+1} \delta_{\mu_0+1}^{b_{\mu_0+1}}\leq \cdots\leq C_{\nu} \delta_\nu^{b_\nu} \right\}, \end{equation*} where $b_\mu$ and $C_\mu$ are positive numbers, is equivalent to studying the operators associated to $\mathcal{A}_{\mu_0}$. See \cite{StreetMultiParameterSingRadonLt} for the definition of the class of kernels $\mathcal{K}$ for more general sets $\mathcal{A}$. There were two main reasons that our methods applied to $\mathcal{A}_{\mu_0}$. 
\begin{enumerate} \item There was a natural choice of $\left( X^\mu, d^\mu\right)$ for each $\mu$. \item Given $\delta_E$ as above (for $\delta\in \mathcal{A}$), there was a natural (``minimal'') choice $\widehat{\delta}\in \mathcal{A}$ with $\widehat{\delta}_{\mu}=\delta_\mu$, $\forall\mu\in E$. \end{enumerate} There are, of course, many subsets $\mathcal{A}\subseteq \left[0,1\right]^\nu$ where no such natural choices can be made. There are more examples (which do satisfy the above in an appropriate way) which can be covered by our methods. However, we know of no simple general condition unifying these examples. Moreover, for all the applications we have in mind, $\mathcal{A}_{\mu_0}$ will suffice. We, therefore, say no more on this issue, here. \section{Singular integrals not of Radon transform type}\label{SectionSingularIntegrals} The main point of Section \ref{SectionCZOps} was that a single-parameter special case of the operators studied in this paper fell under the general Calder\'on-Zygmund singular integral framework. The point of this section is to make a few remarks about the multi-parameter special case of our main theorems which is analogous to this single-parameter special case. The operators studied in this section can be considered a prototype for a multi-parameter analog of parts of the Calder\'on-Zygmund theory. \begin{rmk} Often, when one hears of {\it multi-parameter singular integrals}, it is the product theory of singular integrals that is being referred to. See, e.g., \cite{FeffermanSingularIntegralsOnProductDomains, NagelSteinOnTheProductTheoryOfSingularIntegrals}. The operators discussed in this section are not necessarily of product type. \end{rmk} Suppose we are given $\nu$ families of $C^\infty$ vector fields with single-parameter formal degrees, $\left( X^\mu, d^{\mu}\right)= \left( X^\mu_1, d^{\mu}_1\right) ,\ldots, \left( X^{\mu}_{q_\mu}, d^{\mu}_{q_\mu}\right)$, $1\leq \mu\leq \nu$. 
We suppose that each $\left( X^\mu, d^\mu\right)$ satisfies $\mathcal{D}\left( K_0, \left[0,1\right]\right)$. Let $\left( X_1,d_1\right),\ldots, \left( X_r,d_r\right)$ be the list of vector fields consisting of $\left( X^\mu_j, \hat{d}_j^\mu\right)$ for every $\mu$ and $j$, where $\hat{d}_j^{\mu}\in \left[0,\infty\right)^\nu$ is $d_j^\mu$ in the $\mu$ coordinate and $0$ in every other coordinate. Taking $\mu_0=\nu$ (i.e., $\mathcal{A}=\left[0,1\right]^\nu$), we suppose $\left( X_1,d_1\right),\ldots, \left( X_r,d_r\right)$ generates a finite list $\left( X_1,d_1\right) ,\ldots, \left( X_q,d_q\right)$. We take $N=q$ and define $\nu$-parameter dilations on ${\mathbb{R}}^N$ by, \begin{equation*} \delta\left( t_1,\ldots, t_q\right) = \left( \delta^{d_1}t_1,\ldots, \delta^{d_q}t_q\right), \end{equation*} for $\delta\in \left[0,\infty\right)^\nu$. Define, \begin{equation*} \gamma_{\left( t_1,\ldots, t_q\right)} \left( x\right) = e^{t_1X_1+\cdots + t_qX_q}x. \end{equation*} We consider operators, $T$, of the form covered in Theorem \ref{ThmMainThmSecondPass}, where $K\in \mathcal{K}\left( q,d,a,\nu,\nu\right)$ for some small $a>0$. It follows from the remarks in Section 17.1 of \cite{StreetMultiParameterSingRadonLt} that all of the assumptions of Theorem \ref{ThmMainThmSecondPass} are satisfied with the above choices. Hence, $T$ is bounded on $L^p$, $1<p<\infty$. For $K\in \mathcal{K}\left( q,d,a,\nu,\nu\right)$, decompose $K$, \begin{equation*} K=\sum_{j\in {\mathbb{N}}^{\nu}} \dil{\varsigma_j}{2^j}, \end{equation*} where $\varsigma_j$ is as in Definition \ref{DefnsK}. Corresponding to this decomposition of $K$, one obtains a decomposition of $T$, $T=\sum_{j\in {\mathbb{N}}^{\nu}} T_j$. Let $T_j\left( x,y\right)$ denote the Schwartz kernel of $T_j$. 
It follows directly from Proposition 4.22 of \cite{StreetMultiParameterCCBalls} that \begin{itemize} \item $T_j\left( x,y\right)$ is supported on $y\in \B{X}{d}{x}{2^{-j}}$, \item $\left|T_j\left( x,y\right)\right|\lesssim \Vol{\B{X}{d}{x}{2^{-j}} }^{-1}$. \end{itemize} In the one parameter situation, this is just the fact that a Calder\'on-Zygmund singular integral can be decomposed into dyadic scales in the usual way. Thus, the above is a prototype for a multi-parameter generalization of the usual dyadic decomposition of a single-parameter Calder\'on-Zygmund singular integral operator. \begin{rmk} In light of the above, $T_j^{*}T_j$ is no ``smoother'' than $T_j$. The reader used to the single parameter theory might then suspect that a $T^{*}T$ type iteration argument will not be helpful in our studies. However, this is not the case. Indeed, a $T^{*}T$ type iteration argument was essential to our proof (in Section \ref{SectionGenL2}). The idea is that when $j_1,j_2\in {\mathbb{N}}^{\nu}$ and $j_1\rightedge j_2\ne j_1,j_2$, $\left( T_{j_1}^{*} T_{j_2}\right)^{*} T_{j_1}^{*} T_{j_2}$ is smoother than $T_{j_1}^{*}T_{j_2}$. \end{rmk} We now turn to maximal functions. With all the same choices as above, we define $\mathcal{M}$ as in Theorem \ref{ThmMainMaxThm}. With $\psi_1,\psi_2$ as in Theorem \ref{ThmMainMaxThm}, define a new maximal operator by, \begin{equation*} \mathcal{M}t f\left( x\right) = \sup_{\left|\delta\right|\leq a'} \psi_1\left( x\right) \frac{1}{\Vol{\B{X}{d}{x}{\delta}}} \int_{\B{X}{d}{x}{\delta}} \left| f\left( y\right) \psi_2\left( y\right)\right| \: dy. \end{equation*} It follows directly from Proposition 4.22 of \cite{StreetMultiParameterCCBalls} that $\mathcal{M}t f\left( x\right) \lesssim \mathcal{M} f\left( x\right)$, provided $a'>0$ is sufficiently small. Hence, $\mathcal{M}t$ is bounded on $L^p$, $1<p\leq \infty$. This generalizes the maximal results of \cite{StreetMultiParameterCCBalls}. 
Reduction to Theorem \ref{ThmMainMaxThm} is not the only way to prove the $L^p$ boundedness of $\mathcal{M}t$. Indeed, for each $\mu$, define the maximal operator, \begin{equation*} \mathcal{M}t_\mu f\left( x\right) = \sup_{0<\delta_\mu\leq a''} \denum{\psi}{0}\left( x\right) \frac{1}{\Vol{\B{X^\mu}{d^\mu}{x}{\delta_\mu}}} \int_{\B{X^\mu}{d^\mu}{x}{\delta_\mu}} \left| f\left( y\right) \denum{\psi}{0}\left( y\right)\right| \: dy. \end{equation*} It is shown in Section 6.2 of \cite{StreetMultiParameterCCBalls} that $\mathcal{M}t_\mu$ is bounded on $L^p$ ($1<p\leq \infty$) for each $\mu$. This proceeds in a similar manner to the methods in Section \ref{SectionCZNoSpan}: by reduction to the classical Calder\'on-Zygmund theory. It can be shown that, \begin{equation}\label{EqnsMtIneq} \mathcal{M}t f\left( x\right) \lesssim \left(\mathcal{M}t_1\cdots \mathcal{M}t_\nu \right)^M f\left( x\right), \end{equation} for some large $M$ (provided $a'$ is sufficiently smaller than $a''$); and the $L^p$ boundedness of $\mathcal{M}t$ follows. The proof of \eqref{EqnsMtIneq} is somewhat lengthy and technical, and does not seem to yield Theorem \ref{ThmMainMaxThm} in the general case. We, therefore, say no more about this here. In this situation, we can develop a Littlewood-Paley square function of an appropriate type. While the operators $D_j$ from Section \ref{SectionAuxOps} were sufficient to create a Littlewood-Paley square function to prove the $L^p$ boundedness of $T$, they are not of the same type as $T_j$--and therefore take us out of the class of operators we are discussing. Instead, one uses that the distribution $\delta_0\in \mathcal{K}\left( q, d, a, \nu,\nu\right)$ (Proposition 16.3 of \cite{StreetMultiParameterSingRadonLt}). Write, \begin{equation*} \delta_0 = \sum_{j\in {\mathbb{N}}^{\nu}} \dil{\varsigma_j}{2^j}, \end{equation*} where $\varsigma_j$ is as in Definition \ref{DefnsK}. 
Define, \begin{equation*} \widetilde{D}_j f\left( x\right) = \denum{\psi}{-2}\left( x\right) \int f\left(\gamma_t\left( x\right) \right) \denum{\psi}{-2}\left( \gamma_t\left( x\right) \right) \dil{\varsigma_j}{2^j}\left( t\right) \: dt. \end{equation*} Thus, $\denum{\psi}{-2}^2 = \sum_{j\in {\mathbb{N}}^{\nu}} \widetilde{D}_j$. One can recreate the theory in Section \ref{SectionSquareFunc} with $D_j$ replaced by $\widetilde{D}_j$, so long as one uses Theorem \ref{ThmMainThmSecondPass} instead of Theorem \ref{ThmSingleParamSingInt} throughout. We therefore obtain a Calder\'on-type ``reproducing formula'' and a Littlewood-Paley square function in terms of $\widetilde{D}_j$. \section{Some comments on maximal operators}\label{SectionMaximalComment} There are a number of maximal operators in the literature which are related to the one discussed in Theorem \ref{ThmMainMaxThm}. The ones most closely related are those discussed in \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup}, where certain strong maximal functions on nilpotent Lie groups are studied. Of course, our methods also apply to convolution operators on nilpotent groups. See Section 17.2 of \cite{StreetMultiParameterSingRadonLt}. Our results can be used to study some of the maximal operators which were covered in \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup}, and we discuss this below. At the end of this section, we discuss the fact that not all of the maximal operators from \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} are covered by our results. Nevertheless, these maximal operators can be covered by the {\it methods} of this paper, and this will be taken up (and generalized) in \cite{StreetMultiParameterSingRadonAnal}. In what follows, we describe the connection between the results in this paper and the results in \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} in the special case of the three dimensional Heisenberg group, $\mathbb{H}^1$. 
All of the comments that follow work more generally for, say, stratified nilpotent Lie groups, but we leave those details to the interested reader. As a manifold, $\mathbb{H}^1 = {\mathbb{C}}\times {\mathbb{R}}$, and we give it coordinates $\left( z,t\right) = \left( x,y,t\right)$. A basis for the left invariant vector fields on $\mathbb{H}^1$ is $X=\partial_x - 2y\partial_t$, $Y= \partial_y+2x\partial_t$, $T=\partial_t$. In \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup}, the following strong maximal function is considered, \begin{equation*} \mathcal{M}t f\left( \xi\right) = \sup_{\delta_1,\delta_2,\delta_3>0} \int_{\left|\left( x,y,t\right)\right|\leq 1} \left|f\left(e^{x\delta_1 X+ y\delta_2 Y + t\delta_3 T}\xi\right)\right| \: dx\: dy\: dt. \end{equation*} It is shown that $\mathcal{M}t$ is bounded on $L^p$, $1<p\leq \infty$. We claim that this result follows from Theorem \ref{ThmMainMaxThm}. To do this, we show that the truncated operator \begin{equation*} \mathcal{M}t_N f\left( \xi\right) = \sup_{N\geq \delta_1,\delta_2,\delta_3>0} \int_{\left|\left( x,y,t\right)\right|\leq 1} \left|f\left(e^{x\delta_1 X+ y\delta_2 Y + t\delta_3 T}\xi\right)\right| \: dx\: dy\: dt, \end{equation*} is bounded on $L^p$ ($1<p<\infty$) with bound independent of $N$. Indeed, let $\psi\geq 0$ be a $C_0^\infty$ function which equals $1$ on a neighborhood of $0$. Define, \begin{equation*} \mathcal{M} f\left( \xi\right) = \sup_{1\geq \delta_1,\delta_2,\delta_3>0} \psi\left( \xi\right) \int_{\left|\left( x,y,t\right)\right|\leq a} \left|f\left(e^{x\delta_1 X+ y\delta_2 Y + t\delta_3 T}\xi\right)\right| \psi\left(e^{x\delta_1 X+ y\delta_2 Y + t\delta_3 T}\xi \right) \: dx\: dy\: dt, \end{equation*} where $a>0$ is some small number. It is easy to verify that Theorem \ref{ThmMainMaxThm} applies (see Section 17.2 of \cite{StreetMultiParameterSingRadonLt}). Thus $\mathcal{M}$ is bounded on $L^p$. 
Note, for $f$ with small support near $0$, we trivially have, \begin{equation*} \mathcal{M}t_a f\left( \xi\right) \lesssim \mathcal{M} f\left( \xi\right), \end{equation*} for every $\xi$. Now consider the one-parameter dilations on $\mathbb{H}^1$ given by, \begin{equation}\label{EqnOneParamDil} r\left( x,y,t\right) = \left( rx,ry,r^2 t\right), \end{equation} for $r\in \left( 0,\infty\right)$. Define, for $1<p<\infty$, \begin{equation*} \dilp{f}{r}\left( \xi\right) = r^{4/p} f\left( r\xi\right), \end{equation*} so that $\LpN{p}{\dilp{f}{r}}=\LpN{p}{f}$. It is easy to see that, \begin{equation*} \dilp{\left(\mathcal{M}t_{N/r} \dilp{f}{r} \right)}{1/r} = \mathcal{M}t_{N} f. \end{equation*} Fix $f\in L^p$ with compact support. Taking $r$ so large that $N/r\leq a$ and $\dilp{f}{r}$ has small support, we see, \begin{equation*} \LpN{p}{\mathcal{M}t_N f} = \LpN{p}{\mathcal{M}t_{N/r} \dilp{f}{r}} \lesssim \LpN{p}{\mathcal{M} \dilp{f}{r}}\lesssim \LpN{p}{\dilp{f}{r}} = \LpN{p}{f}. \end{equation*} Thus, we have, \begin{equation*} \LpN{p}{\mathcal{M}t_N f}\lesssim \LpN{p}{f}, \end{equation*} for every $f$ with compact support. A limiting argument completes the proof. There is another approach which can be used to prove the $L^p$ boundedness of $\mathcal{M}t$. Namely, one could simply recreate the entire proof in this paper, without using cutoff functions, and instead of restricting to $\delta\in \left[0,1\right]^{\nu}$, one allows $\delta\in \left[0,\infty\right)^\nu$. It is easy to see that, in this special case, all of our methods go through. This is due to the fact that one has global one-parameter dilations, \eqref{EqnOneParamDil}, on $\mathbb{H}^1$ which respect each aspect of our proof. The proof method for the $L^2$ boundedness of $\mathcal{M}t$ in \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} is closely related to the proof in this paper. 
One main difference is that (for certain maximal operators more general than $\mathcal{M}t$) \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} uses transference methods to lift the problem to a higher dimensional maximal function. This allows \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} to deal with certain maximal functions on nilpotent groups to which our methods do not directly apply. It turns out that all of the maximal operators covered in \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} {\it can} be covered by our methods, with some modifications. This will be discussed in \cite{StreetMultiParameterSingRadonAnal}, where (among other things) the results of \cite{ChristTheStrongMaximalFunctionOnANilpotentGroup} will be generalized. To understand where Theorem \ref{ThmMainMaxThm} falls short, consider the function $\gamma: {\mathbb{R}}^{2}\times {\mathbb{R}}\rightarrow {\mathbb{R}}$ given by $\gamma_{s,t}\left( x\right) = x-st$. It is easy to see, using the methods of Section 17.5 of \cite{StreetMultiParameterSingRadonLt}, that there is a product kernel $K\left( s,t\right)\in \mathcal{K}\left(2,\left(\left(1,0\right),\left( 0,1\right)\right),a,2,2\right) $ supported on $B^2\left( a\right)$ (with $a$ as small as we like) such that the corresponding singular Radon transform (as in Theorem \ref{ThmMainThmFirstPass}) is not bounded on $L^2$. This fact was first noted in \cite{NagelWaingerL2BoundednessOfHilbertTransformsMultiParameterGroup}. However, if $\psi\in C_0^\infty$, $\psi\geq 0$, is supported sufficiently close to $0$, the maximal function, \begin{equation*} \mathcal{M} f\left( x\right) = \sup_{0<\delta_1,\delta_2\leq a} \psi\left( x\right) \int \left|f\left(\gamma_{\delta_1 s, \delta_2 t}\left( x\right)\right) \right|\: ds\: dt, \end{equation*} is bounded on $L^p\left( {\mathbb{R}}\right)$ ($1<p\leq \infty$), for $a>0$ sufficiently small. 
Thus, in an ideal world, Theorem \ref{ThmMainMaxThm} would be generalized to apply to this choice of $\gamma$ (and other, more complicated,\footnote{Because of the form of $\gamma$, the operator $\mathcal{M}$ is actually equal to a one-parameter maximal operator, which is covered by Theorem \ref{ThmMainMaxThm}. There are other choices of $\gamma$ of the same general type where this is not the case, yet the maximal operator is still bounded.} $\gamma$ like it). However, since we used the {\it same} class of $\gamma$ for Theorem \ref{ThmMainThmFirstPass} and Theorem \ref{ThmMainMaxThm}, and Theorem \ref{ThmMainThmFirstPass} fails for this choice of $\gamma$, our methods need to be modified to attack this sort of example. In fact, it is possible to modify our methods in a natural way to deal with cases of the same type as this choice of $\gamma$. This will be discussed in detail in \cite{StreetMultiParameterSingRadonAnal}. The reason we have not done so here is that in order to include $\gamma$ in the class of functions we study, we will need to strengthen other aspects of our assumptions in a few technical ways. Thus, the maximal theorem proven in \cite{StreetMultiParameterSingRadonAnal} will not be strictly stronger than the one in this paper. This is an issue, since the proof of Theorem \ref{ThmMainThmSecondPass} used Theorem \ref{ThmMainMaxThm}. Thus, we would end up weakening Theorem \ref{ThmMainThmSecondPass} if we attempted to modify Theorem \ref{ThmMainMaxThm}. See Remark \ref{RmkABetterMaxForRealAnal} for further details on the sort of maximal results we will prove in \cite{StreetMultiParameterSingRadonAnal}. \begin{center} MSC2010: Primary 42B20, Secondary 42B25 \end{center} \begin{center} Keywords: Calder\'on-Zygmund theory, singular integrals, singular Radon transforms, maximal Radon transforms, Littlewood-Paley theory, product kernels, flag kernels, Carnot-Carath\'eodory geometry \end{center} \end{document}
\begin{document} \title{Multiplication for solutions of the equation $\mathop{\mathrm{grad}}{f} = M\mathop{\mathrm{grad}}{g}$} \author{{\bf Jens Jonasson}\\ Department of Mathematics\\ Link\"oping University\\ SE-581 83 Link\"oping, Sweden} \maketitle \begin{center} E-mail address: \texttt{[email protected]} \end{center} \begin{abstract} Linear first order systems of partial differential equations of the form $\nabla f = M\nabla g,$ where $M$ is a constant matrix, are studied on vector spaces over the fields of real and complex numbers, respectively. The Cauchy--Riemann equations belong to this class. We introduce a bilinear $*$-multi\-plication on the solution space, which plays the role of a nonlinear superposition principle that allows for algebraic construction of new solutions from known solutions. The gradient equations $\nabla f = M\nabla g$ constitute only a simple special case of a much larger class of systems of partial differential equations which admit a bilinear multiplication on the solution space, but we prove that any gradient equation has the exceptional property that the general analytic solution can be expressed through power series of certain simple solutions, with respect to the $*$-multiplication. \end{abstract} \section{Introduction} We consider equations of the form \begin{align}\label{fmg} \nabla f = M\nabla g, \end{align} where $f(\mathbf{x})$ and $g(\mathbf{x})$ are unknown scalar-valued functions, defined on an open convex domain in an $n$-dimensional vector space $\mathcal{V}$ over the field of real or complex numbers, and $M$ is a constant $n\times n$ matrix. Hence, each equation under consideration constitutes a system, usually overdetermined, of $n$ first order linear partial differential equations (abbreviated PDEs) for two unknown functions. The Cauchy--Riemann equations \begin{align*} \pd{f}{x} & = \pd{g}{y}\\ \pd{f}{y} & = -\pd{g}{x} \end{align*} are an example of an equation of the form (\ref{fmg}). 
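Indeed, with $n=2$ and coordinates $\mathbf{x} = (x, y)$, the Cauchy--Riemann equations take the form (\ref{fmg}) with \begin{align*} M = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \end{align*} since then the two components of $\nabla f = M\nabla g$ are precisely $\pd{f}{x} = \pd{g}{y}$ and $\pd{f}{y} = -\pd{g}{x}$.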
Since there is a $1-1$ correspondence between solutions $(f,g)$ of the Cauchy--Riemann equations and holomorphic functions $F = f + \mathrm{i} g$, any product \begin{align*} (f + \mathrm{i} g)(\tilde{f} + \mathrm{i}\tilde{g}) = (f\tilde{f} - g\tilde{g}) + \mathrm{i}(f\tilde{g} + g\tilde{f}) \end{align*} of two holomorphic functions is again holomorphic. The ordinary multiplication of holomorphic functions defines a bilinear $*$-multiplication on the solution space $S$ of the Cauchy--Riemann equations \begin{align}\label{crmult} \begin{aligned} *: S \times S & \longrightarrow S \\ (f, g)*(\tilde{f}, \tilde{g}) & = (f\tilde{f} - g\tilde{g}, f\tilde{g} + g\tilde{f}). \end{aligned} \end{align} Moreover, since every holomorphic function is analytic, every solution of the Cauchy--Riemann equations can be expressed locally as a power series of a simple solution. For example, every solution can be formulated, in a neighborhood of the origin, as a power series of the simple solution $(x, y)$ \begin{align}\label{crps} (f, g) = \sum_{r=0}^\infty (a_r, b_r)*(x, y)_*^r, \quad \textrm{where} \quad (x, y)_*^r = \underbrace{(x, y)* \cdots *(x, y)}_{r \textrm{ factors}}, \end{align} and $a_r, b_r$ are real constants. In \cite{jodeit-1990}, Jodeit and Olver have given an expression for the general solution of the equation (\ref{fmg}). In \cite{jonasson-2007}, we have introduced a multiplication $*$ of solutions for a wide class of linear first order systems of differential equations containing systems of the form (\ref{fmg}), where the matrix $M$ can also have non-constant entries. The $*$-multiplication is a nonlinear superposition principle that allows for algebraic construction of new solutions from known solutions. By combining the $*$-multiplication with the ordinary linear superposition principle, power series solutions can be constructed from a single simple solution. 
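As an illustration, the first few $*$-powers of the simple solution $(x, y)$ are \begin{align*} (x, y)_*^2 & = (x^2 - y^2,\ 2xy),\\ (x, y)_*^3 & = (x^3 - 3xy^2,\ 3x^2y - y^3), \end{align*} which are exactly the real and imaginary parts of $z^2$ and $z^3$ for $z = x + \mathrm{i} y$; each pair is again a solution of the Cauchy--Riemann equations.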
In this paper we show that the equations (\ref{fmg}) are included in the family of systems equipped with a $*$-multiplication on the solution space. We also compare power series solutions, constructed with the $*$-multiplication, with the formulas in \cite{jodeit-1990} that describe the general solution of (\ref{fmg}). This paper contains the following main results: \begin{enumerate} \item Any equation (\ref{fmg}), including both the real and the complex case, admits a $*$-multiplication which provides a nonlinear superposition formula. For the Cauchy--Riemann equations, this multiplication reduces to the ordinary multiplication (\ref{crmult}), obtained from the multiplication of holomorphic functions. \item Any solution of (\ref{fmg}) can be represented as a power series (with respect to the $*$-multiplication) of certain simple solutions. In other words, we provide a different and more explicit way to describe the general analytic solution of (\ref{fmg}). This may be compared with the Cauchy--Riemann equations where each solution can be obtained from the harmonically conjugated pair given by the real and imaginary parts of a power series in one complex variable (\ref{crps}). \item The equations of the form (\ref{fmg}) constitute an interesting example of a large class of systems of linear PDEs with a simple structure, admitting $*$-multiplication of solutions. The content of this paper can be considered a thorough investigation of the $*$-multiplication for the class of systems of the form (\ref{fmg}). The results contribute to a better understanding of the general class of equations admitting $*$-multiplication. \end{enumerate} \begin{Rem3} In the special case when the matrix $M$ consists of only one Jordan block for each eigenvalue, the general solution of $($\ref{fmg}$)$ can be described in terms of component functions of functions which are differentiable over some algebra \cite{waterhouse-1992}. 
For this restricted class of matrices $M$, some of the results above can also be derived from the content of \cite{waterhouse-1992}. \end{Rem3} This paper is organized as follows. In section \ref{multsec} we present the necessary results from \cite{jonasson-2007} about bilinear $*$-multiplication for solutions of linear systems of PDEs with variable and constant coefficients. Section \ref{fmgsec} contains a summary of the main results of the paper \cite{jodeit-1990} by Jodeit and Olver about the algebraic form of the general solution to the equation (\ref{fmg}). Sections \ref{cstmultsec} and \ref{pssec} contain the main results of this paper. In section \ref{cstmultsec}, we show that any equation of the form (\ref{fmg}) admits a bilinear $*$-multiplication of solutions of the type described in \cite{jonasson-2007}. In section \ref{pssec} we prove (theorem \ref{ps2} and theorem \ref{realps}) that every solution of (\ref{fmg}) can be expressed as a power series of simple solutions with respect to the $*$-multiplication. \section{Multiplication of solutions for systems of PDEs}\label{multsec} Certain systems of linear partial differential equations admit a bilinear operation on the solution space. The most general results about this operation, which we call $*$-multiplication, are given in \cite{jonasson-2007} (the results are derived for equations on a real differentiable manifold, but all arguments hold also for analytic equations over a complex vector space), and in this section we give a brief summary. Let $Z_\mu = Z_0 + Z_1\mu + \cdots + Z_{m-1} \mu^{m-1} + \mu^m$ and $V_\mu = V_0 + V_1\mu + \cdots + V_{m-1} \mu^{m-1}$ be polynomials in the variable $\mu$, with coefficients that are smooth scalar-valued functions on the vector space $\mathcal{V}$. Furthermore, let $A_\mu = A_0 + A_1\mu + \cdots + A_k\mu^k$ be a polynomial where each coefficient is an $n \times n$ matrix, with smooth scalar-valued functions as entries.
Consider then the matrix equation \begin{align}\label{avz} A_\mu \nabla V_\mu \equiv 0 \quad \left(mod\; Z_\mu \right), \end{align} where the unknown function $V_\mu$ is a solution if the remainder of the polynomial $A_\mu \nabla V_\mu$ modulo $Z_\mu$ is zero, i.e., if there exists a polynomial vector $\mathbf{u}_\mu$ such that $A_\mu \nabla V_\mu = Z_\mu \mathbf{u}_\mu$. Given two solutions $V_\mu$ and $W_\mu$, the $*$-product $V_\mu * W_\mu$ is defined as the unique remainder of the ordinary product $V_\mu W_\mu$ modulo $Z_\mu$, i.e., $V_\mu * W_\mu$ is the unique polynomial of degree less than $m$ that can be written as $V_\mu * W_\mu = V_\mu W_\mu - Q_\mu Z_\mu$ for some polynomial $Q_\mu$. In general, neither $V_\mu W_\mu$ nor $V_\mu * W_\mu$ are solutions of (\ref{avz}), but when $A_\mu$ and $Z_\mu$ are related by the equation \begin{align}\label{azz} A_\mu \nabla Z_\mu \equiv 0 \quad \left(mod\; Z_\mu \right), \end{align} that is, when $Z_\mu - \mu^m$ is a solution of (\ref{avz}), then the $*$-multiplication maps any two solutions into a new solution. Thus, the $*$-multiplication provides a method for constructing, in a purely algebraic way, new solutions from already known solutions. In particular, when the coefficients of $Z_\mu$ are not all constant, $*$-products of trivial (constant) solutions are in general non-trivial solutions. \begin{Exmp3} If we let \begin{align*} A_\mu = \matris{\mu}{-1}{1}{\mu}, \quad V_\mu = f + g\mu, \quad Z_\mu = 1 + \mu^2, \end{align*} the equation $($\ref{avz}$)$ reduces to the Cauchy--Riemann equations. Since the coefficients in $Z_\mu$ are here constant, the condition $($\ref{azz}$)$ is trivially satisfied.
Hence, the $*$-multiplication maps solutions to solutions, and it reconstructs the multiplication formula $($\ref{crmult}$)$ obtained from the multiplication of holomorphic functions: \begin{align*} (f + g\mu)(\tilde{f} + \tilde{g}\mu) & = f\tilde{f} + (f\tilde{g} + g\tilde{f})\mu + g\tilde{g}\mu^2 \\ & \equiv (f\tilde{f} - g\tilde{g}) + (f\tilde{g} + g\tilde{f})\mu = (f + g\mu)*(\tilde{f} + \tilde{g}\mu). \end{align*} \end{Exmp3} We see that, in order to obtain the first of the main results named in the introduction, it is enough to find, for each matrix $M$, a matrix $A_\mu$ and a function $Z_\mu$ such that the relation (\ref{azz}) is satisfied, and such that the equation (\ref{avz}) is equivalent to (\ref{fmg}). The theory of $*$-multiplication can also be formulated in matrix language, without the use of congruences and the extra parameter $\mu$. In matrix notation the equation (\ref{avz}) can be written as \begin{align}\label{matrissys} \sum_{i=0}^k C^iV'A_i = 0, \end{align} where $V'$ denotes the $m \times n$ functional matrix with entries $(V')_{ij} = \partial V_i / \partial x_j$ and $C$ is the companion matrix for the polynomial $Z_\mu$ \begin{align*} C = \left[ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & -Z_0 \\ 1 & 0 & \cdots & 0 & -Z_1 \\ 0 & 1 & \ddots & & \vdots \\ \vdots & \vdots & \ddots & 0 & -Z_{m-2} \\ 0 & 0 & \cdots & 1 & -Z_{m-1} \end{array} \right]. \end{align*} The $*$-multiplication exists for any equation (\ref{matrissys}) for which $V = [Z_0 \quad Z_1 \;\; \cdots \;\; Z_{m-1}]^T$ is a solution. The $*$-product of two solutions can then be written explicitly as the following matrix product \begin{align*} V*W & = V_CW_Ce_1 = \left( \sum_{i=0}^{m-1} V_iC^i \right)\left( \sum_{i=0}^{m-1} W_iC^i \right)\left[ \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right]. \end{align*} An interesting property of the $*$-multiplication is that complicated solutions can be generated, starting from simple solutions.
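To make the matrix form of the $*$-product concrete, the following small SymPy sketch (ours, not from the paper) specialises $V*W = V_CW_Ce_1$ to the Cauchy--Riemann example, where $Z_\mu = 1 + \mu^2$ and the companion matrix is $2\times 2$; the result agrees with the multiplication (\ref{crmult}).

```python
# Sketch (ours): the matrix form V*W = V_C W_C e_1 of the *-product for the
# Cauchy-Riemann case Z_mu = 1 + mu^2, with coefficients (V_0, V_1) = (f, g).
import sympy as sp

f, g, ft, gt = sp.symbols('f g ft gt')
C = sp.Matrix([[0, -1], [1, 0]])    # companion matrix of Z_mu = 1 + mu^2
e1 = sp.Matrix([1, 0])

V_C = f*sp.eye(2) + g*C             # sum_i V_i C^i
W_C = ft*sp.eye(2) + gt*C
VW = (V_C * W_C * e1).expand()      # the *-product as a column vector

# agrees with the multiplication (crmult) of holomorphic functions
assert VW == sp.Matrix([f*ft - g*gt, f*gt + g*ft])
```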
In particular, when $Z_\mu$ has non-constant coefficients, $*$-powers of trivial (constant) solutions will in general be non-trivial. By using the $*$-multiplication in combination with the linear superposition principle, one can form $*$-polynomial solutions \begin{align*} V_\mu = \sum_{r=0}^N a_r\mu^r_*, \quad {\rm where} \quad \mu_*^r = \underbrace{\mu * \mu * \cdots * \mu}_{r \textrm{ factors}}, \end{align*} or in matrix notation \begin{align}\label{polysol} V = \sum_{r=0}^N a_rC^re_1, \end{align} where $a_r$ are constants. Moreover, when suitable convergence conditions are satisfied, by letting $N\rightarrow \infty$ in (\ref{polysol}) one obtains power series solutions. However, when the coefficients of $Z_\mu$ are all constant (which is the case for the Cauchy--Riemann equations), power series of trivial solutions will be trivial. Instead, in order to construct interesting solutions, one has to build power series from a non-trivial solution. Such power series are not considered in \cite{jonasson-2007} but the results for power series of trivial solutions extend immediately to power series of non-trivial solutions. More precisely, the power series $\sum a_r(V_\mu)_*^r$, which in matrix notation reads \begin{align*} \sum_{r=0}^\infty a_r\tilde{C}^re_1, \quad {\rm where} \quad \tilde{C} = \sum_{i=0}^{m-1}V_iC^i, \end{align*} defines a solution of (\ref{avz}) as long as the moduli of the eigenvalues of $\tilde{C}$ are smaller than the radius of convergence of the power series $\sum a_rt^r$ and the geometrical multiplicities of the eigenvalues are constant. For power series of trivial solutions, the matrix $\tilde{C}$ is identical with the companion matrix $C$ and its eigenvalues coincide with the roots of the polynomial $Z_\mu$. We can also consider $*$-power series with arbitrary trivial solutions as coefficients \begin{align*} \sum_{r=0}^\infty a_{\mu, r}*(V_\mu)_*^r, \quad {\rm where} \quad a_{\mu, r} = a_{r,0} + a_{r,1}\mu + \cdots + a_{r,m-1}\mu^{m-1}.
\end{align*} \begin{Rem3} The $*$-multiplication \cite{jonasson-2006, jonasson-2007} is a generalization of the multiplication of cofactor pair systems introduced by Lundmark \cite{lundmark-2001}, which in turn is a generalization of a recursive formula for obtaining new cofactor pair systems \cite{rauch-1986, rauch-1999, lundmark-2003}. A cofactor pair system is a Newton equation $\ddot{q}^h + \Gamma^h_{ij}\dot{q}^i\dot{q}^j = F^h,$ $h = 1, 2, \ldots, n$ on a Riemannian manifold, where the vector field $F$ can be written as \begin{align*} F = -(\det{J})^{-1}J\nabla V = -(\det{\tilde{J}})^{-1}\tilde{J}\nabla\tilde{V} \end{align*} for some functions $V,$ $\tilde{V}$ and two special conformal Killing tensors $J$, $\tilde{J}$. The family of cofactor pair systems contains all separable conservative Lagrangian systems, which in general can be integrated through separation of variables in the Hamilton--Jacobi sense \cite{rauch-1999,lundmark-2003,lundmark-2002,rauch-2003, crampin-2001, benenti-2005}. The multiplication of cofactor pair systems is a mapping that, for fixed special conformal Killing tensors $J$ and $\tilde{J}$, assigns to two solutions $(V, \tilde{V})$ and $(W, \tilde{W})$ of the equation $(\det{J})^{-1}J\nabla V = (\det{\tilde{J}})^{-1}\tilde{J}\nabla\tilde{V}$ a new solution $(V, \tilde{V})*(W, \tilde{W})$ in a bilinear way. \end{Rem3} \section{The general solution of $\nabla f = M\nabla g$}\label{fmgsec} In \cite{jodeit-1990}, Jodeit and Olver describe the general analytic solution of the equation (\ref{fmg}) for any constant, real or complex, matrix $M$. In this section, we introduce some notation, and briefly describe the main results from \cite{jodeit-1990}. A linear change of variables $\mathbf{x}\rightarrow A\mathbf{x}$ transforms the equation (\ref{fmg}) into $\nabla f = A^{-T}MA^T\nabla g$. Thus, by performing this kind of transformation, we can assume that the matrix $M$ is in some suitable canonical form with respect to similarity.
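The change-of-variables rule just stated can be checked symbolically. The following sketch (ours, with illustrative test data) uses the Cauchy--Riemann matrix and the solution $(\mathrm{Re}\,z^2,\; \mathrm{Im}\,z^2)$: writing $\hat{f}(\mathbf{y}) = f(A^{-1}\mathbf{y})$ and $\hat{g}(\mathbf{y}) = g(A^{-1}\mathbf{y})$ for the solution expressed in the new coordinates $\mathbf{y} = A\mathbf{x}$, the pair $(\hat{f}, \hat{g})$ solves the transformed equation $\nabla \hat{f} = A^{-T}MA^{T}\nabla \hat{g}$.

```python
# Sketch (ours): checking that x -> A x transforms grad f = M grad g into
# grad f = A^{-T} M A^{T} grad g.  Test data: the Cauchy-Riemann matrix and
# the solution (Re z^2, Im z^2); A is an arbitrarily chosen invertible matrix.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
M = sp.Matrix([[0, 1], [-1, 0]])
A = sp.Matrix([[1, 2], [1, 3]])     # det A = 1, so A is invertible

def grad(h, v):
    return sp.Matrix([sp.diff(h, u) for u in v])

f = x1**2 - x2**2                   # Re z^2
g = 2*x1*x2                         # Im z^2
assert (grad(f, (x1, x2)) - M*grad(g, (x1, x2))).expand() == sp.zeros(2, 1)

# express f and g in the new coordinates y = A x, i.e. substitute x = A^{-1} y
xs = A.inv() * sp.Matrix([y1, y2])
fh = f.subs({x1: xs[0], x2: xs[1]})
gh = g.subs({x1: xs[0], x2: xs[1]})

Mt = A.inv().T * M * A.T            # the transformed matrix A^{-T} M A^{T}
assert (grad(fh, (y1, y2)) - Mt*grad(gh, (y1, y2))).expand() == sp.zeros(2, 1)
```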
We will assume in the following that $M$ is in Jordan canonical form when $\mathcal{V}$ is a vector space over the complex numbers, and in real Jordan canonical form \cite{horn} for real $\mathcal{V}$. \subsection{The complex case} If we let $\lambda_1,\ldots ,\lambda_p$ denote the distinct eigenvalues of the matrix $M$, there is a primary decomposition of the vector space \begin{align}\label{v1vp} \mathcal{V} = \mathcal{V}^1\oplus\cdots\oplus \mathcal{V}^p \end{align} consisting of invariant subspaces $\mathcal{V}^k = \ker{(M - \lambda_kI)^{e_k}}$, where $e_k$ is the algebraic multiplicity of $\lambda_k$. As mentioned above, there is no loss of generality in assuming that $M$ has a diagonal block structure $M = \mathrm{diag}(M^1,\ldots ,M^p),$ where $M^k = \mathrm{diag}(J^k_1,\ldots ,J^k_{p_k})$ and each $J^k_i$ is a Jordan block corresponding to the eigenvalue $\lambda_k$, i.e., \begin{align*} J^k_i = \left[ \begin{array}{cccc} \lambda_k & 1 & &\\ & \lambda_k & \ddots &\\ & & \ddots & 1\\ & & & \lambda_k \end{array} \right]. \end{align*} There is a simple decomposition of the general solution of (\ref{fmg}) that is associated with the primary decomposition (\ref{v1vp}) of the vector space $\mathcal{V}$. Every solution $(f,\; g)$ of the equation (\ref{fmg}) can be written as a sum \begin{align}\label{sumfg} f = f^1 + f^2 + \cdots + f^p, \quad g = g^1 + g^2 + \cdots + g^p, \end{align} where $(f^k,\; g^k)$ is a solution of the corresponding equation $\nabla f = M^k\nabla g$. Hence, the problem of describing the general solution of (\ref{fmg}) is reduced to the case when the matrix $M$ consists of Jordan blocks corresponding to a single eigenvalue $\lambda$. Therefore, let $M = \mathrm{diag}(J_1,\ldots ,J_m),$ where $J_k$ is a Jordan block of size $(n_k+1)\times (n_k+1)$ corresponding to $\lambda$. We assume also that coordinates are chosen in such a way that the sizes of the Jordan blocks are non-increasing, i.e., $n_1 \ge n_2 \ge \cdots \ge n_m \ge 0$.
Any vector $\mathbf{x}\in \mathcal{V}$ is decomposed as $\mathbf{x} = [\mathbf{x}^1 \quad \mathbf{x}^2 \;\; \cdots \;\; \mathbf{x}^m]^T$ so that $\mathbf{x}^k = [x^k_0 \quad x^k_1 \;\; \cdots \;\; x^k_{n_k}]^T$ contains the variables that correspond to the block $J_k$. The variables $x^1_0,\; x^2_0, \ldots, x^m_0$, which appear in the first position of the vectors $\mathbf{x}^1,\; \mathbf{x}^2, \ldots, \mathbf{x}^m$, are called \emph{major variables} and all other variables are referred to as \emph{minor variables}. For each block $J_k$, we introduce a function \begin{align*} x^k_\mu = x_0^k + x_1^k\mu + \cdots + x_{n_k}^k\mu^{n_k}, \end{align*} which depends on the variables contained in $\mathbf{x}^k$ and a parameter $\mu$. Furthermore, let $\nu_k$ denote the number of Jordan blocks of $M$ which are of size at least $k+1$, i.e., $\nu_k = \max\{i\in\mathbb{N}:n_i \ge k\}$, and define vectors $\mathbf{x}^{(0)}_\mu, \ldots, \mathbf{x}^{(n_1)}_\mu$ in the following way: \begin{align*} \mathbf{x}^{(k)}_\mu = [x^1_\mu \quad x^2_\mu \;\; \cdots \;\; x^{\nu_k}_\mu]^T. \end{align*} According to \cite{jodeit-1990}, two analytic functions $f(\mathbf{x})$ and $g(\mathbf{x})$ form a solution of the equation (\ref{fmg}) (where we have assumed that the matrix $M$ has only one eigenvalue $\lambda$ and has a Jordan canonical form) if and only if \begin{align*} f = f_0 + f_1 + \cdots + f_{n_1} + c, \quad g = g_0 + g_1 + \cdots + g_{n_1}, \end{align*} where $c$ is an arbitrary constant, and there exist scalar-valued analytic functions \begin{align*} \phi_k(s_1, \ldots, s_{\nu_k}),\quad k = 0, 1, \ldots, n_1, \end{align*} such that $f_k$, $g_k$ are given by \begin{align}\label{gensol} \left\{ \begin{aligned} f_k(\mathbf{x}) = & \left.
\lambda\frac{\partial^k}{\partial \mu^k}\phi_k \left( \mathbf{x}^{(k)}_\mu\right) \right|_{\mu=0} + k\left.\frac{\partial^{k-1}}{\partial \mu^{k-1}}\phi_k \left( \mathbf{x}^{(k)}_\mu\right) \right|_{\mu=0} \\ g_k(\mathbf{x}) = & \left. \frac{\partial^k}{\partial \mu^k}\phi_k \left( \mathbf{x}^{(k)}_\mu\right) \right|_{\mu=0}. \end{aligned} \right. \end{align} There are two interesting observations regarding the general solution: \begin{enumerate} \item By a change of dependent variable $h = f - \lambda g$ the equation (\ref{fmg}) transforms into an equation for $h$ and $g$ which is independent of the eigenvalue $\lambda$. \item The arbitrary functions $\phi_0, \phi_1, \ldots , \phi_{n_1}$ in (\ref{gensol}) depend only on the major variables $x^1_0,\; x^2_0, \ldots, x^m_0$, whereas the dependence on the minor variables of the functions $f_k$ and $g_k$ is restricted to certain fixed polynomials. For example, in the case when $m=1$, i.e., when the matrix consists of only one Jordan block, the functions $f_k$ and $g_k$ can be written as \begin{align*} f_k(\mathbf{x}) & = \lambda g_k(\mathbf{x}) + k\sum_{j=0}^{k-1} P_{k-1,j}(x_1, x_2, \ldots , x_{k-j}) \phi_k^{(j)}(x_0)\\ g_k(\mathbf{x}) & = \sum_{j=0}^k P_{k,j}(x_1, x_2, \ldots , x_{k-j+1})\phi_k^{(j)}(x_0), \end{align*} where $\phi_k^{(j)}$ denotes the $j$-th derivative of the function $\phi_k$, and $P_{k,j}$ are fixed polynomials, related to the partial Bell polynomials \cite{comtet}. \end{enumerate} We provide a concrete example in order to illustrate how to read this result from \cite{jodeit-1990}. \begin{Exmp3}\label{exn5} Consider the equation $($\ref{fmg}$)$ where $M$ is a constant $5\times 5$ matrix. As mentioned above, by changing independent variables in a linear way, we can assume that $M$ is in Jordan canonical form.
We consider the equation $($\ref{fmg}$)$ in the particular case when $M$ is given by \begin{align*} M = \left[ \begin{array}{ccccc} \lambda_1 & 1 & 0 & 0 & 0\\ 0 & \lambda_1 & 0 & 0 & 0\\ 0 & 0 & \lambda_2 & 1 & 0\\ 0 & 0 & 0 & \lambda_2 & 0\\ 0 & 0 & 0 & 0 & \lambda_2\\ \end{array} \right], \end{align*} where $\lambda_1$ and $\lambda_2$ are distinct eigenvalues. First of all, the general solution can be decomposed as $f = f^1 + f^2,$ $g = g^1 + g^2$, where $(f^1,\;g^1)$ and $(f^2,\;g^2)$ are solutions of \begin{align*} \nabla f^1 = \matris{\lambda_1}{1}{0}{\lambda_1}\nabla g^1,\quad \textrm{and} \quad \nabla f^2 = \matrist{\lambda_2}{1}{0}{0}{\lambda_2}{0}{0}{0}{\lambda_2} \nabla g^2, \end{align*} respectively. From the formula $($\ref{gensol}$)$, we then obtain \begin{align*} \left\{ \begin{aligned} f^1 & = f^1_0 + f^1_1 = \lambda_1g^1 + \phi_1(x_0) + c \\ g^1 & = g^1_0 + g^1_1 = x_1\phi_1'(x_0) + \phi_0(x_0) \end{aligned} \right., \end{align*} and \begin{align*} \left\{ \begin{aligned} f^2 & = f^2_0 + f^2_1 = \lambda_2g^2 + \psi_1(y_0^1) + c \\ g^2 & = g^2_0 + g^2_1 = \psi_0(y_0^1,\; y_0^2) + y^1_1\psi_1'(y_0^1) \end{aligned} \right., \end{align*} where $x_0,\; y_0^1,\; y_0^2$ are the major variables, $x_1,\; y^1_1$ the minor variables, and $\phi_0,\; \phi_1,\; \psi_0,\; \psi_1$ are arbitrary scalar-valued analytic functions. \end{Exmp3} \subsection{The real case}\label{sec.gensolreal} We consider now the equation (\ref{fmg}) in a convex domain of a vector space over the real numbers. Since the field of real numbers is not algebraically closed, the matrix $M$ is in general not similar to a matrix in Jordan canonical form.
Instead we assume that $M = \mathrm{diag}(M^1,\ldots, M^p)$ has the real Jordan canonical form, so that the blocks corresponding to the real eigenvalues have the same structure as in the complex case, whereas a block $M^k = \mathrm{diag}(M^k_1,\ldots, M^k_{r_k})$ corresponding to a complex conjugate pair of eigenvalues, $\lambda, \bar{\lambda} = \alpha \pm \mathrm{i}\beta$, consists of blocks $M^k_i$ of the form \begin{align}\label{realjordan} \left[ \begin{array}{cccc} \Lambda & I & &\\ & \Lambda & \ddots &\\ & & \ddots & I\\ & & & \Lambda \end{array} \right], \quad {\rm where} \quad \Lambda = \matris{\alpha}{\beta}{-\beta}{\alpha}, \quad I = \matris{1}{0}{0}{1}. \end{align} The general solution of (\ref{fmg}) can again be decomposed as in (\ref{sumfg}), where each pair $(f^k,g^k)$ is a solution of the subsystem $\nabla f = M^k \nabla g$. Thus, since the equations corresponding to real eigenvalues can be treated in the same way as in the complex case, it is enough to consider the case when $M = \mathrm{diag}(M_1,\ldots, M_m)$ has only one pair of complex conjugate eigenvalues $\lambda, \bar{\lambda}$. We assume that each $M_i$ is a matrix of the form (\ref{realjordan}) of size $2(n_i+1)$, where $n_1 \ge \cdots \ge n_m \ge 0$. Moreover, we denote the variables corresponding to the Jordan decomposition by $x^1_0, y^1_0, \ldots, x^1_{n_1}, y^1_{n_1}, x^2_0, y^2_0, \ldots, x^m_{n_m}, y^m_{n_m}$, and define complex variables $z^k_j = x^k_j + \mathrm{i}y^k_j$, $\bar{z}^k_j = x^k_j - \mathrm{i}y^k_j$. In analogy with the complex case, we introduce functions \begin{align*} z^k_\mu = z^k_0 + z^k_1\mu + \cdots + z^k_{n_k}\mu^{n_k}, \quad k = 1, 2, \ldots, m, \end{align*} depending on a real parameter $\mu$, and vectors \begin{align*} \mathbf{z}^{(k)}_\mu = [z^1_\mu \quad z^2_\mu \;\; \cdots \;\; z^{\nu_k}_\mu]^T, \quad k = 0, 1, \ldots, n_1, \end{align*} where $\nu_k$ is defined in the same way as above.
If we let $F = f - \bar{\lambda} g$, the general analytic solution of $\nabla f = M\nabla g$ can be expressed as $F = F_0 + F_1 + \cdots + F_{n_1}$ where \begin{align}\label{Fkreal} F_k = \left. \left( \frac{\partial^k}{\partial \mu^k}\phi_k ( \mathbf{z}^{(k)}_\mu ) - \sum_{l=1}^k \left( \frac{-\mathrm{i}}{2\beta} \right)^l \frac{k!}{(k-l)!} \overline{\frac{\partial^{k-l}}{\partial \mu^{k-l}}\phi_k ( \mathbf{z}^{(k)}_\mu )}\right) \right|_{\mu=0}, \end{align} and $\phi_k(s_1, \ldots, s_{\nu_k})$ are again arbitrary complex-valued analytic functions depending on $\nu_k$ complex variables. \subsection{A characterization of the general solution in terms of differentiable functions on algebras} In \cite{waterhouse-1992}, the general solution of (\ref{fmg}) is characterized through components of functions which are differentiable over algebras. From this characterization, some results regarding multiplication of solutions can be obtained. However, the results in \cite{waterhouse-1992} regarding the equation (\ref{fmg}) are given only for a rather restricted class of matrices $M$. We will give a brief summary of the concepts and results in \cite{waterhouse-1992} about the equation (\ref{fmg}). Let $A$ be a commutative algebra of finite dimension over the real or complex numbers, and consider a function $f:A\rightarrow A$. We say that $f$ is \emph{$A$-differentiable} at the point $\mathbf{a}\in A$ if the limit \begin{align*} \lim_{\substack{\mathbf{x} \rightarrow 0 \\ \mathbf{x}\in A^*}} \mathbf{x}^{-1}(f(\mathbf{a}+\mathbf{x}) - f(\mathbf{a})) \end{align*} exists. When $A$ is a $\mathbb{C}$-algebra, any $A$-differentiable function is analytic, i.e., it can be expressed locally as a power series in the $A$-variable $\mathbf{x}$.
A function $V:A \rightarrow \mathbb{R}$ (or $\mathbb{C}$) is called a \emph{component function} of an $A$-differentiable function $f$ if it can be written as $V = \phi \circ f$ for some linear function $\phi: A \rightarrow \mathbb{R}$ (or $\mathbb{C}$). When an $n\times n$ matrix $M$ consists of a single Jordan block, there exists an algebra $A$ which is generated by a single element $\mathbf{s}$ such that in some basis $\mathbf{a}_1, \ldots, \mathbf{a}_n$, $M$ is the matrix of multiplication by $\mathbf{s}$, $\mathbf{s}\mathbf{a}_j = \sum_i M_{ji} \mathbf{a}_i$. If $V = \phi (f)$ is a component function of a $\mathcal{C}^2$ $A$-differentiable function $f$ and $W = \phi (\mathbf{s}f)$ is the corresponding component function of $\mathbf{s}f$, then $\nabla W = M\nabla V$. Conversely, if $V$ and $W$ are $\mathcal{C}^2$ functions such that $\nabla W = M\nabla V$, then $V = \phi (f)$ and $W = \phi (\mathbf{s}f)$ for some $A$-differentiable function $f$ and some linear function $\phi$. This characterization in terms of component functions of $A$-differentiable functions implies the existence of a multiplication of solutions for the equation (\ref{fmg}) since products of $A$-differentiable functions are again $A$-differentiable. Moreover, in the complex case, since any $A$-differentiable function is analytic, one can prove that any analytic solution of (\ref{fmg}) can be expressed as a power series of a simple solution. Thus, since some of the problems we are considering in this paper are already treated in \cite{waterhouse-1992}, it is worth emphasizing in which ways we extend those results. \begin{enumerate} \item In \cite{waterhouse-1992}, the only case considered is when the matrix $M$ has exactly one Jordan block corresponding to each eigenvalue. In this paper, the equation (\ref{fmg}) is studied for all matrices $M$. \item In \cite{waterhouse-1992}, only the existence of the algebra $A$, from the characterization of the general solution of (\ref{fmg}), is given.
An explicit construction is natural and simple in the complex case, but not trivial in the real case. In this paper we provide an explicit multiplication formula for solutions of (\ref{fmg}) for every possible $M$, both in the real and in the complex case. \item The characterization of $A$-differentiable functions in terms of analytic functions is only valid for the complex case. We show that any solution of (\ref{fmg}), in both the complex and real case, can be expressed as a $*$-power series. \item The equations (\ref{fmg}) constitute only a simple special case of a quite large family of systems of PDEs with $*$-multiplication. Therefore, the results obtained in this paper, for the restricted class of systems (\ref{fmg}), indicate that similar results may be established for the entire class of systems with $*$-multiplication. \end{enumerate} In this paper, we use the notation and conventions of \cite{jodeit-1990} and \cite{jonasson-2007}, rather than those of \cite{waterhouse-1992}. However, for the special cases when a multiplication of solutions can be obtained from \cite{waterhouse-1992}, this multiplication coincides, even though this may not be immediately obvious, with the $*$-multiplication studied in this paper. \section{Multiplication for systems $\nabla f = M\nabla g$}\label{cstmultsec} In this section we show that every equation (\ref{fmg}) admits a $*$-multiplication. The approach we use for proving this is to extend the equation (\ref{fmg}) by introducing certain auxiliary dependent variables and adding some equations that are consequences of the original equation. One can then show that the extended system has the form (\ref{avz}) and that it satisfies (\ref{azz}), so that it admits a $*$-multiplication. Since there is a simple correspondence between solutions of the original and the extended system, one can interpret the $*$-multiplication as an algebraically defined operation on the solution space of the equation (\ref{fmg}).
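Before carrying out this construction, it may be helpful to see the component-function characterization of \cite{waterhouse-1992}, recalled in the previous subsection, in its simplest non-trivial instance (our sketch, not from either paper). For the dual numbers $A = \mathbb{R}[\varepsilon]/(\varepsilon^2)$ with $\mathbf{s} = \lambda + \varepsilon$, an $A$-differentiable function acts as $F(x_0 + \varepsilon x_1) = F(x_0) + \varepsilon x_1 F'(x_0)$, and taking $\phi$ to be the $\varepsilon$-coefficient yields the component functions $V = x_1F'(x_0)$ and $W = \phi(\mathbf{s}F) = F(x_0) + \lambda x_1 F'(x_0)$, which satisfy $\nabla W = M\nabla V$ for the $2\times 2$ Jordan block $M$.

```python
# Sketch (ours): component functions over the dual numbers.  With phi the
# "coefficient of eps" map and s = lam + eps, the pair V = phi(F),
# W = phi(s*F) satisfies grad W = M grad V, where M is the 2x2 Jordan block.
import sympy as sp

x0, x1, lam = sp.symbols('x0 x1 lam')
F = sp.Function('F')

V = x1*sp.diff(F(x0), x0)                 # eps-coefficient of F(x0 + eps*x1)
W = F(x0) + lam*x1*sp.diff(F(x0), x0)     # eps-coefficient of (lam + eps)*F

M = sp.Matrix([[lam, 1], [0, lam]])
grad_V = sp.Matrix([sp.diff(V, x0), sp.diff(V, x1)])
grad_W = sp.Matrix([sp.diff(W, x0), sp.diff(W, x1)])
assert (grad_W - M*grad_V).expand() == sp.zeros(2, 1)
```

Note that $W - \lambda V = F(x_0)$, in agreement with the change of dependent variable $h = f - \lambda g$ used below.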
We treat the real and complex case separately, starting with the latter. \begin{Rem3} We will sometimes use the exterior differential operator $\mathrm{d}$ instead of the gradient operator $\nabla$. Since we are only considering complex analytic functions, the exterior differential reduces to the $\partial$-operator \cite{hormander}. Thus, we can write the equation $($\ref{fmg}$)$ as $\mathrm{d} f = M^T \mathrm{d} g$ which should be understood as the following equation for row matrices: \begin{align*} [\partial_1 f \; \cdots \; \partial_n f] = [\partial_1 g \; \cdots \; \partial_n g]M^T, \end{align*} where $\partial_i = \partial/\partial x_i$ denotes the partial derivative with respect to $x_i$, and $M^T$ denotes the transpose of the matrix $M$. \end{Rem3} \subsection{The complex case} In deriving the general solution of the equation (\ref{fmg}) in \cite{jodeit-1990}, Jodeit and Olver split their investigation into three cases, and we will use the same strategy in this paper. The cases are: \begin{enumerate} \item The matrix $M$ consists of a single Jordan block, i.e., \begin{align}\label{jblock} M = \left[ \begin{array}{cccc} \lambda & 1 & &\\ & \lambda & \ddots &\\ & & \ddots & 1\\ & & & \lambda \end{array} \right]. \end{align} \item $M=\mathrm{diag}(J_1,\ldots,J_m)$ has only one eigenvalue but consists of several Jordan blocks corresponding to that eigenvalue. \item $M$ is a general complex matrix. \end{enumerate} Obviously, the first case is a special case of the second case, which in turn is a special case of the third case. However, treating the first two cases separately has a didactic advantage. \subsubsection{One Jordan block} We assume now that the matrix $M$ consists of one Jordan block of size $n+1$.
By a change of dependent variable $h = f - \lambda g,$ the equation (\ref{fmg}) reduces to the equation \begin{align}\label{hug} \nabla h = U_n \nabla g, \end{align} where $U_n$ is defined as the Jordan block of size $(n+1) \times (n+1)$ corresponding to the eigenvalue $\lambda = 0$, i.e., \begin{align}\label{un} U_n := \left[ \begin{array}{cccc} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1\\ & & & 0 \end{array} \right]. \end{align} Equation (\ref{hug}) admits a $*$-multiplication on the solution space: \begin{Prop3}\label{nil} Let $B$ be any nilpotent constant matrix. Then the system $\nabla f = B\nabla g$ can be extended to the system \begin{align}\label{amur} A_\mu\nabla V_\mu \equiv 0 \quad \left(mod\; \mu^r \right), \end{align} where \begin{align*} A_\mu = -B + \mu I,\quad V_\mu = V_0 + \mu V_1 + \cdots + \mu^{r-3}V_{r-3} + \mu^{r-2}f + \mu^{r-1}g, \end{align*} and $r$ is a natural number such that $B^r = 0$. The functions $V_0, \ldots ,V_{r-3}$ are auxiliary functions that are uniquely determined (up to additive constants) by any given solution $(f,g)$ of $\nabla f = B\nabla g$. \end{Prop3} The proof relies on the following lemma. \begin{Lem3} Let $B$ be a constant $n\times n$ matrix, and $f$ any smooth function such that $\mathrm{d} (B\mathrm{d} f) = 0$. Then $\mathrm{d} (B^k\mathrm{d} f) = 0$ for any $k\in\mathbb{N}$. \end{Lem3} \begin{proof} The proof is by induction over $k$. For $k=2,$ by the assumption $\mathrm{d} (B\mathrm{d} f) = 0$, we have \begin{align*} \left( \mathrm{d} (B^2 \mathrm{d} f) \right)_{ij} & = \sum_{a,b=1}^n \left( B_{ab}B_{bi}\partial_a\partial_j f - B_{ab}B_{bj}\partial_a\partial_i f \right) \\ & = \sum_{a,b=1}^n \left( B_{aj}B_{bi}\partial_a\partial_b f - B_{ai}B_{bj}\partial_a\partial_b f \right) = 0, \end{align*} where $B_{ij}$ denotes the entry in row $i$ and column $j$ of the matrix $B$. Assume now that the statement is true for $k = p - 1$, where $p \ge 3$.
If $\mathrm{d} (B\mathrm{d} f) = 0$, there exists, according to the Poincaré lemma, a function $g$ such that $\mathrm{d} g = B \mathrm{d} f$. Thus, we have $\mathrm{d} (B \mathrm{d} g) = \mathrm{d} (B^2 \mathrm{d} f) = 0$, and therefore by the inductive assumption \begin{align*} 0 = \mathrm{d} (B^{p-1} \mathrm{d} g) = \mathrm{d} (B^p \mathrm{d} f). \end{align*} \end{proof} \begin{proof}[Proof (of proposition \ref{nil}).] Let $A_\mu = -B + \mu I$, and $V_\mu = V_0 + \mu V_1 + \cdots + \mu^{r-1}V_{r-1}$. Then, the components of equation (\ref{amur}) read \begin{align*} \left\{ \begin{aligned} 0 & = B^T\mathrm{d} V_0\\ \mathrm{d} V_0 & = B^T\mathrm{d} V_1\\ & \cdots\\ \mathrm{d} V_{r-2} & = B^T\mathrm{d} V_{r-1}, \end{aligned} \right. \end{align*} or equivalently (using the assumption that $B$ is nilpotent), \begin{align}\label{vbv} \mathrm{d} V_{i} = \left( B^T \right)^{r-1-i}\mathrm{d} V_{r-1},\quad i = 0,1,\ldots ,r-2. \end{align} It is then obvious that if $V_\mu$ solves (\ref{vbv}), then the functions $f = V_{r-2}$, $g = V_{r-1}$ solve the equation $\nabla f = B\nabla g$. On the other hand, assume now that $(f,g)$ solve the equation $\nabla f = B\nabla g$, and let $V_{r-2} = f$, $V_{r-1} = g$. Then, since $\mathrm{d} (B^T \mathrm{d} V_{r-1}) = \mathrm{d}^2 V_{r-2} = 0$, the previous lemma implies that $\mathrm{d} ((B^T)^k \mathrm{d} V_{r-1}) = 0$, for any $k\ge 1.$ Therefore, according to the Poincaré lemma there exist functions $V_0,V_1,\ldots ,V_{r-3}$ such that (\ref{vbv}) is satisfied. \end{proof} Since $A_\mu\nabla\mu^r \equiv 0$ is trivially satisfied, the equation (\ref{amur}) admits a $*$-multi\-plica\-tion.
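Proposition \ref{nil} and the resulting $*$-multiplication can be verified symbolically in the smallest non-trivial case (our sketch, not from the paper): for $B = U_1$ and $r = 2$ the system $\nabla h = U_1\nabla g$ has the general solution $h = \phi(x_0)$, $g = x_1\phi'(x_0) + \psi(x_0)$, there are no auxiliary functions, and truncating the product $(h + g\mu)(\tilde{h} + \tilde{g}\mu)$ modulo $\mu^2$ yields the new solution $(h\tilde{h},\; h\tilde{g} + g\tilde{h})$.

```python
# Sketch (ours): the *-multiplication of the proposition above for B = U_1,
# i.e. for the system h_x0 = g_x1, h_x1 = 0, whose solutions are
# h = phi(x0), g = x1*phi'(x0) + psi(x0) for arbitrary phi, psi.
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
phi, psi, tphi, tpsi = [sp.Function(n) for n in ('phi', 'psi', 'tphi', 'tpsi')]

def solves(h, g):
    """Check grad h = U_1 grad g, i.e. h_x0 = g_x1 and h_x1 = 0."""
    return sp.simplify(sp.diff(h, x0) - sp.diff(g, x1)) == 0 \
        and sp.diff(h, x1) == 0

h, g = phi(x0), x1*sp.diff(phi(x0), x0) + psi(x0)
ht, gt = tphi(x0), x1*sp.diff(tphi(x0), x0) + tpsi(x0)
assert solves(h, g) and solves(ht, gt)

# (h + g*mu)(ht + gt*mu) mod mu^2 gives the new solution (h*ht, h*gt + g*ht)
assert solves(h*ht, h*gt + g*ht)
```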
Given any two solutions $V_\mu$ and $\tilde{V}_\mu$ of (\ref{amur}), their $*$-product is given by the formula \begin{align*} V_\mu * \tilde{V}_\mu = \sum_{k=0}^{r-1}\left( \sum_{j=0}^k V_j\tilde{V}_{k-j} \right)\mu^k, \end{align*} or using the matrix notation \begin{align}\label{mult} \left[ \begin{array}{c} V_0 \\ V_1 \\ \vdots \\ V_{r-1}\end{array}\right]* \left[ \begin{array}{c} \tilde{V}_0 \\ \tilde{V}_1 \\ \vdots \\ \tilde{V}_{r-1}\end{array}\right] = \left[ \begin{array}{c} V_0\tilde{V}_0 \\ V_0\tilde{V}_1 + V_1\tilde{V}_0 \\ \vdots \\ V_0\tilde{V}_{r-1} + V_1\tilde{V}_{r-2} + \cdots +V_{r-1}\tilde{V}_0 \end{array}\right]. \end{align} In particular, since the matrix $U_n$ is nilpotent with $U_n^{n+1} = 0$, proposition \ref{nil} allows us to extend the system (\ref{hug}) to the $\mu$-dependent system \begin{align}\label{umu} (-U_n + \mu I)\nabla V_\mu \equiv 0 \quad \left(mod\; \mu^{n+1} \right). \end{align} Thus, given two solutions $(h,g)$ and $(\tilde{h},\tilde{g})$ of (\ref{hug}), there exist unique (up to additive constants) functions $V_0,\ldots, V_{n}$ and $\tilde{V}_0,\ldots, \tilde{V}_{n}$ such that \begin{align*} \nabla V_{i} = U_n^{n-i}\nabla g,\quad \textrm{and}\quad \nabla \tilde{V}_{i} = U_n^{n-i}\nabla \tilde{g},\quad i = 0,1,\ldots ,n. \end{align*} A new solution of (\ref{hug}) is then algebraically constructed through the $*$-multi\-plica\-tion as \begin{align*} (h,g) = \left( \sum_{i=0}^{n-1} V_{i}\tilde{V}_{n-1-i},\; \sum_{i=0}^n V_{i}\tilde{V}_{n-i} \right). \end{align*} \subsubsection{Several Jordan blocks corresponding to one eigenvalue} Assume now that $M = \mathrm{diag}(J_1,\ldots ,J_m)$ has only one eigenvalue $\lambda$, where each $J_i$ is a Jordan block of size $n_i+1$, and $n_1\ge n_2\ge\cdots \ge n_m$.
Just as in the case of one single Jordan block, by the change of dependent variable $h = f -\lambda g$, we obtain the equivalent equation \begin{align}\label{hug2} \nabla h = U\nabla g, \quad \textrm{where}\quad U = \mathrm{diag}(U_{n_1},\ldots, U_{n_m}), \end{align} and each $U_{n_i}$ is given by (\ref{un}). Since $U$ is nilpotent, we can apply proposition \ref{nil}, with $B = U$ and $r = n_1 + 1$, in order to extend (\ref{hug2}) to the system \begin{align}\label{umu2} (-U+\mu I)\nabla V_\mu \equiv 0 \quad \left(mod\; \mu^{n_1+1} \right). \end{align} Thus, there is a $*$-multiplication for solutions of the equation (\ref{fmg}) (having the same multiplication formula (\ref{mult})) also in the case when $M$ consists of several Jordan blocks corresponding to the same eigenvalue. Besides generating solutions of the system (\ref{hug2}) with the $*$-multiplication, it follows from the next proposition that one can also embed solutions of the subsystems $\nabla h = \mathrm{diag}(U_{n_i}, \ldots, U_{n_m})\nabla g$, for any $1\le i \le m$, into the solution space of (\ref{hug2}). \begin{Prop3}\label{embedd} Suppose that $V_\mu$ is a solution of the equation \begin{align}\label{amur2} A_\mu\nabla V_\mu \equiv 0 \quad \left(mod\; \mu^r \right),\quad V_\mu = \sum_{i=0}^{r-1}V_i\mu^i \end{align} defined in a subset $\Omega_1$ of a vector space $\mathcal{V}_1$, where $A_\mu = -A + \mu I$, and $r$ is a natural number. Then $W_\mu = \mu^s V_\mu$ is a solution of the extended system \begin{align*} B_\mu\nabla W_\mu \equiv 0 \quad \left(mod\; \mu^{r+s} \right),\quad W_\mu = \sum_{i=0}^{r+s-1}W_i\mu^i \end{align*} defined in $\Omega_1\times \mathcal{V}_2,$ where $\mathcal{V}_2$ is another vector space, $B_\mu = B + \mu I$, and $B$ is the square matrix with the block structure \begin{align*} B = -\matris{C}{0}{0}{A}, \end{align*} for an arbitrary constant matrix $C$ and an arbitrary $s\in \mathbb{N}$.
\end{Prop3} \begin{proof} Let $V_\mu$ be a solution of the equation (\ref{amur2}), i.e., there exists a $1$-form $\alpha$, not depending on $\mu$, such that $A_\mu^T\mathrm{d} V_\mu = \mu^r\alpha$. Then \begin{align*} B_\mu^T\mathrm{d} (\mu^sV_\mu) & = \mu^s B_\mu^T\mathrm{d} V_\mu = \mu^s \left( C_\mu^T\mathrm{d}_2 V_\mu + A_\mu^T\mathrm{d}_1 V_\mu\right)\\ & = \mu^s A_\mu^T\mathrm{d}_1 V_\mu = \mu^{s+r}\alpha \equiv 0 \quad \left(mod\; \mu^{r+s} \right), \end{align*} where $C_\mu = -C + \mu I$, and $\mathrm{d}_1$ and $\mathrm{d}_2$ denote the exterior differential operators on $\mathcal{V}_1$ and $\mathcal{V}_2$, respectively; the term $C_\mu^T\mathrm{d}_2 V_\mu$ vanishes since $V_\mu$ does not depend on the coordinates of $\mathcal{V}_2$. \end{proof} Let $(h^i,g^i)$ be a solution of $\nabla h^i =\mathrm{diag}(U_{n_i},\ldots, U_{n_m})\nabla g^i$. Then, according to proposition \ref{nil}, there exists a unique (up to inessential constants) solution $V^i_\mu = V_0^i + \cdots + V^i_{n_i-2}\mu^{n_i-2} + h^i\mu^{n_i-1} + g^i\mu^{n_i}$ of the extended system \begin{align*} \left( -\mathrm{diag}(U_{n_i},\ldots,U_{n_m}) + \mu I \right) \mathrm{d} V^i_\mu \equiv 0 \quad \left( mod\; \mu^{n_i+1} \right). \end{align*} According to proposition \ref{embedd}, it then follows that $V_\mu^i$ can be embedded as a solution $\mu^{n_1-n_i}V_\mu^i$ of the equation (\ref{hug2}). \subsubsection{The general complex case} Since the general solution of equation (\ref{fmg}) can be written as a sum (\ref{sumfg}), where $(f^i,g^i)$ is obtained from the general solution of the system (\ref{hug2}), it is clear that there is no non-trivial $*$-multiplication for the system (\ref{fmg}) other than the multiplications induced from the subsystems corresponding to single eigenvalues, which were introduced above. In order to illustrate the $*$-multiplication for equations of the form (\ref{fmg}), we return to example \ref{exn5}. \begin{Exmp3} Consider again equation $($\ref{fmg}$)$ in the case when $M$ is the $5\times 5$ matrix defined in example \ref{exn5}.
Take for instance the particular solution \begin{align*} \left\{ \begin{aligned} f & = 2\lambda_1 x_0x_1 + \lambda_1(x_0)^3 + (x_0)^2 + \lambda_2 y_0^1(y_0^2)^2 + \lambda_2 y_1^1 + y_0^1\\ g & = 2x_0x_1 + (x_0)^3 + y_0^1(y_0^2)^2 + y_1^1. \end{aligned} \right. \end{align*} This solution can be written as a sum of $*$-products of simple (first order polynomial) solutions of subsystems. In detail, \begin{align*} \left\{ \begin{aligned} f & = \lambda_1g_1 + h_1 + \lambda_2g_2 + h_2 \\ g & = g_1 + g_2 \end{aligned} \right. \end{align*} where \begin{align*} h_1 + \mu g_1 = \mu * (x_0 + \mu x_1)^3_* + (x_0 + \mu x_1)^2_*, \end{align*} and \begin{align*} h_2 + \mu g_2 = \left( \mu (y_0^2)^2_* \right)*(y_0^1 + \mu y_1^1) + (y_0^1 + \mu y_1^1). \end{align*} We note that the factor $\mu (y_0^2)^2_*$ is the embedding, according to proposition \ref{embedd}, of the solution $(y_0^2)^2_* = (y_0^2)^2$ of the subsystem $\nabla h = U_0 \nabla g$ (or equivalently $h'(y_0^2) = 0$), into the system $\nabla h_2 = \mathrm{diag}(U_1,U_0)\nabla g_2$. \end{Exmp3} The possibility of expressing solutions of an equation of the kind (\ref{fmg}) as a sum of simple $*$-products, which was illustrated in the previous example, is a general property. In the next section we show that any analytic solution of (\ref{fmg}) can be expressed as a power series of simple solutions with respect to the $*$-multiplication. \subsection{The real case} We consider now the equation (\ref{fmg}) over a real vector space in the case when the matrix $M$ has real entries. From the discussion in section \ref{sec.gensolreal}, it is clear that it is no restriction to assume that $M$ has only one pair of complex conjugate eigenvalues. Thus, we let $M = \mathrm{diag}(M_1,\ldots, M_m)$ be an arbitrary square matrix having real Jordan form with eigenvalues $\lambda, \bar{\lambda} = \alpha \pm \mathrm{i}\beta$, and we use the same notation as in section \ref{sec.gensolreal}.
Furthermore, we can, without loss of generality, assume that $\alpha = 0$ and $\beta = 1$. This is realized by changing both dependent and independent variables according to \begin{align*} \tilde{f} & = f - \alpha g, \quad \tilde{g} = \beta g, \\ \tilde{x}_j^k & = \beta^{j-n_k}x_j^k, \quad \tilde{y}_j^k = \beta^{j-n_k}y_j^k, \quad j = 0, 1, \ldots, n_k, \quad k = 1, 2, \ldots, m. \end{align*} Unlike in the complex case, the matrix $M$ is not nilpotent, but the system (\ref{fmg}) is a so-called quasi-Cauchy--Riemann equation, and such equations are known to admit a $*$-multiplication. \begin{Thm3}\label{realmult} Let $M$ be a square matrix of size $2(n + 1)$ having real Jordan form, and with no other eigenvalues than $\pm \mathrm{i}$. Then equation $($\ref{fmg}$)$ can be extended to the system \begin{align}\label{amuv} A_\mu\nabla V_\mu \equiv 0 \quad \left(mod\; (\mu^2 + 1)^{n+1} \right), \end{align} where \begin{align*} A_\mu = M^{-1} + \mu I,\quad V_\mu = V_0 + V_1 \mu + \cdots + V_{2n+1}\mu^{2n+1}, \end{align*} and $V_0 = f$ and $V_{2n+1} = g$. The functions $V_1, \ldots, V_{2n}$ are uniquely determined (up to additive constants) by the solution $(f,g)$. \end{Thm3} \begin{proof} Since $\det{M} = (\det{\Lambda})^{n+1} = 1$, equation (\ref{fmg}) can be written as \begin{align}\label{mfdetmg} M^{-1}\nabla f = \det{M^{-1}}\nabla g. \end{align} Equation (\ref{mfdetmg}) is a quasi-Cauchy--Riemann equation and is therefore equivalent to the parameter dependent equation (see \cite{jonasson-2006})\footnote{In \cite{jonasson-2006}, the definition of a quasi-Cauchy--Riemann equation requires that the matrix $M$ is symmetric, but as long as the entries are constant this assumption is not necessary.}: \begin{align}\label{fdetmu} (M^{-1} + \mu I)\nabla V_\mu \equiv 0 \quad \left(mod\; \det{(M^{-1} + \mu I)} \right), \end{align} where $V_0 = f$ and $V_{2n+1} = g$.
Thus, since \begin{align*} \det{(M^{-1} + \mu I)} = (\det{(\Lambda + \mu I)})^{n+1} = (\mu^2 + 1)^{n+1}, \end{align*} the proof is complete. \end{proof} Since $(M^{-1} + \mu I)\nabla (\mu^2 + 1)^{n+1} = 0$, the solution space of the system (\ref{amuv}) admits $*$-multiplication. In the special case when (\ref{fmg}) is the Cauchy--Riemann equations, i.e., $M = \Lambda$ with $\alpha = 0$ and $\beta = 1$, the parameter dependent equation (\ref{fdetmu}) coincides with the original equation (\ref{fmg}), and the $*$-multiplication reduces to the ordinary multiplication of holomorphic functions. In a similar way as in the complex case, solutions of (\ref{fmg}) can be constructed by embedding solutions of subsystems. Let $V_\mu$ be a solution of \begin{align*} \tilde{A}_\mu\nabla V_\mu \equiv 0 \quad \left(mod\; (\mu^2 + 1)^{\tilde{n}+1} \right), \end{align*} where $\tilde{A}_\mu = \mathrm{diag}(M_i^{-1},\ldots, M_m^{-1}) + \mu I$ and $\tilde{n} = n_i + \cdots + n_m + m - i$. Then $(\mu^2 + 1)^{n-\tilde{n}}V_\mu$ is a solution of (\ref{amuv}). A proof of a more general result is given in \cite{jonasson-2007}. \section{Analytic solutions are $*$-analytic}\label{pssec} We have seen in the previous section that any equation of the form (\ref{fmg}) can be reduced to a number of systems of the form (\ref{umu2}) (or (\ref{amuv}) in the real case), each one equipped with a $*$-multiplication on the solution space. In this section we prove that every analytic solution can be expressed as a power series of simple solutions, just as for the Cauchy--Riemann equations. To be precise, by a simple solution we mean a solution which is linear in the independent variables.
Furthermore, we say that a solution $(f,g)$ of (\ref{fmg}) is $*$-\emph{analytic} if it can be expressed locally as a finite sum of power series of simple solutions with respect to the different $*$-multiplications of the corresponding subsystems of the form (\ref{umu2}) and (\ref{amuv}). In other words, in this section we aim to prove that every analytic solution of (\ref{fmg}) is also $*$-analytic. Since this entails no loss of generality, we consider only power series expansions in a neighborhood of the origin. In both the real and the complex case, equation (\ref{fmg}) is extended to a finite number of $\mu$-dependent equations of the form \begin{align}\label{xmuvmu} (X + \mu I) \nabla V_\mu \equiv 0 \quad \left(mod\; Z_\mu \right), \end{align} where $X$ is a constant matrix, $Z_\mu = Z_0 + \cdots + Z_n\mu^n$ is a polynomial with constant coefficients, $V_\mu = V_0 + \cdots + V_n\mu^n$, and $V_n = g$. Equation (\ref{xmuvmu}) can be written without the parameter $\mu$ as a system \begin{align}\label{xvj} X \nabla V_j + \nabla V_{j-1} = Z_j \nabla g, \quad j = 0, 1, \ldots, n, \end{align} where $V_{-1} = 0$. Let $V_\mu^{(N)} = V_0^{(N)} + \cdots + V_n^{(N)}\mu^n$ be a solution with coefficients being polynomials of degree $N$ in the coordinate functions, \begin{align*} V_j^{(N)} = \sum_{k=0}^N \sum_{\substack{I=(i_0,\ldots,i_n) \in \mathbb{N}^{n+1} \\ i_0 + \cdots + i_n = k}} a_I x_0^{i_0}x_1^{i_1} \cdots x_n^{i_n}. \end{align*} Because of the relation (\ref{xvj}), the power series \begin{align*} \lim_{N \rightarrow \infty} V_j^{(N)}, \quad j = 0, 1, \ldots, n \end{align*} will all have the same domain of convergence. This means that when constructing $*$-power series of a simple solution, it is enough to consider the domain of convergence for the power series defined by the coefficient at the highest power $\mu^n$. \subsection{The complex case} We split the investigation of the complex case into three different cases in the same way as in the previous section.
\subsubsection{One Jordan block} We consider first the equation (\ref{fmg}) when the Jordan decomposition of the constant matrix $M$ consists of a single Jordan block. We have seen that there is then no loss of generality in considering the equation (\ref{hug}) instead. \begin{Thm3}\label{gensolthm1} Let $(h,g)$ be an analytic solution of the equation $($\ref{hug}$)$. Then there exist constant solutions $(b_i)_\mu = b_{i0} + b_{i1}\mu + \cdots + b_{in}\mu^n$ (with $b_{ij} \in \mathbb{C}$) of the extended system $($\ref{umu}$)$ for $i = 0, 1, \ldots$, such that \begin{align}\label{ps1} \sum_{i=0}^\infty (b_i)_\mu*(x_\mu)_*^i = V_0 + \cdots + V_{n-2}\mu^{n-2} + h\mu^{n-1} + g\mu^n, \end{align} where $x_\mu = x_0 + x_1\mu + \cdots + x_n\mu^n$. \end{Thm3} We formulate a result about the structure of the $*$-powers $(x_\mu)^i_*$ as a lemma: \begin{Lem3}\label{xi} For each $i\in \mathbb{N}$ we have \begin{align}\label{xif} (x_\mu)^i_* = \sum_{j=0}^n \Psi_{i,j}(\mathbf{x})\mu^j, \end{align} where $\Psi_{i,j}$ are the polynomials given by \begin{align*} \Psi_{i,j}(\mathbf{x}) & := \! \sum_{\substack{a_0,a_1,\ldots,a_n \in \mathbb{N} \\ a_0 + a_1 + \cdots + a_n = i \\ a_1 + 2a_2 + \cdots + na_n = j }}\! \binom{i}{a_0, a_1, \ldots, a_n}x_0^{a_0}x_1^{a_1} \cdots x_n^{a_n} \\ & = \sum_{s=0}^j \binom{i}{s}x_0^{i\!-\!s} B_{js}(x_1, 2x_2, \ldots, (j\!-\!s\!+\!1)!x_{j\!-\!s\!+\!1}), \end{align*} where $B_{js} = B_{js}(x_1, x_2, \ldots, x_{j\!-s\!+1})$ are the partial Bell polynomials \cite{comtet} given by \begin{align*} \sum_{\substack{a_1,a_2,\ldots,a_{j\!-s\!+1} \in \mathbb{N} \\ a_1 + a_2 + \cdots + a_{j\!-s\!+1} = s \\ a_1 + 2a_2 + \cdots + (j\!-s\!+1)a_{j\!-s\!+1} = j }}\!\!\!\!\!\!\!\!\!\!\!\!
\binom{s}{a_1, \ldots, a_{j\!-\!s\!+\!1}}\!\!\left(\frac{x_1}{1!}\right)^{a_1} \left(\frac{x_2}{2!}\right)^{a_2} \cdots \left( \frac{x_{j\!-\!s\!+\!1}}{(j\!-\!s\!+\!1)!} \right)^{a_{j\!-\!s\!+\!1}} \end{align*} \end{Lem3} \begin{proof} From the multinomial theorem, we have \begin{align*} x_\mu^i = \sum_{\substack{a_0,a_1,\ldots,a_n \in \mathbb{N} \\ a_0 + a_1 + \cdots + a_n = i }}\! \binom{i}{a_0, a_1, \ldots, a_n}x_0^{a_0}x_1^{a_1} \cdots x_n^{a_n}\mu^{a_1 \!+\! 2a_2 \!+ \cdots +\! na_n}. \end{align*} Thus, formula (\ref{xif}) follows now from the definition of the $*$-multiplication. Moreover, \begin{align*} \Psi_{i,j}(\mathbf{x}) & = \sum_{a_0=0}^i\frac{x_0^{a_0}}{a_0!} \! \sum_{\substack{a_1,a_2,\ldots,a_n \in \mathbb{N} \\ a_1 + \cdots + a_n = i - a_0 \\ a_1 + 2a_2 + \cdots + na_n = j }}\! \frac{i!}{a_1!a_2!\cdots a_n!}x_1^{a_1} \cdots x_n^{a_n} \\ & = \sum_{s=0}^j \binom{i}{s}x_0^{i-s}\!\!\!\!\!\!\!\!\! \sum_{\substack{a_1,a_2,\ldots,a_{j\!-\!s\!+\!1} \in \mathbb{N} \\ a_1 + \cdots + a_{j\!-\!s\!+\!1} = s \\ a_1 + 2a_2 + \cdots + (j\!-\!s\!+\!1)a_{j\!-\!s\!+\!1} = j }}\! \binom{s}{a_1, \ldots, a_{j\!-\!s\!+\!1}} x_1^{a_1} \cdots x_{j\!-\!s\!+\!1}^{a_{j\!-\!s\!+\!1}}\\ & = \sum_{s=0}^j \binom{i}{s}x_0^{i\!-\!s} B_{js}(x_1, 2x_2, \ldots, (j\!-\!s\!+\!1)!x_{j\!-\!s\!+\!1}). \end{align*} \end{proof} \begin{proof}[Proof (of theorem \ref{gensolthm1}).] For any solution $(h,g)$ of (\ref{hug}), $h$ is uniquely determined from $g$ (up to an additive constant). Thus, it is enough to prove that for any analytic solution $(h,g)$, we can choose $(b_i)_\mu$ such that the highest order coefficient of $\sum_{i=0}^\infty (b_i)_\mu*(x_\mu)^i_*$ coincides with $g$. From the results obtained in \cite{jodeit-1990}, presented in section \ref{fmgsec}, the general solution $g$ is given by \begin{align}\label{geng} g(\mathbf{x}) = \left. 
\sum_{j=0}^n \frac{\partial^j}{\partial \mu^j}\phi_j \left( x_\mu \right) \right|_{\mu=0}, \end{align} where $\phi_0, \phi_1, \ldots, \phi_n$ are arbitrary analytic functions of one complex variable, given by \begin{align}\label{psphi} \phi_j(s) = \sum_{i=0}^\infty c_{ij}s^i, \quad j = 0, 1, \ldots, n. \end{align} Hence, if we write the function $g$ in a more explicit way, we obtain \begin{align*} g(\mathbf{x}) & = \sum_{j=0}^n\sum_{i=0}^\infty c_{ij} \left. \left( \frac{\partial^j}{\partial \mu^j} (x_\mu)^i \right) \right|_{\mu=0} \\ & = \sum_{j=0}^n\sum_{i=0}^\infty c_{ij}\!\!\! \left. \left( \!\! \frac{\partial^j}{\partial \mu^j} \!\!\!\! \sum_{\substack{a_0,\ldots,a_n \in \mathbb{N} \\ a_0 + \cdots + a_n = i }}\!\!\!\! \binom{i}{a_0, \ldots, a_n}x_0^{a_0}x_1^{a_1} \cdots x_n^{a_n}\mu^{a_1 \!+\! 2a_2 \!+ \cdots +\! na_n} \right) \!\right|_{\mu=0}\\ & = \sum_{j=0}^n\sum_{i=0}^\infty j!c_{ij} \Psi_{i,j}. \end{align*} We now define the constant solutions $(b_i)_\mu$ as \begin{align*} (b_i)_\mu = \sum_{k=0}^n (n-k)!c_{i,n-k}\mu^k. \end{align*} From lemma \ref{xi}, it then follows that the highest order coefficient of the $*$-power series $V_\mu = \sum (b_i)_\mu * (x_\mu)_*^i$ coincides with $g(\mathbf{x})$. \end{proof} We illustrate this result with a simple example. \begin{Exmp3} Consider the system $\nabla h = U_2\nabla g$, where $U_2$ is the $3\times 3$ matrix described above, i.e., \begin{align}\label{u3ex} \nabla h = \matrist{0}{1}{0}{0}{0}{1}{0}{0}{0} \nabla g. \end{align} The general analytic solution of $($\ref{u3ex}$)$ is given by \begin{align}\label{u3exsol} \left\{ \begin{aligned} h & = \phi_1(x_0) + 2x_1\phi_2'(x_0) + c\\ g & = \phi_0(x_0) + x_1\phi_1'(x_0) + 2x_2\phi_2'(x_0) + x_1^2\phi_2''(x_0) \end{aligned} \right. \end{align} where $\phi_0,\phi_1,\phi_2$ are general analytic functions of one complex variable.
If the corresponding power series representations are given by $($\ref{psphi}$)$ for $j = 0,1,2$, then the general solution $($\ref{u3exsol}$)$ is $*$-analytic with the following power series representation: \begin{align*} \vektort{V_0}{h}{g} = \sum_{i=0}^\infty \vektort{2c_{i2}}{c_{i1}}{c_{i0}}*\vektort{x_0}{x_1}{x_2}_*^i. \end{align*} \end{Exmp3} \subsubsection{Several Jordan blocks corresponding to one eigenvalue} Assume now that the matrix $M$ in equation (\ref{fmg}) has only one eigenvalue $\lambda$, but that its Jordan canonical form consists of several Jordan blocks. As we have seen, it is then no restriction to assume that the eigenvalue is zero, so that (\ref{fmg}) is reduced to the equation (\ref{hug2}). Then a given analytic solution cannot in general be written directly as a power series of simple solutions with respect to the $*$-multiplication (\ref{mult}) for that system. This is easily seen from the following example. \begin{Exmp3} Consider the equation $\nabla h = \mathrm{diag}(U_1,U_0) \nabla g$, i.e., \begin{align}\label{u1u0} \nabla h = \matrist{0}{1}{0}{0}{0}{0}{0}{0}{0} \nabla g. \end{align} The general analytic solution is given by \begin{align*} \left\{ \begin{aligned} h & = \phi(x) + c \\ g & = y\phi'(x) + \psi(x,z) \end{aligned} \right. \end{align*} where $\phi$ and $\psi$ are arbitrary analytic functions, and the $*$-multiplication formula is given by \begin{align}\label{u1u0mult} \vektor{h}{g}*\vektor{\tilde{h}}{\tilde{g}} = \vektor{h\tilde{h}}{h\tilde{g} + g\tilde{h}}. \end{align} Thus, we see that the $*$-product of polynomial solutions will never have higher degree in the $z$-variable than the maximum of the $z$-degrees of $g$ and $\tilde{g}$. Since a polynomial solution can have any given polynomial degree in $z$, it becomes obvious that not every analytic solution of $($\ref{u1u0}$)$ can be written as a power series of simple solutions with respect to the $*$-multiplication $($\ref{u1u0mult}$)$.
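As a quick computational illustration of this obstruction, one can represent polynomial solutions as coefficient dictionaries and check that the $z$-degree of the $g$-component never grows under the product; all helper names below are ours:

```python
# Polynomials in (x, y, z) as dicts {(a, b, c): coeff}; hypothetical helpers
# illustrating the z-degree obstruction for the product (u1u0mult).
def pmul(p, q):
    out = {}
    for m1, u in p.items():
        for m2, v in q.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            out[m] = out.get(m, 0) + u * v
    return out

def star(s1, s2):
    """(h, g) * (h~, g~) = (h h~, h g~ + g h~)."""
    (h1, g1), (h2, g2) = s1, s2
    g = pmul(h1, g2)
    for m, v in pmul(g1, h2).items():
        g[m] = g.get(m, 0) + v
    return pmul(h1, h2), g

def zdeg(p):
    return max((m[2] for m in p), default=0)

# Two simple solutions: (h, g) = (x, y), i.e. phi(x) = x, psi = 0,
# and (h, g) = (0, z), i.e. phi = 0, psi = z.
s1 = ({(1, 0, 0): 1}, {(0, 1, 0): 1})
s2 = ({}, {(0, 0, 1): 1})
p = star(s1, s2)          # = (0, x z): the z-degree does not grow
print(zdeg(p[1]))         # 1
```

Since the $h$-component of every solution is free of $z$, the $z$-degree of the $g$-component of a $*$-product can never exceed that of its factors.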
The remedy is to embed power series solutions of the trivial subsystem $\nabla h = U_0 \nabla g$ in the sense of proposition \ref{embedd}. The general solution is given by $g = \tau(z)$, where $\tau$ is an arbitrary function, and the $*$-multiplication coincides with the ordinary multiplication. Thus, according to proposition \ref{embedd}, we can embed $*$-powers $z_*^j = z^j$ of the trivial solution $g = z$ into solutions of the system $($\ref{u1u0}$)$. Any analytic solution of $($\ref{u1u0}$)$ can then be written as an infinite sum of products of the solutions $(x, y)^i_*$ and the embedded solutions $(0, z^j)$ of the $*$-powers $z_*^j$: \begin{align*} \vektor{\displaystyle\sum_{i=0}^\infty c_ix^i}{\displaystyle\sum_{i=0}^\infty ic_iyx^{i-1} + \sum_{i,j=0}^\infty c_{ij}x^iz^j} = \sum_{i=0}^\infty c_i\vektor{x}{y}^i_* + \sum_{i,j=0}^\infty c_{ij}\vektor{x}{y}^i_**\vektor{0}{z_*^j} \end{align*} \end{Exmp3} In the general case with one eigenvalue, we proceed in the same way as in the example, and embed power series solutions for the subsystems $\nabla h^i =\mathrm{diag}(U_{n_i},\ldots, U_{n_m})\nabla g^i$, in the sense of proposition \ref{embedd}. \begin{Thm3}\label{ps2} Every analytic solution of the matrix equation $($\ref{hug2}$)$ is $*$-analytic. In detail, let $(h,g)$ be an analytic solution defined by \begin{align*} g = \sum_{j=0}^{n_1} \left. \frac{\partial^j}{\partial \mu^j}\phi_j \left( \mathbf{x}^{(j)}_\mu\right) \right|_{\mu=0}, \end{align*} where $\phi_0,\phi_1,\ldots ,\phi_{n_1}$ are analytic functions with power series representations \begin{align*} \phi_j(s_1,s_2,\ldots ,s_{\nu_j}) = \sum_{I=(i_1,i_2, \ldots , i_{\nu_j})\in\mathbb{N}^{\nu_j}} c_{I,j}s_1^{i_1}s_2^{i_2} \cdots s_{\nu_j}^{i_{\nu_j}}.
\end{align*} Then $(h,g)$ is $*$-analytic with power series representation \begin{align*} V_0 + \cdots + V_{n_1-2}\mu^{n_1-2} + h\mu^{n_1-1} + g\mu^{n_1} = \sum_{r=1}^m \sum_{I=(i_1,i_2, \ldots , i_r)\in\mathbb{N}^r} \!\!\!\!\!\!(b_I)_\mu * (\mathbf{x}_\mu)^I_*, \end{align*} where, for each multi-index $I = (i_1,i_2, \ldots ,i_r)$, $(b_I)_\mu$ is the trivial solution of the extended system $($\ref{umu2}$)$ defined as \begin{align*} (b_I)_\mu = \sum_{j=n_{r+1}+1}^{n_r} j!c_{I,j}\mu^{n_r-j}, \quad (n_{m+1} := -1), \end{align*} and \begin{align}\label{xI} (\mathbf{x}_\mu)^I_* := \left( x^1_\mu \right)_*^{i_1} * \left( \mu^{n_1-n_2}(x^2_\mu)_*^{i_2} * \cdots * \left( \mu^{n_{r-1}-n_r}(x^r_\mu)_*^{i_r} \right)\right). \end{align} \end{Thm3} \begin{Rem3} The definition $($\ref{xI}$)$ of the powers $(\mathbf{x}_\mu)^I_*$ needs some further explanation, since the $*$-operators on the right hand side of $($\ref{xI}$)$ in fact denote multiplications for different systems. In order to calculate $(\mathbf{x}_\mu)^I_*$, one has to start with the power $(x^r_\mu)_*^{i_r}$, which is a solution of the subsystem $\nabla h = \mathrm{diag}(U_{n_r}, \ldots, U_{n_m})\nabla g$. This solution is then embedded, in the sense of proposition \ref{embedd}, into the solution $\mu^{n_{r-1}-n_r}(x^r_\mu)_*^{i_r}$ of the subsystem $\nabla h = \mathrm{diag}(U_{n_{r-1}}, U_{n_r}, \ldots, U_{n_m})\nabla g$, and is thereafter $*$-multiplied with $(x^{r-1}_\mu)_*^{i_{r-1}}$. One continues this successive embedding of solutions into larger subsystems until the solution $(x^2_\mu)_*^{i_2} * \cdots * (\mu^{n_{r-1}-n_r}(x^r_\mu)_*^{i_r})$ of the system $\nabla h = \mathrm{diag} (U_{n_2}, \ldots, U_{n_m})\nabla g$ is embedded (through multiplication with $\mu^{n_1-n_2}$) into a solution of the system $($\ref{hug2}$)$, and then finally multiplied with the solution $(x^1_\mu)_*^{i_1}$. \end{Rem3} As in the case with only one Jordan block, we place the calculation of the powers $(\mathbf{x}_\mu)^I_*$ in a separate lemma.
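Before stating it, we note that the successive-embedding recipe of the remark is straightforward to mechanize. The following sketch (hypothetical helper names, numeric coefficients for illustration) computes $(\mathbf{x}_\mu)^I_*$ as a coefficient list:

```python
def star_product(V, W):
    """Cauchy product of two equally long coefficient lists, truncated."""
    r = len(V)
    return [sum(V[j] * W[k - j] for j in range(k + 1)) for k in range(r)]

def star_power(x, i):
    """The i-th *-power (x_mu)_*^i of a coefficient list x."""
    out = [1] + [0] * (len(x) - 1)          # the constant solution 1
    for _ in range(i):
        out = star_product(out, x)
    return out

def x_power_I(blocks, I):
    """(x_mu)^I_* via successive embeddings: blocks lists the coefficient
    lists x^1_mu, ..., x^r_mu (of decreasing lengths n_k + 1), I is the
    multi-index (i_1, ..., i_r)."""
    V = star_power(blocks[-1], I[-1])
    for x, i in zip(reversed(blocks[:-1]), reversed(I[:-1])):
        V = [0] * (len(x) - len(V)) + V     # embed: multiply by mu^{n_k - n_{k+1}}
        V = star_product(star_power(x, i), V)
    return V

# n_1 = 1, n_2 = 0; x^1_mu = 2 + 3 mu, x^2_mu = 5; I = (1, 1):
# (2 + 3 mu) * (mu * 5) = 10 mu  (mod mu^2)
print(x_power_I([[2, 3], [5]], (1, 1)))     # [0, 10]
```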
\begin{Lem3} For each $I = (i_1,i_2, \ldots ,i_r)\in \mathbb{N}^r$ we have \begin{align*} (\mathbf{x}_\mu)^I_* = \sum_{j=0}^{n_r} \Psi_{I,j}(\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^r)\mu^{n_1-n_r+j}, \end{align*} where $\Psi_{I,j}$ are polynomials defined through the polynomials $\Psi_{i,j}$ in lemma \ref{xi} as \begin{align*} \Psi_{I,j}(\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^r) = \sum_{\substack{ 0\le j_1,\ldots, j_r \le n_r\\ j_1+j_2+\cdots +j_r=j}} \Psi_{i_1,j_1}(\mathbf{x}^1)\Psi_{i_2,j_2}(\mathbf{x}^2) \cdots \Psi_{i_r,j_r}(\mathbf{x}^r). \end{align*} \end{Lem3} \begin{proof} For $r = 1$, the statement reduces to lemma \ref{xi}. For arbitrary $r$, since \begin{align*} (\mathbf{x}_\mu)^I_* = (x^1_\mu)^{i_1}_* * \left( (\tilde{\mathbf{x}}_\mu)^{(i_2, \ldots, i_r)}_* \mu^{n_1-n_2}\right), \end{align*} where $\tilde{\mathbf{x}} = [\mathbf{x}^2, \ldots, \mathbf{x}^r]^T$, we may assume by induction that \begin{align*} (\mathbf{x}_\mu)^I_* & = \left( \sum_{s=0}^{n_1} \Psi_{i_1,s}(\mathbf{x}^1)\mu^s \right) \! * \!\left( \mu^{n_1\!-n_2}\!\sum_{t=0}^{n_r} \Psi_{(i_2, \ldots, i_r),t}(\mathbf{x}^2, \ldots, \mathbf{x}^r)\mu^{n_2-n_r+t} \right). \end{align*} Thus, from the $*$-multiplication formula (\ref{mult}), we obtain \begin{align*} (\mathbf{x}_\mu)^I_* & = \sum_{j=0}^{n_r} \left( \sum_{s=0}^j \Psi_{i_1,s}(\mathbf{x}^1)\Psi_{(i_2, \ldots, i_r),j-s}(\mathbf{x}^2, \ldots, \mathbf{x}^r)\right)\mu^{n_1-n_r+j}\\ & = \sum_{j=0}^{n_r}\Psi_{I,j}(\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^r)\mu^{n_1-n_r+j}, \end{align*} where the last equality follows immediately from the definition of the polynomials $\Psi_{I,j}$. \end{proof} \begin{proof}[Proof (of theorem \ref{ps2}).] Since $\nu_{n_{r+1}+1} = \nu_{n_{r+1}+2} = \cdots = \nu_{n_r} = r$ for $r = 1, 2, \ldots, m$, we have \begin{align*} g & = \sum_{j=0}^{n_1} \sum_I \left.
c_{I,j} \frac{\partial^j}{\partial \mu^j} \left( (x^1_\mu)^{i_1}(x^2_\mu)^{i_2} \cdots (x_\mu^{\nu_j})^{i_{\nu_j}} \right) \right|_{\mu=0}\\ & = \sum_{r=1}^{m} \sum_{j=n_{r+1}+1}^{n_r} \sum_I \left. c_{I,j} \frac{\partial^j}{\partial \mu^j} \left( (x^1_\mu)^{i_1}(x^2_\mu)^{i_2} \cdots (x_\mu^r)^{i_r} \right) \right|_{\mu=0}. \end{align*} Thus, it is enough to show that for each $I = (i_1,i_2, \ldots ,i_r)$, the expression \begin{align*} \sum_{j=n_{r+1}+1}^{n_r} \left. c_{I,j} \frac{\partial^j}{\partial \mu^j} \left( (x^1_\mu)^{i_1}(x^2_\mu)^{i_2} \cdots (x_\mu^r)^{i_r} \right) \right|_{\mu=0} \end{align*} coincides with the highest order coefficient of $(b_I)_\mu * (\mathbf{x}_\mu)^I_*$, which, according to the previous lemma and the $*$-multiplication formula (\ref{mult}), equals \begin{align*} \sum_{j=n_{r+1}+1}^{n_r} j!c_{I,j}\Psi_{I,j}. \end{align*} The equality is therefore proven by the following calculation: \begin{align*} \left. \frac{\partial^j}{\partial \mu^j} \left( (x^1_\mu)^{i_1}(x^2_\mu)^{i_2} \cdots (x_\mu^r)^{i_r} \right) \right|_{\mu=0} & = \left. \frac{\partial^j}{\partial \mu^j} \prod_{s=1}^r (x^s_0 + x^s_1\mu + \cdots + x^s_{n_s}\mu^{n_s})^{i_s} \right|_{\mu=0} \\ & = \left. \frac{\partial^j}{\partial \mu^j} \sum_{t=0}^{i_1n_1+\cdots + i_rn_r} \Psi_{I,t}\mu^t \right|_{\mu=0}\\ & = j!\Psi_{I,j}. \end{align*} \end{proof} \subsubsection{The general complex case} Since the general solution of (\ref{fmg}) can be decomposed as a sum (\ref{sumfg}) of solutions to subsystems, one of the main results of this paper is an immediate consequence of theorem \ref{ps2}: \begin{Thm3} Over the complex numbers, every analytic solution of the matrix equation $($\ref{fmg}$)$ is $*$-analytic. \end{Thm3} \subsection{The real case} The sub-equations of (\ref{fmg}) corresponding to real eigenvalues of $M$ are treated in exactly the same way as in the complex case. Therefore, it is enough to consider equations corresponding to complex conjugate pairs of eigenvalues.
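For the complex conjugate case, the $*$-product is polynomial multiplication reduced modulo $(\mu^2 + 1)^{n+1}$; for $n = 0$ it is exactly complex multiplication, in accordance with the Cauchy--Riemann remark above. A small numeric sketch (the helper name `star_mod` is ours):

```python
def star_mod(V, W, Z):
    """Product of coefficient lists V, W reduced modulo the monic
    polynomial Z_mu with coefficients Z = [Z_0, ..., Z_{d-1}, 1]."""
    d = len(Z) - 1
    prod = [0] * (len(V) + len(W) - 1)
    for i, v in enumerate(V):
        for j, w in enumerate(W):
            prod[i + j] += v * w
    # replace mu^k (k >= d) using mu^d = -(Z_0 + ... + Z_{d-1} mu^{d-1})
    for k in range(len(prod) - 1, d - 1, -1):
        c, prod[k] = prod[k], 0
        for t in range(d):
            prod[k - d + t] -= c * Z[t]
    return prod[:d]

# n = 0: modulo mu^2 + 1 the *-product is complex multiplication,
# (1 + 2 mu)*(3 + 4 mu) corresponds to (1 + 2i)(3 + 4i) = -5 + 10i:
print(star_mod([1, 2], [3, 4], [1, 0, 1]))   # [-5, 10]
```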
We consider only the case when $M$ consists of a single real Jordan block, i.e., $M$ has the form (\ref{realjordan}). As we have seen in section \ref{cstmultsec}, there is no restriction to assume that $\alpha = 0$ and $\beta = 1$. We have already proved that the corresponding system (\ref{fmg}) can be extended to the parameter dependent equation (\ref{amuv}). In the complex case with one single Jordan block, there was a very natural choice of ``simple'' solution from which to construct $*$-power series, namely $x_\mu = x_0 + x_1\mu + \cdots + x_n\mu^n$, whose coefficients are given by the coordinate functions. In the current situation, on the other hand, the corresponding function with coefficients given by the coordinate functions is not even a solution. Instead of searching for an appropriate simple solution, suitable for constructing power series, we change variables $\mathbf{x} = B\mathbf{s}$, where $B$ is a constant matrix, in such a way that the simple function \begin{align}\label{smu} s_\mu = s_0 + s_1\mu + \cdots + s_{2n+1}\mu^{2n+1} \end{align} becomes a solution. \begin{Lem3}\label{lemxbs} $s_\mu$ is a solution if $B$ is given by the block structure \begin{align*} B_{ij} = \binom{i \!+\! j \!-\! 3}{i - 1}(-\Lambda)^{i+j-2}e_1, \quad \begin{array}{l} i = 1, 2, \ldots, n + 1 \\ j = 1, 2, \ldots, 2(n + 1), \end{array} \end{align*} where $B_{ij}$ denotes the $2\times 1$ block in the block-position $(i,j)$ of $B$, and \begin{align*} \Lambda = \matris{0}{1}{-1}{0},\quad \quad e_1 = \vektor{1}{0}. \end{align*} \end{Lem3} \begin{proof} By changing variables $\mathbf{x} = B\mathbf{s}$, equation (\ref{amuv}) transforms into \begin{align*} (B^TM^{-1}B^{-T} + \mu I)\nabla V_\mu \equiv 0 \quad \left(mod\; (\mu^2 + 1)^{n+1} \right).
\end{align*} If we let $X = B^TM^{-1}B^{-T}$ and $(1 + \mu^2)^{n+1} = Z_0 + \cdots + Z_{2n+1}\mu^{2n+1} + \mu^{2n+2}$, the equation above can be written as \begin{align*} X\nabla V_{i} + \nabla V_{i-1} = Z_i\nabla V_{2n+1}, \quad i = 0, 1, \ldots, 2n + 1, \quad V_{-1} := 0. \end{align*} Thus, if we require $V_\mu = s_\mu$ to be a solution, we see that $X$ is uniquely determined as $X = -C^{T}$, where $C$ is the companion matrix of the polynomial $(1 + \mu^2)^{n+1}$. In other words, we seek a constant matrix $B$ such that $X = -C^{T}$, or equivalently, $-M^{-T} = BCB^{-1}$. Thus, from the basic theory of rational canonical forms of linear mappings, it is then clear that we can choose $B = [\mathbf{v}\; N\mathbf{v}\; \cdots\; N^{2n+1}\mathbf{v}]$, where $N = -M^{-T}$ and $\mathbf{v}$ is any cyclic vector for the matrix $N$. In particular, we can choose $\mathbf{v} = [1\; 0\; \cdots\; 0]^T$. The proof is complete once we show that the powers of $N$ have the following block triangular form \begin{align}\label{na} (N^a)_{ij} = \left\{ \begin{array}{ll} \displaystyle \binom{a\!-\!1\!+\!i\!-\!j}{i\!-\!j}(-\Lambda)^{a+i-j} & \textrm{if }i \ge j \\ & \\ 0 & \textrm{if }i < j, \end{array}\right. \end{align} where $ i, j = 1, \ldots, n+1$, and $(N^a)_{ij}$ denotes the $2\times 2$ block in the block-position $(i,j)$ of $N^a$. The formula (\ref{na}) is true for $a = 0$. Thus, if we assume that it is true for some $a \ge 0$, it follows by induction for arbitrary $a$: \begin{align*} (N^{a+1})_{ij} & = \sum_{k=1}^{n+1} N_{ik}(N^a)_{kj} = \sum_{k=j}^{i} (-\Lambda)^{1+i-k} \binom{a\!-\!1\!+\!k\!-\!j}{k\!-\!j} (-\Lambda)^{a+k-j} \\ & = \sum_{s=0}^{i-j} \binom{a\!+\!s\!-\!1}{s} (-\Lambda)^{a+1+i-j} = \binom{a\!+\!i\!-\!j}{i-j}(-\Lambda)^{a+1+i-j}. \end{align*} \end{proof} In the $s$-coordinates, every analytic solution of (\ref{fmg}) can be expressed as a finite sum of $*$-power series \begin{align*} a_\mu*\sum_{j=0}^\infty c_j(s_\mu)_*^j, \end{align*} where $a_\mu$ is a constant solution.
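Lemma \ref{lemxbs} is easy to check numerically for small $n$. The sketch below (pure Python; the explicit $4\times 4$ matrices for the real Jordan block $M$ and for $N = -M^{-T}$ are our assumption of the form (\ref{realjordan}) with $n = 1$) verifies that $M^TN = -I$ and that $NB = BC$, which is equivalent to $X = B^TM^{-1}B^{-T} = -C^T$:

```python
# Pure-Python check for n = 1 (a 4 x 4 block).  The concrete entries of M
# (real Jordan block) and of N = -M^{-T} are assumptions here, written out
# by hand to avoid matrix inversion.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[0, 1, 1, 0],          # diag(Lambda, Lambda) with I above the diagonal
     [-1, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, -1, 0]]
N = [[0, -1, 0, 0],         # claimed N = -M^{-T}; checked via M^T N = -I
     [1, 0, 0, 0],
     [-1, 0, 0, -1],
     [0, -1, 1, 0]]
MT = [[M[j][i] for j in range(4)] for i in range(4)]
assert matmul(MT, N) == [[-1 if i == j else 0 for j in range(4)]
                         for i in range(4)]

# B = [v  Nv  N^2 v  N^3 v] with cyclic vector v = e_1.
cols, v = [], [[1], [0], [0], [0]]
for _ in range(4):
    cols.append([row[0] for row in v])
    v = matmul(N, v)
B = [[cols[j][i] for j in range(4)] for i in range(4)]

# C: companion matrix of (1 + mu^2)^2 = 1 + 2 mu^2 + mu^4.
C = [[0, 0, 0, -1],
     [1, 0, 0, 0],
     [0, 1, 0, -2],
     [0, 0, 1, 0]]
assert matmul(N, B) == matmul(B, C)
print("N B = B C holds for n = 1")
```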
\begin{Thm3}\label{realps} Assume that the matrix $M$ consists of a single real Jordan block $($\ref{realjordan}$)$, corresponding to a complex conjugate pair of eigenvalues. Then, every analytic solution of $($\ref{fmg}$)$ is $*$-analytic. In detail, let $(f, g)$ be an analytic solution of $($\ref{fmg}$)$ defined by $f + \mathrm{i}g = F_0 + F_1 + \cdots + F_n$, where $F_k$ is given by $($\ref{Fkreal}$)$ and the corresponding function $\phi_k$ has power series representation \begin{align*} \phi_k(s) = \sum_{j=0}^\infty c_{kj}s^j. \end{align*} Then $(f, g)$ can be expressed as a $*$-power series in the following way. \begin{align}\label{psreal} \sum_{m=0}^n \sum_{j=0}^\infty c_{mj} a_{m,\mu}* (s_\mu)_*^j = f + \cdots + g\mu^{2n+1}, \end{align} where $s_\mu$ is given by $($\ref{smu}$)$ and $a_{m,\mu}$ are constant solutions given by \begin{align*} a_{m,\mu} = \sum_{j=0}^m m!(1 + \mu^2)^{n-j} \sum_{k=0}^j \sum_{i=0}^m (-1)^k 2^{i-m} \binom{j}{k}b_{ikm} \end{align*} where \begin{align*} b_{ikm} = \left\{ \begin{array}{ll}\displaystyle (-1)^{\frac{m}{2}} \binom{i+2k}{i} & \textrm{if $m$ is even} \\ & \\ \displaystyle \mu (-1)^{\frac{m-1}{2}} \binom{i+2k-1}{i} & \textrm{if $m$ is odd} \end{array}\right. \end{align*} \end{Thm3} \begin{proof} The proof is by induction over $n$. For $n = 0$, $f + \mathrm{i}g = \sum_j c_{0j}(x_0 + \mathrm{i}y_0)^j$, \begin{align*} B = \left[ \binom{-1}{0}e_1 \; \binom{0}{0}(-\Lambda)e_1\right] = I. \end{align*} Thus, $\mathbf{x} = \mathbf{s}$, and since $a_{0,\mu} = b_{000} = 1,$ the $*$-power series in the left hand side of (\ref{psreal}) reduces to \begin{align*} \sum_{j=0}^\infty c_{0j} a_{0,\mu}* (s_\mu)_*^j = \sum_{j=0}^\infty c_{0j} (x_0 + y_0\mu)_*^j = f + g\mu. \end{align*} Hence, the theorem holds for $n = 0$. Let $V_{n,\mu}$ denote the $*$-power series in the left hand side of (\ref{psreal}).
Then, \begin{align*} \sum_{m=0}^n \sum_{j=0}^\infty c_{mj} a_{m,\mu}* (s_\mu)_*^j = \sum_{j=0}^\infty c_{nj} a_{n,\mu}* (s_\mu)_*^j + V_{n-1,\mu}(1 + \mu^2). \end{align*} Thus, if we assume that $V_{n-1,\mu} = \tilde{f} + \cdots + \tilde{g}\mu^{2n-1}$ where $\tilde{f} + \mathrm{i}\tilde{g} = F_0 + F_1 + \cdots + F_{n-1}$, it is enough (by induction) to prove that \begin{align*} \sum_{j=0}^\infty c_{nj} a_{n,\mu}* (s_\mu)_*^j = f_n + \cdots + g_n\mu^{2n+1}, \end{align*} where $f_n + \mathrm{i}g_n = F_n$. By letting $F_n^{(N)} = f_n^{(N)} + \mathrm{i}g_n^{(N)}$ denote the function $F_n$ corresponding to $\phi_n(s) = s^N$, the proof is complete if we can prove for any $N$ that \begin{align}\label{amusmuN} a_\mu * (s_\mu)_*^N = f_n^{(N)} + \cdots + g_n^{(N)}\mu^{2n+1}, \end{align} where $a_\mu = a_{n,\mu}$. We start by confirming the case $N = 1$, and thereafter we consider the general case $N > 1$. We begin by expressing the functions $f_n^{(1)}$ and $g_n^{(1)}$ in the $s$-variables. Let $z_\mu = z_0 + z_1\mu + \cdots + z_n\mu^n$; then \begin{align*} F_n^{(1)} & = \left. \left( \frac{\partial^n}{\partial \mu^n}z_\mu - \sum_{l=1}^n \left( \frac{-\mathrm{i}}{2} \right)^l \frac{n!}{(n-l)!} \overline{\frac{\partial^{n-l}}{\partial \mu^{n-l}} z_\mu} \right) \right|_{\mu=0} \\ & = \frac{n!}{2^n}\left( 2^nz_n - \sum_{l=1}^n 2^{n-l}(-\mathrm{i})^l \bar{z}_{n-l} \right).
\end{align*} Thus, if we let \begin{align*} D = \matris{1}{0}{0}{-1}, \end{align*} then, by using lemma \ref{lemxbs} and the identity $\Lambda D = -D \Lambda$, we get in matrix notation \begin{align*} & \vektor{f_n^{(1)}}{g_n^{(1)}} = \frac{n!}{2^n}\left( 2^n\vektor{x_n}{y_n} - \sum_{l=1}^n 2^{n-l}\Lambda^l\vektor{x_{n-l}}{-y_{n-l}}\right) \\ & \quad \quad = \frac{n!}{2^n}\left( 2^n\vektor{x_n}{y_n} - \sum_{l=1}^n 2^{n-l}\Lambda^lD\vektor{x_{n-l}}{y_{n-l}}\right) \\ &\quad \quad = \frac{n!}{2^n} \left[ -\Lambda^n D \;\; \ldots\;\; -2^{n-1} \Lambda D \;\; 2^nI \right] \mathbf{x} \\ & \quad \quad= \frac{n!}{2^n} \left[ -D(-\Lambda)^n \;\; \ldots\;\; -2^{n-1}D(-\Lambda) \;\; 2^nI \right] B\mathbf{s} \end{align*} \begin{align*} & \quad \quad= \frac{n!}{2^n} \sum_{j=1}^{2n+2} \left( 2^nB_{n+1,j} -\sum_{i=1}^n 2^{i-1} D(-\Lambda)^{n-i+1} B_{i,j} \right)s_{j-1} \\ & \quad \quad= \frac{n!}{2^n} \!\sum_{j=1}^{2n+2} \!\left( 2^n \binom{n \!+\! j \!-\! 2}{n} I \!-\!\sum_{i=1}^n 2^{i-1} \binom{i \!+\! j \!-\!3}{i-1} D \right) (-\Lambda)^{n+j-1} e_1s_{j-1} \\ & \quad \quad= \sum_{j=0}^{2n+1}\matris{b_j}{0}{0}{d_j} (-\Lambda)^{n+j} e_1s_j, \end{align*} where \begin{align*} d_j = \frac{n!}{2^n} \sum_{i=0}^n 2^i \binom{i \!+\! j \!-\!1}{i}, \quad \textrm{and } b_j = 2n!\binom{n \!+\! j\!-\!1}{n} - d_j. \end{align*} If we assume that $n$ is even, since $\Lambda^2 = -I$, we obtain \begin{align*} & \vektor{f_n^{(1)}}{g_n^{(1)}} = \sum_{j=0}^{2n+1} (-1)^{\frac{n}{2}} \matris{b_j}{0}{0}{d_j} (-\Lambda)^{j} e_1s_j \\ &\quad \quad = \sum_{r=0}^{n} (-1)^{\frac{n}{2}+r} \left( s_{2r} \matris{b_{2r}}{0}{0}{d_{2r}} - s_{2r+1} \matris{b_{2r+1}}{0}{0}{d_{2r+1}} \Lambda \right) e_1 \\ &\quad \quad = \sum_{r=0}^{n} (-1)^{\frac{n}{2}+r} \vektor{b_{2r}s_{2r}}{d_{2r+1}s_{2r+1}}. \end{align*} When $n$ is odd, we get in a similar way \begin{align*} \vektor{f_n^{(1)}}{g_n^{(1)}} = \sum_{r=0}^{n} (-1)^{\frac{n-1}{2}+r} \vektor{-b_{2r+1}s_{2r+1}}{d_{2r}s_{2r}}. 
\end{align*} We now have to confirm that $a_\mu*s_\mu = f_n^{(1)} + \cdots + g_n^{(1)}\mu^{2n+1}$. We note that it is enough to prove that the coefficients at the highest power of $\mu$ coincide, since the other coefficients are then uniquely determined up to irrelevant constant terms. Moreover, we consider only the case when $n$ is even, since the case with odd $n$ is completely analogous. Let $a_\mu = \sum_{i=0}^n a_i (1 + \mu^2)^i$, and express $s_\mu$ in the following way. \begin{align*} s_\mu & = \sum_{r=0}^{2n+1} s_r\mu^r = \sum_{k=0}^n (s_{2k} + s_{2k+1}\mu)\mu^{2k} = \sum_{k=0}^n (s_{2k} + s_{2k+1}\mu)(1 + \mu^2 - 1)^k \\ & = \sum_{j=0}^n t_j(1 + \mu^2)^j, \quad \textrm{where} \quad t_j = \sum_{k=j}^n (-1)^{k-j}\binom{k}{j}(s_{2k} + s_{2k+1}\mu). \end{align*} Then, we obtain \begin{align*} a_\mu * s_\mu & = \left( \sum_{i=0}^n a_i (1 + \mu^2)^i \right) * \left( \sum_{j=0}^n t_j(1 + \mu^2)^j \right) \\ & = \sum_{k=0}^n (1 + \mu^2)^k \sum_{r=0}^k a_{k-r}t_r \\ & = \cdots + \mu^{2n+1}\left( \sum_{i=0}^n a_{n-i} \sum_{r=i}^n (-1)^{r-i} \binom{r}{i}s_{2r+1}\right) \\ & = \cdots + \mu^{2n+1}\left( \sum_{r=0}^n s_{2r+1} \sum_{i=0}^r (-1)^{r-i} \binom{r}{i} a_{n-i} \right). \\ \end{align*} It is therefore enough, for proving the case $N = 1$, to confirm that the following expression vanishes for each $r = 0, 1, \ldots, n$. \begin{align*} & \sum_{i=0}^r (-1)^i \binom{r}{i} a_{n-i} - (-1)^{\frac{n}{2}} d_{2r+1} \\ = & \sum_{i=0}^r (-1)^i \binom{r}{i} n! \sum_{k=0}^i \sum_{j=0}^n (-1)^k 2^{j-n} \binom{i}{k} (-1)^{\frac{n}{2}} \binom{j\!+\!2k}{j} \\ & \quad - (-1)^{\frac{n}{2}} \frac{n!}{2^n} \sum_{j=0}^n \binom{j\!+\!2r}{j}2^j \\ = & \; (-1)^{\frac{n}{2}} \frac{n!}{2^n} \sum_{j=0}^n 2^j \left( \sum_{k=0}^r (-1)^k \binom{j\!+\!2k}{j} \sum_{i=k}^r (-1)^i \binom{r}{i} \binom{i}{k} - \binom{j\!+\!2r}{j} \right). \end{align*} Thus, the case $N = 1$ is proved by the following calculation. 
\begin{align*} & \sum_{i=k}^r (-1)^i \binom{r}{i} \binom{i}{k} = \sum_{i=k}^r (-1)^i \binom{r}{k} \binom{r\!-\!k}{i\!-\!k} \\ = & \; (-1)^k \binom{r}{k} \sum_{i=0}^{r-k} (-1)^i \binom{r\!-\!k}{i} = \left\{ \begin{array}{ll} (-1)^k & \textrm{if } r = k \\ & \\ 0 & \textrm{if } r \ne k. \end{array} \right. \end{align*} We are now ready to prove (\ref{amusmuN}) for arbitrary $N \ge 1$. We have already proven the case when $n = 0$, so it is no restriction to assume that $n \ge 1$. Let $\tilde{f}_n^{(N)}$ and $\tilde{g}_n^{(N)}$ denote the coefficients at the lowest and highest powers of $\mu$ in $a_\mu * (s_\mu)_*^N$, respectively. We will prove that $\tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)} = F_n^{(N)}$ by showing that, when considered as polynomials in $x_0$, their coefficients at the power $x_0^{N-1}$ coincide. Thereafter we will motivate that this implies that the functions must be identical. We consider first the function $F_n^{(N)}$. Since $z_\mu - x_0$ is independent of $x_0$, it follows that \begin{align*} z_\mu^N & = (z_0 + z_1\mu + \cdots z_n \mu^n)^N = (x_0 + (z_\mu - x_0))^N \\ & = x_0^N + Nx_0^{N-1}(z_\mu - x_0) + \textrm{ (lower order terms in $x_0$ (l.o.t.))} \\ & = Nx_0^{N-1}z_\mu - (N - 1)x_0^N + \textrm{ (l.o.t.)} \end{align*} Thus, since $(N - 1)x_0^N$ is independent of $\mu$, we get \begin{align*} F_n^{(N)} & = \left. \left( \frac{\partial^n}{\partial \mu^n} z_\mu^N - \sum_{l=1}^n \left( \frac{-\mathrm{i}}{2} \right)^l \frac{n!}{(n-l)!} \frac{\partial^{n-l}}{\partial \mu^{n-l}} \bar{z}_\mu^N \right) \right|_{\mu=0} \\ & = \left( Nx_0^{N-1} \frac{\partial^n}{\partial \mu^n} z_\mu - \sum_{l=1}^{n-1} \left( \frac{-\mathrm{i}}{2} \right)^l \frac{n!}{(n-l)!} Nx_0^{N-1} \frac{\partial^{n-l}}{\partial \mu^{n-l}} \bar{z}_\mu \right.\\ & \quad \quad \left. \left. - \left( \frac{-\mathrm{i}}{2} \right)^n n! \bar{z}_\mu^N \right) \right|_{\mu=0} + \textrm{ (l.o.t.)} \\ & = n!x_0^{N-1}\!\left( \!Nz_n \!-\! 
\sum_{l=1}^{n-1} \left( \frac{-\mathrm{i}}{2} \right)^l \! N\bar{z}_{n-l} - \left( \frac{-\mathrm{i}}{2} \right)^n \!(x_0 \!- \!\mathrm{i}Ny_0)\! \right) \!+\! \textrm{ (l.o.t.)} \\ & = n!x_0^{N-1}\!\left( Nz_n - \sum_{l=1}^n \left( \frac{-\mathrm{i}}{2} \right)^l N\bar{z}_{n-l} - (N - 1)x_0 \right) + \textrm{ (l.o.t.)} \\ & = x_0^{N-1}\!\left( NF_n^{(1)} - n! (N - 1)x_0 \right) + \textrm{ (l.o.t.)} \end{align*} On the other hand, from lemma \ref{lemxbs}, it follows that, $s_0 = x_0 + c_0$, where $c_0$ is independent of $x_0$, and that $s_1, \ldots s_{2n+1}$ are independent of $x_0$. Thus, \begin{align*} s_\mu^N = \left( x_0 + (s_\mu - x_0) \right)^N = Nx_0^{N-1}s_\mu - (N - 1)x_0^N + \textrm{ (l.o.t.)}, \end{align*} and therefore, using the fact that we already proved the case when $N = 1$, we obtain \begin{align*} a_\mu * (s_\mu)_*^N & = a_\mu * \left( Nx_0^{N-1}s_\mu - (N - 1)x_0^N + \textrm{ (l.o.t.)} \right) \\ & = x_0^{N-1} \left( Na_\mu*s_\mu - (N - 1)x_0a_\mu \right) + \textrm{ (l.o.t.)} \\ & = x_0^{N-1} \left( Nf_n^{(1)} - (N - 1)a_0x_0 \right.\\ & \quad \quad \left. + \cdots + \mu^{2n+1} (Ng_n^{(1)} - (N - 1)a_nx_0) \right) + \textrm{ (l.o.t.)} \end{align*} Hence, \begin{align*} \tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)} = x_0^{N-1} \! \left( NF_n^{(1)} - (N - 1)(a_0 + \mathrm{i}a_n) x_0 \right) + \textrm{ (l.o.t.)} \end{align*} we conclude that the coefficients of $\tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)}$ and $F_n^{(N)}$ at the power $x_0^{N-1}$ coincide. The last part of the proof is to motivate that $\tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)}$ and $F_n^{(N)}$ must coincide. From the $*$-multiplication theorem \ref{realmult}, it follows that $(\tilde{f}_n^{(N)}$, $\tilde{g}_n^{(N)})$ must be a solution of (\ref{fmg}). Therefore, $\tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)} = F_0 + \cdots + F_n$, where the functions $F_k$ have the form (\ref{Fkreal}) for some choice of $\phi_k$. 
Moreover, $\tilde{f}_n^{(N)}$ and $\tilde{g}_n^{(N)}$ are homogeneous polynomials in the variables $x_0, y_0, \ldots, y_n$. Since $F_k = F_k^{(r)}$ gives a polynomial solution which is homogeneous of degree $r$, we conclude that $F_0 = b_0F_0^{(N)}$, $F_1 = b_1F_1^{(N)}$, $\ldots$, $F_n = b_nF_n^{(N)}$, for some constants $b_0, \ldots, b_n$. Thus, for some polynomial $c$ which is constant in $x_0$, we get \begin{align} & \tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)} - F_n^{(N)} = b_0F_0^{(N)} + \cdots + b_{n-1}F_{n-1}^{(N)} + (b_n - 1) F_n^{(N)} \nonumber \\ & \quad = cx_0^N + Nx_0^{N-1} \left( b_0(F_0^{(1)} - x_0) + \cdots + b_{n-1}(F_{n-1}^{(1)} - x_0) \right. \nonumber \\ & \quad \quad \quad\left.+ (b_n - 1)(F_n^{(1)} - x_0) \right) + \textrm{ (l.o.t.)} \label{fix0} \end{align} Finally, the polynomials $F_0^{(1)} - x_0, \ldots, F_n^{(1)} - x_0$ are linearly independent since, for each $k$, $F_k^{(1)} - x_0$ is independent of $y_{k+1}, \ldots, y_n$ with a non-trivial dependence on $y_k$. Therefore, since we know that the $x_0^{N-1}$-coefficient in (\ref{fix0}) is zero, we obtain that $b_0 = \cdots = b_{n-1} = 0$ and $b_n = 1$. Hence, we conclude that $\tilde{f}_n^{(N)} + \mathrm{i} \tilde{g}_n^{(N)} = F_n^{(N)}$, which completes the proof. \end{proof} In order to prove that any analytic solution of (\ref{fmg}) is also $*$-analytic, one would need to cover the last case when the matrix $M$ consists of several real Jordan blocks corresponding to the same eigenvalue. This step is technically involved and we have omitted it here. It is, however, expected that the result holds in full generality, as we proved in the complex case. We conclude this section by stating a hypothesis for the last remaining case of the equation (\ref{fmg}). Assume that $M$ is a real matrix with eigenvalues $\pm\mathrm{i}$.
It is then no restriction to assume that $M$ has the form $M = \mathrm{diag}(-C^{-T}_1, -C^{-T}_2, \ldots, -C^{-T}_m)$, where each block $C_i$ is the companion matrix for the polynomial $(1 + \mu^2)^{n_i+1}$ with $n_1 \ge n_2 \ge \cdots \ge n_m$. Let $s^1_0, s^1_1, \ldots, s^1_{2n_1+1}, s^2_0, \ldots, s^m_{2n_m+1}$ be the corresponding coordinates. According to theorem \ref{realmult}, the equation (\ref{fmg}) can be extended to the $\mu$-dependent equation (\ref{amuv}). \begin{Hyp3} The general analytic solution of the equation (\ref{fmg}), with $M = \mathrm{diag}(-C^{-T}_1, -C^{-T}_2, \ldots, -C^{-T}_m)$, is $*$-analytic. In detail, every analytic solution $(f,g)$ of (\ref{fmg}) can be obtained from a $*$-power solution $V_\mu = f + V_1\mu + \cdots + g\mu^{2n+1}$ of (\ref{amuv}) given by \begin{align*} V_\mu = \sum_{r=1}^m \sum_{I=(i_1,i_2, \ldots , i_r)\in\mathbb{N}^r} \!\!\!\!\!\!(a_I)_\mu * (\mathbf{s}_\mu)^I_*, \end{align*} where \begin{align}\label{smui} (\mathbf{s}_\mu)^I_* := \left( s^1_\mu \right)_*^{i_1} * \left( \mu^{n_1-n_2}(s^2_\mu)_*^{i_2} * \cdots * \left( \mu^{n_{r-1}-n_r}(s^r_\mu)_*^{i_r} \right)\right). \end{align} The $*$-power (\ref{smui}) should be interpreted in a similar way as the corresponding $*$-power (\ref{xI}) in the complex case. \end{Hyp3} \section{Conclusions}\label{concsec} The product of two holomorphic functions is again holomorphic. In terms of the Cauchy--Riemann equations, \begin{align*} \nabla f = \matris{0}{1}{-1}{0} \nabla g, \end{align*} this fact can be expressed through a multiplication (bilinear operation) $*$ on the solution space \begin{align*} (f, g)*(\tilde{f}, \tilde{g}) = (f\tilde{f} - g\tilde{g}, f\tilde{g} + g\tilde{f}). \end{align*} That any holomorphic function is analytic means that any solution $(f, g)$ of the Cauchy--Riemann equations can be expressed locally as a convergent power series of a simple (linear in the variables $x, y$) solution. 
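As a concrete instance of the multiplication rule above, the pairs $(x^2 - y^2,\, 2xy)$ and $(x, y)$, arising from $z^2$ and $z$, multiply to the pair arising from $z^3$. A small numerical check of this (our own illustration):

```python
import numpy as np

def star(p, q):
    """*-multiplication of two solution pairs (f, g), given as callables."""
    f, g = p
    ft, gt = q
    return (lambda x, y: f(x, y) * ft(x, y) - g(x, y) * gt(x, y),
            lambda x, y: f(x, y) * gt(x, y) + g(x, y) * ft(x, y))

# (f, g) coming from z^2 and from z; their *-product should come from z^3
p = (lambda x, y: x**2 - y**2, lambda x, y: 2 * x * y)
q = (lambda x, y: x, lambda x, y: y)
f3, g3 = star(p, q)

# compare against the real and imaginary parts of z^3 on random points
xs, ys = np.random.default_rng(1).normal(size=(2, 100))
assert np.allclose(f3(xs, ys), xs**3 - 3 * xs * ys**2)
assert np.allclose(g3(xs, ys), 3 * xs**2 * ys - ys**3)
```

Both product components again satisfy the Cauchy--Riemann equations, as the $*$-multiplication theorem guarantees.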
In a neighborhood of the origin for example, any solution can be written as \begin{align*} (f, g) = \sum_{r=0}^\infty (a_r, b_r)*(x, y)_*^r, \quad \textrm{where} \quad (x, y)_*^r = \underbrace{(x, y)* \cdots *(x, y)}_{r \textrm{ factors}}. \end{align*} The main result of this paper is that we establish similar properties for the more general system \begin{align}\label{fmg2} \nabla f = M \nabla g, \end{align} where $M$ is an arbitrary $n \times n$ matrix with constant entries: \begin{enumerate} \item Any system (\ref{fmg2}) admits a multiplication $*$ on the solution space, mapping two solutions to a new solution in a bilinear way. \item The analytic solutions of (\ref{fmg2}) are characterized by the $*$-analytic functions, meaning that every solution of (\ref{fmg2}) can be expressed locally through $*$-power series of simple solutions. \end{enumerate} Systems of the form (\ref{fmg2}) constitute only a special case of a quite large family of systems of PDEs which admit $*$-multiplication of solutions \cite{jonasson-2007}. A natural problem, worth studying in the future, is therefore to settle whether the results for the gradient equation (\ref{fmg2}) can be extended to more complex systems of PDEs. \section*{Acknowledgments} I would like to thank Prof. Stefan Rauch-Wojciechowski for useful discussions and comments. \raggedright \end{document}
\begin{document} \title{Sparse Linear Centroid-Encoder: A Convex Method for Feature Selection} \author{\name Tomojit Ghosh \email [email protected] \\ \addr Department of Mathematics\\ Colorado State University\\ Fort Collins, CO 80523, USA \AND \name Michael Kirby \email [email protected] \\ \addr Department of Mathematics\\ Colorado State University\\ Fort Collins, CO 80523, USA \AND \name Karim Karimov \email [email protected] \\ \addr Department of Mathematics\\ Colorado State University\\ Fort Collins, CO 80523, USA} \editor{TBD} \maketitle \begin{abstract} We present a novel feature selection technique, Sparse Linear Centroid-Encoder (SLCE). The algorithm uses a linear transformation to reconstruct a point as its class centroid and, at the same time, uses the $\ell_1$-norm penalty to filter out unnecessary features from the input data. The original formulation of the optimization problem is nonconvex, but we propose a two-step approach, where each step is convex. In the first step, we solve the linear Centroid-Encoder, a convex optimization problem over a matrix $A$. In the second step, we only search for a sparse solution over a diagonal matrix $B$ while keeping $A$ fixed. Unlike other linear methods, e.g., Sparse Support Vector Machines and Lasso, Sparse Linear Centroid-Encoder uses a single model for multi-class data. We present an in-depth empirical analysis of the proposed model and show that it promotes sparsity on various data sets, including high-dimensional biological data. Our experimental results show that SLCE has a performance advantage over some state-of-the-art neural network-based feature selection techniques. \end{abstract} \section{Introduction} In the era of Big Data and AI supercomputing, there are unparalleled opportunities for knowledge discovery from data. Deep neural networks and transformers have had a transformative impact on how scientists and engineers approach modeling and predictive analytics.
While incredibly powerful, the modeling capabilities of these neural networks are often hard to explain, i.e., they perform well for reasons that are poorly understood. The tendency is to embrace complexity and large parameter sets, rather than simplicity and explainability. In this paper we present a {\it relatively} simple neural network architecture that has surprisingly good performance even with limited data. The main modeling question we address here is the existence of a reduced input feature set capable of classifying the phenomenon of interest, and further, whether there exists a linear data reduction mapping to facilitate their discovery. We are motivated along this path given the success of nonlinear architectures, e.g., deep feature selection (DFS)~\citep{li2016deep} and Centroid Encoder~\citep{ghosh2022supervised}. For example, the basic idea of Centroid Encoder is to use an autoencoder neural network architecture where the targets at the decoder output are the centroids of the classes. The result is a dimensionality-reducing transformation, i.e., the encoder, that can be used for classification. Motivated in part by this autoencoder architecture, in this work we propose a linear feature selection tool with a new optimization problem. We exploit the linearity of the mapping to create a two-step algorithm with each step being convex. The two steps are alternated and consist of data fitting and sparsification. The major appeal of this approach is the simplicity of the resulting model, the reduced set of explanatory features, and the fact that, since each problem is convex, much less data is actually required to learn the model. Why are we modeling {\it small} data sets in the era of Big Data?
While advances in technology have made the acquisition of certain types of health data possible, the ethical issues surrounding the collection of samples from human and animal subjects make this data extremely limited in comparison to data sets generated by scraping the internet. Note that in addition to the limited number of available points $p$, the dimension of each sample $n$ can be relatively large, e.g., omics data, so the assumption is that, at least for many biological data sets, $n \gg p$. Promoting sparsity with an $\ell_1$ penalty for feature selection is now widely used, e.g., \citep{tibshirani1996regression,fonti2017feature,muthukrishnan2016lasso,marafino2015efficient,shen2011identifying,lindenbaum2021randomly,candes2008enhancing,daubechies2010iteratively,bertsimas2017trimmed,xie2009scad}. Support Vector Machines \citep{cortes1995support} have also been used extensively for feature selection, see \citep{marafino2015efficient,shen2011identifying,sokolov2016pathway,guyon2002gene,o2013iterative,chepushtanova2014band}. Group Sparse ANN \citep{scardapane2017group} used group Lasso \citep{tibshirani1996regression} to impose the sparsity on a group of variables instead of a single variable. An unsupervised feature selection technique based on the autoencoder architecture has been proposed~\citep{han2018autoencoder}. Additional examples of nonlinear architectures include \citep{balin2019concrete,yamada2020feature, singh2020fsnet,taherkhani2018deep}. The contents of this paper are as follows: In Section \ref{SCLCE} we propose a two-step optimization problem for convex linear feature selection. In Section \ref{analysis} we explore the robustness of the methodology, including the sensitivity of the method to the sparsity parameter and the consistency of the features selected. In Section \ref{experiments} we present a comparative analysis of the proposed algorithm on benchmark datasets. Finally, we offer conclusions in Section \ref{dis_cons}.
\section{Sparse Optimization using Linear Centroid-Encoder} \label{SCLCE} Consider a data set $X=\{x_i\}_{i=1}^{N}$ with $N$ samples and $M$ classes where $x_i \in \mathbb{R}^d$. Note each sample of data matrix $X$ is represented by a column. The classes are denoted by $C_j, j = 1, \dots, M$ where the indices of the data associated with class $C_j$ are denoted $I_j$. We define the centroid of each class as $c_j=\frac{1}{|C_j|}\sum_{i \in I_j} x_i$ where $|C_j|$ is the cardinality of class $C_j$. Now we define a matrix of class means $\tilde{C} \in \mathbb{R}^{d \times N}$ where the $i$'th column of $\tilde{C}$ is the centroid associated with the class of the $i$'th column of $X$. Note $\tilde{C}$ will have non-unique entries as long as $M<N$. For example, consider the data set $X=\{x_1,x_2,x_3,x_4,x_5\}$ which has two classes $C_1,C_2$ where $I_1=\{1,3,5\}$ and $I_2=\{2,4\}$. Taking $c_1,c_2$ as the corresponding centroids, we have $\tilde C = \{c_1,c_2,c_1,c_2,c_1\}$. With this setup, we first describe the Linear Centroid-Encoder (LCE), which is the starting point of our sparse algorithm. \subsection{Linear Centroid-Encoder (LCE)} The goal of LCE is to provide the linear transformation of the data to $k$ dimensions that best approximates the class centroids. Let $A\in \mathbb{R}^{d\times k}$ be the transformation matrix. The unknown matrix ${A}$ may be determined by the following optimization problem \begin{equation} \begin{aligned} \underset {A} {minimize}\;\;\|\tilde C-{A} {A}^T X \|_F^2 \;\;\; \end{aligned} \label{equation:LCE_cost} \end{equation} Notice that the objective function in Equation (\ref{equation:LCE_cost}) is a convex function of $A$. Let $\mathcal{L}(A)=\frac{1}{2}\|\tilde C-{A} {A}^T X \|_F^2$.
The gradient of $\mathcal{L}$ is readily calculated as: \begin{equation} \begin{aligned} \frac{\partial \mathcal{L}}{\partial A}= AA^TXX^TA + XX^TAA^TA - (\tilde C X^T + X \tilde C^T)A \end{aligned} \label{equation:LCE_gradient} \end{equation} \subsection{Sparse Linear Centroid-Encoder (SLCE)} Let $B\in \mathbb{R}^{d\times d}$ be a diagonal matrix with diagonal entries $b_{j,j}$. If we left-multiply $X$ by $B$, i.e., $BX$, then each component $x_{i,j}$ of the sample $x_i$ will be multiplied by $b_{j,j}$ as shown below:\\ $BX = \begin{bmatrix} b_{1,1} \;\;0\;\;\hdots \;\; 0\\ 0 \;\; b_{2,2}\;\;\hdots \;\; 0\\ \vdots \hspace{0.75cm} \vdots \hspace{0.75cm}\vdots\\ 0 \;\; 0 \;\; \hdots \;\; b_{d,d}\\ \end{bmatrix} \begin{bmatrix} x_{1,1} \;\;x_{2,1}\;\;\hdots \;\;x_{n,1}\\ x_{1,2} \;\; x_{2,2}\;\;\hdots \;\;x_{n,2}\\ \vdots \hspace{1.0cm} \vdots \hspace{1.0cm}\vdots\\ x_{1,d} \;\;x_{2,d}\;\; \hdots \;\; x_{n,d}\\ \end{bmatrix} = \begin{bmatrix} b_{1,1} x_{1,1} \;\;b_{1,1} x_{2,1}\;\;\hdots \;\;b_{1,1} x_{n,1}\\ b_{2,2} x_{1,2} \;\; b_{2,2} x_{2,2}\;\;\hdots \;\;b_{2,2} x_{n,2}\\ \vdots \hspace{1.25cm} \vdots \hspace{1.5cm}\vdots\\ b_{d,d} x_{1,d} \;\;b_{d,d} x_{2,d}\;\; \hdots \;\; b_{d,d} x_{n,d}\\ \end{bmatrix}$ With this setup, we introduce the Sparse Linear Centroid-Encoder below: \begin{equation} \begin{aligned} \underset {A,B} {minimize}\;\;\|\tilde C-{A} {A}^T (BX) \|_F^2 \;\; + \lambda |diag(B)|_1 \end{aligned} \label{equation:SLCE_cost} \end{equation} where $\lambda$ is a hyperparameter. The $\ell_1$-norm will drive most of the $b_{j,j}$ to near zero. In effect, the corresponding elements or features of $x_i$ will be ignored. Hence the model will work as a linear feature detector. It is noteworthy that our aim is to reconstruct the class centroid $c_j$ of a sample $x_i$ with fewer features. Equation \ref{equation:SLCE_cost} is a non-convex model over the matrices $A,B$.
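The gradient in Equation (\ref{equation:LCE_gradient}) can be sanity-checked against central finite differences on toy data; the snippet below (our own verification sketch, with arbitrary sizes) also illustrates the construction of the centroid matrix $\tilde C$ from class labels:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 6, 3, 10
X = rng.normal(size=(d, n))            # columns are samples
labels = rng.integers(0, 2, size=n)

# Build the centroid matrix: column i holds the centroid of sample i's class
C = np.empty_like(X)
for c in np.unique(labels):
    C[:, labels == c] = X[:, labels == c].mean(axis=1, keepdims=True)

A = rng.normal(size=(d, k))

def loss(A):
    R = C - A @ A.T @ X
    return 0.5 * np.sum(R * R)

# Closed-form gradient of L(A) = 0.5 * ||C - A A^T X||_F^2
G = A @ A.T @ X @ X.T @ A + X @ X.T @ A @ A.T @ A - (C @ X.T + X @ C.T) @ A

# Central finite differences, entry by entry
eps, G_fd = 1e-6, np.zeros_like(A)
for i in range(d):
    for j in range(k):
        E = np.zeros_like(A)
        E[i, j] = eps
        G_fd[i, j] = (loss(A + E) - loss(A - E)) / (2 * eps)

assert np.allclose(G, G_fd, rtol=1e-5, atol=1e-5)
```

The two gradients agree to within finite-difference accuracy, confirming the formula.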
Notice that the optimization becomes convex if we fix one matrix and solve over the other. Hence the solution is obtained in two convex steps: in the first step we keep $B$ fixed and solve over $A$, and in the second step we freeze $A$ and solve over $B$. The hyperparameter $\lambda$ in Equation \ref{equation:SLCE_conves_step2} controls the sparsity. A higher value will drive most of the $b_{j,j}$ to near zero, producing a sparser solution than a solution with lower $\lambda$. Therefore $\lambda$ is the knob which controls the sparsity of our model. \textbf{Comment on Convexity:} The optimization problem of SLCE is non-convex over the matrices $A,B$. But if we optimize over one matrix keeping the other one fixed, then the problem becomes convex as shown below: \begin{equation} \begin{aligned} \underset {A} {minimize}\;\;\|\tilde C-{A} {A}^T (BX) \|_F^2 \;\; + \lambda |diag(B)|_1 \end{aligned} \label{equation:SLCE_conves_step1} \end{equation} \begin{equation} \begin{aligned} \underset {B} {minimize}\;\;\|\tilde C-{A} {A}^T (BX) \|_F^2 \;\; + \lambda |diag(B)|_1 \end{aligned} \label{equation:SLCE_conves_step2} \end{equation} If we initialize the diagonal entries $b_{j,j}$ of matrix $B$ to 1, then Equation (\ref{equation:SLCE_conves_step1}) is equivalent to Equation (\ref{equation:LCE_cost}), which is the LCE cost. First, we solve this optimization, which is convex over the set of $\mathbb{R}^{d \times k}$ matrices $A$. The domain is a convex set, as a convex combination of two $\mathbb{R}^{d \times k}$ matrices is again an $\mathbb{R}^{d \times k}$ matrix. The cost function $f(A)=\|\tilde C-{A} {A}^T X \|_F^2$ is also convex as any norm is a convex function \citep{10.5555/993483}. The second part of the optimization, i.e., Equation \ref{equation:SLCE_conves_step2}, is also convex. The domain is a convex set, as any convex combination of two $\mathbb{R}^{d\times d}$ diagonal matrices is again an $\mathbb{R}^{d\times d}$ diagonal matrix.
The function $g(B)=\|\tilde C-{A} {A}^T (BX)\|_F^2 \;\; + \lambda |diag(B)|_1 $ is a sum of a Frobenius-norm term and an $\ell_1$-norm term, both of which are convex. Hence $g(B)$ is a convex function, as it is a sum of two convex functions \citep{10.5555/993483}. \subsection{Training of SLCE} As mentioned before, SLCE is a two-step convex algorithm. In the first step, we search for a solution for the matrix $A$ using Equation \ref{equation:LCE_cost} with the embedding dimension set to 5 for all data sets. In this step, we train the model until the absolute value of the difference of the costs of two consecutive iterations becomes less than or equal to $10^{-6}$. After that, we fix the matrix $A$ and introduce the diagonal matrix $B$ with diagonal entries set to 1. We run the model for ten iterations without applying the $\ell_1$ penalty to adjust the parameters of $B$. We then apply the $\ell_1$ penalty to the diagonal elements of $B$ for $2000$ iterations. Throughout the training process, we use a fixed learning rate of 0.002. We do not use mini-batches but train on the entire training set with the Adam optimizer \citep{DBLP:journals/corr/KingmaB14}. We implemented SLCE in PyTorch \citep{paszke2017automatic} to run on Tesla V100 GPUs. We will provide the code with a dataset as supplementary material. \\ \textbf{Hyperparameter Tuning:} SLCE uses two hyperparameters: the embedding dimension $k$ and the sparsity parameter $\lambda$. In all of our experiments in the article, we fixed $k=5$. To tune $\lambda$, we performed two-fold cross-validation on a training partition with ten repeats to pick a suitable $\lambda$ for each data set.
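The two-step procedure can be summarized in a short sketch. We stress that this is a simplified stand-in, not the authors' implementation: it uses plain gradient descent in NumPy rather than Adam in PyTorch, fixed iteration counts rather than the convergence criterion, and omits the ten penalty-free warm-up iterations; the function name and toy defaults are ours.

```python
import numpy as np

def train_slce(X, C, k=5, lam=0.1, lr=1e-3, steps=(500, 500), seed=0):
    """Two-step SLCE sketch; X and C are d x n with samples as columns."""
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    A = 0.1 * rng.normal(size=(d, k))
    # Step 1: B = I, fit A (the LCE problem) by gradient descent
    for _ in range(steps[0]):
        R = C - A @ A.T @ X
        G = -(R @ X.T @ A + X @ R.T @ A)   # grad of 0.5*||R||_F^2 (scale in lr)
        A -= lr * G
    # Step 2: freeze A, fit the diagonal of B under the l1 penalty
    b = np.ones(d)
    P = A @ A.T
    for _ in range(steps[1]):
        R = C - P @ (b[:, None] * X)
        g = -np.einsum('ij,ij->i', P @ R, X) + lam * np.sign(b)  # subgradient
        b -= lr * g
    return A, b
```

Each step only descends on the corresponding convex subproblem; the alternation itself follows the two-step scheme described above.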
We chose $\lambda$ from the range $0.04 \dots 0.5$ and the optimal values are reported in Table \ref{table:SLCEHyperparameters_dataset}. \begin{table}[!ht] \centering \begin{tabular} {|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Hyperparameter} & \multicolumn{6}{c|}{Dataset} \\ \cline{2-7} & \multicolumn{1}{c|} {ALLAML} & \multicolumn{1}{c|} {GLIOMA} & \multicolumn{1}{c|} {SMK\_CAN} & \multicolumn{1}{c|}{Prostate\_GE} & \multicolumn{1}{c|}{GLI\_85} & \multicolumn{1}{c|}{CLL\_SUB} \\ \cline{1-7} {$\lambda$} & $0.10$ & $0.30$ & $0.05$ & $0.20$ & $0.25$ & $0.04$\\ \hline \end{tabular} \caption{Data set specific sparsity parameter $\lambda$ used in our benchmarking experiment.} \label{table:SLCEHyperparameters_dataset} \end{table} \section{Analysis of SLCE} \label{analysis} We performed an array of analyses of the proposed model and present the details here. \subsection{Feature Sparsity} \begin{figure} \caption{Sparsity analysis of SLCE on the Pancan data set for three different choices of $\lambda$. In each case, we plot the absolute values of the diagonal elements ($b_{jj}$) of matrix $B$.} \label{fig:SLCE_sparsity_analysis} \end{figure} The proposed model induces sparsity by minimizing the $\ell_1$-norm of the diagonal matrix $B$. The hyperparameter $\lambda$ acts as a regulator to control feature sparsity; a higher value will generate a sparser solution selecting a small set of features from the input data, whereas a smaller value will pick a large group of variables. Figure \ref{fig:SLCE_sparsity_analysis} shows the sparsity analysis using the high-dimensional Pancan data with $20,531$ features. We split the data set into a $50:50$ ratio of train and test and run SLCE on the training partition. The model promotes feature sparsity by driving a significant number of $b_{jj}$ to near zero ($10^{-4}$ to $10^{-8}$). For example, the model only selects $887$ out of $20,531$ variables of the original data when $\lambda=0.5$.
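The feature count for a given $\lambda$ can be read off by thresholding the magnitudes of the learned diagonal entries; the threshold and the synthetic weights below are illustrative choices of ours, not values from our experiments:

```python
import numpy as np

def selected_features(b, thresh=1e-3):
    """Indices of features whose |b_jj| survives the l1 shrinkage."""
    w = np.abs(b)
    return np.flatnonzero(w > thresh * w.max())

# synthetic diagonal weights: three informative features, three shrunk away
b = np.array([0.9, 1.2, 3e-6, 0.7, 5e-8, 2e-5])
print(selected_features(b))   # [0 1 3]
```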
As expected, the number of chosen features starts to increase for smaller values of $\lambda$. \subsection{Analysis of Feature Cut-Off} The experiment with feature sparsity shows that the $\ell_1$-norm drives many diagonal elements of matrix $B$ to near zero, as is clear from the three sparsity plots. Here we address the question of identifying the features whose absolute weights are significantly higher than the rest. In Figure \ref{fig:SLCE_feature_cutOff}, we plot the ratio of the absolute weight of two consecutive $b_{jj}$'s for three values of $\lambda$. The plot suggests that the ratio attains its maximum at a specific location. For example, when $\lambda=0.5$, the location is 887, and for $\lambda=0.25$, the location is 2155. Hence one can ignore the features after this position. This location matches the position of the elbow in the sparsity plot in the previous analysis. In all of our experiments, we observe the consistent behavior of the ratio plot, which we use to pick out the significant variables in each run. \begin{figure} \caption{Demonstration of the feature cut-off using the ratio of two consecutive weights.} \label{fig:SLCE_feature_cutOff} \end{figure} \subsection{Feature Selection Stability} \begin{figure} \caption{Demonstration of stability of SLCE features on the Pancan data set. We ran the model with three choices of $\lambda$ five times each. The Venn diagram shows the intersection of those five feature sets.} \label{fig:SLCE_feature_stability} \end{figure} Here we analyze how stable the proposed model is regarding the number of selected features over multiple trials. We also check the overlap of the feature sets over several runs to see how similar they are. To this end, we run our model five times on the Pancan data for three different values of $\lambda$ and then compare the feature sets. Figure \ref{fig:SLCE_feature_stability} shows the results. Notice that, in each case, the number of features the model selects is consistent.
For example, when $\lambda=0.5$, the model picks 887, 884, 886, 889, and 886 variables with an overlap of 876, resulting in a Jaccard similarity of 0.9766. Similarly, we found a high Jaccard index of 0.9393 and 0.9006 for $\lambda=$ $0.25$ and $0.125$, respectively. High Jaccard scores indicate that the feature sets have a lot of commonality over different runs. \subsection{Discriminative Power of SLCE Features} Now we focus on whether the selected variables help separate the classes, i.e., the discriminative power of the selected features. We compare the PCA embedding of the Pancan data using all features vs. SLCE-selected features. First, we create the PCA embedding on the training set with all $20,531$ features and show the training and test samples in 3D using a scatter plot. After that, we fit the training set by SLCE with $\lambda=0.5$, pick the selected $887$ features, create a PCA embedding of the training and test samples using the selected features, and compare the 3D scatter plot with the first one, as shown in Figure \ref{fig:SLCE_feature_dicriminative_power}. The embedding with all the features does not separate the five classes, whereas with the SLCE features PCA creates five distinct blobs of data, one for each tumor type. Notice that the test samples are also mapped closer to the corresponding training data. \begin{figure} \caption{Demonstration of the discriminative power of SLCE features on the Pancan data set. On the left, we show the three-dimensional PCA embedding done with all the features. On the right, we show the PCA embedding with 887 SLCE features.} \label{fig:SLCE_feature_dicriminative_power} \end{figure} \section{Experimental Results} \label{experiments} We present the comparative evaluation of our model on various data sets using several feature selection techniques. \subsection{Experimental Details} \begin{table}[!ht] \centering \begin{tabular} {|c|c|c|c|c|} \hline Dataset & No. Features & No. of Classes & No.
of Samples & Domain \\ \hline ALLAML & 7129 & 2 & 72 & Biology \\ GLIOMA & 4434 & 4 & 50 & Biology \\ SMK\_CAN & 19993 & 2 & 187 & Biology \\ Prostate\_GE & 5966 & 2 & 102 & Biology \\ GLI\_85 & 22283 & 2 & 85 & Biology \\ CLL\_SUB & 11340 & 3 & 111 & Biology \\ PanCan & 20531 & 5 & 801 & Biology (RNA-Seq) \\ \hline \end{tabular} \caption{Descriptions of the data sets used for benchmarking experiments.} \label{table:dataDescription} \end{table} We use seven high-dimensional biological data sets (see Table \ref{table:dataDescription}) to compare SLCE with Penalized Fisher's Linear Discriminant Analysis (PFLDA) \citep{witten2011penalized} and three neural network-based models in our benchmarking experiments. We implemented PFLDA in Python to compare and contrast with SLCE. Apart from comparing classification results, we also check the sensitivity of the sparsity parameter of these two linear models. For benchmarking with ANN-based methods, we picked the published results from the article \cite{singh2020fsnet}, except for Stochastic Gates \citep{yamada2020feature}, which we ran ourselves using the authors' code from GitHub. We followed the same experimental methodology described in \citep{singh2020fsnet} for an apples-to-apples comparison. This approach permitted a direct comparison with FsNet and Supervised CAE using the authors' best results. The experiment follows this workflow: \begin{itemize} \item Split each data set into training and test partitions using a 50:50 ratio. \item Run SLCE on the training set to extract the top $K\in\{10,50\}$ features. \item Using the top $K$ features, train a one-hidden-layer ANN classifier with $500$ ReLU units to predict the test samples. \item Repeat the classification 20 times and report the average accuracy. \end{itemize} \subsection{Result: SLCE vs PFLDA} Table \ref{table:exp1_results} compares classification accuracies using the top features of PFLDA and SLCE.
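The workflow above can be sketched as follows. As a simplification, a nearest-centroid classifier stands in for the one-hidden-layer ANN with 500 ReLU units, the SLCE weight vector \texttt{b} is taken as given rather than refit per split, and the function name is ours:

```python
import numpy as np

def benchmark(X, y, b, K=10, repeats=20, seed=0):
    """50:50 split, keep the top-K features by |b_jj|, classify, average."""
    rng = np.random.default_rng(seed)
    top = np.argsort(-np.abs(b))[:K]                # top-K feature indices
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        tr, te = idx[: len(y) // 2], idx[len(y) // 2:]
        Xtr, Xte = X[np.ix_(top, tr)], X[np.ix_(top, te)]
        # nearest-centroid stand-in for the paper's 500-unit ReLU classifier
        cents = {c: Xtr[:, y[tr] == c].mean(axis=1) for c in np.unique(y[tr])}
        pred = np.array([min(cents,
                             key=lambda c: np.linalg.norm(Xte[:, i] - cents[c]))
                         for i in range(Xte.shape[1])])
        accs.append(np.mean(pred == y[te]))
    return float(np.mean(accs))
```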
We should note that in our experiments we found PFLDA to be very sensitive to the choice of $\lambda$, which controls sparsity. For example, setting $\lambda=0.021$ works for the model on the ALLAML data, but a value of 0.0215 behaves spuriously, returning no non-zero features; see Figure \ref{fig:SLDA_Sparsity_ALLAML}. We also found that the PFLDA model's performance depends significantly on the data dimension, becoming increasingly sensitive as the ambient dimension grows. For example, we observed that the PFLDA model abruptly sets the weights of all the features to zero once $\lambda$ exceeds a threshold, as shown in Figure \ref{fig:SLDA_Sparsity}; in other words, no features are selected. \begin{table}[ht!] \centering \begin{tabular} {|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multicolumn{2}{c|} {Top 10 features} & \multicolumn{2}{c|} {Top 50 features} & \multirow{1}{*}{All Features} \\ \cline{2-5} & \multicolumn{1}{c|} {PFLDA} & \multicolumn{1}{c|} {SLCE} & \multicolumn{1}{c|} {PFLDA} & \multicolumn{1}{c|} {SLCE} & \multirow{1}{*}{ANN} \\ \hline ALLAML & $92.7$ & $\textbf{94.1}$ & $95.8$ & $\textbf{96.1}$ & $89.9$\\ \hline Prostate\_GE & $90.7$ & $\textbf{91.2}$ & $88.3$ & $\textbf{90.7}$ & $75.9$\\ \hline GLIOMA & $\textbf{59.9}$ & $58.8$ & $66.2$ & $\textbf{69.2}$ & $70.3$\\ \hline SMK\_CAN & $66.6$ & $\textbf{67.3}$ & $68.6$ & $\textbf{70.9}$ & $65.7$\\ \hline GLI\_85 & $\textbf{85.8}$ & $84.9$ & $85.3$ & $\textbf{85.5}$ & $79.5$\\ \hline CLL\_SUB & $52.5$ & $\textbf{62.9}$ & $52.3$ & $\textbf{75.3}$ & $56.9$\\ \hline \end{tabular} \caption{Comparison of the mean classification accuracy of PFLDA and SLCE features on six real-world high-dimensional biological data sets. The prediction rates are averaged over twenty runs on the test set. The result in the last column is computed using all the features with a one-hidden-layer neural network with 500 ReLU units.
} \label{table:exp1_results} \end{table} \begin{figure} \caption{Sparsity analysis of PFLDA on the ALLAML data using different values of $\lambda$.} \label{fig:SLDA_Sparsity_ALLAML} \end{figure} \begin{figure} \caption{Sparsity analysis of PFLDA on the Prostate\_GE data using different values of $\lambda$.} \label{fig:SLDA_Sparsity} \end{figure} Because of this limitation of PFLDA, finding a suitable $\lambda$ took significant time. Returning to the comparison in Table \ref{table:exp1_results}, we see that the SLCE features generally produce better classification performance than the PFLDA features. The top 50 SLCE features predict the test samples more accurately than the top 50 PFLDA features in all cases, and in the ``Top 10 features'' group, SLCE performs better in four out of six cases. These results establish the performance advantage of our proposed method over PFLDA. The last column of the table shows the accuracy with all the features using a single-hidden-layer neural network classifier with 500 ReLU units. Notice that classification using the top 50 SLCE features is better than using all the features, except on GLIOMA. \subsection{Result: SLCE vs ANN-Based Models} Table \ref{table:exp2_results} presents the classification performance using the top features of Feature Selection Network (FsNet), Supervised Concrete Autoencoder (SCAE), Stochastic Gates (STG), and Sparse Linear Centroid-Encoder (SLCE). Note that, apart from SLCE, the other methods are neural network-based nonlinear models. We also present the classification accuracy for each data set using all the features under the column ``All Fea.'' Generally, feature selection helps to improve classification accuracy. In the category of top ten features, FsNet produces the best result in three cases, followed by SLCE, with the best accuracy in two cases. SCAE surpasses the other two models on the GLI\_85 data. Notice that Stochastic Gates does not perform competitively in this category.
In the top 50 features category, on the other hand, STG is the top-performing model in one case, while SLCE outperforms the other models on the remaining five data sets. Notice that the performance of FsNet does not improve with more features (e.g., GLIOMA); in fact, its accuracy drops on CLL\_SUB, GLI\_85, and SMK\_CAN. We observe the same trend for SCAE on those three data sets. In contrast, SLCE and STG benefit from more features. Considering all twelve classification tasks, our proposed method performed best in seven cases, followed by FsNet (three best results). \begin{table}[h!] \centering \begin{tabular} {|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multicolumn{4}{c|} {Top 10 features} & \multicolumn{4}{c|} {Top 50 features} & \multirow{1}{*}{All} \\ \cline{2-9} & \multicolumn{1}{c|} {FsNet} & \multicolumn{1}{c|} {SCAE} & \multicolumn{1}{c|} {STG} & \multicolumn{1}{c|} {SLCE} & \multicolumn{1}{c|} {FsNet} & \multicolumn{1}{c|} {SCAE} & \multicolumn{1}{c|} {STG} & \multicolumn{1}{c|} {SLCE} & \multirow{1}{*}{Fea.} \\ \hline ALLAML & $91.1$ & $83.3$ & $81.0$ & $\textbf{94.1}$ & $92.2$ & $93.6$ & $88.5$ & $\textbf{96.1}$ & $89.9$\\ \hline Prostate\_GE & $87.1$ & $83.5$ & $82.3$ & $\textbf{91.2}$ & $87.8$ & $88.4$ & $85.0$ & $\textbf{90.7}$ & $75.9$\\ \hline GLIOMA & $\textbf{62.4}$ & $58.4$ & $62.0$ & $58.8$ & $62.4$ & $60.4$ & $\textbf{70.4}$ & $69.2$ & $70.3$\\ \hline SMK\_CAN & $\textbf{69.5}$ & $68.0$ & $65.2$ & $63.1$ & $64.1$ & $66.7$ & $68.0$ & $\textbf{70.9}$ & $65.7$\\ \hline GLI\_85 & $87.4$ & $\textbf{88.4}$ & $72.2$ & $84.9$ & $79.5$ & $82.2$ & $81.0$ & $\textbf{85.5}$ & $79.5$\\ \hline CLL\_SUB & $\textbf{64.0}$ & $57.5$ & $54.4$ & $62.9$ & $58.2$ & $55.6$ & $63.2$ & $\textbf{75.3}$ & $56.9$\\ \hline \end{tabular} \caption{Comparison of the mean classification accuracy of FsNet, SCAE, STG, and SLCE features on six real-world high-dimensional biological data sets. The prediction rates are averaged over twenty runs on the test set.
Numbers for FsNet and SCAE are reported from \citep{singh2020fsnet}. The result in the last column is computed using all the features with a one-hidden-layer neural network with 500 ReLU units.} \label{table:exp2_results} \end{table} We analyzed STG's ability to promote feature sparsity using ALLAML and GLIOMA. We fit the model on a training partition for three values of $\lambda$: 0.01, 0.1, and 1.0. After that, we plotted the probabilities of the gates in descending order. Figure \ref{fig:STG_Sparsity} presents the sparsity plot. Notice that the model failed to promote feature sparsity on these two data sets. We observed a similar pattern on the other data sets as well. \begin{figure} \caption{Sparsity analysis of Stochastic Gates on the ALLAML and GLIOMA data using three values of $\lambda$.} \label{fig:STG_Sparsity} \end{figure} \section{Discussion, Conclusion and Limitations} \label{dis_cons} In this work, we proposed a novel feature selection model, Sparse Linear Centroid-Encoder (SLCE), and presented a convex training approach for it in two steps. Being linear, the model is well suited for low-sample-size, high-dimensional biological data sets. It does not require an exhaustive search over network architectures and other training-related hyperparameters, as is typical for neural network-based models. The convex training approach guarantees that each step has a global solution, which is advantageous compared to nonconvex methods. Unlike other linear and non-linear methods, e.g., Lasso and Sparse SVM, the model does not use the response variable directly in data fitting; instead, SLCE finds a linear transformation that reconstructs a sample as its class centroid, followed by sparse training using the $\ell_1$-norm. These innovations make our model unique. SLCE also enjoys the benefit of being a single model for multiclass data: the feature selection mechanism using a diagonal matrix works globally over all the classes of the data set.
This aspect of SLCE makes it attractive over other linear techniques, e.g., Lasso and SSVM, where a binary feature selection method is extended to the multiclass setting over one-against-one (OAO) or one-against-all (OAA) class pairs. Such models suffer a combinatorial explosion as the number of classes increases. Our model also has a significant benefit over Penalized Fisher's Linear Discriminant Analysis for multiclass problems. PFLDA induces sparsity separately in each discriminant direction, making it hard to find a single set of features shared by all the FLDA directions. In contrast, our proposed method searches for a global set of features in the feature selection stage. Unlike PFLDA, our model is not overly sensitive to the sparsity-inducing parameter $\lambda$. We showed that the model successfully enforces sparsity on numerous biological data sets using the $\ell_1$-norm, and the sparsity parameter controls the number of selected features as expected. The visualization experiment with the Pancan data shows how the model selects discriminative features that improve the PCA embedding. We also found that the variables chosen over multiple runs agree closely in both feature count and feature overlap; the visualization with the Venn diagram and the Jaccard index supports this claim. The extensive benchmarking with six biological data sets and five methods provides evidence that the features of SLCE often produce better generalization performance than other state-of-the-art models. Apart from the linear method PFLDA, we compared SLCE with three ANN-based state-of-the-art feature selection techniques and found that it produced the best result in seven of the twelve classification experiments. Unlike Stochastic Gates, our model consistently sparsifies the input features on biological data sets. The strong generalization performance, coupled with the ability to sparsify input features, establishes the value of our model as a linear feature detector.
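The centroid-reconstruction idea underlying the first, non-sparse training step can be illustrated with a minimal least-squares sketch: solve for a linear map that sends every sample to its class centroid. The helper names, the tiny ridge term, and the toy data are ours, and the $\ell_1$-penalized diagonal feature-selection step is omitted; this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def centroid_targets(X, y):
    """Matrix whose i-th row is the centroid of sample i's class."""
    C = np.empty_like(X)
    for c in np.unique(y):
        C[y == c] = X[y == c].mean(axis=0)
    return C

def fit_centroid_map(X, y, ridge=1e-6):
    """Least-squares solve for A in X A ~= C (tiny ridge for stability)."""
    C = centroid_targets(X, y)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ C)

# Toy data: 3 classes, 6 features, only feature 0 carries the class signal.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 6))
X[:, 0] += 4.0 * y                     # shift feature 0 by class
A = fit_centroid_map(X, y)
print(float(np.linalg.norm(X @ A - centroid_targets(X, y))))
```

By construction, the fitted map reconstructs the centroid targets at least as well as leaving the samples unchanged would.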
SLCE, in its current form, maps a sample to its class centroid while applying sparsity. The selected features may not be discriminatory if two class centroids are close in the ambient space, so adopting a cost that also encourages class separation may be beneficial. Our model may not be the right choice in cases where class centroids make little sense, e.g., natural images. The current scope of the work does not allow us to investigate other optimization techniques, e.g., proximal gradient descent and the trimmed Lasso, which we plan to explore in the future. \end{document}
\begin{document} \title{Lozenge Tiling Function Ratios for Hexagons with Dents on Two Sides} \thispagestyle{empty} \begin{abstract} We give a formula for the number of lozenge tilings of a hexagon on the triangular lattice with unit triangles removed from arbitrary positions along two non-adjacent, non-opposite sides. Our formula implies that for certain families of such regions, the ratios of their numbers of tilings are given by simple product formulas. \end{abstract} \section{Introduction} The \emph{triangular lattice} is a tiling of the plane by unit equilateral triangles, and a \emph{region} on the triangular lattice is a connected union of finitely many of those triangles\footnote{When we say `triangle' in this paper, we always mean a unit triangle on the lattice.}. We say triangles that share an edge are \emph{adjacent}. A \emph{lozenge} on the triangular lattice is the union of two adjacent triangles, and a \emph{lozenge tiling}, or simply \emph{tiling}, of a region is a set of non-overlapping lozenges within that region which together cover the region. A region with at least one tiling is called \emph{tileable}. A hexagonal region is said to be \emph{semiregular} if its opposite sides have the same length. We identify congruent regions and let $H_{a,b,c}$ denote the semiregular hexagon(al region) with pairs of opposite sides of lengths $a$, $b$, and $c$. \begin{figure} \caption{(a) $H_{3,4,2}$} \label{fig:fig1} \end{figure} A formula for the number of lozenge tilings of a semiregular hexagon was first given by MacMahon \cite{Ma} in the context of plane partitions. A bijection between plane partitions and lozenge tilings was later given by David and Tomei \cite{DT89}. Letting $\M(R)$ denote the number of lozenge tilings of a region $R$, MacMahon's formula can be expressed as \begin{equation}\M(H_{a,b,c}) = \prod_{i=1}^c \frac{(a+i)_b}{(i)_b}=: P(a,b,c)\label{eq:MacMahon}\end{equation} where $(x)_y$ is the Pochhammer symbol, $(x)_y = \prod_{i=0}^{y-1}(x+i)$.
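For example, on the smallest regular hexagon, the formula evaluates as
$$\M(H_{2,2,2}) = P(2,2,2) = \frac{(3)_2}{(1)_2}\cdot\frac{(4)_2}{(2)_2} = \frac{3\cdot 4}{1\cdot 2}\cdot\frac{4\cdot 5}{2\cdot 3} = 6\cdot\frac{10}{3} = 20,$$
the familiar count of plane partitions fitting in a $2\times 2\times 2$ box.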
When $R_{\thickbar x}$ denotes a family of regions defined by parameters $\thickbar x$ (as with $\{H_{a,b,c}: a,b,c \in \mathbb N\}$), we say that $\M(R_{\thickbar x})$ is the \emph{tiling function} of $R_{\thickbar x}$ with parameters $\thickbar x$. So $P(a,b,c)$ is the tiling function for semiregular hexagons. Hexagons in the triangular lattice need not be semiregular, but the lengths of opposite sides necessarily differ by the same amount, say $t$. When $t>0$ we say the longer side of each opposite pair is a \emph{long} side and the rest are \emph{short} sides. We can therefore define hexagonal regions by the lengths of their short sides $a,b,c$ and the difference $t$, using the notation $H_{a,b,c,t}$. The sides of the hexagon alternate around the perimeter between short and long, so without loss of generality we assume that the side lengths of the hexagon appear in the clockwise order $a, b+t, c, a+t, b, c+t$, starting from the north side. The hexagon in Figure \ref{fig:fig2} (a) is $H_{4,3,2,4}$. Semiregular hexagons have $t=0$, so $H_{a,b,c} = H_{a,b,c,0}.$ The region $H_{4,3,2,4}$ has no lozenge tilings at all. A lozenge covers exactly one up-pointing triangle and one down-pointing triangle, so a region is tileable only if it contains the same number of triangles of each orientation: we call such a region \emph{balanced}. The hexagon $H_{a,b,c,t}$ contains an excess of $t$ triangles of one orientation, so semiregular hexagons are the only hexagons which are balanced. The orientation in excess is the one along the long sides of the hexagon (by our conventions, these are up-pointing triangles). In this paper we study regions we call \emph{dented hexagons}, obtained by removing unit triangles along\footnote{\label{note1}When we say a unit triangle is \emph{along} a side of the hexagon, we mean it shares an edge with the border of that side.
When we say a triangle is removed \emph{from} a side, we mean that the removed triangle is along that side.} two long sides of a hexagon. We suppose $m$ triangles are removed from the northeast side and $n$ triangles are removed from the northwest side. For balanced regions, this implies $t=n+m$. Figure \ref{fig:fig2} (b) gives one example of such a region. The black triangles have been removed from the region; we call these \emph{dents}. The grey areas are part of the dented hexagon but are covered by \emph{forced lozenges}, which are described at the end of Section \ref{background}. \begin{figure} \caption{(a) $H_{4,3,2,4}$} \label{fig:fig2} \end{figure} Given non-negative integers $a,b,c,t$, suppose $\vec u = (u_i)_{i=1}^m$ and $\vec v = (v_j)_{j=1}^n$ are vectors of integers with $1 \leq u_i < u_{i+1} \leq b+t$ for $1 \leq i < m$, and $1 \leq v_j < v_{j+1} \leq c+t$ for $1 \leq j < n$, such that $a > 0$, $u_1 > 1$, or $v_1 > 1$. We use $H_{a,b,c,t,\vec u, \vec v}$ to denote the dented hexagon formed by removing dents from $H_{a,b,c,t}$ at the locations indexed by $\vec u$ and $\vec v$. Specifically, we remove the $u_i$th unit triangle from the northeast side of $H_{a,b,c,t}$ for each $i$ and the $v_j$th unit triangle from the northwest side for each $j$. The indexing in both cases starts at the northmost triangle along each side. It is straightforward to check that $H_{a,b,c,t,\vec u, \vec v}$ with parameters as described is well-defined and unique. Other natural parameters of dented hexagons we will use are $\underline{u_i}:=b+n+i-u_i$ and $\underline{v_j}:=c+m+j-v_j$. The parameter $\underline{u_i}$ counts the number of triangles along the northeast side which are below the $i$th dent along that side but have \emph{not} been removed from the region: these are the up-pointing triangles immediately southeast of that dent, as shown in Figure \ref{fig:fig2} (b).
The parameter $\underline{v_j}$ analogously counts up-pointing triangles directly southwest of the dent indexed by $v_j$. The main result of this paper is a remarkably simple product formula for the ratio between the tiling functions of $H_{a,b,c,t,\vec u, \vec v}$ and $H_{0,b,c,t,\vec u, \vec v}$ so long as the latter region is well defined. It follows that the first family of regions has a `nice' tiling function when the latter family does, and we identify some instances of this. This is not the first paper to identify families of regions with nice ratios of tiling functions; similar results were recently found by Lai, Ciucu, Rohatgi, and Byun \cite{By19}, \cite{Ci19a}, \cite{Ci19b}, \cite{La19a}, \cite{La19b}, \cite{La19c}. One specific subfamily of dented hexagons was studied by Lai, who found a lovely tiling function \cite{La17}. A general formula for the tiling function of a hexagonal region with dents in arbitrary positions along its border has been discovered by Ciucu and Fischer \cite{CF16}, but that formula is given only as the determinant of a matrix. \section{Further Background} \label{background} A region on the triangular lattice can be identified with a graph, so that each unit triangle corresponds to a vertex and adjacent triangles correspond to adjacent vertices. This graph is bipartite, with its vertex bipartition defined by the orientations of the triangles, since triangles are only adjacent to triangles of the opposite orientation. A lozenge tiling of a region naturally partitions the triangles of that region into adjacent pairs, and so a lozenge tiling can be identified with a perfect matching on the graph of that region. We will therefore borrow two graph theoretic techniques. The following is a direct application of the Graph Splitting Lemma, which appears as Lemma 3.6 in a 2014 paper by Lai \cite{La14}, and is implicit in earlier work by Ciucu \cite{Ci97}. 
\begin{lemma}[Region Splitting Lemma]\thlabel{splitting} Let $R$ be a balanced region of the triangular lattice with a partition into regions $P$ and $Q$ such that the unit triangles in $P$ which are adjacent to unit triangles in $Q$ are all of the same orientation, and that this orientation is not in excess within $P$. Then $\M(R) = \M(P)\cdot \M(Q)$, and in particular $\M(R)= 0$ if either $P$ or $Q$ is not balanced. \qed \end{lemma} We will also use Kuo's graphical condensation method. The following is a special case of his Theorem 5.4 from \cite{Ku04}, expressed in the language of this paper. \begin{lemma}\thlabel{kuo} Let $R$ be a simply connected region of the triangular lattice, and let $\alpha, \beta, \gamma, \delta$ be unit triangles in $R$ of the same orientation which touch the boundary of $R$ at a corner or edge, so that the places they touch the boundary appear in the cyclic order $\alpha, \beta, \gamma, \delta$. Then \begin{align*} & \M(R - \alpha-\beta)\cdot \M(R - \gamma-\delta) \\ =& \M(R -\alpha-\delta) \cdot \M(R - \beta-\gamma) - \M(R-\alpha-\gamma) \cdot \M(R-\beta-\delta) \numberthis \label{eqn} \end{align*} \qed \end{lemma} The lozenge tilings of a region on the triangular lattice often have a natural interpretation as families of nonintersecting lattice paths with fixed start and end points; this relationship is shown in Figure \ref{fig:lattice}. We will make use of the following classical result, due originally to Lindstr{\"o}m \cite{Li73} and independently to Gessel and Viennot \cite{Ge89}. The formulation we use could be considered a restatement of Lemma 14 from \cite{Ci01}. \begin{proposition}\thlabel{GV} Let $S=\{(s_i,t_i) : i \in [n]\}$ and $E=\{(p_j,q_j) : j \in [n]\}$ be sets of coordinates on the square lattice $\mathbb Z^2$, such that every Up-Right lattice path from $(s_i,t_i)$ to $(p_j,q_j)$ intersects any Up-Right lattice path from $(s_j,t_j)$ to $(p_i,q_i)$ for $i \neq j$.
The number of families of $n$ non-intersecting Up-Right lattice paths which each start at a point in $S$ and end at a point in $E$ is given by $\det((a_{i,j})_{i,j=1}^n)$, where $a_{i,j} = {p_j+q_j-s_i-t_i \choose q_j-t_i}$ is the number of Up-Right lattice paths from $(s_i,t_i)$ to $(p_j,q_j)$. \end{proposition} When a lozenge is in all possible tilings of a region, we say that lozenge is \emph{forced}. Often, a region contains a unit triangle which is adjacent to only one other triangle; the lozenge covering those two triangles is forced. One can remove the triangles covered by a forced lozenge to produce a new region, the tilings of which are in a natural bijection with the tilings of the original region; the bijection is defined by removing or replacing the forced lozenge. The new region may itself have a triangle adjacent to only one other triangle, which means it too is covered by a lozenge that is forced in the new region, and therefore in the original region. Removing triangles covered by forced lozenges may thus yield a sequence of regions that all have the same number of tilings. Figure \ref{fig:fig2} (b) serves as an example; the forced lozenges are shown and colored grey. \section{Main Results} Our first remark about dented hexagons is a characterization of when they have any tilings at all. \begin{proposition}\thlabel{existence} Let a balanced dented hexagon $H$ be given. Let $L_N$ be the $N$th horizontal lattice line south of $H$'s northern side\footnote{$L_N$ is then $\frac{\sqrt3}{2}N$ units south of $H$'s northern side.}, and let $\mu_N$ be the number of dents north of $L_N$. Then $H$ has a tiling if and only if $\mu_N \leq N$ for all $N \in \mathbb N$. \end{proposition} \begin{proof} Let $\vec u$ and $\vec v$ and a balanced dented hexagon $H=H_{a,b,c,t,\vec u, \vec v}$ be given. Suppose for some $N$ that $\mu_N > N$; assume that $N$ is minimal such that this is the case. It is straightforward to check that the region above $L_N$ is unbalanced.
By \thref{splitting}, applied to the regions above and below $L_N$, the dented hexagon has no tilings. If $\mu_N \leq N$ for each $N$, we can exhibit a partition of the overall region into tileable regions, proving the existence of a tiling for the entire region. We induct on the number of dents. If the region has no dents, it is a balanced semiregular hexagon and has at least one tiling. Suppose \thref{existence} holds for dented hexagons with fewer than $t$ dents. Suppose WLOG that $u_m \geq v_n$; in other words, the southmost dent is on the eastern side of the region. We sketch the semiregular hexagon $H_{1,\underline{u_m},c}$ directly underneath the southmost dent, so that its northern side is the border of that dent. Figure \ref{fig:fig7} (a) depicts this hexagon (dark). We also sketch a parallelogram consisting of all triangles directly west of the southwest edge of the dark hexagon. This parallelogram is also depicted (light) in Figure \ref{fig:fig7} (a). Both the hexagon and the parallelogram are tileable, and the rest of the region is tileable by the induction hypothesis; so the entire region is tileable. \begin{figure} \caption{$H_{5,5,3,4,(5,6),(2,5)}$} \label{fig:fig7} \end{figure} The fact that $\mu_{t-1} \leq t-1$ implies that $u_m \geq t$ and $\underline{u_m} \leq b$. This guarantees the northeast and southwest sides of the dark hexagon are of length $\leq b$, so that the light region west of the dark hexagon is actually a parallelogram. Had it been the case that $v_n > u_m$, we would simply flip the picture. \end{proof} Our application of the inductive hypothesis in the proof implies that one could continue subdividing the entire region into parallelograms and semiregular hexagons. Such a subdivision is depicted in Figure \ref{fig:fig7} (b). \thref{existence} shows that untileable dented hexagons are characterized by having too many dents too far north, which yields the following consequence.
\begin{corollary}\thlabel{existcor} Let $H=H_{a,b,c,n+m,(u_i)_1^m, (v_j)_1^n}$ and $H'=H_{a',b',c',n+m,(u'_i)_1^m, (v'_j)_1^n}$ be dented hexagons with $u_i \leq u_i'$ and $v_j \leq v_j'$ for each $i, j$. If $H$ has a tiling, so does $H'$. \qed \end{corollary} Our main result is that families of dented hexagons with fixed parameters $b,c,t,\vec u, \vec v$ have a tiling function given by the following rational function in $a$ (in fact, a polynomial in $a$): \begin{theorem}\thlabel{main} Where $H_{a,b,c,t,(u_i)_1^m, (v_j)_1^n}$ is a tileable dented hexagon (implying $t=n+m$), \begin{align*} {\M(H_{a,b,c,t,\vec u, \vec v})} &= {\M(H_{0,b,c,t,\vec u, \vec v})} \prod_{i=1}^m(u_i)_{\underline{u_i}} \prod_{j=1}^n(v_j)_{\underline{v_j}}\\ & \times \frac{P(a,b+n, c+m)}{\prod_{i=1}^m(a+u_i)_{\underline{u_i}}\prod_{j=1}^n(a+v_j)_{\underline{v_j}}}. \end{align*} \end{theorem} When $H_{0,b,c,t,\vec u, \vec v}$ has a nice tiling function, \thref{main} shows that $H_{a,b,c,t, \vec u, \vec v}$ also has a nice tiling function. In particular, this explains a result found by Lai (Theorem 3.1 from \cite{La17}, with $q=1$), that when the dents along each side of the region are adjacent and ${v_n} = {u_m}$, the tiling function for the region is relatively simple. We discuss this and related results in Section \ref{corollaries}. \section{Groundwork} \label{groundwork} In this section we lay the technical foundation for the arguments that follow. \begin{lemma} If $H_{0,b,c,t,\vec u, \vec v}$ is a tileable dented hexagon, then $\M(H_{a,b,c,t,\vec u, \vec v})$ is a polynomial in $a$ when fixing $b,c,t$ and the vectors $\vec u$ and $\vec v$. \end{lemma} \begin{proof} The tilings of $H_{a,b,c,t,(u_i)_1^m, (v_j)_1^n}$ are in bijection with families of $(b+n)$ non-intersecting lattice paths which start along the southwest boundaries of the dented hexagon and end at the northeast boundaries. This bijection is explained briefly in Figure \ref{fig:lattice}.
Using this type of bijection is a standard technique when enumerating lozenge tilings, and similar examples of the technique, explained in greater detail, can be found in \cite{Ci01}. \begin{figure} \caption{Tilings of $H_{3,4,2,5,(3,6),(2,5,6)}$} \label{fig:lattice} \end{figure} The ``start points'' along the southwest side of the region can be interpreted as having coordinates $\{(-i,i): i \in [b]\}$, and the start points from the dents then have coordinates $\{(-b,b+c+t+1-{v_i}): i\in [n]\}$. The ``end points'' then have coordinates $\{(a-b-1+j,b+c+t+1-j): j \in [b+t] - \{u_i: i \in [m]\}\}$. It is clear these satisfy the path-intersection condition of \thref{GV} when each set is labeled with indices increasing from north to south. Applying \thref{GV}, the number of families of lattice paths with these start and end points is given by the determinant of a matrix, the entries of which are of the form ${a-1+v_i \choose v_i-j}$ and ${a+c+t\choose b+c+t+1-j-i}$. Since each entry in the matrix is a polynomial in $a$, so is the determinant. \end{proof} The arguments that follow are largely about the proportionality of polynomials. For $p, q$ polynomials in $a$, we will use the non-standard notation $$p(a) \xequiv{a} q(a)$$ to mean there is a nonzero constant $c$ with $c\cdot p(a)=q(a)$. We will sometimes use this notation where $p,q$ are functions of many parameters which are polynomials in $a$ when all other parameters are fixed, in which case $c$ may depend on parameters other than $a$. It is easy to check that $\xequiv{a}$ is an equivalence relation and satisfies the following properties, where $p_1,p_2,$ and $q$ are polynomials in $a$, and $q$ is not identically 0.
\begin{align} \brac{p_1(a)} \xequiv{a} \brac{p_2(a)} & \Rightarrow \brac{p_1(a)+p_2(a)} \xequiv{a} \brac{p_1(a)} \xequiv{a} \brac{p_2(a)} \label{eq:sum}\\ \brac{p_1(a)} \xequiv{a} \brac{p_2(a)} & \iff \brac{p_1(a)q(a)} \xequiv{a} \brac{p_2(a) q(a)} \label{eq:substitute} \end{align} We will make use of the following technical lemma. \begin{lemma}\thlabel{sub} For $d \in \mathbb Z$ and $y \in \mathbb Z^+$ and $z \in \mathbb N$, \begin{equation} \label{eq:sub} \brac{P(a+d,y-1, z+1)} \xequiv{a} \brac{P(a+d,y,z) \frac{(a+d+z+1)_{y-1}}{(a+d+y)_z}}. \end{equation} \end{lemma} \begin{proof} It is straightforward to show from the definition of $P(x,y,z)$ that $\frac{P(x,y,z+1)}{P(x,y,z)} = \frac{(x+z+1)_y}{(z+1)_y}$; in turn it is straightforward to show \begin{equation}\label{eq:subgen} P(x,y-1,z+1) = P(x,y,z)\frac{(y)_{z}(x+z+1)_{y-1}}{(x+y)_{z}(z+1)_{y-1}}. \end{equation} Equation \eqref{eq:sub} follows from equation \eqref{eq:subgen} with $x=a+d$. \end{proof} \section{Proof of \thref{main}} For $b,c \in \mathbb N$ and $\vec u = (u_i)_1^m, \vec v = (v_j)_1^n$ vectors over $\mathbb N$ such that there exists a well-defined region $H_{0,b,c,m+n, \vec u, \vec v}$, define $$f_{b,c,\vec u, \vec v}(a) := \frac{P(a,b+n, c+m)}{\prod_{i=1}^m(a+u_i)_{\underline{u_i}}\prod_{j=1}^n(a+v_j)_{\underline{v_j}}}.$$ The main method of our proof is to show that $\M(H_{a,b,c,m+n,\vec u, \vec v})$, interpreted as a function of $a$, is proportional to $f$. We will do this by showing that $f$ is a polynomial in $a$ which can be defined recursively and obeys the same recursion as the tiling function. Observe that the equation in \thref{main} can be expressed $$\brac{\M(H_{a,b,c,m+n,\vec u, \vec v})} \xequiv{a} \brac{f_{b,c,\vec u, \vec v}(a)}.$$ It will sometimes be convenient for us to assume that $u_1 >1$ or $v_1>1$. A dented hexagon with $u_1=v_1=1$ has no tilings by \thref{existence}, in which case \thref{main} holds vacuously.
In the case that $u_1 = 1$, and $v_1> 1$ or $n=0$, the top row of triangles of the region $H_{a,b,c,t,\vec u, \vec v}$ is covered by forced lozenges, which may be removed from the region. Figure \ref{fig:fig4} (a) depicts an example of this. Removing the forced lozenges gives the region $H_{a+1,b,c,t-1,(u_{i}-1)_{i=2}^m, (v_j -1)_{j=1}^n}$, which thus has the same tiling function as $H_{a,b,c,t,\vec u, \vec v}$. We would like to show that our definition of $f_{b,c,\vec u, \vec v}$ respects this and the analogous case with $v_1=1.$ \begin{figure} \caption{$H_{4,3,2,3,(1,6),(3,4)}$} \label{fig:fig4} \end{figure} If $\underline{u_m}=0$ (or $\underline{v_n}=0$), then the southeast (or southwest) side of the region is covered by forced lozenges. If $\underline{u_m}=0$, then $H_{a,b,c,t,\vec u, \vec v}$ thus has the same tiling function as $H_{a,b,c+1,t-1,(u_i)_{i=1}^{m-1}, \vec v}$, which is the region obtained by removing the forced lozenges. This is depicted in Figure \ref{fig:fig4} (b). We wish to show that our definition of $f_{b,c, \vec u, \vec v}$ respects this fact, and the analogous fact when $\underline{v_n}=0$. \begin{lemma}\thlabel{loptop} If $H_{a,b, c, t,(u_i)_1^m, (v_j)_1^n}$ is tileable, then \begin{enumerate}[label=\alph*.] \item If $u_1 = 1$, and $v_1>1$ or $n=0$, then $\brac{f_{b,c,\vec u, \vec v}(a)} \xequiv{a} \brac{f_{b,c,(u_i-1)_2^m, (v_j-1)_1^n}(a+1)}.$ \item If $v_1=1$, and $u_1>1$ or $m=0$, then $\brac{f_{b,c,\vec u, \vec v}(a)} \xequiv{a} \brac{f_{b,c,(u_i-1)_1^m, (v_j-1)_2^n}(a+1)}.$ \item If $\underline{u_m} = 0$ then $f_{b,c,\vec u, \vec v}(a) = f_{b,c+1,(u_i)_{i=1}^{m-1}, (v_j)_1^n}(a).$ \item If $\underline{v_n} = 0$ then $f_{b,c,\vec u, \vec v}(a) = f_{b+1,c,(u_i)_{i=1}^{m}, (v_j)_1^{n-1}}(a).$ \end{enumerate} \end{lemma} \begin{proof} These identities are straightforward to check when written explicitly. \end{proof} Let $\vec \emptyset$ denote the empty vector. We will now verify that \thref{main} holds when $\vec u = \vec \emptyset$.
The case where $\vec v = \vec \emptyset$ follows by symmetry. \begin{lemma}\thlabel{CLP} The dented hexagon $H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}$ is tileable and \begin{equation}\label{eq:eqCLP} \brac{\M(H_{a,b,c,n,\vec \emptyset, (v_j)_1^n})} \xequiv{a} f_{b,c, \vec \emptyset, \vec v}(a) := \brac{ \frac{P(a,b+n,c)}{\prod_{j=1}^n (a+v_j)_{\underline{v_j}}}}. \end{equation} \end{lemma} This is equivalent to a result by Cohn, Larsen, and Propp \cite{CLP}, but the equivalence is not obvious, so we will prove this result directly using Kuo condensation. \begin{proof} Dented hexagons with dents on just one side are tileable by \thref{existence}. We will induct on $n+c$, with base cases at $n=0$ and $c=0$. Note that if $c=0$ then the region has a unique tiling\footnote{The entire northwest side consists of dents, and the unique tiling extends to the unique tiling of the $a \times b$ parallelogram.} and $f_{b,0, \vec \emptyset, \vec v}(a):= \frac{P(a,b+n,0)}{\prod_{j=1}^n(a+v_j)_{0}} = 1.$ In the case $n=0$, equation \eqref{eq:eqCLP} reduces to MacMahon's formula \eqref{eq:MacMahon}. For our inductive hypothesis, assume that \eqref{eq:eqCLP} holds for dented hexagons $H_{a,b',c',n',\vec \emptyset, (v_j')_1^{n'}}$ with $c'+n' < c+n$. Consider dented hexagons $H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}$. In the case $v_1=1$, $H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}$ has forced lozenges along its northern side, so it has the same tiling function as $H_{a+1,b,c,n-1,\vec \emptyset, (v_j-1)_2^{n}}$, so \begin{align*} \brac{\M\left(H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}\right)}&\xequiv{a} \brac{\M\left(H_{a+1,b,c,n-1,\vec \emptyset, (v_j-1)_2^{n}}\right)}\\ (\mbox{by IH}) &\xequiv{a} \brac{f_{b,c,\vec \emptyset, (v_j-1)_2^{n}}(a+1)}\\ (\mbox{by \thref{loptop}b}) &\xequiv{a} \brac{ f_{b,c,\vec \emptyset, (v_j)_1^{n}}(a)}.
\end{align*} In the case $\underline{v_n}=0$, $H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}$ has forced lozenges along its southwest side and thus has the same tiling function as $H_{a,b+1,c,n-1,\vec \emptyset, (v_j)_1^{n-1}}$, so \begin{align*} \brac{\M\left(H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}\right)}&\xequiv{a} \brac{\M\left(H_{a,b+1,c,n-1,\vec \emptyset, (v_j)_1^{n-1}}\right)}\\ (\mbox{by IH}) &\xequiv{a} \brac{f_{b+1,c,\vec \emptyset, (v_j)_1^{n-1}}(a)}\\ (\mbox{by \thref{loptop}d}) &\xequiv{a} \brac{ f_{b,c,\vec \emptyset, (v_j)_1^{n}}(a)}. \end{align*} \begin{figure} \caption{(a) $H_{3,5,4,3,\vec \emptyset,(2,3,5)}$} \label{fig:fig8} \end{figure} We therefore assume that $n>0$, $v_1>1$, and $\underline{v_n}>0$. Regard $H_{a,b,c,n,\vec \emptyset, \vec v}$ as a subregion of the (unbalanced) region $R_a=H_{a, b, c-1, n+1, \vec \emptyset, (v_j+1)_{j=1}^{n-1}}$. Within each region $R_a$, let $\alpha$ be the unit triangle indexed by $v_n$, let $\beta$ be the northmost triangle on the northwest side, let $\gamma$ be the southmost triangle along the northeast side, and let $\delta$ be the southmost triangle along the northwest side. These placements are depicted in Figure \ref{fig:fig8}. We will apply Kuo Condensation (\thref{kuo}) to $R_a, \alpha, \beta, \gamma, \delta$. Figure \ref{fig:fig9} depicts each region referenced in the condensation formula, in some cases with forced lozenges shaded (we regard these as not being part of the region). Note they are all balanced dented hexagons. Since these regions can be regarded as dented hexagons with dents on only one side, each is tileable by \thref{existence}. The parameter $c+n$ is strictly maximal on $R_a - \alpha - \gamma$, so we may apply the inductive hypothesis to each of the other regions.
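For bookkeeping, reading the parameters off the forced-lozenge removals shown in Figure \ref{fig:fig9}, the values of the induction parameter $c+n$ for the six regions are
\begin{align*}
R_a-\alpha-\gamma &: c+n, & R_a-\beta-\delta &: (c-1)+(n-1),\\
R_a-\alpha-\delta &: (c-1)+n, & R_a-\beta-\gamma &: c+(n-1),\\
R_a-\alpha-\beta &: (c-1)+n, & R_a-\gamma-\delta &: c+(n-1),
\end{align*}
so every region other than $R_a-\alpha-\gamma$ satisfies $c'+n'<c+n$ and the inductive hypothesis applies to it.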
\begin{figure} \caption{$R_a$ with two of $\alpha, \beta, \gamma, \delta$ removed and forced lozenges shaded.} \label{fig:fig9} \end{figure} \thref{kuo} can be expressed $$M(R-\alpha-\gamma)M(R-\beta-\delta)=M(R-\alpha-\delta)M(R-\beta-\gamma)-M(R-\alpha-\beta)M(R-\gamma-\delta).$$ Removing forced lozenges as depicted in Figure \ref{fig:fig9}, we can rewrite this: \begin{equation} \begin{array}{rl} &M\left(H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}\right) M\left(H_{a+1,b+1,c-1,n-1, \vec \emptyset, (v_j-1)_1^{n-1}}\right)\\[10pt] =& M\left(H_{a,b+1,c-1,n,\vec \emptyset, (v_j)_1^n}\right) M\left(H_{a+1,b,c,n-1,\vec \emptyset, (v_j-1)_1^{n-1}}\right) \\[10pt] -& M\left(H_{a+1,b,c-1,n,\vec \emptyset, (v_j-1)_1^n}\right) M\left(H_{a,b+1,c,n-1,\vec \emptyset, (v_j)_1^{n-1}}\right) \end{array} \end{equation} Applying the inductive hypothesis to each region except $H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}$, this implies \begin{equation}\label{eq:kuobase} \begin{array}{rl} &M\left(H_{a,b,c,n,\vec \emptyset, (v_j)_1^n}\right) \displaystyle\frac{P(a+1,b+n,c-1)}{\prod_{j=1}^{n-1}(a+v_j)_{\underline{v_j}}}\\[15pt] \xequiv{a}& \displaystyle\frac{P(a,b+n+1,c-1)}{\prod_{j=1}^{n}(a+v_j)_{\underline{v_j}-1}} \cdot \displaystyle\frac{P(a+1,b+n-1,c)}{\prod_{j=1}^{n-1}(a+v_j)_{\underline{v_j}+1}}\\[15pt] -& \displaystyle\frac{P(a+1,b+n,c-1)}{\prod_{j=1}^{n}(a+v_j)_{\underline{v_j}}} \cdot \displaystyle\frac{P(a,b+n,c)}{\prod_{j=1}^{n-1}(a+v_j)_{\underline{v_j}}} \end{array} \end{equation} We shall show that when the term $M(H_{a,b,c,n, \vec \emptyset, (v_j)_1^n})$ is replaced with $f_{b,c, \vec \emptyset, (v_j)_1^n}(a)$, the products on each line of \eqref{eq:kuobase} are $\xequiv{a}$-equivalent, so that $$M(H_{a,b,c,n, \vec \emptyset, (v_j)_1^n}) \xequiv{a} f_{b,c, \vec \emptyset, (v_j)_1^n}(a)$$ by relations $\eqref{eq:sum}$ and $\eqref{eq:substitute}$. 
It is clear that the product on the first line would be equal to the product on the third line when $f_{b,c,\vec \emptyset, (v_j)_1^n}(a)$ is written out explicitly, so it remains to show the products on the last two lines are $\xequiv{a}$-equivalent. We will manipulate the terms on the second line to show this. We rewrite \begingroup \allowdisplaybreaks \begin{align} \prod_{j=1}^n (a+v_j)_{\underline{v_j}-1} &= \prod_{j=1}^n (a+v_j)_{\underline{v_j}}/(a+c)_n\label{eq:sub1}\\ \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}+1} &=(a+c+1)_{n-1}\prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}. \label{eq:sub2} \end{align} \endgroup We can therefore rewrite: \begin{align*} &\displaystyle\frac{P(a,b+n+1,c-1)}{\prod_{j=1}^{n}(a+v_j)_{\underline{v_j}-1}} \cdot \displaystyle\frac{P(a+1,b+n-1,c)}{\prod_{j=1}^{n-1}(a+v_j)_{\underline{v_j}+1}}\\[15pt] \hspace{-.5cm}(\mbox{by eqns. \eqref{eq:sub1}, \eqref{eq:sub2}}) = & \displaystyle\frac{P(a,b+n+1,c-1)}{\prod_{j=1}^n (a+v_j)_{\underline{v_j}}} \cdot \displaystyle\frac{P(a+1,b+n-1,c)}{\prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}} } \cdot \frac{(a+c)_n}{(a+c+1)_{n-1}}\\[15pt] (\mbox{by eqn. \eqref{eq:sub}})\xequiv{a}& \displaystyle\frac{P(a,b+n,c)}{\prod_{j=1}^n (a+v_j)_{\underline{v_j}}} \cdot \frac{(a+b+n+1)_{c-1}}{(a+c)_{b+n}} \\[15pt] \times & \displaystyle\frac{P(a+1,b+n,c-1)}{\prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}} \cdot \frac{(a+1+c)_{b+n-1}}{(a+b+n+1)_{c-1}} \\[15pt] \times & \frac{(a+c)_n}{(a+c+1)_{n-1}}\\[15pt] = & \displaystyle\frac{P(a+1,b+n,c-1)}{\prod_{j=1}^{n}(a+v_j)_{\underline{v_j}}} \cdot \displaystyle\frac{P(a,b+n,c)}{\prod_{j=1}^{n-1}(a+v_j)_{\underline{v_j}}}, \end{align*} which is exactly the product on the third line of equation \eqref{eq:kuobase}; in the final step the leftover Pochhammer factors cancel, since $(a+c)_{b+n}=(a+c)(a+c+1)_{b+n-1}$ and $(a+c)_n=(a+c)(a+c+1)_{n-1}$. This completes the proof.
\end{proof} Note that $H_{0,b,c,n,\vec \emptyset, (v_j)_1^n}$ has forced lozenges at its northern tip that, when removed, leave a region of the form $H_{1,b,c+1-v_1,n-1,\vec \emptyset, (v_j-v_1)_2^n}$, which is again a dented hexagon with fewer dents, all still along the northwest side. Through repeated application of \thref{CLP}, one can obtain a complete product formula for $\M(H_{a,b,c,n,\vec \emptyset, \vec v})$; in particular, when $b=c=0$, this may be regarded as an independent proof for the number of tilings of a trapezoid with an arbitrary number of dents along the long base, which Cohn, Larsen, and Propp \cite{CLP} attribute to Gelfand and Tsetlin \cite{GT}. Our main theorem can be interpreted as a generalization of that result. We are now ready to prove \thref{main}. The proof is similar to that of \thref{CLP}, and uses that lemma as a base case. \begin{proof}[Proof of \thref{main}] We shall show by induction that if $H_{a,b,c,t,\vec u, \vec v}$ is tileable then \begin{equation}\label{eq:eqmain} \brac{\M(H_{a,b,c,t,\vec u, \vec v})} \xequiv{a} \brac{ f_{b,c, \vec u, \vec v}(a)}:=\frac{P(a,b+n, c+m)}{\prod_{i=1}^m(a+u_i)_{\underline{u_i}}\prod_{j=1}^n(a+v_j)_{\underline{v_j}}}. \end{equation} We will induct on the number of dents $m+n$, using $m=0$ and $n=0$ as base cases. In these cases, equation \eqref{eq:eqmain} follows immediately from \thref{CLP}. For our inductive hypothesis, suppose equation \eqref{eq:eqmain} holds for dented hexagons with fewer than $m+n$ dents. Consider a dented hexagon with dents indexed by $(u_i)_1^m, (v_j)_1^n$.
In the case where $\underline{u_m}=0$, $H_{a,b,c,t,(u_i)_1^m,(v_j)_1^n}$ has forced lozenges along its southeast side and therefore has the same tiling function as $H_{a,b,c+1,t-1,(u_i)_1^{m-1},(v_j)_1^n}$, so \begin{align*} \brac{\M(H_{a,b,c,t,(u_i)_1^m,(v_j)_1^n})} &\xequiv{a} \brac{\M(H_{a,b,c+1,t-1,(u_i)_1^{m-1},(v_j)_1^n})}\\ (\mbox{IH}) & \xequiv{a} \brac{f_{b,c+1,(u_i)_1^{m-1},(v_j)_1^n}(a)}\\ (\mbox{\thref{loptop}c}) & \xequiv{a} \brac{f_{b,c,(u_i)_1^{m},(v_j)_1^n}(a)}. \end{align*} Equation \eqref{eq:eqmain} holds when $\underline{v_n}=0$ by symmetric reasoning. Assume therefore that $m,n>0$, $\underline{u_m}>0$ and $\underline{v_n}>0$. Regard $H_{a,b,c,t,(u_i)_1^m,(v_j)_1^n}$ as a subregion of the (unbalanced) region $R_a:= H_{a,b,c,t,(u_i)_{1}^{m-1}, (v_j)_{1}^{n-1}}$. Within $R_a$ let $\alpha$ denote the unit triangle indexed by $v_n$, let $\beta$ denote the triangle indexed by $u_m$, let $\gamma$ denote the southmost triangle along the northeast side of $R_a$, and let $\delta$ denote the southmost triangle along the northwest side of $R_a$. Since $m,n>0$, $\underline{u_m}>0$, and $\underline{v_n}>0$, these locations are clearly defined within $R_a$ and are distinct, as depicted in Figure \ref{fig:fig5}. \begin{figure} \caption{(a) $H_{3,4,2,5,(3,6),(2,5,6)}$} \label{fig:fig5} \end{figure} We will apply Kuo Condensation (\thref{kuo}) to $R_a, \alpha, \beta, \gamma, \delta$. Figure \ref{fig:fig6} depicts each region referenced in the condensation formula, in some cases with forced lozenges shaded (we regard these as not being part of the region). Note they are all families of balanced dented hexagons. Since $H_{a,b,c,t,\vec u, \vec v}$ is tileable, so is each of these regions by \thref{existcor}. The number of dents is strictly maximal on $R_a - \alpha - \beta$, so we may apply the inductive hypothesis to each of the other regions.
Recall that \thref{kuo} may be rearranged to state $$ M(R-\alpha-\beta)M(R-\gamma-\delta) = M(R-\alpha-\delta)M(R-\beta-\gamma) - M(R-\alpha-\gamma)M(R-\beta-\delta). $$ Removing forced lozenges as depicted in Figure \ref{fig:fig6}, we can rewrite this: \begin{equation} \begin{array}{lr} & M\left(H_{a,b,c,t,(u_i)_1^{m},(v_j)_1^{n}}\right) M\left(H_{a,b+1,c+1,t-2,(u_i)_1^{m-1},(v_j)_1^{n-1}}\right)\\[10 pt] =& M\left(H_{a,b+1,c,t-1,(u_i)_1^{m-1},(v_j)_1^{n}}\right) M\left(H_{a,b,c+1,t-1,(u_i)_1^{m},(v_j)_1^{n-1}}\right)\\[10 pt] -& M\left(H_{a,b,c+1,t-1,(u_i)_1^{m-1},(v_j)_1^{n}}\right) M\left(H_{a,b+1,c,t-1,(u_i)_1^{m},(v_j)_1^{n-1}}\right) \end{array} \end{equation} Applying the inductive hypothesis to each region except $H_{a,b,c,t,(u_i)_1^m,(v_j)_1^n}$, this implies \begin{equation}\label{eq:kuomain} \begin{array}{rl} &M\left(H_{a,b,c,t,(u_i)_1^{m},(v_j)_1^{n}}\right) \displaystyle\frac{P(a,b+n,c+m)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}}\\[15pt] \xequiv{a}& \displaystyle\frac{P(a,b+n+1,c+m-1)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}+1} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}-1}} \displaystyle\frac{P(a,b+n-1,c+m+1)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}-1} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}+1}}\\[15pt] -& \displaystyle \frac{P(a,b+n,c+m)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}}} \displaystyle\frac{P(a,b+n,c+m)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}} \end{array} \end{equation} \begin{figure} \caption{$R_a$ with two of $\alpha, \beta, \gamma, \delta$ removed and forced lozenges shaded.} \label{fig:fig6} \end{figure} We shall show that when the term $M\left(H_{a,b,c,t,(u_i)_1^{m},(v_j)_1^{n}}\right)$ is replaced with $f_{b,c,(u_i)_1^m,(v_j)_1^n}(a)$, the products on each line of \eqref{eq:kuomain} are $\xequiv{a}$-equivalent, so that $$M(H_{a,b,c,t, (u_i)_1^m, (v_j)_1^n}) \xequiv{a} f_{b,c, (u_i)_1^m, (v_j)_1^n}(a)$$ by relations
$\eqref{eq:sum}$ and $\eqref{eq:substitute}$. It is clear that the product on the first line would be equal to the product on the third line when $f_{b,c,(u_i)_1^m, (v_j)_1^n}(a)$ is written out explicitly, so it remains to show the products on the last two lines are $\xequiv{a}$-equivalent. We will manipulate the terms on the second line to show this, employing the following analogues to equations \eqref{eq:sub1} and \eqref{eq:sub2}: \begingroup \allowdisplaybreaks \begin{align} \prod_{j=1}^k (a+v_j)_{\underline{v_j}-1} &= \prod_{j=1}^k(a+v_j)_{\underline{v_j}} / (a+c+m)_k \label{eq:sub3}\\ \prod_{i=1}^k (a+u_i)_{\underline{u_i}-1} &= \prod_{i=1}^k(a+u_i)_{\underline{u_i}} / (a+b+n)_k\\ \prod_{j=1}^k (a+v_j)_{\underline{v_j}+1} &= (a+c+m+1)_k\prod_{j=1}^k(a+v_j)_{\underline{v_j}} \\ \prod_{i=1}^k (a+u_i)_{\underline{u_i}+1} &= (a+b+n+1)_k\prod_{i=1}^k(a+u_i)_{\underline{u_i}}. \label{eq:sub4} \end{align} \endgroup We can therefore rewrite: \begin{align*} &\displaystyle\frac{P(a,b+n+1,c+m-1)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}+1} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}-1}} \cdot \frac{P(a,b+n-1,c+m+1)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}-1} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}+1}}\\[10pt] \hspace{-1.5cm}(\mbox{by eqns.
\eqref{eq:sub3} - \eqref{eq:sub4}}) = & \frac{P(a,b+n+1,c+m-1)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}}} \cdot \frac{(a+c+m)_{n}}{(a+b+n+1)_{m-1}}\\[10pt] \times & \frac{P(a,b+n-1,c+m+1)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}}\cdot \frac{(a+b+n)_{m}}{(a+c+m+1)_{n-1}}\\[10pt] \hspace{-1.5cm}(\mbox{by \thref{sub}}) \xequiv{a} & \frac{P(a,b+n,c+m)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}}} \cdot \frac{(a+b+n+1)_{c+m-1}}{(a+c+m)_{b+n}} \cdot \frac{(a+c+m)_{n}}{(a+b+n+1)_{m-1}}\\[10pt] \times & \frac{P(a,b+n,c+m)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}} \cdot \frac{(a+c+m+1)_{b+n-1}}{(a+b+n)_{c+m}} \cdot \frac{(a+b+n)_{m}}{(a+c+m+1)_{n-1}}\\[10pt] =& \displaystyle \frac{P(a,b+n,c+m)}{\prod_{i=1}^{m-1} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n} (a+v_j)_{\underline{v_j}}} \cdot \displaystyle\frac{P(a,b+n,c+m)}{\prod_{i=1}^{m} (a+u_i)_{\underline{u_i}} \prod_{j=1}^{n-1} (a+v_j)_{\underline{v_j}}}, \end{align*} which is exactly the product on the third line of \eqref{eq:kuomain}; the leftover Pochhammer factors cancel just as in the proof of \thref{CLP}. This is the last case in the inductive step, so equation \eqref{eq:eqmain} holds in general. \end{proof} \section{Hexagons with Two Large Dents} \label{corollaries} When the dents along each side of a dented hexagon are all adjacent, as in $H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}$, the region has forced lozenges that form large triangular dents. Figure \ref{fig:fig3} depicts this forcing, and the region that results if the forced lozenges and omitted dents are removed entirely.
\begin{figure} \caption{(a) $H_{5,4,2,5,(4+i)_1^2,(3+j)_1^3}$} \label{fig:fig3} \end{figure} The original goal of this paper was to find a general tiling function for hexagons with two large dents, and indeed \thref{main} simplifies to a ratio of tilings of semiregular hexagons when the dents along each side are all adjacent: \begin{corollary}\thlabel{twodents} A dented hexagon $H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}$ has tilings exactly if $u\geq n$ or $v\geq m$, in which case \begin{align*}\hspace{-.5cm}\M\left(H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}\right) =& \M\left(H_{0,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}\right)\\ \times & \frac{P(a,b+n,c+m)P(u,b+n-u,m)P(v,c+m-v,n)}{P(a+u,b+n-u,m)P(a+v,c+m-v,n)}. \end{align*} \end{corollary} This generalizes a specific case of a result by Lai, who studied the problem when the southern borders of the dents are level (see Theorem 3.1 from \cite{La17}, with $q=1$). We give an expression of that result below in the language of this paper; it follows from \thref{twodents}. \begin{corollary}\thlabel{TLcorr} Given a dented hexagon $H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}$ with $u+m=v+n$, let $D:=u-n$. If $D<0$ the region has no tilings. Otherwise, \begin{align*} \M\left(H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}\right) =& \frac{P(a,b+n,c+m)P(u,b-D,m)P(v,c-D,n)}{P(a+u,b-D,m)P(a+v,c-D,n)}\\ \times & \frac{P(c-D,n+m,b)P(D,n,m)}{P(c-D+n,m,D)} \cdot P(D,m,b-D). \end{align*} \end{corollary} \begin{proof}[Proof of \thref{twodents}] It is straightforward to check for arbitrary values that \begin{equation}\label{eq:eqtwodent} \begin{array}{rl} m > v, n > u \iff & v < u+ m \leq v + n, \mbox{ and} \quad m+ (u+ m-v) > u+m \\ \mbox{ OR}& u < v+ n \leq u+ m,\mbox{ and} \quad n + (v+ n-u) > v+n. \end{array} \end{equation} We shall show the second set of expressions holds exactly when the region $H_{a,b,c,t,(u+i)_1^m,(v+j)_1^n}$ has no tilings.
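As an illustrative aside (not part of the proof), the equivalence \eqref{eq:eqtwodent} is a finite statement that can be spot-checked mechanically over small nonnegative integer parameters:

```python
# Brute-force spot-check of the equivalence in eq. (eqtwodent)
# for all nonnegative integers 0 <= u, v, m, n <= 6.
from itertools import product


def lhs(u, v, m, n):
    # Left side: m > v and n > u.
    return m > v and n > u


def rhs(u, v, m, n):
    # Right side: the two-line disjunction of eq. (eqtwodent).
    east = v < u + m <= v + n and m + (u + m - v) > u + m
    west = u < v + n <= u + m and n + (v + n - u) > v + n
    return east or west


# Verify both sides agree on every parameter tuple in the range.
assert all(lhs(*p) == rhs(*p) for p in product(range(7), repeat=4))
print("equivalence checked for 0 <= u, v, m, n <= 6")
```

Of course, the equivalence must hold for arbitrary nonnegative values, which is the straightforward check referred to above.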
Suppose that $u+m \leq v+n$ (meaning the southern border of the eastern dent is weakly north of the southern border of the western dent) and the region has no tilings. Then $(\mu_N - N)$ is maximized at $N=u+m$, so $\mu_{u+m}>u+m$. If $v>u+m$ then $\mu_{u+m}=m\leq u+m$, giving a contradiction. So it must be that $v\leq u+m$. This implies $\mu_{u+m}=m+(u+m-v) > u+m$, so that the first line on the right side of equation \eqref{eq:eqtwodent} holds. The argument works in reverse: if $v<u+m\leq v+n$ and $(u+m)-v>u$ then $\mu_{u+m} > u+m$; so these three inequalities are equivalent to the eastern dent being weakly north of the western dent and the region having no tilings. The inequalities $u < v+ n \leq u+ m$ and $n + (v+ n-u) > v+n$ are equivalent to the western dent being weakly north of the eastern dent and the region having no tilings. The formula given follows immediately from \thref{main}. \end{proof} \begin{proof}[Proof of \thref{TLcorr}] The region $H_{0,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}$ has forced lozenges that, when removed, give a region congruent to $H_{c-D,n+m,b-D,D,(n+i)_{i=1}^D, \vec \emptyset}$, as depicted in Figure \ref{fig:fig12}. \begin{figure} \caption{(a) $H_{0,6,4,5,(4+i)_1^3, (5+j)_1^2}$} \label{fig:fig12} \end{figure} The region $H_{0,n+m,b-D,D,(n+i)_{i=1}^D, \vec \emptyset}$ also has forced lozenges, that when removed give a semiregular hexagon $H_{D,m,b-D}$. The result then follows by applying \thref{twodents} to both $H_{a,b,c,t,(u+i)_{i=1}^{m}, (v+j)_{j=1}^{n}}$ and $H_{c-D,n+m,b-D,D,(n+i)_{i=1}^D, \vec \emptyset}$. \end{proof} \section{Final Remarks} The method of proof used for \thref{TLcorr} is applicable whenever the tiling function of $H_{0,b,c,t,\vec u, \vec v}$ is simple to express. For example, let $H=H_{0,b,c,t,(u+i)_1^m, (v+j)_1^n}$ be a region with $\underline{v_n}=1$, as depicted in Figure \ref{fig:fig10} (a).
\begin{figure} \caption{(a) $H_{0,b,c, \vec u, \vec v}$} \label{fig:fig10} \end{figure} Observe that the blue split-line in the figure partitions the region into two unbalanced hexagons $H_{n,b,0,1}$ and $H_{m-1,b+n-u,c-1,1}$. It can be seen by modifying the proof of the Region Splitting Lemma that if $R$ is a balanced region with a partition into regions $P$, $Q$, so that unit triangles in $P$ which are adjacent to unit triangles in $Q$ are all of the same orientation, and this orientation \emph{is in excess} within $P$ by some amount, say $d$, then all tilings of $R$ include exactly $d$ lozenges covering one triangle from $P$ and one triangle from $Q$. It follows that all tilings of the entire region must include exactly one lozenge which crosses the split-line. Let $S_i$ be the set of tilings of the region in which the split-line-crossing lozenge's northern border is $i$ units north of $H$'s southern side. Let $R_i$ be the region obtained by removing that lozenge from $H$, and observe that $S_i$ is in natural bijection with the tilings of $R_i$. Furthermore, \thref{splitting} applies to each region $R_i$ with respect to the blue split-line, and partitions $R_i$ into two regions with known tiling functions: \begingroup \allowdisplaybreaks \begin{align*} \M(R_i) &= \M(H_{1,n,b+1-i})\M(H_{m-1,b+n-u,c-1,1,(i),\vec \emptyset})\\[5pt] &= \frac{P(m-1,b+n-u,c)(b+n-u)!}{n!(c-1)!(b+m+n-u-1)!}\\ & \times (b+2-i)_n (b+n+2-u-i)_{c-1}(i)_{m-1}\\[5pt] \M(H_{a,b,c,t,(u+i)_1^m, (v+j)_1^n}) &= \frac{P(a,b+n,c+m)P(u,b+n-u,m)(v+n)!(a+v)!}{P(a+u,b+n-u,m)P(a+v,1,n)(v)!(a+v+n)!} \\ &\times \sum_{i=1}^{b+n+1-u}\M(R_i) \end{align*} \endgroup A similar calculation could be made when $1 \leq \underline{v_n} \leq m$, indexing over the positions of $|\underline{v_n}|$ distinct lozenges which cross the split-line.
We can apply this method to a different family of dented hexagons, with $v$ arbitrary, $u<b+1$, and $n=1$, employing a split-line which cuts southwest from the eastern dent, as depicted in Figure \ref{fig:fig11}. \begin{figure} \caption{(a) $H_{0,b,c, m+1,\vec u, \vec v}$} \label{fig:fig11} \end{figure} Again, each tiling of this region has exactly one lozenge which crosses the split-line. If $R_i$ is obtained by removing the lozenge in the $i$th position from the southwest end of the split-line, then tilings of $H_{0,b,c,m+1,(u+i)_1^m, (v+1)}$ are in bijection with the union of the tilings of $\{R_i\}_{i=1}^{c+m-v+1}$: \begin{align*} \M(R_i) &= \M(H_{1,u-1,c+m+1-v-i}) \M(H_{b-u,c,m,1,(i),\vec \emptyset})\\ &=\frac{P(b-u,c,m+1)c!}{(u-1)!(b-u+c)!m!}\\ & \times(c+m+2-v-i)_{u-1}(c-i+2)_{m}(i)_{b-u}\\[5pt] \M(H_{a,b,c,m+1,(u+i)_1^m, (v+1)}) &= \frac{P(a,b+1,c+m)P(u,b+1-u,m)v!(c+m+a)!c!}{P(a+u,b+1-u,m)(c+m)!(a+v)!}\\ & \times \sum_{i=1}^{c+m-v+1} \M(R_i). \end{align*} We note some dead ends. We have shown that the tiling function for the dented hexagon $H_{a,b,c,t,\vec u, \vec v}$ may be given as a polynomial with entirely linear factors (of $a$ when the other parameters are fixed). The same is not true for other obvious parameters of the region, such as $c$ or $b$. Similarly, when dents were placed along more than two sides of the hexagon, or along \emph{short} sides of the hexagon, the tiling function of the region could not be interpreted as a polynomial of linear factors over any obvious single parameter of the region. \textbf{Acknowledgments.} I would like to thank M. Ciucu for introducing me to this subject, for his insights into this problem, and for his feedback on this paper. \begin{bibdiv} \begin{biblist} \bib{By19}{article}{ title={Identities involving Schur functions and their applications to a shuffling theorem}, author={Byun, S.
H.}, status={submitted}, eprint={arXiv:1906.04533 [math.CO]}, date={2019} } \bib{CF16}{article}{ title={Lozenge Tilings of Hexagons with Arbitrary Dents}, author={Ciucu, M.}, author={Fischer, I.}, journal={Adv. Appl. Math.}, date={2016}, Volume={73 C}, pages={1-22} } \bib{Ci97}{article}{ title={Enumeration of Perfect Matchings in Graphs with Reflective Symmetry}, author={Ciucu, M.}, journal={J. Combin. Theory Ser. A}, date={1997}, Volume={77}, number={1}, pages={67-97} } \bib{Ci01}{article}{ title={Enumeration of Lozenge Tilings of Hexagons with a Central Triangular Hole}, author={Ciucu, M.}, author={Eisenk{\"o}bl, T.}, author={Krattenthaler, C.}, author={Zare, D.}, journal={J. Combin. Theory Ser. A}, date={2001}, Volume={95}, pages={ 251-334} } \bib{Ci19a}{article}{ title={Tilings of hexagons with a removed triad of bowties}, author={Ciucu, M.}, author={Lai, T.}, author={Rohatgi, R.}, eprint={arXiv:1909.04070 [math.CO]}, date={2020} } \bib{Ci19b}{article}{ title={Lozenge tilings of doubly-intruded hexagons}, author={Ciucu, M.}, author={Lai, T.}, date={2019}, journal={J. Combin. Theory Ser. A}, date={2019}, Volume={167}, pages={294-339} } \bib{CLP}{article}{ title={The shape of a typical boxed plane partition}, journal={New York J. Math.}, volume={4}, author={H. Cohn}, author={M. Larsen}, author={J. Propp}, date={1998}, pages={137-165} } \bib{DT89}{article}{ title={The Problem of the Calissons}, author={David, G.}, author={Tomei, C.}, journal={Amer. Math. Monthly}, date={1989}, Volume={96 (935)}, pages={429-431} } \bib{Ge89}{article}{ title={Binomial determinants, paths, and hook length formulae. }, author={Gessel, I.}, author={Viennot, X.}, journal={Adv. in Math.}, date={1985}, volume={58}, number={3}, pages={300-321} } \bib{Ku04}{article}{ title={Applications of Graphical Condensation for Enumerating Matchings and Tilings}, author={Kuo, E.}, journal={Theoret. Comput. 
Sci.}, date={2004}, Volume={319}, number={1-3}, pages={29-57}} \bib{La14}{article}{ title={Enumeration of Hybrid Domino-Lozenge Tilings}, author={Lai, T.}, journal={J. Combin. Theory Ser. A}, date={2014}, volume={122}, pages={53–81}} \bib{La17}{article}{ title={A q-enumeration of lozenge tilings of a hexagon with three dents}, author={Lai, T.}, journal={Adv. in Appl. Math.}, date={2017}, volume={82}, pages={23-57}} \bib{La19a}{article}{ title={A Shuffling Theorem for Reflectively Symmetric Tilings}, author={Lai, T.}, status={submitted}, eprint={arXiv:1905.09268 [math.CO]}, date={2019} } \bib{La19b}{article}{ title={A Shuffling Theorem for Centrally Symmetric Tilings}, author={Lai, T.}, status={submitted}, eprint={arXiv:1906.03759 [math.CO]}, date={2019} } \bib{La19c}{article}{ title={A shuffling theorem for lozenge tilings of doubly-dented hexagons}, author={Lai, T.}, author={Rohatgi, R.}, status={submitted}, eprint={arXiv:1905.08311 [math.CO]}, date={2019} } \bib{Li73}{article}{ title={On the vector representations of induced matroids}, author={Lindstr{\"o}m, B.}, journal={Bull. Lond. Math. Soc.}, volume={5}, date={1973}, pages={85-90} } \bib{Ma}{book}{ title={Combinatory Analysis}, volume={2}, author={MacMahon, P. A.}, publisher={Cambridge University Press}, date={1916}, pages={12}, reprint={ title={Combinatory Analysis}, volume={1-2}, author={MacMahon, P. A.}, publisher={Chelsea, New York}, date={1960} } } \bib{GT}{article} { title={Finite-dimensional representations of the group of unimodular matrices}, language={Russian}, journal={Dokl. Akad. Nauk.}, volume={71}, date={1950}, pages={825-828}, author={M. Gelfand}, author={M. L. Tsetlin}, translation={ language={English}, title={Izrail M. Gelfand: Collected Papers}, publisher={Springer-Verlag, Berlin}, volume={2}, date={1988}, pages={653–656} } } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \verticaladjustment{-2pt} \title{Thirteen Simple Steps for Creating An R Package with an External C++ Library} \thispagestyle{firststyle} \ifthenelse{\boolean{shortarticle}}{\ifthenelse{\boolean{singlecolumn}}{\abscontentformatted}{\abscontent}}{} \hypertarget{introduction}{ \section{Introduction}\label{introduction}} The process of building a new package with Rcpp can range from the very easy---a single simple C++ function---to the very complex. If, and how, external resources are utilised makes a big difference as this too can range from the very simple---making use of a header-only library, or directly including a few C++ source files without further dependencies---to the very complex. Yet a lot of the important action happens in the middle ground. Packages may bring their own source code, but also depend on just one or two external libraries. This paper describes one such approach in detail: how we turned the Corels application \citep{arxiv:corels,github:corels} (provided as a standalone C++-based executable) into an R-callable package \textbf{RcppCorels} \citep{github:rcppcorels} via \textbf{Rcpp} \citep{CRAN:Rcpp,JSS:Rcpp}. \hypertarget{the-thirteen-key-steps}{ \section{The Thirteen Key Steps}\label{the-thirteen-key-steps}} \hypertarget{ensure-use-of-a-suitable-license}{ \subsection{Ensure Use of a Suitable license}\label{ensure-use-of-a-suitable-license}} Before embarking on such a journey, it is best to ensure that the licensing framework is suitable. Many different open-source licenses exist, yet a few key ones dominate and can generally be used \emph{with each other}. There is however a fair amount of possible legalese involved, so it is useful to check inter-license compatibility, as well as general usability of the license in question. Several sites can help via license recommendations, and checks for interoperability.
One example is the site at \href{https://choosealicense.com/}{choosealicense.com} (which is backed by GitHub); another is \href{https://tldrlegal.com/}{tldrlegal.com}. License choice is a complex topic, and general recommendations are difficult to make besides the key point of sticking to already-established and known licenses. \hypertarget{ensure-the-software-builds}{ \subsection{Ensure the Software builds}\label{ensure-the-software-builds}} In order to see how hard it may be to combine an external entity, either a program or a library, with R, it helps to ensure that the external entity actually still builds and runs. This may seem like a small and obvious step, but experience suggests that it is worth asserting the ability to build with current tools, and possibly also with more than one compiler or build-system. Consideration of other platforms supported by R also matters a great deal, as one of the strengths of the R package system is its ability to cover the three key operating system families. \hypertarget{ensure-it-still-works}{ \subsection{Ensure it still works}\label{ensure-it-still-works}} This may seem like a variation on the previous point, but besides the ability to \emph{build} we also need to ensure the ability to \emph{run} the software. If the external entity has tests and demos, it is highly recommended to run them. If there are reference results, we should ensure that they are still obtained, and also that the run-time performance is still (at a minimum) reasonable. \hypertarget{ensure-it-is-compelling}{ \subsection{Ensure it is compelling}\label{ensure-it-is-compelling}} This is of course a very basic litmus test: is the new software relevant? Is it helpful? Would others benefit from having it packaged and maintained? \hypertarget{start-an-rcpp-package}{ \subsection{Start an Rcpp package}\label{start-an-rcpp-package}} The first step in getting a new package combining R and C++ is often the creation of a new Rcpp package.
There are several helper functions to choose from. A natural first choice is \texttt{Rcpp.package.skeleton()} from the \textbf{Rcpp} package \citep{CRAN:Rcpp}. It can be improved by having the optional helper package \textbf{pkgKitten} \citep{CRAN:pkgKitten} around as its \texttt{kitten()} function smoothes some rougher edges left by the underlying Base R function \texttt{package.skeleton()}. This step is shown below in the appendix, and corresponds to the first commit, followed by a first edit of file \texttt{DESCRIPTION}. Any code added by the helper functions, often just a simple \texttt{helloWorld()} variant, can be run to ensure that the package is indeed functional. More importantly, at this stage, we can also start building the package as a compressed tar archive and run the R checker on it. \hypertarget{integrate-external-package}{ \subsection{Integrate External Package}\label{integrate-external-package}} Given a basic package with C++ support, we can now turn to integrating the external package. The complexity of this step can, as alluded to earlier, vary from very easy to very complex. Simple cases include just depending on library headers which can either be copied to the package, or be provided by another package such as \textbf{BH} \citep{CRAN:BH}. It may also be a dependency on a fairly standard library available on most if not all systems. The graphics formats bmp, jpeg or png may be examples; text formats like JSON or XML are others. One difficulty, though, may be that \emph{run-time} support does not always guarantee \emph{compile-time} support. In these cases, a \texttt{-dev} or \texttt{-devel} package may need to be installed.
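When a package links against such a system library, the compiler and linker flags conventionally go into the package's \texttt{src/Makevars} file. A minimal sketch for a package linking against GMP follows; the exact flags shown are an assumption for illustration, not a copy of the actual package file:

```make
## src/Makevars -- minimal sketch (assumed flags, not the actual file)
## Link the shared object against the C++ and C GMP libraries.
PKG_LIBS = -lgmpxx -lgmp
```

Additional fields such as \texttt{CXX\_STD} (to request a particular C++ standard) or \texttt{PKG\_CPPFLAGS} (for extra include paths) can be added in the same file when needed.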
In the concrete case of Corels, we \begin{itemize} \tightlist \item copied all existing C++ source and header files over into the \texttt{src/} directory; \item renamed all header files from \texttt{*.hh} to \texttt{*.h} to comply with an R preference; \item created a minimal \texttt{src/Makevars} file, here with link instructions for GMP; \item moved \texttt{main.cc} to a subdirectory as we cannot build with another \texttt{main()} function (and R will not include files from subdirectories); \item added a minimal R-callable function along with a \texttt{logger} instance. \end{itemize} Here, the last step was needed as the file \texttt{main.cc} provided a global instance referred to from other files. Hence, a minimal R-callable wrapper is being added at this stage (shown in the appendix as well). Actual functionality will be added later. We will come back to the step concerning the link instructions. At this point we have a package for R also containing the library we want to add. \hypertarget{make-the-external-code-compliant-with-r-policies}{ \subsection{Make the External Code compliant with R Policies}\label{make-the-external-code-compliant-with-r-policies}} R has fairly strict guidelines, defined both in the \emph{CRAN Repository Policy} document at the CRAN website, and in the manual \emph{Writing R Extensions}. Certain standard C and C++ functions are not permitted as their use could interfere with running code from R. This includes somewhat obvious recommendations (``do not call \texttt{abort}'' as it would terminate the R session) but extends to not using native print methods in order to cooperate better with the input and output facilities of R. So here, and reflecting that last aspect, we changed all calls to \texttt{printf()} to calls to \texttt{Rprintf()}. Similarly, R prefers its own (well-tested) random-number generators so we replaced one (scaled) call to \texttt{random()\ /\ RAND\_MAX} with the equivalent call to R's \texttt{unif\_rand()}.
We also avoided one use of \texttt{stdout} in \texttt{rulelib.h}. The requirement for such changes may seem excessive at first, but the value stemming from a consistent application of the CRAN Policies is appreciated by most R users.

\hypertarget{complete-the-interface}{
\subsection{Complete the Interface}\label{complete-the-interface}}

In order to further test the package, and of course also for actual use, we need to expose the key parameters and arguments. Corels parses command-line arguments; these translate directly into suitable arguments for the main function. As a first pass, we created the following interface:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{// [[Rcpp::export]]}
\DataTypeTok{bool}\NormalTok{ corels(}\BuiltInTok{std::}\NormalTok{string rules_file,}
\BuiltInTok{            std::}\NormalTok{string labels_file,}
\BuiltInTok{            std::}\NormalTok{string log_dir,}
\BuiltInTok{            std::}\NormalTok{string meta_file = }\StringTok{""}\NormalTok{,}
            \DataTypeTok{bool}\NormalTok{ run_bfs = }\KeywordTok{false}\NormalTok{,}
            \DataTypeTok{bool}\NormalTok{ calculate_size = }\KeywordTok{false}\NormalTok{,}
            \DataTypeTok{bool}\NormalTok{ run_curiosity = }\KeywordTok{false}\NormalTok{,}
            \DataTypeTok{int}\NormalTok{ curiosity_policy = }\DecValTok{0}\NormalTok{,}
            \DataTypeTok{bool}\NormalTok{ latex_out = }\KeywordTok{false}\NormalTok{,}
            \DataTypeTok{int} \DataTypeTok{map_type}\NormalTok{ = }\DecValTok{0}\NormalTok{,}
            \DataTypeTok{int}\NormalTok{ verbosity = }\DecValTok{0}\NormalTok{,}
            \DataTypeTok{int}\NormalTok{ max_num_nodes = }\DecValTok{100000}\NormalTok{,}
            \DataTypeTok{double}\NormalTok{ regularization = }\FloatTok{0.01}\NormalTok{,}
            \DataTypeTok{int}\NormalTok{ logging_frequency = }\DecValTok{1000}\NormalTok{,}
            \DataTypeTok{int}\NormalTok{ ablation = }\DecValTok{0}\NormalTok{) \{}
    \CommentTok{// actual function body omitted}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

Rcpp facilitates the integration by adding another wrapper exposing all the function arguments,
and setting up required arguments without a default (the first three) along with optional arguments given a default. The user can now call \texttt{corels()} from R with three required arguments (the two input files plus the log directory) as well as a number of optional arguments.

\hypertarget{add-sample-data}{
\subsection{Add Sample Data}\label{add-sample-data}}

R packages can access data files that are shipped with them. That is a very useful feature, and we therefore also copy in the files included in the \texttt{data/} directory of the Corels repository.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fs}\OperatorTok{::}\KeywordTok{dir_tree}\NormalTok{(}\StringTok{"../rcppcorels/inst/sample_data"}\NormalTok{)}
\CommentTok{# ../rcppcorels/inst/sample_data}
\CommentTok{# +-- compas_test-binary.csv}
\CommentTok{# +-- compas_test.csv}
\CommentTok{# +-- compas_test.label}
\CommentTok{# +-- compas_test.out}
\CommentTok{# +-- compas_train-binary.csv}
\CommentTok{# +-- compas_train.csv}
\CommentTok{# +-- compas_train.label}
\CommentTok{# +-- compas_train.minor}
\CommentTok{# \textbackslash{}-- compas_train.out}
\end{Highlighting}
\end{Shaded}

\hypertarget{set-up-working-example}{
\subsection{Set up working example}\label{set-up-working-example}}

Combining the two preceding steps, we can now offer an illustrative example. It is included in the help page for function \texttt{corels()} and can be run from R via \texttt{example("corels")}.
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{library}\NormalTok{(RcppCorels)}

\NormalTok{.sysfile <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(f) }\CommentTok{# helper function}
    \KeywordTok{system.file}\NormalTok{(}\StringTok{"sample_data"}\NormalTok{,f,}\DataTypeTok{package=}\StringTok{"RcppCorels"}\NormalTok{)}

\NormalTok{rules_file <-}\StringTok{ }\KeywordTok{.sysfile}\NormalTok{(}\StringTok{"compas_train.out"}\NormalTok{)}
\NormalTok{labels_file <-}\StringTok{ }\KeywordTok{.sysfile}\NormalTok{(}\StringTok{"compas_train.label"}\NormalTok{)}
\NormalTok{meta_file <-}\StringTok{ }\KeywordTok{.sysfile}\NormalTok{(}\StringTok{"compas_train.minor"}\NormalTok{)}
\NormalTok{logdir <-}\StringTok{ }\KeywordTok{tempdir}\NormalTok{()}

\KeywordTok{stopifnot}\NormalTok{(}\KeywordTok{file.exists}\NormalTok{(rules_file),}
          \KeywordTok{file.exists}\NormalTok{(labels_file),}
          \KeywordTok{file.exists}\NormalTok{(meta_file),}
          \KeywordTok{dir.exists}\NormalTok{(logdir))}

\KeywordTok{corels}\NormalTok{(rules_file, labels_file, logdir, meta_file,}
       \DataTypeTok{verbosity =} \DecValTok{100}\NormalTok{,}
       \DataTypeTok{regularization =} \FloatTok{0.015}\NormalTok{,}
       \DataTypeTok{curiosity_policy =} \DecValTok{2}\NormalTok{, }\CommentTok{# by lower bound}
       \DataTypeTok{map_type =} \DecValTok{1}\NormalTok{) }\CommentTok{# permutation map}

\KeywordTok{cat}\NormalTok{(}\StringTok{"See "}\NormalTok{, logdir, }\StringTok{" for result file."}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

In the example, we pass the two required arguments for the rules and labels files and the required argument for the log directory, as well as the optional argument for the `meta' file. As R policy prohibits writing into user directories, we default to using the temporary directory of the current session, and report its value at the end. For all other arguments, default values are used.
\hypertarget{finesse-library-dependencies}{
\subsection{Finesse Library Dependencies}\label{finesse-library-dependencies}}

One fairly common difficulty in bringing a library to R via a package consists of external dependencies. In the case of `Corels', the GNU GMP library for multi-precision arithmetic is needed with both its C and C++ language bindings. In order to detect the presence of a required (or maybe optional) library, tools like `autoconf' or `cmake' are often used. For example, to detect the presence of GNU GMP, CRAN package \textbf{sbrl} \citep{CRAN:sbrl} uses `autoconf' and turns optional use on if the library is present. Here, however, we need to test for both the C and the C++ library bindings. One additional problem with `Corels' is that at present, compilation depends on GMP. So while we can use `autoconf' to detect it, we have to abort the build if the library (or its C++ bindings) is not present.

\hypertarget{finalise-license-and-copyright}{
\subsection{Finalise License and Copyright}\label{finalise-license-and-copyright}}

It is good (and common) practice to clearly attribute authorship. Here, credit is given to the `Corels' team and authors as well as to the authors of the underlying `rulelib' code used by `Corels' via the file \texttt{inst/AUTHORS} (which will be installed as \texttt{AUTHORS} with the package). In addition, the file \texttt{inst/LICENSE} clarifies the GNU GPL-3 license for `RcppCorels' and `Corels', and the MIT license for `rulelib'.

\hypertarget{additional-bonus-some-more-meta-files}{
\subsection{Additional Bonus: Some more `meta' files}\label{additional-bonus-some-more-meta-files}}

Several files help to improve the package. For example, \texttt{.Rbuildignore} allows excluding listed files from the resulting R package, keeping it well-defined. Similarly, \texttt{.gitignore} can exclude files from being added to the \texttt{git} repository.
We also like \texttt{.editorconfig} for consistent editing defaults across a range of modern editors.

\hypertarget{summary}{
\section{Summary}\label{summary}}

We describe a series of steps to turn the standalone library `Corels' described by \citet{arxiv:corels} into an R package \textbf{RcppCorels} using the facilities offered by \textbf{Rcpp} \citep{CRAN:Rcpp}. Along the way, we illustrate key aspects of the R package standards and the CRAN Repository Policy, providing a template for other research software wishing to provide their implementations in a form that is accessible to R users.

\onecolumn

\hypertarget{appendix-1-creating-the-basic-package}{
\subsection{Appendix 1: Creating the basic package}\label{appendix-1-creating-the-basic-package}}

\begin{Shaded}
\begin{Highlighting}[]
\ExtensionTok{edd@rob}\NormalTok{:~/git$ r --packages Rcpp --eval }\StringTok{'Rcpp.package.skeleton("RcppCorels")'}

\ExtensionTok{Attaching}\NormalTok{ package: ‘utils’}

\ExtensionTok{The}\NormalTok{ following objects are masked from ‘package:Rcpp’:}

\ExtensionTok{.DollarNames}\NormalTok{, prompt}

\ExtensionTok{Creating}\NormalTok{ directories ...}
\ExtensionTok{Creating}\NormalTok{ DESCRIPTION ...}
\ExtensionTok{Creating}\NormalTok{ NAMESPACE ...}
\ExtensionTok{Creating}\NormalTok{ Read-and-delete-me ...}
\ExtensionTok{Saving}\NormalTok{ functions and data ...}
\ExtensionTok{Making}\NormalTok{ help files ...}
\ExtensionTok{Done.}
\ExtensionTok{Further}\NormalTok{ steps are described in }\StringTok{'./RcppCorels/Read-and-delete-me'}\NormalTok{.}

\ExtensionTok{Adding}\NormalTok{ Rcpp settings}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ Imports: Rcpp}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ LinkingTo: Rcpp}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ useDynLib directive to NAMESPACE}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ importFrom(Rcpp, evalCpp) }\ExtensionTok{directive}\NormalTok{ to NAMESPACE}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ example src file
using Rcpp attributes}
\OperatorTok{>>} \ExtensionTok{added}\NormalTok{ Rd file for rcpp_hello_world}
\OperatorTok{>>} \ExtensionTok{compiled}\NormalTok{ Rcpp attributes}

\ExtensionTok{edd@rob}\NormalTok{:~/git$}
\ExtensionTok{edd@rob}\NormalTok{:~/git$ mv RcppCorels/ rcppcorels # prefer lowercase directories}
\ExtensionTok{edd@rob}\NormalTok{:~/git$}
\end{Highlighting}
\end{Shaded}

\hypertarget{appendix-2-a-minimal-srcmakevars}{
\subsection{Appendix 2: A Minimal src/Makevars}\label{appendix-2-a-minimal-srcmakevars}}

\begin{Shaded}
\begin{Highlighting}[]
\ExtensionTok{CXX_STD}\NormalTok{ = CXX11}
\ExtensionTok{PKG_CPPFLAGS}\NormalTok{ = -I. -DGMP -DSKIP_MAIN}
\ExtensionTok{PKG_LIBS}\NormalTok{ = }\VariableTok{$(}\ExtensionTok{LAPACK_LIBS}\VariableTok{)} \VariableTok{$(}\ExtensionTok{BLAS_LIBS}\VariableTok{)} \VariableTok{$(}\ExtensionTok{FLIBS}\VariableTok{)}\NormalTok{ -lgmpxx -lgmp}
\end{Highlighting}
\end{Shaded}

\hypertarget{appendix-3-a-placeholder-wrapper}{
\subsection{Appendix 3: A Placeholder Wrapper}\label{appendix-3-a-placeholder-wrapper}}

\begin{Shaded}
\begin{Highlighting}[]
\PreprocessorTok{#include }\ImportTok{"queue.h"}
\PreprocessorTok{#include }\ImportTok{<Rcpp.h>}

\CommentTok{/*}
\CommentTok{ * Logs statistics about the execution of the algorithm and dumps it to a file.}
\CommentTok{ * To turn off, pass verbosity <= 1}
\CommentTok{ */}
\NormalTok{NullLogger* logger;}

\CommentTok{// [[Rcpp::export]]}
\DataTypeTok{bool}\NormalTok{ corels() \{}
    \ControlFlowTok{return} \KeywordTok{true}\NormalTok{; }\CommentTok{// more to fill in, naturally}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

\end{document}
\begin{document} \begin{frontmatter} \title{Subordination principle, Wright functions and large-time behaviour for the discrete in time fractional diffusion equation} \author{Luciano Abadias$^{\dagger}$} \address{Departamento de Matem\'aticas, Instituto Universitario de Matem\'aticas y Aplicaciones, Universidad de Zaragoza, 50009 Zaragoza, Spain. \\ $^{\dagger}[email protected]} \author{Edgardo Alvarez$^{\ddagger}$ and Stiven D\'iaz$^{*}$ } \address{Universidad del Norte, Departamento de Matem\'aticas y Estad\'istica, Barranquilla, Colombia. \\ $^{\ddagger}[email protected] \\$^{*}[email protected]} \begin{keyword} Subordination formula, Scaled Wright function, Fractional difference equations, Large-time behavior, Decay of solutions, Discrete fundamental solution. \MSC[2010] Primary: 39A14, 35R11, 33E12, 35B40. \end{keyword} \begin{abstract} The main goal of this paper is to study the asymptotic behaviour in $L^p(\mathbb{{R}}^N)$ of the solutions of the fractional version of the discrete in time $N$-dimensional diffusion equation, which involves the Caputo fractional $h$-difference operator. The techniques used to prove the results are based on new subordination formulas involving the discrete in time Gaussian kernel, which are defined via an analogue, in the discrete time setting, of the scaled Wright functions. Moreover, we get an equivalent representation of that subordination formula via Fox H-functions. \end{abstract} \end{frontmatter} \section{Introduction} One of the most important aspects in the study of evolution problems is the asymptotic behaviour of solutions. In particular, since J. Fourier introduced the classical heat equation $u_t=\Delta u$ in 1822 to model diffusion phenomena, see \cite{Fourier}, several authors have invested time and effort in researching the large-time behaviour of diffusion processes.
For example, in \cite{DD,ZE,GV,KS,N} the authors studied large-time behaviour and other asymptotic estimates for diffusion problems in $\mathbb{{R}}^N,$ and in \cite{D2, GV2} for open bounded domains. Estimates for heat kernels on manifolds have been studied in \cite{Gr,L,D}, and in \cite{M} the author obtained Gaussian upper estimates for the heat kernel associated to the sub-laplacian on a Lie group. In recent years, the relevance of fractional calculus in the study of evolution problems and partial differential equations has become well known. Recently, and from different points of view (see \cite{Abadias-Alvarez-18, Kemppainen}), asymptotic estimates of solutions of fractional diffusion phenomena have been established. In particular, in \cite{Abadias-Alvarez-18} the authors use subordination formulas as the main tool, whereas in \cite{Kemppainen} the authors work directly with estimates of the fractional fundamental solution. We propose to use both tools for our aims, as we will see throughout the paper. On the other hand, finite differences were introduced some centuries ago, and they have been used in different mathematical problems, mainly in the numerical approximation of solutions of ordinary and partial differential equations. The best known ones are the forward, backward and central differences (the forward and backward differences are associated to the explicit and implicit Euler numerical methods). In the last years, several authors have been working on partial difference-differential equations (\cite{Abadia-Alvarez-20,ADT, Abadias-Lizama, C, C2, PAMS, LR}) from the point of view of mathematical analysis, more precisely, harmonic analysis, functional analysis and fractional differences. The latter, fractional differences, are nowadays a topic of active research, see for example \cite{Abadia-Alvarez-20,Go-Peter,Lizama-(2015),PAMS,Mo-Wy,Ponce-(2020)} and references therein.
Several aspects of such problems have been studied in those papers: maximal regularity, stability, fractional discrete resolvent operators, among others. The main goal in this paper is to study asymptotic decay and large time behaviour on $L^p(\mathbb{{R}}^N)$ of solutions of the following fractional discrete in time heat problem, as the authors in \cite{Abadia-Alvarez-20} do in the classical case ($\alpha=1$). We consider \begin{equation}\label{1.2} \left\{ \begin{array}{lll} {_C}\delta^{\alpha} u(nh,x) -\Delta u(nh,x)=0,\quad n\in\mathbb{{N}},\,x\in\mathbb{R}^N, \\ \\ u(0,x) =f(x),\\ \end{array} \right. \end{equation} where $0 <\alpha \leq 1$, ${_C}\delta^{\alpha}$ is the Caputo fractional $h$-difference operator (Section \ref{Section 3}), $\Delta$ denotes the Laplace operator acting in space, $u$ is defined on $\mathbb{{N}}^{h}_0\times\mathbb{{R}}^{N}$ and $f$ is a function defined on $\mathbb{{R}}^{N}$. We use subordination formulas to write the solution of the previous problem. Indeed, we will write the solution as $$u(nh,x)=\sum_{j=1}^{\infty}\varphi^h_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}*f)(x),$$ where $\mathcal{G}_{j,h}$ is the discrete Gaussian kernel associated to the discrete in time heat problem given in \cite{Abadia-Alvarez-20}, and $\varphi^h_{\alpha,\beta}$ is the discrete scaled Wright function (which is introduced in Section \ref{dsftsf}): \begin{equation*} \varphi^h_{\alpha,\beta}(n,j):= \frac{1}{2 \pi i} \displaystyle\int_{\Upsilon} \frac{1}{z^{n+1}}\frac{\left(1-h(\frac{1-z}{h})^{\alpha} \right)^{j}}{(\frac{1-z}{h})^{\beta}}\,dz, \quad n,j\in\mathbb{N}_0,\,\beta\geq0. \end{equation*} Note that it is a generalization of the one given in \cite{Alvarez-Diaz-Lizama-2020}. The previous subordination formula and the known results about the classical case ($\alpha=1,$ see \cite{Abadia-Alvarez-20}) are the key tools to study the decay and the asymptotic behaviour on $L^p(\mathbb{{R}}^N)$ for the solutions of \eqref{1.2}.
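As a quick consistency check, note that for $\alpha=1$ (so that $\beta=1-\alpha=0$) the integrand above simplifies to $z^{j-n-1}$, and hence
\begin{equation*}
\varphi^h_{1,0}(n,j)=\frac{1}{2 \pi i} \int_{\Upsilon} z^{j-n-1}\,dz=\delta_{n,j},\qquad n,j\in\mathbb{N}_0,
\end{equation*}
the Kronecker delta. The subordination formula then collapses to $u(nh,x)=(\mathcal{G}_{n,h}\ast f)(x)$, recovering the solution of the classical discrete in time heat problem.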
The paper is organized as follows. In the second section we revisit known results which are useful for our aims. We recall the concept of Wright functions, which play a key role in the subordination formulae for resolvent families in the continuous case, and whose properties help us to establish important results in the discrete setting. Also, we present basic properties of the discrete Gaussian kernel in $\mathbb{{R}}^N$ (the solution of \eqref{1.2} for $\alpha=1$). Section 3 is devoted to stating our fractional discrete setting. We consider the classical backward difference on the mesh of step $h>0.$ This difference allows us to consider adequate notions of discrete fractional sum, and of discrete fractional difference in the Riemann-Liouville and Caputo sense, for our purpose. Several useful properties will be shown. In Section 4 we introduce the Mittag-Leffler sequences, which are the solutions of \eqref{1.2} in the scalar case. Such Mittag-Leffler functions motivate the study of the Wright functions in the discrete setting, which are one of the key tools in this paper. Proposition \ref{Prop-Levy} contains many properties of both types of functions, and the relations between them. Such a result can be considered the analogue of \cite[Theorem 3]{Abadia-Miana} in the continuous case. In Section 5 we focus on the study of the fundamental solution of $\eqref{1.2}.$ We prove, via the subordination formula given by the discrete heat kernel and the Wright functions, that the integral defined by such a subordination formula is in fact the solution. One of the striking features of the proposed subordination formula (see \eqref{subordination1}) is its relation with the Gaussian kernel and the Fox H-function, as Proposition \ref{GaussianSemigroup-FS} and Proposition \ref{FoxH-FS} show. Also we state basic properties of the fundamental solution.
One of them is that the integral over $\mathbb{{R}}^N$ of the fundamental solution is 1, which directly gives the mass conservation principle for solutions of \eqref{1.2}. Section 6 contains the asymptotic $L^p$-results. We state the $L^p$-decay for the fundamental solution and its gradient. These decay bounds allow us to get the $L^p$-decay for the solution of $\eqref{1.2}$ (Theorem \ref{teorema4.1}), and also the large time behaviour. The mass conservation principle is reflected in this asymptotic behaviour, since the solution converges to the total mass times the fundamental solution (see Theorem \ref{Theorem5.1}). \section{Preliminaries} \subsection{Continuous fractional calculus.} In this part, we recall some concepts and basic results about fractional calculus in continuous time. Let $0<\alpha<1$ and $f$ be a locally integrable function. The Riemann-Liouville fractional derivative of $f$ of order $\alpha$ is given by $$_{R}D^{\alpha}_t f(t) := \frac{d}{dt}\displaystyle\int_0^t \dfrac{(t-s)^{-\alpha}}{\Gamma(1-\alpha)}f(s) \, ds, \ t \geq 0.$$ The Caputo fractional derivative of order $\alpha$ of a function $f$ is defined by \begin{align*} _{C}D^{\alpha}_t f(t)&:=\dfrac{1}{\Gamma(1-\alpha)} \displaystyle\int_0^t (t-s)^{-\alpha }f'(s) \, ds, \ t \geq 0, \end{align*} where $f'$ is the first order distributional derivative of $f(\cdot)$; for example, we may assume that $f(\cdot)$ has a locally integrable distributional derivative up to order one. Note that for $\alpha=1$ one sets $_{C}D^{1}_t:=\dfrac{d}{dt}$. For more details, see for example \cite{Mainardi, Miller}. The Mittag-Leffler functions are given by \begin{align*} E_{\alpha,\beta}(z):=\displaystyle\sum_{n=0}^{\infty}\frac{z^n}{\Gamma(\alpha n+\beta)}, \qquad \alpha,\,\beta>0,\,z\in\mathbb{{C}}.
\end{align*} We write $E_{\alpha}(z):=E_{\alpha,1}(z).$ They are solutions of the fractional differential problems $$_C D_t^{\alpha}E_{\alpha}(\omega t^{\alpha})=\omega E_{\alpha}(\omega t^{\alpha}),$$ and $$_R D_t^{\alpha}\biggl(t^{\alpha-1}E_{\alpha,\alpha}(\omega t^{\alpha})\biggr)=\omega t^{\alpha-1}E_{\alpha,\alpha}(\omega t^{\alpha}),$$ for $0<\alpha<1,$ under certain initial conditions. Their Laplace transform is \begin{equation*} \int_0^{\infty}e^{-\lambda t}t^{\beta-1}E_{\alpha,\beta}(\omega t^{\alpha})\,dt=\frac{\lambda^{\alpha-\beta}}{\lambda^{\alpha}-\omega}, \qquad \mbox{Re}(\lambda)>\omega^{\frac{1}{\alpha}},\,\omega>0. \end{equation*} For more details about the Mittag-Leffler function $E_{\alpha,\beta}$ see \cite[Chapter 18]{EMOTB}. Recall the definition of the Wright type function (see \cite{Mainardi}) \begin{equation*}\label{wright} W_{\lambda,\mu}(z)=\displaystyle\sum_{n=0}^{\infty}\dfrac{z^n}{n!\Gamma(\lambda n+\mu)}= \frac{1}{2\pi i} \int_{H_a} \sigma^{-\mu} e^{\sigma +z \sigma^{-\lambda}} d\sigma, \quad \lambda>-1,\,\mu\geq 0,\,z\in \mathbb{C}, \end{equation*} where $H_a$ denotes the Hankel path, a contour which starts and ends at $-\infty$ and encircles the origin once counterclockwise. For $0<\alpha<1$ and $\beta\geq 0,$ the scaled Wright function in two variables $\psi_{\alpha,\beta}$ (which was introduced by Abadias and Miana in \cite{Abadia-Miana}) is given by \begin{align}\label{ScaledWright} \psi_{\alpha,\beta}(t,s):=t^{\beta-1}W_{-\alpha,\beta}(-st^{-\alpha}),\qquad t>0,\ s\in\mathbb{{C}}. \end{align} Note that using the change of variable $z=\frac{\sigma}{t},$ we get the integral representation $$\psi_{\alpha,\beta}(t,s)=\frac{1}{2\pi i}\int_{H_a}z^{-\beta}e^{tz-sz^{\alpha}}\,dz, \qquad t,s>0.$$ Many properties of such functions that we will use along the paper appear in \cite{Abadia-Miana}.\\ \\ Next, we recall the definition of the Fox H-functions.
Let $m, n, p, q\in\mathbb{{N}}_0$ such that $0\leq m \leq q$, $0\leq n \leq p$. Let $a_i,b_j\in \mathbb{{C}}$ and $\alpha_i,\beta_j\in \mathbb{{R}}_+.$ The Fox H-function is defined via a Mellin-Barnes type integral, \begin{align*} H^{mn}_{pq}(z)&:=H^{mn}_{p,q}\left[\begin{array}{c} z \left\vert \begin{array}{c} (a_{i}, \alpha_{i})_{1,p} \\ (b_{j}, \beta_{j})_{1,q} \end{array} \right. \end{array} \right] =\frac{1}{2\pi i} \int_{\gamma}\mathcal{H}^{mn}_{pq}(s) z^{-s} \ ds, \end{align*} where \begin{align*} (a_{i}, \alpha_{i})_{1,p}:=(a_{1}, \alpha_{1}), &\cdots, (a_{p}, \alpha_{p}), \\ (b_{j}, \beta_{j})_{1,q}:=(b_{1}, \beta_{1}), &\cdots, (b_{q}, \beta_{q}), \end{align*} \begin{align*} \mathcal{H}^{mn}_{pq}(s)=\dfrac{\displaystyle\prod_{j=1}^{m}\Gamma(b_j+\beta_j s)\displaystyle\prod_{i=1}^{n}\Gamma(1-a_i-\alpha_i s)}{\displaystyle\prod_{i=n+1}^{p}\Gamma(a_i+\alpha_i s)\displaystyle\prod_{j=m+1}^{q}\Gamma(1-b_j-\beta_j s)}, \end{align*} and $\gamma$ is the infinite contour in the complex plane which separates the poles \begin{align*} b_{jl}=\frac{-b_j-l}{\beta_j} \ (j=1,\cdots, m; \ l\in\mathbb{{N}}_{0}) \end{align*} of the Gamma function $\Gamma(b_j+\beta_{j}s)$ to the left of $\gamma$ and the poles \begin{align*} a_{ik}=\frac{1-a_i+k}{\alpha_i} \ (i=1,\cdots, n; \ k\in\mathbb{{N}}_{0}) \end{align*} to the right of $\gamma$. For more details on this type of function, see \cite{H-transforms}. \\ \\ Finally, we recall that the Gaussian kernel $G_t(x)$ is defined by \begin{align}\label{gaussian-semigroup} G_t(x)=\frac{1}{(4\pi t)^{N/2}}e^{-\frac{|x|^2}{4t}}, \qquad t>0, \ x\in\mathbb{{R}}^{N}. \end{align} Let $$\widehat{u}(\xi)=\mathcal{F}(u)(\xi)=(2\pi)^{-N/2}\int_{\mathbb{R}^N}e^{-ix\cdot\xi}u(x)\,dx$$ and $$\mathcal{F}^{-1}(u)(\xi):=\mathcal{F}(u)(-\xi)$$ denote the Fourier and inverse Fourier transform of $u$, respectively.
The function $G_t(x)$ has the following properties (see for example \cite{Evans}).\\ \begin{proposition}\label{gauss-prop} The Gaussian kernel satisfies: \begin{enumerate}[$(i)$] \item $\displaystyle G_t(x)>0$. \item $\displaystyle \int_{\mathbb{{R}}^N}G_t(x)\,dx=1$. \item $\displaystyle\mathcal{F}(G_t)(\xi)=e ^{-t\vert\xi \vert^2},$ \quad $\xi\in\mathbb{{R}}^N.$ \item $\displaystyle \int_{\mathbb{{R}}^N}|x|^2 G_{t}(x)\,dx=2Nt.$ \end{enumerate} \end{proposition} \subsection{Discrete diffusion equation.} In \cite{Abadia-Alvarez-20}, for $h>0$ the authors defined the heat kernel in discrete time on the mesh of step $h$ as \begin{equation}\label{discretegaussian} \mathcal{G}_{n,h}(x):= \frac{1}{h^n\Gamma(n)} \int_0^\infty e^{-t/h} t^{n-1} G_t(x)\,dt,\quad n\in\mathbb{{N}},\,x\in\mathbb{{R}}^N\setminus\{0\}. \end{equation} Moreover, they proved the following proposition. \begin{proposition}\label{DG-Properties} The function $\mathcal{G}_{n,h}$ satisfies: \begin{enumerate}[$(i)$] \item $\displaystyle\mathcal{G}_{n,h}(x)>0,\quad n\in\mathbb{{N}},\,x \in \mathbb{{R}}^N.$ \item $\displaystyle \int_{\mathbb{{R}}^N}^{}\mathcal{G}_{n,h}(x)\ dx=1$. \item $\displaystyle\mathcal{F}({\mathcal{G}}_{n,h})(\xi)=\frac{1}{(1+h|\xi|^2)^n},\quad \xi\in\mathbb{{R}}^N.$ \item $\dfrac{\mathcal{G}_{n,h}(x)-\mathcal{G}_{n-1,h}(x)}{h} = \Delta \mathcal{G}_{n,h}(x),\quad n\geq 2,\ x\in\mathbb{{R}}^N.$ \end{enumerate} \end{proposition} Next, given a function $f$ defined on $\mathbb{{R}}^N$ and $h>0$, Abadias and Alvarez proved (see \cite{Abadia-Alvarez-20}) that $w(nh,x):=(\mathcal{G}_{n,h}\ast f)(x)$ is the unique solution of the problem \begin{equation}\label{SolFirtOrderEquation} \left\{ \begin{array}{lll} \dfrac{w(nh,x)-w((n-1)h,x)}{h}= \Delta w(nh,x) ,\quad n\in\mathbb{{N}},\,x\in\mathbb{R}^N\setminus\{0\}, \\ \\ w(0,x) =f(x). \end{array} \right.
\end{equation} \section{Discrete fractional calculus}\label{Section 3} In this section we recall the definition of the Ces\`aro numbers and some of their useful properties. Also, we introduce the discrete time setting in which we will work, and the corresponding associated fractional calculus. For an arbitrary $\alpha\in\mathbb{{C}},$ we denote by $k^{\alpha}(n)$ the Ces\`aro numbers, which are the Fourier coefficients of the function $(1-z)^{-\alpha}$, holomorphic on the disc, that is, \begin{align}\label{ZT-K} \frac{1}{(1-z)^{\alpha}}=\sum_{n=0}^{\infty}k^{\alpha}(n)z^n. \end{align} It is known that the expression of the Ces\`aro numbers is given by \begin{equation}\label{kernel_k_1} k^{\alpha}(n):=\left\{ \begin{array}{lcc} \dfrac{\alpha(\alpha+1)\cdots ( \alpha+n-1)}{n!}, & \qquad n\in\mathbb{{N}}, \\ \\ \qquad \qquad \qquad 1, & \qquad n= 0. \end{array} \right. \end{equation} Note that $k^0(n):= \delta_{0}(n)$ is the Kronecker delta. Sometimes, to make computations with the Ces\`aro sequence $k^{\alpha}$ easier, for $\alpha\in\mathbb{{C}} \setminus \{0,-1,-2,...\}$ we use an equivalent expression of \eqref{kernel_k_1}, namely \begin{equation}\label{kernel_k} k^{\alpha}(n)= \dfrac{\Gamma(n+\alpha)}{\Gamma(\alpha)\Gamma(n+1)}, \end{equation} where $\Gamma(\cdot)$ is the gamma function. We recall the following properties of $k^{\alpha}$, which appear for example in \cite{GoLi19,PAMS,Zygmund}. \begin{proposition}\label{prop-Cesaro} The following properties hold: \begin{enumerate}[$(i)$] \item For $\alpha>0$, $k^{\alpha}(n)>0$, $n\in\mathbb{{N}}_0$. \item For all $\alpha,\beta\in\mathbb{{C}}$, we have the semigroup property \begin{align}\label{semigroup-k} \sum_{j=0}^{n} k^{\alpha}(n-j) k^{\beta} (j)=k^{\alpha+\beta}(n). \end{align} \item For $\alpha>0,$ \begin{align}\label{asin-kernel} k^{\alpha}(n)=\frac{n^{\alpha-1}}{\Gamma(\alpha)}\left( 1 + \mathcal{O}\left(\frac{1}{n}\right)\right), \quad n\in\mathbb{{N}}.
\end{align} \item For $\alpha>0$ \begin{align}\label{inde-k} k^{\alpha}(n+1)=\dfrac{\alpha+n}{n+1}k^{\alpha}(n). \end{align} \end{enumerate} \end{proposition} Let $h>0$ and $f$ be a sequence defined on $\mathbb{{N}}_0^h:=\{0,h,2h,\ldots\}.$ The backward difference of the sequence $f$ is defined by \begin{center} $\delta_{\text{left}} f(nh):=\dfrac{f(nh)-f((n-1)h)}{h},$ \quad $n\in \mathbb{{N}}.$ \end{center} Taking into account the previous definition, we get the following result. \begin{proposition}\label{delta-k} Let $0<\alpha<1$ and $\rho_{\alpha}(nh):=h^{\alpha}k^\alpha(n)$. Then, $$\delta_{\text{left}} \rho_{\alpha}(nh)=h^{\alpha-1}k^{\alpha-1}(n), \ n\in\mathbb{{N}}.$$ \end{proposition} \begin{proof} The result follows from \eqref{inde-k} and the property $z\Gamma(z)=\Gamma(z+1)$ of the Gamma function. \end{proof} The definitions of the fractional sum and fractional difference operators (in the sense of Riemann-Liouville and Caputo) with the sequence $ k^\alpha$ were initially proposed by C. Lizama in \cite{Lizama-(2015)}. Recently, R. Ponce gave a generalization in \cite{Ponce-(2020)}. We make a slight modification to this definition (since our index starts at one).\\ \begin{definition} Let $f$ be a sequence defined on $\mathbb{{N}}_0^h$. For $\alpha \geq 0$, the $\alpha$-th fractional sum of $f$ is defined by means of the formula $$\delta^{-\alpha}f(nh):=h^{\alpha}\displaystyle\sum_{j=1}^n k^{\alpha}(n-j)f(jh),\quad n\in\mathbb{{N}}.$$ Note that, for $\alpha=0$, $\delta^{-\alpha}f(nh)=f(nh).$ \end{definition} As a direct consequence of the previous definition, the operator $\delta^{-\alpha}$ satisfies the semigroup property: \begin{align*} \delta^{-\alpha}\delta^{-\beta}f(nh)=\delta^{-(\alpha+\beta)}f(nh), \ n\in\mathbb{{N}}. \end{align*} \begin{definition}\label{DRL} Let $0<\alpha<1$ and $f$ be a sequence defined on $\mathbb{{N}}_0^h$.
The $\alpha$-th fractional $h$-difference in the sense of Riemann-Liouville of $f$ is defined by \begin{align}\label{RL} {_{RL}}\delta^{\alpha} f(nh):=\delta_{\text{left}} \delta^{-(1-\alpha)}f(nh),\qquad n\in\mathbb{{N}}. \end{align} \end{definition} \begin{proposition}\label{SumAndDeltaRL} For $0<\alpha<1$ the relation $$\delta^{-\alpha} {_{RL}}\delta^{\alpha} f(nh)=f(nh), \ n\in\mathbb{{N}},$$ holds. \end{proposition} \begin{proof} First of all, we adopt the notation $\sum^{0}_{j=1} f(jh)=0$. Then, by \eqref{semigroup-k}, we get \begin{align*} \delta^{-\alpha} {_{RL}}\delta^{\alpha} f(nh)&=h^\alpha \sum^{n}_{j=1} k^{\alpha}(n-j){_{RL}}\delta^{\alpha} f(jh)\\ &=h^\alpha \sum^{n}_{j=1} k^{\alpha}(n-j)\delta_{\text{left}} \delta^{-(1-\alpha)}f (jh) \\ &=h^{\alpha-1} \sum^{n}_{j=1} k^{\alpha}(n-j) \delta^{-(1-\alpha)}f (jh)-h^{\alpha-1} \sum^{n}_{j=1} k^{\alpha}(n-j) \delta^{-(1-\alpha)}f ((j-1)h)\\ &= \sum^{n}_{j=1} k^{\alpha}(n-j)\sum^{j}_{i=1}k^{1-\alpha}(j-i) f (ih)-\sum^{n}_{j=1} k^{\alpha}(n-j) \sum^{j-1}_{i=1}k^{1-\alpha}(j-1-i)f (ih)\\ &= \sum^{n}_{j=1} k^{\alpha}(n-j)\sum^{j}_{i=1}k^{1-\alpha}(j-i) f (ih)-\sum^{n}_{j=2} k^{\alpha}(n-j) \sum^{j-1}_{i=1}k^{1-\alpha}(j-1-i)f (ih)\\ &= \sum^{n-1}_{j=0} k^{\alpha}(n-1-j)\sum^{j}_{i=0}k^{1-\alpha}(j-i) f ((i+1)h)\\ &\qquad \qquad -\sum^{n-2}_{j=0} k^{\alpha}(n-2-j) \sum^{j}_{i=0}k^{1-\alpha}(j-i)f ((i+1)h)\\ &= \sum^{n-1}_{i=0} f ((i+1)h) - \sum^{n-2}_{i=0}f ((i+1)h)=f(nh). \end{align*} Consequently, we have $\delta^{-\alpha} {_{RL}}\delta^{\alpha} f(nh)=f(nh)$ for all $n\in \mathbb{{N}}.$ \end{proof} \begin{definition}\label{Caputo} Let $0<\alpha<1$ and $f$ be a sequence defined on $\mathbb{{N}}_0^h$. The Caputo fractional difference of order $\alpha$ is defined by \begin{equation}\label{Caputo} {_C}\delta^{\alpha} f(nh):=\delta^{-(1-\alpha)}\delta_{\text{left}} f(nh),\quad n\in\mathbb{{N}}.
\end{equation} \end{definition} Note that the previous definition gives ${_C}\delta^{\alpha}=\delta_{\text{left}}$ for $\alpha=1.$ The next result shows the relation between the Caputo and the Riemann-Liouville fractional differences. \begin{proposition}\label{relation-CR} Let $0<\alpha< 1$. Then the following identity holds $${_C}\delta^{\alpha} f(nh)={_{RL}}\delta^{\alpha}(f(nh)-f(0)),\qquad n\in\mathbb{{N}}.$$ \end{proposition} \begin{proof} By \eqref{Caputo} and \eqref{RL}, \begin{eqnarray*} {_C}\delta^{\alpha} f(nh)&=&h^{1-\alpha}\sum^{n}_{j=1}k^{1-\alpha}(n-j)\delta_{\text{left}} f(jh) \\ &=&h^{-\alpha}\sum^{n}_{j=1}k^{1-\alpha}(n-j)f(jh) -h^{-\alpha}\sum^{n}_{j=1}k^{1-\alpha}(n-j)f((j-1)h) \\ &=&h^{-\alpha}\sum^{n}_{j=0}k^{1-\alpha}(n-j)f(jh) -h^{-\alpha}\sum^{n-1}_{j=0}k^{1-\alpha}(n-1-j)f(jh)-h^{-\alpha}k^{1-\alpha}(n)f(0) \\ &=&h^{-\alpha}\sum^{n}_{j=0}k^{1-\alpha}(n-j)f(jh) -h^{-\alpha}\sum^{n-1}_{j=0}k^{1-\alpha}(n-1-j)f(jh)\\ & &\hspace*{3cm} -h^{-\alpha}f(0)\left(\sum_{j=0}^{n} k^{1-\alpha}(n-j)-\sum_{j=0}^{n-1} k^{1-\alpha}(n-1-j) \right) \\ &=&h^{-\alpha}\sum^{n}_{j=0}k^{1-\alpha}(n-j)(f(jh)-f(0)) -h^{-\alpha}\sum^{n-1}_{j=0}k^{1-\alpha}(n-1-j)(f(jh)-f(0)), \end{eqnarray*} which equals ${_{RL}}\delta^{\alpha}(f(nh)-f(0))$ since the terms with $j=0$ vanish. \end{proof} \begin{corollary} Let $0<\alpha<1$ and $f$ be a sequence defined on $\mathbb{{N}}^{h}_0$. Then the following identity holds \begin{align*} \delta^{-\alpha} {_C}\delta^{\alpha} f(nh)=f(nh)-f(0). \end{align*} \end{corollary} \begin{proof} The result is immediate from Propositions \ref{SumAndDeltaRL} and \ref{relation-CR}. \end{proof} Proposition \ref{relation-CR} allows us to establish the following property relating the fractional difference operators. \begin{proposition} For all $0<\alpha<1$ and $\beta \geq 0$ the following relation holds \begin{align*} \delta^{-(\beta+1)} {_{RL}\delta}^{1-\alpha}f(nh)= \delta^{-(\beta+\alpha)} f(nh), \qquad n\in\mathbb{{N}}.
\end{align*} \end{proposition} \begin{proof} By Proposition \ref{relation-CR}, we have \begin{align*} \delta^{-(\beta+1)} {_{RL}\delta}^{1-\alpha}f(nh)&=h^{\beta+1}\sum^{n}_{j=1}k^{\beta+1}(n-j){_{RL}\delta}^{1-\alpha}f(jh) \\ &=h^{\beta+1}\sum^{n}_{j=1}k^{\beta+1}(n-j){_{C}\delta}^{1-\alpha}f(jh) +h^{\beta+\alpha}\sum^{n}_{j=1}k^{\beta+1}(n-j) k^{\alpha}(j-1)f(0) \\ &=h^{\beta+1}\sum^{n}_{j=1}k^{\beta+1}(n-j){_{C}\delta}^{1-\alpha}f(jh) +h^{\beta+\alpha}\sum^{n-1}_{j=0}k^{\beta+1}(n-1-j) k^{\alpha}(j)f(0) \\ &= \delta^{-(\beta+\alpha)} \delta^{-1} \delta_{\text{left}} f(nh) +h^{\alpha+\beta} \sum^{n-1}_{j=0} k^{\alpha+\beta}(n-1-j)f(0) \\ &= \delta^{-(\beta+\alpha)} f(nh)- \delta^{-(\beta+\alpha)} f(0)+h^{\alpha+\beta} \sum^{n}_{j=1} k^{\alpha+\beta}(n-j)f(0) \\ &= \delta^{-(\beta+\alpha)} f(nh). \end{align*} \end{proof} \section{Special functions and subordination formulae}\label{dsftsf} In this section we introduce a discrete version of the Mittag-Leffler and scaled Wright functions, which generalize the ones defined in \cite{Alvarez-Diaz-Lizama-2020}. We also present some of their properties, which will be useful throughout the paper. Let $\alpha,\beta,h>0$ and $\lambda\in\mathbb{{C}}$. The Mittag-Leffler sequences are given by \begin{align}\label{MT-discrete} \mathcal{E}^{h}_{\alpha,\beta}(\lambda,n):=\frac{h^{\beta}}{(n-1)!}\sum_{j=0}^{\infty}\frac{\Gamma(\alpha j+\beta+n-1)}{\Gamma(\alpha j+\beta)}(h^{\alpha}\lambda)^{j}, \quad n\in\mathbb{{N}}, \quad |\lambda|<\frac{1}{h^{\alpha}}. \end{align} The convergence of the previous series can be justified by \eqref{asin-kernel}.
Using the Ces\`aro numbers \eqref{kernel_k}, one can rewrite \eqref{MT-discrete} as $$\mathcal{E}^{h}_{\alpha,\beta}(\lambda,n)=\sum_{j=0}^{\infty}h^{\alpha j+\beta}k^{\alpha j + \beta}(n-1)\lambda^{j}, \quad n\in\mathbb{{N}}, \quad |\lambda|<\frac{1}{h^{\alpha}}.$$ In particular, note that $$ \mathcal{E}^{h}_{1,1}(\lambda,n)=h\sum_{j=0}^{\infty}(h\lambda)^{j}k^{ j+1 }(n-1)=h\sum_{j=0}^{\infty}(h\lambda)^{j}k^{ n }(j)=\frac{1}{h^{n-1}}(1/h-\lambda)^{-n},\quad |\lambda|<\frac{1}{h}.$$ \begin{proposition} Let $0<\alpha<1$, $h>0$ and $\lambda\in\mathbb{{C}}$ such that $|\lambda|<\frac{1}{h^{\alpha}}$. The sequence \begin{align*} \varepsilon(nh):=\left\{ \begin{array}{lcc} \frac{1}{h}\mathcal{E}^{h}_{\alpha,1}(\lambda,n), &\qquad n\in\mathbb{{N}}, \\ \\ 1, &\qquad n= 0 \end{array} \right. \end{align*} is a solution of the fractional difference problem \begin{align*} _{C}\delta^{\alpha}\varepsilon(nh)=\lambda \varepsilon(nh), \qquad n\in\mathbb{{N}}. \end{align*} \end{proposition} \begin{proof} For $n\in\mathbb{{N}}$, we have \begin{align*} h\,\delta^{-(1-\alpha)} \varepsilon(nh)&=h^{1-\alpha}\sum^{n}_{j=1}k^{1-\alpha}(n-j) \mathcal{E}^{h}_{\alpha,1}(\lambda,j) \\ &=h^{1-\alpha}\sum^{n}_{j=1}k^{1-\alpha}(n-j) \sum_{w=0}^{\infty}h^{\alpha w+1}k^{\alpha w + 1}(j-1)\lambda^{w}\\ &=\sum_{w=0}^{\infty}h^{2-\alpha+\alpha w}\lambda^{w} \sum^{n}_{j=1} k^{1-\alpha}(n-j)k^{\alpha w + 1}(j-1)\\ &=\sum_{w=0}^{\infty}h^{2-\alpha+\alpha w}\lambda^{w}\sum^{n-1}_{j=0} k^{1-\alpha}(n-1-j)k^{\alpha w + 1}(j)\\ &=\sum_{w=0}^{\infty}h^{2-\alpha+\alpha w} k^{2-\alpha+\alpha w}(n-1)\lambda^{w}\\ &=\sum_{w=1}^{\infty}h^{2-\alpha+\alpha w} k^{2-\alpha+\alpha w}(n-1)\lambda^{w}+h^{2-\alpha} k^{2-\alpha}(n-1), \end{align*} where we have used \eqref{semigroup-k}.
Now, by Proposition \ref{delta-k}, we get \begin{align*} \delta_{\text{left}}h^{2-\alpha} k^{2-\alpha}(n-1)=h^{1-\alpha} k^{1-\alpha}(n-1) \end{align*} and \begin{align*} \delta_{\text{left}} h^{2-\alpha+\alpha w} k^{2-\alpha+\alpha w}(n-1)=h^{1-\alpha+\alpha w} k^{1-\alpha+\alpha w}(n-1). \end{align*} Then, \begin{align*} {_{RL}}\delta^{\alpha} \varepsilon(nh)=\frac{\lambda}{h}\sum_{w=0}^{\infty}h^{1+\alpha w} k^{1+\alpha w}(n-1)\lambda^{w}+h^{-\alpha} k^{1-\alpha}(n-1)\varepsilon(0)=\lambda\varepsilon(nh)+h^{-\alpha} k^{1-\alpha}(n-1)\varepsilon(0). \end{align*} Since ${_{RL}}\delta^{\alpha}$ applied to the constant sequence $\varepsilon(0)$ equals $h^{-\alpha} k^{1-\alpha}(n-1)\varepsilon(0)$, Proposition \ref{relation-CR} gives ${_C}\delta^{\alpha}\varepsilon(nh)=\lambda\varepsilon(nh)$, and the result follows. \end{proof} Next, let us define the discrete scaled Wright function. \begin{definition}\label{Def-Wright-discreta} Let $0 <\alpha< 1$ and $0 \leq \beta$ be given. For $n\in \mathbb{{N}}_{0}$ and $h>0$, the discrete scaled Wright function $\varphi^{h}_{\alpha,\beta}$ is defined by \begin{equation}\label{Wright-discreta} \varphi^h_{\alpha,\beta}(n,j):= \frac{1}{2 \pi i} \displaystyle\int_{\Upsilon} \frac{1}{z^{n+1}}\frac{\left(1-h(\frac{1-z}{h})^{\alpha} \right)^{j}}{(\frac{1-z}{h})^{\beta}}\,dz, \quad j\in\mathbb{N}_0, \end{equation} where $\Upsilon$ is the path oriented counterclockwise given by the circle centered at the origin with radius $0<r<1.$ Note that if $|z|<1,$ then $\frac{1-z}{h}$ belongs to the disc centered at $1/h$ with radius $1/h.$ So, for each $j\in\mathbb{{N}}_0,$ the function $z\mapsto \frac{\left(1-h(\frac{1-z}{h})^{\alpha} \right)^{j}}{(\frac{1-z}{h})^{\beta}}$ is holomorphic on the unit disc. Therefore, by the Cauchy formula for derivatives, $\varphi^h_{\alpha,\beta}(n,j)$ is the $n$-th coefficient of the power series at the origin of this holomorphic function. \end{definition} In the following proposition, we present some useful properties of the discrete scaled Wright function $\varphi^h_{\alpha,\beta}$.
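Before stating them, note that Definition \ref{Def-Wright-discreta} can be checked numerically: the contour integral \eqref{Wright-discreta} is a Taylor coefficient, which a trapezoidal rule on $|z|=r$ recovers with spectral accuracy, while expanding the binomial $\left(1-h(\frac{1-z}{h})^{\alpha}\right)^{j}$ termwise yields a finite closed sum. A sketch, again assuming the Cesàro kernel $k^{\gamma}(m)=\Gamma(m+\gamma)/(\Gamma(\gamma)\,m!)$:

```python
import cmath
from math import comb, gamma

def k(a, m):
    # assumed Cesàro kernel; Gamma(a) may be negative for negative non-integer a
    return gamma(m + a) / (gamma(a) * gamma(m + 1))

def phi_contour(al, be, h, n, j, N=4096, r=0.5):
    # n-th Taylor coefficient of z -> (1 - h ((1-z)/h)^al)^j / ((1-z)/h)^be,
    # approximated by the trapezoidal rule on the circle |z| = r (the contour Upsilon)
    total = 0j
    for m in range(N):
        z = r * cmath.exp(2j * cmath.pi * m / N)
        u = (1 - z) / h  # stays in the right half-plane, so principal powers are fine
        total += (1 - h * u**al)**j / u**be / z**n
    return (total / N).real

def phi_binom(al, be, h, n, j):
    # finite sum obtained by expanding the binomial and reading off coefficients
    return h**be * sum(comb(j, i) * (-1)**i * h**(i - al * i) * k(be - al * i, n)
                       for i in range(j + 1))

al, be, h = 0.5, 0.7, 0.5
for n, j in [(0, 0), (3, 1), (6, 3), (4, 5)]:
    assert abs(phi_contour(al, be, h, n, j) - phi_binom(al, be, h, n, j)) < 1e-8
```

Small values of $j$ are used on purpose: the alternating binomial sum suffers catastrophic cancellation in floating point for large $j$.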
Many of them parallel the analogous properties in the continuous case, see \cite[Theorem 3]{Abadia-Miana}.\\ \begin{proposition}\label{Prop-Levy} Let $0 <\alpha < 1$, $0\leq \beta$, $0<h$ and $n,j\in\mathbb{{N}}_0$. The following properties hold: \begin{enumerate}[$(i)$] \item $\varphi^{h}_{\alpha,\beta}(n,j)= h^{\beta}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n).$ \item $\varphi^h_{\alpha,\beta+\gamma}(n,j)=h^{\beta} \displaystyle\sum_{i=0}^{n} k^{\beta }(n-i) \varphi^{h}_{\alpha,\gamma}(i,j), \quad \gamma>0.$ \item $ \dfrac{1}{h^{n}\Gamma(n)}\displaystyle\int_0^{\infty}e^{-s/h}s^{n-1} \psi_{\alpha,\beta}(s,t) \ ds= e^{-t/h}\displaystyle\sum_{j=1}^{\infty} \varphi^h_{\alpha,\beta}(n-1,j-1)\frac{t^{j-1}}{ h^{j}(j-1)!},\ t>0 ,$\\ \\ where $\psi_{\alpha,\beta}$ is given by \eqref{ScaledWright}. \item $\displaystyle\sum^{\infty}_{j=1}\varphi^{h}_{\alpha,\beta}(n-1,j-1)\frac{1}{h^{j}}\left(\frac{1}{h}-\lambda\right)^{-j}=\frac{1}{h}\mathcal{E}^h_{\alpha,\alpha+\beta}(\lambda,n)$, \ $n\in\mathbb{{N}}, \ \vert \lambda\vert<\dfrac{1}{h^{\alpha}}$. \item $\varphi^{h}_{\alpha,\beta}(n,j)-\varphi^{h}_{\alpha,\beta}(n,j+1) =h\varphi^{h}_{\alpha,\beta-\alpha}(n,j).$ \item $\varphi^h_{\alpha,0}(n,j+1)=\displaystyle\sum_{p=0}^{n}\varphi^h_{\alpha,0}(n-p,j)\varphi^h_{\alpha,0}(p,1)$. \item $\varphi^{h}_{\alpha,\beta}(n,j)\geq 0, \quad 0<h\leq 1.$ \item $\displaystyle\sum_{i=0}^{\infty} \varphi^h_{\alpha,0}(i,j)=1$. \item $\displaystyle\sum_{j=0}^{\infty} \varphi^h_{\alpha, \beta}(n,j)k^{\gamma}(j)=h^{\beta+\gamma(\alpha-1)}k^{\beta+\gamma\alpha}(n)$.
\end{enumerate} \end{proposition} \begin{proof}~ \begin{enumerate}[$(i)$] \item Note that for $|z|<1$ we can write $$ h^{\beta}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}(1-z)^{\alpha i-\beta}=\frac{1}{(\frac{1-z}{h})^{\beta}}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i}\left(\frac{1-z}{h}\right)^{\alpha i}=\frac{\left(1-h(\frac{1-z}{h})^{\alpha} \right)^{j}}{(\frac{1-z}{h})^{\beta}}.$$ By the uniqueness of the coefficients we have the result. \item The identity follows from the previous item $(i)$ and \eqref{semigroup-k}. Indeed, \begin{align*} \varphi^{h}_{\alpha,\beta+\gamma}(n,j)=&h^{\beta+\gamma}\displaystyle\sum^{j}_{w=0} \binom{j}{w}(-1)^{w}h^{w-\alpha w}k^{\beta+\gamma-\alpha w}(n)\\ =&h^{\beta+\gamma}\displaystyle\sum^{j}_{w=0} \binom{j}{w}(-1)^{w}h^{w-\alpha w} \sum^{n}_{i=0} k^{\beta}(n-i)k^{\gamma-\alpha w}(i) \\ =&h^{\beta+\gamma}\sum^{n}_{i=0} k^{\beta}(n-i)\displaystyle\sum^{j}_{w=0} \binom{j}{w}(-1)^{w}h^{w-\alpha w} k^{\gamma-\alpha w}(i) \\ =& h^{\beta} \displaystyle\sum_{i=0}^{n} k^{\beta }(n-i) \varphi^{h}_{\alpha,\gamma}(i,j). \end{align*} \item Note that, by \eqref{ScaledWright} \begin{align*} \frac{1}{h^{n}\Gamma(n)}\displaystyle\int_0^{\infty}e^{-s/h}s^{n-1}\psi_{\alpha,\beta}(s,t) \ ds &= \frac{1}{h^{n}\Gamma(n)}\displaystyle\int_0^{\infty}e^{-s/h}s^{n+\beta-2} W_{-\alpha,\beta} (-ts^{-\alpha})\ ds\\ &=\frac{1}{h^{n}}\displaystyle\sum_{i=0}^{\infty}\dfrac{(-t)^{i}}{\Gamma(n)\Gamma(-\alpha i+\beta)i!}\displaystyle\int_0^{\infty} e^{-s/h}s^{n+\beta-2-\alpha i} \ ds \\ &=h^{\beta-1} \displaystyle\sum_{i=0}^{\infty}h^{-\alpha i}\dfrac{\Gamma(n-1+\beta-\alpha i)}{\Gamma(n)\Gamma(-\alpha i+\beta)}\frac{(-t)^{i}}{i!} \\ &=h^{\beta-1} \displaystyle\sum_{i=0}^{\infty}h^{-\alpha i}k^{\beta-\alpha i}(n-1)\frac{(-t)^{i}}{i!}.
\end{align*} On the other hand, by $(i)$ we obtain \begin{align*} \sum_{j=1}^{\infty} \varphi^{h}_{\alpha,\beta}(n-1,j-1)&\frac{t^{j-1}}{h^{j}(j-1)!}=h^{\beta}\sum_{j=1}^{\infty}\displaystyle\sum^{j-1}_{i=0} \binom{j-1}{i}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n-1)\frac{t^{j-1}}{h^{j}(j-1)!}\\ &=h^{\beta-1}\sum_{j=0}^{\infty} \displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n-1)\left(\frac{t}{h}\right)^{j}\frac{1}{j!} \\ &=h^{\beta-1}\sum_{i=0}^{\infty}\sum_{j=i}^{\infty} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n-1) \left(\frac{t}{h}\right)^{j}\frac{1}{j!} \\ &=h^{\beta-1}\sum_{i=0}^{\infty} (-1)^{i}h^{i-\alpha i} k^{\beta-\alpha i}(n-1) \sum_{j=i}^{\infty} \binom{j}{i}\left(\frac{t}{h}\right)^{j}\frac{1}{j!} \\ &=h^{\beta-1}\sum_{i=0}^{\infty} (-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n-1) \sum_{j=i}^{\infty} \frac{1}{i!(j-i)!} \left(\frac{t}{h}\right)^{j}\\ &=h^{\beta-1}\sum_{i=0}^{\infty} (-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n-1) \sum_{j=0}^{\infty} \frac{1}{i!j!} \left(\frac{t}{h}\right)^{j+i}\\ &=h^{\beta-1}\sum_{i=0}^{\infty} (-1)^{i}h^{-\alpha i}k^{\beta-\alpha i}(n-1) \frac{t^{i} }{i!} e^{t/h}, \end{align*} for all $n\in\mathbb{{N}}$. Thus, the result is proved. \item Let $h>0$ and $\lambda\in\mathbb{{C}}$ such that $\vert \lambda\vert<\dfrac{1}{h^{\alpha}}$. 
Then, \begin{align*} \displaystyle\sum^{\infty}_{j=1}\varphi^{h}_{\alpha,\beta}(n-1,j-1)\frac{1}{h^{j}}\left(\frac{1}{h}-\lambda\right)^{-j}&=\displaystyle\sum^{\infty}_{j=1}\varphi^{h}_{\alpha,\beta}(n-1,j-1)\frac{1}{h^{j}}\int^{\infty}_{0}e^{-\left(\frac{1}{h}-\lambda\right)s}\frac{s^{j-1}}{(j-1)!}ds\\ &= \int^{\infty}_{0}e^{-\left(\frac{1}{h}-\lambda\right)s}\displaystyle\sum^{\infty}_{j=1}\varphi^{h}_{\alpha,\beta}(n-1,j-1)\frac{1}{h^{j}}\frac{s^{j-1}}{(j-1)!}ds \\ &=\frac{1}{h^{n}\Gamma(n)}\int^{\infty}_{0}e^{\lambda s}\displaystyle\int_0^{\infty}e^{-t/h}t^{n-1}\psi_{\alpha,\beta}(t,s) \ dt \ ds \\ &=\frac{1}{h^{n}\Gamma(n)} \displaystyle\int_0^{\infty}e^{-t/h}t^{n-1} t^{\alpha+\beta-1} E_{\alpha,\alpha+\beta}(\lambda t^{\alpha}) \ dt \\ &=\frac{1}{h}\mathcal{E}^h_{\alpha,\alpha+\beta}(\lambda,n), \end{align*} where we have used item $(iii)$, \cite[Theorem 3 (iii)]{Abadia-Miana} and \cite[Theorem 2.8.]{Alvarez-Diaz-Lizama-2020}. \item The result is obtained as follows: \begin{align*} \varphi^{h}_{\alpha,\beta}(n,j)-&\varphi^{h}_{\alpha,\beta}(n,j+1)\\ &= h^{\beta}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i} h^{i-\alpha i}k^{\beta-\alpha i}(n) - h^{\beta} \displaystyle\sum^{j+1}_{i=0} \binom{j+1}{i}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n)\\ &=-h^{\beta}\displaystyle\sum^{j+1}_{i=1} \binom{j}{i-1}(-1)^{i}h^{i-\alpha i}k^{\beta-\alpha i}(n)\\ &=h^{\beta+1-\alpha }\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i} k^{\beta-\alpha (i+1)}(n) \\ &=h\varphi^{h}_{\alpha,\beta-\alpha}(n,j), \end{align*} where we have used \cite[Section 1.4, Eq. (5)]{Aigner-(2006)}. \item By item $(i)$, \cite[Section 1.4, Eq. 
(5)]{Aigner-(2006)} and \eqref{semigroup-k} it follows \begin{small} \begin{align*} \varphi^h_{\alpha,0}(n,j+1)&=\displaystyle\sum^{j+1}_{i=0} \binom{j+1}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n) \\ &=\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n)+\displaystyle\sum^{j+1}_{i=1} \binom{j}{i-1}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n)\\ &=\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n)-\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}h^{1-\alpha }k^{-\alpha (i+1)}(n)\\ &=\displaystyle\sum^{n}_{p=0}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n-p)k^{0}(p)\\ &\qquad \qquad -\displaystyle\sum^{n}_{p=0}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}h^{1-\alpha }k^{-\alpha i}(n-p)k^{-\alpha}(p) \\ &=\displaystyle\sum^{n}_{p=0}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n-p)\left(k^{0}(p)-h^{1-\alpha}k^{-\alpha}(p)\right)\\ &=\displaystyle\sum^{n}_{p=0}\displaystyle\sum^{j}_{i=0} \binom{j}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(n-p) \displaystyle\sum^{1}_{i=0} \binom{1}{i}(-1)^{i}h^{i-\alpha i}k^{-\alpha i}(p) \\ &=\displaystyle\sum^{n}_{p=0} \varphi^h_{\alpha,0}(n-p,j)\varphi^h_{\alpha,0}(p,1). \end{align*} \end{small} \item From Definition \ref{Def-Wright-discreta}, we have that $\varphi^h_{\alpha,0}(n,0)=\delta_0(n)$ for $n\in\mathbb{{N}}_0.$ Furthermore, \begin{align*} \varphi^h_{\alpha,0}(n,1)&=k^{0}(n)-h^{1-\alpha}k^{-\alpha}(n). \end{align*} Then, $$\varphi^h_{\alpha,0}(0,1)=1-h^{1-\alpha} \geq 0 $$ and $$\varphi^h_{\alpha,0}(n,1)=h^{1-\alpha}\frac{\alpha(1-\alpha)(2-\alpha)\cdots(n-1-\alpha)}{n!}\geq 0, \ n\in\mathbb{{N}}.$$ By $(i)$ of the Proposition \ref{prop-Cesaro} and items $(vi)$ and $(ii)$ the result follows. \item The identity is a particular case of \eqref{Wright-discreta}, by letting $z\rightarrow 1^{-}$ with $z\in\mathbb{{R}}$. 
\item By \eqref{Wright-discreta} and \eqref{ZT-K}, we have \begin{align*} \displaystyle\sum_{j=0}^{\infty}\varphi^h_{\alpha,\beta}(n,j) k^{\gamma}(j)&= \frac{1}{2 \pi i} \displaystyle\int_{\Upsilon} \frac{1}{z^{n+1}}\displaystyle\sum_{j=0}^{\infty}\frac{\left(1-h(\frac{1-z}{h})^{\alpha} \right)^{j}}{(\frac{1-z}{h})^{\beta}} k^{\gamma}(j)\,dz \\ &=\frac{1}{h^{\gamma}}\frac{1}{2 \pi i} \displaystyle\int_{\Upsilon} \frac{1}{z^{n+1}}\frac{1}{(\frac{1-z}{h})^{\beta+\gamma\alpha}} \,dz \\ &=h^{\beta+\gamma(\alpha-1)}k^{\beta+\gamma\alpha}(n). \end{align*} \end{enumerate} \vspace*{-1cm} \end{proof} \begin{remark} Let $0<\alpha<1$. Taking $\lambda=0$ in Proposition \ref{Prop-Levy}-$(iv)$, we have \begin{align}\label{DensityInJ} \displaystyle\sum^{\infty}_{j=0}\varphi^{h}_{\alpha,1-\alpha}(n-1,j)=1, \qquad n\in\mathbb{{N}}. \end{align} \end{remark} \begin{remark} Some of the results obtained in the previous proposition can be found in the work carried out by Alvarez et al. in \cite{Alvarez-Diaz-Lizama-2020} with $h=1$. \end{remark} \section{Fundamental solution}\label{homogeneo} Here we investigate the representation of the solution to the fractional diffusion equation \eqref{1.2}, and we prove several interesting properties related to it. Let us start by recalling the problem. Let $h>0$ and $0<\alpha< 1$. Consider the fractional diffusion equation in discrete time on the Lebesgue $L^p(\mathbb{{R}}^N)$ spaces, given by \begin{equation}\label{Main} \left\{ \begin{array}{lll} {_C}\delta^{\alpha} u(nh,x) =\Delta u(nh,x),\quad n\in\mathbb{{N}},\,x\in\mathbb{R}^N, \\ \\ u(0,x) =f(x),\\ \end{array} \right. \end{equation} where $u$ and $f$ are functions defined on $\mathbb{{N}}^{h}_0\times\mathbb{{R}}^{N}$ and $\mathbb{{R}}^{N}$ respectively.
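On the Fourier side, $\Delta$ acts as multiplication by $\lambda=-\vert\xi\vert^{2}$, so \eqref{Main} reduces to the scalar eigenvalue problem ${_C}\delta^{\alpha}\varepsilon(nh)=\lambda\varepsilon(nh)$, $\varepsilon(0)=1$. The sketch below checks this numerically for the symbol $\varepsilon_{n}=\frac{1}{h}\mathcal{E}^{h}_{\alpha,1}(\lambda,n)$; the factor $1/h$ is our normalization, chosen to match the Fourier transform computed later in Proposition \ref{HeatProperties}-$(iii)$, and the Cesàro kernel is again assumed to be $k^{\gamma}(m)=\Gamma(m+\gamma)/(\Gamma(\gamma)\,m!)$:

```python
from math import gamma

def k(a, m):
    # assumed Cesàro kernel k^a(m) = Gamma(m+a) / (Gamma(a) m!)
    return gamma(m + a) / (gamma(a) * gamma(m + 1))

def ml(al, be, h, lam, n, terms=150):
    # Mittag-Leffler sequence (MT-discrete), written via Cesàro numbers
    return sum(h**(al * j + be) * k(al * j + be, n - 1) * lam**j
               for j in range(terms))

def caputo(f, al, h, n):
    # _C delta^al f(nh) = h^{1-al} sum_{j=1}^n k^{1-al}(n-j) (f[j] - f[j-1]) / h
    return h**(1 - al) * sum(k(1 - al, n - j) * (f[j] - f[j - 1]) / h
                             for j in range(1, n + 1))

al, h, lam = 0.6, 0.2, -1.5            # |h^al * lam| < 1, so the series converges
eps = [1.0] + [ml(al, 1.0, h, lam, n) / h for n in range(1, 9)]
for n in range(1, 9):
    assert abs(caputo(eps, al, h, n) - lam * eps[n]) < 1e-10
```

For $n=1$ this normalization gives $\varepsilon_{1}=(1-h^{\alpha}\lambda)^{-1}$, the familiar one-step implicit scheme.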
Let us define the fundamental solution \begin{align} \label{subordination1} \mathcal{G}^{\alpha}_{n,h}(x):=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\mathcal{G}_{j,h}(x),\quad n\in\mathbb{{N}},\,x\in\mathbb{R}^N, \end{align} where the functions $\mathcal{G}_{n,h}(x)$ denote the discrete Gaussian defined by \eqref{discretegaussian}. The next result shows that $\mathcal{G}^{\alpha}_{n,h}\ast f$ is the solution of \eqref{Main}.\\ \begin{theorem}\label{Theo-Sol} Let $f$ be a function in $L^p(\mathbb{{R}}^N).$ For $h>0$ and $0<\alpha< 1$, the function \begin{align}\label{sol} u(nh,x):=(\mathcal{G}_{n,h}^{\alpha}*f)(x) \end{align} is the unique solution of the fractional diffusion equation in discrete time \eqref{Main} on the Lebesgue $L^p(\mathbb{{R}}^N)$ spaces. \end{theorem} \begin{proof} First of all, note that by Proposition \ref{DG-Properties}-$(ii)$ and \eqref{DensityInJ}, we can conclude that \begin{equation}\label{Eq5.4} \int_{\mathbb{{R}}^N}\mathcal{G}_{n,h}^{\alpha}(x)\,dx=1. \end{equation} Consequently, we have $\|u(nh,x)\|_p\leq \|f\|_{p}.$ Now, we see that $u$ satisfies \eqref{Main}. Equation \eqref{SolFirtOrderEquation} implies \begin{small} \begin{align*} \Delta u(nh,x)&=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\delta_{\text{left}}(\mathcal{G}_{j,h}\ast f)(x)\\ &=h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}\ast f)(x)-h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j-1,h}\ast f)(x)\\ &=h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}\ast f)(x)-h^{-1}\sum_{j=0}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j)(\mathcal{G}_{j,h}\ast f)(x)\\ &=h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}\ast f)(x)-h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j)(\mathcal{G}_{j,h}\ast f)(x)\\ &\qquad\qquad-h^{-1}\varphi^{h}_{\alpha,1-\alpha}(n-1,0)f(x).
\end{align*} Now, by Proposition \ref{Prop-Levy}-$(ii)$ and \eqref{kernel_k_1}, we have \begin{align*} h^{-1}\varphi^{h}_{\alpha,1-\alpha}(n-1,0)f(x)&=h^{-1}h^{1-\alpha}\sum^{n-1}_{i=0}k^{1-\alpha}(n-1-i)\varphi^{h}_{\alpha,0}(i,0)f(x)\\ &=h^{-\alpha}\sum^{n-1}_{i=0}k^{1-\alpha}(n-1-i)k^{0}(i)f(x)\\ &=h^{-\alpha} k^{1-\alpha}(n-1)f(x). \end{align*} Then, by $(v)$ of Proposition \ref{Prop-Levy}, we get \begin{align*} \Delta u(nh,x)&=h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}\ast f)(x)-h^{-1}\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j)(\mathcal{G}_{j,h}\ast f)(x)\\ &\qquad\qquad-h^{-\alpha} k^{1-\alpha}(n-1)f(x)\\ &=h^{-1}\sum_{j=1}^{\infty}\left( \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)-\varphi^{h}_{\alpha,1-\alpha}(n-1,j)\right)(\mathcal{G}_{j,h}\ast f)(x)-h^{-\alpha} k^{1-\alpha}(n-1)f(x)\\ &=h^{-1}\sum_{j=0}^{\infty}\left( \varphi^{h}_{\alpha,1-\alpha}(n-1,j)-\varphi^{h}_{\alpha,1-\alpha}(n-1,j+1)\right)(\mathcal{G}_{j+1,h}\ast f)(x)-h^{-\alpha} k^{1-\alpha}(n-1)f(x)\\ &=\sum_{j=0}^{\infty} \varphi^{h}_{\alpha,1-2\alpha}(n-1,j) (\mathcal{G}_{j+1,h}\ast f)(x)-h^{-\alpha} k^{1-\alpha}(n-1)f(x).
\end{align*} By the previous identity and $(ii)$ of Proposition \ref{Prop-Levy}, we have that \begin{align*} h^{\alpha}\Delta \sum^{n}_{w=1} &k^\alpha(n-w) u(wh,x)=h^{\alpha}\Delta \sum^{n-1}_{w=0} k^\alpha(n-1-w) u((w +1)h,x)\\ &=h^{\alpha}\sum^{n-1}_{w=0} k^\alpha(n-1-w)\sum_{j=0}^{\infty} \varphi^{h}_{\alpha,1-2\alpha}(w,j) (\mathcal{G}_{j+1,h}\ast f)(x)\\ &\qquad - \sum^{n-1}_{w=0} k^\alpha(n-1-w) k^{1-\alpha}(w)f(x) \\ &=h^{1-\alpha}\sum^{n-1}_{w=0} k^\alpha(n-1-w)\sum_{j=0}^{\infty} \sum_{p=0}^{w} k^{1-2\alpha}(w-p)\varphi^{h}_{\alpha,0}(p,j) (\mathcal{G}_{j+1,h}\ast f)(x) -f(x) \\ &=h^{1-\alpha}\sum_{j=0}^{\infty} \sum^{n-1}_{w=0} k^\alpha(n-1-w)\sum_{p=0}^{w} k^{1-2\alpha}(w-p)\varphi^{h}_{\alpha,0}(p,j) (\mathcal{G}_{j+1,h}\ast f)(x)-f(x) \\ &=h^{1-\alpha}\sum_{j=0}^{\infty} \sum_{p=0}^{n-1} k^{1-\alpha}(n-1-p)\varphi^{h}_{\alpha,0}(p,j) (\mathcal{G}_{j+1,h}\ast f)(x) - f(x) \\ &= \sum_{j=0}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j) (\mathcal{G}_{j+1,h}\ast f)(x) -f(x) \\ &=u(nh,x)- f(x), \end{align*} that is, \begin{align*} u(nh,x)= h^{\alpha} \Delta \sum^{n}_{w=1} k^\alpha(n-w)u(wh,x) +f(x). \end{align*} Now, convolving the above identity by $k^{1-\alpha}$ and multiplying by $h^{-\alpha}$, we obtain \begin{align*} h^{-\alpha}\sum_{j=0}^{n}k^{1-\alpha}(n-j) u(jh,x)&=\Delta \sum_{j=0}^{n} u(j h,x)-\Delta u(0,x)+h^{-\alpha}k^{2-\alpha}(n) f(x)\\ &=\Delta \sum_{j=1}^{n} u(j h,x) +h^{-\alpha}k^{2-\alpha}(n) f(x). \end{align*} By Proposition \ref{delta-k}, we can conclude that \begin{align*} h^{-\alpha}\sum_{j=0}^{n}k^{1-\alpha}(n-j) u(jh,x)-&h^{-\alpha}\sum_{j=0}^{n-1}k^{1-\alpha}(n-1-j) u(jh,x)\\ & \qquad = \Delta u(nh,x)+h^{-\alpha}k^{1-\alpha}(n) f(x). \end{align*} Hence, the result follows from Proposition \ref{relation-CR}. \end{small} \end{proof} In the following results we show other representations for $\mathcal{G}^{\alpha}_{n,h}$.
In the first result, we represent $\mathcal{G}_{n,h}^{\alpha}(x)$ using the Poisson transform of the Gaussian kernel, while in the second one we use the Fox H-function. This in turn gives other representations of the solution \eqref{sol}. \\ \begin{proposition}\label{GaussianSemigroup-FS} Let $h>0$ and $0<\alpha< 1$. Then, \eqref{subordination1} is equivalent to \begin{align}\label{psi-1-alfha} \mathcal{G}^{\alpha}_{n,h}(x)=\frac{1}{h^{n}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty} e^{-s/h}\frac{s^{n-1}}{(n-1)!}\psi_{\alpha,1-\alpha}(s,t)G_{t}(x)\ ds \ dt, \quad n\in\mathbb{{N}}, \ x\in\mathbb{{R}}^N \setminus \{ 0\}, \end{align} where $G_{t}$ is the Gaussian kernel \eqref{gaussian-semigroup} and $\psi_{\alpha,\beta}$ is \eqref{ScaledWright}. \end{proposition} \begin{proof} From Proposition \ref{Prop-Levy} part $(iii)$, we get \begin{align*} \mathcal{G}^{\alpha}_{n,h}(x)&= \sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\mathcal{G}_{j,h}(x)\\ &=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\frac{1}{h^{j}}\displaystyle\int_{0}^{\infty}e^{-t/h}\frac{t^{j-1}}{(j-1)!}G_{t}(x)\ dt \\ &=\displaystyle\int_{0}^{\infty}e^{-t/h}\sum_{j=1}^{\infty}\varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\frac{1}{h^{j}}\frac{t^{j-1}}{ (j-1)!}G_{t}(x)\ dt \\ &=\frac{1}{h^{n}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty} e^{-s/h}\frac{s^{n-1}}{(n-1)!}\psi_{\alpha,1-\alpha}(s,t)G_{t}(x)\ ds \ dt. \end{align*} The result follows. \end{proof} \begin{proposition}\label{FoxH-FS} Let $h>0$ and $0<\alpha< 1$. Then \begin{align*} \mathcal{G}^{\alpha}_{n,h}(x)= \frac{1}{ \Gamma(n)\pi^{N/2}\vert x \vert^{N}} H^{30}_{13}\left[\begin{array}{c} \dfrac{\vert x\vert^{2}}{4h^{\alpha}} \left\vert \begin{array}{c} (1,\alpha) \\ (n,\alpha),(\frac{N}{2},1), (1,1) \end{array} \right. \end{array} \right], \end{align*} where $H^{30}_{13}$ denotes the Fox H-function.
\end{proposition} \begin{proof} By \cite[Theorem 3.1]{Bazhlthesis}, \cite[Theorem 15-$(ii)$.]{Abadia-Miana} and \cite[Theorem 2.12]{Kemppainen}, we have the following subordination formula \begin{align*} \frac{1}{\pi^{N/2}\vert x \vert^{N}}H^{02}_{21}\left[\begin{array}{c} \dfrac{4t^{\alpha}}{\vert x\vert^{2}} \left\vert \begin{array}{c} (1-\frac{N}{2},1),(0,1)\\ (0,\alpha) \end{array} \right. \end{array} \right] &=\displaystyle\int_{0}^{\infty} \psi_{\alpha,1-\alpha}(t,s)G_{s}(x)\ ds. \end{align*} Now, \begin{align*} \mathcal{G}^{\alpha}_{n,h}(x)&=\frac{1}{h^{n}\Gamma(n)\pi^{N/2}\vert x \vert^{N}}\displaystyle\int_{0}^{\infty} e^{-t/h}t^{n-1} H^{12}_{32}\left[\begin{array}{c} \dfrac{4 t^{\alpha}}{\vert x\vert^{2}} \left\vert \begin{array}{c} (1-\frac{N}{2},1),\ (0,1),\ (0,1)\\ (0,1), \ (0,\alpha) \end{array} \right. \end{array} \right] \, dt \\ &=\frac{1}{ \Gamma(n)\pi^{N/2}\vert x \vert^{N}} H^{13}_{42}\left[\begin{array}{c} \dfrac{4 h^{\alpha}}{\vert x\vert^{2}} \left\vert \begin{array}{c} (1-n,\alpha),\ (1-\frac{N}{2},1),\ (0,1), \ (0,1)\\ (0,1), \ (0,\alpha) \end{array} \right. \end{array} \right] \\ &=\frac{1}{ \Gamma(n)\pi^{N/2}\vert x \vert^{N}} H^{03}_{31}\left[\begin{array}{c} \dfrac{4h^{\alpha}}{\vert x\vert^{2}} \left\vert \begin{array}{c} (1-n,\alpha),(1-\frac{N}{2},1), (0,1)\\ (0,\alpha) \end{array} \right. \end{array} \right]\\ &= \frac{1}{ \Gamma(n)\pi^{N/2}\vert x \vert^{N}} H^{30}_{13}\left[\begin{array}{c} \dfrac{\vert x\vert^{2}}{4h^{\alpha}} \left\vert \begin{array}{c} (1,\alpha) \\ (n,\alpha),(\frac{N}{2},1), (1,1) \end{array} \right. \end{array} \right], \end{align*} where we have used Corollary 2.3.1, Proposition 2.2 and Proposition 2.3 of \cite{H-transforms}. 
\end{proof} The following proposition states some basic properties of the fundamental solution.\\ \begin{proposition}\label{HeatProperties} The function $\mathcal{G}^{\alpha}_{n,h}$ satisfies: \begin{enumerate}[$(i)$] \item $\displaystyle\mathcal{G}_{n,h}^{\alpha}(x)>0,\quad n\in\mathbb{{N}}, \ 0<h\leq 1.$ \item $\displaystyle \int_{\mathbb{{R}}^N}\mathcal{G}_{n,h}^{\alpha}(x)\,dx=1$. \item $\displaystyle\mathcal{F}({\mathcal{G}}_{n,h}^{\alpha})(\xi)=\frac{1}{h}\mathcal{E}^h_{\alpha,1}(-\vert\xi \vert^2,n),$ \quad $\xi\in\mathbb{{R}}^N.$ \item $\displaystyle \int_{\mathbb{{R}}^N}|x|^2\mathcal{G}_{n,h}^{\alpha}(x)\,dx=\Gamma(3)Nh^{\alpha}k^{\alpha+1}(n-1).$ \end{enumerate} \end{proposition} \begin{proof} $(i)$ follows from $(vi)$ of Proposition \ref{Prop-Levy} and $(i)$ of Proposition \ref{DG-Properties}. $(ii)$ was shown in the proof of Theorem \ref{Theo-Sol} (see \eqref{Eq5.4}). Next, let us prove $(iii)$. Since $\mathcal{F}({G}_{t})(\xi)=e^{-t|\xi|^2}$ for $\xi\in\mathbb{{R}}^N$ (Proposition \ref{gauss-prop} part $(iii)$), it follows from \cite[Theorem 3]{Abadia-Miana} that \begin{align*} \int_0^{\infty} \psi_{\alpha,1-\alpha}(s,t)e^{-t\vert \xi \vert^{2}}\, dt=E_{\alpha,1}(-\vert \xi\vert^2 s^{\alpha}).
\end{align*} Equation \eqref{MT-discrete} implies that $$\mathcal{F}(\mathcal{G}^{\alpha}_{n,h})(\xi)=\frac{1}{h^n}\int_{0}^{\infty} e^{-s/h}\frac{s^{n-1}}{\Gamma(n)}E_{\alpha,1}(-\vert \xi\vert^2 s^{\alpha})\,ds =\frac{1}{h}\mathcal{E}^h_{\alpha,1}(-\vert\xi \vert^2,n).$$ Finally, by Fubini's Theorem, Proposition \ref{gauss-prop} and \cite[Theorem 3]{Abadia-Miana}, we have that \begin{align*} \displaystyle \int_{\mathbb{{R}}^N}|x|^2\mathcal{G}_{n,h}^{\alpha}(x)\,dx=&2N\frac{1}{h^{n}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty} e^{-s/h}\frac{s^{n-1}}{(n-1)!}\psi_{\alpha,1-\alpha}(s,t)t\, dt\,ds\\ =&2N\Gamma(2)\frac{1}{h^{n}}\displaystyle\int_{0}^{\infty} e^{-s/h}\frac{s^{n-1}}{(n-1)!} g_{\alpha+1}(s) \,ds \\ =&\Gamma(3)Nh^{\alpha}k^{\alpha+1}(n-1). \end{align*} Thus, we get item $(iv)$. \end{proof} \begin{remark}We recall that the total mass and first moment of the function $$w(nh,x)=(\mathcal{G}_{n,h}\ast f)(x)$$ are conserved (see \cite[Remark 2.6]{Abadia-Alvarez-20}). Then, we have that the total mass of the solution of \eqref{Main}, given by $$u(nh,x)=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)(\mathcal{G}_{j,h}\ast f)(x),$$ is conserved. Indeed, \begin{align*} \int_{\mathbb{{R}}^N}u(nh,x)\,dx=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\int_{\mathbb{{R}}^N}(\mathcal{G}_{j,h}\ast f)(x) \,dx=\int_{\mathbb{{R}}^N} f(x)\,dx, \end{align*} where in the last equality we have used \eqref{DensityInJ}. The first moment is also conserved: $$\int_{\mathbb{{R}}^N}x\,u(nh,x)dx=\sum_{j=1}^{\infty} \varphi^{h}_{\alpha,1-\alpha}(n-1,j-1)\int_{\mathbb{{R}}^N}x\,(\mathcal{G}_{j,h}\ast f)(x) \,dx=\int_{\mathbb{{R}}^N}xf (x)\,dx,$$ as long as $(1+|x|)f\in L^{1}(\mathbb{{R}}^N)$. However, just as for $w$, the second moment of $u$ is not conserved in time.
\end{remark} \section{Asymptotic decay and large-time behavior of solutions for the fractional diffusion equation in discrete time} Now we will present the asymptotic decay of the solution of \eqref{Main} (which is given by \eqref{sol}) in $L^p$ spaces and the corresponding large-time behaviour. \subsection{Asymptotic decay}\label{4} In this part we show the following estimates of the fundamental solution $\mathcal{G}_{n,h}^{\alpha}$ in $L^p$-spaces. We also state $L^p$-estimates for $\nabla\mathcal{G}_{n,h}^{\alpha},$ which are useful for studying the large-time behaviour of solutions of \eqref{Main} in Lebesgue spaces.\\ \begin{lemma}\label{G-dacay} Let $0<\alpha<1.$ Then $$ \|\mathcal{G}_{n,h}^{\alpha} \|_{p} \leq C_p\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1-1/p)}},\quad n\in\mathbb{{N}}, $$ for $p\in [1,\infty]$ if $N=1,$ for $p\in[1,\infty)$ if $N=2,$ and for $p\in[1,\frac{N}{N-2})$ if $N>2.$ \end{lemma} \begin{proof} It is well known (see \cite[p.334 (3.326)]{G}) that there exists $C_p$ (independent of $t$) such that $||G_t||_{p} = C_p\frac{1}{t^{\frac{N}{2}(1-\frac{1}{p})}}.$ Then for $n$ large enough and the values of $p$ given in the hypothesis, by \eqref{psi-1-alfha} and \cite[Theorem 3 (vi)]{Abadia-Miana} one gets $$ \|\mathcal{G}^{\alpha}_{n,h}\|_{p}\leq \frac{C_p}{h^n\Gamma(n)}\int_{0}^{\infty}e^{\frac{-t}{h}} t^{n-\alpha\frac{N}{2}(1-\frac{1}{p})-1}\,dt=C_p\frac{\Gamma(n-\alpha\frac{N}{2}(1-\frac{1}{p}))}{h^{\alpha\frac{N}{2}(1-\frac{1}{p})}\Gamma(n)}\leq \frac{C_p}{(nh)^{\alpha\frac{N}{2}(1-\frac{1}{p})}}, $$ where we have applied the asymptotic behaviour of the Gamma function (see \cite{ET}).
Since the function $\mathcal{G}^{\alpha}_{n,h}$ belongs to $L^p(\mathbb{{R}}^N)$ for all $n\in\mathbb{{N}},$ the result is valid for all $n\in\mathbb{{N}}.$ \end{proof} Next, let us present a result about the $L^p$-$L^q$ asymptotic decay for $u.$\\ \begin{theorem}\label{teorema4.1} Let $1\leq q\leq p\leq \infty.$ If $f\in L^q(\mathbb{{R}}^N)$, then the solution $u$ of \eqref{Main} satisfies \begin{itemize} \item[$(i)$] If $q=\infty$ then $ \| u(nh) \|_{\infty} \leq\| f\|_{\infty}. $ \item[$(ii)$] If $1\leq q<\infty$ and $N>2q$, then for each $p\in[q,\frac{Nq}{N-2q})$ \begin{equation}\label{eqdecay} \| u(nh) \|_{p} \leq C_p\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1/q-1/p)}}\| f\|_{q}. \end{equation} \item[$(iii)$] If $1\leq q<\infty$ and $N=2q$, then for each $p\in[q,\infty)$ the estimate \eqref{eqdecay} holds. \item[$(iv)$] If $1\leq q<\infty$ and $N<2q$, then for each $p\in[q,\infty]$ the estimate \eqref{eqdecay} holds. \end{itemize} Here, $C_p$ is a constant independent of $h$ and $n$. \end{theorem} \begin{proof} Take $r\geq 1$ such that $1+1/p=1/q+1/r,$ and apply Young's inequality to get $$\| u(nh) \|_{p} =\| \mathcal{G}_{n,h}^{\alpha}*f\|_{p} \leq \| \mathcal{G}_{n,h}^{\alpha}\|_{r}\| f\|_{q}.$$ Now, we apply Lemma \ref{G-dacay} to estimate $\| \mathcal{G}_{n,h}^{\alpha}\|_{r}.$ For the case $(i)$, if $q=\infty,$ then $p=\infty,r=1,$ and therefore since $\|\mathcal{G}_{n,h}^{\alpha}\|_1=1,$ the result follows. Note that in the case $(ii)$, if $1\leq q<\infty$ and $N>2q,$ then the condition $q\leq p<\frac{Nq}{N-2q}$ implies $1\leq r <\frac{N}{N-2}.$ So, by Lemma \ref{G-dacay} we get the desired estimates. The cases $(iii)$ and $(iv)$ follow in a similar way.
\end{proof} \begin{lemma}\label{Nabla-G-Estimates} Let $0<\alpha<1.$ Then $$ \|\nabla\mathcal{G}_{n,h}^{\alpha} \|_{p} \leq C_p\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1-1/p)+\frac{\alpha}{2}}},\quad n\in\mathbb{{N}}, $$ for $p\in [1,\infty)$ if $N=1,$ and for $p\in[1,\frac{N}{N-1})$ if $N>1.$ \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{G-dacay}, using $\| \nabla G_t \|_{p} = C_p\frac{1}{t^{\frac{N}{2}(1-\frac{1}{p})+1/2}}$ (see \cite[p.334 (3.326)]{G}). \end{proof} \subsection{Large-time behaviour of solutions}\label{5} In this part we study the asymptotic behaviour of the solution $u$ of problem \eqref{Main}. Set $$M:=\int_{\mathbb{{R}}^N} f(x)\,dx.$$ Before showing the main result of this section, we need the following decomposition lemma (see \cite{DZ}).\\ \begin{lemma}\label{decolemma} Suppose $f\in L^1(\mathbb{R}^N)$ such that $\int_{\mathbb{R}^N}|x||f(x)|dx<\infty.$ Then there exists $F\in L^1(\mathbb{R}^N;\mathbb{R}^N)$ such that \begin{equation*} f=\left(\int_{\mathbb{R}^N}f(x)dx\right)\delta_0+\mbox{div}\, F \end{equation*} in the distributional sense and \begin{equation*} \|F\|_{L^1(\mathbb{R}^N;\mathbb{R}^N)}\leq C_d\int_{\mathbb{R}^N}|x||f(x)|dx. \end{equation*} \end{lemma} \begin{theorem}\label{Theorem5.1} Let $1\leq p\leq \infty$ and $u$ be the solution of \eqref{Main}. \begin{itemize} \item[$(i)$] Then $$(nh)^{\frac{\alpha N}{2}\left(1-\frac{1}{p}\right)}\|u(nh)-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\to 0,\quad \mbox{as}\quad n\to\infty,$$ for $p\in [1,\infty)$ if $N=1,$ and for $p\in[1,\frac{N}{N-1})$ if $N>1.$ \item[$(ii)$] Suppose in addition that $|x|f\in L^1(\mathbb{{R}}^N),$ then $$ (nh)^{\frac{\alpha N}{2}(1-\frac{1}{p})}\|u(nh)-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\lesssim (nh)^{-\alpha/2},$$ for $p\in [1,\infty)$ if $N=1,$ and for $p\in[1,\frac{N}{N-1})$ if $N>1.$ \end{itemize} \end{theorem} \begin{proof} First we prove assertion $(ii)$.
Since $f, |x|f\in L^1(\mathbb{R}^N)$, by the decomposition Lemma \ref{decolemma} there exists $\psi\in L^1(\mathbb{R}^N;\mathbb{R}^N)$ such that \begin{align*} u(nh,x)&=(\mathcal{G}^{\alpha}_{n,h}\ast (M\delta_0+\mbox{div}\, \psi(\cdot)))(x)\\ &=M\mathcal{G}^{\alpha}_{n,h}(x)+(\nabla\mathcal{G}^{\alpha}_{n,h}\ast \psi)(x), \end{align*} in the distributional sense, and $$\|\psi\|_{1}\leq C\||x| f(x)\|_1<\infty.$$ Lemma \ref{Nabla-G-Estimates} implies that \begin{equation}\label{Part-2-estimates} \|u(nh)-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\leq C\|\nabla\mathcal{G}^{\alpha}_{n,h}\|_{p}\||x|f(x)\|_{1} \leq C_{p,f}\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1-1/p)+\frac{\alpha}{2}}}. \end{equation} Hence assertion $(ii)$ is proved. To prove $(i)$, we choose a sequence $(\eta_j)\subset C_0^{\infty}(\mathbb{R}^N)$ such that $\int_{\mathbb{R}^N}\eta_j(x)\,dx=M$ for all $j$, and $\eta_j\to f$ in $L^1(\mathbb{R}^N)$. For each $j$, by Lemma \ref{G-dacay} and \eqref{Part-2-estimates}, we get \begin{align*} \|u(nh)-M\mathcal{G}^{\alpha}_{n,h}\|_{p}&\leq \|\mathcal{G}^{\alpha}_{n,h}\ast(f-\eta_j)\|_{p}+\|\mathcal{G}^{\alpha}_{n,h}\ast \eta_j-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\\ &\leq \|\mathcal{G}^{\alpha}_{n,h}\|_{p}\|f-\eta_j\|_{1}+\|\mathcal{G}^{\alpha}_{n,h}\ast \eta_j-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\\ &\leq C_p\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1-1/p)}}\|f-\eta_j\|_{1}+C_{p,\eta_j}\dfrac{1}{(nh)^{\frac{\alpha N}{2}(1-1/p)+\frac{\alpha}{2}}}. \end{align*} Then $$\limsup_{n\to\infty}\,(nh)^{\frac{\alpha N}{2}(1-1/p)} \|u(nh)-M\mathcal{G}^{\alpha}_{n,h}\|_{p}\leq C_p\|f-\eta_j\|_{1},$$ and the assertion follows by letting $j\to\infty$. \end{proof} \section*{References} \end{document}
\begin{document} \title[Algorithms for Game Metrics] {Algorithms for Game Metrics\rsuper*} \author[K.~Chatterjee]{Krishnendu Chatterjee\rsuper a} \address{{\lsuper a}IST Austria (Institute of Science and Technology Austria)} \email{[email protected]} \author[L.~de Alfaro]{Luca de Alfaro\rsuper b} \address{{\lsuper{b,d}}Computer Science Department, University of California, Santa Cruz} \email{\{luca,vishwa\}@soe.ucsc.edu} \author[R.~Majumdar]{Rupak Majumdar\rsuper c} \address{{\lsuper c}Department of Computer Science, University of California, Los Angeles} \email{[email protected]} \author[V.~Raman]{Vishwanath Raman\rsuper d} \keywords{game semantics, minimax theorem, metrics, $\omega$-regular properties, quantitative $\mu$-calculus, probabilistic choice, equivalence of states, refinement of states} \subjclass{F.4.1, F.1.1} \titlecomment{{\lsuper*}A preliminary version of this paper titled ``Algorithms for Game Metrics'' appeared in the IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, December 2008. This is a version with full proofs and extensions.} \begin{abstract} \noindent Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative $\mu$-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in {\em long-run average\/} and {\em discounted average\/} behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states.
The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time $\calo(n^4)$ for both turn-based games and MDPs, improving on the previously best known $\calo(n^9\cdot\log(n))$-time algorithm for MDPs. For a concurrent game $G$, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational $r$ can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding $\calo(|G|^{\calo(|G|^5)})$ time complexity, improving on the previously known reduction, which yielded $\calo(|G|^{\calo(|G|^7)})$ time complexity. These algorithms can be iterated to approximate the metrics using binary search. \end{abstract} \maketitle \setcounter{page}{1} \section{Introduction} System metrics constitute a quantitative generalization of system relations. The bisimulation relation captures state {\em equivalence:\/} two states $s$ and $t$ are bisimilar if and only if they cannot be distinguished by any formula of the $\mu$-calculus \cite{BCG88}. The bisimulation {\em metric\/} captures the {\em degree of difference\/} between two states: the bisimulation distance between $s$ and $t$ is a real number that provides a tight bound for the difference in value of formulas of the {\em quantitative\/} $\mu$-calculus at $s$ and $t$ \cite{DGJP99}. A similar connection holds between the simulation relation and the simulation metric.
The classical system relations are a basic tool in the study of {\em boolean\/} properties of systems, that is, the properties that yield a truth value. As an example, if a state $s$ of a transition system can reach a set of target states $R$, written $s \sat \diam R$ in temporal logic, and $t$ can simulate $s$, then we can conclude $t \sat \diam R$. System metrics play a similarly fundamental role in the study of the quantitative behavior of systems. As an example, if a state $s$ of a Markov chain can reach a set of target states $R$ with probability $0.8$, written $s \sat \P_{\geq 0.8} \diam R$, and if the metric simulation distance from $t$ to $s$ is 0.3, then we can conclude $t \sat \P_{\geq 0.5} \diam R$. The simulation relation is at the basis of the notions of system refinement and implementation, where qualitative properties are concerned. In analogous fashion, simulation metrics provide a notion of approximate refinement and implementation for quantitative properties. We consider three classes of systems: \begin{itemize} \item {\em Markov decision processes.} In these systems there is one player. At each state, the player can choose a move; the current state and the move determine a probability distribution over the successor states. \item {\em Turn-based games.} In these systems there are two players. At each state, only one of the two players can choose a move; the current state and the move determine a probability distribution over the successor states. \item{\em Concurrent games.} In these systems there are two players. At each state, both players choose moves simultaneously and independently; the current state and the chosen moves determine a probability distribution over the successor states. 
\end{itemize} System metrics were first studied for Markov chains and Markov decision processes (MDPs) \cite{DGJP99,vanBreugelCONCUR01,vanBreugel-icalp2001,DGJP02,RadhaLICS02}, and they have recently been extended to two-player turn-based and concurrent games \cite{dAMRS07}. The fundamental property of the metrics is that they provide a tight bound for the difference in value that formulas belonging to quantitative specification languages assume at the states of a system. More precisely, let $\qmu$ indicate the {\em quantitative $\mu$-calculus,} a specification language in which many of the classical specification properties, including reachability and safety properties, can be written \cite{dAM04}. The metric bisimulation distance between two states $s$ and $t$, denoted $[s \priobis_g t]$, has the property that $[s \priobis_g t] = \sup_{\varphi \in \qmu} |\varphi(s) - \varphi(t)|$, where $\varphi(s)$ and $\varphi(t)$ are the values $\varphi$ assumes at $s$ and $t$. To each metric is associated a {\em kernel:\/} the kernel of a metric $d$ is the relation that relates the pairs of states at distance~0. The kernel of the simulation metric is {\em probabilistic simulation;\/} the kernel of the bisimulation metric is {\em probabilistic bisimulation\/} \cite{SL94}. \noindent{\bf Metric as bound for discounted and long-run average payoff.} Our first result is that the metrics developed in \cite{dAMRS07} provide a bound for the difference in {\em long-run average\/} and {\em discounted average\/} properties across states of a system. These average rewards play a central role in the theory of stochastic games, and in its applications to optimal control and economics \cite{Bertsekas95,FilarVrieze97}.
Thus, the metrics of \cite{dAMRS07} are useful both for system verification and for performance evaluation, supporting our belief that they constitute the canonical metrics for the study of the similarity of states in a game. We point out that it is possible to define a discounted version $\metr{\priobis_g}^\dfactor$ of the game bisimulation metric; however, we show that this discounted metric does {\em not\/} provide a bound for the difference in discounted values. \noindent{\bf Algorithmic results.} Next, we investigate algorithms for the computation of the metrics. The metrics can be computed in iterative fashion, following the inductive way in which they are defined. A metric $d$ can be computed as the limit of a monotonically increasing sequence of approximations $d_0$, $d_1$, $d_2$, \ldots, where $d_0(s,t)$ is the difference in value that variables can have at states $s$ and $t$. For $k \geq 0$, $d_{k+1}$ is obtained from $d_k$ via $d_{k+1} = H(d_k)$, where the operator $H$ depends on the metric (bisimulation, or simulation), and on the type of system. Our main results are as follows: \begin{enumerate} \item {\em Metrics for turn-based games and MDPs.} We show that for turn-based games and MDPs, the one-step metric operator $H$ for both bisimulation and simulation can be computed in polynomial time, via a reduction to linear programming (LP). The only previously known algorithm, which can be inferred from \cite{dAMRS07}, had EXPTIME complexity and relied on a reduction to the theory of real closed fields; the algorithm was thus of complexity-theoretic, rather than practical, value. The key step in obtaining our polynomial-time algorithm consists in transforming the original $\sup$-$\inf$ \emph{non-linear} optimization problem (which required the theory of reals) into a quadratic-size $\inf$ \emph{linear} optimization problem that can be solved via LP.
We then present PSPACE algorithms both for the decision problem of the metric distance between two states and for the problem of computing the approximate metric distance between two states for turn-based games and MDPs. Our algorithms match the complexity of the best known algorithms for the sub-class of Markov chains \cite{vBSW08}. \item {\em Metrics for concurrent games.} For concurrent games, our algorithms for the $H$ operator still rely on decision procedures for the theory of real closed fields, leading to an EXPTIME procedure. However, the algorithms that could be inferred from \cite{dAMRS07} had time-complexity $\calo(|G|^{\calo(|G|^7)})$, where $|G|$ is the size of a game; we improve this result by presenting algorithms with $\calo(|G|^{\calo(|G|^5)})$ time-complexity. \item {\em Hardness of metric computation in concurrent games.} We show that computing the exact distance of states of concurrent games is at least as hard as computing the value of concurrent reachability games \cite{EY06,crg-tcs07}, which is known to be at least as hard as solving the square-root-sum problem in computational geometry \cite{GareyGrahamJohnson76}. These two problems are known to lie in PSPACE, and have resisted many attempts to show that they are in NP. \item {\em Kernel of the metrics.} We present polynomial-time algorithms to compute the simulation and bisimulation kernel of the metrics for turn-based games and MDPs. Our algorithm for the bisimulation kernel of the metric runs in time $\calo(n^4)$ (assuming a constant number of moves) as compared to the previously known $\calo(n^9\cdot\log(n))$ algorithm of \cite{ZhangH07} for MDPs, where $n$ is the size of the state space. For concurrent games, the simulation and the bisimulation kernel can be computed in time $\calo(|G|^{\calo(|G|^3)})$, where $|G|$ is the size of a game.
\end{enumerate} Our formulation of probabilistic simulation and bisimulation differs from the one previously considered for MDPs in \cite{Baier96}: there, the names of moves (called ``labels'') must be preserved by simulation and bisimulation, so that a move from a state has at most one candidate simulator move at another state. Our problem for MDPs is closer to the one considered in \cite{ZhangH07}, where labels must be preserved, but where a label can be associated with multiple probability distributions (moves). For turn-based games and MDPs, the algorithms for probabilistic simulation and bisimulation can be obtained from the LP algorithms that yield the metrics. For probabilistic simulation, the algorithm we obtain coincides with the algorithm previously published in \cite{ZhangH07}. The algorithm requires the solution of feasibility-LP problems with a number of variables and inequalities that is quadratic in the size of the system. For probabilistic bisimulation, we are able to improve on this result by providing an algorithm that requires the solution of feasibility-LP problems that have linearly many variables and constraints. More precisely, as for ordinary bisimulation, the kernel is computed via iterative refinement of a partition of the state space \cite{Milner90}. Given two states that belong to the same block of the partition, to decide whether the states need to be split in the next partition-refinement step, we present an algorithm that requires the solution of a feasibility-LP problem with a number of variables equal to the number of moves available at the states, and a number of constraints linear in the number of equivalence classes. Overall, our algorithm for bisimulation runs in time $\calo(n^4)$ (assuming a constant number of moves), considerably improving on the $\calo(n^9\cdot\log(n))$ algorithm of \cite{ZhangH07} for MDPs, and providing for the first time a polynomial algorithm for turn-based games.
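As an illustration of the partition-refinement scheme just described, the sketch below computes the probabilistic bisimulation kernel in the degenerate case of a labelled Markov chain (one move per state), where the per-pair feasibility-LP collapses to comparing, block by block, the probability mass that two states send into each block of the current partition; the four-state chain and its labels are hypothetical.

```python
def prob_bisimulation(states, label, P):
    """Partition refinement: split blocks until any two states in the same
    block agree on their label and on the probability of moving into every
    block of the current partition."""
    # Initial partition: group states by label (kernel of the propositional distance).
    by_label = {}
    for s in states:
        by_label.setdefault(label[s], []).append(s)
    blocks = list(by_label.values())
    while True:
        def signature(s):
            # Probability mass sent into each current block (rounded to absorb float noise).
            return tuple(round(sum(P[s][t] for t in B), 12) for B in blocks)
        new_blocks = []
        for B in blocks:
            groups = {}
            for s in B:
                groups.setdefault(signature(s), []).append(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) == len(blocks):  # no block was split: partition is stable
            return blocks
        blocks = new_blocks

# Hypothetical 4-state chain; states 2 and 3 are bisimilar, state 1 is not.
states = [0, 1, 2, 3]
label = {0: 'a', 1: 'b', 2: 'b', 3: 'b'}
P = {
    0: {0: 0.0, 1: 0.5, 2: 0.25, 3: 0.25},
    1: {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0},
    2: {0: 0.5, 1: 0.5, 2: 0.0, 3: 0.0},
    3: {0: 0.5, 1: 0.5, 2: 0.0, 3: 0.0},
}
blocks = prob_bisimulation(states, label, P)  # partition {0}, {1}, {2, 3}
```

With nondeterministic moves present, the signature comparison is replaced by the feasibility-LP over mixed moves described above; the refinement loop itself is unchanged.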
\section{Definitions} \noindent{\bf Valuations.} Let $[\lb, \rb]\subs \reals$ be a fixed, non-singleton real interval. Given a set of states $S$, a {\em valuation over $S$\/} is a function $\valu: S \mapsto [\lb, \rb]$ associating with every state $s \in S$ a value $\lb \leq \valu(s) \leq \rb$; we let $\valus$ be the set of all valuations. For $c \in [\lb,\rb]$, we denote by $\imeanbb{c}$ the constant valuation such that $\imeanbb{c}(s) = c$ at all $s \in S$. We order valuations pointwise: for $\valu, \valub \in \valus$, we write $\valu \leq \valub$ iff $\valu(s) \leq \valub(s)$ at all $s \in S$; we remark that $\valus$, under $\leq$, forms a lattice. Given $a,b \in \reals$, we write $a \imax b = \max\set{a,b}$, and $a \imin b = \min\set{a,b}$; we also let $a \oplus b = \min \set{1, \max \set{0, a + b}}$ and $a \ominus b = \max \set{0, \min \set{1, a - b}}$. We extend $\imin, \imax, +, -, \oplus, \ominus$ to valuations by interpreting them in pointwise fashion. \noindent{\bf Game structures.} For a finite set $A$, let $\distr(A)$ denote the set of probability distributions over $A$. We say that $p \in \distr(A)$ is {\em deterministic\/} if there is $a \in A$ such that $p(a) = 1$. We assume a fixed finite set $\vars$ of {\em observation variables\/}. A (two-player, concurrent) {\em game structure\/} $\game=\tuple{S,\int{\cdot},\moves,\mov_1,\mov_2,\trans}$ consists of the following components \cite{ATL02,crg-tcs07}: \begin{itemize} \item A finite set $S$ of states. \item A variable interpretation $\int{\cdot}: \vars \mapsto [\lb,\rb]^S$, which associates with each variable $\varx \in \vars$ a valuation $\int{\varx}$. \item A finite set $\moves$ of moves. \item Two move assignments $\mov_1,\mov_2$: $S\mapsto 2^\moves \setm \set{\emptyset}$. For $i\in\{1,2\}$, the assignment $\mov_i$ associates with each state $s \in S$ the nonempty set $\mov_i(s)\subseteq\moves$ of moves available to player $i$ at state~$s$.
\item A probabilistic transition function $\trans$: $S\times\moves\times\moves\mapsto\distr(S)$, which gives the probability $\trans(s,a_1,a_2)(t)$ of a transition from $s$ to $t$ when player 1 plays move $a_1$ and player 2 plays move~$a_2$. \end{itemize} At every state $s\in S$, player~1 chooses a move $a_1\in\mov_1(s)$, and simultaneously and independently player~2 chooses a move $a_2\in\mov_2(s)$. The game then proceeds to the successor state $t\in S$ with probability $\trans(s,a_1,a_2)(t)$. We let $\dest(s,a_1,a_2) = \set{t \in S \mid \trans(s,a_1,a_2)(t) > 0}$. The {\em propositional distance} $\propdist(s,t)$ between two states $s,t\in S$ is the maximum difference in the valuation of any variable: \[ \propdist(s,t) = \max_{\varx\in\vars} \vert\int{\varx}(s) - \int{\varx}(t)\vert \eqpun . \] The kernel of the propositional distance induces an equivalence on states: for states $s,t$, we let $s\loceq t$ if $\propdist(s,t) = 0$. In the following, unless otherwise noted, the definitions refer to a game structure with components $\game=\tuple{S,\int{\cdot},\moves,\mov_1,\mov_2,\trans}$. We indicate the opponent of a player $\ii\in\set{1,2}$ by $\jj = 3 - \ii$. We consider the following subclasses of game structures. \noindent{\bf Turn-based game structures.} A game structure $\game$ is {\em turn-based\/} if we can write $S = S_1 \union S_2$ with $S_1 \inters S_2 = \emptyset$, where $s \in S_1$ implies $|\mov_2(s)| = 1$, and $s \in S_2$ implies $|\mov_1(s)| = 1$, and further, there exists a special variable $\turn \in \vars$, such that $\int{\turn}(s) = \theta_1$ iff $s \in S_1$, and $\int{\turn}(s) = \theta_2$ iff $s \in S_2$. \noindent{\bf Markov decision processes.} For $i \in \set{1,2}$, we say that a structure is an $i$-MDP if $\vert \mov_{\jj}(s) \vert = 1$ for all $s \in S$. For MDPs, we omit the (single) move of the player without a choice of moves, and write $\delta(s,a)$ for the transition function.
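For MDPs, the inner step of the metric computation sketched in the introduction compares two successor distributions $\delta(s,a)$ and $\delta(t,b)$ with respect to a current metric $d$; by LP duality, the supremum over valuations bounded by $d$ equals the value of a minimum-cost transportation (Kantorovich) problem. The following sketch solves that transportation LP directly, assuming SciPy is available; the three-state example and its ``line'' ground metric are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

def kantorovich(p, q, d):
    """Minimum cost of transporting distribution p onto q when moving one
    unit of mass from u to v costs d[u, v]: minimise sum_{u,v} mu(u,v)*d(u,v)
    subject to the marginals of mu being p and q."""
    n = len(p)
    c = d.reshape(-1)                  # cost vector over the n*n variables mu(u, v)
    A_eq, b_eq = [], []
    for u in range(n):                 # row marginals: sum_v mu(u, v) = p(u)
        row = np.zeros((n, n)); row[u, :] = 1.0
        A_eq.append(row.reshape(-1)); b_eq.append(p[u])
    for v in range(n):                 # column marginals: sum_u mu(u, v) = q(v)
        col = np.zeros((n, n)); col[:, v] = 1.0
        A_eq.append(col.reshape(-1)); b_eq.append(q[v])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), method="highs")
    return res.fun

# Hypothetical 3-state example with ground metric d(u, v) = |u - v|.
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
dist = kantorovich(p, q, d)  # each half-unit of mass moves one step: cost 1.0
```

Iterating a one-step operator built from this distance (Picard iteration from the zero metric) then approximates the simulation and bisimulation metrics.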
\noindent{\bf Moves and strategies.} A {\em mixed move\/} is a probability distribution over the moves available to a player at a state. We denote by $\dis_i(s) \subs \distr(\moves)$ the set of mixed moves available to player~$i \in \set{1,2}$ at $s \in S$, where: \[ \dis_i(s) = \set{\dis \in \distr(\moves) \mid \dis(a) > 0 \text{\textit{ implies }} a \in \mov_i(s)} \eqpun . \] The moves in $\moves$ are called {\em pure moves.\/} We extend the transition function to mixed moves by defining, for $s \in S$ and $x_1 \in \dis_1(s)$, $x_2 \in \dis_2(s)$, \[ \trans(s,x_1,x_2)(t) = \sum_{a_1 \in \mov_1(s)} \: \sum_{a_2 \in \mov_2(s)} \: \trans(s,a_1,a_2)(t) \cdot x_1(a_1) \cdot x_2(a_2) \eqpun . \] A {\em path\/} $\path$ of $\game$ is an infinite sequence $s_0, s_1, s_2, \ldots$ of states in $S$, such that for all $k \ge 0$, there exist moves $a_1^k \in \mov_1(s_k)$ and $a_2^k \in \mov_2(s_k)$ with $\trans(s_k, a_1^k, a_2^k)(s_{k+1}) > 0$. We write $\Paths$ for the set of all paths, and $\Paths_s$ for the set of all paths starting from state $s$. A \emph{strategy} for player $i \in \{1, 2\}$ is a function $\stra_i: S^+ \mapsto \distr(\moves)$ that associates with every non-empty finite sequence $\path \in S^+$ of states, representing the history of the game, a probability distribution $\stra_i(\path)$, which is used to select the next move of player~$i$; we require that for all $\path \in S^*$ and states $s \in S$, if $\stra_i(\path s)(a) > 0$, then $a \in \mov_i(s)$. We write $\Stra_i$ for the set of strategies for player~$i$. Once the starting state $s$ and the strategies $\straa$ and $\strab$ for the two players have been chosen, the game is reduced to an ordinary stochastic process, denoted $\game_s^{\straa, \strab}$, which defines a probability distribution on the set $\Paths$ of paths.
We denote by $\Pr_s^{\straa,\strab}(\cdot)$ the probability of a measurable event (a set of paths) with respect to this process, and denote by $\E_s^{\straa,\strab}(\cdot)$ the associated expectation operator. For $k \geq 0$, we let $X_k: \Paths \to S$ be the random variable denoting the $k$-th state along a path. \noindent{\bf One-step expectations and predecessor operators.} Given a valuation $\valu \in \valus$, a state $s \in S$, and two mixed moves $x_1 \in \dis_1(s)$ and $x_2 \in \dis_2(s)$, we define the expectation of $\valu$ from $s$ under $x_1, x_2$ by \[ \E^{x_1,x_2}_s (\valu) = \sum_{t \in S} \, \trans(s,x_1,x_2)(t) \, \valu(t) \eqpun . \] For a game structure $\game$ and $i \in \set{1,2}$, we define the {\em valuation transformer\/} $\pre_i: \valus \mapsto \valus$ by, for all $\valu \in \valus$ and $s \in S$, \[ \pre_i(\valu)(s) = \Sup_{x_\ii \in \dis_\ii(s)} \; \Inf_{x_\jj \in \dis_\jj(s)} \E^{x_\ii,x_\jj}_s (\valu) \eqpun . \] Intuitively, $\pre_i(\valu)(s)$ is the maximal one-step expectation of $\valu$ that player~$i$ can achieve from $s$: this is the standard ``one-day'' or ``next-stage'' operator of the theory of repeated games \cite{FilarVrieze97}. \subsection{Quantitative $\mu$-calculus} We consider the set of properties expressed by the {\em quantitative $\mu$-calculus\/} ($\qmu$). As discussed in \cite{Kozen83,dAM04,IverMorgan}, a large set of properties can be encoded in $\qmu$, spanning from basic properties such as maximal reachability and safety probability, to the maximal probability of satisfying a general $\omega$-regular specification. \noindent{\bf Syntax.} The syntax of quantitative $\mu$-calculus is defined with respect to the set of observation variables $\vars$ as well as a set $\mvars$ of {\em calculus variables,} which are distinct from the observation variables in $\vars$.
The syntax is given as follows: \begin{align*} \varphi\ ::=\ & c \mid \varx \mid \mvar \mid \neg \varphi \mid \varphi\vee\varphi \mid \varphi \wedge\varphi \mid \varphi \oplus c \mid \varphi \ominus c \mid \qpre_1(\varphi) \mid \qpre_2(\varphi) \mid \mu \mvar.\, \varphi \mid \nu \mvar.\, \varphi \end{align*} for constants $c \in [\lb,\rb]$, observation variables $\varx \in \vars$, and calculus variables $\mvar \in \mvars$. In the formulas $\mu \mvar.\, \varphi$ and $\nu \mvar.\, \varphi$, we furthermore require that all occurrences of the bound variable $\mvar$ in $\varphi$ occur in the scope of an even number of occurrences of the complement operator $\neg$. A formula $\varphi$ is {\em closed\/} if every calculus variable $\mvar$ in $\varphi$ occurs in the scope of a quantifier $\mu \mvar$ or $\nu \mvar$. From now on, by abuse of notation, we denote by $\qmu$ the set of closed formulas of $\qmu$. A formula $\varphi$ is a {\em player~$i$ formula,} for $i \in \set{1,2}$, if it does not contain the $\qpre_{\jj}$ operator; we denote with $\qmu_i$ the syntactic subset of $\qmu$ consisting only of closed player~$i$ formulas. A formula is in {\em positive form\/} if negation appears only in front of constants and observation variables, i.e., in the contexts $\neg c$ and $\neg \varx$; we denote with $\qmu^{+}$ and $\qmu_i^{+}$ the subsets of $\qmu$ and $\qmu_i$ consisting only of positive formulas. We remark that the fixpoint operators $\mu$ and $\nu$ will not be needed to achieve our results on the logical characterization of game relations. They have been included in the calculus because they allow the expression of many interesting properties, such as safety, reachability, and, in general, $\omega$-regular properties. The operators $\oplus$ and $\ominus$, on the other hand, are necessary for our results. \noindent {\bf Semantics.} A variable valuation $\xenv$: $\mvars\mapsto\valus$ is a function that maps every variable $\mvar\in\mvars$ to a valuation in~$\valus$.
We write $\xenv[\mvar\mapsto\valu]$ for the valuation that agrees with $\xenv$ on all variables, except that $\mvar$ is mapped to~$\valu$. Given a game structure $\game$ and a variable valuation $\xenv$, every formula $\varphi$ of the quantitative $\mu$-calculus defines a valuation $\sem{\varphi}^\game_{\xenv}\in\valus$ (the superscript $\game$ is omitted if the game structure is clear from the context): \begin{align*} & \sem{c}_{\xenv} = \imeanbb{c} & & \sem{\varx}_{\xenv} = \int{\varx} \\ & \sem{\mvar}_{\xenv} = \xenv(\mvar) & & \sem{\neg \varphi}_\xenv = \imeanbb{1} - \sem{\varphi}_\xenv \\ & \textstyle \sem{\varphi {\oplus \brace \ominus} c}_\xenv = \textstyle \sem{\varphi}_\xenv {\oplus \brace \ominus} \imeanbb{c} & & \textstyle \sem{\varphi_1\,{\vee\brace\wedge}\,\varphi_2}_{\xenv} = \textstyle \sem{\varphi_1}_{\xenv} \,{\imax\brace\imin}\, \sem{\varphi_2}_{\xenv} \\ & \sem{\qpre_i(\varphi)}_\xenv = \pre_i(\sem{\varphi}_\xenv) & & \textstyle \sem{{\mu\brace\nu}\mvar.\, \varphi}_{\xenv} = \textstyle {\inf\brace\sup}\set{\valu \in \valus \mid \valu=\sem{\varphi}_{\xenv[\mvar\mapsto \valu]}} \end{align*} where $i \in \set{1,2}$. The existence of the fixpoints is guaranteed by the monotonicity and continuity of all operators, and the fixpoints can be computed by Picard iteration \cite{dAM04}. If $\varphi$ is closed, $\sem{\varphi}_\xenv$ is independent of $\xenv$, and we write simply $\sem{\varphi}$. \noindent{\bf Discounted quantitative $\mu$-calculus.} A {\em discounted\/} version of the $\mu$-calculus was introduced in \cite{luca-icalp-disc-03}; we call this calculus $\dmu$. Let $\Lambda$ be a finite set of discount parameters taking values in the interval $[0, 1)$. The discounted $\mu$-calculus extends $\qmu$ by introducing discounted versions of the player $\qpre$ modalities.
The syntax replaces $\qpre_i(\varphi)$ for player $i \in \set{1, 2}$ with its discounted variant, $\lambda \cdot \qpre_i(\varphi)$, where $\lambda \in \Lambda$ is a discount factor that discounts one-step valuations. Negation in the calculus is defined as $\neg (\lambda \cdot \qpre_1(\varphi)) = (1 - \lambda) + \lambda \cdot \qpre_2(\neg \varphi)$. This leads to two additional pre-modalities for the players, $(1 - \lambda) + \lambda \cdot \qpre_i(\varphi)$. \paragraph{Game bisimulation and simulation metrics.} A {\em directed metric\/} is a function $d: S^2 \mapsto \reals_{\geq 0}$ which satisfies $d(s,s) = 0$ and the \emph{triangle inequality} $d(s,t) \leq d(s,u) + d(u,t)$ for all $s,t,u \in S$. We denote by $\metrsp \subseteq S^2 \mapsto \reals$ the space of all directed metrics; this space, ordered pointwise, forms a lattice which we indicate with $(\metrsp, \leq)$. Since $d(s, t)$ may be zero for $s \ne t$, these functions are {\em pseudo-metrics\/} as per prevailing terminology~\cite{vanBreugelCONCUR01}. In the following, we omit ``directed'' and simply say metric when the context is clear. For a metric $d$, we indicate with $C(d)$ the set of valuations $k\in\valus$ such that $k(s) - k(t) \leq d(s,t)$ for every $s,t\in S$. A metric transformer $H_{\priosim_1}: \metrsp \mapsto \metrsp$ is defined as follows, for all $d \in \metrsp$ and $s, t \in S$: \begin{equation} \label{eq-game-met} H_{\priosim_1}(d)(s,t) = \propdist(s, t) \imax \Sup_{k \in C(d)} \bigl( \pre_1(k)(s) - \pre_1(k)(t) \bigr) \eqpun . \end{equation} The {\em player~1 game simulation metric\/} $[\priosim_1]$ is the least fixpoint of $H_{\priosim_1}$; the {\em game bisimulation metric\/} $[\priobis_1]$ is the least symmetrical fixpoint of $H_{\priosim_1}$, i.e., the least fixpoint of the operator $H_{\priobis_1}$ defined, for all $d \in \metrsp$ and $s, t \in S$, by: \begin{equation} \label{eq-game-bis-met} H_{\priobis_1}(d)(s,t) = H_{\priosim_1}(d)(s,t) \imax H_{\priosim_1}(d)(t,s) \eqpun .
\end{equation} The operator $H_{\priosim_1}$ is monotonic and continuous in the lattice $(\metrsp, \leq)$. Its least fixpoint can therefore be computed by Picard iteration; we denote by $\metr{\priosim_1^n} = H_{\priosim_1}^n (\imeanbb{0})$ the $n$-th iterate. From the determinacy of concurrent games with respect to $\omega$-regular goals~\cite{Martin98}, we have that the game bisimulation metric is {\em reciprocal\/}, in that $[\priobis_1] = [\priobis_2]$; we will thus simply write $[\priobis_g]$. Similarly, for all $s, t \in S$ we have $[s \priosim_1 t] = [t \priosim_2 s]$. The main result in \cite{dAMRS07} about these metrics is that they are logically characterized by the quantitative $\mu$-calculus defined above: for all states $s, t \in S$, \begin{align} \label{logical-charact-metrics} \metr{s \priosim_{1} t} & = \sup_{\varphi \in \qmu_1^+} (\sem{\varphi}(s) - \sem{\varphi}(t)) & \metr{s \priobis_{g} t} & = \sup_{\varphi \in \qmu} |\sem{\varphi}(s) - \sem{\varphi}(t)| \eqpun . \end{align} \noindent{\bf Metrics for the discounted quantitative $\mu$-calculus.} We call $\dmu^\dfactor$ the discounted $\mu$-calculus with all discount parameters $\le \dfactor$.
We define the discounted metrics via an $\dfactor$-discounted metric transformer $H_{\priosim_1}^\dfactor: \metrsp \mapsto \metrsp$, defined for all $d \in \metrsp$ and all $s, t \in S$ by: \begin{align} H_{\priosim_1}^\dfactor(d)(s,t) \; = \; \propdist(s, t) \imax \dfactor \cdot \Sup_{k \in C(d)} \bigl( \pre_1(k)(s) - \pre_1(k)(t) \bigr) \eqpun . \label{eq-disc-priosim} \end{align} Again, $H_{\priosim_1}^\dfactor$ is continuous and monotonic in the lattice $(\metrsp, \leq)$. The {\em $\dfactor$-discounted simulation metric} $\metr{\priosim_1}^\dfactor$ is the least fixpoint of $H_{\priosim_1}^\dfactor$, and the {\em $\dfactor$-discounted bisimulation metric\/} $\metr{\priobis_1}^\dfactor$ is the least symmetrical fixpoint of $H_{\priosim_1}^\dfactor$. The following result follows easily by induction on the Picard iterates used to compute the distances \cite{luca-icalp-disc-03}: for all states $s, t \in S$ and every discount factor $\dfactor \in [0, 1)$, \begin{align} & \metr{s \priosim_1 t}^\dfactor \le \metr{s \priosim_1 t} & & \metr{s \priobis_1 t}^\dfactor \le \metr{s \priobis_1 t} \eqpun . \label{theo-undisc-bound} \end{align} Using techniques similar to the undiscounted case, we can prove that for every game structure $\game$ and discount factor $\dfactor\in[0,1)$, the fixpoint $\metr{\priosim_i}^\dfactor$ is a directed metric and $\metr{\priobis_i}^\dfactor$ is a metric, and that they are {\em reciprocal}, i.e., $\metr{s \priosim_1 t}^\dfactor = \metr{t \priosim_2 s}^\dfactor$ and $\metr{s \priobis_1 t}^\dfactor = \metr{s \priobis_2 t}^\dfactor$ for all $s, t \in S$. Since the discounted bisimulation metric coincides for the two players, we write $\metr{\priobis_g}^\dfactor$ instead of $\metr{\priobis_1}^\dfactor$ and $\metr{\priobis_2}^\dfactor$. We now state without proof that the discounted $\mu$-calculus provides a logical characterization of the discounted metric.
The proof is based on induction on the structure of formulas, and closely follows the result for the undiscounted case \cite{dAMRS07}. Let $\dmu^\dfactor$ (respectively, $\dmu_1^{\dfactor,+}$) consist of all discounted $\mu$-calculus formulas (respectively, all discounted $\mu$-calculus formulas with only the $\qpre_1$ operator and all negations before atomic propositions). It follows that for all game structures $\game$ and states $s, t \in S$, \begin{align} \metr{s \priosim_1 t}^\dfactor & = \sup_{\varphi \in \dmu_1^{\dfactor,+}} (\sem{\varphi}(s) - \sem{\varphi}(t)) & \metr{s \priobis_g t}^\dfactor & = \sup_{\varphi \in \dmu^\dfactor} |\sem{\varphi}(s) - \sem{\varphi}(t)| \label{theo-logical-charact-metrics} \eqpun . \end{align} \noindent{\bf Metric kernels.} The kernel of the metric $\metr{\priobis_g}$ (respectively, $\metr{\priobis_g}^\dfactor$) defines an equivalence relation $\priobis_g$ (respectively, $\priobis_g^\dfactor$) on the states of a game structure: $s \priobis_g t$ (respectively, $s \priobis_g^\dfactor t$) iff $\metr{s \priobis_g t} = 0$ (respectively, $\metr{s \priobis_g t}^\dfactor = 0$); the relation $\priobis_g$ is called the {\em game bisimulation\/} relation \cite{dAMRS07} and the relation $\priobis_g^\dfactor$ is called the {\em discounted game bisimulation\/} relation. Similarly, we define the {\em game simulation\/} preorder $s \priosim_1 t$ as the kernel of the directed metric $\metr{\priosim_1}$, that is, $s \priosim_1 t$ iff $\metr{s \priosim_1 t} = 0$. The {\em discounted game simulation\/} preorder is defined analogously. \section{Bounds for Average and Discounted Payoff Games} \label{sec-average} From (\ref{logical-charact-metrics}) it follows that the game bisimulation metric provides a tight bound for the difference in valuations of quantitative $\mu$-calculus formulas. In this section, we show that the game bisimulation metric also provides a bound for the difference in average and discounted value of games.
This lends further support for the game bisimulation metric, and its kernel, the game bisimulation relation, being the canonical game metrics and relations. \paragraph{Discounted payoff games.} Let $\straa$ and $\strab$ be strategies of player~1 and player~2 respectively. Let $\alpha \in [0,1)$ be a discount factor. The {\em $\alpha$-discounted} payoff $v^\alpha_1(s,\straa,\strab)$ for player~1 at a state $s$ for a variable $r\in\vars$ and the strategies $\straa$ and $\strab$ is defined as: \begin{align} v^\alpha_1(s,\straa,\strab) = (1-\alpha) \cdot \sum_{n=0}^\infty \alpha^n \cdot \E_s^{\straa,\strab} \bigl( \int{r}(X_n) \bigr), \label{player-1-disc-reward} \end{align} where $X_n$ is a random variable representing the state of the game in step $n$. The discounted payoff for player~2 is defined as $v^\alpha_2(s,\straa,\strab) = -v^\alpha_1(s,\straa,\strab)$. Thus, player~1 wins (and player~2 loses) the ``discounted sum'' of the valuations of $r$ along the path, where the reward incurred $n$ steps in the future is weighed by the discount $\alpha^n$. Given a state $s\in S$, we are interested in the maximal payoff $w_i^\alpha(s)$ that player~$i$ can ensure against all opponent strategies, when the game starts from state $s \in S$. This maximal payoff is given by: \[ w^\alpha_\ii(s) = \sup_{\pi_\ii \in \Pi_\ii} \; \inf_{\pi_{\jj} \in \Pi_{\jj}} v^\alpha_\ii(s,\pi_\ii,\pi_\jj) \eqpun . \] These values can be computed as the limit of the sequence of $\alpha$-discounted, $n$-step rewards, for $n \to \infty$. For $i \in \set{1,2}$, we define a sequence of valuations $w^\alpha_i(0)$, $w^\alpha_i(1)$, $w^\alpha_i(2)$, \ldots as follows: for all $s \in S$ and $n \geq 0$, \begin{align} \label{recur-player-1-disc-reward} w^\alpha_\ii(n+1)(s) = (1 - \alpha) \cdot \int{r}(s) + \alpha \cdot \pre_\ii(w^\alpha_\ii(n))(s) \eqpun , \end{align} where the initial valuation $w^\alpha_\ii(0)$ is arbitrary. Shapley proved that $w^\alpha_\ii = \lim_{n\to \infty} w^\alpha_\ii(n)$ \cite{Shapley53}.
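Shapley's recurrence~(\ref{recur-player-1-disc-reward}) is straightforward to iterate numerically. The following Python sketch treats the degenerate case in which each player has a single move at every state, so that $\pre_\ii$ reduces to multiplication by a transition matrix; all names (\texttt{P}, \texttt{r}, \texttt{alpha}) are illustrative, and the data encodes the deterministic chain of Figure~\ref{fig:disc-bound}.

```python
import numpy as np

# Value iteration for the alpha-discounted payoff, following Shapley's
# recurrence w(n+1) = (1 - alpha) * r + alpha * Pre(w(n)).  Here both
# players have a single move at every state, so Pre reduces to the
# transition matrix P; all names are illustrative.
def discounted_value(P, r, alpha, tol=1e-10):
    w = np.array(r, dtype=float)          # w(0) is arbitrary; start from r
    while True:
        w_next = (1 - alpha) * r + alpha * (P @ w)
        if np.max(np.abs(w_next - w)) < tol:
            return w_next
        w = w_next

# The deterministic chain of Figure fig:disc-bound:
# s -> t (absorbing), s' -> t' (absorbing), with r = 2, 5, 2.1, 8.
P = np.array([[0.0, 1.0, 0.0, 0.0],   # s  -> t
              [0.0, 1.0, 0.0, 0.0],   # t  -> t
              [0.0, 0.0, 0.0, 1.0],   # s' -> t'
              [0.0, 0.0, 0.0, 1.0]])  # t' -> t'
r = np.array([2.0, 5.0, 2.1, 8.0])
w = discounted_value(P, r, 0.9)
print(round(w[2] - w[0], 6))           # odrew(s') - odrew(s) = 2.71
```

The computed fixpoints $w(t) = 5$, $w(t') = 8$, $w(s) = 4.7$, and $w(s') = 7.41$ agree with the difference $\odrew(s') - \odrew(s) = 2.71$ derived in the example below.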
\paragraph{Average payoff games.} Let $\straa$ and $\strab$ be strategies of player~1 and player~2 respectively. The {\em average} payoff $\rew_1(s,\straa,\strab)$ for player~1 at a state $s$ for a variable $r\in\vars$ and the strategies $\straa$ and $\strab$ is defined as \begin{equation} \label{eq-average-stra} \rew_1(s,\straa,\strab) = \liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \E_s^{\straa,\strab} \bigl( \int{r}(X_k) \bigr), \end{equation} where $X_k$ is a random variable representing the $k$-th state of the game. The payoff for player~2 is $\rew_2(s,\straa,\strab) = -\rew_1(s,\straa,\strab)$. A game structure $\game$ with average payoff is called an average reward game. The {\em average value} of the game $\game$ at $s$ for player $i\in\set{1,2}$ is defined as \begin{align*} \orew_\ii(s) = \sup_{\pi_\ii \in \Pi_\ii} \; \inf_{\pi_\jj \in \Pi_\jj} \rew_i(s,\pi_\ii,\pi_\jj) \eqpun . \end{align*} \noindent Mertens and Neyman established the determinacy of average reward games, and showed that the discounted value of a game tends to its average value as the discount factor tends to~$1$: for all $s \in S$ and $i \in \set{1,2}$, we have $\lim_{\alpha \to 1} w_\ii^\alpha(s) = \orew_\ii(s)$ \cite{MN81}. It is easy to show that the average value of a game is a valuation. \paragraph{Metrics for discounted and average payoffs.} We show that the game simulation metric $\metr{\priosim_1}$ provides a bound for discounted and long-run rewards. The discounted metric $\metr{\priosim_1}^\dfactor$ on the other hand does not provide such a bound, as the following example shows.
\psfrag{s}{\textbf{\textit{$s$}}} \psfrag{t}{\textbf{\textit{$t$}}} \psfrag{s'}{\textbf{\textit{$s'$}}} \psfrag{t'}{\textbf{\textit{$t'$}}} \psfrag{2}{\textbf{\textit{$2$}}} \psfrag{5}{\textbf{\textit{$5$}}} \psfrag{2.1}{\textbf{\textit{$2.1$}}} \psfrag{8}{\textbf{\textit{$8$}}} \psfrag{1}{\textbf{\textit{$1$}}} \psfrag{0}{\textbf{\textit{$0$}}} \begin{figure} \caption{Example that shows that the discounted metric may not be an upper bound for the difference in the discounted value across states.} \label{fig:disc-bound} \end{figure} \begin{examp}{} Consider a game consisting of four states $s, t, s', t'$, and a variable $r$, with $[r](s) = 2$, $[r](s') = 2.1$, $[r](t) = 5$, and $[r](t') = 8$ as shown in Figure~\ref{fig:disc-bound}. Both players have only one move at each state, and the transition relation is deterministic. Consider the discount factor $\alpha = 0.9$. The $0.9$-discounted metric distance between states $s'$ and $s$ is $\metr{s' \priobis_g s}^{0.9} = 0.9 \cdot (8 - 5) = 2.7$, the propositional distance $0.1$ being smaller. For the difference in discounted values between the states we proceed as follows. Using (\ref{recur-player-1-disc-reward}) and taking $\odrew(0)(t) = 5$, since state $t$ is absorbing, we get $\odrew(1)(t) = (1 - 0.9) \cdot 5 + 0.9 \cdot 5 = 5$, which leads to $\odrew(n)(t) = 5$ for all $n \ge 0$. Similarly $\odrew(n)(t') = 8$ for all $n \ge 0$. Therefore, the difference in discounted values between $s$ and $s'$, again using (\ref{recur-player-1-disc-reward}), is given by: $\odrew(s') - \odrew(s) = (1 - 0.9) \cdot (2.1 - 2) + 0.9 \cdot (8 - 5) = 2.71$. \qed \end{examp} In the following we consider player~1 rewards (the case for player~2 is identical). \begin{theo}{} \label{theo-disc-reward-bound} The following assertions hold. \begin{enumerate} \item For all game structures $\game$, all $\dfactor$-discounted rewards $\odrew_1$, and all states $s, t \in S$, we have (a) $\odrew_1(s) - \odrew_1(t) \le [s \priosim_1 t]$ and (b) $|\odrew_1(s) - \odrew_1(t)| \le [s \priobis_g t]$.
\item There exists a game structure $\game$, states $s, t \in S$, a discount factor $\dfactor$, and an $\dfactor$-discounted reward $\odrew_1$ such that $\odrew_1(t) - \odrew_1(s) > \metr{t\priobis_g s}^\dfactor$. \end{enumerate} \end{theo} \begin{proof} We first prove assertion (1)(a). As the metric can be computed via Picard iteration, we have for all $n \geq 1$: \begin{equation} \label{eq-balls} [s \priosim_1^n t] = \propdist(s, t) \imax \sup_{k \in C([\priosim_1^{n-1}])} (\pre_1(k)(s) - \pre_1(k)(t)) \eqpun . \end{equation} We prove by induction on $n \geq 0$ that $\odrew_1(n)(s) - \odrew_1(n)(t) \leq [s \priosim_1^n t]$. For all $s \in S$, taking $\odrew_1(0)(s) = \int{r}(s)$, the base case follows. Assume the result holds for $n-1 \geq 0$. We have: \begin{align*} \odrew_1(n)(s) - \odrew_1(n)(t) & = (1 - \alpha) \cdot \int{r}(s) + \alpha \cdot \pre_1(\odrew_1(n-1))(s) - \\ & \quad \ (1 - \alpha) \cdot \int{r}(t) \ - \alpha \cdot \pre_1(\odrew_1(n-1))(t) \\ & = (1 - \alpha) \cdot \bigl(\int{r}(s) - \int{r}(t)\bigr) \ + \\ & \quad \ \ \alpha \cdot \bigl(\pre_1(\odrew_1(n-1))(s) - \pre_1(\odrew_1(n-1))(t)\bigr) \\ & \le (1 - \alpha) \cdot \propdist(s, t) + \alpha \cdot [s \priosim_1^n t] \;\; \le \;\; [s \priosim_1^n t] , \end{align*} where the first inequality holds since $\int{r}(s) - \int{r}(t) \le \propdist(s, t)$ and since, by the induction hypothesis, $\odrew_1(n-1) \in C([\priosim_1^{n-1}])$, so that (\ref{eq-balls}) bounds the second term; the last inequality holds since $\propdist(s, t) \le [s \priosim_1^n t]$. This proves assertion (1)(a). Given (1)(a), assertion (1)(b) follows from the definition $\metr{s \priobis_g t} = \metr{s \priosim_1 t} \imax \metr{t \priosim_1 s}$. The example shown in Figure~\ref{fig:disc-bound} proves the second assertion. \end{proof} Using the fact that the limit of the discounted reward, for a discount factor that approaches~1, is equal to the average reward, we obtain that the metrics provide a bound for the difference in average values as well.
\begin{cor}{} \label{cor-avg-reward-bound} For all game structures $\game$ and states $s$ and $t$, we have (a) $\orew(s) - \orew(t) \leq [s \priosim_1 t]$ and (b) $|\orew(s) - \orew(t)| \leq [s \priobis_g t]$. \end{cor} \paragraph{Metrics for total rewards.} The {\em total reward} $\trew_1(s,\straa,\strab)$ for player~1 at a state $s$ for a variable $r\in\vars$ and the strategies $\straa \in \Straa$ and $\strab \in \Strab$ is defined as \cite{FilarVrieze97}: \begin{equation} \label{eq-total-stra} \trew_1(s,\straa,\strab) = \liminf_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \sum_{j=0}^{k} \E_s^{\straa,\strab} \bigl(\int{r}(X_j) \bigr), \end{equation} where $X_j$ is a random variable representing the $j$-th state of the game. The payoff $\trew_2(s,\straa,\strab)$ for player~2 is defined by replacing $\int{r}$ with $-\int{r}$ in (\ref{eq-total-stra}). The {\em total-reward value} of the game $\game$ at $s$ for player $i\in\set{1,2}$ is defined analogously to the average value, via \[ \otrew_i(s) = \sup_{\pi_i \in \Pi_i} \; \inf_{\pi_{\altro i} \in \Pi_{\altro i}} \trew_i(s,\pi_i,\pi_{\altro i}) \eqpun . \] While the game bisimulation metric $[\priobis_g]$ provides an upper bound for the difference in discounted reward across states, as well as for the difference in average reward across states, it does not provide a bound for the difference in total reward. We now introduce a new metric, the {\em total reward metric} $[\tpriobis_g]$, which provides such a bound. For a discount factor $\dfactor \in [0, 1]$, we define a metric transformer $H_{\tpriosim_1}^\dfactor: \metrsp \mapsto \metrsp$ as follows. For all $d \in \metrsp$ and $s, t \in S$, we let: \begin{align} & H_{\tpriosim_1}^\dfactor(d)(s,t) = \propdist(s, t) + \dfactor \cdot \Sup_{k \in C(d)} \bigl( \pre_1(k)(s) - \pre_1(k)(t) \bigr) \eqpun .
\label{eq-disc-tpriosim} \end{align} The metric $[\tpriosim_1]^\alpha$ (resp.\ $[\tpriobis_1]^\alpha$) is obtained as the least (resp.\ least symmetrical) fixpoint of the transformer~(\ref{eq-disc-tpriosim}). We write $[\tpriosim_1]$ for $[\tpriosim_1]^1$, and $[\tpriobis_1]$ for $[\tpriobis_1]^1$. These metrics are {\em reciprocal}, i.e., $\metr{\tpriosim_1}^\dfactor = \metr{\tpriomis_2}^\dfactor$ and $\metr{\tpriobis_1}^\dfactor = \metr{\tpriobis_2}^\dfactor$. If $\dfactor < 1$ we obtain the {\em discounted total reward metric\/}, and if $\dfactor = 1$ the {\em undiscounted total reward metric\/}. While the discounted total reward metric is bounded, the undiscounted total reward metric may not be bounded. The total metrics provide bounds for the difference in discounted, average, and total reward between states. \begin{theo}{} \label{theo-boundedness-of-total-metrics} The following assertions hold. \begin{enumerate} \item For all game structures $\game$, for all discount factors $\dfactor \in [0, 1)$, for all states $s, t \in S$, \begin{align*} & (a)\ \metr{s \tpriosim_1 t}^\dfactor \le (\rb - \lb) / (1 - \dfactor), & & (b)\ \metr{s \tpriosim_1 t}^\dfactor \le \metr{s \tpriosim_1 t}, & \\ & (c)\ \odrew_1(s) - \odrew_1(t) \le \metr{s \tpriosim_1 t}^\dfactor, & & (d)\ \orew_1(s) - \orew_1(t) \le \metr{s \tpriosim_1 t}, & \\ & (e)\ \otrew_1(s) - \otrew_1(t) \le \metr{s \tpriosim_1 t}. \end{align*} \item There exists a game structure $\game$ and states $s, t \in S$ such that $\metr{s \tpriosim_1 t} = \infty$. \end{enumerate} \end{theo} \begin{proof} For assertion (1)(a), notice that $\propdist(s, t) \le (\rb - \lb)$. Considering the $n$-step Picard iterate of the metric distance, we have \[ \metr{s \tpriosim_1^n t}^\dfactor \le \sum_{i = 0}^{n} \dfactor^i \cdot (\rb - \lb) \eqpun . \] In the limit this yields $\metr{s \tpriosim_1 t}^\dfactor \le (\rb - \lb) / (1 - \dfactor)$. Assertion (1)(b) follows by induction on the Picard iterations that realize the metric distance.
For all $n \ge 0$, $\metr{s \tpriosim_1^n t}^\dfactor \le \metr{s \tpriosim_1^n t}$. Assertion (1)(c) follows as in the proof of Theorem~\ref{theo-disc-reward-bound}, since in the definition of the discounted total reward metric the $\imax$ is replaced with a $+$: by induction, for all $n \ge 1$, \[ \odrew_1(n)(s) - \odrew_1(n)(t) \le (1 - \dfactor) \cdot \propdist(s, t) + \dfactor \cdot (\pre_1(\odrew_1(n - 1))(s) - \pre_1(\odrew_1(n-1))(t)) \le \metr{s \tpriosim_1^n t}^\dfactor \eqpun . \] For assertion (1)(d), by induction on the Picard iterates that realize the metrics, for all $n \ge 0$ we have $\metr{s \priosim_1^n t} \le \metr{s \tpriosim_1^n t}$, which in the limit gives $\metr{s \priosim_1 t} \le \metr{s \tpriosim_1 t}$. Together with Corollary~\ref{cor-avg-reward-bound}, this leads to $\orew_1(s) - \orew_1(t) \le \metr{s \tpriosim_1 t}$, proving assertion (1)(d). We now prove assertion (1)(e) by induction, showing that for all $n \ge 0$, $\otrew_1(n)(s) - \otrew_1(n)(t) \le \metr{s \tpriosim_1^n t}$. As the metric can be computed via Picard iteration, we have for all $n \ge 1$: \begin{equation} \label{eq-total-balls} [s \tpriosim_1^n t] = \propdist(s, t) + \sup_{k \in C([\tpriosim_1^{n-1}])} (\pre_1(k)(s) - \pre_1(k)(t)) \eqpun . \end{equation} We define a sequence of valuations $u(n) \in \valus$ by $u(0) = \int{r}$ and, for all $n > 0$ and states $s \in S$, \[ u(n)(s) = \int{r}(s) + \pre_1(u(n - 1))(s) \eqpun . \] We take $\otrew_1(0) = u(0) = \int{r}$ and for $n > 0$, from the definition of total rewards (\ref{eq-total-stra}), we get the $n$-step total reward value at a state $s \in S$ in terms of $u$ as \begin{align*} \otrew_1(n)(s) & = \frac{1}{n} \cdot \sum_{i = 1}^n u(i)(s) \eqpun . \end{align*} Notice that $\otrew_1(n)(s) \le u(n)(s)$ for all $n \ge 0$. When $n = 0$, the result is immediate by the definition of $\otrew_1(0)$, noticing that $\metr{s \tpriosim_1^0 t} = \propdist(s, t)$. Assume the result holds for $n - 1 \ge 0$.
We have: \begin{align} \otrew_1(n)(s) - \otrew_1(n)(t) & = \frac{1}{n} \cdot \sum_{i = 1}^n u(i)(s) - \frac{1}{n} \cdot \sum_{i = 1}^n u(i)(t) \nonumber \\ & = \frac{1}{n} \cdot \sum_{i = 1}^n (u(i)(s) - u(i)(t)) \nonumber \\ & = \frac{1}{n} \cdot \sum_{i = 1}^n ((\int{r}(s) - \int{r}(t)) + \nonumber \\ & \qquad \qquad \;\; (\pre_1(u(i - 1))(s) - \pre_1(u(i - 1))(t))) \label{pre-total-metric} \\ & \le \frac{1}{n} \cdot \sum_{i = 1}^n \metr{s \tpriosim_1^i t} \label{1e-penul} \\ & \le \metr{s \tpriosim_1^n t}, \label{1e-ul} \end{align} where (\ref{1e-penul}) follows from (\ref{pre-total-metric}) by (\ref{eq-total-balls}), since by our induction hypothesis we have $\otrew_1(i) \le u(i) \in C(\metr{\tpriosim_1^{i}})$ for all $0 \le i < n$, and (\ref{1e-ul}) follows from (\ref{1e-penul}) by the monotonicity of the undiscounted total reward metric in $n$. To prove assertion (2), consider the game structure on the left-hand side of Figure~\ref{fig:disc-bound}. The total reward at state $s$ is unbounded: $\otrew_1(s) = 2 + 5 + \ldots = \infty$. Now consider a modified version of the game, with identical structure and with states $s'$ and $t'$ corresponding to $s$ and $t$ of the original game. Let $\int{r}(t') = 0$. In the modified game, $\otrew_1(s') = 2$. From result (1)(e), since $\otrew_1(s) = \infty$ and $\otrew_1(s') = 2$, we have $\metr{s \tpriosim_1 s'} = \infty$. \end{proof} It is easy to observe that the quantitative $\mu$-calculus does not provide a logical characterization for $[\tpriosim^\alpha_1]$ or $[\tpriosim_1]$. In fact, all formulas of the quantitative $\mu$-calculus have valuations in the interval $[\lb, \rb]$, while, as stated in Theorem~\ref{theo-boundedness-of-total-metrics}, the total reward can be unbounded. The difference is essentially due to the fact that our version of the quantitative $\mu$-calculus lacks a ``$+$'' operator.
It is not clear how to introduce such a $+$ operator in a context sufficiently restricted to provide a logical characterization for $[\tpriosim^\alpha_1]$; above all, it is not clear whether a canonical calculus, with interesting formal properties, would be obtained. \subsection{Metric kernels} We now show that the kernels of all the metrics defined in the paper coincide: algorithms developed for the game kernels $\priosim_1$ and $\priobis_g$ compute the kernels of the corresponding discounted and total reward metrics as well. \begin{theo}{}\label{thm-metric-kernel-canonicity} For all game structures $\game$, states $s$ and $t$, and discount factors $\dfactor \in [0, 1)$, the following statements are equivalent: \begin{align*} & (a) \ \metr{s \priosim_1 t} = 0 & & (b) \ \metr{s \priosim_1 t}^\dfactor = 0 & & (c) \ \metr{s \tpriosim_1 t}^\dfactor = 0 \eqpun . \end{align*} \end{theo} \begin{proof} We prove $(a) \Rightarrow (b) \Rightarrow (c) \Rightarrow (a)$. We assume $0 < \dfactor < 1$. Assertion $(a)$ implies that $\propdist(s, t) = 0$ and $\Sup_{k \in C(\metr{\priosim_1})}(\pre_1(k)(s) - \pre_1(k)(t)) \le 0$; since $C(\metr{\priosim_1}^\dfactor) \subs C(\metr{\priosim_1})$ by (\ref{theo-undisc-bound}), $(b)$ follows. We prove $(b) \Rightarrow (c)$ by induction on the Picard iterations that compute $\metr{s \priosim_1 t}^\dfactor$ and $\metr{s \tpriosim_1 t}^\dfactor$. The base case is immediate. Assume that for all states $s$ and $t$, $\metr{s \priosim_1^{n - 1} t}^\dfactor = 0$ implies $\metr{s \tpriosim_1^{n - 1} t}^\dfactor = 0$. Towards a contradiction, assume $\metr{s \priosim_1^{n} t}^\dfactor = 0$ but $\metr{s \tpriosim_1^{n} t}^\dfactor > 0$. Then there must be $k \in C(\metr{\tpriosim_1^{n - 1}}^\dfactor)$ such that $\pre_1(k)(s) - \pre_1(k)(t) > 0$. By our induction hypothesis, there exists a $\delta > 0$ such that $k' = \delta \cdot k \in C(\metr{\priosim_1^{n - 1}}^\dfactor)$.
Since $\pre$ is multi-linear, the players' optimal responses in $\pre_1(k)(s)$ remain optimal for $k'$. But this means $(\pre_1(k')(s) - \pre_1(k')(t)) > 0$ for $k' \in C(\metr{\priosim_1^{n - 1}}^\dfactor)$, leading to $\metr{s \priosim_1^{n} t}^\dfactor > 0$; a contradiction. Therefore, $(b) \Rightarrow (c)$. In a similar fashion we can show that $(c) \Rightarrow (a)$. \end{proof} \section{Algorithms for Turn-Based Games and MDPs} \label{sec-tb-algos} In this section, we present algorithms for computing the metric and its kernel for turn-based games and MDPs. We first present a polynomial-time algorithm to compute the operator $H_{\priosim_i}(d)$ that gives the {\em exact\/} one-step distance between two states, for $i \in \set{1,2}$. We then present a PSPACE algorithm to decide whether the limit distance between two states $s$ and $t$ (i.e., $[s \priosim_1 t]$) is at most a rational value $r$. Our algorithm matches the best bound known for the special class of Markov chains~\cite{vBSW08}. Finally, we present improved algorithms for the important case of the kernel of the metrics. Since by Theorem~\ref{thm-metric-kernel-canonicity} the kernels of the metrics introduced in this paper coincide, we present our algorithms for the kernel of the undiscounted metric. For the bisimulation kernel, our algorithm is significantly more efficient than previous algorithms. \subsection{Algorithms for the metrics} For turn-based games and MDPs, only one player has a choice of moves at a given state. We consider two player~1 states; a similar analysis applies to player~2 states. We remark that the distance between states in $S_\ii$ and $S_\jj$ is always $\rb - \lb$ due to the existence of the variable $\turn$.
For a metric $d \in \metrsp$ and states $s, t \in S_1$, since $\propdist(s, t)$ is computed directly from its definition, computing $H_{\priosim_1}(d)(s, t)$ entails evaluating the expression $\Sup_{k \in C(d)} \bigl( \pre_1(k)(s) - \pre_1(k)(t) \bigr)$, which is the same as $\Sup_{k \in C(d)} \Sup_{x \in \dis_1(s)} \Inf_{y \in \dis_1(t)} (\E_s^{x}(k) - \E_t^y(k))$, since $\pre_1(k)(s) = \Sup_{x \in \dis_1(s)} (\E_s^{x}(k))$ and $\pre_1(k)(t) = \Sup_{y \in \dis_1(t)} (\E_t^{y}(k))$, player~1 being the only player with a choice of moves at states $s$ and $t$. By expanding the expectations, we get the following form: \begin{multline}\label{eq-tb-one-step} \Sup_{k \in C(d)} \Sup_{x \in \dis_1(s)} \Inf_{y \in \dis_1(t)} \biggl( \sum_{u \in S} \sum_{a \in \mov_1(s)} \trans(s, a)(u) \cdot x(a) \cdot k(u) - \sum_{v \in S} \sum_{b \in \mov_1(t)} \trans(t, b)(v) \cdot y(b) \cdot k(v) \biggr) \eqpun . \end{multline} We observe that the one-step distance as defined in (\ref{eq-tb-one-step}) is a \emph{sup-inf non-linear (quadratic)} optimization problem. We now present two lemmas by which we transform (\ref{eq-tb-one-step}) into an \emph{inf linear} optimization problem, which we solve by linear programming (LP). The first lemma reduces (\ref{eq-tb-one-step}) to an equivalent formulation that considers only pure moves at state $s$. The second lemma further reduces (\ref{eq-tb-one-step}), using duality, to a formulation that can be solved using LP. \begin{lem}{}\label{lem-tb-a-priori-post-eq} For all turn-based game structures $\game$, for all player~$i$ states $s$ and $t$, given a metric $d \in \metrsp$, the following equality holds: \[ \Sup_{k \in C(d)} \Sup_{x \in \dis_\ii(s)} \Inf_{y \in \dis_\ii(t)} (\E_s^{x}(k) - \E_t^y(k)) = \Sup_{a \in \mov_\ii(s)} \Inf_{y \in \dis_\ii(t)} \Sup_{k \in C(d)} (\E_s^{a}(k) - \E_t^y(k)) \eqpun . \] \end{lem} \proof We prove the result for player~1 states $s$ and $t$, with the proof being identical for player~2.
Given a metric $d \in \metrsp$, we have: \begin{align} \Sup_{k \in C(d)} \Sup_{x \in \dis_1(s)} \Inf_{y \in \dis_1(t)} (\E_s^x(k) - \E_t^y(k)) & = \Sup_{k \in C(d)} (\Sup_{x \in \dis_1(s)} \E_s^x(k) - \Sup_{y \in \dis_1(t)} \E_t^y(k)) \nonumber \\ & = \Sup_{k \in C(d)} (\Sup_{a \in \mov_1(s)} \E_s^a(k) - \Sup_{y \in \dis_1(t)} \E_t^y(k)) \label{pure-pre-decomp} \\ & = \Sup_{k \in C(d)} \Sup_{a \in \mov_1(s)} \Inf_{y \in \dis_1(t)} (\E_s^a(k) - \E_t^y(k)) \nonumber \\ & = \Sup_{a \in \mov_1(s)} \Sup_{k \in C(d)} \Inf_{y \in \dis_1(t)} (\E_s^a(k) - \E_t^y(k)) \label{pre-swap} \\ & = \Sup_{a \in \mov_1(s)} \Inf_{y \in \dis_1(t)} \Sup_{k \in C(d)} (\E_s^a(k) - \E_t^y(k)) \label{post-swap} \end{align} For a fixed $k \in C(d)$, since pure optimal strategies exist at each state for turn-based games and MDPs, we replace the $\Sup_{x \in \dis_1(s)}$ with $\Sup_{a \in \mov_1(s)}$, yielding (\ref{pure-pre-decomp}); exchanging the two outer suprema then yields (\ref{pre-swap}). Since the difference in expectations is multi-linear, $y$ ranges over the compact convex set $\dis_1(t)$ of probability distributions, and $C(d)$ is a compact convex set, we can use the generalized minimax theorem \cite{Sio58} and interchange the innermost $\Sup\,\Inf$ to get (\ref{post-swap}) from (\ref{pre-swap}).\qed The proof of Lemma~\ref{lem-tb-a-priori-post-eq} is illustrated using the following example.
\psfrag{s}{\textbf{\textit{$s$}}} \psfrag{t}{\textbf{\textit{$t$}}} \psfrag{u}{\textbf{\textit{$u$}}} \psfrag{v}{\textbf{\textit{$v$}}} \psfrag{a}{\textbf{\textit{$a$}}} \psfrag{b}{\textbf{\textit{$b$}}} \psfrag{c}{\textbf{\textit{$c$}}} \psfrag{e}{\textbf{\textit{$e$}}} \psfrag{s'}{\textbf{\textit{$s'$}}} \psfrag{w'}{\textbf{\textit{$w'$}}} \psfrag{t'}{\textbf{\textit{$t'$}}} \psfrag{u'}{\textbf{\textit{$u'$}}} \psfrag{v'}{\textbf{\textit{$v'$}}} \psfrag{f}{\textbf{\textit{$f$}}} \psfrag{0}{\textbf{\textit{$0$}}} \psfrag{1}{\textbf{\textit{$1$}}} \begin{figure}\label{fig:lem1-mdp1} \label{fig:lem1-mdp2} \label{fig:lem-mdp-metrics} \end{figure} \begin{exa}{} Consider the example in Figure~\ref{fig:lem-mdp-metrics}. In the MDPs shown in the figure, every move leads to a unique successor state, with the exception of move $e \in \mov_1(s)$, which leads to states $u$ and $v$ with equal probability. Assume the variable valuations are such that all states are at a propositional distance of $1$. Without loss of generality, assume that the valuation $k \in C(d)$ is such that $k(u) > k(v)$. By the linearity of expectations, for move $c \in \mov_1(s)$, $\E_s^c(k) \ge \E_s^x(k)$ for all $x \in \dis_1(s)$. Similar arguments can be made for $k(u) < k(v)$. This gives an informal justification for step (\ref{pure-pre-decomp}) in the proof: given a $k \in C(d)$, there exist pure optimal strategies for the single player with a choice of moves at each state. While we can use pure moves at states $s$ and $t$ {\em if $k \in C(d)$ is known\/}, the principal difficulty in directly computing the left-hand side of the equality arises from the uncountably many values for $k$; the distance is the supremum over all possible values of $k$. In the final equality, step (\ref{post-swap}), and hence in the lemma, we avoid this difficulty: the equivalent expression picks a $k \in C(d)$ last, exposing the difference in the distributions induced over states.
As we shall see, this enables computing the one-step metric distance using a trans-shipping formulation. We remark that while we can use pure moves at state $s$, we cannot do so at state $t$ in the right-hand side of step (\ref{post-swap}) of the proof. Firstly, the proof of the lemma depends on the set $\dis_1(t)$ being convex. Secondly, if we could restrict our attention to pure moves at state $t$, then we could replace $\Inf_{y \in \dis_1(t)}$ with $\Inf_{f \in \mov_1(t)}$ on the right-hand side. But this yields too fine a one-step distance. Consider move $e$ at state $s$. We see that neither $c$ nor $b$ at state $t$ yields a distribution over states that matches the distribution induced by $e$. We can then always pick $k \in C(d)$ such that $\E_s^e(k) - \E_t^f(k) > 0$. If instead we choose $y \in \dis_1(t)$ such that $y(b) = y(c) = \frac{1}{2}$, we match the distribution induced by move $e$ from state $s$, which implies that for any choice of $k \in C(d)$, $\E_s^e(k) - \E_t^{y}(k) = 0$. Intuitively, the right-hand side of the equality can be interpreted as a game between a protagonist and an antagonist, with the protagonist picking $y \in \dis_1(t)$, for every pure move $a \in \mov_1(s)$, to match the induced distributions over states. The antagonist then picks a $k \in C(d)$ to maximize the difference in induced distributions. If the distributions match, then no choice of $k \in C(d)$ yields a difference in expectations bounded away from 0. \end{exa} From Lemma~\ref{lem-tb-a-priori-post-eq}, given $d \in \metrsp$, we can write the player~1 one-step distance between states $s$ and $t$ as follows: \begin{equation} \label{eq-onestep} \OneStep(s,t,d) = \Sup_{a \in \mov_1(s)} \Inf_{y \in \dis_1(t)} \Sup_{k \in C(d)} (\E_s^{a}(k) - \E_t^y(k)) \eqpun .
\end{equation} Hence we compute, for all $a \in \mov_1(s)$, the expression \[ \OneStep(s,t,d,a)= \Inf_{y \in \dis_1(t)} \Sup_{k \in C(d)} (\E_s^{a}(k) - \E_t^y(k)), \] and then choose the maximum, i.e., $\max_{a \in \mov_1(s)} \OneStep(s,t,d,a)$. We now present a lemma that helps reduce the above $\Inf$-$\Sup$ optimization problem to a linear program. We first introduce some notation. We denote by $\lambdavec$ the set of variables $\lambda_{u,v}$, for $u,v \in S$. Given $a \in \mov_1(s)$ and a distribution $y \in \dis_1(t)$, we write $\lambdavec \in \Phi(s, t, a, y)$ if the following linear constraints are satisfied: \begin{gather*} \text{(1) for all } v\in S: \sum_{u \in S} \lambda_{u,v} = \trans(s, a)(v); \quad \text{(2) for all } u \in S: \sum_{v \in S} \lambda_{u,v} = \sum_{b\in \mov_1(t)} y(b)\cdot \trans(t,b)(u); \\ \text{(3) for all } u,v \in S: \lambda_{u,v} \ge 0 \eqpun . \end{gather*} \begin{lem}{}\label{lem-trans-shipping} For all turn-based game structures and MDPs $\game$, for all $d\in \metrsp$, and for all $s,t \in S$, the following assertion holds: \[ \sup_{a \in \mov_1(s)} \Inf_{y \in \dis_1(t)} \Sup_{k \in C(d)} (\E_s^{a}(k) - \E_t^y(k)) = \sup_{a \in \mov_1(s)} \Inf_{y \in \dis_1(t)} \Inf_{\lambdavec \in \Phi(s,t,a,y)} \Bigl( \sum_{u,v \in S} d(u,v) \cdot \lambda_{u,v} \Bigr) \eqpun . \] \end{lem} \proof By LP duality, following the results of \cite{vanBreugelCONCUR01}, for all $a \in \mov_1(s)$ and $y \in \dis_1(t)$, the maximization over all $k \in C(d)$ can be rewritten as a minimization problem: \[ \Sup_{k \in C(d)} (\E_s^{a}(k) - \E_t^y(k)) = \inf_{\lambdavec \in \Phi(s,t,a,y)} \Bigl( \sum_{u,v \in S} d(u,v) \cdot \lambda_{u,v} \Bigr) \eqpun . \] The formula on the right-hand side of the above equality is the {\em trans-shipping formulation}, which computes the minimum cost of shipping the distribution $\trans(s, a)$ into the distribution induced by $y$ at $t$, with edge costs $d$.
The result of the lemma follows.\qed Using the above result we obtain the following LP for $\OneStep(s,t,d,a)$ over the variables: (a)~$\set{\lambda_{u, v}}_{u, v \in S}$, and (b)~$y_b$ for $b \in \mov_1(t)$: \begin{align} \label{exact-one-step-tb-lp} \mathrm{Minimize} \quad \sum_{u, v \in S}d(u,v) \cdot\lambda_{u,v} \quad \text{\textrm{subject to}} \end{align} \begin{gather*} \text{(1) for all } v \in S: \sum_{u \in S} \lambda_{u,v} = \trans(s, a)(v); \qquad \text{(2) for all } u \in S: \sum_{v \in S} \lambda_{u,v} = \sum_{b\in \mov_1(t)} y_b\cdot \trans(t,b)(u); \\ \text{(3) for all } u,v\in S: \lambda_{u,v} \ge 0; \qquad \text{(4) for all } b \in \mov_1(t): y_b \geq 0; \qquad \text{(5) } \sum_{b \in \mov_1(t)} y_b=1 \eqpun . \end{gather*} \begin{exa}{} We now use the MDPs in Figure~\ref{fig:mdp1} and \ref{fig:mdp2} to compute the simulation distance between states using the results in Lemma~\ref{lem-tb-a-priori-post-eq} and Lemma~\ref{lem-trans-shipping}. In the figure, states of the same color have a propositional distance of 0 and states of different colors have a propositional distance of 1; $\propdist(s, s') = \propdist(t, t') = \propdist(u, u') = \propdist(v, v') = \propdist(t', w') = 0$. In MDP~1, shown in Figure~\ref{fig:mdp1}, $\trans(s, a)(t) = \trans(t, b)(v) = \trans(t, c)(u) = 1$ and $\trans(t, f)(u) = \trans(t, f)(v) = \frac{1}{2}$. In MDP~2, shown in Figure~\ref{fig:mdp2}, $\trans(s', a)(w') = \trans(s', b)(t') = 1$, $\trans(t', c)(u') = \frac{1}{2} - \epsilon$, $\trans(t', c)(v') = \frac{1}{2} + \epsilon$, $\trans(w', e)(u') = \trans(w', f)(v') = 1 - \epsilon$ and $\trans(w', e)(v') = \trans(w', f)(u') = \epsilon$. 
\begin{figure}\label{fig:mdp1} \label{fig:mdp2} \label{fig:mdp-metrics} \end{figure} \definecolor{snazz}{rgb}{0.99,0.9,0.9} \begin{table*}[ht] \begin{tabular}{|>{\columncolor{snazz}}l|l|l|l|l|} \hline \rowcolor{snazz} \multicolumn{1}{|>{\columncolor{snazz}}c|}{$t$} & \multicolumn{2}{>{\columncolor{snazz}}c|}{$w'$} & \multicolumn{2}{>{\columncolor{snazz}}c|}{$t'$} \\ \rowcolor{snazz} $\mov_1(t)\qquad$ & $x \in \dis_1(w')\qquad$ & $Cost\qquad$ & $x \in \dis_1(t')\qquad$ & $Cost\qquad$ \\[1.5pt] \hline \hline $b$ & $x(f) = 1$ & $\epsilon$ & $x(c) = 1$ & $\frac{1}{2} - \epsilon$ \\[1.5pt] $c$ & $x(e) = 1$ & $\epsilon$ & $x(c) = 1$ & $\frac{1}{2} + \epsilon$ \\[1.5pt] $f$ & $x(f) = x(e) = \frac{1}{2}$ & $0$ & $x(c) = 1$ & $\epsilon$ \\[1.5pt] \hline \end{tabular} \caption{The moves from states $w'$ and $t'$ that minimize the trans-shipping cost for each $a \in \mov_1(t)$ and the corresponding costs.} \label{table:trans-dist} \end{table*} \begin{table*}[ht] \begin{tabular}{|>{\columncolor{snazz}}l||l|l|l|l|l|} \hline \rowcolor{snazz} $\metr{\priosim}\qquad$ & $s'\qquad$ & $t'\qquad$ & $w'\qquad$ & $u'\qquad$ & $v'\qquad$ \\[1.5pt] \hline \hline $s$ & $\epsilon$ & $1$ & $1$ & $1$ & $1$ \\[1.5pt] $t$ & $1$ & $\frac{1}{2} + \epsilon$ & $\epsilon$ & $1$ & $1$ \\[1.5pt] $u$ & $1$ & $1$ & $1$ & $0$ & $1$ \\[1.5pt] $v$ & $1$ & $1$ & $1$ & $1$ & $0$ \\[1.5pt] \hline \end{tabular} \caption{The simulation metric distance between states in MDP~1 and states in MDP~2.} \label{table:simdist} \end{table*} \begin{figure}\label{fig:tt-dist} \label{fig:ss-dist} \label{fig:trans-shipping} \end{figure} In Table~\ref{table:simdist}, we show the simulation metric distance between states of the MDPs in Figure~\ref{fig:mdp1} and Figure~\ref{fig:mdp2}. Consider states $t$ and $t'$. $c$ is the only move available to player~1 from state $t'$ and it induces a transition probability of $\frac{1}{2} + \epsilon$ to state $v'$ and $\frac{1}{2} - \epsilon$ to state $u'$. 
For the pure move $c$ at state $t$, the induced transition probabilities and edge costs in the trans-shipping formulation are shown in Figure~\ref{fig:tt-dist}. It is easy to see that the trans-shipping cost in this case is $\frac{1}{2} + \epsilon$; it is shown in Table~\ref{table:trans-dist} in the row corresponding to move $c$ from state $t$ and the column corresponding to state $t'$. Similarly, the trans-shipping costs for the moves $b$ and $f$ from state $t$ are $\frac{1}{2} - \epsilon$ and $\epsilon$ respectively. The metric distance $\metr{t \priosim t'}$, which is the maximum over these trans-shipping costs, is then $\frac{1}{2} + \epsilon$. Now consider the states $t$ and $w'$. In Table~\ref{table:trans-dist}, we show for each pure move $a \in \mov_1(t)$ the move $x \in \dis_1(w')$ that minimizes the trans-shipping cost, together with the minimum cost. In this case it is easy to see that $\metr{t \priosim w'} = \epsilon$. Given $\metr{t \priosim t'} = \frac{1}{2} + \epsilon$ and $\metr{t \priosim w'} = \epsilon$, we can calculate the distance $\metr{s \priosim s'}$ from the trans-shipping formulation shown in Figure~\ref{fig:ss-dist}; the minimum cost is $\epsilon$, attained by choosing move $a$ from state $s'$, giving us $\metr{s \priosim s'} = \epsilon$. \end{exa} \begin{thm}{} \label{theo-one-step-poly} For all turn-based game structures and MDPs $\game$, given $d \in \metrsp$, for all states $s, t \in S$, we can compute $H_{\priosim_1}(d)(s,t)$ in polynomial time by the Linear Program~(\ref{exact-one-step-tb-lp}). \end{thm} For all states $s, t \in S$, iteration of $\OneStep(s,t,d)$ converges to the exact distance. However, in general, there are no known bounds for the rate of convergence. We now present a decision procedure to check whether the exact distance between two states is at most a rational value $r$. We first show how to express the predicate $d(s, t) = \OneStep(s,t,d)$.
We observe that since $H_{\priosim_1}$ is non-decreasing, we have $\OneStep(s,t,d) \geq d(s, t)$. It follows that the equality $d(s, t) = \OneStep(s,t,d)$ holds iff for every $a \in \mov_1(s)$, of which there are finitely many, all the linear inequalities of LP~(\ref{exact-one-step-tb-lp}) are satisfied, and $d(s, t)= \sum_{u, v \in S}d(u, v) \cdot\lambda_{u,v}$ holds. It then follows that $d(s, t) = \OneStep(s,t,d)$ can be written as a predicate in the theory of real closed fields. Given a rational $r$ and two states $s$ and $t$, we present a formula in the existential theory of the reals to decide whether $[s \priosim_1 t] \leq r$. Since $[s \priosim_1 t]$ is the least fixed point of $H_{\priosim_1}$, we define a formula $\Phi(r)$ that is true iff, in the least fixpoint, $[s \priosim_1 t] \leq r$, as follows: \[ \exists d \in \metrsp. [ (\bigwedge_{u, v \in S}\OneStep(u,v,d) = d(u, v)) \land (d(s,t) \leq r) ] \eqpun . \] If the formula $\Phi(r)$ is true, then there exists a fixpoint $d$ such that $d(s, t)$ is bounded by $r$, which implies that in the least fixpoint $d(s, t)$ is bounded by $r$. Conversely, if in the least fixpoint $d(s, t)$ is bounded by $r$, then the least fixpoint is a witness $d$ for $\Phi(r)$ being true. Since the existential theory of the reals is decidable in PSPACE~\cite{Canny88}, we have the following result. \begin{thm}{(Decision complexity for exact distance).} For all turn-based game structures and MDPs $\game$, given a rational $r$, and two states $s$ and $t$, whether $[s \priosim_1 t] \leq r$ can be decided in PSPACE. \end{thm} \noindent{\bf{Approximation.}} Given a rational $\epsilon > 0$, using binary search and $\calo(\log(\frac{\rb - \lb}{\epsilon}))$ calls to check the formula $\Phi(r)$, we can obtain an interval $[l,u]$ with $u-l \leq \epsilon$ such that $[s \priosim_1 t]$ lies in the interval $[l,u]$.
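The binary-search scheme just described can be made concrete as follows. This is an illustrative Python sketch, not part of the paper's formal development: `decide_at_most` stands in for a hypothetical oracle deciding the formula $\Phi(r)$, and `lb`, `rb` are the end points of the range of possible distances.

```python
def approximate_distance(decide_at_most, lb, rb, eps):
    """Approximate the least-fixpoint distance within eps, given an
    oracle decide_at_most(r) that returns True iff the distance is <= r."""
    l, u = lb, rb
    # Invariant: the distance lies in [l, u]; each iteration halves the
    # interval, so O(log((rb - lb)/eps)) oracle calls suffice.
    while u - l > eps:
        mid = (l + u) / 2
        if decide_at_most(mid):  # distance <= mid
            u = mid
        else:                    # distance > mid
            l = mid
    return l, u
```

For instance, with an oracle for a distance known to equal $0.3$, `approximate_distance(lambda r: r >= 0.3, 0.0, 1.0, 1e-3)` returns an interval of width at most `1e-3` containing $0.3$.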
\begin{cor}{\bf{(Approximation for exact distance).}} For all turn-based game structures and MDPs $\game$, given a rational $\epsilon$, and two states $s$ and $t$, an interval $[l,u]$ with $u-l \leq \epsilon$ such that $[s \priosim_1 t] \in [l,u]$ can be computed in PSPACE. \end{cor} \subsection{Algorithms for the kernel} The kernel of the simulation metric $\priosim_1$ can be computed as the limit of the sequence of relations $\priosim^{0}_1$, $\priosim^{1}_1$, $\priosim^{2}_1$, \ldots. For all $s, t \in S$, we have $(s, t) \in \priosim^{0}_1$ iff ${s \loceq t}$. For all $n \geq 0$, we have $(s, t) \in \priosim^{n+1}_1$ iff $\OneStep(s,t,1_{\priosim^n_1}) = 0$. Checking the condition $\OneStep(s,t,1_{\priosim^n_1}) = 0$ corresponds to solving an LP feasibility problem for every $a \in \mov_1(s)$, as it suffices to replace the minimization goal $\gamma = \sum_{u, v \in S} 1_{\priosim^n_1}(u, v)\cdot \lambda_{u, v}$ with the constraint $\gamma = 0$ in the LP~(\ref{exact-one-step-tb-lp}). We note that this is the same LP feasibility problem that was introduced in \cite{ZhangH07} as part of an algorithm to decide simulation of probabilistic systems in which each label may lead to one or more distributions over states. For the bisimulation kernel, we present a more efficient algorithm, which also improves on the algorithms presented in \cite{ZhangH07}. The idea is to proceed by partition refinement, as usual for bisimulation computations. The refinement step is as follows: given a partition, two states $s$ and $t$ belong to the same refined partition iff every pure move from $s$ induces a probability distribution on equivalence classes that can be matched by mixed moves from $t$, and vice versa. Precisely, we compute a sequence $\calq^0$, $\calq^1$, $\calq^2$, \ldots, of partitions. Two states $s, t$ belong to the same class of $\calq^0$ iff they have the same variable valuation (i.e., iff ${s \loceq t}$).
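The fixpoint computation of the simulation kernel just described can be sketched generically. The following Python sketch is illustrative only: `one_step_zero` is a hypothetical placeholder for the per-move LP feasibility check $\OneStep(s,t,1_{\priosim^n_1}) = 0$, and `loceq` encodes the relation $\loceq$.

```python
def simulation_kernel(states, loceq, one_step_zero):
    """Compute the simulation kernel as the limit of the decreasing
    relation sequence R^0, R^1, ...; at most |S|^2 refinements occur.

    loceq(s, t): True iff s and t have the same variable valuation.
    one_step_zero(s, t, rel): True iff OneStep(s, t, 1_rel) = 0, i.e.
    the LP feasibility check succeeds for every move of s (an abstract
    stand-in for the LPs described in the text).
    """
    # R^0: all pairs of states with equal valuations.
    rel = {(s, t) for s in states for t in states if loceq(s, t)}
    while True:
        # R^{n+1}: keep (s, t) only if the one-step check passes w.r.t. R^n.
        refined = {(s, t) for (s, t) in rel if one_step_zero(s, t, rel)}
        if refined == rel:  # fixpoint reached
            return rel
        rel = refined
```

On a toy instance where the oracle rejects a single pair, the loop stabilizes after one refinement and returns the remaining pairs.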
For $n \geq 0$, since by the definition of the bisimulation metric given in (\ref{eq-game-bis-met}), $\metr{s \priobis_g t} = 0$ iff $\metr{s \priosim_1 t} = 0$ and $\metr{t \priosim_1 s} = 0$, two states $s, t$ in a given class of $\calq^n$ remain in the same class in $\calq^{n+1}$ iff both $(s,t)$ and $(t,s)$ satisfy the set of LP feasibility problems $\OneBis(s,t,\calq^n)$ given below: \begin{quote} $\OneBis(s,t,\calq)$ consists of one LP feasibility problem for each $a \in \mov(s)$. The problem for $a \in \mov(s)$ has the set of variables $\set{x_b \mid b \in \mov(t)}$, and the set of constraints: \begin{gather*} \text{(1) for all } b \in \mov(t): \; x_b \geq 0, \qquad \text{(2)~}\sum_{b \in \mov(t)} x_b = 1, \\[1ex] \text{(3) for all } V \in \calq: \; \sum_{b \in \mov(t)} \sum_{u \in V} x_b \cdot \trans(t, b)(u) \geq \sum_{u \in V} \trans(s, a)(u) \eqpun . \end{gather*} \end{quote} In the following theorem we show that two states $s, t \in S$ are $(n + 1)$-step bisimilar iff $\OneBis(s, t, \calq^n)$ and $\OneBis(t, s, \calq^n)$ are feasible. \begin{thm}{} For all turn-based game structures and MDPs $\game$, for all $n \ge 0$, given two states $s, t \in S$ and an $n$-step bisimulation partition of states $\calq^n$ such that $\forall V \in \calq^n$, $\forall u, v \in V$, $\metr{u \priobis_g v}^n = 0$, the following holds, \[ \metr{s \priobis_g t}^{n + 1} = 0 \text{ iff } \OneBis(s, t, \calq^n) \text{ and } \OneBis(t, s, \calq^n) \text{ are both feasible}. \] \end{thm} \proof We proceed by induction on $n$. Assume the result holds for all iteration steps up to $n$ and consider the case for $n + 1$. In one direction, if $\metr{s \priobis_g t}^{n + 1} = 0$, then $\metr{s \priosim_1 t}^{n + 1} = \metr{t \priosim_1 s}^{n + 1} = 0$ by the definition of the bisimulation metric. We need to show that given $\metr{s \priosim_1 t}^{n + 1} = 0$, $\OneBis(s, t, \calq^n)$ is feasible. The argument for $\metr{t \priosim_1 s}^{n + 1} = 0$ is identical.
From the definition of the $(n + 1)$-step simulation distance, given $\propdist(s, t) = 0$ by our induction hypothesis, we have \begin{align} \forall b \in \mov_1(s) .\ \Inf_{x \in \dis_1(t)} \Sup_{k \in C(d^n)} (\E_s^b(k) - \E_t^x(k)) \le 0 \label{onebis-1} \eqpun . \end{align} Consider a player~1 move $a \in \mov_1(s)$. Since, by the generalized minimax theorem, we can interchange the order of the $\Inf$ and $\Sup$ in $\Inf_{x \in \dis_1(t)} \Sup_{k \in C(d^n)} (\E_s^a(k) - \E_t^x(k))$, the optimal values of $x \in \dis_1(t)$ and $k \in C(d^n)$ exist and depend only on $a$. Let $x_a$ and $k_a$ be the optimal values of $x$ and $k$ that realize the $\Inf$ and $\Sup$ in $\Inf_{x \in \dis_1(t)} \Sup_{k \in C(d^n)} (\E_s^a(k) - \E_t^x(k))$. Using $x_a$ and $k_a$ in (\ref{onebis-1}) we have: \begin{align} \E_t^{x_a}(k_a) & \ge \E_s^a(k_a) \nonumber \\ \sum_{u \in S} \trans(t, x_a)(u) \cdot k_a(u) & \ge \sum_{v \in S} \trans(s, a)(v) \cdot k_a(v) \nonumber \\ \sum_{V \in \calq^n} \sum_{u \in V} \trans(t, x_a)(u) \cdot k_a(u) & \ge \sum_{V \in \calq^n} \sum_{v \in V} \trans(s, a)(v) \cdot k_a(v) \label{onebis-2} \\ \sum_{V \in \calq^n} \sum_{u \in V} \trans(t, x_a)(u) & \ge \sum_{V \in \calq^n} \sum_{v \in V} \trans(s, a)(v) \label{onebis-3} \\ \forall V \in \calq^n . \biggl( \sum_{u \in V} \trans(t, x_a)(u) & \ge \sum_{u \in V} \trans(s, a)(u) \biggr), \label{onebis-4} \end{align} where (\ref{onebis-3}) follows from (\ref{onebis-2}) by noting that for all $V \in \calq^n$ and all states $u, v \in V$, we have $d^n(u, v) = d^n(v, u) = 0$ by our hypothesis, so that $k(u) - k(v) \le d^n(u, v) = 0$ and $k(v) - k(u) \le d^n(v, u) = 0$, which implies $k(u) = k(v)$ for all $k \in C(d^n)$. To show that (\ref{onebis-4}) follows from (\ref{onebis-3}), assume towards a contradiction that there exists a $V' \in \calq^n$ such that $\sum_{u \in V'} \trans(t, x_a)(u) < \sum_{u \in V'} \trans(s, a)(u)$.
Then there must be a $V'' \in \calq^n$ such that $\sum_{u \in V''} \trans(t, x_a)(u) > \sum_{u \in V''} \trans(s, a)(u)$, since $\trans(t, x_a)$ is a probability distribution and the probability mass it allocates to the equivalence classes must sum to $1$. Further, for all $V \in \calq^n$, for all $u, v \in V$, we have $d^n(u, v) = d^n(v, u) = 0$, and for all $u \in V$ and for all $w \in S \setm V$, we have $d^n(u, w) = d^n(w, u) = 1$. Therefore, we can pick a feasible $k' \in C(d^n)$ such that $k'(v) > 0$ for all $v \in V''$ and $k'(v) = 0$ for all other states. Using $k'$ we get $\E_s^a(k') - \E_t^{x_a}(k') > 0$, which means $k_a$ is not optimal, contradicting (\ref{onebis-1}). In the other direction, assume that $\OneBis(s, t, \calq^n)$ is feasible. We need to show that $\metr{s \priosim_1 t}^{n + 1} = 0$. Since $\OneBis(s, t, \calq^n)$ is feasible, for every $a \in \mov_1(s)$ there exists a distribution $x_a \in \dis_1(t)$ such that $\forall V \in \calq^n . (\sum_{u \in V} \trans(t, x_a)(u) \ge \sum_{v \in V} \trans(s, a)(v))$. By our induction hypothesis, this implies that for all $k \in C(d^n)$, we have $(\E_s^a(k) - \E_t^{x_a}(k)) \le 0$ and in particular $\Sup_{k \in C(d^n)} (\E_s^a(k) - \E_t^{x_a}(k)) \le 0$. Since $\propdist(s, t) = 0$ by our hypothesis and we have shown, \[ \forall a \in \mov_1(s) .\ \Inf_{x \in \dis_1(t)} \Sup_{k \in C(d^n)} (\E_s^a(k) - \E_t^x(k)) \le 0, \] we have, from Lemma~\ref{lem-tb-a-priori-post-eq}, \[ \metr{s \priosim_1 t}^{n + 1} = \propdist(s, t) \imax \Sup_{a \in \mov_1(s)} \Inf_{x \in \dis_1(t)} \Sup_{k \in C(d^n)} (\E_s^a(k) - \E_t^x(k)) = 0\eqpun .
\] In a similar fashion, if $\OneBis(t, s, \calq^n)$ is feasible then $\metr{t \priosim_1 s}^{n + 1} = 0$, which leads to $\metr{s \priobis_g t}^{n + 1} = 0$ by the definition of the bisimulation metric, as required.\qed \noindent{\bf Complexity.} The number of partition refinement steps required for the computation of both the simulation and the bisimulation kernel is bounded by $\calo(|S|^2)$ for turn-based games and MDPs, where $S$ is the set of states. At every refinement step, at most $\calo(|S|^2)$ state pairs are considered, and for each state pair $(s,t)$ at most $|\mov(s)|$ LP feasibility problems need to be solved. Let us denote by $\LPF(n,m)$ the complexity of solving the feasibility of $m$ linear inequalities over $n$ variables. We obtain the following result. \begin{thm}{} For all turn-based game structures and MDPs $\game$, the following assertions hold: \begin{enumerate}[\em(1)] \item the simulation kernel can be computed in $\calo\big(n^4 \cdot m \cdot \LPF(n^2 + m,n^2 +2n + m+2)\big) $ time; \item the bisimulation kernel can be computed in $\calo\big(n^4 \cdot m \cdot \LPF(m,n + m+1)\big) $ time; \end{enumerate} where $n=|S|$ is the size of the state space, and $m=\max_{s\in S} |\mov(s)|$. \end{thm} \begin{rem}{} The best known algorithm for $\LPF(n,m)$ works in time $\calo(n^{2.5} \cdot\log(n))$~\cite{Ye06} (assuming each arithmetic operation takes unit time). The previous algorithm for the bisimulation kernel checked two-way simulation and hence has complexity $\calo(n^4 \cdot m \cdot (n^2+m)^{2.5} \cdot \log (n^2 +m))$, whereas our algorithm works in time $\calo(n^4 \cdot m \cdot m^{2.5} \cdot \log (m))$. For most practical purposes, the number of moves at a state is constant (i.e., $m$ is constant). When $m$ is constant, the previous best known algorithm worked in $\calo(n^9 \cdot \log (n))$ time, whereas our algorithm works in $\calo(n^4)$ time.
\end{rem} \section{Algorithms for Concurrent Games} In this section we first show that the computation of the metric distance is at least as hard as the computation of optimal values in concurrent reachability games. The exact complexity of the latter is open, but it is known to be at least as hard as the square-root sum problem, which is in PSPACE but whose inclusion in NP is a long-standing open problem \cite{EtessamiYannakakis07,GareyGrahamJohnson76}. Next, we present algorithms, based on a decision procedure for the theory of real closed fields, both for checking bounds on the exact distance and for computing the kernels of the metrics. Our reduction to the theory of real closed fields removes one quantifier alternation when compared to the previously known formula (inferred from \cite{dAMRS07}). This improves the complexity of the algorithm. \input{reduction} \subsection{Algorithms for the metrics} We first prove a lemma that helps to obtain reduced-complexity algorithms for concurrent games. The lemma states that the distance $\metr{s \priosim_1 t}$ is attained by restricting player~2 to pure moves at state $t$, for all states $s, t \in S$. \begin{lem}{}\label{lem-pl2-pure} For all concurrent game structures $\game$ and all metrics $d \in \metrsp$, we have, \begin{multline} \label{eq-a-priori-pure} \Sup_{k \in C(d)} \Sup_{x_1 \in \dis_1(s)} \Inf_{y_1 \in \dis_1(t)} \Sup_{y_2 \in \dis_2(t)} \Inf_{x_2 \in \dis_2(s)} (\E_{s}^{x_1,x_2}(k) - \E_{t}^{y_1,y_2}(k)) \\ = \Sup_{k \in C(d)} \Sup_{x_1 \in \dis_1(s)} \Inf_{y_1 \in \dis_1(t)} \Sup_{b \in \mov_2(t)} \Inf_{x_2 \in \dis_2(s)} (\E_{s}^{x_1,x_2}(k) - \E_{t}^{y_1,b}(k)) \eqpun . \end{multline} \end{lem} \proof To prove our claim we fix $k \in C(d)$ and player~1 mixed moves $x \in \dis_1(s)$ and $y \in \dis_1(t)$.
We then have, \begin{align} \Sup_{y_2 \in \dis_2(t)} \Inf_{x_2 \in \dis_2(s)} (\E_{s}^{x,x_2}(k) - \E_{t}^{y,y_2}(k)) & = \Inf_{x_2 \in \dis_2(s)} \E_s^{x,x_2}(k) - \Inf_{y_2 \in \dis_2(t)} \E_t^{y,y_2}(k) \label{inf-decomp} \\ & = \Inf_{x_2 \in \dis_2(s)} \E_s^{x,x_2}(k) - \Inf_{b \in \mov_2(t)} \E_t^{y,b}(k) \label{inf-pure-decomp} \\ & = \Sup_{b \in \mov_2(t)} \Inf_{x_2 \in \dis_2(s)} (\E_{s}^{x,x_2}(k) - \E_{t}^{y,b}(k)), \nonumber \end{align} where (\ref{inf-pure-decomp}) follows from (\ref{inf-decomp}) since the decomposition on the right-hand side of (\ref{inf-decomp}) yields two independent linear optimization problems, whose optimal values are attained at vertices of the convex hulls of the distributions induced by pure player~2 moves at the two states. This easily leads to the result.\qed We now present algorithms for metrics in concurrent games. Due to the reduction from concurrent reachability games shown in Theorem~\ref{theo-reduction}, it is unlikely that there is an NP algorithm for the metric distance between states. We therefore construct statements in the theory of real closed fields, firstly to decide whether $\metr{s \priosim_1 t} \le r$, for a rational $r$, so that we can approximate the metric distance between states $s$ and $t$, and secondly to decide whether $\metr{s \priosim_1 t} = 0$, in order to compute the kernels of the game simulation and bisimulation metrics. The statements improve on the complexity that can be achieved by a direct translation of the statements of \cite{dAMRS07} to the theory of real closed fields. The complexity reduction is based on the observation that, using Lemma~\ref{lem-pl2-pure}, we can replace a $\sup$ operator with a finite conjunction, and therefore reduce the quantifier complexity of the resulting formula. Fix a game structure $\game$ and states $s$ and $t$ of $\game$. We proceed to construct a statement in the theory of reals that can be used to decide if $\metr{s\priosim_1 t} \le r$, for a given rational $r$.
In the following, we use variables $x_1$, $y_1$ and $x_2$ to denote the sets of variables $\set{x_1(a) \mid a\in\mov_1(s)}$, $\set{y_1(a) \mid a\in\mov_1(t)}$ and $\set{x_2(b) \mid b\in\mov_2(s)}$, respectively. We use $k$ to denote the set of variables $\set{k(u)\mid u\in S}$, and $d$ for the set of variables $\set{d(u,v)\mid u,v\in S}$. The variables $\alpha, \alpha', \beta, \beta'$ range over the reals. For convenience, we assume $\mov_2(t) = \set{b_1,\ldots, b_l}$. First, notice that we can write formulas stating that a variable $x$ is a mixed move for a player at state $s$, and that $k$ is a valid valuation (i.e., $k\in C(d)$): \begin{align*} \IsDistribution(x,\mov_1(s)) \equiv & \bigwedge_{a \in \mov_1(s)} x(a) \ge 0 \wedge \bigwedge_{a \in \mov_1(s)} x(a) \le 1 \wedge \sum_{a \in \mov_1(s)} x(a) = 1 \eqpun ,\\ \IsValidValuation(k,d) \equiv & \bigwedge_{u \in S} \biggl[ k(u) \ge \theta_1 \wedge k(u) \le \theta_2 \biggr] \wedge \bigwedge_{u, v \in S} (k(u) - k(v) \le d(u, v)) \eqpun . \end{align*} In the following, we write bounded quantifiers of the form ``$\exists x_1\in\dis_1(s)$'' or ``$\forall k \in C(d)$'', which mean, respectively, $\exists x_1. \IsDistribution(x_1,\mov_1(s)) \wedge \cdots$ and $\forall k. \IsValidValuation(k,d) \rightarrow \cdots$. Let $\eta(k,x_1,x_2,y_1,b)$ be the polynomial $\E_s^{x_1,x_2}(k) - \E_t^{y_1,b}(k)$. Notice that $\eta$ is a polynomial of degree $3$. For variables $a, a_1,\ldots,a_l$, we write $a = \max\set{a_1,\ldots,a_l}$ as shorthand for the formula \[ (a = a_1 \wedge \bigwedge_{i=1}^l a_1\geq a_i) \vee \ldots\vee (a = a_l \wedge \bigwedge_{i=1}^l a_l \geq a_i) \eqpun . \] We construct the formula for game simulation in stages.
First, we construct a formula $\Phi_1(d, s, t, k, x_1, \alpha)$ with free variables $d, k, x_1, \alpha$ such that $\Phi_1(d,s,t,k,x_1,\alpha)$ holds for a valuation of the variables iff \begin{align*} \alpha = \inf_{y_1\in\dis_1(t)}\sup_{b\in\mov_2(t)}\inf_{x_2\in\dis_2(s)} (\E_s^{x_1,x_2}(k) - \E_t^{y_1,b}(k)) \eqpun . \end{align*} We use the following observation to move the innermost $\inf$ ahead of the $\sup$ over the finite set $\mov_2(t)$ (for a function $f$): \[ \sup_{b\in\mov_2(t)}\inf_{x_2\in\dis_2(s)} f(b,x_2,x) = \inf_{x_2^{b_1}\in\dis_2(s)}\ldots\inf_{x_2^{b_l}\in\dis_2(s)} \max(f(b_1,x_2^{b_1},x),\ldots,f(b_l,x_2^{b_l},x)) \eqpun . \] The formula $\Phi_1(d,s,t,k,x_1,\alpha)$ is given by: \begin{multline*} \forall y_1\in\dis_1(t).\forall x_2^{b_1}\in\dis_2(s)\ldots x_2^{b_l}\in\dis_2(s). \forall w_1\ldots w_l. \forall a.\forall \alpha'.\\ \exists \hat{y}_1\in\dis_1(t).\exists \hat{x}_2^{b_1}\in\dis_2(s)\ldots \hat{x}_2^{b_l}\in\dis_2(s). \exists \hat{w}_1\ldots \hat{w}_l. \exists \hat{a}. \\ \left[ \begin{array}{ccc} \left\{\begin{array}{c} \ \Bigl(w_1=\eta(k,x_1,x_2^{b_1},y_1,b_1)\Bigr)\\ \wedge \cdots \wedge \\ \Bigl(w_l=\eta(k,x_1,x_2^{b_l},y_1,b_l)\Bigr)\wedge\\ \bigl(a = \max\set{w_1,\ldots,w_l}\bigr) \end{array} \right\} & \rightarrow (a\geq \alpha) \end{array} \right] \wedge \\ \left[ \begin{array}{ccc} \left\{\begin{array}{c} \ \Bigl(\hat{w}_1=\eta(k,x_1,\hat{x}_2^{b_1},\hat{y}_1,b_1)\Bigr)\\ \wedge \cdots \wedge \\ \Bigl(\hat{w}_l=\eta(k,x_1,\hat{x}_2^{b_l},\hat{y}_1,b_l)\Bigr)\wedge\\ \bigl(\hat{a} = \max\set{\hat{w}_1,\ldots,\hat{w}_l} \wedge \hat{a}\geq \alpha' \bigr) \end{array} \right\} & \rightarrow (\alpha \geq \alpha') \end{array} \right] \eqpun .
\end{multline*} Using $\Phi_1$, we construct a formula $\Phi(d, s, t, \alpha)$ with free variables $d \in \metrsp$ and $\alpha$ ranging over the reals such that $\Phi(d,s,t,\alpha)$ is true iff: \[ \alpha = \Sup_{k \in C(d)} \Sup_{x_1 \in \dis_1(s)} \Inf_{y_1 \in \dis_1(t)} \Sup_{b \in \mov_2(t)} \Inf_{x_2 \in \dis_2(s)} (\E_{s}^{x_1,x_2}(k) - \E_{t}^{y_1,b}(k)) \eqpun . \] The formula $\Phi$ is defined as follows: \begin{multline} \label{phi-for-a-priori} \forall k\in C(d). \forall x_1\in \dis_1(s).\forall \beta.\forall \alpha'.\\ \biggl[ \begin{array}{c} \Phi_1(d, s, t, k, x_1, \beta) \rightarrow (\beta \le \alpha) \wedge \\ (\forall k'\in C(d). \forall x'_1\in \dis_1(s).\forall\beta'.\Phi_1(d,s,t,k',x'_1,\beta')\wedge \beta' \le \alpha') \rightarrow \alpha \le \alpha' \end{array} \biggr] \eqpun . \end{multline} Finally, given a rational $r$, we can check if $\metr{s \priosim_1 t} \le r$ by checking if the following sentence is true: \begin{align} \label{conc-fixpoint-bound} \exists d \in \metrsp . \exists a \in \metrsp . [ \bigwedge_{u,v \in S} \bigl(\Phi(d,u,v,a(u, v)) \land d(u, v) = a(u, v)\bigr) \land (d(s,t) \le r) ] \eqpun . \end{align} The above sentence is true iff, in the least fixpoint, $d(s, t)$ is bounded by $r$. As in the case of turn-based games and MDPs, given a rational $\epsilon > 0$, using binary search and $\calo(\log(\frac{\rb - \lb}{\epsilon}))$ calls to a decision procedure to check the sentence (\ref{conc-fixpoint-bound}), we can compute an interval $[l,u]$ with $u - l \le \epsilon$ such that $\metr{s \priosim_1 t} \in [l, u]$. \noindent{\bf Complexity.} Note that $\Phi$ is of the form $\forall\exists\forall$, because $\Phi_1$ is of the form $\forall\exists$ and appears in negative position in $\Phi$.
The formula $\Phi$ has $(|S|+|\mov_1(s)|+3)$ universally quantified variables, followed by $(|S|+|\mov_1(s)|+3 + 2(|\mov_1(t)| + |\mov_2(s)|\cdot|\mov_2(t)|+|\mov_2(t)|+2))$ existentially quantified variables, followed by $2(|\mov_1(t)| + |\mov_2(s)|\cdot |\mov_2(t)| + |\mov_2(t)| +1)$ universally quantified variables. The sentence (\ref{conc-fixpoint-bound}) introduces $2|S|^2$ existentially quantified variables ahead of $\Phi$. The matrix of the formula is of length at most quadratic in the size of the game, and the maximum degree of any polynomial in the formula is $3$. We define the size of a game $\game$ as $|G| = |S| + T$, where $T = \sum_{s,t \in S} \sum_{a, b \in \moves} |\trans(s,a,b)(t)|$. Using the complexity of deciding a formula in the theory of real closed fields \cite{Basu99}, which states that a formula of $p$ polynomials with quantifier blocks of sizes $l_1, l_2, \ldots$ can be decided in time $\calo(p^{\calo(\Pi_i(l_i + 1))})$, we get the following result. \begin{thm}{(Decision complexity for exact distance).} For all concurrent game structures $\game$, given a rational $r$, and two states $s$ and $t$, whether $\metr{s \priosim_1 t} \le r$ can be decided in time $\calo(|G|^{\calo(|G|^5)})$. \end{thm} \noindent{\bf{Approximation.}} Given a rational $\epsilon > 0$, using binary search and $\calo(\log(\frac{\rb - \lb}{\epsilon}))$ calls to check the sentence~(\ref{conc-fixpoint-bound}), we can obtain an interval $[l,u]$ with $u-l \leq \epsilon$ such that $\metr{s \priosim_1 t}$ lies in the interval $[l,u]$. \begin{cor}{\bf{(Approximation for exact distance).}} For all concurrent game structures $\game$, given a rational $\epsilon$, and two states $s$ and $t$, an interval $[l,u]$ with $u-l \leq \epsilon$ such that $\metr{s \priosim_1 t} \in [l,u]$ can be computed in time $\calo(\log(\frac{\rb - \lb}{\epsilon}) \cdot |G|^{\calo(|G|^5)})$.
\end{cor} In contrast, the formula to check whether $\metr{s \priosim_1 t} \le r$, for a rational $r$, implied by the definition of $H_{\priosim_1}(d)(s,t)$, which does not use Lemma~\ref{lem-pl2-pure}, has five quantifier alternations due to the inner sup; combined with the $2 \cdot |S|^2$ existentially quantified variables in the sentence (\ref{conc-fixpoint-bound}), this yields a decision complexity of $\calo(|G|^{\calo(|G|^7)})$. \subsection{Computing the kernels} As in the case of turn-based games and MDPs, the kernel of the simulation metric $\priosim_1$ for concurrent games can be computed as the limit of the sequence of relations $\priosim^{0}_1$, $\priosim^{1}_1$, $\priosim^{2}_1$, \ldots. For all $s, t \in S$, we have $(s, t) \in \priosim^{0}_1$ iff ${s \loceq t}$. For all $n \geq 0$, we have $(s, t) \in \priosim^{n+1}_1$ iff the following sentence $\Phi_s$ is true: \begin{align*} \forall a.\Phi(d^n,s,t,a) \rightarrow a = 0, \end{align*} where $\Phi$ is defined as in (\ref{phi-for-a-priori}) and at step $n$ of the iteration, the distance between any pair of states $u, v \in S$ is defined as follows: \begin{align*} \forall u, v \in S .\ d^n(u, v) = \begin{cases} \; 0 \qquad \text{if } (u, v) \in \; \priosim^n_1 \\ \; 1 \qquad \text{if } (u, v) \not\in \; \priosim^n_1 \end{cases} \eqpun . \end{align*} To compute the bisimulation kernel, we again proceed by partition refinement.
For a sequence of partitions $\calq^0,\calq^1,\ldots$, where $u, v \in V$ for some $V \in \calq^n$ implies $(u, v) \in \priobis_1^n$, we have $(s,t) \in \priobis^{n+1}_1$ iff the following sentence $\Phi_b$ is true for the state pairs $(s,t)$ and $(t,s)$: \begin{align*} \forall a.\Phi(d^n,s,t,a) \rightarrow a = 0, \end{align*} where $\Phi$ is again as defined in (\ref{phi-for-a-priori}) and at step $n$ of the iteration, the distance between any pair of states $u, v \in S$ is defined as follows: \begin{align*} \forall u, v \in S .\ d^n(u, v) = \begin{cases} \; 0 \qquad \text{if } (u, v) \in \; \priobis^n_1 \\ \; 1 \qquad \text{if } (u, v) \not\in \; \priobis^n_1 \end{cases} \eqpun . \end{align*} \noindent{\bf Complexity.} In the worst case we need $\calo(|S|^2)$ partition refinement steps to compute both the simulation and the bisimulation relation. At each partition refinement step the number of state pairs we consider is bounded by $\calo(|S|^2)$. We can check whether $\Phi_s$ and $\Phi_b$ are true using a decision procedure for the theory of real closed fields. Therefore, we need $\calo(|S|^4)$ decisions to compute the kernels. The partitioning of states based on the decisions can be done by any of the partition refinement algorithms, such as \cite{PaigeTarjan87}. \begin{thm}{} For all concurrent game structures $\game$ and states $s$ and $t$, whether $s \priosim_1 t$ can be decided in $\calo(|G|^{\calo(|G|^3)})$ time, and whether $s \priobis_g t$ can be decided in $\calo(|G|^{\calo(|G|^3)})$ time. \end{thm} \section{Conclusion: Possible Applications and Open Problems} We have shown theoretical applications of game metrics with respect to discounted and long-run average values of games. An interesting question regarding game metrics is related to their usefulness in real-world applications. We now discuss possible applications of game metrics. \begin{itemize} \item {\em State space reduction.\/} The kernels of the metrics are the simulation and bisimulation relations.
These relations have been well studied in the context of transition systems, with applications in program analysis and verification. For example, in \cite{KatoenKZJ07} the authors show that bisimulation-based state space reduction is practical and may result in an enormous reduction in model size, speeding up model checking of probabilistic systems. \item {\em Security.\/} Bisimulation plays a critical role in the formal analysis of security protocols. If two instances of a protocol, parameterized by the messages $m$ and $m'$ respectively, are bisimilar, then the messages remain secret \cite{CNP09}. The authors use bisimulation in probabilistic transition systems to analyze probabilistic anonymity in security protocols. \item {\em Computational Biology.\/} In the emerging area of computational systems biology, the authors of \cite{Thorsley09} use the metrics defined in the context of probabilistic systems \cite{DGJP99,vanBreugelCONCUR01,vanBreugel-icalp2001} to compare reduced models of {\em Stochastic Reaction Networks\/}. These reaction networks are used to study intra-cellular behavior in computational systems biology. The reduced models are Continuous Time Markov Chains (CTMCs), and the comparison of different reduced models is via the metric distance between their initial states. A central question in the study of intra-cellular behavior is estimating the sizes of the populations of the various species that cohabit cells. The intra-cellular dynamics in this context is modeled as a stochastic process representing the temporal evolution of the species' populations, given by a family $(X(t))_{t \ge 0}$ of random vectors. For $0 \le i < N$, $N$ being the number of different species, $X_i(t)$ is the population of species $S_i$ at time $t$. In \cite{SandmannW08}, the authors show how CTMCs that model system dynamics can be reduced to Discrete Time Markov Chains (DTMCs) using a technique called uniformization or discrete-time conversion.
The DTMCs are stochastically identical to the CTMCs and enable more efficient estimation of the species' populations. An assumption made in these studies is that the systems are spatially homogeneous and thermally equilibrated; that is, the molecules are well stirred in a fixed volume at a constant temperature. These assumptions enable the reduction of these systems to CTMCs, and in some cases to DTMCs. \end{itemize} In the applications we have discussed, non-determinism is modeled probabilistically. In applications where non-determinism needs to be interpreted demonically rather than probabilistically, MDPs or turn-based games are the appropriate framework for analysis. If the interaction between various sources of non-determinism needs to be modeled simultaneously, then concurrent games are the appropriate framework. For the analysis of these more general models, our results and algorithms will be useful. \noindent{\bf Open Problems.} While we have presented polynomial-time algorithms for the kernels of the simulation and bisimulation metrics for MDPs and turn-based games, three problems remain open: the existence of a polynomial-time algorithm for the kernels of the simulation and bisimulation metrics for concurrent games; the existence of a polynomial-time algorithm to approximate the exact metric distance for turn-based games and MDPs; and the existence of a PSPACE algorithm for the decision problem of the exact metric distance in concurrent games. \subsection*{Acknowledgments.} The first, second, and fourth authors were supported in part by the National Science Foundation grants CNS-0720884 and CCR-0132780. The third author was supported in part by the National Science Foundation grants CCF-0427202 and CCF-0546170. We would like to thank the reviewers for their detailed comments, which helped us improve the paper. \end{document}
\begin{document} \maketitle \allowdisplaybreaks \begin{abstract} Of concern is the \emph{a priori} symmetry of traveling wave solutions for a general class of nonlocal dispersive equations \[ u_t + (u^2 +Lu)_x=0, \] where $L$ is a Fourier multiplier operator with symbol $m$. Our analysis includes both homogeneous and inhomogeneous symbols. We characterize a class of symbols $m$ guaranteeing that periodic traveling wave solutions are symmetric under a mild assumption on the wave profile. In particular, instead of considering waves with a unique crest and trough per period or a monotone structure near troughs, as classically imposed in the water wave problem, we formulate a \emph{reflection criterion}, which allows us to affirm the symmetry of periodic traveling waves. The reflection criterion weakens the assumption of monotonicity between trough and crest and enables us to treat \emph{a priori} solutions with multiple crests of different sizes per period. Moreover, our result applies not only to smooth solutions, but also to traveling waves with a non-smooth structure such as peaks or cusps at a crest. The proof relies on a so-called \emph{touching lemma}, which is related to a strong maximum principle for elliptic operators, and on a weak form of the celebrated \emph{method of moving planes}. \end{abstract} \section{Introduction} The present manuscript is devoted to the symmetry of periodic traveling wave solutions for general nonlocal dispersive equations of the form\footnote{We use the notation $\hat f$ to denote the Fourier transform of a function $f$.} \begin{equation}\label{eq:Equation} u_t+(u^2+Lu)_x=0, \qquad \widehat{Lu}(t,k)=m(k)\hat u(t,k), \end{equation} where $t>0$ and $x\in \mathbb{R}$ denote the time and space variables, respectively. The linear operator $L$ is a Fourier multiplier operator with real symbol $m$.
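For orientation, it is useful to record the equation satisfied by a traveling wave. The following is a standard reduction sketched here for the reader's convenience (the wave speed $c$ and the constant of integration $B$ are our notation, not taken from the text):

```latex
% Traveling-wave ansatz: u(t,x) = \phi(x - ct), with wave speed c.
% Substituting into u_t + (u^2 + Lu)_x = 0 gives
%     -c\phi' + (\phi^2 + L\phi)' = 0,
% and integrating once in x yields the steady equation
\[
  -c\phi + \phi^2 + L\phi = B,
\]
% for some constant of integration B.
```

All terms in the steady equation are functions of the single variable $x - ct$, so the symmetry question reduces to a question about the profile $\phi$.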
For certain classes of symbols $m$ and a mild \emph{a priori} assumption on the wave profile, which we call the \emph{reflection criterion}, we show that periodic traveling wave solutions of \eqref{eq:Equation} are symmetric and have exactly one crest per period. Our study includes smooth solutions as well as so-called \emph{highest waves} exhibiting a cusp or corner singularity at their crests. Our main motivation for studying the symmetric structure of periodic solutions of \eqref{eq:Equation} stems from the full water wave problem governed by the Euler equations in two dimensions. For the water wave problem, the existence and symmetric structure of steady, periodic waves in various settings has been the subject of intense study during the last decades. Classical existence results for periodic traveling waves assume that the wave profile is symmetric \cite{Gerstner1809, Dubreil-Jacotin, Hur2001, Constantin-Strauss2002, Constantin-Strauss2004}. However, this is not necessarily a restriction, as many studies concerned with \emph{a priori} symmetry of traveling waves show. The first result in the context of irrotational flows goes back to Garabedian \cite{Garabedian} in 1965, who proved that if every streamline obeys a \emph{monotone profile}, meaning that it has a unique maximum and a unique minimum per period, with all maxima located on one vertical line and all minima on another, then the periodic steady wave is symmetric. The proof was simplified in \cite{Toland1998} in the 1990s, and the authors of \cite{OS} proved that in fact any steady periodic solution for irrotational flows is symmetric under the condition that (only) the surface wave profile is monotone. Under the same condition on the wave profile, it is shown in \cite{Constantin-Escher2004b,Constantin-Escher2004} that steady waves are symmetric in the context of flows with vorticity.
In the same setting, the authors of \cite{Matioc2013} replaced the assumption of a monotone wave profile by the condition that all streamlines achieve their global minimum on the same vertical line and are monotone in a small (one-sided) neighborhood. They then prove that the wave is symmetric and in fact has a monotone profile. Notice that \emph{a priori} the wave profile is allowed to have multiple crests per period. A further result for waves which may \emph{a priori} have several crests within a period was established in \cite{Hur2007}. Under the assumption that the wave is monotone near a unique global trough in each period and every streamline attains a minimum below this trough, it is shown that the wave is symmetric and has a single crest per period. The above mentioned results concern flows without stagnation. In fact, it was shown in \cite{EEW} that there exist rotational flows with critical layers having multiple crests of different sizes within one period. This naturally raises the question whether a local monotone structure, either near a global trough or between a trough and a crest, is indispensable to guarantee the symmetry of steady solutions in water wave problems. The answer is negative for \emph{solitary water waves}, as indicated in \cite{Craig} for irrotational flows and in \cite{Hs} for waves with vorticity (without stagnation). The precise decay of solitary waves enables the application of the so-called \emph{method of moving planes}, which goes back to Alexandrov \cite{Alk} and was refined by the works of Serrin \cite{Serr}, Gidas, Ni, and Nirenberg \cite{GNN}, and many others. Concerning water wave model equations in the regime of shallow water, there are results confirming the symmetry of solitary waves irrespective of the precise decay rate; see for instance \cite{BEP16,Pei-DP} for the Whitham and Degasperis--Procesi equations, respectively, and \cite{A} for a class of general dispersive equations.
The situation is more complicated for periodic waves in the context of the water wave problem and model equations of the form \eqref{eq:Equation}. Due to the lack of decay at infinity, a monotone structure, either near a global trough or from it to a global crest, is required in previous results in order to initiate the method of moving planes. We relax the assumption of a monotone structure as far as possible while still guaranteeing that the method of moving planes is directly applicable. To this end, we introduce the following criterion: \begin{description}\label{cond: reflection criterion} \item[Reflection criterion] A $2\pi$-periodic continuous function $\phi$ is said to satisfy the \emph{reflection criterion} if there exists $\lambda_*\in [0,2\pi)$ such that \[ \phi(x)>\phi(2\lambda_*-x)\qquad \mbox{for all}\qquad x\in (\lambda_*, \lambda_*+ \pi). \] \end{description} This criterion is weaker than the classical \emph{monotone profile} assumption and does not impose a monotone structure at any particular point of the wave profile, so that it can be used to confirm the symmetry of periodic waves with arbitrarily many crests and troughs per period. Classically, the method of moving planes relies strongly on an elliptic maximum principle for local equations. Dealing with a genuinely nonlocal equation like \eqref{eq:Equation}, an in-depth study of the nonlocal operator is required in order to apply the method of moving planes. We impose assumptions on the symbol of the nonlocal operator $L$, which corresponds to the dispersion relation of \eqref{eq:Equation}, guaranteeing that the action of $L$ reflects a weak form of an elliptic maximum principle. We call it the \emph{touching lemma} (cf. Lemma \ref{touching lemma} below).
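As a quick sanity check, the reflection criterion can be verified numerically for a sample profile. In the sketch below (an illustrative example, not taken from the text) we use $\phi(x)=\cos(x)$ with axis $\lambda_*=3\pi/2$; then $\phi(2\lambda_*-x)=\cos(3\pi-x)=-\cos(x)$, so the required strict inequality reduces to $\cos(x)>0$ on the open half-period:

```python
import numpy as np

# Check the reflection criterion for phi(x) = cos(x) with lambda_* = 3*pi/2:
# phi(x) > phi(2*lambda_* - x) for all x in the open interval (lambda_*, lambda_* + pi).
phi = np.cos
lam = 1.5 * np.pi
x = np.linspace(lam, lam + np.pi, 1001)[1:-1]  # sample the open interval
assert np.all(phi(x) > phi(2.0 * lam - x))
```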
In our analysis, we impose the following assumption on the symbol $m$: \begin{description} \item[Assumption] The symbol $m$ is even, real and satisfies one of the following conditions: \begin{itemize} \item[(S)] $m\in S^r(\mathbb{Z})$ for some $r<0$ is inhomogeneous, and the sequence $(n_k)_{k\in \mathbb{N}}$ defined by $n_k:=m(|\sqrt{k}|)$ is completely monotone; \end{itemize} or \begin{itemize} \item[(H)] $m$ is homogeneous of degree $r<0$, that is $m(k)\eqsim |k|^r$. \end{itemize} \end{description} The precise definitions of a completely monotone sequence and the symbol class $S^r(\mathbb{Z})$ are given in Section~\ref{S:2}. The assumptions on $m$ include dispersive equations with weak dispersion, where the symbol of the Fourier multiplier operator can be either inhomogeneous or homogeneous. They are inspired by the fact that for a large class of evolution equations, solutions which are symmetric at any instant of time are in fact traveling, provided the symbol $m$ is real and even \cite{BEGP, Ehrn-Holden-Ray}. This fact uncovers the close connection between symmetry and steadiness for waves from the opposite perspective. Examples of well-known water wave model equations falling within the framework of \eqref{eq:Equation} and satisfying assumption (S) or (H) are the Whitham equation, the Burgers--Hilbert equation and the reduced Ostrovsky equation. Let us turn to our main result. If $u$ is a traveling wave solution of \eqref{eq:Equation}, then $u(t,x)=\phi(x-ct)$, where $c>0$ denotes the wave propagation speed and $\phi$ solves the steady equation \begin{equation}\label{eq:steady} -c\phi + \phi^2 + L\phi =B \end{equation} for some constant of integration $B\in \mathbb{R}$. The present work is not devoted to the existence of solutions of \eqref{eq:steady}, but rather to the symmetry of solutions whenever they exist.
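Assumption (S) can be spot-checked numerically for a given symbol. The sketch below tests complete monotonicity of $n_k=m(|\sqrt{k}|)$ for the illustrative inhomogeneous symbol $m(k)=(1+k^2)^{r/2}$ with $r=-1$, so that $n_k=(1+k)^{-1/2}$, using the binomial form of the iterated difference operator $\Delta^n$ defined in Section~\ref{S:2} (finitely many $n$ and $k$ only, of course; this is a numerical check, not a proof):

```python
import math

def delta(n, k, seq):
    """Iterated forward difference via the binomial formula:
    Delta^n seq(k) = sum_j (-1)^j C(n,j) seq(k+n-j)."""
    return sum((-1) ** j * math.comb(n, j) * seq(k + n - j) for j in range(n + 1))

# n_k = m(sqrt(k)) = (1+k)^(r/2) with r = -1; complete monotonicity requires
# (-1)^n Delta^n n_k >= 0 for all n, k.
n_seq = lambda k: (1.0 + k) ** (-0.5)
assert all((-1) ** n * delta(n, k, n_seq) >= 0.0
           for n in range(8) for k in range(50))
```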
Results on the existence of solutions of \eqref{eq:steady} in the periodic setting can be found for instance in \cite{BD, EK, EW} and the references therein. Our main result reads: \begin{mainthm}[Symmetry of periodic steady waves] Assume that the symbol $m$ of the linear Fourier multiplier operator $L$ in \eqref{eq:Equation} satisfies either assumption \emph{(S)} or \emph{(H)}, and let $\phi$ be a $2\pi$-periodic\footnote{The assumption of $2\pi$-periodic solutions can be replaced by any finite period.}, continuous solution of \eqref{eq:steady}. If $\phi$ satisfies the reflection criterion and one of the following \begin{enumerate} \item $\phi <\frac{c}{2}$, \item $\phi \leq \frac{c}{2}$ and $\phi(x)=\frac{c}{2}$ for a unique $x\in [-\pi,\pi)$ \end{enumerate} holds, then $\phi$ is symmetric and has exactly one crest per period. Moreover, \[ \phi^\prime(x)>0\qquad \mbox{for all}\qquad x\in (-\pi,0), \] after a suitable translation. \end{mainthm} The proof of the main theorem is given in Theorem \ref{thm:Snew} and Theorem \ref{thm:S}. As detailed in the paper, if $\phi<\frac{c}{2}$, then $\phi$ is a smooth solution of \eqref{eq:steady} and we need no restriction on the number and magnitude of its crests (as long as the reflection criterion is satisfied). However, if $\phi=\frac{c}{2}$ at some point, then $\phi$ may exhibit a singularity in the form of a cusp or peak, cf. \cite{BD,EW}, so that $\phi$ loses its smoothness. Such a cusp or corner singularity can be compared to a stagnation point on the surface for solutions of the water wave problem, similar to the appearance of the peak of the extremal Stokes wave. As for the Euler equations, this causes difficulties. In our case, we overcome this problem by the additional assumption that such a singularity occurs at most once per period.
Here we would also like to emphasize that even for solitary solutions as studied in \cite{BEP16}, the proof of the symmetry result fails for the highest wave, unless it is assumed that the highest crest is unique. The proof of the above theorem relies on a weak form of the method of moving planes, which we apply in a periodic setting to a general nonlocal equation. Formally, the action of the nonlocal operator $L$ can be expressed as a convolution with a kernel function $K$, given by a Fourier series with coefficients $(m(k))_{k\in \mathbb{Z}}$. We show that under condition (S) or (H) the periodic kernel function $K$ can be expressed, in analogy with Bernstein's theorem, in terms of integrals of Theta or trigonometric functions against measures, respectively. This result guarantees that $K$ is even, integrable and decreasing on a half-period, which forms the foundation for a so-called touching lemma and a boundary point lemma for \eqref{eq:Equation}. These can be viewed as weak nonlocal counterparts of the maximum principle and Hopf's boundary point lemma for elliptic equations, respectively. We conclude the introduction with the organization of this paper. In Section \ref{S:2} we study the action of the nonlocal Fourier multiplier operator $L$. In particular, we show that if the symbol $m$ satisfies condition (S) or (H), then the action of $L$ corresponds to a convolution operator with an even, real, periodic and integrable kernel function $K$. Moreover, $K$ is monotonically decaying on a half-period. Section \ref{S:T} is concerned with the symmetry of traveling wave solutions of \eqref{eq:Equation}. We first study the regularity of periodic steady solutions to \eqref{eq:steady} and then prove a touching lemma and a boundary point lemma. Based on these two lemmas and the structure of smooth or singular wave profiles, we prove the symmetry of regular periodic traveling waves and of the highest periodic wave, respectively.
\section{The action of the nonlocal operator $L$} \label{S:2} We first introduce some notation. Let $f$ and $g$ be two functions. We write $f\lesssim g$ ($f\gtrsim g$) if there exists a constant $c>0$ such that $f\leq c g$ ($f\geq cg$). Moreover, we use the notation $f\eqsim g$ whenever $f\lesssim g$ and $f\gtrsim g$. We denote by $\mathbb{N}_0:=\mathbb{N}\cup \{0\}$ the set of natural numbers including zero. \subsection{Functional analytic setting} We denote by $\mathbb{T}:=\mathbb{R} / 2\pi \mathbb{Z}$ the one-dimensional torus, which is identified with $[0,2\pi) \subset \mathbb{R}$. Let $\mathcal D(\mathbb{T})=C^\infty(\mathbb{T})$ and denote by $\mathcal{S}(\mathbb{Z})$ the space of rapidly decaying sequences. Then the (periodic) Fourier transform $\mathcal{F}:\mathcal{D}(\mathbb{T})\to \mathcal S(\mathbb{Z})$ is defined by \[ (\mathcal{F}f)(k)=\hat f(k):=\frac{1}{{2\pi}}\int_{\mathbb{T}}f(x)e^{-ixk}\,dx, \] and any $f\in \mathcal{D}(\mathbb{T})$ can be written as \[ f(x)= \sum_{k\in \mathbb{Z}}\hat f(k)e^{ixk}. \] By duality, the Fourier transform extends uniquely to $\mathcal{F}:\mathcal{D}^\prime(\mathbb{T})\to \mathcal S^\prime(\mathbb{Z})$. Here, $\mathcal D^\prime(\mathbb{T})$ and $\mathcal{S}^\prime(\mathbb{Z})$ are the dual spaces of $\mathcal{D}(\mathbb{T})$ and $\mathcal{S}(\mathbb{Z})$, respectively. We say a function $f:\mathbb{T} \to \mathbb{R}$ belongs to the space $L^p(\mathbb{T})$, $1\leq p<\infty$, if and only if \[ \|f\|_{L^p}^p:=\int_\mathbb{T} |f|^p(x)\, dx < \infty, \] and $f\in L^\infty(\mathbb{T})$ if and only if $\|f\|_\infty:= \esssup_{x\in \mathbb{T}}|f(x)|<\infty$. We collect some well-known results on the Fourier transform on $L^p(\mathbb{T})$ (cf. e.g. \cite[Chapter 3]{G}): If $f\in L^1(\mathbb{T})$, then the sequence of Fourier coefficients $(\hat f(k))_{k\in \mathbb{Z}}$ is bounded and, by the Riemann--Lebesgue lemma, $\lim_{|k|\to \infty}\hat f(k)=0$.
If $f,g\in L^1(\mathbb{T})$ satisfy $\hat f(k)=\hat g(k)$ for all $k\in \mathbb{Z}$, then $f=g$ almost everywhere. If $f,g\in L^2(\mathbb{T})$, then \[ \widehat{fg}(k)=\sum_{l\in \mathbb{Z}}\hat f(l)\hat g(k-l)=\sum_{l\in \mathbb{Z}} \hat f(k-l)\hat g(l). \] We now introduce the periodic Zygmund spaces on which we perform our subsequent analysis. Let $(\varphi_j)_{j\geq 0}\subset C_c^\infty(\mathbb{R})$ be a family of smooth, compactly supported functions satisfying \[ \supp \varphi_0 \subset [-2,2],\qquad \supp \varphi_j \subset [-2^{j+1},-2^{j-1}]\cup [2^{j-1},2^{j+1}] \quad\mbox{ for}\quad j\geq 1, \] \[ \sum_{j\geq 0}\varphi_j(\xi)=1\qquad\mbox{for all}\quad \xi\in\mathbb{R}, \] and for any $n\in\mathbb{N}$, there exists a constant $c_n>0$ such that \[\sup_{j\geq 0}2^{jn}\|\varphi^{(n)}_j\|_\infty\leq c_n.\] For $s>0$, the periodic Zygmund space, denoted by $ \mathcal{C}^s(\mathbb{T})$, consists of functions $f$ satisfying \[ \|f\|_{\mathcal{C}^s(\mathbb{T})}:=\sup_{j\geq 0}2^{sj}\left\|\sum_{k\in \mathbb{Z}} e^{ik(\cdot)} \varphi_j(k)\hat f(k)\right\|_\infty < \infty. \] Finally, for $\alpha \in (0,1)$, we denote by $C^\alpha(\mathbb{T})$ the space of $\alpha$-H\"older continuous functions on $\mathbb{T}$. If $k\in \mathbb{N}$ and $\alpha\in (0,1)$, then $C^{k,\alpha}(\mathbb{T})$ denotes the space of $k$-times continuously differentiable functions whose $k$-th derivative is $\alpha$-H\"older continuous on $\mathbb{T}$. To lighten the notation we write $C^s(\mathbb{T})=C^{\left \lfloor{s}\right \rfloor, s- \left \lfloor{s}\right \rfloor }(\mathbb{T})$ for $s\geq 0$. As a consequence of Littlewood--Paley theory, we have the relation $\mathcal{C}^s(\mathbb{T})=C^s(\mathbb{T})$ for any $s>0$ with $s\notin \mathbb{N}$; that is, the H\"older spaces on the torus are completely characterized by Fourier series.
If $s\in \mathbb{N}$, then $C^s(\mathbb{T})$ is a proper subset of $\mathcal{C}^s(\mathbb{T})$ and \[ C^1(\mathbb{T})\subsetneq C^{1-}(\mathbb{T})\subsetneq \mathcal{C}^1(\mathbb{T}). \] Here, $C^{1-}(\mathbb{T})$ denotes the space of Lipschitz continuous functions on $\mathbb{T}$. For more details we refer to \cite[Chapter 13]{T3}. \subsection{Fourier multipliers} A Fourier multiplier on $\mathbb{Z}$ is a possibly complex-valued function that defines a linear operator $L$ via multiplication on the Fourier side, that is, \[ \widehat{Lf}(k)=m(k)\hat f(k). \] The function $m$ is also called the symbol of the multiplier operator $L$. If $m:\mathbb{Z} \to \mathbb{C}$, we define the difference operator $\Delta^n$ acting on $m$ by \begin{equation}\label{eq:D} \Delta^{n+1} m(k):=\Delta^n m(k+1)-\Delta^nm(k),\qquad n\in \mathbb{N}_0, \end{equation} where $\Delta^0m(k):=m(k)$. Setting $\Delta:=\Delta^1$, we have that $\Delta m(k)=m(k+1)-m(k)$. It is easy to see by induction that \[ \Delta^n m(k)=\sum_{j=0}^n (-1)^j\binom{n}{j}m(k+n-j). \] For $r\in \mathbb{R}$ we define the space $S^r(\mathbb{Z})$ consisting of functions $m:\mathbb{Z} \to \mathbb{C}$ for which \[ |\Delta^n m(k)|\lesssim_n (1+|k|)^{r-n}\qquad \mbox{for}\quad k\in \mathbb{Z}\quad \mbox{and all}\quad n\in \mathbb{N}_0. \] If $m\in S^r(\mathbb{Z})$, we say that $m$ is a symbol of order $r$. The analogous definition for functions on the real line states that $m\in S^r(\mathbb{R})$ if $m\in C^\infty(\mathbb{R})$ and \[ |m^{(n)}(\xi)|\lesssim_n(1+|\xi|)^{r-n}\qquad \mbox{for} \quad \xi \in\mathbb{R} \quad \mbox{and all}\quad n\in \mathbb{N}_0. \] \begin{lem}[\cite{RT}, Lemma 6.2]\label{lem:S} If $m\in S^r(\mathbb{R})$, then the restriction $m|_{\mathbb{Z}}$ belongs to $S^r(\mathbb{Z})$. \end{lem} We aim to include in our analysis Fourier multiplier operators with symbols in $S^r(\mathbb{Z})$, which are bounded, as well as operators with homogeneous symbols.
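The recursive definition of $\Delta^n$ in \eqref{eq:D} and the closed binomial formula can be cross-checked numerically; the symbol $m(k)=(1+|k|)^{-1}$ below is an illustrative choice, not one singled out in the text:

```python
import math

# Illustrative symbol of order r = -1 (assumption, not from the text).
def m(k):
    return (1.0 + abs(k)) ** (-1.0)

def delta_rec(n, k):
    # Recursive definition: Delta^{n+1} m(k) = Delta^n m(k+1) - Delta^n m(k).
    if n == 0:
        return m(k)
    return delta_rec(n - 1, k + 1) - delta_rec(n - 1, k)

def delta_binom(n, k):
    # Closed form: Delta^n m(k) = sum_j (-1)^j C(n,j) m(k+n-j).
    return sum((-1) ** j * math.comb(n, j) * m(k + n - j) for j in range(n + 1))

assert all(abs(delta_rec(n, k) - delta_binom(n, k)) < 1e-12
           for n in range(6) for k in range(-10, 11))
```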
The latter are of the form $m(k)\eqsim|k|^{r}$ with $r<0$. The action of a Fourier multiplier operator with homogeneous symbol on a periodic function is only well-defined for functions with zero mean, that is, $\hat f(0)=0$ and \[ Lf(x)=\sum_{k\neq 0}m(k)\hat f(k)e^{ixk}. \] For this reason, the restriction of a function space $X$ to its subset of zero mean functions is going to play an important role, and we denote it by $X_0$. We now state a classical Fourier multiplier theorem on Zygmund spaces (e.g. \cite[Proposition 13.8.3]{T3}, \cite[Theorem 2.3 (v)]{AB}): \begin{satz} \label{prop:FM} Let $r\in \mathbb{R}$. If $m\in S^r(\mathbb{Z})$, then the Fourier multiplier $L$ defined by \[ Lf(x)=\sum_{k\in \mathbb{Z}}m(k)\hat f(k)e^{ixk} \] belongs to the space $\mathcal{L}\left(\mathcal{C}^{s}(\mathbb{T}),\mathcal{C}^{s-r}(\mathbb{T})\right)$ for any $s\geq 0$. Similarly, if $m$ is a homogeneous symbol of order $r$, that is $m(k)\eqsim |k|^r$, then the Fourier multiplier $L$ defined by \[ Lf(x)=\sum_{k\neq 0}m(k)\hat f(k)e^{ixk} \] belongs to the space $\mathcal{L}\left(\mathcal{C}^{s}_0(\mathbb{T}),\mathcal{C}^{s-r}_0(\mathbb{T})\right)$ for any $s\geq 0$. \end{satz} \subsection{Assumptions on the symbol and properties of the convolution kernel} \label{Ss:2} We first recall some results on \emph{completely monotone} sequences (cf. \cite{Guo, Widder}). A sequence $(n_k)_{k\in\mathbb{N}_0}$ of real numbers is called \emph{completely monotone} if its elements are nonnegative and \[ (-1)^n\Delta^nn_k \geq 0\qquad \mbox{for any}\quad n,k\in\mathbb{N}_0, \] where $\Delta^n$ denotes the difference operator defined in \eqref{eq:D}. In a similar fashion, a smooth function $f:D\subset \mathbb{R}\to \mathbb{R}$ is called \emph{completely monotone} if \[ (-1)^nf^{(n)}(x)\geq 0\qquad \mbox{for any} \quad n\in \mathbb{N}_0, x \in D.
\] If $f:\mathbb{R}\setminus\{0\}\to \mathbb{R}$ is even, we say that $f$ is completely monotone if its restriction to the positive half-line $(0,\infty)$ is completely monotone. As pointed out in \cite{D}, any nonconstant, completely monotone function on $(0,\infty)$ satisfies the strict inequality $(-1)^nf^{(n)}(x)> 0$. There exists a close relation between completely monotone functions and completely monotone sequences: \begin{lem}[\cite{Guo}, Theorem 3 and Theorem 5]\label{lem:CM} If $f:[0,\infty)\to \mathbb{R}$ is completely monotone, then for any $a\geq 0$ the sequence $(f(an))_{n\in\mathbb{N}_0}$ is completely monotone. Conversely, if $(n_k)_{k\in \mathbb{N}}$ is a completely monotone sequence, then there exists a completely monotone interpolation function $f:[1,\infty) \to \mathbb{R}$ such that \[ f(k)=n_k\qquad \mbox{for all}\quad k\in \mathbb{N}. \] \end{lem} As an immediate consequence of Lemma \ref{lem:CM}, we observe that any nontrivial completely monotone sequence $(n_k)_{k\in \mathbb{N}_0}$ is strictly positive for all $k\in \mathbb{N}_0$ and strictly decreasing for all $k\geq 1$. In what follows we impose the following assumption: \begin{description} \item[Assumption] The symbol $m$ of the Fourier multiplier operator $L$ is real, even, and satisfies either \begin{itemize} \item[(S)] $m\in S^r(\mathbb{Z})$ for some $r<0$ and the sequence $(n_k)_{k\in \mathbb{N}}$ defined by $n_k:=m(|\sqrt{k}|)$ is completely monotone \end{itemize} or \begin{itemize} \item[(H)] $m$ is homogeneous of degree $r<0$, that is $m(k)\eqsim |k|^r$. \end{itemize} \end{description} Under assumption (S) or (H) on the symbol, the equation \begin{equation}\label{eq:Equation_E} u_t+(u^2+Lu)_x=0 \end{equation} covers the following widely studied equations: \begin{enumerate}[a)] \item The fractional Korteweg--de Vries equation takes the form \eqref{eq:Equation_E} with \[ m(k)=|k|^r,\qquad r\in \mathbb{R}.
\] The symbol $m$ is homogeneous and therefore satisfies (H) whenever $r<0$. The equation corresponds to the Burgers--Hilbert equation for $r=-1$ and to the reduced Ostrovsky equation for $r=-2$. \item The Whitham equation takes the form \eqref{eq:Equation_E} with \[ m(k)=\sqrt{\frac{\tanh{k}}{k}}. \] The symbol $m$ is inhomogeneous and can be viewed as the restriction of $M:\mathbb{R} \to \mathbb{R}$, $M(\xi):=\sqrt{\frac{\tanh{\xi}}{\xi}}$, to $\mathbb{Z}$. The function $M$ belongs to $S^{-\frac{1}{2}}(\mathbb{R})$ and $\xi \mapsto M(|\sqrt{\xi}|)$ is completely monotone on $(0,\infty)$, as proved in \cite{EW}. Now, Lemma \ref{lem:S} and Lemma \ref{lem:CM} imply that $m$ satisfies assumption (S) for $r=-\frac{1}{2}$. \item The inhomogeneous counterpart of the fractional Korteweg--de Vries family takes the form \eqref{eq:Equation_E} with \[ m(k)=(1+k^2)^{\frac{r}{2}},\qquad r\in \mathbb{R}. \] The symbol $m$ is inhomogeneous and satisfies (S) for $r<0$. Again, this is a direct consequence of Lemma~\ref{lem:S} and Lemma \ref{lem:CM} and the fact that $n:(0,\infty)\to \mathbb{R}$ defined by $n(\xi)=(1+\xi)^{\frac{r}{2}}$ is completely monotone on $(0,\infty)$ for $r<0$. \end{enumerate} The action of the nonlocal operator $L$ can be expressed as a convolution with a kernel function $K$, which takes the form \[ K(x)=\sum m(k)\cos(xk), \] and $L\phi = K*\phi$. The sum above is taken over $\mathbb{Z}$ or $\mathbb{Z}\setminus\{0\}$ depending on whether $m$ is an inhomogeneous or a homogeneous symbol, respectively. The following discrete analog of Bernstein's theorem on completely monotone functions will be used to prove the monotonicity of $K$ on the half-period $(0,\pi)$. \begin{thm}[\cite{Widder}, Theorem 4a]\label{thm:B} A sequence $(n_k)_{k\in\mathbb{N}_0}$ of real numbers is completely monotone if and only if \[ n_k=\int_0^1 t^k \,d\sigma(t), \] where $\sigma$ is nondecreasing and bounded on $[0,1]$.
\end{thm} We now prove the integrability of $K$ and its monotonicity on $(0,\pi)$, which are crucial for the touching lemma and the boundary point lemma in the next section. \begin{thm}[Properties of $K$] \label{thm:P} Let the symbol $m$ satisfy assumption (S) or (H). Then the periodic kernel $K$ is even, real-valued and smooth on $\mathbb{T}\setminus\{0\}$. Moreover, $K\in L^1(\mathbb{T})$ and $K$ is decreasing on $(0,\pi)$. If the symbol $m$ is homogeneous of degree $r<0$, then \[ K(x)=\int_0^1 \left(\frac{2(\cos(x)-t)}{1-2t\cos(x)+t^2}+a_0(t)\right)\,d\sigma(t), \] where $\sigma:[0,1]\to \mathbb{R}$ is a nondecreasing and bounded function depending on $m$, and $a_0\in L^\infty(0,1)$. If the symbol $m$ is inhomogeneous and satisfies (S), then \[ K(x)=\int_0^1\Theta_3\left(\frac{x}{2},u\right)\,d\nu(u), \] where $\nu:[0,1]\to \mathbb{R}$ is a nondecreasing and bounded function depending on $m$, and $\Theta_3$ is the third Theta function. \end{thm} \begin{proof} If $m$ is a homogeneous symbol of degree $r<-1$, the claim is proved in \cite[Theorem 3.6]{BD}, and it is straightforward to adapt the proof to the range $r\in [-1,0)$. We therefore skip the details. Assume that $m$ is an inhomogeneous symbol satisfying (S). Since $m$ is even and real-valued, $K$ is even and real-valued as well. The integrability of $K$ follows from the decay property of $m$ and \cite[Theorem 5.13]{Boa}. Now, we prove that $K$ is smooth on $\mathbb{T}\setminus\{0\}$ and decreasing on the half-period $(0,\pi)$. If $n:=m(|\sqrt{\cdot}|)$, then Lemma \ref{lem:CM} implies that the numbers $n_k:=n(k)$ form a completely monotone sequence $(n_k)_{k\in \mathbb{N}_0}$. In view of Theorem \ref{thm:B}, there exists a nondecreasing and bounded function $\nu$ such that \[ n_k=\int_0^1 u^k\,d\nu(u),\qquad k\geq 0. \] We have that $m(k)=n(k^2)$ and thus \[ m(k)=\int_0^1 u^{k^2}\,d\nu(u)\qquad \mbox{for all}\qquad k\geq 0.
\] Consider $u^{k^2}$ as the Fourier coefficients of some function $f(u,x)$, that is, \[ u^{k^2}=\frac{1}{2\pi}\int_{\mathbb{T}}f(u,x)e^{-ikx}\,dx\qquad \mbox{for}\qquad f(u,x)=\sum_{k\in \mathbb{Z}} u^{k^2}e^{ixk}. \] The latter sum is also known as the third Theta function: \[ f(u,x)=\sum_{k\in \mathbb{Z}} u^{k^2}e^{ixk} =: \Theta_3\left(\frac{x}{2},u\right). \] We conclude that \[ m(k)= \int_0^1 \frac{1}{2\pi}\int_{\mathbb{T}}\Theta_3 \left(\frac{x}{2},u\right)e^{-ikx} \,dx\,d\nu(u)= \frac{1}{2\pi}\int_{\mathbb{T}}\int_0^1 \Theta_3 \left(\frac{x}{2},u\right)\,d\nu(u)\,e^{-ixk}\,dx. \] Since $(m(k))_{k\in \mathbb{Z}}$ form the Fourier coefficients of $K$, we deduce that \[ K(x)=\int_0^1 \Theta_3\left( \frac{x}{2},u \right)\,d\nu (u). \] From here, we obtain all claimed properties of $K$ from the properties of the Theta function. In particular, for all $u\in (0,1)$ the function $\Theta_3(\frac{\cdot}{2},u)$ is even and positive on $\mathbb{T}$, and $\frac{d}{dx}\Theta_3\left(\frac{x}{2},u\right)< 0$ for all $x\in (0,\pi)$. Hence $K$ is smooth away from the origin and $K^\prime(x)< 0$ for all $x\in (0,\pi)$. \end{proof} \begin{lem}\label{behaviour of K at origin} The map $x\mapsto \sin(x)K(x)$ belongs to $L^\infty(\mathbb{T})$ and $ \lim_{|x|\to 0} xK(x)=0. $ \end{lem} \begin{proof} Using the identity $\sin(x)\cos(xk)= \frac{1}{2}\left(\sin(x(k+1))-\sin(x(k-1))\right)$, we get \[ \sin(x)K(x)=c_h\sin(x)+\sum_{k=1}^\infty m(k)\left(\sin(x(k+1))-\sin(x(k-1))\right), \] where $c_h=0$ if $m$ is a homogeneous symbol and $c_h= m(0)$ if $m$ satisfies (S). The sum above can be written as \begin{align*} \sum_{k=1}^\infty m(k)\left(\sin(x(k+1))-\sin(x(k-1))\right)&= \sum_{k=2}^\infty m(k-1)\sin(xk)-\sum_{k=0}^\infty m(k+1)\sin(xk)\\ &= -m(2)\sin(x)+\sum_{k=2}^\infty \left( m(k-1)-m(k+1)\right)\sin(xk).
\end{align*} Recall that a sine series $\sum_{k=2}^\infty a_k\sin(xk)$ with nonnegative, nonincreasing coefficients converges uniformly on $\mathbb{T}$ if and only if $\lim_{k\to \infty} ka_k=0$ (cf. \cite{Boa}). In view of either assumption (S) or (H), we have that $|m(k-1)-m(k+1)|\leq |\Delta m(k)|+|\Delta m(k-1)|\lesssim |k|^{r-1}$ for $k\geq 2$. Since $r<0$, it is clear that $\lim_{k\to \infty}k(m(k-1)-m(k+1))=0$ and the series $\sum_{k=2}^\infty(m(k-1)-m(k+1))\sin(xk)$ converges uniformly on $\mathbb{T}$. We deduce that $|\sin(x)K(x)|\lesssim 1 + \sum_{k\geq 2}k^{r-1}\lesssim 1$ for all $x\in \mathbb{T}$, which implies that $x\mapsto \sin(x)K(x)$ belongs to $L^\infty(\mathbb{T})$. Moreover, \begin{align*} \lim_{|x|\to 0}xK(x) &= \lim_{|x|\to 0}\sin(x)K(x) = \lim_{|x|\to 0}\lim_{n\to \infty}\sum_{k=2}^n \left( m(k-1)-m(k+1)\right)\sin(xk)\\ &= \lim_{n\to \infty}\lim_{|x|\to 0}\sum_{k=2}^n \left( m(k-1)-m(k+1)\right)\sin(xk)\\ & =0, \end{align*} where the interchange of limits is justified by the uniform convergence established above, and we used that $\lim_{|x|\to 0}\frac{\sin(x)}{x}=1$. \end{proof} \begin{remark}\label{rem:R} \normalfont \begin{enumerate}[a)] \item Requiring complete monotonicity of $(m(|\sqrt{k}|))_{k\in \mathbb{N}_0}$ instead of assuming that $(m(k))_{k\in \mathbb{N}_0}$ is completely monotone actually broadens the class of admissible symbols, since the composition $n:=m \circ \sqrt{\cdot}$ is completely monotone whenever $m$ is completely monotone. \item The proof of Theorem \ref{thm:P} reveals that whenever the sequence $(n_k)_{k\in \mathbb{N}_0}$ defined by $n_k:=m(|\sqrt{k}|)$ is completely monotone, the corresponding convolution kernel given by $K(x)=\sum m(k)\cos(xk)$ is decreasing on the half-period $(0,\pi)$. It turns out to be a nontrivial task to find a one-to-one correspondence between properties of the Fourier coefficients and decay on a half-period of the corresponding Fourier series. The assumptions we impose on the symbol $m$ are chosen to be fairly mild, so as to include a wide range of admissible symbols while still enabling a straightforward verification in specific examples.
\item Assuming that the function $n:=m(|\sqrt{\cdot}|):(0,\infty)\to (0,\infty)$ is not only bounded and completely monotone, but additionally extends to an analytic function on $\mathbb{C}\setminus (-\infty,0]$ with $\Ima z\cdot\Ima n(z)\leq 0$, the convolution kernel $K$ inherits the property of being completely monotone (cf. \cite[Theorem 2.9, Proposition 2.20, Remark 3.4]{EW}). \end{enumerate} \end{remark} \section{Symmetry of traveling waves} \label{S:T} In this section we prove the main theorem on the symmetry of periodic traveling waves for nonlinear dispersive equations of the form \eqref{eq:Equation}, where the symbol $m$ satisfies either assumption (S) or (H) and the wave profile satisfies the reflection criterion, which we recall for convenience. \begin{description} \item[Reflection criterion] A $2\pi$-periodic, continuous function $\phi$ is said to satisfy the \emph{reflection criterion} if there exists $\lambda_*\in \mathbb{T}$ such that \[ \phi(x)>\phi(2\lambda_*-x)\qquad \mbox{for all}\qquad x\in (\lambda_*, \lambda_*+ \pi). \] \end{description} Taking the ansatz $u(t,x)= \phi(x-ct)$, where $c>0$ denotes the speed of the right-propagating wave, equation \eqref{eq:Equation} transforms after integration into \begin{equation}\label{WT} -c\phi + L\phi+\phi^2=B, \end{equation} where $B\in \mathbb{R}$ is a constant of integration. If $m$ is an inhomogeneous symbol, then there exists a Galilean shift of variables \[ \phi \mapsto \phi+ \gamma, \qquad c\mapsto c+2\gamma, \qquad B\mapsto B+\gamma(m(0)-c-\gamma), \] which allows us to set the integration constant $B$ to zero. This choice corresponds to a solution with possibly different speed and elevation, but the form of the solution remains intact. If, on the other hand, $m$ is a homogeneous symbol, we assume that $\phi$ is a function of zero mean, which determines the integration constant to be $B= \frac{1}{2\pi}\widehat{\phi^2}(0)$.
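The algebra behind the Galilean shift can be spot-checked numerically: using only that $L$ is linear and maps the constant $\gamma$ to $\gamma\, m(0)$, substituting $\phi+\gamma$ and $c+2\gamma$ into $-c\phi+L\phi+\phi^2$ shifts the right-hand side by exactly $\gamma(m(0)-c-\gamma)$. A minimal sketch (the values of $\phi$, $L\phi$, $c$, $\gamma$, $m(0)$ are treated as arbitrary numbers):

```python
import random

def steady_lhs(phi_val, Lphi_val, c):
    # Pointwise value of -c*phi + L*phi + phi^2.
    return -c * phi_val + Lphi_val + phi_val**2

random.seed(0)
for _ in range(100):
    phi_v, Lphi_v, c, gamma, m0 = (random.uniform(-5, 5) for _ in range(5))
    before = steady_lhs(phi_v, Lphi_v, c)
    # After the shift: phi -> phi + gamma, L*phi -> L*phi + gamma*m(0), c -> c + 2*gamma.
    after = steady_lhs(phi_v + gamma, Lphi_v + gamma * m0, c + 2 * gamma)
    assert abs(after - (before + gamma * (m0 - c - gamma))) < 1e-9
```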
In what follows, we consider the equation \begin{equation}\label{eq:new} -c\phi +L\phi +\phi^2 =B_h, \end{equation} where $B_h=0$ if $m$ is inhomogeneous and $B_h=\frac{1}{2\pi}\widehat{\phi^2}(0)$ if $m$ is homogeneous. \subsection{Regularity of traveling waves}\label{Ss:reg} If the symbol of the operator $L$ is homogeneous, we work on $X_0$, the restriction of a function space $X$ to its subset of zero mean functions. Let $\phi \in L^\infty(\mathbb{T})$ or $\phi \in L^\infty_0(\mathbb{T})$ be a $2\pi$-periodic solution of \eqref{eq:new}. For clarity of presentation, we use in the sequel the following convention: whenever it is clear from the context, the index zero is suppressed; that is, we simply write $X$ and mean $X_0$ if $m$ is a homogeneous symbol. \begin{satz}\label{prop:reg} Let $\phi\leq \frac{c}{2}$ be a bounded solution of \eqref{eq:new}. Then $\phi$ is smooth on any open set where $\phi<\frac{c}{2}$. \end{satz} \begin{proof} Assume first that $\phi<\frac{c}{2}$ uniformly on $\mathbb{T}$ and $\phi \in \mathcal{C}^s(\mathbb{T})$ for some $s\geq 0$. Equation \eqref{eq:new} can be written as \begin{equation}\label{eq:formulation} B_h+\frac{c^2}{4}-\left(\frac{c}{2}-\phi \right)^2 =L\phi. \end{equation} Due to our assumptions on the symbol $m$ and Proposition \ref{prop:FM}, the Fourier multiplier operator $L$ is a smoothing operator of order $-r$ and $L\phi \in \mathcal{C}^{s-r}(\mathbb{T})$. Moreover, for $s-r>0$ the Nemytskii operator \begin{align*} f\mapsto \frac{c}{2} - \sqrt{B_h+\frac{c^2}{4}- f} \end{align*} maps $ \mathcal{C}^{s-r}(\mathbb{T})$ into itself if $f<B_h+\frac{c^2}{4}$. From \eqref{eq:formulation} we see immediately that $L\phi<B_h+\frac{c^2}{4}$ if $\phi<\frac{c}{2}$. Thus, we may take the square root to obtain that \[ \phi = \frac{c}{2}-\sqrt{B_h+\frac{c^2}{4}-L\phi} \in \mathcal{C}^{s-r}(\mathbb{T}).
\] Hence, an iteration argument guarantees that $\phi\in C^{\infty}(\mathbb{T})$. Since any Fourier multiplier commutes with the translation operator, we actually have that $\phi \in C^\infty(\mathbb{R})$. Now, let $U\subset \mathbb{R}$ be an open subset of $\mathbb{R}$ on which $\phi <\frac{c}{2}$. Then we can find an open cover $U=\cup_{i\in I}U_i$, where for any $i\in I$ the set $U_i$ is connected and satisfies $|U_i|<2\pi$. Due to the translation invariance of \eqref{eq:new} and the previous part, we obtain that $\phi$ is smooth on $U_i$ for any $i\in I$. Since $U$ is the union of open sets, the assertion follows. \end{proof} The following proposition confirms that a non-smooth structure may appear at a crest of height $\frac{c}{2}$. \begin{satz}\label{prop:C1} If $\phi$ is a nontrivial even, bounded solution of \eqref{eq:new} with a single crest per period and $\max \phi = \frac{c}{2}$, then $\phi$ does not belong to the class $C^1(\mathbb{T})$. \end{satz} \begin{proof} Assume on the contrary that $\phi\in C^1(\mathbb{T})$ is an even solution of \eqref{eq:new} with $\max \phi = \phi(0)=\frac{c}{2}$. Then $L\phi$ belongs to $\mathcal{C}^{1-r}(\mathbb{T})$ with $(L\phi)^\prime(0)=0$ and \[ \left(\frac{\frac{c}{2}-\phi(x)}{x}\right)^2=\frac{L\phi(0)-L\phi(x)}{x^2}=-\frac{(L\phi)^\prime(\xi)-(L\phi)^\prime(0)}{x}\qquad \mbox{for some}\qquad \xi \in [0,x]. \] Since $\phi\in C^1$ with $\phi^\prime(0)=0$, the left-hand side above tends to zero as $x\to 0$. We deduce that $(L\phi)^{\prime\prime}(0)=0$. The action of $L$ is given by convolution with $K$, so that \[ (L\phi)^{\prime\prime}(0)=-2\int_{-\pi }^0 K ^\prime(y)\phi^\prime(y)\,dy=-c_0<0 \] for some $c_0>0$, which is a contradiction. Here, we used the symmetry of $K$ and $\phi$ and that $K ^\prime\phi^\prime\gneq 0$ on $[-\pi ,0]$, since $K^\prime > 0$ and $\phi^\prime\gneq 0$ on $(-\pi,0)$, due to Theorem \ref{thm:P} and our assumption on $\phi$.
\end{proof} \begin{remark} \normalfont If $\phi\leq \frac{c}{2}$ is a solution of \eqref{eq:new} with $\max \phi=\frac{c}{2}$, we refer to $\phi$ as a so-called \emph{highest wave}. \end{remark} \subsection{Symmetry of traveling waves} We start with a touching lemma, which is similar to the one formulated in \cite[Lemma 4.3]{EW} for solitary waves. A solution $\phi$ of \eqref{eq:new} is called a \emph{supersolution} if \[c\phi \geq L\phi+\phi^2-B_h\] and a \emph{subsolution} if the inequality sign above is replaced by $\leq$. \begin{lem}[Touching lemma within one period]\label{touching lemma} Let $\phi$ be a bounded $2\pi $-periodic super- and $\bar{\phi}$ a bounded $2\pi $-periodic subsolution of \eqref{eq:new}. If $\phi \geq \bar{\phi}$ on $[\lambda,\lambda +\pi ]$ and $\phi-\bar{\phi}$ is odd with respect to $\lambda$, then either \begin{itemize} \item[$\bullet$] $\phi=\bar{\phi}$ on $\mathbb{R}$ or \item[$\bullet$] $\phi > \bar{\phi}$ with $\phi+\bar \phi<c$ on $(\lambda, \lambda+\pi )$. \end{itemize} \end{lem} \begin{proof} Let $\phi$ and $\bar \phi$ be a super- and subsolution of \eqref{eq:new}, respectively, with $\phi \geq \bar \phi$ for all $x\in [\lambda, \lambda+\pi ]$ and such that $\phi-\bar \phi$ is odd with respect to $\lambda$. Set $w:=\phi-\bar \phi$. Then $w$ is a $2\pi $-periodic function, which is odd with respect to $\lambda$ and satisfies $w(x)\geq 0$ for $x\in (\lambda, \lambda+\pi )$. Subtracting the two inequalities, we see that $w$ satisfies \[ cw(x)\geq Lw(x)+w(x)(\phi+\bar \phi)(x), \] which is equivalent to \begin{equation}\label{eq:t1} (c-(\phi+\bar \phi)(x))w(x)\geq Lw(x). \end{equation} Assume that $w$ is not identically zero, but that there exists a point $\bar x \in (\lambda, \lambda+\pi )$ such that either $w(\bar x)=0$ or $(\phi+\bar \phi)(\bar x)\geq c$. Then, we obtain from \eqref{eq:t1} that \begin{equation}\label{eq:con} Lw(\bar x)\leq 0.
\end{equation} Note that \begin{align*} Lw(\bar x)=\int_{\lambda-\pi }^{\lambda+\pi }K(\bar x-y)w(y)\,dy=\int_{\lambda}^{\lambda+\pi }\left[K(\bar x-y)-K(\bar x+y-2\lambda) \right]w(y)\,dy, \end{align*} where we used that $w$ is odd with respect to $\lambda$. Now we split the integral above in the following way: \begin{align}\label{eq:I} \begin{split} Lw(\bar x)=&\int_{\lambda}^{\bar x}\left[K(\bar x-y)-K(\bar x+y-2\lambda) \right]w(y)\,dy\\ &+\int_{\bar x}^{\lambda +\pi }\left[K(\bar x-y)-K(2\lambda-\bar x-y) \right]w(y)\,dy, \end{split} \end{align} keeping in mind that $\lambda<\bar x <\lambda+\pi $. The function \[ G_p(y):=K(\bar x-y)-K(\bar x+y-2\lambda) \] is $2\pi $-periodic and odd with respect to $\lambda$. Notice that $w$ is nonnegative on the half period $(\lambda, \lambda+\pi )$ and \[ \lim_{|y-\bar x|\to 0} G_p(y)>0 \qquad \mbox{and}\qquad G_p(\lambda)=G_p(\lambda+\pi )=0. \] \begin{center} \begin{figure}[h] \centering \begin{tikzpicture}[scale=1] \draw[->] (-6,-1) -- (7,-1) node[right] {$y$}; \draw[-] (-2.4, -1.1)--(-2.4,-0.9) node[below=5pt]{$\lambda$}; \draw[-] (-2.4+pi, -1.1)--(-2.4+pi,-0.9) node[below=5pt]{$\lambda+\pi $}; \draw[-] (-1.65, -1.1)--(-1.65,-0.9) node[below=5pt]{$\bar x$}; \draw[domain=-pi+0.1:pi-0.1,smooth,variable=\x,luh-dark-blue, thick] plot ({\x},{(3*\x*\x-pi^2)/18}); \draw[domain=-1.7*(pi+0.1):-(pi+0.1),smooth,variable=\x,luh-dark-blue, thick] plot ({\x},{(3*(\x+2*pi)*(\x+2*pi)-pi^2)/18}); \draw[domain=pi+0.1:2*(pi-0.1),smooth,variable=\x,luh-dark-blue, thick] plot ({\x},{(3*(\x-2*pi)*(\x-2*pi)-pi^2)/18}) node[right]{$K(\bar x-2\lambda +\cdot)$}; \draw[xshift = 1.5cm, domain=-pi+0.1:pi-0.1,smooth,variable=\x,orange, thick] plot ({\x},{(3*\x*\x-pi^2)/18}) node[right]{$K(\bar x -\cdot)$}; \draw[xshift = 1.5cm,domain=-1.7*(pi+0.1):-(pi+0.1),smooth,variable=\x,orange, thick] plot ({\x},{(3*(\x+2*pi)*(\x+2*pi)-pi^2)/18}) ; \end{tikzpicture} \caption*{Illustration: $G_p(y)=K(\bar x-y)-K(\bar x -2\lambda+y)>0$ on $(\lambda,\lambda+\pi )$.}
\end{figure} \end{center} We aim to show that $G_p(y)>0$ on $(\lambda,\lambda+\pi )$. Set $z=y-\bar x $ and $v=2(\bar x-\lambda)$; then $G_p(y)=0$ if and only if $K(z)=K(z+v)$. In view of the symmetry of $K$ and its monotonicity on $(0,\pi )$, it is clear that $K(z)=K(z+v)$ if and only if $v\in 2\pi \mathbb{Z}$ or $v \in -2z+2\pi \mathbb{Z}$. We have $v=2(\bar x-\lambda)\in (0,2\pi )$, therefore $v\notin 2\pi \mathbb{Z}$. Moreover, $v \in -2z+2\pi \mathbb{Z}$ if and only if there exists $n\in \mathbb{Z}$ such that \[ 2(\bar x-\lambda)=-2(y-\bar x)+2\pi n, \] which is equivalent to $y=\lambda+\pi n$; within $[\lambda,\lambda+\pi]$ these are precisely the endpoints. We deduce that $G_p(y)>0$ on $(\lambda,\lambda+\pi )$. Recalling \eqref{eq:I} and $w(y)\geq 0$ on $(\lambda, \lambda+\pi )$, we obtain that $Lw(\bar x)=K*w(\bar x)>0$, which is a contradiction to \eqref{eq:con}. \end{proof} While the touching lemma is related to a strong maximum principle, the following lemma plays a role as the Hopf boundary point lemma does for elliptic equations. \begin{lem}[Boundary point lemma]\label{boundary lemma} Let $\phi, \bar \phi\in C^1(\mathbb{T})$ be two $2\pi $-periodic solutions of \eqref{eq:new}. If $\phi \geq \bar \phi$ on $[\lambda, \lambda + \pi ]$ and $\phi-\bar \phi$ is odd with respect to $\lambda$, then either \begin{itemize} \item $\phi = \bar \phi$ on $\mathbb{R}$, or \item $ (\phi-\bar \phi) ^\prime(\lambda)>0.$ \end{itemize} \end{lem} \begin{proof} If $\phi$ and $\bar \phi$ are two solutions of \eqref{eq:new}, then \[ c(\phi-\bar \phi)(x) = K*(\phi-\bar \phi)(x)+\phi^2(x)-\bar \phi^2(x). \] Set $w=\phi-\bar \phi$. Taking the derivative at $x=\lambda$ and using that $w(\lambda)=0$, so that $(\phi^2-\bar\phi^2)^\prime(\lambda)=(\phi+\bar\phi)(\lambda)w^\prime(\lambda)$, yields \begin{equation}\label{eq:bnd} [c-(\phi+\bar \phi)(\lambda)](\phi-\bar \phi)^\prime(\lambda)=K*(\phi-\bar{\phi})^\prime(\lambda). \end{equation} We now consider the convolution $K*w^\prime (\lambda)$.
Since $w$ is odd with respect to $\lambda$, its derivative $w^\prime$ is symmetric with respect to $\lambda$, and since $K$ is even, we deduce that \begin{align*} K*w^\prime(\lambda)&=\int_{\lambda-\pi }^{\lambda+\pi }K(\lambda-y)w^\prime(y)\, dy=2\int_{\lambda}^{\lambda+\pi }K(\lambda-y)w^\prime(y)\, dy. \end{align*} Using that $K$ is smooth on $(-\pi ,\pi )$ away from the origin, we integrate by parts on the interval $[\lambda+\varepsilon,\lambda+\pi ]$ and obtain that \begin{align*} K*w^\prime(\lambda)&= 2\left( \int_{\lambda}^{\lambda+\varepsilon}K(\lambda-y)w^\prime(y)\, dy +[K(\lambda-y)w(y)]_{y=\lambda+\varepsilon}^{\lambda+\pi }+ \int_{\lambda+\varepsilon}^{\lambda+\pi } K^\prime(\lambda-y)w(y)\, dy\right). \end{align*} Because $w^\prime$ is continuous and $K$ is integrable, the first integral on the right-hand side vanishes as $\varepsilon \to 0$. Due to the regularity and symmetry of $w$, we have $w(\lambda +\varepsilon)=O(\varepsilon)$. Moreover, $K(\varepsilon)=o(\varepsilon^{-1})$ by Lemma \ref{behaviour of K at origin}, so that the boundary term \[ [K(\lambda-y)w(y)]_{y=\lambda+\varepsilon}^{\lambda+\pi }=K(\pi )w(\lambda+\pi )-K(\varepsilon)w(\lambda+\varepsilon)\to 0 \] as $\varepsilon\to 0$, where we used that $w(\lambda+\pi )=w(\lambda)=0$. Hence \begin{align*} K*w^\prime(\lambda)=2\lim_{\varepsilon \to 0} \int_{\lambda+\varepsilon}^{\lambda+\pi } K^\prime(\lambda-y)w(y)\, dy. \end{align*} In view of $w \geq 0$ on $[\lambda,\lambda+\pi ]$ and $K$ being increasing on the half-period $(-\pi ,0) $, we arrive at \[ K*w^\prime(\lambda)=2\lim_{\varepsilon \to 0} \int_{\lambda+\varepsilon}^{\lambda+\pi } K^\prime(\lambda-y)w(y)\, dy>0, \] unless $\phi = \bar \phi$. Now \eqref{eq:bnd} implies that \[ (\phi-\bar \phi)^\prime(\lambda)>0. \] \end{proof} \begin{remark} \normalfont The proofs of the following theorems rely on a weak form of the method of moving planes, which we apply in a nonlocal setting.
Due to the periodicity of the solution and therefore the lack of decay at infinity, we impose the aforementioned reflection criterion in order to guarantee that the method of moving planes can be started at some point $x\in \mathbb{T}$. Notice that whenever the wave profile is a priori assumed to be \emph{monotone} in the sense that it has only one crest per period, our assumption is satisfied. \begin{center} \begin{figure} [h!] \begin{tikzpicture}[scale=0.7] \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black, gray](2.5,0.1)--(2.5,-0.1) node[below] {$\lambda_*$}; \draw[-,dashed, gray] (2.5,0)--(2.5,1.5); \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[-,black](3,0.1)--(3,-0.1) node[below] {$0$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates {(-1,1) (0,0.5) (4,2) (6,0.5) (7,0.8)}; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-5cm] plot [smooth] coordinates {(-1,1) (0,0.5) (4,2) (6,0.5) (7,0.8)}; \end{scope} \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.65] \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[-,black](3,0.1)--(3,-0.1) node[below] {$0$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates {(-1,1.3) (0,0.5) (0.8,1) (1.3,0.8) (2.1,1.4) (2.5, 1.8) (3.2, 1.6) (5.1,1.4) (6,0.5) (6.8,0.8)}; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-4.5cm] plot [smooth] coordinates {(-1,1.3) (0,0.5) (0.8,1) (1.3,0.8) (2.1,1.4) (2.5, 1.8) (3.2, 1.6) (5.1,1.4) (6,0.5) }; \end{scope} \draw[-,black, gray](2.27,0.1)--(2.27,-0.1) node[below] {$\lambda_*$}; \draw[-,dashed, gray] (2.27,0)--(2.27,1.5); \end{tikzpicture} \caption{Any monotone profile satisfies the reflection criterion (left). Illustration of a nonmonotone wave profile satisfying the reflection criterion (right).
The gray dashed curves are the reflections about the axis $x=\lambda_*$.}\label{F:profile} \end{figure} \end{center} \end{remark} \begin{thm}[Symmetry of traveling waves]\label{thm:Snew} Let $\phi<\frac{c}{2}$ be a $2\pi$-periodic, bounded solution of \eqref{eq:new} satisfying the reflection criterion. Then $\phi$ is symmetric and has exactly one crest per period. Moreover, \[ \phi^\prime(x)>0\qquad \mbox{for all}\qquad x\in (-\pi,0), \] after a suitable translation. \end{thm} \begin{proof} Since $\phi<\frac{c}{2}$, Proposition \ref{prop:reg} implies that $\phi$ is a smooth solution. In order to prove the theorem, we suppose, aiming for a contradiction, that $\phi$ is not symmetric. After a possible translation, we may assume that a global minimum (trough) of the periodic wave is located at $x=-\pi$. Then $\lambda_*\in (-\pi,0]$. Indeed, if $\lambda_*\in (0,\pi]$, then the assumption \[ \phi(x)>\phi(2\lambda_*-x)\qquad \mbox{for all}\quad x \in (\lambda_*, \lambda_*+\pi) \] would yield the contradiction $\phi(-\pi)=\phi(\pi)>\phi(2\lambda_*-\pi)$, since the global minimum of $\phi$ is attained at $x=-\pi$. Let $w_\lambda:\mathbb{R}\rightarrow \mathbb{R}$ be the \emph{reflection function} around $\lambda$ given by \[w_\lambda(x):= \phi(x)-\phi(2\lambda-x).\] Due to the symmetry of $K$, the function $\phi(2\lambda-\cdot)$ is a solution of \eqref{eq:new} whenever $\phi$ is a solution. Set \begin{equation}\label{eq:lambda}\lambda_0:= \sup \{\lambda \in [\lambda_*,0 ] \mid w_\lambda(x) > 0 \; \mbox{ for all } \; x\in (\lambda,\lambda+\pi )\}. \end{equation} Notice that such $\lambda_0\geq \lambda_*$ exists, because by assumption \[ w_{\lambda_*}(x)=\phi(x)-\phi(2\lambda_*-x)>0 \qquad \mbox{for all}\qquad x\in(\lambda_*,\lambda_*+\pi).
\] We have that $w_{\lambda_0}$ satisfies \begin{itemize} \item[i)] $w_{\lambda_0}({\lambda_0})=0$; \item[ii)] $w_{\lambda_0}$ is odd with respect to $\lambda_0$, that is $w_{\lambda_0}(\cdot)=-w_{\lambda_0}(2\lambda_0-\cdot)$; \item[iii)] $w_{\lambda_0}\geq 0$ in $[\lambda_0, \lambda_0+\pi ]$ and $w_{\lambda_0}\leq 0$ in $[\lambda_0+\pi , \lambda_0+2\pi ]$. \end{itemize} Let us consider $w_\lambda$ for $\lambda\geq \lambda_*$. Starting at $\lambda=\lambda_*$, we move the plane $\lambda$ about which the wave profile is reflected forward as long as $w_{\lambda}>0$ on $(\lambda,\lambda+\pi )$. Clearly, this process stops at or before the first crest in $[\lambda_*,0]$ at $\lambda=\lambda_0$. In fact, one of three cases will occur (cf. Figure \ref{F:Alternatives}): either there exists $\bar x \in (\lambda_0,\lambda_0+\pi )$ such that $w_{\lambda_0}(\bar x)=0$, as in (a); or we reach a crest at $x=\lambda_0$, as in (b); or we reach a trough at $\lambda_0+\pi $, as in (c). \begin{center} \begin{figure}[h!]
\begin{tikzpicture}[scale=0.55] \small \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates { (-1,0.6)(0,0.5) (2.5,1.8) (3.2,1) (6,0.5) (7,0.6) }; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-4.2cm] plot [smooth] coordinates {(-2.8,1) (0,0.5) (2.5,1.8) (3.2,1) (6,0.5) }; \end{scope} \draw[-,black, black](0.7,0.1)--(0.7,-0.1) node[below] {$\lambda_*$}; \draw[-,dashed, gray] (2.1,1.6)--(2.1,-0.1)node[below] {$\lambda_0$}; \draw[-,dashed, gray] (3.1,1)--(3.1,-0.1)node[below] {$\bar x$}; \node at (3,-2) {(a) Touching point at $\bar x$.}; \end{tikzpicture} \begin{tikzpicture}[scale=0.55] \small \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates {(-1,1.3) (0,0.5) (0.8,1) (1.3,0.8) (2.1,1.4) (2.5, 1.8) (3.2, 1.6) (5.1,1.4) (6,0.5) (6.8,0.8)}; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-4.95cm] plot [smooth] coordinates {(-1,1.3) (0,0.5) (0.8,1) (1.3,0.8) (2.1,1.4) (2.5, 1.8) (3.2, 1.6) (5.1,1.4) (6,0.5) }; \end{scope} \draw[-,black, black](1.8,0.1)--(1.8,-0.1) node[below] {$\lambda_*$}; \draw[-,dashed, gray] (2.5,1.8)--(2.5,-0.1)node[below] {$\lambda_0$}; \node at (3,-2) {(b) Reaching a crest at $\lambda_0$.}; \end{tikzpicture} \begin{tikzpicture}[scale=0.55] \small \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates { (-1,0.6)(0,0.5) (3.5,1.8) (4.2,1.4) (5.2, 1.8)(6,0.5) (7,0.6) }; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-6cm] plot [smooth] coordinates { (-1,0.6)(0,0.5) (3.5,1.8) (4.2,1.4) (5.2, 1.8)(6,0.5) (7,0.6) }; \end{scope} \draw[-,black, black](1.5,0.1)--(1.5,-0.1) node[below] {$\lambda_*$};
\draw[-,dashed, gray] (3.1,1.6)--(3.1,-0.1)node[below] {$\lambda_0$}; \node at (3,-2) {(c) Reaching a trough at $\lambda_0+\pi$.}; \end{tikzpicture} \caption{Exemplary illustrations for the method of moving planes. Here, $\lambda_*$ represents the reflection point due to the reflection criterion and $\lambda_0$ is as in \eqref{eq:lambda}. }\label{F:Alternatives} \end{figure} \end{center} The first case can be excluded by the touching lemma. If on the other hand we reach a crest at $x=\lambda_0$ and $w_{\lambda_0}>0$ on $(\lambda_0,\lambda_0+\pi )$, then the boundary lemma implies that \[ \phi^\prime(\lambda_0)>0, \] which is a contradiction to $\phi$ being continuously differentiable and having a crest at $x=\lambda_0$. If we reach a trough at $\lambda_0+\pi$, then either $\phi$ touches $\bar \phi$ at $\lambda_0+\pi$ or $w_{\lambda_0}$ changes sign on different sides of $\lambda_0+\pi$. The former can be excluded by the touching lemma, while the latter can be dealt with by applying the boundary point lemma at $\lambda_0+\pi$ with corresponding adjustments in view of $w_{\lambda_0}\leq 0$ on $[\lambda_0+\pi, \lambda_0+2\pi]$. We conclude that $\phi$ is symmetric. The fact that $\phi$ has exactly one crest per period follows essentially by the same argument. Repeating the method of moving planes for $\lambda\geq \lambda_*$ implies that there does not exist a crest in $[\lambda_*,0)$. To show that there does not exist a crest in $(-\pi,\lambda_*]$, we can apply the same method by moving $\lambda \leq \lambda_*$ towards $-\pi$ as long as $w_\lambda <0$ on $(\lambda-\pi,\lambda)$. This process stops at or before the first trough in $(-\pi,\lambda_*]$, and the same argument as before yields a contradiction to the assumption that $\phi$ has a crest in $(-\pi,\lambda_*]$. We deduce that $\phi$ is symmetric and has exactly one crest per period. By translation, we may assume that the crest is located at $x=0$.
In particular, $\phi^\prime(x)\geq 0$ for all $x\in [-\pi,0]$. It remains to show that the strict inequality holds for all $x\in (-\pi,0)$. Equation \eqref{eq:new} can be written as \[ B_h+\frac{c^2}{4}-\left(\frac{c}{2}-\phi \right)^2 =L\phi. \] Let $x\in (-\pi,0)$; then \[ 2\left(\frac{c}{2}-\phi \right) \phi^\prime(x)=\left(L\phi\right)^\prime(x)=\int_{-\pi}^0 \left[ K(x-y)-K(x+y) \right]\phi^\prime(y)\,dy. \] Here we used the symmetry of $K$ and $\phi$. In the same fashion as in the proof of the touching lemma (cf. Lemma \ref{touching lemma}), one can show that the right-hand side is strictly positive, unless $\phi$ is a trivial solution. But this is already excluded by our assumption on the wave profile. \end{proof} The proof above relies not only on the touching lemma, but also on Lemma \ref{boundary lemma}, which requires continuously differentiable solutions. If $\phi$ is a highest wave, that is $\max_{x\in\mathbb{T}} \phi(x)= \frac{c}{2}$, then the differentiability of $\phi$ is no longer guaranteed, see Proposition \ref{prop:C1}. However, assuming, in addition to the reflection criterion, that the highest wave $\phi$ has a \emph{unique} global maximum of height $\frac{c}{2}$ per period, we prove that $\phi$ is symmetric and has a monotone profile. \begin{thm}[Symmetry of highest waves]\label{thm:S} Let $\phi\leq \frac{c}{2}$ be a $2\pi $-periodic, bounded solution of \eqref{eq:new} with $\max_{x\in \mathbb{T}} \phi =\frac{c}{2}$. Assume that $\phi$ has a unique global maximum in $\mathbb{T}$ and satisfies the reflection criterion. Then $\phi$ is symmetric and has exactly one crest per period. Moreover, \[ \phi^\prime(x)>0\qquad \mbox{for all}\qquad x\in (-\pi,0), \] after a suitable translation. \end{thm} \begin{proof} Suppose by contradiction that $\phi$ is not symmetric.
After a proper translation and reflection, we may assume that a global minimum is located at $x=-\pi$, the unique global maximum at some point $x_1 \in [0,\pi)$, and that $\lambda_*\in (-\pi,0]$ (cf. Figure \ref{F:reflection}). \begin{figure}[h!] \begin{tikzpicture}[scale=0.55] \small \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[-,black](3,0.1)--(3,-0.1) node[below] {$0$}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates { (-1,0.6)(0,0.5) (2.5,1.8)}; \draw[luh-dark-blue, very thick] plot [smooth] coordinates { (2.5,1.8)(3.2,1) (6,0.5) (7,0.6) } ; \begin{scope}[xscale=-1] \draw[gray, dashed, xshift=-3.5cm] plot [smooth] coordinates { (-2.8,0.9)(0,0.5) (2.5,1.8)}; \draw[gray, dashed, xshift=-3.5cm] plot [smooth] coordinates { (2.5,1.8)(3.2,1) (5,0.6) } ; \end{scope} \draw[-,dashed] (1.75,1.4)--(1.75,-0.1)node[below] {$\lambda_*$}; \node[align=center] at (3,-2) {\scriptsize{Wave profile with unique crest in $(-\pi,0]$}\\ \scriptsize{satisfying the reflection criterion with $\lambda_*$.}}; \draw[->] (10,1)--(13,1); \node at (11.5,1.5) {\scriptsize {Reflection about $x=0$}}; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.55] \small \draw (-2,0)--(8,0); \draw[-,black](0,0.1)--(0,-0.1) node[below] {$-\pi$}; \draw[-,black](6,0.1)--(6,-0.1) node[below] {$\pi$}; \draw[-,black](3,0.1)--(3,-0.1) node[below] {$0$}; \draw[gray, dashed, xshift=-4cm] plot [smooth] coordinates { (3.2,1) (6,0.5) (8.5,1.8) } ; \draw[gray, dashed, xshift=-4cm] plot [smooth] coordinates { (8.5,1.8)(9.2,1) (12,0.5)} ; \begin{scope}[xscale=-1] \draw[orange, very thick,xshift=-6cm] plot [smooth] coordinates { (-1,0.6)(0,0.5) (2.5,1.8)}; \draw[orange, very thick, xshift=-6cm] plot [smooth] coordinates{ (2.5,1.8)(3.2,1) (6,0.5) (7,0.6) } ; \end{scope} \draw[-,dashed] (1,0.6)--(1,-0.1)node[below] {$\bar \lambda_*$}; \node[align=center] at (3,-2) {\scriptsize{Reflected wave profile
with unique crest in $[0,\pi)$}\\ \scriptsize{satisfying the reflection criterion with $\bar \lambda_*$.}}; \end{tikzpicture} \caption{By a possible reflection about $x=0$, one can assume that the unique global maximum per period is located in $[0,\pi)$ and the reflection criterion stays valid.}\label{F:reflection} \end{figure} Note that $\phi$ is smooth on $\mathbb{R}\setminus\{x_1+2\pi \mathbb{Z}\}$ due to Proposition~\ref{prop:reg}. As in the proof of Theorem \ref{thm:Snew}, let $w_\lambda:\mathbb{R}\rightarrow \mathbb{R}$ be the \emph{reflection function} around $\lambda$ given by \[w_\lambda(x):= \phi(x)-\phi(2\lambda -x)\] and recall that due to the symmetry of $K$ the function $\phi(2\lambda-\cdot)$ is a solution whenever $\phi$ is a solution. Again, set \[\lambda_0:= \sup \{\lambda \in [\lambda_*,0 ] \mid w_\lambda(x) > 0 \; \mbox{ for all } \; x\in (\lambda,\lambda+\pi )\}.\] The argument can be carried out analogously to the proof of Theorem \ref{thm:Snew}, since $\phi$ is smooth on $[-\pi,0)$. Consider $w_\lambda$ for $\lambda\geq \lambda_*$. Starting at $\lambda=\lambda_*$, we move the plane $\lambda$ about which the wave profile is reflected forward as long as $w_{\lambda}>0$ on $(\lambda,\lambda+\pi )$. Clearly, this process stops at or before the first crest in $[\lambda_*,0)$ at $\lambda=\lambda_0$. In fact, one of three cases will occur: either there exists $\bar x \in (\lambda_0,\lambda_0+\pi )$ such that $w_{\lambda_0}(\bar x)=0$; or we reach a crest at $x=\lambda_0<0$; or we reach a trough at $\lambda_0+\pi $. The first case can be excluded by the touching lemma. If on the other hand we reach a crest at $x=\lambda_0<0 $ and $w_{\lambda_0}>0$ on $(\lambda_0,\lambda_0+\pi )$, then the boundary lemma implies that \[ \phi^\prime(\lambda_0)>0, \] which is a contradiction to $\phi$ being continuously differentiable and having a crest at $x=\lambda_0<0 $ where $\phi(\lambda_0)<\frac{c}{2}$.
If we reach a trough at $\lambda_0+\pi$, then either $\phi$ touches $\bar \phi$ at $\lambda_0+\pi$ or $w_{\lambda_0}$ changes sign on different sides of $\lambda_0+\pi$. The former can be excluded by the touching lemma, while the latter can be dealt with by applying the boundary point lemma at $\lambda_0+\pi$ with a corresponding adjustment in view of $w_{\lambda_0}\leq 0$ on $[\lambda_0+\pi,\lambda_0+2\pi]$. We conclude that $\phi$ is symmetric. By translation we can assume that $\phi$ is even. The fact that $\phi$ has a single crest per period and $\phi^\prime(x)> 0$ for all $x\in (-\pi,0)$ can be shown by the same argument as in the proof of Theorem \ref{thm:Snew}. \end{proof} \subsection*{Acknowledgments} The author G.B. gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. \end{document}
\begin{document} \addtocounter{footnote}{1} \title{} \begin{center} {\large\bf On Glassey's conjecture for semilinear wave equations \\ in Friedmann-Lema\^itre-Robertson-Walker spacetime \\ } \end{center} \begin{center} Kimitoshi Tsutaya$^\dagger$ and Yuta Wakasugi$^\ddagger$ \\ $^\dagger$Graduate School of Science and Technology \\ Hirosaki University \\ Hirosaki 036-8561, Japan\\ \footnotetext{AMS Subject Classifications: 35L05; 35L70; 35P25.} \footnotetext{* The research was supported by JSPS KAKENHI Grant Number JP18K03351. } $^\ddagger$ Graduate School of Engineering \\ Hiroshima University \\ Higashi-Hiroshima, 739-8527, Japan \end{center} \begin{abstract} Consider nonlinear wave equations in the spatially flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) spacetimes. We show blow-up in finite time of solutions and upper bounds of the lifespan of blow-up solutions to give the FLRW spacetime version of Glassey's conjecture for the time derivative nonlinearity. We also show blow-up results for the space time derivative nonlinearity. \end{abstract} {\bf Keywords}: Wave equation, Blow-up, Lifespan, FLRW spacetime, Glassey's conjecture. \addtolength{\baselineskip}{2mm} \section{Introduction.} The spatially flat FLRW metric is given by \[ g: \; ds^2=-dt^2+a(t)^2d\sigma^2, \] where the speed of light is equal to $1$, $d\sigma^2$ is the line element of $n$-dimensional Euclidean space and $a(t)$ is the scale factor, which describes expansion or contraction of the spatial metric. As in our earlier work \cite{TW1,TW2,TW3}, we treat the scale factor as \begin{equation} a(t)=ct^{\frac 2{n(1+w)}} \label{scw} \end{equation} where $c$ is a positive constant, and $w$ is the proportionality constant in the range $-1< w\le 1$. The constant $w$ appears in the equation of state relating the pressure to the density for the perfect fluid. See \cite{TW1}. In the preceding papers \cite{TW1,TW2,TW3}, we have shown upper bounds of the lifespan for the equation $\Box_g u=-|u|^p$. 
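For the scale factor \eqref{scw}, a direct computation gives \[ \frac{\dot a(t)}{a(t)}=\frac{2}{n(1+w)}\cdot\frac 1t, \] so that $a$ is increasing in $t$ for $-1<w\le 1$ and the spacetime is expanding; the quantity $n\dot a/a=2/((1+w)t)$ is the coefficient of the damping term in the wave equation considered below.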
In this paper we consider the equation $\Box_g u=-|u_t|^p$ as well as $\Box_g u=-|\nabla_x u|^p$, where $\nabla_xu = (\partial u/\partial x^1, \cdots,\partial u/\partial x^n), \; (x^1,\cdots, x^n)\in {\bf R}^n$. For the flat FLRW metric with \eqref{scw}, the semilinear wave equation $\Box_g u =|g|^{-1/2}\partial_\alpha(|g|^{1/2}g^{\alpha\beta}\partial_\beta u)= -|u_t|^p$ or $-|\nabla_x u|^p$ with $p>1$ becomes \begin{equation} u_{tt}-\frac 1{t^{4/(n(1+w))}}\Delta u+\frac 2{(1+w)t}u_t=|u_t|^p, \mbox{ or }|\nabla_x u|^p, \quad x\in {\bf R}^n, \label{ore} \end{equation} where $\Delta=\partial_1^2+\cdots +\partial_n^2, \; \partial_j=\partial/\partial x^j, \; j=1,\cdots,n$. The aim of this paper is to show that blow-up in finite time occurs for the equation above, and to give upper bounds for the lifespan of the blow-up solutions. We first consider the following Cauchy problem in order to compare with the related known results, including the case of the Minkowski spacetime: \begin{equation} u_{tt}-\frac 1{t^{2\alpha}}\Delta u+\frac\mu{t}u_t=|u_t|^p, \qquad t>1, \; x\in {\bf R}^n \label{Prob0} \end{equation} with the initial data given at $t=1$, \begin{equation} u(1,x)=\varepsilon u_0(x), \; u_t(1,x)=\varepsilon u_1(x), \qquad x\in {\bf R}^n, \label{data0} \end{equation} where $\alpha$ and $\mu$ are nonnegative constants and $\varepsilon>0$ is a small parameter. Let $T_\varepsilon$ be the lifespan of solutions of \eqref{Prob0} and \eqref{data0}; that is, $T_\varepsilon$ is the supremum of $T$ such that \eqref{Prob0} and \eqref{data0} have a solution for $x\in {\bf R}^n$ and $1\le t<T$. Let $\alpha=\mu=0$ and $p_G(n)=1+2/(n-1)$. The so-called Glassey conjecture \cite{Gl2} asserts that if $p>p_G(n)$, then there exist global solutions in time for small initial data; on the other hand, if $1<p\le p_G(n)$ with $n\ge 2$ or if $p>1$ for $n=1$, then blow-up in finite time occurs. This conjecture has been shown to be almost true.
Actually, blow-up results in low dimensions ($n=2,3$), or in high dimensions ($n\ge 4$) under the assumption of radial symmetry, were proved in, e.g., \cite{Ag,Gl1,Jo2,Ma,Ra,Sch}, and Zhou \cite{Zh} finally gave a simple proof of the blow-up result for $1<p\le p_G(n)$ and $n\ge 2$ as well as for $p > 1$ and $n=1$. Global existence of solutions in low dimensions ($n=2,3$) has been proved in, e.g., \cite{HT,Si,Tz}. For high dimensions ($n\ge 3$), it is proved in \cite{HWY} that there exist global solutions in the radial case for $p>p_G(n)$. They \cite{HWY} also gave estimates of the lifespan of local solutions for $1<p\le p_G(n)$. For the case $\alpha=0$ and $\mu\ge 0$, it was recently shown by Hamouda and Hamza \cite{HH} that blow-up in finite time occurs and that the lifespan of the blow-up solutions satisfies \begin{align} &T_\varepsilon \le C\varepsilon^{-(p-1)/\{1-(n+\mu-1)(p-1)/2\}}&& \mbox{ if }\quad 1< p < p_G(n+\mu), \; n\ge 1, \label{subst}\\ &T_\varepsilon \le \exp(C\varepsilon^{-(p-1)})&&\mbox{ if }\quad p = p_G(n+\mu), \; n\ge 1. \label{crst} \end{align} These results improve the ones in \cite{LT}. The present paper treats the case $\alpha\ge 0$ and $\mu\ge 0$. We first show blow-up in finite time and upper estimates of the lifespan of solutions of \eqref{Prob0} and \eqref{data0} in the case $0\le \alpha<1$. If $\alpha=0$, our upper bounds of the lifespan coincide with the results above by \cite{HH}. Similar results were independently shown in \cite{HHP}, where energy solutions are treated. In our results, however, another exponent appears as a blow-up condition in some cases. This is different from the results in \cite{HHP}. We emphasize that the generalization of the exponent $p_G(n+\mu)$ cannot always be the critical exponent for the global existence of solutions. Our proofs are based on the test function method with the modified Bessel function of the second kind and on a generalized Kato's lemma. We next treat the case $\alpha\ge 1$.
Moreover, we show blow-up results for the problem \begin{equation} \begin{cases} \displaystyle u_{tt}-\frac 1{t^{2\alpha}}\Delta u+\frac\mu{t}u_t=|\nabla_xu|^p, \qquad t>1, \; x\in {\bf R}^n, \\ u(1,x)=\varepsilon u_0(x), \; u_t(1,x)=\varepsilon u_1(x), \qquad x\in {\bf R}^n. \end{cases} \label{Prob0x} \end{equation} Unlike for the above equation \eqref{Prob0}, our blow-up conditions are related to exponents that originate from the Strauss and Fujita ones, and to another exponent, as in the case of the time derivative nonlinearity. Hence, the upper bounds of the lifespan involve those exponents. We then apply our results for \eqref{Prob0} and \eqref{Prob0x} to the original equation \eqref{ore}. Our aim is, in particular, to clarify the difference from the case of the Minkowski spacetime, and also how the scale factor affects the lifespan of the solution. Since global existence of solutions has not yet been obtained, the {\it critical} exponent in this paper means a candidate for the true critical exponent. The paper is organized as follows. In Section 2, we state our first main result for \eqref{Prob0} and \eqref{data0} in the case $0\le \alpha <1$. Theorem \ref{th21} shows that it is possible in some cases to improve the estimate of the lifespan given by Glassey's exponent. To prove the theorem, we use the test function method, and also a generalized Kato's lemma for a first-order differential inequality, which is applied to the wave equation with the scale-invariant damping. The latter is proved by John's iteration argument \cite{Jo1}. We then show our second result, which is for the case $\alpha\ge 1$. In Section 3, we treat \eqref{Prob0x} and divide the results into several cases: $0\le \alpha<1$, with wavelike and heatlike, critical and subcritical cases, and $\alpha\ge 1$. Finally, in Section 4, we apply the theorems in Sections 2 and 3 to the original equation \eqref{ore}. We discuss the effect of the scale factor on the solutions.
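For later reference, comparing \eqref{ore} with \eqref{Prob0} shows that the original equation corresponds to the parameters \[ \alpha=\frac{2}{n(1+w)}, \qquad \mu=\frac{2}{1+w}=n\alpha, \] and the condition $0\le \alpha<1$ amounts to $n(1+w)>2$.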
\section{Time derivative nonlinearity.} \subsection{Case $0\le \alpha <1$.} \setcounter{equation}{0} We first consider the problem \eqref{Prob0} with \eqref{data0} for $0\le \alpha <1$. Our first result is the following theorem: \begin{theo} Let $n\ge 2, \; 0\le \alpha<1, \; \mu\ge 0$ and \begin{align*} &1<p\le p_G'(n,\alpha,\mu)\equiv 1+\frac 2{(1-\alpha)(n-1)+\mu+\alpha}, \intertext{ or } &1<p<p_0(n,\alpha,\mu)\equiv 1+\frac 1{n(1-\alpha)+\mu}. \end{align*} Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nontrivial and satisfy $u_1(x)\ge u_0(x)\ge 0$, $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1\subset \{|x|\le R\}$ with $R>0$. Suppose that the problem \eqref{Prob0} with \eqref{data0} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$. Then, $T<\infty$ and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \begin{align} T_\varepsilon&\le C\varepsilon^{\frac{-(p-1)}{1-\{(1-\alpha)(n-1)+\mu+\alpha\}(p-1)/2}} && \quad \mbox{if } p<p_G'(n,\alpha,\mu), \label{subcls-21} \\ T_\varepsilon&\le \exp(C\varepsilon^{-(p-1)}) &&\quad \mbox{if } p=p_G'(n,\alpha,\mu), \label{cls-21} \\ T_\varepsilon&\le C\varepsilon^{\frac{-(p-1)}{1-(p-1)\{n(1-\alpha)+\mu\}}} && \quad \mbox{if } p<p_0(n,\alpha,\mu) \label{subcls-22} \end{align} for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th21} \end{theo} \noindent {\bf Remark} (1) If $\alpha =0$, then \eqref{subcls-21} and \eqref{cls-21} are the same as the upper bounds \eqref{subst} and \eqref{crst}. \\ (2) By the theorem, the exponent $p_G'(n,\alpha,\mu)$ cannot always be the critical exponent for the global existence of solutions. We discuss this in more detail at the end of this subsection. \\ (3) If $p<p_0(n,\alpha,\mu)$, then the above assumption $u_1(x)\ge u_0(x)\ge 0$ can be replaced just by $u_1(x)\ge 0$.
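\noindent (4) A direct comparison of the two exponents shows that \[ p_0(n,\alpha,\mu)>p_G'(n,\alpha,\mu) \quad \mbox{if and only if}\quad \mu<(n+2)\alpha-(n+1), \] which, since $\mu\ge 0$, requires $\alpha>(n+1)/(n+2)$; only in this regime does the condition $p<p_0(n,\alpha,\mu)$ enlarge the range of blow-up exponents beyond $1<p\le p_G'(n,\alpha,\mu)$.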
\noindent{\it Proof)}\hspace{5mm} Multiplying \eqref{Prob0} by a test function $\partialhi(t,x)$ and $t^\mu$, and integrating over ${\bf R}^n$, we have \begin{equation} \frac d{dt}\int t^\mu(u\partialhi)_tdx-2\frac d{dt}\int t^\mu u\partialhi_t dx +\int t^\mu u\lp{\partialhi_{tt}-\frac 1{t^{2\alpha}}\Delta\partialhi+\frac\mu t\partialhi_t}dx =\int t^\mu |u_t|^p\partialhi dx. \label{9} \end{equation} Integrating over $[1,t]$, we obtain \begin{align} &\int t^\mu(u\partialhi)_t(t,x)dx-2\int t^\mu u\partialhi_t(t,x) dx +\int_1^t s^\mu \int u\lp{\partialhi_{ss}-\frac 1{s^{2\alpha}}\Delta\partialhi+\frac\mu s\partialhi_s}dx ds\nonumber\\ &=\varepsilon\int (u_1(x)\partialhi(1,x)- u_0(x)\partialhi_t(1,x)) dx+\int_1^t s^\mu\int |u_s|^p\partialhi dxds. \label{F10} \end{align} We remark that the $C^2$-solution $u$ of \eqref{Prob0} and \eqref{data0} has the property of finite speed of propagation, and satisfies \begin{equation} \mbox{supp }u(t,\cdot)\subset \{|x|\le A(t)+R\}, \qquad A(t)=\int_1^t s^{-\alpha}ds =\frac{t^{1-\alpha}-1}{1-\alpha}, \label{suppu0} \end{equation} provided that supp $u_0$, supp $u_1\subset \{|x|\le R\}$. See \cite{TW1} for its proof. We now define a smooth test function by \begin{align*} &\partialhi(t,x)=\lambda(t)\int_{|\omega|=1}e^{x\cdot \omega}dS_\omega, \\ &\lambda(t)=t^{(1-\mu)/2}K_\nu\lp{\frac 1{1-\alpha}t^{1-\alpha}}, \qquad \nu=\frac{\mu-1}{2(1-\alpha)}, \end{align*} where $K_\nu(t)$ is the modified Bessel function of the second kind, which is given by \begin{equation} K_\nu(t)=\int_0^\infty e^{-t\cosh z}\cosh \nu zdz, \qquad t>0, \quad \nu\in{\bf R}. \label{Bessel} \end{equation} It is well-known that the Bessel function $K_\nu$ satisfies the following properties (see, e.g., \cite{AS}): \begin{align} &t^2K_\nu''(t)+tK_\nu'(t)-(t^2+\nu^2)K_\nu(t)=0, \label{B0}\\ & K_\nu(t)=\sqrt{\frac\partiali 2}\frac{e^{-t}}{\sqrt t}\lp{1+O\lp{\frac 1t}} \qquad (t\to\infty), \label{B1}\\ & K_\nu'(t)=\frac\nu tK_\nu(t)-K_{\nu+1}(t).
\label{B2} \end{align} We can verify by \eqref{B0}-\eqref{B2} that there holds \begin{align} &\partialhi_{tt}-\frac 1{t^{2\alpha}}\Delta\partialhi+\frac\mu t\partialhi_t=0. \label{Ph1} \end{align} The following estimate is shown in \cite{TW1}: \begin{equation} \int_{|x|\le A(t)+R}|\partialhi(t,x)|dx\lesssim (t+R)^{\{(1-\alpha)(n-1)-(\mu-\alpha)\}/2}. \label{V} \end{equation} See (3.23) in \cite{TW1}. We also see by \cite{TW1} that \begin{equation} C_d\equiv \varepsilon\int (u_1(x)\partialhi(1,x)- u_0(x)\partialhi_t(1,x)) dx =C_0\varepsilon >0 \label{Cdata} \end{equation} under the assumptions on the initial data. Moreover, we have \begin{align} \partialhi_t(t,x)&=\lambda'(t)\int_{|\omega|=1}e^{x\cdot\omega}dS_\omega \nonumber\\ &=\lb{\frac{1-\mu}2 t^{-1}+t^{-\alpha}\frac{K_\nu'\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}}\partialhi(t,x)\nonumber\\ &=-t^{-\alpha}\frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}\partialhi(t,x), \label{phit} \end{align} where we have used \eqref{B2} and $\nu=(\mu-1)/(2(1-\alpha))$ for the last equality. Set \[ F_1(t)=\int u\partialhi(t,x)dx. \] From \eqref{F10}, proceeding as in \cite{TW1}, we have \begin{align*} \frac{t^{\mu-1}}{K_\nu\lp{\frac 1{1-\alpha}t^{1-\alpha}}^2}F_1(t) -\frac 1{K_\nu\lp{\frac 1{1-\alpha}}^2}F_1(1) &\ge C_d\int_1^t \frac{s^{-1}}{K_\nu\lp{\frac 1{1-\alpha}s^{1-\alpha}}^2}ds. \end{align*} Since $F_1(1)/{K_\nu\lp{\frac 1{1-\alpha}}^2}>0$ by assumption, we obtain \begin{equation} F_1(t)=\int u\partialhi(t,x)dx>0 \quad \mbox{for }t\ge 1. \label{F_1+} \end{equation} We next go back to \eqref{9}, which becomes \[ \frac d{dt}\int t^\mu u_t\partialhi dx-\int t^\mu u_t\partialhi_t dx -\frac 1{t^{2\alpha}}\int t^\mu u\Delta\partialhi dx =\int t^\mu |u_t|^p\partialhi dx.
\] Using \eqref{phit} and $\Delta\partialhi=\partialhi$ yields \begin{equation} \frac d{dt}\int t^\mu u_t\partialhi dx+t^{-\alpha}\frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}\int t^\mu u_t\partialhi dx -\frac 1{t^{2\alpha}}\int t^\mu u\partialhi dx =\int t^\mu |u_t|^p\partialhi dx. \label{91} \end{equation} On the other hand, by \eqref{F10}, \eqref{Ph1}, \eqref{Cdata} and \eqref{phit}, \begin{equation} \int t^\mu u_t\partialhi(t,x)dx+t^{-\alpha}\frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}\int t^\mu u\partialhi(t,x) dx =C_d+\int_1^t s^\mu\int |u_s|^p\partialhi dxds. \label{F} \end{equation} We see that there exists a constant $M\ge 1$ such that \begin{equation} \frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}\le M \label{keyc} \end{equation} for $t\ge 1$ by \eqref{B1}. Combining \eqref{91} and \eqref{F} multiplied by $M^{-1}t^{-\alpha}$, we have \begin{align*} &\frac d{dt}\int t^\mu u_t\partialhi dx+\lbt{\frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}+M^{-1}}t^{-\alpha}\int t^\mu u_t\partialhi dx\\ &\qquad + \lbt{M^{-1}\frac{K_{\nu+1}\lp{\frac 1{1-\alpha} t^{1-\alpha}}}{K_\nu\lp{\frac 1{1-\alpha} t^{1-\alpha}}}-1}t^{-2\alpha}\int t^\mu u\partialhi(t,x) dx \\ =&C_dM^{-1}t^{-\alpha}+ \int t^\mu |u_t|^p\partialhi dx+M^{-1}t^{-\alpha}\int_1^t s^\mu\int |u_s|^p\partialhi dxds. \end{align*} We note that \[ t^{-2\alpha+\mu}\int u\partialhi(t,x) dx >0 \] for all $t\ge 1$ by \eqref{F_1+}. Using \eqref{keyc}, we obtain \begin{align} &\frac d{dt}\int t^\mu u_t\partialhi dx+\lp{M+\frac 1M}t^{-\alpha}\int t^\mu u_t\partialhi dx \nonumber\\ \ge &C_dM^{-1}t^{-\alpha}+ \int t^\mu |u_t|^p\partialhi dx+M^{-1}t^{-\alpha}\int_1^t s^\mu\int |u_s|^p\partialhi dxds \qquad \mbox{for }t\ge 1. 
\label{star} \end{align} We now set \[ G(t)=\int t^\mu u_t\partialhi dx-\frac 1{M^2+1}\int_1^t s^\mu\int |u_s|^p\partialhi dxds-\frac{C_d}{2(M^2+1)} \qquad \mbox{for }t\ge 1. \] Then \[ G'(t)=\frac{d}{dt}\int t^\mu u_t\partialhi dx-\frac 1{M^2+1} \int t^\mu |u_t|^p\partialhi dx, \] hence \eqref{star} becomes \[ G'(t)+\frac{M^2+1}{M}t^{-\alpha}G(t)\ge \frac{C_d}{2M}t^{-\alpha}+\frac{M^2}{M^2+1}\int t^\mu |u_t|^p\partialhi dx>0 \quad \mbox{for }t\ge 1. \] Multiplying the inequality above by $\exp[(M^2+1)t^{1-\alpha}/(M(1-\alpha))]$ and integrating over $[1,t]$, we obtain \begin{equation} \exp\lp{\frac{M^2+1}{M(1-\alpha)}t^{1-\alpha}}G(t)-\exp\lp{\frac{M^2+1}{M(1-\alpha)}}G(1)>0 \quad \mbox{for }t\ge 1. \label{G1} \end{equation} Note that \begin{align*} G(1)&=\varepsilon\int u_1(x)\partialhi(1,x)dx-\frac \varepsilon{2(M^2+1)}\int (u_1(x)\partialhi(1,x)- u_0(x)\partialhi_t(1,x)) dx\\ &\ge \varepsilon\int\lp{\frac{2M^2+1}{2(M^2+1)}u_1(x)-\frac M{2(M^2+1)}u_0(x)}\partialhi(1,x)dx >0 \end{align*} by \eqref{phit}, \eqref{keyc} and assumption on the initial data. It holds from \eqref{G1} that $G(t)>0$ for $t\ge 1$. Thus, we see that \begin{equation} \int t^\mu u_t\partialhi dx>\frac 1{M^2+1}\int_1^t s^\mu\int |u_s|^p\partialhi dxds+\frac{C_d}{2(M^2+1)} \qquad \mbox{for }t\ge 1. \label{star1} \end{equation} We now define \[ H(t)=\int_1^t s^\mu\int |u_s|^p\partialhi dxds+\frac{C_d}2 \quad \mbox{for }t\ge 1. \] By H\"older's inequality, \eqref{V} and \eqref{star1}, we have \begin{align} H'(t)=\int t^\mu |u_t|^p\partialhi dx &\ge \lp{\int t^\mu u_t\partialhi dx}^p\lp{\int_{|x|\le A(t)+R} t^\mu \partialhi dx}^{1-p} \nonumber\\ &\gtrsim t^{-\{(1-\alpha)(n-1)+\mu+\alpha\}(p-1)/2}H(t)^p \quad \mbox{for }t\ge 1. \label{Ht} \end{align} We also have \begin{equation} H(1)=\frac{C_d}2=\frac 12C_0\varepsilon>0 \label{H1} \end{equation} by \eqref{Cdata}. 
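Before carrying out the final integration, it may be convenient to record the elementary computation behind the lifespan bounds. Setting $\kappa=\{(1-\alpha)(n-1)+\mu+\alpha\}(p-1)/2$, so that $\kappa<1$ corresponds to $p<p_G'(n,\alpha,\mu)$ and $\kappa=1$ to $p=p_G'(n,\alpha,\mu)$, division of \eqref{Ht} by $H(t)^p$ and integration over $[1,T_\varepsilon)$ give
\[
\frac{H(1)^{1-p}}{p-1}\ge \int_1^{T_\varepsilon}\frac{H'(t)}{H(t)^p}dt \gtrsim \int_1^{T_\varepsilon}t^{-\kappa}dt=
\begin{cases}
\displaystyle \frac{T_\varepsilon^{1-\kappa}-1}{1-\kappa} &\mbox{if }\kappa<1, \\
\ln T_\varepsilon &\mbox{if }\kappa=1,
\end{cases}
\]
and the left-hand side is of order $\varepsilon^{1-p}$ by \eqref{H1}.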
By integrating \eqref{Ht} multiplied by $H(t)^{-p}$ over $[1,t]$ and using \eqref{H1}, we therefore obtain the desired results \eqref{subcls-21} and \eqref{cls-21}. It remains to prove \eqref{subcls-22}. We use the following lemma, which is a generalization of Kato's lemma. \begin{lem} Let $p>1, \;a\ge 0, \;b\ge 0, \;q\ge 0, \; r\ge 0, \; \mu\ge 0, \; c>0$ and \[ M\equiv (p-1)(c-a)-q+1>0. \] Let $T\ge T_1>T_0\ge 1$. Assume that $F\in C^1([T_0,T))$ satisfies the following three conditions: \begin{align*} (i) \quad & F(t) \ge A_0t^{-a}(\ln t)^{-b}(t-T_1)^c \qquad \mbox{for }t\ge T_1, \\ (ii) \quad &F'(t) +\frac{\mu}{t}F(t)\ge A_1(t+R)^{-q}(\ln t)^{-r}|F(t)|^p \quad \mbox{for }t\ge T_0, \\ (iii) \quad &F(T_0)> 0, \end{align*} where $A_0, A_1$ and $R$ are positive constants. Then, $T$ has to satisfy \[ T^{M/(p-1)}(\ln T)^{-b-r/(p-1)}<CA_0^{-1}, \] where $C$ is a constant depending on $R,A_1,\mu,p,q,r,a,b$ and $c$. \label{lm22} \end{lem} \noindent{\it Proof)}\hspace{5mm} Multiplying assumption (ii) by $t^\mu$, we have \[ t^\mu F'+\mu t^{\mu-1}F\ge A_1 t^{\mu}(t+R)^{-q}(\ln t)^{-r}|F|^p. \] Integrating the above inequality over $[T_0,t]$ yields \begin{align} t^\mu F(t)-F(T_0)\ge A_1C_{R,q}\int_{T_0}^t s^{\mu-q}(\ln s)^{-r}|F(s)|^pds\ge 0, \quad C_{R,q}=(1+R)^{-q}. \label{e0} \end{align} By assumption (iii), we see that $F(t)>0$ for $t\ge T_0$. Hence, by assumption (i), we have \begin{align*} F(t) &\ge A_0^pA_1C_{R,q} t^{-\mu}\int_{T_1}^t s^{\mu-ap-q}(\ln s)^{-bp-r}(s-T_1)^{cp} ds\\ &\ge A_0^pA_1C_{R,q} t^{-\mu-ap-q}(\ln t)^{-bp-r} \int_{T_1}^t (s-T_1)^{\mu+cp}ds \\ &= \frac{A_0^pA_1C_{R,q}}{\mu+cp+1}t^{-\mu-ap-q}(\ln t)^{-bp-r}(t-T_1)^{\mu+cp+1} \qquad \mbox{for }t\ge T_1.
\end{align*} Based on the fact above, we define the sequences $a_j, \; b_j, \; c_j, \; D_j$ for $j = 0, 1, 2, \cdots$ by \begin{align} a_{j+1} = pa_j + \mu+q, & & b_{j+1} = pb_j+r, & & c_{j+1}=pc_j +\mu+1, & & D_{j+1} = \frac{A_1C_{R,q}D_j^p}{pc_j+\mu+1} \label{e2}\\ a_0 = a, & & b_0 = b, & & c_0 = c, & & D_0= A_0. \label{e3} \end{align} Solving \eqref{e2} and \eqref{e3}, we obtain \begin{align*} &a_j = p^j\lp{a+\frac{\mu+q}{p-1}}-\frac{\mu+q}{p-1}, \qquad b_j = p^j\lp{b+\frac{r}{p-1}}-\frac{r}{p-1}, \\ &c_j = p^j\lp{c+\frac{\mu+1}{p-1}}-\frac{\mu+1}{p-1}, \end{align*} and thus \[ D_{j+1} = \frac{A_1C_{R,q}D_j^p}{c_{j+1}} \ge \lp{c+\frac{\mu+1}{p-1}}^{-1}\frac{A_1C_{R,q}D_j^p}{p^{j+1}}. \] Then, \begin{align*} D_j &\ge \frac{BD_{j-1}^p}{p^{j}} \\ &\ge \frac B{p^{j}}\lp{\frac{BD_{j-2}^p}{p^{j-1}}}^p =\frac{B^{1+p}}{p^{j+p(j-1)}}D_{j-2}^{p^2} \\ &\ge \frac{B^{1+p}}{p^{j+p(j-1)}}\lp{\frac{BD_{j-3}^p}{p^{j-2}}}^{p^2} =\frac{B^{1+p+p^2}}{p^{j+p(j-1)+p^2(j-2)}}D_{j-3}^{p^3} \\ &\ge \cdots\cdots \ge \frac{B^{1+p+p^2+\cdots+p^{j-1}}}{p^{j+p(j-1)+p^2(j-2)+\cdots+p^{j-1}}}D_0^{p^j}, \\ \intertext{and } \ln D_j &\ge \frac{\ln B}{p-1}(p^j-1)-p^j \sum_{k=0}^j \frac{k}{p^k}\ln p+p^j\ln D_0, \end{align*} where $B=\{c+(\mu+1)/(p-1)\}^{-1}A_1C_{R,q}$. For sufficiently large $j$, we have \[ D_j \ge \exp(Ep^j), \] where \begin{equation} E = \frac{1}{p-1}\min\left(0, \ln B \right) - \sum_{k=0}^{\infty}\frac{k}{p^k}\ln p+\ln A_0. \label{eqofE} \end{equation} Thus, since $F(t)\ge D_jt^{-a_j}(\ln t)^{-b_j}(t-T_1)^{c_j}$ holds for $t\ge T_1$, we obtain \begin{align} F(t) &\ge t^{(\mu+q)/(p-1)}(\ln t)^{r/(p-1)}(t-T_1)^{-(\mu+1)/(p-1)} \nonumber\\ & \quad \cdot\exp\left[\left\{E+\lp{c+\frac{\mu+1}{p-1}}\ln(t-T_1)-\lp{a+\frac{\mu+q}{p-1}}\ln t-\lp{b+\frac{r}{p-1}}\ln(\ln t)\right\}p^j\right] \label{5} \end{align} for $t\ge T_1$. 
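We also remark that the series appearing in \eqref{eqofE} converges because $p>1$; explicitly,
\[
\sum_{k=0}^\infty \frac k{p^k}=\frac p{(p-1)^2},
\]
so that $E$ is a finite constant.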
Since \[ \lp{c+\frac{\mu+1}{p-1}}-\lp{a+\frac{\mu+q}{p-1}}=c-a+\frac{1-q}{p-1}>0 \] by assumption, choosing $t$ large enough, we can find a positive $\delta$ such that \[ E+\lp{c+\frac{\mu+1}{p-1}}\ln(t-T_1)-\lp{a+\frac{\mu+q}{p-1}}\ln t-\lp{b+\frac{r}{p-1}}\ln(\ln t) \ge \delta > 0. \] It then follows from \eqref{5} that $F(t) \longrightarrow \infty$ as $j \to \infty$ for sufficiently large $t$. We therefore see that the lifespan $T$ of $F(t)$ has to satisfy \[ T^{M/(p-1)}(\ln T)^{-b-r/(p-1)}<CA_0^{-1}, \] where $M=(p-1)(c-a)-q+1>0$, and $C$ is a constant depending on $A_1,R,\mu,p,q,r,a,b$ and $c$. This completes the proof of the lemma. \qed We now prove \eqref{subcls-22} by applying Lemma \ref{lm22}. Set \begin{equation} F(t)=\int u_t(t,x)dx. \label{int-u_t} \end{equation} Integrating the equation \eqref{Prob0} and using H\"older's inequality, we have by \eqref{suppu0}, \begin{align} F'(t)+\frac\mu t F(t) &=\int |u_t|^p dx \label{eqF} \\ & \ge \frac 1{(A(t)+R)^{n(p-1)}}|F(t)|^p \nonumber \\ & \ge \frac C{(t+R)^{n(1-\alpha)(p-1)}}|F(t)|^p. \label{Ho1} \end{align} Multiplying \eqref{Ho1} by $t^\mu$ and integrating imply \begin{equation} t^\mu F(t)-F(1)\gtrsim \int_1^t s^{\mu-n(1-\alpha)(p-1)} |F(s)|^pds\ge 0. \label{F01} \end{equation} We hence see that \begin{equation} F(t)\ge t^{-\mu}F(1)=C\varepsilon t^{-\mu}>0 \qquad \mbox{for }t\ge 1 \label{F2} \end{equation} by assumption. From \eqref{F01} and \eqref{F2}, we have \begin{align} F(t)&\ge C\varepsilon^p t^{-\mu}\int_1^t s^{\mu-n(1-\alpha)(p-1)-\mu p}ds \nonumber \\ &\ge C\varepsilon^p t^{-\mu(p+1)-n(1-\alpha)(p-1)}\int_1^t (s-1)^\mu ds. \nonumber\\ \intertext{Therefore, we obtain} F(t)&\ge C\varepsilon^p t^{-\mu(p+1)-n(1-\alpha)(p-1)}(t-1)^{\mu+1} \qquad \mbox{for }t\ge 1.
\label{ann} \end{align} Finally, by \eqref{Ho1} and \eqref{ann}, applying Lemma \ref{lm22} with $q=n(1-\alpha)(p-1)$, $a=\mu(p+1)+n(1-\alpha)(p-1)$, $b=r=0, \; c=\mu+1$ and $A_0=C\varepsilon^p$, we obtain the desired result \eqref{subcls-22} since \begin{align*} M &=(p-1)\lb{\mu+1 - \mu(p+1)-n(1-\alpha)(p-1)}-n(1-\alpha)(p-1)+1 \\ &=p\lb{1-(p-1)(\mu+n(1-\alpha))}>0. \end{align*} This completes the proof of Theorem \ref{th21}. \qed At the end of this subsection, we discuss the blow-up conditions and the lifespan estimates in the two subcritical cases of Theorem \ref{th21}. We note that if $p_G'(n,\alpha,\mu)=p_0(n,\alpha,\mu)$, then \[ \mu=\mu_{n,\alpha}\equiv \alpha(n+2)-(n+1); \] indeed, the equality of the two exponents is equivalent to $(1-\alpha)(n-1)+\mu+\alpha=2\{n(1-\alpha)+\mu\}$, which yields $\mu=\alpha(n+2)-(n+1)$. Fig.~\ref{Fig1}
\begin{figure}
\caption{Regions of the blow-up conditions in the case $n=3$, $\alpha=0.2$.}
\label{Fig1}
\end{figure}
and Fig.~\ref{Fig2}
\begin{figure}
\caption{Regions of the blow-up conditions in the case $n=3$, $\alpha=0.9$.}
\label{Fig2}
\end{figure}
below show the regions of the blow-up conditions in the cases $n=3, \; \alpha=0.2$ and $n=3, \; \alpha=0.9$, respectively. For $\mu > \max\{0,\mu_{n,\alpha}\}$, the exponent $p_G'(n,\alpha,\mu)$ is bigger than $p_0(n,\alpha,\mu)$; see the region (G) in Figs.~\ref{Fig1} and \ref{Fig2}. On the other hand, if $(n+1)/(n+2)\le \alpha<1$, then for $0\le \mu \le \mu_{n,\alpha}$, the condition $1<p<p_0(n,\alpha,\mu)$ includes the condition $1<p<p_G'(n,\alpha,\mu)$. This means that if $\alpha$ is close to $1$, then for $0\le \mu<1$ the range $1<p<p_0(n,\alpha,\mu)$ gives a larger blow-up region than $1<p<p_G'(n,\alpha,\mu)$, as shown by the region (O) in Fig.~\ref{Fig2}. Hence, the exponent $p_G'(n,\alpha,\mu)$ cannot always be the critical exponent for the global existence of solutions. Comparing the two upper bounds \eqref{subcls-21} and \eqref{subcls-22}, we see that \eqref{subcls-21} is better than \eqref{subcls-22} in the region (G) in Figs.~\ref{Fig1} and \ref{Fig2}, while the relation is reversed in the region (O) in Fig.~\ref{Fig2}. \subsection{Case $\alpha \ge 1$} We next consider the same problem for the case $\alpha\ge 1$.
\begin{theo} Let $n\ge 2, \; \alpha\ge 1, \; \mu\ge 0$ and $1<p<1+1/\mu$. Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nontrivial and satisfy $u_1(x)\ge 0$, $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1\subset \{|x|\le R\}$ with $R>0$. Suppose that the problem \eqref{Prob0} with \eqref{data0} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$. Then, $T<\infty$ and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \begin{align} &T_\varepsilon^{1-\mu(p-1)}(\ln T_\varepsilon)^{-n(p-1)}\le C\varepsilon^{-(p-1)} &&\mbox{if }\alpha=1, \label{ae1} \\ &T_\varepsilon\le C\varepsilon^{-(p-1)/\{1-\mu(p-1)\}} &&\mbox{if }\alpha>1 \label{ag1} \end{align} for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th23} \end{theo} \noindent{\it Proof)}\hspace{5mm} We first remark that the $C^2$-solution $u$ of \eqref{Prob0} and \eqref{data0} has the property of finite speed of propagation, and satisfies \[ \mbox{supp }u(t,\cdot)\subset \{|x|\le A(t)+R\}, \] where \begin{equation} A(t)= \begin{cases} \displaystyle \int_1^t s^{-1}ds =\ln t &\mbox{if }\alpha=1, \\ \displaystyle \int_1^t s^{-\alpha}ds =\frac 1{\alpha-1}(1-t^{1-\alpha}) &\mbox{if }\alpha>1, \end{cases} \label{suppu1} \end{equation} provided that supp $u_0$, supp $u_1\subset \{|x|\le R\}$. See \cite{TW3} for its proof. \\ Define $F(t)$ by \eqref{int-u_t} and let $\alpha=1$. From \eqref{eqF} and \eqref{suppu1}, \begin{align} F'(t)+\frac\mu t F(t) & \ge \frac 1{(A(t)+R)^{n(p-1)}}|F(t)|^p \nonumber \\ & \ge \frac C{(\ln t)^{n(p-1)}}|F(t)|^p. \label{Ho2} \end{align} Proceeding as in the proof of Theorem \ref{th21} for the case $0\le \alpha<1$, we have by \eqref{F2}, \begin{align*} F(t)&\ge C\varepsilon^p t^{-\mu}\int_1^t s^{\mu-\mu p}(\ln s)^{-n(p-1)}ds \\ &\ge C\varepsilon^p t^{-\mu(p+1)}(\ln t)^{-n(p-1)}\int_1^t (s-1)^\mu ds.
\end{align*} Therefore, we obtain \begin{equation} F(t)\ge C\varepsilon^p t^{-\mu(p+1)}(\ln t)^{-n(p-1)}(t-1)^{\mu+1} \qquad \mbox{for }t\ge 1. \label{ann2} \end{equation} Finally, by \eqref{Ho2} and \eqref{ann2}, applying Lemma \ref{lm22} with $q=0$, $a=\mu(p+1)$, $b=r=n(p-1), \; c=\mu+1$ and $A_0=C\varepsilon^p$, we obtain the desired result \eqref{ae1} since \begin{equation} M =(p-1)(1 - \mu p)+1=p(1-\mu(p-1))>0. \label{Mp2} \end{equation} On the other hand, if $\alpha>1$, then from \eqref{eqF} and \eqref{suppu1}, \begin{equation} F'(t)+\frac\mu t F(t) \ge C |F(t)|^p. \label{Ho3} \end{equation} Proceeding as before, by \eqref{F2}, we have \begin{align*} F(t)&\ge C\varepsilon^p t^{-\mu}\int_1^t s^{\mu-\mu p}ds \\ &\ge C\varepsilon^p t^{-\mu(p+1)}\int_1^t (s-1)^\mu ds. \end{align*} Therefore, we obtain \begin{equation} F(t)\ge C\varepsilon^p t^{-\mu(p+1)}(t-1)^{\mu+1} \qquad \mbox{for }t\ge 1. \label{ann3} \end{equation} Finally, by \eqref{Ho3} and \eqref{ann3}, applying Lemma \ref{lm22} with $q=0$, $a=\mu(p+1)$, $b=r=0, \; c=\mu+1$ and $A_0=C\varepsilon^p$, we obtain the desired result \eqref{ag1} since \eqref{Mp2}. This completes the proof of Theorem \ref{th23}. \qed \section{Space derivative nonlinearity.} \setcounter{equation}{0} In this section we consider the problem \eqref{Prob0x}. Let $u_0$ and $u_1$ be nonnegative and satisfy supp $u_0$, supp $u_1\subset \{|x|\le R\}$ with $R>0$. We prepare several basic inequalities which will be used repeatedly. Let \[ F(t)=\int u(t,x)dx. \] Then integrating equation \eqref{Prob0x} over ${\bf R}^n$ and using Poincar\'e's and H\"older's inequalities imply that \begin{align} F''(t)+\frac\mu t F'(t) &=\int |\nabla_x u|^p dx \label{eqF1} \\ &\ge \frac 1{(A(t)+R)^p}\int|u|^pdx \label{eqF2}\\ &\ge \frac 1{(A(t)+R)^{p+n(p-1)}}|F(t)|^p. \label{eqF3} \end{align} On the other hand, mutiplying \eqref{eqF1} by $t^\mu$ and integrating imply \begin{align} t^\mu F'(t)-F'(1)&= \int_1^t s^\mu\int |\nabla_x u|^p dxds. 
\label{impr} \intertext{Since $F'(1)>0$ by assumption,} F'(t)& \ge t^{-\mu}\int_1^t s^\mu\int |\nabla_x u|^p dxds. \nonumber \end{align} Integrating again and using $F(1)>0$ by assumption, we have \begin{equation} F(t)\ge \int_1^t \thetau^{-\mu}\int_1^\thetau s^\mu\int |\nabla_x u|^p dxdsd\thetau \qquad \mbox{for }t\ge 1. \label{Fup} \end{equation} \noindent \subsection{Case $0\le \alpha<1$} We say that a case is wavelike or heatlike if the blow-up condition involves an exponent similar to the Strauss or the Fujita exponent, respectively. \subsubsection{\large Wavelike and subcritical case} Let $p_c'(n,\alpha,\mu)$ be the positive root of the equation \footnote{ For the equation $u_{tt}-\frac 1{t^{2\alpha}}\Delta u+\frac\mu{t}u_t=|u|^p$, the critical exponents as blow-up conditions are $p_c(n,\alpha,\mu)$ and $p_F(n,\alpha)$, where $p_c(n,\alpha,\mu)$ is the positive root of \[ \gamma(n,p,\alpha,\mu) =-p^2\lp{n-1+\frac{\mu-\alpha}{1-\alpha}}+p\lp{n+1+\frac{\mu+3\alpha}{1-\alpha}}+2=0, \] and $p_F(n,\alpha)=1+2/\{n(1-\alpha)\}$. We remark that if $\mu=\alpha=0$, then $p_c(n,0,0)$ and $p_F(n,0)$ coincide with the Strauss and Fujita exponents, respectively. See \cite{TW1,TW2}. } \begin{align} \gamma'(n,p,\alpha,\mu) &\equiv -p^2\lp{n+1+\frac{\mu-\alpha}{1-\alpha}}+p\lp{n+1+\frac{\mu+3\alpha}{1-\alpha}}+2=0 \label{gm'} \end{align} and let \[ p_0'(n,\alpha,\mu)=1+\frac{1+\alpha}{(n+1)(1-\alpha)+\mu-1}. \] \begin{theo} Let $n\ge 2, \; 0\le \alpha<1, \; \mu\ge 0$ and $1<p<p_c'(n,\alpha,\mu)$ or $1<p<p_0'(n,\alpha,\mu)$. Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nonnegative, nontrivial and $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1\subset \{|x|\le R\}$ with $R>0$. Suppose that the problem \eqref{Prob0x} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$.
Then, $T<\infty$ and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \begin{align} T_\varepsilon&\le C\varepsilon^{\frac{-2p(p-1)}{(1-\alpha)\gamma'(n,p,\alpha,\mu)}} &&\mbox{ if } 1<p<p_c'(n,\alpha,\mu), \label{subcls-31} \\ T_\varepsilon&\le C\varepsilon^{-\frac{p-1}{\{1-\mu-(n+1)(1-\alpha)\}(p-1)+1+\alpha}} &&\mbox{ if } 1<p< p_0'(n,\alpha,\mu) \label{subcls-32} \end{align} for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th31} \end{theo} \noindent {\bf Remark} (1) The upper bound of the lifespan in \eqref{subcls-32} is better than that in \eqref{subcls-31} if $1<p< 2(1-\alpha)/\{(n+1)(1-\alpha)+\mu+\alpha-2\}$ since \[ \{1-\mu-(n+1)(1-\alpha)\}(p-1)+1+\alpha> \frac{1-\alpha}{2p}\gamma'(n,p,\alpha,\mu). \] Note that in this case, if $p<p_c'(n,\alpha,\mu)$, then $\{1-\mu-(n+1)(1-\alpha)\}(p-1)+1+\alpha>0$. \\ (2) If $p< 2(1-\alpha)/\{(n+1)(1-\alpha)+\mu+\alpha-2\}$, then the condition $\mu<1$ is necessary since $p>1$ and $n\ge 2$. Hence, the theorem in this case is not applied to the original equation \eqref{ore} for $|\nabla_x u|^p$ in the FLRW spacetime since $\mu=2/(1+w)\ge 1$. This is covered later in Section 4 in more detail. \noindent{\it Proof)}\hspace{5mm} We have proved in \cite{TW1}, by choosing large $T_1>0$, \[ \int |u|^p dx \ge C\varepsilon^p t^{-(\mu-\alpha)p/2+(1-\alpha)(n-1)(1-p/2)} \] for $t\ge T_1$. By \eqref{suppu0} and \eqref{eqF2}, we obtain \begin{align} \int |\nabla_x u|^p dx & \ge C\varepsilon^p t^{-(\mu-\alpha)p/2+(1-\alpha)(n-1)(1-p/2)-p(1-\alpha)} \qquad \mbox{for }t\ge T_1. 
\label{up} \end{align} Using \eqref{Fup} implies \begin{align} F(t)&\ge C\varepsilon^p \int_{T_1}^t \thetau^{-\mu}\int_{T_1}^\thetau s^{\mu-(\mu-\alpha)p/2+(1-\alpha)(n-1)(1-p/2)-p(1-\alpha)}dsd\thetau \nonumber \\ &\ge C\varepsilon^p \int_{T_1}^t \thetau^{-\mu(1+ p/2)-(1-\alpha)(n-1)p/2-p(1-\alpha)} \int_{T_1}^\thetau (s-T_1)^{\mu+\alpha p/2+(1-\alpha)(n-1)}dsd\thetau \nonumber\\ &\ge C\varepsilon^p t^{-\mu(1+ p/2)-(1-\alpha)(n-1)p/2-p(1-\alpha)}\int_{T_1}^t (\thetau-T_1)^{\mu+\alpha p/2+(1-\alpha)(n-1)+1}d\thetau. \nonumber \\ \intertext{Therefore, } F(t)&\ge C\varepsilon^p t^{-\mu(1+ p/2)-(1-\alpha)(n-1)p/2-p(1-\alpha)}(t-T_1)^{\mu+\alpha p/2+(1-\alpha)(n-1)+2} \qquad \mbox{for }t\ge T_1. \label{an} \end{align} We also have \begin{equation} F''(t)+\frac\mu t F'(t) \ge \frac C{(t+R)^{(p+n(p-1))(1-\alpha)}}|F(t)|^p \label{eqF4} \end{equation} by \eqref{suppu0} and \eqref{eqF3}. The following lemma is proved in \cite{TW1}. \begin{lem} Let $p>1, \;a\ge 0, \;b>0, \;q>0, \; \mu\ge 0$ and \[ M\equiv (p-1)(b-a)-q+2>0. \] Let $T\ge T_1>T_0\ge 1$. Assume that $F\in C^2([T_0,T))$ satisfies the following three conditions: \begin{align*} (i) \quad & F(t) \ge A_0t^{-a}(t-T_1)^b \qquad \mbox{for }t\ge T_1 \\ (ii) \quad &F''(t) +\frac{\mu F'(t)}{t}\ge A_1(t+R)^{-q}|F(t)|^p \quad \mbox{for }t\ge T_0 \\ (iii) \quad &F(T_0)\ge 0, \quad F'(T_0)>0, \end{align*} where $A_0, A_1$ and $R$ are positive constants. Then $T$ has to satisfy \[ T<C A_0^{-(p-1)/M}, \] where $C$ is a constant depending on $R,A_1,\mu,p,q,a$ and $b$. \label{lm32} \end{lem} From \eqref{an} and \eqref{eqF4}, applying Lemma \ref{lm32} with $q=(p+n(p-1))(1-\alpha)$, $a=\mu(1+ p/2)+(1-\alpha)(n-1)p/2+p(1-\alpha)$, $b=\mu+\alpha p/2+(1-\alpha)(n-1)+2$, and $A_0=C\varepsilon^p$, we obtain the desired result since \begin{align*} M&=(p-1)\lb{2-\frac{\mu-\alpha}2p+(1-\alpha)\lp{(n-1)\lp{1-\frac p2}-p}}-(p+n(p-1))(1-\alpha)+2 \\ &=\frac{1-\alpha}2\gamma'(n,p,\alpha,\mu)>0.
\end{align*} It remains to prove \eqref{subcls-32}. From \eqref{impr}, we have $F'(t)\ge t^{-\mu}F'(1)$, hence, integrating and using $F(1) > 0$ by assumption imply \[ F(t)\ge F'(1)t^{1-\mu}=C\varepsilon t^{1-\mu} \quad \mbox{for }t\ge T' \] with some $T'>1$. By \eqref{suppu0} and \eqref{eqF3}, \begin{equation} \int |\nabla_x u|^p dx \ge C\varepsilon^p t^{-(p+n(p-1))(1-\alpha)+p(1-\mu)} \qquad \mbox{for }t\ge T'. \label{up1} \end{equation} From \eqref{Fup}, \begin{align} F(t) & \gtrsim \varepsilon^p \int_{T'}^t \thetau^{-\mu}\int_{T'}^\thetau s^{\mu-(p+n(p-1))(1-\alpha)+p(1-\mu)}dsd\thetau \nonumber \\ &\gtrsim \varepsilon^p \int_{T'}^t \thetau^{-(p+n(p-1))(1-\alpha)-p\mu}\int_{T'}^\thetau s^pds d\thetau \nonumber \\ &\gtrsim \varepsilon^p \int_{T'}^t \thetau^{-(p+n(p-1))(1-\alpha)-p\mu}(\thetau-T')^{p+1} d\thetau \nonumber \\ &\ge C\varepsilon^p t^{-(p+n(p-1))(1-\alpha)-p\mu}(t-T')^{p+2} \qquad \mbox{for }t\ge T'. \label{imprFlow} \end{align} From \eqref{eqF4} and \eqref{imprFlow}, applying Lemma \ref{lm32} with $q=(p+n(p-1))(1-\alpha)$, $a=(p+n(p-1))(1-\alpha)+p\mu$, $b=p+2$ and $A_0=C\varepsilon^p$, we obtain the desired result. This completes the proof of Theorem \ref{th31}. \qed In Theorem \ref{th31} the coefficient $n+1+(\mu-\alpha)/(1-\alpha)$ of $p^2$ in \eqref{gm'} is positive since it is assumed that the positive root $p_c'(n,\alpha,\mu)$ exists. If $\mu<1$ and $3/4\le\alpha<1$, then $n+1+(\mu-\alpha)/(1-\alpha)\le 0$ can happen; hence, $\gamma'(n,p,\alpha,\mu)>0$ for all $p>1$ and the following corollary holds: \begin{coro} Let $n\ge 2, \; 0\le \alpha<1, \; \mu\ge 0, \; n+1+(\mu-\alpha)/(1-\alpha)\le 0$ and $p>1$. Under the assumptions on the initial data of Theorem \ref{th31}, there holds \eqref{subcls-31}. \label{co33} \end{coro} \noindent {\bf Remark} One cannot apply Corollary \ref{co33} to the original equation \eqref{ore} for $|\nabla_x u|^p$ in the FLRW spacetime since $n+1+(\mu-\alpha)/(1-\alpha)>0$. 
This is covered later in Section 4 in more detail. \subsubsection{\large Wavelike and critical case } We next consider the critical case $p=p_c'(n,\alpha,\mu)$. \begin{theo} Let $0\le \alpha<1$ for $n\ge 3$ and $2/7<\alpha <1$ for $n=2$, and let $\mu\ge 0$ and \begin{align} p=p_c'(n,\alpha,\mu)> \begin{cases} \displaystyle p_F'(n,\alpha)=1+\frac{1+\alpha}{(n+1)(1-\alpha)} & \mbox{if }n\ge 3, \\ \max\{p_F'(2,\alpha),2\} & \mbox{if }n=2. \end{cases} \label{pcf} \end{align} Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nonnegative, nontrivial and $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1 \subset \{|x|\le R\}$ with some $0<R\le 1/(2(1-\alpha))$. Suppose that the problem \eqref{Prob0x} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$. Then, $T<\infty$, and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \[ T_\varepsilon\le \exp(C\varepsilon^{-p(p-1)}) \] for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th34} \end{theo} \noindent {\bf Remark} (1) In the case $n=2$, we note that \[ \max\{p_F'(2,\alpha),2\}= \begin{cases} p_F'(2,\alpha) & \mbox{if }\alpha\ge 1/2, \\ 2 & \mbox{if }\alpha < 1/2. \end{cases} \] (2) When $n=2$, the condition $\alpha>2/7$ is necessary for the existence of $p$ satisfying \eqref{pcf}. \\ (3) If $p_F'(n,\alpha)=p_c'(n,\alpha,\mu)$, then \[ \mu=\mu^\ast\equiv (n+1)(1-\alpha)+\alpha- \frac{2(n+1)(1-\alpha)^2}{(n+1)(1-\alpha)+1+\alpha}. \] (4) The case $p_c'(n,\alpha,\mu)<p\le p_F'(n,\alpha)$ is considered in the next sub-subsection. \noindent{\it Proof)}\hspace{5mm} Let \begin{align} \lambda_\eta(t)&=\lambda(\eta t)=(\eta t)^{(1-\mu)/2}K_\nu\lp{\frac 1{1-\alpha}(\eta t)^{1-\alpha}}, \nonumber \\ \partialsi_\eta(t,x)&=\lambda_\eta(t)\int_{|\omega|=1}e^{\eta^{1-\alpha}x\cdot \omega}dS_\omega, \label{psieta} \end{align} where $K_\nu(t)$ is the modified Bessel function given by \eqref{Bessel}.
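We remark that each $\partialsi_\eta$ again solves the homogeneous linear equation. Indeed, $\partialsi_\eta(t,x)=\partialhi(\eta t,\eta^{1-\alpha}x)$ with $\partialhi$ the test function of Section 2, and the operator is invariant under the scaling $(t,x)\mapsto(\eta t,\eta^{1-\alpha}x)$:
\[
\lp{\partial_t^2-\frac 1{t^{2\alpha}}\Delta+\frac \mu t\partial_t}\partialsi_\eta(t,x)
=\eta^2\lp{\partialhi_{ss}-\frac 1{s^{2\alpha}}\Delta\partialhi+\frac \mu s\partialhi_s}\bigg|_{(s,y)=(\eta t,\eta^{1-\alpha}x)}=0
\]
by \eqref{Ph1}.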
We now define a test function $\partialhi_q(t,x)$ by \begin{equation} \partialhi_q(t,x)=\int_0^1 \partialsi_\eta(t,x)\eta^{q-1+\mu}d\eta. \label{pqdef} \end{equation} Let $q$ satisfy \begin{align} &q>-\frac{\mu+\alpha}2 \label{cd1}\\ \intertext{and } &q+\frac{\mu-1}2-(1-\alpha)|\nu|>-1. \label{cd2} \end{align} It is proved in \cite{TW1} that the function $\partialhi_q(t,x)$ satisfies the following properties: \begin{lem} Let $\partialhi_q(t,x)$ be defined by \eqref{pqdef}. Assume that $q$ satisfies \eqref{cd1} and \eqref{cd2}. \begin{enumerate}[(i)] \item Then, there exists a $T_0>0$ such that $\partialhi_q$ satisfies \[ \partialhi_q(t,x) \sim \begin{cases} t^{(-\mu+\alpha)/2}(t^{1-\alpha}+|x|)^{-\lp{q+\frac{\mu+\alpha}2}/(1-\alpha)} \\ \hspace{6cm} \lp{-\frac{\mu+\alpha}2<q<\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2}, \\ & \\ t^{(-\mu+\alpha)/2} (t^{1-\alpha}+|x|)^{-(n-1)/2} (t^{1-\alpha}-(1-\alpha)|x|)^{(n-1)/2-\lp{q+\frac{\mu+\alpha}2}/(1-\alpha)} \\ \hspace{7cm}\lp{q>\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2}, \end{cases} \] for $t\ge T_0$ and $|x|\le(t^{1-\alpha}-1)/(1-\alpha)+R$ with $0<R\le 1/(2(1-\alpha))$. \item Moreover, if $q+1-\alpha>\{(n-1)(1-\alpha)-(\mu+\alpha)\}/2$, then there exists a $T_1>0$ such that $\partialhi_q$ satisfies \[ \partial_t\partialhi_q(t,x) \sim t^{-(\mu+\alpha)/2}(t^{1-\alpha}+|x|)^{-(n-1)/2} (t^{1-\alpha}-(1-\alpha)|x|)^{(n-1)/2-\lp{q+\frac{\mu+\alpha}2}/(1-\alpha)-1} \] for $t\ge T_1$ and $|x|\le(t^{1-\alpha}-1)/(1-\alpha)+R$ with $0<R\le 1/(2(1-\alpha))$. \end{enumerate} \label{lm35} \end{lem} We now prove the following key lemma to prove the theorem. \begin{lem} Assume that $u,u_0$ and $u_1$ satisfy the conditions in Theorem \ref{th34}. Let $n\ge 2, \; \nu=(\mu-1)/(2(1-\alpha))$ and \[ q=\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2 -\frac{1-\alpha}p, \] and let $p$ satisfy \eqref{pcf}. Define \[ G(t)=\int_1^t(t-\thetau)\thetau^{1+\mu}\int |\nabla_x u|^p\partialhi_q (\thetau,x)dxd\thetau. 
\] Then, $G(t)$ satisfies \[ G'(t)\ge C(\ln t)^{1-p}t\lp{\int_1^t \thetau^{-3}G(\thetau)d\thetau}^p \qquad \mbox{for }t\ge T_2 \] with some $T_2$ sufficiently large, where $C$ is a constant independent of $\varepsilon$. \label{lm36} \end{lem} \noindent{\it Proof)}\hspace{5mm} We first verify that $q$ satisfies the required conditions \eqref{cd1} and \eqref{cd2} to use Lemma \ref{lm35}. We claim that there holds \begin{equation} -\frac{\mu+\alpha}2<q<\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2. \label{q1} \end{equation} The second inequality is clearly true. To show the first inequality, i.e. \eqref{cd1}, we remark that \eqref{pcf} implies after some calculation $p>2/(n-1)$, which is equivalent to $q>-(\mu+\alpha)/2$. We can also show that $q$ satisfies \eqref{cd2} with $\nu=(\mu-1)/(2(1-\alpha))$, {\it i.e., }$q>-\min\{\mu,1\}$. In fact, (i) the assumption $p>p_F'(n,\alpha)$ is equivalent to $q>-1$ since \begin{equation} q=n(1-\alpha)-\frac 2{p-1}-1+p'(1-\alpha), \quad \frac 1p+\frac 1{p'}=1 \qquad \mbox{if }p=p_c', \label{q-eq} \end{equation} (ii) the critical case $p=p_c'$ satisfies $\gamma'(n,p,\alpha,\mu)=0$ in \eqref{gm'} and this equality yields \begin{align*} p&=\frac{(n+1)(1-\alpha)+\mu+3\alpha}{(n+1)(1-\alpha)+\mu-\alpha}+\frac{2(1-\alpha)}{p\lb{(n+1)(1-\alpha)+\mu-\alpha}} \\ &>\frac{(n+1)(1-\alpha)+\mu+\alpha}{(n+1)(1-\alpha)+\mu-\alpha} \\ &=1+\frac{1+\alpha}{(n+1)(1-\alpha)+\mu-1}, \end{align*} which is equivalent to $q>-\mu$ by \eqref{q-eq}. In addition, $q$ satisfies the condition of Lemma \ref{lm35} (ii) since \begin{align} q&<\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2 \nonumber\\ &<\frac{(n-1)(1-\alpha)-(\mu+\alpha)}2+(1-\alpha)\lp{1-\frac 1p} =q+1-\alpha. \label{q2} \end{align} Let us now show the inequality of the lemma. Multiplying the equation
in \eqref{Prob0x} by a test function $\partialhi(t,x)$ and $t^\mu$, and integrating over ${\bf R}^n$, we have \begin{equation} \frac d{dt}\int t^\mu(u\partialhi)_tdx-2\frac d{dt}\int t^\mu u\partialhi_t dx +\int t^\mu u\lp{\partialhi_{tt}-\frac 1{t^{2\alpha}}\Delta\partialhi+\frac\mu t\partialhi_t}dx =\int t^\mu |\nabla_x u|^p\partialhi dx. \label{3-9} \end{equation} As shown in \cite{TW1}, the test function $\partialhi_q$ given in \eqref{pqdef} satisfies \begin{equation} \lp{\partial_t^2-\frac 1{t^{2\alpha}}\Delta +\frac\mu t\partial_t}\partialhi_q(t,x)=0. \label{phq0} \end{equation} Applying \eqref{3-9} with $\partialhi=\partialhi_q$ and \eqref{phq0}, we have \[ \frac{d^2}{dt^2}\int t^\mu u\partialhi_q dx-\mu\frac d{dt}\int t^{\mu-1} u\partialhi_q dx -2\frac d{dt}\int t^\mu u\partial_t\partialhi_q dx =\int t^\mu |\nabla_x u|^p\partialhi_q dx. \] Moreover, integrating over $[1,t]$ three times, we obtain \begin{align*} &\int_1^t \thetau^\mu\int u\partialhi_qdxd\thetau -\mu\int_1^t(t-\thetau)\thetau^{\mu-1}\int u\partialhi_q dxd\thetau -2\int_1^t(t-\thetau)\thetau^\mu\int u\partial_\thetau\partialhi_q dxd\thetau \nonumber\\ =&C_{data}(t) +\frac 12\int_1^t (t-\thetau)^2\thetau^\mu\int |\nabla_xu|^p\partialhi_q dxd\thetau, \end{align*} where \[ C_{data}(t) =\varepsilon (t-1)\int u_0(x)\partialhi_q(1,x)dx +\frac \varepsilon 2(t-1)^2 \int (u_1(x)\partialhi_q(1,x)-u_0(x)\partial_t\partialhi_q(1,x))dx. \] We note that $\partial_t\partialhi_q(1,x)\le 0$, which is shown in \cite{TW1}. Hence, by the positivity assumption on $u_0$ and $u_1$, it holds that $C_{data}(t)\ge 0$ for $t\ge 1$. Thus, \begin{align} &\int_1^t \thetau^\mu\int u\partialhi_q(\thetau,x)dxd\thetau -\mu\int_1^t(t-\thetau)\thetau^{\mu-1}\int u\partialhi_q(\thetau,x) dxd\thetau \nonumber \\ & \hspace{7cm}-2\int_1^t(t-\thetau)\thetau^\mu\int u\partial_\thetau\partialhi_q(\thetau,x) dxd\thetau \nonumber\\ &\ge \frac 12\int_1^t (t-\thetau)^2\thetau^\mu\int |\nabla_xu|^p\partialhi_q(\thetau,x) dxd\thetau.
\label{F1} \end{align} Since \[ G'(t)=\int_1^t \tau^{1+\mu}\int |\nabla_xu|^p\phi_q dxd\tau \quad \mbox{and } \quad G''(t)= t^{1+\mu}\int |\nabla_xu|^p\phi_q dx, \] the right-hand side of \eqref{F1} becomes \begin{align} \frac 12\int_1^t (t-\tau)^2\tau^\mu\int |\nabla_xu|^p\phi_q dxd\tau =\frac 12\int_1^t (t-\tau)^2\tau^{-1} G''(\tau)d\tau =\int_1^t t^2\tau^{-3}G(\tau)d\tau. \label{esRHS} \end{align} We now estimate the left-hand side of \eqref{F1}. By Poincar\'e's and H\"older's inequalities, the first integral is estimated by \begin{align*} I&\equiv \int_1^t \tau^\mu\int u\phi_q(\tau,x)dxd\tau \\ &\le \int_1^t \tau^\mu \|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)} \int_{|x|\le A(\tau)+R} |u(\tau,x)|dxd\tau \\ &\le \int_1^t \tau^\mu \|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)}(A(\tau)+R) \int_{|x|\le A(\tau)+R} |\nabla_x u(\tau,x)|dxd\tau \\ &\le \int_1^t \tau^\mu \|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)}(A(\tau)+R)\\ &\qquad \cdot \lp{\int_{|x|\le A(\tau)+R} |\nabla_x u(\tau,x)|^p\phi_q(\tau,x)dx}^{1/p} \lp{\int_{|x|\le A(\tau)+R}\phi_q(\tau,x)^{1-p'}dx}^{1/p'}d\tau \\ &\le\lp{\int_1^t \tau^{1+\mu}\int |\nabla_xu|^p\phi_q(\tau,x)dxd\tau}^{1/p}\\ &\qquad \cdot\lp{\int_1^t \tau^{1+\mu-p'}\|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)}^{p'}(A(\tau)+R)^{p'}\int_{|x|\le A(\tau)+R}\phi_q(\tau,x)^{1-p'}dxd\tau }^{1/p'}. \end{align*} We remark that $q$ satisfies \eqref{q1}.
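The algebraic equivalence $p>p_F'(n,\alpha)\iff q>-1$ invoked after \eqref{q-eq} can be spot-checked numerically. The following sketch (the helper names are ours, not from the paper) treats $q$ as the function of $p$ given by \eqref{q-eq} and verifies that $q=-1$ exactly at $p=p_F'(n,\alpha)=1+(1+\alpha)/\{(n+1)(1-\alpha)\}$, with the correct sign on either side:

```python
import math

def p_F(n, alpha):
    # Fujita-type exponent p_F'(n, alpha) = 1 + (1+alpha)/((n+1)(1-alpha))
    return 1 + (1 + alpha) / ((n + 1) * (1 - alpha))

def q_of(n, alpha, p):
    # q from (q-eq): q = n(1-alpha) - 2/(p-1) - 1 + p'(1-alpha), 1/p + 1/p' = 1
    pp = p / (p - 1)
    return n * (1 - alpha) - 2 / (p - 1) - 1 + pp * (1 - alpha)

for n in (2, 3, 4):
    for alpha in (0.0, 0.3, 0.7):
        pF = p_F(n, alpha)
        # q is increasing in p, and q = -1 exactly at p = p_F'(n, alpha)
        assert abs(q_of(n, alpha, pF) + 1) < 1e-9
        assert q_of(n, alpha, pF + 0.05) > -1
        assert q_of(n, alpha, pF - 0.05) < -1
```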
Applying Lemma \ref{lm35} to the last integral above, we have \begin{align*} &\lp{\int_1^{T_0}+\int_{T_0}^t} \tau^{1+\mu-p'}\|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)}^{p'}(A(\tau)+R)^{p'}\int_{|x|\le A(\tau)+R}\phi_q(\tau,x)^{1-p'}dxd\tau \\ \lesssim& C_{T_0}+\int_{T_0}^t \tau^{1+\mu-p'+p'(-\mu+\alpha)/2-p'(q+(\mu+\alpha)/2)+p'(1-\alpha)} \\ & \qquad\qquad \int_{|x|\le A(\tau)+R} \lb{\tau^{(-\mu+\alpha)/2}(\tau^{1-\alpha}+|x|)^{-(q+(\mu+\alpha)/2)/(1-\alpha)}}^{1-p'} dxd\tau\\ \lesssim &C_{T_0}+\int_1^t \tau^{n(1-\alpha)-q-p'/p+p'(1-\alpha)}d\tau \\ \lesssim & C_{T_0} t^{n(1-\alpha)-q-p'/p+p'(1-\alpha)+1}, \end{align*} where we have used $A(\tau)+R=(\tau^{1-\alpha}-1)/(1-\alpha)+R$, with $R\le 1/(2(1-\alpha))$. Note that if $p=p_c'$, then $q$ satisfies by \eqref{q-eq} \begin{equation} n(1-\alpha)-q-\frac{p'}p+p'(1-\alpha)=p'. \label{cdnqp} \end{equation} Thus, we obtain \begin{equation} I\lesssim G'(t)^{1/p}t^{1+1/p'} \qquad \mbox{for }t\ge T_0. \label{esI} \end{equation} The second integral on the left-hand side of \eqref{F1} can be estimated as before by \begin{align*} II&\equiv -\mu\int_1^t (t-\tau) \tau^{\mu-1}\int u\phi_q(\tau,x)dxd\tau \\ \lesssim &\lp{\int_1^t \tau^{1+\mu}\int |\nabla_xu|^p\phi_q(\tau,x)dxd\tau}^{1/p} \\ & \cdot \lp{\int_1^t (t-\tau)^{p'} \tau^{1+\mu-2p'}\|\phi_q(\tau)\|_{L^\infty(|x|\le A(\tau)+R)}^{p'}(A(\tau)+R)^{p'}\int_{|x|\le A(\tau)+R} \phi_q(\tau,x)^{1-p'}dxd\tau }^{1/p'}. \end{align*} Using Lemma \ref{lm35}, \begin{align*} II \lesssim & G'(t)^{1/p}\lp{C_{T_0}+\int_{T_0}^t (t-\tau)^{p'} \tau^{1-2p'-q+n(1-\alpha)+p'(1-\alpha)}d\tau}^{1/p'}. \end{align*} Since $q$ satisfies $n(1-\alpha)-q+1-2p'+p'(1-\alpha)=0$ by \eqref{cdnqp}, we obtain \begin{equation} II\lesssim G'(t)^{1/p}t^{1+1/p'} \qquad \mbox{for }t\ge T_0.
\label{esII} \end{equation} We finally estimate the third integral on the left-hand side of \eqref{F1}, \[ III\equiv -2\int_1^t (t-\tau) \tau^\mu\int u\partial_\tau\phi_q(\tau,x)dxd\tau. \] Set $T_2=\max\{T_0,T_1\}$ to apply Lemma \ref{lm35} (i) and (ii). Proceeding as before, we have \begin{align*} III\lesssim &\lp{\int_1^t \tau^{1+\mu}\int |\nabla_xu|^p\phi_q(\tau,x)dxd\tau}^{1/p}\\ &\quad \qquad \cdot \lp{C_{T_2}+\int_{T_2}^t (t-\tau)^{p'} \tau^{1+\mu-p'+p'(1-\alpha)}\int_{|x|\le A(\tau)+R} \phi_q\lp{\frac{\partial_\tau\phi_q}{\phi_q}}^{p'}(\tau,x)dxd\tau }^{1/p'}. \end{align*} We remark here that $q$ satisfies \eqref{q2}. By Lemma \ref{lm35}, \begin{align*} &\phi_q\lp{\frac{\partial_\tau\phi_q}{\phi_q}}^{p'} \sim \tau^{(-\mu+\alpha)/2-\alpha p'}(\tau^{1-\alpha}+|x|)^{-(n-1)/2-(p'-1)/p} (\tau^{1-\alpha}-(1-\alpha)|x|)^{-1} \quad \mbox{for }t\ge T_2. \end{align*} Hence, \begin{align*} &\tau^{1+\mu-p'+p'(1-\alpha)}\int_{|x|\le A(\tau)+R} \phi_q\lp{\frac{\partial_\tau\phi_q}{\phi_q}}^{p'}(\tau,x)dx \\ \le & \tau^{1+\mu-p'+p'(1-\alpha)+(-\mu+\alpha)/2-\alpha p'-(n-1)(1-\alpha)/2-(p'-1)(1-\alpha)/p}\int_{|x|\le A(\tau)+R} (\tau^{1-\alpha}-(1-\alpha)|x|)^{-1} dx \\ \lesssim & \tau^{-(n-1)(1-\alpha)}\int_0^{A(\tau)+R}(\tau^{1-\alpha}-(1-\alpha)r)^{-1}r^{n-1}dr \\ \lesssim& \ln\tau, \end{align*} where we note that since $(n-1)(1-\alpha)/2=q+(\mu+\alpha)/2+(1-\alpha)/p$ and \eqref{cdnqp}, \begin{align*} &1+\mu-p'+p'(1-\alpha)+\frac{-\mu+\alpha}2-\alpha p'-\frac{(n-1)(1-\alpha)}2-\frac{(p'-1)(1-\alpha)}p \\ =&1-p'+p'(1-\alpha)-\alpha p'-q-\frac{p'(1-\alpha)}p \\ =&-(n-1)(1-\alpha). \end{align*} Thus, we obtain \begin{align} III&\lesssim G'(t)^{1/p} \lp{C_{T_2}+\int_{T_2}^t (t-\tau)^{p'} \ln\tau d\tau }^{1/p'} \nonumber \\ &\lesssim G'(t)^{1/p}t^{1+1/p'}(\ln t)^{1/p'} \qquad \mbox{for }t\ge T_2.
\label{esIII} \end{align} Combining \eqref{F1}, \eqref{esRHS} and \eqref{esI}--\eqref{esIII}, we obtain the desired inequality. This completes the proof of Lemma \ref{lm36}. \qed Then the rest of the proof is the same as that of Theorem 2.3 in \cite{TW1}. \qed \subsubsection{\large Heatlike case} \begin{theo} Let $n\ge 2, \; 0\le \alpha<1, \; \mu\ge 0$ and $1<p\le p_F'(n,\alpha)=1+(1+\alpha)/\{(n+1)(1-\alpha)\}$. Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nonnegative, nontrivial and $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1\subset \{|x|\le R\}$ with $R>0$. Suppose that the problem \eqref{Prob0x} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$. Then, $T<\infty$ and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \begin{align} T_\varepsilon&\le C\varepsilon^{-\frac{p-1}{2-\{n(p-1)+p\}(1-\alpha)}} && \mbox{if }p<p_F'(n,\alpha), \label{subcls-33}\\ T_\varepsilon&\le \exp\lp{C\varepsilon^{-p(p-1)/(p+1)}} && \mbox{if } p=p_F'(n,\alpha) \mbox{ and }0\le \mu \le 1, \label{subcls-34}\\ T_\varepsilon&\le \exp\lp{C\varepsilon^{-(p-1)}} && \mbox{if }p=p_F'(n,\alpha) \mbox{ and }\mu > 1 \nonumber \end{align} for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th37} \end{theo} \noindent {\bf Remark}\quad In the critical case $p=p_F'(n,\alpha)$, if $0\le \mu \le 1$, then the estimate above is better than that for the case $\mu >1$. However, \eqref{subcls-31} and \eqref{subcls-32} are applicable for $p=p_F'(n,\alpha)$ and $0\le \mu \le 1$. These estimates are much better than $T_\varepsilon\le \exp\lp{C\varepsilon^{-p(p-1)/(p+1)}}$ above. See Figs.~\ref{Fig3}, \ref{Fig4} and \ref{Fig5} below. \noindent{\it Proof)}\hspace{5mm} Proceeding as in \cite{TW2}, we have \begin{equation} F(t)\ge F(1)=C\varepsilon>0 \qquad \mbox{for }t\ge 1.
\label{F(1)} \end{equation} By \eqref{suppu0} and \eqref{eqF3}, we have \begin{equation} \int |\nabla_x u|^p dx \ge C\varepsilon^p t^{-(p+n(p-1))(1-\alpha)} \qquad \mbox{for }t\ge 1. \label{uph} \end{equation} Let $p<p_F'(n,\alpha)$. From \eqref{Fup}, \begin{align} F(t) & \gtrsim \varepsilon^p \int_1^t \tau^{-\mu}\int_1^\tau s^{\mu-(p+n(p-1))(1-\alpha)}dsd\tau \nonumber \\ &\gtrsim \varepsilon^p \int_1^t \tau^{-\mu-(p+n(p-1))(1-\alpha)}\int_1^\tau (s-1)^\mu ds d\tau \nonumber \\ &\gtrsim \varepsilon^p \int_1^t \tau^{-\mu-(p+n(p-1))(1-\alpha)}(\tau-1)^{\mu+1} d\tau \nonumber \\ &\ge C\varepsilon^p t^{-\mu-(p+n(p-1))(1-\alpha)}(t-1)^{\mu+2} \qquad \mbox{for }t\ge 1. \label{imprFlow-h} \end{align} From \eqref{eqF4} and \eqref{imprFlow-h}, applying Lemma \ref{lm32} with $q=(p+n(p-1))(1-\alpha)$, $a=\mu+(p+n(p-1))(1-\alpha)$, $b=\mu+2$ and $A_0=C\varepsilon^p$, we obtain the desired results since \begin{align*} M &=(p-1)\lb{2-(p+n(p-1))(1-\alpha)}-(p+n(p-1))(1-\alpha)+2\\ &=p\lb{2-(p+n(p-1))(1-\alpha)}>0. \end{align*} Next, let $p=p_F'(n,\alpha)$. Since $(p+n(p-1))(1-\alpha)=2$, from \eqref{uph}, \[ \int |\nabla_x u|^p dx \ge C\varepsilon^p t^{-2} \qquad \mbox{for }t\ge 1. \] Hence, by \eqref{Fup}, \begin{align} F(t) & \gtrsim \varepsilon^p \int_1^t \tau^{-\mu}\int_1^\tau s^{\mu-2}dsd\tau \nonumber \\ &\gtrsim \varepsilon^p \int_1^t \tau^{-\mu-2}\int_1^\tau (s-1)^\mu ds d\tau \nonumber \\ &\gtrsim \varepsilon^p \int_2^t \tau^{-1} d\tau \nonumber \\ &\ge C\varepsilon^p \ln\frac t2 \qquad \mbox{for }t\ge 2. \label{imprFlow-hc} \end{align} The following lemma is proved in \cite{TW2}. \begin{lem} Let $p>1, \;b>0, \; \mu\ge 0$ and $T\ge T_1>T_0\ge 1$.
Assume that $F\in C^2([T_0,T))$ satisfies the following three conditions: \begin{align*} (i) \quad & F(t) \ge A_0\lp{\ln \frac t{T_1}}^b\qquad \mbox{for }t\ge T_1, \\ (ii) \quad &F''(t) +\frac{\mu F'(t)}{t}\ge A_1(t+R)^{-2}|F(t)|^p \quad \mbox{for }t\ge T_0, \\ (iii) \quad &F(T_0)\ge 0, \quad F'(T_0)>0, \end{align*} where $A_0, A_1$ and $R$ are positive constants. Then, $T$ has to satisfy \[ T< \begin{cases} \exp\lp{CA_0^{-(p-1)/\{b(p-1)+2\}}} & \mbox{if }\mu\le 1, \\ \exp\lp{CA_0^{-(p-1)/\{b(p-1)+1\}}} & \mbox{if }\mu > 1, \end{cases} \] where $C$ is a constant depending on $R,A_1,\mu,p$ and $b$. \label{lm38} \end{lem} By \eqref{eqF4} with $(p+n(p-1))(1-\alpha)=2$ and \eqref{imprFlow-hc}, using Lemma \ref{lm38} with $b=1$ and $A_0 = C\varepsilon^p$, we obtain the desired results. This completes the proof of Theorem \ref{th37}. \qed At the end of this subsection, we discuss the blow-up conditions and the estimates of the lifespan in the subcritical cases in Theorems \ref{th31} and \ref{th37}. We recall that $p_c'(n,\alpha,\mu)$ is the positive root of the equation $\gamma'(n,p,\alpha,\mu)=0$ given in \eqref{gm'}. Figs.~\ref{Fig3}, \ref{Fig4} and \ref{Fig5} \begin{figure} \caption{}\label{Fig3} \end{figure} \begin{figure} \caption{}\label{Fig4} \end{figure} \begin{figure} \caption{}\label{Fig5} \end{figure} show the regions of blow-up conditions in the case $\alpha=0, \; \alpha=0.3$ and $\alpha=0.7$, respectively, each for $n=3$. We recall that if $p_c'(n,\alpha,\mu)=p_F'(n,\alpha)$, then \[ \mu=\mu^\ast= (n+1)(1-\alpha)+\alpha-\frac{2(n+1)(1-\alpha)^2}{(n+1)(1-\alpha)+1+\alpha}. \] Note that if $p_c'(n,\alpha,\mu)=p_0'(n,\alpha,\mu)$, then \[ \mu=\mu_0\equiv -(n-1)(1-\alpha)+\sqrt{3\alpha^2-4\alpha+2}. \] We easily see that $\mu^\ast >1$ and $\mu_0<1$.
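The closing claim that $\mu^\ast>1$ and $\mu_0<1$ can be confirmed numerically from the two displayed formulas. The following sketch (helper names are ours) samples several $(n,\alpha)$ in the range of the theorems:

```python
import math

def mu_star(n, alpha):
    # mu* where p_c'(n, alpha, mu) = p_F'(n, alpha)
    A = (n + 1) * (1 - alpha)
    return A + alpha - 2 * A * (1 - alpha) / (A + 1 + alpha)

def mu_zero(n, alpha):
    # mu_0 where p_c'(n, alpha, mu) = p_0'(n, alpha, mu)
    return -(n - 1) * (1 - alpha) + math.sqrt(3 * alpha**2 - 4 * alpha + 2)

for n in (2, 3, 4):
    for alpha in (0.0, 0.3, 0.7):
        assert mu_star(n, alpha) > 1   # mu* > 1
        assert mu_zero(n, alpha) < 1   # mu_0 < 1
```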
We also note that $2p(p-1)/\{(1-\alpha)\gamma'(n,p,\alpha,\mu)\}=(p-1)/[2-\{n(p-1)+p\}(1-\alpha)]$ yields $p= 2(1-\alpha)/\{(n+1)(1-\alpha)-\mu+\alpha\}$, and that $2p(p-1)/\{(1-\alpha)\gamma'(n,p,\alpha,\mu)\}=(p-1)/[\{1-\mu-(n+1)(1-\alpha)\}(p-1)+1+\alpha]$ yields $p= 2(1-\alpha)/\{(n+1)(1-\alpha)+\mu+\alpha-2\}$. Among the three upper bounds \eqref{subcls-31}, \eqref{subcls-32} and \eqref{subcls-33}, if $1<p< \min\{p_0'(n,\alpha,\mu), \; 2(1-\alpha)/\{(n+1)(1-\alpha)+\mu+\alpha-2\}\}$, then \eqref{subcls-32} is the best. This is Region (O) shown in Figs.~\ref{Fig4} and \ref{Fig5}. If $\max\{2(1-\alpha)/\{(n+1)(1-\alpha)+\mu+\alpha-2\}, \; 2(1-\alpha)/\{(n+1)(1-\alpha)-\mu+\alpha\}, \; 1\}<p<p_c'(n,\alpha,\mu)$, then \eqref{subcls-31} is the best (Region (C) in Figs.~\ref{Fig3}, \ref{Fig4} and \ref{Fig5}). On the other hand, if $1<p\le 2(1-\alpha)/\{(n+1)(1-\alpha)-\mu+\alpha\}$ and $p<p_F'(n,\alpha)$, then \eqref{subcls-33} is the best (Region (F) in Figs.~\ref{Fig3}, \ref{Fig4} and \ref{Fig5}). After some calculation, we see the following facts. If $(n-3)/(n-2)\le \alpha <1$ for $n\ge 3$ and $0\le \alpha<1$ for $n=2$, Region (O) appears in $0\le \mu <1$. Moreover, if $\alpha\ge \max\{0, \; (n^2-2n-1-\sqrt{n^2-2n-1})/(n^2-2n-2)\}$, then $p_0'(n,\alpha,\mu)>p_c'(n,\alpha,\mu)>p_F'(n,\alpha)$ for $0\le \mu< \mu_0$, hence the blow-up condition $1<p\le p_0'(n,\alpha,\mu)$ is the best for $0\le \mu< \mu_0$. This is unlike the case of the equation with $|u|^p$-nonlinearity, for which the blow-up condition is related only to $p_c(n,\alpha,\mu)$ and $p_F(n,\alpha)$. \subsection{Case $\alpha\ge 1$} \begin{theo} Let $n\ge 2, \; \alpha\ge 1, \; \mu\ge 0$ and $p>1$. Assume that $u_0\in C^2({\bf R}^n)$ and $u_1\in C^1({\bf R}^n)$ are nonnegative, nontrivial and $\mbox{\rm supp }u_0, \mbox{\rm supp }u_1\subset \{|x|\le R\}$ with $R>0$. Suppose that the problem \eqref{Prob0x} has a classical solution $u\in C^2([1,T)\times{\bf R}^n)$.
Then, $T<\infty$ and there exists a constant $\varepsilon_0>0$ depending on $p,\alpha,\mu,R,u_0,u_1$ such that $T_\varepsilon$ has to satisfy \begin{align*} &T_\varepsilon^2(\ln T_\varepsilon)^{-(p+n(p-1))}\le C\varepsilon^{-(p-1)} &&\mbox{if }\alpha=1, \\ &T_\varepsilon\le C\varepsilon^{-(p-1)/2} &&\mbox{if }\alpha>1 \end{align*} for $0<\varepsilon\le \varepsilon_0$, where $C>0$ is a constant independent of $\varepsilon$. \label{th39} \end{theo} \noindent{\it Proof)}\hspace{5mm} We first prove the theorem for the case $\alpha=1$. By \eqref{suppu1} and \eqref{eqF3}, \begin{equation} F''(t)+\frac\mu t F'(t) \ge \frac C{(\ln t)^{p+n(p-1)}}|F(t)|^p. \label{eqF5} \end{equation} Proceeding as in \cite{TW3}, we have \eqref{F(1)}, i.e., $F(t)\ge F(1)=C\varepsilon>0$ for $t\ge 1$. By \eqref{suppu1} and \eqref{eqF3}, \begin{equation} \int |\nabla_x u|^p dx \ge C\varepsilon^p (\ln t)^{-(p+n(p-1))} \qquad \mbox{for }t\ge 1. \label{uph-2} \end{equation} Hence, from \eqref{Fup}, \begin{align} F(t)&\ge C\varepsilon^p \int_1^t (\ln \tau)^{-(p+n(p-1))}\tau^{-\mu}(\tau-1)^{\mu+1}d\tau \nonumber \\ &\ge C\varepsilon^p (\ln t)^{-(p+n(p-1))}t^{-\mu}\int_1^t (\tau-1)^{\mu+1}d\tau \nonumber \\ &\ge C\varepsilon^p (\ln t)^{-(p+n(p-1))}t^{-\mu}(t-1)^{\mu+2} \qquad \mbox{for }t\ge 1. \label{Fup1} \end{align} Here we use another lemma of Kato type, obtained by combining Lemmas 2.3 and 3.3 in \cite{TW3}. \begin{lem} Let $p>1, \;a\ge 0, \;b\ge 0, \; c>0, \;q\ge 0, \; \mu\ge 0$ and \[ M\equiv (p-1)(c-a)+2>0. \] Let $T\ge T_1>T_0\ge 1$. Assume that $F\in C^2([T_0,T))$ satisfies the following three conditions: \begin{align*} (i) \quad & F(t) \ge A_0t^{-a}(\ln t)^{-b}(t-T_1)^c \qquad \mbox{for }t\ge T_1, \\ (ii) \quad &F''(t) +\frac{\mu}{t}F'(t)\ge A_1(\ln t)^{-q}|F(t)|^p \quad \mbox{for }t\ge T_0, \\ (iii) \quad &F(T_0)> 0, \quad F'(T_0)>0, \end{align*} where $A_0, A_1$ and $R$ are positive constants.
Then, $T$ has to satisfy \[ T^{M/(p-1)}(\ln T)^{-b-q/(p-1)}<CA_0^{-1}, \] where $C$ is a constant depending on $R,A_1,\mu,p,q,a,b$ and $c$. \label{lm310} \end{lem} By \eqref{eqF5} and \eqref{Fup1}, applying Lemma \ref{lm310} with $a=\mu$, $b=q=p+n(p-1)$, $c=\mu+2$ and $A_0=C\varepsilon^p$, we obtain the desired result for $\alpha=1$ since \[ M=2(p-1)+2=2p>0. \] It remains to prove the theorem for $\alpha>1$. By \eqref{suppu1} and \eqref{eqF3}, \begin{align} F''(t)+\frac\mu t F'(t) \ge C|F(t)|^p. \label{3Fp} \end{align} As above, we also have \[ F(t)\ge C\varepsilon^p t^{-\mu}(t-1)^{\mu+2} \qquad \mbox{for }t\ge 1. \] Using Lemma \ref{lm310} again, we obtain the desired results. This completes the proof of Theorem \ref{th39}. \qed \section{Wave Equations in FLRW} \setcounter{equation}{0} We now apply Theorems \ref{th21}, \ref{th23}, \ref{th31}, \ref{th34}, \ref{th37} and \ref{th39} to the original equation \eqref{ore}, which is equivalent to \eqref{Prob0} and \eqref{Prob0x} with $\alpha=2/(n(1+w))$ and $\mu=2/(1+w)$. We treat the case $-1<w\le 1$ and $n\ge 2$ so that $\alpha\ge 1/n$ and $\mu\ge 1$ in \eqref{Prob0} and \eqref{Prob0x}. Observe that the cases $-1<w\le 2/n-1$ and $2/n-1<w\le 1$ correspond to accelerating and decelerating expanding universes, respectively. Consider the equation with the time derivative nonlinear term $|u_t|^p$. Denote $p_G'(n,2/(n(1+w)),2/(1+w))$ by $p_G'(n,w)$. Fig.~\ref{Fig6} below shows the range of blow-up conditions in terms of $w$ and $p$ in the case $n=3$. For $2/n-1<w\le 1$ and $n\ge 2$, applying Theorem \ref{th21} to \eqref{ore} for $|u_t|^p$, we obtain the following upper bounds of the lifespan: \[ \begin{cases} T_\varepsilon\le C\varepsilon^{\frac{-(p-1)}{1-\{n-1+4/(n(1+w))\}(p-1)/2}} & \mbox{if } 1<p<p_G'(n,w), \\ T_\varepsilon\le \exp(C\varepsilon^{-(p-1)}) & \mbox{if } p=p_G'(n,w), \end{cases} \] where $C>0$ is a constant independent of $\varepsilon$.
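The substitutions $\alpha=2/(n(1+w))$ and $\mu=2/(1+w)$ can be sanity-checked numerically: in particular $\mu=n\alpha$, $\mu\ge 1$ for $-1<w\le 1$, and the accelerating range $w\le 2/n-1$ corresponds exactly to $\alpha\ge 1$, the case covered by Theorem \ref{th39}. A sketch (helper names are ours):

```python
def alpha_of(n, w):
    # alpha = 2/(n(1+w)) from the FLRW reduction
    return 2 / (n * (1 + w))

def mu_of(w):
    # mu = 2/(1+w)
    return 2 / (1 + w)

for n in (2, 3, 4):
    for w in (-0.5, 2 / n - 1, 0.2, 1.0):
        a, m = alpha_of(n, w), mu_of(w)
        assert abs(m - n * a) < 1e-12          # mu = n * alpha
        assert m >= 1                          # -1 < w <= 1 gives mu >= 1
        # accelerating expansion (w <= 2/n - 1) corresponds to alpha >= 1
        assert (w <= 2 / n - 1) == (a >= 1 - 1e-12)
```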
We note that the estimate \eqref{subcls-22} in Theorem \ref{th21} does not apply to \eqref{ore} since $\mu\ge 1$ in our case. See Region (G) in Fig.~\ref{Fig6}. \begin{figure} \caption{}\label{Fig6} \end{figure} For $-1<w\le 2/n-1$ and $n\ge 2$, we obtain from Theorem \ref{th23} \[ \begin{cases} T_\varepsilon^{1-2(p-1)/(1+w)}(\ln T_\varepsilon)^{-n(p-1)}\le C\varepsilon^{-(p-1)} & \mbox{if } 1<p<1+\frac 1n \mbox{ and }w=\frac 2n-1, \\ T_\varepsilon\le C\varepsilon^{-(p-1)/\{1-2(p-1)/(1+w)\}} & \mbox{if } 1<p<\frac{3+w}2 \mbox{ and }-1<w < \frac 2n-1. \end{cases} \] See Region (A) in Fig.~\ref{Fig6}. From these results, we see that the blow-up range of $p$ in the flat FLRW spacetime is smaller than that in the Minkowski spacetime because $p_G'(n,w)<p_G(n)=1+2/(n-1)$. Moreover, in the subcritical case $p<p_G'(n,w)$, the lifespan of the blow-up solutions in the FLRW spacetime is longer than that in the Minkowski spacetime since $\varepsilon^{-(p-1)/\{1-(n-1)(p-1)/2\}}<\varepsilon^{-(p-1)/\{1-(n-1+4/(n(1+w)))(p-1)/2\}}$ for sufficiently small $\varepsilon$. Although the critical value has not been established, we can say at present that global solutions exist more easily in the FLRW spacetime than in the Minkowski spacetime. We next consider the equation with the space derivative nonlinear term $|\nabla_x u|^p$. We define here $\gamma_0'(n,p,w)$ corresponding to $\gamma'(n,p,\alpha,\mu)$ in \eqref{gm'} by \[ \gamma_0'(n,p,w)=\lp{1-\frac 2{n(1+w)}}\gamma'\lp{n,p,\frac 2{n(1+w)},\frac 2{1+w}}. \] Then, we obtain \[ \gamma_0'(n,p,w)=-\lp{n+1-\frac 4{n(1+w)}}p^2+\lp{n+1+\frac 4{n(1+w)}}p+2-\frac 4{n(1+w)}. \] Let $p_c'(n,w)$ be the positive root of the equation $\gamma_0'(n,p,w)=0$. We also denote $p_F'(n,2/(n(1+w)))$ by $p_F'(n,w)$.
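Since $\gamma_0'(n,p,w)$ is an explicit quadratic in $p$, the critical exponent $p_c'(n,w)$ can be computed directly from the quadratic formula. The sketch below (ours) does so in the decelerated range $w>2/n-1$, where the leading coefficient is negative, and verifies that the positive root exceeds $1$:

```python
import math

def gamma0(n, p, w):
    # displayed quadratic: gamma_0'(n, p, w)
    b = 4 / (n * (1 + w))
    return -(n + 1 - b) * p**2 + (n + 1 + b) * p + 2 - b

def p_crit(n, w):
    # positive root p_c'(n, w) of gamma_0'(n, p, w) = 0 (name is ours)
    b = 4 / (n * (1 + w))
    a2, B, C = n + 1 - b, n + 1 + b, 2 - b
    return (B + math.sqrt(B * B + 4 * a2 * C)) / (2 * a2)

for n in (2, 3, 4):
    for w in (0.2, 0.5, 1.0):   # decelerated expansion: w > 2/n - 1
        p = p_crit(n, w)
        assert p > 1
        assert abs(gamma0(n, p, w)) < 1e-9
```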
For $2/n-1<w\le 1$ and $n\ge 2$, i.e., in a decelerated expanding universe, applying Theorems \ref{th31}, \ref{th34} and \ref{th37} to \eqref{ore} for $|\nabla_x u|^p$, we obtain the following upper bounds of the lifespan: \begin{align} &T_\varepsilon\le C\varepsilon^{\frac{-2p(p-1)}{\gamma_0'(n,p,w)}} && \mbox{if } 1<p<p_c'(n,w), \label{ls-cw} \\ &T_\varepsilon\le \exp(C\varepsilon^{-p(p-1)}) && \mbox{if } p=p_c'(n,w)>p_F'(n,w), \nonumber\\ &T_\varepsilon\le C\varepsilon^{\frac{-(p-1)}{2-\{(n+1)p-n\}\{1-2/(n(1+w))\}}} && \mbox{if } 1<p<p_F'(n,w), \label{ls-Fw} \\ &T_\varepsilon\le \exp(C\varepsilon^{-(p-1)}) && \mbox{if } p=p_F'(n,w). \nonumber \end{align} We note that the estimates \eqref{subcls-32} in Theorem \ref{th31} and \eqref{subcls-34} in Theorem \ref{th37} do not apply to \eqref{ore} since $\mu\ge 1$ in our case, and also that if $n=2$, then $p_F'(2,w)\ge 2$ and $\alpha=2/(n(1+w))>2/7$ in Theorem \ref{th34} since $w\le 1$. For $-1<w\le 2/n-1$ and $n\ge 2$, i.e., in an accelerated expanding universe, we obtain from Theorem \ref{th39} \begin{align} &T_\varepsilon^2(\ln T_\varepsilon)^{-(p+n(p-1))}\le C\varepsilon^{-(p-1)} && \mbox{if } p>1\mbox{ and }w=\frac 2n-1, \nonumber\\ &T_\varepsilon\le C\varepsilon^{-(p-1)/2} && \mbox{if } p>1\mbox{ and }-1<w < \frac 2n-1. \label{ls-aw} \end{align} We see that blow-up in finite time can occur for all $p>1$. This is in contrast to the case of decelerated expansion above. Fig.~\ref{Fig7} \begin{figure} \caption{}\label{Fig7} \end{figure} shows the range of blow-up conditions in terms of $w$ and $p$ in the case $n=3$. Note that if $p_F'(n,w)=p_c'(n,w)$, then $w$ is the larger root $w^\ast$ of the equation \[ n^3(n+1)w^2+2n\{n^2(n+1)-(3n+4)(n-1)\}w+n^3(n+1)-2n(3n+4)(n-1)+8(n^2-n-1)=0. \] Region (A) is for the case of the accelerated expanding universe, where the lifespan of blow-up solutions is dominated by \eqref{ls-aw}. In contrast, Regions (F) and (C) correspond to the decelerated expanding case.
In Region (F) the estimate \eqref{ls-Fw} is better than \eqref{ls-cw}, while the relation is reversed in Region (C). Finally, let us compare the results for the term $|u_t|^p$ with those for $|\nabla_xu|^p$, especially in the decelerated expanding universe, namely Regions (G), (F) and (C). Which nonlinearity has the larger blow-up range depends on the value of $w$ for each $n$. If $n=2$, then $\max\{p_F'(2,w),p_c'(2,w)\}>p_G'(2,w)$ for $0=2/n-1<w\le 1$. As the dimension $n$ increases, however, the $w$-interval on which $p_G'(n,w)>\max\{p_F'(n,w),p_c'(n,w)\}$ becomes larger. We will treat the remaining case $w=-1$ in future papers. \input{Paper-TW-Blowup-NLW-derivative-Glassey-conjecture-FLRW.bbl} \end{document}
\begin{document} \title[Approximation of \boldmath $x^n$] {Rational approximation of \boldmath $x^n$} \author[Nakatsukasa]{Yuji Nakatsukasa} \address{Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK} \email{[email protected]} \author[Trefethen]{Lloyd N. Trefethen} \address{Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK} \email{[email protected]} \subjclass[2010]{41A20} \commby{} \begin{abstract} Let $E_{kk}^{(n)}$ denote the minimax (i.e., best supremum norm) error in approximation of $x^n$ on $[\kern .3pt 0,1]$ by rational functions of type $(k,k)$ with $k<n$. We show that in an appropriate limit $E_{kk}^{(n)} \sim 2\kern .3pt H^{k+1/2}$ independently of $n$, where $H \approx 1/9.28903$ is Halphen's constant. This is the same formula as for minimax approximation of \kern .7pt $e^x$ on $(-\infty,0\kern .3pt]$. \end{abstract} \maketitle \section{Introduction} We consider minimax approximation of $x^n$ on $[\kern .3pt 0,1]$, that is, best approximation with respect to the supremum norm $\|\cdot \|$ on $[\kern .3pt 0,1]$. Although $n$ is usually thought of as an integer, we permit it to be any nonnegative real number. If $n$ is an even integer, approximation of $x^n$ on $[-1,1]$ is equivalent to approximation of $x^{n/2}$ on $[\kern .3pt 0, 1]$, and results will be stated for both intervals. For each integer $k\ge 0$, there is a unique minimax approximant $p_k^{(n)}$ of $x^n$ among polynomials of degree at most $k$~\cite{atap}. Let $E_k^{(n)} = \|x^n - p_k^{(n)}\|$ denote the associated error, which will be nonzero whenever $k<n$. In 1976 Newman and Rivlin~\cite{newman} published theorems showing \begin{equation} E_k^{(n)} \approx \textstyle{\frac{1}{2}}\kern .5pt \hbox{erfc}(k/\sqrt{n}\kern 1.2pt), \label{newriv} \end{equation} where $\hbox{erfc}(s) = 2\kern .3pt \pi^{-1/2}\int_s^\infty \exp(-t^2)\kern .5pt dt$ is the complementary error function. (The constant $1/2$ is our own, based on numerical experiments.) 
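Formula (\ref{newriv}) is easy to evaluate directly with the complementary error function; the following sketch (ours, not from the paper) illustrates the $k=O(\sqrt n\kern 1pt)$ scaling it predicts:

```python
import math

def newman_rivlin(n, k):
    # right-hand side of the estimate: (1/2) * erfc(k / sqrt(n))
    return 0.5 * math.erfc(k / math.sqrt(n))

n = 1000
# a degree k of order sqrt(n) still leaves an error of order 0.1 ...
assert newman_rivlin(n, round(math.sqrt(n))) > 0.05
# ... while k around 6*sqrt(n) already predicts an error near 1e-16
assert newman_rivlin(n, round(6 * math.sqrt(n))) < 1e-15
```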
This formula implies that a degree $k = O(\sqrt n\kern 1pt)$ suffices for polynomial approximation of $x^n$ to high accuracy. To illustrate this effect, Figure~\ref{fig1} plots $E_k^{(n)}$ against $k^2$ for the cases $n = 250$ and $1000$, showing good agreement with (\ref{newriv}). The data in our two figures have been computed with the {\tt minimax} command in Chebfun~\cite{chebfun,minimax}. \begin{figure} \caption{}\label{fig1} \end{figure} \begin{figure} \caption{}\label{fig2} \end{figure} We have found that rational functions are far more effective at approximating $x^n$ than polynomials. To be precise, consider approximation by real rational functions of type $(k,k)$, that is, functions that can be written in the form $r(x) = p(x)/q(x)$ where $p$ and $q$ are real polynomials of degree at most $k$. Again, standard theory shows that for each nonnegative real number $n$ and each nonnegative integer $k$, there exists a unique minimax approximant $r_{kk}^{(n)}$~\cite{atap}; we denote the error by $E_{kk}^{(n)} = \|x^n - r_{kk}^{(n)}\|$. Here we will prove that, as illustrated in Figure~\ref{fig2}, the errors are closely approximated by the formula \begin{equation} E_{kk}^{(n)} \approx 2\kern .3pt H^{k+1/2}, \quad H = 1/9.2890254919208\dots , \label{model} \end{equation} which has the remarkable property of being independent of $n$. The number $H$, known as Halphen's constant, appears in the problem of approximation of $\exp(x)$ for $x\in (-\infty, 0]$, where the minimax errors are asymptotic to exactly the same expression (\ref{model}). Chapter 25 of~\cite{atap} gives a review of this famous problem of rational approximation theory, a story that among others has involved Aptekarev, Carpenter, Cody, Gonchar, Gutknecht, Magnus, Meinardus, Rakhmanov, Ruttan, Trefethen, and Varga. Halphen first identified the number now named after him in 1886~\cite{halphen}, though not in connection with approximation theory.
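Formula (\ref{model}) is a pure geometric decay in $k$. The following sketch (ours) tabulates it and checks the decay factor $1/H\approx 9.289$ per unit increase of $k$:

```python
H = 1 / 9.2890254919208  # Halphen's constant

def model_error(k):
    # right-hand side of (2): 2 * H^(k + 1/2)
    return 2 * H ** (k + 0.5)

# each increment of k shrinks the predicted error by a factor 1/H ~ 9.289
for k in range(1, 8):
    ratio = model_error(k) / model_error(k + 1)
    assert abs(ratio - 1 / H) < 1e-9

# roughly ten digits of accuracy are predicted already near k = 11
assert model_error(11) < 1e-10 < model_error(10)
```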
\section{Theorems} To prove that the errors satisfy an estimate of the form~(\ref{model}), we exploit the fact that the set of rational functions of type $(k,k)$ is invariant under M\"obius transformation. In particular, we transplant the approximation domain $[\kern .3pt 0,1]$ to $(-\infty,0\kern .3pt ]$ by the M\"obius transformation that maps $x = 0$, $1$, and $1+1/(n-1)$ to $s = -\infty$, $0$, and $1$: $$ x = \frac{n}{n-s}, \qquad s = \frac{n(x-1)}{x}. $$ The function $x^n$ transplants to \begin{equation} x^n = (n/(n-s))^n = (1-s/n)^{-n}, \end{equation} and this establishes our first lemma. \begin{lemma} \label{trans} For any real number $n>0$ and integer $k\ge 0$, the error $E_{kk}^{(n)}$ in type $(k,k)$ minimax approximation of $x^n$ on $[\kern .3pt 0,1]$ is equal to the error in type $(k,k)$ minimax approximation of\/ $(1-s/n)^{-n}$ on $(-\infty,0\kern .3pt ]$. \end{lemma} Our second lemma quantifies the fact that $(1-s/n)^{-n}\approx e^s$ for $s\in (-\infty,0\kern .3pt]$. \begin{lemma} \label{lem} For any $n\in (0,\infty)$ and $s\in (-\infty,0)$, \begin{equation} 0 < (1-s/n)^{-n} - e^s \le \frac{1}{e\kern .3pt n}. \end{equation} \end{lemma} \begin{proof} Given $n$, define $g(s) = (1-s/n)^{-n}- e^s$, with $g(-\infty) = g(0) = 0$. From a binomial series we may verify $(1-s/n)^n < e^{-s}$ for each~$s$, and taking reciprocals establishes $(1-s/n)^{-n} > e^{s}$, i.e., $g(s) > 0$. The maximum value of $g(s)$ will be attained at a point $s=\sigma$ where the derivative \begin{displaymath} g'(s) = (1-s/n)^{-(n+1)} - e^s \end{displaymath} is zero, i.e., $(1-\sigma/n)^{-(n+1)} = e^\sigma$. At such a point we calculate \begin{displaymath} g(\sigma) = e^\sigma(1-\sigma/n)-e^\sigma = -\sigma e^\sigma\kern -1pt/n, \end{displaymath} and to complete the proof we note that $0 < -\sigma e^\sigma\le 1/e$ for $\sigma \in (-\infty,0)$. \end{proof} We can now derive our main result. 
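Before the proof, the bound of Lemma~\ref{lem} is easy to test numerically; the sketch below (ours, not part of the argument) checks $0 < (1-s/n)^{-n} - e^s \le 1/(e\kern .3pt n)$ at sample points:

```python
import math

def g(n, s):
    # g(s) = (1 - s/n)^(-n) - e^s, as in the proof of Lemma 2
    return (1 - s / n) ** (-n) - math.exp(s)

for n in (10, 100, 1000):
    bound = 1 / (math.e * n)   # claimed upper bound 1/(e*n)
    for s in (-0.01, -0.5, -1.0, -2.0, -5.0, -20.0, -100.0):
        assert 0 < g(n, s) <= bound
```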
\begin{theorem} \label{thm1} The errors in type $(k,k)$ rational minimax approximation of\/ $x^n$ on $[\kern .3pt 0,1]$ satisfy \begin{equation} \lim_{k\to\infty}\, \lim_{n\to\infty\vphantom{k}}\, E_{kk}^{(n)} \kern -3pt\left/\vrule width 0pt height 10pt depth 0pt\right.\kern -3pt 2\kern .3pt H^{k+1/2}\kern 1pt = \kern 1pt 1, \label{thm1eq} \end{equation} where $H \approx 1/9.28903$ is Halphen's constant. In this formula\/ $n$ may range over nonnegative real numbers or over nonnegative integers. \end{theorem} \begin{proof} Let $F_{kk}^{}$ denote the error in minimax type $(k,k)$ rational approximation of $e^s$ on $(-\infty,0\kern .3pt]$. Aptekarev~\cite{aptek} established the identity \begin{equation} \lim_{k\to\infty}\, F_{kk}^{} \kern -1pt\left/\vrule width 0pt height 10pt depth 0pt\right.\kern -3pt 2H^{k+1/2} \kern 1pt =\kern 1pt 1, \label{aptekeq} \end{equation} which had been conjectured earlier by Magnus~\cite{magnus}. On the other hand Lemmas~\ref{trans} and~\ref{lem} imply \begin{equation} F_{kk}^{} = \lim_{n\to\infty} E_{kk}^{(n)}. \label{EandF} \end{equation} Equation (\ref{thm1eq}) follows from (\ref{aptekeq}) and (\ref{EandF}). \end{proof} Equation (\ref{thm1eq}) says little about the errors associated with any finite value of~$n$. Numerical data such as those plotted in Figure~\ref{fig2} suggest that much sharper estimates are probably valid, with $E_{kk}^{(n)}$ coming much closer to $2\kern .3pt H^{k+1/2}$ than is shown by our arguments. As mentioned at the outset, approximation of $x^n$ on $[\kern .3pt 0,1]$ is equivalent to approximation of $x^{n/2}$ on $[\kern .3pt -1,1]$ when $n$ is an even integer. The equivalence is spelled out in the proof of the following theorem, which uses the same notation $E_{kk}^{(n)}$ for $[-1,1]$ as used previously for $[\kern .3pt 0,1]$. 
\begin{theorem} \label{thm2} The errors in type $(k,k)$ rational minimax approximation of\/ $x^n$ on $[-1,1]$ satisfy \begin{equation} \lim_{k\to\infty}\, \lim_{\vrule width 0pt height 5pt \scriptstyle{n\to\infty}\atop{\scriptstyle{n\hbox{\scriptsize\rm~even}}}}\, E_{kk}^{(n)} \kern -3pt\left/\vrule width 0pt height 10pt depth 0pt\right.\kern -3pt 2H^{\lfloor k/2\rfloor +1/2} \kern 1pt = \kern 1pt 1. \label{thm2eq} \end{equation} In this formula\/ $n$ ranges over nonnegative even integers. \end{theorem} \begin{proof} Let $n$ be a nonnegative even integer. If $k$ is even, then by the change of variables $s = x^2$ (see for example p.~213 of~\cite{atap}), we find that type $(k,k)$ approximation of $x^n$ on $[-1,1]$ is equivalent to type $(k/2,k/2)$ approximation of $x^{n/2}$ on $[\kern .3pt 0,1]$. If $k$ is odd, then the uniqueness of best approximants implies that the type $(k,k)$ approximant of $x^n$ on $[-1,1]$ must still be even, hence the same as the type $(k-1,k-1)$ approximant (see e.g.\ Exercise~24.1 of~\cite{atap}). These observations justify the floor function $\lfloor k/2\rfloor$ of (\ref{thm2eq}). \end{proof} If $n$ is odd, the errors are approximately but not exactly the same. \section{Discussion} In~\cite{atap} it is emphasized that rational approximants tend to greatly outperform polynomials in cases where (i) the function to be approximated has a nearby singularity or (ii) the domain of approximation is unbounded. Approximation of $x^n$ on $[\kern .3pt 0,1]$ is essentially a problem of type (i), with nearly singular behavior at $x\approx 1$ (not technically singular, of course, but one could speak of a ``pseudo-singularity''). It is interesting that the proof of Theorem~\ref{thm1} proceeds by conversion to an equivalent problem of type (ii). The $k = O(\sqrt n\kern 1pt )$ effect for polynomial approximation of $x^n$ has practical consequences. 
For example, Chebfun's method of numerical computation with functions depends on representing them adaptively to approximately machine precision (${\approx} \kern 1pt 16$ digits) by Chebyshev expansions. Table~\ref{table1} lists the degrees $k$ of the Chebfun polynomials representing various powers $x^n$ on $[\kern .3pt 0,1]$. We see that for small $n$, the system requires $k=n$, but for larger values, each quadrupling of $n$ brings just approximately a doubling of $k$. \begin{table} \caption{\label{table1}Chebfun~\cite{chebfun} constructs a polynomial of an adaptively determined degree $k$ to represent a function on a given interval to about 16 digits of accuracy. For $x^n$ on $[\kern .3pt 0,1]$, $k=n$ is needed for smaller values of $n$, whereas for larger values, $k$ grows at a rate $O(\sqrt n\kern 1pt)$ consistent with (\ref{newriv}).} \begin{center} \begin{tabular}{cccccccc} $n~~$ & 1 & 4 & 16 & 64 & 256 & 1024 & 4096 \\[3pt] $k~~$ & 1 & 4 & 16 & 44 & 91 & 178 & 349 \\ [5pt] \end{tabular} \end{center} \end{table} Another aspect of the $k = O(\sqrt n\kern 1pt )$ effect is discussed by Cornelius Lanczos in a fascinating video recording from 1972 available online~\cite{lanczos} (beginning at about time 10:00); for a written discussion see chapter~5 of his book~\cite{lbook}. Lanczos speaks of the monomials $\{x^n\}$ as a ``tremendously nonorthogonal system,'' a fact quantified by the M\"untz--Sz\'asz theorem~\cite{rudin}, and observes that it was this effect that led him to invent what are now called Chebyshev spectral methods for the numerical solution of differential equations~\cite{boyd,lbook,atap}. Numerical analysts would rarely cite the M\"untz--Sz\'asz theorem, but they are well aware that monomials provide exponentially ill-conditioned bases on real intervals, making them nearly useless for numerical computing, whereas suitably scaled Chebyshev polynomials are excellent for computation because they give well-conditioned bases. 
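The near-degeneracy of the monomial basis can be quantified in elementary terms: on $[\kern .3pt 0,1]$ the sup-norm distance between $x^n$ and $x^{n+1}$ is $\max_{[0,1]} x^n(1-x) = (n/(n+1))^n/(n+1) \approx 1/(e(n+1))$, so consecutive monomials become nearly indistinguishable. A small illustrative sketch (ours, using a crude grid maximum):

```python
import math

def sup_dist(n, m=100001):
    # grid approximation of max over [0,1] of x^n - x^(n+1) = x^n (1 - x)
    return max((i / (m - 1)) ** n * (1 - i / (m - 1)) for i in range(m))

for n in (10, 100, 1000):
    d = sup_dist(n)
    # the exact maximum, attained at x = n/(n+1)
    exact = (n / (n + 1)) ** n / (n + 1)
    assert abs(d - exact) < 1e-6
    # consecutive monomials differ by less than 1/(e*n) in the sup norm
    assert d < 1 / (math.e * n)
```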
These observations pertain to polynomial approximation of $x^n$, whereas the new results of this paper concern the much greater power of rational approximations. There is some previous literature on rational approximation of $x^n$, and an early survey can be found in~\cite{reddy}. The most developed part of this problem has been the case in which $n$ is a fixed positive number that is not an integer and $k\to\infty$. Here one obtains root-exponential convergence with respect to $k\kern .4pt$; see~\cite{stahl} for both sharp results and a survey of earlier work. The more basic phenomenon considered in the present paper, exponential convergence as $k\to\infty$ for large $n$, seems not to have been noted previously, nor, in particular, the connection with ${\approx}\kern 1pt 9.28903$. Perhaps there are applications where this too will have practical consequences. \end{document}
\begin{document} \let\WriteBookmarks\relax \def1{1} \def.001{.001} \shorttitle{yupi: Python library to handle trajectories} \shortauthors{A. Reyes \emph{et al.}} \title [mode = title]{yupi: Generation, Tracking and Analysis of Trajectory data in Python} \author[1]{A. Reyes}[auid=000, orcid=0000-0001-7305-4710] \credit{Design of methods for generation, analysis and visualization of trajectories} \affiliation[1]{organization={Group of Complex Systems and Statistical Physics, University of Havana}, addressline={San Lázaro esq. L, Vedado}, city={La Habana}, postcode={10400}, country={Cuba}} \author[2]{G. Viera-López}[auid=001, orcid=0000-0002-9661-5709] \ead{[email protected]} \credit{Design of methods for tracking trajectories from video sources, Conceptualization of the work} \affiliation[2]{organization={Department of Computer Science, Gran Sasso Science Institute}, addressline={Viale Francesco Crispi, 7}, city={L'Aquila}, postcode={67100}, country={Italy}} \author[1]{J.J. Morgado-Vega}[auid=000, orcid=0000-0001-6067-9172] \credit{Main software developer and maintainer} \author[1]{E. Altshuler}[auid=003, orcid=0000-0003-4192-5635] \credit{Methodology, Design of the examples} \cortext[2]{Corresponding author} \begin{abstract} The study of trajectories is often a core task in several research fields. In environmental modelling, trajectories are crucial to study fluid pollution, animal migrations, oil slick patterns or land movements. In this contribution, we address the lack of standardization and integration in current approaches to handling trajectory data. Within this scenario, challenges extend from the extraction of a trajectory from raw sensor data to the application of mathematical tools for modeling or making inferences about populations and their environments. This work introduces a generic framework that addresses the problem as a whole, i.e., a software library to handle trajectory data.
It contains a robust tracking module aimed at making data acquisition simple, artificial generation of trajectories powered by different stochastic models to aid comparison between experimental and theoretical data, a statistical kit for analyzing patterns in groups of trajectories, and other resources to speed up the pre-processing of trajectory data. It is worth emphasizing that this library makes no assumptions about the nature of trajectories (e.g., those from GPS), which facilitates its usage across different disciplines. We validate the software by reproducing key results when modelling dynamical systems related to environmental modelling applications. An example script to facilitate reproduction is presented for each case. \end{abstract} \begin{keywords} trajectory analysis \sep modelling \sep tracking \sep python \end{keywords} \ExplSyntaxOn \keys_set:nn { stm / mktitle } { nologo } \ExplSyntaxOff \maketitle \section*{Software Availability}\label{sec:availability} \noindent \textbf{Software name:} yupi \\ \textbf{Developers:} A. Reyes, G. Viera-López, J.J. Morgado\\ \textbf{First release:} 2021\\ \textbf{Program language:} Python\\ \textbf{License:} MIT\\ \textbf{Available at:} \\ \url{https://github.com/yupidevs/yupi} \\ \url{https://pypi.org/project/yupi/} \\ \textbf{Documentation:} \\ \url{https://yupi.readthedocs.io/en/latest/} \\ \textbf{Examples:} \\ \url{https://github.com/yupidevs/yupi_examples} \section[Introduction]{Introduction} \label{sec:intro} Environmental modelling, as many other fields of science, has been deeply impacted by the widespread availability of mobile tracking sensors. The subsequent increase in accessible trajectory data has led to a rising demand for trajectory analysis techniques. For example, in Community Ecology and Movement Ecology, trajectory-based research is well developed \citep{de2019trajectory,demvsar2015analysis}.
Likewise, Group-Based Trajectory Modeling (GBTM), a statistical methodology for analyzing developmental trajectories, has been used in the study of restored wetlands \citep{matthews2015group}. Moreover, trajectory analysis has impacted the integration of land use and land cover data \citep{zioti2022platform} as well as oil spill environmental models for predicting oil slick trajectory patterns \citep{balogun2021oil} and pollution transient models \citep{okamoto1987trajectory}. Furthermore, in the context of animal behavior, appropriate handling of trajectory data has allowed the characterization of behavioral patterns within a vast sample of organisms, ranging from microorganisms and cells \citep{figueroa2020coli,altshuler2013flow} to insects with a large impact on the environment, such as leaf-cutter ants \citep{hu2016entangled,tejera2016uninformed}. This overwhelming increase in trajectory-related applications motivates a closer look at the available frameworks devoted to handling trajectory data. Trajectory analysis software has typically been designed to address problems in specific research fields (e.g., molecular dynamics \citep{roe2013ptraj, kruger1991simlys}; modelling, transformation and visualization of urban trajectory data \citep{shamal2019open}; animal trajectory analysis \citep{mclean2018trajr} and human mobility analysis \citep{pappalardo2019scikit}). For handling geo-positional trajectory data, a variety of tools has been offered by different \textit{Python} libraries such as \emph{MovingPandas} \citep{graser2019movingpandas}, \emph{PyMove} \citep{sanches2019arquitetura, oliveira2019arquitetura} and \emph{Tracktable} \citep{tracktable}. More recently, \emph{Traja} \citep{shenktraja} provided a more abstract tool set for handling generic two-dimensional trajectories, although it remains focused on animal trajectory analysis. In the field of astrodynamics, high-level software has been written in \textit{Julia}.
\emph{SatelliteToolbox.jl} is perhaps the most comprehensive astrodynamics package available in \textit{Julia}, and it is provided alongside the in-development trajectory design toolkit \emph{Astrodynamics.jl} \citep{astrodynamics2013frazer}. In this regard, a programming toolkit specialized in the generation, optimization, and analysis of orbital trajectories has been published as \emph{OrbitalTrajectories.jl} \citep{padilha2021modern}. The \textit{R} language has been widely exploited as well. For an excellent review and description of \textit{R} packages for movement, broken down into three stages: pre{\textendash}processing, post{\textendash}processing and analysis, see \citep{joo2020navigating}. As a consequence of the specificity of existing frameworks, there is a wide diversity of software to address specific trajectory-related tasks, but a standard library for handling trajectories in an abstract manner is not yet available. For instance, most existing software addresses only two-dimensional trajectories, or trajectories limited to a fixed number of dimensions. Moreover, such tools typically rely on different data structures to represent a trajectory. In order to tackle these limitations, in this work we offer \emph{yupi}, a general-purpose software library for handling trajectories regardless of their nature. Our library aims to provide maximum abstraction from problem-specific details by representing data in a compact and scalable manner and automating typical tasks related to trajectory processing. At the same time, we want to encourage synergy with already available software. For this purpose, we also provide tools to convert the trajectory objects used in our library into the data structures used by other available frameworks, and vice versa.
The software is the result of the experience gathered through the research our group has systematically conducted in the past few years on the analysis and modelling of complex systems and on visual tracking techniques in laboratory experiments. We believe that the field of environmental modelling is a strong candidate to showcase our library due to the wide variety of trajectory-related problems it encompasses. The manuscript is organized as follows: In Section \ref{sec:methods}, we describe the structure of the library, review basic concepts regarding trajectories and present the way \emph{yupi} handles them. Section \ref{sec:examples} presents applications that use trajectory analysis in diverse environmental modelling scenarios. Finally, we summarize the work, emphasizing the main contributions of \emph{yupi} and highlighting its current limitations. \section{Software}\label{sec:methods} Since \emph{yupi} aims to become a standard library handling a wide spectrum of tasks related to trajectories, all the components of the library share a unified representation, the \textbf{Trajectory} object, as the standard structure to describe a path. Task-specific modules were then conceived to boost the processes of gathering, handling and analyzing trajectories. The core module, \textbf{yupi}, hosts the \textbf{Trajectory} class. It includes the resources required for arithmetic operations among trajectories and their storage on disk. The library has six basic modules operating on \textbf{Trajectory} objects (see Figure \ref{fig:A}). Artificial (i.e., simulated) trajectories with custom mathematical properties can be created with \textbf{yupi.generators}. Data can be extracted from videos using the \textbf{yupi.tracking} module. Regardless of the origin of a given trajectory, it can be altered using the \textbf{yupi.transformations} module. Tools included in \textbf{yupi.stats} allow the statistical analysis of an ensemble of trajectories.
The module \textbf{yupi.graphics} contains visualization functions for trajectories and their estimated statistical quantities. In addition to \emph{yupi}'s internal modules, we provide a complementary software package named \emph{yupiwrap}, designed exclusively to enable data conversion between \emph{yupi} and third-party software. Next, we present each module of \emph{yupi}, providing a brief description of its functionalities. \begin{figure} \caption{Visual representation of the internal structure of \emph{yupi}} \label{fig:A} \end{figure} \subsection{Core module}\label{sub:core} Empirically, a trajectory is the path that a body describes through space. More formally, it is a function $\mathbf r(t)$, where $\mathbf r$ denotes position and $t$, time. Here, $\mathbf r$ extends from the origin of an arbitrary reference frame to the moving body. Consequently, the core module contains the class \textbf{Trajectory} to represent a moving object described by some position vector, $\mathbf r(t)$, of an arbitrary number of dimensions. Since time is continuous, machines have to deal with a discretized (i.e., sampled) version of the trajectory, $\mathbf r(t)$. A sampled trajectory always requires an associated time vector, $\mathbf t=(t_1,...,t_n)^\intercal$, where each $t_i$ represents the timestamp of the $i$-th sample and $n$ is the total number of samples. For brevity, the sampled trajectory is often referred to simply as a trajectory, so we may use either term. For instance, in the typical 3-dimensional case, a trajectory can be defined by the vector $\mathbf r_i = (x_i, y_i, z_i)^\intercal$, where each component denotes a spatial coordinate. The core module defines the way to retrieve specific quantities from a trajectory, such as position components or velocity time series. It also defines operations among trajectories such as addition, scaling or rotation.
In addition, storage functionalities are provided for different importing/exporting formats. Resources from this module can be imported directly from \emph{yupi} and are summarized next. \subsubsection{Vector objects}\label{sub:vector_objs} It is very common to refer to position or velocity as a vector that changes through time. Accordingly, a \textbf{Vector} class was created to store all the time-evolving data in a trajectory. Iterating over each sample of these \textbf{Vector} time series, one can get the vector components at specific time instants. This class was implemented by wrapping the \emph{numpy} \textbf{ndarray} type. The main reason that motivated this choice, along with all the benefits of the \textbf{ndarray} class itself, was to gain expressiveness for the usual operations on a vector. For instance, getting a specific component of a vector, the differences between its elements, or even calculating its norm, can be done with a vector instance by accessing properties such as \textbf{norm} or \textbf{delta}. In addition, the properties \textbf{x}, \textbf{y} and \textbf{z} allow acquiring data from one specific axis in multidimensional vectors. Although users typically do not instantiate \textbf{Vector} objects directly, they are used throughout the library to represent all the time-evolving data one can get from a trajectory, such as position, velocity, acceleration and time itself. \subsubsection{Trajectory objects}\label{sub:trajectory_objs} A \textbf{Trajectory} object is \emph{yupi}'s essential structure. Its time-evolving data, stored as \textbf{Vector} objects, can be accessed through the attributes \textbf{t}, \textbf{r}, \textbf{v} and \textbf{a}, standing for time, position, velocity and acceleration, respectively. Trajectory data is typically stored in different manners, e.g., a single sequence of $d$-dimensional points where each point represents the position at a time instant or, alternatively, $d$ sequences of position components.
Regardless of the input manner, \emph{yupi} offers ways to create \textbf{Trajectory} objects from raw data. By default, trajectories are assumed to be uniformly sampled every 1 unit of time. However, custom time information can also be supplied. Position and time are then used to automatically estimate velocity and acceleration according to one of the supported numerical methods: linear finite differences or the method proposed by \citep{fornberg1988generation}. \textbf{Trajectory} objects can be shifted or scaled by performing arithmetic operations on either all position components or a subset of them. For the specific case of 2- or 3-dimensional trajectories, convenient rotation methods, named \textbf{rotate\_2d} and \textbf{rotate\_3d}, were implemented to ease visualization tasks. Furthermore, operations among trajectories are also defined. Trajectories with the same dimension and time vector can be added, subtracted or multiplied together. These operations are defined point-wise and can be used via the conventional operators for addition, subtraction and multiplication: \textbf{+}, \textbf{-} and \textbf{*}. \subsection{Generators module}\label{sub:generating} The usage of randomly generated data is common in different research approaches related to trajectory analysis \citep{tuckerman2010statistical}. In this section we tackle three classical models that usually explain (or serve as a framework to explain) a wide range of phenomena connected to Biology, Engineering and Physics: Random Walks \citep{pearson1905problem}, the Langevin model \citep{langevin1908theory} and the Diffusing-Diffusivity model \citep{chechkin2017brownian}. In \emph{yupi}, the aforementioned models are implemented by inheriting from an abstract \textbf{Generator} class.
Any \textbf{Generator} object must be instantiated with four parameters that characterize numerical properties of the generated trajectories: \textbf{T} (total time), \textbf{dim} (trajectory dimension), \textbf{N} (number of trajectories to be generated) and \textbf{dt} (time step). Additionally, a \textbf{seed} parameter can be specified to initialize a random number generator (\textbf{rng}), which is used locally to reproduce the same results without changing the global seed. Next, we briefly describe the foundations of each model implemented in \emph{yupi} and explain how to use them to generate ensembles of trajectories like the ones sketched in Figure \ref{fig:B}a. \subsubsection{Random walk}\label{subsub:randomwalk} A Random Walk is a random process that, in a $d$-dimensional space ($d \!\in\! \mathbb{N}^+$), describes a path consisting of a succession of independent random displacements. Since the extension to more than one dimension is straightforward, we shall formulate the process in its simplest way: Let $Z_1, ..., Z_i, ..., Z_{n}$ be independent and identically distributed random variables (r.v.'s) with $P(Z_i=-1)=q$, $P(Z_i=0)=w$ and $P(Z_i=1)=p$, and let also $X_i = \sum_{j=1}^{i} Z_j$. Interpreting $Z_i$ as the displacement at the $i$-th time instant, the collection of random positions $\{X_i, 1 \le i \le n \}$ defines the well-known random walk in one dimension. It is possible to extend this definition by allowing the walker to perform displacements of unequal lengths. Hence, if we denote by $L_i$ the variable that accounts for the length of the step at the $i$-th time instant, the position will be given by: \begin{equation} X_i = \sum_{j=1}^{i} L_j Z_j, \qquad i = 1,2,...,n \label{eq:def-random-walk} \end{equation} A process governed by Equation~\ref{eq:def-random-walk} in each axis is a $d$-dimensional Random Walk.
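In one dimension, Equation~\ref{eq:def-random-walk} can be sampled in a couple of \emph{numpy} lines (an illustrative sketch, independent of \emph{yupi}'s own generator):

```python
import numpy as np

def lazy_random_walk(n_steps, q, w, p, step_length=1.0, seed=0):
    """Positions X_i = sum_{j<=i} L_j Z_j with P(Z=-1)=q, P(Z=0)=w, P(Z=1)=p."""
    rng = np.random.default_rng(seed)
    z = rng.choice([-1, 0, 1], size=n_steps, p=[q, w, p])
    return np.cumsum(step_length * z)

x = lazy_random_walk(1000, q=0.5, w=0.1, p=0.4)
```

Stacking $d$ independent copies of this sampler, one per axis, gives the $d$-dimensional walk.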
Note that our definition is slightly more general than the classical one, i.e., it allows the walker to remain at rest in a node of a network that is not necessarily evenly spaced. These generalized versions are often known as Lazy Random Walks \citep{lawler2010random} or Random Walks with multiple step lengths \citep{boczkowski2018random}. We define a trajectory by having a position vector whose components are described by Equation~\ref{eq:def-random-walk}. For instance, in the 3-dimensional case, $\mathbf r_i = (X_i^{(1)}, X_i^{(2)}, X_i^{(3)})$. In \emph{yupi}, this model is accessible through the \textbf{RandomWalkGenerator} class. To use it, the probabilities $q$, $w$ and $p$ need to be defined for each dimension: \begin{verbatim} prob = [[.5, .1, .4], # x-axis [.5, 0, .5]] # y-axis \end{verbatim} Then, trajectories are generated as: \begin{verbatim} from yupi.generators import RandomWalkGenerator rw = RandomWalkGenerator(T, dim, N, dt, prob) trajs = rw.generate() \end{verbatim} In this case, the variable \textbf{trajs} contains a list of \textbf{N} generated \textbf{Trajectory} objects. Note that the first four parameters passed to the \textbf{RandomWalkGenerator} are required by any kind of generator in \emph{yupi}, as explained at the beginning of this section. \subsubsection{Langevin model}\label{subsub:langevin} An Ornstein-Uhlenbeck process \citep{uhlenbeck1930theory}, well known for its multiple applications describing processes from different fields \citep{lax2006random}, is defined in the absence of drift by the linear stochastic differential equation: \begin{equation} dv = -\gamma \, v \,dt + \sigma \, dW \label{eq:oh-process} \end{equation} \noindent where $\gamma$ and $\sigma$ are positive constants and $W$ is a Wiener process.
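For illustration, a scalar version of Equation~\ref{eq:oh-process} can be integrated with the Euler--Maruyama scheme (a minimal sketch of the numerics only; \emph{yupi}'s \textbf{LangevinGenerator} is the library's own implementation):

```python
import numpy as np

def ou_euler_maruyama(gamma, sigma, v0, T, dt, seed=0):
    """Integrate dv = -gamma*v*dt + sigma*dW on [0, T] with step dt."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    v = np.empty(n + 1)
    v[0] = v0
    for i in range(n):
        # Euler-Maruyama step: dW ~ Normal(0, sqrt(dt))
        v[i + 1] = v[i] - gamma * v[i] * dt + sigma * rng.normal(0.0, np.sqrt(dt))
    return v

v = ou_euler_maruyama(gamma=1.0, sigma=0.5, v0=3.0, T=20.0, dt=0.01)
```

In the noise-free limit ($\sigma = 0$) the scheme reduces to exponential relaxation of $v$ towards zero, which gives a quick sanity check.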
Equation~\ref{eq:oh-process} is also written as a Langevin equation \citep{langevin1908theory}, which in the multi-dimensional case takes the form: \begin{equation} \frac{d}{dt}\mathbf v(t) = -\gamma \mathbf v(t) + \sigma \boldsymbol \xi(t) \label{eq:langevin-eq} \end{equation} where $\boldsymbol \xi(t)$ is a white noise; $\sigma$, the noise scale parameter; and $\gamma^{-1}$, a characteristic relaxation time of the process. In trajectory analysis, $\mathbf v(t)$ denotes velocity, so the position vector is given by: \begin{equation} \mathbf{r}(t) = \int_{0}^{t} \mathbf{v}(t') \,dt' \label{eq:r-from-v} \end{equation} Equations~\ref{eq:langevin-eq} and \ref{eq:r-from-v} can be solved numerically if the initial conditions $\mathbf v(0) = \mathbf v_0$ and $\mathbf r(0) = \mathbf r_0$ are known. \textbf{LangevinGenerator} is the class offered by \emph{yupi} to generate trajectories that follow this model. By conveniently setting the parameters described above (i.e., $\gamma$, $\sigma$, $\mathbf v_0$ and $\mathbf r_0$), several real-life scenarios can be modeled. In Section \ref{sub:ejlysozyme}, a Langevin model is used to simulate the motion of a Lysozyme molecule in an aqueous medium. \subsubsection{Diffusing-Diffusivity model}\label{subsub:diffdiff} Slow environmental relaxation has been found in soft matter for colloidal particles diffusing in an environment of biopolymer filaments and phospholipid tube assemblies \citep{wang2012brownian}. A non-Gaussian distribution of increments was observed even when the diffusive dynamics exhibit linear growth of the mean square displacement. A model framework of a diffusion process with fluctuating diffusivity that reproduces this interesting finding has been presented as the Diffusing-Diffusivity model.
Namely: \begin{subequations} \begin{align} \frac{d}{dt} \mathbf r(t) &= \sqrt{2D(t)} \,\boldsymbol\xi(t) \\ D(t) &= \mathbf Y^2(t) \\ \frac{d}{dt} \mathbf Y(t) &= -\frac{1}{\tau} \mathbf Y(t) + \sigma \boldsymbol \eta(t) \end{align} \label{eq:diffdiff} \end{subequations} where $\boldsymbol\xi(t)$ and $\boldsymbol\eta(t)$ are Gaussian white noises and $D(t)$, the diffusion coefficient, is a random function of time expressed as the square of the auxiliary variable, $\mathbf Y(t)$. In other words, the coupled set of stochastic differential equations (\ref{eq:diffdiff}) predicts a Brownian but non-Gaussian diffusion, where the position, $\mathbf r(t)$, is described by an over-damped Langevin equation with the diffusion coefficient being the square of an Ornstein-Uhlenbeck process. The model has been discussed and solved analytically in \citep{chechkin2017brownian,thapa2018bayesian}. The class devoted to generating trajectories modeled by Equations \ref{eq:diffdiff} is called \textbf{DiffDiffGenerator}. Implementation details can be found in the software documentation. \begin{figure*} \caption{Statistical analysis of three generated ensembles of $N=1000$ two-dimensional trajectories. Rows listed from top to bottom correspond to ensembles generated using the Random Walk, Langevin and Diffusing-Diffusivity models, respectively. (a) Spatial projection of five trajectories. (b) Estimated velocity autocorrelation function and (c) mean squared displacement as a function of the lag time. (d) Kurtosis as a function of time. (e) Power spectral density. (f) Speed histograms. (g) Turning angle probability densities.} \label{fig:B} \end{figure*} \subsection{Stats module}\label{sub:soft_stats} The library provides common techniques based on the mathematical methods that researchers frequently use when analyzing trajectories. This section presents an overview of how to compute observables to describe them.
Velocity autocorrelation function, kurtosis and mean square displacement are some typical examples. Most of these observables were originally defined using ensemble averages (i.e., expected values at a given time instant). However, under the assumption of ergodicity\footnote{Ergodicity is the property of a process in which long-time averages of sample functions of the process are equal to the corresponding statistical or ensemble averages.}, it is possible to compute the observables using time averages (i.e., by averaging the quantities over time windows instead). In \emph{yupi}, the observables can be computed using both approaches. In the following subsections, we denote by $E[\cdot]$ the ensemble average (i.e., the expected value) and by $\langle\cdot\rangle$ the time average. \subsubsection{Velocity autocorrelation function}\label{subsub:VACF} The velocity autocorrelation function (VACF) is defined as the ensemble average of the product of velocity vectors at any two instants of time. Under stationary conditions, the definition is typically relaxed to the one of Equation~\ref{eq:def-vacf-avg-ensemble}, in which one of the vectors is the initial velocity. On the other hand, Equation~\ref{eq:def-vacf-time-avg-time} presents the way the VACF is computed by averaging over time, under the assumption that both averages are equivalent (i.e., the ergodic assumption). \begin{subequations} \begin{align} C_v(t) &= E[\mathbf{v}(0) \cdot \mathbf{v}(t)] \label{eq:def-vacf-avg-ensemble} \\ \begin{split} C_v(\tau) &= \langle \mathbf{v}(t) \cdot \mathbf{v}(t+\tau) \rangle \\ &= \frac{1}{T-\tau} \int_0^{T-\tau} \mathbf{v}(t) \cdot \mathbf v(t+\tau) dt \end{split} \label{eq:def-vacf-time-avg-time} \end{align} \end{subequations} \noindent Here, $\tau$ is the lag time, a time window swept along the velocity samples, and $T$ is the elapsed time, where $\tau \ll T$.
It should be noted that in Equation~\ref{eq:def-vacf-time-avg-time} the VACF is defined so that it can be computed on a single trajectory, unlike Equation \ref{eq:def-vacf-avg-ensemble}, which requires an ensemble. This also applies to further statistical observables that can be computed using both kinds of averaging procedures. The VACF quantifies the way in which the memory in the velocity decays as a function of time \citep{balakrishnan2008elements}. Moreover, it can be successfully used to analyze the nature of an anomalous diffusion process \citep{metzler2014anomalous}. From top to bottom, Figure \ref{fig:B}b shows VACF plots for the Random Walk, Langevin and Diffusing-Diffusivity generated ensembles. The VACF scatters as an almost flat curve around zero in the top and bottom cases, which indicates the memoryless nature of the Random Walk and Diffusing-Diffusivity processes. On the other hand, the center row shows an exponential decay, meaning that a Langevin model predicts some characteristic time that dominates the relaxation to equilibrium. With \emph{yupi}, one can estimate the VACF of a collection of trajectories (e.g., the ensemble generated in Section \ref{subsub:randomwalk}) as: \begin{verbatim} from yupi.stats import vacf trajs_vacf, trajs_vacf_std = vacf( trajs, time_avg=True, lag=25) \end{verbatim} \noindent where \textbf{vacf} is the name of the function that computes the autocorrelation, the parameter \textbf{trajs} represents the array of trajectories and \textbf{time\_avg} indicates the method used to compute the observable, i.e., averaging over time with a lag time defined by \textbf{lag}. The computation of the remaining observables can be coded in a similar way. Next, for the sake of brevity, we address only their theoretical foundations. More examples that make use of all the observables can be found in the software documentation.
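The time average of Equation~\ref{eq:def-vacf-time-avg-time} has a direct discrete analogue (an illustrative \emph{numpy} sketch of the definition, not \emph{yupi}'s internal code):

```python
import numpy as np

def vacf_time_avg(v, lag):
    """C(tau_k) = < v(t) . v(t + tau_k) > for k = 0..lag-1, v of shape (n, dim)."""
    n = len(v)
    return np.array([np.mean(np.sum(v[:n - k] * v[k:], axis=1))
                     for k in range(lag)])

# sanity check: a constant velocity gives a flat VACF equal to |v|^2
v_const = np.tile([3.0, 4.0], (100, 1))
c = vacf_time_avg(v_const, lag=10)   # every entry equals 25.0
```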
\subsubsection{Mean square displacement}\label{subsub:MSD} The mean square displacement (MSD) is defined in Equation \ref{eq:def-msd-avg-ensemble} by an ensemble average of square displacements. In addition, the time-averaged mean square displacement (TAMSD) is computed by a moving average of the squared increments along a single trajectory. This is performed by integrating over trajectory points separated by a lag time $\tau$ that is much smaller than the overall measurement time $T$ (Equation \ref{eq:def-msd-avg-time}). \begin{subequations} \begin{align} \delta^2(t) &= E[{(\mathbf r(t) - \mathbf r(0))}^2] \label{eq:def-msd-avg-ensemble} \\ \begin{split} \delta^2(\tau) &= \langle {(\mathbf r(t+\tau) - \mathbf r(t))}^2 \rangle \\ &= \frac{1}{T-\tau} \int_0^{T-\tau} {(\mathbf{r}(t+\tau) - \mathbf r(t))}^2 dt \end{split} \label{eq:def-msd-avg-time} \end{align} \end{subequations} The MSD of a normal diffusive trajectory grows as a linear function of time. Therefore, it is a typical indicator to classify processes far from normal diffusion. Moreover, the MSD reveals the time scales that characterize different diffusive regimes. In Figure \ref{fig:B}c a comparison of MSD plots is made for the three ensembles previously presented. Regardless of the model used, the same long-time behavior can be perceived, i.e., the same scaling law arises for sufficiently long time scales. This can be seen by contrasting the MSD curves with the dashed line of slope equal to one, meaning that normal diffusion is achieved. \subsubsection{Kurtosis}\label{subsub:kurtosis} Another useful statistical observable is the kurtosis. Different formulations for kurtosis have been proposed in the literature \citep{cain2017univariate}. The common choice for the one-dimensional case is presented as the ensemble average of Equation~\ref{eq:def-kurtosis-1d}, where $\mu$ stands for the mean velocity and $\sigma$ for the standard deviation.
In addition, for the multivariate case we use Mardia's measure \citep{mardia1970measures}, as in Equation~\ref{eq:def-kurtosis-Nd}. The vector $\boldsymbol{\mu}$ is the $d$-dimensional mean velocity ($d>1$) and $\mathbf{\Sigma}$ the covariance matrix. In both cases we have omitted the explicit dependence on time, but it should be noted that the expected value is taken at a given instant. \begin{subequations} \begin{align} \kappa(t) &= E\left[\left( \frac{v - \mu}{\sigma} \right)^4\right] \label{eq:def-kurtosis-1d} \\ \kappa(t) &= E[\{(\mathbf{v} - \boldsymbol{\mu})^\intercal \mathbf{\Sigma}^{-1} (\mathbf{v} - \boldsymbol{\mu})\}^2] \label{eq:def-kurtosis-Nd} \end{align} \label{eq:def-kurtosis} \end{subequations} The kurtosis measures the disparity of spatial scales of a dispersal process \citep{mendez2016stochastic} and it is also an intuitive means to understand normality \citep{cain2017univariate}. Figure \ref{fig:B}d shows how the kurtosis converges to a value close to $8$ regardless of the model used. This is a consequence of the convergence to a Gaussian density and the fact that all three processes are two-dimensional. Moreover, only in the case of the Diffusing-Diffusivity model is a leptokurtic regime observed, i.e., a regime in which $\kappa \gtrsim 8$. This means that a heavier-tailed density (compared with the Gaussian) arose first, and $\kappa(t)$ provides a direct way to extract, apart from the crossover time, the correlation time of the diffusion coefficient. \subsubsection{Power spectral density}\label{subsub:psd} The power spectral density, or power spectrum (PSD), of a continuous-time random process can be defined by virtue of the Wiener${-}$Khintchin theorem as the Fourier transform $S(\omega)$ of its autocorrelation function $C(t)$: \begin{equation} S (\omega) = \int_{-\infty}^\infty C(t) e^{-i \omega t} dt \end{equation} Power spectrum analysis indicates the frequency content of the process.
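The Wiener--Khintchin relation can be checked numerically in its discrete, circular form, where the FFT of the circular autocorrelation equals the periodogram (an illustrative check, not part of \emph{yupi}):

```python
import numpy as np

def circular_autocorrelation(x):
    """r[k] = (1/N) * sum_n x[n] * x[(n + k) mod N]."""
    N = len(x)
    return np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N

rng = np.random.default_rng(1)
x = rng.normal(size=128)
psd_direct = np.abs(np.fft.fft(x)) ** 2 / len(x)        # periodogram
psd_wk = np.fft.fft(circular_autocorrelation(x)).real   # FT of autocorrelation
# psd_direct and psd_wk agree to machine precision
```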
The inspection of the PSD of a collection of trajectories enables the characterization of the motion in terms of its frequency components. For instance, when analysing the ensembles represented in Figure \ref{fig:B}a, we notice important differences in their spectra (see Figure \ref{fig:B}e). In the Langevin and Diffusing-Diffusivity cases the PSD shows a decay for larger frequencies, as opposed to the Random Walk model, in which all frequencies contribute equally, i.e., the spectrum is distributed uniformly. \subsubsection{Histograms}\label{subsub:histograms} Certain probability density functions can also be estimated from input trajectories (e.g., velocity and turning angle distributions). The speed probability density function is a useful observable for inspecting jump-length statistics. For instance, Figure \ref{fig:B}f reveals the discrete nature of the Random Walk and the rapid decay of the tails in the Langevin and Diffusing-Diffusivity plots, which is a typical indicator to discard anomalous diffusion models as candidate theories. Figure \ref{fig:B}g shows turning angle distributions in polar axes. For the Random Walk model just a few discrete orientations are available, in contrast with the other two cases: a bell shape around zero for the Langevin model and a uniform distribution for the Diffusing-Diffusivity model. \subsubsection{Other functionalities}\label{subsub:otherquantities} In addition to the computation of statistical estimators, the \textbf{stats} module of \emph{yupi} includes the \textbf{collect} function for querying specific data from a set of trajectories. If one desires to obtain only position, velocity or speed data from specific time instants, this function automatically iterates over the ensemble and returns the requested data. Moreover, \textbf{collect} can also gather samples for a given time scale using sliding windows. A more extensive showcase of this module can be seen in the examples provided as part of the software documentation.
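The quantities histogrammed in Figure~\ref{fig:B}f-g can be estimated directly from raw positions (an illustrative \emph{numpy} sketch; \emph{yupi}'s \textbf{stats} module provides these observables out of the box):

```python
import numpy as np

def speeds_and_turning_angles(r, dt=1.0):
    """Finite-difference speeds and heading changes for a 2-D path r of shape (n, 2)."""
    v = np.diff(r, axis=0) / dt
    speed = np.linalg.norm(v, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    # wrap heading increments into [-pi, pi)
    turn = (np.diff(heading) + np.pi) % (2 * np.pi) - np.pi
    return speed, turn

# sanity check: a straight path has constant speed and zero turning angles
r = np.column_stack([np.arange(50.0), 2.0 * np.arange(50.0)])
speed, turn = speeds_and_turning_angles(r)
```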
\subsubsection{Graphics module}\label{sub:soft_graphics} A set of pre-configured visualization functions is included as part of \emph{yupi}. Spatial projections can be visualized for the cases of 2- and 3-dimensional trajectories using the \textbf{plot\_2d} and \textbf{plot\_3d} functions. For instance, each subplot in Figure \ref{fig:B}a is the outcome of \textbf{plot\_2d} for a different ensemble. In addition, specific plots were added to ease the visualization of the observables offered by the module \textbf{yupi.stats}. These customized plotting functions were designed to highlight statistical patterns following the standards commonly used in the literature (e.g., by default, plots of angle distributions are displayed in polar coordinates, and the y- and x-axes of the Power Spectral Density plots are on a logarithmic scale). All the plots in Figure \ref{fig:B}b-g were produced using the aforementioned functions. All these functions were conceived to allow plot customization through case-specific parameters (e.g., the PSD can be plotted as a function of the frequency or the angular frequency). Moreover, since all the predefined plots were implemented over \emph{matplotlib}, users can fully customize their plots via keyword arguments (the \textbf{kwargs} parameter) that will override any default values imposed by the specific \emph{yupi} plotting function. \subsection{Transformation module}\label{sub:soft_transf} The \emph{yupi.transformations} module can be used when the desired outcome is a ``transformed'' version of a given trajectory that does not modify the trajectory itself. Since several methods of this kind can be applied from standard signal processing libraries (e.g., \emph{scipy.signal}), we kept this module simple. Therefore, we included mostly specific resources that are both useful in the context of trajectory analysis and uncommon in the most popular signal processing libraries.
\subsubsection{Trajectory filters}\label{sub:filters} The module \emph{scipy.signal} offers methods to convolve, to fit splines, or to apply low-, band- and high-pass filters. However, we have included a convenient filter especially useful in the context of animal behavior, where the instant velocity vector is sometimes approximated by a local weighted average over past values \citep{li2011dicty}. This is expressed as the convolution: \begin{equation} \mathbf{v}_s(t) = \Omega \int_0^t e^{-\Omega(t - t')} \mathbf{v}(t') dt' \label{eq:exp-convolution} \end{equation} \noindent where $\Omega$ is a parameter that accounts for the inverse of the time window over which the average is most significant. A filter defined by Equation \ref{eq:exp-convolution} preserves directional persistence and produces a smoothed version of a trajectory whose velocity as a function of time is given by $\mathbf v_s(t)$. Therefore, the position can be recovered with the help of Equation~\ref{eq:r-from-v}. This ``exponential-convolutional'' filter can be used as: \begin{verbatim} from yupi.transformations import ( exp_convolutional_filter) smooth_traj = exp_convolutional_filter( traj, omega=5) \end{verbatim} Future releases of the library may include new filters required for specific applications in trajectory analysis. \subsubsection{Trajectory re-samplers}\label{subsub:samplers} There are several applications that require trajectories to be sampled on specific time arrays. The most obvious case is when a trajectory has a non-uniform time array and it is desired to produce an equivalent trajectory sampled periodically in time. This can be achieved using the \textbf{resample} function: \begin{verbatim} from yupi.transformations import resample t1 = resample(traj, new_dt=0.3, order=2) \end{verbatim} Notice that, by default, the library uses a linear interpolation to resample the trajectory. However, the order of the estimation can be controlled using the \textbf{order} parameter.
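Under the hood, resampling a coordinate at new time stamps reduces to interpolation. The following standard-library sketch of the first-order (linear) case is our own illustration of the idea, not \emph{yupi}'s actual code:

```python
# Sketch (not yupi's implementation) of linear-interpolation resampling:
# evaluate a non-uniformly sampled coordinate at new time stamps.
from bisect import bisect_right

def resample_linear(t, x, new_t):
    out = []
    for ti in new_t:
        # Index of the interval [t[j], t[j+1]] containing ti, clamped to range
        j = min(max(bisect_right(t, ti) - 1, 0), len(t) - 2)
        w = (ti - t[j]) / (t[j + 1] - t[j])
        out.append(x[j] + w * (x[j + 1] - x[j]))
    return out

t = [0.0, 0.4, 1.0, 1.1]   # non-uniform time array
x = [0.0, 0.8, 2.0, 2.2]   # here x(t) = 2t, so linear interpolation is exact
print([round(v, 6) for v in resample_linear(t, x, [0.0, 0.5, 1.0])])
# -> [0.0, 1.0, 2.0]
```

Higher-order resampling replaces the linear interpolant with a higher-degree one.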
Equivalently, a new trajectory can be obtained for a given time array that is not required to be uniformly sampled by specifying the time array itself through the \textbf{new\_t} parameter instead of \textbf{new\_dt} when calling the \textbf{resample} function. We also included a simple sub-sampling method designed for uniformly-sampled trajectories. It produces trajectories that keep only a fraction of the original trajectory points. It can be used as: \begin{verbatim} from yupi.transformations import subsample compact_traj = subsample(traj, step=5) \end{verbatim} \noindent where the \textbf{step} parameter specifies how many sample points will be skipped. \subsection{Tracking module}\label{sub:tracking} The tracking module contains all the tools related to retrieving trajectories from video inputs. Although \emph{yupi} works with trajectories of an arbitrary number of dimensions, the scope of this module is limited to two-dimensional trajectories due to the nature of video sources. However, inspired by several practical scenarios in which tracking techniques are required to extract meaningful information, we decided to include these tools as part of the library. We will refer to tracking as the process of retrieving the spatial coordinates of moving objects in a sequence of images. Notice that this is not possible for an arbitrary image sequence: some requirements must be met in order to extract meaningful information. Throughout this section, we will assume that any video used for tracking purposes was taken keeping a constant distance from the camera to the plane in which the target objects are moving.
\subsubsection{Tracking of objects of interest}\label{subsub:objecttracking} When following a given object in a video, the aim of tracking techniques is to provide its position vector with respect to the camera, which we will denote by $\mathbf{r}_i^\mathrm{(oc)}$ (the superscript $(oc)$ stands for object-to-camera reference and the subscript $i$ for the $i$-th frame the vector is referred to). If the dimensions of the object being tracked are small enough (i.e., small compared with the distance covered along the whole trajectory), the centroid of the object determines the only degrees of freedom of the movement, since orientation can be neglected. This point is frequently taken as the position of the object. To determine the actual position of an object on every frame, a tracking algorithm to segment the pixels belonging to the object from the background is required. Five algorithms are implemented in \emph{yupi}: \textbf{ColorMatching}, \textbf{FrameDifferencing}, \textbf{BackgroundSubtraction}, \textbf{TemplateMatching} and \textbf{OpticalFlow}. A detailed explanation of the basics of the aforementioned algorithms can be found in \cite{frayle2017chasing}. All these algorithms attempt to solve the same problem using very different strategies. Therefore, it often happens that under specific conditions one of them may outperform the others. To speed up the tracking process, \emph{yupi} uses a region of interest (ROI) around the last known position of the tracked objects. Then, on every frame, the algorithm searches only inside this region instead of in the whole image. When it finds the new position, it updates the center of the region of interest for the next frame. When the tracking ends, the time evolution of the central position of the ROI is used to reconstruct the trajectory of the tracked object.
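To convey the idea behind the simplest of these strategies, the following toy sketch (our own illustration, not \emph{yupi}'s implementation) does what a \textbf{ColorMatching}-style tracker does in essence: keep the pixels whose color lies within a tolerance of a target color and report their centroid as the object position.

```python
# Toy illustration (not yupi's implementation) of a ColorMatching-style step:
# segment pixels near a target color and take their centroid as the position.
def color_centroid(frame, target, tol):
    """frame: 2D list of (r, g, b) tuples; returns (row, col) centroid or None."""
    hits = [(i, j)
            for i, row in enumerate(frame)
            for j, px in enumerate(row)
            if all(abs(c - t) <= tol for c, t in zip(px, target))]
    if not hits:
        return None
    n = len(hits)
    return (sum(i for i, _ in hits) / n, sum(j for _, j in hits) / n)

green = (0, 255, 0)
frame = [[(0, 0, 0)] * 5 for _ in range(5)]
frame[2][3] = frame[2][4] = (10, 250, 5)   # a small green blob
print(color_centroid(frame, green, tol=20))  # -> (2.0, 3.5)
```

In the real tracker this search is restricted to the ROI and repeated on every frame.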
The library enables the extraction of \textbf{Trajectory} objects from videos through \textbf{ObjectTracker} instances, which are defined by a tracking algorithm and a ROI. For a given video, several objects can be tracked concurrently using an independent \textbf{ObjectTracker} for each one of them. Finally, a \textbf{TrackingScenario} is the structure that groups all trackers and iterates through the video frames while resolving the desired trajectories for each tracker. \subsubsection{Tracking a camera in motion}\label{subsub:cameratracking} Sometimes the camera used to track the object under study is also in motion, able to translate and to execute rotations around a vertical axis. Therefore, knowledge of the positions and orientations of the camera during the whole experiment is required to enable a correct reconstruction of the trajectory relative to a fixed coordinate system. One way to tackle this issue is to infer the movement of the camera by means of the displacements of the background in the image. Preserving the assumption that the camera only moves in a plane parallel to the plane in which the objects are moving, the motion of the camera can be estimated by tracking a number of background points between two different frames and solving the optimization problem of finding the affine matrix that best transforms one set of points into the other\footnote{An affine transformation (or affinity) is one that preserves collinearity and ratios of distances (not necessarily lengths or angles).}.
This means that for a rotation angle $\theta$, a scale parameter $s$, and displacements $t_x$ and $t_y$, an arbitrary vector $(x,y)$ will become \begin{equation} \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s \cos \theta & -s \sin \theta & t_x \\ s \sin \theta & s \cos \theta & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \label{eq:affine_transformation} \end{equation} Therefore, the problem reduces to finding the vector $(\theta, t_x, t_y, s)$ that minimizes the least-squares error of the transformation. Under the assumption that the camera is always at approximately the same distance from the background, the scale parameter should be close to 1. As long as the mean square error remains under a given threshold and the condition $s \approx 1$ holds, the validity of the estimation is guaranteed. Hence, the collection $\{\theta_i, t_{xi}, t_{yi}\}$, where $i$ stands for the frame number, contains all the information necessary to compute the positions and orientations of the camera. \subsubsection{Tracking objects and camera simultaneously} As mentioned above, the position of the object under study with respect to the camera in the $i$-th frame has been denoted by $\mathbf{r}_i^\mathrm{(oc)}$. If the camera also moves and we can follow features of the background from one frame to another, we are able to calculate the parameters $(\theta_i, t_{xi}, t_{yi})$ of the affine matrix that transforms the $(i{-}1)$-th into the $i$-th frame. The question we now face is: how can we compute the position $\mathbf{r}_i \equiv \mathbf{r}_i^\mathrm{(ol)}$ of the object with respect to the frame of reference fixed to the lab in the $i$-th frame? Let us label as $\alpha_i$ the cumulative angular differences of the affine matrix parameter $\theta$ up to the $i$-th frame (Equation~\ref{eq:cum_theta}) and $\mathbf{t}_i = (t_{xi}, t_{yi})^\intercal$ the vector of displacements of the affine transformation.
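The action of the transformation in Equation~\ref{eq:affine_transformation} is easy to check numerically. The sketch below (our own illustration, with hypothetical parameter values) applies the scaled rotation plus translation to a point in homogeneous coordinates:

```python
# Sketch of the similarity transform of Eq. (affine_transformation):
# scaled rotation by theta followed by a translation (tx, ty).
import math

def affine_apply(x, y, theta, tx, ty, s=1.0):
    c, si = math.cos(theta), math.sin(theta)
    # [x', y', 1]^T = A [x, y, 1]^T with A the 3x3 matrix of the equation
    return (s * c * x - s * si * y + tx,
            s * si * x + s * c * y + ty)

# A quarter-turn plus a unit shift along x maps (1, 0) to (1, 1)
print(affine_apply(1.0, 0.0, math.pi / 2, 1.0, 0.0))
```

Fitting $(\theta, t_x, t_y, s)$ is then a standard least-squares problem over pairs of matched background points.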
Let $\mathbf{R}$ be the rotation matrix (i.e., the upper-left $2{\times}2$ block of the matrix in Equation~\ref{eq:affine_transformation} when $s=1$). Then, the position of the camera in the lab coordinate system, $\mathbf{r}_i^{(\mathrm{cl})}$, can be determined in an iterative manner as follows: \begin{subequations} \begin{align} \alpha_i &= \sum_{j=1}^{i} \theta_j \label{eq:cum_theta} \\ \mathbf{r}_i^\mathrm{(cl)} &= \mathbf{R}^{-1}(\alpha_i) \cdot (-\mathbf{t}_i) + \mathbf{r}_{i-1}^\mathrm{(cl)} \label{eq:r-cl} \\ \mathbf{r}_i &= \mathbf{R}^{-1}(\alpha_i) \cdot \mathbf{r}_i^\mathrm{(oc)} + \mathbf{r}_{i}^\mathrm{(cl)} \label{eq:r-ol} \end{align} \end{subequations} Therefore, the desired position of the object under study, $\mathbf{r}_i$, can be computed by Equation~\ref{eq:r-ol} in terms of its position in the frame of reference fixed to the camera, $\mathbf{r}_i^\mathrm{(oc)}$, the absolute position of the camera, $\mathbf{r}_i^\mathrm{(cl)}$, and the camera orientation, $\alpha_i$. This whole process is simplified in \emph{yupi} by the \textbf{CameraTracker} class. By default, \emph{yupi} assumes that the position of the camera remains fixed. However, in order to estimate the motion of a camera and use it to retrieve the correct position of the tracked objects, the user only needs to create a \textbf{CameraTracker} object and pass it to the \textbf{TrackingScenario}. \subsubsection{Removing distortion} When recording video, the output images generally contain some distortion caused by the camera optics. This is important when spatial measurements are being made using videos or photographs. To correct these errors, some adjustments must be applied to each frame. Applying an undistorter function to the videos is possible in \emph{yupi} using the \textbf{ClassicUndistorter} or the \textbf{RemapUndistorter}\footnote{To instantiate one of the given undistorters, a camera calibration file is needed.
This file can be created following the steps explained in the documentation \url{https://yupi.readthedocs.io/en/latest/api_reference/tracking/undistorters.html}.}. \subsection{Integration with other software packages}\label{sub:soft_yupiwrap} Along with \emph{yupi}, we offer a \textit{Python} library called \emph{yupiwrap} (available at \url{https://github.com/yupidevs/yupiwrap}), designed to ease the integration of \emph{yupi} with other libraries for handling trajectories. Two-way conversions between a given \emph{yupi} \textbf{Trajectory} and the data structure used by the third-party library can be made via \emph{yupiwrap}. This approach enables users to seamlessly use the resources needed from either library. \subsubsection{Integration with traja}\label{subsub:integration_with_traja} The \emph{traja} \textit{Python} package is a toolkit for the numerical characterization and analysis of the trajectories of moving animals \citep{shenktraja}. It provides some machine learning tools that are not yet available in \emph{yupi}. To convert from \emph{yupi} to \emph{traja}, let us first consider an arbitrary \emph{yupi} \textbf{Trajectory}: \begin{verbatim} traj = Trajectory( x=[0, 1.0, 0.63, -0.37], y=[0, 0, 0.98, 1.24]) \end{verbatim} \noindent and then the conversion to a \emph{traja} DataFrame is done by: \begin{verbatim} from yupiwrap import yupi2traja traj_traja = yupi2traja(traj) \end{verbatim} Notice that only two-dimensional trajectories can be converted to a \emph{traja} object due to \emph{traja}'s own limitations. The conversion in the opposite direction can be done in a similar way by using \textbf{traja2yupi} instead. \subsubsection{Integration with tracktable}\label{subsub:integration_with_tracktable} \emph{Tracktable} provides a set of tools for handling two- and three-dimensional trajectories, even in geospatial coordinate systems \citep{tracktable}.
The core data structures and algorithms in this package are implemented in \textit{C++} to speed up computation and achieve more efficient memory usage. If we consider the same \emph{yupi} \textbf{Trajectory} from the previous example, we can convert it into a \emph{Tracktable} object using: \begin{verbatim} from yupiwrap import yupi2tracktable traj_track = yupi2tracktable(traj) \end{verbatim} In this case, all trajectories with one to three dimensions can be converted into \emph{Tracktable} objects. The conversion in the opposite direction can also be done by importing \textbf{tracktable2yupi}. \section{Examples}\label{sec:examples} In this section, we illustrate the usage of \emph{yupi} through different examples that require a complex integration of different modules. The examples were chosen to showcase the potential of the library to solve problems that heavily rely on trajectory analysis and its extraction from video sources. Most of the examples reproduce core results from published research. Others include original approaches to verify known properties of different phenomena. All in all, the collection of examples is designed to provide a quick starting point for new research projects involving trajectory data, with a particular focus on environmental modelling. For simplicity, we omit some technical details related to their implementation. However, we provide a detailed version of these examples in the software documentation\footnote{Documentation available at \url{https://yupi.readthedocs.io/en/latest/}} with the required multimedia resources and source code, available in a repository conceived for \emph{yupi} examples\footnote{Examples available at \url{https://github.com/yupidevs/yupi_examples}}. \subsection{Identifying environmental properties through tracking and trajectory processing} \label{sub:ej3} Visual tracking has proven to be an effective method for the study of physical and biological processes.
Moreover, indirect measurements of the environment can also be retrieved from the analysis of the directly measured trajectories. For instance, tracking techniques have empowered research on animal behavior, which can improve the effectiveness and success of conservation management programs \citep{greggor2019using} and gives valuable information about ecosystem changes \citep{rahman2022linking}. More specifically, in \citep{yuan2018biological} the authors inferred the water quality through the analysis of features of fish trajectories. Furthermore, the work of \citep{panwar2020aquavision} shows how to automate the detection of waste in water bodies using images from the environment. In addition, the measurement of river flows has also benefited from the tracking of key features using satellite images \citep{gleason2017tracking}. In the work of \citep{stephenson1999vehicle}, it was shown how the stresses of turning wheels on the ground can affect vegetation cover, plant health and diversity, as well as reduce the generation of underground rhizomes (roots). Inspired by these facts, we decided to reproduce the tracking results from the work of \citep{amigo2019measuring} on the study of the motion of vehicles on granular materials. They reported the analysis of the trajectories performed by a scaled-size wheel while rolling on sand at two different gravitational accelerations, exploiting a frugal instrument design \citep{viera2017note, altshuler2014settling}. Figure \ref{fig:C}a shows a sketch of the instrument, where a camera on top captures the motion of the wheel while rolling around a pivot. This example was built using one of the original videos provided by the authors (see Figure \ref{fig:C}b). \begin{figure} \caption{Efficiency of the rolling process computed for a wheel moving across granular material at a constant angular velocity $\omega=4$\,rad/s under a gravitational acceleration like the one on Mars.
(a) Experimental setup composed of the wheel electro-mechanical system to ensure the constant angular velocity and a camera to record its motion. (b) Sample frames from a video where the wheel moves around the pivot. (c) Estimated positions of the wheel and the LED used as a reference to track it. (d) Estimation of the rolling efficiency for a single realization of the experiment, showing strong consistency with the one reported in the original paper \citep{amigo2019measuring}. \label{fig:C}} \end{figure} In the video, one observes the wheel forced to move on sand at a fixed angular velocity. In optimal rolling conditions, one can expect it to move at a constant linear velocity. However, due to slippage and compaction-decompaction of the granular soil, the actual linear velocity differs from the one expected under ideal conditions. To study the factors that affect the wheel motion, the first step is quantifying how different the rolling process is with respect to the expected one in ideal conditions. This example focuses on the problem of capturing the trajectory of the wheel and computing the efficiency of the rolling process. We start by creating two trackers: one for the central pivot and one for the green LED attached next to the wheel. Since the central pivot should not move significantly, we can track it using the \textbf{TemplateMatching} algorithm, by comparing every frame with a template of the object. As the LED color differs from the rest of the image, we can use the \textbf{ColorMatching} algorithm to track its position. Both trackers are passed to the \textbf{TrackingScenario} to retrieve the trajectories of both objects along the video. It is worth mentioning that, for an accurate estimation, it is required to know the scale factor (i.e., the number of pixels required to represent 1\,m). By calling the \textbf{track} method, the tracking process should produce two trajectories (one for each tracker).
Notice that these trajectories (i.e., ${t_\text{led}}$ and ${t_\text{pivot}}$) are referred to a frame of reference placed at the bottom left corner of the image, as shown in Figure \ref{fig:C}. Then, using the arithmetic operations from \emph{yupi}, it is possible to estimate the trajectory of the LED referred to the central pivot by simply subtracting them: $t_\text{led\_centered} = t_\text{led} - t_\text{pivot}$. Since the LED and the center of the wheel are placed at a constant distance of 0.039\,m, we can estimate the trajectory of the wheel referred to the central pivot: \begin{verbatim} wheel_centered = led_centered.copy() wheel_centered.add_polar_offset(0.039, 0) \end{verbatim} Finally, the trajectory of the wheel referred to its initial position can be obtained by subtracting that initial position from the whole trajectory. \begin{verbatim} wheel = wheel_centered - wheel_centered.r[0] \end{verbatim} Now, assuming no slippage, we can compute the linear velocity as $v_\text{max} = \omega R = (4 \,\mathrm{rad/s}) \times (0.07 \,\mathrm{m}) = 0.28\,\mathrm{m/s}$, and measure the actual linear velocity using the trajectory estimated by the tracking process: \begin{verbatim} v_actual = wheel.v.norm \end{verbatim} By dividing $v_\text{actual}$ by $v_\text{max}$, we can estimate the efficiency of the rolling as described in \citep{amigo2019measuring}. The temporal evolution of the efficiency for a single experiment can be observed in Figure~\ref{fig:C}d. We can notice how the linear velocity of the wheel is not constant despite the constant angular velocity, due to slippage on the terrain. Even though we are observing only one realization of the experiment, and assuming the angular velocity of the wheel is perfectly constant, we notice the consistency of this result with the one reported in the original paper.
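As a quick sanity check of the arithmetic involved (the measured speed below is a hypothetical value, since the real one comes from the tracked video):

```python
# Back-of-the-envelope check with the values quoted in the text: under ideal
# rolling, v_max = omega * R, and the efficiency is v_actual / v_max.
omega = 4.0    # rad/s, imposed angular velocity
R = 0.07       # m, wheel radius
v_max = omega * R              # 0.28 m/s
v_actual = 0.15                # m/s, a hypothetical measured linear speed
efficiency = v_actual / v_max
print(round(v_max, 2), round(efficiency, 3))
```

Any efficiency below 1 quantifies the losses due to slippage and soil compaction-decompaction.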
Despite the specific nature of this example, its usage extends straightforwardly to many other problems that may require the identification of objects from video sources and the application of arithmetic operations over trajectories to indirectly measure any derived quantities. In that regard, we included in the software documentation additional examples related to trajectory tracking that partially reproduce key results from published research: the work of \citep{diaz2020rolling}, where the authors study the penetration of objects into granular beds; the work of \citep{frayle2017chasing}, where the authors studied the capabilities of different image processing algorithms that can be used for tracking the motion of insects under controlled environments; and the work of \citep{serrano2019autonomous}, which extends the previous one by proposing the design of a robot able to follow millimetric-size walkers over much larger distances by tracking both insect and camera simultaneously. \subsection{Equation-based simulations: a molecule immersed in a fluid} \label{sub:ejlysozyme} Several systems can be explained using stochastic models such as the ones shown in Section \ref{sub:generating}. To accurately describe them, it is required to adjust the parameters of the model according to measurable data. Next, we will illustrate how to use \emph{yupi} to generate simulated trajectories of a lysozyme in water using the Langevin model presented in Section \ref{subsub:langevin}. We corroborate that the model correctly predicts the order of magnitude of the molecule's speed values. Proteins contribute greatly to environmental processes and are fundamental to soil and ecosystem health \citep{li2020goethite}.
Since interactions of proteins with charged surfaces are important in many applications, such as biocompatible medical implants \citep{subrahmanyam2002application}, the dynamics of lysozyme and its hydration water has been characterized under electric field effects in different water environments \citep{favi2014dynamics}. In addition, the thermal velocity of a sizeable particle immersed in water, such as a lysozyme molecule at room temperature, has been estimated to be around $10 \,\mathrm{m/s}$ \citep{berg2018random}. The right-hand side of the Langevin equation (\ref{eq:langevin-eq}) can be interpreted as the net force acting on a particle. This force can be written as a sum of a viscous force proportional to the particle's velocity (i.e., Stokes' law with drag parameter $\gamma=1/\tau$, with $\tau$ a correlation time), and a noise term, $\sigma\,\boldsymbol{\xi}(t)$, representing the effect of collisions with the molecules of the fluid. Therefore, (\ref{eq:langevin-eq}) can be written in a slightly different way by noting that there is a relation between the strength of the fluctuating force, $\sigma$, and the magnitude, $1/\tau$, of the friction or dissipation, which is known as the fluctuation-dissipation theorem \citep{kubo1966fluctuation,srokowski2001stochastic}. Consequently, in terms of experimentally measured quantities and in differential form, the Langevin equation can be reformulated in the light of stochastic processes as \begin{subequations} \begin{align} d\mathbf{v} &= -\frac{1}{\tau} \mathbf{v} dt + \sqrt{\frac{2}{\tau} \left( \frac{kT}{m} \right)} \, d\mathbf{W} \label{eq:oh-process-particle-fluid} \\ \frac{1}{\tau} &= \gamma = \frac{\alpha}{m} = \frac{6 \pi \eta a}{m} \label{eq:stoke's-law} \end{align} \end{subequations} \noindent where $k$ is the Boltzmann constant, $T$ the absolute temperature, and $m$ the mass of the particle.
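Equation~\ref{eq:oh-process-particle-fluid} can be integrated numerically with the Euler--Maruyama scheme. The sketch below is our own one-dimensional illustration (not \emph{yupi}'s \textbf{LangevinGenerator}); with $kT/m = 100\,\mathrm{m^2/s^2}$ the equilibrium thermal speed $\sqrt{kT/m}$ is the $\sim 10\,\mathrm{m/s}$ quoted above, and the simulated speed variance should relax to $kT/m$:

```python
# Euler-Maruyama sketch (an illustration, not yupi's LangevinGenerator) of
# Eq. (oh-process-particle-fluid) in 1D: dv = -(v/tau) dt + sqrt(2 kT / (m tau)) dW.
import math
import random

def simulate_speed_variance(kT_over_m, tau, dt, steps, seed=1):
    rng = random.Random(seed)
    v, samples = 0.0, []
    noise = math.sqrt(2.0 / tau * kT_over_m)
    for i in range(steps):
        v += -v / tau * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if i > steps // 2:          # discard the initial transient
            samples.append(v * v)
    return sum(samples) / len(samples)

var = simulate_speed_variance(kT_over_m=100.0, tau=0.1, dt=1e-3, steps=200_000)
print(round(var, 1))  # close to the equilibrium value kT/m = 100
```

This is the fluctuation-dissipation balance in action: the noise amplitude is tied to the drag so that the stationary variance is $kT/m$ regardless of $\tau$.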
Equation~\ref{eq:stoke's-law} provides an operational method to measure the correlation time in terms of the Stokes coefficient, $\alpha$, which depends on the radius of the particle, $a$, and the fluid viscosity, $\eta$. Lysozyme enzymes are molecules with a high molecular weight ($\sim 10^{4}\,\mathrm{g/mol}$) \citep{colvin1952size}. Hence, it is reasonable to expect a Brownian behavior in the limit of large time scales when the particle is subjected to the molecular collisions of the surrounding medium (e.g., an aqueous medium). Then, Equation~\ref{eq:oh-process-particle-fluid} is a good choice to use as a model. By setting the total simulation time \textbf{T}, the dimension \textbf{dim}, the number \textbf{N} and the time step \textbf{dt} of the simulated trajectories, as well as the coefficients \textbf{gamma} and \textbf{sigma} of Equation~\ref{eq:langevin-eq}, we can instantiate the \textbf{LangevinGenerator} class and generate an ensemble of trajectories: \begin{verbatim} from yupi.generators import LangevinGenerator lg = LangevinGenerator( T, dim, N, dt, gamma, sigma) trajs = lg.generate() \end{verbatim} Figure~\ref{fig:v_pdf} shows the velocity probability density function that the model predicts. Apart from the typical Gaussian shape that arises when massive particles are jiggling, the standard deviation (vertical red lines) is in agreement with the previous estimation for the local thermal velocity. \begin{figure} \caption{Gaussian velocity distribution predicted by Equation~\ref{eq:oh-process-particle-fluid}. \label{fig:v_pdf}} \end{figure} Another equation-based simulation is presented as part of the complementary examples provided in the \emph{yupi} documentation. It covers the computation of the probability density function of displacements at different time instants for the case of a one-dimensional process that follows the equations of a Diffusing Diffusivity model (see Section \ref{eq:diffdiff}).
The example reproduces important results from the paper presented by \citep{chechkin2017brownian}. \subsection{Time series analysis: water consumption examination} \label{sub:water_usage} This example showcases the usage of \emph{yupi} beyond ``real'' trajectories. We reproduce an example from \citep{hipel1994time} in the context of hydrological studies by simply treating a time series as an abstract trajectory. Seasonal autoregressive integrated moving average \\(SARIMA) models\footnote{SARIMA is a forecasting model that supports seasonal (S) components of a time series that is assumed to depend on its past values (AR: autoregressive) and past noises (MA: moving average), and to result from integrating (I: integrated) some stationary process.} are useful for modelling seasonal time series in which the mean and other statistics for a given season are not stationary across the years. Some types of hydrological time series studied in water resources engineering can be nonstationary. For example, socio-economic factors such as the population growth of the city of London, Ontario, Canada from the Second World War to 1991 caused a greater water demand in that period \citep{hipel1994time}. Figure \ref{fig:water_usage}a shows the average monthly water consumption (in millions of liters per day) from 1966 to 1988 for this city. The increasing trend around which the seasonal data fluctuate reveals nonstationary characteristics. As a consequence, autocorrelation analysis was used by \citep{hipel1994time} in the design and study of a SARIMA model for the water usage time series. \begin{figure} \caption{Analysis of water consumption in the city of London, Ontario, Canada. (a) Seasonality of the average monthly water usage (1966-1988). The sinusoidal pattern on top of the linear trend accounts for nonstationarity. (b) Slow decay of the normalized autocorrelation function as a function of the lag time.
Oscillations with a period of one year are portrayed. (Original plots can be found in Figures VI.2/12.4.1, pages 417/440, in \citep{hipel1994time}.) \label{fig:water_usage}} \end{figure} First, let \textbf{water\_usage} be the variable in which the time series has been stored. Visualization of the data shown in Figure \ref{fig:water_usage}a can be done using: \begin{verbatim} from yupi import Trajectory traj = Trajectory(x=np.cumsum(water_usage)) plt.plot(traj.v.x) \end{verbatim} Computation and visualization of the autocorrelation function depicted in Figure \ref{fig:water_usage}b (see Section \ref{subsub:VACF} for theoretical details) can be simply coded as: \begin{verbatim} from yupi.stats import vacf from yupi.graphics import plot_vacf acf, _ = vacf([traj], time_avg=True, lag=50) plot_vacf(acf / acf[0], traj.dt, lag=50, x_units='months', y_units=None) \end{verbatim} \section{Conclusions}\label{sec:conclusion} This contribution presents \emph{yupi}, a general-purpose library for handling trajectory data. Our library proposes an integration of tools from different fields, conceived as a complete solution for research applications related to obtaining, processing and analyzing trajectory data. Resources are organized in modules according to their nature. However, consistency is guaranteed by using standardized trajectory data structures across every module. We have shown the effectiveness of the tool by reproducing results reported in a number of research papers. We believe the examples illustrating the simplicity of \emph{yupi} should enable researchers from different fields to become more proficient in processing and analyzing trajectories even with minimal programming knowledge. The current version of \emph{yupi} does not provide specific functionalities to process geo-spatial data.
Considering the wealth of available tools to tackle these specific tasks, we encourage the re-utilization of existing approaches for specific use cases by providing an extension to simplify two-way conversions of data among some existing trajectory-related software libraries. \end{document}
\begin{document} \title{Relativistic quantum walks} \author{Frederick W. Strauch} \email[Electronic address: ]{[email protected]} \affiliation{National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8423, USA} \date{\today} \begin{abstract} By pursuing the deep relation between the one-dimensional Dirac equation and quantum walks, the physical role of quantum interference in the latter is explained. It is shown that the time evolution of the probability density of a quantum walker, initially localized on a lattice, is directly analogous to relativistic wave-packet spreading. Analytic wave-packet solutions reveal a striking connection between the discrete and continuous-time quantum walks. \end{abstract} \pacs{03.67.Lx, 03.65.Pm, 05.40.Fb} \keywords{Dirac equation; entanglement; quantum computation; quantum walk} \maketitle The ``quantum random walk,'' first coined by Aharonov {\it et al.} \cite{Aharonov93}, is a quantum generalization of the classical random walk. Consider a walker moving on a one-dimensional lattice, taking steps left or right based on the state of a coin. Classically, if the coin is flipped after each step, this generates a diffusive random walk. If the coin is quantum mechanical, however, it can be put into a superposition, and rotated by applying a fixed unitary operator. Aharonov {\it et al.} showed that this quantum procedure (or algorithm) can generate displacements that, on average, are much greater than the classical random walk. This {\textit{discrete-time quantum walk}} (DTQW) has been re-discovered and extensively analyzed in the context of quantum computation \cite{Kempe2003}. 
Two key properties are the following: (i) the standard deviation of the walker's position grows linearly in time ($(\Delta x)_t \sim t$), in clear distinction to the classical random walk ($(\Delta x)_t \sim t^{1/2}$); (ii) for proper initial conditions the walker spreads out symmetrically, with a nearly constant probability distribution save for two peaks located at $x_{\pm} = \pm c t$ (where $c = 1/\sqrt{2}$ for the ``Hadamard walk'' \cite{Kempe2003}), beyond which the probability quickly goes to zero. An entirely different approach to ``quantizing'' random walks was initiated by Farhi and Gutmann \cite{Farhi98}. Beginning with the differential equation for diffusion on a lattice, they performed an analytic continuation to yield a Schr{\"o}dinger equation with a finite-difference Laplacian operator. This {\textit{continuous-time quantum walk}} (CTQW) was used by Childs {\it et al.} \cite{Childs2003} to construct a special search algorithm that is exponentially faster than classical methods. Other local search algorithms (with square-root speedup) have been studied using both the discrete and continuous-time quantum walks, often with similar results \cite{Shenvi2003,Childs2004b}. However, to this author's knowledge, no physical explanation has been proposed for the similar performance of these two quantum walks. Before connecting these two walks, recall the connection between the DTQW and the Dirac equation. As discussed by Meyer \cite{Meyer96}, this goes back to Feynman's ``checkerboard,'' a discrete space-time path integral that, in the continuum limit, generates the propagator for the Dirac equation in one dimension \cite{FeynmanBook}.
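As a quick numerical illustration of property (i), the short simulation below runs the Hadamard walk from a single site and checks that the width of the distribution grows linearly in the number of steps. This is an illustrative sketch, not code from the paper; the initial coin state $(1,i)/\sqrt{2}$ is the standard choice that makes the walk symmetric.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time Hadamard walk started from a single lattice site."""
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)
    # symmetric initial coin state (1, i)/sqrt(2) at the centre
    psi[size // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin
    for _ in range(steps):
        psi = psi @ H.T                       # flip the coin
        psi[:, 0] = np.roll(psi[:, 0], 1)     # right-movers shift +1
        psi[:, 1] = np.roll(psi[:, 1], -1)    # left-movers shift -1
    return psi

def position_std(psi):
    prob = (np.abs(psi) ** 2).sum(axis=1)
    n = np.arange(len(prob)) - len(prob) // 2
    mean = (n * prob).sum()
    return np.sqrt(((n - mean) ** 2 * prob).sum())

s100 = position_std(hadamard_walk(100))
s200 = position_std(hadamard_walk(200))
print(s200 / s100)  # ballistic spreading: doubling the time doubles the width
```

Doubling the number of steps roughly doubles the standard deviation, in contrast to the $\sqrt{2}$ factor a diffusive walk would give.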
This is best seen in the following unitary representation \cite{BBirula94}, in which the DTQW is written as the discrete mapping: \begin{equation} \left(\begin{array}{l} \psi_R(n,\tau+1) \\ \psi_L(n,\tau+1) \end{array}\right) = U \left(\begin{array}{l} \psi_R(n,\tau) \\ \psi_L(n,\tau) \end{array} \right), \label{xwalk1} \end{equation} where $\psi_R$ and $\psi_L$ are wave functions on an infinite lattice, and $U$ is the product of a conditional translation operator and a spin rotation \begin{equation} U = [\frac{1}{2} (I+\sigma_z) D + \frac{1}{2}(I-\sigma_z) D^{-1}] e^{- i \theta \sigma_x}. \label{xwalk2} \end{equation} Here the Pauli matrices $\{I,\sigma_x,\sigma_z\}$ act on the spinor components, and the translation operator $D$ acts on wave functions as $(D \psi)(n) = \psi(n-1)$. The continuum limit is found by letting the position $x = n \epsilon$, $D = e^{-i \epsilon p}$ ($p$ is the momentum), $\theta = m \epsilon$ ($m$ is the mass), and the time $t = \epsilon \tau$. Using the Trotter formula, the limit $\epsilon \to 0$ (with $p, m$, and $t$ finite) yields \begin{equation} U^\tau = [ e^{-i \epsilon \sigma_z p} e^{-i \epsilon m \sigma_x}]^{t/\epsilon} \to e^{-i H_{D} t}, \end{equation} where $H_D$ is the Hamiltonian for the one-dimensional Dirac equation (with $\hbar = c = 1$, $p = -i \partial_x$) \cite{ThallerBook}: \begin{equation} i \partial_t \Psi(x,t) = H_{D} \Psi(x,t) = (-i \sigma_z \partial_x + \sigma_x m) \Psi(x,t). \label{dirac} \end{equation} Elegant as it is, this continuum limit has been largely ignored in the extensive analysis of the Hadamard walk \cite{Nayak2000} (in which $e^{-i \pi \sigma_y/4} \sigma_z$ is used in place of $e^{-i \theta \sigma_x}$ in (\ref{xwalk2})). The closest related work is the continuum limit of the Hadamard walk recently found by Knight {\it et al.} \cite{Knight2003}, but this and the corresponding Airy function solutions are significantly different from the Dirac equation.
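The discrete mapping above is straightforward to implement directly. The sketch below (illustrative only, with the coin angle $\theta$ chosen arbitrarily) applies the spin rotation $e^{-i\theta\sigma_x}$ followed by the spin-conditional translation, and confirms that the step is unitary and that no probability leaks outside the light cone $|n| \le \tau$.

```python
import numpy as np

def walk_step(psi, theta):
    """One DTQW step: spin rotation exp(-i*theta*sigma_x), then the
    spin-conditional translation (D on one component, D^{-1} on the other)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -1j * s], [-1j * s, c]])   # exp(-i theta sigma_x)
    psi = psi @ rot.T
    psi[:, 0] = np.roll(psi[:, 0], 1)    # D acts on the upper component
    psi[:, 1] = np.roll(psi[:, 1], -1)   # D^{-1} acts on the lower component
    return psi

size, theta, steps = 401, np.pi / 5, 100
psi = np.zeros((size, 2), dtype=complex)
psi[size // 2, 0] = 1.0                  # walker localized at the origin
for _ in range(steps):
    psi = walk_step(psi, theta)

prob = (np.abs(psi) ** 2).sum(axis=1)
print(prob.sum())                        # unitarity: total probability stays 1
print(prob[: size // 2 - steps].sum())   # nothing outside the light cone
```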
Another notable work is that of Meyer \cite{Meyer97}, who studied some of the wavelike properties of quantum cellular automata, but not the uniquely relativistic properties explored here. Understanding these properties may have importance for quantum algorithms; it has already been shown that massless Dirac operators can improve a continuous-time search algorithm \cite{Childs2004b}. Here I use explicit solutions of (\ref{dirac}) to illustrate that the quantum-walk probability distribution is analogous to the spreading of a relativistic particle. The term ``relativistic'' is taken to mean any evolution of a particle with a maximum speed limit. The same characteristic spreading---both relativistic and nonrelativistic---is found from a new solution to the quantum walk equations (\ref{xwalk1})-(\ref{xwalk2}) {\it without} going to a continuum limit. Finally, this solution is found to be analytically related to the continuous-time quantum walk, providing a new link between these two relativistic quantum walks. First, it is important to note that, using the Heisenberg equations of motion, wave-packet spreading for any dispersion relation $\omega(p)$ can be written as \begin{equation} (\Delta x)_t^2 = (\Delta x)_0^2 + (\Delta v)_0^2 t^2, \end{equation} where $(\Delta v)_0$ is the standard deviation of the group velocity $v(p) = d \omega(p)/dp$ \cite{Jordan86}. For the Dirac equation (\ref{dirac}), the dispersion relation is $\omega(p) = \sqrt{p^2 + m^2}$, and thus $v(p) = p (p^2 + m^2)^{-1/2} < c = 1$, i.e. there is a maximum group velocity, which is of course the speed of light. While this linear quantum spreading $(\Delta x)_t \sim t$ is universal, the presence of peaks of the probability distribution (at $x_{\pm} = \pm ct$) depends on the initial localization. 
To show this, I construct an explicit time-dependent solution of (\ref{dirac}) by the following Fourier representation: \begin{equation} \Psi(x,t) = \frac{\mathcal{N}}{2\pi} \int_{-\infty}^{\infty} dp P_+(p) \frac{1}{\sqrt{2}} \left(\begin{array}{l} 1 \\ 1 \end{array} \right) e^{i p x-(a+i t) \omega(p)}. \label{diracw1} \end{equation} The prefactor $P_+(p) \equiv I + H_{D}/\omega(p)$ projects the spinor onto the positive-energy eigenstates of $H_{D}$, while the parameter $a$ in the exponential allows arbitrary localization in position. The integrals can be done analytically to yield \begin{equation} \Psi(x,t) = \frac{m \mathcal{N}}{\pi \sqrt{2}} \left(\begin{array}{l} s^{-1} K_1(m s) [a + i (t+x)] +K_0(m s) \\ s^{-1} K_1(m s) [a + i ( t-x)] + K_0(m s) \end{array} \right), \label{diracw2} \end{equation} where $s = [x^2 + (a+i t)^2]^{1/2}$, the normalization factor is \begin{equation} \mathcal{N} = \sqrt{\pi/2m} \left[K_1(2 m a) + K_0(2 m a)\right]^{-1/2}, \label{diracw3} \end{equation} and $K_n$ is the modified Bessel function of order $n$ \cite{AStegun}. The probability density is shown in Fig.~\ref{diracfig} for two values of $a$ at $t=0$ and $t = 50$. The nonrelativistic wave packet (with large $a$) spreads as a Gaussian, while the relativistic wave packet (with small $a$) spreads near the light-cone ($x_{\pm} = \pm c t$) at the speed of light. \begin{figure} \caption{(a) Non-relativistic ($a=5$) and (b) relativistic ($a=0.5$) solutions of the one-dimensional Dirac equation. The probability density $\rho(x,t) = \Psi^{\dagger}\Psi$ is shown at $t=0$ and $t=50$.} \label{diracfig} \end{figure} Another exact solution of the one-dimensional Dirac equation was found many years ago \cite{Bakke73}. These examples demonstrate the existence of positive-energy states of a relativistic particle localized beneath its Compton wavelength \cite{Bracken99}, despite well-known claims to the contrary.
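The closed form (\ref{diracw2}) can be checked against the Fourier representation (\ref{diracw1}) by direct quadrature; at $t=0$ the Bessel arguments are real, so scipy's real-argument routine suffices. The following sketch is illustrative only: the values $m=1$, $a=0.5$ and the test point are our choices, and the common factor $\mathcal{N}$ is dropped from both sides.

```python
import numpy as np
from scipy.special import kv

m, a, x = 1.0, 0.5, 0.7   # mass, localization parameter, test point

# Fourier representation at t = 0 (overall factor N dropped):
p = np.linspace(-60.0, 60.0, 200001)
dp = p[1] - p[0]
w = np.sqrt(p ** 2 + m ** 2)
phase = np.exp(1j * p * x - a * w)
# P_+(p)(1,1)^T/sqrt(2) with P_+ = I + H_D/omega, H_D = sigma_z p + sigma_x m
up_int = ((1 + (p + m) / w) * phase).sum() * dp / (2 * np.pi * np.sqrt(2))
lo_int = ((1 + (m - p) / w) * phase).sum() * dp / (2 * np.pi * np.sqrt(2))

# Closed form at t = 0 (same factor N dropped):
s = np.sqrt(x ** 2 + a ** 2)
pref = m / (np.pi * np.sqrt(2))
up_cf = pref * (kv(1, m * s) * (a + 1j * x) / s + kv(0, m * s))
lo_cf = pref * (kv(1, m * s) * (a - 1j * x) / s + kv(0, m * s))

print(abs(up_int - up_cf), abs(lo_int - lo_cf))
```

Both spinor components agree to high precision, since the integrand decays like $e^{-a\omega}$ and the quadrature grid is fine.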
Such states appear to require entanglement between the spatial and spinor degrees of freedom \cite{Peres2002}---this is shown for (\ref{diracw2}) below. While the resemblance between the probability distribution of the Hadamard walk (see, e.g. \cite{Kempe2003}) and the relativistic wave packet in Fig.~\ref{diracfig} is quite strong, it raises the question: what about the non-relativistic case? For the quantum walk, this requires an initial superposition over the lattice. A larger spread in the position leads to slower spreading, as expected from the uncertainty principle. An analytic solution to the walk equations (\ref{xwalk1}) and (\ref{xwalk2}), covering both the relativistic and nonrelativistic limits, can be found using a procedure similar to the one above. I use the Fourier analysis of (\ref{xwalk1}) and (\ref{xwalk2}) and let $D = e^{-i k}$, in which case \begin{equation} U = \left(\begin{array}{ll} e^{-ik} \cos \theta & -i e^{-ik} \sin \theta \\ -i e^{i k} \sin \theta & e^{i k} \cos \theta \end{array}\right). \label{coinq1} \end{equation} This matrix has eigenvalues $e^{\pm i \omega(k)}$, where $\omega(k)$ satisfies the dispersion relation \cite{Meyer97} \begin{equation} \cos \omega(k) = \cos \theta \cos k. \label{coinq2} \end{equation} The wave packet corresponding to (\ref{diracw1})-(\ref{diracw2}) is \begin{equation} \psi(n,\tau) = \frac{N}{2\pi} \int_{-\pi}^{\pi} dk P_+(k) \frac{1}{\sqrt{2}} \left(\begin{array}{l} 1 \\ 1 \end{array}\right) e^{i k n - (\alpha + i \tau) \omega(k)}. \label{coinqw1} \end{equation} The prefactor $P_+(k) \equiv (e^{i \omega(k)} - U)$ projects the spinor onto the ``positive-energy'' eigenstates of $U$, while the parameter $\alpha$ in the exponential allows arbitrary localization on the lattice.
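The dispersion relation (\ref{coinq2}) follows directly from the trace and determinant of (\ref{coinq1}): $\det U = 1$ and $\mathrm{tr}\,U = 2\cos\theta\cos k$, so the eigenvalues are $e^{\pm i\omega(k)}$ with $\cos\omega(k) = \cos\theta\cos k$. A numerical spot check (illustrative only; $\theta$ is an arbitrary choice):

```python
import numpy as np

def U_k(k, theta):
    """Fourier-space walk operator, a 2x2 matrix for each momentum k."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[np.exp(-1j * k) * c, -1j * np.exp(-1j * k) * s],
                     [-1j * np.exp(1j * k) * s, np.exp(1j * k) * c]])

theta = 0.9   # arbitrary coin angle
for k in np.linspace(-np.pi, np.pi, 7):
    lam = np.linalg.eigvals(U_k(k, theta))
    assert abs(lam.prod() - 1) < 1e-12                 # det U = 1
    # sum of e^{+i w} and e^{-i w} is 2 cos(w) = 2 cos(theta) cos(k)
    assert abs(lam.sum() / 2 - np.cos(theta) * np.cos(k)) < 1e-12
print("cos(omega) = cos(theta) cos(k) confirmed")
```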
This solution can be written as \begin{equation} \psi(n,\tau) = \frac{N}{\sqrt{2}} \left(\begin{array}{l} I_{n}(\tau-1-i \alpha) - e^{-i \theta} I_{n-1}(\tau-i \alpha) \\ I_{n}(\tau-1-i \alpha) - e^{-i \theta} I_{n+1}(\tau-i \alpha) \end{array}\right) \label{coinqw2} \end{equation} where the function $I_n(z)$ is defined by \begin{equation} I_n(z) = \frac{1}{2\pi} \int_{-\pi}^{\pi} dk \exp(i k n - i \omega(k) z), \label{coinqw3} \end{equation} with $\omega(k)$ given by (\ref{coinq2}), and the normalization factor is \begin{equation} N = [2 I_0(-i 2 \alpha) - e^{i\theta} I_1(-1-i 2 \alpha) - e^{-i\theta} I_1(1-i 2 \alpha)]^{-1/2}. \label{coinqw4} \end{equation} At this point, a crucial approximation can be made: if $\cos \theta$ is small, replace $\omega(k)$ by its lowest order expansion from (\ref{coinq2}): $\omega(k) \simeq \pi/2 - \cos \theta \cos k$. This replacement conveniently yields the same maximum group velocity ($\cos \theta$) and allows the following approximation to (\ref{coinqw3}): \begin{equation} I_n(z) \simeq e^{i \pi (n-z)/2} J_n(z \cos \theta), \label{coinqw5} \end{equation} where $J_n$ is the Bessel function of order $n$ \cite{AStegun}. As shown in Fig.~\ref{dtqwfig}, using this approximation in (\ref{coinqw2}) compares quite favorably with a direct numerical calculation of (\ref{xwalk1}) and (\ref{xwalk2}). The analogy between this and Fig.~\ref{diracfig} is remarkable, taking the ``speed of light'' for the quantum walk as $\cos \theta$. \begin{figure} \caption{(a) Non-relativistic ($\alpha=22$) and (b) relativistic ($\alpha=2.2$) solutions of the one-dimensional discrete-time quantum walk (DTQW). The probability density $\rho(n,\tau) = \psi^{\dagger}\psi$ is shown.} \label{dtqwfig} \end{figure} A few comments are in order. First, I have shown that evolution on the line for the DTQW, in the relativistic case, has fronts that propagate at the maximum speed $c = \cos \theta$, in close analogy to a solution of the Dirac equation.
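The quality of the approximation (\ref{coinqw5}) is easy to quantify: since $|\omega(k) - (\pi/2 - \cos\theta\cos k)|$ is bounded by roughly $\cos^3\theta/6$, the error in $I_n(z)$ is at most of order $z\cos^3\theta$. A sketch (illustrative; the values of $\theta$, $z$ and $n$ are our choices) comparing the defining integral (\ref{coinqw3}) with the Bessel approximation:

```python
import numpy as np
from scipy.special import jv

theta = 1.45        # cos(theta) ~ 0.12: the small-cos(theta) regime
z, n = 10.0, 2      # argument and lattice index to test

# defining integral of I_n(z), evaluated by a Riemann sum over one period
k = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
omega = np.arccos(np.cos(theta) * np.cos(k))   # exact dispersion relation
exact = np.exp(1j * (k * n - omega * z)).mean()

# Bessel approximation I_n(z) ~ exp(i pi (n - z)/2) J_n(z cos(theta))
approx = np.exp(1j * np.pi * (n - z) / 2) * jv(n, z * np.cos(theta))

print(abs(exact - approx))  # error of order z cos^3(theta)/6
```

For a periodic integrand the equally spaced Riemann sum is spectrally accurate, so the printed difference reflects only the approximation error in $\omega(k)$.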
Heuristically, the criterion for a relativistic walker is for the initial localization $(\Delta x)_0$ to be less than the effective Compton wavelength $\lambda = 1/(m c) = 1/\sin\theta$, where the effective mass is given by $m = [d^2 \omega(k) /d k^2]^{-1}_{k=0} = \tan \theta$. To approximate the Hadamard walk, the appropriate choice is $\theta = \pi/4$, leading to $c = 1/\sqrt{2}$, $m = 1$, and $\lambda = \sqrt{2} > 1$. Thus, the initial condition most widely studied \cite{Nayak2000}, with the walker localized at one position, has $(\Delta x)_0 \sim 1 < \lambda$, leading to a relativistic quantum walk. Second, the analogy between the wave packets of (\ref{diracw2}) and (\ref{coinqw2}) extends beyond the probability distribution to the entanglement between the spinor and spatial degrees of freedom \cite{Peres2002}. The entanglement as a function of the initial localization is shown in Fig.~\ref{entropyfig}. By including only positive-frequency terms in the wave function, the entanglement remains constant in time. Note that here, as in Fig.~\ref{dtqwfig}, I have used the correspondence between the localization parameters $a = \alpha/\tan \theta$, found by comparing the dispersion relations near $k=p=0$. As discussed above, highly localized positive-energy states become significantly entangled in the limit $a \to 0$. \begin{figure} \caption{The entanglement, in ebits, of the Dirac solution (solid), and the discrete-time quantum walk with $\theta = 3\pi/7$ (dashed), as a function of the scaling parameter $a = \alpha/\tan \theta$. The entanglement measure is the spinor entropy \cite{Peres2002}.} \label{entropyfig} \end{figure} Finally, I note that the particular choice of the spin-rotation (coin) of the quantum walk analyzed above has simplified the calculation.
As an example, the Hadamard walk's dispersion relation $\sin \omega(k) = \sin k / \sqrt{2}$ \cite{Nayak2000} does not satisfy $\omega(k) \simeq \omega(0) + k^2/(2 m)$ for small $k$, but rather $\omega(k) \simeq k /\sqrt{2} - k^3/(12 \sqrt{2})$. This expansion, the essential approximation used in \cite{Knight2003}, does not lead to an obvious nonrelativistic limit to the Hadamard walk. There are, however, both types of propagation---relativistic and nonrelativistic---for the CTQW, defined by \cite{Farhi98} \begin{equation} i \partial_t \psi(n,t) = -\gamma \left( \psi(n-1,t) - 2 \psi(n,t) + \psi(n+1,t) \right). \end{equation} An exact solution for this walk can be found as above: \begin{equation} \begin{array}{ll} \psi(n,t) & = N(2\pi)^{-1} \int_{-\pi}^{\pi} dk e^{i k n - (\alpha+i t) \omega(k)} \\ & = N e^{-2 \gamma(\alpha+i t)} i^n J_n(2 \gamma (t-i \alpha)), \end{array} \label{ctqw} \end{equation} with the dispersion relation $\omega(k) = 2 \gamma (1-\cos k)$ and normalization factor $N = e^{2 \gamma \alpha} [J_0(-4 i \gamma \alpha)]^{-1/2}$. The relativistic and nonrelativistic evolution for this case is shown in Fig.~\ref{ctqwfig}. This solution is strikingly similar to (\ref{coinqw2}) [using the relation (\ref{coinqw5})], both visually and analytically, assuming equal maximum speeds $c = 2\gamma = \cos \theta$. \begin{figure} \caption{(a) Non-relativistic ($\alpha=22$) and (b) relativistic ($\alpha=2.2$) solutions of the one-dimensional continuous-time quantum walk (CTQW). The probability density $\rho(n,t) = \psi^{\dagger}\psi$ is shown.} \label{ctqwfig} \end{figure} A Bessel-function approximation to the DTQW similar to (\ref{coinqw1})-(\ref{coinqw5}) was recently found by an entirely different method \cite{Romanelli2004L}.
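The closed form (\ref{ctqw}) is easy to verify against a direct matrix-exponential evolution of the CTQW on a truncated lattice. For $\alpha = 0$ (a walker initially on a single site) the solution reduces to $\psi(n,t) = e^{-2i\gamma t}\, i^n J_n(2\gamma t)$ with $N = 1$, keeping all arguments real. A sketch (illustrative only; lattice size, $\gamma$ and $t$ are our choices):

```python
import numpy as np
from scipy.special import jv
from scipy.linalg import expm

gamma, t, size = 0.5, 10.0, 201          # lattice centred at site 0
n = np.arange(size) - size // 2

# closed-form CTQW solution for a walker starting on one site (alpha = 0)
psi_exact = np.exp(-2j * gamma * t) * (1j) ** n * jv(n, 2 * gamma * t)

# direct evolution under the finite-difference Hamiltonian
H = -gamma * (np.eye(size, k=1) + np.eye(size, k=-1) - 2 * np.eye(size))
psi0 = np.zeros(size, dtype=complex)
psi0[size // 2] = 1.0
psi_num = expm(-1j * H * t) @ psi0

print(np.abs(psi_exact - psi_num).max())  # agreement up to tiny edge effects
```

Since the front moves at speed $2\gamma = 1$, the truncated lattice of half-width $100$ is effectively infinite at $t = 10$.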
The physical content of this approximation, however, is revealed by the wave-packet analysis presented here: when $\cos \theta \ll 1$ ($\theta \sim \pi/2$), the dispersion relations of both the CTQW and DTQW have the common form $\omega(k) = \omega(0) + c (1-\cos k)$, with the ``relativistic'' property of a maximum speed ($v(k) = d \omega/dk < c$). This equivalence is quite unexpected, since it is the $\theta \ll 1$ ($\cos \theta \sim 1$) limit of (\ref{coinq2}), $\omega(k) \sim \sqrt{k^2+\theta^2}$, that leads to the Dirac equation. Despite this quantitative equivalence, there still appear to be two qualitatively distinct approaches to quantizing a random walk. A possible resolution is simply to consider the CTQW as the discretization of the one-dimensional nonrelativistic Schr{\"o}dinger equation, and the DTQW as the discretization of the one-dimensional Dirac equation. The Schr{\"o}dinger equation can be considered the quantization (by analytic continuation) of the diffusion equation for Brownian motion. The Dirac equation can also be considered the quantization (by analytic continuation) of the two-velocity model for the telegrapher's equation \cite{Gaveau84}. This model describes a particle that moves with a constant velocity left or right, switching its velocity randomly at some constant rate \cite{Kac74}. This latter process corresponds precisely to the coined classical random walk originally described above. From this point of view, the two quantum walks are not two different quantization methods, but rather equivalent quantizations of two different stochastic processes: one described by the diffusion equation and the other by the telegrapher's equation. That {\it both} lead to propagation with a maximum speed is the surprising yet simple consequence of discretizing equations on a lattice. I gratefully acknowledge P. R. Johnson and A. J. Dragt for key discussions at the beginning of this work. \end{document}
\begin{document} \title{The $\lambda$-invariant measures of subcritical Bienaym\'e--Galton--Watson processes} \begin{abstract} A $\lambda$-invariant measure of a sub-Markov chain is a left eigenvector of its transition matrix of eigenvalue $\lambda$. In this article, we give an explicit integral representation of the $\lambda$-invariant measures of subcritical Bienaym\'e--Galton--Watson processes killed upon extinction, i.e.\ upon hitting the origin. In particular, this characterizes all quasi-stationary distributions of these processes. Our formula extends the Kesten--Spitzer formula for the (1-)invariant measures of such a process and can be interpreted as the identification of its minimal $\lambda$-Martin entrance boundary for all $\lambda$. In the particular case of quasi-stationary distributions, we also present an equivalent characterization in terms of semi-stable subordinators. Unlike Kesten and Spitzer's arguments, our proofs are elementary and do not rely on Martin boundary theory. \noindent\textbf{Keywords:} Bienaym\'e--Galton--Watson process, invariant measure, Martin boundary, quasi-stationary distribution, Schr\"oder equation, semi-stable process. \noindent\textbf{MSC2010:} primary: 60J80, 60J50, secondary: 39B12, 60G52. \end{abstract} \section{Results} Let $Z = (Z_n)_{n\ge0}$ be a subcritical Bienaym\'e--Galton--Watson (BGW) process with offspring distribution of mean $m\in(0,1)$. Denote by $P$ the restriction of its transition matrix to $\ensuremath{\mathbb{N}}^*=\{1,2,\ldots\}$. Then $P$ is a sub-stochastic matrix, the transition matrix of the sub-Markov process \{$Z$ killed upon hitting $0$\}. A measure\footnote{Throughout the article, all measures are assumed to be locally finite unless explicitly stated.} $\nu$ on $\ensuremath{\mathbb{N}}^*$ is called a $\lambda$-invariant measure for $Z$ if it is a left eigenvector\footnote{As usual, we consider measures as row vectors and functions as column vectors.} of $P$ of eigenvalue $\lambda$, i.e. 
if \begin{equation} \label{eq:nu_P} \nu P = \lambda \nu. \end{equation} In terms of generating functions, if $F(z)$ denotes the generating function of the offspring distribution and $G(z) = \sum_{k=1}^\infty \nu(k)z^k$ the generating function of the measure $\nu$, then, supposing that $G(z)$ is finite for all $|z|<1$ (a fact which follows from Lemma~\ref{lem:tail} below), \eqref{eq:nu_P} is equivalent to \begin{equation} \label{eq:gf_measure} G(F(z)) - G(F(0)) = \lambda G(z),\quad|z|<1. \end{equation} For $x\in\ensuremath{\mathbb{N}}^*$ denote by $\ensuremath{\mathbf{P}}_x$ the law of the process $Z$ starting from $Z_0 = x$ and by $\ensuremath{\mathbf{E}}_x$ expectation with respect to $\ensuremath{\mathbf{P}}_x$. Furthermore, for a measure $\nu$ on $\ensuremath{\mathbb{N}}^*$, write $\|\nu\| = \nu(\ensuremath{\mathbb{N}}^*)$. The following limit, called the \emph{Yaglom limit}, is known to exist \cite{Heathcote1967,Joffe1967} (see also \cite[p.\ 16]{Athreya1972}): \begin{equation} \label{eq:yaglom} \nu_{\text{min}} = \lim_{n\rightarrow\infty} \ensuremath{\mathbf{P}}_1(Z_n \in \cdot\,|\,Z_n > 0) = \lim_{n\rightarrow\infty} \frac{\delta_1 P^n}{\|\delta_1 P^n\|}, \end{equation} where the limit holds in the weak topology of measures on $\ensuremath{\mathbb{N}}^*$. Furthermore, the probability measure $\nu_{\text{min}}$ satisfies \eqref{eq:nu_P} with $\lambda = m$, i.e. it is an $m$-invariant probability measure of the process. In particular (see also \cite[Proposition~5]{Meleard2012}), \begin{equation} \label{eq:pn} \frac{\ensuremath{\mathbf{P}}_1(Z_{n+1} > 0)}{\ensuremath{\mathbf{P}}_1(Z_n>0)} = \frac{\|\delta_1 P^{n+1}\|}{\|\delta_1 P^n\|} \rightarrow m,\quad\ensuremath{\text{ as }} n\rightarrow\infty. \end{equation} We denote the generating function of the probability measure $\nu_{\text{min}}$ by \[ H(z)= \sum_{n=1}^\infty \nu_{\text{min}}(n)z^n,\quad |z|\le 1. 
\] Our main theorem is the following, which identifies all $\lambda$-invariant measures of the BGW process $(Z_n)_{n\ge 0}$. \begin{theorem} \begin{enumerate} \item There exist no non-trivial (i.e. $\not\equiv 0$) $\lambda$-invariant measures for $Z$ with $\lambda < m$. \item The only $m$-invariant measures of $Z$ are multiples of the Yaglom limit $\nu_{\text{min}}$. \item Let $\alpha\in(-\infty,1)$. A measure $\nu$ on $\ensuremath{\mathbb{N}}^*$ is an $m^\alpha$-invariant measure for $Z$ if and only if its generating function $G(z) = \sum \nu(n)z^n$ satisfies \begin{equation} \label{eq:G_rep} G(z) = \int_0^\infty (e^{(H(z)-1)x}-e^{-x})\frac 1 {x^\alpha}\,\Lambda(dx),\quad |z|<1, \end{equation} where $\Lambda$ is a locally finite measure on $(0,\infty)$ satisfying $\Lambda(A) = \Lambda(mA)$ for every Borel set $A\subset(0,\infty)$. The measure $\Lambda$ is uniquely determined from $\nu$. Moreover, for every such measure $\Lambda$, \eqref{eq:G_rep} defines the generating function of an $m^\alpha$-invariant measure for $Z$ with radius of convergence at least 1. \end{enumerate} \label{th:main} \end{theorem} \begin{remark} In the proof of Theorem~\ref{th:main}, the measure $x^{-\alpha}\Lambda(dx)$ will be constructed as the vague limit ($n\rightarrow\infty$) of the measures $\mu_n$ on $(0,\infty)$ defined by $\mu_n(A) = m^{-\alpha n}\nu(p_n^{-1}A)$, where $p_n = \ensuremath{\mathbf{P}}_1(Z_n > 0)$ and $A\subset(0,\infty)$ is Borel. \end{remark} \begin{remark} We will give an overview of the existing literature in Section~\ref{sec:history} but mention already here that Formula~\eqref{eq:G_rep} was obtained, in a slightly different form, for $\alpha=0$ and $F(z) = 1-m(1-z)$ (the ``pure death case'') by Kesten and Spitzer \cite{Spitzer1967} (giving credit to H.~Dinges for deriving it independently). It was later shown by Hoppe~\cite{Hoppe1977} that the case of general and even multitype offspring distributions (but still $\alpha=0$) can be reduced to the pure death case.
One could adapt Hoppe's arguments for $\alpha\ne 0$, but we do not show this here. \end{remark} \paragraph{Quasi-stationary distributions.} A $\lambda$-invariant \emph{probability} measure $\nu$ (i.e., $\|\nu\| = 1$) is also called a \emph{quasi-stationary distribution} (QSD) of the process $Z$ with eigenvalue $\lambda$. The following result easily follows from Theorem~\ref{th:main}: \begin{theorem} \label{th:qsd} A $\lambda$-invariant measure $\nu$ of the process $Z$ is finite if and only if $\lambda < 1$. In particular, $\nu$ is a QSD with eigenvalue $\lambda$ of the process $Z$ if and only if either \begin{enumerate} \item $\lambda = m$ and $\nu = \nu_{\text{min}}$, or \item $\lambda = m^\alpha$ for some $\alpha\in(0,1)$ and the generating function $G(z) = \sum \nu(n)z^n$ satisfies \eqref{eq:G_rep}, where $\Lambda$ is a locally finite measure on $(0,\infty)$ satisfying \begin{enumerate} \item $\Lambda(A) = \Lambda(mA)$ for every Borel set $A\subset(0,\infty)$ and \item $\int_0^\infty (1-e^{-x})x^{-\alpha}\,\Lambda(dx) = 1$. \end{enumerate} Furthermore, the measure $\Lambda$ in \eqref{eq:G_rep} is uniquely determined from $\nu$. \end{enumerate} \end{theorem} \begin{remark} If $\Lambda(dx) = \frac 1 x\,dx$ in the above theorem, then $G(z) = 1-(1-H(z))^\alpha$, see Remark~\ref{rem:formulae} below. These QSD were found by Seneta and Vere-Jones in their seminal paper on QSD of Markov chains on countably infinite state spaces \cite{Seneta1966}. Rubin and Vere-Jones \cite{Rubin1968} showed later that these QSD are the only ones with regularly varying tails and furthermore, every distribution $\nu$ on $\ensuremath{\mathbb{N}}^*$ with a tail of the form $\nu([x,\infty)) = x^{-\alpha}L(x)$ for a slowly varying function $L(x)$ is in the domain of attraction of the above QSD (see also \cite{Seneta1971} for an analogous result for $1$-invariant measures). 
\end{remark} \begin{remark} That the Yaglom limit $\nu_{\text{min}}$ as defined in \eqref{eq:yaglom} is the QSD of smallest eigenvalue is a general fact \cite[p.~515]{Ferrari1995a}. Furthermore, the fact that it is the unique QSD with eigenvalue $m$ is classical in our case \cite{Heathcote1967,Joffe1967}. \end{remark} \begin{remark} QSD of BGW processes (and Formula~\eqref{eq:G_rep}) appear in a recent article by H\'enard and the author on random trees invariant under Bernoulli edge contraction \cite{Henard2014}. \end{remark} \paragraph{Continuous-time BGW processes.} $\lambda$-invariant measures can be defined analogous\-ly for a subcritical \emph{continuous-time} BGW process $(Z_t)_{t\ge0}$. Let $L$ and $(P_t)_{t\ge0}$ be its associated infinitesimal generator and semigroup, respectively, restricted to $\ensuremath{\mathbb{N}}^*$. We say that a measure $\nu$ on $\ensuremath{\mathbb{N}}^*$ is a $\lambda$-invariant measure of the process $(Z_t)_{t\ge0}$ if $\nu L = -\lambda \nu$, or equivalently, if $\nu P_t = e^{-\lambda t} \nu$ for every $t\ge0$. In this case, $\nu$ is also an $e^{-\lambda r}$-invariant measure of the embedded chain $(Z_{rn})_{n\ge0}$, for every $r>0$. The measure $\Lambda$ from Theorem~\ref{th:main} then satisfies $\Lambda(A) = \Lambda(rA)$ for every Borel set $A$ and every $r>0$, hence $\Lambda(dx) = \frac 1 x\,dx$. We therefore have the following corollary to Theorem~\ref{th:main}, which also follows (in the pure death case) from results for general birth-and-death chains \cite{Cavender1978}. \begin{corollary} \label{cor:continuous} Let $(Z_t)_{t\ge0}$ be a subcritical continuous-time BGW process and let $m>0$ be such that for all $t\ge0$, $\ensuremath{\mathbf{E}}_1[Z_t] = e^{-mt}$. \begin{enumerate} \item There exist no non-trivial (i.e. $\not\equiv 0$) $\lambda$-invariant measures for $(Z_t)_{t\ge0}$ with $\lambda > m$. \item The only $m$-invariant measures of $(Z_t)_{t\ge0}$ are multiples of the Yaglom limit $\nu_{\text{min}}$.
\item For every $\lambda < m$, the $\lambda$-invariant measures of $(Z_t)_{t\ge0}$ are exactly the multiples of the measure whose generating function is given by \eqref{eq:G_rep} with $\alpha = \lambda/m$ and $\Lambda(dx) = \frac 1 x\,dx$. More explicitly, $G$ is the generating function of a $\lambda$-invariant measure if and only if there exists $c\ge0$ such that \[ G(z) = c\times \begin{cases} 1- (1-H(z))^\alpha, & \alpha > 0\\ -\log (1-H(z)), & \alpha = 0\\ (1-H(z))^\alpha - 1, & \alpha < 0. \end{cases} \] \end{enumerate} In particular, the only QSD of the process $(Z_t)_{t\ge0}$ with eigenvalue $\alpha m$, $\alpha\in(0,1]$, is the probability measure with generating function $1-(1-H(z))^\alpha$. \end{corollary} \begin{remark} A similar phenomenon happens for \emph{continuous-state} branching processes; see \cite{Lambert2007}. \end{remark} \begin{remark} \label{rem:formulae} The explicit formulae in Corollary~\ref{cor:continuous} are obtained from the following well-known equality which we recall for convenience: \begin{equation} \forall a\in(0,1)\,\forall \alpha< 1: \int_0^\infty (e^{-ax} - e^{-x})\frac{dx}{x^{\alpha+1}} = \begin{cases} \Gamma(-\alpha)(a^\alpha - 1), & \alpha \ne 0\\ -\log a, & \alpha = 0. \end{cases} \label{eq:gamma} \end{equation} An easy proof goes by noting that for each $a\in(0,1)$, both sides of the equation define analytic functions on the half-plane $\{\ensuremath{\mathbb{R}}e \alpha < 1\}$ and agree on $\{\ensuremath{\mathbb{R}}e \alpha < 0\}$ as can easily be checked by calculating the two Euler integrals. The case $\alpha=0$ is also a special case of Frullani's integral. \end{remark} \paragraph{$\lambda$-invariant measures of the process which is not killed at the origin.} Say that a measure $\nu$ on $\ensuremath{\mathbb{N}} = \{0,1,\ldots\}$ is a \emph{true} $\lambda$-invariant measure for $Z$ if it is a $\lambda$-invariant measure for the \emph{non-killed} process.
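As a quick numerical sanity check of the identity \eqref{eq:gamma} in Remark~\ref{rem:formulae} (an illustrative aside, not part of the mathematical development; the parameter values below are arbitrary), both sides can be compared by standard quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, alpha = 0.5, 0.5   # arbitrary test values with a in (0,1), alpha < 1
f = lambda x: (np.exp(-a * x) - np.exp(-x)) * x ** (-alpha - 1)
# split at 1: integrable singularity at 0, exponential decay at infinity
lhs = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
rhs = gamma(-alpha) * (a ** alpha - 1)
print(lhs, rhs)

g = lambda x: (np.exp(-a * x) - np.exp(-x)) / x   # the alpha = 0 case
lhs0 = quad(g, 0, 1)[0] + quad(g, 1, np.inf)[0]
print(lhs0, -np.log(a))   # Frullani's integral
```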
In other words, if $P_0$ denotes the transition matrix of the BGW process $Z$ on $\ensuremath{\mathbb{N}}$, a measure $\nu$ on $\ensuremath{\mathbb{N}}$ is a true $\lambda$-invariant measure for $Z$ if and only if $\nu P_0 = \lambda \nu$, or equivalently, if its generating function $G$ satisfies $G(F(z)) = \lambda G(z)$ for every $|z|<1$. Of course, since $0$ is an absorbing state for the process, there are no true $\lambda$-invariant measures for $\lambda < 1$, and for $\lambda = 1$ the only true (1-)invariant measures are the multiples of $\delta_0$ (see e.g. \cite[p.\,67]{Athreya1972}). However, for $\lambda > 1$ the $\lambda$-invariant measures from Theorem~\ref{th:main} all extend to true $\lambda$-invariant measures. In fact, we have the following analogue of Theorem~\ref{th:main}: \begin{theorem} \begin{enumerate} \item There exist no non-trivial (i.e. $\not\equiv 0$) true $\lambda$-invariant measures for $Z$ with $\lambda < 1$. \item The only true (1-)invariant measures for $Z$ are multiples of $\delta_0$. \item Let $\alpha< 0$. A measure $\nu$ on $\ensuremath{\mathbb{N}}$ is a true $m^\alpha$-invariant measure for $Z$ if and only if its generating function $G(z) = \sum \nu(n)z^n$ satisfies \begin{equation} \label{eq:G_rep2} G(z) = \int_0^\infty e^{(H(z)-1)x}\frac 1 {x^\alpha}\,\Lambda(dx),\quad |z|<1, \end{equation} where $\Lambda$ is a locally finite measure on $(0,\infty)$ satisfying $\Lambda(A) = \Lambda(mA)$ for every Borel set $A\subset(0,\infty)$. The measure $\Lambda$ is uniquely determined from $\nu$. Moreover, for every such measure $\Lambda$, \eqref{eq:G_rep2} defines the generating function of a true $m^\alpha$-invariant measure for $Z$ with radius of convergence at least 1. \end{enumerate} \label{th:true_main} \end{theorem} For a subcritical \emph{continuous-time} BGW process, we can define true $\lambda$-invariant measures analogously to the above.
The analogue of Corollary~\ref{cor:continuous} is then the following: \begin{corollary} Let $(Z_t)_{t\ge0}$ be a subcritical continuous-time BGW process and let $m>0$ be such that for all $t\ge0$, $\ensuremath{\mathbf{E}}_1[Z_t] = e^{-mt}$. \begin{enumerate} \item There exist no non-trivial (i.e. $\not\equiv 0$) true $\lambda$-invariant measures for $(Z_t)_{t\ge0}$ with $\lambda > 0$. \item The only true (0-)invariant measures for $(Z_t)_{t\ge0}$ are the multiples of $\delta_0$. \item For $\lambda < 0$, the true $\lambda$-invariant measures for $(Z_t)_{t\ge0}$ are exactly the multiples of the one given by \eqref{eq:G_rep2} with $\alpha = \lambda/m$ and $\Lambda(dx) = \frac 1 x\,dx$, i.e. the measures with generating functions $G(z) = c(1-H(z))^\alpha$, $c\ge 0$. \end{enumerate} \end{corollary} \paragraph{Overview of the article.} The remainder of the article is organized as follows: Theorems~\ref{th:main}, \ref{th:qsd} and \ref{th:true_main} are proven in Section~\ref{sec:proof}. Section~\ref{sec:discussion} is an extended discussion consisting of the following three parts. In Section~\ref{sec:martin}, we interpret Theorem~\ref{th:main} in the light of Martin boundary theory. Section~\ref{sec:subordinators} gives a probabilistic interpretation of the QSD from Theorem~\ref{th:qsd} in terms of semi-stable subordinators. In Section~\ref{sec:history}, we review the existing literature on $\lambda$-invariant measures of BGW processes. \paragraph{Notation.} Throughout the article, a statement involving an undefined variable $z$ is meant to hold (at least) for every $z\in(0,1)$. \paragraph{Acknowledgments.} I thank Olivier H\'enard for an extremely fruitful collaboration \cite{Henard2014} from which this article arose. I also thank Alano Ancona and Vadim Kaimanovich for useful discussions about Martin boundary theory. An anonymous referee has made several valuable suggestions improving the presentation of the article.
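Before turning to the proofs, the Yaglom limit \eqref{eq:yaglom} and the ratio convergence \eqref{eq:pn} can be illustrated numerically for a concrete offspring law. The sketch below is illustrative only: the offspring law $(p_0,p_1,p_2) = (0.5,0.3,0.2)$ with mean $m = 0.7$ is our choice, and the state space is truncated, which introduces a negligible error here.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])          # offspring law on {0, 1, 2}
m = float((np.arange(3) * p).sum())    # mean m = 0.7 < 1: subcritical

K = 200                                # truncate the state space to {1,...,K}
P = np.zeros((K, K))                   # sub-stochastic matrix on N^*
row = np.array([1.0])                  # pmf of the sum of 0 offspring
for i in range(K):
    row = np.convolve(row, p)[:K + 1]  # (i+1)-fold convolution, truncated
    P[i, :len(row) - 1] = row[1:]      # transitions from state i+1 to {1..K}

nu = np.zeros(K)
nu[0] = 1.0                            # delta_1
for _ in range(60):
    nu_next = nu @ P
    ratio = nu_next.sum() / nu.sum()   # survival-probability ratio
    nu = nu_next
yaglom = nu / nu.sum()                 # normalized conditioned law

print(ratio)                                   # approaches m = 0.7
print(np.abs(yaglom @ P - m * yaglom).max())   # m-invariance of the limit
```

The printed ratio illustrates \eqref{eq:pn}, and the small residual illustrates that the normalized iterates converge to an $m$-invariant probability measure, as in \eqref{eq:yaglom}.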
\section{Proofs}
\label{sec:proof}

\def\Poi{\operatorname{Poi}}

We start with three simple lemmas.
\begin{lemma}
\label{lem:ineq}
Let $z,w\in(0,1)$, $z\ne w$. Then for $p>0$ small enough and for all $x\ge 0$,
\[
(1-p(1-z))^{x/p} - (1-p)^{x/p} \quad \begin{matrix} \ge \\ \le \end{matrix} \quad e^{(w-1)x}-e^{-x} \quad \begin{matrix} \ensuremath{\text{ if }} w<z\\ \ensuremath{\text{ if }} w>z\end{matrix}
\]
\end{lemma}
\begin{proof}
Let $z,w\in(0,1)$, $z\ne w$. If $w<z$, then $1-p(1-z) = 1+p(z-1) \ge e^{(w-1)p}$ for all small enough $p>0$. Furthermore, $1-p \le e^{-p}$ for all $p>0$. This implies the first inequality.

If $w>z$, then $1-p \ge e^{(z-w-1)p}$ for all small enough $p>0$. Furthermore, $1-p(1-z) \le e^{(z-1)p}$ for all $p>0$. Hence, for $p>0$ small enough and $x\ge 0$,
\[
(1-p(1-z))^{x/p} - (1-p)^{x/p} \le e^{(z-1)x} - e^{(z-w-1)x} = e^{(z-w)x}(e^{(w-1)x}-e^{-x}) \le e^{(w-1)x}-e^{-x}.
\]
This shows the second inequality and thus finishes the proof of the lemma.
\end{proof}

\begin{lemma}
\label{lem:tail}
Let $\nu$ be an $m^\alpha$-invariant measure of $Z$, with $\alpha\in\ensuremath{\mathbb{R}}$. Set $M = m^{-1}$. Then for every $\beta < \alpha$, there exists $C<\infty$ such that
\[
\nu([M^n,M^{n+1})) \le C m^{\beta n},\quad\forall n\in\ensuremath{\mathbb{N}}.
\]
As a consequence, we have for every $\beta < \alpha$, for some $C<\infty$, for every $x\ge 1$,
\[
\begin{cases}
\nu([x,\infty)) \le Cx^{-\beta}, & \beta > 0\\
\nu([1,x]) \le Cx^{-\beta}, & \beta \le 0
\end{cases}
\]
In particular, $\sum_{n\in\ensuremath{\mathbb{N}}^*} \nu(n) |z|^n < \infty$ for every $|z| < 1$. Moreover, $\nu$ is finite if $\alpha > 0$.
\end{lemma}
\begin{proof}
Fix $a < M < A$ and $\varepsilon>0$. Let $\nu$ be as in the statement and recall the definition of the transition matrix $P$. From the branching property and the law of large numbers we get for large $n$,
\[
\delta_n P ([n/A,n/a)) = \ensuremath{\mathbf{P}}_n(Z_1 \in[n/A,n/a)) \ge 1-\varepsilon.
\]
This implies that for every $x$ large enough and $y\ge x$,
\begin{equation*}
\nu P([x/A,y/a)) \ge (1-\varepsilon)\nu([x,y)).
\end{equation*}
Now let $\varepsilon \rightarrow 0$. The previous inequality together with \eqref{eq:nu_P} (with $\lambda = m^\alpha$) gives for every $\beta < \alpha$, for every $x$ large enough and $y \ge x$,
\begin{equation}
\label{eq:nu1}
\nu([x,y)) \le m^\beta \nu ([x/A,y/a)).
\end{equation}
Now set $b_n = \nu([M^n,M^{n+1}))$. Iterating \eqref{eq:nu1} and choosing $A$ and $a$ close to $M$ one readily shows that for every $\beta < \alpha$ and $\delta > 0$, there exists $K\in\ensuremath{\mathbb{N}}$ such that for $n\ge K$,
\[
b_{K+n} \le m^{\beta n}(b_K + \cdots + b_{K + \lfloor \delta n\rfloor + 1}).
\]
Elementary arguments yield the first statement of the lemma. The remaining statements follow.
\end{proof}

\begin{lemma}
\label{lem:Lambda}
Let $f:(0,\infty)\rightarrow[0,\infty)$ be measurable and satisfy for some constant $C\ge1$:
\[
\forall n\in\ensuremath{\mathbb{Z}}: \text{ either $f \equiv 0$ on $[m^n,m^{n-1})$ or }\forall x,y\in [m^n,m^{n-1}): f(x)/f(y) \in [C^{-1},C].
\]
Furthermore, let $\Lambda$ be a measure on $(0,\infty)$ satisfying $\Lambda(A) = \Lambda(mA)$ for all Borel $A\subset(0,\infty)$ and $\Lambda([m,1)) \ne 0$. Then
\[
\frac 1 {\Lambda([m,1))}\int_0^\infty f(x)\,\Lambda(dx) \asymp_C \frac 1 {\log m^{-1}}\int_0^\infty f(x)\,\frac{dx}{x},
\]
where we set $a\asymp_C b \iff C^{-1} b\le a\le Cb$.
\end{lemma}
\begin{proof}
First note that the restriction of the measure $\Lambda([m,1))^{-1}\Lambda$ to $[m,1)$ can be written as the image of the measure $(\log m^{-1})^{-1}\frac{dx}{x}$ on $[m,1)$ under a suitable map $\widetilde\varphi$: first map the latter via its distribution function to Lebesgue measure on $[0,1]$, then map this back to $[m,1)$ via the inverse of the distribution function of the measure $\Lambda([m,1))^{-1}\Lambda$.
Then extend the map $\widetilde\varphi$ to a map $\varphi$ on $(0,\infty)$ by
\[
\varphi(x) = m^n \widetilde\varphi(m^{-n}x),\quad x\in[m^{n+1},m^n),\ n\in\ensuremath{\mathbb{Z}}.
\]
By the self-similarity of the measures $\Lambda$ and $\frac{dx}{x}$, the measure $\Lambda([m,1))^{-1}\Lambda$ is indeed the image of the measure $(\log m^{-1})^{-1}\frac{dx}{x}$ on $(0,\infty)$ by the map $\varphi$. Furthermore, by construction the map $\varphi$ maps every interval $[m^n,m^{n-1})$, $n\in\ensuremath{\mathbb{Z}}$, to itself. In particular, for all $x>0$, either $f(\varphi(x)) = f(x) = 0$ or $f(\varphi(x))/f(x)\in[C^{-1},C]$ by assumption. The lemma easily follows by the change of variables formula.
\end{proof}

\begin{corollary}
\label{cor:mu}
Let $\mu$ be a non-zero measure on $(0,\infty)$ satisfying for some $\alpha\in\ensuremath{\mathbb{R}}$, $\mu(A) = m^{\alpha}\mu(mA)$ for all Borel $A\subset(0,\infty)$. Then for every $z\in(0,1)$,
\[
\int_0^\infty (e^{(H(z)-1)x}-e^{-x})\,\mu(dx) < \infty \iff \alpha < 1.
\]
\end{corollary}
\begin{proof}
Note that $\mu([m,1)) \ne 0$; otherwise we would have $\mu = 0$ by self-similarity. Define $\Lambda(dx) = x^\alpha \mu(dx)$. Then $\Lambda$ satisfies the hypothesis of Lemma~\ref{lem:Lambda}. Now let $\beta,\gamma \in\ensuremath{\mathbb{R}}\cup\{+\infty\}$ and set
$$f_{\beta,\gamma}(x) = x^\beta\ensuremath{\mathbbm{1}}_{(x<1)} + x^{-\gamma}\ensuremath{\mathbbm{1}}_{(x>1)},\quad x> 0.$$
By Lemma~\ref{lem:Lambda} applied to the function $x\mapsto x^{-\alpha} f_{\beta,\gamma}(x)$, we have for some $C>1$ (depending on $\Lambda$, $m$, $\alpha$, $\beta$ and $\gamma$),
\[
\int_0^\infty f_{\beta,\gamma}(x)\,\mu(dx) = \int_0^\infty x^{-\alpha}f_{\beta,\gamma}(x)\,\Lambda(dx) \asymp_C \int_0^\infty x^{-\alpha-1} f_{\beta,\gamma}(x)\,dx.
\]
In particular, this shows that
\begin{equation}
\label{eq:finiteness}
\int_0^\infty f_{\beta,\gamma}(x)\,\mu(dx) < \infty \iff \beta > \alpha\ensuremath{\text{ and }} \gamma > -\alpha,
\end{equation}
with the obvious meaning if $\beta = +\infty$ or $\gamma = +\infty$. Let $z\in(0,1)$, so that $H(z)\in(0,1)$. We finally consider the integral
\begin{equation}
\label{eq:finiteness2}
\int_0^\infty (e^{(H(z)-1)x}-e^{-x})\,\mu(dx).
\end{equation}
For large $x$, the integrand is smaller than any fixed polynomial, so that the integral always converges at $\infty$ by \eqref{eq:finiteness} applied with $\beta=+\infty$ and some $\gamma > -\alpha$. On the other hand, as $x\rightarrow0$, the integrand is asymptotically equivalent to $H(z) x$. Equation~\eqref{eq:finiteness} applied with $\beta = 1$ and $\gamma = +\infty$ then implies that the integral in \eqref{eq:finiteness2} converges at the origin if and only if $\alpha < 1$. These two facts prove the corollary.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{th:main}]
Although we could restrict ourselves to the pure death case, i.e.\ $F(z) = 1-m(1-z)$ (see Section~\ref{sec:history}), we prove the theorem directly in full generality. We first introduce some notation. Let $Y_n$ denote a random variable with the law of $Z_n$ under $\ensuremath{\mathbf{P}}_1(\cdot\,|\,Z_n > 0)$. By \eqref{eq:yaglom}, $Y_n$ converges in law to the Yaglom distribution $\nu_{\text{min}}$; in particular, $H_n(z) = \ensuremath{\mathbf{E}}[z^{Y_n}] \rightarrow H(z)$ as $n\rightarrow\infty$. Note that the inverse $H^{-1}$ exists on $[0,1]$ and is continuous. We further define $p_n = \ensuremath{\mathbf{P}}_1(Z_n > 0)$ for $n\in\ensuremath{\mathbb{N}}$ and note that $p_{n+1}/p_n\rightarrow m$ as $n\rightarrow\infty$ by \eqref{eq:pn}. Now let $\nu$ be an $m^\alpha$-invariant measure, $\alpha\in\ensuremath{\mathbb{R}}$.
Denote by $G(z) = \sum_{n=1}^\infty \nu(n)z^n$ its generating function, which is finite and well-defined for $|z|<1$ by Lemma~\ref{lem:tail}. We will extend the notation $\ensuremath{\mathbf{P}}_x$ and $\ensuremath{\mathbf{E}}_x$ to the (possibly infinite) measure $\nu$ by $\ensuremath{\mathbf{P}}_\nu(\cdot) = \sum_n \nu(n)\ensuremath{\mathbf{P}}_n(\cdot)$ and $\ensuremath{\mathbf{E}}_\nu[\cdot] = \sum_n \nu(n)\ensuremath{\mathbf{E}}_n[\cdot]$. Define the random variable $N_n$ to be the number of individuals at time $0$ which have a descendant at time $n$. Then $N_n > 0$ iff $Z_n > 0$. Furthermore, by the branching property, $Z_n$ is equal in law (under $\ensuremath{\mathbf{P}}_k$ for every $k\in\ensuremath{\mathbb{N}}^*$) to $Y^{(1)}_n+\cdots+Y^{(N_n)}_n$, where the variables $Y^{(i)}_n$ are iid copies of $Y_n$ and independent of $N_n$. Hence, as $n\rightarrow\infty$, by the $m^\alpha$-stationarity of $\nu$, \begin{align} \nonumber m^{-\alpha n}\ensuremath{\mathbf{E}}_\nu[z^{N_n}\ensuremath{\mathbbm{1}}_{N_n > 0}] &= m^{-\alpha n} \ensuremath{\mathbf{E}}_\nu[H_n^{-1}(z)^{Z_n}\ensuremath{\mathbbm{1}}_{Z_n > 0}] = \ensuremath{\mathbf{E}}_\nu[H_n^{-1}(z)^{Z_0}]\\ \label{eq:apple} &= G(H_n^{-1}(z)) \rightarrow G(H^{-1}(z)),\quad \ensuremath{\text{ as }} n\rightarrow\infty. \end{align} Now note that under $\ensuremath{\mathbf{P}}_k$, $N_n$ is binomially distributed with parameters $k$ and $p_n$ for every $k\in\ensuremath{\mathbb{N}}^*$. In particular, \begin{align} \label{eq:didi} \ensuremath{\mathbf{E}}_\nu[z^{N_n}\ensuremath{\mathbbm{1}}_{N_n > 0}] = \ensuremath{\mathbf{E}}_\nu[z^{N_n} - 0^{N_n}] = \ensuremath{\mathbf{E}}_\nu[(1-p_n(1-z))^{Z_0} - (1-p_n)^{Z_0}]. 
\end{align} Defining for every $n\in\ensuremath{\mathbb{N}}$ the measure $\mu_n$ by $\mu_n(A) = m^{-\alpha n}\nu(p_n^{-1}A)$ for Borel $A\subset(0,\infty)$, we thus get by \eqref{eq:apple} and \eqref{eq:didi}, \begin{equation} \label{eq:zN} G(H^{-1}(z)) = \lim_{n\rightarrow\infty} \int_0^\infty \big((1-p_n(1-z))^{x/p_n} - (1-p_n)^{x/p_n}\big)\,\mu_n(dx). \end{equation} With the first inequality in Lemma~\ref{lem:ineq} and the fact that $p_n\rightarrow0$ as $n\rightarrow\infty$, this gives \begin{equation} \label{eq:domination} \forall w\in(0,1):\quad\sup_n \int_0^\infty \big(e^{(w-1)x}-e^{-x}\big)\,\mu_n(dx) < \infty. \end{equation} Using \eqref{eq:domination} with $w=1/2$, say, gives that the sequence of measures $\widetilde\mu_n(dx) = xe^{-x}\mu_n(dx)$ is tight and therefore, by Prokhorov's theorem, precompact in the space of finite measures on $[0,\infty)$ endowed with weak convergence. Let $\widetilde\mu$ be a subsequential limit and define the measure $\mu(dx) = x^{-1}e^x\widetilde\mu_{(0,\infty)}(dx)$, where $\widetilde\mu_{(0,\infty)}$ is the restriction of $\widetilde\mu$ to $(0,\infty)$. We claim that \begin{equation} \label{eq:limit} G(z) = \int_0^\infty (e^{(H(z)-1)x}-e^{-x})\,\mu(dx) + \widetilde\mu(0) H(z). \end{equation} Indeed, fix $z\in(0,1)$ and denote by $g_{z,n}(x)$ the integrand on the right-hand side of \eqref{eq:zN}. Then the function $x\mapsto g_{z,n}(x)/(xe^{-x})$, continuously extended to $[0,\infty)$, converges uniformly on every compact subset of $[0,\infty)$ to the function $\widetilde g_z$ defined by $\widetilde g_z(x) = (e^{zx}-1)/x$ for $x>0$ and $\widetilde g_z(0) = z$. Using \eqref{eq:domination} with some $w\in(z,1)$ together with the second inequality in Lemma~\ref{lem:ineq}, a truncation argument then shows that we can pass to the (subsequential) limit inside the integral in \eqref{eq:zN}, which yields \[ G(H^{-1}(z)) = \int_0^\infty \widetilde g_z(x)\,\widetilde\mu(dx). \] This yields \eqref{eq:limit}. 
Furthermore, the theory of Laplace transforms gives that $\mu$ and $\widetilde\mu(0)$, hence $\widetilde\mu$, are uniquely determined by \eqref{eq:limit}, so that $\widetilde\mu_n$ converges in fact weakly to $\widetilde\mu$. As a consequence, $\mu_n$ converges vaguely on $(0,\infty)$ to $\mu$. The scaling properties of the measure $\mu$ follow from this convergence: we have for every compact interval $A\subset(0,\infty)$ whose endpoints are not atoms of $\mu$, \[ \mu(A) = \lim_{n\rightarrow\infty} m^{-\alpha (n+1)} \nu(p_{n+1}^{-1}A) = m^{-\alpha} \lim_{n\rightarrow\infty} m^{-\alpha n} \nu(p_n^{-1}(p_n/p_{n+1})A) = m^{-\alpha} \mu(m^{-1}A), \] since $p_{n+1}/p_n\rightarrow m$ by \eqref{eq:pn}. This implies that the measure $\Lambda$ defined by $\Lambda(dx) = x^\alpha \mu(dx)$ satisfies $\Lambda(A) = \Lambda(mA)$ for every Borel set $A$. It remains to investigate which terms in \eqref{eq:limit} vanish for particular values of $\alpha$. A first constraint comes from the fact that $G(z)$ is finite for every $z\in(0,1)$ by Lemma~\ref{lem:tail}, and so the integral in \eqref{eq:limit} needs to be finite as well. By Corollary~\ref{cor:mu}, this is true if and only if $\alpha < 1$ or $\mu = 0$. A second constraint comes from the fact that $G$ satisfies \eqref{eq:gf_measure} with $\lambda = m^\alpha$. To verify this, we first recall the following equations for the function $H$: \begin{align} \label{eq:H1} H(F(z)) - H(F(0)) &= m H(z),\quad |z|\le 1.\\ \label{eq:H2} H(F(0)) &= 1-m\\ \label{eq:H3} H(F(z)) - 1 &= m(H(z) - 1),\quad |z|\le 1. \end{align} Indeed, \eqref{eq:H1} is an immediate consequence of \eqref{eq:gf_measure} (with $\lambda = m$) and the finiteness of $H$ for $|z| = 1$, \eqref{eq:H2} follows from \eqref{eq:H1} by setting $z=1$, and \eqref{eq:H3} follows from \eqref{eq:H1} and \eqref{eq:H2} by reordering terms. 
We now have by \eqref{eq:limit}, for every $z\in(0,1)$,
\begin{align*}
&G(F(z)) - G(F(0)) \\
&= \int_0^\infty [e^{(H(F(z)) - 1)x} - e^{-x} - e^{(H(F(0)) - 1)x} + e^{-x}]\,\mu(dx) + \widetilde\mu(0) (H(F(z)) - H(F(0)))\\
&= \int_0^\infty [e^{m(H(z) - 1)x} - e^{-mx}]\,\mu(dx) + m\widetilde\mu(0) H(z)\qquad \text{(by \eqref{eq:H3},\eqref{eq:H2},\eqref{eq:H1} (in this order))}\\
&= m^\alpha \int_0^\infty [e^{(H(z) - 1)x} - e^{-x}]\,\mu(dx) + m\widetilde\mu(0) H(z)\qquad \text{(by self-similarity of $\mu$)}.
\end{align*}
Comparing with \eqref{eq:gf_measure}, this implies that $\widetilde\mu(0) = 0$ unless $\alpha = 1$. Summing up, we have the following constraints for the quantities in \eqref{eq:limit}:
\begin{itemize}
\item $\alpha > 1$: $\mu = 0$ and $\widetilde\mu(0) = 0$
\item $\alpha = 1$: $\mu = 0$
\item $\alpha < 1$: $\widetilde\mu(0) = 0$.
\end{itemize}
This proves the necessity part of the theorem.

For the sufficiency, we only need to consider the case $\alpha < 1$. Let $G$ be a function given by \eqref{eq:G_rep} with $\Lambda$ a measure on $(0,\infty)$ satisfying $\Lambda(A) = \Lambda(mA)$ for every Borel set $A$. By the above calculations, one readily shows that $G$ satisfies \eqref{eq:gf_measure} with $\lambda = m^\alpha$. It remains to show that $G$ is the generating function of a (locally finite) measure on $\ensuremath{\mathbb{N}}^*$. Now, for every $x\in(0,\infty)$, the function $z\mapsto e^{(H(z)-1)x} - e^{-x}$ is the generating function of the sum of $\mathcal N$ iid random variables distributed according to $\nu_{\text{min}}$, where $\mathcal N \sim \operatorname{Poi}(x)$, restricted to the event that this sum is positive (see also Section~\ref{sec:subordinators}). Hence, $G$ is an integral over a family of generating functions and thus the generating function of a (not necessarily locally finite) measure. But by Corollary~\ref{cor:mu}, $G(z)$ is finite for $z\in(0,1)$, so that this measure is indeed locally finite.
This finishes the proof of the sufficiency part of the theorem.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{th:qsd}]
Let $\nu$ be a non-trivial $m^\alpha$-invariant measure of the BGW process $Z$, $\alpha \le 1$. By Theorem~\ref{th:main}, it remains to show that $\nu$ is finite if and only if $\alpha\in(0,1]$. For $\alpha = 1$ this is immediate; suppose therefore that $\alpha<1$. Denote by $G$ the generating function of the measure $\nu$ and let $\Lambda$ be the measure from Theorem~\ref{th:main}. Then Lemma~\ref{lem:Lambda} easily implies that the integral $\int_0^\infty (1-e^{-x})x^{-\alpha}\,\Lambda(dx)$ converges at the origin for all $\alpha < 1$ but converges at $\infty$ if and only if $\alpha > 0$. Hence, for $\alpha < 1$, $\|\nu\| = G(1) < \infty$ if and only if $\alpha \in (0,1)$. This proves the theorem.
\end{proof}

\begin{proof}[Proof of Theorem~\ref{th:true_main}]
The first two parts are known; see the discussion before the statement of the theorem. The third part can be proven by adapting the proof of Theorem~\ref{th:main}. Alternatively, it can be derived from Theorem~\ref{th:main} as follows: Let $\lambda = m^\alpha > 1$ (hence, $\alpha < 0$). Let $\nu$ be a measure on $\ensuremath{\mathbb{N}}$ and denote by $\nu^*$ its restriction to $\ensuremath{\mathbb{N}}^*$. Denote by $G$ and $G^*$ the generating functions of $\nu$ and $\nu^*$, respectively; note that $G^* = G - G(0)$. Since $0$ is an absorbing state for the process $Z$, the measure $\nu$ is a true $\lambda$-invariant measure for $Z$ if and only if the following two statements hold:
\begin{enumerate}
\item $\nu^*$ is a $\lambda$-invariant measure for $Z$.
\item $\nu P(0) = \lambda \nu(0)$, equivalently, $G(F(0)) = \lambda G(0)$.
\end{enumerate}
By Theorem~\ref{th:main}, the first statement is equivalent to
\[
G(z) = \int_0^\infty e^{(H(z)-1)x}\frac 1 {x^\alpha}\,\Lambda(dx) + C,
\]
for some constant $C$ and $\Lambda$ as in the statement of Theorem~\ref{th:main} (it can easily be seen that the integral converges using Lemma~\ref{lem:Lambda}, as in the proof of Corollary~\ref{cor:mu}). Together with \eqref{eq:H2} and the self-similarity of $\Lambda$, this gives
\[
G(F(0)) = \int_0^\infty e^{-mx}\frac 1 {x^\alpha}\,\Lambda(dx) + C = \lambda \int_0^\infty e^{-x}\frac 1 {x^\alpha}\,\Lambda(dx) + C = \lambda G(0) + (1-\lambda) C.
\]
Hence, given the first statement, the second statement is equivalent to $(1-\lambda)C = 0$, i.e.\ (since $\lambda \ne 1$) to $C=0$, which proves the theorem.
\end{proof}

\section{Discussion}
\label{sec:discussion}

\subsection{The Kesten--Spitzer formula for invariant measures and the minimal Martin entrance boundary}
\label{sec:martin}

To our knowledge, Theorem~\ref{th:main}, and more specifically Formula \eqref{eq:G_rep}, was previously known only for $\lambda = 1$ (i.e., $\alpha=0$). In this case, one simply says \emph{invariant} instead of $1$-invariant. In the literature (Kesten--Spitzer \cite{Spitzer1967}, Athreya--Ney~\cite[p.\,69]{Athreya1972}, Hoppe \cite{Hoppe1977}; see Section~\ref{sec:history} below for the history of the result), one generally finds this result in the following form: A function $Q(z)$ is the generating function of an invariant measure for the BGW process $Z$ if and only if there exists a constant $c\ge0$ and a probability measure $\mu$ on $[0,1)$, such that
\begin{equation}
\label{eq:Q_rep2}
Q(z) = c \int_0^1 \sum_{n=-\infty}^\infty [\exp((H(z)-1)m^{n-t}) - \exp(-m^{n-t})]\,\mu(dt).
\end{equation}
This is the Choquet decomposition of $Q(z)$ as a convex combination of generating functions of extremal invariant measures.
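In the pure death case $F(z) = 1-m(1-z)$, where $H(z) = z$ (see Section~\ref{sec:history}), the invariance of the extremal measures behind \eqref{eq:Q_rep2} can be checked by hand: shifting the summation index by one shows that $Q_t(z) = \sum_{n}[\exp((z-1)m^{n-t}) - \exp(-m^{n-t})]$ satisfies $Q_t(F(z)) - Q_t(F(0)) = Q_t(z)$, i.e. \eqref{eq:gf_measure} with $\lambda = 1$. The following hedged numeric sketch confirms this telescoping identity, truncating the doubly infinite sum (the parameter values are arbitrary choices, not taken from the text):

```python
import math

# Pure death case: F(z) = 1 - m(1-z), H(z) = z.  Sketch check that the
# extremal generating functions behind (eq:Q_rep2),
#   Q_t(z) = sum_n [exp((z-1) m^{n-t}) - exp(-m^{n-t})],
# satisfy the invariance relation Q_t(F(z)) - Q_t(F(0)) = Q_t(z).
m, t = 0.5, 0.3               # arbitrary: mean m in (0,1), phase t in [0,1)

def F(z):
    """Offspring generating function of the pure death process."""
    return 1 - m * (1 - z)

def Q(z, N=80):
    """Truncation of the doubly infinite sum over n in [-N, N]."""
    return sum(math.exp((z - 1) * m ** (n - t)) - math.exp(-m ** (n - t))
               for n in range(-N, N + 1))

for z in [0.1, 0.5, 0.9]:
    assert abs(Q(F(z)) - Q(F(0)) - Q(z)) < 1e-9
```

The truncation error is only at the two boundary indices, where the summands are already negligible for moderate $N$, so the numeric identity holds well within the tolerance.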
One easily sees that \eqref{eq:G_rep} (with $\alpha=0$) and \eqref{eq:Q_rep2} are equivalent: Given $c\ge0$ and a measure $\mu$ such that \eqref{eq:Q_rep2} holds, we can define a measure $\Lambda$ on $[1,m^{-1})$ as the push-forward of the measure $c\mu$ by the map $t\mapsto m^{-t}$. The measure $\Lambda$ can then be uniquely extended to $(0,\infty)$ in such a way that $\Lambda(A) = \Lambda(mA)$ for every Borel set $A$. One easily checks that \eqref{eq:G_rep} holds with this $\Lambda$ and $\alpha=0$. Conversely, given such a measure $\Lambda$, one can define a finite measure $\widetilde\mu$ on $[0,1)$ as the push-forward of the measure $\Lambda(\cdot \cap [1,m^{-1}))$ by the inverse map $x\mapsto \log_{m^{-1}} x$. Setting $c=\widetilde\mu([0,1))$ and $\mu = \widetilde\mu/c$ gives \eqref{eq:Q_rep2}.

We now relate formula \eqref{eq:Q_rep2} to Martin boundary theory; see \cite[Chapter~10]{Kemeny1976} for an introduction to this theory\footnote{Another very good and more modern introduction is \cite[Chapter~IV]{Woess2000}. It considers only Martin \emph{exit} boundaries, but one can reduce to this case in our setting by considering the transition matrix $\hat P_{ij} = \nu_{\text{min}}(j)P_{ji}/\nu_{\text{min}}(i)$ instead of $P$.}. We briefly recall the basic constructions of interest to us. Let $P$ be the transition matrix of a transient sub-Markov chain on $\ensuremath{\mathbb{N}}^*$. Define the Green kernel $G(x,y) = \sum_{k=0}^\infty (P^k)_{xy}$ and assume there is a state $o\in\ensuremath{\mathbb{N}}^*$ such that $G(x,o) > 0$ for all $x\in\ensuremath{\mathbb{N}}^*$ (in the case of subcritical BGW processes killed at $0$, we choose $o$ to be the span of the reproduction law). This allows us to define the Martin kernel by $K(x,y) = G(x,y)/G(x,o)$.
The \emph{Martin entrance compactification} of $\ensuremath{\mathbb{N}}^*$ is then defined as the smallest compactification $\mathcal M$ of the discrete set $\ensuremath{\mathbb{N}}^*$ such that all measures $K(x,\cdot)$ extend continuously (w.r.t.\ pointwise convergence of measures seen as functions on $\ensuremath{\mathbb{N}}^*$). Every point $\xi$ on the \emph{Martin entrance boundary} $B = \mathcal M\backslash \ensuremath{\mathbb{N}}^*$ thus defines an invariant measure $K(\xi,\cdot)$ with mass 1 at $o$. Moreover, every \emph{extremal} invariant measure (i.e. one that cannot be written as a non-trivial convex combination of invariant measures) arises in this way. The set of those points $\xi\in B$ for which $K(\xi,\cdot)$ is extremal is called the \emph{minimal} Martin entrance boundary, denoted by $B^*$. The \emph{Poisson--Martin integral formula} now assigns to every invariant measure $\nu$ a unique integral representation in terms of extremal invariant measures, namely,
\[
\nu = \int_{B^*} K(\xi,\cdot)\,\mu^\nu(d\xi),
\]
for a finite measure $\mu^\nu$ on $B^*$.

The construction outlined in the previous paragraph is the approach used by Kesten and Spitzer \cite{Spitzer1967} to derive formula \eqref{eq:Q_rep2} (for pure death processes). In particular, their proof implies that the extremal invariant measures of a subcritical BGW process are (up to multiplicative constants) the measures $\nu_t$, $t\in S^1 = [0,1]_{0\sim 1}$, with generating functions
\[
\sum_{k=1}^\infty \nu_t(k) z^k = \sum_{n=-\infty}^\infty [\exp((H(z)-1)m^{n-t}) - \exp(-m^{n-t})].
\]
Defining $\xi_t\in B^*$ by $\nu_t = K(\xi_t,\cdot)$, the map $t\mapsto \xi_t$ is thus (by extremality) a bijection between the compact space $S^1$ and $B^*$; moreover, one easily sees that it is continuous. \emph{It follows that the minimal Martin entrance boundary $B^*$ is homeomorphic to the circle $S^1$.}

Now let $\lambda > 0$.
The above construction can be performed with the operator $P/\lambda$ instead of $P$, giving rise to a $\lambda$-boundary theory for all $\lambda$ such that the $\lambda$-Green function $G_\lambda(x,y) = \sum_{k=0}^\infty \lambda^{-k}(P^k)_{xy}$ is finite. The infimum of these values of $\lambda$ is the \emph{spectral radius} $\rho = \lim_{k\rightarrow\infty} ((P^k)_{oo})^{1/k}$ \cite[Chapter~II]{Woess2000}, which equals $m$ for subcritical BGW processes by \eqref{eq:yaglom} and \eqref{eq:pn}. For $\lambda > \rho = m$, Theorem~\ref{th:main} then implies a formula similar to \eqref{eq:Q_rep2}. An argument as in the last paragraph yields the following:
\begin{corollary}
\label{cor:stable}
For every $\lambda > m$, the minimal $\lambda$-Martin entrance boundary of the BGW process $(Z_n)_{n\ge 0}$ is homeomorphic to the circle $S^1$.
\end{corollary}
In particular, Corollary \ref{cor:stable} shows that all minimal $\lambda$-Martin entrance boundaries, $\lambda > m$, are homeomorphic. This remarkable fact is part of a property called \emph{stability} by some authors \cite[p.\,301]{Woess2000} and holds true, for example, for the (exit) boundary of random walks on trees and hyperbolic graphs. We know of no general theory that yields this result without explicitly calculating the $\lambda$-Martin entrance boundaries for every $\lambda$.

We finish this section with a discussion of the case $\lambda = m$, for which Theorem~\ref{th:main} gives that the minimal $m$-Martin entrance boundary is trivial, i.e. there exists up to multiplicative constants only one $m$-invariant measure. This fact is quite common and holds in general, for example, if the process is $m$-recurrent, i.e. if $G_m(x,y) = \infty$ for all (some) $x,y$ \cite[Chapter~IV]{Woess2000}.
Note that $m$-recurrence is equivalent to recurrence of the so-called $Q$-process, which is in our case the Markov process with transition matrix $Q$ given by $Q_{ij} = m^{-1}P_{ij} j/i$ (recall that the function $h(i) = i$ is $m$-harmonic for our process, i.e. $Ph = mh$). It is remarkable that in our setting the $Q$-process may be positive recurrent, null recurrent or transient. This fact does not seem to appear in the usually cited monographs on branching processes\footnote{It was even claimed in the literature that recurrence always holds \cite[p.\,972]{Pakes1999}.}; only a criterion for positive recurrence is easy to find (see e.g. \cite[p.\,59]{Athreya1972}): the $Q$-process is positive recurrent if and only if $\ensuremath{\mathbf{E}}_1[Z_1\log Z_1] < \infty$. However, Joffe already proved the following recurrence criterion in 1967 \cite{Joffe1967}: Let $F(z)$ denote the generating function of the offspring distribution and define $\eta$ by $1-F(z) = m(1-z)(1-\eta(z))$. Set $q_n = \ensuremath{\mathbf{P}}_1(Z_n = 0)$. Then the $Q$-process is recurrent if and only if the following sum diverges:
\[
\sum_{n=1}^\infty\prod_{k=1}^{n} (1-\eta(q_k)).
\]
Since $1-q_n = m^{n+o(n)}$ by \eqref{eq:pn}, one can easily construct examples where the above sum converges (so that the $Q$-process is transient), for example when $\eta(z)\ge 1/|\log(1-z)|^\beta$ for some $\beta < 1$ and $z$ close to $1$.

\subsection{Probabilistic interpretation of \texorpdfstring{\eqref{eq:G_rep}}{(\ref{eq:G_rep})} and relation with semi-stable subordinators}
\label{sec:subordinators}

Let $\nu$ be a QSD of eigenvalue $m^\alpha$ of the BGW process, $\alpha\in(0,1)$. By Theorem~\ref{th:qsd}, its generating function admits the representation \eqref{eq:G_rep} with a measure $\Lambda$ as in the statement of the theorem. Let $\mathcal N$ be a random variable whose generating function is equal to the right-hand side of \eqref{eq:G_rep}, but with $H(z) \equiv z$.
Then $\nu$ is the law of the sum of $\mathcal N$ iid random variables distributed according to the Yaglom distribution $\nu_{\text{min}}$. As for the law of $\mathcal N$, expanding the exponential in \eqref{eq:G_rep} gives \begin{equation} \label{eq:N} \forall k\ge 1: \ensuremath{\mathbf{P}}(\mathcal N = k) = \int_0^\infty e^{-x}\frac{x^k}{k!}\frac 1 {x^\alpha}\,\Lambda(dx),\quad \ensuremath{\mathbf{P}}(\mathcal N = 0) = 0. \end{equation} Heuristically, $\mathcal N$ is therefore a Poisson-distributed random variable with a random parameter drawn according to the measure $x^{-\alpha}\,\Lambda(dx)$ and conditioned to be non-zero. A way to make this rigorous (note that the measure $x^{-\alpha}\Lambda(dx)$ has infinite mass!) is using \emph{subordinators}, of which we first recall the basic facts. A subordinator $S = (S_t)_{t\ge 0}$ is a real-valued, non-decreasing process with stationary and independent increments. We always assume $S_0 = 0$. Then the law of $S$ is determined by its \emph{cumulant} $\kappa_S(\theta) = -\log \ensuremath{\mathbf{E}}[e^{-\theta S_1}]$, which satisfies the \emph{L\'evy--Khintchine} formula (see e.g.\ \cite[Ch.\,13]{Kallenberg1997} or \cite{Bertoin1996}), \begin{equation} \label{eq:kappa} \kappa_S(\theta) = a\theta + \int_0^\infty (1-e^{-\theta x})M(dx), \end{equation} where $a\ge0$ is called the \emph{drift} and $M$ is a measure on $(0,\infty)$ called the \emph{L\'evy measure} of the subordinator and satisfying $\int_0^\infty (1-e^{-x}) M(dx) < \infty$. If $T = (T_t)_{t\ge0}$ is another subordinator (or, in general, a L\'evy process) independent of $(S_t)_{t\ge0}$, then the \emph{subordinated process} $T\circ S := (T_{S_t})_{t\ge0}$ is again a subordinator (L\'evy process) with cumulant \begin{equation} \label{eq:kappa_sub} \kappa_{T\circ S} = \kappa_S\circ\kappa_T. 
\end{equation}
If $N = (N_t)_{t\ge0}$ is a driftless subordinator whose L\'evy measure is a probability measure on $\ensuremath{\mathbb{N}}^*$, then the subordinators $N$ and $N\circ S$ both take values in $\ensuremath{\mathbb{N}}$, and $N\circ S$ is therefore again a driftless subordinator with L\'evy measure concentrated on $\ensuremath{\mathbb{N}}^*$. Let $H$ and $G$ denote the generating functions of the L\'evy measures of $N$ and $N\circ S$, respectively. Note that $H(1) = 1$ and $G(1)<\infty$, because a L\'evy measure on $\ensuremath{\mathbb{N}}^*$ is necessarily finite. It follows from \eqref{eq:kappa} (applied first to $N\circ S$ and then to $N$) and \eqref{eq:kappa_sub} that
\begin{equation}
\label{eq:GH}
G(1) - G(z) = \kappa_{N\circ S}(-\log z) = \kappa_S(1-H(z)).
\end{equation}
Setting $z=0$ yields $G(1) = \kappa_S(1)$. Rearranging \eqref{eq:GH}, we get with \eqref{eq:kappa},
\begin{equation}
\label{eq:GH2}
G(z) = \kappa_S(1) - \kappa_S(1-H(z)) = aH(z) + \int_0^\infty (e^{(H(z)-1)x}-e^{-x})\,M(dx).
\end{equation}
We apply the previous equations to the QSD $\nu$ by setting $a = 0$ and $M(dx) = \frac 1 {x^\alpha}\,\Lambda(dx)$; note that $\kappa_S(1) = \int_0^\infty (1-e^{-x})\,M(dx) = 1$, since $\nu$ is a probability measure. In particular, $M$ is a L\'evy measure, so that the subordinator $S$ is well defined. We also let the L\'evy measure of the subordinator $N$ be $\nu_{\text{min}}$; we recall that its generating function is indeed denoted by $H(z)$. Equation \eqref{eq:GH2} then gives a probabilistic interpretation to the QSD $\nu$: it says that \emph{$\nu$ is the L\'evy measure of the subordinator $N\circ S$ (or, equivalently, the law of its first jump)}. Note that the case $\alpha = 1$ may also be covered by setting $a = 1$ and $M \equiv 0$ in \eqref{eq:kappa}, i.e. taking the subordinator $S = \mathrm{Id}$. This fact allows for an alternative statement of Theorem~\ref{th:qsd}.
For this, we introduce the notion of a \emph{semi-stable} subordinator: we say that the subordinator $S = (S_t)_{t\ge0}$ is \emph{$(\alpha,m)$-semi-stable}\footnote{This terminology is taken from \cite[Section~9.2]{Embrechts2002}.}, if
\begin{equation}
\label{eq:semistable}
(S_{m^\alpha t})_{t\ge0} \stackrel{\text{law}}{=} (m S_t)_{t\ge0},
\end{equation}
or, in terms of the cumulant,
\begin{equation}
\label{eq:ss_cumulant}
\kappa_S(m \theta) = m^\alpha \kappa_S(\theta),\quad \theta\ge0.
\end{equation}
One easily obtains from \eqref{eq:kappa} and \eqref{eq:ss_cumulant} the following characterization of semi-stable subordinators: A subordinator $S = (S_t)_{t\ge0}$ with drift $a$, L\'evy measure $M$, satisfying $S_0 = 0$ and $S \not\equiv 0$, is $(\alpha,m)$-semi-stable, $\alpha\in\ensuremath{\mathbb{R}}$, $m\in(0,1)$, if and only if
\begin{itemize}
\item $\alpha\in(0,1)$, $a = 0$ and $M(dx) = \frac 1 {x^\alpha}\,\Lambda(dx)$ for a measure $\Lambda$ on $(0,\infty)$ satisfying $\Lambda(A) = \Lambda(mA)$ for all Borel $A\subset (0,\infty)$, \emph{or}
\item $\alpha = 1$, $a > 0$ and $M\equiv 0$.
\end{itemize}
The previous arguments then give the following equivalent statement of Theorem~\ref{th:qsd}:
\begin{theorem}
\label{th:qsd2}
The quasi-stationary distributions of eigenvalue $m^\alpha$ of the BGW process, $\alpha\in\ensuremath{\mathbb{R}}$, are exactly the L\'evy measures of the subordinators $N \circ S$, where $N$ is the driftless subordinator with L\'evy measure $\nu_{\text{min}}$ and $S$ is an $(\alpha,m)$-semi-stable subordinator with $\kappa_S(1) = 1$.
\end{theorem}
\begin{remark}
One can drop the requirement $\kappa_S(1) = 1$ in the above theorem if one replaces ``are exactly the L\'evy measures'' by ``are exactly the laws of the first jumps''.
\end{remark}
\paragraph{Composition of generating functions.}
Let $G_\alpha$ be the generating function of a QSD of $Z$ with eigenvalue $m^\alpha$, $\alpha\in(0,1]$.
Furthermore, let $G_\beta$ be the generating function of an $m^{\alpha\beta}$-invariant measure, $\beta\le 1$, of the pure death process with mean offspring $m^\alpha$, i.e. with $F(z) = 1-m^\alpha(1-z)$. It is easy to see from \eqref{eq:gf_measure} that the composition $G_\beta\circ G_\alpha$ is the generating function of an $m^{\alpha\beta}$-invariant measure of $Z$ (note that the Yaglom distribution of a pure death process is always $\delta_1$, hence its generating function is the identity $z\mapsto z$). If $\Lambda_\alpha$, $\Lambda_\beta$ and $\Lambda_{\alpha\beta}$ are the measures from Theorem~\ref{th:main} corresponding to $G_\alpha$, $G_\beta$ and $G_\beta\circ G_\alpha$, respectively, then one may ask the following question:
\begin{question}
\label{q:lambda}
Is there a simple formula expressing $\Lambda_{\alpha\beta}$ in terms of $\Lambda_\alpha$ and $\Lambda_\beta$?
\end{question}
We were not able to answer this question and are in fact doubtful that the answer is positive in general. In order to rephrase this problem in a more familiar setting, consider the case where $G_\beta$ is the generating function of a probability measure, so that in particular $\beta\in(0,1]$. Let $S^\alpha$ and $S^\beta$ be the $(\alpha,m)$- and $(\beta,m^\alpha)$-semi-stable subordinators associated to $G_\alpha$ and $G_\beta$ by Theorem~\ref{th:qsd2}. In particular, $\kappa_{S^\alpha}(1) = \kappa_{S^\beta}(1) = 1$. By \eqref{eq:GH} and \eqref{eq:kappa_sub}, we then have
\[
1- G_\beta\circ G_\alpha(z) = \kappa_{S^\beta}(1-G_\alpha(z)) = \kappa_{S^\beta}(\kappa_{S^\alpha}(1-H(z))) = \kappa_{S^\alpha\circ S^\beta}(1-H(z)).
\]
Hence, Question~\ref{q:lambda} is equivalent to the question of whether there is a simple formula expressing the L\'evy measure of $S^\alpha\circ S^\beta$ in terms of the L\'evy measures of $S^\alpha$ and $S^\beta$.
To the best of our knowledge, no such formula is known, and, given the fact that the Laplace transform of a measure has no simple inversion formula, there does not seem to be much hope. \subsection{History of the problem} \label{sec:history} The study of $\lambda$-invariant measures of subcritical BGW processes has a rich history which we aim to elucidate here. The starting point seems to be Yaglom's 1947 article \cite{Yaglom1947}, in which he showed the existence of what is now called the Yaglom limit of a subcritical BGW process under the assumption of finite variance\footnote{The assumption of finite variance was later removed in \cite{Heathcote1967,Joffe1967}.}. The BGW process appeared again as an important example in the seminal paper by Seneta and Vere-Jones \cite{Seneta1966} on QSD of Markov processes on (countably) infinite state spaces. In this work, the authors show that subcritical BGW processes admit a one-parameter family of QSD whose generating functions are $1-(1-H(z))^\alpha$, $\alpha\in(0,1]$, with $H(z)$ denoting, as above, the generating function of the Yaglom limit. Rubin and Vere-Jones \cite{Rubin1968} raised the question of whether there existed other QSD. They failed to answer the question in general but showed that these QSD were the only ones with regularly varying tails. These works on QSD of subcritical BGW processes seem to have been independent of other works on ($1$-)invariant measures: In 1965, Kingman \cite{Kingman1965} showed that invariant measures for a subcritical BGW process are not unique, which, as claimed by Kingman, disproved a conjecture by Harris. A full characterization of invariant measures, Formula \eqref{eq:Q_rep2}, was then given by Kesten and Spitzer in 1967 \cite{Spitzer1967} (they also gave credit to H.~Dinges for deriving the formula independently), motivated by the need to find examples of explicitly calculable Martin boundaries for Markov processes.
Spitzer's note contained only a brief sketch of a proof and covered only the pure death case, but he claimed that the method would work as well for arbitrary offspring distributions if $\ensuremath{\mathbf{E}}[Z_1\log Z_1] < \infty$. A full proof of this fact appeared in Athreya and Ney's well-known monograph \cite[p.\,69]{Athreya1972}, which also covers the Yaglom limit but does not treat QSD in general. In the 1970s, Hoppe considered again the question of the uniqueness of the QSD with generating functions $1-(1-H(z))^\alpha$, $\alpha\in(0,1]$. Like many of the previous works on branching processes, he made extensive use of generating functions. The starting point was the following equation, which, for the generating function $G$ of a probability measure $\nu$, is easily seen to be equivalent to \eqref{eq:gf_measure}: \begin{equation} \label{eq:gf} 1- G(F(z)) = \lambda(1-G(z)). \end{equation} Hence, finding all QSD of eigenvalue $\lambda$ amounts to finding all probability generating functions $G$ solving \eqref{eq:gf}. Hoppe \cite{Hoppe1976} showed in 1976 that one can reduce the problem\footnote{He also showed in another article \cite{Hoppe1977} that this is true for invariant measures as well, which allowed him to prove Formula \eqref{eq:Q_rep2} without additional conditions on the offspring distribution. Note that Formula \eqref{eq:Q_rep2} was again reproved in the general case in \cite{Alsmeyer2006}, the authors of which were apparently unaware of Hoppe's work.} to the pure death case $F(z) = 1-m(1-z)$: he proved that a generating function $G$ satisfies \eqref{eq:gf} with $\lambda=m^\alpha$ if and only if there exists a generating function $A(z)$, such that $G(z) = A(H(z))$ and \begin{equation} \label{eq:A_eq} 1- A(1-m(1-z)) = m^\alpha(1-A(z)).
\end{equation} He also remarked that the general solution $A(z)$ to this equation is of the form \begin{equation} \label{eq:A} A(z) = 1-(1-z)^\alpha\exp(\psi(-\log(1-z))), \end{equation} for a $|\log m|$-periodic function $\psi$ with $\psi(0)=0$. The drawback of this representation, apart from its uncertain probabilistic meaning, is that it is not immediate from \eqref{eq:A} whether the Taylor series of $A(z)$ has only non-negative coefficients, i.e.\ whether $A(z)$ is the generating function of a probability distribution. Hoppe \cite{Hoppe1976} was not even sure whether such a function exists for a non-constant $\psi$. However, one can show (using for example theorems by Flajolet and Odlyzko \cite[Proposition~1]{FO1990}) that for every $c_1,\ldots,c_n$ there exists $c_0 > 0$, such that for $|c| < c_0$, the Taylor expansion at 0 of the function \[ A(z) = 1-(1-z)^\alpha\exp\left(c\sum_{k=1}^nc_k\sin\left(\frac{2\pi k}{\log m} \log(1-z)\right)\right), \] has only non-negative coefficients (similar reasoning was used by Kingman in his article cited above \cite{Kingman1965}). The function is therefore the generating function of a probability distribution which is a QSD of the pure death process. This gives an alternative proof of non-uniqueness of the QSD but no satisfying characterization. In 1980, Hoppe \cite{Hoppe1980} therefore published another representation of solutions of \eqref{eq:A_eq}: he showed that there exists a one-to-one correspondence between QSD and invariant measures of the BGW process. Again, he used functional equations: by \eqref{eq:gf_measure}, a (non-trivial) measure $\nu$ on $\ensuremath{\mathbb{N}}$ is an invariant measure of the BGW process if and only if there exists a normalizing constant $c>0$, such that the generating function $Q(z) = \sum_{n=1}^\infty c\nu(n)z^n$ satisfies the functional equation \begin{equation} \label{eq:Q} Q(F(z)) = 1+Q(z),\quad Q(0)=0.
\end{equation} Hoppe \cite{Hoppe1980} then showed that for every $\alpha\in(0,1]$, the function\footnote{When checking this formula in \cite{Hoppe1980}, one should be careful about the typographical ambiguity there: the appearances of ``$\log mP(t)$'' should be replaced by ``$(\log m)P(t)$''.} \begin{equation} \label{eq:qsd_Q} G_\alpha(z) = \frac{\int_0^z H'(w) m^{(\alpha-1) Q(w)}\,dw}{\int_0^1 H'(w)m^{(\alpha-1) Q(w)}\,dw} \end{equation} is the generating function of a QSD of eigenvalue $m^\alpha$ of the BGW process and conversely, for every such function, setting \begin{equation} \label{eq:Q_qsd} Q(z) = \frac{\log(1-G_\alpha(z))}{\log m^{\alpha}} \end{equation} defines a generating function which solves \eqref{eq:Q} (note that this is a special case of the compositions of generating functions studied at the end of Section~\ref{sec:subordinators}). This yields for every $\alpha\in(0,1)$ a bijection between all QSD of eigenvalue $m^{\alpha}$ and all invariant measures, and thus apparently solves the problem of characterizing all QSD. However, the non-linear transformations from Equations \eqref{eq:qsd_Q} and \eqref{eq:Q_qsd} do not seem to be easy to tame; for example, we are not aware of any direct way of obtaining a formula like \eqref{eq:G_rep} from \eqref{eq:Q_rep2} using the above formulae. More specifically, we are unable to relate the measures $\Lambda$ in the respective representations of $G_\alpha$ and $Q$ in \eqref{eq:G_rep}, when $G_\alpha$ and $Q$ are related through \eqref{eq:qsd_Q} or \eqref{eq:Q_qsd}. We do not believe that there exists a simple relation between them, in line with our reservations concerning Question~\ref{q:lambda}. Therefore, to the best of our knowledge, the current article provides a new approach to $\lambda$-invariant measures (and, in particular, quasi-stationary distributions) of subcritical BGW processes, yielding for the first time a complete characterization of these measures via an explicit formula. \end{document}
\begin{document} \title{A Stability Theorem for Matchings in Tripartite $3$-Graphs} \begin{abstract} It follows from known results that every regular tripartite hypergraph of positive degree, with $n$ vertices in each class, has matching number at least $n/2$. This bound is best possible, and the extremal configuration is unique. Here we prove a stability version of this statement, establishing that every regular tripartite hypergraph with matching number at most $(1 + \varepsilon)n/2$ is close in structure to the extremal configuration, where ``closeness'' is measured by an explicit function of $\varepsilon$. We also answer a question of Aharoni, Kotlar and Ziv about matchings in hypergraphs with a more general degree condition. \end{abstract} \section{Introduction} One of the simplest statements about matchings in bipartite graphs is the following corollary of Hall's Theorem. \begin{thm} \label{cor:bipartitepm} Let $G$ be a bipartite regular multigraph of positive degree. Then $G$ has a perfect matching. \end{thm} Our principal aim in this paper is to study the hypergraph analogue of this result. A $k$-uniform multihypergraph (in which multiple edges are allowed), which we will call a $k$-graph for short, is \emph{$k$-partite} if its vertices can be partitioned into $k$ classes $V_1, \dots, V_k$ such that every edge has exactly one vertex from each class $V_i$. In this paper, we will restrict our attention to $3$-partite $3$-graphs. For these, we have the following version of Theorem~\ref{cor:bipartitepm}. \begin{thm} \label{thm:tripartitenhalf} Let $\mathcal{H}$ be a regular $3$-partite $3$-graph of positive degree, with $n$ vertices in each class. Then $\mathcal{H}$ has a matching of size at least $\frac{n}{2}$.
\end{thm} This is an immediate consequence of a theorem of Aharoni~\cite{aharoni}, which verified the $3$-partite case of a famous old conjecture due to Ryser~\cite{ryser} relating the minimum size $\tau(\mathcal{H})$ of a vertex cover of $\mathcal{H}$ (a set of vertices meeting all edges) to the maximum size $\nu(\mathcal{H})$ of a matching in $\mathcal{H}$. \begin{thm}[Aharoni's Theorem] \label{thm:aharoni} Let $\mathcal{H}$ be a $3$-partite $3$-graph. Then $\tau(\mathcal{H}) \leq 2\nu(\mathcal{H})$. \end{thm} \begin{proof}[Proof of Theorem~\ref{thm:tripartitenhalf}] Let $\mathcal{H}$ be an $r$-regular $3$-partite $3$-graph with $n$ vertices in each class. Then $\mathcal{H}$ has $rn$ edges, but each vertex only intersects $r$ of them, hence any vertex cover must have at least $\frac{rn}{r} = n$ vertices, so $\tau(\mathcal{H}) \geq n$. By Aharoni's Theorem, we have $\nu(\mathcal{H}) \geq \frac{\tau(\mathcal{H})}{2} \geq \frac{n}{2}$, which proves the theorem. \end{proof} Theorem~\ref{thm:tripartitenhalf} is best possible, as can be seen by the following example. The \emph{truncated Fano Plane} $\mathcal{F}$ (also called the \emph{Pasch configuration}) is the $3$-partite $3$-graph with six vertices $x_1$, $x_2$, $x_3$, $y_1$, $y_2$, $y_3$ and four edges $x_1x_2x_3$, $x_1y_2y_3$, $y_1x_2y_3$, $y_1y_2x_3$, where the sets $\lset{x_i, y_i}$ are the vertex classes. It is easy to check that $\mathcal{F}$ is $2$-regular and $\nu(\mathcal{F}) = 1$. For a hypergraph $\mathcal{H}$ and an integer $s$, we denote by $s\cdot\mathcal{H}$ the hypergraph with the same vertices as $\mathcal{H}$ and with each edge replaced by $s$ parallel copies. If $\mathcal{H}$ consists of $\frac{n}{2}$ disjoint copies of $\frac{r}{2}\cdot\mathcal{F}$, then $\nu(\mathcal{H}) = \frac{n}{2}$, illustrating the tightness of Theorem~\ref{thm:tripartitenhalf} for every even $r$ and every even $n$. 
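The stated properties of the truncated Fano plane are easy to verify by hand; for readers who prefer a mechanical check, the following short Python sketch (illustrative only, and not part of the paper's argument; the helper name \texttt{matching\_number} is ours) confirms that $\mathcal{F}$ is $2$-regular and that $\nu(\mathcal{F}) = 1$.

```python
from itertools import combinations

# The truncated Fano plane F: vertex classes {x1, y1}, {x2, y2}, {x3, y3}.
edges = [("x1", "x2", "x3"), ("x1", "y2", "y3"),
         ("y1", "x2", "y3"), ("y1", "y2", "x3")]

# 2-regularity: every vertex lies in exactly two of the four edges.
vertices = {v for e in edges for v in e}
assert all(sum(v in e for e in edges) == 2 for v in vertices)

def matching_number(edge_list):
    """Brute-force nu: largest set of pairwise disjoint edges."""
    for k in range(len(edge_list), 0, -1):
        for cand in combinations(edge_list, k):
            if all(set(e).isdisjoint(f) for e, f in combinations(cand, 2)):
                return k
    return 0

# Any two edges of F share a vertex, so no two of them form a matching.
assert matching_number(edges) == 1
```

Taking $\frac{n}{2}$ disjoint copies of $\frac{r}{2}\cdot\mathcal{F}$ then multiplies the matching number by $\frac{n}{2}$, which is exactly the extremal configuration described above.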
This is the unique extremal configuration, a fact which follows from~\cite{HNS2} in which the extremal hypergraphs for Aharoni's Theorem are characterized. Our main aim in this paper is to prove the following stability version of Theorem~\ref{thm:tripartitenhalf}. \begin{thm} \label{main} Let $r \geq 2$. Let $\mathcal{H}$ be an $r$-regular $3$-partite $3$-graph with $n$ vertices in each class, and let $\varepsilon \geq 0$. If $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, then $\mathcal{H}$ has at least $(1 - \left(22r - \frac{77}{3}\right)\varepsilon)\frac{n}{2}$ components that are copies of $\frac{r}{2}\cdot\mathcal{F}$. \end{thm} In general one may expect stronger lower bounds on the matching number for \emph{simple} hypergraphs (i.e. those without multiple edges). For example Aharoni, Kotlar and Ziv~\cite{aharonikotlarziv} asked the following: when $r\geq 3$, does there exist $\mu = \mu(r) > 0$ such that $\nu(\mathcal{H}) \geq (1 + \mu)\frac{\abs{A}}{2}$ for every simple $3$-partite $3$-graph $\mathcal{H}$ with vertex classes $A$, $B$ and $C$ in which every vertex of $A$ has degree at least $r$ and every vertex of $B \cup C$ has degree at most $r$? The following weakened version of Theorem~\ref{main} answers this question affirmatively in a stronger form (with $\mu(r) = (72r^2 - 150r + 77)^{-1}$). \begin{thm} \label{main2} Let $r \geq 2$. Let $\mathcal{H}$ be a $3$-partite $3$-graph with vertex classes $A$, $B$, and $C$, such that $\abs{A} = n$, and let $\varepsilon \geq 0$. Suppose that every vertex of $A$ has degree at least $r$, and that every vertex in $B \cup C$ has degree at most $r$. If $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, then $\mathcal{H}$ contains at least $(1 - (72r^2 - 150r + 77)\varepsilon)\frac{n}{2}$ disjoint copies of $\frac{r}{2}\cdot\mathcal{F}$. 
\end{thm} Theorem~\ref{main2} may be viewed as a direct hypergraph analogue of the corresponding weakening of Theorem~\ref{cor:bipartitepm}, in which the minimum degree of the vertices in class $A$ is required to be at least the maximum degree of the vertices in class $B$, and whose conclusion is that the bipartite graph has a matching of size $\abs{A}$. To prove Theorems~\ref{main} and~\ref{main2} we rely on a version of Hall's Theorem for hypergraphs, which uses a graph parameter $\eta$ whose definition is topological (the connectedness of the independence complex). However, the only properties of $\eta$ we will need come from known theorems which can be stated in purely graph theoretical terms. Thus none of our proofs will make any explicit reference to topology. This background material is described in Section~\ref{tool}. In Section~\ref{connbip} we prove a new lower bound on $\eta$ for line graphs of bipartite multigraphs, which will form the basis of our work in this paper. Section~\ref{stab} contains the proofs of Theorems~\ref{main} and~\ref{main2}, and in Section~\ref{last} we describe some constructions that show a limit on the amount by which our theorems could be improved. We close by mentioning a few open problems. \section{Tools} \label{tool} We begin by describing the version of Hall's Theorem for $k$-partite $k$-graphs that we will need. In this setting, the analogue of the neighbourhood of a vertex subset $S$ (which in the bipartite graph case is just an independent set of vertices) is a $(k - 1)$-partite $(k - 1)$-graph called the \emph{link} of $S$. \begin{defn} Let $\mathcal{H}$ be a $k$-partite $k$-graph with vertex classes $V_1, \dots, V_k$, and let $S \subseteq V_i$. The \emph{link} of $S$ is the $(k - 1)$-partite $(k - 1)$-graph $\lk S$ whose vertex classes are the sets $\lset{V_1, \dots, V_k} \setminus \lset{V_i}$, and whose edges are $\set{e - v}{v \in S, v \in e \in E(\mathcal{H})}$.
\end{defn} The generalization of Hall's Theorem to $k$-partite $k$-graphs~\cite{aharonihaxell, aharoniberger} can be stated in terms of a number of parameters of the link hypergraphs, for instance their matching numbers, or, as in its original formulation~\cite{aharonihaxell}, their \emph{matching width} (the maximum among all matchings of the size of the smallest matching intersecting each of its edges). The formulation we use here is based on the parameter $\eta(J)$, which is defined to be the topological connectedness of the independence complex of the graph $J$ plus $2$ (we add $2$ in order to make $\eta$ additive under disjoint union, which makes practically every formula involving it simpler; see e.g.~\cite{aharonibergerziv} for a discussion of this parameter). Our graphs $J$ will usually be subgraphs of the line graph $L(G)$ of a bipartite graph $G$. The relevant version of Hall's Theorem for hypergraphs is as follows. \begin{thm}[Hall's Theorem for Hypergraphs] \label{thm:hallsforhypergraphs} Let $\mathcal{H}$ be a $k$-partite $k$-graph with vertex classes $V_1, \dots, V_k$, and let $d \geq 0$. If $\eta(L(\lk S)) \geq \abs{S} - d$ for every subset $S \subseteq V_i$, then $\mathcal{H}$ has a matching of size at least $\abs{V_i} - d$. \end{thm} The only properties of $\eta$ we will need for our purposes are contained in the next three statements (and in fact the third follows easily from the second). The first lemma is derived from basic properties of connectedness that can be found in any textbook on topology. \begin{lem} \label{lem:connadditivity} \begin{enumerate} \item If the graph $J$ has no vertices then $\eta(J) = 0$. \item If the graph $J$ contains an isolated vertex, then $\eta(J) = \infty$. \item If $J$ and $K$ are disjoint graphs, then \[ \eta(J \cup K) \geq \eta(J) + \eta(K). \] \end{enumerate} \end{lem} Note that the last part implies in particular that adding any nonempty component to a graph increases its connectedness by at least $1$.
The next statement is Meshulam's Theorem~\cite{meshulam}, which relates $\eta(J)$ to the same parameter for two subgraphs of $J$, obtained by deleting an edge, or by what we call ``exploding'' an edge. If $J$ is a graph and $e \in E(J)$ is an edge, then we denote the edge deletion of $e$ by $J - e$. We denote the edge explosion of $e$ by $J \divideontimes e$, which is the subgraph of $J$ that remains after deleting both endpoints of $e$ and all their neighbours. \begin{thm}[Meshulam's Theorem] \label{thm:meshulam} If $J$ is a graph and $e \in E(J)$, then \[ \eta(J) \geq \min(\eta(J - e), \eta(J \divideontimes e) + 1). \] \end{thm} This result (in a different formulation) is proved in~\cite{meshulam}. For more on Meshulam's Theorem see e.g.~\cite{adamaszekbarmak} and~\cite[Section~5.3]{narins}. Various lower bounds on $\eta(J)$ in terms of other graph parameters have been proved; see e.g.~\cite{aharonibergerziv, meshulam}. Of particular interest to us is the following bound for line graphs (which was used for example in~\cite{aharonihaxell} but also follows easily from Theorem~\ref{thm:meshulam}). \begin{thm} \label{thm:matchconn} If $G$ is a multigraph, then \[ \eta(L(G)) \geq \frac{\nu(G)}{2}. \] \end{thm} In the next section, we will apply Meshulam's Theorem to obtain an alternative version of the above bound for bipartite graphs, which takes into account the maximum degree as well as the matching number. \section{The Connectedness of Line Graphs of Bipartite Multigraphs} \label{connbip} In order to state and prove our results, we will need some definitions first. If $G$ is a multigraph, and $J \subseteq L(G)$ is a subgraph of the line graph of $G$, we denote by $G_J$ the subgraph of $G$ with $V(G_J) = V(G)$ and $E(G_J) = V(J)$. Note that this makes sense, as the vertices of $J$ are a subset of the edges of $G$.
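To make the explosion operation concrete, the following small Python sketch (illustrative only; the adjacency-dictionary representation and the helper name \texttt{explode} are ours, not the paper's) implements $J \divideontimes e$: both endpoints of $e$ and all of their neighbours are deleted, and the rest of the graph is kept.

```python
def explode(adj, e):
    """Edge explosion J / e: remove both endpoints of the edge e and
    all of their neighbours; return the induced subgraph on the rest."""
    u, v = e
    assert v in adj[u]  # e must actually be an edge of J
    doomed = {u, v} | set(adj[u]) | set(adj[v])
    return {w: [x for x in nbrs if x not in doomed]
            for w, nbrs in adj.items() if w not in doomed}

# Example: a path a - b - c - d - f.  Exploding the edge (b, c)
# removes b, c and their neighbours a, d, leaving only the vertex f.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "f"], "f": ["d"]}
assert explode(path, ("b", "c")) == {"f": []}
```

In this example the exploded graph consists of the single isolated vertex $f$, so by Lemma~\ref{lem:connadditivity} its $\eta$-value is $\infty$, and Meshulam's Theorem gives information only through the deletion term $\eta(J - e)$.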
An \emph{$r$-regular $C_4$} is a bipartite multigraph consisting of a cycle of length $4$ and edges parallel to the edges of the cycle so that every vertex has degree $r$. An edge $e \in E(J)$ is called \emph{decouplable} if $\eta(J - e) \leq \eta(J)$. It is called \emph{explodable} if $\eta(J \divideontimes e) \leq \eta(J) - 1$. Note that by Meshulam's Theorem, every edge is either decouplable or explodable. A graph is called \emph{reduced} if no edge is decouplable (hence every edge is explodable). A subgraph $J' \subseteq J$ is called a \emph{reduction} of $J$ if $J'$ is reduced, $V(J') = V(J)$, and $\eta(J') \leq \eta(J)$. Note that one may obtain a reduction of a graph $J$ by iteratively deleting decouplable edges until there are none left. In the proof of our theorem, we will be applying Meshulam's Theorem to edges of the line graph, but will be regularly referring back to the original bipartite graph, whose edges are vertices of the line graph. To help eliminate confusion among vertices of the graph $G$, vertices of the line graph $L(G)$, edges of the graph, and edges of the line graph, we will use different terminology. Vertices and edges will always refer to vertices and edges of the original graph, while edges of the line graph will be called \emph{adjacencies}, or \emph{$J$-adjacencies} for $J$ a subgraph of the line graph. If a pair of edges of the graph intersect, they will be adjacent in the line graph, but not necessarily $J$-adjacent. When talking about decouplable or explodable edges of the line graph, rather than say something like ``decouplable adjacency,'' we will often refer to these as decouplable (explodable) pairs of edges (of the original graph). Our main aim in this section is to prove the following theorem. \begin{thm} \label{thm:c4freeconnbound} Let $G$ be a bipartite multigraph with maximum degree $r \geq 2$ that does not contain an $r$-regular $C_4$ component, and let $J \subseteq L(G)$. 
Then \[ \eta(J) \geq \frac{(2r - 3)\nu(G_J) + \abs{V(J)}}{6r - 7}. \] Note that this is an improvement over the bound in Theorem~\ref{thm:matchconn} whenever $\abs{V(J)} \geq \frac{2r - 1}{2}\nu(G_J)$, and agrees with the bound when equality holds. In order to prove it, we will need the following lemma. \begin{lem} \label{lem:explosiontypes} Let $G$ be a bipartite multigraph with maximum degree $r \geq 2$ that does not contain an $r$-regular $C_4$ component, and let $J \subseteq L(G)$ be reduced and nonempty. Then if $\eta(J) \neq \infty$, $J$ contains an explodable pair $me$ of one of the following types: \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \renewcommand{\theenumi}{(\arabic{enumi})} \item $\nu(G_{J \divideontimes me}) \geq \nu(G_J) - 1$ and $\abs{V(J \divideontimes me)} \geq \abs{V(J)} - (3r - 2)$, \item $\nu(G_{J \divideontimes me}) \geq \nu(G_J) - 2$ and $\abs{V(J \divideontimes me)} \geq \abs{V(J)} - (2r - 1)$, or \item every reduction $J'$ of $J \divideontimes me$ contains an explodable pair $m'e'$ such that $\nu(G_{J' \divideontimes m'e'}) \geq \nu(G_J) - 3$, and $\abs{V(J' \divideontimes m'e')} \geq \abs{V(J)} - (6r - 5)$. \end{enumerate} \end{lem} \begin{proof}[Proof of Theorem~\ref{thm:c4freeconnbound} from Lemma~\ref{lem:explosiontypes}] Let $G$ be a bipartite multigraph with maximum degree $r \geq 2$ that does not contain an $r$-regular $C_4$ component, and let $J \subseteq L(G)$. Also, suppose that $\abs{V(J)} \geq \frac{2r - 1}{2}\nu(G_J)$ (otherwise we may simply apply Theorem~\ref{thm:matchconn} to prove our theorem). We construct a sequence of subgraphs $J_0, \dots, J_n$ with $J_0 = J$ and $J_n$ having no edges, in which $J_i$ is obtained from $J_{i - 1}$ by either deleting a decouplable $J_{i - 1}$-adjacency or exploding an explodable pair of edges in $G_{J_{i - 1}}$. This means that $\eta(J_{i - 1}) \geq \eta(J_i)$, with strict inequality whenever we perform an explosion.
We start by iteratively deleting decouplable adjacencies until we have a reduced subgraph $J_k \subseteq J$. Applying Lemma~\ref{lem:explosiontypes}, we find that there is an explodable pair of type (1), (2), or (3). We explode this pair to arrive at $J_{k + 1}$. In the case of an explosion of type (3), we then iteratively decouple decouplable pairs to arrive at a reduction $J'$ of $J_{k + 1}$ and then explode $m'e'$. We continue in this fashion until $J_n$ has no edges. In the end, we will get a bound $\eta(J) \geq t + \eta(J_n)$, where $t$ is the number of explosions we perform in the sequence. Let $x_i$ denote the number of explosions of type ($i$). Note that for every explosion of type (3), we perform another explosion, so the total number of explosions is $t = x_1 + x_2 + 2x_3$. If $J_n$ has a vertex, it is isolated, which would show $\eta(J) = \infty$, so we may assume that $J_n$ is the empty graph, and so $\nu(G_{J_n}) = 0$ and $\eta(J_n) = 0$. Since the matching number is only affected by explosions, we thus obtain a bound \[ x_1 + 2x_2 + 3x_3 \geq \nu(G_J), \] since explosions of type ($i$) decrease the matching number by at most $i$. Similarly, these explosions must reduce the vertex number to $\abs{V(J_n)} = 0$, giving us the bound \[ (3r - 2)x_1 + (2r - 1)x_2 + (6r - 5)x_3 \geq \abs{V(J)}. \] Since we do not assume any control over the values of $x_i$, we suppose that we obtain the worst bound, where $t = x_1 + x_2 + 2x_3$ is minimized among all triples of non-negative integers $(x_1, x_2, x_3)$ satisfying the above two constraints. Relaxing the integer program to a linear program gives us the bound in the theorem, since for $\abs{V(J)} \geq \frac{2r - 1}{2}\nu(G_J)$, the minimum is obtained at \[ x_1 = 0, \quad x_2 = \frac{(6r - 5)\nu(G_J) - 3\abs{V(J)}}{6r - 7}, \quad x_3 = \frac{2\abs{V(J)} - (2r - 1)\nu(G_J)}{6r - 7}, \] with a value of \[ t_{\mathrm{min}} = \frac{(2r - 3)\nu(G_J) + \abs{V(J)}}{6r - 7}. 
\] This can be confirmed by considering the dual linear program, which is to maximize $\nu(G_J)y_1 + \abs{V(J)}y_2$ among positive real pairs $(y_1, y_2)$ subject to the constraints \begin{align*} y_1 + (3r - 2)y_2 \leq 1,\\ 2y_1 + (2r - 1)y_2 \leq 1,\\ 3y_1 + (6r - 5)y_2 \leq 2. \end{align*} It is enough to note that \[ y_1 = \frac{2r - 3}{6r - 7}, \quad y_2 = \frac{1}{6r - 7} \] is feasible for the dual program, and its value is $\nu(G_J)y_1 + \abs{V(J)}y_2 = t_{\mathrm{min}}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:explosiontypes}] Let $G$ be a bipartite multigraph with maximum degree $r \geq 2$, and let $J \subseteq L(G)$ be reduced and contain an edge. Suppose that there are no explodable pairs of any of the types (1), (2), and (3). We aim to show that $G$ contains an $r$-regular $C_4$ component. We follow along the lines of~\cite{HNS}, using many of the same ideas and techniques. Note that any explosion in $J$ destroys at most $3r - 2$ edges of $G$. Indeed, any pair of intersecting edges only have three vertices in which to meet other edges, and as $G$ has maximum degree $r$, there are only $3r - 2$ edges incident to those three vertices, because the two edges in question count towards the degree of two of these vertices each. Thus, every explosion that reduces the matching number by at most $1$ is automatically an explosion of type (1). \begin{lem} \label{lem:parallel} No two edges that are parallel are $J$-adjacent. \end{lem} \begin{proof} If $e$ and $f$ are parallel, then $\nu(G_{J \divideontimes ef}) \geq \nu(G_J) - 2$, and $\abs{V(J \divideontimes ef)} \geq \abs{V(J)} - (2r - 2)$, so this would be an explosion of type (2), which does not exist. Hence $e$ and $f$ cannot be $J$-adjacent, as $J$ is reduced. \end{proof} \begin{lem} \label{lem:mdegree} If $M \subseteq V(J)$ is a maximum matching of $G_J$, and $e \in V(J) \setminus M$ is $J$-adjacent to an edge of $M$, then $e$ is $J$-adjacent to two edges of $M$ (one at each endpoint of $e$). 
\end{lem} \begin{proof} Suppose $e$ is $J$-adjacent to $m \in M$ but to no other edge of $M$. Then exploding $me$ would destroy only one edge of $M$, which reduces the matching number by at most $1$, hence this would be an explosion of type (1), which we assume not to exist. Thus, $e$ must be $J$-adjacent to a second edge of $M$. \end{proof} We now make a few definitions, which will provide the setup for the two upcoming Lemmas~\ref{lem:ysaturated} and~\ref{lem:edgesoutsidex}. For a maximum matching $M \subseteq V(J)$ and two edges $m \in M$ and $e \in V(J) \setminus M$ with $me \in E(J)$, define $\mathcal{P}(M, m, e)$ to be the set of edges in $V(J)$ contained in some $M$-alternating path in $G_J$ starting with $m$, $e$. Let $A$ be the vertex class of $G$ containing the starting point of these paths, and let $B$ be the other. Let $Y \subseteq A$ be the set of vertices in edges of $\mathcal{P}(M, m, e)$ contained in $A$, but not including the vertices of $m$ and $e$. Let $X \subseteq B$ be the set of vertices in edges of $\mathcal{P}(M, m, e)$ contained in $B$, this time including the vertex in $m \cap e$. Let $m' \in M$ be the other edge of $M$ besides $m$ that is $J$-adjacent to $e$, which is guaranteed to exist by Lemma~\ref{lem:mdegree}. \begin{lem} \label{lem:ysaturated} All vertices of $Y$ are $M$-saturated. \end{lem} \begin{proof} Suppose $y \in Y$ is $M$-unsaturated. By the definition of $Y$, there is an $M$-alternating path in $G_J$ starting with $m$, $e$, and ending in the vertex $y$. Exploding $me$ destroys two edges $m$ and $m'$ of $M$, since it is not of type (1). However, for $M' = M \setminus \lset{m, m'}$, we have that the rest of the path ending in $y$ is an $M'$-augmenting path in $G_{J \divideontimes me}$, which means that in fact $\nu(G_{J \divideontimes me}) \geq \nu(G_J) - 1$, and therefore the explosion of $me$ is of type (1) after all. This is a contradiction; thus no $y \in Y$ can be $M$-unsaturated.
\end{proof} \begin{lem} \label{lem:edgesoutsidex} Every edge of $M$ with a vertex in $Y$ is $J$-adjacent in $Y$ to an edge whose other endpoint is not in $X$. \end{lem} \begin{proof} Consider what happens when we explode $me$. This destroys $m$ and $m'$. Let $d$ be the vertex of $G_J$ in $m' \cap X$. Let $J'$ be a reduction of $J \divideontimes me$, and let $M' = M \setminus \lset{m, m'}$. We will make use of the fact that $me$ is not an explosion of type (3). This means that $J'$ does not contain a pair of $J'$-adjacent edges whose explosion would reduce the matching number by at most $1$ and destroy at most $3r - 3$ edges. \begin{claim} No edge of $M'$ with a vertex in $Y$ is $J'$-adjacent to an edge preceding or succeeding it in an $M'$-alternating path in $G_{J'}$ starting at $d$. \end{claim} \begin{proof} Consider any $M'$-alternating path $P$ in $G_{J'}$ starting at $d$. Since these are all parts of the $M$-alternating paths in $G_J$ starting with $m$, $e$, we see that every edge of $M'$ incident to $X$ is in one of these paths. Note that $d$ has degree at most $r - 1$ in $G_{J'}$, since $m'$ was incident to it and was destroyed in the explosion of $me$. Denote the edges of the path $P$ by $e_1, m_1, e_2, m_2, \dots$, so that $m_i \in M'$ and $e_1$ is incident to $d$. We claim that none of the pairs in the path are $J'$-adjacent. Indeed, $e_1$ and $m_1$ are not, because if they were explodable, this would make $me$ an explosion of type (3). To see this, note that since we destroy only one edge of $M'$ in the second explosion, we reduce $\nu(G_{J'})$ by at most $1$, and since $d$ has degree at most $r - 1$, we destroy at most $3r - 3$ edges in the second explosion. This kind of explosion has been ruled out.
Neither are $m_1$ and $e_2$ $J'$-adjacent, since exploding this pair would not destroy $e_1$, which means we could add it to $M' \setminus \lset{m_1, m_2}$ to obtain a matching of size $\nu(G_{J'}) - 1$ after the second explosion, and again we destroy at most $3r - 3$ edges incident to $e_1 \cap m_1$, since we do not destroy $e_1$. This would again make $me$ an explosion of type (3), which contradicts our assumptions. Continuing in this fashion along the path, we see that $e_i$ and $m_i$ are not $J'$-adjacent, because exploding this pair would reduce the matching number by at most $1$, as $e_i$ is not $J'$-adjacent to $m_{i - 1}$, and for the same reason, we destroy at most $3r - 3$ edges in the second explosion, which would make $me$ an explosion of type (3). Next, we see that $m_i$ and $e_{i + 1}$ are not $J'$-adjacent, because exploding this pair would leave an $(M' \setminus \lset{m_i, m_{i + 1}})$-augmenting path $e_1, m_1, \dots, e_i$, so even though two edges of $M'$ are destroyed, the matching number decreases only by $1$, if at all, and again, we destroy at most $3r - 3$ edges in this second explosion because $e_i$ is not destroyed. This proves the claim. \end{proof} \begin{claim} No edge of $M'$ incident to $Y$ is $J'$-adjacent to an edge between $X$ and $Y$. \end{claim} \begin{proof} Consider any pair of intersecting edges $m'' \in M'$ and $e' \in V(J') \setminus M'$ that go between $X$ and $Y$. We claim that if these were explodable, then $me$ would be an explosion of type (3), and hence these are not $J'$-adjacent, as $J'$ is reduced. If $e'$ is incident to $b \in m \cap e$, then exploding $m''e'$ reduces $\nu(G_{J'})$ by only $1$ and destroys at most $3r - 4$ edges, since $m$ and $e$ are already gone. This would make $me$ an explosion of type (3). If $e'$ is incident to $d$, then it is the predecessor of $m''$ on some $M'$-alternating path, so they are not $J'$-adjacent by the previous claim.
Otherwise, $e'$ is incident to a vertex of $X \setminus \lset{b, d}$. If it is parallel to $m''$, then exploding the pair would destroy only one edge of $M'$ and at most $2r - 2$ edges in total, which would again make $me$ a type (3) explosion. The only remaining possibility is that $e'$ meets an edge $m''' \in M'$ in a vertex of $X$. If there is an $M'$-alternating path from $d$ to $m'''$ that does not use $m''$, appending $e'$ and $m''$ to this path shows by the previous claim that $e'$ and $m''$ are not $J'$-adjacent. If there is no such path, then $e'$ together with the part between $m''$ and $m'''$, inclusive, of an $M'$-alternating path from $d$ to $m'''$ forms an $M'$-alternating cycle. In this case, let $M''$ be obtained from $M'$ by switching on that $M'$-alternating cycle. Now exploding $m''e'$ destroys only one edge of $M''$, so the resulting graph has a matching of size at least $\nu(G_{J'}) - 1$. The explosion also does not destroy a predecessor of $m''$ on some $M'$-alternating path from $d$, so we lose at most $3r - 3$ edges in the second explosion, which makes $me$ an explosion of type (3). \end{proof} Thus no edge of $M'$ incident to $Y$ is $J'$-adjacent to any edge between $X$ and $Y$. However, none of these edges are isolated in $J'$, since we have $\eta(J') \leq \eta(J) - 1 < \infty$. This means that each of them must be $J'$-adjacent to some edge that is not between $X$ and $Y$. If this edge is incident to $X$, we would have an $M'$-augmenting path by going from $d$ to the matching edge and then to this edge, so the edge is not incident to $X$, which proves Lemma~\ref{lem:edgesoutsidex}, since $J'$-adjacent implies $J$-adjacent. \end{proof} We now complete the proof of Lemma~\ref{lem:explosiontypes}. Choose the triple $(M, m, e)$ consisting of a maximum matching $M$ of $G_J$ and a pair of $J$-adjacent edges $m \in M$ and $e \in V(J) \setminus M$ so that $\abs{\mathcal{P}(M, m, e)}$ is maximized among all such triples.
We claim that $m$ and $e$ are in fact part of an $r$-regular $C_4$ component of $G_J$. Let $m'$ be the other edge of $M$ that is $J$-adjacent to $e$, which exists by Lemma~\ref{lem:mdegree}, and let the vertices of $m$, $e$, and $m'$ be $a$, $b$, $c$, and $d$, with $m = ab$, $e = bc$, and $m' = cd$. First, we show that there are no edges $J$-adjacent to $m$ at $a$ that do not go to $d$. Suppose that $e'$ were such an edge. By Lemma~\ref{lem:mdegree}, it is $J$-adjacent to another edge $\hat{m} \in M$. If $e' \cap \hat{m} \not\subseteq X$, then we have a contradiction, as any edge in $\mathcal{P}(M, m, e)$ can be reached by an $M$-alternating path starting with $\hat{m}$, $e'$, then continuing with $m$, $e$, and the rest of the path that shows it is in $\mathcal{P}(M, m, e)$. But $\hat{m} \notin \mathcal{P}(M, m, e)$, since it is not incident to $X$, which runs contrary to the assumption that $\abs{\mathcal{P}(M, m, e)}$ is maximum. Therefore, $\hat{m}$ must be incident to $X$. If $\hat{m} \neq m'$, then $\hat{m}$ is also incident to $Y$, and so by Lemma~\ref{lem:edgesoutsidex}, it has an edge $e''$ $J$-adjacent to it in $Y$, which is not incident to $X$, and by Lemma~\ref{lem:mdegree}, $e''$ is $J$-adjacent to another edge $\hat{m}' \in M$. But then $\mathcal{P}(M, \hat{m}', e'')$ would strictly contain $\mathcal{P}(M, m, e)$. This is because for any edge in $\mathcal{P}(M, m, e)$, if the path from $m$, $e$ containing it passes through $\hat{m}$, we can start with $\hat{m}'$, $e''$, $\hat{m}$ and continue along the path to reach it from $\hat{m}'$, $e''$. If on the other hand the path from $m$, $e$ does not include $\hat{m}$, we can reach it by starting with $\hat{m}'$, $e''$, $\hat{m}$, $e'$, $m$, $e$, and continuing along the path. This also contradicts our choice of $(M, m, e)$. This means the only option is $\hat{m} = m'$. Next, we establish that there is an edge $f = ad$, which is $J$-adjacent to $m$. 
If there were no such edge, then exploding $me$ would destroy only edges incident to $b$ and $c$, of which there are at most $2r - 1$, since $bc$ is an edge. Since also $\nu(G_J)$ would be reduced by at most $2$, this would be an explosion of type (2), which we assume not to exist. Thus there must be an edge incident to $a$ that is $J$-adjacent to $m$, and by the argument in the previous paragraph, we have seen that such an edge must be incident to $d$. Now consider the matching $M^\times = M \cup \lset{e, f} \setminus \lset{m, m'}$, obtained by switching $M$ along the $C_4$ on $abcd$. Note that $\mathcal{P}(M^\times, e, m) = \mathcal{P}(M, m, e) \cup \lset{f} \setminus \lset{m'}$, since any $M$-alternating path starting with $m$, $e$, $m'$ can be converted to an $M^\times$-alternating path by starting with $e$, $m$, $f$, and continuing the same way. Therefore this triple is also maximizing, so the same argument as above applies to show that the only edges $J$-adjacent to $e$ at $c$ are parallel to $m'$. We now show that $e$ and $f$ have no $J$-neighbours at $c$ or $a$, respectively, except those parallel to $m'$ and $m$, respectively. If there were an edge $g$ contradicting this statement, then by switching to $M^\times$ and applying Lemma~\ref{lem:mdegree}, we would find that $g$ is $J$-adjacent to some other edge $h$ of $M^\times$ not among $\lset{e, f}$. But $h$ is also an edge of $M$, hence by Lemma~\ref{lem:mdegree}, $g$ would need to be $J$-adjacent to a second edge of $M$, which by virtue of being incident to $a$ or $c$ would have to be $m$ or $m'$. But as seen above, no such edge is $J$-adjacent to $m$ or $m'$, thus we have a contradiction. This shows that none of $m$, $m'$, $e$, and $f$ have any $J$-neighbours incident to $\lset{a, c}$ that leave the $C_4$ on $abcd$. Now suppose that there is an edge incident to $d$ that is not incident to $a$ or $c$. Such an edge is disjoint from $m$ and $e$, so it survives the explosion of $me$.
By what we have proven above, the explosion of $me$ destroys only edges incident to $b$ and $d$, of which there are at most $2r$. But since at least one edge incident to $d$ survives, the explosion would destroy at most $2r - 1$ edges, and it clearly destroys only $2$ edges of $M$, hence this would be an explosion of type (2). Therefore, there are no edges incident to $d$, except those that go to $a$ or $c$. A similar argument, by threatening to explode $m'f$, shows that there are no edges incident to $b$, except those that go to $a$ or $c$. If either $b$ or $d$ does not have degree $r$, then $me$ would again be an explosion of type (2), so they are both maximum-degree vertices. This forces all edges incident to $a$ and $c$ to be those from $b$ and $d$ by a simple counting argument. Therefore, $abcd$ form the vertices of an $r$-regular $C_4$ component of $G_J$. This proves the lemma by contraposition. \end{proof} \begin{cor} \label{cor:connbound} Let $G$ be a bipartite multigraph with maximum degree $r \geq 2$ that contains at most $k$ components that are $r$-regular $C_4$'s. Then \[ \eta(L(G)) \geq \frac{(2r - 3)\nu(G) + \abs{E(G)} - k}{6r - 7}. \] \end{cor} \begin{proof} Assume, without loss of generality, that $G$ has exactly $k$ components that are $r$-regular $C_4$'s. Let $G'$ be equal to $G$ with all its $r$-regular $C_4$ components removed. We have $\abs{E(G')} = \abs{E(G)} - 2rk$ and $\nu(G') = \nu(G) - 2k$. Applying Theorem~\ref{thm:c4freeconnbound} to $G'$, we have \[ \eta(L(G')) \geq \frac{(2r - 3)\nu(G') + \abs{E(G')}}{6r - 7}. \] Adding $k$ non-empty components to $L(G')$ will increase its connectedness by at least $k$ by Lemma~\ref{lem:connadditivity}, so $\eta(L(G)) \geq \eta(L(G')) + k$, and this gives the desired bound via a straightforward calculation.
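Explicitly, substituting $\nu(G') = \nu(G) - 2k$ and $\abs{E(G')} = \abs{E(G)} - 2rk$ gives
\begin{align*}
\eta(L(G)) &\geq \eta(L(G')) + k \geq \frac{(2r - 3)(\nu(G) - 2k) + \abs{E(G)} - 2rk}{6r - 7} + k\\
&= \frac{(2r - 3)\nu(G) + \abs{E(G)} - (4r - 6)k - 2rk + (6r - 7)k}{6r - 7} = \frac{(2r - 3)\nu(G) + \abs{E(G)} - k}{6r - 7}.
\end{align*}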
\end{proof} We remark that Theorem~\ref{thm:c4freeconnbound} is tight when $r = 2$, as can be seen by taking $G$ to be the disjoint union of any number of paths $P_4$ of length $3$ and cycles of length $10$ (since $\eta(P_4) = 1$, and $\eta(C_{10}) = 3$). \section{Stability}\label{stab} We have two versions of our stability theorem. One is for $r$-regular $3$-partite $3$-graphs, and the other has slightly less stringent degree conditions, which of course results in a weaker bound. \begin{thm} \label{thm:regularfstability} Let $r \geq 2$. Let $\mathcal{H}$ be an $r$-regular $3$-partite $3$-graph with $n$ vertices in each class, and let $\varepsilon \geq 0$. If $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, then $\mathcal{H}$ has at least $(1 - \left(22r - \frac{77}{3}\right)\varepsilon)\frac{n}{2}$ components that are $\frac{r}{2}\cdot\mathcal{F}$'s. \end{thm} \begin{thm} \label{thm:abcfstability} Let $r \geq 2$. Let $\mathcal{H}$ be a $3$-partite $3$-graph with vertex classes $A$, $B$, and $C$, such that $\abs{A} = n$, and let $\varepsilon \geq 0$. Suppose that every vertex of $A$ has degree at least $r$, and that every vertex in $B \cup C$ has degree at most $r$. If $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, then $\mathcal{H}$ contains at least $(1 - (72r^2 - 150r + 77)\varepsilon)\frac{n}{2}$ disjoint copies of $\frac{r}{2}\cdot\mathcal{F}$. \end{thm} Our strategy is to use the low matching number to find a subset of each vertex class whose links have low connectedness. From this, we deduce that each link must have many $r$-regular $C_4$ components. We analyze how these can interact and deduce that a number of them must extend to $\frac{r}{2}\cdot\mathcal{F}$'s. We break the proofs down into several lemmas that apply in both situations. \begin{lem} \label{lem:c4links} Let $\mathcal{H}$ be a $3$-partite $3$-graph with vertex classes $A$, $B$, and $C$, such that $\abs{A} = n$, and let $\varepsilon \geq 0$. 
Suppose that every vertex of $A$ has degree at least $r$, and that every vertex in $B \cup C$ has degree at most $r$. If $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, then $\lk A$ contains at least $(1 - (6r - 7)\varepsilon)\frac{n}{2}$ components that are $r$-regular $C_4$'s. \end{lem} \begin{proof} We know that there must be some $S \subseteq A$ such that $\eta(L(\lk S)) \leq \abs{S} - (n - \nu(\mathcal{H}))$, otherwise $\mathcal{H}$ would have a matching larger than $\nu(\mathcal{H})$ by Theorem~\ref{thm:hallsforhypergraphs}. Now $\lk S$ has at least $r\abs{S}$ edges and maximum degree at most $r$, so $\tau(\lk S) \geq \abs{S}$, and so by K\"onig's Theorem, $\nu(\lk S) \geq \abs{S}$. Let $k$ be the number of $r$-regular $C_4$ components of $\lk S$. By Corollary~\ref{cor:connbound}, we have \begin{align*} \eta(L(\lk S)) &\geq \frac{(2r - 3)\nu(\lk S) + \abs{E(\lk S)} - k}{6r - 7}\\ &\geq \frac{(2r - 3)\abs{S} + r\abs{S} - k}{6r - 7}\\ &= \frac{(3r - 3)\abs{S} - k}{6r - 7}. \end{align*} Combining this with our upper bound on $\eta(L(\lk S))$, and using $\abs{S} \leq n$ together with $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$, we find \begin{align*} k &\geq (6r - 7)(n - \nu(\mathcal{H})) - (3r - 4)\abs{S}\\ &\geq (6r - 7)\left(n - (1 + \varepsilon)\frac{n}{2}\right) - (3r - 4)n\\ &= (1 - (6r - 7)\varepsilon)\frac{n}{2}. \end{align*} Since the vertices of an $r$-regular $C_4$ have degree $r$, which is the maximum degree of any vertex in $B \cup C$, no additional edges of $\lk A$ intersect any of these components of $\lk S$, hence these are indeed components of $\lk A$, which proves our lemma. \end{proof} We say a subgraph of a link of $\mathcal{H}$ \emph{hosts} an edge $e$ of $\mathcal{H}$ if the edge of the link corresponding to $e$ is present in the subgraph. \begin{lem} \label{lem:extendtof} Let $\mathcal{H}$ be a $3$-partite $3$-graph, let $A$ be one of its vertex classes, and suppose that every vertex in $A$ has degree at most $r$.
If an $r$-regular $C_4$ in $\lk A$ does not host two disjoint edges of $\mathcal{H}$, then the edges it hosts form a copy of $\frac{r}{2}\cdot\mathcal{F}$. \end{lem} \begin{proof} Let $e$, $f$, $g$, and $h$ be pairwise nonparallel edges of the $r$-regular $C_4$ in $\lk A$, so that $e, f$ and $g, h$ form matchings. Since no pair of edges extends to disjoint edges of $\mathcal{H}$, all $e$-parallel and $f$-parallel edges must meet in the same vertex, and similarly, all $g$-parallel and $h$-parallel edges meet in the same vertex. These, however, must be two different vertices, since they are incident to $2r$ edges altogether, while every vertex of $A$ has degree at most $r$. Thus, each of these vertices is incident to $r$ edges, and so there are $r$ total $e$-parallel and $f$-parallel edges, and $r$ total $g$-parallel and $h$-parallel edges. To form an $r$-regular $C_4$, there must be the same number of $e$-parallel edges as $f$-parallel ones, and similarly the same number of $g$-parallel and $h$-parallel edges. Thus there must be $\frac{r}{2}$ of each, and this forms an $\frac{r}{2}\cdot\mathcal{F}$, as desired. \end{proof} \begin{lem} \label{lem:perfectmatchingcomp} Let $\mathcal{H}$ be a $3$-partite $3$-graph. If an $r$-regular $C_4$ component $K$ of a link of a vertex class of $\mathcal{H}$ is host to two disjoint edges of $\mathcal{H}$, and all of the vertices of $K$ are part of $r$-regular $C_4$ components of the links of the other vertex classes, then $K$ belongs to a component of $\mathcal{H}$ that either \begin{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \renewcommand{\theenumi}{(\arabic{enumi})} \item has $2$ vertices in each class and a matching of size $2$, or \item has $4$ vertices in each class and a matching of size $4$. \end{enumerate} In particular, $K$ belongs to a component of $\mathcal{H}$ with a perfect matching.
\end{lem} \begin{proof} Let $V_1$, $V_2$, and $V_3$ be the vertex classes of $\mathcal{H}$, and suppose that the $r$-regular $C_4$ component $K$ in question is a component of $\lk V_1$. Let $a_1 a_2 a_3$ and $b_1 b_2 b_3$ be two disjoint edges of $\mathcal{H}$ with $a_i, b_i \in V_i$ and $a_2$, $b_2$, $a_3$, and $b_3$ being the vertices of an $r$-regular $C_4$ component of $\lk V_1$, all of whose vertices are part of $r$-regular $C_4$ components in the other links. We consider two cases: \noindent \emph{Case 1}. $a_1 a_2$ and $b_1 b_2$ belong to the same $r$-regular $C_4$ component of $\lk V_3$. In this case, all edges incident to $a_1$ or $b_1$ are incident to $a_2$ or $b_2$, hence incident to $a_3$ or $b_3$, and vice versa. Thus the $a_i$ and $b_i$ are the vertices of a component of type (1). \noindent \emph{Case 2}. $a_1 a_2$ and $b_1 b_2$ belong to two different $r$-regular $C_4$ components of $\lk V_3$. In this case, let the vertices of the components be $a_1$, $c_1$, $a_2$, $c_2$, and $b_1$, $d_1$, $b_2$, $d_2$, respectively. Now consider $\lk V_2$. It has edges $a_1 a_3$ and $b_1 b_3$. If $a_1 b_3$ were an edge of $\lk V_2$, then $a_1$, $b_1$, $a_3$, and $b_3$ would be the vertices of an $r$-regular $C_4$ component in $\lk V_2$, which would preclude the existence of any edge between $a_3$ or $b_3$ and $c_1$. But any edge of $\mathcal{H}$ corresponding to $c_1 a_2$ in $\lk V_3$ must be incident to $a_3$ or $b_3$ as seen by looking at $\lk V_1$. This contradiction implies that $a_1 a_3$ and $b_1 b_3$ are in separate components of $\lk V_2$, and thus the edges of $\mathcal{H}$ corresponding to $a_2 b_3$ in $\lk V_1$ must extend to $c_1$, rather than $a_1$ (these being the only two options given by $\lk V_3$). A similar argument shows that edges corresponding to $b_2 a_3$ extend to $d_1$. 
Now by assumption, $a_3$ and $b_3$ are each part of an $r$-regular $C_4$ component of $\lk V_2$, and given the edges we already have shown to exist, we know that these are two distinct components, and we know three vertices of each. Denote the remaining vertices by $d_3$ and $c_3$, respectively, so that $a_1$, $d_1$, $a_3$, $d_3$ are the vertices of one component, and $b_1$, $c_1$, $b_3$, $c_3$ the vertices of the other component. Since $a_3$ and $c_1$ are in distinct components of $\lk V_2$, we see that all edges of $\mathcal{H}$ corresponding to $a_2 a_3$ extend to $a_1$. Similarly, all edges corresponding to $b_2 b_3$ extend to $b_1$, all the ones corresponding to $a_2 b_3$ extend to $c_1$, and $b_2 a_3$ to $d_1$. Now in $\lk V_2$ there are the edges $a_1 d_3$ and $b_1 c_3$. These do not extend to $a_2$ or $b_2$ as seen in $\lk V_1$, and hence must extend to $c_2$ and $d_2$, respectively, by considering $\lk V_3$. Similarly, the edges $c_1 c_3$ and $d_1 d_3$ in $\lk V_2$ must extend to $c_2$ and $d_2$, respectively. Thus, we have deduced the structure of the subgraph $\mathcal{G}$ of $\mathcal{H}$ induced by these twelve vertices. It has $4$ vertices in each class and a matching $a_1 a_2 a_3$, $b_1 b_2 b_3$, $c_1 c_2 c_3$, $d_1 d_2 d_3$ of size $4$. All that remains to complete the proof is to show that this is a component of $\mathcal{H}$, which would make it a component of type (2). Suppose there were an edge $e$ of $\mathcal{H}$ containing a vertex $u$ of $\mathcal{G}$ and a vertex $v$ not in $\mathcal{G}$. Let $V_i$ be the vertex class of $u$, let $V_j$ be the vertex class of $v$, and let $V_k$ be the third vertex class of $\mathcal{H}$. The presence of $e$ would mean that there is an edge $uv$ in $\lk V_k$. But since the parts of $\mathcal{G}$ present in the links $\lk V_2$ and $\lk V_3$ are components of those links, $uv$ cannot be part of these links, and hence $k = 1$. Now consider the third vertex $w$ of $e$, which is in $V_1$. 
If $w$ is a vertex of $\mathcal{G}$, then $vw$ is an edge of $\lk V_i$ of the type we just excluded, and if $w$ is a vertex not in $\mathcal{G}$, then $uw$ is an edge of $\lk V_j$ giving us a similar contradiction. Thus no such edge $e$ can exist, and $\mathcal{G}$ is indeed a component of $\mathcal{H}$. As these cases were exhaustive, the claim follows. \end{proof} We remark that with the previous three lemmas in hand, it would be a short step to conclude that any $3$-partite $3$-graph satisfying the conditions of Theorem~\ref{thm:regularfstability} contains at least $(1 - (30r - 35)\varepsilon)\frac{n}{2}$ components that are $\frac{r}{2}\cdot\mathcal{F}$'s (see the proof of Theorem~\ref{thm:regularfstability}). In order to get the improved bound stated in the theorem, we will establish one more technical lemma. Call a vertex \emph{$V_i$-bad} if it is part of a component of $\lk V_i$ that is not an $r$-regular $C_4$. Call a vertex \emph{bad} if it is $V_i$-bad for some $i$, and call a vertex \emph{good} otherwise. \begin{lem} \label{lem:onebadvertex} Let $\mathcal{H}$ be a $3$-partite $3$-graph of maximum degree $r$ with vertex classes $V_1$, $V_2$, and $V_3$. Let $\lset{i, j, k} = \lset{1, 2, 3}$. If an $r$-regular $C_4$ component of $\lk V_i$ is such that all of its vertices are good except one $V_k$-bad vertex in $V_j$, then it shares vertices of $V_k$ with two $r$-regular $C_4$ components of $\lk V_j$ that each have two bad vertices (one $V_i$-bad, and one $V_k$-bad), and shares one vertex of $V_j$ with an $r$-regular $C_4$ component of $\lk V_k$ that has exactly one $V_i$-bad vertex in $V_j$. Furthermore, these four $r$-regular $C_4$ components do not share vertices with any $r$-regular $C_4$ component outside of these four. 
\end{lem} \begin{proof} We know by Lemma~\ref{lem:extendtof} that such a $C_4$ component must be host to two disjoint edges of $\mathcal{H}$, otherwise it would extend to an $\frac{r}{2}\cdot\mathcal{F}$ and all of its links would be $r$-regular $C_4$'s. Thus, let $a_1 a_2 a_3$ and $b_1 b_2 b_3$ be two disjoint edges of $\mathcal{H}$ with $a_i, b_i \in V_i$ and $a_2$, $b_2$, $a_3$, and $b_3$ being the vertices of an $r$-regular $C_4$ component of $\lk V_1$, all of whose vertices are part of $r$-regular $C_4$ components in the other links except for $b_3$. We consider two cases: \noindent \emph{Case 1}. $a_1 a_2$ and $b_1 b_2$ belong to the same $r$-regular $C_4$ component of $\lk V_3$. In this case, all edges incident to $a_1$ or $b_1$ are incident to $a_2$ or $b_2$, hence incident to $a_3$ or $b_3$, and vice versa. But this means that the $r$-regular $C_4$ component of $\lk V_2$ that $a_3$ participates in must have $\lset{a_1, b_1, a_3, b_3}$ as its vertex set, which contradicts the fact that $b_3$ is not in an $r$-regular $C_4$ component of $\lk V_2$. Therefore, this case is impossible. \noindent \emph{Case 2}. $a_1 a_2$ and $b_1 b_2$ belong to two different $r$-regular $C_4$ components of $\lk V_3$. In this case, let the vertices of the components be $a_1$, $c_1$, $a_2$, $c_2$, and $b_1$, $d_1$, $b_2$, $d_2$, respectively. Now consider $\lk V_2$. It has edges $a_1 a_3$ and $b_1 b_3$. Note that these edges are in separate components of $\lk V_2$, since $a_3$ participates in an $r$-regular $C_4$, while $b_3$ doesn't. Therefore, there are no edges $a_1 b_3$ or $b_1 a_3$ in $\lk V_2$, which implies that all edges parallel to $a_2 b_3$ in $\lk V_1$ extend to $c_1$, rather than $a_1$ (these being the only two options given by $\lk V_3$), and similarly all edges parallel to $b_2 a_3$ in $\lk V_1$ extend to $d_1$ (not $b_1$). These edges of $\mathcal{H}$ correspond to edges $c_1 b_3$ and $d_1 a_3$, respectively, in $\lk V_2$. 
Now by assumption, $a_3$ is part of an $r$-regular $C_4$ component of $\lk V_2$, and given the edges we already have shown to exist, we know three of its vertices. Denote the remaining vertex by $d_3$ so that $\lset{a_1, d_1, a_3, d_3}$ is the vertex set of that component. Since $a_3$ and $c_1$ are in distinct components of $\lk V_2$, we see that all edges of $\mathcal{H}$ corresponding to $a_2 a_3$ extend to $a_1$. Similarly, all edges corresponding to $b_2 b_3$ extend to $b_1$, all the ones corresponding to $a_2 b_3$ extend to $c_1$, and $b_2 a_3$ to $d_1$. Now in $\lk V_2$ there is at least one edge $a_1 d_3$. Any such edge does not extend to $a_2$ as seen in $\lk V_1$, and hence must extend to $c_2$ by considering $\lk V_3$. Similarly, the edges parallel to $d_1 d_3$ in $\lk V_2$ must extend to $d_2$. Since $b_1 b_3$ and $c_1 b_3$ are edges of $\lk V_2$ in the component of $b_3$, which is not an $r$-regular $C_4$, we have that $b_1$ and $c_1$ are both $V_2$-bad vertices. We claim that $c_2$ and $d_2$ are $V_1$-bad vertices. Suppose to the contrary that they were good. Then by the existence of edges $c_2 d_3$ and $d_2 d_3$ in $\lk V_1$, these are part of the same $r$-regular $C_4$ component of $\lk V_1$. Call its fourth vertex $c_3$. Now any edge parallel to $c_2 c_3$ in $\lk V_1$ extends to $c_1$, since it may only extend to $c_1$ or $a_1$ by $\lk V_3$, and can't extend to $a_1$ by $\lk V_2$. Similarly, any edge parallel to $d_2 c_3$ in $\lk V_1$ extends to $b_1$. We just showed that all edges on $c_3$ go to $c_1$ or $b_1$ in $\lk V_2$. What we showed earlier is that all edges on $b_3$ go to $c_1$ or $b_1$ in $\lk V_2$. These account for all edges on $c_3$ and $b_3$, putting $b_3$ in an $r$-regular $C_4$ component, which is a contradiction, because $b_3$ was assumed not to participate in one of those in $\lk V_2$. Therefore, the component of $\lk V_1$ including $c_2$ and $d_2$ is not an $r$-regular $C_4$, hence these are $V_1$-bad vertices. 
Thus, we have found two $r$-regular $C_4$ components of $\lk V_3$ with two bad vertices each: $\lset{a_1, c_1, a_2, c_2}$ harbours an $r$-regular $C_4$ with bad vertices $c_1$ and $c_2$, while $\lset{b_1, d_1, b_2, d_2}$ harbours an $r$-regular $C_4$ with bad vertices $b_1$ and $d_2$. We also have an $r$-regular $C_4$ in $\lk V_2$ on $\lset{a_1, d_1, a_3, d_3}$ with a single $V_1$-bad vertex $d_3$. Since all of the good vertices of these four $r$-regular $C_4$ components are shared among themselves, this proves the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:regularfstability}] Let $\mathcal{H}$ be an $r$-regular $3$-partite $3$-graph with $n$ vertices in each class, and assume $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$. Let $V_1$, $V_2$, and $V_3$ be the vertex classes of $\mathcal{H}$. First, we modify $\mathcal{H}$ by replacing each component of $\mathcal{H}$ that has a perfect matching with $r$ parallel copies of the perfect matching. Note that this does not change $\nu(\mathcal{H})$ nor the number of vertices in each class, and keeps $\mathcal{H}$ $r$-regular. This change also clearly does not create any new copies of $\frac{r}{2}\cdot\mathcal{F}$, so if we prove that the modified hypergraph has some number of $\frac{r}{2}\cdot\mathcal{F}$ components, these must have been present in $\mathcal{H}$ to begin with. Thus, we may assume that every perfect matching component of $\mathcal{H}$ is just $r$ parallel copies of an edge. For each $i$, by applying Lemma~\ref{lem:c4links} with $A = V_i$, we have that $\lk V_i$ contains at least $(1 - (6r - 7)\varepsilon)\frac{n}{2}$ components that are $r$-regular $C_4$'s. Call an $r$-regular $C_4$ component of a link \emph{good} if it contains no bad vertices, and \emph{ruined} otherwise. We claim that at least one of the links has at least $(1 - \left(22r - \frac{77}{3}\right)\varepsilon)\frac{n}{2}$ good $r$-regular $C_4$ components. 
Since each link has in each vertex class at least $(1 - (6r - 7)\varepsilon)n$ vertices belonging to $r$-regular $C_4$ components, each link contributes at most $(6r - 7)\varepsilon n$ bad vertices to any vertex class. If the bad vertices in each vertex class each ruin a different $r$-regular $C_4$ component of one link, then we may have as many as $(12r - 14)\varepsilon n$ ruined $r$-regular $C_4$ components in that link, leaving us with only $(1 - (30r - 35)\varepsilon)\frac{n}{2}$ good components. But then that link has many $r$-regular $C_4$ components with only one bad vertex, so by Lemma~\ref{lem:onebadvertex}, the other links must have many such components with at least two bad vertices, and so these links will have more good components. To make this precise, we count the total number of bad vertices in all three links. As we have seen, each link contributes at most $(6r - 7)\varepsilon n$ bad vertices to each vertex class. Since there are two vertex classes per link and three links total, we have at most $6(6r - 7)\varepsilon n$ bad vertices in all. Now let $x_i$ count the number of $r$-regular $C_4$ components of $\lk V_i$ with exactly one bad vertex, and let $y_i$ count the number of $r$-regular $C_4$ components of $\lk V_i$ with at least two bad vertices. Let $x = x_1 + x_2 + x_3$ and let $y = y_1 + y_2 + y_3$. Note that any bad vertex contributes to at most one of $x_1$, $x_2$, $x_3$, $y_1$, $y_2$, and $y_3$, since in one of the two links containing that vertex, it is in an $r$-regular $C_4$ component. Therefore, we find that $x + 2y \leq 6(6r - 7)\varepsilon n$, as there must be at least $x + 2y$ bad vertices. 
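For comparison, in the pessimistic scenario described above we would be left with only
\[
\left(1 - (6r - 7)\varepsilon\right)\frac{n}{2} - (12r - 14)\varepsilon n = \left(1 - (30r - 35)\varepsilon\right)\frac{n}{2}
\]
good components, which is the weaker bound mentioned in the remark before Lemma~\ref{lem:onebadvertex}.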
Now by Lemma~\ref{lem:onebadvertex}, every $r$-regular $C_4$ component with only one bad vertex appears together with another $r$-regular $C_4$ component with only one bad vertex and two $r$-regular $C_4$ components with two bad vertices each, and these four form a unit that does not touch any other such unit (hence there is no overlap in our counting). This implies that there must be at least as many $r$-regular $C_4$ components with two bad vertices as there are ones with only one bad vertex, hence $y \geq x$. Now let $V_i$ be the vertex class such that $x_i$ is the least among $x_1$, $x_2$, and $x_3$. We thus have $x_i \leq \frac{x}{3}$. And since $3x \leq x + 2y \leq 6(6r - 7)\varepsilon n$, we have $x_i \leq \frac{2}{3}(6r - 7)\varepsilon n$. Now $\lk V_i$ has at most $2(6r - 7)\varepsilon n$ bad vertices that were contributed from the other two links, which leaves at most $2(6r - 7)\varepsilon n - x_i$ bad vertices to ruin the $r$-regular $C_4$ components counted by $y_i$. Since these each use at least two of these vertices, we have $y_i \leq \frac{1}{2}(2(6r - 7)\varepsilon n - x_i)$. Combining our inequalities we find that $\lk V_i$ therefore has $x_i + y_i \leq (6r - 7)\varepsilon n + \frac{1}{2}x_i \leq \frac{4}{3}(6r - 7)\varepsilon n$ ruined $r$-regular $C_4$ components. The rest must be good, so we have at least $(1 - (6r - 7)\varepsilon)\frac{n}{2} - \frac{4}{3}(6r - 7)\varepsilon n = (1 - \left(22r - \frac{77}{3}\right)\varepsilon)\frac{n}{2}$ good $r$-regular $C_4$ components in $\lk V_i$. If any good $r$-regular $C_4$ component hosts two disjoint edges of $\mathcal{H}$, then by Lemma~\ref{lem:perfectmatchingcomp} it is part of a perfect matching component of $\mathcal{H}$, which is a contradiction, since we replaced these by parallel copies of a matching (so their links do not contain any $r$-regular $C_4$ components). 
Therefore, all good $r$-regular $C_4$ components extend to copies of $\frac{r}{2}\cdot\mathcal{F}$ by Lemma~\ref{lem:extendtof}, so we have found the desired number of those in $\mathcal{H}$, completing the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:abcfstability}] This follows along very similar lines as the proof of Theorem~\ref{thm:regularfstability}. Let $\mathcal{H}$ be a $3$-partite $3$-graph with vertex classes $A$, $B$, and $C$, such that $\abs{A} = n$, and suppose that every vertex of $A$ has degree at least $r$, and that every vertex in $B \cup C$ has degree at most $r$. Assume that $\nu(\mathcal{H}) \leq (1 + \varepsilon)\frac{n}{2}$. First, we modify $\mathcal{H}$ by removing edges from vertices of $A$ that have degree strictly larger than $r$ until every vertex of $A$ has degree exactly $r$. Note that this does not hurt any of our assumptions and cannot create copies of $\frac{r}{2}\cdot\mathcal{F}$. After this modification, $\mathcal{H}$ has maximum degree $r$. Next, we again modify $\mathcal{H}$ (as in the proof of Theorem~\ref{thm:regularfstability}) by replacing each component of $\mathcal{H}$ that has a perfect matching with $r$ parallel copies of the perfect matching. Note that again, this change does not affect our assumptions, and also clearly does not create any new copies of $\frac{r}{2}\cdot\mathcal{F}$. Thus, we may assume that every perfect matching component of $\mathcal{H}$ is just $r$ parallel copies of an edge. Now apply Lemma~\ref{lem:c4links} to $\mathcal{H}$ to find that $\lk A$ contains at least $(1 - (6r - 7)\varepsilon)\frac{n}{2}$-many $r$-regular $C_4$ components. Now delete from $\mathcal{H}$ all vertices of $B$ and $C$ that are not in one of the $r$-regular $C_4$ components. This leaves at least $n' = (1 - (6r - 7)\varepsilon)n$ vertices in each of these classes. Note that all vertices of $B$ and $C$ now have degree $r$. 
Next, we follow along the lines of the proof of Lemma~\ref{lem:c4links} to find out about $r$-regular $C_4$ components of $\lk B$ and $\lk C$. There must be some $S \subseteq B$ such that $\eta(L(\lk S)) \leq \abs{S} - (\abs{B} - \nu(\mathcal{H}))$, otherwise $\mathcal{H}$ would have a matching larger than $\nu(\mathcal{H})$ by Theorem~\ref{thm:hallsforhypergraphs}. We have $\nu(\lk S) \geq \abs{S}$, so by Corollary~\ref{cor:connbound}, if $\lk S$ has $k$-many $r$-regular $C_4$ components, then \[ \eta(L(\lk S)) \geq \frac{(2r - 3)\abs{S} + r\abs{S} - k}{6r - 7}. \] Combining this with our upper bound, we find \begin{align*} k &\geq (6r - 7)(\abs{B} - \nu(\mathcal{H})) - (3r - 4)\abs{S}\\ &\geq (6r - 7)\left(n' - (1 + \varepsilon)\frac{n}{2}\right) - (3r - 4)n'\\ &= (1 - (36r^2 - 72r + 35)\varepsilon)\frac{n}{2}. \end{align*} Since $\lk B$ has maximum degree $r$, these components of $\lk S$ are all components of $\lk B$, hence we have found at least $(1 - (36r^2 - 72r + 35)\varepsilon)\frac{n}{2}$-many $r$-regular $C_4$ components in $\lk B$. The same holds for $\lk C$. Call an $r$-regular $C_4$ component of a link \emph{good} if it contains no bad vertices, and \emph{ruined} otherwise. We claim that $\lk A$ has at least $(1 - (72r^2 - 150r + 77)\varepsilon)\frac{n}{2}$ good $r$-regular $C_4$ components. Note that there are no $A$-bad vertices, since we deleted them all before considering $\lk B$ and $\lk C$. This means that all ruined $r$-regular $C_4$ components of $\lk A$ have at least two bad vertices, since if they only had one, Lemma~\ref{lem:onebadvertex} would imply the existence of an $A$-bad vertex (in fact, three of them). There are at most $n' - (1 - (36r^2 - 72r + 35)\varepsilon)n = (36r^2 - 78r + 42)\varepsilon n$-many $B$-bad vertices in $C$, and also no more than that many $C$-bad vertices in $B$. 
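For completeness, the final equality in the bound on $k$ above follows, writing $n' = (1 - (6r - 7)\varepsilon)n$, from
\begin{align*}
(6r - 7)\left(n' - (1 + \varepsilon)\frac{n}{2}\right) - (3r - 4)n' &= (3r - 3)n' - (6r - 7)(1 + \varepsilon)\frac{n}{2}\\
&= \left(1 - (6r - 7)(6r - 5)\varepsilon\right)\frac{n}{2},
\end{align*}
together with $(6r - 7)(6r - 5) = 36r^2 - 72r + 35$.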
Since each ruined $r$-regular $C_4$ component of $\lk A$ contains at least two bad vertices, and there are at most $2(36r^2 - 78r + 42)\varepsilon n$ bad vertices in total, there are in fact at most $(36r^2 - 78r + 42)\varepsilon n$ ruined $r$-regular $C_4$ components in $\lk A$. Therefore, since the rest are good, there are indeed at least $(1 - (72r^2 - 150r + 77)\varepsilon)\frac{n}{2}$ good $r$-regular $C_4$ components in $\lk A$. If any good $r$-regular $C_4$ component hosts two disjoint edges of $\mathcal{H}$, then by Lemma~\ref{lem:perfectmatchingcomp} it is part of a perfect matching component of $\mathcal{H}$, which is a contradiction, since we replaced these by parallel copies of a matching (so their links do not contain any $r$-regular $C_4$ components). Therefore, all good $r$-regular $C_4$ components extend to copies of $\frac{r}{2}\cdot\mathcal{F}$ by Lemma~\ref{lem:extendtof}, so we have found the desired number of those in $\mathcal{H}$, completing the proof. \end{proof} \section{$\frac{r}{2}\cdot\mathcal{F}$-Free $3$-Graphs}\label{last} Theorems~\ref{thm:regularfstability} and~\ref{thm:abcfstability} have the following easy corollaries, respectively: \begin{cor} \label{cor:ffreenubound} Let $\mathcal{H}$ be an $r$-regular $3$-partite $3$-graph with $n$ vertices in each vertex class. If $\mathcal{H}$ does not contain a copy of $\frac{r}{2}\cdot\mathcal{F}$, then \[ \nu(\mathcal{H}) \geq \left(1 + \frac{1}{22r - \frac{77}{3}}\right)\frac{n}{2}. \] \end{cor} \begin{cor} \label{cor:abcffreenubound} Let $\mathcal{H}$ be a $3$-partite $3$-graph with vertex classes $A$, $B$, and $C$, such that $\abs{A} = n$. Suppose that every vertex of $A$ has degree at least $r$, and that every vertex in $B \cup C$ has degree at most $r$. If $\mathcal{H}$ contains no subgraph isomorphic to $\frac{r}{2}\cdot\mathcal{F}$, then \[ \nu(\mathcal{H}) \geq \left(1 + \frac{1}{72r^2 - 150r + 77}\right)\frac{n}{2}.
\] \end{cor} This answers the question of Aharoni, Kotlar, and Ziv~\cite{aharonikotlarziv} mentioned in the introduction, since for $r \geq 3$, any simple $3$-partite $3$-graph is $\frac{r}{2}\cdot\mathcal{F}$-free. It would be interesting to determine the correct function $\alpha(r)$ for which $\nu(\mathcal{H}) \geq (1 + \alpha(r))\frac{n}{2}$ for every $\mathcal{H}$ satisfying the conditions of Corollary~\ref{cor:ffreenubound}. The following constructions give upper bounds on $\alpha(r)$. \begin{thm} \label{thm:ffreenuexamples} For every even $r \geq 2$ there exists an $r$-regular $3$-partite $3$-graph $\mathcal{H}$ with $n$ vertices per vertex class, not containing a copy of $\frac{r}{2}\cdot\mathcal{F}$, such that \[ \nu(\mathcal{H}) \leq \left(1 + \frac{1}{r + 1}\right)\frac{n}{2}. \] For every odd $r \geq 3$ there exists an $r$-regular $3$-partite $3$-graph $\mathcal{H}$ with $n$ vertices per vertex class (obviously not containing a copy of $\frac{r}{2}\cdot\mathcal{F}$) such that \[ \nu(\mathcal{H}) \leq \left(1 + \frac{1}{r}\right)\frac{n}{2}. \] \end{thm} \begin{proof} First suppose $r \geq 2$ is even. Let $\frac{r}{2}\cdot\mathcal{F}^-$ denote the $3$-partite $3$-graph obtained by removing a single edge from $\frac{r}{2}\cdot\mathcal{F}$. Note that it has three vertices of degree $r - 1$ and three vertices of degree $r$. Take $\frac{r}{2}$ disjoint copies of $\frac{r}{2}\cdot\mathcal{F}^-$ together with three vertices $a$, $b$, and $c$, one in each class. For each copy $F$ of $\frac{r}{2}\cdot\mathcal{F}^-$, add three edges, each using two of $a$, $b$, and $c$ and one of the three degree-$(r - 1)$ vertices of $F$, with each of these three vertices used exactly once. Each group of three edges contributes $2$ to the degree of each of $a$, $b$, and $c$, and $1$ to the degree of each of the three degree-$(r - 1)$ vertices, hence after all $\frac{r}{2}$ such groups are added, the resulting $3$-graph is $r$-regular and clearly $\frac{r}{2}\cdot\mathcal{F}$-free.
It has $n = r + 1$ vertices per vertex class, and its largest matching is of size at most $\frac{r}{2} + 1$, since in any matching we can pick at most one edge from each copy of $\frac{r}{2}\cdot\mathcal{F}^-$, and any two of the edges we added intersect in one of $a$, $b$, or $c$, so at most one of them can be used. This gives the desired bound for even $r$. If $r \geq 3$ is odd, we use a construction very similar to the one above. Instead of $\frac{r}{2}\cdot\mathcal{F}^-$, which does not exist for odd $r$, let $\frac{r - 1}{2}\cdot\mathcal{F}^+$ denote the $3$-partite $3$-graph obtained from $\frac{r - 1}{2}\cdot\mathcal{F}$ by adding an extra copy of one of its edges. Note that it has three vertices of degree $r - 1$ and three vertices of degree $r$. Taking $\frac{r - 1}{2}$ disjoint copies of $\frac{r - 1}{2}\cdot\mathcal{F}^+$ together with three vertices $a$, $b$, and $c$, one in each class, we add edges containing two of these vertices and one degree-$(r - 1)$ vertex of an $\frac{r - 1}{2}\cdot\mathcal{F}^+$ as in the previous construction. We also add the edge $abc$. The resulting $3$-graph is $r$-regular and clearly $\frac{r}{2}\cdot\mathcal{F}$-free (since this $3$-graph does not exist for odd $r$). It has $n = r$ vertices per vertex class, and its largest matching is of size at most $\frac{r - 1}{2} + 1$, since we can pick at most one edge from each copy of $\frac{r - 1}{2}\cdot\mathcal{F}^+$, and any two of the edges we added (including $abc$) intersect in one of the three extra vertices $a$, $b$, and $c$, so at most one of them can be used. This gives the desired bound for odd $r$. \end{proof} All of these examples have high edge multiplicity, and as mentioned in the introduction, one may expect substantially better lower bounds on the matching number for \emph{simple} hypergraphs. We close with the following conjectures about this more restrictive case. \begin{conj}[Aharoni, Kotlar and Ziv~\cite{aharonikotlarziv}] Let $\mathcal{H}$ be an $r$-regular simple $3$-partite $3$-graph with $n$ vertices in each class.
Then $\nu(\mathcal{H}) \geq \frac{r - 1}{r}n$. \end{conj} \begin{conj}[Aharoni, Berger, Kotlar and Ziv~\cite{ABKZ}] Let $\mathcal{H}$ be a simple $3$-partite $3$-graph with vertex classes $A$, $B$ and $C$. Suppose each vertex in $A$ has degree at least $r$, and each vertex in $B \cup C$ has degree at most $r$. Then $\nu(\mathcal{H}) \geq \frac{r - 1}{r}\abs{A}$. \end{conj} At $r = n$, these conjectures generalize a notorious old open problem of Ryser, Brualdi, and Stein on transversals in Latin squares, so in their full generality they are likely to be very difficult. \end{document}
\begin{document} \title{A Fundamental System of Seminorms for $A(K)$} \author{Dietmar Vogt} \date{} \maketitle Let $K\subset\mathbb{R}^d$ be compact and $A(K)$ the space of germs of real analytic functions on $K$ with its natural (LF)-topology (see e.g. \cite{MV}, 24.38, (2)). This topology can also be given by $A(K)=\text{lim ind}_{k\to+\infty} A_k$ where $$A_k=\{f\in C^\infty(K)\,:\, \|f\|_k:=\sup_{\alpha\in\mathbb{N}_0^d}\sup_{x\in K} \frac{|f^{(\alpha)}(x)|}{\alpha!}\, k^{-|\alpha|}< +\infty\}.$$ Based on this description we give in the present note an explicit fundamental system of seminorms for $A(K)$. We start with a modified problem. Let $X$ be a Banach space. We put $$F_k=\{(x_\alpha)_{\alpha\in\mathbb{N}_0^d}\in X^{\mathbb{N}_0^d}\,:\, \|x\|_k:=\sup_\alpha \|x_\alpha\| k^{-|\alpha|}< +\infty\}$$ and $$F=\text{lim ind}_{k\to+\infty} F_k.$$ On $F$ we consider for any positive null-sequence $\delta=(\delta_n)_{n\in\mathbb{N}}$ the continuous norm $$|x|_\delta = \sup_\alpha \|x_\alpha\| \delta_{|\alpha|}^{|\alpha|}.$$ \begin{lemma} \label{lem}The norms $|\cdot|_\delta$ are a fundamental system of seminorms on $F$. \end{lemma} \bf Proof: \rm It is sufficient to show that for every positive sequence $\varepsilon_k$, $k\in\mathbb{N}$, there exists $\delta$ such that $$U_\delta:=\{x\,:\,|x|_\delta\le 1\}\subset\sum_k \varepsilon_k B_k$$ where $B_k$ denotes the unit ball of $F_k$. Without loss of generality we may assume that $\varepsilon_k\le1$ for all $k$. For every $k$ we choose $n_k>n_{k-1}$ such that $k-1<\varepsilon_k^{1/n_k}k$. We put $\delta_n^{-1} = \varepsilon_k^{1/n_k}k$ for $n_k\le n< n_{k+1}$. We obtain for these $n$ $$\delta_n^{-n}=\varepsilon_k^{n/n_k}k^n\le \varepsilon_k k^n.$$ By construction $\delta_n^{-1}=\varepsilon_k^{1/n_k}k>k-1$ for $n_k\le n<n_{k+1}$, so $\delta=(\delta_n)_n$ is a null-sequence.
For $x\in U_\delta$ and $n_k\le |\alpha|<n_{k+1}$ we have $\|x_\alpha\|\le \delta_{|\alpha|}^{-|\alpha|}\le \varepsilon_k k^{|\alpha|}$ and therefore $$\xi_k=\sum_{n_k\le|\alpha|<n_{k+1}} x_\alpha\in \varepsilon_k B_k.$$ Since $x=\sum_k \xi_k$ the proof is complete. \hspace*{\fill} $\Box $ \begin{theorem} If $K\subset \mathbb{R}^d$ is compact, then the norms $$|f|_\delta = \sup_\alpha \sup_{x\in K} \frac{|f^{(\alpha)}(x)|}{\alpha!} \delta_{|\alpha|}^{|\alpha|},$$ where $\delta$ runs through all positive null-sequences, are a fundamental system of seminorms in $A(K)$. \end{theorem} \bf Proof: \rm Let $F$ be as above with $X=C(K)$. We define a map $A:A(K)\to F$ by $A(f)=\Big(\frac{f^{(\alpha)}}{\alpha!}\Big)_{\alpha\in\mathbb{N}_0^d}$. The map $A$ is obviously continuous and $A^{-1}(B)$ is bounded in $A(K)$ for every bounded set $B$ in $F$. From Baernstein's Lemma (see \cite{MV}, 26.26) it follows that $A$ is an injective topological homomorphism. Hence Lemma \ref{lem} proves the result. \hspace*{\fill} $\Box $ \noindent Bergische Universit\"{a}t Wuppertal, \newline FB Math.-Nat., Gau\ss -Str. 20, \newline D-42119 Wuppertal, Germany \newline e-mail: [email protected] \end{document}
\begin{document} \begin{abstract} We prove the meridional rank conjecture for arborescent links associated to plane trees with the following property: all branching points carry a straight branch to at least three leaves. The proof involves an upper bound on the bridge number in terms of the maximal number of link components of the underlying tree, valid for all arborescent links. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} The family of arborescent tangles can be defined as the minimal family of tangles containing all rational tangles, closed under horizontal and vertical tangle composition~\cite{Co}. Their closures -- arborescent links -- admit a description via weighted plane trees, where each vertex stands for a twisted band, and edges indicate how these bands are glued together. See~\cite{BS, Ga} for a precise definition and Figure~1 for an illustration (ignoring the additional labels and dots in the link diagram for the time being). These descriptions are not unique, since small weights typically allow for simplifications of the underlying tree, without changing the link type. The meridional rank conjecture by Cappell-Shaneson posits an equality between the bridge number and the meridional rank of a link; see Problem 1.11 in~\cite{Ki}. Early evidence towards this conjecture was obtained by Boileau and Zimmermann, who showed that two-bridge links are the only links with meridional rank two~\cite{BZ}, and by Rost and Zieschang, who proved the conjecture for torus links~\cite{RZ}. \begin{figure} \caption{Example of an arborescent knot} \end{figure} We recall that the bridge number $\beta(L)$ of a link $L \subset {\mathbb R}^3$ is the minimal number of local maxima of $L$ with respect to a fixed direction, minimized over all isotopic representatives of $L$.
The meridional rank $\mu(L)$ is the minimal number of generators of the fundamental group $\pi_1({\mathbb R}^3 \setminus L)$, where all generators are required to be conjugate to a standard meridional loop of the link $L$. The bridge number of a link is bounded below by its meridional rank. Given a fixed plane tree $T$, each choice of weights for its vertices determines an arborescent link $L(T)$. Define $m(T)$ to be the maximal number of components of $L(T)$ over all links obtained by assigning weights to the vertices of $T$. We will show that $m(T)$ admits a description in terms of the combinatorics of the tree. \begin{theorem} \label{thm:bound} For every arborescent link $L(T)$ determined by a weighted plane tree $T$, the bridge number of $L(T)$ is bounded above by the maximal component number of $T$: \[ \beta(L(T)) \leq m(T). \] \end{theorem} This bound is sharp for a class of trees, defined next. A twig is a straight branch connecting a leaf to a branching point. A tree $T$ is said to have many twigs if it is obtained from a subtree $T'\subset T$ by adding at least three twigs to every vertex of $T'$. \begin{theorem} \label{main} Let $L(T)$ be an arborescent link associated to a plane tree $T$ with many twigs and all weights $\neq 0,\pm 1$. The meridional rank conjecture holds for $L(T)$ and $$\mu(L(T))=\beta(L(T))=m(T).$$ \end{theorem} To evaluate the maximal component number $m(T)$, we shall use $f(T)$, the flattening number of $T$. Define a subset of edges of $T$ to be flattening if the complement of their interiors is a subforest of $T$ with no vertex of valency bigger than two. The natural number $f(T)$ is the minimal number of edges among all flattening subsets; this definition appears in the context of braid indices of fibred arborescent links in~\cite{Ba}.
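Two simple cases illustrate the flattening number. If $T$ is a path, then no vertex has valency bigger than two, so the empty set of edges is already flattening and $f(T)=0$. If $T$ is star-like with central vertex of valency $n \geq 2$, then removing the interiors of any $n-2$ of the central edges leaves a forest with no vertex of valency bigger than two, while removing fewer edges cannot, so $f(T)=n-2$.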
In the special case of trees with a bipartite ramification structure, where all vertices of valency bigger than two are even distance apart, the number $f(T)$ is easily seen to coincide with the number of leaves of $T$ minus two. The first three authors proved the meridional rank conjecture for links associated to bipartite trees in~\cite{BBK}, establishing Theorems~\ref{thm:bound} and~\ref{main} for this class of links, although the formulation in terms of the maximal component number is new. Trees with many twigs and trees with a bipartite ramification structure have a small intersection, consisting of star-like trees. The links corresponding to these trees are known as Montesinos links. Our proof is inspired by the technique developed in~\cite{BBK}. We construct Coxeter quotients of the groups $\pi_1({\mathbb R}^3 \setminus L(T))$ of rank $f(T)+2$ for trees with many twigs. This is done in the next section and establishes the inequality $$f(T)+2\leq \mu(L(T)).$$ In the third and last section, we show that the bridge number of arborescent links (without restriction) is bounded above by $f(T)+2$, $$\beta(L(T))\leq f(T)+2.$$ This in turn is done by computing the Wirtinger number of arborescent diagrams, a combinatorial version of the bridge number introduced in~\cite{BKVV}. Moreover, for all plane trees we establish the equality $$m(T)=f(T)+2.$$ \section{Coxeter quotients for arborescent links} Coxeter groups are encoded by finite simple weighted graphs. Let $\Gamma$ be a finite simple graph with $v(\Gamma)$ vertices, whose edges carry integer weights $\geq 2$. The corresponding Coxeter group $C(\Gamma)$ is generated by $v(\Gamma)$ elements of order two, one for each vertex of $\Gamma$. Every edge with weight $k$ stands for a relation of the form $(st)^k=1$, where $s$ and $t$ are the generators associated with the two endpoints of that edge. Elements of $C(\Gamma)$ conjugate to these generators are called reflections.
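A basic example to keep in mind: if $\Gamma$ consists of two vertices joined by a single edge of weight $k \geq 2$, then $C(\Gamma)$ is the dihedral group of order $2k$, generated by two reflections $s,t$ subject to $s^2=t^2=(st)^k=1$. The rank two Coxeter quotients of two-bridge link groups discussed below are of precisely this dihedral type.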
The minimal number of reflections needed to generate $C(\Gamma)$ is called the reflection rank of $C(\Gamma)$; it is known to equal $v(\Gamma)$. The following elementary lower bound for the meridional rank $\mu(L)$ of links in terms of the reflection rank was derived in~\cite{BBK} (Proposition~1, Section~2). \begin{proposition} \label{lowerbound} Let $L$ be a link whose fundamental group surjects onto a Coxeter group $C(\Gamma)$, so that all meridians are mapped to reflections. Then $\mu(L) \geq v(\Gamma)$. \end{proposition} We will use the term Coxeter quotient for quotients of link groups that arise by sending all meridians of $L$ to reflections of a Coxeter group. They were introduced by Brunner in~\cite{Br}, albeit as homomorphisms onto Artin groups rather than Coxeter groups. An important class of links that admit non-cyclic Coxeter quotients are two-bridge links, which can be encoded by rational numbers $\alpha/\beta$ with relatively prime integers $\alpha, \beta$ and $-\alpha<\beta<\alpha$. As explained in~\cite{BBK}, the two-bridge link $L(\alpha/\beta)$ admits a rank two Coxeter quotient generated by two reflections $s,t$ satisfying the relation $(st)^{\alpha}=1$. The goal of this section is to construct Coxeter quotients of reflection rank $f(T)+2$, for all arborescent links $L(T)$ with the restrictions stated in Theorem~\ref{main}. For this purpose, we need a recursive formula for the flattening number $f(T)$. We say that a tree $\overline{T}$ is obtained from $T$ by adding a ramification point if $\overline{T}$ contains an edge $e$ whose complement is the union of $T$ and a star-like tree whose central vertex $c$ is adjacent to $e$. Figure~2 illustrates this operation and serves as a hint for the proof of the following easy fact.
\begin{figure} \caption{Adding ramification points of valency 4 and 3} \end{figure} \begin{lemma} \label{lemma1} If $\overline{T}$ is obtained from $T$ by adding a ramification point of valency $k\geq 2$, then $f(\overline{T})=f(T)+k-2$. \end{lemma} Adding a ramification point of valency $k$ to a tree $T$ has the effect of inserting $k-1$ rational tangles into the arborescent link $L(T)$. This is illustrated in Figure~3 for $k=4$, where each of the three boxes labeled $A,B,C$ stands for a rational tangle determined by the branches incident to the new ramification point, and the number of twists in the central band is given by the weight of that point. Recall that a tree~$T$ satisfying the hypotheses of Theorem~\ref{main} is obtained from a subtree $T' \subset T$ by adding at least three straight branches, or twigs, to every vertex of $T'$. For this reason, $T$ can be constructed inductively from a star-shaped tree by adding ramification points of valency at least four to branching points, i.e. to vertices of valency at least three, as in the upper part of Figure~2. We will construct a Coxeter quotient of rank $f(T)+2$ by induction on the number of vertices of $T'$. An important element of this construction is that each twist region of the arborescent diagram corresponding to a vertex of $T'$ -- that is, to a branching point of $T$ -- carries a single Coxeter generator. Conveniently, the base case and the inductive step can be understood in the same diagram: Figure~3 illustrates the base case of a star-shaped tree with three branches, as well as the addition of a ramification point of valency four to an existing branching point. The three labels $x,a,b$ stand for generators of a Coxeter group. Here a label $z$ means that the meridian around the labeled string gets mapped to the generator $z$ of a Coxeter group determined by the link diagram.
Our assumption on the weights ensures that all the rational tangles have non-trivial numerators, hence give rise to Coxeter relations $(st)^{\alpha}=1$ with $\alpha \geq 2$ (compare the discussion in the second paragraph after Proposition~\ref{lowerbound}). For the base case, a star-shaped tree, we observe that $L(T)$ admits a Coxeter quotient with $f(T)+2$ generators -- as many as the number of branches. All arcs in the twist region associated with the centre of the star carry the same label,~$x$. This is illustrated on the right side of Figure~3, for a star with three branches. \begin{figure} \caption{Extending a system of Coxeter generators} \end{figure} For the inductive step, we construct a Coxeter quotient of rank $f(\overline{T})+2=f(T)+2+k-2$ for the link $L(\overline{T})$, by adding $k-2$ new reflection generators and $k-1$ new Coxeter type relations determined by the new rational tangles. This is again illustrated in Figure~3 for $k=4$, where the label~$x$ stands for the generator of the branching point, to which we add the new ramification point. The new generators are labeled $a$ and $b$. At this point, it is essential that the new ramification point has at least three twigs. If it had only two twigs, the single new generator $a$ would satisfy two Coxeter type relations with $x$, which may contradict each other. For example, if $p,q \in {\mathbb N}$ are coprime, then the two relations $(ax)^p=1=(ax)^q$ force $a=x$. This inductive construction, together with Proposition~\ref{lowerbound}, proves the desired lower bound on the meridional rank of arborescent links $L(T)$ associated to trees $T$ with many twigs and all weights $\neq 0,\pm 1$: $$\mu(L(T)) \geq f(T)+2.$$ As pointed out above, the inductive step does not work for arbitrary trees. However, the class of trees $T$ admitting a Coxeter quotient of rank $f(T)+2$ is bigger than the class of trees with many twigs.
For example, the labels $a,b,c,d$ of the arborescent knot $L(T)$ in Figure~1 generate a Coxeter quotient of rank $f(T)+2=4$, obtained by a similar procedure. The Coxeter relations satisfied by these generators are $$(ab)^4=(ac)^3=(bc)^2=(bd)^4=(cd)^5=1.$$ \section{Wirtinger and bridge number of arborescent links} The Wirtinger number of a link, introduced in~\cite{BKVV}, is a combinatorial version of the bridge number. We fix a connected link diagram $D$ with $n$ crossings, whose complement is a union of $n$ arcs. Marking $k$ of these arcs -- called seeds -- by a dot, we obtain a partial coloring of the diagram. We allow the coloring to propagate over crossings by the rule depicted in Figure~4, motivated by the Wirtinger calculus. The idea is that the meridians of all strands marked with a dot are in the subgroup of the link group generated by the meridians of the initially dotted strands, or seeds. \begin{figure} \caption{Propagation rule for colors} \end{figure} The Wirtinger number $\omega(D)$ is the minimal number of seeds whose coloring propagates to a coloring of the entire diagram~$D$. The main result in~\cite{BKVV} states that the bridge number $\beta(L)$ of a link $L$ coincides with the Wirtinger number of $L$, that is, the minimum value of $\omega(D)$ among all diagrams of $L$. In particular, the Wirtinger number of any diagram $D$ of a link $L$ is an upper bound for the bridge number: $$\beta(L) \leq \omega(D).$$ In this section, we will prove that a suitable choice of $f(T)+2$ seeds in a diagram of the arborescent link $L(T)$ propagates to a coloring of the entire diagram, without any restriction on the tree $T$ and its weights.
This implies that for all arborescent links $L(T)$ $$\mu(L(T)) \leq \beta(L(T)) \leq f(T)+2.$$ Combining this with the inequality of the previous section, $\mu(L(T)) \geq f(T)+2$, valid for all arborescent links $L(T)$ with the restrictions stated in Theorem~\ref{main}, we obtain the two desired equalities: $$\beta(L(T))=\mu(L(T))=f(T)+2.$$ We are left to construct diagram colorings for $L(T)$ with $f(T)+2$ seeds, starting with the base case $f(T)=0$, or trees $T$ without ramification points. The corresponding links $L(T)$ are precisely the two-bridge links, or closures of rational tangles. As we can see from Figure~5, two suitably chosen initial seeds are enough to propagate to a coloring of the entire rational tangle. Note that no information on over- and undercrossings is needed in these diagrams. \begin{figure} \caption{Initial seeds for rational tangles} \end{figure} Even more is true: a single seed on any of the four outgoing strings of a rational tangle can be complemented by a second seed, so that the coloring propagates to a coloring of the entire tangle. This may require a sequence of flype moves, as shown in Figure~6. \begin{figure} \caption{Flype} \end{figure} \begin{figure} \caption{Seed extension} \label{seeds} \end{figure} We are now set for an inductive construction of a diagram coloring for $L(T)$ with $f(T)+2$ initial seeds, making once again use of Lemma~\ref{lemma1}. Let $\overline{T}$ be a tree obtained from $T$ by adding a ramification point of valency $k$ to $T$. Suppose that a link diagram of $L(T)$ admits a coloring by $f(T)+2$ initial seeds that propagate to a coloring of the entire link diagram. Note that this step potentially requires applying flypes to the standard diagram of $L(T)$, all of which are supported in rational tangles corresponding to branches of $T$. We obtain a diagram of $L(\overline{T})$ by adding a twist region and $k-1$ rational tangles to the (possibly flyped) diagram of $L(T)$.
The twist region corresponding to the new ramification point can be included in any of these rational tangles. For example, in Figure~3, the three crossings together with tangle $A$ form a single rational tangle. Adding $k-2$ suitable seeds, one for each rational tangle except one, as shown schematically in Figure~7 for $k=5$, gives rise to a set of $f(T)+k=f(\overline{T})+2$ seeds that propagate to a coloring of a link diagram of $L(\overline{T})$. This completes the proof of the inequality $\beta(L(T))\leq f(T)+2$. To obtain Theorems~\ref{main} and~\ref{thm:bound}, it remains to show that $m(T)=f(T)+2$ for all plane trees. Once again we consider star-like trees as a first step. \begin{lemma} \label{lem:starlike} Given a star-like plane tree $T$, there exists a vertex labeling of $T$ such that the corresponding link $L(T)$ has $f(T)+2$ components. \end{lemma} \begin{proof} Suppose the center vertex of $T$ has valency $n\geq 2$. Then $f(T)=n-2$ and $L(T)$ is a Montesinos link on $n$ rational tangles as pictured. \begin{figure} \caption{Montesinos link on $n$ rational tangles.} \end{figure} It now suffices to show that we can choose the labels of $T$ to achieve the connectedness diagram given in Figure~9, so that $L(T)$ has $n=f(T)+2$ components. \begin{figure} \caption{Connectedness diagram for a Montesinos link on $n$ rational tangles with suitably chosen weights.} \end{figure} Let $R$ be any of the $n$ rational tangles. If $R$ has length one, that is, it consists of a single twist region, then we may choose any even label to achieve the desired connectedness. If $R$ has length at least two, all three ways to connect the four endpoints in pairs can be achieved. \end{proof} \begin{proposition}\label{m=f+2} For any plane tree $T$, the flattening number $f(T)$ and the maximal component number $m(T)$ are related as follows. \[ m(T)=f(T)+2. \] \end{proposition} \begin{proof} Assign weights to $T$ so that $L(T)$ is a link realizing the maximal component number $m(T)$. 
We then have $$f(T)+2 \geq \beta(L(T)) \geq m(T),$$ since the inequality $f(T)+2 \geq \beta(L(T))$ holds for any choice of weights, and the number of components of the link $L(T)$ cannot exceed its bridge number. It remains to show that $f(T)+2 \leq m(T)$. We do this by choosing labelings on $T$ that result in a link $L(T)$ with $f(T)+2$ components. We induct on the number of ramifications needed to construct $T$. The base case, in which $T$ is star-like, is treated in Lemma~\ref{lem:starlike}. Now assume $T$ is the result of adding a ramification point, that is, a star-like tree $T_2$, to an arbitrary plane tree $T_1$. \begin{figure} \caption{New ramification point, $c$.} \end{figure} Note that $T_1$ can be constructed using strictly fewer ramifications than $T$. By induction, there exists a labeling of $T_i$ such that $L(T_i)$ has $f(T_i)+2$ components, for $i=1,2$. These labels of $T_1$ and $T_2$ induce a labeling of $T$ which results in the connectedness diagram for $L(T)$ pictured below. \begin{figure} \caption{Tangle substitution corresponding to the new ramification point.} \end{figure} Here, the $R_i$ are rational tangles and the ramification point has valency $n$. \begin{figure} \caption{Connectedness diagram of inserted tangle induced by the choice of labels on $T_2$.} \end{figure} Figure~12 shows the connectedness diagram induced by the chosen labels for $T_2$, with $n-2$ components of $L(T)$ contained therein. Hence, the number of components of $L(T)$ is $$(f(T_1)+2)+(n-2) = f(T_1)+n.$$ By Lemma~\ref{lemma1}, $f(T)=f(T_1)+n-2$, so, as claimed, $L(T)$ has $f(T)+2$ components. \end{proof} \end{document}
\begin{document} \address{Department of Mathematics, Drexel University, Philadelphia, PA 19104} \email{[email protected]} \keywords{canonical basis, Hecke algebra, Schur-Weyl duality, seminormal basis} \author{Jonah Blasiak}\thanks{The author is currently an NSF postdoctoral fellow.} \title{Quantum Schur-Weyl duality and projected canonical bases} \begin{abstract} Let $\ensuremath{\mathscr{H}}_r$ be the generic type $A$ Hecke algebra defined over $\ensuremath{\mathbb{Z}}[\ensuremath{u}, \ensuremath{u^{-1}}]$. The Kazhdan-Lusztig bases $\{C_w\}_{w \in \ensuremath{\mathcal{S}}_r}$ and $\{\ensuremath{C^{\prime}}_w\}_{w \in \ensuremath{\mathcal{S}}_r}$ of $\ensuremath{\mathscr{H}}_r$ give rise to two different bases of the Specht module $M_\lambda$, $\lambda \vdash r$, of $\ensuremath{\mathscr{H}}_r$. These bases are not equivalent and we show that the transition matrix $S(\lambda)$ between the two is the identity at $\ensuremath{u} = 0$ and $\ensuremath{u} = \infty$. To prove this, we first prove a similar property for the transition matrices $\ensuremath{\tilde{T}}, \ensuremath{\tilde{T}}'$ between the Kazhdan-Lusztig bases and their projected counterparts $\{\ensuremath{\tilde{C}}_w\}_{w \in \ensuremath{\mathcal{S}}_r}$, $\{\ensuremath{\tilde{C}^{\prime}}_w\}_{w \in \ensuremath{\mathcal{S}}_r}$, where $\ensuremath{\tilde{C}}_w := C_w p_\lambda$, $\ensuremath{\tilde{C}^{\prime}}_w := \ensuremath{C^{\prime}}_w p_\lambda$, and $p_\lambda$ is the minimal central idempotent corresponding to the two-sided cell containing $w$.
We prove this property of $\ensuremath{\tilde{T}},\ensuremath{\tilde{T}}'$ using quantum Schur-Weyl duality and results about the upper and lower canonical basis of $V^{\ensuremath{\otimes} r}$ ($V$ the natural representation of $U_q(\mathfrak{gl}_n)$) from \cite{GL, FKK, Brundan}. We also conjecture that the entries of $S(\lambda)$ have a certain positivity property. \end{abstract} \maketitle \section{Introduction} \label{s Introduction} Let $\{C_w: w \in \ensuremath{\mathcal{S}}_r\}$ and $\{\ensuremath{C^{\prime}}_w: w \in \ensuremath{\mathcal{S}}_r\}$ be the Kazhdan-Lusztig bases of the type $A$ Hecke algebra $\ensuremath{\mathscr{H}}_r$, which we refer to as the upper and lower canonical basis of $\ensuremath{\mathscr{H}}_r$, respectively. After working with these bases for a while, we have convinced ourselves that it is not particularly useful to look at both at once---one can work with one or the other and it is easy to go back and forth between the two (precisely, there is an automorphism $\theta$ of $\ensuremath{\mathscr{H}}_r$ such that $\theta(\ensuremath{C^{\prime}}_w) = (-1)^{\ell(w)}C_w$). However, our recent work on the nonstandard Hecke algebra \cite{Bnsbraid, B4} has forced us to look at both these bases simultaneously. Before explaining how this comes about, let us describe our results and conjectures. Let $\ensuremath{K} = \ensuremath{\mathbb{Q}}(\ensuremath{u})$, where $\ensuremath{u}$ is the Hecke algebra parameter, and let $M_\lambda$ be the $\ensuremath{K} \ensuremath{\mathscr{H}}_r$-irreducible of shape $\lambda \vdash r$.
The upper and lower canonical basis of $\ensuremath{\mathscr{H}}_r$ give rise to bases $\{C_Q : Q \in \text{SYT}(\lambda)\}$ and $\{\ensuremath{C^{\prime}}_Q : Q \in \text{SYT}(\lambda)\}$ of $M_\lambda$, which we refer to as the upper and lower canonical basis of $M_\lambda$. These bases are not equivalent, and it appears to be a difficult and interesting question to understand the transition matrix $S(\lambda)$ between them (which is well-defined up to a global scale by the irreducibility of $M_\lambda$). It turns out that $S(\lambda)$ is the identity at $\ensuremath{u} = 0$ and $\ensuremath{u} = \infty$ (Theorem \ref{t transition C' to C}) and, though it is not completely clear what it should mean for an element of $\ensuremath{K}$ to be nonnegative, its entries appear to have some kind of nonnegativity (see Conjecture \ref{cj non-negativity T T' D}). To compare the upper and lower canonical basis of $M_\lambda$, we compare them both to certain seminormal bases of $M_\lambda$ in the sense of \cite{RamSeminormal}. These bases are compatible with restriction along the chain of subalgebras $\ensuremath{\mathscr{H}}_1 \subseteq \cdots \subseteq \ensuremath{\mathscr{H}}_{r-1} \subseteq \ensuremath{\mathscr{H}}_r$ (see Definition \ref{d seminormal}). Specifically, we define an upper (resp. lower) seminormal basis which differs from the upper (resp. lower) canonical basis by a unitriangular transition matrix $T(\lambda)$ (resp. $T'(\lambda)$). It appears that these transition matrices also possess some kind of nonnegativity property. Since the restrictions $\ensuremath{\mathscr{H}}_{i-1} \subseteq \ensuremath{\mathscr{H}}_i$ are multiplicity-free, these seminormal bases differ from each other by a diagonal transformation $D(\lambda)$. Hence we have $S(\lambda) = T(\lambda) D(\lambda) T'(\lambda)^{-1}$. We briefly mention some related investigations in the literature.
Other seminormal bases of $M_\lambda$ have been defined---for instance, Hoefsmit, and later, independently, Ocneanu, and Wenzl construct a Hecke algebra analog of Young's orthogonal basis (see \cite{Wenzl}). This basis differs from our upper and lower seminormal bases by a diagonal transformation, but is not equal to either. The recent paper \cite{GLS} uses an interpretation of the lower seminormal basis in terms of non-symmetric Macdonald polynomials to study $T'(\lambda)$ for $\lambda$ a two-row shape and gives an explicit formula for a column of this matrix (see Remark \ref{r Lascoux's paper}). Along similar lines, the transition matrix between the upper canonical basis at $\ensuremath{u} =1$ and Young's natural basis of $M_\lambda$ is studied by Garsia and McLarnan in \cite{GM}; they show that this matrix is unitriangular and has integer entries. Our investigation further involves projecting the basis element $C_w$ (resp. $\ensuremath{C^{\prime}}_w$) onto the isotypic component corresponding to the two-sided cell containing $w$. This results in what we call the projected upper (resp. lower) canonical basis; let $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}$ (resp. $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}'$) denote the transition matrix between the projected and upper (resp. lower) canonical basis. The properties we end up proving about $S(\lambda), T(\lambda), T'(\lambda)$ all follow from properties of $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}$ and $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}'$. And we are able to get some handle on $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}$ and $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}'$ using quantum Schur-Weyl duality. Specifically, we use the compatibility between an upper (resp. lower) canonical basis of $V^{\ensuremath{\otimes} r}$ with the upper (resp. 
lower) canonical basis of $\ensuremath{\mathscr{H}}_r$ and well-known results about crystal lattices, where $V$ is the natural representation of $U_q(\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}l_n)$. The results we need are similar to those in \cite{GL, FKK, Brundan} and follow easily from results of \cite{LBook,Kas2}. Brundan's paper \cite{Brundan} is particularly well adapted to our needs and we follow it closely. We now return to our original motivation. The type $A$ \emph{nonstandard Hecke algebra} $\nsbr{\ensuremath{\mathscr{H}}}_r$ is the subalgebra of $\ensuremath{\mathscr{H}}_r \ensuremath{\otimes} \ensuremath{\mathscr{H}}_r$ generated by the elements \[ \mathcal{P}_s := \ensuremath{C^{\prime}}_s \ensuremath{\otimes} \ensuremath{C^{\prime}}_s + C_s \ensuremath{\otimes} C_s, \ s \in S, \] where $S = \{s_1, \dots, s_{r-1}\}$ is the set of simple reflections of $\ensuremath{\mathcal{S}}_r$. We think of the inclusion $\nsbr{\ensuremath{\mathscr{H}}}_r \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}ookrightarrow \ensuremath{\mathscr{H}}_r \ensuremath{\otimes} \ensuremath{\mathscr{H}}_r$ as a deformation of the coproduct $\ensuremath{\mathbb{Z}} \ensuremath{\mathcal{S}}_r \ensuremath{\mathfrak{t}}o \ensuremath{\mathbb{Z}} \ensuremath{\mathcal{S}}_r \ensuremath{\otimes} \ensuremath{\mathbb{Z}} \ensuremath{\mathcal{S}}_r$, $w \ensuremath{\mathfrak{m}}apsto w \ensuremath{\otimes} w$. This algebra was constructed by Mulmuley and Sohoni in \cite{GCT4} in an attempt to use canonical bases to understand Kronecker coefficients. Let $\epsilon_+ = M_{(r)}$, $\epsilon_- = M_{(1^r)}$ be the trivial and sign representations of $\ensuremath{K} \ensuremath{\mathscr{H}}_r$. Any representation $M_\lambda \ensuremath{\otimes} M_\ensuremath{\mathfrak{m}}u$ of $\ensuremath{K} (\ensuremath{\mathscr{H}}_r \ensuremath{\otimes} \ensuremath{\mathscr{H}}_r)$ is a $\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r$-module by restriction. 
The trivial and sign representations $\nsbr{\epsilon}_+$ and $\nsbr{\epsilon}_-$ of $\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r$ are the restrictions of $\epsilon_+ \ensuremath{\otimes} \epsilon_+$ and $\epsilon_+ \ensuremath{\otimes} \epsilon_-$, respectively. There is a single copy of $\nsbr{\epsilon}_+$ inside $\ensuremath{\mathcal{R}}es_{\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r}M_\lambda \ensuremath{\otimes} M_\lambda$ and a single copy of $\nsbr{\epsilon}_-$ inside $\ensuremath{\mathcal{R}}es_{\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r}M_\lambda \ensuremath{\otimes} M_{\lambda'}$, where $\lambda'$ is the conjugate partition of $\lambda$. These can be written in terms of canonical bases as \begin{equation} \nsbr{\epsilon}_+ \cong \ensuremath{K} \sum_{Q \in \ensuremath{\mathfrak{t}}ext{SYT}(\lambda)} C_Q \ensuremath{\otimes} \ensuremath{C^{\prime}}_Q, \ensuremath{\mathfrak{q}}uad\ensuremath{\mathfrak{q}}uad \nsbr{\epsilon}_- \cong \ensuremath{K} \sum_{Q \in \ensuremath{\mathfrak{t}}ext{SYT}(\lambda)} (-1)^{\ell(Q)} C_Q \ensuremath{\otimes} C_{\ensuremath{\mathfrak{t}}ranspose{Q}}, \end{equation} where $\ensuremath{\mathfrak{t}}ranspose{Q}$ denotes the transpose of the SYT $Q$ and $\ell(Q)$ denotes the distance between $Q$ and some fixed tableau of shape $\lambda$ in the dual Knuth equivalence graph on SYT$(\lambda)$. An important part of understanding the nonstandard Hecke algebra is to understand its trivial and sign representations. If we fix a basis of $\ensuremath{K} (\ensuremath{\mathscr{H}}_r \ensuremath{\otimes} \ensuremath{\mathscr{H}}_r)$, say $\{C_v \ensuremath{\otimes} C_w:v,w \in \ensuremath{\mathcal{S}}_r\}$, then expressing the central idempotent for $ \nsbr{\epsilon}_-$ in this basis involves understanding $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}$ and expressing the central idempotent for $ \nsbr{\epsilon}_+$ involves understanding $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}$ and $S(\lambda)$.
The same difficulties come up if we choose the basis $\{C_v \ensuremath{\otimes} \ensuremath{C^{\prime}}_w:v,w \in \ensuremath{\mathcal{S}}_r\}$. Admittedly, $\nsbr{\epsilon}_+ \subseteq \ensuremath{\mathcal{R}}es_{\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r}M_\lambda \ensuremath{\otimes} M_\lambda$ and $\nsbr{\epsilon}_- \subseteq \ensuremath{\mathcal{R}}es_{\ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r}M_\lambda \ensuremath{\otimes} M_{\lambda'}$ both have simple expressions in terms of the Hecke orthogonal basis of \cite{Wenzl}. However, we suspect it will be useful to understand $\nsbr{\ensuremath{\mathscr{H}}}_r$ in terms of a basis like $\{C_v \ensuremath{\otimes} C_w:v,w \in \ensuremath{\mathcal{S}}_r\}$. This is somewhat justified by our work in progress \cite{BMSGCT4}, joint with Ketan Mulmuley and Milind Sohoni, in which we use canonical bases of quantum groups to give a combinatorial rule for Kronecker coefficients with two two-row shapes (here, we do not need $S(\lambda)$, but the projected upper canonical basis plays an essential role). This paper is organized as follows. In \ensuremath{\mathfrak{t}}extsection\ref{s Preliminaries and notation}--\ref{s Crystal bases of the quantized enveloping algebra} we introduce the necessary background on canonical bases of Hecke algebras and quantum groups. We then use this in \ensuremath{\mathfrak{t}}extsection\ref{s Quantum Schur-Weyl duality and canonical bases} to construct canonical bases of $V^{\ensuremath{\otimes} r}$ and relate them to those of $\ensuremath{\mathscr{H}}_r$, closely following \cite{Brundan}. 
Next, in \ensuremath{\mathfrak{t}}extsection\ref{s projected canonical bases}, we give several characterizations of projected canonical basis elements, which we then use in \ensuremath{\mathfrak{t}}extsection\ref{s Consequences for the canonical bases of M_lambda} to prove that the transition matrices $S(\lambda),T(\lambda)$, and $T'(\lambda)$ are the identity at $\ensuremath{u}=0$ and $\ensuremath{u} = \infty$. Finally, in \ensuremath{\mathfrak{t}}extsection\ref{s The two-row case}, we compute explicitly a matrix similar to $T'(\lambda)$, for $\lambda$ a two-row shape, using the $U_q(\ensuremath{\mathfrak{sl}}_2)$ graphical calculus of \cite{FK}. \section{Preliminaries and notation} Here we introduce notation for general Coxeter groups and then specialize to the weight lattice and Weyl group of $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}l_n$. In preparation for quantum Schur-Weyl duality, we introduce notation for words and tableaux. Finally, we define cells in the general setting of modules with basis, rather than only for $W$-graphs. \label{s Preliminaries and notation} \subsection{General notation} We work primarily over the ground rings $\ensuremath{\mathfrak{m}}athbf{A} = \ensuremath{\mathbb{Z}}[\ensuremath{u}, \ensuremath{u^{-1}}]$ and $\ensuremath{K} = \ensuremath{\mathbb{Q}}(\ensuremath{u})$. Define $\ensuremath{K}_0$ (resp. $\ensuremath{K}_\infty$) to be the subring of $\ensuremath{K}$ consisting of rational functions with no pole at $\ensuremath{u} = 0$ (resp. $\ensuremath{u} = \infty$). Let $\ensuremath{\mathfrak{b}}r{\cdot}$ be the involution of $\ensuremath{K}$ determined by $\ensuremath{\mathfrak{b}}r{u} = \ensuremath{u^{-1}}$; it restricts to an involution of $\ensuremath{\mathfrak{m}}athbf{A}$. 
For a nonnegative integer $k$, the $\ensuremath{\mathfrak{b}}r{\cdot}$-invariant quantum integer is $[k] := \frac{\ensuremath{u}^k - \ensuremath{u}^{-k}}{\ensuremath{u} - \ensuremath{u^{-1}}} \in \ensuremath{\mathfrak{m}}athbf{A}$ and the quantum factorial is $[k]! := [k][k-1]\dots[1]$. We also use the notation $[k]$ to denote the set $\{1,\dots,k\}$, but these usages should be easy to distinguish from context. Let $(W, S)$ be a Coxeter group with length function $\ell$ and Bruhat order $<$. If $\ell(vw)=\ell(v)+\ell(w)$, then $vw = v\cdot w$ is a \emph{reduced factorization}. The \emph{right descent set} of $w \in W$ is $R(w) = \{s\in S : ws < w\}$. For any $J\subseteq S$, the \emph{parabolic subgroup} $W_J$ is the subgroup of $W$ generated by $J$. Each left (resp. right) coset $wW_J$ (resp. $W_Jw$) contains a unique element of minimal length called a minimal coset representative. The set of all such elements is denoted $W^J$ (resp. $\leftexp{J}W$). \subsection{Words and tableaux} \label{ss type A combinatorics preliminaries} Our results depend heavily on quantum Schur-Weyl duality, so we work almost entirely in type $A$. The \emph{weight lattice} $X$ of the Lie algebra $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}l_n$ is $\ensuremath{\mathbb{Z}}^n$ with standard basis $\epsilon_1, \dots, \epsilon_{n}$. Its dual, $\dual{X}$, has basis $\dual{\epsilon}_{1}, \dots, \dual{\epsilon}_{n}$ dual to the standard. The simple roots are $\ensuremath{\mathfrak{a}}lpha_i = \epsilon_i - \epsilon_{i+1}, i \in [n-1]$. We write $\lambda \vdash_l r$ for a partition $\lambda = (\lambda_1, \dots, \lambda_l)$ of size $r = |\lambda| := \sum_{i=1}^l \lambda_i$. A partition $\lambda \vdash_n r$ is identified with the weight $\lambda_1\epsilon_1 + \dots + \lambda_n\epsilon_n \in X$.
For $\zeta = (\zeta_1,\dots,\zeta_l)$ a weak composition of $r$, let $B_j$ be the interval $[\sum_{i=1}^{j-1}\zeta_i+1,\sum_{i=1}^{j}\zeta_i]$, $j \in [l]$. Define $J_\zeta = \{s_i:i, i+1\in B_j \ensuremath{\mathfrak{t}}ext{ for some } j\}$ so that $(\ensuremath{\mathcal{S}}_r)_{J_\zeta} \cong \ensuremath{\mathcal{S}}_{\zeta_1} \ensuremath{\mathfrak{t}}imes \dots \ensuremath{\mathfrak{t}}imes \ensuremath{\mathcal{S}}_{\zeta_{l}}$. Let $\ensuremath{\mathfrak{m}}athbf{k} = k_1 k_2\dots k_r \in [n]^r$ be a word of length $r$ in the alphabet $[n]$. The \emph{content} of $\ensuremath{\mathfrak{m}}athbf{k}$ is the tuple $(\zeta_1,\dots,\zeta_n)$ whose $i$-th entry $\zeta_i$ is the number of $i$'s in $\ensuremath{\mathfrak{m}}athbf{k}$. The notation $\ensuremath{\mathfrak{m}}athbf{k}^\dagger$ denotes the word $k_r k_{r-1} \dots k_1$. The symmetric group $\ensuremath{\mathcal{S}}_r$ acts on $[n]^r$ on the right by $\ensuremath{\mathfrak{m}}athbf{k} s_i = k_1 \dots k_{i-1} \, k_{i+1} \, k_i \, k_{i+2} \dots k_r$. Define $\sort(\ensuremath{\mathfrak{m}}athbf{k})$ to be the tuple obtained by rearranging the $k_j$ in weakly increasing order. For a word $\ensuremath{\mathfrak{m}}athbf{k}$ of content $\zeta$, define $d(\ensuremath{\mathfrak{m}}athbf{k})$ (resp. $D(\ensuremath{\mathfrak{m}}athbf{k})$) to be the element $w$ of $\leftexp{J_\zeta}{\ensuremath{\mathcal{S}}_r}$ (resp. $(w_0)_{J_\zeta} \leftexp{J_\zeta}{\ensuremath{\mathcal{S}}_r}$ where $(w_0)_{J_\zeta}$ is the longest element of $(\ensuremath{\mathcal{S}}_r)_{J_\zeta}$) such that $\sort(\ensuremath{\mathfrak{m}}athbf{k})w = \ensuremath{\mathfrak{m}}athbf{k}$. The set of standard Young tableaux is denoted SYT, those SYT of size $r$ denoted SYT$^r$, those SYT$^r$ with at most $n$ rows denoted SYT$^r_{\leq n}$, and those SYT of shape $\lambda$ denoted SYT$(\lambda)$.
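The elementary word operations just introduced (content, the right $\ensuremath{\mathcal{S}}_r$-action, and $\sort$) can be made concrete with a few lines of code. The following sketch is our own illustration, not part of the paper; the function names are hypothetical.

```python
# Word combinatorics for words in the alphabet [n] = {1, ..., n}.
# Words are modeled as tuples of integers.

def content(k, n):
    # content of the word k: the number of i's in k, for i = 1..n
    return tuple(k.count(i) for i in range(1, n + 1))

def act(k, i):
    # right action of the simple transposition s_i (1-indexed):
    # swap the letters in positions i and i+1
    k = list(k)
    k[i - 1], k[i] = k[i], k[i - 1]
    return tuple(k)

def sort_word(k):
    # sort(k): rearrange the letters in weakly increasing order
    return tuple(sorted(k))

def dagger(k):
    # k^dagger: the reversed word
    return tuple(reversed(k))
```

For example, the word $\ensuremath{\mathfrak{m}}athbf{k} = 2131$ has content $(2,1,1)$, and $\ensuremath{\mathfrak{m}}athbf{k} s_1 = 1231$.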
The set of semistandard Young tableaux of size $r$ with entries in $[n]$ is denoted SSYT$_{[n]}^r$ and the subset of SSYT$_{[n]}^r$ of shape $\lambda \vdash r$ is SSYT$_{[n]}^r(\lambda)$. Tableaux are drawn in English notation, so that entries of a SSYT strictly increase from north to south along columns and weakly increase from west to east along rows. For a tableau $T$, $|T|$ is the number of squares in $T$ and $\ensuremath{\mathfrak{t}}ext{\rm sh}(T)$ its shape. We let $P(\ensuremath{\mathfrak{m}}athbf{k}), Q(\ensuremath{\mathfrak{m}}athbf{k})$ denote the insertion and recording tableaux produced by the Robinson-Schensted-Knuth (RSK) algorithm applied to the word $\ensuremath{\mathfrak{m}}athbf{k}$. We abbreviate $\ensuremath{\mathfrak{t}}ext{\rm sh}(P(\ensuremath{\mathfrak{m}}athbf{k}))$ simply by $\ensuremath{\mathfrak{t}}ext{\rm sh}(\ensuremath{\mathfrak{m}}athbf{k})$. Let $Z_\lambda$ be the superstandard tableau of shape and content $\lambda$---the tableau whose $i$-th row is filled with $i$'s. The conjugate partition $\lambda'$ of a partition $\lambda$ is the partition whose diagram is the transpose of that of $\lambda$ and $\ensuremath{\mathfrak{t}}ranspose{Q}$ denotes the transpose of a SYT $Q$, so that $\ensuremath{\mathfrak{t}}ext{\rm sh}(\ensuremath{\mathfrak{t}}ranspose{Q}) = \ensuremath{\mathfrak{t}}ext{\rm sh}(Q)'$. Lastly, $Q^\dagger$ denotes the Sch\"utzenberger involution of a SYT $Q$ (see, e.g., \cite[A1.2]{F}). \subsection{Cells} \label{ss cells} We define cells in the general setting of modules with basis. Let $H$ be an $R$-algebra for some commutative ring $R$. Let $M$ be a left $H$-module and $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$ an $R$-basis of $M$. 
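Since RSK insertion is used constantly in what follows, a minimal implementation of row insertion may help fix conventions. This is our own illustrative sketch (the names `rsk` and `shape` are ours), computing $P(\ensuremath{\mathfrak{m}}athbf{k})$, $Q(\ensuremath{\mathfrak{m}}athbf{k})$, and $\ensuremath{\mathfrak{t}}ext{\rm sh}(\ensuremath{\mathfrak{m}}athbf{k})$ for a word $\ensuremath{\mathfrak{m}}athbf{k}$.

```python
# RSK row insertion for words: insert each letter into the first row, bumping
# the leftmost strictly greater entry to the next row, and record in Q the
# position where each new box is created.
from bisect import bisect_right

def rsk(word):
    P, Q = [], []
    for step, x in enumerate(word, start=1):
        row = 0
        while True:
            if row == len(P):                 # start a new row at the bottom
                P.append([x]); Q.append([step]); break
            r = P[row]
            j = bisect_right(r, x)            # leftmost entry strictly greater than x
            if j == len(r):                   # x fits at the end of this row
                r.append(x); Q[row].append(step); break
            x, r[j] = r[j], x                 # bump, continue in the next row
            row += 1
    return P, Q

def shape(T):
    return tuple(len(row) for row in T)
```

For instance, the word $213$ has $P = Q = \ensuremath{\mathfrak{t}}iny\ensuremath{\mathfrak{b}}egin{matrix}1&3\\2\end{matrix}$, of shape $(2,1)$.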
The preorder $\ensuremath{\mathfrak{k}}lo{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma}$ (also denoted $\ensuremath{\mathfrak{k}}lo{M}$) on the vertex set $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$ is generated by the relations \begin{equation} \label{e preorder} \delta\ensuremath{\mathfrak{k}}locov{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma}\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma \begin{array}{c}\ensuremath{\mathfrak{t}}ext{if there is an $h\in H$ such that $\delta$ appears with non-zero}\\ \ensuremath{\mathfrak{t}}ext{coefficient in the expansion of $h\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma$ in the basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$}. \end{array} \end{equation} Equivalence classes of $\ensuremath{\mathfrak{k}}lo{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma}$ are the \emph{left cells} of $(M, \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma)$. The preorder $\ensuremath{\mathfrak{k}}lo{M}$ induces a partial order on the left cells of $M$, which is also denoted $\ensuremath{\mathfrak{k}}lo{M}$. A \emph{cellular submodule} of $(M, \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma)$ is a submodule of $M$ that is spanned by a subset of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$ (and is necessarily a union of left cells). A \emph{cellular quotient} of $(M,\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma)$ is a quotient of $M$ by a cellular submodule, and a \emph{cellular subquotient} of $(M, \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma)$ is a cellular quotient of a cellular submodule.
We denote a cellular subquotient $R\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'/ R\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma''$ by $R\ensuremath{\mathscr{L}}ambda$, where $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'' \subseteq \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma' \subseteq \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$ span cellular submodules and $\ensuremath{\mathscr{L}}ambda = \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma' \setminus \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma''$. We say that the left cells $\ensuremath{\mathscr{L}}ambda$ and $\ensuremath{\mathscr{L}}ambda'$ are isomorphic if $(R \ensuremath{\mathscr{L}}ambda, \ensuremath{\mathscr{L}}ambda)$ and $(R \ensuremath{\mathscr{L}}ambda', \ensuremath{\mathscr{L}}ambda')$ are isomorphic as modules with basis. Sometimes we speak of the left cells of $M$, cellular submodules of $M$, etc. or left cells of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$, cellular submodules of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma$, etc. if the pair $(M, \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma)$ is clear from context. For a right $H$-module $M$, the \emph{right cells}, \emph{cellular submodules}, etc. of $M$ are defined similarly with $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma h$ in place of $h \ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma$ in \eqref{e preorder}. We also use the terminology $H$-cells, $H$-cellular submodules, etc. to make it clear that the algebra $H$ is acting, and we omit left and right when they are clear. 
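Computing cells from this definition is mechanical: build the digraph with an edge from $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma$ to $\delta$ whenever $\delta$ appears with nonzero coefficient in $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}amma h$ for a generator $h$ (for right cells), and take strongly connected components. The sketch below is our own code, fed with the nonzero patterns of the right action of $\ensuremath{C^{\prime}}_{s_1}, \ensuremath{C^{\prime}}_{s_2}, \ensuremath{C^{\prime}}_{s_3}$ on the lower canonical basis of $M_{(3,1)}$ recorded in Example \ref{ex lambda31} (the quantum integer $[2]$ is replaced by $2$, which is harmless since only the nonzero pattern matters); it confirms that the whole basis forms a single right cell, as it must since $M_{(3,1)}$ is irreducible.

```python
# Compute the cells of a module with basis from the action matrices of a
# generating set. mats[i][row][col] is the coefficient of basis element `row`
# in (basis element `col`) * h_i.

def cells(mats):
    n = len(mats[0])
    adj = [set() for _ in range(n)]
    for m in mats:
        for col in range(n):
            for row in range(n):
                if m[row][col]:
                    adj[col].add(row)          # row appears in col * h_i

    def reach(i):                              # vertices reachable from i
        seen, stack = {i}, [i]
        while stack:
            for j in adj[stack.pop()]:
                if j not in seen:
                    seen.add(j); stack.append(j)
        return seen

    r = [reach(i) for i in range(n)]
    # cells = strongly connected components = mutual-reachability classes
    return {frozenset(k for k in range(n) if k in r[i] and i in r[k])
            for i in range(n)}

# Nonzero patterns of the matrices of Example "ex lambda31", in the ordered
# basis (C'_{Q_4}, C'_{Q_3}, C'_{Q_2}), with [2] written as 2.
Cp_s1 = [[2, 0, 0], [0, 2, 1], [0, 0, 0]]
Cp_s2 = [[2, 1, 0], [0, 0, 0], [0, 1, 2]]
Cp_s3 = [[0, 0, 0], [1, 2, 0], [0, 0, 2]]
```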
\section{Hecke algebras and canonical bases} \label{s Canonical bases of the type A Hecke algebra} The \emph{Hecke algebra} $\ensuremath{\mathscr{H}}(W)$ of $(W, S)$ is the free $\ensuremath{\mathfrak{m}}athbf{A}$-module with standard basis $\{T_w :\ w\in W\}$ and relations generated by \begin{equation} \label{e Hecke algebra def} \begin{array}{ll}T_vT_w = T_{vw} & \ensuremath{\mathfrak{t}}ext{if } vw = v\cdot w\ \ensuremath{\mathfrak{t}}ext{is a reduced factorization},\\ (T_s - \ensuremath{u})(T_s + \ensuremath{u^{-1}}) = 0 & \ensuremath{\mathfrak{t}}ext{if } s\in S.\end{array}\end{equation} For each $J\subseteq S$, $\ensuremath{\mathscr{H}}(W)_J$ denotes the subalgebra of $\ensuremath{\mathscr{H}}(W)$ with $\ensuremath{\mathfrak{m}}athbf{A}$-basis $\{T_w:\ w\in W_J\}$, which is isomorphic to $\ensuremath{\mathscr{H}}(W_J)$. In this section we recall the definition of the Kazhdan-Lusztig basis elements $C_w$ and $\ensuremath{C^{\prime}}_w$ of \cite{KL} and some of their basic properties. We record some useful results about how they behave under induction and restriction. Then we specialize to type $A$ and review the beautiful connection between cells and the RSK algorithm. \subsection{The upper and lower canonical basis of $\ensuremath{\mathscr{H}}(W)$} The \emph{bar-involution}, $\ensuremath{\mathfrak{b}}r{\cdot}$, of $\ensuremath{\mathscr{H}}(W)$ is the additive map from $\ensuremath{\mathscr{H}}(W)$ to itself extending the $\ensuremath{\mathfrak{b}}r{\cdot}$-involution of $\ensuremath{\mathfrak{m}}athbf{A}$ and satisfying $\ensuremath{\mathfrak{b}}r{T_w} = T_{w^{-1}}^{-1}$. Observe that $\ensuremath{\mathfrak{b}}r{T_{s}} = T_s^{-1} = T_s + \ensuremath{u^{-1}} - \ensuremath{u}$ for $s \in S$.
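The identity $T_s^{-1} = T_s + u^{-1} - u$ follows from the quadratic relation, which rearranges to $T_s^2 = T_{\ensuremath{\mathfrak{t}}ext{id}} + (u - u^{-1})T_s$, and can be checked mechanically in the rank-one algebra $\ensuremath{\mathscr{H}}(S_2)$. The following consistency check is our own sketch, not the paper's code: an element $a\,T_{\ensuremath{\mathfrak{t}}ext{id}} + b\,T_s$ is a pair of Laurent polynomials in $u$, each stored as a dict from exponents to coefficients.

```python
# Multiplication in H(S_2), the rank-one Hecke algebra, over Z[u, u^{-1}].

def padd(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def pmul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def hmul(x, y):
    # (a1 + b1 T_s)(a2 + b2 T_s), using T_s^2 = T_id + (u - u^{-1}) T_s
    a1, b1 = x
    a2, b2 = y
    a = padd(pmul(a1, a2), pmul(b1, b2))
    b = padd(padd(pmul(a1, b2), pmul(b1, a2)),
             pmul(pmul(b1, b2), {1: 1, -1: -1}))   # (u - u^{-1}) b1 b2
    return (a, b)

T_id = ({0: 1}, {})
T_s = ({}, {0: 1})
T_s_inv = ({-1: 1, 1: -1}, {0: 1})   # T_s + u^{-1} - u
```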
Some simple $\ensuremath{\mathfrak{b}}r{\cdot}$-invariant elements of $\ensuremath{\mathscr{H}}(W)$ are $\ensuremath{C^{\prime}}_\ensuremath{\mathfrak{t}}ext{id} := T_\ensuremath{\mathfrak{t}}ext{id}$, $C_s := T_s - \ensuremath{u} = T_s^{-1} - \ensuremath{u^{-1}}$, and $\ensuremath{C^{\prime}}_s := T_s + \ensuremath{u^{-1}} = T_s^{-1} + \ensuremath{u}$, $s\in S$. Define the lattices $\ensuremath{\mathscr{H}}(W)_{\ensuremath{\mathbb{Z}}[\ensuremath{u}]} := \ensuremath{\mathbb{Z}}[\ensuremath{u}] \{ T_w : w \in W \}$ and $\ensuremath{\mathscr{H}}(W)_{\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]} := \ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}] \{ T_w : w \in W \}$ of $\ensuremath{\mathscr{H}}(W)$. \refstepcounter{equation} \begin{enumerate}[label={(\theequation)}] \item For each $w \in W$, there is a unique element $C_w \in \ensuremath{\mathscr{H}}(W)$ such that $\ensuremath{\mathfrak{b}}r{C_w} = C_w$ and $C_w$ is congruent to $T_w \ensuremath{\mathfrak{m}}od \ensuremath{u} \ensuremath{\mathscr{H}}(W)_{\ensuremath{\mathbb{Z}}[\ensuremath{u}]}$. \end{enumerate} The $\ensuremath{\mathfrak{m}}athbf{A}$-basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_W := \{C_w: w\in W\}$ is the \emph{upper canonical basis} of $\ensuremath{\mathscr{H}}(W)$ (we use this language to be consistent with that for crystal bases). Similarly, \refstepcounter{equation} \begin{enumerate}[label={(\theequation)}] \item for each $w \in W$, there is a unique element $\ensuremath{C^{\prime}}_w \in \ensuremath{\mathscr{H}}(W)$ such that $\ensuremath{\mathfrak{b}}r{\ensuremath{C^{\prime}}_w} = \ensuremath{C^{\prime}}_w$ and $\ensuremath{C^{\prime}}_w$ is congruent to $T_w \ensuremath{\mathfrak{m}}od \ensuremath{u^{-1}} \ensuremath{\mathscr{H}}(W)_{\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]}$.
\end{enumerate} The $\ensuremath{\mathfrak{m}}athbf{A}$-basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W := \{\ensuremath{C^{\prime}}_w : w \in W \}$ is the \emph{lower canonical basis} of $\ensuremath{\mathscr{H}}(W)$. The coefficients of the lower canonical basis in terms of the standard basis are the \emph{Kazhdan-Lusztig polynomials} $P'_{x,w}$: \begin{equation} \ensuremath{C^{\prime}}_w = \sum_{x \in W} P'_{x,w} T_x. \end{equation} (Our $P'_{x,w}$ are equal to $q^{(\ell(x)-\ell(w))/2}P_{x,w}$, where $P_{x,w}$ are the polynomials defined in \cite{KL} and $q^{1/2} = \ensuremath{u}$.) Now let $\ensuremath{\mathfrak{m}}u(x,w) \in \ensuremath{\mathbb{Z}}$ be the coefficient of $\ensuremath{u^{-1}}$ in $P'_{x,w}$ (resp. $P'_{w,x}$) if $x \leq w$ (resp. $w \leq x$). Then the right regular representation in terms of the canonical bases of $\ensuremath{\mathscr{H}}(W)$ takes the following simple forms: \begin{equation}\label{e prime C on prime canbas} \ensuremath{C^{\prime}}_w \ensuremath{C^{\prime}}_s = \left\{\begin{array}{ll} [2] \ensuremath{C^{\prime}}_w & \ensuremath{\mathfrak{t}}ext{if}\ s \in R(w),\\ \displaystyle\sum_{\substack{\{w' \in W: s \in R(w')\}}} \ensuremath{\mathfrak{m}}u(w',w)\ensuremath{C^{\prime}}_{w'} & \ensuremath{\mathfrak{t}}ext{if}\ s \notin R(w). \end{array}\right. \end{equation} \begin{equation}\label{e C on canbas} C_w C_s = \left\{\begin{array}{ll} -[2] C_w & \ensuremath{\mathfrak{t}}ext{if}\ s \in R(w),\\ \displaystyle\sum_{\substack{\{w' \in W: s \in R(w')\}}} \ensuremath{\mathfrak{m}}u(w',w)C_{w'} & \ensuremath{\mathfrak{t}}ext{if}\ s \notin R(w). \end{array}\right.
\end{equation} The simplicity and sparsity of this action, along with the fact that the right cells of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_W$ and $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W$ often give rise to $\ensuremath{\ensuremath{\mathfrak{m}}athbb{C}}(\ensuremath{u}) \ensuremath{\otimes}_\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\mathscr{H}}(W)$-irreducibles, are among the most amazing and useful properties of canonical bases. \subsection{Induction and restriction of canonical bases} \label{ss Induction and restriction of canonical bases} It will be important for our applications in \ensuremath{\mathfrak{t}}extsection \ref{s Quantum Schur-Weyl duality and canonical bases}--\ref{s Consequences for the canonical bases of M_lambda} that canonical bases behave well under induction and restriction. Let $J \subseteq S$. Let $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\mathscr{L}}ambda'$ (resp. $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\mathscr{L}}ambda$) be a right cellular subquotient of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{W_J}$ (resp. $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{W_J}$). The next proposition follows from general results about inducing $W$-graphs \cite{HY1, HY2} (see \cite[Propositions 2.6 and 3.4]{B0}). We will only apply this with $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\mathscr{L}}ambda'$ (resp. $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\mathscr{L}}ambda$) the trivial $\ensuremath{\mathscr{H}}(W)$ representation, which is a cellular submodule (resp. quotient) of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W$ (resp. $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_W$).
\begin{proposition}\label{p cell isomorphism induced} The basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathscr{L}}ambda', J} := \{ \ensuremath{C^{\prime}}_w : w = v\cdot x, \ensuremath{C^{\prime}}_v \in \ensuremath{\mathscr{L}}ambda', x \in \leftexp{J}{W} \} \subseteq \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W$ of $\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\mathscr{L}}ambda' \ensuremath{\otimes}_{\ensuremath{\mathscr{H}}(W_J)} \ensuremath{\mathscr{H}}(W)$ can be constructed from the standard basis $\ensuremath{\mathscr{L}}ambda T' := \{ \ensuremath{C^{\prime}}_v \ensuremath{\otimes}_{\ensuremath{\mathscr{H}}(W_J)} T_x : \ensuremath{C^{\prime}}_v \in \ensuremath{\mathscr{L}}ambda', x \in \leftexp{J}{W} \}$ in the sense of \cite{Du}: $\ensuremath{C^{\prime}}_{v x}$ is the unique $\ensuremath{\mathfrak{b}}r{\cdot}$-invariant element of $\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}] \ensuremath{\mathscr{L}}ambda T'$ congruent to $\ensuremath{C^{\prime}}_v \ensuremath{\otimes}_{\ensuremath{\mathscr{H}}(W_J)} T_x \ensuremath{\mathfrak{m}}od \ensuremath{u^{-1}} \ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}] \ensuremath{\mathscr{L}}ambda T'$. Hence, $\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathscr{L}}ambda',J}$ is a right cellular subquotient of $\ensuremath{\mathscr{H}}(W)$. The same statement holds with $\ensuremath{\mathscr{L}}ambda$ in place of $\ensuremath{\mathscr{L}}ambda'$, $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_W$ in place of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W$, $C$'s in place of $\ensuremath{C^{\prime}}$'s, and $\ensuremath{u}$ in place of $\ensuremath{u^{-1}}$.
\end{proposition} The next result about restricting canonical bases originated in the work of Barbasch and Vogan on primitive ideals \cite{BV}, and is proven in the generality stated here by Roichman \cite{R} (see also \cite[\ensuremath{\mathfrak{t}}extsection3.3]{B0}). \begin{proposition}\label{p restrict Wgraph} Let $J \subseteq S$ and $E$ be the right $\ensuremath{\mathscr{H}}(W_J)$-module $\ensuremath{\mathcal{R}}es_{\ensuremath{\mathscr{H}}(W_J)} \ensuremath{\mathscr{H}}(W)$. Then for any $x \in W^J$, $E_x := \ensuremath{\mathfrak{m}}athbf{A} \{\ensuremath{C^{\prime}}_{xv} : v \in W_J \}$ is a cellular subquotient of $(E,\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W)$ and \begin{equation} E_x \xrightarrow{\cong} \ensuremath{\mathscr{H}}(W_J), \ensuremath{C^{\prime}}_{xv} \ensuremath{\mathfrak{m}}apsto \ensuremath{C^{\prime}}_v \end{equation} is an isomorphism of right $\ensuremath{\mathscr{H}}(W_J)$-modules with basis. In particular, any right cell of $(E,\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_W)$ is isomorphic to one occurring in $\ensuremath{\mathscr{H}}(W_J)$. The same statement holds for $(E,\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_W)$, with $C$'s replacing $\ensuremath{C^{\prime}}$'s. \end{proposition} \subsection{Cells in type $A$} \label{ss cell label conventions C_Q C'_Q} Let $\ensuremath{\mathscr{H}}_r = \ensuremath{\mathscr{H}}(\ensuremath{\mathcal{S}}_r)$ be the type $A$ Hecke algebra.
It is well known that $\ensuremath{K} \ensuremath{\mathscr{H}}_r := \ensuremath{K} \ensuremath{\otimes}_\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\mathscr{H}}_r$ is semisimple and that its irreducibles are in bijection with partitions of $r$; let $M_\lambda$ and $M_\lambda^{\ensuremath{\mathfrak{m}}athbf{A}}$ be the $\ensuremath{K}\ensuremath{\mathscr{H}}_r$-irreducible and Specht module of $\ensuremath{\mathscr{H}}_r$ of shape $\lambda \vdash r$ (hence $M_\lambda \cong \ensuremath{K}\ensuremath{\otimes}_\ensuremath{\mathfrak{m}}athbf{A} M_\lambda^\ensuremath{\mathfrak{m}}athbf{A}$). For any $\ensuremath{K} \ensuremath{\mathscr{H}}_r$-module $N$ and partition $\lambda$ of $r$, let $N[\lambda]$ be the $M_\lambda$-isotypic component of $N$. Let $s_\lambda^N : N \twoheadrightarrow N[\lambda]$ be the canonical surjection and $i_\lambda^N : N[\lambda] \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}ookrightarrow N$ the canonical inclusion. Define the projector $p_\lambda^N: N \ensuremath{\mathfrak{t}}o N$ by $p_\lambda^N = i_\lambda^N \circ s_\lambda^N$. We also let $p_\lambda$ denote the central idempotent of $\ensuremath{K} \ensuremath{\mathscr{H}}_r$ such that the map $p_\lambda^N$ is given by multiplication by $p_\lambda$. The work of Kazhdan and Lusztig \cite{KL} shows that the decomposition of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{\ensuremath{\mathcal{S}}_r}$ into right cells is $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{\ensuremath{\mathcal{S}}_r} = \ensuremath{\mathfrak{b}}igsqcup_{P \in \ensuremath{\mathfrak{t}}ext{SYT}^r} \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_P$, where $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_P := \{C_w : P(w) = P\}$.
Moreover, the right cells $\{ \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_P : \ensuremath{\mathfrak{t}}ext{\rm sh}(P) = \lambda\}$ are all isomorphic, and, denoting any of these cells by $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_\lambda$, $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_\lambda \cong M_\lambda^\ensuremath{\mathfrak{m}}athbf{A}$. Similarly, the decomposition of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathcal{S}}_r}$ into right cells is $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathcal{S}}_r} = \ensuremath{\mathfrak{b}}igsqcup_{P \in \ensuremath{\mathfrak{t}}ext{SYT}^r} \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_P$, where $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_P := \{\ensuremath{C^{\prime}}_w : \ensuremath{\mathfrak{t}}ranspose{P(w)} = P\}$. Moreover, the right cells $\{ \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_P : \ensuremath{\mathfrak{t}}ext{\rm sh}(P) = \lambda\}$ are all isomorphic, and, denoting any of these cells by $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_\lambda$, $\ensuremath{\mathfrak{m}}athbf{A}\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_\lambda \cong M_\lambda^\ensuremath{\mathfrak{m}}athbf{A}$. A combinatorial discussion of left cells in type $A$ is given in \cite[\ensuremath{\mathfrak{t}}extsection 4]{B0}. We refer to the basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_\lambda$ of $M_\lambda^\ensuremath{\mathfrak{m}}athbf{A}$ as the \emph{upper canonical basis of $M_\lambda$} and denote it by $\{ C_Q : Q \in \ensuremath{\mathfrak{t}}ext{SYT}(\lambda) \}$, where $C_Q$ corresponds to $C_w$ for any (every) $w \in \ensuremath{\mathcal{S}}_r$ with recording tableau $Q$. 
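For small $r$ this cell decomposition can be verified by brute force: grouping the permutations of $\ensuremath{\mathcal{S}}_r$ by their RSK insertion tableau recovers one right cell $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_P$ per standard Young tableau $P$, with $|\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_P| = |\ensuremath{\mathfrak{t}}ext{SYT}(\ensuremath{\mathfrak{t}}ext{\rm sh}(P))|$. The following sketch is our own code (ordinary row insertion applied to one-line notation):

```python
# Group the permutations of S_r by insertion tableau P(w); the fibers are the
# index sets of the right cells Gamma_P = {C_w : P(w) = P}.
from bisect import bisect_right
from itertools import permutations

def insertion_tableau(word):
    P = []
    for x in word:
        for row in P:
            j = bisect_right(row, x)       # leftmost entry strictly greater than x
            if j == len(row):
                row.append(x)
                break
            x, row[j] = row[j], x          # bump, continue in the next row
        else:                              # bumped past the last row
            P.append([x])
    return tuple(map(tuple, P))

def right_cells(r):
    cells = {}
    for w in permutations(range(1, r + 1)):
        cells.setdefault(insertion_tableau(w), []).append(w)
    return cells
```

For $r = 3$ this yields four cells, of sizes $1, 2, 2, 1$, matching the shapes $(3), (2,1), (2,1), (1,1,1)$.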
Similarly, the basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_\lambda$ of $M_\lambda^\ensuremath{\mathfrak{m}}athbf{A}$ is the \emph{lower canonical basis of $M_\lambda$}, denoted $\{ \ensuremath{C^{\prime}}_Q : Q \in \ensuremath{\mathfrak{t}}ext{SYT}(\lambda) \}$, where $\ensuremath{C^{\prime}}_Q$ corresponds to $\ensuremath{C^{\prime}}_w$ for any (every) $w \in \ensuremath{\mathcal{S}}_r$ with recording tableau $\ensuremath{\mathfrak{t}}ranspose{Q}$. Note that with these labels the action of $C_s$ on the upper canonical basis of $M_\lambda$ is similar to \eqref{e C on canbas}, with $\ensuremath{\mathfrak{m}}u(Q',Q):= \ensuremath{\mathfrak{m}}u(w',w)$ for any $w',w$ such that $P(w')=P(w)$, $Q' = Q(w'),Q = Q(w)$, and right descent sets \ensuremath{\mathfrak{b}}egin{equation} R(C_Q) = \{ s_i : i + 1 \ensuremath{\mathfrak{t}}ext{ is strictly to the south of $i$ in $Q$}\}. \end{equation} Similarly, the action of $\ensuremath{C^{\prime}}_s$ on $\{ \ensuremath{C^{\prime}}_Q : Q \in \ensuremath{\mathfrak{t}}ext{SYT}(\lambda) \}$ is similar to \eqref{e prime C on prime canbas}, with $\ensuremath{\mathfrak{m}}u(Q',Q):= \ensuremath{\mathfrak{m}}u(w',w)$ for any $w',w$ such that $\ensuremath{\mathfrak{t}}ranspose{P(w')}=\ensuremath{\mathfrak{t}}ranspose{P(w)}$, $Q' = \ensuremath{\mathfrak{t}}ranspose{Q(w')},Q = \ensuremath{\mathfrak{t}}ranspose{Q(w)}$, and right descent sets \ensuremath{\mathfrak{b}}egin{equation} R(\ensuremath{C^{\prime}}_Q) = \{ s_i : i + 1 \ensuremath{\mathfrak{t}}ext{ is strictly to the east of $i$ in $Q$}\}. 
\end{equation} \ensuremath{\mathfrak{b}}egin{equation}gin{example} \label{ex lambda31} The integers $\ensuremath{\mathfrak{m}}u(Q',Q)$ for both the upper and lower canonical basis of $M_{(3,1)}$ are given by the following graph ($\ensuremath{\mathfrak{m}}u$ is 1 if the edge is present and 0 otherwise) \[ \xymatrix@R=0cm{ \ \ensuremath{\mathfrak{t}}ableau{1&2&3\\4} \ensuremath{\mathfrak{a}}r @{-} [r] & \ \ensuremath{\mathfrak{t}}ableau{1&2&4\\3} \ensuremath{\mathfrak{a}}r @{-} [r] & \ \ensuremath{\mathfrak{t}}ableau{1&3&4\\2} \\ Q_4 & Q_3 & Q_2 } \] The right action of the $\ensuremath{C^{\prime}}_s$ on $(\ensuremath{C^{\prime}}_{Q_4}, \ensuremath{C^{\prime}}_{Q_3}, \ensuremath{C^{\prime}}_{Q_2})$ is given by (the columns of the matrices are $\ensuremath{C^{\prime}}_{Q} \ensuremath{C^{\prime}}_s$ in terms of the $\ensuremath{C^{\prime}}_Q$-basis) \[ \ensuremath{C^{\prime}}_{s_1} \ensuremath{\mathfrak{m}}apsto \ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} [2] & 0 & 0 \\ 0 & [2] & 1 \\ 0 & 0 & 0 \end{pmatrix} \ensuremath{\mathfrak{q}}uad \ensuremath{C^{\prime}}_{s_2} \ensuremath{\mathfrak{m}}apsto \ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} [2] & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & [2] \end{pmatrix} \ensuremath{\mathfrak{q}}uad \ensuremath{C^{\prime}}_{s_3} \ensuremath{\mathfrak{m}}apsto \ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} 0 & 0 & 0 \\ 1 & [2] & 0 \\ 0 & 0 & [2] \end{pmatrix} \] The right action of the $\ensuremath{C^{\prime}}_s$ on $(C_{Q_4}, C_{Q_3}, C_{Q_2})$ is given by \[ \ensuremath{C^{\prime}}_{s_1} \ensuremath{\mathfrak{m}}apsto \ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} [2] & 0 & 0 \\ 0 & [2] & 0 \\ 0 & 1 & 0 \end{pmatrix} \ensuremath{\mathfrak{q}}uad \ensuremath{C^{\prime}}_{s_2} \ensuremath{\mathfrak{m}}apsto \ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} [2] & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & [2] \end{pmatrix} \ensuremath{\mathfrak{q}}uad \ensuremath{C^{\prime}}_{s_3} \ensuremath{\mathfrak{m}}apsto 
\ensuremath{\mathfrak{b}}egin{equation}gin{pmatrix} 0 & 1 & 0 \\ 0 & [2] & 0 \\ 0 & 0 & [2] \end{pmatrix} \] Note that the matrix for the right action of $\ensuremath{C^{\prime}}_s$ on the $\ensuremath{C^{\prime}}_Q$-basis is the transpose of the matrix for its action on the $C_Q$-basis. This is true in general; it is a consequence of Proposition \ref{p dual bases Mlambda} or of \cite[Corollary 3.2]{KL}. \end{example} The partial orders $\ensuremath{\mathfrak{k}}lo{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{\ensuremath{\mathcal{S}}_r}}$ and $\ensuremath{\mathfrak{k}}lo{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathcal{S}}_r}}$ are not well understood, but there is the following deep result, which gives some understanding. The result follows from Lusztig's $a$-invariant and the nonnegativity of the structure constants of the $\ensuremath{C^{\prime}}_w$ due to Beilinson-Bernstein-Deligne-Gabber \cite[\ensuremath{\mathfrak{t}}extsection 5-6]{L2} and results of \cite{BV} and \cite{Joseph} on primitive ideals of $\ensuremath{\mathcal{U}}q$ (see the appendix of \cite{Himmanant}).
\ensuremath{\mathfrak{b}}egin{equation}gin{theorem}\label{t right cell partial order} The partial order on the right cells of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{\ensuremath{\mathcal{S}}_r}$ and $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathcal{S}}_r}$ is constrained by dominance order: if $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{P'} \ensuremath{\mathfrak{k}}loneq{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{\ensuremath{\mathcal{S}}_r}} \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma_{P}$, then $\ensuremath{\mathfrak{t}}ext{\rm sh}(P') \ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}neq \ensuremath{\mathfrak{t}}ext{\rm sh}(P)$; if $\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{P'} \ensuremath{\mathfrak{k}}loneq{\ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{\ensuremath{\mathcal{S}}_r}} \ensuremath{\ensuremath{\mathfrak{m}}athscr{G}}amma'_{P}$, then $\ensuremath{\mathfrak{t}}ext{\rm sh}(P') \ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}dneq \ensuremath{\mathfrak{t}}ext{\rm sh}(P)$. \end{theorem} \section{The quantized enveloping algebra and crystal bases} \label{s Crystal bases of the quantized enveloping algebra} We recall the definition of the quantized enveloping algebra $\ensuremath{\mathcal{U}}q = U_q(\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}l_n)$ following \cite{Kas1,HK}. We then briefly recall the construction of global crystal bases in the sense of \cite{Kas1,Kas2} and of the similar notion of based modules of \cite{LBook}. 
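Theorem \ref{t right cell partial order} bounds the cell order by the dominance order on partitions. A minimal computational sketch of dominance (our code; partitions encoded as weakly decreasing tuples):

```python
def dominates(lam, mu):
    """True iff lam and mu partition the same r and every partial sum of lam
    is at least the corresponding partial sum of mu (dominance order)."""
    if sum(lam) != sum(mu):
        return False
    s_lam = s_mu = 0
    for k in range(max(len(lam), len(mu))):
        s_lam += lam[k] if k < len(lam) else 0
        s_mu += mu[k] if k < len(mu) else 0
        if s_lam < s_mu:
            return False
    return True
```

For instance, $(3,1)$ strictly dominates $(2,2)$, while $(3,1,1,1)$ and $(2,2,2)$ are incomparable.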
\subsection{Definition of $\ensuremath{\mathcal{U}}q = U_q(\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}l_n)$ and basic properties} The \emph{quantized universal enveloping algebra} $\ensuremath{\mathcal{U}}q$ is the associative $\ensuremath{K}$-algebra generated by $q^h, h \in \dual{X}$ (set $K_i = q^{\dual{\epsilon}_{i}-\dual{\epsilon}_{i+1}}$) and $E_i, F_i, i \in [n - 1]$ with relations \ensuremath{\mathfrak{b}}egin{equation} \ensuremath{\mathfrak{b}}egin{equation}gin{array}{ll} q^0 = 1, & q^hq^{h'} = q^{h + h'}, \\ q^hE_iq^{-h} = \ensuremath{u}^{\langle \ensuremath{\mathfrak{a}}lpha_i, h \rangle}E_i, & q^hF_iq^{-h} = \ensuremath{u}^{-\langle \ensuremath{\mathfrak{a}}lpha_i, h \rangle}F_i, \\ E_iF_j - F_jE_i = \delta_{i,j} \ensuremath{f}rac{K_i - K_i^{-1}}{\ensuremath{u}-\ensuremath{u^{-1}}}, & \\ E_iE_j - E_jE_i = F_iF_j - F_jF_i = 0 & \ensuremath{\mathfrak{t}}ext{for } |i-j| > 1, \\ E_i^2E_j- [2]E_iE_jE_i + E_jE^2_i = 0 & \ensuremath{\mathfrak{t}}ext{for } |i-j| = 1, \\ F^2_iF_j - [2]F_iF_jF_i + F_jF^2_i = 0 & \ensuremath{\mathfrak{t}}ext{for } |i-j| = 1. \\ \end{array} \end{equation} \ensuremath{\mathfrak{b}}egin{equation}gin{remark} Our notation is related to that of Kashiwara's and Brundan's \cite{Brundan} by $\ensuremath{u} = q$. We use $\ensuremath{u}$ instead of $q$ because on the Hecke algebra side, our $\ensuremath{u}$ is what is usually $q^{1/2}$. \end{remark} The \emph{bar-involution}, $\ensuremath{\mathfrak{b}}r{\cdot}: \ensuremath{\mathcal{U}}q \ensuremath{\mathfrak{t}}o \ensuremath{\mathcal{U}}q$ is the $\ensuremath{\mathbb{Q}}$-linear automorphism extending the involution $\ensuremath{\mathfrak{b}}r{\cdot}$ on $\ensuremath{K}$ and satisfying \ensuremath{\mathfrak{b}}egin{equation} \ensuremath{\mathfrak{b}}r{q^{h}} = q^{-h}, \ \ensuremath{\mathfrak{b}}r{E_i} = E_i, \ \ensuremath{\mathfrak{b}}r{F_i} = F_i. 
\end{equation} Let $\varphi : \ensuremath{\mathcal{U}}q \ensuremath{\mathfrak{t}}o \ensuremath{\mathcal{U}}q$ be the algebra antiautomorphism determined by \ensuremath{\mathfrak{b}}egin{equation} \varphi(E_i) = F_i, \ensuremath{\mathfrak{q}}uad \varphi(F_i) = E_i, \ensuremath{\mathfrak{q}}uad \varphi(K_i) = K_i. \end{equation} The algebra $\ensuremath{\mathcal{U}}q$ is a Hopf algebra with coproduct $\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta$ given by \ensuremath{\mathfrak{b}}egin{equation} \label{e U_q coproduct} \ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta(q^h) = q^h \ensuremath{\otimes} q^h, \ \ \ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta(E_i) = E_i \ensuremath{\otimes} K_i^{-1} + 1 \ensuremath{\otimes} E_i, \ \ \ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta(F_i) = F_i \ensuremath{\otimes} 1 + K_i \ensuremath{\otimes} F_i. \end{equation} This is the same as the coproduct used in \cite{Brundan, Kas2, HK}, and it is related to the coproduct $\ensuremath{\mathfrak{t}}ilde{\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta}$ of \cite{LBook} by $\ensuremath{\mathfrak{t}}ilde{\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta} = (\varphi \ensuremath{\otimes} \varphi) \circ \ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta \circ \varphi$. The \emph{weight space} $N^\zeta$ of a $\ensuremath{\mathcal{U}}q$-module $N$ for the weight $\zeta \in X$ is the $\ensuremath{K}$-vector space $\{x \in N : q^h x = \ensuremath{u}^{\langle \zeta, h \rangle} x\}$. Let $\ensuremath{\ensuremath{\mathscr{O}}_{\ensuremath{\mathfrak{t}}ext{int}}^{\geq 0}}$ be as in \cite[Chapter 7]{HK}, the category of finite-dimensional $\ensuremath{\mathcal{U}}q$-modules such that the weight of any non-zero weight space belongs to $\ensuremath{\mathbb{Z}}^n_{\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}eq 0} \subseteq X$. It is semisimple, the simple objects being the highest weight modules $V_\lambda$ for partitions $\lambda$.
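As a routine consistency check (ours, not carried out in the text), the coproduct \eqref{e U_q coproduct} is coassociative on generators; for instance, \ensuremath{\mathfrak{b}}egin{equation} (\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta \ensuremath{\otimes} 1)\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta(E_i) = E_i \ensuremath{\otimes} K_i^{-1} \ensuremath{\otimes} K_i^{-1} + 1 \ensuremath{\otimes} E_i \ensuremath{\otimes} K_i^{-1} + 1 \ensuremath{\otimes} 1 \ensuremath{\otimes} E_i = (1 \ensuremath{\otimes} \ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta)\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta(E_i), \end{equation} and similarly for $F_i$ and $q^h$.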
For any object $N$ of $\ensuremath{\ensuremath{\mathscr{O}}_{\ensuremath{\mathfrak{t}}ext{int}}^{\geq 0}}$ and partition $\lambda$, let $N[\lambda]$ be the $V_\lambda$-isotypic component of $N$. Set $N[\ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}\lambda] = \ensuremath{\mathfrak{b}}igoplus_{\ensuremath{\mathfrak{m}}u \ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}\lambda} N[\ensuremath{\mathfrak{m}}u]$, $N[\ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}neq\lambda] = \ensuremath{\mathfrak{b}}igoplus_{\ensuremath{\mathfrak{m}}u \ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}neq\lambda} N[\ensuremath{\mathfrak{m}}u]$, $N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d\lambda] = \ensuremath{\mathfrak{b}}igoplus_{\ensuremath{\mathfrak{m}}u \ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d\lambda} N[\ensuremath{\mathfrak{m}}u]$, and $N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}dneq\lambda] = \ensuremath{\mathfrak{b}}igoplus_{\ensuremath{\mathfrak{m}}u \ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}dneq\lambda} N[\ensuremath{\mathfrak{m}}u]$. Let $\varsigma_\lambda^N : N \ensuremath{\mathfrak{t}}woheadrightarrow N[\lambda]$ be the canonical surjection and $\iota_\lambda^N : N[\lambda] \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}ookrightarrow N$ the canonical inclusion. Define the projector $\ensuremath{\mathfrak{p}}i_\lambda^N: N \ensuremath{\mathfrak{t}}o N$ by $\ensuremath{\mathfrak{p}}i_\lambda^N = \iota_\lambda^N \circ \varsigma_\lambda^N$.
\subsection{Crystal bases} A lower crystal basis at $\ensuremath{u} = 0$ of an object $N$ of $\ensuremath{\ensuremath{\mathscr{O}}_{\ensuremath{\mathfrak{t}}ext{int}}^{\geq 0}}$ is a pair $(\ensuremath{\mathscr{L}}_0(N),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}')$, where $\ensuremath{\mathscr{L}}_0(N)$ is a $\ensuremath{K}_0$-submodule of $N$ and $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}'$ is a $\ensuremath{\mathbb{Q}}$-basis of $\ensuremath{\mathscr{L}}_0(N) / \ensuremath{u} \ensuremath{\mathscr{L}}_0(N)$ which satisfy a certain compatibility with the Kashiwara operators $\crystall{E_i}, \crystall{F_i}$; an upper crystal basis at $\ensuremath{u} = \infty$ of $N$ is a pair $(\ensuremath{\mathscr{L}}_\infty(N),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}})$, where $\ensuremath{\mathscr{L}}_\infty(N)$ is a $\ensuremath{K}_\infty$-submodule of $N$ and $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}$ is a $\ensuremath{\mathbb{Q}}$-basis of $\ensuremath{\mathscr{L}}_\infty(N) / \ensuremath{u^{-1}} \ensuremath{\mathscr{L}}_\infty(N)$ which satisfy a certain compatibility with the Kashiwara operators $\crystalu{E_i}, \crystalu{F_i}$ (see \cite[\ensuremath{\mathfrak{t}}extsection3.1]{Kas2}). Kashiwara \cite{Kas2} gives a fairly explicit construction of a lower (resp. upper) crystal basis of $V_\lambda$, which we denote by $(\ensuremath{\mathscr{L}}_0(\lambda),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}'(\lambda))$ (resp. $(\ensuremath{\mathscr{L}}_\infty(\lambda),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}(\lambda))$). The basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}'(\lambda)$ (resp. $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}(\lambda)$) is naturally labeled by $\ensuremath{\mathfrak{t}}ext{SSYT}_{[n]}(\lambda)$ and we let $b'_P$ (resp. $b_P$) denote the basis element corresponding to $P \in \ensuremath{\mathfrak{t}}ext{SSYT}_{[n]}(\lambda)$ (see, for instance, \cite[Chapter 7]{HK}). 
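On tensor powers of the natural crystal, the Kashiwara operators admit the familiar signature-rule description on words in $[n]^r$. The following sketch is ours and uses one common convention (signs and sides may differ from the conventions of the references):

```python
def _unpaired(word, i):
    """Positions of unpaired i's ('+') and (i+1)'s ('-') after cancelling '+-' pairs."""
    plus, minus = [], []             # plus: unmatched '+' positions; minus: unmatched '-'
    for pos, letter in enumerate(word):
        if letter == i:
            plus.append(pos)         # a '+'
        elif letter == i + 1:
            if plus:
                plus.pop()           # this '-' cancels the nearest '+' to its left
            else:
                minus.append(pos)
    return plus, minus

def f_i(word, i):
    """Lowering operator: change the leftmost unpaired i to i+1 (None if it acts as 0)."""
    plus, _ = _unpaired(word, i)
    if not plus:
        return None
    w = list(word)
    w[plus[0]] = i + 1
    return tuple(w)

def e_i(word, i):
    """Raising operator: change the rightmost unpaired i+1 to i (None if it acts as 0)."""
    _, minus = _unpaired(word, i)
    if not minus:
        return None
    w = list(word)
    w[minus[-1]] = i
    return tuple(w)
```

One can check directly that the raising operator undoes the lowering operator wherever the latter is nonzero, and vice versa.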
A fundamental result of \cite{Kas1,Kas2} is that a lower (resp. upper) crystal basis is always isomorphic to a direct sum $\ensuremath{\mathfrak{b}}igoplus_j (\ensuremath{\mathscr{L}}_0(\lambda^j),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}'(\lambda^j))$ (resp. $\ensuremath{\mathfrak{b}}igoplus_j (\ensuremath{\mathscr{L}}_\infty(\lambda^j),\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}(\lambda^j))$). \subsection{Global crystal bases} \label{ss global canonical bases} We next define lower based modules and upper based modules, where a lower based module is a based module in the sense of \cite[Chapter 27]{LBook} adapted to our coproduct. The \emph{$\ensuremath{\mathfrak{m}}athbf{A}$-form $\ensuremath{\mathcal{U}}q_\ensuremath{\mathfrak{m}}athbf{A}$ of $\ensuremath{\mathcal{U}}q$} is the $\ensuremath{\mathfrak{m}}athbf{A}$-subalgebra of $\ensuremath{\mathcal{U}}q$ generated by $ \ensuremath{f}rac{E_i^m}{[m]!}, \ensuremath{f}rac{F_i^m}{[m]!}, q^h, \ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}enfrac{\{}{\}}{0pt}{}{q^{h}}{m}$ for $i \in [n-1],\ m \in \ensuremath{\mathbb{Z}}_{\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}eq 0}$, and $h \in \dual{X}$, where \[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}enfrac{\{}{\}}{0pt}{}{x}{m} := \ensuremath{\mathfrak{p}}rod^m_{k=1} \ensuremath{f}rac{\ensuremath{u}^{1-k}x - \ensuremath{u}^{k-1}x^{-1}}{\ensuremath{u}^k-\ensuremath{u}^{ -k}}. \] We also define the \emph{$\ensuremath{\mathbb{Q}}A$-form $\ensuremath{\mathcal{U}}q_\ensuremath{\mathbb{Q}}$ of $\ensuremath{\mathcal{U}}q$} to be $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} \ensuremath{\mathcal{U}}q_{\ensuremath{\mathfrak{m}}athbf{A}}$. 
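The divided powers $E_i^m/[m]!$ above involve the quantum integers $[m]$ and factorials $[m]!$; a small numeric sketch (our code, specializing $\ensuremath{u}$ to a rational value):

```python
from fractions import Fraction

def qint(m, u):
    """Quantum integer [m] = u^{m-1} + u^{m-3} + ... + u^{1-m}."""
    return sum(u ** (m - 1 - 2 * k) for k in range(m))

def qfact(m, u):
    """Quantum factorial [m]! = [1][2]...[m]."""
    out = Fraction(1)
    for k in range(1, m + 1):
        out *= qint(k, u)
    return out
```

One checks, for example, that $[m] = (\ensuremath{u}^m - \ensuremath{u}^{-m})/(\ensuremath{u} - \ensuremath{u^{-1}})$ at any specialization.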
\ensuremath{\mathfrak{b}}egin{equation}gin{definition} A \emph{lower based module} is a pair $(N,B)$, where $N$ is an object of $\ensuremath{\ensuremath{\mathscr{O}}_{\ensuremath{\mathfrak{t}}ext{int}}^{\geq 0}}$ and $B$ is a $\ensuremath{K}$-basis of $N$ such that \ensuremath{\mathfrak{b}}egin{equation}gin{list}{(\ensuremath{\mathfrak{a}}lph{ctr})} {\ensuremath{u}secounter{ctr} \setlength{\itemsep}{1pt} \setlength{\ensuremath{\mathfrak{t}}opsep}{2pt}} \item $B \cap N^{\zeta}$ is a basis of $N^\zeta$, for any $\zeta \in X$; \item Define $N_\ensuremath{\mathfrak{m}}athbf{A} := \ensuremath{\mathfrak{m}}athbf{A} B$. The $\ensuremath{\mathbb{Q}}A$-submodule $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N_\ensuremath{\mathfrak{m}}athbf{A}$ of $N$ is stable under $\ensuremath{\mathcal{U}}q_\ensuremath{\mathbb{Q}}$; \item the $\ensuremath{\mathbb{Q}}$-linear involution $\ensuremath{\mathfrak{b}}r{\cdot} : N \ensuremath{\mathfrak{t}}o N$ defined by $\ensuremath{\mathfrak{b}}r{ab} = \ensuremath{\mathfrak{b}}r{a}b$ for all $a \in \ensuremath{K}$ and all $b \in B$ intertwines the $\ensuremath{\mathfrak{b}}r{\cdot}$-involution of $\ensuremath{\mathcal{U}}q$, i.e. $\ensuremath{\mathfrak{b}}r{fn} = \ensuremath{\mathfrak{b}}r{f}\ensuremath{\mathfrak{b}}r{n}$ for all $f \in \ensuremath{\mathcal{U}}q, n \in N$; \item Set $\ensuremath{\mathscr{L}}_0(N) = \ensuremath{K}_0 B$ and let $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}$ denote the image of $B$ in $\ensuremath{\mathscr{L}}_0(N)/\ensuremath{u} \ensuremath{\mathscr{L}}_0(N)$. Then $(\ensuremath{\mathscr{L}}_0(N), \ensuremath{\ensuremath{\mathfrak{m}}athscr{B}})$ is a lower crystal basis of $N$ at $\ensuremath{u} = 0$. 
\end{list} \end{definition} \ensuremath{\mathfrak{b}}egin{equation}gin{definition} An \emph{upper based module} is the same as a lower based module except with condition (d) replaced by \ensuremath{\mathfrak{b}}egin{equation}gin{list}{\emph{(\ensuremath{\mathfrak{a}}lph{ctr})}} {\ensuremath{u}secounter{ctr} \setlength{\itemsep}{1pt} \setlength{\ensuremath{\mathfrak{t}}opsep}{2pt}} \item[] Set $\ensuremath{\mathscr{L}}_\infty(N) = \ensuremath{K}_\infty B$ and let $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}$ denote the image of $B$ in $\ensuremath{\mathscr{L}}_\infty(N)/\ensuremath{u^{-1}} \ensuremath{\mathscr{L}}_\infty(N)$. Then $(\ensuremath{\mathscr{L}}_\infty(N), \ensuremath{\ensuremath{\mathfrak{m}}athscr{B}})$ is an upper crystal basis of $N$ at $\ensuremath{u} = \infty$. \end{list} \end{definition} The \emph{$\ensuremath{\mathfrak{b}}r{\cdot}$-involution} of the lower (resp. upper) based module is the involution on $N$ defined in (c). The \emph{balanced triple} of a lower (resp. upper) based module is $(\ensuremath{\mathbb{Q}}A B, \ensuremath{K}_0B, \ensuremath{K}_\infty B)$. \ensuremath{\mathfrak{b}}egin{equation}gin{remark} For simplicity and to be consistent with the treatment of upper global crystal bases in \cite{Kas2}, we have used the $\ensuremath{\mathbb{Q}}A$-form $\ensuremath{\mathcal{U}}q_\ensuremath{\mathbb{Q}}$ from \cite{Kas2} rather than the $\ensuremath{\mathfrak{m}}athbf{A}$-form of $\dot{\ensuremath{\mathcal{U}}q}$ defined in \cite{LBook}. \end{remark} \ensuremath{\mathfrak{b}}egin{equation}gin{remark}\label{r balanced triple} In the language of Kashiwara \cite{Kas2}, the basis $B$ in the definitions above is a lower or upper \emph{global crystal basis}. 
Kashiwara defines a triple $(N_\ensuremath{\mathbb{Q}},\ensuremath{\mathscr{L}}_0,\ensuremath{\mathscr{L}}_\infty)$ to be \emph{balanced} if the canonical surjection $N_\ensuremath{\mathbb{Q}} \cap \ensuremath{\mathscr{L}}_0 \cap \ensuremath{\mathscr{L}}_\infty \ensuremath{\mathfrak{t}}o \ensuremath{\mathscr{L}}_0 / \ensuremath{u} \ensuremath{\mathscr{L}}_0$ is an isomorphism, where $N$ is any $\ensuremath{K}$-vector space, and $N_\ensuremath{\mathbb{Q}}$, $\ensuremath{\mathscr{L}}_0$, $\ensuremath{\mathscr{L}}_\infty$ are any $\ensuremath{\mathbb{Q}}A$-submodule, $\ensuremath{K}_0$-submodule, and $\ensuremath{K}_\infty$-submodule of $N$, respectively. To define global lower crystal bases, Kashiwara first defines a balanced triple $(\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N_\ensuremath{\mathfrak{m}}athbf{A}, \ensuremath{\mathscr{L}}_0(N),\ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_0(N)}) $ and a basis $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}} \subseteq \ensuremath{\mathscr{L}}_0(N) / \ensuremath{u} \ensuremath{\mathscr{L}}_0(N)$ and then defines $B$ to be the inverse image of $\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}$ under the isomorphism \[ \ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N_\ensuremath{\mathfrak{m}}athbf{A} \cap \ensuremath{\mathscr{L}}_0(N) \cap \ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_0(N)} \xrightarrow{\cong} \ensuremath{\mathscr{L}}_0(N) / \ensuremath{u} \ensuremath{\mathscr{L}}_0(N).\] Global upper crystal bases are defined similarly. \end{remark} Let $\eta_\lambda$ be a highest weight vector of $V_\lambda$. The $\ensuremath{\mathfrak{b}}r{\cdot}$-involution on $V_\lambda$ is defined by setting $\ensuremath{\mathfrak{b}}r{\eta_\lambda} = \eta_\lambda$ and requiring that it intertwines the $\ensuremath{\mathfrak{b}}r{\cdot}$-involution of $\ensuremath{\mathcal{U}}q$.
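A toy illustration of balancedness (our example, in the simplest case): take $N = \ensuremath{K}$, $N_\ensuremath{\mathbb{Q}} = \ensuremath{\mathbb{Q}}[\ensuremath{u},\ensuremath{u^{-1}}]$, $\ensuremath{\mathscr{L}}_0 = \ensuremath{K}_0$, and $\ensuremath{\mathscr{L}}_\infty = \ensuremath{K}_\infty$, where, as is standard, $\ensuremath{K}_0$ and $\ensuremath{K}_\infty$ denote the rational functions regular at $\ensuremath{u} = 0$ and at $\ensuremath{u} = \infty$, respectively. A Laurent polynomial regular at $0$ is a polynomial, and a polynomial regular at $\infty$ is a constant, so $N_\ensuremath{\mathbb{Q}} \cap \ensuremath{\mathscr{L}}_0 \cap \ensuremath{\mathscr{L}}_\infty = \ensuremath{\mathbb{Q}}$, which maps isomorphically onto $\ensuremath{\mathscr{L}}_0 / \ensuremath{u} \ensuremath{\mathscr{L}}_0 \cong \ensuremath{\mathbb{Q}}$; the triple is balanced.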
The $\ensuremath{\mathbb{Q}}A$-forms of $V_\lambda$ of \cite{Kas2} are denoted $V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda$ and $V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ up}}_\lambda$; $V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda$ is defined to be $\ensuremath{\mathcal{U}}q_\ensuremath{\mathbb{Q}} \eta_\lambda$ and $V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ up}}_\lambda$ is defined by dualizing $V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda$ by a symmetric form on $V_\lambda$. We can now state the fundamental result about the existence of global crystal bases and based modules for $V_\lambda$. \ensuremath{\mathfrak{b}}egin{equation}gin{theorem}[Kashiwara \cite{Kas1,Kas2}] \label{t theorem 6.2.2 HK} \ \ensuremath{\mathfrak{b}}egin{equation}gin{list}{\emph{(\roman{ctr})}} {\ensuremath{u}secounter{ctr} \setlength{\itemsep}{1pt} \setlength{\ensuremath{\mathfrak{t}}opsep}{2pt}} \item The triple $(V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda, \ensuremath{\mathscr{L}}_0(\lambda), \ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_0(\lambda)})$ is balanced. Then, letting $G'_\lambda$ be the inverse of the canonical isomorphism \[ V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda \cap \ensuremath{\mathscr{L}}_0(\lambda) \cap \ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_0(\lambda)} \xrightarrow{\cong} \ensuremath{\mathscr{L}}_0(\lambda) / \ensuremath{u} \ensuremath{\mathscr{L}}_0(\lambda),\] $B'(\lambda) := G'_\lambda(\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}'(\lambda))$ is the \emph{lower global crystal basis of $V_\lambda$} and $(V_\lambda,B'(\lambda))$ is a lower based module. \item The triple $(V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ up}}_\lambda, \ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_\infty(\lambda)}, \ensuremath{\mathscr{L}}_\infty(\lambda))$ is balanced. 
Then, letting $G_\lambda$ be the inverse of the canonical isomorphism \[ V^{\ensuremath{\mathbb{Q}} \ensuremath{\mathfrak{t}}ext{ up}}_\lambda \cap \ensuremath{\mathfrak{b}}r{\ensuremath{\mathscr{L}}_\infty(\lambda)} \cap \ensuremath{\mathscr{L}}_\infty(\lambda) \xrightarrow{\cong} \ensuremath{\mathscr{L}}_\infty(\lambda) / \ensuremath{u^{-1}} \ensuremath{\mathscr{L}}_\infty(\lambda),\] $B(\lambda) := G_\lambda(\ensuremath{\ensuremath{\mathfrak{m}}athscr{B}}(\lambda))$ is the \emph{upper global crystal basis of $V_\lambda$} and $(V_\lambda,B(\lambda))$ is an upper based module. \end{list} \end{theorem} Note that Kashiwara proves that the triples are balanced and the conclusions about based modules follow easily (see \cite[27.1.4]{LBook} or {\cite[Theorem 6.2.2]{HK}}). We may now define integral forms $V^{\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\mathfrak{t}}ext{ low}}_\lambda := \ensuremath{\mathfrak{m}}athbf{A}B'(\lambda)$ and $V^{\ensuremath{\mathfrak{m}}athbf{A} \ensuremath{\mathfrak{t}}ext{ up}}_\lambda := \ensuremath{\mathfrak{m}}athbf{A}B(\lambda)$ of $V_\lambda$. We wish to make use of some of the facts established about lower based modules in \cite[Chapter 27]{LBook} and their corresponding statements for upper based modules. It is shown in \cite[Chapter 27]{LBook} that if $(N,B)$ is a lower based module, then so are $(N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda], B[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda])$ and $(N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda] /N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}dneq \lambda], B[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda] - B[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}dneq \lambda])$, where $B[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda] = N[\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d \lambda] \cap B$, etc. 
Moreover, this last based module is isomorphic to a direct sum of copies of $V_\lambda$ with their lower global canonical bases. The analogous statements for upper based modules are true with $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{g}}d$ replaced by $\ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}$ and are shown in \cite[\ensuremath{\mathfrak{t}}extsection 5.2]{Kas2}. \subsection{Tensor products of based modules} \label{ss Tensor products of based modules} Let $(N,B), (N',B')$ be lower (resp. upper) based modules. There is a basis $B \diamond B'$ (resp. $B \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart B'$) which makes $N \ensuremath{\otimes} N'$ into a lower (resp. upper) based module. However, first, we need an involution on $N \ensuremath{\otimes} N'$ that intertwines the $\ensuremath{\mathfrak{b}}r{\cdot}$-involution on $\ensuremath{\mathcal{U}}q$. This definition is not obvious and requires Lusztig's quasi-$R$-matrix, but adapted to our coproduct as in \cite{Brundan}: let $\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}heta = (\varphi \ensuremath{\otimes} \varphi)(\ensuremath{\mathfrak{t}}ilde{\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}heta}^{-1})$ where $\ensuremath{\mathfrak{t}}ilde{\ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}heta}$ is exactly Lusztig's quasi-$\ensuremath{\mathcal{R}}$-matrix from \cite[4.1.2]{LBook}. It is an element of a certain completion $(\ensuremath{\mathcal{U}}q \ensuremath{\otimes} \ensuremath{\mathcal{U}}q)^{\ensuremath{\omega}edge}$ of the algebra $\ensuremath{\mathcal{U}}q \ensuremath{\otimes} \ensuremath{\mathcal{U}}q$. Then the involution $\ensuremath{\mathfrak{b}}r{\cdot}:N \ensuremath{\otimes} N' \ensuremath{\mathfrak{t}}o N \ensuremath{\otimes} N'$ is defined by $\ensuremath{\mathfrak{b}}r{n \ensuremath{\otimes} n'} = \ensuremath{\ensuremath{\mathfrak{t}}ilde{T}}heta(\ensuremath{\mathfrak{b}}r{n} \ensuremath{\otimes} \ensuremath{\mathfrak{b}}r{n'})$. (This involution is denoted $\Psi$ in \cite{LBook}.) 
\ensuremath{\mathfrak{b}}egin{equation}gin{theorem}[Lusztig {\cite[Theorem 27.3.2]{LBook}}] \label{t tsr product lower canbas} Maintain the notation above with $(N,B), (N', B')$ lower based modules and set $(N \ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u}]} = \ensuremath{\mathbb{Z}}[\ensuremath{u}] B \ensuremath{\otimes} B'$. For any $(b,b') \in B \ensuremath{\mathfrak{t}}imes B'$, there is a unique element $b \diamond b' \in (N \ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u}]}$ such that $\ensuremath{\mathfrak{b}}r{b\diamond b'} = b \diamond b'$ and $(b\diamond b')-b\ensuremath{\otimes} b' \in \ensuremath{u} (N\ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u}]}$. Set $B \diamond B' = \{b\diamond b' : b \in B, b' \in B'\}$. Then the pair $(N \ensuremath{\otimes} N', B\diamond B')$ is a lower based module. \end{theorem} There is a similar theorem for upper based modules, as the proof of Theorem \ref{t tsr product lower canbas} adapts easily. This is discussed in \cite{FK} in the $n=2$ case, and we use the notation $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart$ for this product as is done there. \ensuremath{\mathfrak{b}}egin{equation}gin{theorem} \label{t tsr product upper canbas} Maintain the notation above with $(N,B), (N', B')$ upper based modules and set $(N \ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]} = \ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}] B \ensuremath{\otimes} B'$. 
For any $(b,b') \in B \ensuremath{\mathfrak{t}}imes B'$, there is a unique element $b \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart b' \in (N \ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]}$ such that $\ensuremath{\mathfrak{b}}r{b\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart b'} = b \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart b'$ and $(b\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart b')-b\ensuremath{\otimes} b' \in \ensuremath{u^{-1}} (N\ensuremath{\otimes} N')_{\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]}$. Set $B \ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart B' = \{b\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart b' : b \in B, b' \in B'\}$. Then the pair $(N \ensuremath{\otimes} N', B\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart B')$ is an upper based module. \end{theorem} Moreover, the products $\diamond$ and $\ensuremath{\ensuremath{\mathfrak{m}}athfrak{h}}eart$ are associative (\cite[27.3.6]{LBook}). \section{Quantum Schur-Weyl duality and canonical bases} \label{s Quantum Schur-Weyl duality and canonical bases} Write $V$ for the natural representation $V_{\epsilon_1}$ of $\ensuremath{\mathcal{U}}q$. The action of $\ensuremath{\mathcal{U}}q$ on the weight basis $v_1,\ensuremath{\ensuremath{\mathfrak{t}}rianglelefteq}ots,v_n$ of $V$ is given by $q^{\dual{\epsilon}_{i}} v_j = \ensuremath{u}^{\delta_{ij}} v_j$, $F_i v_i = v_{i+1}$, $F_i v_j = 0$ for $i \neq j$, and $E_i v_{i+1} = v_i$, $E_i v_j = 0$ for $j \neq i+1$. We recall the commuting actions of $\ensuremath{\mathcal{U}}q$ and $\ensuremath{\mathscr{H}}_r$ on $\ensuremath{\mathbf{T}} := V^{\ensuremath{\otimes} r}$ as described in \cite{Jimbo,GL, Ram,FKK, Brundan} and give several characterizations of the lower and upper canonical basis of $\ensuremath{\mathbf{T}}$; we closely follow \cite{Brundan} and are consistent with its conventions. 
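The action of the generators on the weight basis of $V$ just described can be checked numerically against the defining relations. A sketch for $n = 3$ (our matrix encoding, with $\ensuremath{u}$ specialized to a rational value):

```python
from fractions import Fraction

n = 3
u = Fraction(3)                      # any rational with u != u^{-1}

def mat(entries):
    """n x n matrix from a dict {(row, col): value}."""
    return [[entries.get((r, c), Fraction(0)) for c in range(n)] for r in range(n)]

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def E_mat(i):                        # E_i v_{i+1} = v_i, E_i v_j = 0 otherwise
    return mat({(i - 1, i): Fraction(1)})

def F_mat(i):                        # F_i v_i = v_{i+1}, F_i v_j = 0 otherwise
    return mat({(i, i - 1): Fraction(1)})

def K_mat(i, sign=1):                # K_i^{±1} acts on v_j by u^{±(δ_{ij} - δ_{i+1,j})}
    return mat({(j, j): u ** (sign * ((j == i - 1) - (j == i))) for j in range(n)})

def commutation_holds(i):
    """Check E_i F_i - F_i E_i = (K_i - K_i^{-1}) / (u - u^{-1}) in the natural rep."""
    lhs = [[mul(E_mat(i), F_mat(i))[r][c] - mul(F_mat(i), E_mat(i))[r][c]
            for c in range(n)] for r in range(n)]
    rhs = [[(K_mat(i)[r][c] - K_mat(i, -1)[r][c]) / (u - u ** -1)
            for c in range(n)] for r in range(n)]
    return lhs == rhs
```

Here both sides reduce to the diagonal matrix with entries $1, -1$ in positions $i, i+1$ and zeros elsewhere.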
\subsection{Commuting actions on $\ensuremath{\mathbf{T}} = V^{\ensuremath{\otimes} r}$} The action of $\ensuremath{\mathcal{U}}q$ on $\ensuremath{\mathbf{T}}$ is determined by the coproduct $\ensuremath{\ensuremath{\mathfrak{m}}athcal{D}}elta$ \eqref{e U_q coproduct}. The commuting action of $\ensuremath{\mathscr{H}}_r$ on $\ensuremath{\mathbf{T}}$ comes from a $\ensuremath{\mathcal{U}}q$-isomorphism $\ensuremath{\mathcal{R}}_{V,V} : V \ensuremath{\otimes} V \ensuremath{\mathfrak{t}}o V \ensuremath{\otimes} V$ determined by the universal $R$-matrix; this isomorphism can also be defined using the quasi-$\ensuremath{\mathcal{R}}$-matrix \cite[32.1.5]{LBook} (see also \cite[\ensuremath{\mathfrak{t}}extsection 3]{Brundan}). The $\ensuremath{\mathscr{H}}_r$ action is given explicitly on generators as follows: for a word $\ensuremath{\mathfrak{m}}athbf{k} = k_1\dots k_r \in [n]^r$, let $\mathbf{v}_{\ensuremath{\mathfrak{m}}athbf{k}} = v_{k_1} \ensuremath{\otimes} v_{k_2} \ensuremath{\otimes} \dots \ensuremath{\otimes} v_{k_r}$ be the corresponding tensor monomial. Recall from \ensuremath{\mathfrak{t}}extsection\ref{ss type A combinatorics preliminaries} the right action of $\ensuremath{\mathcal{S}}_r$ on words of length $r$. Then \ensuremath{\mathfrak{b}}egin{equation} \label{e T inverse_i act on V} \mathbf{v}_\ensuremath{\mathfrak{m}}athbf{k} T_i^{-1} = \ensuremath{\mathfrak{b}}egin{equation}gin{cases} \mathbf{v}_{\ensuremath{\mathfrak{m}}athbf{k} \, s_i} & \ensuremath{\mathfrak{t}}ext{ if } k_i < k_{i+1}, \\ \ensuremath{u^{-1}} \mathbf{v}_\ensuremath{\mathfrak{m}}athbf{k} & \ensuremath{\mathfrak{t}}ext{ if } k_i = k_{i + 1}, \\ (\ensuremath{u^{-1}} - \ensuremath{u}) \mathbf{v}_\ensuremath{\mathfrak{m}}athbf{k} + \mathbf{v}_{\ensuremath{\mathfrak{m}}athbf{k} \, s_i} & \ensuremath{\mathfrak{t}}ext{ if } k_i > k_{i+1}. 
\end{cases} \end{equation} \ensuremath{\mathfrak{b}}egin{equation}gin{remark} This convention for the action of $\ensuremath{\mathscr{H}}_r$ on $\ensuremath{\mathbf{T}}$ is consistent with that in \cite{Brundan, Ram, GCT4, BMSGCT4} and \cite[Proposition 2.1$'$]{FKK}, but not with that in \cite{GL} and \cite[Proposition 2.1]{FKK}. Note that $\mathbf{v}_\ensuremath{\mathfrak{m}}athbf{k}, T_i^{-1}$ are denoted $M_\ensuremath{\mathfrak{a}}lpha, H_i$ respectively in \cite{Brundan}. \end{remark} We can now state the beautiful quantum version of Schur-Weyl duality, originally due to Jimbo \cite{Jimbo}. \ensuremath{\mathfrak{b}}egin{equation}gin{theorem}\label{c Schur-Weyl duality basic} As a $(\ensuremath{\mathcal{U}}q, \ensuremath{K}\ensuremath{\mathscr{H}}_r)$-bimodule, $\ensuremath{\mathbf{T}}$ decomposes into irreducibles as \[ \ensuremath{\mathbf{T}} \cong \ensuremath{\mathfrak{b}}igoplus_{\lambda \vdash_n r} V_\lambda \ensuremath{\otimes} M_\lambda. \] \end{theorem} As an $\ensuremath{\mathscr{H}}_r$-module, $\ensuremath{\mathbf{T}}$ decomposes into a direct sum of weight spaces: $\ensuremath{\mathbf{T}} \cong \ensuremath{\mathfrak{b}}igoplus_{\zeta \in X} \ensuremath{\mathbf{T}}^\zeta$. The weight space $\ensuremath{\mathbf{T}}^\zeta$ is the $\ensuremath{K}$-vector space spanned by $\mathbf{v}_{\ensuremath{\mathfrak{m}}athbf{k}}$ such that $\ensuremath{\mathfrak{m}}athbf{k}$ has content $\zeta$. Let $\epsilon_+ := M_{(r)}^\ensuremath{\mathfrak{m}}athbf{A}$ be the trivial $\ensuremath{\mathscr{H}}_r$-module, i.e. the one-dimensional module identified with the map $\ensuremath{\mathscr{H}}_r \ensuremath{\mathfrak{t}}o \ensuremath{\mathfrak{m}}athbf{A}$, $T_i \ensuremath{\mathfrak{m}}apsto \ensuremath{u}$. 
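From \eqref{e T inverse_i act on V} one can verify directly that the operators satisfy the quadratic relation $(T_i^{-1})^2 = (\ensuremath{u^{-1}} - \ensuremath{u})\,T_i^{-1} + 1$ (equivalently, $(T_i - \ensuremath{u})(T_i + \ensuremath{u^{-1}}) = 0$, consistent with $T_i \ensuremath{\mathfrak{m}}apsto \ensuremath{u}$ on the trivial module) as well as the braid relations. A numeric sketch (our encoding: vectors as dictionaries from words to coefficients, $\ensuremath{u}$ specialized):

```python
from fractions import Fraction

u = Fraction(2)

def act_Tinv(vec, i):
    """Apply T_i^{-1} (1-indexed i) on the right to vec, following the cases above."""
    out = {}
    def add(word, coeff):
        out[word] = out.get(word, Fraction(0)) + coeff
    for k, coeff in vec.items():
        a, b = k[i - 1], k[i]
        swapped = k[:i - 1] + (b, a) + k[i + 1:]   # the word k s_i
        if a < b:
            add(swapped, coeff)
        elif a == b:
            add(k, coeff * u ** -1)
        else:
            add(k, coeff * (u ** -1 - u))
            add(swapped, coeff)
    return {w: c for w, c in out.items() if c != 0}
```

The quadratic and braid relations then hold on every tensor monomial.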
It is not difficult to prove the following using \eqref{e T inverse_i act on V} (see \cite[\ensuremath{\mathfrak{t}}extsection 4]{Brundan}): \ensuremath{\mathfrak{b}}egin{equation}gin{proposition} \label{p weight space equals induced} The map $\ensuremath{\mathbf{T}}_\ensuremath{\mathfrak{m}}athbf{A}^\zeta \ensuremath{\mathfrak{t}}o \epsilon_+ \ensuremath{\otimes}_{\ensuremath{\mathscr{H}}_{J_\zeta}} \ensuremath{\mathscr{H}}_r$ given by $\mathbf{v}_\ensuremath{\mathfrak{m}}athbf{k} \ensuremath{\mathfrak{m}}apsto \epsilon_+ \ensuremath{\otimes}_{\ensuremath{\mathscr{H}}_{J_\zeta}} \ensuremath{\mathfrak{b}}r{T}_{d(\ensuremath{\mathfrak{m}}athbf{k})}$ is an isomorphism of right $\ensuremath{\mathscr{H}}_r$-modules. \end{proposition} Here $\ensuremath{\mathbf{T}}_\ensuremath{\mathfrak{m}}athbf{A}$ is the integral form of $\ensuremath{\mathbf{T}}$, defined below. \subsection{Lower canonical basis of $\ensuremath{\mathbf{T}}$} We now apply the general theory of \ensuremath{\mathfrak{t}}extsection\ref{s Crystal bases of the quantized enveloping algebra} to construct global crystal bases of $\ensuremath{\mathbf{T}}$. Recall from \ensuremath{\mathfrak{t}}extsection\ref{ss Tensor products of based modules} that there is a $\ensuremath{\mathfrak{b}}r{\cdot}$-involution on $\ensuremath{\mathbf{T}}$ defined using the quasi-$\ensuremath{\mathcal{R}}$-matrix. The $\ensuremath{\mathfrak{b}}r{\cdot}$-involution on $\ensuremath{\mathscr{H}}_r$ intertwines that of $\ensuremath{\mathbf{T}}$, i.e. \ensuremath{\mathfrak{b}}egin{equation} \label{e br involution on H intertwines} \ensuremath{\mathfrak{b}}r{v h} = \ensuremath{\mathfrak{b}}r{v} \ \ensuremath{\mathfrak{b}}r{h}, \ensuremath{\mathfrak{t}}ext{ for any } v \in \ensuremath{\mathbf{T}}, \ h \in \ensuremath{\mathscr{H}}_r.
\end{equation}
This follows easily from the identity $\tilde{\Theta}^{-1} = \br{\tilde{\Theta}}$ from \cite{LBook}; see \cite{Brundan}. Let $V_\mathbf{A} = \mathbf{A}\{v_i: i \in [n]\}$, which is the same as the integral forms $V^{\mathbf{A} \text{ low}}_{\epsilon_1} = V^{\mathbf{A} \text{ up}}_{\epsilon_1}$ from \textsection\ref{ss global canonical bases}. By Theorem \ref{t tsr product lower canbas} and associativity of the $\diamond$ product, $(\mathbf{T}, B')$ is a lower based module with balanced triple $(\mathbb{Q} \otimes_\mathbb{Z} \mathbf{T}_\mathbf{A}, \mathscr{L}_0, \br{\mathscr{L}_0})$, where
\begin{equation} \begin{array}{ccl}
\mathscr{L}_0 &:=& \mathscr{L}_0(\epsilon_1) \otimes_{K_0} \dots \otimes_{K_0} \mathscr{L}_0(\epsilon_1), \\
\mathscr{B}' &:=& \mathscr{B}'(\epsilon_1) \times \dots \times \mathscr{B}'(\epsilon_1) \subseteq \mathscr{L}_0 / u\mathscr{L}_0, \\
\mathbf{T}_{\mathbb{Z}[u]} &:=& \mathbb{Z}[u] \{\mathbf{v}_\mathbf{k} : \mathbf{k} \in [n]^r\}, \\
\mathbf{T}_\mathbf{A} &:=& V_\mathbf{A} \otimes_\mathbf{A} \dots \otimes_\mathbf{A} V_\mathbf{A} = \mathbf{A} \otimes_\mathbb{Z} \mathbf{T}_{\mathbb{Z}[u]}, \\
B' &:=& B'(\epsilon_1) \diamond \dots \diamond B'(\epsilon_1).
\end{array} \end{equation}
We call $B'$ the \emph{lower canonical basis} of $\mathbf{T}$ and, for each $\mathbf{k} \in [n]^r$, we write $c'_\mathbf{k}$ for the element $v_{k_1} \diamond \dots \diamond v_{k_r} \in B'$ and $b'_\mathbf{k} \in \mathscr{B}'$ for its image in $\mathscr{L}_0 / u \mathscr{L}_0$. Figure \ref{f c_111 example lower} gives the lower canonical basis in terms of the monomial basis for $r = 3$, $n = 2$.
\begin{figure}[H]
\begin{tikzpicture}[xscale = 5,yscale = 3]
\tikzstyle{vertex}=[inner sep=0pt, outer sep=3pt, fill = white]
\tikzstyle{edge} = [draw, thick, ->,black]
\tikzstyle{LabelStyleH} = [text=black, anchor=south]
\tikzstyle{LabelStyleV} = [text=black, anchor=east]
\node[vertex] (111) at (2.8,0) {$c'_{111} = \mathbf{v}_{111} $};
\node[vertex] (112) at (2,-1) {$c'_{112} = \mathbf{v}_{112}$};
\node[vertex] (121) at (2, 0) {$c'_{121} = \mathbf{v}_{121} + u \mathbf{v}_{112} $};
\node[vertex] (211) at (2, 1) {$c'_{211} = \atop \mathbf{v}_{211} + u \mathbf{v}_{121} + u^2 \mathbf{v}_{112}$};
\node[vertex] (122) at (1, -1) {$c'_{122} = \mathbf{v}_{122} $};
\node[vertex] (212) at (1 , 0)
{$c'_{212} = \mathbf{v}_{212} + u \mathbf{v}_{122} $};
\node[vertex] (221) at (1, 1) {$c'_{221} = \atop \mathbf{v}_{221} + u \mathbf{v}_{212} + u^2 \mathbf{v}_{122}$};
\node[vertex] (222) at (0, 0) {$c'_{222} = \mathbf{v}_{222} $};
\draw[edge] (111) to node[LabelStyleH]{$ $} (211);
\draw[edge] (211) to node[LabelStyleH]{$[2]$} (221);
\draw[edge] (221) to node[LabelStyleH]{$[3]$} (222);
\draw[edge] (121) to node[LabelStyleH]{$ $} (122);
\draw[edge] (121) to node[LabelStyleH]{$ $} (221);
\draw[edge] (112) to node[LabelStyleH]{$ $} (212);
\draw[edge] (212) to node[LabelStyleH]{$[2]$} (222);
\draw[edge] (122) to node[LabelStyleH]{$ $} (222);
\foreach \y in {-.2}{
\node[vertex] (t111) at (2.8,0+1.45*\y) {\small $\left(\tableau{ 1 & 1 & 1}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t112) at (2,-1 +1.65*\y) {\small $\left(\tableau{ 1 & 1 \cr 2}, \tableau{1 & 3 \cr 2}\right)$};
\node[vertex] (t121) at (2, 0+1.65*\y) {\small $\left(\tableau{ 1 & 1 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t211) at (2, 1+1.45*\y) {\small $\left(\tableau{ 1 & 1 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t122) at (1, -1 + 1.65*\y) {\small $\left(\tableau{ 1 & 2\cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t212) at (1 , 0 +1.65*\y) {\small $\left(\tableau{ 1 & 2\cr 2}, \tableau{1 & 3\cr 2}\right)$};
\node[vertex] (t221) at (1, 1+1.45*\y) {\small $\left(\tableau{ 1 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex]
(t222) at (0, 0+1.45*\y) {\small $\left(\tableau{2 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
}
\end{tikzpicture}
\caption{An illustration of Corollary \ref{c Schur-Weyl duality lower} for $r=3$, $n=2$. The pairs of tableaux are of the form $(P(\mathbf{k}^\dagger), Q(\mathbf{k}^\dagger))$. The arrows and their coefficients give the action of $F_1$ on the lower canonical basis.}
\label{f c_111 example lower}
\end{figure}

\begin{figure}[H]
\begin{tikzpicture}[xscale = 5,yscale = 3]
\tikzstyle{vertex}=[inner sep=0pt, outer sep=3pt, fill = white]
\tikzstyle{edge} = [draw, thick, ->,black]
\tikzstyle{LabelStyleH} = [text=black, anchor=south]
\tikzstyle{LabelStyleV} = [text=black, anchor=east]
\node[vertex] (111) at (3,0) {$c_{111} = \mathbf{v}_{111} $};
\node[vertex] (112) at (2,-1) {$c_{112} = \mathbf{v}_{112}$};
\node[vertex] (121) at (2, 0) {$c_{121} = \mathbf{v}_{121} - u^{-1} \mathbf{v}_{112} $};
\node[vertex] (211) at (2, 1) {$c_{211} = \mathbf{v}_{211} - u^{-1} \mathbf{v}_{121}$};
\node[vertex] (122) at (1, -1) {$c_{122} = \mathbf{v}_{122} $};
\node[vertex] (212) at (1 , 0) {$c_{212} = \mathbf{v}_{212} - u^{-1} \mathbf{v}_{122} $};
\node[vertex] (221) at (1, 1) {$c_{221} = \mathbf{v}_{221} - u^{-1} \mathbf{v}_{212}$};
\node[vertex] (222) at (0, 0) {$c_{222} = \mathbf{v}_{222} $};
\draw[edge] (111) to node[LabelStyleH]{$[3]$} (112);
\draw[edge] (111) to node[LabelStyleH]{$[2]$} (121);
\draw[edge] (111) to node[LabelStyleH]{$ $} (211);
\draw[edge] (112) to node[LabelStyleH]{$[2]$} (122);
\draw[edge] (112) to node[LabelStyleH]{$ $} (212);
\draw[edge] (121) to node[LabelStyleH]{$ $} (221);
\draw[edge] (211) to node[LabelStyleH]{$ $} (212);
\draw[edge] (122) to
node[LabelStyleH]{$ $} (222);
\foreach \y in {-.2}{
\node[vertex] (t111) at (3,0+\y) {\small $\left(\tableau{ 1 & 1 & 1}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t112) at (2,-1+\y) {\small $\left(\tableau{ 1 & 1 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t121) at (2, 0+1.65*\y) {\small $\left(\tableau{ 1 & 1 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t211) at (2, 1+1.65*\y) {\small $\left(\tableau{ 1 & 1 \cr 2}, \tableau{1 & 3\cr 2}\right)$};
\node[vertex] (t122) at (1, -1+\y) {\small $\left(\tableau{ 1 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t212) at (1 , 0 +1.65*\y) {\small $\left(\tableau{ 1 & 2\cr 2}, \tableau{1 & 3\cr 2}\right)$};
\node[vertex] (t221) at (1, 1 +1.65*\y) {\small $\left(\tableau{ 1 & 2\cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t222) at (0, 0+\y) {\small $\left(\tableau{2 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
}
\end{tikzpicture}
\caption{An illustration of Corollary \ref{c Schur-Weyl duality upper} for $r=3$, $n=2$. The pairs of tableaux are of the form $(P(\mathbf{k}), Q(\mathbf{k}))$. The arrows and their coefficients give the action of $F_1$ on the upper canonical basis.}
\label{f c_111 example upper}
\end{figure}

We assemble some equivalent descriptions of the lower canonical basis of $\mathbf{T}$, which are also shown in \cite{Brundan} and appear in a slightly different form in \cite{GL, FKK}.
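Before stating the theorem, it may help to see the bar-invariance characterization (description (i) below) in the smallest nontrivial case $r = 2$, $n = 2$. This is a sketch under an assumption that is ours, not the paper's: we use the normalization of the quasi-$\mathcal{R}$-matrix for which $\br{\mathbf{v}_{12}} = \mathbf{v}_{12}$ and $\br{\mathbf{v}_{21}} = \mathbf{v}_{21} + (u - u^{-1})\,\mathbf{v}_{12}$. Then for $c'_{21} = \mathbf{v}_{21} + u\,\mathbf{v}_{12}$,

```latex
\begin{align*}
\br{c'_{21}} &= \br{\mathbf{v}_{21}} + u^{-1}\,\br{\mathbf{v}_{12}} \\
             &= \mathbf{v}_{21} + (u - u^{-1})\,\mathbf{v}_{12} + u^{-1}\,\mathbf{v}_{12} \\
             &= \mathbf{v}_{21} + u\,\mathbf{v}_{12} = c'_{21},
\end{align*}
```

and $c'_{21} \equiv \mathbf{v}_{21} \bmod u\,\mathbf{T}_{\mathbb{Z}[u]}$, so $c'_{21}$ satisfies both conditions of description (i).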
\begin{theorem} \label{t Schur-Weyl duality lower} The lower canonical basis element $c'_\mathbf{k}, \, \mathbf{k} \in [n]^r,$ has the following equivalent descriptions:
\begin{list}{\emph{(\roman{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item the unique $\br{\cdot}$-invariant element of $\mathbf{T}_{\mathbb{Z}[u]}$ congruent to $\mathbf{v}_\mathbf{k} \bmod u \mathbf{T}_{\mathbb{Z}[u]}$;
\item $v_{k_1} \diamond \dots \diamond v_{k_r}$;
\item $G'(b'_\mathbf{k})$, where $G'$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z} \mathbf{T}_\mathbf{A}) \cap \mathscr{L}_0 \cap \br{\mathscr{L}_0} \xrightarrow{\cong} \mathscr{L}_0 / u \mathscr{L}_0; \]
\item the image of $C'_{D(\mathbf{k})}$ under the isomorphism in Proposition \ref{p weight space equals induced} ($D(\mathbf{k})$ is a maximal coset representative, defined in \textsection\ref{ss type A combinatorics preliminaries}).
\end{list}
\end{theorem}
\begin{proof} Description (i) is the definition of (ii) (Theorem \ref{t tsr product lower canbas}), and the element in (iii) is easily seen to satisfy the conditions in (i) (see Remark \ref{r balanced triple} and Theorem \ref{t theorem 6.2.2 HK}).
The element in (iv) satisfies the conditions in (i) by Proposition \ref{p cell isomorphism induced} and the combination of Proposition \ref{p weight space equals induced} and \eqref{e br involution on H intertwines}. Note that we are actually applying an easy modification of Proposition \ref{p cell isomorphism induced} with $u$ in place of $u^{-1}$ and $C'_v \otimes_{\mathscr{H}(W_J)} \br{T}_x$ in place of $C'_v \otimes_{\mathscr{H}(W_J)} T_x$.
\end{proof}
Proposition \ref{p cell isomorphism induced} and the discussion in \textsection\ref{ss cell label conventions C_Q C'_Q}, results of \cite[Chapter 27]{LBook} (see the discussion at the end of \textsection\ref{ss global canonical bases}), and the well-known combinatorics of the crystal basis $\mathscr{B}'$ (see e.g. \cite[Chapter 7]{HK}) allow us to determine the $\mathscr{H}_r$- and $\mathcal{U}_q$-cells of $(\mathbf{T}, B')$.
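The cells in the corollary below are indexed by the insertion and recording tableaux of the RSK correspondence. The following standard row-insertion implementation (ours, for illustration only) reproduces the labeling $(P(\mathbf{k}), Q(\mathbf{k}))$ used for the upper canonical basis figure; for the lower canonical basis one first applies $\dagger$ to $\mathbf{k}$.

```python
from bisect import bisect_right
from itertools import product

def rsk(word):
    """Row-insertion RSK: word -> (P, Q), P semistandard, Q standard, same shape."""
    P, Q = [], []
    for pos, x in enumerate(word, start=1):
        for row_p, row_q in zip(P, Q):
            j = bisect_right(row_p, x)  # index of leftmost entry strictly greater than x
            if j == len(row_p):         # no such entry: x settles at the end of this row
                row_p.append(x)
                row_q.append(pos)
                break
            row_p[j], x = x, row_p[j]   # bump, and insert the bumped entry one row down
        else:                           # bumped out of the bottom: start a new row
            P.append([x])
            Q.append([pos])
    return P, Q

# Example matching the r = 3, n = 2 figure: k = (1, 2, 1)
P, Q = rsk((1, 2, 1))
print(P, Q)  # [[1, 1], [2]] [[1, 2], [3]]

# Group all words in [2]^3 into cells by their recording tableau Q
cells = {}
for k in product((1, 2), repeat=3):
    Pk, Qk = rsk(k)
    cells.setdefault(tuple(map(tuple, Qk)), []).append(k)
print(cells)
```

For $n = 2$, $r = 3$ this yields one cell of size $4$ (shape $(3)$) and two of size $2$ (shape $(2,1)$), matching the two figures above.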
\begin{corollary} \label{c Schur-Weyl duality lower} \
\begin{list}{\emph{(\roman{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item The $\mathscr{H}_r$-module with basis $(\mathbf{T}, B')$ decomposes into $\mathscr{H}_r$-cells as $B' = \bigsqcup_{T \in \text{SSYT}_{[n]}^r} \Gamma'_T$, where $\Gamma'_T = \{c'_\mathbf{k} : P(\mathbf{k}^\dagger) = T \}.$
\item The $\mathscr{H}_r$-cell $\Gamma'_T$ of $\mathbf{T}$ is isomorphic to $\Gamma'_{\text{\rm sh}(T)}$.
\item The $\mathcal{U}_q$-module with basis $(\mathbf{T}, B')$ decomposes into $\mathcal{U}_q$-cells as $B' = \bigsqcup_{T \in \text{SYT}^r_{\leq n}} \Lambda'_T,$ where $\Lambda'_T = \{c'_\mathbf{k} : Q(\mathbf{k}^\dagger) = T \}.$
\item The $\mathcal{U}_q$-cell $\Lambda'_T$ is isomorphic to $B'(\text{\rm sh}(T))$.
\end{list}
\end{corollary}

\subsection{Upper canonical basis of $\mathbf{T}$}
By Theorem \ref{t tsr product upper canbas} and associativity of the $\heartsuit$ product, $(\mathbf{T}, B)$ is an upper based module with balanced triple $(\mathbb{Q} \otimes_\mathbb{Z} \mathbf{T}_\mathbf{A}, \br{\mathscr{L}_\infty}, \mathscr{L}_\infty)$, where
\begin{equation} \begin{array}{ccl}
\mathscr{L}_\infty &:=& \mathscr{L}_\infty(\epsilon_1) \otimes_{K_\infty} \dots \otimes_{K_\infty} \mathscr{L}_\infty(\epsilon_1), \\
\mathscr{B} &:=& \mathscr{B}(\epsilon_1) \times \dots \times \mathscr{B}(\epsilon_1) \subseteq \mathscr{L}_\infty / u^{-1} \mathscr{L}_\infty, \\
\mathbf{T}_{\mathbb{Z}[u^{-1}]} &:=& \mathbb{Z}[u^{-1}] \{\mathbf{v}_\mathbf{k} : \mathbf{k} \in [n]^r\}, \\
B &:=& B(\epsilon_1) \heartsuit \dots \heartsuit B(\epsilon_1).
\end{array} \end{equation}
We call $B$ the \emph{upper canonical basis} of $\mathbf{T}$ and, for each $\mathbf{k} \in [n]^r$, we write $c_\mathbf{k}$ for the element $v_{k_1} \heartsuit \dots \heartsuit v_{k_r} \in B$ and $b_\mathbf{k} \in \mathscr{B}$ for its image in $\mathscr{L}_\infty / u^{-1} \mathscr{L}_\infty$. Figure \ref{f c_111 example upper} gives the upper canonical basis in terms of the monomial basis for $r = 3$, $n = 2$.

We assemble some equivalent descriptions of the upper canonical basis of $\mathbf{T}$. The proof is similar to that of the corresponding Theorem \ref{t Schur-Weyl duality lower} for the lower canonical basis.
\begin{theorem} \label{t Schur-Weyl duality upper} The upper canonical basis element $c_\mathbf{k}, \, \mathbf{k} \in [n]^r,$ has the following equivalent descriptions:
\begin{list}{\emph{(\roman{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item the unique $\br{\cdot}$-invariant element of $\mathbf{T}_{\mathbb{Z}[u^{-1}]}$ congruent to $\mathbf{v}_\mathbf{k} \bmod u^{-1}\mathbf{T}_{\mathbb{Z}[u^{-1}]}$;
\item $v_{k_1} \heartsuit \dots \heartsuit v_{k_r}$;
\item $G(b_\mathbf{k})$, where $G$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z} \mathbf{T}_\mathbf{A}) \cap \br{\mathscr{L}_\infty} \cap \mathscr{L}_\infty \xrightarrow{\cong} \mathscr{L}_\infty / u^{-1} \mathscr{L}_\infty; \]
\item the image of $C_{d(\mathbf{k})}$ under the isomorphism in Proposition \ref{p weight space equals induced}.
\end{list}
\end{theorem}
We also have the upper canonical basis version of Corollary \ref{c Schur-Weyl duality lower}.
\begin{corollary} \label{c Schur-Weyl duality upper} \
\begin{list}{\emph{(\roman{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item The $\mathscr{H}_r$-module with basis $(\mathbf{T}, B)$ decomposes into $\mathscr{H}_r$-cells as $B = \bigsqcup_{T \in \text{SSYT}_{[n]}^r} \Gamma_T$, where $\Gamma_T = \{c_\mathbf{k} : P(\mathbf{k}) = T \}.$
\item The $\mathscr{H}_r$-cell $\Gamma_T$ of $\mathbf{T}$ is isomorphic to $\Gamma_{\text{\rm sh}(T)}$.
\item The $\mathcal{U}_q$-module with basis $(\mathbf{T}, B)$ decomposes into $\mathcal{U}_q$-cells as $B = \bigsqcup_{T \in \text{SYT}^r_{\leq n}} \Lambda_T,$ where $\Lambda_T = \{c_\mathbf{k} : Q(\mathbf{k}) = T \}.$
\item The $\mathcal{U}_q$-cell $\Lambda_T$ is isomorphic to $B(\text{\rm sh}(T))$.
\end{list}
\end{corollary}

\subsection{A symmetric bilinear form on $\mathbf{T}$}
There is a bilinear form $(\cdot, \cdot)$ on $\mathbf{T}$ under which the upper and lower canonical bases are dual and which satisfies several other nice properties. Let $\dagger$ (resp. $\dagger^\text{op}$) be the automorphism (resp. antiautomorphism) of $\mathscr{H}_r$ determined by $T_i^{\dagger} = T_{r-i}$ (resp. $T_i^{\dagger^\text{op}} = T_{r-i}$).
\begin{propdef}\cite{Brundan} \label{p dual bases} There is a unique symmetric bilinear form $(\cdot, \cdot)$ on $\mathbf{T}$ satisfying
\begin{list}{\emph{(\roman{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item $(x \mathbf{v},\mathbf{v}') = (\mathbf{v},\varphi(x) \mathbf{v}')$ for any $x \in \mathcal{U}_q$, $\mathbf{v}, \mathbf{v}' \in \mathbf{T}$,
\item $(\mathbf{v} h,\mathbf{v}') = (\mathbf{v},\mathbf{v}' h^{\dagger^\text{op}})$ for any $h \in \mathscr{H}_r$, $\mathbf{v}, \mathbf{v}' \in \mathbf{T}$,
\item $(\mathbf{v}_\mathbf{k}, \br{\mathbf{v}}_{\mathbf{l}^\dagger}) = \delta_{\mathbf{k}, \mathbf{l}}$,
\item $(c_\mathbf{k}, c'_{\mathbf{l}^\dagger}) = \delta_{\mathbf{k}, \mathbf{l}}$.
\end{list}
\end{propdef}

\section{Projected canonical bases} \label{s projected canonical bases}
Here we give several equivalent definitions of the projected counterparts of the lower and upper canonical bases of $\mathbf{T}$. These will be used in the next section to help us understand the transition matrices discussed in the introduction. Note that by quantum Schur-Weyl duality (Theorem \ref{c Schur-Weyl duality basic}), $\varsigma_\lambda^\mathbf{T} = s_\lambda^\mathbf{T}$, $\iota_\lambda^\mathbf{T} = i_\lambda^\mathbf{T}$, and $\pi_\lambda^\mathbf{T} = p_\lambda^\mathbf{T}$.
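The duality (iv) can be checked by hand in the smallest case $r = 2$, $n = 2$. The script below is a sketch under two assumptions that are ours, not the paper's: the $r = 2$ expansions $c'_{21} = \mathbf{v}_{21} + u\,\mathbf{v}_{12}$ and $c_{21} = \mathbf{v}_{21} - u^{-1}\,\mathbf{v}_{12}$ (extrapolated from the $r = 3$ figures, with $c'_{12} = c_{12} = \mathbf{v}_{12}$), and the monomial pairing $(\mathbf{v}_a, \br{\mathbf{v}_b}) = \delta_{a, b^\dagger}$ forced by (iii); bar-invariance also gives $c'_{21} = \br{\mathbf{v}_{21}} + u^{-1}\br{\mathbf{v}_{12}}$.

```python
# Laurent polynomials in u, represented as {exponent: coefficient} dicts
def add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:
            del out[e]
    return out

def scale(p, e0, c0):
    return {e + e0: c * c0 for e, c in p.items()}

ONE = {0: 1}

# Monomial pairing forced by (iii): (v_a, br(v_b)) = delta_{a, b^dagger},
# where dagger reverses the word.  For r = 2, n = 2:
pair = {('21', '21'): {}, ('21', '12'): ONE,
        ('12', '21'): ONE, ('12', '12'): {}}

# c'_l expanded in the basis {br(v_21), br(v_12)} (via bar-invariance),
# c_k expanded in the monomial basis {v_21, v_12}:
c_prime = {'21': {'21': ONE, '12': {-1: 1}}, '12': {'12': ONE}}
c_upper = {'21': {'21': ONE, '12': {-1: -1}}, '12': {'12': ONE}}

def form(k, l):
    """(c_k, c'_{l^dagger}), expanded by bilinearity over the monomial pairing."""
    ldag = l[::-1]
    total = {}
    for a, pa in c_upper[k].items():
        for b, pb in c_prime[ldag].items():
            for e1, c1 in pa.items():
                for e2, c2 in pb.items():
                    total = add(total, scale(pair[(a, b)], e1 + e2, c1 * c2))
    return total

for k in ('12', '21'):
    for l in ('12', '21'):
        print(k, l, form(k, l))  # delta_{k,l}: {0: 1} when k == l, {} otherwise
```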
\subsection{Projected upper canonical basis} \label{ss projected upper canonical basis}
For some of our descriptions of the projected upper canonical basis, we need an integral form that is different from $\mathbf{T}_\mathbf{A}$. First note that by Corollary \ref{c Schur-Weyl duality upper} and \cite[\textsection 5.2]{Kas2},
\begin{equation} \label{e bT lambda fact} \mathbf{T}[\trianglelefteq \lambda] = K \{c_\mathbf{k}: \text{\rm sh}(\mathbf{k}) \trianglelefteq \lambda\} \text{ and } \mathbf{T}[\lhd \lambda] = K \{c_\mathbf{k}: \text{\rm sh}(\mathbf{k}) \lhd \lambda\}.
\end{equation}
Further, applying $\varsigma_\lambda^\mathbf{T}$ to the upper based module $(\mathbf{T}[\trianglelefteq \lambda],\{c_\mathbf{k}: \text{\rm sh}(\mathbf{k}) \trianglelefteq \lambda\})$ yields the upper based module $(\mathbf{T}[\lambda],\{\varsigma_\lambda^\mathbf{T}(c_\mathbf{k}): \text{\rm sh}(\mathbf{k}) = \lambda\})$ with balanced triple
\begin{equation} \label{e lambda triple} (\mathbb{Q} \otimes_\mathbb{Z} \mathbf{T}_\mathbf{A}[\trianglelefteq \lambda]/ \mathbf{T}_\mathbf{A}[\lhd \lambda], \ \br{\mathscr{L}_\infty[\trianglelefteq \lambda]}/\br{\mathscr{L}_\infty[\lhd \lambda]}, \ \mathscr{L}_\infty[\trianglelefteq \lambda]/ \mathscr{L}_\infty[\lhd \lambda]), \end{equation}
where $\mathbf{T}_\mathbf{A}[\trianglelefteq \lambda]$ and $\mathscr{L}_\infty[\trianglelefteq \lambda]$ (resp. $\mathbf{T}_\mathbf{A}[\lhd \lambda]$ and $\mathscr{L}_\infty[\lhd \lambda]$) are the $\mathbf{A}$- and $K_\infty$-spans of $\{c_\mathbf{k} : \text{\rm sh}(\mathbf{k}) \trianglelefteq \lambda\}$ (resp. $\{c_\mathbf{k} : \text{\rm sh}(\mathbf{k}) \lhd \lambda\}$). Finally, define
\begin{equation} \label{e T lambda definition upper} \begin{array}{ccl}
(\mathbf{T}_\mathbf{A})_\lambda & := & \pi_\lambda^\mathbf{T}(\mathbf{T}_\mathbf{A}[\trianglelefteq \lambda]), \\
\mathscr{L}_{\infty \lambda} & := & \pi_\lambda^\mathbf{T}(\mathscr{L}_\infty[\trianglelefteq \lambda]), \\
\tilde{\mathbf{T}}_\mathbf{A} & := & \bigoplus_\lambda (\mathbf{T}_\mathbf{A})_\lambda.
\end{array} \end{equation}
Our next theorem is similar to results in \cite{GL} and \cite[\textsection 7]{Brundan}; those of \cite{GL} are proved using geometric methods, and those of \cite[\textsection 7]{Brundan} using results of \cite{Kas2, LBook}, as is done here.
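To make the projections concrete in the $r = 3$, $n = 2$ running example: the word $\mathbf{l} = (1,1,2)$ has $\text{\rm sh}(\mathbf{l}) = (3)$, and reading off the expansions displayed in the figure for the projected upper canonical basis,

```latex
\[ \tilde{c}_{112} \;=\; p_{(3)}^{\mathbf{T}}(c_{112})
   \;=\; c_{112} + \frac{[2]}{[3]}\,c_{121} + \frac{1}{[3]}\,c_{211}, \]
```

while the correction terms themselves, of shape $(2,1)$, are fixed by their own projection: $\tilde{c}_{121} = c_{121}$ and $\tilde{c}_{211} = c_{211}$.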
\begin{theorem} \label{t lifted upper canonical basis} Maintain the notation above and let $\mathbf{l} \in [n]^r$ and $\lambda = \text{\rm sh}(\mathbf{l})$. Set $\mathbf{j} = \text{RSK}^{-1}(Z_\lambda, Q(\mathbf{l}))$, where $Z_\lambda$ is the superstandard tableau of shape $\lambda$ (see \textsection\ref{ss type A combinatorics preliminaries}). Let $V_{Q(\mathbf{l})} = \mathcal{U}_q c_\mathbf{j}$ and let $V^{\mathbb{Q} \text{ up}}_{Q(\mathbf{l})}$ be the $\mathbb{Q}A$-form of $V_{Q(\mathbf{l})}$ as in \textsection\ref{ss global canonical bases}. Then the triples in (b) and (c) are balanced and the projected upper canonical basis element $\tilde{c}_{\mathbf{l}}$ has the following descriptions:
\begin{list}{\emph{(\alph{ctr})}} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item the unique $\br{\cdot}$-invariant element of $\tilde{\mathbf{T}}_\mathbf{A}$ congruent to $\mathbf{v}_\mathbf{l} \bmod u^{-1} \mathscr{L}_\infty$,
\item $\tilde{G}(b_{\mathbf{l}})$, where $\tilde{G}$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z}\tilde{\mathbf{T}}_\mathbf{A}) \cap \br{\mathscr{L}_{\infty}}\cap\mathscr{L}_{\infty} \xrightarrow{\cong} \mathscr{L}_{\infty} / u^{-1} \mathscr{L}_{\infty}, \]
\item $\tilde{G}_\lambda(\pi_\lambda(b_{\mathbf{l}}))$, where $\tilde{G}_\lambda$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z} (\mathbf{T}_\mathbf{A})_\lambda) \cap \br{\mathscr{L}_{\infty\lambda}}\cap \mathscr{L}_{\infty\lambda} \xrightarrow{\cong} \mathscr{L}_{\infty\lambda} / u^{-1} \mathscr{L}_{\infty\lambda}, \]
\item the global crystal basis element $G_\lambda(b_{P(\mathbf{l})})$ of $V_{Q(\mathbf{l})}$,
\item $\pi_\lambda^\mathbf{T} (c_{\mathbf{l}})$,
\item $p_\lambda^\mathbf{T} (c_\mathbf{l})$.
\end{list}
Then $\tilde{B} := \{\tilde{c}_\mathbf{k}: \mathbf{k}\in [n]^r\}$ is the \emph{projected upper canonical basis of $\mathbf{T}$}, and $(\mathbf{T}, \tilde{B})$ is an upper based module. Its $\mathcal{U}_q$- and $\mathscr{H}_r$-cells are given by Corollary \ref{c Schur-Weyl duality upper} with $\tilde{c}$ in place of $c$.
\end{theorem}
\begin{proof} The triple in (c) is just the injective image of the triple in \eqref{e lambda triple} under $\iota_\lambda^\mathbf{T}$, hence the triple in (c) is balanced and the elements in (c) and (e) are the same.
As noted earlier, $\pi_\lambda^\mathbf{T} = p_\lambda^\mathbf{T}$, so the elements in (e) and (f) are the same. By \cite[\textsection 5.2]{Kas2} (see the discussion at the end of \textsection\ref{ss global canonical bases}) and Corollary \ref{c Schur-Weyl duality upper}, there is an isomorphism of upper based modules
\[ (\mathbf{T}[\lambda],\{\varsigma_\lambda^\mathbf{T}(c_\mathbf{k}): \text{\rm sh}(\mathbf{k}) = \lambda\}) \cong \bigoplus_{Q \in \text{SYT}(\lambda)}(V_\lambda^Q, B^Q(\lambda)), \quad \varsigma_\lambda^\mathbf{T}(c_\mathbf{k}) \mapsto G_\lambda^{Q(\mathbf{k})}(b^{Q(\mathbf{k})}_{P(\mathbf{k})}), \]
where $V_\lambda^Q, B^Q(\lambda),$ etc. denote copies of $V_\lambda, B(\lambda),$ etc. indexed by $Q$. It follows that the elements in (c) and (d) are the same.

To see that the triple in (b) is balanced and that the elements in (b) and (c) are the same, we must show that the triple in (b) is the direct sum $\bigoplus_{\mu \vdash_n r}(\mathbb{Q} \otimes_\mathbb{Z} (\mathbf{T}_\mathbf{A})_\mu, \br{\mathscr{L}_{\infty\mu}}, \mathscr{L}_{\infty\mu})$ of triples of the form in (c). This amounts to showing the equality of upper crystal bases (we need that this is an equality, not just an isomorphism)
\begin{equation} \label{e L infinity equality} (\mathscr{L}_\infty, \mathscr{B}) = \bigoplus_{\mu \vdash_n r} (\mathscr{L}_{\infty\mu}, \{\pi_\mu(b_\mathbf{k}): \text{\rm sh}(\mathbf{k}) = \mu\}). \end{equation}
This follows from the uniqueness of upper crystal bases and the fact that the restriction of both sides of \eqref{e L infinity equality} to $\{x \in \mathbf{T}^\mu : E_i x = 0 \text{ for all } i \in [n-1]\}$ is $(K_\infty \{c_{\mathbf{k}}: P(\mathbf{k}) = Z_\mu\}, \{b_{\mathbf{k}}: P(\mathbf{k}) = Z_\mu\})$.

Finally, we can show that the element in (a) is the same as the other descriptions. The minimal central idempotent $p_\lambda$ is $\br{\cdot}$-invariant: this follows from the $\br{\cdot}$-invariance of the upper canonical basis of $\mathscr{H}_r$ and the fact that an algebra involution must yield an involution of minimal central idempotents. Thus $p_\lambda^\mathbf{T} (c_\mathbf{l})$ is $\br{\cdot}$-invariant and, by description (b), it satisfies the other requirements of (a). The uniqueness in (a) was not clear a priori, but is now clear because the triple in (b) is balanced. The statements about $(\mathbf{T}, \tilde{B})$ are clear from (e), (f), and Corollary \ref{c Schur-Weyl duality upper}.
\end{proof}

\begin{figure}
\begin{tikzpicture}[xscale = 5, yscale = 3]
\tikzstyle{vertex}=[inner sep=0pt, outer sep=3pt, fill = white]
\tikzstyle{edge} = [draw, thick, ->, black]
\tikzstyle{LabelStyleH} = [text=black, anchor=south]
\tikzstyle{LabelStyleV} = [text=black, anchor=east]
\node[vertex] (111) at (3,0) {$\tilde{c}_{111} = c_{111}$};
\node[vertex] (112) at (2,-1) {$\tilde{c}_{112} = c_{112} + \frac{[2]}{[3]}c_{121} + \frac{1}{[3]}c_{211}$};
\node[vertex] (121) at (2, 0) {$\tilde{c}_{121} = c_{121}$};
\node[vertex] (211) at (2, 1) {$\tilde{c}_{211} = c_{211}$};
\node[vertex] (122) at (1, -1) {$\tilde{c}_{122} = c_{122} + \frac{[2]}{[3]}c_{212} + \frac{1}{[3]}c_{221}$};
\node[vertex] (212) at (1, 0) {$\tilde{c}_{212} = c_{212}$};
\node[vertex] (221) at (1, 1) {$\tilde{c}_{221} = c_{221}$};
\node[vertex] (222) at (0, 0) {$\tilde{c}_{222} = c_{222}$};
\draw[edge] (111) to node[LabelStyleH]{$[3]$} (112);
\draw[edge] (112) to node[LabelStyleH]{$[2]$} (122);
\draw[edge] (121) to node[LabelStyleH]{$ $} (221);
\draw[edge] (211) to node[LabelStyleH]{$ $} (212);
\draw[edge] (122) to node[LabelStyleH]{$ $} (222);
\foreach \y in {-.2}{
\node[vertex] (t111) at (3,0+\y) {\small $\left(\tableau{1 & 1 & 1}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t112) at (2,-1+1.3*\y) {\small $\left(\tableau{1 & 1 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t121) at (2, 0+1.65*\y) {\small $\left(\tableau{1 & 1 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t211) at (2, 1+1.65*\y) {\small $\left(\tableau{1 & 1 \cr 2}, \tableau{1 & 3 \cr 2}\right)$};
\node[vertex] (t122) at (1, -1+1.3*\y) {\small $\left(\tableau{1 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t212) at (1, 0+1.65*\y) {\small $\left(\tableau{1 & 2 \cr 2}, \tableau{1 & 3 \cr 2}\right)$};
\node[vertex] (t221) at (1, 1+1.65*\y) {\small $\left(\tableau{1 & 2 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t222) at (0, 0+\y) {\small $\left(\tableau{2 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
}
\end{tikzpicture}
\caption{The projected upper canonical basis elements of Theorem \ref{t lifted upper canonical basis} for $r = 3$, $n = 2$. The pairs of tableaux are of the form $(P(\mathbf{k}), Q(\mathbf{k}))$. The arrows and their coefficients give the action of $F_1$ on the projected upper canonical basis.}
\label{f liftc_111 example upper}
\end{figure}

\begin{figure}
\begin{tikzpicture}[xscale = 5, yscale = 3]
\tikzstyle{vertex}=[inner sep=0pt, outer sep=3pt, fill = white]
\tikzstyle{edge} = [draw, thick, ->, black]
\tikzstyle{LabelStyleH} = [text=black, anchor=south]
\tikzstyle{LabelStyleV} = [text=black, anchor=east]
\node[vertex] (111) at (2.8,0) {$\tilde{c}'_{111} = c'_{111}$};
\node[vertex] (112) at (2,-1) {$\tilde{c}'_{112} = c'_{112} - \frac{1}{[3]}c'_{211}$};
\node[vertex] (121) at (2, 0) {$\tilde{c}'_{121} = c'_{121} - \frac{[2]}{[3]}c'_{211}$};
\node[vertex] (211) at (2, 1) {$\tilde{c}'_{211} = c'_{211}$};
\node[vertex] (122) at (1, -1) {$\tilde{c}'_{122} = c'_{122} - \frac{1}{[3]}c'_{221}$};
\node[vertex] (212) at (1, 0) {$\tilde{c}'_{212} = c'_{212} - \frac{[2]}{[3]}c'_{221}$};
\node[vertex] (221) at (1, 1) {$\tilde{c}'_{221} = c'_{221}$};
\node[vertex] (222) at (0, 0) {$\tilde{c}'_{222} = c'_{222}$};
\draw[edge] (111) to node[LabelStyleH]{$ $} (211);
\draw[edge] (211) to node[LabelStyleH]{$[2]$} (221);
\draw[edge] (221) to node[LabelStyleH]{$[3]$} (222);
\draw[edge] (121) to node[LabelStyleH]{$ $} (122);
\draw[edge] (112) to node[LabelStyleH]{$ $} (212);
\foreach \y in {-.2}{
\node[vertex] (t111) at (2.8,0+1.45*\y) {\small $\left(\tableau{1 & 1 & 1}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t112) at (2,-1+1.65*\y) {\small $\left(\tableau{1 & 1 \cr 2}, \tableau{1 & 3 \cr 2}\right)$};
\node[vertex] (t121) at (2, 0+1.65*\y) {\small $\left(\tableau{1 & 1 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t211) at (2, 1+1.45*\y) {\small $\left(\tableau{1 & 1 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t122) at (1, -1+1.65*\y) {\small $\left(\tableau{1 & 2 \cr 2}, \tableau{1 & 2 \cr 3}\right)$};
\node[vertex] (t212) at (1, 0+1.65*\y) {\small $\left(\tableau{1 & 2 \cr 2}, \tableau{1 & 3 \cr 2}\right)$};
\node[vertex] (t221) at (1, 1+1.45*\y) {\small $\left(\tableau{1 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
\node[vertex] (t222) at (0, 0+1.45*\y) {\small $\left(\tableau{2 & 2 & 2}, \tableau{1 & 2 & 3}\right)$};
}
\end{tikzpicture}
\caption{The projected lower canonical basis elements of Theorem \ref{t lifted lower canonical basis} for $r = 3$, $n = 2$. The pairs of tableaux are of the form $(P(\mathbf{k}^\dagger), Q(\mathbf{k}^\dagger))$.
The arrows and their coefficients give the action of $F_1$ on the projected lower canonical basis.}
\label{f liftc 111 example lower}
\end{figure}

\subsection{Projected lower canonical basis}
Our equivalent descriptions of the projected lower canonical basis are similar to those for the upper, with some minor changes. By Corollary \ref{c Schur-Weyl duality lower} and \cite[Proposition 27.1.8]{LBook},
\begin{equation} \label{e bT lambda fact lower}
\mathbf{T}[\unrhd \lambda] = K \{c'_{\mathbf{k}} : \text{\rm sh}(\mathbf{k}^\dagger) \unrhd \lambda\} \text{ and } \mathbf{T}[\rhd \lambda] = K \{c'_{\mathbf{k}} : \text{\rm sh}(\mathbf{k}^\dagger) \rhd \lambda\}.
\end{equation}
Let $\mathbf{T}_\mathbf{A}[\unrhd \lambda]$ and $\mathscr{L}_0[\unrhd \lambda]$ be the $\mathbf{A}$- and $K_0$-span of $\{c'_{\mathbf{k}} : \text{\rm sh}(\mathbf{k}^\dagger) \unrhd \lambda\}$ and define
\begin{equation} \label{e T lambda definition lower}
\begin{array}{ccl}
(\mathbf{T}_\mathbf{A})'_\lambda & := & \pi_\lambda^\mathbf{T}(\mathbf{T}_\mathbf{A}[\unrhd \lambda]), \\
\mathscr{L}_{0 \lambda} & := & \pi_\lambda^\mathbf{T}(\mathscr{L}_0[\unrhd \lambda]), \\
\tilde{\mathbf{T}}'_\mathbf{A} & := & \bigoplus_\lambda (\mathbf{T}_\mathbf{A})'_\lambda.
\end{array}
\end{equation}
\begin{theorem} \label{t lifted lower canonical basis}
Maintain the notation above and let $\mathbf{l} \in [n]^r$ and $\lambda = \text{\rm sh}(\mathbf{l}^\dagger)$. Set $\mathbf{j} = \text{\rm RSK}^{-1}(Z_\lambda, Q(\mathbf{l}^\dagger))$. Let $V_{Q(\mathbf{l}^\dagger)} = \mathcal{U}_q\, \pi_\lambda^\mathbf{T}(c'_{\mathbf{j}})$ and let $V^{\mathbb{Q}\,\text{low}}_{Q(\mathbf{l}^\dagger)}$ be the $\mathbb{Q}\mathbf{A}$-form of $V_{Q(\mathbf{l}^\dagger)}$ as in \textsection\ref{ss global canonical bases}. Then the triples in (b) and (c) are balanced and the projected lower canonical basis element $\tilde{c}'_{\mathbf{l}}$ has the following descriptions:
\begin{list}{\emph{(\alph{ctr})}}{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item the unique $\overline{\,\cdot\,}$-invariant element of $\tilde{\mathbf{T}}'_\mathbf{A}$ congruent to $\mathbf{v}_{\mathbf{l}} \bmod u \mathscr{L}_0$,
\item $\tilde{G}'(b_{\mathbf{l}})$, where $\tilde{G}'$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z} \tilde{\mathbf{T}}'_\mathbf{A}) \cap \mathscr{L}_{0} \cap \overline{\mathscr{L}_{0}} \xrightarrow{\cong} \mathscr{L}_{0} / u \mathscr{L}_{0}, \]
\item $\tilde{G}'_\lambda(\pi_\lambda(b_{\mathbf{l}}))$, where $\tilde{G}'_\lambda$ is the inverse of the canonical isomorphism
\[(\mathbb{Q} \otimes_\mathbb{Z} (\mathbf{T}_\mathbf{A})'_\lambda) \cap \mathscr{L}_{0\lambda} \cap \overline{\mathscr{L}_{0\lambda}} \xrightarrow{\cong} \mathscr{L}_{0\lambda} / u \mathscr{L}_{0\lambda}, \]
\item the global crystal basis element $G'_\lambda(b_{P(\mathbf{l}^\dagger)})$ of $V_{Q(\mathbf{l}^\dagger)}$,
\item $\pi_\lambda^\mathbf{T}(c'_{\mathbf{l}})$,
\item $p_\lambda^\mathbf{T}(c'_{\mathbf{l}})$.
\end{list}
Then $\tilde{B}' := \{\tilde{c}'_{\mathbf{k}} : \mathbf{k} \in [n]^r\}$ is the \emph{projected lower canonical basis of $\mathbf{T}$} and $(\mathbf{T}, \tilde{B}')$ is a lower based module. Its $\mathcal{U}_q$- and $\mathscr{H}_r$-cells are given by Corollary \ref{c Schur-Weyl duality lower} with $\tilde{c}'$ in place of $c'$.
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{t lifted upper canonical basis}, using results of \cite[Chapter 27]{LBook} in place of \cite[\textsection 5.2]{Kas2}.
Slightly more care is needed to prove that
\begin{equation} \label{e L 0 equality}
\mathscr{L}_0 = \bigoplus_{\mu \vdash_n r} \mathscr{L}_{0\mu},
\end{equation}
since $\pi_\lambda^\mathbf{T}(c_{\mathbf{j}}) = c_{\mathbf{j}}$, whereas $\pi_\lambda^\mathbf{T}(c'_{\mathbf{j}}) \neq c'_{\mathbf{j}}$ in general. However, uniqueness of lower crystal lattices is still enough: uniqueness means that both sides of \eqref{e L 0 equality} are determined by their intersections with $\mathbf{T}^\mu_{\text{hw}} := \{x \in \mathbf{T}^\mu : E_i x = 0 \text{ for all } i \in [n-1]\}$ for all $\mu \vdash_n r$. Since $\varsigma_\mu^\mathbf{T}$ restricts to an isomorphism $\mathbf{T}^\mu_{\text{hw}} \xrightarrow{\cong} \mathbf{T}[\mu]^\mu$ and
\begin{equation}
\varsigma_\mu^\mathbf{T}(\mathscr{L}_0 \cap \mathbf{T}^\mu_{\text{hw}}) = K_0 \{\varsigma_\mu^\mathbf{T}(c'_{\mathbf{k}}) : P(\mathbf{k}^\dagger) = Z_\mu\} = \varsigma_\mu^\mathbf{T}(\mathscr{L}_{0\mu} \cap \mathbf{T}^\mu_{\text{hw}}),
\end{equation}
we have $\mathscr{L}_0 \cap \mathbf{T}^\mu_{\text{hw}} = \mathscr{L}_{0\mu} \cap \mathbf{T}^\mu_{\text{hw}}$. The equality \eqref{e L 0 equality} follows.
\end{proof}
The most interesting part of Theorems \ref{t lifted upper canonical basis} and \ref{t lifted lower canonical basis} for us is that, though the integral forms needed for the upper (resp.\ lower) canonical basis and the projected upper (resp.\ lower) canonical basis differ, the upper (resp.\ lower) crystal lattices are the same. This has the following consequence.
\begin{corollary} \label{c transition matrix projected to canbas}
The transition matrix from the projected upper (resp.\ lower) canonical basis to the upper (resp.\ lower) canonical basis is unitriangular and is the identity at $u = 0$ and $u = \infty$.
Precisely,
\[
\begin{array}{ccl}
\tilde{c}_{\mathbf{k}} &=& c_{\mathbf{k}} + \sum_{\text{\rm sh}(\mathbf{k}') \lhd \text{\rm sh}(\mathbf{k})} t_{\mathbf{k}'\mathbf{k}}\, c_{\mathbf{k}'}, \\
\tilde{c}'_{\mathbf{k}} &=& c'_{\mathbf{k}} + \sum_{\text{\rm sh}(\mathbf{k}'^\dagger) \rhd \text{\rm sh}(\mathbf{k}^\dagger)} t'_{\mathbf{k}'\mathbf{k}}\, c'_{\mathbf{k}'},
\end{array}
\]
where the coefficients $t_{\mathbf{k}'\mathbf{k}}, t'_{\mathbf{k}'\mathbf{k}}$ are $\overline{\,\cdot\,}$-invariant and belong to $u K_0 \cap u^{-1} K_\infty$.
\end{corollary}
\begin{proof}
The constraints on dominance order follow from \eqref{e bT lambda fact} and \eqref{e bT lambda fact lower}, using the expressions (e) of Theorems \ref{t lifted upper canonical basis} and \ref{t lifted lower canonical basis} for the projected canonical basis elements. By the expression (a) of Theorem \ref{t lifted upper canonical basis}, $c_{\mathbf{k}} \equiv \tilde{c}_{\mathbf{k}} \bmod u^{-1} \mathscr{L}_\infty$. Thus $t_{\mathbf{k}'\mathbf{k}} \in u^{-1} K_\infty$. Since the upper canonical basis and the projected upper canonical basis are $\overline{\,\cdot\,}$-invariant, so are the entries of the transition matrix between them. This further implies $t_{\mathbf{k}'\mathbf{k}} \in u K_0 \cap u^{-1} K_\infty$. The proof for the lower canonical basis is similar.
\end{proof}

\subsection{Projected canonical bases are dual under the bilinear form}
\begin{proposition} \label{p dual lifted bases}
The projected upper and lower canonical bases of $\mathbf{T}$ are dual under $(\cdot, \cdot)$: we have $(\tilde{c}_{\mathbf{k}}, \tilde{c}'_{\mathbf{l}^\dagger}) = \delta_{\mathbf{k}, \mathbf{l}}$ for all $\mathbf{k}, \mathbf{l} \in [n]^r$.
\end{proposition}
\begin{proof}
Let $\mathbf{k}, \mathbf{l} \in [n]^r$ and $\lambda = \text{\rm sh}(\mathbf{k})$, $\mu = \text{\rm sh}(\mathbf{l})$.
If $\lambda = \mu$, then by the unitriangularities established in Corollary \ref{c transition matrix projected to canbas}, together with the fact that the upper canonical basis is dual to the lower canonical basis, we have $(\tilde{c}_{\mathbf{k}}, \tilde{c}'_{\mathbf{l}^\dagger}) = \delta_{\mathbf{k}, \mathbf{l}}$. In the case $\lambda \neq \mu$, we use Proposition-Definition \ref{p dual bases} (ii) to conclude
\begin{equation}
(\tilde{c}_{\mathbf{k}}, \tilde{c}'_{\mathbf{l}^\dagger}) = (\tilde{c}_{\mathbf{k}}\, p_\lambda, \tilde{c}'_{\mathbf{l}^\dagger}\, p_\mu) = (\tilde{c}_{\mathbf{k}}, \tilde{c}'_{\mathbf{l}^\dagger}\, p_\mu\, p_\lambda^{\dagger^{\text{op}}}) = (\tilde{c}_{\mathbf{k}}, \tilde{c}'_{\mathbf{l}^\dagger}\, p_\mu\, p_\lambda) = 0.
\end{equation}
\end{proof}

\section{Consequences for the canonical bases of $M_\lambda$}
\label{s Consequences for the canonical bases of M_lambda}
We use the results of the previous section to understand projected canonical bases of $\mathscr{H}_r$ and the relation between the upper and lower canonical bases of $M_\lambda$.
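Before proceeding, we record a hand check of Corollary \ref{c transition matrix projected to canbas} in the small case of Figure \ref{f liftc_111 example upper} ($r = 3$, $n = 2$). The expansion below is taken from that figure; the only added assumption is that the bracket notation denotes the balanced quantum integers $[m] = (u^m - u^{-m})/(u - u^{-1})$.

```latex
% Hand check of the corollary for r = 3, n = 2, assuming the balanced
% quantum integers [m] = (u^m - u^{-m})/(u - u^{-1}).
% Here sh(112) = (3), while sh(121) = sh(211) = (2,1) is strictly dominated
% by (3), in agreement with the constraint on the sum:
\[
\tilde{c}_{112} \;=\; c_{112} + \frac{[2]}{[3]}\, c_{121} + \frac{1}{[3]}\, c_{211},
\qquad
\frac{[2]}{[3]} \;=\; \frac{u + u^{-1}}{u^{2} + 1 + u^{-2}}.
\]
% The coefficient [2]/[3] is fixed by u <-> u^{-1} and tends to 0 both as
% u -> 0 and as u -> infinity, hence lies in u K_0 \cap u^{-1} K_\infty,
% exactly as the corollary asserts; similarly for 1/[3].
```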
We will come across several transition matrices whose entries lie in $K_0$ and which are the identity at $u = 0$. For an element $f \in u K_0$, define the \emph{leading coefficient} of $f$, denoted $\mu(f)$, to be the coefficient of $u$ in the power series expansion of $f$. It turns out that the leading coefficients of many of these transition matrix entries coincide with the $\mathcal{S}_r$-graph edge weights $\mu(v,w)$.

\subsection{Projected canonical bases of $\mathscr{H}_r$}
\label{ss projected canonical bases of the Hecke algebra}
Here we define projected canonical bases of $\mathscr{H}_r$, which are essentially a special case of the projected canonical bases of the previous section. For each $w \in \mathcal{S}_r$, the projected upper (resp.\ lower) canonical basis element $\tilde{C}_w \in \mathscr{H}_r$ (resp.\ $\tilde{C}'_w$) is defined to be $C_w p_\lambda$ (resp.\ $C'_w p_\lambda$), where $\lambda = \text{\rm sh}(w)$ (resp.\ $\lambda = \text{\rm sh}(w^\dagger)$).
\begin{corollary} \label{c lift at u = 0 in H}
The transition matrix $\tilde{T} = (\tilde{T}_{w'w})_{w',w \in \mathcal{S}_r}$ (resp.\ $\tilde{T}' = (\tilde{T}'_{w'w})_{w',w \in \mathcal{S}_r}$) expressing the projected upper (resp.\ lower) canonical basis of $\mathscr{H}_r$ in terms of the upper (resp.\ lower) canonical basis of $\mathscr{H}_r$
\begin{list}{\emph{(\roman{ctr})}}{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item is unitriangular: $\tilde{C}_w = C_w + \sum_{\text{\rm sh}(w') \lhd \text{\rm sh}(w)} \tilde{T}_{w'w} C_{w'}$ (resp.\ $\tilde{C}'_w = C'_w + \sum_{\text{\rm sh}(w'^\dagger) \rhd \text{\rm sh}(w^\dagger)} \tilde{T}'_{w'w} C'_{w'}$),
\item has entries that are $\overline{\,\cdot\,}$-invariant and belong to $K_0 \cap K_\infty$,
\item is the identity at $u = 0$ and $u = \infty$,
\item satisfies $\mu(\tilde{T}_{w'w}) = \mu(w',w)$ (resp.\ $\mu(\tilde{T}'_{w'w}) = -\mu(w',w)$) for $w', w$ such that $P(w') \neq P(w)$ and $R(w') \setminus R(w) \neq \emptyset$.
\end{list}
\end{corollary}
\begin{proof}
Choose $n = r$ and set $\varepsilon = \epsilon_1 + \dots + \epsilon_r$.
Then by Proposition \ref{p weight space equals induced}, Theorem \ref{t Schur-Weyl duality lower} (iv), and Theorem \ref{t Schur-Weyl duality upper} (iv), $\mathscr{H}_r \cong \mathbf{T}_\mathbf{A}^{\varepsilon}$ as right $\mathscr{H}_r$-modules, and under this isomorphism canonical bases are sent to canonical bases and projected canonical bases are sent to projected canonical bases. Thus (i)--(iii) are a special case of Corollary \ref{c transition matrix projected to canbas}.

To prove (iv), we compute $\tilde{C}_w C_s$, for $w \in \mathcal{S}_r$ such that $s \notin R(w)$, in terms of the upper canonical basis in two different ways:
\begin{equation}
\tilde{C}_w C_s = \Big(\sum_{w'} \tilde{T}_{w'w} C_{w'}\Big) C_s = -[2] \sum_{\{w' : s \in R(w')\}} \tilde{T}_{w'w} C_{w'} + \sum_{\substack{w', w'' \\ s \notin R(w'),\ s \in R(w'')}} \tilde{T}_{w'w}\, \mu(w'', w')\, C_{w''}.
\end{equation}
On the other hand, since $\mathbf{A}\{\tilde{C}_{w'} : P(w') = P(w)\}$ and the cellular subquotient $\mathbf{A}\Gamma_{P(w)}$ are isomorphic as modules with basis,
\begin{equation}
\tilde{C}_w C_s = \sum_{\substack{w' :\, s \in R(w'), \\ P(w') = P(w)}} \mu(w',w)\, \tilde{C}_{w'} = \sum_{\substack{w' :\, s \in R(w'), \\ P(w') = P(w)}} \mu(w',w) \sum_{w''} \tilde{T}_{w''w'} C_{w''}.
\end{equation}
Then for any $w''$ such that $s \in R(w'')$ and $P(w'') \neq P(w)$, equating coefficients of $C_{w''}$ yields
\[
0 = \sum_{\substack{w' :\, s \in R(w'), \\ P(w') = P(w)}} \tilde{T}_{w''w'}\, \mu(w',w) + [2]\, \tilde{T}_{w''w} - \sum_{\{w' : s \notin R(w')\}} \mu(w'',w')\, \tilde{T}_{w'w} \equiv \mu(\tilde{T}_{w''w}) - \mu(w'',w),
\]
where the equivalence is mod $u K_0$ and uses (iii). This proves (iv) for the upper canonical basis. The proof for the lower canonical basis is similar.
\end{proof}
\begin{figure}
{\tiny\[ \begin{array}{ccccccccccccccc}
&\tilde{C}_{1234} & \tilde{C}_{1324} & \tilde{C}_{2134} & \tilde{C}_{1243} & \tilde{C}_{1423} & \tilde{C}_{1342} & \tilde{C}_{2314} & \tilde{C}_{3124} & \tilde{C}_{2143} & \tilde{C}_{2413} & \tilde{C}_{4123} & \tilde{C}_{2341} & \tilde{C}_{3142} & \tilde{C}_{3412} \\
C_{1234} & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{1324} & \frac{[2]^2}{[4]}& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{2134} & \frac{[3]}{[4]}& 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{1243} & \frac{[3]}{[4]}& 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{1423} & \frac{[2]}{[4]}& 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{1342} & \frac{[2]}{[4]}& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{2314} & \frac{[2]}{[4]}& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{3124} & \frac{[2]}{[4]}& 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
C_{2143} & \frac{[2]^3}{[3][4]}& 0 &\frac{1}{[2]}&\frac{1}{[2]}& 0 & 0 & 0 & 0 & 1 & 0 &\frac{1}{[2]}&\frac{1}{[2]}& 0 & 0 \\
C_{2413} & \frac{[2]^2}{[3][4]}& 0 & 0 & 0 &\frac{1}{[2]}& 0 &\frac{1}{[2]}& 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
C_{4123} & \frac{1}{[4]}& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
C_{2341} & \frac{1}{[4]}& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
C_{3142} & \frac{[2]^2}{[3][4]}& 0 & 0 & 0 & 0 &\frac{1}{[2]}& 0 &\frac{1}{[2]}& 0 & 0 & 0 & 0 & 1 & 0 \\
C_{3412} & \frac{[2]}{[3][4]}&\frac{1}{[2]}& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
C_{1432} & \frac{[2]^2}{[3][4]}&\frac{-1}{[2][4]}& 0 &-\frac{[2]}{[4]}&\frac{[3]}{[4]}&\frac{[3]}{[4]}& 0 & 0 & 0 & 0 & 0 & 0 & 0 &\frac{1}{[2]}\\
C_{3214} & \frac{[2]^2}{[3][4]}&\frac{-1}{[2][4]}&-\frac{[2]}{[4]}& 0 & 0 & 0 &\frac{[3]}{[4]}&\frac{[3]}{[4]}& 0 & 0 & 0 & 0 & 0 &\frac{1}{[2]}\\
C_{2431} & \frac{[2]}{[3][4]}& 0 & 0 &-\frac{1}{[4]}&\frac{[3]}{[2][4]}& 0 &\frac{-1}{[2][4]}& 0 & 0 &\frac{1}{[2]}& 0 &\frac{[3]}{[4]}& 0 & 0 \\
C_{4132} & \frac{[2]}{[3][4]}& 0 & 0 &-\frac{1}{[4]}& 0 &\frac{[3]}{[2][4]}& 0 &\frac{-1}{[2][4]}& 0 & 0 &\frac{[3]}{[4]}& 0 &\frac{1}{[2]}& 0 \\
C_{4213} & \frac{[2]}{[3][4]}& 0 &-\frac{1}{[4]}& 0 &\frac{-1}{[2][4]}& 0 &\frac{[3]}{[2][4]}& 0 & 0 &\frac{1}{[2]}&\frac{[3]}{[4]}& 0 & 0 & 0 \\
C_{3241} & \frac{[2]}{[3][4]}& 0 &-\frac{1}{[4]}& 0 & 0 &\frac{-1}{[2][4]}& 0 &\frac{[3]}{[2][4]}& 0 & 0 & 0 &\frac{[3]}{[4]}&\frac{1}{[2]}& 0 \\
C_{4312} & \frac{1}{[3][4]}&\frac{[3]}{[2][4]}& 0 & 0 &-\frac{1}{[4]}& 0 & 0 &-\frac{1}{[4]}& 0 & 0 &\frac{[2]}{[4]}& 0 & 0 &\frac{1}{[2]}\\
C_{3421} & \frac{1}{[3][4]}&\frac{[3]}{[2][4]}& 0 & 0 & 0 &-\frac{1}{[4]}&-\frac{1}{[4]}& 0 & 0 & 0 & 0 &\frac{[2]}{[4]}& 0 &\frac{1}{[2]}\\
C_{4231} & \frac{1}{[3][4]}& 0 &\frac{-1}{[2][4]}&\frac{-1}{[2][4]}& 0 & 0 & 0 & 0 &\frac{1}{[2]}& 0 &\frac{[3]}{[2][4]}&\frac{[3]}{[2][4]}& 0 & 0 \\
C_{4321} & \frac{1}{[2][3][4]}&\frac{1}{[4]}& 0 & 0 &\frac{-1}{[2][4]}&\frac{-1}{[2][4]}&\frac{-1}{[2][4]}&\frac{-1}{[2][4]}&\frac{1}{[3]}&\frac{-1}{[2][3]}&\frac{1}{[4]}&\frac{1}{[4]}&\frac{-1}{[2][3]}&\frac{1}{[3]}\\
\end{array}\]}
\caption{Some of the projected upper canonical basis elements $\tilde{C}_w$ in terms of the upper canonical basis $\{C_v\}_{v \in S_4}$.}
\label{f liftC example}
\end{figure}
\begin{remark}
It is sensible to ask whether Corollary \ref{c lift at u = 0 in H} and the other results in this section hold for other finite Coxeter groups $W$ in place of $\ensuremath{\mathcal{S}}_r$ (perhaps with a slight modification if right cells do not correspond to $\ensuremath{\mathbb{C}}(\ensuremath{u}) \ensuremath{\otimes}_\mathbf{A} \ensuremath{\mathscr{H}}(W)$-irreducibles). We have not investigated this, but note that our proof will not extend easily, as it depends on Schur-Weyl duality.
\end{remark}
\subsection{Seminormal bases}
\label{ss seminormal bases}
We wish to use the results about projected canonical bases to understand the transition matrix between the lower canonical basis of $M_\lambda$ and the upper canonical basis of $M_\lambda$. To do this, we relate both to seminormal bases in the sense of \cite{RamSeminormal}.
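The entries of the figure above are rational functions in the quantities $[n]$, presumably the balanced quantum integers $[n] = \ensuremath{u}^{n-1} + \ensuremath{u}^{n-3} + \cdots + \ensuremath{u}^{1-n}$. A minimal Python sketch (ours, not part of the paper's Magma computations) that checks, with exact rational arithmetic at a sample point, two identities that recur when manipulating such entries:

```python
from fractions import Fraction

def qint(n, u):
    """Balanced quantum integer [n] = u**(n-1) + u**(n-3) + ... + u**(1-n)."""
    return sum(u**(n - 1 - 2*k) for k in range(n))

# exact checks at the rational sample point u = 3/2
u = Fraction(3, 2)
assert qint(2, u) ** 2 == qint(3, u) + 1                    # [2]^2 = [3] + 1
assert qint(2, u) * qint(3, u) == qint(2, u) + qint(4, u)   # [2][3] = [2] + [4]
```

Checking a polynomial identity at a single point is of course only evidence, but with exact `Fraction` arithmetic it is a cheap sanity test for hand manipulations of such entries.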
The transition matrices between the canonical bases of $M_\lambda$ and their corresponding seminormal bases also appear to be quite interesting---see the positivity conjectures in the next subsection.
\begin{definition}\label{d seminormal}
Given a chain of split semisimple $\ensuremath{K}$-algebras $\ensuremath{K} \cong H_1 \subseteq H_2 \subseteq \dots \subseteq H_r$ and an $H_r$-irreducible $N_\lambda$, a \emph{seminormal basis} of $N_\lambda$ is a $\ensuremath{K}$-basis $B$ of $N_\lambda$ compatible with the restrictions in the following sense: there is a partition $B = B_{\mu^1} \sqcup \dots \sqcup B_{\mu^k}$ such that if $N_{\mu^i} = \ensuremath{K} B_{\mu^i}$, then $N_\lambda = N_{\mu^1} \oplus \dots \oplus N_{\mu^k}$ as $H_{r-1}$-modules. Further, there is a partition of each $B_{\mu^i}$ that gives rise to a decomposition of $N_{\mu^i}$ into $H_{r-2}$-irreducibles, and so on, all the way down to $H_1$.
\end{definition}
Note that if the restriction of an $H_i$-irreducible to $H_{i-1}$ is multiplicity-free, then a seminormal basis is unique up to a diagonal transformation. To construct seminormal bases corresponding to the upper and lower canonical bases of $M_\lambda$, first define, for any $J \subseteq S$, $(\tilde{C}_Q)^J$ to be the projection of $C_Q$ onto the irreducible $\ensuremath{K} \ensuremath{\mathscr{H}}_J$-module corresponding to the right cell of $\mathrm{Res}_{\ensuremath{K} \ensuremath{\mathscr{H}}_J} \ensuremath{K}\Gamma_{\lambda}$ containing $C_Q$. Define $(\tilde{C}^{\prime}_Q)^J$ similarly.
If $J = \{s_1, \dots, s_{r-2}\}$, then by \cite[\textsection 4]{B0}, $(\tilde{C}_Q)^J$ (resp. $(\tilde{C}^{\prime}_Q)^J$) is equal to $C_Q p_\mu$ (resp. $\ensuremath{C^{\prime}}_Q p_\mu$), where $\mu = \text{\rm sh}(Q|_{[r-1]})$. Here, for a tableau $Q$ and a set $Z \subseteq \ensuremath{\mathbb{Z}}$, $Q|_Z$ denotes the subtableau of $Q$ obtained by removing the entries not in $Z$. Define a total order $\trianglelefteq$ on SYT$(\lambda)$ by declaring $Q' \trianglelefteq Q$ if, for some $k$, the numbers $k+1, \dots, r$ are in the same positions in $Q'$ and $Q$ and $\text{\rm sh}(Q'|_{[k-1]}) \trianglerighteq \text{\rm sh}(Q|_{[k-1]})$; for $Q' \neq Q$ this $k$ is unique and we refer to it as $k(Q',Q)$. This total order is the reverse of the last letter order defined in \cite{GM}.
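The total order on SYT$(\lambda)$ just defined is easy to make concrete. The following Python sketch (helper names are ours; tableaux are tuples of rows) finds the largest entry whose position differs and then compares the restricted shapes in dominance order:

```python
def positions(Q):
    """Map each entry of a standard Young tableau (tuple of rows) to its cell."""
    return {v: (i, j) for i, row in enumerate(Q) for j, v in enumerate(row)}

def restricted_shape(Q, m):
    """Row lengths of the subtableau Q|_[m] with entries <= m."""
    return [sum(1 for v in row if v <= m) for row in Q]

def dominates(lam, mu):
    """Dominance order on partitions: every partial sum of lam is >= that of mu."""
    n = max(len(lam), len(mu))
    lam, mu = lam + [0] * n, mu + [0] * n
    sl = sm = 0
    for a, b in zip(lam, mu):
        sl, sm = sl + a, sm + b
        if sl < sm:
            return False
    return True

def tleq(Qp, Q):
    """Qp <= Q in the total order: entries k+1..r agree in position and
    sh(Qp|[k-1]) dominates sh(Q|[k-1])."""
    if Qp == Q:
        return True
    p, q = positions(Qp), positions(Q)
    r = len(p)
    k = max(v for v in range(1, r + 1) if p[v] != q[v])
    return dominates(restricted_shape(Qp, k - 1), restricted_shape(Q, k - 1))

# for shape (2,2): the row-reading tableau is smaller than the other one
Q1, Q2 = ((1, 2), (3, 4)), ((1, 3), (2, 4))
assert tleq(Q1, Q2) and not tleq(Q2, Q1)
```

The small check at the end matches the convention above: here $k(Q_1, Q_2) = 3$, and $(2) \trianglerighteq (1,1)$ in dominance order.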
\begin{lemma} \label{l lift transition matrix}
For $J = \{s_1, \dots, s_{r-2}\}$, the transition matrix expressing the projected basis $\{(\tilde{C}_Q)^J : Q \in \text{SYT}(\lambda)\}$ in terms of the upper canonical basis of $M_\lambda$ is lower-unitriangular, is the identity at $\ensuremath{u} = 0$ and $\ensuremath{u} = \infty$, and has $\overline{\cdot}$-invariant entries (i.e. $(\tilde{C}_Q)^J = C_Q + \sum_{Q' \vartriangleright Q} m_{Q' Q} C_{Q'}$, $m_{Q' Q} \in \ensuremath{u}\ensuremath{K}_0$,\ $\overline{m_{Q' Q}} = m_{Q' Q}$).

The transition matrix expressing the projected basis $\{(\tilde{C}^{\prime}_Q)^J : Q \in \text{SYT}(\lambda)\}$ in terms of the lower canonical basis of $M_\lambda$ satisfies the same properties, except that it is upper-unitriangular instead of lower-unitriangular (i.e. $(\tilde{C}^{\prime}_Q)^J = \ensuremath{C^{\prime}}_Q + \sum_{Q' \vartriangleleft Q} m'_{Q' Q} \ensuremath{C^{\prime}}_{Q'}$, $m'_{Q' Q} \in \ensuremath{u}\ensuremath{K}_0$,\ $\overline{m'_{Q' Q}} = m'_{Q' Q}$).
\end{lemma}
\begin{proof}
This follows from Corollary \ref{c lift at u = 0 in H} and Proposition \ref{p restrict Wgraph}: identify $M_\lambda^\mathbf{A}$ and its upper canonical basis with $\mathbf{A}\Gamma_{Z_\lambda^*}$, where $Z_\lambda^*$ is the standard tableau with $1, 2, \dots, \lambda_1$ in the first row, $\lambda_1 + 1, \dots, \lambda_1 + \lambda_2$ in the second row, etc. Then $\mathrm{Res}_{\ensuremath{\mathscr{H}}_J} \mathbf{A}\Gamma_{Z_\lambda^*}$ has right cells labeled by the result of uninserting an outer corner of $Z_\lambda^*$ (see \cite[\textsection 4]{B0}).
The key point is that all uninsertions of $Z_\lambda^*$ result in the entry $\lambda_1$ being kicked out, and therefore, by Proposition \ref{p restrict Wgraph}, the $\ensuremath{\mathscr{H}}_J$-module with basis $\mathrm{Res}_{\ensuremath{\mathscr{H}}_J} \mathbf{A}\Gamma_{Z_\lambda^*} \subseteq \mathbf{A} \{ \ensuremath{C^{\prime}}_{xv} : v \in (\ensuremath{\mathcal{S}}_r)_J \}$ is isomorphic to a cellular subquotient of $\Gamma_{\ensuremath{\mathcal{S}}_{r-1}}$ (here $x = s_{\lambda_1} s_{\lambda_1 +1} \cdots s_{r-1}$). We can then apply Corollary \ref{c lift at u = 0 in H} with the $r$ there set to $r-1$. Lower-unitriangularity follows from Corollary \ref{t right cell partial order}. The proof for the lower canonical basis is similar.
\end{proof}
Set $J_i = \{s_1, \dots, s_{i-1}\}$. We now define the \emph{upper seminormal basis} to be $\{\mathfrak{gt}_Q : Q \in \text{SYT}(\lambda)\}$, where $\mathfrak{gt}_Q$ is the result of applying the construction $C_Q \rightsquigarrow (\tilde{C}_Q)^J$ first with $J = J_{r-1}$, then with $J = J_{r-2}$, and so on, finishing with $J = J_1 = \emptyset$. The \emph{lower seminormal basis} $\{\mathfrak{gt}'_Q : Q \in \text{SYT}(\lambda)\}$ is defined similarly. These bases are seminormal with respect to the chain $\ensuremath{\mathscr{H}}_{J_1} \subseteq \cdots \subseteq \ensuremath{\mathscr{H}}_{J_{r-1}} \subseteq \ensuremath{\mathscr{H}}_r$.
\begin{proposition} \label{p gt to canonical transition}
The transition matrix $T(\lambda) = (T_{Q' Q})_{Q', Q \in \text{SYT}(\lambda)}$ (resp.
$T'(\lambda) = (T'_{Q' Q})_{Q', Q \in \text{SYT}(\lambda)}$) expressing the upper (resp. lower) seminormal basis of $M_\lambda$ in terms of the upper (resp. lower) canonical basis of $M_\lambda$ and $T(\lambda)^{-1}$ (resp. $T'(\lambda)^{-1}$)
\begin{list}{\emph{(\roman{ctr})}}
{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item are lower-unitriangular: $\mathfrak{gt}_Q = C_Q + \sum_{Q' \vartriangleright Q} T_{Q' Q} C_{Q'}$ and similarly for $T(\lambda)^{-1}$
\noindent(resp. upper-unitriangular: $\mathfrak{gt}'_Q = \ensuremath{C^{\prime}}_Q + \sum_{Q' \vartriangleleft Q} T'_{Q' Q} \ensuremath{C^{\prime}}_{Q'}$ and similarly for $T'(\lambda)^{-1}$),
\item have entries that are $\overline{\cdot}$-invariant and belong to $\ensuremath{K}_0 \cap \ensuremath{K}_\infty$,
\item are the identity at $\ensuremath{u} = 0$ and $\ensuremath{u} = \infty$,
\item satisfy: $\mu(T_{Q'Q}) = \mu(Q',Q)$ and $\mu(T^{-1}_{Q'Q}) = -\mu(Q',Q)$ for $Q',Q$ such that $Q' \vartriangleright Q$ and $(R(C_{Q'}) \setminus R(C_{Q})) \cap J_{k(Q',Q)-1} \neq \emptyset$
\noindent(resp. $\mu(T'_{Q'Q}) = -\mu(Q',Q)$ and $\mu(T'^{-1}_{Q'Q}) = \mu(Q',Q)$ for $Q',Q$ such that $Q' \vartriangleleft Q$ and $(R(\ensuremath{C^{\prime}}_{Q'}) \setminus R(\ensuremath{C^{\prime}}_{Q})) \cap J_{k(Q',Q)-1} \neq \emptyset$).
\end{list}
\end{proposition}
\begin{proof}
The transition matrix $T(\lambda)$ is the product $\tilde{M}^{J_{r-1}} \tilde{M}^{J_{r-2}} \cdots \tilde{M}^{J_1}$, where $\tilde{M}^{J_i}$ is a block diagonal matrix, with each block of the form described in Lemma \ref{l lift transition matrix} with the $J$ of the lemma equal to $J_i$. Properties (i)--(iii) of $T(\lambda)$ then follow because they are preserved under matrix multiplication and under diagonally joining blocks. To prove (iv), we apply the following easy claim
\refstepcounter{equation}
\begin{enumerate}[label={(\theequation)}]
\label{en matrix product mu}
\item if $M^1, \dots, M^l$ are matrices satisfying (iii) and $M = \prod_{k=1}^l M^k$, then $\mu(M_{ij}) = \sum_{k} \mu(M^k_{ij})$ for $i \neq j$.
\end{enumerate}
to obtain $\mu(T_{Q'Q}) = \sum_{k=1}^{r-1} \mu(\tilde{M}^{J_k}_{Q'Q})$. If $Q' \vartriangleright Q$, then there is exactly one $k$ for which $\tilde{M}^{J_{k-1}}_{Q'Q}$ is non-zero; this $k$ is exactly $k(Q',Q)$. Further, by Corollary \ref{c lift at u = 0 in H} (iv) and the proof of Lemma \ref{l lift transition matrix}, $\mu(\tilde{M}^{J_{k(Q',Q)-1}}_{Q'Q}) = \mu(Q',Q)$ if $(R(C_{Q'}) \setminus R(C_{Q})) \cap J_{k(Q',Q)-1} \neq \emptyset$. This proves (iv) for $T(\lambda)$. The statements for $T'(\lambda)$ are proved similarly, and the statements for $T(\lambda)^{-1}$ and $T'(\lambda)^{-1}$ follow easily.
\end{proof}
\begin{example}
Continuing Example \ref{ex lambda31}, we give transition matrices between the various bases defined above. The convention is that the columns of the matrix express the basis element at the top of the column in terms of the row labels. The matrices $D(\lambda)$ and $S(\lambda)$ are defined in Theorem \ref{t transition C' to C} and its proof (below).
\setlength{\cellsize}{8pt}
\[{\tiny \begin{array}{cccc}
& \mathfrak{gt}_{Q_4} & \mathfrak{gt}_{Q_3} & \mathfrak{gt}_{Q_2} \\
C_{Q_4} & 1 & 0 & 0 \\
C_{Q_3} & \frac{[2]}{[3]} & 1 & 0 \\
C_{Q_2} & \frac{1}{[3]} & \frac{1}{[2]} & 1 \\
&\multicolumn{3}{c}{T((3,1))}
\end{array}} \quad
{\tiny \begin{array}{cccc}
& \mathfrak{gt}'_{Q_4} & \mathfrak{gt}'_{Q_3} & \mathfrak{gt}'_{Q_2} \\
\mathfrak{gt}_{Q_4} & [3] & 0 & 0 \\
\mathfrak{gt}_{Q_3} & 0 & \frac{[2][4]}{[3]} & 0 \\
\mathfrak{gt}_{Q_2} & 0 & 0 & \frac{[4]}{[2]}\\
& \multicolumn{3}{c}{D((3,1))}
\end{array}} \quad
{\tiny \begin{array}{cccc}
& \ensuremath{C^{\prime}}_{Q_4} & \ensuremath{C^{\prime}}_{Q_3} & \ensuremath{C^{\prime}}_{Q_2} \\
\mathfrak{gt}'_{Q_4} & 1 & \frac{[2]}{[3]} & \frac{1}{[3]} \\
\mathfrak{gt}'_{Q_3} & 0 & 1 & \frac{1}{[2]} \\
\mathfrak{gt}'_{Q_2} & 0 & 0 & 1 \\
& \multicolumn{3}{c}{T'((3,1))^{-1}}
\end{array}} \]
\[{\tiny \begin{array}{cccc}
& \ensuremath{C^{\prime}}_{Q_4} & \ensuremath{C^{\prime}}_{Q_3} & \ensuremath{C^{\prime}}_{Q_2} \\
C_{Q_4} & [3] & [2] & 1 \\
C_{Q_3} & [2] & [2]^2 & [2] \\
C_{Q_2} & 1 & [2] & [3]\\
& \multicolumn{3}{c}{S((3,1))}
\end{array}} \]
The more substantial example $\lambda = (4,2)$ is below, where $S(\lambda)$ is scaled as in Conjecture \ref{cj non-negativity T T' D}, so that its entries lie in $\mathbf{A}$, are $\overline{\cdot}$-invariant, and have greatest common divisor 1.
\[{\tiny \setlength{\cellsize}{8pt} \begin{array}{cccccccccc}
& \tableau{1 & 2 & 3 & 4\\ 5 & 6} & \tableau{1 & 2 & 3 & 5\\ 4 & 6} & \tableau{1 & 2 & 4 & 5\\ 3 & 6} & \tableau{1 & 3 & 4 & 5\\ 2 & 6} & \tableau{1 & 2 & 3 & 6\\ 4 & 5} & \tableau{1 & 2 & 4 & 6\\ 3 & 5} & \tableau{1 & 3 & 4 & 6\\ 2 & 5} & \tableau{1 & 2 & 5 & 6\\ 3 & 4} & \tableau{1 & 3 & 5 & 6\\ 2 & 4} \\
\tableau{1 & 2 & 3 & 4\\ 5 & 6} & 1 & \frac{[3]}{[4]} & \frac{[2]}{[4]} & \frac{[1]}{[4]} & \frac{[2]}{[4]} & \frac{[2]^2}{[4]} & \frac{[2]}{[4]} & \frac{[2]}{[4][3]} & \frac{[2]^2}{[3][4]} \\
\tableau{1 & 2 & 3 & 5\\ 4 & 6} & 0 & 1 & \frac{[2]}{[3]} & \frac{1}{[3]} & \frac{[2]}{[3]} & \frac{[2]^2}{[3]^2} & \frac{[2]}{[3]^2} & \frac{[2]}{[3]^2} & \frac{[2]^2}{[3]^2} \\
\tableau{1 & 2 & 4 & 5\\ 3 & 6} & 0 & 0 & 1 & \frac{1}{[2]} & 0 & \frac{[2]}{[3]} & \frac{1}{[3]} & \frac{1}{[3]} & \frac{1}{[2][3]} \\
\tableau{1 & 3 & 4 & 5\\ 2 & 6} & 0 & 0 & 0 & 1 & 0 & 0 & \frac{[2]}{[3]} & 0 & \frac{1}{[3]} \\
\tableau{1 & 2 & 3 & 6\\ 4 & 5} & 0& 0& 0& 0& 1&\frac{[2]}{[3]}& \frac{1}{[3]}& \frac{1}{[3]}&\frac{[2]}{[3]} \\
\tableau{1 & 2 & 4 & 6\\ 3 & 5} & 0& 0& 0& 0& 0& 1& \frac{1}{[2]}& \frac{1}{[2]} & \frac{1}{[2]^2} \\
\tableau{1 & 3 & 4 & 6\\ 2 & 5} & 0& 0& 0& 0& 0& 0& 1& 0& \frac{1}{[2]} \\
\tableau{1 & 2 & 5 & 6\\ 3 & 4} & 0& 0& 0& 0& 0& 0& 0& 1& \frac{1}{[2]} \\
\tableau{1 & 3 & 5 & 6\\ 2 & 4} & 0& 0& 0& 0& 0& 0& 0& 0& 1\\
& \multicolumn{9}{c}{T'((4,2))^{-1}}
\end{array}}\]
\[{\tiny \begin{array}{cccccccccc}
& \tableau{1 & 2 & 3 & 4\\ 5 & 6} & \tableau{1 & 2 & 3 & 5\\ 4 & 6} & \tableau{1 & 2 & 4 & 5\\ 3 & 6} & \tableau{1 & 3 & 4 & 5\\ 2 & 6} & \tableau{1 & 2 & 3 & 6\\ 4 & 5} & \tableau{1 & 2 & 4 & 6\\ 3 & 5} & \tableau{1 & 3 & 4 & 6\\ 2 & 5} & \tableau{1 & 2 & 5 & 6\\ 3 & 4} & \tableau{1 & 3 & 5 & 6\\ 2 & 4} \\
\tableau{1 & 2 & 3 & 4\\ 5 & 6} & [3][4] & [3]^2 & [2][3] & [3] & [2][3] & [2]^2[3] & [2][3] & [2] & [2]^2 \\
\tableau{1 & 2 & 3 & 5\\ 4 & 6} & [3]^2 & [2][3]^2 & [2]^2[3] & [2][3] & [2]^2[3] & 2[4] + 3[2] & 2[3] + 1 & [2]^2 & [2]^3 \\
\tableau{1 & 2 & 4 & 5\\ 3 & 6} & [2][3] & [2]^2[3] & [2][3]^2 & [3]^2 & [2]^3 & [2]^4 & [2]^3 & [2][3] & 2[3] + 1 \\
\tableau{1 & 3 & 4 & 5\\ 2 & 6} & [3] & [2][3] & [3]^2 & [3][4] & [2]^2 & [2]^3 & [3]^2 & [3] & [2][3] \\
\tableau{1 & 2 & 3 & 6\\ 4 & 5} & [2][3] & [2]^2[3] & [2]^3 & [2]^2 & [2][3]^2 & [2]^4 & [2]^3 & [2][3] & [2]^2[3] \\
\tableau{1 & 2 & 4 & 6\\ 3 & 5} & [2]^2[3] & 2[4] + 3[2] & [2]^4 & [2]^3 & [2]^4 & [2]^5 & [2]^4 & [2]^2[3] & 2[4] + 3[2] \\
\tableau{1 & 3 & 4 & 6\\ 2 & 5} & [2][3] & 2[3] + 1 & [2]^3 & [3]^2 & [2]^3 & [2]^4 & [2][3]^2 & [2][3] & [2]^2[3] \\
\tableau{1 & 2 & 5 & 6\\ 3 & 4} & [2] & [2]^2 & [2][3] & [3] & [2][3] & [2]^2[3] & [2][3] & [3][4] & [3]^2 \\
\tableau{1 & 3 & 5 & 6\\ 2 & 4} & [2]^2 & [2]^3 & 2[3] + 1 & [2][3] & [2]^2[3] & 2[4] + 3[2] & [2]^2[3] & [3]^2 & [2][3]^2\\
& \multicolumn{9}{c}{S((4,2))}
\end{array}}\]
\end{example}
For the next proposition, let us clarify a confusing point. Let $\Gamma'_T$, $T \in \text{SSYT}_{[n]}(\lambda)$, be the $\ensuremath{\mathscr{H}}_r$-cell of $\ensuremath{\mathbf{T}}$ from Corollary \ref{c Schur-Weyl duality lower} (i); let $\Gamma'_P$ be the right cell of $\Gamma'_{\ensuremath{\mathcal{S}}_r}$ from \textsection \ref{ss cell label conventions C_Q C'_Q}, where $P \in \text{SYT}(\lambda)$ is the \emph{standardization} of $T$, i.e. $P = \transpose{P(D(\mathbf{k}))}$ for any $\mathbf{k}$ with $P(\mathbf{k}^\dagger) = T$.
The $\ensuremath{\mathscr{H}}_r$-cells $\Gamma'_P, \Gamma'_T$, and $\Gamma'_\lambda$ give rise to isomorphic $\ensuremath{\mathscr{H}}_r$-modules with basis, the isomorphisms being given by
\begin{equation} \label{e sb involution}
\begin{array}{cclccclcccl}
\Gamma'_T &\cong& \Gamma'_P, && \Gamma'_P &\cong& \Gamma'_\lambda, && \Gamma'_T &\cong& \Gamma'_\lambda,\\
\ensuremath{c^{\prime}}_\mathbf{k} &\longleftrightarrow& \ensuremath{C^{\prime}}_{D(\mathbf{k})} && \ensuremath{C^{\prime}}_w &\longleftrightarrow& \ensuremath{C^{\prime}}_{\transpose{Q(w)}} && \ensuremath{c^{\prime}}_\mathbf{k} &\longleftrightarrow& \ensuremath{C^{\prime}}_{Q(\mathbf{k}^\dagger)^\dagger}
\end{array}
\end{equation}
where $Q^\dagger$, for $Q$ a SYT, denotes the Sch\"utzenberger involution of $Q$ (see, e.g., \cite[A1.2]{F}). The left-hand isomorphism is from Theorem \ref{t Schur-Weyl duality lower} (iv), the middle one from \textsection \ref{ss cell label conventions C_Q C'_Q}, and the right-hand one is the composition of the two. Let $1^\text{op}$ be the antiautomorphism of $\ensuremath{\mathscr{H}}_r$ determined by $T_i^{1^\text{op}} = T_i$.
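The Sch\"utzenberger involution $Q \mapsto Q^\dagger$ appearing in the isomorphisms above can be computed by iterated jeu de taquin slides (evacuation). A Python sketch of the standard algorithm (our own helper; tableaux as tuples of rows), included only as an illustration:

```python
def evacuation(Q):
    """Schutzenberger involution of a standard Young tableau, computed by
    repeatedly deleting the smallest entry and sliding the hole outward."""
    rows = [list(r) for r in Q]
    n = sum(len(r) for r in rows)
    out = {}
    for step in range(n):
        # the smallest remaining entry always sits at (0, 0); discard it
        # and slide the hole to an outer corner, always moving in the
        # smaller of the right/below neighbours
        i = j = 0
        while True:
            right = rows[i][j + 1] if j + 1 < len(rows[i]) else None
            below = rows[i + 1][j] if i + 1 < len(rows) and j < len(rows[i + 1]) else None
            if right is None and below is None:
                break
            if below is None or (right is not None and right < below):
                rows[i][j] = right
                j += 1
            else:
                rows[i][j] = below
                i += 1
        out[(i, j)] = n - step    # the vacated corner receives n, n-1, ...
        del rows[i][j:]           # the hole is the last cell of its row
        if not rows[i]:
            del rows[i]
    shape = [len(r) for r in Q]
    return tuple(tuple(out[(i, j)] for j in range(shape[i])) for i in range(len(Q)))

assert evacuation(((1, 2), (3,))) == ((1, 3), (2,))
assert evacuation(evacuation(((1, 3, 4), (2,)))) == ((1, 3, 4), (2,))  # involution
```

The two assertions check the known facts that evacuation reverses descents (so it swaps the two tableaux of shape $(2,1)$) and is an involution.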
\begin{proposition} \label{p dual bases Mlambda}
There is a bilinear form $\langle \cdot,\cdot \rangle : M_\lambda \times M_\lambda \to \ensuremath{K}$ satisfying
\begin{list}{\emph{(\roman{ctr})}}
{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item $\langle x h, x' \rangle = \langle x, x' h^{1^\text{op}} \rangle$ for any $h \in \ensuremath{\mathscr{H}}_r$, $x, x' \in M_\lambda$,
\item $\langle C_Q, \ensuremath{C^{\prime}}_{Q'} \rangle = \delta_{QQ'}$,
\item $\langle (\tilde{C}_Q)^J, (\tilde{C}^{\prime}_{Q'})^J \rangle = \delta_{QQ'}$,
\item $\langle \mathfrak{gt}_Q, \mathfrak{gt}'_{Q'} \rangle = \delta_{QQ'}$.
\end{list}
\end{proposition}
\begin{proof}
By Proposition \ref{p dual lifted bases}, the inner product on $\ensuremath{\mathbf{T}}$ restricts to an inner product $\pi^\ensuremath{\mathbf{T}}_\lambda(\ensuremath{K} \Gamma_T) \times \pi^\ensuremath{\mathbf{T}}_\lambda(\ensuremath{K}\Gamma'_T) \to \ensuremath{K}$ for any $T \in \text{SSYT}_{[n]}(\lambda)$. This yields an inner product on $M_\lambda$ satisfying $( C_{Q}, \ensuremath{C^{\prime}}_{Q'} ) = \delta_{Q Q^{\prime \dagger}}$ (we have used the right-hand isomorphism of \eqref{e sb involution}).
Letting $M_\lambda^\dagger$ denote the result of twisting $M_\lambda$ by the automorphism $\dagger$, we have $M_\lambda \cong M_\lambda^\dagger$ via $\ensuremath{C^{\prime}}_Q \mapsto \ensuremath{C^{\prime}}_{Q^\dagger}$. Applying this isomorphism to the second factor yields the bilinear form $\langle \cdot, \cdot \rangle$ satisfying (i) and (ii). Given (ii), the proof of (iii) is similar to that of Proposition \ref{p dual lifted bases}. Iterating this argument through the sequence of projections that yields the seminormal bases proves (iv).
\end{proof}
\begin{theorem} \label{t transition C' to C}
The transition matrix $S(\lambda) = (S_{Q' Q})_{Q', Q \in \text{SYT}(\lambda)}$ expressing the lower canonical basis of $M_\lambda$ in terms of the upper canonical basis of $M_\lambda$ (i.e.\ $\ensuremath{C^{\prime}}_Q = \sum_{Q' \in \text{SYT}(\lambda)} S_{Q' Q} C_{Q'}$) has $\overline{\cdot}$-invariant entries that belong to $\ensuremath{K}_0 \cap \ensuremath{K}_\infty$ and is the identity matrix at $\ensuremath{u} = 0$ and $\ensuremath{u} = \infty$.
\end{theorem}
\begin{proof}
First note that the $\overline{\cdot}$-invariance of the lower and upper canonical bases of $M_\lambda$ shows that the entries of $S(\lambda)$ are $\overline{\cdot}$-invariant. As remarked after Definition \ref{d seminormal}, the upper and lower seminormal bases differ by a diagonal transformation. Thus $S(\lambda) = T(\lambda) D(\lambda) T'(\lambda)^{-1}$ for some diagonal matrix $D(\lambda)$. Given Proposition \ref{p gt to canonical transition}, it suffices to show that $D(\lambda)$ is the identity matrix at $\ensuremath{u} = 0$. Let $A^s$ (resp. $A'^s$) be the matrix that expresses right multiplication by $C_s$ in terms of the upper (resp. lower) seminormal basis of $M_\lambda$.
Then, by the definition of $D(\lambda)$, $D(\lambda) A^s D(\lambda)^{-1} = A'^s$. Also, it follows from Proposition \ref{p dual bases Mlambda} that $A^s = \transpose{(A'^s)}$. Thus $D(\lambda)$ is determined up to a global scale by the equations $\frac{D(\lambda)_{Q'Q'}}{D(\lambda)_{QQ}} = \frac{A^s_{QQ'}}{A^s_{Q'Q}}$ for all $s \in S$ and $Q, Q' \in \text{SYT}(\lambda)$ such that $A^s_{Q'Q} \neq 0$ ($D(\lambda)$ must be determined uniquely by these equations up to a global scale because $M_\lambda$ is irreducible). Now $A^s = T(\lambda)^{-1} M^s T(\lambda)$, where $M^s$ expresses right multiplication by $C_s$ in terms of the upper canonical basis of $M_\lambda$ (thus $M^s_{Q'Q} = \mu(Q',Q)$ if $s \in R(C_{Q'}) \setminus R(C_Q)$). Now we apply an easy modification of \eqref{en matrix product mu} to the product $-\ensuremath{u} A^s = T(\lambda)^{-1} (-\ensuremath{u} M^s) T(\lambda)$ to obtain (assuming $Q' \neq Q$)
\begin{equation} \label{e mu computation}
\begin{array}{c}
\mu(-\ensuremath{u} A^s_{Q'Q}) = \chi\{ s \in R(C_Q) \}(\mu(T^{-1}_{Q'Q})) + \mu(-\ensuremath{u} M^s_{Q'Q}) + \chi\{ s \in R(C_{Q'}) \}(\mu(T_{Q'Q})) \\
= \begin{cases}
0 + \mu(-\ensuremath{u} M^s_{Q'Q}) + 0 = -\mu(Q',Q) & \text{if } s \in R(C_{Q'}) \setminus R(C_Q) \text{ and } Q' \vartriangleleft Q,\\
-\mu(Q',Q) + 0 + 0 \ \ = -\mu(Q',Q) & \text{if } s \in R(C_{Q}) \setminus R(C_{Q'}), \\
& (R(C_{Q'}) \setminus R(C_{Q})) \cap J_{k(Q',Q)-1} \neq \emptyset, \text{ and } Q' \vartriangleright Q,
\end{cases}
\end{array}
\end{equation}
where $\chi\{P\}(x)$ is equal to $x$ if $P$ is true and $0$ otherwise. Here we have used the lower triangularity of $T(\lambda)$ for the top case and Proposition \ref{p gt to canonical transition} (iv) for the bottom case (note that these cases do not cover all possibilities, and we do not know the answer in general). To complete the proof, consider the dual Knuth equivalence graph on $\text{SYT}(\lambda)$ as in, for instance, \cite{Sami}. We say that a dual Knuth transformation $Q' \underset{\text{DKE}}{\longleftrightarrow} Q$ is \emph{initial} if $Q'$ and $Q$ have the entries $i, i+1$ in different positions and $|R(C_{Q'}) \cap \{s_{i-1}, s_i\}| = |R(C_{Q}) \cap \{s_{i-1}, s_i\}| = 1$. For example, the dual Knuth equivalence \setlength{\cellsize}{8pt} ${\tiny \tableau{1 & 2 & 4 \\ 3 & 5 & 6} \underset{\text{DKE}}{\longleftrightarrow} \tableau{1 & 2 & 5 \\ 3 & 4 & 6}}$ is not initial. It is easy to check that $Q' \vartriangleright Q$ and $Q' \underset{\text{DKE}}{\longleftrightarrow} Q$ initial implies $s_{k(Q',Q)-2} \in (R(C_{Q'}) \setminus R(C_{Q})) \cap J_{k(Q',Q)-1}$. Hence by applying \eqref{e mu computation} to $A^s_{Q'Q}$ and $A^{s}_{QQ'}$ for any pair $Q', Q$ such that $Q' \vartriangleright Q$ and $Q' \underset{\text{DKE}}{\longleftrightarrow} Q$ is initial, and with $s \in R(C_{Q}) \setminus R(C_{Q'})$, we conclude $\frac{D(\lambda)_{Q'Q'}}{D(\lambda)_{QQ}} = \frac{A^s_{QQ'}}{A^s_{Q'Q}} \equiv 1 \mod \ensuremath{u}\ensuremath{K}_0$.
The result then follows from the following combinatorial claim
\refstepcounter{equation}
\begin{enumerate}[label={(\theequation)}]
\item The graph on $\text{SYT}(\lambda)$ consisting of initial dual Knuth transformations is connected.
\end{enumerate}
The claim is proved by induction on $r = |\lambda|$. Let $\mu^1, \dots, \mu^l$ be the shapes obtained from $\lambda$ by removing an outer corner. Assume that the graphs for the $\mu^i$ are connected. For any distinct $i, j \in [l]$, it is easy to construct $Q', Q \in \text{SYT}(\lambda)$ such that $\text{\rm sh}(Q'|_{[r-1]}) = \mu^i$, $\text{\rm sh}(Q|_{[r-1]}) = \mu^j$, and $Q' \underset{\text{DKE}}{\longleftrightarrow} Q$ is a dual Knuth transformation. Such dual Knuth transformations are always initial, so the claim follows.
\end{proof}
\subsection{Positivity conjectures}
In our computations of many of the matrices discussed above, we have observed positivity properties, which we make precise below. Computing in Magma, we have verified (a) and (b) for all $\lambda \vdash r$, $r \leq 8$, and (c) for $r \leq 6$. Our original motivation for looking for positivity here is that the positivity of $S(\lambda)$ is related to the conjecture in \cite{GCT4} stating that an element spanning $\nsbr{\epsilon}_+ \subseteq \ensuremath{K} \nsbr{\ensuremath{\mathscr{H}}}_r \subseteq \ensuremath{K} \ensuremath{\mathscr{H}}_r \ensuremath{\otimes} \ensuremath{\mathscr{H}}_r$ has nonnegative coefficients when expressed in the basis $\{C_v \ensuremath{\otimes} C_w : v,w \in \ensuremath{\mathcal{S}}_r\}$ (see the introduction).
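The factorization $S(\lambda) = T(\lambda) D(\lambda) T'(\lambda)^{-1}$ from the proof of Theorem \ref{t transition C' to C} can be checked directly against the explicit $\lambda = (3,1)$ matrices of the example above. The following Python sketch (ours, not part of the paper's Magma computations) does this with exact rational arithmetic at a few sample values of $\ensuremath{u}$, so it is numerical evidence rather than a symbolic proof; $[n]$ is taken to be the balanced quantum integer:

```python
from fractions import Fraction

def qint(n, u):
    """Balanced quantum integer [n] = u**(n-1) + u**(n-3) + ... + u**(1-n)."""
    return sum(u**(n - 1 - 2*k) for k in range(n))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def check(u):
    """Verify S((3,1)) == T((3,1)) * D((3,1)) * T'((3,1))^{-1} at the point u."""
    b2, b3, b4 = (qint(n, u) for n in (2, 3, 4))
    # matrices copied from the lambda = (3,1) example
    T = [[1, 0, 0], [b2/b3, 1, 0], [1/b3, 1/b2, 1]]
    D = [[b3, 0, 0], [0, b2*b4/b3, 0], [0, 0, b4/b2]]
    Tp_inv = [[1, b2/b3, 1/b3], [0, 1, 1/b2], [0, 0, 1]]
    S = [[b3, b2, 1], [b2, b2*b2, b2], [1, b2, b3]]
    return matmul(matmul(T, D), Tp_inv) == S

# exact check at a few rational sample points
assert all(check(Fraction(p, q)) for p, q in [(3, 2), (5, 7), (2, 1)])
```

The off-diagonal entries match because of the identity $[2][3] = [2] + [4]$, which the exact arithmetic confirms at each sample point.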
\begin{conjecture} \label{cj non-negativity T T' D}
For a non-zero matrix $M$ with $\overline{\cdot}$-invariant entries in $\ensuremath{K}$, let $D(M)$ be the element of $\ensuremath{K}$, unique up to sign, such that $D(M)M$ has $\overline{\cdot}$-invariant entries in $\mathbf{A}$ and the greatest common divisor of the entries of $D(M)M$ is $1$. The matrices $\tilde{T}, \tilde{T}', T(\lambda), T'(\lambda)^{-1}$, and $S(\lambda)$ from Corollary \ref{c lift at u = 0 in H}, Proposition \ref{p gt to canonical transition}, and Theorem \ref{t transition C' to C} have the following positivity properties.
\begin{list}{\emph{(\alph{ctr})}}
{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item Let $M$ be $T(\lambda)$, $T'(\lambda)^{-1}$, or $S(\lambda)$. After replacing $D(M)$ with $-D(M)$ if needed, all of the entries of $D(M)M$ have nonnegative coefficients.
\item If $M$ is $T(\lambda)$ or $T'(\lambda)^{-1}$, then $D(M)$ belongs to $\mathbf{A}$ and has all nonnegative or all nonpositive coefficients (this is not a sensible conjecture for $S(\lambda)$ because $S(\lambda)$ is only well-defined up to a global scale).
\item $\pm D(\tilde{T}) = \pm D(\tilde{T}') = \pm [r]!$.
\end{list}
\end{conjecture}
It follows from Proposition \ref{p dual bases Mlambda} that $T'(\lambda)^{-1} = \transpose{T(\lambda)}$, so the nonnegativity conjectures for $T(\lambda)$ and $T'(\lambda)^{-1}$ are equivalent.
This conjecture, or rather its weakening discussed in the remark below, is supported by Proposition \ref{p gt to canonical transition} (iv), since the $\mathcal{S}_r$-graph edge weights $\mu(Q',Q)$ are known to be nonnegative.
\begin{remark}
It is not completely clear how to define nonnegativity in $K$. At first, we used the following definition: $f \in K$ is nonnegative if $f = g/h$ with $g, h \in \mathbf{A}$, where $g$ and $h$ have no common factor and both have nonnegative coefficients. To our surprise, we discovered that this is not a good definition because the resulting subset is not a semiring. For example, $[2]$, $[3]$, and $\frac{1}{[6]}$ are all nonnegative by this definition, but $\frac{[2][3]}{[6]} = \frac{u^2}{1-u^2+u^4}$ is not (in fact, this is an entry of $T'((6,2))$). A strictly weaker definition of nonnegativity that we may adopt instead is: an element $f \in K$ is \emph{nonnegative} if $f(a)$ is defined and nonnegative for all positive real $a$. With this definition, the set of nonnegative rational functions in $u$ is a semiring, and Conjecture \ref{cj non-negativity T T' D} would imply that the matrices $T(\lambda)$, $T'(\lambda)^{-1}$, and $S(\lambda)$ (after adjusting $S(\lambda)$ by a suitable global scale) have nonnegative entries.
\end{remark}
\begin{remark}
It is tempting to conjecture from Figure \ref{l lift transition matrix} that every entry of $D(\tilde{T})\tilde{T}$ has either all nonnegative coefficients or all nonpositive coefficients. This turns out to be true for $r \leq 5$, but fails for $r = 6$---the only entries of $[6]!
\, \tilde{T}$ without this property are equal to
\[ [2]^3[5]([3]-3) = u^9 + 2u^7 - 2u^3 - u - u^{-1} - 2u^{-3} + 2u^{-7} + u^{-9}.\]
Despite this failure, the matrices $\tilde{T}, \tilde{T}'$ deserve further investigation, as their entries appear combinatorial in nature.
\end{remark}

\section{The two-row case}
\label{s The two-row case}
We now set $n=2$ and use the graphical calculus of \cite{FK} to compute the transition matrix of Lemma \ref{l lift transition matrix} explicitly for $\lambda$ a two-row partition.
\begin{definition} \label{d pairing internal external}
The \emph{diagram} of a word $\mathbf{k} \in [n]^r$ is the picture obtained from $\mathbf{k}$ by pairing 2s and 1s as left and right parentheses and then drawing an arc between matching pairs, as shown below. The word $\mathbf{k}$ is \emph{Yamanouchi} if its diagram has no unpaired 2s.
\end{definition}
\begin{figure}[H]
\begin{tikzpicture}[xscale=.9]
\tikzstyle{column} = [inner sep = -4pt]
\tikzstyle{edge} = [draw,-,black]
\tikzstyle{LabelStyleH} = [text=black, fill=white, inner sep = -.8pt]
\begin{scope}[yshift=0cm]
\foreach \y/\z in {-.2/-14*.25} {
\draw[edge, bend right=70] (\z+1*.5,\y) to (\z+8*.5,\y);
\draw[edge, bend right=70] (\z+2*.5,\y) to (\z+5*.5,\y);
\draw[edge, bend right=70] (\z+3*.5,\y) to (\z+4*.5,\y);
\draw[edge, bend right=70] (\z+6*.5,\y) to (\z+7*.5,\y);
\draw[edge, bend right=70] (\z+9*.5,\y) to (\z+10*.5,\y);
\node[column] (theNode) at (0,0) { $\myvcenter{\pad{2} \pad{2} \pad{2} \pad{1} \pad{1} \pad{2} \pad{1} \pad{1} \pad{2} \pad{1} \pad{1} \pad{1} \pad{2}}$ };
}
\end{scope}
\end{tikzpicture}
\end{figure}
As shown in \cite{FK}, diagrams provide a simple and beautiful way to visualize the action of $\mathcal{U}_q$ and $\mathscr{H}_r$ on the upper canonical basis. The first part of the next theorem is established in \cite[\textsection 2.3]{FK}, and the second part is obtained from the first by dualizing with respect to the inner product on $\mathbf{T}$.
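The pairing rule of Definition \ref{d pairing internal external} is the familiar parenthesis-matching scan, and the unpaired 1s it leaves over are exactly the letters acted on below. A minimal Python sketch (illustrative only; the function names are ours, not from \cite{FK}):

```python
def pair_word(k):
    """Diagram of a word in {1,2}^r: 2s act as left parentheses, 1s as right.

    Returns (pairs, unpaired_twos, unpaired_ones); each pair (i, j) is an arc
    of the diagram, with k[i] == 2 and k[j] == 1 (0-indexed positions).
    """
    open_twos, pairs, unpaired_ones = [], [], []
    for j, letter in enumerate(k):
        if letter == 2:
            open_twos.append(j)
        elif open_twos:                 # this 1 closes the nearest open 2
            pairs.append((open_twos.pop(), j))
        else:                           # no 2 available to its left: unpaired 1
            unpaired_ones.append(j)
    return pairs, open_twos, unpaired_ones  # leftover stack = unpaired 2s

def is_yamanouchi(k):
    """Yamanouchi means the diagram has no unpaired 2s."""
    return not pair_word(k)[1]
```

On the word of the figure, $2221121121112$, this recovers the five drawn arcs, a single unpaired 2 (the final letter, so the word is not Yamanouchi), and two unpaired 1s.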
\begin{theorem}\label{t FK F action on c basis}
\begin{list}{\emph{(\alph{ctr})}}
{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item The action of $F_1$ on the upper canonical basis of $\mathbf{T}$ is given by
\[F_1 (c_{\mathbf{k}}) = \sum_{j=1}^t [j]\, c_{\mathscr{F}_{(j)}(\mathbf{k})}, \]
where $t$ is the number of unpaired $1$s in $\mathbf{k}$ and $\mathscr{F}_{(j)}(\mathbf{k})$ is the word obtained by replacing the $j$-th unpaired $1$ in $\mathbf{k}$ with a $2$ (the first unpaired $1$ means the leftmost unpaired $1$ and the $t$-th means the rightmost).
\item The action of $E_1$ on the lower canonical basis of $\mathbf{T}$ is given by
\[ E_1 (c'_{\mathbf{k}^\dagger}) = \sum_{\mathbf{k}'} [\alpha(\mathbf{k}',\mathbf{k})]\, c'_{\mathbf{k}'^\dagger}, \]
where $\alpha(\mathbf{k}',\mathbf{k})$ is the positive integer $j$ such that $\mathscr{F}_{(j)}(\mathbf{k}') = \mathbf{k}$, and $0$ if there is no such positive integer.
\end{list}
\end{theorem}
We will also make use of the action of the Kashiwara operator $\crystalu{F_1}$ on the upper crystal basis (we abuse notation by letting the operator act on words rather than on the crystal basis elements $b_{\mathbf{k}} \in \mathscr{B}$):
\begin{enumerate}
\item[] $\crystalu{F_1}(\mathbf{k})$ is the word obtained by replacing the rightmost unpaired $1$ in $\mathbf{k}$ with a $2$, and is undefined if there are no unpaired $1$s.
\end{enumerate}
We need some notation for the next theorem. Let $\mathbf{k}|_j$ denote the subword $k_1k_2 \cdots k_j$ of the word $\mathbf{k} = k_1k_2\cdots k_r$. Let $f$ denote the function on $V^{\otimes r-1}$ given by
\[ f\Big(\sum_{\mathbf{j} \in [n]^{r-1}} a_\mathbf{j}\, c'_{\mathbf{j}^\dagger}\Big) = \sum_{\mathbf{j}} a_\mathbf{j}\, c'_{\crystalu{F_1}(\mathbf{j}^\dagger)}, \]
where the $a_\mathbf{j}$ belong to $K$ and the sum on the right is over those $\mathbf{j}$ such that $\crystalu{F_1}(\mathbf{j}^\dagger)$ is defined. Let $\lambda$ be a partition of $r$ with two rows and identify the lower canonical basis of $M_\lambda$ with the $\mathscr{H}_r$-cell $\Gamma'_{Z_\lambda}$ of $\mathbf{T}$ (the vertices of this cell are those $c'_{\mathbf{k}^\dagger}$ such that $\mathbf{k}$ is Yamanouchi and has content $\lambda$) via the right-hand isomorphism of \eqref{e sb involution} (with $T = Z_\lambda$).
Set $\lambda^1 = (\lambda_1 - 1, \lambda_2)$, $\lambda^2 = (\lambda_1, \lambda_2 - 1)$, and $l = \lambda_1 - \lambda_2$. Let $\crystalu{F_1}(Z_{\lambda^2})$ denote the tableau obtained from $Z_{\lambda^2}$ by changing the last entry in the first row to a $2$. We will compute the transition matrix of Lemma \ref{l lift transition matrix} for $\lambda$ as above. We have found it more convenient to compute the matrix for $J^\dagger = \{s_2,\ldots,s_{r-1}\}$ rather than $J = \{ s_1, \dots, s_{r-2} \}$ (the matrix for $J$ can then be obtained from that for $J^\dagger$ by conjugating by the permutation matrix corresponding to $C'_Q \mapsto C'_{Q^\dagger}$). Consider the weight space $\mathbf{T}^\lambda$, which is isomorphic to $\epsilon_+ \otimes_{K \mathscr{H}_{J_\lambda}} K \mathscr{H}_r$. Since the intersection of two cellular subquotients is a cellular subquotient, Proposition \ref{p restrict Wgraph} with parabolic subgroup $(\mathcal{S}_{r})_{J^\dagger}$ and Theorem \ref{t Schur-Weyl duality lower} imply that
\begin{equation}
R\colon \mathrm{Res}_{K \mathscr{H}_{J^\dagger}} K \{c'_{\mathbf{k}^\dagger} \in \mathbf{T}^\lambda : k_r = 1\} \xrightarrow{\cong} (V^{\otimes r-1})^{\lambda^1}, \quad c'_{\mathbf{k}^\dagger} \mapsto c'_{(\mathbf{k}|_{r-1})^\dagger}
\end{equation}
is an isomorphism of $K \mathscr{H}_{J^\dagger}$-modules with basis.
Quotienting by the $\mathscr{H}_r$-cells below $\Gamma'_{Z_\lambda}$, this yields an isomorphism of modules with basis
\begin{equation} \label{e R lambda definition}
\mathrm{Res}_{K \mathscr{H}_{J^\dagger}} K\Gamma'_{Z_\lambda} \xrightarrow{\cong} K \bigl( \Gamma'_{Z_{\lambda^1}} \sqcup \Gamma'_{\crystalu{F_1}(Z_{\lambda^2})} \bigr).
\end{equation}
\begin{theorem} \label{t two row partial lifts}
Maintain the notation above. For each $c'_{\mathbf{k}^\dagger} \in \Gamma'_{Z_\lambda}$, define the element
\begin{equation} \label{e liftcp definition}
(\tilde{c}'_{\mathbf{k}^\dagger})^{J^\dagger} :=
\begin{cases}
\displaystyle c'_{\mathbf{k}^\dagger} - \frac{1}{[l+1]} R^{-1}\bigl( f (E_1(c'_{(\mathbf{k}|_{r-1})^\dagger}))\bigr) & \text{if } \text{sh}((\mathbf{k}|_{r-1})^\dagger) = \lambda^1, \\
c'_{\mathbf{k}^\dagger} & \text{if } \text{sh}((\mathbf{k}|_{r-1})^\dagger) = \lambda^2.
\end{cases}
\end{equation}
Applying $s_\lambda^{\mathbf{T}^\lambda}$ to both sides of this definition (where $s_\lambda^N$ denotes the surjection onto the $M_\lambda$-isotypic component of $N$) then yields $(\tilde{C}'_{Q(\mathbf{k})^\dagger})^{J^\dagger}$ expanded in terms of the lower canonical basis of $M_\lambda$.
\end{theorem}
It is helpful to follow the proof with an example: take $\lambda = (5,2)$ and $\mathbf{k} = 2112111$. Then
\[ E_1(c'_{211211^\dagger}) = c'_{111211^\dagger} + [2]c'_{211111^\dagger}, \]
\[ (\tilde{c}'_{2112111^\dagger})^{J^\dagger} = c'_{2112111^\dagger} - \frac{1}{[4]}(c'_{1112121^\dagger} + [2]c'_{2111121^\dagger}), \]
\setlength{\cellsize}{8pt}
\[\Big(\tilde{C}'_{{\tiny\tableau{1 & 2 & 3 &5 &6 \\ 4 & 7}}}\Big)^{J^\dagger} = C'_{{\tiny\tableau{1 & 2 & 3 &5 &6 \\ 4 & 7}}} - \frac{1}{[4]}\Big(C'_{{\tiny\tableau{1 & 3 & 5 &6 &7 \\ 2 & 4}}} + [2]C'_{{\tiny\tableau{1 & 3 & 4 &5 &6 \\ 2 & 7}}}\Big), \]
\[\Big(\tilde{C}'_{{\tiny\tableau{1 & 3 & 4 &6 &7 \\ 2 & 5}}}\Big)^{J} =
C'_{{\tiny\tableau{1 & 3 & 4 &6 &7 \\ 2 & 5}}} - \frac{1}{[4]}\Big(C'_{{\tiny\tableau{1 & 2 & 3 &4 &6 \\ 5 & 7}}} + [2]C'_{{\tiny\tableau{1 & 3 & 4 &5 &6 \\ 2 & 7}}}\Big). \]
\begin{proof}
Assume throughout that $\mathbf{k}$ corresponds to the top case of \eqref{e liftcp definition}, the arguments needed for the bottom case being easy. The key fact to check is that $E_1(R((\tilde{c}'_{\mathbf{k}^\dagger})^{J^\dagger}))$ is zero mod $V^{\otimes r-1}[\rhd \lambda^2]$. To see that this would prove the theorem, let $\eta$ be the element of the weight space $(V^{\otimes r-1})^{\lambda^1}$ such that $E_1(\eta) = 0$ and $R((\tilde{c}'_{\mathbf{k}^\dagger})^{J^\dagger}) - \eta \in V^{\otimes r-1}[\rhd \lambda^2]$. Then $\eta$ is a highest weight vector of weight $\lambda^1$, so by quantum Schur-Weyl duality, $\eta$ belongs to the $M_{\lambda^1}$-isotypic component of $V^{\otimes r-1}$.
Thus $R((\tilde{c}'_{\mathbf{k}^\dagger})^{J^\dagger})$ and $\eta$ differ only by lower canonical basis elements outside of $\Gamma'_{Z_{\lambda^1}} \sqcup \Gamma'_{\crystalu{F_1}(Z_{\lambda^2})}$; so by \eqref{e R lambda definition}, $(\tilde{c}'_{\mathbf{k}^\dagger})^{J^\dagger}$, regarded as an element of the cellular subquotient $K\Gamma'_{Z_\lambda}$, belongs to the $M_{\lambda^1}$-isotypic component of $\mathrm{Res}_{K \mathscr{H}_{J^\dagger}} K\Gamma'_{Z_\lambda}$. Now, checking the key fact amounts to showing that if
\[ \Big(E_1 - \frac{E_1}{[l+1]} f E_1\Big)(c'_{(\mathbf{k}|_{r-1})^\dagger}) = \Big(1 - \frac{E_1}{[l+1]} f\Big) \sum_{\mathbf{j}'} [\alpha(\mathbf{j}',\mathbf{k}|_{r-1})]\, c'_{\mathbf{j}'^\dagger} \]
is written as $\sum_{\mathbf{j} \in [n]^{r-1}} a_\mathbf{j}\, c'_{\mathbf{j}^\dagger}$, then $a_\mathbf{j} = 0$ for all $\mathbf{j}$ that are Yamanouchi. Here we are using the fact that $(V^{\otimes r-1})^{\lambda^2}[\rhd \lambda^2]$ is spanned by the $c'_{\mathbf{j}^\dagger}$ such that $\mathbf{j}$ has content $\lambda^2$ and is not Yamanouchi.
Now let $\mathbf{j}$ be of content $\lambda^2$ and Yamanouchi; then one checks that $c'_{\mathbf{j}^\dagger}$ occurs as a term of $E_1 f c'_{\mathbf{j}'^\dagger}$ expanded in the lower canonical basis if and only if $\mathbf{j} = \mathbf{j}'$, and if it does occur, then its coefficient is $[l+1]$, since $l+1$ is the number of unpaired $1$s in $\mathbf{j}$. It follows that $a_\mathbf{j} = 0$, as desired.
\end{proof}
\begin{remark}\label{r Lascoux's paper}
The recent paper \cite{GLS} studies the matrix $T'(\lambda)$ for $\lambda$ a two-row partition. The lower canonical basis of $M_\lambda$ is realized in a polynomial representation of $\mathscr{H}_r$, and the lower seminormal basis of $M_\lambda$ is given by specialized non-symmetric Macdonald polynomials. Let $\lambda = (r/2,r/2)$ and let $Q$ be the SYT of shape $\lambda$ whose first row consists of the odd entries and whose second row consists of the even entries. The authors show that the coefficients of the seminormal basis element $\gtp_{Q}$, expressed in the lower canonical basis of $M_\lambda$ (i.e.~the last column of $T'(\lambda)$), are all powers of $-\frac{1}{[2]}$, and they give a combinatorial formula for the exponents.
\end{remark}

\section*{Acknowledgments}
I am grateful to John Stembridge, Ketan Mulmuley, and Thomas Lam for helpful conversations and to Michael Bennett for help typing and typesetting figures.

\bibliographystyle{plain}
\def\cprime{$'$}
\begin{thebibliography}{10}

\bibitem{Sami} Sami~H. Assaf.
\newblock Dual equivalence graphs {I}: {A} combinatorial proof of {LLT} and {M}acdonald positivity.
\newblock {\em ArXiv e-prints}, May 2010.
\newblock {\tt arXiv:1005.3759}.

\bibitem{BV} Dan Barbasch and David Vogan.
\newblock Primitive ideals and orbital integrals in complex exceptional groups.
\newblock {\em J. Algebra}, 80(2):350--382, 1983.

\bibitem{BMSGCT4} J.~Blasiak, K.~D. Mulmuley, and M.~Sohoni.
\newblock Geometric complexity theory {IV}: nonstandard quantum group for the {K}ronecker problem.
\newblock {\em ArXiv e-prints}, June 2013.
\newblock {\tt arXiv:cs/0703110v4}.

\bibitem{Bnsbraid} Jonah Blasiak.
\newblock Nonstandard braid relations and {C}hebyshev polynomials.
\newblock {\em ArXiv e-prints}, October 2010.
\newblock {\tt arXiv:1010.0421}.

\bibitem{B0} Jonah Blasiak.
\newblock {$W$}-graph versions of tensoring with the {$\mathcal{S}_n$} defining representation.
\newblock {\em J. Algebraic Combin.}, 34(4):545--585, 2011.

\bibitem{B4} Jonah Blasiak.
\newblock Representation theory of the nonstandard {H}ecke algebra.
\newblock {\em ArXiv e-prints}, February 2012.
\newblock {\tt arXiv:1201.2209v2}.

\bibitem{Brundan} Jonathan Brundan.
\newblock Dual canonical bases and {K}azhdan-{L}usztig polynomials.
\newblock {\em J. Algebra}, 306(1):17--46, 2006.

\bibitem{GLS} J.~de Gier, A.~Lascoux, and M.~Sorrell.
\newblock Deformed {K}azhdan-{L}usztig elements and {M}acdonald polynomials.
\newblock {\em ArXiv e-prints}, July 2010.
\newblock {\tt arXiv:1007.0861}.

\bibitem{Du} Jie Du.
\newblock {${\rm IC}$} bases and quantum linear groups.
\newblock In {\em Algebraic groups and their generalizations: quantum and infinite-dimensional methods ({U}niversity {P}ark, {PA}, 1991)}, volume~56 of {\em Proc. Sympos. Pure Math.}, pages 135--148. Amer. Math. Soc., Providence, RI, 1994.
\bibitem{F} Sergey Fomin.
\newblock {\em Knuth equivalence, jeu de taquin, and the {L}ittlewood-{R}ichardson rule}, Appendix {I} in {\em Enumerative Combinatorics, vol.~2}, volume~62 of {\em Cambridge Studies in Advanced Mathematics}.
\newblock Cambridge University Press, Cambridge, 1999.

\bibitem{FKK} I.~B. Frenkel, M.~G. Khovanov, and A.~A. Kirillov, Jr.
\newblock Kazhdan-{L}usztig polynomials and canonical basis.
\newblock {\em Transform. Groups}, 3(4):321--336, 1998.

\bibitem{FK} Igor~B. Frenkel and Mikhail~G. Khovanov.
\newblock Canonical bases in tensor products and graphical calculus for {$U_q(\mathfrak{sl}_2)$}.
\newblock {\em Duke Math. J.}, 87(3):409--480, 1997.

\bibitem{GM} A.~M. Garsia and T.~J. McLarnan.
\newblock Relations between {Y}oung's natural and the {K}azhdan-{L}usztig representations of {$S_n$}.
\newblock {\em Adv. in Math.}, 69(1):32--92, 1988.

\bibitem{GL} Ian Grojnowski and George Lusztig.
\newblock On bases of irreducible representations of quantum {${\rm GL}_n$}.
\newblock In {\em Kazhdan-{L}usztig theory and related topics ({C}hicago, {IL}, 1989)}, volume 139 of {\em Contemp. Math.}, pages 167--174. Amer. Math. Soc., Providence, RI, 1992.

\bibitem{Himmanant} Mark Haiman.
\newblock Hecke algebra characters and immanant conjectures.
\newblock {\em J. Amer. Math. Soc.}, 6(3):569--595, 1993.

\bibitem{HK} Jin Hong and Seok-Jin Kang.
\newblock {\em Introduction to quantum groups and crystal bases}, volume~42 of {\em Graduate Studies in Mathematics}.
\newblock American Mathematical Society, Providence, RI, 2002.

\bibitem{HY1} Robert~B. Howlett and Yunchuan Yin.
\newblock Inducing {$W$}-graphs.
\newblock {\em Math. Z.}, 244(2):415--431, 2003.

\bibitem{HY2} Robert~B. Howlett and Yunchuan Yin.
\newblock Inducing {$W$}-graphs. {II}.
\newblock {\em Manuscripta Math.}, 115(4):495--511, 2004.

\bibitem{Jimbo} Michio Jimbo.
\newblock A {$q$}-analogue of {$U(\mathfrak{gl}(N+1))$}, {H}ecke algebra, and the {Y}ang-{B}axter equation.
\newblock {\em Lett. Math. Phys.}, 11(3):247--252, 1986.

\bibitem{Joseph} Anthony Joseph.
\newblock Towards the {J}antzen conjecture. {III}.
\newblock {\em Compositio Math.}, 42(1):23--30, 1980/81.

\bibitem{Kas1} Masaki Kashiwara.
\newblock On crystal bases of the {$Q$}-analogue of universal enveloping algebras.
\newblock {\em Duke Math. J.}, 63(2):465--516, 1991.

\bibitem{Kas2} Masaki Kashiwara.
\newblock Global crystal bases of quantum groups.
\newblock {\em Duke Math. J.}, 69(2):455--485, 1993.

\bibitem{KL} David Kazhdan and George Lusztig.
\newblock Representations of {C}oxeter groups and {H}ecke algebras.
\newblock {\em Invent. Math.}, 53(2):165--184, 1979.

\bibitem{L2} George Lusztig.
\newblock Cells in affine {W}eyl groups.
\newblock In {\em Algebraic groups and related topics ({K}yoto/{N}agoya, 1983)}, volume~6 of {\em Adv. Stud. Pure Math.}, pages 255--287. North-Holland, Amsterdam, 1985.

\bibitem{LBook} George Lusztig.
\newblock {\em Introduction to quantum groups}, volume 110 of {\em Progress in Mathematics}.
\newblock Birkh\"auser Boston Inc., Boston, MA, 1993.

\bibitem{GCT4} Ketan Mulmuley and Milind~A. Sohoni.
\newblock Geometric complexity theory {IV}: {Q}uantum group for the {K}ronecker problem.
\newblock {\em CoRR}, abs/cs/0703110, 2007.

\bibitem{Ram} Arun Ram.
\newblock A {F}robenius formula for the characters of the {H}ecke algebras.
\newblock {\em Invent. Math.}, 106(3):461--488, 1991.

\bibitem{RamSeminormal} Arun Ram.
\newblock Seminormal representations of {W}eyl groups and {I}wahori-{H}ecke algebras.
\newblock {\em Proc. London Math. Soc. (3)}, 75(1):99--133, 1997.

\bibitem{R} Yuval Roichman.
\newblock Induction and restriction of {K}azhdan-{L}usztig cells.
\newblock {\em Adv. Math.}, 134(2):384--398, 1998.

\bibitem{Wenzl} Hans Wenzl.
\newblock Hecke algebras of type {$A_n$} and subfactors.
\newblock {\em Invent. Math.}, 92(2):349--383, 1988.

\end{thebibliography}
\end{document}
\begin{document}
\title{Zeno and anti--Zeno effects for quantum Brownian motion}
\author{Sabrina Maniscalco} \affiliation{Department of Physics, University of Turku, Turun yliopisto, FIN-20014 Turku, Finland}
\author{Jyrki Piilo} \affiliation{Department of Physics, University of Turku, Turun yliopisto, FIN-20014 Turku, Finland}
\author{Kalle-Antti Suominen} \affiliation{Department of Physics, University of Turku, Turun yliopisto, FIN-20014 Turku, Finland} \email{[email protected]}
\date{\today}
\begin{abstract}
In this paper we investigate the occurrence of the Zeno and anti-Zeno effects for quantum Brownian motion. We single out the parameters of both the system and the reservoir governing the crossover between Zeno and anti-Zeno dynamics. We demonstrate that, for high reservoir temperatures, the short-time behaviour of environment-induced decoherence is ultimately responsible for the occurrence of either the Zeno or the anti-Zeno effect. Finally, we suggest a way to manipulate the decay rate of the system and to observe a controlled continuous passage from decay suppression to decay acceleration using engineered reservoirs in the trapped-ion context.
\end{abstract}
\maketitle
The quantum Zeno effect (QZE) predicts that the decay of an unstable system can be slowed down by measuring the system frequently enough \cite{Misra}. In some systems, however, an enhancement of the decay due to frequent measurements, namely the anti-Zeno or inverse Zeno effect (AZE), may occur \cite{Lane}. In this paper we focus on the quantum Brownian motion (QBM) model \cite{HuPazZhang,Maniscalco04b,Weissbook}, which is a paradigmatic model in the theory of open quantum systems. This model, describing the linear interaction of a particle with a bosonic reservoir in thermal equilibrium, is widely used in several physical contexts.
It describes the dynamics of a particle interacting with a quantum field in the dipole approximation \cite{Zurek}, as well as a quantum electromagnetic field propagating in a linear dielectric medium \cite{Anglin96}. The model is used in nuclear physics to describe, e.g., the two-body decay of an unstable particle \cite{Joichi}. In quantum chemistry, the QBM model describes the quantum Kramers turnover, which forms the basis of the modern theory of activated processes \cite{HanggiRev}. Recently, this model has been investigated to explain the loss of quantum coherence (decoherence) due to the interaction between the system and its surroundings. In particular, the absence of macroscopic quantum superpositions in the classical world has been explained, using the QBM model, in terms of environment-induced decoherence (EID) \cite{Zurek}. By the latter term we mean a process which transforms a highly delocalized state in position and/or momentum, e.g.~a superposition of coherent states, into a localized classical state. To the best of our knowledge, the conditions for the occurrence of the quantum Zeno and anti-Zeno effects have never been considered for the QBM model. The aim of this paper is to investigate the Zeno and anti-Zeno phenomena in a system, namely the damped harmonic oscillator, which possesses a classical limit and where it is therefore possible to monitor the transition from quantum to classical dynamics caused by decoherence induced by the environment. Previous studies focus on few-level systems and deal with purely quantum degrees of freedom, such as spin, which have no classical analogue \cite{FacchiRev,Kurizki05}. Our main result is the demonstration that the occurrence of either the Zeno or the anti-Zeno effect stems from the short-time behaviour of the environment-induced decoherence, which therefore drives the Zeno--anti-Zeno crossover.
On the other hand, one can use the QZE or the AZE to manipulate the quantum-classical border by prolonging or shortening, respectively, the persistence of quantum features in the initial state of the system. Moreover, we suggest a physical context in which the Zeno--anti-Zeno crossover can be observed with current technology by means of reservoir engineering techniques \cite{engineerNIST}. The QBM microscopic Hamiltonian model consists of a quantum harmonic oscillator linearly coupled to a quantum reservoir modelled as a collection of non-interacting harmonic oscillators at thermal equilibrium. In the limit of a continuum of frequencies $\omega$, the reservoir properties are described by the reservoir spectral density $J(\omega)$ measuring the microscopic effective coupling strength between the system oscillator and the oscillators of the reservoir. One of the advantages of the QBM model is that, for factorized initial conditions, it can be described by means of the following exact master equation \cite{HuPazZhang,EPJRWA,PRAsolanalitica} \begin{eqnarray} \frac{d \rho_S(t)}{dt} &=& \frac{1}{i\hbar}[H_0, \rho_S(t)] - \Delta(t) [X,[X,\rho_S(t)]] \nonumber \\ &+& \Pi(t) [X,[P,\rho_S(t)]]+ \frac{i}{2} r(t) [X^2,\rho_S(t)] \nonumber \\ &-& i \gamma(t) [X,\{P,\rho_S(t)\}], \label{QBMme} \end{eqnarray} where $\rho_S(t)$ is the reduced density matrix, and $H_0 = \hbar \omega_0 \left( a^{\dag} a + \frac{1}{2} \right)$ is the system Hamiltonian, with $a$, $a^{\dag}$, and $\omega_0$ the annihilation operator, the creation operator and the frequency of the system oscillator, respectively. Moreover $X = \sqrt{\omega_0/2 \hbar} \left( a + a^{\dag}\right) $ and $P=i/\sqrt{2 \hbar \omega_0} \left( a^{\dag}- a\right) $ are the system position and momentum operators, where we have set the mass of the particle $m=1$. 
The master equation given by Eq.~(\ref{QBMme}), being exact, also describes the non-Markovian short-time system-reservoir correlations due to the finite correlation time of the reservoir. In contrast to many other non-Markovian master equations, this one is local in time, i.e.~it does not contain memory integrals. All the non-Markovian character of the system is contained in the time-dependent coefficients appearing in the master equation (for the analytic expression of the coefficients see, e.g., Ref.~\cite{Maniscalco04b}). These coefficients depend only on the form of the reservoir spectral density. The coefficient $r(t)$ describes a time-dependent frequency shift, $\gamma(t)$ is the damping coefficient, and $\Delta(t)$ and $\Pi(t)$ are the normal and the anomalous diffusion coefficients, respectively \cite{HuPazZhang}. In the secular approximation, i.e.~after averaging over the rapidly oscillating terms appearing in the dynamics, the only two relevant coefficients are the diffusion coefficient $\Delta(t)$ and the damping coefficient $\gamma(t)$ \cite{Maniscalco04b}. A simple view of the effect of the diffusion term is given by the approximate solution \cite{HuPazZhang} of the master equation, Eq.~(\ref{QBMme}), in position space,
\begin{equation}
\rho_S(x,x',t) \simeq \rho_S(x,x',0) \exp \left[ -(x-x')^2 \int_0^t \Delta(t_1)\, dt_1 \right].
\label{eq:envinde}
\end{equation}
This equation shows that the diffusion term is responsible for the vanishing of the off-diagonal terms of the density matrix in position space, i.e.~for EID. For the sake of concreteness, we focus on the case of an Ohmic spectral density with a Lorentz-Drude cutoff (see \cite{Weissbook}, p.~25), $J(\omega)= (\omega/\pi)\, \omega_c^2/(\omega_c^2+\omega^2)$, where $\omega_c$ is the cutoff frequency.
This form of the spectral density is one of the most commonly used, since it leads to a friction force proportional to velocity, which is typical of dissipative systems in several physical contexts. The main result of the paper, however, holds for general forms of the spectral density. We assume that the system oscillator is initially prepared in one of the eigenstates of its Hamiltonian, i.e.~a Fock state $\vert n \rangle $. During the time evolution the system is subjected to a series of non-selective measurements, i.e.~measurements which do not select the different outcomes \cite{petruccionebook}. We indicate with $\tau$ the time interval between two successive measurements, and we assume that $\tau$ is so short (and/or the coupling so weak) that second-order processes may be neglected. Stated another way, we assume that the probability $P_n(t)$ of finding the system in its initial state $\vert n \rangle$, i.e.~the survival probability, is such that $P_n(t)\simeq 1 $ and $P_n(t)\gg P_{n\pm 1}(t)$. After $N$ measurements the survival probability reads \cite{FacchiPRL01,FacchiRev} \begin{eqnarray} P_n^{(N)}(t)= P_n(\tau)^N \equiv \exp\left[ - \gamma_n^Z(\tau) t\right], \label{ptau2} \end{eqnarray} where $t=N \tau$ is the effective duration of the experiment and the effective decay rate $\gamma_n^Z(\tau)$ is defined by the last equality. In Eq.~(\ref{ptau2}), we have assumed that the probability $P_n(\tau)$ factorizes. This assumption is justified by the fact that in the following we use second-order perturbation theory. As shown, e.g., in Ref.~\cite{Lax}, up to second order in the coupling constant, the density matrices of the system and of the environment factorize at all times. The behaviour of the effective decay rate appearing in Eq.~(\ref{ptau2}) defines the occurrence of the Zeno or anti-Zeno effect. We indicate with $\gamma_n^0$ the Markovian decay rate of the survival probability, as predicted by the Fermi Golden Rule.
If a finite time $\tau^*$ such that $\gamma_n^Z(\tau^*)=\gamma_n^0$ exists, then for $\tau < \tau^*$ we have $\gamma_n^Z(\tau)/\gamma_n^0<1$, i.e.~the measurements hinder the decay (QZE). On the other hand, for $\tau > \tau^*$ we have $\gamma_n^Z(\tau)/\gamma_n^0>1$ and the measurements enhance the decay (AZE) \cite{FacchiPRL01}. For an initial Fock state $\vert n \rangle$ there exist two possible decay channels associated with the upward and downward transitions to the states $\vert n+1 \rangle$ and $\vert n-1 \rangle$, respectively. The probability that the oscillator leaves its initial state after a short time interval $\tau$ can be written as $\bar{P}_n(\tau)=P_n^{\uparrow} (\tau) + P_n^{\downarrow} (\tau) $, where $P_n^{\uparrow}(\tau)$ and $P_n^{\downarrow} (\tau) $ are the probabilities that an upward or a downward transition, respectively, has occurred in the interval of time $0 < t < \tau$. From Eq.~(12) of Ref.~\cite{Maniscalco04b}, neglecting second order processes, one gets \begin{eqnarray} \frac{ d \rho_{nn} (t)}{dt} \!=\! - \left\{\!(n\!+\!1)\!\left[\Delta(t)\!-\!\gamma(t) \right] \!+\! n \left[\Delta(t)\!+\!\gamma(t) \right] \right\}\rho_{nn}(t), \label{eq:another} \end{eqnarray} where $\rho_{nn}(t) = \langle n \vert \rho_S (t) \vert n \rangle$. From Eq.~(\ref{eq:another}), noting that $\rho_{nn}(t) \simeq \rho_{nn}(0)$, it is straightforward to see that the upward and downward transition probabilities in the interval $0 < t < \tau$ are $P_n^{\uparrow} (\tau) = (n+1) \int_0^{\tau} [\Delta (t) - \gamma(t)] dt$ and $P_n^{\downarrow} (\tau) = n \int_0^\tau [\Delta (t) +\gamma(t)] dt$, respectively. We note that the survival probability $P_n(\tau)$ defined in Eq.~(\ref{ptau2}) is given by $P_n(\tau)=1-\bar{P}_n (\tau)$. 
Assuming that $\tau$ is small enough and keeping only the first two terms in the expansion of the exponential appearing in Eq.~(\ref{ptau2}), one easily gets \begin{eqnarray} \gamma_n^Z(\tau) =\frac{1}{\tau}\left[ \left( 2n+1 \right) \int_0^{\tau} \Delta (t) dt - \int_0^{\tau} \gamma(t) dt \right]. \label{gammanZ} \end{eqnarray} We notice that the quantity $\gamma_n^Z(\tau)$ can also be derived starting from the generalized master equation obtained in Ref.~\cite{Facchi05}, applying the formalism to the case of the harmonic oscillator. Up to second order in the coupling constant, $\Delta(t)$ and $\gamma(t)$ are given by \cite{HuPazZhang,Maniscalco04b} \begin{eqnarray} \!\Delta(t)\!\!\!&=&\!\! \!\alpha^2\!\! \!\!\int_0^t \!\!\!\! \int_0^{\infty}\!\!\!\!\!\! \!\!J(\!\omega\!)\! \coth\!\! \left(\!\hbar \omega/2 k_B T \!\right) \!\cos(\omega t_1\!) \!\cos (\omega_0 t_1\!) d \omega d t_1, \label{delta} \\ \gamma(t)\!\!&=& \alpha^2 \int_0^t\!\! \int_0^{\infty}\!\!\! J(\omega) \sin(\omega t_1) \sin (\omega_0 t_1) d \omega d t_1, \label{gamma} \end{eqnarray} with $\alpha$ the dimensionless microscopic system-reservoir coupling constant. We note that, for high reservoir temperatures $T$, $\Delta(t)\gg \gamma(t)$. Inserting Eqs.~(\ref{delta})-(\ref{gamma}) into Eq.~(\ref{gammanZ}) and carrying out the double time integration, one obtains \begin{eqnarray} \gamma_n^Z(\tau) \!\!=\!\! \tau \!\!\int_0^{\infty} \!\!\!\!\!\!\!J(\omega) \left\{ n^{(-)}_{\omega} {\rm sinc}^2\!\!\! \left(\omega_-\tau \right) + n^{(+)}_{\omega} {\rm sinc}^2 \!\!\!\left(\omega_+\tau \right) \right\} d \omega, \label{gammanZII} \end{eqnarray} with ${\rm sinc}(x) = \sin(x)/x$, $\omega_{\pm}= (\omega\pm \omega_0)/2$, and $n^{(\pm)}_{\omega}=\alpha^2[\coth \left(\hbar \omega/2 k_B T \right) (n+1/2) \pm 1]$.
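For the Ohmic Lorentz-Drude spectral density, the frequency integrals in Eqs.~(\ref{delta})-(\ref{gamma}) can be carried out in closed form. The following Python sketch (our own evaluation, with $\hbar=k_B=1$, $m=1$, and with the high-temperature approximation $\coth(\hbar\omega/2k_BT)\approx 2k_BT/\hbar\omega$ used in $\Delta(t)$; all parameter values and function names are ours) illustrates the claim that $\Delta(t)\gg\gamma(t)$ for high reservoir temperatures:

```python
import numpy as np

# Parameters (hbar = k_B = 1): coupling, system frequency, cutoff, temperature.
alpha, w0, wc, T = 0.1, 1.0, 5.0, 100.0

def Delta(t):
    # High-T diffusion coefficient: the inner frequency integral of Eq. (delta)
    # reduces to (k_B T wc / hbar) exp(-wc t1) for the Lorentz-Drude J(omega).
    num = wc - np.exp(-wc * t) * (wc * np.cos(w0 * t) - w0 * np.sin(w0 * t))
    return alpha**2 * T * wc * num / (wc**2 + w0**2)

def gamma_coef(t):
    # Damping coefficient: the inner integral of Eq. (gamma) gives (wc^2/2) exp(-wc t1).
    num = w0 - np.exp(-wc * t) * (w0 * np.cos(w0 * t) + wc * np.sin(w0 * t))
    return alpha**2 * (wc**2 / 2.0) * num / (wc**2 + w0**2)

# Markovian (t -> infinity) values of the two coefficients.
Delta_M = alpha**2 * T * wc**2 / (wc**2 + w0**2)
gamma_M = alpha**2 * (wc**2 / 2.0) * w0 / (wc**2 + w0**2)
```

In this approximation one finds $\Delta_M/\gamma_M = 2k_BT/\hbar\omega_0 \gg 1$, consistent with the statement above that the damping term is negligible at high $T$.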
In the limit $\tau \rightarrow \infty$ one gets the Markovian value of the effective decay rate $\gamma_n^0 = (2n+1) \Delta_M - \gamma_M$, with $\Delta_M$ and $\gamma_M$ the Markovian values of the diffusion and damping coefficients, respectively. The quantity ruling the occurrence of either the QZE or the AZE is the ratio \begin{eqnarray} \frac{\gamma_n^Z (\tau)}{\gamma_n^0}= \frac{\left( 2n+1 \right) \int_0^{\tau} \Delta (t) dt - \int_0^{\tau} \gamma(t) dt}{\tau [(2n+1) \Delta_M - \gamma_M] }. \label{gammaratio} \end{eqnarray} It is worth underlining that, in general, this quantity depends on the initial state of the system, i.e.~on the initial Fock state $\vert n \rangle$. For high $T$, however, since $\Delta(t)\gg \gamma(t)$, Eq.~(\ref{gammaratio}) becomes independent of the initial state, \begin{eqnarray} \frac{\gamma_n^Z (\tau)}{\gamma_n^0}&\simeq& \frac{ \int_0^{\tau} \Delta(t) dt}{\tau \Delta_M}. \label{eq:gammaz} \end{eqnarray} Equation~(\ref{eq:gammaz}) also approximates Eq.~(\ref{gammaratio}) for initial highly excited Fock states, i.e.~for $n \gg 1$. This equation, together with Eq.~(\ref{gammaratio}), establishes a connection between two fundamental phenomena of quantum theory, namely the quantum Zeno effects and environment-induced decoherence. This connection and its physical consequences, which will be carefully described in the rest of the paper, constitute our main result. We underline that, in deriving Eqs.~(\ref{gammaratio})-(\ref{eq:gammaz}), no assumption on the form of the spectral density has been made. \begin{figure} \caption{\label{fig:1} (a) Ratio $\gamma_n^Z(\tau)/\gamma_n^0$ for a high-$T$ reservoir, showing the Zeno--anti-Zeno crossover for $r=\omega_c/\omega_0<1$; (b) short-time behaviour of the diffusion coefficient $\Delta(t)$ at high $T$; (c)-(d) the corresponding quantities for a zero-$T$ reservoir.} \end{figure} Equation (\ref{eq:gammaz}) tells us that, for high $T$ reservoirs and/or $n \gg 1$, the effective decay rate depends {\it only} on the diffusion coefficient $\Delta(t)$, which in turn describes EID. This yields a new physical explanation of the occurrence of either Zeno or anti-Zeno effects for QBM. The eigenstates of the quantum harmonic oscillator are highly nonclassical states.
They are not localized, either in position or in momentum, and therefore they are very sensitive to environment-induced decoherence. If the average EID in the interval between two measurements, quantified by $\int_0^{\tau} \Delta(t) dt /\tau$, is smaller than the Markovian one, quantified by $\Delta_M$, then when the system \lq\lq restarts\rq\rq \; the evolution after each measurement, the effect of EID is again less than in the Markovian case. Essentially, the measurements force the system to experience repeatedly an effective EID which is weaker than the Markovian one. In this case the QZE occurs. On the contrary, whenever the average increase of EID in the interval between two measurements is greater than its Markovian value, the measurements force the system always to experience a stronger decoherence, and hence the system decay is accelerated (AZE). An important consequence of Eq.~(\ref{eq:gammaz}) is that the quantum--classical border can be manipulated by means of the QZE and of the AZE. When the QZE occurs, indeed, an initially delocalized state of the harmonic oscillator, such as one of its energy eigenstates, remains delocalized for longer times, hence the QZE \lq\lq moves forward in time\rq\rq \; the quantum--classical border. The opposite situation happens when the AZE occurs. The analytic expression of the diffusion coefficient allows us to single out the relevant system and reservoir parameters ruling the crossover between the QZE and the AZE. The time $\tau^*$ at which $\gamma_n^Z(\tau^*)=\gamma_n^0$, when it exists and is finite, can be seen as a transition time between Zeno and anti-Zeno phenomena \cite{FacchiPRL01}. Our analysis shows that there exist two other relevant parameters, namely the ratio $r= \omega_c/\omega_0$, quantifying the asymmetry of the spectral distribution, and the ratio $k_B T/ \hbar\omega_0$.
For a fixed time interval $\tau$, a change in the values of $r$ and $T$ may lead to a passage from Zeno dynamics to anti-Zeno dynamics and vice versa. For high $T$, it is easy to prove that a Zeno--anti-Zeno crossover exists only for values of $r < 1$, i.e.~for $\omega_0>\omega_c$, as shown in Fig.~\ref{fig:1}(a). This can be understood in terms of EID by looking at the short-time dynamics of $\Delta(t)$ in the time interval $0<t<\tau$, i.e.~before the first measurement is performed (see Fig.~\ref{fig:1}(b)). The situation changes drastically for the case of a zero-$T$ reservoir, characterized by an asymmetric spectral density. For $n \gg 1 $, indeed, in contrast to the high $T$ case, the Zeno--anti-Zeno crossover exists also for values of $r\gg 1$, i.e.~in the case $\omega_0 \ll \omega_c$. The region in which only Zeno dynamics may occur now appears at the edge of the spectral density function, i.e.~for $\omega_0 \simeq \omega_c$. The reason for such different dynamics at small values of $\omega_0$ is the strong decoherence showing up at low temperatures not only when $r \ll 1$ (as in the high $T$ case), but also for $r \gg 1$ (see Figs.~\ref{fig:1}(c)-(d)). Summarizing, the occurrence of the Zeno or anti-Zeno effects is directly related to the absence or presence, respectively, of the so-called initial jolt of the diffusion coefficient $\Delta(t)$ \cite{HuPazZhang}, which is the signal of strong initial decoherence. Another interesting aspect stemming from our results concerns the Zeno--anti-Zeno crossover for an initial ground state ($n=0$) of the system oscillator. The high $T$ behaviour is given by Eq.~(\ref{eq:gammaz}) [see Figs.~\ref{fig:1}(a)-(b)]. For very low reservoir temperatures ($T \simeq 0$), $\Delta_M \simeq \gamma_M$ and therefore the denominator of Eq.~(\ref{gammaratio}) approaches zero, implying that $\gamma_n^{Z}(\tau)/\gamma_n^0 \gg 1$ always, i.e.~the measurements always enhance the decay (AZE).
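The high-$T$ crossover behaviour can be checked directly from Eq.~(\ref{eq:gammaz}). The sketch below (Python; it reuses our closed form of $\Delta(t)/\Delta_M$ for the Lorentz-Drude density in the high-$T$ approximation, and the parameter values are ours) finds a crossing of $\gamma_n^Z(\tau)/\gamma_n^0$ through $1$ for $r=0.1$ but not for $r=10$:

```python
import numpy as np

def zeno_ratio(tau, w0, wc, npts=4001):
    # (1/tau) * int_0^tau Delta(t)/Delta_M dt, Eq. (eq:gammaz), evaluated with the
    # high-T Lorentz-Drude closed form of Delta(t) (trapezoidal quadrature).
    t = np.linspace(0.0, tau, npts)
    d = (wc - np.exp(-wc * t) * (wc * np.cos(w0 * t) - w0 * np.sin(w0 * t))) / wc
    return np.sum((d[1:] + d[:-1]) * np.diff(t)) / (2.0 * tau)

taus = np.linspace(0.005, 5.0, 200)
ratio_r_small = np.array([zeno_ratio(tau, w0=10.0, wc=1.0) for tau in taus])  # r = 0.1
ratio_r_large = np.array([zeno_ratio(tau, w0=1.0, wc=10.0) for tau in taus])  # r = 10
```

For $r<1$ the ratio starts below $1$ at short $\tau$ (QZE) and crosses above $1$ at a finite $\tau^*$ (AZE), while for $r>1$ it stays below $1$ over the whole sampled range, in agreement with the discussion above.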
Summarizing, in this case, by changing the reservoir temperature, e.g.~starting from high $T$ and lowering the temperature, one observes a passage from the situation depicted in Fig.~\ref{fig:1}(a) to a situation in which only the AZE is practically observable. The possibility of controlling both the environment and the system-environment coupling would allow one to monitor the transition from Zeno to anti-Zeno dynamics. The use of artificial controllable engineered environments has been recently demonstrated for single trapped ions \cite{engineerNIST}. In Ref.~\cite{Maniscalco04a} it has been shown that the engineered amplitude reservoir realized by applying noisy electric fields to the trap electrodes in \cite{engineerNIST} can be used to simulate quantum Brownian motion and to reveal the quadratic short-time dynamics. Shuttering these noisy electric fields, one could model a fast switch off--on of the environment. Actually, when the noise is off, the reservoir simply does not exist anymore. The action of the sudden switch off-on of the environment may be seen as a physical implementation of the operation of tracing over the reservoir degrees of freedom. The operation of trace is a typical example of a non-selective measurement (see, e.g., Ref.~\cite{petruccionebook}, p.~321). Hence a succession of short switch off-on periods, realized by shuttering the engineered applied noise, would induce Zeno or anti-Zeno dynamics depending on the values of the system and reservoir parameters and on the shuttering period. This is the core idea for monitoring the Zeno--anti-Zeno crossover with trapped ions. It is worth underlining that, since the measurements causing the acceleration or inhibition of the decay, implemented by a fast switch off-on of the noisy electric field, are {\it non-selective}, they do not disturb the vibrational state of the ion. Therefore, by using the setup of Ref.
\cite{engineerNIST}, a comparison between the population $P_n^{(N)}(t)$ of the initial vibrational state (e.g. the vibrational ground state $\vert n=0 \rangle$) in the presence of {\it shuttered noise} (with $N$ the number of switching off-on periods), and the population $P_n(t)$ in the presence of {\it un-shuttered noise}, would show that $P_n^{(N)}(t)>P_n(t)$ (QZE) or $P_n^{(N)}(t)< P_n(t)$ (AZE) depending on the choice of the parameters. All the parameters $\omega_0$, $\omega_c$, $T$ and $\tau$, driving the Zeno--anti-Zeno crossover, may be varied in the experiments. Although the value of $\omega_0$ may be modified only within a certain range and under certain constraints, the modification of both $\omega_c$ and $T$ may be obtained by simply filtering the applied noise and varying the noise fluctuations, respectively \cite{Maniscalco04a}. Since all the parameters ruling the Zeno--anti-Zeno crossover are easily adjustable in the experiments, its observation in the trapped-ion context should already be within the grasp of experimentalists. The authors acknowledge financial support from the Academy of Finland (projects 207614, 206108, 108699, 105740), the Magnus Ehrnrooth Foundation, and the EU's project CAMEL (Grant No. MTKD-CT-2004-014427). Stimulating discussions with David Wineland, Paolo Facchi and Saverio Pascazio are gratefully acknowledged. \end{document}
\begin{document} \title[$k$-reflexivity defect of the image of a generalized derivation]{$k$-reflexivity defect of the image of a generalized derivation} \author[T. Rudolf]{Tina Rudolf} \address{University of Ljubljana, IMFM, Jadranska ul. 19, 1000 Ljubljana, Slovenia} \email{[email protected]} \keywords{$k$-reflexivity; $k$-reflexivity defect; generalized derivations; elementary operator} \subjclass[2010]{Primary 47L05; Secondary 15A21, 15A22} \begin{abstract} Let $\mathcal{X}$ be a finite-dimensional complex vector space and let $k$ be a positive integer. An explicit formula for the $k$-reflexivity defect of the image of a generalized derivation on $L(\mathcal{X})$, the space of all linear transformations on $\mathcal{X}$, is given. Using the latter, we also study the $k$-reflexivity defect of the image of an elementary operator of the form $\Delta(T)=ATB-T$ ($T \in L(\mathcal{X})$). \end{abstract} \maketitle \section{Introduction} \setcounter{theorem}{0} Let $\mathcal{X}$ be a finite-dimensional complex vector space and let $L(\mathcal{X})$ be the space of all linear transformations on $\mathcal{X}$. Let $k$ be a positive integer and denote by $\mathcal{F}_k$ the set of all elements in $\mathbb{M}_n$ of rank $k$ or less. The $k$-reflexive cover of a non-empty subset $\mathcal{S} \subseteq L(\mathcal{X})$ is defined by \begin{equation*} {\rm Ref_{k}} \mathcal{S}=\{T \in L(\mathcal{X}): \forall \varepsilon >0, \, \forall x_1, \ldots,x_k \in \mathcal{X},\, \exists S \in \mathcal{S}: \, \|Tx_i-Sx_i\|<\varepsilon, \, i=1,\ldots,k\}. \end{equation*} It is easy to see that ${\rm Ref_{k}} \mathcal{S}$ is a linear subspace of $L(\mathcal{X})$. A linear subspace $\mathcal{S}$ is said to be $k$-reflexive if ${\rm Ref_{k}} \mathcal{S}=\mathcal{S}$. The $k$-reflexivity defect of a non-empty subset $\mathcal{S}$ is defined by ${\rm rd} _k(\mathcal{S})=\dim ({\rm Ref_{k}} \mathcal{S} / \mathcal{S})$.
Since $\mathcal{X}$ is finite-dimensional, ${\rm rd} _k(\mathcal{S})=\dim ({\rm Ref_{k}} \mathcal{S})-\dim (\mathcal{S})$ holds. The annihilator of a non-empty subset $\mathcal{S} \subseteq \mathbb{M}_n$ is defined by $\mathcal{S}_\perp=\{C \in \mathbb{M}_n:\, {\rm tr} (CS)=0 \ \textup{for all} \ S \in \mathcal{S}\}$, where ${\rm tr}(\cdot)$ denotes the trace functional. It was shown in \cite{KL, KL2} that \begin{equation} \label{k-ref} {\rm Ref_{k}} \mathcal{S}=\left(\mathcal{S}_\perp \cap \mathcal{F}_k \right)_\perp \end{equation} holds. The latter obviously implies that a $k$-reflexive space is also $j$-reflexive for all $j \geq k$. Let $A,\,B \in L(\mathcal{X})$ be invertible linear transformations and let $\mathcal{S}$ be a linear subspace of $L(\mathcal{X})$. Let us denote $A\mathcal{S} B=\{ASB:\,S \in \mathcal{S}\}$ and $\mathcal{S}^\intercal=\{S^\intercal:\, S \in \mathcal{S}\}$. It is well known that transformations of the type \begin{equation} \label{transf} \mathcal{S} \mapsto A\mathcal{S} B = \{ASB:\,S \in \mathcal{S}\} \quad \textrm{and} \quad \mathcal{S} \mapsto \mathcal{S}^\intercal =\{S^\intercal:\, S \in \mathcal{S}\} \end{equation} preserve the $k$-reflexivity defect. Hence, since $\mathcal{X}$ is a finite-dimensional complex vector space, one can assume that $\mathcal{X}=\mathbb{C}^n$ for some $n \in \mathbb{N}$ and $L(\mathcal{X})$ may be identified with $\mathbb{M}_n$, the algebra of all $n$-by-$n$ complex matrices. Throughout this paper we will be dealing with subspaces of $\mathbb{M}_n$ which have a decomposition of the form $$\mathcal{S}=\left(\begin{array}{ccc} \mathcal{S}_{11}&\ldots&\mathcal{S}_{1N}\\\vdots&&\vdots\\\mathcal{S}_{M1}& \ldots&\mathcal{S}_{MN}\end{array}\right),$$ where, for each pair of indices $(i,\,j)$, $\mathcal{S}_{ij}$ is a subspace of $\mathbb{M}_{m_i,n_j}$, the space of all $m_i$-by-$n_j$ complex matrices, and $\sum_{i=1}^Mm_i=\sum_{j=1}^Nn_j=n$.
It is not hard to see that for spaces of this type one has \begin{equation} \label{ref} {\rm Ref_{k}} (\mathcal{S})=\left(\begin{array}{ccc} {\rm Ref_{k}} (\mathcal{S}_{11})&\ldots&{\rm Ref_{k}} (\mathcal{S}_{1N})\\\vdots&&\vdots\\{\rm Ref_{k}} (\mathcal{S}_{M1})&\ldots&{\rm Ref_{k}} (\mathcal{S}_{MN})\end{array}\right) \qquad \textrm{and} \qquad {\rm rd} _k\left(\mathcal{S}\right)=\sum_{i=1}^M \sum_{j=1}^N{\rm rd} _k \left(\mathcal{S}_{ij}\right). \end{equation} In particular, $\mathcal{S}$ is $k$-reflexive if and only if $\mathcal{S}_{ij}$ is $k$-reflexive for every pair of indices $i\in \{1,\ldots,M\}$, $j \in \{1,\ldots,N\}$. \section{Elementary operators} \label{EO} \setcounter{theorem}{0} Let $(A_1,\ldots,A_k)$ and $(B_1,\ldots,B_k)$ be arbitrary $k$-tuples of $n$-by-$n$ complex matrices. The elementary operator on $\mathbb{M}_n$ with coefficients $(A_1,\ldots,A_k)$ and $(B_1,\ldots,B_k)$ is defined by \begin{equation*} \label{delta} \Delta (T)=A_1TB_1+\ldots+A_kTB_k, \qquad T \in \mathbb{M}_n. \end{equation*} If $A_1,\ldots,A_k$ are linearly independent and the same holds for $B_1,\ldots,B_k$, then $\Delta$ is called an elementary operator of length $k$. The simplest example of such an operator is, of course, a two-sided multiplication. Namely, let $A,\,B \in \mathbb{M}_n$ and let $\Delta$ be an elementary operator defined by $\Delta \left(T\right)=ATB$ for $T \in \mathbb{M}_n$. It is easy to see that the kernel and the image of $\Delta$ are reflexive spaces. In fact, if $\Delta$ is an elementary operator of length $k$ on $\mathbb{M}_n$, then by \cite[Proposition 1.1]{B1} the space $\ker \Delta$ is $j$-reflexive for every $j \geq k$. It is reasonable to ask whether the same holds for ${\rm im} \Delta$, and we show that this is not generally the case.
\begin{lemma} \label{anih} Let $\Delta$ be an elementary operator on $\mathbb{M}_n$ with coefficients $\left(A_1,\, \ldots,\, A_k\right)$ and \linebreak $\left(B_1,\, \ldots,\,B_k\right)$, defined by $\Delta \left(T\right)=A_1TB_1+A_2TB_2+\ldots+A_kTB_k$. Then there exists an elementary operator $\tilde{\Delta}$ such that $\left({\rm im} \Delta\right)_\perp=\ker \tilde{\Delta}$. \end{lemma} \begin{proof} Define $\tilde{\Delta}\left(T\right)=B_1TA_1+B_2TA_2+\ldots+B_kTA_k$ for $T \in \mathbb{M}_n$. If $T$ is an arbitrary matrix, then ${\rm tr} (\Delta(T)C)={\rm tr}(T(B_1CA_1+\ldots+B_kCA_k))$ and therefore $C \in \left({\rm im} \Delta\right)_\perp$ if and only if $\tilde{\Delta}(C) \in (\mathbb{M}_n)_\perp=\{0\}$, that is, $C \in \ker \tilde{\Delta}$. \end{proof} Next, we introduce some notation. For $k \in \mathbb{N}$ and $\alpha \in \mathbb{C}$, let $J_k(\alpha)$ denote the Jordan block of size $k$, i.e., $$J_k(\alpha)=\left(\begin{smallmatrix} \alpha&1&&\\&\ddots&\ddots&\\&&\alpha&1\\&&&\alpha\end{smallmatrix}\right) \in \mathbb{M}_k.$$ In the following example we show that for any $n \geq 3$ there exists an inner derivation $\delta$ on $\mathbb{M}_n$ such that ${\rm im} \delta$ is not $(n-1)$-reflexive. Consequently, the image of such an elementary operator of length $2$ is not $2$-reflexive. \begin{example} \rm Define $\delta\left(T\right)=J_n(0)T-TJ_n(0)$ for $T \in \mathbb{M}_n$. By \eqref{k-ref}, every subspace of $\mathbb{M}_n$ is $n$-reflexive, hence ${\rm rd}_n ({\rm im} \delta)=0$. It follows by Lemma \ref{anih} that $({\rm im} \delta)_\perp$ is simply $\{J_n(0)\}'$, the commutant of the Jordan block $J_n(0)$. One can easily verify that $\{J_n(0)\}'$ is the algebra of all $n \times n$ upper triangular Toeplitz matrices, which we will denote by $\mathfrak{T}_n$.
Namely, $$\left({\rm im} \delta\right)_\perp=\left\{ \left( \begin{array}{ccccc} a_1&a_2&\ldots&\ldots&a_n\\ 0&a_1&a_2& &\vdots \\ \vdots&\ddots&\ddots&\ddots&\vdots \\ \vdots&&\ddots&\ddots&a_2 \\ 0&\ldots&\ldots&0&a_1\end{array}\right) :\, a_1,\,a_2,\ldots,a_n \in \mathbb{C} \right\}.$$ By \eqref{k-ref}, ${\rm im} \delta$ is not an $(n-1)$-reflexive space, since $\left({\rm im} \delta\right)_\perp \cap \mathcal{F}_{n-1} \subsetneq \left({\rm im} \delta\right)_\perp$. Note that \eqref{k-ref} also implies that ${\rm rd}_k \left({\rm im} \delta\right)=n-k$ for $1 \leq k \leq n-1$. Indeed, $\dim ({\rm im} \delta)=n^2-\dim (({\rm im} \delta)_\perp)=n^2-n$ and by \eqref{k-ref} we have $\dim ({\rm Ref_{k}} ({\rm im} \delta))=n^2-\dim(({\rm im} \delta)_\perp \cap \mathcal{F}_k)=n^2-k$ for $1 \leq k \leq n-1$. \end{example} \section{Generalized derivations} \label{GD} \setcounter{theorem}{0} Let $\Delta$ be an elementary operator of length $2$ on $\mathbb{M}_n$, i.e., a linear transformation of the form $\Delta(T)=A_1TB_1+A_2TB_2$ ($T \in \mathbb{M}_n$), where $A_1,\,A_2$ and $B_1,\,B_2$ are two pairs of linearly independent matrices. By \cite[Proposition 1.1]{B1}, one has ${\rm rd}_k(\ker \Delta)=0$ for all $k \geq 2$. In \cite{R}, the reflexivity of such elementary operators was studied and an explicit formula for the reflexivity defect of their kernels was given. This motivates the main subject of this paper, that is, the $k$-reflexivity defect of the image of some special examples of elementary operators of length $2$. Let $A,\,B \in \mathbb{M}_n$ be arbitrary matrices. Define the generalized derivation on $\mathbb{M}_n$ with coefficients $A$ and $B$ by $\Delta \left(T\right)=AT-TB$, $T \in \mathbb{M}_n$. Obviously, $\Delta$ is an example of an elementary operator of length $2$.
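The defect computed in the example above can be confirmed numerically. The numpy sketch below (our own code) represents $\delta$ as an $n^2\times n^2$ matrix via the identities $\mathrm{vec}(AT)=(I\otimes A)\mathrm{vec}(T)$ and $\mathrm{vec}(TB)=(B^\intercal\otimes I)\mathrm{vec}(T)$, together with the observation (ours, consistent with $\dim(({\rm im}\,\delta)_\perp\cap\mathcal{F}_k)=k$ used above) that a matrix in $\mathfrak{T}_n$ has rank at most $k$ exactly when $a_1=\ldots=a_{n-k}=0$, so the rank-at-most-$k$ part of the commutant spans $\{J_n(0)^m:\, n-k\leq m\leq n-1\}$:

```python
import numpy as np

def rd_k_inner_derivation(n, k):
    # k-reflexivity defect of im(delta) for delta(T) = J_n(0) T - T J_n(0), 1 <= k <= n-1
    J = np.diag(np.ones(n - 1), 1)                        # the Jordan block J_n(0)
    I = np.eye(n)
    M = np.kron(I, J) - np.kron(J.T, I)                   # delta acting on vec(T)
    dim_im = np.linalg.matrix_rank(M)                     # equals n^2 - n
    # span of the rank-<=k elements of the commutant (im delta)_perp:
    spanners = [np.linalg.matrix_power(J, m) for m in range(n - k, n)]
    # Ref_k(im delta) = {T : tr(T S) = 0 for all such S}; tr(TS) = vec(T) . vec(S^T)
    P = np.array([S.T.flatten() for S in spanners])
    dim_ref = n * n - np.linalg.matrix_rank(P)            # equals n^2 - k
    return dim_ref - dim_im                               # equals n - k
```

Running this for small $n$ reproduces ${\rm rd}_k({\rm im}\,\delta)=n-k$.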
Let $J_{p_1}(\lambda_1)\oplus \ldots \oplus J_{p_N}(\lambda_N)$ be the Jordan canonical form of $A$, where $\sum_{i=1}^Np_i=n$ and $\lambda_1, \ldots,\lambda_N$ are not necessarily distinct eigenvalues of $A$. Similarly, let $J_{r_1}(\mu_1)\oplus \ldots \oplus J_{r_M}(\mu_M)$ be the Jordan canonical form of $B$, where $\sum_{i=1}^Mr_i=n$ and $\mu_1, \ldots,\mu_M$ are not necessarily distinct eigenvalues of $B$. Let $R(i,\,j,\,k)$ be a non-negative integer defined by $$R(i,\,j,\,k):=\left\{\begin{array}{ccl}\min \{p_i,\,r_j\}-k & : & \textup{$\lambda_i=\mu_j$ and $k<\min\{p_i,\,r_j\}$,}\\ 0 & : & \textup{$\lambda_i \neq \mu_j$ or $k \geq \min\{p_i,\,r_j\}$}.\end{array}\right.$$ \begin{proposition} \label{pp2} With the above notation, the $k$-reflexivity defect of ${\rm im} \Delta$ can be expressed as $${\rm rd}_k({\rm im} \Delta)=\sum_{i=1}^N\sum_{j=1}^M R(i,\,j,\,k).$$ In particular, ${\rm im} \Delta$ is a $k$-reflexive space if and only if all roots of the greatest common divisor of the minimal polynomials $m_A$ and $m_B$ of $A$ and $B$, respectively, have multiplicity at most $k$. \end{proposition} \begin{proof} Let $\mathbf{0}_{p,r}$ denote the $p \times r$ zero matrix ($p,\,r \in \mathbb{N}$) and let $A$ and $B$ be as before Proposition \ref{pp2}. For $1 \leq i \leq N$ and $1 \leq j \leq M$ define the following elementary operators on $\mathbb{M}_{p_i,r_j}$ and $\mathbb{M}_{r_j,p_i}$, respectively, \begin{equation*} \begin{split} \Delta_{p_i,r_j}(T) &= J_{p_i}(\lambda_i)T-TJ_{r_j}(\mu_j) \qquad (T \in \mathbb{M}_{p_i,r_j}), \\ \Delta_{r_j,p_i}(T) &= J_{r_j}(\mu_j)T-TJ_{p_i}(\lambda_i) \qquad (T \in \mathbb{M}_{r_j,p_i}). \end{split} \end{equation*} Lemma \ref{anih} yields $({\rm im} \Delta_{p_i,r_j})_\perp=\ker \Delta_{r_j,p_i}$. If $\lambda_i \neq \mu_j$, then $\Delta_{p_i,r_j}$ is bijective and ${\rm im} \Delta_{p_i,r_j}$ is a $k$-reflexive space for every $k \in \mathbb{N}$. Now assume that $\lambda_i=\mu_j$.
It is not hard to see that \begin{equation*} \begin{split} \ker \Delta_{r_j,p_i} &=\left\{\left(\begin{array}{cc}\mathbf{0}_{r_j,p_i-r_j} & T\end{array}\right): T \in \mathfrak{T}_{r_j} \right\} \qquad \textrm{if $r_j \leq p_i$,}\\ \ker \Delta_{r_j,p_i} &=\left\{ \left(\begin{array}{c} T \\ \mathbf{0}_{r_j-p_i,p_i}\end{array}\right): T \in \mathfrak{T}_{p_i}\right\} \qquad \textrm{if $r_j > p_i$.} \end{split} \end{equation*} Let us denote $d=\min\{p_i,r_j\}$ and $D=\max\{p_i,r_j\}$. Since transformations of the type \eqref{transf} preserve the $k$-reflexivity defect, we can without any loss of generality assume that $\ker \Delta_{r_j,p_i}=\left\{\big(\begin{array}{cc} \mathbf{0}_{d,D-d} & T \end{array}\big): T \in \mathfrak{T}_d \right\}$ and therefore $\dim ({\rm im} \Delta_{p_i,r_j})=d(D-1)$. The structure of the space $\ker \Delta_{r_j,p_i}$ yields that $\ker \Delta_{r_j,p_i} \cap \mathcal{F}_k$ is a linear space with the following property. If $k \geq d$, then $\ker \Delta_{r_j,p_i} \cap \mathcal{F}_k=\ker \Delta_{r_j,p_i}$. Otherwise, if $1 \leq k <d$, then $$\ker \Delta_{r_j,p_i} \cap \mathcal{F}_k=\left\{\left(\begin{array}{cc} \mathbf{0}_{k,D-k}& T \\ \mathbf{0}_{d-k,D-k}& \mathbf{0}_{d-k,k} \end{array}\right): T \in \mathfrak{T}_k\right\}.$$ Therefore, ${\rm im} \Delta_{p_i,r_j}$ is a $k$-reflexive space iff $k \geq d$ or $\lambda_i \neq \mu_j$. Otherwise, if $k<d$ and $\lambda_i=\mu_j$, one gets $\dim({\rm Ref_{k}} ({\rm im} \Delta_{p_i,r_j}))=dD-k$. The result in the general setting now follows from \eqref{ref}. \end{proof} Let $A,\,B \in \mathbb{M}_n$ be as before Proposition \ref{pp2}. Let $\Delta: \mathbb{M}_n \rightarrow \mathbb{M}_n$ be an elementary operator defined by $\Delta (T)=ATB-T$.
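Before turning to this operator, we remark that the counting in Proposition~\ref{pp2} is straightforward to script. The Python fragment below (function and variable names are ours) evaluates ${\rm rd}_k({\rm im}\,\Delta)$ for the generalized derivation $\Delta(T)=AT-TB$ directly from the Jordan data of $A$ and $B$:

```python
def R(p, lam, r, mu, k):
    # contribution of the Jordan pair (J_p(lam), J_r(mu)), as in the proposition above
    if lam == mu and k < min(p, r):
        return min(p, r) - k
    return 0

def rd_im(blocks_A, blocks_B, k):
    # blocks_* : lists of (size, eigenvalue) pairs describing the Jordan canonical forms
    return sum(R(p, lam, r, mu, k) for (p, lam) in blocks_A for (r, mu) in blocks_B)

# Example: A ~ J_3(0) + J_2(1) and B ~ J_2(0) + J_3(1).  Both common eigenvalues 0 and 1
# have multiplicity 2 in the gcd of the minimal polynomials, so im(Delta) is
# 2-reflexive but not 1-reflexive.
A_blocks = [(3, 0), (2, 1)]
B_blocks = [(2, 0), (3, 1)]
```

For this data one gets ${\rm rd}_1({\rm im}\,\Delta)=2$ and ${\rm rd}_k({\rm im}\,\Delta)=0$ for $k\geq 2$.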
Let $R(i,\,j,\,k)$ be a non-negative integer defined by $$R(i,\,j,\,k):=\left\{\begin{array}{ccl}\min \{p_i,\,r_j\}-k & : & \textup{$\lambda_i,\,\mu_j \neq 0$, $\lambda_i=\frac{1}{\mu_j}$ and $k<\min\{p_i,\,r_j\}$,}\\ 0 & : & \textup{otherwise}.\end{array}\right.$$ \begin{corollary} With the above notation, the $k$-reflexivity defect of ${\rm im} \Delta$ can be expressed as $${\rm rd}_k({\rm im} \Delta)=\sum_{i=1}^N\sum_{j=1}^M R(i,\,j,\,k).$$ \end{corollary} \begin{proof} Define $\Delta_{p_i,r_j}(T)=J_{p_i}(\lambda_i)TJ_{r_j}(\mu_j)-T$ for $T \in \mathbb{M}_{p_i,r_j}$. By \eqref{ref} we get ${\rm rd}_k({\rm im} \Delta)=\sum_{i=1}^N\sum_{j=1}^M {\rm rd}_k({\rm im} \Delta_{p_i,r_j})$, hence it suffices to determine ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})$. If $\lambda_i=\mu_j=0$, then it is not hard to see that for $T=(t_{uv}) \in \mathbb{M}_{p_i,r_j}$ we have $$\Delta_{p_i,r_j}(T)=-T+\left(\begin{array}{cccc}0&t_{21}&\ldots&t_{2,r_j-1}\\\vdots&\vdots&&\vdots\\0&t_{p_i,1}&\ldots&t_{p_i,r_j-1}\\0&0&\ldots&0 \end{array}\right),$$ therefore ${\rm im} \Delta_{p_i,r_j}=\mathbb{M}_{p_i,r_j}$ and ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})=0$ for every positive integer $k$. If $\lambda_i=0$ and $\mu_j \neq 0$, then ${\rm im} \Delta_{p_i,r_j}=\{XJ_{r_j}(\mu_j):\, X \in {\rm im} \tilde{\Delta}_{p_i,r_j}\}$ where $\tilde{\Delta}_{p_i,r_j}: \mathbb{M}_{p_i,r_j} \rightarrow \mathbb{M}_{p_i,r_j}$ is a generalized derivation of the form $\tilde{\Delta}_{p_i,r_j}(T)=J_{p_i}(0)T-TJ_{r_j}(\mu_j)^{-1}$. Thus we have ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})={\rm rd}_k({\rm im} \tilde{\Delta}_{p_i,r_j})$. By \cite[Example 6.2.13]{HJ1} one can easily see that inverting matrices preserves the sizes of Jordan blocks, hence Proposition \ref{pp2} yields ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})=0$. 
Similarly, if $\lambda_i \neq 0$ and $\mu_j=0$ or if $\lambda_i \neq 0$, $\mu_j \neq 0$ and $\lambda_i \neq \frac{1}{\mu_j}$, then again Proposition \ref{pp2} yields ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})=0$. Now assume that $\lambda_i,\,\mu_j \neq 0$ and that $\lambda_i = \frac{1}{\mu_j}$. As before, ${\rm rd}_k({\rm im} \Delta_{p_i,r_j})={\rm rd}_k({\rm im} \tilde{\Delta}_{p_i,r_j})$ where $\tilde{\Delta}_{p_i,r_j}: \mathbb{M}_{p_i,r_j} \rightarrow \mathbb{M}_{p_i,r_j}$ is a generalized derivation of the form $\tilde{\Delta}_{p_i,r_j}(T)=J_{p_i}(\lambda_i)T-TJ_{r_j}(\mu_j)^{-1}$. Now Proposition \ref{pp2} yields that ${\rm rd}_k({\rm im} \tilde{\Delta}_{p_i,r_j})=0$ if $k \geq \min\{p_i,r_j\}$ and ${\rm rd}_k({\rm im} \tilde{\Delta}_{p_i,r_j})=\min \{p_i,\,r_j\}-k$ if $k < \min\{p_i,r_j\}$. By \eqref{ref} one gets ${\rm rd}_k({\rm im} \Delta)=\sum_{i=1}^N\sum_{j=1}^M R(i,\,j,\,k)$. \end{proof} \end{document}
\begin{document} \title{Error bounds, facial residual functions and applications to the exponential cone} \author{ Scott B.\ Lindstrom\thanks{ School of Electrical Engineering, Computing and Mathematical Sciences, Faculty of Science and Engineering, Curtin University, Australia. E-mail: \href{[email protected]}{[email protected]}.} \and Bruno F. Louren\c{c}o\thanks{Department of Statistical Inference and Mathematics, Institute of Statistical Mathematics, Japan. This author was supported partly by JSPS Grant-in-Aid for Early-Career Scientists 19K20217 and the Grant-in-Aid for Scientific Research (B)18H03206. Email: \href{[email protected]}{[email protected]}} \and Ting Kei Pong\thanks{ Department of Applied Mathematics, the Hong Kong Polytechnic University, Hong Kong, People's Republic of China. This author was supported partly by Hong Kong Research Grants Council PolyU153003/19p. E-mail: \href{[email protected]}{[email protected]}. } } \date{\today} \maketitle \begin{abstract} \noindent We construct a general framework for deriving error bounds for conic feasibility problems. In particular, our approach allows one to work with cones that fail to be amenable or even to have computable projections, two previously challenging barriers. For this purpose, we first show how error bounds may be constructed using objects called \textit{one-step facial residual functions}. Then, we develop several tools to compute these facial residual functions even in the absence of closed form expressions for the projections onto the cones. We demonstrate the use and power of our results by computing tight error bounds for the exponential cone feasibility problem. Interestingly, we discover a natural example for which the tightest error bound is related to the Boltzmann-Shannon entropy. We were also able to produce an example of sets for which a H\"{o}lderian error bound holds but the supremum of the set of admissible exponents is not itself an admissible exponent.
\end{abstract} {\small \noindent {\bfseries Keywords:} Error bounds, facial residual functions, exponential cone, amenable cones, generalized amenability. } \section{Introduction} Our main object of interest is the following convex conic feasibility problem: \begin{align} \text{find} & \quad \ensuremath{\mathbf x} \in ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}} \label{eq:feas}\tag{Feas}, \end{align} where $ \ensuremath{\mathcal{L}}$ is a subspace contained in some finite-dimensional real Euclidean space $\ensuremath{\mathcal{E}}$, $\ensuremath{\mathbf a} \in \ensuremath{\mathcal{E}}$ and $ \ensuremath{\mathcal{K}} \subseteq \ensuremath{\mathcal{E}}$ is a closed convex cone. For a discussion of some applications and algorithms for \eqref{eq:feas}, see \cite{HM11}. See also \cite{BB96} for a broader analysis of convex feasibility problems. We also recall that a \emph{conic linear program} (CLP) is the problem of minimizing a linear function subject to a constraint of the form described in \eqref{eq:feas}. In addition, when the optimal set of a CLP is non-empty, it can be written as the intersection of a cone with an affine set. This provides yet another motivation for analyzing \eqref{eq:feas}: to better understand feasible regions and optimal sets of conic linear programs. Here, our main interest is in obtaining \emph{error bounds} for \eqref{eq:feas}. 
That is, assuming $( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\cap \ensuremath{\mathcal{K}}\neq \emptyset$, we want an inequality that, given some arbitrary $\ensuremath{\mathbf x} \in \ensuremath{\mathcal{E}}$, relates the \emph{individual distances} $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}), \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}})$ to the \emph{distance to the intersection} $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\cap \ensuremath{\mathcal{K}})$. Considering that $\ensuremath{\mathcal{E}}$ is equipped with some norm $\norm{\cdot}$ induced by some inner product $\inProd{\cdot}{\cdot}$, we recall that the \emph{distance function to a convex set $C$} is defined as follows: \begin{equation*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, C) \coloneqq \inf _{\ensuremath{\mathbf y} \in C} \norm{\ensuremath{\mathbf x}-\ensuremath{\mathbf y}}. \end{equation*} When $ \ensuremath{\mathcal{K}}$ is a polyhedral cone, the classical Hoffman's error bound \cite{HF52} gives a relatively complete picture of the way that the individual distances relate to the distance to the intersection. If $ \ensuremath{\mathcal{K}}$ is not polyhedral, but $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ intersects $ \ensuremath{\mathcal{K}}$ in a sufficiently well-behaved fashion (say, for example, when $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ intersects $\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}$, the relative interior of $ \ensuremath{\mathcal{K}}$; see Proposition~\ref{prop:cq_er}), we may still expect ``good'' error bounds to hold, e.g., \cite[Corollary~3]{BBL99}. 
However, checking whether $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ intersects $\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}$ is not necessarily a trivial task; and, in general, $( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\cap \ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}$ can be empty. Here, we focus on error bound results that \emph{do not} require any assumption on the way that the affine space $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ intersects $ \ensuremath{\mathcal{K}}$. So, for example, we want results that are valid even if, say, $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ fails to intersect the relative interior of $ \ensuremath{\mathcal{K}}$. Inspired by Sturm's pioneering work on error bounds for positive semidefinite systems \cite{St00}, the class of \emph{amenable cones} was proposed in \cite{L17} and it was shown that the following three ingredients can be used to obtain general error bounds for \eqref{eq:feas}: (i) amenable cones, (ii) facial reduction \cite{BW81,Pa13,WM13} and (iii) the so-called facial residual functions (FRFs) \cite[Definition~16]{L17}. In this paper, we will show that, in fact, it is possible to obtain error bounds for \eqref{eq:feas} by using the so-called \emph{{{one-step facial residual function}}s} directly in combination with facial reduction. It is fair to say that computing the facial residual functions is the most critical step in obtaining error bounds for \eqref{eq:feas}. We will demonstrate techniques that are readily adaptable for this purpose. All the techniques discussed here will be showcased with error bounds for the so-called \emph{exponential cone}, which is defined as follows\footnote{Our notation for the exponential cone coincides with the one in \cite{OCPB16}. However, the $x,y,z$ variables might appear permuted in other papers, e.g., \cite{SY15,MC2020,PY21}. 
}: \begin{align*} \ensuremath{K_{\exp}}:=&\left \{(x,y,z)\in \ensuremath{\mathbb R}^3\;|\;y>0,z\geq ye^{x/y}\right \} \cup \left \{(x,y,z)\;|\; x \leq 0, z\geq 0, y=0 \right \}. \end{align*} Put succinctly, the exponential cone is the closure of the epigraph of the perspective function of $z=e^x$. It is quite useful in entropy optimization, see \cite{CS17}. Furthermore, it is also implemented in the MOSEK package, see \cite{DE21}, \cite[Chapter 5]{MC2020}, and the many modelling examples in Section~5.4 therein. There are several other solvers that either support the exponential cone or convex sets closely related to it \cite{OCPB16,PY21,KT19,CKV21}. See also \cite{Fr21} for an algorithm for projecting onto the exponential cone. So convex optimization with exponential cones is widely available even if, as of this writing, it is not as widespread as, say, semidefinite programming. The exponential cone $\ensuremath{K_{\exp}}$ appears, at a glance, to be simple. However, it possesses a very intricate geometric structure that illustrates a wide range of challenges practitioners may face in computing error bounds. First of all, being non-facially-exposed, it is not amenable, so the theory developed in \cite{L17} does not directly apply to it. Another difficulty is that the projection operator onto $\ensuremath{K_{\exp}}$ is only implicitly specified, and few analytical tools have been developed to deal with it (compared with, for example, the projection operator onto PSD cones). Until now, these issues have made it challenging to establish error bounds for objects like $\ensuremath{K_{\exp}}$, many of which are of growing interest in the mathematical programming community. Our research is at the intersection of two topics: \emph{error bounds} and the \emph{facial structure of cones}. General information on the former can be found, for example, in \cite{Pang97,LP98}. 
Classically, there seems to be a focus on the so-called \emph{H\"olderian error bounds} (see also \cite{Li10,Li13,LMP15}), but we will see in this paper that non-H\"olderian behavior can still appear even in relatively natural settings such as conic feasibility problems associated to the exponential cone. Facts on the facial structure of convex cones can be found, for example, in \cite{Ba73,Ba81,Pa00}. We recall that a cone is said to be \emph{facially exposed} if each face arises as the intersection of the whole cone with some supporting hyperplane. Stronger forms of facial exposedness have also been studied to some extent; examples include projectional exposedness~\cite{BW81,ST90}, niceness~\cite{Pa13_2,Ve14}, tangential exposedness~\cite{RT19} and amenability \cite{L17}. See also \cite{LRS20} for a comparison between a few different types of facial exposedness. These notions are useful in many topics, e.g.: regularization of convex programs and extended duals~\cite{BW81,Pa13,LiuPat18}, studying the closure of certain linear images~\cite{Pa07,LiuPat18}, lifts of convex sets~\cite{GPR13} and error bounds \cite{L17}. However, as can be seen in Figure~\ref{fig:cone}, the exponential cone is not even a facially exposed cone, so none of the aforementioned notions apply (in particular, the face $ \ensuremath{\mathcal{F}}_{ne}:=\{(0,0,z)\;|\; z \geq 0\}$ is not exposed). This was one of the motivations for looking beyond facial exposedness and developing a framework for deriving error bounds for feasibility problems associated to general closed convex cones. 
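Although the facial structure of $\ensuremath{K_{\exp}}$ is analyzed in detail later in the paper, the non-exposedness of $ \ensuremath{\mathcal{F}}_{ne}$ can be verified directly from the definitions. Suppose $\ensuremath{\mathbf z} = (z_x,z_y,z_z) \in \ensuremath{K_{\exp}}^*$ satisfies $ \ensuremath{\mathcal{F}}_{ne} \subseteq \ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp$. Since $(0,0,1) \in \ensuremath{\mathcal{F}}_{ne}$, we must have $z_z = 0$. Because $(x,y,ye^{x/y}) \in \ensuremath{K_{\exp}}$ for every $x \in \ensuremath{\mathbb R}$ and $y > 0$, we need $xz_x + yz_y \geq 0$ for all such $x$ and $y$, which forces $z_x = 0$ and then $z_y \geq 0$. If $z_y > 0$, then $\ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp = \{(x,0,z)\;|\; x \leq 0,\, z \geq 0\} \supsetneq \ensuremath{\mathcal{F}}_{ne}$; if $z_y = 0$, then $\ensuremath{\mathbf z} = \mathbf{0}$ and $\ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp = \ensuremath{K_{\exp}}$. In either case $\ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp \neq \ensuremath{\mathcal{F}}_{ne}$, so no element of $\ensuremath{K_{\exp}}^*$ exposes $ \ensuremath{\mathcal{F}}_{ne}$. 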
\begin{figure} \caption{The exponential cone is the union of the two labelled sets} \label{fig:cone} \end{figure} \subsection{Outline and results} The goal of this paper is to build a robust framework that may be used to obtain error bounds for previously inaccessible cones, and to demonstrate the use of this framework by applying it to fully describe error bounds for \eqref{eq:feas} with $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}}$. In Section~\ref{sec:preliminaries}, we recall preliminaries. New contributions begin in Section~\ref{sec:frf}. We first recall some rules for chains of faces and the diamond composition. Then we show how error bounds may be constructed using objects known as {{one-step facial residual function}}s. In Section~\ref{sec:frf_comp}, we build our general framework for constructing {{one-step facial residual function}}s. Our key result, Theorem~\ref{thm:1dfacesmain}, obviates the need to compute the projection onto the cone explicitly. Instead, we make use of the parametrization of the boundary of the cone and projections onto the proper \emph{faces} of a cone: thus, our approach is advantageous when these projections are easier to analyze than the projection onto the whole cone itself. We emphasize that \textit{all} of the results of Section~\ref{sec:frf} are applicable to a \textit{general closed convex cone}. In Section~\ref{sec:exp_cone}, we use our new framework to fully describe error bounds for \eqref{eq:feas} with $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}}$. This was previously a problem lacking a clear strategy, because all projections onto $\ensuremath{K_{\exp}}$ are implicitly specified. However, having obviated the need to project onto $\ensuremath{K_{\exp}}$, we successfully obtain all the necessary FRFs, partly because it is easier to project onto the \emph{proper faces} of $\ensuremath{K_{\exp}}$ than to project onto $\ensuremath{K_{\exp}}$ itself. 
Surprisingly, we discover that different collections of faces and exposing hyperplanes admit very different FRFs. In Section~\ref{sec:freas_2d}, we show that for the unique 2-dimensional face, any exponent in $\left(0,1\right)$ may be used to build a valid FRF, while the supremum over all the admissible exponents \textit{cannot} be. Furthermore, a better FRF for the 2D face can be obtained if we go beyond H\"olderian error bounds and consider a so-called \emph{entropic error bound} which uses a modified Boltzmann-Shannon entropy function, see Theorem~\ref{thm:entropic}. The curious discoveries continue; for infinitely many 1-dimensional faces, the FRF, and the final error bound, feature exponent $1/2$. For the final outstanding 1-dimensional exposed face, the FRF, and the final error bound, are Lipschitzian for all exposing hyperplanes except exactly one, for which \textit{no exponent} will suffice. However, for this exceptional case, our framework \textit{still} successfully finds an FRF, which is logarithmic in character (Corollary~\ref{col:1dfaces_infty}). Consequently, the system consisting of $\{(0,0,1)\}^\perp$ and $\ensuremath{K_{\exp}}$ possesses a kind of ``logarithmic error bound'' (see Example~\ref{ex:non_hold}) instead of a H\"olderian error bound. In Theorems~\ref{thm:main_err} and \ref{theo:sane}, we give explicit error bounds by using our FRFs and the suite of tools we developed in Section~\ref{sec:frf}. We also show that the error bound given in Theorem~\ref{thm:main_err} is tight, see Remark~\ref{rem:opt}. These findings about the exponential cone are surprising, since we are not aware of other objects having this litany of odd behaviour hidden in their structure all at once.\footnote{To be fair, the exponential function is the classical example of a non-semialgebraic analytic function. 
Given that semialgebraicity is connected to the KL property (which is related to error bounds), one may argue that it is not \emph{that} surprising that the exponential cone has its share of quirks. Nevertheless, given how natural the exponential cone is, the amount of quirks is still somewhat surprising.} One possible reason for the absence of previous reports on these phenomena might have been the sheer absence of tools for obtaining error bounds for general cones. In this sense, we believe that the machinery developed in Section~\ref{sec:frf} might be a reasonable first step towards filling this gap. In Section~\ref{sec:odd}, we document additional odd consequences and connections to other concepts, with particular relevance to the \emph{Kurdyka-{\L}ojasiewicz (KL) property} \cite{BDL07,BDLS07,ABS13,ABRS10,BNPS17,LP18}. In particular, we have two sets satisfying a H\"olderian error bound for every $\gamma \in \left(0,1\right)$, but the supremum of the allowable exponents is not itself allowable. Consequently, one obtains a function with the KL property with exponent $\alpha$ for any $\alpha \in \left(1/2,1\right)$ at the origin, but not for $\alpha = 1/2$. We conclude in Section~\ref{sec:conclusion}. \section{Preliminaries}\label{sec:preliminaries} We recall that $\ensuremath{\mathcal{E}}$ denotes an arbitrary finite-dimensional real Euclidean space. We adopt the following convention: vectors are boldfaced, while scalars use normal typeface. For example, if $\ensuremath{\mathbf p} \in \ensuremath{\mathbb R}^3$, we write $\ensuremath{\mathbf p} = (p_x,p_y,p_z)$, where $p_x,p_y,p_z \in \ensuremath{\mathbb R}$. We denote by $B(\eta)$ the closed ball of radius $\eta$ centered at the origin, i.e., $B(\eta) = \{\ensuremath{\mathbf x}\in\ensuremath{\mathcal{E}} \mid \norm{\ensuremath{\mathbf x}} \leq \eta\}$. Let $C\subseteq \ensuremath{\mathcal{E}}$ be a convex set. 
We denote the relative interior and the linear span of $C$ by $\ensuremath{\mathrm{ri}\,} C$ and $\ensuremath{\mathrm{span}\,} C$, respectively. We also denote the boundary of $C$ by $\partial C$, and $\mathrm{cl}\, C$ is the closure of $C$. We denote the projection operator onto $C$ by $P_C$, so that $P_C(\ensuremath{\mathbf x}) = \ensuremath{\operatorname*{argmin}}_{\ensuremath{\mathbf y} \in C} \norm{\ensuremath{\mathbf x}-\ensuremath{\mathbf y}}$. Given closed convex sets $C_1,C_2 \subseteq \ensuremath{\mathcal{E}}$, we note the following properties of the projection operator \begin{align} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_1) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_2) + \ensuremath{\operatorname{d}}(P_{C_2}(\ensuremath{\mathbf x}),C_1) \label{proj:p1}\\ \ensuremath{\operatorname{d}}(P_{C_2}(\ensuremath{\mathbf x}),C_1) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_2) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_1). \label{proj:p2} \end{align} \subsection{Cones and their faces} Let $ \ensuremath{\mathcal{K}}$ be a closed convex cone. We say that $ \ensuremath{\mathcal{K}}$ is \emph{pointed} if $ \ensuremath{\mathcal{K}} \cap - \ensuremath{\mathcal{K}} = \{\mathbf{0}\}$. The dimension of $ \ensuremath{\mathcal{K}}$ is denoted by $\dim( \ensuremath{\mathcal{K}})$ and is the dimension of the linear subspace spanned by $ \ensuremath{\mathcal{K}}$. A \emph{face} of $ \ensuremath{\mathcal{K}}$ is a closed convex cone $ \ensuremath{\mathcal{F}}$ satisfying $ \ensuremath{\mathcal{F}} \subseteq \ensuremath{\mathcal{K}}$ and the following property \[ \ensuremath{\mathbf x},\ensuremath{\mathbf y} \in \ensuremath{\mathcal{K}}, \ensuremath{\mathbf x}+\ensuremath{\mathbf y} \in \ensuremath{\mathcal{F}} \Rightarrow \ensuremath{\mathbf x},\ensuremath{\mathbf y} \in \ensuremath{\mathcal{F}}. \] In this case, we write $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$. 
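For instance, when $ \ensuremath{\mathcal{K}} = \ensuremath{\mathbb R}^2_+$, the faces are $\{\mathbf{0}\}$, the two extreme rays $\ensuremath{\mathbb R}_+ \times \{0\}$ and $\{0\} \times \ensuremath{\mathbb R}_+$, and $\ensuremath{\mathbb R}^2_+$ itself: indeed, if $\ensuremath{\mathbf x},\ensuremath{\mathbf y} \in \ensuremath{\mathbb R}^2_+$ and a coordinate of $\ensuremath{\mathbf x}+\ensuremath{\mathbf y}$ vanishes, then that coordinate of both $\ensuremath{\mathbf x}$ and $\ensuremath{\mathbf y}$ vanishes as well. 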
We say that $ \ensuremath{\mathcal{F}}$ is \emph{proper} if $ \ensuremath{\mathcal{F}} \neq \ensuremath{\mathcal{K}}$. A face is said to be \emph{nontrivial} if $ \ensuremath{\mathcal{F}} \neq \ensuremath{\mathcal{K}}$ and $ \ensuremath{\mathcal{F}} \neq \ensuremath{\mathcal{K}} \cap - \ensuremath{\mathcal{K}}$. In particular, if $ \ensuremath{\mathcal{K}}$ is pointed (as is the case of the exponential cone), a nontrivial face is neither $ \ensuremath{\mathcal{K}}$ nor $\{\mathbf{0}\}$. Next, let $ \ensuremath{\mathcal{K}}^*$ denote the dual cone of $ \ensuremath{\mathcal{K}}$, i.e., $ \ensuremath{\mathcal{K}}^* = \{\ensuremath{\mathbf z} \in \ensuremath{\mathcal{E}} \mid \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}} \geq 0, \forall \ensuremath{\mathbf x} \in \ensuremath{\mathcal{K}} \}$. We say that $ \ensuremath{\mathcal{F}}$ is an \emph{exposed face} if there exists $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{K}}^*$ such that $ \ensuremath{\mathcal{F}} = \ensuremath{\mathcal{K}} \cap \{\ensuremath{\mathbf z}\}^\perp$. A \emph{chain of faces} of $ \ensuremath{\mathcal{K}}$ is a sequence of faces satisfying $ \ensuremath{\mathcal{F}}_\ell \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_{1}$ such that each $ \ensuremath{\mathcal{F}}_{i}$ is a face of $ \ensuremath{\mathcal{K}}$ and the inclusions $ \ensuremath{\mathcal{F}} _{i+1} \subsetneq \ensuremath{\mathcal{F}}_{i}$ are all proper. The length of the chain is defined to be $\ell$. With that, we define the \emph{distance to polyhedrality of $ \ensuremath{\mathcal{K}}$} as the length {\em minus one} of the longest chain of faces of $ \ensuremath{\mathcal{K}}$ such that $ \ensuremath{\mathcal{F}} _{\ell}$ is polyhedral and $ \ensuremath{\mathcal{F}}_{i}$ is not polyhedral for $i < \ell$, see \cite[Section~5.1]{LMT18}. We denote the distance to polyhedrality by $\ell _{\text{poly}}( \ensuremath{\mathcal{K}})$. 
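For instance, $\ell _{\text{poly}}( \ensuremath{\mathcal{K}}) = 0$ whenever $ \ensuremath{\mathcal{K}}$ is polyhedral. As for the exponential cone, every proper face of $\ensuremath{K_{\exp}}$ turns out to be polyhedral (as will follow from the analysis of its facial structure in Section~\ref{sec:exp_cone}), so $\ell _{\text{poly}}(\ensuremath{K_{\exp}}) = 1$. 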
\subsection{Lipschitzian and H\"olderian error bounds} In this subsection, suppose that $C_1,\ldots, C_{\ell} \subseteq \ensuremath{\mathcal{E}}$ are convex sets with nonempty intersection. We recall the following definitions. \begin{definition}[H\"olderian and Lipschitzian error bounds]\label{def:hold} The sets $C_1,\ldots, C_\ell$ are said to satisfy a \emph{H\"olderian error bound} if for every bounded set $B \subseteq \ensuremath{\mathcal{E}}$ there exist some $\kappa_B > 0$ and an exponent $\gamma _B\in(0, 1]$ such that \begin{equation*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \cap _{i=1}^\ell C_i) \le \kappa_B\max_{1\le i\le \ell}\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \, C_i)^{\gamma_B}, \ensuremath{\mathbf q}uad \forall\ \ensuremath{\mathbf x}\in B. \end{equation*} If we can take the same $\gamma _B = \gamma \in (0,1]$ for all $B$, then we say that the bound is \emph{uniform}. If the bound is uniform with $\gamma = 1$, we call it a \emph{Lipschitzian error bound}. \end{definition} We note that the concepts in Definition~\ref{def:hold} also have different names throughout the literature. When $C_1,\ldots, C_\ell$ satisfy a H\"olderian error bound it is said that they satisfy \emph{bounded H\"older regularity}, e.g., see \cite[Definition~2.2]{BLT17}. When a Lipschitzian error bound holds, $C_1,\ldots, C_\ell$ are said to satisfy \emph{bounded linear regularity}, see \cite[Section~5]{BB96} or \cite{BBL99}. Bounded linear regularity is also closely related to the notion of \emph{subtransversality} \cite[Definition~7.5]{Io17}. H\"olderian and Lipschitzian error bounds will appear frequently in our results, but we also encounter non-H\"olderian bounds as in Theorem~\ref{thm:entropic} and Theorem~\ref{thm:nonzerogammasec72}. Next, we recall the following result which ensures a Lipschitzian error bound holds between families of convex sets when a constraint qualification is satisfied. 
\begin{proposition}[\!{\cite[Corollary~3]{BBL99}}]\label{prop:cq_er} Let $C_1,\ldots, C_{\ell} \subseteq \ensuremath{\mathcal{E}}$ be convex sets such that $C_{1},\ldots, C_{k}$ are polyhedral. If \begin{equation*} \left(\bigcap _{i=1}^k C_i\right) \bigcap \left(\bigcap _{j=k+1}^\ell \ensuremath{\mathrm{ri}\,} C_j\right) \neq \emptyset, \end{equation*} then for every bounded set $B$ there exists $\kappa _B>0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \cap _{i=1}^\ell C_i) \leq \kappa _B(\max_{1 \leq i \leq \ell} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, C_i)), \ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf x} \in B. \] \end{proposition} In view of \eqref{eq:feas}, we say that \emph{Slater's condition} is satisfied if $( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}} \neq \emptyset$. If $ \ensuremath{\mathcal{K}}$ can be written as $ \ensuremath{\mathcal{K}}^1 \times \ensuremath{\mathcal{K}}^2\subseteq \ensuremath{\mathcal{E}}^1 \times \ensuremath{\mathcal{E}}^2$, where $\ensuremath{\mathcal{E}}^1$ and $\ensuremath{\mathcal{E}}^2$ are real Euclidean spaces and $ \ensuremath{\mathcal{K}}^1\subseteq \ensuremath{\mathcal{E}}^1$ is polyhedral, we say that the \emph{partial polyhedral Slater's (PPS) condition} is satisfied if \begin{equation}\label{eq:ppsc} ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap ( \ensuremath{\mathcal{K}}^1 \times (\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}^2) ) \neq \emptyset. \end{equation} Adding a dummy coordinate, if necessary, we can see Slater's condition as a particular case of the PPS condition. 
By convention, we consider that the PPS condition is satisfied for \eqref{eq:feas} if one of the following is satisfied: 1) $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ intersects $\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}$; 2) $( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\cap \ensuremath{\mathcal{K}}\neq \emptyset$ and $ \ensuremath{\mathcal{K}}$ is polyhedral; or 3) $ \ensuremath{\mathcal{K}}$ can be written as a direct product $ \ensuremath{\mathcal{K}}^1 \times \ensuremath{\mathcal{K}}^2$ where $ \ensuremath{\mathcal{K}}^1$ is polyhedral and \eqref{eq:ppsc} is satisfied. Noting that $( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap ( \ensuremath{\mathcal{K}}^1 \times (\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}^2) ) = ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})\cap ( \ensuremath{\mathcal{K}}^1 \times \ensuremath{\mathcal{E}}^2) \cap (\ensuremath{\mathcal{E}}^1 \times (\ensuremath{\mathrm{ri}\,} \ensuremath{\mathcal{K}}^2) )$, we deduce the following result from Proposition~\ref{prop:cq_er}. \begin{proposition}[Error bound under PPS condition]\label{prop:pps_er} Suppose that \eqref{eq:feas} satisfies the \emph{partial polyhedral Slater's condition}. Then, for every bounded set $B$ there exists $\kappa _B>0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\cap \ensuremath{\mathcal{K}}) \leq \kappa _B \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}),\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\}, \ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf x} \in B. \] \end{proposition} We recall that for $a,b \in \ensuremath{\mathbb R}_+$ we have $a+b \leq 2\max\{a,b\} \leq 2(a+b)$, so Propositions~\ref{prop:cq_er} and \ref{prop:pps_er} can also be equivalently stated in terms of sums of distances. 
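To illustrate Definition~\ref{def:hold} in a situation where Proposition~\ref{prop:cq_er} is not applicable, consider the elementary planar example $C_1 = \{(x,y) \in \ensuremath{\mathbb R}^2 \;|\; y \leq 0\}$ and $C_2 = \{(x,y) \in \ensuremath{\mathbb R}^2 \;|\; y \geq x^2\}$, for which $C_1 \cap C_2 = \{(0,0)\}$ and $C_1 \cap \ensuremath{\mathrm{ri}\,} C_2 = \emptyset$. Taking $\ensuremath{\mathbf x}_t = (t,0)$ with $t \in (0,1]$, we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}_t, C_1) = 0$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}_t, C_2) \leq \norm{(t,0)-(t,t^2)} = t^2$, while $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}_t, C_1 \cap C_2) = t$. Hence no Lipschitzian error bound holds on any bounded set containing the origin, although one can check that a uniform H\"olderian error bound with exponent $\gamma = 1/2$ does hold for this pair. 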
\section{Facial residual functions and error bounds}\label{sec:frf} In this section, we discuss a strategy for obtaining error bounds for the conic linear system \eqref{eq:feas} based on the so-called \emph{facial residual functions} that were introduced in \cite{L17}. In contrast to \cite{L17}, we will not require that $ \ensuremath{\mathcal{K}}$ be amenable. The motivation for our approach is as follows. If \eqref{eq:feas} satisfied some constraint qualification, we would have a Lipschitzian error bound per Proposition~\ref{prop:pps_er}; see also \cite{BBL99} for other sufficient conditions. Unfortunately, this does not happen in general. However, as long as \eqref{eq:feas} is feasible, there is always a face of $ \ensuremath{\mathcal{K}}$ that contains the feasible region of \eqref{eq:feas} and for which a constraint qualification holds. The error bound computation essentially boils down to understanding how to compute the distance to this special face. The first result towards our goal is the following. \begin{proposition}[An error bound when a face satisfying a CQ is known]\label{prop:err_cq2} Suppose that \eqref{eq:feas} is feasible and let $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$ be a face such that \begin{enumerate}[{\rm (a)}] \item $ \ensuremath{\mathcal{F}}$ contains $ \ensuremath{\mathcal{K}} \cap ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})$. 
\item $\{ \ensuremath{\mathcal{F}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies the PPS condition.\footnote{As a reminder, the PPS condition is, by convention, a shorthand for three closely related conditions, see remarks after \eqref{eq:ppsc}.} \end{enumerate} Then, for every bounded set $B$, there exists $\kappa_B > 0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}} \cap ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})) \leq \kappa_B(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})), \ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf x} \in B. \] \end{proposition} \begin{proof} Since $ \ensuremath{\mathcal{F}}$ is a face of $ \ensuremath{\mathcal{K}}$, assumption (a) implies $ \ensuremath{\mathcal{K}} \cap ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) = \ensuremath{\mathcal{F}} \cap ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})$. Then, the result follows from assumption (b) and Proposition~\ref{prop:pps_er}. \end{proof} From Proposition~\ref{prop:err_cq2} we see that the key to obtaining an error bound for the system \eqref{eq:feas} is to find a face $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$ satisfying (a), (b) \emph{and} we must know how to estimate the quantity $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}})$ from the available information $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}})$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})$. This is where we will make use of facial reduction and facial residual functions. 
The former will help us find $ \ensuremath{\mathcal{F}}$ and the latter will be instrumental in upper bounding $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}})$. First, we recall below a result that follows from the analysis of the FRA-poly facial reduction algorithm developed in \cite{LMT18}. \begin{proposition}[{\cite[Proposition~5]{L17}}\footnote{Although {\cite[Proposition~5]{L17}} was originally stated for pointed cones, it holds for general closed convex cones. Indeed, its proof only relies on {\cite[Proposition~8]{LMT18}}, which holds for general closed convex cones.}]\label{prop:fra_poly} Let $ \ensuremath{\mathcal{K}} = \ensuremath{\mathcal{K}}^1\times \cdots \times \ensuremath{\mathcal{K}}^s$, where each $ \ensuremath{\mathcal{K}}^i$ is a closed convex cone. Suppose \eqref{eq:feas} is feasible. Then there is a chain of faces \begin{equation}\label{eq:chain} \ensuremath{\mathcal{F}} _{\ell} \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_1 = \ensuremath{\mathcal{K}} \end{equation} of length $\ell$ and vectors $\{\ensuremath{\mathbf z}_1,\ldots, \ensuremath{\mathbf z}_{\ell-1}\}$ satisfying the following properties. \begin{enumerate}[{\rm (i)}] \item \label{prop:fra_poly:1} $\ell -1\leq \sum _{i=1}^{s} \ell _{\text{poly}}( \ensuremath{\mathcal{K}} ^i) \leq \dim{ \ensuremath{\mathcal{K}}}$. \item \label{prop:fra_poly:2} For all $i \in \{1,\ldots, \ell -1\}$, we have \begin{flalign*} \ensuremath{\mathbf z}_i \in \ensuremath{\mathcal{F}} _i^* \cap \ensuremath{\mathcal{L}}^\perp \cap \{\ensuremath{\mathbf a}\}^\perp \ \ \ {and}\ \ \ \ensuremath{\mathcal{F}} _{i+1} = \ensuremath{\mathcal{F}} _{i} \cap \{\ensuremath{\mathbf z}_i\}^\perp. 
\end{flalign*} \item \label{prop:fra_poly:3} $ \ensuremath{\mathcal{F}} _{\ell} \cap ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) = \ensuremath{\mathcal{K}} \cap ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})$ and $\{ \ensuremath{\mathcal{F}} _{\ell}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies the PPS condition. \end{enumerate} \end{proposition} In view of Proposition~\ref{prop:fra_poly}, we define the \emph{distance to the PPS condition} $d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})$ as the length \emph{minus one} of the shortest chain of faces (as in \eqref{eq:chain}) satisfying item (iii) in Proposition~\ref{prop:fra_poly}. For example, if \eqref{eq:feas} satisfies the PPS condition, we have $d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) = 0$. Next, we recall the definition of facial residual functions from \cite[Definition~16]{L17}. \begin{definition}[Facial residual function\footnote{The only difference between Definition~\ref{def:ebtp} and the definition of facial residual functions in {\cite[Definition~16]{L17}} is that we added the ``with respect to $ \ensuremath{\mathcal{K}}$'' part, to emphasize the dependency on $ \ensuremath{\mathcal{K}}$.}]\label{def:ebtp} Let $ \ensuremath{\mathcal{K}}$ be a closed convex cone, $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$ be a face, and let $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{F}}^*$. Suppose that $\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}} : \ensuremath{\mathbb{R}_+} \times \ensuremath{\mathbb{R}_+} \to \ensuremath{\mathbb{R}_+}$ satisfies the following properties: \begin{enumerate}[label=({\roman*})] \item $\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}$ is nonnegative, monotone nondecreasing in each argument and $\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}(0,t) = 0$ for every $t \in \ensuremath{\mathbb{R}_+}$. 
\item The following implication holds for any $\ensuremath{\mathbf x} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}$ and any $\epsilon \geq 0$: \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}} \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}} ) \leq \epsilon \quad \Rightarrow \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}} \cap \{\ensuremath{\mathbf z}\}^{\perp}) \leq \psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}} (\epsilon, \norm{\ensuremath{\mathbf x}}). \] \end{enumerate} Then, $\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}$ is said to be \emph{a facial residual function for $ \ensuremath{\mathcal{F}}$ and $\ensuremath{\mathbf z}$ with respect to $ \ensuremath{\mathcal{K}}$}. \end{definition} Definition~\ref{def:ebtp}, in its most general form, represents ``two steps'' along the facial structure of a cone: we have a cone $ \ensuremath{\mathcal{K}}$, a face $ \ensuremath{\mathcal{F}}$ (which could be different from $ \ensuremath{\mathcal{K}}$) and a third face defined by $ \ensuremath{\mathcal{F}} \cap \{\ensuremath{\mathbf z}\}^\perp$. In this work, however, we will be focused on the following special case of Definition~\ref{def:ebtp}. \begin{definition}[{{One-step facial residual function}}]\label{def:onefrf} Let $ \ensuremath{\mathcal{K}}$ be a closed convex cone and $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{K}}^*$. 
A function $\psi _{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb{R}_+} \times \ensuremath{\mathbb{R}_+} \to \ensuremath{\mathbb{R}_+}$ is called a \emph{one-step facial residual function for $ \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z}$} if it is a facial residual function for $ \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z}$ with respect to $ \ensuremath{\mathcal{K}}$. That is, $\psi _{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ satisfies item (i) of Definition~\ref{def:ebtp} and, for every $\ensuremath{\mathbf x} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}$ and every $\epsilon \geq 0$: \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}} \leq \epsilon \quad \Rightarrow \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}} \cap \{\ensuremath{\mathbf z}\}^{\perp}) \leq \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}} (\epsilon, \norm{\ensuremath{\mathbf x}}). \] \end{definition} \begin{remark}[Concerning the implication in {Definition~\ref{def:onefrf}}] In view of the monotonicity of $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$, the implication in Definition~\ref{def:onefrf} can be written equivalently and more succinctly as \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}\cap\{\ensuremath{\mathbf z}\}^\perp) \le \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}), \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}}\}, \norm{\ensuremath{\mathbf x}}),\ \ \forall \ensuremath{\mathbf x} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}. \] The {\em unfolded} form presented in Definition~\ref{def:onefrf} is handier in the discussions and analysis below.
\end{remark} Facial residual functions always exist (see \cite[Section~3.2]{L17}), but their computation is often nontrivial. Next, we review a few examples. \begin{example}[Examples of facial residual functions]\label{ex:frf} If $ \ensuremath{\mathcal{K}}$ is a symmetric cone (i.e., a self-dual homogeneous cone, see \cite{FK94,FB08}), then given $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{F}}^*$, there exists a $\kappa > 0$ such that $\psi _{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon + \kappa \sqrt{\epsilon t}$ is a {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}$ and $\ensuremath{\mathbf z}$, see \cite[Theorem~35]{L17}. If $ \ensuremath{\mathcal{K}}$ is a polyhedral cone, the function $\psi _{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon$ can be taken instead, with no dependency on $t$, see \cite[Proposition~18]{L17}. \end{example} Moving on, we say that a function $\tilde \psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}$ is a \emph{positively rescaled shift of $\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}$} if there are positive constants $M_1,M_2,M_3$ and nonnegative constant $M_4$ such that \begin{equation}\label{eq:pos_rescale} \tilde \psi _{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}(\epsilon,t) = M_3\psi_{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}} (M_1\epsilon,M_2t) + M_4\epsilon. \end{equation} This is a generalization of the notion of positive rescaling in \cite{L17}, which sets $M_4 = 0$. We also need to compose facial residual functions in a special manner. Let $f:\ensuremath{\mathbb{R}_+}\times \ensuremath{\mathbb{R}_+} \to \ensuremath{\mathbb{R}_+}$ and $g:\ensuremath{\mathbb{R}_+}\times \ensuremath{\mathbb{R}_+} \to \ensuremath{\mathbb{R}_+}$ be functions. 
We define the \emph{diamond composition} $f\diamondsuit g$ to be the function satisfying \begin{equation}\label{eq:comp} (f\diamondsuit g)(a,b) = f(a+g(a,b),b), \quad \forall a,b \in \ensuremath{\mathbb{R}_+}. \end{equation} Note that the above composition is not associative in general. When we have functions $f_i:\ensuremath{\mathbb{R}_+}\times \ensuremath{\mathbb{R}_+} \to \ensuremath{\mathbb{R}_+}$, $i = 1,\ldots,m$ with $m\ge 3$, we define $f_m\diamondsuit \cdots \diamondsuit f_1$ inductively as the function $\varphi _m$ such that \begin{align*} \varphi_i &\coloneqq f_i \diamondsuit \varphi_{i-1},\quad i \in \{2,\ldots,m\},\\ \varphi_1 &\coloneqq f_1. \end{align*} With that, we have $ f_m\diamondsuit f_{m-1} \diamondsuit \cdots \diamondsuit f_2 \diamondsuit f_1 \coloneqq f_m\diamondsuit (f_{m-1}\diamondsuit(\cdots \diamondsuit (f_2\diamondsuit f_1)))$. The following lemma, which holds for a general closed convex cone $ \ensuremath{\mathcal{K}}$, shows how (positively rescaled shifts of) one-step facial residual functions for the \emph{faces of $ \ensuremath{\mathcal{K}}$} can be combined via the diamond composition to derive useful bounds on the distance to faces. A version of it was proved in {\cite[Lemma~22]{L17}}, which required the cones to be pointed and made use of general (i.e., not necessarily one-step) facial residual functions with respect to $ \ensuremath{\mathcal{K}}$. This is a subtle but crucial difference, which allows us to relax the assumptions in \cite{L17}.
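To make the diamond composition \eqref{eq:comp} concrete, the following short Python sketch implements it together with the right-nested chain $f_m\diamondsuit(\cdots\diamondsuit(f_2\diamondsuit f_1))$, and evaluates it on one-step facial residual functions of the symmetric-cone shape from Example~\ref{ex:frf}; the constants $\kappa$ and the evaluation point are purely illustrative, not taken from the text.

```python
import math

def diamond(f, g):
    """Diamond composition: (f <> g)(a, b) = f(a + g(a, b), b)."""
    return lambda a, b: f(a + g(a, b), b)

def diamond_chain(fs):
    """Right-nested chain f_m <> (... <> (f_2 <> f_1)), with fs = [f_1, ..., f_m]."""
    phi = fs[0]
    for f in fs[1:]:
        phi = diamond(f, phi)
    return phi

# One-step facial residual functions of symmetric-cone type,
# psi(eps, t) = kappa*eps + kappa*sqrt(eps*t), with made-up constants kappa.
def frf(kappa):
    return lambda eps, t: kappa * eps + kappa * math.sqrt(eps * t)

f1, f2, f3 = frf(1.0), frf(2.0), frf(3.0)

# Right-nested composition, as defined in the text ...
right = diamond_chain([f1, f2, f3])(0.01, 4.0)
# ... differs from the left-nested variant: the operation is not associative.
left = diamond(diamond(f3, f2), f1)(0.01, 4.0)
print(right, left)
```

Evaluating both nestings at the same point gives different values, illustrating the remark above that the diamond composition is not associative in general; this is why the right-nested convention is fixed once and for all.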
\begin{lemma}[Diamond composing facial residual functions]\label{lem:chain} Suppose \eqref{eq:feas} is feasible and let \[ \ensuremath{\mathcal{F}} _{\ell} \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_1 = \ensuremath{\mathcal{K}} \] be a chain of faces of $ \ensuremath{\mathcal{K}}$ together with $\ensuremath{\mathbf z}_i \in \ensuremath{\mathcal{F}} _i^*\cap \ensuremath{\mathcal{L}}^\perp \cap \{\ensuremath{\mathbf a}\}^\perp$ such that $ \ensuremath{\mathcal{F}}_{i+1} = \ensuremath{\mathcal{F}} _i\cap \{\ensuremath{\mathbf z}_i\}^\perp$, for $i = 1,\ldots, \ell - 1$. For each $i$, let $\psi _{i}$ be a one-step facial residual function for $ \ensuremath{\mathcal{F}}_i$ and $\ensuremath{\mathbf z}_i$. Then, there is a positively rescaled shift of $\psi_i$ (still denoted as $\psi_i$ by an abuse of notation) so that for every $\ensuremath{\mathbf x} \in \ensuremath{\mathcal{E}}$ and $\epsilon \geq 0$: \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon \quad \Rightarrow\quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}} _{\ell}) \leq \varphi (\epsilon,\norm{\ensuremath{\mathbf x}}), \] where $\varphi = \psi _{{\ell-1}}\diamondsuit \cdots \diamondsuit \psi_{{1}}$, if $\ell \geq 2$. If $\ell = 1$, we let $\varphi$ be the function satisfying $\varphi(\epsilon, t) = \epsilon$. \end{lemma} \begin{proof} For $\ell = 1$, we have $ \ensuremath{\mathcal{F}}_{\ell} = \ensuremath{\mathcal{K}}$, so the lemma follows immediately. Now, we consider the case $\ell \geq 2$. First we note that $ \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}$ is contained in $\{\ensuremath{\mathbf z}_{i}\}^\perp$ for every $i = 1, \ldots, \ell-1$.
Since the distance of $\ensuremath{\mathbf x} \in \ensuremath{\mathcal{E}}$ to $\{\ensuremath{\mathbf z}_{i}\}^\perp$ is given by $\frac{|\inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}_i}|}{\norm{\ensuremath{\mathbf z}_i}}$, we have the following chain of implications \begin{equation}\label{eq:hyper} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon\quad \Rightarrow\quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\{\ensuremath{\mathbf z}_i\}^\perp) \leq \epsilon \quad \Rightarrow\quad \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}_i} \leq \epsilon \norm{\ensuremath{\mathbf z}_i}. \end{equation} Next, we proceed by induction. If $\ell = 2$, we have that $\psi_1$ is a {{one-step facial residual function}} for $ \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z}_1$. By Definition~\ref{def:onefrf}, we have \[ \ensuremath{\mathbf y} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \inProd{\ensuremath{\mathbf y}}{\ensuremath{\mathbf z}_1} \leq \epsilon \quad \Rightarrow \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\mathcal{F}}_{2}) \leq \psi_{1} (\epsilon, \norm{\ensuremath{\mathbf y}}). \] In view of \eqref{eq:hyper} and the monotonicity of $\psi_1$, we see further that \begin{equation}\label{eq:ind_base} \ensuremath{\mathbf y} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}, \, \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y} , \ensuremath{\mathcal{K}}) \leq \epsilon, \, \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y} , \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon \, \Rightarrow \, \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y} , \ensuremath{\mathcal{F}}_{2}) \leq \psi_{1} (\epsilon(1+\norm{\ensuremath{\mathbf z}_{1}}), \norm{\ensuremath{\mathbf y}}). 
\end{equation} Now, suppose that $\ensuremath{\mathbf x} \in \ensuremath{\mathcal{E}}$ and $\epsilon \geq 0$ are such that $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x} , \ensuremath{\mathcal{K}}) \leq \epsilon$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x} , \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon$. Let $\hat\ensuremath{\mathbf x} := P_{\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}}(\ensuremath{\mathbf x})$. Since $ \ensuremath{\mathcal{K}} \subseteq \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}$, we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}})\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}})$ and, in view of \eqref{proj:p2}, we have that \begin{equation}\label{eq:ind_base2} \begin{aligned} \ensuremath{\operatorname{d}}(\hat\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}})\le 2\epsilon,\\ \ensuremath{\operatorname{d}}(\hat \ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})\le 2\epsilon. 
\end{aligned} \end{equation} From \eqref{proj:p1}, \eqref{eq:ind_base} and \eqref{eq:ind_base2}, we obtain \begin{equation}\notag \begin{aligned} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_2) \leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{K}}) + \ensuremath{\operatorname{d}}(\hat \ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{2}) &\leq \epsilon + \psi_{{1}}(2\epsilon(1+\norm{\ensuremath{\mathbf z}_{1}}),\norm{\hat \ensuremath{\mathbf x}})\\ &\leq \epsilon + \psi_{{1}}(2\epsilon(1+\norm{\ensuremath{\mathbf z}_{1}}),\norm{ \ensuremath{\mathbf x}}), \end{aligned} \end{equation} where the last inequality follows from the monotonicity of $\psi_{{1}}$ and the fact that $\norm{\hat \ensuremath{\mathbf x}} \leq \norm{\ensuremath{\mathbf x}}$. This proves the lemma for chains of length $\ell = 2$ because the function mapping $(\epsilon,t)$ to $\epsilon + \psi_{{1}}(2\epsilon(1+\norm{\ensuremath{\mathbf z}_{1}}),t)$ is a positively rescaled shift of $\psi_{{1}}$. Now, suppose that the lemma holds for chains of length $\hat \ell$ and consider a chain of length $\hat \ell + 1$. By the induction hypothesis, we have \begin{equation}\label{eq:inductive} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon \quad \Rightarrow \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) \leq \varphi (\epsilon, \norm{\ensuremath{\mathbf x}}), \end{equation} where $\varphi = \psi _{{\hat\ell-1}}\diamondsuit \cdots \diamondsuit \psi_1$ and the $\psi_i$ are (positively rescaled shifts of) one-step facial residual functions.
By the definition of $\psi _{{\hat\ell}}$ as a {{one-step facial residual function}} and using \eqref{eq:hyper}, we may positively rescale $\psi_{{\hat\ell}}$ (still denoted as $\psi_{{\hat\ell}}$ by an abuse of notation) so that for $\ensuremath{\mathbf y} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}$ and $\hat \epsilon \ge 0$, the following implication holds: \begin{equation}\label{eq:eps_hat_ell} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\mathcal{F}}_{\hat\ell}) \leq \hat\epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \hat\epsilon \quad \Rightarrow \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\mathcal{F}}_{\hat \ell+1}) \leq \psi_{\hat\ell}(\hat \epsilon, \norm{\ensuremath{\mathbf y}}). \end{equation} Now, suppose that $\ensuremath{\mathbf x} \in \ensuremath{\mathcal{E}}$ and $\epsilon \ge 0$ satisfy $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon$. Let $\hat\ensuremath{\mathbf x} := P_{\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat \ell}}(\ensuremath{\mathbf x})$. 
As before, since $ \ensuremath{\mathcal{F}}_{\hat\ell} \subseteq \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}$, we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell})\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell})$ and, in view of \eqref{proj:p2}, we have \begin{equation}\label{eq:ind} \begin{aligned} \ensuremath{\operatorname{d}}(\hat\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell}) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell})\le 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) \le 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) + \epsilon,\\ \ensuremath{\operatorname{d}}(\hat \ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})\le \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell})+\epsilon \le 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) + \epsilon. \end{aligned} \end{equation} Let $\hat \psi_{\hat\ell}$ be such that $\hat \psi_{\hat\ell}(s,t)\coloneqq s + \psi_{\hat\ell}(2s,t)$, so that $\hat \psi_{\hat\ell}$ is a positively rescaled shift of $\psi_{\hat\ell}$. 
Then, \eqref{eq:ind} together with \eqref{eq:eps_hat_ell} and \eqref{proj:p1} gives \begin{align*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell+1}) &\le \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}) + \ensuremath{\operatorname{d}}(\hat\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell+1})\le \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell}) + \ensuremath{\operatorname{d}}(\hat\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell+1})\\ &\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell}) + \psi_{\hat\ell}(\epsilon+2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) , \norm{\hat\ensuremath{\mathbf x}}) \overset{\rm (a)}\leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat\ell}) + \psi_{\hat\ell}(2\epsilon+2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}) , \norm{\ensuremath{\mathbf x}})\\ & \le \hat \psi_{\hat\ell}(\epsilon+\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\hat \ell}), \norm{\ensuremath{\mathbf x}})\overset{\rm (b)}\leq\hat \psi_{\hat\ell}(\epsilon+\varphi(\epsilon, \norm{\ensuremath{\mathbf x}}), \norm{\ensuremath{\mathbf x}})=(\hat \psi_{\hat\ell} \diamondsuit \varphi)(\epsilon,\norm{\ensuremath{\mathbf x}}), \end{align*} where (a) follows from the monotonicity of $\psi_{\hat\ell}$ and the fact that $\|\hat \ensuremath{\mathbf x}\|\le \|\ensuremath{\mathbf x}\|$, and (b) follows from \eqref{eq:inductive} and the monotonicity of $\hat\psi_{\hat\ell}$. This completes the proof. \end{proof} We now have all the pieces to state an error bound result for \eqref{eq:feas} that does not require any constraint qualifications. 
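Before moving on, the two elementary projection inequalities invoked repeatedly in the proof of Lemma~\ref{lem:chain} can be sanity-checked numerically. The sketch below is a toy instance only: it assumes the forms of \eqref{proj:p1} and \eqref{proj:p2} as they are used above, with the hypothetical face $F = \ensuremath{\mathbb R}_+\times\{0\}$ of $K = \ensuremath{\mathbb R}_+^2$ and $\hat x = P_{\operatorname{span} F}(x)$.

```python
import math

# Toy check in R^2 of the projection inequalities used in the proof above,
# for the face F = R_+ x {0} of K = R_+^2 and xhat = P_{span F}(x):
#   d(xhat, F) <= d(x, span F) + d(x, F)     (as proj:p2 is used)
#   d(x, F)    <= d(x, span F) + d(xhat, F)  (as proj:p1 is used)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def proj_span_F(x):   # span F = the x-axis
    return (x[0], 0.0)

def proj_F(x):        # F = nonnegative part of the x-axis
    return (max(x[0], 0.0), 0.0)

def d_F(x):           # distance to F (projection onto F is exact here)
    return dist(x, proj_F(x))

x = (-1.3, 0.7)       # an arbitrary test point outside F
xhat = proj_span_F(x)
d_span = dist(x, xhat)

assert d_F(xhat) <= d_span + d_F(x) + 1e-12
assert d_F(x) <= d_span + d_F(xhat) + 1e-12
print(d_F(x), d_span, d_F(xhat))
```

Both inequalities hold with room to spare for this point; they are what allows the proof to pass from $\ensuremath{\mathbf x}$ to its projection $\hat\ensuremath{\mathbf x}$ onto $\ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}_{\hat\ell}$ and back at the cost of a factor of $2$.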
\begin{theorem}[Error bound based on one-step facial residual functions]\label{theo:err} Suppose \eqref{eq:feas} is feasible and let \[ \ensuremath{\mathcal{F}} _{\ell} \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_1 = \ensuremath{\mathcal{K}} \] be a chain of faces of $ \ensuremath{\mathcal{K}}$ together with $\ensuremath{\mathbf z}_i \in \ensuremath{\mathcal{F}} _i^*\cap \ensuremath{\mathcal{L}}^\perp \cap \{\ensuremath{\mathbf a}\}^\perp$ such that $\{ \ensuremath{\mathcal{F}} _{\ell}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies the PPS condition and $ \ensuremath{\mathcal{F}}_{i+1} = \ensuremath{\mathcal{F}} _i\cap \{\ensuremath{\mathbf z}_i\}^\perp$ for every $i$. For $i = 1,\ldots, \ell - 1$, let $\psi _{i}$ be a one-step facial residual function for $ \ensuremath{\mathcal{F}}_{i}$ and $\ensuremath{\mathbf z}_i$. Then, there is a suitable positively rescaled shift of the $\psi _{i}$ (still denoted as $\psi_i$ by an abuse of notation) such that for any bounded set $B$ there is a positive constant $\kappa_B$ (depending on $B, \ensuremath{\mathcal{L}}, \ensuremath{\mathbf a}, \ensuremath{\mathcal{F}} _{\ell}$) such that \[ \ensuremath{\mathbf x} \in B, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon\quad \Rightarrow \quad \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}}\right) \leq \kappa _B (\epsilon+\varphi(\epsilon,M)), \] where $M = \sup _{\ensuremath{\mathbf x} \in B} \norm{\ensuremath{\mathbf x}}$ and $\varphi = \psi _{{\ell-1}}\diamondsuit \cdots \diamondsuit \psi_{{1}}$, if $\ell \geq 2$. If $\ell = 1$, we let $\varphi$ be the function satisfying $\varphi(\epsilon, M) = \epsilon$.
\end{theorem} \begin{proof} The case $\ell = 1$ follows from Proposition \ref{prop:err_cq2}, by taking $ \ensuremath{\mathcal{F}} = \ensuremath{\mathcal{F}}_1$. Now, suppose $\ell \geq 2$. We apply Lemma \ref{lem:chain}, which tells us that, after positively rescaling and shifting the $\psi_i$, we have: \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon \implies \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}} _{\ell}) \leq \varphi(\epsilon,\norm{\ensuremath{\mathbf x}}), \] where $\varphi = \psi _{{\ell-1}}\diamondsuit \cdots \diamondsuit \psi_{{1}} $. In particular, since $\norm{\ensuremath{\mathbf x}} \leq M$ for $\ensuremath{\mathbf x} \in B$, we have \begin{equation}\label{eq:aam_aux1} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon, \quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon \implies \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}} _{\ell}) \leq \varphi(\epsilon,M), \quad \forall \ensuremath{\mathbf x} \in B. \end{equation} By assumption, $\{ \ensuremath{\mathcal{F}} _{\ell}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies the PPS condition. We invoke Proposition~\ref{prop:err_cq2} to find $ \kappa _B > 0$ such that \begin{equation}\label{eq:aam_aux2} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}} \cap ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a})) \leq \kappa_B(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}_{\ell}) + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})), \quad \forall \ensuremath{\mathbf x} \in B.
\end{equation} Combining \eqref{eq:aam_aux1} and \eqref{eq:aam_aux2}, we conclude that if $\ensuremath{\mathbf x} \in B$ and $\epsilon\ge 0$ satisfy $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \leq \epsilon$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \leq \epsilon$, then we have $\ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}}\right) \leq \kappa_B(\epsilon+\varphi(\epsilon,M))$. This completes the proof. \end{proof} Theorem~\ref{theo:err} is an improvement over \cite[Theorem~23]{L17} because it removes the amenability assumption. Furthermore, it shows that it is enough to determine the one-step facial residual functions for $ \ensuremath{\mathcal{K}}$ and its faces, whereas \cite[Theorem~23]{L17} may require all possible facial residual functions related to $ \ensuremath{\mathcal{K}}$ and its faces. Nevertheless, Theorem~\ref{theo:err} is still an abstract error bound result: whether a concrete inequality can be written down depends on obtaining a formula for the function $\varphi$, which in turn requires explicit expressions for the one-step facial residual functions. In the next subsections, we will address this challenge. \subsection{How to compute one-step facial residual functions?}\label{sec:frf_comp} In this section, we present some general tools for computing one-step facial residual functions. \begin{lemma}[One-step facial residual functions from error bounds]\label{lem:facialresidualsbeta} Suppose that $ \ensuremath{\mathcal{K}}$ is a closed convex cone and let $\ensuremath{\mathbf z}\in \ensuremath{\mathcal{K}}^*$ be such that $ \ensuremath{\mathcal{F}} = \{\ensuremath{\mathbf z} \}^\perp\cap \ensuremath{\mathcal{K}}$ is a proper face of $ \ensuremath{\mathcal{K}}$.
Let $\ensuremath{\mathfrak{g}}:\ensuremath{\mathbb R}_+\to\ensuremath{\mathbb R}_+$ be monotone nondecreasing with $\ensuremath{\mathfrak{g}}(0)=0$, and let $\kappa_{\ensuremath{\mathbf z},\ensuremath{\mathfrak{s}}}$ be a finite monotone nondecreasing nonnegative function of $\ensuremath{\mathfrak{s}}\in \ensuremath{\mathbb R}_+$ such that \begin{equation}\label{assumption:q} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq \kappa_{\ensuremath{\mathbf z},\norm{\ensuremath{\mathbf q}}} \ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}))\ \ \mbox{whenever}\ \ \ensuremath{\mathbf q} \in \{\ensuremath{\mathbf z}\}^\perp. \end{equation} Define the function $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb{R}_+}\times \ensuremath{\mathbb{R}_+}\to \ensuremath{\mathbb{R}_+}$ by \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(s,t) := \max \left\{s,s/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},t}\ensuremath{\mathfrak{g}} \left(s +\max \left\{s,s/\|\ensuremath{\mathbf z}\| \right\} \right). \end{equation*} Then we have \begin{equation}\label{haha} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{F}}) \leq \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,\norm{\ensuremath{\mathbf p}}) \mbox{\ \ \ \ whenever\ \ \ \ $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{K}}) \leq \epsilon$\ \ and\ \ $\inProd{\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}} \leq \epsilon$.} \end{equation} Moreover, $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ is a one-step facial residual function for $ \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z}$.
\end{lemma} \begin{proof} Suppose that $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{K}}) \leq \epsilon$ and $\inProd{\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}} \leq \epsilon$. We first claim that \begin{equation}\label{dpzperp} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf p},\{\ensuremath{\mathbf z} \}^\perp) \leq \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\}. \end{equation} This can be shown as follows. Since $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{K}}^*$, we have $\inProd{\ensuremath{\mathbf p}+P_{ \ensuremath{\mathcal{K}}}(\ensuremath{\mathbf p})-\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}} \geq 0$ and \[ \inProd{\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}} \geq - \inProd{P_{ \ensuremath{\mathcal{K}}}(\ensuremath{\mathbf p})-\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}} \geq -\epsilon \norm{\ensuremath{\mathbf z}}. \] We conclude that $|\inProd{\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}}| \leq \max\{\epsilon \norm{\ensuremath{\mathbf z}},\epsilon\}$. This, in combination with $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf p},\{\ensuremath{\mathbf z} \}^\perp) = |\inProd{\ensuremath{\mathbf p}}{\ensuremath{\mathbf z}}|/\|\ensuremath{\mathbf z}\|$, leads to \eqref{dpzperp}. Next, let $\ensuremath{\mathbf q}:=P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf p}$.
Then we have that \begin{equation*} \begin{aligned} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{F}}) &\leq \|\ensuremath{\mathbf p} - \ensuremath{\mathbf q}\|+\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}})\overset{\rm (a)}\leq \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}})\\ &\overset{\rm (b)}\leq \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},\norm{\ensuremath{\mathbf q}}} \ensuremath{\mathfrak{g}}\left(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})\right)\\ &\overset{\rm (c)}\leq \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},\norm{\ensuremath{\mathbf p}}} \ensuremath{\mathfrak{g}}\left(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})\right)\\ &\overset{\rm (d)}\leq \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},\norm{\ensuremath{\mathbf p}}} \ensuremath{\mathfrak{g}}\left(\epsilon +\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} \right), \end{aligned} \end{equation*} where (a) follows from \eqref{dpzperp}, (b) is a consequence of \eqref{assumption:q}, (c) holds because $\|\ensuremath{\mathbf q}\| = \|P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf p}\| \leq \|\ensuremath{\mathbf p}\|$ so that $\kappa_{\ensuremath{\mathbf z},\|\ensuremath{\mathbf q}\|}\leq \kappa_{\ensuremath{\mathbf z},\|\ensuremath{\mathbf p}\|}$, and (d) holds because $\ensuremath{\mathfrak{g}}$ is monotone nondecreasing and $$ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}) \leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{K}}) + \|\ensuremath{\mathbf q} - \ensuremath{\mathbf p}\| \leq \epsilon + \max 
\left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\}; $$ here, the second inequality follows from \eqref{dpzperp} and the assumption that $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf p}, \ensuremath{\mathcal{K}})\le \epsilon$. This proves \eqref{haha}. Finally, notice that $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ is nonnegative, monotone nondecreasing in each argument, and that $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(0,t)=0$ for every $t \in \ensuremath{\mathbb R}_+$. Hence, $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ is a one-step facial residual function for $ \ensuremath{\mathcal{K}}$ and $\ensuremath{\mathbf z}$. \end{proof} \begin{figure} \caption{Illustration of the points appearing in \eqref{gammabetaeta} of Theorem~\ref{thm:1dfacesmain}: $\ensuremath{\mathbf v} \in \partial \ensuremath{\mathcal{K}}\backslash \ensuremath{\mathcal{F}}$, $\ensuremath{\mathbf w} = P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf v}$ and $\ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}$.}\label{fig:uvw} \end{figure} In view of Lemma~\ref{lem:facialresidualsbeta}, one may construct one-step facial residual functions after establishing the error bound \eqref{assumption:q}. In the next theorem, we present a characterization for the existence of such an error bound. Our result is based on the quantity \eqref{gammabetaeta} defined below being {\em nonzero}. Note that this quantity {\em does not} explicitly involve projections onto $ \ensuremath{\mathcal{K}}$; this enables us to work with the exponential cone later, whose projections do not seem to have simple expressions. Figure~\ref{fig:uvw} provides a geometric interpretation of \eqref{gammabetaeta}. \begin{theorem}[Characterization of the existence of error bounds]\label{thm:1dfacesmain} Suppose that $ \ensuremath{\mathcal{K}}$ is a closed convex cone and let $\ensuremath{\mathbf z}\in \ensuremath{\mathcal{K}}^*$ be such that $ \ensuremath{\mathcal{F}} = \{\ensuremath{\mathbf z} \}^\perp \cap \ensuremath{\mathcal{K}}$ is a nontrivial exposed face of $ \ensuremath{\mathcal{K}}$.
Let $\eta \ge 0$, $\alpha \in (0,1]$ and let $\ensuremath{\mathfrak{g}}:\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ be monotone nondecreasing with $\ensuremath{\mathfrak{g}}(0) = 0$ and $\ensuremath{\mathfrak{g}} \geq |\cdot|^\alpha$. Define \begin{equation}\label{gammabetaeta} \gamma_{\ensuremath{\mathbf z},\eta} := \inf_{\ensuremath{\mathbf v}} \left\{\frac{\ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\|)}{\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|}\;\bigg|\; \ensuremath{\mathbf v}\in \partial \ensuremath{\mathcal{K}}\cap B(\eta)\backslash \ensuremath{\mathcal{F}},\ \ensuremath{\mathbf w} = P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf v},\ \ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w},\ \ensuremath{\mathbf w}\neq \ensuremath{\mathbf u}\right\}. \end{equation} Then the following statements hold. \begin{enumerate}[{\rm (i)}] \item\label{thm:1dfacesmaini} If $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$, then it holds that \begin{equation}\label{haha2} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq \kappa_{\ensuremath{\mathbf z},\eta} \ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}))\ \ \mbox{whenever\ $\ensuremath{\mathbf q} \in \{\ensuremath{\mathbf z} \}^\perp \cap B(\eta)$}, \end{equation} where $\kappa_{\ensuremath{\mathbf z},\eta} := \max \left \{2\eta^{1-\alpha}, 2\gamma_{\ensuremath{\mathbf z},\eta}^{-1} \right \} < \infty$. 
\item\label{thm:1dfacesmainii} If there exists $\kappa_{_B} \in (0,\infty)$ so that \begin{equation}\label{hahaww2} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq \kappa_{_B} \ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}))\ \ \mbox{whenever\ $\ensuremath{\mathbf q} \in \{\ensuremath{\mathbf z} \}^\perp \cap B(\eta)$}, \end{equation} then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$. \end{enumerate} \end{theorem} \begin{proof} We first consider item (i). If $\eta = 0$ or $\ensuremath{\mathbf q} \in \ensuremath{\mathcal{F}}$, the result is vacuously true, so let $\eta > 0$ and $\ensuremath{\mathbf q} \in \{\ensuremath{\mathbf z} \}^\perp \cap B(\eta) \backslash \ensuremath{\mathcal{F}}$. Then $\ensuremath{\mathbf q}\notin \ensuremath{\mathcal{K}}$ because $ \ensuremath{\mathcal{F}} = \{\ensuremath{\mathbf z}\}^\perp\cap \ensuremath{\mathcal{K}}$. Define \[ \ensuremath{\mathbf v}=P_{ \ensuremath{\mathcal{K}}}\ensuremath{\mathbf q},\quad \ensuremath{\mathbf w} = P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf v},\quad \text{and}\quad \ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}. \] Then $\ensuremath{\mathbf v}\in \partial \ensuremath{\mathcal{K}}\cap B(\eta)$ because $\ensuremath{\mathbf q}\notin \ensuremath{\mathcal{K}}$ and $\|\ensuremath{\mathbf q}\|\le \eta$. 
If $\ensuremath{\mathbf v}\in \ensuremath{\mathcal{F}}$, then we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})$ and hence \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^{1-\alpha}\cdot \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^\alpha\le \eta^{1-\alpha}\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^\alpha\le \kappa_{\ensuremath{\mathbf z},\eta} \ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})), \] where the first inequality holds because $\norm{\ensuremath{\mathbf q}}\le \eta$, and the last inequality follows from the definitions of $\ensuremath{\mathfrak{g}}$ and $\kappa_{\ensuremath{\mathbf z},\eta}$. Thus, from now on, we assume that $\ensuremath{\mathbf v}\in \partial \ensuremath{\mathcal{K}}\cap B(\eta)\backslash \ensuremath{\mathcal{F}}$. Next, since $\ensuremath{\mathbf w}=P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}$, it holds that $\ensuremath{\mathbf v}-\ensuremath{\mathbf w}\in \{\ensuremath{\mathbf z}\}^{\perp\perp}$ and hence $\|\ensuremath{\mathbf q}-\ensuremath{\mathbf v}\|^2 = \|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|^2+\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\|^2$. In particular, we have \begin{equation}\label{cinequality_infty} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}) = \|\ensuremath{\mathbf q}-\ensuremath{\mathbf v}\| \geq \max\{\|\ensuremath{\mathbf v}-\ensuremath{\mathbf w}\|,\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|\}, \end{equation} where the equality follows from the definition of $\ensuremath{\mathbf v}$. Now, to establish \eqref{haha2}, we consider two cases. 
\begin{enumerate}[(I)] \item\label{thMbetafacescase1_infty} $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{F}})$; \item\label{thMbetafacescase2_infty} $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) > 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{F}})$. \end{enumerate} \ref{thMbetafacescase1_infty}: In this case, we have from $\ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}$ and $\ensuremath{\mathbf q}\notin \ensuremath{\mathcal{F}}$ that \begin{equation}\label{haha3} 2\|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\|= 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{F}}) \ge \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) > 0, \end{equation} where the first inequality follows from the assumption in this case \ref{thMbetafacescase1_infty}. 
Hence, \begin{equation}\label{biginequalitybetageneral_infty} \frac{1}{\kappa_{\ensuremath{\mathbf z},\eta}} \stackrel{\rm (a)}{\le} \frac{1}{2}\gamma_{\ensuremath{\mathbf z},\eta} \stackrel{\rm (b)}{\leq} \frac{\ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\|)}{2\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|} \stackrel{\rm (c)}{\leq} \frac{\ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}))}{2\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|}\stackrel{\rm (d)}{\leq} \frac{\ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}))}{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}})}, \end{equation} where (a) is true by the definition of $\kappa_{\ensuremath{\mathbf z},\eta}$, (b) uses the condition that $\ensuremath{\mathbf v}\in \partial \ensuremath{\mathcal{K}}\cap B(\eta)\backslash \ensuremath{\mathcal{F}}$, \eqref{haha3} and the definition of $\gamma_{\ensuremath{\mathbf z},\eta}$, (c) is true by \eqref{cinequality_infty} and the monotonicity of $\ensuremath{\mathfrak{g}}$, and (d) follows from \eqref{haha3}. This concludes case \ref{thMbetafacescase1_infty}.\footnote{In particular, in view of \eqref{biginequalitybetageneral_infty}, we see that this case only happens when $\gamma_{\ensuremath{\mathbf z},\eta} < \infty$.} \noindent\ref{thMbetafacescase2_infty}: Using the triangle inequality, we have \begin{equation*} 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq 2\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|+2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{F}})< 2\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|+\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}), \end{equation*} where the strict inequality follows from the condition for this case \ref{thMbetafacescase2_infty}. 
Consequently, we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) \leq 2\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|$. Combining this with \eqref{cinequality_infty}, we deduce further that \[ \begin{aligned} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{F}}) &\le 2\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|\le 2 \max\{\|\ensuremath{\mathbf v}-\ensuremath{\mathbf w}\|,\|\ensuremath{\mathbf q}-\ensuremath{\mathbf w}\|\} \le 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}}) = 2\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^{1-\alpha}\cdot \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^\alpha\\ & \le 2\eta^{1-\alpha}\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})^\alpha\le \kappa_{\ensuremath{\mathbf z},\eta} \ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{\mathcal{K}})), \end{aligned} \] where the fourth inequality holds because $\norm{\ensuremath{\mathbf q}}\le \eta$, and the last inequality follows from the definitions of $\ensuremath{\mathfrak{g}}$ and $\kappa_{\ensuremath{\mathbf z},\eta}$. This proves item (i). We next consider item (ii). Again, the result is vacuously true if $\eta = 0$, so let $\eta > 0$. Let $\ensuremath{\mathbf v}\in \partial \ensuremath{\mathcal{K}}\cap B(\eta)\backslash \ensuremath{\mathcal{F}}$, $\ensuremath{\mathbf w} = P_{\{\ensuremath{\mathbf z} \}^\perp}\ensuremath{\mathbf v}$ and $\ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}$ with $\ensuremath{\mathbf w}\neq \ensuremath{\mathbf u}$. 
Then $\ensuremath{\mathbf w}\in B(\eta)$, and we have in view of \eqref{hahaww2} that \[ \|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\| \overset{\rm (a)}= \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{F}}) \overset{\rm (b)}\le \kappa_{_B}\ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}, \ensuremath{\mathcal{K}})) \overset{\rm (c)}\le \kappa_{_B}\ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\|), \] where (a) holds because $\ensuremath{\mathbf u} = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}$, (b) holds because of \eqref{hahaww2}, $\ensuremath{\mathbf w}\in \{\ensuremath{\mathbf z}\}^\perp$ and $\|\ensuremath{\mathbf w}\|\le \eta$, and (c) is true because $\ensuremath{\mathfrak{g}}$ is monotone nondecreasing and $\ensuremath{\mathbf v}\in \ensuremath{\mathcal{K}}$. Thus, we have $\gamma_{\ensuremath{\mathbf z},\eta} \ge 1/\kappa_{_B} > 0$. This completes the proof. \end{proof} \begin{remark}[About $\kappa_{\ensuremath{\mathbf z},\eta}$ and $\gamma_{\ensuremath{\mathbf z},\eta}^{-1}$]\label{rem:kappa} As $\eta$ increases, the infimum in \eqref{gammabetaeta} is taken over a larger region, so $\gamma_{\ensuremath{\mathbf z},\eta}$ does not increase. Accordingly, $\gamma_{\ensuremath{\mathbf z},\eta}^{-1}$ does not decrease when $\eta$ increases. Therefore, the $\kappa_{\ensuremath{\mathbf z},\eta}$ and $\gamma_{\ensuremath{\mathbf z},\eta}^{-1}$ considered in Theorem~\ref{thm:1dfacesmain} are monotone nondecreasing as functions of $\eta$ when $\ensuremath{\mathbf z}$ is fixed. We are also using the convention that $1/\infty = 0$ so that $\kappa_{\ensuremath{\mathbf z},\eta} = 2\eta^{1-\alpha}$ when $\gamma_{\ensuremath{\mathbf z},\eta} = \infty$. \end{remark} Thus, to establish an error bound as in \eqref{haha2}, it suffices to show that $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ for the choice of $\ensuremath{\mathfrak{g}}$ and $\eta\ge 0$. 
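The quantity $\gamma_{\ensuremath{\mathbf z},\eta}$ in \eqref{gammabetaeta} can also be probed numerically by sampling. The following Python sketch is purely illustrative and not part of the paper's arguments: it estimates $\gamma_{\ensuremath{\mathbf z},\eta}$ for a cone whose projections are elementary, namely the second-order cone $\{(x_0,x_1,x_2) \mid x_0 \ge \sqrt{x_1^2+x_2^2}\}$, with the sample choices $\ensuremath{\mathbf z} = (1,0,1)$ and $\ensuremath{\mathfrak{g}}(t) = \sqrt{t}$ (so $\alpha = 1/2$); here the exposed face $\ensuremath{\mathcal{F}}$ is the ray through $(1,0,-1)$.

```python
import numpy as np

# Illustrative numerical probe of gamma_{z,eta}, NOT part of the paper's
# proofs.  Cone: 3D second-order cone K = {(x0,x1,x2) : x0 >= sqrt(x1^2+x2^2)},
# with the sample choices z = (1, 0, 1) in K* = K and g(t) = sqrt(t)
# (alpha = 1/2); the exposed face F = {z}^perp cap K is the ray through
# f = (1, 0, -1).
z = np.array([1.0, 0.0, 1.0])
f = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)  # unit generator of the ray F
eta = 1.0

def ratio(v):
    """g(||w - v||) / ||w - u|| with w = P_{z^perp} v and u = P_F w."""
    w = v - (z @ v) / (z @ z) * z            # projection onto {z}^perp
    u = max(w @ f, 0.0) * f                  # projection onto the ray F
    if np.linalg.norm(w - u) < 1e-12:        # the infimum excludes w = u
        return np.inf
    return np.sqrt(np.linalg.norm(w - v)) / np.linalg.norm(w - u)

# Boundary points v = r (1, cos t, sin t) lie in bd(K) cap B(eta) whenever
# r <= eta / sqrt(2); points with w = u are skipped inside ratio().
gamma_est = min(ratio(r * np.array([1.0, np.cos(t), np.sin(t)]))
                for r in np.linspace(0.05, eta / np.sqrt(2.0), 40)
                for t in np.linspace(0.0, 2.0 * np.pi, 400))
print(gamma_est)  # strictly positive in this experiment
```

A strictly positive sampled infimum is consistent with item (i) of Theorem~\ref{thm:1dfacesmain}, while a sampled infimum tending to zero would point, via Lemma~\ref{lem:infratio} below, to a sequence witnessing the failure of the error bound for this choice of $\ensuremath{\mathfrak{g}}$.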
Clearly, $\gamma_{\ensuremath{\mathbf z},0} = \infty$. The next lemma allows us to check whether $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ for an $\eta > 0$ by considering {\em convergent} sequences. \begin{lemma}\label{lem:infratio} Suppose that $ \ensuremath{\mathcal{K}}$ is a closed convex cone and let $\ensuremath{\mathbf z}\in \ensuremath{\mathcal{K}}^*$ be such that $ \ensuremath{\mathcal{F}} = \{\ensuremath{\mathbf z} \}^\perp \cap \ensuremath{\mathcal{K}}$ is a nontrivial exposed face of $ \ensuremath{\mathcal{K}}$. Let $\eta > 0$, $\alpha \in (0,1]$ and let $\ensuremath{\mathfrak{g}}:\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ be monotone nondecreasing with $\ensuremath{\mathfrak{g}}(0) = 0$ and $\ensuremath{\mathfrak{g}} \geq |\cdot|^\alpha$. Let $\gamma_{\ensuremath{\mathbf z},\eta}$ be defined as in \eqref{gammabetaeta}. If $\gamma_{\ensuremath{\mathbf z},\eta} = 0$, then there exist $\ensuremath{\bar{\vv}} \in \ensuremath{\mathcal{F}}$ and a sequence $\{\ensuremath{\mathbf v}^k\}\subset \partial \ensuremath{\mathcal{K}}\cap B(\eta) \backslash \ensuremath{\mathcal{F}}$ such that \begin{subequations}\label{infratiohk} \begin{align} \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf v}^k = \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf w}^k &= \ensuremath{\bar{\vv}} \label{infratiohka} \\ {\rm and}\ \ \ \lim_{k\rightarrow\infty} \frac{\ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|)}{\|\ensuremath{\mathbf w}^k- \ensuremath{\mathbf u}^k\|} &= 0, \label{infratiohkb} \end{align} \end{subequations} where $\ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k$, $\ensuremath{\mathbf u}^k = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}^k$ and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$. \end{lemma} \begin{proof} Suppose that $\gamma_{\ensuremath{\mathbf z},\eta} = 0$. 
Then, by the definition of infimum, there exists a sequence $\{\ensuremath{\mathbf v}^k\}\subset\partial \ensuremath{\mathcal{K}}\cap B(\eta) \backslash \ensuremath{\mathcal{F}}$ such that \begin{equation}\label{infratio1} \underset{k \rightarrow \infty}{\lim} \frac{\ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|)}{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|} = 0, \end{equation} where $\ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k$, $\ensuremath{\mathbf u}^k = P_{ \ensuremath{\mathcal{F}}}\ensuremath{\mathbf w}^k$ and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$. Since $\{\ensuremath{\mathbf v}^k\}\subset B(\eta)$, by passing to a convergent subsequence if necessary, we may assume without loss of generality that \begin{equation}\label{infratio4} \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf v}^k = \ensuremath{\bar{\vv}} \end{equation} for some $\ensuremath{\bar{\vv}}\in \ensuremath{\mathcal{K}}\cap B(\eta)$. In addition, since $0\in \ensuremath{\mathcal{F}}\subseteq \{\ensuremath{\mathbf z}\}^\perp$, and projections onto closed convex sets are nonexpansive, we see that $\{\ensuremath{\mathbf w}^k\}\subset B(\eta)$ and $\{\ensuremath{\mathbf u}^k\}\subset B(\eta)$, and hence the sequence $\{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\|\}$ is bounded. Then we can conclude from \eqref{infratio1} and the assumption $\ensuremath{\mathfrak{g}} \geq |\cdot|^\alpha$ that \begin{equation}\label{infratio3} \underset{k \rightarrow \infty}{\lim}\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\| =0. \end{equation} Now \eqref{infratio3}, \eqref{infratio4}, and the triangle inequality give $\ensuremath{\mathbf w}^k\to \ensuremath{\bar{\vv}}$. Since $\{\ensuremath{\mathbf w}^k\}\subset \{\ensuremath{\mathbf z}\}^\perp$, it then follows that $\ensuremath{\bar{\vv}} \in \{\ensuremath{\mathbf z} \}^\perp$. 
Thus, $\ensuremath{\bar{\vv}}\in \{\ensuremath{\mathbf z} \}^\perp \cap \ensuremath{\mathcal{K}}= \ensuremath{\mathcal{F}}$. This completes the proof. \end{proof} Let $ \ensuremath{\mathcal{K}}$ be a closed convex cone. Lemma~\ref{lem:facialresidualsbeta}, Theorem~\ref{thm:1dfacesmain} and Lemma~\ref{lem:infratio} are tools to obtain {{one-step facial residual function}}s for $ \ensuremath{\mathcal{K}}$. These are exactly the kind of facial residual functions needed in the abstract error bound result, Theorem~\ref{theo:err}. We conclude this subsection with a result that connects the {{one-step facial residual function}}s of a product cone and those of its constituent cones, which is useful for deriving error bounds for product cones. \begin{proposition}[{{one-step facial residual function}}s for products]\label{prop:frf_prod} Let $ \ensuremath{\mathcal{K}}^i \subseteq \ensuremath{\mathcal{E}}^i$ be closed convex cones for every $i \in \{1,\ldots,m\}$ and let $ \ensuremath{\mathcal{K}} = \ensuremath{\mathcal{K}}^1 \times \cdots \times \ensuremath{\mathcal{K}}^m$. Let $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}$, $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{F}}^*$ and suppose that $ \ensuremath{\mathcal{F}} = \ensuremath{\mathcal{F}}^1\times \cdots \times \ensuremath{\mathcal{F}}^m$ with $ \ensuremath{\mathcal{F}}^i \mathrel{\unlhd} \ensuremath{\mathcal{K}}^i$ for every $i \in \{1,\ldots,m\}$. Write $\ensuremath{\mathbf z} = (\ensuremath{\mathbf z}_1,\ldots,\ensuremath{\mathbf z}_m)$ with $\ensuremath{\mathbf z}_i \in ( \ensuremath{\mathcal{F}}^i)^*$. For every $i$, let $\psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}$ be a {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}^i$ and $\ensuremath{\mathbf z}_i$.
Then, there exists a $\kappa > 0$ such that the function $\psi _{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}$ satisfying \[ \psi _{ \ensuremath{\mathcal{F}},\ensuremath{\mathbf z}}(\epsilon,t) = \sum _{i=1}^m \psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}(\kappa\epsilon,t) \] is a {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}$ and $\ensuremath{\mathbf z}$. \end{proposition} \begin{proof} Suppose that $\ensuremath{\mathbf x} \in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}$ and $\epsilon\ge 0$ satisfy the inequalities \begin{equation*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}) \leq \epsilon, \quad \inProd{\ensuremath{\mathbf x}}{\ensuremath{\mathbf z}} \leq \epsilon. \end{equation*} We note that \[ \ensuremath{\mathcal{F}}\cap \{\ensuremath{\mathbf z}\}^{\perp} = ( \ensuremath{\mathcal{F}}^{1} \cap\{\ensuremath{\mathbf z}_1\}^\perp) \times \cdots \times ( \ensuremath{\mathcal{F}}^{m} \cap\{\ensuremath{\mathbf z}_m\}^\perp), \] and that for every $i \in \{1,\ldots,m\}$, \begin{align}\label{eq:f1} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}_i, \ensuremath{\mathcal{F}}^i)\le \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}})\le \epsilon. \end{align} Since $\ensuremath{\mathbf z}_i \in ( \ensuremath{\mathcal{F}}^i)^*$, we have from \eqref{eq:f1} that \begin{equation}\label{eq:p_f1} 0 \leq \inProd{\ensuremath{\mathbf z}_i}{P_{ \ensuremath{\mathcal{F}}^{i}}({\ensuremath{\mathbf x}}_i)} = \inProd{\ensuremath{\mathbf z}_i}{P_{ \ensuremath{\mathcal{F}}^{i}}({\ensuremath{\mathbf x}}_i) - {\ensuremath{\mathbf x}}_i + {\ensuremath{\mathbf x}}_i } \leq \epsilon\norm{\ensuremath{\mathbf z}_i} + \inProd{\ensuremath{\mathbf z}_i}{{\ensuremath{\mathbf x}}_i}.
\end{equation} Using \eqref{eq:p_f1} for all $i$ and recalling that $\inProd{\ensuremath{\mathbf z}}{\ensuremath{\mathbf x}}\leq \epsilon$, we have \begin{align} \inProd{(\ensuremath{\mathbf z}_1,\ldots,\ensuremath{\mathbf z}_m)}{(P_{ \ensuremath{\mathcal{F}}^{1}}({\ensuremath{\mathbf x}}_1),\ldots,P_{ \ensuremath{\mathcal{F}}^{m}}({\ensuremath{\mathbf x}}_m)) } \leq \sum _{i=1}^m\left[\epsilon\norm{\ensuremath{\mathbf z}_i} + \inProd{\ensuremath{\mathbf z}_i}{\ensuremath{\mathbf x}_i}\right] \leq \hat\kappa \epsilon, \label{eq:p_f} \end{align} where $\hat\kappa = 1+\sum_{i=1}^m\norm{\ensuremath{\mathbf z}_i}$. Since $\inProd{\ensuremath{\mathbf z}_i}{P_{ \ensuremath{\mathcal{F}}^{i}}(\ensuremath{\mathbf x}_i)} \geq 0$ for $i \in \{1,\ldots,m\}$, from \eqref{eq:p_f} we obtain \begin{equation}\label{eq:z_pf} \inProd{\ensuremath{\mathbf z}_i}{P_{ \ensuremath{\mathcal{F}}^{i}}(\ensuremath{\mathbf x}_i)} \leq \hat\kappa \epsilon, \quad i \in \{1,\ldots,m\}. \end{equation} This implies that for $i \in\{1,\ldots,m\}$ we have \begin{equation}\label{eq:revisef1} \inProd{\ensuremath{\mathbf z}_i}{\ensuremath{\mathbf x}_i} = \inProd{\ensuremath{\mathbf z}_i}{\ensuremath{\mathbf x}_i -P_{ \ensuremath{\mathcal{F}}^{i}}(\ensuremath{\mathbf x}_i) + P_{ \ensuremath{\mathcal{F}}^{i}}(\ensuremath{\mathbf x}_i) } \leq \epsilon\norm{\ensuremath{\mathbf z}} + \hat\kappa \epsilon, \end{equation} where the inequality follows from \eqref{eq:f1} and \eqref{eq:z_pf}.
Now, recapitulating, the facial residual function $\psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}$ has the property that if $\gamma_1,\gamma_2 \in \ensuremath{\mathbb R}_+$ then the relations \begin{align*} \ensuremath{\mathbf y}_i\in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}^{i},\quad \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}_i, \ensuremath{\mathcal{F}}^{i}) \leq \gamma_1,\quad \inProd{\ensuremath{\mathbf y}_i}{\ensuremath{\mathbf z}_i} \leq \gamma_2 \end{align*} imply $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}_i, \ensuremath{\mathcal{F}}^i\cap \{\ensuremath{\mathbf z}_i\}^\perp) \leq \psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}(\max_{1\leq j \leq 2}\{\gamma_j \},\norm{\ensuremath{\mathbf y}_i})$. Therefore, from \eqref{eq:f1}, \eqref{eq:revisef1} and the monotonicity of $\psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}$, we have upon recalling $\ensuremath{\mathbf x}\in \ensuremath{\mathrm{span}\,} \ensuremath{\mathcal{F}}$ that \begin{equation}\label{eq:x_df} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}_i, \ensuremath{\mathcal{F}}^i\cap \{\ensuremath{\mathbf z}_i\}^\perp) \leq \psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}(\max\{1,\hat\kappa+\|\ensuremath{\mathbf z}\|\}\epsilon,\norm{\ensuremath{\mathbf x}_i}). \end{equation} Finally, from \eqref{eq:x_df}, we conclude that \begin{align}\notag \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{F}}\cap \{\ensuremath{\mathbf z}\}^\perp) \le \sum _{i=1}^m{\ensuremath{\operatorname{d}}({\ensuremath{\mathbf x}}_i, \ensuremath{\mathcal{F}}^i \cap \{\ensuremath{\mathbf z}_i\}^\perp)} \leq \sum _{i=1}^m\psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}(\max\{1,\hat\kappa+\|\ensuremath{\mathbf z}\|\}\epsilon,\norm{\ensuremath{\mathbf x}}), \end{align} where we also used the monotonicity of $\psi_{ \ensuremath{\mathcal{F}}^i,\ensuremath{\mathbf z}_i}$ for the last inequality. 
This completes the proof. \end{proof} \section{The exponential cone}\label{sec:exp_cone} In this section, we will use all the techniques developed so far to obtain error bounds for the 3D exponential cone $\ensuremath{K_{\exp}}$. We will start with a study of its facial structure in Section~\ref{sec:facial_structure}, then we will compute its {{one-step facial residual function}}s in Section~\ref{sec:exp_frf}. Finally, error bounds will be presented in Section~\ref{sec:exp_err}. In Section~\ref{sec:odd}, we summarize odd behaviour found in the facial structure of the exponential cone. \subsection{Facial structure}\label{sec:facial_structure} Recall that the exponential cone is defined as follows: \begin{align}\label{d:Kexp} K_{\exp}:=&\left \{(x,y,z)\;|\;y>0,z\geq ye^{x/y}\right \} \cup \left \{(x,y,z)\;|\; x \leq 0, z\geq 0, y=0 \right \}. \end{align} \begin{figure} \caption{The exponential cone and its dual, with faces and exposing vectors labeled according to our index $\beta$.} \label{fig:exp} \end{figure} \noindent Its dual cone is given by \begin{align*} K_{\exp}^*:=&\left \{(x,y,z)\;|\;x<0, ez\geq -xe^{y/x}\right \} \cup \left \{(x,y,z)\;|\; x = 0, z\geq 0, y\geq0 \right \}. \end{align*} It may therefore be readily seen that $K_{\exp}^*$ is a scaled and rotated version of $K_{\exp}$. In this subsection, we will describe the nontrivial faces of $K_{\exp}$; see Figure~\ref{fig:exp}. We will show that we have the following types of nontrivial faces: \begin{enumerate}[{\rm (a)}] \item infinitely many exposed extreme rays (1D faces) parametrized by $\beta \in \ensuremath{\mathbb R}$ as follows: \begin{equation}\label{eq:exp_1d_beta} \ensuremath{{\mathcal F}}_\beta \coloneqq \left\{\left(-\beta y+y,y,e^{1-\beta}y \right)\;\bigg|\;y \in [0,\infty )\right\}. 
\end{equation} \item a single ``exceptional'' exposed extreme ray denoted by $\ensuremath{{\mathcal F}}_{\infty}$: \begin{equation}\label{eq:exp_1d_exc} \ensuremath{{\mathcal F}}_{\infty}\coloneqq \{(x,0,0)\;|\;x\le0 \}. \end{equation} \item a single non-exposed extreme ray denoted by $\ensuremath{{\mathcal F}} _{ne} $: \begin{equation}\label{eq:exp_1d_ne} \ensuremath{{\mathcal F}} _{ne}\coloneqq \{(0,0,z)\;|\;z\ge0 \}. \end{equation} \item a single 2D exposed face denoted by $\ensuremath{{\mathcal F}}_{-\infty}$: \begin{equation}\label{eq:exp_2d} \ensuremath{{\mathcal F}}_{-\infty}\coloneqq \{(x,y,z)\;|\;x\le0, z\ge0, y=0\}, \end{equation} where we note that $\ensuremath{{\mathcal F}}_{\infty}$ and $\ensuremath{{\mathcal F}} _{ne}$ are the extreme rays of $\ensuremath{{\mathcal F}}_{-\infty}$. \end{enumerate} Notice that except for the case (c), all faces are exposed and thus arise as an intersection $\{\ensuremath{\mathbf z}\}^\perp\cap K_{\exp}$ for some $\ensuremath{\mathbf z} \in K_{\exp}^*$. To establish the above characterization, we start by examining how the components of $\ensuremath{\mathbf z}$ determine the corresponding exposed face. \subsubsection{Exposed faces}\label{sec:exposed} Let $\ensuremath{\mathbf z}\in \ensuremath{K_{\exp}}^*$ be such that $\{\ensuremath{\mathbf z}\}^\perp\cap \ensuremath{K_{\exp}}$ is a nontrivial face of $\ensuremath{K_{\exp}}$. Then $\ensuremath{\mathbf z}\neq 0$ and $\ensuremath{\mathbf z}\in \partial K_{\exp}^*$. We consider the following cases. \noindent \underline{$z_x < 0$}: Since $\ensuremath{\mathbf z}\in \partial K_{\exp}^*$, we must have $z_z e = -z_x e^{\frac{z_y}{z_x}}$ and hence \begin{equation}\label{d:pz} \ensuremath{\mathbf z}=(z_x,z_y,-z_x e^{\frac{z_y}{z_x}-1}). \end{equation} Since $z_x \neq 0$, we see that $\ensuremath{\mathbf q} \in \{\ensuremath{\mathbf z}\}^\perp$ if and only if \begin{equation}\label{qdotp1} q_x+ q_y\left(\frac{z_y}{z_x}\right)-q_z e^{\frac{z_y}{z_x}-1} = 0. 
\end{equation} Solving \eqref{qdotp1} for $q_z$ and letting $\beta:=\frac{z_y}{z_x}$ to simplify the exposition, we have \begin{equation}\label{qdotp4} q_z = e^{1-\frac{z_y}{z_x}}\left(q_x+q_y\cdot \frac{z_y}{z_x} \right) = e^{1-\beta}\left(q_x+q_y\beta \right) \;\; \text{with}\;\;\beta:=\frac{z_y}{z_x}\in (-\infty,\infty). \end{equation} Thus, we obtain that $\{\ensuremath{\mathbf z}\}^\perp = \left \{ \left(x,y, e^{1-\beta}\left(x+y\beta \right) \right)\;\big|\; x,y \in \ensuremath{\mathbb R} \right \}$. Combining this with the definition of $K_{\exp}$ and the fact that $\{\ensuremath{\mathbf z}\}^\perp$ is a supporting hyperplane (so that $K_{\exp} \cap \{\ensuremath{\mathbf z}\}^\perp = \partial K_{\exp}\cap \{\ensuremath{\mathbf z}\}^\perp$) yields \begin{equation}\label{capperp} \begin{aligned} &K_{\exp} \cap \{\ensuremath{\mathbf z}\}^\perp = \partial K_{\exp}\cap \{\ensuremath{\mathbf z}\}^\perp \\ =& \left \{ \left(x,y, e^{1-\beta}\left(x+y\beta \right) \right) \;\;\big|\;\; e^{1-\beta}\left(x+y\beta \right) = ye^{\frac{x}{y}},y>0 \right \} \\ & \bigcup \left \{ \left(x,y, e^{1-\beta}\left(x+y\beta \right) \right) \;\;\big|\;\; x \leq 0, e^{1-\beta}\left(x+y\beta \right) \geq 0,y=0 \right \} \\ =&\left \{ \left(x,y, e^{1-\beta}\left(x+y\beta \right) \right) \;\;\big|\;\; e^{1-\beta}\left(x+y\beta \right) = ye^{\frac{x}{y}},y>0 \right \} \cup \{0\}. \end{aligned} \end{equation} We now refine the above characterization in the next proposition. \begin{proposition}[Characterization of $\ensuremath{{\mathcal F}}_\beta$, $\beta\in \ensuremath{\mathbb R}$]\label{d:FcapbdKexp} Let $\ensuremath{\mathbf z} \in K_{\exp}^*$ satisfy $\ensuremath{\mathbf z}=(z_x,z_y,z_z)$, where $z_z e = -z_x e^{\frac{z_y}{z_x}}$ and $z_x <0$. Define $\beta=\frac{z_y}{z_x}$ as in \eqref{qdotp4} and let $\ensuremath{{\mathcal F}}_\beta:= K_{\exp}\cap \{\ensuremath{\mathbf z}\}^\perp$. 
Then \begin{align*} \ensuremath{{\mathcal F}}_\beta = \left\{\left(-\beta y+y,y,e^{1-\beta}y \right)\;\bigg|\;y \in [0,\infty )\right\}. \end{align*} \end{proposition} \begin{proof} Let $\Omega := \left\{\left(-\beta y+y,y,e^{1-\beta}y \right)\;\big|\;y \in [0,\infty )\right\}$. In view of \eqref{capperp}, we can check that $\Omega\subseteq \ensuremath{{\mathcal F}}_\beta$. To prove the converse inclusion, pick any $\ensuremath{\mathbf q}=\left(x,y,e^{1-\beta}(x+y\beta)\right) \in \ensuremath{{\mathcal F}}_\beta$. We need to show that $\ensuremath{\mathbf q}\in \Omega$. To this end, we note from \eqref{capperp} that if $y = 0$, then necessarily $\ensuremath{\mathbf q} = \mathbf{0}$ and consequently $\ensuremath{\mathbf q}\in \Omega$. On the other hand, if $y > 0$, then \eqref{capperp} gives $ye^{x/y}= (x+\beta y)e^{1-\beta}$. Then we have the following chain of equivalences: \begin{equation}\label{xneq0case} \begin{array}{crl} & ye^{x/y}&\!= (x+\beta y)e^{1-\beta} \\ \iff & -e^{-1} &\!=-(x/y+\beta)e^{-(x/y+\beta)} \\ \overset{\rm (a)}\iff & -x/y-\beta&\!=-1 \\ \iff & x&\!=y-y\beta, \end{array} \end{equation} where (a) follows from the fact that the function $t\mapsto te^t$ is strictly increasing on $[-1,\infty)$. Plugging the last expression back into $\ensuremath{\mathbf q}$, we may compute \begin{equation}\label{xneq0case6} q_z = e^{1-\beta}(x+y\beta) = e^{1-\beta}(y-y\beta+y\beta)=ye^{1-\beta}. \end{equation} Altogether, \eqref{xneq0case} and \eqref{xneq0case6}, together with $y>0$, yield \begin{equation*} \ensuremath{\mathbf q}=\left(y-\beta y ,y,ye^{1-\beta} \right)\in \Omega. \end{equation*} This completes the proof. \end{proof} Next, we move on to the two remaining cases. \noindent{\underline{$z_x = 0$, $z_z > 0$}}: Notice that $\ensuremath{\mathbf q}\in \ensuremath{K_{\exp}}$ means that $q_y \ge 0$ and $q_z\ge 0$. Since $z_z > 0$ and $z_y \ge 0$, in order to have $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp$, we must have $q_z = 0$.
The definition of $K_{\exp}$ then also forces $q_y = 0$ and hence \begin{equation}\label{Finfinity} \{\ensuremath{\mathbf z}\}^\perp\cap K_{\exp}=\{(x,0,0)\;|\; x \leq 0\} =:\ensuremath{{\mathcal F}}_{\infty}. \end{equation} This one-dimensional face is exposed by any hyperplane with normal vectors coming from the set $\{(0,z_y,z_z):\; z_y\ge 0, z_z > 0\}$. \noindent{\underline{$z_x = 0$, $z_z = 0$}}: In this case, we have $z_y > 0$. In order to have $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp$, we must have $q_y = 0$. Thus \begin{equation}\label{Fneginfinity} \{\ensuremath{\mathbf z}\}^\perp\cap K_{\exp} = \{(x,y,z)\;|\;x\le0, z\ge0, y=0\} =: \ensuremath{{\mathcal F}}_{-\infty}, \end{equation} which is the unique two-dimensional face of $K_{\exp}$. \subsubsection{The single non-exposed face and completeness of the classification} The face $\ensuremath{{\mathcal F}}_{ne}$ is non-exposed because, as shown in Proposition~\ref{d:FcapbdKexp}, \eqref{Finfinity} and \eqref{Fneginfinity}, it never arises as an intersection of the form $\{\ensuremath{\mathbf z}\}^\perp\cap K_{\exp}$, for $\ensuremath{\mathbf z} \in K_{\exp}^*$. We now show that all nontrivial faces of $K_{\exp}$ were accounted for in \eqref{eq:exp_1d_beta}, \eqref{eq:exp_1d_exc}, \eqref{eq:exp_1d_ne} and \eqref{eq:exp_2d}. First of all, by the discussion in Section~\ref{sec:exposed}, all nontrivial exposed faces must be among the ones in \eqref{eq:exp_1d_beta}, \eqref{eq:exp_1d_exc} and \eqref{eq:exp_2d}. So, let $ \ensuremath{\mathcal{F}}$ be a non-exposed face of $K_{\exp}$. Then, it must be contained in a nontrivial exposed face of $K_{\exp}$.\footnote{This is a general fact. A proper face of a closed convex cone is contained in a proper exposed face, e.g., \cite[Proposition~3.6]{BW81}.} Therefore, $ \ensuremath{\mathcal{F}}$ must be a proper face of the unique 2D face \eqref{eq:exp_2d}.
This implies that $ \ensuremath{\mathcal{F}}$ is one of the extreme rays of \eqref{eq:exp_2d}: $\ensuremath{{\mathcal F}}_{\infty}$ or $\ensuremath{{\mathcal F}}_{ne}$. By assumption, $ \ensuremath{\mathcal{F}}$ is non-exposed, so it must be $\ensuremath{{\mathcal F}}_{ne}$. \subsection{{{One-step facial residual function}}s}\label{sec:exp_frf} In this subsection, we will use the machinery developed in Section~\ref{sec:frf} to obtain the {{one-step facial residual function}}s for $K_{\exp}$. Let us first discuss how the discoveries were originally made, and how that process motivated the development of the framework we built in Section~\ref{sec:frf}. The FRFs proven here were initially found by using the characterizations of Theorem~\ref{thm:1dfacesmain} and Lemma~\ref{lem:infratio} together with numerical experiments. Specifically, we used \emph{Maple}\footnote{Though one could use any suitable computer algebra package.} to numerically evaluate limits of relevant sequences \eqref{infratiohk}, as well as plotting lower dimensional slices of the function $\ensuremath{\mathbf v} \mapsto \ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf v}-\ensuremath{\mathbf w}\|)/\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|$, where $\ensuremath{\mathbf w}$ and $\ensuremath{\mathbf u}$ are defined similarly as in \eqref{gammabetaeta}. A natural question is whether it might be simpler to change coordinates and work with the nearly equivalent $\ensuremath{\mathbf w} \mapsto \ensuremath{\mathfrak{g}}(\|\ensuremath{\mathbf v}-\ensuremath{\mathbf w}\|)/\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|$, since $\ensuremath{\mathbf w} \in \{\ensuremath{\mathbf z} \}^\perp$. However, $P_{\{\ensuremath{\mathbf z}\}^\perp}^{-1}\{\ensuremath{\mathbf w}\}\cap \partial \ensuremath{\mathcal{K}}$ may contain multiple points, which creates many challenges. 
We encountered an example of this when working with the exponential cone, where the change of coordinates from $\ensuremath{\mathbf v}$ to $\ensuremath{\mathbf w}$ necessitates the introduction of the two real branches of the Lambert $\mathcal W$ function (see, for example, \cite{BL2016,bauschke2018proximal,burachik2019generalized} or \cite{Se15} for the closely related Wright Omega function). With considerable effort, one can use such a parametrization to prove the FRFs for $\ensuremath{{\mathcal F}}_{\beta}, \beta \in \left[-\infty,\infty \right]\setminus \{\hat{\beta}:=-\mathcal W_{{\rm principal}}(2e^{-2})/2 \}$. However, the change of branches inhibits proving the result for the exceptional number $\hat{\beta}$. The change of variables to $\ensuremath{\mathbf v}$ cures this problem by obviating the need for a branch function in the analysis; see \cite{WWJD} for additional details. This is why we present Theorem~\ref{thm:1dfacesmain} in terms of $\ensuremath{\mathbf v}$. Computational investigation also pointed the way to the proof, though the proof we present may be understood without the aid of a computer. \subsubsection{$\ensuremath{{\mathcal F}}_{-\infty}$: the unique 2D face} \label{sec:freas_2d} Recall the unique 2D face of $K_{\exp}$: $$ \ensuremath{\mathcal{F}}_{-\infty}:=\{(x,y,z)\;|\; x\leq 0, z\geq 0, y=0 \}. $$ Define the piecewise modified Boltzmann--Shannon entropy $\ensuremath{\mathfrak{g}}_{-\infty}:\ensuremath{\mathbb{R}_+}\to \ensuremath{\mathbb{R}_+}$ as follows: \begin{equation}\label{d:entropy} \ensuremath{\mathfrak{g}}_{-\infty}(t) := \begin{cases} 0 & \text{if}\;\; t=0,\\ -t \ln(t) & \text{if}\;\; t\in \left(0,1/e^2\right],\\ t+ \frac{1}{e^2} & \text{if}\;\; t>1/e^2. \end{cases} \end{equation} For more on its usefulness in optimization, see, for example, \cite{bauschke2018proximal,burachik2019generalized}.
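As a quick numerical sanity check (ours, not part of the original development; the function name is arbitrary), one can verify that the two nonzero pieces of the definition agree at the breakpoint $t=1/e^2$, that the function is monotone increasing, and that $t\le \ensuremath{\mathfrak{g}}_{-\infty}(t)$ together with a doubling bound $\ensuremath{\mathfrak{g}}_{-\infty}(2t)\le 2\ensuremath{\mathfrak{g}}_{-\infty}(t)$ holds on a sample grid (so the particular constant $L=2$ works there):

```python
import math

def g_neg_infty(t):
    """Our transcription of the piecewise modified Boltzmann--Shannon entropy."""
    if t == 0.0:
        return 0.0
    if t <= 1.0 / math.e**2:
        return -t * math.log(t)
    return t + 1.0 / math.e**2

b = 1.0 / math.e**2
# Continuity at the breakpoint: both nonzero branches give 2/e^2 at t = 1/e^2.
assert abs(g_neg_infty(b) - 2.0 / math.e**2) < 1e-12
assert abs(g_neg_infty(b * (1 + 1e-9)) - g_neg_infty(b)) < 1e-8

# t <= g(t) and the doubling bound g(2t) <= 2 g(t) on a sample grid.
grid = sorted([10.0**(-k) for k in range(1, 12)] + [0.05, 0.1, 0.5, 1.0, 10.0])
for t in grid:
    assert t <= g_neg_infty(t)
    assert g_neg_infty(2 * t) <= 2 * g_neg_infty(t)

# Monotone increasing along the grid.
prev = 0.0
for t in grid:
    assert g_neg_infty(t) >= prev
    prev = g_neg_infty(t)
```
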
We note that $\ensuremath{\mathfrak{g}}_{-\infty}$ is monotone increasing and there exists $L\ge 1$ such that the following inequalities hold for every $t \in \ensuremath{\mathbb{R}_+}$ and $M > 0$:\footnote{The third relation in \eqref{d:entropy_p} is derived from the second relation and the monotonicity of $\ensuremath{\mathfrak{g}}_{-\infty}$ as follows: $\ensuremath{\mathfrak{g}}_{-\infty}(Mt) = \ensuremath{\mathfrak{g}}_{-\infty}(2^{\log_2 M}t )\le \ensuremath{\mathfrak{g}}_{-\infty}(2^{\lceil |\log_2 M|\rceil}t )\le L^{\lceil |\log_2 M|\rceil}\ensuremath{\mathfrak{g}}_{-\infty}(t)\le L^{1 +|\log_2 M|}\ensuremath{\mathfrak{g}}_{-\infty}(t)$.} \begin{equation}\label{d:entropy_p} |t| \leq \ensuremath{\mathfrak{g}}_{-\infty}(t), \quad \ensuremath{\mathfrak{g}}_{-\infty}(2t) \leq L\ensuremath{\mathfrak{g}}_{-\infty}(t),\quad \ensuremath{\mathfrak{g}}_{-\infty}(Mt) \leq L^{1+|\log_2(M)|}\ensuremath{\mathfrak{g}}_{-\infty}(t). \end{equation} With that, we prove in the next theorem that $\gamma_{\ensuremath{\mathbf z},\eta}$ is positive for $\ensuremath{{\mathcal F}}_{-\infty}$, which implies that an \emph{entropic error bound} holds. \begin{theorem}[Entropic error bound concerning $\ensuremath{{\mathcal F}}_{-\infty}$]\label{thm:entropic} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x=z_z=0$ and $z_y > 0$ so that $\{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}=\ensuremath{{\mathcal F}}_{-\infty}$ is the two-dimensional face of $K_{\exp}$. Let $\eta > 0$ and let $\gamma_{\ensuremath{\mathbf z},\eta}$ be defined as in \eqref{gammabetaeta} with $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ in \eqref{d:entropy}. 
Then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ and \begin{equation}\label{eq:entropic} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_{-\infty})\le \max\{2,2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\cdot\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},K_{\exp}))\ \ \ \mbox{whenever $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)$.} \end{equation} \end{theorem} \begin{proof} In view of Lemma~\ref{lem:infratio}, take any $\ensuremath{\bar{\vv}} \in \ensuremath{{\mathcal F}}_{-\infty}$ and a sequence $\{\ensuremath{\mathbf v}^k\}\subset \partial K_{\exp}\cap B(\eta) \backslash \ensuremath{{\mathcal F}}_{-\infty}$ such that \begin{equation}\label{infratiohk_contradiction_neg_infty} \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf v}^k = \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf w}^k =\ensuremath{\bar{\vv}}, \end{equation} where $\ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k$, $\ensuremath{\mathbf u}^k = P_{\ensuremath{{\mathcal F}}_{-\infty}}\ensuremath{\mathbf w}^k$, and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$. We will show that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$. Since $\ensuremath{\mathbf v}^k \notin \ensuremath{{\mathcal F}}_{-\infty}$, in view of \eqref{d:Kexp} and \eqref{Fneginfinity}, we have $v^k_y>0$ and \begin{equation}\label{formulav_entropic} \ensuremath{\mathbf v}^k = (v^k_x,v^k_y,v^k_ye^{v^k_x/v^k_y}) = (v^k_y\ln(v^k_z/v^k_y),v^k_y,v^k_z), \end{equation} where the second representation is obtained by solving for $v^k_x$ from $v^k_z = v^k_ye^{v^k_x/v^k_y} > 0$. 
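The equivalence of the two representations in \eqref{formulav_entropic} is easy to confirm numerically; a small sketch of our own (the sample values are arbitrary):

```python
import math

# A boundary point of K_exp with v_y > 0 has the form (v_x, v_y, v_y * exp(v_x / v_y)).
v_x, v_y = -1.3, 0.7
v_z = v_y * math.exp(v_x / v_y)
assert v_z > 0

# Solving v_z = v_y * exp(v_x / v_y) for v_x recovers the second representation:
# v_x = v_y * ln(v_z / v_y).
assert abs(v_y * math.log(v_z / v_y) - v_x) < 1e-12
```
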
Using the second representation in \eqref{formulav_entropic}, we then have \begin{equation}\label{formulawu_entropic} \ensuremath{\mathbf w}^k = (v^k_y\ln(v^k_z/v^k_y),0,v^k_z)\ \ {\rm and}\ \ \ensuremath{\mathbf u}^k = (0,0,v^k_z); \end{equation} here, we made use of the fact that $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$, which implies that $v^k_y\ln(v^k_z/v^k_y) > 0$ and thus the resulting formula for $\ensuremath{\mathbf u}^k$. In addition, we also note from $v^k_y\ln(v^k_z/v^k_y) > 0$ (and $v^k_y>0$) that \begin{equation}\label{inequality_entropic} v^k_z > v^k_y > 0. \end{equation} Furthermore, since $\bar \ensuremath{\mathbf v} \in \ensuremath{{\mathcal F}}_{-\infty}$, we see from \eqref{Fneginfinity} and \eqref{infratiohk_contradiction_neg_infty} that \begin{equation}\label{limitvyzero} \lim_{k\to\infty} v_y^k = 0. \end{equation} Now, using \eqref{formulav_entropic}, \eqref{formulawu_entropic}, \eqref{limitvyzero} and the definition of $\ensuremath{\mathfrak{g}}_{-\infty}$, we see that for $k$ sufficiently large, \begin{equation}\label{e:entropic} \frac{\ensuremath{\mathfrak{g}}_{-\infty}(\|\ensuremath{\mathbf v}^k-\ensuremath{\mathbf w}^k\|)}{\|\ensuremath{\mathbf u}^k-\ensuremath{\mathbf w}^k\|} = \frac{-v_y^k \ln(v_y^k)}{v_y^k\ln(v_z^k/v_y^k)} = \frac{-\ln(v_y^k)}{\ln(v_z^k)-\ln(v_y^k)}. \end{equation} We will show that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ in each of the following cases. \begin{enumerate}[(I)] \item\label{entropiccase1} $\bar{v}_z >0$. \item\label{entropiccaseinfty} $\bar{v}_z =0$. \end{enumerate} \noindent \ref{entropiccase1}: In this case, we deduce from \eqref{limitvyzero} and \eqref{e:entropic} that \begin{equation*} \lim_{k\rightarrow\infty} \frac{\ensuremath{\mathfrak{g}}_{-\infty}(\|\ensuremath{\mathbf v}^k-\ensuremath{\mathbf w}^k\|)}{\|\ensuremath{\mathbf u}^k-\ensuremath{\mathbf w}^k\|} = 1. 
\end{equation*} Thus \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$. \noindent \ref{entropiccaseinfty}: By passing to a subsequence if necessary, we may assume that $v^k_z < 1$ for all $k$. This together with \eqref{inequality_entropic} gives $\frac{\ln(v_z^k)}{\ln(v_y^k)} \in (0,1)$ for all $k$. Thus, we conclude from \eqref{e:entropic} that for all $k$, \begin{align*} \frac{\ensuremath{\mathfrak{g}}_{-\infty}(\|\ensuremath{\mathbf v}^k-\ensuremath{\mathbf w}^k\|)}{\|\ensuremath{\mathbf u}^k-\ensuremath{\mathbf w}^k\|} & = \frac{-\ln(v_y^k)}{\ln(v_z^k)-\ln(v_y^k)} = \frac{1}{1 - \frac{\ln(v_z^k)}{\ln(v_y^k)}}> 1. \end{align*} Consequently, \eqref{infratiohkb} also fails for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ in this case. Having shown that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ in any case, we conclude by Lemma~\ref{lem:infratio} that $\gamma_{\ensuremath{\mathbf z},\eta} \in \left(0,\infty \right]$. With that, \eqref{eq:entropic} follows from Theorem~\ref{thm:1dfacesmain} and \eqref{d:entropy_p}. \end{proof} Using Theorem~\ref{thm:entropic}, we can also show weaker H\"olderian error bounds. \begin{corollary}\label{col:2d_hold} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x=z_z=0$ and $z_y > 0$ so that $\{\ensuremath{\mathbf z} \}^\perp \cap \ensuremath{K_{\exp}}=\ensuremath{{\mathcal F}}_{-\infty}$ is the two-dimensional face of $\ensuremath{K_{\exp}}$. Let $\eta>0$, $\alpha \in (0,1)$, and $\gamma_{\ensuremath{\mathbf z},\eta}$ be as in \eqref{gammabetaeta} with $\ensuremath{\mathfrak{g}} = |\cdot|^\alpha$. 
Then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ and \begin{equation*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_{-\infty})\le \max\{2\eta^{1-\alpha},2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\cdot\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},K_{\exp})^\alpha\ \ \ \mbox{whenever $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)$.} \end{equation*} \end{corollary} \begin{proof} Suppose that $\gamma_{\ensuremath{\mathbf z},\eta} = 0$ and let sequences $\{\ensuremath{\mathbf v}^k\},\{\ensuremath{\mathbf w}^k\},\{\ensuremath{\mathbf u}^k\}$ be as in Lemma~\ref{lem:infratio}. Then $\ensuremath{\mathbf v}^k\neq \ensuremath{\mathbf w}^k$ for all $k$ because $\{\ensuremath{\mathbf v}^k\}\subset \ensuremath{K_{\exp}}\backslash \ensuremath{{\mathcal F}}_{-\infty}$, $\{\ensuremath{\mathbf w}^k\}\subset\{\ensuremath{\mathbf z}\}^\perp$, and $\ensuremath{{\mathcal F}}_{-\infty} = \ensuremath{K_{\exp}}\cap \{\ensuremath{\mathbf z}\}^\perp$. Since $\ensuremath{\mathfrak{g}}_{-\infty}(t)/|t|^{\alpha} \downarrow 0 $ as $t \downarrow 0$ we have \begin{equation*} \liminf_{k\rightarrow\infty} \frac{\ensuremath{\mathfrak{g}}_{-\infty}(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|)}{\|\ensuremath{\mathbf w}^k- \ensuremath{\mathbf u}^k\|} = \liminf_{k\rightarrow\infty} \frac{\ensuremath{\mathfrak{g}}_{-\infty}(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|)}{\|\ensuremath{\mathbf w}^k- \ensuremath{\mathbf v}^k\|^\alpha} \frac{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|^{\alpha}}{\|\ensuremath{\mathbf w}^k- \ensuremath{\mathbf u}^k\|} = 0, \end{equation*} which contradicts Theorem~\ref{thm:entropic} because the quantity in \eqref{gammabetaeta} should be positive for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}} _{-\infty}$. 
\end{proof} Recalling \eqref{d:entropy_p}, we obtain {{one-step facial residual function}}s using Theorem~\ref{thm:entropic} and Corollary~\ref{col:2d_hold} in combination with Theorem~\ref{thm:1dfacesmain}, Remark~\ref{rem:kappa} and Lemma~\ref{lem:facialresidualsbeta}. \begin{corollary}[{{one-step facial residual function}s} concerning $\ensuremath{{\mathcal F}}_{-\infty}$]\label{col:frf_2dface_entropic} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ be such that $\{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}=\ensuremath{{\mathcal F}}_{-\infty}$ is the two-dimensional face of $K_{\exp}$. Let $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ in \eqref{d:entropy} or $\ensuremath{\mathfrak{g}} = |\cdot|^\alpha$ for $\alpha\in (0,1)$. Let $\kappa_{\ensuremath{\mathbf z},t}$ be defined as in \eqref{haha2}. Then the function $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb R}_+\times\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb{R}_+}$ given by \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):=\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},t}\ensuremath{\mathfrak{g}}(\epsilon +\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} ) \end{equation*} is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. In particular, there exist $\kappa > 0$ and a nonnegative monotone nondecreasing function $\rho:\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ given by $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon + \rho(t)\ensuremath{\mathfrak{g}}(\epsilon)$ is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$.
\end{corollary} \subsubsection{$\ensuremath{{\mathcal F}}_\beta$: the family of one-dimensional faces $\beta \in \ensuremath{\mathbb R}$} Recall from Proposition~\ref{d:FcapbdKexp} that for each $\beta \in \ensuremath{\mathbb R}$, \begin{align*} \ensuremath{{\mathcal F}}_\beta := \left\{\left(-\beta y+y,y,e^{1-\beta}y \right)\;\bigg|\;y \in [0,\infty )\right\} \end{align*} is a one-dimensional face of $K_{\exp}$. We will now show that for $\ensuremath{{\mathcal F}}_\beta$, $\beta\in \ensuremath{\mathbb R}$, the $\gamma_{\ensuremath{\mathbf z},\eta}$ defined in Theorem~\ref{thm:1dfacesmain} is positive when $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$. Our discussion will be centered around the following quantities, which were also defined and used in the proof of Theorem~\ref{thm:1dfacesmain}. Specifically, for $\ensuremath{\mathbf z} \in \ensuremath{K_{\exp}}^*$ such that $\ensuremath{{\mathcal F}}_{\beta} = \ensuremath{K_{\exp}}\cap \{\ensuremath{\mathbf z}\}^\perp$, we let $\ensuremath{\mathbf v}\in \partial K_{\exp}\cap B(\eta)\backslash \ensuremath{{\mathcal F}}_\beta$ and define \begin{align} \label{d:w} \ensuremath{\mathbf w}:=P_{\{\ensuremath{\mathbf z} \}^\perp }\ensuremath{\mathbf v}\ \ \ {\rm and}\ \ \ \ensuremath{\mathbf u}:=P_{\ensuremath{{\mathcal F}}_\beta}\ensuremath{\mathbf w}. \end{align} We first note the following three important vectors: \begin{equation}\label{veczfp} \ensuremath{\widehat{\mathbf z}} := \begin{bmatrix} 1\\ \beta\\ -e^{\beta-1} \end{bmatrix},\ \ \ \ensuremath{\widehat{\mathbf f}} = \begin{bmatrix} 1-\beta \\ 1\\ e^{1-\beta} \end{bmatrix},\ \ \ \ensuremath{\widehat{\mathbf p}} = \begin{bmatrix} \beta e^{1-\beta} + e^{\beta-1}\\ -e^{1-\beta} - (1-\beta)e^{\beta-1}\\ \beta^2-\beta+1 \end{bmatrix}. 
\end{equation} Note that $\ensuremath{\widehat{\mathbf z}}$ is parallel to $\ensuremath{\mathbf z}$ in \eqref{d:pz} (recall that $z_x < 0$ for $\ensuremath{{\mathcal F}}_\beta$, where $\beta := \frac{z_y}{z_x}\in \ensuremath{\mathbb R}$), $\ensuremath{{\mathcal F}}_\beta$ is the conic hull of $\{\ensuremath{\widehat{\mathbf f}}\}$ according to Proposition~\ref{d:FcapbdKexp}, $\langle\ensuremath{\widehat{\mathbf z}},\ensuremath{\widehat{\mathbf f}}\rangle=0$ and $\ensuremath{\widehat{\mathbf p}} = \ensuremath{\widehat{\mathbf z}}\times\ensuremath{\widehat{\mathbf f}}\neq {\bf 0}$. These three {\em nonzero} vectors form a mutually orthogonal set. The next lemma represents $\|\ensuremath{\mathbf u} - \ensuremath{\mathbf w}\|$ and $\|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\|$ in terms of inner products of $\ensuremath{\mathbf v}$ with these vectors, whenever possible. \begin{lemma}\label{lem:distance} Let $\beta\in \ensuremath{\mathbb R}$ and $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x<0$ such that $\ensuremath{{\mathcal F}}_\beta = \{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}$ is a one-dimensional face of $K_{\exp}$. Let $\eta >0$, $\ensuremath{\mathbf v}\in \partial K_{\exp}\cap B(\eta)\backslash \ensuremath{{\mathcal F}}_\beta$ and define $\ensuremath{\mathbf w}$ and $\ensuremath{\mathbf u}$ as in \eqref{d:w}. Let $\{\ensuremath{\widehat{\mathbf z}},\ensuremath{\widehat{\mathbf f}},\ensuremath{\widehat{\mathbf p}}\}$ be as in \eqref{veczfp}.
Then \[ \|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\| = \frac{|\langle \ensuremath{\widehat{\mathbf z}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widehat{\mathbf z}}\|}\ \ \ {\rm and}\ \ \ \|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\| =\begin{cases} \frac{|\langle \ensuremath{\widehat{\mathbf p}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widehat{\mathbf p}}\|} & {\rm if}\ \langle\ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle\ge 0,\\ \|\ensuremath{\mathbf w}\| & {\rm otherwise}. \end{cases} \] Moreover, when $\langle\ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle\ge 0$, we have $\ensuremath{\mathbf u} = P_{{\rm span}\ensuremath{{\mathcal F}}_\beta}\ensuremath{\mathbf w}$. \end{lemma} \begin{proof} Since $\{\ensuremath{\widehat{\mathbf z}},\ensuremath{\widehat{\mathbf f}},\ensuremath{\widehat{\mathbf p}}\}$ is orthogonal, one can decompose $\ensuremath{\mathbf v}$ as \begin{equation}\label{eq:vdecompose} \ensuremath{\mathbf v} = \lambda_1 \ensuremath{\widehat{\mathbf z}} + \lambda_2 \ensuremath{\widehat{\mathbf f}} + \lambda_3 \ensuremath{\widehat{\mathbf p}}, \end{equation} with \begin{equation}\label{eq:lambda} \lambda_1 = \langle \ensuremath{\widehat{\mathbf z}},\ensuremath{\mathbf v}\rangle/\|\ensuremath{\widehat{\mathbf z}}\|^2, \ \ \lambda_2 = \langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle/\|\ensuremath{\widehat{\mathbf f}}\|^2\ \ {\rm and}\ \ \lambda_3 = \langle \ensuremath{\widehat{\mathbf p}},\ensuremath{\mathbf v}\rangle/\|\ensuremath{\widehat{\mathbf p}}\|^2. \end{equation} Also, since $\ensuremath{\widehat{\mathbf z}}$ is parallel to $\ensuremath{\mathbf z}$, we must have $\ensuremath{\mathbf w} = \lambda_2 \ensuremath{\widehat{\mathbf f}} + \lambda_3 \ensuremath{\widehat{\mathbf p}}$.
Thus, it holds that $\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\| = |\lambda_1|\|\ensuremath{\widehat{\mathbf z}}\|$ and the first conclusion follows from this and \eqref{eq:lambda}. Next, we have $\ensuremath{\mathbf u} = \hat t \,\ensuremath{\widehat{\mathbf f}}$, where \[ \hat t= \ensuremath{\operatorname*{argmin}}_{t\ge 0}\|\ensuremath{\mathbf w} - t\ensuremath{\widehat{\mathbf f}}\| = \begin{cases} \frac{\langle \ensuremath{\mathbf w},\ensuremath{\widehat{\mathbf f}}\rangle}{\|\ensuremath{\widehat{\mathbf f}}\|^2} & {\rm if}\ \langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf w}\rangle\ge 0,\\ 0 & {\rm otherwise}. \end{cases} \] Moreover, observe from \eqref{eq:vdecompose} that $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf w}\rangle = \langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v} - \lambda_1\ensuremath{\widehat{\mathbf z}}\rangle = \langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle$.
These facts mean that when $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle< 0$, we have $\ensuremath{\mathbf u} = 0$, while when $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}\rangle\ge 0$, we have $\ensuremath{\mathbf u} = \frac{\langle \ensuremath{\mathbf w},\ensuremath{\widehat{\mathbf f}}\rangle}{\|\ensuremath{\widehat{\mathbf f}}\|^2}\ensuremath{\widehat{\mathbf f}}=P_{{\rm span}\ensuremath{{\mathcal F}}_\beta}\ensuremath{\mathbf w}$ and \[ \|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\| = \left\|\ensuremath{\mathbf w} - \frac{\langle \ensuremath{\mathbf w},\ensuremath{\widehat{\mathbf f}}\rangle}{\|\ensuremath{\widehat{\mathbf f}}\|^2}\ensuremath{\widehat{\mathbf f}}\right\| = |\lambda_3|\|\ensuremath{\widehat{\mathbf p}}\| = |\langle\ensuremath{\widehat{\mathbf p}},\ensuremath{\mathbf v}\rangle|/\|\ensuremath{\widehat{\mathbf p}}\|, \] where the second and the third equalities follow from \eqref{eq:vdecompose}, \eqref{eq:lambda}, and the fact that $\ensuremath{\mathbf w} = \lambda_2 \ensuremath{\widehat{\mathbf f}} + \lambda_3 \ensuremath{\widehat{\mathbf p}}$. This completes the proof. \end{proof} We now prove our main theorem in this section. \begin{theorem}[H\"{o}lderian error bound concerning $\ensuremath{{\mathcal F}}_\beta$, $\beta\in \ensuremath{\mathbb R}$]\label{thm:nonzerogamma} Let $\beta\in \ensuremath{\mathbb R}$ and $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x<0$ such that $\ensuremath{{\mathcal F}}_\beta = \{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}$ is a one-dimensional face of $K_{\exp}$. Let $\eta > 0$ and let $\gamma_{\ensuremath{\mathbf z},\eta}$ be defined as in \eqref{gammabetaeta} with $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$.
Then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ and \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_\beta)\le \max\{2\sqrt{\eta},2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\cdot\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{K_{\exp}})^\frac12\ \ \ \mbox{whenever $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)$.} \] \end{theorem} \begin{proof} In view of Lemma~\ref{lem:infratio}, take any $\ensuremath{\bar{\vv}} \in \ensuremath{{\mathcal F}}_\beta$ and a sequence $\{\ensuremath{\mathbf v}^k\}\subset \partial K_{\exp}\cap B(\eta) \backslash \ensuremath{{\mathcal F}}_\beta$ such that \begin{equation}\label{infratiohk_contradiction} \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf v}^k = \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf w}^k =\ensuremath{\bar{\vv}}, \end{equation} where $\ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k$, $\ensuremath{\mathbf u}^k = P_{\ensuremath{{\mathcal F}}_\beta}\ensuremath{\mathbf w}^k$, and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$. We will show that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$. We first suppose that $\ensuremath{\mathbf v}^k\in \ensuremath{{\mathcal F}}_{-\infty}$ infinitely often. By extracting a subsequence if necessary, we may assume that $\ensuremath{\mathbf v}^k\in \ensuremath{{\mathcal F}}_{-\infty}$ for all $k$. From the definition of $\ensuremath{{\mathcal F}}_{-\infty}$ in \eqref{Fneginfinity}, we have $v_x^k\le 0$, $v_y^k=0$ and $v_z^k \ge 0$. 
Thus, recalling the definition of $\ensuremath{\widehat{\mathbf z}}$ in \eqref{veczfp}, it holds that \begin{equation}\label{easycase1} \begin{aligned} |\langle\ensuremath{\widehat{\mathbf z}},\ensuremath{\mathbf v}^k\rangle| &= |v^k_x - v_z^ke^{\beta-1}| = -v^k_x + v_z^ke^{\beta-1} = |v^k_x| + e^{\beta-1}|v_z^k| \\ &\ge \min\{1,e^{\beta-1}\}\sqrt{|v^k_x|^2 + |v_z^k|^2} = \min\{1,e^{\beta-1}\}\|\ensuremath{\mathbf v}^k\|, \end{aligned} \end{equation} where the last equality holds because $v_y^k=0$. Next, using properties of projections onto subspaces, we have $\|\ensuremath{\mathbf w}^k\| \le \|\ensuremath{\mathbf v}^k\|$. This together with Lemma~\ref{lem:distance} and \eqref{easycase1} shows that \[ \|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\| = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{{\mathcal F}}_\beta)\le \|\ensuremath{\mathbf w}^k\| \le \|\ensuremath{\mathbf v}^k\|\le \frac{1}{\min\{1,e^{\beta-1}\}}|\langle\ensuremath{\widehat{\mathbf z}},\ensuremath{\mathbf v}^k\rangle| = \frac{\|\ensuremath{\widehat{\mathbf z}}\|}{\min\{1,e^{\beta-1}\}}\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|. \] Since $\ensuremath{\mathbf w}^k \to \ensuremath{\bar{\vv}}$ and $\ensuremath{\mathbf v}^k \to \ensuremath{\bar{\vv}}$, the above display shows that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$ in this case. Next, suppose that $\ensuremath{\mathbf v}^k\notin \ensuremath{{\mathcal F}}_{-\infty}$ for all large $k$ instead. By passing to a subsequence, we assume that $\ensuremath{\mathbf v}^k\notin \ensuremath{{\mathcal F}}_{-\infty}$ for all $k$. In view of \eqref{d:Kexp} and \eqref{Fneginfinity}, this means in particular that \begin{equation}\label{formulavstar} \ensuremath{\mathbf v}^k = (v^k_x,v^k_y,v^k_ye^{v^k_x/v^k_y})\ \ {\rm and}\ \ v^k_y>0\ \mbox{ for all}\ k.
\end{equation} We consider two cases and show that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$ in either of them: \begin{enumerate}[(I)] \item\label{ebsubcasev0} $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle\ge 0$ infinitely often; \item\label{ebsubcasevneq0} $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle< 0$ for all large $k$. \end{enumerate} \ref{ebsubcasev0}: Since $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle\ge 0$ infinitely often, by extracting a subsequence if necessary, we assume that $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle\ge 0$ for all $k$. Now, consider the following functions: \[ \begin{aligned} h_1(\zeta)&:= \zeta + \beta - e^{\beta+\zeta-1},\\ h_2(\zeta)&:= (\beta e^{1-\beta} + e^{\beta-1})\zeta - e^{1-\beta}-(1-\beta)e^{\beta-1} + (\beta^2-\beta+1)e^\zeta. \end{aligned} \] Using these functions, Lemma~\ref{lem:distance}, \eqref{veczfp} and \eqref{formulavstar}, one can see immediately that \begin{equation}\label{hahahehe1} \|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\| = \frac{|\langle\ensuremath{\widehat{\mathbf z}},\ensuremath{\mathbf v}^k\rangle|}{\|\ensuremath{\widehat{\mathbf z}}\|} = \frac{v^k_y |h_1(v^k_x/v^k_y)|}{\|\ensuremath{\widehat{\mathbf z}}\|}\ \ {\rm and}\ \ \|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\| = \frac{|\langle\ensuremath{\widehat{\mathbf p}},\ensuremath{\mathbf v}^k\rangle|}{\|\ensuremath{\widehat{\mathbf p}}\|} = \frac{v^k_y |h_2(v^k_x/v^k_y)|}{\|\ensuremath{\widehat{\mathbf p}}\|}. \end{equation} Note that $h_1(\zeta)$ is zero {\bf if and only if} $\zeta = 1 - \beta$. Furthermore, we have $h_1'(1-\beta) = 0$ and $h_1''(1-\beta) = -1$.
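These vanishing and derivative properties of $h_1$ and $h_2$ are straightforward to confirm numerically; the following sketch is ours ($\beta=0.8$ is an arbitrary sample value, and the derivatives are checked by finite differences):

```python
import math

beta = 0.8  # arbitrary sample value; the identities below hold for every real beta

def h1(z):
    # h_1(zeta) = zeta + beta - e^(beta + zeta - 1)
    return z + beta - math.exp(beta + z - 1)

def h2(z):
    # h_2(zeta) as defined in the proof
    return ((beta * math.exp(1 - beta) + math.exp(beta - 1)) * z
            - math.exp(1 - beta) - (1 - beta) * math.exp(beta - 1)
            + (beta**2 - beta + 1) * math.exp(z))

z0 = 1 - beta
assert abs(h1(z0)) < 1e-12 and abs(h2(z0)) < 1e-12  # both vanish at 1 - beta

d = 1e-5  # finite-difference step
# h1'(1 - beta) = 0 and h1''(1 - beta) = -1
assert abs((h1(z0 + d) - h1(z0 - d)) / (2 * d)) < 1e-9
assert abs((h1(z0 + d) - 2 * h1(z0) + h1(z0 - d)) / d**2 + 1) < 1e-4
# h2'(1 - beta) = e^(beta - 1) + (beta^2 + 1) e^(1 - beta) > 0
h2p = math.exp(beta - 1) + (beta**2 + 1) * math.exp(1 - beta)
assert h2p > 0
assert abs((h2(z0 + d) - h2(z0 - d)) / (2 * d) - h2p) < 1e-6
```
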
Then, considering the Taylor expansion of $h_1$ around $1-\beta$ we have \begin{equation*} h_1(\zeta) = 1 + (\zeta + \beta-1) - e^{\beta+\zeta-1} = -\frac{(\zeta+\beta-1)^2}{2} + O(|\zeta+\beta-1|^3)\ \ \ {\rm as}\ \ \zeta\to 1-\beta. \end{equation*} Also, one can check that $h_2(1-\beta)=0$ and that \[ h_2'(1-\beta) = \beta e^{1-\beta} + e^{\beta-1} + (\beta^2-\beta+1)e^{1-\beta} = e^{\beta-1} + (\beta^2+1)e^{1-\beta} > 0. \] Therefore, we have the following Taylor expansion of $h_2$ around $1-\beta$: \begin{equation}\label{newlyadded} h_2(\zeta) = (e^{\beta-1} + (\beta^2+1)e^{1-\beta})(\zeta + \beta-1) + O(|\zeta+\beta-1|^2)\ \ \ {\rm as}\ \ \zeta\to 1-\beta. \end{equation} Thus, using the Taylor expansions of $h_1$ and $h_2$ at $1-\beta$ we have \begin{equation}\label{taylor_limit} \lim\limits_{\zeta\to 1-\beta}\frac{|h_1(\zeta)|^\frac12}{|h_2(\zeta)|} = \frac1{\sqrt{2}(e^{\beta-1} + (\beta^2+1)e^{1-\beta})}> 0. \end{equation} Hence, there exist $C_h > 0$ and $\epsilon>0$ so that \begin{equation}\label{maybeusefulrel} |h_1(\zeta)|^\frac12 \ge C_h|h_2(\zeta)| \ \ {\rm whenever}\ |\zeta - (1-\beta)| \le \epsilon. \end{equation} Next, consider the following function\footnote{Notice that this function is well defined because $h_1$ is zero only at $1-\beta$ and thus we will not end up with $\frac00$.} \[ H(\zeta):=\begin{cases} \frac{|h_1(\zeta)|}{|h_2(\zeta)|} & {\rm if}\ |\zeta -(1-\beta)|\ge \epsilon, h_2(\zeta)\neq 0,\\ \infty & {\rm otherwise}. \end{cases} \] Then it is easy to check that $H$ is proper closed and is never zero. Moreover, by direct computation, we have $\lim\limits_{\zeta\to \infty}H(\zeta) = \frac{e^{\beta-1}}{\beta^2-\beta+1} > 0$ and \[ \lim\limits_{\zeta\to -\infty}H(\zeta) = \begin{cases} |\beta e^{1-\beta} + e^{\beta-1}|^{-1} & {\rm if}\ \beta e^{1-\beta} + e^{\beta-1}\neq 0,\\ \infty & {\rm otherwise}. \end{cases} \] Thus, we deduce that $\inf H > 0$. 
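The limits of $H$ at $\pm\infty$ computed above can be spot-checked numerically; a sketch of our own ($\beta=0.8$ and the evaluation points are arbitrary, chosen away from $1-\beta$ so that the ratio $|h_1|/|h_2|$ applies):

```python
import math

beta = 0.8  # arbitrary sample value with beta * e^(1-beta) + e^(beta-1) != 0

def h1(z):
    return z + beta - math.exp(beta + z - 1)

def h2(z):
    return ((beta * math.exp(1 - beta) + math.exp(beta - 1)) * z
            - math.exp(1 - beta) - (1 - beta) * math.exp(beta - 1)
            + (beta**2 - beta + 1) * math.exp(z))

def H(z):
    # the ratio |h1|/|h2|, valid here since we evaluate far from 1 - beta
    return abs(h1(z)) / abs(h2(z))

# As zeta -> +infinity, H -> e^(beta-1) / (beta^2 - beta + 1).
assert abs(H(40.0) - math.exp(beta - 1) / (beta**2 - beta + 1)) < 1e-6
# As zeta -> -infinity, H -> 1 / |beta e^(1-beta) + e^(beta-1)|.
assert abs(H(-1e6) - 1 / abs(beta * math.exp(1 - beta) + math.exp(beta - 1))) < 1e-4
```
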
Now, if it happens that $|v^k_x/v^k_y - (1-\beta)|> \epsilon$ for all large $k$, upon letting $\zeta_k:= v^k_x/v^k_y$, we have from \eqref{hahahehe1} that for all large $k$, \begin{equation}\label{finalcase2} \begin{aligned} \frac{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|}{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\|} &=\frac{\|\ensuremath{\widehat{\mathbf p}}\|}{\|\ensuremath{\widehat{\mathbf z}}\|}\frac{|h_1(\zeta_k)|}{|h_2(\zeta_k)|} = \frac{\|\ensuremath{\widehat{\mathbf p}}\|}{\|\ensuremath{\widehat{\mathbf z}}\|}H(\zeta_k)\ge \frac{\|\ensuremath{\widehat{\mathbf p}}\|}{\|\ensuremath{\widehat{\mathbf z}}\|}\inf H >0, \end{aligned} \end{equation} where the second equality holds because of the definition of $H$ and the facts that $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$ (so that $h_2(\zeta_k)\neq 0$ by \eqref{hahahehe1}) and $|v^k_x/v^k_y - (1-\beta)|> \epsilon$ for all large $k$. On the other hand, if it holds that $|v^k_x/v^k_y - (1-\beta)|\le \epsilon$ infinitely often, then by extracting a further subsequence, we may assume that $|v^k_x/v^k_y - (1-\beta)|\le \epsilon$ for all $k$. 
Upon letting $\zeta_k:= v^k_x/v^k_y$, we have from \eqref{hahahehe1} that \begin{equation}\label{finalcase1} \begin{aligned} \frac{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\|}{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^\frac12} =\frac{\sqrt{v^k_y\|\ensuremath{\widehat{\mathbf z}}\|}}{\|\ensuremath{\widehat{\mathbf p}}\|}\frac{|h_2(\zeta_k)|}{|h_1(\zeta_k)|^\frac12} \le \frac{\sqrt{v^k_y\|\ensuremath{\widehat{\mathbf z}}\|}}{C_h\|\ensuremath{\widehat{\mathbf p}}\|} \le \frac{\sqrt{\eta\|\ensuremath{\widehat{\mathbf z}}\|}}{C_h\|\ensuremath{\widehat{\mathbf p}}\|}, \end{aligned} \end{equation} where the first inequality holds thanks to $|v^k_x/v^k_y - (1-\beta)|\le \epsilon$ for all $k$, \eqref{maybeusefulrel} and the fact that $\ensuremath{\mathbf w}^k\neq {\ensuremath{\mathbf u}}^k$ (so that $h_2(\zeta_k)\neq 0$ and hence $h_1(\zeta_k)\neq 0$), and the second inequality holds because $\ensuremath{\mathbf v}^k\in B(\eta)$. Using \eqref{finalcase2} and \eqref{finalcase1} together with \eqref{infratiohk_contradiction}, we see that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$. This concludes case~\ref{ebsubcasev0}. \ref{ebsubcasevneq0}: By passing to a subsequence, we may assume that $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle< 0$ for all $k$. Then we see from \eqref{veczfp} and \eqref{formulavstar} that \[ (1-\beta)\frac{v_x^k}{v_y^k} + 1 + e^{1-\beta}e^{v_x^k/v_y^k} = \frac1{v_y^k} \langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle< 0. \] Using this together with the fact that $(1-\beta)^2+1+e^{2(1-\beta)} > 0$, we deduce that there exists $\epsilon> 0$ so that \begin{equation}\label{eq:keyrelation} \left|\frac{v_x^k}{v_y^k}-(1-\beta)\right|\ge \epsilon\ \ \mbox{for all}\ k.
\end{equation} Now, consider the following function \[ G(\zeta):= \frac{|\zeta+\beta-e^{\beta+\zeta-1}|}{\sqrt{\zeta^2+1+e^{2\zeta}}}. \] Then $G$ is continuous and $G(\zeta)$ is zero {\bf if and only if} $\zeta = 1-\beta$. Moreover, by direct computation, we have $\lim\limits_{\zeta\to\infty} G(\zeta) = e^{\beta-1} > 0$ and $\lim\limits_{\zeta\to-\infty} G(\zeta) = 1 > 0$. Thus, it follows that \begin{equation}\label{infG} \underline{G}:= \inf_{|\zeta +\beta-1|\ge \epsilon}G(\zeta) > 0. \end{equation} Finally, since $\langle \ensuremath{\widehat{\mathbf f}},\ensuremath{\mathbf v}^k\rangle< 0$ for all $k$, we see that \[ \begin{aligned} \frac{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|}{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\|}&\overset{\rm (a)}\ge \frac{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|}{\|\ensuremath{\mathbf v}^k\|} \overset{\rm (b)}= \frac{|{\widehat{z}}_xv^k_x + {\widehat{z}}_yv^k_y + {\widehat{z}}_zv^k_ye^{v^k_x/v^k_y}|}{\|\ensuremath{\widehat{\mathbf z}}\|\sqrt{(v^k_x)^2+(v^k_y)^2+(v^k_y)^2e^{2v^k_x/v^k_y}}}\\ &\overset{\rm (c)}=\frac{|{\widehat{z}}_x\zeta_k + {\widehat{z}}_y + {\widehat{z}}_ze^{\zeta_k}|}{\|\ensuremath{\widehat{\mathbf z}}\|\sqrt{(\zeta_k)^2 + 1 + e^{2\zeta_k}}} \overset{\rm (d)}= \frac{1}{\|\ensuremath{\widehat{\mathbf z}}\|}G(\zeta_k)\overset{\rm (e)}\ge \frac{1}{\|\ensuremath{\widehat{\mathbf z}}\|}\underline{G} > 0, \end{aligned} \] where (a) follows from $\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\| = \|\ensuremath{\mathbf w}^k\|$ (see Lemma~\ref{lem:distance}) and $\|\ensuremath{\mathbf w}^k\|\leq \|\ensuremath{\mathbf v}^k\|$ (because $\ensuremath{\mathbf w}^k$ is the projection of ${\ensuremath{\mathbf v}}^k$ onto a subspace), (b) follows from Lemma~\ref{lem:distance} and \eqref{formulavstar}, (c) holds because $v^k_y > 0$ (see \eqref{formulavstar}) and we defined $\zeta_k:= v_x^k/v_y^k$, (d) follows from \eqref{veczfp} and the definition of $G$, and (e) follows
from \eqref{eq:keyrelation} and \eqref{infG}. The above together with \eqref{infratiohk_contradiction} shows that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$, which is what we wanted to show in case~\ref{ebsubcasevneq0}. Summarizing the above cases, we conclude that there is no sequence $\{\ensuremath{\mathbf v}^k\}$ (together with its associated sequences $\{\ensuremath{\mathbf w}^k\}$ and $\{\ensuremath{\mathbf u}^k\}$) for which \eqref{infratiohkb} holds with $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$. By Lemma~\ref{lem:infratio}, it must then hold that $\gamma_{\ensuremath{\mathbf z},\eta}\in (0,\infty]$ and we have the desired error bound in view of Theorem~\ref{thm:1dfacesmain}. This completes the proof. \end{proof} Combining Theorem~\ref{thm:nonzerogamma}, Theorem~\ref{thm:1dfacesmain} and Lemma~\ref{lem:facialresidualsbeta}, and using the observation that $\gamma_{\ensuremath{\mathbf z},0}=\infty$ (see \eqref{gammabetaeta}), we obtain a {{one-step facial residual function}} in the following corollary. \begin{corollary}[{{one-step facial residual function}s} concerning $\ensuremath{{\mathcal F}}_\beta$, $\beta\in \ensuremath{\mathbb R}$]\label{col:frf_fb} Let $\beta\in \ensuremath{\mathbb R}$ and $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x<0$ such that $\ensuremath{{\mathcal F}}_\beta = \{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}$ is a one-dimensional face of $K_{\exp}$. Let $\kappa_{\ensuremath{\mathbf z},t}$ be defined as in \eqref{haha2} with $\ensuremath{\mathfrak{g}} = |\cdot|^\frac12$.
Then the function $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb R}_+\times \ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ given by \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) := \max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},t}(\epsilon +\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} )^\frac12 \end{equation*} is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. In particular, there exist $\kappa > 0$ and a nonnegative monotone nondecreasing function $\rho:\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ given by $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon + \rho(t)\sqrt{\epsilon}$ is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. \end{corollary} \subsubsection{$\ensuremath{{\mathcal F}}_{\infty}$: the exceptional one-dimensional face} Recall the special one-dimensional face of $K_{\exp}$ defined by $$ \ensuremath{\mathcal{F}}_{\infty}:= \{(x,0,0) \;|\; x\leq 0 \}. $$ We first show that we have a Lipschitz error bound for any exposing normal vector $\ensuremath{\mathbf z} = (0, z_y,z_z)$ with $z_y > 0$ and $z_z > 0$. \begin{theorem}[Lipschitz error bound concerning $\ensuremath{{\mathcal F}}_{\infty}$]\label{thm:nonzerogammasec71} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x=0$, $z_y > 0$ and $z_z>0$ so that $\{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}=\ensuremath{{\mathcal F}}_\infty$. Let $\eta > 0$ and let $\gamma_{\ensuremath{\mathbf z},\eta}$ be defined as in \eqref{gammabetaeta} with $\ensuremath{\mathfrak{g}} = |\cdot|$.
Then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ and \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_{\infty})\le \max\{2,2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\cdot\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},K_{\exp})\ \ \ \mbox{whenever $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)$.} \] \end{theorem} \begin{proof} Without loss of generality, upon scaling, we may assume that $\ensuremath{\mathbf z} = (0,a,1)$ for some $a > 0$. As in the proof of Theorem~\ref{thm:nonzerogamma}, we will consider the following vectors: \begin{equation*} \ensuremath{\widetilde{\mathbf z}} := \begin{bmatrix} 0\\a\\1 \end{bmatrix}, \ \ \ensuremath{\widetilde{\mathbf f}} := \begin{bmatrix} -1\\0\\0 \end{bmatrix},\ \ \ensuremath{\widetilde{\mathbf p}} := \begin{bmatrix} 0\\1\\-a \end{bmatrix}. \end{equation*} Here, $\ensuremath{{\mathcal F}}_\infty$ is the conical hull of $\ensuremath{\widetilde{\mathbf f}}$ (see \eqref{Finfinity}), and $\ensuremath{\widetilde{\mathbf p}}$ is constructed so that $\{\ensuremath{\widetilde{\mathbf z}},\ensuremath{\widetilde{\mathbf f}},\ensuremath{\widetilde{\mathbf p}}\}$ is orthogonal. Now, let $\ensuremath{\mathbf v}\in \partial K_{\exp}\cap B(\eta)\backslash \ensuremath{{\mathcal F}}_\infty$, $\ensuremath{\mathbf w} = P_{\{\ensuremath{\mathbf z}\}^\perp }\ensuremath{\mathbf v}$ and $\ensuremath{\mathbf u}=P_{\ensuremath{{\mathcal F}}_\infty}\ensuremath{\mathbf w}$ with $\ensuremath{\mathbf u}\neq \ensuremath{\mathbf w}$.
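As a purely illustrative numerical aside (not part of the proof), the pairwise orthogonality of $\{\ensuremath{\widetilde{\mathbf z}},\ensuremath{\widetilde{\mathbf f}},\ensuremath{\widetilde{\mathbf p}}\}$ and the standard identity $\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\| = |\langle \ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle|/\|\ensuremath{\widetilde{\mathbf z}}\|$ for the projection $\ensuremath{\mathbf w}$ of $\ensuremath{\mathbf v}$ onto $\{\ensuremath{\mathbf z}\}^\perp$ can be checked in a few lines of Python; the value $a=2$ and the sample boundary point below are arbitrary choices, not taken from the text.

```python
import math

# Illustrative sanity check (arbitrary sample a = 2, not part of the proof).
a = 2.0
z = (0.0, a, 1.0)    # z-tilde
f = (-1.0, 0.0, 0.0) # f-tilde, direction of F_infty
p = (0.0, 1.0, -a)   # p-tilde, completing an orthogonal triple

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# pairwise orthogonality of {z, f, p}
assert abs(dot(z, f)) < 1e-12
assert abs(dot(z, p)) < 1e-12
assert abs(dot(f, p)) < 1e-12

# a sample boundary point v = (v_x, v_y, v_y * exp(v_x / v_y)) of K_exp
vx, vy = -1.0, 0.5
v = (vx, vy, vy * math.exp(vx / vy))

# projection of v onto the hyperplane {z}^perp
c = dot(z, v) / dot(z, z)
w = tuple(vi - c * zi for vi, zi in zip(v, z))

assert abs(dot(z, w)) < 1e-12  # w indeed lies in {z}^perp
# distance to the hyperplane equals |<z, v>| / ||z||
assert abs(math.dist(w, v) - abs(dot(z, v)) / norm(z)) < 1e-12
print("projection identities verified")
```

Any $a>0$ and any boundary point with $v_y>0$ can be substituted; the identities hold exactly up to rounding.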
Then, as in Lemma~\ref{lem:distance}, by decomposing $\ensuremath{\mathbf v}$ as a linear combination of $\{\ensuremath{\widetilde{\mathbf z}},\ensuremath{\widetilde{\mathbf f}},\ensuremath{\widetilde{\mathbf p}}\}$, we have \begin{equation}\label{haha5} \|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\| = \frac{|\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widetilde{\mathbf z}}\|}\ \ {\rm and}\ \ \|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\| = \begin{cases} \frac{|\langle\ensuremath{\widetilde{\mathbf p}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widetilde{\mathbf p}}\|} & {\rm if}\ \langle\ensuremath{\widetilde{\mathbf f}},\ensuremath{\mathbf v}\rangle\ge 0,\\ \|\ensuremath{\mathbf w}\| & {\rm otherwise}. \end{cases} \end{equation} We consider the following cases for estimating $\gamma_{\ensuremath{\mathbf z},\eta}$. \begin{enumerate}[(I)] \item\label{inftyebcase1} $\ensuremath{\mathbf v}\in \ensuremath{{\mathcal F}}_{-\infty}\backslash \ensuremath{{\mathcal F}}_\infty$; \item\label{inftyebcase2} $\ensuremath{\mathbf v}\notin \ensuremath{{\mathcal F}}_{-\infty}$ with $v_x \le 0$; \item\label{inftyebcase3} $\ensuremath{\mathbf v}\notin \ensuremath{{\mathcal F}}_{-\infty}$ with $v_x > 0$. \end{enumerate} \ref{inftyebcase1}: In this case, $\ensuremath{\mathbf v} = (v_x,0,v_z)$ with $v_x\le 0\le v_z$; see \eqref{Fneginfinity}. Then $\langle\ensuremath{\widetilde{\mathbf f}},\ensuremath{\mathbf v}\rangle = -v_x\ge 0$ and $|\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle| = |v_z| = \frac1a|\langle\ensuremath{\widetilde{\mathbf p}},\ensuremath{\mathbf v}\rangle|$.
Consequently, we have from \eqref{haha5} that \[ \|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\| = \frac{|\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widetilde{\mathbf z}}\|} = \frac{|\langle\ensuremath{\widetilde{\mathbf p}},\ensuremath{\mathbf v}\rangle|}{a\|\ensuremath{\widetilde{\mathbf z}}\|} = \frac{\|\ensuremath{\widetilde{\mathbf p}}\|}{a\|\ensuremath{\widetilde{\mathbf z}}\|}\|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\|. \] \ref{inftyebcase2}: In this case, in view of \eqref{d:Kexp} and \eqref{Fneginfinity}, we have $\ensuremath{\mathbf v} = (v_x,v_y,v_ye^{v_x/v_y})$ with $v_x\le 0$ and $v_y > 0$. Then $\langle\ensuremath{\widetilde{\mathbf f}},\ensuremath{\mathbf v}\rangle = -v_x\ge 0$. Moreover, since $v_y>0$, we have \[ \begin{aligned} |\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle| &= |av_y+v_ye^{v_x/v_y}| = av_y+v_ye^{v_x/v_y} \ge \min\{1,a\}(v_y+v_ye^{v_x/v_y})\\ & \ge \frac{\min\{1,a\}}{\max\{1,a\}}(v_y+av_ye^{v_x/v_y})\ge \frac{\min\{1,a\}}{\max\{1,a\}}|v_y-av_ye^{v_x/v_y}| = \frac{\min\{1,a\}}{\max\{1,a\}}|\langle\ensuremath{\widetilde{\mathbf p}},\ensuremath{\mathbf v}\rangle|. \end{aligned} \] Using \eqref{haha5}, we then obtain that $\|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\|\ge \frac{\min\{1,a\}\|\ensuremath{\widetilde{\mathbf p}}\|}{\max\{1,a\}\|\ensuremath{\widetilde{\mathbf z}}\|}\|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\|$. \ref{inftyebcase3}: In this case, in view of \eqref{d:Kexp} and \eqref{Fneginfinity}, $\ensuremath{\mathbf v} = (v_x,v_y,v_ye^{v_x/v_y})$ with $v_x> 0$ and $v_y > 0$.
Then $\langle\ensuremath{\widetilde{\mathbf f}},\ensuremath{\mathbf v}\rangle = -v_x< 0$ and hence $\|\ensuremath{\mathbf w} - \ensuremath{\mathbf u}\|= \|\ensuremath{\mathbf w}\|\le \|\ensuremath{\mathbf v}\|$, where the equality follows from \eqref{haha5} and the inequality holds because $\ensuremath{\mathbf w}$ is the projection of $\ensuremath{\mathbf v}$ onto a subspace. Since $v_y > 0$, we have \[ \begin{aligned} |\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle| &= |av_y+v_ye^{v_x/v_y}| = av_y+0.5v_ye^{v_x/v_y} + 0.5v_ye^{v_x/v_y}\\ & \overset{\rm (a)}\ge av_y+0.5v_y(1 + v_x/v_y) + 0.5v_ye^{v_x/v_y} \ge 0.5v_y + 0.5v_x + 0.5v_ye^{v_x/v_y}= 0.5\|\ensuremath{\mathbf v}\|_1, \end{aligned} \] where we used $v_y > 0$ and $e^t\ge 1+t$ for all $t$ in (a) and $\|\ensuremath{\mathbf v}\|_1$ denotes the $1$-norm of $\ensuremath{\mathbf v}$. Combining this with \eqref{haha5} and the fact that $\|\ensuremath{\mathbf w}\|\le \|\ensuremath{\mathbf v}\|$, we see that \[ \|\ensuremath{\mathbf w} - \ensuremath{\mathbf v}\| = \frac{|\langle\ensuremath{\widetilde{\mathbf z}},\ensuremath{\mathbf v}\rangle|}{\|\ensuremath{\widetilde{\mathbf z}}\|}\ge\frac{\|\ensuremath{\mathbf v}\|_1}{2\|\ensuremath{\widetilde{\mathbf z}}\|}\ge \frac{\|\ensuremath{\mathbf v}\|}{2\|\ensuremath{\widetilde{\mathbf z}}\|} \ge \frac{\|\ensuremath{\mathbf w}\|}{2\|\ensuremath{\widetilde{\mathbf z}}\|} = \frac{\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|}{2\|\ensuremath{\widetilde{\mathbf z}}\|}. \] Summarizing the three cases, we conclude that $\gamma_{\ensuremath{\mathbf z},\eta} \ge \min\left\{\frac{\|\ensuremath{\widetilde{\mathbf p}}\|}{a\|\ensuremath{\widetilde{\mathbf z}}\|},\frac{\min\{1,a\}\|\ensuremath{\widetilde{\mathbf p}}\|}{\max\{1,a\}\|\ensuremath{\widetilde{\mathbf z}}\|},\frac{1}{2\|\ensuremath{\widetilde{\mathbf z}}\|}\right\} > 0$.
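Before moving on, the case analysis above admits a quick numerical sanity check (illustrative only; the sample value $a=1.5$ and the grid of boundary points are arbitrary choices, not part of the proof): the ratio $\|\ensuremath{\mathbf w}-\ensuremath{\mathbf v}\|/\|\ensuremath{\mathbf w}-\ensuremath{\mathbf u}\|$ should never drop below the minimum just derived.

```python
import math

# Illustrative check of the derived lower bound on gamma, with the arbitrary
# sample value a = 1.5 (so z = (0, a, 1) and p = (0, 1, -a)).
a = 1.5
nz = math.sqrt(a * a + 1.0)   # ||z||
np_ = math.sqrt(1.0 + a * a)  # ||p||
bound = min(np_ / (a * nz),
            min(1.0, a) * np_ / (max(1.0, a) * nz),
            1.0 / (2.0 * nz))

def ratio(vx, vy):
    # boundary point v of K_exp with v_y > 0, its projection w onto {z}^perp,
    # and the projection u of w onto the ray F_infty = {(x,0,0) : x <= 0}
    v = (vx, vy, vy * math.exp(vx / vy))
    c = (a * v[1] + v[2]) / (a * a + 1.0)  # <z, v> / ||z||^2
    w = (v[0], v[1] - c * a, v[2] - c)
    u = (min(w[0], 0.0), 0.0, 0.0)
    return math.dist(w, v) / math.dist(w, u)

for vx in (-3.0, -1.0, 0.5, 2.0):
    for vy in (0.1, 1.0, 3.0):
        assert ratio(vx, vy) >= bound - 1e-9
print("ratio bounded below as claimed")
```

The sampled points cover both the $v_x\le 0$ and $v_x>0$ cases; other choices of $a>0$ behave the same way.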
In view of Theorem~\ref{thm:1dfacesmain}, we have the desired error bound. This completes the proof. \end{proof} We next turn to the supporting hyperplane defined by $\ensuremath{\mathbf z} = (0,0,z_z)$ for some $z_z>0$ and so $\{\ensuremath{\mathbf z} \}^\perp$ is the $xy$-plane. The following lemma demonstrates that the H\"{o}lderian-type error bound in the form of \eqref{haha2} with $\ensuremath{\mathfrak{g}} = |\cdot|^\alpha$ for some $\alpha\in (0,1]$ no longer works in this case. \begin{lemma}[Nonexistence of H\"{o}lderian error bounds]\label{lem:non_hold} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x= z_y = 0$ and $z_z>0$ so that $\{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}=\ensuremath{{\mathcal F}}_\infty$. Let $\alpha\in (0,1]$ and $\eta > 0$. Then \[ \inf_\ensuremath{\mathbf q}\left\{\frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},K_{\exp})^\alpha}{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_\infty)}\;\bigg|\;\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)\backslash\ensuremath{{\mathcal F}}_\infty\right\} = 0. \] \end{lemma} \begin{proof} For each $k\in \mathbb{N}$, let $\ensuremath{\mathbf q}^k := (-\frac\eta2,\frac\eta{2k},0)$. Then $\ensuremath{\mathbf q}^k\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)\backslash\ensuremath{{\mathcal F}}_\infty$ and we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}^k,\ensuremath{{\mathcal F}}_\infty) = \frac\eta{2k}$. Moreover, since $(q^k_x,q^k_y,q^k_ye^{q^k_x/q^k_y})\in K_{\exp}$, we have $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}^k,K_{\exp})\le q^k_ye^{q^k_x/q^k_y} = \frac\eta{2k}e^{-k}$. Then it holds that \[ \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}^k,K_{\exp})^\alpha}{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}^k,\ensuremath{{\mathcal F}}_\infty)} \le \frac{\eta^{\alpha-1}}{2^{\alpha-1}}k^{1-\alpha}e^{-\alpha k} \to 0 \ \ \ {\rm as}\ \ k\to \infty \] since $\alpha\in (0,1]$. 
This completes the proof. \end{proof} Since a zero-at-zero monotone nondecreasing function of the form $(\cdot)^\alpha$ no longer works, we opt for the following function $\ensuremath{\mathfrak{g}}_\infty:\ensuremath{\mathbb{R}_+}\to \ensuremath{\mathbb{R}_+}$ that grows faster around $t=0$: \begin{equation}\label{def:frakg} \ensuremath{\mathfrak{g}}_\infty(t) := \begin{cases} 0 &\text{if}\;t=0,\\ -\frac{1}{\ln(t)} & \text{if}\;0 < t\leq \frac{1}{e^2},\\ \frac{1}{4}+\frac{1}{4}e^2t & \text{if}\;t>\frac{1}{e^2}. \end{cases} \end{equation} Similarly to $\ensuremath{\mathfrak{g}}_{-\infty}$ in \eqref{d:entropy}, $\ensuremath{\mathfrak{g}}_{\infty}$ is monotone increasing and there exists a constant $\widehat L \ge 1$ such that the following inequalities hold for every $t \in \ensuremath{\mathbb{R}_+}$ and $M > 0$: \begin{equation}\label{def:frakg_p} |t| \leq \ensuremath{\mathfrak{g}}_{\infty}(t), \quad \ensuremath{\mathfrak{g}}_{\infty}(2t) \leq \widehat L\ensuremath{\mathfrak{g}}_{\infty}(t),\quad \ensuremath{\mathfrak{g}}_{\infty}(Mt) \leq {\widehat L}^{1+|\log_2(M)|}\ensuremath{\mathfrak{g}}_{\infty}(t). \end{equation} We next show that error bounds in the form of \eqref{haha2} hold for $\ensuremath{\mathbf z} = (0,0,z_z)$, $z_z>0$, if we use $\ensuremath{\mathfrak{g}}_\infty$. \begin{theorem}[Log-type error bound concerning $\ensuremath{{\mathcal F}}_{\infty}$]\label{thm:nonzerogammasec72} Let $\ensuremath{\mathbf z}\in \ensuremath{K_{\exp}}^* $ with $z_x=z_y = 0$ and $z_z>0$ so that $\{\ensuremath{\mathbf z} \}^\perp \cap \ensuremath{K_{\exp}}=\ensuremath{{\mathcal F}}_\infty$. Let $\eta > 0$ and let $\gamma_{\ensuremath{\mathbf z},\eta}$ be defined as in \eqref{gammabetaeta} with $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$ in \eqref{def:frakg}.
Then $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$ and \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{{\mathcal F}}_{\infty})\le \max\{2,2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\cdot\ensuremath{\mathfrak{g}}_\infty(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},K_{\exp}))\ \ \ \mbox{whenever $\ensuremath{\mathbf q}\in \{\ensuremath{\mathbf z}\}^\perp\cap B(\eta)$.} \] \end{theorem} \begin{proof} Take $\ensuremath{\bar{\vv}} \in \ensuremath{{\mathcal F}}_\infty$ and a sequence $\{\ensuremath{\mathbf v}^k\}\subset \partial K_{\exp}\cap B(\eta) \backslash \ensuremath{{\mathcal F}}_\infty$ such that \begin{equation*} \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf v}^k = \underset{k \rightarrow \infty}{\lim}\ensuremath{\mathbf w}^k =\ensuremath{\bar{\vv}}, \end{equation*} where $\ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k$, $\ensuremath{\mathbf u}^k = P_{\ensuremath{{\mathcal F}}_\infty}\ensuremath{\mathbf w}^k$, and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$. Since $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$, in view of \eqref{Finfinity} and \eqref{Fneginfinity}, we must have $\ensuremath{\mathbf v}^k\notin \ensuremath{{\mathcal F}}_{-\infty}$. Then, from \eqref{d:Kexp} and \eqref{Finfinity}, we have \begin{equation}\label{vwurepre} \ensuremath{\mathbf v}^k = (v^k_x,v^k_y,v^k_ye^{v^k_x/v^k_y}) \mbox{ with } v^k_y>0,\ \ \ensuremath{\mathbf w}^k = (v^k_x,v^k_y,0)\ \ {\rm and}\ \ \ensuremath{\mathbf u}^k = (\min\{v^k_x,0\},0,0). \end{equation} Since $\ensuremath{\mathbf w}^k\to \ensuremath{\bar{\vv}}$ and $\ensuremath{\mathbf v}^k\to \ensuremath{\bar{\vv}}$, without loss of generality, by passing to a subsequence if necessary, we assume in addition that $\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|\le e^{-2}$ for all $k$. 
From \eqref{vwurepre} we conclude that $\ensuremath{\mathbf v}^k \neq \ensuremath{\mathbf w}^k$, hence $\ensuremath{\mathfrak{g}}_\infty(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|) = -(\ln\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|)^{-1}$. We consider the following two cases in order to show that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$: \begin{enumerate}[(I)] \item\label{finalebcasevneq0} $\ensuremath{\bar{\vv}}\neq \mathbf{0}$; \item\label{finalebcasev0} $\ensuremath{\bar{\vv}} = \mathbf{0}$. \end{enumerate} \ref{finalebcasevneq0}: In this case, we have $\ensuremath{\bar{\vv}} = ({\bar{v}}_x,0,0)$ for some ${\bar{v}}_x < 0$. This implies that $v^k_x<0$ for all large $k$. Hence, we have from \eqref{vwurepre} that for all large $k$, \begin{equation*} \frac{\ensuremath{\mathfrak{g}}_\infty(\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|) }{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k \|} = \frac{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}}{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|} = -\frac{1}{v^k_y\left(v^k_x/v^k_y + \ln v^k_y \right)} = -\frac{1}{v^k_y\ln v^k_y + v^k_x} \to -\frac1{{\bar{v}}_x} > 0 \end{equation*} since $v^k_y\to 0$ and $v^k_x\to {\bar{v}}_x < 0$. This shows that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$. \ref{finalebcasev0}: If $v_x^k\le 0$ infinitely often, by extracting a subsequence, we assume that $v_x^k\le 0$ for all $k$. Since $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$ (and $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf v}^k$), we note from \eqref{vwurepre} that \[ -\frac{1}{v^k_y\ln v^k_y + v^k_x} = \frac{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}}{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|} \in (0,\infty)\ \ \mbox{for all}\ k. 
\] Since $\{-(v^k_y\ln v^k_y + v^k_x)\}$ is a positive sequence and it converges to zero as $(v^k_x,v^k_y)\to 0$, it follows that $\lim\limits_{k\to\infty}\frac{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}}{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|}=\infty$. This shows that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$. Now, it remains to consider the case that $v_x^k>0$ for all large $k$. By passing to a subsequence if necessary, we assume that $v_x^k>0$ for all $k$. By solving for $v_x^k$ from $v^k_z=v^k_y e^{v^k_x/v^k_y} > 0$ and noting \eqref{vwurepre}, we obtain that \begin{equation}\label{vwurepre2} \ensuremath{\mathbf v}^k = (v^k_y\ln(v_z^k/v_y^k),v^k_y,v^k_z) \mbox{ with } v^k_y>0,\ \ \ensuremath{\mathbf w}^k = (v^k_y\ln(v_z^k/v_y^k),v^k_y,0)\ \ {\rm and}\ \ \ensuremath{\mathbf u}^k = (0,0,0). \end{equation} Also, we note from $v_x^k=v^k_y\ln(v_z^k/v_y^k)>0$, $v_y^k>0$ and the monotonicity of $\ln(\cdot)$ that for all $k$, \begin{equation}\label{bugfix} v^k_z>v^k_y>0. \end{equation} Next consider the function $h(t) := \frac{1}t\sqrt{1+(\ln t)^2}$ on $[1,\infty)$. Then $h$ is continuous and positive. Since $h(1)=1$ and $\lim_{t\to \infty}h(t) = 0$, there exists $M_h$ such that $h(t)\le M_h$ for all $t\ge 1$. 
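As an illustrative aside (a numerical check under arbitrary sample inputs, not part of the proof), both the boundedness of $h$ on $[1,\infty)$ and the basic properties of $\ensuremath{\mathfrak{g}}_\infty$ from \eqref{def:frakg} are easy to confirm in Python:

```python
import math

# Grid check (illustration only): h(t) = sqrt(1 + (ln t)^2) / t is bounded on
# [1, infinity).  On this sampled grid the maximum is attained at t = 1, so
# M_h = h(1) = 1 is a valid bound for the sampled points.
def h(t):
    return math.sqrt(1.0 + math.log(t) ** 2) / t

grid = [1.0 + 0.01 * i for i in range(10000)]  # t ranging over [1, 100.99]
assert abs(h(1.0) - 1.0) < 1e-12
assert max(h(t) for t in grid) <= h(1.0) + 1e-12
assert h(1e6) < 1e-4  # h(t) -> 0 as t -> infinity

# The function g_infty: continuous at t = 1/e^2 and dominating |t|.
def g_inf(t):
    if t == 0.0:
        return 0.0
    if t <= math.exp(-2.0):
        return -1.0 / math.log(t)
    return 0.25 + 0.25 * math.exp(2.0) * t

t0 = math.exp(-2.0)
left, right = -1.0 / math.log(t0), 0.25 + 0.25 * math.exp(2.0) * t0
assert abs(left - right) < 1e-12  # the two branches agree at t0
assert all(t <= g_inf(t) for t in (0.0, 1e-6, 0.1, 1.0))
print("h bounded and g_infty properties confirmed on samples")
```

Of course, the grid check only probes finitely many points; the boundedness of $h$ is justified rigorously by the continuity and limit arguments in the text.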
Now, using \eqref{vwurepre2}, we have, upon defining $t_k:= v_z^k/v_y^k$ that \[ \begin{aligned} \frac{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|}{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}} &= \frac{v_y^k\sqrt{1+[\ln(v_z^k/v_y^k)]^2}}{-(\ln v_z^k)^{-1}} = -v_y^k\sqrt{1+[\ln(v_z^k/v_y^k)]^2}\ln v_z^k\\ & \overset{\rm (a)}= -\frac{v_y^k}{v_z^k}\sqrt{1+\left[\ln\left(\frac{v_z^k}{v_y^k}\right)\right]^2}v_z^k\ln v_z^k \overset{\rm (b)}= - h(t_k)v_z^k\ln v_z^k \overset{\rm (c)}\le -M_hv_z^k\ln v_z^k, \end{aligned} \] where the division by $v_z^k$ in (a) is legitimate because $v_z^k>0$, (b) follows from the definition of $h$ and the fact that $t_k > 1$ (see \eqref{bugfix}), and (c) holds because of the definition of $M_h$ and the fact that $-\ln v_z^k > 0$ (thanks to $v_z^k = \|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf v}^k\|\le e^{-2}$). Since $v^k_z\to 0$, it then follows that $\left\{\frac{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|}{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}}\right\}$ is a positive sequence that converges to zero. Thus, $\lim\limits_{k\to\infty}\frac{-(\ln\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|)^{-1}}{\|\ensuremath{\mathbf w}^k - \ensuremath{\mathbf u}^k\|}=\infty$, which again shows that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$. Having shown that \eqref{infratiohkb} does not hold for $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$, in view of Lemma~\ref{lem:infratio}, we must have $\gamma_{\ensuremath{\mathbf z},\eta} \in (0,\infty]$. Then the result follows from Theorem~\ref{thm:1dfacesmain} and \eqref{def:frakg_p}. 
\end{proof} Combining Theorem~\ref{thm:nonzerogammasec71}, Theorem~\ref{thm:nonzerogammasec72} and Lemma~\ref{lem:facialresidualsbeta}, and noting \eqref{def:frakg_p} and $\gamma_{\ensuremath{\mathbf z},0}=\infty$ (see \eqref{gammabetaeta}), we can now summarize the {{one-step facial residual function}}s derived in this section in the following corollary. \begin{corollary}[{{one-step facial residual function}s} concerning $\ensuremath{{\mathcal F}}_{\infty}$]\label{col:1dfaces_infty} Let $\ensuremath{\mathbf z}\in K_{\exp}^*$ with $z_x=0$ and $\{\ensuremath{\mathbf z} \}^\perp \cap K_{\exp}=\ensuremath{{\mathcal F}}_\infty$. \begin{enumerate}[{\rm (i)}] \item \label{lem:facialresidualsbeta:1}In the case when $z_y > 0$, let $\kappa_{\ensuremath{\mathbf z},t}$ be defined as in \eqref{haha2} with $\ensuremath{\mathfrak{g}} = |\cdot|$. Then the function $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb R}_+\times\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ given by \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):=\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},t}(\epsilon +\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} ) \end{equation*} is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. In particular, there exist $\kappa > 0$ and a nonnegative monotone nondecreasing function $\rho:\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ given by $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon + \rho(t)\epsilon$ is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$.
\item \label{lem:facialresidualsbeta:2} In the case when $z_y = 0$, let $\kappa_{\ensuremath{\mathbf z},t}$ be defined as in \eqref{haha2} with $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_\infty$ in \eqref{def:frakg}. Then the function $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}:\ensuremath{\mathbb R}_+\times\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ given by \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):=\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} + \kappa_{\ensuremath{\mathbf z},t}\ensuremath{\mathfrak{g}}_\infty(\epsilon +\max \left\{\epsilon,\epsilon/\|\ensuremath{\mathbf z}\| \right\} ) \end{equation*} is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. In particular, there exist $\kappa > 0$ and a nonnegative monotone nondecreasing function $\rho:\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$ given by $\hat \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) \coloneqq \kappa \epsilon + \rho(t)\ensuremath{\mathfrak{g}}_{\infty}(\epsilon)$ is a {{one-step facial residual function}} for $K_{\exp}$ and $\ensuremath{\mathbf z}$. \end{enumerate} \end{corollary} \subsubsection{The non-exposed face $\ensuremath{{\mathcal F}}_{ne}$} Recall the unique non-exposed face of $K_{\exp}$: $$ \ensuremath{\mathcal{F}}_{ne} := \{(0,0,z) \;|\; z \geq 0\}. $$ In this subsection, we take a look at $\ensuremath{{\mathcal F}}_{ne}$. Note that $\ensuremath{{\mathcal F}}_{ne}$ is an exposed face of $\ensuremath{{\mathcal F}}_{-\infty}$, which is polyhedral. This observation leads immediately to the following corollary, which also follows from \cite[Proposition~18]{L17} by letting $ \ensuremath{\mathcal{F}}\coloneqq \ensuremath{\mathcal{K}} \coloneqq\ensuremath{{\mathcal F}}_{-\infty}$ therein. We omit the proof for brevity.
\begin{corollary}[{{one-step facial residual function}} for $\ensuremath{{\mathcal F}}_{ne}$]\label{col:frf_ne} Let $\ensuremath{\mathbf z} \in \ensuremath{{\mathcal F}}_{-\infty}^*$ be such that $\ensuremath{{\mathcal F}}_{ne} = \ensuremath{{\mathcal F}}_{-\infty} \cap \{\ensuremath{\mathbf z}\}^\perp$. Then there exists $\kappa > 0$ such that \[ \psi _{\ensuremath{{\mathcal F}}_{-\infty},\ensuremath{\mathbf z} }(\epsilon,t) \coloneqq \kappa\epsilon \] is a {{one-step facial residual function}} for $\ensuremath{{\mathcal F}}_{-\infty}$ and $\ensuremath{\mathbf z}$. \end{corollary} \subsection{Error bounds}\label{sec:exp_err} In this subsection, we return to the feasibility problem \eqref{eq:feas} and consider the case where $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}}$. We now have all the tools for obtaining error bounds. Recalling Definition~\ref{def:hold}, we can state the following result. \begin{theorem}[Error bounds for \eqref{eq:feas} with $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}}$]\label{thm:main_err} Let $ \ensuremath{\mathcal{L}} \subseteq \ensuremath{\mathbb R}^3$ be a subspace and $\ensuremath{\mathbf a} \in \ensuremath{\mathbb R}^3$ such that $( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{K_{\exp}} \neq \emptyset$. Then the following items hold. \begin{enumerate} \item The distance to the PPS condition of $\{\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) \leq 1$. \item\label{thm:mainii} If $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})=0 $, then $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a Lipschitzian error bound.
\item\label{thm:mainiii} Suppose $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})=1$ and let $ \ensuremath{\mathcal{F}} \subsetneq \ensuremath{K_{\exp}}$ be a chain of faces of length $2$ satisfying items {\rm(ii)} and {\rm(iii)} of Proposition~\ref{prop:fra_poly}. We have the following possibilities. \begin{enumerate}[label={\rm (\alph*)}] \item\label{thm:mainiiia} If $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{-\infty}$ then $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy an entropic error bound as in \eqref{eq:entropic_err}. In addition, for all $\alpha \in (0,1)$, a uniform H\"olderian error bound with exponent $\alpha$ holds. \item\label{thm:mainiiib} If $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{\beta}$, with $\beta \in \ensuremath{\mathbb R}$, then $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a uniform H\"olderian error bound with exponent $1/2$. \item\label{thm:mainiiic} Suppose that $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{\infty}$. If there exists $\ensuremath{\mathbf z}\in \ensuremath{K_{\exp}}^* \cap \ensuremath{\mathcal{L}}^\perp \cap \{\ensuremath{\mathbf a}\}^\perp$ with $z_x=0$, $z_y > 0$ and $z_z>0$ then $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a Lipschitzian error bound. Otherwise, $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a log-type error bound as in \eqref{eq:log_err}. \item \label{thm:mainivc} If $ \ensuremath{\mathcal{F}} = \{\mathbf{0} \}$, then $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a Lipschitzian error bound. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} (i): All proper faces of $\ensuremath{K_{\exp}}$ are polyhedral; therefore, $\ell_{\text{poly}}(\ensuremath{K_{\exp}}) = 1$.
By item \ref{prop:fra_poly:1} of Proposition~\ref{prop:fra_poly}, there exists a chain of length $2$ satisfying item \ref{prop:fra_poly:3} of Proposition~\ref{prop:fra_poly}. Therefore, $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) \leq 1$. (ii): If $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) = 0$, it is because $\{\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies the PPS condition, which implies a Lipschitzian error bound by Proposition~\ref{prop:cq_er}. (iii): Next, suppose $d_{\text{PPS}}(\ensuremath{K_{\exp}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})=1$ and let $ \ensuremath{\mathcal{F}} \subsetneq \ensuremath{K_{\exp}}$ be a chain of faces of length $2$ satisfying items \ref{prop:fra_poly:2} and \ref{prop:fra_poly:3} of Proposition~\ref{prop:fra_poly}, together with $\ensuremath{\mathbf z}\in K_{\exp}^* \cap \ensuremath{\mathcal{L}}^\perp \cap \{\ensuremath{\mathbf a}\}^\perp$ such that \[ \ensuremath{\mathcal{F}} = \ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp. \] Since positively scaling $\ensuremath{\mathbf z}$ does not affect the chain of faces, we may assume that $\norm{\ensuremath{\mathbf z}} = 1$. Also, in what follows, for simplicity, we define \[ {\widehat\dist}(\ensuremath{\mathbf x}) \coloneqq \max \{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}), \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{K_{\exp}}) \}. \] Then, we prove each item by applying Theorem~\ref{theo:err} with the corresponding facial residual function. \begin{enumerate}[{\rm (a)}] \item If $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{-\infty}$, the {{one-step facial residual function}}s are given by Corollary~\ref{col:frf_2dface_entropic}. 
First we consider the case where $\ensuremath{\mathfrak{g}} = \ensuremath{\mathfrak{g}}_{-\infty}$ and we have \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):= \epsilon + \kappa_{\ensuremath{\mathbf z},t}\ensuremath{\mathfrak{g}}_{-\infty}(2\epsilon ), \end{equation*} where $\ensuremath{\mathfrak{g}}_{-\infty}$ is as in \eqref{d:entropy}. Then, if $\psi$ is a positively rescaled shift of $\psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}$, using the monotonicity of $\ensuremath{\mathfrak{g}}_{-\infty}$ and of $\kappa_{\ensuremath{\mathbf z},t}$ as a function of $t$, we conclude that there exists $\widehat M > 0$ such that \begin{equation*} \psi(\epsilon,t) \leq \widehat M \epsilon + \widehat M \kappa_{\ensuremath{\mathbf z}, \widehat M t}\ensuremath{\mathfrak{g}}_{-\infty}(\widehat M\epsilon ). \end{equation*} Invoking Theorem~\ref{theo:err}, using the monotonicity of all functions involved in the definition of $\psi$ and recalling \eqref{d:entropy_p}, we conclude that for every bounded set $B$, there exists $\kappa _B > 0$ such that \begin{equation}\label{eq:entropic_err} \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{K_{\exp}}\right) \leq \kappa_{B}\ensuremath{\mathfrak{g}}_{-\infty}({\widehat\dist}(\ensuremath{\mathbf x})), \quad \forall \ensuremath{\mathbf x} \in B, \end{equation} which shows that an entropic error bound holds. Next, we consider the case $\ensuremath{\mathfrak{g}} = |\cdot|^{\alpha}$. Given $\alpha \in (0,1)$, we have the following {{one-step facial residual function}}: \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):= \epsilon + \kappa_{\ensuremath{\mathbf z},t}2^\alpha \epsilon^\alpha, \end{equation*} where $\kappa_{\ensuremath{\mathbf z},t}$ is defined as in \eqref{haha2}.
Invoking Theorem~\ref{theo:err}, we conclude that for every bounded set $B$, there exists $\kappa _B > 0$ such that \[ \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{K_{\exp}}\right) \leq \kappa _B {\widehat\dist}(\ensuremath{\mathbf x}) + \kappa_{B} {\widehat\dist}(\ensuremath{\mathbf x})^\alpha, \quad \forall \ensuremath{\mathbf x} \in B. \] In addition, for $\ensuremath{\mathbf x} \in B$, we have ${\widehat\dist}(\ensuremath{\mathbf x}) \leq {\widehat\dist}(\ensuremath{\mathbf x})^\alpha {M}$, where $M = \sup _{\ensuremath{\mathbf x} \in B} {\widehat\dist}(\ensuremath{\mathbf x})^{1-\alpha}$. In conclusion, for $\kappa = 2\kappa_{B}\max\{M,1\}$, we have \[ \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{K_{\exp}}\right) \leq \kappa {\widehat\dist}(\ensuremath{\mathbf x})^\alpha, \quad \forall \ensuremath{\mathbf x} \in B. \] That is, a uniform H\"olderian error bound holds with exponent $\alpha$. \item If $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{\beta}$, with $\beta \in \ensuremath{\mathbb R}$, then the {{one-step facial residual function}} is given by Corollary~\ref{col:frf_fb}, that is, we have \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t) := \epsilon + \kappa_{\ensuremath{\mathbf z},t}\sqrt{2} \epsilon^{1/2}. \end{equation*} Then, following the same argument as in the second half of item (a), we conclude that a uniform H\"olderian error bound holds with exponent $1/2$. \item If $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{\infty}$, the {{one-step facial residual function}}s are given by Corollary~\ref{col:1dfaces_infty} and they depend on $\ensuremath{\mathbf z}$.
Since $ \ensuremath{\mathcal{F}} = \ensuremath{{\mathcal F}}_{\infty}$, we must have $z_x = 0$ and $z_z > 0$; see Section~\ref{sec:exposed}. The deciding factor is whether $z_y$ is positive or zero. If $z_y > 0$, then we have the following {{one-step facial residual function}}: \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):= (1+2 \kappa_{\ensuremath{\mathbf z},t})\epsilon, \end{equation*} where $\kappa_{\ensuremath{\mathbf z},t}$ is defined as in \eqref{haha2}. In this case, analogously to items (a) and (b), we have a Lipschitzian error bound. If $z_y = 0$, we have \begin{equation*} \psi_{ \ensuremath{\mathcal{K}},\ensuremath{\mathbf z}}(\epsilon,t):= \epsilon + \kappa_{\ensuremath{\mathbf z},t}\ensuremath{\mathfrak{g}}_\infty(2\epsilon ), \end{equation*} where $\ensuremath{\mathfrak{g}}_\infty$ is as in \eqref{def:frakg}. Arguing analogously to the proof of item (a) but making use of \eqref{def:frakg_p} in place of \eqref{d:entropy_p}, we conclude that for every bounded set $B$, there exists $\kappa _B > 0$ such that \begin{equation}\label{eq:log_err} \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{K_{\exp}}\right) \leq \kappa_{B}\ensuremath{\mathfrak{g}}_\infty({\widehat\dist}(\ensuremath{\mathbf x})), \quad \forall \ensuremath{\mathbf x} \in B. \end{equation} \item See \cite[Proposition~27]{L17}.
\end{enumerate} \end{proof} \begin{remark}[Tightness of Theorem~\ref{thm:main_err}]\label{rem:opt} We will argue that Theorem~\ref{thm:main_err} is tight by showing that for every situation described in item (iii), there is a specific choice of $ \ensuremath{\mathcal{L}}$ and a sequence $\{\ensuremath{\mathbf w}^k\}$ in $ \ensuremath{\mathcal{L}}\backslash\ensuremath{K_{\exp}}$ with $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}) \to 0$ along which the corresponding error bound for $\ensuremath{K_{\exp}}$ and $ \ensuremath{\mathcal{L}}$ is off by at most a multiplicative constant. \begin{enumerate}[{\rm (a)}] \item\label{opt:a} Let $ \ensuremath{\mathcal{L}} = \ensuremath{\mathrm{span}\,} \ensuremath{{\mathcal F}}_{-\infty} = \{(x,y,z) \mid y = 0 \}$ (see \eqref{eq:exp_2d}) and consider the sequence $\{\ensuremath{\mathbf w}^k\}$ where $\ensuremath{\mathbf w}^k = ((1/(k+1))\ln(k+1),0,1)$, for every $k \in \ensuremath{\mathbb N}$. Then, $ \ensuremath{\mathcal{L}} \cap \ensuremath{K_{\exp}} = \ensuremath{{\mathcal F}}_{-\infty}$ and we are under the conditions of item~\ref{thm:mainiii}\ref{thm:mainiiia} of Theorem~\ref{thm:main_err}. Since $\{\ensuremath{\mathbf w}^k\} =: B \subseteq \ensuremath{\mathcal{L}}$, there exists $\kappa_B > 0$ such that \[ \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}} \cap \ensuremath{K_{\exp}}\right) \leq \kappa_{B}\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}})), \quad \forall k \in \ensuremath{\mathbb N}. \] Then, the projection of $\ensuremath{\mathbf w}^k$ onto $\ensuremath{{\mathcal F}}_{-\infty}$ is given by $(0,0,1)$. Therefore, \[ \frac{\ln(k+1)}{k+1} = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}}) \leq \kappa_{B}\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}})). 
\] Let $\ensuremath{\mathbf v}^k = ((1/(k+1))\ln(k+1),1/(k+1),1)$ for every $k$. Then, we have $\ensuremath{\mathbf v}^k \in \ensuremath{K_{\exp}}$. Therefore, $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}}) \leq 1/(k+1)$. In view of the definition of $\ensuremath{\mathfrak{g}}_{-\infty}$ (see \eqref{d:entropy}), we conclude that for large enough $k$ we have \[ \frac{\ln(k+1)}{k+1} = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}}) \leq \kappa_{B}\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}})) \leq \kappa_B\frac{\ln(k+1)}{k+1}. \] Thus, it holds that for all sufficiently large $k$, \[ 1\le \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}})}{\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))} \le \kappa_B. \] Consequently, for any given nonnegative function $\ensuremath{\mathfrak{g}}:\ensuremath{\mathbb R}_+\to \ensuremath{\mathbb R}_+$ such that $\lim_{t\downarrow 0}\frac{\ensuremath{\mathfrak{g}}(t)}{\ensuremath{\mathfrak{g}}_{-\infty}(t)}=0$, we have upon noting $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}})\to 0$ that \[ \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}})}{\ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))} = \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}})}{\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))}\frac{\ensuremath{\mathfrak{g}}_{-\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf 
w}^k,\ensuremath{K_{\exp}}))}{\ensuremath{\mathfrak{g}}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))} \to \infty, \] which shows that the choice of $\ensuremath{\mathfrak{g}}_{-\infty}$ in the error bound is tight. \item Let $\beta \in \ensuremath{\mathbb R}$ and let $\ensuremath{\widehat{\mathbf z}}$, $\ensuremath{\widehat{\mathbf p}}$ and $\ensuremath{\widehat{\mathbf f}}$ be as in \eqref{veczfp}. Let $ \ensuremath{\mathcal{L}} = \{\ensuremath{\mathbf z}\}^\perp$ with $z_x < 0$ such that $\ensuremath{K_{\exp}} \cap \ensuremath{\mathcal{L}} = \ensuremath{{\mathcal F}}_{\beta}$. We are then under the conditions of item~\ref{thm:mainiii}\ref{thm:mainiiib} of Theorem~\ref{thm:main_err}. We consider the following sequences: \begin{equation*} \ensuremath{\mathbf v}^k = \begin{bmatrix} 1-\beta +1/k\\ 1\\ e^{1-\beta + 1/k} \end{bmatrix},\quad \ensuremath{\mathbf w}^k = P_{\{\ensuremath{\mathbf z}\}^\perp}\ensuremath{\mathbf v}^k,\quad \ensuremath{\mathbf u}^k = P_{\ensuremath{{\mathcal F}}_\beta}\ensuremath{\mathbf w}^k. \end{equation*} For every $k$ we have $\ensuremath{\mathbf v}^k \in \partial \ensuremath{K_{\exp}} \setminus \ensuremath{{\mathcal F}}_{\beta}$, and $\ensuremath{\mathbf v}^k \neq \ensuremath{\mathbf w}^k$ (because otherwise, we would have $\ensuremath{\mathbf v}^k \in \ensuremath{K_{\exp}} \cap \{\ensuremath{\mathbf z}\}^\perp = \ensuremath{{\mathcal F}}_{\beta}$). In addition, we have $\ensuremath{\mathbf v}^k \to \ensuremath{\widehat{\mathbf f}}$ and, since $\ensuremath{\widehat{\mathbf f}} \in \ensuremath{{\mathcal F}}_{\beta}$, we have $\ensuremath{\mathbf w}^k \to \ensuremath{\widehat{\mathbf f}}$ as well. Next, notice that we have $\langle \ensuremath{\widehat{\mathbf f}}, \ensuremath{\mathbf v}^k \rangle \geq 0$ for $k$ sufficiently large and $|v_x^k/v_y^k - (1-\beta)| \rightarrow 0$.
Then, following the computations outlined in case~\ref{ebsubcasev0} of the proof of Theorem~\ref{thm:nonzerogamma} and letting $\zeta_k\coloneqq v_x^k/v_y^k$, we have from \eqref{hahahehe1} and \eqref{newlyadded} that $h_2(\zeta_k)\neq 0$ for all large $k$ (hence, $\ensuremath{\mathbf w}^k\neq \ensuremath{\mathbf u}^k$ for all large $k$), and that \begin{equation}\label{eq:beta_limit} L_{\beta} \coloneqq \lim_{k \rightarrow \infty}\frac{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^{\frac{1}{2}}}{\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\|}=\lim_{k \rightarrow \infty}\frac{\|\ensuremath{\widehat{\mathbf p}}\|}{\|\ensuremath{\widehat{\mathbf z}}\|^{\frac{1}{2}}}\frac{|h_1(\zeta_k)|^{\frac{1}{2}}}{|h_2(\zeta_k)|} = \frac{\|\ensuremath{\widehat{\mathbf p}}\|}{\|\ensuremath{\widehat{\mathbf z}}\|^{\frac{1}{2}}}\frac1{\sqrt{2}(e^{\beta-1} + (\beta^2+1)e^{1-\beta})} \in(0,\infty), \end{equation} where the latter equality is from \eqref{taylor_limit}. On the other hand, from item~\ref{thm:mainiii}\ref{thm:mainiiib} of Theorem~\ref{thm:main_err}, for $B \coloneqq \{\ensuremath{\mathbf w}^k\}$, there exists $\kappa_B > 0$ such that for all $k\in \ensuremath{\mathbb N}$, \[ \|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\| = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap\ensuremath{K_{\exp}}) \leq \kappa_B \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}})^{\frac{1}{2}} \leq \kappa_B\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^{\frac{1}{2}}. \] However, from \eqref{eq:beta_limit}, for large enough $k$, we have $\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf u}^k\| \geq 1/(2L_{\beta})\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^{\frac{1}{2}}$. 
Therefore, for large enough $k$ we have \[ \frac{1}{2L_{\beta}}\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^{\frac{1}{2}} \leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap\ensuremath{K_{\exp}})\leq \kappa_B \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}})^{\frac{1}{2}} \leq \kappa_B\|\ensuremath{\mathbf w}^k-\ensuremath{\mathbf v}^k\|^{\frac{1}{2}}. \] Consequently, it holds that for all large enough $k$, \[ \frac{1}{2L_\beta}\le \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}})}{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}})^\frac12} \le \kappa_B. \] Arguing similarly as in case (a), we can also conclude that the choice of $|\cdot|^\frac12$ in the error bound is tight. \item Let $\ensuremath{\mathbf z}= (0,0,1)$ and $ \ensuremath{\mathcal{L}} = \{(x,y,0) \mid x,y \in \ensuremath{\mathbb R} \} = \{\ensuremath{\mathbf z}\}^\perp$. Then, from \eqref{Finfinity}, we have $ \ensuremath{\mathcal{L}} \cap \ensuremath{K_{\exp}} = \ensuremath{{\mathcal F}}_{\infty}$. We are then under case \ref{thm:mainiii}\ref{thm:mainiiic} of Theorem~\ref{thm:main_err}. Because there is no $\hat \ensuremath{\mathbf z} \in \ensuremath{\mathcal{L}}^\perp $ with $\hat z _y > 0$, we have a log-type error bound as in \eqref{eq:log_err}. We proceed as in item \ref{opt:a} using sequences such that $\ensuremath{\mathbf w}^k=(-1,1/k,0)$, $\ensuremath{\mathbf v}^k=(-1,1/k,(1/k)e^{-k})$, $\ensuremath{\mathbf u}^k=(-1,0,0)$, for every $k$. Note that $\ensuremath{\mathbf w}^k \in \ensuremath{\mathcal{L}}, \ensuremath{\mathbf v}^k \in \ensuremath{K_{\exp}}$ and $\proj{\ensuremath{{\mathcal F}}_\infty}(\ensuremath{\mathbf w}^k) = \ensuremath{\mathbf u}^k$, for every $k$. 
Therefore, there exists $\kappa_B > 0$ such that \begin{equation*} \frac{1}{k} = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}} \cap \ensuremath{K_{\exp}}) \leq \kappa_B \ensuremath{\mathfrak{g}}_{\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))\leq \kappa_{B}\ensuremath{\mathfrak{g}}_{\infty}\left(\frac{1}{ke^k}\right), \quad \forall k \in \ensuremath{\mathbb N}. \end{equation*} In view of the definition of $\ensuremath{\mathfrak{g}}_{\infty}$ (see \eqref{def:frakg}), there exists $L > 0$ such that for large enough $k$ we have \begin{equation*} \frac{1}{k} = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}} \cap \ensuremath{K_{\exp}}) \le \kappa_B \ensuremath{\mathfrak{g}}_{\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}})) \leq \frac{L}{k}. \end{equation*} Consequently, it holds that for all large enough $k$, \[ \frac{\kappa_B}{L}\le \frac{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{\mathcal{L}}\cap \ensuremath{K_{\exp}})}{\ensuremath{\mathfrak{g}}_{\infty}(\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k,\ensuremath{K_{\exp}}))} \le \kappa_B. \] Arguing similarly as in case (a), we conclude that the choice of $\ensuremath{\mathfrak{g}}_{\infty}$ is tight. \end{enumerate} Note that a Lipschitz error bound is always tight up to a constant, because $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}\cap ( \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})) \geq \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}),\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\}$. Therefore, the error bounds in items~\ref{thm:mainii}, \ref{thm:mainiii}\ref{thm:mainivc} and in the first half of \ref{thm:mainiii}\ref{thm:mainiiic} are tight. 
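The sequences used in cases (a) and (c) above lend themselves to a quick numerical sanity check. The following sketch (illustration only) assumes the standard boundary parametrization $z = y e^{x/y}$ for $y > 0$ of $\ensuremath{K_{\exp}}$ from \eqref{d:Kexp}; the helper `on_boundary` is ours, not part of any library.

```python
import math

# Assumed boundary parametrization of K_exp (cf. the definition in the paper):
# for y > 0, a point (x, y, z) lies on the boundary when z = y * exp(x / y).
def on_boundary(x, y, z, tol=1e-12):
    """Check z = y * exp(x / y) for y > 0, up to a small tolerance."""
    return y > 0 and abs(z - y * math.exp(x / y)) <= tol * max(1.0, abs(z))

for k in range(1, 50):
    # Case (a): w^k = ((ln(k+1))/(k+1), 0, 1), v^k = ((ln(k+1))/(k+1), 1/(k+1), 1).
    t = math.log(k + 1) / (k + 1)
    assert on_boundary(t, 1.0 / (k + 1), 1.0)          # v^k lies in K_exp
    # Hence d(w^k, K_exp) <= ||w^k - v^k|| = 1/(k+1), as claimed.
    assert math.isclose(math.dist((t, 0.0, 1.0), (t, 1.0 / (k + 1), 1.0)),
                        1.0 / (k + 1))
    # Case (c): w^k = (-1, 1/k, 0), v^k = (-1, 1/k, (1/k) e^{-k}).
    assert on_boundary(-1.0, 1.0 / k, math.exp(-k) / k)  # v^k lies in K_exp
    # Hence d(w^k, K_exp) <= ||w^k - v^k|| = (1/k) e^{-k}, as claimed.
    assert math.isclose(math.dist((-1.0, 1.0 / k, 0.0),
                                  (-1.0, 1.0 / k, math.exp(-k) / k)),
                        math.exp(-k) / k)
```

These checks confirm the distance estimates $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}}) \leq 1/(k+1)$ and $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf w}^k, \ensuremath{K_{\exp}}) \leq (1/k)e^{-k}$ used in cases (a) and (c), respectively.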
\end{remark} Sometimes we may need to consider direct products of multiple copies of $\ensuremath{K_{\exp}}$ in order to model certain problems, i.e., our problem of interest could have the following shape: \begin{equation*} \text{find} \quad \ensuremath{\mathbf x} \in ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}}, \label{eq:multiple_exp} \end{equation*} where $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}} \times \cdots \times \ensuremath{K_{\exp}}$ is a direct product of $m$ exponential cones. Fortunately, we already have all the tools required to extend Theorem~\ref{thm:main_err} and compute error bounds for this case too. We recall that the faces of a direct product of cones are direct products of the faces of the individual cones.\footnote{Here is a sketch of the proof. If $ \ensuremath{\mathcal{F}}^1 \mathrel{\unlhd} \ensuremath{\mathcal{K}}^1, \ensuremath{\mathcal{F}}^{2} \mathrel{\unlhd} \ensuremath{\mathcal{K}}^2$, then the definition of face implies that $ \ensuremath{\mathcal{F}}^1 \times \ensuremath{\mathcal{F}}^{2} \mathrel{\unlhd} \ensuremath{\mathcal{K}}^1 \times \ensuremath{\mathcal{K}}^2$. For the converse, let $ \ensuremath{\mathcal{F}} \mathrel{\unlhd} \ensuremath{\mathcal{K}}^1 \times \ensuremath{\mathcal{K}}^2$ and let $ \ensuremath{\mathcal{F}} ^1, \ensuremath{\mathcal{F}}^2$ be the projection of $ \ensuremath{\mathcal{F}}$ onto the first variable and second variables, respectively. Suppose that $\ensuremath{\mathbf x},\ensuremath{\mathbf y} \in \ensuremath{\mathcal{K}}^1$ are such that $\ensuremath{\mathbf x}+\ensuremath{\mathbf y} \in \ensuremath{\mathcal{F}}^1$. Then, $(\ensuremath{\mathbf x}+\ensuremath{\mathbf y},\ensuremath{\mathbf z}) \in \ensuremath{\mathcal{F}}$ for some $\ensuremath{\mathbf z} \in \ensuremath{\mathcal{K}}^2$. 
Since $(\ensuremath{\mathbf x}+\ensuremath{\mathbf y},\ensuremath{\mathbf z}) = (\ensuremath{\mathbf x},\ensuremath{\mathbf z}/2) + (\ensuremath{\mathbf y},\ensuremath{\mathbf z}/2)$ and $ \ensuremath{\mathcal{F}}$ is a face, we conclude that $(\ensuremath{\mathbf x},\ensuremath{\mathbf z}/2), (\ensuremath{\mathbf y},\ensuremath{\mathbf z}/2) \in \ensuremath{\mathcal{F}}$ and $\ensuremath{\mathbf x}, \ensuremath{\mathbf y} \in \ensuremath{\mathcal{F}}^1$. Therefore $ \ensuremath{\mathcal{F}}^1 \mathrel{\unlhd} \ensuremath{\mathcal{K}}^1$ and, similarly, $ \ensuremath{\mathcal{F}}^2 \mathrel{\unlhd} \ensuremath{\mathcal{K}}^2$. Then, the equality $ \ensuremath{\mathcal{F}} = \ensuremath{\mathcal{F}}^1 \times \ensuremath{\mathcal{F}}^2$ is proven using the definition of face and the fact that $(\ensuremath{\mathbf x},\ensuremath{\mathbf z}) = (\ensuremath{\mathbf x},0) + (0,\ensuremath{\mathbf z})$. } Therefore, using Proposition \ref{prop:frf_prod}, we are able to compute all the necessary {{one-step facial residual function}}s for $ \ensuremath{\mathcal{K}}$. Once they are obtained, we can invoke Theorem~\ref{theo:err}. Unfortunately, there are quite a number of different cases one must consider, so we cannot give a concise statement of an all-encompassing tight error bound result. We will, however, give an error bound result under the following \emph{simplifying assumption of non-exceptionality} or SANE. \begin{assumption}[SANE: simplifying assumption of non-exceptionality] Suppose \eqref{eq:feas} is feasible with $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}} \times \cdots \times \ensuremath{K_{\exp}}$ being a direct product of $m$ exponential cones.
We say that $ \ensuremath{\mathcal{K}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy the \emph{simplifying assumption of non-exceptionality} (SANE) if there exists a chain of faces $ \ensuremath{\mathcal{F}} _{\ell} \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_1 = \ensuremath{\mathcal{K}} $ as in Proposition~\ref{prop:fra_poly} with $\ell - 1 = {d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})}$ such that for all $i$, the exceptional face $\ensuremath{{\mathcal F}}_{\infty}$ of $\ensuremath{K_{\exp}}$ never appears as one of the blocks of $ \ensuremath{\mathcal{F}}_{i}$. \end{assumption} \begin{remark}[SANE is not unreasonable]\label{rem:sane} In many modelling applications of the exponential cone presented in \cite[Chapter~5]{MC2020}, translating to our notation, the $\ensuremath{\mathbf y}$ variable is fixed to be $1$ in \eqref{d:Kexp}. For example, the hypograph of the logarithm function ``$x \leq \ln(z)$'' can be represented as the constraint ``$(x,y,z) \in \ensuremath{K_{\exp}} \cap ( \ensuremath{\mathcal{L}} +\ensuremath{\mathbf a})$'', where $ \ensuremath{\mathcal{L}} +\ensuremath{\mathbf a} = \{(x,y,z) \mid y = 1\}$. Because the $y$ variable is fixed to be $1$, the feasible region does not intersect the 2D face $\ensuremath{{\mathcal F}}_{-\infty}$ nor its subfaces $\ensuremath{{\mathcal F}}_{\infty}$ and $\ensuremath{{\mathcal F}}_{ne}$. In particular, SANE is satisfied. More generally, if $ \ensuremath{\mathcal{K}}$ is a direct product of exponential cones and the affine space $ \ensuremath{\mathcal{L}} +\ensuremath{\mathbf a}$ is such that the $\ensuremath{\mathbf y}$ components of each block are fixed positive constants, then $ \ensuremath{\mathcal{K}}$ and $ \ensuremath{\mathcal{L}} +\ensuremath{\mathbf a}$ satisfy SANE. 
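The log-hypograph modelling just described can be illustrated concretely. The sketch below (our illustration, not from any modelling library) assumes the membership characterization $z \geq y e^{x/y}$ for $y > 0$ from \eqref{d:Kexp}; with $y$ fixed to $1$, membership of $(x,1,z)$ in $\ensuremath{K_{\exp}}$ reduces to $z \geq e^x$, i.e., $x \leq \ln(z)$ for $z > 0$.

```python
import math

# Assumed membership rule for K_exp with y > 0: (x, y, z) in K_exp iff z >= y*exp(x/y).
# Fixing y = 1 (the affine constraint L + a = {y = 1} from the remark) gives z >= exp(x).
def in_Kexp_y1(x, z):
    """Membership of (x, 1, z) in K_exp, i.e., z >= exp(x)."""
    return z >= math.exp(x)

# For z > 0, the constraint is exactly the hypograph of the logarithm: x <= ln(z).
for x, z in [(0.0, 1.0), (-2.0, 0.5), (1.0, 3.0), (1.5, 2.0)]:
    assert in_Kexp_y1(x, z) == (x <= math.log(z))
```

Since every such feasible point has $y = 1 > 0$, the region stays away from the face $\ensuremath{{\mathcal F}}_{-\infty}$ (where $y = 0$), which is the mechanism behind SANE in this example.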
On the other hand, problems involving the relative entropy $D(x,y) \coloneqq x \ln(x/y)$ are often modelled as ``minimize $t$'' subject to ``$(-t,x,y) \in \ensuremath{K_{\exp}}$'' and additional constraints. We could also have sums so that the problem is of the form ``minimize $\sum t_i$'' subject to ``$(-t_i,x_i,y_i) \in \ensuremath{K_{\exp}}$'' and additional constraints. In those cases, SANE may fail to be satisfied. \end{remark} Under SANE, we can state the following result. \begin{theorem}[Error bounds for direct products of exponential cones]\label{theo:sane} Suppose \eqref{eq:feas} is feasible with $ \ensuremath{\mathcal{K}} = \ensuremath{K_{\exp}} \times \cdots \times \ensuremath{K_{\exp}}$ being a direct product of $m$ exponential cones. Then the following hold. \begin{enumerate} \item The distance to the PPS condition of $\{ \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}\}$ satisfies $d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}) \leq m$. \item If SANE is satisfied, then $ \ensuremath{\mathcal{K}}$ and $ \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}$ satisfy a uniform H\"olderian error bound with exponent $2^{-d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})}$. \end{enumerate} \end{theorem} \begin{proof} (i): All proper faces of $\ensuremath{K_{\exp}}$ are polyhedral; therefore $\ell_{\text{poly}}(\ensuremath{K_{\exp}}) = 1$. By item \ref{prop:fra_poly:1} of Proposition~\ref{prop:fra_poly}, there exists a chain of length $\ell$ satisfying item \ref{prop:fra_poly:3} of Proposition~\ref{prop:fra_poly} such that $\ell-1 \leq m$. Therefore, $d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})\leq \ell-1 \leq m$.
(ii): If SANE is satisfied, then there exists a chain $ \ensuremath{\mathcal{F}} _{\ell} \subsetneq \cdots \subsetneq \ensuremath{\mathcal{F}}_1 = \ensuremath{\mathcal{K}} $ of length $\ell \leq m +1$ as in Proposition~\ref{prop:fra_poly}, together with the corresponding $\ensuremath{\mathbf z}_{1},\ldots,\ensuremath{\mathbf z}_{\ell-1}$. Also, the exceptional face $\ensuremath{{\mathcal F}}_{\infty}$ never appears as one of the blocks of the $ \ensuremath{\mathcal{F}} _i$. In what follows, for simplicity, we define \[ {\widehat\dist}(\ensuremath{\mathbf x}) \coloneqq \max \{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a}), \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, \ensuremath{\mathcal{K}}) \}. \] Then, we invoke Theorem~\ref{theo:err}, which implies that given a bounded set $B$, there exists a constant $\kappa _B > 0$ such that \begin{equation}\label{eq:bound_sane} \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}}\right) \leq \kappa _B ({\widehat\dist}(\ensuremath{\mathbf x})+\varphi({\widehat\dist}(\ensuremath{\mathbf x}),M)), \end{equation} where $M = \sup _{\ensuremath{\mathbf x}\in B} \norm{\ensuremath{\mathbf x}}$ and there are two cases for $\varphi$. If $\ell = 1$, $\varphi$ is the function such that $\varphi(\epsilon,M) = \epsilon$. If $\ell \geq 2$, we have $\varphi = \psi _{{\ell-1}}\diamondsuit \cdots \diamondsuit \psi_{{1}}$, where $\psi _{i}$ is a (suitable positively rescaled shift of a) {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}_{i}$ and $\ensuremath{\mathbf z}_i$. In the former case, the PPS condition is satisfied, we have a Lipschitzian error bound and we are done. We therefore assume that the latter case occurs with $\ell - 1 = {d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})}$. 
First, we compute the {{one-step facial residual function}}s for each $ \ensuremath{\mathcal{F}}_i$. In order to do that, we recall that each $ \ensuremath{\mathcal{F}}_{i}$ is a direct product $ \ensuremath{\mathcal{F}}_{i}^1\times \cdots \times \ensuremath{\mathcal{F}}_{i}^m$ where each $ \ensuremath{\mathcal{F}}_{i}^j$ is a face of $\ensuremath{K_{\exp}}$, excluding $\ensuremath{{\mathcal F}}_{\infty}$ by SANE. Therefore, a {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}_{i}^j$ can be obtained from Corollary~\ref{col:frf_2dface_entropic}, \ref{col:frf_fb} or \ref{col:frf_ne}. In particular, taking the worst\footnote{$\sqrt{\cdot}$ is ``worse'' than $\ensuremath{\mathfrak{g}}_{-\infty}$ in that, near zero, $\sqrt{t} \geq \ensuremath{\mathfrak{g}}_{-\infty}(t)$. The function $\ensuremath{\mathfrak{g}}_{\infty}$ need not be considered because, by SANE, $ \ensuremath{\mathcal{F}}_{\infty}$ never appears.} case into consideration, and taking the maximum of the facial residual functions, there exists a nonnegative monotone nondecreasing function $\rho :\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\psi$ given by \[ \psi(\epsilon,t) \coloneqq \rho(t) \epsilon +\rho(t)\sqrt{\epsilon } \] is a {{one-step facial residual function}} for each $ \ensuremath{\mathcal{F}}_{i}^j$. In what follows, in order to simplify the notation, we define $\hat\amf(t) \coloneqq \sqrt{t}$. Also, for every $j$, we use $\hat\amf_j$ to denote the composition of $\hat\amf$ with itself $j$ times, i.e., \begin{equation}\label{eq:hatg} \hat\amf_j = \underbrace{\hat\amf\circ \cdots \circ \hat\amf}_{j \text{ times}}; \end{equation} and we set $\hat\amf_0$ to be the identity map.
Using the above notation and Proposition~\ref{prop:frf_prod}, we conclude the existence of a nonnegative monotone nondecreasing function $\sigma: \ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ such that the function $\psi _{i}$ given by \begin{align} \psi _{i}(\epsilon,t) \coloneqq \sigma(t)\epsilon + \sigma(t)\hat\amf{(\epsilon)} \notag \end{align} is a {{one-step facial residual function}} for $ \ensuremath{\mathcal{F}}_i$ and $\ensuremath{\mathbf z}_i$. Therefore, for $\ensuremath{\mathbf x} \in B$, we have \begin{align} \psi _{i}(\epsilon,\norm{\ensuremath{\mathbf x}}) \leq \sigma(M)\epsilon+ \sigma(M)\hat\amf{(\epsilon)} = \psi _{i}(\epsilon,M), \label{eq:frf_fi} \end{align} where $M = \sup _{\ensuremath{\mathbf x}\in B} \norm{\ensuremath{\mathbf x}}$. Next, we are going to make a series of arguments related to the following informal principle: over a bounded set only the terms $\hat\amf_j$ with largest $j$ matter. We start by noting that for any $\ensuremath{\mathbf x}\in B$ and any $0\le k\le j\le \ell$, \begin{equation}\label{relationhahaha} \hat\amf_k({\widehat\dist}(\ensuremath{\mathbf x})) = {\widehat\dist}(\ensuremath{\mathbf x})^{2^{-k}} = {\widehat\dist}(\ensuremath{\mathbf x})^{(2^{-k} - 2^{-j})}{\widehat\dist}(\ensuremath{\mathbf x})^{2^{-j}} \le \hat\kappa_{j,k}{\widehat\dist}(\ensuremath{\mathbf x})^{2^{-j}} \le \hat\kappa\hat\amf_j({\widehat\dist}(\ensuremath{\mathbf x})), \end{equation} where $\hat\kappa_{j,k}:= \sup_{\ensuremath{\mathbf x}\in B}{\widehat\dist}(\ensuremath{\mathbf x})^{(2^{-k} - 2^{-j})} < \infty$ because $\ensuremath{\mathbf x} \mapsto {\widehat\dist}(\ensuremath{\mathbf x})^{(2^{-k} - 2^{-j})}$ is continuous and $B$ is bounded, and $\hat\kappa := \max_{0\le k\le j\le \ell}\hat\kappa_{j,k}$. Now, let $\varphi_j \coloneqq \psi _{{j}}\diamondsuit \cdots \diamondsuit \psi_{{1}}$, where $\diamondsuit$ is the diamond composition defined in \eqref{eq:comp}.
We will show by induction that for every $j \leq \ell-1$ there exists $\kappa _j$ such that \begin{equation}\label{eq:diamond_bound} \varphi_j({\widehat\dist}(\ensuremath{\mathbf x}),M) \leq \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})), \quad \forall \ensuremath{\mathbf x} \in B. \end{equation} For $j = 1$, it follows directly from \eqref{eq:frf_fi} and \eqref{relationhahaha}. Now, suppose that the claim is valid for some $j$ such that $j+1 \leq \ell-1$. By the inductive hypothesis, we have \begin{align} \varphi_{j+1}({\widehat\dist}(\ensuremath{\mathbf x}),M) &= \psi _{j+1}({\widehat\dist}(\ensuremath{\mathbf x})+ \varphi _{j}({\widehat\dist}(\ensuremath{\mathbf x}),M),M) \notag \\ & \leq \psi _{j+1}({\widehat\dist}(\ensuremath{\mathbf x})+ \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})),M) \notag\\ & \leq \psi _{j+1}(\tilde \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})),M), \label{eq:varphi_j} \end{align} where $\tilde \kappa_j \coloneqq 2\max\{\hat \kappa,\kappa_j\}$ and the last inequality follows from \eqref{relationhahaha}. Then, we plug $\epsilon = \tilde \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x}))$ in \eqref{eq:frf_fi} to obtain \begin{align} \psi _{j+1}(\tilde \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})),M) & = \sigma(M)\tilde \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})) + \sigma(M)\hat\amf(\tilde \kappa_j\hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x}))) \notag\\ & = \sigma(M) \tilde \kappa_j \hat\amf_{j}({\widehat\dist}(\ensuremath{\mathbf x})) + \sigma(M) \sqrt{\tilde \kappa_j} \hat\amf_{j+1}({\widehat\dist}(\ensuremath{\mathbf x})) \notag \\ & \le \sigma(M)(\tilde \kappa_j\hat\kappa + \sqrt{\tilde \kappa_j})\hat\amf_{j+1}({\widehat\dist}(\ensuremath{\mathbf x})) \label{eq:varphi_j2}, \end{align} where the last inequality follows from \eqref{relationhahaha}.
Combining \eqref{eq:varphi_j} and \eqref{eq:varphi_j2} concludes the induction proof. In particular, \eqref{eq:diamond_bound} is valid for $j = \ell-1$. Then, taking into account some positive rescaling and shifting (see \eqref{eq:pos_rescale}) and adjusting constants, from \eqref{eq:bound_sane}, \eqref{eq:diamond_bound} and \eqref{relationhahaha} we deduce that there exists $\kappa > 0$ such that \begin{equation*} \ensuremath{\operatorname{d}}\left(\ensuremath{\mathbf x}, ( \ensuremath{\mathcal{L}} + \ensuremath{\mathbf a}) \cap \ensuremath{\mathcal{K}}\right) \leq \kappa \hat\amf_{\ell-1}({\widehat\dist}(\ensuremath{\mathbf x})), \quad \forall \ensuremath{\mathbf x} \in B \end{equation*} with $\hat\amf_{\ell-1}$ as in \eqref{eq:hatg}. To complete the proof, we recall that ${d_{\text{PPS}}( \ensuremath{\mathcal{K}}, \ensuremath{\mathcal{L}}+\ensuremath{\mathbf a})} = \ell-1$. \end{proof} \begin{remark}[Variants of Theorem~\ref{theo:sane}] Theorem~\ref{theo:sane} is not tight and admits variants that are somewhat cumbersome to describe precisely. For example, the $\ensuremath{\mathfrak{g}}_{-\infty}$ function was not taken into account explicitly but simply ``relaxed'' to $t\mapsto \sqrt{t}$. Going for greater generality, we can also drop the SANE assumption altogether and try to be as tight as our analysis permits when dealing with possibly inSANE instances. Although there are several possibilities one must consider, the overall strategy is the same as outlined in the proof of Theorem~\ref{theo:sane}: invoke Theorem~\ref{theo:err}, fix a bounded set $B$, pick a chain of faces as in Proposition~\ref{prop:fra_poly} and upper bound the diamond composition of facial residual functions as in \eqref{eq:diamond_bound}. Intuitively, whenever sums of function compositions appear, only the ``higher'' compositions matter.
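This informal principle can be observed numerically. The toy sketch below (ours, with the simplification $\sigma(t) \equiv 1$, so that each step is $\psi(\epsilon) = \epsilon + \sqrt{\epsilon}$) iterates the diamond-style composition $\varphi_1 = \psi$, $\varphi_{j+1}(\epsilon) = \psi(\epsilon + \varphi_j(\epsilon))$ and checks that, near zero, $\varphi_j$ is dominated by a constant multiple of $\epsilon^{2^{-j}}$, exactly as in \eqref{eq:diamond_bound}.

```python
import math

# One-step function with sigma(t) fixed to 1 (simplification for illustration).
def psi(eps):
    return eps + math.sqrt(eps)

# Diamond-style composition: phi_1 = psi, phi_{j+1}(eps) = psi(eps + phi_j(eps)).
def phi(j, eps):
    out = psi(eps)
    for _ in range(j - 1):
        out = psi(eps + out)
    return out

# Near zero, phi_j(eps) is dominated by a constant times eps^(2^-j):
# the ratio phi_j(eps) / eps^(2^-j) stays bounded as eps -> 0.
for j in (1, 2, 3):
    ratios = [phi(j, 10.0 ** -p) / (10.0 ** -p) ** (2.0 ** -j)
              for p in range(1, 13)]
    assert max(ratios) < 4.0  # bounded on this grid; the bound 4 is ad hoc
```

The lower-order terms ($\epsilon$, $\sqrt{\epsilon}$, \ldots) are absorbed into the constant, mirroring the role of \eqref{relationhahaha} in the proof.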
However, the analysis must consider the possibility of $\ensuremath{\mathfrak{g}}_{-\infty}$ or $\ensuremath{\mathfrak{g}}_{\infty}$ appearing. After this is done, it is just a matter of plugging this upper bound into \eqref{eq:bound_sane}. \end{remark} We conclude this subsection with an application. In \cite{BLT17}, among other results, the authors showed that when a H\"olderian error bound holds, it is possible to derive the convergence rate of several algorithms from the exponent of the error bound. As a consequence, Theorems~\ref{thm:main_err} and \ref{theo:sane} allow us to apply some of their results (e.g., \cite[Corollary~3.8]{BLT17}) to the conic feasibility problem with exponential cones, \emph{whenever a H\"olderian error bound holds}. For non-H\"olderian error bounds appearing in Theorem~\ref{thm:main_err}, different techniques are necessary, such as the ones discussed in \cite{LL20} for deriving convergence rates under more general error bounds. \subsection{Miscellaneous odd behavior and connections to other notions}\label{sec:odd} In this final subsection, we collect several instances of pathological behavior that can be found inside the facial structure of the exponential cone. \begin{example}[H\"olderian bounds and the non-attainment of admissible exponents]\label{ex:exponents} We recall Definition~\ref{def:hold} and we consider the special case of two closed convex sets $C_1,C_2$ with non-empty intersection. We say that $\gamma \in (0,1]$ is an \emph{admissible exponent} for $C_1, C_2$ if $C_1$ and $C_2$ satisfy a uniform H\"olderian error bound with exponent $\gamma$. It turns out that the supremum of the set of admissible exponents need not itself be admissible.
In particular, if $C_1 = \ensuremath{K_{\exp}}$ and $C_2 = \ensuremath{\mathrm{span}\,} \ensuremath{{\mathcal F}}_{-\infty}$, then we see from Corollary~\ref{col:2d_hold} that $C_1 \cap C_2 = \ensuremath{{\mathcal F}}_{-\infty}$ and that $C_1$ and $C_2$ satisfy a uniform H\"olderian error bound for all $\gamma \in (0,1)$; however, in view of the sequence constructed in Remark~\ref{rem:opt}(a), the exponent cannot be chosen to be $\gamma = 1$. In fact, from Theorem~\ref{thm:main_err} and Remark~\ref{rem:opt}(a), $C_1$ and $C_2$ satisfy an entropic error bound which is tight and is, in a sense, better than any H\"olderian error bound with $\gamma \in (0,1)$ but worse than a Lipschitzian error bound. \end{example} \begin{example}[Non-H\"olderian error bound]\label{ex:non_hold} The facial structure of $\ensuremath{K_{\exp}}$ can be used to derive an example of two sets that provably do not have a H\"olderian error bound. Let $C_1 = \ensuremath{K_{\exp}}$ and $C_2 = \{\ensuremath{\mathbf z}\}^\perp$, where $z_x=z_y = 0$ and $z_z=1$ so that $C_1\cap C_2=\ensuremath{{\mathcal F}}_\infty$. Then, for every $\eta > 0$ and every $\alpha \in (0,1]$, there is no constant $\kappa > 0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{{\mathcal F}}_\infty) \leq \kappa \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\ensuremath{K_{\exp}})^\alpha, \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\{\ensuremath{\mathbf z}\}^\perp)^\alpha \}, \ensuremath{\mathbf q}uad \forall \ \ensuremath{\mathbf x} \in B(\eta). \] This is because if there were such a positive $\kappa$, the infimum in Lemma~\ref{lem:non_hold} would be positive, which it is not. This shows that $C_1$ and $C_2$ do not have a H\"olderian error bound. However, as seen in Theorem~\ref{thm:nonzerogammasec72}, $C_1$ and $C_2$ have a log-type error bound. 
In particular if $\ensuremath{\mathbf q} \in B(\eta)$, using \eqref{proj:p1}, \eqref{proj:p2} and Theorem~\ref{thm:nonzerogammasec72}, we have \begin{align} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q}, \ensuremath{{\mathcal F}}_\infty) & \leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\{\ensuremath{\mathbf z}\}^\perp) + \ensuremath{\operatorname{d}}(P_{\{\ensuremath{\mathbf z}\}^\perp}(\ensuremath{\mathbf q}),\ensuremath{{\mathcal F}}_\infty) \notag\\ & \leq \ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\{\ensuremath{\mathbf z}\}^\perp) + \max\{2,2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\ensuremath{\mathfrak{g}}_\infty (\ensuremath{\operatorname{d}}(P_{\{\ensuremath{\mathbf z}\}^\perp}(\ensuremath{\mathbf q}),\ensuremath{K_{\exp}})) \notag\\ &\leq {\widehat\dist}(\ensuremath{\mathbf q}) + \max\{2,2\gamma_{\ensuremath{\mathbf z},\eta}^{-1}\}\ensuremath{\mathfrak{g}}_\infty (2{\widehat\dist}(\ensuremath{\mathbf q})) \label{eq:non_hold}, \end{align} where ${\widehat\dist}(\ensuremath{\mathbf q}) \coloneqq \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\ensuremath{K_{\exp}}),\ensuremath{\operatorname{d}}(\ensuremath{\mathbf q},\{\ensuremath{\mathbf z}\}^\perp) \}$ and in the last inequality we used the monotonicity of $\ensuremath{\mathfrak{g}}_\infty$. \end{example} Let $C_1, \cdots, C_m$ be closed convex sets having nonempty intersection and let $C \coloneqq \cap _{i=1}^m C_i$. 
Following \cite{LL20}, we say that $\varphi : \ensuremath{\mathbb R}_+\times \ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+ $ is a \emph{consistent error bound function (CEBF)} for $C_1, \ldots, C_m$ if the following inequality holds \begin{equation*} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},\, C) \le \varphi\left(\max_{1 \le i \le m}\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x}, C_i), \, \|\ensuremath{\mathbf x}\|\right) \ \ \ \forall\ \ensuremath{\mathbf x}\in\mathcal{E}; \end{equation*} and the following technical conditions are satisfied for every $a,b\in \ensuremath{\mathbb R}_+$: $\varphi(\cdot,b)$ is monotone nondecreasing, right-continuous at $0$ and $\varphi(0,b) = 0$; $\varphi(a,\cdot)$ is monotone nondecreasing. CEBFs are a framework for expressing error bounds and can be used in the convergence analysis of algorithms for convex feasibility problems, see \cite[Sections~3 and 4]{LL20}. For example, $C_1,\ldots, C_m$ satisfy a H\"olderian error bound (Definition~\ref{def:hold}) if and only if these sets admit a CEBF of the format $\varphi(a,b) \coloneqq \rho(b)\max\{a,a^{\gamma(b)}\}$, where $\rho:\ensuremath{\mathbb R}_+ \to \ensuremath{\mathbb R}_+$ and $\gamma:\ensuremath{\mathbb R}_+ \to (0,1]$ are monotone nondecreasing functions \cite[Theorem~3.4]{LL20}. We remark that in Example~\ref{ex:non_hold}, although the sets $C_1, C_2$ do not satisfy a H\"olderian error bound, the log-type error bound displayed therein is covered under the framework of consistent error bound functions. This is because $\ensuremath{\mathfrak{g}}_\infty$ is a continuous monotone nondecreasing function and $\gamma_{\ensuremath{\mathbf z},\eta}^{-1}$ is monotone nondecreasing as a function of $\eta$ (Remark~\ref{rem:kappa}). Therefore, in view of \eqref{eq:non_hold}, the function given by $\varphi(a,b) \coloneqq a + \max\{2,2\gamma_{\ensuremath{\mathbf z},b}^{-1}\}\ensuremath{\mathfrak{g}}_\infty (2a)$ is a CEBF for $C_1$ and $C_2$.
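To make the CEBF definition concrete, the following small numerical sketch (an illustration of ours, not part of the original development) checks the defining inequality for a toy instance of our choosing: two halfspaces in $\ensuremath{\mathbb R}^2$ whose intersection is the nonnegative quadrant, for which the Lipschitzian choice $\varphi(a,b) = \sqrt{2}\,a$ works, since here $\ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C)^2 = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_1)^2 + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf x},C_2)^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sets (our assumption, not from the paper):
# C1 = {x : x1 >= 0}, C2 = {x : x2 >= 0} in R^2, C = C1 ∩ C2 (a quadrant).
def d_C1(x): return max(0.0, -x[0])
def d_C2(x): return max(0.0, -x[1])
def d_C(x):
    p = np.maximum(x, 0.0)          # projection onto the quadrant
    return np.linalg.norm(x - p)

# Candidate CEBF: phi(a, b) = sqrt(2)*a.  It is monotone nondecreasing in
# each argument, right-continuous at 0, and phi(0, b) = 0, as required.
phi = lambda a, b: np.sqrt(2) * a

ok = all(
    d_C(x) <= phi(max(d_C1(x), d_C2(x)), np.linalg.norm(x)) + 1e-12
    for x in rng.normal(size=(1000, 2))
)
print(ok)  # the CEBF inequality holds on all sampled points
```

Note that the constant $\sqrt{2}$ is needed: $\varphi(a,b) = a$ fails at, e.g., $\ensuremath{\mathbf x} = (-1,-1)$.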
In passing, it seems conceivable that many of our results in Section~\ref{sec:frf_comp} can be adapted to derive CEBFs for arbitrary convex sets. Specifically, Lemma~\ref{lem:facialresidualsbeta}, Theorem~\ref{thm:1dfacesmain}, and Lemma~\ref{lem:infratio} only rely on convexity rather than on the more specific structure of cones. Next, we will see that we can also adapt Examples~\ref{ex:exponents} and \ref{ex:non_hold} to find instances of odd behavior of the so-called \emph{Kurdyka-{\L}ojasiewicz (KL) property} \cite{BDL07,BDLS07,ABS13,ABRS10,BNPS17,LP18}. First, we recall some notation and definitions. Let $f: \ensuremath{\mathbb R}^n\to \ensuremath{\mathbb R} \cup \{+\infty\}$ be a proper closed convex extended-real-valued function. We denote by $\ensuremath{\operatorname{dom}} \partial f$ the set of points for which the subdifferential $\partial f(\ensuremath{\mathbf x})$ is non-empty and by $[a < f < b]$ the set of $\ensuremath{\mathbf x}$ such that $a < f(\ensuremath{\mathbf x}) < b$. As in \cite[Section~2.3]{BNPS17}, we define for $r_0\in (0,\infty)$ the set \begin{equation*} {\cal K}(0,r_0) := \{\phi\in C[0,r_0)\cap C^1(0,r_0)\;|\; \phi \mbox{ is concave}, \ \phi(0) = 0, \ \phi'(r) > 0\ \forall r\in (0,r_0)\}. \end{equation*} Let $B(\ensuremath{\mathbf x},\epsilon)$ denote the closed ball of radius $\epsilon > 0$ centered at $\ensuremath{\mathbf x}$. With that, we say that $f$ satisfies the KL property at $\ensuremath{\mathbf x} \in \ensuremath{\operatorname{dom}} \partial f$ if there exist $r_0 \in (0,\infty)$, $\epsilon > 0$ and $\phi \in \mathcal{K}(0,r_0)$ such that for all $\ensuremath{\mathbf y} \in B(\ensuremath{\mathbf x},\epsilon) \cap [f(\ensuremath{\mathbf x}) < f < f(\ensuremath{\mathbf x}) + r_0 ]$ we have \[ \phi'(f(\ensuremath{\mathbf y})-f(\ensuremath{\mathbf x}))\ensuremath{\operatorname{d}}(0,\partial f(\ensuremath{\mathbf y})) \geq 1.
\] In particular, as in \cite{LP18}, we say that $f$ satisfies the \emph{KL property with exponent $\alpha\in [0,1)$ at $\ensuremath{\mathbf x} \in \ensuremath{\operatorname{dom}} \partial f$}, if $\phi$ can be taken to be $\phi(t) = ct^{1-\alpha}$ for some positive constant $c$. Next, we need a result which is a corollary of \cite[Theorem~5]{BNPS17}. \begin{proposition}\label{prop:error_kl} Let $C_1, C_2 \subseteq \ensuremath{\mathbb R}^n$ be closed convex sets with $C_1 \cap C_2 \neq \emptyset$. Define $f: \ensuremath{\mathbb R}^n \to \ensuremath{\mathbb R}$ as \[ f(\ensuremath{\mathbf y}) = \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_1)^2 + \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_2)^2. \] Let $\ensuremath{\mathbf x} \in C_1\cap C_2$, $\gamma \in (0,1]$. Then, there exist $\kappa > 0$ and $\epsilon > 0 $ such that \begin{equation}\label{eq:error} \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, C_1\cap C_2) \leq \kappa \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_1),\ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_2)\}^\gamma,\ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf y} \in B(\ensuremath{\mathbf x},\epsilon) \end{equation} if and only if $f$ satisfies the KL property with exponent $1-\gamma/2$ at $\ensuremath{\mathbf x}$. \end{proposition} \begin{proof} Note that $\inf f = 0$ and $\ensuremath{\operatorname*{argmin}} f = C_1\cap C_2$. Furthermore, \eqref{eq:error} is equivalent to the existence of $\kappa' > 0$ and $\epsilon > 0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, \ensuremath{\operatorname*{argmin}} f) \leq \varphi(f(\ensuremath{\mathbf y})),\ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf y} \in B(\ensuremath{\mathbf x},\epsilon), \] where $\varphi$ is the function given by $\varphi(r) = \kappa' r^{\gamma/2}$. With that, the result follows from \cite[Theorem~5]{BNPS17}. 
\end{proof} \begin{example}[Examples in the KL world] In Example~\ref{ex:exponents}, we have two sets $C_1, C_2$ satisfying a uniform H\"olderian error bound for $\gamma \in (0,1)$ but not for $\gamma = 1$. Because $C_1$ and $C_2$ are cones and the corresponding distance functions are positively homogeneous, this implies that for $\mathbf{0} \in C_1 \cap C_2$, a Lipschitzian error bound never holds in any neighbourhood of $\mathbf{0}$. That is, given $\eta > 0$, there is no $\kappa > 0$ such that \[ \ensuremath{\operatorname{d}}(\ensuremath{\mathbf y}, C_1\cap C_2) \leq \kappa \max\{\ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_1),\ensuremath{\operatorname{d}}(\ensuremath{\mathbf y},C_2)\},\ensuremath{\mathbf q}uad \forall \ensuremath{\mathbf y} \in B(\eta) \] holds. Consequently, the function $f$ in Proposition~\ref{prop:error_kl} satisfies the KL property with exponent $\alpha$ for any $\alpha \in (1/2,1)$ at the origin, but not for $\alpha = 1/2$. To the best of our knowledge, this is the first explicitly constructed function in the literature such that the infimum of KL exponents at a point is not itself a KL exponent. Similarly, from Example~\ref{ex:non_hold} we obtain $C_1,C_2$ for which \eqref{eq:error} does not hold for $\mathbf{0} \in C_1\cap C_2$ with any chosen $\kappa,\varepsilon>0,\;\gamma \in \left(0,1 \right]$. Thus from Proposition~\ref{prop:error_kl} we obtain a function $f$ that does not satisfy the KL property with exponent $\beta \in [1/2,1)$ at the origin. Since a function satisfying the KL property with exponent $\alpha\in [0,1)$ at an $\ensuremath{\mathbf x}\in \ensuremath{\operatorname{dom}} \partial f$ necessarily satisfies it with exponent $\beta$ for any $\beta \in [\alpha,1)$ at $\ensuremath{\mathbf x}$, we see that this $f$ does not satisfy the KL property with any exponent at the origin.
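For contrast with these pathological instances, Proposition~\ref{prop:error_kl} can be checked numerically on a well-behaved instance. The sketch below (our own toy example, not from the text: two lines through the origin in $\ensuremath{\mathbb R}^2$, where a Lipschitzian error bound holds) verifies that $\|\nabla f(\ensuremath{\mathbf y})\| \geq c\sqrt{f(\ensuremath{\mathbf y})}$ near the origin, which is the KL property with exponent $1/2$ for $\phi(t) = (2/c)\sqrt{t}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative instance (our assumption): C1 = x-axis, C2 = {y = x} in R^2,
# two subspaces meeting only at the origin, so a Lipschitzian (gamma = 1)
# error bound holds and f below should have KL exponent 1/2 at the origin.
P1 = lambda y: np.array([y[0], 0.0])                       # projection onto C1
P2 = lambda y: 0.5 * (y[0] + y[1]) * np.array([1.0, 1.0])  # projection onto C2

def f(y):       # f(y) = d(y, C1)^2 + d(y, C2)^2
    return np.linalg.norm(y - P1(y))**2 + np.linalg.norm(y - P2(y))**2

def grad_f(y):  # gradient of the squared distances: 2*(y - P(y)) each
    return 2*(y - P1(y)) + 2*(y - P2(y))

# KL with exponent 1/2 near 0 means ||grad f(y)|| >= c*sqrt(f(y)), c > 0.
ratios = []
for _ in range(2000):
    y = rng.normal(size=2) * 10.0**rng.uniform(-6, 0)  # points near the origin
    if f(y) > 0:
        ratios.append(np.linalg.norm(grad_f(y)) / np.sqrt(f(y)))
print(min(ratios) > 0.1)  # the ratio stays bounded away from zero
```

Here $f$ is a positive definite quadratic, so the ratio is bounded below by twice the square root of its smallest eigenvalue.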
In passing, we would like to point out that there are functions known in the literature that fail to satisfy the KL property; e.g., \cite[Example~1]{BDLS07}. \end{example} \section{Concluding remarks}\label{sec:conclusion} In this work, we presented an extension of the results of \cite{L17} and showed how to obtain error bounds for conic linear systems using {{one-step facial residual function}}s and facial reduction (Theorem~\ref{theo:err}) even when the underlying cone is not amenable. Related to facial residual functions, we also developed techniques that aid in their computation; see Section~\ref{sec:frf_comp}. Finally, all techniques and results developed in Section~\ref{sec:frf} were used in some shape or form in order to obtain error bounds for the exponential cone in Section~\ref{sec:exp_cone}. Our new framework unlocks analysis for cones not reachable with the techniques developed in \cite{L17}; these include cones that are not facially exposed, as well as cones for which the projection operator has no simple closed form or is only implicitly specified. These were, until now, significant barriers against error bound analysis for many cones of interest. As future work, we are planning to use the techniques developed in this paper to analyze and obtain error bounds for some of these other cones that have been previously unapproachable. Potential examples include the cone of $n\times n$ completely positive matrices and its dual, the cone of $n\times n$ copositive matrices. The former is not facially exposed when $n\geq 5$ (see \cite{Zh18}) and the latter is not facially exposed when $n \geq 2$. It would be interesting to clarify how far error bound problems for these cones can be tackled by our framework. Or, more ambitiously, we could try to obtain some of the facial residual functions and some error bound results.
Of course, a significant challenge is that their facial structure is not completely understood, but we believe that even partial results for general $n$ or complete results for specific values of $n$ would be relevant and, possibly, quite non-trivial. Finally, as suggested by one of the reviewers, our framework may be enriched by investigating further geometric interpretations of the key quantity $\gamma_{\ensuremath{\mathbf z},\eta}$ in \eqref{gammabetaeta}, beyond Figure~\ref{fig:uvw}. For instance, it will be interesting to see whether the positivity of $\gamma_{\ensuremath{\mathbf z},\eta}$ is related to some generalization of the angle condition in \cite{YangNg02}, which was originally proposed for the study of Lipschitz error bounds. \end{document}
\begin{document} \title{Entanglement and communication-reducing properties of noisy $N$-qubit states} \author{Wies{\l}aw Laskowski} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, PL-80-952 Gda\'nsk, Poland} \affiliation{Fakult\"at f\"ur Physik, Ludwig-Maximilians Universit\"at M\"unchen, D-80799 M\"unchen, Germany} \affiliation{Max-Planck Institut f\"ur Quantenoptik, D-85748 Garching, Germany} \author{Tomasz Paterek} \affiliation{Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, A-1090 Vienna, Austria} \affiliation{Centre for Quantum Technologies and Department of Physics, National University of Singapore, 117542 Singapore} \author{{\v C}aslav Brukner} \affiliation{Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, A-1090 Vienna, Austria} \affiliation{Faculty of Physics, University of Vienna, A-1090 Vienna, Austria} \author{Marek \.Zukowski} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, PL-80-952 Gda\'nsk, Poland} \begin{abstract} We consider properties of states of many qubits which arise after sending certain entangled states via various noisy channels (white noise, coloured noise, local depolarization, dephasing and amplitude damping). We study the entanglement of these states and their ability to violate certain classes of Bell inequalities. States which violate them allow for higher than classical efficiency of solving related distributed computational tasks with constrained communication. This is a direct property of such states -- not requiring their further modification via stochastic local operations and classical communication such as entanglement purification or distillation procedures. We identify novel families of multi-particle states which are entangled but nevertheless allow a local realistic description of specific Bell experiments.
For some of them, the ``gap'' between the critical values for entanglement and violation of a Bell inequality remains finite even in the limit of infinitely many qubits. \end{abstract} \pacs{03.65.Ud, 03.67.-a} \date{\today} \maketitle \section{Introduction} Despite considerable progress in understanding entanglement, the question of whether every entangled state fails to admit a local realistic simulation is as yet unanswered. Bell has shown that certain pure entangled states violate constraints imposed by local hidden variable models~\cite{BELL}. Bell's result was generalized by Gisin and Peres who demonstrated the violation for all bipartite pure entangled states~\cite{GISIN,GISINPERES}. Popescu and Rohrlich showed that no local realistic description is possible for any pure multipartite entangled state; the proof involved post-selection~\cite{POPESCU}. Without post-selection, it is not clear whether there are pure entangled states which admit a local realistic model for all possible measurements. Bell experiments with two settings per observer in which only correlation functions are measured indeed admit a local hidden-variable explanation even for some pure entangled states \cite{GEN_GHZ1,GEN_GHZ2}. For mixed states, this relation is even subtler. Werner states are an example of bipartite entangled mixed states which allow a local realistic model for direct measurements \cite{WERNER,BARRETT}. Almeida {\it et al.} recently found that the range of entanglement admixture for which the state of two $d$-level systems is both entangled and admits a local hidden-variable model for all measurements decreases proportionally to $\log(d)/d$~\cite{ALMEIDA}. Also some genuinely tripartite entangled mixed states can admit a hidden variable description for all measurements \cite{TA}. It was shown that entangled states upon sequential local measurements may be transformed into ones that do not allow a local realistic description \cite{HN1,HN2,HN3,HN4}.
Note, however, that this is not a {\em direct} property of such states; only the final states which result from such transformations are endowed with it. The relation between entanglement and local realism for multipartite mixed states is still largely unexplored. Our work addresses this problem. This relation is not only of importance for fundamental research, but also in the context of quantum communication and quantum computation. For certain tasks, such as quantum communication complexity problems~\cite{QCCP1,QCCP2} or device-independent quantum key distribution~\cite{KEY_DIST1,KEY_DIST2}, entangled states are useful only to the extent that they violate Bell inequalities. Furthermore, entangled states which violate certain Bell inequalities, but satisfy other ones, are useful for particular quantum communication complexity problems directly related with the violated inequalities (for details of the link between the inequalities and communication complexity problems see \cite{QCCP2}). In such problems, several partners have disjoint sets of data and, under a strict communication constraint (e.g., one bit per partner), are asked to give the value of a task function which depends on all the data. The amount of violation of a Bell inequality for correlation functions related to the problem is proportional to the increase in the probability of getting the correct value of the task function, which quantum protocols involving the entangled states allow in comparison with the optimal classical protocol. Note that often additional post-processing of experimental data requires additional classical communication and therefore increases the communication complexity of quantum protocols. Therefore, states which violate certain Bell inequalities after sequential measurements or post-selection are usually less efficient in terms of communication complexity reduction than states which violate the inequalities directly.
It is thus important to make both classifications of entangled states: into admitting and not admitting local realistic models, and into violating and not violating a given Bell inequality. Entanglement and Bell violation of different noisy states have already been studied by several authors \cite{ANTIA2005,SU,JCKL2006,LEANDRO,MAURO}. All this indicates that entanglement and impossibility of a local hidden variable model are not only different concepts, but also truly different resources \cite{SCARANI2}. Our aim is to identify a class of states that demonstrates this difference in a striking way. We consider states of $N$ two-level systems resulting from sending different entangled states via noisy channels. Noisy states are of special importance as they take into account errors inevitable in any laboratory. More specifically, we study white noise admixture, which is often used to model imperfections of setups involving a single crystal in which spontaneous parametric down-conversion takes place. We also consider colored noise admixture, which was shown to be appropriate, e.g., in the description of states generated in multiple entanglement swapping \cite{ADITI}. Typical noisy channels (depolarization, dephasing, amplitude damping) which act independently on every qubit are also studied. They find applications in modeling random environments and dissipative processes \cite{NIELSENCHUANG}. Here we find states for which even an infinitesimally small admixture of an infinitesimally weakly entangled state results in a non-separable state, while to violate standard Bell inequalities (with two settings per party) the admixture has to scale at least as $1/\sqrt{d}$, where $d$ is the dimension of $N$ qubits, i.e. $d=2^N$. This shows a remarkable ``gap'' between the critical parameters for entanglement and for violation of standard Bell inequalities.
We observe that keeping the same amount of noise, but changing the type of noise, drastically changes the entanglement and communication-reducing properties of the states, i.e., whether the states allow for higher than classical reduction of communication complexity. Furthermore, we find mixed states for which this gap remains finite even in the limit of infinitely many qubits. \section{Toolbox} Our tools consist of an entanglement criterion in terms of correlation functions~\cite{BADZIAG}, which will prove handy for comparison with conditions for violation of Bell inequalities. We shall take into account sets of Bell inequalities for two and more measurement settings \cite{WZ,WW,ZB,WUZONG1,WUZONG2,LPZB}. We now describe these tools in more detail. An arbitrary state of many qubits can be decomposed as: \begin{equation} \rho = \frac{1}{2^N} \sum_{\mu_1,...,\mu_N=0}^3 T_{\mu_1...\mu_N} \sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N}, \label{STATE} \end{equation} where $\sigma_{\mu_n} \in \{\openone,\sigma_x,\sigma_y,\sigma_z\}$ is the $\mu_n$th local Pauli operator of the $n$th party ($\sigma_0= \openone$) and $T_{\mu_1...\mu_N} \in [-1,1]$ are the components of the (real) extended correlation tensor $\hat T$. They are the expectation values $T_{\mu_1...\mu_N} = \mbox{Tr}[\rho (\sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N})]$. Thus, the description in terms of the correlation tensor is equivalent to the description in terms of the density operator. Fully separable states are endowed with a fully separable extended correlation tensor, $\hat T^{\mathrm{sep}} = \sum_{i} p_i \hat T^{\mathrm{prod}}_i$, where $\hat T^{\mathrm{prod}}_i = \hat T^{(1)}_i \otimes ... \otimes \hat T^{(N)}_i$ and each $\hat T^{(n)}_i$ describes a pure one-qubit state.
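As a quick sanity check of the decomposition (\ref{STATE}), the following sketch (our own illustration; the Bell state is an arbitrary choice of example) computes the extended correlation tensor of a two-qubit state and reconstructs the density operator from it:

```python
import numpy as np
from itertools import product

# Pauli basis (sigma_0 = identity)
s = [np.eye(2),
     np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]

# Example state (our choice): the Bell state |phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Extended correlation tensor T_{mu1 mu2} = Tr[rho (sigma_mu1 x sigma_mu2)]
T = np.array([[np.trace(rho @ np.kron(s[m], s[n])).real
               for n in range(4)] for m in range(4)])

# Reconstruct rho from T via the decomposition with N = 2
rho_rec = sum(T[m, n] * np.kron(s[m], s[n])
              for m, n in product(range(4), repeat=2)) / 4

print(np.allclose(rho, rho_rec))                    # decomposition reproduces rho
print(np.allclose(np.diag(T[1:, 1:]), [1, -1, 1]))  # T_xx = 1, T_yy = -1, T_zz = 1
```

The second check shows the Schmidt-form diagonal values $\pm 1$ used later for this state.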
A state $\rho$, with correlation tensor $\hat T$, is entangled if there exists a $G$ such that \cite{BADZIAG}: \begin{equation} \max_{\hat T^{\mathrm{prod}}} (\hat T, \hat T^{\mathrm{prod}})_G < (\hat T, \hat T)_G = ||\hat T||^2_{G}, \label{CRITERION} \end{equation} where maximization is taken over all product states and $(.,.)_G$ denotes a generalized scalar product, with a positive semidefinite metric $G$. We focus on diagonal $G$'s, for which the scalar product is given by \begin{equation} (\hat T, \hat T')_G = \sum_{\mu_1,...,\mu_N=0}^3 T_{\mu_1...\mu_N} G_{\mu_1...\mu_N} T'_{\mu_1...\mu_N}. \label{GEN-SCALAR} \end{equation} The criterion is valid also when the sums of (\ref{GEN-SCALAR}) run through the values $j_n = 1,2,3$, which will be often referred to as $x,y,z$. We compare this entanglement criterion with criteria for violation of Bell inequalities. It was shown that a simple sufficient condition for existence of a local realistic description of the correlation functions obtained in any Bell experiment with two measurement settings per observer has the following form \cite{ZB}: \begin{equation} \mathcal{C} \equiv \max \sum_{j_1,...,j_N = 1}^2 T_{j_1...j_N}^2 \le 1, \label{ZBCOND} \end{equation} where maximization is taken over all possible independent choices of local planes in which the two settings lie. This condition is necessary and sufficient in the case of two qubits \cite{HORODECKIS_BELL_NS}. We shall also use another necessary and sufficient condition, this time for violation of a set of tight Bell inequalities with many measurement settings per observer \cite{LPZB}. 
For the case of $N+1$ observers, all of which but the last one choose between four settings, and the last one between two settings, this condition reads \begin{equation} \mathcal{D} \equiv \max \sum_{j_1(k),...,j_N(k)=1}^{2} \sum_{k=1}^2 T_{j_1(k)...j_{N}(k)k}^2 \le 1, \label{ZBCOND1} \end{equation} where maximization is over all possible independent choices of local Cartesian frame basis vectors used by the observers to fix the measurement directions determining the correlation tensor components. That is, we allow each observer to define his/her triad of orthogonal basis directions, which define the correlation tensor components. This condition is more demanding than~(\ref{ZBCOND}) because the coordinate systems denoted by the indices $j_1(1)$, ..., $j_N(1)$ do not have to be the same as $j_1(2)$, ..., $j_N(2).$ It is necessary for the existence of a local realistic model \cite{LPZB}, or equivalently, its violation is sufficient for the non-existence of such models. \section{Noises} The states to be studied here are of two general types: (i) Mixtures of an entangled state $\rho$ and white or colored noise $\rho_{\mathrm{noise}}$: \begin{equation} \rho(\Upsilon) = \Upsilon \rho + (1-\Upsilon) \rho_{\mathrm{noise}}, \label{NOISY_STATE} \end{equation} where $\Upsilon$ is the entanglement admixture. (ii) States arising from local noisy channels \cite{NIELSENCHUANG}, i.e. of the form \begin{equation} (\mathcal{E} \otimes ... \otimes \mathcal{E})(\rho) = \frac{1}{2^N} \!\!\!\! \sum_{\mu_1,...,\mu_N=0}^3 \!\!\!\! T_{\mu_1...\mu_N} \mathcal{E}(\sigma_{\mu_1}) \otimes ... \otimes \mathcal{E}(\sigma_{\mu_N}), \label{LOCAL_NOISE} \end{equation} where $\mathcal{E}$ is a map describing depolarization, dephasing or amplitude damping of a single qubit. According to (\ref{LOCAL_NOISE}), such noises are fully described by their action on local Pauli operators.
Each type of noise considered is parameterized with a single variable (either the entanglement admixture $\Upsilon$ or the strength of local decoherence) and therefore the resulting states are also characterized by this variable. We choose the parameters such that the value `$1$' corresponds to no noise, whereas the value `$0$' corresponds to total noise which immediately destroys all initial entanglement. Using the described separability criterion we determine the threshold value of the parameter, above which the resulting state is entangled. Next, using the conditions (\ref{ZBCOND}) and (\ref{ZBCOND1}) for the described Bell inequalities we find the maximal parameter below which the state does not violate them. Finally, we contrast these two critical values. We summarize our results in Table \ref{TAB_SUMMARY}. \subsection{White noise} ``White noise'' is represented by the totally mixed state $\rho_{\mathrm{noise}} = \tfrac{1}{2^N} \openone$, where $N$ gives the number of qubits. A channel introducing the white noise to a system is the globally depolarizing channel: \begin{equation} \mathcal{E}_{\Upsilon}(\rho) = \Upsilon \rho + (1-\Upsilon) \tfrac{1}{2^N} \openone. \end{equation} Therefore, the correlation tensor of the state after the globally depolarizing channel, $\hat T'$, is related to that of the initial state by the admixture parameter, $\hat T' = \Upsilon \hat T$. The operator-sum representation \begin{equation} \mathcal{E}_{\Upsilon}(\rho) = \Upsilon \rho + \frac{1-\Upsilon}{2^{2N}} \sum_{\mu_1, ..., \mu_N = 0}^{3} \!\!\!\! \sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N} \rho \sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N} \end{equation} reveals that the white noise admixture acts in a correlated way on all the qubits. \subsection{Colored noise} We will consider colored noise represented by the product state $\rho_{\mathrm{noise}} = | 0 \rangle \langle 0 | \otimes ... \otimes | 0 \rangle \langle 0 |$, where $| 0 \rangle$ is the eigenstate of the local $\sigma_z$ Pauli operator.
Such noise brings perfect correlations along the $z$ directions to the system. \subsection{Local depolarization} In many cases, noise affects every qubit independently. For example, local depolarization can be caused by a random environment acting autonomously on each subsystem. The local depolarization is defined for a single qubit in the familiar way \begin{equation} \mathcal{E}_p(\rho) = p \rho + (1-p) \tfrac{1}{2} \openone, \end{equation} i.e. it mixes the local state with white noise, where $p$ describes the fraction of the initial state still present after the decoherence. To see the effect of local depolarization on many qubits we find its effect on local Pauli operators \begin{eqnarray} \mathcal{E}_p(\openone) = \openone, & \qquad & \mathcal{E}_p(\sigma_x) = p \sigma_x, \nonumber \\ \mathcal{E}_p(\sigma_y) = p \sigma_y, & \qquad & \mathcal{E}_p(\sigma_z) = p \sigma_z, \end{eqnarray} and follow formula (\ref{LOCAL_NOISE}). In general, the critical values arising from local depolarization and white noise can be different. However, in our case the critical parameters turn out to be the same due to the structure of the violation conditions for the Bell inequalities and the form of the entanglement criterion we use. Since these conditions involve only $N$-party correlations, local depolarization introduces a factor of $p^N$ to the elements of the correlation tensor entering them, while white noise admixed to the system introduces a factor of $\Upsilon$. Therefore, the critical values obtained using $p^N$ and $\Upsilon$ are equal, $p_{\mathrm{cr}}^N = \Upsilon_{\mathrm{cr}}$, independently of the state for which they are computed (the numerical value can of course vary from state to state). \subsection{Dephasing} Local depolarization describes gradual loss of coherence in all bases. It may happen that coherence is lost in a preferred basis.
This type of noise is described by a dephasing channel defined by its action on local Pauli operators: \begin{eqnarray} \mathcal{E}_{\lambda}(\openone) = \openone, & \qquad & \mathcal{E}_{\lambda}(\sigma_x) = \sqrt{\lambda} \sigma_x, \nonumber \\ \mathcal{E}_{\lambda}(\sigma_y) = \sqrt{\lambda} \sigma_y, & \qquad & \mathcal{E}_{\lambda}(\sigma_z) = \sigma_z, \end{eqnarray} where the $\sigma_z$ basis is chosen to be preferred by decoherence, and $\lambda$ describes its strength. Clearly, for $\lambda = 1$ the initial state is unchanged and for $\lambda = 0$ the final state has only classical correlations along the local $z$ directions. \subsection{Amplitude damping} The amplitude damping channel is used to describe energy dissipation from a quantum system. Under amplitude damping, a system has a finite probability, $1-\gamma$, to lose an excitation (so that $\gamma = 1$ corresponds to no damping, in line with our convention). In terms of local Pauli operators this channel is described as \begin{eqnarray} \mathcal{E}_{\gamma}(\openone) = \openone + (1-\gamma) \sigma_z, & \qquad & \mathcal{E}_{\gamma}(\sigma_x) = \sqrt{\gamma} \sigma_x, \nonumber \\ \mathcal{E}_{\gamma}(\sigma_y) = \sqrt{\gamma} \sigma_y, & \qquad & \mathcal{E}_{\gamma}(\sigma_z) = \gamma \sigma_z. \end{eqnarray} Note that the components of the correlation tensor of a state after amplitude damping which contain the $z$ indices are given by the sums of initial correlation tensor components with both $z$ indices and zero indices, e.g. \begin{equation} T_{\underbrace{z...z}_{k}0...0}' = \!\!\!\! \sum_{l_1,..,l_k = \{0,3\}} \!\!\!\! \!\!\!\! T_{l_1...l_k0...0} (1-\gamma)^{n_0} \gamma^{n_3}, \end{equation} where $n_{0} \equiv \sum_{j=1}^k \delta_{l_j,0}$ gives the number of indices $l_1,\dots, l_k$ equal to $0$, and similarly $n_{3} \equiv \sum_{j=1}^k \delta_{l_j,3} = k-n_0$ denotes the number of indices equal to $3$. Having described the noises of interest, we move to studies of their influence on certain classes of initially entangled states.
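The channel actions listed above can be verified from an operator-sum (Kraus) representation. Below is a sketch of ours for amplitude damping, assuming the convention used here that $\gamma = 1$ means no noise, i.e. that the excitation-loss probability is $1-\gamma$:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Kraus operators for amplitude damping; with the paper's convention
# gamma = 1 is no noise, so the loss probability is 1 - gamma (our reading).
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(gamma)]])
K1 = np.array([[0, np.sqrt(1 - gamma)], [0, 0]])

E = lambda A: K0 @ A @ K0.conj().T + K1 @ A @ K1.conj().T

print(np.allclose(E(I2), I2 + (1 - gamma) * sz))  # E(1)  = 1 + (1-gamma) sz
print(np.allclose(E(sx), np.sqrt(gamma) * sx))    # E(sx) = sqrt(gamma) sx
print(np.allclose(E(sy), np.sqrt(gamma) * sy))    # E(sy) = sqrt(gamma) sy
print(np.allclose(E(sz), gamma * sz))             # E(sz) = gamma sz
```

The same pattern (with the appropriate Kraus operators) confirms the dephasing and local depolarization actions.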
\section{Noisy states} We begin with the Bell state of two qubits and mix it with white and colored noise. The state with white noise is the Werner state, known to admit a local hidden variable model for certain admixtures despite being entangled. Interestingly, the states with colored noise, which are maximally entangled mixed states \cite{MEMS1, MEMS2}, will be shown to be entangled yet not to violate standard Bell inequalities in an even larger range of mixing. We then show similar results for the GHZ states and generalized GHZ states. For some of them, the critical admixture of entanglement below which the state admits a local hidden variable model scales polynomially with the dimension of the system, and the mixed state is entangled already for an infinitesimally small admixture of infinitesimal entanglement. Next, we discuss noisy states arising from independent local decoherence. We start with generalized GHZ states as initial states and show that even in the limit of infinitely many qubits there is still a finite gap between the critical parameter for entanglement and the one for violation of Bell inequalities. We show similar results when the initial state is a W state. Roughly speaking, a simple application of the entanglement criterion detects entanglement at least quadratically better than the Bell inequalities, i.e. the critical value for entanglement is at most equal to the square of the critical value to satisfy the Bell inequalities. \subsection{Bell state} \subsubsection{White noise} We first rederive known results for a Werner state of two qubits with our tools. It is a mixture of a maximally entangled state $\rho = |\phi^+ \rangle \langle \phi^+|$ and white noise $\rho_{\mathrm{noise}} = \frac{1}{4} \openone$, where $|\phi^+ \rangle = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})$ and $\ket{0}$ ($\ket{1}$) denotes the eigenstate of the $\sigma_z$ operator with eigenvalue $+1$ ($-1$).
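The entanglement threshold of this family, derived below from criterion (\ref{CRITERION}), can also be checked independently via the Peres--Horodecki partial-transposition test. A minimal numerical sketch (illustrative only, not part of the original analysis):

```python
import numpy as np

def werner(v):
    """Two-qubit Werner state: v*|phi+><phi+| + (1-v)*I/4."""
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    return v * np.outer(phi, phi) + (1 - v) * np.eye(4) / 4

def min_pt_eig(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit;
    a negative value certifies entanglement (Peres-Horodecki criterion)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

print(min_pt_eig(werner(1 / 3)))      # ~0: the boundary of separability
print(min_pt_eig(werner(0.34)) < 0)   # True: entangled
print(min_pt_eig(werner(0.32)) < 0)   # False: separable
```

The smallest partial-transpose eigenvalue works out to $(1-3\Upsilon)/4$, negative exactly for $\Upsilon > \frac{1}{3}$, matching the threshold discussed next.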
The family of Werner states is an archetypal example of a family that contains states which do not violate Bell inequalities despite being entangled. Since the white noise state exhibits no correlations, the correlation tensor components $T_{j_1j_2}'$ of the Werner state are related to the components $T_{j_1j_2}$ of $|\phi^+ \rangle$ by the admixture factor, $T_{j_1j_2}' = \Upsilon T_{j_1j_2}$. The only non-vanishing correlation tensor elements of maximally entangled states lie on the diagonal and are equal to $\pm 1$ (this is so when the two-particle correlation tensor is put in a Schmidt form). If one chooses to sum over $j_n = 1,2,3$ in the scalar products of criterion (\ref{CRITERION}), the left-hand side is given by the maximal Schmidt coefficient of the correlation tensor. For the Werner state it equals $\Upsilon$. The right-hand side reads $3 \Upsilon^2$. Thus, the criterion reveals entanglement for all the states of the family, i.e., for $\Upsilon_{\mathrm{ent}} > \frac{1}{3}$. On the other hand, the necessary and sufficient condition for a local realistic model, in the case of a standard two-settings-per-partner Bell experiment (\ref{ZBCOND}), is satisfied for $\Upsilon_{\mathrm{lr}} \le \frac{1}{\sqrt{2}}$. Thus, for a considerable range of $\Upsilon \in (\frac{1}{3},\frac{1}{\sqrt{2}}]$ the state is entangled, yet Bell experiments involving standard inequalities have a local realistic explanation. One could call this range of $\Upsilon$ a ``Werner gap''. \subsubsection{Colored noise} Interestingly, changing the type of noise from white to colored influences both the entanglement of the state and the possibility of a local realistic model. We have investigated critical admixtures of different types of noise above which condition (\ref{ZBCOND}) is satisfied, and summarize them in Table \ref{phi}.
Although changing the type of colored noise alone does not change the entanglement threshold of the state, it dramatically influences its communication-reducing properties. The splitting of this table into different rows is motivated by different relations between the correlations present in the noise and in the entangled state $| \phi^+ \rangle$. In the first row, the white noise has no correlations. In the second row, the noises have some of the correlations of the $| \phi^+ \rangle$ state. Therefore, for all $\Upsilon > 0$ there are perfect correlations in the system (in the basis of the states of the noise) and additionally at least some correlations in complementary measurement directions. This explains the violation of a two-setting Bell inequality \cite{ESSENCE}. In the third row, the noise has exactly opposite correlations to those present in the $| \phi^+ \rangle$ state. In the last row, the noises have correlations of a different character than those of the entangled state. \begin{table} \begin{center} \begin{tabular}{c | c | c} \hline \hline Type of noise & Entanglement & Comm.
reduction \\ \hline \hline $\openone \otimes \openone$ & $\Upsilon> \frac{1}{3}$& $\Upsilon>\frac{1}{\sqrt{2}} = 0.70711$\\ \hline $\ket{\pm}{}_z {}_z\bra{\pm} \otimes \ket{\pm}{}_z {}_z\bra{\pm} $ & $\Upsilon>0$ & $\Upsilon>0$ \\ $\ket{\pm}{}_y {}_y\bra{\pm} \otimes \ket{\mp}{}_y {}_y\bra{\mp} $ & & \\ $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\pm}{}_x {}_x\bra{\pm} $ & & \\ \hline $\ket{\pm}{}_z {}_z\bra{\pm} \otimes \ket{\mp}{}_z {}_z\bra{\mp} $ & $\Upsilon>0$ & $\Upsilon>\frac{1}{\sqrt{2}} = 0.70711$ \\ $\ket{\pm}{}_y {}_y\bra{\pm} \otimes \ket{\pm}{}_y {}_y\bra{\pm} $ & & \\ $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\mp}{}_x {}_x\bra{\mp} $ & & \\ \hline $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\pm}{}_y {}_y\bra{\pm} $ & $\Upsilon>0$ & $\Upsilon>0.56731$ \\ $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\mp}{}_y {}_y\bra{\mp} $ & & \\ $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\pm}{}_z {}_z\bra{\pm} $ & & \\ $\ket{\pm}{}_x {}_x\bra{\pm} \otimes \ket{\mp}{}_z {}_z\bra{\mp} $ & & \\ $\ket{\pm}{}_y {}_y\bra{\pm} \otimes \ket{\pm}{}_z {}_z\bra{\pm} $ & & \\ $\ket{\pm}{}_y {}_y\bra{\pm} \otimes \ket{\mp}{}_z {}_z\bra{\mp} $ & & \\ \hline \hline \end{tabular} \end{center} \caption{The table presents the critical value of the entanglement admixture above which the two-qubit state $\Upsilon | \phi_+ \rangle \langle \phi_+ | + (1-\Upsilon) \rho_{\rm{noise}}$ is entangled (middle column) and allows reduction of communication complexity, i.e. violates standard Bell inequalities (right column), for different types of noise (left column). In the left column, $\openone \otimes \openone$ denotes the white noise and e.g.
$\ket{\pm}{}_k {}_k\bra{\pm} \otimes \ket{\mp}{}_l {}_l\bra{\mp}$ denotes the colored noise which is a product state of either $\ket{+}{}_k {}_k\bra{+} \otimes \ket{-}{}_l {}_l\bra{-}$ or $\ket{-}{}_k {}_k\bra{-} \otimes \ket{+}{}_l {}_l\bra{+}$, where $\ket{\pm}{}_k$ is the eigenstate of the Pauli $\sigma_k$ operator with eigenvalue $\pm 1$ (either the upper signs enter the states of the noise or the lower signs).} \label{phi} \end{table} The Werner states are not the ones with the largest possible gap. For example, if one admixes the colored noise $\rho_{\mathrm{noise}} = | \pm \rangle_z {}_z\langle \pm | \otimes | \mp \rangle_z {}_z \langle \mp |$ to the $|\phi^+ \rangle$ Bell state, the resulting state is entangled already for an infinitesimally small value of $\Upsilon$, and it satisfies condition (\ref{ZBCOND}) for all $\Upsilon_{\mathrm{lr}} \le \frac{1}{\sqrt{2}}$. Thus, the range of $\Upsilon$ for which the state admits a local realistic model for standard correlation Bell experiments and is still entangled is much larger than for the Werner state. Moreover, this is the maximal possible range (no other choice of state and dichotomic measurements would give a bigger Werner gap) because the critical value $\Upsilon_{\mathrm{lr}} = \frac{1}{\sqrt{2}}$ corresponds to the maximal violation of local realism \cite{TSIRELSON}. We note that such mixed states are known to be maximally entangled mixed states \cite{MEMS1,MEMS2}. \subsection{GHZ state} \subsubsection{White noise} The presented tools allow us to construct and investigate entangled states of multiple qubits, with a non-zero Werner gap, in a systematic way. We first consider the Werner-like states of $N$ qubits which are mixtures of the GHZ state $| \mathrm{GHZ}_N \rangle = \frac{1}{\sqrt{2}}(\ket{0 \dots 0} + \ket{1 \dots 1})$ and the white noise. Using criterion (\ref{CRITERION}) one finds $\Upsilon_{\mathrm{ent}} = 1/(2^{N-1} + 1)$ for the critical admixture above which the state is entangled \cite{BADZIAG,PR}.
The critical value for violation of a complete set of standard Bell inequalities for correlation functions equals $\Upsilon_{\mathrm{lr}} = 1/\sqrt{2^{N-1}}$ (see \cite{ZB}). Therefore, for $\Upsilon \in (\frac{1}{2^{N-1}+1},\frac{1}{\sqrt{2^{N-1}}}]$ the state is entangled but all two-setting correlation Bell experiments admit a local realistic model. Also the multisetting inequalities of Ref. \cite{LPZB} are all satisfied in this range. To illustrate how the range of the Werner gap can depend on the Bell inequality, we consider inequalities of Refs. \cite{ZUK,NLP}. If one considers all possible settings, restricted to one measurement plane on the Bloch sphere for each observer, the critical value for violation of local realism changes to $\Upsilon_{\mathrm{lr}}^{\infty} = 2 (2/\pi)^N$, see \cite{ZUK}, which decreases the Werner gap. This result is a limiting case for inequalities involving $M$ settings per party studied in \cite{NLP}. These inequalities involve measurement settings (again in a specific plane for each observer) evenly spaced on the Bloch sphere. One has $\Upsilon_{\mathrm{lr}}^{\infty} = \lim_{M \to \infty} \Upsilon_{\mathrm{lr}}^{M}$ \cite{NLP}, with the obvious notation. One may ask for how many settings the critical entanglement admixtures for violation of local realism with a finite and a continuous number of settings are already very close. For larger $M$, one finds using a Taylor series that $\Upsilon_{\mathrm{lr}}^{M} = \Upsilon_{\mathrm{lr}}^{\infty}[1 + \frac{\pi^2}{24} \frac{N-3}{M^2} + \mathrm{O}(\frac{N^2}{M^4})]$. If one neglects all the small terms of $\mathrm{O}(\frac{N^2}{M^4})$, the relative error $\epsilon = (\Upsilon_{\mathrm{lr}}^{M}-\Upsilon_{\mathrm{lr}}^{\infty})/\Upsilon_{\mathrm{lr}}^{\infty}$ is given by $\epsilon \approx 4 \pi^2 \frac{N-3}{M^2}\%$. Thus, for $M = N$ the two critical admixtures are close even for a few particles ($\epsilon$ smaller than $4\%$ for all $N \ge 4$).
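The error estimate above is easy to evaluate directly. A short numerical sketch (illustrative only; the function name is ours, and the $\mathrm{O}(N^2/M^4)$ terms are dropped, as in the text):

```python
import math

def rel_error_percent(N, M):
    """Relative error (in percent) between the M-setting and
    continuous-settings critical admixtures, from the Taylor
    expansion in the text with O(N^2/M^4) terms neglected."""
    return 100 * (math.pi ** 2 / 24) * (N - 3) / M ** 2

# For M = N the error stays below 4% for every N >= 4:
errs = {N: rel_error_percent(N, N) for N in range(4, 50)}
print(max(errs.values()))  # ~3.43, attained at N = 6
```

The bound $(N-3)/N^2$ is maximized at $N = 6$, where the error is about $3.4\%$, consistent with the claim in the text.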
\subsubsection{Colored noise} Similarly to the case of the Bell states, also for the GHZ state the range of the Werner gap depends on the type of admixed noise. For the odd-$N$ GHZ states the correlations $T_{z...z}$ vanish, and it is interesting to consider the colored noise $\rho_{\mathrm{noise}} = | 0 \rangle \langle 0 |^{\otimes N}$ which re-introduces the missing correlations. The full correlation tensor of $\rho(\Upsilon)$, i.e., the one covering ``Greek'' indices from $0$ to $3$, has the following non-vanishing components: $T_{z...z} = 1- \Upsilon$, and also $2^{N-1}$ components with $2k$ indices equal to $y$ and the remaining indices equal to $x$ (where $k = 0,1,...,\frac{N-1}{2}$). The latter are given by $(-1)^k \Upsilon$. Finally, one finds $2^{N-1}-1$ components with $2k$ indices (where $k = 1,...,\frac{N-1}{2}$) equal to $z$ and the remaining indices equal to $0$. All these have the value of $1$. Consider a metric $G$ with only the following nonzero elements: $G_{zz0...0} = \Upsilon$ and $G_{i_1...i_N}=1$ for the components with $2k$ indices equal to $y$ and the rest equal to $x$. For such a metric, the maximum of the scalar product on the left-hand side of condition (\ref{CRITERION}) is equal to $\Upsilon$. The right-hand side equals $||\hat T||_G^2 = \Upsilon + 2^{N-1} \Upsilon^2$, which is always greater than $\Upsilon$. Thus, the state is entangled already for an infinitesimally small $\Upsilon$. In order to investigate the direct communication-reducing properties of the state, we employ condition~(\ref{ZBCOND}). Depending on the choice of the observation plane, the left-hand side of Eq.~(\ref{ZBCOND}) reads: $2^{N-1} \Upsilon^2$ for the $xy$ plane; $(1-\Upsilon)^2 + \Upsilon^2$ for the $xz$ plane; and $(1-\Upsilon)^2$ for the $yz$ plane. There is no other plane in which the values would be higher, as the correlation tensor is in its generalized Schmidt form \cite{GEN_SCHMIDT1,GEN_SCHMIDT2}.
The sum over the settings in the $xy$ plane is greater than the sum over the $xz$ plane for $\Upsilon > 1/(1+\sqrt{2^{N-1}-1})$. Thus, for the state $\rho(\Upsilon)$ the left-hand side of (\ref{ZBCOND}) is given by \begin{equation} \mathcal{C} = \Bigg\{ \begin{array}{rcc} (1-\Upsilon)^2 + \Upsilon^2 & \textrm{ for } & \Upsilon \leq \frac{1}{1+\sqrt{2^{N-1}-1}}, \\ 2^{N-1} \Upsilon^2 & \textrm{ for } & \Upsilon > \frac{1}{1+\sqrt{2^{N-1}-1}}. \end{array} \end{equation} Therefore, there exists a local realistic model for the correlations obtained in any two-setting correlation Bell experiment if $\Upsilon \le \Upsilon_{\mathrm{lr}} = 1/\sqrt{2^{N-1}}$, which is the same critical value as for the state with white noise (in full analogy to the case of the Bell state). Finally, for $\Upsilon \in (0,\Upsilon_{\mathrm{lr}}]$ the entangled state $\rho(\Upsilon)$ admits a local realistic description for such Bell experiments. Since the dimension of the system is $d = 2^N$, the range of the Werner gap scales polynomially as $d^{-\frac{1}{2}}$. This is exponentially better than in \cite{ALMEIDA}, where the range of the Werner gap scales logarithmically as $\log(d)/d$. However, the model of \cite{ALMEIDA} works for an arbitrary number of settings, whereas here we have studied only two-setting Bell inequalities for correlation functions. Already for the multisetting Bell inequalities for correlation functions \cite{LPZB} the range of the corresponding Werner gap is smaller. The left-hand side of (\ref{ZBCOND1}) is given by $\mathcal{D} = 2^{N-1} \Upsilon^2$ for $N=3$, and $\mathcal{D}=(1-\Upsilon)^2 + 2^{N-2} \Upsilon^2$ for $N \geq 5$. This is illustrated in Fig. \ref{f-multi}, where we show the critical entanglement admixtures below which the state satisfies the inequalities.
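The threshold $\Upsilon_{\mathrm{lr}} = 1/\sqrt{2^{N-1}}$ can be confirmed by evaluating the larger of the two plane sums directly. A small numerical sketch (illustrative only; the function name is ours):

```python
import math

def lhs_two_setting(v, N):
    """Left-hand side of the two-setting violation condition for the
    odd-N GHZ state mixed with colored noise |0...0><0...0|:
    the larger of the xy-plane and xz-plane sums."""
    return max(2 ** (N - 1) * v ** 2, (1 - v) ** 2 + v ** 2)

N = 5
v_lr = 1 / math.sqrt(2 ** (N - 1))          # critical admixture from the text
print(lhs_two_setting(v_lr, N))             # 1.0: exactly at threshold
print(lhs_two_setting(0.9 * v_lr, N) < 1)   # True: local realistic model exists
print(lhs_two_setting(1.1 * v_lr, N) > 1)   # True: inequalities violated
```

The $xz$-plane branch $(1-\Upsilon)^2 + \Upsilon^2$ never exceeds $1$, so the threshold is set by the $xy$-plane branch alone.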
Note that in this case the fact that the condition is satisfied does not guarantee the existence of the local realistic model, because this set of inequalities is not necessary and sufficient for the existence of such a model \cite{LPZB}. We also checked that for the colored noise, the inequalities with continuous settings \cite{ZUK} do not improve the critical admixture for violation of local realism over the multisetting inequalities \cite{LPZB}, except for $N=3$. \begin{figure} \caption{Entanglement and violation of the Bell inequalities for the state $\Upsilon | \mathrm{GHZ}_N \rangle \langle \mathrm{GHZ}_N | + (1-\Upsilon) | 0 \rangle \langle 0 |^{\otimes N}$: critical entanglement admixtures below which the state satisfies the multisetting inequalities for correlation functions.} \label{f-multi} \end{figure} \subsection{Generalized GHZ states} \subsubsection{Colored noise} We give an explicit example of a noisy separable state for which even an infinitesimally small admixture of an infinitesimally weakly entangled state results in a non-separable state. Consider the generalized GHZ state \cite{GEN_GHZ1,GEN_GHZ2,LPZB}: \begin{equation} | \mathrm{GHZ(\alpha)} \rangle = \cos\alpha | 0 \dots 0 \rangle + \sin\alpha | 1 \dots 1 \rangle. \end{equation} It has the following non-vanishing components of the correlation tensor: \begin{eqnarray} T_{\underbrace{y...y}_{2k}x...x} & = & (-1)^{k} \sin 2 \alpha, \quad k = 0,1,...,\lfloor \tfrac{N-1}{2} \rfloor \nonumber \\ T_{\underbrace{z...z}_{k}0...0} & = & \Bigg\{ \begin{array}{ll} 1 & \textrm{ for } k \textrm{ even}, \\ \cos 2 \alpha & \textrm{ for } k \textrm{ odd}, \\ \end{array} \end{eqnarray} and similarly for all permutations of indices. We mix this state with the colored noise $\rho_{\mathrm{noise}} = \ket{0} \bra{0}^{\otimes N}$. If the number of qubits is {\it even}, this state is entangled and violates Bell inequalities for both infinitesimal $\alpha$ and $\Upsilon$. This follows from the fact that the state has perfect correlations $T_{z...z} = 1$ and additional correlations in complementary directions, e.g., $T_{x...x} = \Upsilon \sin 2 \alpha$.
Therefore, summing up the correlations in the $xz$ plane using the multisetting condition (\ref{ZBCOND1}) proves the violation. For an {\it odd} number of qubits, the range of the Werner gap again scales (independently of $\alpha$) polynomially with $d$. First, we show that the mixed state is entangled already for infinitesimal $\alpha$ and $\Upsilon$, irrespective of the number of qubits. To this end, take the two non-vanishing metric elements $G_{zz0...0}$ and $G_{x...x}$ to be equal to $1$. For this choice, the right-hand side of (\ref{CRITERION}) equals $R = 1 + \Upsilon^2 \sin^2 2 \alpha$. The left-hand side of the condition now reads $L = \max (\Upsilon \sin 2 \alpha T_x^{(1)} ... T_x^{(N)} + T_z^{(1)} T_z^{(2)})$, where we maximize over the choice of local tensors (vectors) $\hat T^{(n)}$. We set $T_x^{(n)}$ to the maximal value of $1$ for all the parties $n > 2$, and write the tensor elements for the remaining two parties in polar coordinates, $L = \max_{\theta_1, \theta_2} (\Upsilon \sin 2 \alpha \sin \theta_1 \sin \theta_2 + \cos \theta_1 \cos \theta_2)$. Since $\Upsilon \sin 2 \alpha \le 1$, we have $L \le \max_{\theta_1, \theta_2} \cos(\theta_1 - \theta_2) \le 1$. The maximum is equal to $1$, and for all allowed $\alpha > 0$ and $\Upsilon > 0$ it is smaller than the right-hand side. The state is entangled. For violation of Bell inequalities, consider summation over the $xy$ plane in the necessary and sufficient condition (\ref{ZBCOND1}). For the present state, it involves $\sum_{k=0}^{(N-1)/2} {N \choose 2k} = 2^{N-1}$ terms, each equal to $\Upsilon^2 \sin^2 2 \alpha$, and gives the critical value of $\Upsilon_{\mathrm{lr}} = \frac{\sqrt{2}}{\sin2\alpha} 2^{-N/2}$. Therefore the gap $|\Upsilon_{\mathrm{lr}}-\Upsilon_{\mathrm{ent}}|$ scales polynomially with dimension as $1/\sqrt{d}$ for $d=2^N$.
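Both the counting of the $xy$-plane components and the resulting critical admixture can be verified numerically. A brief sketch (illustrative only):

```python
import math

N = 7  # any odd number of qubits
# Number of xy-plane correlation tensor components of |GHZ(alpha)>:
# 2k indices equal to y, the rest equal to x, summed over k.
count = sum(math.comb(N, 2 * k) for k in range((N - 1) // 2 + 1))
print(count == 2 ** (N - 1))  # True

alpha = math.pi / 8
v_lr = math.sqrt(2) / math.sin(2 * alpha) * 2 ** (-N / 2)
# At the critical admixture, the xy-plane sum in (ZBCOND1) equals 1:
print(abs(count * (v_lr * math.sin(2 * alpha)) ** 2 - 1) < 1e-12)  # True
```

Each term in the sum contributes $\Upsilon^2 \sin^2 2\alpha$, so the violation condition $2^{N-1}\Upsilon^2\sin^2 2\alpha > 1$ reproduces $\Upsilon_{\mathrm{lr}}$ directly.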
\subsubsection{Local depolarization} The non-vanishing correlation tensor elements, after local depolarizing channels are applied to the generalized GHZ state, read \begin{eqnarray} T_{\underbrace{y...y}_{2 k}x...x} & = & (-1)^{k} p^N \sin 2 \alpha, \quad k = 0,1,...,\lfloor \tfrac{N-1}{2} \rfloor \nonumber \\ T_{\underbrace{z...z}_{k}0...0} & = & \Bigg\{ \begin{array}{ll} p^k & \textrm{ for } k \textrm{ even}, \\ p^k \cos 2 \alpha & \textrm{ for } k \textrm{ odd}. \end{array} \end{eqnarray} To show the Werner gap for $N \to \infty$, we first prove that the state is entangled for all $p> \tfrac{1}{2}$. Choose the following non-zero elements of the metric: $G_{j_1...j_N} = 1$ for $j_n = 1,2$, $G_{z...z} = 1$ for $N$ even and $G_{z...z0} = 1$ for odd $N$. The right-hand side of the entanglement condition (\ref{CRITERION}) is $R = p^{2(N-1+N_2)} + 2^{N-1} p^{2(N-1+N_2)} \sin^2 2 \alpha$, where $N_2 = N \mod 2$ encodes the cases of odd and even $N$. The left-hand side is maximized if all local tensors are the same and lie along the $z$ axis, and it has the value $L = p^{N-1+N_2}$. Therefore, the state is entangled if $1 < p^{N-1+N_2} + 2^{N-1} p^{N-1+N_2} \sin^2 2 \alpha$. To unify the cases of odd and even $N$ we bound the right-hand side from below using $p^N \le p^{N-1+N_2}$, and obtain the sufficient condition for entanglement. The corresponding critical value is \begin{equation} p_{\mathrm{ent}} = (1 + 2^{N-1} \sin^2 2 \alpha)^{-\frac{1}{N}} \to \tfrac{1}{2}, \end{equation} where the limit is for $N \to \infty$. Arrows in the following formulae always denote this limit. The multisetting Bell inequalities give better results than standard inequalities for this state. Consider the violation condition (\ref{ZBCOND1}) in which the last index of the correlation tensor takes on the values $\{y,z\}$, whereas the indices of the other parties are either $\{x,y\}$, if the last index is $y$, or $z$, if the last index is $z$.
(Note that we explicitly make use here of the advantage of the multisetting condition over the two-setting condition). The value of the parameter $\mathcal{D}$ is at least equal (it is higher for even $N$) to $p^{2N}(\cos^2 2 \alpha + 2^{N-2} \sin^2 2 \alpha)$, where $2^{N-2}$ gives the number of non-zero correlation tensor elements with a $y$ index in the last position. Therefore, the critical parameter is \begin{equation} p_{\mathrm{lr}} = (\cos^2 2 \alpha + 2^{N-2} \sin^2 2 \alpha)^{-\tfrac{1}{2N}} \to \tfrac{1}{\sqrt{2}}. \end{equation} The critical values of $p$ decrease with the number of qubits, showing that many-party generalized GHZ states are more and more robust against this type of noise. Finally, for $N \to \infty$ there is a Werner gap of $p \in (\frac{1}{2},\frac{1}{\sqrt{2}})$ for which the entangled state does not improve related communication complexity tasks. \subsubsection{Dephasing} Similar results hold for other local noises. After dephasing in the local $z$-bases, the correlation tensor of the generalized GHZ state reads \begin{eqnarray} T_{\underbrace{y...y}_{2 k}x...x} & = & (-1)^{k} \lambda^{\frac{N}{2}} \sin 2 \alpha, \quad k = 0,1,...,\lfloor \tfrac{N-1}{2} \rfloor \nonumber \\ T_{\underbrace{z...z}_{k}0...0} & = & \Bigg\{ \begin{array}{ll} 1 & \textrm{ for } k \textrm{ even}, \\ \cos 2 \alpha & \textrm{ for } k \textrm{ odd}. \end{array} \end{eqnarray} All entangled generalized GHZ states are still entangled after the local dephasing. For a proof, it is sufficient to choose $G_{zz0...0} = G_{x...x} = 1$. For this choice, the right-hand side of (\ref{CRITERION}) reads $R = 1 + \lambda^N \sin^2 2 \alpha$, whereas for the left-hand side we have $L = \max\left( T_z^{(1)} T_z^{(2)} + T_x^{(1)} T_x^{(2)} \lambda^{N/2} \sin2\alpha \right) \le T_z^{(1)} T_z^{(2)} + T_x^{(1)} T_x^{(2)} \le 1$, which follows from $\lambda^{N/2} \sin2\alpha \le 1$ and writing the components of the local tensors in polar coordinates.
We also assumed that the local Bloch vectors of all the parties except the first and second are along the $x$ axis, which is optimal. Therefore, the state is entangled for all $\alpha > 0$ and $\lambda >0$, independently of the number of qubits. Since dephasing leaves the correlations in specific directions unchanged, the violation of Bell inequalities for the generalized GHZ state is very robust against this type of noise. We show that it actually is state independent, i.e. violation is observed for all $\alpha > 0$, and only depends on the degree of dephasing if $N$ is odd. Consider the multisetting condition, in which as before the last index takes values $\{y,z\}$ and the remaining indices are either $\{x,y\}$, if the last index is $y$, or $z$, if the last index is $z$. If $N$ is even, after dephasing the state still contains perfect correlations in the $z$ directions and some other correlations in the $xy$ plane, and violates the inequalities for all $\alpha > 0$ and $\lambda >0$. For the case of an odd number of qubits, the condition reads $\mathcal{D} = \cos^2 2 \alpha + 2^{N-2} \lambda^N \sin^2 2 \alpha$, and the violation is observed as soon as \begin{equation} \sin^2\alpha > 0 \textrm{ and } \lambda > 2^{\frac{2}{N}-1} \to \tfrac{1}{2}. \end{equation} Therefore, violation only depends on the degree of dephasing in the case of odd $N$, and again there is a finite Werner gap of $\lambda \in (0,\frac{1}{2})$ in the limit $N \to \infty$. \subsubsection{Amplitude damping} Finally, we consider independent local amplitude damping channels.
The correlation tensor elements of the decohered generalized GHZ state are the following: \begin{eqnarray} T_{\underbrace{y...y}_{2 k}x...x} & = & (-1)^{k} \gamma^{\frac{N}{2}} \sin 2 \alpha, \quad k = 0,1,...,\lfloor \tfrac{N-1}{2} \rfloor \nonumber \\ T_{\underbrace{z...z}_{k}0...0} & = & \Bigg\{ \begin{array}{ll} \cos^2\alpha+\bar \gamma^k \sin^2 \alpha & \textrm{ for } k \textrm{ even}, \\ \cos^2\alpha-\bar \gamma^k \sin^2 \alpha & \textrm{ for } k \textrm{ odd}, \end{array} \end{eqnarray} where $\bar \gamma = 2 \gamma -1$. To prove the Werner gap, consider the metric with non-vanishing elements $G_{j_1...j_N} = 1$ with $j_n=1,2$. The right-hand side of the entanglement criterion reads $R = 2^{N-1}\gamma^N \sin^2 2 \alpha$. The left-hand side is $L = \gamma^{N/2} \sin 2\alpha \max[T_x^{(1)} \dots T_x^{(N)} - T_y^{(1)}T_y^{(2)}T_x^{(3)} \dots T_x^{(N)} - \dots ]$, with the maximum taken over local tensors with components from one plane. We write the elements of the individual tensors in polar coordinates, i.e. $T_x^{(n)} = \cos \theta_n$ and $T_y^{(n)} = \sin \theta_n$, and recognize that the expression in the bracket is now given by $\cos(\theta_1 + \ldots + \theta_N)$. Therefore, $L = \gamma^{N/2} \sin 2\alpha$, which translates into the critical parameter for entanglement \begin{equation} \gamma_{\mathrm{ent}} = \frac{1}{4}\left( \frac{2}{\sin 2 \alpha}\right)^{\frac{2}{N}} \to \frac{1}{4}. \end{equation} The violation of both the many-setting and the two-setting inequalities is the same for a higher number of qubits. The two-setting condition reveals the critical parameter \begin{equation} \gamma_{\mathrm{lr}} = \frac{1}{2} \left(\frac{2}{\sin 2 \alpha}\right)^{\frac{1}{N}} \to \frac{1}{2}. \end{equation} For large $N$, the states for practically all $\alpha > 0$ violate the inequalities if $\gamma > \tfrac{1}{2}$, and there is a finite Werner gap of $\gamma \in (\frac{1}{4},\frac{1}{2})$ in the limit $N \to \infty$.
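The two limits, and the relation $\gamma_{\mathrm{ent}} = \gamma_{\mathrm{lr}}^2$ implied by the formulas above, can be checked directly. A short numerical sketch (illustrative only; the function names are ours):

```python
import math

def gamma_ent(N, alpha):
    """Critical amplitude-damping parameter for entanglement of the
    generalized GHZ state (formula above)."""
    return 0.25 * (2 / math.sin(2 * alpha)) ** (2 / N)

def gamma_lr(N, alpha):
    """Critical parameter for violation of the two-setting inequalities."""
    return 0.5 * (2 / math.sin(2 * alpha)) ** (1 / N)

alpha = math.pi / 4  # standard GHZ state
for N in (5, 50, 500):
    print(N, gamma_ent(N, alpha), gamma_lr(N, alpha))
# gamma_ent -> 1/4 and gamma_lr -> 1/2 as N grows, leaving the gap (1/4, 1/2);
# note also that gamma_ent = gamma_lr**2 for every N and alpha.
```

The quadratic relation between the two thresholds is another instance of the general pattern noted in the introduction to this section.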
\subsection{$W$ state} In this section we study the $W$ state, and we shall emphasize the properties distinguishing it from the class of generalized GHZ states. The $W$ state involves a single excitation delocalized over all the qubits: \begin{equation} | W \rangle = \tfrac{1}{\sqrt{N}} \left( | 1 0 \ldots 0 \rangle + | 0 1 \ldots 0 \rangle + \ldots +| 0 0 \ldots 1 \rangle \right). \end{equation} It is permutationally invariant, i.e. any permutation of particles leaves the state unchanged. Therefore, to describe its correlation tensor it is sufficient to present just three elements. All other non-vanishing elements have indices being permutations of the indices of the following ones: \begin{eqnarray} T_{\underbrace{z...z}_{k} 0...0} & = & 1 - \frac{2 k}{N},\nonumber \\ T_{yyz...z0...0} = T_{xxz...z0...0} & = & \frac{2}{N}. \end{eqnarray} \subsubsection{White noise} Consider the $W$ state mixed with the white noise: \begin{equation} \rho = \Upsilon | W \rangle \langle W | + (1-\Upsilon) \frac{1}{2^N}\openone. \end{equation} Contrary to the case of the mixed GHZ state, this mixed state gives rise to a Werner gap in the limit of $N \to \infty$. To prove entanglement of this state, consider the metric with non-vanishing elements $G_{j_1...j_N} = 1$ where $j_n = \{x,z\}$. With this choice, the right-hand side of condition (\ref{CRITERION}) reads $R = \Upsilon^2(1 + {N \choose 2} \frac{4}{N^2}) = \Upsilon^2(3-\tfrac{2}{N})$. The left-hand side is maximized if all the local tensors are along the $\pm z$ axis, and it equals $L = \Upsilon$, which we have verified numerically. Therefore, the critical parameter for entanglement reads \begin{equation} \Upsilon_{\mathrm{ent}} = \frac{1}{3-\tfrac{2}{N}} \to \frac{1}{3}. \end{equation} The multisetting inequalities are violated as soon as the entanglement admixture is above the critical value \cite{LPZB}: \begin{equation} \Upsilon_{\mathrm{lr}} = \sqrt{\tfrac{1}{3-\frac{2}{N}}} \to \frac{1}{\sqrt{3}}.
\end{equation} Note that the same correlations enter both the Bell inequalities and the entanglement criterion, showing that this simple application of the criterion is at least quadratically better at revealing entanglement than the Bell inequalities, i.e. $\Upsilon_{\mathrm{ent}} \le \Upsilon_{\mathrm{lr}}^2$. This is a general feature present in all our examples. \subsubsection{Colored noise} A similar conclusion, in comparison with the GHZ states, follows in the case of a colored noise admixture to the $W$ state: \begin{equation} \rho = \Upsilon | W \rangle \langle W | + (1 - \Upsilon) |0...0 \rangle \langle 0...0 |, \end{equation} where the colored noise introduces correlations in the local $z$ directions. This state is entangled for all $\Upsilon > 0$. A simple way to see this is to use the fact that if a subsystem is entangled, then the whole system is also entangled. The definition of the $W$ state leads to the following form of the reduced density operator for any two qubits \begin{equation} \Upsilon' | \psi^+ \rangle \langle \psi^+ | + (1-\Upsilon') |00 \rangle \langle 00|, \end{equation} with $\Upsilon' = \frac{2}{N} \Upsilon$ and $| \psi^+ \rangle = \frac{1}{\sqrt{2}}(\ket{01} + \ket{10})$. This state is entangled (it has a negative partial transposition \cite{PERES_PPT,HORODECKI_PPT}) for all $\Upsilon' > 0$. Therefore the global state is entangled for all $\Upsilon > 0$ and any finite $N$. If one chooses settings from the $xz$ plane in the multisetting Bell inequality, the violation conditions reveal the critical value of the admixture parameter as given by \begin{equation} \Upsilon_{\mathrm{lr}} = \frac{2}{3-\frac{1}{N}} \to \frac{2}{3}. \end{equation} In the limit of $N \to \infty$ we find the Werner gap of $\Upsilon \in (0,\frac{2}{3})$. This is the biggest gap among all the states studied here.
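The reduction argument above is easy to reproduce numerically. The following sketch (illustrative only; function names are ours) constructs the noisy $W$ state, traces out all but two qubits, and tests the partial transposition of the reduction:

```python
import numpy as np

def w_colored(N, v):
    """N-qubit W state mixed with the colored noise (1-v)|0...0><0...0|."""
    w = np.zeros(2 ** N)
    for k in range(N):
        w[2 ** k] = 1 / np.sqrt(N)   # single excitation on qubit N-1-k
    e0 = np.eye(2 ** N)[0]           # |0...0>
    return v * np.outer(w, w) + (1 - v) * np.outer(e0, e0)

def two_qubit_reduction(rho, N):
    """Trace out all qubits except the first two."""
    t = rho.reshape(4, 2 ** (N - 2), 4, 2 ** (N - 2))
    return np.einsum('ikjk->ij', t)

def min_pt_eig(r):
    """Smallest eigenvalue of the partial transpose (two qubits)."""
    pt = r.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

# Even a tiny admixture leaves the two-qubit reduction NPT, hence entangled:
N, v = 6, 0.01
print(min_pt_eig(two_qubit_reduction(w_colored(N, v), N)) < 0)  # True
```

The reduction equals $\Upsilon' |\psi^+\rangle\langle\psi^+| + (1-\Upsilon')|00\rangle\langle 00|$ with $\Upsilon' = \frac{2}{N}\Upsilon$, whose partial transpose has a negative eigenvalue for any $\Upsilon' > 0$, confirming the claim.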
\subsubsection{Local depolarization} We shall show that the $W$ state is very fragile with respect to this type of decoherence, in contrast to the GHZ state. After local depolarization, the elements of the correlation tensor of the decohered $W$ state read \begin{eqnarray} T_{\underbrace{z...z}_{k} 0...0} & = & p^k \left(1 - \frac{2 k}{N} \right), \nonumber \\ T_{yy\underbrace{z...z}_{k}0...0} = T_{xx\underbrace{z...z}_{k}0...0} & = & p^{k+2} \frac{2}{N}. \end{eqnarray} To prove entanglement of this state, consider the non-zero metric elements $G_{j_1...j_N} = 1/p^N$ with $j_n = \{x,z\}$. For this choice the right-hand side of the criterion equals $R = p^N(1 + {N \choose 2} \frac{4}{N^2})$, whereas the maximum of the left-hand side is $1$, which we have verified numerically. Therefore, the state is entangled above the critical value of \begin{equation} p_{\mathrm{ent}} = \left( \frac{1}{3-\frac{2}{N}} \right)^{\frac{1}{N}} \to 1. \end{equation} The multisetting Bell inequalities are violated as soon as $\mathcal{D} = T_{z...z}^2 + {N \choose 2} T_{xxz...z}^2 > 1$. This gives the critical parameter \begin{equation} p_{\mathrm{lr}} = \left( \frac{1}{3-\frac{2}{N}} \right)^{\frac{1}{2 N}} \to 1. \end{equation} This critical value rapidly increases with $N$, and already for five qubits it requires $p>0.9$. Since $p_{\mathrm{ent}} = p_{\mathrm{lr}}^2$, there is a Werner gap for all finite $N$, and in the limit both parameters tend to the same value. Of course, a smarter choice of the metric in the entanglement condition could prove that even in the limit there is a finite Werner gap. \subsubsection{Dephasing} The $W$ state is extremely robust against dephasing, as dephasing leaves the perfect correlations unchanged. After dephasing, the $W$ state is transformed to \begin{eqnarray} T_{\underbrace{z...z}_{k} 0...0} & = & 1 - \frac{2 k}{N}, \nonumber \\ T_{yy\underbrace{z...z}_{k}0...0} = T_{xx\underbrace{z...z}_{k}0...0} & = & \lambda \frac{2}{N}.
\end{eqnarray} The dephased state violates a Bell inequality (and therefore is entangled) for all $N$ and all non-trivial dephasing channels. Consider correlations in the $xz$ plane and the multisetting condition. The value of the parameter $\mathcal{D} = 1 + 2 \lambda^2(1-\tfrac{1}{N})$ exceeds unity for all $\lambda > 0$. Note that this is true also in the limit $N \to \infty$. \subsubsection{Amplitude damping} The $W$ state after this type of decoherence reads \begin{eqnarray} T_{\underbrace{z...z}_{k}0...0} & = & 1 - \frac{2 k}{N} \gamma, \nonumber \\ T_{yy\underbrace{z...z}_{k}0...0} = T_{xx\underbrace{z...z}_{k}0...0} & = & \frac{2}{N}\gamma. \end{eqnarray} To prove the Werner gap, consider the non-vanishing metric elements $G_{j_1...j_N} = 1$ for $j_n = \{x,z\}$. The right-hand side of the criterion is $R = (1 - 2 \gamma)^2 + {N \choose 2} \frac{4 \gamma^2}{N^2}$, and the maximum of the left-hand side, $L \le 1 - 2 \gamma$, is attained for all local vectors along the $z$ directions. Therefore, the state is entangled above the critical value \begin{equation} \gamma_{\mathrm{ent}} = \frac{1}{3 - \frac{1}{N}} \to \frac{1}{3}. \end{equation} We check violation of the Bell inequalities using the condition with many settings in the $xz$ plane. The expression reads $\mathcal{D} = R$ and exceeds unity for all values above \begin{equation} \gamma_{\mathrm{lr}} = \frac{2}{3 - \frac{1}{N}} \to \frac{2}{3}. \end{equation} The Werner gap is present also in the limit $N \to \infty$, just as for the GHZ state. \section{Summary} \begin{table*} \begin{tabular}{l l l l l l} \hline \hline & white noise $\qquad$ & colored noise $\quad$ & local depolarization $\quad$ & dephasing $\qquad$ & amplitude damping \\ \hline \hline Gen.
GHZ $\quad$ & $\zeta_{\mathrm{ent}} \to 1/2$ & $\zeta_{\mathrm{ent}} \to 0$ & $\zeta_{\mathrm{ent}} \to 1/2$ & $\zeta_{\mathrm{ent}} \to 0$ & $\zeta_{\mathrm{ent}} \to 1/4$ \\ & $\zeta_{\mathrm{lr}} \to 1/\sqrt{2}$ & $\zeta_{\mathrm{lr}} \to 1/\sqrt{2}$ & $\zeta_{\mathrm{lr}} \to 1/\sqrt{2}$ & $\zeta_{\mathrm{lr}} \to 1/2$ & $\zeta_{\mathrm{lr}} \to 1/2$ \\ \hline W & $\zeta_{\mathrm{ent}} \to 1$ & $\zeta_{\mathrm{ent}} \to 0$ & $\zeta_{\mathrm{ent}} \to 1$ & $\zeta_{\mathrm{ent}} \to 0$ & $\zeta_{\mathrm{ent}} \to 1/3$ \\ & $\zeta_{\mathrm{lr}} \to 1$ & $\zeta_{\mathrm{lr}} \to 1$ & $\zeta_{\mathrm{lr}} \to 1$ & $\zeta_{\mathrm{lr}} \to 0$ & $\zeta_{\mathrm{lr}} \to 2/3$ \\ \hline \hline \end{tabular} \caption{Summary of the results. The results for different initial states are presented in rows. Noisy channels applied to them are presented in columns. The strength of the noise is characterized by a single parameter $\zeta$ ($\zeta = 1$ corresponds to no noise, $\zeta = 0$ describes the strongest noise, which immediately destroys entanglement). To unify the presentation in this table we represent all the parameters by $\zeta$. In the main text the parameter characterizing local depolarization is denoted by $p$, dephasing by $\lambda$, amplitude damping by $\gamma$, and the admixture of white or colored noise by $\Upsilon$. We present the critical parameter $\zeta_{\mathrm{ent}}$, above which the resulting state is entangled, and $\zeta_{\mathrm{lr}}$, below which the state satisfies the studied classes of Bell inequalities, in the limit of a large number of qubits $N \to \infty$. Additionally, to compare the critical parameters for different types of noise on an equal footing, they are all calculated here per particle, in the sense that the values related to white and colored noise are $N$th roots of the values in the main text.
In all the cases, $\zeta_{\mathrm{ent}}$ is at most the square of $\zeta_{\mathrm{lr}}$.} \label{TAB_SUMMARY} \end{table*} Using the entanglement criterion \cite{BADZIAG} we have found families of entangled states which satisfy specific classes of Bell inequalities. We summarize our findings in Table \ref{TAB_SUMMARY}. Generally speaking, a simple application of the entanglement criterion gives at least quadratically better critical parameters than the ones obtained using the Bell inequalities. Therefore, we found entangled states satisfying the Bell inequalities in all studied cases. Moreover, we gave examples in which, even in the limit of a large number of qubits, there is a finite gap between the critical parameter for entanglement and the critical parameter for Bell violation. We found that maximally entangled mixed states of two qubits give rise to the highest discrepancy between the critical parameters. It would be interesting to investigate whether this also holds for a higher number of qubits. Our results are a further step towards a full classification of entangled states into those which do and do not admit a local realistic explanation. \section{Acknowledgments} We thank Johannes Kofler and Ravishankar Ramanathan for discussions. W.L. and M.\.Z. performed this work at National Quantum Information Centre of Gda\'nsk. W.L. is supported by the Foundation for Polish Science (KOLUMB program). M. \.Z. acknowledges EU program SCALA. We acknowledge support of the Austrian Science Foundation FWF Project No. P19570-N16, SFB and the Doctoral Program CoQuS, the $6$th EU program QAP (Qubit Applications) Contract No. 015848, and the National Research Foundation \& Ministry of Education in Singapore. The collaboration is a part of an \"OAD/MNiSW program. \end{document}
\begin{document} \renewcommand{\refname}{References} \renewcommand{\contentsname}{Contents} \begin{center} {\huge Global solvability of 1D equations\\ of viscous compressible multi-fluids} \end{center} \begin{center} {\large Alexander Mamontov,\quad Dmitriy Prokudin\footnote{The research was supported by the Ministry of Education and Science of the Russian Federation (grant 14.Z50.31.0037).}} \end{center} \begin{center} {\large August 25, 2017} \end{center} \begin{center} { Lavrentyev Institute of Hydrodynamics, \\ Siberian Branch of the Russian Academy of Sciences\\ pr. Lavrent'eva 15, Novosibirsk 630090, Russia} \end{center} \begin{center} {\bfseries Abstract} \end{center} \begin{center} \begin{minipage}{110mm} We consider the model of viscous compressible multi-fluids with multiple velocities. We review different formulations of the model and the existence results for boundary value problems. We analyze the crucial mathematical difficulties which arise in the proof of the global existence theorems in the 1D case. \end{minipage} \end{center} {\bf Keywords:} existence theorem, uniqueness, unsteady boundary value problem, viscous compressible fluid, homogeneous mixture with multiple velocities \tableofcontents \section{Introduction} The description of the motion of multi-component media is an interesting and rather little-studied problem both in physics/mechanics and in mathematics. There is no standard approach to simulating these motions, nor is there any developed mathematical theory concerning the existence, uniqueness and properties of solutions of the initial-boundary value problems arising in this simulation. In the present paper, we choose one of the numerous versions of simulating the motion of multi-component fluid mixtures, namely, a homogeneous mixture of viscous compressible fluids and a multi-velocity model.
This means that all components (constituents) of the mixture are present at any point of the space and in the same phase, and each of them has its own local velocity. The interaction between the components occurs via viscous friction and momentum exchange, and also via heat exchange (in heat-conducting models). Such mixtures are also called multi-fluids, see \cite{mamprok1d.france} for the details. \section{Mathematical model of multi-fluids} The mathematical model of a multi-fluid consisting of $N\geqslant 2$ components includes (see \cite{mamprok1d.Nigm} and \cite{mamprok1d.Raj}) the continuity equations for each constituent \begin{equation}\label{mamprok1d.21061710}\partial_{t}\rho_{i}+{\rm div}(\rho_{i}\boldsymbol{u}_{i})=0,\quad i=1,\ldots,N,\end{equation} the momentum equations for each constituent \begin{equation}\label{mamprok1d.21061711}\partial_{t}(\rho_{i}\boldsymbol{u}_{i})+{\rm div}(\rho_{i}\boldsymbol{u}_{i}\otimes\boldsymbol{u}_{i}) +\nabla p_{i}-{\rm div}\, {\mathbb S}_{i}=\rho_{i}\boldsymbol{f}_{i} +\boldsymbol{J}_{i},\quad i=1,\ldots,N,\end{equation} and the energy equations. Here $\rho_i$ is the density of the $i$-th constituent of the multi-fluid, $\boldsymbol{u}_{i}$ is the velocity field, $p_i$ is the pressure, ${\mathbb S}_{i}$ is the viscous part of the stress tensor ${\mathbb{P}}_{i}=-p_i\mathbb{I}+\mathbb{S}_{i}$: $${\mathbb S}_{i}=\sum\limits_{j=1}^{N}\left(\lambda_{ij}({\rm div}\,\boldsymbol{u}_{j}){\mathbb I}+2\mu_{ij}{\mathbb D}(\boldsymbol{u}_{j})\right),\quad i=1,\ldots,N,$$ where $\lambda_{ij}$ and $\mu_{ij}$ are the viscosity coefficients, ${\mathbb D}$ is the rate of deformation tensor, and ${\mathbb I}$ is the identity tensor. The viscosity coefficients $\lambda_{ij}$ and $\mu_{ij}$ compose the matrices $\boldsymbol{\Lambda}=\{\lambda_{ij}\}_{i, j =1}^{N}$ and $\textbf{M}=\{\mu_{ij}\}_{i, j =1}^{N}$.
Finally,\linebreak $\boldsymbol{f}_{i}=(f_{i1}, \ldots, f_{in})$ denotes the external body force ($n$ is the dimension of the flow), and $$\boldsymbol{J}_{i}=\sum\limits_{j=1}^{N}a_{ij}(\boldsymbol{u}_{j}-\boldsymbol{u}_{i}),\quad i=1,\ldots,N,\quad a_{ij}=a_{ji},\quad i, j=1,\ldots,N$$ stands for the momentum exchange between the constituents (the momentum supply). The viscosity matrices in real multi-fluids take non-trivial forms and, generally speaking (see \cite{mamprok1d.mamprok17} for the details), they depend on the concentrations $\displaystyle \xi_{i}=\frac{\rho_{i}}{\rho}$, where $\displaystyle \rho=\sum\limits_{i=1}^{N}\rho_{i}$ is the total density of the multi-fluid. The dependence of the viscosity matrices on the concentrations causes serious difficulties. We accept the simplifying assumption of constant viscosities, but with the preservation of the necessary positiveness properties. Indeed, it is very important (both physically and mathematically) to ensure the Second Law of thermodynamics, which means \begin{equation}\label{mamprok1d.2106171}\sum\limits_{i=1}^{N}{\mathbb S}_{i}:{\mathbb D}(\boldsymbol{u}_{i})\geqslant 0.\end{equation} Besides, in order to provide the important mathematical property of ellipticity, it is necessary to ensure the condition \begin{equation}\label{mamprok1d.2106172}\sum\limits_{i=1}^{N}\int\limits_{\Omega}{\mathbb S}_{i}:{\mathbb D}(\boldsymbol{u}_{i})d\boldsymbol{x}\geqslant C\sum\limits_{i=1}^{N} \int\limits_{\Omega} |\nabla\otimes \boldsymbol{u}_{i}|^2 d\boldsymbol{x},\end{equation} where $\Omega$ is the flow domain, and $\boldsymbol{u}_{i}|_{\partial\Omega}=0$, $i=1,\ldots,N$. The formulated positiveness and coercivity can be provided by the following properties of the viscosity matrices: the properties $n\boldsymbol{\Lambda}+2\textbf{M}\geqslant 0$, $\textbf{M}\geqslant 0$ provide \eqref{mamprok1d.2106171}, and the properties $\boldsymbol{\Lambda}+2\textbf{M}>0$, $\textbf{M}>0$ provide \eqref{mamprok1d.2106172}.
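These algebraic conditions on the viscosity matrices are straightforward to check numerically. A minimal sketch (the matrices and the flow dimension $n$ below are illustrative assumptions, not data from the paper):

```python
import numpy as np

def is_psd(A, strict=False, tol=1e-12):
    """Check positive (semi)definiteness of a symmetric matrix via its eigenvalues."""
    w = np.linalg.eigvalsh((A + A.T) / 2)
    return bool(np.all(w > tol)) if strict else bool(np.all(w >= -tol))

# Hypothetical viscosity matrices for N = 2 constituents in a flow of dimension n = 3;
# the off-diagonal entries couple the constituents through the viscous terms.
Lam = np.array([[1.0, 0.2],
                [0.2, 1.0]])        # Lambda = {lambda_ij}
M   = np.array([[2.0, 0.5],
                [0.5, 2.0]])        # M = {mu_ij}
n = 3

# n*Lambda + 2M >= 0 and M >= 0 give the dissipation inequality,
# Lambda + 2M > 0 and M > 0 give the coercivity estimate.
dissipation = is_psd(n * Lam + 2 * M) and is_psd(M)
coercivity = is_psd(Lam + 2 * M, strict=True) and is_psd(M, strict=True)
print(dissipation, coercivity)
```

Both checks pass for these sample matrices even though they are far from diagonal, which is exactly the situation of interest below.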
A very important observation is that the viscosity matrices need not be diagonal. The momentum supply $\boldsymbol{J}_{i}$ in the momentum equations gives lower-order terms (physically important, but mathematically causing no difficulties), and if the matrices are diagonal then $\boldsymbol{J}_{i}$ is the only connection between the constituents, so we have $N$ Navier--Stokes systems connected only via lower-order terms. Earlier such problems were relevant (even in 1D) (see, e.~g., \cite{mamprok1d.kazhpetr78}, \cite{mamprok1d.petr82} and \cite{mamprok1d.zlotn95}), but nowadays such results almost automatically come from the theory of mono-fluid systems governed by the compressible Navier--Stokes equations. If the viscosity matrices are ``complete'' (general), then we have interesting mathematical problems (see the reviews in \cite{mamprok1d.france}, \cite{mamprok1d.semr17} and the citations there). The crucial problem in multi-D flows is that an automatic extension of the theory of compressible Navier--Stokes equations to the theory of multi-fluids requires $${\rm div}\,{\rm div}\,\mathbb S_{i}={\rm const}_{i}\cdot\Delta{\rm div}\,\boldsymbol{u}_{i},\quad i=1,\ldots,N,$$ but we have $$\left( \begin{array}{c}{\rm div}\,{\rm div}\,\mathbb S_{1} \\ \ldots \\ {\rm div}\,{\rm div}\,\mathbb S_{N} \end{array} \right)=\textbf{N} \left( \begin{array}{c}\Delta{\rm div}\,\boldsymbol{u}_{1} \\ \ldots \\ \Delta{\rm div}\,\boldsymbol{u}_{N} \end{array} \right),$$ where $\textbf{N}=\{\nu_{ij}\}_{i, j =1}^{N}=\{\lambda_{ij}+2\mu_{ij}\}_{i, j =1}^{N}$ is the matrix of total viscosities. It is possible to obtain results in the case of triangular matrices $\textbf{N}$ (see, e.~g., \cite{mamprok1d.smj12} and \cite{mamprok1d.izvran}). However, for a general~$\textbf{N}$, it is necessary to radically revise the existing techniques developed for the compressible Navier--Stokes system. If $n=1$ then interesting problems also appear (see below).
The authors found a way to overcome the crucial problem of the total viscosity matrix structure, namely, to introduce the assumptions that \begin{itemize} \item the pressures in all constituents are equal to each other, \item the velocities of the constituents in the material derivative operators are replaced by the average velocity. \end{itemize} Both assumptions are physically realistic in many situations (see \cite{mamprok1d.mamprok17} for the details), and the main mathematical effect is that the model does not lose the richness inherent in multi-fluid models (different densities and velocities of the constituents are preserved); moreover, the flexibility of the model grows because this trick allows one to eliminate the restrictions on the viscosity matrix, and hence to admit all possible viscous terms. As a result, we come (see \cite{mamprok1d.mamprok17} and \cite{mamprok1d.semr17}) to the following system of equations: \begin{equation}\label{mamprok1d.continit}\partial_{t}\rho_{i}+{\rm div\,}(\rho_{i}\boldsymbol{v})=0,\quad i=1, \ldots, N,\end{equation} \begin{equation}\label{mamprok1d.mominit}\partial_{t}(\rho_{i}\boldsymbol{u}_{i})+{\rm div\,}(\rho_{i}\boldsymbol{v}\otimes\boldsymbol{u}_{i})+\alpha_i\nabla p ={\rm div\,}{\mathbb S}_{i}+\rho_{i}\boldsymbol{f}_{i},\quad i=1, \ldots, N.\end{equation} Here $\displaystyle \boldsymbol{v}=\sum\limits_{i=1}^N\alpha_i \boldsymbol{u}_{i}$ is the average velocity of the multi-fluid, and the coefficients $\alpha_i\left(\{\xi_{j}\}_{j=1}^{N}\right)>0$ are such that $\displaystyle \sum\limits_{i=1}^N \alpha_i=1$ (for instance, $\alpha_i=\xi_i$). Thus, the values $\alpha_i$, due to their dependence on the concentrations, satisfy the equations $$\frac{\partial \alpha_i}{\partial t}+\boldsymbol{v}\cdot\nabla\alpha_i=0,\quad i=1,\ldots,N.$$ In the simplest version, $\alpha_i$ may be regarded as constants.
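In the case $\alpha_i=\xi_i$, the transport equations for the $\alpha_i$ follow from the continuity equations \eqref{mamprok1d.continit} via the pointwise identity $\rho\,(\partial_t\xi_1+\boldsymbol{v}\cdot\nabla\xi_1)=r_1-\xi_1\sum_j r_j$, where $r_i=\partial_t\rho_i+{\rm div}(\rho_i\boldsymbol{v})$ are the continuity residuals. A minimal numerical check of this identity for $N=2$ in one space dimension (the smooth fields below are arbitrary illustrative choices, not solutions from the paper):

```python
from math import sin, cos

# Hypothetical smooth fields (illustrative only): two partial densities
# and a common transport velocity v, in one space dimension.
def rho1(x, t): return 1.5 + 0.3 * sin(x + 2.0 * t)
def rho2(x, t): return 1.2 + 0.2 * cos(2.0 * x - t)
def v(x, t):    return 0.4 * sin(x) * cos(t)

def rho(x, t): return rho1(x, t) + rho2(x, t)    # total density
def xi1(x, t): return rho1(x, t) / rho(x, t)     # concentration of constituent 1

# Central finite differences for partial derivatives.
def dx(f, x, t, h=1e-5): return (f(x + h, t) - f(x - h, t)) / (2.0 * h)
def dt(f, x, t, h=1e-5): return (f(x, t + h) - f(x, t - h)) / (2.0 * h)

# Continuity residuals r_i = d_t rho_i + d_x(rho_i v); they vanish when the
# continuity equations hold with the common velocity v.
def residual(f):
    fv = lambda x, t: f(x, t) * v(x, t)
    return lambda x, t: dt(f, x, t) + dx(fv, x, t)

x0, t0 = 0.3, 0.7
r1, r2 = residual(rho1)(x0, t0), residual(rho2)(x0, t0)
transport = dt(xi1, x0, t0) + v(x0, t0) * dx(xi1, x0, t0)

# Identity: rho * (d_t xi_1 + v d_x xi_1) = r_1 - xi_1 (r_1 + r_2),
# so r_1 = r_2 = 0 forces the pure transport equation for xi_1.
err = abs(rho(x0, t0) * transport - (r1 - xi1(x0, t0) * (r1 + r2)))
print(err)
```

The residual `err` is at the level of the finite-difference error, confirming that the identity holds for arbitrary smooth fields.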
The main existence results for the model \eqref{mamprok1d.continit}, \eqref{mamprok1d.mominit} are obtained in \cite{mamprok1d.semi1}, \cite{mamprok1d.semi2}, \cite{mamprok1d.smz1}, \cite{mamprok1d.smz2} and \cite{mamprok1d.prokkraj}. To conclude the section, let us note that the multidimensional existence theorems concern only weak solutions, whose regularity is not sufficient even for uniqueness; any increase of smoothness is hindered by serious obstacles. Moreover, the difficulty of multidimensional problems eclipses the study of many qualitative properties of solutions, as well as related problems including modeling; this justifies considering the corresponding questions first in the one-dimensional case. Hence, whereas in the solvability theory for the main boundary value problems of the viscous gas there was a shift of emphasis from the one-dimensional case to the multidimensional one already two decades ago, in many other domains of the theory the one-dimensional problems stay at the forefront. On the other hand, just as in the multidimensional case, the classical one-dimensional results for mono-fluids cannot be reproduced for multi-fluids automatically, in particular due to the essentially different structure of the viscous terms, namely, the presence of non-diagonal viscosity matrices; this difference in difficulty {\itshape does not depend on the dimension of the flow}.
\section{Statement of the 1D problem, formulation of the result, Lagrangian coordinates} Consider the rectangle $Q_{T}$ (here and below $Q_{t}=(0, 1)\times(0, t)$) with an arbitrary finite height $T>0$, and the system of equations ($i=1,\ldots,N$) \begin{align}\label{mamprok1d.newcontinuity}\partial_{t}\rho_{i}+\partial_{x}(\rho_{i} v)=0,\quad v=\frac{1}{N}\sum\limits_{i=1}^{N}u_{i},\end{align} \begin{align}\label{mamprok1d.newmomentum}\rho_{i}\left(\partial_{t}u_{i}+v\partial_{x}u_{i}\right)+K\partial_{x} \rho^{\gamma}= \sum\limits_{j=1}^N \mu_{ij}\partial_{xx}u_{j},\quad \rho=\sum\limits_{i=1}^{N}\rho_{i},\end{align} with the following initial and boundary conditions ($i=1,\ldots,N$): \begin{align}\label{mamprok1d.nachusl}\rho_{i}|_{t=0}=\rho_{0i}(x), \quad u_{i}|_{t=0}=u_{0i}(x),\quad x\in [0, 1],\end{align} \begin{align}\label{mamprok1d.boundvelocity}u_{i}|_{x=0}=u_{i}|_{x=1}=0,\quad t\in [0, T].\end{align} {\bfseries Definition 1.} A {\it strong solution} to the problem \eqref{mamprok1d.newcontinuity}--\eqref{mamprok1d.boundvelocity} is a collection of $2N$ functions $(\rho_{1},\ldots, \rho_{N}, u_{1},\ldots, u_{N})$ such that the equations \eqref{mamprok1d.newcontinuity},~\eqref{mamprok1d.newmomentum} are valid a.~e. in $Q_{T}$, the initial conditions \eqref{mamprok1d.nachusl}~are satisfied for a.~a. $x\in (0, 1)$, the boundary conditions \eqref{mamprok1d.boundvelocity} hold for a.~a.~$ t\in (0, T)$, and the following inequalities and inclusions hold ($i=1,\ldots,N$) $$\rho_{i}>0,\quad \rho_{i}\in L_{\infty}\big(0, T; W^{1}_{2}(0, 1)\big), \quad \partial_{t}\rho_{i}\in L_{\infty}\big(0, T; L_{2}(0, 1)\big),$$ $$u_{i}\in L_{\infty}\big(0, T; W^{1}_{2}(0, 1)\big)\bigcap L_{2}\big(0, T; W^{2}_{2}(0, 1)\big),\quad \partial_{t}u_{i} \in L_{2}(Q_{T}).$$ The result is formulated as the following theorem.
{\bfseries Theorem 2.} {\it Let the initial conditions in \eqref{mamprok1d.nachusl} satisfy the assumptions $$\rho_{0i}\in W^{1}_{2}(0,1),\quad \rho_{0i}>0,\quad u_{0i}\in \overset{\circ}{W^1_2}(0, 1),\quad i=1,\ldots,N,$$ the symmetric viscosity matrix $\textbf{M}$ be positive definite, the adiabatic index $\gamma>1$, and the constants $K,T>0$. Then there exists a unique strong solution to the problem \eqref{mamprok1d.newcontinuity}--\eqref{mamprok1d.boundvelocity} in the sense of Definition~1.} In the model case (when all densities are equal to each other), the proof of Theorem 2 is given in \cite{mamprok1d.semi3} and \cite{mamprok1d.prok17}. Let us comment on the scheme of the proof of Theorem~2. First of all, we prove the local in time solvability of the initial boundary value problem which is obtained from \eqref{mamprok1d.newcontinuity}--\eqref{mamprok1d.boundvelocity} via the Galerkin method (with respect to the spatial variable $x$) in the momentum equations~\eqref{mamprok1d.newmomentum}. The proof is carried out via the Leray--Schauder Fixed Point Theorem. Then, relying on the uniform estimates, we pass to the limit and establish the local solvability of the problem \eqref{mamprok1d.newcontinuity}--\eqref{mamprok1d.boundvelocity}, i.~e. on a small time interval $[0, t_{0}]$. In order to continue the solution over the whole interval $[0,T]$, we prove estimates in which the constants depend on the input data of the problem and on $T$, but not on $t_{0}$. Finally, we prove uniqueness in a standard way. The key problem in the proof of Theorem 2 is the verification of the strict positiveness and boundedness of the densities (note that this problem is the key one for the complete model \eqref{mamprok1d.21061710}, \eqref{mamprok1d.21061711} as well). We consider this problem in more detail in Section 4. In the study of the problem \eqref{mamprok1d.newcontinuity}--\eqref{mamprok1d.boundvelocity}, it is convenient to use the Lagrangian coordinates in parallel.
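As a quick sanity check of the mass Lagrangian coordinate $y(x,t)=\int_0^x\rho(s,t)\,ds$ used below: for any strictly positive density, $y$ is strictly increasing in $x$ and maps $[0,1]$ onto $[0,d]$ with $d=\int_0^1\rho\,dx$. A minimal numerical sketch with a hypothetical density profile (illustrative only, not taken from the paper):

```python
import numpy as np

# Hypothetical positive density profile on [0, 1] (illustrative only).
x = np.linspace(0.0, 1.0, 2001)
rho = 1.0 + 0.5 * np.sin(2.0 * np.pi * x) ** 2      # rho > 0 everywhere

# Mass coordinate y(x) = int_0^x rho(s) ds via the composite trapezoid rule.
y = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))))
d = y[-1]                                            # total mass d = int_0^1 rho dx

assert np.all(np.diff(y) > 0)   # y is strictly increasing, so x -> y is invertible
print(f"d = {d:.6f}")           # width of the Lagrangian rectangle Pi_T = (0, d) x (0, T)
```

Since $\rho>0$, the change of variables is invertible for each $t$, which is what makes the Lagrangian formulation below equivalent to the Eulerian one.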
Let us take $\displaystyle y(x,t)=\int\limits_{0}^{x}\rho(s,t)\,ds$ and~$t$ as new independent variables. Then the system \eqref{mamprok1d.newcontinuity}, \eqref{mamprok1d.newmomentum} takes the form $$\partial_{t}\rho_{i}+\rho\rho_{i}\partial_{y}v=0,\quad v=\frac{1}{N}\sum\limits_{i=1}^{N}u_{i},$$ $$\frac{\rho_{i}}{\rho}\partial_{t}u_{i}+K\partial_{y} \rho^{\gamma}=\sum\limits_{j=1}^N \mu_{ij}\partial_{y}(\rho\partial_{y}u_{j}),\quad i=1,\ldots,N,\quad \rho=\sum\limits_{i=1}^{N}\rho_{i}.$$ The domain $Q_{T}$ is transformed into the rectangle $\Pi_{T}=(0, d)\times(0, T)$, where $\displaystyle d=\int\limits_{0}^{1}\rho_{0}\,dx>0$, $\displaystyle \rho_{0}=\sum\limits_{i=1}^{N}\rho_{0i}$, and the initial and boundary conditions take the form ($i=1,\ldots,N$) $$\rho_{i}|_{t=0}=\widetilde{\rho}_{0i}(y), \quad u_{i}|_{t=0}=\widetilde{u}_{0i}(y),\quad y\in [0, d],$$ $$u_{i}|_{y=0}=u_{i}|_{y=d}=0,\quad t\in [0, T].$$ \section{Review of 1D viscous gas theory, strict positiveness and boundedness of the densities in the complete model \eqref{mamprok1d.21061710}, \eqref{mamprok1d.21061711}} Consider the initial boundary value problem for the 1D Navier--Stokes equations (in the Lagrangian coordinates): \begin{equation}\label{mamprok1d.1newcontinuity1lagr1d}\partial_{t}\rho+\rho^{2}\partial_{y}v=0,\end{equation} \begin{equation}\label{mamprok1d.1newmomentum1lagr1d}\partial_{t}v+K\partial_{y} \rho^{\gamma}=\mu\partial_{y}(\rho\partial_{y}v),\end{equation} \begin{equation}\label{mamprok1d.1nachusl1lagr1d}\rho|_{t=0}=\widetilde{\rho}_{0}, \quad v|_{t=0}=\widetilde{v}_{0},\quad y\in [0, d],\end{equation} \begin{equation}\label{mamprok1d.1boundvelocity1lagr1d}v|_{y=0}=v|_{y=d}=0,\quad t\in [0, T].\end{equation} The first a priori estimate is obtained via the standard arguments and takes the form: \begin{equation}\label{mamprok1d.lemma1lagr1d} \|v\|_{L_{\infty}\big(0, T;L_{2}(0, d)\big)}+\|\sqrt{\rho}\partial_{y}v\|_{L_{2}(\Pi_{T})}+\|\rho\|_{L_{\infty}\big(0,
T;L_{\gamma-1}(0,d)\big)}\leqslant C. \end{equation} The equation \eqref{mamprok1d.1newcontinuity1lagr1d} allows us to express $\rho\partial_{y}v=-\partial_{t}\ln\rho$ and to substitute it into~\eqref{mamprok1d.1newmomentum1lagr1d}: \begin{equation}\label{mamprok1d.eq01021711d}\partial_{ty}\ln\rho+K\partial_{y}\rho^{\gamma}=-\partial_{t}v.\end{equation} Let us multiply this equality by $\displaystyle \partial_{y}\ln\rho=:w$ and integrate over $y\in(0,d)$; then we obtain \begin{equation}\label{mamprok1d.eq01021721d} \frac{1}{2}\frac{d}{dt}\left(\int\limits_{0}^{d}w^{2}\, dy\right)+K\gamma\int\limits_{0}^{d}\rho^{\gamma}w^{2}\,dy=-\int\limits_{0}^{d}\left(\partial_{t} v\right)w\,dy. \end{equation} The right-hand side may be transformed via integration by parts due to \eqref{mamprok1d.eq01021711d}: \begin{equation}\label{mamprok1d.eq010217771d}-\int\limits_{0}^{d}\left(\partial_{t} v\right)w\,dy=-\frac{d}{dt}\left(\int\limits_{0}^{d} vw\,dy\right)+\int\limits_{0}^{d} \rho|\partial_{y}v|^{2}\,dy.\end{equation} Thus, after integration of \eqref{mamprok1d.eq01021721d} in $t$, using \eqref{mamprok1d.eq010217771d}, we find that $$\|w\|^{2}_{L_{2}(0, d)}+2K\gamma\int\limits_{0}^{t}\int\limits_{0}^{d}\rho^{\gamma}w^{2}\,dyd\tau \leqslant$$ $$\leqslant\|w_{0}\|^{2}_{L_{2}(0, d)}-2\int\limits_{0}^{d}vw\,dy+2\int\limits_{0}^{d}v_{0}w_{0}\,dy+2\int\limits_{0}^{t}\|\sqrt{\rho}\partial_{y}v\|^{2}_{L_{2}(0, d)}\,d\tau,$$ where $w_{0}=w(y,0)$. Using Cauchy's inequality and the estimate \eqref{mamprok1d.lemma1lagr1d}, we derive $$\|w(t)\|_{L_{2}(0,d)}\leqslant C\quad \forall\, t\in[0,T],$$ i.~e. the norm of the derivative $\partial_{y}\ln\rho$ in $L_{2}(0,d)$ is bounded uniformly in $t\in[0,T]$. The equation \eqref{mamprok1d.1newcontinuity1lagr1d} and the conditions \eqref{mamprok1d.1nachusl1lagr1d}, \eqref{mamprok1d.1boundvelocity1lagr1d} imply that for every $t\in[0,T]$ the equality $\rho(z(t), t)=d$ holds with some $z(t)\in[0, d]$ (indeed, $\int_{0}^{d}dy/\rho=\int_{0}^{1}dx=1$, so the mean value of $1/\rho$ over $(0,d)$ equals $1/d$).
Hence, we can use the representation $${\ln{\rho(y,t)}}={\ln{\rho(z(t),t)}}+\int\limits_{z(t)}^{y}\partial_{s}\ln{\rho(s,t)}\, ds.$$ This yields $|\ln\rho(y,t)|\leqslant |\ln{d}|+\sqrt{d}\|w\|_{L_{2}(0,d)}\leqslant C$, and hence $$0<C^{-1}\leqslant\rho(y,t)\leqslant C.$$ For the complete multi-fluid model \eqref{mamprok1d.21061710}, \eqref{mamprok1d.21061711}, difficulties appear even at the stage of introducing the Lagrangian coordinates. In fact, if the Lagrangian coordinates are related to the velocity of one constituent (as was done in \cite{mamprok1d.kazhpetr78}), say, the one with number~$m$, then we come to the following system of equations $(i=1,\ldots,N)$: $$\partial_{t}\rho_{i}+\rho_{m}(u_{i}-u_{m})\partial_{y}\rho_{i}+\rho_{i}\rho_{m}\partial_{y}u_{i}=0,$$ $$\frac{\rho_{i}}{\rho_{m}}\partial_{t}u_{i}+\rho_{i}(u_{i}-u_{m})\partial_{y}u_{i}+K\partial_{y} \rho^{\gamma}=\sum\limits_{j=1}^N \mu_{ij}\partial_{y}(\rho_{m}\partial_{y}u_{j})+ \frac{1}{\rho_{m}}\sum\limits_{j=1}^{N}a_{ij}(u_{j}-u_{i}).$$ The analysis of this system via the 1D viscous-gas technique is possible only in the case of a diagonal viscosity matrix. For the modified multi-fluid model \eqref{mamprok1d.newcontinuity}, \eqref{mamprok1d.newmomentum}, the direct application of the technique also fails. However, as we have seen, it is possible to overcome all difficulties and to obtain all necessary estimates. \section{Open problems} Let us identify several open problems in the 1D theory of viscous compressible multi-fluids: \begin{itemize} \item General model with complete viscosity matrix. \item Heat-conductive models. \item Asymptotics as $t\to+\infty$. \item Other boundary value problems. \end{itemize} \end{document}
\begin{document} \title{Coverings, Composites and Cables of Virtual Strings} \author{Andrew Gibson} \address{ Department of Mathematics, Tokyo Institute of Technology, Oh-okayama, Meguro, Tokyo 152-8551, Japan } \email{[email protected]} \date{\today} \begin{abstract} A virtual string can be defined as an equivalence class of planar diagrams under certain kinds of diagrammatic moves. Virtual strings are related to virtual knots in that a simple operation on a virtual knot diagram produces a diagram for a virtual string. \par In this paper we consider three operations on a virtual string or virtual strings which produce another virtual string, namely covering, composition and cabling. In particular we study virtual strings unchanged by the covering operation. We also show how the based matrix of a composite virtual string is related to the based matrices of its components, correcting a result by Turaev. Finally we investigate what happens under cabling to some invariants defined by Turaev. \end{abstract} \keywords{virtual strings, virtual knots} \subjclass[2000]{Primary 57M25; Secondary 57M99} \thanks{This work was supported by a Scholarship from the Ministry of Education, Culture, Sports, Science and Technology of Japan. Most of the contents of this paper come from part of the author's Master's thesis submitted to the Tokyo Institute of Technology in March 2008 \cite{Gibson:mthesis}.} \maketitle \section{Introduction} Kauffman introduced the idea of virtual knots in \cite{Kauffman:VirtualKnotTheory}. Virtual knots are defined as equivalence classes of virtual knot diagrams under diagrammatic moves which include the usual Reidemeister moves and other similar moves involving virtual crossings. \par A virtual string diagram is a virtual knot diagram where the over and under crossing information at the real crossings has been removed. In other words, the real crossings are treated simply as double points. We call this operation \emph{flattening}. 
By flattening the diagrammatic moves given by Kauffman we can derive moves for virtual string diagrams. A virtual string is then an equivalence class of virtual string diagrams under these moves. A complete definition will be given in Section~\ref{sec:virtual_strings}. In some parts of the literature virtual strings are known by other names. For example in \cite{Hrencecin/Kauffman:Filamentations} they are called flat knots or flat virtual knots, in \cite{Kadokami:Non-triviality} projected virtual knots, and in \cite{CKS:StableEquivalence} universes of virtual knots. \par In \cite{Turaev:2004}, Turaev defines virtual strings in terms of diagrams consisting of a circle and a finite set of ordered pairs of distinct points on the circle. This definition can be shown to be equivalent to the definition given above. In that paper, Turaev defines the $u$-polynomial and the primitive based matrix which are invariants of virtual strings. We recall these definitions in this paper. \par Two virtual knot diagrams representing the same virtual knot are related by a sequence of moves. By flattening the moves we get a sequence of flattened moves relating the corresponding flattened diagrams. From this we can see that the virtual string derived from the flattening of a particular virtual knot diagram is actually an invariant of the virtual knot which the diagram represents. Of course, many virtual knots may have the same underlying virtual string. For example, the virtual string underlying every classical knot is the trivial virtual string. On the other hand, the virtual string underlying Kishino's knot (Figure~\ref{fig:kishino}) is not trivial \cite{Fenn/Turaev:WeylAlgebras} and this shows that Kishino's knot is not a trivial virtual knot. Kishino, using a different method, was the first to prove the non-triviality of this virtual knot \cite{Kishino/Shin:VirtualKnots}. 
Various other methods of proof have been found and these are summarised in Problem~1 of the list of problems in \cite{FKM:Unsolved}. However, we note that the method used by Kadokami to prove non-triviality of the virtual string underlying Kishino's knot \cite{Kadokami:Non-triviality} is based on a theorem in that paper in which we found a problem. This problem is explained in \cite{Gibson:tabulating-vs}. \begin{figure} \caption{A virtual knot known as Kishino's knot} \label{fig:kishino} \end{figure} \par In this paper we study three different operations on virtual strings which produce new virtual strings. \par In \cite{Turaev:2004}, for each non-negative integer $r$, Turaev defined an operation on a virtual string called an $r$-covering which produces another virtual string. Thus we can consider an $r$-covering to be a map from the set of virtual strings to itself. The result of an $r$-covering is an invariant of the original virtual string \cite{Turaev:2004}. Hence invariants of the new virtual string may be considered as invariants of the original virtual string. Of course, it is possible to take the covering of the new virtual string and get a hierarchy of virtual strings derived from the original one. We study certain questions about this operation. We show that $r$-covering is surjective for all $r$, and that when $r$ is not $1$, there are infinitely many virtual strings that map to any given virtual string under $r$-covering. We also show that for all $r$ the set of virtual strings unchanged by the $r$-covering operation is infinite. \par Given two virtual string diagrams we can make a composite virtual string diagram by cutting the curve in each diagram and joining them to each other to make a single curve. This operation is not well-defined for virtual strings as the result depends on the points where the curves in the original diagrams were cut.
However, it is still possible to examine how the invariants of virtual strings created in this way are related to the invariants of the virtual strings from which they are constructed. \par For classical knots, invariants of cables can sometimes be used to distinguish knots for which the same invariants, calculated directly on the knots themselves, coincide. We define a cabling of a virtual string and study what happens to Turaev's invariants under this operation. We discovered that we do not gain any more information from Turaev's invariants in this way. We show how Turaev's invariants for the cable of a virtual string can be calculated from the same invariant for the virtual string itself. \par In Section~\ref{sec:virtual_strings} we give a formal definition of virtual strings. In this paper we will often use Turaev's nanoword notation \cite{Turaev:KnotsAndWords} to represent virtual strings. This notation is explained in Section~\ref{sec:nanowords}. In the same section we also recall the definition of Turaev's $u$-polynomial. \par In Section~\ref{sec:th_matrices} we define the head and tail matrices of a virtual string diagram. We use these when we recall the definition of Turaev's based matrices in Section~\ref{sec:based_matrices}. From a based matrix we can derive a primitive based matrix which is another invariant of virtual strings. \par In Section~\ref{sec:coverings} we recall the definition of covering, make some observations about the operation and define some invariants of virtual strings from it. Then in Section~\ref{sec:fixed} we consider fixed points under covering. In Section~\ref{sec:geomcover} we explain a geometric interpretation of the covering operation. \par In Section~\ref{sec:composite}, we show how the based matrix of a composite virtual string is related to the based matrices of its components. \par In Section~\ref{sec:cable} we consider cables of virtual strings.
We show that the $u$-polynomial of a cable can be calculated directly from the $u$-polynomial of the original virtual string. We also show a corresponding result for based matrices. In the same section we also show a relationship between coverings of cables and cables of coverings. We conclude that if we have a pair of virtual strings with the same $u$-polynomial, based matrix and coverings, then the corresponding invariants of an \textcable{n} of each virtual string will also be equivalent. \section{Virtual strings}\label{sec:virtual_strings} A virtual string diagram is an oriented circle immersed in a plane. Self-intersections are permitted, but at most two arcs may cross at any particular point and they must cross transversally. We call such self-intersections crossings and we allow two kinds: real crossings, which are unmarked; and virtual crossings, each of which is marked with a small circle (see Figure~\ref{fig:virtualcrossings}). An example of a virtual string diagram is given in Figure~\ref{fig:diagram31}, where the orientation of the circle is marked by an arrow. \begin{figure} \caption{The two kinds of crossing: real (left) and virtual (right)} \label{fig:virtualcrossings} \end{figure} \begin{figure} \caption{A non-trivial virtual string} \label{fig:diagram31} \end{figure} \par Moves have been defined for virtual string diagrams \cite{Kauffman:VirtualKnotTheory}. There are moves involving only real crossings which are shown in Figure~\ref{fig:reidermeister}. These are called the flattened Reidemeister moves because they are like the standard Reidemeister moves of knot theory but with crossings flattened (see, for example, \cite{Lickorish:1997} for more information about standard Reidemeister moves). There is a similar set of moves involving only virtual crossings. We call these the virtual flattened Reidemeister moves and they are shown in Figure~\ref{fig:vreidermeister}. Lastly there is a move that involves both real and virtual crossings.
It is shown in Figure~\ref{fig:mixedmove} and we call it the mixed move. Collectively, flattened Reidemeister moves, virtual flattened Reidemeister moves and the mixed move are called homotopy moves. \par \begin{figure} \caption{The flattened Reidemeister moves} \label{fig:reidermeister} \end{figure} \begin{figure} \caption{The virtual flattened Reidemeister moves} \label{fig:vreidermeister} \end{figure} \begin{figure} \caption{The mixed move} \label{fig:mixedmove} \end{figure} If a pair of virtual string diagrams are related by a finite sequence of homotopy moves and ambient isotopies in the plane, we say that they are equivalent under homotopy, or, more simply, that they are homotopic. It is not hard to see that equivalence under homotopy is an equivalence relation. Virtual strings are the equivalence classes of virtual string diagrams under this relation. We say that a virtual string $\Gamma$ is represented by a particular diagram $D$ if the equivalence class under homotopy containing $D$ is $\Gamma$. \par Let $\mathcal{VS}$ denote the set of equivalence classes, under homotopy, of virtual strings that can be represented by a diagram with a finite number of double points. In this paper we only consider virtual strings that are in $\mathcal{VS}$. \par There is a unique virtual string represented by a diagram with no double points, real or virtual. It is called the trivial virtual string and is written $0$. \par We can think of virtual strings in another way. We consider pairs $(S,D)$ where $S$ is a compact oriented surface and $D$ is an immersion of an oriented circle in $S$. As before we only allow self-intersections in $D$ to be transverse double points. This time all such crossings are real. Virtual crossings are not permitted. \par For a pair $(S,D)$, we define $N(D)$ to be the regular neighbourhood of $D$ in $S$. A stable homeomorphism from a pair $(S_1,D_1)$ to a pair $(S_2,D_2)$ is an orientation preserving homeomorphism from $N(D_1)$ to $N(D_2)$. 
Here orientation preserving means both the orientation of the surface and of the curve itself are preserved. Two pairs are stably equivalent if there exists a finite sequence of stable homeomorphisms and flattened Reidemeister moves in the surface transforming one pair to the other. \par Clearly stable equivalence is an equivalence relation. There is a bijection between the set of equivalence classes under this relation and the set of virtual strings. This was shown by Kadokami in \cite{Kadokami:Non-triviality}, following a result of Carter, Kamada and Saito which relates virtual knots to non-virtual diagrams on oriented surfaces \cite{CKS:StableEquivalence}. The bijection can be visualized as follows. From a virtual string diagram we can construct a pair $(S,D)$ by replacing virtual crossings with handles in the plane and routing one arc over the handle (see Figure~\ref{fig:handle}). On the other hand, with some care, we can take a pair $(S,D)$ and project $D$ onto a plane so that the only self-intersection points are double points. Any double points that do not correspond to double points in $D$ are marked as virtual. The result is a virtual string diagram. \begin{figure} \caption{Changing a surface with a virtual crossing (left) to a surface with a hollow handle and no crossing (right)} \label{fig:handle} \end{figure} \par The \emph{canonical surface} of a virtual string diagram $D$ is a surface of minimal genus containing $D$. It is unique up to homeomorphism of the surface. Such a surface can be easily constructed. First construct a surface containing $D$ using the process described above. Then cut $N(D)$ from the surface and construct a new surface by gluing a disk to each boundary component of $N(D)$. The result is the canonical surface of $D$. This construction is described in several places, for example \cite{Kadokami:Non-triviality} or \cite{Turaev:2004}.
\par If we allow diagrams with multiple oriented circles then the equivalence classes of these diagrams under the flattened Reidemeister moves form a generalization of virtual strings. We call these multi-component virtual strings. This generalization is analogous to the generalization of virtual knots to virtual links. Indeed, multi-component virtual strings can be viewed as flattened virtual links. In general we only consider single component virtual strings in this paper. However, multi-component virtual strings appear in Section~\ref{sec:geomcover}. \section{Nanowords}\label{sec:nanowords} In \cite{Turaev:Words}, Turaev defined the concept of a nanoword. We recall the definition here. \par A \emph{word} is an ordered sequence of elements of a set. We call the elements of the set \emph{letters}. A \emph{Gauss word} is a word where each letter appears exactly twice or not at all. For some fixed set $\psi$, a \emph{nanoword over $\psi$} is defined to be a Gauss word with a map from the set of letters appearing in the Gauss word to $\psi$ \cite{Turaev:Words}. \par In \cite{Turaev:KnotsAndWords} Turaev showed that we can represent a virtual string diagram as a nanoword over the set $\{a,b\}$. As we will only be interested in this kind of nanoword in this paper, from now on we will write \emph{nanoword} to mean a nanoword over $\{a,b\}$. We now explain how to associate such a nanoword to a virtual string diagram. \par Given a virtual string diagram, we label the real crossings and introduce a base point on the curve at some point other than a crossing. Starting at the base point, we follow the curve according to its orientation and record the labels of the crossings as we pass through them. When we get back to the base point we have passed through each crossing exactly twice and the sequence of labels we have recorded is a Gauss word. We assign a map from the letters in the Gauss word to the set $\{a,b\}$ in the following way.
For a given letter we consider its corresponding crossing. If, during our traversal of the curve, the second time we passed through the crossing, we crossed the first arc from right to left, we map the letter to $a$. If we crossed the first arc from left to right, we map the letter to $b$. These two cases are shown in Figure~\ref{fig:crossing}. \par \begin{figure} \caption{The two types of real crossing} \label{fig:crossing} \end{figure} \begin{figure} \caption{The flattened trefoil with base point and crossing points labelled} \label{fig:trefoil} \end{figure} As an example we calculate the nanoword corresponding to the diagram of a flattened trefoil shown in Figure~\ref{fig:trefoil}. Starting at the base point $O$, traversing the curve gives the Gauss word $ABCABC$. By comparing each crossing to those in Figure~\ref{fig:crossing} we define the map from $\{A,B,C\}$ to $\{a,b\}$. In this case $A$ and $C$ map to $a$ and $B$ maps to $b$. \par As $a$ and $b$ encode the crossing type we sometimes refer to them as types. Thus in our example the type of $A$ is $a$ and the type of $B$ is $b$. Following Turaev \cite{Turaev:Words}, we use the notation $\xType{X}$ to mean the type of the letter $X$. So, in this example $\xType{A}$ is $a$, $\xType{B}$ is $b$ and $\xType{C}$ is $a$. \par The map from the letters to the types can be represented in a compact form by listing, in alphabetical order of the letters, the images under the map. So in this example we can represent the map by $aba$ which expresses the fact that $A$ maps to $a$, $B$ maps to $b$ and $C$ maps to $a$. The nanoword from our example can then be written simply as $\nanoword{ABCABC}{aba}$. We will use this format often in this paper. \par It is sometimes useful to draw an arrow diagram of a nanoword. Here we write the letters of the Gauss word in order and then join each pair of identical letters by an arrow. The direction of the arrow indicates the crossing type.
If the arrow goes from left to right, the crossing is a type $a$ crossing. If the arrow goes from right to left, the crossing is a type $b$ crossing. As an example, an arrow diagram of the nanoword $\nanoword{ABCABC}{aba}$ is shown in Figure~\ref{fig:abcabc}. \begin{figure} \caption{Arrow diagram of nanoword $\nanoword{ABCABC}{aba}$} \label{fig:abcabc} \end{figure} \par The rank of a nanoword $\alpha$ is the number of different letters in the Gauss word \cite{Turaev:Words}. This is the number of real crossings in the virtual string diagram which $\alpha$ represents. We write the rank of $\alpha$ as $\rank(\alpha)$. In the previous example the rank of the nanoword is $3$. \par An isomorphism of nanowords \cite{Turaev:Words} is a bijection $i$ from the letters of a nanoword $\alpha_1$ to the letters of another nanoword $\alpha_2$ which satisfies the following two requirements. Firstly, that $i$ maps the $n$th letter of $\alpha_1$ to the $n$th letter of $\alpha_2$ for all $n$. Secondly, for each letter $X$ in $\alpha_1$, $\xType{X}$ is equal to $\xType{i(X)}$. Two nanowords $\alpha_1$ and $\alpha_2$ are isomorphic if there exists such an isomorphism between them. Diagrammatically, if we relabel the real crossings of a diagram, the nanowords associated with the diagram before and after the relabelling will be isomorphic. \par Note that the nanoword representation is dependent on the base point that we picked. To remove dependence on the base point Turaev defined a shift move \cite{Turaev:KnotsAndWords}. \par A shift move takes the first letter in the nanoword and moves it to the end of the nanoword. In the new nanoword, the moved letter is mapped to the opposite type. The inverse of the shift move takes the last letter in the nanoword and moves it to the beginning of the nanoword. Again, the type of the moved letter is swapped.
We can write the move like this: \begin{equation*} AxAy \longleftrightarrow xA^\prime yA^\prime \end{equation*} where $A$ and $A^\prime$ are arbitrary letters which map to opposite types and $x$ and $y$ represent arbitrary sequences of letters such that the words on either side of the move are Gauss words. \par Turaev defined homotopy moves for nanowords over any set \cite{Turaev:Words}. Here we describe the moves in the specific case of nanowords over $\{a,b\}$. When we describe moves on nanowords we use the following conventions. Arbitrary individual letters are represented by upper case letters. Lower case letters $x$, $y$, $z$ and $t$ are used to represent sequences of letters. These sequences are arbitrary under the constraint that each side of a move should be a Gauss word. \par Homotopy move 1 (H1): \begin{equation*} xAAy \longleftrightarrow xy \end{equation*} where $\xType{A}$ is either $a$ or $b$. \par Homotopy move 2 (H2): \begin{equation*} xAByBAz \longleftrightarrow xyz \end{equation*} where $\xType{A}$ is not equal to $\xType{B}$. \par Homotopy move 3 (H3): \begin{equation*} xAByACzBCt \longleftrightarrow xBAyCAzCBt \end{equation*} where $\xType{A}$, $\xType{B}$ and $\xType{C}$ are all the same. \par Note that these moves correspond to flattened Reidemeister moves. A detailed explanation of this correspondence is given in \cite{Gibson:mthesis}. \par Turaev derived some simple moves from moves H1, H2 and H3. They appear in Lemmas~3.2.1 and 3.2.2 in \cite{Turaev:Words}.
We quote them here: \begin{equation*} \begin{array}{lll} \textrm{H2a:}\quad & xAByABz \longleftrightarrow xyz & \textrm{where $\xType{A}\ne\xType{B}$}, \\ \textrm{H3a:}\quad & xAByCAzBCt \longleftrightarrow xBAyACzCBt & \textrm{where $\xType{A}=\xType{C}\ne\xType{B}$}, \\ \textrm{H3b:}\quad & xAByCAzCBt \longleftrightarrow xBAyACzBCt & \textrm{where $\xType{A}=\xType{B}\ne\xType{C}$}, \\ \textrm{H3c:}\quad & xAByACzCBt \longleftrightarrow xBAyCAzBCt & \textrm{where $\xType{B}=\xType{C}\ne\xType{A}$}. \end{array} \end{equation*} \par If there is a finite sequence of the homotopy moves H1, H2 and H3, shift moves and isomorphisms which transforms one nanoword into another, then those two nanowords are said to be \emph{homotopic} \cite{Turaev:Words}. This relation is an equivalence relation. Turaev showed that this idea of homotopy of nanoword representations of virtual strings and the usual homotopy of virtual strings are equivalent \cite{Turaev:KnotsAndWords}. That is, two nanowords $\alpha$ and $\beta$ are homotopic if and only if the virtual strings $\Gamma_\alpha$ and $\Gamma_\beta$ they represent are homotopic. \par The homotopy rank of a nanoword $\alpha$, written $\hr(\alpha)$, is the minimal rank of all nanowords homotopic to $\alpha$ \cite{Turaev:Words}. This is a homotopy invariant of $\alpha$. The homotopy rank of a virtual string $\Gamma$, $\hr(\Gamma)$, is defined to be the homotopy rank of any nanoword $\alpha$ representing $\Gamma$. Clearly this is a homotopy invariant of $\Gamma$. Geometrically, this invariant is the minimum number of real crossings needed to draw the virtual string as a virtual string diagram. \par In \cite{Turaev:2004}, Turaev defined an invariant for virtual strings called the $u$-polynomial. We recall the definition here. \par We fix a nanoword $\alpha$. Two distinct letters of $\alpha$, $A$ and $B$, are said to be linked if $A$ and $B$ alternate in $\alpha$ and unlinked otherwise.
Using this concept the linking number of $A$ and $B$, $\link{A}{B}$ is defined as follows. If $A$ and $B$ are unlinked, their linking number is zero. If $A$ and $B$ are linked, their linking number is either $1$ or $-1$ depending on the order that $A$ and $B$ appear in $\alpha$ and on the types of $A$ and $B$: \begin{equation*} \link{A}{B}= \begin{cases} 0 & \text{$A$ and $B$ are unlinked}, \\ 1 & \text{$A$ and $B$ are linked with pattern $\linkpattern{A}{B}{A}{B}$, $\xType{A}=\xType{B}$}, \\ -1 & \text{$A$ and $B$ are linked with pattern $\linkpattern{A}{B}{A}{B}$, $\xType{A}\neq\xType{B}$}, \\ 1 & \text{$A$ and $B$ are linked with pattern $\linkpattern{B}{A}{B}{A}$, $\xType{A}\neq\xType{B}$}, \\ -1 & \text{$A$ and $B$ are linked with pattern $\linkpattern{B}{A}{B}{A}$, $\xType{A}=\xType{B}$}. \end{cases} \end{equation*} For completeness, the linking number of any letter $X$ with itself is defined to be $0$. \par Note that the linking number is well-defined under the shift move and that \begin{equation}\label{eqn:link_skew} \link{A}{B} = -\link{B}{A} \end{equation} for all letters $A$ and $B$ appearing in $\alpha$. \par For any letter $X$ in $\alpha$, $\n(X)$ is defined to be the sum of the linking numbers of $X$ with each of the letters in $\alpha$: \begin{equation*} \n(X) = \sum_{Y \in \alpha}\link{X}{Y}. \end{equation*} Note that as $\abs{\link{X}{Y}}$ is less than or equal to $1$ for all $Y$ and $\link{X}{X}$ is $0$, $\abs{\n(X)}$ is less than $\rank(\alpha)$. We also note that \begin{equation}\label{eqn:n_zero_sum} \sum_{X \in \alpha}\n(X) = \sum_{X \in \alpha}\sum_{Y \in \alpha}\link{X}{Y} = 0, \end{equation} where the left hand equality is true by definition and the right hand equality is given by \eqref{eqn:link_skew}. \par We remark that $\n(X)$ can be interpreted geometrically by considering a diagram corresponding to $\alpha$. Note that we can orient $X$ so that it looks like the crossing on the left of Figure~\ref{fig:virtualcrossings}. 
By removing a small neighbourhood of the crossing $X$, the curve is split into two segments. We label the segment starting at the right hand outgoing arc $p$ and the segment starting at the left hand outgoing arc $q$. Then $\n(X)$ is the number of times $q$ crosses $p$ from right to left minus the number of times $q$ crosses $p$ from left to right. \par For a positive integer $k$ we define $u_k(\alpha)$ as follows: \begin{equation*} u_k(\alpha) = \sharp \lbrace X\in \alpha | \n(X)=k \rbrace - \sharp \lbrace X\in \alpha | \n(X)=-k \rbrace . \end{equation*} Here $\sharp$ indicates the number of elements in the set. Turaev showed that $u_k(\alpha)$ is invariant under homotopy \cite{Turaev:2004}. He combined these invariants into a polynomial called the $u$-polynomial of $\alpha$ which is defined as \begin{equation*} u_{\alpha}(t) = \sum_{k \geq 1}u_k(\alpha)t^k. \end{equation*} \par As each $u_k(\alpha)$ is invariant under homotopy, it is clear that the $u$-polynomial is also a homotopy invariant. The $u$-polynomial of a virtual string $\Gamma$ is defined to be the $u$-polynomial of some nanoword $\alpha$ representing $\Gamma$. \par We note that for the trivial virtual string $u_{0}(t)$ is $0$. We also mention that Theorem~3.4.1 of \cite{Turaev:2004} states that an integral polynomial $u(t)$ can be realized as the $u$-polynomial of a virtual string if and only if $u(0)=u^{\prime}(1)=0$. \par To illustrate the use of the $u$-polynomial we reproduce, in nanoword terminology, a calculation of the $u$-polynomial of a 2-parameter family of virtual strings which was originally made by Turaev in Section~3.3, Exercise~1 of \cite{Turaev:2004}. We will use these virtual strings later in this paper.
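The quantities defined above are purely combinatorial and easy to compute by machine. The following Python sketch is an illustration only; the encoding of a nanoword as a string of letters together with a dictionary of types is ours, not notation from the text.

```python
# Illustrative sketch: a nanoword is encoded as a Gauss word (a string in
# which each letter occurs exactly twice) plus a map letter -> type in {a, b}.

def positions(word, x):
    """Indices of the two occurrences of the letter x in the Gauss word."""
    return [i for i, y in enumerate(word) if y == x]

def link(word, types, x, y):
    """Linking number l(x, y): 0 if x and y do not alternate, otherwise
    +/-1 depending on the pattern xyxy or yxyx and on the types of x, y."""
    if x == y:
        return 0
    x1, x2 = positions(word, x)
    y1, y2 = positions(word, y)
    if not (x1 < y1 < x2 < y2 or y1 < x1 < y2 < x2):
        return 0  # occurrences are nested or disjoint: unlinked
    same_type = types[x] == types[y]
    if x1 < y1:  # pattern x y x y
        return 1 if same_type else -1
    return -1 if same_type else 1  # pattern y x y x

def n_value(word, types, x):
    """n(x): sum of the linking numbers of x with every letter of the word."""
    return sum(link(word, types, x, y) for y in set(word))

def u_polynomial(word, types):
    """u-polynomial as a dict {k: u_k}, zero coefficients omitted."""
    ns = [n_value(word, types, x) for x in set(word)]
    coeffs = {}
    for k in range(1, len(ns) + 1):
        u_k = ns.count(k) - ns.count(-k)
        if u_k:
            coeffs[k] = u_k
    return coeffs
```

For instance, for the nanoword $\nanoword{ABCBAC}{aaa}$ the function `u_polynomial` returns `{1: 2, 2: -1}`, that is $u(t)=2t-t^2$.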
\begin{ex}\label{ex:alpha_pq} Consider the virtual string $\Gamma_{p,q}$ for positive integers $p$ and $q$ represented by the nanoword $\alpha_{p,q}$ given by \begin{equation*} X_1X_2\dotso X_pY_1Y_2\dotso Y_qX_p\dotso X_2X_1Y_q\dotso Y_2Y_1 \end{equation*} where $\xType{X_i}=a$ for all $i$ and $\xType{Y_j}=a$ for all $j$. Then $\n(X_i)$ is equal to $q$ for all $i$ and $\n(Y_j)$ is equal to $-p$ for all $j$. So the $u$-polynomial for $\alpha_{p,q}$, and thus $\Gamma_{p,q}$, is $pt^q-qt^p$. When $p$ and $q$ are not equal, the $u$-polynomial is non-zero and so $\Gamma_{p,q}$ is non-trivial. In this case, $\Gamma_{p,q}$ is homotopic to $\Gamma_{r,s}$ only if $p$ equals $r$ and $q$ equals $s$. \par By use of another invariant, the based matrix of a virtual string (which we review in Section~\ref{sec:based_matrices}), Turaev showed that the virtual strings $\Gamma_{p,p}$ are non-trivial and mutually distinct under homotopy for $p$ greater than or equal to $2$ (Section~6.4~(1) of \cite{Turaev:2004}). Using a shift move followed by homotopy move H2a on $\alpha_{1,1}$, it is easy to show that $\Gamma_{1,1}$ is homotopically trivial (this is also mentioned in Section~3.3, Exercise~1 of \cite{Turaev:2004}). \end{ex} \par We now give a definition of the composition of two nanowords $\alpha$ and $\beta$ which is written $\alpha\beta$. This operation was originally defined by Turaev in \cite{Turaev:Words} where he called it multiplication. \par If the Gauss words of $\alpha$ and $\beta$ have no letters in common, the Gauss word of $\alpha\beta$ is the concatenation of the Gauss words of $\alpha$ and $\beta$. The map from the letters of $\alpha\beta$ to $\{a,b\}$ is defined by using the map belonging to $\alpha$ for letters coming from $\alpha$ and the map belonging to $\beta$ for letters coming from $\beta$.
\par If the Gauss words of $\alpha$ and $\beta$ do have letters in common, we can use an isomorphism to transform $\beta$ to a nanoword $\beta^\prime$ that does not have letters in common with $\alpha$. Then the composition of $\alpha$ and $\beta$ is defined to be the composition of $\alpha$ and $\beta^\prime$. The following example demonstrates this operation. \begin{ex} Let $\alpha$ be the nanoword $\nanoword{ABACDBDC}{abbb}$ and $\beta$ be the nanoword $\nanoword{ABACBC}{abb}$. As letters appearing in $\beta$ also appear in $\alpha$, we use an isomorphism to get a new nanoword $\nanoword{EFEGFG}{abb}$ which is isomorphic to $\beta$. We call this new nanoword $\beta^\prime$. The composition of $\alpha$ and $\beta$ is then the composition of $\alpha$ and $\beta^\prime$. We get the nanoword $\nanoword{ABACDBDCEFEGFG}{abbbabb}$. \end{ex} \par In \cite{Turaev:2004} Turaev noted that \begin{equation}\label{eqn:u-poly_concatenation} u_{\alpha\beta}(t) = u_{\alpha}(t) + u_{\beta}(t). \end{equation} This is because in $\alpha\beta$, letters in $\beta$ do not link any letters in $\alpha$. Thus for any letter $X$ in $\alpha$, $\n(X)$ in $\alpha\beta$ is equal to $\n(X)$ in $\alpha$. Similarly, for any letter $X$ in $\beta$, $\n(X)$ in $\alpha\beta$ is equal to $\n(X)$ in $\beta$. \par Composition of nanowords is not well-defined up to homotopy. For example, we take $\gamma$ to be the trivial nanoword and $\delta$ to be the nanoword $\nanoword{ABAB}{aa}$. Then $\delta$ is homotopic to $\gamma$ by a shift move followed by the move H2a. On the other hand, the composition $\gamma\gamma$ is clearly trivial, yet it can be shown using primitive based matrices (which we recall later) that $\delta\delta$ is non-trivial. In fact, $\delta\delta$ represents the virtual string which underlies Kishino's knot shown in Figure~\ref{fig:kishino}. \section{Head and tail matrices of a nanoword}\label{sec:th_matrices} Given a nanoword $\alpha$, we define $\mathcal{A}$ to be the set of letters in $\alpha$.
Then we define two maps, $t:\mathcal{A}\times\mathcal{A}\rightarrow\{0,1\}$ and $h:\mathcal{A}\times\mathcal{A}\rightarrow\{0,1\}$. \par We set $t(X,X)=h(X,X)=0$ for all $X$ in $\mathcal{A}$. To define $t(X,Y)$ and $h(X,Y)$, where $X$ and $Y$ are different elements in $\mathcal{A}$, we use the arrow diagram of the nanoword $\alpha$. Starting at the letter $X$ at the tail of the arrow joining the two occurrences of $X$, we move right along the nanoword, noting the letters that we move past. If we reach the end of the nanoword, we return to the start of the nanoword and continue moving rightwards noting letters. We keep moving until the letter $X$ at the head of the arrow is found. If we noted the letter $Y$ at the tail of the arrow joining the two occurrences of $Y$, $t(X,Y)$ is $1$, otherwise $t(X,Y)$ is $0$. Similarly if we noted the $Y$ at the head of the arrow, $h(X,Y)$ is $1$, otherwise $h(X,Y)$ is $0$. \par By assigning an order to the letters in $\mathcal{A}$, we can represent the maps as matrices. We call the matrix representing $t$ the tail matrix and write it $T(\alpha)$. Similarly the matrix representing $h$ is called the head matrix and is written $H(\alpha)$. Note that by picking a different order of the letters in $\mathcal{A}$ we may well get a different pair of matrices. \par \begin{figure} \caption{Arrow diagram of nanoword $\nanoword{ABCBCA}{bab}$} \label{fig:abcbca} \end{figure} \begin{ex} Consider the nanoword $\nanoword{ABCBCA}{bab}$, for which the arrow diagram is given in Figure~\ref{fig:abcbca}. Ordering the elements alphabetically we get this tail matrix: \begin{equation*} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix} \end{equation*} and this head matrix: \begin{equation*} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.
\end{equation*} \end{ex} \par We note that we can calculate the linking number between $X$ and $Y$ in $\alpha$ from the head and tail matrices by \begin{equation}\label{eqn:linking} l(X,Y) = t(X,Y) - h(X,Y). \end{equation} \par If $\rank(\alpha)$ is $n$ then $T(\alpha)$ and $H(\alpha)$ are in the set of $n \times n$ matrices for which all the diagonal entries are $0$ and all the other entries are either $0$ or $1$. For a given $n$ we can take any two matrices $T$ and $H$ in this set and ask whether a nanoword $\alpha$ exists for which $T(\alpha)$ is $T$ and $H(\alpha)$ is $H$. \par A simple restriction comes from the fact that the linking number is skew-symmetric. Using \eqref{eqn:linking}, any matrices $T$ and $H$ corresponding to a nanoword $\alpha$ must satisfy \begin{equation}\label{eqn:restriction} T-H = -\transpose{(T-H)}, \end{equation} where $\transpose{M}$ denotes the transpose of a matrix $M$. Unfortunately this is not a sufficient condition. For example, consider the pair of matrices $T$ and $H$ given by \begin{equation*} \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \text{ and } \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}. \end{equation*} These matrices satisfy \eqref{eqn:restriction} but a simple combinatorial check shows that there is no 3-letter nanoword $\alpha$ to which they correspond. \section{Based matrices of virtual strings}\label{sec:based_matrices} In \cite{Turaev:2004}, Turaev introduced the concept of a based matrix and described how to associate a based matrix with a virtual string. We briefly recall the definitions here. \par Let $G$ be a finite set with a special element $s$. For some abelian group $H$ let $b$ be a map from $G\times G$ to $H$ satisfying $b(g,h) = -b(h,g)$ for all $g$ and $h$ in $G$ (in other words, $b$ is skew-symmetric). Then the triple $(G,s,b)$ is a based matrix over $H$. For the rest of this paper we will take $H$ to be $\mathbb{Z}$.
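The cyclic traversal defining $t$ and $h$ in the previous section is equally mechanical. The following Python sketch is again an illustration only (the encoding of a nanoword as a word string plus a type dictionary is ours); applied to $\nanoword{ABCBCA}{bab}$ with the alphabetical ordering, it reproduces the tail and head matrices of the example above.

```python
# Illustrative sketch: compute the tail and head matrices of a nanoword.
# The arrow of a letter X runs from its first to its second occurrence
# when xType(X) = a, and from its second to its first when xType(X) = b.

def tail_head(word, types, x):
    """(tail index, head index) of the arrow belonging to the letter x."""
    first, second = [i for i, y in enumerate(word) if y == x]
    return (first, second) if types[x] == "a" else (second, first)

def th_matrices(word, types, order):
    """Tail and head matrices, with the letters ordered as in `order`."""
    size = len(order)
    T = [[0] * size for _ in range(size)]
    H = [[0] * size for _ in range(size)]
    for i, x in enumerate(order):
        tail, head = tail_head(word, types, x)
        # Walk cyclically from just after the tail of x to the head of x,
        # noting every letter occurrence passed along the way.
        j = (tail + 1) % len(word)
        while j != head:
            y = word[j]
            ty, hy = tail_head(word, types, y)
            if j == ty:
                T[i][order.index(y)] = 1  # passed the tail of y
            else:
                H[i][order.index(y)] = 1  # passed the head of y
            j = (j + 1) % len(word)
    return T, H
```

The linking number of \eqref{eqn:linking} can then be read off as the difference of corresponding entries of the two matrices.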
\par We can associate a based matrix with a nanoword $\alpha$ as follows. First we take $G$ to be the set of letters in $\alpha$ union the special element $s$. \par We then consider a diagram corresponding to $\alpha$ embedded in some surface $S$. To each element $g$ of $G$ we associate a closed loop in the surface $S$ which we label $g_c$. For the special element $s$ we define $s_c$ to be the whole curve. Any other element in $G$ corresponds to a crossing in the diagram. For any crossing $X$ we define a closed loop $X_c$ as follows. First we orient the crossing so that it looks like the crossing on the left of Figure~\ref{fig:virtualcrossings}. Then starting from $X$ we leave the crossing on the outgoing right hand arc and follow the curve until we get back to $X$ for the first time. We define $X_c$ to be a loop parallel to the loop we have just traced. Figure~\ref{fig:loop} shows an example. \par We define $b(g,h)$ to be the homological intersection number of the loop $g_c$ with the loop $h_c$. This is just the number of times that $h_c$ crosses $g_c$ from right to left minus the number of times that $h_c$ crosses $g_c$ from left to right. \begin{figure} \caption{Example of defining a loop at a crossing} \label{fig:loop} \end{figure} \par We then define the based matrix associated with $\alpha$ to be the triple $(G,s,b)$. We write this $M(\alpha)$. \par By assigning an order to the elements in $G$ it is possible to write $b$ as a matrix. By convention, the special element $s$ always comes first in such an ordering. Thus the first row of the matrix has elements of the form $b(s,x)$ and the first column has elements of the form $b(x,s)$ for each $x$ in $G$. The resulting matrix is skew-symmetric. \par Let $(G_1,s_1,b_1)$ and $(G_2,s_2,b_2)$ be two based matrices. 
If there is a bijection $f$ from $G_1$ to $G_2$ such that $f(s_1)$ equals $s_2$ and, for all $g$ and $h$ in $G_1$, $b_2(f(g),f(h))$ is equal to $b_1(g,h)$, then the two based matrices are said to be isomorphic \cite{Turaev:2004}. Informally, two based matrices are isomorphic if we can pick orderings of the elements of the two sets $G_1$ and $G_2$ such that $s_1$ and $s_2$ are first in their respective sets and the corresponding skew-symmetric matrices are the same. \par We can calculate $b$ directly from $\alpha$. It is clear that $b(s,s)$ is $0$. In \cite{Turaev:2004}, Turaev showed that $b(g,s)$ is equal to $\n(g)$ for all $g$ in $G-\lbrace s \rbrace$. In other words, $\n(X)$ is the homological intersection number of $X_c$ and the whole curve in the diagram. For $g$ and $h$ in $G-\lbrace s\rbrace$ we can calculate $b(g,h)$ by \begin{equation}\label{eqn:matrix_elements} b(g,h) = t(g,h) - h(g,h) + \sum_{k \in G-\lbrace s\rbrace}\bigl( t(g,k)h(h,k) - h(g,k)t(h,k) \bigr) \end{equation} where $t$ and $h$ were defined in Section~\ref{sec:th_matrices} (this equation is derived from Lemma~4.2.1 in \cite{Turaev:2004}). We can thus write $b$ as a matrix in the form \begin{equation*} \begin{pmatrix} 0& -\transpose{\cvector{n}} \\ \cvector{n}& B \end{pmatrix} \end{equation*} where $B$ is the submatrix of $b$ corresponding to the elements in $G-\lbrace s \rbrace$ and $\cvector{n}$ is the column vector consisting of $\n(g)$ for each $g$ (where the order of elements in the vector matches the order of the elements in the matrix $B$). By \eqref{eqn:matrix_elements} we can calculate $B$ directly from $T(\alpha)$ and $H(\alpha)$ using the formula \begin{equation*} B = T - H + T \transpose{H} - H \transpose{T} \end{equation*} where we have written $T$ for $T(\alpha)$ and $H$ for $H(\alpha)$. \par Turaev made the following definitions \cite{Turaev:2004}.
An annihilating element of a based matrix $(G,s,b)$ is an element $g$ in $G-\lbrace s\rbrace$ for which $b(g,h)=0$ for all $h$ in $G$. A core element is an element $g$ in $G-\lbrace s\rbrace$ for which $b(g,h) = b(s,h)$ for all $h$ in $G$. Two elements $g$ and $h$ in $G-\lbrace s\rbrace$ are complementary elements if $b(g,k) + b(h,k) = b(s,k)$ for all $k$ in $G$. A based matrix is called primitive if it has no annihilating elements, core elements or complementary elements. \par Turaev defined three reducing operations on based matrices \cite{Turaev:2004}. The first removes an annihilating element, the second removes a core element and the third removes a complementary pair. Each move transforms a based matrix $(G,s,b)$ to $(G^\prime,s,b^\prime)$, where $G^\prime$ is derived from $G$ by removing the element(s) involved in the operation and $b^\prime$ is $b$ restricted to $G^\prime$. \par A primitive based matrix does not admit any of these reducing operations. Clearly we can apply a sequence of these operations to a non-primitive based matrix until we derive a primitive based matrix. Turaev showed that up to isomorphism, the resulting primitive based matrix is the same, irrespective of which elements we remove and the order in which we remove them \cite{Turaev:2004}. \par We can apply these reducing operations to the based matrix $M(\alpha)$ associated with the nanoword $\alpha$ to get a primitive based matrix. We call it $P(\alpha)$. Turaev showed that up to isomorphism $P(\alpha)$ is a homotopy invariant of $\alpha$ \cite{Turaev:2004}. Thus we can define the primitive based matrix $P(\Gamma)$ of a virtual string $\Gamma$ to be $P(\alpha)$ for any nanoword representing $\Gamma$. In particular this means that we can use properties of $P(\Gamma)$ that are invariant under isomorphism of based matrices as invariants of virtual strings. Turaev gave some suggestions for such invariants in \cite{Turaev:2004}. 
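The three reducing operations are easy to apply mechanically. The following Python sketch is our own illustration, not part of Turaev's treatment: it represents a based matrix over $\mathbb{Z}$ as a skew-symmetric integer matrix whose index $0$ corresponds to the special element $s$, and repeatedly removes annihilating elements, core elements and complementary pairs, so that its output is a primitive based matrix.

```python
from itertools import combinations

def reduce_based_matrix(b):
    """Reduce a based matrix to a primitive one.
    b is a skew-symmetric integer matrix (a list of rows) whose index 0
    corresponds to the special element s; s itself is never removed."""
    def drop(mat, idxs):
        keep = [i for i in range(len(mat)) if i not in idxs]
        return [[mat[i][j] for j in keep] for i in keep]

    changed = True
    while changed:
        changed = False
        n = len(b)
        # annihilating element: a row (other than that of s) that is all zero
        for g in range(1, n):
            if all(v == 0 for v in b[g]):
                b, changed = drop(b, {g}), True
                break
        if changed:
            continue
        # core element: a row equal to the row of s
        for g in range(1, n):
            if b[g] == b[0]:
                b, changed = drop(b, {g}), True
                break
        if changed:
            continue
        # complementary pair: two rows whose sum is the row of s
        for g, h in combinations(range(1, n), 2):
            if all(b[g][k] + b[h][k] == b[0][k] for k in range(n)):
                b, changed = drop(b, {g, h}), True
                break
    return b
```

The sketch only performs the reductions; checking whether two resulting primitive based matrices are isomorphic is a separate matter.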
In \cite{Gibson:tabulating-vs} we define a canonical representation of a based matrix which can be used as a complete invariant of based matrices up to isomorphism. \par A simple invariant of primitive based matrices that we will use in this paper is the number of elements in the underlying set of $P(\Gamma)$. As the special element $s$ cannot be removed by any of the moves defined on based matrices, a based matrix always has at least one element. Thus $\rho(\Gamma)$ is defined to be the number of elements in $P(\Gamma)$ minus one. This invariant was defined in \cite{Turaev:2004}. \par Turaev also noted that the $u$-polynomial of a nanoword $\alpha$ can be calculated from $M(\alpha)$ or $P(\alpha)$ (because, as we have mentioned, $b(g,s)$ is equal to $\n(g)$ for all $g$ in $G-\lbrace s \rbrace$) \cite{Turaev:2004}. This means that for nanowords $\alpha$ and $\beta$, if $P(\alpha)$ is isomorphic to $P(\beta)$ then $u_{\alpha}(t)$ is equal to $u_{\beta}(t)$. However the converse is not necessarily true and the primitive based matrix invariant is stronger than the $u$-polynomial \cite{Turaev:2004}. \section{Coverings}\label{sec:coverings} Turaev defined an operation on virtual strings called a covering \cite{Turaev:2004}. He also defined coverings for nanowords in \cite{Turaev:Words}. Given a nanoword $\alpha$ and a non-negative integer $r$, the \textcover{r} of $\alpha$ is the nanoword derived from $\alpha$ by removing any letter $X$ in $\alpha$ for which $\n(X)$ is not divisible by $r$. The \textcover{r} of $\alpha$ is written $\cover{\alpha}{r}$. \par For a virtual string $\Gamma$ we pick a nanoword $\alpha$ that represents it and then define $\cover{\Gamma}{r}$ to be the virtual string realized by $\cover{\alpha}{r}$. In \cite{Turaev:2004}, Turaev showed that $\cover{\Gamma}{r}$ does not depend on the nanoword $\alpha$ that we picked.
That is, if $\alpha_1$ and $\alpha_2$ are homotopic nanowords representing the virtual string $\Gamma$, $\cover{\alpha_1}{r}$ and $\cover{\alpha_2}{r}$ are also homotopic. This means that $\cover{\Gamma}{r}$ is an invariant of the virtual string $\Gamma$ and invariants of coverings of virtual strings can be used to distinguish the virtual strings themselves. We call $\cover{\Gamma}{r}$ the \textcover{r} of $\Gamma$. \par Note that we have extended Turaev's definition to include the case where $r$ is $0$. Turaev's invariance result is true in this case too. \par A covering is thus a map from the set of virtual strings to itself. It is interesting to ask such questions as whether the map is injective or surjective and whether there exist any fixed points. We can also ask what happens when we repeatedly apply the map to its own output. Are there any periodic points? \par Note that for any virtual string $\Gamma$, the \textcover{1} of $\Gamma$ is always $\Gamma$ and so these questions are easily answered when $r$ is $1$. We also note that for all $r$, $\cover{0}{r}$ is $0$. \par \begin{ex}\label{ex:simple_covering} Consider the virtual string $\Gamma$ represented by the nanoword $\alpha$ given by $\nanoword{ABCACB}{aaa}$. We have $\n(A)=2$ and $\n(B)=\n(C)=-1$. So we have $u_\Gamma(t)=t^2-2t$ and $\Gamma$ is not trivial. When $r$ is 2, $\cover{\alpha}{2}$ is $\nanoword{AA}{a}$ which is homotopically trivial by the first homotopy move and so $\cover{\Gamma}{2}$ is $0$. When $r$ is $0$ or greater than $2$, $\cover{\alpha}{r}$ is $0$ and so $\cover{\Gamma}{r}$ is also $0$. Thus for all $r$ not equal to $1$, $\cover{\Gamma}{r}$ is trivial and equal to $0$. \end{ex} \par This example shows that the covering map for $r$ is not injective unless $r$ is equal to $1$. \par \begin{thm}\label{thm:surjectivity} For any non-negative integer $r$, the covering map corresponding to $r$ is surjective. 
When $r$ is not $1$, for any given virtual string $\Gamma$ there are an infinite number of virtual strings which map to $\Gamma$ under the covering map. \end{thm} \begin{proof} We have already observed that when $r$ is $1$ the first claim is true. We consider the case when $r$ is not $1$. We show that given a virtual string $\Gamma$ represented by a nanoword $\alpha$, we can construct a new nanoword $\beta$ for which $\cover{\beta}{r}$ is $\alpha$. Then $\beta$ represents a virtual string which maps to $\Gamma$ under the covering map corresponding to $r$. \par To construct $\beta$ we first make a copy of $\alpha$. Then for each letter $X$ in $\alpha$, we consider the value of $\n(X)$. \par If $\n(X)$ is zero then we make no changes relating to $X$. \par If $\n(X)$ is non-zero we add letters to $\beta$ in the following way \begin{equation*} xXyXz \longrightarrow xXyA_1A_2\dotso A_kXA_k\dotso A_2A_1z \end{equation*} where $k$ equals $\abs{\n(X)}$ and, for all $i$, $\xType{A_i}$ is set to $\xType{X}$ if $\n(X)$ is negative and the opposite type to $\xType{X}$ if $\n(X)$ is positive. Note that in the new nanoword $\n(A_i)$ is $\pm 1$ for all $i$ and $\n(X)$ is $0$. Also note that for any other letter $Y$ in the old nanoword, $\n(Y)$ is unchanged as we go from the old nanoword to the new nanoword. \par Once we have considered every letter in $\alpha$ and made the appropriate additions, we call the resultant nanoword $\beta$. The nanoword consists of letters $X_i$ originally in $\alpha$, and letters $A_j$ which we added. By construction, in $\beta$, $\n(X_i)$ is $0$ for all $i$ and $\n(A_j)$ is $\pm 1$ for all $j$. Thus when we take the \textcover{r} ($r$ not $1$) of $\beta$, we remove all the letters $A_j$ and keep all the letters $X_i$. Since the order and the types of the letters $X_i$ were not changed during our construction of $\beta$, the result is $\alpha$. Thus $\cover{\beta}{r}$ is $\alpha$ and the first claim of the theorem is proved. 
\par To prove the second claim we use the fact that for a letter $X$ in a nanoword $\gamma$ or $\delta$, $\n(X)$ remains unchanged in the composition $\gamma\delta$. This implies that \begin{equation*} \cover{(\gamma\delta)}{r} = \cover{\gamma}{r}\cover{\delta}{r}. \end{equation*} \par It is simple to calculate that the $u$-polynomial of the virtual string represented by the nanoword $\beta$ constructed above is $0$. \par Consider the nanowords $\alpha_{p,1}$ and $\alpha_{1,p}$ in Example \ref{ex:alpha_pq}. These have $u$-polynomials $pt-t^p$ and $t^p-pt$ respectively. If we take the \textcover{r} of either nanoword ($r$ not $1$), we either get the trivial nanoword (if $r$ does not divide $p$) or a nanoword isomorphic to $\nanoword{AA}{a}$ (if $r$ does divide $p$). In the latter case, the letter $A$ can then be removed by the move H1. The result in either case is the trivial nanoword. \par Therefore, if we take the composition of $\beta$ with such nanowords we can construct new nanowords such that the \textcover{r} is still $\alpha$. However, by \eqref{eqn:u-poly_concatenation} the $u$-polynomial will be non-zero. In particular we can construct an infinite family of nanowords $\beta\alpha_{1,p}$ for which $\cover{(\beta\alpha_{1,p})}{r}$ is $\alpha$ and, by \eqref{eqn:u-poly_concatenation}, the $u$-polynomial is $t^p-pt$ (for $p$ greater than $1$). As these $u$-polynomials are pairwise distinct, the virtual strings that these nanowords represent are mutually homotopically distinct. \end{proof} Note that in fact, by concatenating $\beta$ with multiple copies of $\alpha_{p,1}$ and $\alpha_{1,p}$, possibly with different values of $p$, it is possible to construct a nanoword $\gamma$ with any $u$-polynomial satisfying $u(0)=u^{\prime}(1)=0$ such that $\cover{\gamma}{r}$ is $\alpha$.
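The construction of $\beta$ in the proof above is algorithmic, so it can be checked by machine on small examples. The sketch below is illustrative rather than the paper's code: \texttt{n\_value} encodes an assumed sign convention for $\n$ (signed count of interleaved letters, weighted by $t(X)t(Y)$ with $t(a)=+1$, $t(b)=-1$), chosen to match the examples in the text, and \texttt{build\_beta} is our name for the thickening step.

```python
def n_value(word, types, X):
    # n(X) under an assumed sign convention consistent with the text's examples
    t = lambda Y: 1 if types[Y] == 'a' else -1
    i, j = [p for p, c in enumerate(word) if c == X]
    total = 0
    for Y in set(word) - {X}:
        a, b = [p for p, c in enumerate(word) if c == Y]
        if (i < a < j) != (i < b < j):       # Y is interleaved with X
            total += (1 if i < a < j else -1) * t(X) * t(Y)
    return total

def build_beta(word, types):
    """For each letter X with n(X) != 0, insert |n(X)| fresh letters around
    the second occurrence of X, as in the proof: xXyXz -> xXy A1..Ak X Ak..A1 z."""
    word, types = list(word), dict(types)
    for X in sorted(set(word)):
        nX = n_value(word, types, X)
        if nX == 0:
            continue
        fresh = [f"{X}{i + 1}" for i in range(abs(nX))]
        for A in fresh:
            # same type as X if n(X) < 0, opposite type if n(X) > 0
            types[A] = types[X] if nX < 0 else ('b' if types[X] == 'a' else 'a')
        j = [p for p, c in enumerate(word) if c == X][1]   # second occurrence
        word = word[:j] + fresh + [X] + fresh[::-1] + word[j + 1:]
    return word, types

def covering(word, types, r):
    # r-covering for r >= 1 (keep letters whose n is divisible by r)
    keep = {X for X in set(word) if n_value(word, types, X) % r == 0}
    return [c for c in word if c in keep]
```

Applied to $\nanoword{ABCACB}{aaa}$, the construction adds four fresh letters (two for $A$, one each for $B$ and $C$); afterwards every original letter has $\n$ equal to $0$, every fresh letter has $\n$ equal to $\pm 1$, and the \textcover{2} recovers the original word.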
Thus we have the corollary: \begin{cor} For any virtual string $\Gamma$, any $u$-polynomial $u(t)$ satisfying $u(0)=u^{\prime}(1)=0$ and any non-negative integer $r$, $r$ not $1$, there is a virtual string $\Lambda$ such that $u_\Lambda(t)=u(t)$ and $\cover{\Lambda}{r}$ is $\Gamma$. \end{cor} \par We can use coverings to define some numeric invariants of virtual strings. The following proposition suggests one such invariant. \par \begin{prop} For a virtual string $\Gamma$, there exists an integer $m$ such that for all $n$ greater than or equal to $m$, $\cover{\Gamma}{n}$ is $\cover{\Gamma}{0}$. \end{prop} \begin{proof} Consider $\alpha$ a nanoword with finite rank which represents $\Gamma$. Then for any letter $X$ in $\alpha$, we have already observed that $\abs{\n(X)}$ is less than $\rank(\alpha)$. If we set $m$ to be $\rank(\alpha)$ then for all $n$ greater than or equal to $m$, only the letters $X$ in $\alpha$ with $\n(X)$ equal to $0$ will appear in the \textcover{n}. Thus $\cover{\Gamma}{n}$ is $\cover{\Gamma}{0}$. \end{proof} Thus we can define $\m(\Gamma)$ to be the minimal integer $m$ such that for all $n$ greater than or equal to $m$, $\cover{\Gamma}{n}$ is $\cover{\Gamma}{0}$. The proposition shows that $\m(\Gamma)$ is always defined. Of course, the minimal such $m$ may be less than $\rank(\alpha)$. In Example \ref{ex:simple_covering} the rank of the initial nanoword was $3$ but all the coverings except for the \textcover{1} were trivial. \par The proof of the proposition shows that $\m(\Gamma)$ is less than or equal to $\hr(\Gamma)$. In fact, the proof shows that if $m$ is greater than the largest $\abs{\n(X)}$ for an $X$ in an $\alpha$ representing $\Gamma$, then $\cover{\Gamma}{m}$ is $\cover{\Gamma}{0}$. Therefore, \begin{equation*} \m(\Gamma) \leq \max \bigl\{ \abs{\n(X)} \ \bigm| X\in \alpha \bigr\} +1 \end{equation*} where $\alpha$ represents $\Gamma$ and $\rank(\alpha)$ is equal to $\hr(\Gamma)$. 
If all the letters $X$ in $\alpha$ have $\n(X)$ equal to $0$ then $\m(\Gamma)$ is $0$. In particular $\m(0)$ is $0$. \par \begin{prop} For a virtual string $\Gamma$ and a non-negative integer $r$ such that $\cover{\Gamma}{r}$ is not equal to $\Gamma$, \begin{equation*} \hr(\cover{\Gamma}{r}) \leq \hr(\Gamma)-2. \end{equation*} \end{prop} \begin{proof} For a given virtual string $\Gamma$, we can find a nanoword $\alpha$ which represents $\Gamma$ and has minimal rank. So $\rank(\alpha)$ is equal to $\hr(\Gamma)$. When we take the \textcover{r}, we get a nanoword $\cover{\alpha}{r}$ representing $\cover{\Gamma}{r}$. As $\cover{\Gamma}{r}$ is not equal to $\Gamma$ by assumption, $\cover{\alpha}{r}$ cannot equal $\alpha$, so we must have deleted some letters from $\alpha$ to get the \textcover{r}. \par Assume that we deleted just a single letter $X$ from $\alpha$. We deal with two cases: $r$ greater than or equal to $2$, and $r$ equal to $0$. It is not possible that $r$ is $1$ because of the assumption that $\cover{\Gamma}{r}$ is not equal to $\Gamma$. \par When $r$ is greater than or equal to $2$, $\n(X)$ must be equal to $pr+q$, $(0<q<r)$, for some integers $p$ and $q$. All other letters $Y$ in $\alpha$ must have $\n(Y)$ equal to $kr$ for some integer $k$ dependent on $Y$. We now calculate the sum of $\n(Z)$ for all letters $Z$ in $\alpha$. \begin{equation*} \sum_{Z \in \alpha}\n(Z) = \n(X) + \sum_{Y \neq X}\n(Y) \equiv q \pmod{r}. \end{equation*} However, by \eqref{eqn:n_zero_sum} the left hand side is $0$ and we have a contradiction. \par When $r$ is equal to $0$, $\n(X)$ must be equal to some non-zero integer $q$. All other letters $Y$ in $\alpha$ must have $\n(Y)$ equal to $0$. Then we have \begin{equation*} 0 = \sum_{Z \in \alpha}\n(Z) = \n(X) + \sum_{Y \neq X}\n(Y) = q \neq 0 \end{equation*} which is a contradiction. \par Thus, in either case, we must have deleted at least two letters. We have \begin{equation*} \hr(\cover{\Gamma}{r}) \leq \rank(\cover{\alpha}{r}) \leq \rank(\alpha)-2 = \hr(\Gamma)-2.
\end{equation*} \end{proof} \par In particular the homotopy rank of the \textcover{r} can never be bigger than the homotopy rank of the original virtual string. \par Fixing $r$, we can define a sequence of virtual strings for any virtual string $\Gamma$ as follows. Set $\Gamma_0$ to be $\Gamma$. Then define $\Gamma_i$ to be $\cover{(\Gamma_{i-1})}{r}$ for $i\geq 1$. As the homotopy rank of $\Gamma$ is finite and, by the proposition above, drops by at least $2$ whenever $\Gamma_{i+1}$ differs from $\Gamma_i$, there exists an $n$ such that $\Gamma_{n+1}$ is equal to $\Gamma_{n}$. We can thus make the following definitions: \begin{equation*} \height_r(\Gamma):=\min \lbrace n \mid \Gamma_{n+1} = \Gamma_{n} \rbrace \end{equation*} and \begin{equation*} \base_r(\Gamma):=\Gamma_{\height_r(\Gamma)}. \end{equation*} It is clear that these are invariants of $\Gamma$. \par For $r$ not $1$, we can represent the action of the \textcover{r} map on the set of virtual strings as a directed graph. The vertices of the graph represent the virtual strings. For each virtual string $\Gamma$ we draw an oriented edge from the vertex which represents it to the vertex which represents $\cover{\Gamma}{r}$. By the above discussion it is clear that each connected component of the graph will take the form of a tree with a loop at the root point. Every vertex has an infinite number of incoming edges and a single outgoing edge. Figure~\ref{fig:graph} depicts a small part of the graph near the base of a single component. \begin{figure} \caption{Part of a single component of the graph of a covering map} \label{fig:graph} \end{figure} \section{Fixed points under coverings}\label{sec:fixed} For a fixed $r$, we can define the set of fixed points under the \textcover{r} map by \begin{equation*} \baseSet{r}:= \lbrace \Gamma \in \mathcal{VS} \mid \cover{\Gamma}{r} = \Gamma \rbrace.
\end{equation*} \par Note that we could equivalently define the set by \begin{equation*} \baseSet{r}= \lbrace \base_r(\Gamma) \mid \Gamma \in \mathcal{VS} \rbrace, \end{equation*} or indeed by \begin{equation*} \baseSet{r}= \lbrace \Gamma \in \mathcal{VS} \mid \height_r(\Gamma) = 0 \rbrace. \end{equation*} \par When $r$ is $1$, $\baseSet{1}$ is $\mathcal{VS}$. For other $r$ this is not the case. Indeed, we showed that for any given virtual string $\Gamma$ there are infinitely many virtual strings which are different from $\Gamma$ but for which the \textcover{r} is $\Gamma$. Those virtual strings are not in the set $\baseSet{r}$. On the other hand, we have already noted that the trivial virtual string is in $\baseSet{r}$ for all $r$. We now give a method for constructing more examples of virtual strings in $\baseSet{r}$ for $r$ greater than $1$. \par Given a nanoword $\alpha$ and an integer $r$ greater than $1$ we define a new nanoword from $\alpha$ in the following way. For each letter $A$ in $\alpha$ we replace the first occurrence of $A$ with $r$ letters $A_1A_2\dotso A_r$ and the second occurrence of $A$ with $r$ letters $A_r\dotso A_2A_1$ where the letters $A_i$ have the same type as $A$ for all $i$. We call the resultant nanoword $r\cdot\alpha$. The construction $r\cdot\alpha$ appears in Section~3.7, Exercise~2 of \cite{Turaev:2004}. \par As an example, if $\alpha$ is the nanoword $\nanoword{ABACBC}{aab}$, then $2\cdot\alpha$ is the nanoword $A_1A_2B_1B_2A_2A_1C_1C_2B_2B_1C_2C_1$ where letters $C_1$ and $C_2$ are of type $b$ and the other letters are of type $a$. \par We note that this operation is not well-defined for virtual strings. If $\alpha$ and $\beta$ are homotopic nanowords, it is not necessarily true that $r\cdot\alpha$ and $r\cdot\beta$ are homotopic. An example suffices to show this. Take $\alpha$ to be the nanoword $\nanoword{ABCBDCAD}{aabb}$ and $\beta$ to be the nanoword $\nanoword{BACDBCDA}{aabb}$.
Then $\alpha$ and $\beta$ are homotopic (in fact they are related by an H3b move involving $A$, $B$ and $D$) but, by using based matrices, we can show that $2\cdot\alpha$ and $2\cdot\beta$ are not homotopic. \par We now consider the behaviour of $r\cdot\alpha$ under the \textcover{r} map. For any letter $A_i$ in $r\cdot\alpha$, $\n(A_i)$ is equal to $r\n(A)$. Thus $r\cdot\alpha$ is fixed under the \textcover{r} map. So the virtual string represented by $r\cdot\alpha$ is in $\baseSet{r}$. \par We now note that if $\Gamma$ is in $\baseSet{r}$ then any nanoword $\alpha$ representing $\Gamma$ which satisfies $\rank(\alpha)=\hr(\Gamma)$ must also satisfy $\cover{\alpha}{r}=\alpha$ (otherwise $\cover{\alpha}{r}$ would represent $\cover{\Gamma}{r}=\Gamma$ with rank strictly less than $\hr(\Gamma)$). Then $\alpha$ consists only of letters $X$ with $\n(X)$ equal to $kr$ for some integer $k$ dependent on $X$. Using this fact we can easily find relationships between the sets of fixed points. We have \begin{equation}\label{eqn:subset_zero} \baseSet{0} \subset \baseSet{r} \end{equation} for all $r$ and \begin{equation}\label{eqn:subset_multiple} \baseSet{kr} \subset \baseSet{r} \end{equation} for all $k\geq 2$ and for all $r \geq 2$. \begin{prop} For distinct natural numbers $p$ and $q$, write $l$ for the lowest common multiple of $p$ and $q$, and write $g$ for the greatest common divisor of $p$ and $q$. Then \begin{equation}\label{eqn:base_lcm} \baseSet{p} \cap \baseSet{q} = \baseSet{l} \end{equation} and \begin{equation}\label{eqn:base_gcd} \baseSet{p} \cap \baseSet{q} \subset \baseSet{g}. \end{equation} \end{prop} \begin{proof} If $\Gamma$ is in $\baseSet{p} \cap \baseSet{q}$, then by the remark above there exists a nanoword $\alpha$ which represents $\Gamma$ for which $\cover{\alpha}{p}=\alpha$ and $\cover{\alpha}{q}=\alpha$. Then $\alpha$ consists only of letters $X$ with $\n(X)=0$ or $\n(X)$ divisible both by $p$ and by $q$. In the latter case, this means that $l$ also divides $\n(X)$. Thus $\Gamma$ is in $\baseSet{l}$ and \begin{equation*} \baseSet{p} \cap \baseSet{q} \subseteq \baseSet{l}.
\end{equation*} As $l$ is a multiple of both $p$ and $q$, $\baseSet{l}$ is a subset of both $\baseSet{p}$ and $\baseSet{q}$ by \eqref{eqn:subset_multiple}. This proves \eqref{eqn:base_lcm}. \par As $l$ is a multiple of $g$, we can use \eqref{eqn:subset_multiple} and \eqref{eqn:base_lcm} to show \eqref{eqn:base_gcd}. \end{proof} \begin{prop} We have \begin{equation*} \bigcap_{i=1}^\infty \baseSet{i} = \baseSet{0}. \end{equation*} \end{prop} \begin{proof} From \eqref{eqn:subset_zero} we can see that \begin{equation*} \bigcap_{i=1}^\infty \baseSet{i} \supseteq \baseSet{0}. \end{equation*} \par Now assume that equality does not hold. That is, we assume there exists a virtual string $\Gamma$ in $\mathcal{VS}$ which is in the intersection but not in $\baseSet{0}$. Then for all $r$ greater than $0$, $\cover{\Gamma}{r}$ is $\Gamma$. However, as $\Gamma$ is in $\mathcal{VS}$, $\Gamma$ is represented by a nanoword $\alpha$ of minimal rank $n$, so that $\rank(\alpha)$ is equal to $\hr(\Gamma)$. \par Consider $\cover{\alpha}{n}$. As $\Gamma$ is in the intersection and $\alpha$ has minimal rank, by the remark above this must be $\alpha$. This means that every letter $X$ in $\alpha$ has $\n(X)$ equal to $nk$ for some integer $k$. Now as $\alpha$ has rank $n$, we know that $\abs{\n(X)}$ must be less than $n$ for all $X$. Thus $\n(X)$ must be $0$ for all $X$. However, this means that $\cover{\alpha}{0}$ is $\alpha$ and thus that $\Gamma$ is in $\baseSet{0}$ which contradicts our initial assumption. Thus equality holds and the proof is complete. \end{proof} \begin{thm} For $r$ greater than $1$, $\baseSet{r}$ is infinite in size. \end{thm} \begin{proof} Recall the virtual string $\Gamma_{p,q}$ from Example \ref{ex:alpha_pq}. Considering the cover $\cover{(\Gamma_{p,q})}{r}$, it is clear that this is $\Gamma_{p,q}$ if $r$ divides both $p$ and $q$, and trivial otherwise. Thus $\Gamma_{jr,kr}$ for $j,k\geq 1$ is a 2-parameter family of virtual strings which are fixed under the \textcover{r} and so are all in $\baseSet{r}$.
In Example \ref{ex:alpha_pq} we saw that these virtual strings are mutually distinct. Thus $\baseSet{r}$ is infinite in size. \end{proof} Now consider the two families $\Gamma_{kr,r}$ and $\Gamma_{r,kr}$. Elements in these families have the property that if we take the \textcover{p} for $p$ greater than $r$ we get the trivial virtual string. We also note that the \textcover{0} of each of these virtual strings is trivial. Thus the set \begin{equation*} \baseSet{r} - \baseSet{0} - \bigcup_{i=r+1}^{\infty}\baseSet{i} \end{equation*} is infinite in size. \par We now consider the set $\baseSet{0}$. We know that the trivial virtual string is in this set. We now show that this set has other members. \par Consider the family of virtual strings $\Gamma_n$ for $n \geq 3$ represented by the nanowords $\alpha_n$ given by \begin{equation*} X_0X_{n-1}X_1X_0X_2X_1 \dotso X_{i}X_{i-1}X_{i+1}X_{i} \dotso X_{n-2}X_{n-3}X_{n-1}X_{n-2} \end{equation*} where $\xType{X_i}$ is $a$ for $0 \leq i \leq n-2$ and $\xType{X_{n-1}}$ is $b$. Then $\n(X_i)$ is $0$ for all $i$ and so $\cover{(\alpha_n)}{0}$ is $\alpha_n$ for all $n$. Thus $\Gamma_n$ is in $\baseSet{0}$. By appropriate shift and homotopy moves it is possible to show that when $n$ is $3$, $4$ or $6$, $\alpha_n$ is homotopically trivial. For the remaining $n$ we have the following lemma. \begin{lem} For $n=5$ or $n \geq 7$ the nanowords $\alpha_n$ are non-trivial and mutually distinct under homotopy. \end{lem} \begin{proof} In the following calculations we consider the subscripts of the letters $X_i$ to be in $\cyclic{n}$.
\par We have \begin{equation}\label{eqn:baseSet0Tail} t(X_i,X_j) = \begin{cases} 1 & \text{if $j=i+1$} \\ 0 & \text{otherwise} \end{cases} \end{equation} and \begin{equation}\label{eqn:baseSet0Head} h(X_i,X_j) = \begin{cases} 1 & \text{if $j=i-1$} \\ 0 & \text{otherwise.} \end{cases} \end{equation} Now \begin{align*} b(X_i,X_j) = & t(X_i,X_j) - h(X_i,X_j) + \\ & \sum_X \bigl( t(X_i,X)h(X_j,X) - h(X_i,X)t(X_j,X) \bigr) \end{align*} where the sum is taken over all the letters $X_k$. However, by \eqref{eqn:baseSet0Head} and \eqref{eqn:baseSet0Tail} most of the elements in the sum will be zero and so we only need to consider the cases when $X$ is $X_{i-1}$, $X_{i+1}$, $X_{j-1}$ or $X_{j+1}$. In the calculations below we write $s(X_i,X_j)$ for the sum \begin{equation*} \sum_X \bigl( t(X_i,X)h(X_j,X) - h(X_i,X)t(X_j,X) \bigr). \end{equation*} We also assume that $n \geq 5$. \par Of course, if $i$ equals $j$, $b(X_i,X_j)$ is $0$. We consider the following cases: $i=j-2$; $i=j-1$; $i=j+1$; $i=j+2$; $\abs{i-j} \geq 3$. \par Case $i=j-2$: as $X_{j-1} = X_{i+1}$ the sum is over $3$ elements ($X_{i-1}$, $X_{i+1}$ and $X_{i+3}$). \begin{align*} s(X_i,X_{i+2}) = & t(X_i,X_{i-1})h(X_{i+2},X_{i-1}) - h(X_i,X_{i-1})t(X_{i+2},X_{i-1}) + \\ & t(X_i,X_{i+1})h(X_{i+2},X_{i+1}) - h(X_i,X_{i+1})t(X_{i+2},X_{i+1}) + \\ & t(X_i,X_{i+3})h(X_{i+2},X_{i+3}) - h(X_i,X_{i+3})t(X_{i+2},X_{i+3}) \\ = & 0 - t(X_{i+2},X_{i-1}) + 1 - 0 + 0 - h(X_i,X_{i+3}) \\ = & 1 \end{align*} as $t(X_j,X_{j-3})$ and $h(X_i,X_{i+3})$ are both $0$ when $n \geq 5$. Thus \begin{align*} b(X_i,X_{i+2}) = & t(X_i,X_{i+2}) - h(X_i,X_{i+2}) + s(X_i,X_{i+2}) \\ = & 1. \end{align*} \par Case $i=j-1$: we have $X_{j-1} = X_i$ and $X_{j+1} = X_{i+2}$. Thus the sum is over $4$ elements ($X_{i-1}$, $X_i$, $X_{i+1}$ and $X_{i+2}$). 
\begin{align*} s(X_i,X_{i+1}) = & t(X_i,X_{i-1})h(X_{i+1},X_{i-1}) - h(X_i,X_{i-1})t(X_{i+1},X_{i-1}) + \\ & t(X_i,X_i)h(X_{i+1},X_i) - h(X_i,X_i)t(X_{i+1},X_i) + \\ & t(X_i,X_{i+1})h(X_{i+1},X_{i+1}) - h(X_i,X_{i+1})t(X_{i+1},X_{i+1}) + \\ & t(X_i,X_{i+2})h(X_{i+1},X_{i+2}) - h(X_i,X_{i+2})t(X_{i+1},X_{i+2}) \\ = & 0. \end{align*} Thus \begin{align*} b(X_i,X_{i+1}) = & t(X_i,X_{i+1}) - h(X_i,X_{i+1}) + s(X_i,X_{i+1}) \\ = & 1. \end{align*} \par Case $i=j+1$: by skew-symmetry and the case $i=j-1$ we have \begin{equation*} b(X_i,X_{i-1}) = -b(X_{i-1},X_i) = -1. \end{equation*} \par Case $i=j+2$: by skew-symmetry and the case $i=j-2$ we have \begin{equation*} b(X_i,X_{i-2}) = -b(X_{i-2},X_i) = -1. \end{equation*} \par Case $\abs{i-j} \geq 3$: in this case we have \begin{equation*} t(X_j,X_{i-1})=t(X_j,X_{i+1})=t(X_i,X_{j-1})=t(X_i,X_{j+1})=0 \end{equation*} and \begin{equation*} h(X_j,X_{i-1})=h(X_j,X_{i+1})=h(X_i,X_{j-1})=h(X_i,X_{j+1})=0. \end{equation*} Thus $s(X_i,X_j)$ is $0$ and \begin{equation*} b(X_i,X_j) = t(X_i,X_j) - h(X_i,X_j) + s(X_i,X_j) = 0. \end{equation*} \par Summarising the cases: \begin{equation}\label{eqn:bm_zeroexample} b(X_i,X_j) = \begin{cases} 1 & \text{if $i-j=-1$ or $i-j=-2$ } \\ -1 & \text{if $i-j=1$ or $i-j=2$ } \\ 0 & \text{otherwise.} \end{cases} \end{equation} \par Now that we have calculated the based matrix for $\alpha_n$, we can determine whether it has any annihilating, core or complementary elements. As $b(X_i,X_{i+1})$ is non-zero and $\n(X_i)=0$ for all $i$, $X_i$ cannot be an annihilating element or a core element. We now assume that there exist $i$ and $j$ such that $X_i$ and $X_j$ form a complementary pair. Then \begin{equation}\label{eqn:bm_zeroexample_complementary} b(X_i,X) + b(X_j,X) = b(s,X) = 0 \end{equation} for all $X$. By substituting $X_{j-2}$ for $X$ and using \eqref{eqn:bm_zeroexample} we obtain \begin{equation*} b(X_i,X_{j-2}) = 1.
\end{equation*} Similarly, by substituting $X_{j-1}$ for $X$ we get \begin{equation*} b(X_i,X_{j-1}) = 1. \end{equation*} By comparing these last two equations with \eqref{eqn:bm_zeroexample} we can conclude that $i-(j-2)$ and $i-(j-1)$ are both in the set $\lbrace -1, -2 \rbrace$. Thus \begin{equation}\label{eqn:bm_zeroexample_ij1} i \equiv j-3 \pmod{n}. \end{equation} \par Similarly, by substituting $X_{j+2}$ and $X_{j+1}$ for $X$ in \eqref{eqn:bm_zeroexample_complementary} and using \eqref{eqn:bm_zeroexample} we have \begin{equation*} b(X_i,X_{j+2}) = b(X_i,X_{j+1}) = -1. \end{equation*} Comparison with \eqref{eqn:bm_zeroexample} implies that $i-(j+2)$ and $i-(j+1)$ are both in the set $\lbrace 1, 2 \rbrace$. In this case \begin{equation}\label{eqn:bm_zeroexample_ij2} i \equiv j+3 \pmod{n}. \end{equation} \par Combining \eqref{eqn:bm_zeroexample_ij1} and \eqref{eqn:bm_zeroexample_ij2} we have \begin{equation*} j-3 \equiv j+3 \pmod{n} \end{equation*} which gives \begin{equation*} 0 \equiv 6 \pmod{n}. \end{equation*} As we assumed that $n \geq 5$, this implies that if $X_i$ and $X_j$ are a complementary pair then $n$ is $6$. \par So if $n=5$ or $n \geq 7$, $M(\alpha_n)$ is primitive. In particular this means that for these cases $\rho(\alpha_n)$ is equal to $n$. This shows that the $\alpha_n$ are non-trivial and mutually distinct. \end{proof} By the lemma we have infinitely many examples of members of $\baseSet{0}$. Thus we have the following theorem. \begin{thm} The set $\baseSet{0}$ is infinite in size. \end{thm} \section{Geometric interpretation of coverings of nanowords}\label{sec:geomcover} In \cite{Turaev:2004} Turaev gave a geometric interpretation of the covering operation on nanowords which we explain in more detail here. \par Given a nanoword we can construct the corresponding diagram of a curve on its canonical surface (we described this towards the end of Section~\ref{sec:virtual_strings}).
The curve can then be considered as a graph on the surface. If the nanoword has rank $n$, the curve has $n$ crossings and so the graph has $n$ vertices and $2n$ edges. We assign labels $0,1,\dotsc,2n-1$ to the edges by assigning $0$ to the edge with the base point and then, starting from that edge, following the curve and assigning labels in order to the edges as they are traversed. Figure~\ref{fig:5xprecover} gives an example of such a labelled diagram corresponding to the nanoword $\nanoword{ABCDBEDEAC}{baaaa}$. \begin{figure} \caption{A 5 crossing virtual string drawn on a closed genus 2 surface with edges labelled} \label{fig:5xprecover} \end{figure} \par By the construction of the canonical surface, cutting along the curve splits the surface into one or more polygonal pieces. Each of the edges of the polygons inherits a label and an orientation from the original diagram. Obviously, we can reconstruct the canonical surface by glueing edges with the same labels so that their orientations coincide. Figure~\ref{fig:5xpoly} shows the result of this process on the diagram in Figure~\ref{fig:5xprecover}. \begin{figure} \caption{Polygons with labelled oriented edges. Glueing edges with matching labels so that their orientations coincide gives the surface in Figure~\ref{fig:5xprecover}} \label{fig:5xpoly} \end{figure} \par We now make an $m$-fold cover of the canonical surface (for $m$ an integer greater than $1$). We first make $m$ copies of the set of polygonal pieces and label the sets $0,1,\dotsc,m-1$. We relabel the edges of each set as follows. In the set of pieces labelled $i$, we relabel the edge $j$ as $j_i$ if the orientation of the edge (relative to the polygon) is anti-clockwise and relabel it $j_{(i-1)}$ ($i-1$ calculated modulo $m$) if the orientation is clockwise. This labelling divides the edges into pairs. Figure~\ref{fig:5xpoly2} shows two copies of the polygons in Figure~\ref{fig:5xpoly} labelled in the way described above.
\begin{figure} \caption{Two copies of the polygons in Figure~\ref{fig:5xpoly}} \label{fig:5xpoly2} \end{figure} \par By glueing the pairs of edges according to their orientation we get a new closed surface. Figure~\ref{fig:5xcover} shows the result of glueing the edges of Figure~\ref{fig:5xpoly2}. \begin{figure} \caption{The genus 3 surface constructed by glueing edges of the polygons in Figure~\ref{fig:5xpoly2}} \label{fig:5xcover} \end{figure} \par Note that for a given edge $e$, the label of the polygon to the left of $e$, $l_e$, and the label of the polygon to the right of $e$, $r_e$, satisfy the relation \begin{equation} r_e \equiv l_e + 1 \pmod{m} \end{equation} by construction. \par \begin{figure} \caption{Labels of polygons and edges around a vertex} \label{fig:planelabel} \end{figure} The glued edges now form a graph on the constructed surface. In fact, each vertex of the graph is $4$-valent and the labels of the polygons and edges meeting at a vertex have a fixed pattern as shown in Figure~\ref{fig:planelabel}. Since there were $2n$ edges on the original surface, there are $2mn$ edges on the new surface. We now show that these edges form an $m$-component virtual string. \par Consider the edge labelled $j_i$. It corresponds to the edge $j$ in the original diagram. When we cross the vertex at the end of edge $j_i$, the next edge corresponds to the edge $j+1$ (modulo $2n$) in the original diagram. If the edges at the vertex perpendicular to the edge $j_i$ are oriented leftwards from the perspective of edge $j_i$ then the next edge has subscript $i+1$ (modulo $m$). In other words, on the new surface the edge $(j+1)_{i+1}$ follows the edge $j_i$. On the other hand, if the edges at the vertex perpendicular to the edge $j_i$ are oriented rightwards, then the edge $(j+1)_{i-1}$ follows the edge $j_i$. Both these situations can be seen in Figure~\ref{fig:planelabel}. \par Consider the edge labelled $0_k$ for some $k$.
This edge corresponds to the edge $0$ in the original diagram. We now consider what happens to this subscript as we traverse the curve in the original diagram. Each time we pass through a crossing in the original diagram, this corresponds to passing through a crossing in the new diagram. We have seen that when we do this, passing through on one arc of the crossing increases the subscript by one (modulo $m$) and passing through the other decreases the subscript by one (modulo $m$). When we get back to our starting point we have passed through each crossing exactly twice. Therefore, when we get back to our starting point, the net change on the edge label subscript is zero. This means that in our new diagram a single component has exactly $2n$ edges which are in one-to-one correspondence with the $2n$ edges in the original diagram. Thus there are $m$ components in the covering and these are indexed by the edge label subscript of edges corresponding to edge $0$ in the original diagram. \par The new diagram contains $mn$ crossings. By dropping the subscripts of the edge labels around a vertex we can see that these can be grouped in sets of $m$ crossings, each set corresponding to a single crossing in the original diagram. We can label the crossings in the new diagram by taking the label of the corresponding crossing in the original diagram and adding a subscript which is the label of the polygon to the left of both arcs in the crossing. For example, this labelling scheme is used in Figure~\ref{fig:5xcover}. \par The crossings in the new diagram can be divided into two types: those for which both arcs belong to the same component, and those for which the arcs belong to different components. \par Consider the single component containing the edge labelled $0_k$ for some $k$ in the covering. Then the edges of this component correspond to edges in the original diagram and we can copy the edge label subscripts to the original diagram under this correspondence.
For example, Figure~\ref{fig:5xpullback} shows the result of copying the edge label subscripts of the single component containing $0_0$ in Figure~\ref{fig:5xcover} back to the original diagram (Figure~\ref{fig:5xprecover}). \begin{figure} \caption{A copy of Figure~\ref{fig:5xprecover}} \label{fig:5xpullback} \end{figure} \par By rotating a crossing we can orient it so that it looks like the crossing on the left of Figure~\ref{fig:virtualcrossings}. We define left and right edges of a crossing with respect to this orientation. \par A necessary and sufficient condition for the component to cross itself at a particular crossing in the new diagram is that the edge label subscripts of the two right hand edges of the corresponding crossing in the original diagram are the same (or, equivalently, that the edge label subscripts of the two left hand edges are the same). This can be seen by considering Figure~\ref{fig:planelabel}. If this condition is not met, the arcs of the crossing in the original diagram correspond to arcs which go through different crossings in the new diagram. These crossings are formed with arcs of a different component or components. \par For example, in Figure~\ref{fig:5xpullback}, the crossings $B$, $C$ and $D$ satisfy the condition that the subscripts of both right hand edges are the same. In Figure~\ref{fig:5xcover} we can see that at each of the crossings labelled $B_0$, $B_1$, $C_0$, $C_1$, $D_0$ and $D_1$, both arcs belong to the same component. On the other hand, the crossings $A$ and $E$ do not satisfy the condition in Figure~\ref{fig:5xpullback}. In Figure~\ref{fig:5xcover}, at each of the crossings $A_0$, $A_1$, $E_0$ and $E_1$ the two arcs belong to different components. \par For a given crossing $X$ in the original diagram, we define $\delta(X)$ to be the subscript of the incoming right edge minus the subscript of the outgoing right edge, modulo $m$.
Then $X$ becomes a crossing where a component crosses itself in the new diagram if and only if $\delta(X)$ is $0$. If we remove a small neighbourhood of $X$ from the original diagram, the curve is split into two sections: the left section of the curve, which starts at the left hand outgoing edge of $X$ and ends at the left hand incoming edge of $X$; and the right section, which starts at the right hand outgoing edge of $X$ and ends at the right hand incoming edge of $X$. We write $r(X)$ for the number of arcs crossing the right section from right to left minus the number of arcs crossing the right section from left to right. Then $\delta(X)$ is also equivalent, modulo $m$, to $r(X)$. Now note that $\n(X)$ is equal to $r(X)$. This implies that for a component to cross itself at $X$ in the new diagram, $\n(X)$ must be equal to $0$ modulo $m$. Now this condition is the same as the condition used in the construction of the \textcover{m} of a virtual string. If we just consider a single component of our new diagram and remove all other components we have a new virtual string, and by the above discussion it should be clear that this is a diagram of the \textcover{m} of our original virtual string. \par By a similar argument it can be seen that the \textcover{0} of a nanoword corresponds to the infinite cyclic cover of the associated canonical surface. \par Returning to the example in Figure~\ref{fig:5xcover}, we consider the component drawn by the dotted line. Starting from the edge labelled $0_0$ we follow the curve, noting only the labels of crossings where the component crosses itself. The result is the Gauss word $B_1C_0D_1B_1D_1C_0$. The type of all three letters is $a$. This gives a nanoword which is isomorphic to $\nanoword{BCDBDC}{aaa}$, which is the \textcover{2} of $\nanoword{ABCDBEDEAC}{baaaa}$, the nanoword we started with.
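This final computation can also be replicated directly from the Gauss word, without drawing any surfaces. The short sketch below is an illustration only: the function \texttt{n\_value} encodes an assumed sign convention for $\n$ (a signed count over the letters interleaved with $X$, weighted by $t(X)t(Y)$ with $t(a)=+1$, $t(b)=-1$, chosen to match the examples in the text); only the divisibility of each $\n(X)$ by $2$ matters here.

```python
def n_value(word, types, X):
    # signed count of letters interleaved with X; the sign convention is an
    # assumption consistent with the examples in the text
    t = lambda Y: 1 if types[Y] == 'a' else -1
    i, j = [p for p, c in enumerate(word) if c == X]
    total = 0
    for Y in set(word) - {X}:
        a, b = [p for p, c in enumerate(word) if c == Y]
        if (i < a < j) != (i < b < j):
            total += (1 if i < a < j else -1) * t(X) * t(Y)
    return total

word, types = "ABCDBEDEAC", dict(zip("ABCDE", "baaaa"))
kept = {X for X in types if n_value(word, types, X) % 2 == 0}
two_cover = ''.join(c for c in word if c in kept)
```

Running this keeps exactly the letters $B$, $C$ and $D$ and produces the word BCDBDC, agreeing with the nanoword read off from Figure~\ref{fig:5xcover}.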
\par \begin{figure} \caption{A virtual string drawn on a closed genus 2 surface with edges labelled} \label{fig:31precover} \end{figure} \begin{figure} \caption{A dodecagon with labelled oriented edges. Glueing edges with matching labels so that their orientations coincide gives the surface in Figure~\ref{fig:31precover}} \label{fig:31poly} \end{figure} \begin{figure} \caption{Two copies of the dodecagon in Figure~\ref{fig:31poly}} \label{fig:2coverpoly} \end{figure} \begin{figure} \caption{The genus 3 surface constructed by glueing edges of the polygons in Figure~\ref{fig:2coverpoly}} \label{fig:2cover} \end{figure} \begin{figure} \caption{Three copies of the dodecagon in Figure~\ref{fig:31poly}} \label{fig:3coverpoly} \end{figure} \begin{figure} \caption{The genus 4 surface constructed by glueing edges of the polygons in Figure~\ref{fig:3coverpoly}} \label{fig:3cover} \end{figure} As further examples we construct the \textcover{2} and \textcover{3} of the nanoword $\nanoword{ABCACB}{aaa}$, which is isomorphic to the \textcover{2} in the previous example. We have already calculated the coverings of this nanoword in Example \ref{ex:simple_covering}. Figure~\ref{fig:31precover} gives a diagram of $\nanoword{ABCACB}{aaa}$ on a surface with the edges labelled. Figure~\ref{fig:31poly} shows the result of cutting along the edges, in this case a single polygon. The \textcover{2} is constructed from two copies of this polygon shown in Figure~\ref{fig:2coverpoly}. Figure~\ref{fig:2cover} shows the result of glueing. Similarly Figures~\ref{fig:3coverpoly} and \ref{fig:3cover} show the \textcover{3} of $\nanoword{ABCACB}{aaa}$ before and after glueing. \section{Based matrices of composite nanowords}\label{sec:composite} For two nanowords $\alpha$ and $\beta$, it is possible to calculate the based matrix of the composite nanoword $\alpha\beta$ from the based matrices of $\alpha$ and $\beta$ and the types of the letters in $\alpha$ and $\beta$.
In fact, in \cite{Turaev:2004} Turaev gave the result of this calculation for graded based matrices of open virtual strings (an open virtual string can be defined as a virtual string with a fixed base point on the curve where no moves can involve the base point). The statement of the result in Turaev's paper contains an error, but once this is corrected, the result is the same for the case of virtual strings. As Turaev did not give the calculation in his paper, we give it here. The main aim of this section is to prove Proposition~\ref{prop:compositeSameLetters} and Corollary~\ref{cor:primitive}. \par Using isomorphisms we can write $\alpha$ using letters $W_1,W_2,\dotsc,W_m$ and $\beta$ using letters $X_1,X_2,\dotsc,X_n$. We can write the based matrices of $\alpha$ and $\beta$ in the following form \begin{equation*} b_\alpha= \begin{pmatrix} 0& -\transpose{\cvector{n_\alpha}} \\ \cvector{n_\alpha}& B_\alpha \end{pmatrix}, b_\beta= \begin{pmatrix} 0& -\transpose{\cvector{n_\beta}} \\ \cvector{n_\beta}& B_\beta \end{pmatrix}. \end{equation*} \begin{prop}[Turaev]\label{prop:composite} Using the notation above, the based matrix for the composite nanoword $\alpha\beta$ has the form \begin{equation*} b_{\alpha\beta}= \begin{pmatrix} 0& -\transpose{\cvector{n_\alpha}} & -\transpose{\cvector{n_\beta}}\\ \cvector{n_\alpha}& B_\alpha & D \\ \cvector{n_\beta}& -\transpose{D} & B_\beta \end{pmatrix} \end{equation*} where the entries of $D$ are given by \begin{equation*} b_{\alpha\beta}(W_i,X_j) = \begin{cases} 0 & \text{if $\xType{W_i} = \xType{X_j} = a$ } \\ -n_\beta(X_j) & \text{if $\xType{W_i} = b, \xType{X_j} = a$ } \\ n_\alpha(W_i) & \text{if $\xType{W_i} = a, \xType{X_j} = b$ } \\ n_\alpha(W_i)-n_\beta(X_j) & \text{if $\xType{W_i} = \xType{X_j} = b$. 
} \end{cases} \end{equation*} \end{prop} \begin{proof} As the letters $W_i$ and $X_j$ do not link in $\alpha\beta$ for all $i$ and all $j$, it is clear that $\n_{\alpha\beta}(W_i)$ equals $\n_\alpha(W_i)$ and $\n_{\alpha\beta}(X_j)$ equals $\n_\beta(X_j)$ for all $i$ and all $j$. \par We now consider $b_{\alpha\beta}(W_i,W_j)$. We use \eqref{eqn:matrix_elements} and the fact that for all $k$ and $l$, $h(W_k,X_l) = t(W_k,X_l)$. We have \begin{align*} b_{\alpha\beta}(W_i,W_j) =& t(W_i,W_j) - h(W_i,W_j) + \sum_{k \in G-\lbrace s\rbrace}\left( t(W_i,k)h(W_j,k) - h(W_i,k)t(W_j,k) \right) \\ = & t(W_i,W_j) - h(W_i,W_j) + \\ & \sum_{k}\left( t(W_i,W_k)h(W_j,W_k) - h(W_i,W_k)t(W_j,W_k) \right) + \\ & \sum_{k}\left( t(W_i,X_k)h(W_j,X_k) - h(W_i,X_k)t(W_j,X_k) \right) \\ = & b_\alpha(W_i,W_j) + \sum_{k}\left( t(W_i,X_k)t(W_j,X_k) - t(W_i,X_k)t(W_j,X_k) \right) \\ = & b_\alpha(W_i,W_j). \end{align*} We can calculate $b_{\alpha\beta}(X_i,X_j)$ in the same way. \par We now calculate $b_{\alpha\beta}(W_i,X_j)$. We have \begin{align*} b_{\alpha\beta}(W_i,X_j) =& t(W_i,X_j) - h(W_i,X_j) + \sum_{k \in G-\lbrace s\rbrace}\left( t(W_i,k)h(X_j,k) - h(W_i,k)t(X_j,k) \right) \\ = & \sum_{k}\left( t(W_i,W_k)h(X_j,W_k) - h(W_i,W_k)t(X_j,W_k) \right) + \\ & \sum_{k}\left( t(W_i,X_k)h(X_j,X_k) - h(W_i,X_k)t(X_j,X_k) \right). \end{align*} \par Consider the first part. If $\xType{X_j}$ is $a$ then $h(X_j,W_k) = t(X_j,W_k) = 0$ and \begin{equation*} \sum_{k}\left( t(W_i,W_k)h(X_j,W_k) - h(W_i,W_k)t(X_j,W_k) \right) = 0. \end{equation*} If $\xType{X_j}$ is $b$ then $h(X_j,W_k) = t(X_j,W_k) = 1$ and \begin{align*} \sum_{k}\left( t(W_i,W_k)h(X_j,W_k) - h(W_i,W_k)t(X_j,W_k) \right) & = \sum_{k}\left( t(W_i,W_k) - h(W_i,W_k) \right) \\ & = \n(W_i). \end{align*} \par Consider the second part. If $\xType{W_i}$ is $a$ then $h(W_i,X_k) = t(W_i,X_k) = 0$ and \begin{equation*} \sum_{k}\left( t(W_i,X_k)h(X_j,X_k) - h(W_i,X_k)t(X_j,X_k) \right) = 0.
\end{equation*} If $\xType{W_i}$ is $b$ then $h(W_i,X_k) = t(W_i,X_k) = 1$ and \begin{align*} \sum_{k}\left( t(W_i,X_k)h(X_j,X_k) - h(W_i,X_k)t(X_j,X_k) \right) & = \sum_{k}\left( h(X_j,X_k) - t(X_j,X_k) \right) \\ & = -\n(X_j). \end{align*} \par By combining the results of the calculations of the two parts the proof is complete. \end{proof} We now make some observations. If $Y$ is an annihilating element in $M(\alpha\beta)$ then $Y$ must be an annihilating element in $M(\alpha)$ or in $M(\beta)$. If $Y$ is a core element in $M(\alpha\beta)$ then $Y$ must be a core element in $M(\alpha)$ or in $M(\beta)$. If $Y$ and $Z$ are complementary elements in $M(\alpha\beta)$ we have two possibilities. The first case is that $Y$ and $Z$ both come from the same component, say $\alpha$. In this case $Y$ and $Z$ are complementary elements in $M(\alpha)$. The second case is that $Y$ and $Z$ come from different components, say $Y$ comes from $\alpha$ and $Z$ comes from $\beta$. As $Y$ and $Z$ are complementary we have $\n(Y)+\n(Z)=0$ and for all letters $K$ in $\alpha\beta$ \begin{equation*} b_{\alpha\beta}(Y,K) + b_{\alpha\beta}(Z,K) = b_{\alpha\beta}(s,K) = -\n(K). \end{equation*} Then $b_{\alpha\beta}(Y,Z)$ is $-\n(Z)$. However, by Proposition~\ref{prop:composite} we have \begin{equation*} b_{\alpha\beta}(Y,Z) = \begin{cases} 0 &\text{if $\xType{Y} = \xType{Z} = a$} \\ -\n(Z) &\text{if $\xType{Y} = b, \xType{Z} = a$} \\ \n(Y) &\text{if $\xType{Y} = a, \xType{Z} = b$} \\ \n(Y)-\n(Z) &\text{if $\xType{Y} = \xType{Z} = b$.} \end{cases} \end{equation*} \par We now assume that $\xType{Z}=\xType{Y}$. Then from the above calculation we have $\n(Y)=\n(Z)=0$. Furthermore, for $W$ in $\alpha$ we have \begin{equation*} b_{\alpha\beta}(Y,W) = -b_{\alpha\beta}(Z,W) - \n(W). 
\end{equation*} Using Proposition~\ref{prop:composite} to evaluate $b_{\alpha\beta}(Z,W)$ we get \begin{equation*} b_{\alpha\beta}(Y,W) = \begin{cases} -\n(W) &\text{if $\xType{Z} = \xType{W} = a$} \\ -\n(W) &\text{if $\xType{Z} = a, \xType{W} = b$} \\ 0 &\text{if $\xType{Z} = b, \xType{W} = a$} \\ 0 &\text{if $\xType{Z} = \xType{W} = b$.} \end{cases} \end{equation*} Note that the value of $b_{\alpha\beta}(Y,W)$ does not depend on the type of $W$. Similarly, for a letter $X$ in $\beta$, we can calculate $b_{\alpha\beta}(Z,X)$. We have \begin{equation*} b_{\alpha\beta}(Z,X) = -b_{\alpha\beta}(Y,X) - \n(X) \end{equation*} and thus \begin{equation*} b_{\alpha\beta}(Z,X) = \begin{cases} -\n(X) &\text{if $\xType{Y} = \xType{X} = a$} \\ -\n(X) &\text{if $\xType{Y} = a, \xType{X} = b$} \\ 0 &\text{if $\xType{Y} = b, \xType{X} = a$} \\ 0 &\text{if $\xType{Y} = \xType{X} = b$.} \end{cases} \end{equation*} Again, the value of $b_{\alpha\beta}(Z,X)$ does not depend on the type of $X$. \par Thus if $Y$ in $\alpha$ and $Z$ in $\beta$ are complementary and both letters are type $a$, then $Y$ is a core element in $M(\alpha)$ and $Z$ is a core element in $M(\beta)$. If both letters are type $b$ then $Y$ is an annihilating element in $M(\alpha)$ and $Z$ is an annihilating element in $M(\beta)$. \begin{prop}\label{prop:compositeSameLetters} If the letters in $\alpha$ and $\beta$ all have the same type then $\rho(\alpha\beta) \geq \rho(\alpha) + \rho(\beta)$. \end{prop} \begin{proof} To calculate $\rho(\alpha\beta)$ we start with $M(\alpha\beta)$ and remove elements until we get to the primitive based matrix $P(\alpha\beta)$. From the above argument, any elements which can be removed from $M(\alpha\beta)$ correspond to elements that can be removed from $M(\alpha)$ or $M(\beta)$. Thus the result follows.
\end{proof} \begin{cor}\label{cor:primitive} If the letters in $\alpha$ and $\beta$ all have the same type and $M(\alpha)$ and $M(\beta)$ are primitive then $\rho(\alpha\beta) = \rho(\alpha) + \rho(\beta)$. \end{cor} \begin{proof} If $M(\alpha\beta)$ is not primitive, then by the above argument there exists at least one reducible element in $M(\alpha)$ or $M(\beta)$ which would contradict the assumption that $M(\alpha)$ and $M(\beta)$ are primitive. \end{proof} \section{Cablings of virtual strings}\label{sec:cable} We can define a cabling of a virtual string in a similar way to cablings of knots. We can construct the \textcable{n} of a virtual string $\Gamma$ from a diagram $D$ of $\Gamma$ embedded without virtual crossings in a surface $S$. We pick an arbitrary point on $D$ which is not a double point and cut $D$ at that point to get an arc in $S$. We then replace the arc with $n$ parallel copies of the arc. We label these arcs $0$ to $n-1$ from left to right as we travel along the curve according to its orientation. From the orientation of the curve, each arc has a start and an end. We join the end of arc $i$ to the beginning of arc $i+1$ for $i$ going from $0$ to $n-2$. Finally we join the end of arc $n-1$ to the beginning of arc $0$ by crossing the other $n-1$ arcs. We call the resulting diagram $\cable{D}{n}$ and describe it as the \textcable{n} of $D$. \par \begin{figure} \caption{The \textcable{3} of the diagram in Figure~\ref{fig:diagram31}} \label{fig:cabling31} \end{figure} To illustrate the result of this process, Figure~\ref{fig:cabling31} shows the \textcable{3} of the diagram in Figure~\ref{fig:diagram31} after the virtual crossings have been replaced by handles. The point on the curve marked with a blob is where the $3$ arcs have been joined to each other according to the description given above. \par Note that in this construction, each crossing in the original virtual string diagram becomes $n^2$ crossings in the \textcable{n}.
We also get an additional $n-1$ crossings when we join up the arcs. Thus the number of crossings in the \textcable{n} of a $k$ crossing virtual string diagram is $kn^2 + n - 1$. \par We picked an arbitrary point on $D$ to join the arcs. It is easy to check, using the flattened Reidemeister moves, that we get a homotopic virtual string diagram, no matter which non-double point on $D$ we pick. It is also easy to check that for homotopically equivalent virtual string diagrams $D$ and $D^\prime$ the \textcable{n}s $\cable{D}{n}$ and $\cable{D^\prime}{n}$ are also homotopic. Thus, if we define the \textcable{n} of $\Gamma$ as the virtual string represented by $\cable{D}{n}$ for some diagram $D$ representing $\Gamma$, the \textcable{n} of $\Gamma$ is well-defined and we write it $\cable{\Gamma}{n}$. In particular we note that invariants of cables of virtual strings can be used to distinguish virtual strings themselves. \par We remark that Turaev defined cables of virtual strings in \cite{Turaev:2004}. He defines cables as the result of ``Adams operations''. He noted that this is a well-defined operation on virtual strings. He also gave a relation between the $u$-polynomial of a virtual string and the $u$-polynomial of its \textcable{n}. We discuss this further below. \par After we had written this section we discovered that Kadokami had defined cables of virtual strings in the same way as we have in \cite{Kadokami:Non-triviality}. He calls the result of the construction the $n$-parallelized projected virtual knot diagram of $D$. Although Kadokami made this definition, he did not examine any of the properties that we give here. \par We can define the \textcable{n} of a nanoword $\alpha$ in the following way. We first construct a diagram $D$ corresponding to $\alpha$ on a surface $S$ such that $D$ has no virtual crossings. Each crossing of $D$ is labelled with a letter from $\alpha$. 
The nanoword $\alpha$ gives us a base point on $D$ which we use to construct $\cable{D}{n}$ as explained above. Using the base point we can then write a nanoword which represents $\cable{D}{n}$. We define this nanoword to be the \textcable{n} of $\alpha$. \par We examine this construction more closely. We start by labelling the crossings of the \textcable{n} of $D$. We have already labelled each arc $0$ to $n-1$ going from left to right. The crossing points which are created at the points where we join the $i$th arc to the $(i+1)$th arc are labelled $C_i$ (for $i$ running from $0$ to $n-2$). For the $n^2$ crossings derived from a single crossing $A$ in $\alpha$ we adopt the following naming scheme. We note that each crossing in $D$ has a type ($a$ or $b$) which is specified in $\alpha$. If $A$ is of type $a$, we simply label the crossings $A_{i,j}$ where the crossing is the intersection of the $i$th arc coming from the left and the $j$th arc coming from the right. If $A$ is of type $b$, we again label the crossings $A_{i,j}$. However, this time the crossing is the intersection of the $(i-1)$th arc (calculating in $\cyclic{n}$) coming from the left and the $j$th arc coming from the right. Figure~\ref{fig:cablelabel} shows how the arcs and crossings are labelled for a \textcable{3}. \begin{figure} \caption{Labelling crossings in a \textcable{3}} \label{fig:cablelabel} \end{figure} \par The types of the crossings $A_{i,j}$ in the cable can be determined from $i$, $j$ and the type of $A$. If $A$ is of type $a$, $A_{i,j}$ is of type $a$ if $i$ is less than or equal to $j$ and of type $b$ if $i$ is greater than $j$. If $A$ is of type $b$, writing $d$ for $i-1$ calculated in $\cyclic{n}$, $A_{i,j}$ is of type $a$ if $j$ is greater than $d$ and of type $b$ if $j$ is less than or equal to $d$. The crossings $C_i$ are all of type $a$.
\par Using this labelling scheme we can calculate the nanoword that represents the cable of a virtual string directly from a nanoword $\alpha$ representing the virtual string in a mechanical way. For an integer $i$ between $0$ and $n-1$ inclusive we define $w_i$ to be a copy of $\alpha$ with every letter $A$ replaced by $n$ letters as follows. If $A$ has type $a$, replace the first occurrence of $A$ with \begin{equation*} A_{i,0}A_{i,1}\dotso A_{i,n-1} \end{equation*} and the second occurrence of $A$ with \begin{equation*} A_{n-1,i}A_{n-2,i}\dotso A_{0,i}. \end{equation*} If $A$ has type $b$, replace the first occurrence of $A$ with \begin{equation*} A_{0,i}A_{n-1,i}A_{n-2,i}\dotso A_{1,i} \end{equation*} and the second occurrence of $A$ with \begin{equation*} A_{i+1,0}A_{i+1,1}\dotso A_{i+1,n-1} \end{equation*} where $i+1$ is calculated in $\cyclic{n}$. Then the \textcable{n} of $\alpha$ is given by \begin{equation*} w_0C_0w_1C_1\dotso C_{n-3}w_{n-2}C_{n-2}w_{n-1}C_{n-2}C_{n-3}\dotso C_1C_0 \end{equation*} where the types of the letters $A_{i,j}$ and letters $C_i$ are defined as above. \par In this way the cabling operation can be defined for nanowords representing virtual strings without reference to diagrams. We denote the \textcable{n} of a nanoword $\alpha$ by $\cable{\alpha}{n}$. We give an example of calculating a \textcable{2} of a nanoword. \par \begin{ex}\label{ex:2cable} Consider $\alpha$, the nanoword $\nanoword{XYXZYZ}{abb}$. Then $w_0$ is given by \begin{equation*} X_{0,0}X_{0,1}Y_{0,0}Y_{1,0}X_{1,0}X_{0,0}Z_{0,0}Z_{1,0}Y_{1,0}Y_{1,1}Z_{1,0}Z_{1,1}, \end{equation*} $w_1$ is given by \begin{equation*} X_{1,0}X_{1,1}Y_{0,1}Y_{1,1}X_{1,1}X_{0,1}Z_{0,1}Z_{1,1}Y_{0,0}Y_{0,1}Z_{0,0}Z_{0,1} \end{equation*} and the \textcable{2} of $\alpha$, $\cable{\alpha}{2}$, is $w_0C_0w_1C_0$. The letters $X_{0,0}$, $X_{0,1}$, $X_{1,1}$, $Y_{1,1}$, $Z_{1,1}$ and $C_0$ are all of type $a$ and the remaining letters are of type $b$.
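The replacement rule is mechanical enough to implement directly. The following Python sketch (an illustration only; the encoding of letters as tuples is our own) builds the cable word and the types of its letters, and reproduces this example.

```python
def cable(word, types, n):
    """Build the n-cable of a nanoword via the mechanical replacement rule.

    word  -- sequence of letters, each appearing exactly twice
    types -- dict mapping each letter to its type, 'a' or 'b'

    Returns the cable word as a list (triples (A, i, j) for crossings derived
    from a letter A, pairs ('C', i) for the joining crossings) together with
    a dict giving the type of each new crossing.
    """
    blocks = []
    for i in range(n):
        w, seen = [], set()
        for A in word:
            first = A not in seen
            seen.add(A)
            if types[A] == 'a':
                if first:   # A_{i,0} A_{i,1} ... A_{i,n-1}
                    w += [(A, i, j) for j in range(n)]
                else:       # A_{n-1,i} A_{n-2,i} ... A_{0,i}
                    w += [(A, k, i) for k in range(n - 1, -1, -1)]
            else:
                if first:   # A_{0,i} A_{n-1,i} ... A_{1,i}
                    w += [(A, 0, i)] + [(A, k, i) for k in range(n - 1, 0, -1)]
                else:       # A_{i+1,0} A_{i+1,1} ... A_{i+1,n-1}, i+1 mod n
                    w += [(A, (i + 1) % n, j) for j in range(n)]
        blocks.append(w)
    # assemble w_0 C_0 w_1 C_1 ... w_{n-1} C_{n-2} C_{n-3} ... C_0
    out = []
    for i in range(n - 1):
        out += blocks[i] + [('C', i)]
    out += blocks[-1] + [('C', i) for i in range(n - 2, -1, -1)]
    # types of the new crossings, following the rules in the text
    new_types = {('C', i): 'a' for i in range(n - 1)}
    for A, t in types.items():
        for i in range(n):
            for j in range(n):
                if t == 'a':
                    new_types[(A, i, j)] = 'a' if i <= j else 'b'
                else:
                    new_types[(A, i, j)] = 'a' if j > (i - 1) % n else 'b'
    return out, new_types

# the 2-cable of the nanoword (XYXZYZ, abb) from this example
out, new_types = cable('XYXZYZ', {'X': 'a', 'Y': 'b', 'Z': 'b'}, 2)
```

Running this yields $w_0C_0w_1C_0$ with the types listed above; the cable word has $2(kn^2+n-1)=26$ symbols for $k=3$ crossings, as expected from the crossing count given earlier.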
\end{ex} \par We can calculate the $u$-polynomial of a cable directly from the $u$-polynomial of the original virtual string. We note that in Section~5.4 of Turaev's paper \cite{Turaev:2004}, there is a statement of the relationship between the $u$-polynomials of a cable and the original virtual string. However there is an error in the statement and no proof is given. We give the correct statement and provide a proof here. \begin{thm}\label{thm:cableUPoly} For a virtual string $\Gamma$ the $u$-polynomial of the \textcable{n} of $\Gamma$ is given by \begin{equation}\label{eqn:cableUPoly} u_{\cable{\Gamma}{n}}(t) = n^2u_{\Gamma}(t^n). \end{equation} \end{thm} \begin{proof} Recall that the $u$-polynomial of a virtual string $\Gamma$ is given by \begin{equation*} u_{\Gamma}(t) = \sum_{k \geq 1}u_k(\Gamma)t^k. \end{equation*} Substituting this into \eqref{eqn:cableUPoly} and moving $n^2$ inside the sum, we get \begin{equation*} u_{\cable{\Gamma}{n}}(t) = \sum_{k \geq 1}n^2u_k(\Gamma)t^{nk}. \end{equation*} We now prove this equation. \par We construct a diagram $D$ of $\Gamma$ in some surface $S$ so that $D$ has no virtual crossings. We then construct $\cable{D}{n}$ the \textcable{n} of $D$. \par It is sufficient to prove the following two facts. Firstly, for all $n^2$ crossings $A_{i,j}$ in $\cable{D}{n}$ coming from a crossing $A$ in $D$, $\n(A_{i,j})$ is equal to $n \n(A)$. Secondly, for all $n-1$ crossings $C_i$ added by joining the arcs of the cable, we have $\n(C_i)=0$. \par To show both facts we recall that $\n(X)$ is the homological intersection number of the loop starting at $X$ and the curve $\cable{D}{n}$. The whole cable $\cable{D}{n}$, for the purposes of computing the homological intersection number, can be considered equivalent to $n$ parallel copies of the original curve $D$. Similarly, any loop in the cable can be considered to be equivalent to zero or more parallel copies of $D$ and possibly a loop in $D$ starting at a crossing of $D$.
\par We first consider a crossing $A_{i,j}$ derived from a crossing $A$ in $D$. The loop at $A_{i,j}$ is equivalent to the loop at $A$ in $D$ and $p$ parallel copies of $D$. Here $p$ is equal to $j-i$, calculating in $\cyclic{n}$. Then \begin{equation*} \n(A_{i,j}) = n \left(p\;b_D(s,s) + b_D(A,s) \right). \end{equation*} As $b_D(s,s) = 0$ and $b_D(A,s)=\n(A)$, we have \begin{equation*} \n(A_{i,j}) = n \n(A). \end{equation*} We now consider a crossing $C_i$. In this case the loop at $C_i$ is equivalent to $(n-1)-i$ parallel copies of $D$. In this case we have \begin{align*} \n(C_i) = & n \left(\left( (n-1) - i \right) b_D(s,s) \right) \\ = & 0 \end{align*} and the proof is complete. \end{proof} \begin{cor} If $\Gamma$ is a virtual string with non-trivial $u$-polynomial, then the virtual strings in the family $\{\Gamma\} \cup \{\cable{\Gamma}{n+1} \mid n \in \mathbb{Z}^{+}\}$ are all mutually homotopically distinct. \end{cor} \begin{proof} As $\Gamma$ has a non-trivial $u$-polynomial, the degree of the $u$-polynomial is $k$ for some positive integer $k$. By the theorem, the degree of the $u$-polynomial of $\cable{\Gamma}{n}$ is $nk$. Thus the $u$-polynomials of the family are all different and so the virtual strings themselves must all be homotopically distinct. \end{proof} As we can consider both cablings and coverings as maps from the set of virtual strings into itself, it makes sense to consider their composition. Using our notation we can write the \textcover{r} of the \textcable{n} of a virtual string $\Gamma$ as $\cover{(\cable{\Gamma}{n})}{r}$ and the \textcable{n} of the \textcover{r} of $\Gamma$ as $\cable{(\cover{\Gamma}{r})}{n}$. In general these are not necessarily the same. An example is sufficient to show this. \par \begin{ex} We consider the virtual string $\Gamma$ represented by the nanoword $\nanoword{XYXZYZ}{abb}$. We consider the case where $n$ and $r$ are both $2$. Then $\cover{\Gamma}{2}$ is represented by $\nanoword{XX}{a}$ which is homotopically trivial.
Thus $\cable{(\cover{\Gamma}{2})}{2}$ is also trivial. On the other hand, we calculated $\cable{\Gamma}{2}$ in Example \ref{ex:2cable}. It is easy to check (or see the discussion below) that this nanoword is fixed under the \textcover{2} map. That is, $\cover{(\cable{\Gamma}{2})}{2}$ is equal to $\cable{\Gamma}{2}$. As the $u$-polynomial for $\Gamma$ is $t^2-2t$, by Theorem~\ref{thm:cableUPoly}, the $u$-polynomial for $\cable{\Gamma}{2}$ is $4t^4-8t^2$. Thus $\cable{\Gamma}{2}$ is non-trivial and $\cover{(\cable{\Gamma}{2})}{2}$ is not homotopic to $\cable{(\cover{\Gamma}{2})}{2}$. \end{ex} However, we do have a relationship between cablings of coverings and coverings of cables. \begin{thm} For any virtual string $\Gamma$, any positive integer $n$ and any non-negative integer $r$, $\cable{(\cover{\Gamma}{k})}{n}$ and $\cover{(\cable{\Gamma}{n})}{r}$ are homotopic. Here $k$ is zero if $r$ is zero and $k$ is $r/d$ otherwise, where $d$ is the greatest common divisor of $n$ and $r$. \end{thm} \begin{proof} It is enough to show that for some nanoword $\alpha$ representing $\Gamma$, $\cable{(\cover{\alpha}{k})}{n}$ and $\cover{(\cable{\alpha}{n})}{r}$ are isomorphic. \par First we consider $\cable{(\cover{\alpha}{k})}{n}$. We note that because of the mechanical way in which we can calculate the cable of a nanoword, we can calculate this nanoword from $\cable{\alpha}{n}$. We can do this by deleting all letters $X_{i,j}$ in the nanoword $\cable{\alpha}{n}$ which came from a letter $X$ in $\alpha$ such that $k$ does not divide $\n(X)$. Thus the $X_{i,j}$ that are retained from $\cable{\alpha}{n}$ are those such that $\n(X)$ is in $k\mathbb{Z}$. \par We now consider $\cover{(\cable{\alpha}{n})}{r}$. We first note that the $n-1$ crossings $C_i$ added to join up the $n$ arcs in the cabling all have $\n(C_i)=0$. Thus they are not deleted when we take the covering. 
We then note that any crossing $X$ in $\alpha$ gets transformed into $n^2$ crossings $X_{i,j}$ by the cabling operation. As we noted in the proof of Theorem~\ref{thm:cableUPoly}, $\n(X_{i,j})$ is equal to $n \n(X)$. If we can show that the letters $X_{i,j}$ that are retained after taking the \textcover{r} have the property that $\n(X)$ is in $k\mathbb{Z}$ then we have shown that $\cable{(\cover{\alpha}{k})}{n}$ and $\cover{(\cable{\alpha}{n})}{r}$ are isomorphic. \par If $r$ is zero then the letters $X_{i,j}$ appear in $\cover{(\cable{\alpha}{n})}{r}$ if and only if $n \n(X)$ is zero. As $n$ is non-zero, this can only happen if $\n(X)$ is zero. Thus $\n(X)$ is in $k\mathbb{Z}$. \par If $r$ is non-zero then the letters $X_{i,j}$ appear in $\cover{(\cable{\alpha}{n})}{r}$ if and only if $n \n(X)$ is divisible by $r$ or $\n(X)$ is zero. If $\n(X)$ is zero then $\n(X)$ is in $k\mathbb{Z}$. If $r$ divides $n \n(X)$ then, dividing through by $d$, we see that $k$ divides $(n/d) \n(X)$; as $k$ is coprime to $n/d$, $k$ must divide $\n(X)$. In other words $\n(X)$ is in $k\mathbb{Z}$ and the proof is complete. \end{proof} We note that all cablings of virtual strings are fixed points under some non-trivial covering. We have seen that for a crossing $X_{i,j}$ in $\cable{\alpha}{n}$ derived from a crossing $X$ in $\alpha$, $\n(X_{i,j})$ is equal to $n\n(X)$. Thus $\alpha$ is fixed under the \textcover{r} map if and only if $\cable{\alpha}{n}$ is fixed under the \textcover{rn} map. This means that $\Gamma$ is in $\baseSet{r}$ if and only if $\cable{\Gamma}{n}$ is in $\baseSet{rn}$. In particular $\cable{\Gamma}{n}$ is in $\baseSet{n}$ as $\baseSet{1}$ contains all virtual strings. \par We now calculate the based matrix of an \textcable{n} of a virtual string $\Gamma$. As before we take a labelled diagram $D$ of $\Gamma$ and construct a labelled diagram of the cable $\cable{D}{n}$. We use the notation for crossing labels that we used above.
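As an aside, the transformation of $u$-polynomials in Theorem~\ref{thm:cableUPoly} is simple to apply in code. A Python sketch follows, with a $u$-polynomial encoded as a map from exponents to coefficients (an encoding of our choosing, used for illustration only).

```python
def cable_u_poly(u, n):
    """Apply u_{cable(Gamma, n)}(t) = n^2 * u_Gamma(t^n).

    u -- u-polynomial as a dict {exponent: coefficient}
    n -- the cabling parameter
    """
    # t^k maps to t^(nk); every coefficient is scaled by n^2
    return {n * k: n * n * c for k, c in u.items()}

# u-polynomial t^2 - 2t of the virtual string (XYXZYZ, abb) from the text
u = {2: 1, 1: -2}
print(cable_u_poly(u, 2))  # {4: 4, 2: -8}, i.e. 4t^4 - 8t^2
```

For $u_\Gamma(t) = t^2 - 2t$ and $n = 2$ this returns $4t^4 - 8t^2$, matching the example above.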
\par Recall that for the purposes of calculating the homological intersection number the loop at $A_{i,j}$ is equivalent to the loop at $A$ in $D$ and $j-i$ parallel copies of $D$ (calculating in $\cyclic{n}$). The loop at $C_i$ is equivalent to $(n-1)-i$ parallel copies of $D$. \par We can calculate the based matrix of the \textcable{n} as follows: \begin{equation*} b(C_i,C_j) = (n-1-i)\left( (n-1-j) b_D(s,s) \right) = 0, \end{equation*} \begin{equation*} b(A_{i,j},C_k) = (n-1-k)\left(b_D(A,s) + (j-i) b_D(s,s) \right) = (n-1-k)\n(A) \end{equation*} and \begin{align*} b(A_{i,j},B_{k,l}) = & (l-k) \left( b_D(A,s) + (j-i) b_D(s,s) \right) + (j-i)b_D(s,B) + b_D(A,B) \\ =& b_D(A,B) + (l-k)\n(A) - (j-i)\n(B). \end{align*} \begin{thm} If $\Gamma$ and $\Gamma^\prime$ are virtual strings such that $P(\Gamma)$ and $P(\Gamma^\prime)$ are isomorphic, then $P(\cable{\Gamma}{n})$ is isomorphic to $P(\cable{\Gamma^\prime}{n})$ for all $n$. \end{thm} \begin{proof} Let $\alpha$ be a nanoword representing $\Gamma$. When we calculate $P(\alpha)$ from $M(\alpha)$ we make a series of reductions removing one or two elements at each step. For each such step we will show that there is an equivalent set of reduction moves on $M(\cable{\alpha}{n})$ which allows us to remove $n^2$ or $2n^2$ elements. We call the matrix derived from $M(\cable{\alpha}{n})$ in this way $Q(\cable{\alpha}{n})$. Note that $Q(\cable{\alpha}{n})$ is not necessarily $P(\cable{\alpha}{n})$. \par Assume that $A$ is an annihilating element in $M(\alpha)$. Then $\n(A)$ is $0$ and $b(A,B)=0$ for all $B$ in $\alpha$. For each crossing $A_{i,j}$ in $\cable{\alpha}{n}$ which is derived from $A$ we have \begin{equation*} b(A_{i,j},C_k) = 0 \end{equation*} and \begin{equation*} b(A_{i,j},B_{k,l}) = - (j-i)\n(B) = -(j-i)\n(B_{k,l}). \end{equation*} We note that values of $j-i$ run from $0$ to $n-1$ and there are $n$ pairs $(i,j)$ for which $j-i$ is the same. 
We thus have $n$ elements $X_i$ derived from $A$ such that \begin{equation*} b(X_i,B_{k,l}) = -i\n(B_{k,l}) \end{equation*} for each $i$ running from $0$ to $n-1$. Then the $n$ $X_0$ elements are all annihilating elements in $M(\cable{\alpha}{n})$. We can pair all the remaining $X_i$ elements with $X_{n-i}$ elements. Note that when $n$ is even, we pair the $X_{n/2}$ elements with themselves, but as there are an even number of such elements this is always possible. These pairs are complementary elements in $M(\cable{\alpha}{n})$ because \begin{equation*} b(X_i,Y) + b(X_{n-i},Y) = -(i + n - i)\n(Y) = -n\n(Y) = b(s,Y) \end{equation*} for all crossings $Y$ corresponding to crossings in $\alpha$ and \begin{equation*} b(X_i,C_j) + b(X_{n-i},C_j) = 0 = b(s,C_j) \end{equation*} for all $j$. \par If $A$ is a core element of $M(\alpha)$ we can make a similar calculation. We find that we have $n$ core elements in $M(\cable{\alpha}{n})$ which are derived from $A$. We also find that the rest of the crossings derived from $A$ can be paired to form complementary pairs in $M(\cable{\alpha}{n})$. \par If $A$ and $B$ are complementary elements in $M(\alpha)$ then we can pair crossings derived from $A$ and $B$ in a particular way to make complementary pairs in $M(\cable{\alpha}{n})$. Again the calculation is similar. \par We now note that $Q(\cable{\alpha}{n})$ can be defined algebraically in terms of $P(\alpha)$. That is, we can define $Q(\cable{\alpha}{n})$ in terms of $P(\alpha)$ without reference to $\alpha$ itself. We do this as follows. \par We define a based matrix $Q(P(\alpha),n)$ which is a triple $(G_Q,s,b_Q)$ from the based matrix $P(\alpha)$ with triple $(G,s,b)$. $G_Q$ consists of $s$, $n^2$ elements $X_{i,j}$ for each element $X$ of $G-\lbrace s \rbrace$ in $P(\alpha)$ and $n-1$ elements $C_i$ ($i$ running from $0$ to $n-2$). We define $\n(X_{i,j}) = n\n(X)$ and $\n(C_i)=0$.
We then define \begin{equation*} b_Q(X_{i,j},Y_{k,l}) = b(X,Y) + k\n(X) - i\n(Y), \end{equation*} \begin{equation*} b_Q(X_{i,j},C_k) = (k+1)\n(X) \end{equation*} and \begin{equation*} b_Q(C_i,C_j) = 0. \end{equation*} It is easy to check that the based matrix $Q(P(\alpha),n)$ is isomorphic to $Q(\cable{\alpha}{n})$. \par Now if $P(\Gamma^\prime)$ is isomorphic to $P(\Gamma)$ then $Q(\cable{\Gamma^\prime}{n})$ is isomorphic to $Q(\cable{\Gamma}{n})$ and thus the result follows. \end{proof} \par The implication of this theorem is that there is no benefit in calculating invariants derived from based matrices for cables of virtual strings in order to distinguish the virtual strings themselves. \par We note that in $Q(\cable{\alpha}{n})$, for any $Y$ not equal to $C_j$ for some $j$, \begin{equation*} b(C_i,Y) + b(C_{n-i-2},Y) = -n \n(Y) = b(s,Y) \end{equation*} and \begin{equation*} b(C_i,C_j) + b(C_{n-i-2},C_j) = 0 = b(s,C_j). \end{equation*} Thus $C_i$ and $C_{n-i-2}$ form a complementary pair in $Q(\cable{\alpha}{n})$ if $i$ does not equal $n-i-2$. If $n$ is odd then we can pair up the $n-1$ letters $C_i$ to form complementary pairs. If $n$ is even then we can pair up $n-2$ letters $C_i$ to form complementary pairs and we are left with a single element $C_i$ with $i$ equal to $(n-2)/2$. Note that if all other elements have been eliminated from $Q(\cable{\alpha}{n})$ then this final $C_i$ is a core element. From this discussion we have the following results. \begin{prop} For any virtual string $\Gamma$ and any natural number $n$, \begin{equation*} \rho(\cable{\Gamma}{n}) \leq n^2\rho(\Gamma) + \delta_n \end{equation*} where $\delta_n$ is $1$ if $n$ is even and $0$ if $n$ is odd. \end{prop} \begin{prop} If $\rho(\Gamma)$ is $0$ then $\rho(\cable{\Gamma}{n})$ is also $0$. \end{prop} \end{document}
\begin{document} \twocolumn[ \title{Regularized Q-learning} \author{Han-Dong Lim$^1$ \\ \texttt{[email protected]} \and Do Wan Kim$^2$\\ \texttt{[email protected]} \and Donghwan Lee$^1$ \\ \texttt{[email protected]}} \date{ $^1$Department of Electrical Engineering, KAIST, Daejeon, 34141, South Korea\\% $^2$Department of Electrical Engineering, Hanbat National University\\[2ex] } \maketitle ] \begin{abstract} Q-learning is a widely used algorithm in the reinforcement learning community. In the lookup table setting, its convergence is well established. However, its behavior is known to be unstable when linear function approximation is used. This paper develops a new Q-learning algorithm that converges when linear function approximation is used. We prove that simply adding an appropriate regularization term ensures convergence of the algorithm, and we prove its stability using a recent analysis tool based on switching system models. Moreover, we experimentally show that it converges in environments where Q-learning with linear function approximation is known to diverge. We also provide an error bound on the solution to which the algorithm converges. \end{abstract} \section{Introduction} \par Recently, reinforcement learning has shown great success in various fields. For instance,~\citealp{mnih2015human} achieved human-level performance in several video games in the Atari benchmark~\cite{bellemare2013arcade}. Since then, research on deep reinforcement learning algorithms has made significant progress~\cite{lan2020maxmin,chen2021randomized}. For example,~\citealp{badia2020agent57} exceeds standard human performance in all 57 Atari games, and~\citealp{schrittwieser2020mastering} solves Go, chess, Shogi, and Atari without prior knowledge of the rules. Although great success has been achieved in practice, there is still a gap between theory and this practical success.
Especially when off-policy learning, function approximation, and bootstrapping are used together, the algorithm may diverge or show unstable behavior. This phenomenon is called the deadly triad~\cite{sutton2018reinforcement}. Famous counter-examples are given in~\citealp{baird1995residual,tsitsiklis1997analysis}. \par For policy evaluation, especially for the temporal-difference (TD) learning algorithm, there have been several algorithms to resolve the deadly triad issue.~\citealp{bradtke1996linear} uses a least-squares method to compute the fixed point of the TD-learning algorithm, but suffers from \(O(h^2)\) time complexity, where \(h\) is the number of features. \citealp{maei2011gradient,sutton2009fast} developed gradient descent based methods which minimize the mean square projected Bellman error.~\citealp{ghiassian2020gradient} added a regularization term to the TD Correction (TDC) algorithm, which uses a single time-scale step-size.~\citealp{lee2021versions} introduced several variants of the gradient TD (GTD) algorithm under control-theoretic frameworks. \citealp{sutton2016emphatic} re-weights some states to match the on-policy distribution in order to stabilize off-policy TD-learning. \citealp{diddigi2019convergent} uses \(l_2\) regularization to propose a new convergent off-policy TD-learning algorithm. \par First presented by~\citealp{watkins1992q}, Q-learning also suffers from divergence issues under the deadly triad. While there are convergence results under the look-up table setting~\cite{watkins1992q,jaakkola1994convergence,borkar2000ode,lee2019unified}, even with simple linear function approximation, convergence is only guaranteed under strong assumptions~\cite{melo2008analysis,lee2019unified,yang2019sample}.~\citealp{melo2008analysis} adopts an assumption that the behavior policy and target policy are similar enough to guarantee convergence, which is not practical in general.
\citealp{lee2019unified} relies on a strong assumption to ensure convergence with the so-called switching system approach.~\citealp{yang2019sample} has a stringent assumption on anchor state-action pairs. There are few works,~\citealp{agarwal2021online,carvalho2020new,zhang2021breaking,maei2010toward}, that guarantee convergence under more general assumptions. \par The main goal of this paper is to propose a practical Q-learning algorithm that guarantees convergence under linear function approximation. We prove its convergence using the ordinary differential equation (O.D.E.) analysis framework~\cite{borkar2000ode} together with the switching system approach developed in~\citealp{lee2019unified}. As in~\citealp{lee2019unified}, we construct upper and lower comparison systems, and prove their global asymptotic stability based on switching system theories. Compared to standard Q-learning~\cite{watkins1992q}, the only difference is the additional \(l_2\) regularization term, which makes the algorithm relatively simple. Compared to the previous works~\cite{carvalho2020new,maei2010toward}, our algorithm is single time-scale, and hence shows faster convergence rates experimentally. Our algorithm directly uses bootstrapping rather than circumventing the issue in the deadly triad. Therefore, it could give new insight into training reinforcement learning algorithms without using the so-called target network technique introduced in~\citealp{mnih2015human}. The main contributions of this paper are summarized as follows: \begin{enumerate} \item A new single time-scale Q-learning algorithm with linear function approximation is proposed. \item We prove the convergence of the proposed algorithm based on the O.D.E. approach together with the switching system model in~\citealp{lee2019unified}. \item We experimentally show that our algorithm converges faster than other two time-scale Q-learning algorithms~\cite{carvalho2020new,maei2010toward}.
\end{enumerate} \par Related works are summarized as follows: Motivated by the empirical success of deep Q-learning in~\citealp{mnih2015human}, recent works in~\citealp{zhang2021breaking,carvalho2020new,agarwal2021online} use the target network to circumvent the bootstrapping issue and guarantee convergence. \citealp{carvalho2020new} uses a two time-scale learning method, and has a strong assumption on the boundedness of the feature matrix. \citealp{zhang2021breaking} used \(l_2\) regularization with the target network, but a projection step is involved, which makes it difficult to implement practically. Moreover, it also uses a two time-scale learning method. \citealp{agarwal2021online} additionally uses the so-called experience replay technique with the target network, and also has a strong assumption on the boundedness of the feature matrix. Furthermore, optimality is only guaranteed under a specific type of Markov decision process. \citealp{maei2010toward} suggested the so-called Greedy-GQ (gradient Q-learning) algorithm, but due to non-convexity of the objective function, it could converge to a local optimum. \section{Preliminaries and Notations} \subsection{Markov Decision Process} We consider an infinite horizon Markov decision process (MDP), which consists of a tuple \(\mathcal{M}= (\mathcal{S},\mathcal{A},P,r,\gamma)\), where the state space \(\mathcal{S}\) and action space \(\mathcal{A}\) are finite sets, \( P(s'|s,a) \) denotes the transition model, \( r: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}\) is the reward, and \(\gamma \in (0,1) \) is the discount factor.
Given a stochastic policy \( \pi: \mathcal{S} \rightarrow \mathcal{P}(\mathcal{A})\), where $\mathcal{P}(\mathcal{A})$ is the set of probability distributions over $\mathcal A$, the agent at the current state \(s_k\) selects an action \(a_k \sim \pi (\cdot|s_k) \); then the agent's state changes to the next state \( s_{k+1} \sim P(\cdot | s_k,a_k) \), and it receives the reward \(r_{k+1} := r(s_k,a_k,s_{k+1}) \). A deterministic policy is a special stochastic policy, which can be defined simply as a mapping $\pi:{\cal S} \to {\cal A}$ that maps a state to an action. The objective of the Markov decision problem (MDP) is to find a deterministic optimal policy, denoted by $\pi^*$, such that the cumulative discounted reward over the infinite time horizon is maximized, i.e., \begin{align*} \pi^*:= \argmax_{\pi} {\mathbb E}\left[ \left.\sum_{k=0}^\infty {\gamma^k r_k}\right|\pi\right], \end{align*} where $(s_0,a_0,s_1,a_1,\ldots)$ is a state-action trajectory generated by the Markov chain under policy $\pi$, and ${\mathbb E}[\cdot|\pi]$ is an expectation conditioned on the policy $\pi$. The Q-function under policy $\pi$ is defined as \begin{align*} &Q^{\pi}(s,a)={\mathbb E}\left[ \left. \sum_{k=0}^\infty {\gamma^k r_k} \right|s_0=s,a_0=a,\pi \right], \\ & s\in {\cal S},a\in {\cal A}, \end{align*} and the optimal Q-function is defined as $Q^*(s,a)=Q^{\pi^*}(s,a)$ for all $s\in {\cal S},a\in {\cal A}$. Once $Q^*$ is known, an optimal policy can be retrieved by the greedy action, i.e., $\pi^*(s)=\argmax_{a\in {\cal A}}Q^*(s,a)$. Throughout, we assume that the MDP is ergodic so that the stationary state distribution exists and the Markov decision problem is well posed.
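To make these definitions concrete, the following sketch (a toy MDP of our own, not an example from the paper) computes $Q^*$ by iterating the Bellman optimality operator, which is a $\gamma$-contraction in the sup-norm, and then retrieves the greedy policy $\pi^*(s)=\argmax_{a} Q^*(s,a)$:

```python
import numpy as np

# Toy MDP (our own stand-in): random transitions and rewards.
rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)      # P[a][s, s'] = P(s'|s, a)
R = rng.random((nS, nA))               # R[s, a] = E[r | s, a]

def bellman(Q):
    """(T Q)(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) max_a' Q(s',a')."""
    V = Q.max(axis=1)                  # V(s') = max_a' Q(s', a')
    return R + gamma * np.stack([P[a] @ V for a in range(nA)], axis=1)

# T is a gamma-contraction, so fixed-point iteration converges to Q*.
Q = np.zeros((nS, nA))
for _ in range(500):
    Q = bellman(Q)

assert np.max(np.abs(bellman(Q) - Q)) < 1e-10   # numerically a fixed point
pi_star = Q.argmax(axis=1)                      # greedy policy pi*(s)
```

The final assertion checks the fixed-point property $Q^* = {\cal T}Q^*$; the contraction factor $\gamma$ guarantees the geometric convergence of the iteration.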
It is known that the optimal Q-function satisfies the so-called Bellman equation expressed as follows: \begin{align} &Q^*(s,a)\nonumber \\ =& \mathbb{E}\left[r_{k+1} + \gamma \max_{a_{k+1}\in \mathcal{A}} Q^{*}(s_{k+1},a_{k+1})|s_k = s,a_k = a\right]\nonumber \\ =:& {\mathcal{T}} Q^*\label{eq:Bellman-equation} \end{align} where \({\mathcal{T}}\) is called the Bellman operator. \subsection{Notations} In this paper, we will use an O.D.E. model of Q-learning to analyze its convergence. To this end, it is useful to introduce some notations in order to simplify the overall expressions. Throughout the paper, \( e_a \) and \( e_s \) denote the \(a\)-th and \(s\)-th canonical basis vectors in \(\mathbb{R}^{|\mathcal{A}|}\) and \(\mathbb{R}^{|\mathcal{S}|}\), respectively. Moreover, $\otimes$ stands for the Kronecker product. Let us introduce the following notations: \begin{align*} P:=& \begin{bmatrix} P_1\\ \vdots\\ P_{|{\cal A}|}\\ \end{bmatrix} \in {\mathbb R}^{ |{\cal S}||{\cal A}|\times |{\cal S}| } ,\quad R:= \begin{bmatrix} R_1 \\ \vdots \\ R_{|{\cal A}|} \\ \end{bmatrix} \in {\mathbb R}^{|{\cal S}||{\cal A}|},\\ Q:=& \begin{bmatrix} Q_1\\ \vdots\\ Q_{|{\cal A}|}\\ \end{bmatrix}\in {\mathbb R}^{|{\cal S}||{\cal A}|},\\ D_a:=& \begin{bmatrix} d(1,a) & & \\ & \ddots & \\ & & d(|{\cal S}|,a)\\ \end{bmatrix}\in {\mathbb R}^{|{\cal S}| \times |{\cal S}|},\\ D:=&\begin{bmatrix} D_1 & & \\ & \ddots & \\ & & D_{|{\cal A}|} \end{bmatrix} \in {\mathbb R}^{|{\cal S}||{\cal A}| \times |{\cal S}||{\cal A}|}, \end{align*} where \(P_a \in \mathbb{R}^{|{\cal S}|\times|{\cal S}|}, a \in {\cal A}\) is the state transition matrix whose \(i\)-th row and \(j\)-th column component denotes the probability of transitioning to state \(j\) when action \(a\) is taken at state \(i\), and \(P^{\pi}\in \mathbb{R}^{|{\cal S}||{\cal A}|\times|{\cal S}||{\cal A}|}\) represents the state-action transition matrix under policy \(\pi\), i.e., \begin{align*} &(e_s \otimes e_a )^T P^\pi
(e_{s'} \otimes e_{a'} )\\ =& {\mathbb P}[s_{k + 1} = s',a_{k + 1} = a'|s_k = s,a_k = a,\pi], \end{align*} $Q_a= Q(\cdot,a)\in {\mathbb R}^{|{\cal S}|},a\in {\cal A}$ and $R_a(s):={\mathbb E}[r(s,a,s')|s,a], s\in {\cal S}$. Moreover, \(d(\cdot,\cdot)\) is the state-action visit distribution from which the i.i.d. random variables \(\{(s_k,a_k)\}_{k=0}^{\infty}\) are sampled, i.e., \[ d(s,a) = {\mathbb P}[s_k = s,a_k = a],\quad \forall a\in {\cal A},s\in {\cal S}. \] With a slight abuse of notation, $d$ will also be used to denote the vector $d \in {\mathbb R}^{|{\cal S}||{\cal A}|}$ such that \[ d^T (e_s \otimes e_a ) = d(s,a),\quad \forall s \in {\cal S},a \in {\cal A}. \] In this paper, we represent a policy in a matrix form in order to formulate a switching system model. In particular, for a given policy $\pi$, define the matrix \begin{align} \Pi^{\pi} := \begin{bmatrix} e_{\pi(1)} \otimes e_1\\ \vdots\\ e_{\pi(|\mathcal{S}|)} \otimes e_{|\mathcal{S}|}\\ \end{bmatrix} \in \mathbb{R}^{|{\mathcal{S}}|\times |{\mathcal{S}}||{\mathcal{A}}|} . \end{align} Then, we can prove that for any deterministic policy $\pi$, we have \begin{align*} \Pi^\pi Q = \begin{bmatrix} {Q(1,\pi (1))} \\ {Q(2,\pi (2))} \\ \vdots \\ {Q(|{\cal S}|,\pi (|{\cal S}|))} \\ \end{bmatrix}. \end{align*} For simplicity, let \(\Pi_{Q}:= \Pi^{\pi_Q}\) for \(Q\in \mathbb{R}^{|\cal S||\cal A|}\). Moreover, we can prove that for any deterministic policy $\pi$, \( P^{\pi} = P\Pi^{\pi} \in \mathbb{R}^{ |{\cal S}||{\cal A}|\times |{\cal S}||{\cal A}|}\). Using the notations introduced, the Bellman equation in~(\ref{eq:Bellman-equation}) can be compactly written as \begin{align*} Q^* = \gamma P\Pi_{Q^* } Q^* + R =: {\cal T}Q^*, \end{align*} where $\pi_{Q^*}$ is the greedy policy defined as $\pi_{Q^*} (s) = \argmax_{a\in \mathcal{A}} Q^*(s,a)$.
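The matrix $\Pi^\pi$ can be checked numerically. The sketch below (our own toy data, 0-indexed, assuming the action-major stacking $Q = [Q_1;\dots;Q_{|{\cal A}|}]$ so that entry $a\,|{\cal S}|+s$ of the vector holds $Q(s,a)$) verifies that $\Pi^\pi Q$ picks out $Q(s,\pi(s))$ and that $P^\pi = P\Pi^\pi$ is row-stochastic:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA = 3, 2
pi = rng.integers(0, nA, size=nS)          # a deterministic policy s -> a

# Row s of Pi^pi is e_{pi(s)} (x) e_s under the action-major stacking.
Pi = np.zeros((nS, nS * nA))
for s in range(nS):
    Pi[s, pi[s] * nS + s] = 1.0

Q = rng.random(nS * nA)
# (Pi^pi Q)(s) = Q(s, pi(s)), as claimed in the text.
assert np.allclose(Pi @ Q, [Q[pi[s] * nS + s] for s in range(nS)])

# P^pi = P Pi^pi is a bona fide state-action transition matrix.
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)          # each row of P is a distribution
P_pi = P @ Pi
assert np.allclose(P_pi.sum(axis=1), 1.0)  # row-stochastic
```

Row-stochasticity follows because each row of $P$ sums to one and each row of $\Pi^\pi$ is a one-hot vector.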
\subsection{Q-learning with linear function approximation} Q-learning is a widely used model-free algorithm to find \( Q^*\), whose updates are given as \begin{align} Q_{k+1} (s_k,a_k) \leftarrow Q_k (s_k,a_k) + \alpha_k \delta_k,\label{eq:standard-Q-learning} \end{align} where \begin{equation*} \delta_k = r_{k+1} + \gamma \max_{a\in \mathcal{A}} Q_k (s_{k+1},a) - Q_k (s_k,a_k) \end{equation*} is called the TD error. Each update uses an i.i.d. sample \( (s_k,a_k,r_{k+1},s_{k+1}) \), where \( (s_k,a_k) \) is sampled from a state-action distribution \(d(\cdot,\cdot)\). Here, we assume that the step-size is chosen to satisfy the so-called Robbins-Monro condition \begin{align} \alpha_k >0,\quad \sum\limits^{\infty}_{k=0}\alpha_k = \infty,\quad \sum\limits^{\infty}_{k=0}\alpha_k^2 < \infty.\label{eq:Robbins-Monro} \end{align} When the state space and action space are too large, the memory and computational complexities usually become intractable. In such cases, function approximation is commonly used to approximate the Q-function~\cite{mnih2015human,schrittwieser2020mastering,hessel2018rainbow,lan2020maxmin}. Linear function approximation is the simplest function approximation approach. In particular, we use the feature matrix \(X \in \mathbb{R}^{|{\cal S}||{\cal A}|\times h}\) and parameter vector \( \theta \in \mathbb{R}^h \) to approximate the Q-function, i.e., \( Q \simeq X\theta \), where the feature matrix is expressed as \begin{align*} X := \begin{bmatrix} x^T(1,1) \\ \vdots\\ x^T(1,|\mathcal{A}|)\\ \vdots\\ x^T(|\mathcal{S}|,|\mathcal{A}|) \end{bmatrix} \in \mathbb{R}^{|{\cal S}||{\cal A}|\times h} . \end{align*} Here, \( x(\cdot,\cdot) \in \mathbb{R}^h \) is called the feature vector, and $h$ is a positive integer with \( h \ll |{\cal S}| \). The corresponding greedy policy becomes \begin{align} \pi_{X\theta} (s) = \argmax_{a\in \mathcal{A}} x^T(s,a)\theta\label{eq:greedy-policy} . \end{align} Next, we summarize some assumptions adopted throughout this paper.
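A minimal sketch of the Q-learning update with linear features described above (the environment is a random stand-in of our own, and no convergence is claimed here; instability of exactly this update under function approximation is the paper's concern):

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, h, gamma = 4, 2, 3, 0.9
X = rng.random((nS * nA, h))               # row a*nS + s holds x(s, a)^T

def x(s, a):
    return X[a * nS + s]

theta = np.zeros(h)
for k in range(2000):
    s, a = rng.integers(nS), rng.integers(nA)   # (s_k, a_k) ~ d (uniform here)
    s2 = rng.integers(nS)                       # stand-in for s_{k+1} ~ P(.|s_k, a_k)
    r = rng.random()                            # stand-in reward sample
    alpha = 1.0 / (k + 1)                       # satisfies the Robbins-Monro condition
    # TD error with Q(s, a) approximated by x(s, a)^T theta:
    delta = r + gamma * max(x(s2, b) @ theta for b in range(nA)) - x(s, a) @ theta
    theta = theta + alpha * delta * x(s, a)     # semi-gradient Q-learning step

assert np.all(np.isfinite(theta))
```

The greedy policy of~(\ref{eq:greedy-policy}) is the inner `max` over actions; with $h \ll |{\cal S}||{\cal A}|$ only the $h$-dimensional vector $\theta$ is stored.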
\begin{assumption}\label{iid_assumption} The state-action visit distribution is positive, i.e., \(d(s,a)>0\) for all \( s\in\mathcal{S},a\in\mathcal{A}\). \end{assumption} \begin{assumption}\label{full_rank_assumption} The feature matrix, \(X\), has full column rank, and is a non-negative matrix. Moreover, the columns of $X$ are orthogonal. \end{assumption} \begin{assumption}[Boundedness of the feature matrix and reward]\label{feature_reward_boundedness_assumption} There exist constants \(X_{\max}>0\) and \( R_{\max}>0\) such that \begin{align*} \max(||X||_{\infty} ,||X^T||_{\infty}) &< X_{\max} , \\ ||R||_{\infty} &< R_{\max} . \end{align*} \end{assumption} Note that~\cref{full_rank_assumption} and~\cref{feature_reward_boundedness_assumption} are commonly adopted in the literature, e.g.,~\citet{carvalho2020new,melo2008analysis,lee2019unified}. Moreover, under~\cref{iid_assumption}, $D$ is a nonsingular diagonal matrix with strictly positive diagonal elements. \begin{lemma}\label{Boundedness of Q^*}\cite{gosavi2006boundedness} Under~\cref{feature_reward_boundedness_assumption}, the optimal Q-function, $Q^*$, is bounded as follows: \begin{align} ||Q^*||_{\infty} \leq \frac{R_{\max}}{1-\gamma} \end{align} \end{lemma} The proof of~\cref{Boundedness of Q^*} comes from the fact that under the discounted infinite horizon setting, \(Q^*\) can be expressed as the sum of a geometric series. \begin{remark} \citealp{carvalho2020new,agarwal2021online} assume \(||x(s,a)||_{\infty} \leq 1 \) for all \((s,a) \in \mathcal{S}\times \mathcal{A}\). Moreover,~\citealt{zhang2021breaking} requires specific bounds on the feature matrix which depend on various factors, e.g., the projection radius and the transition matrix. On the other hand, our feature matrix can be chosen arbitrarily large regardless of those factors. \end{remark} \subsection{O.D.E.
Analysis} The dynamic system framework has been widely used to show convergence of reinforcement learning algorithms, e.g.,~\citealt{sutton2009fast,maei2010toward,borkar2000ode,lee2019unified,carvalho2020new,lee2021versions}. In particular,~\citealp{borkar2000ode} is one of the most widely used techniques to show stability of stochastic approximation using O.D.E. analysis. Consider the following stochastic approximation with a non-linear mapping \(f: \mathbb{R}^n \rightarrow \mathbb{R}^n\): \begin{align} &\theta_{k+1} = f(\theta_k) + m_k, \quad\label{eq:stochastic-algorithm} \end{align} where $m_k \in \mathbb{R}^n$ is a noise vector. For completeness, the results of~\citealp{borkar2000ode} are briefly summarized in the sequel. First of all, some assumptions to guarantee convergence are listed below. \begin{assumption}\label{borkar_meyn_assumption} \ \\ 1. The mapping \(f: \mathbb{R}^n \rightarrow \mathbb{R}^n\) is globally Lipschitz continuous, and there exists a function \(f_{\infty} : \mathbb{R}^n \rightarrow \mathbb{R}^n\) such that \begin{equation} \lim_{c\rightarrow\infty} \frac{f(cx)}{c} = f_{\infty}(x) , \quad \forall{x} \in \mathbb{R}^n. \end{equation} 2. The origin in \(\mathbb{R}^n\) is an asymptotically stable equilibrium for the O.D.E. \(\dot{x}_t = f_{\infty}(x_t)\). \newline \newline 3. There exists a unique globally asymptotically stable equilibrium \(\theta^e\in\mathbb{R}^n\) for the O.D.E. \( \dot{x}_t=f(x_t)\), i.e., \(x_t \rightarrow \theta^e\) as \( t\rightarrow\infty\). \newline \newline 4. The sequence \(\{ m_k, k\geq 1 \} \), where \( \mathcal{G}_k \) is the sigma-algebra generated by \(\{(\theta_i,m_i), i\leq k \}\), is a martingale difference sequence. In addition, there exists a constant \( C_0 < \infty \) such that for any initial \( \theta_0 \in \mathbb{R}^n \), we have \(\mathbb{E} [|| m_{k+1} ||^2 | \mathcal{G}_k ] \leq C_0 (1+|| \theta_k ||^2), \forall{k}\geq 0 \).
\newline \newline 5. The step-sizes satisfy the Robbins-Monro condition in~(\ref{eq:Robbins-Monro}). \end{assumption} Under the above assumptions, we now introduce the Borkar and Meyn theorem below. \begin{lemma}[Borkar and Meyn theorem]\label{borkar_meyn_lemma} Suppose that~\Cref{borkar_meyn_assumption} holds, and consider the stochastic algorithm in~(\ref{eq:stochastic-algorithm}). Then, for any initial \(\theta_0 \in \mathbb{R}^n \), \(\sup_{k\geq 0}||\theta_k|| < \infty \) with probability one. In addition, \( \theta_k \rightarrow \theta^e \) as \( k \rightarrow \infty \) with probability one. \end{lemma} The main idea of the Borkar and Meyn theorem is as follows: the iterates of a stochastic recursive algorithm follow the solution of its corresponding O.D.E. in the limit when the step-size satisfies the Robbins-Monro condition. Therefore, by proving the asymptotic stability of the O.D.E., we can deduce the convergence of the original algorithm. In this paper, we will use an O.D.E. model of Q-learning, which is expressed as a special nonlinear system called a switching system. In the next section, basic concepts in switching system theory are briefly introduced. \subsection{Switching System} In this paper, we will consider a particular nonlinear system, called the \emph{switched linear system} \cite{liberzon2003switching}, \begin{align} &\frac{d}{dt} x_t=A_{\sigma_t} x_t,\quad x_0=z\in {\mathbb R}^n,\quad t\in {\mathbb R}_+,\label{eq:switched-system} \end{align} where $x_t \in {\mathbb R}^n$ is the state, $\sigma\in {\mathcal M}:=\{1,2,\ldots,M\}$ is called the mode, $\sigma_t \in {\mathcal M}$ is called the switching signal, and $\{A_\sigma,\sigma\in {\mathcal M}\}$ are called the subsystem matrices. The switching signal can be either arbitrary or controlled by the user under a certain switching policy. In particular, a state-feedback switching policy is denoted by $\sigma(x_t)$.
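As a small numerical illustration of~(\ref{eq:switched-system}) (a toy construction of our own): when every subsystem matrix is negative definite (symmetric here, for simplicity), $V(x)=x^Tx$ is a common quadratic Lyapunov function, so the state decays under an arbitrary switching signal:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M, dt = 4, 3, 0.01

# Symmetric negative definite subsystem matrices: -(B B^T + I) has all
# eigenvalues <= -1, so V(x) = x^T x decreases along every subsystem.
mats = [-(B @ B.T + np.eye(n)) for B in rng.random((M, n, n))]

x = rng.random(n)
norms = [np.linalg.norm(x)]
for _ in range(2000):
    A = mats[rng.integers(M)]     # arbitrary switching signal sigma_t
    x = x + dt * (A @ x)          # forward-Euler step of dx/dt = A_sigma x
    norms.append(np.linalg.norm(x))

assert all(b <= a + 1e-12 for a, b in zip(norms, norms[1:]))  # V never increases
assert norms[-1] < 1e-3 * norms[0]                            # state decays to 0
```

The step size `dt` is small enough that each Euler step is a contraction for these matrices; this only illustrates the continuous-time stability property, not a discretization result.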
Stability and stabilization of~(\ref{eq:switched-system}) have been widely studied for decades. Still, finding a practical and effective condition for them is known to be a challenging open problem. For example, contrary to linear time-invariant systems, even if each subsystem matrix $A_{\sigma}$ is Hurwitz, the overall switching system may not be stable in general. This tells us that tools from linear system theory cannot be directly applied to conclude the stability of a switching system. Another approach is to use Lyapunov theory~\cite{Khalil:1173048}. From standard results in control system theory, finding a common Lyapunov function ensures stability of the switching system. In this paper, we prove that if all subsystem matrices of the switching system are negative definite, then we can always find a common quadratic Lyapunov function and ensure its stability. \begin{proposition}\label{thm:cqlf for n.d} Consider the switching system~(\ref{eq:switched-system}), and assume that each subsystem matrix, \(A_{\sigma}\), is negative definite for all \(\sigma \in \{1,\dots,M\}\). Then,~(\ref{eq:switched-system}) is globally asymptotically stable. \end{proposition} We will use this fact to prove the convergence of the proposed algorithm. In particular, the proposed Q-learning algorithm can be modelled as a switching system whose subsystem matrices are all negative definite. \section{Projected Bellman equation} In this section, we introduce the notion of the projected Bellman equation with a regularization term, and establish connections between it and the proposed algorithm. Moreover, we briefly discuss the existence and uniqueness of solutions of the projected Bellman equation, and provide an example to illustrate them. When linear function approximation is used, since the true action value may not lie in the subspace spanned by the feature vectors, a solution of the Bellman equation may not exist in general.
To resolve this issue, a standard approach is to consider the projected Bellman equation defined as \begin{align}\label{ProjectedBellmanOptimalEq} X\theta^* = \Gamma {\mathcal{T}} X\theta^* , \end{align} where \(\Gamma:=X(X^TDX)^{-1}X^TD \) is the weighted Euclidean projection, with respect to the state-action visit distribution, onto the subspace spanned by the feature vectors, and ${\cal T}X\theta ^* = \gamma P\Pi _{X\theta ^* } X\theta ^* + R$. In this case, it is more likely that a solution satisfying the projected Bellman equation exists. Still, there may exist cases where the projected Bellman equation does not admit a solution. We will give an example of such a case in Section~3. To proceed, let us rewrite~(\ref{ProjectedBellmanOptimalEq}) equivalently as \begin{align*} X\theta ^* =& X(X^T DX)^{ - 1} X^T D(\gamma P\Pi _{X\theta ^* } X\theta ^* + R)\\ \Leftrightarrow& X^T DX\theta ^* \\ =& X^T DX(X^T DX)^{ - 1} X^T D(\gamma P\Pi _{X\theta ^* } X\theta ^* + R)\\ \Leftrightarrow & \underbrace {(X^T DX - \gamma X^T DP\Pi _{X\theta ^* } X)}_{A_{\pi _{X\theta ^* } } }\theta ^* = \underbrace {X^T DR}_b , \end{align*} where we use the simplified notations \begin{align*} A_{\pi _{X\theta ^* } } : = X^T DX - \gamma X^T DP\Pi _{X\theta ^* } X,\quad b = X^T DR . \end{align*} Therefore, the projected Bellman equation in~(\ref{ProjectedBellmanOptimalEq}) can be equivalently written as the nonlinear equation \begin{align} b - A_{\pi _{X\theta ^* } } \theta ^* = 0.\label{eq:projected-Bellman2} \end{align} A potential deterministic algorithm to solve the above equation is \begin{align} \theta _{k + 1} = \theta _k + \alpha _k (b - A_{\pi _{X\theta _k } } \theta _k ).\label{eq:deterministic-algo1} \end{align} If it converges, i.e., $\theta _k \to \theta^* $ as $k \to \infty$, then it is clear that $\theta^*$ solves~(\ref{eq:projected-Bellman2}).
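The projection $\Gamma$ and the pair $(A_\pi, b)$ can be checked numerically. The sketch below (random toy data of our own) verifies that $\Gamma$ is an idempotent map fixing the feature subspace and, for a fixed policy, the equivalence derived above between $X\theta = \Gamma(\gamma P\Pi X\theta + R)$ and $A_\pi \theta = b$:

```python
import numpy as np

rng = np.random.default_rng(4)
nSA, h, g = 6, 2, 0.9
X = rng.random((nSA, h))                        # full column rank w.p. 1
D = np.diag(rng.random(nSA) + 0.1)              # positive visit distribution
G = X @ np.linalg.solve(X.T @ D @ X, X.T @ D)   # Gamma = X (X^T D X)^{-1} X^T D

assert np.allclose(G @ G, G)                    # idempotent: a projection
assert np.allclose(G @ X, X)                    # fixes span(X)

# For a fixed policy pi, P Pi^pi is some row-stochastic matrix; then
# X theta = Gamma(g P Pi X theta + R)  <=>  A_pi theta = b.
PPi = rng.random((nSA, nSA))
PPi /= PPi.sum(axis=1, keepdims=True)
R = rng.random(nSA)
A_pi = X.T @ D @ X - g * X.T @ D @ PPi @ X
b = X.T @ D @ R
theta = np.linalg.solve(A_pi, b)
assert np.allclose(X @ theta, G @ (g * PPi @ X @ theta + R))
```

Note that this only checks the fixed-policy linear equation; the full equation~(\ref{eq:projected-Bellman2}) is nonlinear because the policy itself depends on $\theta^*$.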
In this paper, the proposed algorithm is a stochastic algorithm that solves the modified equation \begin{align} b - (A_{\pi _{X\theta_e } } + \eta C) \theta_e = 0,\label{eq:projected-Bellman3} \end{align} where $C:= X^T DX$, and $\eta \ge 0$ is a weight on the regularization term. Similar to~(\ref{eq:deterministic-algo1}), the corresponding deterministic algorithm is \begin{align} \theta _{k + 1} = \theta _k + \alpha _k (b - (A_{\pi _{X\theta_k} } + \eta C)\theta _k)\label{eq:deterministic-algo2}. \end{align} If it converges, i.e., $\theta _k \to \theta_e$ as $k \to \infty$, then it is clear that $\theta_e$ solves~(\ref{eq:projected-Bellman3}). A natural question that arises here is the following: under which conditions do solutions of~(\ref{eq:projected-Bellman2}) and~(\ref{eq:projected-Bellman3}) exist, and when are they unique? Partial answers are given in the sequel. Considering the possible non-existence of a fixed point of~(\ref{ProjectedBellmanOptimalEq})~\cite{de2000existence}, both~(\ref{eq:projected-Bellman2}) and~(\ref{eq:projected-Bellman3}) may fail to have a solution. However, for the modified Bellman equation in~(\ref{eq:projected-Bellman3}), we can prove that under appropriate conditions, its solution exists and is unique. \begin{lemma}\label{reg-bellman-eq=existence-uniquenss} When \( \eta > ||(X^TDX)^{-1}||_{\infty} X_{\max}^2-1 \), a solution of~(\ref{eq:projected-Bellman3}) exists and is unique. \end{lemma} The proof is given in~\Cref{sec:reg-bellman-eq=existence-uniquenss} of the supplemental material, and uses the Banach fixed-point theorem~\cite{agarwal2018fixed}. It follows a proof similar to that in~\citealp{melo2001convergence}, which establishes the existence and uniqueness of the solution of~(\ref{eq:Bellman-equation}).
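A sketch of the deterministic recursion~(\ref{eq:deterministic-algo2}) on a random toy problem of our own; here $\eta$ is simply taken large enough to be safely in the convergent regime (rather than computed from any condition in the paper), and the greedy policy is recomputed at every iterate:

```python
import numpy as np

rng = np.random.default_rng(5)
nS, nA, h, g, eta, alpha = 3, 2, 2, 0.9, 10.0, 0.1
X = rng.random((nS * nA, h))                    # row a*nS + s holds x(s, a)^T
D = np.eye(nS * nA) / (nS * nA)                 # uniform visit distribution
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)
R = rng.random(nS * nA)
C = X.T @ D @ X
b = X.T @ D @ R

def A_of(theta):
    """A_{pi_{X theta}} for the greedy policy induced by theta."""
    Pi = np.zeros((nS, nS * nA))
    for s in range(nS):
        a = max(range(nA), key=lambda a: X[a * nS + s] @ theta)
        Pi[s, a * nS + s] = 1.0
    return C - g * X.T @ D @ P @ Pi @ X

theta = np.zeros(h)
for _ in range(30000):
    theta = theta + alpha * (b - (A_of(theta) + eta * C) @ theta)

residual = b - (A_of(theta) + eta * C) @ theta  # zero iff theta solves the equation
assert np.linalg.norm(residual) < 1e-6
```

The vanishing residual indicates that the limit $\theta_e$ indeed satisfies~(\ref{eq:projected-Bellman3}) for this toy instance.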
From~\Cref{reg-bellman-eq=existence-uniquenss}, we can see that when the weight \(\eta\) is sufficiently large, the existence and uniqueness of the solution is guaranteed. Note that even if a solution satisfying~(\ref{eq:projected-Bellman3}) exists, $X\theta_e$ may be different from the optimal Q-function, $Q^*$. However, we can derive a bound on the error, $X\theta_e - Q^*$, using some algebraic inequalities and the contraction property of the Bellman operator, which is presented below. \begin{lemma}\label{bias of theta_e} Assume that a solution of~(\ref{eq:projected-Bellman3}) exists. When \( \eta > \gamma||\Gamma||_{\infty}-1 \), we have the following bound: \begin{align*} ||X\theta_e -Q^*||_{\infty} \leq &\frac{1}{1+\eta-\gamma||\Gamma||_{\infty}}||Q^*-\Gamma Q^*||_{\infty} \\ &+ \frac{\eta}{1+\eta-\gamma||\Gamma||_{\infty}}\frac{R_{\max}}{1-\gamma} \end{align*} where \( \Gamma = X(X^TDX)^{-1}X^TD \) is the projection operator. \end{lemma} Some remarks are in order for~\Cref{bias of theta_e}. First of all, \( \eta > \gamma||\Gamma||_{\infty}-1 \) ensures that the error is always bounded. The first term represents the error incurred by the difference between the optimal $Q^*$ and $Q^*$ projected onto the feature space; this error is induced by the linear function approximation. The second term represents the error potentially induced by the regularization. For instance, if \( 1 > \gamma||\Gamma||_{\infty} \), then the second error term vanishes as $\eta \to 0$. Finally, note that as $\eta \to \infty$, the first error term vanishes; in that regime the error induced by the regularization dominates the error induced by the linear function approximation. We now provide an example where a solution does not exist for~(\ref{eq:projected-Bellman2}) but does exist for~(\ref{eq:projected-Bellman3}). \begin{example} Let us define an MDP whose state transition diagram is given in~\cref{fig:1}.
The cardinalities of the state space and action space are $|{\cal S}|=3$ and $|{\cal A}|=2$, respectively. \begin{figure} \caption{State transition diagram} \label{fig:1} \end{figure} The corresponding state transition matrices and other parameters are given as follows: \begin{align*} X &= \begin{bmatrix} 0.01 & 0\\ 0.1 & 0 \\ 0 & 0.01 \\ 0 & 0.01\\ 0.1 & 0\\ 0 & 0.01 \end{bmatrix},\; R_1 = \begin{bmatrix} -2\\ 0\\ 0 \end{bmatrix},\; R_2= \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix},\\ P_1 &= \begin{bmatrix} 0 & 1 & 0\\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4}\\ \end{bmatrix},\; P_2 = \begin{bmatrix} 0 & 0 & 1 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{bmatrix},\; \\ \gamma &= 0.99, \quad d(s,a) =\frac{1}{6}, \; \forall s\in {\cal S}, \forall a\in {\cal A} \end{align*} where the order of elements of each column follows the order of the corresponding definitions. Note that for this Markov decision process, taking action $a=1$ and taking action $a=2$ at state $s=2$ give the same transition probabilities and reward. The same holds for state $s=3$. In this MDP, there are effectively only two deterministic policies, denoted by \(\pi_1\) and \(\pi_2\), which select action \(a=1\) and action \(a=2\) at state \(s= 1\), respectively, i.e., \( \pi_1(1)= 1\) and \(\pi_2(1)=2\). The actions at states $s=2$ and $s=3$ do not affect the overall results. \end{example} The motivation for this MDP is as follows. Substitute \(\pi_{X\theta^*}\) in~(\ref{eq:projected-Bellman2}) with \(\pi_1\) and \( \pi_2\). Then the corresponding solutions become \begin{align*} \theta^{e1}&:= \begin{bmatrix} \theta^{e1}_1\\ \theta^{e1}_2 \end{bmatrix}\approx \begin{bmatrix} -6\\ 111 \end{bmatrix} \in \mathbb{R}^2,\\ \theta^{e2}&:= \begin{bmatrix} \theta^{e2}_1\\ \theta^{e2}_2 \end{bmatrix} \approx \begin{bmatrix} -496\\ -4715 \end{bmatrix} \in \mathbb{R}^2\\ \end{align*} respectively.
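These fixed-policy solutions can be checked numerically. The sketch below (assuming the action-major stacking used for $P=[P_1;P_2]$ and $R=[R_1;R_2]$, 0-indexed) solves $A_\pi\theta = b$ for both candidate policies and confirms that neither induced value function is consistent with greedy action selection at state $s=1$:

```python
import numpy as np

g = 0.99
X = np.array([[0.01, 0], [0.1, 0], [0, 0.01],
              [0, 0.01], [0.1, 0], [0, 0.01]])
P1 = np.array([[0, 1, 0], [.25, .25, .5], [.25, .5, .25]])
P2 = np.array([[0, 0, 1], [.25, .25, .5], [.25, .5, .25]])
P = np.vstack([P1, P2])                 # action-major stacking, like R below
R = np.array([-2, 0, 0, 1, 0, 0], dtype=float)
D = np.eye(6) / 6.0                     # d(s, a) = 1/6

def solve_fixed_policy(pi):
    """theta solving A_pi theta = b for a deterministic policy pi[s] in {0,1}."""
    Pi = np.zeros((3, 6))
    for s in range(3):
        Pi[s, pi[s] * 3 + s] = 1.0
    A = X.T @ D @ X - g * X.T @ D @ P @ Pi @ X
    return np.linalg.solve(A, X.T @ D @ R)

th1 = solve_fixed_policy([0, 0, 0])     # candidate for pi_1 (action 1 at s=1)
th2 = solve_fixed_policy([1, 0, 0])     # candidate for pi_2 (action 2 at s=1)

def q1(th):                             # (Q(1,1), Q(1,2)) under X theta
    return (X[0] @ th, X[3] @ th)

# pi_1 would need Q(1,1) > Q(1,2), pi_2 would need Q(1,2) > Q(1,1): both fail.
assert q1(th1)[0] < q1(th1)[1]
assert q1(th2)[1] < q1(th2)[0]
```

This reproduces the qualitative contradiction discussed next; the exact entries of $\theta^{e1}$ and $\theta^{e2}$ depend on the stacking convention assumed here.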
If \(\pi_1\) corresponds to the solution of~(\ref{eq:projected-Bellman2}), then action \(a=1\) is greedily selected at state \(s=1\); therefore, \(Q^{\pi_1}(1,1)>Q^{\pi_1}(1,2)\) should be satisfied. However, since \(Q^{\pi_1}(1,1)= x(1,1)^T\theta^{e1} =- 0.06 \) and \(Q^{\pi_1}(1,2)=x(1,2)^T\theta^{e1} \approx 1.11 \), this is a contradiction. The same logic applies to the case of \(\pi_2\). Therefore, neither of them is a solution of~(\ref{eq:projected-Bellman2}). On the other hand, considering~(\ref{eq:projected-Bellman3}) with \( \eta = 1.98 \), which satisfies~(\ref{ineq:eta condition for convergence}), the solution for each policy becomes \(\theta^{e1}_1 \approx 0.13 ,\theta^{e1}_2 \approx 13\) and \(\theta^{e2}_1 \approx 0.02 ,\theta^{e2}_2 \approx 14\), respectively. For \(\pi_1\) and \(\pi_2\), we have \(Q^{\pi_1}(1,1)<Q^{\pi_1}(1,2)\) and \(Q^{\pi_2}(1,1)<Q^{\pi_2}(1,2)\), respectively, so the greedy action at state \(s=1\) is \(a=2\) in both cases. Hence, \( \theta^{e2}\) satisfies~(\ref{eq:projected-Bellman3}) and becomes the unique solution. \section{Algorithm} In this section, we will introduce our main algorithm, and elaborate on the condition on the regularization term that makes the algorithm convergent. The proposed algorithm is motivated by TD-learning. In particular, for on-policy TD-learning, one can establish convergence using properties of the stationary distribution. For the off-policy case, on the other hand, the mismatch between the sampling distribution and the stationary distribution can cause divergence~\cite{sutton2016emphatic}. To address this problem,~\citealp{diddigi2019convergent} adds a regularization term to TD-learning in order to make it convergent. Since Q-learning can be interpreted as an off-policy TD-learning, we add a regularization term to the Q-learning update, motivated by~\citealp{diddigi2019convergent}.
This modification leads to the proposed algorithm as follows: \begin{align}\label{stochastic_approximation} \theta_{k+1}=\theta_k+ \alpha_k (x(s_k,a_k)\delta_k - \eta x(s_k,a_k) x(s_k,a_k)^T\theta_k) \end{align} Note that letting $\eta =0$, the above update reduces to standard Q-learning with linear function approximation in~(\ref{eq:standard-Q-learning}). The proposed algorithm differs from~\citealp{diddigi2019convergent} in the sense that the regularization term is applied to Q-learning instead of TD-learning. Rewriting the stochastic update as a deterministic recursion plus noise, it can be written as follows: \begin{align}\label{eq:biased_q} \theta_{k+1} = \theta_k + \alpha_k (b -( A_{\pi_{X\theta_k}}+\eta C)\theta_k + m_{k+1}) , \end{align} where \begin{align}\label{eq:martingale difference sequence} m_{k+1} &= \delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k \\ & - ( b- (A_{\pi_{X\theta_k}}+\eta C)\theta_k) \nonumber \end{align} is the noise term. Note that without the noise,~(\ref{eq:biased_q}) reduces to the deterministic version in~(\ref{eq:deterministic-algo2}). In our convergence analysis, we will apply the O.D.E. approach; in this case, \(A_{\pi_{X\theta_k}}+\eta C\) determines the stability of the corresponding O.D.E. model, and hence the convergence of~(\ref{stochastic_approximation}). Note that~(\ref{eq:biased_q}) can be interpreted as the switching system defined in~(\ref{eq:switched-system}) with stochastic noises and an affine term. As mentioned earlier, proving the stability of a general switching system is challenging. However, we will use~\cref{thm:cqlf for n.d} to prove its asymptotic stability.
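A sketch of the proposed update~(\ref{stochastic_approximation}) on a toy MDP of our own (the step-size schedule and the value of $\eta$ are illustrative choices, not the paper's experimental settings); the only change relative to standard Q-learning is the regularization term $-\eta\, x x^T \theta_k$:

```python
import numpy as np

rng = np.random.default_rng(6)
nS, nA, h, g, eta = 3, 2, 2, 0.9, 10.0
X = rng.random((nS * nA, h))               # row a*nS + s holds x(s, a)^T
P = rng.random((nS * nA, nS))
P /= P.sum(axis=1, keepdims=True)
R = rng.random(nS * nA)

def x(s, a):
    return X[a * nS + s]

theta = np.zeros(h)
for k in range(20000):
    s, a = rng.integers(nS), rng.integers(nA)   # (s_k, a_k) ~ d (uniform here)
    s2 = rng.choice(nS, p=P[a * nS + s])        # s_{k+1} ~ P(.|s_k, a_k)
    r = R[a * nS + s]
    alpha = 2.0 / (k + 50)                      # Robbins-Monro step-sizes
    delta = r + g * max(x(s2, c) @ theta for c in range(nA)) - x(s, a) @ theta
    # regularized Q-learning step; x x^T theta = (x . theta) x
    theta = theta + alpha * (delta * x(s, a) - eta * (x(s, a) @ theta) * x(s, a))

assert np.all(np.isfinite(theta))
assert np.linalg.norm(theta) < 100.0            # iterates remain bounded
```

With $\eta = 0$ this is exactly the (possibly divergent) update of~(\ref{eq:standard-Q-learning}); the extra term pulls $\theta_k$ toward the solution of~(\ref{eq:projected-Bellman3}) rather than~(\ref{eq:projected-Bellman2}).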
In particular, we can make \(A_{\pi_{X\theta_k}}+\eta C\) negative definite under the following condition: \begin{equation}\label{ineq:eta condition for convergence} \eta > \max _{\pi \in \Theta ,s \in {\cal S},a \in {\cal A}} \frac{\gamma d^T P^\pi (e_a \otimes e_s )}{2d(s,a)} - \frac{2 - \gamma}{2} \end{equation} where $\Theta$ is the set of all deterministic policies, and $\otimes$ is the Kronecker product. \Cref{psd_matrix}, given in~\Cref{app:n.d for matrix} of the supplemental material, is similar to Theorem~2 in~\citealp{diddigi2019convergent}, and ensures this property. Now, we can use \Cref{thm:cqlf for n.d} to establish stability of the overall system. Building on the negative definiteness of \(A_{\pi_{X\theta_k}}+\eta C\), in the next section we prove that under the stochastic update (\ref{stochastic_approximation}), we have \(\theta_k \rightarrow \theta_e\) as $k \to \infty$ with probability one, where \(\theta_e\) satisfies the projected Bellman equation in~(\ref{eq:projected-Bellman3}). If \( \eta =0\) satisfies (\ref{ineq:eta condition for convergence}), we can guarantee convergence to an optimal policy without errors. \section{Convergence Analysis}\label{sec:convergence analysis} Recently, \citealp{lee2019unified} suggested a switching system framework to prove the stability of Q-learning in the linear function approximation case. However, its assumption seems too stringent to check in practice. Here, we develop a more practical Q-learning algorithm by adding an appropriately preconditioned regularization term. We prove the stability of the proposed Q-learning with the regularization term~(\ref{stochastic_approximation}) along lines similar to~\citealp{lee2019unified}. Our proof mainly relies on the Borkar-Meyn theorem. Therefore, we first discuss the corresponding O.D.E.
for the proposed update in~(\ref{stochastic_approximation}), which is \begin{align}\label{original_ode_theta} \dot{\theta}_t &= -(1+\eta)X^TDX\theta_t + \gamma X^TDP\Pi_{X\theta_t} X\theta_t + X^TDR \\ &=: f(\theta_t). \nonumber \end{align} Then, using a change of coordinates, the above O.D.E. can be rewritten as \begin{equation}\label{change_coord_ode} \begin{aligned} &\frac{d}{dt} (\theta_t-\theta_e)\\ &= (-(1+\eta)X^TDX +\gamma X^TDP\Pi_{X\theta_t}X )(\theta_t-\theta_e) \\ &+ \gamma X^TDP\Pi_{X\theta_t} X\theta_e - \gamma X^TDP\Pi_{X\theta_e} X\theta_e, \end{aligned} \end{equation} where \(\theta_e \) satisfies~(\ref{eq:projected-Bellman3}). Here, we assume that the equilibrium point exists; we later prove that if the equilibrium exists, then it is unique. To apply the Borkar-Meyn theorem, we discuss the asymptotic stability of the O.D.E. in~(\ref{change_coord_ode}), and check the conditions of \Cref{borkar_meyn_assumption} in~\Cref{sec:check assumptions of borkar meyn} of the supplemental material. Note that~(\ref{change_coord_ode}) includes an affine term, i.e., it cannot be expressed as a matrix times $\theta_t-\theta_e$. Establishing asymptotic stability of a switched system with an affine term is more difficult than for a switched linear system. To circumvent this difficulty,~\citealp{lee2019unified} proposed upper and lower systems, which upper and lower bound the original system, respectively, using the so-called vector comparison principle. Then, the stability of the original system can be established by proving the stability of the upper and lower systems, which are easier to analyze. Following similar lines, to check global asymptotic stability of the original system, we also introduce upper and lower systems, which upper and lower bound the original system, respectively.
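To make the dynamics concrete, the right-hand side \(f(\theta)\) of the O.D.E. above can be evaluated numerically once the greedy selection \(\Pi_{X\theta}\) is made explicit. A minimal sketch, under our assumption that state-action pairs are ordered action-major (row \(a|{\cal S}|+s\) of \(X\) and of the pair-to-state transition matrix corresponds to the pair \((s,a)\)):

```python
import numpy as np

def ode_rhs(theta, X, D, P, R, gamma, eta):
    """f(theta) = -(1+eta) X^T D X theta + gamma X^T D P Pi_{X theta} X theta
                  + X^T D R   (sketch; pair ordering assumed action-major).

    X: |S||A| x n feature matrix, D: diagonal sampling-distribution matrix,
    P: |S||A| x |S| pair-to-next-state transition matrix, R: reward vector.
    """
    nSA, nS = P.shape
    nA = nSA // nS
    q = X @ theta
    # Pi_{X theta} X theta: the greedy value of each next state
    v = np.array([max(q[a * nS + s] for a in range(nA)) for s in range(nS)])
    return (-(1.0 + eta) * X.T @ D @ X @ theta
            + gamma * X.T @ D @ (P @ v)
            + X.T @ D @ R)
```

Euler-integrating this field is one way to reproduce the kind of trajectories shown in the O.D.E. experiment later in the paper.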
Then, we prove global asymptotic stability of the two bounding systems. Since the upper and lower systems can be viewed as a switched linear system and a linear system, respectively, their global asymptotic stability is easier to prove. We stress that although the switching system approach of~\citealp{lee2019unified} is applied in this paper, the detailed proof is entirely different and nontrivial. In particular, the upper and lower systems are given as follows: \begin{align*} \frac{d}{dt} \theta^u_t &= (-(1+\eta)X^TDX +\gamma X^TDP\Pi_{X\theta^u_t}X ) \theta^u_t,\\ \frac{d}{dt} \theta^l_t &= (-(1+\eta) X^TDX + \gamma X^TDP\Pi_{X\theta_e}X )\theta^l_t, \end{align*} where \(\theta^u_t\) denotes the state of the upper system, and \(\theta^l_t\) stands for the state of the lower system. We defer the detailed construction of each system to~\Cref{proof_ode_proof} of the supplemental material. Establishing stability of the upper and lower systems gives the stability of the overall system. \begin{theorem}\label{ode_proof} Suppose that \begin{enumerate} \item \Cref{full_rank_assumption} holds; \item (\ref{ineq:eta condition for convergence}) holds; \item a solution of~(\ref{eq:projected-Bellman3}) exists. \end{enumerate} Then, the solution is unique, and the origin is the unique globally asymptotically stable equilibrium point of (\ref{change_coord_ode}). \end{theorem} The detailed proof is given in~\Cref{proof_ode_proof} of the supplemental material. \par Building on the previous results, we now use the Borkar-Meyn theorem (\Cref{borkar_meyn_lemma}) to establish the convergence of the regularized Q-learning algorithm. \begin{theorem}\label{convergence-final} Assume a Robbins-Monro step-size and (\ref{ineq:eta condition for convergence}).
Then under \Cref{full_rank_assumption} and \Cref{feature_reward_boundedness_assumption}, under the stochastic update~(\ref{stochastic_approximation}), \(\theta_k \rightarrow \theta_e \) as $k \to \infty$ with probability one, where \(\theta_e\) satisfies (\ref{eq:projected-Bellman3}). \end{theorem} The full proof is given in~\Cref{sec:convergence-final} of the supplemental material. \begin{lemma}\label{equilibrium_point_uniqueness} Let \begin{equation}\label{reg_solution} -(1+\eta)X^TDX\theta_e + \gamma X^TDP\Pi_{X\theta_e} X\theta_e + X^TDR =0 . \end{equation} Under \Cref{full_rank_assumption} and \( \eta = \max (0,\max_{\pi_{\theta},i}\frac{\gamma d^T p_{\pi_{\theta}}(i|\cdot)}{2d_i}- \frac{2-\gamma}{2}+\epsilon)\), the equilibrium point exists and is unique. \end{lemma} \begin{proof} By Brouwer's fixed-point theorem, a solution exists. Uniqueness follows from the global asymptotic stability of \eqref{original_ode_theta}. \end{proof} \section{Experiments} In this section, we present experimental results on well-known environments~\cite{tsitsiklis1996feature,baird1995residual} where Q-learning with linear function approximation diverges. We also show trajectories of the upper and lower systems to illustrate the theoretical results. \subsection{\( \theta \rightarrow 2\theta \) \cite{tsitsiklis1996feature} } Even when there are only two states, Q-learning with linear function approximation can diverge~\cite{tsitsiklis1996feature}. As depicted in~\Cref{fig:thetatwotheta} in~\Cref{sec:experiment diagram} of the supplemental material, from state one (\(\theta\)), the transition is deterministic to the absorbing state two (\(2\theta\)), and the reward is zero at every time step. Therefore, the episode length is fixed to two. The learning rates for Greedy GQ (GGQ) and Coupled Q-Learning (CQL) are set to $0.05$ and $0.25$, respectively, as in~\citealp{carvalho2020new,maei2010toward}.
Since CQL requires normalized feature values, we scaled the feature values by \(\frac{1}{2}\) as in~\citealp{carvalho2020new}, and initialized the weights to one. We implemented Q-learning with a target network~\cite{zhang2021breaking} without projection, for practical reasons (Qtarget). We set the learning rates to $0.25$ and $0.05$, respectively, and the weight \(\eta\) to two. For the regularized Q-learning, we set the learning rate to $0.25$ and the weight \(\eta\) to two. Results are averaged over $100$ runs. In~\Cref{fig:thetatwotheta performance}, we can see that the regularized Q-learning achieves the fastest convergence. \begin{figure} \caption{Comparison with other algorithms in the \(\theta \rightarrow 2\theta\) example} \label{fig:thetatwotheta performance} \end{figure} \subsection{Baird Seven Star Counter Example \cite{baird1995residual}} \citealp{baird1995residual} considers an overparameterized example where Q-learning with linear function approximation diverges. The overall state transition is depicted in~\Cref{fig:barid} given in~\Cref{sec:experiment diagram} of the supplemental material. There are seven states and two actions for each state, a solid and a dashed action. The number of features is \(h=15\). Each episode is initialized at a uniformly random state. The solid action leads to the seventh state, while the dashed action transitions uniformly at random to the states other than the seventh. At the seventh state, the episode ends with probability \(\frac{1}{100}\). The behavior policy selects the dashed action with probability \(\frac{5}{6}\), and the solid action with probability \(\frac{1}{6}\). Since CQL \cite{carvalho2020new} converges under normalized feature values, we scaled the feature matrix by \(\frac{1}{\sqrt{5}}\). The weights are set to one except for \(\theta_7 = 2\). The learning rates and the weight \(\eta\) are set the same as in the previous experiment.
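In experiments like these, condition (\ref{ineq:eta condition for convergence}) can be checked numerically before picking \(\eta\). A hedged sketch (the helper name and the action-major ordering of state-action pairs, matching \(e_a \otimes e_s\), are our assumptions):

```python
import numpy as np
from itertools import product

def eta_lower_bound(d, P, gamma):
    """Evaluate max over deterministic policies pi and pairs (s, a) of
    gamma * d^T P^pi e_{(s,a)} / (2 d(s,a)) - (2 - gamma)/2   (sketch).

    d: sampling distribution over state-action pairs, indexed a*nS + s
       (action-major, matching the Kronecker product e_a (x) e_s),
    P: list of nS x nS transition matrices, one per action.
    """
    nA, nS = len(P), P[0].shape[0]
    best = -np.inf
    for pi in product(range(nA), repeat=nS):      # all deterministic policies
        Ppi = np.zeros((nA * nS, nA * nS))        # pair-to-pair transitions
        for a in range(nA):
            for s in range(nS):
                for s2 in range(nS):
                    Ppi[a * nS + s, pi[s2] * nS + s2] = P[a][s, s2]
        cols = d @ Ppi                            # d^T P^pi e_{(s,a)} for all pairs
        best = max(best, np.max(gamma * cols / (2.0 * d) - (2.0 - gamma) / 2.0))
    return best
```

Any \(\eta\) strictly above the returned value satisfies the condition; the enumeration over \(|{\cal A}|^{|{\cal S}|}\) policies is only practical for small MDPs such as the ones considered here.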
\begin{figure} \caption{Comparison with other algorithms in the Baird seven star counter example} \label{fig:baird performance} \end{figure} As shown in~\Cref{fig:baird performance}, Qtarget shows the fastest convergence, but its theoretical convergence guarantee requires two projections, which are not implemented in this experiment. Our algorithm shows the fastest convergence among the two-time-scale algorithms, CQL and GGQ. \subsection{O.D.E. Experiment}\label{sec:ode experiment} Let us consider an MDP with $|{\cal S}|=2,|{\cal A}|=2$ and the following parameters: \begin{align*} X &= \begin{bmatrix} 1 & 0\\ 0 & 2\\ 1 & 0 \\ 0 & 2 \end{bmatrix},\quad D = \begin{bmatrix} \frac{1}{4} & 0 & 0 & 0\\ 0 & \frac{1}{4} & 0 & 0\\ 0 & 0 & \frac{1}{4} & 0\\ 0 & 0 & 0 & \frac{1}{4} \end{bmatrix}\\ P_1 &= \begin{bmatrix} 0.5 & 0.5\\ 1 & 0\\ \end{bmatrix}, P_2 = \begin{bmatrix} 0.5 & 0.5 \\ 0.25 & 0.75 \end{bmatrix},\\ R_1 &= \begin{bmatrix} 1 \\ 1 \end{bmatrix}, R_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \gamma = 0.99 \end{align*} For this MDP, we illustrate the trajectories of the upper and lower systems. Each state-action pair is sampled uniformly at random, and the reward is one at every time step. \(\eta = 2.25\) is chosen to satisfy the conditions of~\Cref{ode_proof}. From~\Cref{fig:trajectories upper and lower ode} in~\Cref{appendix:sec:ode experiment} of the supplemental material, we can see that the trajectory of the original system is bounded by the trajectories of the lower and upper systems. \section{Conclusion} In this paper, we presented a new convergent Q-learning algorithm with linear function approximation, which is simple to implement. We provided a theoretical analysis of the proposed Q-learning algorithm with a regularization term, and demonstrated its performance on several experiments where the original Q-learning with linear function approximation diverges.
Developing a new Q-learning algorithm with linear function approximation without bias would be an interesting future research topic. Moreover, considering the great success of deep learning, it would be interesting to develop deep reinforcement learning algorithms with an appropriately preconditioned regularization term instead of using the target network. \appendix \onecolumn \section{Appendix} \subsection{Proof of \Cref{thm:cqlf for n.d}} \begin{proof} To verify asymptotic stability of the switching system~(\ref{eq:switched-system}), one of the simplest ways is to find a common quadratic Lyapunov function $V(x) = x^T P x$. In particular,~(\ref{eq:switched-system}) is asymptotically stable if there exists a symmetric positive definite matrix \(P\) such that \begin{align*} PA_{\sigma} + A^T_{\sigma}P \prec 0 \quad \text{for all} \quad \sigma \in \{1,\dots, M\} \end{align*} holds. In our case, we assume that \(A_{\sigma}\) is negative definite, which significantly simplifies the analysis. In particular, if \(A_{\sigma}\) is negative definite, then from the definition of a negative definite matrix, we have \begin{align*} A_{\sigma} + A^T_{\sigma} \prec 0 \quad \text{for all} \quad \sigma \in \{1,\dots, M\} . \end{align*} Therefore, we can simply choose \(P=I\). This implies that for a switching system with each \(A_{\sigma}\) negative definite, $V(x) = x^Tx$ is a common quadratic Lyapunov function, and the system is asymptotically stable. This completes the proof. \end{proof} \subsection{Negative definiteness of \(A_{\pi_{X\theta}}+\eta C\)}\label{app:n.d for matrix} We first introduce the Gerschgorin circle theorem~\cite{horn2013matrix}, which is used to prove~\Cref{psd_matrix}. \begin{lemma}[Gerschgorin circle theorem~\cite{horn2013matrix}]\label{gersgorin-circel-theroem} Let \(A = [a_{ij}] \in \mathbb{R}^{n\times n}\) and \( R_i (A) = \sum\limits_{j\neq i} |a_{ij}| \).
Consider the Gerschgorin circles \begin{align*} \{ z\in \mathbb{C} : | z- a_{ii} | \leq R_i (A) \} ,\quad i=1,\dots,n. \end{align*} The eigenvalues of \(A\) lie in the union of the Gerschgorin discs \begin{align*} G(A) = \cup^n_{i=1} \{ z\in \mathbb{C} : | z- a_{ii} | \leq R_i (A) \} . \end{align*} \end{lemma} Now, we state the lemma that guarantees negative definiteness of \(A_{\pi_{X\theta}}+\eta C\). \begin{lemma}\label{psd_matrix} Let \begin{align*} M^{\pi_{X\theta}} := D((1+\eta)I-\gamma P^{\pi_{X\theta}}) . \end{align*} Under the following condition: \begin{align*} \eta > \max _{\pi \in \Theta ,s \in {\cal S},a \in {\cal A}} \frac{\gamma d^T P^\pi (e_a \otimes e_s)}{2d(s,a)} - \frac{2 - \gamma}{2} , \end{align*} where $\Theta$ is the set of all deterministic policies, and $\otimes$ is the Kronecker product, \( M^{\pi_{X\theta}}\) is positive definite. \end{lemma} \begin{proof} We use the Gerschgorin circle theorem for the proof. First, denote \( m_{ij}=[M^{\pi_{X\theta}}]_{ij}\). Then, one gets \begin{align*} m_{ii} &= d_i ((1+\eta)-\gamma e^T_i P^{\pi_{X\theta}} e_i), \\ m_{ij} &= -d_i \gamma e^T_i P^{\pi_{X\theta}} e_j \quad \text{for}\quad i\neq j. \end{align*} Excluding the diagonal element, the absolute row and column sums become, respectively, \begin{align*} \sum\limits_{j\neq i} |m_{ij}| &= \gamma d_i(1-e^T_i P^{\pi_{X\theta}} e_i) , \\ \sum\limits_{j\neq i} |m_{ji}| &= \gamma d^T P^{\pi_{X\theta}} e_i - \gamma d_i e^T_i P^{\pi_{X\theta}} e_i . \end{align*} We need to show that \( M^{\pi_{X\theta}} + (M^{\pi_{X\theta}})^T\) is positive definite.
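As a quick numerical sanity check of the disc bound stated above, one can verify that every eigenvalue of a given square matrix indeed lies in the union of its Gerschgorin discs; a minimal sketch:

```python
import numpy as np

def in_gerschgorin_union(A, tol=1e-12):
    """Check that each eigenvalue of the square matrix A lies in the union of
    discs {z : |z - a_ii| <= R_i}, with R_i the off-diagonal absolute row sum."""
    R = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return all(
        any(abs(lam - A[i, i]) <= R[i] + tol for i in range(A.shape[0]))
        for lam in np.linalg.eigvals(A)
    )
```

By the theorem this returns True for every square matrix; the small tolerance only guards against floating-point rounding on the disc boundary.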
To this end, we use~\Cref{gersgorin-circel-theroem} to obtain the following inequality for any eigenvalue \(\lambda\) of \( M^{\pi_{X\theta}} + (M^{\pi_{X\theta}})^T\): \begin{align*} |\lambda - 2m_{ii}| &\leq \sum\limits_{j\neq i}|m_{ij}| + \sum\limits_{j\neq i}|m_{ji}| . \end{align*} Considering the lower bound on \(\lambda\), we have \begin{align*} \lambda &\geq 2m_{ii} -\sum\limits_{j\neq i}|m_{ij}| - \sum\limits_{j\neq i}|m_{ji}| \\ &= (1+\eta - \gamma)d_i + ((1+\eta)d_i - \gamma d^TP^{\pi_{X\theta}}e_i) \\ &= \eta (2d_i) + (2-\gamma)d_i -\gamma d^TP^{\pi_{X\theta}}e_i . \end{align*} Hence, for \(\lambda > 0\), it suffices to have \begin{align*} \eta > \frac{\gamma d^T P^{\pi_{X\theta}}e_i}{2d_i}- \frac{2-\gamma}{2} . \end{align*} Taking \( \eta > \max_{\pi \in \Theta ,s \in {\cal S},a \in {\cal A}} \frac{\gamma d^T P^\pi (e_a \otimes e_s )}{2d(s,a)} - \frac{2 - \gamma}{2} \), we make \(M^{\pi_{X\theta}}\) positive definite for every deterministic policy. \end{proof} \subsection{Proof of \Cref{reg-bellman-eq=existence-uniquenss}}\label{sec:reg-bellman-eq=existence-uniquenss} To show existence and uniqueness of the solution of (\ref{eq:projected-Bellman3}), we use the Banach fixed-point theorem \cite{agarwal2018fixed}. First, we define the operator \(\mathcal{T}_{\eta}\) as follows: \begin{align*} \mathcal{T}_{\eta} (\theta) := \frac{1}{1+\eta}(X^TDX)^{-1}(X^TDR + \gamma X^TDP\Pi_{X\theta}X\theta) . \end{align*} We show that \(\mathcal{T}_{\eta}\) is a contraction mapping; the existence and uniqueness of the solution of (\ref{eq:projected-Bellman3}) then follow from the Banach fixed-point theorem.
\begin{align*} \|\mathcal{T}_{\eta}(\theta_1) - \mathcal{T}_{\eta}(\theta_2)\|_{\infty} &= \frac{1}{1+\eta}||(X^TDX)^{-1}(\gamma X^TDP\Pi_{X\theta_1}X\theta_1 - \gamma X^TDP\Pi_{X\theta_2}X\theta_2)||_{\infty}\\ &\leq \frac{\gamma}{1+\eta}||(X^TDX)^{-1}||_{\infty} ||X^T||_{\infty}||\Pi_{X\theta_1}X\theta_1 - \Pi_{X\theta_2}X\theta_2||_{\infty}\\ &\leq \frac{\gamma}{1+\eta}||(X^TDX)^{-1}||_{\infty} ||X^T||_{\infty}||\Pi_{X(\theta_1 - \theta_2)}(X\theta_1 - X\theta_2)||_{\infty}\\ &\leq \frac{\gamma}{1+\eta}||(X^TDX)^{-1}||_{\infty} ||X^T||_{\infty}||X\theta_1 - X\theta_2||_{\infty}\\ &\leq \frac{\gamma}{1+\eta}||(X^TDX)^{-1}||_{\infty} ||X^T||_{\infty}||X||_{\infty}||\theta_1 - \theta_2||_{\infty}\\ &\leq \gamma ||\theta_1 - \theta_2||_{\infty} . \end{align*} The first inequality follows from the submultiplicativity of the matrix norm and \(||DP||_{\infty} \leq 1\). The second inequality follows from the fact that \( \max x - \max y \leq \max (x-y) \). The last inequality is due to the condition \( \eta > ||(X^TDX)^{-1}||_{\infty} X_{\max}^2-1 \). Since \(\gamma <1 \), \(\mathcal{T}_{\eta}\) is a contraction mapping. Now we can use the Banach fixed-point theorem to conclude existence and uniqueness of the solution of~(\ref{eq:projected-Bellman3}). \subsection{Proof of \Cref{bias of theta_e}} \begin{proof} The bias term of the solution can be obtained using simple algebraic inequalities.
\begin{align*} ||X\theta_e - Q^*||_{\infty} &\leq \left\lVert X\theta_e - \frac{1}{1+\eta}Q^*\right\rVert_{\infty} + \left\lVert \frac{\eta}{1+\eta}Q^*\right\rVert_{\infty}\\ &\leq \left\lVert X\theta_e - \frac{1}{1+\eta}\Gamma Q^* \right\rVert_{\infty}+ \left\lVert \frac{1}{1+\eta}Q^* - \frac{1}{1+\eta}\Gamma Q^*\right\rVert_{\infty} + \left\lVert\frac{\eta}{1+\eta}Q^*\right\rVert_{\infty}\\ &\leq \frac{1}{1+\eta} || \Gamma(\mathcal{T}X\theta_e) - \Gamma\mathcal{T}Q^*||_{\infty}+ \left\lVert \frac{1}{1+\eta}Q^* - \frac{1}{1+\eta}\Gamma Q^*\right\rVert_{\infty}+ \left\lVert \frac{\eta}{1+\eta}Q^*\right\rVert_{\infty}\\ &\leq \frac{\gamma||\Gamma||_{\infty}}{1+\eta} || X\theta_e - Q^*||_{\infty}+\left\lVert \frac{1}{1+\eta}Q^* - \frac{1}{1+\eta}\Gamma Q^*\right\rVert_{\infty} + \left\lVert \frac{\eta}{1+\eta}Q^*\right\rVert_{\infty}. \end{align*} The last inequality follows from the fact that the Bellman operator \(\mathcal{T}\) is a \(\gamma\)-contraction with respect to the infinity norm. Rearranging the terms, we have \begin{align*} ||X\theta_e -Q^*||_{\infty} \leq \frac{1}{1+\eta-\gamma||\Gamma||_{\infty}}||Q^*-\Gamma Q^*||_{\infty} + \frac{\eta}{1+\eta-\gamma||\Gamma||_{\infty}}\frac{R_{\max}}{1-\gamma} . \end{align*} The bias is caused by the projection and by an additional error term due to the regularization. \end{proof} \subsection{Proofs for~\Cref{borkar_meyn_assumption}}\label{sec:check assumptions of borkar meyn} In this section, we provide the omitted proofs needed to check~\Cref{borkar_meyn_assumption} for~\Cref{convergence-final}. First of all, the Lipschitz continuity of \(f(\theta)\) ensures that the O.D.E. admits a unique solution. \begin{lemma}[Lipschitz continuity]\label{lipschitzness} Let \begin{equation} f(\theta) = -(1+\eta)X^TDX\theta + \gamma X^TDP\Pi_{X\theta} X\theta + X^TDR . \end{equation} Then, \( f(\theta)\) is globally Lipschitz continuous.
\end{lemma} \begin{proof} The Lipschitz continuity of \(f(\theta)\) can be proven as follows: \begin{align*} ||f(\theta)- f(\theta')||_{\infty} &\leq (1+\eta)||X^TDX(\theta-\theta')||_{\infty}+ \gamma ||X^TDP(\Pi_{X\theta}X\theta-\Pi_{X\theta'}X\theta')||_{\infty} \\ &\leq (1+\eta) ||X^TDX||_{\infty} ||\theta-\theta'||_{\infty} + \gamma ||X^TDP||_{\infty} || \Pi_{X\theta}X\theta-\Pi_{X\theta'}X\theta' ||_{\infty} \\ &\leq (1+\eta) ||X^TDX||_{\infty} ||\theta-\theta'||_{\infty} + \gamma ||X^TDP||_{\infty} || \Pi_{X(\theta-\theta')}X(\theta-\theta') ||_{\infty}\\ &\leq ((1+\eta) ||X^TDX||_{\infty} +\gamma ||X^TDP ||_{\infty} ||X||_{\infty}) ||\theta-\theta'||_{\infty} . \end{align*} Therefore, \(f(\theta)\) is Lipschitz continuous with respect to \(||\cdot||_{\infty}\). \end{proof} Next, the existence of the limiting O.D.E. of (\ref{original_ode_theta}) can be proved using the fact that the greedy policy is invariant under multiplication of \(\theta\) by a positive constant when linear function approximation is used. \begin{lemma}[Existence of the limiting O.D.E. and stability]\label{limiting_Ode} Under (\ref{ineq:eta condition for convergence}), the limiting O.D.E. of (\ref{original_ode}) exists, and its origin is asymptotically stable, where \begin{equation}\label{original_ode} f(\theta) = -(1+\eta)X^TDX\theta + \gamma X^TDP\Pi_{X\theta} X\theta + X^TDR . \end{equation} \end{lemma} \begin{proof} The existence of the limiting O.D.E. follows from the homogeneity of the greedy policy, \(\Pi_{X(c\theta)} = \Pi_{X\theta} \) for \(c>0\): \begin{align*} f(c\theta) &= -(1+\eta)X^TDX(c\theta) + \gamma X^TDP\Pi_{X(c\theta)} X(c\theta) + X^TDR , \\ \lim_{c\rightarrow\infty} \frac{f(c\theta)}{c} &= (-(1+\eta)X^TDX + \gamma X^TDP\Pi_{X\theta} X)\theta . \end{align*} The limiting dynamics form a switching system whose subsystems share the common Lyapunov function \( V(\theta)= ||\theta||^2 \). Hence, the origin is asymptotically stable by \Cref{thm:cqlf for n.d}. \end{proof} Lastly, we check the conditions for the martingale difference sequence.
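The homogeneity \(\Pi_{X(c\theta)}=\Pi_{X\theta}\) used above is easy to check numerically once the greedy selector is made explicit. A minimal sketch (the action-major ordering of state-action pairs is our assumption, not fixed by the paper):

```python
import numpy as np

def greedy_selector(X, theta, nS, nA):
    """Pi_{X theta}: the |S| x |S||A| matrix selecting, for each state, the
    row of the greedy action (pairs ordered action-major: row a*nS + s)."""
    q = X @ theta
    Pi = np.zeros((nS, nS * nA))
    for s in range(nS):
        a_star = int(np.argmax([q[a * nS + s] for a in range(nA)]))
        Pi[s, a_star * nS + s] = 1.0
    return Pi
```

With this operator, \(\Pi_{X\theta}X\theta\) is the vector of greedy state values appearing in the projected Bellman equation, and scaling \(\theta\) by any \(c>0\) leaves the selector unchanged because the per-state argmax is scale-invariant.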
\begin{lemma}[Martingale difference sequence and square integrability]\label{martingale_proof} We have \begin{align*} \mathbb{E} [m_{k+1}|\mathcal{G}_k] &= 0,\\ \mathbb{E}[||m_{k+1}||^2|\mathcal{G}_k] &\leq C_0 (1+ ||\theta_k||^2 ), \end{align*} where \( C_0 = \max (12 X_{\max}^2 R_{\max}^2 , 12(1+\gamma^2) X_{\max}^4+ 4\eta^2 X^2_{\max} )\). \end{lemma} \begin{proof} To show that \(\{m_k,k\in \mathbb{N}\}\) is a martingale difference sequence with respect to the filtration \(\{\mathcal{G}_k\}\), we first prove that the expectation of \(m_{k+1}\) is zero conditioned on \(\mathcal{G}_k\): \begin{equation*} \mathbb{E}[m_{k+1}|\mathcal{G}_k] = 0 . \end{equation*} This follows from the definitions of \(b\), \(C\), and \(A_{\pi_{X\theta}}\). \newline The boundedness \( \mathbb{E}[||m_k||]<\infty\) also follows from simple algebraic inequalities. Therefore, \(\{(m_k,\mathcal{G}_k),k\in \mathbb{N}\}\) is a martingale difference sequence. Now, we show square integrability of the martingale difference sequence, that is, \begin{equation*} \mathbb{E}[||m_{k+1}||^2|\mathcal{G}_k] \leq C_0 (||\theta_k||^2 + 1) .
\end{equation*} Using simple algebraic inequalities, we have \begin{align*} \mathbb{E} [||m_{k+1}||^2|\mathcal{G}_k] &= \mathbb{E}[|| \delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k -\mathbb{E}_{\mu}[\delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k ] ||^2|\mathcal{G}_k]\\ &\leq \mathbb{E}[|| \delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k ||^2 + || \mathbb{E}_{\mu}[\delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k ] ||^2|\mathcal{G}_k]\\ &\leq 2\mathbb{E}[||\delta_k x(s_k,a_k) - \eta x(s_k,a_k) x(s_k,a_k)^T \theta_k ||^2|\mathcal{G}_k]\\ &\leq 4 \mathbb{E}[||\delta_k x(s_k,a_k)||^2|\mathcal{G}_k]+4\eta^2 \mathbb{E}[||x(s_k,a_k) x(s_k,a_k)^T \theta_k||^2|\mathcal{G}_k]\\ &\leq 12 X_{\max}^2\mathbb{E}[||r_k||^2+||\gamma \max x(s_k,a_k)\theta_k||^2+||x(s_k,a_k)\theta_k||^2|\mathcal{G}_k]+4\eta^2 X_{\max}^2 ||\theta_k||^2\\ &\leq 12 X_{\max}^2 R_{\max}^2 + 12 \gamma^2 X_{\max}^4||\theta_k||^2 + 12 X_{\max}^4 ||\theta_k||^2 + 4\eta^2 X^2_{\max}||\theta_k||^2 \\ &\leq C_0 (1 + ||\theta_k||^2 ) , \end{align*} where \( C_0 = \max (12 X_{\max}^2 R_{\max}^2 , 12(1+\gamma^2) X_{\max}^4+ 4\eta^2 X^2_{\max} )\). The fourth inequality follows from the fact that \( ||a+b+c||^2 \leq 3||a||^2+3||b||^2+3||c||^2 \). This completes the proof. \end{proof} \subsection{Proof of \Cref{ode_proof}}\label{proof_ode_proof} Before moving on to the proof of~\Cref{ode_proof}, in order to prove stability using the upper and lower systems, we need to introduce some notions such as quasi-monotone functions and the vector comparison principle. We first introduce the notion of a quasi-monotone increasing function, which is a prerequisite for the comparison principle for multidimensional vector systems.
\begin{definition}[Quasi-monotone function]\label{def:quasi-monotone} A vector-valued function $f:{\mathbb R}^n \to {\mathbb R}^n$ with $f:=\begin{bmatrix} f_1 & f_2 & \cdots & f_n\\ \end{bmatrix}^T$ is said to be quasi-monotone increasing if $f_i (x) \le f_i (y)$ holds for all $i \in \{1,2,\ldots,n \}$ and $x,y \in {\mathbb R}^n$ such that $x_i=y_i$ and $x_j\le y_j$ for all $j\neq i$. \end{definition} Based on the notion of a quasi-monotone function, we introduce the vector comparison principle. \begin{lemma}[Vector Comparison Principle~\cite{hirsch2006monotone}]\label{lem:vector_comparison_principle} Suppose that \(\Bar{f}\) and \(\underbar{$f$}\) are globally Lipschitz continuous. Let \(x_t\) be a solution of the system \begin{align*} \frac{d}{dt}x_t = \Bar{f}(x_t) ,\qquad x_0 \in \mathbb{R}^n ,\quad t\geq 0 . \end{align*} Assume that \(\Bar{f}\) is quasi-monotone increasing, and let \(v_t\) be a solution of the system \begin{align*} \frac{d}{dt}v_t = \underbar{$f$}(v_t) ,\qquad v_0 < x_0, \quad t\geq 0 , \end{align*} where \( \underbar{$f$}(v) \leq \Bar{f}(v) \) holds for any \( v\in \mathbb{R}^n\). Then, \( v_t \leq x_t \) for all \( t\geq 0\). \end{lemma} The vector comparison lemma can be used to bound the state trajectory of the original system by those of the upper and lower systems. Then, proving global asymptotic stability of the upper and lower systems leads to global asymptotic stability of the original system. We now give the proof of~\Cref{ode_proof}. \begin{proof} First, we construct the upper comparison part.
Noting that \begin{equation}\label{ineq:upper_prop_1} \gamma X^TDP\Pi_{X\theta_e} X\theta_e \geq \gamma X^TDP\Pi_{X\theta} X\theta_e \end{equation} and \begin{equation}\label{ineq:upper_prop_2} \gamma X^TDP\Pi_{X(\theta-\theta_e)}X (\theta-\theta_e) \geq \gamma X^TDP\Pi_{X\theta}X (\theta-\theta_e), \end{equation} we define \( \Bar{f}(y) \) and \(\underbar{$f$}(y) \) as follows: \begin{align*} \Bar{f}(y) &= (-(1+\eta)X^TDX +\gamma X^TDP\Pi_{Xy}X ) y ,\\ \underbar{$f$}(y) &= (-(1+\eta )X^TDX +\gamma X^TDP\Pi_{X(y+\theta_e)}X ) y + \gamma X^TDP (\Pi_{X(y+\theta_e)} - \Pi_{X\theta_e})X\theta_e . \end{align*} Using (\ref{ineq:upper_prop_1}) and (\ref{ineq:upper_prop_2}), we have \( \underbar{$f$}(y)\leq \Bar{f}(y) \). Here, \( \underbar{$f$} \) is the right-hand side of the original system~(\ref{change_coord_ode}) in the shifted coordinate \(y=\theta-\theta_e\), and \(\Bar{f}\) is that of the upper system, which is a switched linear system. Now consider the O.D.E. systems \begin{align*} \frac{d}{dt} \theta^u_t &= \Bar{f}(\theta^u_t) ,\qquad \theta^u_0 > \theta_0 ,\\ \frac{d}{dt} \theta_t &= \underbar{$f$}(\theta_t) . \end{align*} Next, we prove the quasi-monotone increasing property of \( \Bar{f} \). Consider a non-negative vector \( p \in \mathbb{R}^{|S||A|} \) whose \(i\)-th element is zero. Then, for any \( 1\leq i \leq d\), we have \begin{align*} e^T_i \Bar{f} (y+p) &= e^T_i (-(1+\eta)X^TDX + \gamma X^TDP\Pi_{X(y+p)}X)(y+p) \\ &= -(1+\eta) e ^T_i X^TDX y -(1+\eta) e ^T_i X^TDX p + \gamma e^T_i X^TDP\Pi_{X(y+p)}X(y+p) \\ &\geq -(1+\eta) e ^T_i X^TDX y + \gamma e^T_i X^TDP\Pi_{Xy}Xy \\ &= e^T_i \Bar{f}(y) , \end{align*} where the inequality uses $e ^T_i X^TDX p = 0$, which holds due to~\Cref{full_rank_assumption}. Therefore, by \Cref{lem:vector_comparison_principle}, we can conclude that \(\theta_t \leq \theta^u_t\).
The switching system matrices of the upper system are all negative definite by \Cref{psd_matrix}, and the subsystems share \( V(\theta) = ||\theta||^2 \) as a common Lyapunov function. Therefore, by \Cref{thm:cqlf for n.d}, we can conclude that the upper comparison system is globally asymptotically stable. \newline \newline For the lower comparison part, noting that \begin{equation*} \gamma X^TDP\Pi_{X\theta}X\theta \geq \gamma X^TDP\Pi_{X\theta_e}X\theta , \end{equation*} we can define \(\underbar{$f$}(y)\) and \(\Bar{f}(y)\) such that \(\underbar{$f$}(y) \leq \Bar{f}(y) \) as follows: \begin{align*} \Bar{f}(y) &= -(1+\eta) X^TDXy + \gamma X^TDP\Pi_{Xy}Xy + X^TDR , \\ \underbar{$f$}(y) &= -(1+\eta) X^TDXy + \gamma X^TDP\Pi_{X\theta_e}Xy + X^TDR . \end{align*} The corresponding O.D.E. systems become \begin{align} \frac{d}{dt} \theta_t &= \Bar{f}(\theta_t) , \nonumber \\ \frac{d}{dt} \theta^l_t &= \underbar{$f$}(\theta^l_t) , \quad \theta^l_0 < \theta_0 . \label{lower_ode} \end{align} Proving the quasi-monotonicity of \( \Bar{f}\) is similar to the previous step. Consider a non-negative vector \( p \in \mathbb{R}^{|S||A|} \) whose \(i\)-th element is zero: \begin{align*} e^T_i\Bar{f}(y+p) &=e_i^T (-(1+\eta) X^TDX(y+p) + \gamma X^TDP\Pi_{X(y+p)}X(y+p) + X^TDR) \\ &=e_i^T ( -(1+\eta )X^TDX y + \gamma X^TDP\Pi_{X(y+p)}X(y+p) + X^TDR ) \\ &\geq e_i^T ( -(1+\eta )X^TDX y + \gamma X^TDP\Pi_{Xy}Xy + X^TDR ) \\ &= e^T_i \Bar{f}(y) . \end{align*} The second equality holds since \( X^TDX\) is a diagonal matrix and \( p_i =0\). \newline Therefore, by \Cref{lem:vector_comparison_principle}, we can conclude that \(\theta^l_t \leq \theta_t\). The lower comparison system is a linear system with an affine term, and its matrix is negative definite by \Cref{psd_matrix}. Therefore, we can conclude that (\ref{lower_ode}) is globally asymptotically stable. To prove uniqueness of the equilibrium point, assume there exist two distinct equilibrium points \( \theta^e_1\) and \(\theta^e_2\).
Global asymptotic stability implies that, regardless of the initial state, \( \theta_t \rightarrow \theta^e_1 \) and \(\theta_t \rightarrow \theta^e_2\). However, this is a contradiction if \( \theta^e_1 \neq \theta^e_2\). Therefore, the equilibrium point is unique. \end{proof} \subsection{Proof of \Cref{convergence-final}}\label{sec:convergence-final} \begin{proof} To apply \Cref{borkar_meyn_lemma}, let us check \Cref{borkar_meyn_assumption}. \begin{enumerate} \item The first and second statements of \Cref{borkar_meyn_assumption} follow from \Cref{limiting_Ode}. \item The third statement of \Cref{borkar_meyn_assumption} follows from \Cref{ode_proof}. \item The fourth statement of \Cref{borkar_meyn_assumption} follows from \Cref{martingale_proof}. \end{enumerate} Since we assumed a Robbins-Monro step-size, we can now apply \Cref{borkar_meyn_lemma} to complete the proof. \end{proof} \subsection{Experiments} \subsubsection{Diagrams for \(\theta \rightarrow 2\theta\) and the Baird Seven Star Counter Example}\label{sec:experiment diagram} The state transition diagrams of the \(\theta \rightarrow 2\theta\) example and the Baird seven-star example are depicted below. \begin{figure} \caption{\(\theta \rightarrow 2\theta\)} \caption{Baird seven star counter example} \caption{Counter-examples where Q-learning with linear function approximation diverges} \label{fig:thetatwotheta} \label{fig:barid} \end{figure} \subsubsection{O.D.E. trajectories of the MDP introduced in~\Cref{sec:ode experiment}}\label{appendix:sec:ode experiment} The O.D.E. trajectories of the MDP introduced in~\Cref{sec:ode experiment} are depicted below. \begin{figure} \caption{\( \theta_1-\theta^e_1 \)} \label{fig:y equals x} \caption{\(\theta_2-\theta^e_2\)} \label{fig:upper lower theta 2} \caption{Trajectories of the upper and lower O.D.E.s} \label{fig:trajectories upper and lower ode} \end{figure} \end{document}
\begin{document} \title{\bf A step to Gronwall's conjecture. } \author{ Jean Paul Dufour. \\} \maketitle \begin{abstract} In this paper we explore a way to prove the hundred-year-old Gronwall conjecture: if two plane linear 3-webs with non-zero curvature are locally isomorphic, then the isomorphism is a homography. Using recent results of S. I. Agafonov, we exhibit an invariant, the {\sl characteristic}, attached to each generic point of such a web, with the following property: if a diffeomorphism interchanges two such linear webs, sending a point of the first to a point of the second with the same characteristic, then this diffeomorphism is locally a homography. \end{abstract} \noindent {\bf Keywords:} planar 3-webs. Gronwall conjecture. \noindent {\bf AMS classification:} 53A60 \section{Introduction} In this text we work in the real projective plane. The results are the same in the complex case. We also work in the analytic case. A plane 3-web is a triple of 1-dimensional foliations, pairwise transversal, on an open domain of the plane. In the sequel we work only with these 3-webs, so we omit the word {\sl plane}. Such a 3-web is called {\sl linear} if the leaves of the foliations are rectilinear.
Let $W$ and $\bar W$ be two 3-webs, linear or not, defined respectively on domains $U$ and $\bar U.$ We say that they are isomorphic if there is a diffeomorphism from $U$ to $\bar U$ which maps every leaf of the foliations of $W$ onto a leaf of the foliations of $\bar W.$ Near any point of the domain of a 3-web $W,$ there are coordinates $(x,y)$ (in general non affine) such that the foliations are given by the verticals $x={ constant},$ the horizontals $y={ constant}$ and the level sets of some function $f$ (the sets of $(x,y)$ such that $f(x,y)={ constant}$). Such a 3-web is denoted $(x,y,f).$ In other words, we can say that $W$ is locally isomorphic to some $(x,y,f).$ Attached to each 3-web there is a 2-form called its (Blaschke) {\sl curvature}. For $(x,y,f)$ this curvature is $$\partial_x \partial_y(\log{\partial_x f\over \partial_y f}) dx\wedge dy.$$ We say that a 3-web is {\sl non flat} if its curvature vanishes at no point of its domain. \noindent{\bf Gronwall's conjecture: } {\sl If two non flat linear 3-webs are isomorphic, then the diffeomorphism which realizes this isomorphism is a homography near any point.} Probably the recent paper of S.I. Agafonov (\cite{SA}) contains the best historical references on this subject. We will also use some notions appearing in this paper and, particularly, the following. We choose affine coordinates $(x,y)$ on an open subset $U$ of the plane. Let $W$ be a linear 3-web defined on $U$ by the slopes $P,$ $Q$ and $R$ of the different foliations ($P(x,y)$ is the slope of the first foliation at the point of coordinates $(x,y)$, and so on). We define the following quantities $$\Pi=(P-Q)(Q-R)(R-P),$$ $$\Delta=(P-Q)R_y+(Q-R)P_y+(R-P)Q_y,$$ with the convention: if $f$ is a function of $(x,y)$ we denote by $f_y$ its derivative with respect to $y,$ by $f_{yy}$ its second derivative with respect to $y,$ and so on. We assume now that $W$ is non flat. Lemma 1 of \cite{SA} implies that we can also assume that $\Delta$ is everywhere non vanishing.
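The curvature formula above is easy to evaluate with computer algebra. The following Python/SymPy sketch is our own illustration (the sample webs are invented, not taken from the text): it computes the coefficient of the Blaschke curvature 2-form of a 3-web $(x,y,f)$, checks that the parallel web $f=x+y$ is flat, and exhibits a non flat example.

```python
import sympy as sp

x, y = sp.symbols('x y')

def blaschke_curvature(f):
    """Coefficient of dx ^ dy in the Blaschke curvature of the 3-web (x, y, f)."""
    ratio = sp.log(sp.diff(f, x) / sp.diff(f, y))
    return sp.simplify(sp.diff(ratio, x, y))

flat = blaschke_curvature(x + y)                  # parallel web: curvature 0
curved = blaschke_curvature(x + y + x**2 * y**2)  # an illustrative non flat web
```

For the second web the curvature coefficient works out to $4y/(1+2xy^2)^2-4x/(1+2x^2y)^2$, which vanishes on the diagonal but not, for example, at $(1,2)$.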
\begin{dfn} The characteristic of $W$ is $$car_W:=\Pi . (P_{yy}+Q_{yy}+R_{yy})/\Delta^2.$$ \end{dfn} In \cite{SA} we can find a complete set of projective invariants (invariant up to homographies), and $car_W$ is the sum of three of them. Because $car_W$ is a projective invariant, if a homography $\psi$ interchanges linear 3-webs $W$ and $\bar W$, we have $$car_W(M)=car_{\bar W}(\psi(M)),$$ for any point $M.$ Our central result is the following. \begin{thm}\label{resultatA} We consider two non flat linear 3-webs $W$ and $\bar W.$ We assume that there is an isomorphism $\phi$ from the first one to the second. If there is a point $M$ such that \begin{equation}\label{(1)}car_W(M)=car_{\bar W}(\phi(M))\end{equation} then the isomorphism is a homography near $M.$ \end{thm} This theorem says that Gronwall's conjecture is true if we can always find a point $M$ satisfying condition (\ref{(1)}). We can hope to find such an $M$ by a fixed point method. \section{Description of linear 3-webs near a point.}\label{descro} We consider a linear 3-web $W,$ and $M$ a point of its domain. Then we can prolong the three foliations to obtain three families of lines $\cal A,$ $\cal B$ and $\cal C.$ In the projective plane $\cal A,$ $\cal B$ and $\cal C$ envelop three curves, which may degenerate into points. We denote respectively by $A_M,$ $B_M$ and $C_M$ the focal points on the three lines passing through $M$ (the points where these lines touch the envelopes). We now impose that $W$ is non flat. Then Lemma 1 of \cite{SA} also implies that $(M,A_M,B_M,C_M)$ is a projective frame. Up to a homography, we can choose coordinates $(u,v)$ such that $$M=(0,0),\ A_M=(1,1), \ B_M=(1,-1),\ C_M=(2,0).$$ Near the origin each leaf of our foliations is transversal to the $v$-axis. So our 3-web can be described as follows. There are three 1-variable functions $a:x\mapsto a(x)$ (resp. $b:y\mapsto b(y)$, $c:z\mapsto c(z)$) such that $\cal A$ (resp. $\cal B$, $\cal C$) consists of the lines $v=a(x)u+x$ (resp.
$v=b(y)u+y$, $v=c(z)u+z$) where $x$ (resp. $y$, $z$) is a parameter varying near the origin. Be aware that the functions $a$, $b$ and $c$ we just defined are not the $a$, $b$ and $c$ of \cite{SA}. Because of the choice of $(u,v)$, the Taylor expansions of $a,$ $b$ and $c$ have the shapes $$a(t)=1-t+a_2t^2+\cdots +a_it^i+\cdots ,$$ $$b(t)= -1-t+b_2t^2+\cdots +b_it^i+\cdots,$$ $$ c(t)=- t/2 +c_2t^2+\cdots+c_it^i+\cdots .$$ \begin{lemma}\label{car(0,0)} The following formula holds: $$car_W(0,0)=4(a_2+b_2+c_2).$$ Moreover $a_2+b_2+c_2$ vanishes if and only if the curvature of $W$ at the origin vanishes.\end{lemma} To prove this lemma we begin by computing the 2-order Taylor expansions of the functions $x(u,v)$ (resp. $y(u,v),$ resp. $z(u,v)$) given by the implicit relation $v=a(x(u,v))u+x(u,v)$ (resp. $v=b(y(u,v))u+y(u,v),$ resp. $v=c(z(u,v))u+z(u,v)$). We find that they are $-u+v-u^2+uv$ for $x(u,v),$ $u+v+u^2+uv$ for $y(u,v)$ and $v+uv/2$ for $z(u,v).$ Then the slope functions are $P(u,v)=a(x(u,v)),$ $Q(u,v)=b(y(u,v))$ and $R(u,v)=c(z(u,v)).$ So their 2-order Taylor expansions are respectively $$1+u-v+(1+a_2)u^2+(-1-2a_2)vu+a_2v^2,$$ $$-1-u-v+(-1+b_2)u^2+(-1+2b_2)vu+b_2v^2$$ and $$-v/2-uv/4+c_2v^2.$$ So the values of $\Pi,$ $\Delta$, $P_{yy}$, $Q_{yy}$ and $R_{yy}$ at the origin are respectively $2,$ $1,$ $2a_2,$ $2b_2$ and $2c_2.$ This proves the first assertion of the lemma. The second can be proved, for example, using the formula for the curvature given in the introduction.
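The computation in this proof can be verified symbolically. The following SymPy sketch (our own check, independent of the text) starts from the 2-order expansions of the slopes $P,$ $Q,$ $R$ displayed above, evaluates $\Pi,$ $\Delta$ and the second derivatives with respect to the second coordinate $v$ at the origin, and recovers $car_W(0,0)=4(a_2+b_2+c_2)$.

```python
import sympy as sp

u, v, a2, b2, c2 = sp.symbols('u v a2 b2 c2')

# 2-order expansions of the slope functions at the origin, as in the text;
# here the second coordinate v plays the role of y.
P = 1 + u - v + (1 + a2)*u**2 + (-1 - 2*a2)*u*v + a2*v**2
Q = -1 - u - v + (-1 + b2)*u**2 + (-1 + 2*b2)*u*v + b2*v**2
R = -v/2 - u*v/4 + c2*v**2

at0 = {u: 0, v: 0}
Pi = ((P - Q)*(Q - R)*(R - P)).subs(at0)
Delta = ((P - Q)*sp.diff(R, v) + (Q - R)*sp.diff(P, v)
         + (R - P)*sp.diff(Q, v)).subs(at0)
second = sp.diff(P + Q + R, v, 2).subs(at0)

car = sp.simplify(Pi * second / Delta**2)
```

The output is $4(a_2+b_2+c_2)$, as the lemma asserts.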
\section{Isomorphic linear webs.} In order to prove Theorem \ref{resultatA}, we consider two non flat linear 3-webs $W$ and $\bar W,$ the first near a point $M$, the second near a point $\bar M.$ For both we adopt a description as in the preceding section: the first is described by the three local functions $a,$ $b$ and $c,$ the second by the local functions $\bar a,$ $\bar b$ and $\bar c.$ Via Lemma \ref{car(0,0)}, the relation $car_W(M)=car_{\bar W}(\bar M)$ reads $$a_2+b_2+c_2=\bar a_2+\bar b_2+\bar c_2,$$ with the evident notations. In the remainder of this section we propose a way to express the existence of an isomorphism from $W$ to $\bar W$ mapping $M$ to $\bar M.$ Let $x,$ $y,$ $z$ be three numbers such that the three leaves $v=a(x)u+x,$ $v=b(y)u+y,$ $v=c(z)u+z$ of $W$ have a common point. They are characterized by the relation \begin{equation}\label{pointcommun} \det\left|\matrix{1 & 1 & 1\cr x & y & z \cr a(x) & b(y) & c(z) \cr}\right|=0\ .\end{equation} This equation implicitly defines a function $z=f_W(x,y)$ on a neighborhood of the origin. This proves that $W$ is locally isomorphic to $(x,y,f_W).$ Because the change of coordinates $(u,v)\mapsto (x,y)$ is not a homography (in general), $(x,y,f_W)$ has no reason to be linear. Replacing respectively $a$ with $\bar a,$ $b$ with $\bar b,$ $c$ with $\bar c,$ we can construct $f_{\bar W}$ such that $\bar W$ is isomorphic to the 3-web $(x,y,f_{\bar W}).$ Now $W$ is locally isomorphic to $\bar W$ by a diffeomorphism which maps $M$ to $\bar M$ if and only if there is a local isomorphism, preserving the origin, from $(x,y,f_W)$ to $(x,y,f_{\bar W}).$ \begin{prop}\label{formenormale}Let $\mu$ be any non zero number. Every 3-web $(x,y,f),$ with non-zero curvature at the origin, is locally isomorphic to a unique 3-web $(x,y,F_{\mu})$ such that $$F_\mu (x,y)=x+y+xy(x-y)(\mu+g(x,y))$$ where $g$ is a function vanishing at the origin.
\end{prop} This proposition is a particular case of the existence and uniqueness of a {\sl normal form} for 3-webs, which appears for the first time in \cite{DJ}: for any 3-web $V$ near any point $m,$ there are local coordinates $(x,y)$ vanishing at $m$ such that $V$ becomes $(x,y,h)$ with $$h(x,y)=x+y+xy(x-y)k(x,y)$$ where $k$ may be any function. Moreover $k$ is unique up to a homothety $(x,y)\mapsto (\lambda x,\lambda y).$ For the moment we impose no assumption concerning the curvature. If the curvature doesn't vanish at $m$ then $k(0,0)$ is different from zero. Then, up to a homothety, we can assume $k(0,0)=\mu$ and we obtain the above proposition. The fact that $W$ and $\bar W$ are non flat implies that $(x,y,f_W)$ and $(x,y,f_{\bar W})$ have non zero curvature at the origin. The above proposition says that there are two functions $g_{\mu}$ and $\bar g_{\mu}$, vanishing at the origin, such that $W$ and $\bar W$ are respectively isomorphic to $(x,y,F_{\mu})$ and $(x,y,\bar F_{\mu}),$ with $$F_{\mu}(x,y)=x+y+xy(x-y)(\mu+g_{\mu}(x,y)),$$ $$\bar F_{\mu}(x,y)=x+y+xy(x-y)(\mu+\bar g_{\mu}(x,y)).$$ The uniqueness part of the above proposition implies the following lemma. \begin{lemma} Fix the non zero number $\mu.$ Then $W$ is isomorphic to $\bar W,$ by an isomorphism which maps $M$ to $\bar M,$ if and only if $$ g_{\mu} =\bar g_{\mu}.$$ \end{lemma} To be able to express the isomorphism of $W$ and $\bar W$ with this lemma, we need a practical method to compute $g_{\mu}$ and $\bar g_{\mu}.$ We don't have such a method but, at least, we will give in the next section an algorithm which computes the $k$-jet of $g_{\mu}$ (resp. $\bar g_{\mu}$), starting with the $(k+2)$-jets of $a$, $b$ and $c$ (resp. $\bar a$, $\bar b$ and $\bar c$).
If $W$ and $\bar W$ are isomorphic, then for every $k$ this gives many precise polynomial equations between $(a_2,b_2,c_2,\cdots ,a_{k+2}, b_{k+2}, c_{k+2})$ and $({\bar a}_2,{\bar b}_2,{\bar c}_2,\cdots,{\bar a}_{k+2},{\bar b}_{k+2},{\bar c}_{k+2}).$ \section{Algorithm to compute $k$-jets of $g_{\mu}.$} In this section we mimic a classical proof of the existence of a {\sl normal form} for any 3-web to get an algorithm which works for jets. We choose $$\mu=a_2+b_2+c_2,$$ i.e. $\mu$ is the characteristic of $W$ at the origin up to the factor 4. The {\bf input} is the $(k+2)$-jet of $a$, $b$ and $c$, i.e. the numbers $a_2,\cdots ,a_{(k+2)},$ $b_2,\cdots ,b_{(k+2)}$ and $c_2,\cdots ,c_{(k+2)}.$ The {\bf first procedure} gives the $(k+3)$-jet $j^{(k+3)}f_W$ of $f_W$ at the origin by computing the $(k+3)$-jet of the solution of the implicit relation (\ref{pointcommun}) when we replace $a,$ $b$ and $c$ respectively by their $(k+2)$-jets. The {\bf second procedure} is the {\sl normalisation} of $j^{(k+3)}f_W$ to obtain the $k$-jet of $g_\mu.$ We do that in six steps. {\sl First step}. We use the simplifying notation $F=j^{(k+3)}f_W$ and consider it as a polynomial function of two variables. Compute $X,$ the $(k+3)$-jet of the inverse function of $t\mapsto F(t,0).$ {\sl Second step}. Compute $Y,$ the $(k+3)$-jet of the inverse function of $t\mapsto F(0,t).$ {\sl Third step}. Compute the $(k+3)$-jet of $F(X(x),Y(y)).$ We denote it by $G$ (remark that $G$ has the shape $x+y+xy\Theta (x,y)$). {\sl Fourth step}. Consider $K=G(t,t).$ Find the 1-variable polynomial $U=t+u_2t^2+\cdots +u_{(k+3)}t^{(k+3)}$ such that the $(k+3)$-jet of $G(U(t),U(t))$ is equal to $U(2t)$ (it exists by the classical theorem of Sternberg \cite{SS}, which says that every map $t\mapsto 2t+d_2t^2+\cdots$ is conjugate to its linear part; it is also unique because $j^1U=t$). {\sl Fifth step}.
Compute $V,$ the $(k+3)$-jet of the inverse of $U,$ and $H,$ the $(k+3)$-jet of $V(G(U(x),U(y)))$ (remark that $H$ has the shape $x+y+xy(x-y)\Psi (x,y)$). {\sl Sixth step}. Compute $L,$ the $k$-jet of $(H-x-y)/(xy(x-y))$. The {\bf output} is $E=L-\mu,$ which is the $k$-jet of $g_{\mu}.$ We have implemented this algorithm in Maple. It works very rapidly if $k$ is less than or equal to seven. \section{Proof of Theorem \ref{resultatA}.} We keep the notation $\mu:=a_2+b_2+c_2.$ The assumption of our theorem reads $$\bar a_2+\bar b_2+\bar c_2=\mu.$$ Using the algorithm of the preceding section we compute the 5-jets $E$ of $g_{\mu}$ and $\bar E$ of $\bar g_{\mu}.$ We write the Taylor expansion of $E$ as $E_{10} x+ E_{01}y+\cdots+E_{ij}x^iy^j+\cdots.$ We have \begin{equation}\label{rel1} E_{10}=(-2a_2-2b_2+c_2+20a_3+8b_3+14c_3)/7.\end{equation} \begin{equation}\label{rel2}E_{01}=(2a_2+2b_2-c_2+20b_3+8a_3+14c_3)/7.\end{equation} And also $$ E_{20}=a_2/12+b_2/12-c_2/6+10a_2^2/3+c_2^2+2a_2b_2/3-2b_2c_2/3+2a_2c_2/3$$ $$-2a_3+c_3/3+20a_4/3+4b_4/3+3c_4.$$ $$E_{11}=-a_2/12-b_2/12-c_2/3+a_2^2/3-b_2^2/3+4c_2(a_2-b_2)/3+4a_4+4b_4+5c_4.$$ $$ E_{02}=a_2/12+b_2/12-c_2/6-10b_2^2/3-c_2^2-2a_2b_2/3-2b_2c_2/3+2a_2c_2/3$$ $$+2b_3-c_3/3+20b_4/3+4a_4/3+3c_4.$$ The following $E_{ij}$ may have very long expressions. For example, if $i+j=5$ they contain nearly one hundred terms. We only retain that they are polynomial expressions, with rational coefficients, in some of the $a_r,$ $b_r$ and $c_r$ variables.
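The first three steps of the normalisation described above are straightforward to implement with computer algebra. The sketch below is our own Python/SymPy version (independent of the author's Maple worksheet, and run on an invented sample jet $F$ rather than an actual $f_W$): it computes the inverse jets $X$ and $Y$ by undetermined coefficients and checks that $G=F(X(x),Y(y))$ has the announced shape $x+y+xy\Theta(x,y)$.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
N = 4  # order of the jets we keep

def jet(expr, gens, n):
    """Truncate a polynomial expression to total degree <= n."""
    p = sp.Poly(sp.expand(expr), *gens)
    return sum(c * sp.prod(g**e for g, e in zip(gens, mono))
               for mono, c in p.terms() if sum(mono) <= n)

def invert_jet(p, n):
    """n-jet of the inverse of t -> p(t), where p = t + O(t^2)."""
    cs = sp.symbols(f'c2:{n + 1}')
    X = t + sum(c * t**(i + 2) for i, c in enumerate(cs))
    eq = jet(p.subs(t, X), (t,), n) - t
    sol = sp.solve([eq.coeff(t, k) for k in range(2, n + 1)], cs, dict=True)[0]
    return X.subs(sol)

# An invented sample jet F (not an actual f_W):
F = x + y + x**2 + y**2 + 3*x*y
X = invert_jet(F.subs({x: t, y: 0}), N)                          # first step
Y = invert_jet(F.subs({x: 0, y: t}), N)                          # second step
G = jet(F.subs({x: X.subs(t, x), y: Y.subs(t, y)}), (x, y), N)   # third step
```

With these jets one checks $G(x,0)=x$ and $G(0,y)=y$ up to order $N$, i.e. $G-x-y$ is divisible by $xy$; the fourth to sixth steps (the Sternberg linearisation and the division by $xy(x-y)$) can be implemented along the same lines.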
In the case of $\bar E$, we obtain the same expressions for its coefficients ${\bar E}_{ij},$ except that we have to replace $a_r,$ $b_r$ and $c_r$ respectively by $\bar a_r,$ $\bar b_r$ and $\bar c_r.$ To simplify, we use the following notations: $$\bar a_r=a_r+A_r,\ \ \bar b_r=b_r+B_r,\ \ \bar c_r=c_r+C_r,$$ for every $r.$ The hypothesis of our theorem reads $C_2=-A_2-B_2.$ To prove the theorem we have to show that $A_r=B_r=C_r=0$ for every $r.$ We adopt the notations $$T_{ij}=E_{ij}-{\bar E}_{ij},$$ for every $i$ and $j.$ The existence of an isomorphism between $W$ and $\bar W$ implies the set of equations $T_{ij}=0.$ They are polynomial equations with unknowns $A_r,$ $B_r$ and $C_r$ and coefficients rational in some of the $a_r,$ $b_r,$ $c_r.$ For example, the relations (\ref{rel1}) and (\ref{rel2}) give $T_{10}$ and $T_{01},$ i.e. the order one equations. The equations $T_{10}=0$ and $T_{01}=0$ give the relations $$A_3=A_2/4+B_2/4-C_3/2,$$ $$B_3=-A_2/4-B_2/4-C_3/2.$$ At order 2 the equations $T_{20}=0,$ $T_{11}=0$ and $T_{02}=0$ give $$A_4 = A_2/8+c_2B_2/3+b_2B_2/3+B_2/8-b_2A_2/12-B_2A_2/2-B_2a_2/6$$$$-19a_2A_2/12-A_2^2+5c_2A_2/12-C_3/4,$$ $$B_4 = A_2/8-5c_2B_2/12+19b_2B_2/12+B_2/8-a_2A_2/3+b_2A_2/6$$$$+B_2A_2/2+B_2a_2/12+B_2^2-c_2A_2/3+C_3/4,$$ $$C_4 = -A_2/4+c_2B_2/3-5b_2B_2/3-B_2/4+5a_2A_2/3-b_2A_2/3$$$$+B_2a_2/3-B_2^2-c_2A_2/3+A_2^2.$$ Note that the right-hand sides of these relations are polynomial in the variables $A_2,$ $B_2,$ $C_3,$ $a_2,$ $b_2$ and $c_2.$ At order 3, using Maple, we obtain four equations $T_{ij}=0$ and they allow us to express $A_5,$ $B_5,$ $C_5$ and $C_3$ as rational functions of the variables $A_2,$ $B_2,$ $a_2,$ $b_2,$ $c_2,$ $a_3,$ $b_3$ and $c_3,$ with denominator $a_2+b_2+c_2.$ The formulas are too long to be reproduced here.
For the moment we skip order 4 and consider the six equations $T_{ij}=0$ with $i+j=5.$ Maple proves that they allow us to express $A_6,$ $B_6,$ $C_6,$ $A_7,$ $B_7$ and $C_7$ as rational functions of $A_2,$ $B_2,$ $a_2,$ $b_2,$ $c_2,$ $\cdots ,$ $a_5,$ $b_5$ and $c_5$ with denominator $(a_2+b_2+c_2)^2.$ We remark that $A_2=B_2=0$ implies that $A_i,$ $B_i$ and $C_i$ vanish for $i=3,\cdots ,7.$ Now we compute the five equations $T_{ij}=0$ with $i+j=4.$ They have the shape $$\alpha_j A_2^2+\beta_j B_2^2+\gamma_j A_2B_2+\mu_j A_2+\nu_j B_2 =0$$ where $\alpha_j,\beta_j ,\gamma_j ,\mu_j ,\nu_j$ are rational functions of $b_2,$ $c_2,$ $\cdots ,$ $a_5,$ $b_5$ and $c_5$ with denominator $a_2+b_2+c_2.$ We see also that the $(5\times 3)$-matrix whose rows are $(\alpha_j,\beta_j ,\gamma_j )$ has rank 3 for any value of $b_2,$ $c_2,$ $\cdots ,$ $a_5,$ $b_5$ and $c_5.$ This is a consequence of the fact that the coefficients of this matrix depend only on the three numbers $a_2+b_2+c_2,$ $a_3+b_3+c_3$ and $a_2+b_2+2b_3-2a_3.$ So our system of equations can be rewritten as \begin{equation}\label{ordre4}\matrix{A_2^2=\psi_1A_2+\phi_1B_2,\cr B_2^2=\psi_2A_2+\phi_2B_2,\cr A_2B_2=\psi_3A_2+\phi_3B_2,\cr 0=\psi_4A_2+\phi_4B_2,\cr 0=\psi_5A_2+\phi_5B_2. }\end{equation} {\bf To obtain the Maple worksheet which gives these results, contact the author at [email protected].} \begin{lemma} For $i+j>2$ we have the relations $$E_{ij}=\theta_{ij}a_{i+j+2}+\phi_{ij}b_{i+j+2}+\psi_{ij}c_{i+j+2}+S_{ij}$$ where $\theta_{ij},$ $\phi_{ij}$ and $\psi_{ij}$ are constants and $S_{ij}$ is a polynomial in the variables $a_2,$ $b_2,$ $c_2,$ $\cdots ,$ $a_{i+j+1},$ $b_{i+j+1}$ and $c_{i+j+1}.$ Moreover any $(3\times 3)$-submatrix of the matrix whose rows are $(\theta_{ij},\phi_{ij},\psi_{ij})$ is of rank 3. \end{lemma} This can be proven for any $(i,j)$ (without Maple!) as follows.
We use the notation $n=i+j+2.$ We first see that $F,$ the $(n+1)$-jet of $f_W,$ has the shape $$(x+y)/2+ \Theta +a_nK+b_nL+c_nM,$$ where $K,$ $L$ and $M$ are homogeneous polynomials of degree $(n+1)$ in the variables $(x,y)$ with constant coefficients; $\Theta$ is a polynomial expression in the variables $x,$ $y,$ $a_2,$ $b_2,$ $c_2,$ $\cdots ,$ $a_{n-1},$ $b_{n-1}$ and $c_{n-1}.$ Now we apply the normalising procedure, described in the previous section, to $F.$ We only have to follow what happens to the terms containing $a_n,$ $b_n$ and $c_n.$ It is a little long but elementary. Note that, with this lemma, we partly recover the above Maple results: we recover that the equations $T_{ij}=0$ for $i+j=n-2$ give expressions of $A_n,$ $B_n$ and $C_n$ in terms of the previous $A_p,$ $B_p$ and $C_p,$ and some of the $a_r$, $b_r$ and $c_r,$ for $n=4,5,6,7.$ This lemma also proves that, if $A_2,$ $B_2,$ $A_3,$ $B_3$ and $C_3$ vanish, then all the $A_r,$ $B_r$ and $C_r$ vanish as well. Using the Maple calculations above we see that the relation $A_2=B_2=0$ implies that all the $A_r,$ $B_r$ and $C_r$ vanish, and so do all the $T_{ij}.$ Using the first three equations of (\ref{ordre4}) we can replace any monomial in $A_2$ and $B_2$ by a linear expression $\rho A_2+\tau B_2.$ So our system of equations $T_{ij}=0$ can be rewritten as a set of equations $$A_r=u_r^1A_2+u_r^2B_2,\ B_r=v_r^1A_2+v_r^2B_2,\ C_r=w_r^1A_2+w_r^2B_2,\ t^1_kA_2+t^2_kB_2=0,$$ for $r>2$ and infinitely many $k,$ the coefficients $u^s_r,$ $v^s_r,$ $w^s_r$ and $t^s_k$ depending only on the $a_n,$ $b_n$ and $c_n.$ If this system has a non zero solution $(A_2,B_2,A_3,B_3,C_3,\dots)$ then it has infinitely many solutions: $t(A_2,B_2,A_3,B_3,C_3,\dots)$ for any number $t.$ This means that $W$ would be isomorphic to any linear 3-web $\bar W_t$ described by the three 1-variable functions $$\bar a_t=a+tA,\ \bar b_t=b+tB,\ \bar c_t=c+tC,$$ where $A,$ $B$ and $C$ are the functions which have respectively the Taylor expansions
$A_2x^2+\cdots +A_nx^n+\cdots,$ $B_2y^2+\cdots +B_ny^n+\cdots$ and $C_2z^2+\cdots +C_nz^n+\cdots .$ This contradicts the known fact that a non flat linear 3-web can only be isomorphic to a finite number of homographically different linear 3-webs. So we have proven that the only possibility is $A_r=B_r=C_r=0$ for any $r,$ which proves our Theorem. \section{Remarks.} Let $\phi$ be an isomorphism between the two non flat linear webs $W$ and $\bar W.$ We suppose also that $\phi$ maps a point $M$ of the domain of $W$ to a point $\bar M$ of the domain of $\bar W.$ We adopt the description of $W$ (resp. $\bar W$) near $M$ (resp. $\bar M$) of Section \ref{descro}, i.e. with three 1-variable functions $a,$ $b$ and $c$ (resp. $\bar a,$ $\bar b$ and $\bar c$). Then $\phi$ becomes a local diffeomorphism which preserves the origin. As it preserves the $u$-axis and the two bisectors $u=v$, $u=-v,$ its 1-jet at the origin is a homothety $kI$. Then we have $$car_W(M)=k^2car_{\bar W}(\bar M).$$ For any linear 3-web we find in \cite{SA} the construction of 1-forms $U_1,$ $U_2$ and $U_3,$ invariant up to homographies, such that the three foliations are given by the kernels of these forms and $$U_1+U_2+U_3=0.$$ We denote by $U_1,$ $U_2$ and $U_3$ these forms for $W$ and by $\bar U_1,$ $\bar U_2$ and $\bar U_3$ those for $\bar W.$ Classically, there is a function $f$ such that $$\phi^{*}\bar U_i=f.U_i$$ for every $i=1,2,3.$ Using the description of Section \ref{descro}, we can show $$car_W(M)=f^2car_{\bar W}(\bar M).$$ So our result can be rephrased as: if $f^2$ equals 1 at some point then, near this point, $\phi$ is a homography. \end{document}
\begin{document} \title{\bf A note on a metric associated to certain finite groups} \maketitle \begin{abstract} In this short note we introduce a new metric on certain finite groups. It leads to a class of groups for which the element orders satisfy an interesting inequality. This extends the class ${\rm CP}_2$ studied in our previous paper \cite{16}. \end{abstract} \noindent{\bf MSC (2010):} Primary 20D10, 20D20; Secondary 20D15, 20D25, 20E34. \noindent{\bf Key words:} finite groups, element orders, CP-groups, metrics. \section{Introduction} In group theory, and especially in geometric group theory, several metrics on a finite group $G$ have been studied (see e.g. \cite{2,4,6}). These are important because they give a way to measure the distance between any two elements of $G$. A new metric on certain finite groups $G$ will be presented in the following. Let $d:G\times G\longrightarrow\mathbb{N}$ be the function defined by $$d(x,y)=o(xy^{-1})-1, \forall\, x,y\in G,$$where $o(a)$ is the order of $a\in G$. Then $d$ is a metric on $G$ if and only if $$o(ab)<o(a)+o(b), \forall\, a,b\in G.\leqno(*)$$Denote by ${\rm CP}_3$ the class of finite groups $G$ which satisfy $(*)$. Remark that $d$ becomes an ultrametric on $G$ if and only if $$o(ab)\leq{\rm max}\{o(a),o(b)\}, \forall\, a,b\in G,$$that is, $G$ belongs to the class ${\rm CP}_2$ studied in \cite{16}. Consequently, ${\rm CP}_2$ is properly contained in ${\rm CP}_3$ (an example of a finite group in ${\rm CP}_3$ but not in ${\rm CP}_2$ is the symmetric group $S_3$). This implies that ${\rm CP}_3$ contains all abelian $p$-groups, as well as the quaternion group $Q_8$ or the alternating group $A_4$, but at first sight it is difficult to describe all finite groups in this class. Their study is the main goal of our note. Most of our notation is standard and will not be repeated here. Basic notions and results on group theory can be found in \cite{7,11,12,15}.
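Condition $(*)$ and the ultrametric condition can be tested exhaustively on small groups. The following Python sketch (our own illustration, using SymPy's permutation groups) does so, confirming for instance that $S_3$ lies in ${\rm CP}_3$ but not in ${\rm CP}_2$, and that $\mathbb{Z}_6$ fails $(*)$.

```python
from sympy.combinatorics.named_groups import (SymmetricGroup, CyclicGroup,
                                              DihedralGroup)

def in_CP3(G):
    """Condition (*): o(ab) < o(a) + o(b) for all a, b in G."""
    els = list(G.elements)
    return all((a*b).order() < a.order() + b.order()
               for a in els for b in els)

def in_CP2(G):
    """Ultrametric condition: o(ab) <= max(o(a), o(b)) for all a, b in G."""
    els = list(G.elements)
    return all((a*b).order() <= max(a.order(), b.order())
               for a in els for b in els)
```

The same helpers confirm the later remarks, e.g. `in_CP3(DihedralGroup(4))` is `False`, matching the claim that $D_8$ (of order 8) is not in ${\rm CP}_3$.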
\section{Main results} First of all, we observe that ${\rm CP}_3$ is closed under subgroups. On the other hand, since the cyclic group $\mathbb{Z}_6$ does not belong to ${\rm CP}_3$ (it has two elements of orders 2 and 3 whose sum is of order 6), we infer that ${\rm CP}_3$ is not closed under direct products or extensions. \noindent{\bf Remarks.} \begin{itemize} \item[\rm 1.] We know that ${\rm CP}_2\subset{\rm CP}_3$. Then, by Remark 1 of \cite{16}, we are able to indicate three other classes of finite $p$-groups, larger than the class of abelian $p$-groups, that are contained in ${\rm CP}_3$: regular $p$-groups (see Theorem 3.14 of \cite{15}, II, page 47), $p$-groups whose subgroup lattices are modular (see Lemma 2.3.5 of \cite{13}), and powerful $p$-groups for $p$ odd (see the main theorem of \cite{17}). \item[\rm 2.] $Q_8$ is the smallest nonabelian $p$-group contained in ${\rm CP}_3$, while the dihedral group $D_8$ is the smallest $p$-group not contained in ${\rm CP}_3$. Note that all quaternion groups $Q_{2^n}$, $n\geq 4$, as well as all dihedral groups $D_{2n}$, $n\geq 4$, do not belong to ${\rm CP}_3$. \item[\rm 3.] The groups $S_n$ with $n\geq 4$ are not contained in ${\rm CP}_3$ (for this it is enough to observe that $S_4\not\in {\rm CP}_3$: there are $a=(12)(34),b=(13)\in S_4$ such that $o(ab)=4\nless o(a)+o(b)=4$). Similarly, the groups $A_n$ with $n\geq 5$ are also not contained in ${\rm CP}_3$. \end{itemize} The following theorem gives a connection between ${\rm CP}_3$ and the well-known class of CP-groups (see e.g. \cite{1,3,5,8,9,10,14,18}). \noindent{\bf Theorem 1.} {\it ${\rm CP}_3$ is properly contained in ${\rm CP}$.} \noindent{\bf Proof.} Assume that a group $G$ in ${\rm CP}_3$ contains an element $x$ whose order is not a prime power. Then there are two powers $a$ and $b$ of $x$ such that $o(a)=p$ and $o(b)=q$, where $p$ and $q$ are distinct primes.
Since $a$ and $b$ commute, we have $o(ab)=pq$. By $(*)$ it follows that $$pq<p+q,$$a contradiction. Clearly, $S_4$ is contained in ${\rm CP}$, but not in ${\rm CP}_3$. This shows that the inclusion ${\rm CP}_3\subset{\rm CP}$ is proper. \rule{1.5mm}{1.5mm} An argument similar to the one in the above proof leads to the following property of groups in ${\rm CP}_3$. \noindent{\bf Theorem 2.} {\it Any abelian subgroup of a group in ${\rm CP}_3$ is a $p$-group. In particular, an abelian group is contained in ${\rm CP}_3$ if and only if it is a $p$-group.} \smallskip Our next result proves that the intersections of ${\rm CP}_2$ and ${\rm CP}_3$ with the class of $p$-groups are the same. \noindent{\bf Theorem 3.} {\it A $p$-group $G$ is contained in ${\rm CP}_3$ if and only if it is contained in ${\rm CP}_2$.} \noindent{\bf Proof.} Assume that $G$ belongs to ${\rm CP}_3$ and let $p^n$ be its order. We will prove that for every $i=0,1,...,n$ the set $G_i=\{x \in G \mid o(x)\leq p^i\}$ is a normal subgroup of $G$. Let $x,y\in G_i$. Then $o(x),o(y)\leq p^i$ and therefore $o(xy)< 2p^i$ by $(*)$. On the other hand, we know that $G$ belongs to ${\rm CP}$ and so $o(xy)=p^j$ for some non-negative integer $j$. Thus $p^j<2p^i$, which leads to $j\leq i$, i.e. $xy\in G_i$. This proves that $G_i$ is a subgroup of $G$. Moreover, $G_i$ is normal in $G$ because the order map is constant on each conjugacy class. Then Theorem A of \cite{16} implies that $G$ belongs to ${\rm CP}_2$, as desired. \rule{1.5mm}{1.5mm} By \cite{14} we know that only eight nonabelian finite simple CP-groups exist: ${\rm PSL}(2,q)$ for $q=4,7,8,9,17$, ${\rm PSL}(3,4)$, ${\rm Sz}(8)$, and ${\rm Sz}(32)$. None of these groups is contained in ${\rm CP}_3$, as the following theorem shows.
\noindent{\bf Theorem 4.} {\it ${\rm CP}_3$ contains no nonabelian finite simple group.} \noindent{\bf Proof.} Since the product of any two elements of order 2 of a group in ${\rm CP}_3$ can have order at most 3, we infer that ${\rm PSL}(2,q)$ does not belong to ${\rm CP}_3$ whenever $q\geq 4$ (note that ${\rm PSL}(2,2)\cong S_3$ and ${\rm PSL}(2,3)\cong A_4$ belong to ${\rm CP}_3$). ${\rm PSL}(3,4)$ has a subgroup isomorphic to ${\rm PSL}(2,4)\cong A_5$, and consequently it also does not belong to ${\rm CP}_3$. The same conclusion is obtained for the Suzuki groups ${\rm Sz}(8)$ and ${\rm Sz}(32)$ because they contain a subgroup isomorphic to $D_{10}$. \rule{1.5mm}{1.5mm} \smallskip Inspired by Theorem 4 and by Corollary E of \cite{16}, we came up with the following conjecture. \noindent{\bf Conjecture 5.} {\it ${\rm CP}_3$ is properly contained in the class of finite solvable groups.} Finally, we indicate three natural problems concerning the class of finite groups introduced in our paper. \noindent{\bf Problem 1.} Study whether ${\rm CP}_3$ is closed under homomorphic images. \noindent{\bf Problem 2.} Determine the intersection between ${\rm CP}_3$ and ${\rm CP}_1$. \noindent{\bf Problem 3.} Give a precise description of the structure of finite groups contained in ${\rm CP}_3$. \vspace*{5ex}\small \begin{minipage}[t]{5cm} Marius T\u arn\u auceanu \\ Faculty of Mathematics \\ ``Al.I. Cuza'' University \\ Ia\c si, Romania \\ e-mail: {\tt [email protected]} \end{minipage} \end{document}
\begin{document} \title{Energy equality for the isentropic compressible Navier-Stokes equations without upper bound of the density} \author{ Yulin Ye\footnote{ School of Mathematics and Statistics, Henan University, Kaifeng, 475004, P. R. China. Email: [email protected]}, ~ Yanqing Wang\footnote{Corresponding author. College of Mathematics and Information Science, Zhengzhou University of Light Industry, Zhengzhou, Henan 450002, P. R. China. Email: [email protected]} ~ \, \, and \, Huan Yu \footnote{ School of Applied Science, Beijing Information Science and Technology University, Beijing, 100192, P. R. China. Email: [email protected]} } \date{} \maketitle \begin{abstract} In this paper, we are concerned with the minimal regularity of both the density and the velocity required for weak solutions of the isentropic compressible Navier-Stokes equations to satisfy the energy equality. Energy equality criteria without an upper bound on the density are established. Almost all previous corresponding results require $\varrho\in L^{\infty}(0,T;L^{\infty}(\mathbb{T}^{d}))$.
\end{abstract} \noindent {\bf MSC(2000):}\quad 35Q30, 35Q35, 76D03, 76D05\\\noindent {\bf Keywords:} compressible Navier-Stokes equations; energy equality; vacuum \section{Introduction} \label{intro} \setcounter{section}{1}\setcounter{equation}{0} The classical isentropic compressible Navier-Stokes equations read \begin{equation}\left\{\begin{aligned}\label{INS} &\varrho_t+\nabla \cdot (\varrho v)=0, \\ &(\varrho v)_{t} +\mathrm{div}\,(\varrho v\otimes v)+\nabla P(\varrho )- \mathrm{div}\,(\mu\mathfrak{D}v)- \nabla(\lambda\,\mathrm{div}\,v)=0,\\ \end{aligned}\right.\end{equation} where $\varrho$ stands for the density of the flow, $v$ represents the flow velocity field and $P(\varrho)=\varrho^{\gamma}$ is the scalar pressure; the viscosity coefficients $\mu$ and $\lambda$ satisfy $\mu\geq 0$ and $2\mu+d\lambda>0$; $\mathfrak{D}v=\frac12(\nabla v+ \nabla v^{T})$ is the strain tensor. We complement equations \eqref{INS} with the initial data \begin{equation}\label{INS1} \varrho(0,x)=\varrho_0(x),\ (\varrho v)(0,x)=(\varrho_0 v_0)(x),\ x\in \Omega, \end{equation} where we define $v_0 =0$ on the set $\{x\in \Omega:\ \varrho_0=0\}.$ In the present paper, we consider the periodic case, that is, $\Omega=\mathbb{T}^d$ with dimension $d\geq 2$. One of the celebrated results on the isentropic compressible Navier-Stokes equations \eqref{INS} is the global existence of finite energy weak solutions due to Lions \cite{[Lions1]} with $\gamma\geq \frac{3d}{d+2}$ for $d=2$ or $3$, where $d$ is the spatial dimension. Subsequently, in \cite{FNP}, Feireisl-Novotny-Petzeltov\'a further extended Lions' work to $\gamma >\frac{3}{2}$ for $d=3$. In \cite{JZ}, Jiang and Zhang considered the global existence of weak solutions for the case $\gamma>1$ with spherically symmetric initial data.
For the convenience of the reader, we recall the definition of finite energy weak solutions: \begin{definition}\label{wsdefi} A pair ($\varrho,v$) is called a weak solution to \eqref{INS} with initial data \eqref{INS1} if ($\varrho,v$) satisfies \begin{enumerate}[(i)] \item equation \eqref{INS} holds in $D'(0,T;\Omega)$ and \begin{equation}P(\varrho ), \varrho |v|^2\in L^\infty(0,T;L^1(\Omega)),\ \ \ \nabla v\in L^2(0,T;L^2(\Omega)), \end{equation} \item[(ii)] the density $\varrho$ is a renormalized solution of \eqref{INS} in the sense of \cite{[PL]}. \item[(iii)] the energy inequality holds: \begin{equation}\label{energyineq} \begin{aligned} E(t) +\int_0^t\int_{\Omega}\left( \mu|\nabla v|^2+(\mu+\lambda)|\mathrm{div}\, v|^2 \right) dxds \leq E(0), \end{aligned}\end{equation} where $E(t)=\int_{\Omega}\left( \frac{1}{2}\varrho |v|^2+\frac{\varrho^{\gamma}}{\gamma-1} \right) dx$. \end{enumerate} \end{definition} Note that the finite energy weak solutions constructed in \cite{[Lions1],FNP} satisfy the energy inequality \eqref{energyineq}. A natural question is how much regularity of weak solutions to this system is required to ensure the energy equality. In \cite{[Yu2]}, Yu opened the door to the study of Lions-Shinbrot type energy equality criteria for weak solutions of the compressible Navier-Stokes equations, in both the non-vacuum and vacuum cases. The classical Lions-Shinbrot criterion states that if a Leray-Hopf weak solution $v$ of the 3D incompressible Navier-Stokes equations satisfies $ v\in L^{p}(0,T;L^{q}(\mathbb{T}^d)),~\text{with}~\frac{2}{p}+ \frac{2}{q}=1~\text{and}~q\geq 4,$ then the following energy equality holds $$ \|v\|_{L^{2}(\Omega)}^{2}+2 \int_{0}^{T}\|\nabla v\|_{L^{2}(\Omega)}^{2}ds= \|v_0\|_{L^{2}(\Omega)}^{2}.
$$ Yu's result in \cite{[Yu2]} without vacuum can be formulated as follows: if a weak solution $(\varrho, v)$ of the compressible Navier-Stokes equations \eqref{INS} with $\mu =2\varrho$ and $\lambda =0$ satisfies \begin{equation}\begin{aligned}\label{yu1} \sqrt{\varrho}v\in L^{\infty}(0,T;L^{2}(\Omega)),\ \sqrt{\varrho}\nabla v\in L^{2}(0,T;L^{2}(\Omega)),\\ 0<c_{1}\leq\varrho\leq c_{2}<\infty, \ \ \nabla\sqrt{\varrho} \in L^{\infty}(0,T;L^{2}(\Omega)),\\ v\in L^{p}(0,T;L^{q}(\Omega)) ~~\text{with}~\frac1p+\frac1q\leq \frac5{12} \ \text{ and}\ q\geq6,\ \sqrt{\varrho_{0}} v_{0} \in L^{4}(\Omega), \end{aligned} \end{equation} then such a weak solution $(\varrho, v)$ satisfies, for any $t \in[0, T]$, \begin{equation}\label{EIy}\begin{aligned} \int_{\mathbb{T}^{d}}\left( \frac{1}{2}\varrho |v|^2+\frac{\varrho^{\gamma}}{\gamma-1} \right) dx +\int_0^t\int_{\mathbb{T}^{d}}& \varrho|\mathfrak{D}v|^2 dxdt = \int_{\mathbb{T}^{d}}\left( \frac{1}{2} \varrho_0 |v_0|^2+\frac{\varrho_{0}^{\gamma}}{\gamma-1}\right)dx. \end{aligned} \end{equation} Subsequently, Nguyen-Nguyen-Tang \cite{[NNT]} obtained the following Lions-Shinbrot type criterion for the weak solutions of the compressible Navier-Stokes equations \eqref{INS} with $\mu=\mu(\varrho)$, $\lambda=\lambda(\varrho)$ and a general pressure law $P(\varrho)\in C^{2}(0,+\infty)$: the conditions \begin{equation}\begin{aligned}\label{NNT} &0< c_{1}\leq\varrho\leq c_{2}<\infty,\ v\in L^{\infty}(0,T; L^{2} (\mathbb{T}^{d})),\ \nabla v\in L^{2}(0,T; L^{2}(\mathbb{T}^{d})), \\&\sup_{t\in(0,T)}\sup_{|h|<\varepsilon}|h|^{-\frac12} \|\varrho(\cdot+h,t)-\varrho(\cdot,t)\|_{L^{2}(\mathbb{T}^{d})}<\infty,\\ &v\in L^{4}(0,T;L^{4}(\mathbb{T}^{d})) \end{aligned}\end{equation} ensure that the corresponding energy equality is valid.
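To see why the exponents in the Lions-Shinbrot type conditions just recalled pair off as they do, note that the problematic commutator terms are cubic in $(v,\nabla v)$; a minimal sketch of the H\"older bookkeeping, with the density factors ignored thanks to the two-sided bound $0<c_1\le\varrho\le c_2$, reads:

```latex
% Schematic cubic term: one gradient factor in L^p_t L^q_x and two velocity
% factors; Hoelder in x with 1/q + (q-1)/(2q) + (q-1)/(2q) = 1 and then in t
% with 1/p + (p-1)/(2p) + (p-1)/(2p) = 1 gives
\int_0^T\!\!\int_{\mathbb{T}^d}|\nabla v|\,|v|^{2}\,dx\,dt
  \le \|\nabla v\|_{L^{p}(0,T;L^{q})}\,
      \|v\|_{L^{\frac{2p}{p-1}}(0,T;L^{\frac{2q}{q-1}})}^{2}.
```

This is finite precisely under the pairing $v\in L^{\frac{2p}{p-1}}(0,T;L^{\frac{2q}{q-1}})$, $\nabla v\in L^{p}(0,T;L^{q})$; the case $p=q=2$ recovers the Lions exponents $v\in L^{4}(0,T;L^{4})$.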
Very recently, in \cite{[WY]}, the authors refined \eqref{NNT} to \begin{equation} \begin{aligned}\label{wy} &0< c_{1}\leq\varrho\leq c_{2}<\infty,\ v\in L^{\infty}(0,T; L^{2}(\mathbb{T}^{d})),\ \nabla v\in L^{2}(0,T; L^{2}(\mathbb{T}^{d})), \\ & v\in L^{\frac{2p}{p-1}}(0,T; L^{\frac{2q}{q-1}}(\mathbb{T}^{d})),\ \nabla v\in L^{p}(0,T; L^{q}(\mathbb{T}^{d})). \end{aligned} \end{equation} It is worth pointing out that the special case $p=q=2$ in \eqref{wy} is an improvement of \eqref{NNT}. Other Lions-Shinbrot type criteria for the compressible Navier-Stokes system \eqref{INS} can be found in \cite{[Liang],[WY]}. To the best of our knowledge, all Lions-Shinbrot type criteria for the energy equality in the compressible Navier-Stokes system require an upper bound on the density. One goal of this paper is to analyze the role of the integrability of the density in the energy equality for weak solutions of the compressible Navier-Stokes equations. In the case of no vacuum, we formulate our first result as follows: \begin{theorem}\label{the1.2} Let $(\varrho, v)$ be a weak solution in the sense of Definition \ref{wsdefi}. Assume \begin{equation} \label{u}\begin{aligned} &0< c\leq\varrho \in L^{ k}(0,T;L^{ l}(\mathbb{T}^d)), \\ & v\in L^{p }(0,T;L^{q} (\mathbb{T}^{d})),\ \nabla v\in L^{\frac{kp}{k(p-2)-4p}}(0,T; L^{\frac{lq}{l(q-2)-4q}}(\mathbb{T}^{d})), \end{aligned} \end{equation} with $k> \max\{\frac{4p}{p-2},\frac{(\gamma-2)p }{2}\}$, $l> \max\{\frac{4q}{q-2}, \frac{(\gamma-2)q}{2}\}$ and $k(4-p)+7p\geq 0,\ l(4-q)+7q\geq 0$.
Then the energy equality below is valid for any $t\in[0,T]$: \begin{equation}\label{EI}\begin{aligned} \int_{\mathbb{T}^{d}}\left( \frac{1}{2}\varrho |v|^2+\frac{\varrho^{\gamma}}{\gamma-1} \right) dx +\int_0^t\int_{\mathbb{T}^{d}}&\left[\mu|\nabla v|^2+(\mu+\lambda)|\mathrm{div}\, v|^2 \right] dxdt\\=&\int_{\mathbb{T}^{d}}\left( \frac{1}{2} \varrho_0 |v_0|^2+\frac{\varrho_{0}^{\gamma}}{\gamma-1}\right)dx.\end{aligned}\end{equation} \end{theorem} \begin{remark} This theorem is a generalization of the recent works \cite{[WY],[NNT]}. It seems to be the first energy equality criterion for the compressible Navier-Stokes equations that does not require an upper bound on the density. \end{remark} \begin{remark} A special case of \eqref{u} is that $v\in L^4(0,T;L^4(\mathbb{T}^d))$, $\nabla v\in L^{\frac{2k}{k-8}}(0,T;L^{\frac{2l}{l-8}}(\mathbb{T}^d))$ and $\varrho \in L^k(0,T;L^l(\mathbb{T}^d))$ with $k>\max\{8, 4(\gamma-2)\} $ and $l>\max\{8, 4(\gamma-2)\}$ guarantee the energy conservation of weak solutions. This shows that lower integrability of the density $\varrho$ requires more integrability of the velocity gradient $\nabla v$ for energy conservation of the isentropic compressible fluid without vacuum, and vice versa. \end{remark} Next, we turn our attention to the energy equality of the isentropic Navier-Stokes equations allowing vacuum.
In the aforementioned work \cite{[Yu2]}, Yu also dealt with the energy conservation of system \eqref{INS} in the presence of vacuum and proved that if a weak solution $(\varrho, v)$ in the sense of Definition \ref{wsdefi} satisfies \begin{equation}\begin{aligned}\label{yu} &0\leq\varrho\leq c<\infty, \ \ \nabla\sqrt{\varrho} \in L^{\infty}(0,T;L^{2}(\mathbb{T}^{d})),\\ &v\in L^{p}(0,T;L^{q}(\mathbb{T}^{d})) ~~\text{with}~p\geq4 \ \text{ and}\ q\geq6,\ \text{and}\ v_0\in L^{q_0},\ q_0\geq 3, \end{aligned} \end{equation} then the energy equality \eqref{EI} is valid for $t\in[0,T]$. Developing the technique used in \cite{[Yu2]}, the authors of \cite{[YWW]} showed that Lions's $L^4 (0,T; L^4(\Omega))$ condition for energy balance is also valid for weak solutions of the isentropic compressible Navier-Stokes equations allowing vacuum, via the following energy equality criterion: if a weak solution $(\varrho, v)$ satisfies, for any $p,q\geq 4$ and $dp< 2q+3d$ with $d\geq 2$, \begin{equation}\label{key0} \begin{aligned} &0\leq \varrho<c<\infty,\ \nabla\sqrt{\varrho}\in L^{\frac{2p}{3-p}}(0,T;L^{\frac{2q}{3-q}}(\mathbb{T}^{d})), \\& v\in L^{\frac{2p}{p-1}}(0,T;L^{\frac{2q}{q-1} }(\mathbb{T}^{d})),\ \nabla v\in L^{p}(0,T;L^{q}(\mathbb{T}^{d})),\ v_0\in L^{\frac{q}{q-1}}(\mathbb{T}^{d}), \end{aligned}\end{equation} then there holds \eqref{EI}.
Now, we state our second result, for the compressible Navier-Stokes equations \eqref{INS} in the presence of vacuum, as follows: \begin{theorem}\label{the1.5} For any dimension $d\geq2$, the energy equality \eqref{EI} of weak solutions $(\varrho, v)$ to the compressible Navier-Stokes equations \eqref{INS} with vacuum is valid for $t\in[0,T]$ provided \begin{equation} \begin{aligned} &0\leq \varrho\in L^{k}(0,T;L^{l}(\mathbb{T}^d)),\ \nabla\sqrt{\varrho}\in L^{\frac{2pk}{2k(p-3)-p}}(0,T;L^{\frac{2lq}{2l(q-3)-q}}(\mathbb{T}^d)),\\ & v\in L^{p }(0,T;L^{q}(\mathbb{T}^d)),\ \nabla v \in L^{\frac{pk}{k(p-2)-p}}(0,T;L^{\frac{ql}{l(q-2)-q}}(\mathbb{T}^d))\ \text{and}\ v_0\in L^{\max\{ \frac{2\gamma }{\gamma -1},\frac{q}{2}\}}(\mathbb{T}^d), \end{aligned}\end{equation} where $k> \max\{\frac{p}{2(p-3)},\frac{p}{p-2}, \frac{(\gamma-1)p}{2},\frac{(\gamma-1)(d+q)p}{2q-d(p-3)}\}$, $l> \max\{\frac{q}{2(q-3)},\frac{q}{q-2}, \frac{(\gamma-1)q}{2}\}$, $p>3$ and $q>\max\{3, \frac{d(p-3)}{2}\}$. \end{theorem} \begin{remark} If we let $k=l\rightarrow+\infty$, then Theorem \ref{the1.5} reduces to the result obtained in \cite{[YWW]}; hence, this result can be seen as a generalization of the recent works \cite{[Yu2],[YWW]}. To the best of our knowledge, this seems to be the first energy conservation criterion for weak solutions of the compressible Navier-Stokes equations without an upper bound on the density in the presence of vacuum.
\end{remark} \begin{coro} The energy equality \eqref{EI} of weak solutions $(\varrho, v)$ to the compressible Navier-Stokes equations \eqref{INS} allowing vacuum holds for $t\in[0,T]$ provided \begin{enumerate}[(1)] \item $\nabla v \in L^{2 }(0,T;L^{2}(\mathbb{T}^{d})),$ $$ \begin{aligned} &0\leq\varrho\in L^{k}(0,T;L^{l} (\mathbb{T}^{d})),\ \nabla\sqrt{\varrho}\in L^{\frac{4k}{ k +4}}(0,T;L^{\frac{4l}{ l+4}} (\mathbb{T}^{d})),\\ & v\in L^{\frac{4k}{k - 2}}(0,T;L^{\frac{4l}{ l- 2}} (\mathbb{T}^{d})),\ v_0\in L^{\max\{\frac{2\gamma}{\gamma-1},\frac{2l}{l-2}\}}(\mathbb{T}^d),\\ &\text{with}\ k,l> 2\gamma\ \text{and} \ k>\frac{2\Big(2(d+4)\gamma+d\Big)l-4d(2\gamma+1)}{(8-d)l+2d}; \end{aligned}$$ \item $v\in L^4(0,T;L^4(\mathbb{T}^d))$, $$ \begin{aligned} &0\leq \varrho \in L^{k}(0,T;L^{l}(\mathbb{T}^d)),\ \nabla \sqrt{\varrho }\in L^{\frac{4k}{k-2}}(0,T;L^{\frac{4l}{l-2}}(\mathbb{T}^d)),\\ & \nabla v\in L^{\frac{2k}{k-2}}(0,T;L^{\frac{2l}{l-2}}(\mathbb{T}^d))\ \text{and}\ v_0\in L^{\frac{2\gamma}{\gamma-1}}(\mathbb{T}^d),\\ & \text{with}\ k>\max\{2,\frac{4(\gamma -1)(d+4)}{8-d}\}\ \text{and}\ l>\max\{2,2(\gamma-1) \}. \end{aligned}$$ \end{enumerate} \end{coro} \begin{remark} This shows that lower integrability of the density $\varrho$ requires more integrability of the velocity $v$ or of its gradient $\nabla v$ for energy conservation of the isentropic compressible fluid with vacuum, and vice versa. \end{remark} The study of the minimal regularity of weak solutions ensuring energy conservation in fluid equations originated from Onsager's work \cite{[Onsager]}.
The recent progress on Onsager's conjecture can be found in \cite{[CY],[CET],[NNT1],[ADSW]}. We refer the reader to \cite{[WYY]} for the Onsager conjecture on the energy conservation of the isentropic compressible Euler equations via an energy conservation criterion involving the density $\varrho\in L^{k}(0,T;L^{l}(\mathbb{T}^{d}))$. There are also very significant recent developments on the energy equality of Leray-Hopf weak solutions of the 3D incompressible Navier-Stokes equations in \cite{[WWY],[BY],[BC],[CL],[Zhang],[Yu1],[Yu3]}. The paper is organized as follows: in Section 2, we present some auxiliary lemmas for dealing with a density without upper bound. In Section 3, we prove sufficient conditions for weak solutions to keep the energy equality in the isentropic compressible Navier-Stokes equations in the absence of vacuum. Finally, Section 4 is devoted to the energy equality criterion in the presence of vacuum. \section{Notations and some auxiliary lemmas} \label{section2} First, we introduce some notations used in this paper. For $p\in [1,\,\infty]$, the notation $L^{p}(0,\,T;X)$ stands for the set of measurable functions on the interval $(0,\,T)$ with values in $X$ such that $\|f(t,\cdot)\|_{X}$ belongs to $L^{p}(0,\,T)$. The classical Sobolev space $W^{k,p}(\mathbb{T}^d)$ is equipped with the norm $\|f\|_{W^{k,p}(\mathbb{T}^d)}=\sum\limits_{|\alpha| =0}^{k}\|D^{\alpha}f\|_{L^{p}(\mathbb{T}^d)}$.
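The estimates in this section repeatedly rely on two classical facts about convolutions, recorded here for the reader's convenience (both are standard; no originality is claimed): Young's convolution inequality, and the resulting $L^q$-stability of the mollification $f\mapsto f^{\varepsilon}$ introduced below.

```latex
% Young's convolution inequality:
\|f*g\|_{L^{r}}\le\|f\|_{L^{a}}\|g\|_{L^{b}},
\qquad 1+\frac1r=\frac1a+\frac1b,\quad 1\le a,b,r\le\infty.
% Consequently, for a non-negative mollifying kernel with unit mass,
\|f^{\varepsilon}\|_{L^{q}}\le\|f\|_{L^{q}}
\quad\text{and}\quad
f^{\varepsilon}\to f\ \text{in } L^{q}\ (1\le q<\infty)\ \text{as}\ \varepsilon\to0.
```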
For simplicity, we denote $$\int_0^T\int_{\mathbb{T}^d} f(t, x)dxdt=\int_0^T\int f\ ~~\text{and}~~ \|f\|_{L^p(0,T;X )}=\|f\|_{L^p(X)}.$$ On the one hand, let $\eta$ be a non-negative smooth function supported in the spatial ball of radius 1 with integral equal to 1; we define the rescaled spatial mollifier $\eta_{\varepsilon}(x)=\frac{1}{\varepsilon^{d}}\eta(\frac{x}{\varepsilon})$ and $$ f^{\varepsilon}(x)=\int_{\mathbb{T}^d}f(y)\eta_{\varepsilon}(x-y)dy. $$ Then we have the following standard property of the mollifying kernel: \begin{lemma}\label{lem2.11} Let $ p,q,p_1,q_1,p_2,q_2\in[1,+\infty)$ with $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2},\frac{1}{q}=\frac{1}{q_1}+\frac{1}{q_2} $. Assume $f\in L^{p_1}(0,T;L^{q_1}(\mathbb{T}^d)) $ and $g\in L^{p_2}(0,T;L^{q_2}(\mathbb{T}^d))$. Then there holds \begin{equation}\label{a4} \|(fg)^\varepsilon-f^\varepsilon g^\varepsilon\|_{L^p(0,T;L^q(\mathbb{T}^d))}\rightarrow 0,\ \ \ \text{as}\ \varepsilon\rightarrow 0. \end{equation} \end{lemma} We also recall the general Constantin-E-Titi type and Lions type commutator estimates on the mollifying kernel proved in \cite{[WY],[NNT]}. \begin{lemma} \label{lem2.2} Let $1\leq p,q,p_1,p_2,q_1,q_2\leq \infty$ with $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$ and $\frac{1}{q}=\frac{1}{q_1}+\frac{1}{q_2}$. Assume $f\in L^{p_1}(0,T;W^{1,q_1}(\mathbb{T}^d))$ and $g\in L^{p_2}(0,T;L^{q_2}(\mathbb{T}^d))$. Then for any $\varepsilon> 0$, there holds \begin{align} \label{fg'} \|(fg)^\varepsilon-f^\varepsilon g^\varepsilon\|_{L^p(0,T;L^q(\mathbb{T}^d))}\leq C\varepsilon \|\nabla f\|_{L^{p_1}(0,T;L^{ q_1}( \mathbb{T}^d))}\|g\|_{L^{p_2}(0,T;L^{q_2}(\mathbb{T}^d))}.
\end{align} Moreover, if $q_1,q_2<\infty$, then \begin{align}\label{limite'} \limsup_{\varepsilon \to 0}\varepsilon^{-1} \|(fg)^\varepsilon-f^\varepsilon g^\varepsilon\|_{L^p(0,T;L^q(\mathbb{T}^d))}=0. \end{align} \end{lemma} The following two lemmas play an important role in the proof of Theorem \ref{the1.2}. We refer the reader to \cite{[NNT],[Liang]} for their original versions. \begin{lemma} \label{lem2.1} Suppose that $f\in L^{p}(0,T;L^{q}(\mathbb{T}^{d}))$. Then for any $\varepsilon>0$, there holds \begin{equation}\label{2.1} \|\nabla f^{\varepsilon}\|_{L^{p}(0,T;L^{q}(\mathbb{T}^{d}))} \leq C\varepsilon^{-1}\| f\|_{L^{p}(0,T;L^{q}(\mathbb{T}^{d}))}, \end{equation} and, if $p,q<\infty$, \begin{equation}\label{2.2} \limsup_{\varepsilon\rightarrow0} \varepsilon\|\nabla f^{\varepsilon}\|_{L^{p}(0,T;L^{q}(\mathbb{T}^{d}))}=0. \end{equation} Moreover, if $0<c_{1}\leq g\in L^k(0,T;L^{l}(\mathbb{T}^d))$, then there holds, for any $\varepsilon>0$, \begin{equation}\label{2.3}\begin{aligned} \Big\|\nabla \frac{f^{\varepsilon}}{g^{\varepsilon}}\Big\|_{L^{\frac{pk}{k+p}}(0,T;L^{\frac{ql}{l+q}}(\mathbb{T}^{d}))}\leq C \varepsilon^{-1}\|f\|_{L^{p}(0,T;L^{q}(\mathbb{T}^{d}))}\|g\|_{L^k(0,T;L^{l}(\mathbb{T}^d))}, \end{aligned}\end{equation} and, if $1\leq q,l<\infty$, \begin{equation}\label{2.4} \limsup_{\varepsilon\rightarrow0} \varepsilon\Big\|\nabla \frac{f^{\varepsilon}}{g^{\varepsilon}}\Big\|_{L^{\frac{pk}{k+p}}(0,T;L^{\frac{ql}{l+q}}(\mathbb{T}^{d}))} =0.\end{equation} \end{lemma} \begin{proof} The proof of \eqref{2.1} and \eqref{2.2} was given in \cite{[NNT]}; hence we only focus on the proof of \eqref{2.3}-\eqref{2.4} here.
By direct calculation and using the lower bound of $g$, we have \begin{equation}\label{2.5} \begin{aligned} \Big\|\nabla \Big(\frac{f^\varepsilon}{g^\varepsilon}\Big)\Big\|_{L^{\frac{ql}{l+q}}}&\leq C\left(\Big\|\frac{g^\varepsilon}{(g^\varepsilon)^2}\nabla f^\varepsilon\Big\|_{L^{\frac{ql}{l+q}}}+\Big\|f^\varepsilon\frac{\nabla g^\varepsilon}{(g^\varepsilon)^2}\Big\|_{L^{\frac{ql}{l+q}}}\right)\\ &\leq C\left(\|g^\varepsilon \nabla f^\varepsilon\|_{L^{\frac{ql}{l+q}}}+\|f^\varepsilon \nabla g^\varepsilon\|_{L^{\frac{ql}{l+q}}}\right).\\ \end{aligned} \end{equation} We then need to deal with the two terms on the right-hand side of \eqref{2.5}. Since \begin{equation}\label{2.6} \begin{aligned} g^\varepsilon\nabla f^\varepsilon&=\int \frac{1}{\varepsilon^d} g(y)\eta(\frac{x-y}{\varepsilon})dy \int \frac{1}{\varepsilon^d}f(y)\nabla \eta (\frac{x-y}{\varepsilon})\frac{1}{\varepsilon}dy\\ &\leq C{\varepsilon}^{-1}\int_{B(x,\varepsilon)}\frac{1}{\varepsilon^d}|g(y)|dy \int_{B(x,\varepsilon)}\frac{1}{\varepsilon^d}|f(y)|dy\\ &\leq C{\varepsilon}^{-1}\left(\int_{B(0,\varepsilon)}|g(x-z)|\frac{{\bf{1}}_{B(0,\varepsilon)}(z)}{\varepsilon^d}dz\right)\left(\int_{B(0,\varepsilon)}|f(x-z)|\frac{{\bf{1}}_{B(0,\varepsilon)}(z)}{\varepsilon^d}dz\right)\\ &\leq C\varepsilon^{-1}\left(|g|*J_{1\varepsilon}(x)\right)\left(|f|*J_{1\varepsilon}(x)\right), \end{aligned} \end{equation} where $J_{1\varepsilon}(z)=\frac{{\bf{1}}_{B(0,\varepsilon)}(z)}{\varepsilon^d}\geq 0$ and $\int_{\mathbb{R}^d} J_{1\varepsilon}(z) dz=|B(0,1)|$, the measure of the unit ball.
It then follows from H\"older's inequality and Minkowski's inequality that \begin{equation}\label{2.7} \begin{aligned} \|g^\varepsilon \nabla f^\varepsilon\|_{L^{\frac{ql}{l+q}}}&\leq C\varepsilon^{-1}\| \left(|g|*J_{1\varepsilon}(x)\right)\left(|f|*J_{1\varepsilon}(x)\right)\|_{L^{\frac{ql}{l+q}}}\\ &\leq C\varepsilon^{-1}\||g|*J_{1\varepsilon}\|_{L^{l}}\||f|*J_{1\varepsilon}\|_{L^q}\\ &\leq C\varepsilon^{-1}\|g\|_{L^{l}}\|f\|_{L^q}. \end{aligned} \end{equation} Similarly, we also have \begin{equation}\label{2.8} \begin{aligned} \Big\|f^\varepsilon \nabla g^\varepsilon\Big\|_{L^{\frac{ql}{l+q}}}\leq C\varepsilon^{-1}\|f\|_{L^q}\|g\|_{L^{l}}. \end{aligned} \end{equation} Substituting \eqref{2.7} and \eqref{2.8} into \eqref{2.5}, one obtains \begin{equation}\label{2.9} \begin{aligned} \Big\|\nabla \left(\frac{f^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{ql}{l+q}}}\leq C\varepsilon^{-1}\|f\|_{L^q}\|g\|_{L^{l}}. \end{aligned} \end{equation} It follows from H\"older's inequality that \begin{equation}\label{2.10} \Big\|\nabla \left(\frac{f^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{pk}{k+p}}(0,T;L^{\frac{ql}{l+q}})}\leq C\varepsilon^{-1}\|f\|_{L^p(0,T;L^q)}\|g\|_{L^k(0,T;L^{l})}. \end{equation} Furthermore, if $1\leq q, l<\infty$, let $\{f_n\}, \{g_n\}\subset C_{0}^\infty (\mathbb{T}^d)$ with $f_n\rightarrow f,\ g_n\rightarrow g$ strongly in $L^q(\mathbb{T}^d)$ and $L^{l}(\mathbb{T}^d)$, respectively.
Thus, by a density argument, we find that \begin{equation}\label{2.11} \begin{aligned} &\varepsilon\Big\|\nabla \left(\frac{f^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{ql}{l+q}}}\leq C\varepsilon\left(\Big\|\nabla \left(\frac{(f-f_n)^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{ql}{l+q}}}+\Big\|\nabla \left(\frac{f_n^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{ql}{l+q}}}\right)\\ \leq C&\varepsilon\left(\varepsilon^{-1}\|f-f_n\|_{L^q}\|g\|_{L^{l}}+\|g^\varepsilon \nabla f_n^\varepsilon\|_{L^{\frac{ql}{l+q}}}+\|f_n^\varepsilon \nabla (g-g_n)^\varepsilon\|_{L^{\frac{ql}{l+q}}}+\|f_n^\varepsilon \nabla g_n^\varepsilon\|_{L^{\frac{ql}{l+q}}}\right)\\ \leq C & \left(\|f-f_n\|_{L^q}\|g\|_{L^{l}}+\varepsilon\|g\|_{L^{l}}\|\nabla f_n\|_{L^q}+\|f_n\|_{L^q}\|g-g_n\|_{L^{l}}+\varepsilon\|f_n\|_{L^q}\|\nabla g_n\|_{L^{l}}\right), \end{aligned} \end{equation} which gives \begin{equation}\label{2.12} \begin{aligned} \limsup_{\varepsilon\rightarrow 0}&\varepsilon \Big\|\nabla \left(\frac{f^\varepsilon}{g^\varepsilon}\right)\Big\|_{L^{\frac{ql}{l+q}}}\\ \leq & C\left(\|f-f_n\|_{L^q}\|g\|_{L^{l}}+\|f_n\|_{L^q}\|g-g_n\|_{L^{l}}\right)\rightarrow 0,\ \ \text{as}\ n\rightarrow +\infty. \end{aligned} \end{equation} Then, taking the $L^{\frac{pk}{k+p}}$ norm with respect to $t$ in \eqref{2.12} and using H\"older's inequality, we arrive at \eqref{2.4}. \end{proof} \begin{lemma}\label{lem2.3} Assume that $0<c\leq \varrho (x,t)\in L^{l}(\mathbb{T}^d)$ and $\nabla v\in L^q(\mathbb{T}^d)$ with $l\geq \frac{2q}{q-1},\ 1\leq q\leq \infty$. Then \begin{equation}\label{b1} \Big\|\partial\left(\frac{(\varrho v)^\varepsilon}{\varrho ^\varepsilon}\right)\Big\|_{L^{\frac{ql}{2q+l}}(\mathbb{T}^d)}\leq C\|\nabla v\|_{L^q(\mathbb{T}^d)}\left(\|\varrho\|_{L^{\frac{l}{2}}(\mathbb{T}^d)}+\|\varrho\|_{L^{l}(\mathbb{T}^d)}^2\right).
\end{equation} \end{lemma} \begin{proof} By direct computation, one has \begin{equation}\label{b3} \partial\left(\frac{(\varrho v)^\varepsilon}{\varrho ^\varepsilon}\right)=\frac{\partial (\varrho v)^\varepsilon-v\partial \varrho ^\varepsilon}{\varrho^\varepsilon}-\frac{\left((\varrho v)^\varepsilon-\varrho ^\varepsilon v\right)\partial \varrho ^\varepsilon}{(\varrho ^\varepsilon)^2}:=I_1+I_2. \end{equation} Let $B(x,\varepsilon)=\{y\in \mathbb{T}^d;\ |x-y|<\varepsilon\}$; then, using H\"older's inequality, we have \begin{equation}\label{b4} \begin{aligned} |I_1|&\leq C\Big|\int \varrho(y)\left(v(y)-v(x)\right)\nabla_x\eta_\varepsilon(x-y)dy\Big|\\ &\leq C\Big|\int_{\mathbb{R}^d}\varrho(y)\frac{v(y)-v(x)}{\varepsilon}\frac{1}{\varepsilon^d}\nabla \eta (\frac{x-y}{\varepsilon})dy\Big|\\ &\leq C\left(\frac{1}{\varepsilon^d}\int _{B(x,\varepsilon)}\frac{|v(y)-v(x)|^{s_1}}{\varepsilon^{s_1}}dy\right)^{\frac{1}{s_1}}\left(\frac{1}{\varepsilon^d}\int_{B(x,\varepsilon)}|\varrho(y)|^{s_2}dy\right)^{\frac{1}{s_2}},\\ \end{aligned} \end{equation} where $s_1\leq q,\ 2s_2\leq l$ and $\frac{1}{s_1}+\frac{1}{s_2}=1$.
Using the mean value theorem, one has \begin{equation}\label{b5} \begin{aligned} \frac{1}{\varepsilon^d}\int_{B(x,\varepsilon)}\frac{|v(y)-v(x)|^{s_1}}{\varepsilon^{s_1}}dy&\leq C\frac{1}{\varepsilon^d}\int_{B(x,\varepsilon)}\int_0^1 |\nabla v(x+(y-x)s)|^{s_1}\frac{|y-x|^{s_1}}{\varepsilon^{s_1}}dsdy\\ &\leq C \int_0^1\int_{B(0,1)}|\nabla v(x+s\varepsilon \omega)|^{s_1}d\omega ds\\ &\leq C\int _{\mathbb{R}^d}|\nabla v(x-z)|^{s_1}\int_0^1\frac{{\bf{1}}_{B(0,\varepsilon s)}(z)}{(\varepsilon s)^d} ds dz\\ &=C\,|\nabla v|^{s_1} *J_\varepsilon (x), \end{aligned} \end{equation} where $J_\varepsilon(z)=\int_0^1 \frac{{\bf{1}}_{B (0,\varepsilon s)}(z)}{(\varepsilon s)^d} ds \geq 0$, and it is easy to check that $\int_{\mathbb{R}^d} J_\varepsilon(z) dz=|B(0,1)|$. Similarly, we also have \begin{equation}\label{b5-1} \begin{aligned} \frac{1}{\varepsilon^d}\int_{B(x,\varepsilon)}|\varrho(y)|^{s_2}dy&\leq C\frac{1}{\varepsilon^d}\int_{B(0,\varepsilon)}|\varrho(x-z)|^{s_2} dz\\ &\leq C\int_{\mathbb{R}^d}|\varrho(x-z)|^{s_2}\frac{{\bf{1}}_{B (0,\varepsilon )}(z)}{\varepsilon^d}dz\\ &\leq C|\varrho|^{s_2}*J_{1\varepsilon}(x), \end{aligned} \end{equation} where $J_{1\varepsilon}(z)=\frac{{\bf{1}}_{B(0,\varepsilon )}(z)}{\varepsilon^d}\geq 0$ and $\int_{\mathbb{R}^d} J_{1\varepsilon}(z)dz =|B(0,1)|$.
Next, to estimate $I_2$, by H\"older's inequality one deduces \begin{equation}\label{b6} \begin{aligned} |I_2|&=\Big|\int \varrho(y)\left(v(y)-v(x)\right)\eta_\varepsilon(x-y)dy\, \frac{\int \varrho(y)\nabla_x \eta_\varepsilon(x-y)dy}{\left(\int \varrho(y)\eta_\varepsilon(x-y)dy\right)^2}\Big|\\ &\leq C\int_{B(x,\varepsilon)} \varrho(y) |v(y)-v(x)|\frac{1}{\varepsilon^d}dy\int_{B(x,\varepsilon)} \frac{1}{\varepsilon^d}\varrho (y)|\nabla \eta(\frac{x-y}{\varepsilon})|\frac{1}{\varepsilon} dy\\ &\leq C \left(\frac{1}{\varepsilon^d}\int _{B(x,\varepsilon)}\frac{|v(y)-v(x)|^{s_1}}{\varepsilon^{s_1}}dy\right)^{\frac{1}{s_1}}\left(\frac{1}{\varepsilon^d}\int_{B(x,\varepsilon)}|\varrho(y)|^{s_2}dy \right)^{\frac{2}{s_2}}. \end{aligned} \end{equation} Therefore, by the same arguments as in \eqref{b5} and \eqref{b5-1}, in combination with \eqref{b3}-\eqref{b6}, we have \begin{equation}\label{b7} |I_1|+|I_2|\leq C\left(|\nabla v|^{s_1} *J_\varepsilon (x)\right)^{\frac{1}{s_1}}\left(\left(|\varrho|^{s_2}*J_{1\varepsilon}\right)^{\frac{1}{s_2}}+\left(|\varrho|^{s_2}*J_{1\varepsilon}\right)^{\frac{2}{s_2}}\right).
\end{equation} Then, from Young's inequality, we arrive at \begin{equation} \begin{aligned} &\Big\|\partial \frac{(\varrho v)^{\varepsilon}}{\varrho ^\varepsilon}\Big\|_{{L^\frac{ql}{2q+l}}(\mathbb{T}^d)}\\ \leq& C\|\left(|\nabla v|^{s_1}*J_\varepsilon\right)^{\frac{1}{s_1}}\|_{L^{q}}\left(\|\left(|\varrho|^{s_2}*J_{1\varepsilon}\right)^{\frac{1}{s_2}}\|_{L^{\frac{l}{2}}}+\|\left(|\varrho|^{s_2}*J_{1\varepsilon}\right)^{\frac{2}{s_2}}\|_{L^{\frac{l}{2}}}\right)\\ \leq& C\|\nabla v\|_{L^q}\|J_\varepsilon\|_{L^1}^{\frac{1}{s_1}}\left(\|\varrho\|_{L^{\frac{l}{2}}}\|J_{1\varepsilon}\|_{L^1}^{\frac{1}{s_2}}+\|\varrho \|_{L^{l}}^{2}\|J_{1\varepsilon}\|_{L^1}^{\frac{2}{s_2}}\right)\\ \leq & C\|\nabla v\|_{L^q}\left(\|\varrho\|_{L^{\frac{l}{2}}}+\|\varrho\|_{L^{l}}^2\right). \end{aligned} \end{equation} This completes the proof of Lemma \ref{lem2.3}. \end{proof} On the other hand, let $\eta$ now be a non-negative smooth function supported in the space-time ball of radius 1 with integral equal to 1; we define the rescaled space-time mollifier $\eta_{\varepsilon}(t,x)=\frac{1}{\varepsilon^{d+1}}\eta(\frac{t}{\varepsilon},\frac{x}{\varepsilon})$ and $$ f^{\varepsilon}(t,x)=\int_{0}^{T}\int_{\mathbb{T}^d}f(\tau,y)\eta_{\varepsilon}(t-\tau,x-y)dyd\tau. $$ We list two lemmas for the proof of Theorem \ref{the1.5}. The first one is the Lions type commutator estimate for the space-time mollifying kernel. The second one is the generalized Aubin-Lions lemma, which helps us extend the energy equality up to the initial time. \begin{lemma} (\cite{[YWW],[LV],[CY]}) \label{pLions} Let $1\leq p,q,p_1,q_1,p_2,q_2\leq \infty$, with $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$ and $\frac{1}{q}=\frac{1}{q_1}+\frac{1}{q_2}$.
Let $\partial$ be a partial derivative in space or time; in addition, let $\partial_t f,\ \nabla f \in L^{p_1}(0,T;L^{q_1}(\Omega))$, $g\in L^{p_2}(0,T;L^{q_2}(\Omega))$. Then, there holds $$\|{\partial(fg)^\varepsilon}-\partial(f\,{g}^\varepsilon)\|_{L^p(0,T;L^q(\Omega))}\leq C\left(\|\partial_{t} f\|_{L^{p_{1}}(0,T;L^{q_{1}}(\Omega))}+\|\nabla f\|_{L^{p_{1}}(0,T;L^{q_{1}}(\Omega))}\right)\|g\|_{L^{p_{2}}(0,T;L^{q_{2}}(\Omega))}, $$ for some constant $C>0$ independent of $\varepsilon$, $f$ and $g$. Moreover, $${\partial{(fg)^\varepsilon}}-\partial{(f\,{g^\varepsilon})}\to 0\quad\text{ in } {L^{p}(0,T;L^{q}(\Omega))},$$ as $\varepsilon\to 0$, if $p_2,q_2<\infty.$ \end{lemma} \begin{lemma}[\cite{[Simon]}]\label{AL} Let $X\hookrightarrow B\hookrightarrow Y$ be three Banach spaces with compact imbedding $X \hookrightarrow\hookrightarrow Y$. Further, let there exist $0<\theta <1$ and $M>0$ such that \begin{equation}\label{le1} \|v\|_{B}\leq M\|v\|_{X}^{1-\theta}\|v\|_{Y}^\theta\ \ \text{for all}\ v\in X\cap Y.\end{equation} Denote, for $T>0$, \begin{equation}\label{le2} W(0,T):=W^{s_0,r_0}((0,T), X)\cap W^{s_1,r_1}((0,T),Y) \end{equation} with \begin{equation}\label{le3} \begin{aligned} &s_0,s_1 \in \mathbb{R}; \ r_0, r_1\in [1,\infty],\\ s_\theta :=(1-\theta)s_0&+\theta s_1,\ \frac{1}{r_\theta}:=\frac{1-\theta}{r_0}+\frac{\theta}{r_1},\ s^{*}:=s_\theta -\frac{1}{r_\theta}. \end{aligned} \end{equation} Assume that $s_\theta>0$ and $F$ is a bounded set in $W(0,T)$. Then: if $s^{*}\leq 0$, then $F$ is relatively compact in $L^p((0,T),B)$ for all $1\leq p< p^{*}:=-\frac{1}{s^{*}}$; if $s^{*}> 0$, then $F$ is relatively compact in $C((0,T),B)$.
\end{lemma} \section{Energy equality in compressible Navier-Stokes equations without vacuum} When we consider the compressible Navier-Stokes equations with a density free of vacuum, in view of the momentum equation $\eqref{INS}_2$, only a spatially regularized velocity can be used as a test function to generate the global energy equality. However, to avoid a commutator estimate involving the time $t$, we choose $\frac{(\varrho v)^\varepsilon}{\varrho ^\varepsilon}$ instead of $v^\varepsilon$ as the test function, as introduced in \cite{[NNT],[LS]}. Hence, in this section, we let $\eta$ be a non-negative smooth function supported in the spatial ball of radius 1 with integral equal to 1. We define the rescaled spatial mollifier $\eta_{\varepsilon}(x)=\frac{1}{\varepsilon^{d}}\eta(\frac{x}{\varepsilon})$ and $$ f^{\varepsilon}(x)=\int_{\Omega}f(y)\eta_{\varepsilon}(x-y)dy. $$ \begin{proof}[Proof of Theorem \ref{the1.2}] Multiplying $\eqref{INS}_2$ by $\left(\frac{(\varrho v)^\varepsilon}{\varrho^\varepsilon}\right)^\varepsilon$ and integrating over $(s,t]\times \mathbb{T}^d$ with $0<s<t<T$, we have \begin{equation}\label{c1} \begin{aligned} \int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}}\Big[\partial_{\tau}(\varrho v)^{\varepsilon}+ \mathrm{div}\,(\varrho v\otimes v)^{\varepsilon}+\nabla P(\varrho)^\varepsilon-\mu \Delta v^\varepsilon-(\mu+\lambda) \nabla(\mathrm{div}\, v)^\varepsilon\Big]=0. \end{aligned}\end{equation} We will rewrite every term of the last equality in order to pass to the limit in $\varepsilon$.
For the first term, a straightforward calculation together with $\eqref{INS}_{1}$ yields that \begin{equation}\label{c2} \begin{aligned} \int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \partial_{\tau}\Big(\varrho v\Big)^{\varepsilon}&= \int_s^t\int\frac12\partial_{\tau}\Big(\frac{|(\varrho v)^{\varepsilon}|^{2}}{\varrho^{\varepsilon}}\Big)+\frac12\partial_{\tau}\varrho^{\varepsilon} \frac{|(\varrho v)^{\varepsilon}|^{2}}{(\varrho^{\varepsilon})^{2}}\\ &= \int_s^t\int\frac12\partial_{\tau}\Big(\frac{|(\varrho v)^{\varepsilon}|^{2}}{\varrho^{\varepsilon}}\Big)- \frac12\mathrm{div}\,(\varrho v)^{\varepsilon} \frac{|(\varrho v)^{\varepsilon}|^{2}}{(\varrho^{\varepsilon})^{2}}.\\ \end{aligned}\end{equation} Integration by parts gives \begin{equation}\label{c3}\begin{aligned} &\int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \mathrm{div}\,(\varrho v\otimes v)^{\varepsilon}\\ =&-\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}]-\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big)(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}.
\end{aligned}\end{equation} Making use of integration by parts once again, we infer that \begin{equation}\label{c4}\begin{aligned} &- \int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big)(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}\\=& \int_s^t\int \mathrm{div}\, v^{\varepsilon} \frac{|(\varrho v)^\varepsilon|^{2}}{\varrho^{\varepsilon}}+\frac12 \frac{ v^{\varepsilon}}{\varrho^{\varepsilon}}\nabla|(\varrho v)^{\varepsilon} |^{2}\\ =& \int_s^t\int \frac12 \mathrm{div}\, v^{\varepsilon} \frac{|(\varrho v)^\varepsilon|^{2}}{\varrho^{\varepsilon}}-\frac12 v^{\varepsilon}\nabla\Big({\frac{1}{\varrho^{\varepsilon}}} \Big)|(\varrho v)^{\varepsilon} |^{2}\\ =& \frac12\int_s^t\int \mathrm{div}\,( \varrho^{\varepsilon}v^{\varepsilon} ) \frac{|(\varrho v)^\varepsilon|^{2}}{(\varrho^{\varepsilon})^{2}} \\ =& \frac12\int_s^t\int \mathrm{div}\,\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{|(\varrho v)^\varepsilon|^{2}}{(\varrho^{\varepsilon})^{2}} + \frac12\int_s^t\int \mathrm{div}\, (\varrho v)^{\varepsilon} \frac{|( \varrho v)^\varepsilon|^{2}}{(\varrho^{\varepsilon})^{2}}\\ =&- \int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}} + \frac12\int_s^t\int \mathrm{div}\, (\varrho v)^{\varepsilon} \frac{|(\varrho v)^\varepsilon|^{2}}{(\varrho^{\varepsilon})^{2}}.
\end{aligned}\end{equation} Inserting \eqref{c4} into \eqref{c3}, we arrive at \begin{equation}\label{c5}\begin{aligned} &\int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \mathrm{div}\,(\varrho v\otimes v)^{\varepsilon}\\=&-\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}]\\&-\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla \frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}+ \frac12\int_s^t\int \mathrm{div}\, (\varrho v)^{\varepsilon} \frac{|(\varrho v)^\varepsilon|^{2}}{(\varrho^{\varepsilon})^{2}}. \end{aligned}\end{equation} For the pressure term, by integration by parts, one has \begin{equation}\label{c6} \begin{aligned} \int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \nabla(P(\varrho))^{\varepsilon}= &\int_s^t\int\frac{(\varrho v)^\varepsilon}{\varrho^\varepsilon}\nabla \left[(P(\varrho))^\varepsilon-P(\varrho^\varepsilon)\right]+\int_s^t\int \frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \nabla P(\varrho^{\varepsilon})\\ =&-\int_s^t\int\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big] [(P(\varrho))^{\varepsilon}- P(\varrho^{\varepsilon}) ]+\int_s^t\int \frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \nabla P(\varrho^{\varepsilon}).
\end{aligned} \end{equation} Using the mass equation $\eqref{INS}_1$, the second term on the right-hand side of \eqref{c6} can be rewritten as \begin{equation}\label{c7} \begin{aligned} \int_s^t\int \frac{(\varrho v)^\varepsilon}{\varrho ^\varepsilon}\nabla P(\varrho^\varepsilon)&=\int_s^t\int (\varrho v)^\varepsilon \gamma (\varrho^\varepsilon)^{\gamma-2}\nabla \varrho^\varepsilon=\int_s^t\int (\varrho v)^\varepsilon\frac{\gamma}{\gamma -1}\nabla (\varrho^\varepsilon)^{\gamma-1}\\ &=\int_s^t\int \partial_\tau\varrho^\varepsilon \frac{\gamma}{\gamma-1}(\varrho^\varepsilon)^{\gamma-1}dxd\tau=\int_s^t\int \frac{1}{\gamma-1}\partial_\tau P(\varrho^\varepsilon). \end{aligned} \end{equation} It is clear that \begin{equation}\label{c8}\begin{aligned} &-\mu\int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Delta v^{\varepsilon}= \mu\int_s^t\int-\Delta v ^\varepsilon v^\varepsilon - \Delta v^\varepsilon \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}, \\ &-(\mu+\lambda)\int_s^t\int\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}}\nabla(\mathrm{div}\, v)^\varepsilon=(\mu+\lambda) \int_s^t\int -\nabla (\mathrm{div}\, v)^\varepsilon v^{\varepsilon}-\nabla(\mathrm{div}\, v)^{\varepsilon} \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}.
\end{aligned} \end{equation} Substituting \eqref{c2}, \eqref{c5}-\eqref{c8} into \eqref{c1}, we see that \begin{equation}\label{c9} \begin{aligned} &\int_s^t\int\partial_{\tau}\left(\frac12\frac{|(\varrho v)^{\varepsilon}|^{2}}{\varrho^{\varepsilon}}+\frac{1}{\gamma-1}P(\varrho^\varepsilon)\right) +\mu \int_s^t\int |\nabla v^{\varepsilon} |^2 +(\mu+\lambda)\int_s^t\int | \mathrm{div}\, v^{\varepsilon} |^2\\ = &\int_s^t\int \mu \Delta v^{\varepsilon} \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}+(\mu+\lambda)\int_s^t\int\nabla(\mathrm{div}\, v)^{\varepsilon} \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}} \\&\ \ \ +\int_s^t\int\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big] [(P(\varrho))^{\varepsilon}- P(\varrho^{\varepsilon}) ]\\ &+\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}] +\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}. \end{aligned}\end{equation} Next, we need to prove that the terms on the right-hand side of \eqref{c9} tend to zero as $\varepsilon\rightarrow 0$. Under the hypothesis \begin{equation}\label{unified} \begin{aligned} &0< c\leq\varrho \in L^{ k}(0,T;L^{ l}(\mathbb{T}^d)), \\ & v\in L^{p }(0,T;L^{q} (\mathbb{T}^{d})),\quad \nabla v\in L^{\frac{kp}{k(p-2)-4p}}(0,T; L^{\frac{lq}{l(q-2)-4q}}(\mathbb{T}^{d})), \end{aligned} \end{equation} with $k> \max\{\frac{4p}{p-2},\frac{(\gamma-2)p }{2}\}$, $l> \max\{\frac{4q}{q-2}, \frac{(\gamma-2)q}{2}\}$ and $k(4-p)+7p\geq 0,\ l(4-q)+7q\geq 0$.
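The exponent constraints above are pure arithmetic in $(p,q,k,l,\gamma)$ and can be checked mechanically. The following sketch is only an illustration (the helper name and the sample values are ours, not from the paper); it evaluates the conditions accompanying \eqref{unified} with exact rational arithmetic.

```python
from fractions import Fraction as F

def satisfies_hypotheses(p, q, k, l, gamma):
    """Check the exponent conditions accompanying the hypothesis:
    k > max{4p/(p-2), (gamma-2)p/2},  l > max{4q/(q-2), (gamma-2)q/2},
    k(4-p) + 7p >= 0  and  l(4-q) + 7q >= 0.  Assumes p, q > 2."""
    p, q, k, l, gamma = map(F, (p, q, k, l, gamma))
    return (k > max(4 * p / (p - 2), (gamma - 2) * p / 2)
            and l > max(4 * q / (q - 2), (gamma - 2) * q / 2)
            and k * (4 - p) + 7 * p >= 0
            and l * (4 - q) + 7 * q >= 0)

# Sample: p = q = 4 gives 4p/(p-2) = 8, so k = l = 10 works for gamma = 5/3,
# while k = l = 7 violates k > 8.
print(satisfies_hypotheses(4, 4, 10, 10, F(5, 3)))  # True
print(satisfies_hypotheses(4, 4, 7, 7, F(5, 3)))    # False
```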
It follows from Lemma \ref{lem2.1} that \begin{equation}\label{ec10}\begin{aligned} \|\Delta v^\varepsilon\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \leq& C\|\mathrm{div}\, (\nabla v)^\varepsilon\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}\\ \leq& C\varepsilon^{-1}\| \nabla v \|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}, \end{aligned}\end{equation} and \begin{equation}\begin{aligned} \limsup_{\varepsilon\rightarrow0}\varepsilon\|\Delta v^\varepsilon\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}=0. \end{aligned}\end{equation} Using the Constantin-E-Titi type commutator estimate for the mollifying kernel in Lemma \ref{lem2.2}, we know that \begin{equation}\label{ec11}\begin{aligned} \|(\varrho v)^\varepsilon-\varrho^\varepsilon v^\varepsilon\|_{L^{\frac{kp}{2k+4p}}(L^{\frac{lq}{2l+4q}})}\leq C\varepsilon \|\varrho \|_{L^{\frac{kp}{k(4-p)+8p}}(L^{\frac{lq}{l(4-q)+8q}} )} \|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}.
\end{aligned}\end{equation} Combining H\"older's inequality with \eqref{ec10}-\eqref{ec11}, we arrive at \begin{equation}\label{c11}\begin{aligned} &\Big|\int_s^t\int \mu \Delta v^\varepsilon \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}\Big|\\ \leq& C\|\Delta v^\varepsilon\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}\Big\| \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}\Big\|_{L^{\frac{kp}{2k+4p}}(L^{\frac{lq}{2l+4q}})}\\ \leq& C\varepsilon\|\Delta v^\varepsilon\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}\|\varrho \|_{L^{\frac{kp}{k(4-p)+8p}}(L^{\frac{lq}{l(4-q)+8q}}) } \|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})}, \end{aligned}\end{equation} where we need $ k >\frac{4p}{p-2},\ k(4-p)+7p\geq 0$ and $l >\frac{4q}{q-2},\ l(4-q)+7q\geq 0$. As a consequence, we get $$ \limsup_{\varepsilon\rightarrow0}\Big|\mu\int_s^t\int\Delta v^\varepsilon \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}\Big|=0. $$ Likewise, there holds \begin{equation}\label{c12} \begin{aligned} \limsup_{\varepsilon\rightarrow0}\Big|(\mu+\lambda )\int_s^t\int\nabla(\mathrm{div}\, v)^{\varepsilon} \frac{(\varrho v)^{\varepsilon}-\varrho^{\varepsilon} v^{\varepsilon}}{\varrho^{\varepsilon}}\Big|=0.
\end{aligned}\end{equation} Applying Lemma \ref{lem2.3}, we get \begin{equation}\label{ec3.16} \Big\|\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big]\Big\|_{L^{\frac{kp}{k(p-2)-2p}}(L^{\frac{lq}{l(q-2)-2q}})}\leq C\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \left(\|\varrho\|_{L^{\frac{k}{2}}(L^{\frac{l}{2}})}+\|\varrho \|_{L^k(L^{l})}^2\right). \end{equation} For some $\theta\in [0,1]$, the mean value theorem, H\"older's inequality and the triangle inequality ensure that \begin{equation}\begin{aligned}\label{ec3.17} \| \varrho^{\gamma}-( \varrho^{\varepsilon} )^{\gamma} \|_{L^\frac{ql}{2(l+q) }} &\leq C\|[\varrho+\theta(\varrho-\varrho^{\varepsilon}) ]^{\gamma-1} (\varrho -\varrho^{\varepsilon}) \|_{L^\frac{ql}{2(l+q) }}\\ &\leq C \| \varrho \|_{L^\frac{\gamma ql}{2(l+q) }}^{\gamma-1} \| \varrho -\varrho^{\varepsilon} \|_{L^\frac{\gamma ql}{2(l+q) }}. \end{aligned}\end{equation} Moreover, by Lemma \ref{lem2.11}, if $\varrho\in L^\frac{\gamma ql}{2(l+q) }$, then, as $\varepsilon\rightarrow 0$, \begin{equation}\begin{aligned}\label{ec3.18}\| \varrho^{\gamma}-( \varrho^{\gamma} )^{\varepsilon} \|_{L^\frac{ql}{2(l+q) }}\rightarrow 0.\end{aligned}\end{equation} With the help of the triangle inequality, H\"older's inequality and \eqref{ec3.16}-\eqref{ec3.18}, we obtain \begin{equation}\label{c13}\begin{aligned} &\int_s^t\int\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big] [( \varrho^{\gamma} )^{\varepsilon}- (\varrho^{\varepsilon})^{\gamma} ]\\ \leq& \int_s^t\int\Big|\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big]\Big| |( \varrho^{\gamma} )^{\varepsilon}- \varrho^{\gamma} |+ \int_s^t\int\Big|\mathrm{div}\,\Big[\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big]\Big| | \varrho^{\gamma} - (\varrho^{\varepsilon})^{\gamma}|\\ \leq& C \|\mathrm{div}\,\Big[\frac{(\varrho
v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big]\|_{L^{\frac{kp}{k(p-2)-2p}}(L^{\frac{lq}{l(q-2)-2q}})}\Big(\|( \varrho^{\gamma} )^{\varepsilon}- \varrho^{\gamma}\|_{L^\frac{pk}{2(k+p) }(L^\frac{ql}{2(l+q) })}+\| \varrho^{\gamma}-( \varrho^{\gamma} )^{\varepsilon} \|_{L^\frac{pk}{2(k+p) }(L^\frac{ql}{2(q+l) })}\Big) \\ \leq& C\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \left(\|\varrho\|_{L^{\frac{k}{2}}(L^{\frac{l}{2}})}+\|\varrho \|_{L^k(L^{l})}^2\right)\\ &\ \ \ \ \ \ \ \Big(\|( \varrho^{\gamma} )^{\varepsilon}- \varrho^{\gamma}\|_{L^\frac{pk}{2(k+p) }(L^\frac{ql}{2(l+q) })}+\| \varrho^{\gamma}-( \varrho^{\gamma} )^{\varepsilon} \|_{L^\frac{pk}{2(k+p) }(L^\frac{ql}{2(q+l) })}\Big), \end{aligned}\end{equation} which means that $$ \limsup_{\varepsilon\rightarrow0}\int_s^t\int\mathrm{div}\,\left(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \right) \left((P(\varrho))^{\varepsilon}- P(\varrho^{\varepsilon}) \right)=0.$$ At this stage, it is enough to show \begin{equation}\label{c14}\begin{aligned} \limsup_{\varepsilon\rightarrow0}\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}] +\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}} =0. \end{aligned}\end{equation} In view of \eqref{2.3} and Lemma \ref{lem2.1}, \begin{equation}\label{c3.22} \Big\|\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big)\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}\leq C\varepsilon^{-1}\| v\|_{L^{p}(L^{q})}\|\varrho \|_{L^k (L^{l})}^2, \end{equation} and \begin{equation}\label{c3.23} \limsup_{\varepsilon\rightarrow0}\varepsilon\Big\|\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}}
\Big)\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}=0. \end{equation} Taking advantage of \eqref{fg'} in Lemma \ref{lem2.2}, \begin{equation}\label{c3.24} \|(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon} \|_{L^{\frac{kp}{k(p-1)-3p}}(L^{\frac{lq}{l(q-1)-3q}})} \leq C\varepsilon\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \| v\|_{L^{p}(L^{q})}\|\varrho\|_{L^{k}(L^{l})}. \end{equation} Thanks to H\"older's inequality and \eqref{c3.22}-\eqref{c3.24}, we find \begin{equation}\label{c18}\begin{aligned} &\Big|\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}]\Big|\\ \leq & C\Big\|\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big)\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}\|(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon} \|_{L^{\frac{kp}{k(p-1)-3p}}(L^{\frac{lq}{l(q-1)-3q}})}\|1\|_{L^{k}(L^{l})}\\ \leq & C\varepsilon\Big\|\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big)\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \| v\|_{L^{p}(L^{q})}\|\varrho\|_{L^{k}(L^{l})}, \end{aligned}\end{equation} which in turn implies $$ \limsup_{\varepsilon\rightarrow0}\Big|\int_s^t\int\nabla\Big(\frac{(\varrho v)^{\varepsilon}}{\varrho^{\varepsilon}} \Big) [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)^{\varepsilon}\otimes v^{\varepsilon}]\Big|=0.$$ We turn our attention to the term $\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}$.
We conclude from Lemma \ref{lem2.2} that \begin{equation}\label{1.3.16} \|\varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon}\|_{L^{\frac{kp}{k(p-2)-3p}}(L^{\frac{lq}{l(q-2)-3q}})} \leq C \varepsilon\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \|\varrho\|_{L^{k} (L^{l} )}. \end{equation} Using H\"older's inequality and \eqref{1.3.16}, we find \begin{equation}\label{1.3.17} \begin{aligned} &\Big|\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla \frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}\Big| \\ \leq&C \|\varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon}\|_{L^{\frac{kp}{k(p-2)-3p}}(L^{\frac{lq}{l(q-2)-3q}})}\Big\|\frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\Big\|_{L^{\frac{kp}{k+p}} (L^{\frac{lq}{l+q}}) } \Big\|\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}\\ \leq& C \varepsilon\|\nabla v\|_{L^{\frac{kp}{k(p-2)-4p}}(L^{\frac{lq}{l(q-2)-4q}})} \|\varrho\|_{L^{k}( L^{l}) }^2 \|v\|_{L^{p}(L^{q})} \Big\|\nabla\frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}\Big\|_{L^{\frac{kp}{k+2p}}(L^{\frac{lq}{l+2q}})}. \end{aligned}\end{equation} Combining this with \eqref{c3.22} and \eqref{c3.23} yields $$ \limsup_{\varepsilon\rightarrow0} \Big|\int_s^t\int\Big[ \varrho^{\varepsilon}v^{\varepsilon}-(\varrho v)^{\varepsilon} \Big] \frac{(\varrho v)^\varepsilon} {\varrho^{\varepsilon}}\nabla \frac{(\varrho v)^\varepsilon}{\varrho^{\varepsilon}}\Big|=0. $$ Collecting all the above estimates and using the weak continuity of $\varrho$ and $\varrho v$, we complete the proof of Theorem \ref{the1.2}.
\end{proof} \section{Energy equality in compressible Navier-Stokes equations allowing vacuum} Unlike the non-vacuum case, when we consider the compressible Navier-Stokes equations with the density containing vacuum, the velocity regularized only in space, $v^\varepsilon$, fails to possess enough temporal regularity to qualify as a test function. To overcome this difficulty, we need to mollify the velocity both in space and in time. Hence, in this section, we let $\eta$ be a non-negative smooth function supported in the space-time ball of radius $1$ whose integral equals $1$. We define the rescaled space-time mollifier $\eta_{\varepsilon}(t,x)=\frac{1}{\varepsilon^{d+1}}\eta(\frac{t}{\varepsilon},\frac{x}{\varepsilon})$ and $$ f^{\varepsilon}(t,x)=\int_0^T\int_{\mathbb{T}^d}f(\tau,y)\eta_{\varepsilon}(t-\tau,x-y)dyd\tau. $$ \begin{proof}[Proof of Theorem \ref{the1.5}] Let $\phi(t)$ be a smooth function compactly supported in $(0,+\infty)$. Multiplying $\eqref{INS}_2$ by $(\phi v^{\varepsilon})^\varepsilon$ and integrating over $(0,T)\times \mathbb{T}^d$, we infer that \begin{equation}\label{d1} \begin{aligned} \int_0^T\int \phi(t)v^{\varepsilon}\Big[\partial_{t}(\varrho v)^{\varepsilon}+ \mathrm{div}\,(\varrho v\otimes v)^{\varepsilon}+\nabla (P(\varrho))^\varepsilon- \mu \Delta v^\varepsilon-(\mu+\lambda)\nabla( \mathrm{div}\, v)^\varepsilon\Big]=0. \end{aligned}\end{equation} To pass to the limit in $\varepsilon$, we reformulate every term of the last equation.
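The arguments in this paper repeatedly use two properties of mollification: convergence $f^\varepsilon\to f$ in the relevant Lebesgue norms, and the smallness of Constantin-E-Titi type commutators such as $(\varrho v)^\varepsilon-\varrho^\varepsilon v^\varepsilon$. The following numerical sketch is only an illustration of the second mechanism under simplifying assumptions of ours (one space dimension, a periodic grid, and a top-hat kernel standing in for the smooth $\eta$); it checks that a discrete $L^1$ norm of the commutator $(fg)^\varepsilon-f^\varepsilon g^\varepsilon$ shrinks as $\varepsilon\to0$.

```python
import math

def mollify(u, eps, dx):
    """Periodic mollification with a normalized top-hat kernel of half-width
    eps (a stand-in for the smooth kernel eta_eps; illustrative only)."""
    n = len(u)
    r = max(int(round(eps / dx)), 1)
    w = 1.0 / (2 * r + 1)
    return [w * sum(u[(i + j) % n] for j in range(-r, r + 1)) for i in range(n)]

def commutator_l1(f, g, eps, dx):
    """Discrete L^1 norm of the commutator (fg)^eps - f^eps g^eps."""
    fg = mollify([a * b for a, b in zip(f, g)], eps, dx)
    fe, ge = mollify(f, eps, dx), mollify(g, eps, dx)
    return sum(abs(c - a * b) for c, a, b in zip(fg, fe, ge)) * dx

n = 512
dx = 1.0 / n
xs = [i * dx for i in range(n)]
f = [1.5 + math.sin(2 * math.pi * x) for x in xs]  # a smooth "density"
g = [math.cos(2 * math.pi * x) for x in xs]        # a smooth "velocity"

norms = [commutator_l1(f, g, eps, dx) for eps in (0.1, 0.05, 0.025)]
print(norms)  # strictly decreasing: the commutator vanishes as eps -> 0
```

For smooth periodic data the computed norms decay roughly like $\varepsilon^2$, consistent with the $O(\varepsilon)$ bounds of Lemma \ref{lem2.2}, which only require Sobolev regularity of one factor.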
A straightforward computation leads to \begin{equation}\label{d2} \begin{aligned} \int_0^T\int \phi(t)v^{\varepsilon} \partial_{t} (\varrho v )^{\varepsilon}=&\int_0^T\int \phi(t)v^{\varepsilon}\Big[ \partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]+ \int_0^T\int \phi(t)v^{\varepsilon} \partial_{t}(\varrho v^{\varepsilon}) \\ =& \int_0^T\int \phi(t)v^{\varepsilon} \Big[\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]+\int_0^T\int \phi(t)\varrho\partial_t{\frac{|v^{\varepsilon}|^2}{2}} \\ &+\int_0^T\int \phi(t)\varrho_t|v^{\varepsilon}|^2. \end{aligned}\end{equation} It follows from integration by parts and the mass equation $\eqref{INS}_1$ that \begin{align} &\int_0^T\int\phi(t)v^{\varepsilon} \mathrm{div}\,(\varrho v\otimes v)^{\varepsilon}\nonumber\\ =& \int_0^T\int\phi(t) v^{\varepsilon} \mathrm{div}\,[(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]+\int_0^T\int\phi(t)v^{\varepsilon}\mathrm{div}\,(\varrho v\otimes v^{\varepsilon})\nonumber\\ =& -\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]+ \int_0^T\int \phi(t)\left(\mathrm{div}\, (\varrho v ) |v^{\varepsilon}|^{2}+\frac12 \varrho v \nabla|v^{\varepsilon} |^{2} \right)\nonumber\\ =& -\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]+\frac{1}{2}\int_0^T\int\phi(t) \mathrm{div}\, (\varrho v ) |v^{\varepsilon}|^{2}\nonumber\\ =& -\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]-\frac{1}{2}\int_0^T\int \phi(t) \partial_t \varrho |v^{\varepsilon}|^{2}.
\label{d3} \end{align} We rewrite the pressure term as \begin{equation}\label{d6} \begin{aligned} &\int_0^T\int\phi(t)v^{\varepsilon}\nabla (\varrho^\gamma)^{\varepsilon}= \int_0^T\int\phi(t) [v^{\varepsilon}\nabla(\varrho^\gamma)^{\varepsilon}-v\nabla (\varrho^\gamma)]+\int_0^T\int\phi(t)v \nabla (\varrho^\gamma). \end{aligned} \end{equation} Then, using integration by parts and the mass equation $\eqref{INS}_1$ again, we find $$\begin{aligned} \int_0^T\int\phi(t)v \cdot\nabla (\varrho^\gamma) =&-\int_0^T\int\phi(t) \varrho^{\gamma-1} \varrho\,\mathrm{div}\, v\\ =&\int_0^T\int\phi(t) \varrho^{\gamma-1}(\partial_{t}\varrho+v\cdot\nabla\varrho) \\ =&\frac{1}{\gamma}\int_0^T\int\phi(t) \partial_{t}\varrho^{\gamma } +\frac{1}{\gamma}\int_0^T\int\phi(t)v\cdot\nabla\varrho^{\gamma }, \end{aligned} $$ which in turn means that \begin{equation}\label{d62}\begin{aligned} \int_0^T\int\phi(t)v \cdot\nabla (\varrho^\gamma) = \frac{1}{\gamma-1}\int_0^T\int\phi(t) \partial_{t}\varrho^{\gamma }. \end{aligned}\end{equation} Thanks to integration by parts, we arrive at \begin{equation}\label{d8}\begin{aligned} &-\mu\int_0^T\int \phi(t)v^{\varepsilon} \Delta v^{\varepsilon}=\mu \int_0^T\int \phi(t)|\nabla v^{\varepsilon}|^{2},\\ &- (\mu+\lambda)\int_0^T\int \phi(t)v^{\varepsilon} \nabla\mathrm{div}\, v^\varepsilon=(\mu+\lambda)\int_0^T\int \phi(t)|\mathrm{div}\,{v^{\varepsilon}}|^{2}.
\end{aligned} \end{equation} Plugging \eqref{d2}-\eqref{d8} into \eqref{d1} and using integration by parts, we conclude that \begin{equation}\label{d9} \begin{aligned} &-\int_0^T\int \phi_t(t)\left(\varrho{\frac{|v^{\varepsilon}|^2}{2}}+\frac{1}{\gamma-1}\varrho^{\gamma }\right)+\int_0^T\int \phi(t)\left(\mu |\nabla v^\varepsilon|^2+(\mu+\lambda)|\mathrm{div}\, v^\varepsilon|^2\right)\\ =&-\int_0^T\int \phi(t)v^{\varepsilon} \Big[\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]+\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]\\ &-\int_0^T\int\phi(t) [v^{\varepsilon}\nabla(\varrho^\gamma)^{\varepsilon}-v\nabla (\varrho^\gamma)]. \end{aligned}\end{equation} It is enough to prove that the terms on the right-hand side of \eqref{d9} tend to zero as $\varepsilon\rightarrow 0$. Under the hypothesis \begin{equation} \begin{aligned} &0\leq \varrho\in L^{k}(0,T;L^{l}(\mathbb{T}^d)),\quad \nabla\sqrt{\varrho}\in L^{\frac{2pk}{2k(p-3)-p}}(0,T;L^{\frac{2lq}{2l(q-3)-q}}(\mathbb{T}^d)),\\ & v\in L^{p }(0,T;L^{q}(\mathbb{T}^d)),\quad \nabla v \in L^{\frac{pk}{k(p-2)-p}}(0,T;L^{\frac{ql}{l(q-2)-q}}(\mathbb{T}^d))\ \text{and}\ v_0\in L^{\max\{ \frac{2\gamma }{\gamma -1},\frac{q}{2}\}}(\mathbb{T}^d), \end{aligned}\end{equation} with $k> \max\{\frac{p}{2(p-3)},\frac{p}{p-2}, \frac{(\gamma-1)p}{2},\frac{(\gamma-1)(d+q)p}{2q-d(p-3)}\}$, $l> \max\{\frac{q}{2(q-3)},\frac{q}{q-2}, \frac{(\gamma-1)q}{2}\}$, $p>3$ and $q>\max\{3, \frac{d(p-3)}{2}\}$.
In view of H\"older's inequality and Lemma \ref{pLions}, we know that \begin{equation}\label{3.81}\begin{aligned} \int_s^t\int \phi(t)v^{\varepsilon} &\Big[\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]\leq C\|v^{\varepsilon}\|_{L^{p}(L^{q})} \|\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\|_{L^{\frac{ p }{p-1}}(L^\frac{q}{q-1})}\\ &\leq C\|v\|^{2}_{L^{p}(L^{q})} \left(\|\varrho_{t}\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}+\|\nabla \varrho\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}\right). \end{aligned}\end{equation} To bound $\varrho_{t}$ and $\nabla \varrho$, we employ the mass equation $\eqref{INS}_1$ to obtain $$ \varrho_t=-2\sqrt{\varrho}v\cdot\nabla\sqrt{\varrho}-{\varrho}\,\mathrm{div}\, v \quad \text{and}\quad \nabla \varrho=2\sqrt{\varrho}\nabla \sqrt{\varrho}. $$ As a consequence, the triangle inequality and H\"older's inequality guarantee that \begin{equation}\label{3.91}\begin{aligned} &\|\varrho_{t}\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}\\\leq& C\left(\|2\sqrt{\varrho}v\cdot\nabla\sqrt{\varrho} \|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}+ \|{\varrho}\,\mathrm{div}\, v\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}\right)\\ \leq& C\left( \|v\|_{L^{p}(L^{q})}\|\nabla\sqrt{\varrho} \|_{L^{\frac{2pk}{2k(p-3)-p}}(L^{\frac{2lq}{2l(q-3)-q}})}\|\varrho\|_{L^{k}(L^{l})}^{1/2}+\| \nabla v\|_{L^{\frac{pk}{k(p-2)-p}}(L^{\frac{ql}{l(q-2)-q}})} \|\varrho\|_{L^{k}(L^{l})}\right), \end{aligned}\end{equation} and \begin{equation}\label{3.92} \begin{aligned} \|\nabla \varrho\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}\leq & C\|\sqrt{\varrho}\nabla \sqrt{\varrho}\|_{L^{\frac{ p }{p-2}}(L^\frac{q}{q-2})}\\\leq& C \|\nabla\sqrt{\varrho} \|_{L^{\frac{2pk}{2k(p-2)-p}}(L^{\frac{2lq}{2l(q-2)-q}})}\|\varrho\|_{L^{k}(L^{l})}^{1/2}\\\leq& C
\|\nabla\sqrt{\varrho} \|_{L^{\frac{2pk}{2k(p-3)-p}}(L^{\frac{2lq}{2l(q-3)-q}})}\|\varrho\|_{L^{k}(L^{l})}^{1/2}. \end{aligned} \end{equation} Plugging \eqref{3.91} and \eqref{3.92} into \eqref{3.81}, we get \begin{equation}\label{c3.11}\begin{aligned} &\int_0^T\int \phi(t)v^{\varepsilon} \Big[\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]\\\leq& C\|v\|^{2}_{L^{p}(L^{q})}\\& \left[\Big(\|v\|_{L^{p}(L^{q})}+1\Big)\|\nabla\sqrt{\varrho} \|_{L^{\frac{2pk}{2k(p-3)-p}}(L^{\frac{2lq}{2l(q-3)-q}})}\|\varrho\|_{L^{k}(L^{l})}^{1/2}+\| \nabla v\|_{L^{\frac{pk}{k(p-2)-p}}(L^{\frac{ql}{l(q-2)-q}})} \|\varrho\|_{L^{k}(L^{l})}\right]. \end{aligned}\end{equation} From Lemma \ref{pLions}, we conclude that, as $\varepsilon\rightarrow0$, $$ \int_0^T\int \phi(t)v^{\varepsilon} \Big[\partial_{t} (\varrho v )^{\varepsilon}-\partial_{t}(\varrho v^{\varepsilon})\Big]\rightarrow0.$$ In light of H\"older's inequality, we obtain \begin{equation}\begin{aligned} \|\varrho v\otimes v\|_{L^{\frac{pk}{2k+p}}(L^{\frac{ql}{2l+q}})}\leq \|v\|^{2}_{L^{p}(L^{q})}\|\varrho\|_{L^{k}(L^{l})}. \end{aligned}\end{equation} By H\"older's inequality and the triangle inequality, we observe that \begin{equation}\label{3.11}\begin{aligned} &\left|\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}]\right| \\ \leq &C\|\nabla v^\varepsilon\|_{L^{\frac{pk}{k(p-2)-p}}(L^{\frac{ql}{l(q-2)-q}})} \|(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}\|_{L^{\frac{pk}{2k+p}}(L^{\frac{ql}{2l+q}})}\\ \leq & C\|\nabla v\|_{L^{\frac{pk}{k(p-2)-p}}(L^{\frac{ql}{l(q-2)-q}})} \left(\|(\varrho v \otimes v)^\varepsilon- \varrho v \otimes v\|_{L^{\frac{pk}{2k+p}}(L^{\frac{ql}{2l+q}})}+\|\varrho v\otimes v- \varrho v\otimes v^\varepsilon\|_{L^{\frac{pk}{2k+p}}(L^{\frac{ql}{2l+q}})}\right).
\end{aligned}\end{equation} Hence, by the standard properties of mollification, we have $$\int_0^T\int\phi(t) \nabla v^{\varepsilon} [(\varrho v\otimes v)^{\varepsilon}-(\varrho v)\otimes v^{\varepsilon}] \rightarrow0 \text{ as }\varepsilon\rightarrow0.$$ According to H\"older's inequality on a bounded domain, we observe that $$\|\nabla(\varrho^\gamma)\|_{L^{\frac{p}{p-1}}(L^{\frac{q}{q-1}})}\leq C \|\varrho\|^{\frac{2\gamma-1}{2}}_{L^{\frac{pk(2\gamma-1)}{4k+p}}(L^{\frac{ql(2\gamma-1)}{4l+q}})}\|\nabla\sqrt{\varrho} \|_{L^{\frac{2pk}{2k(p-3)-p}}(L^{\frac{2lq}{2l(q-3)-q}})},$$ which in turn implies that \begin{equation}\label{d10}\begin{aligned} \int_0^T\int\phi(t) [v^{\varepsilon}\nabla(\varrho^\gamma)^{\varepsilon}-v\nabla (\varrho^\gamma)]\rightarrow0, \end{aligned}\end{equation} where we have used Lemma \ref{lem2.11} and the conditions $k\geq \frac{(\gamma-1)p}{2}$, $l\geq \frac{(\gamma-1)q}{2}$. Then, combining \eqref{c3.11}-\eqref{d10} and passing to the limit as $\varepsilon\rightarrow 0$, we know that \begin{equation}\label{d11} \begin{aligned} -\int_0^T\int \phi_t \left(\frac{1}{2}\varrho |v|^2+\frac{\varrho^\gamma}{\gamma -1}\right)+\int_0^T\int\phi(t) \left(\mu |\nabla v|^2+(\mu+\lambda)|\mathrm{div}\, v|^2\right)=0. \end{aligned} \end{equation} The next objective is to obtain the energy equality up to the initial time $t=0$ by a method similar to that in \cite{[CLWX]} and \cite{[Yu2]}; for the convenience of the reader and the completeness of the paper, we give the details. First we prove the continuity of $\sqrt{\varrho}v(t)$ in the strong topology as $t\to 0^+$. To do this, we define the function $f$ on $[0,T]$ as $$f(t)=\int_{\mathbb{T}^d}(\varrho v)(t,x)\cdot \phi(x) dx \quad \text{for any } \phi(x)\in \mathfrak{D}(\mathbb{T}^d),$$ which is a continuous function with respect to $t\in [0,T]$.
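Later in the proof, a piecewise-linear test function concentrated near $t_0$ is combined with the Lebesgue point theorem to recover pointwise values from time averages, namely $\frac{1}{\alpha}\int_{t_0}^{t_0+\alpha}E(t)\,dt\to E(t_0)$ as $\alpha\to 0$ at every Lebesgue point $t_0$ of $E$. A minimal numerical sketch of this mechanism follows; the integrand $E$ below is a hypothetical stand-in for the energy, chosen continuous so that every point is a Lebesgue point.

```python
def time_average(E, t0, alpha, steps=10_000):
    """Approximate (1/alpha) * integral of E over [t0, t0 + alpha]
    by a midpoint Riemann sum."""
    h = alpha / steps
    return sum(E(t0 + (i + 0.5) * h) for i in range(steps)) * h / alpha

# A continuous model "energy"; illustrative only.
E = lambda t: 1.0 + t * t
t0 = 0.3
avgs = [time_average(E, t0, a) for a in (0.1, 0.01, 0.001)]
print([abs(a - E(t0)) for a in avgs])  # errors shrink as alpha -> 0
```

Here the error of the average over $[t_0,t_0+\alpha]$ is of order $\alpha$, which is exactly what allows passing $\alpha\to0$ in the regularized energy identity.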
Moreover, since $$\varrho \in L^\infty(0,T; L^\gamma (\mathbb{T}^d))\ \text{and}\ \sqrt{\varrho}v\in L^\infty(0,T;L^2(\mathbb{T}^d)),$$ we can obtain $\varrho v\in L^\infty(0,T;L^{\frac{2\gamma }{\gamma+1}}(\mathbb{T}^d)).$\\ From the momentum equation, we have $$\frac{d}{dt}\int_{\mathbb{T}^d} (\varrho v)(t,x)\cdot \phi(x) dx=\int_{\mathbb{T}^d}\varrho v\otimes v:\nabla \phi(x)-P\,\mathrm{div}\,\phi(x)-\mu\nabla v\nabla \phi(x)-(\mu+\lambda )\mathrm{div}\, v\,\mathrm{div}\, \phi(x)dx,$$ which is bounded for any function $\phi\in \mathfrak{D}(\mathbb{T}^d)$. Then it follows from Corollary 2.1 in \cite{[Feireisl2004]} that \begin{equation}\label{d14} \varrho v\in C([0,T];L^{\frac{2\gamma}{\gamma +1}}_{\text{weak}}(\mathbb{T}^d)). \end{equation} On the other hand, we derive from the mass equation $\eqref{INS}_1$ that \begin{equation}\label{d12} \begin{aligned} \partial_t(\varrho^\gamma )=-\gamma \varrho ^\gamma \mathrm{div}\, v-2\gamma \varrho^{\gamma -\frac{1}{2}}v\cdot \nabla \sqrt{\varrho}, \end{aligned} \end{equation} and \begin{equation}\label{d13} \begin{aligned} \partial_t(\sqrt{\varrho})=-\frac{\sqrt{\varrho }}{2}\mathrm{div}\, v-v\cdot \nabla \sqrt{\varrho}, \end{aligned} \end{equation} which together with \eqref{key0} gives $$\partial_t\varrho^\gamma\in L^{\frac{kp}{k(p-2)+(\gamma-1)p}}(L^{\frac{lq}{l(q-2)+(\gamma-1)q}}),\ \ \ \nabla \varrho^\gamma\in L^{\frac{kp}{k(p-3)+(\gamma-1)p}}( L^{\frac{lq}{l(q-3)+(\gamma-1)q}}),$$ and $$\partial_t\sqrt{\varrho}\in L^{\frac{2kp}{2k(p-2)-p}}(L^{\frac{2lq}{2l(q-2)-q}}),\ \ \ \nabla \sqrt{\varrho}\in L^{\frac{2kp}{2k(p-3)-p}}( L^{\frac{2lq}{2l(q-3)-q}}).$$ Hence, using the Aubin-Lions Lemma \ref{AL}, we can obtain \begin{equation}\label{d15} \varrho^\gamma\in C([0,T];L^{\frac{lq}{l(q-3)+(\gamma-1)q}}(\mathbb{T}^d)) \ \text{and}\ 
\sqrt{\varrho }\in C([0,T];L^{\frac{2lq}{2l(q-3)-q}}(\mathbb{T}^d)), \end{equation} for $k\geq \frac{(\gamma-1)(d+q)p}{2q-d(p-3)}$, $p>3$ and $q>\max\{3, \frac{d(p-3)}{2}\}$. Meanwhile, using the natural energy inequality \eqref{energyineq}, \eqref{d14} and \eqref{d15}, we have \begin{equation}\label{d16} \begin{aligned} 0&\leq \overline{\lim_{t\rightarrow 0}}\int |\sqrt{\varrho} v-\sqrt{\varrho_0}v_0|^2 dx\\ &=2\overline{\lim_{t\rightarrow 0}}\left(\int \left(\frac{1}{2}\varrho |v|^2 +\frac{1}{\gamma -1}\varrho ^\gamma \right)dx-\int\left(\frac{1}{2}\varrho_0 |v_0|^2+\frac{1}{\gamma -1}\varrho_0 ^\gamma \right)dx\right)\\ &\ \ \ +2\overline{\lim_{t\rightarrow 0}}\left(\int\sqrt{\varrho_0}v_0\left(\sqrt{\varrho_0}v_0-\sqrt{\varrho} v\right)dx+\frac{1}{\gamma -1}\int \left(\varrho_0^\gamma -\varrho^\gamma\right)dx\right)\\ &\leq 2\overline{\lim_{t\rightarrow 0}}\int \sqrt{\varrho_0}v_0\left(\sqrt{\varrho_0}v_0-\sqrt{\varrho}v\right)dx\\ &=2\overline{\lim_{t\rightarrow 0}}\int v_0 \left(\varrho_0 v_0 -\varrho v\right)dx+2\overline{\lim_{t\rightarrow 0}}\int v_0 \sqrt{\varrho }v\left(\sqrt{\varrho }-\sqrt{\varrho_0}\right)dx=0, \end{aligned} \end{equation} from which it follows that \begin{equation}\label{d17} \sqrt{\varrho} v(t)\rightarrow \sqrt{\varrho }v(0)\quad \text{strongly in } L^2(\mathbb{T}^d)\ \text{as}\ t\rightarrow 0^+. \end{equation} Similarly, one has the right temporal continuity of $\sqrt{\varrho}v$ in $L^2(\mathbb{T}^d)$; hence, for any $t_0\geq 0$, we infer that \begin{equation}\label{d18} \sqrt{\varrho} v(t)\rightarrow \sqrt{\varrho }v(t_0)\quad \text{strongly in } L^2(\mathbb{T}^d)\ \text{as}\ t\rightarrow t_0^+.
\end{equation} Before we go any further, it should be noted that \eqref{d11} remains valid for functions $\phi$ belonging to $W^{1,\infty}$ rather than $C^1$. Then, for any $t_0>0$, we redefine the test function $\phi$ as $\phi_\tau$ for some positive $\tau$ and $\alpha $ such that $\tau +\alpha <t_0$, that is, \begin{equation} \phi_\tau(t)=\left\{\begin{array}{lll} 0, & 0\leq t\leq \tau,\\ \frac{t-\tau}{\alpha}, & \tau\leq t\leq \tau+\alpha,\\ 1, &\tau+\alpha \leq t\leq t_0,\\ \frac{t_0-t}{\alpha }, & t_0\leq t\leq t_0 +\alpha ,\\ 0, & t_0+\alpha \leq t. \end{array}\right. \end{equation} Then, substituting this test function into \eqref{d11}, we arrive at \begin{equation} \begin{aligned} -\int_\tau^{\tau+\alpha}\int& \frac{1}{\alpha}\left(\frac{1}{2}\varrho |v|^2+\frac{1}{\gamma-1}\varrho^\gamma \right)+\frac{1}{\alpha}\int_{t_0}^{t_0+\alpha}\int \left(\frac{1}{2}\varrho |v|^2+\frac{1}{\gamma-1}\varrho^\gamma \right)\\ &+\int_{\tau}^{t_0+\alpha}\int \phi_\tau \left(\mu |\nabla v|^2+\left(\mu+\lambda\right)|\mathrm{div}\, v|^2\right)=0. \end{aligned} \end{equation} Letting $\alpha\rightarrow 0$, using the continuity with respect to $t$ of $\int_0^t\int\left(\mu |\nabla v|^2+\left(\mu+\lambda\right)|\mathrm{div}\, v|^2\right)$ and the Lebesgue point theorem, we deduce that \begin{equation} \begin{aligned} -\int&\left(\frac{1}{2}\varrho |v|^2+\frac{1}{\gamma-1}\varrho^\gamma \right)(\tau)dx+\int\left(\frac{1}{2}\varrho |v|^2+\frac{1}{\gamma-1}\varrho^\gamma \right)(t_0)dx\\ &+\int_\tau^{t_0}\int\left(\mu |\nabla v|^2+\left(\mu+\lambda\right)|\mathrm{div}\, v|^2\right)=0.
\end{aligned} \end{equation} Finally, letting $\tau\rightarrow 0$, using the continuity of $\int_0^t\int\left(\mu |\nabla v|^2+\left(\mu+\lambda\right)|\mathrm{div}\, v|^2\right)$, \eqref{d14} and \eqref{d17}, we can obtain \begin{equation}\begin{aligned} \int\left(\frac{1}{2}\varrho |v|^2+\frac{1}{\gamma-1}\varrho^\gamma \right)(t_0)dx+\int_0^{t_0}\int&\left(\mu |\nabla v|^2+\left(\mu+\lambda\right)|\mathrm{div}\, v|^2\right)dxds\\=&\int\left(\frac{1}{2}\varrho_0 |v_0|^2+\frac{1}{\gamma-1}\varrho_0^\gamma \right)dx. \end{aligned}\end{equation} This completes the proof of Theorem \ref{the1.5}. \end{proof} \section*{Acknowledgement} The authors would like to express their sincere gratitude to Prof. Quansen Jiu for pointing out this problem to us. Ye was partially supported by the National Natural Science Foundation of China under grant No. 11701145 and the China Postdoctoral Science Foundation (No. 2020M672196). Wang was partially supported by the National Natural Science Foundation of China under grants No. 11971446, No. 12071113 and No. 11601492. Yu was partially supported by the National Natural Science Foundation of China (NNSFC) (No. 11901040), the Beijing Natural Science Foundation (BNSF) (No. 1204030) and the Beijing Municipal Education Commission (KM202011232020). \begin{thebibliography}{00} \bibitem{[ADSW]} I. Akramov, T. Debiec, J. W. D. Skipper and E. Wiedemann, Energy conservation for the compressible Euler and Navier-Stokes equations with vacuum. Anal. PDE 13 (2020), 789--811. \bibitem{[BY]} H. Beirao da Veiga and J. Yang, On the Shinbrot's criteria for energy equality to Newtonian fluids: a simplified proof, and an extension of the range of application. Nonlinear Anal. 196 (2020), 111809, 4 pp. \bibitem{[BC]} L. C. Berselli and E. Chiodaroli, On the energy equality for the 3D Navier-Stokes equations. Nonlinear Anal. 192 (2020), 111704, 24 pp. \bibitem{[BGSTW]} C. Bardos, P. Gwiazda, A. \'Swierczewska-Gwiazda, E. S.
Titi and E. Wiedemann, Onsager's conjecture in bounded domains for the conservation of entropy and other companion laws. Proc. R. Soc. A 475 (2019), 18 pp. \bibitem{[CY]} R. M. Chen and C. Yu, Onsager's energy conservation for inhomogeneous Euler equations. J. Math. Pures Appl. 131 (2019), 1--16. \bibitem{[CLWX]} M. Chen, Z. Liang, D. Wang and R. Xu, Energy equality in compressible fluids with physical boundaries. SIAM J. Math. Anal. 52 (2020), 1363--1385. \bibitem{[CL]} A. Cheskidov and X. Luo, Energy equality for the Navier-Stokes equations in weak-in-time Onsager spaces. Nonlinearity 33 (2020), 1388--1403. \bibitem{[CET]} P. Constantin, W. E and E. S. Titi, Onsager's conjecture on the energy conservation for solutions of Euler's equation. Commun. Math. Phys. 165 (1994), 207--209. \bibitem{[PL]} R. J. DiPerna and P. L. Lions, Ordinary differential equations, transport theory and Sobolev spaces. Invent. Math. 98 (1989), 511--547. \bibitem{FNP} E. Feireisl, A. Novotn\'y, H. Petzeltov\'a, On the existence of globally defined weak solutions to the Navier-Stokes equations. J. Math. Fluid Mech. 3 (2001), 358--392. \bibitem{[Feireisl2004]} E. Feireisl, Dynamics of Viscous Compressible Fluids, Oxford University Press, 2004. \bibitem{JZ} S. Jiang, P. Zhang, On spherically symmetric solutions of the compressible isentropic Navier-Stokes equations. Comm. Math. Phys. 215 (2001), 559--581. \bibitem{[LV]} I. Lacroix-Violet and A. Vasseur, Global weak solutions to the compressible quantum Navier-Stokes equation and its semi-classical limit. J. Math. Pures Appl. 114 (2018), 191--210. \bibitem{[LS]} T. M. Leslie and R. Shvydkoy, The energy balance relation for weak solutions of the density-dependent Navier-Stokes equations. J. Differential Equations 261 (2016), 3719--3733. \bibitem{[Liang]} Z.
Liang, Regularity criterion on the energy conservation for the compressible Navier-Stokes equations. Proc. Roy. Soc. Edinburgh Sect. A, (2020), 1--18. \bibitem{[Lions]} J. L. Lions, Sur la r\'egularit\'e et l'unicit\'e des solutions turbulentes des \'equations de Navier Stokes. Rend. Semin. Mat. Univ. Padova, 30 (1960), 16--23. \bibitem{[Lions1]} P. L. Lions, Mathematical Topics in Fluid Mechanics, vol. 1. Incompressible Models, Oxford University Press, New York, 1998. \bibitem{[Lions2]} P. L. Lions, Mathematical Topics in Fluid Mechanics, vol. 2. Compressible Models, Oxford University Press, New York, 1998. \bibitem{[Isett]} P. Isett, A proof of Onsager's conjecture. Ann. of Math. 188 (2018), 871--963. \bibitem{[NNT]} Q. Nguyen, P. Nguyen and B. Tang, Energy equalities for compressible Navier-Stokes equations. Nonlinearity 32 (2019), 4206--4231. \bibitem{[NNT1]} Q. Nguyen, P. Nguyen and B. Tang, Energy conservation for inhomogeneous incompressible and compressible Euler equations. J. Differential Equations, 269 (2020), 7171--7210. \bibitem{[Onsager]} L. Onsager, Statistical hydrodynamics, Nuovo Cim. (Suppl.) 6 (1949), 279--287. \bibitem{[Shinbrot]} M. Shinbrot, The energy equation for the Navier-Stokes system. SIAM J. Math. Anal. 5 (1974), 948--954. \bibitem{[Simon]} J. Simon, Compact sets in the space $L^p(0, T; B)$, Ann. Mat. Pura Appl., 146 (1987), 65--96. \bibitem{[Taniuchi]} Y. Taniuchi, On generalized energy equality of the Navier-Stokes equations. Manuscripta Math. 94 (1997), 365--384. \bibitem{[WY]}Y. Wang and Y. Ye, Energy conservation via a combination of velocity and its gradient in the Navier-Stokes system. arXiv: 2106.01233. \bibitem{[WYY]}Y. Wang, Y. Ye and Y. Yu, The role of density in the energy conservation for the isentropic compressible Euler equations. Preprint. 2021. \bibitem{[WWY]}W. Wei, Y. Wang and Y. Ye, Gagliardo-Nirenberg inequalities in Lorentz type spaces and energy equality for the Navier-Stokes system. arXiv: 2106.11212. 
\bibitem{[YWW]} Y. Ye, Y. Wang and W. Wei, Energy equality in the isentropic compressible Navier-Stokes equations allowing vacuum. arXiv:2108.09425. \bibitem{[Yu1]} C. Yu, A new proof to the energy conservation for the Navier-Stokes equations. arXiv: 1604.05697. \bibitem{[Yu2]} C. Yu, Energy conservation for the weak solutions of the compressible Navier-Stokes equations. Arch. Ration. Mech. Anal. 225 (2017), 1073--1087. \bibitem{[Yu3]} C. Yu, The energy equality for the Navier-Stokes equations in bounded domains. arXiv: 1802.07661. \bibitem{[Zhang]} Z. Zhang, Remarks on the energy equality for the non-Newtonian fluids. J. Math. Anal. Appl. 480 (2019), 123443, 9 pp. \end{thebibliography} \end{document}
\begin{document} \flushbottom \title{Spatiotemporal slope stability analytics for failure estimation (SSSAFE): linking radar data to the fundamental dynamics of granular failure} \thispagestyle{empty} \section*{Introduction} \label{Introduction} Natural and engineered slopes, composed of granular materials like rocks, concrete and soil, can maintain their structural integrity even as damage spreads. But there is a tipping point beyond which damage can propagate to cause catastrophic failure with little to no apparent warning signs at the macroscale~\cite{carla2019,Handwerger2019rainfall,Clarkson2020damfailure}. The landslides in Xinmo (China, 2017) and the dam collapse in Brumadinho (Brazil, 2019) are recent reminders of the devastating impact of slope failure on human lives and livelihoods, infrastructure, and the environment~\cite{carla2019,Handwerger2019rainfall,Clarkson2020damfailure,WLF5,SILVAROTTA2019Brumadinho}. Here the term {\it failure} is used in an operational sense: it is the moment when the slope totally or partially collapses, displaying a paroxysmal acceleration and a disintegration of the mobilized material. A critical frontline defense against these hazards is large-scale monitoring and analysis of slope movement using remote sensing technologies~\cite{MCQUILLAN2020151,carla2019,Clarkson2020damfailure,Dicketal2015,harries2006,wasowski2014,intrieri2019,Intrieri2017,Dai2020}. Some of these measurements have now reached spatial and temporal resolutions (e.g., Slope Stability Radar~\cite{Dicketal2015}) which enable direct connections to be made to the fundamental deformation and failure of granular materials~\cite{MCQUILLAN2020151,Clarkson2020damfailure,carla2019,Intrieri2017}. Nevertheless, there are significant challenges to overcome before the full potential of these data assets can be harnessed for geotechnical risk assessment and hazard management~\cite{Intrieri2017,Dai2020}.
One of the biggest challenges, if not the biggest, lies in the analysis and interpretation of monitoring data with respect to the underlying micromechanics and dynamics of deformation in the precursory failure regime (PFR)~\cite{intrieri2019,MCQUILLAN2020151,Dicketal2015,wasowski2014,harries2006}. Here we address this challenge by formulating a holistic framework for spatiotemporal slope stability analytics for failure estimation (SSSAFE). SSSAFE is physics-based and bears explicit connections to the micromechanics and dynamics of ductile to brittle failure in granular solids (e.g.,~\cite{ATSKCRMNJT2,ATSKCRMNJT,tordesillas2015,tordesillasEtAl2016} and references therein). A hallmark of SSSAFE is its detailed characterization of the spatiotemporal coevolution of the preferred pathways for force and damage in PFR using kinematic data. As highlighted in various reviews~\cite{MCQUILLAN2020151,Clarkson2020damfailure,Dai2020,intrieri2019,Intrieri2017,wasowski2014}, scant attention has been paid to the spatiotemporal dynamics of landslide deformation, with existing approaches in landslide forecasting and early warning systems (EWS) falling into one of two categories: (a) spatial analysis of an unstable slope to estimate the location and geometry of a landslide~\cite{hoek1981}, or (b) temporal analysis of the ground deformation of single measurement points exhibiting tertiary creep, to deliver a short-term forecast of the time of failure~\cite{Carl2016GuidelinesOT,intrieri2019,carla2019}. The former partially relies on expert judgment (e.g., the choice of the failure criterion and the method of analysis~\cite{MCQUILLAN2020151}) and on in situ data (depth of the lithologies and of the water table, resistance parameters of the rock or soil) that always bear a certain level of uncertainty and representativeness bias~\cite{ChristianBaecher2011}.
In temporal analysis, the inverse velocity (INV) theory originally proposed by Fukuzono~\cite{fukuzono1985} is the most widely applied method for predicting the time of collapse in the terminal stages of PFR. This approach has no spatial aspect and depends on assumptions which motivate areas for improvement in forecasting~\cite{Dicketal2015,carla2019}. Being based on the inverse value of a derivative parameter, this method is heavily affected by noise, especially when the velocity is not particularly high. While this can be addressed by smoothing the data using a moving average~\cite{Carl2016GuidelinesOT,intrieri2019}, this comes at the cost of diminished sensitivity to important changes in the acceleration trends due to short-term events, including: changes in surface boundary conditions (e.g., civil engineering and mining works~\cite{MCQUILLAN2020151}), variations in the triggering factors of slope instability (e.g., rainfall~\cite{Handwerger2019rainfall}, seismicity~\cite{Qiu2016seismic}, mining blasts~\cite{Dicketal2015,harries2006}), and the inherently complex mechanical interactions between different parts of the slope. Concurrent sites of instability may also interact and induce stress redistributions that lead a landslide to ``self-stabilize''~\cite{hungr2014,wang2019,TordesillasZhouBatterham2018}. Efforts~\cite{carla2019,Dicketal2015} to improve the INV approach give prima facie evidence to suggest that more accurate forecasts can be achieved when the spatial characteristics of slope displacements are incorporated in the temporal analysis of monitoring data.
Accordingly, recent work focused on the spatiotemporal evolution of landslide kinematics in PFR in two case studies using: (a) ground-based radar data of a rockfall in an open pit mine (Mine 1), where two sites of instability emerged, one of which self-stabilized before the larger one collapsed; and (b) satellite-based Sentinel-1 radar data (Xinmo) of the catastrophic collapse in Xinmo, which led to 83 fatalities~\cite{TordesillasZhouBatterham2018,KSAT,SDAT,ZhouAOAD,WangSS,WLF5}. Guided by lessons learned from the physics and dynamics of granular failure, these delivered a reliable early prediction of the location and geometry of the failure region~\cite{TordesillasZhouBatterham2018,ZhouAOAD,WangSS,KSAT,WLF5}, as well as regime change points in PFR~\cite{KSAT,SDAT,WLF5,WangSS}. In this study, we build on these efforts to develop a holistic data-driven framework which eliminates the uncertainties associated with a postulated stress-strain model for the slope, yet holds explicit connections to the first principles of fracture and failure mechanics of heterogeneous and disordered granular solids (e.g.,~\cite{ATSKCRMNJT2,ATSKCRMNJT,tordesillas2015,tordesillasEtAl2016} and references therein). To do this, we adopt a transdisciplinary approach which integrates network flow theory of granular failure~\cite{ATSKCRMNJT2,ATSKCRMNJT,tordesillas2015,tordesillasEtAl2016} and mesoscience~\cite{li2013, li2018mesoscience, li2018}. Given the novelty of this formulation on several fronts, the next section gives a brief review of the relevant developments which, woven together, form the basis of SSSAFE. \section*{Precursory dynamics of granular failure across system levels and scales} \label{sec:connect} {\it a) Preferred paths for transmission of damage versus force. \,\,} In complex systems, not all paths for transmission are created equal. Some are preferred over others.
Experimental studies into the transmission of force and energy in natural and synthetic granular media (e.g., sand, photoelastic disk assemblies) and associated discrete element simulations have shown that the mesoregime of PFR is governed by the coupled evolution of two dominant mechanisms~\cite{burnley2013,ATSKCRMNJT,ATSKCRMNJT2,tordesillas2015,PRE}. The first comprises the preferred paths for force transmission (mechanism A): a set of system-spanning paths that can transmit the highest force flow along direct and shortest possible routes through the system. Distinct force chains (Figure~\ref{fig:rewire}) can be readily observed to form along these percolating paths, in alignment with the major principal stress axis~\cite{CundallStrack1979,Potyondy_2004,Majmudar_2005}. The second comprises the preferred paths for damage (mechanism B), where cracks and/or shearbands emerge. Note that the term {\it damage} is broadly used here to mean the separation of two grains in contact, bonded or unbonded. \begin{figure} \caption{(Color online) Redistribution of contact forces around a force chain in the sample Biax across different stages of PFR, prior to the time of failure $t^F_B=104$. Link thickness is proportional to the contact force magnitude. Red (black) links correspond to contacts between member particles of the force chain (all other supporting contacts). Most of the grains in the force chain are colored blue to aid visualization. There is a build-up of force across stages 74-78, leading to a new force chain contact at the bottom amid rearrangements of supporting lateral contacts. Further build-up of force in the force chain column results in the buckling of the top and bottom segments of the chain across stages 78-82: in turn, more force is rerouted to the bottom right (top left) in stages 82-86, resulting in a new force chain contact. } \label{fig:rewire} \end{figure} {\it b) Coevolution of preferred paths: a compromise-in-competition.
\,\,} Arguably the best manifestation of the coupled evolution between force and damage can be observed in deforming photoelastic disk assemblies~\cite{Majmudar_2005,PRE}. Here one can readily observe forces being continually rerouted to alternative pathways as damage spreads (Figure~\ref{fig:rewire}). This scenario is similar to traffic flows, where vehicles are diverted to alternative routes when a road is closed off for repairs or other incidents. Following this analogy to road networks, grain contact networks similarly give rise to emergent flow bottlenecks. Prior network flow studies have shown that these sites, which are highly prone to congestion, ultimately become the preferred paths for damage in the failure regime~\cite{ATSKCRMNJT,ATSKCRMNJT2}. Counter to intuition, however, the bottlenecks do not generally coincide with the location of damage sites in the nascent stages of PFR, which is when predictions should ideally be made to allow enough time to enact mitigative measures. Instead, a process which can be described as a {\it compromise-in-competition} between the preferred paths for force and damage develops, which effectively shields the bottleneck from damage. Specifically, force congestion in the bottleneck is relieved by complex stress redistributions that redirect forces to other parts of the sample, where damage can be accommodated with minimal reduction to the system's resistance to failure (or global force transmission capacity). This may explain why current failure detection methods, which rely on damage sites in the early stages of PFR for spatial clues on where catastrophic failure ultimately forms, sometimes suffer high false positive rates at the laboratory~\cite{chakraborty2019early} and field~\cite{Intrieri2017} levels.
\begin{table} \caption{Mechanisms underlying the strength and failure of granular materials compete in the precursory failure regime (PFR).} \centering \begin{tabular}{ccc} \toprule \textbf{Regime} & \textbf{A - Preferred paths for force} & \textbf{B - Preferred paths for damage}\\ \textbf{(emergent structures)} & \textbf{(force chains)} & \textbf{(cracks, shear bands)}\\ \midrule (A) Stable regime & dominant & suppressed\\ (A-B) Mesoregime PFR & compromise-in-competition & compromise-in-competition\\ (B) Failure regime & suppressed & dominant\\ \bottomrule \end{tabular} \label{Tab:compete} \end{table} {\it c) The principles of mesoscience. \,\,} To account for the compromise-in-competition among preferred transmission paths and simultaneously `jump scale' -- from laboratory to field -- we integrate the network flow approach~\cite{ATSKCRMNJT,ATSKCRMNJT2} with the principles of mesoscience (Table~\ref{Tab:compete}, Figure~\ref{fig:mesofield}). Pioneered by Li and co-workers~\cite{li2013,li2018mesoscience,li2018} in the area of chemical and process engineering, mesoscience has enabled the upscaling of models of gas/solid-particle flow systems from laboratory to industrial scale. Mesoscience is predicated on the concept of a compromise-in-competition between {\it at least} two dominant mechanisms in a so-called mesoregime of a complex system. In the simplest case of two competing mechanisms (A and B), the mesoregime mediates two limiting regimes; each is governed by one dominant mechanism, A (B) in the A-dominated (B-dominated) regime, which is formulated as an extremum. Li et al. argue that, while the classical single objective optimization formalism applies to each limiting regime, the compromise-in-competition in the $A-B$ mesoregime necessitates a multiobjective optimization approach. Results from earlier studies~\cite{ATSKCRMNJT,ATSKCRMNJT2}, employing a dual objective network flow analysis, corroborate this view.
Moreover, opposing trends manifest as the system evolves from one limiting regime to the other ($A \rightarrow A-B \rightarrow B$ and vice versa), consistent with the mesoscience principles (Figure~\ref{fig:mesofield}, Table~\ref{Tab:compete}). In laboratory tests, where detailed analysis of underlying mechanisms is possible, the B-dominated failure regime is characterized by bursts to a peak in all the indicators of stored energy release and dissipation, including: kinetic energy, dissipation rate, population of buckling force chains and their supporting 3-cycles, average values of local nonaffine motion, grain velocity and rotation~\cite{tordesillasEtAl2016,microband,Tordesillas2007,TORDESILLAS2009706}. By contrast, at the opposite extreme, in the A-dominated stable regime, all of these quantities are negligibly small. In PFR, these opposing tendencies compromise and give rise to spatiotemporal dynamical patterns~\cite{SDAT,KSAT,ZhouAOAD,WLF5}. \begin{figure} \caption{(Color online) The precursory failure regime (PFR) over the course of monitoring a developing rockslide in an open pit mine. This chart summarizes the mesoscience perspective of a mesoregime (PFR) where two mechanisms (A and B) coexist and give rise to emergent mesoscale kinematic clusters. The clusters share a common boundary shown as black points overlaid on top of the displacement map at the time of the rockslide (chart-centre). This chart is analogous to the mesoscience perspective depicted for gas- or solid-particle flow systems~\cite{li2018}.} \label{fig:mesofield} \end{figure} {\it d) Clustering patterns in the kinematics characterize the mesoregime PFR. \,\,} The compromise-in-competition between force transfer and damage paths in PFR gives rise to collective motion or kinematic partitions: mesoscale clusters where constituent members move collectively in near rigid-body motion~\cite{SDAT,KSAT,ZhouAOAD,WLF5}.
Interestingly, Li and co-workers also observed particle clusters in the mesoregime of gas/solid-particle flow systems, and conjectured that these emerge from particles tending to minimize their potential energy, while the gas tries to choose a path of least resistance through the particle layers~\cite{li2013,li2018mesoscience,li2018}. Analogously, in the systems studied here, {\it damage favors the path of least resistance to failure -- the path that forms the common boundaries of kinematic clusters~\cite{ATSKCRMNJT,ATSKCRMNJT2}.} {\it e) Dynamics of kinematic clusters provide early prediction of failure across scales. \,\,} A complex network analysis of individual grain motions in sand in laboratory tests~\cite{TordesillasWalkerAndoViggiani2013} and of surface ground motion in a slope (e.g., Mine 1)~\cite{TordesillasZhouBatterham2018} has shown that the impending failure region develops in between subregions of transient but high kinematic similarity early in PFR. Moreover, the spatiotemporal dynamics of these clusters can deliver a reliable change point $t^{*}$ from which such partitions become incised in the granular body, giving rise to their near relative rigid-body motion: for example, when the active `slip region' of a slope begins to detach and accelerate downslope from a relatively stationary region below; or when parts of a rock mass on either side of a developing crack undergo relative slip. That is, persistent partitions in kinematics space forewarn of impending partitions in physical space~\cite{WLF5,KSAT,SDAT,ZhouAOAD}. In a parallel effort~\cite{WangSS}, the computational challenges of embedding knowledge of kinematic clustering in a stochastic statistical learning model from high-dimensional, non-stationary spatiotemporal time series data were overcome, with the displacement and velocity trends and the failure region of Mine 1 successfully predicted more than five days in advance.
{\it f) Establishing a connection to first principles fracture and failure mechanics for granular solids. \,\,} Relative motions at the grain-grain level were used to study the coevolution of force and damage propagation in a network flow analysis -- with explicit connections to the most popular fracture criteria, starting with Griffith's theory for crack propagation~\cite{ATSKCRMNJT,ATSKCRMNJT2}. The emerging {\it flow bottlenecks} for force and energy, proven to be the paths of least resistance to failure, were found to deliver an accurate and early prediction of the location of shear bands and macrocracks that ultimately develop in the failure regime. The question that now arises is: {\it Can a combined mesoscience and network flow approach detect the bottlenecks and kinematic clusters from radar-measured surface ground motion data and, if so, how can their spatiotemporal evolution be used to deliver an early prediction of a likely place and time of failure?} Here we answer this question and demonstrate our approach through SSSAFE. Based solely on kinematic data for input, SSSAFE first applies the network flow model to identify and characterize the emerging kinematic clusters in PFR, and then uses their dynamics to deliver an early prediction of where and when failure is likely to develop. Different from~\cite{TordesillasZhouBatterham2018,SDAT,KSAT,ZhouAOAD}, which adopt an essentially pattern-mining approach, SSSAFE rigorously predicts the path of least resistance to failure in a manner consistent with the fundamental micromechanics and dynamics of failure across different system levels and scales. Four systems are analyzed: a standard laboratory test (Biax); and three rock slopes -- the man-made slopes Mine 1 and Mine 2, and the natural slope Xinmo. The input kinematic data to SSSAFE comprise individual grain displacements in Biax, and radar line-of-sight displacement data gathered from ground-based radar (Mine 1 and Mine 2) and space-borne radar (Xinmo).
\begin{figure} \caption{(Color online) The systems under study in the B-dominated failure regime. (a) Map of the cumulative absolute grain rotation in sample Biax showing the shear band where plastic deformation and energy dissipation concentrate. Map of the cumulative line-of-sight displacement for the rock slopes, highlighting the failure location (orange-red): (b) Mine 1, (c) Mine 2 and (d) Xinmo (dimensions of Mines 1 and 2 are in meters). In Mine 1, the second region of instability to the east (encircled) stabilized the day before the collapse. } \label{fig:system} \end{figure} \section*{Data} \label{sec:data} The input data to our analysis consist of the following system properties at each time state of the monitoring period $t=1,2,\ldots,T$: (a) the coordinates of observation points $\ell=1,2,\ldots,L$; (b) the displacement vector recorded at each point, $\vec{d}_1,\vec{d}_2,\ldots,\vec{d}_L$. We have four data sets (Figure~\ref{fig:system}). The first is Biax, a well-studied simulation of granular failure in a standard laboratory test in which an assembly of polydisperse spherical grains is subjected to planar biaxial compression~\cite{tordesillasEtAl2016,microband,Tordesillas2007,TORDESILLAS2009706}. Here each point $\ell$ is a moving grain and the vector $\vec{d}_\ell$ is two-dimensional. The sample begins to dilate at around $t=50$. Collective buckling of force chains initiates at around $t=98$, giving way to a brief period of strain-softening and the development of a single shear band along the forward diagonal of the sample. This shear band becomes fully formed at $t=104$, referred to as the time of failure $t^F_B$ (Figure~\ref{fig:system} (a)). From this point on, the sample exists as two clusters, in each of which constituent grains move collectively as one: two `solids' in relative rigid-body motion along their common boundary, viz. the shear band.
Details of this simulation and the mechanisms underlying its bulk behavior in the lead-up to and during failure are provided elsewhere~\cite{tordesillasEtAl2016,microband,Tordesillas2007,TORDESILLAS2009706}. Three large field-scale data sets are examined. Mines 1 and 2 are from monitoring data of a rock slope in two different open pit mines using ground-based SSR-XT 3D real aperture radar~\cite{Dicketal2015,harries2006} (Figure~\ref{fig:system} (b, c)). The mine operation, location and year of the rockslides are confidential; however, we have all the information needed for this analysis. Each observation point $\ell$ is a grid cell or pixel, ranging in size from 3.5 m $\times$ 3.5 m to 7 m $\times$ 7 m, in a fixed grid. The vector $\vec{d}_\ell$ is one-dimensional, corresponding to the displacement along the line-of-sight (LOS) between the radar and the point $\ell$ on the slope surface. Mine 1 is an unconsolidated rock slope (Figure~\ref{fig:system} (b)). The monitored domain stretches to around 200 m in length and 40 m in height. Movements of the rock face were monitored over a period of three weeks: 10:07 May 31 to 23:55 June 21. Displacement at each observed location on the surface of the rock slope was recorded every six minutes, with millimetric accuracy. This led to time series data from 1803 pixel locations at high spatial and temporal resolutions for the entire slope. A rockslide occurred on the western side of the slope on June 15, with an arcuate back scar and a strike length of around 120 m. Mine 1 reached a peak pixel velocity of around 640 m/yr. Considering a precautionary correction for the radar line of sight, this falls in the moderate velocity category~\cite{CrudenVarnes1996} and corresponds to an evacuation response~\cite{hungr2014}. The time of collapse $t^F_1$ occurred at around 13:10 June 15, close to when the global average peak velocity of 33.61 mm/hr was reached.
There is a competing slide: a second region of instability, to the east (encircled area, Figure~\ref{fig:system} (b)). This region intermittently developed large movements, but the instability was somehow arrested and movement slowed down the day before the collapse of the west wall~\cite{TordesillasZhouBatterham2018,KSAT}. In this context, this region is sometimes referred to as a false alarm, in the sense that it did not eventuate into a collapse~\cite{Intrieri2017}. While in many cases ``tertiary creep'' ends with a total or partial failure, it is also possible, as in Mine 1, that the whole landslide or a part of it finds a new equilibrium~\cite{intrieri2019,hungr2014}. There are many possible reasons for this, such as a reduction of the destabilizing forces through stress redistributions or the geometric configuration of the sliding surface, which slows down and ultimately arrests the whole or part of the landslide body. There are also certain landslide types, such as earth flows~\cite{hungr2014}, which do not exhibit a brittle behaviour and hence cannot experience a catastrophic failure. Nevertheless, such flows can exhibit an exponential acceleration that, in terms of public safety, still poses an emergency to deal with~\cite{intrieri2019}. Mine 2 is a consolidated rock slope of an open cut mine dominated by intact igneous rock that is heavily structured or faulted by many naturally occurring discontinuities (Figure~\ref{fig:system} (c)). A slope stability radar scanned the section of the rock face for displacement for approximately 6 days, from 15:39 August 19 until 07:05 August 25, each scan taking approximately 6 minutes, again with millimetric precision. Measurements at 5394 pixel locations were taken every 6 minutes, giving high spatial and temporal resolution for the entire domain, which measures 1280 m wide and around 224 m high.
A rockslide occurred on the southern wall at 03:00 August 25; we refer to this as the time of failure $t^F_2$ for the rest of this paper. The area that failed, measuring approximately 135 m wide and 145 m high, moved over 1 million tonnes of debris. Mine 2 reached a peak pixel velocity of 2.8 m/day, which is classified as fast~\cite{CrudenVarnes1996}. The Xinmo landslide is a rock avalanche (Figure~\ref{fig:system} (d)), composed of metamorphic sandstone intercalated with slate, that detached on June 24, 2017 and hit Xinmo village (Maoxian, China, $32^{\circ} \, 03' \, 58''$ N, $103 ^{\circ} \, 39' \, 46''$ E), causing 83 deaths and destroying 64 houses. The analyzed data set is focused only on the original source area, which was located near the crest of the mountain ridge north of Xinmo village, at an altitude of 3431 m a.s.l. As this source moved along the slope it entrained new rock material and reached an estimated volume of 13 million m$^3$ and a terminal velocity of 250 km/h~\cite{fan2017}. The site was not actively monitored at the time, but displacement data obtained from the Sentinel-1 constellation, which takes periodic interferometric acquisitions of the area, have been retrospectively analyzed to determine if a forewarning would have been possible (Intrieri et al., 2018). The data used consisted of 45 SAR images in C-band (5.6 cm wavelength), at 5 m $\times$ 14 m spatial resolution, acquired along the descending orbit (incidence angle of $40.78^{\circ}$) and spanning 9 October 2014 to 19 June 2017 (that is, five days before the failure). The data, covering an area of 460 km$^2$, were processed with the SqueeSAR algorithm~\cite{ferretti2011} and comprised more than 130,000 measurement points. Xinmo reached a peak pixel velocity of around 27 mm/yr in the tertiary creep phase, which is very slow~\cite{CrudenVarnes1996}, but later reached a terminal velocity of 250 km/h during failure~\cite{fan2017}.
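Across all four data sets, the input described in the Data section reduces to the same layout: point coordinates plus a per-point displacement time series. A minimal sketch in Python with synthetic placeholder values (array sizes are illustrative assumptions, except $L=1803$, which matches the Mine 1 pixel count quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: T monitoring epochs, L observation points.
# L = 1803 matches the Mine 1 pixel count; T is a placeholder.
T, L = 120, 1803

coords = rng.uniform(0.0, 200.0, size=(L, 2))   # pixel/grain coordinates

# disp[t, l] is the displacement vector of point l at epoch t:
# 2-D for the Biax grains, 1-D line-of-sight values for the radar data.
disp_biax = rng.normal(size=(T, L, 2))
disp_los = rng.normal(size=(T, L, 1))

# Relative displacement magnitude between two linked points i and j at
# every epoch -- the quantity that feeds the capacity model of the
# Method section.
i, j = 0, 1
du = np.linalg.norm(disp_los[:, i] - disp_los[:, j], axis=-1)
```

The arrays here stand in for the radar and simulation outputs only; the actual data sets are confidential (Mines 1 and 2) or derived from SqueeSAR processing (Xinmo).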
\section*{Method} \label{sec:methods} The core components of our proposed spatiotemporal slope stability analytics for failure estimation (SSSAFE) framework are summarized in Table~\ref{Tab:scale} and Figure~\ref{fig:flowchart}. The key idea is to model the transmission of force in each studied system in a way that accounts for the coupled evolution of the preferred pathways for force and damage, and to use this model to predict the emerging {\it kinematic clusters in the mesoregime PFR}. To achieve a consistent formulation across different system levels, we model force transmission as {\it a flow through a network}. At the core of this formulation is a set of optimization problems on a network, in accordance with network flow theory and mesoscience principles. We emphasize that our implementation of this model is confined to finding the preferred paths for damage, which represent the common boundary of the kinematic clusters. Detection and characterization of the preferred paths for force are outside the scope of this investigation; such paths have been characterized for different laboratory samples, including concrete (e.g.,~\cite{ATSKCRMNJT,tordesillas2015,PRE}). \begin{table} \caption{Combined mesoscience and network flow formulation behind SSSAFE.
A compromise-in-competition between mechanism A (preferred paths for force) and mechanism B (preferred paths for damage) governs the mesoscale at the laboratory level and field level.} \centering \begin{tabular}{cccc} \toprule \textbf{System$\rightarrow$} & \textbf{Biax} & \textbf{Mines 1 \& 2 } & \textbf{Flow network model}\\ \textbf{Scale $\downarrow$} & & & \\ \midrule microscale & grains, grain-grain interaction & pixels, pixel-pixel interaction & nodes, links\\ mesoscale & A versus B & A versus B & A versus B \\ macroscale & sample & slope & flow network\\ \bottomrule \end{tabular} \label{Tab:scale} \end{table} \begin{figure} \caption{(Color online) Flow chart summarizing the 3 steps in SSSAFE, designed for prediction of where and when failure will likely occur in a monitored domain based on spatiotemporal kinematic data.} \label{fig:flowchart} \end{figure} \subsection*{Core components of SSSAFE} \label{secFlow} The core components of SSSAFE are implemented in three consecutive steps, with Steps 2 and 3 delivering, respectively, the predictions of the likely location and the time of failure. Recent work~\cite{ATSKCRMNJT2} shows explicit connections between the formulation below and the most popular fracture criteria of fracture mechanics, beginning with Griffith's theory for crack propagation. The method below, consistent with Griffith's theory, was found to provide the most accurate and early prediction of failure in PFR for a range of ductile to quasi-brittle laboratory samples. {\bf Step 1: Construct the flow network $\mathcal{F}$.} Forces are transmitted along physical connections. Hence the construction of the flow network $\mathcal{F}$ begins with an undirected network $\mathcal{N}$ that represents the physical connectivity of the system: the grain contact network in Biax or the proximity network in Mines 1 and 2 where pixels within a distance $d$ of each other are connected.
Each node of $\mathcal{N}$ represents a grain (pixel), while each link in $\mathcal{N}$ represents a grain-grain contact (pixel-pixel connection). The links in $\mathcal{N}$ vary with loading history in Biax, but are fixed across the monitoring period for both Mines 1 and 2. Next, $\mathcal{N}$ is transformed to a directed network $G = (V, A)$, where $V$, $A$ are the set of nodes and set of arcs respectively. That is, each link connecting nodes $i \in V$ and $j \in V$ in $\mathcal{N}$ is represented by a pair of symmetric arcs $e \in A$: one from $i$ to $j$ and another vice versa. Given this symmetry, we will use the symbol $e$ to also denote a link. Every link in $G$ is then assigned a non-negative {capacity} $c(e)$ which corresponds to the maximum flow value that the given link can support. Since the model concerns force transmission, $c$ is thus the force that must be overcome to break the connection: the strength or {\it resistance to failure} of the grain-grain contact or pixel-pixel connection. Given that what is measured reliably at both laboratory and field levels is motion -- not forces or stresses -- we express this capacity $c$ in terms of the motions of the connected elements. Hence, the contact capacity function $c$ is given by \begin{equation} c(e)=c_{ij} =c_{ji} = \frac{1}{|\overrightarrow{\Delta u_{ij}}|^2}, \label{eq:cap} \end{equation} where $|\overrightarrow{\Delta u_{ij}}|$ is the magnitude of the relative displacement of two grains (or two pixels) linked in $\mathcal{N}$. Note that since we are only interested in the flow bottleneck~\cite{ATSKCRMNJT,AhujaNetworkFlows}, what matters in this analysis are the relative values of the link capacities, not their absolute values. Indeed, for this purpose, the model for the link capacity need not be in units of force, as previously shown (e.g.,~\cite{ATSKCRMNJT,tordesillas2015, PRE}).
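As a concrete sketch of Step 1, the capacity rule in Equation~\eqref{eq:cap} can be implemented as follows (networkx is our assumed library choice, and the node displacements are hypothetical toy values, not data from the paper):

```python
import networkx as nx
import numpy as np

def build_capacitated_network(links, displacements):
    """Build the capacitated network N: nodes are grains/pixels and
    each link gets capacity 1/|du_ij|^2, per Equation (eq:cap)."""
    G = nx.Graph()
    for i, j in links:
        du = np.asarray(displacements[i]) - np.asarray(displacements[j])
        mag = np.linalg.norm(du)
        # Higher relative motion -> less stable connection -> lower capacity.
        G.add_edge(i, j, capacity=1.0 / max(mag**2, 1e-12))
    return G

# Toy example: pixel 2 moves much more than pixels 0 and 1, so the
# link (1, 2) is the weaker (lower-capacity) connection.
disp = {0: (0.0, 0.0), 1: (0.1, 0.0), 2: (2.0, 0.0)}
net = build_capacitated_network([(0, 1), (1, 2)], disp)
```

Only the relative ordering of the capacities matters for the bottleneck analysis, so the small floor (`1e-12`) merely guards against division by zero for coincident displacements.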
Consequently, in Equation~\eqref{eq:cap}, we set the capacity such that the higher the relative motion of grains (pixels) linked in $\mathcal{N}$, the less stable the connection and, in turn, the lower its corresponding capacity $c$ (see Figure~\ref{fig:pattern}). Finally, a direction for the flow is dictated by a pair of artificial nodes $q$ and $k$ called the \emph{source} and the \emph{sink} of $G$. The quadruple $\mathcal{F} = (G, c, q, k)$ is called a {\it flow network}. In the Biax sample, the natural choices for the \emph{source} $q$ and \emph{sink} $k$ are the top and bottom walls, so that the direction of flow is in alignment with the direction of the applied vertical compression (and major principal stress axis) of the sample. \begin{figure} \caption{(Color online) Collective motion of mesoscale clusters characterizes the terminal stages of the mesoregime PFR. Emerging kinematic clusters increasingly move in near relative rigid-body motion: (a) the actual displacement field at failure in Biax, (b) depiction of surface ground motion on a slope. Links along the shared boundary of kinematic clusters, $\Omega$, are closest to breaking point (i.e., smallest total path capacity $c(\Gamma)$) due to the large relative motions of their constituent elements. } \label{fig:pattern} \end{figure} {\bf Step 2: Find the kinematic clusters from the bottleneck of $\mathcal{F}$.} The bottleneck of $\mathcal{F}$, $B(\mathcal{F})$, is given by the cut of $\mathcal{F}$ with the least capacity. Any cut of $\mathcal{F}$, $\Gamma$, is a set of links in $\mathcal{N}$ which, if disconnected, represents a literal cut of $\mathcal{F}$ into two disjoint components $\{W,W'\}$ of $V$ such that no flow can be transmitted from source $q\in W$ to sink $k\in W'$. Thus, any cut $\Gamma$ contains all arcs emanating from a node in $W$ and terminating on a node in $W'$.
Physically, a cut $\Gamma$ may be thought of as a virtual crack of the studied granular body or domain whose connectivity is described by $\mathcal{N}$. Physical disconnection of the contacts associated with the links in $\Gamma$ would thus result in a literal system-spanning crack which splits the body into two disjoint pieces. The capacity of $\Gamma$ is defined as $c(\Gamma) = \displaystyle \sum_{e\in\Gamma}{c(e)}.$ Following Equation~\eqref{eq:cap}, this represents the total force flow that must be overcome to disconnect every link in $\Gamma$. Here we are interested in finding the cut with the least capacity -- the so-called minimum cut, also known as the bottleneck $B(\mathcal{F})$. Thus, the capacity of the bottleneck $B(\mathcal{F})$ represents the global failure resistance, $F^*$: the minimum amount of force flow needed to overcome the resistance of the links in $B(\mathcal{F})$, breaking them apart and splitting the granular body into two disjoint pieces. Note that this analysis does not preclude a body from splitting apart into more than two pieces: in such cases, one can repeat the same analysis described here for each piece to obtain further subpartitions. In the cases studied here, this is unnecessary as the studied systems split apart essentially into two components, with the bottleneck being their shared boundary. In Biax, the bottleneck $B(\mathcal{F})$ predicts the location of the shear band that forms in the failure regime. In the case of Mines 1 and 2 and Xinmo, $B(\mathcal{F})$ predicts the boundary of the landslide. As time to failure draws near, we expect motion in the components to become increasingly coherent and near-rigid-body, resulting in kinematic clustering. The {\it active cluster} in PFR, denoted by $\Omega$, distinguishes itself by manifesting an increasing downward motion (viz. increasing trend in cumulative displacement and velocity) due to gravity, while the stable cluster remains relatively stationary.
To find the bottleneck of Biax at each time, we solve the \textsc{Maximum flow - Minimum cut (MFMC) problem} on $\mathcal{F}$, following earlier work~\cite{ATSKCRMNJT,PRE}. This is a two-stage calculation. Stage 1 solves the \textsc{Maximum flow problem} to find the global flow capacity, $F^*$, the maximum flow that can be transmitted through $\mathcal{N}$ given its topology and link capacities. More formally, given a flow network $\mathcal{F} = (G,c,q,k)$, a link flow $x(e)$ is called a {\it feasible} $q$-$k$ flow if it satisfies: \noindent (a) the conservation of flow \begin{equation} \sum_{e \in \delta^{-}(v)} x(e) \;= \;\sum_{e \in \delta^{+}(v)} x(e), \; \; \; \forall v \in V - \{q, k\}, \label{eq:flow-con} \end{equation} where ${e \in \delta^{-}(v)}$ denotes arcs entering node $v$ and ${e \in \delta^{+}(v)}$ denotes arcs leaving node $v$; \noindent (b) the capacity rule \begin{equation} 0 \le x(e) \le c(e),\; \; \; \; \forall e \in G. \label{eq:flow-caps} \end{equation} Hence, the force flow along each link, $x(e)$, is regulated by the threshold for damage $c(e)$, which is a function of the relative motion between the connected elements (Equation~\ref{eq:cap}). \begin{enumerate} \item[]The \textsc{Maximum Flow Problem} can be expressed as: find a feasible $q$-$k$ flow $x$ such that the following flow function $f(x)$ is maximum: \begin{equation} f(x) = \sum_{e \in \delta^{+}(q)} x(e) - \sum_{e \in \delta^{-}(q)} x(e). \label{eq:flow-vals} \end{equation} The {\it flow value} that solves the above is the maximum flow $F^*$. \end{enumerate} \noindent Once $F^*$ is established, we move to Stage 2 to solve the \textsc{Minimum Cut Problem}. \begin{enumerate} \item[]The \textsc{Minimum Cut Problem} of $\mathcal{F}=(G,c,q,k)$ is to find the cut $\Gamma_{min}$ such that \begin{equation} c(\Gamma_{min}) = \min_{\Gamma}\big\{ \displaystyle \sum_{e\in\Gamma}{c(e)} \big\}.
\label{eq:capacity} \end{equation} \end{enumerate} \noindent The above is typically solved using the Ford-Fulkerson algorithm \cite{AhujaNetworkFlows}. This exploits the well-known \emph{max-flow min-cut theorem}, which states that the maximum possible flow $F^*$ equals the capacity of the minimum cut or bottleneck~\cite{liu2011segmenting}. Using this theorem and Equations~\eqref{eq:cap} -- \eqref{eq:capacity}, we can now directly relate the conditions on where and when catastrophic failure occurs to the bottleneck. That is, catastrophic failure occurs when the force flow exceeds the resistance to breakage of all the links in the bottleneck. Where the system physically breaks apart is given by the bottleneck itself, which is the limiting shared boundary of the kinematic clusters. While finding the bottleneck in Biax is relatively straightforward, this is not the case for Mines 1 and 2. The difficulty arises because there is no obvious choice for the source-sink pair to direct the flow, given the uncontrolled and unknown loading conditions of these slopes. To address this, we construct the Gomory-Hu tree (GHT)~\cite{gomory1961multi} for the network $\mathcal{N}$. The procedure is described in \cite{KahagalagePhDthesis}, but we outline it briefly here for completeness. Let $G^* = (\mathcal{N} ,c)$ be an undirected, link-capacitated network. For every pair of nodes $u,v$ in $G^*$, the GHT stores information on the minimum $u$-$v$ cut of $G^*$ that separates $u$ and $v$. For a network with $n$ nodes, there are a total of $n(n-1)/2$ possible source-sink pairs, each with a corresponding minimum cut. However, the construction of the GHT shows that the minimum cuts for some pairs of nodes are identical. In fact, the GHT contains information on exactly $n-1$ distinct minimum cuts~\cite{gomory1961multi} corresponding to a set of $n-1$ explicit source-sink pairs.
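For a fixed source-sink pair, the two-stage MFMC computation can be sketched with networkx, whose flow routines implement augmenting-path algorithms in the Ford-Fulkerson family (our assumed library; the toy capacities below are illustrative, not values from the paper):

```python
import networkx as nx

# Undirected capacitated network; networkx treats each undirected
# edge as a symmetric arc pair, matching the construction of G.
G = nx.Graph()
G.add_edge('q', 'a', capacity=3.0)
G.add_edge('q', 'b', capacity=2.0)
G.add_edge('a', 'k', capacity=1.0)
G.add_edge('b', 'k', capacity=4.0)
G.add_edge('a', 'b', capacity=1.0)

# By the max-flow min-cut theorem, one call yields both F* (Stage 1)
# and the bottleneck partition {W, W'} (Stage 2).
F_star, (W, W_prime) = nx.minimum_cut(G, 'q', 'k')
```

Here the augmenting paths q-a-k (1 unit), q-b-k (2 units) and q-a-b-k (1 unit) saturate the network, so $F^* = 4$, matching the capacity of the cut $\{(a,k), (a,b), (q,b)\}$.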
One could infer all the remaining implicit source-sink pairs from this set using the GHT, as illustrated in Figure~\ref{fig:stM1}. Formally, the GHT is defined as follows. \begin{definition} \label{def:ght} \textit{For a given link-capacitated network, $G^* = (\mathcal{N},c)$, a tree $\mathcal{T}$ is a Gomory-Hu tree (GHT) if the following holds:} \begin{enumerate} \item The nodes of $\mathcal{T}$ coincide with the nodes of $G^*$. \item Each link $l$ in $\mathcal{T}$ has a non-negative weight $w(l)$. \item For each pair of nodes $u,v$ in $\mathcal{T}$, let $l_{m}$ be the link of minimum weight on the path joining $u$ and $v$ in $\mathcal{T}$. \end{enumerate} Then $w(l_{m})$ is equal to the capacity of the minimum cut separating $u$ and $v$ in $G^*$. \end{definition} In Figure~\ref{fig:stM1}, we illustrate an example contact network $\mathcal{N}$ with $n = 9$ pixels, its corresponding Gomory-Hu tree $\mathcal{T}$ and a table summarizing the outcome of removing a link in $\mathcal{T}$. There are 36 possible source-sink pairs. $\mathcal{T}$ contains 8 explicit source-sink pairs (column 1, Figure~\ref{fig:stM1} (c)). Removing link $l$, connecting nodes $u$ and $v$ in $\mathcal{T}$, gives two distinct components $\left\lbrace W, W' \right\rbrace$: these correspond to the kinematic clusters of $\mathcal{N}$ when the edges in the minimum cut separating the source-sink pair $u$ and $v$ are removed. All other source-sink pairs and their corresponding minimum cuts can be inferred from $\mathcal{T}$. Consider, for example, the minimum cut of $\mathcal{N}$ separating the source-sink pair $u=1$ and $v=8$. Link $l_m=(2,5)$ has the minimum weight in the path from $u=1$ to $v=8$ in $\mathcal{T}$ (Definition~\ref{def:ght}). Thus, removing $l_m=(2,5)$ from $\mathcal{T}$ results in $W=\left\lbrace1,\;2,\;3\right\rbrace$ and $W'=\left\lbrace4,\;5,\;6,\;7,\;8,\;9\right\rbrace$.
In $\mathcal{N}$, this partition corresponds to the removal of edges $(1,4)$, $(2,5)$, and $(3,6)$ that constitute the minimum cut for the source-sink pair $u=1$ and $v=8$, with a capacity of 5. Note that there are other source-sink pairs having the same minimum cut. From $\mathcal{T}$, the absolute (global) minimum cut capacity is 2. The corresponding two partitions are $W = \left\lbrace7\right\rbrace$ and $W' = \left\lbrace1,\; 2,\; 3,\; 4,\; 5,\; 6,\; 8,\; 9\right\rbrace$. The global minimum cut contains edges $\left\lbrace(4,7),\; (7,8)\right\rbrace$. Observe that the global minimum cut is biased towards highly imbalanced cuts, where one component is significantly smaller than the other in terms of the number of member nodes. Such highly imbalanced partitions may correspond to the smaller component having only one or at most a few pixel locations out of thousands or more. As such, this may not provide a complete summary of emerging partitions that lead to catastrophic failure. Of interest are larger partitions that span the system, where the part that dislodges from the rest of the slope constitutes a sizeable portion of the slope. Accordingly, we introduce a cut ratio $\rho$, defined as the ratio of the number of nodes in the smaller component to that in the larger component upon removal of a link in $\mathcal{T}$. Hence, in this example, if we are interested in the smaller component containing at least 3 pixels, we find the minimum cut such that $0.3 \leq \rho \leq 1$. This yields the cut that corresponds to the removal of link $(2,5)$ in $\mathcal{T}$; the explicit source-sink pair is $(u=2, v=5)$ as before. By inspecting $\mathcal{T}$, we can see that source-sink pairs $(u=1, v=5), (u=1, v=8), (u=1, v=6), (u=1, v=9), (u=2, v = 8), (u=2, v = 6), (u=2, v = 9)$ correspond to the other minimum cuts also satisfying $0.3 \leq \rho \leq 1$. Note that this requires enumeration of all possible source-sink pairs and their minimum cuts.
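A minimal sketch of this ratio-constrained search over the GHT (a simplified stand-in for the full procedure; networkx's `gomory_hu_tree` is our assumed implementation, and the toy capacities are hypothetical):

```python
import networkx as nx

def balanced_min_cut(G, rho_min=0.3):
    """Scan the n-1 links of the GHT and return the smallest-weight link
    whose removal splits the tree into components with size ratio
    rho = |smaller|/|larger| >= rho_min."""
    T = nx.gomory_hu_tree(G, capacity='capacity')
    best = None
    for u, v, data in list(T.edges(data=True)):
        T.remove_edge(u, v)
        small, large = sorted(nx.connected_components(T), key=len)
        T.add_edge(u, v, **data)
        rho = len(small) / len(large)
        if rho >= rho_min and (best is None or data['weight'] < best[0]):
            best = (data['weight'], (u, v), (small, large))
    return best

# Toy slope: two strong blocks joined by one weak link (capacity 1),
# so the balanced bottleneck splits the chain down the middle.
G = nx.Graph()
for u, v, c in [(1, 2, 5), (2, 3, 1), (3, 4, 5)]:
    G.add_edge(u, v, capacity=c)
w_min, cut_pair, (W, W_prime) = balanced_min_cut(G)
```

The GHT edge weights store the $n-1$ distinct minimum-cut capacities, so this scan needs only a linear pass over the tree rather than an enumeration of all $n(n-1)/2$ source-sink pairs.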
An outline of this procedure is given in Algorithm~\ref{alg:bcut}. \begin{figure} \caption{(Color online) (a) An example contact network $\mathcal{N}$ with $n=9$ pixels, its corresponding Gomory-Hu tree $\mathcal{T}$ and a table summarizing the outcome of removing a link in $\mathcal{T}$.} \label{fig:stM1} \end{figure} \begin{algorithm} \caption{The major crack prediction algorithm} \label{alg:bcut} \begin{algorithmic}[1] \STATE Let $\mathcal{T}$ be a GHT constructed for a capacitated graph $G^*$ with $n$ nodes \STATE Set $O$ to an $(n-1) \times 3$ matrix whose entries are all zeros \FOR{$i := 1:n-1$} \STATE Remove a link $l_i$ in $\mathcal{T}$ \STATE Compute the ratio $\rho(l_{i})$ upon the removal of the link $l_i$ and set the $i^{th}$ row of $O$, $O[i,:] := [\rho(l_{i}) \;w(l_i) \;l_i]$ where $w(l_i)$ is the weight of the link $l_i$ in $\mathcal{T}$ \ENDFOR \RETURN The edge $l_i$ which corresponds to the minimum capacity $w(l_i)$ for $\rho_m \leq \rho(l_i) \leq 1$. The two nodes of $l_i$ give the locations of the source and sink nodes, and removal of this link $l_i$ in $\mathcal{T}$ gives the corresponding two components $\left\lbrace W, W' \right\rbrace$. \end{algorithmic} \end{algorithm} For Mines 1 and 2 and Xinmo, the number of nodes in $G^*$ and the number of possible source-sink pairs are, respectively: $612$ and $186{,}966$; $5394$ and $14{,}544{,}921$; $610$ and $185{,}745$. It is thus computationally expensive to enumerate every source-sink pair. However, we can use $\mathcal{T}$ to capture a failure event such that the minimum cut identifies a failure area that is no smaller than a prescribed fraction of the studied domain size. That is, we can remove the link in $\mathcal{T}$ with the minimum weight such that $0.3 < \rho \leq 1$ to obtain the two corresponding cluster components $\left\lbrace W, W' \right\rbrace$. It is easy enough to check the partitions, and ensure that parts of landslide boundaries that are close to the boundary of the monitored domain (recall the top left boundary of the rockfall in Mine 1 (Figure~\ref{fig:system} (b))) are also captured.
This is done simply by checking the cases where $0.03 \leq \rho \leq 0.3$, which identify partitions that lead to the smaller cluster being as small as 3\% of the number of nodes in the larger cluster (Figure~\ref{fig:clusters}). In summary, the key output from Step 2 is the bottleneck $B(\mathcal{F})$, the path of least resistance to failure which separates the active cluster $\Omega$ that will likely collapse from the rest of the slope, and the slope's failure resistance $F^*$. \begin{figure} \caption{(Color online) Kinematic clusters (red and blue) for Mine 1 at stage $t=225$ for $\rho_m \leq \rho \leq 1$. The active cluster $\Omega$ is colored red. Black points highlight the pixels connected by the set of links in the bottleneck, the common boundary of the clusters. } \label{fig:clusters} \end{figure} {\bf Step 3: Characterize the cluster dynamics. } As depicted in Figure~\ref{fig:flowchart}, at each time state up until the current time $t$, we find the flow bottleneck $B(\mathcal{F})$, its associated clusters and the failure resistance. We can use this historical information to characterize the dynamics of the cluster motions as the monitoring advances in time. Here we are interested in one of the defining aspects of granular failure, namely, collective motion. As time advances towards the failure regime, we quantify the extent to which: (a) intracluster motions become increasingly coherent and similar -- at the same time as intercluster motions become more and more different (separated in kinematic state space); and (b) the predicted clusters no longer change in member elements, suggesting that the pattern of impending failure has become physically incised in the system. 
To do this, we compute the silhouette score $S$ \cite{Rousseeuw1987} to quantify the quality of the clustering pattern obtained from the network flow analysis, coupled with an information-theoretic measure, the Normalized Mutual Information (NMI) \cite{Vinh10}, to quantify the temporal persistence of the clustering pattern. The silhouette score $S \in [-1,1]$ gives an overall measure of the quality of clustering \cite{Rousseeuw1987}. It is the global average of $s(i)$, which measures how similar a given node $i$ is to the other nodes $j$ in its own cluster (cohesion) compared to the nodes in the other clusters (separation): \begin{equation} S= {\frac{1}{n}}\sum\limits_{i=1}^n s(i) ={\frac{1}{n}}\sum\limits_{i=1}^n \frac{b(i)-a(i)}{\max\big[b(i),a(i)\big]}; \label{eqn:Silh} \end{equation} here $a(i)$ is the average distance in the displacement state-space from $i$ to all other nodes in the same cluster, and $b(i)$ is the average of the distances from $i$ to all points in the other cluster. As shown in Figure \ref{fig:sil}, a good clustering pattern (high $S$) is one where the nodes in the same cluster exhibit very similar features (nodes are tightly packed in feature state space, hence small $a(i)$), while nodes from different clusters have very different features (red nodes are well separated from blue nodes in feature state space, hence large $b(i)$). As a general guide, values below 0.2 suggest essentially no clustering pattern was found, while the closer $S$ is to one, the more compact are the individual clusters while being more separated from each other. Given the studied feature is motion, an increasing trend with respect to time in $S$ from around 0.2 to its upper bound of 1 suggests that the clusters are moving increasingly in relative rigid-body motion.
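As a sketch, $S$ can be computed directly from per-node kinematic features and the two cluster labels (scikit-learn is our assumed library; the toy displacement features below are hypothetical):

```python
import numpy as np
from sklearn.metrics import silhouette_score

# Toy displacement state-space: cluster 0 near the origin (stable),
# cluster 1 moving coherently downslope (active cluster Omega).
features = np.array([[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
                     [3.0, 2.9], [3.1, 3.0], [2.9, 3.1]])
labels = np.array([0, 0, 0, 1, 1, 1])

# Tightly packed, well-separated clusters give S close to 1.
S = silhouette_score(features, labels)
```

With intra-cluster distances around 0.1 and inter-cluster distances around 4, $S$ comes out close to its upper bound, consistent with near rigid-body cluster motion.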
The Normalized Mutual Information (NMI) \cite{Vinh10} basically tells us how much knowing the clustering pattern at the previous time, $X(t-1)$, reduces our uncertainty of the clustering at the current time, $X(t)$. It is defined as \begin{equation} \text{NMI}= \frac{I(X(t);X(t-1))}{\sqrt{(H(X(t))H(X(t-1)))}}; \label{eqn:nmi} \end{equation} here $I(X(t);X(t-1))$ is the mutual information between $X(t)$ and $X(t-1)$ and $H(\cdot)$ is the entropy of the corresponding clustering assignments. NMI $\in [0, 1]$: 0 means there is no mutual information, as opposed to 1 where there is perfect correlation or similarity, between the clusters at $t$ and $t-1$. Intuitively, NMI measures the information that the clustering assignments $X(t)$ and $X(t-1)$ share: the higher the NMI, the more useful information on the clustering pattern is encoded in $X(t-1)$ that can help us predict the clustering at the next time state $X(t)$. \begin{figure} \caption{(Color online) Depiction of the silhouette score $s(i)$ for node $i$, used to quantify the quality of clustering in kinematic state space. $a(i)$ ($b(i)$) measures intra- (inter-) cluster similarity of node $i$. } \label{fig:sil} \end{figure} In summary, based on the results from Steps 2 and 3, we can identify a regime change point $t^*$ from which the failure resistance $F^*$ drops close to its minimum of zero, as $S$ rises and/or levels above 0.2, while NMI stays close to 1. For all $t \geq t^*$, a prediction of the landslide region is given by $\Omega$, the active or fastest moving cluster. In addition, the time of failure $t^F$ can be predicted by performing a linear regression, with a rolling time window, of the inverse mean velocity of $\Omega$ for $t \geq t^*$.
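Both the NMI persistence check and the failure-time extrapolation can be sketched as follows (scikit-learn and numpy assumed; the synthetic series are hypothetical, and the INV step follows Fukuzono's idea of extrapolating the inverse velocity to zero):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

# Cluster assignments at t-1 and t: identical up to label names, so
# the clustering pattern is perfectly persistent (NMI is permutation
# invariant and equals 1 here).
x_prev = [0, 0, 0, 1, 1, 1]
x_curr = [1, 1, 1, 0, 0, 0]
nmi = normalized_mutual_info_score(x_prev, x_curr)

# INV: fit 1/v(t) of the active cluster over a rolling window and
# extrapolate to 1/v = 0 to estimate the failure time t_F.
t = np.array([100.0, 101.0, 102.0, 103.0, 104.0])
inv_v = np.array([5.0, 4.0, 3.0, 2.0, 1.0])  # synthetic, linear decay
slope, intercept = np.polyfit(t, inv_v, 1)
t_F = -intercept / slope                     # 1/v crosses zero at t_F
```

Using the mean velocity of $\Omega$ rather than a single hand-picked pixel is what removes the subjectivity from the classical single-point INV analysis.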
This not only obviates the need to subjectively select a pixel to implement the Fukuzono INV analysis~\cite{fukuzono1985} but also ensures this analysis takes into account the spatiotemporal and coupled evolution of force and damage pathways in PFR. \section*{Results and discussion} \label{sec:results} \begin{figure} \caption{(Color online) The mesoregime mediates the stable regime and the failure regime in Biax. Time evolution of: (a) the Biax mean velocity, along with the shear strength of Biax as measured by the stress ratio; (b) the failure resistance $F^*$ from Step 2. Inset in: (a) shows the collective buckling of force chains in the shear band; (b) shows the zoomed-in area near the regime change point $t^*_B$= 80. (c) Plot of the time evolution of NMI and $S$ from Step 3. Vertical lines mark the regime change point $t^*_B$=80 (solid grey line) and the time of failure $t^F_B$= 104 (dashed black line), respectively. (d) Cumulative predictions of the shear band (black points) from Step 2 overlaid on top of the map of the magnitude of displacement at the time of failure. Southwesterly (northeasterly) displacement is given a negative (positive) sign. } \label{fig:oppose} \end{figure} In all of the systems studied, SSSAFE uncovers three dynamical regimes over the course of the monitoring campaign, consistent with a compromise-in-competition between force and damage (Figures~\ref{fig:oppose} --\ref{fig:xinmo}). In Biax, the global mean velocity steadily rises in PFR, before a sudden burst to a peak in the failure regime (Figure~\ref{fig:oppose} (a)). Simultaneously, the opposite trend can be observed in the time evolution of the system's resistance to failure $F^*$, which decreases progressively as damage spreads in PFR, eventually dropping to its minimum value close to zero at stage 80 (Figure~\ref{fig:oppose} (b)).
Extensive published studies of this sample have shown that columnar force chains at stage 80 have lost considerable lateral support in the region of the impending shear band, due to dilatancy~\cite{Tordesillas2007,TORDESILLAS2009706,TORDESILLAS2011265}. While force redistributions around force chains continually occur during this period (recall Figure~\ref{fig:rewire}), ultimately, the degradation in the region precipitates collective force chain buckling at the peak stress ($t=98$, Figure~\ref{fig:oppose} (a) inset), culminating in a fully developed shear band at $t^F=104$, when kinematic clusters move in almost relative rigid-body motion (recall Figure~\ref{fig:pattern} (a)). These events were previously observed in various types of sand and photoelastic disk assemblies~\cite{tordesillas2015,PRE,TordesillasWalkerAndoViggiani2013}. A consistent dynamics emerges in the time evolution of NMI and $S$ in Figure~\ref{fig:oppose} (c). From stage $t=80$, $S$ rises from around 0.5 before levelling off at the start of the failure regime at $t=104$; NMI stays close to 1 from $t=80$. These trends imply a recurring bottleneck $B(\mathcal{F})$, as evident in $80 \le t \le 110$ of Figure~\ref{fig:oppose} (d), such that grains on either side progressively move collectively as one in opposite directions (Figure~\ref{fig:pattern} (a)). \begin{figure} \caption{(Color online) Slope Mine 1. Time evolution of: (a) the mean velocity of Mine 1 with the failure location in red (inset); (b) the failure resistance $F^*$ on log-axis from Step 2; (c) the NMI index and $S$ (inset) from Step 3; (e) the inverse mean velocity of $\Omega$ and of pixel $p$ (inset), over the time interval indicated by the red window in the inset in (c). Vertical lines mark the regime change points (solid grey lines) $t^*_{1a}$ and $t^*_{1b}$.} \label{fig:mine1} \end{figure} At the field scale using radar data, SSSAFE delivers qualitatively similar trends for Mines 1 and 2 and Xinmo.
The presence of large fluctuations in Mines 1 and 2 (Figures~\ref{fig:mine1}-\ref{fig:mine2} (a-c)) is not surprising, given these mines were operational, with blasting, pumping, transport and drilling works taking place at various times over the course of the monitoring period. As in Biax, the failure resistance of Mine 1 drops close to zero well before failure (Figure~\ref{fig:mine1} (b)), with corresponding rises in NMI and $S$ towards 1 (Figure~\ref{fig:mine1} (c)), as early as around $t^*_{1a}=69$, even though the rock slope appears intact with near-zero global mean velocity (Figure~\ref{fig:mine1} (a)). This suggests that internal cracks and shear bands have started to propagate internally, as delineated by the preferred paths of damage (black points in Figure~\ref{fig:mine1} (d)). Damage has spread to the extent that the capacity for force transfer between adjacent connected material points (pixels) along the {\it recurring} bottleneck $B(\mathcal{F})$ to the west has significantly reduced, even though there are still many remaining connections in the rock slope that keep it manifestly intact: Supplementary Movie Mine1 clearly shows that the west wall persists as an active area $\Omega$ from the beginning of the monitoring campaign. As time advances towards failure, a second regime change point manifests: $t^*_{1b}=3322$=12:14 June 14 (Figures~\ref{fig:mine1} (b-c)). During $69 \le t < 3322$, the interaction between the two regions of instability leads to an initial decline in $S$, while NMI stays close to 1 due to the persistence of the west wall cluster, the site that eventually collapses. But the day before the collapse, $S$ sharply rises from $t^*_{1b}$. This rise in $S$ suggests that the clustering pattern has now become incised in the slope to the extent that the clusters are now essentially undergoing relative motion along their common boundary, as $\Omega$ accelerates~\cite{TordesillasZhouBatterham2018,SDAT,KSAT,ZhouAOAD}.
This is corroborated by the INV analysis of $\Omega$ and of the fastest moving pixel $p$ in $\Omega$ (Figure~\ref{fig:mine1} (e)). The change point $t^*_{1b}$ improves on earlier work using a pattern mining approach, which detected the time of imminent failure to be one to two hours later: $t$=14:53 June 14 \cite{SDAT} and $t$=13:16 June 14 \cite{KSAT}. In Mine 1, multiple sites of instability interact mechanically in PFR. Our method can reliably identify and differentiate these regions (Figure~\ref{fig:mine1}). The west wall where catastrophic failure occurs can be distinguished early in PFR by the temporal persistence of the predicted landslide boundary (black points) in this area, in contrast to the eastern corner where this boundary only occasionally appears (Supplementary Movie Mine1). This intermittent dynamics in the latter is due to redundant force pathways which the system exploits to relieve the build-up of stress in $B(\mathcal{F})$ along the west wall by diverting the forces and damage there to alternative paths, including to the competing slide to the east. In laboratory samples undergoing quasi-brittle failure~\cite{ATSKCRMNJT2}, the interaction between competing cracks manifests in the form of stress redistributions along the preferred force pathways {\it between} the bottleneck where the macrocrack ultimately forms and the competing crack which later undergoes structural arrest (self-stabilizes). The same is observed in Mine 1: note the concentration of black points in the area between the actual failure region to the west and the competing slide to the east in $0.3 < \rho \le 1$, $69 \le t < 3322$ of Figure~\ref{fig:mine1} (d). This compromise-in-competition continues until all such paths are exhausted, $t = t^*_{1b} =3322$, from which time $B(\mathcal{F})$ remains fixed and becomes primed for uncontrolled crack propagation along the landslide boundary ($3322 \le t \le 3600$, Figure~\ref{fig:mine1} (d)).
Mine 1 provides a good example of why early prediction of failure rests crucially on methods that can account for the spatiotemporal compromise-in-competition between force and damage pathways. Essentially, the ultimate effect of stress redistributions is to delay failure, since any damage to $B(\mathcal{F})$ leads to a reduction in $F^*$. But there is an undesired concomitant: the considerable uncertainty such redistributions pose for early prediction of failure, given damage is rerouted and concentrated elsewhere -- away from the region of impending failure in PFR~\cite{ATSKCRMNJT2}. \begin{figure} \caption{(Color online) Slope Mine 2. Time evolution of: (a) the mean velocity of Mine 2 with the failure location in red (inset); (b) the failure resistance $F^*$ from Step 2; (c) the NMI index and S (inset); (e) the inverse mean velocity of $\Omega$ and of pixel $p$ (inset), over the time interval indicated by the red window in the inset in (c). Vertical lines mark the regime change point $t^*_2$=221=13:39 August 20 (solid grey line) and time of failure $t^F_2$=1315=03:00 August 25 (dashed black line). (d) Cumulative predictions of the landslide boundary (black points) are overlaid on top of the displacement map at $t^F_2$. Supplementary Movie Mine2 shows the evolution of $\Omega$ over the period of the monitoring campaign.} \label{fig:mine2} \end{figure} Mine 2 is a consolidated rock slope dominated by intact igneous rock that embodies many natural joints or faults. There is the A-dominated stable regime over the first day of the monitoring period, $1 \le t < 221$, where the global mean velocity fluctuates initially around 0, as $F^*$ portrays a decreasing trend (Figure~\ref{fig:mine2} (a-b)). Trends in both NMI and the silhouette coefficient $S$ suggest that no substantial clustering structure in the kinematics developed on the first day: NMI fluctuates between 0 and 1, while $S$ remains below 0.5 (Figure~\ref{fig:mine2} (c)).
The system embodies redundant pathways to divert stresses away from the area of impending failure ($1 \leq t < 221$, Figure~\ref{fig:mine2} (d)). In the A-B dominated mesoregime of PFR, $S$ progressively increases to 1, implying the emergence of collective motion ($221 \leq t \leq 1370$, Figure~\ref{fig:mine2} (d)). As failure draws near, intracluster motions become coherent and near rigid-body, while intercluster motions become separated (Figure~\ref{fig:mine2} (c) inset), as the cluster corresponding to the location of impending failure $\Omega$ accelerates (Figure~\ref{fig:mine2} (e)). These trends are precisely mirrored by the normalized mutual information (NMI) of the clusters (Figure~\ref{fig:mine2} (c)). Note that the landslide boundary, shown at $t=221$ in Figure~\ref{fig:mesofield}, actually appears as early as $t=104$ and persists up until $t=178$, which explains the high NMI scores. However, the kinematic clusters undergo a short period of change during $179 \leq t < 221$, which may reflect any number of perturbations on the mine site, including blasting. Around the same time interval, large fluctuations can also be observed in $S$. Close to and during the B-dominated stable regime, $S$ flattens out close to 1, indicative of strongly clustered motion. Altogether, the evidence from $F^*$, $S$ and NMI marks a regime change point at $t^*_2$=221=13:39 August 20, just over 4 days prior to the collapse at $t^F_2$=1315=03:00 August 25. The INV analysis of $\Omega$ and the fastest moving pixel $p$ supports the progressive evolution to collapse at $t^F_2$ (Figure~\ref{fig:mine2} (e)). \begin{figure} \caption{(Color online) Slope Xinmo.
Time evolution of: (a) the mean velocity of Xinmo with the failure location in red (inset); (b) the failure resistance $F^*$ from Step 2; (c) the NMI index and the global average silhouette score (inset); (d) the inverse mean velocity of $\Omega$ and of pixel $p$ (inset), over the time interval indicated by the red window in the inset of (c). Vertical lines mark the regime change point $t^*_X$=26=August 23, 2016 (solid grey line) and the time of failure $t^F_X=$June 24, 2017 (dashed black line). (e) Cumulative predictions of the landslide boundary (black diamonds) are overlaid on top of the displacement map at $t^F_X$. Supplementary Movie Xinmo shows the evolution of $\Omega$ over the period of the monitoring campaign.} \label{fig:xinmo} \end{figure} For the Xinmo landslide, two key trends are evident from SSSAFE: (a) the regime change point at $t^*_{X} = 26$ = August 23, 2016, 10 months in advance of the actual time of collapse, from which time the active area $\Omega$ became fixed in the location that later became the rock avalanche source~\cite{Intrieri2017} (Figure~\ref{fig:xinmo} (a-e)); and (b) the development of cracks in a smaller competing failure zone above and to the east of the actual rock avalanche source (black diamonds in $0.03 \le \rho \le 0.3$, $1 \le t < 26$ of Figure~\ref{fig:xinmo} (e), Supplementary Movie Xinmo) as early as 2015. Thus the results here corroborate prior findings~\cite{Intrieri2017,Hu2018,fan2017,carla2019,WLF5} that the rock avalanche was generated from the upper part of the slope (Figure~\ref{fig:xinmo} (a) inset), near the mountain crest, and only afterwards entrained old landslide deposits along its way, which showed no sign of movement and played a role only in the propagation and not in the triggering phase. In the months immediately following $t^*_X$, the velocities recorded were slow~\cite{CrudenVarnes1996} and predisposed to self-stabilization~\cite{hungr2014}.
Interestingly, the shift in $\Omega$ from the eastern to the western flank of the source area matches the reconstruction proposed by Hu et al.~\cite{Hu2018}, who attribute the triggering of the rock avalanche to an initial rockfall on June 24, 2017 at 05:39:07 (local time) that impacted the bedrock in the source area, where a crack network was present but still locked by persistent rock bridges. While the long creep behavior of the source area demonstrates that the failure was the result of a long-term process, instead of the sudden outcome of an external impulse, it is credible that the trigger rockfall hypothesized by Hu et al.~\cite{Hu2018} could have imposed the final stresses needed to overcome the failure resistance (capacity $c$ in Equation \ref{eq:cap}) of the remaining connections in the recurring bottleneck $B(\mathcal{F}(t)), \, \forall t \ge t^*_{X}$. That $B(\mathcal{F})$ persisted in the same location from August 23, 2016 strongly suggests a progressive degradation in rock strength all along this path, with the antecedent prolonged rainfall~\cite{fan2017} likely aiding this condition and rendering $B(\mathcal{F})$ increasingly poised for uncontrolled crack propagation in the lead-up to the failure event on June 24, 2017 (Figure~\ref{fig:xinmo} (b)). Finally, around 40 days prior to the collapse, SSSAFE’s predicted area of collapse began to manifest a linear trend in the temporal evolution of its inverse mean velocity, which delivered a time of failure $t^F_X$ a day later than the actual collapse. SSSAFE offers a lead time of a day to weeks. This constitutes sufficient forewarning to undertake evacuation and other response actions~\cite{guzzetti2020}. SSSAFE takes only a few tens of seconds per time state to generate predictions on the likely region of failure: 30 seconds for Mine 1 and Xinmo, and 50 seconds for Mine 2, on a standard laptop computer with an 8-core 1.30 GHz CPU.
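The time-of-failure estimate from the linear INV trend follows the classical inverse-velocity idea: fit a line to $1/v$ versus $t$ and extrapolate to its zero crossing. A minimal sketch with synthetic data (the fitting window and procedure used for the case studies may differ):

```python
import numpy as np

# Synthetic mean velocity of the active cluster, accelerating
# hyperbolically towards a "true" failure time t_f = 100 (made-up data).
t = np.arange(60, 95, dtype=float)
v = 1.0 / (100.0 - t)        # so 1/v decreases linearly to zero at t = 100

# Linear fit to the inverse-velocity series; the predicted time of
# failure is the x-intercept of the fitted line.
inv_v = 1.0 / v
slope, intercept = np.polyfit(t, inv_v, 1)
t_failure = -intercept / slope
print(round(t_failure, 2))
```

With real radar data the inverse-velocity series is noisy and only becomes approximately linear close to collapse, which is why the linear trend in Xinmo emerged just 40 days before failure.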
Thus a prediction can be returned before the next measurement arrives, even for the most advanced radar technology (e.g., a 1-5 minute sampling interval). At this rate, a reasonable number of time states can be accumulated quickly (e.g., 30 consecutive time states would take at most 30 minutes) to establish robustly the dynamics of the region of interest for the purpose of identifying $t^*$ and $t^{F}$. \section*{Conclusion} \label{sec:conclude} We have developed a holistic framework for Spatiotemporal Slope Stability Analytics for Failure Estimation (SSSAFE). We demonstrate how SSSAFE can be applied to identify emergent kinematic clusters in the early stages of the precursory failure regime for four case studies of catastrophic failure: one at the laboratory scale, using individual grain displacement data; and three slopes at the field scale, using line-of-sight displacement of the slope surface from ground-based and space-borne radars. The spatiotemporal dynamics of the kinematic clusters reliably predict where and when catastrophic failure occurs. The clusters share a common boundary along the path of least failure resistance. Here we found this path to precisely locate the impending shear band in the laboratory sample and the landslide boundary in the natural and man-made slopes. The regime change point is marked by intracluster (intercluster) motions becoming very similar or rigid-body (separated) which, in turn, induces a spatial pattern of physical partitions that becomes invariant in time through to failure. Our findings illuminate a way forward to rationalize and refine decision-making from broad-area coverage monitoring data for improved geotechnical risk assessment and hazard mitigation. To that end, ongoing efforts are focused on the extension of SSSAFE to a probabilistic platform~\cite{WangSS} that incorporates uncertainty systematically for various slopes and relevant scenario projections. \section*{Appendix} \label{sec:appendix} {\bf A.
The biaxial compression test data} The laboratory scale data set Biax is a well studied data set from a simulation of polydisperse spherical grains under planar biaxial compression (Figure~\ref{fig:system})~\cite{Tordesillas2007}. It is governed by a classical DEM model~\cite{CundallStrack1979}, modified to incorporate a moment transfer to account for rolling resistance and thus capture the effects of non-idealized particle shapes. A combination of Hooke's law, Coulomb friction, and hysteresis damping is used to model the interactions between contacting particles. An initially isotropic assembly of $5098$ spherical particles is prepared and subjected to biaxial compression with motion constrained to the plane. The ensemble is subjected to a constant confining pressure, with a coefficient of rolling friction of $\mu^r = 0.02$. The initial packing fraction is $0.858$. A sliding resistance and a rolling resistance act at the contacts, both of which are governed by a spring up to a limiting Coulomb value of ${\mu}|{{\bf f}^n}|$ and ${\mu^r }{R_{min}}|{{\bf f}^n}|$, respectively, where ${{\bf f}^n}$ is the normal contact force and $R_{min}$ is the radius of the smaller of the two contacting particles. A summary of all the interaction parameters governing the contacts and other quantities relevant to this study is presented elsewhere \cite{Tordesillas2007}. \section*{Acknowledgments} We thank our two anonymous reviewers, whose insightful comments and suggestions helped improve and clarify this manuscript. AT and RB thank Prof. Jinghai Li and Dr. Jianhua Chen for the stimulating discussions at the 2nd International Panel of Mesoscience. AT and SK acknowledge support from the U.S. Army International Technology Center Pacific (ITC-PAC) and the US DoD High Performance Computing Modernization Program (HPCMP) under Contract No. FA5209-18-C-0002. \section*{Author contributions statement} AT conceived, designed and coordinated the research.
SK conducted the network flow experiments. RB supplied the data for Mine 1. LC and PB supplied the data for Mine 2. EI supplied the data for Xinmo. AT and SK analyzed the results. AT wrote the original draft and revisions. All authors reviewed, and contributed to the writing of, the manuscript. \section*{Additional information} \noindent \textbf{Competing interests} \\ The authors declare no competing interests. \noindent \textbf{Data availability}\\ The data that support the findings of this study are available from GroundProbe Pty Ltd, but restrictions apply since these data were used under license for the current study. \section*{Symbols and their meaning} \begin{flushleft} \begin{longtable}{p{.2\textwidth} p{.7\textwidth}} \hline \hline \hline Symbol & Meaning \\ \hline $|\overrightarrow{\Delta u_{ij}}|$ & the magnitude of the relative displacement of two grains $i$ and $j$ (or two pixels) linked in $\mathcal{N}$\\ $\delta^+(v)$ & set of arcs leaving node $v$\\ $\delta^-(v)$ & set of arcs entering node $v$\\ $\Gamma$ & a cut or virtual crack path in $\mathcal{N}$\\ $\Gamma_{min}$ & minimum cut in $\mathcal{N}$\\ $\mu^r$ & coefficient of rolling friction\\ $\rho(l)$ & cut ratio, the ratio of the number of nodes in the smallest to largest components upon the removal of link $l$ in $\mathcal{T}$ \\ $\Omega$ & active and fastest-moving kinematic cluster, predicted location of failure\\ \hline \hline $A$ & set of arcs\\ $a(i)$ & average distance in the displacement state-space from $i$ to all other nodes in the same cluster\\ $B(\mathcal{F})$ & bottleneck, minimum cut of $\mathcal{F}$ \\ $b(i)$ & average of the distances from $i$ to all the points in the other clusters\\ $c$ & capacity function \\ $c(e)$ & capacity of link $e$, threshold for damage, strength of contact, failure resistance of contact, proximity to failure \\ $c(\Gamma_{min})$ & capacity of the minimum cut $\Gamma_{min}$\\ $\vec{d}_\ell$ & displacement vector of grain (pixel)
$\ell$\\ EWS & early warning systems\\ $e$ & link in $A$ corresponding to a contact in $\mathcal{N}$ \\ $\mathcal{F}$ & flow network\\ $F^*$ & failure resistance, global force flow capacity, bottleneck capacity\\ ${{\bf f}^n}$ & normal contact force\\ $f(x)$ & total flow leaving the source node $q$\\ $G$ & directed network\\ $G^*$ & undirected link-capacitated network\\ $H(X(t))$ & entropy of the clustering assignment $X(t)$\\ $I(X(t);X(t-1))$ & mutual information between $X(t)$ and $X(t-1)$\\ $i, j$ & grains (pixels) in $\mathcal{N}$ joined by a physical contact $e$\\ $k$ & sink node \\ $l$ & link in $\mathcal{T}$\\ $l_m$ & link of minimum weight on the path joining nodes $u$ and $v$ in $\mathcal{T}$\\ LOS & line-of-sight\\ $\ell$ & observation point (grain or pixel)\\ NMI & normalized mutual information\\ $\mathcal{N}$ & physical contact network\\ $n$ & number of nodes in $\mathcal{N}$, or $G^*$\\ $O$ & $(n-1) \times 3$ matrix whose entries are all zeros\\ $O[i,:]$ & $i^{th}$ row of $O$\\ $p$ & pixel with the highest rate of movement in $\Omega$ at the time of failure\\ PFR & precursory failure regime, mesoregime\\ $q$ & source node\\ $R_{min}$ & radius of the smaller of the two contacting particles\\ $S$ & Silhouette score for the whole body\\ $s(i)$ & Silhouette score for a node $i$\\ SSR & slope stability radar\\ $T$ & final time stage\\ $\mathcal{T}$ & Gomory-Hu tree\\ $t$ & time stage\\ $t^F, t_B^F, t_1^F, t_2^F, t_X^F$ & time of failure for a general case, Biax, Mines 1, 2 and Xinmo, respectively\\ $t^*, t_B^*, t_1^*, t_2^*, t_X^*$ & regime change point for a general case, Biax, Mines 1, 2 and Xinmo, respectively\\ $V$ & set of nodes\\ $v, u$ & node in $G^*$\\ $W,W'$ & two disjoint components of $\mathcal{N}$ corresponding to the removal of edges in $B(\mathcal{F})$ or of a link $l$ in $\mathcal{T}$\\ $w(l)$ & non-negative weight on link $l$ in $\mathcal{T}$\\ $w(l_m)$ & capacity of the minimum cut separating $u$ and $v$ in $\mathcal{N}$\\ 
$X(t)$ & clustering assignment at time stage $t$\\ $x(e)$ & flow of link $e$\\ \hline \hline \hline \end{longtable} \end{flushleft} \end{document}